Cognitive engineering of mixed reality

 

iOS 11: user-adaptable control centre, with application and function shortcuts in the lock screen.

In the 1970s and 1980s, the concept of ‘cognitive engineering’ was used in industry labs to describe an approach that tried to apply the lessons of cognitive science to design and engineering. Researchers such as Donald A. Norman wanted to devise systems that are not only easy or powerful, but most importantly pleasant, even fun, to use.

One of the classical challenges of making technology suit humans is that humans change and evolve, and differ greatly in motivations and abilities, while technological systems tend to stay put. Machines are created in a certain manner, remain mostly locked within the strict walls of the material and functional specifications they are based on, and (if correctly manufactured) operate reliably within those parameters. Humans, however, are fallible and changeable, but also capable of learning.

In his 1986 article, Norman uses the example of a novice and an experienced sailor, who differ greatly in their ability to take information from a compass and translate it into the desired boat movement (through the use of tiller and rudder). There have been significant advances across multiple industries in making increasingly clear and simple systems that are easy to use by almost anyone, and this in turn has translated into an increasingly ubiquitous, pervasive application of information and communication technologies in all areas of life. The televisions in our living rooms are computing systems (often equipped with apps of various kinds), our cars are filled with online-connected computers and assistive technologies, and in our pockets we carry powerful terminals into information, entertainment, and the ebb and flow of social networks.

There is, however, also an alternative interpretation of what ‘cognitive engineering’ could be in this dawning era of pervasive computing and mixed reality. Rather than being limited to engineering products that attempt to adapt to the innate operations, tendencies and limitations of human cognition and psychology, engineering systems that are actively used by large numbers of people also means designing and affecting the spaces within which our cognitive and learning processes will then evolve, fit in, and adapt. Cognitive engineering does not only mean designing and manufacturing certain kinds of machines; it also translates into an impact made on the human element of this dialogical relationship.

Graeme Kirkpatrick (2013) has written about the ‘streamlined self’ of the gamer. There are social theorists who argue that living in a society based on computers and information networks produces new difficulties for people. The social, cultural, technological and economic transitions linked with life in late modern, capitalist societies involve movement from project to new project, and the associated necessity of constant re-training. There is not necessarily any “connecting theme” in life, or even a sense of personal progression. Following Boltanski and Chiapello (2005), Kirkpatrick analyses the subjective condition where life in contradiction – between the exigency of adaptation and the demand for authenticity – means that the rational course in this kind of systemic reality is to “focus on playing the game well today”. As Kirkpatrick writes, “Playing well means maintaining popularity levels on Facebook, or establishing new connections on LinkedIn, while being no less intensely focused on the details of the project I am currently engaged in. It is permissible to enjoy the work but necessary to appear to be enjoying it and to share this feeling with other involved parties. That is the key to success in the game.” (Kirkpatrick 2013, 25.)

One of the key theoretical trajectories of cognitive science has focused on what has been called “distributed cognition”: our thinking is not only situated within our individual brains, but is in complex and important ways also embodied and situated within our environments and artefacts, through social, cultural and technological means. Gaming is one example of an activity where people can be seen constructing a sense of self and its functional parameters out of resources they are familiar with, and which they can freely exploit and explore in their everyday lives. Such technologically framed play is also increasingly common in working life, and our schools can similarly be approached as complex, designed and evolving systems constituted by institutions, (implicit as well as explicit) social rules, and several layers of historically sedimented technologies.

Beyond all the hype around new commercial technologies of virtual, augmented and mixed reality lies the fact that we have always already lived in a complex substrate of mixed realities: a mixture of ideas, values, myths and concepts of various kinds, intermixed and communicated within different physical and immaterial expressive forms and media. Cognitive engineering of mixed reality in this more comprehensive sense means taking part in dialogical cycles of design, analysis and interpretation, where practices of adaptation and adoption of technology also shape the forms these technologies are realized in. Within the context of game studies, Kirkpatrick (2013, 27) formulates this as follows: “What we see here, then, is an interplay between the social imaginary of the networked society, with its distinctive limitations, and the development of gaming as a practice partly in response to those limitations. […] Ironically, gaming practices are a key driver for the development of the very situation that produces the need for recuperation.” There are multiple other areas of technology-intertwined lives where similar double-bind relationships are currently surfacing: in the social use of mobile media, in organisational ICT, in so-called smart homes, and in smart traffic design and user culture processes. – A summary? We live in interesting times.

References:
– Boltanski, Luc, and Eve Chiapello (2005) The New Spirit of Capitalism. London & New York: Verso.
– Kirkpatrick, Graeme (2013) Computer Games and the Social Imaginary. Cambridge: Polity.
– Norman, Donald A. (1986) Cognitive engineering. In Donald A. Norman & Stephen W. Draper (eds.), User Centered System Design: New Perspectives on Human-Computer Interaction, 31–61. Hillsdale, NJ: Lawrence Erlbaum.

Thunderbolt 3, eGPUs

(This is the first post in a planned series, focusing on various aspects of contemporary information and communication technologies.)

Contemporary computing is all about the flow of information: be it a personal computer, a mainframe server, a mobile device or even an embedded system in a vehicle, the computers of today are not isolated. For better or worse, increasingly all things are integrated into world-wide networks of information and computation. This also means that the ports and interfaces for all that data transfer take on even higher prominence and priority than in the old days of more locally situated processing.

Thinking about the transfer of data, some older-generation computer users might still remember things like floppy disks and other magnetic media that were used both for saving work files and, often, for distributing and sharing that work with others. Later, optical disks, external hard drives and USB flash drives superseded floppies, but a more fundamental shift was brought along by the Internet and “cloud-based” storage options. In some sense this development has meant that personal computing has returned to the historical roots of distributed computing in ARPANET and its motivation in the sharing of computing resources. But regardless of what kind of larger network infrastructure mediates the operations of user and service provider, all that data still needs to flow around, somehow.

The key technologies for information and communication flows today appear to be largely wireless. The mobile phone and tablet communicate with the networks wirelessly, either over WiFi (wireless local area networking) or cellular networks (GSM, 3G and their successors). However, all those wireless connections end up linking into wired backbone networks that operate at much higher speeds and reliability standards than the often flaky local wireless connections. As algorithms for coding, decoding and compressing data have evolved, it is possible to use wireless connections today to stream 4K Ultra HD video, or to play high-speed multiplayer games online. However, in most cases wired connections will provide lower latency (meaning more immediate response), better resilience against errors, and higher speeds. And while there are efforts to bring wireless charging to mobile phones, for example, most of the information technology we use today still needs to be plugged into some kind of wire, at least for charging its batteries.

Thunderbolt 3 infographic, (c) Intel

This is where new standards like USB-C and Thunderbolt come into the picture. Thunderbolt (Thunderbolt 3 being the most recent version) is a “hardware interface”, meaning a physical, electronics-based system that allows two computing systems to exchange information. This is a different thing, though, from the actual physical connector: “USB Type-C” is the full name of the most recent reincarnation of the “Universal Serial Bus”, an industry standard of protocols, cables and connectors originally released in 1996. The introduction of the original USB was a major step towards the interoperability of electronics, as the earlier situation had been developing into a jungle of proprietary, incompatible connectors – and USB is a major success story, with several billion connectors (and cables) shipped every year. Somewhat confusingly, the physical, reversible USB-C connector can hide many different kinds of electronics behind it, so that some USB-C ports comply with USB 3.1 mode (with data transfer speeds up to 10 Gbit/s in the “USB 3.1 Gen 2” version), some are implemented with Thunderbolt – and some support both.

USB-C and Thunderbolt have in a certain sense achieved a considerable engineering marvel: with backward compatibility to older USB 2.0 mode devices, this one port and cable should be able to connect to multiple displays with 4K resolutions and to external data storage devices (with up to 40 Gbit/s speeds), while also working as a power cable: with Thunderbolt support, a single USB-C port can supply, or draw, up to 100 watts of electric power – making it possible to remove separate power connectors and share power bricks between phones, tablets, laptop computers and other devices. The small form factor Apple MacBook (“Retina”, 2015) is an example of this line of thinking. One downside of this beautiful simplicity of a single port in a laptop is the need to carry various adapters to connect with anything outside the brave new USB-C world. In an ideal situation, however, life would be much simpler if there were only this one connector type to worry about, and one could use a single cable to dock any device to the network and gain access to large displays, storage drives, high-speed networks, and even external graphics solutions.
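To put those headline figures into perspective, here is a minimal sketch in Python, using only the speeds quoted above and assuming idealized, overhead-free transfers (real-world throughput is always lower), comparing how long moving a 50 GB file would take over each mode:

    # Idealized transfer times for a 50 GB file over the modes mentioned above.
    # Protocol overhead and drive speeds make real-world figures notably worse.
    modes_gbit_s = {
        "USB 2.0 (0.48 Gbit/s)": 0.48,
        "USB 3.1 Gen 2 (10 Gbit/s)": 10.0,
        "Thunderbolt 3 (40 Gbit/s)": 40.0,
    }

    file_size_gbit = 50 * 8  # 50 gigabytes expressed in gigabits

    for name, speed in modes_gbit_s.items():
        print(f"{name}: {file_size_gbit / speed:.0f} s")
    # -> roughly 833 s, 40 s and 10 s, respectively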

The heterogeneity and historical layering of everyday technologies complicate the landscape that electronics manufacturers would like to paint for us. As any student of the history of science and technology can tell, even the most successful technologies did not replace the earlier ones immediately, and there have always been reasons why people have opposed the adoption of new technologies. For USB-C and Thunderbolt, the process of wider adoption is currently well underway, but there are also multiple factors slowing it down. The most typical peripheral does not yet come with USB-C, but rather with the older connectors. Even among expensive, high-end mobile phones, there are still multiple models that manufacturers ship with older USB connectors rather than with the new USB-C ones.

A potentially more crucial issue for most regular users is that Thunderbolt 3 and USB-C are still relatively new and immature technologies. The setup is also rather complex: with its integration of DisplayPort (video), PCI Express (PCIe, data) and DC power into a single hardware interface, it typically requires firmware and driver updates from multiple manufacturers to work seamlessly together, for the TB3 magic to start happening. An integrated systems provider such as Apple is best positioned to make this work, as it controls both the hardware and the software of its macOS computers. Apple is also, together with Intel, the developer of the original Thunderbolt, and the interface was first made commercially available in the 2011 MacBook Pro. Today, however, there is an explosion of USB-C and Thunderbolt compatible devices coming to the market from multiple manufacturers, and users are eager to explore the full potential of this new, high-speed, interoperable wired ecosystem.

eGPU, or External Graphics Processing Unit, is a good example of this. There are entire hobbyist forums, like the eGPU.io website, dedicated to the fine art of connecting a full-powered desktop graphics card to a laptop computer via fast-lane connections – either ExpressCard or Thunderbolt 3. The rationale (apart from the sheer joy of tweaking) is that in this manner one can have a slim ultrabook for daily use, with a long battery life, that is capable of transforming into an impressive workstation or gaming machine when plugged into an external enclosure housing the power-hungry graphics card (these TB3 boxes typically have a full-length PCIe slot for installing a GPU, different sets of connection ports, and a separate desktop-PC-style power supply). VR (virtual reality) applications are one example of an area where the current generation of laptops has problems: while there are e.g. Nvidia GeForce GTX 10 series (1060 etc.) equipped laptops available today, most of them are not thin and light enough for everyday mobile use, or, if they are, their battery life and/or fan noise present issues.

Razer, an American-Chinese computing hardware manufacturer, is known as a pioneer in popularizing the field of eGPUs with its Razer Blade Stealth ultrabook, which can be plugged with a TB3 cable into the Razer Core enclosure (sold separately) to utilize powerful GPU cards installed inside the Core unit. A popular use case for TB3/eGPU connections is plugging a powerful external graphics card into a MacBook Pro, in order to make it a more capable gaming machine. In practice, the early adopters have struggled with firmware and drivers that, on either the macOS side or the eGPU unit side, lack the support needed for the Thunderbolt 3 implementation to actually work. (See e.g. https://egpu.io/akitio-node-review-the-state-of-thunderbolt-3-egpu/ .) However, more and more manufacturers have added support through firmware updates, so the situation is already much better than a few months ago (see instructions at: https://egpu.io/setup-guide-external-graphics-card-mac/ .) In the area of PC laptops running Windows 10, the situation is comparable: a work in progress, with more software support slowly emerging. Still, it is easy to get lost in this still-evolving field. For example, Dell revealed in January that they had restricted the Thunderbolt 3 PCIe data lanes in their implementation of the premium XPS 15 notebook computer: rather than using the full 4 lanes, the XPS 15 had only 2 PCIe lanes connected in its TB3 port. There is, for example, a discussion on Reddit comparing the effects this has in the typical case where the eGPU feeds the image to an external display, rather than back to the internal display of the laptop (see: https://www.reddit.com/r/Dell/comments/5otmir/an_approximation_of_the_difference_between_x2_x4/). The effects are not that radical, but it is one of the technical details that early adopters of eGPU setups have struggled with.
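For a rough sense of what those lane counts mean, here is a back-of-the-envelope sketch; the PCIe 3.0 link rate (8 GT/s per lane, 128b/130b encoding) is my assumption, and these are theoretical peaks only, since a TB3 link also carries display and other traffic:

    # Theoretical PCIe 3.0 bandwidth at different lane counts.
    # 8 GT/s per lane with 128b/130b encoding is ~0.985 GB/s per lane.
    GBYTE_S_PER_LANE = 8 * (128 / 130) / 8  # gigatransfers -> gigabytes, per lane

    for lanes in (2, 4):
        print(f"x{lanes} PCIe 3.0 link: ~{lanes * GBYTE_S_PER_LANE:.1f} GB/s")
    # x2 (e.g. the XPS 15 TB3 implementation): ~2.0 GB/s
    # x4 (a full four-lane TB3 implementation): ~3.9 GB/s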

While fascinating from an engineering or hobbyist perspective, the situation of contemporary technologies for connecting everyday devices is still far from perfect. In thousands of meeting rooms and presentation auditoriums every day, people fail to connect their computers, to get anything onto the screen, or to access their presentations due to failures of online connectivity. A universal, high-speed wireless standard for sharing data and displaying video would no doubt be the best solution for all. Meanwhile, a reliable and flexible high-speed standard in wired connectivity would already go a long way. The future will show whether Thunderbolt 3 can reach that kind of ubiquitous support. The present situation is pretty mixed and messy at best.

Tietokone, henk.koht. (On personal computers)

Thinkpad X1 Yoga (photo © by Lenovo)

Personal computers are a relatively young phenomenon, and in the 1950s and 1960s the idea of a computer designed and acquired for the use of a single person would have been almost inconceivable. The price of information technology has come down, however, and at the same time the idea of the computer has become more human-centred. Expensive scientific and business-administration calculating machines have adapted, and been adapted, to serve the most diverse human needs. The computer stores and archives text and data and manages digital calendars, but it also bends to producing and playing back music and images, and to modelling interactive virtual spaces. Connected to information networks, computers are multi-channel and multi-form media, tools of self-expression and social organisation, stages for everyday life, entertainment and art.

Apple I (photo by Ed Uthman – originally posted to Flickr as Apple I Computer, CC BY-SA 2.0)

When I took up computers as a hobby in the 1980s, with the home computers of that era, the development of information technology still seemed open in many directions. Different experiments, product categories and genres of digital content were actively being developed. During the 1990s and 2000s it at times felt as if the significant innovations were already behind us, and interest was limited mainly to when the 286 would be followed by the 386 and the 486, and what the successor of Windows 3.1 would be called.

Mobile devices and ambient, ubiquitous information technology have changed this basic situation, so that now, in the late 2010s, the future of information and communication technology again appears fascinating. At the same time, however, global problems have grown to such a scale and visibility that information technology in itself appears to some degree a trivial, even marginal topic. Yet the growth of social confrontation fuelled by social media, and the crisis of public discourse, testify in their own way to how profoundly our technologies for organising communication and interaction affect the development of everyday life and society.

OLPC: a pre-production model from the One Laptop per Child initiative (photo by “Fuse-Project”; OLPC-Wiki: “Walter”)

Conversing with machines is also a dialogue with our own, technologically tinted and constructed selves. As far as I know, no device we use today has arrived among us from outer space; these are extensions of being human, which we have developed ourselves and onto which we have, for one reason or another, latched. A tour of a home appliance store or a car dealership often leaves me with the same slightly bemused and respectful feeling as a visit to an ethnographic museum, among endlessly varied embroidered headdresses or distaff blades that each differ from one another in their own ways. Ecce homo. The truth is to be found in the latest automatic transmission.

This year, the people who devote their time and effort to developing personal computers seem to be facing several fundamental disagreements and alternative directions concerning what the computer should be and mean to us. Partly this is about the personal computer being sidetracked by development: a far greater share of our energy goes not into pondering computers, but into trying to bend our behaviour into a form in which the algorithms that companies like Facebook, Google or Apple have built into their services would reveal to us the face of the world we are interested in, while portraying us to other people in the way that feels good to us. Or into choosing a new smartphone model, and with it the selection of mobile applications essential to daily life.

Some developers aim to blur the line between the computer and the mobile device: hybrid devices are flourishing. Others try to keep developments such as touchscreens, pattern recognition and voice control from muddying the boundaries and essence of the personal computer. Some try to make the computer as thin as possible, carried everywhere with ease, sliding open effortlessly and running for days on a single charge. For others, the personal computer is just a particular kind of terminal to functionality and data churning away in cloud services – the computer can travel in a pocket, with its interface in the ear. For one group of developers, the performance of the personal computer is everything, and the goal is to pack the processing power demanded by virtual reality inside the frame of a laptop, and to equip it with a connector for a head-mounted display. A large number of developers and manufacturers try to push the price of the personal computer so low that it would be competitive even with cheaper smartphones, even at the risk that such a pared-down device would no longer handle any even slightly demanding task without stuttering. Some efforts centre on design in which durability and practicality are paramount; in others, the personal computer is developed into as sophisticated and individual a whole as possible, not only in its electronics but also in its colours, finish and precision mechanical engineering.

Ben Shneiderman, Leonardo’s Laptop (2002) – as far as I know, there has hardly been any “cultural laptop studies” research with the kind of critical-analytical approach that Paul du Gay et al. took in their book Story of Sony Walkman (1996) – Shneiderman focuses on the theme of ‘universal usability’.

Utility-rational need is only one dimension of a human being’s relationship with technology. Then again, if my own current needs are taken as an example, here too we quickly find ourselves in a jungle of irreconcilable contradictions. Long working days, small displays, small text and easily tiring eyes are a combination best answered by a device whose display is at least 40-50 inches, viewed from a distance of one and a half or two metres. On the other hand, a device carried along in mobile work should be as compact and light as possible, while also having a battery that can, when needed, see through a ten-hour working stretch without access to a wall socket. As long as these devices cannot yet read minds, the most versatile possible means of self-expression and interaction in creating different kinds of content would be important: the possibility not only to write on an ergonomically high-quality mechanical keyboard (also because dictation does not yet work entirely reliably), to draw and colour, to paint and photograph, but also to record good-quality video and audio directly on the device, for instance for the needs of video-recorded lectures and meetings. Work with games, virtual worlds, multimedia, analysis software and various development tools would, in turn, demand processing power, memory and high-resolution display modes that conflict with, say, lightness and long battery life. As the chameleon-like intersection point and bottleneck of “all technology” in the digital age, the personal computer is thus in a rather thankless position: whatever it is, it always excludes something else that the personal computer would also gladly be – at least sometimes, on some day, at some moment.

Although advertisers like to emphasize the pursuit of perfection and the uncompromising quality of the products they peddle in as many respects as possible, developers and most users alike understand that the personal computer is always, to some degree, an unsatisfying compromise. It is marked by lack and imperfection – something that often reveals itself at the most awkward moment, when the battery runs out, the performance proves insufficient, or when inadequate input devices and display characteristics frustrate a user plagued by tendonitis and headache. Some try to give up computers altogether, to use some other technology, or to distance themselves from all information technology. The truth, however, is that almost every day we are still also persons whom our everyday personal computer defines, constrains, torments and occasionally also rewards. The many possibilities of the computer bring out our own limitations – you look at the computer, and out of your computer, you yourself look back.

Book One (photo © by Porsche Design)

Future of interfaces: AirPods

Apple AirPods (image © Apple).

I am a regular user of headphones of various kinds, both wired and wireless, closed and open, with noise cancellation and without. The latest piece of this technology I have invested in is the “AirPods” by Apple.

Externally, these things are almost comically similar to the standard “EarPods” Apple provides with, or as the upgrade option for, its mobile devices. The classic white Apple design is there; just the cord has been cut, leaving the connector stems protruding from the user’s ears like small antennas (which they probably also are, as well as directional microphone arms).

There are wireless headphone-microphone sets that have slightly better sound quality (even if the AirPods are perfectly decent as wireless earbuds), or even more neutral design. What is interesting here is, on the one hand, the “seamless” user experience Apple has invested in – and, on the other, the “artificial intelligence” Siri assistant, which is another key part of the AirPods concept.

The user experience of the AirPods is superior to any other headphones I have tested, which is related to the way the small and light AirPods immediately connect with Apple iPhones, detect whether or not they are placed in the ear, and work for hours on one charge – and quickly recharge after a short session inside their stylishly designed, smart battery case. These things “just work”, in the spirit of the original Apple philosophy. To achieve this, Apple has managed to create a seamless combination of tiny sensors, battery technology, and a dedicated “W1 chip” that manages the wireless functionality of the AirPods.

The integration with the Siri assistant is the other key part of the AirPods concept, and the one that probably divides users’ views more than any other feature. A double tap on the side of an AirPod activates Siri, which can indeed understand short commands in multiple languages and respond to them, carrying out even simple conversations with the user. Talking to an invisible assistant is not, however, part of today’s mobile user cultures – even if Spike Jonze’s film “Her” (2013) shows that the idea is certainly floating around. Still, mobile devices are often used on the move, in public places, on buses, trains or airplanes, and it is simply not feasible nor socially acceptable for people to carry out constant conversations with their invisible assistants in such environments – not yet today, at least.

Regardless of this, the Apple AirPods are actually, to a certain degree, designed to rely on such constant conversations, which makes them both futuristic and ambitious, and a rather controversial piece of design and engineering. Most notably, there are no physical buttons or other means of adjusting volume on these headphones: you just double-tap the side of an AirPod and verbally tell Siri to turn the volume up or down. This mostly works just fine, Siri does the job, but a small touch-control gesture would be so much more user-friendly.

There is nevertheless something engaging about testing Siri with the AirPods. I did find myself walking around the neighborhood, talking to the air, and testing what Siri can do. There are already dozens of commands and actions that can be activated with the help of the AirPods and Siri (there is no official listing, but examples are given in lists like this one: https://www.cnet.com/how-to/the-complete-list-of-siri-commands/). The abilities of Siri still fall short in many areas: it did not completely understand the Finnish I used in my testing, and the integration of third-party apps is often limited, which is a real bottleneck, as these apps are what most of us use our mobile devices for, most of the time. Actually, Google and the assistant it provides in Android are better than Siri in many areas relevant to daily life (maps and traffic information, for example), but the user experience of that assistant is not yet as seamless or integrated a whole as that of Apple’s Siri.

All this considered, using the AirPods is certainly another step in the general direction where pervasive computing, AI, conversational interfaces and augmented reality are taking us, for good or ill. Well worth checking out, at least – for more, see Apple’s own pages: http://www.apple.com/airpods/.

Pokémon GO Plus: The challenge of casual pervasive gaming?

Pokémon GO Plus package contents.

Our research projects have explored the directions of pervasive gaming and the more general ludification trends in culture and society. One of the success stories of last year was Pokémon GO, the location-based mobile game by Niantic (a Google spin-off) and The Pokémon Company. When winter came, the player numbers dropped: at least in the Finnish winter, it became practically impossible to play an outdoor smartphone game in below-freezing temperatures. Considering that, I have been interested in trying the Pokémon GO Plus accessory – a small Bluetooth device with one button that you can wear, so that constant handling of the smartphone is no longer needed.

Pokémon GO Plus notifications via iPhone in Pebble Time 2 smartwatch.

Based on a couple of hours of quick testing, this kind of add-on certainly has potential. It reduces (an already rather simple) game to its most basic elements: a buzz and a colourful LED signal when there is a familiar (green) or new (yellow) Pokémon creature nearby, ready for catching. Pressing the button will automatically try to capture the virtual critter: easy ones usually register as “captured” in a few seconds (a rainbow-style, multi-coloured LED signal), while more challenging ones might “flee” (red light). When one arrives next to a Pokéstop, there is a blue light and buzz signal, and with a press of the button one can quickly interact with the stop and get all available items registered into one’s inventory. This is actually much more convenient than the usual routine of clicking and swiping at stops, Pokémon and balls. When the “Plus” is active, the game app itself also keeps running in the background, registering walking distances even when the phone is locked. This is how the game should function in the first place, of course. It also seems to be much easier to capture Pokémon with the “Plus” than without it (how fair this is to other gamers is a subject of discussion, too).
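To summarise that signal vocabulary, here is a small illustrative sketch in Python; the mapping simply restates the paragraph above, and the names and structure are my own, not anything from an official Niantic API:

    # Illustrative mapping of Pokémon GO Plus LED/buzz signals to game events,
    # as described above; event names are hypothetical, for illustration only.
    SIGNALS = {
        "green": "familiar Pokémon nearby, ready for catching",
        "yellow": "new, unregistered Pokémon nearby",
        "rainbow": "capture succeeded",
        "red": "Pokémon fled",
        "blue": "Pokéstop in range; press to collect available items",
    }

    def describe(led_colour: str) -> str:
        """Return the in-game meaning of an LED signal."""
        return SIGNALS.get(led_colour, "unknown signal")

    print(describe("rainbow"))  # capture succeeded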

Pokémon GO Plus notifications on iPhone 6 Plus screen.

The larger question that remains is what “casual pervasive gaming” will become in the long run. If this kind of device shows the direction, it might be that a casual, always-on game will be more like a “zero-player game”: an automated simulation of gaming, where the game server and game client keep making steady progress in the game, while the human player is free to concentrate on other things. Maybe it is enough just to check the game’s progress at the end of the day, getting some kind of summary of what the automated “surrogate player” has experienced during the day?

Playing Pokémon GO with the “Plus” add-on is not quite there yet, though. There were moments today when the device was buzzing every few seconds, asking for its button to be pressed. I quickly collected a nice selection of random, low-level Pokémon, but I also ran out of Poké Balls in a minute. Maybe the device is made for “Pokémon GO whales”: those players who use real money to buy an endless supply of Poké Balls, and who are happy to have this semi-automatic collecting practice going on all day, in order to grind their way towards higher levels?

The strategic element of choice is mostly missing while using the “Plus”. I have no specific knowledge of which Pokémon I am trying to capture, and as the game is configured to use only the basic sort of Poké Ball automatically, any “Great” or “Ultra” balls, for example, are not used, which means that more challenging, high-level Pokémon will most likely be missed and flee. At the same time, the occasional buzz of the device evokes the “play frame” of Pokémon GO – which relates to the “playful mindset” we have also been researching – so it is easier to stay in contact with a pervasive gaming reality while mostly concentrating on mundane, everyday things like grocery shopping. Some of us are better at multitasking than others, but experiments like Pokémon GO Plus provide us with a better understanding of how to scale both game-related information and in-game tasks and functionalities, so that they do not seriously interfere with other daily activities, but rather support them in the manner we find preferable. At least for me, wearing the “Plus” made those winter walking trips a bit more interesting and motivating again today.

Tech Tips for New Students

Going cross-platform: same text accessed via various versions of MS Word and Dropbox in Surface Pro 4, iPad Mini (with Zagg slim book keyboard case), Toshiba Chromebook 2, and iPhone 6 Plus, in the front.

There are many useful practices and tools that can be recommended for new university students; many good study practices are fairly universal, but there are also elements that relate to what one studies and where one studies – to the institutional or disciplinary frames of academic work. Students working on a degree in theoretical physics, electronics engineering, organic chemistry, the history of the Middle Ages, Japanese language or business administration, for example, will all probably encounter elements in their studies that are unique to their fields. I will here focus on some simple technicalities that should be useful for many students in the humanities, social sciences or digital media related fields, as well as for those in our own Internet and Game Studies degree programme.

There are study practices that belong to the daily organisation of work, and then there are the tools, services and software one will use, for example. My focus here is on the digital tools and technology that I have found useful – even essential – for today’s university studies, but that does not mean I would downplay the importance of non-digital, informal and more traditional ways of doing things. Taking notes in lectures and seminars is one example: for many people the use of pen or pencil on paper is absolutely essential, and they are most effective when using their hands to draw and write physically on paper. Also, rather than just participating in online discussion fora, having really good, traditional discussions with fellow students in the campus café or bar is important in quite many ways. That said, there are also some other tools and environments worth considering.

It used to be that computers were boxy things used in the university’s PC classrooms (apart from the terminals used to access the mainframes). Today, the information and communication technology landscape has changed greatly. Most students carry in their pockets smartphones that are much more capable devices than the mainframes of the past. Also, the operating systems do not matter as much as they did only a few years ago. It used to be a major choice whether one joined the camp of Windows (Microsoft-powered PCs), that of Apple Macintosh computers, those running Linux, or some other, more obscure camp; the capabilities and software available for each environment were different. Today, it is perfectly possible to access the same tools, software and services from all major operating environments. Thus, there is more freedom of choice.

The basic functions most of us in academia probably need daily include reading, writing, communicating and collaborating, research, data collection, scheduling and other work organisation tasks, and the use of the related tools. It is an interesting situation that most of these tasks can already be achieved with the mobile device many of us carry all the time. An iOS or Android smartphone can be combined with an external Bluetooth keyboard and used for taking notes in lectures, accessing online reading materials, using cloud services and most other necessary tasks. In addition, a smartphone is of course an effective tool for communication, with its apps for instant messaging and video or voice conferencing. The camera can be used for taking visual notes, or for scanning one’s physical notes, with their mind maps, drawings and handwriting, into digital format. The benefit of that kind of hybrid strategy is that it allows one to take advantage of the supreme tactile qualities of physical pen and paper, while also allowing the organisation of scanned materials into digital folders, possibly even in full-text-searchable format.

The best tools for this basic task of note taking and organisation are Evernote and MS OneNote. OneNote is the more fully featured one – and more complex – of these two, and allows one to create multiple notebooks, each with several different sections and pages that can include text, images, lists and many other kinds of items. Taking some time to learn how to use OneNote effectively to organise multiple materials is definitely worth it. There are also OneNote plugins for most internet browsers, allowing one to capture materials quickly while surfing various sites.

MS OneNote, Microsoft tutorial materials.

Evernote is a simpler and more straightforward tool, and this is perhaps exactly why many prefer it. Saving and searching materials in Evernote is very quick, and it has excellent mobile integration. OneNote is particularly strong if one invests in a Microsoft Surface Pro 4 (or Surface Book), which comes with the Surface Pen – a great note-taking tool that allows one to quickly capture materials from a browser window, write on top of web pages, etc. On the other hand, if one is using an Apple iPhone, iPad, or an Android phone or tablet, Evernote has characteristics that shine there. On Samsung Note devices with the “S Pen”, one can take screenshots and make handwritten notes in much the same manner as with the MS Surface Pen in the Microsoft environment.

In addition to the note solution, a cloud service is one of the bedrocks of today’s academic world. Some years ago it was perfectly possible to have software or hardware crash and realize that, backups missing, all that important work was now gone. Cloud services have their question marks regarding privacy and security, but for most users the benefits are overwhelming. A tool like Dropbox will silently work in the background and make sure that the most recent versions of all files are always backed up. A file that is in the cloud can also be shared with other users, and some services have expanded into real-time collaboration environments where multiple people can discuss and work together on shared documents. This is especially strong in Google Drive and Google Docs, which include simplified versions of the familiar office tools: text editor, spreadsheet and presentation programs (cf. the classic versions of Microsoft Office: Word, Excel and PowerPoint; LibreOffice has similar free, open-source versions). Microsoft’s cloud service, Office 365, is currently provided free of charge as the default environment for our university’s students and staff, and it includes the OneDrive storage service, the Outlook email system, and access to both desktop and cloud-hosted versions of the Office applications – Word Online, Excel Online, PowerPoint Online and OneNote Online. Apple has its own iCloud system, and the Mac office tools (Pages, Numbers and Keynote) can also be operated in the browser, as iCloud versions. All major productivity tools also have iOS and Android mobile app versions of their core functionality available. It is also possible to save, for example, MS Office documents into OneDrive or Dropbox – seamless synchronization across multiple devices and operating systems is an excellent thing, as it makes it possible to start writing on a desktop computer, continue with a mobile device, and then finish things up on a laptop, for example.

Microsoft Windows, Apple OS X (Macintosh computers) and Linux have a longer history, but I recommend that students also have a look at Google’s Chrome OS and Chromebook devices. They are generally cheaper, and they provide a reliable and very easy-to-maintain environment that can handle perhaps 80-90 % of basic academic tasks. Chromebooks work really well with Google Drive and Google Docs, but in principle any service that can be accessed as a browser-based cloud version also works on a Chromebook. It is possible, for example, to create documents in Word or PowerPoint Online and save them into OneDrive or Dropbox, so that they will sync with the other personal computers and mobile devices one might be using. There is a development project at Google to make it possible to run Android mobile applications on Chrome OS devices, which means that the next generation of Chromebooks (which will most likely all support touchscreens) will be even more attractive than today’s versions.

For planning, teamwork, task deadlines and calendar sharing, there are multiple tools available, ranging from MS Outlook to Google Calendar. I have found that sharing calendars generally works more easily in the Google system, while Outlook allows deeper integration into an organisation’s personnel databases, etc. It is a really good idea to plan and break down all key course work into manageable parts and set milestones (interim deadlines) for them. This can be achieved with careful use of calendars, where one can mark down the hours required for personal as well as team work, in addition to the lectures, seminars and exercise classes one’s timetable might include. That way, not all crucial jobs pile up against the end-of-term or period deadlines. I personally use a combination of several Google Calendars (the core one synced with the official UTA Outlook calendar) and the Wunderlist to-do list app/service. There are also several dedicated project management tools (Asana, Trello, etc.), but mostly you can handle the tasks with basic tools like Google Docs and Sheets (Word, Excel) and then break the tasks and milestones down into the calendar you share with your team. Communications are also essential, and apart from email, people today generally utilize Facebook (Messenger, Groups, Pages), Skype, WhatsApp, Google+/Hangouts, Twitter, Instagram and similar social media tools. One of the key skills in this area is to create multiple filter settings or more fine-grained sharing settings (possibly even separate accounts and profiles) for professional and private purposes. The intermixing of personal, study-related and various commercial dimensions is almost inevitable in these services, which is why some people try to avoid social media altogether. Wisely used, these services can nevertheless be immensely useful in many ways.

All those tools and services require accounts and login details, which easily become rather unsafe due to, for example, our tendency to recycle the same or very similar passwords. Please do not do that – there will inevitably be a hacking incident or some other issue with one of those services, and that will lead you into trouble in all the others, too. There are various rules-based ways of generating complex passwords for different services, and I recommend using two-factor authentication whenever it is available. This is a system where, typically, a separate mobile app or text message acts as a backup security measure whenever the service is accessed from a new device or location. Life is also much easier with a password manager like LastPass or 1Password, where one only needs to remember the master password – the service remembers the other, complex and automatically generated passwords for you. In several contemporary systems, there are also face recognition (Windows 10 Hello), fingerprint authentication or iris recognition technologies designed to provide a further layer of protection at the hardware level. Operating systems are also getting better at protecting against computer viruses, even without dedicated anti-virus software. There are, however, multiple scams and social engineering hacks in the connected, online world that even the most sophisticated anti-virus tools cannot protect you against.
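As an illustration of what automatically generated passwords can look like, here is a minimal sketch using Python’s standard-library secrets module – my own example of the general idea; in practice, a password manager does both the generating and the remembering for you:

    # Minimal sketch: generating a strong random password with the
    # standard-library secrets module (cryptographically secure randomness).
    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Return a random password drawn from letters, digits and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # different on every run; store it in a manager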

Finally, a reference database is an important part of any study project. While it is certainly possible to keep a physical shoebox full of index cards filled with quotes, notes and the bibliographic details of journal articles, conference papers and book chapters, it is not the most efficient way of doing things. There are comprehensive reference database management services like RefWorks (supported by UTA) and EndNote that are good for this job. I personally like Zotero, which exists as a cloud/browser-based service at Zotero.org, but most importantly allows quick capture of full reference details through browser plugins, and then inserting references in all standard formats into course papers and thesis work, in simple copy-paste style. Shared, topic-based bibliographic databases, managed by teams, can also be set up at Zotero.org – an example is the Zotero version of the DigiPlay bibliography (created by Jason Rutter, and converted by Jesper Juul): https://www.zotero.org/groups/digiplay .

As a final note, regardless of the actual tools one uses, it is the systematic and innovative application of them that really sets excellent study practices apart. Even the most cutting-edge tools do not automate research and learning – that is something that needs to be done by yourself, and in your individual style. There are also other solutions, not explored in this short note, that might suit your style. Scrivener, for example, is a more comprehensive “writing studio”, where one can collect snippets of research, order fragments and create structure in a more flexible manner than is possible in e.g. MS Word (even though its Outline View is underused). The landscape of digital, physical, social and creative opportunities is expanding and changing all the time – if you have suggestions for additions on this topic, please feel free to make them below in the comments.

Tablets as productivity devices

Logitech Ultrathin Keyboard for iPad Air
Professionally, I have a sort of on-off relationship with tablets (mainly iPads and Android tablets, but I also count touch-screen, small form factor Windows 2-in-1s in this category). Small and light, tablets are a natural solution when you have piles of papers and books in your bag and want to travel light. There are so many things that every now and then I try to make and do with a tablet – only to clash once again with some of their limitations: the inability to quickly edit some particular file in its native format, the inability to simply copy and paste data between documents open in different applications, the limitations of multitasking. The inability to quickly start that PC game you are writing about, or to re-run that SPSS analysis we urgently need for the paper we are working on.

But when you know what those limitations are, tablets are just great for the remaining 80 % or so of the stuff we do in the mobile office slash research sort of work. And there are actually features of tablets that may make them even stronger as productivity-oriented devices than personal computers or fully powered laptops can be. There is a small, elite class of thin, light and very powerful laptop computers with touch screens (running Windows 10) that can probably be configured into the “best of both worlds”, but otherwise a tablet with a high-dpi screen, a fast enough processor (for those mobile-optimized apps) and excellent battery life simply flies above a crappy, under-powered and heavy laptop or office PC from the last decade. The user experience is just so much better: everything reacts immediately, looks beautiful, runs for hours, and behaves gracefully. This is particularly true in the iOS/Apple ecosystem (Android can be a bit bumpier a ride), as careful quality control and fierce competition in the iOS app space ensure that only applications designed with a near-perfect balance of functionality and aesthetics get into the prime limelight. Compare that to the typical messy interfaces and menu jungles of traditional computer productivity software, and you will see what I mean.

The primary challenge of tablets, for me, is text entry. I can happily surf, read, game and watch video content of various kinds on a tablet, but when it comes to making fast notes in a meeting where you need to have two or three background documents open at the same time, copying text or images from them, plus some links or other materials from the Internet, the limitations of tablets do tend to surface. (Incidentally, the Surface Pro 4 or Surface Book by Microsoft would be solutions I would love to test one of these days – just in case someone from the MS sales department happens to read this blog…) But there are ways around some of these limitations: using a combination of cloud services running in browser windows and dedicated apps, and rotating between them quickly enough that the mobile operating system does not kill them and lose the important data view in the background. Also, having a full keyboard connected to the tablet is a good solution for a full day of work with the device. An iPad Air with a premium wireless keyboard like the Logitech K811 is shoulders above the situation where one is forced to slowly tap in individual letters on the standard virtual keyboard of a mobile device. (I am a touch-typist, which may explain my perspective here.)

In the future, it is increasingly likely that the differences between personal computers and mobile devices will continue to erode and vanish. The high standards of ease of use, and of user experience more generally, set by mobile devices already influence the ways in which computer software is being (re)designed. The challenges waiting there are not trivial, though: when a powerful, professional tool is suddenly reduced to a “toy version” of itself in the name of usability, the power users will cry foul. There are probably a few lessons from the area of game (interface) design that can also inform the design of utility software, as the different “difficulty levels” or novice/standard/expert modes are fine-tuned, and as lessons from tutorials of various kinds, adaptive challenge levels and information density are applied and balanced.