Cognitive engineering of mixed reality


iOS 11: user-adaptable control centre, with application and function shortcuts in the lock screen.

In the 1970s and 1980s, the concept of ‘cognitive engineering’ was used in industry labs to describe an approach that tried to apply lessons from cognitive science to the fields of design and engineering. Researchers like Donald A. Norman wanted to devise systems that are not only easy or powerful to use, but, most importantly, pleasant and even fun.

One of the classical challenges of making technology suit humans is that humans change, evolve, and differ greatly in motivations and abilities, while technological systems tend to stay put. Machines are created in a certain manner, are mostly locked within the strict walls of the material and functional specifications they are based on, and (if correctly manufactured) operate reliably within those parameters. Humans, however, are fallible and changeable, but also capable of learning.

In his 1986 article, Norman uses the example of a novice and an experienced sailor, who differ greatly in their ability to take information from the compass and translate it into a desired boat movement (through the use of tiller and rudder). There have been significant advances across multiple industries in making increasingly clear and simple systems that almost anyone can use, and this in turn has translated into the increasingly ubiquitous, pervasive application of information and communication technologies in all areas of life. The televisions in our living rooms are computing systems (often equipped with apps of various kinds), our cars are filled with online-connected computers and assistive technologies, and in our pockets we carry powerful terminals into information, entertainment, and the ebb and flow of social networks.

There is, however, also an alternative interpretation of what ‘cognitive engineering’ could be in this dawning era of pervasive computing and mixed reality. Rather than being limited to engineering products that attempt to adapt to the innate operations, tendencies and limitations of human cognition and psychology, engineering systems that are actively used by large numbers of people also means designing and affecting the spaces within which our cognitive and learning processes will then evolve, fit in and adapt. Cognitive engineering does not only mean designing and manufacturing certain kinds of machines; it also translates into an impact on the human element of this dialogical relationship.

Graeme Kirkpatrick (2013) has written about the ‘streamlined self’ of the gamer. There are social theorists who argue that living in a society based on computers and information networks produces new difficulties for people. The social, cultural, technological and economic transitions linked with life in late modern, capitalist societies involve movements from project to new project, and an associated necessity for constant re-training. There is not necessarily any “connecting theme” in life, or even a sense of personal progression. Following Boltanski and Chiapello (2005), Kirkpatrick analyses the subjective condition where life in contradiction – between the exigency of adaptation and the demand for authenticity – means that the rational course in this kind of systemic reality is to “focus on playing the game well today”. As Kirkpatrick writes, “Playing well means maintaining popularity levels on Facebook, or establishing new connections on LinkedIn, while being no less intensely focused on the details of the project I am currently engaged in. It is permissible to enjoy the work but necessary to appear to be enjoying it and to share this feeling with other involved parties. That is the key to success in the game.” (Kirkpatrick 2013, 25.)

One of the key theoretical trajectories of cognitive science has focused on what has been called “distributed cognition”: our thinking is not only situated within our individual brains, but is in complex and important ways also embodied and situated within our environments and our artefacts, in social, cultural and technological ways. Gaming is one example of an activity in which people can be witnessed constructing a sense of self and its functional parameters out of resources that they are familiar with, and which they can freely exploit and explore in their everyday lives. Such technologically framed play is also increasingly common in working life, and our schools can similarly be approached as complex, designed and evolving systems that are constituted by institutions, (implicit as well as explicit) social rules, and several layers of historically sedimented technologies.

Beyond all the hype around new commercial technologies related to virtual, augmented and mixed reality lies the fact that we have always already lived in a complex substrate of mixed realities: a mixture of ideas, values, myths and concepts of various kinds, intermixed and communicated within different physical and immaterial expressive forms and media. Cognitive engineering of mixed reality in this more comprehensive sense involves engagement in dialogical cycles of design, analysis and interpretation, where practices of adaptation and adoption of technology are also forming the shapes these technologies are realized within. Within the context of game studies, Kirkpatrick (2013, 27) formulates this as follows: “What we see here, then, is an interplay between the social imaginary of the networked society, with its distinctive limitations, and the development of gaming as a practice partly in response to those limitations. […] Ironically, gaming practices are a key driver for the development of the very situation that produces the need for recuperation.” There are multiple other areas of technology-intertwined lives where similar double-bind relationships are currently surfacing: in the social use of mobile media, in organisational ICT, in so-called smart homes, and in smart traffic design and user culture processes. – A summary? We live in interesting times.

– Boltanski, Luc, and Eve Chiapello (2005) The New Spirit of Capitalism. London & New York: Verso.
– Kirkpatrick, Graeme (2013) Computer Games and the Social Imaginary. Cambridge: Polity.
– Norman, Donald A. (1986) Cognitive engineering. In Norman, D. A. & Draper, S. W. (eds.), User Centered System Design, 31–61. Hillsdale, NJ: Lawrence Erlbaum.

Special Issue: Reflecting and Evaluating Game Studies – Games & Culture

This is now published:

Games & Culture:
Volume 12, Issue 6, September 2017
Special Issue: Reflecting and Evaluating Game Studies

Guest Editors: Frans Mäyrä and Olli Sotamaa


Need for Perspective:
Introducing the Special Issue “Reflecting and Evaluating Game Studies”
by Frans Mäyrä & Olli Sotamaa
(Free Access)

The Game Definition Game: A Review
by Jaakko Stenros

The Pyrrhic Victory of Game Studies: Assessing the Past, Present, and Future of Interdisciplinary Game Research
by Sebastian Deterding

How to Present the History of Digital Games: Enthusiast, Emancipatory, Genealogical, and Pathological Approaches
by Jaakko Suominen

What We Know About Games: A Scientometric Approach to Game Studies in the 2000s
by Samuel Coavoux, Manuel Boutet & Vinciane Zabban

What Is It Like to Be a Player? The Qualia Revolution in Game Studies
by Ivan Mosca

by Bart Simon

Many thanks to all the authors, reviewers, and the staff of the journal!

LARP: Art not worthy?

Worldcon 75 in Helsinki has generally been an excellent event, with multiple cultures, diverse forms of art and innumerable communities of fandom coming together. However, what left a bit of a bad taste in the mouth was the organizers’ decision yesterday to cancel a LARP (live action role-play) dealing with old people and dementia. The decision is highly controversial, and apparently based on some (non-Nordic) participants strongly communicating their upset that such a sensitive topic had even been allowed into the con program in the form of a “game”. On the other hand, the same people would apparently be completely fine with Alzheimer’s and similar conditions being handled in the form of a novel, for example.

There will no doubt be multiple reactions to this from experts in the field in the future. My short comment: this is an unfortunate case of censorship, based on a cultural perception of play and games as an inherently trivializing or “fun-based” form of low culture. It seems that for some people there still are strict cultural hierarchies even within popular culture, with games at the very bottom – and that handling something sensitive in the form of role-play, for example, can be an insult. Such a position completely ignores the work that has been done for decades in Nordic LARP and in digital indie “art games” (and also within the academic traditions of game studies) to expand the range of games and play as cultural expression, and to remove the expectation or stigma of automatic trivialism from interactive forms of art and culture. The organisers have obviously been pressurised by some vocal individuals, but the outcome in this case was a failure to stand up, explain the value and potential of role-playing games, and of Nordic LARP in particular, to an international audience, and make a difference. A sad day.

Link: Worldcon 75 cancellation statement (currently in updated and revised form) on Facebook regarding “The Old Home” [edit: should be “A Home for the Old”] LARP:

(There have been multiple exchanges regarding this matter on Twitter, for example, but I am not linking them here.)

(Edit: the documentation for the said LARP is available for download here:

(Edit2: LARP scholars and experts Jaakko Stenros and Markus Montola have published a thorough account of this incident here

(Edit3: Worldcon organisers have now published a more thorough explanation and the reasons for their decision here

Server Update: Elementary Error?

I have been running a Windows server in our basement pretty much nonstop since 2008. Originally a personal Web server, this HP Proliant machine has in recent years mostly worked as a LAN file server for backups, media archives and for home-internal sharing. Even with a new 1.5 terabyte disk installed some years ago, it was running out of disk space. The old Windows 2008 Server was also getting painfully slow.

New server components (August 2017)

I decided to do a bit of an update, and got a “small” 120 GB SSD for the new system, and a WD Red 4.0 terabyte NAS disk for data. (I also considered their 8 TB “Archive” disk, but I do not need quite that much space yet, and the “Red” model was a bit faster for my general-purpose use. It was also cheaper.)

This time I decided to go the Linux way – my aging dual-core Xeon based system is more suitable for a somewhat lighter OS than a full Windows Server installation. I was also curious to try newer Linux distributions, so I picked “elementary OS”, which has attracted some positive press recently.

HP Proliant ML110 G5, opened.

The hardware installation took its time, but I must say that I respect the build quality of this budget-class Proliant ML110 Gen5 machine. It has been running for almost ten years now without a single issue (hardware-related, I mean), and it is very solid and a pleasure to open and maintain (something that cannot be said of several consumer-oriented computers that I have used).

Installing elementary OS (“loki”)

The Linux installation, with my Samba and Dropbox components, is now finally up and running. But I have to say that I am a bit disappointed with elementary OS (0.4.1 “loki”) at the moment. It might have been the wrong distribution for my needs to start with. It surely looks pretty, but it is also very restricted – many essential administrative tools or features are disabled or not available, by design. Apparently it is made so easy and safe for beginners that it is hard to use this “eOS” for most of the things that Linux is normally used for: development, programming, systems administration.
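For reference, the kind of LAN file share I run over Samba can be sketched as a minimal smb.conf fragment like the following (the share name, path and user here are hypothetical examples, not my actual configuration):

```ini
[global]
   workgroup = WORKGROUP
   server string = Home file server
   security = user

[media]
   ; home-internal share for backups and media archives
   path = /srv/media
   valid users = myuser
   read only = no
   browseable = yes
```

After editing the file, running `testparm` checks the syntax, and restarting the smbd service applies the changes.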

It is possible to tweak Linux installations, of course, and I have now patched or hacked the new system to be more permissive and capable, but some new issues have emerged in the process. I wonder whether it is possible simply to overwrite “elementary” with a regular Ubuntu Server installation, for example, or whether I need to reinstall everything and lose the work that I have already done. I need to study the wonderful world of Linux distros a bit more, obviously.

Yoga 510, Signature Edition

At home, I have been setting up and testing a new, dual-boot Win10/Linux system. The Lenovo Yoga 510 is a budget-class two-in-one device that I am currently setting up as a replacement for my old Vivobook (which unfortunately now has a broken power plug/motherboard). Key technical specs (510-14ISK, 80S70082MX model, Signature Edition) include an Intel i5-6200U processor (a 2.30–2.80 GHz Skylake model), Intel HD Graphics 520, 4 GB of DDR4 memory, a 128 GB SSD, an IPS Full HD (1920 x 1080) 14″ touch-screen display, a Synaptics touchpad and a backlit keyboard. There is WiFi (802.11 a/b/g/n/ac) and Bluetooth 4.0. Contrasted with some other, thinner and lighter devices, this one has a nice set of connectors: one USB 2.0 and two USB 3.0 ports (no Thunderbolt, though). There is also a combo headphone/mic jack, Harman-branded speakers, a memory card slot (SD, SDHC, SDXC, MMC), a 720p webcam, and an HDMI connector. Finally, there is a small hidden “Novo Button”, which is needed to get to the BIOS settings.

This is a last-year model (there is already a “Yoga 520” with Kaby Lake chips available), and I got a relatively good deal from the Gigantti store (499 euros). (Edit: I forgot to mention that this also has a regular, full-size wired gigabit ethernet port, which is nice.)

The strong points (as contrasted with my trusty old Vivobook, that is) start with battery life, which according to my experience and Lenovo’s promises is over eight hours of light use. The IPS panel is not the best I have seen (the MS Surface Pro has a really excellent display), but it is still really good compared to the older TN panels. Multi-touch also operates pretty well, even if the touchpad is not quite to my taste (its feel is a bit ‘plasticky’, and it uses inferior Synaptics drivers, as contrasted with “precision touchpads”, which send raw data directly to Windows to handle).

The high point of Lenovo Thinkpad laptops has traditionally been their keyboards. This Yoga model is not one of the professional Thinkpad line, but the keyboard is rather good compared to the shallow, unresponsive keyboards that seem to be the trend these days. The only real problem is the non-standard positioning of the Up-arrow/PageUp and Right Shift keys – it is really maddening for writing, as while touch-typing every Right Shift press produces an erroneous keypress that moves the cursor up (potentially e.g. moving focus to “Send Email” rather than to typing, as I have already witnessed). But this can more or less be fixed with KeyTweak or a similar tool, which can be used to remap these two keys the other way around. Not optimal, but a small nuisance, really.
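Under the hood, tools like KeyTweak work by writing a Scancode Map value into the Windows registry, which the kernel applies at boot. A sketch of a .reg file that would swap Right Shift (scancode 0x0036) and the Up arrow (extended scancode 0xE048) could look like the following – this is my own reconstruction of the byte layout, not something KeyTweak produced, so double-check it (and keep a spare input device at hand) before importing:

```
Windows Registry Editor Version 5.00

; Scancode Map layout: 8 header bytes, a 4-byte entry count
; (number of remappings plus the terminator), then one 4-byte
; entry per remap (new scancode first, then the original key),
; and finally a 4-byte null terminator.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,\
  03,00,00,00,\
  48,e0,36,00,\
  36,00,48,e0,\
  00,00,00,00
```

A reboot is needed before the remapping takes effect, and deleting the value restores the default layout.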

Installing dual-boot Ubuntu requires the usual procedures (disabling Secure Boot and fast startup, shrinking the Windows partition, etc.), but in the end Linux runs really well on this Lenovo laptop. The touch screen and all the special keys I have tested work flawlessly right after a standard Ubuntu 17.04 installation, without any gimmicky hacking. Having a solid (if a bit heavy) laptop with a 14-inch touch-enabled, 360-degree rotating screen, which can be used without issues in the most recent versions of both Windows 10 and Linux, is a rather nice thing. Happy with this, at the moment.

Note on working with PDFs and digital signatures

Adobe Global Guide to Electronic Signature Law

Portable Document Format (PDF) files are a pretty standard element in academic and business life these days. The format is something of a compromise: a tool for living a life that is partly based on traditional paper documents and their conventions, and partly on new, digital functionalities. A PDF file should keep the appearance of the document the same as it moves from device to device and from user to user, and it can facilitate various more advanced functionalities.

One such key function is the ability to sign a document (an agreement, a certificate, or such) with a digital signature. This can greatly speed up many critical processes in the contemporary, global, mobile and distributed lives of individuals and organisations. Rather than waiting for a key person to arrive back from a trip to their office to physically sign a document with pen and paper, a PDF version of the document (for example) can simply be mailed to the person, who then adds their digital signature to the file, saves it, and sends the signed version back.

In legal and technical terms, there is nothing stopping us from moving completely to digital signatures. There are explanations of the legal situation e.g. here:

And Adobe, the leading company in the electronic documents business, provides step-by-step instructions on how to add or generate the cryptographic mechanisms that ensure the authenticity of digital signatures in PDFs with their Acrobat toolset:
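The cryptographic core of such a signature – hash the document, then transform the digest with a private key so that anyone holding the matching public key can check it – can be illustrated with a short, standard-library-only Python sketch. This is a textbook RSA toy with tiny made-up parameters for illustration, not how Acrobat or any real signing tool implements it; production signatures use 2048+ bit keys and padding schemes such as RSASSA-PSS:

```python
import hashlib

# Toy RSA parameters, for illustration only -- never use numbers this small.
p, q = 61, 53
n = p * q                 # public modulus (part of the public key)
e = 17                    # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)       # private exponent (modular inverse; Python 3.8+)

def sign(message: bytes) -> int:
    """Hash the document, then transform the digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

doc = b"I agree to the terms of this contract."
sig = sign(doc)
print(verify(doc, sig))   # True
```

Because the signature is computed from the document’s hash, changing even a single byte of the file alters the digest and verification fails, which is exactly what makes a digitally signed PDF tamper-evident.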

In my experience, most contracts and certificates are still required to be signed with a physical pen, ink and paper, even while the digital tools exist. The reasons are not legal or technical, but rather rooted in organisational routines and processes. Many traditional organisations are still not “digital” or “paperless”, but rather built upon decades (or: centuries!) of paper trail. If the entire workflow is built upon the authority of authentic, physically signed contracts and other legal (paper) documents, it is hard to transform the system. At the same time, the current situation is far from optimal: in many cases there is double work, as everything needs to exist both as physical papers (for signing, and for paper-based archiving) and scanned into PDFs (for distribution, in intranets, in email, and in the other electronic archives that people use in practice).

While all of us can take some small steps towards using digital signatures and getting rid of the double work (and the wasting of natural resources), we can also read about the long history of the “paperless office” – a vision of the future originally popularized by a Business Week article in 1975 – and the 2001 critique by Sellen & Harper:

And, btw, a couple of useful tips:

Brydge 12.3, Surface Pro 4

Surface Pro 4, with Brydge 12.3 and MS Type Cover

Getting the input right is one of the most challenging issues in today’s world of pervasive, multimodal computing and services. The Surface Pro 4 is an excellent multitouch tablet, and with the Surface Pen it is perfect for reviewing and marking (key elements in academic life). The problem with a tablet as a main computer is that much of the productivity-oriented work really calls for a mouse-and-keyboard style of approach.

There are pretty good add-on keyboards for today’s tablet computers, and one can of course also attach a full-size keyboard and mouse combo to a Surface Pro. However, a keyboard cover that is always with you is the optimal companion for a tablet user. The official Type Cover by Microsoft is a really good compromise: it is thin and light, and has decent keys, an excellent touchpad, and a backlight, which is really important for business use. There is a certain wobbly, flexible quality to the keys, though, and writing a whole day on one can create some strain.

I have now tested a new, much more solid alternative: the Brydge 12.3 keyboard cover. It is made of strong aluminium, has 160-degree rotating hinges that create a firm grip on the corners of the tablet, and its island-style keys are also backlit. In my experience, the usability issues with the Brydge relate, on the one hand, to the unreliability of the Bluetooth connection – sometimes I would spend several minutes after tablet wake-up waiting for the keyboard to re-establish its connection. The other thing is that the integrated touchpad is rather bad. It is hard to control precisely, pointer movement is wobbly, and not all Windows 10 mouse gestures are supported. It is also very small by today’s standards, and clicks register randomly. The sensible way to use the Brydge is alongside a wired or wireless mouse – this, however, diminishes its value as a real laptop replacement. The trackpad in the Type Cover is so much better that in regular use it in the end trumps the Brydge’s better (or at least more solid) keyboard. The plus side of using the Brydge is that, in tactile terms, it transforms the Surface Pro into a (small and heavy) laptop computer.

It is apparently hard to get a 2-in-1 device right. However, multiple manufacturers have recently introduced their own takes on the same theme, so there might be better options out there already.

Surface Pro 4, with Brydge 12.3 and MS Type Cover