Personal Computers as Multistratal Technology

HP “Sure Run” technology getting into conflict with the OS and/or the computer’s own BIOS.

As I was struggling through some operating system updates and other installs (and uninstalls) this week, I was again reminded of the history of personal computers, and of their (fascinating, yet often also frustrating) character as multistratal technology. By this I mean their historically, commercially and pragmatically multi-layered nature. A typical contemporary personal computer is more often a laptop than a desktop computer (this has been the situation for numerous years already, see e.g. https://www.statista.com/statistics/272595/global-shipments-forecast-for-tablets-laptops-and-desktop-pcs/). Whereas a personal computer in a desktop format is still something that one can realistically construct by combining various standards-following parts and modules, and expect to start operating after the installation of an operating system (plus typically some device drivers), the laptop computer is always configured and tweaked into a particular interpretation of what a personal computing device should be – for this price group, for this usage category, with these special, differentiating features. The keyboard is typically customised to fit into the (metal and/or plastic) body so that the functions of a standard 101/102-key PC keyboard layout (originally by Mark Tiddens of Key Tronic in 1982, then adopted by IBM) are fitted into, say, roughly 80 physical keys of a laptop. As portable computers have become smaller, there has been an increasing need for customised solutions, and the keyboard is a good example of this: different manufacturers each appear to resort to their own style of fitting function keys, volume up/down, brightness controls and other special keys onto the same physical keys, using various key-press combinations. While this means that it is hard to remain a complete touch-typist when changing from one brand of laptop to another (as the special keys will be in different places), one should still remember that in the early days of computing, and even in the era of early home and personal computers, keyboards differed from each other far more than they do in today’s personal computers. (See e.g. the Wikipedia articles: https://en.wikipedia.org/wiki/Computer_keyboard and https://en.wikipedia.org/wiki/Function_key.)

The heritage of IBM personal computers (the “original PCs”), coupled with the Microsoft operating systems (first DOS, then various Windows versions), has meant that there is much shared DNA in how the hardware and software of contemporary personal computers are designed. Even Apple Macintosh computers share many of these roots with the IBM PC heritage – most importantly due to the influential role that the graphical user interface, with its (keyboard- and mouse-accessed) windows, menus and other graphical elements originating in Douglas Engelbart’s On-Line System and then in Xerox PARC and its Alto computers, had for both Apple’s macOS and Microsoft Windows. All these historical elements, influences and (industry) standards are nevertheless layered in complex ways in today’s computing systems. It is not feasible to “start from an empty table”: the software that organisations and individuals have invested in needs to remain accessible in the new systems, and the skill sets of the human users themselves are based on similarity and compatibility with the old ways of operating computers.

Today, Apple with its Mac computers and Google with the Chromebook computers that it specifies (and sometimes also designs down to the hardware level) are best positioned to produce a harmonious and unified whole out of these disjointed origins. The reliability and generally positive user experiences provided by both Macs and Chromebooks indeed bear witness to the strength of unified hardware-software design and production. On the other hand, the most popular platform – a personal computer running a Microsoft Windows operating system – is the most challenging from the perspectives of unity, coherence and reliability. (According to reports, the market share of Windows is above 75 %, macOS at c. 20 %, Google’s ChromeOS at c. 5 % and Linux at c. 2 % in most markets for desktop and laptop computers.)

A contemporary Windows laptop is put together within a complex web of collaborative, competitive and parallel operations by multiple actors. There is the actual manufacturer and packager of computers that markets and delivers branded products to users: Acer, ASUS, Dell, HP, Lenovo, and numerous others. Then there is Microsoft, which develops and licenses the Windows operating system to these OEMs (Original Equipment Manufacturers), collaborating to various degrees with them and with the developers of PC components and other devices. For example, a “peripheral” manufacturer like Logitech develops computer mice, keyboards and other devices that should install and run in a seamless manner when connected to a desktop or laptop computer that has been put together by some OEM, which, in turn, has been combining hardware and software elements coming from e.g. Intel (which develops and manufactures CPUs, Central Processing Units, but also affiliated motherboard “chipsets”, integrated graphics processing units and such), Samsung (which develops and manufactures e.g. memory chips, solid state drives and display components) or Qualcomm (which is best known for its wireless components, such as cellular modems, Bluetooth products and Wi-Fi chipsets). In order for a new personal computer to run smoothly after it has been turned on for the first time, the operating system should have the right updates and drivers for all such components. As new technologies are constantly introduced, and as the laptop computer in particular follows the evolution of smartphones in sensor technologies (e.g. in using fingerprint readers or multiple camera systems for biometric authentication of the user), there is a constant need for updates that involve both the operating system itself and the firmware (deep, hardware-close software), as well as the operating-system-level drivers and utility programs provided by the component, device, or computer manufacturers.

The sad truth is that these updates often do not work out that well. There are endless stories in user discussion and support forums on the Internet, where unhappy customers describe their frustrations while attempting to update Windows (as Microsoft advises them), the drivers and utility programs (as the computer manufacturer instructs them), and/or the device drivers (provided directly by the component manufacturers, such as Intel or Qualcomm). There is just so much opportunity for conflicts and errors, even though the big companies of course try to test their software before it is released to customers. The Windows PC ecosystem is simply so messy, heterogeneous and historically layered that it is impossible to test beforehand every possible combination of hardware and software that users might have on their devices.

Adobe Acrobat Reader update error.

In practice there are just a few common rules of thumb. For example, it is a good idea to postpone installing the most recent version of the operating system for as long as possible, since the new one will always have more compatibility issues until it has been tested in the “real world” and updated a few times. Secondly, while the most recent and advanced functionalities are used in marketing to differentiate a laptop from competing models, it is in these new features that most of the problems will probably appear. One could play it safe, wipe out all the software and drivers that the OEM had installed, and reinstall a “pure” Windows OS on the new computer instead. But this can mean that some of the new components do not operate in the advertised ways. Myself, I usually test the OEM-recommended setup and software (and all recommended updates) for a while, but also make regular backups and restore points, and keep reinstall media available, just in case something goes wrong. Unfortunately, this happens quite often, and returning to the original state, or even doing a full, clean reinstall, is needed. With a more “typical” or average combination of hardware and software such issues are not so common, but if one works with new technologies and features, then such consequences of the complexity, heterogeneity and multistratal character of personal computers can indeed be expected. Sometimes only trial and error helps: the most recent software and drivers might be needed to solve issues, but sometimes it is precisely the new software that produces the problems, and the solution is going back to some older version. Sometimes disabling some function helps; sometimes the only way to proper reliability is completely uninstalling an entire software suite from a certain manufacturer, even if it means giving up some promised, advanced functionality. Life might just be simpler that way.
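
If one wants to make this trial and error a bit more systematic, a small script can at least document what actually changed. The following is a minimal sketch, assuming a Windows 10 or newer machine where the standard pnputil tool is available on the PATH: it saves a timestamped snapshot of the installed driver packages, so that snapshots taken before and after an update round can be diffed to see which drivers changed. It is only an illustration of the habit, not part of any manufacturer’s recommended procedure.

    import subprocess
    from datetime import datetime
    from pathlib import Path

    def snapshot_drivers(out_dir: Path = Path("driver-snapshots")) -> Path:
        """Save the current list of third-party driver packages to a text file."""
        out_dir.mkdir(exist_ok=True)
        # pnputil /enum-drivers lists the driver packages in the Windows driver store
        result = subprocess.run(
            ["pnputil", "/enum-drivers"],
            capture_output=True, text=True, check=True
        )
        out_file = out_dir / f"drivers-{datetime.now():%Y%m%d-%H%M%S}.txt"
        out_file.write_text(result.stdout, encoding="utf-8")
        return out_file

    if __name__ == "__main__":
        # Run once before and once after an update round, then diff the two files.
        print(f"Driver snapshot written to {snapshot_drivers()}")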

Zombies and the Shared Sensorium

I have studied immersive phenomena over the years, and am still fascinated by what the Finnish language so aptly captures with the idiom “Muissa maailmoissa” (literally: “in other worlds” – my dictionary suggests “away with the fairies” as an English translation, but I am not sure about that).

There is growing concern about the effects of digital technologies and social media, and of games and smartphones in particular, as they appear to be capable of transporting increasing numbers of people into other worlds. It is unnerving to be living surrounded by zombies, we are told: people who stare into other realities, and do not respond to our words, our need for eye contact, or physical touch. Zombies are everywhere: sitting in cafeterias and shopping centres, sometimes slowly walking, with their eyes focused on gleaming screens, or listening to some invisible sounds. Zombies have left their bodies here, in our material world, but their minds and mental focus have left this world, transported somewhere else.

The problem with the capacity to construct mental models and to live life as semiotic life-forms has always included a somewhat troublesome existential polyphony – or, as Bakhtin wrote, it is impossible for the self to completely coincide with itself. We are inaccessible to ourselves, as much as we are to others. Our technologies have not historically remedied this condition. The storytelling technologies made our universes polyphonic with myths and mythical beings; our electronic communication technologies made our mental ecosystems polyphonic with channels, windows, and (non-material) rooms; and our computing technologies made our distributed cognition polyphonic with memory and intelligence that do not coincide with our person, even when designed to be personalized.

Of course, we need science fiction for our redemption, as we always have. There are multiple storyworlds with predictive power that forecast the coming of a shared sensorium: seeing what you see, with your eyes, hearing what you hear. We will inevitably also ask: what about memory, cognition, emotion – can we not also remember your remembering, and feel your thinking? Perhaps. Yet the effect will no doubt fail to remedy our condition, once more. There can be interesting variations of mise-en-abyme: shared embeddedness in each other’s feeds, layers, windows and whispers. Yet all that sharing can still contain only moments of clear togetherness, or desolate loneliness. But the polyphony of it all will again be an order of magnitude more complex than the previous polyphonies we have inhabited.

Summer Computing

Working with my Toshiba Chromebook 2 on a sunny day.

I am not sure whether this is true for other countries, but after a long, dark and cold winter, Finns want to be outdoors when it is finally warm and sunny. Sometimes one might even do remote work outdoors, from a park, café or bar terrace, and that is when things can get interesting – with that “nightless night” (the sun shining even at midnight), and all.

For most intents and purposes, summer is for relaxing, and always dragging your work and laptop with you to the summer cottage or beach is not a good idea. This is precious time, and you should spend it with your family and friends, and unwind from the hurries of work. But if you would prefer to (or even need to, for one reason or another) take some of your work outdoors, the standard work laptop is usually not the optimal tool for that.

It is interesting to note that standard computer screens even today are optimised for a different style of use than the screens of today’s mobile devices. While the brightest smartphone screens – e.g. the excellent OLED screen used in the Samsung Galaxy S9 – exceed 1,000 nits (a unit of luminance: candelas per square metre; the S9 screen is reported to reach a maximum of 1,130 nits), your typical laptop screen maxes out at around a measly 200 nits (see e.g. this Laptop Mag test table: https://www.laptopmag.com/benchmarks/display-brightness). While this is perfectly adequate for working in a typical indoor office environment, it is very hard to make out any details on such screens in bright sunlight. You will just squint, get a headache, and hurt your eyes in the long run. Also, many typical laptop screens today are highly reflective, glossy glass screens, and matte surfaces, which help against reflections, have become very rare.

It is as if “mobile work”, one of the key buzzwords and trends today, in practice means only indoor-to-indoor mobility, rather than implying the development of tools for truly mobile work – tools that would also make it possible to work from a park bench on a sunny day, or from that classic location: the dock, next to your trusty rowing boat.

I have been hunting for business-oriented laptops that also have enough maximum screen brightness to scale up to comfortable levels in brightly lit environments, and there are not really that many. Even if you go for tablet computers, which should be optimised for mobile work, the brightness is not really on a level with the best smartphone screens. Some of the best figures come from the Samsung Galaxy Tab S3 (441 nits), the 10.5-inch iPad Pro (reportedly 600 nits), and the Google Pixel C (509 nits maximum). And tablet devices – even the best of them – do not really work well for all work tasks.

HP ZBook Studio x360 G5 (photo © HP)

HP has recently introduced some interesting devices that go beyond the dim screens most other manufacturers are happy with. For example, the HP ZBook Studio x360 G5 supposedly comes with a 4K, high-resolution anti-glare touch display that supports 100 percent Adobe RGB and has 600 nits of brightness, which is “20 percent brighter than the Apple MacBook Pro 15-inch Retina display and 50 percent brighter than the Dell XPS UltraSharp 4K display”, according to HP. With its 8th-generation Xeon processors (the professional equivalent of the hexa-core Core i9), this is a powerful – and expensive – device, but I am glad someone is showing the way.

HP advertising their new bright laptop display (image © HP)

Even better, the upcoming, updated HP EliteBook x360 G3 convertible should come with a touchscreen that has a maximum brightness of 700 nits. HP is advertising this as the “world’s first outdoor viewable display” for a business laptop, which at least sounds very promising. Note, though, that the 700 nits is available only in the 1920 x 1080 resolution model; the 4K touch display option has 500 nits, which is not bad either. The EliteBooks I have tested also have excellent keyboards, good-quality construction and some productivity-oriented enhancements that make them an interesting option for any “truly mobile” worker. One such enhancement is a 4G/LTE data connectivity option, which is a real blessing if one moves around a lot, opening and closing the laptop in different environments, so that there is not always a reliable Wi-Fi connection available. (More on the HP EliteBook models at: http://www8.hp.com/us/en/elite-family/elitebook-x360-1030-1020.html.)

EliteBook x360 G3 in tablet mode (photo © HP)

Apart from the challenges related to reliable data connectivity, a cloud-based file system is something that should be the default for any mobile worker. This is a matter of data security: in mobile work contexts it is much easier to lose one’s laptop, or have it stolen. Having fast and reliable (biometric) authentication, an encrypted local file system, and instantaneous synchronisation/backup to the cloud would minimise the risk of critical loss of work or important data, even if the mobile workstation were to drop into a lake, or get lost. In this regard Google’s Chromebooks are superior, but they typically lack the LTE connectivity and other business essentials that e.g. the above EliteBook model features. Using a Windows 10 laptop with either full Dropbox synchronisation enabled, or with Microsoft OneDrive as the default save location, comes rather close, even if the Google Drive/Docs ecosystem in Chromebooks is the only one that is truly “cloud-native”, in the sense that all applications, settings and everything else also live in the cloud. Getting back to where you left your work in Chrome OS means that you just pick up any Chromebook, log in, and start with full access to your files, folders, browser add-ons, bookmarks, etc. Starting to use a new PC involves much more friction (with multiple software installations, add-ons and service account logins, the setup can easily take full working days).
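
As a small illustration of that “everything also lives in the cloud” habit on a Windows laptop, the sketch below mirrors files from a local work folder into a folder that a sync client such as OneDrive or Dropbox is already watching. The folder paths here are assumptions to be adjusted to one’s own setup; in practice the simpler route is of course to keep the work folder itself inside the synced tree, so the client handles everything automatically.

    import shutil
    from pathlib import Path

    # Assumed locations; adjust to your own machine and sync client.
    WORK_DIR = Path.home() / "Work"
    SYNC_DIR = Path.home() / "OneDrive" / "WorkMirror"

    def mirror_newer_files(src: Path, dst: Path) -> int:
        """Copy files that are new or newer than their mirrored counterpart."""
        copied = 0
        for file in src.rglob("*"):
            if not file.is_file():
                continue
            target = dst / file.relative_to(src)
            if not target.exists() or file.stat().st_mtime > target.stat().st_mtime:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(file, target)  # copy2 preserves timestamps
                copied += 1
        return copied

    if __name__ == "__main__":
        print(f"Mirrored {mirror_newer_files(WORK_DIR, SYNC_DIR)} updated file(s)")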

If I could have my ideal, mobile-work-oriented tool from today’s tech world, I’d pick the business-enhanced hardware of the HP EliteBook, with its bright display and LTE connectivity, and couple it with Chrome OS, with its reliability and seamless online synchronisation. But I doubt that such a combo can be achieved – not yet, at least. Meanwhile, we can try to enjoy the summer, and some summer work, in somewhat more sheltered, shady locations.

Workshop in Singapore

I will spend the next week visiting Singapore, where Vivian Chen, from the Wee Kim Wee School of Communication and Information at Nanyang Technological University, has put together an interesting international seminar focused on games and play, particularly from the perspective of eSports phenomena. Together with several esteemed colleagues, I will also give a talk there; mine is titled “Evolution and Tensions in Gaming Communities”.

Since I have not found the full program online, I will share the most recent draft that I have below.

Chili season 2018

Time to start preparing for next summer’s chili season. This time I have promised myself that I will not fool around with any silly Ikea “passive hydroponics” system or similar. Just old-fashioned soil, some peat, water and a light. But I will make use of the Ikea cultivation pots and LED lights as much as possible.

I will also try to radically cut down the number of plants that I’ll grow this time. Last summer was cold, damp, dark and bad in so many ways, but one part of the problem was that I simply had too many plants in the end. Packing plants too densely into a small greenhouse just predisposes them all to pests and diseases. A smaller number also ensures enough sunshine and good airflow around every plant.

I am again putting my trust in Finnish chili seeds from Fatalii.net (Jukka Kilpinen’s “Chile Pepper Empire”). I am trying to grow five plants:

  • Naga Morich (C. chinense)
  • Carolina Reaper x 7pot Douglah (C. chinense hybrid, F2 generation)
  • 7pot Primo Orange (C. chinense)
  • Moruga Scorpion (C. chinense)
  • Rocoto Riesen, Yellow (C. pubescens)

You might spot a pattern here: this is apparently the year of superhots for me (the Rocoto Riesen is the odd one out – thanks to Fatalii for dropping it into my order as a “surprise extra”). Originally I was planning on focusing on just my regular kitchen varieties (Lemon Drop, etc.), but losing all my hot chilies last summer left some kind of craving for retribution. If all these grow into proper plants, and yield proper crops, I will be in trouble. But: let’s see!

Cognitive engineering of mixed reality

 

iOS 11: user-adaptable control centre, with application and function shortcuts in the lock screen.

In the 1970s and 1980s, the concept of ‘cognitive engineering’ was used in industry labs to describe an approach that tried to apply the lessons of cognitive science to the fields of design and engineering. There were people like Donald A. Norman, who wanted to devise systems that are not only easy or powerful to use, but most importantly pleasant, and even fun.

One of the classical challenges of making technology suit humans is that humans change and evolve, and differ greatly in motivations and abilities, while technological systems tend to stay put. Machines are created in a certain manner, are mostly locked within the strict walls of the material and functional specifications they are based on, and (if correctly manufactured) operate reliably within those parameters. Humans, however, are fallible and changeable, but also capable of learning.

In his 1986 article, Norman uses the example of a novice and an experienced sailor, who differ greatly in their ability to take information from a compass and translate it into the desired boat movement (through the use of tiller and rudder). There have been significant advances in multiple industries in making increasingly clear and simple systems that almost anyone can use, and this in turn has translated into an increasingly ubiquitous, pervasive application of information and communication technologies in all areas of life. The televisions in our living rooms are computing systems (often equipped with apps of various kinds), our cars are filled with online-connected computers and assistive technologies, and in our pockets we carry powerful terminals into information, entertainment, and the ebb and flow of social networks.

There is, however, also an alternative interpretation of what ‘cognitive engineering’ could be in this dawning era of pervasive computing and mixed reality. Rather than being limited to engineering products that attempt to adapt to the innate operations, tendencies and limitations of human cognition and psychology, engineering systems that are actively used by large numbers of people also means designing and affecting the spaces within which our cognitive and learning processes will then evolve, fit in, and adapt. Cognitive engineering does not only mean designing and manufacturing certain kinds of machines; it also translates into an impact made on the human element of this dialogical relationship.

Graeme Kirkpatrick (2013) has written about the ‘streamlined self’ of the gamer. There are social theorists who argue that living in a society based on computers and information networks produces new difficulties for people. The social, cultural, technological and economic transitions linked with life in late modern, capitalist societies involve movements from project to new project, and an associated necessity for constant re-training. There is not necessarily any “connecting theme” in life, or even a sense of personal progression. Following Boltanski and Chiapello (2005), Kirkpatrick analyses the subjective condition where life in contradiction – between the exigency of adaptation and the demand for authenticity – means that the rational course in this kind of systemic reality is to “focus on playing the game well today”. As Kirkpatrick writes, “Playing well means maintaining popularity levels on Facebook, or establishing new connections on LinkedIn, while being no less intensely focused on the details of the project I am currently engaged in. It is permissible to enjoy the work but necessary to appear to be enjoying it and to share this feeling with other involved parties. That is the key to success in the game.” (Kirkpatrick 2013, 25.)

One of the key theoretical trajectories of cognitive science has focused on what has been called “distributed cognition”: our thinking is not only situated within our individual brains, but is in complex and important ways also embodied and situated within our environments and our artefacts, through social, cultural and technological means. Gaming is one example of an activity where people can be seen to construct a sense of self and its functional parameters out of resources that they are familiar with, and which they can freely exploit and explore in their everyday lives. Such technologically framed play is also increasingly common in working life, and our schools can similarly be approached as complex, designed and evolving systems that are constituted by institutions, (implicit as well as explicit) social rules, and several layers of historically sedimented technologies.

Beyond all the hype around new commercial technologies of virtual, augmented and mixed reality lies the fact that we have always already lived in a complex substrate of mixed realities: a mixture of ideas, values, myths and concepts of various kinds, intermixed and communicated within different physical and immaterial expressive forms and media. Cognitive engineering of mixed reality in this more comprehensive sense involves engagement in dialogical cycles of design, analysis and interpretation, where practices of adaptation and adoption of technology also shape the forms these technologies are realised in. Within the context of game studies, Kirkpatrick (2013, 27) formulates this as follows: “What we see here, then, is an interplay between the social imaginary of the networked society, with its distinctive limitations, and the development of gaming as a practice partly in response to those limitations. […] Ironically, gaming practices are a key driver for the development of the very situation that produces the need for recuperation.” There are multiple other areas of technology-intertwined lives where similar double-bind relationships are currently surfacing: in the social use of mobile media, in organisational ICT, in so-called smart homes, and in smart traffic design and user culture processes. – A summary? We live in interesting times.

References:
– Boltanski, Luc, and Eve Chiapello (2005) The New Spirit of Capitalism. London & New York: Verso.
– Kirkpatrick, Graeme (2013) Computer Games and the Social Imaginary. Cambridge: Polity.
– Norman, Donald A. (1986) Cognitive Engineering. In User Centered System Design, 31–61.

Special Issue: Reflecting and Evaluating Game Studies – Games & Culture

This is now published:

Games & Culture:
Volume 12, Issue 6, September 2017
Special Issue: Reflecting and Evaluating Game Studies
(http://journals.sagepub.com/toc/gaca/12/6)

Guest Editors: Frans Mäyrä and Olli Sotamaa

Articles

Need for Perspective:
Introducing the Special Issue “Reflecting and Evaluating Game Studies”
by Frans Mäyrä & Olli Sotamaa
(Free Access: http://journals.sagepub.com/doi/pdf/10.1177/1555412016672780)

The Game Definition Game: A Review
by Jaakko Stenros

The Pyrrhic Victory of Game Studies: Assessing the Past, Present, and Future of Interdisciplinary Game Research
by Sebastian Deterding

How to Present the History of Digital Games: Enthusiast, Emancipatory, Genealogical, and Pathological Approaches
by Jaakko Suominen

What We Know About Games: A Scientometric Approach to Game Studies in the 2000s
by Samuel Coavoux, Manuel Boutet & Vinciane Zabban

What Is It Like to Be a Player? The Qualia Revolution in Game Studies
by Ivan Mosca

Unserious
by Bart Simon

Many thanks to all the authors, reviewers, and the staff of the journal!