Life with Photography: Then and Now

I have kept a diary, too, but I think that the best record of my life and times comes from the photographs taken over the years. Most of my photos from the last century (pre-2000s) are collected in traditional photo albums: I used to love the craft of making photo collages, cutting and combining pieces of photographs, written text and various found materials, such as travel tickets or pieces of brochures, into travel photo albums. Some albums were more experimental: in pre-digital times it was difficult to know whether a shot was technically successful or not, and as I have always worked mostly in colour rather than black-and-white, I used to have the film rolls developed and every frame printed, without seeing the final outcomes. With some out-of-focus, blurred or plain random, accidental shots included in every roll, I had plenty of material for building collages that played with colour, the dynamics of composition, or some visual motif. This was fun stuff, and while one can certainly do this (and more) with e.g. Photoshop and digital photos, there is something in cutting and combining physical photos that a digital collage cannot quite replicate.

The first camera of my own was a Chinon CE-4, a budget-class Japanese film camera from the turn of the 1970s and 1980s. It served me well over many years, with its manual and “semi-automatic” (aperture-priority) exposure modes and its support for easy double exposures.

Chinon CE-4 (credit:
https://www.flickr.com/photos/pwiwe/463041799/in/pool-camerawiki/ ).

I started transitioning to digital photography first by scanning paper photos and slides into digital versions that could then be used for editing and publishing. Probably among the earliest of my actual digital cameras was an HP PhotoSmart 318, a cheap and almost toy-like device with a 2.4-megapixel resolution, 8 MB of internal flash memory (plus support for CompactFlash cards), a fixed f/2.8 lens and TTL contrast-detection autofocus. I think I was already shooting occasionally with this camera in 2001, at least.

A few years after that I started to use digital photography a bit more, at least when travelling. I remember getting my first Canon cameras for this purpose. I owned at least a Canon Digital IXUS v3, which I was already using at the first DiGRA conference in Utrecht in November 2003. While still clearly a “point-and-shoot” (compact) camera, this Canon had a metal body, and the photos it produced were a clear step up from those of the plastic HP device. I started to become a believer: the future was in digital photography.

Canon Digital IXUS v3 (credit:
https://fi.m.wikipedia.org/wiki/Tiedosto:Canon_Digital_Ixus_V3.jpg ).

After some saving, I finally invested in my first digital “system camera” (a DSLR) in 2005. I remember taking photos on the warm Midsummer night that year with the new Canon EOS 350D, and how magical it felt. The 8.0-megapixel CMOS image sensor and the DIGIC II signal processing and control unit (a single-chip system), coupled with some decent Canon lenses, made it possible to experiment with multiple shooting modes and to get finely detailed and nuanced night and nature photos. This was also the time when I built my own (HTML-based) online and offline “digital photo albums”, and joined the first digital photo community services, such as Flickr.

Canon EOS 550D (credit:
https://www.canon.fi/for_home/product_finder/cameras/digital_slr/eos_550d/ ).

It was five years later when I next upgraded my Canon system, this time to an EOS 550D (“Rebel T2i” in the US, “Kiss X4” in Japan). This again meant a considerable leap in image quality, as well as in features related to the speed, “intelligence” and convenience of shooting, and to the processing options available in-camera. The optical characteristics of cameras as such have not radically changed, and there are people who consider some vintage Zeiss, Nikkor or Leica lenses works of art. The benefits of the 550D over the 350D for me were mostly related to the higher-resolution sensor (18.0 megapixels this time) and the ways in which the DIGIC 4 processor reduced noise, provided much higher speeds, and even enabled 1080p video (with live view and external microphone input).

Today, in 2019, I still take the Canon EOS 550D with me to any event or trip where I want to get the best-quality photographs. This is due more to the lenses than to the actual camera body, though. My two current smartphones – Huawei Mate 20 Pro and iPhone 8 Plus – both have cameras with arguably better sensors and much more capable processors than this aging, entry-level “system camera”. The iPhone has dual 12.0-megapixel sensors (f/1.8, 28mm wide, with optical image stabilization; f/2.8, 57mm telephoto), both accompanied by PDAF (a fast autofocus technology based on phase detection). The optics in the Huawei were developed in collaboration with Leica and come as a seamless combination of three (!) cameras: the first has a very large 40.0-megapixel sensor (f/1.8, 27mm wide), the second a 20.0-megapixel sensor (f/2.2, 16mm ultrawide), and the third an 8.0-megapixel one (f/2.4, 80mm telephoto). The Huawei offers both optical and digital zoom, efficient optical image stabilization, and a hybrid autofocus combining phase detection with laser autofocus (a tiny laser transmitter sends a beam to the subject, and from the reflected signal the processor calculates and adjusts the correct focus). Huawei also utilizes advanced AI algorithms and its powerful Kirin 980 processor (with two “Neural Processing Units”, NPUs) to optimize the camera settings and to quickly apply some in-camera post-processing to produce “desirable” outcomes. According to available information, the Huawei Mate 20 Pro can process and recognize “4,500 images per minute and is able to differentiate between 5,000 different kinds of objects and 1,500 different photography scenarios across 25 categories” (whatever those are).
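
That laser autofocus is, at its core, a time-of-flight measurement: the distance follows directly from how long the light pulse takes to make the round trip. A minimal illustrative sketch (the timing figure is invented for the example; real phones do this in dedicated hardware):

```python
# Laser autofocus as a time-of-flight measurement (illustrative sketch).
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the subject: the laser pulse travels there and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a ~13.3-nanosecond round trip puts the subject about 2 m away.
print(f"{distance_from_round_trip(13.3e-9):.2f} m")  # -> 1.99 m
```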

Huawei Mate 20 Pro, with its three cameras (credit: Frans Mäyrä).

But with all that computing power, today’s smartphones are not capable (not yet, at least) of outplaying the purely optical benefits available to system cameras. This is not so crucial when documenting a birthday party, for example, as the lenses in smartphones are perfectly adequate for short-distance and wide-angle situations. Proper portraits are a somewhat borderline case today: a high-quality system camera lens is capable of “separating” the person from the background and blurring the background (creating the beautiful “bokeh” effect). But powerful smartphones like the iPhone and Huawei mentioned above effectively come with an AI-assisted Photoshop built into them, and can therefore detect the key subject, separate it, and blur the background with algorithms. The results can be rather good (good enough, for many users and use cases), but at the same time it must be said that when a professional photographer aims for something that can be enlarged, printed full-page in a magazine, or otherwise used in a demanding context, a good lens attached to a system camera will prevail. This comes down to basic optical laws: the aperture (the hole where the light comes in) can be much larger in such camera lenses, providing more information to the image sensor, the focal length can be longer – and the sensor itself can also be much larger, which benefits e.g. fast-moving subjects (sports, animal photography) and low-light conditions. With several small lenses and sensors, future “smart cameras” can probably provide an ever-improving challenge to more traditional photography equipment, combining and processing data and filling in information derived from machine learning, but a good lens coupled with a system camera can help create unique pictures in a more traditional manner. Both are needed, and both have a future in photography cultures, I think.
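
To illustrate the algorithmic route: once a segmentation mask for the subject exists (in real phones it comes from depth maps and learned models), the background blur itself is a simple compositing step. A minimal sketch with OpenCV, where the input photo and the ready-made mask file are assumptions for illustration:

```python
import cv2
import numpy as np

# Illustrative "portrait mode" compositing: keep the masked subject sharp
# and replace everything else with a blurred copy. Real phone pipelines
# derive the mask from depth maps and learned segmentation models; here
# the input photo and the ready-made mask file are assumed to exist.
image = cv2.imread("portrait.jpg")                           # hypothetical
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)  # 255 = subject

blurred = cv2.GaussianBlur(image, (51, 51), 0)               # synthetic bokeh
alpha = cv2.merge([mask] * 3).astype(np.float32) / 255.0     # 3-channel mask

composite = (image * alpha + blurred * (1.0 - alpha)).astype(np.uint8)
cv2.imwrite("portrait_bokeh.jpg", composite)
```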

The main everyday benefit of e.g. the Huawei Mate 20 Pro over an old-school DSLR such as the Canon EOS 550D is portability. Few people go to school or work with a DSLR hanging from their neck, but a pocket-size camera can always travel with you – and be available when that unique situation, light condition or rare bird/butterfly presents itself. With camera technologies improving, system cameras are also getting smaller and lighter, though. Many professionals still prefer rather large and heavy camera bodies, as the big grip and solid buttons/controls provide better ergonomics, and the heavy body is also a proper counterbalance for the large and heavy telephoto lenses that many serious nature or sports photographers need for their work, for example. That said, I am currently thinking that my next system camera will probably no longer be based on the traditional SLR (single-lens reflex) architecture – which, by the way, is already over three hundred years old, if the first reflex-mirror “camera obscura” systems are taken into account. Mirrorless interchangeable-lens camera systems maintain the component-based body-plus-lenses architecture, but eliminate the moving mirror and reflective prisms of SLR systems, using electronic viewfinders instead.

I still have my homework to do regarding the differences in how the various mirrorless systems are being implemented, but to my eye it also looks like there has recently been a rather rapid period of technical R&D in this area, with Sony in particular leading the way, and the big camera manufacturers like Canon and Nikon now following with their own mirrorless solutions. There is not yet quite as much variety to choose from for amateur, small-budget photographers such as myself, as many initial models have been released in the upper, serious-enthusiast/professional price range of multiple thousands. But I’d guess that sensible budget models will follow next, and I am interested to see whether it is possible to move into the new decade with a light yet powerful system that combines some of the best aspects of the history of photography with the opportunities opened up by new computing technologies.

Sony a6000, a small mirrorless system camera body announced in 2014 (credit: https://en.wikipedia.org/wiki/Sony_α6000#/media/File:Sony_Alpha_ILCE-6000_APS-C-frame_camera_no_body_cap-Crop.jpeg).

Personal Computers as Multistratal Technology

HP “Sure Run” technology getting into conflict with the OS and/or the computer’s BIOS.

As I was struggling through some operating system updates and other installs (and uninstalls) this week, I was again reminded of the history of personal computers, and of their (fascinating, yet often also frustrating) character as a multistratal technology. By this I mean their historically, commercially and pragmatically multi-layered nature. A typical contemporary personal computer is more often a laptop than a desktop computer (this has been the situation for numerous years already, see e.g. https://www.statista.com/statistics/272595/global-shipments-forecast-for-tablets-laptops-and-desktop-pcs/). Whereas a desktop-format personal computer is still something one can realistically consider constructing by combining various standards-following parts and modules, and expect it to start operating after the installation of an operating system (plus, typically, some device drivers), a laptop is always configured and tweaked into a particular interpretation of what a personal computing device should be – for this price group, for this usage category, with these special, differentiating features. The keyboard is typically customised to fit the (metal and/or plastic) body so that the functions of the standard 101/102-key PC keyboard layout (originally by Mark Tiddens of Key Tronic in 1982, then adopted by IBM) are fitted into e.g. the c. 80 physical keys of a laptop. As portable computers have become smaller, there has been an increasing need for customised solutions, and the keyboard is a good example: different manufacturers each appear to resort to their own style of fitting e.g. function keys, volume up/down, brightness controls and other special keys onto the same physical keys, using various key-press combinations. While this means it is hard to remain a complete touch-typist when changing from one laptop brand to another (as the special keys will be in different places), one should remember that in the early days of computing, and even in the era of early home and personal computers, keyboards differed from each other far more than they do in today’s personal computers. (See e.g. the Wikipedia articles: https://en.wikipedia.org/wiki/Computer_keyboard and https://en.wikipedia.org/wiki/Function_key.)

The heritage of IBM personal computers (the “original PCs”), coupled with the Microsoft operating systems (first DOS, then various Windows versions), has meant that there is much shared DNA in how the hardware and software of contemporary personal computers are designed. Even Apple Macintosh computers share much of the same roots as the IBM PC heritage – most importantly through the influential role that the graphical user interface, with its (keyboard- and mouse-operated) windows, menus and other graphical elements, originating in Douglas Engelbart’s On-Line System and then in Xerox PARC’s Alto computers, played for both Apple’s macOS and Microsoft Windows. All these historical elements, influences and (industry) standards are nevertheless layered in complex ways in today’s computing systems. It is not feasible to “start from a clean table”: the software that organisations and individuals have invested in needs to remain usable in the new systems, just as the skill sets of the human users themselves are based on similarity and compatibility with the old ways of operating computers.

Today, Apple with its Mac computers and Google with the Chromebook computers it specifies (and sometimes also designs down to the hardware level) are best positioned to produce a harmonious and unified whole out of these disjointed origins. The reliability and generally positive user experiences provided by both Macs and Chromebooks indeed bear witness to the strength of unified hardware-software design and production. On the other hand, the most popular platform – a personal computer running a Microsoft Windows operating system – is the most challenging from the unity, coherence and reliability perspectives. (According to reports, the market share of Windows is above 75 %, macOS at c. 20 %, Google’s Chrome OS at c. 5 % and Linux at c. 2 % in most desktop and laptop markets.)

A contemporary Windows laptop is put together within a complex network of collaborative, competitive and parallel operations by multiple companies. There is the actual manufacturer and packager of computers, which markets and delivers certain branded products to users: Acer, ASUS, Dell, HP, Lenovo, and numerous others. Then there is Microsoft, which develops and licenses the Windows operating system to these OEMs (Original Equipment Manufacturers), collaborating to various degrees with them and with the developers of PC components and other devices. For example, a “peripheral” manufacturer like Logitech develops computer mice, keyboards and other devices that should install and run seamlessly when connected to a desktop or laptop computer put together by some OEM, which, in turn, has been combining hardware and software elements coming from e.g. Intel (which develops and manufactures CPUs, Central Processing Units, but also affiliated motherboard “chipsets”, integrated graphics processing units and such), Samsung (which develops and manufactures e.g. memory chips, solid-state drives and display components) or Qualcomm (which is best known for wireless components, such as cellular modems, Bluetooth products and Wi-Fi chipsets). For a new personal computer to run smoothly after it has been turned on for the first time, the operating system should have the right updates and drivers for all such components. As new technologies are constantly introduced, and as the laptop in particular follows the evolution of smartphones in sensor technologies (e.g. in using fingerprint readers or multiple-camera systems for biometric authentication of the user), there is a constant need for updates involving both the operating system itself and the firmware (deep, hardware-close software), as well as operating-system-level drivers and utility programs provided by the component, device, or computer manufacturers.

The sad truth is that these updates often do not work out that well. There are endless stories in user discussion and support forums on the Internet where unhappy customers describe their frustrations while attempting to update Windows (as Microsoft advises them), the drivers and utility programs (as the computer manufacturer instructs them), and/or the device drivers (provided directly by component manufacturers such as Intel or Qualcomm). There is just so much opportunity for conflicts and errors, even though the big companies of course try to test their software before releasing it to customers. The Windows PC ecosystem is just so messy, heterogeneous and historically layered that it is impossible to test beforehand every possible combination of hardware and software that users might have on their devices.

Adobe Acrobat Reader update error.

In practice, there are just a few common rules of thumb. For example, it is a good idea to postpone installing the most recent version of the operating system for as long as possible, since the new one will always have more compatibility issues until it has been tested in the “real world” and updated a few times. Secondly, while the most recent and advanced functionalities are what is used in marketing to differentiate a laptop from competing models, it is in these new features that most of the problems will probably appear. One could play it safe, wipe out all the software and drivers that the OEM installed, and reinstall a “pure” Windows OS instead – but this can mean that some of the new components will not operate in the advertised ways. Myself, I usually test the OEM-recommended setup and software (and all recommended updates) for a while, but also make regular backups and restore points, and keep reinstall media available, just in case something goes wrong. Unfortunately, this happens quite often, and returning to the original state, or even doing a full, clean reinstall, is needed. In a more “typical” or average combination of hardware and software such issues are not so common, but if one works with new technologies and features, then such consequences of the complexity, heterogeneity and multistratal character of personal computers can indeed be expected. Sometimes only trial and error helps: the most recent software and drivers might be needed to solve issues, but sometimes it is precisely the new software that produces the problems, and the solution is to go back to older versions. Sometimes disabling some function helps; sometimes the only way to proper reliability is to completely uninstall an entire software suite from a certain manufacturer, even if it means giving up some promised, advanced functionality. Life might just be simpler that way.
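
As a concrete example of the restore-point habit: on Windows, creating a restore point before a risky driver or utility update can be scripted. A minimal sketch in Python, shelling out to Windows PowerShell’s Checkpoint-Computer cmdlet (requires administrator rights; the description text is just an example label):

```python
import subprocess

# Create a Windows system restore point before installing a risky driver
# update. Requires administrator rights; Checkpoint-Computer is available
# in Windows PowerShell (not in the cross-platform PowerShell 7 editions).
subprocess.run(
    [
        "powershell.exe",
        "-NoProfile",
        "-Command",
        "Checkpoint-Computer "
        "-Description 'Before OEM driver update' "
        "-RestorePointType DEVICE_DRIVER_INSTALL",
    ],
    check=True,  # raise if the restore point could not be created
)
```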

Summer Computing

Working with my Toshiba Chromebook 2 on a sunny day.

I am not sure whether this is true of other countries, but after a long, dark and cold winter, Finns want to be outdoors when it is finally warm and sunny. Sometimes one might even do remote work outdoors, from a park, café or bar terrace, and that is when things can get interesting – what with the “nightless night” (the sun shining even at midnight), and all.

Surely, for most intents and purposes, summer is for relaxing, and always dragging your work and laptop with you to the summer cottage or beach is not a good idea. This is definitely precious time, and you should spend it with your family and friends, and unwind from the hurries of work. But if you would prefer (or even need, for one reason or another) to take some of your work outdoors, the standard work laptop is not usually the optimal tool for that.

It is interesting to note that standard computer screens even today are optimised for a different style of use than the screens of today’s mobile devices. While the brightest smartphone screens – e.g. the excellent OLED screen used in the Samsung Galaxy S9 – exceed 1000 nits (the unit of luminance: candela per square metre; the S9 screen is reported to produce a maximum of 1130 nits), typical laptop computer screens max out at around a measly 200 nits (see e.g. this Laptop Mag test table: https://www.laptopmag.com/benchmarks/display-brightness). While this is perfectly good when working in a typical indoor office environment, it is very hard to make out any details on such screens in bright sunlight. You will just squint, get a headache and, in the long run, hurt your eyes. Also, many typical laptop screens today are highly reflective, glossy glass screens, and matte surfaces, which help against reflections, have become very rare.
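
The readability problem can even be put into rough numbers: what matters outdoors is not the panel’s brightness as such, but the contrast left over once ambient light reflects off the screen. A back-of-the-envelope sketch (the reflectance and lux figures are ballpark assumptions for illustration):

```python
import math

# Back-of-the-envelope look at why a 200-nit laptop is unreadable outdoors:
# ambient light reflecting off the screen washes out the panel's contrast.
# The reflectance and illuminance figures below are ballpark assumptions.
def effective_contrast(display_nits: float, ambient_lux: float,
                       reflectance: float = 0.02) -> float:
    # Luminance of the reflected ambient light for a diffuse reflector:
    # L (cd/m^2) = E (lux) * reflectance / pi.
    reflected_nits = ambient_lux * reflectance / math.pi
    return (display_nits + reflected_nits) / reflected_nits

print(effective_contrast(200, 500))       # office lighting: ~60:1, fine
print(effective_contrast(200, 100_000))   # direct sun: ~1.3:1, washed out
print(effective_contrast(1000, 100_000))  # 1000-nit phone: ~2.6:1, legible
```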

It is as if “mobile work”, one of the key buzzwords and trends of today, in practice meant only indoor-to-indoor mobility, rather than implying the development of tools for truly mobile work – tools that would also make it possible to work from a park bench on a sunny day, or from that classic location: the dock, next to your trusty rowing boat.

I have been hunting for business-oriented laptops that also have enough maximum screen brightness to scale up to comfortable levels in brightly lit environments, and there are not really that many. Even if you go for tablet computers, which should be optimised for mobile work, the brightness is not really on a par with the best smartphone screens. Some of the best figures come from the Samsung Galaxy Tab S3, at 441 nits; the iPad Pro 10.5-inch model reportedly reaches 600 nits, and the Google Pixel C has a 509-nit maximum. And tablet devices – even the best of them – do not really work well for all work tasks.

HP ZBook Studio x360 G5 (photo © HP)

HP has recently introduced some interesting devices that go beyond the dim screens most other manufacturers are happy with. For example, the HP ZBook Studio x360 G5 supposedly comes with a 4K, high-resolution anti-glare touch display that supports 100 percent of Adobe RGB and has 600 nits of brightness, which is “20 percent brighter than the Apple MacBook Pro 15-inch Retina display and 50 percent brighter than the Dell XPS UltraSharp 4K display”, according to HP. With its 8th-generation Xeon processors (the pro equivalent of the hexa-core Core i9), this is a powerful, and expensive, device, but I am glad someone is showing the way.

HP advertising their new bright laptop display (image © HP)

Even better, the upcoming, updated HP EliteBook x360 G3 convertible should come with a touchscreen that has a maximum brightness of 700 nits. HP is advertising this as the “world’s first outdoor viewable display” for a business laptop, which at least sounds very promising. Note, though, that the 700 nits is achieved only in the 1920 x 1080 resolution model; the 4K touch display option has 500 nits, which is not that bad either. The EliteBooks I have tested also have excellent keyboards, good-quality construction and some productivity-oriented enhancements that make them an interesting option for any “truly mobile” worker. One such enhancement is a 4G/LTE data connectivity option, which is a real blessing if one moves fast, opening and closing the laptop in different environments, with no reliable Wi-Fi connection available all the time. (More on the HP EliteBook models at: http://www8.hp.com/us/en/elite-family/elitebook-x360-1030-1020.html.)

EliteBook x360 G3 in tablet mode (photo © HP)

Apart from the challenges related to reliable data connectivity, a cloud-based file system is something that should be the default for any mobile worker. This is a matter of data security: in mobile work contexts it is much easier to lose one’s laptop, or have it stolen. Fast and reliable (biometric) authentication, an encrypted local file system, and instantaneous synchronisation/backup to the cloud would minimise the risk of a critical loss of work or important data, even if the mobile workstation dropped into a lake or got lost. In this regard Google’s Chromebooks are superior, but they typically lack the LTE connectivity and other similar business essentials that e.g. the above EliteBook model features. Using a Windows 10 laptop with either full Dropbox synchronisation enabled, or with Microsoft OneDrive as the default save location, comes rather close, even if the Google Drive/Docs ecosystem of the Chromebooks is the only one that is truly “cloud-native”, in the sense that all applications, settings and everything else also live in the cloud. Getting back to where you left your work in Chrome OS means just picking up any Chromebook, logging in, and starting with full access to your files, folders, browser add-ons, bookmarks, etc. Starting to use a new PC is a process with much more friction (with multiple software installations, add-ons and service account logins, the setup can easily take full working days).

If I could have my ideal, mobile-work-oriented tool from today’s tech world, I’d pick the business-enhanced hardware of the HP EliteBook, with its bright display and LTE connectivity, and couple it with Chrome OS, with its reliability and seamless online synchronisation. But I doubt that such a combo can be achieved – not yet, at least. Meanwhile, we can try to enjoy the summer, and some summer work, in somewhat more sheltered, shady locations.

Recommended laptops, March 2018

Every now and then I am asked to recommend what PC to buy. The great variety in individual needs and preferences makes this a thankless task – it is dangerous to follow someone else’s advice instead of doing your own homework and hands-on testing. But, that said, here are some of my current favourites, based on my individual and highly idiosyncratic preferences:

My key criterion is to start from a laptop rather than a desktop PC: laptops are powerful enough for almost anything, and they provide more versatility. When used at the office or at the home desk, one can plug in an external keyboard, mouse/trackball and display, and use local network resources such as printers and file servers. The Thunderbolt interface has made it easy to have all those things plugged in via a single connector, so I’d recommend checking that the laptop comes with Thunderbolt (it uses the USB-C type connector, but not all USB-C ports are Thunderbolt ports).

When we talk about laptops, my approach is to look first at the weight and get as light a device as possible, subject to two other key criteria: an excellent keyboard and a good touch display.

The reason for these priorities is that I personally carry the laptop with me pretty much always, and weight is then a really important factor. If the thing is heavy, the temptation is to just leave it where it sits rather than pick it up while rushing to a quick meeting. And when one needs to take notes or check some information in that meeting, one is at the mercy of the smartphone picked from one’s pocket, and the ergonomics of that situation are much worse. Ergonomics relate to the points about an excellent keyboard and display alike. The keyboard is, for me, the main interface, since I write a lot. A bad or even average keyboard will make things painful in the long run, if you write for hours and hours daily. Prioritising the keyboard is something that your hands, health and general life satisfaction will thank you for, in the long run.

A touch display is something that will probably divide opinions, even among technology experts. In the Apple Macintosh ecosystem there is no touch-screen computer available: that modality is reserved for the iPad and iPhone mobile devices. I think that a touch screen on a laptop is something that, once learned, one cannot give up. I nowadays find myself trying to scroll and swipe my non-touchscreen devices all the time. Windows 10 currently has the best operating-system support for touch-screen gestures, but there are devices in the Linux and Chromebook ecosystems that also support touch. A touch display makes handling applications and files easier, and makes zooming in and out of text and images a snap. Moving one’s hands away from the keyboard and touchpad every now and then to the edges of the screen is probably also good for ergonomics. However, trying to keep one’s hands on the laptop screen for extended periods is not a good idea, as it is straining. A touch screen is not absolutely necessary, but it is an excellent extra. It is important, though, that the screen is bright and sharp and has wide viewing angles; it is really frustrating to work on dim, washed-out displays, particularly in brightly lit conditions – you have to squint, and you end up with a terrible headache at the end of the day. In LCD screens, look for IPS (in-plane switching) technology, or go for OLED screens. The latter are still rather rare and expensive in laptops, but OLED has the best contrast, and it is the technology that smartphone manufacturers like Samsung and Apple use in their flagship mobile devices.

All other technical specifications of a laptop PC are, for me, secondary to those three. It is good to have a lot of memory, a large and fast SSD, and a powerful processor (CPU), for example, but in my experience, if you have a modern laptop that is lightweight and has an excellent keyboard and display, it will also come with other specs that are more than enough for all everyday computing tasks. Things are a bit different if we are talking about a PC whose primary use will be gaming, for example. Then it is important to have a discrete graphics card (GPU), rather than only the integrated graphics built into the laptop. That feature, with the related added requirements on the rest of the technology, means that such laptops are usually pricier, and a desktop PC is in most cases a better choice for heavy-duty gaming than a laptop. But dedicated gaming laptops (with discrete graphics currently at the Nvidia Pascal architecture level – including the GTX 1050, 1060 and even 1080 types) are evolving and becoming ever more popular choices. Even though many such laptops are thick and heavy, for many gamers it is nice to be able to carry the “hulking monster” to a LAN party, an eSports event, or such. But gaming laptops are not your daily, thin-and-light work devices for basic tasks. They are overpowered for such uses (and drain their battery too fast), and, on the other hand, if a manufacturer tries to fit a powerful discrete graphics card into a slim, lightweight frame, there will generally be overheating problems once one really starts to put the system under heavy gaming loads. The overheated system will then start “throttling”, meaning that it automatically decreases its operating speed in order to cool down. These limitations will perhaps be eased with the next, “Volta” generation of GPU microarchitecture, making thin, light and very powerful laptops more viable. They will probably come with a high price, though.
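
The throttling behaviour itself is conceptually just a feedback loop between temperature and clock speed. A toy simulation of the idea (all constants invented for illustration; real firmware control loops are far subtler):

```python
# Toy simulation of thermal throttling: when the chip gets too hot, the
# clock drops so the system can cool down. All constants are invented.
TEMP_LIMIT_C = 95.0
BASE_CLOCK_GHZ = 1.6   # throttled speed
BOOST_CLOCK_GHZ = 3.8  # full speed

def next_clock(temp_c: float) -> float:
    """Choose the clock for the next time slice based on temperature."""
    return BASE_CLOCK_GHZ if temp_c >= TEMP_LIMIT_C else BOOST_CLOCK_GHZ

temp = 70.0
for step in range(10):
    clock = next_clock(temp)
    # Heat generated scales with clock speed; the cooler removes a fixed
    # amount of heat per step, so full boost cannot be sustained forever.
    temp += clock * 4.0 - 9.0
    print(f"step {step}: {clock:.1f} GHz at {temp:.1f} °C")
```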

Having said all that, I can highlight a few systems that I think are worth considering at this point in time – late March 2018.

To start from the basics, I think that most general users would benefit from taking a close look at Chromebook-type laptops. They are a bit different from the Windows/Mac type of personal computers that most people are familiar with, and have their own limitations, but also clear benefits. Chrome OS (the operating system by Google) is a stripped-down version of Linux, and provides a fast and reliable user experience, as the web-based, “thin-client” system does not slow down in the same way as a more complex operating system that needs to cope with all kinds of applications installed locally over the years. Chromebooks are fast and simple, and also secure in the sense that the operating system features auto-updating, runs code in a secure “sandbox”, and uses verified boot, where the initial boot code checks for any system compromises. The default file location in Chromebooks is a cloud service, which might turn some people away, but for a regular user it is mostly a good idea to have cloud storage: a disk crash or a lost computer does not lead to losing one’s files, as the cloud operates as an automatic backup.
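
The principle behind verified boot can be illustrated in a few lines: each early boot stage holds a known-good fingerprint of the next stage and refuses to hand over control if the fingerprint does not match. A toy sketch of the idea (real systems use signed images and a hardware root of trust, not an in-script constant):

```python
import hashlib

# Toy illustration of verified boot: read-only boot code stores a known-good
# hash of the next boot stage, and refuses to run a modified image.
kernel_image = b"pretend this is the OS kernel"
TRUSTED_HASH = hashlib.sha256(kernel_image).hexdigest()  # "baked into ROM"

def boot(image: bytes) -> None:
    if hashlib.sha256(image).hexdigest() != TRUSTED_HASH:
        raise RuntimeError("verified boot: image has been modified")
    print("hash matches, handing control to the OS")

boot(kernel_image)  # boots normally
try:
    boot(kernel_image + b" + malware")  # tampered image is rejected
except RuntimeError as error:
    print(error)
```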

ASUS Chromebook Flip (C302CA; photo © ASUS).

The ASUS Chromebook Flip (C302CA model) [see link] has been getting good reviews. I have not used this one personally, and it is on the expensive side for Chromebooks, but it has a nice design, it is rather light (1,18 kg / 2,6 pounds), and the keyboard and display are reportedly decent or even good. It has a touch screen and can run Android apps, which is becoming one of the key future directions for Chrome OS. As an alternative, consider the Samsung Chromebook Pro [see link], which apparently has a worse keyboard but features an active stylus, which makes it strong when used as a tablet device.

For premium business use, I’d recommend having a look at the classic Thinkpad line of laptops. The thin and light Thinkpad X1 Carbon (2018) [see link] now also comes with a touch-screen option (only in FHD/1080p resolution, though), and has a very good keyboard. It has recently been updated to 8th-generation Intel processors, which, as quad-core systems, provide a performance boost. For more touch-screen-oriented users, I recommend considering the Thinkpad X1 Yoga [see link] model. Both of these Lenovo offerings are quite expensive, but come with important business-use features, like (optional) 4G/LTE-A data connectivity. Wi-Fi is often unreliable, and going through the tethering process via a smartphone mobile hotspot is not optimal if you are rushing from meeting to meeting, or working on the road. The Yoga model also used to have a striking OLED display, but that is being discontinued in the X1 Yoga 3rd-generation (2018) models, replaced by a 14-inch “Dolby Vision HDR touchscreen” (maximum brightness 500 nits, 2,560 x 1,440 resolution). HDR is still an emerging technology in laptop displays (and elsewhere as well), but it promises a wider colour gamut – the set of available colours. That said, I am personally happy with the OLED in the 2017-model X1 Yoga that I mostly use for my daily work these days. The X1 Carbon is lighter (1,13 kg), but the X1 Yoga is not too heavy either (1,27 kg). Note, though, that the keyboard in the Yoga is not as good as in the Carbon.

Thinkpad X1 Yoga (image © Lenovo).

There are several interesting alternatives, all with their distinctive strengths (and weaknesses). I will mention just a few of them briefly:

  • The Dell XPS 13 (2018) [see link] line of ultraportable laptops, with their excellent “InfinityEdge” displays, has also been updated to 8th-gen quad-core processors, and is marketed as the “world’s smallest 13-inch laptop” due to its very thin bezels. At a weight of 1,21 kg (2,67 pounds), the XPS 13 is very compact, and some might even miss having slightly wider bezels for easier screen handling. The XPS does not offer a 4G/LTE module option, to my knowledge.
  • The ASUS Zenbook Pro (UX550) [see link] is a 15-inch laptop that is a bit heavier (at 1,8 kg), but it scales up to a 4K display and can come with a discrete GTX 1050 Ti graphics option. In return for being a bit thicker and heavier, the Zenbook Pro is reported to have a long battery life and rather capable graphics performance, with relatively minor throttling issues. It still has 7th-gen processors (in quad-core versions, though).
  • Nice, pretty lightweight 15-inch laptops come from Dell (XPS 15) [see link] and LG, for example – particularly the LG gram 15 [see link], which is apparently a very impressive device and weighs only 1,1 kg despite being a 15-inch laptop; it is a shame we cannot get it here in Finland, though.
  • Huawei Matebook X Pro (photo © Huawei).
  • As Apple has (to my eyes) ruined their excellent MacBook Pro line with a too-shallow keyboard, and by not providing any touch-screen options, people are free to hunt for MacBook-like experiences elsewhere. Chinese manufacturers are always fast to copy things, and the Huawei Matebook X Pro [see link] is an interesting example: it has a touch screen (3K LTPS display, 3000 x 2000 resolution with 260 PPI, 100 % colour space, 450 nits brightness), 8th-gen processors, discrete Nvidia MX150 graphics, a 57,4 Wh battery, a Dolby Atmos sound system, etc. This package weighs 1,33 kg. It is particularly nice to see them not copying Apple in its highly limited ports and connectivity – the Matebook X Pro has Thunderbolt/USB-C, but also the older USB-A, and a regular 3,5 mm headphone port. I am dubious about the quality of the keyboard, though, until I have tested it personally. And one can always be a bit paranoid about the underlying security of Chinese-made information technology; but then again, Western companies have not necessarily proved any better in that area. It is good to have more competition at the high end of laptops, as well.
  • Finally, one must also mention Microsoft, which sells its own Surface line of products; these have very good integration with the touch features of Windows 10, of course, and also generally come with displays, keyboards and touchpads that are among the very best. The Surface Book 2 [see link] is their most versatile and powerful device: there are both 15-inch and 13,5-inch models, both with quad-core processors, discrete graphics (up to a GTX 1060), and good battery life (advertised at up to 17 hours, though one can trust that real-life use times will be much shorter). The Book 2 is a two-in-one device with a detachable screen that can work independently as a tablet. However, this setup is heavier (1,6 kg for the 13,5-inch, 1,9 kg for the 15-inch model) than the Surface Laptop [see link], which does not work as a tablet but has a great touch screen and weighs less (c. 1,5 kg). The “surface” of this Surface Laptop is pleasant Alcantara, a cloth material.

MS Surface Laptop with Alcantara (image © Microsoft).

To sum up, there are many really good options in personal computers these days, and laptops in general have evolved in many important areas. Still, it is important to get hands-on experience before committing – particularly if one will be using the new workhorse intensively; this is a crucial tool decision, after all. And personal preference (and, of course, the available budget) really matters.

Tools for Trade

Lenovo X1 Yoga (2nd gen) in tablet mode.

The key research infrastructures these days include e.g. access to online publication databases, and the ability to communicate with colleagues (including such prosaic things as email, file sharing and real-time chat). While an astrophysicist relies on satellite data and a physicist on a particle accelerator, for example, research in the humanities and human sciences is less reliant on expensive technical infrastructures. Understanding how to do an interview, designing a reliable survey, or being able to carefully read, analyse and interpret human texts and expressions is often enough.

That said, there are tools that are useful for researchers of many kinds and fields. A solid reference database system is one (I use Zotero). In everyday meetings and in the field, note-taking is one of the key skills and practices. While most of us carry our trusty laptops everywhere, one can also do with a lightweight device, such as an iPad Pro; there are nice keyboard covers and precise active pens available for today’s tablet computers. When I type more, I usually pick up my trusty Logitech K810 (I have several of those). The Lenovo Yoga 510 that I have at home also has the kind of keyboard that I love: snappy and precise, yet light of touch and low-profile. It, too, is a two-in-one, convertible laptop, but a much better version from the same company is the X1 Yoga (2nd generation). That one is equipped with a built-in active pen, while also being flexible and powerful enough to run both utility software and contemporary games and VR applications – at least when linked with an eGPU system. For that, I use an Asus ROG XG Station 2, which connects to the X1 Yoga with a Thunderbolt 3 cable, thereby plugging into the graphics power of an NVIDIA GeForce GTX 1070. A system like this has the benefit that one can carry around a reasonably light and thin laptop, which scales up to workstation-class capabilities when plugged in at the desk.

ROG XG Station 2 with Thunderbolt 3.

One of the most useful research tools is actually a capable smartphone. For example, with a good mobile camera one can take photos as visual notes, photograph one’s handwritten notes, or shoot copies of projected presentation slides at seminars and conferences. Coupled with a fast 4G or Wi-Fi connection and automatic upload to a cloud service, the same photo notes appear almost immediately on the laptop as well, so that they can be filed into the right folder, or combined with typed observation notes and metadata. This is much faster than making a high-resolution video recording of the event; that kind of more robust documentation setup is necessary in certain experimental settings, focus-group interview sessions, collaborative innovation workshops, etc., but on many occasions written notes and mobile phone photos are just enough. I personally use both iPhone (8 Plus) and Android systems (Samsung Galaxy Note 4 and S7).

Writing is one of the key things academics do, and writing software is a research tool category of its own. For active-pen handwriting I use both Microsoft OneNote and Nebo by MyScript. Nebo is particularly good at real-time text recognition and the automatic conversion of drawn shapes into vector graphics. I link a video by them below:

My main note database is in Evernote, while online collaborative writing and planning is mostly done in Google Docs/Drive, and consortium project file sharing is done either in Dropbox or in Office 365.

Microsoft Word may be the gold standard of writing software for stand-alone documents, but its relative share has gone down radically in today’s distributed and collaborative work. And while MS Word might still have the best multilingual proofing tools, for example, the first draft might come from an online Google Document, and the final copy might end up in WordPress, to be published on some research project blog or website, or in a peer-reviewed online academic publication. Long, book-length projects are best handled in a dedicated writing environment such as Scrivener, but most collaborative book projects are best managed with a combination of different tools, together with cloud-based sharing and collaboration in services like Dropbox, Drive, or Office 365.

If you have not collaborated in this kind of environment, have a look at the tutorials; here is just a short video introduction by Google to sharing in Docs:

What are your favourite research and writing tools?

Photography and artificial intelligence

Google Clips camera (image copyright: Google).

The main media attention in applications of AI – artificial intelligence and machine learning – has been on such areas as smart traffic, autonomous cars, recommendation algorithms, and expert systems for all kinds of professional work. There are, however, also very interesting developments currently taking place around photography.

There are multiple areas where AI is augmenting or transforming photography. One is in how the software tools that professional and amateur photographers use are advancing. It is getting ever easier to select complex areas in photos, for example, and to apply all kinds of useful, interesting or creative effects and functions to them (see e.g. what Adobe writes about this at: https://blogs.adobe.com/conversations/2017/10/primer-on-artificial-intelligence.html). The technical quality of photos is improving, as AI and advanced algorithmic techniques are applied e.g. to enhancing the level of detail in digital photos. Even a blurry, low-pixel file can be augmented with AI to look like a very realistic, high-resolution photo of the subject (on this, see: https://petapixel.com/2017/11/01/photo-enhancement-starting-get-crazy/).
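
As a taste of what such detail enhancement looks like in practice, OpenCV’s contrib build ships a deep-learning super-resolution module that can upscale an image with a pretrained network. A minimal sketch (the model weights must be downloaded separately; the file names here are assumptions for illustration):

```python
import cv2

# AI-based detail enhancement: 4x super-resolution with a pretrained EDSR
# network via OpenCV's dnn_superres module (ships in opencv-contrib-python).
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")  # pretrained weights, obtained separately
sr.setModel("edsr", 4)      # network name and upscaling factor

low_res = cv2.imread("blurry_small.jpg")  # hypothetical input photo
high_res = sr.upsample(low_res)           # network reconstructs fine detail
cv2.imwrite("enhanced.jpg", high_res)
```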

But the applications of AI do not stop there. Google and other developers are experimenting with “AI-augmented cameras” that can recognize persons and events taking place, and take action, shooting photos and videos of the moments and subjects that the AI, rather than the human photographer, deems worthy (see e.g. Google Clips: https://www.theverge.com/2017/10/4/16405200/google-clips-camera-ai-photos-video-hands-on-wi-fi-direct). This development can go in multiple directions. There are already smart surveillance cameras, for example, that learn to recognize the family members and to differentiate them from unknown persons entering the house. Such a camera, combined with a conversant backend service, can also serve its human users in their various information needs: telling whether the kids have come home in time, or keeping track of any out-of-the-ordinary events that the camera and algorithms might have noticed. The video below features Lighthouse AI, which combines a smart security camera with such an “interactive assistant”:

In the domain of amateur (and also professional) photographic practice, AI also means many fundamental changes. There are already add-on tools like Arsenal, the “smart camera assistant”, which is based on the idea that manually tweaking all the complex settings of a modern DSLR is not that inspiring, or even necessary, for many users, and that a cloud-based intelligence could handle many challenging photography situations with better success than a fumbling regular user (see their Kickstarter video at: https://www.youtube.com/watch?v=mmfGeaBX-0Q). Such algorithms are already being built into the cameras of flagship smartphones (see e.g. the AI-enhanced camera functionalities in the Huawei Mate 10 and in Google’s Pixel 2, which use AI to produce sharper photos with better image stabilization and better-optimized dynamic range). Such smartphones, like Apple’s iPhone X, typically come with a dedicated chip for AI/machine-learning operations, like Apple’s “Neural Engine”. (See e.g. https://www.wired.com/story/apples-neural-engine-infuses-the-iphone-with-ai-smarts/.)

Many of these developments point towards a future age of “computational photography”, where algorithms play as crucial a role in the creation of visual representations as optics do today (see: https://en.wikipedia.org/wiki/Computational_photography). It is interesting, for example, to think about situations where photographic presentations are constructed from data derived from a myriad of different kinds of optical sensors, scattered in wearable technologies and in the environment, which will try their best to match the mood, tone or message set by the human “creative director”, who is no longer employed as the actual camera operator. It is also becoming increasingly complex to define the authorship and ownership of photos and, most importantly, to handle the privacy and data-processing issues related to visual and photographic materials. – We are living in interesting times…

Cognitive engineering of mixed reality

 

iOS 11: user-adaptable control centre, with application and function shortcuts in the lock screen.

In the 1970s and 1980s the concept of ‘cognitive engineering’ was used in industry labs to describe an approach that tried to apply the lessons of cognitive science to the design and engineering fields. There were people like Donald A. Norman, who wanted to devise systems that are not only easy or powerful to use, but most importantly pleasant, and even fun.

One of the classic challenges of making technology suit humans is that humans change and evolve, and differ greatly in motivations and abilities, while technological systems tend to stay put. Machines are created in a certain manner and are mostly locked within the strict walls of the material and functional specifications they are based on, and (if correctly manufactured) they operate reliably within those parameters. Humans, however, are fallible and changeable, but also capable of learning.

In his 1986 article, Norman uses the example of a novice and an experienced sailor, who differ greatly in their ability to take information from a compass and translate it into the desired boat movement (through the use of the tiller and rudder). There have been significant advances in multiple industries in making increasingly clear and simple systems that are easy for almost anyone to use, and this in turn has translated into the increasingly ubiquitous or pervasive application of information and communication technologies in all areas of life. The televisions in our living rooms are computing systems (often equipped with apps of various kinds), our cars are filled with online-connected computers and assistive technologies, and in our pockets we carry powerful terminals into information, entertainment, and the ebb and flow of social networks.

There is, however, also an alternative interpretation of what ‘cognitive engineering’ could mean in this dawning era of pervasive computing and mixed reality. Rather than being limited to engineering products that attempt to adapt to the innate operations, tendencies and limitations of human cognition and psychology, engineering systems that are actively used by large numbers of people also means designing and affecting the spaces within which our cognitive and learning processes will then evolve, fit in, and adapt. Cognitive engineering does not only mean designing and manufacturing certain kinds of machines; it also translates into an impact made on the human element of this dialogical relationship.

Graeme Kirkpatrick (2013) has written about the ‘streamlined self’ of the gamer. There are social theorists who argue that living in a society based on computers and information networks produces new difficulties for people. The social, cultural, technological and economic transitions linked with life in late-modern, capitalist societies involve movement from project to new project, and an associated necessity for constant re-training. There is not necessarily any “connecting theme” in life, or even a sense of personal progression. Following Boltanski and Chiapello (2005), Kirkpatrick analyses the subjective condition where life in contradiction – between the exigency of adaptation and the demand for authenticity – means that the rational course in this kind of systemic reality is to “focus on playing the game well today”. As Kirkpatrick writes, “Playing well means maintaining popularity levels on Facebook, or establishing new connections on LinkedIn, while being no less intensely focused on the details of the project I am currently engaged in. It is permissible to enjoy the work but necessary to appear to be enjoying it and to share this feeling with other involved parties. That is the key to success in the game.” (Kirkpatrick 2013, 25.)

One of the key theoretical trajectories of cognitive science has focused on what has been called “distributed cognition”: our thinking is not only situated within our individual brains, but is in complex and important ways also embodied and situated within our environments and our artefacts, in social, cultural and technological ways. Gaming is one example of an activity in which people can be witnessed constructing a sense of self and its functional parameters out of resources that they are familiar with, and which they can freely exploit and explore in their everyday lives. Such technologically framed play is also increasingly common in working life, and our schools can similarly be approached as complex, designed and evolving systems constituted by institutions, (implicit as well as explicit) social rules, and several layers of historically sedimented technologies.

Beyond all the hype of new commercial technologies related to virtual, augmented and mixed reality of various kinds lies the fact that we have always already lived within a complex substrate of mixed realities: a mixture of ideas, values, myths and concepts of various kinds, intermixed and communicated within different physical and immaterial expressive forms and media. Cognitive engineering of mixed reality in this more comprehensive sense involves engagement in dialogical cycles of design, analysis and interpretation, where the practices of adapting and adopting technology also shape the forms in which these technologies are realized. Within the context of game studies, Kirkpatrick (2013, 27) formulates this as follows: “What we see here, then, is an interplay between the social imaginary of the networked society, with its distinctive limitations, and the development of gaming as a practice partly in response to those limitations. […] Ironically, gaming practices are a key driver for the development of the very situation that produces the need for recuperation.” There are multiple other areas of technology-intertwined life where similar double-bind relationships are currently surfacing: in the social use of mobile media, in organisational ICT, in so-called smart homes, and in smart traffic design and user culture processes. – A summary? We live in interesting times.

References:
– Boltanski, Luc, and Eve Chiapello (2005) The New Spirit of Capitalism. London & New York: Verso.
– Kirkpatrick, Graeme (2013) Computer Games and the Social Imaginary. Cambridge: Polity.
– Norman, Donald A. (1986) Cognitive engineering. In Norman, Donald A. & Draper, Stephen W. (eds.) User Centered System Design, 31–61. Hillsdale, NJ: Lawrence Erlbaum.