The Rise and Fall and Rise of MS Word and the Notepad

MS Word installation floppy. (Image: Wikipedia.)

Note-taking and writing are interesting activities. For example, it is interesting to follow how some people turn physical notepads into veritable art projects: scrapbooks, colourful pages filled with intermixing text, doodles, mind maps and larger illustrations. Usually these artistic people like to work with real pens (or even paintbrushes) on real paper pads.

Then there was a time when Microsoft Office arrived on personal computers, and typing with a clacky keyboard into an MS Word window started to dominate intellectually productive work. (I am old enough to remember the DOS times with WordPerfect, and my first Finnish-language word processor program – “Sanatar” – that I long used on my Commodore 64 – which, by the way, actually had a rather nice keyboard for typing text.)

WordPerfect 5.1 screen. (Image: Wikipedia.)

It is also interesting to note how some people still nostalgically look back to e.g. Word 6.0 (1993) or Word 2007, which was still a pretty straightforward tool in its focus, while introducing such modern elements as the adaptive “Ribbon” toolbars (which many people hated).

The versatility and power of Word as a multi-purpose tool has been both its strength and its main weakness. There are hundreds of operations one can carry out with MS Word, including programmable macros, printing out massive amounts of form letters or envelopes with addresses drawn from a separate data file (“Mail Merge”), and even editing and typesetting entire books (which I have personally done, even while I do not recommend it to anyone – Word was not originally designed as a desktop publishing program, even if its WYSIWYG print layout mode can be extended in that direction).

Microsoft Word 6.0, Mac version. (Image: user “MR” at https://www.macintoshrepository.org/851-microsoft-word-6)

These days, the free, open-source LibreOffice is perhaps the closest one can get to the look, interface and feature set of the “classic” Microsoft Word. It is a 2010 fork of OpenOffice.org, the earlier open-source office software suite.

Generally speaking, there appear to be at least three main directions that individual text editing programs focus on. One is writing as note-taking. This is situational and generally short form. Notes are practical, information-filled prose pieces that are often intended to be used as part of some job or project. Meeting notes, notes that summarise books one has read, or data one has gathered (notes on index cards) are some examples.

The second main type of text program focuses on writing as content production. This is something that an author working on a novel does. Screenwriters, journalists, podcast producers and many other so-called ‘creatives’ also need dedicated writing software in this sense.

The third category I already briefly mentioned: text editing as publication production. One can easily use any version of MS Word to produce a classic-style software manual, for example. It can handle multiple chapters, has tools such as section breaks that allow pagination to restart or re-format in different sections of longer documents, and it also features tools for adding footnotes and endnotes and for creating an index for the final, book-length publication. But while it provides a WYSIWYG-style print layout of pages, it does not offer the really robust page layout features that professional desktop publishing tools focus on. The fine art of tweaking font kerning (the spacing of proportional fonts), the very exact positioning of graphic elements on publication pages – all that is best left to tools such as PageMaker, QuarkXPress, InDesign (or LaTeX, if that is your cup of tea).

As these three practical fields are rather different, it is obvious that a tool that excels in one is probably not optimal for another. One would not want to use heavy-duty professional publication software (e.g. InDesign) to quickly draft meeting notes, for example. The weight and complexity of the tool hinders, rather than augments, the task.

MS Word (originally published in 1983) achieved a dominant position in word processing in the early 1990s. During the 1980s there were dozens of different, competing word processing tools (eagerly competing for the place of the earlier, mechanical and electric typewriters), but Microsoft was early to enter the graphical interface era, first publishing Word for Apple Macintosh computers (1985), then for Microsoft Windows (1989). The popularity and even de facto “industry standard” position of Word – as part of the MS Office suite – is due to several factors, but for many kinds of offices, professions and purposes, the versatility of MS Word was a good match. As the .doc file format, feature set and interface of Office and Word became the standard, it was logical for people to use it at home as well. The pricing might have been an issue, though (I read somewhere that a single-user licence of “MS Office 2000 Premium” at one point had an asking price of $800).

There have been counter-reactions, and multiple alternatives offered to the dominance of MS Word. I already mentioned OpenOffice and LibreOffice as important, leaner, free and open alternatives to the commercial behemoth. An interesting development is related to the rise of the Apple iPad as a popular mobile writing environment. Somewhat as the Mac and Windows PCs heralded the transformation from the earlier, command-line era, the iPad shows signs of the (admittedly yet somewhat more limited) transformative potential of a “post-PC” era. At its best, the iPad is a highly compact and intuitive, multipurpose tool that is optimised for touch screens and simplified mobile software applications – the “apps”.

There are writing tools designed for the iPad that some people argue are better than MS Word for those who want to focus on writing in the second sense – as content production. The main argument here is that “less is better”: as these writing apps are designed just for writing, there is no danger of losing time by starting to fiddle with font settings or page layouts, for example. The iPad is also arguably a better “distraction-free” writing environment, as the mobile device is designed for a single app filling the small screen entirely – while Mac and Windows, on the other hand, boast stronger multitasking capabilities, which might lead to cluttered desktops filled with multiple browser windows, other programs and other distracting elements.

Some examples of this style of dedicated writers’ tools include Scrivener (by a company called Literature and Latte, originally published for Mac in 2007), which is optimized for handling long manuscripts and related writing processes. It has a drafting and note-handling area (with the “corkboard” metaphor), an outliner and an editor, making it also a sort of project-management tool for writers.

Scrivener. (Image: Literature and Latte.)

Another popular writing and “text project management” focused app is Ulysses (by a small German company of the same name). The initiative and main emphasis in the development of these kinds of “tools for creatives” has clearly been on the side of the Apple, rather than the Microsoft (or Google, or Linux) ecosystems. A typical writing app of this kind automatically syncs via iCloud, making the same text seamlessly available on the iPad, iPhone and Mac of the same (Apple) user.

In emphasising “distraction-free writing”, many tools of this kind feature clean, empty interfaces where only the currently created text is allowed to appear. Some have specific “focus modes” that highlight the current paragraph or sentence, and dim everything else. Popular apps of this kind include iA Writer and Bear. While there are even simpler tools for writing – Windows Notepad and Apple Notes most notably (sic) – these newer writing apps typically include essential text formatting with Markdown, a simple code system that allows e.g. the application of italic formatting by surrounding an expression with *asterisk* marks (and bold with **double asterisks**).
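To sketch how this looks in practice (the exact flavour of syntax each app supports varies slightly), a short Markdown-formatted note might read:

```markdown
# Meeting notes, Tuesday

Agreed to move the *draft deadline* to **next Friday**.

- Review chapter 2
- Collect reader feedback
- Archive old notes
```

The app renders the marks as formatting, but the underlying file remains plain text – which also keeps the notes portable between tools and readable decades later.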

iA Writer. (Image: iA Inc.)

The big question, of course, is whether such (sometimes rather expensive and/or subscription-based) writing apps are really necessary. It is perfectly possible to create a distraction-free writing environment on a common Windows PC: one just closes all the other windows. And if the multiple menus of MS Word distract, it is possible to hide the menus while writing. Admittedly, the temptation to stray into exploring other areas and functions is still there, but then again, even an iPad contains multiple apps and can be used in a multitasking manner (even if not as easily as in a desktop PC environment, like a Mac or Windows computer). There are also ergonomic issues: a full desktop computer allows the large, standalone screen to be adjusted to a height and angle that is much better (or healthier) for longer writing sessions than the small screen of an iPad (or even a 13”/15” laptop computer), particularly if one tries to balance the mobile device while lying on a sofa or squeezing it into a tiny cafeteria table corner while writing. The keyboards of desktop computers also typically have better tactile and ergonomic characteristics than the virtual, on-screen keyboards, or the add-on external keyboards used with iPad-style devices. Though, with some search and experimentation, one should be able to find some rather decent solutions that work in mobile contexts too (this text is written using a Logitech “Slim Combo” keyboard cover, attached to a 10.5” iPad Pro).

For note-taking workflows, neither a word processor nor a distraction-free writing app is optimal. The leading solutions designed for this purpose include OneNote by Microsoft, and Evernote. Both are available for multiple platforms and ecosystems, and both support text as well as rich media content, browser capture, categorisation, tagging and powerful search functions.

I have used – and am still using – all of the above-mentioned alternatives at various times and for various purposes. As years, decades and device generations have passed, archiving and access have become an increasingly important criterion. I have thousands of notes in OneNote and Evernote, hundreds of text snippets in iA Writer and in all kinds of other writing tools, often synchronized into iCloud, Dropbox, OneDrive or some other such service. Most importantly, in our Gamelab, most of our collaborative research article writing happens in Google Docs/Drive, which is still the clearest, simplest and most efficient tool for such real-time collaboration. The downside of this happily polyphonic reality is that when I need to find something specific in this jungle of text and data, it is often a difficult task involving searches across multiple tools, devices and online services.

In the end, what I mostly use today is a combination of MS Word, Notepad (or, these days, Sublime Text 3) and Dropbox. I have 300,000+ files in my Dropbox archives, and the cross-platform synchronization, version-controlled backups and two-factor-authenticated security features are something that I have grown to rely on. When I make my projects into file folders that propagate through the Dropbox system, and use either plain text or MS Word (rich text), plus standard image file types (though often also PDFs) in these folders, it is pretty easy to find my text and data, and continue working on it, where and when needed. Text editing works equally well on a personal computer, an iPad and even a smartphone. (The free, browser-based MS Word for the web, and the solid mobile app versions of MS Word, help too.) Sharing and collaboration require some thought in each individual case, though.

Dropbox. (Image: Dropbox, Inc.)

In my workflow, blog writing is perhaps the main exception to the above. These days, I like writing directly in the WordPress app or in their online editor. The experience is pretty close to the “distraction-free” style of writing tools, and as WordPress saves drafts to their online servers, I need not worry about a local app crash or device failure. But when I write with MS Word, the same is true: it either auto-saves in real time into OneDrive (via the O365 we use at work), or my local PC projects get synced into the Dropbox cloud as soon as I press Ctrl-S. And I keep pressing that key combination every five seconds or so – a habit that comes instinctively, after decades of work with earlier versions of MS Word for Windows, which could crash and take all of your hard-worked text with it at any minute.

So, happy 36th anniversary, MS Word.

Life with Photography: Then and Now

I have kept a diary, too, but I think that the best record of life and times comes from the photographs taken over the years. Most of my last-century (pre-2000s) photos are collected in traditional photo albums: I used to love the craft of making photo collages, cutting and combining pieces of photographs, written text and various found materials, such as travel tickets or brochure pieces, into travel photo albums. Some albums were more experimental: in pre-digital times it was difficult to know whether a shot was technically successful or not, and as I have always mostly worked in colour rather than black-and-white, I used to order the film rolls developed and every frame printed, without seeing the final outcomes. With some out-of-focus, blurred or plain random, accidental shots included in every film spool, I had plenty of material to build collages that were focused on play with colour, the dynamics of composition or some visual motif. This was fun stuff, and while one can certainly do this (and more) e.g. with Photoshop and digital photos, there is something in cutting and combining physical photos that is not the same as a digital collage.

The first camera of my own was a Chinon CE-4, a budget-class Japanese film camera from the turn of the 1970s/1980s. It served me well over many years, with its manual and “semi-automatic” (aperture-priority) exposure system and its support for easy double exposures.

Chinon CE-4 (credit:
https://www.flickr.com/photos/pwiwe/463041799/in/pool-camerawiki/ ).

I started transitioning to digital photography first by scanning paper photos and slides into digital versions that could then be used for editing and publishing. Probably among my earliest actual digital cameras was the HP PhotoSmart 318, a cheap and almost toy-like device with a 2.4-megapixel resolution, 8 MB of internal flash memory (plus support for CompactFlash cards), a fixed f/2.8 lens and TTL contrast-detection autofocus. I think I was shooting occasionally with this camera already in 2001, at least.

A few years after that I started to use digital photography a bit more, at least when travelling. I remember getting my first Canon cameras for this purpose. I owned at least a Canon Digital IXUS v3 – I was using this already at the first DiGRA conference in Utrecht, in November 2003. Even while still clearly a “point-and-shoot” style (compact) camera, this Canon had a metal construction, and the photos it produced were a clear step up from the plastic HP device. I started to become a believer: the future was in digital photography.

Canon Digital IXUS v3 (credit:
https://fi.m.wikipedia.org/wiki/Tiedosto:Canon_Digital_Ixus_V3.jpg ).

After some saving, I finally invested in my first digital “system camera” (a DSLR) in 2005. I remember taking photos in the warm Midsummer night that year with the new Canon EOS 350D, and how magical it felt. The 8.0-megapixel CMOS image sensor and the DIGIC II signal processing and control unit (a single-chip system), coupled with some decent Canon lenses, meant that it was possible to experiment with multiple shooting modes and get finely detailed and nuanced night and nature photos with it. This was also the time when I both built my own (HTML-based) online and offline “digital photo albums” and joined the first digital photo community services, such as Flickr.

Canon EOS 550D (credit:
https://www.canon.fi/for_home/product_finder/cameras/digital_slr/eos_550d/ ).

It was five years later that I upgraded my Canon system again, this time to an EOS 550D (“Rebel T2i” in the US, “Kiss X4” in Japan). This again meant a considerable leap in image quality and in features that relate to the speed, “intelligence” and convenience of shooting photos, as well as to the processing options that are available in-camera. The optical characteristics of cameras as such have not radically changed, and there are people who consider some vintage Zeiss, Nikkor or Leica camera lenses works of art. The benefits of the 550D over the 350D for me were mostly related to the higher-resolution sensor (18.0 megapixels this time) and the ways in which the DIGIC 4 processor reduced noise, provided much higher speeds, and even 1080p video (with live view and external microphone input).

Today, in 2019, I still take the Canon EOS 550D with me to any event or travel where I want to get the best quality photographs. This is more due to the lenses than the actual camera body, though. My two current smartphones – the Huawei Mate 20 Pro and the iPhone 8 Plus – both have cameras with arguably better sensors and much more capable processors than this aging, entry-level “system camera”. The iPhone has dual 12.0-megapixel sensors (f/1.8, 28mm/wide, with optical image stabilization; f/2.8, 57mm/telephoto), both accompanied by PDAF (a fast autofocus technology based on phase detection). The optics in the Huawei are developed in collaboration with Leica and come as a seamless combination of three (!) cameras: the first has a very large 40.0-megapixel sensor (f/1.8, 27mm/wide), the second has 20.0 megapixels (f/2.2, 16mm/ultrawide), and the third 8.0 megapixels (f/2.4, 80mm/telephoto). It is possible to use both optical and digital zoom capabilities in the Huawei, make use of efficient optical image stabilization, plus a hybrid technology involving phase detection as well as laser autofocus (a tiny laser transmitter sends a beam to the subject, and with the received information the processor is capable of calculating and adjusting for the correct focus). Huawei also utilizes advanced AI algorithms and its powerful Kirin 980 processor (with two “Neural Processing Units”, NPUs) to optimize the camera settings and quickly apply some in-camera post-processing to produce “desirable” outcomes. According to available information, the Huawei Mate 20 Pro can process and recognize “4,500 images per minute and is able to differentiate between 5,000 different kinds of objects and 1,500 different photography scenarios across 25 categories” (whatever those are).

Huawei Mate 20 Pro, with its three cameras (credit: Frans Mäyrä).

But with all that computing power, today’s smartphones are not capable (not yet, at least) of outplaying the pure optical benefits available to system cameras. This is not so crucial when documenting a birthday party, for example, as the lenses in smartphones are perfectly capable in short-distance and wide-angle situations. Proper portraits are a somewhat borderline case today: a high-quality system camera lens is capable of “separating” the person from the background and blurring the background (creating the beautiful “bokeh” effect). But powerful smartphones like the iPhone and Huawei mentioned above effectively come with an AI-assisted Photoshop built into them, and can therefore detect the key object, separate it, and blur the background with algorithms. The results can be rather good (good enough, for many users and use cases), but at the same time it must be said that when a professional photographer aims for something that can be enlarged, printed full-page in a magazine, or otherwise used in a demanding context, a good lens attached to a system camera will prevail. This relates to basic optical laws: the aperture (the hole where the light comes in) can be much larger in such camera lenses, providing more information for the image sensor, and the focal length longer – and the sensor itself can also be much larger, which benefits e.g. fast-moving subjects (sports, animal photography) and low-light conditions. With several small lenses and sensors, the future “smart cameras” can probably provide an ever-improving challenge to more traditional photography equipment, combining and processing data and filling in information derived from machine learning, but a good lens coupled with a system camera can help create unique pictures in a more traditional manner. Both are needed, and both have a future in photography cultures, I think.
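The sensor-size effect can be put into rough numbers with the common “equivalent aperture” rule of thumb: multiply the f-number by the crop factor to compare depth of field across sensor sizes. A minimal sketch (the sensor diagonals below are approximate figures chosen for illustration, not official specifications of any particular phone or camera):

```python
# Back-of-the-envelope: why a big sensor "separates" a subject from
# the background better than a small smartphone sensor.
# Rule of thumb: for depth-of-field comparisons, multiply the f-number
# by the crop factor; a larger "equivalent aperture" number means a
# deeper depth of field, i.e. less natural background blur ("bokeh").

FULL_FRAME_DIAGONAL_MM = 43.3  # diagonal of a 36 x 24 mm sensor

def crop_factor(sensor_diagonal_mm: float) -> float:
    """How many times smaller the sensor diagonal is than full frame."""
    return FULL_FRAME_DIAGONAL_MM / sensor_diagonal_mm

def equivalent_aperture(f_number: float, sensor_diagonal_mm: float) -> float:
    """Full-frame-equivalent f-number, for depth-of-field comparison."""
    return f_number * crop_factor(sensor_diagonal_mm)

# Illustrative (approximate) sensor diagonals:
APS_C_MM = 26.8   # Canon APS-C class, as in the EOS 550D
PHONE_MM = 7.3    # a fairly large smartphone main sensor

if __name__ == "__main__":
    # The same nominal f/1.8 lens behaves very differently:
    print(round(equivalent_aperture(1.8, PHONE_MM), 1))   # phone: ~f/10.7
    print(round(equivalent_aperture(1.8, APS_C_MM), 1))   # APS-C: ~f/2.9
```

With these illustrative numbers, an f/1.8 phone lens produces roughly the background blur of an f/10-class lens on full frame, which is exactly why phones resort to algorithmic “portrait modes” to fake the bokeh that a system camera gets optically.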

The main everyday benefit of e.g. the Huawei Mate 20 Pro versus an old-school DSLR such as the Canon EOS 550D is portability. Few people go to school or work with a DSLR hanging from their neck, but a pocket-size camera can always travel with you – and be available when that unique situation, light condition or rare bird/butterfly presents itself. With camera technologies improving, system cameras are also getting smaller and lighter, though. Many professionals still prefer rather large and heavy camera bodies, as the big “grip” and solid buttons/controls provide better ergonomics, and the heavy body is also a proper counterbalance for the large and heavy telephoto lenses that many serious nature or sports photographers need for their work, for example. That said, I am currently thinking that my next system camera will probably no longer be based on the traditional SLR (single-lens reflex) architecture – which, by the way, is already over three hundred years old, if the first reflex-mirror “camera obscura” systems are taken into account. Mirrorless interchangeable-lens camera systems maintain the component-based architecture of body plus lenses, but eliminate the moving mirror and reflective prisms of SLR systems, and use electronic viewfinders instead.

I still have my homework to do regarding the differences in how various mirrorless systems are implemented, but it also looks to me that there has been a rather rapid period of technical R&D in this area recently, with Sony in particular leading the way, and the big camera manufacturers like Canon and Nikon now following, releasing their own mirrorless solutions. There is not yet quite as much variety to choose from for amateur, small-budget photographers such as myself, with many initial models released in the upper, serious-enthusiast/professional price range of multiple thousands. But I’d guess that sensible budget models will follow next, and I am interested to see if it is possible to move into a new decade with a light yet powerful system that would combine some of the best aspects from the history of photography with the opportunities opened up by the new computing technologies.

Sony a6000, a small mirrorless system camera body announced in 2014 (credit: https://en.wikipedia.org/wiki/Sony_α6000#/media/File:Sony_Alpha_ILCE-6000_APS-C-frame_camera_no_body_cap-Crop.jpeg).

Recommended laptops, March 2018

Every now and then I am asked to recommend what PC to buy. The great variety in individual needs and preferences makes this a thankless task – it is dangerous to follow someone else’s advice without doing your own homework and hands-on testing. But, that said, here are some of my current favourites, based on my individual and highly idiosyncratic preferences:

My key criterion is to start from a laptop, rather than a desktop PC: laptops are powerful enough for almost anything, and they provide more versatility. When used at the office or a home desk, one can plug in an external keyboard, mouse/trackball and display, and use local network resources such as printers and file servers. The Thunderbolt interface has made it easy to have all those things plugged in via a single connector, so I’d recommend checking that the laptop comes with Thunderbolt (it uses the USB-C type connector, but not all USB-C ports are Thunderbolt ports).

When we talk about laptops, my first key criterion would be to look at the weight and get as light a device as possible, considering two other key criteria: an excellent keyboard and a good touch display.

The reason for those priorities is that I personally carry the laptop with me pretty much always, and weight is then a really important factor. If the thing is heavy, the temptation is just to leave it where it sits, rather than pick it up while rushing into a quick meeting. And when one then needs to make notes or check some information in the meeting, one is at the mercy of a smartphone picked from the pocket, and the ergonomics are much worse in that situation. Ergonomics relates to the points about an excellent keyboard and display alike. The keyboard is to me the main interface, since I write a lot. A bad or even average keyboard will make things painful in the long run, if you write hours and hours daily. Prioritising the keyboard is something that your hands, health and general life satisfaction will thank you for, in the long run.

A touch display is something that will probably divide opinions, even among technology experts. In the Apple Macintosh ecosystem of computers there is no touch-screen computer available: that modality is reserved for the iPad and iPhone mobile devices. I think that a touch screen on a laptop is something that, once learned, one cannot go without. I find myself trying to scroll and swipe my non-touchscreen devices all the time nowadays. Windows 10 as an operating system currently has the best support for touch-screen gestures, but there are devices in the Linux and Chromebook ecosystems that also support touch. A touch-screen display makes handling applications and files easier, and zooming in and out of text and images a snap. Moving hands away from the keyboard and touchpad every now and then to the edges of the screen is probably also good for ergonomics. However, trying to keep one’s hands on the laptop screen for extended times is not a good idea, as it is straining. A touch screen is not absolutely needed, but it is an excellent extra. It is important, however, that the screen is bright, sharp, and has wide viewing angles; it is really frustrating to work on dim, washed-out displays, particularly in brightly lit conditions. You have to squint, and end up with a terrible headache at the end of the day. In LCD screens, look for IPS (in-plane switching) technology, or for OLED screens. The latter, however, are still rather rare and expensive in laptops. But OLED has the best contrast, and it is the technology that smartphone manufacturers like Samsung and Apple use in their flagship mobile devices.

All other technical specifications in a laptop PC are, for me, secondary to those three. It is good to have a lot of memory, a large and fast SSD, and a powerful processor (CPU), for example, but in my experience, if you have a modern laptop that is lightweight and has an excellent keyboard and display, it will also come with other specs that are more than enough for all everyday computing tasks. Things are a bit different if we are talking about a PC that will have gaming as its primary use, for example. Then it would be important to have a discrete graphics card (GPU) rather than only the built-in, integrated graphics of the laptop. That feature, with its related added requirements for other components, means that such laptops are usually pricier, and a desktop PC is in most cases a better choice for heavy-duty gaming than a laptop. But dedicated gaming laptops (with discrete graphics currently at the Nvidia Pascal architecture level – including GTX 1050, 1060 and even 1080 types) are evolving, and becoming more popular choices all the time. Even while many such laptops are thick and heavy, for many gamers it is nice to be able to carry the “hulking monster” to a LAN party, an eSports event, or such. But gaming laptops are not your daily, thin and light work devices for basic tasks. They are overpowered for such uses (and consume their battery too fast), and – on the other hand – if a manufacturer tries fitting a powerful discrete graphics card into a slim, lightweight frame, there will generally be overheating problems if one really starts to put the system under heavy gaming loads. The overheated system will then start “throttling”, which means that it automatically decreases its operating speed in order to cool down. These limitations will perhaps be eased with the next, “Volta” generation of GPU microarchitecture, making thin, light and very powerful laptop computers more viable. They will probably come with a high price, though.

Having said all that, I can highlight a few systems that I think are worthy of consideration at this point in time – late March, 2018.

To start from the basics, I think that most general users would benefit from having a close look at Chromebook-type laptop computers. They are a bit different from the Windows/Mac type of personal computers that most people are familiar with, and have their own limitations, but also clear benefits. ChromeOS (the operating system by Google) is a stripped-down version of Linux, and provides a fast and reliable user experience, as the web-based, “thin client” system does not slow down in the same way as a more complex operating system that needs to cope with all kinds of applications installed locally over the years. Chromebooks are fast and simple, and also secure in the sense that the operating system features auto-updating, runs code in a secure “sandbox”, and uses verified boot, where the initial boot code checks for any system compromises. The default file location in Chromebooks is a cloud service, which might turn some people away, but for a regular user it is mostly a good idea to have cloud storage: a disk crash or a lost computer does not lead to losing one’s files, as the cloud operates as an automatic backup.

ASUS Chromebook Flip (C302CA)
ASUS Chromebook Flip (C302CA; photo © ASUS).

The ASUS Chromebook Flip (C302CA model) [see link] has been getting good reviews. I have not used this one personally, and it is on the expensive side of Chromebooks, but it has a nice design, it is rather light (1,18 kg / 2,6 pounds), and the keyboard and display are reportedly decent or even good. It has a touch screen, and can run Android apps, which is becoming one of the key future directions in which ChromeOS is heading. As an alternative, consider the Samsung Chromebook Pro [see link], which apparently has a worse keyboard, but features an active stylus, which makes it strong when used as a tablet device.

For premium business use, I’d recommend having a look at the classic Thinkpad line of laptop computers. The thin and light Thinkpad X1 Carbon (2018) [see link] now also comes with a touch-screen option (only in FHD/1080p resolution, though), and has a very good keyboard. It has recently been updated to 8th-generation Intel processors, which as quad-core systems provide a performance boost. For more touch-screen-oriented users, I recommend considering the Thinkpad X1 Yoga [see link] model. Both of these Lenovo offerings are quite expensive, but come with important business-use features, like (optional) 4G/LTE-A data card connectivity. Wi-Fi is often unreliable, and going through the tethering process via a smartphone mobile hotspot is not optimal if you are rushing from meeting to meeting, or working while on the road. The Yoga model also used to have a striking OLED display, but that is being discontinued in the X1 Yoga 3rd generation (2018) models; it is replaced by a 14-inch “Dolby Vision HDR touchscreen” (max brightness of 500 nits, 2,560 x 1,440 resolution). HDR is still an emerging technology in laptop displays (and elsewhere as well), but it promises a wider colour gamut – the set of available colours. Though, I am personally happy with the OLED in the 2017-model X1 Yoga I am mostly using for daily work these days. The X1 Carbon is lighter (1,13 kg), but the X1 Yoga is not too heavy either (1,27 kg). Note, though, that the keyboard in the Yoga is not as good as in the Carbon.

Thinkpad X1 Yoga (image © Lenovo).

There are several interesting alternatives, all with their distinctive strengths (and weaknesses). I will mention these just briefly:

  • The Dell XPS 13 (2018) [see link] line of ultraportable laptops, with their excellent “InfinityEdge” displays, has also been updated to 8th-gen quad-core processors, and is marketed as the “world’s smallest 13-inch laptop” due to its very thin bezels. With a weight of 1,21 kg (2,67 pounds), the XPS 13 is very compact, and some might even miss having a bit wider bezels for easier screen handling. To my knowledge, the XPS does not offer a 4G/LTE module option.
  • The ASUS Zenbook Pro (UX550) [see link] is a 15-inch laptop, which is a bit heavier (at 1,8 kg), but it scales up to a 4K display and can come with a discrete GTX 1050 Ti graphics option. For being a bit thicker and heavier, the Zenbook Pro is reported to have a long battery life and rather capable graphics performance, with relatively minor throttling issues. It still has 7th-gen processors (in quad-core versions, though).
  • Nice, pretty lightweight 15-inch laptops come from Dell (XPS 15) [see link] and LG, for example – particularly the LG gram 15 [see link], which is apparently a very impressive device and weighs only 1,1 kg while being a 15-inch laptop; it is a shame we cannot get it here in Finland, though.
  • Huawei Matebook X Pro (photo © Huawei).
  • As Apple has (to my eyes) ruined their excellent Macbook Pro line, with its too-shallow keyboard and by not providing any touch screen options, people are free to hunt for Macbook-like experiences elsewhere. Chinese manufacturers are always fast to copy things, and the Huawei Matebook X Pro [see link] is an interesting example: it has a touch screen (3K LTPS display, 3000 x 2000 resolution with 260 PPI, 100 % colour space, 450 nits brightness), 8th-gen processors, MX 150 discrete graphics, a 57,4 Wh battery, a Dolby Atmos sound system, etc. This package weighs 1,33 kg. It is particularly nice to see them not copying Apple’s highly limited ports and connectivity – the Matebook X Pro has not only Thunderbolt/USB-C but also the older USB-A, and a regular 3,5 mm headphone port. I am dubious about the quality of the keyboard, though, until I have tested it personally. And one can always be a bit paranoid about the underlying security of Chinese-made information technology; but then again, Western companies have not necessarily proved any better in that area. It is good to have more competition at the high end of laptops, as well.
  • Finally, one must also mention Microsoft, which sells its own Surface line of products; these have very good integration with the touch features of Windows 10, of course, and also generally come with displays, keyboards and touchpads that are among the very best. The Surface Book 2 [see link] is their most versatile and powerful device: there are both 15-inch and 13,5-inch models, both having quad-core processors, discrete graphics (up to GTX 1060), and good battery life (advertised at up to 17 hours, though one can trust that real-life use times will be much shorter). The Book 2 is a two-in-one device with a detachable screen that can work independently as a tablet. However, this setup is heavier (1,6 kg for the 13,5-inch, 1,9 kg for the 15-inch model) than the Surface Laptop [see link], which does not work as a tablet, but has a great touch screen and weighs less (c. 1,5 kg). The “surface” of this Surface Laptop is pleasant Alcantara, a cloth material.

MS Surface Laptop with alcantara (image © Microsoft).

To sum up, there are many really good options these days in personal computers, and laptops in general have evolved in many important areas. Still, it is important to get hands-on experience before committing – particularly if one will be using the new workhorse intensively; this is a crucial tool decision, after all. And personal preference (and, of course, the available budget) really matters.

Tools for Trade

Lenovo X1 Yoga (2nd gen) in tablet mode.

The key research infrastructures these days include e.g. access to online publication databases, and the ability to communicate with your colleagues (including such prosaic things as email, file sharing and real-time chat). While an astrophysicist relies on satellite data and a physicist on a particle accelerator, for example, research in the humanities and human sciences is less reliant on expensive technical infrastructures. Knowing how to do an interview or design a reliable survey, or being able to carefully read, analyse and interpret human texts and expressions, is often enough.

That said, there are tools that are useful for researchers of many kinds and fields. A solid reference database system is one (I use Zotero). In everyday meetings and in the field, note-taking is one of the key skills and practices. While most of us carry our trusty laptops everywhere, one can do with a lightweight device, such as an iPad Pro. There are nice keyboard covers and precise active pens available for today’s tablet computers. When I type more, I usually pick up my trusty Logitech K810 (I have several of those). But the Lenovo Yoga 510 that I have at home also has the kind of keyboard that I love: snappy and precise, but light of touch, and of low profile. It is also a two-in-one, convertible laptop, but a much better version from the same company is the X1 Yoga (2nd generation). That one is equipped with a built-in active pen, while also being flexible and powerful enough to run both utility software and contemporary games and VR applications – at least when linked with an eGPU system. For that, I use an Asus ROG XG Station 2, which connects to the X1 Yoga with a Thunderbolt 3 cable, thereby plugging into the graphics power of an NVIDIA GeForce GTX 1070. A system like this has the benefit that one can carry around a reasonably light and thin laptop computer, which scales up to workstation-class capabilities when plugged in at the desk.

ROG XG Station 2 with Thunderbolt 3.
ROG XG Station 2 with Thunderbolt 3 (photo of the unit).

One of the most useful research tools is actually a capable smartphone. For example, with a good mobile camera one can take photos as visual notes, photograph one’s handwritten notes, or shoot copies of projected presentation slides at seminars and conferences. When coupled with a fast 4G or Wi-Fi connection and automatic upload to a cloud service, the same photo notes almost immediately also appear on the laptop computer, so that they can be filed into the right folder, or combined with typed observation notes and metadata. This is much faster than having a high-resolution video recording of the event; such more robust documentation setups are necessary in certain experimental settings, focus group interview sessions, collaborative innovation workshops, etc., but on many occasions written notes and mobile phone photos are just enough. I personally use both iPhone (8 Plus) and Android systems (Samsung Galaxy Note 4 and S7).

Writing is one of the key things academics do, and writing software is a research tool category of its own. For active-pen handwriting I use both Microsoft OneNote and Nebo by MyScript. Nebo is particularly good at real-time text recognition and automatic conversion of drawn shapes into vector graphics. I link a video by them below:

My main note database is at Evernote, while online collaborative writing and planning is mostly done in Google Docs/Drive, and consortium project file sharing is done either in Dropbox or in Office365.

Microsoft Word may be the gold standard of writing software for stand-alone documents, but its relative share has gone down radically in today’s distributed and collaborative work. And while MS Word might still have the best multilingual proofing tools, for example, the first draft might come from an online Google Document, and the final copy might end up in WordPress, to be published in some research project blog or website, or in a peer-reviewed online academic publication. Long, book-length projects are best handled in a dedicated writing environment such as Scrivener, but most collaborative book projects are best handled with a combination of different tools, combined with cloud-based sharing and collaboration in services like Dropbox, Drive, or Office 365.

If you have not collaborated in this kind of environment, have a look at tutorials; here is a short video introduction by Google to sharing in Docs:

What are your favourite research and writing tools?

Photography and artificial intelligence

Google Clips camera (image copyright: Google).

The main media attention on applications of AI – artificial intelligence and machine learning – has been on such application areas as smart traffic, autonomous cars, recommendation algorithms, and expert systems in all kinds of professional work. There are, however, also very interesting developments currently taking place around photography.

There are multiple areas where AI is augmenting or transforming photography. One is the advancement of the software tools that professional and amateur photographers use. It is getting ever easier to select complex areas in photos, for example, and to apply all kinds of useful, interesting or creative effects and functions to them (see e.g. what Adobe writes about this at: https://blogs.adobe.com/conversations/2017/10/primer-on-artificial-intelligence.html). The technical quality of photos is improving, as AI and advanced algorithmic techniques are applied to e.g. enhancing the level of detail in digital photos. Even a blurry, low-pixel file can be augmented with AI to look like a very realistic, high-resolution photo of the subject (on this, see: https://petapixel.com/2017/11/01/photo-enhancement-starting-get-crazy/).
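To make concrete why this is remarkable: classic upscaling can only redistribute the pixels that are already there, as in this minimal, illustrative Python sketch of nearest-neighbour interpolation (my own toy example, not the algorithm of any product mentioned here). AI super-resolution, by contrast, draws on patterns learned from large photo collections to synthesize plausible new detail.

```python
# Toy nearest-neighbour upscaling: a tiny grayscale "image" is a list
# of pixel rows, and enlarging it only duplicates existing pixels --
# no new detail appears, unlike in learned super-resolution.

def upscale_nearest(pixels, factor=2):
    """Enlarge a 2D list of pixel values by repeating each pixel."""
    out = []
    for row in pixels:
        wide_row = [p for p in row for _ in range(factor)]
        out.extend([wide_row[:] for _ in range(factor)])
    return out

tiny = [[0, 255],
        [128, 64]]
big = upscale_nearest(tiny)
# big is a 4x4 image; each original pixel now covers a 2x2 block.
```

The enlarged image contains exactly the same information as the original – which is why the AI approach, hallucinating detail instead of copying pixels, is such a qualitative change.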

But the applications of AI do not stop there. Google and other developers are experimenting with “AI-augmented cameras” that can recognize persons and events, and take action, capturing photos and videos of moments and subjects that the AI, rather than the human photographer, deems worthy (see, e.g., Google Clips: https://www.theverge.com/2017/10/4/16405200/google-clips-camera-ai-photos-video-hands-on-wi-fi-direct). This development can go in multiple directions. There are already smart surveillance cameras, for example, that learn to recognize the family members and differentiate them from unknown persons entering the house. Such a camera, combined with a conversant backend service, can also serve its human users in their various information needs: telling whether the kids have come home in time, or keeping track of any out-of-the-ordinary events that the camera and algorithms might have noticed. The video below features Lighthouse AI, which combines a smart security camera with such an “interactive assistant”:

In the domain of amateur (and also professional) photography practices, AI also means many fundamental changes. There are already add-on tools like Arsenal, the “smart camera assistant”, which is based on the idea that manually tweaking all the complex settings of modern DSLR cameras is not that inspiring, or even necessary, for many users, and that a cloud-based intelligence could handle many challenging photography situations with better success than a fumbling regular user (see their Kickstarter video at: https://www.youtube.com/watch?v=mmfGeaBX-0Q). Such algorithms are also already being built into the cameras of flagship smartphones (see, e.g., the AI-enhanced camera functionalities in the Huawei Mate 10 and in Google’s Pixel 2, which use AI to produce sharper photos with better image stabilization and better-optimized dynamic range). Such smartphones, like Apple’s iPhone X, typically come with a dedicated chip for AI/machine learning operations, like Apple’s “Neural Engine”. (See e.g. https://www.wired.com/story/apples-neural-engine-infuses-the-iphone-with-ai-smarts/.)

Many of these developments point the way towards a future age of “computational photography”, where algorithms play as crucial a role in the creation of visual representations as optics do today (see: https://en.wikipedia.org/wiki/Computational_photography). It is interesting, for example, to think about situations where photographic presentations are constructed from data derived from a myriad of different kinds of optical sensors, scattered in wearable technologies and the environment, which will try their best to match the mood, tone or message set by the human “creative director”, who is no longer employed as the actual camera operator. It is also becoming increasingly complex to define the authorship and ownership of photos, and, most importantly, to handle the privacy issues related to the processing of visual and photographic data. – We are living in interesting times…

Future of interfaces: AirPods

Apple AirPods (image © Apple).

I am a regular user of headphones of various kinds, both wired and wireless, closed and open, with noise cancellation and without. The latest piece of this technology I have invested in is the “AirPods” by Apple.

Externally, these things are almost comically similar to the standard “EarPods” Apple provides with, or as the upgrade option for, their mobile devices. The classic white Apple design is there; just the cord has been cut, leaving the connector stems protruding from the user’s ears like small antennas (which they probably indeed are, as well as directional microphone arms).

There are wireless headphone-microphone sets that have slightly better sound quality (even if the AirPods are perfectly decent as wireless earbuds), or an even more neutral design. What is interesting here is in one part the “seamless” user experience Apple has invested in – and in the other the “artificial intelligence” Siri assistant, which is the second key part of the AirPods concept.

The user experience of the AirPods is superior to any other headphones I have tested, which is related to the way the small and light AirPods immediately connect with Apple iPhones, detect when they are placed into the ear or taken out, and work for hours on one charge – and quickly recharge after a short session inside their stylishly designed, smart battery case. These things “just work”, in the spirit of the original Apple philosophy. To achieve this, Apple has managed to create a seamless combination of tiny sensors, battery technology, and a dedicated “W1 chip” that manages the wireless functionalities of the AirPods.

The integration with the Siri assistant is the other key part of the AirPods concept, and the one that probably divides users’ views more than any other feature. A double tap on the side of an AirPod activates Siri, which can indeed understand short commands in multiple languages and respond to them, carrying out even simple conversations with the user. Talking to an invisible assistant is not, however, part of today’s mobile user cultures – even if Spike Jonze’s film “Her” (2013) shows that the idea is certainly floating around today. Still, mobile devices are often used while on the move, in public places, in buses, trains or airplanes, and it is just not feasible nor socially acceptable for people to carry out constant conversations with their invisible assistants in these kinds of environments – not yet today, at least.

Regardless of this, the Apple AirPods are actually, to a certain degree, designed to rely on such constant conversations, which makes them both futuristic and ambitious, and also a rather controversial piece of design and engineering. Most notably, there are no physical buttons or other ways of adjusting the volume on these headphones: you just double-tap the side of an AirPod and verbally tell Siri to turn the volume up or down. This mostly works just fine – Siri does the job – but a small touch-control gesture would be just so much more user-friendly.

There is something engaging in testing Siri with the AirPods, nevertheless. I found myself walking around the neighborhood, talking to the air, and testing what Siri can do. There are already dozens of commands and actions that can be activated with the help of the AirPods and Siri (there is no official listing, but examples are given in lists like this one: https://www.cnet.com/how-to/the-complete-list-of-siri-commands/). The abilities of Siri still fall short in many areas: it did not completely understand the Finnish I used in my testing, and the integration of third-party apps is often limited, which is a real bottleneck, as those apps are what most of us use our mobile devices for, most of the time. Actually, Google’s assistant in Android is better than Siri in many areas relevant to daily life (maps and traffic information, for example), but its user experience is not yet as seamless or as integrated a whole as that of Apple’s Siri.

All this considered, using the AirPods is certainly another step in the general direction where pervasive computing, AI, conversational interfaces and augmented reality are taking us, for good or ill. Well worth checking out, at least – for more, see Apple’s own pages: http://www.apple.com/airpods/.

Apple TV, 4th generation

Apple has been developing their television offerings on multiple fronts: in one sense, much television content and many viewers have already moved onto Apple (and Google) platforms, as online video and streaming media keep growing in popularity. According to one market research report, in the 18–24 age group (in America), traditional television viewing dropped by almost 40 % between 2011 and 2016. At the same time, subscriptions to streaming video services (like Netflix) are growing. Particularly among young people, some reports already suggest that they are spending more time watching streaming video than watching live television programs. Just in the period from 2012 to 2014, mobile video views increased by 400 %.

Still, the television set remains the centrepiece of most Western living rooms. Apple TV is designed to bring games, music, photos and movies from the Apple ecosystem to the big screen. After some problems with the old, second-generation Apple TV, I today got the new, 4th-generation Apple TV. It has a more powerful processor, more memory, and a new remote control with a small touch surface, and it runs a new version of tvOS. The most important aspect regarding expansion into new services is the ability to download and install apps and games from the thousands available in the App Store for tvOS.

After some quick testing, I think that I will prefer using the Remote app on my iPhone 6 Plus, rather than navigating with the small physical remote, which feels a bit finicky. Also, for games the dedicated video game controller (Steelseries Nimbus) would definitely provide a better sense of control. The Nimbus should also play nicely with iPhone and iPad games, in addition to Apple TV ones.

Setup of the system was simple enough, and was most easily handled via another Apple device – iCloud was utilized to access Wi-Fi and other registered home settings automatically. Apart from the somewhat tricky touch controls, the user experience is excellent. Even the default screensavers of the new system are this time high-definition video clips, which are great to behold in themselves. This is not a 4K system, though, so if you have already upgraded the living room television to a 4K model, the new Apple TV does not support that resolution. Ours is still a Full HD Sony Bravia, so no problem for us. Compared to some competing streaming media boxes (like the Roku 4, Amazon Fire TV, or Nvidia Shield Android TV), the feature set of the Apple TV relative to its price might seem a bit lacklustre. The entire Apple ecosystem has its own benefits (as well as downsides), though.

Tech Tips for New Students

Going cross-platform: same text accessed via various versions of MS Word and Dropbox in Surface Pro 4, iPad Mini (with Zagg slim book keyboard case), Toshiba Chromebook 2, and iPhone 6 Plus, in the front.

There are many useful practices and tools that can be recommended for new university students; many good study practices are pretty universal, but there are also elements that relate to what one studies and where one studies – to the institutional or disciplinary frames of academic work. Students working on a degree in theoretical physics, electronics engineering, organic chemistry, history of the Middle Ages, Japanese language or business administration, for example, will all probably have elements in their studies that are unique to their fields. I will here focus on some simple technicalities that should be useful for many students in the humanities, social sciences or digital media studies related fields, as well as for those in our own Internet and Game Studies degree program.

There are study practices that belong to the daily organisation of work, and then there are the tools, services and software that one will use, for example. My focus here is on the digital tools and technology that I have found useful – even essential – for today’s university studies, but that does not mean I would downplay the importance of non-digital, informal and more traditional ways of doing things. The way of taking notes in lectures and seminars is one example. For many people the use of pen or pencil on paper is absolutely essential, and they are most effective when using their hands to draw and write physically on paper. Also, rather than just participating in online discussion fora, having really good, traditional discussions with fellow students in the campus café or bar is important in quite many ways. That said, there are also some other tools and environments that are worth considering.

It used to be that computers were boxy things that were used in the university’s PC classes (apart from the terminals used to access the mainframes). Today, the information and communication technology landscape has greatly changed. Most students carry in their pockets smartphones that are much more capable devices than the mainframes of the past. Also, the operating systems do not matter as much as they did only a few years ago. It used to be a major choice whether one joined the camp of Windows (Microsoft-empowered PC computers), that of Apple Macintosh computers, those with Linux, or some other, more obscure camp. The capabilities and software available for each environment were different. Today, it is perfectly possible to access the same tools, software or services with all major operating environments. Thus, there is more freedom of choice.

The basic functions most of us in academia probably need daily include reading, writing, communicating/collaborating, research, data collection, scheduling and other work organisation tasks, and the use of the related tools. It is an interesting situation that most of these tasks can already be achieved with the mobile device many of us carry with us all the time. A smartphone of the iOS or Android kind can be combined with an external Bluetooth keyboard and used for taking notes in lectures, accessing online reading materials, using cloud services, and most other necessary tasks. In addition, the smartphone is of course an effective tool for communication, with its apps for instant messaging and video or voice conferencing. The camera-phone capabilities can be used for taking visual notes, or for scanning one’s physical notes, with their mindmaps, drawings and handwriting, into digital format. The benefit of that kind of hybrid strategy is that it allows taking advantage of the supreme tactile qualities of physical pen and paper, while also allowing the organisation of scanned materials into digital folders, possibly even in full-text-searchable format.

The best tools for this basic task of note-taking and organisation are Evernote and MS OneNote. OneNote is the more fully featured – and more complex – of the two, and allows one to create multiple notebooks, each with several different sections and pages that can include text, images, lists and many other kinds of items. Taking some time to learn how to use OneNote effectively to organise multiple materials is definitely worth it. There are also OneNote plugins for most internet browsers, allowing one to capture materials quickly while surfing various sites.

MS OneNote, Microsoft tutorial materials.

Evernote is a simpler and more straightforward tool, and this is perhaps exactly why many prefer it. Saving and searching materials in Evernote is very quick, and it has excellent integration with mobile. OneNote is particularly strong if one invests in a Microsoft Surface Pro 4 (or Surface Book), which come with the Surface Pen – a great note-taking tool that allows one to quickly capture materials from a browser window, write on top of web pages, etc. On the other hand, if one is using an Apple iPhone or iPad, or an Android phone or tablet, Evernote has characteristics that shine there. On Samsung Note devices with the “S Pen”, one can take screenshots and make handwritten notes in much the same manner as one can with the MS Surface Pen in the Microsoft environment.

In addition to the note solution, a cloud service is one of the bedrocks of today’s academic world. Some years ago it was perfectly possible to have software or hardware crash and realize that (backups missing) all that important work was now gone. Cloud services have their question marks regarding privacy and security, but for most users the benefits are overwhelming. A tool like Dropbox will silently work in the background and make sure that the most recent versions of all files are always backed up. A file that is in the cloud can also be shared with other users, and some services have expanded into real-time collaboration environments where multiple people can discuss and work together on shared documents. This is especially strong in Google Drive and Google Docs, which include simplified versions of the familiar office tools: text editor, spreadsheet, and presentation programs (cf. the classic versions of Microsoft Office: Word, Excel, and PowerPoint; LibreOffice has similar free, open-source versions). Microsoft’s cloud service, Office 365, is currently provided to our university’s students and staff as the default environment, free of charge, and it includes the OneDrive storage service as well as the Outlook email system, plus access to both desktop and cloud-hosted versions of the Office applications – Word Online, Excel Online, PowerPoint Online, and OneNote Online. Apple has their own iCloud system, and the Mac office tools (Pages, Numbers, and Keynote) can also be operated in a browser, as iCloud versions. All major productivity tools also have iOS and Android mobile app versions of their core functionalities available. It is also possible to save, for example, MS Office documents into OneDrive, or into Dropbox – seamless synchronization across multiple devices and operating systems is an excellent thing, as it makes it possible to start writing on a desktop computer, continue on a mobile device, and then finish things up on a laptop, for example.

Microsoft Windows, Apple OS X (Macintosh computers) and Linux have a longer history, but I recommend students also have a look at Google’s Chrome OS and Chromebook devices. They are generally cheaper, and provide a reliable and very easy-to-maintain environment that can be used for perhaps 80 % or 90 % of the basic academic tasks. Chromebooks work really well with Google Drive and Google Docs, but principally any service that can be accessed as a browser-based cloud version also works on Chromebooks. It is possible, for example, to create documents in Word or PowerPoint Online, and save them into OneDrive or Dropbox so that they will sync with the other personal computers and mobile devices one might be using. There is a development project at Google to make it possible to run Android mobile applications on Chrome OS devices, which means that the next generation of Chromebooks (which will most likely all support touchscreens) will be even more attractive than today’s versions.

For planning, teamwork, task deadlines and calendar sharing, there are multiple tools available, ranging from MS Outlook to Google Calendar. I have found that sharing calendars generally works more easily in the Google system, while Outlook allows deeper integration into an organisation’s personnel databases, etc. It is a really good idea to plan and break down all key course work into manageable parts and set milestones (interim deadlines) for them. This can be achieved with careful use of calendars, where one can mark down the hours required for personal work as well as teamwork, in addition to the lectures, seminars and exercise classes your timetable might include. That way, not all crucial jobs are packed right against the end-of-term or end-of-period deadlines. I personally use a combination of several Google Calendars (the core one synced with the official UTA Outlook calendar) and the Wunderlist to-do list app/service. There are also several dedicated project management tools (Asana, Trello, etc.), but mostly you can work out the tasks with basic tools like Google Docs and Sheets (Word, Excel) and then break the tasks and milestones down into the calendar you share with your team. Communications are also essential, and apart from email, people today generally utilize Facebook (Messenger, Groups, Pages), Skype, WhatsApp, Google+/Hangouts, Twitter, Instagram and similar social media tools. One of the key skills in this area is to create multiple filter settings or more fine-grained sharing settings (possibly even different accounts and profiles) for professional and private purposes. The intermixing of personal, study-related and various commercial dimensions is almost inevitable in these services, which is why some people try to avoid social media altogether. Wisely used, these services can nevertheless be immensely useful in many ways.
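The milestone idea above can be sketched in a few lines of code: given a course deadline, compute evenly spaced interim checkpoints that can then be copied into a shared calendar. This is just my own illustration using Python’s standard datetime module, not a feature of Outlook or Google Calendar, and the example dates are made up.

```python
# Break a final deadline into evenly spaced interim milestones.
from datetime import date

def milestones(start, deadline, parts):
    """Return `parts` evenly spaced checkpoint dates from start to deadline."""
    step = (deadline - start) / parts  # a timedelta; Python 3 supports this division
    return [start + step * i for i in range(1, parts + 1)]

# Example: course work started on 8 January, due on 5 March.
plan = milestones(date(2018, 1, 8), date(2018, 3, 5), 4)
# Four checkpoints, two weeks apart, the last one falling on the deadline itself.
```

Marking these interim dates in the shared calendar is what keeps the crucial jobs from all piling up right before the end-of-term deadline.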

All those tools and services require accounts and login details, which are easily made rather unsafe by, for example, our tendency to recycle the same or very similar passwords. Please do not do that – there will inevitably be a hacking incident or some other issue with one of those services, and that will lead you into trouble in all the others, too. There are various rules-based ways of generating complex passwords for different services, and I recommend always using two-factor authentication when it is available. This is a system where, typically, a separate mobile app or text messages act as a backup security measure whenever the service is accessed from a new device or location. Life is also much easier with a password manager like LastPass or 1Password, where one only needs to remember the master password – the service will remember the other, complex and automatically generated passwords for you. In several contemporary systems, there are also face recognition (Windows 10 Hello), fingerprint authentication or iris recognition technologies that are designed to provide a further layer of protection at the hardware level. The operating systems are also getting better at protecting against computer viruses, even without dedicated anti-virus software. Note, however, that there are multiple scams and social engineering hacks in the connected, online world that even the most sophisticated anti-virus tools cannot protect you against.
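If you prefer generating strong passwords yourself rather than relying on a manager’s built-in generator, the safe way is to draw characters from a cryptographically secure random source rather than from a memorable “rule”. Here is a minimal sketch using Python’s standard secrets module; the character set is my own arbitrary choice, so check each service’s password rules before using something like this.

```python
# Generate a strong random password from a cryptographically secure source.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!#%&*+-?@"

def make_password(length=16):
    """Pick `length` characters independently and uniformly from ALPHABET."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = make_password()  # a fresh 16-character password, different on every run
```

Of course, a password like this is not meant to be memorized – that is exactly what the password manager’s vault (and its one master password) is for.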

Finally, a reference database is an important part of any study project. While it is certainly possible to have a physical shoebox full of index cards, filled with quotes, notes and the bibliographic details of journal articles, conference papers and book chapters, it is not the most efficient way of doing things. There are comprehensive reference database management services like RefWorks (supported by UTA) and EndNote that are good for this job. I personally like Zotero, which exists as a cloud/browser-based service at Zotero.org, but, most importantly, allows quick capture of full reference details through browser plugins, and then inserting references in all standard formats into course papers and thesis works, in simple copy-paste style. Shared, topic-based bibliographic databases, managed by teams, can also be set up in Zotero.org – an example is the Zotero version of the DigiPlay bibliography (created by Jason Rutter, and converted by Jesper Juul): https://www.zotero.org/groups/digiplay .

As a final note, regardless of the actual tools one uses, it is the systematic and innovative application of those tools that really sets excellent study practices apart. Even the most cutting-edge tools do not automate research and learning – that is something that needs to be done by yourself, and in your individual style. There are also other solutions, not explored in this short note, that might suit your style. Scrivener, for example, is a more comprehensive “writing studio”, where one can collect snippets of research, order fragments and create structure in a more flexible manner than is possible in e.g. MS Word (even though Word's often underused Outline View goes some way in that direction). The landscape of digital, physical, social and creative opportunities is expanding and changing all the time – if you have suggestions for additions to this topic, please feel free to make them below in the comments.

All-in-one: still not there

HP Elite X2 1012 press photo (image © HP).
Some time ago, I blogged about tablets as productivity devices, and I have also written about some early experiences as a user of the Microsoft Surface Pro 4: a Windows 10, 2-in-1 tablet PC that relies on a combination of touch screen, pen computing, and a keyboard-and-touchpad cover (plus the Cortana voice assistant, if you are a US/English user). It just might be that I am restless and curious by nature, but these days I find myself jumping from Microsoft to Apple to Google ecosystems, and not really finding what I am looking for in any of them.

When I am using an iOS or Android tablet, file management is usually a mess, external keyboard and mouse inputs do not work reliably, and multitasking between several apps and services – copy-pasting or otherwise sharing information between them all – is a pain.

When I am on a regular Windows laptop or PC, the keyboard and mouse/touchpad are usually just fine, and file management, multitasking and copy-pasting work well. Touch screen inputs and the overall ease of use lag behind tablet systems, though. (This is true also of the Apple OS X desktop environment, but I have pretty much given up using Macs for my work these days; I just could not configure the system to work and behave in the ways I want – as a Microsoft OS/PC user who has hacked his way around DOS, then Windows 3.0 etc., and thus has certain things pretty much “hard-wired” in the way I work.)

The Surface Pro 4 is the closest to an “all-in-one” system I have found so far, but I have started to increasingly dislike its keyboard cover. The cover is not that bad, but if you are a touch-typist, it is not perfect. There is still a slight flex in the plastic construction, and a shallow key travel that turns me off and produces typing errors exactly when you are in a hurry and need to type fast. I am currently trying to find a way to get rid of the type cover and instead use my favorite keyboard, the Logitech K810. But I am not able to attach it to the Surface Pro in a solid enough way, and the K810 has no touchpad, so the workflow with all those mouse right-clicks becomes rather complex.

I really like the simplicity of Chromebooks, and this blog note, for example, is written on my trusty Toshiba Chromebook 2, which has an excellent, solid keyboard (though not backlit), a good, crisp Full HD IPS screen, and a responsive, large touchpad. However, I keep reaching out and trying to scroll the screen, which is not a touch version. (The Asus Chromebook Flip would be one with a touch screen.) And there is nothing comparable to the Surface Pen, which is truly useful when one e.g. reads and makes notes on a pile of student papers in PDF/electronic formats. Also, file management in Chrome OS is a mess, and web versions of popular apps still respond more slowly and are more limited than real desktop versions.

So, I keep on looking. Recently I tested the HP Elite X2 1012 (pictured), which is nearly identical to the Surface Pro systems that Microsoft produces, but has an excellent, solid metallic keyboard cover, as well as other productivity-oriented enhancements like an optional 4G/LTE SIM card slot, a USB-C port with Thunderbolt technology, and a decent enough screen, pen and kickstand design. However, the Elite X2 falls short by using the less powerful Intel Core M series processors (the Surface Pro 4 goes for a regular Core i5 or i7 above the entry-level model), by being rather expensive, and, according to the reviews I have read, by having a battery life that is not something a real mobile office worker would prefer.

Maybe I can find a way to connect the Elite X2 metallic keyboard cover to the Surface Pro 4? Or maybe not.

(Edit: The battery life of the Elite X2 actually appears to be good; the screen, on the other hand, only so-so.)

Tablets as productivity devices

Logitech Ultrathin Keyboard for iPad Air
Professionally, I have a sort of on-off relationship with tablets (mainly iPads and Android tablets, but I also count touch-screen, small form factor Windows 2-in-1s in this category). Small and light, tablets are a natural solution when you have piles of papers and books in your bag and want to travel light. There are so many things that every now and then I try to do with a tablet – only to clash again with some of their limitations: the inability to edit some particular file quickly in its native format, the inability to simply copy and paste data between documents that are open in different applications, the limitations of multitasking. The inability to quickly start that PC game you are writing about, or to re-run that SPSS analysis we urgently need for the paper we are working on.

But when you know what those limitations are, tablets are just great for the remaining 80% or so of the stuff that we do in mobile office slash research sort of work. And there are actually features of tablets that may make them even stronger as productivity-oriented devices than personal computers or fully powered laptops can be. There is the small, elite class of thin, light and very powerful laptop computers with touch screens (running Windows 10) which can probably be configured to be the “best of both worlds”, but otherwise a tablet with a high-DPI screen, a fast enough processor (for those mobile-optimized apps) and excellent battery life simply flies above a crappy, under-powered and heavy laptop or office PC from the last decade. The user experience is just so much better: everything reacts immediately, looks beautiful, runs for hours, and behaves gracefully. This is particularly true in the iOS / Apple ecosystem (Android can be a bit more of a bumpy ride), as the careful quality control and fierce competition in the iOS app space ensure that only those applications designed with a near-perfect balance of functionality and aesthetics get into the prime limelight. Compare that to the typical messy interfaces and menu jungles of traditional computer productivity software, and you’ll see what I mean.

The primary challenge of tablets for me is text entry. I can happily surf, read, game, and watch video content of various kinds on a tablet, but when it comes to making fast notes in a meeting where you need to have two or three background documents open at the same time, and to copy text or images from them, plus some links or other materials from the Internet, the limitations of tablets do tend to surface. (Incidentally, the Surface Pro 4 or Surface Book by Microsoft would be solutions that I’d love to test one of these days – just in case someone from the MS sales department happens to read this blog…) But there are ways to work around some of these limitations, using a combination of cloud services running in browser windows and dedicated apps, and quickly rotating between them, so that the mobile operating system does not kill them and lose the important data view in the background. Also, having a full keyboard connected to the tablet is a good solution for a full day of work with a tablet. An iPad Air with a premium wireless keyboard like the Logitech K811 is head and shoulders above the situation where one is forced to slowly tap in individual letters with the standard virtual keyboard of a mobile device. (I am a touch-typist, which may explain my perspective here.)

In the future, it is increasingly likely that the differences between personal computers and mobile devices will continue to erode and vanish. The high standards of ease of use, and of user experience more generally, set by mobile devices already influence the ways in which computer software is being (re-)designed. The challenges waiting there are not trivial, though: when a powerful, professional tool is suddenly reduced into a “toy version” of itself in the name of usability, the power users will cry foul. There are probably a few lessons from the area of game (interface) design that can also inform the design of utility software, as the different “difficulty levels” or novice/standard/expert modes are fine-tuned, lessons from tutorials of various kinds are applied, and adaptive challenge levels or information density are balanced.