Transition to Mac, Pt. 2

I got the first part of my ‘Transition to Mac’ project (almost) ready by the end of my summer vacation. It centred on a Mac Mini (M1/16GB/512GB model), which I set up as the new main “workstation” for my home office and photography editing work. This is, in a nutshell, what extras and customisations I have made to it so far:

– as the keyboard, a Logitech MX Keys Mini for Mac
– as the mouse, a Logitech G Pro X Superlight Wireless Gaming Mouse (White)
– for fast additional SSD storage, a Samsung X5 External SSD 2TB (with nominal read/write speeds of 2,800/2,300 MB/s)
– and then, made certain add-ons/modifications to macOS:
– AltTab (makes the Alt-Tab key combo cycle through all open app windows, not only between applications like Cmd-Tab does)
– Alfred (for extending the already excellent Spotlight search to third-party apps, direct access to various system commands and other advanced functionalities)
– installed BetterSnapTool (for adding snap-to-sides/corners functionality into macOS window management)
– set Sublime Text as the default text editor (a scripted way to do this is sketched after this list)
– DockMate (for getting Win10-style app window previews into the Mac dock, without which I feel the standard dock is pretty useless)
– And then installed the standard software that I use daily (most notably Adobe Creative Cloud/Lightroom/Photoshop, MS Office 365, DxO PureRAW, and Topaz DeNoise AI & Sharpen AI)
– The browser plugin installations and login procedures for the browsers I use are a major undertaking, and still ongoing.
– I use the 1Password app and service for managing and synchronising logins/passwords and other sensitive information across devices, which speeds up the login procedures a bit these days.
– There was one major hiccup in the process so far, but in the end it was nothing to blame the Mac Mini for; I got a colour-calibrated 27″ 4K Asus ProArt display to attach to the Mac, but there were immediately major issues with the display being stuck on black when the Mac woke from sleep. As this “black screen after sleep” issue is something that has been reported with some M1 Mac Minis, I was sure that I had got a faulty computer. But after some tests with several other display cables, and comparisons with another 4K monitor, I was able to isolate the issue as a fault in the Asus instead. There was also a mechanical issue with the small plastic power switch in this display (it got repeatedly stuck, and had to be forcibly pried back into place). I was just happy to be able to return this one, and ordered a different monitor, from Lenovo this time, as they currently had a special discount on a model that also has a built-in Thunderbolt dock – something that should be useful, as the M1 Mac Mini has a rather small selection of ports.
– There have been some weird moments recently of not getting any image on my temporary replacement monitor either, so the jury is still out on whether there is indeed also something wrong with the Mac Mini in this regard.
– I do not have much actual daily usage behind me with this system yet, but my first impressions are predominantly positive. The speed is one main thing: in my photo editing processes there are some functions that take almost the same time as on my older PC workstation, but mostly things happen much faster. The general impression is that I can now process my large RAW file collections maybe twice as fast as before. And some tools have obviously already been optimised for Apple Silicon/M1, since they run lightning-fast. (E.g. Topaz Sharpen AI was now so fast that I didn’t even notice it running the operation before it was already done. This really changes my workflow.)
– The smooth integration of the Apple ecosystem is another obvious thing to notice. I rarely bother to boot up my PC computers any more, as I can just use an iPad Pro or Mac (or iPhone); all of them wake up immediately, and I can find my working documents seamlessly synced and updated on whatever device I take hold of.
– There are, of course, some irritating elements in the Mac for a long-time Windows/PC user, too. The Mac is designed to push simplicity to a degree that actually makes some things very hard for the user. Some design decisions I simply do not understand. For example, the simple cut-and-paste keyboard combination does not work in the Mac Finder (file manager): you need to apply a modifier key (Option, in addition to the usual Cmd-V). You can drag files between folders with a mouse, but why not allow the standard Cmd-V for pasting files? And then there are things like the (very important) keyboard shortcut for pasting text without formatting: “Option + Cmd + Shift + V”! I have not yet managed to imprint either of these long combo keys into my muscle memory, and looking at the Internet discussions, many frustrated users seem to have similar issues with this kind of Mac “personality issues”. But, otherwise, a nice system!
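About the default text editor change: there is no simple one-click setting for this in macOS, so I scripted it with the small command-line utility duti, which maps document types (UTIs) to applications. Below is a minimal sketch of driving it from Python – it assumes duti is installed (e.g. via Homebrew), and the Sublime Text bundle identifier is an assumption you should verify on your own system:

    # Sketch: map common text-file types to Sublime Text with duti.
    # Assumes `duti` is installed (e.g. `brew install duti`); the bundle
    # identifier below is an assumption -- check yours with:
    #   osascript -e 'id of app "Sublime Text"'
    import subprocess

    BUNDLE_ID = "com.sublimetext.3"  # assumption: verify on your system
    UTIS = ["public.plain-text", "public.data", "net.daringfireball.markdown"]

    for uti in UTIS:
        # "all" makes the app the default for both viewing and editing
        subprocess.run(["duti", "-s", BUNDLE_ID, uti, "all"], check=True)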

Transition to Mac

Apple’s M1 Processor Lineup, March 2022. (Source: Apple.)

I have been an occasional Mac user in the past: in 2007, I bought a Mac Mini (an Intel Core 2 Duo, 2.0 GHz model) from Tokyo, where I was for the DiGRA conference. And in November 2013, I invested in a MacBook Pro with Retina Display (late 2013 model, with a 2.4GHz Core i5 and Intel Iris graphics). Both were wonderful systems for their times, but also sort of “walled garden” style environments, with no real possibility for user upgrades, and soon outpaced by PC systems, particularly in gaming. So, I found myself using the more powerful PC desktop computers and laptops, again and again.

Now, I have again started the process of moving back into the Apple/Mac ecosystem, this time full-time: both the work and home devices, in computing as well as in mobile tech, will most likely be in the Apple camp at some point later this year. Why, you might ask – what has changed?

The limitations of Apple in upgradability and general freedom of choice are still the same. Apple devices also continue to be typically more expensive than comparably specced competitors from the non-Apple camp. It is a bit amusing to look at a bunch of smart professionals sitting next to each other, each tapping at identical Apple-logo laptops, glancing at their identical iPhones. Apple has managed to get a powerful hold on the independent professional scene (including e.g. professors, researchers, designers and developers), even while large IT departments continue to prefer PCs, mostly due to the cheaper unit prices and better support for centralised “desktop management”. This is visible in the universities, too, where the IT department gets PCs for support personnel and offers them as the default choice for new employees, yet many people pick a Mac if they can decide themselves.

In my case, the decision to go back to the Apple ecosystem is connected to two primary factors: the effects of the corona pandemic, and the technical progress of “Apple silicon”.

The first factor consists of all the cumulative effects resulting from three years of remote and hybrid work. The requirements for fast and reliable systems that can support multitasking, video and audio really well are of paramount importance now. Hybrid meeting and teaching situations are particularly complex, as there is now a need to run several communications tools simultaneously, stream high-quality video and audio, possibly also record and edit audio and video, while also making online publications (e.g., course environments, public lecture web pages, entire research project websites) that integrate video and photographic content more than used to be the case before.

In my case, it is particularly the lack of reliability and the incapability of PC systems in processing image and video data that has led to the decision to go back to Apple. I have a relatively powerful thin-and-light laptop for work, and a Core i5/RTX 2060 Super based gaming/workstation PC at home. The laptop became underpowered first, and some meetings now start maybe 5-10 minutes late, with my laptop trying to find the strength needed to run a few browser windows, some office software, a couple of communication and messaging apps, plus the required real-time video and audio streams. And my PC workstation can still run many older games, but when I import some photo and video files while also having a couple of editing tools open, everything gets stuck. There is nothing as frustrating as staring at a computer screen where the “Wheel of Death” is spinning when you have many urgent things to do. I have developed a habit of constantly clicking on different background windows, and keeping the Windows Task Manager open all the time, so that I can use it to immediately kill any stuck processes and try to recover my work to where I was.

Recently I got the chance to test an M1 MacBook Pro (thanks, Laura), and while the laptop was equal to my mighty PC workstation in some tasks, there were processes that were easily 5-10 times faster on the Mac, particularly everything related to file management and photo and video editing. And the overall feeling of responsiveness and fluency in multitasking was just awesome. The new “Apple silicon” chips and architectures are providing user experiences that are just so much better than anything I have had on the PC side during recent years.

There are multiple reasons behind this, and there are technical people who can explain the underlying factors much better than I can (see, e.g., what Erik Engheim from Oslo writes here: https://debugger.medium.com/why-is-apples-m1-chip-so-fast-3262b158cba2). The basic benefits come from the very deep integration of Apple’s System-on-a-Chip (SoC), where a whole computer has been designed and packed into the one, integrated M1 chip package:

  • Central processing unit (CPU) – the “brains” of the SoC. Runs most of the code of the operating system and your apps.
  • Graphics processing unit (GPU) – handles graphics-related tasks, such as visualizing an app’s user interface and 2D/3D gaming.
  • Image processing unit (ISP) – can be used to speed up common tasks done by image processing applications.
  • Digital signal processor (DSP) – handles more mathematically intensive functions than a CPU. Includes decompressing music files.
  • Neural processing unit (NPU) – used in high-end smartphones to accelerate machine learning (A.I.) tasks. These include voice recognition and camera processing.
  • Video encoder/decoder – handles the power-efficient conversion of video files and formats.
  • Secure Enclave – encryption, authentication, and security.
  • Unified memory – allows the CPU, GPU, and other cores to quickly exchange information.
    (Source: E. Engheim, “Why Is Apple’s M1 Chip So Fast?”)

The underlying architecture of Apple Silicon comes from their mobile devices, iPhones and iPads in particular. While mainstream PC components have grown increasingly massive and power-hungry over the years, the mobile environment has set its own strict limits and requirements for the efficiency of the system architecture. There have been efforts to utilise the same Arm (“reduced instruction set” based) architectures – which e.g. mobile chip maker Qualcomm uses in their processors for Android mobile phones – also in “Windows on Arm” computers. But while the Android phones are doing fine, the Arm-based Windows computers have generally been so slow and limited in their software support that they have remained in the margins.

In addition to the reliability, stability, speed and power-efficiency benefits, Apple can today also provide the kind of seamless integration between computers, tablet devices, smartphones and wearable technology (e.g., AirPods headphones and Apple Watch devices) that users of more hybrid ecosystems can only dream about. This is now also becoming increasingly important, as (post-pandemic) we are moving between the home office, the main office, various “third spaces” and e.g. conference travel, while also still keeping up the remote meetings and events regime that emerged during the corona isolation years. Life is just so much easier when e.g. notifications, calls and data follow you more or less seamlessly from device to device, depending on where you are – sitting, running or changing trains. As the controlling developer-manufacturer of hardware, software and the underlying online services, Apple is in the enviable position to implement a polished, hybrid environment that works well together – and is, thus, one less source of stress.

The Rise and Fall and Rise of MS Word and the Notepad

MS Word installation floppy. (Image: Wikipedia.)

Note-taking and writing are interesting activities. For example, it is interesting to follow how some people turn physical notepads into veritable art projects: scrapbooks, colourful pages filled with intermixing text, doodles, mindmaps and larger illustrations. Usually these artistic people like to work with real pens (or even paintbrushes) on real paper pads.

Then there was the time when Microsoft Office arrived on personal computers, and typing with a clanky keyboard into an MS Word window started to dominate intellectually productive work. (I am old enough to remember the DOS times with WordPerfect, and my first Finnish-language word processor – “Sanatar” – that I long used on my Commodore 64 – which, btw, actually had a rather nice keyboard for typing text.)

WordPerfect 5.1 screen. (Image: Wikipedia.)

It is also interesting to note how some people still nostalgically look back to e.g. Word 6.0 (1993) or Word 2007, which was still a pretty straightforward tool in its focus, while introducing such modern elements as the adaptive “Ribbon” toolbars (which many people hated).

The versatility and power of Word as a multi-purpose tool has been both its strength and its main weakness. There are hundreds of operations one can carry out with MS Word, including programmable macros, printing out massive amounts of form letters or envelopes with addresses drawn from a separate data file (“Mail Merge”), and even editing and typesetting entire books (which I have personally done, even while I do not recommend it to anyone – Word was not originally designed as a desktop publishing program, even if its WYSIWYG print layout mode can be extended in that direction).

Microsoft Word 6.0, Mac version. (Image: user “MR” at https://www.macintoshrepository.org/851-microsoft-word-6)

These days, the free, open-source LibreOffice is perhaps the closest one can get to the look, interface and feature set of the “classic” Microsoft Word. It is a 2010 fork of OpenOffice.org, the earlier open-source office software suite.

Generally speaking, there appear to be at least three main directions that individual text editing programs focus on. One is writing as note-taking. This is situational and generally short-form. Notes are practical, information-filled prose pieces that are often intended to be used as part of some job or project. Meeting notes, notes that summarise books one has read, or data one has gathered (notes on index cards) are some examples.

The second main type of text program focuses on writing as content production. This is something that an author working on a novel does. Screenwriters, journalists, podcast producers and many other so-called ‘creatives’ also have needs for dedicated writing software in this sense.

The third category I already briefly mentioned: text editing as publication production. One can easily use any version of MS Word to produce a classic-style software manual, for example. It can handle multiple chapters; it has tools such as section breaks that allow pagination to restart or re-format in different sections of longer documents; and it also features tools for adding footnotes and endnotes, and for creating an index for the final, book-length publication. But while it provides a WYSIWYG-style print layout of pages, it does not offer the really robust page layout features that professional desktop publishing tools focus on. The fine art of tweaking font kerning (the spacing of proportional fonts), the very exact positioning of graphic elements on publication pages – all that is best left to tools such as PageMaker, QuarkXPress, InDesign (or LaTeX, if that is your cup of tea).

As these three practical fields are rather different, it is obvious that a tool that excels in one is probably not optimal for another. One would not want to use heavy-duty professional publication software (e.g. InDesign) to quickly draft meeting notes, for example. The weight and complexity of the tool hinders, rather than augments, the task.

MS Word (originally published in 1983) achieved its dominant position in word processing in the early 1990s. During the 1980s there were tens of different, competing word processing tools (eagerly competing to take the place of the earlier, mechanical and electric typewriters), but Microsoft was early to enter the graphical interface era, first publishing Word for Apple Macintosh computers (1985), then for Microsoft Windows (1989). The popularity and even de facto “industry standard” position of Word – as part of the MS Office suite – is due to several factors, but for many kinds of offices, professions and purposes, the versatility of MS Word was a good match. As the .doc file format, feature set and interface of Office and Word became the standard, it was logical for people to use it at home, too. The pricing might have been an issue, though (I read somewhere that a single-user licence of “MS Office 2000 Premium” at one point had an asking price of $800).

There have been counter-reactions, and multiple alternatives offered, to the dominance of MS Word. I already mentioned OpenOffice and LibreOffice as important, leaner, free and open alternatives to the commercial behemoth. An interesting development is related to the rise of the Apple iPad as a popular mobile writing environment. Somewhat similarly to how the Mac and Windows PCs heralded the transformation from the earlier, command-line era, the iPad shows signs of the (admittedly yet somewhat more limited) transformative potential of a “post-PC” era. At its best, the iPad is a highly compact and intuitive, multipurpose tool that is optimised for touch-screens and simplified mobile software applications – the “apps”.

There are writing tools designed for the iPad that some people argue are better than MS Word for people who want to focus on writing in the second sense – as content production. The main argument here is that “less is better”: as these writing apps are designed just for writing, there is no danger that one would lose time by starting to fiddle with font settings or page layouts, for example. The iPad is also arguably a better “distraction free” writing environment, as the mobile device is designed for a single app filling the small screen entirely – while Mac and Windows, on the other hand, boast stronger multitasking capabilities, which might lead to cluttered desktops filled with multiple browser windows, other programs and other distracting elements.

Some examples of this style of dedicated writers’ tools include Scrivener (by a company called Literature and Latte, and originally published for the Mac in 2007), which is optimized for handling long manuscripts and related writing processes. It has a drafting and note-handling area (with the “corkboard” metaphor), an outliner and an editor, making it also a sort of project-management tool for writers.

Scrivener. (Image: Literature and Latte.)

Another popular writing and “text project management” focused app is Ulysses (by a small German company of the same name). The initiative and main emphasis in the development of these kinds of “tools for creatives” has clearly been on the side of the Apple, rather than the Microsoft (or Google, or Linux) ecosystems. A typical writing app of this kind automatically syncs via iCloud, making the same text seamlessly available on the iPad, iPhone and Mac of the same (Apple) user.

In emphasising “distraction free writing”, many tools of this kind feature clean, empty interfaces where only the currently created text is allowed to appear. Some have specific “focus modes” that highlight the current paragraph or sentence, and dim everything else. Popular apps of this kind include iA Writer and Bear. While there are even simpler tools for writing – Windows Notepad and Apple Notes most notably (sic) – these newer writing apps typically include essential text formatting with Markdown, a simple markup system in which e.g. surrounding an expression with single *asterisk* marks produces italics, and with **double asterisks** produces bold.

iA Writer. (Image: iA Inc.)
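To make the Markdown idea mentioned above concrete: the marks are just plain characters in the text, and a Markdown processor turns them into actual formatting. Here is a minimal sketch in Python, using the common third-party “markdown” package (my choice of converter is an assumption – any Markdown processor would do the same):

    # Sketch: converting Markdown emphasis marks into HTML formatting.
    # Requires the third-party "markdown" package (pip install markdown).
    import markdown

    text = "One *asterisk pair* gives italics, **two pairs** give bold."
    print(markdown.markdown(text))
    # -> <p>One <em>asterisk pair</em> gives italics,
    #    <strong>two pairs</strong> give bold.</p>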

The big question, of course, is whether such (sometimes rather expensive and/or subscription-based) writing apps are really necessary. It is perfectly possible to create a distraction-free writing environment on a common Windows PC: one just closes all the other windows. And if the multiple menus of MS Word distract, it is possible to hide the menus while writing. Admittedly, the temptation to stray into exploring other areas and functions is still there, but then again, even an iPad contains multiple apps and can be used in a multitasking manner (even while not as easily as in a desktop environment, like a Mac or Windows computer). There are also ergonomic issues: a full desktop computer probably allows the large, standalone screen to be adjusted to a height and angle that is much better (or healthier) for longer writing sessions than the small screen of an iPad (or even a 13”/15” laptop computer), particularly if one tries to balance the mobile device while lying on a sofa, or squeezing it into a tiny cafeteria table corner while writing. The keyboards of desktop computers also typically have better tactile and ergonomic characteristics than the virtual, on-screen keyboards, or the add-on external keyboards used with iPad-style devices. Though, with some search and experimentation, one should be able to find some rather decent solutions that work in mobile contexts, too (this text is written using a Logitech “Slim Combo” keyboard cover, attached to a 10.5” iPad Pro).

For note-taking workflows, neither a word processor nor a distraction-free writing app is optimal. The leading solutions designed for this purpose include OneNote by Microsoft, and Evernote. Both are available for multiple platforms and ecosystems, and both allow text as well as rich media content, browser capture, categorisation, tagging and powerful search functions.

I have used – and am still using – all of the above-mentioned alternatives at various times and for various purposes. As years, decades and device generations have passed, archiving and access have become increasingly important criteria. I have thousands of notes in OneNote and Evernote, hundreds of text snippets in iA Writer and in all kinds of other writing tools, often synchronized into iCloud, Dropbox, OneDrive or some other such service. Most importantly, in our Gamelab, most of our collaborative research article writing happens in Google Docs/Drive, which is still the clearest, simplest and most efficient tool for such real-time collaboration. The downside of this happily polyphonic reality is that when I need to find something specific from this jungle of text and data, it is often a difficult task involving searches across multiple tools, devices and online services.

In the end, what I mostly use today is a combination of MS Word, Notepad (or, these days, Sublime Text 3) and Dropbox. I have 300,000+ files in my Dropbox archives, and the cross-platform synchronization, version-controlled backups and two-factor-authenticated security features are something that I have grown to rely on. When I make my projects into file folders that propagate through the Dropbox system, and use either plain text or MS Word (rich text), plus standard image file types (though often also PDFs) in these folders, it is pretty easy to find my text and data, and continue working on it, where and when needed (a sketch of this kind of file search follows below). Text editing works equally well on a personal computer, an iPad and even a smartphone. (The free, browser-based MS Word for the web, and the solid mobile app versions of MS Word, help too.) Sharing and collaboration require some thought in each individual case, though.

Dropbox. (Image: Dropbox, Inc.)
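Part of why this plain-text-plus-Dropbox setup works is that finding things reduces to a simple file search over a locally synced folder. A minimal sketch of the kind of search I mean – the folder path and the file extensions are my own assumptions, adjust to taste:

    # Sketch: grep-style search across a folder of plain-text notes.
    # The Dropbox path and the extensions are assumptions; adjust to taste.
    from pathlib import Path

    NOTES_ROOT = Path.home() / "Dropbox" / "Projects"  # hypothetical location
    EXTENSIONS = {".txt", ".md"}

    def search_notes(term: str) -> None:
        term = term.lower()
        for path in NOTES_ROOT.rglob("*"):
            if not path.is_file() or path.suffix.lower() not in EXTENSIONS:
                continue
            for lineno, line in enumerate(
                    path.read_text(errors="ignore").splitlines(), start=1):
                if term in line.lower():
                    print(f"{path}:{lineno}: {line.strip()}")

    search_notes("field notes")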

In my workflow, blog writing is perhaps the main exception to the above. These days, I like writing directly into the WordPress app or into their online editor. The experience is pretty close to the “distraction-free” style of writing tools, and as WordPress saves drafts onto their online servers, I need not worry about a local app crash or device failure. But when I write with MS Word, the same is true: it either auto-saves in real time into OneDrive (via the O365 we use at work), or my local PC projects get synced into the Dropbox cloud as soon as I press Ctrl-S. And I keep pressing that key combination every five seconds or so – a habit that comes instinctively, after decades of work with earlier versions of MS Word for Windows, which could crash and take all of your hard-worked text with it at any minute.

So, happy 36th anniversary, MS Word.

Life with Photography: Then and Now

I have kept a diary, too, but I think that the best record of life and times comes from the photographs taken over the years. Most of my last-century (pre-2000s) photos are collected in traditional photo albums: I used to love the craft of making photo collages, cutting and combining pieces of photographs, written text and various found materials, such as travel tickets or brochure pieces, into travel photo albums. Some albums were more experimental: in pre-digital times it was difficult to know if a shot was technically successful or not, and as I have always mostly worked in colour rather than black-and-white, I used to order the film rolls developed and every frame printed, without seeing the final outcomes. With some out-of-focus, blurred or plain random, accidental shots included in every film spool, I had plenty of material to build collages that focused on play with colour, the dynamics of composition or some visual motif. This was fun stuff, and while one can certainly do this (and more) with digital photos, e.g. in Photoshop, there is something in cutting and combining physical photos that is not the same as a digital collage.

The first camera of my own was a Chinon CE-4, a budget-class Japanese film camera from the turn of the 1970s/1980s. It served me well over many years, with its manual and “semi-automatic” (aperture priority) exposure system and support for easy double exposures.

Chinon CE-4 (credit:
https://www.flickr.com/photos/pwiwe/463041799/in/pool-camerawiki/ ).

I started transitioning to digital photography first by scanning paper photos and slides into digital versions that could then be used for editing and publishing. Probably among my earliest actual digital cameras was the HP PhotoSmart 318, a cheap and almost toy-like device with 2.4-megapixel resolution, 8 MB of internal flash memory (plus support for CompactFlash cards), a fixed f/2.8 lens and TTL contrast-detection autofocus. I think I was shooting occasionally with this camera already in 2001, at least.

A few years after that, I started to use digital photography a bit more, in travels at least. I remember getting my first Canon cameras for this purpose. I owned at least a Canon Digital IXUS v3, which I was already using at the first DiGRA conference in Utrecht, in November 2003. Even while still clearly a “point-and-shoot” style (compact) camera, this Canon had a metal construction, and the photos it produced were a clear step up from the plastic HP device. I started to become a believer: the future was in digital photography.

Canon Digital IXUS v3 (credit:
https://fi.m.wikipedia.org/wiki/Tiedosto:Canon_Digital_Ixus_V3.jpg ).

After some saving, I finally invested in my first digital “system camera” (DSLR) in 2005. I remember taking photos in the warm Midsummer night that year with the new Canon EOS 350D, and how magical it felt. The 8.0-megapixel CMOS image sensor and the DIGIC II signal processing and control unit (a single-chip system), coupled with some decent Canon lenses, meant that it was possible to experiment with multiple shooting modes and get finely detailed and nuanced night and nature photos with it. This was also the time when I both built my own (HTML-based) online and offline “digital photo albums”, and joined the first digital photo community services, such as Flickr.

Canon EOS 550D (credit:
https://www.canon.fi/for_home/product_finder/cameras/digital_slr/eos_550d/ ).

It was five years later when I upgraded my Canon system again, this time to an EOS 550D (“Rebel T2i” in the US, “Kiss X4” in Japan). This again meant a considerable leap both in image quality and in features that relate to the speed, “intelligence” and convenience of shooting photos, as well as to the processing options that are available in-camera. The optical characteristics of cameras as such have not radically changed, and there are people who consider some vintage Zeiss, Nikkor or Leica camera lenses works of art. The benefits of the 550D over the 350D for me were mostly related to the higher-resolution sensor (18.0 megapixels this time) and the ways in which the DIGIC 4 processor reduced noise, provided much higher speeds, and even 1080p video (with live view and external microphone input).

Today, in 2019, I am still taking the Canon EOS 550D with me to any event or travel where I want to get the best quality photographs. This is more due to the lenses than the actual camera body, though. My two current smartphones – Huawei Mate 20 Pro and iPhone 8 Plus – both have cameras that come with arguably better sensors and much more capable processors than this aging, entry-level “system camera”. The iPhone has dual 12.0-megapixel sensors (f/1.8, 28mm/wide, with optical image stabilization; f/2.8, 57mm/telephoto), both accompanied by PDAF (a fast autofocus technology based on phase detection). The optics in the Huawei are developed in collaboration with Leica and come as a seamless combination of three (!) cameras: the first has a very large 40.0-megapixel sensor (f/1.8, 27mm/wide), the second has 20.0 megapixels (f/2.2, 16mm/ultrawide), and the third 8.0 megapixels (f/2.4, 80mm/telephoto). It is possible to use both optical and digital zoom capabilities in the Huawei, and to make use of efficient optical image stabilization, plus a hybrid technology involving phase detection as well as laser autofocus (a tiny laser transmitter sends a beam to the subject, and with the received information the processor is capable of calculating and adjusting for the correct focus). Huawei also utilizes advanced AI algorithms and its powerful Kirin 980 processor (with two “Neural Processing Units”, NPUs) to optimize the camera settings, and to quickly apply some in-camera postprocessing to produce “desirable” outcomes. According to available information, the Huawei Mate 20 Pro can process and recognize “4,500 images per minute and is able to differentiate between 5,000 different kinds of objects and 1,500 different photography scenarios across 25 categories” (whatever those are).

Huawei Mate 20 Pro, with its three cameras (credit: Frans Mäyrä).

But with all that computing power, today’s smartphones are not capable (not yet, at least) of outplaying the pure optical benefits available to system cameras. This is not so crucial when documenting a birthday party, for example, as the lenses in smartphones are perfectly capable for short-distance and wide-angle situations. Proper portraits are a somewhat borderline case today: a high-quality system camera lens is capable of “separating” the person from the background and blurring the background (creating the beautiful “bokeh” effect). But the powerful smartphones like the iPhone and Huawei mentioned above come effectively with an AI-assisted Photoshop built into them, and can therefore detect the key object, separate it, and blur the background with algorithms. The results can be rather good (good enough, for many users and use cases), but at the same time it must be said that when a professional photographer aims for something that can be enlarged, printed out full-page in a magazine, or otherwise used in a demanding context, a good lens attached to a system camera will prevail. This relates to basic optical laws: the aperture (the hole where the light comes in) can be much larger in such camera lenses, providing more information for the image sensor, the focal length can be longer – and the sensor itself can also be much larger, meaning that e.g. fast-moving objects (sports, animal photography) and low-light conditions will benefit (see the back-of-the-envelope comparison below). With several small lenses and sensors, the future “smart cameras” can probably provide an ever-improving challenge to more traditional photography equipment, combining and processing data and filling in information derived from machine learning, but a good lens coupled with a system camera can help create unique pictures in a more traditional manner. Both are needed, and both have a future in photography cultures, I think.
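The aperture point can be made concrete with a little arithmetic. The f-number N is the ratio of the focal length f to the effective aperture diameter D, and the light-gathering area grows with the square of D. In the comparison below, the roughly 6 mm physical focal length for a phone’s main camera is my assumption (the “27mm” figures quoted for phones are full-frame equivalents):

    N = \frac{f}{D} \quad\Rightarrow\quad D = \frac{f}{N},
    \qquad \text{light-gathering area} \propto \left(\frac{D}{2}\right)^{2}

    \text{DSLR lens, } 50\,\mathrm{mm at } f/1.8:\quad
        D \approx \frac{50}{1.8} \approx 27.8\,\mathrm{mm}
    \text{phone lens, } \sim 6\,\mathrm{mm at } f/1.8:\quad
        D \approx \frac{6}{1.8} \approx 3.3\,\mathrm{mm}

    \left(\frac{27.8}{3.3}\right)^{2} \approx 70\times
        \text{ the light-gathering area for the DSLR lens}

So at the same f-number, the dedicated lens collects on the order of seventy times more light, which is where the low-light and bokeh advantages ultimately come from.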

The main everyday benefit of e.g. the Huawei Mate 20 Pro vs an old-school DSLR such as the Canon EOS 550D is portability. Few people go to school or work with a DSLR hanging from their neck, but a pocket-size camera can always travel with you – and be available when that unique situation, light condition or rare bird/butterfly presents itself. With camera technologies improving, the system cameras are also getting smaller and lighter, though. Many professionals still prefer rather large and heavy camera bodies, as the big “grip” and solid buttons/controls provide better ergonomics, and the heavy body is also a proper counterbalance for the large and heavy telephoto lenses that many serious nature or sports photographers need for their work, for example. That said, I am currently thinking that my next system camera will probably no longer be based on the traditional SLR (single-lens reflex) architecture – which, btw, is already over three hundred years old, if the first reflex mirror “camera obscura” systems are taken into account. The mirrorless interchangeable-lens camera systems maintain the component-based architecture of body+lenses, but eliminate the moving mirror and reflective prisms of SLR systems, and use electronic viewfinders instead.

I still have my homework to do regarding the differences in how various mirrorless systems are implemented, but it also looks to my eye like there has been a rather rapid period of technical R&D in this area recently, with Sony in particular leading the way, and the big camera manufacturers like Canon and Nikon now following, releasing their own mirrorless solutions. There is not yet quite as much variety to choose from for amateur, small-budget photographers such as myself, with many initial models released into the upper, serious-enthusiast/professional price range of multiple thousands. But I’d guess that sensible budget models will follow next, and I am interested to see if it is possible to move into a new decade with a light yet powerful system that would combine some of the best aspects of the history of photography with the opportunities opened by the new computing technologies.

Sony a6000, a small mirrorless system camera body announced in 2014 (credit: https://en.wikipedia.org/wiki/Sony_α6000#/media/File:Sony_Alpha_ILCE-6000_APS-C-frame_camera_no_body_cap-Crop.jpeg).

Recommended laptops, March 2018

Every now and then I am asked to recommend what PC to buy. The great variety in individual needs and preferences makes this a thankless task – it is dangerous to follow someone else’s advice rather than doing your own homework and hands-on testing yourself. But, that said, here are some of my current favourites, based on my individual and highly idiosyncratic preferences:

My key criterion is to start from a laptop, rather than a desktop PC: laptops are powerful enough for almost anything, and they provide more versatility. When used at the office or home desk, one can plug in an external keyboard, mouse/trackball and display, and use local network resources such as printers and file servers. The Thunderbolt interface has made it easy to have all those things plugged in via a single connector, so I’d recommend checking that the laptop comes with Thunderbolt (it uses a USB-C type connector, but not all USB-C ports are Thunderbolt ports).

When we talk about laptops, my key criterion would be to first look at the weight and get as light a device as possible, considering two other key criteria: an excellent keyboard and a good touch display.

The reasons for those priorities are that I personally carry the laptop with me pretty much always, and weight is then a really important factor. If the thing is heavy, the temptation is just to leave it where it sits, rather than pick it up while rushing into a quick meeting. And when one then needs to make notes or check some information in the meeting, one is at the mercy of a smartphone picked from the pocket, and the ergonomics are much worse in that situation. Ergonomics relates to the points about an excellent keyboard and display alike. The keyboard is to me the main interface, since I write a lot. A bad or even average keyboard will make things painful in the long run, if you write hours and hours daily. Prioritising the keyboard is something that your hands, health and general life satisfaction will thank you for, in the long run.

A touch display is something that will probably divide opinions, even among technology experts. In the Apple Macintosh ecosystem of computers there is no touch screen computer available: that modality is reserved for the iPad and iPhone mobile devices. I think that a touch screen on a laptop is something that, once learned, one cannot go without. I find myself trying to scroll and swipe my non-touchscreen devices all the time nowadays. Windows 10 as an operating system currently has the best support for touch screen gestures, but there are devices in the Linux and Chromebook ecosystems that also support touch. A touch screen display makes handling applications and files easier, and zooming in and out of text and images a snap. Moving one’s hands away from the keyboard and touchpad every now and then to the edges of the screen is probably also good for ergonomics. However, trying to keep one’s hands on the laptop screen for extended times is not a good idea, as it is straining. A touch screen is not absolutely needed, but it is an excellent extra. It is important, though, that the screen is bright, sharp, and has wide viewing angles; it is really frustrating to work on dim, washed-out displays, particularly in brightly lit conditions. You have to squint, and end up with a terrible headache at the end of the day. In LCD screens, look for IPS (in-plane switching) technology, or for OLED screens. The latter, however, are still rather rare and expensive in laptops. But OLED has the best contrast, and it is the technology that smartphone manufacturers like Samsung and Apple use in their flagship mobile devices.

All other technical specifications in a laptop PC are, for me, secondary to those three. It is good to have a lot of memory, a large and fast SSD, and a powerful processor (CPU), for example, but in my experience, if you have a modern laptop that is light-weight and has an excellent keyboard and display, it will also come with other specs that are more than enough for all everyday computing tasks. Things are a bit different if we are talking about a PC that will have gaming as its primary use, for example. Then it would be important to have a discrete graphics card (GPU), rather than only the integrated graphics built into the laptop. That feature, with the related added requirements for other technology, means that such laptops are usually more pricey, and a desktop PC is in most cases a better choice for heavy-duty gaming than a laptop. But dedicated gaming laptops (with discrete graphics currently at the Nvidia Pascal architecture level – including GTX 1050, 1060 and even 1080 types) are evolving, and becoming ever more popular choices. Even while many such laptops are thick and heavy, for many gamers it is nice to be able to carry the “hulking monster” to a LAN party, eSports event, or such. But gaming laptops are not your daily, thin and light work devices for basic tasks. They are overpowered for such uses (and consume their battery too fast), and – on the other hand – if a manufacturer tries to fit a powerful discrete graphics card into a slim, lightweight frame, there will generally be overheating problems if one really starts to put the system under heavy gaming loads. The overheated system will then start “throttling”, which means that it will automatically decrease its operating speed in order to cool down. These limitations will perhaps be eased with the next, “Volta” generation of GPU microarchitecture, making thin, light and very powerful laptop computers more viable. They will probably come with a high price, though.

Having said all that, I can highlight a few systems that I think are worthy of consideration at this point in time – late March, 2018.

To start from the basics, I think that most general users would profit from having a close look at Chromebook-type laptop computers. They are a bit different from the Windows/Mac type of personal computers that many people are most familiar with, and have their own limitations, but also clear benefits. ChromeOS (the operating system by Google) is a stripped-down version of Linux, and provides a fast and reliable user experience, as the web-based, “thin-client” system does not slow down in the same way as a more complex operating system that needs to cope with all kinds of applications installed locally into it over the years. Chromebooks are fast and simple, and also secure in the sense that the operating system features auto-updating, running code in a secure “sandbox”, and verified boot, where the initial boot code checks for any system compromises. The default file location in Chromebooks is a cloud service, which might turn some away, but for a regular user it is mostly a good idea to have cloud storage: a disk crash or a lost computer does not lead to losing one’s files, as the cloud operates as an automatic backup.

ASUS Chromebook Flip (C302CA; photo © ASUS).

The ASUS Chromebook Flip (C302CA model) [see link] has been getting good reviews. I have not used this one personally, and it is on the expensive side of Chromebooks, but it has a nice design, it is rather light (1.18 kg / 2.6 pounds), and the keyboard and display are reportedly decent or even good. It has a touch screen, and can run Android apps, which is becoming one of the key future directions where ChromeOS is heading. As an alternative, consider the Samsung Chromebook Pro [see link], which apparently has a worse keyboard, but features an active stylus, which makes it strong when used as a tablet device.

For premium business use, I’d recommend having a look at the classic Thinkpad line of laptop computers. The thin and light Thinkpad X1 Carbon (2018) [see link] now also comes with a touch screen option (only in FHD/1080p resolution, though), and has a very good keyboard. It has recently been updated to 8th generation Intel processors, which, as quad-core systems, provide a performance boost. For more touch-screen-oriented users, I recommend considering the Thinkpad X1 Yoga [see link] model. Both of these Lenovo offerings are quite expensive, but come with important business-use features, like (optional) 4G/LTE-A data card connectivity. Wi-Fi is often unreliable, and going through the tethering process via a smartphone mobile hotspot is not optimal if you are rushing from meeting to meeting, or working while on the road. The Yoga model also used to have a striking OLED display, but that is being discontinued in the X1 Yoga 3rd generation (2018) models; it is replaced by a 14-inch “Dolby Vision HDR touchscreen” (max brightness of 500 nits, 2,560 x 1,440 resolution). HDR is still an emerging technology in laptop displays (and elsewhere as well), but it promises a wider colour gamut – the set of available colours. Though, I am personally happy with the OLED in the 2017 model X1 Yoga I am mostly using for daily work these days. The X1 Carbon is lighter (1.13 kg), but the X1 Yoga is not too heavy either (1.27 kg). Note, though, that the keyboard in the Yoga is not as good as in the Carbon.

Thinkpad X1 Yoga (image © Lenovo).

There are several interesting alternatives, all with their distinctive strengths (and weaknesses). I will just briefly mention these:

  • The Dell XPS 13 (2018) [see link] line of ultraportable laptops, with their excellent “InfinityEdge” displays, has also been updated to 8th gen quad-core processors, and is marketed as the “world’s smallest 13-inch laptop”, due to its very thin bezels. With a weight of 1.21 kg (2.67 pounds), the XPS 13 is very compact, and some might even miss having a bit wider bezels, for easier screen handling. The XPS does not offer a 4G/LTE module option, to my knowledge.
  • The ASUS Zenbook Pro (UX550) [see link] is a 15-inch laptop, which is a bit heavier (at 1.8 kg), but it scales up to a 4K display, and can come with a discrete GTX 1050 Ti graphics option. In return for being a bit thicker and heavier, the Zenbook Pro is reported to have a long battery life and rather capable graphics performance, with relatively minor throttling issues. It still has 7th gen processors (as quad-core versions, though).
  • Nice, pretty lightweight 15-inch laptops come from Dell (XPS 15) [see link] and LG, for example – particularly the LG gram 15 [see link], which is apparently a very impressive device, and weighs only 1.1 kg while being a 15-inch laptop; it is a shame we cannot get it here in Finland, though.
  • Huawei Matebook X Pro (photo © Huawei).
  • As Apple has (to my eyes) ruined their excellent MacBook Pro line, with a too-shallow keyboard and by not providing any touch screen options, people are free to hunt for MacBook-like experiences elsewhere. Chinese manufacturers are always fast to copy things, and the Huawei Matebook X Pro [see link] is an interesting example: it has a touch screen (3K LTPS display, 3000 x 2000 resolution with 260 PPI, 100 % colour space, 450 nits brightness), 8th gen processors, discrete GeForce MX150 graphics, a 57.4 Wh battery, a Dolby Atmos sound system, etc., etc. The package weighs 1.33 kg. It is particularly nice to see them not copying Apple in their highly limited ports and connectivity – the Matebook X Pro has Thunderbolt/USB-C, but also the older USB-A, and a regular 3.5 mm headphone port. I am dubious about the quality of the keyboard, though, until I have tested it personally. And one can always be a bit paranoid about the underlying security of Chinese-made information technology; but then again, the Western companies have not necessarily proved any better in that area. It is good to have more competition in the high end of laptops, as well.
  • Finally, one must also mention Microsoft, which sells its own Surface line of products, which have very good integration with the touch features of Windows 10, of course, and also generally come with displays, keyboards and touchpads that are among the very best. The Surface Book 2 [see link] is their most versatile and powerful device: there are both 15-inch and 13.5-inch models, both having quad-core processors, discrete graphics (up to GTX 1060), and good battery life (advertised as up to 17 hours, but one can trust that real-life use times will be much less). The Book 2 is a two-in-one device with a detachable screen that can work independently as a tablet. However, this setup is heavier (1.6 kg for the 13.5-inch, 1.9 kg for the 15-inch model) than the Surface Laptop [see link], which does not work as a tablet, but has a great touch screen and weighs less (c. 1.5 kg). The “surface” of this Surface Laptop is pleasant Alcantara, a cloth material.

MS Surface Laptop with alcantara (image © Microsoft).

To sum up, there are many really good options in personal computers these days, and laptops in general have evolved in many important areas. Still, it is important to get hands-on experience before committing – particularly if one will be using the new workhorse intensively; this is a crucial tool decision, after all. And personal preference (and, of course, the available budget) really matters.

Tools for Trade

Lenovo X1 Yoga (2nd gen) in tablet mode.

The key research infrastructures these days include e.g. access to online publication databases, and the ability to communicate with your colleagues (including such prosaic things as email, file sharing and real-time chat). While an astrophysicist relies on satellite data and a physicist on a particle accelerator, for example, research in the humanities and human sciences is less reliant on expensive technical infrastructures. Understanding how to do an interview, how to design a reliable survey, or being able to carefully read, analyse and interpret human texts and expressions is often enough.

That said, there are tools that are useful for researchers of many kinds and fields. A solid reference database system is one (I use Zotero). In everyday meetings and in the field, note-taking is one of the key skills and practices. While most of us carry our trusty laptops everywhere, one can do with a lightweight device, such as an iPad Pro. There are nice keyboard covers and precise active pens available for today’s tablet computers. When I type more, I usually pick up my trusty Logitech K810 (I have several of those). But the Lenovo Yoga 510 that I have at home also has the kind of keyboard that I love: snappy and precise, but light of touch, and of low profile. It is also a two-in-one, convertible laptop, but a much better version from the same company is the X1 Yoga (2nd generation). That one is equipped with a built-in active pen, while also being flexible and powerful enough to run both utility software and contemporary games and VR applications – at least when linked with an eGPU system. For that, I use the Asus ROG XG Station 2, which connects to the X1 Yoga with a Thunderbolt 3 cable, thereby plugging into the graphics power of an NVIDIA GeForce GTX 1070. A system like this has the benefit that one can carry around a reasonably light and thin laptop computer, which scales up to workstation-class capabilities when plugged in at the desk.

ROG XG Station 2 with Thunderbolt 3.

One of the most useful research tools is actually a capable smartphone. For example, with a good mobile camera one can take photos to make visual notes, photograph one’s handwritten notes, or shoot copies of projected presentation slides at seminars and conferences. When coupled with a fast 4G or Wi-Fi connection and automatic upload to a cloud service, the same photo notes appear almost immediately on the laptop computer as well, so that they can be attached to the right folder, or combined with typed observation notes and metadata (a small sketch of automating that filing step follows below). This is much faster than having a high-resolution video recording of the event; those kinds of more robust documentation setups are necessary in certain experimental settings, focus group interview sessions, collaborative innovation workshops, etc., but on many occasions written notes and mobile phone photos are just enough. I personally use both iPhone (8 Plus) and Android systems (Samsung Galaxy Note 4 and S7).
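The last step of that routine – moving freshly synced photo notes into the right project folder – is easy to automate. Here is a minimal sketch, assuming the cloud client syncs phone photos into a local “Camera Uploads” folder; the paths and the date-based filing scheme are my own assumptions:

    # Sketch: file freshly synced photo notes into a per-day notes folder.
    # The paths are hypothetical; adjust to your own sync client's layout.
    import shutil
    from datetime import date
    from pathlib import Path

    UPLOADS = Path.home() / "Dropbox" / "Camera Uploads"
    NOTES = Path.home() / "Dropbox" / "Fieldnotes" / date.today().isoformat()

    NOTES.mkdir(parents=True, exist_ok=True)
    for photo in UPLOADS.glob("*.jpg"):
        shutil.move(str(photo), str(NOTES / photo.name))
        print(f"Filed {photo.name} -> {NOTES}")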

Writing is one of the key things academics do, and writing software is a research tool category of its own. For active pen handwriting I use both Microsoft OneNote and Nebo by MyScript. Nebo is particularly good at real-time text recognition and the automatic conversion of drawn shapes into vector graphics. I link a video by them below:

My main note database is in Evernote, while online collaborative writing and planning is mostly done in Google Docs/Drive, and consortium project file sharing is done either in Dropbox or in Office365.

Microsoft Word may be the gold standard of writing software for stand-alone documents, but its relative share has gone down radically in today’s distributed and collaborative work. And while MS Word might still have the best multilingual proofing tools, for example, the first draft might come from an online Google Document, and the final copy might end up in WordPress, to be published in some research project blog or website, or in a peer-reviewed online academic publication, for example. Long, book-length projects are best handled in a dedicated writing environment such as Scrivener, but most collaborative book projects are best handled with a combination of different tools, combined with cloud-based sharing and collaboration in services like Dropbox, Drive, or Office365.

If you have not collaborated in this kind of environment, have a look at tutorials; here is just a short video introduction by Google to sharing in Docs:

What are your favourite research and writing tools?

Photography and artificial intelligence

Google Clips camera (image copyright: Google).

The main media attention in applications of AI – artificial intelligence and machine learning – has been on such application areas as smart traffic, autonomous cars, recommendation algorithms, and expert systems in all kinds of professional work. There are, however, also very interesting developments currently taking place around photography.

There are multiple areas where AI is augmenting or transforming photography. One is in how the software tools that professional and amateur photographers use are advancing. It is getting easier all the time to select complex areas in photos, for example, and to apply all kinds of useful, interesting or creative effects and functions to them (see e.g. what Adobe writes about this at: https://blogs.adobe.com/conversations/2017/10/primer-on-artificial-intelligence.html). The technical quality of photos is improving, as AI and advanced algorithmic techniques are applied to e.g. enhancing the level of detail in digital photos. Even a blurry, low-pixel file can be augmented with AI to look like a very realistic, high-resolution photo of the subject (on this, see: https://petapixel.com/2017/11/01/photo-enhancement-starting-get-crazy/).
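As a point of comparison for those AI enhancement results: the classical, non-learned way to enlarge a photo is plain interpolation, which can only smooth between existing pixels rather than invent plausible detail. Here is a minimal sketch of that baseline, using the Pillow imaging library (the file names are placeholders):

    # Sketch: classical 4x upscale by bicubic interpolation -- the baseline
    # that learned, AI-based super-resolution methods try to beat.
    # Requires Pillow (pip install Pillow); file names are placeholders.
    from PIL import Image

    img = Image.open("low_res.jpg")
    big = img.resize((img.width * 4, img.height * 4),
                     Image.Resampling.BICUBIC)
    big.save("upscaled_bicubic.jpg")
    # Interpolation cannot add information: edges stay soft, whereas a
    # trained model would fill in plausible texture and detail.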

But the applications of AI do not stop there. Google and other developers are experimenting with “AI-augmented cameras” that can recognize persons and events taking place, and take action, shooting photos and videos of moments and subjects that the AI, rather than the human photographer, deems worthy (see, e.g., Google Clips: https://www.theverge.com/2017/10/4/16405200/google-clips-camera-ai-photos-video-hands-on-wi-fi-direct). This development can go in multiple directions. There are already smart surveillance cameras, for example, that learn to recognize the family members, and differentiate them from unknown persons entering the house. Such a camera, combined with a conversant backend service, can also serve its human users in their various information needs: telling whether the kids have come home in time, or keeping track of any out-of-the-ordinary events that the camera and algorithms might have noticed. The video below features Lighthouse AI, which combines a smart security camera with such an “interactive assistant”:

In the domain of amateur (and also professional) photographer practices, AI also means many fundamental changes. There are already add-on tools like Arsenal, the “smart camera assistant”, which is based on the idea that manually tweaking all the complex settings of modern DSLR cameras is not that inspiring, or even necessary, for many users, and that a cloud-based intelligence could handle many challenging photography situations with better success than a fumbling regular user (see their Kickstarter video at: https://www.youtube.com/watch?v=mmfGeaBX-0Q). Such algorithms are also already being built into the cameras of flagship smartphones (see, e.g., the AI-enhanced camera functionalities in the Huawei Mate 10 and in Google’s Pixel 2, which use AI to produce sharper photos with better image stabilization and better-optimized dynamic range). Such smartphones, like Apple’s iPhone X, typically come with a dedicated chip for AI/machine learning operations, like Apple’s “Neural Engine”. (See e.g. https://www.wired.com/story/apples-neural-engine-infuses-the-iphone-with-ai-smarts/.)

Many of these developments point the way towards a future age of “computational photography”, where algorithms play as crucial a role in the creation of visual representations as optics do today (see: https://en.wikipedia.org/wiki/Computational_photography). It is interesting, for example, to think about situations where photographic presentations are constructed from data derived from a myriad of different kinds of optical sensors, scattered in wearable technologies and in the environment, which will try their best to match the mood, tone or message set by the human “creative director”, who is no longer employed as the actual camera operator. It is also becoming increasingly complex to define the authorship and ownership of photos, and most importantly, the privacy and data-processing issues related to visual and photographic data. We are living in interesting times…

Future of interfaces: AirPods

Apple AirPods (image © Apple).

I am a regular user of headphones of various kinds: wired and wireless, closed and open, with noise cancellation and without. The latest pieces of this technology I have invested in are the “AirPods” by Apple.

Externally, these things are almost comically similar to the standard “EarPods” that Apple bundles with its mobile devices, or offers as an upgrade option for them. The classic white Apple design is there; just the cord has been cut, leaving the connector stems protruding from the user’s ears like small antennas (which they probably indeed are, as well as directional microphone arms).

There are wireless headphone-microphone sets that have slightly better sound quality (even if AirPods are perfectly decent as wireless earbuds), or an even more neutral design. What is interesting here is partly the “seamless” user experience that Apple has invested in – and partly the “artificial intelligence” Siri assistant, which is the other key part of the AirPods concept.

The user experience of AirPods is superior to any other headphones I have tested. This is related to the way the small and light AirPods immediately connect with Apple iPhones, detect when they are placed into the ear, or taken out of it, work for hours on one charge – and quickly recharge after a short session inside their stylishly designed, smart battery case. These things “just work”, in the spirit of the original Apple philosophy. In order to achieve this, Apple has managed to create a seamless combination of tiny sensors, battery technology, and a dedicated “W1 chip” which manages the wireless functionalities of the AirPods.

The integration with the Siri assistant is the other key part of the AirPods concept, and the one that probably divides users’ views more than any other feature. A double tap on the side of an AirPod activates Siri, which can indeed understand short commands in multiple languages, and respond to them, carrying out even simple conversations with the user. Talking to an invisible assistant is not, however, part of today’s mobile user cultures – even if Spike Jonze’s film “Her” (2013) shows that the idea is certainly floating around. Still, mobile devices are often used while on the move, in public places, in buses, trains or airplanes, and it is just not feasible or socially acceptable for people to carry out constant conversations with their invisible assistants in these kinds of environments – not yet today, at least.

Regardless of this, Apple AirPods are actually, to a certain degree, designed to rely on such constant conversations, which makes them both futuristic and ambitious, and also a rather controversial piece of design and engineering. Most notably, there are no physical buttons or other ways of adjusting the volume on these headphones: you just double-tap the side of an AirPod and verbally tell Siri to turn the volume up or down. This mostly works just fine, Siri does the job, but a small touch control gesture would be so much more user-friendly.

There is something engaging in testing Siri with the AirPods, nevertheless. I did find myself walking around the neighbourhood, talking to the air, and testing what Siri can do. There are already dozens of commands and actions that can be activated with the help of AirPods and Siri (there is no official listing, but examples are given in lists like this one: https://www.cnet.com/how-to/the-complete-list-of-siri-commands/). The abilities of Siri still fall short in many areas: it did not completely understand the Finnish I used in my testing, and the integration of third-party apps is often limited, which is a real bottleneck, as those apps are what most of us use our mobile devices for, most of the time. Actually, Google’s assistant in Android is better than Siri in many areas relevant to daily life (maps and traffic information, for example), but the user experience of Google’s assistant is not yet as seamless and integrated a whole as that of Apple’s Siri.

All this considered, using AirPods is certainly another step in the general developmental direction where pervasive computing, AI, conversational interfaces and augmented reality are taking us, for good or ill. Well worth checking out, at least – for more, see Apple’s own pages: http://www.apple.com/airpods/.

Apple TV, 4th generation

Apple has been developing its television offerings on multiple fronts: in one sense, much television content, and many viewers, have already moved to Apple (and Google) platforms, as online video and streaming media keep growing in popularity. According to one market research report, traditional television viewing in the 18–24 age group (in America) dropped by almost 40 % between 2011 and 2016. At the same time, subscriptions to streaming video services (like Netflix) are growing. Some reports already suggest that young people in particular are spending more time watching streaming video than live television programs. Just in the period from 2012 to 2014, mobile video views increased by 400 %.

Still, the television set remains the centrepiece of most Western living rooms. Apple TV is designed to bring games, music, photos and movies from the Apple ecosystem to the big screen. After some problems with the old, second generation Apple TV, I today got the new, 4th generation Apple TV. It has a more powerful processor, more memory, a new remote control with a small touch surface, and runs a new version of tvOS. The most important aspect regarding expansion into new services is the ability to download and install apps and games from the thousands available in the App Store for tvOS.

After some quick testing, I think I will prefer using the Remote app on my iPhone 6 Plus, rather than navigating with the small physical remote, which feels a bit finicky. Also, for games, a dedicated video game controller (SteelSeries Nimbus) would definitely provide a better sense of control. The Nimbus should also play nice with iPhone and iPad games, in addition to Apple TV ones.

Setup of the system was simple enough, and was most easily handled via another Apple device – iCloud was utilized to access Wi-Fi and other registered home settings automatically. Apart from the somewhat tricky touch controls, the user experience is excellent. Even the default screensavers of the new system are now high-definition video clips, which are great to behold in themselves. This is not a 4k system, though: if you have already upgraded the living room television to a 4k version, the new Apple TV does not support that. Ours is still a Full HD Sony Bravia, so no problem for us. Compared to some competing streaming media boxes (like Roku 4, Amazon Fire TV, or Nvidia Shield Android TV), the feature set of Apple TV might seem a bit lacklustre for its price. The entire Apple ecosystem has its own benefits (as well as downsides), though.

Tech Tips for New Students

Going cross-platform: same text accessed via various versions of MS Word and Dropbox in Surface Pro 4, iPad Mini (with Zagg slim book keyboard case), Toshiba Chromebook 2, and iPhone 6 Plus, in the front.

There are many useful practices and tools that can be recommended for new university students; many good study practices are pretty universal, but there are also elements that relate to what and where one studies – to the institutional or disciplinary frames of academic work. Students working on a degree in theoretical physics, electronics engineering, organic chemistry, history of the Middle Ages, Japanese language or business administration, for example, will all probably have elements in their studies that are unique to their fields. I will here focus on some simple technicalities that should be useful for many students in the humanities, social sciences or digital media studies related fields, as well as for those in our own Internet and Game Studies degree program.

There are study practices that concern the daily organisation of work, and others that concern the tools, services and software one uses. My focus here is on the digital tools and technology that I have found useful – even essential – for today’s university studies, but that does not mean I would downplay the importance of non-digital, informal and more traditional ways of doing things. The way of taking notes in lectures and seminars is one example. For many people the use of pen or pencil on paper is absolutely essential, and they are most effective when using their hands to draw and write physically on paper. Also, rather than just participating in online discussion fora, having really good, traditional discussions in the campus café or bar with fellow students is important in quite many ways. That said, there are also some other tools and environments that are worth considering.

It used to be that computers were boxy things used in the university’s PC classrooms (apart from the terminals used to access the mainframes). Today, the information and communication technology landscape has greatly changed. Most students carry in their pockets smartphones that are much more capable devices than the mainframes of the past. Also, the operating systems do not matter as much as they did only a few years ago. It used to be a major choice whether one joined the camp of Windows (Microsoft-empowered PC computers), that of Apple Macintosh computers, those with Linux, or some other, more obscure camp. The capabilities and software available for each environment were different. Today, it is perfectly possible to access the same tools, software or services in all major operating environments. Thus, there is more freedom of choice.

The basic functions most of us in academia probably need daily include reading, writing, communicating/collaborating, research, data collection, scheduling and other work organisation tasks, and the use of the related tools. It is an interesting situation that most of these tasks can already be achieved with the mobile device many of us carry with us all the time. A smartphone of the iOS or Android kind can be combined with an external Bluetooth keyboard and used for taking notes in lectures, accessing online reading materials, using cloud services and most other necessary tasks. In addition, a smartphone is of course an effective tool for communication, with its apps for instant messaging and video or voice conferencing. The cameraphone capabilities can be used for taking visual notes, or for scanning one’s physical notes, with their mind maps, drawings and handwriting, into digital format. The benefit of that kind of hybrid strategy is that it takes advantage of the supreme tactile qualities of physical pen and paper, while also allowing the organisation of scanned materials into digital folders, possibly even in full-text searchable format (see the small example below).
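As a small example of that last step – making scanned notes full-text searchable – the sketch below uses the open-source Tesseract OCR engine through the pytesseract Python package (just one option among many; the file names are hypothetical, and recognition works best with printed rather than handwritten text):

from PIL import Image
import pytesseract

# Recognize the text in a photographed or scanned note page...
page = Image.open("lecture_note_scan.jpg")  # hypothetical scan
text = pytesseract.image_to_string(page)

# ...and save it next to the image, so that desktop search tools
# (Spotlight, Windows Search, etc.) can index its contents.
with open("lecture_note_scan.txt", "w", encoding="utf-8") as f:
    f.write(text)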

The best tools for this basic task of note taking and organisation are Evernote and MS OneNote. OneNote is the more fully featured one – and more complex – of these two, and allows one to create multiple notebooks, each with several different sections and pages that can include text, images, lists and many other kinds of items. Taking some time to learn how to use OneNote effectively to organise multiple materials is definitely worth it. There are also OneNote plugins for most internet browsers, allowing one to capture materials quickly while surfing various sites.

MS OneNote, Microsoft tutorial materials.

Evernote is a simpler and more straightforward tool, and this is perhaps exactly why many prefer it. Saving and searching materials in Evernote is very quick, and it has excellent mobile integration. OneNote is particularly strong if one invests in a Microsoft Surface Pro 4 (or Surface Book), which come with the Surface Pen – a great note-taking tool that allows one to quickly capture materials from a browser window, write on top of web pages, and so on. On the other hand, if one is using an Apple iPhone, iPad, or an Android phone or tablet, Evernote has characteristics that shine there. On Samsung Note devices with the “S Pen”, one can take screenshots and make handwritten notes in much the same manner as with the MS Surface Pen in the Microsoft environment.

In addition to the note-taking solution, a cloud service is one of the bedrocks of today’s academic world. Some years ago it was perfectly possible to have software or hardware crash and realize that, backups missing, all that important work was now gone. Cloud services have their question marks regarding privacy and security, but for most users the benefits are overwhelming. A tool like Dropbox will silently work in the background and make sure that the most recent versions of all files are always backed up. A file that is in the cloud can also be shared with other users, and some services have expanded into real-time collaboration environments where multiple people can discuss and work together on shared documents. This is especially strong in Google Drive and Google Docs, which include simplified versions of the familiar office tools: a text editor, a spreadsheet, and a presentation program (cf. the classic versions of Microsoft Office: Word, Excel, and PowerPoint; LibreOffice has similar, free, open-source versions).

Microsoft’s cloud service, Office 365, is currently provided for our university’s students and staff as the default environment, free of charge, and it includes the OneDrive storage service, the Outlook email system, and access to both desktop and cloud-hosted versions of the Office applications – Word Online, Excel Online, PowerPoint Online, and OneNote Online. Apple has its own iCloud system, and the Mac office tools (Pages, Numbers, and Keynote) can also be operated in a browser, as iCloud versions. All major productivity tools also have iOS and Android mobile app versions of their core functionalities available. It is also possible to save, for example, MS Office documents into MS OneDrive, or into Dropbox – seamless synchronization across multiple devices and operating systems is an excellent thing, as it makes it possible to start writing on a desktop computer, continue on a mobile device, and then finish things up with a laptop, for example.

Microsoft Windows, Apple OS X (Macintosh computers) and Linux have a longer history, but I recommend that students also have a look at Google’s Chrome OS and Chromebook devices. They are generally cheaper, and provide a reliable and very easy-to-maintain environment that can be used for perhaps 80 % or 90 % of the basic academic tasks. Chromebooks work really well with Google Drive and Google Docs, but in principle any service that can be accessed as a browser-based, cloud version also works on a Chromebook. It is possible, for example, to create documents in Word or PowerPoint Online, and save them into OneDrive or Dropbox so that they will sync with the other personal computers and mobile devices one might be using. There is a development project at Google to make it possible to run Android mobile applications on Chrome OS devices, which means that the next generation of Chromebooks (which will most likely all support touchscreens) will be even more attractive than today’s versions.

For planning, teamwork, task deadlines and calendar sharing, there are multiple tools available, ranging from MS Outlook to Google Calendar. I have found that sharing calendars generally works more easily in the Google system, while Outlook allows deeper integration into an organisation’s personnel databases and the like. It is a really good idea to plan and break down all key course work into manageable parts and to set milestones (interim deadlines) for them. This can be achieved with careful use of calendars, where one can mark down the hours required for personal work as well as teamwork, in addition to the lectures, seminars and exercise classes one’s timetable might include. That way, not all crucial jobs are packed right against the end-of-term or end-of-period deadlines. I personally use a combination of several Google Calendars (the core one synced with the official UTA Outlook calendar) and the Wunderlist to-do list app/service. There are also several dedicated project management tools (Asana, Trello, etc.), but mostly you can plan the tasks with basic tools like Google Docs and Sheets (or Word and Excel), and then break the tasks and milestones down into the calendar you share with your team.

Communication is also essential, and apart from email, people today generally utilize Facebook (Messenger, Groups, Pages), Skype, WhatsApp, Google+/Hangouts, Twitter, Instagram and similar social media tools. One of the key skills in this area is to create multiple filter settings or more fine-grained sharing settings (possibly even different accounts and profiles) for professional and private purposes. The intermixing of personal, study-related and various commercial dimensions is almost inevitable in these services, which is why some people try to avoid social media altogether. Wisely used, these services can nevertheless be immensely useful in many ways.

All those tools and services require accounts and login details that easily become rather unsafe, due e.g. to our tendency to recycle the same or very similar passwords. Please do not do that – there will inevitably be a hacking incident or some other issue with one of those services, and that will lead you into trouble in all the others, too. There are various rules-based ways of generating complex passwords for different services, and I recommend using two-factor authentication whenever it is available. This is a system where, typically, a separate mobile app or text message acts as a backup security measure whenever the service is accessed from a new device or location. Life is also much easier with a password manager like LastPass or 1Password, where one only needs to remember the master password – the service will remember the other, complex and automatically generated passwords for you. In several contemporary systems there are also face recognition (Windows 10 Hello), fingerprint authentication or iris recognition technologies that are designed to provide a further layer of protection at the hardware level. The operating systems are also getting better at protecting against computer viruses, even without dedicated anti-virus software. Note, however, that there are multiple scams and social engineering hacks in the connected, online world that even the most sophisticated anti-virus tools cannot protect you against.
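As an illustration of what “rules-based” password generation means in practice – and of what a password manager does for you automatically – here is a minimal sketch using Python’s standard-library secrets module (the length and character set here are arbitrary choices for this example):

import secrets
import string

# Long, random and unique per service is the rule of thumb; the
# 'secrets' module provides cryptographically strong randomness.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different on every run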

Finally, a reference database is an important part of any study project. While it is certainly possible to have a physical shoebox full of index cards, filled with quotes, notes and the bibliographic details of journal articles, conference papers and book chapters, it is not the most efficient way of doing things. There are comprehensive reference database management services like RefWorks (supported by UTA) and EndNote that are good for this job. I personally like Zotero, which exists as a cloud/browser-based service at Zotero.org, but which most importantly allows quick capture of full reference details through browser plugins, and then inserting references in all standard formats into course papers and thesis works, in simple copy-paste style. Shared, topic-based bibliographic databases, managed by teams, can also be set up in Zotero.org – one example is the Zotero version of the DigiPlay bibliography (created by Jason Rutter, and converted by Jesper Juul): https://www.zotero.org/groups/digiplay .

As a final note, regardless of the actual tools one uses, it is their systematic and innovative application that really sets excellent study practices apart. Even the most cutting-edge tools do not automate research and learning – that is something you need to do yourself, in your individual style. There are also other solutions, not explored in this short note, that might suit your style. Scrivener, for example, is a more comprehensive “writing studio”, where one can collect snippets of research, order fragments and create structure in a more flexible manner than is possible in e.g. MS Word (even if its Outline View is underused). The landscape of digital, physical, social and creative opportunities is expanding and changing all the time – if you have suggestions for additions to this topic, please feel free to make them below in the comments.