The Rise and Fall and Rise of MS Word and the Notepad

MS Word installation floppy. (Image: Wikipedia.)

Note-taking and writing are interesting activities. For example, it is interesting to follow how some people turn physical notepads into veritable art projects: sketchbooks, colourful pages filled with intermixing text, doodles, mindmaps and larger illustrations. Usually these artistic people like to work with real pens (or even paintbrushes) on real paper pads.

Then there was the time when Microsoft Office arrived on personal computers, and typing on a clunky keyboard into an MS Word window started to dominate intellectually productive work. (I am old enough to remember the DOS times with WordPerfect, and my first Finnish-language word processor program – “Sanatar” – that I long used on my Commodore 64 – which, btw, actually had a rather nice keyboard for typing text.)

WordPerfect 5.1 screen. (Image: Wikipedia.)

It is also interesting to note how some people still nostalgically look back to e.g. Word 6.0 (1993), or to Word 2007, which was still a pretty straightforward tool in its focus, even while introducing such modern elements as the adaptive “Ribbon” toolbar (which many people hated).

The versatility and power of Word as a multi-purpose tool has been both its strength and its main weakness. There are hundreds of operations one can carry out with MS Word, including programmable macros, printing out massive amounts of form letters or envelopes with addresses drawn from a separate data file (“Mail Merge”), and even editing and typesetting entire books (which I have personally done, even while I do not recommend it to anyone – Word was not originally designed as a desktop publishing program, even if its WYSIWYG print layout mode can be extended in that direction).

Microsoft Word 6.0, Mac version. (Image: user “MR” at https://www.macintoshrepository.org/851-microsoft-word-6)

These days, the free, open-source LibreOffice is perhaps the closest one can get to the look, interface and feature set of the “classic” Microsoft Word. It is a 2010 fork of OpenOffice.org, the earlier open-source office software suite.

Generally speaking, there appear to be at least three main directions that individual text editing programs focus on. One is writing as note-taking. This is situational and generally short form. Notes are practical, information-filled prose pieces that are often intended to be used as part of some job or project. Meeting notes, notes that summarise books one has read, or data one has gathered (notes on index cards) are some examples.

The second main type of text program focuses on writing as content production. This is something that an author working on a novel does. Also screenwriters, journalists, podcast producers and many other so-called ‘creatives’ have needs for dedicated writing software in this sense.

The third category I already briefly mentioned: text editing as publication production. One can easily use any version of MS Word to produce a classic-style software manual, for example. It can handle multiple chapters, has tools such as section breaks that allow pagination to restart or re-format in different sections of longer documents, and it also features tools for adding footnotes and endnotes, and for creating an index for the final, book-length publication. But while it provides a WYSIWYG-style print layout of pages, it does not offer the really robust page layout features that professional desktop publishing tools focus on. The fine art of tweaking kerning (the spacing between letter pairs in proportional fonts), or the very exact positioning of graphic elements on publication pages – all that is best left to tools such as PageMaker, QuarkXPress, InDesign (or LaTeX, if that is your cup of tea).

As all these three practical fields are rather different, it is obvious that a tool that excels in one is probably not optimal for another. One would not want to use heavy-duty professional publication software (e.g. InDesign) to quickly draft meeting notes, for example. The weight and complexity of the tool hinders, rather than augments, the task.

MS Word (originally published in 1983) achieved its dominant position in word processing in the early 1990s. During the 1980s there were tens of different word processing tools (eagerly competing to take the place of the earlier, mechanical and electric typewriters), but Microsoft was early to enter the graphical interface era, first publishing Word for Apple Macintosh computers (1985), then for Microsoft Windows (1989). The popularity and even de facto “industry standard” position of Word – as part of the MS Office suite – is due to several factors, but for many kinds of offices, professions and purposes, the versatility of MS Word was a good match. As the .doc file format, feature set and interface of Office and Word became the standard, it was logical for people to use it also at home. The pricing might have been an issue, though (I read somewhere that a single-user licence of “MS Office 2000 Premium” at one point had an asking price of $800).

There have been counter-reactions, and multiple alternatives offered, to the dominance of MS Word. I already mentioned OpenOffice and LibreOffice as important, leaner, free and open alternatives to the commercial behemoth. An interesting development is related to the rise of the Apple iPad as a popular mobile writing environment. Somewhat similarly to how Mac and Windows PCs heralded the transformation from the earlier, command-line era, the iPad shows signs of the (admittedly still somewhat more limited) transformative potential of the “post-PC” era. At its best, the iPad is a highly compact and intuitive, multipurpose tool that is optimised for touch-screens and simplified mobile software applications – the “apps”.

There are writing tools designed for the iPad that some people argue are better than MS Word for those who want to focus on writing in the second sense – as content production. The main argument here is that “less is better”: as these writing apps are designed just for writing, there is no danger of losing time by starting to fiddle with font settings or page layouts, for example. The iPad is also arguably a better “distraction free” writing environment, as the mobile device is designed for a single app filling the small screen entirely – while Mac and Windows, on the other hand, boast stronger multitasking capabilities, which can lead to cluttered desktops filled with multiple browser windows, other programs and other distracting elements.

Some examples of this style of dedicated writers’ tools include Scrivener (by a company called Literature and Latte, originally published for Mac in 2007), which is optimized for handling long manuscripts and the related writing processes. It has a drafting and note-handling area (with the “corkboard” metaphor), an outliner and an editor, making it also a sort of project-management tool for writers.

Scrivener. (Image: Literature and Latte.)

Another popular writing and “text project management” focused app is Ulysses (by a small German company of the same name). The initiative and main emphasis in the development of these kinds of “tools for creatives” has clearly been on the side of the Apple ecosystem, rather than those of Microsoft (or Google, or Linux). A typical writing app of this kind automatically syncs via iCloud, making the same text seamlessly available on the iPad, iPhone and Mac of the same (Apple) user.

In emphasising “distraction free writing”, many tools of this kind feature clean, empty interfaces where only the text currently being created is allowed to appear. Some have specific “focus modes” that highlight the current paragraph or sentence, and dim everything else. Popular apps of this kind include iA Writer and Bear. While there are even simpler tools for writing – Windows Notepad and Apple Notes most notably (sic) – these newer writing apps typically include essential text formatting with Markdown, a simple markup system where e.g. surrounding an expression with *single asterisks* produces italics, and **double asterisks** produce bold.

iA Writer. (Image: iA Inc.)

The big question, of course, is whether such (sometimes rather expensive and/or subscription-based) writing apps are really necessary. It is perfectly possible to create a distraction-free writing environment on a common Windows PC: one just closes all the other windows. And if the multiple menus of MS Word distract, it is possible to hide them while writing. Admittedly, the temptation to stray into exploring other areas and functions is still there, but then again, even an iPad contains multiple apps and can be used in a multitasking manner (even if not as easily as in a desktop environment, like a Mac or Windows computer). There are also ergonomic issues: a full desktop computer allows the large, standalone screen to be adjusted to a height and angle that is much better (or healthier) for longer writing sessions than the small screen of an iPad (or even a 13”/15” laptop), particularly if one tries to balance the mobile device while lying on a sofa or squeezing it into a tiny cafeteria table corner while writing. The keyboards of desktop computers typically also have better tactile and ergonomic characteristics than the virtual, on-screen keyboards, or the add-on external keyboards used with iPad-style devices. Though, with some search and experimentation, one should be able to find some rather decent solutions that work also in mobile contexts (this text is written using a Logitech “Slim Combo” keyboard cover, attached to a 10.5” iPad Pro).

For note-taking workflows, neither a word processor nor a distraction-free writing app is optimal. The leading solutions designed for this purpose include OneNote by Microsoft, and Evernote. Both are available for multiple platforms and ecosystems, and both handle text as well as rich media content, browser capture, categorisation, tagging and powerful search functions.

I have used – and am still using – all of the above-mentioned alternatives at various times and for various purposes. As years, decades and device generations have passed, archiving and access have become increasingly important criteria. I have thousands of notes in OneNote and Evernote, and hundreds of text snippets in iA Writer and in all kinds of other writing tools, often synchronized into iCloud, Dropbox, OneDrive or some other such service. Most importantly, in our Gamelab, most of our collaborative research article writing happens in Google Docs/Drive, which is still the clearest, simplest and most efficient tool for such real-time collaboration. The downside of this happily polyphonic reality is that when I need to find something specific in this jungle of text and data, it is often a difficult task involving searches across multiple tools, devices and online services.

In the end, what I am mostly using today is a combination of MS Word, Notepad (or, these days, Sublime Text 3) and Dropbox. I have 300,000+ files in my Dropbox archives, and the cross-platform synchronization, version-controlled backups and two-factor-authenticated security are features that I have grown to rely on. When I organise my projects into file folders that propagate through the Dropbox system, and use either plain text or MS Word (rich text), plus standard image file types (though often also PDFs) in these folders, it is pretty easy to find my text and data, and continue working on them where and when needed. Text editing works equally well on a personal computer, an iPad and even a smartphone. (The free, browser-based MS Word for the web, and the solid mobile app versions of MS Word, help too.) Sharing and collaboration require some thought in each individual case, though.

Dropbox. (Image: Dropbox, Inc.)

In my workflow, blog writing is perhaps the main exception to the above. These days, I like writing directly into the WordPress app or into their online editor. The experience is pretty close to the “distraction-free” style of writing tools, and as WordPress saves drafts onto their online servers, I need not worry about a local app crash or device failure. But when I write with MS Word, the same is true: it either auto-saves in real time into OneDrive (via the O365 we use at work), or my local PC projects get synced into the Dropbox cloud as soon as I press ctrl-s. And I keep pressing that key combination every five seconds or so – a habit that comes instinctively, after decades of work with earlier versions of MS Word for Windows, which could crash and take all of your hard-worked text with it at any minute.

So, happy 36th anniversary, MS Word.

Stretching the little Canon to the max

There have been endless discussions among photography enthusiasts, going on for decades, about the strengths and weaknesses of various camera manufacturers. It has been interesting to note that as history-awareness has increased, some of this discussion has moved to a sort of meta-level: rather than talking about the suitability of certain camera equipment for (certain kinds of) photography, the discussion has partly moved on to the strengths and weaknesses of the entire philosophy or product-line strategy of various manufacturers.

Canon is the example I am interested in here, particularly as this is the manufacturer whose products I have mostly been using for the past two decades or more. The dominant criticism of Canon today seems to be that they (as late adopters of mirrorless system camera technologies) are now spreading their efforts in too many directions, thereby making it hard to provide anything really strong and credible for anyone. The history of Canon is great, of course; I think that they still have the best user interface in their digital cameras, for example, and the back catalogue of Canon lenses is impressive. The problem today nevertheless is that it is difficult to see whether Canon is still committed to continuing DSLR camera and lens development at the professional and enthusiast levels long into the future (as their recent releases of the EOS 90D and 1D X Mark III DSLR bodies seem to suggest), or whether anyone with an eye towards the future should invest in the RF mount lenses and EOS R series full-frame mirrorless cameras instead. (The RF system is the most recent Canon camera family, announced in September 2018; Canon’s full-frame DSLR cameras have used EF mount lenses since 1987.) And what is the destiny of the APS-C (“crop frame”) cameras, and the EF-M mount system (introduced in 2012), in all of this?

I have long used crop frame system cameras and either EF or EF-S (yet another Canon lens family) lenses, due to the nice balance that this combination provides in terms of versatility, compact size, image quality and price – which is always an important concern for a hobbyist photographer. A few months ago I made the move into the “mirrorless era”, deciding to invest in the most affordable of these alternative systems, the Canon EF-M mount family (my choice of camera body was the tiny, yet powerful EOS M50).

The initial experiences (as I have already reported in this blog earlier) have been mostly positive – it is easy to take a good photo with this system and some decent, native EF-M lens. And it is nice that I can use an adapter to attach my older, EF mount lenses to the new, EF-M mount body, even if the autofocus might not be as fast that way. But the fact is that most of the new Canon lenses now appear to be coming out for the other, mirrorless Canon system: the full-frame RF mount cameras. And it is particularly the “serious enthusiast” or advanced hobbyist category that seems to be left in the middle. Some of the more sports- and wildlife-oriented Canon lenses and cameras that would suit them are being published in the DSLR (EF mount) ecosystem. Some of the most advanced lenses are coming out in the RF system, but the prices of many of those are more in the professional, multiple-thousands-of-euros/dollars-per-lens category. At the same time, the R system bodies seem to be missing many of the features that true professionals would need from their camera systems, so that is not really working so well, either. And those amateur photographers (like myself) who have opted for the Canon EF-M mirrorless mount system are mostly provided with compact lenses that do not have the image quality or aperture values that more advanced photography would profit from. And investing in a heavy EF lens, and then adding an adapter to get it to work with an EF-M body, does not make particularly good sense: that lens is not designed for a mirrorless system to start with, and the combination of an ultra-compact camera body and a heavy, full-frame DSLR lens is not a balanced one.

So, the advanced hobbyist / enthusiast crowd is sort of asking: Quo Vadis, Canon?

Some people have already voted with their feet, sold their Canon cameras and lenses, and bought into the Sony or Fujifilm ecosystems instead. Those competing manufacturers have the benefit of simpler and clearer mirrorless (and APS-C) camera and lens strategies. They do not have so many millions of existing users with legacy camera and lens equipment to support, of course.

I am currently just trying to make the best out of my existing cameras and lenses. My lakeside camera walk today mostly involved using the Canon L-series 70-200 mm f/4 EF lens with the old APS-C DSLR body (550D), which has a better grip for handling a larger lens. The landscape photos and detailed close-ups I shot with the new M50 and the sharp 22mm f/2 EF-M lens.

Maybe the third-party manufacturers will provide some help in strengthening the EF-M ecosystem in the future. For example, SIGMA has announced that it will soon port three of its good-quality prime lenses to the EF-M system: the Sigma 16mm, 30mm, and 56mm F1.4 DC DN Contemporary. Hopefully there will be more such quality glass coming up – also from Canon itself. Producing good-quality lenses that are also physically small enough to make sense when attached to an EF-M camera, and which also have an affordable enough price, is not a trivial achievement, it seems.

SIGMA lenses.
New SIGMA lenses for the Canon EF-M mount cameras.

Perfect blues

Hervantajärvi, a moment after the sunset.

While learning to take better photos within the opportunities and limitations provided by whatever camera technology offers, it is also interesting now and then to stop and reflect on how things are evolving.

This weekend, I took some time to study the rainy tones of Autumn, and also to hunt for the “perfect blues” of the Blue Hour – the period shortly before sunrise and shortly after sunset, when the indirect sunlight coming from the sky is dominated by short, blue wavelengths.

After a few attempts I think I got into the right spot at the right time (see the above photo, taken tonight at the beach of Hervantajärvi lake). At the time of this photo it was already so dark that I actually had trouble finding my gear and changing lenses.

I made the simple experiment of taking an evening, low-light photo with the same lens (Canon EF 50 mm f/1.8 STM) on two of my camera bodies – both the old Canon EOS 550D (DSLR) and the new EOS M50 (mirrorless). I tried to use the exact same settings for both photos, taking them only moments apart from the same spot, using a tripod. Below are two cropped details that I tried to frame to the same area of the photos.

Evening photo, using EOS 550D.
Same spot, same lens, same settings – using EOS M50.

I am not an expert in signal processing or camera electronics, but it is interesting to see how much more detail there is in the lower, M50 version. I thought that the main differences might be in how much noise there is in the low-light photo, but the differences appear to go deeper.

The cameras are generations apart from each other: the processor of the 550D is the DIGIC 4, while the M50 has the new DIGIC 8. That surely has an effect, but I think that the sensor might play an even larger role in this experiment. There is some information available on the sensors of both cameras – see the links below:

While the physical sizes of the sensors are exactly the same (22.3 x 14.9 mm), the pixel counts are different (18 megapixels vs. 24.1 megapixels). Also, the pixel density differs: 5.43 MP/cm² vs. 7.27 MP/cm², which just confirms that these two cameras, launched almost a decade apart, have very different imaging technology under the hood.
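The density figures can be sanity-checked from the sensor dimensions alone – a quick back-of-the-envelope check (the small deviations from the spec-sheet values above come from rounding of the effective megapixel counts):

```shell
# Pixel density = effective megapixels / sensor area.
# Both sensors measure 22.3 mm x 14.9 mm, i.e. about 3.32 cm^2.
awk 'BEGIN { printf "550D: %.2f MP/cm2\n", 18.0 / (2.23 * 1.49) }'
awk 'BEGIN { printf "M50:  %.2f MP/cm2\n", 24.1 / (2.23 * 1.49) }'
```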

I like using both of them, but it is important to understand their strengths and limitations. I like using the old DSLR in daylight, and particularly when trying to photograph birds or other fast-moving targets. The large grip and good-sized physical controls make a DSLR like the EOS 550D very easy and comfortable to handle.

On the other hand, when really sharp images are needed, I now rely on the mirrorless M50. Since it is a mirrorless camera, it is easy to see the final outcome of the applied settings directly in the electronic viewfinder. The M50 also has an articulated, rotating LCD screen, which is a really excellent feature when I need to reach very low, or very high, to get a nice shot. On the other hand, the buttons and the grip are physically just a bit too small to be comfortable. I never seem to hit the right switch when trying to react in a hurry, missing some nice opportunities. But when it is a still-life composition, I have plenty of time to consult the tiny controls of the M50.

To conclude: things are changing, and good (and bad) photos can be taken with all kinds of technology. And there is no one perfect camera, just different cameras that are best suited for slightly different uses and purposes.

Mobile photography – RAW or JPG?

Is it worth setting your smartphone camera to use RAW format, instead of (or alongside) the standard JPG format?

I must say I am not sure. Above you should be able to see three versions of the same photo. The first one is produced with the automatic settings of my Huawei Mate 20 Pro. It is an f/1.8 photo coming from the main camera module, processed with various algorithms to create a “nice”, tonally rather balanced JPG of 2736 x 3648 pixels.

The second one is a direct, non-edited conversion of the original RAW (imported into desktop Lightroom, then directly turned into a JPG), with 5456 x 7280 pixels and plenty of information that is potentially valuable for editing. Yet it is also a bit too dark, and the lens quality is frankly probably not quite worth all those pixels to start with (the depth of field is narrow here, and most of the photo is soft when you look at it 1:1 on a large screen).

The third version is the RAW-based and Lightroom-edited photo, where I have just accepted some “auto” corrections that the software has available for beginners. This time, we can again see many of the details better, since Lightroom has tweaked the exposure and contrast settings and the tonal curves. Yet the change of the white balance setting to the automatic “daylight” version has made the cold Autumn morning photo appear a bit too warm in colour, to my mind.

This could of course be fixed with further, more nuanced and sensible Lightroom editing, but the point perhaps is that the out-of-camera JPG that the Huawei is capable of producing is a rather nice compromise in itself. It is optimised for what the small, fixed lenses are capable of achieving, and the file size is good for sharing on social media – which is what most smartphone photos are used for, in any case. Artificial intelligence does its best to produce what a typical “Autumn Leaves” shot should look like. That, then again, might be something that you like – or not.

It is surely possible to achieve more striking and artistically ambitious (“non-typical”) outcomes when the original photo is taken in RAW, even when it is coming from a smartphone camera. But I would say that a RAW-based workflow probably really makes sense when you are using an SLR-style camera with a lens that is sharp enough for you to really go deep into the details, and to do some more ambitious cropping or tonal adjustments, for example.

Or, what do you think?

There are various articles online where you can have a further look at this, e.g.

On Tweakability

Screenshot: Linux Mint 19.2.
Linux Mint 19.2 Tina Cinnamon Edition. (See: https://www.linuxmint.com/rel_tina_cinnamon_whatsnew.php.)

Two years ago, in August 2017, I installed a new operating system on my trusty old home server (HP Proliant ML110 Gen5). That was a rather new Linux distro called elementary OS, which looked nice, but the 0.4 Loki that was available at the time was not an optimal choice for a server, as it soon turned out. It was optimized for laptop use, and while I could also set it up as a file & printer server, many things required patching and tweaking to start working. But since I install and maintain multiple operating systems in my device environment partly out of curiosity, partly for keeping my brain alert, and partly for this particular kind of fun – of tweaking – I persisted, and lived with elementary OS for two years.

Recently, there have been some interesting new versions coming out of multiple other operating systems. While I do most of my daily stuff in Windows 10 and in iOS (or iPadOS, as the iPad variant is now called), it is interesting to also try out e.g. different Linux versions, and I am also a fan of ChromeOS, which usually does not provide surprises, but rather steadily improves, while staying very clear, simple and reliable in what it does.

In terms of the particular characteristic that I am talking about here – let’s call it “tweakability” – an iPad or a Chromebook is pretty much at the opposite end of the spectrum from a personal computer or server system running some version of Linux. While the former excel in presenting the user with an extremely fine-tuned, clear and simple working environment that is simultaneously rather limited in terms of personalisation and modification, the bare-bones, expert-oriented Linux distributions in particular are hardly ever “ready” straight after the initial setup. The basic installation is in these cases rather just the starting point for users to start building their own vision of an ideal system, complete with the tools, graphical shells, and/or command-line interpreters etc. that suit their ways of working. Some strongly prefer one style of OS and its associated user experience, some the opposite. I feel it is optimal to be able to move from one kind of system to another, on the basis of what one is trying to do, and also how one wants to do it.

Tweakability is, in this sense, a measure of the customisability and modifiability of the system that is particularly important for so-called “power users”, who have very definite needs, high IT skill levels, and also clear (sometimes idiosyncratic) ideas of how computing should be done. I am personally not entirely comfortable with that style of operation, and often rather feel happy that someone else has set up an easy-to-use system for me, one which is good enough for most things. Particularly on those days when it is email, some text editing, and browser-based research in databases and publications (with some social media thrown in), a Chromebook, iPad Pro or a Windows machine with a nice keyboard and good enough screen & battery life is all that I need.

But, coming back to that home server and the new operating system installation: as my current printer has network sharing, scanning, email and all kinds of apps built in, and I do not want to run a web server from my home any more either, it is just the basic backup and file server needs that this server box needs to handle. A modern NAS box with some decent-sized disks could very well do that job. Thus, the setup of this Proliant server is more or less a hobby project that is very much oriented towards optimal tweakability these days (though not quite as much as my experiments with various Raspberry Pi hobby computers, and their operating systems).

So, I finally ended up considering three options for the new OS of this machine. The first was Ubuntu Server 18.04.3 LTS (which would have been a solid choice, but since I was already running Ubuntu on my Lenovo Yoga laptop, I wanted something a bit different). The second option would have been the new Debian 10 (Buster) minimal server (probably optimal for my old and small home server use – but I also wanted to experiment with the desktop side of the operating system in this installation). So, finally, I ended up with Linux Mint 19.2 Tina, Cinnamon Edition. It seemed to have the optimal balance of reliable Debian elements and the Ubuntu application ecosystem, combined with some nice tweaks that enhance the ease of use and also the aesthetic side of the OS.

I did a wipe-clean-style installation of Mint onto my 120 GB SSD drive, but decided to try to keep all data on the WD Red 4 TB disk. I knew in principle that this could lead to some issues: as in most new operating system installations, the new OS comes with a new user account, while the file systems keep the files registered with the original User, Group and Other specifications from the old OS installation. It would have been better to have a separate archive medium available with all the folder structures and files, then format the data disk, copy all data back under the new user account, and thereby have all the file properties, ownership details etc. exactly right. But I had already accumulated something like 2.7 terabytes of data on this particular disk and there was no exact backup of it all – since this was the backup server itself, for several devices in our house. So, I just read a quick reminder of how the chmod and chown commands work, and proceeded to mount the old data disks within the new Mint installation, take ownership of all directories and data, and tweak the user, group and other permissions into some kind of working order.
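The commands involved are roughly the following – a sketch demonstrated on a scratch directory standing in for the mounted data disk (on the real disk the chown step needs sudo, and a mount point like /mnt/data is just an example name, not the one I actually used):

```shell
# Scratch directory standing in for the re-mounted 4 TB data disk.
demo=$(mktemp -d)
mkdir -p "$demo/backups" && touch "$demo/backups/photos-2019.tar"

# Take ownership of the whole tree for the new account
# (on a real disk: sudo chown -R newuser:newuser /mnt/data).
chown -R "$(id -un)":"$(id -gn)" "$demo"

# u+rwX gives the owner read/write everywhere; the capital X adds
# execute (directory traversal) only to directories, not to plain files.
chmod -R u+rwX,g+rX,o-rwx "$demo"

ls -ld "$demo/backups"
```

The capital `X` is the important detail here: a plain `chmod -R 777` would also mark every data file as executable, while `u+rwX` keeps files non-executable and only makes directories traversable.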

Samba, the cross-platform file sharing system that I need for the mixed Windows–Linux local network to operate, was the first really difficult part this time. It was just plain confusing to get the right disks, shares and folders to appear in our LAN for the Windows users, so that the backup and file sharing could work. Again, I ended up reading dozens of hobbyist discussions and info pages from different decades and different forums, making tweak after tweak to users, groups, permissions and settings in the /etc/samba/smb.conf settings file (each time followed by stopping and restarting the Samba service daemon, to see the effects of the changes). After a few hours I got that running, but then the actual fun started, when I tried to install Dropbox, my main cloud archive, backup and sharing system, on top of the (terabyte-size) data that I had in my old Dropbox folder. In principle you can achieve this transition by first renaming the old folder e.g. as “Dropbox-OLD”, then starting the new instance of the service and letting it create a new folder named “Dropbox”, then killing the software, deleting the new folder and renaming the old folder back to its default name. After this, restarting the Dropbox software should find the old data directory where it expects one to be, and start re-indexing all that data, rather than re-downloading all of it from the cloud – which could take several days over a slow home network.
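That folder-swap recipe can be sketched as a shell session. It is simulated here on scratch directories, with the points where the real Dropbox client would be stopped and started marked in the comments (the path and file names are examples, not my actual setup):

```shell
# Scratch stand-in for the home directory.
home=$(mktemp -d)
mkdir -p "$home/Dropbox" && echo "old data" > "$home/Dropbox/notes.txt"

mv "$home/Dropbox" "$home/Dropbox-OLD"   # 1. park the existing data
mkdir "$home/Dropbox"                    # 2. (starting the client creates a fresh, empty Dropbox folder)
# 3. kill the client, then remove the empty folder it created:
rm -rf "$home/Dropbox"
mv "$home/Dropbox-OLD" "$home/Dropbox"   # 4. restore the old data under the default name
# 5. restarting the client should now re-index in place instead of re-downloading
cat "$home/Dropbox/notes.txt"
```

The whole trick rests on step 5: the client only trusts data that sits under the exact folder name it expects, which is also why getting the timing of the stop/start steps wrong can trigger the conflict mess described below.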

This time, however, something went wrong (I think there was an error in how “Selective Sync” was switched on at a certain point), leading to a situation where all the existing folders were renamed by the system as the server’s “conflicted copy”, then copied into the Dropbox cloud (including c. 330,000 files), while exactly the same files and folders were also downloaded back from the cloud into the exact same folders, without the “conflicted copy” marking. And of course I was away from the machine at this point, so when I realised what was going on, I had to kill Dropbox, and start manually bringing the Dropbox back to the state it was in before this mess. It should be noted that there is also a “Rewind Dropbox” feature in this Dropbox Plus account (which is designed exactly for rolling back in these kinds of large-scale situations). But I was no longer sure which point in time I should rewind back to, so I ended up going through about 100 different cases of conflicted copies, and also trying to manually recover various shared project folders that had become disjoined in this same process. (Btw, apologies to any of my colleagues who got some weird notifications from these project shares during this weekend.)

After spending most of one night doing this, I tried to set up my other old services on the new Mint server installation the following day. I started with Plex, a media server and client software/service system that I use e.g. to stream our family video clips from the server to our smart television. There is an entire 2,600-word essay on Linux file and folder permissions at the Plex site (see: https://support.plex.tv/articles/200288596-linux-permissions-guide/). But in the end I just had to throw up my hands. There is something in the way the system sees (or: doesn’t see) the data on the old 4 TB disk, and none of the tricks with different users and permission settings that I tried allowed Plex to see any of the data on that disk. I verified that if I copy the files onto the small system disk (the 120 GB SSD), the server can see and stream them normally. Maybe I will at some point get another large hard drive, set it up under the current OS and user, copy all the data there, and then try to reinstall and run Plex again. Meanwhile, I just have to say that I have had my share of tweakability for some time now. I think that Linux Mint in itself is a perfectly nice and capable operating system. It is just that software such as Dropbox or Plex does not play so nicely and reliably together with it – at least not with the tweaking skills that I possess. (While I am writing this, there are still over 283,500 files that the Dropbox client should restore from the cloud onto that problematic data drive. And the program keeps crashing every few hours…)
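I cannot reconstruct exactly what goes wrong on my disk, but the usual culprit in these Plex cases is directory traversal permissions. A minimal sketch of the rule in question, using a throwaway directory rather than my actual paths:

```shell
#!/bin/sh
# Illustrates the Linux permission rule that typically blocks Plex:
# to read a file, a user needs the execute ("x") bit on EVERY parent
# directory on the path, not just read permission on the file itself.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/media/videos"
echo "clip" > "$tmp/media/videos/clip.mp4"
chmod 644 "$tmp/media/videos/clip.mp4"  # the file is world-readable...
chmod 755 "$tmp/media/videos"
chmod 700 "$tmp/media"                  # ...but this parent blocks other users
# Print the mode of each path component to spot the blocking directory:
stat -c "%a  %n" "$tmp/media" "$tmp/media/videos" "$tmp/media/videos/clip.mp4"
# Opening traverse access on the parent is what would let a service
# account (such as the "plex" user) reach the files below it:
chmod o+rx "$tmp/media"
rm -rf "$tmp"
```

If any directory on the path to the media lacks the execute bit for the `plex` user, the server cannot open anything below it, no matter how permissive the files themselves are. (And if the data disk is mounted with an NTFS-style filesystem, the mount options can force modes that no `chmod` will change – which may well be part of what I was fighting with.)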

New version of “The Demon” (retrospectives, pt. 1)

Louis (Brad Pitt) destroying the Theatre of the Vampires in Interview with the Vampire
(dir. Neil Jordan). © Warner Bros., 1994.

My first book published in English was an outcome of my PhD work conducted in the late 1990s – The Demonic Texts and Textual Demons (Tampere University Press, 1999). As the subtitle hints (“The Demonic Tradition, the Self, and Popular Fiction”), this work was both a historically oriented inquiry into the demonic tradition across centuries, and an attempt to recast certain poststructuralist questions about textuality in terms of agency, or “Self”.

The methodological and theoretical subtext of this book was, on the one hand, focused on politically committed cultural studies: I was reading texts like horror movies, classical tragedies, science fiction, The Bible, and Rushdie’s The Satanic Verses from perspectives opened up by our bodily and situated existence, suffering, and possibilities for empowerment. On the other hand, I was also interested in both participating in and ‘deconstructing’ some of the theoretical contributions that the humanities – literary and art studies particularly – had made to scholarship during the 20th century. In a manner, I was turning “demonic possession” into a self-contradictory and polyphonic image of poststructuralism itself: the pursuit of overly convoluted theoretical discourses (that both reveal and hide their actual intellectual contributions at the same time) both fascinated and irritated me in particular. The vampires, zombies and cyborgs were my tools for opening the black boxes in the charnel houses of twisted “high theory” (afflicted by a syndrome that I called ‘cognitocentrism’ – the desire to hide the desiring body and the situatedness of the theorizing self from true commitment and responsibility in the actual world of people).

I have now produced a new version of this book online, as Open Access. After the recent merger of universities, the Tampere University Press (TUP) books are no longer available as physical copies, and all rights to the works have returned to the authors (see this notice). Since I had also undertaken considerable detective work at the time to secure the image rights (e.g. by writing to the Vatican Library and Warner Bros.), I have now also restored all the images – or as close versions of the originals as I could find.

The illustrated, free (Creative Commons) version can be found at this address: https://people.uta.fi/~tlilma/Demon_2005/.

I hope that the new version will find a few new readers for this early work. Here are a couple of words from my Lectio Praecursoria, delivered at the doctoral defense on 29 March 1999:

… It is my view, that the vast majority of contemporary demonic texts are created and consumed because of the anxiety evoked by such flattening and gradual loss of meaningful differences. When everything is the same, nothing really matters. Demons face us with visions which make indifference impossible.
A cultural critic should also be able to make distinctions. The ability to distinguish different audiences is important as it makes us aware how radically polyphonic people’s interpretations really can be. We may live in the same world, but we do not necessarily share the same reality. As the demonic texts strain the most sensitive of cultural division lines, they highlight and emphasise such differences. Two extreme forms of reactions appear as particularly problematic in this context: the univocal and one-dimensional rejection or denial of the demonic mode of expression, and, on the other hand, the univocal and uncritical endorsement of this area. If a critical voice has a task to do here, it is in creating dialogue, in unlocking the black-and-white positions, and in pointing out that the demonic, if properly understood, is never any single thing, but a dynamic and polyphonic field of both destructive and creative impulses.

Frans Ilkka Mäyrä (1999)

Switching to NVMe SSD

Samsung 970 EVO Plus NVMe M.2 SSD (image credit: Samsung).

I made a significant upgrade to my main gaming and home workstation at Christmas 2015. That setup is thus soon four years old, and there are certainly some areas where the age is starting to show. The new generations of processors, system memory chips and particularly graphics adapters are all significantly faster and more capable these days. For example, my GeForce GTX 970 card is now two generations behind the current state-of-the-art graphics adapters; NVIDIA’s current RTX cards are based on the new “Turing” architecture, which is e.g. capable of much more advanced ray tracing calculations than the previous generations of consumer graphics cards. What this means in practice is that rather than just applying pre-generated textures to the objects and parts of the simulated scenery, ray-traced graphics attempts to simulate how actual rays of light would bounce and create shadows and reflections in the virtual scene. Doing these kinds of calculations in real time for millions of light rays in an action-filled game scene is extremely computationally intensive, and the new cards are packed with billions of transistors in multiple specialised processor cores. You can have a closer look at this technology, with some video samples, e.g. here: https://www.digitaltrends.com/computing/what-is-ray-tracing/ .

I will probably update my graphics card, but only a little later. I am not a great fan of 3D action games to start with, and my home computing bottlenecks are increasingly in other areas. I have been actively pursuing my photography hobby, and with the new mirrorless camera (EOS M50) I have moved to using the full potential of RAW file formats and Adobe Lightroom post-processing. With photo collections growing into multiple hundreds of thousands of files, and the file size of each RAW photo (and its various-resolution previews) growing larger, it is the disk, the memory, and the speed of reading and writing all that information that matter most now.

The small update that I made this summer was focused on speeding up the entire system, and the disk I/O in particular. I got a Samsung 970 EVO Plus NVMe M.2 SSD (1 TB) as the new system disk (for more info, see here: https://www.samsung.com/semiconductor/minisite/ssd/product/consumer/970evoplus/). The interesting part here is the “NVMe” technology: it stands for Non-Volatile Memory Express, a host interface for solid-state memory devices like SSDs. This new NVMe disk looks nothing like my old hard drives, though: the entire terabyte-size disk is physically just a small add-on circuit board, which fits into the tiny M.2 connector on the motherboard (technically, it communicates over a PCI Express 3.0 interface). The entire complex of physical and logical interface and connector standards involved here is frankly a pretty terrible mess to figure out, but I was just happy to notice that the ASUS motherboard (Z170-P) which I had bought in December 2015 was future-proof enough to come with an M.2 connector that supports “x4 PCI Express 3.0 bandwidth”, which is apparently another way of saying that it has NVMe support.
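My workstation runs Windows, but on a Linux machine (such as the Mint server from earlier) there is a quick way to see which transport a disk actually uses, and thus whether a drive really registers as NVMe rather than SATA:

```shell
# List block devices with their transport type: NVMe drives report
# "nvme" in the TRAN column, while SATA drives report "sata".
lsblk -d -o NAME,SIZE,TRAN

# The NVMe controller and its namespaces also appear as device nodes
# named /dev/nvme0, /dev/nvme0n1, and so on:
ls /dev/nvme* 2>/dev/null || echo "no NVMe devices found"
```

Both commands only read system information, so they are safe to run on any machine.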

I was actually a bit nervous when I proceeded to install the Samsung 970 EVO Plus into the M.2 slot. First I updated the motherboard firmware to the latest version, then unplugged and opened the PC. The physical installation of the tiny M.2 module actually turned out to be one of the trickiest parts of the entire operation. The tiny slot is in an awkward, tight spot on the motherboard, so I had to remove some cables and the graphics card just to get my hands into it. And the single screw that is needed to fix the module in place is not one of the regular screws used in computer case installations. Instead, it is a tiny “micro-screw” which is very hard to find. Luckily I finally located my original Z170-P sales box, and there it was: the small plastic pack with a tiny mounting bolt and the microscopic screw. I had kept the box on my storage shelves all these years, without even noticing the small plastic bag and tiny screws in the first place. (I read on the Internet that there are plenty of others who have thrown the screw away with the packaging, and then later been forced to order a replacement from ASUS.)

There are also some settings that need to be changed in the BIOS to get the NVMe drive running. I’ll copy the steps that I followed below, in case they are useful for someone else (please follow them only at your own risk – and, btw, you need to start by creating the Windows 10 installation USB media from the Microsoft site, and by plugging that in before trying to reboot and enter the BIOS settings):

In your bios in Advanced Setup. Click the Advanced tab then, PCH Storage Configuration

Verify SATA controller is set to – Enabled
Set SATA Mode to – RAID

Go back one screen then, select Onboard Device Configuration.

Set SATA Mode Configuration to – SATA Express

Go back one screen. Click on the Boot tab then, scroll down the page to CSM. Click on it to go to next screen.

Set Launch CSM to – Disabled
Set Boot Device Control to – UEFI only
Boot from Network devices can be anything.
Set Boot from Storage Devices to – UEFI only
Set Boot from PCI-E PCI Expansion Devices to – UEFI only

Go back one screen. Click on Secure Boot to go to next screen.

Set Secure Boot state to – Disabled
Set OS Type to – Windows UEFI mode

Go back one screen. Look for Boot Option Priorities – Boot Option 1. Click on the down arrow in the outlined box to the right and look for your flash drive. It should be preceded by UEFI, (example UEFI Sandisk Cruzer). Select it so that it appears in this box.
(Source: https://rog.asus.com/forum/showthread.php?106842-Required-bios-settings-for-Samsung-970-evo-Nvme-M-2-SSD)

Though, in my case, when I set “Launch CSM” to “Disabled”, the settings following it in that section actually vanished from the BIOS interface. Your mileage may vary? I just backtracked at that point, made the next steps first, then did the “Launch CSM” disable step, and then proceeded further.

Another interesting part is how to partition and format the SSD and the other disks in one’s system. There are plenty of websites and discussions related to this. I noticed that Windows 10 will place some partitions on the other (not so fast) disks if those are physically connected during the first installation round. So, it took me a few Windows re-installations to actually get the boot order, partitions and disks organised to my liking. But when everything was finally set up and running, the benchmark reported that my workstation’s speed had been upgraded to the “UFO” level, so I suppose everything was worth it, in the end.

Part of the quiet, snappy and effective performance of my system after this installation can of course be due simply to the clean Windows installation itself. Four years of use, with all kinds of software and driver installations, can clutter a system so that it does not run reliably or smoothly, regardless of the underlying hardware. I also took the opportunity to physically clean the PC inside out, fix all loose and rattling components, organise the cables neatly, etc. After closing the covers, setting the PC case back in its place, plugging in a sharp 4K monitor and a new keyboard (a Logitech K470 this time), and installing just a few essential pieces of software, it was a pleasure to notice how fast everything now starts and responds, and how cool the entire PC is running, according to the system temperature sensor data.

Cool summer, everyone!