Worldcon 75 in Helsinki has generally been an excellent event, with multiple cultures, diverse forms of art and innumerable communities of fandom coming together. However, what left a bit of a bad taste in the mouth was the organizers’ decision yesterday to cancel a LARP (live action role-play) dealing with old people and dementia. The decision is highly controversial, and apparently based on some (non-Nordic) participants strongly communicating their upset that such a sensitive topic had even been allowed into the con program in the form of a “game”. On the other hand, the same people would apparently be completely fine with Alzheimer’s and similar conditions being handled in the form of a novel, for example.
There will no doubt be multiple reactions to this from experts in the field in the future. My short comment: this is an unfortunate case of censorship, based on a cultural perception of play and games as an inherently trivializing or “fun-based” form of low culture. It seems that for some people there still are strict cultural hierarchies even within popular culture, with games at the very bottom – and that handling something sensitive in the form of role-play, for example, can be an insult. Such a position completely ignores the work that has been done for decades in Nordic LARP and in digital indie “art games” (and also within the academic traditions of game studies) to expand the range of games and play as forms of cultural expression, and to remove the expectation, or stigma, of automatic triviality from interactive forms of art and culture. The organisers have obviously been pressured by some vocal individuals, but the outcome in this case was a failure to stand up, explain the value and potential of role-playing games – and Nordic LARP in particular – to an international audience, and make a difference. A sad day.
I have been running a Windows server in our basement pretty much nonstop since 2008. Originally a personal Web server, this HP Proliant machine has in recent years mostly worked as a LAN file server for backups, media archives and home-internal sharing. Even with a new 1.5 terabyte disk installed some years ago, it was running out of disk space. The old Windows Server 2008 installation was also getting painfully slow.
I decided to do a bit of an update, and got a “small” 120 GB SSD for the new system, and a WD Red 4.0 terabyte NAS disk for data. (I also considered their 8 TB “Archive” disk, but I do not need quite that much space yet, and the “Red” model was a bit faster for my general-purpose use. It was also cheaper.)
This time I decided to go the Linux way – my aging dual-core Xeon-based system is more suitable for a lighter OS than a full Windows Server installation. I was also curious to try newer Linux distributions, so I picked elementary OS, which has attracted some positive press recently.
The hardware installation took its time, but I must say that I respect the build quality of this budget-class Proliant ML110 Gen5 machine. It has been running for almost ten years without a single issue (hardware-related, I mean), and it is very solid and a pleasure to open and maintain (something that cannot be said of several consumer-oriented computers I have used).
The Linux installation, with my Samba and Dropbox components, is now also finally up and running. But I have to say that I am a bit disappointed with elementary OS (0.4.1 “Loki”) at the moment. It might have been the wrong distribution for my needs to start with. It surely looks pretty, but it is also very restricted – many essential administrative tools and features are disabled or unavailable, by design. Apparently it is made so easy and safe for beginners that it is hard to use this “eOS” for most of the things Linux is normally used for: development, programming, systems administration.
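For reference, the file-sharing side of a setup like this is mostly a matter of a short Samba configuration. A minimal sketch of an smb.conf for a LAN share (the share name, path and user here are made-up example values, not my actual setup):

```ini
# /etc/samba/smb.conf -- minimal standalone LAN file server (example values)
[global]
   workgroup = WORKGROUP
   server role = standalone server
   map to guest = bad user

[media]
   path = /srv/media
   read only = no
   valid users = myuser
```

After that, the user is added to Samba’s password database with `sudo smbpasswd -a myuser` and the smbd service restarted, and the share should be visible to Windows machines on the LAN.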
It is possible to tweak Linux installations, of course, and I have now patched and hacked the new system to be more permissive and capable, but some new issues have emerged in the process. I wonder if it is possible to simply overwrite the “elementary” parts with a regular Ubuntu Server installation, for example, or do I need to reinstall everything and lose the work that I have already done? I obviously need to study the wonderful world of Linux distros a bit more.
At home, I have been setting up and testing a new, dual-boot Win10/Linux system. The Lenovo Yoga 510 is a budget-class two-in-one device that I am currently setting up as a replacement for my old Vivobook (which unfortunately now has a broken power plug/motherboard). Key technical specs (510-14ISK, 80S70082MX model, Signature Edition) include an Intel i5-6200U processor (a 2.30–2.80 GHz Skylake model), Intel HD Graphics 520, 4 GB of DDR4 memory, a 128 GB SSD, an IPS Full HD (1920 x 1080) 14″ touch-screen display, a Synaptics touchpad and a backlit keyboard. There is WiFi (802.11 a/b/g/n/ac) and Bluetooth 4.0. Compared to some other, thinner and lighter devices, this one has a nice set of connectors: one USB 2.0 and two USB 3.0 ports (no Thunderbolt, though). There is also a combo headphone/mic jack, Harman-branded speakers, a memory card slot (SD, SDHC, SDXC, MMC), a 720p webcam, and an HDMI connector. Finally, there is a small hidden “Novo Button”, which is needed to get to the BIOS settings.
This is last year’s model (there is already a “Yoga 520” with Kaby Lake chips available), and I got a relatively good deal from the Gigantti store (499 euros). (Edit: I forgot to mention that this also has a regular, full-size wired gigabit ethernet port, which is also nice.)
The strong points (as contrasted with my trusty old Vivobook, that is) start with battery life, which in my experience – and according to Lenovo’s promises – is over eight hours of light use. The IPS panel is not the best I have seen (the MS Surface Pro has a really excellent display), but it is still really good compared to older TN panels. Multi-touch also operates pretty well, even if the touchpad is not quite to my taste (its feel is a bit ‘plasticky’, and it uses inferior Synaptics drivers, as contrasted with “precision touchpads”, which send raw data directly to Windows to handle).
The high point of Lenovo Thinkpad laptops has traditionally been their keyboards. This Yoga model is not part of the professional Thinkpad line, but the keyboard is rather good compared to the shallow, unresponsive keyboards that seem to be the trend these days. The only real problem is the non-standard positioning of the Up-Arrow/PgUp and Right Shift keys – it is really maddening when writing, as while touch-typing, every Right Shift press produces an erroneous keypress that moves the cursor up (potentially e.g. moving focus to “Send Email” rather than continuing the text, as I have already witnessed). But this can more or less be fixed with KeyTweak or a similar tool, which can remap these two keys the other way around. Not optimal, but a small nuisance, really.
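Under the hood, remapping tools of this kind write a binary “Scancode Map” value into the Windows registry (under HKLM\SYSTEM\CurrentControlSet\Control\Keyboard Layout), which the kernel applies at boot. A sketch of how such a swap blob could be built – the scancodes (0x36 for Right Shift, 0xE048 for the extended Up Arrow key) are my assumptions from standard PS/2 scancode tables, and actually applying the map would require writing this value to the registry and rebooting:

```python
import struct

# Assumed scancodes (PS/2 set 1): Right Shift and the extended Up Arrow.
RIGHT_SHIFT = 0x0036
UP_ARROW = 0xE048  # E0 prefix of extended keys goes in the high byte

def scancode_map(swaps):
    """Build a Scancode Map blob from (new, original) scancode pairs."""
    blob = struct.pack("<II", 0, 0)            # version and flags headers
    blob += struct.pack("<I", len(swaps) + 1)  # entry count incl. terminator
    for new, orig in swaps:
        blob += struct.pack("<HH", new, orig)  # little-endian WORD pairs
    blob += struct.pack("<I", 0)               # null terminator entry
    return blob

# Swap the two keys: physical Right Shift sends Up Arrow, and vice versa.
blob = scancode_map([(UP_ARROW, RIGHT_SHIFT), (RIGHT_SHIFT, UP_ARROW)])
print(blob.hex())
```

The nice thing about this mechanism is that it works at the driver level, so the remap applies in every application, unlike per-app keyboard hooks.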
Installing dual-boot Ubuntu requires the usual procedures (disabling Secure Boot and fast startup, shrinking the Windows partition, etc.), but in the end Linux runs really well on this Lenovo laptop. The touch screen and all the special keys I have tested worked flawlessly right after a standard Ubuntu 17.04 installation, without any gimmicky hacking. Having a solid (if a bit heavy) laptop with a 14-inch, touch-enabled, 360-degree rotating screen, which can be used without issues in the most recent versions of both Windows 10 and Linux, is a rather nice thing. Happy with this, at the moment.
Portable Document Format (PDF) files are a pretty standard element of academic and business life these days. The format is sort of a compromise, a tool for a life that is partly based on traditional paper documents and their conventions, and partly on new, digital functionalities. A PDF file should keep the appearance of a document the same as it moves from device to device and user to user, and it can facilitate various more advanced functionalities.
One such key function is the ability to sign a document (an agreement, a certificate, or such) with digital signatures. This can greatly speed up many critical processes in the contemporary, global, mobile and distributed lives of individuals and organisations. Rather than waiting for a key person to arrive back from a trip to their office to physically sign a document with pen and paper, a PDF version of the document can simply be emailed to that person, who then adds their digital signature to the file, saves it, and sends the signed version back.
In legal and technical terms, there is nothing stopping us from moving completely to digital signatures. There are explanations of the legal situation e.g. here:
And Adobe, the leading company in the electronic document business, provides step-by-step instructions on how to add or generate the cryptographic mechanisms that ensure the authenticity of digital signatures in PDFs with their Acrobat toolset:
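At their core, such signatures are public-key cryptography: a hash of the document is signed with the signer’s private key, and anyone with the matching public key can verify that the content has not changed since signing. A minimal sketch of the underlying mechanism, using Python’s `cryptography` library with a throwaway RSA key (real PDF signing additionally embeds the signature and an X.509 certificate into the file itself):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a throwaway key pair; in practice the signer's key would be
# backed by a certificate issued by a trusted authority.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

document = b"I hereby agree to the terms of this contract."

# Sign: a SHA-256 hash of the document, encrypted with the private key.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verify: raises InvalidSignature if the document or signature was altered.
private_key.public_key().verify(
    signature, document, padding.PKCS1v15(), hashes.SHA256()
)
print("signature valid,", len(signature), "bytes")
```

If even one byte of the document changes, verification fails – which is exactly the tamper-evidence that a pen-and-ink signature on paper cannot offer for a scanned copy.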
In my experience, most contracts and certificates are still required to be signed with a physical pen, ink, and paper, even though the digital tools exist. The reasons are not legal or technical, but rather rooted in organisational routines and processes. Many traditional organisations are still not “digital” or “paperless”, but rather built upon decades (or even centuries!) of paper trails. If the entire workflow is built upon the authority of authentic, physically signed contracts and other legal (paper) documents, it is hard to transform the system. At the same time, the current situation is far from optimal: in many cases there is double work, as everything needs to exist both as physical papers (for signing, and for paper-based archiving) and as scanned PDFs (for distribution, in intranets, in email, and in the other electronic archives that people use in practice).
There are useful new tools like Kami (https://www.kamihq.com/) that facilitate the move to a “paperless classroom”, with their easy-to-use functions for drawing, editing, and commenting on PDFs (Adobe’s business-oriented solutions are not the best answer for all users and situations).
Getting the input right is one of the most challenging issues in today’s world of pervasive, multimodal computing and services. The Surface Pro 4 is an excellent multitouch tablet, and with the Surface Pen it is perfect for reviewing and marking (key activities in academic life). The problem with a tablet as a main computer is that many productivity-oriented tasks really call for a mouse-and-keyboard style of working.
There are pretty good add-on keyboards for today’s tablet computers, and one can of course also attach a full-size keyboard and mouse combo to a Surface Pro. However, a keyboard cover that is always with you is the optimal companion for a tablet user. The official Type Cover by Microsoft is a really good compromise: it is thin and light, and has decent keys, an excellent touchpad, and a backlight, which is really important for business use. There is a certain wobbly, flexible quality to the keys, though, and writing a whole day with one can cause some strain.
I have now tested a new, much more solid alternative: the Brydge 12.3 keyboard cover. It is made of strong aluminium, has 160-degree rotating hinges that create a firm grip on the corners of the tablet, and its island-style keys are also backlit. In my experience, the usability issues with the Brydge relate, on the one hand, to the unreliability of the Bluetooth connection – sometimes I would spend several minutes after tablet wake-up waiting for the keyboard to re-establish its connection. The other issue is that the integrated touchpad is rather bad. It is hard to control precisely, pointer movement is wobbly, and not all Windows 10 mouse gestures are supported. It is also very small by today’s standards, and clicks register randomly. The sensible way to use the Brydge is alongside a wired or wireless mouse – this, however, diminishes its value as a real laptop replacement. The trackpad in the Type Cover is so much better that in regular use it ultimately trumps the Brydge’s better (or at least more solid) keyboard. On the plus side, in tactile terms the Brydge transforms the Surface Pro into a (small and heavy) laptop computer.
It is apparently hard to get a 2-in-1 device right. However, multiple manufacturers have recently introduced their own takes on the same theme, so there might be better options out there already.
I am a regular user of headphones of various kinds: wired and wireless, closed and open, with noise cancellation and without. The latest piece of this technology I invested in is the “AirPods” by Apple.
Externally, these things are almost comically similar to the standard “EarPods” that Apple provides with, or as an upgrade option for, their mobile devices. The classic white Apple design is there; just the cord has been cut, leaving the connector stems protruding from the user’s ears like small antennas (which they probably indeed are, doubling as directional microphone arms).
There are wireless headphone-microphone sets that have slightly better sound quality (even if the AirPods are perfectly decent as wireless earbuds), or even a more neutral design. What is interesting here is partly the “seamless” user experience Apple has invested in – and partly the “artificial intelligence” of the Siri assistant, which is another key part of the AirPod concept.
The user experience of the AirPods is superior to any other headphones I have tested. This comes from the way the small and light AirPods immediately connect with Apple iPhones, detect whether or not they are placed in the ear, and work for hours on one charge – and quickly recharge after a short session inside their stylishly designed, smart battery case. These things “just work”, in the spirit of the original Apple philosophy. To achieve this, Apple has managed to create a seamless combination of tiny sensors, battery technology, and a dedicated “W1 chip” that manages the wireless functionalities of the AirPods.
The integration with the Siri assistant is the other key part of the AirPod concept, and the one that probably divides users’ views more than any other feature. A double tap on the side of an AirPod activates Siri, which can indeed understand short commands in multiple languages and respond to them, carrying out even simple conversations with the user. Talking to an invisible assistant is not, however, part of today’s mobile user cultures – even if Spike Jonze’s film “Her” (2013) shows that the idea is certainly floating around. Still, mobile devices are often used while on the move, in public places, in buses, trains or airplanes, and it is just not feasible nor socially acceptable for people to carry out constant conversations with their invisible assistants in these kinds of environments – not yet, at least.
Regardless of this, Apple AirPods are actually, to a certain degree, designed to rely on such constant conversations, which makes them both futuristic and ambitious, and a rather controversial piece of design and engineering. Most notably, there are no physical buttons or other ways to adjust the volume on these headphones: you just double-tap the side of an AirPod and verbally tell Siri to turn the volume up or down. This mostly works just fine – Siri does the job – but a small touch-control gesture would be so much more user-friendly.
There is something engaging in testing Siri with the AirPods, nevertheless. I found myself walking around the neighborhood, talking to the air, and testing what Siri can do. There are already dozens of commands and actions that can be activated with the help of the AirPods and Siri (there is no official listing, but examples are given in lists like this one: https://www.cnet.com/how-to/the-complete-list-of-siri-commands/). The abilities of Siri still fall short in many areas: it did not completely understand the Finnish I used in my testing, and the integration of third-party apps is often limited, which is a real bottleneck, as those apps are what most of us use our mobile devices for, most of the time. Actually, Google’s assistant in Android is better than Siri in many areas relevant to daily life (maps and traffic information, for example), but the user experience of their assistant is not yet as seamless or integrated a whole as that of Apple’s Siri.
All things considered, using the AirPods is certainly another step in the general direction where pervasive computing, AI, conversational interfaces and augmented reality are taking us, for good or bad. Well worth checking out, at least – for more, see Apple’s own pages: http://www.apple.com/airpods/.