Server Update: Elementary Error?

I have been running a Windows server in our basement pretty much nonstop since 2008. Originally a personal web server, this HP ProLiant machine has in recent years mostly worked as a LAN file server for backups, media archives and home-internal sharing. Even with a new 1.5 terabyte disk installed some years ago, it was running out of disk space, and the old Windows Server 2008 installation was also getting painfully slow.

New server components (August 2017)

I decided to do a bit of an update, and got a “small” 120 GB SSD for the new system and a WD Red 4.0 terabyte NAS disk for data. (I also considered their 8 TB “Archive” disk, but I do not need quite that much space yet, and the “Red” model was a bit faster for my general-purpose use. It was also cheaper.)

This time I decided to go the Linux way – my aging dual-core Xeon based system is better suited to a lighter OS than a full Windows Server installation. I was also curious to try newer Linux distributions, so I picked elementary OS, which has attracted some positive press recently.

HP ProLiant ML110 G5, opened.

The hardware installation took its time, but I must say that I respect the build quality of this budget-class ProLiant ML110 Gen5 machine. It has been running for close to ten years without a single issue (hardware-related, I mean), it is very solid, and it is a pleasure to open and maintain (something that cannot be said of several consumer-oriented computers I have used).

Installing elementary OS (“Loki”)

The Linux installation, with my Samba and Dropbox components, is now also finally up and running. But I have to say that I am a bit disappointed with elementary OS (0.4.1 “Loki”) at the moment. It might have been the wrong distribution for my needs to start with. It surely looks pretty, but it is also very restricted – many essential administrative tools and features are disabled or not available, by design. Apparently it has been made so easy and safe for beginners that it is hard to use this “eOS” for most things Linux is normally used for: development, programming, systems administration.

It is possible to tweak Linux installations, of course, and I have now patched or hacked the new system to be more permissive and capable, but some new issues have emerged in the process. I wonder whether it is possible simply to convert the “elementary” installation into a regular Ubuntu Server, for example, or whether I need to reinstall everything and lose the work I have already done. I need to study the wonderful world of Linux distros a bit more, obviously.
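In the meantime, the Samba side of the setup is at least easy to sanity-check from the command line. Below is a minimal sketch – assuming a systemd-based system where the service is named smbd and the smbclient package is installed, which may differ on other configurations – that confirms the daemon is running and lists the exported shares:

```python
import subprocess

# Check that the Samba daemon is active (service name "smbd" on
# Ubuntu-derived systems; adjust if your distribution differs).
status = subprocess.run(
    ["systemctl", "is-active", "smbd"],
    capture_output=True, text=True
)
print("smbd service:", status.stdout.strip())

# List the shares the server exports, using anonymous access.
# Requires the smbclient package to be installed.
shares = subprocess.run(
    ["smbclient", "-L", "localhost", "-N"],
    capture_output=True, text=True
)
print(shares.stdout)
```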

Yoga 510, Signature Edition

At home, I have been setting up and testing a new, dual-boot Win10/Linux system. The Lenovo Yoga 510 is a budget-class two-in-one device that I am currently setting up as a replacement for my old Vivobook (which unfortunately now has a broken power plug/motherboard). Key technical specs (510-14ISK, 80S70082MX model, Signature Edition) include an Intel i5-6200U processor (a 2.3–2.8 GHz Skylake model), Intel HD Graphics 520, 4 GB of DDR4 memory, a 128 GB SSD, a 14″ IPS Full HD (1920 x 1080) touch-screen display, a Synaptics touchpad and a backlit keyboard. There is WiFi (802.11 a/b/g/n/ac) and Bluetooth 4.0. Compared to some other, thinner and lighter devices, this one has a nice set of connectors: one USB 2.0 and two USB 3.0 ports (no Thunderbolt, though). There is also a combo headphone/mic jack, Harman-branded speakers, a memory card slot (SD, SDHC, SDXC, MMC), a 720p webcam, and an HDMI connector. Finally, there is a small hidden “Novo Button”, which is needed to get to the BIOS settings.

This is last year's model (there is already a “Yoga 520” with Kaby Lake chips available), and I got a relatively good deal from the Gigantti store (499 euros). (Edit: I forgot to mention that this also has a regular, full-size wired gigabit Ethernet port, which is nice.)

The strong points (as compared to my trusty old Vivobook, that is) start with the battery life, which according to my experience and Lenovo's promises is over eight hours of light use. The IPS panel is not the best I have seen (the MS Surface Pro has a really excellent display), but it is still very good compared to the older TN panels. Multi-touch also operates pretty well, even if the touchpad is not quite to my taste (its feel is a bit ‘plasticky’, and it uses inferior Synaptics drivers rather than a “precision touchpad”, which sends raw data directly to Windows to handle).

The high point of Lenovo ThinkPad laptops has traditionally been their keyboards. This Yoga model is not part of the professional ThinkPad line, but the keyboard is rather good compared to the shallow, unresponsive keyboards that seem to be the trend these days. The only real problem is the non-standard positioning of the Up-arrow/PageUp and Right Shift keys – it is really maddening when writing, since while touch-typing every Right Shift press produces an erroneous keypress that moves the cursor up (potentially e.g. moving focus to “Send Email” rather than continuing the typing, as I have already witnessed). But this can more or less be fixed with KeyTweak or a similar tool, which can remap those two keys the other way around. Not optimal, but a small nuisance, really.

Installing dual-boot Ubuntu requires the usual procedures (disabling Secure Boot and fast startup, shrinking the Windows partition, etc.), but in the end Linux runs really well on this Lenovo laptop. The touch screen and all the special keys I have tested work flawlessly right after a standard Ubuntu 17.04 installation, without any gimmicky hacking. Having a solid (if a bit heavy) laptop with a 14-inch touch-enabled, 360-degree rotating screen, which can be used without issues in the most recent versions of both Windows 10 and Linux, is a rather nice thing. Happy with this, at the moment.

Note on working with PDFs and digital signatures

Adobe Global Guide to Electronic Signature Law

Portable Document Format (PDF) files are a pretty standard element in academic and business life these days. The format is sort of a compromise, a tool for a life that is partly based on traditional paper documents and their conventions, and partly on new, digital functionalities. A PDF file should keep the appearance of the document the same as it moves from device to device and from user to user, and it can also facilitate various more advanced functions.

One such key function is the ability to sign a document (an agreement, a certificate, or such) with a digital signature. This can greatly speed up many critical processes in the contemporary, global, mobile and distributed lives of individuals and organisations. Rather than waiting for a key person to arrive back from a trip to their office to physically sign a document with pen and paper, a PDF version of the document (for example) can simply be mailed to the person, who then adds their digital signature to the file, saves it, and sends the signed version back.
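For the curious, the cryptographic core of such a signature is conceptually simple. The sketch below is not Adobe's PDF signature format, just a minimal illustration using the Python cryptography library (the file name contract.pdf is a placeholder): the signer's private key signs a hash of the document bytes, and anyone holding the matching public key can verify that the content has not been changed afterwards.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Generate a key pair for the signer (in real use the key would come from
# a certificate issued by a trusted authority, not be created ad hoc).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Read the document to be signed ("contract.pdf" is just a placeholder).
document = open("contract.pdf", "rb").read()

# Sign a SHA-256 hash of the document with the private key.
signature = private_key.sign(
    document,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The recipient verifies the signature with the public key; any change
# to the document bytes makes this raise an InvalidSignature exception.
public_key.verify(
    signature,
    document,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Signature verified.")
```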

In legal and technical terms, there is nothing stopping us from moving completely to digital signatures. There are explanations of the legal situation e.g. here:

And Adobe, the leading company in the electronic document business, provides step-by-step instructions on how to add or generate the cryptographic mechanisms that ensure the authenticity of digital signatures in PDFs with their Acrobat toolset:

In my experience, most contracts and certificates are still required to be signed with a physical pen, ink, and paper, even though the digital tools exist. The reasons are not legal or technical, but rather rooted in organisational routines and processes. Many traditional organisations are still not “digital” or “paperless”, but rather built upon decades (or even centuries!) of paper trails. If the entire workflow is built upon the authority of authentic, physically signed contracts and other legal (paper) documents, it is hard to transform the system. At the same time, the current situation is far from optimal: in many cases there is double work, as everything needs to exist both as physical papers (for signing, and for paper-based archiving) and as scanned PDFs (for distribution in intranets, in email, and in the other electronic archives that people use in practice).

While all of us can take some small steps towards using digital signatures and getting rid of the double work (and the waste of natural resources), we can also read about the long history of the “paperless office” – a vision of the future originally popularized by a Business Week article in 1975 (see: https://en.wikipedia.org/wiki/Paperless_office and the 2001 critique by Sellen & Harper: https://mitpress.mit.edu/books/myth-paperless-office).

And, btw, a couple of useful tips:

Brydge 12.3, Surface Pro 4

Surface Pro 4, with Brydge 12.3 and MS Type Cover

Getting the input right is one of the most challenging issues in today's world of pervasive, multimodal computing and services. The Surface Pro 4 is an excellent multitouch tablet, and with the Surface Pen it is perfect for reviewing and marking (key activities in academic life). The problem with a tablet as a main computer is that many productivity-oriented tasks really call for a mouse-and-keyboard style of working.

There are pretty good add-on keyboards for today's tablet computers, and one can of course also attach a full-size keyboard and mouse combo to a Surface Pro. However, a keyboard cover that is always with you is the optimal companion for a tablet user. The official Type Cover by Microsoft is a really good compromise: it is thin and light, has decent keys, an excellent touchpad, and a backlight, which is really important for business use. There is a certain wobbly, flexible quality to the keys, though, and writing on one for a whole day can create some strain.

I have now tested a new, much more solid alternative: the Brydge 12.3 keyboard cover. It is made of strong aluminium, has 160-degree rotating hinges that grip the corners of the tablet firmly, and its island-style keys are also backlit. In my experience, the usability issues with the Brydge relate, on the one hand, to the unreliability of the Bluetooth connection – sometimes I would spend several minutes after tablet wake-up waiting for the keyboard to re-establish its connection. The other issue is that the integrated touchpad is rather bad. It is hard to control precisely, pointer movement is wobbly, and not all Windows 10 mouse gestures are supported. It is also very small by today's standards, and clicks register randomly. The sensible way to use the Brydge is alongside a wired or wireless mouse – this, however, diminishes its value as a real laptop replacement. The trackpad in the Type Cover is so much better in regular use that in the end it trumps the Brydge's better (or at least more solid) keyboard. The plus side of the Brydge is that, in tactile terms, it transforms the Surface Pro into a (small and heavy) laptop computer.

It is apparently hard to get a 2-in-1 device right. However, multiple manufacturers have recently introduced their own takes on the same theme, so there might be better options out there already.

Surface Pro 4, with Brydge 12.3 and MS Type Cover

Linux on Vivobook X202E

Ubuntu on Vivobook X202E

In January 2013 I bought an Asus Vivobook X202E, a small, budget-class, touch-screen laptop. It has now served me for almost four and a half years – an eternity in ICT terms. Some time ago it was upgraded from Windows 8 to Windows 10, which in principle works rather well. It is just that the operating system eats almost all the resources, and it is painfully slow to do anything useful, particularly with contemporary web apps and browsers. Even a Chromebook serves better in that regard.

Last night I tried installing Linux – the Ubuntu 17.04 version – in a multiboot configuration on the X202E. There were certain hurdles in the setup: it was necessary to disable Secure Boot, get into the UEFI/BIOS (by pressing F2 rapidly during the boot sequence), disable Fast Boot, enable Launch CSM (and disable Launch PXE OpROM), and enable the USB options, in order to make the system bootable from a USB installation stick. (Also, my first attempts were all failures, and it was only when I tried another USB stick that the boot-from-USB option became available in the UEFI/BIOS.)

Currently, all seems to be OK in Ubuntu, and the laptop works much faster than on the Windows side. The battery of this laptop has never been strong, and in its current condition I would say that 2–3 hours is probably the maximum it can go unplugged. Thermal cooling is also weak, but if I run the ‘indicator-cpufreq’ tool and drop the CPU to slower speeds, the system stays manageable. The reality is, however, that the realistic life cycle of this little machine is coming towards its final rounds. But it is nice to see how Linux can be used to breathe some new life into an aging system. Also, the touch controls and gestures are better in Ubuntu today than they were only a few years ago. Linux is not a touch-focused operating system by design, and gestures work rather badly in e.g. Firefox – Chrome is better in that regard. Windows 10 is much more modern in that area, and pen-based computing is something that one can really integrate into one's daily workflow only in Windows 10. But writing, coding, and various editing tasks, for example, can be handled on a small Ubuntu laptop quite nicely. Chromebooks, however, are also making promising steps by opening up the vast repositories of Android apps, which is good news for hybrid devices and touch-oriented users. Linux remains strong as a geek environment, but when user cultures and mainstream users' needs are considered, other software and service ecosystems are currently evolving faster.
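What indicator-cpufreq does through its menu can also be done directly against the kernel's cpufreq interface in sysfs. Here is a minimal sketch of the idea – assuming a driver that exposes scaling_governor files and supports a “powersave” governor, and run with root privileges:

```python
import glob

# Each logical CPU exposes its frequency scaling policy under sysfs.
governor_files = glob.glob(
    "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")

# Show the current governor for each core.
for path in sorted(governor_files):
    with open(path) as f:
        print(path, "->", f.read().strip())

# Switch every core to the "powersave" governor to keep the old,
# thermally limited machine cooler (requires root).
for path in governor_files:
    with open(path, "w") as f:
        f.write("powersave")
```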

Testing Steam Link

Steam Link, unboxing.

It is my summer vacation period now, and during a rainy week it would be nice also to play some PC games – either alone, or together as a social experience, if a game from a suitable genre is available. To bring the PC experience from my “media cave” to the living room, I installed a Steam Link, a small device designed for remotely streaming and accessing PC games – running on the gaming desktop PC (which is equipped with a powerful graphics card) in the basement – from the living room's large-screen TV.

The idea is pretty plug-and-play simple, but it actually took over an hour of troubleshooting to get the system set up right. Initially there was no image on the television screen apart from the blue Link boot symbol, and the trick that finally solved this issue was to change the HDMI cable to another one – the Link box appears to be a bit picky about those. Then, my “Xbox One Controller with the Wireless Adapter” did not work with the Steam Link (it works fine with the PC), but my old PS3 DualShock controllers worked just fine, both in wired and wireless modes. Next, there was an issue with “Dota 2”, the first game I tested, where the game got stuck on every dialog box and did not accept any input from either the gamepad or the mouse/keyboard (one can also connect Bluetooth devices to the Steam Link) – I had to run downstairs and access the game locally on the PC to get past it (I wonder what was behind that one). Oh yes, and finally it turned out that there was no game sound on the living room television from any game running via the Steam Link. This could also be fixed by going downstairs and manually changing the Windows 10 playback device to be the living room television set – the Steam software appears to get confused, and the automatic configuration ends up muting and/or playing sounds through the wrong audio devices.

Steam Store, running via Steam Link.

But after those hurdles, we got some nice all-family gameplay action with the “Jones on Fire” PC version. And there are now several more games downloading from the Steam store, so developing and selling the Steam Link box – rather cheaply – appears to be a smart move from Valve. Now, if only the multiple components and services in a typical home network would play together a bit more reliably, and the support for wireless game controllers (such as the wireless Xbox One version) were better, this would be an excellent setup.

Thunderbolt 3, eGPUs

(This is the first post in a planned series, focusing on various aspects of contemporary information and communication technologies.)

Contemporary computing is all about the flow of information: be it a personal computer, a mainframe server, a mobile device or even an embedded system in a vehicle, the computers of today are not isolated. For better or worse, increasingly all things are integrated into world-wide networks of information and computation. This also means that the ports and interfaces for all that data transfer take on even higher prominence and priority than in the old days of more locally situated processing.

Thinking about the transfer of data, some older-generation computer users might still remember things like floppy disks and other magnetic media, which were used both for saving work files and, often, for distributing and sharing that work with others. Later, optical disks, external hard drives, and USB flash drives superseded floppies, but a more fundamental shift was brought along by the Internet and “cloud-based” storage options. In some sense this development means that personal computing has returned to the historical roots of distributed computing in the ARPANET and its motivation in the sharing of computing resources. But regardless of what kind of larger network infrastructure mediates the operations of the user and the service provider, all that data still needs to flow around, somehow.

The key technologies for information and communication flows today appear to be largely wireless. Mobile phones and tablets communicate with the networks using wireless technologies, either WiFi (wireless local area networking) or cellular networks (GSM, 3G and their successors). However, all those wireless connections end up linking into wired backbone networks, which operate at much higher speeds and reliability than the often flaky local wireless connections. As the algorithms for coding, decoding and compressing data have evolved, it is possible to use wireless connections today to stream 4K Ultra HD video, or to play high-speed multiplayer games online. In most cases, however, wired connections will provide lower latency (meaning more immediate response), better resilience against errors and higher speeds. And while there are efforts to bring wireless charging to mobile phones, for example, most of the information technology we use today still needs to be plugged into some kind of wire, at least for charging its batteries.

Thunderbolt 3 infographic, (c) Intel

This is where new standards like USB-C and Thunderbolt come into the picture. Thunderbolt (currently Thunderbolt 3 is the most recent version) is a “hardware interface”, meaning a physical, electronics-based system that allows two computing systems to exchange information. This is a different thing, though, from the actual physical connector: “USB Type-C” is the full name of the most recent incarnation of the “Universal Serial Bus”, an industry standard of protocols, cables, and connectors originally released back in 1996. The introduction of the original USB was a major step towards the interoperability of electronics, as the earlier situation had been developing into a jungle of proprietary, non-compatible connectors – and USB is a major success story, with several billion connectors (and cables) shipped every year. Somewhat confusingly, the physical, reversible USB-C connector can hide many different kinds of electronics behind it, so that some USB-C ports comply with the USB 3.1 mode (with data transfer speeds up to 10 Gbit/s in the “USB 3.1 Gen 2” version), some are implemented with Thunderbolt – and some support both.

USB-C and Thunderbolt have in a certain sense achieved a considerable engineering marvel: with backward compatibility to older USB 2.0 devices, this one port and cable should be able to connect to multiple displays at 4K resolution and to external data storage devices (with speeds up to 40 Gbit/s), while also working as a power cable: with Thunderbolt support, a single USB-C port can supply, or draw, up to 100 watts of electric power – making it possible to remove separate power connectors and to share power bricks between phones, tablets, laptop computers and other devices. The small form factor Apple MacBook (“Retina”, 2015) is an example of this line of thinking. One downside of this beautiful single-port simplicity for the user is the need to carry various adapters to connect with anything outside the brave new USB-C world. In an ideal situation, however, life would be much simpler if there were only this one connector type to worry about, and a single cable could be used to dock any device to the network and gain access to large displays, storage drives, high-speed networks, and even external graphics solutions.
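To put those bandwidth figures into perspective, a quick back-of-the-envelope calculation shows how long moving, say, a hypothetical 100 GB backup would take at the nominal signalling rates of the different modes (real-world throughput is always lower because of protocol overhead):

```python
# Nominal signalling rates of the interfaces mentioned above, in Gbit/s.
rates_gbit = {
    "USB 2.0": 0.48,
    "USB 3.1 Gen 2": 10,
    "Thunderbolt 3": 40,
}

payload_gb = 100  # size of the hypothetical backup, in gigabytes

for name, gbit_per_s in rates_gbit.items():
    seconds = payload_gb * 8 / gbit_per_s  # 8 bits per byte
    print(f"{name:>14}: {seconds / 60:5.1f} minutes")
```

At the nominal rates, the same backup that takes roughly half an hour over USB 2.0 takes only a minute or two over USB 3.1 Gen 2, and well under a minute over Thunderbolt 3.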

The heterogeneity and historical layering of everyday technologies complicate the landscape that electronics manufacturers would like to paint for us. As any student of the history of science and technology can tell, even the most successful technologies did not replace the earlier ones immediately, and there have always been reasons why people have opposed the adoption of new technologies. For USB-C and Thunderbolt, the process of wider adoption is clearly well underway, but there are also multiple factors slowing it down. The typical peripheral does not yet come with USB-C, but rather with the older connectors. Even among expensive, high-end mobile phones there are still multiple models that manufacturers ship with older USB connectors rather than the new USB-C ones.

A potentially more crucial issue for most regular users is that Thunderbolt 3 and USB-C are still relatively new and immature technologies. The setup is also rather complex: with its integration of DisplayPort (video), PCI Express (PCIe, data) and DC power into a single hardware interface, it typically requires firmware and driver updates from multiple manufacturers to work seamlessly together before the TB3 magic starts happening. An integrated systems provider such as Apple has the best chances of making this work, as they control both the hardware and the software of their macOS computers. Apple is also, together with Intel, the developer of the original Thunderbolt, and the interface was first made commercially available in the 2011 version of the MacBook Pro. Today, however, there is an explosion of USB-C and Thunderbolt compatible devices coming to the market from multiple manufacturers, and users are eager to explore the full potential of this new, high-speed, interoperable wired ecosystem.

The eGPU, or external graphics processing unit, is a good example of this. There are entire hobbyist forums, like the eGPU.io website, dedicated to the fine art of connecting a full-powered desktop graphics card to a laptop computer via fast-lane connections – either ExpressCard or Thunderbolt 3. The rationale (apart from the sheer joy of tweaking) is that in this manner one can have a slim ultrabook with a long battery life for daily use, which is then capable of transforming into an impressive workstation or gaming machine when plugged into an external enclosure that houses the power-hungry graphics card (these TB3 boxes typically have a full-length PCIe slot for installing the GPU, different sets of connection ports, and a separate desktop-PC-style power supply). VR (virtual reality) applications are one example of an area where the current generation of laptops has problems: while there are e.g. Nvidia GeForce GTX 10 series (1060 etc.) equipped laptops available today, most of them are not thin and light enough for everyday mobile use, or, if they are, their battery life and/or fan noise present issues.
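On the Linux side, a quick way to see whether an enclosure's card actually shows up on the PCIe bus is simply to look for graphics-class devices in the lspci listing. A minimal sketch, assuming the pciutils package (i.e. the lspci command) is installed:

```python
import subprocess

# Ask lspci for all PCI devices and keep the graphics controllers;
# an eGPU that is powered on and authorized should appear here
# alongside the laptop's integrated GPU.
lspci = subprocess.run(["lspci"], capture_output=True, text=True)

for line in lspci.stdout.splitlines():
    if "VGA compatible controller" in line or "3D controller" in line:
        print(line)
```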

Razer, an American-Chinese computing hardware manufacturer, is known as a pioneer in popularizing the field of eGPUs, with its introduction of the Razer Blade Stealth ultrabook, which can be plugged with a TB3 cable into the Razer Core enclosure (sold separately) in order to utilize a powerful GPU card installed inside the Core unit. A popular use case for TB3/eGPU connections is plugging a powerful external graphics card into a MacBook Pro, in order to turn it into a more capable gaming machine. In practice, early adopters have faced struggles with firmware and drivers that do not provide direct support, either from the macOS side or from the eGPU unit, for the Thunderbolt 3 implementation to actually work (see e.g. https://egpu.io/akitio-node-review-the-state-of-thunderbolt-3-egpu/). However, more and more manufacturers have added support and modified their firmware updates, so the situation is already much better than a few months ago (see the instructions at: https://egpu.io/setup-guide-external-graphics-card-mac/). In the area of PC laptops running Windows 10 the situation is comparable: a work in progress, with more software support slowly emerging. Still, it is easy to get lost in this still-evolving field. For example, Dell revealed in January that they had restricted the Thunderbolt 3 PCIe data lanes in their implementation on the premium XPS 15 notebook: rather than using the full four lanes, the XPS 15 had only two PCIe lanes connected to TB3. There is e.g. this discussion on Reddit comparing the effects this has, in the typical case where the eGPU is feeding the image to an external display rather than back to the laptop's internal display (see: https://www.reddit.com/r/Dell/comments/5otmir/an_approximation_of_the_difference_between_x2_x4/). The effects are not that radical, but it is one of the technical details that early users of eGPU setups have struggled with.
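The difference the lane count makes is easy to estimate: a PCIe 3.0 lane runs at 8 GT/s, which after 128b/130b encoding works out to just under 1 GB/s of usable bandwidth per lane per direction. A small back-of-the-envelope sketch:

```python
# Approximate usable bandwidth of a single PCIe 3.0 lane, per direction.
GT_PER_S = 8.0                # raw transfer rate, gigatransfers/second
ENCODING = 128 / 130          # 128b/130b line-code efficiency
lane_gbytes = GT_PER_S * ENCODING / 8   # ~0.985 GB/s per lane

for lanes in (2, 4):
    print(f"x{lanes}: about {lanes * lane_gbytes:.1f} GB/s "
          "of PCIe 3.0 bandwidth per direction")
```

So the restricted x2 link offers roughly 2 GB/s against roughly 4 GB/s for the full x4 link – a real difference, but, as the Reddit comparison suggests, not a dramatic one when the eGPU drives an external display.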

While fascinating from an engineering or hobbyist perspective, the situation of contemporary technologies for connecting everyday devices is still far from perfect. In thousands of meeting rooms and presentation auditoriums every day, people fail to connect their computers, to get anything onto the screen, or to get access to their presentations, due to failures of connectivity. A universal, high-speed wireless standard for sharing data and displaying video would no doubt be the best solution for all. Meanwhile, a reliable and flexible high-speed standard in wired connectivity would already go a long way. The future will show whether Thunderbolt 3 can reach that kind of ubiquitous support. The present situation is pretty mixed and messy at best.