Thunderbolt 3, eGPUs

(This is the first post in a planned series, focusing on various aspects of contemporary information and communication technologies.)

Contemporary computing is all about the flow of information: be it a personal computer, a mainframe server, a mobile device, or an embedded system in a vehicle, the computers of today are not isolated. For better or worse, all things are increasingly integrated into world-wide networks of information and computation. This also means that the ports and interfaces for all that data transfer take on even higher prominence and priority than in the old days of more locally situated processing.

Thinking about data transfer, older-generation computer users might still remember floppy disks and other magnetic media, which were used both for saving work files and, often, for distributing and sharing that work with others. Later, optical discs, external hard drives, and USB flash drives superseded floppies, but a more fundamental shift was brought along by the Internet and “cloud-based” storage options. In some sense this development means that personal computing has returned to the historical roots of distributed computing in ARPANET, which was motivated by the sharing of computing resources. But regardless of what kind of larger network infrastructure mediates the operations of the user and the service provider, all that data still needs to flow around, somehow.

The key technologies for information and communication flows today appear to be largely wireless. Mobile phones and tablets communicate with the networks wirelessly, either over WiFi (wireless local area networking) or over cellular networks (GSM, 3G and their successors). However, all those wireless connections end up linking into wired backbone networks that operate at much higher speeds and reliability standards than the often flaky local wireless links. As algorithms for coding, decoding and compressing data have evolved, it is possible to use wireless connections today to stream 4K Ultra HD video, or to play fast-paced multiplayer games online. However, in most cases wired connections will provide lower latency (meaning more immediate response), better resilience against errors, and higher speeds. And while there are efforts to bring wireless charging to mobile phones, for example, most of the information technology we use today still needs to be plugged into some kind of wire, at least for charging its batteries.

Thunderbolt 3 infographic, (c) Intel

This is where new standards like USB-C and Thunderbolt come into the picture. Thunderbolt (Thunderbolt 3 being the most recent version) is a “hardware interface”, meaning it is a physical, electronics-based system that allows two computing systems to exchange information. This is a different thing, though, from the actual physical connector: “USB Type-C” is the most recent incarnation of the connector for the “Universal Serial Bus”, an industry standard of protocols, cables, and connectors originally released back in 1996. The introduction of the original USB was a major step towards the interoperability of electronics, as the earlier situation had been developing into a jungle of proprietary, non-compatible connectors – and USB is a major success story, with several billion connectors (and cables) shipped every year. Somewhat confusingly, the physical, reversible USB-C connector can hide many different kinds of electronics behind it, so that some USB-C ports comply with the USB 3.1 mode (with data transfer speeds up to 10 Gbit/s in the “USB 3.1 Gen 2” version), some are implemented with Thunderbolt – and some support both.

USB-C and Thunderbolt are in a certain sense a considerable engineering marvel: while remaining backward compatible with older USB 2.0 devices, this one port and cable should be able to connect to multiple displays at 4K resolutions and to external data storage devices (at up to 40 Gbit/s), while also working as a power cable: with Thunderbolt support, a single USB-C port can supply, or draw, up to 100 watts of electric power – making it possible to remove separate power connectors and to share power bricks between phones, tablets, laptop computers and other devices. The small form factor Apple MacBook (“Retina”, 2015) is an example of this line of thinking. One downside of this beautiful single-port simplicity for the user is the need to carry various adapters to connect with anything outside the brave new USB-C world. In an ideal situation, however, life would be much simpler if there were only this one connector type to worry about, and a single cable could dock any device to the network and give access to large displays, storage drives, high speed networks, and even external graphics solutions.
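To put these headline figures into perspective, here is a small back-of-the-envelope calculation (a sketch only: the link rates below are theoretical maxima, and real-world throughput is always lower due to protocol overhead and device limitations):

```python
# Back-of-the-envelope figures for a Thunderbolt 3 / USB-C link.
# These are theoretical link rates; actual throughput is lower.

TB3_GBIT_S = 40         # Thunderbolt 3 link rate, gigabits per second
USB31_GEN2_GBIT_S = 10  # USB 3.1 Gen 2 link rate
USB_PD_WATTS = 100      # maximum USB Power Delivery budget

def seconds_to_copy(size_gigabytes, link_gbit_s):
    """Time to move a file at the given (theoretical) link rate."""
    return size_gigabytes * 8 / link_gbit_s

# Copying a 50 GB video project over the same physical connector:
print(seconds_to_copy(50, TB3_GBIT_S))         # 10.0 s over Thunderbolt 3
print(seconds_to_copy(50, USB31_GEN2_GBIT_S))  # 40.0 s over USB 3.1 Gen 2
```

The 4x difference between the two modes of the very same connector is part of why the USB-C situation confuses buyers: the plug alone tells you nothing about the speed.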

The heterogeneity and historical layering of everyday technologies complicate the landscape that electronics manufacturers would like to paint for us. As any student of the history of science and technology can tell, even the most successful technologies did not replace the earlier ones immediately, and there have always been reasons why people have opposed the adoption of new technologies. For USB-C and Thunderbolt, wider adoption is clearly well underway, but there are also multiple factors slowing it down. The typical peripheral does not yet come with USB-C, but rather with the older connectors. Even among expensive, high end mobile phones, there are still multiple models that manufacturers ship with older USB connectors rather than with the new USB-C ones.

A potentially more crucial issue for most regular users is that Thunderbolt 3 and USB-C are still relatively new and immature technologies. The setup is also rather complex: with its integration of DisplayPort (video), PCI Express (PCIe, data) and DC power into a single hardware interface, it typically requires multiple manufacturers’ firmware and driver updates to work seamlessly together before the TB3 magic starts happening. An integrated systems provider such as Apple is best positioned to make this work, as it controls both the hardware and the software of its macOS computers. Apple also developed the original Thunderbolt together with Intel, and the interface was first made commercially available in the 2011 version of the MacBook Pro. Today, however, there is an explosion of USB-C and Thunderbolt compatible devices coming to the market from multiple manufacturers, and users are eager to explore the full potential of this new, high speed, interoperable wired ecosystem.

The eGPU, or external graphics processing unit, is a good example of this. There are entire hobbyist forums dedicated to the fine art of connecting a full-powered desktop graphics card to a laptop computer via fast-lane connections – either ExpressCard or Thunderbolt 3. The rationale (apart from the sheer joy of tweaking) is that in this manner one can have a slim ultrabook with long battery life for daily use, which transforms into an impressive workstation or gaming machine when plugged into an external enclosure housing the power-hungry graphics card (these TB3 boxes typically have a full-length PCIe slot for installing the GPU, different sets of connection ports, and a separate desktop PC style power supply). VR (virtual reality) applications are one example of an area where the current generation of laptops has problems: while there are laptops equipped with e.g. Nvidia GeForce GTX 10 series cards (1060 etc.) available today, most of them are not thin and light enough for everyday mobile use – or, if they are, their battery life and/or fan noise present issues.

Razer, an American-Singaporean computer hardware manufacturer, is known as a pioneer in popularizing the field of eGPUs with its Razer Blade Stealth ultrabook, which can be plugged with a TB3 cable into the Razer Core enclosure (sold separately) to utilize a powerful GPU card installed inside the Core unit. A popular use case for TB3/eGPU connections is plugging a powerful external graphics card into a MacBook Pro, in order to make it a more capable gaming machine. In practice, early adopters have faced struggles with firmwares and drivers that do not provide direct support, from either the macOS side or the eGPU unit, for the Thunderbolt 3 implementation to actually work. However, more and more manufacturers have added support through firmware updates, so the situation is already much better than a few months ago. In the area of PC laptops running Windows 10, the situation is comparable: a work in progress, with more software support slowly emerging. Still, it is easy to get lost in this still-evolving field. For example, Dell revealed in January that they had restricted the Thunderbolt 3 PCIe data lanes in their implementation on the premium XPS 15 notebook: rather than using the full 4 lanes, the XPS 15 has only 2 PCIe lanes connected to TB3. There is a discussion on Reddit comparing the effects this has in the typical case where the eGPU feeds the image to an external display rather than back to the laptop’s internal display. The effects are not that radical, but it is one of the technical details that early users of eGPU setups have struggled with.
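What the lane restriction means in raw numbers can be sketched as follows (theoretical one-direction figures for PCIe 3.0, before protocol overhead; actual eGPU performance depends on the workload):

```python
# Rough per-lane bandwidth of PCIe 3.0 (theoretical, before overhead).

GT_PER_S = 8.0        # PCIe 3.0 raw rate: 8 gigatransfers/s per lane
ENCODING = 128 / 130  # 128b/130b line encoding overhead

def pcie3_gbyte_s(lanes):
    """Theoretical one-direction bandwidth in gigabytes per second."""
    return lanes * GT_PER_S * ENCODING / 8

print(round(pcie3_gbyte_s(4), 2))  # ~3.94 GB/s with the full 4 lanes
print(round(pcie3_gbyte_s(2), 2))  # ~1.97 GB/s with 2 lanes (the XPS 15 case)
```

Halving the lanes halves the available bandwidth between laptop and eGPU enclosure, which matters most when the rendered image has to travel back to the laptop's internal display instead of going straight to an external monitor.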

While fascinating from an engineering or hobbyist perspective, the contemporary situation in technologies for connecting everyday devices is still far from perfect. In thousands of meeting rooms and presentation auditoriums every day, people fail to connect their computers, to get anything onto the screen, or to access their presentation due to failures of online connectivity. A universal, high speed wireless standard for sharing data and displaying video would no doubt be the best solution for all. Meanwhile, a reliable and flexible high speed standard in wired connectivity would already go a long way. The future will show whether Thunderbolt 3 can reach that kind of ubiquitous support. The present situation is pretty mixed and messy at best.

Future of interfaces: AirPods

Apple AirPods (image © Apple).

I am a regular user of headphones of various kinds: wired and wireless, closed and open, with noise cancellation and without. The latest piece of this technology I have invested in is the “AirPods” by Apple.

Externally, these things are almost comically similar to the standard “EarPods” that Apple provides with its mobile devices, or as an upgrade option for them. The classic white Apple design is there; just the cord has been cut, leaving the connector stems protruding from the user’s ears like small antennas (which they probably indeed are, as well as directional microphone arms).

There are wireless headphone-microphone sets with slightly better sound quality (even if the AirPods are perfectly decent as wireless earbuds), or even more neutral design. What is interesting here is partly the “seamless” user experience Apple has invested in – and partly the “artificial intelligence” Siri assistant, which is the other key part of the AirPods concept.

The user experience of the AirPods is superior to that of any other headphones I have tested. This is related to the way the small and light AirPods immediately connect with Apple iPhones, detect whether or not they are placed in the ear, work for hours on one charge – and quickly recharge after a short session inside their stylishly designed, smart battery case. These things “just work”, in the spirit of the original Apple philosophy. To achieve this, Apple has managed to create a seamless combination of tiny sensors, battery technology, and a dedicated “W1 chip” that manages the wireless functionalities of the AirPods.

The integration with the Siri assistant is the other key part of the AirPods concept, and the one that probably divides users’ views more than any other feature. A double tap on the side of an AirPod activates Siri, which can indeed understand short commands in multiple languages, respond to them, and carry out even simple conversations with the user. Talking to an invisible assistant is not, however, part of today’s mobile user cultures – even if Spike Jonze’s film “Her” (2013) shows that the idea is certainly floating around. Still, mobile devices are often used while on the move, in public places, in buses, trains or airplanes, and it is just not feasible or socially acceptable for people to carry out constant conversations with their invisible assistants in these kinds of environments – not yet, at least.

Regardless of this, the Apple AirPods are to a certain degree actually designed to rely on such constant conversations, which makes them both futuristic and ambitious, and a rather controversial piece of design and engineering. Most notably, there are no physical buttons or other means of adjusting the volume on these headphones: you just double tap the side of an AirPod and verbally tell Siri to turn the volume up or down. This mostly works just fine – Siri does the job – but a small touch control gesture would be so much more user friendly.

There is something engaging in testing Siri with the AirPods, nevertheless. I did find myself walking around the neighborhood, talking to the air, testing what Siri can do. There are already dozens of commands and actions that can be activated with the help of the AirPods and Siri (there is no official listing, but examples are given in various unofficial lists). The abilities of Siri still fall short in many areas: it did not completely understand the Finnish I used in my testing, and the integration of third party apps is often limited, which is a real bottleneck, as those apps are what most of us use our mobile devices for, most of the time. Actually, Google’s assistant in Android is better than Siri in many areas relevant to daily life (maps and traffic information, for example), but its user experience is not yet as seamless and integrated a whole as that of Apple’s Siri.

All this considered, using the AirPods is certainly another step in the general developmental direction where pervasive computing, AI, conversational interfaces and augmented reality are taking us, for good or bad. Well worth checking out, at least – for more, see Apple’s own pages.

10-year-update: my home pages

Update: the new design is now live. – My current university side home pages are from the year 2006, so there is a decade of Internet and WWW evolution looming over them. Static HTML is not so bad in itself – it is actually fast and reliable, compared to some flakier ways of doing things. However, people increasingly access online content with mobile devices, so a more “responsive” design (that is, web page code that scales and adapts content differently for small and large screen devices) is clearly in order.

When one builds institutional home pages as part of a university or other organisation infrastructure, there are usually various technical limitations or other issues, and so also in this case. While I have a small “personnel card” style official contact page in our staff directory, I have wanted my personal home pages to include more content that reflects my personal interests and publication activity, and to carry links to various resources that I find important or relevant. Our IT administration, however, has limited the WWW server technologies to a pretty minimal set: there is, for example, no “mod_rewrite” module loaded in the Apache that serves our home pages. That means that my original idea of going with a “flat file CMS” to create the new pages (e.g. Kirby) did not work. The only CMS I could find that worked without mod_rewrite (CMSimple) was a pain to test – too clumsy and limited in terms of design templates and editing functions for my non-coder tastes. The other main alternative was to set up a CMS that relies on an actual database (MySQL or similar), but that too was forbidden on personal home pages in our university.
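For illustration, the kind of “front controller” rewrite that flat-file CMSs like Kirby typically rely on looks roughly like this (a sketch of a typical .htaccess; the exact rules vary per CMS) – and without the mod_rewrite module loaded, Apache simply cannot process these directives, which is why such systems fail on a locked-down server:

```apache
# Typical front-controller rewrite used by flat-file CMSs (sketch):
# send every request that is not an existing file or directory
# to index.php, which then resolves the "pretty" URL internally.
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php [L]
```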

For a while I toyed with the idea of setting up a development server of my own, and using it to generate static code that I would then publish on the university server. Jekyll was the most promising option in that area. I did indeed spend a few hours (after the kids had gone to bed) setting up a development environment on my Surface Pro 4, building on top of the Bash/Ubuntu subsystem and adding Python, Ruby, etc., but some SSH public key signing bug broke the connection to GitHub, which is pretty essential for running Jekyll. Debugging that road proved to be too much for me – the “Windows Subsystem for Linux” is still pretty much a work-in-progress thing. Then I also tried to set up an Oracle VM VirtualBox with WordPress built in, but that produced interesting problems of its own. (It might also just be a good idea to use something a bit more powerful than a Surface Pro for running multiple server, photo editing and other tools at the same time – though for many things, this tablet is actually surprisingly good.)

Currently, the plan is to develop my new home pages in WordPress, using a commercial “premium” theme that comes with actual tutorials on how to use and adapt it to my needs (plus the promise of support, for when I inevitably lose my way). In the last couple of days, I have made decent progress using the Microsoft WebMatrix package, which includes an IIS server and a pretty fully featured WordPress running on top of it. I have installed the theme of my choice and the plugins it requires, and started selecting and converting content for the new framework. Microsoft, however, has decided to discontinue WebMatrix, and the current setup seems a bit buggy, which makes actual content production somewhat frustrating. The server can suddenly lose read rights to some key graphics file, for example. Or a WordPress page with long and complex code starts breaking down at some point, so that it fails to render correctly. For example, when I had reached about the halfway point in creating the code and design for my publications page, new text and graphics started appearing again from the top of the page, on top of the text that was already there!

I will probably end up setting up the home pages on another server, where I can actually get a full Apache with mod_rewrite, MySQL and the other functions necessary for implementing WordPress pages. On the UTA home pages there would then be a redirect pointing the way to the new pages. This is not optimal, since search engines will no longer find my publications and content under the domain, but it is perhaps the simplest way of getting the functionalities I want to actually run as they should. Alternatively, there are ways to turn a WordPress site into static HTML pages, which could then be uploaded to the UTA servers. But I am not holding my breath that all WordPress plugins and other more advanced features would work that way.

Happy Geek Holidays!

The Expanse, and renaissance of space operas

The Expanse, poster

There is currently a clear need for some escapism that would help to overcome the lack of vision and hope in today’s political arenas, and provide energy to keep on doing something to keep this planet of ours as humane and sustainable a living environment as possible. In the domain of entertainment, space operas have held a special place among visions of the future, and of hope. The Star Trek television series is a good reminder of this. I recently started watching a new streaming video series, The Expanse, which I knew nothing about beforehand. Soon I found myself spellbound, and had to spend most of the Finnish Father’s Day glued to binge watching the entire first season.

Without providing too many spoilers: this is a (semi-)hard science fiction television series (based on a book series of the same name) taking place in a future of our Solar System where humans have colonized the Moon, Mars, and several major asteroids in the “Belt”. There is a mystery, and a threat of interplanetary war, that sets events into motion, but most of the drama takes place at the level of individuals, representing different factions, sets of motivations, and life stories.

The Expanse would not be possible without the many “adult” science fiction series that have come before it – Babylon 5 in particular comes to mind. There is a gritty, even dystopian feel of an unfair and unfinished world in The Expanse, and it is made clear that children and other innocents always suffer from the fundamental struggle for power and wealth, which has not gone away in the 200 years into the future in which the series takes place. Yet none of the people are completely evil or totally good; rather, the series depicts how a certain idealism and self-sacrifice is also an inalienable strain of humanity. That said, the end of season one was rather heavy going, bringing up memories of the Holocaust and the military-scientific evils of the worst kind in our history. I would very much welcome season two as soon as possible, to see how all of this is going to evolve further. Or I just need to get my hands on some of those books. It is great to see that there is again faith in science fiction that can put political and existential questions on its agenda, while also keeping firmly true to its entertainment roots.

Apple TV, 4th generation

Apple has been developing its television offerings on multiple fronts: in one sense, much television content, and many viewers, have already moved to the Apple (and Google) platforms, as online video and streaming media keep growing in popularity. According to one market research report, traditional television viewing in the 18–24 age group (in America) dropped by almost 40 % between 2011 and 2016. At the same time, subscriptions to streaming video services (like Netflix) are growing. Particularly among young people, some reports already suggest more time is spent watching streaming video than live television programs. In the period from 2012 to 2014 alone, mobile video views increased by 400 %.

Still, the television set remains the centrepiece of most Western living rooms. The Apple TV is designed to bring games, music, photos and movies from the Apple ecosystem to the big screen. After some problems with the old, second generation Apple TV, today I got the new, 4th generation Apple TV. It has a more powerful processor, more memory, a new remote control with a small touch surface, and runs a new version of tvOS. The most important aspect in terms of expansion into new services is the ability to download and install apps and games from the thousands available in the App Store for tvOS.

After some quick testing, I think that I will prefer using the Remote app in my iPhone 6 Plus, rather than navigating with the small physical remote, which feels a bit finicky. Also, for games the dedicated video game controller (Steelseries Nimbus) would definitely provide a better sense of control. The Nimbus should also play nice with iPhone and iPad games, in addition to Apple TV ones.

Setup of the system was simple enough, and was most easily handled via another Apple device – iCloud was used to access Wi-Fi and other registered home settings automatically. Apart from the somewhat tricky touch controls, the user experience is excellent. Even the default screensavers of the new system are high-definition video clips this time, which are great to behold in themselves. This is not a 4K system, though: if you have already upgraded your living room television to a 4K model, the new Apple TV does not support that. Ours is still a Full HD Sony Bravia, so no problem for us. Compared to some competing streaming media boxes (like the Roku 4, Amazon Fire TV, or Nvidia Shield Android TV), the feature set of the Apple TV might seem a bit lacklustre for its price. The entire Apple ecosystem has its own benefits (as well as downsides), though.

Three movies

I had some movie tickets that were expiring on Sunday, so I went for it, watching three recent movies in a row at the cinema. All of them were examples of transmedia storytelling – two were movies based on digital games, one was based on a book. I have no time to write actual reviews, but a couple of notes:

Angry Birds Movie: the starting point feels almost like the rumoured Tetris movie trilogy – not much narrative material exists in the game to start with, but what little there is gets liberally exploited and expanded upon. In this case, we learn why the birds are angry. In the original games the different birds were colour-coded game units, each enabling a different slingshot trajectory or other ability. The movie version does a decent job of providing them with personalities, and of developing a (somewhat silly and comedy-oriented) backstory for the conflict between the birds and the pigs.

The BFG (Big Friendly Giant): this is probably the strongest of the three, when evaluated in terms of its overall cinematic qualities. The combination of Roald Dahl’s innovative children’s book and Steven Spielberg’s skills in high production value adventure movies provides a balanced mixture of humour, sense of wonder and a touch of darker themes. The most memorable element is the friendly, 24-foot (over 7-metre) giant himself, played by Mark Rylance and translated into a detailed digital version by advanced motion capture technologies and computer generated imagery. The eyes of this friendly figure are particularly lively, deep and expressive.

Warcraft: The Beginning: as the title says, this movie is set in the early stages of the history of Azeroth, the main world of the Warcraft game series. Gul’dan, an orc warlock, uses fel magic (an evil, vampiric style of magic) to open a portal from Draenor, the homeworld of the orcs (now destroyed by fel magic), to Azeroth, inhabited by humans, elves and dwarves – and a dramatic conflict ensues. The challenge in the Warcraft movie appears to be the exact opposite of the Angry Birds one: here, an abundance of characters, plotlines, wars, races, mythical places etc. has to be reduced into something resembling a more or less coherent, classical movie storyline. The reviews have generally been negative, but I actually rather liked the movie – perhaps because I have myself spent considerable time in Ironforge, Stormwind etc., as a player of the Warcraft RTS and World of Warcraft games in the past. The movie does not get very far in itself: there are perhaps ten or more significant characters, some of whom are killed, some plots are unravelled and others set into motion, and in the end everything just stops, after this prologue has provided hints at important future developments. But the landscapes are impressive, some characters relatable, and there is a constant “epic tone” in all of it (which may feel ridiculous or appropriate, depending on one’s tastes in genre fantasy).

All in all, this day of movies just pointed out how central fantasy as an element, impulse and setting has become for popular culture, and how various storyworld elements cross media boundaries with ever-increasing ease.

Angry Birds Movie © Columbia Pictures and Rovio.
The BFG © Disney.
Warcraft: The Beginning © NBCUniversal