Future of interfaces: AirPods

Apple AirPods (image © Apple).

I am a regular user of headphones of various kinds: wired and wireless, closed and open, with noise cancellation and without. The latest piece of this technology I have invested in is the “AirPods” by Apple.

Externally, these things are almost comically similar to the standard “EarPods” that Apple provides with, or as the upgrade option for, its mobile devices. The classic white Apple design is there; just the cord has been cut, leaving the connector stems protruding from the user’s ears like small antennas (which they probably are, in fact, as well as serving as directional microphone arms).

There are wireless headphone-microphone sets with slightly better sound quality (even if AirPods are perfectly decent as wireless earbuds), or with a more neutral design. What is interesting here is partly the “seamless” user experience that Apple has invested in – and partly the “artificial intelligence” Siri assistant, which is the other key part of the AirPods concept.

The user experience of AirPods is superior to that of any other headphones I have tested, which comes down to the way these small and light earbuds immediately connect with Apple iPhones, detect when they are placed into the ear or taken out, and work for hours on one charge – and quickly recharge after a short session inside their stylishly designed, smart battery case. These things “just work”, in the spirit of the original Apple philosophy. To achieve this, Apple has managed to create a seamless combination of tiny sensors, battery technology, and a dedicated “W1 chip” which manages the wireless functionality of the AirPods.

The integration with the Siri assistant is the other key part of the AirPods concept, and the one that probably divides users’ views more than any other feature. A double tap on the side of an AirPod activates Siri, which can indeed understand short commands in multiple languages, respond to them, and even carry out simple conversations with the user. Talking to an invisible assistant is not, however, part of today’s mobile user cultures – even if Spike Jonze’s film “Her” (2013) shows that the idea is certainly floating around. Still, mobile devices are often used while on the move, in public places, in buses, trains or airplanes, and it is simply not feasible, nor socially acceptable, for people to carry out constant conversations with their invisible assistants in these kinds of environments – not yet, at least.

Regardless of this, Apple AirPods are actually, to a certain degree, designed to rely on such constant conversations, which makes them both futuristic and ambitious, and also a rather controversial piece of design and engineering. Most notably, there are no physical buttons or other means of adjusting the volume on these headphones: you just double-tap the side of an AirPod and verbally tell Siri to turn the volume up or down. This mostly works just fine, and Siri does the job, but a simple touch gesture for volume control would be so much more user-friendly.

There is nevertheless something engaging in testing Siri with the AirPods. I did find myself walking around the neighborhood, talking to the air, and testing what Siri can do. There are already dozens of commands and actions that can be activated with the help of AirPods and Siri (there is no official listing, but examples are given in lists like this one: https://www.cnet.com/how-to/the-complete-list-of-siri-commands/). The abilities of Siri still fall short in many areas: it did not completely understand the Finnish I used in my testing, and its integration with third-party apps is often limited, which is a real bottleneck, as those apps are what most of us use our mobile devices for most of the time. Actually, Google and the assistant it provides in Android are better than Siri in many areas relevant to daily life (maps and traffic information, for example), but the user experience of Google’s assistant is not yet as seamless or integrated a whole as that of Apple’s Siri.
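To give a sense of what that third-party integration currently involves – as a minimal, hedged sketch rather than Apple’s own example – iOS 10 lets apps expose a limited set of actions to Siri through the SiriKit framework, and only within predefined intent “domains” such as messaging or ride booking. The handler below follows the public SiriKit messaging intent as I understand it; the commented-out MyMessageService is a purely hypothetical placeholder for an app’s own backend.

```swift
import Intents

// Minimal sketch of a SiriKit Intents-extension handler (iOS 10+),
// assuming the standard "send message" intent domain.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {

    // Siri calls this when the user says e.g. "Send a message with MyApp".
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // Hand the recognized message text over to the app's own code.
        // MyMessageService is a hypothetical placeholder, not a real API:
        // MyMessageService.shared.send(intent.content, to: intent.recipients)
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```

The point of the sketch is that Siri only mediates between the user’s speech and these predefined intents: anything an app wants to do that does not fit one of Apple’s domains simply cannot be reached by voice, which is exactly the bottleneck described above.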

All this considered, using AirPods is certainly another step in the general direction where pervasive computing, AI, conversational interfaces and augmented reality are taking us, for good or ill. Well worth checking out, at least – for more, see Apple’s own pages: http://www.apple.com/airpods/.

Google Glass: quick impression

Google Glass

Today I had a chance to do a very quick test of Google Glass in a local PC/tech store. The situation was hardly optimal for any real user testing, but at least it was a chance to try on this coveted/hated device. Despite its largish frame, the eyewear is actually rather light and easy to carry. The screen (once you finally get it into your field of vision, which requires some adjustment first) is bright and sharp enough and seems to float up there, a few virtual meters away. My main frustration was with the voice control: I kept repeating “Ok Glass”, but at least this model requires that you first activate the specs through the touchpad on the right side and navigate to the correct menu mode, which displays the clock and the text “Ok Glass”; only after this can you give voice commands (e.g. “take photo”, “record video”, “google University of Tampere” – which actually produced the Wikipedia entry for “University of Tampa” – close enough…). This is not good, and I hope it was only a quirk of some out-of-date firmware (?). Otherwise, I cannot see it as very convenient to use your hand to flip through the menus (displayed on a tiny, semi-transparent floating screen) just to get into the mode where you can “naturally” enter commands. Also, the voice output from the Glass was at such a low level that it was almost impossible to hear anything in the noisy store environment.

Ok, Glass. Interesting, but we will take a closer look at some future occasion.

Ljubljana COST Meeting

COST Meeting, Ljubljana

This week, our “hybrid playfulness” research team is joining the COST FP1104 (“hybrid COST”) researchers from around Europe and elsewhere to discuss the future of media, both printed and electronic, and particularly what lies in between. The first-day agenda was fully packed, including invited presentations on topics such as 2D codes, Augmented Reality, innovative printing and rich media mobile advertising, plus keynotes by professor Naomi Baron (American University) on reading in print and digital, and principal researcher Richard Harper (Microsoft Research Cambridge) on reading as writing and collaboration in the era of cloud computing. The Slovenian capital, Ljubljana, appears to be a beautiful city, even though I have had little opportunity to look around so far.

Ljubljana, Slovenia