Transition to Mac

Apple’s M1 Processor Lineup, March 2022. (Source: Apple.)

I have been an occasional Mac user in the past: in 2007, I bought a Mac Mini (an Intel Core 2 Duo, 2.0 GHz model) in Tokyo, where I was attending the DiGRA conference. And in November 2013, I invested in a MacBook Pro with Retina Display (late 2013 model, with a 2.4 GHz Core i5 and Intel Iris graphics). Both were wonderful systems for their time, but also rather “walled garden” style environments, with no real possibility for user upgrades, and both were soon outpaced by PC systems, particularly in gaming. So I found myself returning, again and again, to more powerful PC desktops and laptops.

Now, I have again started the process of moving back into the Apple/Mac ecosystem, this time full-time: both my work and home devices, in computing as well as in mobile tech, will most likely be in the Apple camp at some point later this year. Why, you might ask – what has changed?

The limitations of Apple in upgradability and general freedom of choice are still the same. Apple devices also continue to be typically more expensive than comparably specced competitors from the non-Apple camp. It is a bit amusing to look at a bunch of smart professionals sitting next to each other, each tapping at identical Apple-logo laptops and glancing at their identical iPhones. Apple has managed to get a powerful hold on the independent professional scene (including e.g. professors, researchers, designers and developers), even while large IT departments continue to prefer PCs, mostly due to cheaper unit prices and better support for centralised “desktop management”. This is visible in universities, too, where the IT department gets PCs for support personnel and offers them as the default choice for new employees, yet many people pick a Mac if they can decide themselves.

In my case, the decision to go back to the Apple ecosystem is connected to two primary factors: the effects of the corona pandemic, and the technical progress of “Apple silicon”.

The first factor consists of all the cumulative effects resulting from three years of remote and hybrid work. Fast and reliable systems that can handle multitasking, video and audio really well are now of paramount importance. Hybrid meeting and teaching situations are particularly complex, as there is now a need to run several communication tools simultaneously, stream high-quality video and audio, possibly also record and edit audio and video, while also producing online publications (e.g., course environments, public lecture web pages, entire research project websites) that integrate video and photographic content more than used to be the case before.

In my case, it is particularly the lack of reliability and the inability of PC systems to process image and video data that has led to the decision to go back to Apple. I have a relatively powerful thin-and-light laptop for work, and a Core i5/RTX 2060 Super based gaming/workstation PC at home. The laptop became underpowered first, and some meetings now start maybe 5-10 minutes late, with my laptop trying to find the strength needed to run a few browser windows, some office software, a couple of communication and messaging apps, plus the required real-time video and audio streams. My PC workstation can still run many older games, but when I import some photo and video files while also having a couple of editing tools open, everything grinds to a halt. There is nothing as frustrating as staring at a computer screen where the “Wheel of Death” is spinning when you have many urgent things to do. I have developed a habit of constantly clicking on different background windows, and of keeping the Windows Task Manager open at all times, so that I can immediately kill any stuck processes and try to recover my work to where I was.

Recently I got the chance to test an M1 MacBook Pro (thanks, Laura), and while the laptop was equal to my mighty PC workstation in some tasks, there were processes that were easily 5-10 times faster on the Mac, particularly everything related to file management and photo and video editing. And the overall feeling of responsiveness and fluency in multitasking was just awesome. The new “Apple silicon” chips and architectures provide user experiences that are just so much better than anything I have had on the PC side in recent years.

There are multiple reasons behind this, and there are technical people who can explain the underlying factors much better than I can (see, e.g., what Erik Engheim from Oslo writes here: https://debugger.medium.com/why-is-apples-m1-chip-so-fast-3262b158cba2). The basic benefits come from the very deep integration of Apple’s System-on-a-Chip (SoC), where a whole computer has been designed and packed into the single, integrated M1 chip package:

  • Central processing unit (CPU) — the “brains” of the SoC. Runs most of the code of the operating system and your apps.
  • Graphics processing unit (GPU) — handles graphics-related tasks, such as visualizing an app’s user interface and 2D/3D gaming.
  • Image processing unit (ISP) — can be used to speed up common tasks done by image processing applications.
  • Digital signal processor (DSP) — handles more mathematically intensive functions than a CPU. Includes decompressing music files.
  • Neural processing unit (NPU) — used in high-end smartphones to accelerate machine learning (A.I.) tasks. These include voice recognition and camera processing.
  • Video encoder/decoder — handles the power-efficient conversion of video files and formats.
  • Secure Enclave — encryption, authentication, and security.
  • Unified memory — allows the CPU, GPU, and other cores to quickly exchange information.
    (Source: E. Engheim, “Why Is Apple’s M1 Chip So Fast?”)

The underlying architecture of Apple Silicon comes from their mobile devices, iPhones and iPads in particular. While mainstream PC components have grown increasingly massive and power-hungry over the years, the mobile environment has set strict limits and requirements for the efficiency of system architecture. There are efforts to use the same ARM (reduced instruction set computing) architecture that e.g. mobile chip maker Qualcomm uses in its processors for Android phones also in “Windows on Arm” computers. While Android phones are doing fine, Arm-based Windows computers have generally been so slow and so limited in their software support that they have remained in the margins.

In addition to the reliability, stability, speed and power-efficiency benefits, Apple can today also provide the kind of seamless integration between computers, tablets, smartphones and wearable technology (e.g., AirPods headphones and Apple Watch devices) that users of more hybrid ecosystems can only dream about. This is now becoming increasingly important, as (post-pandemic) we move between the home office, the main office, various “third spaces” and e.g. conference travel, while still keeping up the regime of remote meetings and events that emerged during the corona isolation years. Life is just so much easier when e.g. notifications, calls and data follow you more or less seamlessly from device to device, depending on where you are — sitting, running or changing trains. As the controlling developer-manufacturer of hardware, software and the underlying online services, Apple is in the enviable position to implement a polished, hybrid environment that works well together – and is, thus, one less source of stress.

Thunderbolt 3, eGPUs

(This is the first post in a planned series, focusing on various aspects of contemporary information and communication technologies.)

Contemporary computing is all about the flow of information: be it a personal computer, a mainframe server, a mobile device or even an embedded system in a vehicle, the computers of today are not isolated. For better or worse, increasingly all things are integrated into world-wide networks of information and computation. This also means that the ports and interfaces for all that data transfer take on even higher prominence and priority than in the old days of more locally situated processing.

Thinking about the transfer of data, some older-generation computer users might still remember things like floppy disks and other magnetic media, which were used both for saving work files and, often, for distributing and sharing that work with others. Later, optical disks, external hard drives and USB flash drives superseded floppies, but a more fundamental shift was brought along by the Internet and “cloud-based” storage options. In some sense this development has meant that personal computing has returned to the historical roots of distributed computing in ARPANET and its motivation in the sharing of computing resources. But regardless of what kind of larger network infrastructure mediates the operations of the user and the service provider, all that data still needs to flow around, somehow.

The key technologies for information and communication flows today appear to be largely wireless. The mobile phone and tablet communicate with the networks using wireless technologies, either WiFi (wireless local area networking) or cellular networks (GSM, 3G and their successors). However, all those wireless connections end up linking into wired backbone networks that operate at much higher speeds and reliability standards than the often flaky local wireless connections. As algorithms for coding, decoding and compressing data have evolved, it is possible to use wireless connections today to stream 4K Ultra HD video, or to play high-speed multiplayer games online. However, in most cases wired connections will provide lower latency (meaning more immediate response), better resilience against errors and higher speeds. And while there are efforts to bring wireless charging to mobile phones, for example, most of the information technology we use today still needs to be plugged into some kind of wire, at least for charging its batteries.
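To get a feel for these numbers, a back-of-the-envelope calculation helps. The Python sketch below estimates how many compressed 4K streams different links could nominally carry; the 25 Mbit/s stream bitrate and the link speeds are illustrative assumptions, not measurements.

```python
# Rough capacity estimate: how many compressed 4K streams fit on a link?
# All numbers here are illustrative assumptions, not measurements.

FOUR_K_STREAM_MBIT = 25.0  # assumed bitrate of one compressed 4K Ultra HD stream

def max_streams(link_mbit: float, stream_mbit: float = FOUR_K_STREAM_MBIT) -> int:
    """Nominal number of streams a link of link_mbit Mbit/s can carry."""
    return int(link_mbit // stream_mbit)

# Nominal speeds in Mbit/s; real-world wireless throughput is usually lower.
for name, speed in [("WiFi, typical real-world", 100.0),
                    ("Gigabit Ethernet", 1000.0)]:
    print(f"{name}: ~{max_streams(speed)} simultaneous 4K streams")
```

The point of the exercise is not the exact figures but the margin: a wired link leaves far more headroom, which is what shows up as reliability in practice.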

Thunderbolt 3 infographic, (c) Intel

This is where new standards like USB-C and Thunderbolt come into the picture. Thunderbolt (Thunderbolt 3 being the most recent version at the time of writing) is a “hardware interface”, meaning it is a physical, electronics-based system that allows two computing systems to exchange information. This is a different thing, though, from the actual physical connector: “USB Type-C” is the full name of the most recent reincarnation of the “Universal Serial Bus”, an industry standard of protocols, cables and connectors originally released in 1996. The introduction of the original USB was a major step towards the interoperability of electronics, as the earlier situation had been developing into a jungle of proprietary, incompatible connectors – and USB is a major success story, with several billion connectors (and cables) shipped every year. Somewhat confusingly, the physical, reversible USB-C connectors can hide many different kinds of electronics behind them, so that some USB-C ports comply with USB 3.1 mode (with data transfer speeds up to 10 Gbit/s in the “USB 3.1 Gen 2” version), some are implemented with Thunderbolt – and some support both.

USB-C and Thunderbolt have in a certain sense achieved a considerable engineering marvel: with backward compatibility to older USB 2.0 mode devices, this one port and cable should be able to connect to multiple displays at 4K resolutions and to external data storage devices (with up to 40 Gbit/s speeds), while also working as a power cable: with Thunderbolt support, a single USB-C port can supply, or draw, up to 100 watts of electric power – making it possible to remove separate power connectors, and to share power bricks between phones, tablets, laptop computers and other devices. The small form factor Apple MacBook (“Retina”, 2015) is an example of this line of thinking. One downside for the user of this beautiful single-port simplicity is the need to carry various adapters to connect with anything outside the brave new USB-C world. In an ideal situation, however, life would be much simpler if there were only this one connector type to worry about, and it were possible to use a single cable to dock any device to the network and gain access to large displays, storage drives, high-speed networks, and even external graphics solutions.
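As a rough illustration of what these headline speeds mean in practice, here is a small Python sketch estimating transfer times for a 100 GB file over USB 2.0, USB 3.1 Gen 2 and Thunderbolt 3. The 80% efficiency factor is an assumed stand-in for protocol overhead; real-world throughput varies by device and cable.

```python
def transfer_seconds(size_gbytes: float, link_gbit: float,
                     efficiency: float = 0.8) -> float:
    """Estimated time to move size_gbytes over a link of link_gbit Gbit/s.

    efficiency is an assumed factor for protocol overhead (not measured).
    """
    bits_to_move = size_gbytes * 8  # gigabytes -> gigabits
    return bits_to_move / (link_gbit * efficiency)

# Nominal signalling rates of the interfaces mentioned above, in Gbit/s.
for name, gbit in [("USB 2.0", 0.48),
                   ("USB 3.1 Gen 2", 10.0),
                   ("Thunderbolt 3", 40.0)]:
    print(f"{name}: 100 GB in ~{transfer_seconds(100, gbit):.0f} s")
```

Under these assumptions, the same file that takes over half an hour on USB 2.0 moves in well under a minute over Thunderbolt 3 – which is why a single-cable docking setup starts to make sense.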

The heterogeneity and historical layering of everyday technologies complicate the landscape that electronics manufacturers would like to paint for us. As any student of the history of science and technology can tell, even the most successful technologies did not replace the earlier ones immediately, and there have always been reasons for people to oppose the adoption of new technologies. For USB-C and Thunderbolt, the process of wider adoption is currently well underway, but there are also multiple factors that slow it down. The most typical peripheral does not yet come with USB-C, but rather with the older connectors. Even among expensive, high-end mobile phones, there are still multiple models that manufacturers ship with older USB connectors rather than with the new USB-C ones.

A potentially more crucial issue for most regular users is that Thunderbolt 3 and USB-C are still relatively new and immature technologies. The setup is also rather complex: with its integration of DisplayPort (video), PCI Express (PCIe, data) and DC power into a single hardware interface, it typically requires multiple manufacturers’ firmware and driver updates to work seamlessly together for the TB3 magic to start happening. An integrated systems provider such as Apple is best positioned to make this work, as it controls both the hardware and the software of its macOS computers. Apple is also, together with Intel, the developer of the original Thunderbolt, and the interface was first made commercially available in the 2011 MacBook Pro. Today, however, there is an explosion of various USB-C and Thunderbolt compatible devices coming to the market from multiple manufacturers, and users are eager to explore the full potential of this new, high-speed, interoperable wired ecosystem.

The eGPU, or external graphics processing unit, is a good example of this. There are entire hobbyist forums, such as the eGPU.io website, dedicated to the fine art of connecting a full-powered desktop graphics card to a laptop computer via fast-lane connections – either ExpressCard or Thunderbolt 3. The rationale (apart from the sheer joy of tweaking) is that in this manner one can have a slim ultrabook with a long battery life for daily use, which is then capable of transforming into an impressive workstation or gaming machine when plugged into an external enclosure housing the power-hungry graphics card (these TB3 boxes typically have full-length PCIe slots for installing GPUs, different sets of connection ports, and a separate desktop-PC-style power supply). VR (virtual reality) applications are one example of an area where the current generation of laptops has problems: while there are e.g. Nvidia GeForce GTX 10 series (1060 etc.) equipped laptops available today, most of them are not thin and light enough for everyday mobile use, or, if they are, their battery life and/or fan noise present issues.

Razer, an American-Chinese computing hardware manufacturer, is known as a pioneer in popularizing the field of eGPUs, with the introduction of the Razer Blade Stealth ultrabook, which can be plugged with a TB3 cable into the Razer Core enclosure (sold separately) to utilize a powerful GPU card installed inside the Core unit. A popular use case for TB3/eGPU connections is plugging a powerful external graphics card into a MacBook Pro, in order to make it a more capable gaming machine. In practice, early adopters have faced struggles with firmware and drivers that do not provide the direct support, on either the macOS side or the eGPU unit’s side, needed for the Thunderbolt 3 implementation to actually work. (See e.g. https://egpu.io/akitio-node-review-the-state-of-thunderbolt-3-egpu/ .) However, more and more manufacturers have added support in their firmware updates, so the situation is already much better than a few months ago (see instructions at: https://egpu.io/setup-guide-external-graphics-card-mac/ .) In the area of PC laptops running Windows 10, the situation is comparable: a work in progress, with more software support slowly emerging. Still, it is easy to get lost in this still-evolving field. For example, Dell revealed in January that they had restricted the Thunderbolt 3 PCIe data lanes in their implementation on the premium XPS 15 notebook: rather than using the full 4 lanes, the XPS 15 had only 2 PCIe lanes connected to the TB3 port. There is e.g. this discussion on Reddit comparing the effects this has, in the typical case where the eGPU is feeding the image to an external display rather than back to the internal display of the laptop (see: https://www.reddit.com/r/Dell/comments/5otmir/an_approximation_of_the_difference_between_x2_x4/). The effects are not that radical, but it is one of the technical details that early users of eGPU setups have struggled with.
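The bandwidth difference between the two lane configurations can be sketched with simple arithmetic. The figures below assume PCIe 3.0, where each lane carries roughly 0.985 GB/s of usable one-direction bandwidth after 128b/130b encoding; exact numbers depend on the implementation.

```python
PCIE3_GBYTES_PER_LANE = 0.985  # approx. usable GB/s per PCIe 3.0 lane (128b/130b)

def pcie3_bandwidth(lanes: int) -> float:
    """Nominal one-direction bandwidth (GB/s) of a PCIe 3.0 link."""
    return lanes * PCIE3_GBYTES_PER_LANE

full = pcie3_bandwidth(4)        # a full x4 Thunderbolt 3 implementation
restricted = pcie3_bandwidth(2)  # the restricted x2 wiring described above
print(f"x4: ~{full:.2f} GB/s, x2: ~{restricted:.2f} GB/s")
```

In other words, the x2 wiring halves the nominal link bandwidth, but as the Reddit thread suggests, the practical impact on eGPU gaming with an external display is smaller, since the link is rarely saturated.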

While fascinating from an engineering or hobbyist perspective, the situation of contemporary technologies for connecting everyday devices is still far from perfect. In thousands of meeting rooms and presentation auditoriums every day, people fail to connect their computers, to get anything onto the screen, or to access their presentation due to failures of online connectivity. A universal, high-speed wireless standard for sharing data and displaying video would no doubt be the best solution for all. Meanwhile, a reliable and flexible high-speed standard in wired connectivity would already go a long way. The future will show whether Thunderbolt 3 can reach that kind of ubiquitous support. The present situation is pretty mixed and messy at best.
