I probably get passionate about somewhat silly things, but (as my family has noticed) I have already amassed a rather sizable collection of backpacks – most of them optimised for travelling with a laptop computer, a photography setup, or both.
What is pictured here (below) is something a bit different: a compact and lightweight hiking backpack, the Osprey Talon 22. It belongs to the “day bag” / “daypack” category, which means that while its 22-litre capacity is probably too small to hold all your gear for longer travel, it is perfect for all those things one is likely to carry around on a short trip.
The reason I particularly like this model relates to its carrying system. I have tried all sorts of strap and belt systems, but the one in the Talon 22 is really good for the relatively light loads this bag is designed for. It has a length-adjustable back panel with a foam-honeycomb structure, ergonomic shoulder straps, and a wide hipbelt with a soft, multilayered construction and air channels. Combine this with a rich selection of straps that let you pull the load into very close, organic contact with your body, and you have a very nice backpack indeed.
There are all kinds of advanced minor details in the Osprey feature list (which you can check from the link below) that might matter to more active hikers, for example, but the basic feature set of this comfortable and highly adjustable all-round backpack is already something that many people can probably appreciate.
I am not sure whether this is true of other countries, but after a long, dark and cold winter, Finns want to be outdoors when it is finally warm and sunny. Sometimes one might even do remote work outdoors, from a park, café or bar terrace, and that is when things can get interesting – with the “nightless night” (the sun shining even at midnight) and all.
Surely, for most intents and purposes, summer is for relaxing, and dragging your work and laptop with you to your summer cottage or to the beach is not a good idea. This is precious time, and you should spend it with your family and friends and unwind from the hurries of work. But if you prefer (or even need, for one reason or another) to take some of your work outdoors, the standard work laptop is usually not the optimal tool for that.
It is interesting to note that standard computer screens, even today, are optimised for a different style of use than the screens of mobile devices. While the brightest smartphone screens today – e.g. the excellent OLED screen used in the Samsung Galaxy S9 – exceed 1000 nits (the unit of luminance: candelas per square metre; the S9 screen is reported to reach a maximum of 1130 nits), typical laptop screens max out at a measly 200 nits or so (see e.g. this Laptop Mag test table: https://www.laptopmag.com/benchmarks/display-brightness ). That is perfectly fine for working in a typical indoor office environment, but in bright sunlight it is very hard to make out any details on such a screen. You will just squint, get a headache and, in the long run, strain your eyes. Also, many typical laptop screens today are highly reflective, glossy glass panels, and matte surfaces, which help against reflections, have become very rare.
It is as if “mobile work”, one of the key buzzwords and trends of today, in practice means only indoor-to-indoor mobility, rather than the development of tools for truly mobile work – tools that would also make it possible to work from a park bench on a sunny day, or from that classic location: the dock, next to your trusty rowing boat.
I have been hunting for business-oriented laptops that also have enough maximum screen brightness to scale up to comfortable levels in brightly lit environments, and there are not really that many. Even if you go for tablet computers, which should be optimised for mobile work, the brightness is not really on a level with the best smartphone screens. Some of the best figures come from the Samsung Galaxy Tab S3 (441 nits), the iPad Pro 10.5-inch model (reportedly 600 nits), and the Google Pixel C (509 nits maximum). And tablets – even the best of them – do not really work well for all work tasks.
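To put these figures side by side, here is a small illustrative Python sketch using only the maximum-brightness values quoted above (individual panels will of course vary from unit to unit):

```python
# Maximum display brightness figures quoted above (nits = cd/m²).
max_brightness_nits = {
    "Samsung Galaxy S9 (smartphone, OLED)": 1130,
    "iPad Pro 10.5\" (tablet)": 600,
    "Google Pixel C (tablet)": 509,
    "Samsung Galaxy Tab S3 (tablet)": 441,
    "Typical laptop display": 200,
}

laptop = max_brightness_nits["Typical laptop display"]
for device, nits in max_brightness_nits.items():
    print(f"{device:40s} {nits:5d} nits  ({nits / laptop:.1f}x a typical laptop)")
```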
HP has recently introduced some interesting devices that go beyond the dim screens most other manufacturers are happy with. For example, the HP ZBook Studio x360 G5 supposedly comes with a 4K, high-resolution, anti-glare touch display that supports 100 percent Adobe RGB and offers 600 nits of brightness, which is “20 percent brighter than the Apple MacBook Pro 15-inch Retina display and 50 percent brighter than the Dell XPS UltraSharp 4K display”, according to HP. With its 8th-generation Xeon processors (the pro equivalent of the hexa-core Core i9), this is a powerful, and expensive, device, but I am glad someone is showing the way.
Even better, the upcoming, updated HP EliteBook x360 G3 convertible should come with a touchscreen that has a maximum brightness of 700 nits. HP is advertising this as the “world’s first outdoor viewable display” for a business laptop, which at least sounds very promising. Note, though, that the 700 nits is achieved only with the 1920 x 1080 resolution model; the 4K touch display option has 500 nits, which is not that bad either. The EliteBooks I have tested also have excellent keyboards, good-quality construction and some productivity-oriented enhancements that make them an interesting option for any “truly mobile” worker. One such enhancement is the 4G/LTE data connectivity option, which is a real blessing if one moves fast, opening and closing the laptop in different environments, so that there is no reliable Wi-Fi connection available all the time. (More on HP EliteBook models at: http://www8.hp.com/us/en/elite-family/elitebook-x360-1030-1020.html.)
Apart from the challenges related to reliable data connectivity, a cloud-based file system is something that should be the default for any mobile worker. This is a matter of data security: in mobile work contexts, it is much easier to lose one’s laptop or have it stolen. Having fast and reliable (biometric) authentication, an encrypted local file system, and instantaneous synchronisation/backup to the cloud would minimise the risk of a critical loss of work or important data, even if the mobile workstation dropped into a lake or got lost. In this regard, Google’s Chromebooks are superior, but they typically lack the LTE connectivity and other similar business essentials that e.g. the above EliteBook model features. Using a Windows 10 laptop with either full Dropbox synchronisation enabled, or with Microsoft OneDrive as the default save location, comes rather close, even if the Google Drive/Docs ecosystem in Chromebooks is the only one that is truly “cloud-native”, in the sense that all applications, settings and everything else also live in the cloud. Getting back to where you left your work in Chrome OS means that one just picks up any Chromebook, logs in, and starts with full access to one’s files, folders, browser add-ons, bookmarks, etc. Setting up a new PC is a far less frictionless process (with multiple software installations, add-ons and service account logins, the setup can easily take full working days).
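As a rough illustration of the “everything ends up in the synced folder” principle, here is a minimal Python sketch that mirrors a working directory into a locally synced cloud folder. The folder paths are just assumptions for the example, and the real Dropbox/OneDrive clients naturally do this far more robustly, with event-based syncing, versioning and conflict handling:

```python
import shutil
import time
from pathlib import Path

# Assumed paths -- adjust to your own setup; the cloud client then handles the actual upload.
WORK_DIR = Path.home() / "Documents" / "fieldnotes"
SYNCED_DIR = Path.home() / "OneDrive" / "fieldnotes-backup"

def mirror_once() -> None:
    """Copy any new or modified file from WORK_DIR into the cloud-synced folder."""
    for src in WORK_DIR.rglob("*"):
        if not src.is_file():
            continue
        dst = SYNCED_DIR / src.relative_to(WORK_DIR)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps

if __name__ == "__main__":
    while True:          # simple polling; a real sync client reacts to file-system events instead
        mirror_once()
        time.sleep(60)
```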
If I could pick my ideal, mobile-work-oriented tool from today’s tech world, I’d take the business-enhanced hardware of the HP EliteBook, with its bright display and LTE connectivity, and couple it with Chrome OS, with its reliability and seamless online synchronisation. But I doubt that such a combo can be achieved – or not yet, at least. Meanwhile, we can try to enjoy the summer, and some summer work, in somewhat more sheltered, shady locations.
The key research infrastructures these days include e.g. access to online publication databases and the ability to communicate with your colleagues (including such prosaic things as email, file sharing and real-time chat). While an astrophysicist relies on satellite data and a physicist on a particle accelerator, for example, research in the humanities and human sciences is less reliant on expensive technical infrastructures. Knowing how to conduct an interview, design a reliable survey, or carefully read, analyse and interpret human texts and expressions is often enough.
That said, there are tools that are useful for researchers of many kinds and fields. A solid reference database system is one (I use Zotero). In everyday meetings and in the field, note-taking is one of the key skills and practices. While most of us carry our trusty laptops everywhere, one can do with a lightweight device such as the iPad Pro. There are nice keyboard covers and precise active pens available for today’s tablet computers. When I type more, I usually pick up my trusty Logitech K810 (I have several of those). But the Lenovo Yoga 510 that I have at home also has the kind of keyboard I love: snappy and precise, but light of touch and low-profile. It is also a two-in-one, convertible laptop, but a much better version from the same company is the X1 Yoga (2nd generation). That one is equipped with a built-in active pen, while also being flexible and powerful enough to run both utility software and contemporary games and VR applications – at least when linked with an eGPU system. For that, I use an Asus ROG XG Station 2, which connects to the X1 Yoga with a Thunderbolt 3 cable, thereby plugging into the graphics power of an NVIDIA GeForce GTX 1070. A system like this has the benefit that one can carry around a reasonably light and thin laptop, which scales up to workstation-class capabilities when plugged in at the desk.
One of the most useful research tools is actually a capable smartphone. For example, with a good mobile camera one can take photos as visual notes, photograph one’s handwritten notes, or shoot copies of projected presentation slides at seminars and conferences. Coupled with a fast 4G or Wi-Fi connection and automatic upload to a cloud service, the same photo notes almost immediately appear on the laptop as well, so that they can be attached to the right folder or combined with typed observation notes and metadata. This is much faster than making a high-resolution video recording of the event; that kind of more robust documentation setup is necessary in certain experimental settings, focus group interview sessions, collaborative innovation workshops and the like, but on many occasions written notes and mobile phone photos are just enough. I personally use both iPhone (8 Plus) and Android systems (Samsung Galaxy Note 4 and S7).
Writing is one of the key things academics do, and writing software is a research tool category of its own. For active pen handwriting I use both Microsoft OneNote and Nebo by MyScript. Nebo is particularly good at real-time text recognition and at automatically converting drawn shapes into vector graphics. I link a video by them below:
My main note database is in Evernote, while online collaborative writing and planning is mostly done in Google Docs/Drive, and consortium project file sharing happens either in Dropbox or in Office365.
Microsoft Word may be the gold standard of writing software for stand-alone documents, but its relative share has gone down radically in today’s distributed and collaborative work. And while MS Word might still have the best multilingual proofing tools, for example, the first draft might come from an online Google Document, and the final copy might end up in WordPress, to be published on some research project blog or website, or in a peer-reviewed online academic publication. Long, book-length projects are best handled in a dedicated writing environment such as Scrivener, but most collaborative book projects work best with a combination of different tools, coupled with cloud-based sharing and collaboration in services like Dropbox, Drive or Office365.
If you have not collaborated in this kind of environment, have a look at the tutorials; here is a short video introduction by Google to sharing in Docs:
What are your favourite research and writing tools?
Most media attention on applications of AI (artificial intelligence) and machine learning has been on areas such as smart traffic, autonomous cars, recommendation algorithms, and expert systems in all kinds of professional work. There are, however, also very interesting developments currently taking place around photography.
There are multiple areas where AI is augmenting or transforming photography. One is the way the software tools that professional and amateur photographers use are advancing. It is getting ever easier to select complex areas in photos, for example, and to apply all kinds of useful, interesting or creative effects and functions to them (see e.g. what Adobe writes about this at: https://blogs.adobe.com/conversations/2017/10/primer-on-artificial-intelligence.html). The technical quality of photos is also improving, as AI and advanced algorithmic techniques are applied to, for example, enhancing the level of detail in digital photos. Even a blurry, low-pixel file can be augmented with AI to look like a very realistic, high-resolution photo of the subject (on this, see: https://petapixel.com/2017/11/01/photo-enhancement-starting-get-crazy/).
But the applications of AI do not stop there. Google and other developers are experimenting with “AI-augmented cameras” that can recognise persons and events taking place, and take action, shooting photos and videos of the moments and subjects that the AI, rather than the human photographer, deems worthy (see, e.g., Google Clips: https://www.theverge.com/2017/10/4/16405200/google-clips-camera-ai-photos-video-hands-on-wi-fi-direct). This development can go in multiple directions. There are already smart surveillance cameras, for example, that learn to recognise family members and differentiate them from unknown persons entering the house. Such a camera, combined with a conversant backend service, can also serve its human users in their various information needs: telling whether the kids have come home in time, or keeping track of any out-of-the-ordinary events that the camera and algorithms might have noticed. The video below features Lighthouse AI, which combines a smart security camera with such an “interactive assistant”:
In the domain of amateur (and also professional) photography practices, AI also means many fundamental changes. There are already add-on tools like Arsenal, the “smart camera assistant”, which is based on the idea that manually tweaking all the complex settings of a modern DSLR camera is not that inspiring, or even necessary, for many users, and that cloud-based intelligence could handle many challenging photography situations with better success than a fumbling regular user (see their Kickstarter video at: https://www.youtube.com/watch?v=mmfGeaBX-0Q). Such algorithms are also already being built into the cameras of flagship smartphones (see, e.g., the AI-enhanced camera functionalities in the Huawei Mate 10 and in Google’s Pixel 2, which use AI to produce sharper photos with better image stabilisation and better-optimised dynamic range). Such smartphones, like Apple’s iPhone X, typically come with a dedicated chip for AI/machine learning operations, such as Apple’s “Neural Engine”. (See e.g. https://www.wired.com/story/apples-neural-engine-infuses-the-iphone-with-ai-smarts/.)
Many of these developments point the way towards a future age of “computational photography”, where algorithms play as crucial a role in the creation of visual representations as optics do today (see: https://en.wikipedia.org/wiki/Computational_photography). It is interesting, for example, to think about situations where photographic presentations are constructed from data derived from a myriad of different optical sensors, scattered in wearable technologies and in the environment, which will try their best to match the mood, tone or message set by the human “creative director”, who is no longer employed as the actual camera operator. It is also becoming increasingly complex to define the authorship and ownership of photos and, most importantly, to handle the privacy and data-processing issues related to visual and photographic data. – We are living in interesting times…
In the 1970s and 1980s, the concept of ‘cognitive engineering’ was used in industry labs to describe an approach that tried to apply lessons from cognitive science to the fields of design and engineering. There were people like Donald A. Norman, who wanted to devise systems that are not only easy or powerful, but most importantly pleasant, and even fun, to use.
One of the classical challenges of making technology suit humans is that humans change and evolve, and differ greatly in motivations and abilities, while technological systems tend to stay put. Machines are created in a certain manner, mostly locked within the strict walls of the material and functional specifications they are based on, and (if correctly manufactured) they operate reliably within those parameters. Humans, however, are fallible and changeable, but also capable of learning.
In his 1986 article, Norman uses the example of a novice and an experienced sailor, who differ greatly in their ability to take the information from a compass and translate it into the desired movement of the boat (through the use of the tiller and rudder). There have been significant advances in multiple industries in making increasingly clear and simple systems that are easy to use by almost anyone, and this in turn has translated into the increasingly ubiquitous, pervasive application of information and communication technologies in all areas of life. The televisions in our living rooms are computing systems (often equipped with apps of various kinds), our cars are filled with online-connected computers and assistive technologies, and in our pockets we carry powerful terminals into information, entertainment, and the ebb and flow of social networks.
There is, however, also an alternative interpretation of what ‘cognitive engineering’ could be in this dawning era of pervasive computing and mixed reality. Rather than being limited to engineering products that attempt to adapt to the innate operations, tendencies and limitations of human cognition and psychology, engineering systems that are actively used by large numbers of people also means designing and affecting the spaces within which our cognitive and learning processes will then evolve, fit in, and adapt. Cognitive engineering does not only mean designing and manufacturing certain kinds of machines; it also translates into an impact made on the human element of this dialogical relationship.
Graeme Kirkpatrick (2013) has written about the ‘streamlined self’ of the gamer. There are social theorists who argue that living in a society based on computers and information networks produces new difficulties for people. The social, cultural, technological and economic transitions linked with life in late modern, capitalist societies involve movement from one project to the next, and the associated necessity of constant re-training. There is not necessarily any “connecting theme” in life, or even a sense of personal progression. Following Boltanski and Chiapello (2005), Kirkpatrick analyses the subjective condition where a life in contradiction – between the exigency of adaptation and the demand for authenticity – means that the rational course in this kind of systemic reality is to “focus on playing the game well today”. As Kirkpatrick writes, “Playing well means maintaining popularity levels on Facebook, or establishing new connections on LinkedIn, while being no less intensely focused on the details of the project I am currently engaged in. It is permissible to enjoy the work but necessary to appear to be enjoying it and to share this feeling with other involved parties. That is the key to success in the game.” (Kirkpatrick 2013, 25.)
One of the key theoretical trajectories of cognitive science has focused on what has been called “distributed cognition”: our thinking is not only situated within our individual brains, but is in complex and important ways also embodied and situated within our environments and our artefacts, in social, cultural and technological terms. Gaming is one example of an activity in which people can be seen constructing a sense of self and its functional parameters out of resources that they are familiar with, and which they can freely exploit and explore in their everyday lives. Such technologically framed play is also increasingly common in working life, and our schools can similarly be approached as complex, designed and evolving systems constituted by institutions, (implicit as well as explicit) social rules, and several layers of historically sedimented technologies.
Beyond all the hype around new commercial technologies of virtual, augmented and mixed reality lies the fact that we have always already lived in a complex substrate of mixed realities: a mixture of ideas, values, myths and concepts of various kinds, intermixed and communicated within different physical and immaterial expressive forms and media. Cognitive engineering of mixed reality in this more comprehensive sense involves engagement in dialogical cycles of design, analysis and interpretation, where practices of adaptation and adoption of technology also shape the forms in which these technologies are realised. Within the context of game studies, Kirkpatrick (2013, 27) formulates this as follows: “What we see here, then, is an interplay between the social imaginary of the networked society, with its distinctive limitations, and the development of gaming as a practice partly in response to those limitations. […] Ironically, gaming practices are a key driver for the development of the very situation that produces the need for recuperation.” There are multiple other areas of technology-intertwined life where similar double-bind relationships are currently surfacing: in the social use of mobile media, in organisational ICT, in so-called smart homes, and in smart traffic design and user culture processes. – A summary? We live in interesting times.
– Boltanski, Luc, and Eve Chiapello (2005) The New Spirit of Capitalism. London & New York: Verso.
– Kirkpatrick, Graeme (2013) Computer Games and the Social Imaginary. Cambridge: Polity.
– Norman, Donald A. (1986) Cognitive engineering. In: Norman & Draper (eds.), User Centered System Design, 31–61.
(This is the first post in a planned series, focusing on various aspects of contemporary information and communication technologies.)
Contemporary computing is all about the flow of information: be it a personal computer, a mainframe server, a mobile device or even an embedded system in a vehicle, the computers of today are not isolated. For better or worse, increasingly everything is integrated into worldwide networks of information and computation. This also means that the ports and interfaces for all that data transfer take on even higher prominence and priority than in the old days of more locally situated processing.
Thinking about the transfer of data, some older-generation computer users might still remember things like floppy disks and other magnetic media, which were used both for saving work files and, often, for distributing and sharing that work with others. Later, optical discs, external hard drives and USB flash drives superseded floppies, but a more fundamental shift was brought along by the Internet and “cloud-based” storage options. In some sense this development has meant that personal computing has returned to the historical roots of distributed computing in ARPANET and its motivation of sharing computing resources. But regardless of what kind of larger network infrastructure mediates the operations of user and service provider, all that data still needs to flow around, somehow.
The key technologies for today’s information and communication flows appear to be largely wireless. The mobile phone and tablet communicate with the networks via wireless technologies, either Wi-Fi (wireless local area networking) or cellular networks (GSM, 3G and their successors). However, all those wireless connections end up linking into wired backbone networks, which operate at much higher speeds and reliability standards than the often flaky local wireless connections. As the algorithms for coding, decoding and compressing data have evolved, it is possible to use wireless connections today to stream 4K Ultra HD video or to play fast-paced multiplayer games online. In most cases, however, wired connections will provide lower latency (meaning more immediate response), better resilience against errors and higher speeds. And while there are efforts to bring wireless charging to mobile phones, for example, most of the information technology we use today still needs to be plugged into some kind of wire, at least for charging its batteries.
This is where new standards like USB-C and Thunderbolt come into the picture. Thunderbolt (Thunderbolt 3 being the most recent version) is a “hardware interface”, meaning a physical, electronics-based system that allows two computing systems to exchange information. This is a different thing, though, from the actual physical connector: “USB Type-C” is the full name of the most recent incarnation of the “Universal Serial Bus”, an industry standard of protocols, cables and connectors originally released back in 1996. The introduction of the original USB was a major step towards interoperability in electronics, as the earlier situation had been developing into a jungle of proprietary, non-compatible connectors – and USB is a major success story, with several billion connectors (and cables) shipped every year. Somewhat confusingly, the physical, reversible USB-C connector can hide many different kinds of electronics behind it, so that some USB-C ports comply with the USB 3.1 mode (with data transfer speeds up to 10 Gbit/s in the “USB 3.1 Gen 2” version), some are implemented with Thunderbolt – and some support both.
USB-C and Thunderbolt have in a certain sense achieved a considerable engineering marvel: with backward compatibility to older USB 2.0 devices, this one port and cable should be able to connect to multiple displays at 4K resolution and to external data storage devices (at speeds up to 40 Gbit/s), while also working as a power cable: with Thunderbolt support, a single USB-C port can supply, or draw, up to 100 watts of electric power – making it possible to drop separate power connectors and share power bricks between phones, tablets, laptops and other devices. The small-form-factor Apple MacBook (“Retina”, 2015) is an example of this line of thinking. One downside for the user of this beautiful simplicity of a single port in the laptop is the need to carry various adapters to connect with anything outside the brave new USB-C world. In an ideal situation, however, life would be much simpler with only this one connector type to worry about, and a single cable could dock any device to the network and give access to large displays, storage drives, high-speed networks and even external graphics solutions.
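The practical difference between these link speeds is easy to illustrate with a bit of arithmetic; the following Python sketch computes rough, theoretical-maximum transfer times for an example file size (real-world throughput is always lower because of protocol overhead):

```python
# Theoretical link speeds discussed above, in gigabits per second.
link_speeds_gbps = {
    "USB 2.0": 0.48,
    "USB 3.1 Gen 2": 10,
    "Thunderbolt 3": 40,
}

file_size_gb = 25  # e.g. a folder of RAW photos or a 4K video clip, in gigabytes

for link, gbps in link_speeds_gbps.items():
    seconds = (file_size_gb * 8) / gbps  # 8 bits per byte; ignores protocol overhead
    print(f"{link:15s} ~{seconds:6.0f} s to move {file_size_gb} GB (theoretical maximum)")
```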
The heterogeneity and historical layering of everyday technologies complicate the landscape that electronics manufacturers would like to paint for us. As any student of the history of science and technology can tell, even the most successful technologies did not replace the earlier ones immediately, and there have always been reasons why people have opposed the adoption of new technologies. For USB-C and Thunderbolt, wider adoption is clearly well underway, but there are also multiple factors slowing it down. The typical peripheral does not yet come with USB-C, but rather with the older connectors. Even among expensive, high-end mobile phones, there are still multiple models that manufacturers ship with older USB connectors rather than with the new USB-C ones.
A potentially more crucial issue for most regular users is that Thunderbolt 3 and USB-C are still relatively new and immature technologies. The setup is also rather complex: with its integration of DisplayPort (video), PCI Express (PCIe, data) and DC power into a single hardware interface, it typically requires firmware and driver updates from multiple manufacturers to work seamlessly together before the TB3 magic starts happening. An integrated systems provider such as Apple has the best chances of making this work, as they control both the hardware and the software of their macOS computers. Apple is also, together with Intel, the developer of the original Thunderbolt, and the interface was first made commercially available in the 2011 version of the MacBook Pro. Today, however, there is an explosion of USB-C and Thunderbolt compatible devices coming to market from multiple manufacturers, and users are eager to explore the full potential of this new, high-speed, interoperable wired ecosystem.
The eGPU, or external graphics processing unit, is a good example of this. There are entire hobbyist forums, like the eGPU.io website, dedicated to the fine art of connecting a full-powered desktop graphics card to a laptop computer via fast-lane connections – either ExpressCard or Thunderbolt 3. The rationale (apart from the sheer joy of tweaking) is that in this manner one can have a slim ultrabook for daily use, with long battery life, which is then capable of transforming into an impressive workstation or gaming machine when plugged into an external enclosure housing the power-hungry graphics card (these TB3 boxes typically have a full-length PCIe slot for installing the GPU, various sets of connection ports, and a separate desktop-PC-style power supply). VR (virtual reality) applications are one example of an area where the current generation of laptops has problems: while there are laptops equipped with e.g. Nvidia GeForce GTX 10 series cards (1060 etc.) available today, most of them are not thin and light enough for everyday mobile use, or, if they are, their battery life and/or fan noise present issues.
Razer, an American-Chinese computing hardware manufacturer, is known as a pioneer in popularising the field of eGPUs with its introduction of the Razer Blade Stealth ultrabook, which can be connected with a TB3 cable to the Razer Core enclosure (sold separately) in order to utilise a powerful GPU card installed inside the Core unit. A popular use case for TB3/eGPU connections is plugging a powerful external graphics card into a MacBook Pro, in order to make it into a more capable gaming machine. In practice, the early adopters have struggled with firmware and drivers that do not provide direct support, on either the macOS side or the eGPU side, for the Thunderbolt 3 implementation to actually work. (See e.g. https://egpu.io/akitio-node-review-the-state-of-thunderbolt-3-egpu/ .) However, more and more manufacturers have added support and modified their firmware updates, so the situation is already much better than a few months ago (see instructions at: https://egpu.io/setup-guide-external-graphics-card-mac/ .) In the area of PC laptops running Windows 10, the situation is comparable: a work in progress, with more software support slowly emerging. Still, it is easy to get lost in this still-evolving field. For example, Dell revealed in January that they had restricted the Thunderbolt 3 PCIe data lanes in their implementation on the premium XPS 15 notebook: rather than using the full four lanes, the XPS 15 has only two PCIe lanes connected to TB3. There is e.g. this discussion on Reddit comparing the effects this has in the typical case where the eGPU feeds the image to an external display, rather than back to the internal display of the laptop (see: https://www.reddit.com/r/Dell/comments/5otmir/an_approximation_of_the_difference_between_x2_x4/). The effects are not that radical, but this is one of the technical details that early users of eGPU setups have struggled with.
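To get a rough sense of what halving the lane count means in raw numbers, here is a small Python sketch of the theoretical PCIe 3.0 bandwidth involved (about 0.985 GB/s per lane after 128b/130b encoding); as the Reddit discussion above suggests, the real-world performance difference in eGPU use is much smaller than these raw figures imply, especially when the image goes to an external display:

```python
# Theoretical PCIe 3.0 bandwidth: 8 GT/s per lane, 128b/130b encoding ≈ 0.985 GB/s per lane.
PCIE3_GBYTES_PER_LANE = 8 * (128 / 130) / 8  # gigabytes per second per lane

for lanes in (2, 4):
    bandwidth = lanes * PCIE3_GBYTES_PER_LANE
    print(f"x{lanes} link: ~{bandwidth:.1f} GB/s of theoretical PCIe bandwidth to the eGPU")
```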
While fascinating from an engineering or hobbyist perspective, the state of contemporary technologies for connecting everyday devices is still far from perfect. In thousands of meeting rooms and presentation auditoriums every day, people fail to connect their computers, to get anything onto the screen, or to access their presentations due to failures of online connectivity. A universal, high-speed wireless standard for sharing data and displaying video would no doubt be the best solution for all. Meanwhile, a reliable and flexible high-speed standard in wired connectivity would already go a long way. The future will show whether Thunderbolt 3 can reach that kind of ubiquitous support. The present situation is pretty mixed and messy, at best.