Photography and artificial intelligence

Google Clips camera (image copyright: Google).

Most of the media attention on artificial intelligence (AI) and machine learning has focused on application areas such as smart traffic, autonomous cars, recommendation algorithms, and expert systems in all kinds of professional work. There are, however, also very interesting developments currently taking place around photography.

There are multiple areas where AI is augmenting or transforming photography. One is in how the software tools that professional and amateur photographers use are advancing. It is becoming ever easier, for example, to select complex areas in photos and to apply all kinds of useful, interesting or creative effects and functions to them (see e.g. what Adobe writes about this at: https://blogs.adobe.com/conversations/2017/10/primer-on-artificial-intelligence.html). The technical quality of photos is also improving, as AI and advanced algorithmic techniques are applied e.g. to enhancing the level of detail in digital photos. Even a blurry, low-pixel file can be augmented with AI to look like a very realistic, high-resolution photo of the subject (on this, see: https://petapixel.com/2017/11/01/photo-enhancement-starting-get-crazy/).
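To see why learned models are needed for this kind of enhancement, it helps to contrast them with classical interpolation, which can only smooth between existing pixels and never invent new detail. The sketch below (plain NumPy, purely illustrative; not any product's actual algorithm) upscales a tiny grayscale image bilinearly:

```python
import numpy as np

def upscale_bilinear(img: np.ndarray, factor: int) -> np.ndarray:
    """Naive bilinear upscaling of a grayscale image (2-D array in [0, 1])."""
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    # For each target pixel, find its (fractional) position in the source image
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = (ys - y0)[:, None]          # vertical blend weights
    wx = (xs - x0)[None, :]          # horizontal blend weights
    # Blend the four surrounding source pixels
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bottom = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bottom * wy

small = np.array([[0.0, 1.0], [1.0, 0.0]])
big = upscale_bilinear(small, 4)
print(big.shape)  # (8, 8)
```

The output only ever contains weighted averages of the original four pixels; an AI-based enhancer instead draws on detail patterns learned from millions of photos, which is what makes the "crazy" results in the linked article possible.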

But the applications of AI do not stop there. Google and other developers are experimenting with “AI-augmented cameras” that can recognize persons and events taking place, and then take action, shooting photos and videos at moments, and of subjects, that the AI rather than the human photographer has deemed worthy (see, e.g., Google Clips: https://www.theverge.com/2017/10/4/16405200/google-clips-camera-ai-photos-video-hands-on-wi-fi-direct). This development can go in multiple directions. There are already smart surveillance cameras, for example, that learn to recognize family members and to differentiate them from unknown persons entering the house. Such a camera, combined with a conversant backend service, can also serve its human users in their various information needs: telling whether the kids have come home in time, or keeping track of any out-of-the-ordinary events that the camera and its algorithms might have noticed. The video below features Lighthouse AI, which combines a smart security camera with such an “interactive assistant”:
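As a rough illustration of how such recognition works under the hood: these cameras typically reduce each detected face to an embedding vector and compare it against stored references for known people. The toy sketch below (with made-up numbers; not any vendor's real pipeline) uses a simple nearest-neighbour match with a distance threshold to decide between a family member and a stranger:

```python
import numpy as np

# Hypothetical 3-D face embeddings for the known family members.
# Real systems use vectors with hundreds of dimensions, produced by a
# neural network; the principle of matching is the same.
family = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}

def identify(embedding: np.ndarray, threshold: float = 0.5) -> str:
    """Return the closest family member, or 'unknown' if nobody is near enough."""
    best_name, best_dist = "unknown", threshold
    for name, ref in family.items():
        dist = np.linalg.norm(embedding - ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

print(identify(np.array([0.88, 0.12, 0.3])))  # close to alice's reference
print(identify(np.array([0.0, 0.0, 0.0])))    # far from everyone: unknown
```

The "conversant backend" then only has to log these identifications over time to answer questions like "did the kids come home?".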

In the domain of amateur (and also professional) photography practices, AI likewise means many fundamental changes. There are already add-on tools like Arsenal, the “smart camera assistant”, which is based on the idea that manually tweaking all the complex settings of modern DSLR cameras is not that inspiring, or even necessary, for many users, and that a cloud-based intelligence could handle many challenging photography situations with better success than a fumbling regular user (see their Kickstarter video at: https://www.youtube.com/watch?v=mmfGeaBX-0Q). Such algorithms are also already being built into the cameras of flagship smartphones (see, e.g., the AI-enhanced camera functionalities in the Huawei Mate 10 and in Google’s Pixel 2, which use AI to produce sharper photos with better image stabilization and better-optimized dynamic range). Such smartphones, like Apple’s iPhone X, typically come with a dedicated chip for AI/machine learning operations, such as Apple’s “Neural Engine” (see e.g. https://www.wired.com/story/apples-neural-engine-infuses-the-iphone-with-ai-smarts/).
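The settings such an assistant juggles are tied together by standard photographic formulas. As a minimal sketch (not Arsenal's or any vendor's actual algorithm), the exposure value of a given aperture/shutter/ISO combination is EV = log2(N²/t), adjusted by how far the ISO sits from the ISO 100 baseline:

```python
import math

def exposure_value(aperture: float, shutter_s: float, iso: int = 100) -> float:
    """EV of an aperture (f-number), shutter time in seconds, and ISO setting.

    EV = log2(N^2 / t), minus one stop for every doubling of ISO over 100.
    """
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

# Sunny-16 rule of thumb: f/16 at 1/100 s, ISO 100, for bright sunlight.
print(round(exposure_value(16, 1 / 100), 1))  # ~14.6
```

An automated assistant works this relation backwards: given a metered scene brightness (a target EV), it can trade aperture, shutter time and ISO against each other to pick a combination that a fumbling user might not find.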

Many of these developments point the way towards a future age of “computational photography”, where algorithms play as crucial a role in the creation of visual representations as optics do today (see: https://en.wikipedia.org/wiki/Computational_photography). It is interesting, for example, to think about situations where photographic presentations are constructed from data derived from a myriad of different kinds of optical sensors, scattered in wearable technologies and in the environment, which will try their best to match the mood, tone or message set by the human “creative director”, who is no longer employed as the actual camera operator. It is also becoming increasingly complex to define the authorship and ownership of photos, and, most importantly, to handle the privacy and data-processing issues related to visual and photographic data. We are living in interesting times…

iPhone 6: boring, but must-have?

iPhone 6 & 6 Plus © Apple.

There have been substantial delays in my advance order for the iPhone 6 Plus (apparently Apple underestimated the demand), and I have had some time to reflect on why I want to get the damned thing in the first place. There are no unique technological features in this phone that really set it apart in today’s hi-tech landscape (Apple Pay, for example, does not work in Finland). The screen is nice, and the phones (both models, 6 and 6 Plus) are well designed and thin, but then again – so are many other flagship smartphones today. Feature-wise, Apple has never really been the one to play the “we have the most, we get there first” game; rather, they are famous for coming in later and for perfecting a few selected ideas that have often been previously introduced by someone else.

I have never been an active “Apple fan”, even though it has been interesting to follow what they have to offer. Apple pays very close attention to design, but on the other hand closes down many options for hacking, personalising and extending their systems – something that a typical power user or geek type abhors, or at least used to.

What has changed, then, if anything? On one hand, the crucial thing is that in the tech ecosystem, devices are increasingly just interfaces and entry points to content and services that reside in the cloud. My projects, documents, photos, and increasingly also the applications I use, live in the cloud. There is simply not as much need as before for tweaking the operating system, installing specific software, customising keyboard shortcuts, system parameters, etc. – or is it just that I have got lazy? Moving all the time from the office to the meeting room, then to the lecture hall, next to the seminar room, then home, and next to the airport, there are multiple devices on the road that serve as portals to the information, documents and services needed then and there. Internet connectivity and electricity, rather than CPU cycles or available RAM, are the key currencies today.

While on the run, I carry four tools with me today: a Samsung Galaxy S4 (work phone), an iPhone 4S (personal phone), an iPad Air (main work tablet), and a MacBook Pro 13 Retina (personal laptop). I also use three Windows laptops (an Asus VivoBook at home, and a Vaio Z and Vaio Z3 that I run in tandem in the office), and in the basement is the PC workstation/gaming PC that I self-assembled in December 2011. (The video game consoles, alternative tablets, media servers and streaming media boxes are not included in the discussion here.) All in all, it is the S4 that is the most crucial element here, simply because it is most often at hand whenever I need to check some discussion or document, look up some fact, or reply to someone – and while a rather large smartphone, it is still compact enough that I can carry it with me all the time; it is also fast and responsive, and it has a large enough, sharp touchscreen that allows interacting with all that media and communication in a timely and effortless manner. I use the iPhone 4S much less, mainly because its screen is so small. (Also, since both iOS 8 and today’s apps have been designed for much speedier iPhone models, it is terribly slow.) Yet Android apps regularly fall short when compared to their iOS counterparts: there are missing features, updates arrive later, and the user experience is not optimised for the device. For example, I really like the Samsung Note 10.1 2014 Edition, which is – with its S Pen and multitasking features – arguably a better professional tablet than the iPad; yet I do not carry it with me daily, simply because the Android apps are still often terrible. (Have you used, for example, the official Facebook app on a large-screen Android tablet? The user interface looks like the smartphone UI just blown up to 10 inches, and the text is so small you have to squint.)

The iPhone 6, and particularly the 6 Plus, show Apple rising to the challenge of the screen sizes and performance levels that Android users have enjoyed for some time already. Since many US-based tech companies still have an “iOS first” strategy, the app ecosystem of iPhones is so much stronger than its Android counterpart that, at least in my kinds of use, investing in the expensive Apple offering makes sense. I study digital culture, media, the Internet and games by profession, and many interesting games and apps only become available in Apple land, or their Android versions come later or in stripped-down forms. I am also an avid mobile photographer, and while the iPhone 6 and 6 Plus offer fewer megapixels than their leading rivals, their fast autofocus, natural colours and good low-light performance make the new iPhones good choices from the mobile photography angle as well. (Top Lumia phones would have even better mobile cameras from this standpoint, but the Windows Phone app ecosystem is even worse than the Android one, where at least the number of apps has been rising, as the worldwide adoption of Android handsets creates demand for low-cost apps in particular.)

To summarise, mobile is where the spotlight of information and communication technologies lies at the moment, and where games and digital culture in general are undergoing powerful developments. Raw processing power or piles of advanced features are no longer the pinnacle of, or a guarantee for, the best user experiences; it is the key elements of minimalistic design and a unified software and service ecosystem, supporting smooth and effortless access to content, that really count. And while the new iPhone is, in terms of its technology and UI design, frankly pretty boring, it is for many people the optimal entrance to the services, discussions and creative efforts that they really care about.

So, where is that damned 6 Plus of mine, again? <sigh>

HDR in iOS 4.1 – yes or no?

There have been reports around showing that you can shoot HDR (High Dynamic Range) photos with an iPhone that has been updated to iOS 4.1. Which is weird, since I have the update, yet when I look into Settings > Photos, I cannot see that option. Still, e.g. in this video it is there – getting curious… http://www.mobiiliblogi.com/2010/09/15/videolla-iphone-4n-uusi-hdr-kuvausominaisuus/
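For context, HDR photography combines several bracketed exposures of the same scene into one image that keeps detail in both shadows and highlights. Apple has not published its pipeline, so the sketch below is only a heavily simplified exposure-fusion idea (weighting each pixel by how close it is to mid-gray), just to illustrate the principle:

```python
import numpy as np

def fuse_exposures(frames: list) -> np.ndarray:
    """Blend bracketed frames, favouring pixels close to mid-gray (0.5)."""
    stack = np.stack(frames)                        # (n, h, w), values in [0, 1]
    # Gaussian "well-exposedness" weight: highest at 0.5, low near 0 and 1
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)   # normalise per pixel
    return (weights * stack).sum(axis=0)

under = np.full((2, 2), 0.1)   # underexposed frame: shadows crushed
over = np.full((2, 2), 0.9)    # overexposed frame: highlights blown
mid = np.full((2, 2), 0.5)     # well-exposed frame
result = fuse_exposures([under, over, mid])
print(result)
```

In each region of a real photo, whichever frame captured that region best dominates the blend, which is why moving the phone between the bracketed shots produces the ghosting artifacts HDR modes are known for.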