“Soft” and “sharp” photos

Christmas decorations, photo taken with f/1.2, 50mm lens.

As the holidays are traditionally a time to be lazy and just rest, I have not undertaken any major photography projects either. One thing I have been wondering about, though, is the distinction between “soft” and “sharp” photos. There are actually many things intermingling here. In the old days, the lenses I used were not capable of delivering optically sharp images, and due to long exposure times and insensitive film (later: sensors), the images were also often blurry: I had not got the subject in focus, and/or there was blur caused by movement (of the subject, and/or the camera shaking). Sometimes the blurry outcomes were visually or artistically interesting, but this was mostly due to pure luck rather than any skill or planning.

Later, it became feasible to get images that were technically controlled and good-looking according to the standard measures of image quality. Smartphone photos in particular have changed the situation in major ways. It should be noted that the small sensors and small lenses in early mobile phone cameras did not even need any sort of focus mechanism – they were fixed-focus designs set to the “hyperfocal distance”, meaning that everything from very close range to infinity would always be “in focus” (at least theoretically). As long as you had enough light and not too much movement in the image, you would get “sharp” photos.
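As a rough illustration, the hyperfocal distance can be computed from the focal length, the aperture and the sensor’s circle of confusion. The following is a minimal Python sketch; the numbers are hypothetical, loosely typical for a small fixed-focus phone camera rather than taken from any specific model:

```python
def hyperfocal_distance_mm(focal_length_mm, f_number, coc_mm):
    """Hyperfocal distance H = f^2 / (N * c) + f, everything in millimetres."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# Hypothetical values: ~4 mm lens, f/2.4, and a ~0.004 mm circle of
# confusion for a tiny sensor.
H = hyperfocal_distance_mm(4.0, 2.4, 0.004)
print(f"Hyperfocal distance: {H / 1000:.2f} m")          # ~1.67 m
print(f"'In focus' from ~{H / 2000:.2f} m to infinity")  # near limit is ~H/2
```

With a lens focused at the hyperfocal distance, everything from roughly half that distance (here, under a metre) to infinity falls within acceptable sharpness – which is why those early phone cameras could skip focusing altogether.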

Non-optimal “soft” photo: a mobile phone (iPhone) photo, taken with 10x “digital zoom”, which is actually just a cropped detail from the image optically formed on the small sensor.

However, sharpness in this sense is not always what a photographer wants. Yes, you might want your main subject to be sharp (to show a lot of detail and be in perfect focus), but if everything in the image background shows such detail and focus as well, the result might be distracting and aesthetically displeasing.

Thus, expensive professional cameras and lenses (full-frame bodies, and “fast”, wide-aperture lenses) are actually particularly good at producing “soft” rather than “sharp” images. Or, to put it slightly better, they provide the photographer a larger creative space: those systems can be used to produce both sharp- and soft-looking effects, and the photographer has better control over where each will appear in the image. Smartphone manufacturers have also added algorithmic techniques that make the uniformly sharp mobile photos softer, or blurry, in selected areas (typically e.g. in the background areas of portrait photos).
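The principle behind such “portrait mode” effects can be sketched in a few lines: blur the whole frame, then composite the sharp original back on top through a subject mask. This is only a minimal illustration – real phones derive the mask with depth estimation and machine learning, and the file names here are hypothetical:

```python
from PIL import Image, ImageFilter

# Minimal synthetic-bokeh sketch: the mask is assumed to be a ready-made
# grayscale image (white = keep sharp, black = blur).
photo = Image.open("portrait.jpg")                  # hypothetical input files
mask = Image.open("subject_mask.png").convert("L")

blurred = photo.filter(ImageFilter.GaussianBlur(radius=12))
result = Image.composite(photo, blurred, mask)      # white areas stay sharp
result.save("portrait_bokeh.jpg")
```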

Sharpness in photos is both a question of information and of how it is visually expressed. For example, a camera with a very low-resolution sensor cannot be used to produce large, sharp images, as there is not enough information to start with. A small-size version of the same photo might look acceptably sharp, though. On the other hand, a camera with a massively high-resolution sensor does not automatically produce sharp-looking images. There are multiple other factors in play, and visual acuity and contrast are perhaps the most crucial ones. The ray of light that comes through the lens and falls on the sensor produces what is called a “circle of confusion”, and a single spot of the subject should ideally be focused on so small a spot on the sensor that it also looks like a nice, sharp point in the finished image. (Note that this also depends on visual acuity, the eyes of the person looking at it – meaning that discussions of “sharpness” are in certain ways always subjective.) Good-quality optics also introduce few of the diffraction and aberration effects that would optically blur the photo.
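To make the resolution point concrete, here is a back-of-the-envelope sketch of how sensor resolution limits “sharp” print size, assuming the common rule of thumb of roughly 300 pixels per inch for a print viewed at arm’s length; the 12-megapixel figure is illustrative, not from any camera mentioned here:

```python
# Illustrative 12-megapixel (4000 x 3000 pixel) image at ~300 ppi.
width_px, height_px = 4000, 3000
print_ppi = 300

print(f"Largest 'sharp' print: {width_px / print_ppi:.1f} x "
      f"{height_px / print_ppi:.1f} inches")  # ~13.3 x 10.0 inches
# The same file shrunk for the web packs its pixels much denser, which is
# why a low-resolution photo can still look acceptably sharp when small.
```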

Daytime photo of a jackdaw (naakka in Finnish) in winter, taken with a 600mm telephoto lens (SIGMA), f/7.1, exposure 1/400 seconds, ISO value at 6400 – with some of the EOS 550D body/sensor’s visual noise removed in postproduction in Lightroom. Note how the sharp subject is isolated against the blurry background even at an aperture value of f/7+, courtesy of the long focal length.

Similarly, both sharp and soft images may be affected by “visual noise”, which is generally created in the image sensor. In the film days, the “grain” of photography was due to the actual small grains of photosensitive particles that captured the light and dark areas of the image. There were “low ISO” (less light-sensitive) film materials with very fine-grained particles, and “high ISO” (highly light-sensitive) films with larger and coarser particles. Thus, it was possible to take photos in low-light conditions (or e.g. with fast shutter speeds) using the sensitive film, but the downside was more grain (i.e. less sharp detail, and more visual noise) in the final developed and enlarged photographs. The same physical principles apply today, in the case of photosensitive, semiconductive camera sensors: when the amplification of the light signal is boosted, the ISO values go up, faster shots or images in darker conditions can be captured, but there will be more visual noise in the finished photos. Thus, a perfectly sharp, noise-free image cannot always be achieved.
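A toy model makes the trade-off visible: with photon shot noise, the signal-to-noise ratio is roughly the square root of the number of captured photons, and raising the ISO means capturing fewer photons before amplification. This sketch uses purely illustrative numbers:

```python
import math

# Shot noise gives SNR ~ sqrt(N) for N captured photons. Raising ISO
# shortens the exposure, so fewer photons reach the sensor, and the
# amplified signal carries relatively more noise.
photons_at_base_iso = 10_000   # hypothetical photons per pixel at ISO 100

for iso in (100, 800, 6400):
    photons = photons_at_base_iso * 100 / iso  # exposure scaled down with ISO
    snr = math.sqrt(photons)                   # shot-noise-limited SNR
    print(f"ISO {iso:>5}: ~{photons:7.0f} photons, SNR ~ {snr:.0f}")
# ISO 100 gives SNR ~100; ISO 6400 only ~13 -> a visibly grainier image.
```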

But just as many photographers seek the soft “bokeh” effect in the backgrounds (or foregrounds) of their carefully composed photos, some photographers do not shy away from the grainy effects of visual noise, or of high ISO values. As with the control of sharpness and softness in focus, the use of grain is a question of control and planning: if everything one can produce has noise and grain, there is no real creative choice. Understanding the limitations of photographic equipment (with a lot of training and experimentation) will eventually allow one to use even visual “imperfections” to achieve the desired atmosphere and artistic effects.

These chocolates were shot at f/1.4 (50mm lens) – the ‘dreamy’ look was desired here, but note how even the second piece of chocolate is already blurred, as the “zone of acceptable sharpness” (also known as the “depth of field”) is very narrow here.
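Just how narrow that zone is can be estimated with the standard depth-of-field formulas. The sketch below assumes a full-frame circle of confusion of about 0.03 mm (a common convention) and a subject at half a metre – plausible for a close-up like this, though the actual distance is a guess:

```python
def depth_of_field_mm(f_mm, n, coc_mm, subject_mm):
    """Near/far limits of acceptable sharpness (standard thin-lens formulas)."""
    h = f_mm ** 2 / (n * coc_mm) + f_mm                   # hyperfocal distance
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
    far = subject_mm * (h - f_mm) / (h - subject_mm)
    return near, far

# Assumed setup echoing the caption: 50 mm lens at f/1.4, subject at 0.5 m.
near, far = depth_of_field_mm(50, 1.4, 0.03, 500)
print(f"Zone of acceptable sharpness: {near:.0f}-{far:.0f} mm "
      f"(only ~{far - near:.0f} mm deep)")  # well under a centimetre
```

Under these assumptions the sharp zone is less than a centimetre deep, which is exactly why the second piece of chocolate already falls outside it.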

Photography and artificial intelligence

Google Clips camera (image copyright: Google).

The main media attention around applications of artificial intelligence (AI) and machine learning has been on areas such as smart traffic, autonomous cars, recommendation algorithms, and expert systems in all kinds of professional work. There are, however, also very interesting developments currently taking place around photography.

There are multiple areas where AI is augmenting or transforming photography. One is how the software tools that professional and amateur photographers use are advancing. It is getting easier all the time to select complex areas in photos, for example, and to apply all kinds of useful, interesting or creative effects and functions to them (see e.g. what Adobe writes about this at: https://blogs.adobe.com/conversations/2017/10/primer-on-artificial-intelligence.html). The technical quality of photos is improving, as AI and advanced algorithmic techniques are applied e.g. to enhancing the level of detail in digital photos. Even a blurry, low-pixel file can be augmented with AI to look like a very realistic, high-resolution photo of the subject (on this, see: https://petapixel.com/2017/11/01/photo-enhancement-starting-get-crazy/).

But the applications of AI do not stop there. Google and other developers are experimenting with “AI-augmented cameras” that can recognize persons and events, and take action, capturing photos and videos of moments and subjects that the AI, rather than the human photographer, deems worthy (see, e.g., Google Clips: https://www.theverge.com/2017/10/4/16405200/google-clips-camera-ai-photos-video-hands-on-wi-fi-direct). This development can go in multiple directions. There are already smart surveillance cameras, for example, that learn to recognize the family members and differentiate them from unknown persons entering the house. Such a camera, combined with a conversational backend service, can also serve its human users in their various information needs: telling whether the kids have come home in time, or keeping track of any out-of-the-ordinary events that the camera and algorithms might have noticed. The video below features Lighthouse AI, which combines a smart security camera with such an “interactive assistant”:

AI also means many fundamental changes in the practices of amateur (and professional) photographers. There are already add-on tools like Arsenal, the “smart camera assistant”, which is based on the idea that manually tweaking all the complex settings of modern DSLR cameras is not that inspiring, or even necessary, for many users, and that a cloud-based intelligence could handle many challenging photography situations with better success than a fumbling regular user (see their Kickstarter video at: https://www.youtube.com/watch?v=mmfGeaBX-0Q). Such algorithms are also already being built into the cameras of flagship smartphones (see, e.g., the AI-enhanced camera functionalities in the Huawei Mate 10 and in Google’s Pixel 2, which use AI to produce sharper photos with better image stabilization and better-optimized dynamic range). Such smartphones, like Apple’s iPhone X, typically come with a dedicated chip for AI/machine learning operations, such as Apple’s “Neural Engine”. (See e.g. https://www.wired.com/story/apples-neural-engine-infuses-the-iphone-with-ai-smarts/.)

Many of these developments point the way towards a future age of “computational photography”, where algorithms play as crucial a role in the creation of visual representations as optics do today (see: https://en.wikipedia.org/wiki/Computational_photography). It is interesting, for example, to think about situations where photographic presentations are constructed from data derived from a myriad of different optical sensors, scattered across wearable technologies and the environment, all trying their best to match the mood, tone or message set by the human “creative director”, who no longer works as the actual camera operator. It is also becoming increasingly complex to define the authorship and ownership of photos, and, most importantly, to handle the privacy and data-processing issues related to visual and photographic material. We are living in interesting times…

iPhone 6: boring, but must-have?

iPhone 6 & 6 Plus © Apple.

There have been substantial delays in my advance order for the iPhone 6 Plus (apparently Apple underestimated the demand), and I have had some time to reflect on why I want to get the damned thing in the first place. There are no unique technological features in this phone that really set it apart in today’s hi-tech landscape (Apple Pay, for example, does not work in Finland). The screen is nice, and the phones (both models, 6 and 6 Plus) are well designed and thin – but then again, so are many other flagship smartphones today. Feature-wise, Apple has never really been the one to play the “we have the most, we get there first” game; rather, they are famous for coming in later, and for perfecting a few selected ideas that have often been previously introduced by someone else.

I have never been an active “Apple fan”, even though it has been interesting to follow what they have to offer. Apple pays very close attention to design, but on the other hand closes down many options for hacking, personalising and extending their systems – something that a typical power user or geek type abhors. Or, at least, used to.

What has changed, then, if anything? On one hand, the crucial thing is that in the tech ecosystem, devices are increasingly just interfaces and entry points to content and services that reside in the cloud. My projects, documents, photos, and increasingly also the applications I use, live in the cloud. There is simply not as much need as before for tweaking the operating system, installing specific software, customising keyboard shortcuts and system parameters, etc. – or have I just got lazy? Moving all the time from the office to the meeting room, then to the lecture hall, next to the seminar room, then home, and next to the airport, I rely on multiple devices on the road that serve as portals to the information, documents and services needed then and there. Internet connectivity and electricity, rather than CPU cycles or available RAM, are the key currencies today.

While on the run, I carry four tools with me today: a Samsung Galaxy S4 (work phone), an iPhone 4S (personal phone), an iPad Air (main work tablet), and a MacBook Pro 13 Retina (personal laptop). I also use three Windows laptops (an Asus Vivobook at home, and a Vaio Z and Vaio Z3 that I run in tandem in the office), and in the basement is the PC workstation/gaming PC that I self-assembled in December 2011. (The video game consoles, alternative tablets, media servers and streaming media boxes are not included in this discussion.) All in all, it is the S4 that is the most crucial element here, simply because it is usually at hand whenever I need to check some discussion or document, look up some fact, or reply to someone – and while a rather large smartphone, it is still compact enough that I can carry it with me all the time; it is also fast and responsive, and it has a large enough, sharp touchscreen that allows interacting with all that media and communication in a timely and effortless manner. I use the iPhone 4S much less, mainly because its screen is so small. (Also, since both iOS 8 and today’s apps have been designed for much speedier iPhone versions, it is terribly slow.) Yet Android apps regularly fall short when compared to their iOS counterparts: there are missing features, updates arrive later, and the user experience is not optimised for the device. For example, I really like the Samsung Note 10.1 2014 Edition, which is – with its S Pen and multitasking features – arguably a better professional tablet than the iPad; yet I do not carry it with me daily, simply because the Android apps are still often terrible. (Have you used e.g. the official Facebook app on a large-screen Android tablet? The user interface looks like the smartphone UI, just blown up to 10 inches, and the text is so small you have to squint.)

The iPhone 6, and particularly the 6 Plus, show Apple rising to the challenge of the screen sizes and performance levels that Android users have enjoyed for some time already. Since many US-based tech companies still have an “iOS first” strategy, the app ecosystem of iPhones is so much stronger than its Android counterpart that, at least in my kind of use, investing in the expensive Apple offering makes sense. I study digital culture, media, the Internet and games by profession, and many interesting games and apps only become available in Apple land, or their Android versions come later or in stripped-down forms. I am also an avid mobile photographer, and while the iPhone 6 and 6 Plus offer fewer megapixels than their leading rivals, their fast autofocus, natural colours and good low-light performance make the new iPhones good choices from the mobile photographer’s angle as well. (The top Lumia phones would have even better mobile cameras from this standpoint, but the Windows Phone app ecosystem is even worse than the Android one, where at least the number of apps has been rising, as the worldwide adoption of Android handsets creates demand for low-cost apps in particular.)

To summarise, mobile is where the spotlight of information and communication technologies lies at the moment, and where games and digital culture in general are undergoing powerful developments. While raw processing power or piles of advanced features are no longer the pinnacle of, or guarantee for, the best user experiences, it is the combination of minimalistic design and a unified software and service ecosystem – everything that supports smooth and effortless access to content – that really counts. And while the new iPhone is, in terms of its technology and UI design, frankly pretty boring, it is for many people the optimal entrance to the services, discussions and creative efforts that they really care about.

So, where is that damned 6 Plus of mine, again? <sigh>

HDR in iOS 4.1 – yes or no?

There have been reports showing how you can shoot HDR (High Dynamic Range) photos with an iPhone that has been updated to iOS 4.1. What is weird is that I have the update, and looking into Settings > Photos, I cannot see that option. Yet, e.g. in this video, it is there – getting curious… http://www.mobiiliblogi.com/2010/09/15/videolla-iphone-4n-uusi-hdr-kuvausominaisuus/
