Learning to experiment

I have recently been thinking about why I feel I have not made any real progress in my photography over the last few years. There have been a few periods when some kind of leap seemed to take place: e.g. when I moved to using my first DSLR, and also in the early days of joining the young Internet photography communities, such as Flickr. Reflecting on those, the crucial element was perhaps not the tools themselves (a better camera, software, or service), but the way the “new” element stimulated exploration, experimentation, and willingness to learn. If one does not take photos, one does not evolve. And I suppose the energy and passion to keep doing things in an experimental manner – every day, or at least sometimes – can come from many things.

Currently I am constantly pushing against certain technical limitations (but cannot really afford to upgrade my camera and lenses), and a lack of time and opportunity also restricts more radical experiments with exotic locations. But there are other areas where I can definitely learn to do more: e.g. a) in selecting the subject matter, b) in composition, and c) in post-production. Going to places with new eyes, or finding an alternative perspective on “old” places, or simply learning new ways to handle and process all those photos.

I have never really bothered to study the fine art of digital photo editing in any depth, as I have felt that photos should stand by themselves, and also stay “real”, as documents of moments in life. But there are actually many things one can do to overcome the technical limitations of cameras and lenses, and they can also help in creating a sort of “psychological photorealism”: recreating the feelings and associations that the original situation, mood or subject matter evoked, rather than just living with the lines, colours and contrast values that the machinery was originally capable of registering. When software post-processing is added to the creative toolbox, it can also remove bottlenecks from creative subject matter selection, and from finding those interesting, alternative perspectives on all those “old” scenes and situations – ones that might already feel worn out and exhausted.

Thus: I personally recommend going a bit avant-garde, now and then, even in the name of enhanced realism. 🙂

Life with Photography: Then and Now

I have kept a diary, too, but I think that the best record of life and times comes from the photographs taken over the years. Most of my last-century (pre-2000s) photos are collected in traditional photo albums: I used to love the craft of making photo collages, cutting and combining pieces of photographs, written text and various found materials, such as travel tickets or brochure clippings, into travel photo albums. Some albums were more experimental: in pre-digital times it was difficult to know whether a shot was technically successful or not, and as I have always worked mostly in colour rather than black-and-white, I used to have the film rolls developed and every frame printed, without seeing the final outcomes. With some out-of-focus, blurred or plain random, accidental shots included in every film spool, I had plenty of material to build collages that focused on play with colour, the dynamics of composition or some visual motif. This was fun stuff, and while one can certainly do this (and more) with digital photos, e.g. in Photoshop, there is something in cutting and combining physical photos that a digital collage cannot quite replicate.

The first camera of my own was a Chinon CE-4, a budget-class Japanese film camera from the turn of the 1970s and 1980s. It served me well over many years, with its manual and “semi-automatic” (Aperture Priority) exposure modes and support for easy double exposures.

Chinon CE-4 (credit: https://www.flickr.com/photos/pwiwe/463041799/in/pool-camerawiki/).

I started transitioning to digital photography by first scanning paper photos and slides into digital versions that could then be used for editing and publishing. Probably among my earliest actual digital cameras was the HP PhotoSmart 318, a cheap and almost toy-like device with 2.4-megapixel resolution, 8 MB of internal flash memory (plus support for CompactFlash cards), a fixed f/2.8 lens and TTL contrast-detection autofocus. I think I was already shooting occasionally with this camera in 2001, at least.

A few years after that I started to use digital photography a bit more, at least when travelling. I remember getting my first Canon cameras for this purpose. I owned at least a Canon Digital IXUS v3 – I was already using it at the first DiGRA conference in Utrecht, in November 2003. Even while still clearly a “point-and-shoot” style compact camera, this Canon had a metal body, and the photos it produced were a clear step up from the plastic HP device. I started to become a believer: the future was in digital photography.

Canon Digital IXUS v3 (credit: https://fi.m.wikipedia.org/wiki/Tiedosto:Canon_Digital_Ixus_V3.jpg).

After some saving, I finally invested in my first digital “system camera” (a DSLR) in 2005. I remember taking photos in the warm Midsummer night that year with the new Canon EOS 350D, and how magical it felt. The 8.0-megapixel CMOS image sensor and the DIGIC II signal processing and control unit (a single-chip system), coupled with some decent Canon lenses, made it possible to experiment with multiple shooting modes and get finely detailed and nuanced night and nature photos. This was also the time when I built my own (HTML-based) online and offline “digital photo albums”, and joined the first digital photo community services, such as Flickr.

Canon EOS 550D (credit: https://www.canon.fi/for_home/product_finder/cameras/digital_slr/eos_550d/).

It was five years later that I upgraded my Canon system again, this time to the EOS 550D (“Rebel T2i” in the US, “Kiss X4” in Japan). This again meant a considerable leap both in image quality and in features related to the speed, “intelligence” and convenience of shooting, as well as to the processing options available in-camera. The optical characteristics of cameras as such have not radically changed, and there are people who consider some vintage Zeiss, Nikkor or Leica lenses to be works of art. The benefits of the 550D over the 350D were, for me, mostly related to the higher-resolution sensor (18.0 megapixels this time) and the ways in which the DIGIC 4 processor reduced noise, provided much higher speeds, and even enabled 1080p video (with live view and external microphone input).

Today, in 2019, I still take the Canon EOS 550D with me to any event or trip where I want the best quality photographs. This is more due to the lenses than the actual camera body, though. My two current smartphones – Huawei Mate 20 Pro and iPhone 8 Plus – both have cameras with arguably better sensors and much more capable processors than this aging, entry-level “system camera”. The iPhone has dual 12.0-megapixel sensors (f/1.8, 28mm wide, with optical image stabilization; f/2.8, 57mm telephoto), both accompanied by PDAF (a fast autofocus technology based on phase detection). The optics in the Huawei were developed in collaboration with Leica and come as a seamless combination of three (!) cameras: the first has a very large 40.0-megapixel sensor (f/1.8, 27mm wide), the second 20.0 megapixels (f/2.2, 16mm ultrawide), and the third 8.0 megapixels (f/2.4, 80mm telephoto). The Huawei offers both optical and digital zoom, efficient optical image stabilization, plus a hybrid autofocus technology involving both phase detection and laser autofocus (a tiny laser transmitter sends a beam to the subject, and from the returned signal the processor calculates and adjusts the correct focus). Huawei also utilizes advanced AI algorithms and its powerful Kirin 980 processor (with two “Neural Processing Units”, NPUs) to optimize camera settings and quickly apply in-camera post-processing to produce “desirable” outcomes. According to available information, the Huawei Mate 20 Pro can process and recognize “4,500 images per minute and is able to differentiate between 5,000 different kinds of objects and 1,500 different photography scenarios across 25 categories” (whatever those are).

Huawei Mate 20 Pro, with its three cameras (credit: Frans Mäyrä).

But with all that computing power, today’s smartphones are not capable (not yet, at least) of outplaying the pure optical benefits available to system cameras. This is not so crucial when documenting a birthday party, for example, as the lenses in smartphones are perfectly adequate for short-distance and wide-angle situations. Proper portraits are a somewhat borderline case today: a high-quality system camera lens can “separate” the person from the background and blur the background (creating the beautiful “bokeh” effect). But powerful smartphones like the iPhone and Huawei mentioned above effectively come with an AI-assisted Photoshop built into them, and can therefore detect the key subject, separate it, and blur the background with algorithms. The results can be rather good (good enough, for many users and use cases), but it must also be said that when a professional photographer aims for something that can be enlarged, printed full-page in a magazine, or otherwise used in a demanding context, a good lens attached to a system camera will prevail. This comes down to basic optical laws: the aperture (the hole where the light comes in) can be much larger in such camera lenses, providing more information to the image sensor, the focal length can be longer – and the sensor itself can also be much larger, which benefits e.g. fast-moving subjects (sports, animal photography) and low-light conditions. With several small lenses and sensors, future “smart cameras” can probably mount an ever-improving challenge to more traditional photography equipment by combining and processing data and filling in information derived from machine learning, but a good lens coupled with a system camera can help create unique pictures in a more traditional manner. Both are needed, and both have a future in photography cultures, I think.
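To put those “basic optical laws” into rough numbers, here is a minimal back-of-the-envelope sketch in Python. The crop factors (roughly 7 for the iPhone 8 Plus wide camera, 1.6 for Canon’s APS-C bodies) and the 50mm f/1.8 portrait lens are my own illustrative assumptions, not quoted specs:

```python
# Back-of-the-envelope "equivalent aperture" comparison (a rough sketch, not exact specs).
# Assumed crop factors: ~7 for the iPhone 8 Plus wide camera, 1.6 for Canon APS-C;
# the 50mm f/1.8 lens is a hypothetical example of an inexpensive portrait prime.

def equivalent_aperture(f_number: float, crop_factor: float) -> float:
    """Full-frame equivalent f-number: a larger value means a deeper depth of field."""
    return f_number * crop_factor

def entrance_pupil_mm(equiv_focal_mm: float, crop_factor: float, f_number: float) -> float:
    """Physical aperture diameter = real focal length / f-number."""
    real_focal_mm = equiv_focal_mm / crop_factor
    return real_focal_mm / f_number

cameras = {
    "iPhone 8 Plus wide (28mm equiv, f/1.8)": (28.0, 7.0, 1.8),
    "APS-C DSLR + 50mm f/1.8 (80mm equiv)":   (80.0, 1.6, 1.8),
}

for name, (equiv_focal, crop, f_num) in cameras.items():
    print(f"{name}: entrance pupil ≈ {entrance_pupil_mm(equiv_focal, crop, f_num):.1f} mm, "
          f"full-frame equivalent ≈ f/{equivalent_aperture(f_num, crop):.1f}")
```

Under these assumptions, the DSLR lens ends up with an entrance pupil more than ten times wider (about 28 mm versus 2 mm), which is essentially why its background blur and low-light performance remain in a different league.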

The main everyday benefit of e.g. the Huawei Mate 20 Pro over an old-school DSLR such as the Canon EOS 550D is portability. Few people go to school or work with a DSLR hanging from their neck, but a pocket-size camera can always travel with you – and be available when that unique situation, light condition or rare bird/butterfly presents itself. With camera technologies improving, system cameras are also getting smaller and lighter, though. Many professionals still prefer rather large and heavy camera bodies, as the big grip and solid buttons/controls provide better ergonomics, and the heavy body is also a proper counterbalance for the large and heavy telephoto lenses that many serious nature or sports photographers need for their work. That said, I am currently thinking that my next system camera will probably no longer be based on the traditional SLR (Single-Lens Reflex) architecture – which, by the way, is already over three hundred years old, if the first reflex-mirror “camera obscura” systems are taken into account. Mirrorless interchangeable-lens camera systems maintain the component-based architecture of body plus lenses, but eliminate the moving mirror and reflective prism of SLR systems, using electronic viewfinders instead.

I still have my homework to do regarding the differences in how various mirrorless systems are implemented, but it also looks to me that there has been a rather rapid period of technical R&D in this area recently, with Sony in particular leading the way, and the big camera manufacturers like Canon and Nikon now following with their own mirrorless solutions. There is not yet quite as much variety to choose from for amateur, small-budget photographers such as myself, with many initial models released into the upper, serious-enthusiast/professional price range of multiple thousands. But I’d guess that sensible budget models will follow next, and I am interested to see whether it is possible to move into a new decade with a light yet powerful system that would combine some of the best aspects of the history of photography with the opportunities opened up by new computing technologies.

Sony a6000, a small mirrorless system camera body announced in 2014 (credit: https://en.wikipedia.org/wiki/Sony_α6000#/media/File:Sony_Alpha_ILCE-6000_APS-C-frame_camera_no_body_cap-Crop.jpeg).

Microblogging

Diablo 3. My updates about e.g. Diablo 3 or Pokémon GO will go to https://frans.game.blog/.

I decided to experiment with microblogging, and set up three new sites: https://frans.photo.blog/, https://frans.tech.blog/ and https://frans.game.blog/. All these “dot-blog” subdomains are now offered free by WordPress.com (see: https://en.blog.wordpress.com/2018/11/28/announcing-free-dotblog-subdomains/). The idea is to post my photos and game and tech updates to these sites, for faster updates and better organisation than in a “general” blog, and also to avoid spamming those in social media who are not interested in these topics. Feel free to subscribe – or set up your own blog.

Photography and artificial intelligence

Google Clips camera (image copyright: Google).

The main media attention on applications of AI (artificial intelligence) and machine learning has been on areas such as smart traffic, autonomous cars, recommendation algorithms, and expert systems in all kinds of professional work. There are, however, also very interesting developments currently taking place around photography.

There are multiple areas where AI is augmenting or transforming photography. One is in how the software tools that professional and amateur photographers use are advancing. It is getting easier all the time to select complex areas in photos, for example, and to apply all kinds of useful, interesting or creative effects and functions to them (see e.g. what Adobe writes about this at: https://blogs.adobe.com/conversations/2017/10/primer-on-artificial-intelligence.html). The technical quality of photos is also improving, as AI and advanced algorithmic techniques are applied e.g. to enhancing the level of detail in digital photos. Even a blurry, low-pixel file can be augmented with AI to look like a very realistic, high-resolution photo of the subject (on this, see: https://petapixel.com/2017/11/01/photo-enhancement-starting-get-crazy/).
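As a small, hedged illustration of this kind of AI upscaling, the sketch below uses OpenCV’s optional dnn_superres module with a pretrained EDSR network; it assumes the opencv-contrib-python package, a separately downloaded “EDSR_x4.pb” model file, and placeholder input/output file names:

```python
# A minimal super-resolution sketch using OpenCV's contrib dnn_superres module.
# Assumes: pip install opencv-contrib-python, plus a pretrained "EDSR_x4.pb" model
# file downloaded separately; "small.jpg" / "big.jpg" are placeholder file names.
import cv2
from cv2 import dnn_superres

sr = dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")        # load the pretrained EDSR network (4x upscaling)
sr.setModel("edsr", 4)            # tell the module which architecture and scale it is

image = cv2.imread("small.jpg")   # a blurry, low-resolution input
result = sr.upsample(image)       # the network fills in plausible detail
cv2.imwrite("big.jpg", result)    # write the 4x larger output
```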

But the applications of AI do not stop there. Google and other developers are experimenting with “AI-augmented cameras” that can recognize persons and events taking place, and take action, shooting photos and videos of moments and subjects that the AI, rather than the human photographer, deems worthy (see e.g. Google Clips: https://www.theverge.com/2017/10/4/16405200/google-clips-camera-ai-photos-video-hands-on-wi-fi-direct). This development can go in multiple directions. There are already smart surveillance cameras, for example, that learn to recognize the family members and differentiate them from unknown persons entering the house. Such a camera, combined with a conversant backend service, can also serve its human users in their various information needs: telling whether the kids have come home in time, or keeping track of any out-of-the-ordinary events that the camera and algorithms might have noticed. The video below features Lighthouse AI, which combines a smart security camera with such an “interactive assistant”:

In the domain of amateur (and also professional) photography practices, AI also brings many fundamental changes. There are already add-on tools like Arsenal, the “smart camera assistant”, which is based on the idea that manually tweaking all the complex settings of modern DSLR cameras is not that inspiring, or even necessary, for many users, and that a cloud-based intelligence could handle many challenging photography situations with better success than a fumbling regular user (see their Kickstarter video at: https://www.youtube.com/watch?v=mmfGeaBX-0Q). Such algorithms are also already being built into the cameras of flagship smartphones (see e.g. the AI-enhanced camera functionalities in the Huawei Mate 10 and in Google’s Pixel 2, which use AI to produce sharper photos with better image stabilization and better-optimized dynamic range). Such smartphones, like Apple’s iPhone X, typically come with a dedicated chip for AI/machine learning operations, like Apple’s “Neural Engine” (see e.g. https://www.wired.com/story/apples-neural-engine-infuses-the-iphone-with-ai-smarts/).
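To make the “camera takes action on its own” idea from the Google Clips and Lighthouse examples above a bit more concrete, here is a toy sketch that watches a webcam and saves a frame whenever a face is detected. It only uses OpenCV’s bundled Haar cascade face detector and the default webcam, which is of course nowhere near the recognition models in the actual products:

```python
# A toy version of the "AI-augmented camera" idea: watch a video feed and save a frame
# whenever a face appears, i.e. the software rather than the human chooses the moment.
# Real products (Google Clips, Lighthouse AI) use far more sophisticated models.
import time
import cv2  # pip install opencv-python

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
capture = cv2.VideoCapture(0)  # default webcam

try:
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            filename = f"clip_{int(time.time())}.jpg"
            cv2.imwrite(filename, frame)   # the "camera" decided this moment was worth keeping
            time.sleep(2.0)                # crude rate limit between saved shots
finally:
    capture.release()
```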

Many of these developments point towards a future age of “computational photography”, where algorithms play as crucial a role in the creation of visual representations as optics do today (see: https://en.wikipedia.org/wiki/Computational_photography). It is interesting, for example, to think about situations where photographic presentations are constructed from data derived from a myriad of different kinds of optical sensors, scattered in wearable technologies and in the environment, which will try their best to match the mood, tone or message set by the human “creative director”, who is no longer employed as the actual camera operator. It is also becoming increasingly complex to define the authorship and ownership of photos, and, most importantly, to handle the privacy and data-processing issues related to visual and photographic material. – We are living in interesting times…

Sony RX100: pocket, meet camera

Photography is an interesting thing – or many interesting things. Take cameras, for example. For some people, cameras and lenses appear to mean perhaps more than the actual photographs they are supposed to use that equipment for. The global revenue of the digital camera industry continues its upward trend, and by some estimates is expected to reach $46 billion by 2017. There are cameras for multiple uses, and the strengths of one system in one context turn into weaknesses in another. Compare a DSLR “system camera” to a cameraphone (or smartphone), for example: the versatility provided by multiple, interchangeable lenses, combined with a large image sensor and powerful image processing, is unbeatable when the purely technical side of photography as a form of expression is considered. On the other hand, in everyday life few people go about hauling their professional DSLR system everywhere. Having a good camera integrated into a mobile phone is your best bet for having a camera at hand when the spontaneous opportunity for an interesting photo presents itself. Still, the limitations of a small lens and a small image sensor inevitably set limits on what one can achieve with a smartphone camera.

I am going to experiment next by acquiring a compact “pocket camera” that would hopefully be small enough to actually be feasible to carry around daily in my overcoat pocket, while also having better optics and a more versatile feature set than a smartphone camera.

My choice (balancing budget and wish list) settled on the Sony Cyber-shot DSC-RX100. This is a compact camera that was introduced already in summer 2012, and there are already several more feature-rich, upgraded versions of the RX100 available (Mark II, III, and now also IV, released in summer 2015). My priority, though, was to focus on the essentials of solid optics combined with a decent image sensor and build quality, and the original RX100 ranks high in that department, while its price is pretty competitive by now.

There are a few things that smartphone cameras do really well: an extensive app ecosystem, strong computing power in a compact form factor and excellent touch screen interfaces are among the key elements. If the lens and sensor are the priority in a compact camera, to get that high-quality shot, and you are carrying a powerful smartphone with you everywhere anyway, it does not make sense to try to duplicate smartphone functions in the camera itself. It is enough to be able to get the photo from the camera to the smartphone, and then do the post-processing and possible social media sharing or archiving from there (or via a cloud service and/or a PC, for that matter). The RX100 does not have built-in WiFi or other wireless functions, so I have now equipped my new Sony with the Eye-Fi Mobi Pro 32 GB SD memory card, which has WiFi and can connect to e.g. the iPhone Eye-Fi Mobi app, from which you can take the editing and sharing business as far as you want.

I also invested in some other small add-ons: the official camera LCD screen protector (PCK-LM15) and the Sony AG-R2 Attachment Grip. The latter affects the slim, flat design of the RX100 a bit, but is really good for getting a reliable hold of the camera, so that you can confidently work through multiple positions without fear of dropping it.

The RX100 is one of the most popular cameras in the relatively new “enthusiast compact” category, which I guess emerged out of a Darwinian adaptation process, where mobile phones took over most of the “snapshot” market and compact camera manufacturers were forced to evolve and differentiate their offerings from the most basic and casual photography needs. The manual of the RX100 is a rather thick volume, so it has a fair number of options and functions; this camera also has a rather large, one-inch image sensor (of 20 megapixels), a Carl Zeiss Vario-Sonnar T lens (28-100mm equiv., f/1.8-4.9), image stabilization, automatic face recognition, customizable controls and the ability to record RAW images – something that the more professional (or nerdy) photo tweakers can value.

It is still too early to say whether the idea of carrying a daily pocket camera actually makes real sense, and whether the extra 240 grams of weight in my jacket pocket really pays off. But I guess that during those conference trip breaks this would allow one to jump in and out of “tourist mode” with a bit more expressive range than just a mobile phone camera would allow. We will see.

4K Ultra HD monitor

Samsung U28D590D.
Sharper is better. I just booked the last remaining unit of the Samsung U28D590D, an Ultra HD 4K monitor, from the local PC store (a display unit) at a nice price of 320 euros. This is probably the most budget-conscious alternative among 4K, 28″ monitors you can find; there are better, IPS-based screens (this is a high-quality TN panel), and the professional models in particular have better ergonomics in the stand (this one is completely fixed, with no VESA mounting either). But the colour reproduction and brightness are excellent, and especially having 3840 x 2160 resolution at 60 Hz, with 1 ms response time (over DisplayPort 1.2), makes this pretty much what I have been looking for for my gaming and photo editing needs. I also regularly plug several computers (PC/gaming workstation, MacBook Pro Retina, Chromebook) into the same display at my desk, and there is an interesting PIP (picture-in-picture) mode in the U28D590D where you can keep an eye on a second PC while simultaneously working full screen on the other (let’s see how useful this will be in reality, though). If you think there is a better deal available from somewhere in the 300 euro price range, let me know. More information: http://www.samsung.com/levant/consumer/it/monitor/uhd-monitor/LU28D590DS/ZN.
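As a quick sanity check on why DisplayPort 1.2 matters here, a few lines of Python compare the raw pixel bandwidth of 4K at 60 Hz against approximate effective payload rates of DisplayPort 1.2 and HDMI 1.4 (the link figures are rounded values I am assuming, and real links add blanking and protocol overhead on top):

```python
# Rough uncompressed video bandwidth check: why 4K @ 60 Hz needs DisplayPort 1.2
# rather than e.g. HDMI 1.4 (rounded figures; real links add blanking and overhead).
def video_bandwidth_gbps(width: int, height: int, refresh_hz: int, bits_per_pixel: int = 24) -> float:
    """Raw pixel data rate in Gbit/s for a given mode."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

needed = video_bandwidth_gbps(3840, 2160, 60)       # ≈ 11.9 Gbit/s of pixel data
print(f"4K @ 60 Hz, 24-bit colour: ~{needed:.1f} Gbit/s (plus blanking/overhead)")
print("DisplayPort 1.2 payload:    ~17.3 Gbit/s")   # HBR2, after 8b/10b coding
print("HDMI 1.4 payload:           ~8.2 Gbit/s -> roughly 4K @ 30 Hz at most")
```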

Edit: this is the thread with instructions for getting 52 Hz at 4K on the Retina MacBook Pro 13: http://forums.macrumors.com/threads/4k-display-and-retina-macbook-pro-13.1741440/

Fixing a crashing iPhone camera app

iOS 5 and the new iPhone 4S are not without their bugs, even while mostly being in the ‘it just works’ department. One rather nasty one that just jumped on me is the iPhone camera & camera roll application crash issue: the moment you try to access any of your photos, or try to shoot a new one, the app just crashes. This issue has been around in various forms for more than a year now, it appears, so I am not sure it is related to iOS 5 or recent hardware. AppleCare appears to either recommend totally wiping your device to factory settings, or has even replaced the malfunctioning phone with a new one. There appears to be a fix, however, that solved the issue at least in my case. You need to download software (e.g. iExplorer) that you can use to delete any ‘sqlite’ metadata files related to your photos, and then let iOS rebuild its photo database. I am not sure what exactly is going wrong, but I truly hope Cupertino will take notice and fix their OS soon. For more detailed instructions (and thanks for spreading the word), please visit the iPhoneInCanada site:

http://www.iphoneincanada.ca/how-to/how-to-fix-iphone-camera-roll-crash-and-photos-turned-to-other-in-itunes/

iPhone camera fix