Compact lenses, great photos?

Sports and wildlife photographers in particular are famous (or notorious) for investing in and carrying around lenses that are often just huge: large, long, and heavy. Is it possible to take great photos with small, compact lenses, or is an expensive, large lens the only option for a hobbyist photographer who wants to achieve better results?

Winter details, captured with Canon EOS M50, and the kit lens: EF-M 15-45mm f/3.5-6.3 IS STM.

I am by no means an authority on optics or lens design, but I think certain key principles are important to take into consideration.

Perhaps one of the first ones is the style of photography one is engaged with. Are you shooting portrait photos indoors, or even in a studio? Or are you making trips outdoors, trying to get close-up photos of elusive birds and animals? Or are you rather a landscape photographer? Or a street photographer?

Sometimes the intended use of the photos is also a factor to consider. Are these party photos, or something that you will mostly share among your friends on social media? Or is this that important photo-art project that you aim to output as large-format prints and hang on your walls – or even in a gallery?

These days, digital camera sensors are “sharp” enough for pretty much any purpose – one of my smartphones, the Huawei Mate 20 Pro, for example, has a 40-megapixel main photo sensor with a native resolution of 7296 × 5472. That is more than you need for a large poster print (depending on viewing distance and PPI settings, 4000 x 6000 pixels, or even 2000 x 3000 pixels, might be enough for a poster). There are many professional photographers who took their commercial photos for years with cameras that had only 6 or 8 megapixel sensors. Many of those photos were reproduced as large posters or on the covers of glossy magazines, and no one complained.
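
As a rough sanity check, the pixel dimensions needed for a print are simply the physical print size multiplied by the intended PPI. A minimal sketch in Python (the 150 PPI value is my own assumption for a poster viewed from some distance, not a figure from any of these photos):

# How large can a given image be printed at a chosen pixels-per-inch (PPI)?
def max_print_size_cm(width_px, height_px, ppi):
    """Largest print (in cm) that still keeps the chosen PPI."""
    cm_per_inch = 2.54
    return (width_px / ppi * cm_per_inch, height_px / ppi * cm_per_inch)

# The Mate 20 Pro's 7296 x 5472 frame at a 150 PPI poster resolution:
print(max_print_size_cm(7296, 5472, 150))   # roughly 124 x 93 cm
# A 3000 x 2000 pixel frame at the same PPI still gives about a 51 x 34 cm print:
print(max_print_size_cm(3000, 2000, 150))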

Frozen grass, photographed using Huawei Mate 20 Pro smartphone.

The lens and the quality of its optics are more of a bottleneck: if a lens is “soft”, meaning that it cannot focus rays of light in a consistent, sharp manner, there is no way to achieve very clear-looking images with it. But truth be told, in perhaps 90 % of my blurry photos these days I blame myself rather than my equipment: badly focused shots, a wrong aperture setting, or too long an exposure time while shooting handheld without a tripod all contribute to a lot of blurry-looking photos.

It is also true that if one is trying to achieve very high optical quality, using a more expensive lens is usually what many people will do. But there are plenty of “mainstream” photography situations where a cheap lens will produce results that are just – good enough. It is particularly in the more extreme situations – where one is, for example, trying to get a lot of light into the lens and capture really detailed scenes in a very consistent manner – that large, heavy and expensive lenses come into play. This is also true of portraiture, where a high-quality lens is used to deliver good separation of the person from the background, and where the glass elements, their positioning and the aperture blades are designed to produce a particularly nice-looking “bokeh” effect (the out-of-focus highlights are blurred in an aesthetically pleasing manner). And of course bird and wildlife photographers value their well-designed, long telephoto lenses that also capture a lot of light, enabling the photographer to use short enough exposure times to get sharp images of even moving targets.

A cropped detail, photo taken with the SIGMA 150-600 mm f/5-6.3 DG OS HSM Contemporary tele-zoom lens on a dim winter’s day.

In many cases it is actually characteristics other than optical image quality that make a particular lens expensive. It might be the mechanical build quality, the weather sealing, or the way the focusing, zooming and aperture mechanisms and the control rings are implemented that a professional photographer is willing to pay for in one of their main tools.

In street photography, for example, there are completely different kinds of priorities compared to wildlife photography, or to studio portraiture, where using a solid tripod is common. On the street, one is constantly moving, and also trying not to be too conspicuous while taking photos. A compact camera with a compact lens is good for precisely those reasons. Also, if the subjects are people and views on city streets, a “normal range” lens is usually preferable. A long telephoto lens or a very wide-angle lens will produce very different effects from the visual feel that people usually experience as “normal images”. On a 35 mm film camera or a “full-frame” digital camera, a 50 mm lens is usually considered a normal lens, whereas a camera equipped with a (Canon) “crop” sensor (APS-C, 22.2 x 14.8 mm) requires a c. 30 mm lens to produce a similar field of view as a 50 mm lens on a full-frame camera. Lenses with these kinds of short focal lengths can be designed to be physically smaller, and can deliver very good image quality for their intended purposes, even while being nicely budget-priced. These days there are many such excellent “prime” lenses (as contrasted with more complex “zoom” lenses) available from many manufacturers.
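
The “normal lens” arithmetic above is easy to verify: the full-frame-equivalent focal length is just the actual focal length multiplied by the sensor’s crop factor. A small Python sketch of that relation (the 1.6x Canon APS-C crop factor is the only input; other brands use slightly different factors):

# Field-of-view equivalence between sensor formats, expressed via the crop factor.
def full_frame_equivalent(focal_length_mm, crop_factor):
    """Focal length that would give the same field of view on a full-frame camera."""
    return focal_length_mm * crop_factor

def normal_lens_for(crop_factor, full_frame_normal_mm=50.0):
    """Actual focal length that behaves like a 'normal' 50 mm lens on this sensor."""
    return full_frame_normal_mm / crop_factor

print(full_frame_equivalent(30, 1.6))   # 48.0 -> a 30 mm lens frames roughly like a 50 mm
print(normal_lens_for(1.6))             # 31.25 -> c. 30 mm is 'normal' on a Canon crop body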

One should note that in the case of smartphone photography, everything is of course even more compact. A typical modern smartphone camera might have a sensor only a few millimetres in size (e.g. in the popular 1/3″ type, the sensor is 4.8 x 3.6 mm), so the actual focal length of the (fixed) lens may be something like 4.25 mm, but that translates into roughly a 26 mm equivalent field of view in full-frame terms. This is effectively a wide-angle lens that is good for many indoor photography situations. Many smartphones also feature “2x” (or even “5x”) sensor-lens combinations that deliver a normal range (50 mm equivalent in full-frame terms) or even telephoto ranges with their small mechanical and optical constructions. This is an impressive achievement – it is much more comfortable to put a camera capable of high-quality photography into your back pocket than to lug it around in a dedicated camera backpack, for example.
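
The crop factor behind these “equivalent” figures comes from comparing sensor diagonals against the 36 x 24 mm full-frame reference. A sketch of that calculation, using the 1/3″ dimensions mentioned above (phones with slightly larger main sensors get correspondingly smaller crop factors, which is how a roughly 4 mm lens can land in the 26-28 mm equivalent range):

import math

def crop_factor(sensor_w_mm, sensor_h_mm):
    """Crop factor relative to the 36 x 24 mm full-frame format (ratio of diagonals)."""
    full_frame_diagonal = math.hypot(36.0, 24.0)        # ~43.3 mm
    return full_frame_diagonal / math.hypot(sensor_w_mm, sensor_h_mm)

print(round(crop_factor(22.3, 14.9), 2))   # ~1.61 for a Canon APS-C sensor
print(round(crop_factor(4.8, 3.6), 2))     # ~7.21 for a 1/3-inch type phone sensor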

Icy view was taken with Canon EOS M50, and the kit lens: EF-M 15-45mm f/3.5-6.3 IS STM.

Perhaps the main limitation of smartphone cameras for artistic purposes is that they do not have adjustable apertures. There is always the same, rather small hole through which rays of light enter the lens and are finally focused on the image sensor. It is difficult to control the “zone of acceptable sharpness” (or “depth of field”) with a lens whose aperture size you cannot adjust. In fact, it is easy to achieve “hyperfocal” images with very small-sensor cameras: everything in the image will be sharp, from very close up to infinity. The more recent smartphones already have slightly larger sensors, and there have even been experiments with implementing an adjustable aperture inside these tiny lenses (the Nokia N86 and Samsung Galaxy S9, at least, have advertised adjustable apertures). Some manufacturers resort to algorithmic background blurring to create a full-frame-camera-looking, soft background while still using optically small lenses that naturally have a much wider depth of field. When you look at the results of such “computational photography” on a large, sharp monitor, they are usually not as good as those from a real optical system. But if the main use scenario for such photos is viewing them on small-screen mobile devices, then – again – the lens and the augmentation system together may be “good enough”.
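
The “everything is sharp” behaviour of tiny sensors can be estimated with the standard hyperfocal-distance formula, H = f²/(N·c) + f. A hedged sketch (the f/1.8 aperture and the “diagonal / 1500” circle-of-confusion convention are my own assumptions here, not the specs of any particular phone):

import math

def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance: focus here, and everything from H/2 to infinity looks sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# A tiny 1/3-inch type sensor: acceptable circle of confusion ~ diagonal / 1500.
coc_phone = math.hypot(4.8, 3.6) / 1500            # ~0.004 mm
h = hyperfocal_mm(4.25, 1.8, coc_phone)
print(round(h / 1000, 1))                          # ~2.5 m: focus there, and everything
                                                   # from ~1.25 m to infinity looks acceptably sharp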

All the photos attached to this blog post were taken with either a compact kit lens or a smartphone camera (apart from that single bird photo above). Looking at them on a very high-resolution computer monitor, I can find blurriness and all kinds of other optical issues. But personally, I can live with those. My use case here did not involve printing these out at poster sizes; I just enjoyed having a winter-day walk and taking photos without carrying too heavy a setup. I will also be posting the photos online, so the typical viewing size and situation pretty much hides maybe 80 % of the optical issues. So: compact cameras, compact lenses – great photos? I am not sure. But: good enough.

More frozen grass, Canon EOS M50, and the kit lens: EF-M 15-45mm f/3.5-6.3 IS STM.

Kiuas 2020

Ten years ago, in 2010, I blogged about the new kiuas (sauna stove) for our sauna, a Harvia Figaro. Just before Christmas this year, this kiuas broke down. There was an electrical failure (the junction box of the kiuas basically exploded in a short circuit), and I am not enough of an engineer to say whether some underlying fault in the stove itself was the cause, or just weakly designed connections in the junction box failing over time. Luckily our circuit breakers worked just fine – we only missed one fundamental Finnish tradition: joulusauna, the Christmas sauna.

Looking inside the Harvia after 10 years of use was eye-opening. The heater elements (lämpövastukset) of the kiuas were pretty much gone. Also, we could not completely trust the controller, timer and other electronics inside the Harvia after the dramatic short circuit. So I decided to get a new kiuas.

After some careful examination and discussion about the needs and priorities of our family, the choice was the Tulikivi Sumu ST 9 kW model. It is specced for an 8-13 m³ sauna room, so hopefully it will suit our case. (For more, see: https://www.tulikivi.fi/tuotteet/Sumu_ST .)

The working class hero. – The sad final state of our Harvia Figaro, FG90.

One of the lessons from the Harvia was that a long construction with an open side that exposes the stones is challenging: the small stones can easily squeeze out through the steel bars, and the tall heater elements become strained among the moving stones. I blame myself for not being diligent enough to take the stones out at least a few times per year, wash them, and put them back – the heater elements would no doubt have stayed in better shape, and the entire kiuas might even have lived longer that way. At the same time, it must be said that positioning stones among the heater elements inside a kiuas that is 94 cm deep is hard. The inside edges and rim of the steel box were so sharp in the Harvia Figaro that it was a bit painful to squeeze your hand (and the stones) deep inside the kiuas. And this kiuas took a maximum of 90 kg of stones. That sounded great when we got it (in theory at least, a massive kiuas gives more balanced “löyly” – the experience derived from such elements as the right temperature, the release of steam and the atmosphere), but in the end this design was one of the reasons we did not maintain the kiuas the way it should have been maintained (we did of course change the stones, but probably not as often as we should have).

The new kiuas, the Tulikivi Sumu, is also rather tall, but the external dimensions hide the fact that Tulikivi relies on a dual-casing construction: there are insulating cavities inside the kiuas, and this model takes only 60 kg of stones (rather than the 90 kg of the Harvia Figaro). Together with the smaller internal dimensions, this makes it a clearly easier kiuas to handle and maintain.

We also (of course) always use a professional electrician to install a kiuas. This time, the newly installed junction box was also a sturdier and hopefully electrically safer and more durable model.

Our new Tulikivi Sumu ST, black 9 kW model.

The dual-shell casing of the Tulikivi means that it is also safer – this is something the company advertises, alongside their expertise in traditional soapstone stoves (vuolukivi – they are based in Nunnalahti, Juuka, North Karelia). The outer surfaces of this stove get warm, but they do not get so hot that you would burn yourself if you touch them while it is heating. Btw, this also means that the safe distances to the wooden seats (lauteet) or walls can be very small. One could even integrate this kiuas inside the lauteet, with the top of the kiuas and its hot stones sticking out among the people sitting there. We are not going for that option, though.

One thing that I really tried to do carefully this time was the positioning of the kiuaskivet – the stones of the stove. I have become increasingly aware that you should not just throw stones randomly into an electric stove and hope that the kiuas will give good löyly – or even that it will be safe.

The Tulikivi instruction manual even explicitly says that the warranty will be void if the stones are positioned incorrectly, and that stones packed too tightly or too loosely can even cause a fire.

The basic idea is that there should always be enough air cavities inside a kiuas, but also that the electric heater elements should not be left bare at any point. There should be a sort of internal architecture to the kiuaskivet: one needs to find large stones that fit in and work as supports in the larger spaces, and flat stones that act as internal “support beams”, taking the weight and supporting the stones of the next layer. The weight of the stones should not rest on the heater elements, as they will otherwise become twisted and deformed under the pressure. There should also be air channels, like internal chimneys, that allow hot air to move upwards and transfer the heat from the heater elements into the kiuaskivet (stones) and into the air of the sauna room.

Positioning the rough-cut olivine diabase rocks on top of each other, trying to create a suitable labyrinthine, yet also solid internal structure inside the kiuas.

We use olivine diabase as the sauna stones – this is what Tulikivi also recommends. It is a rather durable and heavy rock, meaning that it will not break quickly under the strain of temperature changes, and since it is a dense stone, it also stores and releases heat well. We also use a small number of rounded olivine diabase stones at the top of the kiuas. This is mostly for decorative purposes, even if some experts claim that rounded stones also spread the löylyvesi (the water you throw onto the kiuas in a Finnish sauna to get löyly) better: water flows smoothly off rounded stones, ends up deeper inside the kiuas, and thereby produces smoother löyly.

It should be said that the selection of löylykivet (sauna stones), their positioning, and all such details of sauna know-how are subjects of endless, passionate debate among Finns. You can, for example, visit this good site (use Google Translate, if needed) to read more: https://saunologia.fi/kiuaskivet/ .

Now, we’ll just need to take care of those lauteet, too. – Meanwhile: Hyviä löylyjä! (Good löyly to you all!)

Ensimmäisiä löylyjä odotellessa… (Waiting for the first löyly…)

“Soft” and “sharp” photos

Christmas decorations, photo taken at f/1.2 with a 50mm lens.

As the holidays are traditionally a time to be lazy and just rest, I have not undertaken any major photography projects either. One thing I have been pondering, though, is the distinction between “soft” and “sharp” photos. There are actually many things intermingling here. In the old days, the lenses I used were not capable of delivering optically sharp images, and due to long exposure times and insensitive film (later: sensors), the images were also often blurry: I had not got the subject in focus, and/or there was blur caused by movement (of the subject and/or from camera shake). Sometimes the blurry outcomes were visually or artistically interesting, but this was mostly due to pure luck rather than any skill or planning.

Later, it became feasible to get images that were technically controlled and good-looking according to the standard measures of image quality. Smartphone photos in particular have changed the situation in major ways. It should be noted that the small sensors and small lenses in early mobile phone cameras did not even need any sort of focusing mechanism – these were called ‘hyperfocal lenses’, meaning that everything from a very close distance to infinity would always be “in focus” (at least theoretically). As long as you had enough light and not too much movement in the image, you would get “sharp” photos.

Non-optimal “soft” photo: a mobile phone (iPhone) photo, taken with 10x “digital zoom”, which is actually just a cropped detail from the image optically created in the small sensor.

However, sharpness in this sense is not always what a photographer wants. Yes, you might want your main subject to be sharp (showing a lot of detail, and in perfect focus), but if everything in the image background shows that same detail and focus as well, the result can be distracting and aesthetically displeasing.

Thus, the expensive professional cameras and lenses (full-frame bodies and “fast”, wide-aperture lenses) are actually particularly good at producing “soft” rather than “sharp” images. Or, to put it slightly better, they give the photographer a larger creative space: such systems can produce both sharp and soft-looking effects, and the photographer has better control over where each appears in the image. Smartphone manufacturers have also added algorithmic techniques that make the uniformly sharp mobile photos softer, or blurry, in selected areas (typically e.g. in the background areas of portrait photos).

Sharpness in photos is both a question of information and of how it is visually expressed. For example, a camera with a very low-resolution sensor cannot be used to produce large, sharp images, as there is not enough information to start with. A small-size version of the same photo might look acceptably sharp, though. On the other hand, a camera with a massively high-resolution sensor does not automatically produce sharp-looking images. There are multiple other factors in play, and acuity and contrast are perhaps the most crucial ones. A ray of light that comes through the lens and falls on the sensor produces what is called a “circle of confusion”, and a single point of the subject should ideally be focused onto so small a spot on the sensor that it also looks like a nice, sharp point in the finished image (note that this also depends on the visual acuity of the person looking at it – meaning that discussions of “sharpness” are in certain ways always subjective). Good-quality optics produce little of the aberration and diffraction blur that would optically soften the photo.
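
As for what counts as an acceptably small circle of confusion, one common rule of thumb simply divides the sensor diagonal by roughly 1500 (which assumes a standard-size print viewed at a normal distance by someone with average eyesight). A quick sketch of that convention:

import math

def circle_of_confusion_mm(sensor_w_mm, sensor_h_mm, divisor=1500):
    """Acceptable blur-spot diameter, by the common 'diagonal / 1500' rule of thumb."""
    return math.hypot(sensor_w_mm, sensor_h_mm) / divisor

print(round(circle_of_confusion_mm(36.0, 24.0), 3))   # ~0.029 mm for a full-frame sensor
print(round(circle_of_confusion_mm(22.3, 14.9), 3))   # ~0.018 mm for a Canon APS-C sensor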

Daytime photo of a naakka (jackdaw) in winter, taken with a 600mm telephoto lens (SIGMA), f/7.1, exposure 1/400 s, ISO 6400 – with some of the EOS 550D body/sensor’s visual noise removed in post-production in Lightroom. Note how the sharp subject is isolated against the blurry background even at an f/7+ aperture value, courtesy of the long focal-length optics.

Similarly, both sharp and soft images may be affected by “visual noise”, which is generally created in the image sensor. In the film days, the “grain” of photography came from the actual small grains of the photosensitive particles that captured the light and dark areas of the image. There were “low ISO” (less light-sensitive) film materials with very fine-grained particles, and “high ISO” (highly light-sensitive) films with larger, coarser particles. Thus, it was possible to take photos in low-light conditions (or e.g. with fast shutter speeds) on the sensitive film, but the downside was more grain (i.e. fewer sharp details and more visual noise) in the final developed and enlarged photographs. The same physical principles apply today to photosensitive semiconductor camera sensors: when the amplification of the light signal is boosted, the ISO value goes up, and faster shots or images in darker conditions can be captured, but there will be more visual noise in the finished photos. The perfectly sharp, noise-free image cannot, therefore, always be achieved.
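
The ISO trade-off is easy to put into numbers: each doubling of the ISO value is one stop, and at a fixed aperture one stop of ISO buys one stop of shutter speed. A small back-of-the-envelope sketch, using the jackdaw photo’s 1/400 s and ISO 6400 as the example:

import math

def stops_between_iso(iso_from, iso_to):
    """How many stops of sensitivity separate two ISO values."""
    return math.log2(iso_to / iso_from)

stops = stops_between_iso(100, 6400)            # 6.0 stops
# At a fixed aperture, the same exposure at ISO 100 would need a 2**6 = 64 times
# longer shutter time than the 1/400 s used at ISO 6400:
print(stops, round(1 / 400 * 2 ** stops, 2))    # 6.0 and 0.16 s (about 1/6 s)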

But just as many photographers seek a soft “bokeh” effect for the backgrounds (or foregrounds) of their carefully composed photos, some photographers do not shy away from the grainy effects of visual noise and high ISO values. As with the control of sharpness and softness in focus, the use of grain is also a question of control and planning: if everything one can produce has noise and grain, there is no real creative choice. Understanding the limitations of one’s photographic equipment (through a lot of training and experimentation) will eventually allow one to use even visual “imperfections” to achieve desired atmospheres and artistic effects.

The chocolates were shot at f/1.4 (50mm lens) – a ‘dreamy’ look was desired here, but note how even the second piece of chocolate is already blurred, as the “zone of acceptable sharpness” (also known as the “depth of field”) is very narrow.
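
How narrow that zone really is can be estimated with the classic depth-of-field formulas. A sketch with assumed numbers (a 50 mm lens at f/1.4 focused at 0.5 m on an APS-C body, with a ~0.018 mm circle of confusion – illustrative values, not the exact data of the chocolate photo):

def depth_of_field_mm(focal_mm, f_number, focus_dist_mm, coc_mm):
    """Near and far limits of acceptable sharpness (thin-lens approximation)."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_dist_mm - 2 * focal_mm)
    far = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_dist_mm)
    return near, far

near, far = depth_of_field_mm(50, 1.4, 500, 0.018)
print(round(far - near, 1))   # ~4.5 mm: only a few millimetres of the scene are in focus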

Testing Sigma 150-600mm/5.0-6.3 DG OS HSM Contemporary

I have long been thinking about a longer, telephoto-range zoom lens, as this is perhaps the main technical bottleneck in my current choice of subjects. After finding a nice offer, I made the jump and invested in a Sigma 150-600mm/5.0-6.3 DG OS HSM Contemporary lens for Canon. It is not a true “professional” level wildlife lens (those are in the 10,000+ euro/dollar price range at this focal length), but it has got some nice reviews for its image quality and portability. By my standards, though, this is a pretty heavy piece of glass (1,930 g).

The 150-600 mm focal range is in itself highly useful, but when you put the lens on a “crop sensor” body as I do (Canon uses a 1.6x crop multiplier), the effective focal range becomes 240-960mm in full-frame terms, which is even further into the long end of the telephoto spectrum. The question is whether enough light still reaches the sensor to let the autofocus work reliably, and to let me shoot at apertures that allow pretty noise-free ISO sensitivity settings.

I have only made one photo walk with my new setup so far, but my feelings are clearly on the positive side at this point. I could get decent images from my old 550D DSLR body with this lens, even on a dark, cloudy winter’s day. The situation improved yet slightly when I attached the Sigma to a Viltrox EF-M Speed Booster adapter and the EOS M50 body. In this setup I lost most of the crop multiplier’s extra reach (speed boosters effectively operate as inverted teleconverters), but gained a 1.4x multiplier in aperture. On a dark day, more light was more important than that extra reach. There is nevertheless clear vignetting when the Sigma 150-600 mm is used with the Viltrox speed booster. As I was typically cropping this kind of telephoto image in Lightroom in any case, that was not an issue for me.
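
The reach-versus-light trade-off of the two setups can be summarised with simple multipliers. A sketch, assuming the common 0.71x reduction factor of this type of speed booster (check the adapter’s own specifications for the exact figure):

def effective_setup(focal_mm, f_number, crop_factor, booster=1.0):
    """Full-frame-equivalent focal length and effective f-number for a lens/adapter combo."""
    equivalent_focal = focal_mm * booster * crop_factor
    effective_aperture = f_number * booster     # a 0.71x booster brightens by about one stop
    return equivalent_focal, effective_aperture

# Bare lens at the 600 mm end on a 1.6x Canon crop body:
print(effective_setup(600, 6.3, 1.6))           # (960.0, 6.3)
# The same lens through a 0.71x speed booster on the same size of sensor:
print(effective_setup(600, 6.3, 1.6, 0.71))     # (~682, ~4.5): less reach, ~1 stop more light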

The ergonomics of using the tiny M50 with a heavy lens are not that good, of course, but I use a lens this heavy with a monopod or tripod (attached to the tripod collar/handle) in any case. The small body can then comfortably “hang about” while one concentrates on handling the big lens and the monopod/tripod.

In daylight, autofocus operation was good with both the 550D and M50 bodies. Neither is a really solid wildlife camera, though, so the slow pace of setting up the scene and focusing on a moving subject is somewhat of a challenge. I probably need to study the camera behaviour and the optimal settings a bit more, and also actually start learning the art of “wildlife photography”, if I intend to use this lens to its full potential.

My SIGMA 150-600 mm / Canon EOS M50 setup.

The Rise and Fall and Rise of MS Word and the Notepad

MS Word installation floppy. (Image: Wikipedia.)

Note-taking and writing are interesting activities. For example, it is interesting to follow how some people turn physical notepads into veritable art projects: sketchbooks and colourful pages filled with intermixing text, doodles, mind maps and larger illustrations. Usually these artistic people like to work with real pens (or even paintbrushes) on real paper pads.

Then came the time when Microsoft Office arrived on personal computers, and typing with a clacky keyboard into an MS Word window started to dominate intellectually productive work. (I am old enough to remember the DOS days with WordPerfect, and my first Finnish-language word processor – “Sanatar” – which I long used on my Commodore 64 – which, btw, actually had a rather nice keyboard for typing text.)

WordPerfect 5.1 screen. (Image: Wikipedia.)

It is also interesting to note how some people still nostalgically look back to e.g. Word 6.0 (1993) or Word 2007, which was still a pretty straightforward tool in its focus, even while introducing such modern elements as the adaptive “Ribbon” toolbar (which many people hated).

The versatility and power of Word as a multi-purpose tool have been both its strength and its main weakness. There are hundreds of operations one can carry out with MS Word, including programmable macros, printing out massive numbers of form letters or envelopes with addresses drawn from a separate data file (“Mail Merge”), and even editing and typesetting entire books (which I have personally done, even though I do not recommend it to anyone – Word was not originally designed as a desktop publishing program, even if its WYSIWYG print-layout mode can be stretched in that direction).

Microsoft Word 6.0, Mac version. (Image: user “MR” at https://www.macintoshrepository.org/851-microsoft-word-6)

These days, the free, open-source LibreOffice is perhaps the closest one can get to the look, interface and feature set of the “classic” Microsoft Word. It is a 2010 fork of OpenOffice.org, the earlier open-source office software suite.

Generally speaking, there appear to be at least three main directions that individual text-editing programs focus on. One is writing as note-taking. This is situational and generally short-form. Notes are practical, information-filled pieces of prose that are often intended to be used as part of some job or project. Meeting notes, notes that summarise books one has read, or data one has gathered (notes on index cards) are some examples.

The second main type of text program focuses on writing as content production. This is what an author working on a novel does. Screenwriters, journalists, podcast producers and many other so-called ‘creatives’ also need dedicated writing software in this sense.

The third category I already briefly mentioned: text editing as publication production. One can easily use any version of MS Word to produce a classic-style software manual, for example. It can handle multiple chapters, has tools such as section breaks that allow pagination to restart or reformat in different sections of longer documents, and it also features tools for adding footnotes and endnotes and for creating an index for the final, book-length publication. But while it provides a WYSIWYG-style print layout of pages, it does not offer the really robust page-layout features that professional desktop publishing tools focus on. The fine art of tweaking kerning (the spacing of proportional fonts) and the very exact positioning of graphic elements on publication pages – all that is best left to tools such as PageMaker, QuarkXPress and InDesign (or LaTeX, if that is your cup of tea).

As all three of these practical fields are rather different, it is obvious that a tool that excels in one is probably not optimal for another. One would not want to use heavy-duty professional publication software (e.g. InDesign) to quickly draft meeting notes, for example. The weight and complexity of the tool hinders, rather than augments, the task.

MS Word (originally published in 1983) achieved its dominant position in word processing in the early 1990s. During the 1980s there were tens of different, competing word processing tools (all eagerly vying to take the place of the earlier mechanical and electric typewriters), but Microsoft was early to enter the graphical interface era, first publishing Word for Apple Macintosh computers (1985), then for Microsoft Windows (1989). The popularity and even de facto “industry standard” position of Word – as part of the MS Office suite – is due to several factors, but for many kinds of offices, professions and purposes, the versatility of MS Word was a good match. As the .doc file format and the feature set and interface of Office and Word became the standard, it was logical for people to use them at home as well. The pricing might have been an issue, though (I read somewhere that a single-user licence of “MS Office 2000 Premium” at one point had an asking price of $800).

There have been counter-reactions, and multiple alternatives have been offered to the dominance of MS Word. I already mentioned OpenOffice and LibreOffice as important, leaner, free and open alternatives to the commercial behemoth. An interesting development is related to the rise of the Apple iPad as a popular mobile writing environment. Somewhat similarly to how Mac and Windows PCs heralded the transformation from the earlier command-line era, the iPad shows signs of the (admittedly still somewhat more limited) transformative potential of the “post-PC” era. At its best, the iPad is a highly compact, intuitive, multipurpose tool that is optimised for touch screens and simplified mobile software applications – the “apps”.

There are writing tools designed for the iPad that some people argue are better than MS Word for those who want to focus on writing in the second sense – as content production. The main argument here is that “less is better”: as these writing apps are designed just for writing, there is no danger of losing time fiddling with font settings or page layouts, for example. The iPad is also arguably a better “distraction-free” writing environment, as the mobile device is designed around a single app filling the small screen entirely – while Mac and Windows, on the other hand, boast stronger multitasking capabilities, which can lead to cluttered desktops filled with multiple browser windows, other programs and other distracting elements.

Some examples of this style of dedicated writers’ tools include Scrivener (by a company called Literature and Latte, originally published for the Mac in 2007), which is optimised for handling long manuscripts and the related writing processes. It has a drafting and note-handling area (with its “corkboard” metaphor), an outliner and an editor, making it also a sort of project-management tool for writers.

Scrivener. (Image: Literature and Latte.)

Another popular writing and “text project management” focused app is Ulysses (by a small German company of the same name). The initiative and main emphasis in the development of these kinds of “tools for creatives” has clearly been on the Apple side, rather than in the Microsoft (or Google, or Linux) ecosystems. A typical writing app of this kind syncs automatically via iCloud, making the same text seamlessly available on the iPad, iPhone and Mac of the same (Apple) user.

In emphasising “distraction-free writing”, many tools of this kind feature clean, empty interfaces where only the text currently being created is allowed to appear. Some have specific “focus modes” that highlight the current paragraph or sentence and dim everything else. Popular apps of this kind include iA Writer and Bear. While there are even simpler tools for writing – Windows Notepad and Apple Notes most notably (sic) – these newer writing apps typically include essential text formatting with Markdown, a simple markup system where e.g. emphasis is applied by surrounding an expression with *asterisk* marks (and bold with double asterisks).

iA Writer. (Image: iA Inc.)

The big question, of course, is whether such (sometimes rather expensive and/or subscription-based) writing apps are really necessary. It is perfectly possible to create a distraction-free writing environment on a common Windows PC: one just closes all the other windows. And if the multiple menus of MS Word distract, it is possible to hide them while writing. Admittedly, the temptation to stray into exploring other areas and functions is still there, but then again, even an iPad contains multiple apps and can be used in a multitasking manner (even if not as easily as a desktop environment like a Mac or Windows computer). There are also ergonomic issues: a full desktop computer allows a large, standalone screen to be adjusted to a height and angle that is much better (and healthier) for longer writing sessions than the small screen of an iPad (or even a 13”/15” laptop), particularly if one tries to balance the mobile device while lying on a sofa or squeezing it into a tiny cafeteria table corner while writing. The keyboards of desktop computers typically also have better tactile and ergonomic characteristics than the virtual, on-screen keyboards or the add-on external keyboards used with iPad-style devices. Though, with some searching and experimentation, one should be able to find rather decent solutions that also work in mobile contexts (this text is written using a Logitech “Slim Combo” keyboard cover, attached to a 10.5” iPad Pro).

For note-taking workflows, neither a word processor nor a distraction-free writing app is optimal. The leading solutions designed for this purpose include OneNote by Microsoft, and Evernote. Both are available on multiple platforms and ecosystems, and both support text and rich media content, browser capture, categorisation, tagging and powerful search functions.

I have used – and am still using – all of the above-mentioned alternatives at various times and for various purposes. As years, decades and device generations have passed, archiving and access have become increasingly important criteria. I have thousands of notes in OneNote and Evernote, and hundreds of text snippets in iA Writer and all kinds of other writing tools, often synchronised to iCloud, Dropbox, OneDrive or some other such service. Most importantly, in our Gamelab most of our collaborative research article writing happens in Google Docs/Drive, which is still the clearest, simplest and most efficient tool for such real-time collaboration. The downside of this happily polyphonic reality is that when I need to find something specific in this jungle of text and data, it is often a difficult task involving searches across multiple tools, devices and online services.

In the end, what I mostly use today is a combination of MS Word, Notepad (or, these days, Sublime Text 3) and Dropbox. I have 300,000+ files in my Dropbox archives, and the cross-platform synchronisation, version-controlled backups and two-factor-authenticated security are features I have grown to rely on. When I organise my projects into file folders that propagate through the Dropbox system, and use either plain text or MS Word (rich text), plus standard image file types (and often also PDFs) in these folders, it is pretty easy to find my text and data and continue working on them where and when needed. Text editing works equally well on a personal computer, an iPad and even a smartphone. (The free, browser-based MS Word for the web and the solid mobile app versions of MS Word help, too.) Sharing and collaboration require some thought in each individual case, though.

Dropbox. (Image: Dropbox, Inc.)

In my workflow, blog writing is perhaps the main exception to the above. These days, I like writing directly in the WordPress app or in their online editor. The experience is pretty close to the “distraction-free” style of writing tools, and as WordPress saves drafts onto their servers, I need not worry about a local app crash or device failure. When I write with MS Word, the same is true: it either auto-saves in real time to OneDrive (via the O365 we use at work), or my local PC projects get synced to the Dropbox cloud as soon as I press Ctrl-S. And I keep pressing that key combination every five seconds or so – a habit that comes instinctively after decades of work with earlier versions of MS Word for Windows, which could crash and take all of your hard-worked text with them at any minute.

So, happy 36th anniversary, MS Word.

Perfect blues

Hervantajärvi, a moment after the sunset.

While learning to take better photos within the opportunities and limitations of whatever camera technology offers, it is also interesting now and then to stop and reflect on how things are evolving.

This weekend, I took some time to study the rainy tones of autumn, and also to hunt for the “perfect blues” of the Blue Hour – the period shortly before sunrise and shortly after sunset, when the indirect sunlight coming from the sky is dominated by short, blue wavelengths.

After a few attempts I think I got to the right spot at the right time (see the photo above, taken tonight at the beach of lake Hervantajärvi). By the time of this photo it was already so dark that I actually had trouble finding my gear and changing lenses.

I made a simple experiment, taking an evening low-light photo with the same lens (Canon EF 50 mm f/1.8 STM) on two of my camera bodies – the old Canon EOS 550D (DSLR) and the new EOS M50 (mirrorless). I tried to use the exact same settings for both photos, taking them only moments apart from the same spot, using a tripod. Below are two cropped details that I tried to frame to the same area of the photos.

Evening photo, using EOS 550D.
Same spot, same lens, same settings – using EOS M50.

I am not an expert in signal processing or camera electronics, but it is interesting to see how much more detail there is in the lower, M50 version. I thought that the main differences might be in how much noise there is in the low-light photo, but the differences appear to go deeper.

The cameras are generations apart: the processor of the 550D is a DIGIC 4, while the M50 has the newer DIGIC 8. That surely has an effect, but I think the sensor might play an even larger role in this experiment. There is some specification information available about the sensors of both cameras.

While the physical sizes of the sensors are exactly the same (22.3 x 14.9 mm), the pixel counts are different (18 megapixels vs. 24.1 megapixels). Also, the pixel density differs: 5.43 MP/cm² vs. 7.27 MP/cm², which just verifies that these two cameras, launched almost a decade apart, have very different imaging technology under the hood.
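
These pixel-density figures are easy to verify from the published numbers themselves (the tiny differences to the quoted values come from rounding in the listed sensor dimensions); a quick sketch:

def pixel_density_mp_per_cm2(megapixels, sensor_w_mm, sensor_h_mm):
    """Megapixels per square centimetre of sensor area."""
    area_cm2 = (sensor_w_mm / 10.0) * (sensor_h_mm / 10.0)
    return megapixels / area_cm2

print(round(pixel_density_mp_per_cm2(18.0, 22.3, 14.9), 2))   # ~5.42 for the 550D
print(round(pixel_density_mp_per_cm2(24.1, 22.3, 14.9), 2))   # ~7.25 for the M50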

I like using both of them, but it is important to understand their strengths and limitations. The old DSLR I like to use in daylight, and particularly when trying to photograph birds or other fast-moving targets. The large grip and good-sized physical controls make a DSLR like the EOS 550D very easy and comfortable to handle.

On the other hand, when really sharp images are needed, I now rely on the mirrorless M50. Since it is a mirrorless camera, it is easy to see the final outcome of the applied settings directly in the electronic viewfinder. The M50 also has an articulated, rotating LCD screen, which is a really excellent feature when I need to reach very low, or very high, to get a nice shot. On the other hand, the buttons and the grip are physically just a bit too small to be comfortable. I never seem to hit the right switch when trying to react in a hurry, and so I miss some nice opportunities. But with a still-life composition, I have plenty of time to consult the tiny controls of the M50.

To conclude: things are changing, and good (and bad) photos can be taken with all kinds of technology. There is no one perfect camera, just different cameras that are best suited to slightly different uses and purposes.

On Tweakability

Screenshot: Linux Mint 19.2.
Linux Mint 19.2 Tina Cinnamon Edition. (See: https://www.linuxmint.com/rel_tina_cinnamon_whatsnew.php.)

Two years ago, in August 2017, I installed a new operating system on my trusty old home server (HP ProLiant ML110 Gen5). That was a rather new Linux distro called Elementary OS, which looked nice, but the 0.4 Loki release available at the time was not an optimal choice for a server, as soon turned out. It was optimised for laptop use, and while I could also set it up as a file and printer server, many things required patching and tweaking to start working. But since I install and maintain multiple operating systems in my device environment partly out of curiosity, partly to keep my brain alert, and partly for this particular kind of fun – of tweaking – I persisted, and lived with Elementary OS for two years.

Recently, interesting new releases have come out of multiple other operating systems. While I do most of my daily work in Windows 10 and in iOS (or iPadOS, as the iPad variant is now called), it is interesting to also try out e.g. different Linux versions, and I am also a fan of ChromeOS, which rarely provides surprises but rather improves steadily, while staying very clear, simple and reliable in what it does.

In terms of the particular characteristic I am talking about here – let’s call it “tweakability” – an iPad or a Chromebook is pretty much at the opposite end of the spectrum from a personal computer or server running some version of Linux. While the former excel at presenting the user with an extremely fine-tuned, clear and simple working environment that is at the same time rather limited in terms of personalisation and modification, the bare-bones, expert-oriented Linux distributions in particular are hardly ever “ready” straight after the initial setup. The basic installation is in those cases just the starting point from which the user builds their own vision of an ideal system, complete with the tools, graphical shells and/or command-line interpreters that suit their ways of working. Some strongly prefer one style of OS and its associated user experience, some the opposite. I feel it is optimal to be able to move from one kind of system to another, based on what one is trying to do, and also on how one wants to do it.

Tweakability is, in this sense, a measure of the customisability and modifiability of a system, and it is particularly important for so-called “power users”, who have very definite needs, high IT skill levels, and clear (sometimes idiosyncratic) ideas of how computing should be done. I am personally not entirely comfortable with that style of operation, and often feel rather happy that someone else has set up an easy-to-use system for me that is good enough for most things. Particularly on those days when all I do is email, some text editing and browser-based research in databases and publications (with some social media thrown in), a Chromebook, an iPad Pro or a Windows machine with a nice keyboard and a good enough screen and battery life is all that I need.

But, coming back to that home server and the new operating system installation: as my current printer has network sharing, scanning, email and all kinds of apps built in, and I do not want to run a web server from my home any more either, it is just the basic backup and file-server needs that this server box has to handle. A modern NAS box with some decent-sized disks could very well do that job. Thus, the setup of this ProLiant server is more or less a hobby project that is very much oriented towards optimal tweakability these days (though not quite as much as my experiments with various Raspberry Pi hobby computers and their operating systems).

So, I finally ended up considering three options for the new OS on this machine. The first was Ubuntu Server 18.04.3 LTS (which would have been a solid choice, but since I was already running Ubuntu on my Lenovo Yoga laptop, I wanted something a bit different). The second option would have been the new Debian 10 (Buster) minimal server (probably optimal for my old and small home server – but I also wanted to experiment with the desktop side of the operating system in this installation). So, finally, I ended up with Linux Mint 19.2 Tina Cinnamon Edition. It seemed to offer an optimal balance of reliable Debian elements and the Ubuntu application ecosystem, combined with some nice tweaks that enhance the ease of use and the aesthetic side of the OS.

I did a wipe-clean-style installation of Mint onto my 120 GB SSD, but decided to try to keep all data on the WD Red 4 TB disk. I knew in principle that this could lead to some issues: in most new operating system installations the new OS comes with a new user account, while the file system keeps the files registered under the original user, group and other permissions from the old OS installation. It would have been better to have separate archive media available with all the folder structures and files, then format the data disk, copy all the data back under the new user account, and thereby have all the file properties, ownership details etc. exactly right. But I had already accumulated something like 2.7 terabytes of data on this particular disk, and there was no exact backup of it all – since this was the backup server itself, for several devices in our house. So, I just read a quick reminder on how the chmod and chown commands work, and proceeded to mount the old data disks within the new Mint installation, take ownership of all directories and data, and tweak the user, group and other permissions into some kind of working order.
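
For the record, what that chmod/chown round mostly boiled down to was a recursive ownership change over the mounted data disk. A rough Python sketch of the same idea (the /mnt/data path, the user and group names and the modes are placeholders of my own; the shell equivalents are simply chown -R and chmod -R):

import os
import pwd
import grp

def take_ownership(root, user, group, dir_mode=0o775, file_mode=0o664):
    """Recursively hand a directory tree to a new user/group (like chown -R plus chmod -R)."""
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group).gr_gid
    for dirpath, dirnames, filenames in os.walk(root):
        os.chown(dirpath, uid, gid)
        os.chmod(dirpath, dir_mode)
        for name in filenames:
            path = os.path.join(dirpath, name)
            os.chown(path, uid, gid)
            os.chmod(path, file_mode)

# Example call (placeholder names – run as root, and only on a disk you mean to reclaim):
# take_ownership("/mnt/data", "myuser", "myuser")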

Samba, the cross-platform file-sharing system that I need for our mixed Windows–Linux local network, was the first really difficult part this time. It was just plain confusing to get the right disks, shares and folders to appear on our LAN for the Windows users, so that backup and file sharing could work. Again, I ended up reading dozens of hobbyist discussions and info pages from different decades and different forums, making tweak after tweak to users, groups, permissions and settings in the /etc/smb.conf settings file (each time followed by stopping and restarting the Samba service daemon to see the effects of the changes). After a few hours I got that running, but then the actual fun started, when I tried to install Dropbox – my main cloud archive, backup and sharing system – on top of the (terabyte-size) data I had in my old Dropbox folder. In principle you can achieve this transition by first renaming the old folder e.g. as “Dropbox-OLD”, then starting the new instance of the service and letting it create a new folder named “Dropbox”, then killing the software, deleting the new folder and renaming the old folder back to its default name. After this, restarting the Dropbox software should find the old data directory where it expects one to be and start re-indexing all that data, rather than re-downloading all of it from the cloud – which could take several days over a slow home connection.

This time, however, something went wrong (I think there was an error in how “Selective Sync” was switched on at a certain point), leading to a situation where all the existing folders were renamed by the system as the server’s “Conflicting Copy”, then copied to the Dropbox cloud (some 330,000 files), while exactly the same files and folders were also downloaded back from the cloud into exactly the same locations, without the “Conflicting Copy” marking. And of course I was away from the machine at this point, so when I realised what was going on, I had to kill Dropbox and start manually bringing it back to the state it was in before this mess. It should be noted that this Dropbox Plus account also has a “Rewind Dropbox” feature (which is designed exactly for rolling back after this kind of large-scale incident). But I was no longer sure which point in time I should rewind to, so I ended up going through about 100 different cases of conflicting copies, and also trying to manually recover various shared project folders that had become disconnected in the same process. (Btw, apologies to any of my colleagues who got weird notifications from these project shares during the weekend.)

After spending most of one night doing this, I tried to set up my other old services on the new Mint server installation the following day. I started with Plex, a media server and client software/service system that I use e.g. to stream our family video clips from the server to our smart television. There is an entire 2,600-word essay on Linux file and folder permissions on the Plex site (see: https://support.plex.tv/articles/200288596-linux-permissions-guide/). But in the end I just had to throw up my hands. There is something in the way the system sees (or doesn’t see) the data on the old 4 TB disk, and none of the tricks with different users and permission settings that I tried allow Plex to see any of the data on that disk. I did verify that if I copy the files onto the small system disk (the 120 GB SSD), the server can see and stream them normally. Maybe I will at some point get another large hard drive, set it up under the current OS and user, copy all the data there, and then try to reinstall and run Plex again. Meanwhile, I just have to say that I have had my share of tweakability for some time now. I think that Linux Mint in itself is a perfectly nice and capable operating system. It is just that software such as Dropbox or Plex does not play so nicely and reliably together with it – at least not with the tweaking skills that I possess. (While I am writing this, there are still over 283,500 files that the Dropbox client should restore from the cloud to that problematic data drive. And the program keeps crashing every few hours…)

Switching to NVMe SSD

Samsung 970 EVO Plus NVMe M.2 SSD (image credit: Samsung).

I made a significant upgrade to my main gaming and home workstation at Christmas 2015. That setup is thus soon four years old, and there are certainly some areas where the age is starting to show. The new generations of processors, system memory chips and particularly graphics adapters are all significantly faster and more capable these days. For example, my GeForce GTX 970 card is now two generations behind the current state-of-the-art graphics adapters; NVIDIA’s current RTX cards are based on the new “Turing” architecture, which is e.g. capable of much more advanced ray-tracing calculations than the previous generations of consumer graphics cards. What this means in practice is that rather than just applying pre-generated textures to the objects and parts of the simulated scenery, ray-traced graphics attempt to simulate how actual rays of light would bounce and create shadows and reflections in the virtual scene. Doing these kinds of calculations in real time for millions of light rays in an action-filled game scene is extremely computationally intensive, and the new cards are packed with billions of transistors in multiple specialised processor cores. You can have a closer look at this technology, with some video samples, here: https://www.digitaltrends.com/computing/what-is-ray-tracing/ .

I will probably update my graphics card, but only a little later. I am not a great fan of 3D action games to start with, and my home computing bottlenecks are increasingly elsewhere. I have been actively pursuing my photography hobby, and with the new mirrorless camera (EOS M50) I am moving to using the full potential of RAW file formats and Adobe Lightroom post-processing. With photo collections growing to multiple hundreds of thousands of images, and the file size of each RAW photo (and its various-resolution previews) growing larger, it is the disk, the memory and the speed of reading and writing all that information that matter most now.

The small update that I made this summer was focused on speeding up the entire system, and the disk I/O in particular. I got a Samsung 970 EVO Plus NVMe M.2 SSD (1 TB) as the new system disk (for more info, see: https://www.samsung.com/semiconductor/minisite/ssd/product/consumer/970evoplus/). The interesting part here is the “NVMe” technology: Non-Volatile Memory Express, an interface for solid-state memory devices like SSDs. This new NVMe disk looks nothing like my old hard drives, though: the entire terabyte-size disk is physically just a small add-on circuit board, which fits into the tiny M.2 connector on the motherboard (technically via a PCI Express 3.0 interface). The entire complex of physical and logical interface and connector standards involved here is frankly a pretty terrible mess to figure out, but I was just happy to notice that the ASUS motherboard (Z170-P) I had bought in December 2015 was future-proof enough to come with an M.2 connector that supports “x4 PCI Express 3.0 bandwidth”, which is apparently another way of saying that it has NVMe support.

I was actually a bit nervous when I proceeded to install the Samsung 970 EVO Plus NVMe into the M.2 slot. First I updated the motherboard firmware to the latest version, then unplugged and opened the PC. The physical installation of the tiny M.2 card actually turned out to be one of the trickiest parts of the entire operation. The tiny slot is in an awkward, tight spot on the motherboard, so I had to remove some cables and the graphics card just to get my hands into it. And the single screw that is needed to fix the card in place is not one of the regular screws used in computer case installations. Instead, it is a tiny “micro-screw” that is very hard to find. Luckily I finally located my original Z170-P sales box, and there it was: the small plastic pack with a tiny mounting bolt and the microscopic screw. I had kept the box on my storage shelves all these years without even noticing the small plastic bag and the tiny screws in the first place (I read on the Internet that plenty of others have thrown the screw away with the packaging, and have later been forced to order a replacement from ASUS).

There are some settings that need to be changed in the BIOS to get the NVMe drive running. I’ll copy the steps I followed below, in case they are useful to someone else (please follow them only at your own risk – and, btw, you need to start by creating the Windows 10 installation USB media from the Microsoft site, and by plugging it in before rebooting into the BIOS settings):

In your BIOS, go to Advanced Setup. Click the Advanced tab, then PCH Storage Configuration.

Verify SATA controller is set to – Enabled
Set SATA Mode to – RAID

Go back one screen, then select Onboard Device Configuration.

Set SATA Mode Configuration to – SATA Express

Go back one screen. Click on the Boot tab, then scroll down the page to CSM. Click on it to go to the next screen.

Set Launch CSM to – Disabled
Set Boot Device Control to – UEFI only
Boot from Network devices can be anything.
Set Boot from Storage Devices to – UEFI only
Set Boot from PCI-E PCI Expansion Devices to – UEFI only

Go back one screen. Click on Secure Boot to go to the next screen.

Set Secure Boot state to – Disabled
Set OS Type to – Windows UEFI mode

Go back one screen. Look for Boot Option Priorities – Boot Option 1. Click on the down arrow in the outlined box to the right and look for your flash drive. It should be preceded by UEFI (for example, UEFI SanDisk Cruzer). Select it so that it appears in this box.
(Source: https://rog.asus.com/forum/showthread.php?106842-Required-bios-settings-for-Samsung-970-evo-Nvme-M-2-SSD)

Though, in my case, if you set “Launch CSM” to “Disabled”, the subsequent settings in that section actually vanish from the BIOS interface. Your mileage may vary. I just backed out at that point, made the following steps first, then made the “Launch CSM” disable step, and then proceeded further.

Another interesting part is how to partition and format the SSD and the other disks in one’s system. There are plenty of websites and discussions related to this. I noticed that Windows 10 will place some partitions on other (not so fast) disks if those are physically connected during the first installation round. So it took me a few Windows re-installations to get the boot order, partitions and disks organised to my liking. But when everything was finally set up and running, the benchmark reported that my workstation’s speed had been upgraded to “UFO” level, so I suppose it was all worth it in the end.

Part of the quiet, snappy and effective performance of my system after this installation can of course be due simply to the clean Windows installation itself. Four years of use, with all kinds of software and driver installations, can clutter a system so that it does not run reliably or smoothly, regardless of the underlying hardware. I also took the opportunity to physically clean the PC thoroughly, inside and out, fix all loose and rattling components, organise the cables neatly, etc. After closing the covers, setting the PC case back in its place, plugging in a sharp 4K monitor and a new keyboard (a Logitech K470 this time), and installing just a few essential pieces of software, it was a pleasure to notice how fast everything now starts and responds, and how cool the entire PC runs according to the system temperature sensor data.

Cool summer, everyone!

M50: first experiences

Out-of-camera JPG (M50, with the EF-M 22mm f/2 lens).

I have been using the new Canon EOS M50 mirrorless system camera for a month or so now. My main experiences are pretty positive, but I also have some comments on what this camera is good at, and what it is less optimal for.

In terms of image quality and feature set, this is a pretty complete package. Canon can make good cameras. However, the small physical size of this camera is perhaps its most defining characteristic. This means that the M50 is excellent as a light and small travel companion, but also that its grip is too small to carry the body comfortably when heavy “pro” lenses or telephoto lenses are attached. One must carry the system by the lens instead.

I really like the touch screen interface of the M50. The swiveling LCD is really functional, and it is easy to take that quick photo from an extra low or high angle. The LCD touch interface Canon uses is perhaps the best on the market today: it is responsive, well designed and logically organised. This is particularly important for the M50, since it has only a few physical buttons and a single rotating control, so a photographer using the M50 needs the touch UI for many key functions. This is perhaps something that many manual-settings-oriented professional and enthusiast photographers will not like; if you want to set the aperture, exposure time and ISO from physical controls, then the M50 is not for you (one should consider e.g. the Fujifilm X-T3 or X-T30 instead). But if one is comfortable working with electronic controls, then the M50 provides multiple opportunities.

My old EOS camera had only a few (nine) autofocus points (phase-detect), and only the single point in the middle was of the fast, cross-type variety. The M50 has 99 selectable AF points (143 with some lenses), covering 80 % of the sensor area (dual-pixel type). Coupled with the touch screen, this change has had an effect on my photography style. It is now possible to compose the photo first, looking through the electronic viewfinder, and simultaneously use a thumb to drag the AF point/area (in a “computer mouse/touchpad” style) to the desired point on the screen. I am not completely fluent in this technique yet, though, and my usual technique of center focusing first, half-pressing to lock the focus, and then quickly making the final composition and shooting is perhaps in most situations quicker and simpler than moving the focus point around the screen. But since the M50 remembers, in Program mode (which I use most), where the AF point was left the last time, the center focusing method does not work properly any more. I just need to learn new tricks and keep moving the AF points on the screen (or let the camera do everything in Full Auto mode, or go into Manual mode and focus with the lens ring instead).

As a modern mirrorless camera, the M50 is packed with sensors and comes with a powerful DIGIC 8 processor, a bright LCD screen and an electronic viewfinder. All of this consumes electricity, and the battery life of the M50 is nowhere near that of my old 550D (which, by the way, also had an extra battery grip). A full day of shooting takes two or three fully charged LP-E12 batteries. In this sense the camera behaves like a smartphone with poor battery life: you need to keep that battery charger in use all the time. (The standard rating is 235 shots per charge, CIPA.)
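
The numbers roughly add up. Assuming, say, 470–700 frames on a busy travel day (my own estimate, not a measured figure), the CIPA rating points to the same two or three batteries:

# Rough sanity check; the shots-per-day figures are my own assumption.
CIPA_SHOTS_PER_CHARGE = 235  # Canon's CIPA rating for the M50

for shots in (470, 600, 700):
    batteries = -(-shots // CIPA_SHOTS_PER_CHARGE)  # ceiling division
    print(f"{shots} shots -> about {batteries} LP-E12 batteries")

In real use the EVF and wireless features can pull the effective number below the CIPA rating, which is why the charger gets so much use.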

When travelling, I have been using the wireless capabilities of the M50 a lot. It is really handy that one can move full-resolution, or reduced-resolution, versions of photos into an iPhone, iPad or Android device while on the go. On the other hand, this is nowhere near as easy as shooting and sharing directly from a smartphone. Moving a typical 200-300 photos from a shooting session into an iPad for editing and uploading is slow and feels like it takes ages. (I have not yet cracked how to get the advertised real-time Bluetooth photo transfer to work.) The traditional workflow, where the entire memory card is first read into a PC and processed with Lightroom, still makes better sense, but it is nice to have the alternative for mobile processing and sharing at least some individual photos.

Many reviewers of the M50 have written a lot about the limitations of its 4K video mode (high crop factor, no dual-pixel autofocus). I use video rarely, and then only in full HD, so that is not an issue for me. There is an external microphone input, which might be handy, and the LCD screen can be turned to point forward, if I ever go into video blogging (not that I plan to).

The main plusses of the M50 for me are the compact size, the excellent touch UI, and the very nice still-image quality. The fact that I can use both the new, compact EF-M mount lenses and (with adapters) the traditional Canon EF lenses was a major factor in the purchase decision, since a photographer’s lens collection is typically a much more expensive part of the equipment than the body alone. Changing to Nikon, Fuji or Sony would have been a big investment.

The autofocus system in the M50 is fast, and in burst mode the camera can shoot 10 fps for about 30 JPG frames in a row before the buffer fills. I am not a sports or wildlife photographer as such, so this is good enough for me. A physically bigger body would make the camera easier to handle with large and heavy lenses, but shooting with a large lens is a two-hand operation in any case (and in some cases requires a tripod), so that is not critical. I still need to train more to use the controls and switch between camera modes faster, and a touch interface is probably never going to be as fast as a camera with several dedicated physical controls. But this is a compromise one can make to get this feature set, image quality and lens compatibility in such a small package, at this price.

You can find the full M50 tech specs and feature set here in English: https://www.canon.co.uk/cameras/eos-m50/specifications/ and in Finnish: https://www.canon.fi/cameras/eos-m50/specifications/.

Mirrorless hype is over?

My mirrorless Canon EOS M50, with a 50 mm EF lens and a “speed booster” style Viltrox mount adapter.

It has been interesting to follow how, since last year, several articles have been published that discuss the “mirrorless camera hype” and put forward various kinds of criticism of either the technology itself or the related camera-industry strategies. One repeated criticism is rooted in the fact that many professional (and enthusiast) photographers still find a typical DSLR camera body to work better for their needs than a mirrorless one. There are at least three main differences: a mirrorless interchangeable-lens camera body is typically smaller than a DSLR, the battery life is weaker, and an electronic viewfinder and/or rear LCD screen offers a less realistic view than the traditional optical viewfinder of a (D)SLR camera.

The industry critiques appear to be focused on worries that, as the digital camera market as a whole is shrinking, the big companies like Canon and Nikon are directing their product development resources toward putting out mirrorless camera bodies with new lens mounts, and new lenses for these systems, rather than evolving their existing DSLR product lines. Many seem to think that this is bad business sense, since large populations of professionals and photography enthusiasts are deeply invested in the more traditional ecosystems, and a lack of progress there means there is not enough incentive to upgrade and invest for all of those who remain in that part of the market.

There might be some truth in both lines of argumentation – yet they are not the whole story either. It is true that Sony, with their α7, α7R and α7S lines of cameras, has stolen much of the momentum that could have belonged to Canon and Nikon, had they invested in mirrorless technologies earlier. Currently, the full-frame systems like Canon EOS R, or Nikon Z6 & Z7, are apparently not selling very strongly. In early May of this year, for example, it was publicised that the Sony α7 III sold more units, in Japan at least, than the Canon and Nikon full-frame mirrorless systems combined (see: https://www.dpreview.com/news/3587145682/sony-a7-iii-sales-beat-combined-efforts-of-canon-and-nikon-in-japan ). Some are ready to declare Canon’s and Nikon’s efforts dead on arrival, but both companies have stated that they are strategically committed to their new mirrorless systems, developing and launching the lenses that are necessary for their future growth. Overall, though, both Canon and Nikon are still producing and selling many more digital cameras than Sony, even while their sales numbers have been declining (in Japan at least, Fujifilm was interestingly the big winner in a year-over-year analysis; see: https://www.canonrumors.com/latest-sales-data-shows-canon-maintains-big-marketshare-lead-in-japan-for-the-year/ ).

From a photographer’s perspective, the first-mentioned concerns might be more crucial than the business ones, though. Are mirrorless cameras actually worse than comparable DSLR cameras?

There is a curious quality to moving from a large (D)SLR body to a typical mirrorless one: the small camera can feel a bit like a toy, the handling is different, and using the electronic viewfinder and LCD screen can produce flashbacks of the compact, point-and-shoot cameras of earlier years. In terms of pure image quality and feature sets, however, mirrorless cameras are already the equals of DSLRs, and in some areas have arguably moved beyond most of them. There are multiple reasons for this, and the primary one relates to the intimate link between the light sensor, image processor and viewfinder in mirrorless cameras. As a photographer you are not looking at a reflection of light coming from the lens through an alternative route into an optical viewfinder – you are looking at an image produced from the actual, real-time data that the sensor and image processor are “seeing”. The mechanical construction of mirrorless cameras can also be made simpler, and when the mirror is removed, the entire lens system can be moved closer to the image sensor – something that is technically called a shorter flange distance. This should allow engineers to design lenses for mirrorless systems that have a large aperture and fast focusing capabilities (you can check out a video where a Nikon lens engineer explains how this works: https://www.youtube.com/watch?v=LxT17A40d50 ). The physical dimensions of the camera body itself can be made small or large, as desired: Nikon Z series cameras are rather sizable, with a conventional “pro camera” style grip (handle); my Canon EOS M50 is diminutive, at the other extreme.

I think that the development of cameras with ever stronger processors, and their machine learning and algorithm-based novel capabilities, will push the general direction of photography technology towards various mirrorless systems. That said, I completely understand the benefits of more traditional DSLRs and why they might feel superior to many photographers at the moment. There have been some rumours (in the Canon space at least, which I personally follow most) that new DSLR camera bodies will be released in the upper-enthusiast APS-C / semi-professional category (search e.g. for “EOS 90D” rumours), so I think that DSLR cameras are by no means dead. Many of the latest camera technologies can be implemented in mirror-equipped bodies as well as in mirrorless ones. The big strategic question, of course, is how many different mount and lens ecosystems can be maintained and developed simultaneously. If some of the current mounts stop getting new lenses in the near future, there is at least a market for adapter manufacturers.