Four camera-lens options for bird photography (Canon, Spring 2023)

No one asked for this, but I will provide some bird/wildlife photography gear suggestions below anyway. This is focused only on the Canon options, because that is where my personal experience lies. (Canon is the leading camera manufacturer, but many others probably have somewhat similar options.) Use case and price are major factors, so I have estimated both (the quoted prices are what I could currently find here in Finland). Any comments are welcome! 😊

Beginner / occasional nature photographer’s suggested option:

  • Canon EOS R50 / M50 Mark II & Sigma 150-600mm f/5-6.3 DG OS HSM Contemporary
  • Price: 1748 euros = 669 euros (M50m2) + 1079 euros (Sigma150-600) (Note: the new R50 will be priced here at c. 879 euros)
  • Pros: over 24 megapixels; the APS-C sensor (with its 1.6x crop) brings wildlife closer; Dual Pixel autofocus is generally good and pretty fast. Simple and easy to use.
  • Cons: these entry-level cameras are pretty small and the ergonomics are not great – there is no control wheel or joystick, so one must use the touch screen to move the AF area quickly while taking photos; the Sigma lens provides good reach (240-960mm in full-frame terms; see the short sketch after these four options), but it needs an EF mount adapter, and it is slower and less certain to focus than a modern native Canon lens. And frankly, these cameras are optimized for taking photos of people rather than birds or wildlife, but they can be stretched to that use, too. (This is where I started when I moved from more general photography to bird-focused nature photography, three years ago.)

Travelling and weekend nature photographer’s suggested option:

  • Canon EOS R7 & RF 100-400mm f/5.6-8 IS USM
  • Price: 2448 euros = 1699 euros (R7) + 749 euros (RF100-400)
  • Pros: the R7 is a very lightweight yet capable camera – it has a 33-megapixel APS-C sensor, the new Digic X processor, a blazingly fast 30 fps electronic shutter, two SDXC UHS-II card slots, IBIS (in-body image stabilization), Dual Pixel AF II with animal eye-focus, and even some weather sealing.
  • Cons: the 651 AF points are good, but not pro-level; the camera will every now and then fail to lock focus. The sensor readout speed is slow, leading to noticeable “rolling shutter” distortion when the camera moves during shooting; one needs to shoot more frames to get some that are distortion-free. There is also only one control wheel, set in a “non-standard” position around the joystick. The RF 100-400 is a really nice “walkaround lens” for the R7 (160-640mm in full-frame terms). But if reach is the key priority rather than mobility, one could consider a heavier option, like the Sigma 150-600mm above.

Enthusiast / advanced hobbyist option:

  • Canon EOS R5 & RF 100-500mm f/4.5-7.1 L IS USM
  • Price: 6939 euros = 3700 euros (R5, a campaign price right now) + 3239 euros (RF100-500)
  • Pros: R5 is already a more pro-level tool; it is weather sealed, has a 45-megapixel sensor, Digic X, 20 fps, IBIS, animal eye-focus with 5940 focus points, dual slots (CFexpress Type B, & SD/SDHC/SDXC), etc.
  • Cons: this combination is much heavier to carry around than the R7 one above (738+1365 g vs. 612+635 g). There will probably be a “Mark II” of the R5 coming within a year or so (the AF system and some features are already “old generation” compared to the R3/R7). The combination of a full-frame sensor and a max 500mm focal length means that far-away targets will be rather small in the viewfinder; the 45-megapixel sensor will provide considerable room for cropping in editing, though.

Working professional / bathing-in-money option:

  • Canon EOS R3 & RF 600mm F4 L IS USM
  • Price: 19990 euros = 5990 euros (R3) + 14000 euros (RF600)
  • Pros: new generation back-illuminated, stacked sensor (24 megapixels), max 30 fps, max ISO 204800, Digic X, new generation eye-controlled AF, enhanced subject tracking, 4779 selectable AF points, etc.
  • Cons: the R3 is the current “flagship” of Canon’s mirrorless system, but in terms of pixel count it is behind the R5. Some professionals prefer the speed and more advanced autofocus system of the R3, while some use the R5 because it offers more sharp pixels, and thus room for cropping, in the editing phase. The key element here is the (monstrously sized) professional 600mm f/4 prime lens. Its image quality and subject separation are beyond anything that the more reasonably priced lenses can offer. The downsides are that these kinds of lenses are huge and pretty much always require a tripod when shooting, and the price, of course, puts them out of the question for most amateur nature photographers. (Note: as a colleague commented, these large lenses can also sometimes be found used, for much cheaper, if one is lucky.)
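
To make the “full-frame terms” and price figures above easy to verify, here is a minimal sketch (in Python) that recomputes them from the 1.6x Canon APS-C crop factor and the component prices quoted in this post:

    # Canon APS-C sensors have a 1.6x "crop factor": the field of view of a
    # lens matches a 1.6x longer focal length on a full frame body.
    CROP = 1.6

    lenses = {
        "Sigma 150-600mm": (150, 600),
        "RF 100-400mm": (100, 400),
    }
    for name, (short, long) in lenses.items():
        print(f"{name}: {CROP * short:.0f}-{CROP * long:.0f}mm in full-frame terms")

    # Kit totals as quoted above (euros):
    kits = {
        "M50 Mark II + Sigma 150-600": 669 + 1079,   # 1748
        "R7 + RF 100-400": 1699 + 749,               # 2448
        "R5 + RF 100-500": 3700 + 3239,              # 6939
        "R3 + RF 600mm": 5990 + 14000,               # 19990
    }
    for kit, total in kits.items():
        print(f"{kit}: {total} euros")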

And one can, of course, mix and combine cameras and lenses in many other ways, too, but these are what I consider notable options that differ clearly in terms of use case (and pricing).

(Photo credit: Canon / EOS Magazine.)

The return of the culture of blogging?

Monique Judge writes in The Verge about the need to start blogging again and go back to the ”Web 1.0 era”. My new year’s resolution might be to write at least a bit more into this, my main site (www.fransmayra.fi), and also publish my photos more in my photo blog site (https://frans.photo.blog), rather than just sharing everything into the daily social media feeds. For me, the main positives might be better focus, the concentration of the longer form, and a better sense of ”ownership” that comes from having my content on my own site (rather than everything just vanishing somewhere into the deep data mines of Meta/Facebook/Instagram).

The downside is that the culture of the Old Internet is no longer there, and almost no one subscribes to blogs and follows them. Well, at least there will be more peace and quiet, then. Or will the rise of the Fediverse also bring along some kind of renaissance of independent publishing platforms?

https://www.theverge.com/23513418/bring-back-personal-blogging

Transition to Mac, Pt. 2

I got the first part of my ‘Transition to Mac’ project (almost) ready by the end of my summer vacation. This part centred on a Mac Mini (M1/16GB/512GB model), which I set up as the new main “workstation” for my home office and photography editing work. This is, in a nutshell, what extras and customisations I have done to it so far:

– set up a Logitech MX Keys Mini for Mac as the keyboard
– and a Logitech G Pro X Superlight Wireless Gaming Mouse (White) as the mouse
– for fast additional SSD storage, a Samsung X5 External SSD 2TB (with nominal read/write speeds of 2,800/2,300 MB/s)
– and then made certain add-ons/modifications to macOS:
– AltTab (makes the alt-tab key combo cycle through all open app windows, not only between applications like cmd-tab)
– Alfred (for extending the already excellent Spotlight search to third-party apps, with direct access to various system commands and other advanced functionality)
– installed BetterSnapTool (for adding snap-to-sides/corners functionality to macOS window management)
– set Sublime Text as the default text editor
– DockMate (for getting Win10-style app window previews in the Mac dock, without which I find the standard dock pretty useless)
– and then installed the standard software that I use daily (most notably Adobe Creative Cloud/Lightroom/Photoshop, MS Office 365, DxO PureRAW, and Topaz DeNoise AI & Sharpen AI)
– The browser plugin installations and login procedures for the browsers I use are a major undertaking, and still ongoing.
– I use the 1Password app and service for managing and synchronising logins/passwords and other sensitive information across devices, and that speeds up the login procedures a bit these days.
– There was one major hiccup in the process so far, but in the end it was nothing to blame the Mac Mini for. I got a colour-calibrated 27″ 4K Asus ProArt display to attach to the Mac, but there were immediately major issues with the display being stuck on black when the Mac woke from sleep. As this “black screen after sleep” issue is something that has been reported with some M1 Mac Minis, I was sure that I had got a faulty computer. But after some tests with several other display cables, and a comparison with another 4K monitor, I was able to isolate the issue as a fault in the Asus instead. There was also a mechanical issue with the small plastic power switch in this display (it got repeatedly stuck, and had to be forcibly pried back into place). I was just happy to be able to return this one, and ordered a different monitor, from Lenovo this time, as they currently had a special discount in place for a model that also has a built-in Thunderbolt dock – something that should be useful, as the M1 Mac Mini has a rather small selection of ports.
– There have recently been some weird moments of not getting any image on my temporary replacement monitor either, so the jury is still out on whether there is indeed something wrong with the Mac Mini in this regard, too.
– I do not have much actual daily usage behind me with this system yet, but my first impressions are predominantly positive. The speed is one main thing: in my photo editing processes there are some functions that take almost the same time as on my older PC workstation, but mostly things happen much faster. The general impression is that I can now process my large RAW file collections maybe twice as fast as before. And some tools have obviously already been optimised for Apple Silicon/M1, since they run lightning-fast. (E.g. Topaz Sharpen AI was now so fast that I didn’t even notice it running the operation before it was already done. This really changes my workflow.)
– The smooth integration of the Apple ecosystem is another obvious thing to notice. I rarely bother to boot up my PC computers any more, as I can just use an iPad Pro or the Mac (or an iPhone) – all of them wake up immediately, and I find my working documents seamlessly synced and updated on whatever device I take hold of.
– There are, of course, some irritating elements in the Mac for a long-time Windows/PC user, too. The Mac is designed to push simplicity to a degree that actually makes some things very hard for the user, and some design decisions I simply do not understand. For example, the simple cut-and-paste keyboard combination does not work on files in the Mac Finder (file manager): you need a modifier key (Option, in addition to the usual Cmd-V) to move a file. You can drag files between folders with the mouse, but why not also allow a standard shortcut for pasting files? And then there are things like the (very important) keyboard shortcut for pasting text without formatting: “Option + Cmd + Shift + V”! I have not yet managed to imprint either of these long combo keys into my muscle memory, and looking at the Internet discussions, many frustrated users seem to have similar troubles with these Mac “personality issues”. But, otherwise, a nice system!

PC Build, Midsummer 2020

I have followed a roughly five-year PC upgrade cycle – making smaller, incremental parts upgrades in between, and building a totally new computer every four to five years. My previous two completely new systems were built during Xmas breaks – in December 2011 and 2015. This time, I was seeking something to put my mind into right now (the year 2020 has been a tough one), and specced my five-year build already at Midsummer.

It somehow feels that every year is a bad year to invest in computer systems. There is always something much better coming up, just around the corner. This time, it seems that there will be both a new processor generation and a new major graphics card generation coming up later in 2020. But after doing some comparative research for a couple of weeks, in the end I did not really care. The system I’ll build with the 2020 level of technology should be much more capable than the 2015 one, in any case. Hopefully the daily system slowdowns and bottlenecks will now ease.

Originally, I thought that this year would be the year of AMD: both the AMD Zen 2 based Ryzen 3000 series CPUs and the Radeon RX 5000 GPUs appeared very promising in terms of value for money. In the end, it looks like this might be my last Intel-Nvidia system (?) instead. My main question marks related to single-core performance in the CPUs, and to driver reliability in the Radeon RX 5000 GPUs. The more I read, and discussed with people who had experience with the Radeon RX 5000 GPUs, the more I heard stories about blue screens and crashing systems. The speed and price of the AMD hardware itself seemed excellent. With the CPUs, on the other hand, I evaluated my own main use cases and ended up with the conclusion that the slightly better single-core performance of Intel 10th generation processors would mean a bit more to me than the solid multi-core, multi-thread performance of similarly priced, modern Ryzen processors.

After a couple of weeks of study into mid-priced, medium-powered components, here are the core elements chosen for my new, Midsummer 2020 system:

Intel Core i5-10600K, LGA1200, 4.10 GHz, 12MB, Boxed (there is some overclocking potential in this CPU, too)

ARCTIC Freezer 34 eSports DUO – Red, processor cooler (I studied both various watercooling solutions and the high-powered Noctua air coolers before settling on this one; the watercooling systems did not appear quite as durable in the long run, and the premium NH-D15 was a bit too large to fit comfortably into the case; this appeared to be a good compromise)

MSI MAG Z490 TOMAHAWK, ATX motherboard (this motherboard appears to strike a nice balance between price and solid construction, feature set, and the investments put into the Voltage Regulator Modules (VRMs) and other key electronic circuit components)

Corsair 32GB (2 x 16GB) Vengeance LPX, DDR4 3200MHz, CL16, 1.35V memory modules (this amount of memory is not needed for gaming, I think, but for all my other, multitasking and multi-threaded everyday uses)

MSI GeForce RTX 2060 Super ARMOR OC GPU, 8GB GDDR6 (this is entry level ray-tracing technology – that should be capable enough for my use, for a couple of years at least)

Samsung 1TB 970 EVO Plus SSD M.2 2280, PCIe 3.0 x4, NVMe, 3500/3300 MB/s (this is the system disk; there will be another SSD and a large HDD, plus a several-terabyte backup solution)

Corsair 750W RM750x (2018), modular power unit, 80 Plus Gold (there should be enough reliable power available in this PSU; see the rough power-budget sketch after this parts list)

Cooler Master MasterBox TD500 Mesh w/ controller, ATX, Black (this was chosen on the basis of available test results – the priorities for me here were easy installation, efficient airflow, and, thirdly, silent operation)
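
As a back-of-the-envelope check of the PSU sizing, here is a small sketch (in Python) summing the nominal power figures of the main parts; the CPU/GPU numbers are the rated TDP/TGP values, the “everything else” line is my own estimate, and real transient loads can spike higher, so treat this as a rough estimate only:

    # Rough worst-case power budget for this build (watts, nominal figures).
    parts = {
        "i5-10600K (TDP; more when overclocked)": 125,
        "RTX 2060 Super (TGP)": 175,
        "motherboard, RAM, SSDs, HDD, fans (my estimate)": 100,
    }
    total = sum(parts.values())
    print(f"estimated load ~{total} W, headroom {750 - total} W of 750 W")
    # ~400 W keeps the RM750x near the middle of its efficiency curve,
    # with room for overclocking or a future GPU upgrade.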

As a final note, it was interesting to observe that during the intervening 2015-2020 period there was a time when RGB lights became the de facto standard in PC parts: everything was radiating and pulsating in multiple LED colours like a Xmas tree. It is ok to think about design, and even to aim towards some kind of futurism in this context. But some things are just plain ridiculous, and I am happy to see a bit more minimalism winning ground in PC enthusiast level components, too.

On remote education: tech tips

A wireless headset.

The global coronavirus epidemic has already caused deaths, suffering and cancellations (e.g. those of our DiGRA 2020 conference and the Immersive Experiences seminar), but luckily there are still many things that go on in our daily academic lives, albeit often in a somewhat reorganised manner. In schools and universities, the move into remote education is a particularly wide-ranging change, and one that is currently being implemented hastily in innumerable courses that were not originally planned to be run in this manner at all.

I am not in a position to provide pedagogic advice, as the responsible teachers will know much better what the actual learning goals of their courses are, and therefore they are also best capable of thinking how to reach those goals with alternative means. But since I have been directing, implementing or participating in remote education for over 20 years already (time flies!), here are at least some practical tips I can share.

Often the main part of remote education consists of independent, individual learning processes, which just need to be somehow supported. Finding information online, gathering data, reading, analysing, thinking and writing is something that does not fundamentally change, even if the in-class meetings are replaced by reporting and commenting taking place online (though the workload for teachers can easily skyrocket, which is something to be aware of). This is particularly true in asynchronous remote education, where everyone does their tasks at their own pace. More challenges emerge when teamwork, group communication, more advanced collaboration, or special software or tools are required. There are ways to get around most of those issues, too. But it is certainly true that not all education can be converted into remote education, at least not with identical learning goals.

In my experience, there are three main types of problems in real-time group meetings and audio/video conferences: 1) connection problems, 2) audio and video problems, and 3) conversation rules problems. Let’s deal with them, one by one.

1) The connection problems are due to a bad or unreliable internet connection. My main advice is either to make sure that one can use a wired rather than a Wi-Fi/cellular connection when joining a real-time online meeting, or to get very close to the Wi-Fi router in order to get as strong a signal as possible. If one has a weak connection, the experience of everyone will suffer, as there will likely be garbled noise and video artefacts coming from you, rather than good-quality streams.

2) The audio and video problems relate to echo, weak sound levels, background noise, or dark, badly positioned or unclear video. If several people are taking part in a joint meeting, it might be worth thinking carefully whether a video stream is actually needed. In most cases people are working intensely with their laptops or mobile devices during the online meeting, reviewing documents and making notes, and since there are challenges in getting real eye-to-eye contact with other people (that is still pretty much impossible with current consumer technology), there are multiple distancing factors that will lower the feeling of social presence in any case. A good-quality audio link might be enough for a decent meeting. For that, I really recommend using a headset (headphones with a built-in microphone) rather than just the built-in microphone and speakers of a laptop, for example. There will be much less echo, and since the microphone sits close to the speaker’s mouth, speech is picked up in a much clearer and louder manner, and the surrounding noise is easier to control. It is also highly advisable to move into a silent room for the duration of the teleconference.

Another tip: I suggest always connecting the headset (or external microphone and speakers) first, BEFORE starting the software tool used for teleconferencing. This way, you can make sure that the correct audio devices (both output and input) are set as the active or default ones before you start the remote meeting tool. It is pretty easy to get this messed up and end up with low-quality audio coming from the wrong microphone or speakers rather than the intended ones. Note that there are indeed two layers here: in most cases, there are separate audio device settings in the operating system (see Start/Settings/System/Sound in Windows 10), and another set, e.g. under a “Preferences” item, hidden inside most remote meeting tools. Both of those need to be checked – prior to the meeting.

Thus, one more tip: please always dedicate e.g. 10-15 minutes of technical preparation time before a remote education or remote meeting session for double-checking and solving connection, audio, video or other technical problems. It is a sad (and irresponsible) use of everyone’s precious time if every session starts with half of the speakers unable to speak, or unable to hear anyone else. Yet this kind of scenario is unfortunately still pretty typical. Remote meeting technology is notoriously unreliable, and when there are multiple people, multiple devices and multiple connections involved, the likelihood of problems multiplies.

Please be considerate and patient towards other people. No-one wants to be the person having tech problems.

3) Problems with discussion rules are the final category, and one that might also be culturally dependent. In a typical remote meeting among several Finnish people, for example, it might be that everyone just keeps quiet most of the time. That is normal, polite behaviour in the Finnish cultural context for face-to-face meetings – but something that is very difficult to decode in an online audio conference, where you are missing all the subtle gestures, confused looks, smiles and other nonverbal cues. In some other cultural setting, the difficulty might be people speaking on top of each other. Or the issues might be related to people asking questions without making it clear who is actually being addressed by the question or comment.

It is usually a good policy to have a chair or moderator appointed for an online meeting, particularly if it is larger than just a couple of people. The speakers can use the chat or other tools in the software to signal when they want to speak. The chairperson makes it clear which item is currently being discussed and gives the floor to each participant in their turn. Everyone tries to be concise, and also remembers to use names when presenting questions or comments to others. It is also good practice to start each meeting with a round of introductions, so that everyone can connect the sound of a voice with a particular person. Repeating one’s name later in the meeting when speaking up does not hurt, either.

In most of our online meetings today we are using a collaboratively edited online document for notetaking during the meeting. This helps everyone to follow what has been said or decided upon. People can fix typos in the notes in real time, or add links and other materials, without requiring the meeting chairperson (or secretary) to do so for them. There are many such online notetaking tools in standard office suites. Google Docs works fine, for example, and probably has the easiest way of generating a link that allows editing, which can then be shared in the meeting chat window with all participants, without requiring them to have a separate service account and login. Microsoft has their own integrated tools, which work best when everyone is from the same organisation.

Finally, online collaboration has come a long way from the first experiments in the 1960s (google “NLS” and Douglas Engelbart), but it still continues to have its challenges. But if we are all aware of the most typical issues, and dedicate a few minutes before each session to setup and testing (and invest 15 euros/dollars in a reliable plug-and-play headset with USB connectivity), we can remove many annoying elements and make the experience much better for everyone. Then it is much easier to start improving the actual content and pedagogic side of things. – Btw, if you have any additional tips or comments to share on this, please drop a note below.

Nice meetings!

Enlightened by Full Frame?

Magpie (Harakka), 15 February, 2020.

I have long been in the Canon camp in terms of my DSLR equipment, and it was interesting to notice that they announced last week a new, improved full frame mirrorless camera body: the EOS R5 (link to the official short announcement). While Canon was left behind competitors such as Sony in entering the mirrorless era, this time the camera giant appears to be serious. This new flagship is promised to feature in-body image stabilization that “will work in combination with the lens stabilization system” – a first in Canon cameras. Also, while the implementations of 4K video in Canon DSLRs have left professionals critical in the past, this camera is promised to feature 8K video. The leaks (featured on sites like Canon Rumors) have been discussing further features such as a 45-megapixel full frame sensor and 12/20 fps continuous shooting, and Canon also verified a new “Image.Canon” cloud platform, which will be used to stream photos live, while shooting, for further editing.

Hatanpää, 15 February 2020.

But does one really need a system like that? Aren’t cameras already good enough, with comparable image quality available at a fraction of the cost (the EOS R5 might be in the 3500-4000 euro range, body only)?

In some sense such critique might be true. I have not named the equipment used for shooting the photos featured in this blog post, for example – some are taken with my mirrorless system camera, some are coming from a smartphone camera. For online publishing and hobbyist use, many contemporary camera systems are “good enough”, and can be flexibly utilized for different kinds of purposes. And the lens is today a far more important element than the camera body, or the sensor.

Standing in the rain. February 15, 2020.

That said, there are some elements where a professional, full frame camera is indeed stronger than a consumer model with an APS-C (crop) sensor, for example. The larger sensor can capture more light and thus deliver a somewhat wider dynamic range and less noise under similar conditions. Thus, one might be able to use higher ISO values and get noise-free, professional-looking and sharp images in lower light conditions.
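
To put a rough number on that light-gathering difference, here is a minimal sketch (in Python) comparing the nominal sensor areas of full frame (36 x 24 mm) and Canon APS-C (22.3 x 14.9 mm); the “stops” figure is a simplification that ignores sensor generation and pixel design, so treat it as a ballpark only:

    import math

    # Nominal sensor dimensions, in millimetres.
    full_frame = 36.0 * 24.0          # ~864 mm^2
    aps_c = 22.3 * 14.9               # ~332 mm^2 (Canon APS-C)

    ratio = full_frame / aps_c
    stops = math.log2(ratio)
    print(f"area ratio {ratio:.1f}x = about {stops:.1f} stops more light")
    # -> ~2.6x area, ~1.4 stops: at the same exposure settings the larger
    # sensor collects more total light, which shows up as less noise.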

On the other hand, the larger sensor optically means a narrower depth of field – this is something that a portrait photographer working in a studio might love, but it might actually be a limitation for a landscape photographer. I do actually like using my smartphone for most everyday event photography and some landscape photos, too, as the small lens and sensor are good for such uses (if you understand the limitations, too). A modern, mirrorless APS-C camera is actually a really flexible tool for many purposes, but ideally one has a selection of good quality lenses to suit the mount and the smaller format camera. For Canon, there is a striking difference between the R&D investments Canon has made in recent years into the full frame, mirrorless RF mount lenses and those into the “consumer line” M mount lenses. This is based on business thinking, of course: casual photographers are moving more and more to smartphones, and there is a smaller market and much tighter competition left in the high-end, professional and serious enthusiast lenses and cameras, where Canon (and Nikon, and many others) are hoping to make their profits in the future.

Thus: more expensive, professional full frame optimised lenses, and only a few for APS-C systems? We’ll see, but it might indeed be that smaller budget hobbyists (like myself) will need to turn towards third-party developers to fill in the gaps left by Canon.

Systems in the rain…

One downside of the more compact, cheaper APS-C cameras (like the Canon M mount systems) is that while they are much nicer to carry around, they do not have as good ergonomics and weather proofing as the more pro-grade, full frame alternatives. This is aggravated in winter conditions. It is sometimes close to impossible to get your cold, gloved fingers to strike the right buttons and dials when they are as small as on my EOS M50. The cheaper camera bodies and lenses are also missing the silicone seals and gaskets that typically secure all connectors, couplings and buttons in a pro system. Thus, I get a bit nervous when outside with my budget-friendly system in weather like today’s. But, after some time spent in careful wiping and cleaning, everything seems to continue working just fine.

Joining the company. 15 February, 2020.

The absolute number one lesson I have learned in these years of photography is that the main limitation in getting great photos is rarely the equipment. There are more and less optimal, or innovative, ways of using the same setup, and with careful study and experimentation it is possible to learn ways of working around technical limitations. A top-of-the-line, full frame professional camera and lens system might have a wider “opportunity space” for someone who has learned how to use it. But with their additional complexity and heavy, expensive elements, those systems also have their inevitable downsides. – Happy photography, everyone!

The Rise and Fall and Rise of MS Word and the Notepad

MS Word installation floppy. (Image: Wikipedia.)

Note-taking and writing are interesting activities. For example, it is interesting to follow how some people turn physical notepads into veritable art projects: sketchbooks, colourful pages filled with intermixing text, doodles, mindmaps and larger illustrations. Usually these artistic people like to work with real pens (or even paintbrushes) on real paper pads.

Then there was a time when Microsoft Office arrived on personal computers, and typing with a clanky keyboard into an MS Word window started to dominate intellectually productive work. (I am old enough to remember the DOS times with WordPerfect, and my first Finnish-language word processor program – “Sanatar” – that I long used on my Commodore 64 – which, btw, actually had a rather nice keyboard for typing text.)

WordPerfect 5.1 screen. (Image: Wikipedia.)

It is also interesting to note how some people still nostalgically look back to e.g. Word 6.0 (1993) or Word 2007, which was still a pretty straightforward tool in its focus, while introducing such modern elements as the adaptive “Ribbon” toolbars (which many people hated).

The versatility and power of Word as a multi-purpose tool has been both its strength and its main weakness. There are hundreds of operations one can carry out with MS Word, including programmable macros, printing out massive amounts of form letters or envelopes with addresses drawn from a separate data file (“Mail Merge”), and even editing and typesetting entire books (which I have also personally done, even while I do not recommend it to anyone – Word was not originally designed as a desktop publishing program, even if its WYSIWYG print layout mode can be stretched in that direction).

Microsoft Word 6.0, Mac version. (Image: user “MR” at https://www.macintoshrepository.org/851-microsoft-word-6)

These days, the free, open-source LibreOffice is perhaps the closest one can get to the look, interface and feature set of the “classic” Microsoft Word. It is a 2010 fork of OpenOffice.org, the earlier open-source office software suite.

Generally speaking, there appear to be at least three main directions that individual text editing programs focus on. One is writing as note-taking. This is situational and generally short form. Notes are practical, information-filled prose pieces that are often intended to be used as part of some job or project. Meeting notes, notes that summarise books one has read, or data one has gathered (notes on index cards) are some examples.

The second main type of text program focuses on writing as content production. This is something that an author working on a novel does. Also screenwriters, journalists, podcast producers and many other so-called ‘creatives’ have needs for dedicated writing software in this sense.

The third category I already briefly mentioned: text editing as publication production. One can easily use any version of MS Word to produce a classic-style software manual, for example. It can handle multiple chapters, has tools such as section breaks that allow pagination to restart or re-format in different sections of longer documents, and it also features tools for adding footnotes and endnotes and for creating an index for the final, book-length publication. But while it provides a WYSIWYG-style print layout of pages, it does not offer the really robust page layout features that professional desktop publishing tools focus on. The fine art of tweaking font kerning (the spacing of proportional fonts), or the very exact positioning of graphic elements on publication pages – all that is best left to tools such as PageMaker, QuarkXPress, InDesign (or LaTeX, if that is your cup of tea).

As all these three practical fields are rather different, it is obvious that a tool that excels in one is probably not optimal for another. One would not want to use heavy-duty professional publication software (e.g. InDesign) to quickly draft meeting notes, for example. The weight and complexity of the tool hinders, rather than augments, the task.

MS Word (originally published in 1983) achieved its dominant position in word processing in the early 1990s. During the 1980s there were tens of different, competing word processing tools (eagerly competing to take the place of the earlier mechanical and electric typewriters), but Microsoft was early to enter the graphical interface era, first publishing Word for Apple Macintosh computers (1985), then for Microsoft Windows (1989). The popularity and even de facto “industry standard” position of Word – as part of the MS Office suite – is due to several factors, but for many kinds of offices, professions and purposes, the versatility of MS Word was a good match. As the .doc file format, feature set and interface of Office and Word became the standard, it was logical for people to use it at home, too. The pricing might have been an issue, though (I read somewhere that a single-user licence of “MS Office 2000 Premium” at one point had an asking price of $800).

There have been counter-reactions, and multiple alternatives offered to the dominance of MS Word. I already mentioned OpenOffice and LibreOffice as important, leaner, free and open alternatives to the commercial behemoth. An interesting development is related to the rise of the Apple iPad as a popular mobile writing environment. Somewhat similarly to how Mac and Windows PCs heralded the transformation from the earlier, command-line era, the iPad shows signs of the (admittedly yet somewhat more limited) transformative potential of the “post-PC” era. At its best, the iPad is a highly compact and intuitive, multipurpose tool that is optimised for touch-screens and simplified mobile software applications – the “apps”.

There are writing tools designed for the iPad that some people argue are better than MS Word for people who want to focus on writing in the second sense – as content production. The main argument here is that “less is better”: as these writing apps are designed just for writing, there is no danger that one would lose time by starting to fiddle with font settings or page layouts, for example. The iPad is also arguably a better “distraction free” writing environment, as the mobile device is designed for a single app filling the small screen entirely – while Mac and Windows, on the other hand, boast stronger multitasking capabilities, which might lead to cluttered desktops filled with multiple browser windows, other programs and other distracting elements.

Some examples of this style of dedicated writers’ tools include Scrivener (by a company called Literature and Latte, originally published for Mac in 2007), which is optimized for handling long manuscripts and related writing processes. It has a drafting and note-handling area (with the “corkboard” metaphor), an outliner and an editor, making it also a sort of project-management tool for writers.

Scrivener. (Image: Literature and Latte.)

Another popular writing and “text project management” focused app is Ulysses (by a small German company of the same name). The initiative and main emphasis in the development of these kinds of “tools for creatives” has clearly been on the Apple side, rather than in the Microsoft (or Google, or Linux) ecosystems. A typical writing app of this kind automatically syncs via iCloud, making the same text seamlessly available on the iPad, iPhone and Mac of the same (Apple) user.

In emphasising “distraction free writing”, many tools of this kind feature clean, empty interfaces where only the currently created text is allowed to appear. Some have specific “focus modes” that highlight the current paragraph or sentence, and dim everything else. Popular apps of this kind include iA Writer and Bear. While there are even simpler tools for writing – Windows Notepad and Apple Notes most notably (sic) – these newer writing apps typically include essential text formatting with Markdown, a simple code system where e.g. surrounding an expression with *asterisk* marks produces emphasis, and **double asterisks** produce bold.

iA Writer. (Image: iA Inc.)

The big question, of course, is whether such (sometimes rather expensive and/or subscription-based) writing apps are really necessary. It is perfectly possible to create a distraction-free writing environment on a common Windows PC: one just closes all the other windows. And if the multiple menus of MS Word distract, it is possible to hide the menus while writing. Admittedly, the temptation to stray into exploring other areas and functions is still there, but then again, even an iPad contains multiple apps and can be used in a multitasking manner (even if not as easily as a desktop environment like a Mac or Windows computer). There are also ergonomic issues: a full desktop computer probably allows the large, standalone screen to be adjusted to a height and angle that is much better (or healthier) for longer writing sessions than the small screen of an iPad (or even a 13”/15” laptop computer), particularly if one tries to balance the mobile device while lying on a sofa, or squeezes it into a tiny cafeteria table corner while writing. The keyboards of desktop computers typically also have better tactile and ergonomic characteristics than the virtual, on-screen keyboards or the add-on external keyboards used with iPad-style devices. Though, with some searching and experimentation, one should be able to find rather decent solutions that work also in mobile contexts (this text is written using a Logitech “Slim Combo” keyboard cover, attached to a 10.5” iPad Pro).

For note-taking workflows, neither a word processor nor a distraction-free writing app is optimal. The leading solutions designed for this purpose include OneNote by Microsoft, and Evernote. Both are available for multiple platforms and ecosystems, and both allow text as well as rich media content, browser capture, categorisation, tagging and powerful search functions.

I have used – and am still using – all of the above-mentioned alternatives at various times and for various purposes. As years, decades and device generations have passed, archiving and access have become increasingly important criteria. I have thousands of notes in OneNote and Evernote, hundreds of text snippets in iA Writer and all kinds of other writing tools, often synchronized into iCloud, Dropbox, OneDrive or some other such service. Most importantly, in our Gamelab, most of our collaborative research article writing happens in Google Docs/Drive, which is still the most clear, simple and efficient tool for such real-time collaboration. The downside of this happily polyphonic reality is that when I need to find something specific in this jungle of text and data, it is often a difficult task involving searches across multiple tools, devices and online services.

In the end, what I mostly use today is a combination of MS Word, Notepad (or, these days, Sublime Text 3) and Dropbox. I have 300,000+ files in my Dropbox archives, and the cross-platform synchronization, version-controlled backups and two-factor-authenticated security features are something that I have grown to rely on. When I organise my projects into file folders that propagate through the Dropbox system, and use either plain text or MS Word (rich text), plus standard image file types (though often also PDFs) in these folders, it is pretty easy to find my text and data, and continue working on them, where and when needed. Text editing works equally well on a personal computer, an iPad and even a smartphone. (The free, browser-based MS Word for the web, and the solid mobile app versions of MS Word, help too.) Sharing and collaboration require some thought in each individual case, though.

Dropbox. (Image: Dropbox, Inc.)

In my workflow, blog writing is perhaps the main exception to the above. These days, I like writing directly into the WordPress app or their online editor. The experience is pretty close to the “distraction-free” style of writing tools, and as WordPress saves drafts onto their online servers, I need not worry about a local app crash or device failure. But when I write with MS Word, the same is true: it either auto-saves in real time into OneDrive (via the O365 we use at work), or my local PC projects get synced into the Dropbox cloud as soon as I press ctrl-s. And I keep pressing that key combination every five seconds or so – a habit that comes instinctively, after decades of work with earlier versions of MS Word for Windows, which could crash at any minute and take all of your hard-worked text with it.

So, happy 36th anniversary, MS Word.

Perfect blues

Hervantajärvi, a moment after the sunset.

While learning to take better photos within the opportunities and limitations provided by whatever camera technology offers, it is also interesting now and then to stop and reflect on how things are evolving.

This weekend, I took some time to study the rainy tones of Autumn, and also to hunt for the “perfect blues” of the Blue Hour – the period some time before sunrise and after sunset, when the indirect sunlight coming from the sky is dominated by short, blue wavelengths.
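
A rough physical intuition for those blues, as a small sketch (in Python): in Rayleigh scattering, the scattered intensity grows as 1/wavelength⁴, so short blue wavelengths dominate indirect skylight (during the blue hour, ozone absorption at the red end of the spectrum deepens the effect further). The wavelengths below are just illustrative values:

    # Rayleigh scattering intensity is proportional to 1 / wavelength^4,
    # so shorter (blue) wavelengths scatter far more strongly than red ones.
    blue_nm, red_nm = 450.0, 650.0      # illustrative wavelengths
    ratio = (red_nm / blue_nm) ** 4
    print(f"blue scatters about {ratio:.1f}x more than red")
    # -> ~4.4x, which is why skylight without direct sun looks so blue.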

After a few attempts I think I got into the right spot at the right time (see the above photo, taken tonight at the beach of Hervantajärvi lake). At the time of this photo it was already so dark that I actually had trouble finding my gear and changing lenses.

I made a simple experiment of taking an evening, low-light photo with the same lens (Canon EF 50 mm f/1.8 STM) on two of my camera bodies – both the old Canon EOS 550D (DSLR) and the new EOS M50 (mirrorless). I tried to use the exact same settings for both photos, taking them only moments apart from the same spot, using a tripod. Below are two cropped details that I tried to frame to the same area of the photos.

Evening photo, using EOS 550D.
Same spot, same lens, same settings – using EOS M50.

I am not an expert in signal processing or camera electronics, but it is interesting to see how much more detail there is in the lower, M50 version. I thought that the main differences might be in how much noise there is in the low-light photo, but the differences appear to go deeper.

The cameras are generations apart from each other: the processor of the 550D is a DIGIC 4, while the M50 has the new DIGIC 8. That surely has an effect, but I think that the sensor might play an even larger role in this experiment. There is some information available on the sensors of both cameras – see the links below:

While the physical sizes of the sensors are exactly the same (22.3 x 14.9 mm), the pixel counts are different (18 megapixels vs. 24.1 megapixels). Also, the pixel density differs: 5.43 MP/cm² vs. 7.27 MP/cm², which just verifies that these two cameras, launched almost a decade apart, have very different imaging technology under the hood.
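
Those density figures are easy to recompute from the sensor specs; here is a quick sketch (in Python) – the small differences from the quoted figures come from rounding in the published sensor dimensions:

    # Pixel density = megapixels / sensor area (cm^2); sensor is 22.3 x 14.9 mm.
    area_cm2 = 2.23 * 1.49                       # ~3.32 cm^2
    for camera, megapixels in [("EOS 550D", 18.0), ("EOS M50", 24.1)]:
        print(f"{camera}: {megapixels / area_cm2:.2f} MP/cm^2")
    # -> ~5.4 vs ~7.3 MP/cm^2: the M50 packs about a third more
    # photosites into the same sensor area.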

I like using both of them, but it is important to understand their strengths and limitations. I prefer the old DSLR in daylight, and particularly when trying to photograph birds or other fast-moving targets. The large grip and good-sized physical controls make a DSLR like the EOS 550D very easy and comfortable to handle.

On the other hand, when really sharp images are needed, I now rely on the mirrorless M50. Since it is a mirrorless camera, it is easy to see the final outcome of the applied settings directly in the electronic viewfinder. The M50 also has an articulated, rotating LCD screen, which is a really excellent feature when I need to reach very low, or very high, to get a nice shot. On the other hand, the buttons and the grip are physically just a bit too small to be comfortable. I never seem to hit the right switch when trying to react in a hurry, missing some nice opportunities. But when it is a still-life composition, I have plenty of time to consult the tiny controls of the M50.

To conclude: things are changing, and good (and bad) photos can be taken with all kinds of technology. And there is no one perfect camera, just different cameras that are best suited for slightly different uses and purposes.

Switching to NVMe SSD

Samsung 970 EVO Plus NVMe M.2 SSD (image credit: Samsung).

I made a significant upgrade to my main gaming and home workstation at Christmas 2015. That setup is thus soon four years old, and there are certainly some areas where the age is starting to show. The new generations of processors, system memory chips and particularly graphics adapters are all significantly faster and more capable these days. For example, my GeForce GTX 970 card is now two generations behind the current state-of-the-art graphics adapters; NVIDIA’s current RTX cards are based on the new “Turing” architecture that is e.g. capable of much more advanced ray tracing calculations than the previous generations of consumer graphics cards. What this means in practice is that rather than just applying pre-generated textures to different objects and parts of the simulated scenery, ray-traced graphics attempt to simulate how actual rays of light would bounce and create shadows and reflections in the virtual scene. Doing these kinds of calculations in real time for millions of light rays in an action-filled game scene is extremely computationally intensive, and the new cards are packed with billions of transistors, in multiple specialised processor cores. You can have a closer look at this technology, with some video samples, e.g. here: https://www.digitaltrends.com/computing/what-is-ray-tracing/ .
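
To make the idea a bit more concrete, here is a heavily simplified sketch (in Python – not how a GPU actually organises the work) of the core operation that ray tracing repeats billions of times: find where a ray hits a surface, then cast a “shadow ray” towards the light to decide whether that point is lit. The sphere and light positions are just made-up example values:

    import math

    def hit_sphere(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for t,
        # returning the distance to the nearest hit in front of the ray.
        oc = [o - c for o, c in zip(origin, center)]
        a = sum(d * d for d in direction)
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                     # the ray misses the sphere
        t = (-b - math.sqrt(disc)) / (2 * a)
        return t if t > 1e-6 else None      # small epsilon avoids self-hits

    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light = (5.0, 5.0, 0.0)

    # Primary ray from a camera at the origin, straight down the -z axis.
    t = hit_sphere((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), sphere_center, sphere_radius)
    if t is not None:
        hit = (0.0, 0.0, -t)
        # Shadow ray: from the hit point towards the light source.
        to_light = [l - h for l, h in zip(light, hit)]
        shadowed = hit_sphere(hit, to_light, sphere_center, sphere_radius)
        print("hit at", hit, "->", "in shadow" if shadowed else "lit")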

I will probably update my graphics card, but only a little later. I am not a great fan of 3D action games to start with, and my home computing bottlenecks are increasingly in other areas. I have been actively pursuing my photography hobby, and with the new mirrorless camera (EOS M50) I am moving to using the full potential of RAW file formats and Adobe Lightroom post-processing. With photo collections growing into multiple hundreds of thousands, and the file size of each RAW photo (and its various-resolution previews) growing larger, it is the disk, the memory and the speed of reading and writing all that information that matters most now.

The small update that I made this summer was focused on speeding up the entire system, and the disk I/O in particular. I got a Samsung 970 EVO Plus NVMe M.2 SSD (1 TB) as the new system disk (for more info, see here: https://www.samsung.com/semiconductor/minisite/ssd/product/consumer/970evoplus/). The interesting part here is the “NVMe” technology: it stands for Non-Volatile Memory Express, an interface for solid-state memory devices like SSDs. This new NVMe disk looks nothing like my old hard drives, though: the entire terabyte-size disk is physically just a small add-on circuit board, which fits into the tiny M.2 connector on the motherboard (technically via a PCI Express 3.0 interface). The entire complex of physical and logical interface and connector standards involved here is frankly a pretty terrible mess to figure out, but I was just happy to notice that the ASUS motherboard (Z170-P) which I had bought in December 2015 was future-proof enough to come with an M.2 connector supporting “x4 PCI Express 3.0 bandwidth”, which is apparently another way of saying that it has NVMe support.
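
That “x4 PCI Express 3.0 bandwidth” claim is easy to put into numbers; a quick sketch (in Python), using the commonly cited ~985 MB/s of usable per-lane bandwidth for PCIe 3.0:

    # PCIe 3.0 runs at 8 GT/s per lane; after 128b/130b encoding that is
    # roughly 985 MB/s of usable bandwidth per lane.
    per_lane = 985                      # MB/s
    link = 4 * per_lane                 # x4 link
    print(f"PCIe 3.0 x4: ~{link} MB/s") # ~3940 MB/s
    # The 970 EVO Plus is rated at 3500 MB/s sequential reads, so it nearly
    # saturates the x4 link - far beyond the ~600 MB/s ceiling of SATA III.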

I was actually a bit nervous when I proceeded to install the Samsung 970 EVO Plus into the M.2 slot. First I updated the motherboard firmware to the latest version, then unplugged and opened the PC. The physical installation of the tiny M.2 card actually became one of the trickiest parts of the entire operation. The tiny slot is in an awkward, tight spot on the motherboard, so I had to remove some cables and the graphics card just to get my hands into it. And the single screw that is needed to fix the card in place is not one of the regular screws used for computer case installations. Instead, it is a tiny “micro-screw” which is very hard to find. Luckily I finally located my original Z170-P sales box, and there it was: the small plastic pack with a tiny mounting bolt and the microscopic screw. I had kept the box on my storage shelves all these years, without even noticing the small plastic bag and tiny screws in the first place (I read on the Internet that there are plenty of others who have thrown the screw away with the packaging, and then later been forced to order a replacement from ASUS).

There are some settings that need to be changed in the BIOS to get the NVMe drive running. I’ll copy the steps that I followed below, in case they are useful for someone else (please follow them only at your own risk – and, btw, you need to start by creating the Windows 10 installation USB media from the Microsoft site, and by plugging that in before trying to reboot and enter the BIOS settings):

In your BIOS, go to Advanced Setup. Click the Advanced tab, then PCH Storage Configuration.

Verify SATA Controller is set to – Enabled
Set SATA Mode to – RAID

Go back one screen, then select Onboard Device Configuration.

Set SATA Mode Configuration to – SATA Express

Go back one screen. Click on the Boot tab, then scroll down the page to CSM. Click on it to go to the next screen.

Set Launch CSM to – Disabled
Set Boot Device Control to – UEFI only
Boot from Network Devices can be anything.
Set Boot from Storage Devices to – UEFI only
Set Boot from PCI-E/PCI Expansion Devices to – UEFI only

Go back one screen. Click on Secure Boot to go to the next screen.

Set Secure Boot State to – Disabled
Set OS Type to – Windows UEFI mode

Go back one screen. Look for Boot Option Priorities – Boot Option 1. Click on the down arrow in the outlined box to the right and look for your flash drive. It should be preceded by UEFI (for example, UEFI SanDisk Cruzer). Select it so that it appears in this box.
(Source: https://rog.asus.com/forum/showthread.php?106842-Required-bios-settings-for-Samsung-970-evo-Nvme-M-2-SSD)

Though, in my case, if you set “Launch CSM” to “Disabled”, the following settings in that section actually vanish from the BIOS interface. Your mileage may vary. I just backed up at that point, made the other steps first, then did the “Launch CSM” disable step, and proceeded further.

Another interesting part is how to partition and format the SSD and other disks in one’s system. There are plenty of websites and discussions related to this. I noticed that Windows 10 will place some partitions on other (not so fast) disks if those are physically connected during the first installation round. So, it took me a few Windows re-installations to actually get the boot order, partitions and disks organised to my liking. But when everything was finally set up and running, the benchmark reported that my workstation speed had been upgraded to the “UFO” level, so I suppose everything was worth it, in the end.

Part of the quiet, snappy and effective performance of my system after this installation can of course be due simply to the clean Windows installation itself. Four years of use, with all kinds of software and driver installations, can clutter a system so that it does not run reliably or smoothly, regardless of the underlying hardware. I also took the opportunity to physically clean the PC inside and out, fix all loose and rattling components, organise the cables neatly, etc. After closing the covers, setting the PC case back in its place, plugging in a sharp 4K monitor and a new keyboard (a Logitech K470 this time), and installing just a few essential pieces of software, it was a pleasure to notice how fast everything now starts and responds, and how cool the entire PC is running according to the system temperature sensor data.

Cool summer, everyone!

Mirrorless hype is over?

My mirrorless Canon EOS M50, with a 50 mm EF lens, and a “speed booster” style mount Viltrox adapter.

It has been interesting to follow how, since last year, several articles have been published that discuss the “mirrorless camera hype” and put forward various kinds of criticism of either this technology or the related camera industry strategies. One repeated criticism is rooted in the fact that many professional (and enthusiast) photographers still find a typical DSLR camera body to work better for their needs than a mirrorless one. There are at least three main differences: a mirrorless interchangeable-lens camera body is typically smaller than a DSLR, the battery life is weaker, and an electronic viewfinder and/or LCD back screen offers a less realistic image than the traditional optical viewfinder of a (D)SLR camera.

The industry critiques appear to be focused on worries that, as the digital camera market as a whole is going down, the big companies like Canon and Nikon are directing their product development resources into putting out mirrorless camera bodies with new lens mounts, and new lenses for these systems, rather than evolving their existing DSLR product lines. Many seem to think that this is bad business sense, since large populations of professionals and photography enthusiasts are deeply invested in the more traditional ecosystems, and a lack of progress there means that there is not enough incentive to upgrade and invest for all of those who remain in that part of the market.

There might be some truth in both lines of argumentation – yet they are also not the whole story. It is true that Sony, with their α7, α7R and α7S lines of cameras, has stolen much of the momentum that could have been strong for Canon and Nikon, had they invested in mirrorless technologies earlier. Currently, the full frame systems like the Canon EOS R, or the Nikon Z6 & Z7, are apparently not selling very strongly. In early May of this year, for example, it was publicised that the Sony α7 III sold more units, in Japan at least, than the Canon and Nikon full frame mirrorless systems combined (see: https://www.dpreview.com/news/3587145682/sony-a7-iii-sales-beat-combined-efforts-of-canon-and-nikon-in-japan ). Some are ready to declare Canon’s and Nikon’s efforts dead on arrival, but both companies have claimed to be strategically committed to their new mirrorless systems, developing and launching the lenses that are necessary for their future growth. Overall, though, both Canon and Nikon are producing and selling many more digital cameras than Sony, even while their sales numbers have been declining (in Japan at least, Fujifilm was interestingly the big winner in the year-over-year analysis; see: https://www.canonrumors.com/latest-sales-data-shows-canon-maintains-big-marketshare-lead-in-japan-for-the-year/ ).

From a photographer’s perspective, the first-mentioned concerns might be more crucial than the business ones, though. Are mirrorless cameras actually worse than comparable DSLR cameras?

There is a curious quality when you move from a large (D)SLR body to using a typical mirrorless one: the small camera can feel a bit like a toy, the handling is different, and using the electronic viewfinder and LCD screen can produce flashbacks of the compact, point-and-shoot cameras of earlier years. In terms of pure image quality and feature sets, however, mirrorless cameras are already the equals of DSLRs, and in some areas have arguably moved beyond most of them. There are multiple reasons for this, and the primary one relates to the intimate link between the light sensor, image processor and viewfinder in mirrorless cameras. As a photographer you are not looking at a reflection of light coming from the lens through an alternative route into an optical viewfinder – you are looking at the image that is produced from the actual, real-time data that the sensor and image processor are “seeing”. The mechanical construction of mirrorless cameras can be made simpler, and when the mirror is removed, the entire lens system can be moved closer to the image sensor – something that is technically called a shorter flange distance. This should allow engineers to design lenses for mirrorless systems that have large apertures and fast focusing capabilities (you can check out a video where a Nikon lens engineer explains how this works: https://www.youtube.com/watch?v=LxT17A40d50 ). The physical dimensions of the camera body itself can be made small or large, as desired. Nikon Z series cameras are rather sizable, with a conventional “pro camera” style grip (handle); my Canon EOS M50 is diminutive, from the other extreme.

I think that the development of cameras with ever stronger processors, and their novel machine learning and algorithm-based capabilities, will push the general direction of photography technology towards various mirrorless systems. That said, I completely understand the benefits of more traditional DSLRs, and why they might feel superior to many photographers at the moment. There have been some rumours (in the Canon space at least, which I am personally mostly following) that new DSLR camera bodies will be released in the upper-enthusiast APS-C / semi-professional DSLR category (search e.g. for “EOS 90D” rumours), so I think that DSLR cameras are by no means dead. There are many ways in which the latest camera technologies can be implemented in mirror-equipped bodies, as well as in mirrorless ones. The big strategic question, of course, is how many different mount and lens ecosystems can be maintained and developed simultaneously. If some of the current mounts stop getting new lenses in the near future, there is at least a market for adapter manufacturers.