Four camera-lens options for bird photography (Canon, Spring 2023)

No one asked for this, but I will provide some bird/wildlife photography gear suggestions below anyway. This focuses only on Canon options, because that is where my personal experience lies. (Canon is the leading camera manufacturer, but many others probably have somewhat similar options.) Use case and price are major factors, so I have estimated both (the quoted prices are what I could currently find here in Finland). Any comments are welcome! 😊

Beginner / occasional nature photographer’s suggested option:

  • Canon EOS R50 / M50 Mark II & Sigma 150-600mm f/5-6.3 DG OS HSM Contemporary
  • Price: 1748 euros = 669 euros (M50m2) + 1079 euros (Sigma150-600) (Note: the new R50 will be priced here at c. 879 euros)
  • Pros: over 24 megapixels, APS-C (with 1.6x crop) brings wildlife closer; Dual Pixel autofocus is generally good and pretty fast. Simple and easy to use.
  • Cons: these entry-level cameras are pretty small, the ergonomics are not great, and there is no control wheel or joystick, so one must use the touch screen to quickly move the AF area while shooting; the Sigma lens provides good reach (240-960mm in full-frame terms – see the quick sketch below), but it needs an EF mount adapter, and it is slower and less reliable to focus than a true, modern Canon RF-mount lens. And frankly, these cameras are optimized for taking photos of people rather than birds or wildlife, but they can be stretched to it, too. (This is where I started when I moved from more general photography to bird-focused nature photography, three years ago.)
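
To make the “full-frame equivalent” reach figures in these lists easy to check, here is a minimal Python sketch of the crop-factor multiplication for the two APS-C kits in this post (Canon’s APS-C crop factor is 1.6; the focal ranges are just the nominal figures quoted here):

```python
# Minimal sketch: full-frame-equivalent reach of a lens on a Canon APS-C body.
CROP_FACTOR = 1.6  # Canon APS-C; Nikon/Sony APS-C would be ~1.5

aps_c_kits = {
    "Sigma 150-600mm (on R50 / M50 Mark II)": (150, 600),
    "Canon RF 100-400mm (on R7)": (100, 400),
}

for lens, (wide, tele) in aps_c_kits.items():
    print(f"{lens}: {wide * CROP_FACTOR:.0f}-{tele * CROP_FACTOR:.0f} mm equivalent")
# -> 240-960 mm and 160-640 mm, the figures quoted in these lists
```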

Travelling / weekend nature photographer’s suggested option:

  • Canon EOS R7 & RF 100-400mm f/5.6-8 IS USM
  • Price: 2448 euros = 1699 euros (R7) + 749 euros (RF100-400)
  • Pros: R7 is a very lightweight yet capable camera – it has a 33-megapixel APS-C sensor, the new Digic X processor, a blazingly fast 30 fps electronic shutter, two SDXC UHS-II card slots, IBIS (in-body image stabilization), Dual Pixel AF II with animal eye-focus, and even some weather sealing.
  • Cons: the 651 AF points are good, but not pro-level; the camera will every now and then fail to lock focus. The sensor readout speed is slow, leading to noticeable “rolling shutter” distortion when the camera or subject moves during shooting. One needs to shoot more frames to get some that are distortion-free. There is also only one control wheel, set in a “non-standard” position around the joystick. The RF 100-400 lens is a really nice “walkaround lens” for the R7 (it is 160-640mm in full-frame terms). But if reach is the key priority rather than mobility, one could consider a heavier option, like the Sigma 150-600mm above.

Enthusiast / advanced hobbyist option:

  • Canon EOS R5 & RF 100-500mm f/4.5-7.1 L IS USM
  • Price: 6939 euros = 3700 euros (R5, a campaign price right now) + 3239 euros (RF100-500)
  • Pros: R5 is already a more pro-level tool; it is weather sealed, has a 45-megapixel sensor, Digic X, 20 fps, IBIS, animal eye-focus with 5940 focus points, dual slots (CFexpress Type B, & SD/SDHC/SDXC), etc.
  • Cons: this combination is much heavier to carry around than the R7 one above (738+1365g vs. 612+635g). There will probably be a “Mark II” of the R5 coming within a year or so (the AF system and some features are already “old generation” compared to the R3 / R7). The combination of a full-frame sensor and a max 500mm focal length means that far-away targets will be rather small in the viewfinder; the 45-megapixel sensor will provide considerable room for cropping in editing, though.

Working professional / bathing-in-money option:

  • Canon EOS R3 & RF 600mm F4 L IS USM
  • Price: 19990 euros = 5990 euros (R3) + 14000 euros (RF600)
  • Pros: new generation back-illuminated, stacked sensor (24 megapixels), max 30 fps, max ISO 204800, Digic X, new generation eye-controlled AF, enhanced subject tracking, 4779 selectable AF points, etc.
  • Cons: R3 is the current “flagship” of Canon’s mirrorless system, but in terms of pixel count it is behind the R5. Some professionals prefer the speed and more advanced autofocus system of the R3, while some use the R5 because its higher resolution leaves more room for cropping in the editing phase. The key element here is the (monstrously sized) professional 600mm f/4 prime lens. Its image quality and subject separation are beyond anything that the more reasonably priced lenses can offer. The downsides are that these kinds of lenses are huge, practically require a tripod whenever you shoot, and the price, of course, puts them out of the question for most amateur nature photographers. (Note: as a colleague commented, these large lenses can also sometimes be found used, for much cheaper, if one is lucky.)

There are, of course, many other ways to mix and match cameras and lenses, but these are what I consider notable options that differ clearly in terms of use case (and pricing).

(Photo credit: Canon / EOS Magazine.)

The return of the culture of blogging?

Monique Judge writes in The Verge about the need to start blogging again and go back to the “Web 1.0 era”. My new year’s resolution might be to write at least a bit more on this, my main site (www.fransmayra.fi), and also publish more of my photos on my photo blog (https://frans.photo.blog), rather than just sharing everything into the daily social media feeds. For me, the main positives might be better focus, the concentration of the longer form, and also a better sense of “ownership” from having my content on my own site (rather than everything just vanishing somewhere into the deep data mines of Meta/Facebook/Instagram).

The downside is that the culture of the Old Internet is no longer there, and almost no one subscribes to and follows blogs any more. Well, at least there will be more peace and quiet, then. Or will the rise of the Fediverse also bring along some kind of renaissance of independent publishing platforms?

https://www.theverge.com/23513418/bring-back-personal-blogging

Transition to Mac, Pt. 2

I got the first part of my ‘Transition to Mac’ project (almost) ready by the end of my summer vacation. It centres around a Mac Mini (M1/16GB/512GB model), which I set up as the new main “workstation” for my home office and photography editing work. In a nutshell, this is what extras and customisations I have done to it so far:

– set up a Logitech MX Keys Mini for Mac as the keyboard
– and a Logitech G Pro X Superlight wireless gaming mouse (white) as the mouse
– for fast additional SSD storage, a Samsung X5 External SSD 2TB (with nominal read/write speeds of 2,800/2,300 MB/s)
– and then made certain add-ons/modifications to macOS:
– AltTab (makes the Alt-Tab key combo cycle through all open app windows, not only between applications like Cmd-Tab does)
– Alfred (for extending the already excellent Spotlight search to third-party apps, direct access to various system commands and other advanced functionalities)
– installed BetterSnapTool (for adding snap-to-sides/corners functionality to macOS window management)
– set Sublime Text as the default text editor
– DockMate (for getting Win10-style app window previews into the Mac dock, without which I feel the standard dock is pretty useless)
– And then installing the standard software that I use daily (Adobe Creative Cloud/Lightroom/Photoshop; MS Office 365; DxO Pure RAW; Topaz DeNoise AI & Sharpen AI, most notably)
– The browser plugin installations and login procedures for the browsers I use are a major undertaking, and still ongoing.
– I use the 1Password app and service for managing and synchronising logins/passwords and other sensitive information across devices, which speeds up the login procedures a bit these days.
– There has been one major hiccup in the process so far, but in the end it was nothing to blame the Mac Mini for. I got a colour-calibrated 27″ 4K Asus ProArt display to attach to the Mac, but there were immediately major issues with the display being stuck on black when the Mac woke from sleep. As this “black screen after sleep” issue has been reported with some M1 Mac Minis, I was sure that I had got a faulty computer. But after some tests with several other display cables and a comparison with another 4K monitor, I was able to isolate the issue to a fault in the Asus instead. There was also a mechanical issue with the small plastic power switch in this display (it got repeatedly stuck and had to be forcibly pried back into place). I was just happy to be able to return it, and ordered a different monitor, from Lenovo this time, as they currently had a special discount on a model that also has a built-in Thunderbolt dock – something that should be useful, as the M1 Mac Mini has a rather small selection of ports.
– There have recently been some weird moments of not getting any image on my temporary replacement monitor either, so the jury is still out on whether there is indeed something wrong with the Mac Mini in this regard, too.
– I do not have much actual daily usage behind me with this system yet, but my first impressions are predominantly positive. Speed is one main thing: in my photo editing processes there are some functions that take almost the same time as on my older PC workstation, but mostly things happen much faster. The general impression is that I can now process my large RAW file collections maybe twice as fast as before. And some tools have obviously already been optimised for Apple Silicon/M1, since they run lightning-fast. (E.g. Topaz Sharpen AI was now so fast that I didn’t even notice it running the operation before it was already done. This really changes my workflow.)
– The smooth integration of the Apple ecosystem is another obvious thing to notice. I rarely bother to boot up my PCs any more, as I can just use an iPad Pro or the Mac (or an iPhone); they all wake up immediately, and I find my working documents seamlessly synced and updated on whatever device I pick up.
– There are, of course, also some irritating elements in the Mac for a long-time Windows/PC user. The Mac is designed to push simplicity to a degree that actually makes some things very hard for the user, and some design decisions I simply do not understand. For example, the simple cut-and-paste keyboard combination does not work in the Mac Finder (file manager): you need to apply a modifier key (Option, in addition to the usual Cmd-V). You can drag files between folders with a mouse, but why not support the standard Cmd-V for pasting files? And then there are things like the (very important) keyboard shortcut for pasting text without formatting: “Option + Cmd + Shift + V”! I have not yet managed to imprint either of these long key combos into my muscle memory, and looking at Internet discussions, many frustrated users seem to have similar issues with these Mac “personality issues”. But otherwise, a nice system!

Transition to Mac

Apple’s M1 Processor Lineup, March 2022. (Source: Apple.)

I have been an occasional Mac user in the past: in 2007, I bought a Mac Mini (an Intel Core 2 Duo, 2.0 GHz model) from Tokyo, where I was attending the DiGRA conference. And in November 2013, I invested in a MacBook Pro with Retina Display (late 2013 model, with a 2.4GHz Core i5 and Intel Iris graphics). Both were wonderful systems for their time, but also sort of “walled garden” style environments, with no real possibility for user upgrades, and they were soon outpaced by PC systems, particularly in gaming. So I found myself returning, again and again, to the more powerful PC desktop computers and laptops.

Now I have again started the process of moving back into the Apple/Mac ecosystem, this time full-time: both my work and home devices, in computing as well as in mobile tech, will most likely be in the Apple camp at some point later this year. Why, you might ask – what has changed?

The limitations of Apple in upgradability and general freedom of choice are still the same. Apple devices also continue to be typically more expensive than comparably specced competitors from the non-Apple camp. It is a bit amusing to look at a bunch of smart professionals sitting next to each other, each tapping at identical Apple-logo laptops and glancing at their identical iPhones. Apple has managed to get a powerful hold on the independent professional scene (including e.g. professors, researchers, designers and developers), even while large IT departments continue to prefer PCs, mostly due to the cheaper unit prices and better support for centralised “desktop management”. This is visible in universities, too, where the IT department gets PCs for support personnel and offers them as the default choice for new employees, yet many people pick a Mac if they can decide for themselves.

In my case, the decision to go back to the Apple ecosystem is connected to two primary factors: the effects of the corona pandemic, and the technical progress of “Apple silicon”.

The first factor consists of all the cumulative effects of three years of remote and hybrid work. Fast and reliable systems that can support multitasking, video and audio really well are of paramount importance now. Hybrid meeting and teaching situations are particularly complex, as there is now a need to run several communications tools simultaneously, stream high-quality video and audio, and possibly also record and edit audio and video, while also producing online publications (e.g. course environments, public lecture web pages, entire research project websites) that integrate video and photographic content more than used to be the case before.

In my case, it is particularly the lack of reliability and the inability of my PC systems to process image and video data that has led to the decision to go back to Apple. I have a relatively powerful thin-and-light laptop for work, and a Core i5/RTX 2060 Super based gaming/workstation PC at home. The laptop became underpowered first, and some meetings now start maybe 5-10 minutes late, with my laptop trying to find the strength needed to run a few browser windows, some office software, a couple of communication and messaging apps, plus the required real-time video and audio streams. My PC workstation can still run many older games, but when I import some photo and video files while also having a couple of editing tools open, everything gets stuck. There is nothing as frustrating as staring at a computer screen where the “Wheel of Death” is spinning when you have many urgent things to do. I have developed a habit of constantly clicking on different background windows, and keeping the Windows Task Manager open all the time, so that I can immediately kill any stuck processes and try to recover my work to where I was.

Recently I got the chance to test an M1 MacBook Pro (thanks, Laura), and while the laptop was merely equal to my mighty PC workstation in some tasks, there were processes that were easily 5-10 times faster on the Mac, particularly everything related to file management and photo and video editing. And the overall feeling of responsiveness and fluency in multitasking was just awesome. The new “Apple silicon” chips and architectures are providing user experiences that are just so much better than anything I have had on the PC side in recent years.

There are multiple reasons behind this, and there are technical people who can explain the underlying factors much better than I can (see, e.g., what Erik Engheim from Oslo writes here: https://debugger.medium.com/why-is-apples-m1-chip-so-fast-3262b158cba2). The basic benefits come from the very deep integration of Apple’s system-on-a-chip (SoC) design, where a whole computer has been designed and packed into the one, integrated M1 chip package:

  • Central processing unit (CPU) – the “brains” of the SoC. Runs most of the code of the operating system and your apps.
  • Graphics processing unit (GPU) — handles graphics-related tasks, such as visualizing an app’s user interface and 2D/3D gaming.
  • Image processing unit (ISP) — can be used to speed up common tasks done by image processing applications.
  • Digital signal processor (DSP) — handles more mathematically intensive functions than a CPU. Includes decompressing music files.
  • Neural processing unit (NPU) — used in high-end smartphones to accelerate machine learning (A.I.) tasks. These include voice recognition and camera processing.
  • Video encoder/decoder — handles the power-efficient conversion of video files and formats.
  • Secure Enclave — encryption, authentication, and security.
  • Unified memory — allows the CPU, GPU, and other cores to quickly exchange information
    (Source: E. Engheim, “Why Is Apple’s M1 Chip So Fast?”)

The underlying architecture of Apple Silicon comes from their mobile devices, iPhones and iPads in particular. While mainstream PC components have over the years grown increasingly massive and power-hungry, the mobile environment has set strict limits and requirements for the efficiency of the system architecture. There are efforts to use the same ARM (“reduced instruction set” based) architectures that e.g. mobile chip maker Qualcomm uses in their processors for Android phones also in “Windows on Arm” computers. While the Android phones are doing fine, the Arm-based Windows computers have generally been so slow and limited in their software support that they have remained in the margins.

In addition to the reliability, stability, speed and power-efficiency benefits, Apple can today also provide the kind of seamless integration between computers, tablets, smartphones and wearable technology (e.g. AirPods headphones and Apple Watch devices) that users of more hybrid ecosystems can only dream about. This is now also becoming increasingly important, as (post-pandemic) we are moving between the home office, the main office, various “third spaces” and e.g. conference travel, while also still keeping up the remote meetings and events regime that emerged during the corona isolation years. Life is just so much easier when e.g. notifications, calls and data follow you more or less seamlessly from device to device, depending on where you are – sitting, running or changing trains. As the controlling developer-manufacturer of the hardware, the software and the underlying online services, Apple is in an enviable position to implement a polished, hybrid environment that works well together – and is, thus, one less source of stress.

My old camera

I wanted to revisit my old gear tonight, so I dug up my trusty EOS 550D, coupled with the BG-E8 battery grip and the classic Canon 70-200mm f/4L USM lens. The Friendly Cat again provided the modelling services.

I was immediately reminded of the obvious strengths of this older, bigger camera body: the ergonomics are just so much better when you can really hold the camera comfortably and steadily in your hand, and have large, mechanical control knobs that you can quickly and effortlessly experiment with.

On the other hand, the limitations were also immediately obvious; in particular, the mirrorless digital camera (EOS M50) that I mostly use these days allows one to move seamlessly from the viewfinder to live view on the rear display while composing. The 550D also has rear-display live view, but you need to switch it on specifically, it is slow and imprecise, and the autofocus in particular is just terrible when shooting with it.

The optical viewfinder, on the other hand, is excellent, and the very limited nine (9) AF points do their job just well enough for this kind of slow “portrait” work. The low maximum ISO of 6400 also does not matter when taking pictures under the bright evening sun, and the sharpness of that old Canon L lens nicely matches the 18-megapixel image sensor’s resolution capabilities.

Thus, if I were to think about a “perfect camera” for my use, I would be happy with the current M50 image sensor resolution (24.1 megapixels), but I would really welcome a more capable autofocus system, and better low-light performance in particular. The single most beneficial upgrade could, however, be a body with larger physical dimensions, with better/larger mechanical controls for selecting the program mode and aperture, and for making the other key adjustments.

While the new Canon EOS R series cameras provide exactly that, the issue for me is that those are full-frame cameras, and I am very happy taking my photos with APS-C (the “crop sensor”). Full-frame lenses, and the new Canon RF lenses in particular, tend to be both large and expensive to a degree that does not make much sense for my kind of “Sunday photographer”.

There are alternatives like Fujifilm, with their excellent APS-C camera bodies (the X-T30 and X-T4, for example), and their sharp and relatively compact and affordable lenses. But I am deeply invested in the Canon ecosystem – it would be so much easier if Canon came up with a well-designed camera like the 7D Mark II, but updated and upgraded to the capabilities of current mirrorless sensors and image processors. One can always make wishes, right? Happy weekend, everyone!

Operating systems: now and then, what next?

Ubuntu on HP Elitebook x360 (screenshot).

There are multiple operating systems to choose from. Some are feature-rich, some not so much. While those who are enthusiastic and passionate about these kinds of things continue to be passionate, the actual differences between the systems are growing less and less important, year by year.

The basics of digital environments are today “good enough”, pretty much everywhere you go.

There are still certain significant differences, of course. Windows has the legacy of decades of great popularity in highly heterogeneous work and private-use contexts. It has a huge back catalogue of software and hardware that has been created for, or supported on, Windows computers. This is both a blessing and a challenge. It is very difficult to produce a new version of the OS that does not conflict with some software, or some driver-hardware combination out there, as the recent hurdles of Windows 10 upgrade installations have proved.

Apple Macintosh users have more often been left out in the cold, as there have been many devices that never came with drivers to make them work with a Mac. There has arguably been a lot of high-quality, professional software available for Macs, but in purely numeric terms the Windows software ecosystem is an order of magnitude larger.

A bit similarly, iOS (the operating system for Apple mobile devices) is limited by design: there are many restrictions on modifying and customising the default operation and setup of an iOS system. On the other hand, software developers can rely on a highly standardised environment, and users get a very reliable (even if unified and rigid) experience.

There are thus obvious pluses and minuses with the various philosophies that operating systems have adopted, or have been based on.

The current leader, Windows 10, is overall strong in diversity, meaning here particularly software and hardware support. Be it business software, services or games, Windows is the default environment with the most alternatives. On the other hand, a Windows user is challenged by a certain loss of control: both the operating system and much of the available software, system add-ons and drivers are proprietary. The environment is effectively filled with black boxes that do something – and the user can in most cases only hope that what goes on is based on the right and correct principles. And as there are multiple actors in every Windows installation, the cumulative effects can be surprising: there is Microsoft, trying its best to introduce new functions and technologies while at the same time maintaining backward compatibility with its long history of legacy systems. Then there is the OEM (original equipment manufacturer), like Dell or HP, which typically configures its Windows computers with its own custom-made tools and drivers. Then comes the user, who also installs various kinds of elements into this environment. There is a saying in Swedish, “tårta på tårta” – cake upon cake. No one is capable of carrying responsibility for how the entire conglomerate operates in a Windows computer. In many cases the results are good enough, and the freedom of choice and the diversity of support for multiple use cases are what the users are looking for. On the other hand, there is also a well-documented history of bugs and problems related to the piling-up effects of this sprawling and inefficient software ecosystem.

As the leading open-source alternative, Linux is known for rather efficient use of computing resources. A typical Linux distribution runs well even on ageing computer hardware, and on modern, powerful systems one can really experience what a fast and reliable OS can mean. There are (of course) certain downsides to Linux as well. The main challenge lies in the somewhat higher learning threshold. While there are increasingly easy distributions that come pre-configured with graphical tools allowing a non-expert user to take hold of their system and configure it to their liking, the foundation of Linux is in command-line tools and text-format configuration files. Even today I find that after a new, out-of-the-box Linux distro installation, I feel the need to spend perhaps an hour or two on the command line, hunting down and installing the various tweaking tools, add-ons and other elements that are lacking in the default installation. But Linux is getting better. Particularly the support for new hardware is now much better than it used to be ten years ago. While a Linux laptop user in the past would in many cases find that most of the controllers, special keys and other elements of their device would not work at all, or only after considerable effort, today the situation is different. Most things actually work, which is great. But if something does not work in a Linux installation, one is mostly left to one’s own devices (and to hunting for help on the various community websites online). However, as an alternative example, Lenovo recently announced that they will certify their entire workstation portfolio to run Linux – “every model, every configuration” (see: https://news.lenovo.com/pressroom/press-releases/lenovo-brings-linux-certification-to-thinkpad-and-thinkstation-workstation-portfolio-easing-deployment-for-developers-data-scientists/).

I myself recently configured two laptops with a dual-boot Windows/Linux setup: a Microsoft Surface Pro 4 and an HP Elitebook x360 1030 G3. I considered both to be more challenging devices from a Linux perspective, since they are two-in-one, hybrid devices with touch screens, which means that they most probably rely on many proprietary drivers to keep all their functionalities running. There were certain challenges (in the BIOS/UEFI settings, in configuring the GRUB2 boot menu, and in the disk partitioning), but Linux itself actually handled both devices just fine. I was using the most recent, 20.04 release of the Ubuntu desktop distribution, but there are several other alternatives that could work equally well, or even better. The Elitebook x360 is my main daily driver, and while my Windows 10 installation makes it run burning hot, fans blowing, Ubuntu is snappy, quiet and cool. And I can actually operate both the touch screen and the touchpad with gestures that I have fully customised to my own liking, the active pen also works fine with the screen, and there are only a couple of things that fall short of Windows 10. The special keys for controlling brightness do not work (I use control sliders instead), and probably neither do the infrared camera (for facial recognition and login) nor the LTE modem (I have not tested it, though). One thing I noticed is that this system currently sounds much better under Windows – the sound system is Bang & Olufsen certified, and they have probably configured the sound drivers and equalizers for optimal sound delivery, as the audio quality of music under Windows is perhaps the best of any laptop I have used. But there is a highly detailed software tool called PulseEffects available for Linux that allows one to create a customised audio profile – if one is ready to dedicate the time and effort to tweaking and testing. That is the reality of Linux still, for good or bad; but luckily most of the essentials for work use run just fine, directly out of the box.

As the complete opposite of the high “tweakability” of Linux, iOS/iPadOS systems limit the user’s possibilities to a radical degree. The upside is that an iPhone or iPad is very easy to use: one can always find the same settings in the same places. It used to be that Apple mobile devices had excellent battery life and system reliability, but could only do one thing at a time. With the launch of iOS/iPadOS 13 (and the coming version 14), multitasking became a certain kind of option, particularly on iPad Pro devices. One can also buy a (premium, and rather expensive) “Magic Keyboard” add-on for the iPad Pro, and it comes with really nice scissor keys, plus a touchpad that allows mouse-and-keyboard style control of iOS. iOS 14 will add some more user-configurable elements, such as (Android-style) widgets on the home screen. There are inevitable complications related to the added capabilities. An iPad Pro that is constantly polling the touchpad (or burning the backlight in the keyboard) does not have as long a battery life as one without it. The multitasking and various split-screen modes in iPadOS are rather clumsy and hard to control without considerable dedication to learning new gestures and touch-control skills.

Thus, I would say that we are currently in a rather good situation in terms of having several good alternatives to choose from. I myself prefer to have both Windows 10 and Linux installed on my main computers, and keep them updated to their most recent versions. But I also use iOS, iPadOS and Android daily, and all of them have their distinctive strengths and weaknesses. If something does not work very well in one environment, it is often better to try something different, rather than trying to force the operating system out of its own “comfort zone”. I suspect this basic situation will remain the same in the foreseeable future, too.

PC Build, Midsummer 2020

I have followed roughly a five-year PC upgrade cycle – making smaller, incremental parts upgrades in between, and building a totally new computer every four to five years. My previous two completely new systems were built during Xmas breaks, in December 2011 and 2015. This time, I was looking for something to put my mind into right now (the year 2020 has been a tough one), and specced my five-year build already at Midsummer.

It somehow feels that every year is a bad year to invest in computer systems. There is always something much better coming up, just around the corner. This time, it seems that there will be both a new processor generation and a major new graphics card generation coming later in 2020. But after doing some comparative research for a couple of weeks, in the end I did not really care. The system I build with 2020-level technology should be much more capable than the 2015 one in any case. Hopefully the daily system slowdowns and bottlenecks will now ease.

Originally, I thought that this year would be the year of AMD: both the AMD Zen 2 architecture based Ryzen 3000 series CPUs and the Radeon RX 5000 GPUs appeared very promising in terms of value for money. In the end, it looks like this might instead be my last Intel-Nvidia system (?). My main question marks related to single-core CPU performance, and to driver reliability of the Radeon 5000 GPUs. The more I read, and the more I discussed with people who had experience with the Radeon 5000 GPUs, the more I heard stories about blue screens and crashing systems. The speed and price of the AMD hardware itself seemed excellent. On the CPU side, on the other hand, I evaluated my own main use cases and came to the conclusion that the slightly better single-core performance of Intel’s 10th-generation processors would mean a bit more to me than the solid multi-core, multi-thread performance of similarly priced, modern Ryzen processors.

After a couple of weeks of study into mid-priced, medium-powered components, here are the core elements chosen for my new, Midsummer 2020 system:

Intel Core i5-10600K, LGA1200, 4.10 GHz, 12MB, Boxed (there is some overclocking potential in this CPU, too)

ARCTIC Freezer 34 eSports DUO – Red, processor cooler (I studied both various watercooling solutions and the high-powered Noctua air coolers before settling on this one; the watercooling systems did not appear quite as durable in the long run, and the premium NH-D15 was a bit too large to fit comfortably into the case; this appeared to be a good compromise)

MSI MAG Z490 TOMAHAWK, ATX motherboard (this motherboard appears to strike a nice balance between price vs. solid construction, feature set, and investments put into the Voltage Regulator Modules, VRMs, and other key electronic circuit components)

Corsair 32GB (2 x 16GB) Vengeance LPX, DDR4 3200MHz, CL16, 1.35V memory modules (this amount of memory is not needed for gaming, I think, but for all my other, multitasking and multi-threaded everyday uses)

MSI GeForce RTX 2060 Super ARMOR OC GPU, 8GB GDDR6 (this is entry level ray-tracing technology – that should be capable enough for my use, for a couple of years at least)

Samsung 1TB 970 EVO Plus SSD M.2 2280, PCIe 3.0 x4, NVMe, 3500/3300 MB/s (this is the system disk; there will be another SSD and a large HDD, plus a several-terabyte backup solution)

Corsair 750W RM750x (2018), modular power unit, 80 Plus Gold (there should be enough reliable power available in this PSU)

Cooler Master MasterBox TD500 Mesh w/ controller, ATX, Black (this was chosen on the basis of available test results – my priorities here were easy installation, efficient airflow, and, thirdly, silent operation)

As a final note, it was interesting to see that during the intervening 2015-2020 period there was a time when RGB lighting became the de facto standard in PC parts: everything was radiating and pulsating in multiple LED colours like a Xmas tree. It is ok to think about design, and even aim towards some kind of futurism in this context. But some things are just plain ridiculous, and I am happy to see a bit more minimalism winning ground in PC-enthusiast-level components, too.

On remote education: tech tips

A wireless headset.

The global coronavirus epidemic has already caused deaths, suffering and cancellations (e.g. those of our DiGRA 2020 conference and the Immersive Experiences seminar), but luckily there are still many things that go on in our daily academic lives, albeit often in a somewhat reorganised manner. In schools and universities, the move to remote education in particular is producing wide-ranging changes, and it is currently being implemented hastily in innumerable courses that were not originally planned to be run in this manner at all.

I am not in a position to provide pedagogic advice, as the responsible teachers know much better what the actual learning goals of their courses are, and are therefore also best placed to think about how to reach those goals by alternative means. But since I have been directing, implementing or participating in remote education for over 20 years already (time flies!), here are at least some practical tips I can share.

Often the main part of remote education consists of independent, individual learning processes, which just need to be supported somehow. Finding information online, gathering data, reading, analysing, thinking and writing do not fundamentally change, even if the in-class meetings are replaced by reporting and commenting that takes place online (though the workload for teachers can easily skyrocket, which is something to be aware of). This is particularly true in asynchronous remote education, where everyone does their tasks at their own pace. It is when teamwork, group communication, more advanced collaboration, or some special software or tools are required that more challenges emerge. There are ways to get around most of those issues, too. But it is certainly true that not all education can be converted into remote education, at least not with identical learning goals.

In my experience, there are three main types of problems in real-time group meetings or audio/video conferences: 1) connection problems, 2) audio and video problems, and 3) conversation-rules problems. Let’s deal with them one by one.

1) Connection problems are due to a bad or unreliable internet connection. My main advice is either to use a wired rather than a Wi-Fi/cellular connection when joining a real-time online meeting, or to get very close to the Wi-Fi router in order to get as strong a signal as possible. If your connection is weak, everyone’s experience will suffer, as there will likely be garbled noise and video artefacts coming from you rather than good-quality streams.

2) The audio and video problems relate to echo, weak sound levels, background noise, or dark, badly positioned or unclear video. If several people are taking part in a joint meeting, it might be worth thinking carefully about whether a video stream is actually needed. In most cases people are working intensely with their laptops or mobile devices during the online meeting, reviewing documents and making notes, and since there are challenges in getting real eye-to-eye contact with other people (that is still pretty much impossible with current consumer technology), there are multiple distancing factors that will lower the feeling of social presence in any case. A good-quality audio link might be enough for a decent meeting. For that, I really recommend using a headset (headphones with a built-in microphone) rather than just the built-in microphone and speakers of the laptop, for example. There will be much less echo, and the microphone will be close to the speaker’s mouth, meaning that speech is picked up much more clearly and loudly, and the surrounding noise is easier to control. It is also highly advisable to move to a quiet room for the duration of the teleconference.

Another tip: I suggest always connecting the headset (or external microphone and speakers) first, BEFORE starting the software tool used for teleconferencing. This way, you can make sure that the correct audio devices (both output and input) are set as the active or default ones before you start the remote meeting tool. It is pretty easy to get this messed up and end up with low-quality audio coming from the wrong microphone or speakers rather than the intended ones. Note that there are indeed two layers here: in most cases, there are separate audio device settings in the operating system (see Start/Settings/System/Sound in Windows 10), and another layer, e.g. a “Preferences” item, with further audio device settings hidden inside most remote meeting tools. Both of these need to be checked – prior to the meeting.

Thus, one more tip: please always dedicate e.g. 10-15 minutes of technical preparation time before a remote education or remote meeting session for double-checking and solving connection, audio, video or other technical problems. It is a sad (and irresponsible) use of everyone’s precious time if every session starts with half of the speakers unable to speak, or unable to hear anyone else. This kind of scenario is unfortunately still pretty typical. Remote meeting technology is notoriously unreliable, and when there are multiple people, multiple devices and multiple connections involved, the likelihood of problems multiplies quickly.

Please be considerate and patient towards other people. No-one wants to be the person having tech problems.

3) Problems related to discussion rules are the final category, and one that might also be culturally dependent. In a typical remote meeting among several Finnish people, for example, it might be that everyone just keeps quiet most of the time. That is normal, polite behaviour in the Finnish cultural context for face-to-face meetings – but something that is very difficult to decode in an online audio teleconference, where you are missing all the subtle gestures, confused looks, smiles and other nonverbal cues. In some other cultural settings, the difficulty might be people speaking on top of each other. Or the issues might be related to people asking questions without making it clear who is actually being addressed by the question or comment.

It is usually good policy to have a chair or moderator appointed for an online meeting, particularly if it involves more than just a couple of people. Speakers can use the chat or other tools in the software to signal when they want to speak. The chairperson makes it clear which item is currently being discussed and gives the floor to each participant in turn. Everyone tries to be concise, and also remembers to use names when presenting questions or comments to others. It is also good practice to start each meeting with a round of introductions, so that everyone can connect the sound of a voice with a particular person. Repeating one’s name later in the meeting when speaking up does not hurt, either.

In most of our online meetings today, we use a collaboratively edited online document for note-taking during the meeting. This helps everyone follow what has been said or decided upon. People can fix typos in the notes in real time, or add links and other materials, without requiring the meeting chairperson (or secretary) to do so for them. There are many such online note-taking tools in standard office suites. Google Docs works fine, for example, and has probably the easiest way of generating an edit-enabled link, which can then be shared in the meeting chat window with all participants, without requiring them to have a separate service account and login. Microsoft has its own integrated tools, which work best when everyone is from within the same organisation.

Finally, online collaboration has come a long way from the first experiments in the 1960s (google “NLS” and Douglas Engelbart), but it still has its challenges. If we are all aware of the most typical issues, and dedicate a few minutes before each session to setup and testing (and invest 15 euros/dollars in a reliable plug-and-play USB headset), we can remove many annoying elements and make the experience much better for everyone. Then it is much easier to start improving the actual content and the pedagogic side of things. – Btw, if you have any additional tips or comments to share on this, please drop a note below.

Nice meetings!

Enlightened by Full Frame?

Magpie (Harakka), 15 February, 2020.

I have long been in the Canon camp in terms of my DSLR equipment, and it was interesting to notice that last week they announced a new, improved full-frame mirrorless camera body: the EOS R5 (link to the official short announcement). While Canon fell behind competitors such as Sony in entering the mirrorless era, this time the camera giant appears to be serious. This new flagship is promised to feature in-body image stabilization that “will work in combination with the lens stabilization system” – a first in Canon cameras. Also, while the implementations of 4K video in Canon DSLRs have left professionals critical in the past, this camera is promised to feature 8K video. The leaks (featured on sites like Canon Rumors) have discussed further features such as a 45-megapixel full-frame sensor and 12/20 fps continuous shooting, and Canon has also confirmed a new “Image.Canon” cloud platform, which will be used to stream photos live for further editing, while shooting.

Hatanpää, 15 February 2020.

But does one really need a system like that? Aren’t cameras already good enough, with comparable image quality available at a fraction of the cost (the EOS R5 might be in the 3500-4000 euro range, body only)?

In some sense such critique might be justified. I have not named the equipment used for shooting the photos featured in this blog post, for example – some were taken with my mirrorless system camera, some come from a smartphone camera. For online publishing and hobbyist use, many contemporary camera systems are “good enough”, and can be flexibly utilized for different kinds of purposes. And the lens is today a far more important element than the camera body or the sensor.

Standing in the rain. February 15, 2020.

That said, there are some areas where a professional, full-frame camera is indeed stronger than a consumer model with an APS-C (crop) sensor, for example. It can capture more light with its larger sensor and thus deliver a somewhat wider dynamic range and less noise under similar conditions. Thus, one may be able to use higher ISO values and still get clean, professional-looking, sharp images in lower light (see the rough calculation below).
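
As a rough, back-of-the-envelope illustration of that light-gathering difference (my own simplification – real sensors also differ in efficiency and technology), the sensor-area ratio translated into stops looks like this:

```python
# Back-of-the-envelope sketch: area advantage of a full-frame sensor over
# Canon APS-C, expressed in "stops" of light at the same exposure settings.
import math

full_frame_area = 36.0 * 24.0    # mm^2
aps_c_area = 22.2 * 14.8         # Canon APS-C, mm^2

ratio = full_frame_area / aps_c_area
print(f"Area ratio: {ratio:.2f}x, i.e. roughly {math.log2(ratio):.1f} stops more light")
# -> about 2.6x, roughly 1.4 stops (an idealised figure; sensor tech also matters)
```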

On the other hand, the larger sensor optically means a narrower depth of field – something that a portrait photographer working in a studio might love, but which can actually be a limitation for a landscape photographer. I do actually like using my smartphone for most everyday event photography and some landscape photos, too, as the small lens and sensor are good for such uses (if you understand the limitations). A modern, mirrorless APS-C camera is actually a really flexible tool for many purposes, but ideally one has a selection of good-quality lenses to suit the mount and the smaller-format camera. For Canon, there is a striking difference between the R&D investments Canon has made in recent years into the full-frame, mirrorless RF-mount lenses and those made into the “consumer line” M-mount lenses. This is based on business thinking, of course: casual photographers are switching to smartphones more and more, and there is a smaller market and much tighter competition left in the high-end, professional and serious-enthusiast lenses and cameras, where Canon (and Nikon, and many others) hope to make their profits in the future.

Thus: more expensive, professional, full-frame-optimised lenses, and only a few for APS-C systems? We’ll see, but it might indeed be that smaller-budget hobbyists (like myself) will need to turn towards third-party manufacturers to fill in the gaps left by Canon.

Systems in the rain…

One downside of the more compact, cheaper APS-C cameras (like the Canon M-mount systems) is that while they are much nicer to carry around, they do not have ergonomics and weather sealing as good as the more pro-grade, full-frame alternatives. This is aggravated in winter conditions. It is sometimes close to impossible to get your cold, gloved fingers to strike the right buttons and dials when they are as small as on my EOS M50. The cheaper camera bodies and lenses also lack the silicone seals and gaskets that typically protect all connectors, couplings and buttons in a pro system. Thus, I get a bit nervous when outside with my budget-friendly system in weather like today’s. But after some time spent in careful wiping and cleaning, everything seems to keep working just fine.

Joining the company. 15 February, 2020.

The absolute number one lesson I have learned in these years of photography is that the main limitation to getting great photos is rarely the equipment. There are more and less optimal, or innovative, ways of using the same setup, and with careful study and experimentation it is possible to learn ways of working around technical limitations. A top-of-the-line, full-frame professional camera and lens system might have a wider “opportunity space” for someone who has learned how to use it. But with their additional complexity and heavy, expensive elements, those systems also have their inevitable downsides. – Happy photography, everyone!

Compact lenses, great photos?

Sports and wildlife photographers in particular are famous (or notorious) for investing in and carrying around lenses that are often just huge: large, long, and heavy. Is it possible to take great photos with small, compact lenses, or is an expensive and large lens the only option for a hobbyist photographer who wants to achieve better results?

Winter details, captured with Canon EOS M50, and the kit lens: EF-M 15-45mm f/3.5-6.3 IS STM.

I am by no means an authority on optics or lens design, but I think certain key principles are important to take into consideration.

Perhaps the first of these is the style of photography one is engaged in. Are you shooting portrait photos indoors, or even in a studio? Or are you roaming outdoors, trying to get close-up photos of elusive birds and animals? Or are you rather a landscape photographer? Or a street photographer?

Sometimes the intended use of the photos is also a factor to consider. Are these party photos, or something that you aim to share mostly among your friends on social media? Or is this that important photo art project that you aim to output as large-format prints and hang on your walls – or even in a gallery?

These days, digital camera sensors are “sharp” enough for pretty much any purpose – one of my smartphones, a Huawei Mate 20 Pro, for example, has a 40-megapixel main photo sensor with a native resolution of 7296 × 5472. That is more than you need for a large poster print (depending on viewing distance and PPI settings, 4000 x 6000 pixels, or even 2000 x 3000 pixels, might be enough for a poster print; see the quick calculation below). There are many professional photographers who took their commercial photos for years with cameras that had only 6- or 8-megapixel sensors. And many of those photos were reproduced as large posters, or on the covers of glossy magazines, and no one complained.
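
A quick sanity check of those pixel counts against print sizes (a sketch with assumed PPI targets: 300 PPI is a common figure for close-range viewing, 150 PPI is usually fine for a poster viewed from further away):

```python
# Sketch: how large a print a given pixel resolution supports at a target PPI.
def print_size_inches(width_px, height_px, ppi):
    return width_px / ppi, height_px / ppi

for ppi in (300, 150):
    w, h = print_size_inches(7296, 5472, ppi)   # Huawei Mate 20 Pro native resolution
    print(f"{ppi} PPI: about {w:.0f} x {h:.0f} inches")
# -> ~24 x 18 inches at 300 PPI, ~49 x 36 inches at 150 PPI
```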

Frozen grass, photographed using Huawei Mate 20 Pro smartphone.

The lens and the quality of its optics are more of a bottleneck: if the lens is “soft”, meaning that it is not capable of focusing all rays of light in a consistent, sharp manner, there is no way of achieving very clear-looking images with it. But truth be told, in perhaps 90% of cases with blurry photos, I blame myself rather than my equipment these days. Badly focused shots, a wrong aperture setting, or too long an exposure time (while shooting handheld instead of using a tripod) all contribute to getting a lot of blurry-looking photos.

But it is also true that if one is trying to achieve very high optical quality, many people will reach for a more expensive lens. Yet there are plenty of “mainstream” photography situations where a cheap lens will produce results that are just – good enough. It is particularly the more extreme situations, where one is for example trying to get a really large amount of light into the lens, or to capture really detailed scenes in a very consistent manner, where large, heavy and expensive lenses come into play. This is also true of portraiture, where a high-quality lens is also used to deliver good separation of the person from the background, and where the glass elements, their positioning and the aperture blades are designed to produce a particularly nice-looking “bokeh” effect (the out-of-focus highlights are blurred in an aesthetically pleasing manner). And of course bird and wildlife photographers value their well-designed, long telephoto lenses that also capture a lot of light, thereby enabling the photographer to use short enough exposure times to get sharp images of even moving targets.

A cropped detail, photo taken with the SIGMA 150-600 mm f/5-6.3 DG OS HSM Contemporary tele-zoom lens on a dim winter’s day.

In many cases it is actually characteristics other than optical image quality that make a particular lens expensive. It might be the mechanical build quality, the weather sealing, or the way the focusing, zooming and aperture mechanisms and the control rings are implemented that a professional photographer is willing to pay for in one of their main tools.

In street photography, for example, the priorities are completely different as compared to wildlife photography, or to studio portraiture, where using a solid tripod is common. On the street, one is constantly moving, and also trying not to be too conspicuous while taking photos. A compact camera with a compact lens is good for those kinds of reasons. Also, if the subjects are people and views on city streets, a “normal range” lens is usually preferable. A long telephoto lens or a very wide-angle lens will produce very different kinds of effects compared to the visual feel that people usually experience as “normal images”. On a 35 mm film camera, or a “full-frame” digital camera, a 50 mm lens is usually considered a normal lens, whereas a camera equipped with a (Canon) “crop” sensor (APS-C, 22.2 x 14.8 mm sensor size) would require a c. 30 mm lens to produce a field of view similar to that of a 50 mm lens on a full-frame camera. Lenses with these kinds of short focal lengths can be designed to be physically smaller, and can deliver very good image quality for their intended purposes, even while being nicely budget-priced. These days there are many such excellent “prime” lenses (as contrasted with more complex “zoom” lenses) available from many manufacturers.

One should note here that in the case of smartphone photography, everything is of course even more compact. A typical modern smartphone camera might have a sensor of only a few millimetres in size (e.g. in the popular 1/3″ type, the sensor is 4.8 x 3.6 mm), so the actual focal length of the (fixed) lens may be perhaps 4.25 mm, but that translates into roughly a 26 mm-equivalent field of view on a full-frame camera. This is thus effectively a wide-angle lens that is good for many indoor photography situations. Many smartphones feature “2x” (or even “5x”) sensor-lens combinations that can deliver a normal range (50 mm equivalent in full-frame terms) or even telephoto ranges with their small mechanical and optical constructions. This is an impressive achievement – it is much more comfortable to put a camera capable of high-quality photography into your back pocket than to lug it around in a dedicated camera bag, for example. (A quick sketch of the equivalence math follows below.)
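
Here is a minimal sketch of that equivalence math (the sensor dimensions and the 4.25 mm / 26 mm pairing are the figures quoted above; note that they imply slightly different crop factors, since exact sensor sizes vary from phone to phone):

```python
# Sketch: crop factor = full-frame diagonal / sensor diagonal,
# equivalent focal length = actual focal length * crop factor.
import math

ff_diag = math.hypot(36.0, 24.0)          # full-frame diagonal, ~43.3 mm
phone_diag = math.hypot(4.8, 3.6)         # 1/3" type sensor diagonal, 6.0 mm

crop = ff_diag / phone_diag
print(f'1/3" sensor crop factor: ~{crop:.1f}x')                   # ~7.2x
print(f"4.25 mm lens on it: ~{4.25 * crop:.0f} mm equivalent")     # ~31 mm

# The 26 mm-equivalent figure quoted above implies a crop factor of ~6.1,
# i.e. a somewhat larger sensor than the 1/3" example:
print(f"Implied crop factor for 26 mm at 4.25 mm: ~{26 / 4.25:.1f}x")
```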

Icy view was taken with Canon EOS M50, and the kit lens: EF-M 15-45mm f/3.5-6.3 IS STM.

Perhaps the main limitation of smartphone cameras for artistic purposes is that they do not have adjustable apertures. There is always the same, rather small hole through which rays of light enter the lens and finally focus on the image sensor. It is difficult to control the “zone of acceptable sharpness” (or “depth of field”) with a lens where you cannot adjust the aperture size. In fact, it is easy to achieve “hyperfocal” images with very small-sensor cameras: everything in the image will be sharp, from very close up to infinity (see the rough calculation below). But the more recent smartphones already have slightly larger sensors, and there have even been experiments to implement an adjustable aperture system inside these tiny lenses (the Nokia N86 and Samsung Galaxy S9, at least, have advertised adjustable apertures). Some manufacturers resort to algorithmic background blurring to create a soft, full-frame-camera-like background while still using optically small lenses that naturally have a much wider depth of field. When you look at the results of such “computational photography” on a large and sharp monitor, they are usually not as good as with a real, optical system. But if the main use scenario for such photos is to view them on small-screen mobile devices, then – again – the lens and augmentation system together may be “good enough”.
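
To illustrate why small-sensor cameras reach “hyperfocal” sharpness so easily, here is a rough depth-of-field sketch using the standard hyperfocal-distance formula (the focal lengths, apertures and circle-of-confusion values below are my own illustrative assumptions):

```python
# Sketch: hyperfocal distance H = f^2 / (N * c) + f
#   f = focal length (mm), N = f-number, c = circle of confusion (mm).
# When focused at H, everything from roughly H/2 to infinity is acceptably sharp.
def hyperfocal_mm(f, n, c):
    return f * f / (n * c) + f

phone = hyperfocal_mm(4.25, 1.8, 0.004)   # assumed typical smartphone main camera
ff_50 = hyperfocal_mm(50.0, 1.8, 0.03)    # full-frame 50 mm f/1.8 for comparison

print(f"Smartphone: sharp from ~{phone / 2 / 1000:.1f} m to infinity")   # ~1.3 m
print(f"Full-frame 50 mm f/1.8: hyperfocal at ~{ff_50 / 1000:.0f} m")    # ~46 m
```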

All the photos attached to this blog post were taken with either a compact kit lens or a smartphone camera (apart from that single bird photo above). Looking at them on a very high-resolution computer monitor, I can find blurriness and all kinds of other optical issues. But personally, I can live with those. My use case here did not involve printing these out at poster sizes; I just enjoyed having a winter-day walk and taking photos while not carrying too heavy a setup. I will also be posting the photos online, so the typical viewing size and situation pretty much hides maybe 80% of the optical issues. So: compact cameras, compact lenses – great photos? I am not sure. But: good enough.

More frozen grass, Canon EOS M50, and the kit lens: EF-M 15-45mm f/3.5-6.3 IS STM.