Stretching the little Canon to the max

There have been endless discussions among photography enthusiasts about the strengths and weaknesses of various camera manufacturers for decades. It has been interesting to note that as awareness of this history has increased, some of the discussion has moved to a sort of meta-level: rather than talking about the suitability of certain camera equipment for (certain kinds of) photography, the discussion has partly shifted to the strengths and weaknesses of the entire philosophy or product-line strategy of various manufacturers.

Canon is an example that I am interested in here, particularly as this is the manufacturer whose products I have been mostly using for the past two decades or more. The dominant criticism of Canon today seems to be that they (as late adopters of mirrorless system camera technologies) are now spreading their efforts in too many directions, and thereby making it hard to provide anything really strong and credible for anyone. The history of Canon is great, of course, and I think that they still have the best user interface for their digital cameras, for example, and the back catalogue of Canon lenses is impressive. The problem today nevertheless is that it is difficult to see whether Canon is still committed to continuing DSLR camera and lens development at the professional and enthusiast levels long into the future (as their recent releases of the EOS 90D and 1D X Mark III DSLR bodies seem to suggest), or whether anyone with an eye towards the future should invest in RF mount lenses and EOS R series full-frame mirrorless cameras instead. (The RF system is the most recent Canon camera family; it was announced in September 2018. Canon’s full-frame DSLR cameras use the EF lens mount, which dates back to 1987.) And what is the destiny of APS-C (“crop frame”) cameras, and the EF-M mount system (introduced in 2012), in all of this?

I have long used crop frame system cameras and either EF or EF-S (yet another Canon lens family) lenses, due to the nice balance that this combination strikes between versatility, compact size, image quality and price – always an important concern for a hobbyist photographer. A few months ago I made the move into the “mirrorless era”, deciding to invest in the most affordable of these alternative systems, the Canon EF-M mount family (my choice of camera body was the tiny yet powerful EOS M50).

The initial experiences (as I have reported earlier in this blog) have been mostly positive – it is easy to take a good photo with this system and a decent, native EF-M lens. And it is nice that I can use an adapter to attach my older, EF mount lenses to the new, EF-M mount body, even if the autofocus might not be as fast that way. But the fact is that most of the new Canon lenses now appear to be coming out for the other, mirrorless Canon system: the full-frame RF mount cameras. And it is particularly the “serious enthusiast” or advanced hobbyist category that seems to be left in the middle. Some of the more sports- and wildlife-oriented Canon lenses and cameras that would suit them are still being released in the DSLR (EF mount) ecosystem. Some of the most advanced lenses are coming out in the RF system, but the prices of many of those are in the professional, multiple-thousands-of-euros/dollars-per-lens category. At the same time, the R system bodies seem to be missing many of the features that true professionals would need from their camera systems, so that is not really working so well, either. And those amateur photographers (like myself) who have opted for the Canon EF-M mirrorless mount system are mostly provided with compact lenses that do not have the image quality or aperture values that more advanced photography would profit from. Investing in a heavy EF lens and then adding an adapter to get it to work with an EF-M body does not make particularly good sense, either: that lens is not designed for a mirrorless system to start with, and the combination of an ultra-compact camera body and a heavy, full-frame DSLR lens is not a balanced one.

So, the advanced hobbyist / enthusiast crowd is sort of asking: Quo Vadis, Canon?

Some people have already voted with their feet, sold their Canon cameras and lenses and bought into the Sony or Fujifilm ecosystems instead. Those competing manufacturers have the benefit of simpler and clearer mirrorless (and APS-C) camera and lens strategies. Then again, they do not have so many millions of existing users with legacy camera and lens equipment to support, of course.

I am currently just trying to make the best of my existing cameras and lenses. My lakeside camera walk today mostly involved using the Canon L-series 70-200 mm f/4 EF lens on the old APS-C DSLR body (550D), which has a better grip for handling a larger lens. The landscape photos and detailed close-ups I shot with the new M50 and the sharp 22mm f/2 EF-M lens.

Maybe third-party manufacturers will provide some help in strengthening the EF-M ecosystem in the future. For example, SIGMA has announced that it will soon port three of its good-quality prime lenses to the EF-M system: the Sigma 16mm, 30mm, and 56mm F1.4 DC DN Contemporary. Hopefully there will be more such quality glass coming up – also from Canon itself. Producing good-quality lenses that are physically small enough to make sense when attached to an EF-M camera, and that also carry an affordable enough price, is apparently not a trivial achievement.

New SIGMA lenses for the Canon EF-M mount cameras.

Mobile photography – RAW or JPG?

Is it worth setting your smartphone camera to use RAW format, instead of (or alongside) the standard JPG format?

I must say I am not sure. Above you should be able to see three versions of the same photo. The first one is produced with the automatic settings of my Huawei Mate 20 Pro. It is an f/1.8 photo coming from the main camera module, processed with various algorithms to create a “nice”, tonally rather balanced JPG with 2736 x 3648 pixels.

The second one is a direct, unedited conversion of the original RAW (imported into desktop Lightroom, then directly turned into a JPG), with 5456 x 7280 pixels and plenty of information that is potentially valuable for editing. Yet it is also a bit too dark, and the lens quality is frankly probably not quite worth all those pixels to start with (the depth of field is narrow here, and most of the photo is soft when you look at it 1:1 on a large screen).

The third version is the RAW-based, Lightroom-edited photo, where I have just accepted some of the “auto” corrections that the software makes available for beginners. This time many of the details are visible again, since Lightroom has tweaked the exposure, contrast settings and tonal curves. Yet the change of the white balance setting to the automatic “daylight” version has made the cold autumn-morning photo appear a bit too warm in colour, to my mind.

This could of course be fixed with further, more nuanced and sensible Lightroom editing, but the point perhaps is that the out-of-camera JPG that the Huawei is capable of producing is a rather nice compromise in itself. It is optimised for what the small, fixed lenses are capable of achieving, and the file size is good for sharing on social media – which is what most smartphone photos are used for, in any case. The artificial intelligence does its best to produce what a typical “autumn leaves” shot should look like. That, then again, might be something that you like – or not.

It is surely possible to achieve more striking and artistically ambitious (“non-typical”) outcomes when the original photo is taken in RAW, even when it is coming from a smartphone camera. But I would say that a RAW-based workflow probably really starts to make sense when you are using an SLR-style camera with a lens that is sharp enough for you to go deep into the details and do some more ambitious cropping or tonal adjustments, for example.

Or, what do you think?

There are various articles online where you can have a further look at this, e.g.

On Tweakability

Screenshot: Linux Mint 19.2 Tina Cinnamon Edition. (See: https://www.linuxmint.com/rel_tina_cinnamon_whatsnew.php.)

Two years ago, in August 2017, I installed a new operating system on my trusty old home server (HP Proliant ML110 Gen5). That was a rather new Linux distro called Elementary OS, which looked nice, but the 0.4 Loki release that was available at the time was not an optimal choice for a server, as it soon turned out. It was optimized for laptop use, and while I could also set it up as a file & printer server, many things required patching and tweaking before they started working. But since I install and maintain multiple operating systems in my device environment partly out of curiosity, partly to keep my brain alert, and partly for this particular kind of fun – of tweaking – I persisted, and lived with Elementary OS for two years.

Recently, there have been some interesting new releases from multiple other operating systems. While I do most of my daily stuff in Windows 10 and in iOS (or iPadOS, as the iPad variant is now called), it is interesting to also try out e.g. different Linux versions, and I am also a fan of ChromeOS, which usually does not provide surprises, but rather steadily improves while staying very clear, simple and reliable in what it does.

In terms of the particular characteristic that I am talking about here – let’s call it “tweakability” – an iPad or a Chromebook is pretty much at the opposite end of the spectrum compared to a personal computer or server running some version of Linux. While the former excel in presenting the user with an extremely fine-tuned, clear and simple working environment that is simultaneously rather limited in terms of personalisation and modification, the bare-bones, expert-oriented Linux distributions in particular are hardly ever “ready” straight after the initial setup. The basic installation is in those cases rather just the starting point for the user to start building their own vision of an ideal system, complete with the tools, graphical shells and/or command-line interpreters that suit their ways of working. Some strongly prefer one style of OS and its associated user experience, some the other. I feel it is optimal to be able to move from one kind of system to another, on the basis of what one is trying to do, and also how one wants to do it.

Tweakability is, in this sense, a measure of the customisability and modifiability of a system, and it is particularly important for so-called “power users”, who have very definite needs, high IT skill levels, and also clear (sometimes idiosyncratic) ideas of how computing should be done. I am personally not entirely comfortable with that style of operation, and am often rather happy that someone else has set up an easy-to-use system for me that is good enough for most things. Particularly on those days when it is email, some text editing, and browser-based research in databases and publications (with some social media thrown in), a Chromebook, an iPad Pro or a Windows machine with a nice keyboard and a good enough screen and battery life are all that I need.

But, coming back to that home server and the new operating system installation: as my current printer has network sharing, scanning, email and all kinds of apps built in, and I do not want to run a web server from my home any more either, it is just the basic backup and file server needs that this server box has to handle. A modern NAS box with some decent-sized disks could very well do that job. Thus, the setup of this Proliant server is more or less a hobby project that is very much oriented towards optimal tweakability these days (though not quite as much as my experiments with various Raspberry Pi hobby computers and their operating systems).

So, I ended up considering three options as the new OS for this machine. The first was Ubuntu Server 18.04.3 LTS, which would have been a solid choice, but since I was already running Ubuntu on my Lenovo Yoga laptop, I wanted something a bit different. The second option would have been the new Debian 10 (Buster) minimal server – probably optimal for my old and small home server – but I also wanted to experiment with the desktop side of the operating system in this installation. So I finally settled on Linux Mint 19.2 Tina Cinnamon Edition. It seemed to offer the best balance of reliable Debian foundations and the Ubuntu application ecosystem, combined with some nice tweaks that enhance the ease of use and also the aesthetic side of the OS.

I did a wipe-clean-style installation of Mint onto my 120 GB SSD drive, but decided to try to keep all data on the WD Red 4 TB disk. I knew in principle that this could lead to some issues: as in most new operating system installations, the new OS comes with a new user account, while the file system keeps the files registered under the original user, group and other permission specifications from the old OS installation. It would have been better to have a separate archive medium available with all the folder structures and files, then format the data disk, copy all the data back under the new user account, and thereby have all file properties, ownership details etc. exactly right. But I had already accumulated something like 2.7 terabytes of data on this particular disk and there was no exact backup of it all – since this was the backup server itself, for several devices in our house. So, I just read a quick reminder on how the chmod and chown commands work, and proceeded to mount the old data disks within the new Mint installation, take ownership of all directories and data, and tweak the user, group and other permissions into some kind of working order.
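
For reference, here is roughly the shape of the commands involved – a sketch only, with the device name, mount point and username (/dev/sdb1, /mnt/data, “frans”) made up for illustration:

# Check which device the old data disk is, and mount it
lsblk -f
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data

# Take ownership of everything on the disk for the new user account
sudo chown -R frans:frans /mnt/data

# Owner gets full rights; group and others get read access, plus
# execute ("search") permission on directories only (capital X)
sudo chmod -R u+rwX,g+rX,o+rX /mnt/data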

Samba, the cross-platform file sharing system that I need for the mixed Windows-Linux local network to operate, was the first really difficult part this time. It was just plain confusing to get the right disks, shares and folders to appear on our LAN for the Windows users, so that backup and file sharing could work. Again, I ended up reading dozens of hobbyist discussions and info pages from different decades and different forums, making tweak after tweak to users, groups, permissions and settings in the /etc/smb.conf settings file (followed every time by stopping and restarting the Samba service daemon to see the effects of the changes). After a few hours I got that running, but then the actual fun started, when I tried to install Dropbox, my main cloud archive, backup and sharing system, on top of the (terabyte-sized) data that I had in my old Dropbox folder. In principle you can achieve this transition by first renaming the old folder e.g. as “Dropbox-OLD”, then starting the new instance of the service and letting it create a new folder named “Dropbox”, then killing the software, deleting the new folder and renaming the old folder back to its default name. After that, restarting the Dropbox software should find the old data directory where it expects one to be, and start re-indexing all that data rather than re-downloading it all from the cloud – which could take several days over a slow home network.
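
To make those two procedures a bit more concrete, here is a rough sketch of both. The share name, paths and username are placeholders, and note that on many current distributions the Samba configuration file actually lives at /etc/samba/smb.conf:

# A minimal Samba share definition for the data disk, e.g.:
#
#   [backup]
#       path = /mnt/data/backup
#       browseable = yes
#       read only = no
#       valid users = frans
#
# Add the Linux user to Samba's own password database once, and
# restart the Samba daemons after every configuration change:
sudo smbpasswd -a frans
sudo systemctl restart smbd nmbd

# The Dropbox folder swap, assuming the headless "dropbox" helper
# script is installed and the data lives under /mnt/data:
dropbox stop
mv /mnt/data/Dropbox /mnt/data/Dropbox-OLD
dropbox start                     # let it create a fresh, empty Dropbox folder
dropbox stop
rm -rf /mnt/data/Dropbox          # remove the new, empty folder
mv /mnt/data/Dropbox-OLD /mnt/data/Dropbox
dropbox start                     # should now re-index rather than re-download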

This time, however, something went wrong (I think there was an error in how “Selective sync” was switched on at a certain point), leading to a situation where all the existing folders were renamed by the system as the server’s “Conflicting Copy”, then copied into the Dropbox cloud (including c. 330,000 files), while exactly the same files and folders were also downloaded back from the cloud into exactly the same locations, without the “Conflicting Copy” marking. And of course I was away from the machine at this point, so when I realised what was going on, I had to kill Dropbox and start manually bringing it back to the state it was in before this mess. It should be noted that this Dropbox Plus account also has a “Rewind Dropbox” feature (which is designed exactly for rolling back from this kind of large-scale situation). But I was no longer sure which point in time I should rewind back to, so I ended up going through about 100 different cases of conflicting copies, and also trying to manually recover various shared project folders that had become disjoined in the same process. (Btw, apologies to any of my colleagues who got some weird notifications from these project shares during the weekend.)

After spending most of one night doing this, I tried to set up my other old services on the new Mint server installation the following day. I started with Plex, which is a media server and client software/service system that I use e.g. to stream our family video clips from the server to our smart television. There is an entire 2,600-word essay on Linux file and folder permissions at the Plex site (see: https://support.plex.tv/articles/200288596-linux-permissions-guide/). But in the end I just had to throw my hands up. There is something in the way the system sees (or doesn’t see) the data on the old 4 TB disk, and none of the tricks with different users and permission settings that I tried allowed Plex to see any of the data on that disk. I tested that if I copy the files onto the small system disk (the 120 GB SSD), the server can see and stream them normally. Maybe I will at some point get another large hard drive, set it up under the current OS and user, copy all the data there, and then try to reinstall and run Plex again. Meanwhile, I just have to say that I have got my share of tweakability for some time now. I think that Linux Mint in itself is a perfectly nice and capable operating system. It is just that software such as Dropbox or Plex does not play so nicely and reliably together with it – at least not with the tweaking skills that I possess. (While I am writing this, there are still over 283,500 files that the Dropbox client should restore from the cloud onto that problematic data drive. And the program keeps crashing every few hours…)
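
In case someone runs into the same wall, here are the kinds of checks I kept circling around – a sketch only, with a placeholder media path, and assuming the default setup where Plex Media Server runs as its own “plex” user:

# See what the Plex service user can actually access; it needs
# execute ("search") permission on every directory along the path
sudo -u plex ls /mnt/data/Videos
namei -l /mnt/data/Videos

# One commonly suggested fix: give the plex user group-level read access
sudo usermod -aG frans plex            # "frans" is a placeholder group name
sudo chmod -R g+rX /mnt/data/Videos
sudo systemctl restart plexmediaserver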

New version of “The Demon” (retrospectives, pt. 1)

Louis (Brad Pitt) destroying the Theatre of the Vampires in Interview with the Vampire
(dir. Neil Jordan). © Warner Bros., 1994.

My first book published in English was an outcome of my PhD work conducted in the late 1990s – The Demonic Texts and Textual Demons (Tampere University Press, 1999). As the subtitle hints (“The Demonic Tradition, the Self, and Popular Fiction”), this work was both a historically oriented inquiry into the demonic tradition across the centuries, and an attempt to recast certain poststructuralist questions about textuality in terms of agency, or the “Self”.

The methodological and theoretical subtext of this book was, on the one hand, focused on politically committed cultural studies: I was reading texts like horror movies, classical tragedies, science fiction, the Bible, and Rushdie’s The Satanic Verses from perspectives opened up by our bodily and situated existence, suffering, and possibilities for empowerment. On the other hand, I was also interested in both participating in and ‘deconstructing’ some of the theoretical contributions that the humanities – literary and art studies particularly – had made to scholarship during the 20th century. In a manner, I was turning “demonic possession” into a self-contradictory and polyphonic image of poststructuralism itself: the pursuit of overly convoluted theoretical discourses (which both reveal and hide the actual intellectual contributions at the same time) both fascinated and irritated me. The vampires, zombies and cyborgs were my tools for opening the black boxes in the charnel houses of twisted “high theory” (afflicted by a syndrome that I called ‘cognitocentrism’ – the desire to hide the desiring body and the situatedness of the theorizing self from true commitment and responsibility in the actual world of people).

I have now produced a new version of this book online, as Open Access. After the recent merger of universities, Tampere University Press (TUP) books are no longer available as physical copies, and all rights to the works have reverted to the authors (see this notice). Since I also undertook considerable detective work at the time to secure the image rights (e.g. by writing to the Vatican Libraries and Warner Brothers), I have now also restored all the images – or as close versions of the originals as I could find.

The illustrated, free (Creative Commons) version can be found from this address: https://people.uta.fi/~tlilma/Demon_2005/.

I hope that the new version will find a few new readers for this early work. Here are a couple of words from my Lectio Praecursoria, delivered at the doctoral defence on 29 March 1999:

… It is my view, that the vast majority of contemporary demonic texts are created and consumed because of the anxiety evoked by such flattening and gradual loss of meaningful differences. When everything is the same, nothing really matters. Demons face us with visions which make indifference impossible.
A cultural critic should also be able to make distinctions. The ability to distinguish different audiences is important as it makes us aware how radically polyphonic people’s interpretations really can be. We may live in the same world, but we do not necessarily share the same reality. As the demonic texts strain the most sensitive of cultural division lines, they highlight and emphasise such differences. Two extreme forms of reactions appear as particularly problematic in this context: the univocal and one-dimensional rejection or denial of the demonic mode of expression, and, on the other hand, the univocal and uncritical endorsement of this area. If a critical voice has a task to do here, it is in creating dialogue, in unlocking the black-and-white positions, and in pointing out that the demonic, if properly understood, is never any single thing, but a dynamic and polyphonic field of both destructive and creative impulses.

Frans Ilkka Mäyrä (1999)

Switching to NVMe SSD

Samsung 970 EVO Plus NVMe M.2 SSD (image credit: Samsung).

I made a significant upgrade to my main gaming and home workstation at Christmas 2015. That setup is thus soon four years old, and there are certainly some areas where its age is starting to show. The new generations of processors, system memory and particularly graphics adapters are all significantly faster and more capable these days. For example, my GeForce GTX 970 card is now two generations behind the current state-of-the-art graphics adapters; NVIDIA’s current RTX cards are based on the new “Turing” architecture, which is e.g. capable of much more advanced ray tracing calculations than the previous generations of consumer graphics cards. What this means in practice is that rather than just applying pre-generated textures to different objects and parts of the simulated scenery, ray-traced graphics attempt to simulate how actual rays of light would bounce and create shadows and reflections in the virtual scene. Doing these kinds of calculations in real time for millions of light rays in an action-filled game scene is extremely computationally intensive, and the new cards are packed with billions of transistors, in multiple specialised processor cores. You can have a closer look at this technology, with some video samples, e.g. here: https://www.digitaltrends.com/computing/what-is-ray-tracing/ .

I will probably update my graphics card, but only a little later. I am not a great fan of 3D action games to start with, and my home computing bottlenecks are increasingly in other areas. I have been actively pursuing my photography hobby, and with the new mirrorless camera (EOS M50) I have moved to using the full potential of RAW file formats and Adobe Lightroom post-processing. With photo collection sizes growing into multiple hundreds of thousands, and the file size of each RAW photo (and its various-resolution previews) growing larger, it is the disk, the memory and the speed of reading and writing all that information that matter most now.

The small update that I made this summer was focused on speeding up the entire system, and the disk I/O in particular. I got a Samsung 970 EVO Plus NVMe M.2 SSD (1 TB) as the new system disk (for more info, see here: https://www.samsung.com/semiconductor/minisite/ssd/product/consumer/970evoplus/). The interesting part here is the “NVMe” technology. It stands for “Non-Volatile Memory Express”, an interface for solid-state memory devices like SSDs. This new NVMe disk looks nothing like my old hard drives, though: the entire terabyte-sized disk is physically just a small add-on circuit board, which fits into the tiny M.2 connector on the motherboard (technically via a PCI Express 3.0 interface). The entire complex of physical and logical interface and connector standards involved here is frankly a pretty terrible mess to figure out, but I was just happy to notice that the ASUS motherboard (Z170-P) which I had bought in December 2015 was future-proof enough to come with an M.2 connector that supports “x4 PCI Express 3.0 bandwidth”, which is apparently another way of saying that it has NVMe support.

I was actually a bit nervous when I proceeded to install the Samsung 970 EVO Plus NVMe into the M.2 slot. First I updated the motherboard firmware to the latest version, then unplugged and opened the PC. The physical installation of the tiny M.2 card actually turned out to be one of the trickiest parts of the entire operation. The tiny slot is in an awkward, tight spot on the motherboard, so I had to remove some cables and the graphics card just to get my hands in there. And the single screw that is needed to fix the card in place is not one of the regular screws used in computer case installations. Instead, it is a tiny “micro-screw” which is very hard to find. Luckily I finally located my original Z170-P sales box, and there it was: the small plastic pack with a tiny mounting bolt and the microscopic screw. I had kept the box on my storage shelves all these years without even noticing the small plastic bag and the tiny screws in the first place (I read on the Internet that there are plenty of others who have thrown the screw away with the packaging, and then later been forced to order a replacement from ASUS).

There are some settings that need to be changed in the BIOS to get the NVMe drive running. I’ll copy the steps that I followed below, in case they are useful for someone else (please follow them only at your own risk – and, btw, you need to start by creating the Windows 10 installation USB media from the Microsoft site, and by plugging that in before trying to reboot and enter the BIOS settings):

In your bios in Advanced Setup. Click the Advanced tab then, PCH Storage Configuration

Verify SATA controller is set to – Enabled
Set SATA Mode to – RAID

Go back one screen then, select Onboard Device Configuration.

Set SATA Mode Configuration to – SATA Express

Go back one screen. Click on the Boot tab then, scroll down the page to CSM. Click on it to go to next screen.

Set Launch CSM to – Disabled
Set Boot Device Control to – UEFI only
Boot from Network devices can be anything.
Set Boot from Storage Devices to – UEFI only
Set Boot from PCI-E PCI Expansion Devices to – UEFI only

Go back one screen. Click on Secure Boot to go to next screen.

Set Secure Boot state to – Disabled
Set OS Type to – Windows UEFI mode

Go back one screen. Look for Boot Option Priorities – Boot Option 1. Click on the down arrow in the outlined box to the right and look for your flash drive. It should be preceded by UEFI, (example UEFI Sandisk Cruzer). Select it so that it appears in this box.
(Source: https://rog.asus.com/forum/showthread.php?106842-Required-bios-settings-for-Samsung-970-evo-Nvme-M-2-SSD)

Though, in my case, setting “Launch CSM” to “Disabled” made the following settings in that section vanish from the BIOS interface. Your mileage may vary. I just backed out at that point, made the next steps first, then did the “Launch CSM” disable step, and proceeded further.

Another interesting part is how to partition and format the SSD and the other disks in one’s system. There are plenty of websites and discussions around this. I noticed that Windows 10 will place some partitions on other (slower) disks if those are physically connected during the first installation round. So, it took me a few Windows re-installations to actually get the boot order, partitions and disks organised to my liking. But when everything was finally set up and running, the benchmark reported that my workstation speed had been upgraded to the “UFO” level, so I suppose everything was worth it, in the end.

Part of the quiet, snappy and effective performance of my system after this installation can of course be due simply to the clean Windows installation in itself. Four years of use with all kinds of software and driver installations can clutter a system so that it does not run reliably or smoothly, regardless of the underlying hardware. I also took the opportunity to physically clean the PC inside out, fix all loose and rattling components, organise the cables neatly, etc. After closing the covers, setting the PC case back in its place, plugging in a sharp 4K monitor and a new keyboard (a Logitech K470 this time), and installing just a few essential pieces of software, it was a pleasure to notice how fast everything now starts and responds, and how cool the entire PC runs according to the system temperature sensors.

Cool summer, everyone!

M50: first experiences

Out-of-camera JPG (M50, with the EF-M 22mm f/2 lens).

I have now been using the new Canon EOS M50 mirrorless system camera for a month or so. The main experiences have been pretty positive, but I also have some comments on what this camera is good for, and what it is less optimal for.

In terms of image quality and feature set, this is a pretty complete package. Canon can make good cameras. However, the small physical size of this camera is perhaps its most defining characteristic. This means that the M50 is excellent as a light and small travel companion, but also that the grip is too small to carry the body comfortably when heavy “pro” or telephoto lenses are attached. One must carry the system by the lens instead.

I really like the touch screen interface of the M50. The swivelling LCD is really functional, and it is easy to take that quick photo from extra low or high angles. The LCD touch interface Canon uses is perhaps the best on the market today: it is responsive, well designed and logically organised. This is particularly important for the M50, since it has only a few physical buttons and a single rotating control. A photographer using the M50 needs to use the touch UI for many key functions. This is perhaps something that many manual-settings-oriented professional and enthusiast photographers do not like; if you like to set the aperture, exposure time and ISO from physical controls, then the M50 is not for you (one should consider e.g. the Fujifilm X-T3 or X-T30 instead). But if one is comfortable working with electronic controls, then the M50 provides multiple opportunities.

My old EOS camera had only a few (nine) autofocus points (phase-detect), and only the single point in the middle had the fast, cross-type AF. The M50 has 99 selectable AF points (143 with some lenses), covering about 80% of the sensor area (dual-pixel type). Coupled with the touch screen, this change has had an effect on my photography style. It is now possible to compose the photo first, look through the electronic viewfinder, and simultaneously use a thumb to drag the AF point/area (in a “computer mouse/touchpad style”) to the desired point on the screen. I am not completely fluent in this technique yet, though, and my usual technique of centre-focusing first, half-pressing to lock the focus, then quickly making the final composition and shooting is perhaps in most situations quicker and simpler than moving the focus point around the screen. But since the M50 remembers in Program mode (which I use most) where the AF point was left the last time, the centre-focusing method does not work properly any more. I just need to learn new tricks and keep moving the AF points around the screen (or let the camera do everything in Full Auto mode, or go into Manual mode and focus with the lens ring instead).

As a modern mirrorless camera, the M50 is packed with sensors and comes with a powerful DIGIC 8 processor, a bright LCD screen and an electronic viewfinder. All of this consumes electricity, and the battery life of the M50 is nowhere near that of my old 550D (which, btw, also had an extra battery grip). A full day of shooting takes two or three fully charged LP-E12 batteries. Thus, this camera behaves like a smartphone with poor battery life: you need to be using that battery charger all the time. (The standard rating is 235 shots per charge, CIPA.)

When travelling, I have been using the wireless capabilities of the M50 a lot. It is really handy that one can move full-resolution or reduced-resolution versions of photos to an iPhone, iPad or Android device while on the go. On the other hand, this is nowhere near as easy as shooting and sharing directly from a smartphone. Moving a typical 200-300 photos from a shooting session to an iPad for editing and uploading is slow, and feels like it takes ages. (I have not yet cracked how to get the advertised real-time Bluetooth photo transfer to work.) The traditional workflow, where the entire memory card is first read into a PC and processed with Lightroom, still makes better sense, but it is nice to have the alternative for mobile processing and sharing of at least some individual photos.

Many reviewers of the M50 have written a lot about the limitations of its 4K video mode (high crop factor, no dual-pixel autofocus). I use video rarely, and then only in full HD, so that is not an issue for me. There is an external microphone input, which might be handy, and the LCD screen can be turned to point forward, if I ever go into video blogging (not that I plan to).

The main plusses of the M50 for me are the compact size, the excellent touch UI, and the very nice still image quality. That I can use both the new, compact EF-M mount lenses and (with adapters) the traditional Canon EF lenses was a major factor in the purchase decision, since a photographer’s lens collection is typically a much more expensive part of the equipment than the body alone. Changing to Nikon, Fuji or Sony would have been a big investment.

The autofocus system in the M50 is fast, and in burst mode the camera can shoot 10 fps for about 30 JPG shots in a row before the buffer fills. I am not a sports or wildlife photographer as such, so this is good enough for me. A physically bigger body would make the camera easier to handle with large and heavy lenses, but shooting with a large lens is a two-hand operation in any case (and in some cases requires a tripod), so that is not so critical. I still need to train more to use the controls and to switch between camera modes faster, and a touch interface is probably never going to be as fast as a camera with several dedicated physical controls. But this is a compromise one can make to get this feature set, image quality and lens compatibility in this small a package, at this price.

You can find the full M50 tech specs and feature set here in English: https://www.canon.co.uk/cameras/eos-m50/specifications/ and in Finnish: https://www.canon.fi/cameras/eos-m50/specifications/.

EOS M mount: interesting adapters

Attaching EF lenses to an M mount camera requires an adapter – which adds a bit to the bulk of a small camera, but is also an interesting opportunity, since it is possible to fit new electronic or optical functionality inside that middle piece.

I have two adapters. The first is the official, Canon-made “EF-EOS M” mount adapter, which keeps the optical characteristics of the lens similar to what they would be on an EF-S mount camera (crop and all). The other is the “Viltrox EF-EOS M2 Lens Adapter 0.71x Speed Booster” (a real mouthful), which has the interesting capability of multiplying the focal length by a factor of 0.71. It is a sort of “inverted teleconverter”, as it shrinks the image circle that the lens projects, concentrating more light onto the smaller (APS-C) sensor and almost eliminating the crop factor.

Most interestingly, as the booster concentrates more light onto the sensor, it also has the effect of increasing the effective maximum aperture of my EF/EF-S lenses on an M mount camera. When I attach the Viltrox to my 70-200 mm F4, it appears to the M50 as an F2.8 lens (with that constant aperture over the entire zoom range). The image quality that these “active speed booster adapters” produce is apparently a somewhat contested topic among camera enthusiasts. In my own initial tests, I have been pretty happy: the sharpness and corner vignetting appear to be well controlled, and the images produced are of rather good quality – or good enough for me, at least.

When I put this on my 50 mm F1.8 portrait lens, it functions as a lens with an F1.2 maximum aperture. This is pretty cool: the capability to shoot in low-light conditions is much better this way, and with this adapter the narrow depth of field is similar to that of a much heavier and more expensive full-frame camera system.
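
As a quick back-of-the-envelope check of those aperture numbers (a sketch assuming the nominal 0.71x reduction factor, and the usual rounding that cameras apply when reporting f-numbers):

\[
\begin{aligned}
4 \times 0.71 &\approx 2.8 && \text{(the 70--200 mm F4 zoom reports as F2.8)}\\
1.8 \times 0.71 &\approx 1.28 && \text{(the 50 mm F1.8 prime reports as F1.2)}\\
\text{light gain} &= 2\log_2(1/0.71) \approx 1 \text{ stop} &&
\end{aligned}
\]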

In my tests so far, all my Canon EF lenses have worked perfectly with the Viltrox. However, when testing with the Tamron 16-300 mm F/3.5-6.3 Di II VC PZD super-zoom lens, there are issues. The adapter focuses light in the wrong way with this lens, and the result is that the corners are cut off from the images (see the picture below). So, your mileage may vary. I have written to Viltrox customer service and asked what they suggest in the Tamron case (I have updated the adapter to the most recent available firmware – this can be done very simply using a PC and the adapter’s built-in micro-USB connector).

You can read a bit more about this technology (in connection with the first Metabones product) here: https://www.newsshooter.com/2013/01/14/metabones-speed-booster-adapter-gives-lenses-an-extra-fstop-and-nearly-full-frame-focal-lengths-on-aps-c-sensors/