New version of “The Demon” (retrospectives, pt. 1)

Louis (Brad Pitt) destroying the Theatre of the Vampires in Interview with the Vampire
(dir. Neil Jordan). © Warner Bros., 1994.

My first book published in English was the outcome of my PhD work, conducted in the late 1990s: The Demonic Texts and Textual Demons (Tampere University Press, 1999). As the subtitle hints (“The Demonic Tradition, the Self, and Popular Fiction”), this work was both a historically oriented inquiry into the demonic tradition across the centuries, and an attempt to recast certain poststructuralist questions about textuality in terms of agency, or the “Self”.

The methodological and theoretical subtext of this book was, on the one hand, politically committed cultural studies: I was reading texts like horror movies, classical tragedies, science fiction, The Bible, and Rushdie’s The Satanic Verses from perspectives opened up by our bodily and situated existence, suffering, and possibilities for empowerment. On the other hand, I was also interested in both participating in and ‘deconstructing’ some of the theoretical contributions that the humanities – literary and art studies in particular – had made to scholarship during the 20th century. In a manner, I was treating “demonic possession” as a self-contradictory and polyphonic image of poststructuralism itself: the pursuit of overly convoluted theoretical discourses (which both reveal and hide the actual intellectual contributions at the same time) both fascinated and irritated me. The vampires, zombies and cyborgs were my tools for opening the black boxes in the charnel houses of twisted “high theory” (afflicted by a syndrome that I called ‘cognitocentrism’ – the desire to hide the desiring body and the situatedness of the theorizing self, away from true commitment and responsibility in the actual world of people).

I have now produced a new version of this book online, as Open Access. After the recent merger of universities, the Tampere University Press (TUP) books are no longer available as physical copies, and all rights to the works have reverted to the authors (see this notice). Since I undertook considerable detective work at the time to secure the image rights (e.g. by writing to the Vatican Libraries and Warner Brothers), I have now also restored all the images – or as close versions of the originals as I could find.

The illustrated, free (Creative Commons) version can be found at this address: https://people.uta.fi/~tlilma/Demon_2005/.

I hope that the new version will find a few new readers for this early work. Here are a couple of words from my Lectio Praecursoria, delivered at the doctoral defense on 29 March 1999:

… It is my view, that the vast majority of contemporary demonic texts are created and consumed because of the anxiety evoked by such flattening and gradual loss of meaningful differences. When everything is the same, nothing really matters. Demons face us with visions which make indifference impossible.
A cultural critic should also be able to make distinctions. The ability to distinguish different audiences is important as it makes us aware how radically polyphonic people’s interpretations really can be. We may live in the same world, but we do not necessarily share the same reality. As the demonic texts strain the most sensitive of cultural division lines, they highlight and emphasise such differences. Two extreme forms of reactions appear as particularly problematic in this context: the univocal and one-dimensional rejection or denial of the demonic mode of expression, and, on the other hand, the univocal and uncritical endorsement of this area. If a critical voice has a task to do here, it is in creating dialogue, in unlocking the black-and-white positions, and in pointing out that the demonic, if properly understood, is never any single thing, but a dynamic and polyphonic field of both destructive and creative impulses.

Frans Ilkka Mäyrä (1999)

Switching to NVMe SSD

Samsung 970 EVO Plus NVMe M.2 SSD (image credit: Samsung).

I made a significant upgrade to my main gaming and home workstation at Christmas 2015. That setup is thus soon four years old, and there are certainly some areas where the age is starting to show. The new generations of processors, system memory chips and particularly graphics adapters are all significantly faster and more capable these days. For example, my GeForce GTX 970 card is now two generations behind the current state-of-the-art graphics adapters; NVIDIA’s current RTX cards are based on the new “Turing” architecture, which is e.g. capable of much more advanced ray tracing calculations than the previous generations of consumer graphics cards. What this means in practice is that rather than just applying pre-generated textures to different objects and parts of the simulated scenery, ray-traced graphics attempt to simulate how actual rays of light would bounce and create shadows and reflections in the virtual scene. Doing these kinds of calculations in real time for millions of light rays in an action-filled game scene is extremely computationally intensive, and the new cards are packed with billions of transistors, in multiple specialised processor cores. You can have a closer look at this technology, with some video samples, e.g. here: https://www.digitaltrends.com/computing/what-is-ray-tracing/ .
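If you want a concrete feel for what “one ray per pixel” means, below is a toy sketch in Python – purely my own illustration, with nothing to do with NVIDIA’s actual RTX hardware or APIs. It renders a single hardcoded sphere with one light as text characters, and even this trivial scene already performs a few thousand ray-sphere intersection tests; that hints at why doing the same for millions of rays and complex geometry, many times per second, needs dedicated silicon.

import math

WIDTH, HEIGHT = 80, 40                    # a tiny "image" drawn as text
SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0
LIGHT = (2.0, 2.0, 0.0)

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def hit_sphere(origin, direction):
    # direction must be unit length; solve |origin + t*direction - centre|^2 = r^2
    oc = sub(origin, SPHERE_C)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_R ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                       # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # One primary ray per pixel, shot from a camera at the origin.
        x = (i + 0.5) / WIDTH * 2.0 - 1.0
        y = 1.0 - (j + 0.5) / HEIGHT * 2.0
        direction = norm((x, y, -1.0))
        t = hit_sphere((0.0, 0.0, 0.0), direction)
        if t is None:
            row += " "                    # background
        else:
            point = tuple(t * d for d in direction)
            n = norm(sub(point, SPHERE_C))
            l = norm(sub(LIGHT, point))
            shade = max(dot(n, l), 0.0)   # simple diffuse lighting
            row += ".:-=+*#%@"[int(shade * 8.999)]
    print(row)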

I will probably update my graphics card, but only a little later. I am not a great fan of 3D action games to start with, and my home computing bottlenecks are increasingly in other areas. I have been actively pursuing my photography hobby, and with the new mirrorless camera (EOS M50) I have moved to using the full potential of RAW file formats and Adobe Lightroom post-processing. With photo collections growing into the hundreds of thousands, and the file size of each RAW photo (and its various-resolution previews) growing larger, it is the disk, the memory and the speed of reading and writing all that information that matter most now.

The small update that I made this summer was focused on speeding up the entire system, and the disk I/O in particular. I got a Samsung 970 EVO Plus NVMe M.2 SSD (1 TB) as the new system disk (for more info, see here: https://www.samsung.com/semiconductor/minisite/ssd/product/consumer/970evoplus/). The interesting part here is the “NVMe” technology: Non-Volatile Memory Express, an interface for solid-state storage devices like SSDs. This new NVMe disk looks nothing like my old hard drives, though: the entire terabyte-sized disk is physically just a small add-on circuit board, which fits into the tiny M.2 connector on the motherboard (technically via a PCI Express 3.0 interface). The entire complex of physical and logical interface and connector standards involved here is frankly a pretty terrible mess to figure out, but I was just happy to notice that the ASUS motherboard (Z170-P) which I had bought in December 2015 was future-proof enough to come with an M.2 connector that supports “x4 PCI Express 3.0 bandwidth”, which is apparently another way of saying that it has NVMe support.
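To put that “x4 PCI Express 3.0 bandwidth” into some perspective against the old SATA III interface, a quick back-of-the-envelope comparison can be done with a few lines of Python. These are the nominal line rates and encoding overheads of the interface standards, not measured disk speeds:

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding; SATA III: 6 Gb/s with 8b/10b.
def pcie3_gbps(lanes):
    return 8e9 * lanes * (128 / 130) / 8 / 1e9   # usable GB/s

def sata3_gbps():
    return 6e9 * (8 / 10) / 8 / 1e9              # usable GB/s

print(f"SATA III:    ~{sata3_gbps():.2f} GB/s")
print(f"PCIe 3.0 x4: ~{pcie3_gbps(4):.2f} GB/s")

The 970 EVO Plus is rated at roughly 3.5 GB/s sequential read, so it can actually come close to saturating that x4 link, while any SATA SSD hits the ~0.6 GB/s ceiling long before its flash memory becomes the limit.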

I was actually a bit nervous when I proceeded to install the Samsung 970 EVO Plus NVMe into the M.2 slot. First I updated the motherboard firmware to the latest version, then unplugged and opened the PC. The physical installation of the tiny M.2 card actually turned out to be one of the trickiest parts of the entire operation. The tiny slot is in an awkward, tight spot on the motherboard, so I had to remove some cables and the graphics card just to get my hands into it. And the single screw that is needed to fix the card in place is not one of the regular screws used in computer case installations; instead, it is a tiny “micro-screw” which is very hard to find. Luckily I finally located my original Z170-P sales box, and there it was: the small plastic pack with a tiny mounting standoff and the microscopic screw. I had kept the box on my storage shelves all these years without even noticing the small plastic bag and the tiny screws in the first place (I read on the Internet that plenty of others have thrown the screw away with the packaging, and have later been forced to order a replacement from ASUS).

There are some settings that need to be changed in the BIOS to get the NVMe drive running. I’ll copy the steps that I followed below, in case they are useful for someone else (please follow them only at your own risk – and, by the way, you need to start by creating the Windows 10 installation USB media from the Microsoft site, and by plugging that in before trying to reboot and enter the BIOS settings):

In your bios in Advanced Setup. Click the Advanced tab then, PCH Storage Configuration

Verify SATA controller is set to – Enabled
Set SATA Mode to – RAID

Go back one screen then, select Onboard Device Configuration.

Set SATA Mode Configuration to – SATA Express

Go back one screen. Click on the Boot tab then, scroll down the page to CSM. Click on it to go to next screen.

Set Launch CSM to – Disabled
Set Boot Device Control to – UEFI only
Boot from Network devices can be anything.
Set Boot from Storage Devices to – UEFI only
Set Boot from PCI-E PCI Expansion Devices to – UEFI only

Go back one screen. Click on Secure Boot to go to next screen.

Set Secure Boot state to – Disabled
Set OS Type to – Windows UEFI mode

Go back one screen. Look for Boot Option Priorities – Boot Option 1. Click on the down arrow in the outlined box to the right and look for your flash drive. It should be preceded by UEFI, (example UEFI Sandisk Cruzer). Select it so that it appears in this box.
(Source: https://rog.asus.com/forum/showthread.php?106842-Required-bios-settings-for-Samsung-970-evo-Nvme-M-2-SSD)

In my case, though, if you set “Launch CSM” to “Disabled”, the settings that follow it in that section actually vanish from the BIOS interface. Your mileage may vary. I just backed up at that point, made the other changes in that section first, then did the “Launch CSM” disable step, and proceeded further.

Another interesting part is how to partition and format the SSD and the other disks in one’s system. There are plenty of websites and discussions related to this. I noticed that Windows 10 will place some partitions on other (not so fast) disks if those are physically connected during the first installation round. So it took me a few Windows re-installations to actually get the boot order, partitions and disks organised to my liking. But when everything was finally set up and running, the benchmark reported that my workstation speed had been upgraded to the “UFO” level, so I suppose everything was worth it in the end.
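If you just want a rough sanity check of sequential throughput without installing a separate benchmark tool, a few lines of Python are enough. This is only a toy sketch (the test file name and size are my own choices, and the operating system’s cache will inflate the read figure), so a dedicated benchmark utility gives more reliable numbers:

import os
import time

TEST_FILE = "disk_speed_test.tmp"   # create this on the drive you want to test
BLOCK = 4 * 1024 * 1024             # 4 MiB blocks
BLOCKS = 256                        # 1 GiB in total

def write_test():
    data = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(BLOCKS):
            f.write(data)
        os.fsync(f.fileno())        # make sure the data actually reaches the disk
    return (BLOCK * BLOCKS / 1024**2) / (time.perf_counter() - start)

def read_test():
    start = time.perf_counter()
    with open(TEST_FILE, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    return (BLOCK * BLOCKS / 1024**2) / (time.perf_counter() - start)

print(f"sequential write: {write_test():.0f} MB/s")
print(f"sequential read:  {read_test():.0f} MB/s")
os.remove(TEST_FILE)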

Part of the quiet, snappy performance of my system after this installation can of course be due simply to the clean Windows installation itself. Four years of use, with all kinds of software and driver installations, can clutter a system so that it does not run reliably or smoothly, regardless of the underlying hardware. I also took the opportunity to physically clean the PC inside and out, fix all loose and rattling components, organise the cables neatly, etc. After closing the covers, setting the PC case back in its place, plugging in a sharp 4K monitor and a new keyboard (a Logitech K470 this time), and installing just a few essential pieces of software, it was a pleasure to notice how fast everything now starts and responds, and how cool the entire PC runs according to the system temperature sensors.

Cool summer, everyone!

M50: first experiences

Out-of-camera JPG (M50, with the EF-M 22mm f/2 lens).

I have now been using the new Canon EOS M50 mirrorless system camera for a month or so. My main experiences are pretty positive, but I also have some comments on what this camera is good for, and what it is less optimal for.

In terms of image quality and feature set, this is a pretty complete package – Canon can make good cameras. However, the small physical size of this camera is perhaps its most defining characteristic. This means that the M50 is excellent as a light and small travel companion, but also that the grip is too small to carry the body comfortably when heavy “pro” lenses or telephoto lenses are attached. One must carry the system by the lens instead.

I really like the touch screen interface of the M50. The swiveling LCD is really functional, and it is easy to take that quick photo from an extra low or high angle. The LCD touch interface Canon uses is perhaps the best on the market today: it is responsive, well designed and logically organised. This is particularly important for the M50, since it has only a few physical buttons and a single rotating control; a photographer using the M50 needs the touch UI for many key functions. This is perhaps something that many manual-settings-oriented professional and enthusiast photographers will not like: if you prefer to set the aperture, exposure time and ISO with physical controls, then the M50 is not for you (consider e.g. the Fujifilm X-T3 or X-T30 instead). But if one is comfortable working with electronic controls, then the M50 provides multiple opportunities.

My old EOS camera had only a few (nine) phase-detect autofocus points, and only the single point in the middle had the fast, cross-type AF. The M50 has 99 selectable AF points (143 with some lenses) of the Dual Pixel type, covering 80 % of the sensor area. Coupled with the touch screen, this change has had an effect on my photography style. It is now possible to compose the photo while looking through the electronic viewfinder and simultaneously use a thumb to drag the AF point/area (in a “computer mouse/touchpad” style) to the desired spot on the screen. I am not completely fluent in this technique yet, though, and my usual technique – centre focusing first, half-pressing to lock the focus, then quickly making the final composition and shooting – is perhaps in most situations quicker and simpler than moving the focus point around the screen. But since the M50 remembers, in Program mode (which I use the most), where the AF point was left the last time, the centre-focusing method does not work properly any more. I just need to learn new tricks and keep moving the AF points on the screen (or let the camera do everything in Full Auto mode, or go into Manual mode and focus with the lens ring instead).

As a modern mirrorless camera, the M50 is packed with sensors and comes with a powerful DIGIC 8 processor, a bright LCD screen and an electronic viewfinder. All of this consumes electricity, and the battery life of the M50 is nowhere near my old 550D (which, by the way, also had an extra battery grip). A full day of shooting takes two or three fully charged LP-E12 batteries; this camera thus behaves a bit like a smartphone with poor battery life, and you need to be using that battery charger all the time. (The standard rating is 235 shots per charge, CIPA.)

When travelling, I have been using the wireless capabilities of the M50 a lot. It is really handy that one can move full-resolution or reduced-resolution versions of photos to an iPhone, iPad or Android device while on the go. On the other hand, this is nowhere near as easy as shooting and sharing directly from a smartphone. Moving a typical 200-300 photos from a shooting session to an iPad for editing and uploading is slow, and feels like it takes ages. (I have not yet cracked how to get the advertised real-time Bluetooth photo transfer to work.) The traditional workflow, where the entire memory card is first read into a PC and processed with Lightroom, still makes better sense, but it is nice to have the alternative for mobile processing and for sharing at least some individual photos.

Many reviewers of the M50 have written a lot about the limitations of its 4K video mode (high crop factor, no Dual Pixel autofocus). I use video rarely, and then only in full HD, so that is not an issue for me. There is an external microphone input, which might be handy, and the LCD screen can be turned to point forward if I ever go into video blogging (not that I plan to).

The main plusses of the M50 for me are the compact size, the excellent touch UI, and the very nice still image quality. The fact that I can use both the new, compact EF-M mount lenses and (with adapters) the traditional Canon EF lenses was a major factor in the purchase decision, since a photographer’s lens collection is typically a much more expensive part of the equipment than the body alone. Changing to Nikon, Fuji or Sony would have been a big investment.

The autofocus system in the M50 is fast, and in burst mode the camera can shoot 10 fps for about 30 JPG shots in a row before the buffer fills. I am not a sports or wildlife photographer as such, so this is good enough for me. A physically bigger body would make the camera easier to handle with large and heavy lenses, but shooting with a large lens is a two-handed operation in any case (and in some cases requires a tripod), so that is not so critical. I still need to train more to use the controls and switch between camera modes faster, and a touch interface is probably never going to be as fast as a camera with several dedicated physical controls. But this is a compromise one can make to get this feature set, image quality and lens compatibility in this small a package, at this price.

You can find the full M50 tech specs and feature set here in English: https://www.canon.co.uk/cameras/eos-m50/specifications/ and in Finnish: https://www.canon.fi/cameras/eos-m50/specifications/.

Is the mirrorless hype over?

My mirrorless Canon EOS M50, with a 50 mm EF lens and a “speed booster” style Viltrox mount adapter.

It has been interesting to follow how, since last year, several articles have been published that discuss the “mirrorless camera hype” and put forward various kinds of criticism of either the technology or the related camera industry strategies. One repeated criticism is rooted in the fact that many professional (and enthusiast) photographers still find a typical DSLR camera body to work better for their needs than a mirrorless one. There are at least three main differences: a mirrorless interchangeable-lens camera body is typically smaller than a DSLR, the battery life is weaker, and the electronic viewfinder and/or rear LCD screen offers a less realistic view than the traditional optical viewfinder of a (D)SLR camera.

The industry critiques appear to be focused on worries that, as the digital camera market as a whole is shrinking, big companies like Canon and Nikon are directing their product development resources towards putting out mirrorless camera bodies with new lens mounts, and new lenses for these systems, rather than evolving their existing DSLR product lines. Many seem to think that this is bad business sense, since large populations of professionals and photography enthusiasts are deeply invested in the more traditional ecosystems, and a lack of progress there means that those who remain in that part of the market have little incentive to upgrade and invest.

There might be some truth in both lines of argumentation – yet they are also not the whole story. It is true that Sony, with its α7, α7R and α7S camera lines, has stolen much of the momentum that Canon and Nikon could have had if they had invested in mirrorless technologies earlier. Currently, full-frame systems like the Canon EOS R or the Nikon Z6 and Z7 are apparently not selling very strongly. In early May of this year, for example, it was publicised that the Sony α7 III sold more units, in Japan at least, than the Canon and Nikon full-frame mirrorless systems combined (see: https://www.dpreview.com/news/3587145682/sony-a7-iii-sales-beat-combined-efforts-of-canon-and-nikon-in-japan ). Some are ready to declare Canon and Nikon’s efforts dead on arrival, but both companies have claimed to be strategically committed to their new mirrorless systems, developing and launching the lenses that are necessary for their future growth. Overall, though, both Canon and Nikon are still producing and selling many more digital cameras than Sony, even while their sales numbers have been declining (in Japan at least, Fujifilm was interestingly the big winner in the year-over-year analysis; see: https://www.canonrumors.com/latest-sales-data-shows-canon-maintains-big-marketshare-lead-in-japan-for-the-year/ ).

From a photographer’s perspective, though, the first-mentioned concerns might be more crucial than the business ones. Are mirrorless cameras actually worse than comparable DSLR cameras?

There is a curious quality to moving from a large (D)SLR body to a typical mirrorless one: the small camera can feel a bit like a toy, the handling is different, and using the electronic viewfinder and LCD screen can produce flashbacks of the compact, point-and-shoot cameras of earlier years. In terms of pure image quality and feature sets, however, mirrorless cameras are already the equals of DSLRs, and in some areas have arguably moved beyond most of them. There are multiple reasons for this, and the primary one relates to the intimate link between the image sensor, the image processor and the viewfinder in mirrorless cameras. As a photographer you are not looking at a reflection of light coming from the lens through an alternative route into an optical viewfinder – you are looking at an image produced from the actual, real-time data that the sensor and image processor are “seeing”. The mechanical construction of mirrorless cameras can also be made simpler, and when the mirror is removed, the entire lens system can be moved closer to the image sensor – something that is technically called a shorter flange distance. This should allow engineers to design lenses for mirrorless systems that have large apertures and fast focusing capabilities (you can check out a video where a Nikon lens engineer explains how this works: https://www.youtube.com/watch?v=LxT17A40d50 ). The physical dimensions of the camera body itself can be made small or large, as desired: Nikon Z series cameras are rather sizable, with a conventional “pro camera” style grip, while my Canon EOS M50 is diminutive, at the other extreme.

I think that the development of cameras with ever stronger processors, and the novel machine-learning and algorithm-based capabilities they enable, will push the general direction of photography technology towards various mirrorless systems. That said, I completely understand the benefits of more traditional DSLRs and why they might feel superior to many photographers at the moment. There have been some rumours (in the Canon space at least, which I personally follow most closely) that new DSLR camera bodies will be released in the upper-enthusiast APS-C / semi-professional category (search e.g. for “EOS 90D” rumours), so I think that DSLR cameras are by no means dead. There are many ways in which the latest camera technologies can be implemented in mirror-equipped bodies as well as in mirrorless ones. The big strategic question, of course, is how many different mount and lens ecosystems can be maintained and developed simultaneously. If some of the current mounts stop getting new lenses in the near future, there is at least a market for adapter manufacturers.

EOS M mount: interesting adapters

Attaching EF lenses to an M mount camera requires an adapter – which adds a bit to the bulk of a small camera, but is also an interesting opportunity, since it is possible to fit new electronic or optical functionality inside that middle piece.

I have two adapters: the official, Canon-made “EF-EOS M” mount adapter, which keeps the optical characteristics of the lens similar to what they would be on an EF-S mount camera (crop and all), and the “Viltrox EF-EOS M2 Lens Adapter 0.71x Speed Booster” (a real mouthful), which has the interesting capability of multiplying the focal length by a factor of 0.71. It is a sort of “inverted teleconverter”: it shrinks the image circle that the lens produces, so that more of the gathered light lands on the smaller (APS-C) sensor, and it almost eliminates the crop factor.

Most interestingly, as the booster concentrates more light onto the sensor, it also effectively increases the maximum aperture of my EF/EF-S lenses on an M mount camera. When I attach the Viltrox to my 70-200 mm f/4, it appears to my M50 as an f/2.8 lens (with that constant aperture over the entire zoom range). The image quality that these “active speed booster” adapters produce is apparently a somewhat contested topic among camera enthusiasts, but in my initial personal tests I have been pretty happy: sharpness and corner vignetting appear to be well controlled, and the images produced are of rather good quality – or good enough for me, at least.

When I put this adapter on my 50 mm portrait lens, it functions as a lens with an f/1.2 maximum aperture. This is pretty cool: the capability to shoot in low-light conditions is much better this way, and with this adapter the narrow depth of field is similar to that of a much heavier and more expensive full-frame camera system.
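The arithmetic behind these numbers is simple multiplication, and the sketch below (just my own back-of-the-envelope calculator, not anything from Viltrox) shows the rule of thumb: the 0.71x optics scale both the effective focal length and the effective f-number by 0.71, and the combined crop of 1.6 × 0.71 ≈ 1.14 nearly cancels the APS-C crop. The 70-200 mm f/4 figures match what I described above; the 50 mm f/1.8 line is only an illustrative lens choice (cameras round the displayed f-number to standard steps, so a computed f/1.28 shows up as something like f/1.2):

CROP = 1.6      # Canon APS-C crop factor
BOOST = 0.71    # reduction factor of the 0.71x "speed booster"

def with_booster(focal_mm, f_number):
    eff_focal = focal_mm * BOOST           # focal length as projected on the sensor
    eff_fstop = f_number * BOOST           # roughly one stop brighter
    ff_equiv = focal_mm * BOOST * CROP     # full-frame equivalent field of view
    return eff_focal, eff_fstop, ff_equiv

for focal, f in [(70, 4.0), (200, 4.0), (50, 1.8)]:
    eff_focal, eff_fstop, ff_equiv = with_booster(focal, f)
    print(f"{focal} mm f/{f:g} -> {eff_focal:.0f} mm f/{eff_fstop:.2f} "
          f"(~{ff_equiv:.0f} mm full-frame equivalent)")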

In my tests so far, all my Canon EF lenses have worked perfectly with the Viltrox. However, when testing with the Tamron 16-300 mm F/3.5-6.3 Di II VC PZD super-zoom lens, there are issues: the adapter focuses the light incorrectly with this lens, and the result is that the corners are cut off from the images (see the picture below). So, your mileage may vary. I have written to Viltrox customer service to ask what they suggest in the Tamron case (I have already updated the adapter to the most recent available firmware – this can be done very simply with a PC and the adapter’s built-in micro-USB connector).

You can read a bit more about this technology (in connection with the first such product, from Metabones) here: https://www.newsshooter.com/2013/01/14/metabones-speed-booster-adapter-gives-lenses-an-extra-fstop-and-nearly-full-frame-focal-lengths-on-aps-c-sensors/

Going mirrorless (EOS M50)

Today I started learning to take photos with the ultra-compact EOS M50, after using much bigger SLR and DSLR cameras for decades. It is certainly an interesting experience. Some of the fundamentals of photography are still the same, but there are areas I clearly need to study more, learning new approaches.

Canon EOS M50 (photo credit: Canon).

These involve particularly learning how to collaborate better with the embedded computer (the DIGIC 8 processor). It is fascinating to note how fast e.g. the automatic focusing system is – I can suddenly use an old lens like my trusty Canon EF 70-200mm f/4 L USM to get in-flight photos of rather fast birds. The new system tracks moving targets much faster and more reliably. I am by no means a bird photographer, however, having mostly worked with still life, landscapes and portraits. Handling the dual options of composing the photo either through the electronic viewfinder or on the vari-angle touchscreen takes some getting used to.

Also, there are many ways to use this new system, and finding the right settings among the many different menus (there must be hundreds of options in all) takes some time. Coming from the much older EOS 550D, it was also strange to realise that the entire screen is now filled with autofocus points, and that it is possible to slide the AF point with a thumb (using the touchscreen as a “mouse”) to the optimal spot while simultaneously composing, focusing, zooming and shooting – at up to 10 frames per second. I am filling up the memory card fast now.

My Canon EOS 550D and M50, side by side. Note that I am using a battery grip on the 550D, which is a rather small DSLR camera in itself.

It is easy to do many basic photo editing tasks in-camera now; it actually feels like there is a small “Photoshop” built into the camera. However, there is a fundamental decision to be made: either use the photos as they come, directly from the camera, or only after some post-processing on the computer. This matters because JPG-based and RAW-based workflows are a bit different. These days I use quite a lot of mobile apps and tools, and the ability to wirelessly copy photos from the camera to a smartphone or tablet (via Wi-Fi, Bluetooth and NFC) in the field is definitely something that I like. Thus the JPG options currently make the most sense for me personally.

Chilies in the greenhouse

First flowers: chilies, 2019 season.

My first hydroponic chili pepper growing season has been a bit of a mixed experience so far. On the one hand, the passive hydroponic setup that I installed (based on the AutoPot 4pot system, HydroCoco, and Canna Coco A+B) was a great success: the plants really grew fast.

So fast, actually, that I was soon in trouble with them. My planting schedule was based on my earlier experiences with soil-based gardening, but growth in hydroponics is much faster. I germinated the seeds in early February, moved selected seedlings into the AutoPot system on 25th February, and by early April the plants were already so tall that they should have been moved to the greenhouse. My LED plant light system was a particular bottleneck – the fast-growing chili plants quickly reached the maximum height I could raise the LED strips to, and I needed to cut them back quite a bit. Even then, the plants would have needed better light: real, strong sunlight coming from multiple angles, not just those narrow LED strips.

But we got snow and “takatalvi” (a wintry cold spell) in April, and I could not move the plants into the greenhouse. I just kept growing them, cutting them back, growing them some more – and waiting for the weather to get warmer.

It was only in late May (18th May, to be exact) that the weather forecast said further snow was highly unlikely. I started moving the overgrown plants to the greenhouse, but in the process lost maybe half of their branches. The big, weak plants were just not made for such rough physical handling, and the hydroponics setup is not designed to be moved around, either.

Poor chilies, moved too late to the greenhouse.

But I got the plants out, set up the AutoPot system again – this time in the greenhouse – filled it with water and Canna Coco, and hoped for the best.

All four plants are still alive, which is nice. CAP 270 is in bloom and is already bearing its first fruit. But the plants are not that nice looking, as they lost many of their branches in the move, and their growth patterns are not ideal with the future crop in mind: the branches should be stronger, thicker and more symmetrical to support a decent amount of chili pods. Well, we’ll see what the final outcome will be.

The lesson? I need to think carefully about my cultivation schedule: the plants should be much smaller at the point when they can still be moved safely from indoors to the greenhouse, and they should be pruned so that the vigorous growth stays under control. But otherwise: hydroponic gardening seems like a really interesting option!

The first fruit (C. baccatum, “CAP 270”).