New version of “The Demon” (retrospectives, pt. 1)

Louis (Brad Pitt) destroying the Theatre of the Vampires in Interview with the Vampire
(dir. Neil Jordan). © Warner Bros., 1994.

My first book published in English was the outcome of my PhD work, conducted in the late 1990s: The Demonic Texts and Textual Demons (Tampere University Press, 1999). As the subtitle hints (“The Demonic Tradition, the Self, and Popular Fiction”), this work was both a historically oriented inquiry into the demonic tradition across centuries, and an attempt to recast certain poststructuralist questions about textuality in terms of agency, or the “Self”.

The methodological and theoretical subtext of this book was focused on politically committed cultural studies on the one hand: I was reading texts like horror movies, classical tragedies, science fiction, the Bible, and Rushdie’s The Satanic Verses from perspectives opened up by our bodily and situated existence, suffering, and possibilities for empowerment. On the other hand, I was also interested in both participating in and ‘deconstructing’ some of the theoretical contributions that the humanities – literary and art studies in particular – had made to scholarship during the 20th century. In a manner, I was turning “demonic possession” into a self-contradictory and polyphonic image of poststructuralism itself: the pursuit of overly convoluted theoretical discourses (which both reveal and hide the actual intellectual contributions at the same time) in particular both fascinated and irritated me. The vampires, zombies and cyborgs were my tools for opening the black boxes in the charnel houses of twisted “high theory” (afflicted by a syndrome that I called ‘cognitocentrism’ – the desire to hide the desiring body and situatedness of the theorizing self from true commitment and responsibility in the actual world of people).

I have now produced a new version of this book online, as Open Access. After the recent merger of universities, the Tampere University Press (TUP) books are no longer available as physical copies, and all rights to the works have returned to the authors (see this notice). Since I also undertook considerable detective work at the time to secure the image rights (e.g. by writing to the Vatican Libraries and Warner Brothers), I have now also restored all the images – or versions as close to the originals as I could find.

The illustrated, free (Creative Commons) version can be found at this address: https://people.uta.fi/~tlilma/Demon_2005/.

I hope that the new version will bring a few new readers to this early work. Here are a few words from my Lectio Praecursoria, delivered at the doctoral defense on 29th March, 1999:

… It is my view, that the vast majority of contemporary demonic texts are created and consumed because of the anxiety evoked by such flattening and gradual loss of meaningful differences. When everything is the same, nothing really matters. Demons face us with visions which make indifference impossible.
A cultural critic should also be able to make distinctions. The ability to distinguish different audiences is important as it makes us aware how radically polyphonic people’s interpretations really can be. We may live in the same world, but we do not necessarily share the same reality. As the demonic texts strain the most sensitive of cultural division lines, they highlight and emphasise such differences. Two extreme forms of reactions appear as particularly problematic in this context: the univocal and one-dimensional rejection or denial of the demonic mode of expression, and, on the other hand, the univocal and uncritical endorsement of this area. If a critical voice has a task to do here, it is in creating dialogue, in unlocking the black-and-white positions, and in pointing out that the demonic, if properly understood, is never any single thing, but a dynamic and polyphonic field of both destructive and creative impulses.

Frans Ilkka Mäyrä (1999)

Switching to NVMe SSD

Samsung 970 EVO Plus NVMe M.2 SSD (image credit: Samsung).

I made a significant upgrade to my main gaming and home workstation at Christmas 2015. That setup is thus soon four years old, and there are certainly some areas where its age is starting to show. The new generations of processors, system memory chips and particularly graphics adapters are all significantly faster and more capable these days. For example, my GeForce GTX 970 card is now two generations behind the current state-of-the-art graphics adapters; NVIDIA’s current RTX cards are based on the new “Turing” architecture, which is e.g. capable of much more advanced ray tracing calculations than the previous generations of consumer graphics cards. What this means in practice is that rather than just applying pre-generated textures to the objects and parts of the simulated scenery, ray traced graphics attempt to simulate how actual rays of light would bounce and create shadows and reflections in the virtual scene. Doing this kind of calculation in real time for millions of light rays in an action-filled game scene is extremely computationally intensive, and the new cards are packed with billions of transistors in multiple specialised processor cores. You can have a closer look at this technology, with some video samples, e.g. here: https://www.digitaltrends.com/computing/what-is-ray-tracing/ .
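
To give a feel for the core idea, here is a minimal, purely illustrative sketch (in Python, with made-up scene values) of what a ray tracer does at its simplest: shoot a ray from the camera through each pixel, test whether it hits an object, and shade the hit point according to the light direction. Real-time game engines do vastly more than this, but the principle is the same.

```python
# Minimal ray tracing sketch: one sphere, one light, rendered as ASCII shading.
# All scene values are made up for illustration only.
import math

WIDTH, HEIGHT = 40, 20          # tiny "image" rendered as ASCII characters
SPHERE_C = (0.0, 0.0, 3.0)      # sphere centre, 3 units in front of the camera
SPHERE_R = 1.0
LIGHT_DIR = (-0.5, 0.8, -1.0)   # direction towards the light (normalised below)

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def hit_sphere(origin, direction):
    """Return distance to the nearest intersection with the sphere, or None."""
    oc = tuple(o - c for o, c in zip(origin, SPHERE_C))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - SPHERE_R ** 2
    disc = b * b - 4.0 * c      # direction is normalised, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

light = normalise(LIGHT_DIR)
shades = " .:-=+*#%@"           # darker to brighter
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # camera at the origin; shoot a ray through an image plane at z = 1
        px = (x / WIDTH - 0.5) * 2.0
        py = -(y / HEIGHT - 0.5) * 1.0
        ray = normalise((px, py, 1.0))
        t = hit_sphere((0.0, 0.0, 0.0), ray)
        if t is None:
            row += " "          # ray missed: background
            continue
        hit = tuple(t * d for d in ray)
        normal = normalise(tuple(h - c for h, c in zip(hit, SPHERE_C)))
        brightness = max(0.0, sum(n * l for n, l in zip(normal, light)))
        row += shades[int(brightness * (len(shades) - 1))]
    print(row)
```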

I will probably update my graphics card, but only a little later. I am not a great fan of 3D action games to start with, and my home computing bottlenecks are increasingly in other areas. I have been actively pursuing my photography hobby, and with the new mirrorless camera (EOS M50) I have moved to using the full potential of RAW file formats and Adobe Lightroom post-processing. With photo collections growing into the hundreds of thousands, and the file size of each RAW photo (and its various-resolution previews) growing larger, it is the disk, the memory and the speed of reading and writing all that information that matter most now.

The small update that I made this summer was focused on speeding up the entire system, and the disk I/O in particular. I got a Samsung 970 EVO Plus NVMe M.2 SSD (1 TB) as the new system disk (for more info, see: https://www.samsung.com/semiconductor/minisite/ssd/product/consumer/970evoplus/). The interesting part here is the “NVMe” technology: Non-Volatile Memory Express, a host interface for solid-state memory devices like SSDs. This new NVMe disk looks nothing like my old hard drives, though: the entire terabyte-sized disk is physically just a small add-on circuit board, which fits into the tiny M.2 connector on the motherboard (technically via a PCI Express 3.0 interface). The entire complex of physical and logical interface and connector standards involved here is frankly a pretty terrible mess to figure out, but I was just happy to notice that the ASUS motherboard (Z170-P) which I had bought in December 2015 was future-proof enough to come with an M.2 connector that supports “x4 PCI Express 3.0 bandwidth”, which is apparently another way of saying that it has NVMe support.
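
As a quick sanity check of what that “x4 PCI Express 3.0 bandwidth” means in practice, here is a back-of-the-envelope calculation sketched in Python. It uses the standard PCIe 3.0 figures (8 GT/s per lane, 128b/130b encoding) and compares against the well-known SATA III ceiling of roughly 600 MB/s.

```python
# Theoretical ceiling of an x4 PCIe 3.0 link, as used by this NVMe M.2 slot.
GT_PER_S = 8.0                 # PCIe 3.0 transfer rate per lane
ENCODING = 128.0 / 130.0       # 128b/130b line-coding overhead
LANES = 4                      # the M.2 slot exposes four lanes

per_lane_gbit = GT_PER_S * ENCODING          # ~7.88 Gbit/s usable per lane
total_gbyte = per_lane_gbit * LANES / 8.0    # ~3.94 GB/s for the whole link

print(f"Per lane: {per_lane_gbit:.2f} Gbit/s")
print(f"x4 link:  {total_gbyte:.2f} GB/s (before protocol overhead)")
# For comparison: SATA III with 8b/10b encoding tops out at about 0.6 GB/s.
```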

I was actually a bit nervous when I proceeded to install the Samsung 970 EVO Plus NVMe into the M.2 slot. First I updated the motherboard firmware to the latest version, then unplugged and opened the PC. The physical installation of the tiny M.2 card actually turned out to be one of the trickiest parts of the entire operation. The tiny slot is in an awkward, tight spot on the motherboard, so I had to remove some cables and the graphics card just to get my hands to it. And the single screw needed to fix the card in place is not one of the regular screws used in computer case installations. Instead, it is a tiny “micro-screw” which is very hard to find. Luckily I finally located my original Z170-P sales box, and there it was: the small plastic pack with a tiny mounting standoff and the microscopic screw. I had kept the box on my storage shelves all these years without even noticing the small plastic bag and the tiny screws in the first place (I read on the Internet that there are plenty of others who have thrown the screw away with the packaging, and then later been forced to order a replacement from ASUS).

There are some settings that need to be changed in the BIOS to get the NVMe drive running. I’ll copy the steps that I followed below, in case they are useful for someone else (follow them at your own risk – and, btw, you need to start by creating the Windows 10 installation USB media from the Microsoft site, and by plugging that in before rebooting and entering the BIOS settings):

In your bios in Advanced Setup. Click the Advanced tab then, PCH Storage Configuration

Verify SATA controller is set to – Enabled
Set SATA Mode to – RAID

Go back one screen then, select Onboard Device Configuration.

Set SATA Mode Configuration to – SATA Express

Go back one screen. Click on the Boot tab then, scroll down the page to CSM. Click on it to go to next screen.

Set Launch CSM to – Disabled
Set Boot Device Control to – UEFI only
Boot from Network devices can be anything.
Set Boot from Storage Devices to – UEFI only
Set Boot from PCI-E PCI Expansion Devices to – UEFI only

Go back one screen. Click on Secure Boot to go to next screen.

Set Secure Boot state to – Disabled
Set OS Type to – Windows UEFI mode

Go back one screen. Look for Boot Option Priorities – Boot Option 1. Click on the down arrow in the outlined box to the right and look for your flash drive. It should be preceded by UEFI, (example UEFI Sandisk Cruzer). Select it so that it appears in this box.
(Source: https://rog.asus.com/forum/showthread.php?106842-Required-bios-settings-for-Samsung-970-evo-Nvme-M-2-SSD)

In my case, though, setting “Launch CSM” to “Disabled” made the settings that follow it in that section vanish from the BIOS interface. Your mileage may vary. I simply backed out at that point, did the subsequent steps first, then came back to disable “Launch CSM”, and proceeded further.

Another interesting part is how to partition and format the SSD and the other disks in one’s system. There are plenty of websites and discussion threads about this. I noticed that Windows 10 will place some partitions on other (not so fast) disks if those are physically connected during the first installation round. So, it took me a few Windows re-installations to actually get the boot order, partitions and disks organised to my liking. But when everything was finally set up and running, the benchmark reported that my workstation speed had been upgraded to the “UFO” level, so I suppose everything was worth it in the end.
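
For a very rough before/after comparison of disk speeds, even a few lines of Python will do. This is only a sketch, not a real benchmark tool: it ignores direct I/O and caching (the read figure will be flattered by the OS cache), and the test file path is just an assumption – point it at the drive you want to test.

```python
# Crude sequential write/read timing sketch for a ballpark MB/s figure.
import os
import time

TEST_FILE = "speedtest.tmp"     # hypothetical path on the drive under test
CHUNK = 4 * 1024 * 1024         # 4 MiB blocks
TOTAL = 1024 * 1024 * 1024      # write and read 1 GiB in total
block = os.urandom(CHUNK)

start = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())        # make sure the data actually reaches the disk
write_mbs = TOTAL / (time.perf_counter() - start) / 1e6

start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass                    # likely served partly from the OS cache
read_mbs = TOTAL / (time.perf_counter() - start) / 1e6

os.remove(TEST_FILE)
print(f"sequential write: {write_mbs:.0f} MB/s, sequential read: {read_mbs:.0f} MB/s")
```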

Part of the quiet, snappy and effective performance of my system after this installation can of course be due simply to the clean Windows installation itself. Four years of use, with all kinds of software and driver installations, can clutter a system so that it does not run reliably or smoothly, regardless of the underlying hardware. I also took the opportunity to physically clean the PC inside out, fix all loose and rattling components, organise the cables neatly, etc. After closing the covers, setting the PC case back in its place, plugging in a sharp 4K monitor and a new keyboard (a Logitech K470 this time), and installing just a few essential pieces of software, it was a pleasure to notice how fast everything now starts and responds, and how cool the entire PC runs according to the system temperature sensors.

Cool summer, everyone!

M50: first experiences

Out-of-camera JPG (M50, with EF-M 22mm f/2 lens).

I have been using the new Canon EOS M50 mirrorless system camera for a month or so now. My main experiences are pretty positive, but I also have some comments on what this camera is good for, and what it is less optimal for.

In terms of image quality and feature set, this is a pretty complete package. Canon can make good cameras. However, the small physical size of this camera is perhaps its most defining characteristic. This means that the M50 is excellent as a light and small travel companion, but also that the grip is too small to carry the body comfortably when heavy “pro” lenses or telephoto lenses are attached. One must carry the system by the lens instead.

I really like the touch screen interface of the M50. The swiveling LCD is really functional, and it is easy to take that quick photo from an extra low or high angle. The LCD touch interface Canon uses is perhaps the best on the market today: it is responsive, well designed and logically organised. This is particularly important for the M50, since it has only a few physical buttons and a single rotating control. A photographer using the M50 needs to use the touch UI for many key functions. This is perhaps something that many manual-settings oriented professional and enthusiast photographers will not like; if you like to set the aperture, exposure time and ISO from physical controls, then the M50 is not for you (one should consider e.g. the Fujifilm X-T3 or X-T30 instead). But if one is comfortable working with electronic controls, then the M50 provides multiple opportunities.

My old EOS camera had only a few (nine) phase-detect autofocus points, and only the single point in the middle had the fast, cross-type AF. The M50 has 99 selectable AF points (143 with some lenses) of the Dual Pixel type, covering 80 % of the sensor area. Coupled with the touch screen, this change has had an effect on my photography style. It is now possible to compose the photo while looking through the electronic viewfinder and simultaneously use a thumb to drag the AF point/area (in a “computer mouse/touchpad” style) to the desired spot on the screen. I am not completely fluent in this technique yet, though, and my usual routine of center-focusing first, half-pressing to lock the focus, then quickly making the final composition and shooting is perhaps in most situations quicker and simpler than moving the focus point around the screen. But since the M50 remembers in Program mode (which I use most) where the AF point was last left, the center-focusing method does not work properly any more. I just need to learn new tricks and keep moving the AF point around the screen (or let the camera do everything in Full Auto mode, or switch to manual focus and use the lens ring instead).

As a modern mirrorless camera, the M50 is packed with sensors and comes with a powerful DIGIC 8 processor, a bright LCD screen and an electronic viewfinder. All of this consumes electricity, and the battery life of the M50 is nowhere near that of my old 550D (which, btw, also had an extra battery grip). A full day of shooting takes two or three fully charged LP-E12 batteries. Thus, this camera behaves like a smartphone with poor battery life: you need to be using that battery charger all the time. (The standard rating is 235 shots per charge, CIPA.)
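
The arithmetic behind the “two or three batteries a day” observation is simple; here is a tiny sketch using the 235-shot CIPA rating mentioned above (the daily frame count is an assumed example, not a measured figure).

```python
# Rough battery planning based on the CIPA rating quoted above.
import math

CIPA_SHOTS_PER_CHARGE = 235
shots_per_day = 600            # assumed shooting volume for a full travel day

batteries = math.ceil(shots_per_day / CIPA_SHOTS_PER_CHARGE)
print(f"{shots_per_day} frames / {CIPA_SHOTS_PER_CHARGE} per charge "
      f"-> about {batteries} LP-E12 batteries")   # -> about 3
```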

When travelling, I have been using the wireless capabilities of the M50 a lot. It is really handy that one can move full-resolution or reduced-resolution versions of photos to an iPhone, iPad or Android device while on the go. On the other hand, this is nowhere near as easy as shooting and sharing directly from a smartphone. Moving the typical 200-300 photos from a shooting session to an iPad for editing and uploading is slow and feels like it takes ages. (I have not yet cracked how to get the advertised real-time Bluetooth photo transfer to work.) The traditional workflow, where the entire memory card is first read into a PC and processed with Lightroom, still makes better sense, but it is nice to have the alternative for mobile processing and for sharing at least some individual photos.

Many reviewers of the M50 have written a lot about the limitations of its 4K video mode (high crop factor, no Dual Pixel autofocus). I use video rarely, and then only in full HD, so that is not an issue for me. There is an external microphone input, which might be handy, and the LCD screen can be turned to point forward, should I ever go into video blogging (not that I plan to).

The main plusses of the M50 for me are the compact size, the excellent touch UI, and the very nice still image quality. That I can use both the new, compact EF-M mount lenses and (with adapters) the traditional Canon EF lenses was a major factor in the purchase decision, since a photographer’s lens collection is typically a much more expensive part of the equipment than the body alone. Changing to Nikon, Fuji or Sony would have been a big investment.

The autofocus system in the M50 is fast, and in burst mode the camera can shoot 10 fps for about 30 JPG frames in a row before the buffer fills. I am not a sports or wildlife photographer as such, so this is good enough for me. A physically bigger body would make the camera easier to handle with large and heavy lenses, but shooting with a large lens is a two-handed operation in any case (and in some cases requires a tripod), so that is not so critical. I still need to train more to use the controls and to switch between camera modes faster, and a touch interface is probably never going to be as fast as a camera with several dedicated physical controls. But this is a compromise one can make to get this feature set, image quality and lens compatibility in such a small package, at this price.

You can find the full M50 tech specs and feature set here in English: https://www.canon.co.uk/cameras/eos-m50/specifications/ and in Finnish: https://www.canon.fi/cameras/eos-m50/specifications/.

EOS M mount: interesting adapters

Attaching EF lenses to an M mount camera requires an adapter – which adds a bit to the bulk of a small camera, but is also an interesting opportunity, since it is possible to fit new electronic or optical functionality inside that middle piece.

I have two adapters. One is the official, Canon-made “EF-EOS M” mount adapter, which keeps the optical characteristics of the lens the same as they would be on an EF-S mount camera (crop and all). The other is the “Viltrox EF-EOS M2 Lens Adapter 0.71x Speed Booster” (a real mouthful), which has the interesting capability of multiplying the focal length by a factor of 0.71. It is a sort of “inverted teleconverter”: it reduces the size of the image that the lens projects, allowing more of the light to fall on the smaller (APS-C) sensor, and almost eliminates the crop factor.

Most interestingly, as the booster concentrates more light onto the sensor, it also has the effect of increasing the effective maximum aperture of my EF/EF-S lenses on an M mount camera. When I attach the Viltrox to my 70-200 mm F4, it appears to my M50 as an F2.8 lens (with that constant aperture over the entire zoom range). The image quality that these “active speed booster adapters” produce is apparently a somewhat contested topic among camera enthusiasts. In my personal, initial tests I have been pretty happy: sharpness and corner vignetting appear well controlled, and the images produced are of rather good quality – or good enough for me, at least.
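
On paper, the effect of the 0.71x reducer is easy to work out: both the effective focal length and the effective f-number are multiplied by the same 0.71 factor, and the 1.6x APS-C crop is largely cancelled. A small sketch of the arithmetic, using the 70-200 mm f/4 case above:

```python
# Effective focal length, aperture and crop with a 0.71x focal reducer.
BOOSTER = 0.71
CROP = 1.6                      # Canon APS-C crop factor

def with_booster(focal_mm, f_number):
    """Both focal length and f-number scale by the booster factor."""
    return focal_mm * BOOSTER, f_number * BOOSTER

long_end, aperture = with_booster(200, 4.0)
print(f"200 mm f/4 behaves like {long_end:.0f} mm f/{aperture:.1f}")   # ~142 mm f/2.8
print(f"effective crop factor: {CROP * BOOSTER:.2f}")                  # ~1.14
```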

When I put this on my 50 mm F2.8 portrait lens, the lens functions as if it had an F1.2 maximum aperture. This is pretty cool: the capability to shoot in low-light conditions is much better this way, and with this adapter the narrow depth of field is similar to that of a much heavier and more expensive full-frame camera system.

In my tests so far, all my Canon EF lenses have worked perfectly with the Viltrox. However, when testing with the Tamron 16-300 mm F/3.5-6.3 Di II VC PZD super-zoom lens, there are issues. The adapter focuses light in the wrong way with this lens, and the result is that the corners are cut off from the images (see the picture below). So, your mileage may vary. I have written to Viltrox customer service and asked what they suggest in the Tamron case (I have updated the adapter to the most recent available firmware – this can be done very simply using a PC and the built-in micro-USB connector on the adapter).

You can read a bit more about this technology (in connection with the first such product, from Metabones) here: https://www.newsshooter.com/2013/01/14/metabones-speed-booster-adapter-gives-lenses-an-extra-fstop-and-nearly-full-frame-focal-lengths-on-aps-c-sensors/

Going mirrorless (EOS M50)

I have today started learning to take photos with the ultra-compact EOS M50, after using much bigger SLR and DSLR cameras for decades. This is surely an interesting experience. Some of the fundamentals of photography are still the same, but there are areas I clearly need to study more, and new approaches to learn.

Canon EOS M50 (photo credit: Canon).

These involve particularly learning how to collaborate better with the embedded computer (the DIGIC 8 processor). It is fascinating to note how fast e.g. the automatic focusing system is – I can suddenly use an old lens like my trusty Canon EF 70-200mm f/4 L USM to get in-flight photos of rather fast birds. The new system tracks moving targets much faster and more reliably. However, I am by no means a bird photographer, having mostly worked with still life, landscapes and portraits. Handling the dual options of composing the photo either through the electronic viewfinder or on the vari-angle touchscreen also takes some getting used to.

Also, there are many ways to use this new system, and finding the right settings among the many different menus (there must be hundreds of options in all) takes some time. Coming from the much older EOS 550D, it was also weird to realise that the entire screen is now filled with autofocus points, and that it is possible to slide the AF point with a thumb (using the touchscreen as a “mouse”) into the optimal spot while simultaneously composing, focusing, zooming and shooting – at a maximum of 10 frames per second. I am filling up the memory card fast now.

My Canon EOS 550D and M50, side by side. Note that I am using a battery grip on the 550D, which is a rather small DSLR camera in itself.

It is easy to do many basic photo editing tasks in-camera now. It actually feels like there is a small “Photoshop” built into the camera. However, there is a fundamental decision that needs to be made: whether to use photos as they come, directly from the camera, or after some post-processing on the computer. This matters since JPG and RAW based workflows are a bit different. These days I am using quite a lot of mobile apps and tools, and the ability to wirelessly copy photos from the camera to a smartphone or tablet (via Wi-Fi, Bluetooth + NFC) in the field is definitely something that I like. Currently, therefore, the JPG options make the most sense for me personally.

Chilies in the greenhouse

First flowers: chilies, 2019 season.

My first hydroponic chili pepper growing season has been a bit of a mixed experience so far. On the one hand, the passive hydroponic setup that I installed (based on the AutoPot 4pot system, HydroCoco, and Canna Coco A+B) was a great success. The plants really grew fast.

So fast, actually, that I was soon in trouble with them. My planting schedule was based on my earlier experiences with soil-based gardening, but growth in hydroponics is much faster. I germinated the seeds in early February, moved selected seedlings into the AutoPot system on 25th February, and already in early April the plants were so tall that they should have been moved to the greenhouse. My LED plant light system in particular was a bottleneck – the fast-growing chili plants quickly reached the maximum height to which I could raise the LED strips, and I needed to cut them back quite a bit. Even then, the plants would have needed better light: real, strong sunlight coming from multiple angles, not just those narrow LED strips.

But we got snow and “takatalvi” (cold spell & wintry weather) in April, and I could not move the plants into the greenhouse. I just kept growing them, cutting them down, growing more – and waiting for the weather to get warmer.

It was only in late May (18th May, to be exact) that the weather forecast said further snow was highly unlikely. I started moving the overgrown plants to the greenhouse, but they lost maybe half of their branches in the process. The big, weak plants were just not made for such rough physical handling. The hydroponics setup is not designed to be moved around, either.

Poor chilies, moved too late to the greenhouse.

But I got the plants out, set up the AutoPot system again, this time in the greenhouse, filled it with water and Canna Coco, and hoped for the best.

All four plants are still alive, which is nice. CAP 270 is in bloom and is already bearing its first fruit. But the plants are not that nice looking, as they lost many of their branches in the move, and the growth patterns are not that good with the future crop in mind. The branches should be stronger, thicker and more symmetrical to support a decent amount of chili pods. Well, we’ll see what the final outcome will be.

The lesson? Maybe I need to think more carefully about my cultivation schedules: the plants should be much smaller at the point when they can still be safely moved from indoors to the greenhouse. They should also be pruned, so that the powerful growth stays under control. But otherwise: hydroponic gardening seems like a really interesting option!

The first fruit (C. baccatum, “CAP 270”).

Lens trumps the camera?

It is sort of interesting to think that maybe cameras have already become “good enough”. By this I mean that the capabilities of the camera body are no longer the real bottleneck in photography. Following the field, it is easy to find anecdotal stories about professional photographers relying on ten-year-old, or even much older, equipment, with no need to update or upgrade. And that does not even count the “retro” photographers who for various reasons prefer film cameras and vintage equipment.

As digital cameras contain microprocessors, and their light-sensitive sensors are based on semiconductor technologies, the development of new cameras has gained a lot from “Moore’s Law” and the quick progress in manufacturing ever faster silicon chips. Today it is particularly in the design and marketing of smartphones that this “speedrun” is obvious, with the next generation following the previous one every six months or so. But even smartphone sales are slowing down, and one reason appears to be that the existing phones are already – good enough.

The brain of a digital camera is its processor, the system chip. This is where sensor information gets processed, where operations such as AF (autofocus) are controlled, and where any in-camera post-processing of photos takes place. I have mostly been following the evolution of Canon’s DIGIC series of image processors, and it is obvious that many genuinely useful features for photographers have come with the new processor generations. In addition to combining data from the lens and light sensors to produce more-or-less optimally exposed photos, the newer generations have e.g. introduced face-detection autofocus, which can automatically find the faces in a group photo and set the depth of field so that all of them are sharp. Mostly, though, a new generation just provides incremental improvements in some fundamental areas such as the speed of image processing, noise reduction in low-light conditions, or the speed and precision of autofocus.

It is nice to have a fast-shooting, fast-focusing camera that does all sorts of intelligent things like scene detection and is able to apply many settings automatically. On the other hand, much of the art and craft of photography is in learning to think about the key dimensions of photographs, and in developing the ability to use technology to produce a certain kind of creation. The “smart” processor might be useful in removing the danger of technically failed shots, but it might also slow down the ability to experiment and learn from mistakes. I know from my own experience how easy it is just to give “Program” (the semi-auto mode in Canon cameras) the reins, and then end up living in a somewhat smaller creative sandbox as a result.

Putting too much emphasis on the latest camera features also carries the danger of missing other important dimensions of cameras as physical tools. The mechanical construction of a camera, its size and shape, how the physical dials and control buttons work – all of this has a very significant effect on the handling and ergonomics that matter a lot while taking photographs. Consider the latest smartphones, for example. In many cases wide-angle and normal focal length photos can be shot with a smartphone with technically excellent results. However, most professionals still prefer a tool that is designed to be a camera in ergonomic terms as well, when taking photographs all day long. A slippery smartphone with virtual, on-screen buttons just does not provide the same kind of experience and sense of control.

Thus, in many cases one can actually save some money by settling for an older-generation camera body and investing in lenses instead. This can be a bit tricky, of course, as new camera and lens generations sometimes also come with new lens mounts; the autofocus and metering systems, for example, might rely on new pins for exchanging information between the lens and the body in new ways, or – as in the case of mirrorless cameras – the lenses are redesigned to take advantage of the smaller mirrorless body (that is, they move physically closer to the image sensor). In many cases, however, the manufacturer’s standard lens mount still applies, or there is a perfectly working adapter available to fit new lenses to older-generation bodies, or the other way around.

One way for an enthusiast photographer to move forward in actual image quality, and in the range of photos one can achieve, is therefore to stick with slightly older camera technology and put the savings into updating the lenses. With interchangeable lens cameras there are different basic options for the lens selection, and the choice relates to the style of photography one is working on. A street photographer, or one who mostly shoots people and events, can do nicely with a “normal” lens – or, in portraiture, with a short telephoto. In this lens range, the maximum aperture, sharpness and absence of various distortions are what one is paying for in the good quality (or “professional”) lens versions.

I think I have a pretty decent situation in wide-angle and normal focal length photography at the moment, but there is much to improve in the longer telephoto lenses. Particularly my growing interest in nature photography translates into a need for long, big-aperture, sharp lenses. And unfortunately those things do not come cheap. Below are a couple of interesting alternatives for the Canon EF mount – I’d be interested to hear any comments or experiences you might have of these, or of other EF mount telephoto lenses!

Canon EF 100-400mm f/4.5-5.6L IS II USM (Photo credit: Canon.)
Sigma 150-600mm F5-6.3 DG OS HSM | S. (Photo credit: Sigma.)