I wanted to revisit my old gear tonight, so I dug up my trusty EOS 550D, coupled with the BG-E8 battery grip and the classic Canon 70-200mm f/4L USM lens. The Friendly Cat once again provided the modelling services.
I was immediately reminded of the obvious strengths of this older, bigger camera body: the ergonomics are just so much better when you can really hold the camera comfortably and steadily in your hand, and have large, mechanical control knobs that you can quickly and effortlessly experiment with.
On the other hand, the limitations were also immediately obvious; in particular, the mirrorless digital camera (EOS M50) that I mostly use these days allows one to move seamlessly between the viewfinder and the live view on the rear display while composing. The 550D also has live view on its rear display, but you need to switch it on separately, it is slow and imprecise, and the autofocus in particular is just terrible when shooting that way.
The optical viewfinder, on the other hand, is excellent, and the very limited nine (9) AF points do their job just well enough for this kind of slow “portrait” work. The low maximum ISO of 6400 also does not matter when taking pictures under the bright evening sun, and the sharpness of that old Canon L lens fits nicely with the 18-megapixel image sensor’s resolution capabilities.
Thus, if I were to think about a “perfect camera” for my use, I would be happy with the current M50 sensor resolution (24.1 megapixels), but I would really welcome a more capable autofocus system, and better low-light performance in particular. The single most beneficial upgrade, however, could be a body with larger physical dimensions, with better and larger mechanical controls for selecting the program mode and aperture, and for making the other key adjustments.
While the new EOS R series Canon cameras provide exactly that, the issue for me is that those are full frame cameras, and I am very happy taking my photos with APS-C (the “crop sensor”). Full frame lenses, and the new Canon RF lenses in particular, tend to be both large and expensive to a degree that does not make much sense for a “Sunday photographer” like me.
There are alternatives like Fujifilm, with their excellent APS-C camera bodies (the X-T30 and X-T4, for example), and their sharp, relatively compact and affordable lenses. But I am deeply invested in the Canon ecosystem – it would be so much easier if Canon would come up with a well-designed camera like the 7D Mark II, but updated and upgraded to the capabilities of current mirrorless sensors and image processors. One can always make wishes, right? Happy weekend, everyone!
There are multiple operating systems to choose from. Some are feature-rich, some not so much. While those who are enthusiastic and passionate about these kinds of things continue to be passionate, the actual differences between the systems you can work in are growing less and less important, year by year.
The basics of digital environments are today “good enough”, pretty much everywhere you go.
There are still certain significant differences, of course. Windows has the legacy of great popularity over decades in highly heterogeneous work and private use contexts. It has a huge backlog of software and hardware that has been created for, or supported on, Windows computers. This is both a blessing and a challenge. It is very difficult to produce a new version of the OS that would not conflict with some software, or some driver-hardware combination out there, as the recent hurdles of Windows 10 upgrade installations have proved.
Apple Macintosh users have more often been left out in the cold, as there have been many devices that never came with drivers to make them work with a Mac. There has arguably been a lot of high-quality, professional software available for Macs, but in purely numeric terms, the Windows software ecosystem is an order of magnitude larger.
Somewhat similarly, iOS (the operating system for Apple mobile devices) is limited by design: there are many restrictions on modifying and customising the default operation and setup of an iOS system. On the other hand, software developers can rely on a highly standardised environment, and users get a very reliable (even if uniform and rigid) experience.
There are thus obvious pluses and minuses with the various philosophies that operating systems have adopted, or have been based on.
The current leader, Windows 10, is overall strong in diversity, meaning here particularly the breadth of software and hardware support. Be it business software, services or games, Windows is the default environment with the most alternatives. On the other hand, a Windows user is challenged by a certain loss of control: both the operating system and much of the available software, system add-ons and drivers are proprietary. The environment is effectively filled with black boxes that do something – and the user can in most cases only hope that what goes on is based on the right and correct principles.

And as there are multiple actors in every Windows installation, the cumulative effects can be surprising: there is Microsoft, trying its best both to introduce new functions and technologies and to maintain backward compatibility with its long history of legacy systems. Then there is the OEM (original equipment manufacturer), like Dell or HP, who typically configure their Windows computers with their own custom-made tools and drivers. Then comes the user, who also installs various kinds of elements into this environment. There is a saying in Swedish, “tårta på tårta” – cake upon cake. No-one is capable of carrying responsibility for how the entire conglomerate operates in a Windows computer.

In many cases the results are good enough, and the freedom of choice and the diverse support for multiple use cases are exactly what the users are looking for. On the other hand, there is also a well-documented history of bugs and problems related to the piling-up effects of this sprawling and inefficient software ecosystem.
As the leading open-source alternative, Linux is known for its rather effective use of computing resources. A typical Linux distribution runs well even on ageing computer hardware, and on modern, powerful systems one can really experience what a fast and reliable OS can mean. There are (of course) certain downsides to Linux as well. The main challenge lies in the somewhat higher threshold of learning. While there are increasingly easy distributions that come pre-configured with graphical tools that allow the non-expert user to take hold of their system and configure it to their liking, the foundation of Linux is in command-line tools and text-format configuration files. Even today I find that after a new, out-of-the-box Linux distro installation, I feel the need to spend perhaps an hour or two on the command line, hunting down and installing the various tweaking tools, add-ons and other elements that are lacking in the default installation.

But Linux is getting better. Particularly the support for new hardware is now much better than it was ten years ago. While a Linux laptop user in the past would in many cases find that most of the controllers, special keys and other elements of their device would not work at all, or only after considerable effort, today the situation is different. Most things actually work, which is great. But if something does not work in a Linux installation, one is mostly left to one’s own devices (and to hunting for help on the various community websites online). However, as a counter-example, Lenovo recently announced that they will certify their entire workstation portfolio to run Linux – “every model, every configuration” (see: https://news.lenovo.com/pressroom/press-releases/lenovo-brings-linux-certification-to-thinkpad-and-thinkstation-workstation-portfolio-easing-deployment-for-developers-data-scientists/).
I myself recently configured two laptops with a dual-boot Windows/Linux setup: a Microsoft Surface Pro 4 and an HP Elitebook x360 1030 G3. I considered both to be more challenging devices from a Linux perspective, since they are two-in-one hybrid devices with touch screens, which means that they most probably rely on many proprietary drivers to keep all their functionality running. There were certain challenges (in the BIOS/UEFI settings, in configuring the GRUB2 boot menu, and in the disk partitioning), but Linux itself actually handled both devices just fine. I was using the most recent, 20.04 release of the Ubuntu desktop distribution, but there are several other alternatives that could work equally well, or even better.

The Elitebook x360 is my main daily driver, and while my Windows 10 installation makes it run burning hot, fans blowing, Ubuntu is snappy, quiet and cool. And I can actually operate both the touch screen and the touchpad with gestures that I have fully customised to my own liking, the active pen also works fine with the screen, and there are only a couple of things that fall short of Windows 10. The special keys for controlling brightness do not work (I use control sliders instead), and probably neither does the infrared camera (for facial recognition and login) nor the LTE modem (I have not tested it, though). One thing that I noticed is that this system currently sounds much better under Windows – the sound system is Bang & Olufsen certified, and they have probably configured the sound drivers and equalisers for optimal sound delivery, as the audio quality of music under Windows is perhaps the best of any laptop I have used. But there is a highly detailed software tool called PulseEffects available for Linux that allows one to create a customised audio profile – if one is ready to dedicate the time and effort to tweaking and testing. That is the reality of Linux still, for good or bad; but luckily most of the essentials for work use will run just fine, directly out of the box.
As a complete opposite of the high “tweakability” of Linux, iOS/iPadOS systems limit the user’s possibilities to a radical degree. The upside is that an iPhone or iPad is very easy to use: one can always find the same settings in the same places. It used to be that Apple mobile devices had excellent battery life and system reliability, but could only do one thing at a time. With the launch of iOS/iPadOS 13 (and the coming version 14), multitasking became an option of sorts, particularly on iPad Pro devices. One can also buy a (premium, and rather expensive) “Magic Keyboard” add-on for the iPad Pro, and it comes with really nice scissor keys, plus a touchpad that allows mouse-and-keyboard-style control of iOS. With iOS 14 there will be some more user-configurable elements added, such as (Android-style) widgets on the desktop. There are inevitable complications related to the added capabilities. An iPad Pro that is constantly polling the touchpad (or burning the backlight in the keyboard) does not have as long a battery life as one without. And the multitasking and various split-screen modes in iPadOS are rather clumsy and hard to control without considerable dedication to learning new gestures and skills of touch control.
Thus, I would say that we are currently in a rather good situation in terms of having several good alternatives to choose from. I myself prefer to have both Windows 10 and Linux installed on my main computers, and to keep them updated to their most recent versions. But I also use iOS, iPadOS and Android daily, and all of them have their distinctive strengths and weaknesses. If something does not work very well in one environment, it is often better to try something different, rather than trying to force the operating system out of its own “comfort zone”. I suspect this basic situation will remain the same for the foreseeable future, too.
I have followed a roughly five-year PC upgrade cycle – making smaller, incremental parts upgrades in between, and building a totally new computer every four to five years. My previous two completely new systems were built during a Xmas break – in December 2011 and 2015. This time, I was looking for something to put my mind into right now (the year 2020 has been a tough one), and so I specced my five-year build already at Midsummer.
It somehow feels like every year is a bad year to invest in computer systems. There is always something much better coming up, just around the corner. This time, it seems that both a new processor generation and a new major graphics card generation are coming later in 2020. But after doing comparative research for a couple of weeks, in the end, I did not really care. The system I will build with the 2020 level of technology should be much more capable than the 2015 one, in any case. Hopefully the daily system slowdowns and bottlenecks will now ease.
Originally, I thought that this year would be the year of AMD: both the AMD Zen 2 architecture based Ryzen 3000 series CPUs and the Radeon RX 5000 GPUs appeared very promising in terms of value for money. In the end, it looks like this might be my last Intel-Nvidia system (?) instead. My main question marks related to the single-core performance of the CPUs, and to the driver reliability of the Radeon 5000 GPUs. The more I read, and the more I discussed with people who had experience with the Radeon 5000 GPUs, the more I heard stories about blue screens and crashing systems. The speed and price of the AMD hardware itself seemed excellent. With the CPUs, on the other hand, I evaluated my own main use cases, and came to the conclusion that the slightly better single-core performance of Intel’s 10th generation processors would mean a bit more to me than the solid multi-core, multi-thread performance of similarly priced, modern Ryzen processors.
After a couple of weeks of study into mid-priced, medium-powered components, here are the core elements chosen for my new, Midsummer 2020 system:
Intel Core i5-10600K, LGA1200, 4.10 GHz, 12MB, Boxed (there is some overclocking potential in this CPU, too)
ARCTIC Freezer 34 eSports DUO – Red, processor cooler (I studied both various water-cooling solutions and the high-powered Noctua air coolers before settling on this one; the water-cooling systems did not appear quite as durable in the long run, and the premium NH-D15 was a bit too large to fit comfortably into the case; this appeared to be a good compromise)
MSI MAG Z490 TOMAHAWK, ATX motherboard (this motherboard appears to strike a nice balance between price and solid construction, feature set, and the investments put into the voltage regulator modules, VRMs, and other key electronic circuit components)
Corsair 32GB (2 x 16GB) Vengeance LPX, DDR4 3200MHz, CL16, 1.35V memory modules (this amount of memory is not needed for gaming, I think, but for all my other multitasking and multi-threaded everyday uses)
MSI GeForce RTX 2060 Super ARMOR OC GPU, 8GB GDDR6 (this is entry-level ray-tracing technology – it should be capable enough for my use, for a couple of years at least)
Samsung 1TB 970 EVO Plus SSD M.2 2280, PCIe 3.0 x4, NVMe, 3500/3300 MB/s (this is the system disk; there will be another SSD and a large HDD, plus a several-terabyte backup solution)
Corsair 750W RM750x (2018), modular power supply, 80 Plus Gold (there should be enough reliable power available in this PSU)
Cooler Master MasterBox TD500 Mesh w/ controller, ATX, Black (chosen on the basis of available test results – my priorities here were easy installation, efficient airflow, and thirdly, silent operation)
As a final note, it was interesting to see that during the intervening 2015-2020 period, there was a time when RGB lights became the de facto standard in PC parts: everything was radiating and pulsating in multiple LED colours like a Xmas tree. It is ok to think about design, and even to aim towards some kind of futurism in this context. But some things are just plain ridiculous, and I am happy to see a bit more minimalism winning ground in PC-enthusiast-level components, too.
The global coronavirus pandemic has already caused deaths, suffering and cancellations (e.g. those of our DiGRA 2020 conference and the Immersive Experiences seminar), but there are luckily still many things that go on in our daily academic lives, albeit often in a somewhat reorganised manner. In schools and universities, the move to remote education in particular is producing wide-ranging changes, and it is currently being implemented hastily in innumerable courses that were not originally planned to be run in this manner at all.
I am not in a position to provide pedagogical advice, as the responsible teachers will know much better what the actual learning goals of their courses are, and they are therefore also best placed to think about how to reach those goals with alternative means. But since I have been directing, implementing or participating in remote education for over 20 years already (time flies!), here are at least some practical tips I can share.
Often the main part of remote education consists of independent, individual learning processes, which just need to be supported somehow. Finding information online, gathering data, reading, analysing, thinking and writing do not fundamentally change, even if the in-class meetings are replaced by reporting and commenting taking place online (though the workload for teachers can easily skyrocket, which is something to be aware of). This is particularly true of asynchronous remote education, where everyone does their tasks at their own pace. It is when teamwork, group communication, more advanced collaboration, or special software or tools are required that more challenges emerge. There are ways to get around most of those issues, too. But it is certainly true that not all education can be converted into remote education, at least not with identical learning goals.
In my experience, there are three main types of problems in real-time group meetings and audio/video conferences: 1) connection problems, 2) audio and video problems, and 3) conversation rules problems. Let’s deal with them one by one.
1) The connection problems are due to a bad or unreliable internet connection. My main advice is either to make sure that one can use a wired rather than a Wi-Fi/cellular connection when joining a real-time online meeting, or to get very close to the Wi-Fi router in order to get as strong a signal as possible. If one has a weak connection, everyone’s experience will suffer, as there will likely be garbled noises and video artefacts coming from you, rather than good-quality streams.
2) The audio and video problems relate to echo, weak sound levels, background noise, or dark, badly positioned or unclear video. If several people are taking part in a joint meeting, it might be worth thinking carefully about whether a video stream is actually needed. In most cases people are working intensely on their laptops or mobile devices during the online meeting, reviewing documents and making notes, and since there are challenges in getting real eye-to-eye contact with other people (that is still pretty much impossible with current consumer technology), there are multiple distancing factors that will lower the feeling of social presence in any case. A good-quality audio link might be enough for a decent meeting. For that, I really recommend using a headset (headphones with a built-in microphone) rather than just the built-in microphone and speakers of the laptop, for example. There will be much less echo, and the microphone will be close to the speaker’s mouth, meaning that speech is picked up much more clearly and loudly, and the surrounding noises are easier to control. It is also highly advisable to move into a silent room for the duration of the teleconference.
Another tip: I suggest always connecting the headset (or external microphone and speakers) BEFORE starting the software tool used for teleconferencing. This way, you can make sure that the correct audio devices (both output and input) are set as the active or default ones before you start the remote meeting tool. It is pretty easy to get this messed up and end up with low-quality audio coming from the wrong microphone or speakers rather than the intended ones. Note that there are indeed two layers here: in most cases, there are separate audio device settings both in the operating system (see Start/Settings/System/Sound in Windows 10) and in, e.g., a “Preferences” item with further audio device settings hidden inside most remote meeting tools. Both of those need to be checked – prior to the meeting.
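If you want to double-check the operating system layer from a script rather than by clicking through the settings, something like the listing below can work. This is just a minimal sketch in Python, assuming the third-party sounddevice library (installed with pip install sounddevice) – it only prints what the operating system exposes and which devices are the current defaults:

```python
# A quick pre-meeting check of the audio devices the OS exposes, using the
# third-party "sounddevice" library (pip install sounddevice).
import sounddevice as sd

# List every input/output device the operating system currently knows about.
print(sd.query_devices())

# Show which devices are currently set as the defaults - these are the ones
# a meeting tool will typically pick up, unless its own preferences override them.
default_input, default_output = sd.default.device
print("Default input :", sd.query_devices(default_input)["name"])
print("Default output:", sd.query_devices(default_output)["name"])
```

Remember that this only covers the operating system layer; the meeting tool’s own audio preferences still need to be checked separately.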
Thus, one more tip: please always dedicate, e.g., 10-15 minutes of technical preparation time before a remote education or remote meeting session for double-checking and solving connection, audio, video or other technical problems. It is a sad (and irresponsible) use of everyone’s precious time if every session starts with half of the speakers unable to speak, or unable to hear anyone else. Yet this kind of scenario is still unfortunately pretty typical. Remote meeting technology is notoriously unreliable, and when there are multiple people, multiple devices and multiple connections involved, the likelihood of problems multiplies.
Please be considerate and patient towards other people. No-one wants to be the person having tech problems.
3) The problems related to discussion rules are the final category, and one that might also be culturally dependent. In a typical remote meeting among several Finnish people, for example, it might be that everyone just keeps quiet most of the time. That is normal, polite behaviour in the Finnish cultural context for face-to-face meetings – but something that is very difficult to decode in an online audio teleconference, where you are missing all the subtle gestures, confused looks, smiles and other nonverbal cues. In some other cultural settings, the difficulty might be people speaking on top of each other. Or the issues might be related to people asking questions without making it clear who is actually being addressed by the question or comment.
It is usually good policy to have a chair or moderator appointed for an online meeting, particularly if it involves more than a couple of people. The speakers can use the chat or other tools in the software to signal when they want to speak. The chairperson makes it clear which item is currently being discussed and gives the floor to each participant in turn. Everyone tries to be concise, and also remembers to use names when presenting questions or comments to others. It is also good practice to start each meeting with a round of introductions, so that everyone can connect the sound of a voice with a particular person. Repeating one’s name later in the meeting when speaking up does not hurt, either.
In most of our online meetings today, we use a collaboratively edited online document for note-taking during the meeting. This helps everyone follow what has been said or decided upon. People can fix typos in the notes in real time, or add links and other materials, without requiring the meeting chairperson (or secretary) to do so for them. There are many such online note-taking tools in the standard office suites. Google Docs works fine, for example, and probably has the easiest way of generating an editing-allowed link, which can then be shared in the meeting chat window with all participants, without requiring them to have a separate service account and login. Microsoft has its own, integrated tools that work best when everyone is from the same organisation.
Finally, online collaboration has come a long way from the first experiments in the 1960s (google “NLS” and Douglas Engelbart), but it still has its challenges. If we are all aware of the most typical issues, and dedicate a few minutes before each session to setup and testing (and invest 15 euros/dollars in a reliable plug-and-play headset with USB connectivity), we can remove many annoying elements and make the experience much better for everyone. Then it is much easier to start improving the actual content and pedagogical side of things. – Btw, if you have any additional tips or comments to share on this, please drop a note below.
I have long been in the Canon camp in terms of my DSLR equipment, and it was interesting to notice that they announced a new, improved full frame mirrorless camera body last week: the EOS R5 (link to the official short announcement). While Canon lagged behind competitors such as Sony in entering the mirrorless era, this time the camera giant appears to be serious. The new flagship is promised to feature in-body image stabilisation that “will work in combination with the lens stabilization system” – a first in Canon cameras. Also, while the implementations of 4K video in Canon DSLRs have left professionals critical in the past, this camera is promised to feature 8K video. The leaks (featured on sites like Canon Rumors) have been discussing further features such as a 45-megapixel full frame sensor and 12/20 fps continuous shooting, and Canon also confirmed a new “Image.Canon” cloud platform, which will be used to stream photos for further editing live, while shooting.
But does one really need a system like that? Aren’t cameras already good enough, with comparable image quality available at a fraction of the cost (the EOS R5 might be in the 3,500-4,000 euro range, body only)?
In some sense such criticism might be valid. I have not named the equipment used for shooting the photos featured in this blog post, for example – some were taken with my mirrorless system camera, some come from a smartphone camera. For online publishing and hobbyist use, many contemporary camera systems are “good enough”, and can be flexibly utilised for different kinds of purposes. And today the lens is a far more important element than the camera body, or the sensor.
That said, there are some areas where a professional full frame camera is indeed stronger than, for example, a consumer model with an APS-C (crop) sensor. It can capture more light on its larger sensor and thus deliver a somewhat wider dynamic range and less noise under similar conditions. Thus, one might be able to use higher ISO values and get noise-free, professional-looking and sharp images in lower-light conditions.
On the other hand, the larger sensor optically means a narrower depth of field – something a portrait photographer working in a studio might love, but which might actually be a limitation for a landscape photographer. I actually do like using my smartphone for most everyday event photography and some landscape photos, too, as the small lens and sensor are good for such uses (if you understand the limitations as well). A modern mirrorless APS-C camera is actually a really flexible tool for many purposes, but ideally one has a selection of good-quality lenses to suit the mount and the smaller format. For Canon, there is a striking difference between the R&D investments Canon has made in recent years in the full frame, mirrorless RF mount lenses, as compared to the “consumer line” M mount lenses. This is based on business thinking, of course: casual photographers are increasingly moving to smartphones, and there is a smaller market and much tighter competition left in the high-end, professional and serious-enthusiast lenses and cameras, where Canon (and Nikon, and many others) hope to make their profits in the future.
Thus: more expensive, professional, full frame optimised lenses, and only a few for APS-C systems? We’ll see, but it might indeed be that smaller-budget hobbyists (like myself) will need to turn towards third-party manufacturers to fill the gaps left by Canon.
One downside of the more compact, cheaper APS-C cameras (like the Canon M mount systems) is that while they are much nicer to carry around, they do not have as good ergonomics and weather-proofing as the more pro-grade, full frame alternatives. This is aggravated in winter conditions. It is sometimes close to impossible to get your cold, gloved fingers to hit the right buttons and dials when they are as small as on my EOS M50. The cheaper camera bodies and lenses are also missing the silicone seals and gaskets that typically secure all the connectors, couplings and buttons in a pro system. Thus, I get a bit nervous when I am outside with my budget-friendly system in weather like today’s. But after some time spent carefully wiping and cleaning, everything seems to keep working just fine.
The absolute number one lesson I have learned in these years of photography is that the main limitation to getting great photos is rarely the equipment. There are more and less optimal, or innovative, ways of using the same setup, and with careful study and experimentation it is possible to learn ways of working around technical limitations. A top-of-the-line, full frame professional camera and lens system might offer a wider “opportunity space” for someone who has learned how to use it. But with their additional complexity and heavy, expensive elements, those systems also have their inevitable downsides. – Happy photography, everyone!
Sports and wildlife photographers in particular are famous (or notorious) for investing in and carrying around lenses that are often just huge: large, long, and heavy. Is it possible to take great photos with small, compact lenses, or is an expensive and large lens the only option for a hobbyist photographer who wants to reach better results?
I am by no means an authority in optics or lens design, but I think certain key principles are important to take into consideration.
Perhaps the first one is the style of photography one is engaged in. Are you shooting portrait photos indoors, or even in a studio? Or are you trekking outdoors, trying to get close-up photos of elusive birds and animals? Or are you rather a landscape photographer? Or a street photographer?
Sometimes the intended use of the photos is also a factor to consider. Are these party photos, or something that you will mostly share among your friends on social media? Or is this that important photo art project that you aim to output as large-format prints and hang on your walls – or even in a gallery?
These days, digital camera sensors are “sharp” enough for pretty much any purpose – one of my smartphones, a Huawei Mate 20 Pro, for example, has a 40-megapixel main photo sensor, with a 7296 × 5472 native resolution. That is more than you need for a large poster print (depending on viewing distances and PPI settings, 4000 x 6000 pixels, or even 2000 x 3000 pixels, might be enough for a poster print). There are many professional photographers who took their commercial photos for years with cameras that had only 6 or 8 megapixel sensors. And many of those photos were reproduced as large posters, or on the covers of glossy magazines, and no-one complained.
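The arithmetic is easy to check for yourself. Here is a back-of-the-envelope sketch in Python (the 300 PPI figure is a common rule of thumb for close-up viewing; posters viewed from further away tolerate far less):

```python
# Back-of-the-envelope print-size check: how large can a given native
# resolution be printed at a chosen PPI (pixels per inch)?
def max_print_size_cm(width_px, height_px, ppi=300):
    """Return the maximum print dimensions in centimetres at the given PPI."""
    inch_to_cm = 2.54
    return (width_px / ppi * inch_to_cm, height_px / ppi * inch_to_cm)

# The 40-megapixel smartphone sensor mentioned above, at "close inspection" quality:
print(max_print_size_cm(7296, 5472, ppi=300))  # ~62 x 46 cm

# The same pixels at 150 PPI, fine for a poster viewed from a distance:
print(max_print_size_cm(7296, 5472, ppi=150))  # ~124 x 93 cm
```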
The lens and the quality of the optics are more of a bottleneck: if a lens is “soft”, meaning that it is not capable of focusing all rays of light in a consistent, sharp manner, there is no way of achieving very clear-looking images with it. But truth be told, in perhaps 90% of cases with blurry photos, I blame myself rather than my equipment these days. There are badly focused photos, and photos where I had the wrong aperture setting or too long an exposure time (and was not using a tripod but shooting handheld) – all of that contributes to a lot of blurry-looking photos.
But it is also true that if one is trying to achieve very high optical quality, using a more expensive lens is usually what many people will do. Yet there are actually “mainstream” photography situations where a cheap lens will produce results that are just – good enough. It is particularly in the more extreme situations, where one is for example trying to get a really large amount of light into the lens, or to capture really detailed scenes in a very consistent manner, that large, heavy and expensive lenses come into play. This is also true of portraiture, where a high-quality lens is used to deliver good separation of the person from the background, and the glass elements, their positioning and the aperture blades are designed to produce a particularly nice-looking “bokeh” effect (the out-of-focus highlights are blurred in an aesthetically pleasing manner). And of course those bird and wildlife photographers value their well-designed, long telephoto lenses that also capture a lot of light, thereby enabling the photographer to use short enough exposure times to get sharp images even of moving targets.
In many cases it is actually characteristics other than optical image quality that make a particular lens expensive. It might be the mechanical build quality, the weather-proofing, or the manner in which the focusing, zooming and aperture mechanisms and the control rings are implemented that a professional photographer is willing to pay for in one of their main tools.
In street photography, for example, there are completely different kinds of priorities compared to wildlife photography, or studio portraiture, where using a solid tripod is common. On the street, one is constantly moving, and also trying not to be too conspicuous while taking photos. A compact camera with a compact lens is good for those kinds of reasons. Also, if the targets are people and views on city streets, a “normal range” lens is usually preferable. A long-range telephoto lens, or a very wide-angle lens, will produce very different kinds of effects compared to the visual feel that people usually experience as “normal images”. On a 35 mm film camera, or a “full frame” digital camera, a 50 mm lens is usually considered a normal lens, whereas a camera equipped with a (Canon) “crop” sensor (APS-C, 22.2 x 14.8 mm sensor size) requires a c. 30 mm lens to produce a field of view similar to that of a 50 mm lens on a full frame camera. Lenses with these kinds of short focal lengths can be designed to be physically smaller, and can deliver very good image quality for their intended purposes, even while being nicely budget-priced. These days there are many such excellent “prime” lenses (as contrasted with the more complex “zoom” lenses) available from many manufacturers.
One should note here that in the case of smartphone photography, everything is of course even more compact. A typical modern smartphone camera might have a sensor of only a few millimetres in size (e.g. in the popular 1/3″ type, the sensor is 4.8 x 3.6 mm), so the actual focal length of the (fixed) lens may be perhaps 4.25 mm, but that translates into the field of view of a 26 mm equivalent lens on a full frame camera. This is thus effectively a wide-angle lens that is good for many indoor photography situations. Many smartphones feature “2x” (or even “5x”) sensor-lens combinations that can deliver a normal range (50 mm full frame equivalent) or even telephoto ranges with their small mechanical and optical constructions. This is an impressive achievement – it is much more comfortable to put a camera capable of high-quality photography into your back pocket than to lug it around in a dedicated backpack, for example.
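For the curious, these “equivalent” numbers come from a simple ratio of sensor diagonals. A small sketch in Python – note that nominal sensor “types” like 1/3″ only loosely fix the real dimensions, which is why quoted equivalents (such as the 26 mm above) can differ a little from this raw arithmetic:

```python
# "Equivalent focal length" is the actual focal length multiplied by the
# crop factor: the full-frame sensor diagonal divided by the diagonal of
# the sensor in question.
import math

FULL_FRAME_DIAGONAL_MM = math.hypot(36.0, 24.0)  # ~43.3 mm

def equivalent_focal_length(focal_mm, sensor_w_mm, sensor_h_mm):
    crop_factor = FULL_FRAME_DIAGONAL_MM / math.hypot(sensor_w_mm, sensor_h_mm)
    return focal_mm * crop_factor

# Canon APS-C (22.2 x 14.8 mm): a c. 31 mm lens acts like a "normal" 50 mm.
print(equivalent_focal_length(31, 22.2, 14.8))   # ~50 mm

# A 1/3" type smartphone sensor (4.8 x 3.6 mm) with a 4.25 mm lens:
print(equivalent_focal_length(4.25, 4.8, 3.6))   # ~31 mm, i.e. wide-angle
```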
Perhaps the main limitation of smartphone cameras for artistic purposes is that they do not have adjustable apertures. There is always the same, rather small hole through which rays of light enter the lens and finally focus on the image sensor. It is difficult to control the “zone of acceptable sharpness” (or “depth of field”) with a lens where you cannot adjust the aperture size. In fact, it is easy to achieve “hyperfocal” images with very small-sensor cameras: everything in the image will be sharp, from very close up to infinity. But the more recent smartphones already have slightly larger sensors, and there have even been experiments to implement adjustable aperture systems inside these tiny lenses (the Nokia N86 and Samsung Galaxy S9, at least, have advertised adjustable apertures). Some manufacturers resort to using algorithmic background blurring to create a soft, full-frame-camera-like background while still using optically small lenses that naturally have a much wider depth of field. When you take a look at the results of such “computational photography” on a large and sharp monitor, the results are usually not as good as with a real optical system. But if the main use scenario for such photos is to be viewed on small-screen mobile devices, then – again – the lens and augmentation system together may be “good enough”.
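That hyperfocal behaviour can also be put into numbers with the standard approximation H = f²/(N·c) + f, where f is the focal length, N the f-number and c the circle of confusion. A small sketch in Python – the circle-of-confusion values used here are common assumptions, not exact constants:

```python
# Standard hyperfocal-distance approximation: with the lens focused at H,
# everything from about H/2 to infinity is acceptably sharp.
#   H = f^2 / (N * c) + f
def hyperfocal_m(focal_mm, f_number, coc_mm):
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

# A 4.25 mm f/1.8 smartphone lens; ~0.004 mm is a commonly assumed circle
# of confusion for such a tiny sensor:
print(hyperfocal_m(4.25, 1.8, 0.004))  # ~2.5 m: near-everything is sharp

# A 50 mm f/1.8 lens on full frame (c commonly taken as 0.03 mm):
print(hyperfocal_m(50, 1.8, 0.03))     # ~46 m: shallow focus is easy to get
```

This is why a small-sensor phone renders almost everything sharp, while a full frame camera can isolate a subject with ease.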
All the photos attached to this blog post were taken either with a compact kit lens or with a smartphone camera (apart from that single bird photo above). Looking at them on a very high resolution computer monitor, I can find blurriness and all kinds of other optical issues. But personally, I can live with those. My use case here did not involve printing these out in poster sizes; I just enjoyed having a winter-day walk and taking photos while not carrying too heavy a setup. I will also be posting the photos online, so the typical viewing size and situation pretty much obfuscates maybe 80% of the optical issues. So: compact cameras, compact lenses – great photos? I am not sure. But: good enough.
Ten years ago, in 2010, I blogged about the new kiuas (sauna stove) for our sauna, a Harvia Figaro. Just before Christmas this year, this kiuas broke down. There was an electrical failure (the junction box of the kiuas basically exploded in an electric short circuit), and I am not enough of an engineer to say whether some underlying failure in the stove itself was the reason, or just the weakly designed connections in the junction box failing over time. Luckily our circuit breakers worked just fine – we only missed one fundamental Finnish tradition: the joulusauna (Christmas sauna).
Looking inside the Harvia after 10 years of use was eye-opening. The heater elements (lämpövastukset) of the kiuas were pretty much gone. Also, we could not really trust the controller, timer and other electronics inside the Harvia after the dramatic short circuit. So I decided to get a new kiuas.
After some careful examination and discussion about the needs and priorities of our family, the choice was the Tulikivi Sumu ST 9 kW model. This is specced for an 8-13 m³ sauna room, so it should be suitable in our case. (For more, see: https://www.tulikivi.fi/tuotteet/Sumu_ST .)
One of the lessons from the Harvia was that a long construction with an open side that exposes the stones is challenging: the small stones can easily squeeze out through the steel bars, and the tall heater elements become strained among the moving stones. I blame myself for not being diligent enough to take the stones out at least a few times per year, wash them, and put them back – the heater elements would no doubt have stayed in better shape, and the entire kiuas might even have lived longer that way. At the same time, it must be said that positioning stones among the heater elements inside a kiuas that is 94 cm deep is hard. The inside edges and rim of the steel box were so sharp in the Harvia Figaro that it was a bit painful to squeeze your hand (and the stones) deep inside the kiuas. And this kiuas took a maximum of 90 kg of stones. This sounded great when we got it (in theory at least, a massive kiuas gives more balanced “löyly” – the experience derived from such elements as the right temperature, the release of steam, and the atmosphere), but in the end this design was one of the reasons we did not maintain the kiuas in the manner it should have been done (we did of course change the stones, but probably not as often as we should have).
The new kiuas, the Tulikivi Sumu, is also rather tall, but the external dimensions hide the fact that Tulikivi relies on a dual-casing construction: there are insulating cavities inside the kiuas, and this model takes only 60 kg of stones (rather than the 90 kg of the Harvia Figaro). Together with the smaller internal dimensions, this is clearly an easier kiuas to handle and maintain.
We also (of course) always use a professional electrician to install a kiuas. This time, the newly installed junction box was also a sturdier and hopefully electrically safer and more durable model.
The dual-shell casing of the Tulikivi means that it is also safer – something the company advertises, alongside their expertise in traditional soapstone (vuolukivi) stoves; they are based in Nunnalahti, Juuka, North Karelia. The outer surfaces of this stove get warm, but not so hot that you would get burns if you touch it while it is heating. Btw, this also means that the safe distances to the wooden benches (lauteet) or walls can be very small. One could even integrate this kiuas into the lauteet, with the top of the kiuas and its hot stones sticking out among the people sitting on the benches. We are not going for that option, though.
One thing that I really tried to do carefully this time was the positioning of the kiuaskivet – the stones of the stove. I have become increasingly aware that you should not just randomly throw stones into an electric stove and hope that the kiuas will give good löyly – or that it will even be safe.
The instruction manual of the Tulikivi even explicitly says that the warranty will be void if the stones are positioned wrongly, and that stones that are packed too tightly or too loosely can even cause a fire.
The basic idea is that there should always be enough air cavities inside a kiuas, but also that the electric heater elements should not be left bare at any point. There should be a sort of internal architecture to the kiuaskivet: one needs to find large stones that fit in and work as supports in the larger spaces, and flat stones that act as internal “support beams”, taking the weight and supporting the stones that come on the next layer. The weight of the stones should not rest on the heater elements, as they will otherwise become twisted and deformed under the pressure. There should also be air channels, like internal chimneys, that allow hot air to move upwards and transfer the heat from the heater elements into the kiuaskivet (stones) and into the air of the sauna room.
We use olivine diabase as our sauna stones – this is what Tulikivi also recommends. It is a rather durable and heavy rock, meaning that it will not break quickly under the strains of temperature changes, and since it is a heavy stone, it also stores and releases heat well. We also use a small number of rounded olivine diabase stones at the top of the kiuas. This is mostly for decorative purposes, even if some experts claim that rounded stones also spread löylyvesi (the water you throw onto the kiuas in a Finnish sauna to get löyly) better, as water flows smoothly off rounded stones, ends up deeper inside the kiuas, and thereby produces smoother löyly.
It should be said that the selection of löylykivet, their positioning, and all such details of “sauna know-how” are subjects of endless, passionate debates among Finns. You can go, e.g., to this good site (use Google Translate, if needed) and read more: https://saunologia.fi/kiuaskivet/ .
Now, we’ll just need to take care of those lauteet, too. – Meanwhile: Hyviä löylyjä!
As holidays are traditionally a time to be lazy and just rest, I have not undertaken any major photography projects either. One thing that I have been wondering about, though, is the distinction between “soft” and “sharp” photos. There are actually many things intermingling here. In the old times, the lenses I used were not capable of delivering optically sharp images, and due to long exposure times and insensitive film (later: sensors), the images were also often blurry: I had not got the subject in focus and/or there was blur caused by movement (of the target and/or the camera shaking). Sometimes the blurry outcomes were visually or artistically interesting, but this was mostly due to pure luck, rather than any skill or planning.
Later, it became feasible to get images that were technically controlled and good-looking according to the standard measurements of image quality. Smartphone photos in particular have changed the situation in major ways. It should be noted that the small sensors and small lenses in early mobile phone cameras did not even need any sort of focusing mechanism – they were called ‘hyperfocal lenses’, meaning that everything from a very close distance to infinity would always be “in focus” (at least theoretically). As long as you had enough light and not too much movement in the image, you would get “sharp” photos.
However, sharpness in this sense is not always what a photographer wants. Yes, you might want your main subject to be sharp (to have a lot of detail, and be in perfect focus), but if everything in the image background shows such detail and focus as well, that might be distracting and aesthetically displeasing.
Thus, the expensive professional cameras and lenses (full frame bodies, and “fast”, wide-aperture lenses) are actually particularly good at producing “soft” rather than “sharp” images. Or, to put it slightly better, they provide the photographer with a larger creative space: those systems can be used to produce both sharp and soft-looking effects, and the photographer has better control over where each appears in the image. Smartphone manufacturers have also added algorithmic techniques that are used to make the uniformly sharp mobile photos softer, or blurry, in selected areas (typically, e.g., in the background areas of portrait photos).
Sharpness in photos is a question both of information and of how it is visually expressed. For example, a camera with a very low resolution sensor cannot be used to produce large, sharp images, as there is not enough information to start with. A small-size version of the same photo might look acceptably sharp, though. On the other hand, a camera with a massively high-resolution sensor does not automatically produce sharp-looking images. There are multiple other factors in play, and visual acuity and contrast are perhaps the most crucial ones. The ray of light that comes through the lens and falls on the sensor produces what is called a “circle of confusion”, and a single point of the subject should ideally be focused on so small a spot on the sensor that it looks like a nice, sharp point also in the finished image (note that this also depends on visual acuity – the eyes of the person looking at it – meaning that discussions of “sharpness” are, in certain ways, always subjective). Good quality optics have little of the diffraction effects that would optically produce visual blur in the photo.
Similarly, both sharp and soft images may be affected by “visual noise”, which is generally created in the image sensor. In the film days, the “grain” of photography came from the actual small grains of the photosensitive particles used to capture the light and dark areas of the image. There were “low ISO” (less light-sensitive) film materials that had very fine-grained particles, and “high ISO” (highly light-sensitive) films that had larger, coarser particles. Thus, it was possible to take photos in low-light conditions (or, e.g., with fast shutter speeds) with the sensitive film, but the downside was that there was more grain (i.e. less sharp detail, and more visual noise) in the final developed and enlarged photographs. The same physical principles apply today in the case of photosensitive, semiconductive camera sensors: when the amplification of the light signal is boosted, the ISO values go up, faster shots or images in darker conditions can be captured, but there will be more visual noise in the finished photos. Thus, a perfectly sharp, noise-free image cannot always be achieved.
But just as many photographers seek the soft “bokeh” effect for the backgrounds (or foregrounds) of their carefully composed photos, some photographers do not shy away from the grainy effects of visual noise, or high ISO values. Similar to the control of sharpness and softness in focus, the use of grain is also a question of control and planning: if everything one can produce has noise and grain, there is no real creative choice. Understanding the limitations of photographic equipment (with a lot of training and experimentation) will eventually allow one to utilise even visual “imperfections” to achieve desired atmospheres and artistic effects.
I have long been thinking about a longer, telephoto-range zoom lens, as this is perhaps the main technical bottleneck in my topic selection currently. After finding a nice offer, I made the jump and invested in a Sigma 150-600mm/5.0-6.3 DG OS HSM Contemporary lens for Canon. It is not a true “professional”-level wildlife lens (those are in the 10,000+ euro/dollar price range at this focal length). But it has got some nice reviews for its image quality and portability. Though, by my standards, it is a pretty heavy piece of glass (1,930 g).
The 150-600 mm focal range is in itself highly useful, but when you put this on a “crop sensor” body, as I do (Canon has a 1.6x crop multiplier), the effective focal range becomes 240-960 mm, which is even further into the long end of telephoto lenses. The question is whether there is still enough light left at the sensor to allow the autofocus to work reliably, and to let me shoot at apertures that allow using pretty noise-free ISO sensitivity settings.
I have only made one photo walk with my new setup so far, but my feelings are clearly on the positive side at this point. I could get decent images with my old 550D DSLR body and this lens, even on a dark, cloudy winter’s day. The situation improved yet slightly when I attached the Sigma to a Viltrox EF-M Speed Booster adapter and the EOS M50 body. In this setup I lost the crop multiplier (speed boosters effectively operate as inverted teleconverters), but gained a 1.4x multiplier in aperture. On a dark day, more light was more important than getting that extra crop multiplier. There is nevertheless clear vignetting when the Sigma 150-600 mm is used with the Viltrox speed booster. As I was typically cropping this kind of telephoto image in Lightroom in any case, that was not an issue for me.
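The arithmetic behind those numbers is simple enough to sketch. A small illustration in Python, assuming the common 0.71x reduction ratio for this style of speed booster (the roughly 1.4x aperture gain is then just 1/0.71):

```python
# Rough field-of-view and aperture arithmetic for a crop body, with and
# without a focal reducer ("speed booster"). The 0.71x ratio is an
# assumption based on typical speed booster designs.
CANON_CROP = 1.6

def effective(focal_mm, f_number, booster=1.0, crop=CANON_CROP):
    """Return (full-frame-equivalent focal length, effective f-number)."""
    return focal_mm * booster * crop, f_number * booster

# Sigma 150-600 mm f/5.0-6.3 at the long end, on a bare APS-C body:
print(effective(600, 6.3))                # (960 mm equivalent, f/6.3)

# The same lens through a 0.71x speed booster on the same body:
print(effective(600, 6.3, booster=0.71))  # (~682 mm equivalent, ~f/4.5)
```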
The ergonomics of using the tiny M50 with a heavy lens are not that good, of course, but I am using a lens this heavy with a monopod or tripod (attached to the tripod collar/handle) in any case. The small body can just comfortably “hang about” while one concentrates on handling the big lens and the monopod/tripod.
In daylight, the autofocus operation was good with both the 550D and M50 bodies. Neither is a really solid wildlife camera, though, so the slow speed of setting up the scene and focusing on a moving subject is somewhat of a challenge. I probably still need to study the camera behaviour and optimal settings a bit more, and also actually start learning the art of “wildlife photography”, if I intend to use this lens to its full potential.
Note-taking and writing are interesting activities. For example, it is interesting to follow how some people turn physical notepads into veritable art projects: sketchbooks and colourful pages filled with intermixing text, doodles, mind maps and larger illustrations. Usually these artistic people like to work with real pens (or even paintbrushes) on real paper pads.
Then there was a time when Microsoft Office arrived on personal computers, and typing with a clanky keyboard into an MS Word window started to dominate intellectually productive work. (I am old enough to remember the DOS times with WordPerfect, and my first Finnish-language word processor – “Sanatar” – that I long used on my Commodore 64 – which, btw, actually had a rather nice keyboard for typing text.)
It is also interesting to note how some people still nostalgically look back to, e.g., Word 6.0 (1993) or Word 2007, which was still a pretty straightforward tool in its focus, while introducing such modern elements as the adaptive “Ribbon” toolbar (which many people hated).
The versatility and power of Word as a multi-purpose tool has been both its strength and its main weakness. There are hundreds of operations one can carry out with MS Word, including programmable macros, printing out massive numbers of form letters or envelopes with addresses drawn from a separate data file (“Mail Merge”), and even editing and typesetting entire books (which I have personally done, even though I do not recommend it to anyone – Word was not originally designed as a desktop publishing program, even if its WYSIWYG print layout mode can be stretched in that direction).
These days, the free, open-source LibreOffice is perhaps the closest one can get to the look, interface and feature set of the “classic” Microsoft Word. It is a 2010 fork of OpenOffice.org, the earlier open-source office suite.
Generally speaking, there appear to be at least three main directions that individual text editing programs focus on. One is writing as note-taking. This is situational and generally short-form. Notes are practical, information-filled prose pieces that are often intended to be used as part of some job or project. Meeting notes, notes that summarise books one has read, or data one has gathered (notes on index cards) are some examples.
The second main type of text program focuses on writing as content production. This is what an author working on a novel does. Screenwriters, journalists, podcast producers and many other so-called ‘creatives’ also need dedicated writing software in this sense.
The third category I already briefly mentioned: text editing as publication production. One can easily use any version of MS Word to produce a classic-style software manual, for example. It can handle multiple chapters; it has tools such as section breaks that allow pagination to restart or re-format in different sections of longer documents; and it also features tools for adding footnotes and endnotes, and for creating an index for the final, book-length publication. But while it provides a WYSIWYG-style print layout of pages, it does not offer the really robust page layout features that professional desktop publishing tools focus on. The fine art of tweaking font kerning (the spacing of proportional fonts), the very exact positioning of graphic elements on publication pages – all that is best left to tools such as PageMaker, QuarkXPress and InDesign (or LaTeX, if that is your cup of tea).
As these three practical fields are rather different, it is obvious that a tool that excels in one is probably not optimal for another. One would not want to use heavy-duty professional publication software (e.g. InDesign) to quickly draft meeting notes, for example. The weight and complexity of the tool hinders, rather than augments, the task.
MS Word (originally published in 1983) achieved its dominant position in word processing in the early 1990s. During the 1980s there were tens of different, competing word processing tools (eagerly competing to replace the earlier mechanical and electric typewriters), but Microsoft was early to enter the graphical interface era, first publishing Word for Apple Macintosh computers (1985), then for Microsoft Windows (1989). The popularity and even de facto “industry standard” position of Word – as part of the MS Office suite – is due to several factors, but for many kinds of offices, professions and purposes, the versatility of MS Word was a good match. As the .doc file format and the feature set and interface of Office and Word became the standard, it was logical for people to use them at home as well. The pricing might have been an issue, though (I read somewhere that a single-user licence of “MS Office 2000 Premium” at one point had an asking price of $800).
There have been counter-reactions and multiple alternatives offered to the dominance of MS Word. I already mentioned OpenOffice and LibreOffice as important, leaner, free and open alternatives to the commercial behemoth. An interesting development is related to the rise of the Apple iPad as a popular mobile writing environment. Somewhat similarly to how Mac and Windows PCs heralded the transformation from the earlier command-line era, the iPad shows signs of the (admittedly still somewhat more limited) transformative potential of the “post-PC” era. At its best, the iPad is a highly compact and intuitive multipurpose tool, optimised for touch screens and simplified mobile software applications – the “apps”.
There are writing tools designed for the iPad that some people argue are better than MS Word for people who want to focus on writing in the second sense – as content production. The main argument here is that “less is more”: as these writing apps are designed just for writing, there is no danger of losing time fiddling with font settings or page layouts, for example. The iPad is also arguably a better “distraction-free” writing environment, as the mobile device is designed for a single app filling the small screen entirely – while Mac and Windows, on the other hand, boast stronger multitasking capabilities, which can lead to cluttered desktops filled with multiple browser windows, other programs and other distracting elements.
Some examples of this style of dedicated writers’ tools include Scrivener (by a company called Literature and Latte, originally published for the Mac in 2007), which is optimised for handling long manuscripts and the related writing processes. It has a drafting and note-handling area (with its “corkboard” metaphor), an outliner and an editor, making it also a sort of project-management tool for writers.
Another popular writing and “text project management” focused app is Ulysses (by a small German company of the same name). The initiative and main emphasis in the development of these kinds of “tools for creatives” has clearly been on the side of the Apple, rather than the Microsoft (or Google, or Linux), ecosystem. A typical writing app of this kind syncs automatically via iCloud, making the same text seamlessly available on the iPad, iPhone and Mac of the same (Apple) user.
In emphasising “distraction-free writing”, many tools of this kind feature clean, empty interfaces where only the text currently being created is allowed to appear. Some have specific “focus modes” that highlight the current paragraph or sentence, and dim everything else. Popular apps of this kind include iA Writer and Bear. While there are even simpler tools for writing – Windows Notepad and Apple Notes most notably (sic) – these newer writing apps typically include essential text formatting with Markdown, a simple markup system where, for example, a word surrounded with single *asterisks* is rendered in italics, and one surrounded with double **asterisks** in bold.
The big question, of course, is whether such (sometimes rather expensive and/or subscription-based) writing apps are really necessary. It is perfectly possible to create a distraction-free writing environment on a common Windows PC: one just closes all the other windows. And if the multiple menus of MS Word distract, it is possible to hide them while writing. Admittedly, the temptation to stray into exploring other areas and functions is still there, but then again, even an iPad contains multiple apps and can be used in a multitasking manner (even if not as easily as in a desktop PC environment, like a Mac or Windows computer). There are also ergonomic issues: a full desktop computer allows a large, standalone screen to be adjusted to a height and angle that is much better (and healthier) for longer writing sessions than the small screen of an iPad (or even a 13”/15” laptop computer), particularly if one tries to balance the mobile device while lying on a sofa, or squeezes it onto a tiny cafeteria table corner while writing. Desktop keyboards also typically have better tactile and ergonomic characteristics than the virtual, on-screen keyboards, or the add-on external keyboards used with iPad-style devices. Though, with some searching and experimentation, one should be able to find some rather decent solutions that work in mobile contexts as well (this text is written using a Logitech “Slim Combo” keyboard cover, attached to a 10.5” iPad Pro).
For note-taking workflows, neither a word processor nor a distraction-free writing app is optimal. The leading solutions designed for this purpose include Microsoft’s OneNote and Evernote. Both are available for multiple platforms and ecosystems, and both allow text as well as rich media content, browser capture, categorisation, tagging and powerful search functions.
I have used – and am still using – all of the above-mentioned alternatives at various times and for various purposes. As years, decades and device generations have passed, archiving and access have become increasingly important criteria. I have thousands of notes in OneNote and Evernote, and hundreds of text snippets in iA Writer and all kinds of other writing tools, often synchronised to iCloud, Dropbox, OneDrive or some other such service. Most importantly, in our Gamelab, most of our collaborative research article writing happens in Google Docs/Drive, which is still the clearest, simplest and most efficient tool for such real-time collaboration. The downside of this happily polyphonic reality is that when I need to find something specific in this jungle of text and data, it is often a difficult task involving searches across multiple tools, devices and online services.
In the end, what I mostly use today is a combination of MS Word, Notepad (or, these days, Sublime Text 3) and Dropbox. I have 300,000+ files in my Dropbox archives, and the cross-platform synchronisation, version-controlled backups and two-factor-authenticated security features are something I have grown to rely on. When I organise my projects into file folders that propagate through the Dropbox system, and use either plain text or MS Word (rich text), plus standard image file types (though often also PDFs) in these folders, it is pretty easy to find my text and data, and to continue working on it where and when needed. Text editing works equally well on a personal computer, an iPad and even a smartphone. (The free, browser-based MS Word for the web, and the solid mobile app versions of MS Word, help too.) Sharing and collaboration require some thought in each individual case, though.
In my workflow, blog writing is perhaps the main exception to the above. These days, I like writing directly into the WordPress app or into their online editor. The experience is pretty close to the “distraction-free” style of writing tools, and as WordPress saves drafts to their online servers, I need not worry about a local app crash or device failure. But when I write with MS Word, the same is true: it either auto-saves in real time to OneDrive (via the O365 we use at work), or my local PC projects get synced to the Dropbox cloud as soon as I press Ctrl-S. And I keep pressing that key combination every five seconds or so – a habit that comes instinctively, after decades of work with earlier versions of MS Word for Windows, which could crash and take all of your hard-worked text with it at any minute.