I have followed roughly a five-year PC upgrade cycle – making smaller, incremental parts upgrades in between, and building a completely new computer every four to five years. My previous two all-new systems were built during a Xmas break, in December 2011 and 2015. This time, I was looking for something to put my mind into right now (2020 has been a tough year), so I specced my five-year build already at Midsummer.
It somehow feels that every year is a bad year to invest in computer systems: there is always something much better just around the corner. This time, both a new processor generation and a major new graphics card generation are expected later in 2020. But after a couple of weeks of comparative research, in the end I did not really care. A system built with 2020-level technology should be much more capable than the 2015 one in any case. Hopefully the daily system slowdowns and bottlenecks will now ease.
Originally, I thought that this year would be the year of AMD: both the AMD Zen 2 architecture based Ryzen 3000 series CPUs and the Radeon RX 5000 GPUs appeared very promising in terms of value for money. In the end, it looks like this might instead be my last Intel-Nvidia system (?). My main question marks related to single-core performance on the CPU side, and to driver reliability in the Radeon 5000 GPUs. The more I read, and the more I discussed with people who had experience with the Radeon 5000 GPUs, the more stories I heard about blue screens and crashing systems. The speed and price of the AMD hardware itself seemed excellent. On the CPU side, on the other hand, I evaluated my own main use cases and concluded that the slightly better single-core performance of Intel 10th generation processors would mean a bit more to me than the solid multi-core, multi-thread performance of similarly priced, modern Ryzen processors.
After a couple of weeks of study into mid-priced, medium-powered components, here are the core elements chosen for my new Midsummer 2020 system:
Intel Core i5-10600K, LGA1200, 4.10 GHz, 12MB, Boxed (there is some overclocking potential in this CPU, too)
ARCTIC Freezer 34 eSports DUO – Red, processor cooler (I studied both various watercooling solutions and the high-powered Noctua air coolers before settling on this one; the watercooling systems did not appear quite as durable in the long run, and the premium NH-D15 was a bit too large to fit comfortably into the case; this appeared to be a good compromise)
MSI MAG Z490 TOMAHAWK, ATX motherboard (this motherboard appears to strike a nice balance between price and solid construction, feature set, and the investments put into the voltage regulator modules (VRMs) and other key electronic circuit components)
Corsair 32GB (2 x 16GB) Vengeance LPX, DDR4 3200MHz, CL16, 1.35V memory modules (this amount of memory is not needed for gaming, I think, but for all my other multitasking and multi-threaded everyday uses)
MSI GeForce RTX 2060 Super ARMOR OC GPU, 8GB GDDR6 (this is entry level ray-tracing technology – that should be capable enough for my use, for a couple of years at least)
Samsung 1TB 970 EVO Plus SSD M.2 2280, PCIe 3.0 x4, NVMe, 3500/3300 MB/s (this is the system disk; there will be another SSD and a large HDD, plus a several-terabyte backup solution)
Corsair 750W RM750x (2018), modular power supply, 80 Plus Gold (there should be enough reliable power available in this PSU)
Cooler Master MasterBox TD500 Mesh w/ controller, ATX, Black (chosen on the basis of available test results – my priorities here were easy installation, efficient airflow, and, thirdly, silent operation)
As a final note, it was interesting to see that during the intervening 2015-2020 period there was a time when RGB lights became the de facto standard in PC parts: everything was radiating and pulsating in multiple LED colours like a Xmas tree. It is ok to think about design, and even to aim towards some kind of futurism in this context. But some things are just plain ridiculous, and I am happy to see a bit more minimalism winning ground in PC enthusiast level components, too.
The global coronavirus epidemic has already caused deaths, suffering and cancellations (e.g. those of our DiGRA 2020 conference, and the Immersive Experiences seminar), but luckily many things still go on in our daily academic lives, albeit often in a somewhat reorganised manner. In schools and universities, the move to remote education in particular is producing wide-ranging changes, and it is currently being implemented hastily in innumerable courses that were not originally planned to be run in this manner at all.
I am not in a position to provide pedagogic advice, as the responsible teachers will know much better what the actual learning goals of their courses are, and they are therefore also best placed to think about how to reach those goals with alternative means. But since I have been directing, implementing or participating in remote education for over 20 years already (time flies!), here are at least some practical tips I can share.
Often the main part of remote education consists of independent, individual learning processes, which just need to be supported somehow. Finding information online, gathering data, reading, analysing, thinking and writing do not fundamentally change, even if in-class meetings are replaced by reporting and commenting taking place online (though the workload for teachers can easily skyrocket, which is something to be aware of). This is particularly true in asynchronous remote education, where everyone does their tasks at their own pace. More challenges emerge when teamwork, group communication, more advanced collaboration, or special software or tools are required. There are ways to get around most of those issues, too. But it is certainly true that not all education can be converted into remote education, at least not with identical learning goals.
In my experience, there are three main types of problems in real-time group meetings or audio/videoconferences: 1) connection problems, 2) audio and video problems, and 3) conversation rules problems. Let’s deal with them one by one.
1) Connection problems are due to a bad or unreliable internet connection. My main advice is either to make sure that one can use a wired rather than a Wi-Fi/cellular connection when joining a real-time online meeting, or to get very close to the Wi-Fi router in order to get as strong a signal as possible. If one has a weak connection, everyone’s experience will suffer, as there will likely be garbled noises and video artefacts coming from you, rather than good-quality streams.
2) The audio and video problems relate to echo, weak sound levels, background noise, or dark, badly positioned or unclear video. If several people are taking part in a joint meeting, it might be worth considering carefully whether a video stream is actually needed. In most cases people are working intensely with their laptops or mobile devices during the online meeting, reviewing documents and making notes, and since there are challenges in getting real eye-to-eye contact with other people (that is still pretty much impossible with current consumer technology), there are multiple distancing factors that will lower the feeling of social presence in any case. A good-quality audio link might be enough for a decent meeting. For that, I really recommend using a headset (headphones with a built-in microphone) rather than just the built-in microphone and speakers of a laptop, for example. There will be much less echo, and since the microphone sits close to the speaker’s mouth, speech is picked up much more clearly and loudly, and surrounding noise is easier to control. It is also highly advisable to move into a silent room for the duration of the teleconference.
Another tip: I suggest always connecting the headset (or external microphone and speakers) first, BEFORE starting the software tool used for teleconferencing. This way, you can make sure that the correct audio devices (both output and input) are set as the active or default ones before you start the remote meeting tool. It is pretty easy to get this messed up and end up with low-quality audio coming from the wrong microphone or speakers rather than the intended ones. Note that there are indeed two layers here: in most cases there are separate audio device settings both in the operating system (see Start/Settings/System/Sound in Windows 10) and inside most remote meeting tools, hidden e.g. under a “Preferences” item. Both of those need to be checked – prior to the meeting.
Thus, one more tip: please always dedicate e.g. 10-15 minutes of technical preparation time before a remote education or remote meeting session for double-checking and solving connection, audio, video or other technical problems. It is a sad (and irresponsible) use of everyone’s precious time if every session starts with half of the speakers unable to speak, or unable to hear anyone else. Yet this kind of scenario is unfortunately still pretty typical. Remote meeting technology is notoriously unreliable, and when multiple people, multiple devices and multiple connections are involved, the likelihood of problems multiplies.
Please be considerate and patient towards other people. No-one wants to be the person having tech problems.
3) Problems related to discussion rules are the final category, and one that might also be culturally dependent. In a typical remote meeting among several Finnish people, for example, it might be that everyone just keeps quiet most of the time. That is normal, polite behaviour in the Finnish cultural context for face-to-face meetings – but something that is very difficult to decode in an online audio teleconference, where you are missing all the subtle gestures, confused looks, smiles and other nonverbal cues. In some other cultural settings, the difficulty might be people speaking on top of each other. Or the issues might relate to people asking questions without making it clear who is actually being addressed by the question or comment.
It is usually a good policy to have a chair or moderator appointed for an online meeting, particularly if it is larger than only a couple of people. The speakers can use the chat or other tools in the software to signal when they want to speak. The chairperson makes it clear which item is currently being discussed and gives the floor to each participant in turn. Everyone tries to be concise, and also remembers to use names when presenting questions or comments to others. It is also good practice to start each meeting with a round of introductions, so that everyone can connect the sound of a voice with a particular person. Repeating one’s name later in the meeting when speaking up does not hurt, either.
In most of our online meetings today we use a collaboratively edited online document for notetaking during the meeting. This helps everyone follow what has been said or decided upon. People can fix typos in the notes in real time, or add links and other materials, without requiring the meeting chairperson (or secretary) to do so for them. There are many such online notetaking tools in standard office suites. Google Docs works fine, for example, and probably has the easiest way of generating an editing-allowed link, which can then be shared in the meeting chat window with all participants without requiring them to have a separate service account and login. Microsoft has its own integrated tools, which work best when everyone is from the same organisation.
Finally, online collaboration has come a long way from the first experiments in the 1960s (google “NLS” and Douglas Engelbart), but it still has its challenges. If we are all aware of the most typical issues, and dedicate a few minutes before each session to setup and testing (and invest 15 euros/dollars in a reliable plug-and-play USB headset), we can remove many annoying elements and make the experience much better for everyone. Then it is much easier to start improving the actual content and pedagogic side of things. – Btw, if you have any additional tips or comments to share on this, please drop a note below.
I have long been in the Canon camp in terms of my DSLR equipment, and it was interesting to notice that they announced a new, improved full frame mirrorless camera body last week: the EOS R5 (link to official short announcement). While Canon fell behind competitors such as Sony in entering the mirrorless era, this time the camera giant appears to be serious. This new flagship is promised to feature in-body image stabilization that “will work in combination with the lens stabilization system” – a first in Canon cameras. Also, while the implementations of 4K video in Canon DSLRs have left professionals critical in the past, this camera is promised to feature 8K video. The leaks (featured on sites like Canon Rumors) have discussed further features such as a 45 MP full frame sensor and 12/20 fps continuous shooting, and Canon has also confirmed a new “Image.Canon” cloud platform, which will be used to stream photos live for further editing, while shooting.
But does one really need a system like that? Aren’t cameras already good enough, with comparable image quality available at a fraction of the cost? (The EOS R5 might be in the 3500-4000 euro range, body only.)
In some sense such critique might be valid. I have not named the equipment used for shooting the photos featured in this blog post, for example – some are taken with my mirrorless system camera, some with a smartphone camera. For online publishing and hobbyist use, many contemporary camera systems are “good enough”, and can be flexibly utilized for different kinds of purposes. And the lens is today a far more important element than the camera body or the sensor.
That said, there are some areas where a professional full frame camera is indeed stronger than, for example, a consumer model with an APS-C (crop) sensor. It can capture more light onto its larger sensor and thus deliver a somewhat wider dynamic range and less noise under similar conditions. Thus, one might be able to use higher ISO values and still get noise-free, professional-looking, sharp images in lower light conditions.
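The light-gathering advantage is easy to put in rough numbers: at the same exposure settings, the amount of light collected scales with sensor area, and each doubling of area is about one photographic stop. A quick back-of-the-envelope sketch (using the nominal 36 x 24 mm full frame and Canon's 22.3 x 14.9 mm APS-C dimensions):

```python
import math

# Nominal sensor dimensions in mm (full frame vs. Canon APS-C)
full_frame = (36.0, 24.0)
aps_c = (22.3, 14.9)

def area(dim):
    # Sensor area in mm^2
    return dim[0] * dim[1]

# How much more light the larger sensor gathers at identical settings
ratio = area(full_frame) / area(aps_c)
# The same advantage expressed in photographic stops (doublings)
stops = math.log2(ratio)

print(f"area ratio: {ratio:.2f}x, about {stops:.1f} stops")
# area ratio: 2.60x, about 1.4 stops
```

This is only a geometric approximation, of course – real-world noise and dynamic range also depend on sensor generation and processing, as the DIGIC comparison later in this blog illustrates.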
On the other hand, the larger sensor optically means a narrower depth of field – something that a portrait photographer working in a studio might love, but which might actually be a limitation for a landscape photographer. I actually like using my smartphone for most everyday event photography and some landscape photos, too, as the small lens and sensor are good for such uses (if you understand their limitations, too). A modern mirrorless APS-C camera is a really flexible tool for many purposes, but ideally one has a selection of good quality lenses to suit the mount and the smaller format. For Canon, there is a striking difference between the R&D investments Canon has made in recent years into the full frame mirrorless RF mount lenses, as compared to the “consumer line” M mount lenses. This is based on business thinking, of course: casual photographers are moving more and more to smartphones, and there is a smaller market and much tighter competition left in the high-end, professional and serious enthusiast lenses and cameras, where Canon (and Nikon, and many others) are hoping to make their profits in the future.
Thus: more expensive, professional full frame optimised lenses, and only a few for APS-C systems? We’ll see, but it might indeed be that smaller budget hobbyists (like myself) will need to turn towards third-party developers to fill the gaps left by Canon.
One downside of the more compact, cheaper APS-C cameras (like Canon M mount systems) is that while they are much nicer to carry around, they do not have as good ergonomics and weather proofing as the more pro-grade full frame alternatives. This is aggravated in winter conditions. It is sometimes close to impossible to get your cold, gloved fingers to hit the right buttons and dials when they are as small as on my EOS M50. The cheaper camera bodies and lenses are also missing the silicone seals and gaskets that typically secure all connectors, couplings and buttons in a pro system. Thus, I get a bit nervous when outside with my budget-friendly system in weather like today’s. But after some time spent carefully wiping and cleaning, everything seems to continue working just fine.
The absolute number one lesson I have learned over these years of photography is that the main limitation to getting great photos is rarely the equipment. There are more and less optimal, or innovative, ways of using the same setup, and with careful study and experimentation it is possible to learn to work around technical limitations. A top-of-the-line, full frame professional camera and lens system might have a wider “opportunity space” for someone who has learned how to use it. But with their additional complexity and heavy, expensive elements, those systems also have their inevitable downsides. – Happy photography, everyone!
Note-taking and writing are interesting activities. For example, it is interesting to follow how some people turn physical notepads into veritable art projects: sketchbooks and colourful pages filled with intermixing text, doodles, mindmaps and larger illustrations. Usually these artistic people like to work with real pens (or even paintbrushes) on real paper pads.
Then came the time when Microsoft Office arrived on personal computers, and typing on a clacky keyboard into an MS Word window started to dominate intellectually productive work. (I am old enough to remember the DOS times with WordPerfect, and my first Finnish-language word processor – “Sanatar” – which I long used on my Commodore 64; which, btw, actually had a rather nice keyboard for typing text.)
It is also interesting to note how some people still look back nostalgically to e.g. Word 6.0 (1993), or to Word 2007, which was still a pretty straightforward tool in its focus while introducing such modern elements as the adaptive “Ribbon” toolbars (which many people hated).
The versatility and power of Word as a multi-purpose tool has been both its strength and its main weakness. There are hundreds of operations one can carry out with MS Word, including programmable macros, printing out massive amounts of form letters or envelopes with addresses drawn from a separate data file (“Mail Merge”), and even editing and typesetting entire books (which I have also personally done, even though I do not recommend it to anyone – Word was not originally designed as a desktop publishing program, even if its WYSIWYG print layout mode can be extended in that direction).
These days, the free, open-source LibreOffice is perhaps the closest one can get to the look, interface and feature set of the “classic” Microsoft Word. It is a 2010 fork of OpenOffice.org, the earlier open-source office software suite.
Generally speaking, there appear to be at least three main directions in which individual text editing programs focus. The first is writing as note-taking. This is situational and generally short form. Notes are practical, information-filled prose pieces that are often intended to be used as part of some job or project. Meeting notes, notes that summarise books one has read, or data one has gathered (notes on index cards) are some examples.
The second main type of text program focuses on writing as content production. This is something that an author working on a novel does. Screenwriters, journalists, podcast producers and many other so-called ‘creatives’ also have needs for dedicated writing software in this sense.
The third category I already briefly mentioned: text editing as publication production. One can easily use any version of MS Word to produce a classic-style software manual, for example. It can handle multiple chapters; it has tools such as section breaks that allow pagination to restart or reformat in different sections of longer documents; and it also features tools for adding footnotes and endnotes, and for creating an index for the final, book-length publication. But while it provides a WYSIWYG-style print layout of pages, it does not offer the really robust page layout features that professional desktop publishing tools focus on. The fine art of tweaking kerning (the spacing of proportional fonts), the very exact positioning of graphic elements on publication pages – all that is best left to tools such as PageMaker, QuarkXPress, InDesign (or LaTeX, if that is your cup of tea).
As these three practical fields are rather different, it is obvious that a tool that excels in one is probably not optimal for another. One would not want to use heavy-duty professional publication software (e.g. InDesign) to quickly draft meeting notes, for example. The weight and complexity of the tool hinders, rather than augments, the task.
MS Word (originally published in 1983) achieved its dominant position in word processing in the early 1990s. During the 1980s there were tens of different word processing tools (eagerly competing to replace the earlier mechanical and electric typewriters), but Microsoft was early to enter the graphical interface era, publishing Word first for Apple Macintosh computers (1985), then for Microsoft Windows (1989). The popularity and even de facto “industry standard” position of Word – as part of the MS Office suite – is due to several factors, but for many kinds of offices, professions and purposes, the versatility of MS Word was a good match. As the .doc file format and the feature set and interface of Office and Word became the standard, it was logical for people to use them at home, too. The pricing might have been an issue, though (I read somewhere that a single-user licence of “MS Office 2000 Premium” at one point had an asking price of $800).
There have been counter-reactions, and multiple alternatives have been offered to the dominance of MS Word. I already mentioned OpenOffice and LibreOffice as important, leaner, free and open alternatives to the commercial behemoth. An interesting development is related to the rise of the Apple iPad as a popular mobile writing environment. Somewhat as Mac and Windows PCs heralded the transformation from the earlier, command-line era, the iPad shows signs of the (admittedly still somewhat more limited) transformative potential of the “post-PC” era. At its best, the iPad is a highly compact and intuitive, multipurpose tool that is optimised for touch screens and simplified mobile software applications – the “apps”.
There are writing tools designed for the iPad that some people argue are better than MS Word for people who want to focus on writing in the second sense – as content production. The main argument here is that “less is better”: as these writing apps are designed just for writing, there is no danger of losing time by starting to fiddle with font settings or page layouts, for example. The iPad is also arguably a better “distraction free” writing environment, as the mobile device is designed for a single app filling the small screen entirely – while Mac and Windows, on the other hand, boast stronger multitasking capabilities, which can lead to cluttered desktops filled with multiple browser windows, other programs and other distracting elements.
Some examples of this style of dedicated writers’ tools include Scrivener (by a company called Literature and Latte, originally published for Mac in 2007), which is optimized for handling long manuscripts and the related writing processes. It has a drafting and note-handling area (with its “corkboard” metaphor), an outliner and an editor, making it also a sort of project management tool for writers.
Another popular writing and “text project management” focused app is Ulysses (by a small German company of the same name). The initiative and main emphasis in developing these kinds of “tools for creatives” has clearly been on the side of the Apple ecosystem, rather than the Microsoft (or Google, or Linux) ecosystems. A typical writing app of this kind automatically syncs via iCloud, making the same text seamlessly available on the iPad, iPhone and Mac of the same (Apple) user.
In emphasising “distraction free writing”, many tools of this kind feature clean, empty interfaces where only the text currently being created is allowed to appear. Some have specific “focus modes” that highlight the current paragraph or sentence and dim everything else. Popular apps of this kind include iA Writer and Bear. While there are even simpler tools for writing – Windows Notepad and Apple Notes most notably (sic) – these newer writing apps typically include essential text formatting with Markdown, a simple code system that allows e.g. applying italics by surrounding an expression with single *asterisk* marks, or bold with **double asterisks**.
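As an illustration of how little machinery Markdown’s emphasis rules need, here is a toy converter – just a sketch of the two rules mentioned above, not a full Markdown implementation:

```python
import re

def tiny_markdown(text):
    """Convert **bold** and *italic* Markdown emphasis to HTML.

    Bold is handled first, so the double asterisks are not
    consumed by the single-asterisk (italic) rule.
    """
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"\*(.+?)\*", r"<em>\1</em>", text)
    return text

print(tiny_markdown("a **bold** claim, *gently* made"))
# a <strong>bold</strong> claim, <em>gently</em> made
```

Real Markdown processors handle many more rules and edge cases, of course, but the point stands: the markup is simple enough that the writer can stay focused on the text itself.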
The big question, of course, is whether such (sometimes rather expensive and/or subscription-based) writing apps are really necessary. It is perfectly possible to create a distraction-free writing environment on a common Windows PC: one just closes all the other windows. And if the multiple menus of MS Word distract, it is possible to hide them while writing. Admittedly, the temptation to stray into exploring other areas and functions is still there, but then again, even an iPad contains multiple apps and can be used in a multitasking manner (even if not as easily as in a desktop PC environment, like a Mac or Windows computer). There are also ergonomic issues: a full desktop computer allows the large, standalone screen to be adjusted to a height and angle that is much better (and healthier) for longer writing sessions than the small screen of an iPad (or even a 13”/15” laptop), particularly if one tries to balance the mobile device while lying on a sofa, or to squeeze it into a tiny cafeteria table corner while writing. The keyboards of desktop computers typically also have better tactile and ergonomic characteristics than the virtual, on-screen keyboards, or the add-on external keyboards used with iPad-style devices. Though, with some searching and experimentation, one should be able to find some rather decent solutions that also work in mobile contexts (this text is written using a Logitech “Slim Combo” keyboard cover, attached to a 10.5” iPad Pro).
For note-taking workflows, neither a word processor nor a distraction-free writing app is optimal. The leading solutions designed for this purpose include Microsoft’s OneNote and Evernote. Both are available for multiple platforms and ecosystems, and both allow text as well as rich media content, browser capture, categorisation, tagging and powerful search functions.
I have used – and am still using – all of the above-mentioned alternatives at various times and for various purposes. As years, decades and device generations have passed, archiving and access have become increasingly important criteria. I have thousands of notes in OneNote and Evernote, and hundreds of text snippets in iA Writer and all kinds of other writing tools, often synchronized into iCloud, Dropbox, OneDrive or some other such service. Most importantly, in our Gamelab, most of our collaborative research article writing happens in Google Docs/Drive, which is still the clearest, simplest and most efficient tool for such real-time collaboration. The downside of this happily polyphonic reality is that when I need to find something specific in this jungle of text and data, it is often a difficult task involving searches across multiple tools, devices and online services.
In the end, what I mostly use today is a combination of MS Word, Notepad (or, these days, Sublime Text 3) and Dropbox. I have 300,000+ files in my Dropbox archives, and the cross-platform synchronization, version-controlled backups and two-factor-authenticated security are features I have grown to rely on. When I organise my projects into file folders that propagate through the Dropbox system, and use either plain text or MS Word (rich text), plus standard image file types (though often also PDFs) in these folders, it is pretty easy to find my text and data, and to continue working on it where and when needed. Text editing works equally well on a personal computer, an iPad and even a smartphone. (The free, browser-based MS Word for the web, and the solid mobile app versions of MS Word, help too.) Sharing and collaboration require some thought in each individual case, though.
In my workflow, blog writing is perhaps the main exception to the above. These days, I like writing directly in the WordPress app or in their online editor. The experience is pretty close to the “distraction-free” style of writing tools, and as WordPress saves drafts onto its online servers, I need not worry about a local app crash or device failure. But when I write with MS Word, the same is true: it either auto-saves in real time into OneDrive (via the O365 we use at work), or my local PC projects get synced into the Dropbox cloud as soon as I press ctrl-s. And I keep pressing that key combination every five seconds or so – a habit that comes instinctively after decades of work with earlier versions of MS Word for Windows, which could crash and take all of your hard-worked text with it at any minute.
While learning to take better photos within the opportunities and limitations provided by whatever camera technology offers, it is also interesting now and then to stop and reflect on how things are evolving.
This weekend, I took some time to study the rainy tones of autumn, and also to hunt for the “perfect blues” of the Blue Hour – the period some time before sunrise and after sunset, when the indirect sunlight coming from the sky is dominated by short, blue wavelengths.
After a few attempts I think I got to the right spot at the right time (see the above photo, taken tonight at the beach of Hervantajärvi lake). At the time of this photo it was already so dark that I actually had trouble finding my gear and changing lenses.
I made a simple experiment: taking an evening, low-light photo with the same lens (Canon EF 50 mm f/1.8 STM) on two of my camera bodies – both the old Canon EOS 550D (DSLR) and the new EOS M50 (mirrorless). I tried to use the exact same settings for both photos, taking them only moments apart from the same spot, using a tripod. Below are two cropped details that I tried to frame to the same area of the photos.
I am not an expert in signal processing or camera electronics, but it is interesting to see how much more detail there is in the lower, M50 version. I thought that the main differences might be in how much noise there is in the low-light photo, but the differences appear to go deeper.
The cameras are generations apart from each other: the processor of the 550D is the DIGIC 4, while the M50 has the newer DIGIC 8. That surely has an effect, but I think that the sensor might play an even larger role in this experiment. There is some information available about the sensors of both cameras – see the links below:
While the physical sizes of the sensors are exactly the same (22.3 x 14.9 mm), the pixel counts are different (18 megapixels vs. 24.1 megapixels). The pixel density also differs: 5.43 MP/cm² vs. 7.27 MP/cm², which just confirms that these two cameras, launched almost a decade apart, have very different imaging technology under the hood.
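These density figures can be double-checked from the quoted numbers alone; the small differences from the quoted 5.43 and 7.27 MP/cm² are just rounding in the published megapixel counts:

```python
# Both sensors measure 22.3 mm x 14.9 mm; convert to cm for MP/cm^2
sensor_area_cm2 = (22.3 / 10) * (14.9 / 10)   # about 3.32 cm^2

density_550d = 18.0 / sensor_area_cm2         # EOS 550D: ~5.42 MP/cm^2
density_m50 = 24.1 / sensor_area_cm2          # EOS M50:  ~7.25 MP/cm^2

print(f"550D: {density_550d:.2f} MP/cm2, M50: {density_m50:.2f} MP/cm2")
# 550D: 5.42 MP/cm2, M50: 7.25 MP/cm2
```

Higher density means smaller individual photosites, which is one reason newer sensor generations need better readout electronics to keep noise under control.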
I like using both of them, but it is important to understand their strengths and limitations. I like using the old DSLR in daylight and particularly when trying to photograph birds or other fast moving targets. The large grip and good-sized physical controls make a DSLR like EOS 550D very easy and comfortable to handle.
On the other hand, when really sharp images are needed, I now rely on the mirrorless M50. Since it is a mirrorless camera, it is easy to see the final outcome of the applied settings directly in the electronic viewfinder. The M50 also has an articulated, rotating LCD screen, which is a really excellent feature when I need to reach very low, or very high, to get a nice shot. On the other hand, the buttons and the grip are physically just a bit too small to be comfortable. I never seem to hit the right switch when trying to react in a hurry, and have missed some nice opportunities because of it. But when it is a still-life composition, I have plenty of time to consult the tiny controls of the M50.
To conclude: things are changing, and good (and bad) photos can be taken with all kinds of technology. And there is no one perfect camera, just different cameras that are best suited for slightly different uses and purposes.
I made a significant upgrade to my main gaming and home workstation at Christmas 2015. That setup is thus soon four years old, and there are certainly some areas where the age is starting to show. The new generations of processors, system memory chips and particularly graphics adapters are all significantly faster and more capable these days. For example, my GeForce GTX 970 card is now two generations behind the current state-of-the-art graphics adapters; NVIDIA’s current RTX cards are based on the new “Turing” architecture that is e.g. capable of much more advanced ray tracing calculations than the previous generations of consumer graphics cards. What this means in practice is that rather than just applying pre-generated textures to different objects and parts of the simulated scenery, ray tracing graphics attempts to simulate how actual rays of light would bounce and create shadows and reflections in the virtual scene. Doing these kinds of calculations in real time for millions of light rays in an action-filled game scene is extremely computationally intensive, and the new cards are packed with billions of transistors, in multiple specialised processor cores. You can have a closer look at this technology, with some video samples, e.g. here: https://www.digitaltrends.com/computing/what-is-ray-tracing/ .
I will probably update my graphics card, but only a little later. I am not a great fan of 3D action games to start with, and my home computing bottlenecks are increasingly in other areas. I have been actively pursuing my photography hobby, and with the new mirrorless camera (EOS M50) I am moving to the full potential of RAW file formats and Adobe Lightroom post-processing. With photo collection sizes growing into multiple hundreds of thousands, and the file size of each RAW photo (and its various-resolution previews) growing larger, it is the disk, the memory and the speed of reading and writing all that information that matter most now.
The small update that I made this summer was focused on speeding up the entire system, and the disk I/O in particular. I got a Samsung 970 EVO Plus NVMe M.2 SSD (1 TB size) as the new system disk (for more info, see here: https://www.samsung.com/semiconductor/minisite/ssd/product/consumer/970evoplus/). The interesting part here is the “NVMe” technology: the “Non-Volatile Memory Express” interface for solid-state memory devices like SSDs. This new NVMe disk looks nothing like my old hard drives: the entire terabyte-size disk is physically just a small add-on circuit board, which fits into the tiny M.2 connector on the motherboard (technically via a PCI Express 3.0 interface). The entire complex of physical and logical interface and connector standards involved here is frankly a pretty terrible mess to figure out, but I was just happy to notice that the ASUS motherboard (Z170-P) which I had bought in December 2015 was future-proof enough to come with an M.2 connector which supports “x4 PCI Express 3.0 bandwidth”, which is apparently another way of saying that it has NVMe support.
I was actually a bit nervous when I proceeded to install the Samsung 970 EVO Plus NVMe into the M.2 slot. First I updated the motherboard firmware to the latest version, then unplugged and opened the PC. The physical installation of the tiny M.2 card actually turned out to be one of the trickiest parts of the entire operation. The tiny slot is in an awkward, tight spot on the motherboard, so I had to remove some cables and the graphics card just to get my hands into it. And the single screw that is needed to fix the card in place is not one of the regular screws that are used in computer case installations. Instead, it is a tiny “micro-screw” which is very hard to find. Luckily I finally located my original Z170-P sales box, and there it was: the small plastic pack with a tiny mounting bolt and the microscopic screw. I had kept the box on my storage shelves all these years, without even noticing the small plastic bag and tiny screws in the first place (I read on the Internet that there are plenty of others who have thrown the screw away with the packaging, and then later been forced to order a replacement from ASUS).
There are some settings that need to be set up in the BIOS to get the NVMe drive running. I’ll copy the steps that I followed below, in case they are useful for others (please follow them only at your own risk – and, by the way, you need to start by creating the Windows 10 installation USB media from the Microsoft site, and by plugging that in before trying to reboot and enter the BIOS settings):
In the BIOS, go to Advanced Mode. Click the Advanced tab, then PCH Storage Configuration.
Verify the SATA controller is set to – Enabled
Set SATA Mode to – RAID
Go back one screen, then select Onboard Device Configuration.
Set SATA Mode Configuration to – SATA Express
Go back one screen. Click on the Boot tab, then scroll down the page to CSM. Click on it to go to the next screen.
Set Launch CSM to – Disabled
Set Boot Device Control to – UEFI only
Boot from Network Devices can be anything.
Set Boot from Storage Devices to – UEFI only
Set Boot from PCI-E/PCI Expansion Devices to – UEFI only
Go back one screen. Click on Secure Boot to go to the next screen.
Set Secure Boot state to – Disabled
Set OS Type to – Windows UEFI mode
In my case, though, setting “Launch CSM” to “Disabled” made the following settings in that section vanish from the BIOS interface. Your mileage may vary. I just went back at that point, made the next steps first, then disabled “Launch CSM”, and proceeded further.
Another interesting part is how to partition and format the SSD and the other disks in one’s system. There are plenty of websites and discussions around related to this. I noticed that Windows 10 will place some partitions on other (not so fast) disks if those are physically connected during the first installation round. So, it took me a few Windows re-installations to actually get the boot order, partitions and disks organised to my liking. But when everything was finally set up and running, the benchmark reported that my workstation speed had been upgraded to the “UFO” level, so I suppose it was all worth it, in the end.
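For the curious, a very crude sequential-write timing can be done even without a dedicated benchmark tool. This is only a sketch – OS caching and other activity skew the numbers heavily, and the scratch file name is just an example – so treat a proper benchmark utility as the real measurement:

```python
# Crude sequential-write timing sketch. Not a substitute for a real
# benchmark tool: OS caches and background activity skew the result.
import os
import time

PATH = "bench.tmp"          # example scratch file on the disk under test
SIZE = 64 * 1024 * 1024     # 64 MiB of random data

data = os.urandom(SIZE)
t0 = time.perf_counter()
with open(PATH, "wb") as f:
    f.write(data)
    f.flush()
    os.fsync(f.fileno())    # force the data out to the device
write_mb_s = SIZE / (1024 ** 2) / (time.perf_counter() - t0)
print(f"sequential write: {write_mb_s:.0f} MB/s")
os.remove(PATH)
```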
Part of the quiet, snappy and effective performance of my system after this installation can of course be due simply to the clean Windows installation in itself. Four years of use with all kinds of software and driver installations can clutter the system so that it does not run reliably or smoothly, regardless of the underlying hardware. I also took the opportunity to physically clean the PC inside-out thoroughly, fix all loose and rattling components, organise cables neatly, etc. After closing the covers, setting the PC case back in its place, plugging in a sharp 4K monitor and a new keyboard (Logitech K470 this time), and installing just a few essential pieces of software, it was a pleasure to notice how fast everything now starts and responds, and how cool the entire PC is running according to the system temperature sensor data.
It has been interesting to follow how, since last year, several articles have been published that discuss the “mirrorless camera hype” and put forward various kinds of criticism of either this technology or the related camera industry strategies. One repeated criticism is rooted in the fact that many professional (and enthusiast) photographers still find a typical DSLR camera body to work better for their needs than a mirrorless one. There are at least three main differences: a mirrorless interchangeable-lens camera body is typically smaller than a DSLR, the battery life is weaker, and the image from an electronic viewfinder and/or LCD back screen is less realistic than that of the traditional optical viewfinder in a (D)SLR camera.
The industry critiques appear to be focused on worries that, as the digital camera market as a whole is shrinking, the big companies like Canon and Nikon are directing their product development resources toward putting out mirrorless camera bodies with new lens mounts, and new lenses for these systems, rather than evolving their existing DSLR product lines. Many seem to think that this is bad business sense, since large populations of professionals and photography enthusiasts are deeply invested in these more traditional ecosystems, and a lack of progress in them means that there is not enough incentive to upgrade and invest for all of those who remain in those parts of the market.
There might be some truth in both lines of argumentation – yet they are also not the whole story. It is true that Sony, with their α7, α7R and α7S lines of cameras, has stolen much of the momentum that could have been strong for Canon and Nikon, had they invested in mirrorless technologies earlier. Currently, full frame systems like the Canon EOS R, or the Nikon Z6 & Z7, are apparently not selling very strongly. In early May of this year, for example, it was publicised how the Sony α7 III sold more units, in Japan at least, than the Canon and Nikon full frame mirrorless systems combined (see: https://www.dpreview.com/news/3587145682/sony-a7-iii-sales-beat-combined-efforts-of-canon-and-nikon-in-japan ). Some are ready to declare Canon and Nikon’s efforts dead on arrival, but both companies have claimed to be strategically committed to their new mirrorless systems, developing and launching the lenses that are necessary for their future growth. Overall, though, both Canon and Nikon are producing and selling many more digital cameras than Sony, even while their sales numbers have been declining (in Japan at least, Fujifilm was interestingly the big winner in the year-over-year analysis; see: https://www.canonrumors.com/latest-sales-data-shows-canon-maintains-big-marketshare-lead-in-japan-for-the-year/ ).
From a photographer’s perspective, the first-mentioned concerns might be more crucial than the business ones, though. Are mirrorless cameras actually worse than comparable DSLR cameras?
There is a curious quality when you move from a large (D)SLR body to using a typical mirrorless camera: the small camera can feel a bit like a toy, the handling is different, and using the electronic viewfinder and LCD screen can produce flashbacks of the compact, point-and-shoot cameras of earlier years. In terms of pure image quality and feature sets, however, mirrorless cameras are already the equals of DSLRs, and in some areas have arguably moved beyond most of them. There are multiple reasons for this, and the primary one relates to the intimate link between the light sensor, image processor and viewfinder in mirrorless cameras. As a photographer you are not looking at a reflection of light coming from the lens through an alternative route into an optical viewfinder – you are looking at the image that is produced from the actual, real-time data that the sensor and image processor are “seeing”. The mechanical construction of mirrorless cameras can also be made simpler, and when the mirror is removed, the entire lens system can be moved closer to the image sensor – something that is technically called a shorter flange distance. This should allow engineers to design lenses for mirrorless systems that have a large aperture and fast focusing capabilities (you can check out a video where a Nikon lens engineer explains how this works here: https://www.youtube.com/watch?v=LxT17A40d50 ). The physical dimensions of the camera body itself can be made small or large, as desired. Nikon Z series cameras are rather sizable, with a conventional “pro camera” style grip (handle); my Canon EOS M50, from the other extreme, is diminutive.
I think that the development of cameras with ever stronger processors, and their novel machine learning and algorithm-based capabilities, will push the general direction of photography technology towards various mirrorless systems. That said, I completely understand the benefits of more traditional DSLRs and why they might feel superior to many photographers at the moment. There have been some rumours (in the Canon space at least, which I personally follow most) that new DSLR camera bodies will be released into the upper-enthusiast APS-C/semi-professional DSLR category (search e.g. for “EOS 90D” rumours), so I think that DSLR cameras are by no means dead. There are many ways in which the latest camera technologies can be implemented in mirrored bodies, as well as in mirrorless ones. The big strategic question of course is how many different mount and lens ecosystems can be maintained and developed simultaneously. If some of the current mounts stop getting new lenses in the near future, there is at least a market for adapter manufacturers.
I have today started to learn to take photos with the ultra-compact EOS M50, after using much bigger SLR or DSLR cameras for decades. This is surely an interesting experience. Some of the fundamentals of photography are still the same, but there are some areas I clearly need to study more, and learn new approaches.
These involve particularly learning how to collaborate better with the embedded computer (the DIGIC 8 processor). It is fascinating to note how fast e.g. the automatic focusing system is – I can suddenly use an old lens like my trusty Canon EF 70-200mm f/4 L USM to get in-flight photos of rather fast birds. The new system tracks moving targets much faster and more reliably. However, I am by no means a bird photographer, having mostly worked with still life, landscapes and portraits. Getting to handle the dual options of creating the photo either through the electronic viewfinder or on the vari-angle touchscreen takes some getting used to.
Also, there are many ways to use this new system, and finding the right settings among the many different menus (there must be hundreds of options in all) takes some time. Coming from the much older EOS 550D, it was also strange to realise that the entire screen is now filled with autofocus points, and that it is possible to slide the AF point with a thumb (using the touchscreen as a “mouse”) into the optimal spot, while simultaneously composing, focusing, zooming and shooting – at 10 frames per second, maximum. I am filling up the memory card fast now.
It is easy to do many basic photo editing tasks in-camera now. It actually feels like there is a small “Photoshop” built into the camera. However, there is a fundamental decision that needs to be made: either to use photos as they come, directly from the camera, or after some post-processing on the computer. This is important since JPG- and RAW-based workflows are a bit different. These days, I am using quite a lot of mobile apps and tools, and the ability to wirelessly copy photos from the camera to a smartphone or tablet computer (via Wi-Fi, Bluetooth + NFC), in the field, is definitely something that I like doing. Currently, thus, the JPG options make the most sense for me personally.
One of the frustrating parts of upgrading one’s photography tools is the realisation that there indeed is no such thing as a “perfect camera”. Truly, there are many good, very good and excellent cameras, lenses and other tools for photography (some also very expensive, some more moderately priced). But none of them is perfect for everything, and each will be found lacking if evaluated with criteria it was not designed to fulfil.
This is a particularly important realisation at a point when one is considering changing one’s style or approach to photography while also upgrading one’s equipment. While a certain combination of camera and lens does not force you to photograph certain subject matter, or only in a certain style, there are important limitations in all the alternatives, which make them less suitable for some approaches and uses than others.
For example, if light weight and the ease of combining photo taking with a hurried professional and busy family life are the primary criteria, then investing heavily into serious, professional or semi-professional/enthusiast level photography gear is perhaps not such a smart move. The “full frame” (i.e. classic film frame sensor size: 36 x 24 mm) cameras that most professionals use are indeed excellent at capturing a lot of light and detail – but these high-resolution camera bodies need to be combined with larger lenses that tend to be much heavier (and more expensive) than some alternatives.
On the other hand, a good smartphone camera might be the optimal solution for many people whose life context only allows taking photos in the middle of everything else – multitasking, or while moving from point A to point B. (E.g. the excellent Huawei P30 Pro is built around a small but high-definition, 1/1.7″ sized “SuperSensing” 40 Mp main sensor.)
Another “generalist option” used to be the so-called compact cameras, or point-and-shoot cameras, which fall into the pocket camera category by size. However, these cameras have pretty much lost the competition to smartphones, and there are rather minor advances to be gained by upgrading from a really good modern smartphone camera to an upscale, 1-inch sensor compact camera, for example. While the lens and sensor of the best such cameras are indeed better than those in smartphones, the LCD screens of pocket cameras cannot compete with the 6-inch OLED multitouch displays and UIs of top-of-the-line smartphones. It is much easier to compose interesting photos with these smartphones, and they also come with an endless supply of interesting editing tools (apps) that can be installed and used for any need. The capabilities of pocket cameras are much more limited in such areas.
There is an interesting exception among the fixed-lens cameras, however, that is still alive and kicking, and that is the “bridge camera” category. These are typically larger cameras that look and behave much like interchangeable-lens system cameras, but have their single lens permanently attached to the camera. The sensor size in these cameras has traditionally been small, 1/1.7″ or even 1/2.3″. The small sensor size, however, allows manufacturers to build exceptionally versatile zoom lenses that still translate into manageable-sized cameras. A good example is the Nikon Coolpix P1000, which has a 1/2.3″ sensor coupled with a 125x optical zoom – that is, it provides a similar field of view as a 24–3000 mm zoom lens would have on a full frame camera (physically, the P1000’s lens has a 4.3–539 mm focal length). As 300 mm is already considered a solid telephoto range, a 3000 mm field of view is insane – it is a telescope, rather than a regular camera lens. You need a tripod for shooting with that lens, and even with image stabilisation it must be difficult to keep any object that far away in the shaking frame and compose decent shots. A small sensor and extreme lens system mean that the image quality is not very high: according to reviews, particularly in low-light conditions the small sensor size and “slow” (small aperture) lens of the P1000 translate into noisy images that lack detail. But, to be fair, it is impossible to find a full frame equivalent system that would have a similar focal range (unless one combines a full frame camera body with a real telescope, I guess). This is something that you can use to shoot the craters on the Moon.
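The “equivalent focal length” arithmetic above comes from the crop factor, i.e. the ratio of sensor diagonals. A small sketch, using commonly quoted 1/2.3″ sensor dimensions (about 6.17 x 4.55 mm – these vary slightly by source, so the result is approximate; Nikon’s own 24–3000 mm figure implies a factor of about 5.6):

```python
import math

# Crop factor = ratio of sensor diagonals relative to full frame.
# The 1/2.3" dimensions below (6.17 x 4.55 mm) are commonly quoted
# approximations, so the equivalent focal length is approximate too.

def crop_factor(width_mm, height_mm, ff=(36.0, 24.0)):
    """Crop factor relative to a full frame (36 x 24 mm) sensor."""
    return math.hypot(*ff) / math.hypot(width_mm, height_mm)

cf = crop_factor(6.17, 4.55)   # 1/2.3" sensor, roughly 5.6
print(f"crop factor: {cf:.2f}")
print(f"539 mm acts like ~{539 * cf:.0f} mm on full frame")
```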
A compromise that many hobbyists are using is getting a system camera body with an “APS-C” (in Canon: 22.2 x 14.8 mm) or “Four Thirds” (17.3 × 13 mm) sized sensor. These also cannot gather as much light as full frame cameras do, and thus will have more noise in low-light conditions; in addition, their lenses cannot operate as well at large apertures, which translates to a relative inability to achieve a shallow “depth of field” – something that is desirable e.g. in some portrait photography situations. Also, sports and animal photographers need camera-lens combinations that are “fast”, meaning that even in low-light conditions one can take photos that show the fast-moving subject matter in focus and sharp. The APS-C and Four Thirds cameras are “good enough” compromises for many hobbyists, since particularly with the impressive progress that has been made e.g. in noise reduction and automatic focus technologies, it is possible to produce photos with these camera-lens systems that are “good enough” for most purposes. And this can be achieved with equipment that is still relatively compact in size, lightweight, and (importantly) the price of lenses in APS-C and Four Thirds camera systems is much lower than that of the top-of-the-line professional lenses manufactured and sold to demanding professionals.
A point of comparison: a full-frame compatible 300 mm telephoto Canon lens that is meant for professionals (meaning that it has very solid construction, on top of glass elements that are designed to produce very sharp and bright images at large aperture values) is priced close to 7000 euros (check out the “Canon EF 300mm f/2.8 L IS II USM”). In comparison, and from the complete other end of the options, one can find a much more versatile telephoto zoom lens for an APS-C camera, with a 70-300 mm focal range, priced under 200 euros (check out e.g. the “Sigma EOS 70-300mm f/4-5.6 DG”). But the f-values here already tell that this lens is much “slower” (that is, it cannot achieve large apertures/small f-values, and therefore will not operate as nicely in low-light conditions – translating also to longer exposure times and/or the necessity to use higher ISO settings, which add noise to the image).
But: what is important to notice is that the f-value is not the whole story about the optical and quality characteristics of lenses. And even if one is after that “professional looking” shallow depth of field (and wants to have a nice blurry background “bokeh” effect), it can be achieved with multiple techniques, including shooting with a longer focal length lens (telephoto focal lengths come with a shallower depth of field) – or even using a smartphone that can apply subject separation and blur effects with the help of algorithms (your mileage may vary).
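The claim that longer focal lengths give a shallower depth of field can be illustrated with the standard thin-lens, hyperfocal-distance approximation. This is only a textbook sketch (real lenses and focusing geometries deviate a bit), and 0.03 mm is the conventionally assumed circle of confusion for full frame:

```python
# Thin-lens depth-of-field sketch, illustrating why longer focal lengths
# give a shallower depth of field at the same aperture and distance.
# Uses the standard hyperfocal approximation; 0.03 mm is a commonly
# assumed circle of confusion for full frame. All values in millimetres.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return (near limit, far limit, total depth of field) in mm."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = h * subject_mm / (h + subject_mm - focal_mm)
    far = h * subject_mm / (h - subject_mm + focal_mm)
    return near, far, far - near

# Same f/4 aperture, same 5 m subject distance:
_, _, dof_tele = depth_of_field(200, 4, 5000)   # telephoto
_, _, dof_norm = depth_of_field(50, 4, 5000)    # normal lens
print(f"200 mm: {dof_tele / 1000:.2f} m, 50 mm: {dof_norm / 1000:.2f} m")
```

At the same aperture and distance, the 200 mm lens yields a depth of field of only some tens of centimetres, while the 50 mm lens keeps over two metres in acceptable focus.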
And all this discussion has not yet touched on aesthetics. The “commercial/professional” photo aesthetics often dominate the discussion, but there are actually interesting artistic goals that might be better achieved using small-sensor cameras than a full frame one. Some like to create images that are sharp from near to far distances, and smaller sensors suit that perfectly. Also, there might be artistic reasons for hunting for particular “grainy” qualities rather than the common, overly smooth aesthetics. A small-sensor camera, or a smartphone, might be a good tool for those situations.
One must also consider the use situation one is aiming at. In many cases it is no help to own a heavy system camera: if it is always left home, it will not be taking pictures. If the sheer size of the camera attracts attention, or confuses the people you were hoping to feature in the photos, it is no good for you.
Thus, there is no perfect camera that would suit all needs and all opportunities. The hard fact is that if one is planning to shoot “all kinds of images, in all kinds of situations”, then it is very difficult to say what kind of camera and lens are needed – for curious, experimental and exploring photographers it might be pretty much impossible to make the “right choice” regarding the tools that would truly be useful for them. Every system will certainly facilitate many options, but every choice inevitably also removes some options from one’s repertoire.
One concrete constraint is of course budget. It is relatively easy with a small budget to make advances in photographing mostly landscapes and still-life objects, as a smartphone or e.g. an entry-level APS-C system camera with a rather cheap lens can provide good enough tools for that. However, when getting into photography of fast-moving subjects – children, animals, fast-moving insects (butterflies) or birds – some dedicated telephoto or macro capabilities are needed, and particularly if these topics are combined with low-light situations, or a desire for really sharp images with minimal noise, then things can easily get expensive and/or the system becomes really cumbersome to operate and carry around. Professionals use these kinds of heavy and expensive equipment – and are paid to do so. Is it one’s idea of fun and a good time as a hobbyist photographer to do similar things? It might be – or not, for some.
Personally, I still need to make up my mind about where to go next in my decades-long photography journey. The more pro-style, full frame world certainly has its interesting options, and the new generation of mirrorless full frame cameras is also a bit more compact than the older generations of DSLR cameras. However, it is impossible to get away from the laws of physics and optics, and really “capable” full frame lenses tend to be large, heavy and expensive. The style of photography that is based on a selection of high-quality “prime” lenses (as contrasted to zooms) also means that almost every time one changes from taking photos of a landscape to some detail, or a close-up/macro subject, one must physically remove and change those lenses. For a systematic and goal-oriented photographer that is not a problem, but I know my own style already, and I tend to be much more opportunistic: looking around, and jumping from subject to subject and style to style all the time.
One needs to make some kind of compromise. One option I have been considering recently is that rather than stepping “up” from my current entry-level Canon APS-C system, I could also go the other way. There is an interesting Sony bridge camera, the Sony RX10 IV, which has a modern 1″ sensor and an image processor that enables a very fast, 315-point phase-detection autofocus system. The lens in this camera is the most interesting part, though: it is a sharp, 24-600 mm equivalent F2.4-4 zoom lens designed by Zeiss. This is a rather big camera, though, so like a system camera, it is not something you can put in your pocket and carry around daily. In use, if chosen, it would complement the wide-angle and street photography that I would still be doing with my smartphone cameras. This would be a camera dedicated to those telephoto situations in particular. The UI is not perfect, and the touch screen implementation in particular is a bit clumsy. But the autofocus behaviour, and the quality of the images it creates in bright to medium light conditions, is simply excellent. The 1″ sensor cannot compete with full frame systems in low-light conditions, though. There might be some interesting new generation mirrorless camera bodies and lenses coming out this year, which might change the camera landscape in somewhat interesting ways. So: the jury is still out!
Currently in the Canon camp, my only item from their “Lexus” line – the more high-quality, professional L lenses – is the old Canon EF 70-200mm f/4 L USM (pictured). The second picture, the nice crocus close-up, is however not coming from that long tube, but was shot with a smartphone (Huawei Mate 20 Pro). There are professional quality macro lenses that would definitely produce better results on a DSLR camera, but for a hobbyist photographer it is also a question of “good enough”. This is good enough for me.
The current generation of smartphone cameras and optics is definitely strong in the macro, wide-angle to normal lens ranges (meaning, in traditional terms, 10-70 mm lenses on full frame cameras). Going into telephoto territory (over 70 mm in full frame terms), a good DSLR lens is still the best option – though the “periscope” lens systems currently being developed for smartphone cameras suggest that the situation might change on that front also, for hobbyist and everyday photo needs. (See the Chinese Huawei P30 Pro and OPPO’s coming phones’ periscope cameras leading the way here.) Powerful processors and learning, AI-based algorithms are used in future camera systems to combine data coming from multiple lenses and sensors for image-stabilised, long-range and macro photography needs – with very handy, seamless zoom experiences.
My old L telephoto lens is the non-stabilised f/4 version, so while it is “fast” in terms of focus and zoom, it is not particularly “fast” in terms of aperture (i.e. it is not able to shoot with short exposure times at very wide apertures in low-light conditions). But in daytime, well-lit conditions, it is a nice companion to the Huawei smartphone camera, even though the aging technology of the Canon APS-C system camera is truly from a completely different era, as compared to the fine-tuning, editing and wireless capabilities of the smartphone. I will probably next try to set up a wireless SD card & app system for streaming the telephoto images from the old Canon into the Huawei (or e.g. an iPad Pro), so that the wide-angle, macro, normal range and telephoto images could all, in a more-or-less handy manner, meet in the same mobile-access photo roll or editing software. Let’s see how this goes!
(Below, also a Great Tit/talitiainen, shot using the Canon 70-200, as a reference. On an APS-C crop body, it gives the same field of view as a 112-320 mm lens on a full frame camera, if I calculate this correctly.)