Hydroponics, pt. 3

My chili project was delayed for a week or two (a nasty virus hit), so I have only now been able to set up my hydroponics system and move forward. I got the AutoPot 4pot system by mail order (everything arrived fine, except for the small “tophat grommet” that seals the water tube’s connection to the water reservoir tank – I got that from a local store). The growing medium is a 60/40 “Gold Label” HydroCoco mix, with a thin layer of pure hydrocorn at the bottom.

The LED light system was a bit of a challenge to install so that I can adjust the height of the lamps above the tops of the chili plants (without fastening anything to the ceiling, as our panels cannot take it). This was the right spot for an “IkeaHack”: the “elevators” for the LED strips were mounted on an Ikea MULIG clothes rack. An 80 × 80 cm plastic vat was placed underneath the entire system, just to be safe with all that water. The outcome is perhaps not very beautiful, but it seems functional enough. Let’s see how the Canna Coco A+B solution that I am feeding them will work out. I am following the mild, rooting-phase recipe at this point: 20 ml of both fertilisers into a 10 L bucket of water.
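That rooting-phase recipe works out to 2 ml of each component per litre of water. A tiny calculation like the following (my own sketch of the arithmetic, not anything from Canna’s feed charts) scales that ratio to other container sizes:

```python
def fertiliser_dose(water_litres, ml_per_litre=2.0):
    """Scale the rooting-phase recipe (20 ml per 10 L, i.e. 2 ml/L)
    of each component (Coco A and Coco B) to a given water volume."""
    return water_litres * ml_per_litre

# 10 L bucket -> 20.0 ml of Coco A plus 20.0 ml of Coco B
print(fertiliser_dose(10))   # 20.0
# 6.5 L watering can -> 13.0 ml of each component
print(fertiliser_dose(6.5))  # 13.0
```

Both components get the same dose; the `ml_per_litre` default simply encodes the mild seedling-stage strength mentioned above.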

My four pots finally host these: Lemon Drop, CAP 270, Sugar Rush Orange, and Hainan Yellow Lantern. (Laura has another four chili seedlings in soil pots.) Looking forward to good growth!

Hydroponics, pt. 2

Short update again on chilies and hydroponics (apologies): my current work on this is focused on three areas. Firstly, I have been trying to figure out what growing method (or sub-method) to use. As I wrote earlier, there are reasons why ‘passive hydroponics’ looks like the best option in my case. There are different ways of implementing it, though. It appears important to understand in advance the risks associated with algae growth, over- (or under-) fertilisation, and pests in passive hydroponics. Compared with growing in soil, the basic situation with nutrients is very different. In principle, hydroponic growing should be free of many risks that come with soil (less risk of pests and plant diseases, no need for pesticides, etc.). However, a hydroponic farmer needs to be a bit of a scientist, in that you need to understand something about physics, chemistry and some (very basic) bioengineering. The choice of growing medium (substrate) is important: in passive hydroponics one should get enough moisture (water) to the plant roots without suffocating them. Thus, the material needs to be neutral (no bio-actives or fertilisers of its own), porous and spongy enough to hold suitable amounts of water when irrigated, but also able to dry out enough so that air can reach the roots between drenchings.

Secondly, I have been looking into the technical solutions for implementing the hydroponic growing environment. As I wrote, I have considered building my own ‘hempy bucket’ system. However, I kept thinking about root rot, fungus and other risks: in this kind of bucket system, there is always some fertilising liquid just standing in the water reservoir. Standing water provides ideal conditions for algae growth. A stagnant water system can cause a lack of oxygen, and the build-up of salts and decomposing algae can produce toxins. I am not sure how significant those risks are (there are many hempy bucket gardeners who appear perfectly happy with their low-cost systems), but currently I am inclining more towards a commercial passive hydroponics system that also includes some kind of water valve: the idea here is that the valve allows automatic, periodic watering of the growing medium (and the root system), but also flushes the water away as completely as possible, so that there is no stagnant water reservoir in the pots, as there is in the hempy bucket option. There are at least two models that are widely available and used: AutoPot and PLANT!T GoGro. I am not sure if there is much fundamental difference between the two – GoGro appears to be more widely available where I live, but some gardeners seem to consider AutoPot (the original, older system) more robust and a bit more sophisticated.

LED strip (Nelson Garden 23W).

Thirdly, I need to find a plant light solution that works. Currently, the tiny seedlings fit nicely below the small LED plant light that I have long been using. However, doing some hydroponic gardening indoors (before the greenhouse season starts) means that I need to be ready to provide enough of the right kinds of light for growing plants. We had an old fluorescent tube lamp left over from Laura’s old aquarium. That lamp was, however, too large and heavy for my needs, and I was also a bit suspicious about how safe (in electrical terms) a 10+ year-old lamp setup would be today. Some chili gardeners appear to use rather expensive “hi-fi lamps”, where different high-intensity discharge lamps (HIDs) have taken over from older incandescent and fluorescent tube systems. Ceramic metal halide and full-spectrum metal halide lighting are used to create powerful light with large amounts of blue and ultraviolet wavelengths that are good for plant growth. The price of good lamps of this kind can be rather high, however. I decided to go for a lightweight but plant-optimised LED system that was a comparably budget-friendly option. I am now setting up four 23W LED strips that were sold as Nelson Garden LED plant lights (the No. 1 and No. 2 systems use the same power transformer). Each LED strip is 85 cm long, is specified for a 6400 K colour temperature, and should provide 2200 lumen – or, more precisely, a PPFD (at 100 mm) of 570 µmol/s/m². Having four of those should be enough for four AutoPot-style chili growing stations, at least in the early phases of gardening, I think. I am still thinking about how to suspend these LED strips and adjust them to the correct height above the plants. I am doing this pre-growing phase in my home office corner in the basement, and the ceiling panels, for example, do not allow attaching anything to them.
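For comparing light setups, a PPFD figure like the one quoted above can be converted into a daily light integral (DLI) with the standard conversion DLI = PPFD × photoperiod in seconds ÷ 10⁶. A quick sketch (the 16-hour photoperiod is my own assumption for illustration, not a figure from the lamp’s documentation):

```python
def daily_light_integral(ppfd_umol_m2_s, hours_on):
    """Convert a PPFD value (µmol/s/m²) and a daily photoperiod
    (hours) into a daily light integral (mol/m²/day)."""
    return ppfd_umol_m2_s * hours_on * 3600 / 1_000_000

# The strip's quoted 570 µmol/s/m² (measured at 100 mm), run 16 h/day:
print(round(daily_light_integral(570, 16), 1))  # 32.8 mol/m²/day
```

Note that the quoted PPFD applies only at 100 mm, directly under the strip; the real figure at plant-top level farther away will be lower.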

Measuring the nutrients.

Finally, the choice of growing medium also has an effect on the style of fertilisers to use, and most hydroponic gardeners invest in both EC and pH meters and adjustment solutions, in order to control the salt and acidity levels of the nutrient solution, and to adjust the values for the different stages of growth, bloom and fruit production. Some do not take this so seriously, and just follow some fertiliser manufacturer’s guidelines and make no measurements at all, simply monitoring how the plants look. Some study this very scientifically, measuring and adjusting various nutrients, starting from the “key three”: Nitrogen (N), Phosphorus (P) and Potassium (K), which are commonly referred to as a fertilising product’s NPK value. All three are needed: nitrogen boosts growth, phosphorus is needed by the plant for photosynthesis, cell communication and reproduction, and potassium is crucial for the plant’s water regulation. But there are also “micronutrients” (sometimes called “trace elements”) that are needed in smaller amounts but are still important for healthy growth – these include, e.g., magnesium. Popular fertilisers for hydroponic gardening often come in multiple components, where e.g. the mixtures for growth, bloom and then the micronutrients are sold and apportioned separately. It is possible to find quite capable all-in-one fertiliser products, however. I am currently planning to use coco coir (a neutral by-product of coconut processing) as the growing medium, so I picked “Canna Coco A+B” by Canna Nutrients as my starting hydroponic fertiliser. I also bought a simple pH tester for checking the acidity of the fertilising solution, and I probably should also invest in a reliable EC meter at some point. The starting solution for seedlings should be very mild in any case, to avoid over-fertilising.
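As a simple illustration of what the pH tester is for: coco growers commonly aim for a slightly acidic nutrient solution, roughly pH 5.5–6.2 (these bounds are commonly cited values I am assuming here, not figures from Canna’s own charts). A check like this captures the idea:

```python
def ph_status(ph, lo=5.5, hi=6.2):
    """Flag a nutrient-solution pH reading against the range
    commonly cited for coco coir growing (assumed here as 5.5-6.2)."""
    if ph < lo:
        return "too acidic: raise with pH-up"
    if ph > hi:
        return "too alkaline: lower with pH-down"
    return "ok"

print(ph_status(5.8))  # ok
print(ph_status(7.1))  # too alkaline: lower with pH-down
```

An EC meter would complement this with a second number (total dissolved salts), which is adjusted by diluting the solution or adding more fertiliser.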

Testing the pH of our tap water.


I have done my chili gardening so far only with traditional, soil-based methods. The results have varied, and there seems to be a constant threat of pests, plant diseases, or improper amounts of water and nutrients while working with soil. I am not completely sure how real this observation is, but I think I have noticed that soil-based chili growing, for example, is something that some of the more passionate hobbyists have long left behind. After moving into hydroponics (where nutrients and oxygen are carried to the plant roots by water flow), then to aeroponics (the use of moist air to nourish hanging root systems), some have even drawn on the NASA experiments in the International Space Station to create “high pressure aeroponics” or ultrasonic “fogponics” systems, where a very small, 50-micron droplet size is utilised to stimulate the growth of the fine root hairs (trichoblasts) that maximise the surface area of the root system, and to produce optimal crop yields with minimal amounts of water and nutrients. The related high-pressure pumps and misting nozzle systems are interesting in an engineering sense, I admit.

The first seedlings, spring 2019.

I was personally considering merely the more prosaic “bucket bubbler” hydroponics setup, but even that proved a bit problematic in my case. (There is no electric line running into our greenhouse, where I was planning to situate these hydroponic bubblers.) Thus, I have now turned towards “passive hydroponics”, which is probably the oldest way this has been applied: growing plants without soil. The version that I am now aiming at is internationally known as the “hempy bucket” method: a black/dark bucket is filled with a mix of 3 parts perlite and 1 part vermiculite, into which the chili seedling is planted. There needs to be a drill hole for excess water low down in the bucket, at c. 2 inches (c. 5 cm) from the bottom. One then waters the plant with a hydroponic nutrient solution every other day, until the roots grow and reach the water reservoir at the bottom of the bucket. The solution watering is then reduced a bit, to twice a week. The water reservoir, bucket microclimate and perlite-vermiculite substrate keep the upper roots supported, nourished and moist, while also providing good amounts of oxygen, while the submerged, lower parts of the roots deliver plenty of water and nutrients to the plant. The final outcome should be a better and more controlled growing environment than what can be reached in typical soil-based gardening.
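The 3:1 perlite/vermiculite mix is easy to portion out for any bucket size; a small helper like this (my own sketch of the arithmetic, with a hypothetical 10 L bucket as the example) shows the split:

```python
def hempy_mix(bucket_litres, perlite_parts=3, vermiculite_parts=1):
    """Split a bucket volume into the 3:1 perlite/vermiculite mix
    used in the hempy bucket method."""
    total = perlite_parts + vermiculite_parts
    perlite = bucket_litres * perlite_parts / total
    vermiculite = bucket_litres * vermiculite_parts / total
    return perlite, vermiculite

# A 10 L bucket -> 7.5 L of perlite and 2.5 L of vermiculite
print(hempy_mix(10))  # (7.5, 2.5)
```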

For more, see e.g.

Selection of chilies, Spring 2019

Some of the chili crop, 2018.

I have been growing my own crops of chili peppers for a few years now, and every year it feels like I am a bit late in starting the germination period. This time, it is already late January, and I am still just selecting the seeds and species to grow. These are the varieties I have narrowed the selection down to this time – I have also attached links to Fatalii Seeds, who provide a bit more information about each:

Taken together, all these species and varieties capture quite nicely the enormous range of options that chili cultivation provides. In some, my main interest is in the taste and productivity of the chilies; in some, the exotic and interesting looks would provide joy to the hobbyist chili farmer; and in some, the main interest lies in understanding more about the more exotic, alternative options that the chili universe provides. But I think that all of these should be relatively easy to grow, so in that sense they all could be realistic options. Let’s see how this goes; it is clear that I cannot grow as many as I am interested in, and the number of plants also needs to be kept to the minimum, considering the small greenhouse and our other spaces.


My updates about e.g. Diablo3, or Pokémon GO, will go into https://frans.game.blog/.

I decided to experiment with microblogging, and set up three new sites: https://frans.photo.blog/, https://frans.tech.blog/ and https://frans.game.blog/. All these “dot-blog” subdomains are now offered free by WordPress.com (see: https://en.blog.wordpress.com/2018/11/28/announcing-free-dotblog-subdomains/). The idea is to post my photos and my game and tech updates to these sites, for fast updates and better organisation than a “general” blog site offers, and also to avoid spamming those in social media who are not interested in these topics. Feel free to subscribe – or set up your own blog.

Personal Computers as Multistratal Technology

HP “Sure Run” technology here coming into conflict with the OS and/or the computer’s BIOS itself.

As I was struggling through some operating system updates and other installs (and uninstalls) this week, I was again reminded of the history of personal computers, and of their (fascinating, yet often also frustrating) character as multistratal technology. By this I mean their historically, commercially and pragmatically multi-layered nature. A typical contemporary personal computer is a laptop more often than a desktop computer (this has been the situation for numerous years already, see e.g. https://www.statista.com/statistics/272595/global-shipments-forecast-for-tablets-laptops-and-desktop-pcs/). Whereas a personal computer in a desktop format is still something that one can realistically consider constructing by combining various standards-following parts and modules, and expect it to start operating after the installation of an operating system (plus, typically, some device drivers), the laptop computer is always configured and tweaked into a particular interpretation of what a personal computing device should be – for this price group, for this usage category, with these special, differentiating features. The keyboard is typically customised to fit into the (metal and/or plastic) body so that the functions of a standard 101/102-key PC keyboard layout (originally by Mark Tiddens of Key Tronic, 1982, then adopted by IBM) are fitted into e.g. the c. 80 physical keys of a laptop computer. As portable computers have become smaller, there has been an increased need for various customised solutions, and the keyboard is a good example of this, as different manufacturers appear to each resort to their own style of fitting e.g. function keys, volume up/down, brightness controls and other special keys onto the same physical keys, using various key-press combinations.
While this means that it is hard to be a complete touch-typist if one changes from one brand of laptop to another (as the special keys will be in different places), one should still remember that in the early days of computers, and even in the era of early home and personal computers, keyboards differed from each other far more than they do in today’s personal computers. (See e.g. the Wikipedia articles: https://en.wikipedia.org/wiki/Computer_keyboard and https://en.wikipedia.org/wiki/Function_key.)

The heritage of IBM personal computers (the “original PCs”), coupled with the Microsoft operating systems (first DOS, then various Windows versions), has meant that there is much shared DNA in how the hardware and software of contemporary personal computers are designed. Even Apple Macintosh computers share much of the same roots as those of the IBM PC heritage – most importantly due to the influential role that the graphical user interface, with its (keyboard and mouse accessed) windows, menus and other graphical elements originating in Douglas Engelbart’s On-Line System and then in Xerox PARC and its Alto computers, had for both Apple’s macOS and Microsoft Windows. All these historical elements, influences and (industry) standards are nevertheless layered in a complex manner in today’s computing systems. It is not feasible to “start from an empty table”, as the software that organisations and individuals have invested in using needs to be accessible in the new systems, and the skill sets of the human users themselves are likewise based on similarity and compatibility with the old ways of operating computers.

Today, Apple with its Mac computers and Google with the Chromebook computers that it specifies (and sometimes also designs down to the hardware level) are the most optimally positioned to produce a harmonious and unified whole out of these disjointed origins. And the reliability and generally positive user experiences provided by both Macs and Chromebooks indeed bear witness to the strength of unified hardware-software design and production. On the other hand, the most popular platform – a personal computer running a Microsoft Windows operating system – is the most challenging from the unity, coherence and reliability perspectives. (According to reports, the market share of Windows is above 75 %, macOS at c. 20 %, Google’s ChromeOS at c. 5 % and Linux at c. 2 % in most desktop and laptop computer markets.)

A contemporary Windows laptop is set up within a complex network of collaborative, competitive and parallel operations by multiple operators. There is the actual manufacturer and packager of computers that markets and delivers certain branded products to users: Acer, ASUS, Dell, HP, Lenovo, and numerous others. Then there is Microsoft, who develops and licenses the Windows operating system to these OEMs (Original Equipment Manufacturers), collaborating to various degrees with them and with the developers of PC components and other devices. For example, a “peripheral” manufacturer like Logitech develops computer mice, keyboards and other devices that should install and run in a seamless manner when connected to a desktop or laptop computer that has been put together by some OEM, which, in turn, has been combining hardware and software elements coming from e.g. Intel (which develops and manufactures the CPUs, Central Processing Units, but also affiliated motherboard “chipsets”, integrated graphics processing units and such), Samsung (which develops and manufactures e.g. memory chips, solid state drives and display components) or Qualcomm (which is best known for wireless components, such as cellular modems, Bluetooth products and Wi-Fi chipsets). In order for a new personal computer to run smoothly after it has been turned on for the first time, the operating system should have the right updates and drivers for all such components. As new technologies are constantly introduced, and the laptop computer in particular follows the evolution of smartphones in sensor technologies (e.g. in using fingerprint readers or multiple camera systems for biometric authentication of the user), there is a constant need for updates that involve both the operating system itself and the firmware (deep, hardware-close level software), as well as the operating system level drivers and utility programs that are provided by the component, device, or computer manufacturers.

The sad truth is that often these updates do not work out that well. There are endless stories in user discussion and support forums on the Internet, where unhappy customers describe their frustrations while attempting to update Windows (as Microsoft advises them), the drivers and utility programs (as the computer manufacturer instructs them), and/or the device drivers (that are provided directly by the component manufacturers, such as Intel or Qualcomm). There is just so much opportunity for conflicts and errors, even though the big companies of course try to test their software before it is released to customers. The Windows PC ecosystem is just so messy, heterogeneous and historically layered that it is impossible to test beforehand every possible combination of hardware and software that a user might have on their devices.

Adobe Acrobat Reader update error.

In practice there are just a few common rules of thumb. For example, it is a good idea to postpone installing the most recent version of the operating system for as long as possible, since the new one will always have more compatibility issues until it has been tested in the “real world” and updated a few times. Secondly, while the most recent and advanced functionalities are something that is used in the marketing and differentiation of a laptop from competing models, it is in these new features that most of the problems will probably appear. One could play it safe, wipe out all the software and drivers that the OEM had installed on the computer, and reinstall a “pure” Windows OS instead. But this can mean that some of the new components do not operate in the advertised ways. Myself, I usually test the OEM-recommended setup and software (and all recommended updates) for a while, but also make regular backups and restore points, and keep reinstall media available, just in case something goes wrong. And unfortunately, quite often something does go wrong, and returning to the original state, or even doing a full, clean reinstall, is needed. In a more “typical” or average combination of hardware and software such issues are not so common, but if one works with new technologies and features, then such consequences of the complexity, heterogeneity and multistratal character of personal computers can indeed be expected. Sometimes only trial and error helps: the most recent software and drivers might be needed to solve issues, but sometimes it is precisely the new software that produces the problems, and the solution is going back to some older versions. Sometimes disabling some function helps; sometimes the only way to proper reliability is completely uninstalling an entire software suite by a certain manufacturer, even if it means giving up some promised, advanced functionalities. Life might just be simpler that way.