Welcome to ned Productions (non-commercial personal website, for commercial company see ned Productions Limited). Please choose an item you are interested in on the left hand side, or continue down for Niall’s virtual diary.
Niall’s virtual diary:
Started all the way back in 1998 when there was no word “blog” yet, hence “virtual diary”.
Original content has undergone multiple conversions Microsoft FrontPage => Microsoft Expression Web, legacy HTML tag soup => XHTML, XHTML => Markdown, and with a ‘various codepages’ => UTF-8 conversion for good measure. Some content, especially the older stuff, may not have entirely survived intact, especially in terms of broken links or images.
- A biography of me is here if you want to get a quick overview of who I am
- An archive of prior virtual diary entries is available here
- For a deep, meaningful moment, watch this dialogue (needs a video player), or for something which plays with your perception, check out this picture. Try moving your eyes around - are those circles rotating???
Latest entries: 
This is actually a much shorter post about Powerline Networking i.e. the power hungry adapters you can get to transport Ethernet over your power sockets. This is nearly the worst type of wired networking you can do, but it’s still proper wired networking and therefore of a different class to wireless networking (which is fundamentally less reliable). You would never fit Powerline Networking anywhere you have a choice e.g. if you own your own house, just fit Ethernet or Fibre. But if you rent and you’re not allowed to drill holes in your walls and roof, Powerline can be a reasonable solution – and it is better than Wifi in terms of reliability and consistency.
Powerline Networking
We fitted Homeplug AV1 500’s into this rented house shortly after we moved in in 2013, precisely because I needed to get the internet from where the vDSL modem is (next to the front door) to where everything else is without the problems caused by the very crowded 2.4 GHz Wifi dropping out due to us living within high density former council housing. That did work well, but the AV1’s would drop out any time the microwave turned on, plus they were noticeably slower than the vDSL connection (70 Mbps at the time). I therefore got my first set of AV2’s in 2014; they were AV2 600’s and they were still a bit slower than the vDSL connection, but impervious to microwaves, so that was a win. Annoyed by losing internet speed, I then splurged on my first set of AV2 1200’s in 2015 and they were finally faster than my vDSL, as they could do about 100 Mbps in terms of TCP transfer bandwidth. And that’s where I stopped, as there is no point fitting something faster than your external network. And, in fairness, those AV2 1200’s have been utterly trouble free for ten years now. They ‘just work’.
About one month ago, our internet went totally dead. Out came the Eircom guy and he fitted a new line, which turned out to be flaky, with a best connection speed of about 20 - 30 Mbps and constant dropouts. We had the exact same problem when we first moved in, and I had the Eircom guy out like a yoyo to fix it until they finally found some combination of wiring which was fairly stable – you’d still get occasional outages for a few minutes, but it was liveable and you generally got about 90 Mbps. To be honest, 90 Mbps is plenty of internet for most use cases – you don’t need more; anything above is a ‘nice to have’. Indeed, if anything, the biggest issue now with vDSL is latency: those few hundred metres of copper to the cabinet add an extra 10 milliseconds or so, which is a significant chunk of the total latency to anywhere in Europe.
I wasn’t in the mood to go through that circus again of getting Eircom constantly out to fiddle with the vDSL, so I looked into getting fibre installed into the home. I had fibre installed into the site last year, where to avoid the install fee you needed to sign up to a 24 month contract. Seeing as I very seriously hope that we are out of here a year from now, that would have been a non-starter; thankfully, only a year later, the minimum contract length to get free installation has dropped to twelve months. So I signed up!
Fibre to the Home (FTTH)
Fibre to the home is interesting stuff. Your traditional analogue phone line is a twisted pair of copper (and often copper coated aluminium in Ireland) cables between the cabinet and your house. It arrives into your house as maybe a 3 mm diameter cable. What fibre to the home does is physically replace that cable with an identical one, but with a fibre optic cable within instead. The big advantage of this is one to one physical compatibility – often reusing the existing hole in the wall as you simply pull out the old cable and push in the new cable. You can reuse the same terminating enclosure in the wall, same fixings from your roof to the pole etc.
In fact, fibre to the home is probably the only place where multi-mode rather than single-mode fibre is going to survive into the long term. Multi-mode fibre carries light along multiple propagation paths (modes) within a single fibre, whereas single-mode fibre carries exactly one. My future house uses dual channel single-mode fibre throughout – it is nowadays the same price as multi-mode as the fibre has become so cheap, and if you can run a fibre for each direction, then single-mode fibre is superior in every way. My very cheap 2.5 Gbps fibre transducers will do 10 km of fibre or so, so they’re way overkill, and with more powerful transducers you can do 10 Gbps over 100 km without issue. The multi-mode fibre used in FTTH swaps far shorter reach – 10 Gbps might only work over 550 metres or so – for the ability to carry multiple signals within a single fibre. Run half a dozen fibres per cable from the cabinet to the pole, and you can give each home off that pole up to 10 Gbps each. As all vDSL connections are within hundreds of metres of the cabinet, multi-mode fibre really shines as a direct substitute in this case.
Anyway, this Monday they’ll be swapping my analogue cable to the pole outside for a fibre cable to the same pole and then I’ll have a 500 Mbps internet connection (which is the lowest still available in Ireland for fibre). Obviously a Powerline Network able to do only 100 Mbps would mean giving up most of the internet speed once again, so that’s what motivated me to look into the latest and greatest in Powerline technology to see what has replaced my AV2 1200’s.
G.hn
It turns out that there has been no replacement for AV2 in the past ten years! In fact, the AV2 consortium wound itself up in 2018, considering its work ‘done’. Their last AV2 release was AV2 2000 to supersede the 1200, but as https://www.smallnetbuilder.com/tools/charts/powerline/view/ shows, it’s actually slower than the 1200 at short to medium distances and only really improves at long distances.
There is however a new kid on the block: G.hn, which stands for ‘Gigabit Home Networking’. It has a very different lineage. It was originally for putting ethernet over analogue telephone wires i.e. rather like vDSL, and it’s much closer to vDSL in terms of implementation though still quite different.
That smallnetbuilder list of benchmarks above does show a G.hn entry, and it’s slower than AV2 1200 – but not by much: 120 Mbps vs 160 Mbps. What has changed since is that there is a newer edition of G.hn called ‘Wave 2’ which uses MIMO, whereas original G.hn only used SISO. So it should now be rather faster than before. Unfortunately, there is a real lack of anything empirical on the internet about G.hn Wave 2. There are no reviews at all comparing different technologies on the same network. Some folk on Reddit and in Amazon reviews were positive, some were negative. There was nothing conclusive – which is why I have written all this up, because nobody seems to have done any actual side by side testing despite G.hn Wave 2 products having landed in 2021 or so.
Benchmarking G.hn vs Homeplug AV2
I took a punt on a G.hn Wave 2 kit, once again from TP-Link, and I swapped the existing TP-Links identically for the new ones with an identical network and identical testing. It should be mentioned that I have two AV2 networks in the home, and I only replaced one of them for the testing, so the power cables are ‘noisy’ with traffic from the other AV2 network. I should also mention that I explicitly disabled vDSL compatibility for both AV2 and G.hn, which defaults to on for both, and I disabled power saving for both.
| | PHY TX | PHY RX | iperf TX | iperf RX | Efficiency | ping TX | ping RX |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Homeplug AV2 1200 | 387 Mbps | 426 Mbps | 75.7 Mbps | 93.6 Mbps | ~20% | 6 ms | 5 ms |
| G.hn 2400 | 731 Mbps | 920 Mbps | 230 Mbps | 296 Mbps | ~30% | 14 ms | 3 ms |
That is about three times faster for TCP bandwidth than before, which is quite impressive. The ping times get better in one direction, but much worse in the other. Weird, though it could be the other AV2 network interfering.
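The efficiency column is simply the measured iperf TCP throughput divided by the reported PHY link rate. A quick Python sanity check using the table's numbers:

```python
# Efficiency = measured TCP throughput / reported PHY link rate.
# All numbers (Mbps) are taken from the benchmark table above.
links = {
    "Homeplug AV2 1200": {"phy": (387, 426), "iperf": (75.7, 93.6)},
    "G.hn 2400":         {"phy": (731, 920), "iperf": (230, 296)},
}

for name, d in links.items():
    tx_eff = d["iperf"][0] / d["phy"][0]
    rx_eff = d["iperf"][1] / d["phy"][1]
    print(f"{name}: TX {tx_eff:.0%}, RX {rx_eff:.0%}")
```

That yields roughly 20-22% for AV2 and 31-32% for G.hn, matching the table's ~20% and ~30%.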
It looks like I’ll get about half of my shiny new 500 Mbps fibre connection, which is still three times better than the vDSL when it was at its best. Probably more importantly, ten milliseconds should be lopped off latencies, meaning that the internet will ‘feel’ much faster.
And I guess we’re not moving out now until April 2026!
I have completed migrating from the sixth to the seventh generation dedicated server architecture, and the sixth generation hardware will idle from now on, probably until its rental expires in the summer. So far, so good – to date it’s running very well, and the difference in terms of email client speed is very noticeable. It took a fair few months to get it done, and probably at least fifty hours of my time between all the researching and configuration etc. But that should be my public server infrastructure solved until I’m fifty-five years old or thereabouts.
Also some months ago I happened to be reading Hacker News where somebody mentioned that ancient Roman coins are so plentiful that you can buy bags of them for not much money at all. That kinda led me down a bit of a rabbit hole; it turns out a surprising amount of ancient coinage has survived – probably for the dumb simple reason that it was valuable by definition, so people went out of their way to retain and preserve it. You can, off eBay, get a genuine Greek solid silver tetradrachm made during the reign of Alexander the Great for under €100 if you bid carefully at auction – complete with portrait of the man himself! As much as buying a whole load of ancient silver and gold coinage has a certain appeal, it is a horrendous consumer of both money and time, for which I currently have much higher priority uses. But I did see you can pick up unwashed Late Roman Imperial ‘shrapnel’ cheaply enough that I reckoned it worth buying a few as a teaching opportunity for my children.
So I purchased ten unwashed bronzes for fifty euro – an absolute rip off considering you can get a thousand unwashed bronzes for under a thousand euro, but I suppose there are claims that mine would come from a checked batch with more ‘good ones’ in it, i.e. legible ones. Well, after the kids had scrubbed them with toothbrushes and soaked them in deionised water, repeating a few times over several days, here are the three best of the ten coins:


The first I reckon is a Constantine (unsure which); the second I think is Valentinian III (425-455 AD); the third isn’t quite clear enough to make out, but is almost certainly a Late Roman Emperor. A further three coins had a bust which could be just about made out, but not well enough to say which emperor; of the remaining four, three only had some letters which could be made out and nothing else, and on the last you could maybe make out something coin-like if you squinted hard enough – but neither writing nor bust.
Certainly an expensive way of learning about history, but hopefully one that they’ll remember. The key lessons taught were: (i) long lived emperors turn up more frequently than short lived ones; (ii) emperors who debased their money by minting a lot of coin also turn up more frequently; and (iii) we get a lot of Late Roman Imperial coin turning up because at the end of the empire, the owners of buried stashes either died in the instability or the stash simply became not worth digging up, as Imperial coin isn’t worth much without an Empire to spend it in. Having hopefully communicated these three points to my children, I guess I can draw a line under this teaching opportunity.
Solar panel history for my site
In Autumn 2023 – can you believe it was nearly eighteen months ago now! – a virtual diary entry showed my newly mounted solar panels on the site. These eighteen panels are roughly half of the future house roof panels; half was deliberately chosen because you cannot fit more than twenty panels per string, which implies eighteen panels on one string and twenty on the other.
The Sungrow hybrid inverter has performed absolutely flawlessly during this time. The early days had many uncontrolled outages during the winter period as I hadn’t yet figured out quite the right settings (my first time installing and commissioning a solar panel system!), but by March 2024 I had nearly all the configuration kinks ironed out. Since then – apart from a ‘loop of death’ outage in November 2024 which was due to a very rare combination of events – it really has been solid as a rock.
To be clear, if less radiation falls from the sky than is consumed by the security cameras and internet connection there, yes the batteries do run down and eventually the system turns off. I call this a ‘controlled outage’ because the system detects it will soon run out of power and it turns everything but the inverter off. It then recharges the batteries up to a minimum threshold before restoring power, and at no point does the system get confused. This is different to an uncontrolled outage where the inverter does not recharge the batteries for some reason, and enters into a fault condition requiring me to manually intervene on site.
That ‘loop of death’ I mentioned is an example. Previously, I had the system configured to never let the battery drop below 5% charge, and that worked fine. Unfortunately, last November there was a sudden drop in temperature just after the battery had reached 4% charge or so. Lower temperatures mean less battery capacity, so that 4% suddenly became effectively zero. This caused the computer inside the batteries to physically disconnect them to prevent them getting damaged. When the sunshine reappeared, the physical switch connecting the batteries had tripped, and there was no ability to charge them. I didn’t notice this for a few days as it was an especially dull week of weather; only when it kept not coming back did I drive out to investigate, where I was obviously appalled: if I couldn’t get any charge back into the batteries, I couldn’t prevent the physical safety relays from firing, and that would turn several thousand euros of battery into bricks. That was quite a stressful morning. Still, I got them rescued, and I tweaked the configuration to henceforth never let the batteries get below 20% charge instead. That worked brilliantly – the entire winter 24-25 period of little solar irradiation passed without a single further uncontrolled outage.
Anyway, Sungrow offer an optional cloud service integration which provides remote management and remote monitoring via a phone app and/or website. If enabled, it records every five minutes the following measurements into its history database:
- Volts and amps on PV strings one and two.
- Volts and amps on each phase of the three phase AC output.
- Total DC power in kW.
- Total AC power in kW (from this you can calculate inverter efficiency).
- Battery charging or discharging power in kW.
You can get a whole bunch more measurements from the cloud interface, but as far as I can tell, the above are the only ones stored in a long term time series database. Said database is downloadable as your choice of CSV or Excel; however, their export tool only permits thirty minute granularity if you’re downloading a month or more. That’s good enough for my use case, which is attempting to estimate how much power those panels could gather if all the power they could generate were used.
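As a sketch of what crunching such an export can look like, the code below parses a tiny made-up sample with Python's csv module and derives inverter efficiency from the DC and AC power columns, as mentioned in the list above. The column names and values here are invented for illustration – a real Sungrow export will have different headers.

```python
import csv
import io

# Invented sample mimicking a 30-minute-granularity history export.
# Real Sungrow exports use different column headers -- inspect yours first.
sample = """timestamp,total_dc_power_kw,total_ac_power_kw,battery_power_kw
2024-06-20 12:00,5.10,4.90,1.20
2024-06-20 12:30,4.80,4.62,0.90
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    dc = float(row["total_dc_power_kw"])
    ac = float(row["total_ac_power_kw"])
    # Inverter DC -> AC conversion efficiency at this sample
    print(row["timestamp"], f"inverter efficiency {ac / dc:.1%}")
```

The same loop extends naturally to summing battery charge power per day, which is what the estimates later in this post are built from.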
Daily hours of light
For obvious reasons, if the sun isn’t shining then solar panels cannot generate power. As we live quite far north, there is considerable variance in daylight hours across the year: approximately 7.75 hours at the winter solstice up to 16.75 hours at the summer solstice. That is 32% of the day in winter, and 70% of the day in summer. This is a best case – while solar panels work surprisingly well on bright cloudy days, they do not work well on dull cloudy days. A short day means less opportunity for thick cloud to pass within the hours of daylight.
Solar panels, interestingly, develop their maximum voltage if radiation lands on them exactly perpendicularly. If it lands obliquely, you get less voltage, and indeed much of the recent technological progress in solar panels has come from increasing the voltage developed over a wider angle of incidence. Voltage will appear with almost any amount of light – indeed, as my time series history clearly shows, a full moon on a clear night will generate more than fifty volts across those eighteen panels. You won’t get more than a few watts out of it, maybe enough to charge a phone, but it’s not nothing. I can also see that peak voltage – around 730 volts – clearly happens in winter, whereas summer might not beat around 690 volts. This is because these panels are mounted at 45 degrees, and when the sun is high the angle is quite oblique to their perpendicular. In any case, we can tell when light reaches the panels by when voltage appears on the PV string, and for our graph below we count the number of half hour slots with more than 500 volts appearing on the PV string.
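That counting step is trivial to express in code – a minimal sketch, with invented voltage samples standing in for a day of 30-minute PV string readings:

```python
# Count 'daylight' as the number of 30-minute samples where the PV string
# shows more than 500 volts. The voltages below are invented sample data.
THRESHOLD_V = 500

half_hour_voltages = [0, 12, 55, 320, 510, 640, 705, 698, 655, 540, 480, 60]

daylight_slots = sum(1 for v in half_hour_voltages if v > THRESHOLD_V)
daylight_hours = daylight_slots * 0.5
print(daylight_slots, "slots =", daylight_hours, "hours")
```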
The next bit is harder. The batteries start charging as soon as enough power appears on the panels that it is worth sending some to the batteries. Having stood next to the inverter, I can tell you it appears to determine how much load it can put on the panels by incrementally scaling up how many amps it draws from them, backing off if the voltage droops. I can tell this from the relays clicking, and from a voltage and current meter attached (note that standard consumer multimeters cannot handle > 500 volts! You need a trade multimeter for this). Obviously, the time series we have doesn’t capture any of this, and only reports how much power was flowing into the battery at any given time. And once the battery is full, it stops charging it.
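That load-scaling behaviour is essentially maximum power point tracking by perturb-and-observe. Here is a toy sketch of the idea – the panel model, numbers, and thresholds are all invented, and a real inverter's MPPT is far more sophisticated:

```python
# Toy perturb-and-observe sketch of the behaviour described above: step up
# the current drawn from the string, and back off once the voltage droops
# enough that power starts falling. All numbers here are invented.

V_OPEN = 730.0      # open-circuit voltage of the string
I_MAX = 12.0        # current at which the toy string collapses

def string_voltage(current):
    # Crude model: voltage sags increasingly steeply near the limit.
    return V_OPEN * max(0.0, 1.0 - (current / I_MAX) ** 3)

current, step, best_power = 0.0, 0.5, 0.0
while True:
    v = string_voltage(current + step)
    power = v * (current + step)
    if power <= best_power:      # voltage drooped enough to lose power
        break                    # back off: stay at the previous draw
    current += step
    best_power = power

print(f"settled at {current:.1f} A, ~{best_power:.0f} W")
```

In this toy model the loop settles just below the string's maximum power point; the cloud history only ever records the resulting battery charging power, not this control loop.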
This tends to mean that only the very beginning of each morning charges the battery, and therefore our only measurements for estimating how much power these panels can gather are for the very start of the day only. This matters, because solar irradiation has a curve like this:

… where zero is the horizon, and that curve is for June 20th at my latitude. This means solar irradiation reaches two thirds full strength four hours into the day, so measuring capture for only the first few hours of the day will grossly underestimate total capacity to capture for a whole day. I therefore need to ‘undo’ that curve, which looks to be approximately x^2 or x^4.
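One way to sanity-check that 'two thirds at four hours' figure is with a simple sine-shaped irradiation model over the 16.75-hour midsummer day mentioned earlier (the sine model is my assumption, not anything from the measured data):

```python
import math

# Assumed model: relative irradiation follows a sine arc from sunrise to
# sunset over a 16.75-hour midsummer day.
day_length_h = 16.75

def relative_strength(hours_after_sunrise):
    return math.sin(math.pi * hours_after_sunrise / day_length_h)

print(f"{relative_strength(4.0):.2f}")  # roughly two thirds
```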
Anyway, I chose x^0.25 and here is the year of 2024 (I actually tell a small lie – Jan/Feb are actually 2025, because of all the uncontrolled outages in Jan/Feb 2024. It’s why I was waiting until March 2025 to write up this post):

As previously described, the blue line is the total number of 30 minute periods with more than 500 volts on the PV string – this strongly tracks the number of daylight hours, unsurprisingly, with the variance due to cloud cover. As mentioned above, ignore the dip in November with the ‘loop of death’, and do bear in mind that for Nov-Dec-Jan-Feb there can be occasional gaps in the data due to controlled outages caused by lack of power raining down from the sky. Obviously if there is no power, there is no internet, and the numbers then don’t appear on Sungrow’s cloud service. This artificially depresses those months, but it also artificially boosts them, because the batteries will often suck in 8 - 10 kWh in a day during those months, which makes that day look unusually good.
Something perhaps surprising about the blue line is that it ranges between 20% and 60%, rather than between 32% and 70% as described above. The answer is simple: geography. We have tree cover to the west which chops off the end of the day in summer, and mountains to the south which chop off both sunrise and sunset in winter. The panels are mounted on the ground so they are particularly affected by geography – once onto the house’s roof, that effect should be markedly diminished.
The red line is the estimated number of kWh available per day based on the rate of charging in the morning descaled by x0.25, and then linearly adjusted to match this estimate of solar PV production from my house’s PHPP computer model of its predicted performance:

This is for thirty-seven panels, so divide everything by two to get what PHPP thinks ought to be the solar PV yield for this location. I matched my estimated graph such that Jun-Jul matches what this graph predicts (~27 kWh/day), as does Dec-Jan (~10 kWh/day).
So, the obvious elephant in the room is that the curves of the two graphs don’t match! To be honest, the PHPP curve looks like a sunrise-time graph, whereby – due to how the planet rotates whilst also going around the sun – sunrise gets earlier more quickly at the beginning of the year. This might be a bug in PHPP? I have a second set of kWh per day estimations for the house from the Easy PV website:

Now that looks more like my graph! There is an off-centre bias towards Apr-May, and a similar raised tail in Aug-Sep to the PHPP estimate, but it’s less pronounced. Easy PV also thinks a bit less power will be captured in summer, and especially less in winter (the red is contribution back to the grid; the green is charging of the battery; the blue is consumption).
My graph does show a raised tail in Aug-Sep, but no off-centre bias towards Apr-May. But you know, it could be as simple as the weather in Apr-May 2024 being unusually cloudy. It’s entirely possible; each year’s graph will have its own shape, and only by averaging say ten years of them might you get the shapes that Easy PV and PHPP show.
Perhaps a future virtual diary entry here might average all the annual results and find out?
The next virtual diary entry
Which brings me onto the likely topic of my next virtual diary entry here.
I haven’t written here about geopolitics in a very long time, certainly decades. It’s not that I haven’t been keeping up to date and well informed; rather, to be honest, I don’t think my thoughts on it are worth typing up in my very limited spare time. If I am to invest multiple hours clarifying my thinking onto this virtual paper, generally it is because:
- I need a searchable record of my past thinking for future projects. This has been 65% of everything I’ve written here in certainly the past fifteen years.
- It helps me to clarify what I am actually thinking by writing out prose to describe that thinking, even if I never expect to need to refer to that thinking again in the future. This might be as much as 30% of everything I’ve written here in the past fifteen years.
And because my thinking on geopolitics usually really doesn’t matter, it isn’t worth investing a non-trivial amount of my free time to write it up.
I am one of the few, I believe, to have correctly predicted the current secession of the United States from its colonial outposts in Europe in approximate timing, form and nature. Because I never wrote any of that down, only the people who know me well enough to have heard me blabbing on about all this since a few years after the financial collapse will be able to confirm it. I first formed the suspicion roughly after returning from working for BlackBerry in Canada, it got confirmed by how the first election of Donald Trump came about, and obviously we are right now in the beginning of said secession.
Most of such ‘pub bar talk’ material is harmless and irrelevant – a hobby which thankfully usually goes unpunished when aired publicly in the collective West, unlike in most of the rest of the World. But when trillions of euro will be spent and billions of lives are about to radically change from the trajectory they were previously on, it actually matters enough to be worth writing up here.
My family, but also my friends, my neighbours, my colleagues and indeed my people will now not live the rest of their lives along the patterns previously assumed. Seeing as they rather matter to me, I ought to clarify my thinking on this topic in order to have my best guess at what will happen in the future before I die. Only then can I guide those I care about in the right directions as best I can.
So I need to write something up. It will likely take me several weeks to phrase it correctly. But I do think it needs doing.
If you’re interested in such things, watch out for that here. If you’re not, remember to skip the next post! Until then, be happy!
This post will be mainly about testing the seventh generation of my public server infrastructure. Last December I discussed the current economics of the market, which are driving me towards a colocated server solution for the first time ever – quite the shift in exigencies in recent years. As you will see, this new solution has big gains in some areas, but a few regressions in others.
Firstly, to summarise past posts a little, what has shifted is that obsolete servers offered as budget dedicated servers have been rising in price as datacenter electricity has risen in price. This is because obsolete hardware, whilst nowadays very good at idle power consumption, can still consume a fair whack of power when doing anything, so its peak power consumption makes it expensive. If you can reduce your space footprint down to two credit cards and your power consumption down to fifteen watts – or especially ten watts or less – there are colocation options available nowadays far cheaper than renting a budget obsolete server.
I gave a list of those I could find in previous posts, and I ended up choosing the cheapest, which was FinalTek in the Czech Republic at €1.97 inc VAT per server per month if you buy three years of colocation at once. This, as I noted in earlier posts, is a 100 Mbit ‘unlimited’ shared service on a 1 Gbit switch, so you get 1 Gbit between the two Pis but up to 100 Mbit to the public internet. I’ll quote their setup email at this point for requirements:
The device must meet the following parameters:
- it must be in a box (not a bare board with exposed circuitry)
- the power adapter to the device must be for Czech power grid (EU power plug)
- dimensions must not exceed 15 x 10 x 5 cm
- must not have a power consumption greater than 5V / 3A
- must be set to a static IP address
- send the device to the address below

If the device uses an SD card for operation, it is advisable to pack spare cards with a copy of the OS in case the primary card fails.
As mentioned in previous posts, it is the 5v @ 3a requirement which filters out Intel N97 mini PCs which probably can be configured to run under 15 watts (and definitely under 20 watts), but they generally need a 12v input. They’re far better value for money than a Raspberry Pi based solution which is around one third more expensive for a much worse specification. You can of course fit a higher spec Banana Pi or any other Single Board Computer (SBC) also able to take 5v power, but none of those have the effortless software ecosystem maturity of the Raspberry Pi i.e. an officially supported Ubuntu Server LTS edition which will ‘just work’ over the next four years. So, to put it simply, the one single compelling use case I have ever found for a high end Raspberry Pi is cheap dedicated server colocation. For this one use case, they are currently best in class with current market dynamics.
Even with the very low monthly colocation fees, this hasn’t been an especially cheap exercise. Each Raspberry Pi 5 with case cost €150 inc VAT or so. Add power supply €10 and used NVMe SSD off eBay €35 and you’re talking €200 inc VAT per server. Over three years, that’s equivalent to €7.64 inc VAT per server per month which is similarly priced to my existing rented Intel Atom C2338 servers (~€7.38 inc VAT per server per month). So this solution overall is not cheaper, but as previous posts recounted you get >= 2x the performance, memory and storage across the board. And, the next cheapest non-Atom rented server is €21 inc VAT per month, and this is one third the cost of that all-in.
Assuming market dynamics continue to shift along their current trajectories, when I next look at new server hardware in 2032 I wonder whether performance per watt will have improved enough to fit good enough hardware for a 2030’s software stack into an even smaller power budget. In the list of colocation providers I gave in previous posts, many capped power at ten watts max or the price went shooting up quickly. That’s enough for a Raspberry Pi 4, but they’re as slow as my existing Intel Atom C2338 rented servers, plus they can’t take an NVMe SSD. Seven years from now, I would assume there will be a Raspberry Pi 6, and/or cheap colocation for 12v ten watt max mini PCs might have become affordable. It’ll be interesting to see how trends play out.
In any case, server capability per inflation adjusted dollar continues its exponential improvement over time. The fact I can colocate a server for the cost of a cup of coffee per month is quite astounding given I grew up in a time when colocation space cost at least €10k per 1U rack per month. I reckon they’re fitting ten to twelve SBCs per 1U rack, so that’s ~€20-24 per rack slot per month – which is 99.8% cheaper than in the 1990s! In case you’re wondering, a 1U slot with max 100 watts power currently costs about €30-40 ex VAT per month, so I guess FinalTek are relying on those Pis not drawing all of their fifteen watt power budget to make a profit!
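The back-of-envelope arithmetic behind those figures, with my guessed packing density made explicit:

```python
# Back-of-envelope check of the colocation economics quoted above.
per_sbc_per_month = 1.97            # EUR inc VAT, 3-year FinalTek deal
sbcs_per_1u = (10, 12)              # guessed packing density per 1U rack

low = per_sbc_per_month * sbcs_per_1u[0]
high = per_sbc_per_month * sbcs_per_1u[1]
print(f"~EUR {low:.0f}-{high:.0f} per 1U per month")

# versus the (at least) EUR 10k per 1U per month of the 1990s
saving = 1 - high / 10_000
print(f"{saving:.1%} cheaper")
```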
Raw storage device performance
The Raspberry Pi 5 has a single PCIe 2.0 lane available to be connected to a NVMe adapter. Much debugging and tweaking has been done by RPI engineers in the past year to get that PCIe lane running at 3.0 speed and working without issue over a wide range of NVMe SSDs. The most recent significant compatibility improvement was only in December 2024’s firmware, so this has been an ongoing process since the Pi 5 was launched in Autumn 2023.
Most – but not all, as we shall see – original RPI NVMe SSD compatibility issues have been worked around such that compatibility is now very good. Just make sure you install a year 2025 or newer EEPROM firmware and you should be good to go with running PCIe 3.0 on any of the after market NVMe expansion kits.
I ended up fitting a 256 GB well used Samsung SM961 off eBay to europe7a and an official Raspberry Pi 512 GB NVMe SSD to europe7b, after wasting a lot of time on a Samsung PM9B1 SSD which nearly works. It turns out that the Samsung PM9B1 actually has a Marvell controller, and that is very finickety: it doesn’t like the RPI, it also doesn’t like one of the USB3 NVMe enclosures I have, but it’s happy in the other USB3 NVMe enclosure I have. I think there’s nothing wrong with the SSD apart from limited compatibility, and as the PM9B1 was an OEM only model they only needed it to work well in the OEM’s specific hardware.
The official Raspberry Pi NVMe used to be a rebadged Samsung PM991a, which is a superb SSD. Unfortunately, at some point they silently swapped it for a rebadged Biwin AP425, which is an industrial SSD. The Biwin is fast in smoke testing, but it doesn’t implement TRIM, so I’m unsure how either its performance or longevity would hold up over extended use. It is also a RAMless design, and having a RAM cache on the SSD particularly benefits random reads on the Pi in my testing. So the used Samsung SSD with about 80k hours and ~25 TB written (i.e. 100 total drive writes, which the firmware thinks is 9% spare threshold used) ended up going into the primary server, and the brand new Biwin SSD into the failover server.
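For what it's worth, the drive-writes arithmetic on that used SM961 checks out:

```python
# ~25 TB written to a 256 GB drive is on the order of 100 full drive writes.
capacity_gb = 256
written_tb = 25

drive_writes = written_tb * 1000 / capacity_gb
print(f"~{drive_writes:.0f} full drive writes")
```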
For the 256 GB Samsung SM961:

```
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=10000 oflag=direct
10485760000 bytes (10 GB, 9.8 GiB) copied, 15.1569 s, 692 MB/s
```

Idle is 0.3 watts, write load is +3.6 watts.

```
dd of=/dev/null if=/dev/nvme0n1 bs=1M count=10000 iflag=direct
10485760000 bytes (10 GB, 9.8 GiB) copied, 12.6483 s, 829 MB/s
```

Idle is 0.3 watts, read load is +3.3 watts.

For the 512 GB Biwin AP425 (official RPI NVMe SSD):

```
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=10000 oflag=direct
10485760000 bytes (10 GB, 9.8 GiB) copied, 14.5891 s, 719 MB/s
```

Idle is near zero, write load is +0.9 watts.

```
dd of=/dev/null if=/dev/nvme0n1 bs=1M count=10000 iflag=direct
10485760000 bytes (10 GB, 9.8 GiB) copied, 13.4262 s, 781 MB/s
```

Idle is near zero, read load is +0.8 watts.
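Those MB/s figures are dd’s own arithmetic, in decimal megabytes; recomputing them is straightforward:

```python
# Recompute dd's reported throughput: bytes / seconds, in decimal MB
# (dd's "MB/s" means 10^6 bytes per second, not MiB).
def dd_throughput_mb_s(bytes_copied: int, seconds: float) -> float:
    return bytes_copied / seconds / 1e6

runs = {
    "SM961 write": (10_485_760_000, 15.1569),  # dd said 692 MB/s
    "SM961 read":  (10_485_760_000, 12.6483),  # dd said 829 MB/s
    "AP425 write": (10_485_760_000, 14.5891),  # dd said 719 MB/s
    "AP425 read":  (10_485_760_000, 13.4262),  # dd said 781 MB/s
}
for name, (nbytes, secs) in runs.items():
    print(f"{name}: {dd_throughput_mb_s(nbytes, secs):.0f} MB/s")
```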
Remarks
Raw bandwidth is approx 2x that of the SATA SSD on my Atom C2338 servers. As it’s NVMe instead of SATA, latency will be orders of magnitude lower too, but milliseconds to microseconds won’t matter much for a web server.
RAM does consume power, and you see it in the idle power consumption above. The Samsung SSD is approx 4x less power efficient during reads and writes than the Biwin SSD which in fairness uses very, very little power in smoke testing. Obviously the amount of time your internet server spends doing sustained reads or writes will generally be minimal, so apart from peak power consumption calculations to ensure you fit inside the max colocation power limit, the idle power consumption will be what is used almost all of the time.
I tried battering the Biwin SSD with sustained writes, and yes, after a while you start seeing power consumption spikes of about four watts while write performance nose dives. Leave it alone for a while and it recovers. This suggests that it does have some form of SLC cache to enable fast burst writes at low power consumption. If so, why on earth it doesn’t also implement TRIM is beyond me, as an SLC cache implies a block storage emulation layer internally.
Filesystem storage performance
I only run ZFS on my public servers, which has been the case for many years now, principally for ZFS’s awesome auto-replication feature whereby ZFS filesystems can be mirrored very efficiently across multiple machines. I have ZFS replication running between the two public servers every few minutes, and from the public servers to home and from home to the public servers. I therefore have multiple offsite backups of everything down to a few minutes of lag, which, as I learned from the two week outage in 2020, is very wise.
The slow Intel Atom C2338 with LZ4 compressed unencrypted ZFS was surprisingly okay at about 348 Mb/sec reads and 80 Mb/sec writes. It did rather less well with encrypted storage at 43 Mb/sec read and 35 Mb/sec write; this is because ZFS in kernel 5.4 wasn’t able to use the Intel AES-NI hardware acceleration functions, so everything was done in software.
Ubuntu 24.04 LTS comes with kernel 6.8, and if I were on Intel its ZFS would now use AES-NI hardware acceleration. Unfortunately, as https://github.com/openzfs/zfs/issues/12171 which adds AArch64 hardware crypto acceleration to ZFS is unmerged, we still fall back to software cryptography. LZ4 compressed unencrypted ZFS reads at about 448 Mb/sec and writes at 144 Mb/sec – a 28% and 80% performance improvement – but encrypted reads are 84 Mb/sec and encrypted writes are 51 Mb/sec, which, whilst still a nice improvement, is far short of the ~500 Mb/sec rates that ZFS patch would produce if merged.
Still, it is only the email storage and a few other bits which use encrypted ZFS. Most of the system uses unencrypted LZ4 compressed ZFS, and its data is highly compressible, unlike the random bytes I used for the testing. So in practice you’ll get much better performance than the above.
Indeed, this RPI can saturate its 1 Gbit NIC from the ZFS filesystem with ease, something the Intel Atom C2338 wasn’t quite strong enough to do. I suspect kernel sendfile() improvements are also at work, as the Intel Atom C2338 + ZFS could only push 59 Mb/sec down its NIC if the file content had to be read from the SSD, or 80 Mb/sec if the file content was in the ZFS RAM cache. A quick Google search seems to confirm that ZFS copy_file_range() support was merged in Ubuntu 24.04, so kernel zero copy byte range splice support is indeed just fresh out of the oven, as it were.
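That kernel splice path can be sketched in Python via os.copy_file_range() – purely illustrative of the mechanism, not how ZFS or the web server actually serve files:

```python
import os

def splice_copy(src_path: str, dst_path: str) -> int:
    """Copy a file, using the kernel's zero-copy path where available."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        if hasattr(os, "copy_file_range"):  # Linux only, Python 3.8+
            total = 0
            while True:
                # The kernel moves up to 1 MiB per call; the data never
                # round-trips through a userspace buffer.
                n = os.copy_file_range(src.fileno(), dst.fileno(), 1 << 20)
                if n == 0:
                    return total
                total += n
        # Portable fallback: an ordinary buffered read/write.
        return dst.write(src.read())
```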
But aren’t we on a 100 Mbit capped public internet connection, so does it matter if the board can saturate 1 Gbit?
FinalTek network performance
I performed these benchmarks at 1am BST (2am CET) to ensure a relatively quiet public internet. Firstly, between adjacent Pi servers I get 936 Mbit, and scp has since confirmed many times that yes, the switch is 1 Gbit as promised.
The sixth generation server has an ‘up to 1 Gbit’ NIC. It is located in Amsterdam, whereas the seventh gen is in the Czech Republic. There is 22 milliseconds of latency between them, which is surprisingly high given that the distance between them is under 1000 km (approx 5 ms of latency). It turns out that traffic routes via the backhaul of Vodafone, who are a British company, so I suspect traffic takes an elongated route far from a geographical straight line. That said, I measured 96.6 Mbit from dedi6 to dedi7, and 887 Mbit from dedi7 to dedi6.
Yes, that did make me think I had made a mistake too. So I ran the same test from my site’s fibre internet, which has a 1 Gbit downstream and 100 Mbit upstream. There is a 38 millisecond latency between my site and FinalTek, with 61 Mbit from site to dedi7, and 226 Mbit from dedi7 to site. Here is how traffic is routed:
```
traceroute -4 dedi7.nedprod.com
traceroute to dedi7.nedprod.com (46.167.244.57), 30 hops max, 46 byte packets
 1  217-183-226-1-dynamic.agg3.mlw.lmk-mlw.eircom.net (217.183.226.1)  2.395 ms  3.295 ms  0.691 ms
 2  eth-trunk137.hcore1.lmk.core.eircom.net (86.43.253.148)  9.729 ms  5.629 ms  4.578 ms
 3  eth-trunk13.hcore1.dbn.core.eircom.net (159.134.123.9)  13.517 ms  10.842 ms  8.655 ms
 4  * * *
 5  dln-b3-link.ip.twelve99.net (62.115.32.200)  9.360 ms  7.846 ms  8.852 ms
 6  vodafonepeering-ic-356964.ip.twelve99-cust.net (213.248.98.129)  8.622 ms  9.161 ms  7.572 ms
 7  ae25-xcr1.ltw.cw.net (195.2.3.129)  38.989 ms  37.176 ms  37.839 ms
 8  ae37-pcr1.fnt.cw.net (195.2.2.74)  32.413 ms  29.676 ms  30.388 ms
 9  ae4-ucr1.czs.cw.net (195.2.10.233)  36.986 ms  36.364 ms  36.577 ms
10  vodafonecz-gw3.czs.cw.net (195.2.12.42)  38.170 ms  39.950 ms  41.302 ms
11  ip-81-27-200-59.net.vodafone.cz (81.27.200.59)  37.554 ms  36.569 ms  44.476 ms
12  tachyon.finaltek.net (77.48.106.250)  40.717 ms  38.654 ms  39.420 ms
13  europe7a.nedproductions.biz (46.167.244.57)  40.171 ms  39.042 ms  39.527 ms
```
(cw.net is Cable & Wireless, the internet backhaul subsidiary of Vodafone. Between Eircom and them is twelve99, which used to be the familiar TeliaSonera of yore, now called Arelion, who are Swedish.)
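As a rough sanity check on that 22 ms figure, here is a back-of-envelope estimate, assuming a hypothetical ~1000 km fibre route and light propagating at roughly two thirds of c in glass (both assumptions, not measured values):

```python
# Estimate the round-trip time a direct ~1000 km fibre route would give.
C_KM_S = 299_792              # speed of light in vacuum, km/s
FIBRE_KM_S = C_KM_S * 0.67    # approximate propagation speed in fibre

def fibre_rtt_ms(route_km: float) -> float:
    # Round trip is twice the one-way propagation delay.
    return 2 * route_km / FIBRE_KM_S * 1000

print(f"{fibre_rtt_ms(1000):.1f} ms")
```

Propagation alone would be ~5 ms each way, i.e. ~10 ms round trip, so a measured 22 ms RTT is consistent with traffic detouring well beyond the straight-line path.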
I did a bit more testing, and it looks like FinalTek only throttle inbound to 100 Mbit, not outbound. That means the public can download from my website at up to 1 Gbit. This was quite unexpected for the monthly cost – I had assumed a straight 100 Mbit throttle per MAC address, with maybe a minimum bandwidth guarantee of 5 Mbit, like you might get at OVH et al. Apparently not.
Obviously FinalTek have promised nothing other than ‘up to 100 Mbit shared’ so network performance can absolutely worsen in the future and they’re still within claims. But consider me very impressed for the money!
I run CloudFlare in front of the website in any case, so it doesn’t actually matter if the NIC is faster than claimed except when CF is fetching uncached content, which happens for two thirds of all requests according to its dashboard (I am on the free tier). I have no idea if there is any DDoS protection from FinalTek – I would assume not, but again CloudFlare should take care of that too, at least for the website part of things.
Final words
I’m not sure what more there is to say other than that I am very pleased with the considerable performance improvements in the seventh generation of my server infrastructure. It should be good for at least seven years once I’m fully migrated over, at which point I’ll release the sixth generation from rental (I have already released all the VPSs I used to rent to provide redundant DNS, now that CloudFlare and Hurricane Electric provide redundant DNS for free with far lower latencies than any VPS could). Despite the nearly five hundred euro outlay on new hardware, over four years it will be cheaper than the present rental costs, and over seven years it should be far cheaper. Let’s hope that the hardware is reliable and trouble free during that time!
Re: the house build, that is still stuck in joist design, as it has been since November last year. I don’t know when it’ll get free of that; it isn’t for lack of me nagging people. My next post might be on the eighteen solar panels I’ve had installed on the site since October 2023, as I now have two winters of measurements to analyse: the inverter captures lots of data every ten minutes or so, so I have a large time series database. I think it worth a post here trying to figure out what I might be able to infer from all that data.
Until the next time, be happy!
What came back to us is this first draft of a joist design:

… which, obviously enough, arranges all the joists left-right. For any normal house that would be fine, but in our specific case we have a highly insulated fresh air duct for the ventilation because it carries most of the space heating. That means it’s fat, especially in the hallway just outside the utility room where it comes from the MVHR. There, the fresh air duct is approx 280 mm in diameter after the insulation, and my architect and I had solved this by running the joists up-down just for the hallway, and left-right elsewhere.
Normally you should always orientate your joists in the same direction as the ridge of your roof, so if the ridge runs left-right as it does in this house, so must your joists. This is because the roof trusses will be perpendicular to your ridge, and you then want the floor joists to be perpendicular to those again to brace them.
However, due to the vaulted areas in this particular house, we have two steel portal frames either side of the vaulted area, with steel beams pinning both of the house ends to those frames, and then two more steel beams pinning both portal frames together (really, this is a steel frame house). Because the steel takes on most of the bracing work, you can be freer with the joist orientation, which is why my architect and I took advantage of that during duct routing.
Unfortunately, my TF supplier was adamant that if we wanted up-down joist runs in the hallway, we’d need to fit steel, and that steel would need to be custom designed by a specialist engineer and coordinated with the joist design to fit. That sounded (a) expensive, as no designer in this current busy market is willing to take on new work unless their fees are at least €5k, and steel is hardly cheap either, and (b) like months more of delay, at best.
So that now meant we needed to get those ducts through a 304 mm tall joist, which has an absolute maximum of 300 x 210 mm of clearance. The alternative was months more of delay and large unexpected additional sums of money. Wonderful.
High end custom made industrial ducting
Here were the choices before us:
1. Custom steel joist design. Cost: five figures. Lead time: months.
2. Increase joist height from 304 mm to 417 mm. Cost: five figures. Lead time: weeks (mainly to redesign all the ceilings). Also: you lose a fair bit of ceiling height.
3. Instead of insulated circular steel ducting, use high end custom made industrial ducting. Cost: four figures. Lead time: days. Would need a bit of rejigging of the duct layout, as the 300 x 210 mm pass through limits airflow to the equivalent of a 160 mm diameter duct.
So option 3 was the obvious one (well, initially I didn’t know how much it would cost, but I asked for quotes, got back numbers, and now I know it’s four figures of added cost. Painful, but less than the alternatives). Within Europe, there are a number of prefabricated preinsulated duct manufacturers all supplying the standard DIN sizes cheaply, but if you want custom dimensions they get expensive. Indeed, the gap between their cost and the ultra fancy Kingspan Koolduct becomes reasonable. That stuff is the second highest end insulated duct on the market, due to it using a special heat resistant and stiffened phenolic foam cut by CNC machine and then hand assembled into the finished form. As the hand assembly is done by western Europeans using foam blown in western European factories, it isn’t cheap. But in terms of performance for the space it’s very good, and much better than can be achieved with conventional fibreglass wool duct insulation:
- @ 10 C, 0.022 W/mK (vs 0.037 W/mK for ductwrap)
- @ 25 C, 0.024 W/mK
- @ 50 C, 0.027 W/mK
- @ 80 C, 0.031 W/mK (vs 0.046 W/mK for ductwrap)
The only better performing duct material is vacuum panel at 0.007 W/mK. It’s superb, but you could buy a fair portion of a whole house with what it costs.
52 mm of this phenolic insulation delivers these U-values:
- @ 10 C, 0.423 W/m2K
- @ 25 C, 0.461 W/m2K
- @ 50 C, 0.519 W/m2K
- @ 80 C, 0.596 W/m2K
I assumed 18.5 m2 of fresh air duct for my calculations (this is pre-recent layout changes). If so, one would leak the following amounts of energy out of the ductwork, thus not delivering it to the outlet if the house is at 21 C:
- @ 10 C, 7.826 W/K, so -86 watts.
- @ 25 C, 8.529 W/K, so 34 watts.
- @ 50 C, 9.602 W/K, so 278 watts.
- @ 80 C, 11.03 W/K, so 651 watts.
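Those figures are just loss = U × A × ΔT. A quick sketch reproducing the wattages from the U-values above (the 18.5 m² area and 21 C house temperature are as stated):

```python
# Heat flow through the duct wall: loss = U * A * (T_duct - T_house).
# Negative numbers mean the duct air is cooler than the house, so the
# ductwork gains heat rather than losing it.
AREA_M2 = 18.5   # assumed fresh air duct area
HOUSE_C = 21.0   # house temperature

U_VALUES = {10: 0.423, 25: 0.461, 50: 0.519, 80: 0.596}  # W/m2K

def duct_loss_w(u_w_m2k: float, duct_c: float) -> float:
    return u_w_m2k * AREA_M2 * (duct_c - HOUSE_C)

for duct_c, u in U_VALUES.items():
    print(f"@ {duct_c} C: {duct_loss_w(u, duct_c):.0f} watts")
```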
Are these losses acceptable? It depends on the rate of ventilation air flow, because the losses are fixed by the temperature difference while the amount of energy transported rises with air flow:
| Duct air temperature | Heating @ 200 m3/hr | Losses | Heating @ 400 m3/hr | Losses | Heating @ 600 m3/hr | Losses |
|---|---|---|---|---|---|---|
| 10 C | -645 W | 11.74% | -1290 W | 5.87% | -1936 W | 3.91% |
| 25 C | 235 W | 12.81% | 469 W | 6.40% | 704 W | 4.27% |
| 50 C | 1701 W | 14.41% | 3403 W | 7.20% | 5104 W | 4.80% |
| 80 C | 3461 W | 16.54% | 6923 W | 8.27% | 10384 W | 5.51% |
I would expect the system to run at no less than 150 m3/hr unless the house is unoccupied, so losses to ductwork of greater than ten percent are distressing. The next Koolduct size up is 60 mm which is 15% better, but now you swap air pressure loss for thermal efficiency. I’ve been a little unreasonable in this design by aiming for a 0.5-0.75 Pa/linear metre pressure loss which is excessively loose (less than 2 Pa/metre overall is what you need to achieve). But it’s for good reason - the boost fans can increase the air flow for a specific outlet or inlet very significantly, and you want enough slack in the ducts to cope. Also, due to me not having the modelling software, I’ve ignored friction, bends and several other factors which cause additional pressure loss over linear metre calculations.
There is also the cost angle: the 52 mm Koolduct costs about €110 ex VAT per metre. The 60mm stuff I would assume would be maybe €126/metre. Given that standard DIN size preinsulated ducts (albeit with EPS instead of phenolic board insulation) cost about €60 ex VAT/metre, one is paying a big enough price premium already to fit these ducts inside the joists.
I think, to be honest, it’ll just have to do.
The new duct routing
So, having digested all of that, I have drawn to correct scale a potential duct routing, with sufficient dimensions for the flows, which definitely fits through and between the draft joist layout above:


Unfortunately, that’s not the end of it. Obviously the above are not ‘CAD quality’ drawings despite the fact that they are exactly to scale and indeed of identical resolution to the original CAD file (Inkscape is surprisingly good). And in any case, we shall need a 3D design in order to send the exact order for the Koolducts through, because there is vertical detail omitted on that 2D layout.
Hence I’ve come to an arrangement with my architect to do some additional work to turn this into 3D and adjust the joists (in some places they just don’t make sense). Hopefully he can deliver that next week, and then we can unblock forward progress with the joist design.
Here’s hoping!
Raspberry Pi colocation
You may remember my recent post on upgrading my internet server infrastructure in which I said I was researching whether to replace my rented Intel Atom 220 based public servers with colocated Raspberry Pi 5’s. Well here they are nearly ready to be dispatched to the colocation:

These will be my seventh generation of public internet infrastructure. One will act as primary, the other as failover. Both will be in the same rack so they get a 1 Gbps network connection between them, but external internet will be clamped to 100 Mbit shared with everybody else on that rack. For under €9 ex VAT/month, I can’t complain.
As you may have concluded, after quite a few head scratches I got my full public Docker Compose based software stack up and working, with offsite ZFS replication running well. They are very noticeably faster than the current Intel Atom 220 infrastructure, plus, due to having double of pretty much everything (RAM, SSD, CPU etc), they have room to grow. Barring unpleasant surprises, these should last me until 2028 when the next Ubuntu LTS upgrade would be due, and possibly for a second four year period under whatever the next Ubuntu LTS will be then (28.04?). The aim will be to have one returned, upgrade it, send it back, migrate over, then have the second one returned and upgraded. We’ll see how it goes.
I hope to post them on Wednesday – I am waiting for another NVMe SSD for europe7b, because the one I got off eBay wasn’t compatible as it turned out (nearly compatible, and it cost me staying up to 4am one night going WTF? a lot before I realised it wasn’t me and it wasn’t the Pi). So I’ve ordered an official Pi SSD, which is really a rebadged Samsung PM991a SSD. It should work very well, and should arrive on Monday, just in time for me to get the second Pi also configured and ready for dispatch.
3D printing extensions to an IKEA Fridans blind to avoid the expense of IKEA Fyrtur blinds.
… which I kept putting off as it would need a lot of work to type it up. Here is finally that post!
This is probably the longest running project I had of them all: it ran for over a year. What I’m about to write out removes all the blind alleys and the ideas which didn’t work out, none of it helped by usually having to wait two months for new parts from Aliexpress to arrive in the post. Before you knew it, it was a year later. Mad.
Automating blackout blinds cheaply
One of the signature features of any automated house is motorised blinds which automatically open and close themselves according to sunrise and sunset. They have historically been very expensive, typically €500 or more per window, so only very rich people would fit them. IKEA shook up this market with battery powered radio controlled blinds called ‘Fyrtur’ for under €120-160 per blind. These are great and all, but if you have nearly thirty windows, it’s still thousands of euro.
IKEA also sell the Fridans blackout blind which is operated using a stick at its side. This is very considerably cheaper at €22-28 per blind. Their build quality isn’t great, but for the price they’re a good base for a DIY automation which replaces the manual stick at the side with a motor.
This is hardly my own idea. https://www.yeggi.com/q/ikea+fridans/ will show you thousands of 3D printables related to IKEA Fridans blinds. Most involve replacing this part:

This is the manual stick at the side. You push and pull on it to turn the blind – internally there is a cord with plastic beads, which can be re-exposed if desired by cutting off the plastic handle so a motor can push and pull on the cord directly. We’ll be going a bit further – 3D printing a complete replacement for the above with identical dimensions, just minus the handle.
I reckon that, all in, you can do one of these fully automated blinds for under €40 inc VAT for the 200 cm blind, and under €30 inc VAT for smaller blinds. The only thing excluded is the 5v power supply (often an old USB phone charger will do); that’s the price for everything else. This turns the thousands of euro the IKEA Fyrtur solution would have cost me into hundreds of euro instead, and with very little manual work (apart from all the work done to come up with what I’m about to describe now).
Choosing the motor
If you go through the many, many IKEA Fridans blind automation projects, you will find a legion of motor solutions. Some people use 12v, some 9v, some 5v for power. Various microcontroller and driver boards are used, all with different voltages and currents. In my case, I had very specific needs:
- The entire thing had to work within the power budget of an Olimex ESP32-PoE board running off PoE. That, as my notes described, gives you up to four watts off PoE before it browns out, which includes everything on the board or hanging off it. That means max three watts for peripherals.
- There is a much better v2 Olimex PoE board now, for nearly the same price as the original, which has a 12v power tap. Mine, being the older version, has a max 5v supply. So the motor needed to run off 5v, and never consume more watts than the board has spare.
- My longest blind is 1.8 metres, in the kitchen. The motor needs enough torque to turn that from fully extended.
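To make that budget concrete, here is the arithmetic sketched out (the ~1 W figure for the board itself is my assumption, back-derived from the four watt brown-out limit and the three watts left for peripherals):

```python
# Power budget for the blind motor, per the constraints above.
POE_BUDGET_W = 4.0        # Olimex ESP32-PoE browns out above this
BOARD_W = 1.0             # assumed draw of the board itself
PERIPHERAL_BUDGET_W = POE_BUDGET_W - BOARD_W   # 3.0 W for everything else

def stall_watts(volts: float, amps: float) -> float:
    # Worst case draw is a stalled motor at full supply voltage.
    return volts * amps

# e.g. a motor claiming 0.35 A stall at a 6 V test voltage:
print(round(stall_watts(6.0, 0.35), 2))  # 2.1 W, inside the 3 W budget
```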
Those requirements, it turns out, reduce thousands of options down to approximately two. The first I alighted upon is the 28BYJ-48 unipolar motor hacked into a bipolar configuration. As a unipolar motor, its native voltage is 5v at approx 0.1 watts of power. Typical torque is 0.3 kgf.cm, with stall torque at 0.6 kgf.cm. Hacking it into bipolar doubles the active windings, so you now might get 0.2 watts of power, 0.6 kgf.cm of torque and 1.2 kgf.cm of stall torque. Obviously the motor was not designed for both windings to be concurrently active, so it will get hot quickly. The 28BYJ-48 is however cheap for the spec: about €2.50 delivered, which includes a ULN2003 driver board.
I then fortunately landed on something not needing every motor housing to be opened and hand modified: https://github.com/AndBu/YAIFM, and my customisations of his project can be found at https://github.com/ned14/YAIFM. This uses a GA12-N20 bipolar motor with an integrated encoder. These vary in power and spec despite sharing the same model name, so you need to choose very carefully to get one with the right combination of torque and power consumption when stalled.
The one I chose off Aliexpress claimed these specs:
Motor Specification Table. Model: GA12-N20 (GA12-B1215), DC brushed geared motor. Rated voltage: DC 12v. Test voltage: DC 6v.

| No-load speed (RPM) | No-load current (A) | Rated speed (RPM) | Rated torque (kgf.cm) | Rated current (A) | Rated power (W) | Stall torque (kgf.cm) | Stall current (A) | Reduction ratio |
|---|---|---|---|---|---|---|---|---|
| 15 | 0.02 | 12 | 1.25 | 0.05 | 0.3 | 8 | 0.35 | 1000 |
| 25 | 0.02 | 20 | 0.8 | 0.04 | 0.24 | 4.9 | 0.29 | 298 |
| 30 | 0.02 | 24 | 0.85 | 0.04 | 0.24 | 5.85 | 0.31 | 298 |
| 40 | 0.02 | 32 | 0.95 | 0.04 | 0.3 | 6.8 | 0.35 | 298 |
| **50** | **0.02** | **40** | **0.95** | **0.05** | **0.3** | **7.75** | **0.35** | **298** |
| 60 | 0.02 | 48 | 0.8 | 0.05 | 0.3 | 7.3 | 0.35 | 250 |
| 80 | 0.02 | 56 | 0.34 | 0.05 | 0.3 | 4 | 0.35 | 200 |
| 100 | 0.02 | 80 | 0.48 | 0.05 | 0.3 | 3.4 | 0.35 | 200 |
| 120 | 0.02 | 96 | 0.23 | 0.04 | 0.24 | 1.8 | 0.3 | 150 |
| 150 | 0.02 | 160 | 0.27 | 0.05 | 0.3 | 2.15 | 0.36 | 100 |
| 200 | 0.02 | 200 | 0.28 | 0.06 | 0.36 | 2.3 | 0.37 | 100 |
| 300 | 0.02 | 240 | 0.2 | 0.05 | 0.3 | 1.6 | 0.37 | 50 |
| 400 | 0.02 | 320 | 0.24 | 0.05 | 0.3 | 1.8 | 0.37 | 50 |
| 500 | 0.02 | 400 | 0.15 | 0.05 | 0.3 | 1.2 | 0.35 | 30 |
| 600 | 0.02 | 480 | 0.16 | 0.05 | 0.3 | 1.3 | 0.37 | 30 |
| 700 | 0.02 | 560 | 0.17 | 0.06 | 0.36 | 1.4 | 0.37 | 30 |
| 800 | 0.02 | 640 | 0.18 | 0.06 | 0.36 | 1.5 | 0.38 | 20 |
| 1000 | 0.02 | 800 | 0.42 | 0.1 | 0.42 | 1.6 | 0.39 | 20 |
| 1200 | 0.02 | 960 | 0.1 | 0.05 | 0.3 | 0.5 | 0.33 | 10 |
| 1500 | 0.02 | 1200 | 0.07 | 0.04 | 0.24 | 0.55 | 0.33 | 10 |
| 2000 | 0.02 | 1600 | 0.08 | 0.05 | 0.3 | 0.6 | 0.35 | 10 |
Again, I stress that the above table is what the Aliexpress listing claims for my specific GA12-N20 motor. Other GA12-N20 motors will have very different tables.
The 50 rpm model highlighted (which has nearly the maximum stall torque of all the models) at 6v uses 0.3 watts; typical torque is 0.95 kgf.cm, with stall torque at 7.75 kgf.cm. Max current at stall is 0.35 amps (2.1 watts, within my power budget). The motor plus a DRV8833 driver for it is about €6 delivered, so nearly double the cost of the previous option. However, it delivers (i) 58% more turning torque, (ii) nearly 6.5x the stall torque, and (iii) I’m fairly sure the chances of coil burnout are much lower with this choice.
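Recomputing that comparison from the two motors’ quoted figures (the 28BYJ-48 numbers are the doubled-winding estimates from earlier):

```python
# Torque comparison: 28BYJ-48 (bipolar hack) vs the 50 rpm GA12-N20.
byj_torque, byj_stall = 0.6, 1.2     # kgf.cm, estimated after the bipolar hack
n20_torque, n20_stall = 0.95, 7.75   # kgf.cm, from the listing's table

print(f"turning torque: {(n20_torque / byj_torque - 1) * 100:.0f}% more")  # 58% more
print(f"stall torque:   {n20_stall / byj_stall:.1f}x")                     # 6.5x
```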
Not all GA12-N20 motors come with a rotary encoder, which you will need: it counts how many turns the motor does, which we then use in software to wind the blind to exact positions. A six wire cable is usually supplied, and its pinout means:
- Red: Motor forwards.
- White: Motor backwards.
- Blue: Common ground.
- Black: 3.3v - 5v power supply for the encoder.
- Green: Encoder A phase.
- Yellow: Encoder B phase.
You still need a driver for the motor, which is the DRV8833 dual H-Bridge. It works the same as any other H-Bridge motor driver: you set a PWM either forwards or backwards as desired, and the motor goes. The DRV8833 rather usefully will take a TTL input and output 5v, so you don’t need a level shifter. Just feed its Vin with 5v, also raise its EEP input to 5v, and voila, it all just works with the PWM inputs straight off the ESP32, using the 5v supply off the Olimex board.
ESPHome’s rotary encoder driver will read the A and B encoder pulses. Make SURE you connect 3.3v as its power supply, otherwise the encoder outputs will have too much voltage for the ESP32.
Anyway here it is in action, being driven by ESPHome:
Blind motor driven by a DRV8833 using the 5v supply from PoE
I did lots of testing to try to make the solution brown out, but I failed. I found the following power consumptions off PoE:
- No load, just turning: 105 mW.
- Max load, PWM at 50%: 263 mW.
- Max load, PWM at 100%: 315 mW.
These seemed surprisingly low, so I redid them off USB power:
- No load, just turning: 166 mW.
- Max load, PWM at 100%: 600 mW.
The cause of the disparity is that the PoE power conversion circuitry is especially inefficient at low loads, but gets more efficient as load goes up. The effect is to ‘hide’ some of the power consumption of the motor. Obviously, I only care about peak PoE consumption, so 315 mW looks great.
What about stall current? Well, the thing has so much torque that you need two hands to stop it turning. I have my software detect jams within 160 milliseconds and cut the power. Perhaps that meant I never saw the stall current for long enough to measure it, but equally the Aliexpress listing could just be inflating all the power claims by 50-100%, as they sometimes do. 350 mA at 6v should be ~292 mA at 5v, which is 1,460 mW. I didn’t measure even half that, including when I stalled the motor.
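That scaling assumes stall current is set purely by winding resistance, so it falls linearly with voltage; the arithmetic sketched out:

```python
# Stall current scaled from the 6 V test voltage down to the 5 V supply,
# assuming a purely resistive (ohmic) stalled winding.
STALL_A_AT_6V = 0.35

def stall_at(volts: float) -> tuple[float, float]:
    amps = STALL_A_AT_6V * volts / 6.0
    return amps, amps * volts   # (amps, watts)

amps, watts = stall_at(5.0)
print(f"{amps * 1000:.0f} mA, {watts * 1000:.0f} mW")  # ~292 mA, ~1458 mW
```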
There is another possibility: the DRV8833 is mounted on a board called ‘HW-627’. There is very little information about that board that I (or anybody else) can find, but it may well configure the overload protection in the DRV8833 to cut off the power in the case of stall at some fairly low limit. I can say I see several capacitors and resistors on the board, so it’s entirely possible they set a lower overload limit.
Making the blind stop turning when open
The original YAIFM project used either a mechanical switch to detect the blind being fully open, or you had to manually program a count every time there was a power loss. The switch is visually intrusive, and manually setting a count for each of thirty blinds isn’t practical. So I wondered: could we have the ESP32 detect when the blind stops turning, and choose that as the new base point for winding the blind down and up until the next power loss?
The first thing I’d need to fix is that the Fridans blinds have nothing to stop them turning once fully open, because the blind bottom will happily wrap round the upper housing. To solve this, I designed some 3D printed inserts to extend the width of the bottom of the blind. These also double as an ‘anti-fray’ shield, because the bottom corners of those Fridans blinds are notorious for getting scruffy very quickly:

One side of the 3D printed width extension for the Fridans blind. The existing weighted plastic bar in the blind's bottom inserts into the 3D printed component to hold it in. A pleasant looking oval shaped external 'knob' then protrudes in a way which ensures it prevents the blind passing through once fully open.
Strengthening the 3D prints
The next problem I found is that the plastic just can’t take the torque that this motor puts out. I know this from manually putting load onto the motor by hand: it did not take long before the D-shaped hole for the motor in the printed plastic went circular, after which the blind spool wouldn’t turn. This clearly needed to be fixed.
After a great deal of searching, I finally found some metal cogs off Aliexpress which fit the motor’s D shaft (I won’t bore you with the cogs tried and found not to fit; that wasted many months. I really wish listings all described measurements the same way!). What you need is the ‘9 Teeth D Type’, which has an outer diameter of 8 mm and is 7.4 mm long. The key measurement is from the flat part of the D hole rising perpendicularly to the topmost of the rounded part of the D hole – that needs to be 2.4 mm if you want it loose, 2.3 mm if you want it tight. For some reason these can cost a lot or almost nothing depending on the listing, so for reference mine cost €0.82 inc VAT each delivered.
I then remeshed the original YAIFM blind spool to take the metal cog instead of the D-shaped shaft of the motor. I also thickened up some of the plastic, as I reckoned it would be getting repeated stress.


Above is the 3D printed blind spool with cog shaped hole, the metal cog, the GA12-N20 motor with rotary encoder, and its cable. I added two metal washers between the metal cog and the motor to ensure horizontal force landed mainly on the motor housing, not on the motor shaft. You do still get the weight of the blind bearing down on the motor shaft, but it’s probably good for a few years.
Putting it all together:


And that’s pretty much it! The great thing about this particular IKEA Fridans blind customisation is that the 3D printed parts exactly replace the originals in dimensions, so as you can see in the rightmost picture above, the blind fits exactly as before, except you now have a wiring connector. From that you take your cable to your MCU.
The motorised blind in action
Completed blind being automated from ESPHome exclusively using the 5v supply from PoE for power
This won’t have looked like a particularly long post, and it’s not. Where most of the real work went was in preparing all the materials for upload, which meant cleaning them up, writing a single coherent set of truth from all the notes, and then writing it up effectively three times: once for Thingiverse, once for Github, and once for here. Here are the links:
Thanks for reading and enjoy!