Welcome to ned Productions (non-commercial personal website, for commercial company see ned Productions Limited). Please choose an item you are interested in on the left hand side, or continue down for Niall’s virtual diary.
Niall’s virtual diary:
Started all the way back in 1998 when there was no word “blog” yet, hence “virtual diary”.
Original content has undergone multiple conversions: Microsoft FrontPage => Microsoft Expression Web, legacy HTML tag soup => XHTML, XHTML => Markdown, and with a ‘various codepages’ => UTF-8 conversion for good measure. Some content, especially the older stuff, may not have survived entirely intact, particularly in terms of broken links or images.
- A biography of me is here if you want to get a quick overview of who I am
- An archive of prior virtual diary entries is available here
- For a deep, meaningful moment, watch this dialogue (needs a video player), or for something which plays with your perception, check out this picture. Try moving your eyes around - are those circles rotating???
Latest entries: 
I have completed migrating from the sixth to the seventh generation dedicated server architecture, and the sixth generation hardware will idle from now on, probably until its rental expires in the summer. So far, so good – to date it’s running very well, and the difference in terms of email client speed is very noticeable. It took a fair few months to get it done, and probably at least fifty hours of my time between all the researching and configuration etc. But that should be my public server infrastructure solved until I’m fifty-five years old or thereabouts.
Also, some months ago I happened to be reading Hacker News where somebody mentioned that ancient Roman coins are so plentiful that you can buy bags of them for not much money at all. That kinda led me down a bit of a rabbit hole: it turns out a surprising amount of ancient coinage has survived – probably for the dumb simple reason that it was valuable by definition, so people went out of their way to retain and preserve it. You can, off eBay, get a genuine Greek solid silver tetradrachm made during the reign of Alexander the Great for under €100 if you bid carefully at auction – complete with a portrait of the man himself! As much as buying a whole load of ancient silver and gold coinage has a certain appeal, it is a horrendous consumer of both money and time, for which I currently have much higher priority uses. But I did see that you can pick up unwashed Late Roman Imperial ‘shrapnel’ for cheap enough that I reckoned it worth buying a few as a teaching opportunity for my children.
So I purchased ten unwashed bronzes for fifty euro – an absolute rip-off considering you can get a thousand unwashed bronzes for under a thousand euro, but I suppose there are claims that mine would come from a checked batch with more ‘good ones’ in it, i.e. legible ones. Well, after the kids had scrubbed them with toothbrushes and soaked them in deionised water in between, repeating a few times over several days, here are the three best of the ten coins:


The first I reckon is a Constantine (unsure which); the second I think is Valentinian III (425-455 AD); the third is not quite clear enough to make out, but is almost certainly a Late Roman emperor. A further three coins had a bust which could be just about made out, but not well enough to say which emperor; of the remaining four, three only had some letters which could be made out and nothing else, and on the last we could maybe make out something coin-like if you squinted hard enough – but neither writing nor bust.
Certainly an expensive way of learning about history, but hopefully one that they’ll remember. The key lessons taught were: (i) long-lived emperors turn up more frequently than short-lived ones; (ii) emperors who debased their money by minting a lot of coin also turn up more frequently; and (iii) we get a lot of Late Roman Imperial coin turning up because, at the end of the empire, the owners of buried stashes either died in the instability or the stash simply became not worth digging up, as Imperial coin isn’t worth much without an Empire to spend it in. Having hopefully communicated these three points to my children, I guess I can draw a line under this teaching opportunity.
Solar panel history for my site
In Autumn 2023 – can you believe it was nearly eighteen months ago now! – a virtual diary entry showed my newly mounted solar panels on the site. These eighteen panels are half of the future house roof panels; half was deliberately chosen because you cannot fit more than twenty panels on a string, which implies eighteen panels on one string and twenty on the other.
The Sungrow hybrid inverter has performed absolutely flawlessly during this time. The early days had many uncontrolled outages during the winter period, as I hadn’t yet figured out quite the right settings (my first time installing and commissioning a solar panel system!), but by March 2024 I had nearly all the configuration kinks ironed out. Since then – apart from a ‘loop of death’ outage in November 2024 which was due to a very rare combination of events – it really has been solid as a rock.
To be clear, if less radiation falls from the sky than is consumed by the security cameras and internet connection there, yes the batteries do run down and eventually the system turns off. I call this a ‘controlled outage’ because the system detects it will soon run out of power and it turns everything but the inverter off. It then recharges the batteries up to a minimum threshold before restoring power, and at no point does the system get confused. This is different to an uncontrolled outage where the inverter does not recharge the batteries for some reason, and enters into a fault condition requiring me to manually intervene on site.
That ‘loop of death’ I mentioned is an example. Previously, I had the system configured to never let the battery drop below 5% charge, and that worked fine. Unfortunately, last November what happened was a sudden drop in temperature after the battery had reached 4% charge or so. Lower temperatures mean less battery capacity, so that 4% suddenly became effectively zero. This caused the computer inside the batteries to physically disconnect them to prevent them getting damaged. When the sunshine reappeared, the physical switch connecting the batteries had been tripped, so there was no way to charge them. I didn’t notice this for a few days as it was an especially dull week of weather; only when it kept not coming back did I drive out to investigate, where I was obviously appalled: if I couldn’t get any charge back into the batteries, I couldn’t prevent the physical safety relays from firing, which would turn several thousand euros of battery into bricks. That was quite a stressful morning. Still, I got them rescued, and I tweaked the configuration to henceforth never let the batteries get below 20% charge instead. That worked brilliantly – the entire winter 2024-25 period of little solar irradiation passed without a single further uncontrolled outage.
Anyway, Sungrow offer an optional cloud service integration which provides remote management and remote monitoring via a phone app and/or website. If enabled, it records the following measurements into its history database every five minutes:
- Volts and amps on PV strings one and two.
- Volts and amps on each phase of the three phase AC output.
- Total DC power in kW.
- Total AC power in kW (from this you can calculate inverter efficiency).
- Battery charging or discharging power in kW.
You can get a whole bunch more measurements from the cloud interface, but as far as I can tell, the above are the only ones stored in a long term time series database. Said database is downloadable as CSV or Excel; however, their export tool only permits thirty minute granularity if you’re downloading a month or more. That’s good enough for my use case, which is attempting to estimate how much power those panels could gather if all the power they could generate were used.
Daily hours of light
For obvious reasons, if the sun isn’t shining then solar panels cannot generate power. As we live quite far north, there is considerable variance in daylight hours across the year: approximately 7.75 hours at the winter solstice up to 16.75 hours at the summer solstice. That is 32% of the day in winter, and 70% of the day in summer. This is a best case – while solar panels work surprisingly well on bright cloudy days, they do not work well on dull cloudy days. A short day means less opportunity for thick cloud to pass within the hours of daylight.
Solar panels, interestingly, develop their maximum voltage if radiation lands on them exactly perpendicularly. If it lands at an angle away from perpendicular, you get less voltage, and indeed much of the recent technological progress in solar panels has come from increasing the voltage developed over a wider range of angles. Voltage will appear with almost any amount of light – indeed, as my time series history clearly shows, a full moon on a clear night will generate more than fifty volts across those eighteen panels. You won’t get more than a few watts out of it, maybe enough to charge a phone, but it’s not nothing. I can also see that peak voltage – around 730 volts – clearly happens in winter, whereas summer might not beat around 690 volts. This is because these panels are mounted at 45 degrees, and when the sun is high the angle is quite oblique to their perpendicular. In any case, we can tell when light reaches the panels by when voltage appears on the PV string, and for our graph below we count the number of half hour slots with more than 500 volts appearing on the PV string.
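That count is a one-liner over the exported CSV. A minimal sketch, assuming the export has the timestamp in column one and the string 1 voltage in column two (the real export’s column layout will differ):

```sh
# Count, per day, how many 30-minute samples had more than 500 V on the PV string.
awk -F, 'NR > 1 && $2+0 > 500 {
  split($1, ts, " ")     # keep just the date part of the timestamp
  slots[ts[1]]++
}
END { for (day in slots) print day, slots[day] }' sungrow_export.csv | sort
```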
The next bit is harder. The batteries start charging as soon as enough power appears on the panels that it is worth sending some to the batteries. Having stood next to the inverter, I can tell you it appears to determine how much load it can put on the panels by incrementally scaling up how many amps it draws from them, backing off if the voltage droops. I can tell this from the relays clicking, and from a volt and current meter attached (note that standard consumer multimeters cannot handle > 500 volts! You need a trade multimeter for this). Obviously, the time series we have doesn’t capture any of this, and only reports how many kW were flowing into the battery at any given time. And once the battery is full, it stops charging it.
This tends to mean that only the very beginning of each morning charges the battery, and therefore our only measurements for estimating how much power these panels can gather are for the very start of the day only. This matters, because solar irradiation has a curve like this:

… where zero is the horizon, and that curve is for June 20th at my latitude. This means solar irradiation reaches two thirds full strength four hours into the day, so measuring capture for only the first few hours of the day will grossly underestimate total capacity to capture for a whole day. I therefore need to ‘undo’ that curve, which looks to be approximately x² or x⁴.
Anyway, I chose x^0.25 (the fourth root) and here is the year of 2024 (I actually tell a small lie – Jan/Feb are actually 2025, because of all the uncontrolled outages in Jan/Feb 2024; it’s why I waited until March 2025 to write up this post):

As previously described, the blue line is the total number of 30 minute periods with more than 500 volts on the PV string – this strongly tracks the number of daylight hours, unsurprisingly, with the variance due to cloud cover. As mentioned above, ignore the dip in November with the ‘loop of death’, and do bear in mind that for Nov-Dec-Jan-Feb there can be occasional gaps in the data due to controlled outages caused by lack of power raining down from the sky. Obviously if there is no power, there is no internet, and the numbers then don’t appear on Sungrow’s cloud service. This artificially depresses those months, but it also artificially boosts them, because the batteries will often suck in 8-10 kWh in a day during those months, which makes that day look unusually good.
Something perhaps surprising about the blue line is that it ranges between 20% and 60%, rather than between 32% and 70% as described above. The answer is simple: geography. We have tree cover to the west which chops off the end of the day in summer, and mountains to the south which chop off both sunrise and sunset in winter. The panels are mounted on the ground, so they are particularly affected by geography – once they’re up on the house’s roof, that effect should be markedly diminished.
The red line is the estimated number of kWh available per day, based on the rate of charging in the morning descaled by x^0.25 and then linearly adjusted to match this estimate of solar PV production from my house’s PHPP computer model of its predicted performance:

This is for thirty-seven panels, so divide everything by two to get what PHPP thinks ought to be the solar PV yield for this location. I matched my estimated graph such that Jun-Jul matches what this graph predicts (~27 kWh/day), as does Dec-Jan (~10 kWh/day).
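In case the shape of that calculation is unclear, here is a minimal sketch of it under my own assumptions about the intermediate data: it presumes a two-column file of per-day average morning charging power has already been extracted from the export, and k is a placeholder calibration constant you would tune until Jun-Jul lands on ~27 kWh/day and Dec-Jan on ~10 kWh/day as described above:

```sh
# Estimated kWh/day ~= k * (average morning charging kW)^0.25
# Input format assumed: "YYYY-MM-DD avg_morning_charge_kW" per line.
awk -v k=18 '{ printf "%s %.1f\n", $1, k * ($2 ^ 0.25) }' morning_charge.txt
```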
So, the obvious elephant in the room is that the curves of the two graphs don’t match! To be honest, the PHPP graph looks like a sunrise-time graph, where, due to how the planet rotates whilst also going around the sun, sunrise gets earlier more quickly at the beginning of the year. This might be a bug in PHPP? I have a second set of kWh-per-day estimations for the house from the Easy PV website:

Now that looks more like my graph! There is an off-centre bias towards Apr-May, and a similar raised tail Aug-Sep to the PHPP estimate, but it’s less pronounced. Easy PV also thinks a bit less power will be captured in summer, but especially in winter (the red is contribution back to the grid; the green is charging of battery; the blue is consumption).
My graph does show a raised tail in Aug-Sep, but no off-centre bias towards Apr-May. But do you know, it could be as simple as the weather in Apr-May 2024 being unusually cloudy? It’s entirely possible; each year’s graph will have its own shape, and only by averaging say ten years of them might you get the shapes that Easy PV and PHPP show.
Perhaps a future virtual diary entry here might average all the annual results and find out?
The next virtual diary entry
Which brings me onto the likely topic of my next virtual diary entry here.
I haven’t written here about geopolitics in a very long time, certainly decades. It’s not that I haven’t been keeping up to date and well informed; rather, to be honest, I don’t think my thoughts on it are worth typing up in my very limited spare time. If I am to invest multiple hours clarifying my thinking onto this virtual paper, generally it is because:
- I need a searchable record of my past thinking for future projects. This has been 65% of everything I’ve written here in certainly the past fifteen years.
- It helps me to clarify what I am actually thinking by writing out prose to describe that thinking, even if I never expect to need to refer to that thinking again. This might be as much as 30% of everything I’ve written here in the past fifteen years.
And because my thinking on geopolitics usually really doesn’t matter, it hasn’t been worth investing a non-trivial amount of my free time to write it up.
I believe I am one of the few to have correctly predicted the current secession of the United States from its colonial outposts in Europe in approximate timing, form and nature. Because I never wrote any of that down, only the people who know me well enough to have heard me blabbing on about all this since a few years after the financial collapse will be able to confirm it. I formed the suspicion roughly after returning from working for BlackBerry in Canada; it got confirmed by how the first election of Donald Trump came about; and obviously we are right now at the beginning of said secession.
Most such ‘pub bar talk’ material is harmless and irrelevant – a hobby thankfully not usually punished when indulged in publicly in the collective West, unlike in most of the rest of the world. But when trillions of euro will be spent and billions of lives are about to change radically from the trajectory they were previously on, it actually matters enough to be worth writing up here.
My family, but also my friends, my neighbours, my colleagues and indeed my people will now not live the rest of their lives along the patterns previously assumed. Seeing as they rather matter to me, I ought to clarify my thinking on this topic in order to have my best guess at what will happen in the future before I die. Only then can I guide those I care about in the right directions as best I can.
So I need to write something up. It will likely take me several weeks to phrase it correctly. But I do think it needs doing.
If you’re interested in such things, watch out for that here. If you’re not, remember to skip the next post! Until then, be happy!
This post will be mainly about testing the seventh generation of my public server infrastructure. Last December I discussed the current economics of the market driving me towards a colocated server solution for the first time ever, which showed quite the shift in exigencies in recent years. As you will see, this new solution has big gains in some areas, but a few regressions in others.
Firstly, to summarise past posts a little, what has shifted is that obsolete servers offered as budget dedicated servers have been rising in price as datacenter electricity has risen in price. This is because obsolete hardware, whilst nowadays very good at idle power consumption, can still consume a fair whack of power when doing anything, so its peak power consumption makes it expensive to host. If you can reduce your space footprint down to two credit cards and your power consumption down to fifteen watts, or especially ten watts or less, there are colocation options available nowadays far cheaper than renting a budget obsolete server.
I gave a list of those I could find in previous posts, and I ended up choosing the cheapest, which was FinalTek in the Czech Republic at €1.97 inc VAT per server per month if you buy three years of colocation at once. This, as I noted in earlier posts, is a 100 Mbit ‘unlimited’ shared service on a 1 Gbit switch, so you get 1 Gbit between the two Pis but up to 100 Mbit to the public internet. I’ll quote their setup email at this point for the requirements:
The device must meet the following parameters:
- it must be in a box (not a bare board with exposed circuitry)
- the power adapter to the device must be for Czech power grid (EU power plug)
- dimensions must not exceed 15 x 10 x 5 cm
- must not have a power consumption greater than 5V / 3A
- must be set to a static IP address
- send the device to the address below

If the device uses an SD card for operation, it is advisable to pack spare cards with a copy of the OS in case the primary card fails.
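For what it’s worth, the static IP requirement is a one-file job on Ubuntu Server. A minimal sketch assuming netplan and an interface named eth0 – the addresses below are documentation placeholders, you would use whatever FinalTek assign:

```sh
sudo tee /etc/netplan/01-colo-static.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.0.2.10/24]   # placeholder - the assigned address
      routes:
        - to: default
          via: 192.0.2.1           # placeholder - the assigned gateway
      nameservers:
        addresses: [1.1.1.1, 9.9.9.9]
EOF
sudo netplan apply
```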
As mentioned in previous posts, it is the 5v @ 3a requirement which filters out Intel N97 mini PCs which probably can be configured to run under 15 watts (and definitely under 20 watts), but they generally need a 12v input. They’re far better value for money than a Raspberry Pi based solution which is around one third more expensive for a much worse specification. You can of course fit a higher spec Banana Pi or any other Single Board Computer (SBC) also able to take 5v power, but none of those have the effortless software ecosystem maturity of the Raspberry Pi i.e. an officially supported Ubuntu Server LTS edition which will ‘just work’ over the next four years. So, to put it simply, the one single compelling use case I have ever found for a high end Raspberry Pi is cheap dedicated server colocation. For this one use case, they are currently best in class with current market dynamics.
Even with the very low monthly colocation fees, this hasn’t been an especially cheap exercise. Each Raspberry Pi 5 with case cost €150 inc VAT or so. Add power supply €10 and used NVMe SSD off eBay €35 and you’re talking €200 inc VAT per server. Over three years, that’s equivalent to €7.64 inc VAT per server per month which is similarly priced to my existing rented Intel Atom C2338 servers (~€7.38 inc VAT per server per month). So this solution overall is not cheaper, but as previous posts recounted you get >= 2x the performance, memory and storage across the board. And, the next cheapest non-Atom rented server is €21 inc VAT per month, and this is one third the cost of that all-in.
Assuming market dynamics continue to shift along their current trajectories, in 2032 when I am next likely to look at new server hardware, it’ll be interesting to see whether performance per watt will have improved enough to get good-enough hardware for a 2030s software stack under a lower power cap. In the list of colocation providers I gave in previous posts, many capped power at ten watts max, or the price went shooting up quickly. That’s enough for a Raspberry Pi 4, but those are as slow as my existing Intel Atom C2338 rented servers, plus they can’t take an NVMe SSD. Seven years from now, I would assume there will be a Raspberry Pi 6, and/or cheap colocation for 12v ten-watt-max mini PCs might by then be affordable. It’ll be interesting to see how the trends play out.
In any case, server capability per inflation-adjusted dollar continues its exponential improvement over time. The fact I can colocate a server for the cost of a cup of coffee per month is quite astounding given I grew up in a time when colocation space cost at least €10k per 1U rack per month. I reckon they’re fitting ten to twelve SBCs per 1U, so that’s ~€20-24 of revenue per 1U slot per month – which is 99.8% cheaper than in the 1990s! In case you’re wondering, a 1U slot with max 100 watts of power currently costs about €30-40 ex VAT per month, so I guess FinalTek are relying on those Pis not drawing all of their fifteen-watt power budget to make a profit!
Raw storage device performance
The Raspberry Pi 5 has a single PCIe 2.0 lane available to be connected to an NVMe adapter. Much debugging and tweaking has been done by RPI engineers in the past year to get that PCIe lane running at 3.0 speed and working without issue over a wide range of NVMe SSDs. The most recent significant compatibility improvement was only in December 2024’s firmware, so this has been an ongoing process since the Pi 5 was launched in Autumn 2023.
Most – but not all, as we shall see – of the original RPI NVMe SSD compatibility issues have been worked around such that compatibility is now very good. Just make sure you install a year 2025 or newer EEPROM firmware and you should be good to go running PCIe 3.0 on any of the aftermarket NVMe expansion kits.
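A minimal sketch of the firmware and PCIe steps, assuming Raspberry Pi OS style tooling (on Ubuntu the EEPROM update packaging may differ slightly):

```sh
# Check the bootloader EEPROM version and stage the latest release (applied on reboot).
sudo rpi-eeprom-update
sudo rpi-eeprom-update -a

# Opt the single PCIe lane into Gen 3 speed (works widely, though not certified).
echo 'dtparam=pciex1_gen=3' | sudo tee -a /boot/firmware/config.txt
sudo reboot

# After rebooting, confirm the negotiated link speed (8 GT/s means PCIe 3.0).
sudo lspci -vv | grep -i 'LnkSta:'
```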
I ended up fitting a 256 Gb well-used Samsung SM961 off eBay to europe7a and an official Raspberry Pi 512 Gb NVMe SSD to europe7b, after wasting a lot of time on a Samsung PM9B1 SSD which nearly works. It turns out that the Samsung PM9B1 actually has a Marvell controller, and that is very finickety: it doesn’t like the RPI, it also doesn’t like one of the USB3 NVMe enclosures I have, but it’s happy in the other USB3 NVMe enclosure I have. I think there’s nothing wrong with the SSD apart from limited compatibility, and as the PM9B1 was an OEM-only model, they only needed it to work well in the OEM’s specific hardware.
The official Raspberry Pi NVMe used to be a rebadged Samsung PM991a, which is a superb SSD. Unfortunately, at some point they silently swapped it for a rebadged Biwin AP425, which is an industrial SSD. The Biwin is fast in smoke testing, but it doesn’t implement TRIM, so I’m unsure how either its performance or longevity would hold up over extended use. It is also a RAMless design, and from testing, having a RAM cache on the SSD particularly benefits random reads on the Pi. So the used Samsung SSD with about 80k hours and ~25Tb written (i.e. 100 total drive writes, which the firmware thinks is 9% spare threshold used) ended up going into the primary server, and the brand new Biwin SSD into the failover server.
For the 256 Gb Samsung SM961
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=10000 oflag=direct
10485760000 bytes (10 GB, 9.8 GiB) copied, 15.1569 s, 692 MB/s
Idle is 0.3 watts, write load is +3.6 watts.
dd of=/dev/null if=/dev/nvme0n1 bs=1M count=10000 iflag=direct
10485760000 bytes (10 GB, 9.8 GiB) copied, 12.6483 s, 829 MB/s
Idle is 0.3 watts, read load is +3.3 watts.
For the 512 Gb Biwin AP425 (official RPI NVMe SSD)
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=10000 oflag=direct
10485760000 bytes (10 GB, 9.8 GiB) copied, 14.5891 s, 719 MB/s
Idle is near zero, write load is +0.9 watts.
dd of=/dev/null if=/dev/nvme0n1 bs=1M count=10000 iflag=direct
10485760000 bytes (10 GB, 9.8 GiB) copied, 13.4262 s, 781 MB/s
Idle is near zero, read load is +0.8 watts.
Remarks
Raw bandwidth is approx 2x that of the SATA SSD on my Atom C2338 servers. As it’s NVMe instead of SATA, latency will be orders of magnitude lower too, but milliseconds to microseconds won’t matter much for a web server.
RAM does consume power, and you see it in the idle power consumption above. The Samsung SSD is approx 4x less power efficient during reads and writes than the Biwin SSD which in fairness uses very, very little power in smoke testing. Obviously the amount of time your internet server spends doing sustained reads or writes will generally be minimal, so apart from peak power consumption calculations to ensure you fit inside the max colocation power limit, the idle power consumption will be what is used almost all of the time.
I tried battering the Biwin SSD with sustained writes, and yes, after a while you start seeing power consumption spikes of about four watts while write performance nosedives. After you leave it for a while it recovers. This suggests that it does have some form of SLC cache to enable fast burst writes at low power consumption. If so, why on earth it doesn’t also implement TRIM is beyond me, as an SLC cache implies a block storage emulation layer internally.
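If you want to check for yourself whether a drive advertises TRIM/discard, a quick sketch (device name assumed):

```sh
# Non-zero DISC-GRAN / DISC-MAX means the block device advertises discard (TRIM).
lsblk --discard /dev/nvme0n1

# Or ask the controller directly (needs nvme-cli); bit 2 of ONCS indicates
# support for the Dataset Management (deallocate) command.
sudo nvme id-ctrl /dev/nvme0n1 | grep -i oncs
```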
Filesystem storage performance
I only run ZFS on my public servers, which has been the case for many years now, principally for ZFS’s awesome auto-replication feature whereby ZFS filesystems can be mirrored very efficiently across multiple machines. I have ZFS replication running between the two public servers every few minutes, and from the public servers to home and from home to the public servers. I therefore have multiple offsite backups of everything down to a few minutes of lag, which, as I learned from the year-2020 two-week outage, is very wise.
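ZFS replication is built on incremental snapshot send/receive; a minimal sketch of a single replication step, with dataset, snapshot and host names as placeholders (in practice a cron job or a tool such as syncoid handles the snapshot bookkeeping):

```sh
# Take a new snapshot, then send everything since the previous snapshot to the
# failover box. -I includes intermediate snapshots; -F on the receiving side
# rolls the target back to the last common snapshot before applying the stream.
zfs snapshot tank/www@2025-03-28_1200
zfs send -I tank/www@2025-03-28_1155 tank/www@2025-03-28_1200 \
  | ssh europe7b.nedproductions.biz zfs receive -F tank/www
```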
The slow Intel Atom C2338 with LZ4-compressed unencrypted ZFS was surprisingly okay at about 348 Mb/sec reads and 80 Mb/sec writes. It did rather less well with encrypted storage at 43 Mb/sec read and 35 Mb/sec write; this is because the ZFS in kernel 5.4 wasn’t able to use the Intel AES-NI hardware acceleration functions, so everything was done in software.
Ubuntu 24.04 LTS comes with kernel 6.8, and if I were on Intel its ZFS would now use AES-NI hardware acceleration. Unfortunately, as https://github.com/openzfs/zfs/issues/12171, which adds AArch64 hardware crypto acceleration to ZFS, is unmerged, we still fall back to software cryptography. LZ4-compressed unencrypted ZFS reads at about 448 Mb/sec and writes at 144 Mb/sec – a 28% and 80% performance improvement respectively – but encrypted reads are 84 Mb/sec and encrypted writes are 51 Mb/sec, which, whilst still a nice improvement, are not the ~500 Mb/sec rates which that ZFS patch, if merged, would produce.
Still, it is only the email storage and a few other bits which use encrypted ZFS. Most of the system uses unencrypted LZ4-compressed ZFS and stores highly compressible data, not the random bytes I used for the testing. So in practice you’ll get much better performance than the above.
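You can ask ZFS how much the real data actually benefits (the pool name is a placeholder):

```sh
# compressratio reports the compression achieved on data already written.
zfs get -r -t filesystem compression,compressratio tank
```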
Indeed, this RPI can saturate its 1 Gbit NIC from the ZFS filesystem with ease, something the Intel Atom C2338 wasn’t quite strong enough to do. I suspect kernel sendfile() improvements are also at work, as the Intel Atom C2338 + ZFS could only push 59 Mb/sec down its NIC if the file content had to be read from the SSD, or 80 Mb/sec if the file content was in the ZFS RAM cache. A quick Google search seems to confirm that ZFS copy_file_range() support was merged in Ubuntu 24.04, so kernel zero-copy byte range splice support is indeed fresh out of the oven, as it were.
But aren’t we on a 100 Mbit capped public internet connection, so does it matter if the board can saturate 1 Gbit?
FinalTek network performance
I performed these benchmarks at 1am BST / 2am CET to ensure a relatively quiet public internet. Firstly, between adjacent Pi servers I get 936 Mbit, and scp has since confirmed many times that yes, the switch is 1 Gbit as promised.
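If you want to reproduce this sort of measurement yourself, iperf3 is the usual tool; a minimal sketch, with the host name as a placeholder:

```sh
# On one end:
iperf3 -s

# On the other end (and from home / dedi6 for the cross-internet numbers):
iperf3 -c europe7a.nedproductions.biz       # transmit towards the server
iperf3 -c europe7a.nedproductions.biz -R    # reverse: pull from the server
```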
The sixth generation server has an ‘up to 1 Gbit’ NIC. It is located in Amsterdam, whereas the seventh gen is in the Czech Republic. There is 22 milliseconds of latency between them, which is surprisingly high given the distance between them is under 1000 km (approx 5 ms of latency). It turns out that traffic routes via Vodafone’s backhaul, who are a British company, so I suspect traffic takes an elongated route far from a geographical straight line. That said, I measured 96.6 Mbit from dedi6 to dedi7, and 887 Mbit from dedi7 to dedi6.
Yes, that did make me think I had made a mistake too. So I ran the same test from my site’s fibre internet, which has a 1 Gbit downstream and 100 Mbit upstream. There is a 38 millisecond latency between my site and FinalTek, with 61 Mbit from site to dedi7, and 226 Mbit from dedi7 to site. Here is how traffic is routed:
traceroute -4 dedi7.nedprod.com
traceroute to dedi7.nedprod.com (46.167.244.57), 30 hops max, 46 byte packets
1 217-183-226-1-dynamic.agg3.mlw.lmk-mlw.eircom.net (217.183.226.1) 2.395 ms 3.295 ms 0.691 ms
2 eth-trunk137.hcore1.lmk.core.eircom.net (86.43.253.148) 9.729 ms 5.629 ms 4.578 ms
3 eth-trunk13.hcore1.dbn.core.eircom.net (159.134.123.9) 13.517 ms 10.842 ms 8.655 ms
4 * * *
5 dln-b3-link.ip.twelve99.net (62.115.32.200) 9.360 ms 7.846 ms 8.852 ms
6 vodafonepeering-ic-356964.ip.twelve99-cust.net (213.248.98.129) 8.622 ms 9.161 ms 7.572 ms
7 ae25-xcr1.ltw.cw.net (195.2.3.129) 38.989 ms 37.176 ms 37.839 ms
8 ae37-pcr1.fnt.cw.net (195.2.2.74) 32.413 ms 29.676 ms 30.388 ms
9 ae4-ucr1.czs.cw.net (195.2.10.233) 36.986 ms 36.364 ms 36.577 ms
10 vodafonecz-gw3.czs.cw.net (195.2.12.42) 38.170 ms 39.950 ms 41.302 ms
11 ip-81-27-200-59.net.vodafone.cz (81.27.200.59) 37.554 ms 36.569 ms 44.476 ms
12 tachyon.finaltek.net (77.48.106.250) 40.717 ms 38.654 ms 39.420 ms
13 europe7a.nedproductions.biz (46.167.244.57) 40.171 ms 39.042 ms 39.527 ms
(cw.net is Cable & Wireless, the internet backhaul subsidiary of Vodafone. Between Eircom and them is twelve99, which used to be the familiar TeliaSonera of yore, now called Arelion, who are Swedish.)
I did a bit more testing, and it looks like FinalTek only throttle inbound to 100 Mbit, not outbound. That means the public can download from my website at up to 1 Gbit. This was quite unexpected for the monthly cost – I had assumed a straight 100 Mbit throttle per MAC address, with maybe a minimum bandwidth guarantee of 5 Mbit, like you might get at OVH et al. Apparently not.
Obviously FinalTek have promised nothing other than ‘up to 100 Mbit shared’ so network performance can absolutely worsen in the future and they’re still within claims. But consider me very impressed for the money!
I run CloudFlare in front of the website in any case, so it doesn’t actually matter if the NIC is faster than claimed except when CF is fetching uncached content, which is two thirds of all requests according to its dashboard (I am on the free tier). I have no idea if there is any DDoS protection from FinalTek – I would assume not, but again CloudFlare should take care of that too, at least for the web site part of things.
Final words
I’m not sure what more there is to say other than that I am very pleased with the considerable performance improvements in the seventh generation of my server infrastructure. It should be good for at least seven years once I’m fully migrated over, then I’ll be releasing the sixth generation from rental (I have already released all the VPSs I used to rent to provide redundant DNS now that CloudFlare and Hurricane Electric provide redundant DNS for free with far lower latencies than any VPS could). Despite the nearly five hundred euro outlay on new hardware, over four years it will be cheaper than the present rental costs and over seven years it should be far cheaper. Let’s hope that the hardware is reliable and trouble free during that time!
Re: the house build, that is still stuck in joist design, as it has been since November last year. I don’t know when it’ll get free of that; it isn’t for a lack of me nagging people. My next post might be on the eighteen solar panels I’ve had installed on the site since Oct 2023, as I now have two winters of measurements to analyse – the inverter captures lots of data every ten minutes or so, so I have a large time series database now. I think it worth a post here trying to figure out what I might be able to infer from all that data.
Until the next time, be happy!
What came back to us is this first draft of a joist design:

… which, obviously enough, arranges all the joists left-right. Which, for any normal house would be fine, but in our specific case we have a highly insulated fresh air duct for the ventilation because it carries most of the space heating. Which means it’s fat, especially in the hallway just outside the utility room where it comes from the MVHR. There, the fresh air duct is approx 280 mm in diameter after the insulation, and myself and my architect had solved this by running the joists up-down just for the hallway, and left-right elsewhere.
Normally you should always orientate your joists in the same direction as the ridge of your roof, so if the ridge runs left-right as it does in this house, so must your joists. This is because the roof trusses will be perpendicular to your ridge, and you then want the floor joists to be perpendicular to those again to brace them.
However, due to the vaulted areas in this particular house, we have two steel portal frames either side of the vaulted area, with steel beams pinning both of the house ends to those frames, and then two more steel beams pinning both portal frames together (really this is a steel frame house). Because the steel takes on most of the bracing work, you can be freer with the joist orientation, which is why my architect and I took advantage of that during duct routing.
Unfortunately, my TF supplier was adamant that if we wanted up-down joist runs in the hallway, we’d need to fit steel, and that steel would need to be custom designed by a specialist engineer and coordinated with the joist design to fit. That sounded (a) expensive – no designer in this current busy market is willing to take on new work unless their fees are at least €5k, and steel is hardly cheap either – and (b) like months more of delay, at best.
So that now meant we needed to get those ducts through a 304 mm tall joist, which leaves an absolute maximum of 300 x 210 mm of clearance. Otherwise there would be months more of delay and large unexpected additional sums of money. Wonderful.
High end custom made industrial ducting
Here were the choices before us:
1. Custom steel joist design. Cost: five figures. Lead time: months.
2. Increase joist height from 304 mm to 417 mm. Cost: five figures. Lead time: weeks (mainly to redesign all the ceilings). Also: you lose a fair bit of ceiling height.
3. Instead of insulated circular steel ducting, use high end custom made industrial ducting. Cost: four figures. Lead time: days. Would need a bit of rejigging of the duct layout, as the 300 x 210 mm pass-through limits airflow to an equivalent 160 mm diameter duct.
So option 3 was the obvious one (well, initially I didn’t know how much it would cost, but I asked for quotes, got back numbers, and now I know it’s four figures of added cost – painful, but less than the alternatives). Within Europe, there are a number of prefabricated preinsulated duct manufacturers all supplying the standard DIN sizes cheaply, but if you want custom dimensions they get expensive. Indeed, the gap between their cost and the ultra-fancy Kingspan Koolduct becomes reasonable. That stuff is the second-highest-end insulated duct on the market, due to it using a special heat-resistant and stiffened phenolic foam cut by CNC machine and then hand assembled into the finished form. As the hand assembly is done by western Europeans using foam blown in western European factories, it isn’t cheap. But in terms of performance for the space, it’s very good and much better than can be achieved with conventional fibreglass wool duct insulation:
- @ 10 C, 0.022 W/mK (vs 0.037 W/mK for ductwrap)
- @ 25 C, 0.024 W/mK
- @ 50 C, 0.027 W/mK
- @ 80 C, 0.031 W/mK (vs 0.046 W/mK for ductwrap)
The only better performing duct material is vacuum panel at 0.007 W/mK. It’s superb, but you could buy a fair portion of a whole house with what it costs.
52 mm of this phenolic insulation delivers these u-values:
- @ 10 C, 0.423 W/m2K
- @ 25 C, 0.461 W/m2K
- @ 50 C, 0.519 W/m2K
- @ 80 C, 0.596 W/m2K
I assumed 18.5 m2 of fresh air duct for my calculations (this is pre-recent layout changes). If so, one would leak the following amounts of energy out of the ductwork, thus not delivering it to the outlet, if the house is at 21 C (a quick check of the arithmetic follows the list):
- @ 10 C, 7.826 W/K, so -86 watts.
- @ 25 C, 8.529 W/K, so 34 watts.
- @ 50 C, 9.602 W/K, so 278 watts.
- @ 80 C, 11.03 W/K, so 651 watts.
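Those loss figures are just the U-value multiplied by the duct area and the temperature difference from the 21 C house, with the U-values themselves being the conductivity divided by the 52 mm of insulation:

```sh
# U = lambda / thickness; loss = U * area * (duct temperature - house temperature)
awk 'BEGIN {
  area = 18.5; thickness = 0.052; house = 21
  split("10 25 50 80", t, " ")
  split("0.022 0.024 0.027 0.031", lambda, " ")
  for (i = 1; i <= 4; i++) {
    U = lambda[i] / thickness
    printf "@ %d C: U = %.3f W/m2K, %.3f W/K, %.0f watts\n", t[i], U, U * area, U * area * (t[i] - house)
  }
}'
```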
Are these losses acceptable? It depends on the rate of ventilation air flow, because the losses are fixed by the temperature difference whereas the amount of energy transported rises with air flow (a sanity check of the percentages follows the table):
| Duct air temperature | Heating @ 200 m3/hr | Losses @ 200 m3/hr | Heating @ 400 m3/hr | Losses @ 400 m3/hr | Heating @ 600 m3/hr | Losses @ 600 m3/hr |
| --- | --- | --- | --- | --- | --- | --- |
| 10 C | -645 W | 11.74% | -1290 W | 5.87% | -1936 W | 3.91% |
| 25 C | 235 W | 12.81% | 469 W | 6.40% | 704 W | 4.27% |
| 50 C | 1701 W | 14.41% | 3403 W | 7.20% | 5104 W | 4.80% |
| 80 C | 3461 W | 16.54% | 6923 W | 8.27% | 10384 W | 5.51% |
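The percentage columns are simply the duct loss divided by the gross heat the airflow is carrying relative to the 21 C house. A quick check of the 200 m3/hr column, assuming air at roughly 1.2 kg/m3 and 1005 J/kgK (which is why these land within a tenth of a percentage point of the table rather than exactly on it):

```sh
# gross = mass flow * cp * dT; loss = U * area * dT; fraction = loss / gross
awk 'BEGIN {
  flow = 200 / 3600                  # m3/hr -> m3/s
  rho = 1.2; cp = 1005               # assumed air density and specific heat
  area = 18.5; house = 21
  split("10 25 50 80", t, " ")
  split("0.423 0.461 0.519 0.596", U, " ")
  for (i = 1; i <= 4; i++) {
    dT = t[i] - house
    gross = flow * rho * cp * dT
    loss  = U[i] * area * dT
    printf "@ %d C: gross %.0f W, loss %.0f W, %.2f%%\n", t[i], gross, loss, 100 * loss / gross
  }
}'
```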
I would expect the system to run at no less than 150 m3/hr unless the house is unoccupied, so losses to ductwork of greater than ten percent are distressing. The next Koolduct size up is 60 mm which is 15% better, but now you swap air pressure loss for thermal efficiency. I’ve been a little unreasonable in this design by aiming for a 0.5-0.75 Pa/linear metre pressure loss which is excessively loose (less than 2 Pa/metre overall is what you need to achieve). But it’s for good reason - the boost fans can increase the air flow for a specific outlet or inlet very significantly, and you want enough slack in the ducts to cope. Also, due to me not having the modelling software, I’ve ignored friction, bends and several other factors which cause additional pressure loss over linear metre calculations.
There is also the cost angle: the 52 mm Koolduct costs about €110 ex VAT per metre. The 60mm stuff I would assume would be maybe €126/metre. Given that standard DIN size preinsulated ducts (albeit with EPS instead of phenolic board insulation) cost about €60 ex VAT/metre, one is paying a big enough price premium already to fit these ducts inside the joists.
I think, to be honest, it’ll just have to do.
The new duct routing
So, having digested all of that, I have drawn, to correct scale, a potential duct routing with sufficient dimensions for the flows, which definitely fits through and between the draft joist layout above:


Unfortunately, that’s not the end of it. Obviously the above are not ‘CAD quality’ drawings despite the fact that they are exactly to scale and indeed of identical resolution to the original CAD file (Inkscape is surprisingly good). And in any case, we shall need a 3D design in order to send the exact order for the Koolducts through, because there is vertical detail omitted on that 2D layout.
Hence I’ve come to an arrangement with my architect to do some additional work to turn this into 3D and adjust the joists (in some places they just don’t make sense). Hopefully he can deliver that next week, and then we can unblock forward progress with the joist design.
Here’s hoping!
Raspberry Pi colocation
You may remember my recent post on upgrading my internet server infrastructure, in which I said I was researching whether to replace my rented Intel Atom 220 based public servers with colocated Raspberry Pi 5s. Well, here they are, nearly ready to be dispatched to the colocation:

These will be my seventh generation of public internet infrastructure. One will act as primary, the other as failover. Both will be in the same rack so they get a 1 Gbps network connection between them, but external internet will be clamped to 100 Mbit shared with everybody else on that rack. For under €9 ex VAT/month, I can’t complain.
As you will have concluded, after quite a few head scratches I got my full public Docker Compose based software stack up and working, with offsite ZFS replication running well. They are very noticeably faster than the current Intel Atom 220 infrastructure, plus due to having double of pretty much everything (RAM, SSD, CPU etc) they have room to grow. Barring unpleasant surprises, these should last me until 2028 when the next Ubuntu LTS upgrade would be due, and possibly for a second four year period under whatever the next Ubuntu LTS will be then (28.04?). The aim will be to have one returned, upgraded and sent back, migrate onto it, then have the second one returned and upgraded. We’ll see how it goes.
I hope to post them on Wednesday – I am waiting for another NVMe SSD for europe7b, because the one I got off eBay wasn’t compatible as it turned out (nearly compatible, and it cost me staying up to 4am one night going WTF? a lot before I realised it wasn’t me and it isn’t the Pi). So I’ve ordered an official Pi SSD, which is really a rebadged Samsung 991A SSD. It should work very well, and should arrive on Monday, just in time for me to get the second Pi also configured and ready for dispatch.
3D printing extensions to an IKEA Fridans blind to avoid the expense of IKEA Fytur blinds.
… which I kept putting off as it would need a lot of work to type it up. Here is finally that post!
This is probably one of the longest-running projects I have had of them all. It ran for over a year. What I’m about to write out removes all the blind alleys and the ideas which didn’t work out, all not helped by usually having to wait two months for new parts from Aliexpress to arrive in the post. Before you knew it, it was a year later. Mad.
Automating blackout blinds cheaply
One of the signature features of any automated house is motorised blinds which automatically open and close themselves according to sunrise and sunset. They have historically been very expensive, typically €500 or more per window, so only very rich people would fit them. IKEA shook up this market with battery-powered, radio-controlled blinds called ‘Fytur’ for under €120-160 per blind. These are great and all, but if you have nearly thirty windows, it’s still thousands of euro.
IKEA also sell the Fridans blackout blind which is operated using a stick at its side. This is very considerably cheaper at €22-28 per blind. Their build quality isn’t great, but for the price they’re a good base for a DIY automation which replaces the manual stick at the side with a motor.
This is hardly my own idea. https://www.yeggi.com/q/ikea+fridans/ will show you thousands of 3d printables related to IKEA Fridans blinds. Most involve replacing this part:

This is the manual stick at the side. You push and pull on it to turn the blind – internally there is a cord with plastic beads, which can be re-exposed if desired by cutting off the plastic handle, letting a motor push and pull on the cord directly. We’ll be going a bit further – 3D printing a complete replacement for the above with identical dimensions, just minus the handle.
I reckon that, all in, you can do one of these fully automated blinds for under €40 inc VAT for the 200 cm blind, and under €30 inc VAT for smaller blinds. This excludes only the 5v power supply (often an old USB phone charger will do for that); that’s the price for everything else. For me, this turns the thousands of euro the IKEA Fytur solution would have cost into hundreds of euro instead, and with very little manual work (apart from all the work done to come up with what I’m about to describe now).
Choosing the motor
If you go through the many, many IKEA Fridan blind automation projects, you will find a legion of motor solutions. Some people use 12v, some 9v, some 5v for power. Various microcontroller and driver boards are used, all with different voltages and currents. In my case, I had very specific needs:
- The entire thing had to work within the power budget of an Olimex ESP32-PoE board running off PoE. That, as my notes described, gives you up to four watts off PoE before it browns out, which includes everything on the board or hanging off it. That means max three watts for peripherals.
- There is a much better v2 Olimex PoE board now, for nearly the same price as the original, which has a 12v power tap. Mine, being the older model, has a max 5v supply. So the motor needed to run off 5v, and never consume more watts than the board has spare.
- My longest blind is 1.8 metres in the kitchen. It needs enough torque to turn that from fully extended.
Those requirements, it turns out, reduce thousands of options down to approximately two. The first I alighted upon is the 28BYJ-48 unipolar motor hacked into a bipolar configuration. As a unipolar motor, its native voltage is 5v at approx 0.1 watts of power. Typical torque is 0.3 kgf.cm, with stall torque at 0.6 kgf.cm. Hacking it into bipolar doubles the active windings, so you now might get 0.2 watts of power, 0.6 kgf.cm of torque and 1.2 kgf.cm of stall torque. Obviously the motor was not designed for both windings to be concurrently active, so it will get hot quickly. The 28BYJ-48 is however cheap for the spec, about €2.50 delivered, which includes a ULN2003 driver board.
I then fortunately landed on something not needing every motor housing to be opened and hand modified: https://github.com/AndBu/YAIFM and my customisations of his project can be found at https://github.com/ned14/YAIFM. This uses a GA12-N20 bipolar motor with integrated encoder. These vary in power and spec despite using the same model name, so you need to choose very carefully to get one with the right combination of torque and power consumption when stalled.
The one I chose off Aliexpress claimed these specs:
Motor specification table – Model: GA12-N20 (GA12-B1215), DC brushed geared motor; rated voltage DC12V, test voltage DC6V. Columns are the no-load, rated load and stall figures, plus the gearbox reduction ratio:

| No-load speed (RPM) | No-load current (A) | Rated speed (RPM) | Rated torque (kgf.cm) | Rated current (A) | Rated power (W) | Stall torque (kgf.cm) | Stall current (A) | Reduction ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 15 | 0.02 | 12 | 1.25 | 0.05 | 0.3 | 8 | 0.35 | 1000 |
| 25 | 0.02 | 20 | 0.8 | 0.04 | 0.24 | 4.9 | 0.29 | 298 |
| 30 | 0.02 | 24 | 0.85 | 0.04 | 0.24 | 5.85 | 0.31 | 298 |
| 40 | 0.02 | 32 | 0.95 | 0.04 | 0.3 | 6.8 | 0.35 | 298 |
| 50 | 0.02 | 40 | 0.95 | 0.05 | 0.3 | 7.75 | 0.35 | 298 |
| 60 | 0.02 | 48 | 0.8 | 0.05 | 0.3 | 7.3 | 0.35 | 250 |
| 80 | 0.02 | 56 | 0.34 | 0.05 | 0.3 | 4 | 0.35 | 200 |
| 100 | 0.02 | 80 | 0.48 | 0.05 | 0.3 | 3.4 | 0.35 | 200 |
| 120 | 0.02 | 96 | 0.23 | 0.04 | 0.24 | 1.8 | 0.3 | 150 |
| 150 | 0.02 | 160 | 0.27 | 0.05 | 0.3 | 2.15 | 0.36 | 100 |
| 200 | 0.02 | 200 | 0.28 | 0.06 | 0.36 | 2.3 | 0.37 | 100 |
| 300 | 0.02 | 240 | 0.2 | 0.05 | 0.3 | 1.6 | 0.37 | 50 |
| 400 | 0.02 | 320 | 0.24 | 0.05 | 0.3 | 1.8 | 0.37 | 50 |
| 500 | 0.02 | 400 | 0.15 | 0.05 | 0.3 | 1.2 | 0.35 | 30 |
| 600 | 0.02 | 480 | 0.16 | 0.05 | 0.3 | 1.3 | 0.37 | 30 |
| 700 | 0.02 | 560 | 0.17 | 0.06 | 0.36 | 1.4 | 0.37 | 30 |
| 800 | 0.02 | 640 | 0.18 | 0.06 | 0.36 | 1.5 | 0.38 | 20 |
| 1000 | 0.02 | 800 | 0.42 | 0.1 | 0.42 | 1.6 | 0.39 | 20 |
| 1200 | 0.02 | 960 | 0.1 | 0.05 | 0.3 | 0.5 | 0.33 | 10 |
| 1500 | 0.02 | 1200 | 0.07 | 0.04 | 0.24 | 0.55 | 0.33 | 10 |
| 2000 | 0.02 | 1600 | 0.08 | 0.05 | 0.3 | 0.6 | 0.35 | 10 |
Again, I stress that the above table is what the Aliexpress listing claims for my specific GA12-N20 motor. Other GA12-N20 motors will have very different tables.
The 50 rpm model (which has about the highest stall torque of all the models) at 6v uses 0.3 watts; typical torque is 0.95 kgf.cm, with stall torque at 7.75 kgf.cm. Max amps at stall is 0.35 amps (2.1 watts, within my power budget). The motor plus a DRV8833 driver for it is about €6 delivered, so nearly double the cost of the previous option. However, it delivers (i) 58% more turning torque, (ii) roughly 6.5x the stall torque, and (iii) I’m fairly sure the chances of coil burnout would be much lower with this choice.
Not all GA12-N20 motors come with a rotary encoder, which you will need as it counts how many turns the motor does, which we then use in software to wind the blind to exact positions. A six-wire cable is usually supplied, with the following pinout:
- Red: Motor forwards.
- White: Motor backwards.
- Blue: Common ground.
- Black: 3.3v - 5v power supply for the encoder.
- Green: Encoder A phase.
- Yellow: Encoder B phase.
You still need a driver for the motor, which is the DRV8833 dual H-bridge. It works the same as any other H-bridge motor driver: you set a PWM either forwards or backwards as desired and the motor goes. The DRV8833 rather usefully will take a TTL input and output 5v, so you don’t need a level shifter. Just feed its Vin with 5v, also raise its EEP input to 5v, and voila, it all just works with the PWM inputs straight off the ESP32 using the 5v supply off the Olimex board.
ESPHome’s rotary encoder driver will read the A and B encoder pulses. Make SURE you connect 3.3v as its power supply, otherwise the encoder outputs will have too much voltage for the ESP32.
Anyway here it is in action, being driven by ESPHome:
Blind motor driven by a DRV8833 using the 5v supply from PoE
I did lots of testing to try to make the solution brown out, but I failed. I found the following power consumptions off PoE:
- No load, just turning: 105 mW.
- Max load, PWM at 50%: 263 mW.
- Max load, PWM at 100%: 315 mW.
These seemed surprisingly low, so I redid them off USB power:
- No load, just turning: 166 mW.
- Max load, PWM at 100%: 600 mW.
The cause of the disparity is that the PoE power conversion circuitry is especially inefficient at low loads, but gets more efficient as load goes up. The effect is to ‘hide’ some of the power consumption of the motor. Obviously, I only care about peak PoE consumption, so 315 mW looks great.
What about stall current? Well, the thing has so much torque you need two hands to stop it turning. I have my software detect jams within 160 milliseconds and cut the power. Perhaps that meant I never saw the stall current for long enough to measure it, but equally the Aliexpress listing could just be increasing all power claims by 50-100% as they sometimes do. 350 mA at 6v should be ~292 mA at 5v, which is 1,460 mW. I didn’t measure even half that including when I stalled the motor.
There is another possibility: the DRV8833 is mounted on a board called ‘HW-627’. There is very little information about that board that I (or anybody else) can find, but it may well configure the overload protection in the DRV8833 to cut off the power in the case of stall at some fairly low limit. I can say I see several capacitors and resistors on the board, so it’s entirely possible they set a lower overload limit.
Making the blind stop turning when open
The original YAIFM project used either a mechanical switch to detect the blind being fully open, or you had to program it manually with a count every time there was a power loss. The switch is visually intrusive, and manually setting a count for each of thirty blinds isn’t practical. So I wondered: could we have the ESP32 detect when the blind stops turning, and choose that as the new base point for winding the blind down and up until the next power loss?
The first thing I’d need to fix is that the Fridans blinds have nothing to stop them from continuing to turn once fully open, because the blind bottom will happily wrap round the upper housing. To solve this, I designed some 3D printed inserts to extend the width of the bottom of the blind. This also doubles as an ‘anti-fray’ shield, because the bottom corners of those Fridans blinds are notorious for getting scruffy very quickly:

One side of the 3D printed width extension for the Fridan blind. The existing weighted plastic bar in the blind's bottom inserts into the 3D printed component to hold it in. A pleasant looking oval shaped external 'knob' then protrudes in a way to ensure it will prevent the blind passing through once fully open.
Strengthening the 3D prints
The next problem I found is that the plastic just can’t take the torque that this motor puts out. I know this from manually putting load onto the motor by hand: it did not take long before the D-shaped hole for the motor shaft in the printed plastic went circular, and then the blind spool wouldn’t turn. This clearly would need to be fixed.
After a great deal of searching I finally found some metal cogs off Aliexpress which fit the motor’s D shaft (I won’t bore you with the cogs tried and found not to fit; that wasted many months. I really wish listings described measurements the same way!). What you need is the ‘9 Teeth D Type’, which has an outer diameter of 8 mm and is 7.4 mm long. The key measurement is from the flat part of the D hole, perpendicularly, to the topmost point of the rounded part of the D hole – that needs to be 2.4 mm if you want it loose, 2.3 mm if you want it tight. For some reason, these can cost a lot or almost nothing depending on the listing, so for reference mine cost €0.82 inc VAT each delivered.
I then remeshed the original YAIFM blind spool to take the metal cog instead of the D-shaped shaft of the motor. I also thickened up some of the plastic, as I reckoned it would be getting repeated stress.


Above is the 3D printed blind spool with cog shaped hole, the metal cog, the GA12-N20 motor with rotary encoder, and its cable. I added two metal washers between the metal cog and the motor to ensure horizontal force landed mainly on the motor housing, not on the motor shaft. You do still get the weight of the blind bearing down on the motor shaft, but it’s probably good for a few years.
Putting it all together:


And that’s pretty much it! The great thing about this particular IKEA Fridans blind customisation is that the 3D printed parts exactly replace the originals in dimensions, so as you can see in the rightmost picture above, the blind fits exactly as it did before, except you now have a wiring connector. From that you take your cable to your MCU.
The motorised blind in action
Completed blind being automated from ESPHome exclusively using the 5v supply from PoE for power
This won’t have looked like a particularly long post, and it’s not. Where most of the real work went in was preparing all the materials for upload, which meant cleaning them up, writing a single coherent set of truth from all the notes, and then writing it up effectively three times: once for Thingiverse, once for Github, and once for here. Here are the links:
Thanks for reading and enjoy!
My current public internet node is dedi6.nedprod.com which, as you might guess from the number six, is the sixth iteration of said infrastructure. I previously talked about the history of my dedicated servers back in 2018, when I was about to build out my fifth generation server, which was based on an eight core Intel Atom box with 8Gb of ECC RAM and a 128Gb SSD for €15/month. That was a good server for the money: it ran replicating offsite ZFS and it sucked less than your usual Atom server due to the eight cores and much faster clock speed. You even got a best effort 1 Gbit network connection, albeit the peering was awful.
Alas, two years later in July 2020 that server never came back after a reboot, and nedprod.com had a two week outage due to all data being lost. It turns out that particular server hardware is well known to die on a reboot, which put me off getting another. I did at least get to discover whether my backup strategies were fit for purpose, and I discovered, as that post related at the time, that there was room for considerable improvement, which has since been implemented; I think I would never have a two week outage ever again. The sixth generation server infrastructure thus resulted, which consisted originally of two separate dual core Intel Atom servers, each with 4Gb of RAM and a 128Gb SSD, ZFS replicating to the other and to home every few minutes. They originally cost €10 per month for the two of them, which has since increased to €12 per month. As that post relates, they’re so slow that they can’t use all of a 1Gbit NIC; I reckoned at the time they could use about 80% of it if the file were cached in RAM and about 59% of it if the file had to be loaded in. It didn’t take long for me to drop the two server idea and just have the Cloudflare free tier cache the site served from one of the nodes instead. And, for the money and given how very slow those servers are, they’ve done okay.
However, technology marches on, and I have noticed that as the software stack gets fatter over time, that 4Gb of RAM is becoming ever less realistic. It’s happening slowly enough that I don’t have to rush a replacement, but it is happening, and at some point they’re going to keel over. 4Gb of RAM just isn’t enough for a full Mailcow Docker stack with ZFS any more, and the SATA SSDs those nodes have are slow enough that if a spike in memory use were to occur, the server would just grind to a halt and I would be out of time to replace it.
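For what it’s worth, the usual stopgap on a small box like that is to cap the ZFS ARC so it can’t compete with the Docker stack for RAM. A minimal sketch of how that looks on Ubuntu/Debian – the 1 GiB figure is purely illustrative, not what I actually run:

```
# Cap the ZFS ARC at 1 GiB (value is in bytes) - illustrative figure only
echo 1073741824 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persist across reboots
echo "options zfs zfs_arc_max=1073741824" | sudo tee /etc/modprobe.d/zfs.conf
sudo update-initramfs -u
```

It buys some time, but it doesn’t change the underlying trend.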
The state of cheap dedicated servers in 2024
Prices have definitely risen at the bottom end. I had been hoping to snag a cheap deal last Black Friday, but stock was noticeably thin on the ground and I didn’t have the time to sit there pulsing refresh until stock appeared, so I missed out. Even then, most Black Friday deals were noticeably not in Europe, which is where I want my server for latency, and the deals weren’t particularly discounted even so.
Writing at the end of 2024, here is about as cheap as it gets for a genuine dedicated server with a minimum of 8Gb of RAM and an SSD in a European datacentre, with an IPv4 address:
- €14.50 ex VAT/month, Oneprovider: four core Intel Atom 2.4Ghz with 8 Gb of 2133 RAM, 1x 120 Gb SATA SSD, and a 1 Gbit NIC ('fair usage unlimited' [1]). Yes, the same model as dedi5!
- €17 ex VAT/month, OVH Kimsufi: four core Intel Xeon 2.2Ghz with 32 Gb of 2133 RAM, 2x 480 Gb SATA SSDs, and a 300 Mbit NIC ('fair usage unlimited' [2]).
- €19 ex VAT/month, Oneprovider: eight core Opteron 2.3Ghz with 16 Gb of 1866 RAM, 2x 120 Gb SATA SSDs, and a 1 Gbit NIC ('fair usage unlimited' [1]).
I have scoured the internet for cheaper – the only thing cheaper is a dual core Intel Atom with 4Gb RAM identical to my current servers.
However, during my search I discovered that more than a few places offer Raspberry Pi servers. You buy the Pi as a one-off setup charge, then you rent space, electricity and bandwidth for as little as a fiver per month. All these places also let you send in your own Raspberry Pi. That gets interesting, because the Raspberry Pi 5 can take a proper M.2 SSD. I wouldn’t dare try running ZFS on an SD card – it would be dead within weeks – but a proper NVMe SSD would work very well.
[1]: OneProvider will cap your NIC to 100 Mbit quite aggressively. You may get tens of seconds of ~ 1 Gbit, then it goes to 100 Mbit, then possibly lower. It appears to vary according to data centre load; it’s not based on monthly transfer soft limits etc.
[2]: It is well known that Kimsufi servers have an unofficial soft cap of approx 4 Tb per month, after which they throttle you to 10 Mbit or less.
The Raspberry Pi 5
To be honest, I have never given anything bar the Zero series of Raspberry Pi much attention. The Zero series are nearly as cheap as you can buy for a full fat Linux computer, and as this virtual diary has shown, I have a whole bunch of them. At €58 inc VAT each including PoE and a case they’re still not cheap, but they are as cheap as you can go without dropping to an embedded microcontroller, which is not a full fat Linux system.
The higher end Raspberry Pis are not good value for money. Mine, the 8Gb RAM model with PoE, a case and no storage, cost €147 inc VAT. I bought Henry’s games PC for similar money, and from the same vendor I can get right now an Intel N97 PC with a four core Alder Lake CPU in a case with power supply, 12Gb of RAM and a 256Gb SATA SSD for €152 inc VAT. That PC has an even smaller footprint than the RaspPi 5 (72 x 72 x 44 mm vs 63 x 95 x 45 mm), is smaller in two of three dimensions, and has proper sized HDMI ports not needing an annoying adapter. It even comes with a genuine licensed Windows 11 for that price. That’s a whole lot of computer for the money – it’ll even make a reasonable attempt at playing Grand Theft Auto V at lowest quality settings. The Raspberry Pi is poor value for money compared to that.
However, cheap monthly colocation, so long as you supply the Raspberry Pi yourself, suddenly makes them interesting. I did a survey of all the European places currently offering cheap Raspberry Pi colocation that I could find:
- €4.33 ex VAT/month https://shop.finaltek.com/cart.php?a=confproduct&i=0. Czechia. 100 Mbit NIC ‘unlimited’. The price above includes the optional IPv4 address, which you must explicitly add and which costs nearly €2 extra per month – if you can make do with IPv6 only, that saves a fair bit of money.
- €5.67 ex VAT/month ($6) http://pi-colocation.com/. Germany. 10 Mbit NIC, unlimited. Client supplies the power supply.
- €6.00 ex VAT/month (£5) https://my.quickhost.uk/cart.php?a=confproduct&i=0. UK. 1 Gbit NIC, 10Tb/month. Max 5 watts.
- €6.81 ex VAT/month https://raspberry-hosting.com/en/order. Czechia. 200 Mbit NIC ‘unlimited’. Also allows BananaPi.
- €14.60 ex VAT/month (£12.14) https://my.quickhost.uk/cart.php?a=confproduct&i=0. UK. 1 Gbit NIC, 10Tb/month. Max 10 watts.
There are many more than this, but they’re all over €20/month and for that I’d just rent a cheap dedicated server and save myself the hassle.
So these questions need to be answered:
- Is a Raspberry Pi 5 sufficiently better than my Intel Atom servers to be worth the hassle?
- What if it needs to run within a ten watt or five watt power budget?
- Can you even get an Ubuntu LTS with ZFS on root onto one?
Testing the Raspberry Pi 5



As you can see in the photo, unfortunately I got the C1 stepping, not the D0 stepping which consumes exactly one watt less of power. The case was a bundle of a PoE power HAT with a full size M.2 NVMe SSD slot, the official RaspPi 5 heatsink and cooler, and an aluminium case to fit the whole thing. It looks pretty good, and I’m glad it’s metal because the board runs hot:


This is it idling after many hours, at which point it draws about 5.4 watts, of which I reckon about 1.5 watts is consumed by the PoE circuitry (I have configured the fan to be off if the board is below 70 C, so it consumes nothing; I also don’t have the NVMe SSD yet, so this is minus the SSD). The onboard temperature sensor agrees it idles at about 65 C with the fan off.
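For reference, this kind of fan policy can be expressed as cooling fan dtparams in /boot/firmware/config.txt. The parameter names below are the Pi 5 cooling fan ones as I understand current firmware, and the thresholds and speeds are illustrative rather than my exact settings, so check against the firmware documentation before relying on them:

```
# Append an illustrative Pi 5 fan curve to config.txt
# (temperatures are in millidegrees Celsius, speeds are PWM values)
sudo tee -a /boot/firmware/config.txt <<'EOF'
dtparam=fan_temp0=70000       # fan stays off below 70 C
dtparam=fan_temp0_hyst=5000   # 5 C of hysteresis before it switches off again
dtparam=fan_temp0_speed=75    # spin slowly once 70 C is exceeded
dtparam=fan_temp1=75000       # speed up a little more at 75 C
dtparam=fan_temp1_speed=125
EOF
```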
After leaving it running `stress-ng --cpu 4 --cpu-method fibonacci` for a while, the most power I could make it use is 12.2 watts of PoE with the fan running full belt and keeping the board at about 76 C. You could add even more load via the GPU, but I won’t be using the GPU if it’s a server. As with all semiconductors, it uses more power the hotter it gets, though this particular board seems especially prone to that: it uses only 10.9 watts when cooler, with the additional 1.3 watts coming from the fan and the heat. How much of this is down to my particular HAT versus the Raspberry Pi board itself, I don’t know.
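If you want to reproduce these measurements, the firmware will tell you most of it. A sketch of the sort of loop I’d use while stress-ng runs – vcgencmd ships with the Raspberry Pi userland, and pmic_read_adc is Pi 5 specific (it reports the PMIC rail voltages and currents, so it won’t include whatever the PoE HAT itself dissipates):

```
# Run the load in the background...
stress-ng --cpu 4 --cpu-method fibonacci --timeout 10m &

# ...and poll the firmware for temperature, clocks and throttling state
while sleep 5; do
  echo "--- $(date +%T) ---"
  vcgencmd measure_temp         # SoC temperature
  vcgencmd measure_clock arm    # current CPU clock in Hz
  vcgencmd get_throttled        # non-zero bits mean throttling has occurred
  vcgencmd pmic_read_adc        # Pi 5 only: per-rail voltages and currents
done
```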
Is a Raspberry Pi 5 sufficiently better than my Intel Atom servers to be worth the hassle?
First impressions were that this board is noticeably quick. That got me curious: just how quick relative to other hardware I own?
Here are SPECINT 2006 per Ghz per core for selected hardware:
- Intel Atom: 77.5 (my existing public server, 2013)
- ARM Cortex A76: 114.7 (the RPI5, CPU first appeared on the market in 2020)
- AMD Zen 3: 167.4 (my Threadripper dev system, 2022)
- Apple M3: 433 (my Macbook Pro, 2024)
Here is the memory bandwidth:
- Existing public server: 4 Gb/sec
- Raspberry Pi 5: 7 Gb/sec
- My Threadripper dev system: 37 Gb/sec
- My Macbook Pro: 93 Gb/sec
(Yeah those Apple silicon systems are just monsters …)
For the same clock speed, the RPI 5 should be 50-75% faster than the Intel Atom. Plus it has twice as many CPU cores, and a NVMe rather than SATA SSD, and possibly 40% more clock speed so it should be up to 2x faster single threaded and 4x faster multithreaded. I’d therefore say the answer to this question is a definitive yes.
(In case you are curious, the RaspPi5 is about equal to an Intel Haswell CPU in SPECINT per Ghz. Some would feel that Haswell was the last time Intel actually made a truly new architecture, and that they’ve only been tweaking Haswell ever since. To match Haswell is therefore impressive.)
What if it needs to run within a ten watt or five watt power budget?
I don’t think it’s documented anywhere public that if you tell this board that its maximum CPU clock speed is 1.5 Ghz rather than the default 2.4 Ghz, it not only clamps the CPU to that but also clamps the core, the GPU and everything else to 500 Mhz. This results in an Intel Atom type experience – it’s pretty slow – but peak power consumption is restrained to 7.2 watts.
Clamping the maximum CPU clock speed one notch higher at 1.6 Ghz appears to enable clocking the core up to 910 Mhz on demand. This results in a noticeably snappier user experience, but now:
- @ max 1.6 Ghz, peak power consumption is 9.1 watts.
- @ max 1.8 Ghz, peak power consumption is 9.4 watts.
- @ max 2.0 Ghz, peak power consumption is 9.6 watts.
The NVMe SSD would likely need half a watt at idle, so 1.8 Ghz is the maximum if you want to fit the whole thing, including the PoE circuitry, into ten watts.
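In case it isn’t obvious how to do the clamping, it’s a one-line change. The sketch below uses arm_freq in /boot/firmware/config.txt, or the runtime cpufreq route; the 1.8 Ghz figure is just the ceiling I settled on above, pick whichever fits your power budget:

```
# Clamp the maximum CPU clock to 1.8 GHz at boot; the firmware adjusts the
# core/GPU ceiling to match, as described above. Takes effect after a reboot.
echo 'arm_freq=1800' | sudo tee -a /boot/firmware/config.txt

# Or clamp it at runtime without a reboot (value in kHz)
echo 1800000 | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq
```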
The board idles above five watts, so the max five watt package listed above is not on the radar at all. Neither would a Raspberry Pi 4 with a PoE HAT fit; you’d need to go no better than an underclocked Raspberry Pi 3 to get under five watts, and that uses an ARM Cortex A53, which is considerably slower than an Intel Atom.
Finally, you get about twenty seconds of four cores @ 1.6 Ghz before the temperature exceeds 70 C. As the fan costs watts and a hot board costs watts, it makes more sense to reduce the temperature at which the board throttles to 70 C. Throttling initially drops the CPU to 1.5 Ghz, then it throttles the core to 500 Mhz and the CPU to 1.2 Ghz, and it then bounces between that and 1.6 Ghz as the temperature drops below and exceeds 70 C. I reckon power consumption at the PoE switch is then about 7.2 watts. If you leave it like that for long enough (> 30 mins), the temperature will eventually exceed 75 C. I have configured the fan to turn on slowly at that point, as I reckon the extra watts for the fan are less than the extra watts caused by the board getting even hotter.
In any case, I think it is very doable indeed to get this board with NVMe SSD to stay within ten watts. You’ll lose up to one third of CPU performance, but that’s still way faster than my existing Intel Atom public server. So the answer to this question is yes.
Can you get Ubuntu LTS with ZFS on root onto one?
The internet says yes, but until the NVMe drive arrives I cannot confirm it. I bought a used Samsung 960 Pro generation SSD off eBay for €20 including delivery. It’s only PCIe 3.0 and 256 Gb, but the RaspPi 5 can only officially speak single lane PCIe 2.0, though it can be overclocked to PCIe 3.0. So it’ll do.
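When it turns up, enabling the connector and the gen 3 overclock should again just be config.txt settings. The dtparams below are what I understand current Pi 5 firmware accepts, so treat this as a sketch rather than gospel:

```
# Enable the external PCIe connector and run it at PCIe 3.0 speed
sudo tee -a /boot/firmware/config.txt <<'EOF'
dtparam=pciex1          # enable the external PCIe connector (newer firmware may do this automatically)
dtparam=pciex1_gen=3    # run the x1 link at PCIe 3.0 instead of the official 2.0
EOF
```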
When it arrives, I’ll try bootstrapping Ubuntu 24.04 LTS with ZFS on root onto it and see what happens. I’ve discovered that Ubuntu 24.04 uses something very close to the same kernel and userland as Raspbian, so all the same custom RPI tools, config files and settings work exactly the same between the two. Which is a nice touch, and should simplify getting everything up and working.
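Before attempting the full ZFS-on-root bootstrap, it costs nothing to sanity check that the kernel and archive I’d be using actually ship ZFS – something like:

```
# Does the running kernel ship a zfs module, and is the userland installable?
find /lib/modules/"$(uname -r)" -name 'zfs.ko*'
sudo apt-get install --simulate zfsutils-linux
```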
Before it goes to the colocation …
I’ve noticed that maybe one quarter of the time rebooting the Pi does not work. I think it’s caused by my specific PoE HAT; other people have reported the same problem with that HAT. Managed PoE switches can of course cut and restore the power, so it’s not a showstopper, but I’d imagine setting it up and configuring it remotely wouldn’t be worth the pain. It would be better to get it all set up locally, make sure everything works, and go from there.
In any case, I’d like to add load testing, so maybe have some sort of bot fetch random pages continuously to see what kind of load results, and whether it can continuously saturate a 1 Gbit ethernet even with ZFS decompression et al running in the background. I think it’ll more than easily saturate a 1 Gbit ethernet, maybe with half of each of the four cores occupied. We’ll find out!
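The ‘bot’ needn’t be anything fancy. A sketch of the sort of thing I have in mind – it assumes the site publishes a sitemap.xml (any list of URLs would do), and the concurrency figure is plucked out of the air:

```
# Harvest a list of URLs to hammer from the sitemap (swap in any URL list)
curl -s https://www.nedprod.com/sitemap.xml | grep -o '<loc>[^<]*' | sed 's/<loc>//' > urls.txt

# Fetch random pages forever, eight requests in flight at a time,
# printing status code, bytes downloaded and URL for each fetch
while true; do
  shuf -n 100 urls.txt | xargs -P 8 -n 1 curl -s -o /dev/null \
    -w '%{http_code} %{size_download} %{url_effective}\n'
done
```

Watching htop and the NIC counters on the Pi from another terminal while that runs should answer the saturation question quickly enough.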
There is also the question of whether my Mailcow software stack would be happy running in Docker on AArch64 rather than on x64. Only one way to find out. Also, does ZFS synchronisation work fine between ZFS datasets on ARM and Intel hosts? Again, only one way to find out.
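That cross-architecture ZFS question is cheap to test up front. A sketch of the round trip I’d try, with hypothetical pool and host names (tank, pi5); a scrub on the receiving side then verifies every received block against its checksum:

```
# On the current Intel node: snapshot a dataset and send it to the Pi
sudo zfs snapshot tank/mail@xarch-test
sudo zfs send tank/mail@xarch-test | ssh pi5 sudo zfs receive -u tank/xarch-test

# On the Pi: scrub the pool and check the result
ssh pi5 sudo zpool scrub tank
ssh pi5 zpool status tank
```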
Finally, after all that testing is done, I’ll need to make a choice about which colocation to go with. The 10 Mbit capped NIC option is definitely out – that’s too low. That leaves a choice between the 1 Gbit, ten watt constrained option, and the 200 Mbit unconstrained option at almost exactly half the price.
The ten watt constrained option is a bit of a pain. In dialogue with their tech people, they did offer a 95% power rating, i.e. they’d ignore up to 5% of power consumption above ten watts. However, that Samsung SSD can consume 4 watts or so if fully loaded. If it happens to go off and do some internal housekeeping while the RaspPi CPU is fully loaded, that’s 13 watts. Depending on how long that lasts (I’d assume not long, given how rarely we’d do sustained writes), they could cut the power if, say, we exceeded the ten watt max for more than three minutes within an hour. That feels … risky. Plus, I lose one third of CPU performance from having to underclock the board to stay within the power budget.
Given that there would be no chance of my power getting cut, I think I can live with the 200 Mbit NIC. I wish it were more, but it’s a lot better than a 100 Mbit NIC. How often would I do anything needing more than 20 Mb/sec of bandwidth? Mainly uploading diary posts like this one, and so long as we’re in the rented accommodation our internet has a max 10 Mbit upload, which is half of that NIC cap. Cloudflare takes care of the public web server, so for HTTP the NIC bandwidth doesn’t really matter except for fetching cold content.
Anyway, decision on colocation is a few weeks out yet. Lots more testing beckons first.
To conclude, the Raspberry Pi 5 looks like a great cheap colocated server solution – but only because providers special case the Raspberry Pi with extra low prices. If they special cased anything under twenty watts at the same price, I’d be sending them an Intel N97 mini PC and avoiding all this testing and hassle with the Raspberry Pi.