Welcome to ned Productions (non-commercial personal website, for commercial company see ned Productions Limited). Please choose an item you are interested in on the left hand side, or continue down for Niall’s virtual diary.
Niall’s virtual diary:
Started all the way back in 1998 when there was no word “blog” yet, hence “virtual diary”.
Original content has undergone multiple conversions: Microsoft FrontPage => Microsoft Expression Web, legacy HTML tag soup => XHTML, XHTML => Markdown, and with a ‘various codepages’ => UTF-8 conversion for good measure. Some content, especially the older stuff, may not have entirely survived intact, especially in terms of broken links or images.
- A biography of me is here if you want to get a quick overview of who I am
- An archive of prior virtual diary entries is available here
- For a deep, meaningful moment, watch this dialogue (needs a video player), or for something which plays with your perception, check out this picture. Try moving your eyes around - are those circles rotating???
Latest entries:
My current public internet node is dedi6.nedprod.com which, as you might guess from the number six, is the sixth iteration of said infrastructure. I previously talked about the history of my dedicated servers back in 2018, when I was about to build out my fifth generation server based on an eight core Intel Atom box with 8Gb of ECC RAM and a 128Gb SSD for €15/month. That was a good server for the money: it ran replicating offsite ZFS, and it sucked less than your usual Atom server due to the eight cores and much faster clock speed. You even got a best effort 1 Gbit network connection, albeit the peering was awful.
Alas, two years later in July 2020, that server never came back after a reboot, and nedprod.com had a two week outage due to all data being lost. It turns out that particular server hardware is well known to die on a reboot, which put me off getting another. I did at least get to discover whether my backup strategies were fit for purpose, and as that post related at the time, there was room for considerable improvement. That improvement has since been implemented, and I don’t think I would ever have a two week outage again. The sixth generation server infrastructure thus resulted, which originally consisted of two separate dual core Intel Atom servers with 4Gb of RAM and a 128Gb SSD each, ZFS replicating to each other and to home every few minutes. They originally cost €10 per month for the two of them, which has since increased to €12 per month. As that post relates, they’re so slow that they can’t use all of a 1Gbit NIC: I reckoned at the time they could use about 80% of it if the file were cached in RAM and about 59% of it if the file had to be loaded in. It didn’t take long for me to drop the two server idea and just have the Cloudflare free tier cache the site served from one of the nodes instead. And, for the money and given how very slow those servers are, they’ve done okay.
However, technology marches on, and I have noticed that as the software stack gets fatter over time, 4Gb of RAM is becoming ever less realistic. It’s happening slowly enough that I don’t have to rush a replacement, but it is happening, and at some point they’re going to keel over. 4Gb of RAM just isn’t enough for a full Mailcow Docker stack with ZFS any more, and the SATA SSDs those nodes have are slow enough that if a spike in memory use were to occur, the server would just grind to a halt and I would have run out of time to replace it.
The state of cheap dedicated servers in 2024
Prices have definitely risen at the bottom end. I had been hoping to snag a cheap deal last Black Friday, but stock was noticeably thin on the ground and I didn’t have the time to sit there hitting refresh until stock appeared. So I missed out. Even then, most Black Friday deals were noticeably not in Europe, which is where I want my server for latency, and they weren’t particularly discounted in any case.
Writing at the end of 2024, here is about as cheap as it gets for a genuine dedicated server with a minimum of 8Gb of RAM and an SSD in a European datacentre, with an IPv4 address:
- €14.50 ex VAT/month Oneprovider
  - Four core Intel Atom 2.4Ghz with 8 Gb of 2133 RAM, 1x 120 Gb SATA SSD, and 1 Gbit NIC ('fair usage unlimited' [1]). Yes, same model as dedi5!
- €17 ex VAT/month OVH Kimsufi
  - Four core Intel Xeon 2.2Ghz with 32 Gb of 2133 RAM, 2x 480 Gb SATA SSDs, and 300 Mbit NIC ('fair usage unlimited' [2]).
- €19 ex VAT/month Oneprovider
  - Eight core Opteron 2.3Ghz with 16 Gb of 1866 RAM, 2x 120 Gb SATA SSDs, and 1 Gbit NIC ('fair usage unlimited' [1]).
I have scoured the internet for cheaper – only dual core Intel Atoms with 4Gb RAM identical to my current servers are cheaper.
However, during my search I discovered that more than a few places offer Raspberry Pi servers. You buy the Pi as a once-off setup charge, then you rent space, electricity and bandwidth for as little as a fiver per month. All these places also let you send in your own Raspberry Pi. That gets interesting, because the Raspberry Pi 5 can take a proper M.2 SSD. I wouldn’t dare try running ZFS on an SD card, it would be dead within weeks. But a proper NVMe SSD would work very well.
[1]: OneProvider will cap your NIC to 100 Mbit quite aggressively. You may get tens of seconds of ~ 1 Gbit, then it goes to 100 Mbit, then possibly lower. It appears to vary according to data centre loads, it’s not based on monthly transfer soft limits etc.
[2]: It is well known that Kimsufi servers have an unofficial soft cap of approx 4 Tb per month, after which they throttle you to 10 Mbit or less.
The Raspberry Pi 5
To be honest, I have never given anything bar the Zero series of the Raspberry Pi much attention. The Zero series is nearly as cheap as you can buy for a full fat Linux computer, and as this virtual diary has shown, I have a whole bunch of them. At €58 inc VAT each including PoE and a case they’re still not cheap, but they are as cheap as you can go unless you go embedded microcontroller, and then that’s not a full fat Linux system.
The higher end Raspberry Pi’s are not good value for money. Mine, the 8Gb RAM model with PoE, no storage and a case, cost €147 inc VAT. I bought Henry’s games PC for similar money, and from the same vendor I can get right now an Intel N97 PC with a four core Alder Lake CPU in a case and power supply with 12Gb of RAM and a 256Gb SATA SSD for €152 inc VAT. That PC has an even smaller footprint than the RaspPi 5 (72 x 72 x 44 vs 63 x 95 x 45): it’s smaller in two dimensions and has proper sized HDMI ports which don’t need an annoying adapter. It even comes with a genuine licensed Windows 11 for that price. That’s a whole lot of computer for the money; it’ll even do a reasonable attempt at playing Grand Theft Auto V at lowest quality settings. The Raspberry Pi is poor value for money compared to that.
However, cheap monthly colocation costs, so long as you supply the Raspberry Pi, suddenly make them interesting. I did a survey of all the European places currently offering cheap Raspberry Pi colocation that I could find:
- €4.33 ex VAT/month https://shop.finaltek.com/cart.php?a=confproduct&i=0. Czechia. 100 Mbit NIC ‘unlimited’. This price includes an IPv4 address, which you need to explicitly add at extra cost (it costs nearly €2 extra per month, so if you can make do with IPv6 only, that would save a fair bit of money).
- €5.67 ex VAT/month $6 http://pi-colocation.com/. Germany. 10 Mbit NIC unlimited. Client supplies power supply.
- €6.00 ex VAT/month £5 https://my.quickhost.uk/cart.php?a=confproduct&i=0. UK. 1Gbit NIC 10Tb/month. Max 5 watts.
- €6.81 ex VAT/month https://raspberry-hosting.com/en/order. Czechia. 200 Mbit NIC ‘unlimited’. Also allows BananaPi.
- €14.60 ex VAT/month £12.14 https://my.quickhost.uk/cart.php?a=confproduct&i=0. UK. 1Gbit NIC 10Tb/month. Max 10 watts.
There are many more than this, but they’re all over €20/month and for that I’d just rent a cheap dedicated server and save myself the hassle.
So these questions need to be answered:
- Is a Raspberry Pi 5 sufficiently better than my Intel Atom servers to be worth the hassle?
- What if it needs to run within a ten watt or five watt power budget?
- Can you even get an Ubuntu LTS with ZFS on root onto one?
Testing the Raspberry Pi 5
As you can see in the photo, unfortunately I got the C1 stepping, not the lower power consuming D0 stepping which consumes exactly one watt less of power. The case was a bundle of a full size M.2 slot for an NVMe SSD with a PoE power HAT, the official RaspPi 5 heatsink and cooler, and an aluminium case to fit the whole thing. It looks pretty good, and I’m glad it’s metal because the board runs hot:
This is it idling after many hours, at which point it consumes about 5.4 watts, of which I reckon about 1.5 watts gets consumed by the PoE circuitry (I have configured the fan to be off if the board is below 70 C, so it consumes nothing. I also don’t have the NVMe SSD yet, so this is minus the SSD). The onboard temperature sensor agrees it idles at about 65 C with the fan off.
After leaving it running stress-ng --cpu 4 --cpu-method fibonacci for a while, the most power I could make it use is 12.2 watts of PoE with the fan running full belt and keeping the board at about 76 C. You could add even more load via the GPU, but I won’t be using the GPU if it’s a server. As with all semiconductors, they use more power the hotter they get, however I feel this particular board is particularly prone to it: it uses only 10.9 watts when cooler, so the additional 1.3 watts comes from the fan and the heat. How much of this is down to my particular HAT or the Raspberry Pi board itself, I don’t know.
Is a Raspberry Pi 5 sufficiently better than my Intel Atom servers to be worth the hassle?
First impressions were that this board is noticeably quick. That got me curious: just how quick relative to other hardware I own?
Here are the SPECINT 2006 scores per Ghz per core for selected hardware:
- Intel Atom: 77.5 (my existing public server, 2013)
- ARM Cortex A76: 114.7 (the RPI5; its CPU first appeared on the market in 2020)
- AMD Zen 3: 167.4 (my Threadripper dev system, 2022)
- Apple M3: 433 (my Macbook Pro, 2024)
Here is the memory bandwidth:
- Existing public server: 4 Gb/sec
- Raspberry Pi 5: 7 Gb/sec
- My Threadripper dev system: 37 Gb/sec
- My Macbook Pro: 93 Gb/sec
(Yeah those Apple silicon systems are just monsters …)
For the same clock speed, the RPI 5 should be 50-75% faster than the Intel Atom. Plus it has twice as many CPU cores, an NVMe rather than SATA SSD, and possibly 40% more clock speed, so it should be up to 2x faster single threaded and 4x faster multithreaded. I’d therefore say the answer to this question is a definitive yes.
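(Back of the envelope from those numbers: 114.7 / 77.5 ≈ 1.5x per core per Ghz, times roughly 1.4x the clock ≈ 2.1x single threaded, times two for twice the cores ≈ 4x multithreaded, assuming the workload scales across all the cores.)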
(In case you are curious, the RaspPi5 is about equal to an Intel Haswell CPU in SPECINT per Ghz. Some would feel that Haswell was the last time Intel actually made a truly new architecture, and that they’ve been only tweaking Haswell ever since. To match Haswell is therefore impressive)
What if it needs to run within a ten watt or five watt power budget?
I don’t think it’s documented anywhere public that if you tell this board that its maximum CPU clock speed is 1.5 Ghz rather than its default 2.4 Ghz, it not only clamps the CPU to this but also clamps the core and GPU and everything else to 500 Mhz. This results in an Intel Atom type experience: it’s pretty slow, but peak power consumption is restrained to 7.2 watts.
Clamping the maximum CPU clock speed one notch above that, at 1.6 Ghz, appears to enable clocking the core up to 910 Mhz on demand. This results in a noticeably snappier user experience, but now:
- @ max 1.6 Ghz, peak power consumption is 9.1 watts.
- @ max 1.8 Ghz, peak power consumption is 9.4 watts.
- @ max 2.0 Ghz, peak power consumption is 9.6 watts.
The NVMe SSD likely would need half a watt at idle, so 1.8 Ghz is the maximum if you want to fit the whole thing including PoE circuitry into ten watts.
The board idles above five watts, so the max five watt package listed above is not on the radar at all. Nor would a Raspberry Pi 4 with a PoE HAT fit; you’d need to use no better than an underclocked Raspberry Pi 3 to get under five watts, and that uses an ARM Cortex A53, which is considerably slower than an Intel Atom.
Finally, you get about twenty seconds of four cores @ 1.6 Ghz before the temperature exceeds 70 C. As the fan costs watts and a hot board costs watts, it makes more sense to reduce the temperature at which the board throttles to 70 C. Throttling drops the CPU initially to 1.5 Ghz, then it’ll throttle the core to 500 Mhz and the CPU to 1.2 Ghz. It’ll then bounce between that and 1.6 Ghz as temperature drops below and exceeds 70 C. I reckon power consumption at PoE is about 7.2 watts. If you leave it at that long enough (> 30 mins), eventually temperature will exceed 75 C. I have configured the fan to turn on slow at that point, as I reckon the extra watts for the fan is less than extra watts caused by the board getting even hotter.
In any case, I think it is very doable indeed to get this board with NVMe SSD to stay within ten watts. You’ll lose up to one third of CPU performance, but that’s still way faster than my existing Intel Atom public server. So the answer to this question is yes.
Can you get Ubuntu LTS with ZFS on root onto one?
The internet says yes, but until the NVMe drive arrives I cannot confirm it. I bought off eBay a used Samsung 960 Pro generation SSD for €20 including delivery. It’s only PCIe 3.0 and 256 Gb, but the RaspPi 5 can only officially speak single lane PCIe 2.0 though it can be overclocked to PCIe 3.0. So it’ll do.
When it arrives, I’ll try bootstrapping Ubuntu 24.04 LTS with ZFS on root onto it and see what happens. I’ve discovered that Ubuntu 24.04 uses something close to the same kernel and userland as Raspbian, so all the same custom RPI tools and config files and settings work exactly the same between the two. Which is a nice touch, and should simplify getting everything up and working.
Before it goes to the colocation …
I’ve noticed that maybe one quarter of the time rebooting the Pi does not work. I think it’s caused by my specific PoE HAT; other people have also reported the same problem for that HAT. Managed PoE switches can of course cut and restore the power, so it’s not a showstopper, but I’d imagine setting it up and configuring it remotely wouldn’t be worth the pain. It would be better to get it all set up locally, make sure everything works, and go from there.
In any case, I’d like to add load testing, so maybe have some sort of bot fetch random pages continuously to see what kind of load results and whether it can continuously saturate a 1 Gbit ethernet even with the ZFS decompress et al running in the background. I think it’ll more than easily saturate a 1 Gbit ethernet, maybe with half of each of the four cores occupied. We’ll find out!
There is also the question of whether my Mailcow software stack would be happy running in Docker on Aarch64 rather than x64. Only one way to find out. Also, does ZFS synchronisation work fine between ARM and Intel ZFS sets? Also only one way to find out.
Finally, after all that testing is done, I’ll need to make a choice about which colocation to go with. The 10 Mbit capped NIC option I think is definitely out, that’s too low. That means a choice between the 1 Gbps ten watt constrained option, or the 200 Mbit unconstrained option for almost exactly half the price.
The ten watt constrained option is a bit of a pain. In dialogue with their tech people, they did offer a 95% power rating, so they’d ignore up to 5% of power consumption above ten watts. However, that Samsung SSD can consume 4 watts or so if fully loaded. If it happens to go off and do some internal housekeeping when the RaspPi CPU is fully loaded, that’s 13 watts. Depending on how long that lasts (I’d assume not long, given how rarely we’d do any sustained writes), they could cut the power if say we exceeded the ten watt max for more than three minutes within an hour. That feels … risky. Plus, I lose one third of CPU power due to having to underclock the board to stay within the power budget.
Given that there would be no chance of my power getting cut, I think I can live with the 200 Mbit NIC. I wish it were more, but it’s a lot better than a 100 Mbit NIC. How often would I do anything needing more than 20 Mb/sec bandwidth? Mainly uploading diary posts like this, and so long as we’re in the rented accommodation our internet has a max 10 Mbit upload, which is half of that NIC cap. Cloudflare takes care of the public web server, so for HTTP NIC bandwidth doesn’t really matter except for fetching cold content.
Anyway, decision on colocation is a few weeks out yet. Lots more testing beckons first.
To conclude, the Raspberry Pi 5 looks like a great cheap colocated server solution. But only because providers special case a Raspberry Pi with extra low prices. If they special cased anything under twenty watts at the same price, I’d be sending them an Intel N97 mini PC instead and avoiding all this testing and hassle with the Raspberry Pi.
3D printing extensions to an IKEA Fridans blind to avoid the expense of IKEA Fytur blinds.
Alas, this post won’t be that. I will write that post soon, not least because I need to write down everything I will need to do when I have to go make thirty of them for the house. But writing it up properly will be time consuming, plus all the 3D prints need to be uploaded and blurbs etc written for them too. In short, chances are I’ll write it up during the Christmas break; that kind of time consuming long form instructional writing needs more time than a single night after the kids have gone to bed.
So what will I be talking about tonight? Well …
House build actually takes a step forwards …
On the 11th November, Structural Engineering finally completed! They had been working on it since July.
I removed the logos of the firm because I’m going to be mildly critical – I did have to formally complain about how long it was taking. In fairness, they did step up very much after my complaint; they were like a whole new company afterwards in comparison to before. If I had been dealing with the post-complaint company from the beginning, I would be falling over myself with praise for them. Still, I cannot deny that they most definitely did step up, albeit without any admission of having done anything wrong to begin with.
In fact, they were rather critical of me instead! Apparently I had not been driving the process forward sufficiently, i.e. I should have made a plague of myself, forcing my house to the front of their work queue, instead of just letting them at it. And okay, fair enough, Ireland is currently the hottest construction market in the OECD and everybody is at capacity. If you don’t make noise, nobody will be prioritising you right now. You need to be a pest to get put ahead of everybody else. Lesson learned.
I am not a Structural Engineer; in fact it is probably my weakest area of all the skills involved in building a house. However, from my best understanding, their quality of work looks to be good. I did some spot calculations using my Applied Leaving Cert Maths from more than a quarter century ago, and they appear to have built in very significant safety margins everywhere I checked (which was, admittedly, the simpler cases, because the complex cases are beyond my capabilities). The work appears to be high quality.
If I have another mild criticism, it is that they were annoyingly narrow in their scope. Lots and lots of stuff they simply hand waved away as ‘this isn’t something we do’, punting it onto somebody (which sometimes felt like anybody) else. And I get that they have a lot of business on right now, and they’re focusing on the narrow tunnel of the specific engineering they do. But as I made very clear in my feedback to them, I hired them to help the wider team build this house. From my point of view, that means applying yourself to solving the wider problems up to the point where your expertise ends. It doesn’t mean shunting off everything outside your narrow remit onto anybody else – that, in my opinion, helps cause systemic failure because it opens gaps between areas of expertise which only the client (i.e. me) can fill, sometimes using best guesses from what he can glean off Google. That isn’t - in my opinion - what engineering is about. I certainly wouldn’t tolerate that kind of attitude in my workplace, and if I ever exhibited it myself I would view it as a personal failure for which I would publicly apologise.
All that said, my workplace(s) hire from global talent pools, and the minimum bar is very considerably higher than any national bar. My workplace(s) expect global top tier talent, and anybody who doesn’t perform at a globally competitive skill level quickly gets dismissed. It is unrealistic to expect the same here. I am also mindful that the sums of money involved for my house build are a small fraction of what my workplace(s) spend, and to a certain extent, if somebody is paying big bucks they can expect a level of service not appropriate for somebody like me who isn’t paying big bucks. So I need to rein in my expectations, and I am very sure I’ll need to keep on doing so.
What next?
After my TF supplier received the completed structural engineering designs, they put my house into the queue for the metal webbed joist designer. They apparently have a six week lead time, and they are not expected to even look at my house until the New Year.
After the metal webbed joist design is complete (I’m thinking February at the earliest), my TF supplier says we’ll be immediately put into the queue for foundations design, which apparently currently has a lead time of three months. They’ll then use those months to do the detailed timber frame design, which is the specific bit they do in-house as they manufacture the TF panels offsite. I am told it is highly likely they’ll complete that work before the foundations design is done, as foundations are what is holding up everybody in the Irish construction industry right now; they’re all at capacity, hence the lead times. So we’re looking at an earliest possible build start date of May or so. Which is more likely June.
That is real bad timing for me, as school holidays start then and I’ll be on childcare. I can’t say I’m looking forward one bit to childcare whilst trying to get that house sealed up externally before winter sets in. It’s going to be an absolute bear.
However, I suppose there is a good chance that we will get that house weatherproof by this time next year. We then beaver away at the internals, with a bit of luck we get to move in by Summer 2026. That is many years later than anybody ever expected, but better than no house at all.
Music while we work
With the family having been on austerity for the past two years, as our bank accounts have to be absolutely pristine for the mortgage application (they scan every line item, require explanations for anything unusual, and of course they prove your savings and spend rates, which show how well you could afford the mortgage), we don’t ever spend anything discretionary except on the house. On Black Friday and Singles’ Day 2022 & 2023 we spent many thousands on stuff from China and elsewhere, but for Black Friday 2024 I was kinda at a loss on what more is worth buying now rather than later. After all, anything bought now ties up cash and this build will be primarily cash flow constrained; however, if very large savings are on offer, or I need an item for testing now to make sure it’ll work, I’ve tended to go for it.
You might remember the future house TV I bought this time last year in the Black Friday sales. That was an example of large savings being worth the tying up of cash. That is indeed a superb TV, however one thing I found during testing was that the bass was disappointing, as I described this time last year. So I am in the market for something which can do a far better job with low end frequencies.
My completely unrealistic subwoofer in the wishlist is the Klipsch RP 1600SW which is a 16 inch vented driver with about 800 watts of 16 - 175 Hz audio delivery. This very much would deliver all the low frequency audio you would ever need, but it does cost €2,000 inc VAT which is nearly as much as the TV itself.
Getting into a more realistic price range, the Q Acoustics QB12 has a 12 inch driver, 220 watts of bass power, and delivers 28 - 300 Hz. At €450 inc VAT, still a lot of money for a single purpose item. And, in the end, too much money. I couldn’t justify it for that single use case.
What I could justify is something which can act as the TV subwoofer, but can also provide outdoor music when we’re on the patio, and also act as general portable audio e.g. when working on the inside of the house. Which kinda ticked the box that this is arguably a future house purchase and therefore ‘allowed’. So I ended up buying, at a good Black Friday discount, the JBL PartyBox 710 for €650 inc VAT. It has 800 watts of total power, and can deliver 35 - 20,000 Hz (but see later about that). It does have serious audiophile chops: rtings reckoned it the best Bluetooth capable speaker available, and it gets similarly high ratings from several audiophile sources including enthusiast forums on Reddit. As you will see shortly, I don’t think it that good personally, especially for €650, but I don’t regret the purchase. Here it is in action (and yes, the light show can be turned off):
The JBL Party Box 710 compared
I’m not a serious audiophile, but I have sought good ‘bang for the buck’ in audio systems. My first set of speakers attempting audio quality was a 2.1 Audio-Technica set. I still have them – in fact, I was only listening to them a few hours ago as they’re out at the site – and despite being a quarter century old, they still work perfectly and still sound great. Funnily enough, I can find zero mention of these on the internet, so I really wish I’d taken a photo of the model number so I could give you some technical details. In any case, they have a reasonably sized subwoofer, and two (later upgraded to three) ‘full range’ satellites (which usually means they struggle at the low mid range, which is exactly true in this case). The reason I added the third optional speaker is because the satellites were a bit overwhelmed in a TV scenario, and the unit is capable of 3.1. The third satellite provided a dedicated centre channel and made them into excellent TV speakers. If used as computer speakers where the satellites are right close to your head, two is plenty, but at a distance you do need more satellite power. In any case, these go really loud and the bass is thumpy enough; for small, fairly inexpensive and certainly very long lasting speakers, they were and remain a great purchase.
I seem to have some luck when picking speakers. After a few years with the AT 2.1 set, I invested in a Logitech 5.1 set. I remember they lasted a while and then blew up, so I then bought a more expensive Logitech 5.1 set which I think was the second most expensive model they had at the time. Those remain working perfectly to this day after nearly two decades of continuous use. As they’re next door, I know the model number this time: they are the Logitech Z-5400 5.1 speaker system released in the year 2006:
- Total 310w RMS
- 1x 6.5” Subwoofer 116w RMS
- 5x 2.5” Satellites 194w RMS
- Claimed 35 - 20,000 Hz (one user reported measured 3 dB from 42 Hz)
- I would say the bass has plenty of power but not much definition i.e. it is muddy.
- Crossover is approx 120 Hz to compensate for the ‘full range’ satellites not having much at the low end, so the muddy subwoofer does much more lifting at the low end than anything hifi would.
I remember ex-girlfriend Johanna didn’t think the audio quality from these was good back when we lived together, but we were using them as room speakers when they were designed specifically to surround a single individual sitting in front of a computer. If you have them arranged like that, well, I find them better than the Audio-Technica’s, but I have also heard better again, though I must admit not by much. The Z-5400’s got sniffed at by audiophiles, but end users absolutely loved them, and I see from the internet that if arranged in their intended position closely surrounding a single seated individual, empirical measurements found audiophile level quality.
Finally, I suppose the other audiophile type audio system is the one in my car which is the maximum trim possible for a Ford Focus. It is made by Sony, and is the only one with proper dedicated tweeters, mid range and subwoofer speakers. Music sounds great within the car, and if you crank it up you get everybody staring at you thanks to the dedicated subwoofer which sits on top of the spare wheel in the boot and is the same diameter. Rock music sounds particularly amazing, indeed you can be singing so loud as you’re driving I’m not entirely sure it’s safe.
So, onto the party box! You may find the rtings review of interest, but my own notes are these:
- Total 800w RMS
- 2x 8” Subwoofers
- 2x 2.75” Tweeters
- Claimed 35 - 20,000 Hz (rtings measured 3 dB between 26.7 - 14,100 Hz).
- SBC and AAC Bluetooth codecs.
- Noticeable volume drop near to 100 Hz during a frequency sweep (rtings found the same). This is surely a firmware glitch.
- Many users report volume past about 66% simply swaps treble for bass i.e. bass is actively reduced in favour of treble. I couldn’t check this because this speaker at that volume in this small rented house is just too much to bear.
All testing was done with a wired cable from a MBP, not over Bluetooth to eliminate Bluetooth codec compression effects.
The party box is on paper the most powerful audio system I now have. Yes it goes loud, yes everything in your house will vibrate, however as with all powerful audio systems, they max out at about 100 dB of noise to prevent damaging human ears. The difference in power is how much air volume gets moved i.e. dB is the amplitude of the wave, but doesn’t say anything about the mass of air moved. More powerful speakers move more air volume. If this is hard to conceptualise, it’s like a light having a maximum point brightness otherwise you get glare, so past a certain power level lights add brightness by adding surface area to emit more light from more surface. The party box is really big, easily more than a metre tall, and the large bass vent it has at the back pushes lots and lots of air. Putting a 24 Hz tone through it, I can’t hear anything, but given that the house and everything in it is shaking (including me) I can’t fault the power delivery.
Unfortunately, the party box is also the least consistent of any audio system I have. On some material, it is goosebumps good. On other material, it sounds actively crap. I upgraded to the latest firmware as the actively crap tracks are almost certainly due to firmware issues, but in the end I have to call it as I hear it:
- If music has ‘tight’ swooping bass, this speaker very much favours that. Basically it bats you physically around with bass.
- If music has lots of ‘general’ bass e.g. most rock music, this speaker doesn’t favour that. The sound is off-kilter or unbalanced or crowded somehow.
- Most music falls somewhere in between. Most is well rendered, however just because it has a beat or is EDM doesn’t necessarily make it stand out. Rather, it’s ‘competent’ rather than ‘wow’. Whereas on my other sound systems above, the exact same material just sounds better and sometimes a lot better.
To try and nail this down better, here is specific material I tested:
Most improved tracks (like wow! goosebumps good):
- Timeless by Goldie. I had no idea until now how much attention he paid the bass track!
- Porcelain by Moby
- Teardrop by Massive Attack
Disappointing (sound is brash? crowded soundstage? somehow sounds bad):
- Being Everyone by After Forever
- Is Nothing Sacred Anymore? by Meatloaf
- Ironic by Alanis Morissette
Very acceptable:
- Unwritten by Natasha Bedingfield (no crowded sound at all, very spacious)
- Halo by Beyonce
- Talk that Talk by Rihanna
- Here with me by Dido
- Fotografia by Juanes (but getting a bit of that crowded sound issue)
Tested movies (all with 5.1 or better audio with dedicated LFE channel):
- The Matrix. Breathtaking. Bass here is exactly right and clean. Every bullet fired is like a bass punch into the chest. It shows just how good this speaker can be with the right input.
- Sleepy Hollow intro. Amazing, and it’s what I got this speaker for because this exact movie had a sound not commensurate with the picture on that nice TV.
- Starship Troopers. Amazing.
- Ultraviolet intro. Superb.
- Edge of Tomorrow. Almost as if you’re in the battle scenes.
- Deadpool and Wolverine intro. The movie has too much general bass in the intro music, it doesn’t sound good, it’s crowded. Rest of the movie is good.
- The Rock. Very good, but not as good as I think it could be. Where is the bass punch in the chest per bullet fired?
Movies, as you might surmise, suit this speaker far better in general than music.
I will say that almost nobody online seemed to find the same as I have above. You could say it’s low quality source material, and maybe it is, however I deliberately threw each of those three disappointing tracks onto the Audio Technica speakers earlier today. And they sounded great: Being Everyone was so spacious it almost felt like you were at the concert. Meatloaf had Julia, my three year old, bopping around. Alanis’ most famous song came through clear, very much not crowded, and there was plenty of midrange even though that isn’t the strength of these speakers. I played all three in my car too, and all three sounded superb there, better than on the Audio Technica speakers.
I can’t really explain it, nor why none of the reviews online found anything similar. I’m very sure it’s not a hardware defect; Teardrop by Massive Attack would nearly bring you to tears it sounds so good. No, I reckon it’s something misconfigured in the DSP firmware; the fact that there is that arbitrary volume drop off around 100 Hz would suggest their firmware is just buggy. It’s a shame.
Now, all the above said, the party box does tick all the boxes it was bought for: it can be wheeled wherever you need sound, it can be safely left outside in the rain, no fiddling with wires is needed thanks to the Bluetooth, and it works very well to add bass to movies as a standin for a dedicated subwoofer. So I’m happy with my purchase. I guess my only temptation now is whether two of them would sound better because you could send left channel to one and right channel to the other, and maybe that might solve the crowded sound problem.
Anyway, not a concern for the next few years. This will do for now.
I guess my next update will probably be during my Christmas break. See you then!
My remaining prototype projects are:
- 3D printing extensions to an IKEA Fridans blind to avoid the expense of IKEA Fytur blinds.
- Dimming RGBW LED strips with my ‘new’ IRF540N MOSFET boards.
- Figure out a solution to reducing the power consumption of my cheap ESP32-POE boards.
I’m pretty much done with all three. The blinds have taken the most effort and have been the most frustrating, but I have a video of them in operation and I just need to write it all up. I’ve also completed the other two in full and those are what I’ll be talking about in this post.
Once these prototypes are complete, my next goal is to draw, in a 3D representation of the house, where all the wiring and items will go in detail. I have already done a full set of schematic wiring diagrams, but it would save time onsite if I knew exactly what wire goes where, and it may help me save money on wire by optimising routing. So I’ll still be busy doing house build aiding work.
Throwing a whole PC at a pulse generator
Last post I mentioned:
After quite a bit of research I landed on a surprising conclusion: the most cost effective way of implementing a low power pulse generator is actually a second ESP32 chip which does nothing but deep sleep for 340 ms and power on for 60 ms. This seems wasteful for a microcontroller as powerful as an Intel Pentium II from 1997, but the economics are what they are – I can get an ESP32-C3 on a breakout board with USB-C, onboard programmer and 3.3v buck converter delivered for €1.50 inc VAT! Madness! And it doubles as ‘the load’ because you can turn on the Bluetooth and Wifi stacks to consume up to 200 mW @ 3.3v (which should be just enough to consume 500 mW @ PoE), whereas the 555 circuit would need an additional load resistor and wiring.
Those ESP32-C3 “Super Mini” boards arrived from Aliexpress a week ago. According to https://roryhay.es/blog/esp32-c3-super-mini-flaw, mine are the flawed design which puts the Wifi antenna (the red thing on the left) too close to the external 40 Mhz clock (the silver component top left) which impacts reception quite badly:
I never found the Wifi on the Olimex ESP32 board of much use; it has a much better embedded antenna design which I still find nearly useless, especially as it is 2.4Ghz, and even a few of these devices in a room quickly overwhelm that spectrum such that it degenerates into unreliable noise. I certainly never would have any use for the Wifi with an even worse embedded antenna, so I don’t care that this is the flawed design for Wifi. Especially for €1.50 inc VAT. These boards do have the ESP32-C3 with the embedded 4Mb flash (amazingly, some do not), and it is the latest ESP32-C3-FH4 revision whereas some boards ship with a retired earlier revision. Build assembly, as you can see, isn’t the cleanest but I’ve also seen a lot worse. I will say that when I asked it to list Wifi APs it could see, it found an identical list to my phone and the signal strengths it reported were only a bit worse, but I didn’t try connecting to any as I won’t ever need that functionality.
Here are the pinouts:
In addition to your twelve very flexible i/o, the ESP32-C3 has 400 Kb of RAM and 4Mb of flash, it runs at up to 160 Mhz, and it has a 32 bit RISC-V instruction set. It is only single cored, so if you do run a Wifi or BLE stack that will consume a lot of CPU, whereas the Xtensa ESP32s throw a whole CPU core at Wifi and BLE and leave the other core entirely for you. It is also missing hardware floating point, but does have hardware crypto acceleration.
For €1.50 inc VAT delivered, that is a whole load of computing. This is a very capable dev board for the money. It’s almost embarrassing, the value here.
You can, however, go cheaper still for a breakout board with a USB connected flash programmer and a 5v to 3.3v buck converter. Yes, even less than €1.50 inc VAT delivered:
- A clone of the Waveshare RP2040-Zero dev board for €1.29 inc VAT delivered. This is the same RP2040 microcontroller of Raspberry Pi fame, and the board is generous with nineteen i/o on pins and a further nine i/o if your micro soldering skills are good. It also has a single onboard WS2812 RGB LED if you fancy a very small light show.
- A clone of the ATTiny85 dev board for €0.93 inc VAT delivered. This is an eight bit CPU compatible with Arduino, and it has six i/o pins.
(I deliberately omit the fake STM32 clones which are well known to be sufficiently flaky and weird that they aren’t worth the hassle at this price point. The RP2040 chip on these boards is genuine, as is the ATTiny85, it’s just the dev board which is a clone of a branded (and much more expensive) board. As the chip is genuine and not a bad attempt at a reproduction, you get consistent behaviour and the toolchains will work without issues)
Of those two, the RP2040-Zero looked worth a speculative purchase, and the Aliexpress vendor with both the ESP32-C3 Super Mini boards and the RP2040-Zero boards gave you free shipping if the order was over €37. So, I ordered this bag of both MCU types, mostly the ESP32-C3 with a few of the RP2040-Zero’s thrown in:
I still can’t quite believe that in 2024 I can buy two dozen 1997 era PCs in a bag for under €40 inc VAT delivered. Madness!
The RP2040-Zero boards have more i/o than the ESP32-C3, and have the whole Raspberry Pi ecosystem with them, but otherwise are inferior in almost every way. They have nearly half the RAM, half the flash, slower clock speed and they in fact run far slower again than any ESP32. There is obviously no Wifi nor Bluetooth. For €0.20 inc VAT saved they probably aren’t worth it except for the Raspberry Pi ecosystem and they were also a guarantee that if the ESP32-C3 boards didn’t work out, I’d have a fallback. I’m glad to report that the ESP32-C3 boards did work out, so those RP2040’s are probably going to gather dust until a use case for them turns up.
One of those use cases might be as a MicroPython learning platform. My son occasionally types in Python Turtle programs to a laptop. He doesn’t understand enough to write his own programs yet, but if he sticks with it and gets good enough that he can, certainly one of those RP2040s is ideal for learning. Once you flash MicroPython onto it, you literally plug it in to a PC, connect to the serial port it exposes and voila, you’re in a Python interpreter. You can flash LEDs, poke pins, and there is actually a fair fist at an embedded debugger on there. This is stuff Raspberry Pi does well, and to date they’ve had superb support for ancient hardware, so I expect the latest firmwares will be available for those boards a decade or more from now. And hey, if you just want a quick and dirty MCU try out of something embedded from within an interactive interpreter, that’s more hassle to set up with an ESP32 than with a RP2040.
Completing Olimex ESP32-PoE detailed power consumption
Last post I reported disappointing power consumption measurements for the Olimex ESP32-PoE board when powered from PoE, and I speculated that a pulse generator using these cheap ESP32-C3 boards would produce a superior result:
Removing resistor R42 which burns 500 mW to keep the PoE supply going is an obvious step, but I think we can do better than 1.2 watts idling with that resistor removed for not much extra money. PoE doesn’t actually require a constant 500 mW of load to stay active – rather it needs to see 500 mW of load for at least 60 milliseconds every 340 milliseconds. This is a duty cycle of 15% on a tick of 400 milliseconds, reducing amortised load to keep PoE active to 75 mW. If I could get these boards mostly into deep sleep, and have some sort of pulse generator generate load at the right duty cycle, that could reduce heat contributed to the house significantly … Assuming that the 2 mW deep sleep is the same for both, and conversion losses might be 5x at such a low current, it might draw 25 mW from PoE during the off cycle. That should bring total PoE draw to under 100 mW per device amortised, so under three watts for the total.
I can tell you now I didn’t get the power consumption that low as there are inefficiencies in the chain. But I can complete the table now:
| | USB (5.2v) | PoE (52.6v) unmodified | PoE (52.6v) with R42 resistor removed |
|---|---|---|---|
| Deep sleep (two LEDs shining) | 2 mW | 579 mW | switch cuts power, so must be < 500 mW |
| Idle in ESPHome no ethernet (two LEDs shining) | 386 mW | n/a | n/a |
| As above with ESPHome trying to get ethernet | 454 mW | n/a | n/a |
| Idle in ESPHome with ethernet (four LEDs shining) | 553 mW | 1525 mW | 947 mW |
| Idle as above with all peripherals for a bedroom | 600 mW | 1736 mW | 1158 mW (estimated) |
| Deep sleep + ESP32-C3 pulse generator to keep PoE alive (three LEDs shining plus one pulsing) | 46 mW | n/a | 170 mW |
So it came out at about double what I originally hoped for, most of which I would guess is inefficiency in the buck converters at very low currents. But it is still a lot better than any of the other alternatives above, so I think we’ll take it.
Here it is in action: I turn on the blue LED when the board is burning power and turn it off when it goes into deep sleep:
And, for reference, here is the snippet of ESP-IDF code which I wrote:
```c
#include <assert.h>
#include <stdio.h>

#include "driver/gpio.h"
#include "esp_err.h"
#include "esp_event.h"
#include "esp_netif.h"
#include "esp_sleep.h"
#include "esp_timer.h"
#include "esp_wifi.h"
#include "nvs_flash.h"

#define US_DEEPSLEEP (400000 - US_BURN)
#define US_BURN (66000) // 60000 plus min 5200 for period before wifi enable

void app_main(void)
{
  esp_sleep_wakeup_cause_t wakeup_cause = esp_sleep_get_wakeup_cause();
  int64_t since_boot = esp_timer_get_time();
  // Turn on the LED to burn a few extra mW (the onboard LED on GPIO 8 lights when driven low)
  gpio_reset_pin(GPIO_NUM_8);
  gpio_set_direction(GPIO_NUM_8, GPIO_MODE_OUTPUT);
  gpio_set_level(GPIO_NUM_8, 0);
  // Initialize NVS. Apparently required for Wifi.
  esp_err_t ret = nvs_flash_init();
  if (ret == ESP_ERR_NVS_NO_FREE_PAGES || ret == ESP_ERR_NVS_NEW_VERSION_FOUND)
  {
    ESP_ERROR_CHECK(nvs_flash_erase());
    ret = nvs_flash_init();
  }
  ESP_ERROR_CHECK(ret);
  // Fire up wifi to add an extra ~400 mW power consumption
  int64_t before_scan = esp_timer_get_time();
  ESP_ERROR_CHECK(esp_netif_init());
  ESP_ERROR_CHECK(esp_event_loop_create_default());
  esp_netif_t *sta_netif = esp_netif_create_default_wifi_sta();
  assert(sta_netif);
  wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT();
  ESP_ERROR_CHECK(esp_wifi_init(&cfg));
  ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_STA));
  ESP_ERROR_CHECK(esp_wifi_start());
  esp_wifi_scan_start(NULL, false);
  // Spin loop the CPU to burn another ~25 mW until US_BURN microseconds since boot have elapsed
  while (esp_timer_get_time() < US_BURN)
  {
    // vTaskDelay(10);
  }
  int64_t after_scan = esp_timer_get_time();
  printf("\nsince boot = %lld us\n", since_boot);
  printf("wakeup cause = %d\n", (int)wakeup_cause);
  printf("before wifi scan = %lld us\n", before_scan);
  printf("after wifi scan = %lld us\n", after_scan);
  int64_t before_sleep = esp_timer_get_time();
  printf("before deep sleep = %lld us\n", before_sleep);
  // Everything powers down here; the timer wakes us again US_DEEPSLEEP microseconds later
  esp_deep_sleep(US_DEEPSLEEP);
}
```
I measured this burning 450 mW at peak with my USB power meter and 3 mW during deep sleep. The code which initialises the non-volatile storage takes quite a while, about three milliseconds, and it takes around two milliseconds to boot. This is why we burn for 66 milliseconds instead of 60 milliseconds: to account for the period before the burn begins.
I threw the above together with a few hours of work. I was impressed with the ESP-IDF framework, it is very well documented and the APIs are complete and well designed in the bits I looked at. The toolchain has a Visual Studio Code extension that ‘just works’ on Windows, Linux or Mac OS, and it’s full fat C++ 17 as it’s based on the current latest stable GCC 14.
There is an awful lot to like here I must admit. Compared to embedded systems development at this price point even five years ago … exponential improvement remains alive and well. I can see future ESP32 class MCUs on a dev board with a USB based flash programmer coming in well below one euro soon enough. Madness!
Dimming RGBW LED strips with my ‘new’ IRF540N MOSFET boards
Last August I wrote:
Many moons ago after testing IRF520N MOSFET based solutions for dimming LED strips, I realise that I had bought poorly and I should have bought IRF540N MOSFETs instead as they should run much cooler. Easily two years ago now I did buy a bunch of four channel IRF540N MOSFET boards, but I’ve never tested them in action. I’d like to get that done, to create peace of mind that this solution will definitely work if asked and they do run cool as the maths had predicted.
This one was a bit bad: I had had that IRF540N board since January 2023. To be honest, I was sufficiently sure it would be fine that it wasn’t high priority, but as I run down the prototypes to work on, this was an easy one to also clear off the deck.
Back in 2021 when I first started testing prototypes – yes, three years ago now – I put in prototype cove lighting operated by a Devantech industrial PIC32 board. It used an IRF520N MOSFET to do the dimming, and boy did it get hot with perhaps 2 amps going through it. This is just maths: here are my estimates from their datasheets of waste heat generation for a control voltage of 10v with the MOSFET at 20 C (they leak more heat as they get hotter):
Current | IRF520N watts | IRF520N temp | IRF540N watts | IRF540N temp |
---|---|---|---|---|
0.63 A | 0.08 W | +5.0 C | 0.02 W | +1.3 C |
1.00 A | 0.20 W | +12 C | 0.05 W | +3.2 C |
2.00 A | 0.80 W | +50 C | 0.2 W | +13 C |
2.50 A | 1.25 W | +78 C (needs cooling) | 0.33 W | +20 C |
4.00 A | 3.20 W | +198 C (needs cooling) | 0.82 W | +52 C |
Most MOSFET boards of any kind put a LED in series with the control signal which removes 0.5v, so with a level shifter you end up with a control voltage of about 4.5v after the LED. This is obviously a good bit lower than the 10v most MOSFET datasheets describe, which can result in even more heating than the above as more current will leak due to the weaker clamp.
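(If you want to roughly reproduce my estimates above, they are essentially just conduction loss times thermal resistance: P ≈ I² × R_DS(on), and ΔT ≈ P × Rθ(junction to ambient), taking Rθ as about 62 C/W for a bare TO-220 in free air. At 2 A that gives 2² × 0.2 Ω ≈ 0.8 W and about +50 C for the IRF520N versus 2² × 0.05 Ω ≈ 0.2 W and about +13 C for the IRF540N. The on-resistances here are back-solved from my table rather than quoted from the datasheets, and as just described, a lower gate voltage pushes them higher still.)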
This particular IRF540N board happens to be wired differently from usual however. To be honest, I knew that when I did the research and bought it two years ago, but I had forgotten since, and so this testing and wiring is wrong as you’ll shortly see.
There isn’t much to the automation side of things here – ESPHome knows about four channel RGBWW LED strips, and here it applies a 15 kHz PWM to each channel which is more than flicker free. In case you’re wondering why the green stays on, more on that shortly.
To test heating, I left them on 100% for a period. These are my outdoor LED strips for the house soffit and I reckon they consume about 72 watts mainly clamped by the overly thin anode wire which probably pegs current to 3 amps or so. When I fit these to the house, I’ll be running a second anode wire to the other end just to reduce heat wastage.
You can clearly see the hot anode wire standing out – why these Chinese RGBWW LED strip manufacturers insist on fitting identical gauge wire for the common anode I just do not understand. I can’t believe it actually saves much money, maybe a few cents, but it would make far more sense electrically speaking to either fit a slightly thicker wire, or even just double up on the wires.
The strip itself gets up to 75 C, but as these will be living outside up high where there is a breeze I am not worried about them warping the soffit plastic or anything (I also have them standing off slightly using clips). Also, the extra anode line will help a bit I think by not turning the common anode into a heating element.
The ‘MOSFET4 U04’ board which costs about €2 inc VAT delivered shows very little heating by the MOSFETs, maybe +15 C. What is surprising is that the PS2801-4 optocoupler chip gets rather warm, though it remains well inside its datasheet temperature range. There was no PWM as it is fully on, so it’s not rapid switching causing the heating. Its datasheet says it’s happy with 3.3v signalling and it’ll switch outputs of up to 80v.
The heating made me look a bit more into the circuit, which then made me realise that I had wired it wrong by using a 5v level shifter. The optocoupler is powered by the MOSFET power NOT Vin as would be normal in MOSFET breakout boards, so the Vin is completely ignored and the optocoupler detects only the difference between signal and ground. There is a voltage divider for the input which divides the MOSFET voltage by approximately 2.5 to create the input signal for the IRF540N. The MOSFET has a maximum signal voltage of 20v, so the maximum safe voltage for the load is 50v. I’m using 24v for the LED strip, so that yields a 9.6v signal voltage. This is a clever design – it means my 3.3v to 5v level shifter is completely unnecessary, and indeed may be driving too much current through the optocoupler making it get hot. That inspired me to rewire and retest this by removing the level shifter, and this was the result:
Well that is a surprise! This is now the optocoupler driven directly from the ESP32 3.3v output rather than level shifted 5v. What you’d expect is the lower voltage and surely much lower max current would produce less heating, but for some reason it appears to have the opposite effect. I didn’t connect the load in this case, but assuming it wouldn’t cause voltage drop I can’t see any reason why that would affect the heating of the optocoupler.
Looking again at the PS2801-4 optocoupler datasheet, apparently it’ll dissipate up to 200 mW of heat per channel. Four channels of that is 0.8 watts. That might explain things, it is a small chip. I think it’ll need a heatsink when deployed. Good to learn!
Finally, there is a question about why the green is always on? A thermal photo with all signals low explains everything:
That’s clearly a busted MOSFET or it isn’t wired in correctly – either way, it’s always on. I note that the optocoupler is now cool, so I suspect the fault isn’t there, but it can be hard to say. In any case, for €2 inc VAT you just grab another board, and I bought a bag of them.
Northern Lights
About six months ago I was very fortunate to see the Northern Lights come as far south as Cork. We are currently in a solar maximum, so I thought I’d take a quick peek, assuming it would be clouded out. It was not – once my eyes adapted, I got twenty minutes of reds, greens and blues strobing, pulsing and twisting above me, easily visible with the naked eye. Then the clouds came in, so it was over. I didn’t bring my phone with me at the time thinking I wouldn’t see anything, and I could see the clouds coming so I didn’t want to ruin my night adapted vision by going to get my phone. So I captured no photos.
I honestly thought that would be the one and only time I’d ever see Northern Lights this far south in my life. However, a coronal mass ejection hit the planet a few days ago, and most unusually it was a clear night. As this was an ejection, the lights were concentrated to the north rather than extending below Ireland like last time, and unfortunately Mallow town is to my north so light pollution rather ruined the view. I did get this:
And that’s very similar to what you could see with the naked eye. You could see the colours just fine, despite the street lamps immediately around and Mallow town spraying light all over it.
Contrast that with the site for my future house which is on the edge of a dark sky reserve:
This was taken at almost the same time by my neighbour Rob with his iPhone. Without doubt Apple have done some postprocessing there, so it wasn’t quite that nice with the naked eye. But it should have been a lot better if you let your eyes adapt than what was possible where I live, with all those bright street lights to the north of my house preventing you adapting.
So there you have it – I’ve seen the Northern Lights twice in my life now! And they were awesome.
Anyway here’s hoping that by this time next month we’ll be out of structural engineering. Three months for SE is a long time.
As we all sit around waiting, I have been pushing onwards with the future house projects. Last post I mentioned:
1. Implementing the ventilation boost fan per inlet and outlet in the house.
2. 3D printing extensions to an IKEA Fridans blind to avoid the expense of IKEA Fytur blinds.
3. Dimming RGBW LED strips with my ‘new’ IRF540N MOSFET boards.
I haven’t got to item (3), but I’ve made significant progress with (1) and (2). (1) is what I’ll be writing about today.
Ventilation boost fans
Last post I said:
I shall be testing a €4 inc VAT driver based on what Aliexpress claims is a BTS7960 H-bridge. It claims it can handle 43 amps, the reviews are clear it cannot, but it should handle the max 3 amps we’ll ever demand from it. The BTS7960 can take a max 30v, so I’m a little concerned that back EMF from the 24v bilge pump fan might spike over that. However it would seem that these bilge pumps respond very well to lower voltages, they turn well at 5v and have more than plenty flow in my opinion at 12v (and at 24v, they’re insane) so chances are very high I’ll run them at 12v and make everything easier on myself.
And here it is wired up:
The BTS7960 H-bridge is by far the cheapest ‘not small’ motor driver on Aliexpress. I had been a little worried about it, but having tested it myself and watched plenty of YouTube videos of other people testing it (including to destruction), I’m feeling much happier with it. The BTS7960 ICs themselves (assuming they aren’t fake clones) claim a max 43 amps, but they’ll throw out an enormous amount of heat for that and besides the cheap module these are mounted on doesn’t have thick enough traces to handle such current. YouTube reviews reckon the safe maximum without active cooling is about 15 amps, and moving the heatsink from the ‘wrong’ side to the front also helps.
There isn’t much to the ESPHome scripting for this – you put a PWM onto the forward and backwards pins, and a digital output onto the forwards and backwards enables. Enabling both backwards and forwards shorts the windings, which equals braking, as the shorted back EMF from the motor stops the rotation quicker than letting it spin down with no current flowing. The BTS7960 is happy with 3.3v TTL and with the currents that the ESP32 outputs, so it ‘just works’:
The first thing you may notice in the video is coil whine – for some reason I don’t remember now, I had configured the PWM for these to 3662 Hz, so unsurprisingly there is a clear 3662 Hz tone in the video. I think I might have done that to reduce the impact of voltage spikes from back EMF on the electronics, but if I’m now running at 12v, which I think I am (as you can see, they still go like the clappers despite being 24v motors), then something much higher frequency would make sense. The ESP32 has an 80 MHz base frequency for its PWM, so the obvious choice would be 625 kHz to give 128 steps. However the datasheet for the BTS7960 says 25 kHz is the maximum, so 2048 steps at 25 kHz, or 4096 steps at just under 20 kHz, seem like reasonable choices, and nobody will be hearing whine up there.
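As an aside for anyone wanting to replicate the fan control: the real thing is a handful of lines of ESPHome YAML, but stripped to its essentials the logic is no more than the following Arduino-flavoured C++ sketch. Pin numbers are illustrative, R_EN and L_EN on the module are assumed tied together to one GPIO, and it assumes the arduino-esp32 2.x LEDC API:

```cpp
#include <Arduino.h>
#include <math.h>

// Illustrative pin numbers only; R_EN and L_EN on the BTS7960 module are
// tied together and driven from a single GPIO.
constexpr int PIN_RPWM = 32;   // forward PWM input
constexpr int PIN_LPWM = 33;   // backward PWM input
constexpr int PIN_EN   = 3;    // both half-bridge enables

constexpr int CH_FWD = 0, CH_REV = 1;  // LEDC channels
constexpr int PWM_BITS = 11;           // 2048 steps
constexpr int PWM_FREQ = 25000;        // BTS7960 datasheet maximum

void setup() {
  pinMode(PIN_EN, OUTPUT);
  digitalWrite(PIN_EN, HIGH);             // driver enabled
  ledcSetup(CH_FWD, PWM_FREQ, PWM_BITS);  // arduino-esp32 2.x LEDC API
  ledcSetup(CH_REV, PWM_FREQ, PWM_BITS);
  ledcAttachPin(PIN_RPWM, CH_FWD);
  ledcAttachPin(PIN_LPWM, CH_REV);
}

// speed: -1.0 (full reverse) .. +1.0 (full forward); 0 = brake, because with
// the driver enabled and both PWM inputs low the windings are shorted.
void setFan(float speed) {
  const uint32_t duty = (uint32_t)(fabsf(speed) * ((1u << PWM_BITS) - 1));
  ledcWrite(CH_FWD, speed > 0 ? duty : 0);
  ledcWrite(CH_REV, speed < 0 ? duty : 0);
}

void loop() {
  setFan(0.5f);  delay(5000);   // forwards at half speed
  setFan(0.0f);  delay(2000);   // brake
  setFan(-0.5f); delay(5000);   // backwards at half speed
  setFan(0.0f);  delay(2000);
}
```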
The second thing you may notice if you have a HDR display is that the above video ‘glows’ brightly compared to the rest of the page. This is because it is in HDR10+! What won’t be so obvious is that it’s the first ever AV1 encoded video on this website, and I encoded it at 1080p Full HD in ten bit HDR with stereo AAC audio with a capped bitrate of 500 Kb/sec. That entire minute long video is only 3.8 Mb long! Two hours of it would be 456 Mb.
I think you’ll agree that it came out very well for such a low bitrate, so I expect to be mounting all future videos directly on this website instead of having YouTube host them. I only have seven videos on YouTube, so there’s even an argument for converting the lot to direct hosting. I had only ever mounted two videos directly on this site before, both severely reduced in resolution to keep the files small, but if AV1 can encode 1080p Full HD in HDR10+ at that kind of quality for that low a file size, I think that’s the future. What I’m still wrapping my head around a little is that twenty five years ago, ninety minute movies came in a 700 Mb file to fit on a CD. For that you might have gotten 720 x 340 resolution encoded in MPEG-4 with MP3 encoded audio. Here we are in 2024 with 8.5x more resolution and Rec.2020 colour gamut in half the bitrate. It’s impressive.
My phone which took the video remains the venerable Samsung Galaxy S10 from 2019. I had never owned a phone for more than two years until this one, and here it is still going strong into its fifth year (batteries clearly took a huge leap forwards around then). Mine runs Android 11, mainly because I keep thinking it isn’t worth upgrading to Android 12 as surely at some point ‘soon’ I’ll be replacing it. Amazingly, you need Android 14 to take photos which retain the HDR information in the file – despite the fact that as early as Android 10 you could happily take HDR video if the hardware was capable. It’s one of those things you’d have thought very easy to implement much sooner, but apparently not.
Anyway I think that’s the first half of the ventilation boost fan problem solved – it goes forwards and backwards at any speed you like under ESP32 control. There is more though: how do we decide by how much to reverse the fan to stop the flow, i.e. how do we detect air flow direction and dynamically adjust the reverse speed to keep the air flow stopped?
Detecting ventilation air flow (cheaply!)
I’ve sized the ventilation ducts in the house for a linear pressure loss of 0.5 - 1 Pa per metre, so if we want to cut off air in one part of the house (by turning its boost fans into reverse) to boost heating or cooling in another part of the house (by turning its boost fans forwards), we have some slack in the ducts to drive boosted flow in that direction. The Zehnder ComfoAir Q600 data sheet says it should not have more than 200 Pa pressure at the unit for a long service life, so we will need to balance the speed of the fans to ensure no excessive loads anywhere in the system. For this, ideally speaking, you’d fit air velocity meters at every ventilation inlet and outlet.
The FS3000 sensor is exactly what one would prefer – it comes in two models, one measuring up to 7.23 metres/sec and the other up to 15 metres/sec. The worst case air velocity in this system is at the 180 mm diameter connection at the MVHR unit, which is 6.52 metres/sec, so the first model would be the right one. Unfortunately, the FS3000 is expensive – the cheapest I can find is €55 inc VAT delivered each. I have seven boosted stale air outlets and eight boosted fresh air inlets, so that would cost me €825 inc VAT, which is a bit much. Can I do it cheaper?
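(For anyone checking that velocity figure, it falls straight out of flow divided by cross-sectional area, assuming the unit’s full nominal 600 m³/h passes through that one 180 mm connection: v = Q ÷ A = (600 ÷ 3600 m³/s) ÷ (π × 0.09² m²) ≈ 6.5 metres/sec, which is within rounding of the 6.52 above – the exact figure depends on the design flow rate you plug in.)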
Mass Air Flow sensors like cars use are much cheaper, but as they work by measuring the resistance of a heated wire in the airflow, they are non-linear and temperature sensitive, and my fresh air outlets will have varying temperatures, so that won’t work. What I really need is something solid state, and temperature insensitive.
I had a few spare BME280 temp + press + humidity sensors, so I tried sellotaping one onto the fan to see what readings it would get:
In the video I ran the fan both forwards and backwards (you might have noticed I fixed the audible coil whine since) and I found these readings:
| | Backwards 100% | Backwards 50% | Stopped 1 | Stopped 2 | Forwards 50% | Forwards 100% |
|---|---|---|---|---|---|---|
| Pressure | 100048.599 Pa | 100044.02 Pa | 100032.2068 Pa | 100038.2385 Pa | 100032.901 Pa | 100016.3545 Pa |
| Relative to stopped | 0.0164% | 0.0058% | 0% | 0% | -0.0053% | -0.0158% |
The difference between the two stopped values is 0.006%, which rather neatly illustrates the problem here – yes the BME280 can tell if there is air flow or not (+/- 16 Pa at 100% speed), but due to the drift in the absolute reading over even short periods of time, it won’t be useful for this application.
What I actually need here is a differential pressure sensor which returns the difference between two inputs (here: inside the duct and outside the duct), but the cheapest one of those I can find is €45 inc VAT, so not much better than the FS3000 air velocity sensor. So let’s see if there are better barometric sensors for a reasonable price:
| | Noise | Relative accuracy | Absolute accuracy | Cost incl delivery |
|---|---|---|---|---|
| BMP280 | 3 Pa | +/- 12 Pa | +/- 100 Pa | €0.56 |
| BMP390 | 2 Pa | +/- 3 Pa | +/- 50 Pa | €4 |
| BMP581 | 0.1 Pa | +/- 6 Pa | +/- 50 Pa | €53 |
The improvement in relative accuracy of the BMP390 would be a large help, but that +/- 50 Pa in absolute accuracy is a problem. In atmospheric pressure terms, it’s the difference between 1000 hPa and 1000.5 hPa, so very accurate on that scale which is what it was designed for. But not ideal for my purposes where my max pressure difference will be around 16 Pa.
In any case, I reckon it’s worth a punt on getting some of the BMP390s and seeing what they’re like, so I ordered three. Should arrive within a month.
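Roughly, the idea would be one sensor inside the duct and one in the room, subtracting the two readings, with the constant offset between the two units (which is where that +/- 50 Pa absolute accuracy would otherwise bite) zeroed out while the fan is stopped. A minimal sketch of that, in plain Arduino C++ rather than ESPHome, assuming the Adafruit BMP3XX library and the chip’s two usual I2C addresses – and obviously untested until the boards arrive:

```cpp
#include <Arduino.h>
#include <Wire.h>
#include <Adafruit_BMP3XX.h>

// One BMP390 inside the duct, one in the room, on the same I2C bus.
// 0x77 and 0x76 are the two addresses the chip supports (SDO high/low).
Adafruit_BMP3XX duct;
Adafruit_BMP3XX room;

void configure(Adafruit_BMP3XX &s) {
  // Heavy oversampling plus IIR filtering, since we care about ~1 Pa changes.
  s.setPressureOversampling(BMP3_OVERSAMPLING_32X);
  s.setIIRFilterCoeff(BMP3_IIR_FILTER_COEFF_15);
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  if (!duct.begin_I2C(0x77) || !room.begin_I2C(0x76)) {
    Serial.println("BMP390 not found");
    while (true) delay(1000);
  }
  configure(duct);
  configure(room);
}

void loop() {
  if (duct.performReading() && room.performReading()) {
    // Each unit can be up to +/- 50 Pa off in absolute terms, but that offset
    // is largely constant, so it can be measured with the fan stopped and
    // subtracted; what is left is a usable relative duct pressure in Pa.
    Serial.println(duct.pressure - room.pressure);
  }
  delay(500);
}
```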
Olimex ESP32-PoE detailed power consumption
Believe it or not, it’s almost exactly two years ago I first mentioned my Olimex ESP32-PoE boards where I described what was known at the time about its power consumption gleaned off datasheets and the internet:
The ESP32 running full belt minus wifi at 240 Mhz will consume about 50 mA, peripherals can’t draw more than 250 mA if on PoE, and perhaps less than that if on battery. An idling ESP32 might draw 4 mA, therefore a 3000 mAh battery could run the device for between 30 and 750 hours (one month) assuming board power overhead of 20-50 mA. If you can put the device into deep sleep, that draws only 0.1 mA, which could be up to 30,000 hours (or over three years)!
and:
Thanks to this being an open source hardware design, I discovered that the DC-DC stepdown chip is the TX4138 whose datasheet can be found at https://datasheet.lcsc.com/lcsc/1811141153_XDS-TX4138_C329267.pdf. It claims an 84% efficiency. Assuming it’s a linear regulator taking the 5v to 3.3v and therefore burns as heat 33% of the current the ESP32 uses, a 100 mA draw by the ESP32 at 3.3v (one third of a watt) would be 133 mA of 5v, or two thirds of a watt. That turns into a minimum of 0.8 watts of PoE power, which is a best case efficiency of 41%.
By the way, that test board measuring CO2, humidity etc shown in that post two years ago has been running continuously since then with zero issue. Its OLED display now suffers from burn-in, but it’s still going and I have two years of sensor measurements in the database.
Anyway, in the past two years more information has appeared on the internet about these boards, and there is a suggestion that the PoE power consumption can be greatly reduced by removing a resistor on the board. I also needed detailed empirical power consumption metrics in order to figure out whether these boards could drive the blind motor directly without additional power i.e. how powerful a blind motor can I fit without browning out the board when powered off PoE?
You can get a USB power meter easily enough, though the cheaper ones return inaccurate values so shop carefully. Mine is a bit more expensive, but it’s accurate. Finding a PoE power meter turns out to be rather harder – they exist, but cost over €100. For that money you can get a managed PoE switch instead, and if you choose the right model it will publish the PoE power drawn per port over SNMP. So I splurged on a TP-Link TL-SG2210P, which is the cheapest modern (i.e. 54v based, not 48v based) PoE managed switch I could find on the market, and I finally have empirical PoE power consumption measurements for the Olimex ESP32-PoE:
| | USB (5.2v) | PoE (52.6v) unmodified | PoE (52.6v) with R42 resistor removed |
|---|---|---|---|
| Deep sleep (two LEDs shining) | 2 mW | 579 mW | switch cuts power, so must be < 500 mW |
| Idle in ESPHome no ethernet (two LEDs shining) | 386 mW | n/a | n/a |
| As above with ESPHome trying to get ethernet | 454 mW | n/a | n/a |
| Idle in ESPHome with ethernet (four LEDs shining) | 553 mW | 1525 mW | 947 mW |
| Idle as above with all peripherals for a bedroom | 600 mW | 1736 mW | 1158 mW (estimated) |
One of the first things you notice is how power expensive an active Ethernet connection is. Ethernet was designed in the days before low power mattered, unfortunately: a 100 Mbit connection will gladly consume ~110 mW, with gigabit and higher sucking down ~400 mW upwards just for idle. It goes even higher if data is moving, but these boards will mostly be silent.
In terms of thermal camera heating after being left in free air for twenty minutes:
To be honest, I was a little shocked at how high both the thermal heating and the PoE power consumption are. I had been assuming ~1 watt for an empty board off PoE; I was out by 50%, which adds up if you fit lots of boards. Add some peripherals – in this case, 2x 5v <=> 3.3v level shifters, a BME280 temperature, humidity and pressure sensor, and the rotary encoder on the GA12-N20 motor – and you’re burning ~1.75 watts per board. If I fit thirty of these boards, that’s 52 watts of heat being pumped into the house. To put that into context, my entire 3000 litre thermal store in summer leaks ~60 watts into the house. Adding another fifty watts of background heating puts keeping this house from overheating in summer in jeopardy!
Removing resistor R42, which burns 500 mW to keep the PoE supply going, is an obvious step, but I think we can do better than 1.2 watts idling with that resistor removed for not much extra money. PoE doesn’t actually require a constant 500 mW of load to stay active – rather it needs to see 500 mW of load for at least 60 milliseconds every 340 milliseconds. This is a duty cycle of 15% on a tick of 400 milliseconds, reducing the amortised load needed to keep PoE active to 75 mW. If I could get these boards mostly into deep sleep, and have some sort of pulse generator present load at the right duty cycle, that could reduce the heat contributed to the house significantly.
My first instinct was a 555 timer pulse circuit, which are cheap and plentiful at €0.58 inc VAT delivered each, however it turns out that when ‘off’ they consume ~425 mW, which seemed like we could do better. After quite a bit of research I landed on a surprising conclusion: the most cost effective way of implementing a low power pulse generator is actually a second ESP32 chip which does nothing but deep sleep for 340 ms and power on for 60 ms. It seems wasteful to use a microcontroller as powerful as a 1997 Intel Pentium II for this, but the economics are what they are – I can get an ESP32-C3 on a breakout board with USB-C, onboard programmer and 3.3v buck converter delivered for €1.50 inc VAT! Madness! And it doubles as ‘the load’, because you can turn on the Bluetooth and Wifi stacks to consume up to 200 mW @ 3.3v (which should be just enough to consume 500 mW @ PoE), whereas the 555 circuit would need an additional load resistor and wiring. Assuming that the 2 mW deep sleep is the same for both, and conversion losses might be 5x at such a low current, it might draw 25 mW from PoE during the off part of the cycle. That should bring total PoE draw to under 100 mW per device amortised, so under three watts for the lot. Which is better than 52 watts!
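The entire firmware for such a pulse generator is only a handful of lines. A minimal sketch of the idea in plain Arduino C++, with the WiFi radio turned on purely as the dummy load – note the boot time after each wake also presents load, so the exact on-time would want tuning against what the PoE switch will actually accept:

```cpp
#include <Arduino.h>
#include <WiFi.h>
#include <esp_sleep.h>

// Keep the PoE maintain-power signature alive: present a load for ~60 ms,
// then deep sleep for ~340 ms, i.e. a ~15% duty cycle on a 400 ms tick.
constexpr uint64_t SLEEP_US = 340 * 1000ULL;
constexpr uint32_t LOAD_MS  = 60;

void setup() {
  const uint32_t started = millis();
  // Turning the radio on is the cheapest way to burn a couple of hundred
  // milliwatts at 3.3v; we never actually connect to anything.
  WiFi.mode(WIFI_STA);
  while (millis() - started < LOAD_MS) {
    // spin, keeping the CPU and radio awake as the load
  }
  WiFi.mode(WIFI_OFF);
  esp_sleep_enable_timer_wakeup(SLEEP_US);
  esp_deep_sleep_start();   // wakes back up into setup() after 340 ms
}

void loop() {}  // never reached; deep sleep resets back into setup()
```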
In case you’re wondering why not just use the existing ESP32 for this: one could theoretically modify ESPHome to do it for you. However, it would be a lot of work – the ESP32 has a bootloader stage and a main stage, and it takes 100 milliseconds to reach the main stage. So to get the timings we need, the firmware would have to operate exclusively within the bootloader stage. That’s deep customisation of ESPHome, and to be honest for €1.50 I can make the problem go away, so a second ESP32 it is.
The ESP32-C3 boards are on their way from Aliexpress and when they arrive they surely will be written about here. Watch this space!
If I were starting all this again – and it wasn’t obvious at all at the beginning, nor was it available until recently – Olimex now have a v2 of the ESP32-POE which can draw 25 watts instead of 12.5 watts from PoE, and has a built-in 12v 1.5 amp supply. That bilge fan, if run off 12v, should consume less than that, though it would need a slow start implementing to prevent brownout. I’ve already bought the PoE switches etc and they were all sized for a max of 12.5 watts per port, so that ship has sailed. Still, if you’re reading this thinking about replicating what I’ve done, it’s worth bearing in mind.
Next post I might – or might not – cover the blind automation or the LED strips or the ESP32-C3 boards. We shall see how things go. I kinda do want a video of a real blind going upwards and downwards on command and demonstrating that when it gets to the top, it stops on its own. I have some 3D printing to do between now and then to reach that, not least because my initial 3D print of the blind spool ended up losing its grip on the motor: the motor’s torque is so strong that it burred the plastic, despite that being ABS. So I had to go get little metal cogs for the end of the motor, and I’ll need to redesign the blind spool to fit the metal cog.
Still, what else do I have to be doing? At least this advances the house build in its own way.
I expect getting the GA and Structural Engineering (SE) drawings to final sign off will consume a great deal of my free time over the next few weeks. I also will need to start various balls rolling in terms of getting workers and supplies and indeed the mortgage in place by the various due dates. A long eighteen months beckons before us.
I jinxed myself. Silly Niall.
What the first draft of the GA drawings did enable in the past two months is a first draft of the SE drawings, and then everything went on pause because obviously everybody goes on summer holidays apart from us. I expect nothing further to happen until at least September.
This pushes back the likely erection of the timber frame to Spring 2025. This is disappointing. Anyway, here are the draft GA and SE floor plans for your entertainment:
There isn’t a huge amount to say about these: they are pretty much what the architect’s drawings had, and what deviations there have been I’ve since had undone in feedback. I suppose it is interesting that the most pressing weight in the entire building will be between the his and hers bathroom sinks in the master bedroom ensuite.
Despite the further slippage to the build date, I suppose there is at least forward progress. Got to look at the sunny side I suppose.
Before I do my usual show and tell of projects I’ve been working on these past two months, I suppose I ought to mention that an epoch has come to a close. Approximately twenty-four years ago, while I was living in Madrid, Spain, I subscribed to my first Boost C++ libraries mailing list. That was the first step in what became a long road culminating in me digging real deep to get Boost.Outcome past peer review and into the Boost C++ Libraries in 2019. That, in turn, approximately doubled my earning power for my next role and doubled it again for the role after that. Without getting that library past peer review and into Boost, I think it safe to say that I could never have afforded to leave renting and get my own home.
However, I think it’s now time to move on from the Boost C++ Libraries, so apart from continuing to sustain Outcome I won’t be participating there going forward – I’ve unsubscribed from everything there, and I no longer consider myself as having anything to do with Boost.
Casting to a wider picture, I’m also winding down my C++ participation in general. Serving at WG21 has broken me, to be honest; it’s increasingly ‘anything but C++’ for me, which is a shame, as it isn’t the language’s fault. I expect that the Sofia meeting in Summer 2025 will be my last face to face meeting; thereafter I’ll attend virtually only. I have ceased progressing all my WG21 papers apart from P1030 std::filesystem::path_view, and assuming that makes it in before the C++ 26 IS major feature deadline (yes, you guessed it, Summer 2025), that will be the only evidence of seven years of my service at WG21.
I am not remotely alone in moving on – WG21 broke a whole generation of us who had ideas about getting good engineering done. A good portion of my generation of C++ programmer at Boost and at WG21 will in fact be moving on like me. It’s funny, you know: wind me back twelve years and I couldn’t really understand why a whole bunch of people moved on just after C++ 11 had shipped, but now that C++ 26 will ship soon, yeah, I totally get it. The standards process just chews you up if you try to change anything which matters. And then you find yourself asking why you are bothering with all this if it isn’t pleasant, doesn’t benefit you, and you don’t get paid for it.
As you might gather, I’m on the hunt for a new programming language to dig into, one without the baggage and dysfunctionality of C++. I haven’t seen one yet, but I’m hoping a true C++ successor might turn up soon (and no, Rust isn’t that). To preempt people emailing me with their pet languages, this is what I seek from a true C++ successor:
- SIMD orientated, not scalar orientated.
- Stack unwinding resource release.
- Functions default to doing only static memory allocation like for embedded toolchains (i.e. where the linker can precalculate all allocations and reserve space for them at link time), otherwise you are required to supply memory from outside. A function can opt out of this so it can use unbounded dynamic memory allocation, but then only other ‘non-deterministic’ functions like it may call it.
- Same as the above, but for thread synchronisation.
- Borrow checker, but not annoying and productivity damaging like Rust’s.
- Superb compatibility with at least two existing major language ecosystems so it isn’t a pain to bridge in existing codebases.
The closest that I am currently aware of is Mojo, but the syntax is too scalar and Python-like for me personally. What I’d really prefer is a syntax which encourages you to write in SIMD friendly terms to the maximum extent possible.
Until my shining knight in systems programming language armour turns up, I intend to move over to WG14, the C programming language committee, where I hope to get modernised signal handling into C, and thence into absolutely everything else. It should keep me busy for a good few more years yet. And I probably will do a circuit of the global C++ conferences as a swan song, say goodbye to lots of people, and tie everything up nicely.
Taught myself further how to design 3D printable things
As I was looking after my children during my mornings before work throughout these past two months, my productivity has been quite impaired compared to normal. Still, I bit the bullet and invested several more very late evenings into learning how to design things which can be 3D printed, building on the experience gained in designing the picture frame shown in the last post. I went at things again in MeshMixer, and by early July it had yielded this:
This is my ‘midi’ case for my Olimex ESP32-POE boards which I’ll be fitting throughout my house (you can find its Thingiverse page here). The midi case can take up to a 3400 mAh battery and an additional breakout board, which almost certainly will be an Olimex UEXT extender for my use cases. This is the first thing I ever printed in ABS on my Anycubic Kobra Go 3D printer, and I can testify that getting a successful print is indeed a black art on this printer, which isn’t really designed for ABS printing. My printer has a Bowden drive, which pushes and pulls the filament from far away from the hotend. This introduces ‘bounce’ into the retraction cycle, which means filament is left sitting in the hotend, where ABS tends to singe, which then means it clogs and the flow stops. Or if the hotend is too close to the print surface, it can’t extrude quickly enough, so it singes and it clogs. Or if the wind is blowing slightly wrong, it singes and blocks. Painful.
As we saw in the commercial print of the house model with a Prusa XL, a direct drive hotend has no such troubles with ABS. But that’s a far more expensive printer than mine, and maybe by trying different ABS filaments I might find one less finickety in my cheap printer than my current one.
After the midi case, there was an obvious ‘mini’ case waiting to be extracted:
And then a maxi case, though it has design quirks I’d like to change about it so I haven’t uploaded it to Thingiverse yet:
My maxi case is currently wired up as a prototype for a bedroom in the future house. All available i/o bar one input is fully loaded:
- GPI34 (PINS) is unused (is always pulled up)
- GPI35 (PINS) is wall dimmer switch level shifted down to TTL
- GPI39 (PINS) is external power supply voltage (analogue)
- GPIO13 (UEXT) is I2C-SDA for sensors
- GPIO16 (UEXT) is I2C-SCL for sensors
- GPIO4 (PINS) is blind motor sensor A
- GPI36 (UEXT) is blind motor sensor B
- GPIO0 (PINS) is blind motor PWM forwards
- GPIO1 (PINS) is blind motor PWM backwards
- GPIO2 (UEXT) is LED strip colour R
- GPIO5 (UEXT) is LED strip colour G
- GPIO14 (UEXT) is LED strip colour B
- GPIO15 (PINS) is LED strip colour WW
- GPIO3 (PINS) is ventilation fan driver enable
- GPIO32 (PINS) is ventilation fan forwards
- GPIO33 (PINS) is ventilation fan backwards
Chances are high that LED strip pins R, G and B won’t be wired in in most rooms as those only have warm white downlighters. In fact, chances are high that initially I won’t wire the lighting into the ESP32 boards at all, as it’ll save me time.
The ventilation fan drive I haven’t tested yet, but I shall be testing a €4 inc VAT driver based on what Aliexpress claims is a BTS7960 H-bridge. It claims it can handle 43 amps, the reviews are clear it cannot, but it should handle the max 3 amps we’ll ever demand from it. The BTS7960 can take a max 30v, so I’m a little concerned that back EMF from the 24v bilge pump fan might spike over that. However it would seem that these bilge pumps respond very well to lower voltages, they turn well at 5v and have more than plenty flow in my opinion at 12v (and at 24v, they’re insane) so chances are very high I’ll run them at 12v and make everything easier on myself.
Something a bit mad to consider is that my fresh air ventilation pipes will need 50 mm of insulation each side, so a 100 mm diameter pipe will become 200 mm. The above bilge pump takes a 100 mm pipe and is itself 135 mm, so it’ll be entirely encased within the insulation, and apart from some wires sticking out you’ll never know it was in there. I’m hoping that the motor will tolerate the ~60 C air temperatures; as a bilge pump, it should. The plastic is ABS, so should be absolutely fine at 60 C.
You’re likely going to be very curious about the blind motor. This is my solution to avoid paying a lot of money for IKEA Fyrtur blinds: instead I’m going to motorise a very cheap IKEA Fridans blind using ABS printed parts and a carefully chosen GA12 N20 geared motor with rotation encoder costing €7 inc VAT, which is small enough to fit inside the blind’s roller. I won’t say much more about it this post other than I’ve successfully got it to turn from the ESP32 using only the available 5v amperage from the PoE supply without seeing power brownouts, and I can testify it has quite considerable torque and should be able to lift quite a lot of blind. What I haven’t got working yet is getting ESPHome to detect when the motor has stopped turning, and to turn off the power. Free time will fix that, as always.
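For what it’s worth, the shape of the solution is simply ‘if the encoder hasn’t moved for a little while yet the motor is powered, cut the power’. A minimal sketch of that logic in plain Arduino C++ rather than ESPHome, using the encoder and motor pins from the bedroom pin list above, and with the 250 ms timeout plucked out of the air:

```cpp
#include <Arduino.h>

// Encoder and motor pins from the bedroom prototype pin list above.
constexpr int PIN_ENC_A   = 4;    // blind motor sensor A
constexpr int PIN_ENC_B   = 36;   // blind motor sensor B (input only)
constexpr int PIN_PWM_FWD = 0;    // blind motor PWM forwards
constexpr int PIN_PWM_REV = 1;    // blind motor PWM backwards

volatile int32_t encoderCount = 0;

void IRAM_ATTR onEncoderEdge() {
  // Quadrature direction: if B is high on A's rising edge we're going one
  // way, otherwise the other.
  encoderCount += digitalRead(PIN_ENC_B) ? +1 : -1;
}

bool motorPowered = false;
int32_t lastCount = 0;
uint32_t lastMovement = 0;

void setup() {
  pinMode(PIN_ENC_A, INPUT_PULLUP);
  pinMode(PIN_ENC_B, INPUT);        // GPI36 has no internal pullup
  pinMode(PIN_PWM_FWD, OUTPUT);
  pinMode(PIN_PWM_REV, OUTPUT);
  attachInterrupt(digitalPinToInterrupt(PIN_ENC_A), onEncoderEdge, RISING);
  digitalWrite(PIN_PWM_FWD, HIGH);  // start the blind moving (full on for the sketch)
  motorPowered = true;
  lastMovement = millis();
}

void loop() {
  const int32_t now = encoderCount;
  if (now != lastCount) {
    lastCount = now;
    lastMovement = millis();
  } else if (motorPowered && millis() - lastMovement > 250) {
    // No encoder edges for 250 ms while powered: the blind has hit its end
    // stop (or jammed), so cut the power rather than leave the motor stalled.
    digitalWrite(PIN_PWM_FWD, LOW);
    digitalWrite(PIN_PWM_REV, LOW);
    motorPowered = false;
  }
  delay(10);
}
```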
House model display case
The custom made cases for the commercially printed house model arrived and for the money, they are excellent:
The acrylic upper is fully bonded and forms a single surface. The base of the large model has mahogany laminate strips, while the base of the small model is black acrylic. Including delivery, it all came to under £200 inc VAT. The supplier’s name is LasAcryl Ltd; they’re actually French, but of course their English website thinks the only possible currency is Sterling.
The extra height of the case will be used by standing each layer of the house on stilts, with little LED panels inside lighting the house. You’ll thus be able to see all around the inside of the model house whilst standing inside the actual finished house. It may well be a decade before I get to assemble all the parts to create the final display case, or if this build takes even longer, I might just get it done before the build starts. We’ll see.
House dashboards, revisited
Last post I mentioned that I didn’t like the e-Ink displays for the future house dashboards, and for £115 inc VAT you could get a portable touchscreen monitor off Amazon. Well, I went ahead and bought one for £98 inc VAT instead:
Apologies for the stock image – it’s because the actual device is already out at the site, configured as the TV for the children to watch when we’re out there, using the old TV box, old TV remote control and some very old computer speakers (y’see, this is why one should never ever throw anything out!).
And well, for the money, I am impressed! The PoE powered Raspberry Pi Zero 2 described last post will power it without issue (albeit with a bit of tweaking to overdrive the HDMI signals on the Pi). The touchscreen works great and the Pi recognises it. I can display interactive graphs which respond to touch – it ain’t quick because the Pi chugs a bit when rendering 1080p, but it is absolutely functional.
In terms of the device itself, it is surprisingly good for the money. The display isn’t bright, but given that it works from USB power alone and illuminates 15.6 inches, I think it’s as good as is feasible for the five watts or so available to it. As it’s an IPS panel, colour reproduction is very good apart from magenta tending to pink. The 1080p resolution is plenty for this size. Using this display indoors when it’s bright sunshine outside is just about okay; if it’s cloudy outside then it’s plenty bright enough indoors. Its speakers are absolute crap, barely any volume out of them, but I have my old computer speakers for that and it does have a headphone jack. It has half a 75 mm VESA mount on the back, and I’ve already affixed that to a cheap monitor arm; later it will get mounted on the wall in the kitchen.
Oh and there’s one more kicker – yes it responds to VESA brightness commands, so you can dim it and brighten it arbitrarily. If combined with a Time of Flight sensor to detect local motion, you can wake the display if somebody goes near it. Very nice.
What comes next
Many moons ago, after testing IRF520N MOSFET based solutions for dimming LED strips, I realised that I had bought poorly and should have bought IRF540N MOSFETs instead, as they should run much cooler. Easily two years ago now I did buy a bunch of four channel IRF540N MOSFET boards, but I’ve never tested them in action. I’d like to get that done, to create peace of mind that this solution will definitely work when asked, and that they do run as cool as the maths had predicted.
I’d like to get that BTS7960 driver tested with my bilge fan at 12v to make sure it definitely works as expected.
And finally, I’d like to get a reference house bedroom ESPHome firmware written and debugged, which includes window blind control.
If I get all those done, I’m sure a further post will appear here. And god help us if the GA and SE designs get finalised as I might actually finally have to pay for some timber frames!