Welcome to ned Productions (non-commercial personal website, for commercial company see ned Productions Limited). Please choose an item you are interested in on the left hand side, or continue down for Niall’s virtual diary.
Niall’s virtual diary:
Started all the way back in 1998 when there was no word “blog” yet, hence “virtual diary”.
Original content has undergone multiple conversions: Microsoft FrontPage => Microsoft Expression Web, legacy HTML tag soup => XHTML, XHTML => Markdown, with a ‘various codepages’ => UTF-8 conversion for good measure. Some content, especially the older material, may not have survived entirely intact, particularly in terms of broken links or images.
- A biography of me is here if you want to get a quick overview of who I am
- An archive of prior virtual diary entries is available here
- For a deep, meaningful moment, watch this dialogue (needs a video player), or for something which plays with your perception, check out this picture. Try moving your eyes around - are those circles rotating???
Latest entries: 
My mobile phone push solution: Ntfy
Thankfully, the DIY solution space here is quite mature. In fact, it’s so mature that there are many competing solutions, all with their own pros and cons. I ended up choosing Ntfy as the mobile push solution, though I could find absolutely nothing wrong with Gotify. I only chose Ntfy because it has many times the userbase of Gotify, which usually means it will be more mature, more debugged, and more optimised. From a thorough reading of the bug trackers of the various solutions, and reading the source code for Ntfy, I reckoned they’d done the most empirical testing on ensuring a solution which minimises battery consumption while remaining reliable.
Ntfy is about as simple as you could get for this solution – it does exactly one thing only. You can push text messages with optional attachments like images to a channel name of your choice. Anybody subscribed to that channel gets notified. And that is literally it – you can even configure it to use only RAM for storage, which is perfect for an embedded-grade computer with limited storage write cycles and a high likelihood of sudden power loss.
You can, of course, configure it with usernames and passwords and access tokens and all the other usual REST access control. You can closely configure what users can push what to which channels if you like. You can set a TLS cert on the public API endpoint so no passwords nor access tokens can get sniffed. In short, it does exactly what it says on the tin and to date I have found it ‘just works’.
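To make that concrete, here is a minimal sketch of the broker’s `server.yml`. The key names are from the ntfy server docs as best I recall them, but every value below is a placeholder – adjust for your own setup and check against the current documentation:

```yaml
# /etc/ntfy/server.yml -- minimal sketch, values are examples only
base-url: "https://ntfy.example.com"    # public URL the mobile apps connect to
listen-http: ":8080"                    # e.g. behind a TLS-terminating proxy
# note: no cache-file set, so messages are cached in RAM only -- ideal for
# flash storage with limited write cycles and sudden power loss
auth-file: "/var/lib/ntfy/user.db"      # enables users, access tokens and ACLs
auth-default-access: "deny-all"         # require credentials for everything
attachment-cache-dir: "/tmp/ntfy-attachments"
```

Leaving `cache-file` unset is what gives you the RAM-only storage mentioned above.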
Another neato feature is that you can provide up to a three-button menu, with actions per button press. So, for example, you could send a picture still from the camera with a push button for ‘view the video around this time’ and another for ‘set off the alarm’. Pressing them pushes messages at other channels or performs arbitrary REST API invocations, which lets you configure simple bidirectional communication. Here it is in action:
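As a sketch of how such a message is published, this uses ntfy’s JSON publish API (POST to the broker root) with `view` and `http` action buttons. The broker URL, topic name and camera URLs are hypothetical placeholders; the JSON field names are from ntfy’s publishing docs:

```python
# Sketch: publish an ntfy message with two action buttons.
# All URLs and the topic name here are made-up placeholders.
import json
import urllib.request

def build_alert(topic: str, message: str, video_url: str, alarm_url: str) -> dict:
    """Build an ntfy JSON publish payload with two action buttons."""
    return {
        "topic": topic,
        "title": "Camera alert",
        "message": message,
        "actions": [
            # 'view' opens a URL in the browser or app
            {"action": "view", "label": "View video", "url": video_url},
            # 'http' performs an arbitrary HTTP request when pressed
            {"action": "http", "label": "Set off alarm",
             "url": alarm_url, "method": "POST", "body": "siren=on"},
        ],
    }

def send_alert(broker_root: str, payload: dict) -> None:
    """POST the payload to the broker root URL, per ntfy's JSON publish API."""
    req = urllib.request.Request(
        broker_root, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"}, method="POST")
    urllib.request.urlopen(req)  # real network call; needs a reachable broker

payload = build_alert("site-cameras", "Intrusion detected",
                      "https://cam.example.com/live",
                      "https://cam.example.com/siren")
print(json.dumps(payload, indent=2))
```

The same message can equally be pushed with plain headers on a POST/PUT; JSON is just the tidier form when action buttons are involved.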

I didn’t personally test it, but Ntfy can also optionally push to mobile phones via Google Firebase or Apple’s equivalent. So if you’re somebody running Google Play Services all the time anyway, you can vector Ntfy via that instead of having Ntfy maintain its own connection. There is an open source unified push notification service called UnifiedPush, which Ntfy can also use on request. There are plenty of config options likely to suit most people. See below for measurements of mobile battery consumption, which are for Ntfy directly listening to a custom Ntfy broker running at the site.
Upgrading OpenWRT
To use the Ntfy Android app, you need to have the Ntfy message broker running somewhere public. I couldn’t see any good reason not to run it at the site, especially as failure to connect would then get reported, and that is also something I want to know about, i.e. power or internet loss at the site. And the site’s IP address is stable over time, and Eir don’t impose any restrictions on inbound connections, so you can absolutely run a public server there.
With the AI PC removed, the main sources of compute out there are the two hand-built Banana Pi R3 boxes which provide the Wifi, firewall and routing. They run OpenWRT, and they’re fairly well endowed with specs: 2 Gb of RAM, four ARM Cortex A53s running at 2 Ghz, and 8 Gb of MMC storage. Until this week, they were running the very first OpenWRT firmware compatible with their hardware, which is a couple of years old now – after all, I started work on making those boxes back in early 2023. But that edition of OpenWRT couldn’t run Docker, and I needed Docker to get Ntfy (amongst other services) running. And of course that edition of OpenWRT was also too old to be able to self-upgrade to the latest OpenWRT, so I ended up spending the entire day at the site on Wednesday two weeks ago getting those two boxes onto the latest OpenWRT with everything reinstalled and reconfigured exactly as it was originally. Painful, but hopefully I’ll never have to do that again.
Now that I am on the latest OpenWRT, standard Docker Compose more or less ‘just works’. I say ‘more or less’ because you will need a custom network configuration in your compose files to make it work on OpenWRT (see below), but once I’d figured that part out, to be honest it’s been exactly the same as on a full fat Linux installation and all the Docker stuff I’ve installed has pretty much just worked. This is despite how barebones OpenWRT is in comparison to a normal Linux distro, and the very limited 6.5Gb storage partition (which runs the F2FS filesystem as it operates on MMC storage). Performance is acceptable, YABS reports as follows:
| | Banana Pi R3 on my site | My colocated Raspberry Pi 5 | A very budget VPS I rent |
| --- | --- | --- | --- |
Location | Cork, Ireland | Mratín, Czechia | Amsterdam, Netherlands |
CPU | ARM Cortex A53 @ 2.0Ghz | ARM Cortex A76 @ 2.4Ghz | Intel Xeon Gold 6122 CPU @ 1.80GHz |
Storage | eMMC running f2fs | NVMe SSD running ZFS | Shared NVMe SSD running ext4 |
YABS Single Core | 194 | 772 | 569 |
YABS All Cores | 525 | 1368 | 1792 |
YABS Disk Read | 58 Mb/sec | 232 Mb/sec | 111 Mb/sec |
YABS Disk Write | 65 Mb/sec | 239 Mb/sec | 112 Mb/sec |
YABS Download speed | 930 Mbps | 929 Mbps | 1946 Mbps |
YABS Upload speed | 102 Mbps | 928 Mbps | 2089 Mbps |
YABS worst download locations (< 50% capacity) | Sao Paulo (419 Mbps) | Sao Paulo (146 Mbps) Los Angeles (245 Mbps) Tashkent (250 Mbps) Singapore (537 Mbps) | Sao Paulo (271 Mbps) Los Angeles (447 Mbps) |
YABS worst upload locations (< 50% capacity) | None | Los Angeles (219 Mbps) | Los Angeles (112 Mbps) Sao Paulo (158 Mbps) |
For a box consuming around five watts, that is decent performance. Sure, one of my Raspberry Pi 5 colocated boxes idles at the same wattage, but if you max out its cores it’ll jump to twelve watts. The RPi5 delivers approx 4x the compute for 2.4x the power, as you’d expect from a wider, out-of-order CPU. Indeed, as mentioned in the article about my colocated Raspberry Pi 5s, the benchmarks above again demonstrate that clock-for-clock the ARM Cortex A76 matches an Intel Xeon Gold 6122 CPU. The latter is faster in multicore only because it has far more memory bandwidth to avoid stalling the four cores.
Anyway, the four ARM Cortex A53s are plenty to run lightweight programs. What we need next is to plug the Dahua IP cameras into Ntfy. Before we get into how I solved that, here is the custom docker compose network stanza for OpenWRT, because it is not documented in an obvious nor easy to find place:
```yaml
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_icc: "true"
      com.docker.network.bridge.enable_ip_masquerade: "true"
      com.docker.network.bridge.host_binding_ipv4: "openwrt_ip_address"
      com.docker.network.bridge.name: "docker-lan"
      com.docker.network.driver.mtu: "1500"
```
Doing `docker compose up` will create in OpenWRT a new bridge device `docker-lan`. You need to adjust its settings to say it is always up, then add a new OpenWRT interface which I called `dockerlan` for the `docker-lan` bridge. Add that interface into the `docker` firewall group. Finally, in the OpenWRT firewall, add the `docker` firewall group so lan => docker is permitted, as is docker => lan. Do `docker compose down` to destroy the container, then `docker compose up` again, and you should find your container can now see the network.
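For context, here is a sketch of how that networks stanza slots into a complete compose file for Ntfy. The service half is based on Ntfy’s standard Docker setup (the `binwiederhier/ntfy` image and `serve` command are from its install docs); the port mapping and volume path are placeholders:

```yaml
# docker-compose.yml -- illustrative sketch; adjust paths/ports for your setup
services:
  ntfy:
    image: binwiederhier/ntfy
    command: serve
    restart: unless-stopped
    ports:
      - "8080:80"        # on OpenWRT this also binds on the LAN address
    volumes:
      - /etc/ntfy:/etc/ntfy

networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_icc: "true"
      com.docker.network.bridge.enable_ip_masquerade: "true"
      com.docker.network.bridge.host_binding_ipv4: "openwrt_ip_address"
      com.docker.network.bridge.name: "docker-lan"
      com.docker.network.driver.mtu: "1500"
```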
One thing to be VERY aware of with this configuration is that ports listening within the docker container are ALSO listening on the OpenWRT LAN at the OpenWRT LAN address. If you wish to expose one of those ports to the WAN, you can add a port forward to the OpenWRT firewall. This is very convenient, but be careful as the port number space is shared between docker containers and host which makes it easy for ports and services to collide or otherwise interfere with each other.
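As a sketch of what such a port forward looks like in OpenWRT’s `/etc/config/firewall` UCI syntax – the zone names, ports and LAN address below are assumptions about a particular setup, not something you can paste in unmodified:

```
# hypothetical example: forward WAN port 443 to a container port
# bound on the router's own LAN address
config redirect
	option name 'Forward-ntfy'
	option src 'wan'
	option src_dport '443'
	option dest 'lan'
	option dest_ip '192.168.1.1'
	option dest_port '8080'
	option target 'DNAT'
```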
Replacing the Dahua & Sungrow cloud integrations
Dahua provide a free of cost proprietary cloud based notification push service which can be configured as ‘full fat’ (everything goes via the Dahua cloud), ‘notification only’ (only the event notification goes to Google Firebase) or ‘camera does nothing’ (your local software, e.g. Blue Iris, actively subscribes to events on each camera using Dahua’s REST API). Using the Dahua Android app, you can have the app tell the camera to push notifications to Google Firebase for the app even if you don’t create a Dahua cloud account. Yes, the Dahua app does have some bugs, but it works surprisingly well considering. All you need to do is remember, after a push notification, to enable the Wireguard VPN before opening the Dahua app, because any images or video will be fetched directly from the camera; with that done, it usually ‘just works’ – about as well as the Dahua app ever works.
The Sungrow inverter also provides a free of cost proprietary cloud based monitoring solution, and you can opt in or out as you choose. If you opt in, your Sungrow inverter will push quite detailed metrics to your Sungrow cloud account. You can also remotely manage the inverter to a very detailed degree from the Sungrow web interface or Android app. When I say ‘very detailed’ I mean it: there are esoteric config options available there that there are no other means of accessing. Whilst all that is great, it is also an enormous security vulnerability. A bad actor could cause thousands of euro of damage if they got access to that management interface. Plus, there are the usual concerns with such personal and intimate data going out into the cloud in any case.
I have used the Dahua and Sungrow cloud integrations for the nearly two years they’ve been running now, simply out of convenience. But I always intended to move them onto my own private infrastructure, and I deliberately made sure before I bought them that it would be straightforward to integrate both into Home Assistant when the time came. Home Assistant, unfortunately, is quite resource hungry. It might plod along with these slow CPUs, but it definitely needs at least 4 Gb of RAM and 20 Gb of storage. As my Banana Pi boxes have 2 Gb of RAM and 6 Gb of storage, Home Assistant just isn’t possible on this lower end hardware.
So what else? The next most popular open source home automation software after Home Assistant is probably OpenHAB, which predates Home Assistant by a few years and has retained a slimmer resource footprint. Using their Alpine based docker image, I got it installed and working surprisingly well in 5.5 Gb of storage. It raises the RAM usage on the Banana Pi to about 800 Mb, with the rest of RAM filled with disc cached data, so it’s pretty heavy for this class of hardware. Still, it does seem to work, and without much impact on the board as a Wifi router and public facing internet endpoint.
The Sungrow inverter part was dead easy as there is a built in, out of the box integration, albeit not an initially obvious one because it’s part of the ‘Modbus over IP’ module:

The values in percentages are off by a factor of 100, but that’s easy to work around in automations etc. The Sungrow integration provides both control and lots of values to read – you can, if you wish, override the Sungrow firmware configuration and have the inverter behave any way you like.
Configuring the Dahua camera OpenHAB integration
OpenHAB also comes with a Dahua camera integration, but it’s rather more effort to configure because it supports a vast range of Dahua camera models and configurations across well over a decade of firmware changes. As a result, it exposes a vast number of fields, most of which will forever read `NULL` because your camera’s firmware and/or current configuration won’t emit that field.
Solving this took a bit of thinking cap time, but I did figure out a solution. Here is the correct way of adding a Dahua camera to OpenHAB:
1. In Things, hit Plus => IpCamera Binding => Dahua Camera with API => enter the IP address and username-password => Create Thing. Don’t forget to give it a suitable name!
2. Back in Things, enter the Camera just created, choose the Channels tab, at the bottom tick ‘Add Equipment to Model’, tick ‘Show Advanced’, then ‘Select All’, then ‘Add to Model’.
3. Go outside, and do everything to trigger everything your cameras are configured to trigger upon.
4. In Items, enter the name of your camera in the filter. You need to examine all the input Switches – if all of these are `NULL`, then your camera needs to be reconfigured (I suggest making sure your ONVIF username and password match your main username and password, because for some reason they are set separately). If some Items are either `ON` or `OFF`, write those down now, as those are the only ones we need to subscribe to. These WILL differ based on per-camera configuration even if your cameras are all identical models.
5. Return to Things and enter your Camera. In the Channels tab, at the bottom, click ‘Unlink and Remove Items’. This will remove all the items. You can now tick exactly the ones you wrote down before, and subscribe to those alone.
I currently have three security cameras on the site: `CamNorthWest`, `CamMidWest` and `CamSouthWest`. `CamNorthWest` is configured with an intrusion detection boundary so it alerts if something crosses that boundary:

(in case you’re thinking that green line is down the middle of the footpath, no that is not intentional – a storm pushed the camera slightly to the left and I haven’t gotten around to redrawing the boundary)
I can tell you that for the Dahua IPC-Color4K-X, intrusions appear in OpenHAB as `Field Alarm`, `Last Motion Type` is `fieldDetectionAlarm`, and these fields appear to be active for this camera model, firmware, and current configuration:

- `Enable Motion Alarm` is `OFF`.
- `Audio Alarm Threshold`, set to `50`.
- `Enable Audio Alarm` is `OFF`.
- `Enable Line Crossing Alarm` is `ON` – yet `Line Crossing Alarm` seems to remain `NULL`?
- `Motion Detection Level`, set to `3`.
- `Poll Image` is `OFF`.
- `Start HLS Stream` is `OFF`, but appears to go `ON` if you try to watch a HLS stream from OpenHAB.
`CamSouthWest` is the exact same model as `CamNorthWest`, and is also configured with an intrusion detection boundary so it alerts if something crosses that boundary:

There is one configuration difference: there is an additional post filter on intrusion that the object must be a human or a vehicle. This camera model, firmware and current configuration appears in OpenHAB as `Field Alarm`, `Last Motion Type` is `fieldDetectionAlarm`, and:

- `Enable Motion Alarm` is `OFF`.
- `Audio Alarm Threshold`, set to `50`.
- `Enable Audio Alarm` is `OFF`.
- `Enable Line Crossing Alarm` is `ON` – yet `Line Crossing Alarm` seems to remain `NULL`?
- `Motion Detection Level`, set to `3`.
- `Poll Image` is `OFF`.
- `Start HLS Stream` is `OFF`, but appears to go `ON` if you try to watch a HLS stream from OpenHAB.

In other words, identical to `CamNorthWest`, but I have manually verified that `Field Alarm` only triggers for humans and vehicles, unlike on `CamNorthWest` where it also triggers for birds, cats etc.
`CamMidWest` is very different to the other two. Firstly, it is a Dahua IPC-Color4K-T180, so very different hardware which ships with the latest generation of Dahua firmware, whereas the previous two cameras are on the preceding generation of Dahua firmware (most of the changes are to the UI, but there are a few feature changes too). Secondly, it is configured with Motion Detection with a post filter that the object must be a human or vehicle. This appears in OpenHAB as `Motion Alarm` with a separate `Human Alarm`, and these fields appear to be active for this camera model, firmware, and current configuration:

- `Last Motion Type` is `motionAlarm` or `humanAlarm`. `Field Alarm` is `NULL` here.
- `Enable Motion Alarm` is `ON`.
- `Audio Alarm Threshold`, set to `50`.
- `Enable Audio Alarm` is `OFF`.
- `Enable Line Crossing Alarm` is `ON` – yet `Line Crossing Alarm` seems to remain `NULL`?
- `Motion Detection Level`, set to `3`.
- `Poll Image` is `OFF`.
- `Start HLS Stream` is `OFF`, but appears to go `ON` if you try to watch a HLS stream from OpenHAB.

Subscribing to `motionAlarm` will get you lots of false positives by definition, so `humanAlarm` is a much better choice.
Additional fields common to all models which are read-only:

- `Last Event Data` is whatever the camera did last, e.g. ‘user logged out’, ‘synchronised time to NTP’ etc.
- `HLS URL`, but its addresses don’t seem to work?
- `Image URL`, which returns a JPEG of the current view. Note this also stores a snapshot on the camera’s storage.
- `MJPEG URL`, which is a MJPEG video feed of the current view.
Finally, I only really trialled it a bit so I didn’t spend much time on it, but you can create a custom dashboard for OpenHAB:

The weather forecast is simply more sensor data, so you could do rules like ‘if the batteries are low, but there will be sunshine later today, charge the EV first’ or ‘if the weather tomorrow will be heavy rain and cold, charge the thermal store to full using cheap night rate electricity; but if the weather tomorrow will be sunshine all day, only charge the thermal store up to 50%’. There are lots of possibilities here, and OpenHAB is probably as powerful as Home Assistant at this sort of stuff, except it will happily run on your Wifi box with a five watt power budget!
Having camera alerts send a message via Ntfy
There is a dedicated section in OpenHAB for rules which are variations on ‘if this (and/or this …), then that’. You can have rules be conditional on any event, time, system event (e.g. start up etc) with any arbitrary logic between them. Any programmer will find it very straightforward.
To define scripts to execute as a result of a rule, you have your choice of writing the script in Javascript, a dedicated DSL, YAML or a visual programming IDE called ‘Blockly’ which looks like this:

This lets you drag and drop blocks to create a program which is emitted as YAML (you can hand edit that YAML at the same time, though changing the graphical representation may sometimes eat those YAML customisations). They are obviously trying to replicate Visual Basic from the 1990s, but it’s not quite that fluid nor intuitive. In particular, there is a steeper learning curve than it appears – I had to search Google a fair few times to figure out how the drag-drop UI works in places. Above you can see a script which performs an HTTP PUT to Ntfy attaching a still from the camera, the last motion type, and an Action button to view the live video right now (which you saw appear in the Ntfy app screenshot above).
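For those who would rather see it as plain code than Blockly, here is a sketch of the same idea: an HTTP PUT to an ntfy topic where the raw body becomes the attachment, with the motion type in the title and a ‘view’ action button. The `Title`, `Filename` and `Actions` headers are from ntfy’s publishing docs; the broker, topic and camera URLs are hypothetical placeholders:

```python
# Sketch: PUT a camera still to an ntfy topic with an action button.
# All URLs and the topic below are made-up placeholders.
import urllib.request

def camera_alert_request(topic_url: str, jpeg_bytes: bytes,
                         motion_type: str, live_url: str) -> urllib.request.Request:
    """Build the PUT request; ntfy treats the raw body as the attachment."""
    return urllib.request.Request(
        topic_url,
        data=jpeg_bytes,
        method="PUT",
        headers={
            "Title": f"Camera event: {motion_type}",
            "Filename": "still.jpg",  # names the attachment
            # action button header syntax: "<action>, <label>, <url>"
            "Actions": f"view, View live video, {live_url}",
        })

req = camera_alert_request("https://ntfy.example.com/site-cameras",
                           b"\xff\xd8...jpeg bytes...",   # placeholder JPEG data
                           "humanAlarm",
                           "https://cam.example.com/live")
print(req.get_method(), req.get_header("Title"))
# to actually send: urllib.request.urlopen(req) -- needs a reachable broker
```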
And yeah, that’s pretty much it for replacing the proprietary cloud services entirely. OpenWRT lets you firewall those devices off from the internet so you’re sure they can’t get out, but both Dahua and Sungrow let you toggle off the cloud push in their config as well. For now, I’ve left both systems running in parallel to ensure everything is working perfectly; after a week, both systems issue alerts in perfect synchrony, without one being delayed relative to the other.
Mobile phone battery consumption
I left the Ntfy app running on the Google Pixel 9 Pro for a day whilst doing nothing else with it, and according to the Google battery status ‘< 1%’ of battery gets used by the Ntfy Android app despite constantly running in the background. I then set up a timer to push messages at it to test its reliability. Every message was received, and it now reckons 1% of battery was consumed. This seems very acceptable, though this testing was exclusively done on Wifi.
I’ve since moved onto the Google Pixel 9 Pro as my main phone, so it now gets taken out of the house and away from Wifi (also: when I’m in bed, the Pixel loses all Wifi and falls back to LTE – it has noticeably worse Wifi than the Samsung Galaxy S10, which, sitting right beside it, keeps a stable Wifi connection). Averaged over the past four days:
- From the system perspective: 76% of battery went on the mobile network, 8% on Wifi, 6% on screen, 3% on Camera, 3% on GPS.
- From the apps perspective: 23% went on the web browser, 12% went on WhatsApp, 3% went on Ntfy, 3% went on the Launcher, 3% went on the streaming radio app.
Between 4am and 6am when I was sleeping at my Dad’s house and it was 100% on LTE and I definitely wasn’t using the phone:
- From the system perspective: 54% of battery went on the mobile network, 26% on Wifi, 20% on GPS. Ouch!
- From the apps perspective: 22% went on WhatsApp, 15% went on the web browser, 8% went on Ntfy, everything else was < 1%. Also ouch!
It isn’t widely known that Meta supply an edition of WhatsApp which doesn’t require Google Play Services (here). This works by keeping an open web socket to Meta’s servers so it can receive push notifications. As you can see above, their implementation is nearly three times worse for power consumption than Ntfy’s, so I think I was right that Ntfy would have been heavily debugged and tweaked due to its large user base.
This past weekend I really needed WhatsApp to definitely be working, so I gave it unrestricted background operation permissions. As I won’t need it to definitely be working these next few days, I’ve enabled background usage optimisation going forward, and we’ll see what that does about WhatsApp chewing down so much battery.
The amount spent on Wifi when there is no known Wifi available is disappointing. It obviously is constantly scanning. I wonder if that is related to the high GPS consumption? Something might be constantly requesting the current location, which then uses the current Wifi environment and GPS. I found that the weather app was refetching the weather every ninety minutes – I’ve now changed that to every six hours, and we’ll see if that improves things.
Finally, I think I’ll also need to do something about the web browser, as that power consumption is unacceptable, and I’ve now removed GPS access permissions from everything bar OsmAnd and Google Maps. I’ll keep monitoring battery consumption and keep at the tuning – the default battery consumption of GrapheneOS is one of the biggest complaints by new users on its issue tracker, but the old hands say a great deal can be done by tweaking configuration, so we’ll see how that goes.
What’s next?
I expect to write the article comparing the Google Pixel 9 Pro and my previous Samsung Galaxy S10 when I get back from my trip to Spain in October. Whilst in Spain, I intend to fully test the new phone and see how it holds up. I may get the article comparing my new watch to my old watch done this week, but I have a very full week ahead of me, so it’s entirely possible it’ll have to get pushed to after Spain.
In fact, I’ve been working so hard on burning down the chores and todo lists that Megan actually ordered me to take a lie-in last week, which it turned out I had sorely needed, as I had been getting only six or seven hours of sleep nightly. I guess that’s the fortunate thing about unemployment – motivating yourself to burn through your own personal todo list is a lot easier than motivating yourself to do somebody else’s todo list for money. Because your own todo lists are worth more to you personally, you find yourself really going at them all day long every day, often without even pausing for food.
On the one hand, long may the todo list burndown last! On the other hand, restoring financial income would be rather handy too.
You might remember that this summer I was trialling a decade old PC running a decade old 8 Gb nVidia Tesla P4 AI inferencing accelerator card bought second hand off Aliexpress. Its purpose was to analyse the three security camera feeds on the site to see how much better a job it could do over the AI built into the cameras. I ran it for exactly two months, and my prediction that the 28 Tb hard drive would store a bit more than three months of video was spot on. I manually reviewed all the alerts the AI recognised during those two months and it is markedly less prone to false positives than the camera’s built in AI – which is to be expected. Still, the specific security camera specialist AI model I was running still got confused by ravens in particular – those like to flap around on the roof of the office in groups sometimes – and it regularly thought those were people (which the camera based AI also gets confused by). The PC AI did not get confused by cats – unlike the camera AI – and as expected it could see people much further away than the camera AI, whose internal resolution for the AI is surely quite coarse (and far below the camera’s 4k resolution). I think with a bit of tweaking and fiddling that this solution is a marked improvement, albeit with an added ~80w power cost, which is almost exactly double the site’s current power draw, and which is why I can’t afford to run it outside the long summer days. The watt meter that I fitted read 19.6 kWh before I turned everything off – that seems absurdly low when 80 watts should result in ~58.4 kWh per month, but maybe that watt meter wraps at 100 kWh and then it would make sense?
Last post I mentioned that a review of my new watch, a Huawei Watch D2, and my new phone, a Google Pixel 9 Pro, would be coming here soon. That won’t be this post – one of my big chores this week was to start replacing all the proprietary cloud solutions the site is currently using with my own infrastructure. This was greatly raised in priority because I intend to run GrapheneOS on the new phone, which lets you segment Google Play Services off into its own enclosure along with only the apps which require Google Play Services. That enclosure is closed down every time you lock the phone, so it doesn’t run when the phone is locked, which means that anything Google Play Services based (including all of Google’s own stuff) can’t spy on you when it’s not being used. That, in turn, means that you won’t get any notifications through Google Firebase, which is the Google infrastructure for pushing notifications to phones. So you need to set up your own notification push infrastructure, and there are many ways to do that.
That will however be the next post here, because there is something else which needs doing to this website implementation before I can fully move onto my new Google Pixel 9 Pro: what to do about HDR photos.
The sorry state of HDR photos in 2025
Last October I transitioned the videos shown in posts on this website to self-hosted, rather than hosted on YouTube. This was made possible by enough web browsers in use supporting AV1 encoded video (> 95% at the time) that I could reencode HDR10+ videos captured by my Samsung S10 phone into 1080p Full HD in ten bit Rec.2020 HDR with stereo AAC audio at a capped bitrate of 500 Kb/sec with – to be honest – quite spectacular retention of fidelity for such a low bitrate. One minute of video is only 3.8 Mb, so I was in the surprising situation that most of the JPEG photos hosted here are larger than a minute of video!
Video got widespread wide gamut (HDR) support quite a long time ago now. Not long after DCI-P3 and Rec.2020 were standardised around 2012, HDR video became widely available from about 2016 onwards, albeit at the time with huge file sizes (one friend of mine would only watch Blu-ray HDR content or better, so every movie he stored was a good 70Gb each! That uses up a lot of hard drives very quickly …). Video games followed not long after, despite Microsoft Windows having crappy HDR support then, and indeed still today. Then, basically everybody hit pause for a while, because for some reason nobody could agree on how best to implement HDR photos. It didn’t help that for a long time, Google was pushing WebP files, Apple was pushing HEIC files, and creatives were very keen on JPEG XL, which is undoubtedly the best technical solution to the problem (but in my opinion sadly likely to go the way of Betamax). Problem was – to be honest – none was sufficiently better than JPEG to be worth upgrading a website, and I, like almost everybody else, didn’t bother with moving on from JPEG, in the same way everybody still seems to use MP3 for music because portability and compatibility trumps storage consumption.
It didn’t help that implementations of WebP and HEIC concentrated only on smaller file sizes, which nobody cared about when bandwidth and storage costs kept exponentially improving. For example, the camera in my Samsung S10 does take photos in HDR, but you need to have it save them in RAW format, and then on a computer convert the RAW format into a Rec.2020 HDR image format to preserve the wide gamut. That was always too much hassle for me to bother with, especially as for video it natively records in Rec.2020 HEVC in the first place. What’s weird about that phone is that Samsung stores photos in HEIC format, which is HEVC compression under the bonnet and is absolutely able to use the Rec.2020 gamut. But Samsung very deliberately uses an sRGB colour space, which at the time they claimed was for better compatibility (despite the fact that almost nothing but Apple devices supports HEIC format images natively). The Samsung phone does convert those HEIC files into JPEG on demand, so perhaps using the same SDR gamut as JPEG was just easier, who knows.
That Samsung S10 phone was launched in 2019, the same year as the AVIF format. The AVIF image format stores images using the AV1 video codec in much the same way as HEIC stores images using the HEVC video codec. Like HEIC, if your device has hardware acceleration for AV1 video, this can accelerate the rendering of AVIF images, which is important as these formats are computationally expensive to decode. Unlike HEIC though, AVIF did see widespread take up by the main web browsers and platforms, with everybody supporting AVIF by the start of 2024. As of the time of writing, according to https://caniuse.com/avif 95.05% of desktop web browsers currently in use support AVIF and 97.89% of mobile web browsers do so. While WebP is even more widely supported again, HDR support in WebP is not a great story. In short, AVIF is as good as it gets if you want to show HDR photos on websites.
Or is it? After many years of Google banging the WebP drum and not finding much take up, another part of Google has evidently decided to upgrade the venerable JPEG format instead. Very recent Google Pixel Pros can now optionally save photos in ‘Ultra HDR JPEG’ format, which is a conventional SDR JPEG but with a second ‘hidden’ greyscale JPEG describing a ‘gain map’, so a Rec.2020 gamut image can be reconstructed from the SDR data. As the human eye isn’t especially sensitive to gamut at those ranges (which is why they were omitted from SDR in the first place), this works well for the added file size, and it has the big advantage of backwards compatibility, because to code which doesn’t know about the gain map they are absolutely standard JPEGs. The wide gamut is only used if your image processing pipeline understands gain map extended JPEGs.
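Conceptually, the reconstruction works like this. Note this is a deliberate simplification of the ISO 21496-1 maths – real gain maps also carry per-channel offsets and a gamma applied to the map value – but it shows the core idea: the HDR pixel is the linear SDR pixel scaled by two raised to an interpolated log2 gain:

```python
# Simplified per-pixel gain map reconstruction (sketch, not the full
# ISO 21496-1 formula): HDR = SDR * 2^(interpolated log2 gain).
def apply_gain(sdr_linear: float, gain: float,
               gain_map_min: float, gain_map_max: float) -> float:
    """sdr_linear: linear-light SDR value.
    gain: the greyscale gain map sample in [0, 1].
    gain_map_min/max: log2 headroom bounds stored in the metadata."""
    log2_boost = gain_map_min + gain * (gain_map_max - gain_map_min)
    return sdr_linear * (2.0 ** log2_boost)

# a pixel at 50% SDR brightness, gain map fully on, 2 stops of headroom
print(apply_gain(0.5, 1.0, 0.0, 2.0))  # -> 2.0, i.e. 4x the SDR value
```

Where the gain map sample is zero, the HDR pixel equals the SDR pixel, which is why the format degrades so gracefully on SDR-only pipelines.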
Although gain map extended JPEGs were standardised as ISO 21496-1 and all the major vendors have agreed to support them, the standard only landed this year, so support in existing tooling is extremely limited. There is the official Google reference implementation library and the few bits of software which have incorporated that library. AVIF also supports gain map extended SDR images, but it is currently very hard to create one as tooling support is even worse than for JPEGs. Web browser support for gain map extended AVIF is also far more limited, with only year 2025 editions of Chrome based browsers supporting it. That said, in years to come gain map extended AVIF will be nearly as widely supported as AVIF, and with the claimed much reduced file size it could be the most future proof choice.
Why all this matters: this website is produced by a static website generator called Hugo. As part of generating this website, it takes in the original high resolution images, generates many lower resolution editions of each, and then emits CSS to have the browser choose smaller images when appropriate. There is absolutely zero chance that Hugo will support gain map extended JPEGs any time soon, as somebody would first need to write a Go library to support them. So image processing support for those is years away.
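For the unfamiliar, that responsive-image step can be sketched in a few lines of Python. The `photo_<width>.jpg` naming scheme and the width list here are invented for illustration; this is not Hugo’s actual behaviour:

```python
# Hypothetical sketch of a static site generator's responsive image step:
# generate several downsized editions of an image, then emit a srcset
# string so the browser fetches the smallest adequate edition.
# The "photo_<width>.jpg" naming scheme is invented for illustration.

def srcset_for(basename: str, original_width: int,
               widths=(480, 800, 1200, 1600)) -> str:
    """Build a srcset attribute value, skipping widths larger than the original."""
    entries = [f"{basename}_{w}.jpg {w}w" for w in widths if w <= original_width]
    return ", ".join(entries)

print(srcset_for("photo", 1300))
# photo_480.jpg 480w, photo_800.jpg 800w, photo_1200.jpg 1200w
```

The relevance here is that every one of those generated lower resolution editions would need its own gain map regenerated, which is exactly what today’s Go image libraries cannot do.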
It’s not much better in the Python packaging space either – right now I can find exactly two PyPI packages which support gain map extended JPEGs. Neither seems to offer a lossless way of converting from gain map extended JPEG to gain map extended AVIF.
Converting losslessly between gain map extended image formats
It won’t be obvious until I explain it: rendering HDR as somewhat accurate SDR is hard at the best of times. Usually you have to supply a thing called a ‘tone map’ with your HDR video to say how to render the HDR as SDR. This is where colour profiles and all that complexity come in, and if you’ve ever seen HDR video content with all the wrong colours, that’s where things have gone wrong somewhere along the pipeline.
Something not obvious above is that a gain map extended JPEG comes with neither a tone map nor a colour profile. The software which creates the gain map extended JPEG chooses as perfect as possible an SDR representation and an HDR representation. It then emits the SDR image with a delta describing how to approximate the HDR image from that SDR image.
The problem is that all the current image processing tooling thinks in terms of (a) here is your image content data and (b) this is what the colours in that image content mean. If I render just the SDR portion of the gain map extended JPEG into a RAW format, I lose the HDR side of things. The same goes if I render the HDR portion: then I lose what the device thought was the best SDR representation.
Therefore, if you want to convert between gain map extended image formats without losing information, right now you need to emit the gain map extended JPEG firstly in raw SDR and then in raw HDR. You then need to tell your AVIF encoder to encode that raw SDR with a gain map using the raw HDR to calculate the gain map.
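For intuition, the maths underneath a gain map is straightforward. Below is an illustrative Python sketch – not libultrahdr’s actual code, and the 1/64 offset guarding dark pixels is an assumed example value – of the per-pixel relationship: the map stores the log2 ratio between the HDR and SDR renditions, and the HDR pixel is reconstructed by applying that ratio to the SDR pixel:

```python
import math

# Illustrative sketch (not libultrahdr's actual code) of the core gain map
# relationship: the gain map stores, per pixel, the log2 ratio between the
# HDR and SDR renditions, so the HDR image can be reconstructed from the SDR
# base image plus the map. The 1/64 offset is an example value, there to
# guard against division by zero in dark pixels.

def encode_gain(sdr: float, hdr: float, offset: float = 1 / 64) -> float:
    """Per-pixel gain value: log2 of the HDR/SDR luminance ratio."""
    return math.log2((hdr + offset) / (sdr + offset))

def apply_gain(sdr: float, gain: float, offset: float = 1 / 64) -> float:
    """Reconstruct the HDR pixel from the SDR pixel and its gain value."""
    return (sdr + offset) * (2.0 ** gain) - offset

# Round trip: an SDR pixel at 0.5 boosted two stops reconstructs to 2.0
g = encode_gain(0.5, 2.0)
print(apply_gain(0.5, g))  # 2.0 (to floating point precision)
```

Because the SDR base and the map are stored exactly as encoded, both renditions survive the round trip – which is precisely what rendering to a single RAW intermediate destroys.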
The tool in libavif to do that wasn’t working right as of a few months ago, and invoking all this tooling correctly is very arcane. Luckily, this exact problem affects lots of people, and I found a fork of Google’s libultrahdr which adds in AVIF emission. That fork is literally being developed right now: its most recent commit was two days ago.
Gain map extended JPEG to gain map extended AVIF via libultrahdr
Due to its immature state, right now that fork of libultrahdr cannot create a gain map extended AVIF directly from a gain map extended JPEG, so you need to traverse through a raw uncompressed file.
That’s fine, but I was rather surprised by (a) how very long it takes this tool to create a gain map extended AVIF – let’s assign that to the ‘this is alpha quality code’ category – and (b) the fact that the gain map extended AVIF file is twice the size of the original gain map extended JPEG.
That produced a ‘huh?’ from me, so I experimented some more:
- A gain map extended JPEG from an input gain map extended JPEG is also twice the size of the original.
- That suggested dropping quality settings would help, so I reduced the quality of the gain map to 75% leaving the SDR picture at 95%: now the AVIF file is the same size as the original JPEG.
- Dropping quality for both sides to 75% yields a file 60% smaller than the original JPEG.
I can’t say I’m jumping up and down about a 60% file size reduction. AVIF is normally a > 90% file size reduction over JPEG.
In any case, this fork of libultrahdr can’t do resizing, so in terms of helping me solve my photo downsizing problem for Hugo, this isn’t much help.
Gain map extended JPEG to gain map extended JPEG via ImageMagick
The traditional Swiss army knife for doing stuff with images is ImageMagick, and if you’re willing to compile it from source you can enable a libultrahdr processing backend. There is good reason why it isn’t turned on by default: the support for gain map extended images is barely there at all.
I’m about to save you, the reader, many hours of trial and error on how to resize a gain map extended JPEG using ImageMagick built from source. I suspect that had I not already spent plenty of time messing around with libultrahdr, this would never have come to me.
Firstly, extract the SDR edition of the original gain map extended JPEG into a raw TIFF applying any resizing you want to do. Make SURE you turn on floating-point processing for all steps, otherwise you’ll see ugly gamut banding in the final output:

    magick -define quantum:format=floating-point \
      PXL_20250908_164927689.jpg \
      -resize 10% test_sdr.tif
Now extract the HDR edition, but be aware that the raw TIFF generated is not even remotely colour-correct. It won’t matter, because you’re preserving the original information from the gain map extended JPEG:

    magick -define quantum:format=floating-point \
      -define uhdr:hdr-color-gamut=display_p3 -define uhdr:output-color-transfer=hlg \
      uhdr:PXL_20250908_164927689.jpg \
      -resize 10% test_hdr.tif
Now here comes the non-obvious part: here is how to tell ImageMagick to feed the raw SDR and HDR TIFFs into libultrahdr to create a new, reduced size, gain map extended JPEG:

    magick -define quantum:format=floating-point \
      -define uhdr:hdr-color-gamut=display_p3 -define uhdr:hdr-color-transfer=hlg \
      -define uhdr:gainmap-quality=80% -quality 80 \
      \( test_sdr.tif -depth 8 \) test_hdr.tif \
      uhdr:test2.jpg
The 80% quality setting was found to produce an almost identically sized output to the original if output at identical resolution. My Macbook Pro M3 will display 100% of DCI-P3 but only 73% of Rec.2020. Zooming in and out, the image detail at 80% is extremely close to the original, but the colour rendering is very slightly off – I would say that the output is ever so slightly more saturated than the original. You would really need to stare closely at side by side pictures to see it however, at least on this Macbook Pro display. I did try uhdr:hdr-color-gamut=bt2100, but the colour rendering is slightly more off again. libultrahdr supports colour intents of (i) bt709 (i.e. SDR) (ii) DCI-P3 (iii) bt2100 (i.e. Rec.2020), so display_p3 I think is as good as it gets with current technology.
So we are finally there: we now have a workable solution to the Hugo image processing pipeline which preserves HDR in images! I am a little disappointed that gain map extended AVIF with sufficiently smaller file sizes isn’t there yet, but I can surely revisit solving this in years to come.
Let’s see the money shots!
So, here we go: here are the first HDR photos to be posted on this site. They should retain their glorious HDR no matter what size the webpage is (i.e. the reduced size editions will be chosen, and those also have the HDR):







Lest the difference that the HDR makes isn’t obvious enough, here is an HDR and an SDR edition side by side. If your display is able to render HDR, this should make the difference quite obvious:


All that took rather more effort to implement than I had originally expected, but now it’s done I am very happy with the results. Web browsers will remain unable to render HDR in CSS for a while yet, though here is an attempt at the proposed future HDR CSS:
This may have a very bright HDR yellow background!
… and no web browsers currently support HDR CSS, at the time of writing.
When HDR CSS does land, I’m not sure if I rework all the text and background to be HDR aware or not. I guess I’ll cross that bridge when I get to it.
For now, enjoy the new bright shiny photos!
As you might gather, I have been quite busy. The week before last I took the Fiido T2 Longtail down to my Dad’s house near Cork to create a new base station from which to explore. Last week I then took Henry and Julia on the back of it around Cork’s rather new cycle lane infrastructure, the high point of which was a 65 km round trip to Haulbowline Island, which is where the Irish navy is based. We were on dedicated cycle lane infrastructure for almost the entire trip, only falling back to roads between Monkstown and Ringaskiddy, and of course between my Dad’s house in Kerry Pike and Shandon. Here it is as a map:

Why Haulbowline Island? The bike’s battery can do maybe 80 km with the kids on the back, so 65 km gives a margin of safety. It needed 880 Wh to charge, so that might have been 800 Wh expended. I know on my own without the added weight of the kids I did 85 km and still had enough in the battery that I could have done a bit more. Also, Haulbowline Island is about where the dedicated cycle infrastructure stops – after that, it is country roads at best.
Whilst it’s great that there is dedicated cycling infrastructure at all now in Cork, I can’t say I think much of the stuff along the north quays in the city centre, which requires constant stops and waits for cars. Once you get out towards the Marina Market though, things get vastly better, and the stretch of the converted Blackrock Railway line out to Mahon is truly spectacular in parts – eye achingly pretty as you zoom under old stone bridges with craggy rock sides and mature trees above you. The part after that from Rochestown to Passage West along the seafront isn’t half bad either, especially on a sunny day, though I think the other side up to Blackrock Castle along the seafront is prettier still.
They’ve got some world class cycle paths there, and I look forward to them joining up the cycle path coming out of Crosshaven with the Cork City cycle paths. Then you could jaunt down to Crosshaven, possibly starting from the dedicated cycle path beginning in Ballincollig Regional Park, along seafront or former railway for most of it, i.e. away from cars. That might be about 45 km each way, much of it very pretty, especially in warm weather.
Anyway, that’s all for next summer now, I suspect. Myself and/or the kids did 430 km on that bike this summer according to its odometer – so maybe one tank of petrol in a car. I might get a bit more in during September if the weather remains nice, but I don’t expect to do more than 500 km this summer in total. That is still a fairly respectable distance for leisure rather than commuting travel, I think.
The CFSensor XGZP6897D differential pressure sensor
The last time I wrote here about pressure sensors in the series on my house build was in April this year. In that post I came away quite impressed with the Bosch BMP390 temperature and barometric pressure sensor, which for €4 inc VAT delivers +/- 3 Pa relative accuracy with +/- 2 Pa stochastic noise. For a barometric pressure sensor – which isn’t designed to measure relative pressure differentials – that is very good for the money. In that post, I mentioned that the CFSensor XGZP6897D had dropped in six months from €45 to a tenner, which suddenly brought it into scope. A differential pressure sensor – as the name would imply – is exactly the correct sensor to use for measuring relative pressure differentials, such as between the front and back of a fan. Here is one of those XGZP6897D sensors; I bought the package with the 2.54 mm spaced pins to make it easy to solder onto a breadboard:

A reminder of its claimed specification compared to the BMP390 sensor:
| | Noise | Relative accuracy | Absolute accuracy | Cost incl delivery |
|---|---|---|---|---|
| BMP390 barometric pressure sensor | 2 Pa | +/- 3 Pa | +/- 50 Pa | €4 |
| XGZP6897D differential pressure sensor (-500 Pa to +500 Pa model) | 25 Pa (2.5% of range) | +/- 10 Pa (1% of range) | n/a | €7 |
The noise and relative accuracy look bad, but because this is a differential sensor, you can measure the difference between the low and high pressure sides of a fan directly, rather than absolutely measuring each side with its own barometric sensor. That eliminates the problem of absolute drift in a barometric sensor, and it also doubles the signal to be measured.
As you may have noticed in the table above, since April it has dropped in price still further to €7, making it even more in scope at only 2x the cost of the Bosch sensor. At the time I didn’t know why, but now I can tell you: it is because CFSensor have rather irritatingly, silently swapped out the implementation without changing the model number. Yes, the new implementation has a completely incompatible I²C interface – different registers, different address, different mode of operation; even the values read come in a different bit format. Despite using identical model numbers, they are in effect completely different sensors at the software level, and they require entirely separate drivers.
The ESPHome documentation for the XGZP68xx component hasn’t yet been updated to mention any of these shenanigans, so the first thing I did was fix those docs which can be found at https://github.com/esphome/esphome-docs/pull/5255. In short, it comes down to the part number:
- XGZP6897D001Kxxxx is the older non-C series, which appears at I²C address 0x6d and supplies a 24 bit signed integer for the pressure.
- XGZP6897DC001Kxxxx is the newer C series, which appears at I²C address 0x58 and supplies a 21 bit signed integer for the pressure, using a completely different register numbering and layout.
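To make the incompatibility concrete, here is a minimal Python sketch of decoding the raw pressure word for each variant. Only the bit widths and I²C addresses come from the above; the register map and the Pa scaling factor are deliberately omitted, so consult the matching CFSensor datasheet for those:

```python
# Minimal sketch of decoding the raw pressure word for the two XGZP6897D
# variants. Register addresses and Pa scaling factors are omitted (see the
# matching CFSensor datasheet); only the two's-complement widths (24-bit
# non-C series, 21-bit C series) come from the blog text above.

def sign_extend(raw: int, bits: int) -> int:
    """Interpret `raw` as a two's-complement signed integer of `bits` width."""
    sign_bit = 1 << (bits - 1)
    return (raw & (sign_bit - 1)) - (raw & sign_bit)

def decode_non_c(raw24: int) -> int:
    # Older non-C series at I2C address 0x6d: 24-bit signed value
    return sign_extend(raw24, 24)

def decode_c(raw21: int) -> int:
    # Newer C series at I2C address 0x58: 21-bit signed value
    return sign_extend(raw21, 21)

print(decode_non_c(0xFFFFFF))  # -1
print(decode_c(0x100000))      # -1048576 (most negative 21-bit value)
```

Feed a C-series word through the non-C decoder (or vice versa) and you get garbage, which is why the two need entirely separate drivers.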
Yes this is very stupid on the part of CFSensor, but worse is to come. Have a look at the sensors I received from Aliexpress:

These are all supposed to be -500 Pa to +500 Pa range sensors, and maybe they are (the one I’ve soldered onto the breadboard definitely is). But the part numbers are all over the place, and worse, none of them are listed on CFSensor’s current list of part numbers. Apparently they like to iterate the part numbers frequently (which is fine, I suppose) and release datasheets to match; this iteration cycle is a few months at most. But no, they do not produce a single list of all part numbers and which batch each belongs to – you need to assemble that information yourself by studying the two dozen or so datasheets and manually constructing a spreadsheet of all the part numbers and which datasheet they refer to.
In short, I cannot tell quickly what pressure range those parts are, whether the Aliexpress vendor made a mistake or not, or indeed much else at all. The only thing I do know is they are non-C series sensors, which makes sense as surely stock is being cleared onto Aliexpress, which is why these sensors are suddenly so cheap.
I know Chinese vendors have a reputation for this kind of shitty confusing packaging stuff, but to be honest apart from the crap documentation and failure to change the god damn part number when you change the I²C address the datasheet is pretty well written and contains all the information you need, albeit in poorly translated English sometimes. I have seen far, far, worse.
Once you get the sensor hooked up and talking to ESPHome, your impression improves still further. This sensor is good for the money – it is stable, not flaky; over the few days I’ve been testing it, it has been fairly unsurprising and definitely reliable. Again, I have seen far, far worse.
One issue with the ESPHome driver was that it did not support setting oversampling, so you always got the default of 4096x. The sensor supports up to 32768x oversampling, which makes a big difference to measurement quality, reducing stochastic noise by about half, so I had to go improve the ESPHome driver; that work can be found at https://github.com/esphome/esphome/pull/10306.
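Why oversampling helps can be shown with a toy simulation: averaging N independent noisy samples shrinks the standard deviation by roughly sqrt(N), so 8x more samples (4096x → 32768x) buys at best a ~2.8x noise reduction in the ideal case; real sensors fall somewhat short of that ideal. This sketch uses scaled-down sample counts (64 vs 512, the same 8x ratio) and made-up noise figures, purely to keep the illustration fast:

```python
import random

# Idealised illustration of why higher oversampling reduces stochastic noise:
# averaging N independent samples shrinks the standard deviation by sqrt(N).
# Sample counts (64 vs 512) are scaled down from 4096x/32768x but keep the
# same 8x ratio; the 25 Pa noise figure is an arbitrary example.

def noisy_read(rng, true_pa=100.0, sigma=25.0):
    return rng.gauss(true_pa, sigma)

def oversampled_read(rng, n):
    # One "oversampled" reading = the mean of n raw samples
    return sum(noisy_read(rng) for _ in range(n)) / n

def stddev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

rng = random.Random(42)
lo = [oversampled_read(rng, 64) for _ in range(200)]
hi = [oversampled_read(rng, 512) for _ in range(200)]
print(stddev(lo) / stddev(hi))  # ideally ~sqrt(8) ≈ 2.83
```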
Anyway, here is the testing rig:

I’ve attached two lengths of silicone pressure hose to the sensor which I’ve soldered onto a breadboard (as you’ll notice, I actually soldered two sensors in case one was a dud, to save me having to break out the soldering stuff again). As before, the Bosch BMP390 remains taped to the side of the bilge fan, and I inserted the low pressure tube into the air intake and the high pressure tube into the air outlet while the fan ran at full speed on 12v, which gives 9 m/s of air flow. As usual, I recorded the readings over time, and I found:
| | Reading | Min-Max difference across readings (i.e. noise) | Min-Max percentage of sensor range | Half min-max percentage of reading |
|---|---|---|---|---|
| XGZP6897D Idling (over ten minutes) | 0 Pa | 0.573 Pa | 0.06% | n/a |
| XGZP6897D High pressure tube in outlet, low pressure tube to atmosphere | 16.6 Pa | 10.44 Pa | 1.44% | 31% |
| XGZP6897D High pressure tube in outlet, low pressure tube in inlet | 61.05 Pa | 23.87 Pa | 2.39% | 20% |
| For comparison: BMP390 | 18.66 Pa | 3.833 Pa | n/a | 10% |
Obviously this fan is open to the air, so has the lowest possible static pressure differential (approx 37 Pa according to the BMP390). A noise level of 30% of the reading makes it hard to accurately control a fan in response. A 20% noise level is better, but to be honest, a 10% noise level is far better again. In the house’s future ventilation system, the pressure differential should be a good bit higher than 37 Pa, so the noise as a percentage of signal should significantly decrease.
There is more nuance to these readings however. You might have noticed that the XGZP6897D reads a bit less than the BMP390 for the single tube case, yet well more than twice as much for the dual tube case. What gives? Well, it turns out that the XGZP6897D is overly sensitive to low pressures, in a similar way to what I found with the BMP280 (the very cheap pressure sensor), a problem the BMP390 does not suffer from. Moreover, the XGZP6897D’s low pressure port is especially sensitive to pressures below atmospheric, giving a reading of ~40 Pa where its high pressure port reads ~25 Pa in the fan inlet. For the fan outlet, I did not find any noticeable difference between the high pressure and low pressure ports: both read ~17 Pa.
This is why ~60 Pa gets read when both ports are in use if you connect the low pressure port to the inlet and the high pressure port to the outlet. If you flip them around from how they are supposed to be, i.e. put the high pressure port to the inlet and the low pressure port to the outlet, you get ~40 Pa – which I think is actually about accurate.
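That reconciliation can be sanity-checked with quick arithmetic using the approximate single-port readings just quoted:

```python
# Approximate single-port readings from the text above (all in Pa):
low_port_in_inlet = 40.0      # low pressure port in the inlet (below atmospheric)
high_port_in_inlet = 25.0     # high pressure port in the inlet
either_port_in_outlet = 17.0  # both ports read about the same in the outlet

# Low port -> inlet, high port -> outlet (the intended hookup):
intended = low_port_in_inlet + either_port_in_outlet
# Flipped hookup: high port -> inlet, low port -> outlet:
flipped = high_port_in_inlet + either_port_in_outlet
print(intended, flipped)  # 57.0 42.0 -- close to the ~60 Pa and ~40 Pa observed
```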
The reason the high pressure port is the high pressure port is that it is protected against high pressures, though this only matters in practice for the high pressure range sensors (>= 50 kPa). I suspect that as a result it gets a harder shell, or something else which performs better at ignoring atmospheric pressure. For the very low pressure range sensors, you absolutely could reverse the connection without problem, as far as I can tell from the datasheet, which claims that you can safely apply up to 2500 Pa (5x) to either port of this +/- 500 Pa range sensor.
In the end, the CFSensor XGZP6897D costs €7, while an equivalent sensor from Honeywell or Sensirion costs €30 (hugely down from over €100 last year), or from Kele about €85. I would be fairly sure neither of those mishandles pressures below atmospheric, and as we will see below, they claim much better accuracy than the CFSensor sensors.
The bilge fan’s maximum static pressure
Many posts here in the past few months have wondered what the bilge fan’s maximum static pressure is, as its manufacturer gave no claims for anything bar air flow (which we found from testing it does meet). We know from last post that if run on 12v in free air, you get 9 metres per second of airflow, which is 254 m3/hr – correct for 12v instead of 24v operation. I did say last post that I found mention in a review on US Amazon that its maximum static pressure at 24v is 225 Pa, which I think might turn into 70 Pa at 12v. Using that 70 Pa guesstimate, I came up with an estimated air leakage for the dual Naber Thermobox non-return valve solution of 5.55 m3/hr at 50 Pa. Let’s see if we can do better using this differential pressure sensor!
Firstly, I taped a plastic sheet as tightly as I could over one end of the fan:

It probably leaks a little, but not too bad. I then turned the fan on full. The XGZP6897D read a very stable mean of 38.65 Pa, min was 37.02 and max was 40.02. The min-max difference was only 3 Pa, or +/- 3.9% of the reading. Much better than before when there was airflow! It would seem that the XGZP6897D does not like airflow much, but if there is no airflow and just static pressure, it really is quite low noise.
The BMP390, which I had intended as the control sensor, did not do well at all. It began at 1009.321 hPa. When I turned on the fan, it rapidly went up to 1009.545 hPa, but then it just kept on rising. After ninety seconds it breached 1010 hPa, at which point I cut the power. It then took twenty minutes to get back to atmospheric pressure:
- +5 mins: 1009.595 hPa
- +10 mins: 1009.387 hPa
- +15 mins: 1009.349 hPa
- +20 mins: 1009.341 hPa (but now rising slowly, because I’m fairly sure atmospheric pressure is currently rising)
I’ll be honest, I was very surprised to see this happen, so I ran the test for a second time to make sure and to see if there was ever a point at which the BMP390 sensor might stop rising.
- +0 mins: 1009.348 hPa
- +5 mins: 1010.380 hPa
- +10 mins: 1010.447 hPa (it seems to stabilise at about seven minutes). I now turn the fan OFF.
- +15 mins: 1009.419 hPa
- +20 mins: 1009.249 hPa (and falling slowly, I think we’ve reached atmospheric pressure)
Obviously a 110 Pa static pressure is improbable, plus it took many minutes for the sensor to stop raising the value. I find the 39 Pa read by the XGZP6897D a bit disappointing, though far more plausible. Interestingly, if you add the other tube to the inlet, it rises to 45 Pa, so I suspect there must be more air leakage happening than I can notice – after all, if it were truly fully sealed, the fan should just be cavitating and not drawing any air.
I then did even more testing, and I discovered that the BMP390 only starts doing this monotonic reading rise once the static pressure reaches about 25 Pa. This is why I never noticed it before – the open fan doesn’t generate more than 20 Pa pressure, so this behaviour never turned up before. As to why the BMP390 does this, I can only speculate: perhaps it has a macro sensor and a micro sensor, and if the micro sensor exceeds range then its firmware starts incrementing offsets until the micro sensor stops being out of range?
In any case, the BMP390 adjusts too slowly for pressures above 25 Pa difference from atmospheric to be useful for controlling fan speeds. I guess the sole option remaining is the XGZP6897D or some other differential pressure sensor.
Conclusions
The first item is that my air leakage estimate for the dual Naber Thermobox was too low. If the maximum static pressure of this fan is more like 50 Pa than 70 Pa, then we actually had an air leakage rate of 7.168 m3/hr. That means two of them would contribute about 2.85% to whole house air leakage, which is still acceptable I think.
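A back-of-envelope check of that 2.85% figure: the 0.6 air changes per hour at 50 Pa limit is the Passive House certification requirement mentioned elsewhere in this series, but the house volume below is an assumed illustrative value, not the actual house’s measured volume:

```python
# Assumed for illustration only -- not the actual house's measured volume:
house_volume_m3 = 840.0
ach50_limit = 0.6  # Passive House airtightness limit, air changes/hr at 50 Pa

allowed_leakage_m3hr = house_volume_m3 * ach50_limit  # total allowance at 50 Pa
valve_leakage_m3hr = 2 * 7.168  # two Naber Thermobox valve stacks, from above

share = 100 * valve_leakage_m3hr / allowed_leakage_m3hr
print(round(share, 2))  # the valves' percentage of the whole house allowance
```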
The second item is that this research has turned out to be very valuable. I was going to go off and base the ventilation boost fan speed control on barometric pressure sensors. Now I know those can’t possibly work for this use case. I have to use a differential pressure sensor, which is now the cheapest solution to estimating air flow entering each inlet, and exiting each outlet.
I was thinking: what if I put one hose before the fan and left the other hose exposed to the room? If the fan is off, there will be a slight pressure drop due to the fan blocking air flow. One could, theoretically, calculate the air flow from that small pressure drop. If the fan is running, the pressure before the fan would drop, and the pressure in the room would slightly increase. I note that single port CFSensor pressure sensors are currently going for about €4 inc VAT on Aliexpress. That would nearly halve the cost.
Or would it be better to put one hose on each side of the fan? Then when the fan is running, the pressure differential would be much higher, so how much work the fan is doing could be estimated more accurately. But then how do you estimate how much air flow is actually going into the room? The greater the air flow, the greater the differential from the room pressure.
Obviously you could fit two sensors here, but that feels overkill and unnecessary expense and hassle wiring in more pipes. I don’t know on that one.
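For reference, the textbook way to turn a small pressure drop into an air flow estimate is the orifice equation, Q = Cd · A · √(2ΔP/ρ). The discharge coefficient and effective area in this sketch are made-up example values – in practice they would need calibrating per duct fitting:

```python
import math

# Hedged sketch of the "estimate air flow from a pressure drop" idea using
# the orifice equation: Q = Cd * A * sqrt(2 * dP / rho).
# Cd and the effective area are example values only; a real installation
# would need per-fitting calibration.

def flow_m3_per_hr(dp_pa, area_m2=0.0123, cd=0.62, rho=1.2):
    """Volumetric flow (m3/hr) through a restriction for pressure drop dp_pa.
    Defaults: ~125 mm duct area (0.0123 m2), typical sharp-edge Cd, air density."""
    if dp_pa <= 0:
        return 0.0
    return cd * area_m2 * math.sqrt(2 * dp_pa / rho) * 3600

print(round(flow_m3_per_hr(10.0), 1))  # ~112 m3/hr for a 10 Pa drop, with these values
```

The square root dependence is the catch: at the small pressure drops seen here, the ±20–30% noise in the reading feeds through into a sizeable flow uncertainty.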
Are there better sensors than the XGZP6897D for me to test? From what ESPHome supports:
- Honeywell ABP i2c sensors (starts from 6,000 Pa upwards, no use for HVAC)
- NPI-19 (starts from 17,000 Pa upwards, no use for HVAC)
- TE-M3200 (starts from 400,000 Pa upwards, no use for HVAC)
- Honeywell ABP 2 i2c sensors (starts from 250 Pa upwards), costs about €20, single port so measures against atmosphere. A dual port model starts from 600 Pa upwards and costs about €30. Accuracy is 1.5% with long term stability of +/- 0.25% of range.
- Sensirion SDP31 i2c sensors (starts from 500 Pa upwards), costs about €30. Accuracy is 3% with long term stability of +/- 0.1 Pa.
I’m a bit hesitant to be spending an additional €30 per ventilation inlet and outlet in the house. We have a fair few of them, so it would add up quickly.
Also, once you’re into the €30 price range, why arse around with differential pressure sensors when you can fit an air velocity sensor directly and call it a day? The FS3000 costs about €20; unfortunately its SMD package would be a pain to hand solder, and its breakout board off Aliexpress is about €55 inc VAT delivered. That’s very pricey if fitting one per inlet and outlet, but it might be an idea to fit a few in select places, so you can figure out how much air flow is going down the various tee junctions of the ventilation. Accuracy is about +/- 5%, and as far as I can tell all near substitutes are far more expensive again. I do note that, at the time of writing, in some places in the US the breakout board price has dropped to US$20; if so, maybe the Euro price will be dropping shortly. Definitely one to watch out for.
If the price does come down, I ought to get one and test it. There are very few reviews of that sensor; the only obvious one, at https://www.yoctopuce.com/EN/article/testing-the-renesas-fs3000-air-speed-sensor, reckons this sensor doesn’t work well. So we may be back to differential pressure sensors in the future anyway.
What’s next?
At the end of September I’m taking a solo holiday in Spain for ten days. I’m going to go see the families of two people I knew from back when I lived in Madrid – one still in Madrid, the other in Bilbao. As they can only see me at weekends, I’ll be taking a road trip across Spain during the weekdays during which I’ll be mainly visiting ancient Roman shit, but I’ll also get in some spectacular nature. It should be fun, it should also refresh my long neglected Spanish speaking skills and I’ll get to see people I haven’t seen in well over a decade, plus their children!
Before that, the kids will have returned to school, so I’ll have no excuse not to (a) get the services layout for the house completed and (b) actually start losing weight, as I’ll be going teetotal from September onwards as I do every year. There will also be two posts coming here on these topics:
The Huawei D2 watch, which as of today replaced the Amazfit GTS 2 Mini watch I’d had since 2021 (its battery swelled and the screen popped off, so I’ll have some cool pictures of its insides).
The Google Pixel 9 Pro phone, which will be replacing my very long standing current phone, the Samsung Galaxy S10, which has been my daily driver since – believe it or not – 2020, making it five years long in the tooth. That is by far and away the longest I’ve ever used a single phone, for reasons I’ll get into in that post. Indeed, I may have to write two posts on the new phone, because I have replaced its firmware with https://grapheneos.org/, and I have come to a number of realisations about that OS in the last few days of playing with it which I think I ought to write down for later reference.
Anyway, all that is for after I get back from Czechia, which will be the week after next. Maybe see you then!
- Lose weight
- On this I have been spectacularly unsuccessful; I have actually slightly gained weight.
- Do stuff with the kids I couldn’t normally do
- On this I have been very successful, they are quite sick of how much time they have been spending with me! Over 250 km with the kids on the back of the Fiido T2 Longtail and counting!
- Move ISO standards committees
- Four major paper proposals written for the WG14 meeting in August, and I am all set for the trip and meeting. I wish I had written more papers, but I think four big papers is still pretty good.
- Clear project backlogs
- I have done absolutely nothing on the 3D services layout. I now expect I’ll have to force myself to do it as if a day job after the kids go back to school. I have cleared all but one of the house build solution tests however, so I’m going to call this a partial success.
I guess that’s an overall score of about two thirds successful? I am a bit annoyed about the lack of weight loss, as I’ve been starving myself. But as mentioned last post, the trips abroad with all the nice food, plus all the beer from it being an unusually nice summer for Ireland, have I think more than made up the calories. I expect to go completely teetotal as I usually do from after Megan’s birthday until Christmas, so maybe the weight will fall off then?
Regarding new employment, I haven’t been trying to find new work, but I have been watching ever more people I’ve known for years also enter the jobs market. Lots of very senior, very talented, developers. I haven’t seen it this bad in tech since the 2009 recession. If it keeps going like this, it’s going to be as bad as the 2001 recession which was especially hard on the tech industry. Anyway I’ll worry about all that next month.
This post will be about testing my solution to a long standing problem in certified Passive Houses: kitchen cooking fumes extraction.
The problem
Certified Passive House for my climate zone requires the total outer fabric u-value to be under 0.20 W/m2K or so (i.e. all windows, walls, roof, floor etc when all put together the total thermal transmittance must be below 0.2 watts per square meter per degree of temperature difference). Additionally, airtightness must be better than 0.6 indoor air changes per hour. For both these reasons, kitchen cooking fumes extraction to the outside is problematic:
Most kitchen extractor vents have a u-value around 25 W/m2K – though this is a 150 or 125 mm diameter region, so it won’t impact the building fabric average by much. Where you get more trouble is from the thermal bridging as you effectively have a hole to the outside – this causes condensation and mould unless you put perhaps 50 mm of insulation around the duct. All that insulation is unwieldy, especially if trying to route it in a kitchen e.g. behind cabinets.
Kitchen extractor vents should have a non-return valve to prevent outdoor wind blowing air into the kitchen through the extractor, which if not prevented would fill your house with lovely stale cooking smells as well as cold air. Most of these non-return valves don’t really close properly, you’ll often hear them banging with wind gusts outside. This means they ruin the excellent air tightness of your Passive House.
For these reasons the Passive House Institute recommends recirculating cooker hoods, which pass the fumes through a filter before releasing them back into the home. I’ve never been keen on these – from my own testing, I have found that the thin, insubstantial and cheap filters typically fitted to cooker hoods saturate within a few weeks, and nobody is going to be changing these filters every few weeks in practice. If you’re going to do this right, you should fit commercial kitchen grade fume filtration, which does actually work. Expect to spend thousands on such an installation, and it is noisy and uses lots of electricity to run.
The Naber Thermobox non-return valve
That made me wonder whether a decent thermally broken extractor non-return valve might be possible. I found the Naber Thermobox, which is an affordable, thermally broken unit costing €48 inc VAT at the time of writing. It consists of three ABS plastic non-return valves with two 20 mm thick pockets of trapped air between them. Naber claim that this unit has a u-value of 2.2 W/m2K. They also claim that it will only open with 65 Pa of air pressure, which is implemented using two small magnets per flap to ensure that the flap is either completely closed or completely open – it cannot be slightly open, or get flapped open and closed by wind gusts. Theoretically, you could even do the blower door test without these covered, because the air tightness test runs at 50 Pa, so it won’t open the Naber unit. It’s probably easier to show you a video of it than describe it in words:
The Naber Thermobox insulated non-return valve
This looks promising, though I note from inspection that it isn’t particularly air tight – the plastic flaps do have slight gaps around them as they need to permit some air to pass in order to whip them open when the extractor fan turns on.
There is something else cool about Naber’s design: each layer in these units simply snaps together. This lets you amalgamate them easily like this:

And now I have a five chambered insulated non-return valve which ought to be significantly better than a two chambered edition both thermally, and for air tightness.
Thermally broken kitchen extractor testing rig
I purchased a bunch of Naber ventilation kit from an online German vendor (enough to test whether my planned duct routing under the kitchen cabinets would work – I now know it will). One of the items was a 125 mm diameter, half metre long PVC pipe suitable for penetrating the wall of the house (which is 0.5 metres thick excluding the interior service cavity). To mimic the cellulose insulation which would surround it, I fitted a roll of 40 mm thick Aluminium Silicate Ceramic Fibre blanket which I got off Amazon at €66 inc VAT per sqm (about 0.1 W/mK thermal conductivity, which isn’t great, however its singular advantage is that it is happy well past 1000 C, so it’s the right stuff to use for chimney flues etc). Because the fibre blanket is itchy, I then wrapped that in Aluminium foil covered bubble wrap which is good at reflecting heat:

I then stuck Megan’s hair dryer in one end, and used my thermal camera to measure what heat was going in and what was transmitted through. Unfortunately on my first attempt I didn’t realise just how hot her hair dryer gets when blowing into a highly insulated tight space, and I melted my first Thermobox unit – fifty euro wasted:

Luckily I had another two of those units for testing, and while the PVC duct warped from the heat, it was sufficient for further testing. I used the lowest heat setting after this!
Initial testing revealed an issue: the hair dryer blows with considerable air pressure, and I suspected some of the heat was being unreasonably blown through the unit. So I came up with a ‘sock’ made out of greaseproof paper:

The reason I couldn’t use tinfoil is because it reflects infrared and therefore makes the thermal camera useless. The greaseproof paper is meant to be used in ovens, and it worked a treat: the hair dryer blew into the greaseproof paper sock, which heated up to 65 C or so. That was against the front of the non-return valve, so some of the heat shone onto the unit under test, plus the air around it heated up. That then heated the first layer of ABS plastic of the unit, which then shines heat onto the air pocket and the next plastic layer behind it. As the heat cannot escape out the sides due to the thick insulation, and the hair dryer side reaches a steady state of hot quickly enough, you can then measure how long it takes for how much heat to pass through the unit. From that, one can theoretically calculate the thermal conductivity of the unit.
The other part of the testing is for air tightness. For this I fitted one of the bilge fans snugly into one end, and for the other end I made a paper hole to focus all the airflow into the right size for my anemometer to measure, as the little air getting through the unit was just on the cusp of not being readable:


Thermal testing
I did two runs – I should have done more, but they are incredibly boring and tedious to do. They involve taking a photo with the thermal camera every minute. For the single unit (two air compartments) case, it wasn’t too bad: it took about fifteen minutes for the cold end to reach 25 C having started from 20.6 C. The reason I chose 25 C as the test end point is because the temperature increase (4.4 C) is approximately 10% of the hot to cold end differential (44 C), and if the cold end gets much warmer than the surrounding air then convection will carry off heat which affects the measurements.
The thermal camera does get a bit of noise at the sub degree level, but the trend is clear over time – the cold end gets warmer.
For the double unit (five air compartments) case, the testing was very considerably more tedious:
Yup, that is a full ninety minutes of testing to reach 25 C. Very, very boring to do, but I did catch up on all my internet reading I suppose. Here is a sped up time lapse of the thermal photos:
Ninety minutes of thermal photos in under one minute!
I reckon it took a good forty minutes before any heating around the top of the non-return valve becomes obvious in that video.
Straight off from that alone one can deduce that the five compartment non-return valve is five times more insulating than the two compartment non-return valve. In case you’re wondering if six non-return valves open easily when the extractor fan starts up:
If you look carefully at the design, each of the plastic flaps has a little ‘foot’ on its back designed to smack into the next layer of flap behind it. So when the front flaps open, they smack into the next layer, which knocks them off their magnetic closed state i.e. opening cascades. Sellotaping two units together doesn’t affect this – each of the five layers of non-return valve behind the front layer opens in turn.
Finally, some pictures of the hot end to round out this section:




Air tightness testing
I already showed you how I did the test above, and you could even make out a measurement. The bilge fan is a 24v model, but I ran it at 12v, at which it pushes 9 m/s of air when left free:

If the fan is tightly inserted into one end of the pipe with the five air compartments non-return valve inside, I measure 0.6 m/s of air getting through a hole with diameter 65 mm:

Which is 0.002 m3 of air per second, or 7.168 m3 of air per hour. There will be 806 m3 of air in the house, which means we can leak no more than 483 m3 of air per hour at 50 Pa to stay within the 0.6 air changes per hour limit. The five compartment unit should therefore be well within Passive House air leakage limits, though things do depend on what static pressure is being generated by the fan outside the unit.
If, when left free, the bilge fan moves 9 m/sec of air through a 100 mm diameter pipe, that is 254 m3/hr (it does about 450 m3/hr if running off 24v, so this seems about right – air flow is proportional to fan speed, so you get ~56% of max air flow on 12v instead of 24v). The manufacturer of the Seaflo fan doesn’t provide static pressure ratings (and it is hard for me to measure those until I get the differential pressure sensors up and working), but I did find a review on US Amazon where the guy reckons 0.89 inches of static pressure, which is 225 Pa. Static pressure scales with the square of fan speed, so at 56% of full speed you would expect static pressure to drop to about 70 Pa (225 × 0.56² ≈ 70).
Therefore, as a rough guesstimate, that 7.168 m3/hr at 70 Pa would be 5.55 m3/hr at 50 Pa. This would be 1.1% of the maximum air leakage allowed, which seems acceptable, even though in my specific case I’ll actually have two of these ducts – one for extraction, one for make up air to more rapidly purge the kitchen of smoke. Even then, both those ducts should contribute no more than 2.5% to whole house air leakage.
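The chain of arithmetic above can be sketched in a few lines. Note I’ve used the standard square-root orifice approximation for rescaling pressure here, so the 50 Pa figure comes out slightly different from my rougher guesstimate above – either way it is a tiny fraction of the allowance:

```python
# Rough reconstruction of the air tightness estimate. All input figures
# are the ones measured or assumed in the text above.
import math

def flow_m3_per_hr(velocity_ms: float, diameter_m: float) -> float:
    """Volumetric flow through a circular opening, in m^3/hr."""
    area = math.pi * (diameter_m / 2) ** 2
    return velocity_ms * area * 3600

# 0.6 m/s measured through the 65 mm focusing hole, at roughly 70 Pa
leak_at_70pa = flow_m3_per_hr(0.6, 0.065)          # ~7.17 m3/hr

# Leakage through a fixed opening scales roughly with the square root of
# pressure, so rescale the ~70 Pa measurement to the 50 Pa test condition.
leak_at_50pa = leak_at_70pa * math.sqrt(50 / 70)   # ~6.1 m3/hr

# Passive House allows at most 0.6 air changes per hour at 50 Pa,
# and the house contains 806 m3 of air.
allowance = 806 * 0.6                              # 483.6 m3/hr
print(f"{leak_at_50pa:.2f} m3/hr = {100 * leak_at_50pa / allowance:.1f}% of allowance")
```

Real blower door maths uses a flow exponent fitted per building rather than a fixed square root, but for a single orifice this is close enough for a guesstimate.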
More (but harder) maths …
We have a probable air tightness measurement for the doubled up Naber Thermobox non-return valve, but can we calculate a thermal conductivity?
The formula for the rate of heat flow through a material is Q = k × A × ΔT / L, where k is the thermal conductivity, A the area, ΔT the temperature difference and L the thickness.
We know the area of the unit to be 0.01227 m2 as it has a 125 mm diameter. For the two compartment test, it is 0.055 m long and the temperature difference was 44 C. Rearranging in terms of the u-value U = k / L: Q = U × A × ΔT.
Simplifying: Q = U × 0.01227 × 44 ≈ 0.54 × U watts.
The two compartment unit claims a u-value of 2.2 W/m2K and therefore one would expect a ~1.2 watt heat flow, which feels plausible. Let’s see if we also saw that during testing …
The thermal camera measures the surface of the final ABS plastic flap, so if ABS plastic has a heat capacity of 1500 J/kgK, a density of 1115 kg/m3, and the flaps in our case are 1 mm thick, I reckon it would require 1673 joules to raise a 1 m2 sheet one millimetre thick by one degree celsius. Or, put another way, if one applied 1673 watts to that sheet, its temperature would rise by 1C per second. For our much smaller sheet, that becomes 20.52 joules per degree.
We know it took 780 seconds for our final flap to rise by 4 degrees, therefore that was 195 seconds per degree, or 0.005128 degrees per second. This implies a heat flow of 0.11 watts, which means our testing is about a factor of ten better than the manufacturer claimed value.
Losses to convection to outside air are maybe 0.06 watts per degree above ambient air which isn’t enough to remotely close the gap.
Let’s now look at the five chambered unit: firstly, if a two chambered unit 0.055 m long has a u-value of 2.2 W/m2K, that would imply a five chambered unit 0.125 m long would have a u-value upper bound of 0.986 W/m2K. However, most of the insulation will come from the air (0.026 W/mK) versus the ABS plastic flaps (0.1 W/mK) and we have 2.5 times the number of air chambers. So I would suggest a u-value lower bound of 0.775 W/m2K. With a 37 C temperature difference, that is a heat flow between 0.352 watts and 0.448 watts which is 3.4 times better and 2.7 times better than the two chamber unit respectively. This is still rather lower than expected given my second test took five times longer than my first test, but let’s keep going by stating the heat flow equation for our second test:
Simplifying with the second test’s 37 C differential: Q = U × 0.01227 × 37 ≈ 0.454 × U watts.
This time it took 5160 seconds for our final flap to rise by 2.6 degrees, therefore 0.0005039 degrees per second. This implies a heat flow of 0.01 watts, which is even more confusing because this test took five times longer than the first test, not ten times longer. It also implies a test unit u-value of 0.022 W/m2K which is plainly ridiculous.
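For anyone wanting to check the arithmetic (as opposed to the physics), here are both runs as a short sketch. All material properties, dimensions and timings are the figures from the text above; it reproduces the same implausible numbers, so if there is a mistake it is in the lumped model or the testing, not the sums:

```python
# Implied u-value from the rate at which the final ABS flap warmed up.
import math

specific_heat = 1500         # J/kgK for ABS, as assumed above
density = 1115               # kg/m^3 for ABS
thickness = 0.001            # 1 mm thick flap
area = math.pi * 0.0625**2   # 125 mm diameter -> ~0.01227 m^2

# Energy needed to raise the final flap by one degree (~20.5 J/K)
joules_per_degree = specific_heat * density * thickness * area

def implied_u_value(rise_c: float, seconds: float, delta_t: float) -> float:
    """U-value implied by the observed temperature rise rate of the flap."""
    watts = joules_per_degree * (rise_c / seconds)  # heat flow into the flap
    return watts / (area * delta_t)                 # Q = U * A * dT rearranged

# Two compartment run: 4 C rise in 780 s against a 44 C differential
print(implied_u_value(4, 780, 44))     # ~0.195 W/m2K vs 2.2 claimed
# Five compartment run: 2.6 C rise in 5160 s against a 37 C differential
print(implied_u_value(2.6, 5160, 37))  # ~0.023 W/m2K, plainly ridiculous
```

The obvious suspect is treating the final flap as the only thing absorbing heat – the air pockets and the other flaps soak up energy too, so the final flap’s warming rate understates the true heat flow.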
I think I must surely have either a mistake in my maths or I made some mistake during testing. Quite frustrating after all the effort invested!
Conclusion
I suppose at least I discovered that the Naber Thermobox is not worse than manufacturer claims for thermal conductivity, and I probably did determine a reasonably accurate figure for air tightness which is useful, as the manufacturer said nothing about that aspect of the unit.
Obviously it’ll be the five chambered unit that I’ll be fitting. It should have a u-value a good bit below 1 W/m2K and an air leakage rate of around 5 m3/hr at 50 Pa. Those should enable reduced thickness insulation around the ducts, which will make routing these ducts underneath and behind the kitchen cabinets tractable, as I can only really allow for 25 mm of insulation – 50 mm just won’t fit.
Theoretically speaking, the non-return valve should be installed close to the outside of the building to minimise thermal bridging. However there is a superior alternative – one could fit one half on the outside, and the other half maybe about half way through the wall. This increases the thickness of the pocket of air trapped between the two, improving insulation. I reckon it could improve the total assembly u-value by around 10%. What I don’t know is whether the flaps would whip open as well: when the units are snapped together the little feet cascade the opening, and I can see that a nearly airtight seal at the first unit might rob a separated second unit of the pressure shock needed to whip it open.
Thankfully, these Naber units are easy to install and remove – you just twist them in and out of place. So I can experiment once the house is built. It also means if they ever get clogged with grease, or the plastic gets worn down, or anything else goes wrong with them, they are dead easy to remove and repair or replace.
All in all I think I’m very happy with this affordable solution to extracting cooking fumes without ruining passive house.
What’s next?
Obviously differential pressure sensor testing is the next big task, and as we’re handily getting through August it’s beginning to feel urgent. I’ll endeavour to get it done by the end of this weekend, though it may be a few days into next week for me to write everything up.
Still, forwards we proceed as everything I get done now is one less thing to do later. And I have been getting in a lot of extra time with the kids by doing Irish tourist day trips – for the most arduous of our many recent day trips I made Clara and Henry climb to Coumshingaun Lough which is 2 km along the Earth’s surface, but also 250 metres upwards a steep, slippy incline. The weather involved periodic bursts of heavy rain and howling winds, so they weren’t entirely happy by the time they got to the top even with views like this:

Me, I was very glad to have finally crossed that one off my list. It was well worth the climb and suffering in my opinion – it is just stunning up there. We’ve done most things within an hour’s drive from home, but we have a long way to go on the list of things within a ninety minute drive. Ireland really is amazing for that type of tourism – it’s so much more authentic than what Amsterdam has become, where the centre is now more like a theme park pastiche of itself than what I remember of it from the 1990s. Thankfully, if you walk about 45 minutes away from the centre, the real, genuine Amsterdam is still there as I remember it, despite plenty of foreign tourists milling around even out there. But, like the many foreign tourists also climbing with us to Coumshingaun Lough, in the outskirts of Amsterdam the tourists add to the place rather than subtract from it as they do in the centre. I haven’t been to London in well over a decade now; I expect it also suffers from overtourism, but I’ll find out next week.
Last post in the series on my house build I said I have these three projects remaining awaiting testing and writeup here:
- Differential pressure sensor testing.
- Thermally broken kitchen extractor testing.
- Radar human presence sensor testing.
Today’s post will be mainly about human presence sensors. I have begun the other two – I have already accidentally melted a part of the test assembly for the thermally broken kitchen extractor, and I have soldered up the breadboards for the differential pressure sensor testing. But before I get onto human presence sensors, I mentioned last post that we ended up purchasing a Fiido D11 2025 edition for Megan’s commute bike, so let’s do that first.
Fiido D11 bike (2025 edition)
You may find it useful to first read my post on choosing an electric bike, particularly the end where I compare all the foldable ‘lightweight’ electric bicycles currently available for under the €1,500 Cycle to Work scheme Revenue budget limit. The D11 2025 edition is available for under €1,000 inc VAT delivered. The 2025 edition only launched in July, and we were among the first in the world to get one.

Megan's Fiido D11 commute bike folded up in the hallway
I wouldn’t call this bike light at around 23 kg. It’s heavy and awkward to lift even when folded. But it does fit into the boot of our cars, which is useful. Ride quality is unsurprisingly rough with the twenty inch diameter tyres. Handling is like all folding bikes – not great – but it is acceptable, and the folding locks are both tight and secure with no give nor rattle. The 36v based electric motor is just about powerful enough, and you will need to help it going up steep hills. Its pedal torque sensor is the same model as on my T2 Longtail, and therefore you get the same oscillating power delivery which is irritating.
Those are all the middling points. In terms of strong positives, the battery life is indeed more than plenty – Megan only charges it once per week and she’d do about 18 km per day. So it certainly can do 80 km on a single charge. The tail light on it being mounted high under the saddle works very well. The electric motor does genuinely make commuting far more pleasant, especially on the steep hill up to home on the way back.
The biggest negative is without doubt the brakes. They’re cheap and they don’t work well – the stopping distance is lousy. I wish they’d fitted traditional V-shaped rim brakes at this price point, as they’re simply better than cheap hydraulic brakes. So long as you don’t go fast, the poor stopping distance is acceptable, but I think Fiido missed a trick here as a better bike was possible for even less cost and weight – just fit rim brakes.
I haven’t tried this bike with the EU speed limits removed, but to be honest the power system is sufficiently weak it wouldn’t get much above 25 kph anyway on the flat. Add any wind and you definitely won’t be getting above 25 kph without you helping it.
Megan is pleased with the bike, and I suppose so am I for the money. I would have preferred the ADO Air 20 Pro which costs €1,499 everywhere in Europe except Ireland, where it costs €1,799. I think the ADO would be the better folding bike albeit for 50% more cost than the Fiido.
I just wish Fiido had put better brakes on the D11. I’m not asking for the monster brakes that the T2 Longtail has, but something less shit than on the D11 would be a big gain.
Incidentally, the T2 Longtail has been busy taking my children places during the summer so far. Here it is resting after taking us to a playground 20 km away, and at the summit of Bweengduff mountain:


It was an especially hot day that day for Ireland at 28 C, so being able to tootle along at a nice refreshing clip was greatly appreciated by both myself and the kids at the time.
A short push of the bicycle from the summit of Bweengduff mountain, it gave out and refused to do further work due to overheating. In fairness, it took us nearly to the top. It also very much took us down at speed on gravel – I was being extra careful due to the kids on the back, but boy that bike is a surprisingly excellent gravel bike. The brakes had to be on almost continuously to keep us under 50 kph, and they were so hot at the bottom of the mountain that they were clacking from metal expansion. What a bike! I’m looking forward to taking the kids to other places on it when we get back from London.
Human presence detection sensors
I’ve mentioned these on and off throughout the past four years on here, but I don’t think I’ve done a dedicated post before this. Human presence detection sensor modules suitable for operation by an ESP32 or Raspberry Pi are built around several different sensing techniques:
Passive infrared (like burglar alarm sensors or flood light sensors) e.g. HC-SR501.
- Characteristics: Range 7m when new (but shrinks with age); 120x120 degree field of view.
- Pros: a few euro of cost; works outdoors and in the rain; widest field of view; can disambiguate humans (or rather, warm moving things).
- Cons: needs a third of a watt of power to work; they need movement over a fair distance to activate; they can’t sense distance; they tend to reliably fail after a few years.
Ultrasonic e.g. HC-SR04
- Characteristics: Range 2-3m; 30x4 degree field of view.
- Pros: sub euro cost; can measure distance very accurately; lowest power requirements of any solution (0.075 watts).
- Cons: extremely limited useful field of view; can’t disambiguate humans; long term reliability is also an issue with the cheaper Chinese modules; limited range, well under three metres.
Video with AI analysis e.g. Dahua IP cameras
- Characteristics: Range 100m; 120x90 degree field of view.
- Pros: very flexible, also lets human check visually; works outdoors and in the rain; wide field of view; can disambiguate humans.
- Cons: hundreds of euro of cost; distance accuracy is poor without a second camera; needs 15 watts of power (by far the most of any option here).
Laser Time of Flight (ToF) e.g. VL53L8CX
- Characteristics: Range 4m (bright white objects, much less with dark objects); 60x60 degree field of view.
- Pros: can detect multiple objects within 64 zones and whether they are in motion or stationary; needs about a quarter watt of power.
- Cons: Cost about €15 inc VAT each (there are sub-euro models like the VL53L0X, but those do simple distance calculation and nothing else over a limited field of view); don’t work well with dark or non-reflective surfaces; can’t disambiguate humans; the sensor itself outputs histograms, and the analysis of those must be done in software by the microcontroller, which isn’t possible for an ESP32 unless it is fitted with extra RAM.
Radar e.g. HLK-LD2450
- Characteristics: Range 6m; 120x120 degree field of view.
- Pros: a few euro of cost; can detect distance of multiple objects and whether they are in motion or stationary; works outdoors; can disambiguate humans; has onboard DSP to do all the analysis so your ESP32 doesn’t have to.
- Cons: limited distance and speed resolution; some radar frequencies don’t work well in rain; needs about half a watt of power.
From four years ago onwards I tested options one to four – at the time, the radar based solutions were very new and not much good. I set up a passive infrared sensor in my living room; it worked for about three years, with range reducing over time until the sensor died, which is par for the course. The lack of resolution, and the constant need for humans to keep moving to keep the sensor triggered, ruled it out as a bedroom human presence sensor.
The ultrasonic sensor was cute and great for short range very tight field use cases. But I’d be thinking more ‘is the door open?’ type use cases, not ‘is a human present?’.
The video I’ve already done a post about, and it’s my preferred solution for outdoors and the public areas of indoors as it saves on lots of wiring, and I’d be recording most of those areas anyway. It isn’t viable within bedrooms for obvious reasons.
The laser based time of flight sensors were until now my best solution to this problem. I tested the VL53L0X ToF sensor which is a simple very cheap distance sensor, and it was great. Nearly as accurate as an ultrasonic sensor, similar range, can also work with non-hard non-perpendicular surfaces. But it really is mainly an alternative for an ultrasonic sensor.
I also tested the VL53L1 ToF sensor. The latest firmware can see bright white objects up to eight metres away – if not a bright white object, four metres is more likely. The results for this sensor were impressive, it could locate and track multiple objects with location, speed and direction. It did chew up a fair bit of a Raspberry Pi Zero’s CPU because the software analysis is all done on the CPU using raw data from the sensor. And the maths needed are non-trivial. But I came away surprised just how much information you can calculate from what is backscattered laser light if you can throw enough maths at the problem.
This brings us finally to the radar sensors. As mentioned earlier, I hadn’t investigated these until now as they were expensive, novel, and at the time not very good. But the technology has matured, and Hi-Link, the principal Chinese vendor of cheap radar sensors, has both driven down cost and greatly improved accuracy. I can now get a very good radar sensor for just a few euro.
Radar human presence sensor categories
Radar sensors come in categories based on what radio frequency they use to implement the radar. The following radar implementations are available as modules for integration into DIY projects at the time of writing. I put in bold any showstopper type gotchas about each.
- 5.8 Ghz Hi-Link & Leap, 3-20 dBm power range
- Characteristics: Range 6m; 140x140 degree field of view; 150 mm max resolution.
- Pros: Part of ISM frequencies, so no licence required.
- Cons: Poor, but some, building penetration; collides with Wifi 5 Ghz 802.11p.
- 10 Ghz Hi-Link, 2 dBm power range
- Cons: Similar frequency to satellite TV; low range; requires licence to use.
- 24 Ghz Hi-Link & Leap, 6-20 dBm power range
- Characteristics: Range 9m; 120x120 degree field of view; 22 mm max resolution.
- Pros: Part of ISM frequencies, so no licence required; no building penetration.
- Cons: Absorbed by water and water vapour, which reduces range to a few kilometres at most (but also makes this frequency range especially sensitive to bags of water).
- 60 Ghz Hi-Link & MicRadar, 6-12 dBm power range
- Characteristics: Range 6m; 100x100 degree field of view; 10 mm max resolution.
- Pros: Part of ISM frequencies, so no licence required.
- Cons: Strongly attenuated by oxygen in the atmosphere which strongly reduces range to a few metres, which means it is generally always limited to within a room; collides with ‘WiGig’ 60 Ghz Wifi standard.
- 77-80 Ghz Hi-Link, 10-13 dBm power range
- Characteristics: Range 15m; 100x80 degree field of view; 8 mm max resolution.
- Pros: No building penetration; very versatile, can be used for everything from medical scanning to high bandwidth satellite uplinks.
- Cons: Lots of stuff uses this range due to its excellent properties; requires licence to use.
For human presence detection within a room, you generally don’t want to see into other rooms, plus 5 Ghz Wifi tramples all over the 5.8 Ghz radar option. The ~80 Ghz radar is amazing, but needs a licence for use. That realistically leaves the 24 Ghz and 60 Ghz radar options. Of those, the 24 Ghz radar is more interesting for human presence detection because big bags of water absorb most of 24 Ghz, so to the sensor humans stick out as big black shapes easy to disambiguate with a simple filter. As radar penetrates clothing, it can easily pick out micro movements like your heart beating in a way laser based solutions cannot. This makes 24 Ghz radar based solutions ideally placed to detect stationary but present humans e.g. if they are asleep in a bed.
24 Ghz Hi-Link radar sensor models
Hi-Link alone has dozens of radar sensor modules with use cases for everything from smart toilets to water tank gauges. I ran through the full list choosing only those likely to be useful for human presence detection within a typically sized bedroom, and eliminating old and legacy models. This list naturally fell out into three models, one per year. All can measure distance, all provide a serial UART interface which an ESP32 or indeed anything else can use, and I think all provide a Bluetooth interface (more on that shortly):
- LD2410b (released 2023)
- Cost: €1 inc VAT
- Transmit power: 13 dBm (79 mA module consumption)
- People count max: 1
- Position: no
- Range max: 6 m
- Resolutions: 0.2 m, 0.75 m
- Field of view: 120x120 degrees
- Needs 5v power supply
- LD2412S (just released in 2025)
- Cost: €3 inc VAT
- Transmit power: 13 dBm (90 mA module consumption)
- People count max: 1
- Position: no
- Range max: 9 m
- Resolutions: 0.2 m, 0.5 m, 0.75 m
- Field of view: 150x150 degrees
- Needs 3.3v or 5v power supply
- LD2450 (released 2024)
- Cost: €4 inc VAT
- Transmit power: 12 dBm (120 mA module consumption)
- People count max: 3
- Position: yes
- Range max: 6 m
- Resolutions: 0.75 m
- Field of view: 120x70 degrees
- Needs 5v power supply
The HLK-LD2412 human presence detection radar sensor
Of these, the LD2412 looked the most interesting for detecting if a human is in a bedroom, so I picked up three of them off Aliexpress for a tenner delivered:


The top one is the LD2412 and the bottom the LD2412S – the latter is a v2 design according to the manufacturer, and the board layouts are indeed very similar. Something Hi-Link has realised this year is that 5v powered boards are a pain, so they’ve added the option of 3.3v or 5v power (or both) – onboard boost and buck converters will generate the missing voltage from the other as needed. This makes them far more convenient to integrate. They also dropped the default UART speed to 115200, which usually means you can skip bothering to wire in hardware flow control – another convenience. Finally, all three modules above have a Bluetooth transmitter thanks to their onboard DSP. It simply offers the serial communication over Bluetooth. And, to be clear, the Bluetooth aerial on these is lousy, plus there is zero security and nothing to prevent hijack, so I’d only use it as a temporary solution. But it certainly is very convenient, and well done to Hi-Link for putting together such a DIY hobbyist friendly package for such a low cost (something which ST Microelectronics might want to think about for their ToF sensors).
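To give a flavour of how simple that serial interface is to consume, here is a sketch of decoding one of the sensor’s periodic reports. I am assuming the LD2412 shares the LD2410 family’s reporting frame layout (header F4 F3 F2 F1, a two byte length, trailer F8 F7 F6 F5, little-endian fields) – I haven’t verified every byte offset against the LD2412 datasheet, so treat the field positions below as assumptions to be checked:

```python
# Hedged sketch of decoding a basic-mode report frame, assuming the
# LD2410-family layout. Verify all offsets against Hi-Link's datasheet.
import struct

HEADER, TRAILER = b"\xf4\xf3\xf2\xf1", b"\xf8\xf7\xf6\xf5"
STATES = {0: "no target", 1: "moving", 2: "stationary", 3: "moving+stationary"}

def parse_report(stream: bytes):
    """Find and decode the first report in a chunk of UART bytes."""
    start = stream.find(HEADER)
    if start < 0:
        return None
    end = stream.find(TRAILER, start)
    if end < 0:
        return None
    data = stream[start + 6 : end]  # skip 4-byte header plus 2-byte length
    # Assumed layout: [0] data type (0x02 = basic), [1] 0xAA marker,
    # [2] target state, then moving distance (cm), moving energy,
    # stationary distance (cm), stationary energy, detection distance (cm)
    moving, m_energy, still, s_energy, detect = struct.unpack("<HBHBH", data[3:11])
    return {
        "state": STATES.get(data[2], "unknown"),
        "moving_cm": moving,
        "stationary_cm": still,
        "detect_cm": detect,
    }
```

The bytes would come from something like pyserial opened at the module’s default 115200 baud, and the same parser would work over the Bluetooth serial bridge too.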
So, does it work?
Testing the HLK-LD2412S human presence detection radar sensor using my laptop
As you saw, it correctly detects when a human is moving or stationary. It gives a fair approximation as to their distance given the 0.5 m resolution it is configured with – by measuring tape I’d say my legs were 2.55 m away from the sensor, and if you wait long enough it does seem to eventually realise this. If you move a little further away, it switches to 2.5 m, so I guess it might be a calibration thing. Also: this sensor is affected by bright light, and it was daylight outside and I hadn’t bothered to tell it to auto calibrate for light using its onboard light brightness sensor. In any case, the sensor definitely works as advertised.
You might notice mention of having set it to 0.5 m resolution, so I should explain that. The LD2412 has twelve ‘gates’. If you choose 0.75 m resolution, that is 9 m of range as 12 x 0.75 = 9; if 0.5 m resolution, that is 6 m of range; if 0.2 m resolution, that is 2.4 m of range. The 2.4 m range is just too short to reach my couch – it oscillates between seeing people stationary on the couch or not. It picks up motion immediately even outside its maximum range, and of course it picks up stationary people within the 2.4 m range without issue. The 6 m range worked well for my 3 m wide room, as you saw in the video, but the granularity is coarse. Where things didn’t go well is the 9 m range: it really didn’t detect stationary people on the couch at all. My pet theory for this is that the wall behind the couch is reflecting radar back at the sensor, and if the sensor is trying to pick out humans up to six metres behind the wall, it gets overwhelmed. So the default setting of 0.75 m for this sensor would make it look useless in a 3 m room, which may explain some of the bad online reviews and commentary.
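The gate arithmetic above is trivial, but worth a sanity check sketch since choosing the wrong resolution for your room is apparently the main way to make this sensor look broken:

```python
# Maximum range is simply gate count times the configured per-gate
# resolution. Twelve gates is per the LD2412 as described above.
GATES = 12

def max_range_m(resolution_m: float) -> float:
    """Maximum detection range for a given gate resolution, in metres."""
    return GATES * resolution_m

for res in (0.75, 0.5, 0.2):
    print(f"{res} m/gate -> {max_range_m(res):.1f} m max range")
# 0.75 m/gate -> 9.0 m, 0.5 m/gate -> 6.0 m, 0.2 m/gate -> 2.4 m
```

So for a 3 m room you want the 0.5 m (or 0.2 m, if your furniture is close enough) setting, not the 0.75 m default.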
At the 6 metre range, I tried it with multiple people in the room – it reports the closest. I tried it from its side, and I think its 150 degree field of view is realistic. I tried to confuse it by being as still as possible by holding my breath, and I failed. It appears able to detect the micro movements of my heart no matter what I do. I am suitably impressed. If I leave the room, or go outside its range or its field of view, it realises within about one second, though its default setting is to wait for ten seconds before reporting absence. Consider me even more impressed – the signal processing mathematics to achieve this sort of accuracy and speed of reaction are non-trivial. All in a very easy to use package costing €3 inc VAT.
Why choose the LD2412 sensor when the LD2450 looks better for only one euro more? The LD2450, which can detect and locate multiple people in an X-Y space, would initially seem the better choice. However, online reports (and the manufacturer itself) say that stationary human detection greatly suffers as a result, with a reliable range of about two metres from the sensor. Basically, the algorithms can either do stationary human detection well OR multiple human tracking well. If your use case is detecting how many people are in the room and where they are whilst moving, choose the LD2450. If you want to know if somebody is asleep in their bed, especially with a very wide field of view, choose the LD2412.
Which brings me to my final two things to mention about this sensor. The first is how to update its firmware – after all, it runs its own DSP microcontroller, and they have greatly improved its firmware over time. So you really do want the latest firmware on there, and I can speak from testing experience on that. You update the firmware over Bluetooth from its mobile app; there are no alternatives. The Android app seems incapable of actually updating the firmware, as it always errors out – at least during the day I spent trying to make it work – and lots of others on the internet report the same. I was resigned to giving up when it occurred to me that Apple Silicon MacBooks can now run iOS apps directly. So I gave it a go, and indeed the iOS app running on my MacBook worked seamlessly and updated the firmware without issue. In fact, that iOS app is what you’re seeing run in the video above – it was just easier for testing and videoing. So if you’re reading this trying to figure out how to update the firmware on these Hi-Link devices, now you know how: get an Apple Silicon device.
The second thing about this sensor is that individual gates can be configured for sensitivity if you wish. If you want to detect people at, say, 1.5 m and 7.5 m only, and ignore all humans at all other distances from the sensor, that is very easy to configure. This means you can have a sensor detect if somebody is between the kitchen island and the kitchen counter, and not trigger if they are anywhere else. This I think should be very useful indeed down the line. In short, I am won over by this type of sensor – from my testing I think it’s quite the game changer for what has traditionally been quite a tricky problem to solve well.
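As a sketch of how that per-gate masking works out – purely illustrative, the gate numbering follows from the resolution arithmetic above but the mask representation is my own, not the device’s actual protocol:

```python
# Illustrative only: with 0.75 m resolution, each of the twelve gates
# covers a 0.75 m band of distance. To watch only certain distances,
# you enable sensitivity on just the gates covering those bands.

RESOLUTION_M = 0.75
GATES = 12

def gate_for(distance_m: float) -> int:
    """Index (0-11) of the gate covering a given distance."""
    return int(distance_m // RESOLUTION_M)

# Detect people at ~1.5 m and ~7.5 m only, ignore everywhere else:
watched = {gate_for(1.5), gate_for(7.5)}     # gates 2 and 10
mask = [g in watched for g in range(GATES)]  # per-gate enable flags
```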
Finally, you can hide this sensor behind a few mm of ABS plastic without affecting it. This means bedroom human presence detection can be completely invisible. Nice!
What’s next?
I might get the thermally broken kitchen extractor testing done this coming weekend before we head off to London for the week. If not, then surely not long after our return.
Next comes the differential pressure sensor testing. That is surely doable in the three weeks of summer remaining before the WG14 standards meeting.
After I return from the WG14 standards meeting, the summer holidays for the kids will be over and I’ll be released from childcare. I guess it’ll finally be time to seriously start looking for work, though I may wait until the Monad cryptocurrency mainnet launches, as I did informally agree to remain available to them until then. Supposedly that will be some time in September, so only a few weeks more – and I surely could fill those weeks doing 3D services layout (and maybe a day out with the T2 Longtail on gravel tracks if the weather were nice one day).