Tracking High Altitude Balloons with Delphi and TMS Miletus

Whilst Delphi targets Windows, macOS, Android, iOS and Intel Linux, one gaping hole in its repertoire is the Arm-based Raspberry Pi. Taking 2020 as an example, the Pi outsold the Apple Mac, and with over 40 million sold it’s a large target audience. So whilst I happily develop applications for all of those targets in Delphi, until recently I had to switch to Python, C or Lazarus/FPC for my Pi applications.

Enter Miletus, which is part of the TMS Web Core product. Web Core makes it easy to develop web applications which should run on pretty much any browser, and Miletus expands on that by running the web application within a binary executable that includes code to connect with the host machine more intimately. On the Pi, this includes components to control devices connected to the GPIO port, so if say you want to measure temperature and humidity using a Pi HAT then Miletus can provide the glue between your application and the hardware.

Key Components

My hobby is HAB (High Altitude Ballooning) which is flying and tracking payloads that travel through the stratosphere at 30-40km altitude – and I soon realised that Miletus had everything I needed to code an application for tracking those balloons on a Raspberry Pi. It has:

  • SPI component – needed for interfacing with a radio receiver
  • UART component – needed for reading the local GPS position
  • Mapping – so I can show the local and balloon positions on a map
  • Web and Web Socket components – needed to share telemetry and GPS location with other balloonists
  • INI file component for storing user settings

Once I realised that I could do it, that quickly turned into a need to do it, so I set up a Raspberry Pi and added a radio receiver and GPS.

Development Environment

I set up the Pi to allow VNC connections so I could view and control the Pi desktop from my Windows PC, and installed Samba on the Pi so I could copy the built application from the PC to the Pi; this is needed as there’s no equivalent of the Delphi paserver program.

One thing I like to do with all my Delphi cross-platform development is to test as much as I can on the Windows PC, as the compile-run-test cycle is quicker. Obviously there are some things that cannot be tested in this way – for example SPI comms to my radio receiver module – but the majority of the code can be. Also, Miletus has separate components for general serial comms and for the Pi serial port, so I wrote separate code for Windows and Pi; that way the entire application, excluding the Pi-specific code, could be tested as a whole.

Balloon Tracking

Before we get on to the Miletus application itself, a few notes about how high altitude balloons are tracked. The balloon carries a tracker which consists of a GPS receiver (so it knows where it is), a radio transmitter (so it can transmit that position to the ground), and a small computer running software that listens to the GPS receiver and talks to the radio. That’s a simple task and is commonly achieved with a small microcontroller such as a basic AVR (as used on Arduino boards). More complex trackers use a Raspberry Pi, which makes it easy to take photographs and send the image data down with the telemetry, but that’s beyond the scope of this article.

At the ground we need a receiver for that telemetry, and some means of at least displaying the balloon position as latitude, longitude and altitude. Ideally we also want to see that position on a map, so we can follow the flight path in real time and plan a route to the landing position for recovery. So as well as the radio itself, the receiver now needs a suitable colour display.

One of the great things about the hobby is the community of balloonists who are more than willing to help each other. To make it easier to track a balloon, a distributed receiver network was built so that any balloon can be tracked by multiple receivers, all sending their received telemetry to a central server. That server feeds a mapping application, so even if only one receiver picks up a telemetry packet (and we commonly have 15-20 active receivers during a flight), that’s enough to update the map.

So now, ideally, the receiver application can upload to the central server, and even better download telemetry from it in case the balloon is not being received locally (e.g. there’s a hill in the way).


I decided to call my new application PiPADD, after the “Personal Access Display Device” in Star Trek. If you’ve seen my other HAB applications then you’ll have noticed that many have an LCARS (Library Computer Access/Retrieval System) user interface, also borrowed (OK, stolen) from Star Trek.

The top area shows up to 3 received payloads along with the latest telemetry from the selected payload.

The left area has buttons for the main functions.

The lower area shows the status of each telemetry source, then the current GPS time and position, and finally the status of the uploaders.

Finally, the centre of the screen shows the currently selected screen (one per button on the left). On startup it shows the splash screen, as you can see – a photograph from one of my flights over the UK.

Main and Sub-Forms

The way that the UI works is that there’s a single main form that is displayed all the time, and separate forms (one per button) which are made visible as required. What I wanted was to keep the code separate – hence one form per button – but to display those forms as if they are physically part of the main form. This makes it quicker and neater to change from one form to another.

On the main form, there is one TWebPanel for each of those forms:

Those panels are set to Visible := False in the designer, and are made visible as required.

At startup, each of the forms is loaded in turn, with code like this:

    procedure AfterPayloadsCreate(AForm: TObject);
    begin
        frmSplash.lblStatus.Caption := 'Loading direction form ...';
        frmDirection := TfrmDirection.CreateNew(pnlDirection.ElementID,
                                                @AfterDirectionCreate);
    end;

Note “pnlDirection.ElementID” which tells Web Core to load the new form with pnlDirection as its parent. That means that all we need to do to make the form visible is make that parent control (a TWebPanel) visible.

Also note “AfterDirectionCreate” which is a callback for when the form has been created. That procedure then loads the next form, and so on till all forms are loaded and the application is ready for use.

When the user clicks one of those main buttons, then essentially all that happens is that the relevant TWebPanel is made visible and the previously visible TWebPanel is made invisible. It makes for a quick changeover between forms.

Form Inheritance

I use form inheritance quite a lot, as it saves on work and because (like most programmers I guess) I’m inherently lazy. It reduces the work designing forms and writing code, and makes for simpler code too.

For this application I created a base form for all forms aside from the main one, and then generic forms for sources (LoRa, GPS etc.), targets (directions, map etc.) and settings. Here’s the resulting inheritance tree, with some forms missing to make it clearer.

GPS – Serial

My application needs a GPS receiver so it can do things such as calculate the distance and direction to the balloon. GPS receivers for the Pi come in two flavours – HATs that connect to the GPIO pins and send data over the standard Pi serial port /dev/ttyAMA0, and USB models that create a new device typically /dev/ttyUSB0 or /dev/ttyACM0. The TMiletusRaspberryUART component now (after a bit of prompting!) supports either of these.
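The “distance and direction to the balloon” calculation mentioned above is standard great-circle arithmetic: the haversine formula for distance plus an initial-bearing calculation. A minimal sketch (in Python rather than Delphi, purely for illustration):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial bearing (degrees) from
    point 1 (e.g. the chase car) to point 2 (e.g. the balloon)."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)

    # Haversine distance
    a = math.sin(dphi / 2) ** 2 + \
        math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))

    # Initial bearing, normalised to 0-360 degrees
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - \
        math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing
```

Feed it the local GPS position and the latest balloon telemetry and you have the numbers for a direction display.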

Usage is straightforward – set the port/device name, baud rate etc. and open the device. Once opened you have to poll the device from a timer (it’d be nice to have this wrapped up in an event, hint hint), then parse the GPS data, which is in NMEA format and contains time, date, latitude, longitude and altitude, plus lots of other information about the satellites in use. e.g.:


To reduce CPU usage my code switches off the NMEA sentences that it’s not interested in.
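For reference, a GGA position sentence looks like `$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47`, with latitude and longitude in a slightly awkward degrees-plus-minutes encoding. Parsing one is mostly string-splitting; a Python sketch for illustration (the app itself does this in Delphi):

```python
def parse_gga(sentence):
    """Parse a NMEA GGA sentence into time, lat, lon and altitude.
    Returns None for non-GGA sentences or sentences without a position."""
    if not sentence.startswith('$') or 'GGA' not in sentence[:6]:
        return None
    fields = sentence.split('*')[0].split(',')
    if len(fields) < 10 or not fields[2]:
        return None

    def dm_to_deg(value, hemi):
        # NMEA encodes position as ddmm.mmmm (dddmm.mmmm for longitude)
        point = value.index('.')
        degrees = float(value[:point - 2])
        minutes = float(value[point - 2:])
        deg = degrees + minutes / 60.0
        return -deg if hemi in ('S', 'W') else deg

    return {
        'time': fields[1],
        'lat': dm_to_deg(fields[2], fields[3]),
        'lon': dm_to_deg(fields[4], fields[5]),
        'alt': float(fields[9]),
    }
```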

LoRa – SPI

SPI is “Serial Peripheral Interface” and is a popular serial interface on microcontrollers. SPI devices include various sensors and controllers, as well as the LoRa radio transceiver used by my application.

To use an SPI device you will need to read the manual for your specific device as they vary enormously, but in general they will present a number of registers that you can address and then write to or read from. You will then need to configure the device through config registers before sending and/or receiving data such as temperatures etc.
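To make that register-level access concrete: on the SX127x chips used by common LoRa modules, a read sends the register address with the top bit clear and a write sends it with the top bit set. A Python sketch of building those transactions (the app itself uses the Miletus SPI component; the register numbers assume the SX127x family):

```python
# Register numbers below are for the Semtech SX127x family used on
# common LoRa modules.
REG_FIFO      = 0x00
REG_OP_MODE   = 0x01
REG_IRQ_FLAGS = 0x12  # polled to detect a received packet

def read_transaction(reg):
    """SX127x read: address byte with the top bit clear, then a dummy
    byte clocked out while the register value is clocked in."""
    return [reg & 0x7F, 0x00]

def write_transaction(reg, value):
    """SX127x write: address byte with the top bit set, then the data byte."""
    return [reg | 0x80, value]

def rx_done(irq_flags):
    """Bit 6 of RegIrqFlags is set once a LoRa packet has been received."""
    return (irq_flags & 0x40) != 0
```

On a Pi the same byte lists could be pushed through the `spidev` package, e.g. `flags = spi.xfer2(read_transaction(REG_IRQ_FLAGS))[1]` to poll for a packet.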

In the case of the LoRa devices that I use, they need to be set to the correct radio frequency etc., and then a status bit in a status register can be polled to know when a radio packet has been received. For high altitude balloons those packets will usually be telemetry containing the balloon position as an ASCII (text) string with time, latitude, longitude and altitude etc. Something like this:



Mapping

TMS Web Core includes TFNCMap – the same map component that you may be familiar with from writing VCL or FMX applications. Usage is almost identical; the only exception is that markers need to be web-based, so you no longer have the option of keeping them in the local file system.

TFNCMap has many features, including the ability to draw circles (useful for drawing the balloon’s radio horizon), polylines (to draw its path), and now overlaid HTML controls – see the buttons on this screenshot for an example:


I wanted to have nice rounded corners as seen on LCARS itself. For my Android applications I have to draw circles and size/place them carefully, but in TMS Web Core it’s easier as we have access to CSS. All I needed to do was add some CSS and link it to the corner component:

AddCSS('TopLeft', '.TopLeft {border-radius: 12px 0px 0px 0px}');

lblTopLeft.ElementClassName := 'TopLeft';


Speech

Star Trek ship computers all have voices, so why not add text-to-speech to my application? Then it can announce events such as “Balloon ABC has burst” when it notices the altitude rapidly falling.

TMS Web Core includes a speech component; however, due to a lack of support in the Pi browser, this doesn’t work on the Pi. All is not lost though, as Miletus includes the ability to run external programs – something I can imagine being essential for many tasks. So I installed the espeak package on my Pi, and then wrote the following code to use either espeak or the browser speech component according to the target machine:

if os.Platform = opLinux then begin
    MiletusShell.Execute('espeak -a 200 -s 150 "' + Speech + '"');
end else begin
    // use the TMS Web Core speech component on other platforms
end;

Raspad 3

Initially I ran this app on a Pi 4 in a small plastic case with the official Pi touchscreen. For GPS I used a GPS HAT, and for LoRa I used a LoRa HAT, all from Uputronics.

I then wondered if anyone sells a Raspberry Pi tablet to which I could add these components internally. There are some tablets around, but the only one that was actually available, and which seemed to have enough internal space, was the Raspad 3. So I ordered one from Amazon and it arrived the next day.

It didn’t take long to install my software (see later for instructions) and get it working. However, the internal space is quite limited, and there’s only room for a single HAT. I do have a combined GPS/LoRa HAT, but it uses SPI channel 1, which is already used by the Raspad 3’s internal board (for power, fan control, and repeating the Pi ports). I didn’t see anything in the documentation about the use of SPI; then again, I doubt that many people try to add a HAT, especially as it blocks air from the internal fan!

Fortunately, I had other options, namely USB. All of the Pi’s 4 USB ports are free to use, so I connected a USB GPS to one and an Uputronics USB LoRa receiver to the other. This did require some software changes, firstly to allow for a choice of GPS device, and secondly to support the LoRa receiver’s serial protocol, but I had code for that already from my Android/iOS apps. It’s such a great advantage to be able to use existing Delphi code with TMS Web Core!

I tried mounting the GPS internally, but that made it difficult to pick up satellites, so I opted for a stick-on external GPS which I mounted on the top of the tablet, with the LoRa UHF antenna socket on the left. I think it looks pretty good.

Hints & Tips for Miletus on Pi

Whilst TMS Web Core is pretty much “Delphi for the web”, there are some differences and it’s worth taking some time learning those differences to begin with. So here are my tips for TMS Web Core in general and Miletus on the Pi specifically.

  • Learn the differences from VCL/FMX
    • Understand async code and the await() function
    • Use of HTML/CSS to enhance or design UI
    • Be friends with timers
    • Use the F12 console
  • Testing on Windows is faster
  • Test Pi-specific code in small test programs
  • Use dummy code on Windows to replace Pi-specific code
  • Use “if os.Platform = opLinux …” to choose Windows/Pi code
  • Set up SAMBA on Pi to transfer executable
  • Use VNC to control Pi from Windows
  • Read the Hints & Tips thread in the Web Core support forum

Further Reading

To learn more about high altitude ballooning, read my article High Altitude Ballooning, From The Ground Up (and back again).

My blog (this one!) is worth reading too. I think.

If you want to peruse the source code for this app it’s here on github.

If you want to duplicate my Raspad3 build, see this blog post.

Also, I’m on Twitter where I mostly tweet about ballooning.

Finally, the UKHAS web site has a lot of information.

HAB App Updates for sondehub/amateur

As I hope you are already aware, the venerable HABHUB system is to be decommissioned soon. This was planned to be done by now, but has been postponed to the end of 2022. This won’t be postponed again, so in 2023 there will be no more HABHUB.

There’s a replacement, namely sondehub/amateur. It provides much the same functionality but it is completely new code. Programs that interfaced with habhub need to be changed or replaced with new programs that interface with Sondehub.

I’ve written several programs that take balloon telemetry (typically LoRa) and upload to HABHUB, and which I have modified to upload to sondehub/amateur. These are HAB LoRa Gateway, HAB Base, HAB Explora and HAB PADD. They all currently support HABHUB too, but that will change in the new year. Additionally, most of them support uploading telemetry to an MQTT broker, meaning that you can be independent of Sondehub if you wish.

You may have noticed support in these programs for something called “HABLink”. That is my own server, which I have recently reworked to be based on an MQTT broker. New versions of my apps can upload telemetry/chase positions to (and in some cases download from) the MQTT broker, providing further online redundancy for your balloon flights.

Server Comparison

| | HABHUB | Sondehub/amateur | hab.link | MQTT |
|---|---|---|---|---|
| End of Life | End of 2022 | – | – | – |
| Run By | Uputronics | Project Horus | Me | You |
| Upload Mechanism | http | https | MQTT | MQTT |
| Download Mechanism | http | MQTT | MQTT | MQTT |
| Payload Doc Required | Yes | No | No | No |
| Chase Cars Supported | Yes | Yes | Yes | Yes |
| Receiver Data (e.g. SNR) Supported | No | Yes | Yes | No |
| Upload Format (telemetry) | UKHAS | JSON containing fields and values; UKHAS optional as “raw” field | UKHAS | UKHAS |
| Upload Format (Chase) | Fields | JSON containing fields and values | CSV | CSV |
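To make the “JSON containing fields and values” upload format concrete, here’s a sketch of building one such telemetry record in Python. The field names are my assumptions for illustration only – check the sondehub/amateur documentation for the actual schema:

```python
import json
from datetime import datetime, timezone

def build_upload(callsign, lat, lon, alt, uploader, extra=None):
    """Build one telemetry record as a JSON upload body. Field names here
    are illustrative, not the official sondehub/amateur schema."""
    record = {
        'payload_callsign': callsign,
        'datetime': datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ'),
        'lat': lat,
        'lon': lon,
        'alt': alt,
        'uploader_callsign': uploader,
    }
    if extra:
        record.update(extra)  # e.g. battery voltage, temperatures
    return json.dumps([record])  # records are typically batched in a list
```

The point of the format is visible in `record.update(extra)`: any extra fields the receiver has decoded ride along with the essential ones, with no payload document needed.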

HAB App Comparison

| | HAB Explora | HAB PADD | HAB Base | Pi LoRa Gateway |
|---|---|---|---|---|
| Primary Function | Mobile Tracking | Mobile Tracking | Base Station Management | Embedded Receiver |
| Target Hardware | Android Phone | Android Tablet | Windows PC | Raspberry Pi |
| Landing Prediction on map | Payload Only | Payload Only | Payload or online Tawhiri predictor | No |
| Uplink to HAB | Yes | Yes | Yes | Yes |
| Direct Sources | | | | |
| LoRa USB | Yes | Yes | Yes | No |
| LoRa Bluetooth | Yes | Yes | Yes | No |
| LAN Sources | | | | |
| LoRa Gateway | No | Yes | Yes | No |
| TCP/IP (fldigi) | No | No | Yes | No |
| UDP (HABDEC, other HAB apps) | Yes | Yes | Yes | No |
| Online Sources | | | | |
| HABHUB (for now) | Yes (whitelist) | Yes (whitelist) | Yes (whitelist, area) | No |
| Sondehub/amateur | Yes (whitelist) | Yes (whitelist) | Yes (whitelist, area) | No |
| hab.link | Yes (whitelist) | Yes (whitelist) | Yes (whitelist, area) | No |
| MQTT | No | No | Yes (whitelist, area) | No |
| LAN Ports | | | | |
| TCP/IP (control, monitoring, emit telemetry) | No | No | No | Yes |
| UDP (emits telemetry) | Yes | Yes | Yes | Yes |
| Upload To | | | | |
| HABHUB (for now) | Telemetry and chase position | Telemetry and chase position | Telemetry and listener position | Telemetry and listener position |
| sondehub/amateur | Telemetry and chase position | Telemetry and chase position | Telemetry and listener position | Telemetry and listener position |
| hab.link | No | Telemetry and chase position | Telemetry and listener position | Telemetry and listener position |
| MQTT | No | Telemetry and chase position | Telemetry and listener position | Telemetry |

HAB Base and RTTY

Once the HABHUB server is disconnected next month, the map will stop working and you won’t be able to upload RTTY telemetry to HABHUB. However, you can if you wish continue to use dl-fldigi or fldigi to decode the RTTY, and have HAB Base show a map with your tracker on it. Additionally, HAB Base can forward the decoded telemetry to the sondehub/amateur system. You can then view your payload on the Sondehub amateur map.


Use the latest release from GitHub; the new functionality was added in V1.6.6.

Set Up Source

Add a new source, setting the type of source to “TCP (DL-FLDigi)”.

Set the code and name to whatever you like.

Set the Host to be the hostname or IP address of dl-fldigi / fldigi; typically this will be “localhost” for the same machine that is running HAB Base.

Set the port to 7322.

If you want to upload to Sondehub, check the “Upload to Server(s)” box. Also, make sure that Sondehub uploads are enabled in the System Settings dialog:


If dl-fldigi/fldigi is not running, then start it. It doesn’t matter if you start it before or after HAB Base.

If all is well then the source window will show a green background:

Once telemetry has been successfully received then this will turn lime green:

You can see the telemetry in the history tab:

And, as with other sources, you will have a payload window for each received payload:

and it should appear on the map, with a prediction if sent by the tracker:

Sondehub/amateur uploads

If this is working correctly then you will have a lime green status button for Sondehub:

That is now a clickable button, showing a log window:

Sondehub Map

You can open the Sondehub map from the Payload / Settings tab:

Click the link to open the map in your default browser:

Moving from HABHUB to Sondehub

As I hope you’re already aware, the HABHUB system is due to be shut down in October 2022. This system has supported HAB in the UK and elsewhere for many years now, however the hardware is old, the operating system is old and unsupported, the database engine and other installed software components are old, and maintenance of these things is no longer tenable.

(Very) fortunately, we have an alternative. Sondehub has been running for about 3 years now, providing tracking of Met Office radiosondes, and Mark and Michaela of the Sondehub team have been busy providing the same infrastructure for amateur flights. Sondehub was designed to cope with the large number of radiosonde flights, and their high data update rates, so it is well capable of coping with the much smaller number of amateur flights and their generally lower update rates.

HABHUB includes various components that are used daily by HAB enthusiasts, such as a burst calculator, predictor and the balloon tracking system with live map. All of these things either have equivalents with the Sondehub system, or have been ported over to it.

For more information on Sondehub and the transfer from HABHUB, see Mark’s article.

Supported Tracker Modulations

For detailed information on this, see this article.

Most UK flights use RTTY or LoRa.


RTTY

The usual method for RTTY was to use dl-fldigi, which uploaded directly to HABHUB, but this will stop working when the HABHUB server is switched off. However, there are alternatives:

  • Use Horus GUI
  • Continue to use dl-fldigi (or, better, just fldigi) and have the output read by HAB Base which will show the decoded telemetry, place the payload on its own map, and upload the telemetry to sondehub/amateur.


LoRa

There are several programs available for receiving LoRa telemetry:

  • HAB LoRa Gateway. This has been modified to upload to sondehub/amateur; you will need to update to V1.9.2 or later.
  • HAB Base. This has been modified to upload to sondehub/amateur; you will need to update to V1.6.2 or later.
  • HAB Explora. A new version will be released soon, with sondehub/amateur support (balloon telemetry and chase car upload) and also support for Android 11/12.
  • HAB PADD. A new version will be released soon, with sondehub/amateur support (balloon telemetry and chase car upload) and also support for Android 11/12.
  • Windows Serial Gateway. I don’t currently plan to modify this for sondehub/amateur as HAB Base does everything that it does.

What Do We Lose?

Flight Documents

Sondehub has no concept of flight documents, so please be sure to publicise your flight in the UKHAS Google Group. Obviously, you will no longer need to create flight documents or get them approved.

Payload Documents

Sondehub has no concept of payload documents, which were used to tell HABHUB which field in the UKHAS telemetry string contained what information. This means that you no longer need to create a payload document, but that Sondehub will only show the basic, essential fields such as latitude and longitude. However, see the following section on Tracker Telemetry for an alternative mechanism.

Tracker Telemetry

Sondehub has no concept of payload documents, or indeed of the format of a UKHAS telemetry string, so it’s up to the receiving/uploading software to parse the UKHAS telemetry from a tracker, deciding which field means what (latitude, longitude, temperatures etc.), then passing those values on to Sondehub.

All such programs assume that the telemetry string is of this form:


Provided that your tracker follows this format – and I don’t recall seeing one that didn’t – then your tracker’s telemetry can be stored by sondehub/amateur and will (assuming a valid GPS position) appear on the live map.
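Parsing that standard layout takes only a few lines. A Python sketch for illustration (my apps do this in Delphi); the `$$callsign,counter,time,lat,lon,alt*CRC` framing with a CRC16-CCITT checksum after the `*` is the usual UKHAS convention:

```python
def crc16_ccitt(data):
    """CRC16-CCITT (poly 0x1021, init 0xFFFF), as used for UKHAS checksums."""
    crc = 0xFFFF
    for byte in data.encode('ascii'):
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def parse_ukhas(sentence):
    """Split a UKHAS sentence into its standard fields, verifying the checksum.
    Assumed layout: $$callsign,counter,time,latitude,longitude,altitude[,...]*CRC"""
    body, checksum = sentence.strip().lstrip('$').split('*')
    if crc16_ccitt(body) != int(checksum, 16):
        raise ValueError('checksum mismatch')
    fields = body.split(',')
    return {
        'callsign': fields[0],
        'counter': int(fields[1]),
        'time': fields[2],
        'lat': float(fields[3]),
        'lon': float(fields[4]),
        'alt': float(fields[5]),
        'extra': fields[6:],   # any additional fields, per the field list
    }
```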

That’s great, and it avoids the issue of people creating payload documents that don’t match their tracker telemetry. However, it also means that any extra telemetry is neither displayed on the map nor charted on the Sondehub Grafana page. Such telemetry might include the number of satellites, or internal and external temperatures, and it would be a shame not to have those displayed, stored and charted.

There is a solution, which is for the tracker to also broadcast the field list. The method supported by my various receiver programs is for the field list – a sequence of digits/letters, one per field – to be included in the regular telemetry. For example:


The part in bold is the field list. Every character in that string represents a field type; e.g. the “0” represents the payload callsign. Since we always assume that the basic telemetry is as shown earlier, the field list will always begin with the characters for those standard fields, namely “012345”. This is used by the decoding programs to locate the field list.

I have defined a list of field types that my receiver programs understand and can then send to sondehub/amateur. I’ve listed the field codes in the following table, together with which of my trackers and receivers support them currently. HAB Rx is a common library for HAB Base, Explora and PADD. The HAB Base column shows that HAB Base displays those fields in the Payload window.
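A sketch of how a decoder might locate and apply the field list, using its “012345” prefix as described above. This is illustrative only: the field-code subset is taken from the table that follows, and I’ve assumed here that the field list appears after the extra values – the exact placement is whatever your tracker emits:

```python
# Subset of the field-code table from this article.
FIELD_CODES = {
    '7': 'speed', '8': 'heading', '9': 'battery_voltage',
    'A': 'internal_temperature', 'E': 'cutdown_status',
    'P': 'battery_current', 'Q': 'external_temperature_2',
    'U': 'predicted_landing_speed', 'V': 'time_till_landing',
    'W': 'last_command_received',
}

def extract_extra_fields(fields):
    """Given the comma-separated UKHAS fields (callsign first), find the
    embedded field list by its '012345' prefix and name the extra values."""
    for index, value in enumerate(fields):
        if value.startswith('012345') and index > 5:
            codes = value[6:]         # characters after the standard '012345'
            extras = fields[6:index]  # the extra values themselves
            return {FIELD_CODES.get(c, c): v for c, v in zip(codes, extras)}
    return {}
```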

| # | Field | Notes | FlexTrak | FlexTrack | PITS | HAB Rx | Sondehub | HAB Base |
|---|---|---|---|---|---|---|---|---|
| 7 | Speed | Not working | N | Y | Y | Y | | |
| 8 | Heading | Not working | N | Y | Y | Y | | |
| 9 | Battery Voltage | | Y | N | Y | Y | Y | Y |
| A | Internal Temperature | | Y | N | Y | Y (if no ext) | Y | |
| E | Cutdown Status | 0 = Idle, 1 = Armed, 2 = Triggered (Altitude), 3 = Triggered (Manual), 4 = Triggered (Other) | | | | | | |
| P | Battery Current | | Y | Y | Y | | | |
| Q | External Temperature 2 | | Y | | | | | |
| U | Predicted Landing Speed | | Y | Y | Y | | | |
| V | Time Till Landing | | Y | Y | Y | | | |
| W | Last Command Received | | Y | Y | Y | | | |

Tracker Support

Follow me on Twitter for release announcements.

All of my existing trackers will work fine with Sondehub/amateur; however, you will only see the essential fields (see above) unless you update them and enable the field list function.


PITS

Update to the current version, then edit /boot/pisky.txt and add the following line:


The field list will then be automatically built and appended to the telemetry.

FlexTrak (Pi/STM)

The next release will support sending the field list. This can be configured using the configuration file on the Pi, or by connecting the tracker to a PC with a USB-serial cable and using the Windows configuration program. The field list itself can also be configured using either of those methods.

FlexTrak (Pi/AVR)

The next release will support sending the field list. This can be configured using the configuration file on the Pi, or by connecting the tracker to a PC with a USB-serial cable and using the Windows configuration program. The field list itself can also be configured using either of those methods.

FlexTrack ESP32

The next release will support sending the field list. This can be configured using the Windows configuration program. The field list will then be automatically built and appended to the telemetry.

FlexTrack AVR

The next release will support sending the field list. This can be configured using #defines at the top of the flextrack.ino file. The field list will then be automatically built and appended to the telemetry.

Receiver Support

Follow me on Twitter for release announcements.

LoRa Gateway

Update to the current version, then edit gateway.txt and add the following line:


Tracker telemetry will then be sent to sondehub/amateur. If the telemetry contains a field list then the specified fields will be sent as well as the basic ones.

HAB Base

Update to the current version.


To enable uploads to sondehub/amateur, click the System Settings button and click the Sondehub option:

Tracker telemetry will then be sent to sondehub/amateur. If the telemetry contains a field list then the specified fields will be sent as well as the basic ones.

Local Payload Receiving

To receive telemetry of nearby payloads from sondehub/amateur (equivalent to the previous HABHUB Logtail option), add a new source and choose “Sondehub” from the drop-down list of source types:

For any payload that you are uploading to sondehub/amateur, you can load the Sondehub map for that payload by using the link under the Settings tab:

HAB Explora

This is being worked on.


HAB PADD

This is being worked on.

FlexTrak STM flight with 3D Printed Holder

Yesterday I launched a FlexTrak STM32 tracker to test my latest firmware, test my HAB Base software during a flight, and to try out a Raspberry Pi camera with a wide-angle lens. It was also my first flight using a 3D-printed tracker holder and aerial.

3D-Printing and HAB

It took me a long while to decide to buy a 3D printer, as I wondered if I’d use it very much, but the reality is that once you have one you find no end of things to make. After a while though I found that there are things I wanted to make that I couldn’t find a design for, so I started to play with TinkerCAD which is an online 3D designer that is pretty easy to use.

One thing I wanted to do was make payloads easier to build, plus more resilient and re-usable. So I set about designing tracker holders that incorporate an aerial, with holes above and below for easy connection to the parachute above and another payload below. My printer has a print bed of about 26 x 26cm, meaning I had to print in 2 parts, so I made separate designs for the tracker holder and the aerial, with mating surfaces that can be glued or screwed together. Here’s the holder for FlexTrak:

Note the ring at the top to connect to the payload train, the holes for screwing on the tracker, the indented strip for where the GPIO pins go, the hole at the bottom for the radio aerial connector and the hole at the back for the Pi camera cable to exit.

This is the 434MHz aerial for FlexTrak:

Note the hole for the SMA socket, the grooves for the aerial wires, and the hole for mounting a payload below. The latter avoids the usual problem of the hanging cord fouling the aerial.

To complete the aerial, insert an SMA chassis plug in the hole, with the plug side above and the solder pin below. Solder solid wire to the pin, cut at a total of 164mm, run it along the centre groove and fix with superglue. Similarly, solder 2 wires to the ground connection and run them along the horizontal radial grooves. The aerial can then be fitted to the assembled tracker. I found that the SMA connection was solid enough, but you can screw or glue (superglue or hotmelt) the mating surfaces together if you wish.
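Incidentally, the 164mm figure is just the usual quarter-wave calculation for 434MHz with a wire velocity factor applied (0.95 is a commonly quoted figure for bare wire; treat it as an assumption and trim to taste):

```python
C = 299_792_458         # speed of light, m/s
FREQ = 434_000_000      # 434 MHz
VELOCITY_FACTOR = 0.95  # typical shortening factor for a bare wire element

quarter_wave_mm = C / FREQ / 4 * 1000           # free-space quarter wave
element_mm = quarter_wave_mm * VELOCITY_FACTOR  # practical element length

print(round(quarter_wave_mm), round(element_mm))  # prints: 173 164
```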

To complete the tracker glue a battery pack to the rear of the frame, connect a camera to the Pi, and mount the lot in some expanded polystyrene foam:


Flight Planning

I didn’t announce the flight as it was planned to be a short flight with a small balloon landing locally. However, the predictions showed that the landing area with a small balloon was on the other side of a motorway, and the only way I could keep the flight away from local towns and motorways was to send it up quite high, where it would be taken west into a very rural area.

Going high means using a large balloon and a light payload. I was hoping to test a second tracker but doing so would mean using a rather larger balloon, so I left that for another day. I settled on a 1000g balloon. Here’s my launch data sheet:

I also checked the changing flight profiles during the day, and opted for a 3pm launch to land in the best area.

What I didn’t take into account, but should have, was that hydrogen flights don’t have a constant ascent rate. The prediction software works best for helium which does, but for hydrogen the initial ascent is slower and the later ascent faster than the prediction assumes. For my flight, that meant more time going east and less time higher up going west, so the eventual landing spot was well east of prediction. The landing area was still safe, as I’d built in a large margin as we should always do, but it meant landing in an area with quite a lot of trees!

Flight Check List

For those new to HAB, it might be worth me going through my usual launch procedure for my solo launches. For those launching with a team, many of these tasks can be done in parallel.

  • Final check on predictions (worth doing as the prediction data updates every 6 hours and it’s best to use the latest data). From this I would hope to confirm my choice of balloon and gas fill. Most launchers will only have the one balloon and parachute, but even then you can modify the flight profile by adjusting the amount of gas, or even adding ballast or “reefing” the parachute to make it fall a bit faster.
  • Fill a water bottle (I use empty/cleaned milk bottles as they have a useful handle for attaching cable ties to) according to the weight you calculated (see my sheet above for an example).
  • Set out the launch area. Be aware of where the wind is coming from so you can have that behind you while you are filling (so the balloon doesn’t want to hit you in the face). Lay out the ground sheet, with a weight or peg at each corner if needed. Have the gas cylinder to your right (if you are right-handed) and a box with duct tape, cable ties, scissors and wire cutters to your left. Connect the balloon filler to the cylinder, and cable-tie the water bottle to the filler. My filling arrangement is unusual as I use the cylinder valve to regulate flow, but everyone else can open the cylinder valve at this point (and use the filler valve to start/stop flow later).
  • Connect the payload(s) and parachute together with cord. I use 10 metres from payload to parachute, and 5 metres from there to the balloon. Place the balloon end of the cord to your left, and place the payload on a table or bench.
  • Start up all the receivers. I generally use one receiver at home, and 2 or 3 in the chase car, but 1 at each location is enough. My car has a secondary battery for the HAB receivers and a 4G router, so I can leave everything running for many hours without the engine running. Most people will use a laptop, so ensure it is charged, and take a 12V charger for it, or a mains charger and inverter.
  • Start the tracker(s). Ensure that all receivers are receiving OK, and are uploading to habhub. Check that the tracker(s) have GPS lock, and that they all appear on the habhub map. Close the tracker payloads.
  • If you have any other payload electronics, e.g. a camera, start it now if it has enough battery power for the launch preparations (30 minutes at this point) and the flight. If not, delay the camera startup and payload closure till the balloon is ready to launch.
  • Fill the balloon till it can just lift the water bottle off the ground, without wind assistance, and keep it aloft. If it falls back to the ground then add more gas. Once it can keep the bottle off the ground, maybe with a slight ascent, the fill is complete.
  • Fit 2 cable ties to the neck.
  • Tie the payload cord to the neck.
  • Stand or sit on the cord. Carefully remove the balloon neck from the filler, keeping a tight hold of the balloon.
  • This is the first bit where it’s handy to have a helper, though it’s certainly possible to do solo. Fold over the balloon neck so it’s covering the 2 cable ties. Add a tie around the fold. Add a second, then wrap with duct tape.
  • If it’s at all windy, or you have a heavy payload, wear strong gloves for the launch. The pull of the balloon – from lift and wind – can easily cause the cord to cut through your skin.
  • This is also easier with a helper – I have to carry the balloon from the filling area in my garden into the adjacent field. Everything has to be carried – payload, parachute and cord – off the ground, otherwise the cord will snag and you will spend the next 10 minutes untying the inevitable knots. As we had to do when my helper didn’t understand my request!
  • Slowly allow the balloon to rise by tightly holding the cord and swapping the cord between your hands. As you do so, check that the parachute is the right way up (!!) and continue till you have the payload just below your hands.
  • Check the sky for any aircraft that are flying past – usually because the pilot saw your NOTAM and thought they’d come along and watch – and launch if safe to do so.

My launch site doesn’t have a nearby airport, so I have no need to call Air Traffic Control, but if yours does then your launch permission will state that you must call ATC before launch, typically 15 minutes before, so add that into your checklist at an appropriate point.

The Launch

Aside from the delay untying the aforementioned and unintended knotting, the launch process went smoothly and easily. I then watched the flight online and with my HAB Base program, with the latter being fed from my home receiver (a LoRa Gateway with aerial in the loft). All home and car receivers were feeding habhub, and I noticed how much better my car receivers were doing, probably because of the lack of a house roof in the way. I do have other receivers available, in my HAB shack and mountable on a 12m telescopic pole, but even the loft aerial managed to track the flight down to an altitude of 2,200 metres, which is pretty good considering the flight was on the opposite side of the hill that my house is on.

The flight was predicted to land about 40 minutes away, which is approximately the descent time from burst, so we started chasing shortly before the predicted burst so we could hopefully get to the landing area before the flight landed. The burst altitude was calculated to be 36km, and the balloon actually gave way at 36.6km. Often these Hwoyee balloons manage about 2km above prediction, but that’s not always the case and this balloon was 3 years old. Anyway, I’d planned the flight for a burst range between 36 and 38km, so this was inside my planning.

We saw a slight difference between the on-board landing prediction and the online one, and as the flight descended the two moved towards each other. The online one was initially closest to the final position, which is to be expected, but both were plenty good enough to aim for in the car.

We aimed for the closest road to the prediction, and got there a couple of minutes before landing, but with trees limiting our view we didn’t see the flight descending. We did however continue to receive telemetry so we had our landing position. Putting the map into satellite view we could see that the flight had avoided all the trees and found a nice field to land in. Excellent!

The next step was to find a route to the payload. Although we were less than 500 metres away, there was no path available, so we checked Ordnance Survey mapping and found a public footpath that got close to the payload and was accessible from a road on the other side of the payload. So we had to drive away from and around the landing position – which may have looked odd to anyone watching our progress online – to get to the footpath.

As we got close we spoke to some locals who said that they were walking that way anyway, so just follow, which we did …

We left the car by the road next to the footpath, and set out on foot …

As I said, we had to drive the long way round to make the on-foot part feasible, so here’s our path in red from the point where we were waiting when the balloon landed, to the footpath by road then by foot out to the payload:

We saw the bright parachute easily and walked across the grass field to collect it:

And to show how close this was to trees …

Finally, here are some photos from the camera. Clearly (or otherwise) the focus was set too short; it had been set correctly, but it is easy to knock out of focus and that’s what happened. Next time, glue! Here’s my Star Trek Tricorder receiving telemetry from the tracker:

Raspad 3 HAB Receiver

As you may know, several years ago I built a portable HAB receiver based on the Raspberry Pi, with a Star Trek LCARS user interface written in Python. It works well enough and I’ve not revisited the project since then.

Instead I developed mobile apps for Android and iOS, to run on phones and tablets, all written in Delphi, which is an IDE and language for developing cross-platform applications. Of course, phones and tablets cannot receive balloon telemetry directly, so they need to be connected to a separate receiver via USB or Bluetooth or WiFi, and those apps support those connections.

Although Delphi can target Linux, that support is currently only for Intel, so Delphi could not be used to target the Raspberry Pi running Pi OS. However TMS, who are probably the major supplier of 3rd party Delphi components, have a product called TMS Web Core which can be used to develop Web apps, to run on pretty much any browser. Web apps cannot access hardware directly; however, Web Core includes a product called Miletus which embeds a web app within a binary executable that can target, amongst other things, Raspberry Pi OS. Further, such an app can then access hardware on the Pi – GPIO, serial, SPI and I2C are all supported – making it possible to develop a HAB receiver for the Pi in Delphi code. I developed such a receiver earlier this year and presented on it in this webinar:

Raspad 3

Initially I ran this app on a Pi 4 in a small plastic case with the official Pi touchscreen. For GPS I used a GPS HAT, and for LoRa I used a LoRa HAT, all from Uputronics.

I then wondered if anyone sells a Raspberry Pi tablet where I could add these components internally. There are some tablets available, but the only one that was currently available, and which seemed to have enough internal space, was the Raspad 3. So I ordered one from Amazon and next day it arrived.

It didn’t take long to install my software (see later for instructions) and get it working. However the internal space is quite limited, and there’s only space for a single HAT. I do have a combined GPS/LoRa HAT, however it uses SPI channel 1 which is already used by the Raspad 3’s internal board (used for power, fan control, and repeating the Pi ports). I didn’t see anything in the documentation about the use of SPI; however, I doubt that many people try to add a HAT, especially as it blocks air from the internal fan!

Fortunately, I had other options, namely USB. All of the Pi’s 4 USB ports are free to use, so I connected a USB GPS to one and an Uputronics USB LoRa receiver to the other. This did require some software changes, firstly to allow for a choice of GPS device, and secondly to support the LoRa receiver’s serial protocol, but I had code for that already from my Android/iOS apps.

I tried with the GPS internally, but it did make it difficult to pick up satellites, so I opted for a stick-on external GPS which I mounted on the top of the tablet, with the LoRa UHF antenna socket on the left. I think it looks pretty good.


Raspad 3 is supplied with cables etc. but without a Pi 4, so install that as per the instructions.

This bit is up to you, but I found the fan to be very noisy. It’s not so much the fan itself, but the fact that it’s mounted on the back of the tablet, which behaves as a large sounding board, amplifying the noise from hardly audible to quite annoying. I removed the fan. My application uses little CPU and I saw temperatures around 50°C, way below the 80°C where the Pi slows down. Your Mileage May Vary.

Software Installation

For the latest instructions, see the github repository. However, note that since we aren’t using the Pi internal serial port, you don’t need to follow the instructions to free that up. The latest release binaries are in the releases section.


For the GPS you will need one that is designed for sticking to a surface and has a sticky pad. The cable it comes with will be much too long, and besides, we need to find a way to feed the USB cable through to the inside of the case, so you will need to cut the cable and insert it through a small hole drilled in the top of the case:

Small hole in case

Finally, solder the cable back together (but much shorter than before). Use heat-shrink tubing on the individual wires, then re-cover with the existing screening before covering with more heat-shrink tubing.

LoRa Antenna Socket

Use a short adapter cable with SMA male at one end and SMA female at the other. Drill a hole in the case and fit the female connector through it:

You should now have 2 cables fitted, looking like this:

USB Receiver

This is supplied in a case, but there’s no space for that so remove the 2 screws from the case, separate the 2 halves and remove the board from inside.

The board can now be fixed inside. I find the best place to be on top of the connector that plugs into the Pi. You could instead place it above the Pi, but it will partially block air from the fan (if you fit that), and you’ll need to take care not to foul the fan. I think the connector option is best. Use a double-sided pad or hot-melt glue.

I mounted mine upside-down because that suited the best USB cable that I had.

Connect the USB cable, running it as neatly as you can round to one of the free USB sockets on the Pi.

Next, connect the GPS to another spare USB socket, and connect the antenna cable to the SMA socket on the LoRa receiver.

Finally, slowly close down the back of the case, ensuring that the cables don’t prevent the case from closing. So long as your cables are fairly short then this should be easy enough.


Again, see the github repository for the latest full information, but remember that we are configuring two USB serial devices and not the LoRa HAT, so ignore the section on the latter.

GPS Setting

You will need to set the Linux device name for the GPS device. To find the name, open a command terminal and type the following:

ls /dev/tty*

You will see plenty of entries such as tty19, which you can ignore. The interesting ones are here:

If you see ttyACM0 and ttyACM1, then one is GPS and the other LoRa. In my case the allocation was:

  • LoRa = /dev/ttyACM0
  • GPS = /dev/ttyACM1

If instead you don’t see ttyACM1, but do see ttyUSB0, then that is the GPS.

So, armed with this information, enter “/dev/ttyACM1” or “/dev/ttyUSB0” in the “Device” box on the GPS settings page.
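If you prefer to check programmatically, here’s a minimal Python sketch. It assumes Pi OS with its standard udev rules, which also create descriptively-named symlinks under /dev/serial/by-id that make it obvious which device is the GPS and which is the LoRa receiver:

```python
import glob

# USB serial devices appear as /dev/ttyACM* or /dev/ttyUSB*; udev also
# creates stable, descriptively-named symlinks under /dev/serial/by-id.
candidates = sorted(glob.glob("/dev/ttyACM*") + glob.glob("/dev/ttyUSB*"))
by_id = sorted(glob.glob("/dev/serial/by-id/*"))

for dev in candidates:
    print(dev)
for link in by_id:
    print(link)  # the symlink name usually includes the manufacturer
```

The by-id symlinks are handy because they stay the same across reboots, whereas ttyACM0/ttyACM1 can swap depending on enumeration order.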

LoRa USB Setting

Similarly, the LoRa USB settings page has a Device box, and this time you should enter “/dev/ttyUSB0”.

Close and restart the program once you have entered the GPS and LoRa device settings.

4G (LTE) and Rural Broadband

This isn’t my usual type of blog post, but if you’re frustrated with slow and/or unreliable broadband, you might find my experience interesting and useful. With a lot of research, learning and testing, we’ve gone from a feeble 12Mbps down / 1Mbps up broadband to around 200Mbps down / 70Mbps up. How? Read on ….

A Bit Of History

When we moved to Herefordshire 8 years ago, the fastest internet we could reasonably get was around 7Mbps down 0.3 Mbps up, on a copper line to the local exchange. Good enough for browsing or watching iPlayer, it was dire for sending files to customers or uploading to YouTube, for example. So when FTTC (Fibre To The Cabinet) was offered, I got excited. Briefly. Because the C (Cabinet) was about 10 metres from the exchange, and thus about 2.5km from us. So 7/0.3 went to 8/1. Whoopdedoo. Still, when you’re sending files to a customer, a 3x increase in speed is worthwhile.

I knew when we moved that we had a fairly good 4G signal here, if only at one side of the house which faces the local town which is where pretty much all of the reachable phone masts are located. However it took several years before the mobile phone companies caught up with the need for 4G as a home broadband option, and started to provide plans that made financial sense instead of being around £1 per GB. EE were first with, if I remember correctly, £80 per month for 200GB, but after a short flirtation with them I switched to Three who offered unlimited data for about a third of the price. I had an external aerial installed, pointing towards town and the masts, connected through 5m of twin coax to a Huawei B525 router in the house. That worked pretty well, with some outages, at speeds of approx 30Mbps down and 20Mbps up, for a couple of years.


“FasterShire” has existed for about 6 years now, with the intention of getting fast internet across Herefordshire and Gloucestershire. We’ve had various promises from them as to when “work will start in your area” to get FTTP (Fibre To The Premises) and gigabit bandwidth, but all have come and gone. The latest was last month, stating that our house – and a lot of others according to the planning map – are no longer part of the Fastershire contract with Gigaclear. Apparently the costs of reaching these far-flung places (we’re 5km from the nearby town) are too high for the contractors, despite that being what they were contracted to do. When we’ll finally get fibre is anyone’s guess.

Still, we have 4G …

When Three is less than 4(G)

Then it broke. Three had sent me a text saying that they were going to work on the mast, so I wasn’t surprised when the signal dropped out a few times, and ran slowly in between, for a few hours. But the speed never returned to previous levels, settling at around 12Mbps down 1Mbps up.

It stayed that way for a couple of days so I contacted Three. After the usual “you’re near the edge of coverage” “yes I know but I have an external aerial and it’s been fine for years”, they said that the work was still in progress so try again the next day.

Still slow.

So I complained again. My ticket got raised to “2nd level” support, who called me in the evening. “Let me do a network check”. That was just an excuse to bring up the usual “you’re near the edge of coverage” nonsense. Yes, I know. I also know where the mast is, and that I have clear line of sight to it, and the signal itself is still strong; it’s the bandwidth that’s shite. The call ended with the suggestion to “try again in a couple of days, and cancel your contract if it’s still slow”.

It was, and I did.

ISP Testing

I already had an EE SIM card that I use for internet in my HAB chase car, so I put that in my router to compare with the Three bandwidths. It was, despite connecting to the same mast, much faster. Faster in fact than Three on a good day. I also tested with my wife’s O2 SIM card, and a Vodafone SIM card purchased on Amazon, and EE was the fastest of all especially for uploads. My EE SIM card though is a pre-paid 100GB/1 year card, which is great for use in the car and as a backup, but would last about a week in the home router! So I signed up for a 12-month “all you can eat” contract with EE and switched to that SIM card when it arrived.

Which Mastest is Fastest?

In my ISP testing I started to learn about 4G and how to get the most from it. Finding the closest mast, or even the mast with the best signal, will not necessarily get you the fastest speed. Masts are not all equal.

First, if you have a Huawei router, and you want to optimise your connection, download a copy of LTE H-Monitor, which is a great utility for monitoring your connection, and configuring your router’s modem.

As well as the pretty graphs so you can see activity, it shows which mast you are connected to, which cell on that mast, and which 4G band you are connected on. For more information, click the “Find Cell” button, choose the “CellMapper” option, and you will see a map with that particular mast displayed. Click on the mast to see the coverage plots:

As I understand it, these plots are from data that has been crowd-sourced from a mobile phone app, and don’t reflect the range achievable with routers equipped with directional antennae.

Clicking on one of the plots shows the cell data, including the direction in which the mast antenna for that cell is pointing, which 4G band it uses, and what the bandwidth is. Different ISPs have a different set of bands to choose from, and in my area Three use bands 3 and 20, EE use bands 1, 3 and 20, and Vodafone use 1, 7 and 20. It’s interesting to use the search function on the map to see which operators have masts near you, and what bands they offer.

Band 20 is on a lower frequency than the others – around 900MHz vs 1.8-2.6GHz – so has the advantage of travelling further but with less bandwidth. So if you find that you have a good signal on a 4G device, but poor download/upload speeds, it’s quite likely that your device has connected to a band 20 mast. For mobile phones it’s a good thing to have a band like this – slower but more likely to get a signal – but for a fixed router it’s not the best option.

Generally it’s up to your 4G device to decide which mast/cell to connect to, but depending on your device this can be overridden. For phones there are some apps to select the band; for routers there is often a band setting in the configuration. For some routers there is even the option of selecting a particular mast. For my Huawei routers, the web dashboard has a band setting or, for more flexibility, I can use LTE H-Monitor:

On EE, I get the fastest connection from the closest mast; however, that is not always the one chosen by the router. It is, though, the only mast that offers band 1, so by choosing band 1 for the upload band, my router will connect to that mast. But is band 1 the fastest band?

Battle Of The Bands

On my closest, best mast, I have a choice of the following bands: 1, 3 and 20. I’ve found that for fast uploads I need to be connected to band 3 on that mast, but the small problem here is that other masts also offer band 3, so there’s a chance of connecting to one of those instead. However, I’ve found that my router seems to give priority to whichever mast it connected to before, so if I select band 1, it connects to band 1 on that mast (the only nearby mast with band 1), then when I select band 3 it will connect to band 3 on that same mast.

You may have noticed that more than one band can be selected for download, which is because the router supports downloading from 2 bands at once.

The CellMapper site gives information on all the cells and bands available on each mast, including approximate antennae directions and radio bandwidths. I’ve found though that having a wider bandwidth doesn’t necessarily mean faster uploads or downloads, because of course there are other users so you don’t (unless it’s 2 in the morning) have the entire bandwidth to yourself. So finding the cell(s) that give you the best throughput is trial and error, and having a tool such as LTE-H-Monitor means you can easily try different bands and use speedtest or similar to test the speed.

Which Route(r) To Take

For my initial tests I had my old faithful Huawei B525 router, but once I’d got the “how fast can I get this to go?” bit between my teeth, I wondered about upgrading. The B525 is a CAT6 device, which is old tech compared to the modems inside current routers. The latest routers boast much, much higher maximum speeds than the old ones; however, they can only go as fast as the kit inside the mast allows. Getting that information about the masts seems to be next to impossible, so I bought the flagship Huawei B818 4G router from Amazon,

Huawei B818-263 5G Router | Huawei B818 5G LTE Cat19 Gigabit CPE Specs

plugged it in and …. wow! Speeds of up to about 150Mbps down, more than twice as fast as my old router. Positioning however was critical – I could either get fast downloads or fast uploads, but never both, and connecting my external aerial didn’t get me fast uploads (more important than downloads for me).

Which Way Should It Be Poynting?

So the next step-of-unexpected-expense was to get a better antenna. I’m pretty sure that the old EE-supplied antenna isn’t directional, so I scoured online for advice and tech specs, and found the Poynting 5G/4G directional antenna, which has slightly higher gain than their 4G model and, by supporting 5G frequencies, won’t need replacement if 5G ever arrives.

So, the antenna arrived, I put it up on a tripod near where it would be wall-mounted, connected to the router and thence to a laptop in my shed via a long CAT6 cable, and got this …


That’s some improvement over the B818 on its own, which itself was an improvement over the B525, which with EE was an improvement on Three.

So now I had a more than reasonable 4G connection, but what if it stops? All providers have periods of local and even national downtime, and as I work from home and am online pretty much every waking hour (OK, every waking hour) then this is a vital utility. As vital as electricity in fact, and I have UPSs on my PC, servers and router, so what can I do to back up the 4G connection?

Another 4G connection.

Load Balancing

To back up one 4G provider, I needed to choose another (but not Three!!) so I checked for speeds with local masts again, and opted for Vodafone as they provide download speeds nearly as fast as EE, though slower uploads as the mast is further away. This was with my old antenna; I suspect that getting a second directional antenna will help, but I’ll leave that to another time. Vodafone offer an unlimited data plan, so I purchased that and set it up in a B535 router connected to my old antenna.

But having 2 separate 4G connections to 2 separate ISPs via 2 separate routers means 2 separate networks. How to connect them together so that, if one ISP fails, traffic is automatically sent to the other?

Enter the Load Balancing Router. Specifically, the Draytek Vigor 2925:

Vigor2925 Series | DrayTek

I’d seen this recommended online – especially the forum – and it’s a very solid load balancing / failover router. It has 2 WAN ports and can also use 4G USB modems but I didn’t bother with those. I paid £75 for mine on ebay. There are other makes and models of course, but I’ve used Draytek routers before and what they lack in user interface design they more than make up for with solidity and flexibility. Kit to rely on, for sure.

And for this application, setup was a breeze. In fact it didn’t need any – connect the 4G routers to the WAN ports, connect my LAN to one of the LAN ports, and that was that. Dual 4G load balancing with failover. The only setup I needed to do was to first set each 4G router to a unique IP address outside my usual range. So I have:

  • EE router on
  • Vodafone router on
  • Draytek router on

The 4G routers each deliver DHCP addresses, which the Draytek picks up to set its own addresses on the WAN ports, like this:

The Draytek routes to those addresses from LAN to WANs, so I can still access the router web pages as before, just using those new IP addresses, and I can set separate copies of LTE H-Monitor to monitor each 4G router separately:

So now, if an ISP drops out, the other will keep me connected. And if one ISP is slower (as happened this morning when my local EE mast had an issue), it won’t affect the other.

It’s been interesting to watch how the Draytek splits traffic between the two providers. There are lots of options here, and all I’ve done is select the recommended “session based” rather than “IP based” so that each connection is sent to whichever the router deems best. If I play a YouTube video, I can see the download blips on one of the routers, then if I play another that will appear on the other router. It’s all very slick and seamless. In fact the only issue I’ve seen is that the cPanel interface for my shared internet server complains that my IP address changes after login! Easily fixed by telling the Draytek to only use one WAN channel for that particular web domain. Otherwise, everything seems to work – internet telephony, Alexa, iPlayer, YouTube, my weather station and smart heating system – they all just keep working as they did before.

One thing to note is that this is load balancing and not bonding, so any single internet connection hits the internet through one 4G router or the other, and not both. So if you download a file using software that uses a single download connection, that will go through one WAN only, and not necessarily the fastest one (though you could force it to a desired WAN through a routing rule in the load balancer). However, multi-connection downloads will hit both WANs at full speed, like this:

And here are the upload (green) and download (red) graphs, for EE (left) and Vodafone (right). You can see that the download speeds here are similar, but EE (much stronger signal) is much faster for uploads. Also note that speeds do vary considerably during the day, and I’ve seen close to 300Mbps combined download.

Mini GPS Clock

Something I use more often in my HAB trackers these days is TDM – Time Division Multiplexing – where instead of transmitting continuously, each tracker transmits only during particular time slots. Since trackers always have a GPS receiver, this just requires software to sync the transmissions with GPS time. The advantage vs continuous transmission is that multiple trackers can share the same frequency, meaning that several trackers can be received by each receiver.
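The slot logic itself is simple; here’s an illustrative Python sketch (the slot counts and lengths below are made-up examples, not settings from any of my actual flights):

```python
def in_my_slot(seconds: int, slot_index: int, slot_count: int, slot_length: int) -> bool:
    """True if the current GPS-synced second falls in this tracker's TDM slot."""
    cycle = slot_count * slot_length              # length of one full TDM cycle
    return (seconds % cycle) // slot_length == slot_index

# Example: 3 trackers sharing a frequency with 10-second slots (30-second cycle).
# Tracker 1 owns seconds 10-19 of each cycle, so second 75 (15 into the cycle)
# falls in its slot.
print(in_my_slot(75, 1, 3, 10))   # True
print(in_my_slot(5, 1, 3, 10))    # False
```

On real firmware the `seconds` value would come from the GPS module each second, which is what keeps all the trackers in sync without any radio coordination between them.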

Once when I did this, I noticed that two trackers seemed to be using different times. The problem was that one tracker was using UTC, as sent by the GPS module in NMEA messages such as $GPGGA, but the other tracker was using GPS time from a UBlox UBX binary message. UTC is currently 18 seconds behind GPS time, and is sometimes adjusted with the addition of leap seconds to match the rotation of the Earth; GPS time is not adjusted. I adjusted my UBX tracker to match UTC time.
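The fix amounts to subtracting the GPS-UTC offset. A sketch in Python; note that the 18-second value is correct at the time of writing, but will change if another leap second is announced:

```python
from datetime import datetime, timedelta

# GPS time is never adjusted for leap seconds, so it currently runs
# 18 seconds ahead of UTC (valid until the next leap second is added).
GPS_UTC_OFFSET = timedelta(seconds=18)

def gps_to_utc(gps_time: datetime) -> datetime:
    return gps_time - GPS_UTC_OFFSET

print(gps_to_utc(datetime(2021, 1, 1, 12, 0, 18)))  # 2021-01-01 12:00:00
```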

To make it easier for me to test that TDM trackers are transmitting at the correct time, I thought of having a small GPS clock on my desk. There are other uses of course, so it would be a useful thing to have. I thought I’d be able to find something ready made but couldn’t, so I decided to make one.


When I bought my 3D printer, I wondered how much I would actually use it. That was a couple of years ago and I’ve found that it’s been much more useful than I thought, from making coasters, little cats for my wife, phone holders, and more recently cases for electronics projects. There are some online repositories for designs that can just be downloaded, converted to printer files, and then loaded onto the printer. So I searched the Thingiverse site and found this little housing for a 1.3″ OLED display which I thought would be about the right size for a little clock, with enough space to include a GPS receiver:

The design comes as 2 files, one for the case and one for the base. So I downloaded both into Cura, converted them into print files for my 3D printer, and sent them to the printer.

3D Printing

My printer is a Geeetech A10M which I bought because of its dual print head, meaning that it can make models that either mix 2 colours, or are partly made in one colour and partly the other. It turned out not to be a great decision, because that mixing head was more trouble than it was worth – the convoluted path and non-return valves necessary for such a design cause the head to jam up more frequently than a much simpler non-mixing head. So I’ve swapped out my mixing head for a non-mixing one.

My printer generally stays in my work shed, so rather than take print files to it on an SD card, I have the printer controlled from a Raspberry Pi 4B running OctoPi. I can then send print jobs to it via the web interface, and monitor the print throughout.

Since prints can sometimes fail – if the print base plate isn’t properly levelled then the print can be a ball of string in the air, or I’ve had the reel of filament jam – then it’s good to have a view of the printer in action. I’ll probably add a Pi camera for that, but for now I use a Reolink pan/tilt/zoom camera connected to the WLAN in the shed.

Finally, so I don’t have to go down to the shed to power the printer and Pi up, they are powered through a TP-Link Kasa “smart plug”. These are great, and I have several around the house for things that aren’t used often and would otherwise cost a significant amount if left powered when not in use. I can switch any of these on or off, and even monitor power consumption on some of them, via a phone app.


The case that I printed houses a 1.3″ OLED, so I ordered one of those from Amazon for next day delivery. The case was also designed for a particular CPU board, but I have plenty that will do the job for my application, including an AVR 32u4 board that is small enough to fit in the case, and has USB power already. Finally, I removed a GPS module from an old unused tracker. The connections are straightforward:

  • GND from AVR to GPS and OLED
  • 3V3 from AVR to GPS and OLED
  • SDA from AVR to OLED
  • SCL from AVR to OLED
  • Rx from AVR to GPS Tx

I didn’t connect the AVR Tx to GPS Rx as I don’t need to send anything to the GPS – it certainly doesn’t need to be put into flight mode!


The case is intended for use as a status display for OctoPrint, for which all the source code is available. I might even make one the same to monitor my own prints! For this project though, I just need to scrape together some GPS and OLED code from my own tracker source.

You can download the code from here. It’s pretty simple:

  • At startup it sets up the OLED
  • At startup it draws the display, with zero time of course
  • Main loop checks for GPS data
  • Incoming GPS data is split into lines
  • Lines are checked to confirm checksum OK
  • GPGGA lines are parsed; others ignored
  • Time and number of sats are extracted from GPGGA message
  • Once a new GPGGA message has been received, the display is updated
  • Display shows time in a large font, plus a bargraph showing number of satellites.
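The checksum and parsing steps above can be sketched like this, in Python rather than the AVR code the clock actually runs; the field positions follow the standard GPGGA layout (field 1 is time, field 7 is satellite count):

```python
from functools import reduce

def nmea_checksum_ok(sentence: str) -> bool:
    # The checksum is the XOR of every character between '$' and '*',
    # compared against the two hex digits after the '*'.
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    body, _, checksum = sentence[1:].partition("*")
    return reduce(lambda acc, ch: acc ^ ord(ch), body, 0) == int(checksum[:2], 16)

def parse_gpgga(sentence: str):
    # Extract time (hhmmss) and satellite count; ignore other sentence types.
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        return None
    return fields[1][:6], int(fields[7])

# A standard example GPGGA sentence, not one captured from my GPS module.
line = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(nmea_checksum_ok(line))   # True
print(parse_gpgga(line))        # ('123519', 8)
```

Matching on any sentence ending in "GGA" also catches $GNGGA, which some modules emit when using more than one satellite constellation.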

One point to note is that the OLED mounts upside-down in the case, so the software rotates the display to suit.


This part is very easy, as the 3D model takes care of the case and OLED of course.

The case does have a slot for the intended CPU board, but mine was smaller so I just used a hot-melt glue gun to mount it on the base of the case. The GPS was then mounted just above, with coax connecting it to an SMA socket mounted on the back of the case.

The Result

And on my desk …

HAB Base Update V1.4.0

I’ve released a new version of HAB Base, which is intended for easy management of one or more receivers from a Windows PC.

The changes are:

  • Support for Sondehub import via UDP
  • Support for local Sonde data from a rdzTTGOSonde receiver via UDP
  • Map shows the predicted path and landing point of the balloon
  • Balloon burst altitude can be set for the prediction
  • UI changes to fix issues on high-DPI monitors and other systems where Windows has a font stretch factor set to other than 100%.
  • The APRS source can also be used for OGN (Open Glider Network)
  • Fixed error when removing old payloads
  • Settings window for serial sources shows the list of COM ports to choose from
  • Payloads can be removed from the list and map, either temporarily (until the next telemetry arrives from that payload) or blocked for as long as the program is running.

Download the installer from here.


This is a receiver for radiosondes, running on a TTGO ESP32/LoRa board with an OLED. It emits position packets using UDP on the wireless network to which it is connected. HAB Explora, PADD and Base can all now decode these packets.

To configure:

  • In Settings –> Other, set the UDP Rx port to whatever you like (I use 12004 which is one of a set that I have reserved for HAB use on my networks, but you can use whatever works for you)
  • In the rdzTTGOSonde configuration, set the UDP transmit port to match above (see screenshot below).
  • In the rdzTTGOSonde configuration, set the UDP transmit IP address to either the known IP address of the device running HAB Explora/PADD or, better, the UDP broadcast address for the local network. For the latter, form the broadcast address by taking the local network address and setting all bits in the host portion to 1. For example, if a device has an address and a subnet mask, then the broadcast address is
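As a quick check, Python’s standard ipaddress module will compute the broadcast address for you; the address and mask below are just illustrative values:

```python
import ipaddress

# Combine an interface address with its subnet mask (illustrative values);
# the broadcast address is the network address with all host bits set to 1.
iface = ipaddress.IPv4Interface("192.168.1.37/255.255.255.0")
print(iface.network.broadcast_address)   # 192.168.1.255
```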

These are the settings in the rdzTTGOSonde configuration page:

Flight Prediction

For payloads that do not include their own predicted landing position, HAB Base now refers to the online Tawhiri predictor to calculate not only the predicted landing position but also the flight path.

For payloads that are still ascending, the program initially assumes a burst altitude of 30km; however, this can be adjusted by opening the payload window and choosing the “Settings” tab:


This is a system used by gliders and other light aircraft, being broadly equivalent to the ADS-B system but at a much lower cost. OGN receivers upload to a server which HAB Base can now receive from. The received packets are in APRS format, so to receive them you should create a new APRS Source within HAB Base, and set the host to

To configure:

  • Click the “…” button next to an existing source
  • Choose “Add New Source” from the menu
  • Choose APRS/OGN from the “type of new source” list
  • Fill in the code and name as normal
  • Set the host box to be
  • Leave the port as 14580


If you previously used HAB Base to monitor radiosondes, you will have noticed a while ago that this stopped working. This is due to the sondehub system being separated from habhub; Sondehub has its own HTTP and websocket APIs, different to habhub’s.

To import from sondehub to HAB Base, you need a Linux or Windows machine with Python 3 and Pip3 installed, on which you run a small Python script that imports filtered telemetry from sondehub and emits it locally via UDP.

The following steps are for Raspberry Pi OS:

sudo pip3 install paho-mqtt
sudo pip3 install boto3
git clone
cd pysondehub

Then to run the script, enter a command of the format:

python3 <latitude> <longitude> <max_distance_in_km> <target IP address> <port>

You can use these parameters to select a central point (e.g. your location) and the radius of a circle around it, so that the script only emits positions of radiosondes that are within that circle.
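That circle test can be sketched with the standard haversine great-circle formula (a sketch of the idea, not the pysondehub code itself):

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km (haversine formula)."""
    r = 6371.0                                  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_range(home_lat, home_lon, sonde_lat, sonde_lon, max_distance_km):
    """Only emit sondes that fall inside the circle around home."""
    return distance_km(home_lat, home_lon, sonde_lat, sonde_lon) <= max_distance_km
```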

The “target IP address” is that of your Windows PC that runs HAB Base.

“Port” can be any unused port number; I use the value 12013 which is part of a set that I reserve for HAB communications on my network.


python3 51.95023 -2.54445 1000 12013

The script will display the telemetry for each position report that it receives from sondehub and which is within the circle defined.

Once this is running, open HAB Base and create a new Source. Select the source type as “UDP”, fill in the code and name as usual, and set the port to be the same value you used for the script earlier. e.g.:

Finally, click the Save button.

The new source will then run, e.g.:

Note: Remember that HAB Base has its own distance filter, which it applies to this source, and this would normally be set the same as the value used for the script.

UCS Flight – Video Streaming

This was a school flight, where they built the payload and did the launch, I supplied a self-contained radio tracker, and a 3rd party supplied 2 rugged mobile phones to take video during the flight. It gave me a chance to test a live streaming setup that I intend to use on all of my flights from my launch site.

Since the school’s payload weight was already about right for the parachute size, I decided to use a lightweight tracker in a small foam egg. Based on a small AVR, it doesn’t include landing prediction but is a reliable basic LoRa tracker. I set it to mode 1 with a packet sent every 5 seconds, and I enabled calling mode so that receivers could be set to the calling channel.

Live Streaming

I’ve streamed video from the launch site before, but that has been either directly from a Raspberry Pi with a camera (using ffmpeg to stream to YouTube) or with a camcorder connected to a video capture device connected to a laptop.

For this flight I tested a new, more flexible setup. I used the popular and free OBS (Open Broadcaster Software) which is an incredibly capable program for combining various video and other sources into a single video screen which it can record locally and/or stream to various online services, including YouTube.

The basis of my stream was an IP camera in my garden, looking over the area I use for filling the balloon. All that needed to be done was to find the URL for the camera video stream, and enter that as a video source in OBS.

Next I wanted to add telemetry from my tracker. I now run my own internet server which runs a program that collects balloon telemetry from various sources, including my LoRa receivers at home, so I needed a way of displaying this telemetry as a text overlay on top of the IP camera video. One of the OBS options is to feed the contents of any program window into the output video stream, so I wrote a small program that collects the telemetry from my server and writes it as text on a green background. I added this program as a video source in OBS, and finally added a ChromaKey filter within OBS to remove the green background.
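The overlay idea can be sketched like this: draw the text on a window with a pure-green background, capture that window in OBS, and let the ChromaKey filter key the green out. This is a minimal sketch only; the field names and values are illustrative, not my server’s actual schema:

```python
CHROMA_GREEN = "#00FF00"   # pure green: easy for OBS's ChromaKey filter to remove

def format_overlay(telemetry: dict) -> str:
    """Render a telemetry dict as the text drawn over the video stream.

    Field names here are illustrative - the real server's schema may differ.
    """
    return "\n".join([
        f"Payload:  {telemetry['payload']}",
        f"Altitude: {telemetry['alt']:,.0f} m",
        f"Position: {telemetry['lat']:.5f}, {telemetry['lon']:.5f}",
    ])

if __name__ == "__main__":
    # Show the text on a solid green window; OBS captures this window and the
    # ChromaKey filter removes the green, leaving just the text over the video.
    import tkinter as tk
    root = tk.Tk()
    root.configure(bg=CHROMA_GREEN)
    label = tk.Label(root, bg=CHROMA_GREEN, fg="white",
                     font=("Consolas", 28), justify="left",
                     text=format_overlay({"payload": "UCS1", "alt": 12345.0,
                                          "lat": 51.95023, "lon": -2.54445}))
    label.pack(padx=20, pady=20)
    root.mainloop()
```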

I also wanted to add data from my weather station. That station automatically uploads to various services every few seconds, including Wunderground, which has an API for getting the current data from any station. So I added some code to my internet server that polls Wunderground for this data every few seconds, and then sends it to my overlay program. This is how the ChromaKey works:
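The polling step can be sketched as below. The endpoint is Wunderground’s PWS current-conditions API, but treat the exact URL parameters and the JSON shape as assumptions to be checked against the current Wunderground documentation; the station ID and key are placeholders:

```python
import json
import urllib.request

def wunderground_url(station_id: str, api_key: str) -> str:
    # Current-conditions endpoint for a personal weather station (metric units).
    # Parameter names assumed from Wunderground's published v2 PWS API.
    return ("https://api.weather.com/v2/pws/observations/current"
            f"?stationId={station_id}&format=json&units=m&apiKey={api_key}")

def extract_weather(reply: dict) -> dict:
    """Pull the fields we overlay, from the assumed response shape."""
    obs = reply["observations"][0]
    return {
        "temp_c":   obs["metric"]["temp"],
        "wind_kph": obs["metric"]["windSpeed"],
        "wind_dir": obs["winddir"],
        "humidity": obs["humidity"],
    }

if __name__ == "__main__":
    # Poll once; in my server this runs every few seconds on a timer,
    # forwarding the result to the overlay program.
    with urllib.request.urlopen(wunderground_url("IMYSTATION1", "YOUR_KEY")) as r:
        print(extract_weather(json.load(r)))
```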


Finally, I added the school’s logo as a fixed transparent bitmap, and set OBS to stream to my YouTube channel. Here’s the resulting stream, just before launch:

The Launch

Predictions for the flight were fairly stable, with the landing position moving steadily SE during the day. The launch was a bit later than planned so it did land further SE than planned, but more on that later.


The day itself was very hot, during a week-long UK heatwave with daily temperatures up to 30 degrees C. So hot in fact that the mobile phones used for the flight overheated while still on the ground! They were cooled prior to launch by dipping them in a tub of water (I told you they were rugged!).

Here are the team, posing with probably the largest helium balloon they’ve ever seen!

Launch was easy enough, though the excited schoolkids launched the balloon before I’d had a chance to get my drone ready to record the launch; I’m hoping to record or possibly stream drone footage on my next flight. So here instead is the launch captured by my field time-lapse camera:

The Flight

The flight proceeded pretty much as expected; however, a combination of factors meant that it landed further south than planned, which could have been unfortunate as that took it into the Forest of Dean. Why are balloons magnetically attracted to trees?? The first factor was that, after an initial ascent at the planned 5-5.5m/s, the ascent slowed once the balloon entered the stratosphere, resulting in it spending longer travelling in a south-westerly direction.

Later, heading west, it ascended almost 3km above the prediction, bringing it further west. The net result was a shift to the south-west from the planned landing near Longhope:

Note the large expanse of green in that map. And note the high density of (very tall) trees that it successfully managed to avoid by landing in a clearing!


We were about 2 miles away in our chase car when it landed, without a radio signal from the tracker due to a hill in the way. After stopping to check the landing position and the route, we approached along the road you can see near the landing spot, turned into the drive and called the other chase cars. At that point I spotted the parachute in the field (see arrow at the right of the image):

Once permission was obtained from the landowner, the team walked across the field to recover their precious payload!

And here they are, checking the footage that they got from their 2 phones – one with a regular camera and the other with an infra-red camera:

Finally, if you want to watch any of the launch stream, it’s here:
