GSM-based trackers are quite rightly frowned upon for HAB tracking, mainly because they only work at low altitudes (within range of a mobile phone mast, which generally aims its signal downwards). So they don’t provide tracking throughout a flight, which is a problem as you then don’t know where the payload is until it lands.
If you’re lucky.
There are 2 problems here – one is that GSM coverage isn’t 100%, and the other is that the popular GSM trackers don’t seem to like high altitudes. I don’t know if they get confused, or they don’t like the cold, but I’ve tried these things several times and only had one work once.
A GSM/GPS tracker that actually works would be useful though, as a backup to a main tracker. Having had little success with commercial offerings, I thought I’d make one. I found a model that uses the SIM868 GSM/GPS module, plus supporting electronics on a Pi Zero HAT. So that plus a Pi Zero and suitable power supply would make a fairly small backup tracker, and maybe even one that works.
The device supports GSM (calls, texts) and GPRS (2G, i.e. slow data). It also has a GPS receiver. It seemed attractive to use GPRS to provide internet access (via PPP), but that would lock out the single serial port, making GPS unavailable. So I decided to just send SMS from the device instead, using a script that gets the GPS position, then builds and sends an SMS containing that position. I wrote this in Python using the PyGSM library, which makes things very easy (generally no need to mess around with AT commands). PyGSM doesn’t know about the SIM868 GPS functions, but it was simple to add those. So my test script requests and parses the GPS position, then formulates a text message and sends it to my mobile phone:
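As a sketch of the idea (the serial port, phone number and helper names here are assumptions, and you should check your PyGSM version for the exact calls):

```python
# Sketch only: serial port, phone number and helper names are assumptions.
def parse_cgnsinf(response):
    """Parse a SIM868 AT+CGNSINF reply into (fix, lat, lon, alt)."""
    fields = response.split(":", 1)[1].split(",")
    if fields[1].strip() != "1":          # no GPS fix yet
        return False, 0.0, 0.0, 0.0
    return True, float(fields[3]), float(fields[4]), float(fields[5])

def build_sms(lat, lon, alt):
    # Short, human-readable, and tappable as a maps link on a phone
    return "HAB at %.5f,%.5f alt %dm http://maps.google.com/?q=%.5f,%.5f" % (
        lat, lon, int(alt), lat, lon)

def main():
    from pygsm import GsmModem            # pip install pygsm
    modem = GsmModem(port="/dev/ttyS0")   # port is an assumption
    modem.command("AT+CGNSPWR=1")         # power up the SIM868 GNSS engine
    reply = modem.command("AT+CGNSINF")[0]
    fix, lat, lon, alt = parse_cgnsinf(reply)
    if fix:
        modem.send_sms("+447700900123", build_sms(lat, lon, alt))  # number is a placeholder
```

The AT+CGNSINF reply packs the fix status, latitude, longitude and altitude into a comma-separated list, which is why a simple split() is enough to parse it.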
It would also be useful to have the balloon position automatically uploaded to the live map, so I decided to have the device send a second SMS but this time to a gateway based at home. This gateway is another Pi with a USB 3G modem attached. I used the same library, but a different script to poll for new messages, determine whether an incoming message is of the correct format, and if so build a UKHAS telemetry sentence, finally uploading it to habhub for the live map:
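The interesting part of the gateway is building the UKHAS sentence, which needs the standard CRC16-CCITT checksum (poly 0x1021, init 0xFFFF) over everything between the “$$” and the “*”. A minimal sketch (callsign and field layout are assumptions; a real sentence would match the payload document on habhub):

```python
# Sketch: callsign and field layout are assumptions; the CRC16 is the
# standard UKHAS checksum (CRC16-CCITT, poly 0x1021, init 0xFFFF).
def crc16_ccitt(data):
    crc = 0xFFFF
    for byte in data.encode("ascii"):
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return "%04X" % crc

def build_sentence(callsign, seq, time_str, lat, lon, alt):
    # Checksum covers everything between the "$$" and the "*"
    body = "%s,%d,%s,%.5f,%.5f,%d" % (callsign, seq, time_str, lat, lon, alt)
    return "$$%s*%s\n" % (body, crc16_ccitt(body))
```

In my case the SMS polling side is just PyGSM’s next_message() in a loop, with the resulting sentence uploaded to habitat.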
Tidying my office a few days ago, I came across some car reversing monitors that I used to use as cheap Pi displays for use in the chase car, to show the distance and direction to the payload; these days I use the official Pi touchscreen as it’s a lot better for that application. One of the monitors is a flip-up model, and I wondered how much space there was inside. I use the Pi Zero a lot for balloon trackers, as it’s small and light compared to other Pi models, but perhaps one could fit one inside the base to make a smart dashcam – one that can stream my balloon chases to Youtube as well as record to SD.
About the same time, Michael Horne (author of the excellent Raspberry Pi Pod blog), posted a picture of a similar-looking model on Twitter, asking how to power it from 5V. That’s the opposite of what I wanted to do (power the Pi Zero from the 5V rail inside the monitor) but I felt I might be able to help so I opened up my unit to find where the 5V could be tapped. As it turned out, Michael’s unit had a very different PCB to mine, but the seed was sown so I decided to start building my dashcam.
Pi Zero to Monitor
First job was to connect the display to a Pi Zero W (W because I want to be able to stream the camera video). This requires the 5V and GND lines on the GPIO pins, plus the composite video output pin, to be wired to their counterparts in the monitor. Once I’d used the correct video pin this worked without issue!
I don’t know how much spare current capacity the display has on the 5V rail, but it dropped slightly from 5V to 4.9V which is OK. The Pi Zero booted and ran continuously overnight with no issues and with nothing on the display PCB getting hot.
I connected the Pi Zero to my LAN via a USB LAN adapter, ssh’d to it, then set the video aspect ratio to match the monitor (16:9).
The full set of options is:
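For reference, the composite-video settings live in /boot/config.txt; a plausible fragment for a 16:9 PAL monitor (values are assumptions – check the official config.txt documentation for your display):

```
# /boot/config.txt - composite video settings (assumed values)
sdtv_mode=2      # 2 = normal PAL
sdtv_aspect=3    # 3 = 16:9, to match the monitor
```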
I also updated/upgraded Raspbian, and set up the WiFi.
Next steps were to enable and connect the camera, and to install ffmpeg, which is what I use to stream to YouTube. I used these instructions to install the library:
git clone git://git.videolan.org/x264
cd x264
./configure --host=arm-unknown-linux-gnueabi --enable-static --disable-opencl
make
sudo make install
and then ffmpeg. This takes several hours to build, so it’s a good time to find something else to do!
git clone git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg
sudo ./configure --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree
make
sudo make install
I then tested the video streaming to YouTube with a command like this:
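A sketch of that kind of command (bitrates, resolution and the stream key are placeholders; YouTube requires an audio track, hence the silent input from /dev/zero):

```shell
raspivid -o - -t 0 -w 1280 -h 720 -fps 25 -b 2500000 | \
ffmpeg -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero \
       -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 \
       -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY
```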
The stream worked on YouTube, however the preview displayed on the monitor was distorted. I thought that this was to do with Raspbian not correctly applying the screen resolution, but nothing I tried (specifying the resolution explicitly in config.txt, or specifying the preview window dimensions) fixed it. Eventually I concluded that the issue was within raspivid, and I soon found a very relevant forum post which explained how to modify it:
git clone https://github.com/raspberrypi/userland.git
Then add these lines near line 116 of RaspiPreview.c:
then rebuild. After following these steps, the problem was gone!
With the base plate removed, I threaded the flat camera cable up through the back of the base, behind the support that goes up to the top of the display, connected the camera and fixed it in position with Sugru:
I then added a piece of black duct tape covering the cable, to stop it snagging and to make it look tidy.
The Pi Zero has 3 wires connected – 5V, 0V and video, which on my monitor go to a convenient electrolytic capacitor and the back of a PCB socket. Finally, a switch is mounted near the front of the monitor’s base, and connected to a GPIO pin and GND:
Everything was then insulated with duct tape before screwing on the metal base.
I wanted the dashcam to work in one of 2 modes – to record as a dashcam normally would, or to live stream to YouTube (via the WiFi connection to a phone or MiFi device, for example). So I wrote a small script that switches modes according to the position of a switch connected to a GPIO pin. On startup, or when the switch position changes, the script runs the appropriate shell command for that mode. For regular dashcam recording, that’s a simple raspivid command; for streaming it pipes the raspivid output through ffmpeg (see command above). At present I’m not recording the video while streaming, so the next step is to do that with rotating file names (over-writing old files before the SD card fills up), recording at a high resolution but streaming at a lower one.
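A sketch of that mode-switch script (the GPIO pin, file paths and command lines are assumptions, not my actual code):

```python
# Sketch: GPIO pin, paths and command lines are assumptions.
import subprocess
import time

RECORD_CMD = "raspivid -t 0 -w 1920 -h 1080 -o /home/pi/dashcam.h264"
STREAM_CMD = ("raspivid -o - -t 0 -w 1280 -h 720 -b 2500000 | "
              "ffmpeg -f h264 -i - -vcodec copy -f flv rtmp://...")  # placeholder URL

def command_for(streaming):
    """Pick the shell command for the current switch position."""
    return STREAM_CMD if streaming else RECORD_CMD

def main():
    import RPi.GPIO as GPIO                  # Pi-only
    PIN = 17                                 # assumption: switch between GPIO17 and GND
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    process, last = None, None
    while True:
        streaming = (GPIO.input(PIN) == GPIO.LOW)   # closed switch = stream mode
        if streaming != last:
            if process:
                process.terminate()                 # kill the old mode's pipeline
            process = subprocess.Popen(command_for(streaming), shell=True)
            last = streaming
        time.sleep(0.5)
```

Running the commands through a shell keeps the raspivid-to-ffmpeg pipe simple, at the cost of having to terminate the whole pipeline on a mode change.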
I sometimes receive emails asking which HAB tracker is best to buy, so here I will compare the ones I have direct knowledge of. First though, if you have or want to have the ability to build and code your own tracker, do that instead! It’s much more educational to walk that path, rather than take the easy option of buying a ready-made tracker. It’s also far more rewarding. Or if you prefer, you could build your own tracker but use existing software, or buy a pre-built tracker and write your own software.
If you want to DIY the electronics and/or software, and if you can you should, then check out these resources:
NASA have given us many iconic photographs, such as “EarthRise” and “Man On The Moon“, but there’s one which is more easily replicated if you’re not NASA or Elon Musk, and that is the image of Bruce McCandless floating freely in space during the first untethered spacewalk:
So several months ago I set about replicating this as best I could, under a high altitude balloon.
Revell USA made an astronaut/MMU kit back in 1984; samples now are rare and expensive, but eventually I found an unused kit in the UK on ebay. It was a good price, complete and original:
The kit arrived late last year, and I assembled it over the Xmas break, in preparation for a launch on (hopefully) February 7th, 34 years to the day after the original flight.
I wanted to include several cameras for the flight, both video cameras and still cameras for downloading live to the ground during the flight. Having tried several different action cameras in the past, my current favourite is the Gitup Git2 camera – reliable, inexpensive and plenty of options in the firmware. I combined 2 of these (one Git2 with wide-angle lens, and one Git2P with a normal lens) with some AA-powered powerbanks to extend the run time from about 90 minutes to several hours using 64GB SD cards.
I also wanted live images, ideally from different viewpoints. The only reliable live image cameras I’ve used are the Pi models, and these are one-per-Pi. So I built a small network with 3 Pi boards using the built-in wireless modules to pass image files between them; a Pi 3 as an access point, and Pi Zero Ws as clients. All live images were then downlinked in sequence by one Pi using LoRa.
In the end it wasn’t really feasible to set up vastly different viewpoints as the astronaut model is quite large and the payload would have then been huge (and cumbersome, and delicate), so I had the cameras all quite close to each other.
I decided to place each Pi in its own Hobbycraft box. The Zeroes are very small of course and even with an AA powerbank there was space to fit a video camera with its own powerbank inside the box:
Next came the main Pi, with its own camera, a 3G modem (later removed due to insufficient power from the powerbank) and a UBlox USB GPS, all inside 2 of the same boxes glued together:
Finally, I added a backup tracker in case the main one failed for any reason, and to provide a programmed cutdown to prevent the flight drifting too far:
There was one last thing to do – my Revell model isn’t identical to the version that NASA flew, and most prominently was missing a camera. Easily fixed with some foam polystyrene and a plastic cap!
Bruce Junior was now ready for flight!
The flight predictions for my chosen date were not ideal, but quite good for the time of year. Initially I was going to have help from another HABber but he couldn’t make it that day, so I launched alone. To make that task easier I removed some items from the flight, allowing for a smaller balloon and less gas. I also chose to launch later in the day than planned, which meant I didn’t need to overfill the balloon so much to keep it away from the sea. Here’s the predicted flight path:
Less gas means less lift which makes it easier to tie the balloon and handle it afterwards. Aside from the cold, it was a very nice day to launch – fairly clear skies and not too much ground wind. Here’s the partially-inflated balloon:
Meanwhile the payload cameras were recording and transmitting to the other HAB enthusiasts online:
With the balloon fully inflated, tied off and tied to the parachute and payloads, it was time to launch:
I then finished getting aerials set up for the flight, finished filling the car with kit, and then set off on the chase. I knew that the flight was going to land some time before I arrived, so I wasn’t in as much of a rush as usual. Meanwhile the payload continued to rise till, at over 30km and just under 100,000 feet, the balloon burst. Here’s what happens to an aerodynamically asymmetrical payload when a high altitude balloon bursts and gravity takes over!
The flight computer includes its own landing prediction which, as I’ve seen every time before, is more accurate than the one the online map uses. Here “X” is the last prediction from the tracker, with “O” being the actual landing spot and the red line showing the online prediction:
That last position was from my chase car, which was still on the M5 and over an hour away at the time! Here’s what happened when I was trying to catch up:
Normally when I get close to a landed balloon the radio signal reappears and I can get the landing position easily. This time though the landing was on a farm behind some metal cowsheds, blocking the signal from the nearby roads. After driving past where I thought the signal should reappear, I found a hill, connected a Yagi aerial to my handheld receiver and got a position that way. Following that target I still didn’t regain the position until I got to the farm, when I could see a row of sheds between me and the landing position. It’s always fun explaining to farmers why I’ve suddenly turned up, and this time one of them had actually seen it land. Retrieval was easy, though muddy and rather smelly …
Here you can see the balsa-wood frame (for lightness and deliberate fragility) with pivoting support (again, to help prevent damage to whatever it lands on), with the balsa painted matt black so it disappears against the black sky at altitude.
The point of all this effort was to replicate as closely as I could those original NASA images, so once home I went through the camera footage to select these …
My next flight is planned for this coming Wednesday, 7th February, to commemorate the first untethered spacewalk by Bruce McCandless on the same day in 1984, by trying to recreate the classic image:
There will be a total of 3 Pi cameras with different viewpoints and different lenses, to best capture images of a Revell model astronaut and MMU (Manned Maneuvering Unit), sending live pictures down to the ground.
The flight will have 2 trackers, one a Pi with 434 and 868 LoRa modules and the other a simple AVR LoRa tracker with cutdown:
BRUCE: Pi, LoRa, 869.850MHz, Mode 3, SSDV and telemetry
MMU: Pi, LoRa: 434.225MHz, Mode 1, SSDV and telemetry
EVA: AVR, LoRa: 434.450MHz, Mode 2, telemetry only
The main tracker is a Pi 3 plus LoRa and UBlox boards. The SSDV images will cycle between 3 cameras – one on the Pi 3 and 2 more on a pair of Pi ZeroW boards, all connected via wifi (the Pi 3 is an access point). The cameras are (currently – may change): Pi V2 (Sony) camera, Pi V1 (Omnivision) camera, and a PiHut “fisheye” Omnivision camera. The cameras are arranged on the payload to get different views, so you will see different views during the flight. Same applies for both LoRa transmitters, with different images being sent to each. If for any reason the wireless stops (though it’s been 100% reliable in testing), then the Pi3 will just send its own images.
Both include landing prediction fields, and those from MMU will be re-uploaded by a Python script to appear as “XX” on the map.
The LoRa signals will stop for a few seconds each minute, during which time one of my gateways will be sending a message up to the tracker to request a re-send of missing SSDV packets on the 869.85MHz link.
The Pi 3 also has a USB 3G modem on it, which will attempt to connect while below 2km. When connected it will:
* Upload telemetry directly to habitat every 1 minute, as payload ID STS41B
* Stream video to YouTube
* Copy images to a web server
The video uses the Pi 3’s camera, so there will be no SSDV from this camera before launch or after landing – all SSDV will be from the other cameras.
The backup tracker will be attached just above the parachute, and will cut the balloon away if the flight gets south of 51.1 latitude (I may change that depending on predictions), or on a specific upload from the ground, to prevent a watery death.
And just because the payload isn’t heavy enough already, and doesn’t have enough trackers on it, we are adding a couple of car GSM/GPRS trackers, both sending messages to Anthony’s traccar server from where a Python script will send them to SNUS, as:
HTGSM1 – Upu’s car tracker
HTGSM2 – My cheap car tracker
There will be 2 YouTube streams – one from the payload (launch and landing, hopefully) and one at launch only using a camcorder connected to a laptop. Both streams will appear on a web dashboard – see links below:
This is to commemorate the very first untethered spacewalk by Bruce McCandless on 7th February 1984, as part of Space Shuttle mission STS-41B, when he used an MMU (Manned Maneuvering Unit) to fly up to 300′ away from the Challenger Space Shuttle, and to replicate as best I can the famous photograph of him floating in space …
Sadly, Mr McCandless died late last year.
Of course the “untethered” aspect can’t be repeated under a balloon, gravity being what it is, but careful use of black supports against a black sky will make it look untethered.
Fortunately, Revell USA made a combined astronaut and MMU kit in 1984. Unfortunately they soon stopped making it, and examples are expensive and fairly rare. I watched listings on ebay for a few months, but all were in the USA with expensive postage, until one popped up in the UK. Not only was this the least expensive I’d seen, with reasonable postage, but also it was a completely original sample with the parts still in sealed plastic wrappers.
It’s probably 45 years or more since I assembled a plastic kit, so I had to buy the glue, paints and brushes before I started. Assembly wasn’t difficult though the plastic in general was much thinner than the small Airfix kits that I remember from my childhood.
The flight will include 2 LoRa downlinks – one in the 868MHz ISM band (more bandwidth for larger images) and one in the 434MHz ISM band (better range).
I want to be able to take photographs from different viewpoints, ideally:
Straight shot from distance
One option would be to move the camera around with motors, but that would be delicate and likely to fail during flight. Instead I’ve opted for 3 separate cameras. This could possibly be done from a single Pi using USB webcams, but those aren’t reliable in my testing, and not as good quality as a Pi camera anyway.
Another idea is to use a separate Pi for each camera. I could then build 3 separate trackers, but for them all to use the 868MHz band I would need to have them take turns transmitting. All do-able but a bit messy, plus there would be a lot of aerials!
So instead, I decided to have one central Pi that has a camera, GPS and radio transmitters, plus 2 extra Pi boards just with cameras. Networking 3 Pis could be done with a network switch and cabling, but wireless is a lighter option. Recent Pis have built-in wireless networking (saves a bit more weight, and is more reliable in my experience) so I settled on a Pi 3 as the tracker and access point, and 2 Pi Zero W boards as clients. So that’s 3 Pi boards in total and 3 cameras.
Setting up a Pi 3 as an access point takes quite a few steps, especially when bridging the wireless LAN to the wired LAN, but there are clear instructions on the RPi web site.
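For reference, the heart of that setup is a hostapd configuration along these lines (SSID, passphrase and bridge name are placeholders – follow the RPi instructions for the dhcpcd/bridge side):

```
# /etc/hostapd/hostapd.conf - minimal sketch; SSID and passphrase are placeholders
interface=wlan0
# bridge=br0 is only needed if bridging the wireless LAN to the wired LAN
bridge=br0
ssid=HABNET
hw_mode=g
channel=7
wpa=2
wpa_passphrase=ChangeMe
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```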
I needed to modify my PITS software to cope with 3 cameras. Normally, the tracker program (which is the one transmitting image packets) requests new images periodically according to the schedule in the configuration file, and then chooses the “best” image and requests a conversion from JPG to SSDV format shortly before it finishes transmitting the current image. One option I had was for this code to be modified to request 3 photos instead of 1, with 2 of those being on the Pi Zeroes, plus of course to cycle between the cameras for conversion and transmission. Separately, a bash script takes photos and does the conversions.
I felt it would be simpler to remove some of this responsibility from the tracker, so that it just chooses which photo to send, choosing each time from a different camera. So this means that we need a simple script to take photographs, and a simple script to do the conversion to SSDV. The first of these scripts is run on each Pi, with the Pi Zero scripts also copying the photo files to the Pi 3.
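The real scripts were simple bash, but the idea is easy to sketch (in Python here, with paths, host address and raspistill options as assumptions): each Pi captures to a predictably-named file, and the Zeros then push theirs to the Pi 3:

```python
# Sketch in Python: paths, host address and camera options are assumptions.
PHOTO_DIR = "/home/pi/photos"
PI3_HOST = "pi@192.168.0.1"   # assumed address of the Pi 3 access point

def photo_filename(camera_id, seq):
    """Predictable names let the tracker pick the newest image per camera."""
    return "%s/cam%d_%05d.jpg" % (PHOTO_DIR, camera_id, seq)

def capture_cmd(filename):
    # raspistill flags are illustrative; resolution chosen with SSDV in mind
    return ["raspistill", "-t", "1000", "-w", "1024", "-h", "768", "-o", filename]

def copy_cmd(filename):
    # On the Pi Zeros only: push the new photo to the Pi 3 over wifi
    return ["scp", filename, "%s:%s/" % (PI3_HOST, PHOTO_DIR)]
```

Each Pi runs the capture in a loop (subprocess.call on those argument lists); the separate conversion script on the Pi 3 then runs the ssdv encoder over whichever JPG the tracker chooses.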
Here’s the SSDV page showing the result, on the 434MHz channel (smaller images), with the Pi 3 cycling through all 3 cameras:
Weather permitting, I’ll fly this on February 7th, 34 years to the day after the original flight.
This was a fun flight to provide a remote serial terminal on a Pi, between ground and a high-altitude balloon, using a bi-directional radio link.
Most high altitude balloon flights use a simple unidirectional data stream, sending the telemetry (balloon position and sensor data) and sometimes images too, from balloon to the ground. Most often this is RTTY or (in the USA) APRS, but there are alternatives such as LoRa, which more easily provides a means of reliably transmitting data to the balloon as well as from it. This greatly expands the range of things we can do during a balloon flight, for example:
* A ground station can request re-sends of missing data (image data or anything else) – see http://www.daveakerman.com/?p=2195
* A balloon can repeat data from other balloons, which might be flying or have landed – see http://www.daveakerman.com/?p=1850
* Uplink to request cutdowns
* Uplink to provide a guided parachute or parafoil with a new target landing position
You are in a maze of twisty little passages
Another possibility is to run a terminal session between ground (client) and balloon (host), allowing programs to be run on the balloon tracker as requested by a ground station:
This could even be used to change the tracker program, or have that program use new configuration parameters. Here though I’m going to use an idea provided by Philip Heron – run an old text adventure game. And to make this a group experience, I added a web dashboard that displays the terminal window in real (ish) time. The following diagram shows how this is achieved in software:
The LoRa gateway is the standard release with modifications added to provide a server socket on a specified port, to which any network terminal program (e.g. putty) can connect. In this case I have written a simple terminal program, in Delphi, that screen-scrapes the terminal window and sends the contents to a Python script, which then updates a web dashboard so that anyone with the URL can see what I see in my terminal program. Separately (and not shown on this diagram) another Python script updates the same dashboard with the current telemetry, using data from the habitat system.
Each time a key is pressed in the terminal window, the key character is sent to the gateway, which stores it ready for radio upload to the balloon tracker. Normally balloon trackers transmit all the time, but instead this tracker sits listening for an uplink, to which it replies immediately. So the gateway program reads the keys sent to it from the terminal program, adds them to a custom message, and sends the result to the tracker. Assuming the message arrives intact, the tracker replies with an ACK; any other reply (or no reply at all) results in the message being re-sent.
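That send/ACK/re-send loop is classic stop-and-wait; a minimal sketch of the logic (function names and message framing are assumptions, not the actual gateway code):

```python
# Sketch: function names and message framing are assumptions.
def send_reliable(message, transmit, receive, max_tries=5):
    """transmit(msg) sends one LoRa packet; receive() returns the reply
    text or None on timeout. Returns True once the tracker ACKs."""
    for _ in range(max_tries):
        transmit(message)
        if receive() == "ACK":
            return True        # tracker got the keystrokes intact
    return False               # caller can re-queue the keys
```

Because the tracker replies immediately to every uplink, the round-trip is short and a small retry count is enough in practice.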
At the tracker, these messages result in those key codes being sent to the telnetd program (telnet daemon) which is an installable program on Raspbian. That program provides a regular command interface – same as a login on a Pi using a keyboard and monitor – and any responses to those key codes are sent back from telnetd to the tracker program, where they are included in messages sent back to the ground.
Periodically, when the tracker receives an uplink it will reply with a telemetry string so that the gateway can upload the balloon position to habitat as usual. The string is standard except for the addition of some status information about the uplink. There’s also a timeout so that if no uplink is received for a while, telemetry is sent anyway (useful for tracking after landing).
Houston, We Have Another Idea
As the Apollo missions of the 1970s were a major part of my inspiration for my very first high altitude balloon launch, it seemed entirely appropriate to push the retro theme of this flight a stage further and try to replicate an Apollo mission control console. So I grabbed a suitable photograph from the web, edited it fairly heavily, and incorporated it into a new web server program that populates the screen with balloon telemetry and the terminal session. I wrote this in Delphi, with some Python to grab the telemetry from Habitat. I opted for a green-screen monitor, though I later noticed that the Apollo screens were actually white. I think green looks better!
This is what it looks like in action, minus the web dashboard:
LoRa packets are up to 255 bytes, so long sections of text are downloaded in chunks of a bit less than that length (there’s some overhead of course), so the terminal window is updated in chunks also. That window is screen-scraped every 1 second, and the results are pushed to the web dashboard at that rate or slower (depending on the time needed to post to the server). The following video shows the terminal window and dashboard, for a short session that includes logging in to the tracker, running a couple of basic Linux commands, and then starting the Colossal Caves adventure game.
Choice Of Frequency
We have a range of frequencies available to us for balloon flights, with different restrictions according to power and duty cycle. For this flight the duty cycle (proportion of time spent transmitting) is between 50% and near 100%, so I had to choose a frequency in a band that allows that.
It’s also important to choose a frequency that doesn’t have a lot of use from other devices. For those receiving on the ground, they may be near ISM (Industrial Scientific and Medical) devices such as oil level senders, weather stations etc, that can be bothersome if transmitting near the receiver. The flight though can potentially hear transmitters over 100’s of miles away, so about a year ago I did a test flight to scan the spectrum and report on the signal levels as received by the balloon. The results of that test showed that some frequencies are 15dB better (which is a lot) than others:
The quietest area does not allow 100% duty cycle (which is probably why it’s the quietest!) so I chose a frequency centred on the quietest area that does; namely 434.225MHz.
Normally I would use a PITS tracker, but as I didn’t need RTTY I decided to use a custom GPS+LoRa board, atop a Raspberry Pi model A+. Power came from an AA “emergency phone charger” with 4 Energizer Ultimate Lithium cells. The lid of this was firmly taped down with duct tape, and the cells held in place with double-sided pads, but even so the Pi rebooted when it landed. Not an issue but a reminder that soldered cells are best!
As usual, the lot went into a Hobbycraft polystyrene box, with GPS aerial on the top and a 1/4 wave aerial on the bottom, made from an SMA bulkhead plug and 5 pieces of guitar wire.
The conditions were favourable, without too much ground wind (makes it difficult to launch) or high level winds (can take the flight a long way away); I wanted the flight to stay fairly close to give the uplink the best chance of working throughout the flight.
First step was to get all the ground-station software started (LoRa gateway, Telnet terminal client, web server (for Apollo dashboard) and Python scripts for updating the two dashboards (Apollo and thedash.com) with telemetry and terminal data. I intended to use a gateway up on my Clark mast, for the best range, but that position is beyond the reach of my house wifi signal, and the TP-Link repeater I bought the day before completely failed to extend the signal far enough. Soon I’ll have my shed (next to the mast) wired to the house network, and that problem will go away. Meanwhile though, I had to go with the LoRa gateway in the house, using a short colinear aerial in the loft.
With the software all set up, I started the tracker, checked that the 2-way communications was all working as expected, and then filled the balloon. I chose a 350g Hwoyee, a 24″ Sphereachute, and a gas fill to have the flight land south of Monmouth. I needed to launch by about midday, as after that the flight would land further east, increasing the risk of a tree landing. The launch itself was easy, with little wind.
As mentioned, I only had a loft aerial to communicate with the flight, but that worked very well, both for uplink and downlink; very few missing packets on the uplink, and a last position from an altitude of 390 metres.
Once I got back to my PC after the launch, I typed a few commands into my remote terminal window, with the results then being relayed to the dashboard web pages along with telemetry:
The upper section shows the latest telemetry, as received either by my gateway or by at least one of the other gateways operated by the HAB community. The left side shows what the payload is reporting for messages it has received from the ground; the right side is basic GPS information.
The lower section shows a copy of the terminal window from my PC. There is some latency in the system, with screen updates relying on the approx 1400 bps downlink from the balloon, plus some delays as the terminal is polled then changes sent to the web server and distributed over the web. It was though entirely usable.
Next step was to run the text adventure “Colossal Cave”, which I remember playing around 1980. This can be installed on a Pi with
sudo apt-get install bsdgames
I’d already installed it (as the payload doesn’t have internet access!), so I just needed to run it by typing “adventure” into my terminal window.
One of the other balloonists on IRC asked if I could do some ASCII art, and conveniently I’d already installed figlet which does that!
Another request was to reboot the Pi, so I obliged:
I don’t have a screenshot from the live reboot, but I do have one from when I previously tested this on the ground:
As mentioned, this flight wasn’t expected to go far, so I left the chase until shortly before the flight landed. We had a last position at 390 metres altitude which is plenty good enough to then find a position within radio range of wherever the payload actually is. Here’s the path that the flight took:
We parked up near the last position, switched on our mobile LoRa gateway, and soon received a new position with the landing spot. This was close to the lane that we were on, so we parked opposite and took a look. The payload was hanging from a small tree in someone’s front garden. We rang the doorbell, several times, but nobody was in, so as the payload was very close we just grabbed it from the tree.
and took it back to the car:
So, a very successful flight, and though the remote terminal and game-playing was just a bit of fun, it did show just how reliable the LoRa uplink is, and I’ll use that for other purposes in forthcoming flights.
This was a simple flight, partly to try out a fisheye camera for the Pi Zero, partly to try streaming the launch to YouTube from a new camcorder, and partly to get a launch in while the weather is good for it!
I wrote previously about how to stream to YouTube from a D-SLR. That technique used the USB connection from the camera, but better quality is possible by feeding the HDMI output from a suitable camera into a HDMI to USB video capture device. However, many DSLRs superimpose focus rectangles and other items onto their HDMI output, which is not what we want. For some Canon DSLRs (but sadly not my EOS 760D!) some third-party firmware can make the HDMI output clean, so I needed to find another camera to use. One option was to buy a cheap/older Canon DSLR, but in the end I opted for a Panasonic V160 which has a 38x optical zoom, is small and very light (perhaps too light!), and can accept a larger than standard battery for longer run time (or can run from USB power). I paired this with a cheapish fluid head on a heavyish tripod.
For capturing the HDMI feed, I bought an AverMedia HDMI to USB capture device which had HDMI in and out sockets, and a USB connection to a PC. It’s really very very good, once you get past the rather odd software user-interface which wants to be a game rather than a program. The software can stream to various services including YouTube, and will authenticate to YouTube so you don’t have to mess around with URLs or video ID codes. Provided you have the uplink bandwidth (which for me means using 4G rather than the pathetically slow FTTC connection) then streaming is very smooth indeed, mainly as a result of the device doing H.264 compression internally.
I streamed the launch live to YouTube, and you can see the results here in my channel.
I made the tracker a few weeks ago, similar to this design but using a fisheye camera from Pi Hut, using an 868MHz LoRa module so I could send fairly large images down during the flight. As the range of these wideband transmissions is much shorter than for more normal settings, I set up a LoRa gateway connected to a high-gain Yagi antenna atop an ex-Army 12 metre Clark mast. This gateway uplinked messages to the flight, for re-sending any missing image packets, and separately another gateway listened only, from a 1/4 wave antenna through a filtered pre-amp.
For the flight, I removed the tracker from that case and placed it inside a foam plastic box from Hobbycraft, powered by a cheap AA*2 powerbank. However, this combination failed to gain a GPS lock, so I swapped the powerbank for a 4*AA model and used 2 cases taped together so I could keep the powerbank away from the GPS aerial (1/4 wave wire as in the photo above). The result had no problem at all getting a good GPS lock with plenty of satellites; however, the weight went up from 95g to 195g. The balloon was a 1600g Hwoyee filled with hydrogen.
The launch was delayed by the GPS issue and by waiting until I had some help, by which time the wind had gone from “nothing at all” to “mainly blustery”. So filling the balloon was fun, as can be seen on the video, and I had to take the balloon down to near some trees before launching it. After that, the flight itself went smoothly and pretty close to the prediction, albeit a bit higher, finally bursting at 43,014 metres (I expected 41-42km).
We tracked the flight in our chase car, both via the live map and also with direct reception using the 868MHz LoRa gateway in the dashboard. As I mentioned, range on 868 isn’t that good, so for a long time we had no direct data or images, but once we got in range reception was very good. We were about 10 minutes away from the flight when it landed, and got our last position when it was at about 775m altitude. We then tapped the predicted landing position (as sent by the tracker itself) into my phone and drove up to the landing spot easily. When we got there, Julie first spotted the payload and parachute just metres away from the road.
Even better, there was an open gate just behind where we parked, so a very very easy recovery.
My PITS software and LoRa Gateway software both allow for uplinks from ground to HAB; in this post I’m going to cover how to use the facility to fill in missing image packets.
Although LoRa has a very good range compared to RTTY for similar data rates, and in the UK we have a reasonably large LoRa receiver network for HABs, there are still situations where there might be enough packet loss to leave large holes in images downloaded via SSDV. This could be the case if you are flying a long way from receivers, or if you are using higher bandwidths where the range is compromised. In these instances an uplink can help, by asking the tracker to re-send missing packets.
How It Works
The way this is done is by dedicating a certain amount of time for uplinks; during this time the tracker stops transmitting and starts listening, meanwhile the gateway collects information from the SSDV server and uses that to build a message that it sends up to the tracker. The tracker then marks the requested SSDV packets as “Not Yet Sent” so they can then be re-sent.
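The bookkeeping on the tracker side can be modelled roughly like this (a toy sketch to show the idea, not the actual tracker code):

```python
# A toy model of the resend bookkeeping described above -- a sketch,
# not the actual tracker code.  Each SSDV packet carries a "sent"
# flag; an uplinked resend request simply clears the flag, so the
# normal transmit loop sends the packet again.
class SSDVQueue:
    def __init__(self, packet_count):
        self.sent = [False] * packet_count

    def mark_sent(self, packet_id):
        self.sent[packet_id] = True

    def handle_resend_request(self, packet_ids):
        # Mark the requested packets as "Not Yet Sent" again
        for pid in packet_ids:
            self.sent[pid] = False

    def next_to_send(self):
        # The transmit loop just picks the first unsent packet
        for pid, done in enumerate(self.sent):
            if not done:
                return pid
        return None
```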
Normally I set the system to a 1-minute cycle, with the tracker listening from 0s to 5s past the minute, and the gateway transmitting at the 2s mark.
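That timing can be sketched in Python (the names here are illustrative, not actual PITS or gateway settings):

```python
import time

# A sketch of the 1-minute cycle described above; these names are
# illustrative, not actual PITS/gateway configuration settings.
CYCLE = 60         # seconds per cycle
LISTEN_START = 0   # tracker starts listening at 0s past the minute...
LISTEN_END = 5     # ...and resumes transmitting at 5s
UPLINK_AT = 2      # gateway transmits its uplink at the 2s mark

def tracker_listening(epoch=None):
    """True while the tracker should be in receive mode."""
    second = int(time.time() if epoch is None else epoch) % CYCLE
    return LISTEN_START <= second < LISTEN_END

def gateway_transmitting(epoch=None):
    """True during the second in which the gateway sends its uplink."""
    second = int(time.time() if epoch is None else epoch) % CYCLE
    return second == UPLINK_AT
```

This is also why both ends need a decent time reference: the tracker and gateway never negotiate, they simply agree on the wall clock.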
Frequencies and LoRa Mode
The requirements for the uplink are different to those for the downlink, so it may be a good idea to use a different frequency, bandwidth etc. For downlinks we need to use a frequency band where we can transmit for (nearly) 100% of the time; for the uplink we only transmit for (depending on settings) 1% or less of the time, and generally that allows for use of a different band where higher powers are allowed. This is good because the HAB is in a relatively noisy environment (it can potentially hear signals from a large circle on the ground below it) so extra power is needed to overcome that noise.
This is an example configuration for an 868MHz module on channel CE0:
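I haven’t pasted my actual settings file here, but the tracker side looks broadly like the sketch below. The key names are illustrative rather than the exact pisky.txt syntax (check the PITS documentation for that), and the downlink values are just examples:

```
# Illustrative sketch only -- key names and downlink values may differ
# from the real pisky.txt settings; see the PITS documentation
LORA_Frequency_0=869.85        # downlink frequency for the CE0 module (example)
LORA_Mode_0=3                  # wideband downlink settings for SSDV (example)
LORA_Cycle_0=60                # 1-minute uplink cycle...
LORA_Listen_0=5                # ...listening for the first 5 seconds
LORA_Uplink_Frequency_0=869.5  # uplink in the 500mW / 10% duty-cycle band
LORA_Uplink_Mode_0=6           # LoRa settings used for the uplink
```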
This sets the tracker to listen for the first 5 seconds of each minute, and transmit the rest of the time. The uplink frequency (which it listens on during those 5 seconds) is set to 869.5MHz which is near the centre of a band that IR2030 allows 500mW transmissions at up to 10% duty cycle. Mode 6 is a convenient set of LoRa settings for the uplink.
This is an example configuration for the gateway, again with an 868MHz module on channel CE0:
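As before, the key names below are illustrative rather than the exact gateway.txt syntax (check the LoRa gateway documentation); the values match the tracker settings described above:

```
# Illustrative sketch only -- key names may differ from the real
# gateway.txt settings; see the LoRa gateway documentation
frequency_0=869.85       # downlink frequency, matching the tracker (example)
mode_0=3                 # downlink LoRa settings, matching the tracker (example)
UplinkTime=2             # transmit at the 2-second mark...
UplinkCycle=60           # ...of each 1-minute cycle
UplinkFrequency=869.5    # matches the tracker's listen frequency
UplinkMode=6             # matches the tracker's uplink mode
SSDVUplink=Y             # enables the SSDV uplink
```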
This sets the gateway to transmit at the 2-second mark after each minute, and to listen the rest of the time. The uplink frequency is set to 869.5MHz, and the mode to 6, to match the tracker settings above. The last line enables the SSDV uplink.
Since the uplink system uses the current time to decide when to transmit or listen, the gateway needs to have an accurate time reference – e.g. GPS or an NTP server.
List Of Missing Packets
The gateway source includes a Python script ssdv_resend.py that is simply run like so:
python3 ssdv_resend.py PISKY
The “PISKY” is the ID of the payload, and should be no more than 6 characters (the SSDV server truncates longer payload IDs). The script periodically creates a “missing packets” file that the gateway reads, sends to the tracker, and then deletes.
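Under the hood, the script periodically asks the SSDV server which packets it has for the current image, and works out the gaps. The core gap-finding idea can be sketched like this (a simplified illustration with made-up data, not the actual script):

```python
def missing_packets(received, highest):
    """Return the packet IDs from 0..highest absent from 'received'.

    'received' stands in for the list of SSDV packet IDs that the
    server has for the current image; this mirrors the idea of the
    resend script, not its actual code.
    """
    return sorted(set(range(highest + 1)) - set(received))

# e.g. packets 2 and 5 were lost in transit:
print(missing_packets([0, 1, 3, 4, 6], 6))  # -> [2, 5]
```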
As well as having the standard LoRa gateway installation, you will also need Python 3 installed; on Raspbian this is just:
sudo apt-get install python3
If you’ve made or want to make a lightweight Pi Zero HAB tracker using my guide, then you’ll need to sort out power for the thing (this is something that we do for you in the PITS Zero product).
Many people use standard USB powerbanks to power a Pi for a few hours, but these things have LiPo cells inside and thus don’t like getting cold. They can be used for a HAB flight with enough insulation, especially with something warm (like an action camera) in the payload, but if you want to make a small tracker then you really need to use Energizer Lithium cells which are rated for very cold temperatures.
There do exist some powerbanks, sold as “emergency phone chargers”, that accept 2 or 4 AA cells. I’ve tested a few, and they don’t supply a lot of current (typically 400-500mA max) but that’s enough for a Pi Zero tracker. Models with 2 AAs will run such a tracker for 6 hours or so from Lithium AAs, which is enough to track a complete flight if you’re reasonably proficient at tracking; this is not an option I recommend for newbies.
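As a rough sanity check on that 6-hour figure, here’s a back-of-envelope estimate; every number in it is an assumption (typical cell energy, converter efficiency, tracker load), not a measurement:

```python
# All figures here are rough assumptions, not measurements
cell_energy_wh = 4.5        # approx. energy in one Lithium AA cell
cells = 2
converter_efficiency = 0.8  # typical for a cheap boost converter
load_w = 5.0 * 0.2          # a Pi Zero tracker drawing ~200mA at 5V

runtime_h = cells * cell_energy_wh * converter_efficiency / load_w
print(round(runtime_h, 1))  # -> 7.2, in the right ballpark for ~6 hours
```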
To start with, buy a suitable AA emergency charger, like this one:
The case has similar height and width to a Pi Zero case, so is a good choice. Open it up to reveal the battery compartment:
On the model I bought, the two halves are clipped together rather than welded or screwed, and can be separated carefully (but firmly) with fingers; no need for tools.
The circuit board is not fixed, so just lift it out. You now need to remove the USB socket (to make it easier to solder wires to the 5V output on the board), though you could leave it in place if you wish. I also removed the white LED. Here are two boards, one before and one after the modifications:
You now need to solder 2 wires, preferably red for +5V and black for 0V, to the vacated holes left by the USB socket; the one on the far left is +5V and far right is 0V:
Now fold the wires flat to the bottom of the board, keeping red on the left and black on the right, and place the board back in the case:
I’ve run the wires through the hole that the button for the on/off switch was in; if you want to retain the button then make a hole in the case for the wires; if not then you can still operate the switch with a small screwdriver.
Now clip the other half of the case on, flip the powerbank over, and place it next to your Pi Zero case like so:
If like me you already made the Pi Zero tracker, then it’s easier to solder to the underside of the Pi than the top side, so tin the 3 power holes (the 3 at the bottom, in the row on the right), cut the wires to length and tin those too:
And solder the wires to the holes like so (red goes to the end 2 holes; black to the next one):
As a final touch, glue or use double-sided tape to fix the battery holder to the Pi case, remembering that the battery holder has a cover for the battery compartment that wraps round to the back of the case.