This is to commemorate the very first untethered spacewalk, by Bruce McCandless on 7th February 1984 as part of Space Shuttle mission STS-41B, when he used an MMU (Manned Maneuvering Unit) to fly up to 300′ away from the Space Shuttle Challenger. My aim is to replicate, as best I can, the famous photograph of him floating in space …
Sadly, Mr McCandless died late last year.
Of course the “untethered” aspect can’t be repeated under a balloon, gravity being what it is, but careful use of black supports against a black sky will make it look untethered.
Fortunately, Revell USA made a combined astronaut and MMU kit in 1984. Unfortunately they soon stopped making it, and examples are expensive and fairly rare. I watched listings on ebay for a few months, but all were in the USA with expensive postage, until one popped up in the UK. Not only was this the least expensive I’d seen, with reasonable postage, but also it was a completely original sample with the parts still in sealed plastic wrappers.
It’s probably 45 years or more since I assembled a plastic kit, so I had to buy the glue, paints and brushes before I started. Assembly wasn’t difficult though the plastic in general was much thinner than the small Airfix kits that I remember from my childhood.
The flight will include 2 LoRa downlinks – one in the 868MHz ISM band (more bandwidth for larger images) and one in the 434MHz ISM band (better range).
I want to be able to take photographs from different viewpoints, ideally:
Straight shot from distance
One option would be to move the camera around with motors, but that would be delicate and likely to fail during flight. Instead I’ve opted for 3 separate cameras. This could possibly be done from a single Pi using USB webcams, but in my testing those aren’t reliable, and they’re not as good quality as a Pi camera either.
Another idea is to use a separate Pi for each camera. I could then build 3 separate trackers, but for them all to use the 868MHz band I would need to have them take turns transmitting. All do-able but a bit messy, plus there would be a lot of aerials!
So instead, I decided to have one central Pi that has a camera, GPS and radio transmitters, plus 2 extra Pi boards just with cameras. Networking 3 Pis could be done with a network switch and cabling, but wireless is a lighter option. Recent Pis have built-in wireless networking (which saves a bit more weight, and is more reliable in my experience) so I settled on a Pi 3 as the tracker and access point, and 2 Pi Zero W boards as clients. So that’s 3 Pi boards in total and 3 cameras.
Setting up a Pi 3 as an access point takes quite a few steps, especially when bridging the wireless LAN to the wired LAN, but there are clear instructions on the RPi web site.
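As a rough sketch of the end result (the RPi instructions are the authoritative source, and the SSID and passphrase here are placeholders), the heart of it is a hostapd configuration that puts wlan0 into a bridge with the wired interface:

```
# /etc/hostapd/hostapd.conf -- minimal bridged access point (example values)
interface=wlan0
bridge=br0
ssid=HAB-NET
hw_mode=g
channel=7
wpa=2
wpa_passphrase=ChangeMe123
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```

The bridge br0 itself is set up separately (e.g. in /etc/network/interfaces or with bridge-utils), which is where most of the steps in the RPi instructions go.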
I needed to modify my PITS software to cope with 3 cameras. Normally, the tracker program (which is the one transmitting image packets) requests new images periodically according to the schedule in the configuration file, and then chooses the “best” image and requests a conversion from JPG to SSDV format shortly before it finishes transmitting the current image. One option was to modify this code to request 3 photos instead of 1, with 2 of those taken on the Pi Zeroes, and of course to cycle between the cameras for conversion and transmission. Separately, a bash script takes photos and does the conversions.
I felt it would be simpler to remove some of this responsibility from the tracker, so that it just chooses which photo to send, choosing each time from a different camera. So this means that we need a simple script to take photographs, and a simple script to do the conversion to SSDV. The first of these scripts is run on each Pi, with the Pi Zero scripts also copying the photo files to the Pi 3.
Here’s the SSDV page showing the result, on the 434MHz channel (smaller images), with the Pi 3 cycling through all 3 cameras:
Weather permitting, I’ll fly this on February 7th, 34 years to the day after the original flight.
This was a fun flight that provided a remote serial terminal on a Pi, between the ground and a high-altitude balloon, using a bi-directional radio link.
Most high altitude balloon flights use a simple unidirectional data stream from the balloon to the ground, sending the telemetry (balloon position and sensor data) and sometimes images too. Most often this is RTTY or (in the USA) APRS, but there are alternatives such as LoRa, which more easily provides a means of reliably transmitting data to the balloon as well as from it. This greatly expands the range of things we can do during a balloon flight, for example:
A ground station can request re-sends down to the ground of missing data (image data or anything else) – see http://www.daveakerman.com/?p=2195
A balloon can repeat data from other balloons, which might be flying or have landed – see http://www.daveakerman.com/?p=1850
Uplink to request cutdowns
Uplink to provide a guided parachute or parafoil with a new target landing position
You are in a maze of twisty little passages
Another possibility is to run a terminal session between ground (client) and balloon (host), allowing programs to be run on the balloon tracker as requested by a ground station:
This could even be used to change the tracker program, or have that program use new configuration parameters. Here though I’m going to use an idea provided by Philip Heron – run an old text adventure game. And to make this a group experience, I added a web dashboard that displays the terminal window in real (ish) time. The following diagram shows how this is achieved in software:
The LoRa gateway is the standard release with modifications added to provide a server socket on a specified port, to which any network terminal program (e.g. putty) can connect. In this case I have written a simple terminal program, in Delphi, that screen-scrapes the terminal window and sends the contents to a Python script, which then updates a web dashboard so that anyone with the URL can see what I see in my terminal program. Separately (and not shown on this diagram) another Python script updates the same dashboard with the current telemetry, using data from the habitat system.
Each time that a key is pressed in the terminal window, the key character is sent to the gateway, which stores it ready for radio upload to the balloon tracker. Normally balloon trackers transmit all the time, but this tracker instead sits listening for an uplink, to which it replies immediately. So the gateway program reads the keys sent to it from the terminal program, adds them to a custom message, and sends the result to the tracker. Assuming the message arrives intact, the tracker replies with an ACK; any other reply (or no reply at all) results in the message being re-sent.
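The re-send logic is simple enough to sketch. This is my own illustration rather than the actual gateway source, with the radio abstracted away as transmit/receive functions:

```python
# Sketch of the gateway's send-with-ACK loop (names are mine, not from
# the real gateway code): keep re-sending a message until the tracker
# replies with an ACK, or we give up.

def send_with_retry(message, transmit, receive, max_tries=5):
    """transmit(msg) sends one radio packet; receive() returns the reply,
    or None on timeout. Returns the number of attempts used."""
    for attempt in range(1, max_tries + 1):
        transmit(message)
        reply = receive()
        if reply == b"ACK":
            return attempt
    raise RuntimeError("no ACK after %d attempts" % max_tries)

# Example with a fake radio whose first two replies are lost/garbled:
replies = iter([None, b"NAK", b"ACK"])
sent = []
tries = send_with_retry(b"KEYS:ls\n", sent.append, lambda: next(replies))
```

In the real system the tracker end does the mirror image: listen, and reply immediately with an ACK when a message arrives intact.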
At the tracker, these messages result in those key codes being sent to the telnetd program (telnet daemon) which is an installable program on Raspbian. That program provides a regular command interface – same as a login on a Pi using a keyboard and monitor – and any responses to those key codes are sent back from telnetd to the tracker program, where they are included in messages sent back to the ground.
Periodically, when the tracker receives an uplink it will reply with a telemetry string so that the gateway can upload the balloon position to habitat as usual. The string is standard except for the addition of some status information about the uplink. There’s also a timeout so that if no uplink is received for a while, telemetry is sent anyway (useful for tracking after landing).
Houston, We Have Another Idea
As the Apollo missions of the 1970s were a major part of my inspiration for my very first high altitude balloon launch, it seemed entirely appropriate to push the retro theme of this flight a stage further and try to replicate an Apollo mission control console. So I grabbed a suitable photograph from the web, edited it fairly heavily, and incorporated it into a new web server program that populates the screen with balloon telemetry and the terminal session. I wrote this in Delphi, with some Python to grab the telemetry from Habitat. I opted for a green-screen monitor though (I later noticed) the actual Apollo screens were white. I think green looks better!
This is what it looks like in action, minus the web dashboard:
LoRa packets are up to 255 bytes, so long sections of text are downloaded in chunks of a bit less than that length (there’s some overhead of course), so the terminal window is updated in chunks also. That window is screen-scraped every 1 second, and the results are pushed to the web dashboard at that rate or slower (depending on the time needed to post to the server). The following video shows the terminal window and dashboard, for a short session that includes logging in to the tracker, running a couple of basic Linux commands, and then starting the Colossal Caves adventure game.
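The chunking itself is trivial; here’s an illustrative sketch (the 10-byte overhead figure is just an assumption for the example):

```python
# Split terminal output into LoRa-sized chunks. LoRa packets are up to
# 255 bytes; assume ~10 bytes of header overhead for illustration.

MAX_PACKET = 255
OVERHEAD = 10  # illustrative header size

def chunk_text(data, payload_size=MAX_PACKET - OVERHEAD):
    """Return the data split into payload_size-byte chunks, in order."""
    return [data[i:i + payload_size] for i in range(0, len(data), payload_size)]

chunks = chunk_text(b"x" * 1000)
```

Each chunk is then sent as one downlink packet, and the terminal window on the ground is updated one chunk at a time.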
Choice Of Frequency
We have a range of frequencies available to us for balloon flights, with different restrictions according to power and duty cycle. For this flight the duty cycle (proportion of time spent transmitting) is between 50% and near 100%, so I had to choose a frequency in a band that allows that.
It’s also important to choose a frequency that doesn’t have a lot of use from other devices. Those receiving on the ground may be near ISM (Industrial Scientific and Medical) devices such as oil level senders, weather stations etc., which can be bothersome if transmitting near the receiver. The flight itself, though, can potentially hear transmitters from 100’s of miles away, so about a year ago I did a test flight to scan the spectrum and report on the signal levels as received by the balloon. The results of that test showed that some frequencies are 15dB (a factor of about 30 in power, which is a lot) better than others:
The quietest area does not allow 100% duty cycle (which is probably why it’s the quietest!) so I chose a frequency centred on the quietest area that does; namely 434.225MHz.
Normally I would use a PITS tracker, but as I didn’t need RTTY I decided to use a custom GPS+LoRa board, atop a Raspberry Pi model A+. Power came from an AA “emergency phone charger” with 4 Energizer Ultimate Lithium cells. The lid of this was firmly taped down with duct tape, and the cells held in place with double-sided pads, but even so the Pi rebooted when it landed. Not an issue but a reminder that soldered cells are best!
As usual, the lot went into a Hobbycraft polystyrene box, with GPS aerial on the top and a 1/4 wave aerial on the bottom, made from an SMA bulkhead plug and 5 pieces of guitar wire.
The conditions were favourable, without too much ground wind (makes it difficult to launch) or high level winds (can take the flight a long way away); I wanted the flight to stay fairly close to give the uplink the best chance of working throughout the flight.
First step was to get all the ground-station software started (LoRa gateway, Telnet terminal client, web server (for Apollo dashboard) and Python scripts for updating the two dashboards (Apollo and thedash.com) with telemetry and terminal data. I intended to use a gateway up on my Clark mast, for the best range, but that position is beyond the reach of my house wifi signal, and the TP-Link repeater I bought the day before completely failed to extend the signal far enough. Soon I’ll have my shed (next to the mast) wired to the house network, and that problem will go away. Meanwhile though, I had to go with the LoRa gateway in the house, using a short colinear aerial in the loft.
With the software all set up, I started the tracker, checked that the 2-way communications were all working as expected, and then filled the balloon. I chose a 350g Hwoyee, a 24″ Sphereachute, and a gas fill to have the flight land south of Monmouth. I needed to launch by about midday, as after that the flight would land further east, increasing the risk of a tree landing. The launch itself was easy, with little wind.
As mentioned, I only had a loft aerial to communicate with the flight, but that worked very well, both for uplink and downlink; very few missing packets on the uplink, and a last position from an altitude of 390 metres.
Once I got back to my PC after the launch, I typed a few commands into my remote terminal window, with the results then being relayed to the dashboard web pages along with telemetry:
The upper section shows the latest telemetry, as received either by my gateway or by at least one of the other gateways operated by the HAB community. The left side shows what the payload is reporting for messages it has received from the ground; the right side is basic GPS information.
The lower section shows a copy of the terminal window from my PC. There is some latency in the system, with screen updates relying on the approx 1400 bps downlink from the balloon, plus some delays as the terminal is polled then changes sent to the web server and distributed over the web. It was though entirely usable.
Next step was to run the text adventure “Colossal Cave”, which I remember playing around 1980. This can be installed on a Pi with
sudo apt-get install bsdgames
I’d already installed it (as the payload doesn’t have internet access!), so I just needed to run it by typing “adventure” into my terminal window.
One of the other balloonists on IRC asked if I could do some ASCII art, and conveniently I’d already installed figlet which does that!
Another request was to reboot the Pi, so I obliged:
I don’t have a screenshot from the live reboot, but I do have one from when I previously tested this on the ground:
As mentioned, this flight wasn’t expected to go far, so I left the chase until shortly before the flight landed. We had a last position at 390 metres altitude which is plenty good enough to then find a position within radio range of wherever the payload actually is. Here’s the path that the flight took:
We parked up near the last position, switched on our mobile LoRa gateway, and soon received a new position with the landing spot. This was close to the lane that we were on, so we parked opposite and took a look. The payload was hanging from a small tree in someone’s front garden. We rang the doorbell, several times, but nobody in so as the payload was very close we just grabbed it from the tree.
and took it back to the car:
So, a very successful flight, and though the remote terminal and game-playing was just a bit of fun, it did show just how reliable the LoRa uplink is, and I’ll use that for other purposes in forthcoming flights.
This was a simple flight, partly to try out a fisheye camera for the Pi Zero, partly to try streaming the launch to YouTube from a new camcorder, and partly to get a launch in while the weather is good for it!
I wrote previously about how to stream to YouTube from a DSLR. That technique used the USB connection from the camera, but better quality is possible by feeding the HDMI output from a suitable camera into a HDMI to USB video capture device. However, many DSLRs superimpose focus rectangles and other items onto their HDMI output, which is not what we want. For some Canon DSLRs (but sadly not my EOS 760D!) some third-party firmware can make the HDMI output clean, so I needed to find another camera to use. One option was to buy a cheap/older Canon DSLR, but in the end I opted for a Panasonic V160 which has a 38x optical zoom, is small and very light (perhaps too light!), and can accept a larger than standard battery for longer run time (or can run from USB power). I paired this with a cheapish fluid head on a heavyish tripod.
For capturing the HDMI feed, I bought an AverMedia HDMI to USB capture device which had HDMI in and out sockets, and a USB connection to a PC. It’s really very very good, once you get past the rather odd software user-interface which wants to be a game rather than a program. The software can stream to various services including YouTube, and will authenticate to YouTube so you don’t have to mess around with URLs or video ID codes. Provided you have the uplink bandwidth (which for me means using 4G rather than the pathetically slow FTTC connection) then streaming is very smooth indeed, mainly as a result of the device doing H.264 compression internally.
I streamed the launch live to YouTube, and you can see the results here in my channel.
I made the tracker a few weeks ago, similar to this design but using a fisheye camera from Pi Hut, using an 868MHz LoRa module so I could send fairly large images down during the flight. As the range of these wideband transmissions is much shorter than for more normal settings, I set up a LoRa gateway connected to a high-gain Yagi antenna atop an ex-Army 12 metre Clark mast. This gateway uplinked messages to the flight, for re-sending any missing image packets, and separately another gateway listened only, from a 1/4 wave antenna through a filtered pre-amp.
For the flight, I removed the tracker from that case and placed it inside a foam plastic box from Hobbycraft, powered by a cheap AA*2 powerbank. However, this combination failed to gain a GPS lock, so I swapped the powerbank for a 4*AA model and used 2 cases taped together so I could keep the powerbank away from the GPS aerial (1/4 wave wire as in the photo above). The result had no problem at all getting a good GPS lock with plenty of satellites, however the weight went up from 95g to 195g. Balloon was a 1600g Hwoyee with hydrogen.
The launch was delayed by the GPS issue and wanting to wait until I had some help, by which time the wind had gone from “nothing at all” to “mainly blustery”. So filling the balloon was fun, as can be seen on the video, and I had to take the balloon down to near some trees before launching it. After that, the flight itself went smoothly, and pretty close to the prediction, albeit a bit higher, finally bursting at 43,014 metres (I expected 41-42km).
We tracked the flight in our chase car, both via the live map and also with direct reception using the 868MHz LoRa gateway in the dashboard. As I mentioned, range on 868 isn’t that good, so for a long time we had no direct data or images, but once we got in range reception was very good. We were about 10 minutes away from the flight when it landed, and got our last position when it was at about 775m altitude. We then tapped the predicted landing position (as sent by the tracker itself) into my phone and drove up to the landing spot easily. When we got there, Julie first spotted the payload and parachute just metres away from the road.
Even better, there was an open gate just behind where we parked, so a very very easy recovery.
My PITS software and LoRa Gateway software both allow for uplinks from ground to HAB; in this post I’m going to cover how to use the facility to fill in missing image packets.
Although LoRa has a very good range compared to RTTY at similar data rates, and in the UK we have a reasonably large LoRa receiver network for HABs, there are still situations where there might be enough packet loss to leave large holes in images downloaded via SSDV. This could be the case if you are flying a long way from receivers, or if you are using higher bandwidths where the range is compromised. In these instances an uplink can help, by asking the tracker to re-send missing packets.
How It Works
The way this is done is by dedicating a certain amount of time to uplinks; during this time the tracker stops transmitting and starts listening, meanwhile the gateway collects information from the SSDV server, and uses that to build a message that it sends up to the tracker. The tracker then marks the requested SSDV packets as “Not Yet Sent” so they can then be re-sent.
Normally I set the system to a 1-minute cycle, with the tracker listening from 0s to 5s past the minute, and the gateway transmitting at the 2s mark.
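That cycle is easy to sketch in code; these helper names are mine, not from the actual tracker or gateway source:

```python
# Sketch of the 1-minute uplink cycle: the tracker listens during the
# first few seconds of each minute, and the gateway transmits at the
# 2-second mark.

LISTEN_FROM, LISTEN_TO = 0, 5   # seconds past the minute
UPLINK_AT = 2                   # gateway transmit mark

def tracker_should_listen(epoch_seconds):
    return LISTEN_FROM <= epoch_seconds % 60 < LISTEN_TO

def gateway_should_transmit(epoch_seconds):
    return epoch_seconds % 60 == UPLINK_AT
```

Since both ends derive their schedule from the clock rather than from each other, this only works if both clocks are accurate (the tracker has GPS time; the gateway needs GPS or NTP).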
Frequencies and LoRa Mode
The requirements for the uplink are different to those for the downlink, so it may be a good idea to use a different frequency, bandwidth etc. For downlinks we need to use a frequency band where we can transmit for (nearly) 100% of the time; for the uplink we only transmit for (depending on settings) 1% or less of the time, and generally that allows for use of a different band where higher powers are allowed. This is good because the HAB is in a relatively noisy environment (it can potentially hear signals from a large circle on the ground below it) so extra power is needed to overcome that noise.
This is an example tracker configuration for an 868MHz module on channel CE0:
This sets the tracker to listen for the first 5 seconds of each minute, and transmit the rest of the time. The uplink frequency (which it listens on during those 5 seconds) is set to 869.5MHz which is near the centre of a band that IR2030 allows 500mW transmissions at up to 10% duty cycle. Mode 6 is a convenient set of LoRa settings for the uplink.
This is an example gateway configuration for an 868MHz module on channel CE0:
This sets the gateway to transmit at the 2-second mark after each minute, and to listen the rest of the time. The uplink frequency is set to 869.5MHz, and the mode to 6, to match the tracker settings above. The last line enables the SSDV uplink.
Since the uplink system uses the current time to decide when to transmit or listen, the gateway needs to have an accurate time reference – e.g. GPS or an NTP server.
List Of Missing Packets
The gateway source includes a Python script ssdv_resend.py that is simply run like so:
python3 ssdv_resend.py PISKY
The “PISKY” is the ID of the payload, and should be no more than 6 characters (the SSDV server truncates longer payload IDs). The script periodically creates a “missing packets” file that the gateway reads, sends to the tracker, and then deletes.
As well as having the standard LoRa gateway installation, you will also need Python 3 installed:
sudo apt-get install python3
If you’ve made or want to make a lightweight Pi Zero HAB tracker using my guide, then you’ll need to sort out power for the thing (this is something that we do for you in the PITS Zero product).
Many people use standard USB powerbanks to power a Pi for a few hours, but these things have LiPo cells inside and thus don’t like getting cold. They can be used for a HAB flight with enough insulation, especially with something warm (like an action camera) in the payload, but if you want to make a small tracker then you really need to use Energizer Lithium cells which are rated for very cold temperatures.
There do exist some powerbanks, sold as “emergency phone chargers”, that accept 2 or 4 AA cells. I’ve tested a few, and they don’t supply a lot of current (typically 400-500mA max) but that’s enough for a Pi Zero tracker. Models with 2 AAs will run such a tracker for 6 hours or so from Lithium AAs, which is enough to track a complete flight if you’re reasonably proficient at tracking; this is not an option I recommend for newbies.
To start with, buy a suitable AA emergency charger, like this one:
The case has similar height and width to a Pi Zero case, so is a good choice. Open it up to reveal the battery compartment:
On the model I bought the two halves are clipped together and are not welded or screwed, and can be separated carefully (but forcibly) with fingers; no need for tools.
The circuit board is not fixed, so just lift it out. You now need to remove the USB socket (to make it easier to solder wires to the 5V output on the board), though you could leave it in place if you wish. I also removed the white LED. Here are 2 boards, one before and one after:
You now need to solder 2 wires, preferably red for +5V and black for 0V, to the vacated holes left by the USB socket; the one on the far left is +5V and far right is 0V:
Now fold the wires flat to the bottom of the board, keeping red on the left and black on the right, and place the board back in the case:
I’ve run the wires through the hole that the button for the on-off switch was in; if you want to retain the button then make a hole in the case for the wires; if not then you can still use the switch with a small screwdriver.
Now clip the other half of the case on, flip the powerbank over, and place it next to your cased Pi Zero like so:
If like me you already made the Pi Zero tracker, then it’s easier to solder to the underside of the Pi than the top side, so tin the 3 power holes (the 3 at the bottom, in the row on the right), cut the wires to length and tin those too:
And solder the wires to the holes like so (red goes to the end 2 holes; black to the next one):
As a final touch, glue or use double-sided tape to fix the battery holder to the Pi case, remembering that the battery holder has a cover for the battery compartment that wraps round to the back of the case.
When following a flight – whether it’s your own or someone else’s – there are some great online tools to see what’s going on. The main one of course is the map, but there are also pages for showing live images and for displaying sensor values graphically.
HAB flights in the UK in particular are often community things, with the launch and chase teams keeping in touch via the #highaltitude IRC chatroom. This is useful both for keeping other balloonists up to date with what’s happening (e.g. letting them know the launch is delayed) or for asking for help when recovering the flight. Some launchers also provide video streams from the launch and sometimes the chase and recovery too, and all of this helps with the community spirit – balloonists sometimes dedicate hours to help receive data from a flight, so providing them with these extra things is a form of payback for their efforts.
For a while now I’ve wanted to add a custom dashboard page for my flights, to combine elements that are of interest at that stage of the flight in one simple screen. This screen needs to change during the flight – for example the map isn’t of much interest before launch, and the launch video isn’t useful after the launch. So I’ve designed 4 custom dashboard screens for my next flight, for launch, ascent, descent and landing.
Here I’ve added a basic description of the flight, current time, a video from the launch site, and a list of status updates.
The video is a live feed, using the “embed” URL from YouTube. For now this links to a recorded video, but for the actual launch it will be streamed live of course.
The “Status Updates” list was originally intended to be a Twitter feed, and I wrote a Python script to generate the tweets automatically from the telemetry downloaded from the habhub server. However as I said earlier, the Twitter widgets on thedash.com are not live (one is supposed to be but is currently broken due to changes at the Twitter end), so I replaced the Twitter feed with a live table widget. That table has data pushed at it from a Python script that grabs the current telemetry from habhub, decides if anything interesting has changed, and if so generates a suitable message which it pushes to thedash using the Python requests library.
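As a sketch of the “has anything interesting changed?” part of that script (the thresholds and wording here are illustrative, not the real script’s):

```python
# Decide whether new telemetry warrants a status update. prev/curr are
# dicts with an 'altitude' key in metres; prev is None for the first
# telemetry seen. Thresholds are illustrative.

def status_message(prev, curr):
    """Return a status string to push to the dashboard, or None."""
    if prev is None:
        return "Telemetry received: altitude %dm" % curr["altitude"]
    if curr["altitude"] - prev["altitude"] > 1000:
        return "Climbed through %dm" % curr["altitude"]
    if prev["altitude"] - curr["altitude"] > 1000:
        return "Descending through %dm" % curr["altitude"]
    return None
```

The returned message, when there is one, is what then gets pushed to the live table widget with the Python requests library.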
For this part of the flight, the launch video stream is no longer of interest, but the map is:
Normally there would be no point having a separate dashboard for the descent phase, but for this flight there will be some extra things happening, so I intend to add some extra items to this screen soon. For now it just shows the altitude as well as the usual map and status.
This is more interesting as I intend to stream the landing video on this flight. The map is also needed (so we can see the approach to the landing spot). I’ve also added some servo positions which should be some clue as to what’s going on.
I’ve had a few emails asking how to get camera images off the PITS SD card, and have seen flights appear on the live image page with poorly configured image settings, so I’m going to cover both of these subjects in this article.
The image-taking is quite flexible, and the software comes pre-configured with options that are suitable for the majority of flights, but still you may wish to adjust the settings yourself.
The camera is controlled by a script (~/pits/tracker/camera) which runs a basic loop to take photos for the radio transmitter and also for permanent storage. The script looks for and runs separate small scripts for the RTTY channel (the standard transmitter on the PITS board), for LoRa (if you have the LoRa add-on board) and for permanent storage but not transmission. These small scripts are created by the main tracker program (~/pits/tracker/tracker) according to settings in the configuration file (/boot/pisky.txt). There is some basic information in the manual and online but I will go into more depth here.
The camera software uses a simple scheduler to take a photograph of a certain resolution every n seconds. The resolution can be set separately for low or high altitudes – there’s no point in transmitting large images before the balloon is launched!
There are several such schedules running independently, so you can separately control the rate of image taking for RTTY, LoRa etc. This is useful because for example LoRa typically transmits data more quickly than RTTY does, so you might want there to be more frequent LoRa images than RTTY.
The next thing to remember is that although it’s possible to take a photograph every few seconds, it takes a lot longer to transmit them. The transmission time depends on various factors but the main one under your control is of course the image size in pixels; do not make the mistake of one team who used images so large that it didn’t have time to fully transmit even one image!
Since it is possible to take images much more quickly than transmit them, the software is able to choose what it considers to be the “best” of recent images for transmission. Suppose that it takes 5 minutes to transmit an image, and one is taken every 30 seconds; that gives the software 10 images to choose from. The “algorithm” (if I can call it that) is simply to select the largest JPG file out of those images, and this works surprisingly well in selecting good images and not selecting poor ones (e.g. those pointing at the black sky above).
Having selected an image to transmit, that image and all the ones it rejected are moved to a dated folder, so they aren’t later considered for transmission. This means that (using the above example) the next image will be chosen from the next 10 photographs. So, at any point, the transmitted image will only be a few minutes old.
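The selection and archiving can be sketched like this (my own illustration of the behaviour, not the actual tracker code):

```python
# Pick the largest JPG in a folder as the "best" image, then move the
# whole batch to a dated archive folder so none of them are considered
# again. Returns the archived path of the chosen image, or None.
import os
import shutil
import time

def choose_and_archive(folder, archive_root):
    jpgs = [os.path.join(folder, f) for f in os.listdir(folder)
            if f.lower().endswith(".jpg")]
    if not jpgs:
        return None
    best = max(jpgs, key=os.path.getsize)          # largest file wins
    dated = os.path.join(archive_root, time.strftime("%Y-%m-%d"))
    os.makedirs(dated, exist_ok=True)
    for f in jpgs:                                  # archive the batch
        shutil.move(f, dated)
    return os.path.join(dated, os.path.basename(best))
```

The “largest file” heuristic works because JPG compresses featureless frames (black sky, lens flare) much harder than detailed ones, so a big file usually means a detailed picture.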
Since images are sent using the same radio transmitter(s) as telemetry, only one or the other can be sent at any one time on a particular transmitter. So, the software interlaces image data and telemetry, which means that it will send a certain number of image packets, then a telemetry packet, and then repeat the cycle. The ratio can be fairly high when the balloon is high – we don’t need frequent telemetry updates at that time – but is forced to be 1:1 at lower altitudes (so we get frequent position updates as the balloon comes in to land).
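A sketch of that interlacing logic (the ratio and altitude threshold here are illustrative):

```python
# Interlace image and telemetry packets: send `ratio` image packets per
# telemetry packet at altitude, but force 1:1 below the low-altitude
# threshold so positions come through frequently near landing.

def packet_sequence(count, altitude, ratio=4, low_altitude=2000):
    """Return the next `count` packet types as 'IMAGE'/'TELEMETRY'."""
    n = 1 if altitude < low_altitude else ratio
    out, since_telemetry = [], 0
    for _ in range(count):
        if since_telemetry >= n:
            out.append("TELEMETRY")
            since_telemetry = 0
        else:
            out.append("IMAGE")
            since_telemetry += 1
    return out
```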
High and Low
As I mentioned, the software can use different image sizes for low altitude and high altitude, with the changeover controlled by this line in the configuration file:
high=2000
So, above 2000m the software uses settings for “high” images, and below that it uses those for “low” images. The same changeover altitude is used for all image channels (RTTY, LoRa and “FULL” for SD card only) and each channel has separate settings for high and low image sizes.
You can set the image size for low or high images, plus how often images are taken, plus the ratio of image packets to telemetry packets (when above the “high” setting above):
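For reference, the relevant pisky.txt lines look something like this, matching the tracker output quoted below (check the manual for the exact names in your PITS version):

```
low_width=320
low_height=240
high_width=640
high_height=480
image_packets=4
image_period=60
```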
When the tracker program starts and reads this file, it displays the following in response, explaining what the options do:
RTTY Low image size 320 x 240 pixels
RTTY High image size 640 x 480 pixels
RTTY: 1 Telemetry packet every 4 image packets
RTTY: 60 seconds between photographs
For typical 300 baud RTTY (the default speed for RTTY in PITS), this provides an image every few minutes, which is fine. Images will be transmitted more often at high altitude because the black sky compresses very well, which helps. I do not recommend using larger image sizes for RTTY. Also, do not transmit images at all if you are using 50 baud (set image_period=0 to disable images for RTTY).
Full Size Images for SD Card Only
Since these images are not transmitted, they might as well be set to the full camera resolution when above the “high” setting:
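For example, the V1 (5MP) Pi camera’s full resolution is 2592×1944, so assuming the FULL channel keys follow the same pattern with a `full_` prefix (an assumption; check your own configuration file), something like:

```
full_high_width=2592
full_high_height=1944
```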
LoRa transmissions should be set to mode 1 (see the LoRa section in the manual) as this is the best option for imaging. The resulting speed is about 4 times that for 300 baud RTTY, so you will get more frequent updates, or you could increase the image sizes.
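For LoRa channel 0 (CE0), the mode is set like so (key name as in the PITS configuration; the `_0` suffix selects the channel, with `_1` for CE1):

```
LORA_Mode_0=1
```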
LORA0 Low image size 320 x 240 pixels
LORA0 High image size 640 x 480 pixels
LORA0: 1 Telemetry packet every 4 image packets
LORA0: 60 seconds between photographs
If you don’t want images at all, either simply remove the camera, or disable the software from using it with:
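```
camera=N
```

(that’s the `camera` setting in the PITS configuration file; set it back to `Y` to re-enable the camera)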
Assuming use of a Pi camera, the software uses the raspistill program to take photographs. You can add parameters that are passed to raspistill, which is handy if for example the camera is upside-down (happens…). To do this, first check the online raspistill documentation, and then add your parameters using “camera_settings” e.g.:
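```
camera_settings=-vf -hf
```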
which does vertical and horizontal flips to rotate the image by 180 degrees.
Images From SD Card
So, you’ve flown your flight, recovered the payload, and want to get those images from the SD card onto another computer. There are many methods:
If you have a PC or Pi running Linux, you can pop the SD card into a card reader, and access the files directly
If you connect the Pi to a network, you can use WinSCP to access files from a Windows PC on the same network.
Or, also with the Pi on a network, you can install SAMBA on the Pi to share its files.
Here we will cover the WinSCP option.
First, install WinSCP from the official page. Start the program and you will see this initial screen; enter the IP address of the Pi (which it displays on the monitor at the end of the boot process) and the Pi user name (“pi”). Click Save to save the settings.
Then click Login and the program will connect to the Pi, requesting a password (even if you entered one above!):
and then, again if this is the first time you’ve connected to the Pi, it will ask for confirmation:
Once connected, you will see a file manager window with the directory structure on the left, and any files on the right:
Now, select the images directory (/home/pi/pits/tracker/images) and you will see the following directories:
FULL – for the full-sized images
LORA0 – for LoRa CE0 images
LORA1 – for LoRa CE1 images
RTTY – for RTTY images
Choose whichever you like, and you will then see dated directories within:
Open up whichever matches the date of your flight, and you will (hopefully!) see lots of image files:
You can drag individual files, multiple files or even entire directories to your hard drive or any other storage.
One of the nice things about high altitude ballooning in the UK and Europe is the community spirit and help that launchers get from others who will receive and upload the launcher’s balloon transmissions, and freely offer advice during the flight. I think it’s a very good idea to return that favour by providing live video streams of the launch, chase and recovery where possible. A few HABbers do this and it would be nice if more did.
There are various methods of uploading to different streaming services, using a phone, a Pi with Pi camera, or a laptop with webcam. For a balloon launch though, it would be good to use a camera with a zoom lens so that the balloon can be streamed once airborne, and that means a DSLR.
So, how do we get video from a DSLR into a laptop? Again, there are multiple options: either use an HDMI video capture device (higher quality but a tad expensive) or send the video over USB. Here we will explore the USB option.
So we need some software on the PC to receive the video stream from the camera, and to upload it to a streaming service. Again, there’s a choice for each. I’m using a Canon EOS 760D, which comes with “EOS Utility” to display the video in a window (which we can then capture), but a neater option is a £50 program called SparkoCam, which makes a modern Canon or Nikon DSLR appear as a regular webcam, making it easy to pass the stream on to another program. If you want to spend nothing, you can instead use the EOS Utility or Nikon equivalent plus OBS (see below) to capture from the PC screen.
To upload to a streaming service, we need a suitable video encoder. I’ve used Adobe’s Flash Media Live Encoder which works well, and which works from a webcam including SparkoCam’s virtual webcam. Here though we are going to use OBS (Open Broadcasting Software) which is rather more powerful and flexible.
First, install SparkoCam. You can try it for free but it will watermark the video stream.
Connect your DSLR and switch on. It should automatically be selected by SparkoCam, and you will hear the mirror latch up as it switches to live video mode. If nothing happens, check that the camera is on, that you used a data USB lead not just a charging one, and that your DSLR is supported by SparkoCam.
Open Broadcasting Software
Now, install OBS and run either the 32-bit or (if available on your PC) 64-bit version. OBS can be a pain to get running initially, depending on whether your PC has the required DLLs, and you may find that the 64-bit version doesn’t work but the 32-bit one does, or vice versa. Error messages from OBS can be a bit cryptic too, but once it starts it works very well.
The opening screen is a bit cryptic too till you realise what you need to do. First, you need to add a video source to accept video from SparkoCam’s virtual webcam; click the “+” below the Sources panel:
and choose “Video Capture Device” from the popup menu. Leave the name as “Video Capture Device” or change it to something appropriate, e.g. “Canon DSLR”. Click OK to save.
A Properties window will appear with a preview; you can just click OK to accept the defaults. Now the video stream is inside the main window in OBS.
This window is what will be streamed, and can contain several video sources if you want to get clever, but for now we’ll just expand the video source window, using the red drag lines, to fill the OBS source window.
If it doesn’t fit exactly, choose Settings –> Video to change the aspect ratio of the window to match the DSLR’s aspect ratio, and then expand to fit.
The following is for YouTube; other live streaming sites should have similar options.
Go to your YouTube Live Dashboard and either choose “Stream Now” or create an “Event”; we’ll do the former. With “Stream now” selected on the left of the screen, look at the “Encoder Setup” in the “Basic Info” section, where you will see the Server URL and Stream name/key. Click “Reveal” and copy the stream name to your clipboard.
Now, in OBS, click the Settings button and then click on “Stream”. Choose YouTube as the Service, and paste your key into “Stream key”. Click OK to save.
Now to start the streaming from your PC, just click the “Start Streaming” button in OBS.
YouTube likes to buffer, but after a few seconds your YouTube page should show the stream as “Live”.
A few months back, Raspberry Pi brought out a very, very nice little case for the Pi Zero, including 3 different front plates, one of which accepts the Sony Pi camera. After several minutes measuring the internal dimensions, I reckoned I could just about fit the parts for a HAB tracker inside, and came up with this:
Just add batteries.
That one was for 434MHz, and I wanted another for 868MHz, so I thought I’d document the build in case anyone else wants to make one.
Plus a soldering iron, solder, wire cutters and a Dremel with cutting disc. I assume that you also have the parts required to power and operate a Pi Zero (all the Zero suppliers provide kits). For a flight, you will also need 3 Lithium AAA or AA cells, flexible hookup wire, plus Styrofoam or similar to enclose and protect the tracker.
If you are new to soldering, practice on something else first! We are going to solder wires directly to the Pi GPIO holes, plus those on the radio and GPS boards, which isn’t the most delicate soldering operation ever but may be daunting for those with no soldering experience.
First, cut 4 short lengths of the solid-core wire, and solder to the Pi Zero as shown (making sure that the wires are on the top of the board!).
I’ve left a very short piece of insulation on the bottom-right wire, but you can remove that completely if you wish.
Next, bend the two top-right wires out of the way, and fold over the leftmost wire and cut to the length shown – this wire will connect to the Vcc hole (top one) on the GPS.
The next part is moderately fiddly: Push the short wire on the left into the Vcc hole, and then push the GPS module over the short bottom-right wire so that this wire goes through the GND hole on the GPS module:
Then push the GPS module down flat on top of the SD socket on the Pi, and solder those 2 wires (Vcc and GND) on the GPS module:
Those last 2 wires can now be bent round and connected to the GPS; the wire on the right of the above photo (Tx on the Pi) goes to the RXD hole, whilst the other (Rx on the Pi) goes to the TXD hole:
Cut the wires to length, bare the ends, push slightly into the holes then solder them.
That’s the GPS sorted.
This is the radio module for communication with the ground. This has a few more connections to make, and is a bit more fiddly.
First, place wires in these holes as shown, and solder them in place:
Be sure to use the correct holes, by counting from the right edge of the Pi Zero; don’t do it relative to any components because those can vary in position (the Zero and Zero W have the CPU in a different position, for a start!).
Now add 3 bare wires as shown:
The next step is optional. We need to provide some mechanical security for the radio, to keep it slightly away from the Pi so nothing gets shorted. This could be a double-sided sticky pad or, as here, a 4th solid wire, but this time soldered directly to a capacitor on the Pi. If that sounds daunting, use the pad! Here’s the wire, ‘cos that’s how I roll:
Once soldered, remove the insulation.
Now it’s time to place the LoRa module on those 3 (or 4) bare wires:
If you are using a sticky pad, place it now, on the underside of the LoRa module, then push the module down so it’s stuck to the Pi.
If instead you are using the 4th wire, push the LoRa module down but maintain a 1-2mm gap between it and any components on the Pi.
Then cut to length and solder them to the LoRa module.
Now we can cut each of the other wires to length and solder them to the LoRa module:
Until we have the tracker completely soldered together:
Using a Dremel or similar with cutting disc, cut a slot in the case for the GPS module to poke out. This will take some trial-and-error till the module fits comfortably.
Then drill a hole in the opposite end, in line with the corner pin on the LoRa module. The hole diameter needs to be wide enough to push a wire through it.
Connect the short flat camera cable (which came with the case) to the Pi, then insert it in the case.
Cut a piece of wire to length (164mm for 434MHz), bare and tin a few mm at one end, insert it through the hole and solder it to the corner pin on the LoRa module. Finally, connect the camera to the cable, push-fit the camera into the lid, and close the lid on the case.
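Incidentally, that 164mm is a quarter-wave at 434MHz, shortened slightly for the velocity factor of typical insulated hook-up wire; the 0.95 factor in this quick check is a common assumption, so trim to tune:

```python
# Quarter-wave whip length in mm. The 0.95 velocity factor is a
# typical assumption for thin insulated wire, not a measured value.
C = 299_792_458  # speed of light, m/s

def quarter_wave_mm(freq_hz, velocity_factor=0.95):
    return C / freq_hz / 4 * velocity_factor * 1000

print(round(quarter_wave_mm(434e6)))   # 164mm, as used above
print(round(quarter_wave_mm(868e6)))   # 82mm for an 868MHz tracker
```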
You will also need to provide a power supply to the tracker. This can be any USB powerbank with enough capacity; however, its batteries may stop working if they get cold during flight. An alternative is a powerbank that takes AA cells, in which case you can use Energizer AA Lithiums. Finally, and this is the option you will want for a lightweight payload, simply solder 3 Energizer Lithium cells directly to the 5V/GND pads on the Pi.
As I mentioned in my previous post, I was planning to enable my landing prediction code for my next flight. This code is based on some work that Steve Randall did a few years ago, but using a slightly different technique as I was using a Pi and therefore had plenty of RAM available for storing wind data (Steve used a PIC). I wrote the code as the first stage in having a HAB guide itself to a predetermined landing spot, and knew that it worked pretty well using stored data from one of Steve’s flights, but hadn’t got round to trying it for real.
The way my code works is this:
During ascent, it splits the vertical range into 100-metre bands, in which it stores the latitude and longitude deltas as degrees per second.
Every few seconds, it runs a prediction of the landing position based on the current position, the data in that array, and an estimated descent profile that uses a simple atmospheric model (from Steve) plus default values for payload weight and parachute effectiveness.
During descent, the parachute effectiveness is measured, and the actual figure is used in the calculation in step (2) above.
So, basically, for each vertical 100m band, the software calculates the estimated time to fall through that band, and applies that to the latitude/longitude deltas measured during ascent. It then sums all the resulting deltas for descent to 100m (typical landing altitude), adds them to the current position, and emits the result in the telemetry as the predicted landing position.
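The descent part of that calculation looks something like this sketch, in which the exponential atmosphere and the single drag factor are simplified stand-ins for Steve’s model and the real payload parameters, and all names are hypothetical:

```python
# Sketch of the onboard landing prediction described above.
# The atmosphere model and drag handling are deliberately simplified.
import math

BAND = 100          # vertical band size, metres
LANDING_ALT = 100   # typical landing altitude, metres

def air_density(alt_m):
    """Very simple exponential atmosphere (1.225 kg/m3 at sea level)."""
    return 1.225 * math.exp(-alt_m / 8500.0)

def descent_rate(alt_m, drag_factor):
    """Terminal velocity in m/s; payload weight and Cd*A are folded
    into the single drag_factor measured during descent."""
    return drag_factor / math.sqrt(air_density(alt_m))

def predict_landing(lat, lon, alt, deltas, drag_factor):
    """deltas[band] = (dlat/s, dlon/s) stored for that band on ascent."""
    while alt > LANDING_ALT:
        dlat, dlon = deltas.get(int(alt // BAND), (0.0, 0.0))
        seconds = BAND / descent_rate(alt, drag_factor)  # time in band
        lat += dlat * seconds
        lon += dlon * seconds
        alt -= BAND
    return lat, lon
```

Each pass through the loop estimates the time to fall through one 100m band, applies the wind deltas measured there on the way up, and the sum of all bands down to 100m gives the predicted landing position.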
Although the habhub online map does its own landing prediction, an onboard prediction has some advantages:
It has more descent data to work with, so can more accurately profile the parachute performance
It is using more recent wind data, measured during ascent
Ground chase crews can see the landing prediction without having internet access
There are disadvantages too. Because it uses wind data from the ascent, if the wind has changed (due to the landing being in a different area, or because the wind is changing with time) then those factors will introduce errors.
Also, I have a suspicion that the live map consistently overestimates the horizontal distance travelled by a descending flight. This can be seen by watching its landing prediction which, as the flight descends, will move back towards the actual flight position.
So I was keen to see how well the onboard prediction fares against the habhub prediction. Steve Randall was also interested in this, and was kind enough to record the descent on his screen. He has sped up and annotated the video, which you can see here:
From that you can see that:
Until close to landing, it’s a lot more accurate than the habhub prediction (for this flight – might not be the case generally!)
The noise in the estimated landing position is mainly along the line of the descent track.
Here’s a screenshot from the map, edited to show the movement of the landing position during descent:
Steve produced a chart showing the parachute effectiveness (relative coefficient of drag – which is what the code is trying to measure) against altitude:
Noise at low altitudes is less important, as it’s applied to a short remaining distance to fall, but the noise higher up – between, say, 5,000 and 15,000m – matters more.
For my next flight, I’ll apply some filtering to hopefully make the prediction more consistently accurate. I have all the GPS data from this flight and I can run that back into the tracker code to test how well it would have worked on that last flight.
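One simple candidate is an exponential moving average over the successive predictions; this is only a sketch, and the smoothing factor is a guess to be tuned against the logged flight data:

```python
# Exponential smoothing of a stream of (lat, lon) landing predictions.
# alpha=1 means no smoothing; smaller values trust history more.
def smooth_predictions(predictions, alpha=0.3):
    smoothed = None
    for lat, lon in predictions:
        if smoothed is None:
            smoothed = (lat, lon)       # first prediction passes through
        else:
            smoothed = (smoothed[0] + alpha * (lat - smoothed[0]),
                        smoothed[1] + alpha * (lon - smoothed[1]))
        yield smoothed
```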