
Backend encounter handling for the interim

Matthew Rossouw edited this page Jun 6, 2022 · 3 revisions

Handling encounters

While it is easy to formulate many smart ideas for working out where a satellite will be, given some core variables, actually implementing this stuff is [miserably difficult](https://conference.sdo.esoc.esa.int/proceedings/sdc8/paper/10/SDC8-paper10.pdf). Taking the TLE properties as given in the [Union of Concerned Scientists satellite database](https://www.ucsusa.org/resources/satellite-database) or the [N2YO API](https://www.n2yo.com/api/) is totally unhelpful without significant effort to process them, and working back from these core variables is messy and far from perfectly accurate even with that effort expended.

Two-line elements

[The TLE format](https://en.wikipedia.org/wiki/Two-line_element_set) originated in the 1960s and was intended to be the minimal set of parameters required to predict the orbital state of a satellite. For our (open-sourced) purposes, and as a university group, the vast TLE datasets published by the US Space Force (previously maintained by NORAD) are our best bet for generalised satellite tracking. Given that practically every publicly known artificial satellite ever launched has been catalogued and assigned these elements, they are quite convenient.

However, as mentioned above, TLE is extremely spartan and requires a huge amount of work to derive the state of a satellite at a given time. Luckily there are lots of open-sourced solutions for doing this math, but integrating them is not particularly straightforward and will likely be a struggle so early in the project.

The parameters in a TLE are pictured below (image: TLE parameters).

So, what now?

Going into the process of developing the hardware for our groundstation project, we are best served with a "ready-made" solution for figuring out where a satellite is in the sky. As established above, TLE propagation is complicated and difficult to integrate (but as theoretically accurate as we can hope to be). However, the N2YO API exposes lots of useful endpoints - most notably ones which return calculated encounters for a given satellite from your current position. N2YO doesn't reveal how these work, but they presumably use TLE propagation, as the TLE is the only such element revealed in their [database](https://www.n2yo.com/database/) which might be used in the calculation. If there is some hidden magic in the background, we will never know.

From encounters to tracking

It's a non-trivial step up from the encounter given by N2YO to actually tracking a satellite. By combining the API endpoints for a visual and radio encounter, we can determine three points in the sky to observe the satellite at, given certain points in time.

While it is theoretically possible to try to determine the orbital path as a parametric ellipse given four points (some creativity required to obtain the fourth), this is likely to be very inaccurate (notably because it ignores the many kinematic properties which make TLE a valid descriptor). In other words, it would be a terrible solution.

We could also try and interpolate between the start, highest point and end of the encounter, but again these three points do not adequately describe the path through the sky a satellite may take. This is also a terrible solution.

There is an extension to the latter however - we could use a different endpoint to retrieve extra points in the sky to make this process of interpolation far smoother. This may work - but it's a non-trivial amount of work for what is ultimately still a shoddy solution.

The answer: abusing the API (for now)

After a lot of pondering, I realised that the N2YO API exposes an endpoint revealing "exact" locations of a satellite in the sky, but only for up to 300 seconds at a time from the instant it is called. While ugly, we can asynchronously make our backend calculate an encounter and sleep until just before it. At this point it can retrieve 300 seconds of positions (or less if the encounter is shorter), send this to our hardware, and then sleep again until just before that window is up to retrieve more positions. This can be rinsed and repeated indefinitely until the encounter is over, or we are locked out of the API for exceeding the hourly budget (this is unlikely though, the API is very generous).
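A minimal sketch of this rinse-and-repeat loop, assuming the endpoint shape from N2YO's public REST docs. The API key, observer coordinates, and the `send` callback are placeholders, not real project values:

```python
import json
import time
import urllib.request

API_KEY = "DEMO-KEY"  # hypothetical placeholder, not a real key
BASE = "https://api.n2yo.com/rest/v1/satellite"
LAT, LNG, ALT = -33.92, 151.23, 0  # example observer location (Sydney)

def window_length(now: float, encounter_end: float, cap: int = 300) -> int:
    """Length of the next positions request: at most `cap` seconds,
    and never past the end of the encounter."""
    return max(0, min(cap, int(encounter_end - now)))

def fetch_positions(sat_id: int, seconds: int) -> list:
    """Pull up to 300 s of predicted positions (endpoint shape per N2YO docs)."""
    url = f"{BASE}/positions/{sat_id}/{LAT}/{LNG}/{ALT}/{seconds}&apiKey={API_KEY}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["positions"]

def track_encounter(sat_id: int, start_ts: float, end_ts: float, send) -> None:
    """Sleep until just before the encounter, then pull positions in
    <=300 s windows until the encounter is over."""
    time.sleep(max(0.0, start_ts - time.time() - 5))  # wake 5 s early
    while (n := window_length(time.time(), end_ts)) > 0:
        for p in fetch_positions(sat_id, n):
            send(p["azimuth"], p["elevation"], p["timestamp"])
        time.sleep(max(0.0, n - 5))  # refill just before the window runs out
```

The 5-second early wake-up is the buffer discussed below for N2YO's slow response times.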

Even with no interpolation or smoothing, this process should allow us to get some signal from a satellite as it is tracked. It is very easy and will be a very nice platform for us to start designing our signal processing subsystem, full-scale antenna system, and a more powerful gantry to be able to drive said antenna system.

This is a very hacky solution, and we should fully replace it with TLE propagation in the future. However, it is very easy, so it is the clear candidate for the early days of the BlueSat groundstation.

You can see the implementation for this in the backend in `/orch/encounter.py`.

Interfacing with hardware, and ideas for data transfer optimisation

With the above implementation, we simply send serial packets to the microcontroller of the form (azimuth: number, elevation: number, timestamp: unix timestamp [unsigned long]). There are some optimisations we should make immediately, however - if we leave azimuth and elevation as floats, each packet consists of 32 + 32 + 32 = 96 bits.
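For illustration, the naive 96-bit packet can be built with Python's `struct` module on the backend side. The field order and little-endian layout here are assumptions for the sketch, not a settled wire format:

```python
import struct

# Two 32-bit floats + one unsigned 32-bit unix timestamp, little-endian.
PACKET = struct.Struct("<ffI")  # 12 bytes = 96 bits on the wire

def encode(azimuth: float, elevation: float, timestamp: int) -> bytes:
    """Pack one (az, el, time) sample into a 12-byte serial packet."""
    return PACKET.pack(azimuth, elevation, timestamp)

def decode(buf: bytes) -> tuple:
    """Unpack a 12-byte packet back into (azimuth, elevation, timestamp)."""
    return PACKET.unpack(buf)
```

Whatever layout we settle on, the microcontroller's decoder must mirror it exactly (same field order, same endianness).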

If we leave a window of 5 seconds to make each transmission (the buffer between API calls, minus 5 seconds for API return time), this means we need to send 96 × 300 / 5 = 5760 bits/sec, or transmit at a minimum of 5760 baud (more in practice, since UART framing adds start and stop bits to every byte). This is obviously hazardous, as we have burned almost all of our wiggle room for the API call (N2YO is s l o w), and any interruptions to the transmission are not recoverable. Given that we are forcing a slow, scheduler-less microcontroller to multitask in this scenario, I don't think it's unreasonable to assume that interruptions and loss are likely.

Note: if I remember correctly, the safe way to handle a serial transmission like this is to have the Arduino raise a virtual interrupt on each serial event. I am unsure how interrupting the main control flow will affect motor control. I will update this section once I get around to discovering this (or you can, if you know!).

We may be able to implement a far smarter protocol than just blasting the full capacity of the serial line in the future, but for now simplicity is key.

Bearing this in mind, we may be able to optimise by abandoning the needless precision of the floating-point form of azimuth and elevation, and the chunkiness of a unix timestamp.

We may be better served with a fixed point approximation of the pair of angles.

The integer portions can be represented with 9 bits (values 0-511) for azimuth (which ranges from 0 to 360) and 7 bits (values 0-127) for elevation (which ranges from 0 to 90). The fractional portion of each can be represented with 3 bits (a guesstimate giving 0.125° resolution - not clear if this is accurate enough!). This would yield 12 and 10 bits respectively for these values; a handsome saving of 42 bits over the two 32-bit floats.
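A sketch of that fixed-point packing, using the 3-fractional-bit guess above (0.125° steps - a placeholder resolution, not a validated choice):

```python
FRAC_BITS = 3  # guessed fractional resolution: 1/8 of a degree

def to_fixed(degrees: float) -> int:
    """Convert degrees to a fixed-point integer with FRAC_BITS fraction bits."""
    return round(degrees * (1 << FRAC_BITS))

def from_fixed(raw: int) -> float:
    """Convert a fixed-point integer back to degrees."""
    return raw / (1 << FRAC_BITS)

def pack_angles(azimuth: float, elevation: float) -> int:
    """Pack azimuth (12 bits: 360*8 = 2880 < 4096) and elevation
    (10 bits: 90*8 = 720 < 1024) into one 22-bit word."""
    return (to_fixed(azimuth) << 10) | to_fixed(elevation)

def unpack_angles(word: int) -> tuple:
    """Split a 22-bit word back into (azimuth, elevation) in degrees."""
    return from_fixed(word >> 10), from_fixed(word & 0x3FF)
```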

Compaction of time is more suspect, however - given that the groundstation is equipped with a realtime clock, counting everything in seconds is quite a lot easier. We also need to contend with local time, and time conversions, if we use an alternative format (e.g. HH:MM:SS). However, there is a clear opportunity to save a lot of space if we make this change: we can encode HHMMSS with 5 bits for HH (hours run 0-23, so 4 bits isn't quite enough) and 6 bits each for MM and SS, leaving us with a total of 17 bits - roughly half the size of a 32-bit unix timestamp.
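A quick sketch of that HH:MM:SS packing (note hours 0-23 need 5 bits, since 4 bits only reach 15, so the packed word is 17 bits):

```python
def pack_hms(hh: int, mm: int, ss: int) -> int:
    """Pack a local HH:MM:SS into 5 + 6 + 6 = 17 bits."""
    assert 0 <= hh < 24 and 0 <= mm < 60 and 0 <= ss < 60
    return (hh << 12) | (mm << 6) | ss

def unpack_hms(word: int) -> tuple:
    """Split a 17-bit word back into (hh, mm, ss)."""
    return word >> 12, (word >> 6) & 0x3F, word & 0x3F
```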

All of the above are just ideas off the top of my head however, and you shouldn't take anything too seriously apart from the fact that we need to transmit packets of the form (az, el, time).