Best practices when sending GPS location data [HowTo]

I think it is a very interesting approach, but it won’t work with negative integers… so forget about any coordinate west of Greenwich or south of the equator… any suggestion to fix it?

1 Like

same problem

You could add 200 and then subtract it again on the server/TTN side… that way you would always work with positive numbers!

2 Likes

Great, we seem to have come to a good solution to transfer coordinates. Is there any way to automatically get this data onto a coverage map? Say on pade.nl/lora. Is there an application that we can assign our trackers to?

Check TTNmapper.org

When sending 3 bytes, we’re basically not sending the 4th byte of each integer, which should be 0xFF for negative numbers. Like for New York (40.712784, -74.005941, which would be sent as integers 407127 and -740059) we would send:

06 36 57 F4 B5 25 (MSB byte order), or 57 36 06 25 B5 F4 (LSB byte order)

When decoding these 6 bytes back into two 32 bits signed integers, we need to compute the missing 4th and 8th bytes ourselves, to make JavaScript properly convert the negative values for us. Those bytes should be 0xFF if the most significant bytes have their “high bit” set, which is called “sign extending”:

// LSB, Least Significant Bit/Byte first
// Sign-extend the 3rd and 6th bytes into a 4th and 8th byte:
lat = (b[0] | b[1]<<8 | b[2]<<16 | (b[2] & 0x80 ? 0xFF<<24 : 0)) / 10000;
lng = (b[3] | b[4]<<8 | b[5]<<16 | (b[5] & 0x80 ? 0xFF<<24 : 0)) / 10000;
// MSB, Most Significant Bit/Byte first
// Sign-extend the 1st and 4th bytes into leading bytes:
lat = ((b[0] & 0x80 ? 0xFF<<24 : 0) | b[0]<<16 | b[1]<<8 | b[2]) / 10000;
lng = ((b[3] & 0x80 ? 0xFF<<24 : 0) | b[3]<<16 | b[4]<<8 | b[5]) / 10000;

Alternatively, for MSB byte order, shift the most significant byte 8 bits too far to the left and then shift it back, which will do the sign extension on the fly, as the bitwise operator >> is the sign-propagating right shift:

lat = (b[0]<<24>>8 | b[1]<<8 | b[2]) / 10000;
lng = (b[3]<<24>>8 | b[4]<<8 | b[5]) / 10000;

Meanwhile, for the new production environment, payload functions should also include the function name, Decoder. So, to support 6 byte coordinates with possible negative values:

function Decoder(b, port) {

  // Amsterdam: 52.3731, 4.8924 = MSB 07FDD3 00BF1C, LSB D3FD07 1CBF00
  // La Paz: -16.4896, -68.1192 = MSB FD7BE0 F59B18, LSB E07BFD 189BF5
  // New York: 40.7127, -74.0059 = MSB 063657 F4B525, LSB 573606 25B5F4
  // Sydney: -33.8688, 151.2092 = MSB FAD500 17129C, LSB 00D5FA 9C1217

  // LSB, Least Significant Bit/Byte first! Your node likely sends MSB instead.

  // Sign-extend the 3rd and 6th bytes into a 4th and 8th byte:
  var lat = (b[0] | b[1]<<8 | b[2]<<16 | (b[2] & 0x80 ? 0xFF<<24 : 0)) / 10000;
  var lng = (b[3] | b[4]<<8 | b[5]<<16 | (b[5] & 0x80 ? 0xFF<<24 : 0)) / 10000;

  return {
    location: {
      lat: lat,
      lng: lng
    },
    love: "TTN payload functions"
  };
}
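
For completeness, the matching encode step could look like the sketch below (shown in JavaScript for symmetry; a node would typically do this in C, and encodeCoords is just a made-up name). JavaScript’s bitwise operators work on 32-bit two’s complement values, so taking the 3 least significant bytes of a negative number needs no special handling:

function encodeCoords(lat, lng) {
  var la = Math.round(lat * 10000);
  var ln = Math.round(lng * 10000);
  // LSB first, 3 bytes each; two's complement handles negative values
  return [
    la & 0xFF, (la >> 8) & 0xFF, (la >> 16) & 0xFF,
    ln & 0xFF, (ln >> 8) & 0xFF, (ln >> 16) & 0xFF
  ];
}
// encodeCoords(40.7127, -74.0059) yields [0x57, 0x36, 0x06, 0x25, 0xB5, 0xF4],
// matching the New York test vector in the Decoder's comments above.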

Alternatively, as coordinates in decimal degrees are -90…+90 for latitude, and -180…+180 for longitude: one could add 90 to the latitude and 180 to the longitude before sending, then send the positive values, and reverse that in the payload function.
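
A minimal sketch of that approach, assuming the node added 90 and 180 before scaling, and sends the unsigned results MSB first:

function Decoder(b, port) {
  // Unsigned 24 bits values, MSB first; the node added the offsets before sending
  var lat = (b[0]<<16 | b[1]<<8 | b[2]) / 10000 - 90;
  var lng = (b[3]<<16 | b[4]<<8 | b[5]) / 10000 - 180;
  return { location: { lat: lat, lng: lng } };
}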

And life can be made easy using libraries such as https://github.com/thesolarnomad/lora-serialization (though that one does not support 3 byte coordinates).

6 Likes

Looks like the MGRS or the Maidenhead Locator System :wink:

1 Like

The AIS standard uses 28 bits for longitude and 27 bits for latitude to report the position of a ship without losing accuracy.
These could be packed into 7 bytes, leaving one more bit to encode something else.
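
A sketch of unpacking those 27 + 28 bits from 7 bytes, assuming the AIS scaling of 1/10000 minute (1/600000 degree), the latitude packed above the longitude, and MSB byte order (all assumptions for illustration). BigInt is used because the packed value exceeds JavaScript’s 32-bit bitwise range, so this needs a modern runtime:

function decodeAis7(b) {
  // Reassemble the 56 bits (1 spare bit + 27 bits lat + 28 bits lng), MSB first
  var v = 0n;
  for (var i = 0; i < 7; i++) v = (v << 8n) | BigInt(b[i]);
  var lng = v & 0xFFFFFFFn;           // low 28 bits
  var lat = (v >> 28n) & 0x7FFFFFFn;  // next 27 bits
  if (lat & 0x4000000n) lat -= 0x8000000n;   // sign-extend 27 bits
  if (lng & 0x8000000n) lng -= 0x10000000n;  // sign-extend 28 bits
  return { lat: Number(lat) / 600000, lng: Number(lng) / 600000 };
}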

TDOA could give us an approximate location (or even the position of the base station itself), so the offset could be relative to the nearest square in a reference grid.

Example:
We are in the 4QFJ 1 6 MGRS square (precision level 10 km).
Exactly at location 4QFJ 12345 67890 (precision level 1 m).
So we transmit only 2345 7890, each in 0–9999 (dec): we need only 14 bits × 2.
If we choose to use 12 bits × 2, the precision is near 2.5 meters (10000/2^12).
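
A sketch of packing two such 12-bit in-square offsets into 3 bytes, scaling the 0–9999 meter offsets to 0–4095 (so with the ~2.5 m precision mentioned above; the function names are made up for illustration):

// Encode: two offsets within the 10 km grid square, 0..9999 meters each
function encodeOffsets(east, north) {
  var x = Math.round(east  / 9999 * 4095);  // 12 bits
  var y = Math.round(north / 9999 * 4095);  // 12 bits
  return [x >> 4, ((x & 0x0F) << 4) | (y >> 8), y & 0xFF];
}

// Decode back to meters within the square
function decodeOffsets(b) {
  var x = b[0] << 4 | b[1] >> 4;
  var y = (b[1] & 0x0F) << 8 | b[2];
  return { east: x / 4095 * 9999, north: y / 4095 * 9999 };
}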

2 Likes

For information, see Low Power Payload (LPP), which allows the device to send multiple sensor readings at one time.

GPS location is 9 bytes long:
3 bytes for latitude: 0.0001°, signed, MSB
3 bytes for longitude: 0.0001°, signed, MSB
3 bytes for altitude: 0.01 meter, signed, MSB
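
For reference, decoding one such LPP GPS record by hand could look like this sketch, assuming the usual LPP framing of a channel byte followed by the GPS type byte 0x88 and then the 9 data bytes:

function decodeLppGps(b) {
  // Signed 24 bits MSB at offset o, sign-extended to 32 bits
  function s24(o) {
    var v = b[o] << 16 | b[o + 1] << 8 | b[o + 2];
    return b[o] & 0x80 ? v - 0x1000000 : v;
  }
  return {
    channel: b[0],        // b[1] should be 0x88, the LPP GPS type
    lat: s24(2) / 10000,  // 0.0001 degree
    lng: s24(5) / 10000,  // 0.0001 degree
    alt: s24(8) / 100     // 0.01 meter
  };
}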

3 bytes for altitude is not best practice in my book. It also needs two additional bytes to indicate the “channel” and what the field type is. But indeed, TTN supports this as an integration as well.

1 Like

I think all GPS position reports must contain position error information. It is very important that the system receiving the position report have an understanding of how accurate the position report is.

2 Likes

Another idea:

If you need more frequent readings, record a single absolute GPS position, then record timestamp and position deltas at a higher frequency. Only upload the message when it is full. Convert the deltas back to absolute references in the incoming conversion function, as sketched below. (No need to hold any additional state in the platform.)
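
A sketch of that conversion for a hypothetical layout: a 6 byte absolute fix (3 byte signed lat/lng, MSB, 0.0001°), followed by 3 byte records of [seconds since previous fix, lat delta, lng delta], the deltas being signed bytes in 0.0001° units:

function Decoder(b, port) {
  function s24(o) {
    var v = b[o] << 16 | b[o + 1] << 8 | b[o + 2];
    return b[o] & 0x80 ? v - 0x1000000 : v;
  }
  function s8(v) { return v & 0x80 ? v - 0x100 : v; }

  var lat = s24(0), lng = s24(3);
  var fixes = [{ lat: lat / 10000, lng: lng / 10000 }];
  // Each following 3 byte record holds a time delta and two position deltas
  for (var i = 6; i + 2 < b.length; i += 3) {
    lat += s8(b[i + 1]);
    lng += s8(b[i + 2]);
    fixes.push({ dt: b[i], lat: lat / 10000, lng: lng / 10000 });
  }
  return { fixes: fixes };
}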

1 Like

Maybe the following can help with the basic problem of sending a payload that is as small as possible, to stay within the Fair Access Policy.
Please comment on this, to decide whether it is worthwhile to work this out at all. Maybe the wheel has already been invented.
Google Protocol Buffers encoding cannot be used, as that encoding has a minimum size of one byte. This algorithm has a minimum of one bit.
The following is focused on collecting measurement values transported via LoRaWAN.
The basic idea is a dynamic data marshalling technique, with the coefficients of the marshalling algorithm kept in tables that are known, but not static, at both the concentrator and the client/node side.

How does this work?
The table has meta data to identify the table and its version. The server side maintains a set of tables. Each client/node uses a table to define the coefficients of the marshalling algorithm used to serialize the values into a bit stream for the payload.
The table has an ordered array of tuples. Each index in the array denotes a measurement (e.g. the type of sensor, say a DHT22, and the unit, say temperature). The tuple has a minimum value (the base) and a maximum value, to describe the bandwidth of the measurements, and a decimal precision.
E.g. for a GPS location that only changes within 30 km during its lifetime, and with a precision of say 15 meters, this scheme will allow a payload of a very few bits.
E.g. for a temperature within a household and a bandwidth of say ±12.0 °C, eight bits suffice; if the dynamic marshalling table technique is used, even 6 or fewer bits will suffice. A sketch of the bit calculation follows below.
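
For illustration, assuming a tuple of (minimum, maximum, decimals) as described above, the calculation could look like:

// Number of bits needed for one tuple
function bitsNeeded(min, max, decimals) {
  var steps = Math.round((max - min) * Math.pow(10, decimals)) + 1;
  return Math.ceil(Math.log2(steps));
}

// Scale a reading to the non-negative integer that is actually sent
function encodeValue(v, min, decimals) {
  return Math.round((v - min) * Math.pow(10, decimals));
}

// bitsNeeded(12, 37.5, 1) === 8, so such a household temperature fits in one byte
// bitsNeeded(-90, 90, 4)  === 21, a latitude at 0.0001 degree precision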

How to make this to a dynamic marshalling table?
Luckily the communication is bidirectional, so the client/node is able to send a request to change tuple N of the used table to the server. The server acknowledges the request with a new version of the table for this client/node.
This means that in the longer run the table will stabilize.

Questions:
Does something like this already exist?
For some less powerful clients this scheme might be too complex.
Is this scheme feasible?

I was triggered by the TTN feature of defining some statements to decode a payload, and wanted to automate this. In other words, the manual version is already present in TTN.

1 Like

How many times per day?

How many per day?
Good point. Put this as a meta value in the table. This is to avoid a frequent exchange of messages to improve the table at start-up time, when the initial values in the table are still poor.

Could you give an example of what a table could look like?

Unrelated to the last few posts: see also Encoding GPS data in 40 bits.

Note that my writing is an idea and not a full definition of a proposal. So with the discussion we try to collect the items for the definition, as well as assess the feasibility.
This reply is just to give you more info to explain the idea.

Description of the table:
The table has some meta data:
The client/node has a unique number, so this ID can be used to identify the table. But there is also a need to identify, say, the version. Probably more data as well, such as time of creation, last modified time, a flag for “in transition”, and maybe more. Meta data can be sent “off line”.
The other part is an ordered array of names and tuples.
The name is to discriminate the different measurements: a string, e.g. DHT22@temperature, and a unit, e.g. Celsius.
The tuple is to be able to calculate the number of bits needed to decode/encode the value of the measurement: (minimum, maximum, decimals). In this way the value to encode is always a non-negative integer (minimal 0, maximal 2^n - 1).
E.g. say the first array element of the table is:
“DHT22”, “temperature”, “Celsius”, (12, 37.5, 1). This will allow 8 bits to denote values from 12.0 up to 37.5 (256 different values). The first 8 bits of the payload can be decoded into the temperature of the DHT22 sensor, measured in degrees Celsius.
Say the second array element is defined as “switch@lamp”, “On/Off”, (0, 1, 0); this will encode as 1 bit. The payload will be 9 bits.
Say the third array element is defined as “switch@dimmed”, “level”, (0, 7, 0); this will encode 8 dimming levels as 3 bits. The payload now becomes 12 bits.

The table is defined as something like: { meta_data: { … }, data: [ { type: string, measurement: string, unit: string, tuple: [ minimum: float, maximum: float, decimals: integer ] }, … ] }
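
Filled in for the three example entries above, such a table might look like the following (the exact field names and meta data values are just an illustration):

{
  "meta_data": { "id": 42, "version": 3 },
  "data": [
    { "type": "DHT22",  "measurement": "temperature", "unit": "Celsius",
      "tuple": [12, 37.5, 1] },  // 8 bits
    { "type": "switch", "measurement": "lamp",        "unit": "On/Off",
      "tuple": [0, 1, 0] },      // 1 bit
    { "type": "switch", "measurement": "dimmed",      "unit": "level",
      "tuple": [0, 7, 0] }       // 3 bits
  ]
}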
Discussion:
Use a minimum value minus 1 to denote a null reading?
The client/node could send the table number with, say, version zero to denote an out-of-range value, in order to alert the server to redefine the table for some entry.

1 Like

As noted before: downlinks are very limited. You certainly cannot send a downlink for every uplink.

The table will probably be quite stable. Yes, the initial table should be defined carefully. I do not expect the downlink to be more frequent than, say, once a month. As table updates are sent as changes to the table, the amount of data needed to update a table is very limited.
The problem is merely: how to make sure that the tables at the server and client side are the same? Once a week a table checksum?
Mind you: this “table/algorithm” synchronisation problem already exists today.