Another Payload Decoder Question (issues converting bytes to float)

Hi all,

I’ve been trying for the past few days to decode GPS latitude and longitude values for my application.

I'm using MicroPython on a Heltec LoRa 32 V2 board, currently sending:

cpu_temp_pack = struct.pack('h', int(cpu_temp))
bat_pack = struct.pack('h', int(bat))
lat_pack = struct.pack('f', float(lat))
lon_pack = struct.pack('f', float(lon))

payload = (bat_pack + cpu_temp_pack + lat_pack + lon_pack)

Here's an (obviously fake) example of the bytes I'm sending:
00 00 2D 00 5F 06 31 43 7E 8D 12 C2

And the lat/long values from the above bytes should be:
Latitude: -36.638176
Longitude: 177.024887
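
(For reference, the same framing can be unpacked on desktop Python with a matching format string to double-check the bytes themselves, assuming everything is little-endian:)

import struct

payload = bytes.fromhex('00002D005F0631437E8D12C2')
bat, cpu_temp, lat, lon = struct.unpack('<hhff', payload)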

My payload function is:

function Decoder(bytes, port) {
  var decoded = {};

  // Based on https://stackoverflow.com/a/37471538 by Ilya Bursov
  function bytesToFloat(bytes) {
    // JavaScript bitwise operators yield a 32 bits integer, not a float.
    // Assume LSB (least significant byte first).
    var bits = bytes[3]<<24 | bytes[2]<<16 | bytes[1]<<8 | bytes[0];
    var sign = (bits>>>31 === 0) ? 1.0 : -1.0;
    var e = bits>>>23 & 0xff;
    var m = (e === 0) ? (bits & 0x7fffff)<<1 : (bits & 0x7fffff) | 0x800000;
    var f = sign * m * Math.pow(2, e - 150);
    return f;
  }  

  if (port === 1) 
  { 
    decoded.bat = (bytes[1] << 8) | bytes[0];
    decoded.cpu_temp = (bytes[3] << 8) | bytes[2] ;
    decoded.lat = bytesToFloat(bytes.slice(4, 7));
    decoded.lon = bytesToFloat(bytes.slice(8, 11));
  }

  return decoded;
}

Which returns:
{
  "bat": 0,
  "cpu_temp": 45,
  "lat": 2.162256779794837e-39,
  "lon": 1.5634889524811644e-39
}

I'm able to put either set of 4 bytes (lat or lon) into an online converter like the following:

And the correct value is identified as ‘Float - Little Endian (DCBA)’.

If I send my cpu_temp as a float (using the same struct.pack('f', blahblah)), the bytesToFloat function taken from stackoverflow.com works fine.

Any idea what I’m doing wrong here?

Structs and the packing thereof are platform / compiler specific.

You could in theory figure it all out and then write a JS decoder.

Or just multiply the numbers by a power of ten, send them as 32-bit integers, and divide at the other end. Or send the integral part and a multiplied-out fractional part.
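
For the second option, a rough sketch for latitude alone (format codes and the four decimal places are just examples):

import struct

lat = -36.638176
lat_int = int(lat)                            # -36, fits a signed byte
lat_frac = int(abs(lat - lat_int) * 10000)    # 6381, fits an unsigned 16-bit int
lat_pack = struct.pack('<bH', lat_int, lat_frac)
# caveat: a value between -1 and 0 needs its sign carried some other way,
# since int(-0.5) == 0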

Thanks for the reply @descartes.

I did try multiplying/dividing by a power of 10, with:

cpu_temp_pack = struct.pack('h', int(cpu_temp))
bat_pack = struct.pack('h', int(bat))
lat_pack = struct.pack('i', int(lat * 10000))
lon_pack = struct.pack('i', int(lon * 10000))

payload = (bat_pack + cpu_temp_pack + lat_pack + lon_pack)

Then at the decoder end:

if (port === 1)
{
  decoded.bat = (bytes[1] << 8) | bytes[0];
  decoded.cpu_temp = (bytes[3] << 8) | bytes[2];
  decoded.lat = ((bytes[7] << 8) | bytes[4]) / 10000;
  decoded.lon = ((bytes[11] << 8) | bytes[8]) / 10000;
}

But that was giving me results in the range of 0.4-7.

I can mock up a real example after work.

EDIT: And I’m now realising I have to do something fancy with my bit shifting, due to it being a signed int?

The solution, as per here:

was to multiply by 1000000, like so:

cpu_temp_pack = struct.pack('h', int(cpu_temp))
bat_pack = struct.pack('h', int(bat))
lat_pack = struct.pack('i', int(lat * 1000000))
lon_pack = struct.pack('i', int(lon * 1000000))

payload = (bat_pack + cpu_temp_pack + lat_pack + lon_pack)

Then decode like so:

function Decoder(bytes, port) {
  var decoded = {};

  if (port === 1) 
  { 
    decoded.bat = (bytes[1] << 8) | bytes[0];
    decoded.cpu_temp = (bytes[3] << 8) | bytes[2];
    decoded.lat = (bytes[7]<<24 | bytes[6]<<16 | bytes[5]<<8 | bytes[4]) / 1000000;
    decoded.lon = (bytes[11]<<24 | bytes[10]<<16 | bytes[9]<<8 | bytes[8]) / 1000000;
  }

  return decoded;
}
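
As a quick sanity check, the same framing can be reproduced on desktop Python (the decoder above assumes little-endian, hence the explicit '<' here; values are just the earlier example ones):

import struct

payload = struct.pack('<hhii', 0, 45,
                      int(-36.638176 * 1000000), int(177.024887 * 1000000))

# Rebuild lat the same way the JavaScript does, from bytes 4..7
bits = payload[7] << 24 | payload[6] << 16 | payload[5] << 8 | payload[4]
if bits & 0x80000000:      # JavaScript's 32-bit bitwise ops handle the sign
    bits -= 1 << 32        # automatically; in Python we sign-extend by hand
print(bits / 1000000)      # roughly -36.638176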

As I said, struct is compiler / platform specific, so even though you have it transferring and decoding now, it could break or need the decoder rewritten if the payload is ever processed by something other than JavaScript.

Just turn the variables into integers and then put their individual bytes into an array.

Am I missing something?
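
Something like this, roughly (helper name and the scale factor are only for illustration):

def int32_to_le_bytes(value):
    # two's-complement, little-endian, 4 bytes, no struct involved
    return bytes([(value >> shift) & 0xFF for shift in (0, 8, 16, 24)])

lat_scaled = int(-36.638176 * 1000000)
lon_scaled = int(177.024887 * 1000000)
payload = int32_to_le_bytes(lat_scaled) + int32_to_le_bytes(lon_scaled)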

I am sending them all as integers and, for this application, I’ll only be using MicroPython on the same Heltec boards, since MicroPython is efficient enough. So I shouldn’t run into any issues.

1. You probably don't need 32-bit (4-byte) precision, just 24 bits (3 bytes), so you can send two bytes less (one per coordinate).

In my case this seems to work:

// https://github.com/ricaun/esp32-ttnmapper-gps/blob/8d37aa60e96707303ae07ca30366d2982e15b286/esp32-ttnmapper-gps/lmic_Payload.ino#L21
// accuracy till 5-6 decimal
lat = ((fix.latitude() + 90) / 180) * 16777215;
lon = ((fix.longitude() + 180) / 360) * 16777215;

loraData[0] = lat >> 16;  // MSB
loraData[1] = lat >> 8;
loraData[2] = lat;        // LSB

loraData[3] = lon >> 16;
loraData[4] = lon >> 8;
loraData[5] = lon;

On the decoder (JS, TTN V2):

decoded.latitude  = +((bytes[0] << 16 | bytes[1] << 8 | bytes[2]) / 16777215.0 * 180.0 - 90).toFixed(6);
decoded.longitude = +((bytes[3] << 16 | bytes[4] << 8 | bytes[5]) / 16777215.0 * 360.0 - 180).toFixed(6);
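
A MicroPython version of the same packing would be roughly this (untested sketch, helper name made up):

lat, lon = -36.638176, 177.024887

def coord_to_3_bytes(value, offset, span):
    # map into 0 .. 2**24 - 1, then emit MSB first, matching the decoder above
    scaled = int((value + offset) / span * 16777215)
    return bytes([(scaled >> 16) & 0xFF, (scaled >> 8) & 0xFF, scaled & 0xFF])

payload = coord_to_3_bytes(lat, 90, 180) + coord_to_3_bytes(lon, 180, 360)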

Thanks to: How to send float value and convert it into bytes - #8 by arjanvanb

In case you're still looking for a solution, I've used this library a number of times and it's quite a gem: LoRa serializer with decoder examples.

It includes serializing lat/lon values as integers and then decoding them with a similar power-of-ten multiplication method to the one you're pursuing. It also seems to pack/unpack a float.

Thanks all,

@clv nice! I'll definitely give the 3-byte approach a try.

@sunbutncat I did see that library and have referenced it to work out the most efficient number of bytes to send for each value type, but I'm using MicroPython.

Defaults are compiler specific, but any language seriously intended for network traffic has mechanisms or macros that let the details be specified explicitly so both ends agree. That especially includes the Python struct library that was being used in the post you responded to.

Though one should probably be explicit by starting the format string with > or < to indicate the intended endianness.
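
For the packing in this thread that would look something like (values only for illustration):

import struct

bat, cpu_temp = 0, 45
lat, lon = -36.638176, 177.024887

# '<' forces little-endian with standard sizes, so the byte layout no longer
# depends on whichever platform MicroPython happens to be running on
payload = struct.pack('<hhii', int(bat), int(cpu_temp),
                      int(lat * 1000000), int(lon * 1000000))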

Not so fast… e.g., signed or unsigned?

That's where JavaScript decoding gets a bit… nasty. People easily forget that when unpacking signed quantities, the most significant byte (the one carrying the sign bit) has to be treated as signed, while the remaining bytes have to be treated as unsigned.
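
Sketching the same idea in Python for a signed 16-bit value, just to show the mechanics:

lo, hi = 0x9C, 0xFF          # little-endian bytes of -100
value = (hi << 8) | lo       # 65436 if every byte is treated as unsigned
if hi & 0x80:                # the most significant byte carries the sign bit
    value -= 1 << 16         # sign-extend: 65436 - 65536 == -100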