I’m trying to find out whether sensors currently communicate in any formats other than hexadecimal, and what formats are available for encoding/decoding payloads besides CayenneLPP and openAMR-MBus.
Not sure exactly what you are asking.
Most microcontrollers communicate directly in binary, normally 8 bits per byte, which I presume is what you mean by hexadecimal. To match that, the node sends its data as a series of bytes in a packet.
If a sensor has a reading of 0x55 for instance, which is decimal 85, you could send the data as two ASCII bytes by sending “85” to represent the number, but then that means you need to send two bytes when one would do.
@LoRaTracker Exactly! I’m using hexadecimal as well; I just wanted to confirm that ASCII is another possibility. Is there any other format used by these industrial sensors, e.g. the Bosch parking sensor?
As @LoRaTracker said, sensors communicate in sequences of bits, not hexadecimal. Hexadecimal is only a way to make human-readable something that in principle is not. And I doubt anyone in industry uses a coding that takes more bits than necessary, particularly in the IoT field. I mean, you could also transmit numbers in English (“eighty five”) or Roman numerals (“LXXXV”), but what’s the gain/reason?
There is one sensor I came across, a particle sensor, that sends its measurements out as ASCII text, such as “1234”. To make TTN more efficient, and use less of your fair-use allowance, you need to parse the 4 bytes of ASCII “1234” into 2 bytes of binary, 0x04D2, and send that.
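A minimal sketch of that re-packing step (the helper name is mine, not from any sensor SDK): the ASCII reading costs one byte per digit, while the same value fits in a 2-byte unsigned integer.

```python
def ascii_to_uint16_bytes(text: str) -> bytes:
    """Parse a decimal ASCII reading and pack it big-endian into 2 bytes."""
    value = int(text)
    if not 0 <= value <= 0xFFFF:
        raise ValueError("reading does not fit in 2 bytes")
    return value.to_bytes(2, "big")

# "1234" is 4 bytes as ASCII text; packed as binary it is only 2:
payload = ascii_to_uint16_bytes("1234")   # b"\x04\xd2"
```

Halving the payload directly halves the airtime that reading consumes.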
I see there’s a point.
I was documenting a study I made of the different kinds of payloads sent by sensors — or rather, the different kinds of payloads received by IoT middleware. Every big sensor manufacturer has its own integrations, which makes it a bit confusing.
Are there any formats other than Cayenne LPP for encoding and decoding payloads?
@LoRaTracker So according to your answer there are two human-readable kinds: ASCII and hexadecimal.
Hi @Hemanth, I use purchased sensors from companies such as Netvox, Adeunis, Sensoterra, Tektelic, DigitalMatter, Libelium, Sensei, etc.
[I hear a general intake of breath from the forum followed by hissing from some]
All these manufacturers have their own proprietary payload data encodings. When I purchase devices they will provide detailed technical information on the device configurations and payload layout for the various devices and will often provide basic decoders for use on TTN. Some of these documents are large and complex, as an example the Libelium “Waspmote Data Frame Programming Guide” is 42 pages of small print!
In most cases they use standard IEEE encodings for floating point and integer numbers.
There are a lot of bit-level on/off type flags, etc.
I have never seen ASCII encoded as hex.
I think the most interesting variation is in the lossless encoding of high-precision floating point numbers, e.g. latitude and longitude, simple in four bytes or complex in three bytes.
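As an illustration of the simple four-byte case, here is one common scheme (my own sketch, not any particular vendor’s format): scale degrees by 10⁷ and store the result as a signed 32-bit integer. Since 180 × 10⁷ still fits in an int32, the full range of longitudes survives, with roughly centimetre resolution.

```python
import struct

def pack_coord(degrees: float) -> bytes:
    """Pack a latitude/longitude into 4 bytes as big-endian int32 * 1e7."""
    return struct.pack(">i", round(degrees * 1e7))

def unpack_coord(data: bytes) -> float:
    return struct.unpack(">i", data)[0] / 1e7

packed = pack_coord(52.372760)   # example coordinate, 4 bytes on air
```

The three-byte variants trade that headroom away, typically by coarser scaling or by mapping onto the known ±90/±180 range.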
Some manufacturers also use encoded downlinks for configuration, and the devices have no USB/serial port at all — hard to get used to after a lifetime of configuring equipment via CLI. Once used to it, I find I like it; it is also more secure and saves on power and component costs.
I think that the weakest thing I see in some devices is a failure to use standard LoRaWAN fport numbers to encode uplink and downlink types. Many devices use a single fport and then an additional byte in the payload to signal the type. In my opinion, writing good decoders is much easier when the fport is used.
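To illustrate the point, a decoder can dispatch directly on fPort instead of inspecting a type byte inside the payload. The port numbers and payload layouts below are hypothetical, purely to show the shape of the dispatch:

```python
def decode_uplink(f_port: int, payload: bytes) -> dict:
    """Dispatch on the LoRaWAN fPort rather than a type byte in the payload."""
    if f_port == 1:  # assumed: periodic telemetry (int16 temp in 0.1 C, uint8 humidity)
        temp = int.from_bytes(payload[0:2], "big", signed=True) / 10
        return {"type": "telemetry", "temp_c": temp, "humidity_pct": payload[2]}
    if f_port == 2:  # assumed: alarm event (one flag byte)
        return {"type": "alarm", "flags": payload[0]}
    return {"type": "unknown", "raw": payload.hex()}
```

Each uplink type gets its own branch with its own fixed layout, and no payload byte is spent signalling what the frame is.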
It’s only “human-readable” when you take the raw binary data and choose to view or display it as such.
Not exactly. Hexadecimal is how you read byte data. ASCII is one (old) way of coding text (not numbers).
“85” as an ASCII string is 0x38 0x35 in hexadecimal (easier to read than 0011 1000 0011 0101). These are very basic computer/programming concepts.
@UdLoRa I know what ASCII is; I wanted to know whether sensor payloads are encoded in any other formats.
In my opinion it hurts interoperability when companies adopt their own payload standards instead of an open one like Cayenne LPP. However, the issue I found with Cayenne LPP is that it implements only enough for the myDevices dashboard and is slow to adopt all the data types in the IPSO smart objects registry. In a nutshell, I recommend continuing to use Cayenne LPP, which is widely used.
As a counter-argument, LoRa is so bandwidth constrained, that doing some moderately heavy lifting on the ends to give each bit actually sent over the air the maximum application-specific-utility seems entirely warranted.
The big problem with proprietary schemes is when they are undocumented or have critical issues the manufacturer can’t be bothered to fix.
In contrast, unique solutions tuned to a need can be quite wise, where the users have access to the specs and the source code for both interoperability and the possibility to make needed repairs.
This isn’t even so much an issue for writing decoders, as it is one of simple wastefulness.
An application pays to send that fport value regardless of whether it is used, so it only makes sense to use it in a way that balances meaningful distinction of data types today with room for adding future ones in a compatible way.
http://openmobilealliance.org/wp/OMNA/LwM2M/LwM2MRegistry.html is an attempt to standardize payload formats for sensors/devices. A bit like how the bluetooth gatt stuff works, or how wireless m-bus devices format their payload. I haven’t seen it used in LoRaWAN though.
Hi @jerylcook, you raise an important matter that I think needs clarification.
I stated that manufacturers use proprietary data encoding. That’s not really correct as there are two components; the data encoding and the data schema.
The bit/byte level data encodings that I see are standard:
- Enumerated bits and enumerated bytes, see https://en.wikipedia.org/wiki/Enumerated_type
- Signed/unsigned integer numbers at 1, 2, 4 bytes, see https://en.wikipedia.org/wiki/Integer_(computer_science)
- Floating point numbers, see https://en.wikipedia.org/wiki/IEEE_754
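These standard encodings map directly onto fixed-width binary packing; a minimal sketch (byte order is whatever the device documentation specifies — big-endian assumed here):

```python
import struct

# One flag byte, a signed 16-bit value, an unsigned 32-bit counter,
# and an IEEE-754 single-precision float, big-endian:
packed = struct.pack(
    ">BhIf",
    0b1010_0001,   # enumerated bits (status flags)
    -1234,         # signed 16-bit integer
    3_600_000,     # unsigned 32-bit counter
    23.5,          # IEEE-754 float32
)
# 1 + 2 + 4 + 4 = 11 bytes on air, with no per-field overhead
```

The decoder on the other side simply applies the same format string in reverse.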
The data schema can be:
- Self-describing, like CayenneLPP with its 2-byte overhead per sensor data (channel no. and data type).
- External schema, with just the data, no overhead.
A self-describing schema system makes for easier systems-integration at the price of lower efficiency.
An external schema system makes for harder systems-integration but is more efficient at wire-level.
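The trade-off is easy to see byte-by-byte for a single temperature reading of 23.5 °C. CayenneLPP’s temperature type is 0x67, a signed 2-byte value in 0.1 °C steps, preceded by a channel byte; an external-schema payload carries only the data bytes and leaves the layout to the documentation:

```python
value = round(23.5 * 10)  # 235 -> 0x00EB, signed 0.1 C steps

# Self-describing (CayenneLPP): channel 1 + type 0x67 + 2 data bytes = 4 bytes
lpp = bytes([0x01, 0x67]) + value.to_bytes(2, "big", signed=True)

# External schema: the 2 data bytes alone; the meaning lives in the docs
raw = value.to_bytes(2, "big", signed=True)
```

Two bytes of overhead per reading is negligible on a wire but doubles the airtime of this particular payload.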
Interestingly [to me anyway], ALL the commercial manufacturers have used standard data encoding with proprietary external schema systems with the schema documented in the normal product technical documentation way. This has been the normal approach throughout industrial automation for 30+ years. It may well be that the sensor manufacturers hired industrial automation software engineers who simply did LoRaWAN the same way that they’ve done every process control bus in the past.
Such overhead is a non-starter for non-trivial usage of LoRaWAN. In the US we basically can’t use uplink SFs above 10 at all (because at 125 kHz bandwidth and SF11 the LoRaWAN packet headers and MIC would take up essentially the entire dwell-time allowance). And at SF10, if you have more than a few sensors, you really can’t afford any overhead in packing their readings.
Don’t forget that even where not constrained by regulation, airtime costs battery and network capacity.
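To put numbers on that, here is a rough airtime estimate using the standard LoRa time-on-air formula from Semtech’s SX127x documentation (a sketch; real calculators account for regional parameters too). A 13-byte frame is roughly the minimum LoRaWAN framing with an empty application payload:

```python
import math

def lora_airtime_s(payload_len, sf, bw=125_000, cr=1, preamble=8,
                   explicit_header=True, ldro=False):
    """Approximate LoRa time-on-air in seconds (CRC on, CR 4/(4+cr))."""
    t_sym = (2 ** sf) / bw
    de = 1 if ldro else 0
    h = 0 if explicit_header else 1
    n_payload = 8 + max(
        math.ceil((8 * payload_len - 4 * sf + 28 + 16 - 20 * h)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

t = lora_airtime_s(13, 10)  # ~0.29 s for an empty payload at SF10/125 kHz
```

Every extra byte of schema overhead is paid at that rate on every single uplink.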
Some schemes in industrial control date from before there were MCUs with a decent amount of code space on the other end of the wire, and before the links themselves were as fast as they typically are today. Many modern schemes are wordy, and not infrequently readable text. But the thing is, on a wire you have flexibility and can pay overhead for readability. On an ultra-long-range radio you typically don’t.
Case in point: air packets should be tightly packed. But traditional formats for gateways to report in to a server are mostly based on JSON (even if having custom binary framing as in the Semtech UDP protocol). JSON is both self describing and highly inefficient - but it’s not considered an issue because backhaul bandwidth is relatively cheap, and being able to just dump the JSON to a log file and grep through or whip up a log analyzer in any language of preference is hugely useful, as is the ability to toss in extra fields with debug info that will be simply ignored by recipients not looking for it.
The mechanisms to use for a task should be chosen with an eye to the balance of constraints and benefits.