MIC vs Checksum

Hi there

Can we rely on the LoRaWAN MIC as a checksum to ensure our payload is valid? Or should we add a checksum function to our encoder/decoder?

Any experts on this subject here?

Would appreciate any advice

There is one called ‘tom-iotexperts’ who posts on here sometimes.


I have not actually checked, but I would be surprised if TTN is not using the CRC checking that LoRa devices can do automatically.

As a first defence, a gateway should not forward packets that have a bad CRC at the LoRa radio modulation level, nor packets that use a non-LoRaWAN sync word. (If it would even see those: the sync word is typically configured in the LoRa receivers, so the gateway’s software would likely never see packets that use a different sync word.)
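As a sketch of that first line of defence: in the Semtech UDP packet forwarder protocol, each received packet (`rxpk`) carries a `stat` field, where `1` means the radio CRC checked out, `-1` means it failed, and `0` means no CRC was present. The field names below follow that protocol, but the filtering policy itself is my own illustration, not any particular gateway's actual code:

```python
def forwardable(rxpk: dict) -> bool:
    # Only forward packets whose radio-level CRC checked out.
    # stat: 1 = CRC OK, -1 = CRC fail, 0 = no CRC (per the Semtech UDP protocol)
    return rxpk.get("stat") == 1

# A packet with a failed CRC never even reaches the network server:
print(forwardable({"stat": 1}))   # CRC OK: forwarded
print(forwardable({"stat": -1}))  # CRC fail: dropped at the gateway
```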

Next, even when the CRC does seem good, the packets might still be random noise, as an expert called ‘LoRaTracker’, who posts on here sometimes, has nicely explained. :wink: But after receiving a packet, TTN needs both the DevAddr and the MIC to deliver the packet; see How does a network know a received packet is for them? The DevAddr is 32 bits, but due to the systematic assignment not all of those bits may add entropy. The MIC is typically also 32 bits, and while for LoRaWAN 1.0.x only 16 of those are in the LoRa message, TTN will use the full 32 bits to validate the MIC.
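To illustrate the routing step: the network matches the DevAddr against its known sessions, and only accepts the packet if some session's key also reproduces the MIC. This is just a sketch of that idea, not TTN's actual code; in particular I'm using truncated HMAC-SHA256 as a stand-in for the AES-CMAC that LoRaWAN really uses, and the session/field names are made up:

```python
import hashlib
import hmac

def compute_mic(nwk_skey: bytes, frame: bytes) -> bytes:
    # Stand-in for LoRaWAN's AES-CMAC-based MIC: HMAC-SHA256 truncated to 4 bytes.
    return hmac.new(nwk_skey, frame, hashlib.sha256).digest()[:4]

def route_uplink(sessions, dev_addr: bytes, frame: bytes, mic: bytes):
    # A packet is "ours" only if a known session has a matching DevAddr AND
    # its key reproduces the transmitted MIC. The DevAddr alone is not enough,
    # since addresses are not unique across devices and networks.
    for session in sessions:
        if session["dev_addr"] != dev_addr:
            continue
        if hmac.compare_digest(compute_mic(session["nwk_skey"], frame), mic):
            return session
    return None  # noise, another network's device, or a corrupted packet
```

With a valid MIC the packet is delivered; with any other MIC (corruption, wrong key) it is silently dropped, which is exactly the checksum-like behaviour being asked about.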

Finally, the application payload is decrypted using the secret AppSKey. When using an efficient encoding (that is: when not sending text), decrypting a mangled payload (or decrypting with the wrong key) would simply yield a different result, which would very likely go undetected. But of course, the MIC has already been validated at this point, and that also covers the encrypted application payload (and the uplink counter).
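The point that decryption itself never "fails" can be shown with a toy stream cipher. This is not LoRaWAN's actual AES-CTR-style scheme, just a stand-in with the same property: XOR the data with a key-derived stream, so a wrong key raises no error and simply produces different bytes:

```python
import hashlib

def keystream_xor(app_skey: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stand-in for CTR-mode encryption: XOR data with a key-derived stream.
    # The same function both encrypts and decrypts.
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(app_skey + nonce + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(d ^ s for d, s in zip(data, stream))

key, nonce = b"A" * 16, b"\x00" * 4
ciphertext = keystream_xor(key, nonce, b"\x01\x02\x03\x04")
# Right key: original bytes back. Wrong key: garbage, but no exception -
# only the already-validated MIC tells you the payload is trustworthy.
plaintext = keystream_xor(key, nonce, ciphertext)
garbage = keystream_xor(b"B" * 16, nonce, ciphertext)
```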

So, I’d say it’s safe to assume that if a message is routed to your application, it was created by one of your devices and has not been changed/mangled during the LoRa transmission, nor while being forwarded from gateway to network server over the internet. Even then, your application should always be ready to recognize wrong values caused by broken sensors or plain sabotage, but such values would not be detected by any checksum anyway.

In short: no need for an application-level CRC.


Generally I would agree, however:

The claim that only 16 of the MIC’s bits are in the LoRa message is not quite correct: the transmitted MIC is the full 32 bits.

The confusion arises because the MIC is calculated over a buffer that includes the 32-bit frame counter, while only the lower 16 bits of that counter are transmitted in the packet. The recipient then has to guess plausible values for the upper bits (i.e., from recent history) and check whether those full-width values produce a match between the locally calculated MIC, which includes them, and the transmitted MIC, which was calculated with the real 32-bit frame counter value.
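The guessing step above can be sketched as follows. The function name, the candidate strategy, and the `max_gap` window are my own illustration (real network servers have their own policies); the server would then compute the MIC over each candidate full-width counter and accept the one that matches the transmitted MIC:

```python
def guess_full_fcnt(fcnt16: int, last_fcnt32: int, max_gap: int = 16384):
    """Reconstruct a plausible 32-bit frame counter from the 16 transmitted
    bits, given the last 32-bit counter the server accepted.

    Tries the current 64K 'epoch' and the next one (in case the lower
    16 bits rolled over), and requires the counter to move forward by
    at most max_gap. Returns None if no candidate is plausible."""
    base = last_fcnt32 & 0xFFFF0000
    for candidate in (base | fcnt16, (base + 0x10000) | fcnt16):
        if last_fcnt32 < candidate <= last_fcnt32 + max_gap:
            return candidate  # verify the MIC against this full-width value
    return None  # replay or implausible jump: reject

# Normal case: counter simply advanced within the same epoch.
print(guess_full_fcnt(105, 100))        # 105
# Rollover case: lower 16 bits wrapped, so the upper bits must be bumped.
print(hex(guess_full_fcnt(0x0005, 0x1FFF0)))  # 0x20005
```

Only the candidate whose MIC matches is accepted, so a lucky 16-bit collision is still caught by the full 32-bit MIC check.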


Very true, don’t know why I got this wrong above :slight_smile:

So, in total there are 8 bytes (the 4-byte DevAddr plus the 4-byte MIC) used for validation of a packet of, say, a few dozen bytes. Perfect.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.