Improve node performance by changing coding rate from 4/5 to 4/8?

I was wondering: the default error coding rate for LoRaWAN is 4/5, so what would happen if you sent a message with coding rate 4/8?
Using a higher coding rate might allow sending messages in an environment with a lower signal-to-noise ratio, or achieve longer range, because the message is sent with more redundant bits that help correct a corrupted reception.
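To make the redundancy concrete: a LoRa coding rate of 4/(4+CR) means every 4 payload bits are expanded to 4+CR coded bits (ignoring the preamble and the PHY header, which is always coded at 4/8). A quick sketch, not tied to any particular library:

```python
# Expansion of LoRa's forward error correction for each coding rate:
# with rate 4/(4+cr), every 4 payload bits become (4+cr) coded bits.
def coded_bits(payload_bits: int, cr: int) -> int:
    """Coded bits on air for a given CR value (1 means 4/5, ..., 4 means 4/8)."""
    return payload_bits * (4 + cr) // 4

# 100-byte payload at coding rates 4/5 and 4/8:
for cr, label in [(1, "4/5"), (4, "4/8")]:
    print(label, coded_bits(100 * 8, cr))
```

For the same 100-byte payload, 4/8 puts 1600 coded bits on air versus 1000 for 4/5, i.e. 60% more redundancy, which is where the extra robustness comes from.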

Changing the coding rate is possible because LoRaWAN uses explicit headers at the LoRa modulation level (see page 3: ImplicitHeaderModeOn = 0, which means explicit mode). With an explicit header, the transmitter sends along a header in which things like the coding rate, CRC presence and low-data-rate optimisation are specified. The receiver follows this header and will still decode the packet even if it does not use the default 4/5 error coding rate.

I tried this by modifying a define in the LMIC library, and guess what … it works:
The codingrate field now shows a “4/8” value instead of the usual “4/5”.

I can imagine that, similarly, we could also enable the low-data-rate optimisation bit to get a little better performance. Enabling this bit makes it a little easier for the receiver to distinguish between symbols, by encoding only (SF − 2) bits per symbol instead of SF bits.
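A nuance worth noting: because each symbol then carries fewer bits, the same payload needs more symbols. This can be checked with the payload symbol count formula as I recall it from the Semtech SX1272/76 datasheets (a sketch; verify against your datasheet revision):

```python
import math

def payload_symbols(pl: int, sf: int, cr: int, de: int, crc: int = 1, ih: int = 0) -> int:
    """Payload symbol count per the Semtech SX1272/76 datasheet formula.
    pl: payload bytes, cr: 1..4 (4/5..4/8), de: 1 if low-data-rate
    optimisation is on, crc: payload CRC present, ih: implicit header."""
    num = 8 * pl - 4 * sf + 28 + 16 * crc - 20 * ih
    return 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)

# 30-byte payload at SF12, CR 4/5, without and with the optimisation:
print(payload_symbols(30, 12, 1, de=0), payload_symbols(30, 12, 1, de=1))
```

For this example the packet grows from 33 to 38 payload symbols, so the optimisation buys its robustness at the cost of some extra airtime too.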

The downside of both methods is that the on-air time increases, so it could be considered abuse of the network, since it increases congestion for everyone. Any opinions?
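To put a number on that airtime cost, here is the complete time-on-air calculation as I recall it from the Semtech SX1272/76 datasheets, comparing 4/5 against 4/8 (a sketch; double-check the figures with an airtime calculator before relying on them):

```python
import math

def lora_time_on_air(pl: int, sf: int, bw_hz: int, cr: int,
                     preamble: int = 8, crc: int = 1, ih: int = 0, de: int = 0) -> float:
    """Packet time-on-air in seconds, per the Semtech SX1272/76 datasheet.
    cr is 1 for 4/5 up to 4 for 4/8; de=1 enables low-data-rate optimisation."""
    t_sym = (2 ** sf) / bw_hz                      # symbol duration
    num = 8 * pl - 4 * sf + 28 + 16 * crc - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym   # preamble + payload symbols

# 20-byte payload at SF12 / 125 kHz (DE is typically forced on at SF11/12):
toa_45 = lora_time_on_air(20, 12, 125_000, cr=1, de=1)
toa_48 = lora_time_on_air(20, 12, 125_000, cr=4, de=1)
print(f"4/5: {toa_45*1000:.0f} ms, 4/8: {toa_48*1000:.0f} ms")
```

For this example the packet grows from roughly 1.32 s to 1.71 s, about 30% extra airtime — noticeably less than the 60% growth in coded bits, because the preamble, header and padding are unaffected by the payload coding rate.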


True, but the same applies to larger payloads, or more frequent transmissions. As soon as TTN starts to enforce the Fair Access Policy, the number of messages you can send will simply drop if you choose a more robust coding rate.

Hi Bertrik,

I think it’s worth exploring, especially the increase of the coding rate. It’s certainly not abuse, within the fair access policy a user can allocate the airtime as needed.

It would be interesting to learn what the effect of the increase is in a real environment. We could do some testing with nodes that alternate between 4/5 and 4/8, to see if we can measure a significant difference. Then you can make a trade-off between the extra airtime and the loss rate.

Regarding the low-data-rate optimisation: can you signal this in-band? Otherwise it’s problematic, I think. Also, isn’t it fixed in the gateways, enabled for SF11/12 only?

Bertrik, as discussed yesterday evening, I did give this a try :slight_smile:

I set up a node that alternates between CR 4/5 and 4/8, and put a CW interferer between the node and the gateway to push the SNR down to the level where the gateway no longer decodes the messages.

Right on that edge, I managed to find a point where the 4/5 messages were lost while the 4/8 messages were received fine. The window was very narrow though; in most cases either all messages were received, or all were lost.

I wonder how much impact this change would have in a real deployment. I guess testing is the only way to find out.


Just for future reference: while early specifications were not quite explicit about it (or even plain confusing, mentioning 4/5 only for some details), nowadays it seems LoRaWAN should always use 4/5.