ADR, Fair Use and LMIC


I’m building some nodes to measure local environmental conditions and had planned to send data at 10-15 minute intervals. In most locations this is well within the allowed fair use as the nodes have good coverage and stay on SF7.

In some areas, I’ve noticed ADR switches to SF11 to make a connection - or rather, that seems to be the case when LMIC joins the node to the network. It never reverts back down, as the downlink is sent at SF9 and the signal doesn’t reach the node. At SF11 it takes about 741 ms to send a message.

At a 15 minute interval, that’s around 71 seconds of airtime per day; at 10 minutes it’s about 107 seconds. Obviously that’s outside fair use.
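Those figures can be sanity-checked with the standard SX127x time-on-air formula. A minimal sketch, assuming a 19-byte PHYPayload (roughly 13 bytes of LoRaWAN overhead plus a 6-byte application payload), coding rate 4/5, explicit header, CRC on and an 8-symbol preamble:

```cpp
#include <cmath>

// Hedged estimate of LoRa time-on-air (Semtech SX127x formula), assuming
// explicit header, CRC on, coding rate 4/5 and an 8-symbol preamble.
// payloadBytes is the full PHYPayload (LoRaWAN overhead + app payload).
double airtimeMs(int sf, double bwHz, int payloadBytes) {
    double tSym = std::pow(2.0, sf) / bwHz * 1000.0;       // symbol duration, ms
    int de = (sf >= 11 && bwHz <= 125000.0) ? 1 : 0;       // low-data-rate optimisation
    double tPreamble = (8 + 4.25) * tSym;
    double num = 8.0 * payloadBytes - 4.0 * sf + 28 + 16;  // explicit header, CRC
    double nPayload = 8 + std::fmax(std::ceil(num / (4.0 * (sf - 2 * de))) * 5.0, 0.0);
    return tPreamble + nPayload * tSym;
}

// Daily airtime in seconds for a given per-message airtime and send interval
double dailyAirtimeS(double msgMs, double intervalMin) {
    return (24.0 * 60.0 / intervalMin) * msgMs / 1000.0;
}
```

Under those assumptions this gives roughly 51 ms per message at SF7 and 741 ms at SF11, i.e. about 71 s of daily airtime at a 15-minute interval on SF11, matching the numbers above.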

Some questions

  1. Is fair use per node or the average of the nodes in an application? If those on SF7 are using 15 seconds, can that time be allocated to nodes on higher SF?
  2. Does the fair use policy take account of ADR in any way? If so, what’s a reasonable assumption to make for the SF in airtime calculations (assuming that, as ADR is best for the network, some leeway may be allowed)?
  3. Is there a way for LMIC to inform the main code what SF is in use under ADR, so I can adjust my send interval in the main code? (I’d rather do this so devices with good coverage can send often.)
  4. Can we set a maximum SF value for LMIC to use in ADR?

(Arjan) #2

As an aside, TTN tells the gateway to use more power for that SF9 downlink, so the node is still expected to receive it.

As for your first question: the Fair Access Policy is per node.


Interesting on the SF9 - hadn’t realised that’s how it worked. I’m guessing that some of the downlinks also tell the node to stay as it is, then.

Fair use is as I thought, so that leaves trying to figure out how to expose the current SF and adjust my send rate using LMIC.


I’ve managed to extract the datarate so should be able to adjust the send timings. Code below

void printDataRate() {
  switch (LMIC.datarate) {
    case DR_SF12: debugPrintLn(F("Datarate: SF12")); break;
    case DR_SF11: debugPrintLn(F("Datarate: SF11")); break;
    case DR_SF10: debugPrintLn(F("Datarate: SF10")); break;
    case DR_SF9: debugPrintLn(F("Datarate: SF9")); break;
    case DR_SF8: debugPrintLn(F("Datarate: SF8")); break;
    case DR_SF7: debugPrintLn(F("Datarate: SF7")); break;
    case DR_SF7B: debugPrintLn(F("Datarate: SF7B")); break;
    case DR_FSK: debugPrintLn(F("Datarate: FSK")); break;
    default: debugPrint(F("Datarate Unknown Value: ")); debugPrintLn(LMIC.datarate); break;
  }
}


The next issue I get is that if I go from good signal (an SF8 area) to really poor signal (an SF11 area), LMIC seems to fail.

I notice

  • When in SF8, the downlink packets for adr are in SF8
  • I move the device inside to where it would get SF11 if I’d let it join from power on
  • As expected, the SF8 packets are no longer received by the gateway
  • After no gateway acks, the device seems to report link dead
  • The device then reports rejoin failed. It seems to then do nothing at all

I’m wondering what the correct operation is. I’d expect that after a dead link, ADR would increase the spreading factor until at the maximum, and continue sending packets in the hope they’re received.

One way to do this and ensure connection is to rejoin and thus renegotiate over OTAA. That may be what happens, but the device doesn’t report that it’s joining.

If a rejoin fails, I’d expect it to retry a rejoin as it would retry joining in first power up if it fails to join.

Do I need to do anything in the main state machine case statements to do an LMIC reset and reinitialise if the link fails?

(Arjan) #6

Are you saying ADR is telling you to keep using SF8, or only that the ADR response is received in RX1 on SF8?

When no changes are needed, ADR might not send a downlink. However, if it would never get any ADR command then the node would not be able to tell if its packets were received at all. To mitigate that, it should explicitly set the ADRAckReq field every now and then. According to the specifications: Adaptive data rate control in frame header (ADR, ADRACKReq in FCtrl)


If an end-device’s data rate is optimized by the network to use a data rate higher than its lowest available data rate, it periodically needs to validate that the network still receives the uplink frames. Each time the uplink frame counter is incremented (for each new uplink; repeated transmissions do not increase the counter), the device increments an ADR_ACK_CNT counter. After ADR_ACK_LIMIT uplinks (ADR_ACK_CNT >= ADR_ACK_LIMIT) without any downlink response, it sets the ADR acknowledgment request bit (ADRACKReq). The network is required to respond with a downlink frame within the next ADR_ACK_DELAY frames; any received downlink frame following an uplink frame resets the ADR_ACK_CNT counter.

So, you’d need to verify whether your uplinks have the ADRACKReq bit set every now and then.

In LoRaWAN 1.0, for most (if not all) regions, ADR_ACK_LIMIT is 64, and ADR_ACK_DELAY is 32. In LoRaWAN 1.1, it’s a configuration.
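The counter mechanics can be sketched as follows. This is a model of the specification’s description, not LMIC’s actual implementation; the 64/32 constants are the LoRaWAN 1.0 regional defaults quoted above:

```cpp
// Model of LoRaWAN 1.0 ADRACKReq bookkeeping (not LMIC's actual code),
// using the common regional defaults ADR_ACK_LIMIT = 64, ADR_ACK_DELAY = 32.
struct AdrAckTracker {
    static const int ADR_ACK_LIMIT = 64;
    static const int ADR_ACK_DELAY = 32;
    int adrAckCnt = 0;

    // Call once per new uplink (repeated transmissions do not count);
    // returns true when ADRACKReq must be set in FCtrl.
    bool onUplink() {
        ++adrAckCnt;
        return adrAckCnt >= ADR_ACK_LIMIT;
    }

    // Any downlink following an uplink resets the counter.
    void onDownlink() { adrAckCnt = 0; }

    // After LIMIT + DELAY uplinks with no downlink, the device should
    // step back towards its lowest data rate to try to regain connectivity.
    bool shouldFallBack() const {
        return adrAckCnt >= ADR_ACK_LIMIT + ADR_ACK_DELAY;
    }
};
```

This also shows why recovery is slow: 96 uplinks must go unanswered before the fallback condition is reached.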

Beware that the Wiki says:

Only static nodes should use ADR. ADR can also be used by a mobile node that is able to detect when it is “parked” on a fixed spot.

That said, of course a node should be able to recover from losing connectivity, if only because gateways might go offline while the node doesn’t move. But given ADR_ACK_LIMIT and ADR_ACK_DELAY totalling 96 messages in LoRaWAN 1.0, it surely might take some time for the implementation to notice the link is dead.

But then: in your case, the dead link is reported and a re-join is initiated, so the above is probably merely for future readers…

Related topics:

  • What about Spreading factor and ADR
  • ADR control on TTN: not getting any ADR downlinks for confirmed uplinks with ADR bit set
  • Why is ADR only relevant for nodes at fixed positions?
  • ADR is not working (nucleo - raspberry - italy)

Looks like the gateway sent a downlink in SF8, presumably in the RX1 slot which was not received by the node.

To clarify, the indoor/outdoor move is just part of testing, to simulate a change in link conditions that you might see with a fixed outdoor node (e.g. a truck parks next to it). I’ve also set the packet timing to be much faster for test and debug, as per the TTN allowances for development.

The problem looks to be with the way LMIC is responding. I think, because the rejoin failed case comes first in the case statement in the LMIC examples, we never try to queue another packet once that flag is set. It looks like either an LMIC reset and restart, or queuing another packet to force an automatic join, is needed in the response to the rejoin failed case.

(Arjan) #8

Which implementation of LMiC are you referring to?


This one:

Using the otaa example as the core

(Arjan) #10

These examples do not enable ADR, do they…?

Assuming you somehow enabled ADR, did you remove the following LMIC_setLinkCheckMode? If not, then I wonder if your node is sending ADRACKReq. (I’d guess it is, as it’s detecting the dead link.)

And I’m quite sure it’s very much unrelated to your problem, but still: could the extra second “break;” somehow mess with the running code, making LMiC only try to re-join once…? (One break; statement suffices, and the one after it should be dead code. A good compiler would warn about that.)

You might understand, but just to be sure: the switch(ev) { case ... } is not handling (multiple) flags, but one-time events. Once the EV_REJOIN_FAILED event has fired and has been handled, the switch(ev) is ready for the next event. If some other code would use an internal flag indicating something failed, then it’s surely not repeating the EV_REJOIN_FAILED event as otherwise you’d see a lot of those in your debugging. Even more, it’s simply not sending any event at all, as otherwise you’d get into the default part which would execute Serial.println(F("Unknown event"));. As long as you leave default as last, you could change the order of the case statements without affecting the code.

But, you might be on to something: the TXCOMPLETE event likely does not get fired, and then the following will indeed never be executed:

// Schedule next transmission
os_setTimedCallback(&sendjob, os_getTime()+sec2osticks(TX_INTERVAL), do_send);

You could copy that line into case EV_REJOIN_FAILED and hope the LMiC internals do not then trigger the following:

if (LMIC.opmode & OP_TXRXPEND) {
    Serial.println(F("OP_TXRXPEND, not sending"));
} else {

You could also consider trying to call setup() instead, or resetting the device within case EV_REJOIN_FAILED. See

(Arjan) #11

I’ve not validated your code, so it might very well be okay. But a warning when using this in EV_TXCOMPLETE: when no downlink is received in RX1, similar code on an RN2483 would return the SF used for RX2.


Probably best to post the code here. I’d already got rid of the double break and enabled the link checking when the problem was occurring. The only change to this code is that, as you’ll see, I now reinitialise LMIC on a rejoin failed.

When’s the best time to check the datarate? At the moment I’m just printing it, but the aim is to adjust the delay between transmissions based on the data rate currently set by ADR. Is there perhaps a better variable to monitor?
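One hedged approach (my own sketch, not an LMIC API): read LMIC.datarate in EV_TXCOMPLETE as in printDataRate above, estimate the time-on-air for that data rate, and derive the shortest send interval that stays inside TTN’s roughly 30 s/day fair-use budget:

```cpp
#include <cmath>

// Hypothetical helper (not part of LMIC): given the estimated time-on-air of
// one message at the current data rate, return the shortest send interval in
// seconds that keeps daily airtime within TTN's ~30 s fair-use allowance.
unsigned intervalForAirtime(double airtimeMsPerMsg) {
    const double budgetMsPerDay = 30000.0;  // 30 s of airtime per day
    const double secondsPerDay = 86400.0;
    return (unsigned)std::ceil(secondsPerDay * airtimeMsPerMsg / budgetMsPerDay);
}
```

With the airtimes discussed earlier in the thread, roughly 51 ms at SF7 allows an interval of about 2.5 minutes, while roughly 741 ms at SF11 forces about 36 minutes between messages.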

Thanks for the help with this one!

Copyright (c) 2015 Thomas Telkamp and Matthijs Kooijman

Permission is hereby granted, free of charge, to anyone
obtaining a copy of this document and accompanying files,
to do whatever they want with them without any restriction,
including, but not limited to, copying, modification and redistribution.

This example sends a valid LoRaWAN packet with payload "Hello,
world!", using frequency and encryption settings matching those of
The Things Network.

This uses OTAA (Over-the-air activation), where a DevEUI and
application key are configured, which are used in an over-the-air
activation procedure where a DevAddr and session keys are
assigned/generated for use with all further communication.

Note: LoRaWAN per sub-band duty-cycle limitation is enforced (1% in
g1, 0.1% in g2), but not the TTN fair usage policy (which is probably
violated by this sketch when left running for longer)!

To use this sketch, first register your application and device with
The Things Network, to set or generate an AppEUI, DevEUI and AppKey.
Multiple devices can use the same AppEUI, but each device has its own
DevEUI and AppKey.

Do not forget to define the radio type correctly in config.h.

TheThingsNetwork payload function:

function Decoder(bytes, port) {
  var retValue = {
    bytes: bytes
  };

  retValue.batt = bytes[0] / 10.0;
  if (retValue.batt === 0)
    delete retValue.batt;

  if (bytes.length >= 2) {
    retValue.humidity = bytes[1];
    if (retValue.humidity === 0)
      delete retValue.humidity;
  }

  if (bytes.length >= 4)
    retValue.temperature = (((bytes[2] << 8) | bytes[3]) / 10.0) - 40.0;

  if (bytes.length >= 6) {
    retValue.pressure = (bytes[4] << 8) | bytes[5];
    if (retValue.pressure === 0)
      delete retValue.pressure;
  }

  return retValue;
}

#include <lmic.h>
#include <hal/hal.h>
#include <SPI.h>
#include <Wire.h>

#include "adcvcc.h"

#include <BME280I2C.h>

#define debugSerial Serial
#define debugPrintLn(...) { if (debugSerial) debugSerial.println(__VA_ARGS__); }
#define debugPrint(...) { if (debugSerial) debugSerial.print(__VA_ARGS__); }

// Pin mapping for the RGBLED object:
#define redLed 9
#define greenLed 6
#define blueLed 5

// This EUI must be in little-endian format, so least-significant-byte
// first. When copying an EUI from ttnctl output, this means to reverse
// the bytes. For TTN issued EUIs the last bytes should be 0xD5, 0xB3,
// 0x70.
static const u1_t PROGMEM APPEUI[8] ={ };
void os_getArtEui (u1_t* buf) {
  memcpy_P(buf, APPEUI, 8);
}

// This should also be in little endian format, see above.
static const u1_t PROGMEM DEVEUI[8] = { };
void os_getDevEui (u1_t* buf) {
  memcpy_P(buf, DEVEUI, 8);
}

// This key should be in big endian format (or, since it is not really a
// number but a block of memory, endianness does not really apply). In
// practice, a key taken from ttnctl can be copied as-is.
// The key shown here is the semtech default key.
static const u1_t PROGMEM APPKEY[16] = {  };
void os_getDevKey (u1_t* buf) {
  memcpy_P(buf, APPKEY, 16);
}

static osjob_t sendjob;

// global environmental parameters
static float temp = 0.0;
static float pressure = 0.0;
static float humidity = 0.0;

BME280I2C bme; // Default : forced mode, standby time = 1000 ms
// Oversampling = pressure ×1, temperature ×1, humidity ×1, filter off.

/* ======================================================================
Function: ADC_vect
Purpose : IRQ Handler for ADC
Input : -
Output : -
Comments: used for measuring 8 samples low power mode, ADC is then in
free running mode for 8 samples
====================================================================== */
// Increment ADC counter

// Schedule TX every this many seconds (might become longer due to duty
// cycle limitations).
const unsigned TX_INTERVAL = 10;

// Pin mapping
const lmic_pinmap lmic_pins = {
  .nss = 10,
  .rst = 14,
  .dio = {2, 7, 8},
};
void printDataRate() {
  switch (LMIC.datarate) {
    case DR_SF12: debugPrintLn(F("Datarate: SF12")); break;
    case DR_SF11: debugPrintLn(F("Datarate: SF11")); break;
    case DR_SF10: debugPrintLn(F("Datarate: SF10")); break;
    case DR_SF9: debugPrintLn(F("Datarate: SF9")); break;
    case DR_SF8: debugPrintLn(F("Datarate: SF8")); break;
    case DR_SF7: debugPrintLn(F("Datarate: SF7")); break;
    case DR_SF7B: debugPrintLn(F("Datarate: SF7B")); break;
    case DR_FSK: debugPrintLn(F("Datarate: FSK")); break;
    default: debugPrint(F("Datarate Unknown Value: ")); debugPrintLn(LMIC.datarate); break;
  }
}

void updateEnvParameters() {
  BME280::TempUnit tempUnit(BME280::TempUnit_Celcius);
  BME280::PresUnit presUnit(BME280::PresUnit_hPa);
  bme.read(pressure, temp, humidity, tempUnit, presUnit);
}

void setColor(bool redValue, bool greenValue, bool blueValue) {
  digitalWrite(redLed, !redValue);
  digitalWrite(greenLed, !greenValue);
  digitalWrite(blueLed, !blueValue);
}

void onEvent (ev_t ev) {
  Serial.print(": ");
  switch (ev) {
    case EV_JOINING:
      setColor(1, 0, 0);
      break;
    case EV_JOINED:
      setColor(1, 1, 0);
      break;
    case EV_RFU1:
    case EV_JOIN_FAILED:
    case EV_REJOIN_FAILED:
      lmicStartup(); // Added this as it seems LMIC just stops if it ends up here
      break;
    case EV_TXCOMPLETE:
      debugPrintLn(F("EV_TXCOMPLETE (includes waiting for RX windows)"));
      if (LMIC.txrxFlags & TXRX_ACK)
        debugPrintLn(F("Received ack"));
      if (LMIC.dataLen) {
        debugPrint(F("Received "));
        debugPrint(LMIC.dataLen);
        debugPrintLn(F(" bytes of payload"));
        setColor(0, 1, 0);
      } else {
        setColor(1, 1, 0);
      }
      // Schedule next transmission
      os_setTimedCallback(&sendjob, os_getTime() + sec2osticks(TX_INTERVAL), do_send);
      break;
    case EV_RESET:
    case EV_RXCOMPLETE:
      // data received in ping slot
    default:
      debugPrintLn(F("Unknown event"));
      break;
  }
}

void do_send(osjob_t* j) {
  // Check if there is not a current TX/RX job running
  if (LMIC.opmode & OP_TXRXPEND) {
    debugPrintLn(F("OP_TXRXPEND, not sending"));
  } else {
    // Prepare upstream data transmission at the next possible time.
    // Here the sensor information should be retrieved:
    // Pressure: 300...1100 hPa
    // Temperature: -40…85 °C
    updateEnvParameters();

    int batt = (int)(readVcc() / 100); // readVcc returns mV; we need 100 mV steps
    byte batvalue = (byte)batt;

    int t = (int)((temp + 40.0) * 10.0);
    // t [-40..+85] => +40 => [0..125] => *10 => [0..1250]
    int p = (int)(pressure); // p [300..1100]
    int h = (int)(humidity);

    unsigned char mydata[6];
    mydata[0] = batvalue;
    mydata[1] = h & 0xFF;
    mydata[2] = t >> 8;
    mydata[3] = t & 0xFF;
    mydata[4] = p >> 8;
    mydata[5] = p & 0xFF;
    LMIC_setTxData2(1, mydata, sizeof(mydata), 0);
    debugPrintLn(F("PQ")); // Packet queued
  }
  // Next TX is scheduled after the TX_COMPLETE event.
}

void lmicStartup() {
  // Reset the MAC state. Session and pending data transfers will be discarded.
  LMIC_reset();

#if defined(CFG_eu868)
  // Set up the channels used by The Things Network, which corresponds
  // to the defaults of most gateways. Without this, only three base
  // channels from the LoRaWAN specification are used, which certainly
  // works, so it is good for debugging, but can overload those
  // frequencies, so be sure to configure the full frequency range of
  // your network here (unless your network autoconfigures them).
  // Setting up channels should happen after LMIC_setSession, as that
  // configures the minimal channel set.
  LMIC_setupChannel(0, 868100000, DR_RANGE_MAP(DR_SF12, DR_SF7), BAND_CENTI); // g-band
  LMIC_setupChannel(1, 868300000, DR_RANGE_MAP(DR_SF12, DR_SF7B), BAND_CENTI); // g-band
  LMIC_setupChannel(2, 868500000, DR_RANGE_MAP(DR_SF12, DR_SF7), BAND_CENTI); // g-band
  LMIC_setupChannel(3, 867100000, DR_RANGE_MAP(DR_SF12, DR_SF7), BAND_CENTI); // g-band
  LMIC_setupChannel(4, 867300000, DR_RANGE_MAP(DR_SF12, DR_SF7), BAND_CENTI); // g-band
  LMIC_setupChannel(5, 867500000, DR_RANGE_MAP(DR_SF12, DR_SF7), BAND_CENTI); // g-band
  LMIC_setupChannel(6, 867700000, DR_RANGE_MAP(DR_SF12, DR_SF7), BAND_CENTI); // g-band
  LMIC_setupChannel(7, 867900000, DR_RANGE_MAP(DR_SF12, DR_SF7), BAND_CENTI); // g-band
  LMIC_setupChannel(8, 868800000, DR_RANGE_MAP(DR_FSK, DR_FSK), BAND_MILLI); // g2-band
  // TTN defines an additional channel at 869.525 MHz using SF9 for class B
  // devices' ping slots. LMIC does not have an easy way to define this
  // frequency and support for class B is spotty and untested, so this
  // frequency is not configured here.
#elif defined(CFG_us915)
  // NA-US channels 0-71 are configured automatically,
  // but only one group of 8 (a subband) should be active.
  // TTN recommends the second sub band, 1 in a zero based count.
  LMIC_selectSubBand(1);
#endif

  LMIC_setClockError(MAX_CLOCK_ERROR * 1 / 100);

  // Start job (sending automatically starts OTAA too)
  do_send(&sendjob);
}

void setup() {
  pinMode(redLed, OUTPUT);
  pinMode(greenLed, OUTPUT);
  pinMode(blueLed, OUTPUT);
  setColor(1, 0, 1);
  if (!bme.begin())
    debugPrintLn(F("No valid bme280 sensor!"));

  // LMIC init
  os_init();
  lmicStartup();
}

void loop() {
  os_runloop_once();
}

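As a host-side sanity check (my own addition, not part of the sketch), the 6-byte packing in do_send can be round-tripped against the same arithmetic the TTN Decoder above uses:

```cpp
#include <cstdint>

// Round-trip check of the payload layout used in do_send:
// [batt (100 mV steps), humidity, temp_hi, temp_lo, press_hi, press_lo]
struct Reading { float batt; int humidity; float temperature; int pressure; };

// Mirrors the encoding arithmetic in do_send
void encodeReading(float tempC, float pressureHPa, float humidityPct,
                   uint8_t batvalue, uint8_t out[6]) {
    int t = (int)((tempC + 40.0f) * 10.0f);  // -40..85 C -> 0..1250
    int p = (int)pressureHPa;                // 300..1100 hPa
    int h = (int)humidityPct;
    out[0] = batvalue;
    out[1] = (uint8_t)(h & 0xFF);
    out[2] = (uint8_t)(t >> 8);
    out[3] = (uint8_t)(t & 0xFF);
    out[4] = (uint8_t)(p >> 8);
    out[5] = (uint8_t)(p & 0xFF);
}

// Mirrors the JavaScript Decoder arithmetic
Reading decodeReading(const uint8_t b[6]) {
    Reading r;
    r.batt = b[0] / 10.0f;
    r.humidity = b[1];
    r.temperature = (((b[2] << 8) | b[3]) / 10.0f) - 40.0f;
    r.pressure = (b[4] << 8) | b[5];
    return r;
}
```

Encoding 21.5 °C, 1013 hPa, 55 % and a 3.3 V battery and decoding again should reproduce the original values, confirming the byte layout and the decoder agree.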
(Arinze) #13

Hello @tkerby ,

I am having a similar challenge, with devices seeming to give up after one re-join failure. I tried getting the device to retry the join by adding the following.


I notice the device tries to rejoin using SF12 and fails. Were you able to get the device to successfully re-join after a rejoin fails? LMIC.cpp clearly seems to have a rejoin procedure that tries multiple times at random intervals and also changes the data rate with each try.

(Remko) #14

You need to enable link-check to mitigate this in LMIC.

(Remko) #15

Rejoin failed seems to be caused by a failing link check. I observed the same, and found the cause to be that LMIC was not able to receive any downlink message.
To be sure that my node is able to receive downlink messages, I test using confirmed uplink messages.

In many cases I found that the cause of not receiving downlink messages was programming the ATmega328 with 16 MHz fuses while the chip is running at 8 MHz with an external crystal. This results in completely wrong timing of the RX windows.
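The scale of that timing error is easy to see. An illustration, under the stated assumption that all software timers are derived from the CPU clock the firmware believes it has:

```cpp
// If the firmware is built for a 16 MHz clock but the crystal actually runs
// at 8 MHz, every timer tick takes twice as long as assumed, so all software
// delays stretch by a factor of two.
double actualDelaySeconds(double nominalS, double assumedClockHz, double realClockHz) {
    return nominalS * assumedClockHz / realClockHz;
}
```

A nominal 1 s RX1 delay then actually opens after 2 s, a full second late, which is far outside the receive-window tolerance, so every downlink is missed.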


I found this line of code (from Kersing)
OTAA joins smoothly now.