ADR with RN2483 (The Things Node)

Hi all,

I have an RN2483-based devkit (The Things Node) and am trying to get ADR running on it in both directions, i.e. have it reliably adjust DR up as well as down.
Where I do my tests, I typically see up to three more or less distant gateways; the SNR typically varies between 0 dB and -10 dB or so. From my understanding of the per-SF SNR limits, I would expect ADR to settle around SF8-SF10 to keep roughly 10 dB of link margin - please correct me if I'm wrong.
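As a rough sanity check I used the approximate SX127x demodulation floors per SF; those values and the "typical" SNR are my own assumptions, not anything the TTN server publishes:

```cpp
// Rough sanity check, assuming approximate SX127x demodulation floors per SF
// at 125 kHz bandwidth; the thresholds the network server uses may differ.
#include <stdio.h>

int main() {
  const int   sf[]       = {7, 8, 9, 10, 11, 12};
  const float floor_db[] = {-7.5f, -10.0f, -12.5f, -15.0f, -17.5f, -20.0f};
  const float snr_db     = -5.0f;  // a typical SNR I see at my gateways

  for (int i = 0; i < 6; ++i)
    printf("SF%d: floor %5.1f dB -> margin %5.1f dB\n",
           sf[i], floor_db[i], snr_db - floor_db[i]);
  // With a ~10 dB target margin, SF9/SF10 come out as the sweet spot.
  return 0;
}
```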

Starting with SF11, ADR does seem to work: after 64 uplinks (i.e. ADR_ACK_LIMIT?) I get a downlink from TTN, and the DR is adjusted up, e.g. to DR2/SF10.

Starting at SF7, it does NOT seem to work: while there are some uplinks after ADR_ACK_LIMIT, the node never adjusts DR down / SF up. I've also tried sending manual downlinks, and requesting confirmation in uplinks. After confirmation, the RN2483 reports a link margin of 5 dB - i.e. I think ADR should command SF up. However, even after hundreds of uplinks with marginal SNR and many missed packets, it remains at SF7/DR5.

I use the TTN Arduino library and did not see any mechanism in it to actually perform ADR. From the RN2483 Command Reference it sounds like it's all handled internally by the RN2483 once ADR is turned on. During the startup code generated by the TTN library, ADR is turned on with mac set adr on, and I join using OTAA, so all network parameters should be controlled by TTN.
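For completeness, the relevant part of my sketch is essentially the stock OTAA example (abbreviated here from memory of the TTN SendOTAA sample - check against your library version; EUIs/keys redacted, interval for illustration only). There is no ADR-related call in the application code itself:

```cpp
#include <TheThingsNetwork.h>

// Redacted; set to your own values from the TTN console.
const char *appEui = "xxxxxxxxxxxxxxxx";
const char *appKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";

#define loraSerial Serial1
#define debugSerial Serial
#define freqPlan TTN_FP_EU868

TheThingsNetwork ttn(loraSerial, debugSerial, freqPlan);

void setup() {
  loraSerial.begin(57600);
  debugSerial.begin(9600);
  ttn.showStatus();          // prints Model/Version as in the log below
  ttn.join(appEui, appKey);  // OTAA join; the library turns ADR on during setup
}

void loop() {
  byte payload[1] = {0x00};
  ttn.sendBytes(payload, sizeof(payload));
  delay(300000);             // interval for illustration only; mind fair use
}
```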

Am I missing something fundamental here?

Thanks for any pointers,
Alex

Hm - maybe some more context on the RN2483 version and the commands sent would be helpful… Here's the output from my TTN-OTAA-Arduino-example-based code when joining:

Model: RN2483
Version: 1.0.3
Sending: mac set deveui xxxxxxxxxxxxxx41
Sending: mac set adr on
Model: RN2483
Version: 1.0.3
Sending: mac set deveui xxxxxxxxxxxxxx41
Sending: mac set adr on
Sending: mac set deveui xxxxxxxxxxxxxx41
Sending: mac set appeui xxxxxxxxxxxxxxB4
Sending: mac set appkey xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx63
Sending: mac save
Sending: mac set ch drrange 1 0 6
Sending: mac set ch dcycle 0 799
Sending: mac set ch dcycle 1 799
Sending: mac set ch dcycle 2 799
Sending: mac set ch dcycle 3 799
Sending: mac set ch freq 3 867100000
Sending: mac set ch drrange 3 0 5
Sending: mac set ch status 3 on
Sending: mac set ch dcycle 4 799
Sending: mac set ch freq 4 867300000
Sending: mac set ch drrange 4 0 5
Sending: mac set ch status 4 on
Sending: mac set ch dcycle 5 799
Sending: mac set ch freq 5 867500000
Sending: mac set ch drrange 5 0 5
Sending: mac set ch status 5 on
Sending: mac set ch dcycle 6 799
Sending: mac set ch freq 6 867700000
Sending: mac set ch drrange 6 0 5
Sending: mac set ch status 6 on
Sending: mac set ch dcycle 7 799
Sending: mac set ch freq 7 867900000
Sending: mac set ch drrange 7 0 5
Sending: mac set ch status 7 on
Sending: mac set pwridx 1
Sending: mac set retx 7
Sending: mac set dr 5
Sending: mac join otaa
Join accepted. Status: 00000421
DevAddr: xxxxxx7B
Sending: mac set linkchk 14400

I've modified the TTN library to not send mac set rx2 3 869525000 to the RN2483 during startup, following some discussions and bug reports. I understand that TTN does NOT actually transmit using DR3 in RX2 during OTAA (in contrast to later!), and my node indeed did not join at high SFs before I removed that command from the startup sequence. However, I understand that after joining via OTAA the RX2 settings should be commanded by the network, so I think this should not affect ADR, should it?

I may begin to understand a bit better, but maybe someone could help me by confirming my understanding:

  • After ADR_ACK_LIMIT uplinks without a downlink, the node sets ADRACKReq in the next uplink.
  • After a max of ADR_ACK_DELAY further uplinks, the network must send a downlink (either scheduled from the application or otherwise a dedicated ACK)
  • If - and only if! - the node does not receive any downlink after ADR_ACK_DELAY uplinks, it will increase the SF by one, and repeat the ADRACKReq cycle.

This means that ADR will adjust DR down only if no downlinks at all are received in ADR_ACK_LIMIT + ADR_ACK_DELAY frames. The link margin calculated at the server and reported in LinkCheckAns does not come into this at all; it is used only to adjust DR up, via the LinkADRReq command, if there is excess margin.
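In pseudo-code, my understanding of the device-side backoff boils down to something like this (structure and names are mine, not actual RN2483 firmware, and it ignores the TX-power step; ADR_ACK_LIMIT = 64 and ADR_ACK_DELAY = 32 for EU868):

```cpp
// Sketch of my understanding of the device-side ADR backoff; not actual
// RN2483 firmware, and it ignores the TX-power step for simplicity.
const int ADR_ACK_LIMIT = 64;
const int ADR_ACK_DELAY = 32;

int uplinksWithoutDownlink = 0;
int currentDR = 5;  // DR5 = SF7 in EU868

void decreaseDataRateByOne() {  // hypothetical helper: DR5 -> DR4 -> ... -> DR0
  if (currentDR > 0) --currentDR;
}

bool adrAckReqBit() {           // set ADRACKReq in the next uplink?
  return uplinksWithoutDownlink >= ADR_ACK_LIMIT;
}

void onUplinkWithoutDownlink() {
  ++uplinksWithoutDownlink;
  if (uplinksWithoutDownlink >= ADR_ACK_LIMIT + ADR_ACK_DELAY) {
    // Still no reply after requesting one: assume the link is too weak,
    // step the DR down (SF up) and give the network another ADR_ACK_DELAY
    // uplinks to answer.
    decreaseDataRateByOne();
    uplinksWithoutDownlink = ADR_ACK_LIMIT;  // repeat the ADRACKReq cycle
  }
}

void onAnyDownlinkReceived() {
  uplinksWithoutDownlink = 0;
}
```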

If this is correct (can anyone confirm?), that raises a new question:

Is it compliant if the application code on an end-device with ADR on adjusts the DR down, as long as it adjusts DR up when commanded via ADR?
I did this manually during testing and it worked fine, but I don’t know if this is compliant.

I'd say that a server implementation might also tell the node to use a slower data rate, if it sees that many uplinks were missed and that the available margin is low. But that's just a wild guess; for TTN, I did not validate that in the source code.

As the topic title mentions the RN2483: when using a third-party LoRaWAN stack, it's not the application code that should implement all this. In your case the RN2483 should handle it all. For a LoRaWAN stack I'd guess the above will do (if the device is already at maximum TX power), but I hope someone can give you an authoritative answer.

Beware that ADR should also take the TX power into account. If the device is not at maximum power, then maybe it should increase that before increasing the SF? (I don't know.)

Just to be sure: did you see the TTN documentation?

How would a server know that uplinks were missed? There’s no real expectation of when a node will try to transmit.

A low margin however, yes, that would make sense; not sure about TTN, but my DIY server does take that into account.

Thanks, Arjan, for your response!

Ha!
Thanks for the pointer to the code. The way I understand it, handleUplinkADR() calls ADRSettings(), where the DR is adjusted up but never down (in contrast to power, which is also increased).

So that settles it!
TTN adjusts DR down/SF up based on downlink frame-loss only, not on link margin.

I have also found a post on the ChirpStack forum giving a rationale for LoRa Server:
[…] LoRa Server never decreases the DR as this could result into a domino effect. E.g. when your network is dense, then lowering the data-rate on one device will impact other devices so other devices might also need to change to a lower data-rate (following the ADR algorithm), and so on…
[…]

Makes sense I guess, but at the same time it can leave devices with a too-high DR. LoRaWAN is best effort…
I feel the most standard-conformant way to deal with this is to begin ADR at SF12, but that seems wasteful.

But that's the thing: the stack will not adjust DR down based on a low link margin, only on a loss of downlinks, as it's not commanded to do so by the network.

How is this handled for OTAA? I have not observed the RN2483 increase SF when OTAA joins fail; it seems this has to be done in the application.
The same could then be done when (re-)activating ADR, e.g. in a tracker application that has detected it will transition to a long stationary phase (no movement, vehicle locked).

Indeed. If the application messes around with DR, it also needs to take care of that.
Messy. :confused:

Yes, but thanks for the hint :slight_smile:
On this topic I found the TTN documentation a bit ambiguous. What helped me was reading and parsing - slowly, carefully! - the LoRaWAN spec and its implications.

Indeed, the server could not take that into account. The end device stack could, however.

Thanks to Arjan’s pointer at the right bit of TTN code, I think I could confirm now that the TTN server does not decrease DR based on link margin.

Gaps in the uplink counter, I’d say.

I don’t understand what you’re trying to say here (also given the part you quoted). If “stack” refers to the LoRaWAN stack in the node (which I tried to refer to) then: the node would not even know the link margin of its uplinks. So indeed it can only act on not receiving downlinks. I think we agree on that. But still then: I really expect the LoRaWAN stack in the device to take care of all that. Not your application code.

I was quite sure it did, but I guess I’m wrong. I’ve only used the RN2483 along with the TTN Arduino library. That code is on GitHub too, and its OTAA code takes care of retries, but does not lower the data rate. So, if indeed my tests showed an increasing SF during joins, then either some other part of the library was doing that, or the RN2483 handled all. (Other LoRaWAN stacks, such as LMIC, certainly do take care of it.) It must be documented somewhere.

Yes, with “Stack” I meant the LoRaWAN stack in the RN2483.
Originally I thought the server would recognise a low link margin and send a request to decrease DR through ADR, so the RN2483 could act on it. We've established that the server does not do that.

The RN2483 does have access to the link margin as reported in LinkCheckAns, but the standard says nothing about using that to adjust DR on the end-device side, so the RN2483's stack will not act on it.

So if one wants to decrease DR if the link margin is low, the application code has to act on the link margin reported in LinkCheckAns.

I’m not sure this is compliant, though.
Or even a good idea…
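For reference, what I did manually would roughly look like this in application code (Arduino-style, RN2483 on Serial1). The commands mac get mrgn and mac set dr are from the RN2483 command reference; the margin-to-DR table and the helper names are invented by me:

```cpp
// Hypothetical application-side logic; thresholds and names are my own.
String rn2483Cmd(const String &cmd) {   // send a command, return the reply line
  Serial1.println(cmd);
  return Serial1.readStringUntil('\n');
}

int drForMargin(long margin) {          // made-up thresholds, EU868 DR0..DR5
  if (margin >= 20) return 5;           // lots of headroom -> SF7
  if (margin >= 15) return 4;
  if (margin >= 10) return 3;
  if (margin >= 7)  return 2;
  if (margin >= 4)  return 1;
  return 0;                             // barely any margin -> SF12
}

void adjustDrFromLinkCheck() {
  // 'mac get mrgn' returns the demodulation margin from the last LinkCheckAns
  // (255 means no link check answer has been received yet).
  long margin = rn2483Cmd("mac get mrgn").toInt();
  if (margin == 255) return;            // nothing to act on
  String cmd = "mac set dr ";
  cmd += drForMargin(margin);
  rn2483Cmd(cmd);
}
```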

Coming down from high SF, ADR steers my node to something like SF8 to SF10. At SF7 I sometimes get almost no packet loss, sometimes a few minutes of 100% loss, mostly 10-50% loss.

So the loss is not that bad, and at 10-50% loss it will probably converge on SF8 or SF9 within a few days, simply by happening to hit a bad stretch of link between ADR_ACK_LIMIT and ADR_ACK_DELAY once or twice. I have not tried that yet.

As the wise man says

But maybe there’s a more or less established way of making this more efficient in the application code, as in the OTAA case below:

I think I’ve seen people report a DR reduction after unsuccessful join attempts, but I don’t see it in the TTN code, and don’t see it in my RN2483’s behaviour. Also, I can’t find any mention of this in the RN2483 command reference. So I was speculating that this must be done in other people’s application code.

I use the TTN-supplied Arduino libraries, as the RN2483 is in a The Things Node, and I started with the example sketches for that.

Different firmware revisions behave differently, which version do you have?

Sounds like you are transmitting far too often; do you keep the fair access policy (30 seconds of airtime per node per day) in mind? That allows for roughly one packet every 3 minutes using SF7.
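(For the arithmetic, assuming a short payload of roughly 60 ms airtime at SF7: 30 s / 60 ms ≈ 500 packets per day, which is indeed about one packet every 3 minutes.)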

It would have access to that if those packets are exchanged. However, LoRaWAN is meant to be uplink-heavy with just an occasional downlink, so downlinks are avoided as much as possible, and as a result the node will only get downlinks very infrequently. Keep in mind a node is allowed at most 10 downlinks per day in the TTN community network. That includes ACKs and link check responses.

Good point. 1.0.3
There are some logs further up in the thread that detail the configuration of the module.

Yes, I do keep fair use in mind.
This is during development/experimentation. When I hit 30s airtime, I quit for the day and shut down the node.
When I run a longer experiment, I transmit much less often, like every 5, 10 or 15 minutes, depending on SF.

Part of my application will be a vehicle tracker. The idea is that as soon as the application detects motion, it switches to blind ADR. Once it detects a long stationary phase (no movement, vehicle locked, time of day), it switches back to ADR. This would happen very rarely during the day, 2-3 times max. There is very little need for further downlinks, maybe 3 more downlinks per week. The ADR ACK would add maybe 1-2 packets a day at a 10-15 minute cadence, if conditions don't change.
I don’t think I would have a need for confirmed packets.

Here it would make sense to start at DRs that avoid significant packet loss. This could be done by starting at a high SF and relying on ADR to regulate DR up, but this would be wasteful in airtime.

I think it could also be done by requesting a link confirmation at relatively high SF, looking at the link margin, switching to an appropriate DR and starting ADR.

I switched off my gateway and switched on a The Things Node (an old RN2483, with TTN library). And I remembered wrong: after 15 denied joins it still did not switch to a lower data rate, so it seems one indeed needs application code for that, for an (old) RN2483. :frowning: (But not for LMIC.)

(Still then, OTAA Join is unrelated to ADR, which I really think the RN2483 will handle.)

At what SF would you then start ADR again? Also, 2-3 times a day (max) still seems quite often: taking ADR_ACK_LIMIT and the Fair Access Policy into account I wonder if, say, 12 hours is enough for ADR to give good results. Let us know! :slight_smile:

Okay, at least I’m not crazy.
What I find a bit funny is that apparently different end node stacks do things differently, so buyer beware.
I use the RN2483 for getting my feet wet with LoRaWAN/TTN now, my final application might very well use something different - so thanks for the heads up!

What a very good question. I think I would try with an 'intermediate' SF, like SF9 or so. Send one packet with a confirmation request at SF9, look at the link margin, adjust SF according to a look-up table, turn on ADR. Then I would hope ADR would not really need 12 hours to adjust to a correct DR!

For reference, at least in my neck of the woods (urban environment, not a very congested network), I get the packet loss subjectively described above at SF7, but ADR steered me to SF11 in an experiment I ran over the weekend. So at least in this environment SF9 seems like a reasonable starting point.

When I stated that I am abiding by the fair access limitations by manually keeping tally of my airtime for high SFs, and just calling it a day before I exceed 30s, I made one HUGE mistake:

I was using the estimated airtime reported in the Application Data tab of the TTN console. Since today I run my own gateway, and on its Gateway Traffic tab the reported airtime is about 60% higher than in the Application Data tab (e.g. 102.9 ms vs 61.9 ms for 6 bytes of payload at SF8)!
I double-checked with the TTN airtime calculator, and it spits out numbers consistent with the Gateway Traffic tab. It never occurred to me to double-check earlier, as I had no access to a Gateway Traffic tab and trusted the data shown in the Application Data tab of the TTN Console.
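To convince myself, I redid the math with the usual LoRa time-on-air formula; assuming the standard 13 bytes of LoRaWAN overhead on top of my 6-byte payload, explicit header, CR 4/5 and an 8-symbol preamble, it lands right on the Gateway Traffic number. The helper below is my own quick check, not TTN's code:

```cpp
// Quick airtime check (my own helper, not TTN code), following the usual
// LoRa time-on-air formula for explicit header, CRC on, CR 4/5, 8-symbol
// preamble and no low-data-rate optimisation.
#include <math.h>
#include <stdio.h>

double lora_airtime_ms(int sf, double bw_hz, int phy_payload_bytes) {
  const int preamble_symbols = 8, cr = 1, h = 0 /* explicit header */, de = 0;
  double t_sym = pow(2, sf) / bw_hz * 1000.0;            // symbol time in ms
  double t_preamble = (preamble_symbols + 4.25) * t_sym;
  double tmp = ceil((8.0 * phy_payload_bytes - 4.0 * sf + 28 + 16 - 20 * h)
                    / (4.0 * (sf - 2 * de))) * (cr + 4);
  double n_payload = 8 + fmax(tmp, 0.0);
  return t_preamble + n_payload * t_sym;
}

int main() {
  // 6 application bytes + 13 bytes LoRaWAN overhead = 19-byte PHY payload
  printf("%.1f ms\n", lora_airtime_ms(8, 125000.0, 6 + 13));  // ~102.9 ms
  return 0;
}
```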

Holy smokes that’s bad :confused:
There seems to be a bug report about this, but with no conclusion.

Were you guys aware of that?

Weren't you the one that quoted the 'numbers' (ADR_ACK_LIMIT)? So how many packets are required, and how much time will that take at SF9? (Check this issue against LoRaMAC-node for a pointer :slight_smile: )


Well… 64+32=96. Yes of course. You’re right. :man_facepalming:
My gut feeling is still more attuned to IP-style timescales and timeouts, and my testing modus operandi, in which I send fairly frequently during testing and then shut down for the day before I reach 30 s of airtime, has evidently not been conducive to changing that.

The last post states: “For development purposes it may be acceptable to exceed the fair access policy.”

My interpretation of the thread is that the limit is not intended to prevent development work.

My interpretation is that you are still using a shared medium (radio waves), so you should not exceed the allowances too much, as that will have an impact on other nodes trying to communicate.
One way to circumvent this issue would be to connect the gateway and node (use an attenuator to prevent damage to the hardware) directly without using antennas.

Thanks for the pointer, this is good to know.
However, I do agree with Jac @kersing: we are sharing a medium, after all.

Now that I have my own gateway, I am contributing to alleviating the congestion in our city's network instead of only adding to it, and I can also see that there is not much congestion where I am: my gateway has yet to see a packet that is not from me…
…of course, I'm sure the "professional" farther-away gateways I was relying on previously were put up for a reason, and thanks to their much better-placed antennas they will see a lot of traffic that I don't.


This only works for nodes with an external antenna connector - which, e.g., The Things Node does not have.