ADR with RN2483 (The Things Node)

Thanks, Arjan, for your response!

Ha!
Thanks for the pointer to the code. The way I understand it, handleUplinkADR() calls ADRSettings(), where the DR is adjusted up, but never down (in contrast to the power, which is also increased). A rough paraphrase of that asymmetry follows below.

So that settles it!
TTN adjusts DR down/SF up based on downlink frame-loss only, not on link margin.
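For anyone who doesn't want to dig through the Go source: the gist of it, as I read it, paraphrased in C++ (all names and constants here are mine, not TTN's — a sketch of the behaviour, not the actual implementation):

```cpp
// Hedged C++ paraphrase of the server-side ADR logic as I read the Go
// source; names and constants are mine, not TTN's. The point is the
// asymmetry: the data rate only ever steps up, while the TX power is
// also adjusted when the DR can't move.
const int DR_MAX = 5;            // EU868: DR5 = SF7BW125
const int TXPOWER_MIN_DBM = 2;
const int TXPOWER_MAX_DBM = 14;

void adrStep(int &dr, int &txPowerDbm, float marginDb) {
  int nStep = (int)(marginDb / 3.0f);                  // ~3 dB of SNR per step
  while (nStep > 0 && dr < DR_MAX) { dr++; nStep--; }  // DR goes up only
  while (nStep > 0 && txPowerDbm > TXPOWER_MIN_DBM) { txPowerDbm -= 2; nStep--; }
  while (nStep < 0 && txPowerDbm < TXPOWER_MAX_DBM) { txPowerDbm += 2; nStep++; }
  // No branch ever decreases dr -- which is exactly what this thread is about.
}
```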

I have also found a post on the ChirpStack forum giving a rationale for LoRa Server:
[…] LoRa Server never decreases the DR as this could result into a domino effect. E.g. when your network is dense, then lowering the data-rate on one device will impact other devices so other devices might also need to change to a lower data-rate (following the ADR algorithm), and so on…
[…]

Makes sense I guess, but at the same time it can leave devices with a too-high DR. LoRaWAN is best effort…
I feel the most standard-conformant way to deal with this is to begin ADR at SF12, but that seems wasteful.

But that’s the thing: the stack will not adjust DR down based on a low link margin, only on a loss of downlinks, as the network never commands it to.

How is this handled for OTAA? I have not observed the RN2483 increase SF when OTAA joins fail; it seems this has to be done in the application (a sketch follows below).
The same could then be done when (re-)activating ADR, e.g. in a tracker application that has detected it is about to transition to a long stationary phase (no movement, vehicle locked).
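In case it helps anyone, here is a minimal sketch of how application code could do that over the RN2483's serial interface. The `mac set dr` and `mac join otaa` commands are from the RN2483 command reference; the pins, timeouts and response handling are assumptions on my part, untested:

```cpp
#include <Arduino.h>
#include <SoftwareSerial.h>

// Application-side OTAA fallback: step the data rate down (SF up) after
// each denied join. Commands are from the RN2483 command reference; the
// wiring, keys, timeouts and response parsing here are illustrative only.
SoftwareSerial loraSerial(10, 11);       // RX, TX -- adjust to your wiring

String readLine(unsigned long timeoutMs) {
  loraSerial.setTimeout(timeoutMs);
  String s = loraSerial.readStringUntil('\n');
  s.trim();                              // strip the trailing '\r'
  return s;
}

bool joinWithFallback() {
  for (int dr = 5; dr >= 0; dr--) {      // EU868: DR5 = SF7 ... DR0 = SF12
    loraSerial.println("mac set dr " + String(dr));
    readLine(1000);                      // expect "ok"
    loraSerial.println("mac join otaa");
    if (readLine(1000) != "ok") continue;          // command rejected outright
    if (readLine(30000) == "accepted") return true;
    // "denied" or timeout: retry one data rate lower (one SF higher)
  }
  return false;                          // exhausted SF12 as well
}

void setup() {
  loraSerial.begin(57600);               // RN2483 default baud rate
  // setting deveui/appeui/appkey omitted here
  joinWithFallback();
}

void loop() {}
```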

Indeed. If the application messes around with DR, it also needs to take care of that.
Messy. :confused:

Yes, but thanks for the hint :slight_smile:
On this topic I found the TTN documentation a bit ambiguous. What helped me was reading and parsing - slowly, carefully! - the LoRaWAN spec and its implications.

Indeed, the server could not take that into account. The end device stack could, however.

Thanks to Arjan’s pointer at the right bit of TTN code, I think I can now confirm that the TTN server does not decrease DR based on link margin.

Gaps in the uplink counter, I’d say.

I don’t understand what you’re trying to say here (also given the part you quoted). If “stack” refers to the LoRaWAN stack in the node (which I tried to refer to) then: the node would not even know the link margin of its uplinks. So indeed it can only act on not receiving downlinks. I think we agree on that. But still then: I really expect the LoRaWAN stack in the device to take care of all that. Not your application code.

I was quite sure it did, but I guess I’m wrong. I’ve only used the RN2483 along with the TTN Arduino library. That code is on GitHub too, and its OTAA code takes care of retries, but does not lower the data rate. So, if indeed my tests showed an increasing SF during joins, then either some other part of the library was doing that, or the RN2483 handled it all. (Other LoRaWAN stacks, such as LMIC, certainly do take care of it.) It must be documented somewhere.

Yes, with “Stack” I meant the LoRaWAN stack in the RN2483.
Originally I thought the server would recognise a low link margin and send a request to decrease DR through ADR, so the RN2483 could act on it. We’ve established the server does not do that.

The RN2483 does have access to the link margin as reported in LinkCheckAns, but the standard says nothing about using that to adjust DR on the end-device side, so the RN2483’s stack will not act on it.

So if one wants to decrease DR when the link margin is low, the application code has to act on the link margin reported in LinkCheckAns (see the sketch below).

I’m not sure this is compliant, though.
Or even a good idea…
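For what it’s worth, reading that margin from application code looks roughly like this with the RN2483’s documented commands (`mac set linkchk`, `mac get mrgn`); the serial plumbing is the same as in the join sketch above, and the exact sequencing is my assumption:

```cpp
// Reading the LinkCheckAns margin from application code. `mac set linkchk`
// schedules a LinkCheckReq with upcoming uplinks; `mac get mrgn` returns
// the margin the network reported (255 = no answer received). Uses the
// loraSerial/readLine plumbing from the join sketch; error handling omitted.
int readLinkMargin() {
  loraSerial.println("mac set linkchk 1");  // piggyback a LinkCheckReq
  readLine(1000);                           // expect "ok"
  loraSerial.println("mac tx uncnf 1 00");  // dummy uplink to carry it
  readLine(1000);                           // expect "ok"
  readLine(30000);                          // "mac_tx_ok" (or "mac_rx ...")
  loraSerial.println("mac set linkchk 0");  // one-shot: disable again
  readLine(1000);
  loraSerial.println("mac get mrgn");
  return readLine(1000).toInt();            // dB above the demodulation floor
}
```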

Coming down from high SF, ADR steers my node to something like SF8 to SF10. On SF7 I sometimes get almost no packet loss, sometimes a few minutes of 100% loss, but mostly 10%-50% loss.

So the loss is not that bad, and at 10-50% loss it will probably converge on SF8 or SF9 within a few days, simply by happening to hit a patch of bad link during the ADR_ACK_LIMIT/ADR_ACK_DELAY window once or twice (see the sketch below for how that backstop works). I have not tried that yet.
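For reference, the backstop in question works roughly like this per the LoRaWAN 1.0.x spec. This is a schematic sketch of the mechanism, not the RN2483’s actual internals:

```cpp
// Schematic sketch of the LoRaWAN 1.0.x ADR backstop (not the RN2483's
// actual internals). After ADR_ACK_LIMIT uplinks without any downlink the
// node sets the ADRACKReq bit; if ADR_ACK_DELAY further uplinks still
// bring no downlink, it steps the DR down -- the only path by which the
// data rate ever decreases.
const int ADR_ACK_LIMIT = 64;
const int ADR_ACK_DELAY = 32;
int adrAckCnt = 0;

void onUplink(int &dr, bool &adrAckReq) {
  adrAckCnt++;
  adrAckReq = (adrAckCnt >= ADR_ACK_LIMIT);     // ask network for any downlink
  if (adrAckCnt >= ADR_ACK_LIMIT + ADR_ACK_DELAY && dr > 0) {
    dr--;                                       // lower DR = higher SF
    adrAckCnt = ADR_ACK_LIMIT;                  // wait another ADR_ACK_DELAY
  }
}

void onDownlink(bool &adrAckReq) {
  adrAckCnt = 0;                                // any downlink resets the counter
  adrAckReq = false;
}
```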

As the wise man says

But maybe there’s a more or less established way of making this more efficient in the application code, as in the OTAA case below:

I think I’ve seen people report a DR reduction after unsuccessful join attempts, but I don’t see it in the TTN code, and don’t see it in my RN2483’s behaviour. Also, I can’t find any mention of this in the RN2483 command reference. So I was speculating that this must be done in other people’s application code.

I use the TTN-supplied Arduino libraries, as the RN2483 is in a The Things Node, and I started with the example sketches for that.

Different firmware revisions behave differently; which version do you have?

Sounds like you are transmitting far too often. Do you keep the fair access policy (30 seconds of airtime per node per day) in mind? That allows for roughly one packet every 3 minutes using SF7.

It would have access to that if those packets were exchanged. However, LoRaWAN is meant to be uplink-heavy with just an occasional downlink, so downlinks are avoided as much as possible, and as a result the node will only get downlinks very infrequently. Keep in mind a node is allowed at most 10 downlinks each day in the TTN community network. That includes ACKs and link check responses.

Good point. 1.0.3
There are some logs further up in the thread that detail the configuration of the module.

Yes, I do keep fair use in mind.
This is during development/experimentation. When I hit 30s airtime, I quit for the day and shut down the node.
When I run a longer experiment, I transmit much less often, like every 5, 10 or 15 minutes, depending on SF.

Part of my application will be a vehicle tracker. The idea is that as soon as the application detects motion, it switches to blind ADR. Once it detects a long stationary phase (no movement, vehicle locked, time of day), it switches back to ADR. This would happen rarely, 2-3 times a day at most. There is very little need for further downlinks, maybe 3 more downlinks per week. ADR Ack would add maybe 1-2 packets a day at a 10-15 min cadence, if conditions don’t change.
I don’t think I would have a need for confirmed packets.

Here it would make sense to start at DRs that avoid significant loss of packets. This could be done by starting at high SF and relying on ADR to regulate DR up, but this would be wasteful in airtime.

I think it could also be done by requesting a link confirmation at relatively high SF, looking at the link margin, switching to an appropriate DR and starting ADR.

I switched off my gateway and switched on a The Things Node (an old RN2483, with the TTN library). And I remembered wrong: after 15 denied joins it still did not switch to a lower data rate, so it seems one indeed needs application code for that, for an (old) RN2483. :frowning: (But not for LMIC.)

(Still then, OTAA Join is unrelated to ADR, which I really think the RN2483 will handle.)

At what SF would you then start ADR again? Also, 2-3 times a day (max) still seems quite often: taking ADR_ACK_LIMIT and the Fair Access Policy into account I wonder if, say, 12 hours is enough for ADR to give good results. Let us know! :slight_smile:

Okay, at least I’m not crazy.
What I find a bit funny is that apparently different end node stacks do things differently, so buyer beware.
I use the RN2483 for getting my feet wet with LoRaWAN/TTN now, my final application might very well use something different - so thanks for the heads up!

What a very good question. I think I would try with an ‘intermediate’ SF, like SF9 or so: send one packet with a confirmation request at SF9, look at the link margin, adjust SF according to a look-up table, and turn on ADR (rough sketch below). Then, I would hope ADR would not really need 12 hours to adjust to a correct DR!
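In RN2483 terms that could look something like the sketch below. The margin thresholds in the table are illustrative guesses on my part, not calibrated values, and `readLinkMargin()` is the helper sketched further up:

```cpp
// Sketch of the "probe at SF9, then jump to a DR suggested by the link
// margin" idea. The thresholds are illustrative guesses, not calibrated
// values; readLinkMargin() and the serial plumbing are sketched earlier.
void startAdrFromProbe() {
  loraSerial.println("mac set dr 3");      // EU868 DR3 = SF9BW125: probe rate
  readLine(1000);                          // expect "ok"
  int margin = readLinkMargin();           // dB margin reported at SF9

  int dr;                                  // crude lookup table
  if      (margin == 255) dr = 0;          // no LinkCheckAns at all: SF12
  else if (margin >= 10)  dr = 5;          // lots of headroom: SF7
  else if (margin >= 5)   dr = 4;          // SF8
  else if (margin >= 3)   dr = 3;          // stay at SF9
  else if (margin >= 1)   dr = 2;          // SF10
  else                    dr = 1;          // SF11
  loraSerial.println("mac set dr " + String(dr));
  readLine(1000);
  loraSerial.println("mac set adr on");    // hand control back to network ADR
  readLine(1000);
}
```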

For reference, at least in my neck of the woods (urban environment, not a very congested network), I get the packet loss subjectively described above at SF7, but ADR steered me to SF11 in an experiment I ran over the weekend. So at least in this environment SF9 seems like a reasonable starting point.

When I stated that I am abiding by the fair access limitations by manually keeping a tally of my airtime for high SFs, and just calling it a day before I exceed 30s, I made one HUGE mistake:

I was using the estimated airtime reported in the Application Data tab of the TTN console. As of today I run my own gateway, and on the Gateway Traffic tab the reported airtime is over 60% higher than in the Application Data tab (e.g. 102.9 ms vs 61.9 ms for 6 bytes of payload at SF8)!
I double-checked with the TTN airtime calculator, and it spits out numbers consistent with the Gateway Traffic tab. It never occurred to me to double-check before, as I had no access to a Gateway Traffic tab and trusted the data shown in the Application Data tab in the TTN Console.
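For anyone who wants to cross-check such numbers themselves: the time-on-air formula from the Semtech SX127x datasheet is easy to implement. This sketch assumes the usual EU868 LoRaWAN uplink settings (BW125, CR4/5, 8-symbol preamble, explicit header, CRC on) plus 13 bytes of LoRaWAN frame overhead around the payload:

```cpp
#include <math.h>
#include <stdio.h>

// LoRa time-on-air per the Semtech SX127x datasheet formula, with the
// parameters fixed to typical EU868 LoRaWAN uplinks: BW 125 kHz, CR 4/5,
// 8-symbol preamble, explicit header, CRC on. The 13 extra bytes are the
// LoRaWAN frame overhead (MHDR + FHDR + FPort + MIC) around the payload.
double airtimeMs(int payloadBytes, int sf) {
  const double bw = 125000.0;
  const int pl = payloadBytes + 13;          // LoRaWAN overhead
  const int de = (sf >= 11) ? 1 : 0;         // low data rate optimisation
  const int cr = 1;                          // coding rate 4/5
  double tSym = pow(2, sf) / bw * 1000.0;    // symbol time in ms
  double tPreamble = (8 + 4.25) * tSym;
  double n = ceil((8.0 * pl - 4 * sf + 28 + 16) / (4.0 * (sf - 2 * de)));
  double nPayload = 8 + fmax(n, 0) * (cr + 4);
  return tPreamble + nPayload * tSym;
}

int main() {
  // 6 bytes of payload at SF8 -- prints ~102.9 ms, matching the
  // Gateway Traffic tab (not the Application Data tab).
  printf("%.1f ms\n", airtimeMs(6, 8));
}
```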

Holy smokes, that’s bad :confused:
There seems to be a bug report about this, but with no conclusion.

Were you guys aware of that?

Weren’t you the one who quoted the ‘numbers’ (ADR_ACK_LIMIT)? So how many packets are required, and how much time will that take at SF9? (Check this issue against LoRaMac-node for a pointer :slight_smile: )

Well… 64+32=96. Yes, of course. You’re right. :man_facepalming: At a 10-15 minute cadence, those 96 uplinks take 16-24 hours before the backstop can lower the DR even once.
My gut feeling is still more attuned to IP-style timescales and timeouts, and the testing modus operandi in which I send fairly frequently during testing and then shut down for the day before I reach 30s of airtime has evidently not been conducive to changing that.

The last post states: “For development purposes it may be acceptable to exceed the fair access policy.”

My interpretation of the thread is that the limit is not intended to prevent development work.

My interpretation is that you are still using a shared medium (radio waves), so you should not exceed the allowances too much, as that will have an impact on other nodes trying to communicate.
One way to circumvent this issue would be to connect the gateway and node directly without antennas, using an attenuator to prevent damage to the hardware.

Thanks for the pointer, this is good to know.
However, I do agree with Jac @kersing, we are sharing a medium after all.

Now that I have my own gateway, I am not just adding to the congestion in our city’s network but helping to alleviate it. And I can see that there is not much congestion where I am: my gateway has yet to see a packet that is not from me…
…of course, I’m sure the “professional” farther-away gateways I was relying on previously were put up for a reason, and due to their much better placed antennas they will see a lot of traffic I don’t see.

This only works for nodes with an external antenna connector, which the The Things Node, for example, does not have.

That assumes your gateway would forward all LoRa traffic that’s around. Given sync words and IQ inversion (and maybe even CRC settings), that might not be the case. Also, that does not take into account the transmission reach of your nodes.

An RTL-SDR dongle will tell you more, and is fun to play with. :slight_smile:

I’m also working through this issue with LMIC.

I made an assumption (perhaps incorrectly) that TTN have adopted this ADR approach:

Semtech LoRa networks rate adaptation, Class A/B specification:

“LoRaWAN – simple rate adaptation recommended algorithm”, common to Class A/B/C specification, Revision 1.0.

There are quite a few tweakable parameters…
I’m also testing ChirpStack, which appears to use this same algorithm.

It would be very good to learn exactly what algorithm TTN is using, and the parameter settings.

These LoRaWAN “grey areas” are when the fun starts!
P

The TTN sources are freely available on GitHub, so you can check for yourself…

Thanks Kersing!

Got it!

Cheers!

A comparison between the ways ADR is implemented can kind of be seen by taking a diff between these two:

chirpstack-network-server/internal/adr/adr_test.go at master · brocaar/chirpstack-network-server · GitHub

ttn/core/networkserver/adr_test.go at develop · TheThingsArchive/ttn · GitHub

There are some interesting differences…