ADR features recently introduced to TTS (March 2022)

TL;DR: ABP devices with ADR turned off in the console (for things like TTN Mapping) will need to explicitly set the DR before each uplink. Or you can use a CLI command to tell the Network Server what data rate you want the device to run at.

The details:

If you turn off ADR on the console for a device, the stack will send a request to the device to move to the mac-state.desired-parameters.adr-data-rate-index which, if you’ve not set it using the CLI, defaults to zero, aka DR0, aka SF12.

TTS dev staff are aware of the knock-on impact for devices that have been set to run without ADR, where the rate is not being controlled via the CLI and the device is not setting it before each uplink, and they are looking at a solution that will allow anyone turning off ADR in the console to also set the desired rate.

So if you have firmware that sets a data rate, it should do so before each uplink. Or you should use the CLI command to set the desired rate. The second is preferable as it means the NS will not schedule a downlink to adjust it.

    ttn-lw-cli end-devices set application-id device-id --mac-state.desired-parameters.adr-data-rate-index DATA_RATE_X

where X is the DR you want. See ttn-lw-cli end-devices set | The Things Stack for LoRaWAN for details. The parameter mac-state.desired-parameters.adr-data-rate-index is new to the CLI and the docs haven’t yet caught up, but it does appear in Adaptive Data Rate | The Things Stack for LoRaWAN.
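
And for the firmware route (re-asserting the rate before each uplink), here is a minimal sketch, assuming an LMIC-based EU868 device; the function name, payload and FPort are only illustrative, and the rest of the usual LMIC skeleton (pin map, event handler, join) is unchanged:

    #include <lmic.h>

    static uint8_t payload[2] = { 0x01, 0x02 };        // example payload only

    static void send_uplink(void) {
        LMIC_setAdrMode(0);                            // stop the MAC layer running ADR itself
        LMIC_setDrTxpow(DR_SF7, 14);                   // re-assert DR5/SF7 at 14 dBm before *every* uplink
        if ((LMIC.opmode & OP_TXRXPEND) == 0) {
            LMIC_setTxData2(1, payload, sizeof(payload), 0);   // unconfirmed uplink on FPort 1
        }
    }

Re-asserting the data rate immediately before each LMIC_setTxData2 call means any DR the stack has picked up from a downlink is overwritten before the next transmission.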

If you are using Blind ADR (as per the Semtech white paper), for now you will get an ADR request when the device uses the scheduled lower DR. Again, the TTS devs are aware of this and we hope to see a “turn off anything ADR” option to facilitate this.
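
For reference, the blind-ADR idea boils down to the device rotating through its own schedule of data rates rather than taking instructions from the NS. A sketch, again assuming an LMIC-based EU868 device; the three-step SF7/SF10/SF12 rotation here is purely illustrative and the real schedule should come from the white paper:

    #include <lmic.h>

    // Hypothetical rotation; the Semtech paper defines the actual schedule.
    static const dr_t blind_adr_schedule[] = { DR_SF7, DR_SF10, DR_SF12 };
    static uint8_t schedule_index;

    static void set_next_blind_adr_rate(void) {
        LMIC_setAdrMode(0);                            // the device owns its data rate
        LMIC_setDrTxpow(blind_adr_schedule[schedule_index], 14);
        schedule_index = (schedule_index + 1)
            % (sizeof(blind_adr_schedule) / sizeof(blind_adr_schedule[0]));
    }

Until the “turn off anything ADR” option arrives, the NS will still queue ADR requests when the scheduled lower DR is used, as noted above.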

However, I’m told that the OTAA Join Accept structure & MAC channel commands come with some ADR & power settings and it is not in the specification to leave them unset - so understanding this from the OTAA angle is a work in progress for me. But as there are a whole pile of other facilities built in for things like LinkCheck, which you can adjust if you want, both on the console and in the firmware, slotting all this together until further options are available on the console & CLI seems like make-work.

Much of the above is linked to the Advanced MAC settings for current & desired - this allows devices with firmware that are already in the supply chain or in the back of an installer’s car to start up with what they have and then be tuned on an individual level. If they restart (which they may do, rarely of course) and don’t have their current settings saved, they can be moved automagically to the preferences you have set.

This has many benefits: if you have a clutch of devices in an area where one channel is being hammered by the local pirate radio station, or if the wet string that connects them to the internet (aka mobile) is slow, you can adjust things to suit the deployment.


I too think the second would be preferable: in my case, with SF7 fixed from the node, the node was asked to move to SF12 for at least 1,000 uplinks, after which the requests stopped. This means using fixed SF7 wastes one radio TX and its air time. (I think there are four reception channels for one radio, unless it’s a custom gateway with an additional TX node of its own.)

Sounds like a potential Cluster Fsuk for users of COTS products or devices in the field, and/or where they don’t have immediate control of firmware, and for the many users/deployers who may do little more than turn a device on or maybe register details in the console before doing so, but who don’t know one end of a CLI from an ILC and can’t/won’t engage with such shenanigans…

Absolutely - and we can feed back that the ‘give up trying to tell the node’ behaviour could do with looking at. Or at the least, we can validate the give-up count, but I do have a couple of nodes with some stupid downlink counts.

But to reinforce, for those that do need to control the rate from the device without intervention from the Network Server, the ability to turn off any form of ADR is being looked at.

As I see or understand it, there are two trains of thought here.

  1. Turn off ADR.

  2. Use the ADR setting to try and optimize your node’s settings automatically, where the NS assists you via the predetermined parameters you selected.

In high-capacity microwave links we have similar assistance, called AGC: as the fade margins and link conditions change, the system adjusts the TX power. So you can run a relatively low TX power to achieve the optimal RX level, and as the fade increases the TX power is increased to maintain that same optimal RX level. You can safely increase the TX power because the interference with adjacent systems stays constant, as the RF path to those systems experiences the same increase in fade.

This way you are minimizing the interference with adjacent systems (nodes).

Is ADR not achieving this, where a lower TX power and DR are maintained (if correctly set up) until you experience changes in RF conditions, i.e. a change in SNR?

In this manner you can increase the number of RF devices in the same geographical area.

Note: I have seen, though, that with this type of automatic control via some clever calculations, when a new RF device is introduced to the mix, things can take a while to stabilize. You sometimes need to manually intervene in the settings on one or two devices, and then everything is happy again.

Potentially. But not many people are going to turn off ADR or indeed be on ABP - very few will be on ABP and then turn off ADR. And if they are on ABP and go on the console and find the turn-off-ADR button, I’d like to think they know what they are about and/or are sensible enough to subscribe to the various support channels - which is why this thread exists, after some independent testing and an extensive discussion with one of the senior TTI developers, moving to a Zoom call to dig into the details. And then there is the monitoring, see below.

As for firmware, this may feel a little harsh, but if you don’t have a risk analysis doc for all the ways it can go wrong when it’s out in the field, plus a test plan that makes NASA using SRBs in the cold look delinquent, more fool you. Makers deploying stuff they can get to in order to fix issues is fine - I’ve certainly got plenty of them out in the field for community use - but when you are making something that is being deployed into distribution and then out into the field in the 1,000s, it’s a whole different ball game. Or, in some countries I’ve co-developed for, 5 hours’ drive away. I (well, the production manager aka The Boss aka The Wife) make devices that go into fire alarms; we’ve been achieving Six Sigma for about 15 years.

For those doing TTN Mapper projects and fixing your DR to SF7 (yeah, enjoy the dual contra-scales, way to go LoRa standards), it’s as simple as putting in an explicit API call to fix the DR/SF (FYI, both LoRaMAC-node & LMIC use DR) before each uplink.
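
An LMIC version of this appears earlier in the thread; for LoRaMAC-node, assuming the standard MIB interface, a comparable sketch (the function name is illustrative, and DR_5 is SF7/125 kHz in both EU868 and IN865) would be roughly:

    #include <stdbool.h>
    #include "LoRaMac.h"

    static void fix_data_rate_dr5(void) {
        MibRequestConfirm_t mibReq;

        mibReq.Type = MIB_ADR;                         // disable network-controlled ADR
        mibReq.Param.AdrEnable = false;
        LoRaMacMibSetRequestConfirm(&mibReq);

        mibReq.Type = MIB_CHANNELS_DATARATE;           // DR_5 == SF7/125 kHz in EU868 / IN865
        mibReq.Param.ChannelsDatarate = DR_5;
        LoRaMacMibSetRequestConfirm(&mibReq);
    }

Call it before queuing each uplink, so the stack keeps DR5 even if its MAC state was changed by a downlink.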

If you are deploying devices professionally, then your monitoring systems should highlight that there has been a change in energy usage / transmission parameters, and your change log should correlate with these alerts. And any uplink over SF10 should light up the dashboard, as SF11/12 are prohibited for long-term use.
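
Even a trivial check in the ingest path catches that sort of SF creep; a hypothetical example in the same C register as the other sketches (the function name and the expected-SF input are illustrative - the actual values would come from the uplink metadata your integration already receives):

    #include <stdbool.h>

    // Flag an uplink whose spreading factor has drifted from what was provisioned,
    // and always flag SF11/SF12, which are prohibited for long-term use.
    static bool uplink_needs_alert(int spreading_factor, int expected_sf) {
        return spreading_factor != expected_sf || spreading_factor > 10;
    }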

Sure, this isn’t ideal, but it’s totally fixable with this information and is being worked on. We’ve had plenty of other ‘moments’ with TTS deployment; I’d call this a niche issue that, if you’re paying attention, shouldn’t be a problem.

Yes, but LoRaWAN doesn’t respond that fast, as environmental conditions can affect individual uplinks, so a hair-trigger response isn’t appropriate and there is a penalty to sending a downlink.

Consider that there are five settings for ADR in the CLI:

adr-ack-delay-exponent
adr-ack-limit-exponent
adr-data-rate-index 
adr-nb-trans                                    
adr-tx-power-index

Although I’m not clear on which ones can be explicitly set, as there is current, desired and pending state, plus MAC-state and MAC-settings.

But overall, there is a lot of control over how a device is commanded, and how quickly it should be told, or should confirm, that it has set the new parameters.

One of the considerations for managing devices is being able to choose which devices a gateway sends downlinks to - having a pile of devices where the RF environment has changed doesn’t mean it’s appropriate to hammer the area with downlinks to alter DRs to suit what may be a temporary situation; that would be the case where the gateway itself is ‘compromised’. If it’s an individual device, that would be different. God knows how the Network Server could determine in semi-realtime which devices have had a food truck parked next to them, out of a potential 5,000 other devices that a gateway may be serving.

I help teach GCSE & A Level Computer Science; this one would be way, way outside the scope of a degree-level algorithm. Kudos to the TTI team for managing to wrap their heads around these non-trivial algorithmic conundrums.

I have a number of commercial devices which I know are out in the field in the thousands, or maybe even hundreds of thousands, with firmware which is at least questionable with regard to how robust the stack is.

Maybe even worse: I’ve been working on porting lora-mac to a new hardware platform which is really close to the reference platform, and while working on it I found some really interesting issues. So even the ‘gold reference’ is not up to your standard.

I don’t have time or funds or, indeed, feel the need to rewrite the stacks. And I too have a laundry list of ‘features’ that I’d like to have resolved if I can figure out how to validate / repeat the issues.

But I can put in mitigation for any issues with bounded loops, hardware checks and software as well as the hardware watchdog. And if I’m going to use an unusual configuration, ensure there is a failsafe interaction available to command on/off & parameters.

But overall, the IoT sensing projects aren’t connected to any form of fire, safety or security systems, so there is less benefit in striving for zero defects.

  1. Disable ADR.
  2. Node TX in SF7.
  3. CLI used to fix to SF7 with the following command:
    ttn-lw-cli end-devices set xxxxxxxxxx xxxxxxxxxxx --mac-state.desired-parameters.adr-data-rate-index DATA_RATE_5

The server still sends the DR0 change in the FOpts field. The same steps were tried with ADR disabled as well.
The current SNR is -6.8 dB and RSSI is -118 dBm (approx. 10 km). So I can’t escape the ADR request from the server when ADR is enabled. (I tried increasing the ADR margin as well.)

When ADR is disabled, the CLI fix is not working. The CLI used is v3.18.0.

The gateway is not able to reach the node (not able to downlink at this distance).

Is there anything I am doing wrong?

That’s not sensible. Reciprocity means that a node which can uplink should be able to be downlinked to, bar implementation errors.

It is able to uplink and downlink at better SNR. But yes, I need to check; the current problem of SF12 requests is of higher priority.

and

Appear to contradict each other - you have to turn off ADR on the console and then tell the NS, via the CLI, what DR you want the device to use - if the device uses that DR, it won’t get commanded; if it doesn’t, it will.

But I’m impressed you get 10 km on DR5, so maybe the device is doing something different.

Having spoken at length to a senior TTI engineer about this issue I still took a few minutes to check it was as expected.

But there is still the join issue that can’t be circumvented.

Can you check your details so we can understand what sequence of tests you tried?

I did exactly the same thing.

Typo! Sorry!

Since things didn’t work as expected after asking through the CLI and TXing at SF7 itself, I just tried the same steps with ADR enabled. I will edit the original. Don’t mind that sentence; I was just trying things.

Currently.

And currently transmitting at SF7.

Around these parts we don’t edit after people have commented, as it makes a nonsense of the subsequent posts. I’ve reversed the edit.

So, you’ve disabled ADR on the console, set the DR to 5 using the CLI and the device is transmitting at DR5. Which seems to be working as advertised.

It’s working, but I get an SF12 request from the NS.

I will keep that in mind for the points raised.

Home is 4 floors high, so maybe that’s why I am getting it. Other coverage simulations also show a similar range. Not relevant to the discussion, but just an input to the statement. The node at home is transmitting at 20 dBm with a 2 dBi antenna.

🙂

Where in the world are you to be allowed to use 20 dBm ERP?

From one of his other, earlier posts on another thread:-

"The legal limits of transmission are 4watt EIRP (1 watt maximum) in 865-867. Location is India (South).
Both are transmitting only at 20dBm."


I’m going to stop now, because you said that it was transmitting at SF7 which appeared to say that it was working, but now you add this detail after the fact. I’ll concentrate on topics I can keep up with.

@sniper4uonline has clearly explained that they’ve told TTN the node should be using SF7, it is using SF7, but it’s being told by TTN to use SF12.

That’s obviously a problem.