Some questions about the ADR algorithm

First question: according to the documentation, the server takes the 20 most recent uplinks as the input to ADR. I’m a little puzzled here:

  1. Does the server carry out ADR processing once for every 20 packets, then clear the saved data set until another 20 packets have arrived? In other words, is ADR processed once per 20 packets?

  2. The other understanding is that, starting from the 20th packet, ADR processing runs for every new packet received. In that case the packet that entered the queue first has to be kicked out, so that the stored data set always contains exactly 20 packets. This also matches the description of “the last 20 packets”.

  3. In short, the difference between the two understandings is whether each packet’s data is discarded after ADR has used it once, or whether it is reused across several ADR runs (see the sketch after this list).
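
To make the difference between the two readings concrete, here is a minimal sketch of both interpretations in Go (purely an illustration of the question, not the actual Network Server code):

```go
package main

import "fmt"

// uplink holds only what this illustration needs.
type uplink struct{ snr float64 }

// batchADR is interpretation 1: run ADR once per 20 packets,
// then clear the buffer and wait for the next 20.
func batchADR(buf *[]uplink, u uplink) {
	*buf = append(*buf, u)
	if len(*buf) == 20 {
		runADR(*buf)
		*buf = (*buf)[:0] // each packet is used by ADR exactly once
	}
}

// slidingADR is interpretation 2: from the 20th packet onwards,
// run ADR on every new packet over the 20 most recent uplinks,
// dropping the oldest so that packets are reused up to 20 times.
func slidingADR(buf *[]uplink, u uplink) {
	*buf = append(*buf, u)
	if len(*buf) > 20 {
		*buf = (*buf)[1:] // kick out the packet that entered first
	}
	if len(*buf) == 20 {
		runADR(*buf)
	}
}

// runADR stands in for the real margin calculation.
func runADR(window []uplink) {
	fmt.Printf("ADR run over %d uplinks\n", len(window))
}

func main() {
	var batch, sliding []uplink
	for i := 0; i < 25; i++ {
		batchADR(&batch, uplink{snr: 7})     // fires once, at packet 20
		slidingADR(&sliding, uplink{snr: 7}) // fires on packets 20 through 25
	}
}
```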


Another problem is how to obtain the transmission power of the device.

I know that through MQTT or another integration I can subscribe to the SF, bandwidth, and data rate of the device, but it seems that the TX power cannot be obtained this way.

But I think this value must be accessible to the server; otherwise the server would have no reference for calculating the reduced power to send to the device in the next LinkADRReq. And if the server can get the data, the user should, in theory, be able to get it too.


Thanks for your reply.

It doesn’t change the output power of the radio.

A change of ADR is a change of the Data Rate (as in Adaptive Data Rate) - typically it raises the data rate (lowers the spreading factor) to reduce the transmission time whilst still keeping a reasonable signal margin, but if conditions change it will go back the other way.

Mostly we don’t get too tied up with ADR - the people who write the algorithm that makes those decisions have many years of data to analyse, can most definitely code in Go, have much enlarged frontal lobes and are currently tied up with other things, so they won’t be entertaining any change requests in that area any time soon - but you can read the code in both the v2 & v3 stacks if you want, it’s all on GitHub.

I process the DR (which is a combination of SF & bandwidth) to identify any ‘at-risk’ devices - ones that are only being heard by one or two gateways with a marginal RSSI / SNR. Occasionally I’ll do something about it, usually adding a small fill-in gateway at a particular point.
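
Roughly the sort of check I mean, as a sketch in Go - the struct fields and thresholds are made up, so feed it whatever metadata your integration actually gives you:

```go
package main

import "fmt"

// uplinkMeta is a made-up summary of one uplink's metadata,
// roughly what you might assemble from an MQTT integration.
type uplinkMeta struct {
	DevID           string
	SpreadingFactor int     // derived from the data rate (SF + bandwidth)
	GatewayCount    int     // how many gateways heard this uplink
	BestRSSI        float64 // strongest gateway RSSI, dBm
	BestSNR         float64 // best gateway SNR, dB
}

// atRisk flags devices heard by only one or two gateways with a
// marginal signal. The thresholds are illustrative, not canonical.
func atRisk(m uplinkMeta) bool {
	marginal := m.BestRSSI < -115 || m.BestSNR < -15
	return m.GatewayCount <= 2 && marginal
}

func main() {
	samples := []uplinkMeta{
		{DevID: "sensor-01", SpreadingFactor: 12, GatewayCount: 1, BestRSSI: -119, BestSNR: -18},
		{DevID: "sensor-02", SpreadingFactor: 7, GatewayCount: 4, BestRSSI: -95, BestSNR: 6},
	}
	for _, m := range samples {
		if atRisk(m) {
			fmt.Printf("%s looks at-risk (SF%d, %d gateway(s), RSSI %.0f, SNR %.0f)\n",
				m.DevID, m.SpreadingFactor, m.GatewayCount, m.BestRSSI, m.BestSNR)
		}
	}
}
```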

Depending on your deployment, reducing the power to save a bit of battery can make sense, but using the normal setting will ensure the fastest transmission time, which is the best for everyone overall.


Actually it does.

The MAC command that implements ADR has fields for both data rate and power.
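
For reference, the LinkADRReq payload (CID 0x03 in LoRaWAN 1.0.x) packs the data rate and the TX power index into a single byte, followed by a 16-bit channel mask and a redundancy byte. A minimal decoding sketch in Go:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// linkADRReq mirrors the 4-byte LinkADRReq payload (CID 0x03)
// from the LoRaWAN 1.0.x specification.
type linkADRReq struct {
	DataRate   uint8  // upper nibble of DataRate_TXPower
	TXPower    uint8  // lower nibble of DataRate_TXPower (region-specific index)
	ChMask     uint16 // 16-bit channel mask, little-endian on the wire
	ChMaskCntl uint8  // Redundancy byte, bits 6..4
	NbTrans    uint8  // Redundancy byte, bits 3..0
}

func decodeLinkADRReq(p []byte) (linkADRReq, error) {
	if len(p) != 4 {
		return linkADRReq{}, fmt.Errorf("LinkADRReq payload must be 4 bytes, got %d", len(p))
	}
	return linkADRReq{
		DataRate:   p[0] >> 4,
		TXPower:    p[0] & 0x0F,
		ChMask:     binary.LittleEndian.Uint16(p[1:3]),
		ChMaskCntl: (p[3] >> 4) & 0x07,
		NbTrans:    p[3] & 0x0F,
	}, nil
}

func main() {
	// Example: DR5, TXPower index 2, all 16 channels enabled, NbTrans 1.
	req, err := decodeLinkADRReq([]byte{0x52, 0xFF, 0xFF, 0x01})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", req)
}
```

Note that the TXPower field is only an index; what it maps to is region specific (for EU868, 0 means MaxEIRP and each step subtracts 2 dB).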

I think so because the MAC command LinkADRReq defined by the LoRaWAN specification includes a TX power parameter.

That’s the problem. If the server can’t obtain the device’s current power as a reference value, how can it know that the new power it sends to the device is ‘reduced’? So the server must be able to get this value.

So I also want to get this value through an integration or something like that. Even if it seems meaningless, it’s still important to me to try.

The MAC command might, but we are talking about the algorithm. Whilst power appears in the v2 & v3 ADR code, I’ve not seen a device change power, despite trying very hard to catch ADR in action - though only by logging, not by running an SDR or similar to see if the output actually changes.

I may not have tried enough devices, and it took me a while to realise that LMiC isn’t too sure about power settings.

That doesn’t necessarily mean the server can get the value from the device - it can work from its own observations of the signal measurements. I’m sure @cslorabox would have mentioned it if there were a MAC command that reports the current Tx power setting.

It is somewhat academic to me, but not necessarily meaningless as it does have an impact on battery life. Why is it important to you to try? Can you include the power setting in an uplink?

The server does not need the value. While the device is not at SF7, the power is set to max. Once at SF7, the power can be decreased, starting from max.

Before you ask what the max is, quoting from the Regional Parameters 1.0.3 for EU868:

By default MaxEIRP is considered to be +16dBm. If the end-device cannot achieve 16dBm EIRP, the Max EIRP SHOULD be communicated to the network server using an out-of-band channel during the end-device commissioning process.

Note the “out of band”: this is not included in the communication protocol. So unless you find a MAC command in the specification, you can safely assume there is no way to get the current setting from the device.
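
Put another way, the server’s view of the device’s TX power can be reconstructed entirely from what it has (or hasn’t) commanded. A simplified sketch of that bookkeeping, assuming EU868 numbers (illustrative only, not the actual stack code):

```go
package main

import "fmt"

// deviceADRState is a simplified, illustrative view of what a network
// server needs to remember about a device's TX power. Not real stack code.
type deviceADRState struct {
	MaxEIRP          float64 // dBm; set out-of-band at commissioning, default +16 for EU868
	LastTXPowerIndex *uint8  // last index sent in a LinkADRReq, nil if none sent yet
}

// assumedEIRP returns the EIRP the server assumes the device is using.
// In EU868 each TXPower index step is 2 dB below MaxEIRP.
func (s deviceADRState) assumedEIRP() float64 {
	if s.LastTXPowerIndex == nil {
		return s.MaxEIRP // never commanded anything: assume maximum
	}
	return s.MaxEIRP - 2*float64(*s.LastTXPowerIndex)
}

func main() {
	dev := deviceADRState{MaxEIRP: 16}
	fmt.Println("before any LinkADRReq:", dev.assumedEIRP(), "dBm") // 16

	idx := uint8(3)
	dev.LastTXPowerIndex = &idx
	fmt.Println("after commanding index 3:", dev.assumedEIRP(), "dBm") // 10
}
```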


So the logic of operation is that, no matter what the device’s initial power setting is, the network server only ever reduces the power it assigns to the device, starting from the maximum value, based on its own cached data.

I feel this could cause problems. For example, the power allocated to the device could be higher than its initial value, and a number of operations would then be wasted reducing it back down.

Anyway, the logic works.

Many? I have nodes that have transmitted 100,000+ uplinks - the 50 to 100 it could potentially have taken to get the ADR correct (but given the library in use, probably not the power) are insignificant. Those 50 - 100 pale into insignificance compared to the variation between battery batches and temperature effects.

If you deploy a device with a coin cell, I’d turn off ADR, set everything as low as possible and then mess about at the installation site - then those Tx’s would have meaning.

Given that the server has to set the power any time it sets the data rate, the only way it wouldn’t know the power is if it hasn’t sent any ADR adjustments yet, or if the node has deviated from the last setting.


To answer another question, I just put a SAMR34 onto v3 on ABP - as it’s on my desk it’s shouting somewhat at the office gateway, so the v3 stack has queued ADR requests right from the start.

Trouble is, it hasn’t acted on them - it seems the firmware developer didn’t think anyone would want to know when a MAC command has arrived, so I’ve no indication of whether the downlink was heard by the board. Grrrrrr.

Our ADR implementation first increases the data rate / lowers the spreading factor (from SF12 towards SF7 in EU868). Only if there’s still some margin available at the maximum data rate / lowest spreading factor (SF7 in EU868) does the implementation lower the TX power. So if an EU868 end device uses SF8-SF12, you can be sure that it always uses the maximum TX power. Changing the data rate does not affect the SNR, so we don’t need to throw away the recent measurements.
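
In rough pseudo-code terms, that order of operations looks like this - a simplified sketch, not the actual v3 implementation, and the margin calculation itself is left out:

```go
package main

import "fmt"

// adrStep applies the decision order described above for EU868: raise
// the data rate first, and only once the device is at the maximum data
// rate (DR5 = SF7) start lowering the TX power. Illustrative sketch only;
// how marginDB is computed from recent SNR measurements is omitted.
func adrStep(dataRate, txPowerIndex int, marginDB float64) (int, int) {
	const (
		maxDataRate     = 5   // DR5 = SF7 / 125 kHz in EU868
		maxTXPowerIndex = 7   // highest index = lowest power
		stepDB          = 3.0 // margin consumed per adjustment step
	)
	for marginDB >= stepDB {
		switch {
		case dataRate < maxDataRate:
			dataRate++ // SF12..SF8: stay at max power, just raise the DR
		case txPowerIndex < maxTXPowerIndex:
			txPowerIndex++ // only at SF7 do we start reducing power
		default:
			return dataRate, txPowerIndex
		}
		marginDB -= stepDB
	}
	return dataRate, txPowerIndex
}

func main() {
	dr, tx := adrStep(0, 0, 21) // SF12 at max power with a large margin
	fmt.Printf("DR%d, TXPower index %d\n", dr, tx)
}
```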

At the maximum data rate / lowest spreading factor it does indeed get more complicated. If the network server instructs an end device to use a lower TX power, that affects the SNR, so it would have to throw away recent measurements. I don’t actually remember if our implementation does that; I’ll check.

The network server can see what data rate the device is using, and as @cslorabox commented, the network server knows what settings it told the device to use. If the network server never sent ADR settings to the end device, it assumes that the device uses the maximum TX power. We assume that the end device respects the ADR settings that the network server tells it to use.

With non-compliant devices there are no guarantees.


Oops, I accidentally edited my previous message instead of replying to it.

It turns out that our v2 implementation doesn’t clear measurements, so that is incorrect (but we’re not going to fix this in v2 anymore).

Our v3 implementation only considers uplink messages since the last change in ADR settings, so that implementation could technically be optimized to consider more uplinks if we know the TX power was at the maximum.
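
For illustration, “only uplink messages since the last change in ADR settings” could be sketched like this (hypothetical field names, not the actual v3 code):

```go
package main

import (
	"fmt"
	"time"
)

// measurement is a hypothetical per-uplink record kept for ADR.
type measurement struct {
	ReceivedAt time.Time
	SNR        float64
}

// adrWindow returns the measurements allowed to feed the ADR calculation:
// only those received after the last change in ADR settings, capped at
// the 20 most recent. Illustrative only.
func adrWindow(all []measurement, lastADRChange time.Time) []measurement {
	var window []measurement
	for _, m := range all {
		if m.ReceivedAt.After(lastADRChange) {
			window = append(window, m)
		}
	}
	if len(window) > 20 {
		window = window[len(window)-20:]
	}
	return window
}

func main() {
	now := time.Now()
	all := []measurement{
		{ReceivedAt: now.Add(-2 * time.Hour), SNR: 8},    // before the last ADR change: ignored
		{ReceivedAt: now.Add(-10 * time.Minute), SNR: 7}, // after it: considered
	}
	lastChange := now.Add(-1 * time.Hour)
	fmt.Println(len(adrWindow(all, lastChange)), "uplink(s) considered")
}
```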

Right now I have an Arduino MKR WAN 1310 in front of me that seems to be stuck in a loop: it is sent a Link ADR request, and the module then sends a Link ADR rejection. It’s on ABP as it’s for testing - downloading firmware, power cycling etc. The firmware on the Murata module isn’t standard, but I’m pretty sure Arduino didn’t touch the LoRa-node stack. It was configured from the device registry. It is at 165 uplinks and 147 downlinks over the space of a couple of days.

Any thoughts @htdvisser on debugging this?

Is there something wrong with the downlinked ADR? Or the channel mask sent with it?

It would help if you could state the region and provide both the actual LinkAdrReq and the LinkAdrAns.
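
If you can capture the LinkAdrAns, its single status byte already tells you which part was rejected. A quick decoding sketch using the LoRaWAN 1.0.x bit layout:

```go
package main

import "fmt"

// decodeLinkADRAnsStatus decodes the 1-byte Status field of a LinkADRAns
// (LoRaWAN 1.0.x): bit 2 = Power ACK, bit 1 = Data rate ACK,
// bit 0 = Channel mask ACK. A cleared bit means that part was rejected.
func decodeLinkADRAnsStatus(status byte) (powerOK, dataRateOK, chMaskOK bool) {
	return status&0x04 != 0, status&0x02 != 0, status&0x01 != 0
}

func main() {
	// Example: 0x06 means power and data rate accepted, channel mask rejected.
	p, d, c := decodeLinkADRAnsStatus(0x06)
	fmt.Printf("power ACK=%t, data rate ACK=%t, channel mask ACK=%t\n", p, d, c)
}
```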

I was more coming at it from the angle of a supposedly good branded device making a mockery of @htdvisser’s assertion, but he’s run away, hopefully to fix the current v2 issue.

That client project has moved to a phase that doesn’t require any radio testing, which is just as well, as I don’t have any free work slots to debug this for at least a fortnight.

Meh.

The Network Server will re-send MAC commands to reconfigure the device when the Network Server detects a reset. For OTAA devices this would mean a join procedure, for ABP devices that are configured to allow frame counter resets, it assumes that if the device lost its frame counter, it will also have lost ADR and channel state.

If your ABP device keeps incrementing its frame counter across power cycles, the Network Server should keep track of rejected settings and should not retry those same settings again (to be clear: this state is also lost on a “reset”).

Typically, a properly configured end device should not reject ADR requests. If it does so often, it’s possible that it doesn’t have the channels enabled that the Network Server thinks it has enabled. For EU868 ABP devices this could mean that the “factory preset channels” setting in your device registration does not match the personalization configuration in the end device. For AU915 devices it’s also possible that the LoRaWAN version is different (but I guess you’re not using AU915).

I think maybe you’ve missed the point:

  • I’m using a branded device that exists in the repository
  • However bad you think ABP is, it’s better for development when doing radio tests
  • There are few devices that keep their frame counters across power cycles or full resets

The Arduino MKR WAN 1310 has a noticeable odour of abandonware, but the main issue is that the v3 stack is going to have to cope with non-compliant devices in a more helpful way - otherwise hardware makers will say avoid the TTI v3 stack and you’ll be saying avoid this list of hardware.

I will at some point dig into the details, but I don’t anticipate being remunerated for the effort, despite using a branded device on the flagship LoRaWAN stack.
