Thanks, Arjan, for your response!
So that settles it!
On TTN, the DR is only adjusted down (SF up) on downlink frame loss, not based on link margin.
I have also found a post on the ChirpStack forum giving a rationale for LoRa Server:
[…] LoRa Server never decreases the DR as this could result into a domino effect. E.g. when your network is dense, then lowering the data-rate on one device will impact other devices so other devices might also need to change to a lower data-rate (following the ADR algorithm), and so on…
Makes sense I guess, but at the same time it can leave devices with a too-high DR. LoRaWAN is best effort…
I feel the most standard-conformant way to deal with this is to begin ADR at SF12, but that seems wasteful.
But that’s the thing: the stack will not adjust DR down based on a low link margin, only on loss of downlinks, as the network never commands it to.
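For reference, a minimal Python sketch of that device-side backoff as I read it in the spec (constants are the LoRaWAN 1.0.x defaults; class and method names are just illustrative): the DR only steps down after ADR_ACK_LIMIT + ADR_ACK_DELAY uplinks without any downlink, no matter what the link margin is.

```python
# Sketch of the device-side ADR backoff from the LoRaWAN spec: the data rate
# is only lowered after ADR_ACK_LIMIT + ADR_ACK_DELAY uplinks without any
# downlink, never because of a reported link margin.
# Constants are the LoRaWAN 1.0.x defaults; names are illustrative only.

ADR_ACK_LIMIT = 64
ADR_ACK_DELAY = 32

class AdrBackoff:
    def __init__(self, dr=5):          # EU868: DR5 = SF7BW125
        self.dr = dr
        self.adr_ack_cnt = 0

    def on_uplink(self):
        """Call once per uplink; returns True if ADRACKReq must be set."""
        self.adr_ack_cnt += 1
        adr_ack_req = self.adr_ack_cnt >= ADR_ACK_LIMIT
        # After ADR_ACK_LIMIT + ADR_ACK_DELAY uplinks with no downlink,
        # step the data rate down (SF up), and keep stepping down every
        # further ADR_ACK_DELAY uplinks until DR0 is reached.
        if self.adr_ack_cnt >= ADR_ACK_LIMIT + ADR_ACK_DELAY:
            if self.dr > 0:
                self.dr -= 1
            self.adr_ack_cnt = ADR_ACK_LIMIT   # wait another ADR_ACK_DELAY
        return adr_ack_req

    def on_downlink(self):
        """Any downlink (not only ADR commands) resets the counter."""
        self.adr_ack_cnt = 0
```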
How is this handled for OTAA? I have not observed the RN2483 increase SF when OTAA joins fail; it seems this has to be done in the application.
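For what it’s worth, here is a rough sketch of how the application could step SF up itself when OTAA joins fail, driving the RN2483 over its UART with pyserial. The port name, retry counts and delays are made-up values for illustration, keys/EUIs are assumed to be configured already, and error handling is kept minimal.

```python
# Hedged sketch: application-level join back-off for the RN2483, since the
# module itself does not lower the DR when OTAA joins fail.
# Assumes /dev/ttyUSB0 at 57600 baud (the RN2483 default) and that the
# device EUI, app EUI and app key have already been set.

import serial
import time

def cmd(port, text):
    """Send one RN2483 command and return its immediate response."""
    port.write((text + "\r\n").encode("ascii"))
    return port.readline().decode("ascii").strip()

def join_with_backoff(port, start_dr=5, min_dr=0, tries_per_dr=3):
    """Try OTAA joins, stepping the data rate down (SF up) on failure.
    EU868: DR5 = SF7 ... DR0 = SF12."""
    for dr in range(start_dr, min_dr - 1, -1):
        cmd(port, "mac set dr %d" % dr)
        for _ in range(tries_per_dr):
            if cmd(port, "mac join otaa") != "ok":
                continue                      # command rejected, e.g. busy
            # Second reply arrives after the join exchange: accepted/denied.
            if port.readline().decode("ascii").strip() == "accepted":
                return dr                     # joined at this data rate
            time.sleep(10)                    # back off (mind duty cycle!)
    return None

with serial.Serial("/dev/ttyUSB0", 57600, timeout=30) as rn2483:
    print("joined at DR", join_with_backoff(rn2483))
```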
The same could then be done when (re-)activating ADR, e.g. in a tracker application that has detected it will transition to a long stationary phase (e.g. no movement, vehicle locked).
Indeed. If the application messes around with DR, it also needs to take care of that.
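As a sketch of what “taking care of that” could look like in such a tracker (reusing the cmd() helper from the join sketch above; EU868 DR numbers assumed): start ADR from a conservative DR when stationary, and pin a fixed DR with ADR off while moving. Whether to start from DR0/SF12 (conformant but wasteful, as noted above) or something higher is the application’s trade-off.

```python
# Sketch: tracker application switching between mobile and stationary modes.
# Uses the cmd() helper from the join sketch above; DR values assume EU868.

def enter_stationary_mode(port, start_dr=0):
    """Re-enable ADR for a long stationary phase.
    start_dr=0 (SF12) is the conservative, spec-friendly starting point;
    a higher start_dr trades airtime against starting too optimistic."""
    cmd(port, "mac set dr %d" % start_dr)
    cmd(port, "mac set adr on")

def enter_mobile_mode(port, fixed_dr=2):
    """Disable ADR while moving and pin a fixed data rate."""
    cmd(port, "mac set adr off")
    cmd(port, "mac set dr %d" % fixed_dr)
```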
Yes, but thanks for the hint!
On this topic I found the TTN documentation a bit ambiguous. What helped me was reading and parsing - slowly, carefully! - the LoRaWAN spec and its implications.