Does a node ever need to rejoin after OTAA?

Can you explain what exactly is meant by the above?

Synchronization of devices happens when end devices respond to a large-scale external event. Some examples of synchronized events that we’ve experienced are:

  • Hundreds of end devices that are connected to the same power source (could be in a train, ship, building) and the power is switched off and on again
  • Hundreds of end devices that are connected to the same gateway, and the firmware of the gateway needs to be updated
  • Hundreds of thousands of end devices that are connected to The Things Network, and we have a database failover

Many end devices respond to these events, but if they respond in the wrong way, things can go terribly wrong.


Let’s take an example device that starts in Join mode when it powers on, and reverts to Join mode after being disconnected from the network. There are 100s of such devices in a field, and one gateway that covers this field.

The power source for the devices is switched on, and the gateway immediately receives the noise of 100s of simultaneous Join requests. LoRa gateways can deal quite well with noise, but this is just too much, and the gateway can’t make any sense of it. No Join requests are decoded, so no Join requests are forwarded to the network and no Join requests are accepted.

Exactly, or approximately 10 seconds later (the devices either have pretty accurate clocks, or they’re all equally inaccurate), the gateway again receives the noise of 100s of simultaneous Join requests, and still can’t make anything of it. This continues every 10 seconds after that, and the entire site stays offline.

Not great.

This situation can be improved by using jitter. Instead of sending a Join request every 10 seconds, the devices send a Join request 10 seconds after the previous one, plus or minus a random duration of 0-20% of this 10 seconds. This jitter percentage needs to be truly random, because if your devices all use the same pseudorandom number generator, they will still be synchronized, as they will all pick the same “random” number.
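As an illustration, the jitter calculation can be sketched in C. Here `entropy` stands in for a value taken from a true random source (hardware noise, radio RSSI, etc.); the function name and signature are made up for this example:

```c
#include <stdint.h>

/* Return a delay of base_ms +/- 20%, e.g. 10000 ms -> 8000..12000 ms.
 * 'entropy' must come from a true random source, not a PRNG that is
 * seeded identically on every device. */
uint32_t jittered_delay_ms(uint32_t base_ms, uint32_t entropy)
{
    uint32_t span = base_ms / 5;                 /* 20% of the base interval */
    return base_ms - span + (entropy % (2 * span + 1));
}
```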

With these improved devices, the Join requests will no longer all be sent at exactly the same time, and the gateway will have a better chance of decoding the Join requests.

Much better. Especially if the initial Join request is also sent after a random delay.

But what if you have another site with 1000s of these devices instead of your site with 100s of them? Then the 10 seconds between Join messages may not be enough. This is where backoff comes in. Instead of having a delay of 10s±20%, you increase the delay after each attempt, so you do the second attempt after 20s±20%, the third after 30s±20%, and you keep increasing the delay until you have, say, 1h±20% between Join requests.
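A minimal sketch of this linear backoff with a 1-hour cap, again assuming `entropy` comes from a true random source (names and constants are illustrative, not from any particular library):

```c
#include <stdint.h>

#define JOIN_BASE_MS 10000u    /* 10 s first retry interval */
#define JOIN_MAX_MS  3600000u  /* cap the backoff at 1 hour */

/* Linear backoff: attempt 1 -> 10 s, attempt 2 -> 20 s, ...,
 * capped at 1 h, with +/-20% jitter applied from 'entropy'. */
uint32_t backoff_delay_ms(uint32_t attempt, uint32_t entropy)
{
    uint32_t base = attempt * JOIN_BASE_MS;
    if (base > JOIN_MAX_MS)
        base = JOIN_MAX_MS;
    uint32_t span = base / 5;                    /* 20% jitter window */
    return base - span + (entropy % (2 * span + 1));
}
```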

An implementation like this prevents persistent failures of sites and the network as a whole and helps speed up recovery after outages.


Power off/on related synchronized events can also be caused by power outages in geographic regions (e.g. districts, cities).

One usually has no control over other LoRaWAN applications in an area, and (depending on the application) the RF signals usually reach a larger area than where they are needed.
For many locations there is no guarantee that there will not be many end devices in the area, and the number of devices may change or increase over time. Therefore, in theory, every gateway and every end device is prone to such large-scale external events.

So the ‘backoff and jitter’ strategy should actually be implemented in each LoRaWAN end device that performs OTAA joins.

Does randomizing of the delays have any impact on how spreading factors are/should be changed during join retries?

Will a ‘jitter and backoff’ strategy cause unnecessary join delays for devices in areas with only limited number of devices?

A ‘backoff and jitter’ strategy will probably need to be implemented in LoRaWAN libraries like LMIC and LoRaMac-node, because retries of failed joins are automatically performed by those libraries as part of a join request.


One way to avoid pseudorandom generators producing the same results on all devices is to use a unique number to seed the generator. The DevEUI (maybe combined with the AppEUI) comes to mind.
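A sketch of that idea in C, hashing the 8-byte DevEUI into a seed (FNV-1a is just one possible mixing function; the function name is made up for this example):

```c
#include <stdint.h>

/* Derive a per-device PRNG seed by mixing the 8-byte DevEUI with
 * an FNV-1a hash, so no two devices start from the same PRNG state.
 * The AppEUI bytes could be mixed in the same loop. */
uint32_t seed_from_deveui(const uint8_t deveui[8])
{
    uint32_t h = 2166136261u;             /* FNV-1a offset basis */
    for (int i = 0; i < 8; i++) {
        h ^= deveui[i];
        h *= 16777619u;                   /* FNV-1a prime        */
    }
    return h;
}
```

Something like `srand(seed_from_deveui(deveui))` then gives each device a different pseudorandom sequence, though a true hardware entropy source is still preferable where available.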


This makes sense. Thank you!

Ideally, a node knows what rate it was communicating at and can adjust its rejoin appropriately, with lower SFs trying more often.

Not if you use a random ±20% and 8 channels for lower SFs. This would be a good candidate for stochastic modelling.

This already happens in current LMIC (and probably LoRaMac-node) implementations but the intervals are predefined and of fixed length. AFAIK no randomization is applied.

Knowing that retry intervals in current (LMIC) implementations are already automatically incremented (by LMIC) but at the same time also spreading factors are automatically increased during retries, I was wondering if it suffices to only add a randomization to the length of retry intervals.

I know at least one LoRaWAN library implementation that works the opposite way: it tries joining using the highest SF first and gradually decrements if that fails. IIRC it is still unclear whether the latter conforms to the LoRaWAN specification or not.

So it actually depends on the implemented algorithm.
Practical guidance for implementing ‘jitter and backoff’ would therefore be useful.

Not after a power loss, unless the user saves this info, which I rather doubt; I know I haven’t.

I meant after a power cycle (reset), when the device has not stored the keys received from a previous join and does a fresh join after the restart.

Good question. I don’t think we’ve actually ever given “official” recommendations on using different spreading factors during the Join procedure. @benolayinka is currently working on a “best practices” document that includes some of the content from my earlier posts here. Maybe we can also write something about this.

It’s always a tradeoff. For the “jitter” part, no, because that’s just randomization. For backoff you’ll need to set a maximum delay. For some devices it’s perfectly acceptable to have the retry delay slowly increase to a maximum of 24 hours ±20%. For other devices you may want them to retry more frequently.

In case it helps, and I assume you know, Semtech has an opinion too:

Note: It is important to vary the DR. If you always choose a low DR, join requests will take much more time on air. Join requests will also have a much higher chance of interfering with other join attempts as well as with regular message traffic from other devices. Conversely, if you always use a high DR and the device trying to join the network is far away from the LoRaWAN gateway or sitting in an RF-obstructed or null region, the gateway may not receive a device’s join request. Given these realities, randomly vary the DR and frequency to defend against low signals while balancing against on-air time for join requests.

In LDL, OTAA rotates through all spreading factors from most efficient to least efficient. LDL will keep retrying until OTAA is successful or the application cancels the operation.

The JoinRequest transmit time is dithered by up to 30 seconds on each attempt. LDL will also gradually reduce the duty cycle so that it does not exceed 0.0001 over 24 hours as described in the specification.

Source of random used for dither depends on how LDL has been integrated. It can be pseudorandom (i.e. rand()) seeded by entropy gathered by the radio driver which is made available to the application on startup.
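Not LDL’s actual code, but the rotate-through-all-data-rates idea can be sketched like this (the EU868 SF7–SF12 range is assumed; names are illustrative):

```c
#include <stdint.h>

#define SF_MIN 7   /* most efficient: shortest airtime  */
#define SF_MAX 12  /* least efficient: longest airtime  */

/* Rotate through all spreading factors, most efficient first,
 * wrapping around so every SF is retried on later attempts. */
uint8_t sf_for_attempt(uint32_t attempt)
{
    return (uint8_t)(SF_MIN + (attempt % (SF_MAX - SF_MIN + 1)));
}
```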

This can get “fun” when someone cheats building “less immediately needed” functionality during a port to a new platform. The system behaves oddly (maybe always starts with the same already used-up join nonce…) and on investigation it is found that:

https://imgs.xkcd.com/comics/random_number.png

I have a Dragino lt-22222-l device and I think it’s at the outer limits of the nearest gateway. I was under the impression - from this thread + also ABP vs OTAA | The Things Stack for LoRaWAN, that once it joined successfully, it should not need to reconnect because of bad reception or something.
While connecting works, it’s not very smooth; I often get stuck in this accept-join-request loop (probably issues with downlink). Once connected, I always see new join requests within a few days at most (no power outage or anything), which is pretty annoying. I switched to ABP, which keeps working, but was wondering what could cause this and whether there are better solutions?

The newer LoRaWAN standards include a kind of keep alive handshake which makes the node rejoin if it doesn’t get any response from the network for an extended period.

What is your uplink period and what spreading factor is being used? Is your device within the fair use limits of TTN? (Average of 30 seconds of airtime a day)

I set the uplink period to slightly less than 2 hours (imagining there might be other devices sending at very regular intervals), because it uses SF12 (the default is 10 minutes for this node :sweat_smile: ). Not sure if even half the messages get through, but I can live with that. Still, during such an accept-join-request loop, it did seem to send more frequently than my uplink period, so the keep-alive seems counterproductive in case of a crappy connection.

Um, you appear to be saying that you get about 1 every 2 hours but you are trying once every 10 minutes. If this is the case, then you are breaching the Fair Use Policy by a wide margin and very likely the local legal duty cycle.

LoRa Alliance members (like wot TTI is) are required to restrict routine SF11 & SF12 for the very reasons you are discovering - it’s just far too marginal for regular use & takes up seconds of air time.

It looks like you need to review your gateway antenna, perhaps the device antenna or add another gateway.

No, sorry if I explained myself badly: the default/factory setting of the lt-22222-l is 10 minutes; I changed that to slightly less than 2 hours so as not to breach the fair use policy. The thing is, when the node needs to rejoin with OTAA, the accept-join-request loop it often gets stuck in shows messages more often than I’d expect from the configured uplink period (once connected, it does match the configuration). But a reconnect can basically take from a few hours to a few days due to the shaky reception. If this keep-alive cannot be disabled or configured, it looks like I’d better keep using ABP (or invest in my own gateway to get better reception).

The rejoin uplinks use a different (non-configurable) timing setting.

The keep-alive has been implemented to recover automatically when the LoRaWAN Network Server and node are out of sync. That happened when moving from TTN V2 to V3. It allows nodes to get back in sync without the need to visit them for a physical reset. Afaik there is no way to disable it.
