Rx1 Delay Setting in V3 Console

Hello All,

I’m a bit confused. Is there a way to change the RX1 delay from 5 seconds to something lower on the TTN V3 console / community-hosted version?

I’m looking to do it globally (all my devices/applications). I found that the window in TTN V2 worked fine for my setup, and a 5-second window means my device is awake for longer, which uses more battery.

Thank you!

Hello all,

I’m in a similar situation. The V3 stack RX1 delay has a big effect on my power budget.
Having the option to change the delay would be desirable.

Thank you!

As detailed in a previous post (scroll up): It’s in the advanced settings of the Network layer when you do the setup.

Or you can make changes via the CLI.

But bear in mind that the V3 infrastructure is built around the RX1 delay giving time for all the different elements to coordinate.

Also consider that you are not listening for five seconds; you are dormant for 4.8 seconds before listening, so the current draw should be quite low - and you can sleep for some of that time.

Your device could sleep for 4 (or even nearly 5) of the 5 seconds if its timing is sufficiently accurate.

Due to the added delays introduced by Packet Broker (the only scalable way to interconnect the regions), V3 isn’t able to guarantee delivery of downlinks within the 1 second previously used. As most (all?) other large operators had already adopted 5 seconds, TTN decided to play it safe and switched to 5 seconds as well.

Hello, thank you both so much for the info.
@descartes, I just set up a new application, as well as a new device, and I do not see an option for the RX1 window when setting up either the device or the application. Mine is OTAA, by the way.
Would it be possible to check this on your end?

Only in ABP mode, but I believe you can adjust it using the CLI for OTAA.

OK. Is there a way to change it globally through the CLI for an entire application? Otherwise I would need to change it for each device individually after it is added. Or perhaps there is a way to integrate it into the metadata of a device before it is imported?

Let’s have a race to see who can read the documentation fastest!

https://www.thethingsindustries.com/docs/getting-started/cli/

Sure! I see how to do it per-device,

$ ttn-lw-cli end-devices set <app-id> <device-id> --mac-settings.desired-rx1-delay RX_DELAY_5

and I see where it comes from on the network server defaults:

ns.default-mac-settings.desired-rx1-delay
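
(If we were running our own Things Stack instance, I suppose that default could be set when starting the stack - just a guess based on the config key name, and the accepted value format would need checking against the configuration reference:

$ ttn-lw-stack start --ns.default-mac-settings.desired-rx1-delay=<value>

but that's not an option on the community cluster.)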

but is it settable at a global level through the CLI? I don’t see that listed as a command, or how to set any of the

mac-settings.desired-<parameter>

options.

Thank you!

Can you see a command at the application level that sets the RX1 delay parameter, like there is for the device?

I can only refer to the documentation, like you. Or try stuff, like you. Or read the source, like you.

OK. No, I do not see this option, but I’m certainly no expert. Perhaps I have missed something.

I’m no better!

You could script the CLI to read the list of devices and then set the settings you want.
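
A rough sketch of what I mean - untested, and it assumes jq is available and that the JSON from the end-devices list command includes ids.device_id for each device; pick whichever RX_DELAY_x value you settle on:

#!/bin/bash
# Set the desired RX1 delay on every device in an application (hypothetical sketch).
APP_ID="my-app-id"    # replace with your application ID

ttn-lw-cli end-devices list "$APP_ID" \
  | jq -r '.[].ids.device_id' \
  | while read -r DEV_ID; do
      ttn-lw-cli end-devices set "$APP_ID" "$DEV_ID" \
        --mac-settings.desired-rx1-delay RX_DELAY_3
    done

You may need to page through the list if you have a lot of devices, but for a handful this should be enough.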

Or consider that it may actually break your downlinks if you don’t give the infrastructure time to process them. Going from 5 seconds down to 3 seconds is one thing, but cutting the delay to a fifth of the time that TTI are working to would seem to be pushing the boundaries.

Have you checked what the battery life impact will be?

What is your device?

Sure, thank you. I have a custom sensor using LMIC - basically a modified Adafruit M0 LoRa.

Based on my modelling, this would decrease our battery life by about 10-15%: our sleep current is very low (10 µA), but when our micro is awake it consumes about 15 mA at present (so each extra second spent awake costs roughly as much charge as 25 minutes of sleep). We could try sleeping during the delay before the RX window opens, but LMIC currently does not have an option for this.

I’ve been looking into that, as I have a bit of an understanding of the LMIC code - I think it would be feasible to hack the scheduler entry between TX and RX1. I have also looked at the original Semtech code, but I’ve not got my head into that yet.

Personally, I’d like the option of more control over the stack so I can command it more for things like blind ADR and more intelligent start-up joins or re-joins.

The problem is when LMIC is used in the Arduino way, where the timing is done with busy-waits, which consumes power. LMIC uses the function hal_checkTimer() to set up a timer for the next wake-up and hal_sleep() to put the CPU into a low-power mode when nothing needs to be done. These can be used to hook in a proper low-power implementation (timer + CPU sleep).

Hmm, Helium (the largest LoRaWAN network) managed to use 1 second as the default RX1 delay.

Have you ever met “GSM Latency”?

Guess the difference is Helium doesn’t expect to use GSM… not with GWs apparently consuming 40-60 GB/month! :rofl:

@mrx23dot V2 used 1 second, and you can set that in V3 if you must (some legacy devices had to keep it as a fallback if they could not be reprogrammed in the field during the V2-V3 migration) - but you need to have regard for backhaul latency, as Johan calls out; this was evidenced many times over during the life of V2. Just don’t complain if your node misses messages as processing loads increase over time; ‘the standard’, or perhaps better the ‘default’, per the LoRa Alliance and as used by many other LoRaWAN networks, is 5 seconds.

Guess when you are running a full ledger you have to assume a decent broadband/fibre connection for most of the route to the GW, which allows you to assume a lower latency, per Helium - which is LoRaWAN with a twist, as you might say.

From my part of the world I have often seen 800 ms or more from the network the gateway is on to the NS.

So if you think that Mr H is going to handle this with real hotspots, good luck.

Do they care? Most hotspot^H^H^H^H^H^H^Hminer owners seem only interested in earning rewards, not LPWAN, so as long as the mechanism that gets them paid works, it may not matter that the network fails to meet downlink windows.