High quantity of SF12 Uplink Messages

Hi All,
I have had a gateway up and running for a few weeks and I have noticed a large number of SF12 uplink messages.
They come from various networks like “Network: Experimental Nodes”, “Network: Loriot” or “Network: The Things Network”.
If it was just one or two I wouldn’t be concerned, but I have between 5 and 10 a minute at times.
I am located in Adelaide, Australia and am wondering if this is normal? (Considering all the documents I have read have stated to never use SF12.)
If it isn’t normal, is there any advice on what to do about it?

@Jaysmithadelaide The traffic is coming from this network of something like 300 nodes. SAWater - Cooling the community
Not only are they transmitting on SF12, but my gateways also transmit back, so I assume it’s also running in Confirmed mode.
This news article explains some of the background SAWater - SA Water maps cool green parks with real-time temperature trial

Thanks Tony,
Looks like an interesting project.

That can be factually determined from the headers, even without access to the decryption keys.

Good Point!

The first byte of the payload is “40”, which I decode to mean an “Unconfirmed” data uplink. Further delving into the payload, it looks like ADR is activated and the downlinks may be associated with the network instructing the nodes to change SF. (Something I will look into later.)
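For anyone wanting to repeat that exercise: the message type sits in the top three bits of that first MHDR byte. A minimal sketch of the decode (table taken from the LoRaWAN Link Layer specification):

```python
# Decode the LoRaWAN MHDR (first byte of the PHYPayload).
# MType is carried in bits 7..5; 0x40 >> 5 == 0b010 == Unconfirmed Data Up.
MTYPES = {
    0b000: "Join Request",
    0b001: "Join Accept",
    0b010: "Unconfirmed Data Up",
    0b011: "Unconfirmed Data Down",
    0b100: "Confirmed Data Up",
    0b101: "Confirmed Data Down",
    0b110: "RFU",
    0b111: "Proprietary",
}

def decode_mhdr(byte: int) -> str:
    """Return the MType name for a raw MHDR byte."""
    return MTYPES[(byte >> 5) & 0b111]

print(decode_mhdr(0x40))  # Unconfirmed Data Up
```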

To decode the payload, I simply read the LoRaWAN Link Layer specification.

Effective and educational indeed.

One can also paste the base64 or hex into that online decoder without providing keys.

I could be wrong, but I think this could just be down to the fact that we don’t have enough gateway density for ADR to feel “safe” transmitting at a lower SF.

One could look at the signal strengths.

And if the reported replies through forum members’ gateways contain MAC commands, those are often cleartext, though they can also be sent in an encrypted way.

@Jeff-UK @TonySmith @Jaysmithadelaide
My 2c/2p…

If these nodes are consistently above 3dB SNR and are still on SF12 after hundreds of packets then either ADR isn’t turned on, they’re not hearing/processing ADR MAC commands, or something else has gone wrong. If SNR is lower, then potentially they need SF12 to maintain a stable connection.

If you look at the cumulative airtime per node over a 24 hour period, available in gateway traffic metadata (which is much easier to do in V3), this should be less than 30s per device, according to the fair use policy.
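To get a rough sense of why the jump to SF12 blows the budget, here is a sketch of the standard Semtech time-on-air formula for LoRa at 125 kHz (the 20-byte payload is just an assumed example, not taken from this network's devices):

```python
import math

def lora_airtime_ms(payload_len: int, sf: int, bw_hz: int = 125_000,
                    preamble: int = 8, cr: int = 1, explicit_header: bool = True) -> float:
    """Approximate LoRa time-on-air in ms, per Semtech's SX1276 formula."""
    t_sym = (2 ** sf) / bw_hz * 1000.0                 # symbol duration, ms
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0   # low data rate optimisation
    h = 0 if explicit_header else 1
    n_payload = 8 + max(
        math.ceil((8 * payload_len - 4 * sf + 28 + 16 - 20 * h)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

# Hypothetical 20-byte uplink: how many packets fit into 30 s/day at each SF?
for sf in (10, 12):
    t = lora_airtime_ms(20, sf)
    print(f"SF{sf}: {t:.0f} ms/packet, ~{int(30_000 // t)} packets/day")
```

The step from SF10 to SF12 roughly quadruples the airtime per packet, so an hourly uplink schedule that was comfortably inside the policy at SF10 falls well outside it at SF12.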

We monitor this for customer applications that we manage, and as much as it annoys me to see devices hammering the network, sometimes it’s devices that we have configured ourselves causing the problem! For instance, a bunch of devices configured with ADR, sending hourly uplinks, and staying within the fair use policy on SF10, will without warning far exceed the fair use policy if they jump up to SF12 because someone’s turned their gateway off. It’s like herding cats.

In these situations we may adjust the uplink interval remotely for devices that support that. But we tend to always have some devices that are over and some that are under.

In my mind, as long as the TTN Application owner has made some consideration of this and isn’t just blasting at SF12, no ADR, every 5 minutes, then I’d like to think it’s a reasonable attempt at complying with the fair use policy. For instance, if the airtime per device across the entire application is under 30s on average, then I think that’s OK (just my interpretation though).

Where it is careless or deliberate abuse of the fair use policy, then I think the local community should aim to approach the application owner to make sure they’re aware of the issue as a first step and potentially escalate to TTI if the abuse continues. Again, these are just my thoughts, so maybe not even relevant in this situation.


That change of behavior needs to be automatically implemented in the node firmware itself - the node needs to conform to the guidelines (and beyond them, the regulations) for the spreading factor it is actually using, not the one it wishes it were using.

Some of us aren’t allowed SF11 or SF12 at all, because even the LoRaWAN headers alone would be longer than our packet airtime limit.

Yes, it would be nice if devices supported the TTN fair use policy, but many devices are not created with TTN in mind. They’re designed to conform to the LoRaWAN spec itself, and they end up being registered on private and non-TTI networks.

For one of our device classes we’ve created an application which monitors uplink SF. When a device is greater than SF9, the app sends a downlink to the device to change its mode of operation (causing it to send less information, therefore shorter payloads, therefore lower airtime). When the device drops back to SF9, it’s allowed to send its full payload again. This could also be done for uplink interval.
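The logic of that application is roughly the following (a simplified sketch; the mode commands and the downlink transport are placeholders, not a real TTN API or our actual implementation):

```python
# Sketch of the SF-watching application described above.
# send_downlink() stands in for whatever downlink integration is used
# (e.g. the network server's downlink queue); it is not a real API.

REDUCED_MODE, FULL_MODE = 0x01, 0x00  # hypothetical device command bytes

def on_uplink(device_state: dict, spreading_factor: int, send_downlink) -> None:
    """On each uplink, ask slow devices to shrink their payload."""
    reduced = device_state.get("reduced", False)
    if spreading_factor > 9 and not reduced:
        send_downlink(REDUCED_MODE)   # shorter payload -> less airtime
        device_state["reduced"] = True
    elif spreading_factor <= 9 and reduced:
        send_downlink(FULL_MODE)      # back at SF9 or better: full payload again
        device_state["reduced"] = False
```

Note that the state check avoids repeating the same downlink on every uplink, which keeps the application inside the 10-downlinks-per-day limit.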

Again, this has to happen in the node firmware itself, NOT on the infrastructure side!

Relying on a downlink to accomplish this is simply irresponsible - you (and not those using private networks) have agreed to abide by the stricter TTN limits, vs. the applicable regulatory limits.

You simply must not use a node on TTN, unless it is capable of abiding by the TTN rules.

Hi Maj,

My gateways provide the downlink some of the time, so the Rx levels at my gateways are at times the best reception. Since they sometimes respond and other times not, I’m assuming the levels I’m seeing are on par with what other gateways are also seeing.

Inspecting the gateway metadata for a number of packets, I can see RSSI levels are < -100 dBm and SNR is < -13 dB, sometimes as low as -20 dB.

Hi Tony, yeah with SNR that low it seems that those devices are on the edge of coverage. If they’re on SF12 then the application owner should ensure that their uplink interval is large enough to abide by the fair use policy.

That results in not being able to use any commercially available certified LoRaWAN devices on TTN as we can’t update the firmware on those devices without the certification being invalidated.
The intention of TTN is to provide a community network which anyone can use to deploy LoRaWAN nodes. To allow everyone a fair share of the resources (airtime, backend infrastructure etc) TTN has a fair access policy which states you are allowed on average 30 seconds uplink airtime per node per day. Johan and Wienke explicitly stated a device is allowed to exceed this one day if it backs off the next day. The same applies to the 10 downlink limit for a device.
This policy is not instituted to ban nodes from the network, just to make sure no one device monopolizes the shared resources.


I disagree. If I’m abiding by the fair use policy of <10 downlinks per day, and I am using that downlink to minimise uplink airtime, I think it’s a responsible use of infrastructure.

If the device has no concept of the TTN Fair Use Policy built in (and I haven’t seen one that does yet), the TTN Application owner is responsible for ensuring the policy is met. Here’s some of the ways I can think of to achieve that:

  • Use an appropriate update interval to allow for devices that go to SF12
  • Use small payloads (if possible)
  • Set a maximum SF on the device (if possible)
  • Install additional gateways
  • Remotely instruct devices to reduce their airtime (ie my previous example)
  • Mount the devices in a more effective location

I don’t think we need to ban all devices that do not support Fair Use Policy in firmware.


Hi all,
Thanks for such great input.

Like Andrew mentions, I like the idea of a little bit of flexibility with the airtime rules, as long as we are reasonable and fair about it.
It seems like the TTN doco with the hard-and-fast rule of no SF12 is unworkable and probably needs to be revised to a more realistic expectation of the network users.

It is actually the LoRa Alliance that states devices hard-coded to use fixed SF11 or SF12 are not allowed to join a network.
And in the US those SFs can’t be used as they result in illegal use of a frequency.


This comes back to what I suggested here: Filter forwarded packets if on limited cellular plan. It may be that the user is oblivious to either the status of their nodes or the needs of the TTN network, so I would approach them to inform & educate… :wink: If it is a commercial node with no firmware access, operating on the edge of reception as indicated, then their solution may have to be adding extra gateway(s)… which would be a result for the local community :slight_smile:


Even if that were true, it still wouldn’t be an excuse for spamming the network, but rather evidence that one needs to get a node firmware that honors TTN rules.

But for the most part it is not true, because, with a few exceptions (whose manufacturers may need some pushback), most TTN nodes with certified stacks don’t have a unitary firmware that reads sensors and transmits, but rather have a module running a certified stack and an application firmware in a different chip requesting it to send things.

In such case, it’s the responsibility of the application firmware to ask the stack what spreading factor is being used, and back off the transmission rate accordingly.
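In pseudo-Python, that responsibility might look something like this (the backoff table and base interval are illustrative assumptions; on a real module the current data rate would come back from an AT/serial query to the stack):

```python
# Sketch: application firmware scales its uplink interval to the
# spreading factor the certified stack is actually using.

BASE_INTERVAL_S = 3600  # assumed interval that meets fair use at SF7..SF9

# Airtime roughly doubles per SF step above SF9, so back off accordingly.
SF_BACKOFF = {7: 1, 8: 1, 9: 1, 10: 2, 11: 4, 12: 8}

def next_uplink_interval(current_sf: int) -> int:
    """Seconds to wait before the next uplink, given the stack's actual SF."""
    return BASE_INTERVAL_S * SF_BACKOFF.get(current_sf, 8)
```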

Getting misbehaving nodes into the field is really, really intolerable - because once they are out there, for the most part, they won’t get fixed.

Johan and Wienke explicitly stated a device is allowed to exceed this one day if it backs off the next day.

This just circles back to the basic fact that to have any confidence that the transgression will be time-limited, backoff must be autonomous behavior - it cannot be downlink commanded behavior, because the delivery of downlinks cannot be guaranteed.

Consider what happens with a bunch of nodes that will only back off in response to a command: once they start stepping on each other’s transmissions, you can’t command them to back off - the situation is unstable, because congestive failure causes positive feedback resulting in more failures. In contrast, a stable system does its backoff autonomously, and only ramps up in response to encouragement from the network (specifically in the form of ADR sending it to a faster SF).
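As a sketch of that stable behaviour (the names and limits here are illustrative, not from any particular stack): back off multiplicatively on silence, and only speed up again when the network explicitly says so.

```python
# Sketch of autonomous backoff: the node slows itself down whenever the
# network is silent, and only speeds up when ADR moves it to a faster SF.

MIN_INTERVAL_S = 600      # fastest permitted uplink rate (assumed)
MAX_INTERVAL_S = 86_400   # ceiling: one uplink per day

def update_interval(interval_s: int, heard_network: bool, adr_sped_us_up: bool) -> int:
    """New uplink interval after each transmission window."""
    if adr_sped_us_up:                    # explicit encouragement from the network
        return MIN_INTERVAL_S
    if not heard_network:                 # silence: back off autonomously
        return min(interval_s * 2, MAX_INTERVAL_S)
    return interval_s                     # steady state
```

Because the backoff needs no downlink to take effect, a congested cell damps itself down instead of feeding the congestion.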

And it’s not just a politeness issue - frequently transmitting at slow SFs is a battery killer, too. The last thing you want is a node whose battery expenditure increases when it’s out of touch with any functioning gateway.

This policy is not instituted to ban nodes from the network, just to make sure no one device monopolizes the shared resources.

Exactly - for the community network to work, users need to take responsibility to deploy only well behaved nodes which autonomously reduce their transmit rate when not receiving downlinks.

What we really, really, can’t have are irresponsibly broken nodes that increase their airtime utilization when they aren’t getting a response from the network.

Just because that’s what was bought off the shelf is no excuse - doubly so because in most cases it can be fixed in the accompanying custom portion that injects payloads to send.
