High quantity of SF12 Uplink Messages

Good Point!

The first byte of the payload is “40”, which I decode to mean an “Unconfirmed” data uplink. Delving further into the payload, it looks like ADR is activated and the downlinks may be the network instructing the nodes to change SF (something I will look into later).

To decode the payload, I simply read the LoRaWAN Link Layer specification.
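For anyone wanting to reproduce that check, here's a minimal Python sketch of the MHDR decode. The MType table is taken from the LoRaWAN Link Layer specification; the function name is just illustrative:

```python
# The MHDR is the first byte of the PHYPayload; MType lives in bits 7..5.
# 0x40 >> 5 == 0b010 == Unconfirmed Data Up.
MTYPES = {
    0b000: "Join Request",
    0b001: "Join Accept",
    0b010: "Unconfirmed Data Up",
    0b011: "Unconfirmed Data Down",
    0b100: "Confirmed Data Up",
    0b101: "Confirmed Data Down",
    0b110: "RFU",
    0b111: "Proprietary",
}

def decode_mhdr(first_byte: int) -> str:
    return MTYPES[(first_byte >> 5) & 0b111]

print(decode_mhdr(0x40))  # Unconfirmed Data Up
```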

Effective and educational indeed.

One can also paste the base64 or hex into that online decoder without providing keys.

I could be wrong, but I think this may just be down to the fact that we don’t have a high enough density of gateways for ADR to feel “safe” transmitting at a lower SF.

One could look at the signal strengths.

And if the reported replies through forum members’ gateways contain MAC commands, those are often cleartext, though they can also be sent in an encrypted way.

@Jeff-UK @TonySmith @Jaysmithadelaide
My 2c/2p…

If these nodes are consistently above 3 dB SNR and are still on SF12 after hundreds of packets, then either ADR isn’t turned on, they’re not hearing/processing ADR MAC commands, or something else has gone wrong. If SNR is lower, then they potentially need SF12 to maintain a stable connection.

If you look at the cumulative airtime per node over a 24-hour period, available in the gateway traffic metadata (and much easier to do in V3), it should be less than 30 s per device according to the fair use policy.

We monitor this for customer applications that we manage, and as much as it annoys me to see devices hammering the network, sometimes it’s devices that we have configured ourselves causing the problem! For instance, a bunch of devices configured with ADR, sending hourly uplinks, and staying within the fair use policy on SF10, will without warning far exceed the fair use policy if they jump up to SF12 because someone’s turned their gateway off. It’s like herding cats.

In these situations we may adjust the uplink interval remotely for devices that support that. But we tend to always have some devices that are over and some that are under.

In my mind, as long as the TTN application owner has made some consideration of this and isn’t just blasting at SF12, no ADR, every 5 minutes, then I’d like to think it’s a reasonable attempt at complying with the fair use policy. For instance, if the airtime per device across the entire application is under 30 s on average, then I think that’s OK (just my interpretation, though).

Where it is careless or deliberate abuse of the fair use policy, then I think the local community should aim to approach the application owner to make sure they’re aware of the issue as a first step and potentially escalate to TTI if the abuse continues. Again, these are just my thoughts, so maybe not even relevant in this situation.

Maj

That change of behavior needs to be automatically implemented in the node firmware itself - the node needs to conform to the guidelines (and beyond them, the regulations) for the spreading factor it is actually using, not the one it wishes it were using.

Some of us aren’t allowed SF11 or SF12 at all, because even the LoRaWAN headers alone would be longer than our packet airtime limit.

Yes, it would be nice if devices supported the TTN fair use policy, but many devices are not created with TTN in mind. They’re designed to conform to the LoRaWAN spec itself, and they end up being registered on private and non-TTI networks.

For one of our device classes we’ve created an application which monitors uplink SF. When a device is above SF9, the app sends a downlink to the device to change its mode of operation (causing it to send less information, therefore shorter payloads, therefore lower airtime). When the device drops back to SF9, it’s allowed to send its full payload again. The same could be done for uplink interval.
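A sketch of that watchdog logic might look like the following. The function name, the `reduced_mode` flag, and the one-byte command payloads are all hypothetical, not from any real device profile:

```python
def on_uplink(device: dict, sf: int, send_downlink) -> None:
    # Hypothetical handler: throttle payload size while the device
    # sits above SF9, and restore full payloads once it drops back.
    if sf > 9 and not device.get("reduced_mode"):
        device["reduced_mode"] = True
        send_downlink(device["id"], b"\x01")  # illustrative "short payload" command
    elif sf <= 9 and device.get("reduced_mode"):
        device["reduced_mode"] = False
        send_downlink(device["id"], b"\x00")  # illustrative "full payload" command
```

Tracking `reduced_mode` per device avoids re-sending the same command on every uplink, which matters given the 10-downlink-per-day guideline.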

Again, this has to happen in the node firmware itself, NOT on the infrastructure side!

Relying on a downlink to accomplish this is simply irresponsible - you (and not those using private networks) have agreed to abide by the stricter TTN limits, vs. the applicable regulatory limits.

You simply must not use a node on TTN unless it is capable of abiding by the TTN rules.

Hi Maj,

My gateways provide the downlink some of the time, so the RX levels at my gateways are at times the best reception available. Since they sometimes respond and other times don’t, I’m assuming the levels I’m seeing are on par with what other gateways are seeing.

Inspecting the gateway metadata for a number of packets, I can see RSSI levels are < -100 dBm and SNR is < -13 dB, sometimes as low as -20 dB.

Hi Tony, yeah, with SNR that low it seems those devices are on the edge of coverage. If they’re on SF12, then the application owner should ensure that their uplink interval is large enough to abide by the fair use policy.
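As a rough guide, the minimum compliant interval scales directly with the per-packet airtime. The figures below are illustrative values for a ~20-byte packet at 125 kHz, rounded from the Semtech airtime formula:

```python
# Approximate airtime (ms) per ~20-byte uplink at 125 kHz, 4/5 coding
# rate; illustrative figures, rounded from the Semtech modem formula.
AIRTIME_MS = {7: 56.6, 8: 102.9, 9: 185.3, 10: 329.7, 11: 741.4, 12: 1318.9}

def min_interval_s(sf: int, budget_ms: int = 30_000) -> float:
    """Smallest uplink interval (seconds) that keeps a node under the
    30 s/day fair-use airtime budget at a given spreading factor."""
    return 86_400 * AIRTIME_MS[sf] / budget_ms
```

So a node parked on SF12 needs well over an hour between uplinks of that size, while SF7 allows one every few minutes.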

That results in not being able to use any commercially available certified LoRaWAN devices on TTN, as we can’t update the firmware on those devices without invalidating the certification.

The intention of TTN is to provide a community network which anyone can use to deploy LoRaWAN nodes. To allow everyone a fair share of the resources (airtime, backend infrastructure, etc.), TTN has a fair access policy, which states you are allowed, on average, 30 seconds of uplink airtime per node per day. Johan and Wienke explicitly stated a device is allowed to exceed this one day if it backs off the next day. The same applies to the 10-downlink limit per device.

This policy is not instituted to ban nodes from the network, just to make sure no one device monopolizes the shared resources.

I disagree. If I’m abiding by the fair use policy of <10 downlinks per day, and I am using that downlink to minimise uplink airtime, I think it’s a responsible use of infrastructure.

If the device has no concept of the TTN Fair Use Policy built in (and I haven’t seen one that does yet), the TTN Application owner is responsible for ensuring the policy is met. Here’s some of the ways I can think of to achieve that:

  • Use an appropriate update interval to allow for devices that go to SF12
  • Use small payloads (if possible)
  • Set a maximum SF on the device (if possible)
  • Install additional gateways
  • Remotely instruct devices to reduce their airtime (i.e. my previous example)
  • Mount the devices in a more effective location

I don’t think we need to ban all devices that do not support Fair Use Policy in firmware.

Hi all,
Thanks for such great input.

As Andrew mentions, I like the idea of a little flexibility with the airtime rules, as long as we are reasonable and fair about it.
It seems the TTN doco with its hard-and-fast rule of no SF12 is unworkable and probably needs to be revised towards a more realistic expectation of network users.

It is actually the LoRa Alliance that states devices hard-coded to use a fixed SF11 or SF12 are not allowed to join a network.
And in the US those SFs can’t be used, as they result in illegal use of a frequency.

This comes back to what I suggested here: Filter forwarded packets if on limited cellular plan - #19 by Jeff-UK. It may be that the user is oblivious to either the status of their nodes or the needs of the TTN network, so I would approach them to inform and educate… :wink: If it’s a commercial node with no firmware access, operating at the edge of reception as indicated, then their solution may have to be adding extra gateway(s)… which would be a result for the local community. :slight_smile:

Even if that were true, it still wouldn’t be an excuse for spamming the network, but rather evidence that one needs to get a node firmware that honors TTN rules.

But for the most part it is not true, because with a few exceptions (whose manufacturers may need some pushback), most TTN nodes with certified stacks don’t have a unitary firmware that reads sensors and transmits, but rather have a module running a certified stack and an application firmware in a different chip requesting it to send things.

In that case, it’s the responsibility of the application firmware to ask the stack what spreading factor is being used and back off the transmission rate accordingly.
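For example, on an EU868 device the application MCU can query the stack’s current data-rate index (e.g. via `mac get dr` on the Microchip RN modules) and map it to an SF. The helper below is a sketch; the DR-to-SF table is the EU868 mapping from the LoRaWAN Regional Parameters, and the function name and threshold are assumptions:

```python
# EU868 data-rate index -> spreading factor (LoRaWAN Regional Parameters).
EU868_DR_TO_SF = {0: 12, 1: 11, 2: 10, 3: 9, 4: 8, 5: 7}

def should_throttle(dr_index: int, max_sf: int = 9) -> bool:
    # Treat unknown/RFU indices as worst case (SF12).
    return EU868_DR_TO_SF.get(dr_index, 12) > max_sf
```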

Getting misbehaving nodes into the field is really, really intolerable - because once they are out there, for the most part, they won’t get fixed.

Johan and Wienke explicitly stated a device is allowed to exceed this one day if it backs off the next day.

This just circles back to the basic fact that to have any confidence the transgression will be time-limited, backoff must be autonomous behavior - it cannot be downlink-commanded behavior, because the delivery of downlinks cannot be guaranteed.

Consider what happens with a bunch of nodes that will only back off in response to a command: once they start stepping on each other’s transmissions, you can’t command them to back off - the situation is unstable because congestive failure causes positive feedback, resulting in more failures. In contrast, a stable system does its backoff autonomously, and only ramps up in response to encouragement from the network (specifically, in the form of ADR sending it to a faster SF).
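In code terms, the stable behaviour is the node lengthening its own interval whenever expected network feedback fails to arrive, and only speeding up again on an explicit ADR command. A minimal illustrative sketch (names and defaults are assumptions):

```python
def backoff_interval_s(base_s: int, missed_responses: int,
                       cap_s: int = 86_400) -> int:
    # Exponential autonomous backoff: every missed ADR/link-check
    # response doubles the uplink interval, capped at one day.
    return min(base_s * 2 ** missed_responses, cap_s)
```

Because the backoff runs on the node itself, it works precisely when no downlink can reach the device.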

And it’s not just a politeness issue - frequently transmitting at slow SFs is a battery killer, too. The last thing you want is a node whose battery expenditure increases when it’s out of touch of any functioning gateway.

This policy is not instituted to ban nodes from the network, just to make sure no one device monopolizes the shared resources.

Exactly - for the community network to work, users need to take responsibility to deploy only well behaved nodes which autonomously reduce their transmit rate when not receiving downlinks.

What we really, really, can’t have are irresponsibly broken nodes that increase their airtime utilization when they aren’t getting a response from the network.

Just because that’s what was bought off the shelf is no excuse - doubly so because in most cases it can be fixed in the accompanying custom portion that injects payloads to send.

Let me start by noting that implementing the logic to adhere to the TTN fair access policy at node level would be an excellent idea for TTN-specific nodes. However, care must be taken for it not to break LoRaWAN certification testing requirements. (During testing the stack does not observe legal airtime requirements either, to speed up testing, which is done in an RF-isolated environment.)

Interesting that your experience differs vastly from the commercial nodes I encounter in my deployments. I’m mostly seeing nodes where one controller takes care of both the LoRaWAN stack and the application. Three years ago Microchip RN modules were the de facto standard; these days the majority of new nodes are based on one of the controllers with an integrated radio (not necessarily on the same die).

Devices back off to stay within legal limits, but the TTN fair access policy is not implemented in any of the commercial devices I know of. Being able to configure such limits would be a good enhancement at the standard level, in my opinion.
(For commercial LoRaWAN operators a limiting function would be good as well, to stay within the plan purchased for the device.)

For any LoRaWAN network to work, users should only deploy well-behaved nodes. If someone starts a private network with misbehaving nodes using the same frequencies TTN uses (and there is not a lot of space available for other frequency plans in large parts of the world), all TTN users suffer.

And that is where I strongly disagree. Most off-the-shelf products do not allow modification of the firmware without invalidating the LoRaWAN certification (and possibly breaking the stack), because just one processor is used, driving both the application and the LoRaWAN stack. At least, none of the 20+ different node types I’ve handled in the last two years allow these modifications. Some do not even provide a means to modify the firmware, and the most restricted one not even the operational parameters.

As near as I can tell, in arguing for manufacturer non-cooperation as a legitimate excuse to violate the airtime policy, you are basically stating the opinion that it is merely a “goal”, and not any sort of actual “policy” at all.

I believe that with a bit more consideration of the practical impact at scale, you’ll realize that’s not a technically viable position for a growing network - but you are welcome to your own opinions.

In terms of the LoRaWAN certification argument: if certification is allegedly the obstacle to achieving the autonomous backoff behavior needed to be fully compliant with TTN airtime “goals”, then people should probably give up on the idea of LoRaWAN certification as a positive feature for TTN nodes, and instead focus only on legal certification as a radiator, while achieving node behavior that is actually constructive rather than destructive to the idea of a shared network.

Particularly, that would mean making sure that nodes autonomously behave appropriately in terms of both sharing the airwaves and consuming their own battery, so that they can behave appropriately in precisely the situations where they are receiving no feedback from the network.

First of all, I never said devices are allowed to violate the policy. I just pointed out that generic devices from vendors do not implement the logic you demand to keep the airtime within the TTN policy. So the owner of the device is responsible for making sure it stays within policy.

Second, I know for sure that me demanding the manufacturer change the firmware to implement this is not going to work. In a past case where a device implemented mandatory acknowledgement on uplink, I had to get TTI involved to make the manufacturer see the error in their logic, and wait 9 months for an updated firmware (and that was a vendor where I had good relations with their support department). It all comes down to the volumes they can sell to you, and my projects are not even close to the volume where they would consider custom firmware (and the required new LoRaWAN certification).

I don’t state that devices are allowed to transmit in excess of the TTN policy. I do state that a mechanism to take care of that in the node would be good; I merely state that most TTN users do not have the leverage to demand that feature.

I even state I think adding such a mechanism to the LoRaWAN standard would be a good idea.

Please read all of my message and not just the parts of it that trigger a knee-jerk reaction. And consider how lucky you are if you are in a position to demand those changes and have them implemented.

If you are saying that it’s okay to rely on the success of downlinks to attempt to comply with the policy, then in reality, you are arguing that it’s okay not to comply.

We all know that downlinks won’t always be achievable - this whole subthread kicked off when someone pointed out that downlink failures meant they were sometimes out of compliance.

And in fact, downlinks are least likely to work in exactly the situation - a gateway change or failure - where a node would have autonomously gone to a higher spreading factor and started burning up airtime and its own battery with too many long transmissions. Any viable node simply must back off its rate in such a situation, for its own purposes as well as politeness.

It’s really quite simple: because infrastructure-side intervention isn’t always possible, either you believe that a node must autonomously comply, or you believe that non-compliance is acceptable.

And consider how lucky you are if you are in position to demand those changes and have them implemented.

We refuse to be held hostage by vendors with faulty software; in practical terms we either avoid their products or replace their buggy software with our own maintainable software.

Which gets back to the earlier point: if “LoRaWAN certification” is being seen as an excuse to violate TTN policies, then “LoRaWAN certification” is a bug rather than a feature for TTN.