Duty Cycle - Time on Air

Hey folks,
while implementing a duty cycle limit on the network server side, a few question marks popped up. I’m mainly referencing European regulations, as I’m dealing with the EU868 band.

  1. I often read something like: when a device has transmitted for xxx milliseconds, the channel is not available for another xxx milliseconds. Where can this regulation actually be found? I cannot derive this interpretation from either the ETSI standard or the LoRaWAN specs.

  2. Wouldn’t sending 20 packets of 1560 ms airtime each (SF11 @ 125 kHz) every 2 seconds (thus < 0.5 s between transmissions), followed by silence for the rest of the hour, still comply with a duty cycle limit of 1%?

This point makes a huge difference in the implementation; a quick arithmetic check follows below. Thanks in advance for any hints.
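For context, here is the back-of-the-envelope check behind question 2 (a minimal sketch in Go; the time-on-air formula is the one from the Semtech SX127x datasheet, and the 64-byte PHY payload, i.e. 51 application bytes plus 13 bytes of LoRaWAN overhead, is my assumption):

```go
package main

import (
	"fmt"
	"math"
)

// loraAirtime returns the time on air in seconds for one LoRa frame,
// per the formula in the Semtech SX127x datasheet.
// sf: spreading factor, bw: bandwidth in Hz, plBytes: PHY payload length,
// nPreamble: preamble symbols, cr: coding rate offset (1 = 4/5),
// lowDROpt: low data rate optimisation (mandatory for SF11/SF12 at 125 kHz).
func loraAirtime(sf int, bw float64, plBytes, nPreamble, cr int, lowDROpt bool) float64 {
	tSym := math.Pow(2, float64(sf)) / bw // symbol duration in seconds
	tPreamble := (float64(nPreamble) + 4.25) * tSym

	de := 0
	if lowDROpt {
		de = 1
	}
	// Explicit header (IH = 0), CRC enabled (+16 bits).
	num := float64(8*plBytes - 4*sf + 28 + 16)
	nPayload := 8.0 + math.Max(math.Ceil(num/float64(4*(sf-2*de)))*float64(cr+4), 0)
	return tPreamble + nPayload*tSym
}

func main() {
	// 51 application bytes + 13 bytes of LoRaWAN overhead = 64-byte PHY payload.
	toa := loraAirtime(11, 125000, 64, 8, 1, true)
	fmt.Printf("time on air per frame: %.3f s\n", toa) // ~1.561 s

	burst := 20 * toa       // 20 frames in quick succession
	budget := 0.01 * 3600.0 // 1% of a one-hour observation period = 36 s
	fmt.Printf("burst: %.1f s of %.1f s budget -> within limit: %v\n",
		burst, budget, burst <= budget)
}
```

By plain per-hour accounting the burst stays under 1%; whether that accounting window is fixed or sliding is exactly the ambiguity I’m asking about.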

In general the precise detail you are after is in the regulations of your particular country.

For the UK, if I want to know the duty cycle allowed at a particular frequency, I would check the IR-2030 document produced by Ofcom, the UK regulator. Those regulations in turn reference the relevant European standards.

I am not aware of a specific German regulation, so I treat ETSI EN 300 220 (parts 1/2) as the “bible”. But again, it only states:

The Duty Cycle at the operating frequency shall not be greater than values in annex B or any NRI for the chosen operational frequency band(s).

And that annex just contains percentage values. I cannot find anything about dwell times or a “minimum silence” between two transmissions.
But maybe I’m just missing something; that’s why I’m asking the experts here.

Legal considerations aside, LoRaWAN compliance means the network server should attempt to limit transmissions at SF11 or SF12. The spec specifically states that devices should not be set up to send at these SFs as a normal working mode.

And 20 uplinks of 1.5 s each are a gift-wrapped opportunity for other uplinks to come along, collide, and cause reasonably frequent packet loss.

What is it you have planned that needs 1020 bytes uploading all at once?

Can you not have more gateways so they can be closer to the device?

This is exactly the sort of thing that no member state of the EU gets to legislate for their own country. So there won’t be a German regulation.

If it can’t be found in the standard, what conclusion can you draw from that?

But overall, if you want legal advice that lets you hold your head up in court, this is not the place for it. We are not lawyers!

Thanks for your responses. And you are right, I’m not seeking legal advice.

My “problem” stems from the fact that the ChirpStack network server does not take duty cycle limitations into account (and the project I’m part of is bound to the ChirpStack stack at the moment). My question is not about a certain amount of data in a certain time, but rather about how to implement some kind of “DC planner” inside ChirpStack, so that frames are not dropped by gateways due to DC limitations (which seems to be a major cause of performance problems in LoRaWAN networks, according to Abdelfadeel et al., 2020).

And to implement the DC limit “the right way”, I’m just trying to understand it fully; as a software developer I’m used to looking up specs, standards, and regulations.
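To make the question more concrete, the kind of “DC planner” I have in mind is a sliding one-hour window of recorded airtime per sub-band, roughly like this (a minimal sketch in Go; the type and method names are mine, not ChirpStack’s):

```go
package main

import (
	"fmt"
	"time"
)

// tx records one past transmission on a sub-band.
type tx struct {
	at      time.Time
	airtime time.Duration
}

// dcTracker keeps a sliding window of airtime per sub-band and answers
// whether a new transmission still fits the band's duty cycle.
type dcTracker struct {
	window  time.Duration // observation period, e.g. one hour
	maxDC   float64       // e.g. 0.01 for a 1% sub-band
	history []tx
}

// prune drops transmissions that have fallen out of the window.
func (t *dcTracker) prune(now time.Time) {
	cutoff := now.Add(-t.window)
	kept := t.history[:0]
	for _, h := range t.history {
		if h.at.After(cutoff) {
			kept = append(kept, h)
		}
	}
	t.history = kept
}

// CanTransmit reports whether a frame with the given airtime would keep
// the windowed duty cycle at or below the limit.
func (t *dcTracker) CanTransmit(now time.Time, airtime time.Duration) bool {
	t.prune(now)
	var used time.Duration
	for _, h := range t.history {
		used += h.airtime
	}
	budget := time.Duration(t.maxDC * float64(t.window))
	return used+airtime <= budget
}

// Record stores a transmission that was actually sent.
func (t *dcTracker) Record(now time.Time, airtime time.Duration) {
	t.history = append(t.history, tx{at: now, airtime: airtime})
}

func main() {
	tracker := &dcTracker{window: time.Hour, maxDC: 0.01}
	now := time.Now()
	frame := 1561 * time.Millisecond // the SF11 frame from my first post
	sent := 0
	for tracker.CanTransmit(now, frame) {
		tracker.Record(now, frame)
		sent++
		now = now.Add(2 * time.Second) // one attempt every 2 s, as in the example
	}
	fmt.Printf("frames that fit the 1%% hourly budget: %d\n", sent) // 23
}
```

With the SF11 frame from my first post, 23 such frames fit into the 36 s hourly budget before the planner starts refusing.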

Is that a TTN server?


Maybe this paper can help you to understand the Duty Cycle:


Great, many thanks, Wolfgang.
The authors seem to conclude similar things:

The duty cycle does not have any restrictions how the transmissions should be spread out in time. It makes no distinction if transmission times are evenly spaced out or if the transmission time is used up at the beginning of the observation period and the rest of the interval waited out. The only thing that must be respected is the maximum duty cycle ratio itself. As such, devices are allowed to transmit using bursty traffic, e.g., transmitting 36 s and then waiting for 3564 s for a duty cycle of 1%.

I think I got a picture of how to implement it. Thanks all and have a nice evening!

Points arising:

  1. Duty cycle is a device problem, not a gateway or network server problem - they process what they receive. The only thing a network server can do is ask the device to adjust its data rate; it can’t stop a device from transmitting.

  2. The academics love to write papers.

  3. Your 20 uplinks of 51 bytes in short order need a total rethink - unless you are writing an academic paper - in which case I rest my case on point 2, as no one in reality would do this.

  4. This forum is for LoRaWAN on TTN discussions only. ChirpStack is off topic. But as a general discussion it’s OK, as long as we aren’t dragged into implementation details specific to a non-TTN setup.


My understanding has always been that, for regulations of this type, the EU introduces regulations that then require member states to introduce matching regulations for their own countries.

In the case of the UK, IR-2030, which refers extensively to EU standards, is itself enforced by regulations under the Wireless Telegraphy Act. So IR-2030 is in effect a UK regulation.

Absolutely, but they don’t get much of a choice: if the EU comes up with a regulation, they are meant to implement it ‘as-is’ in their national legislation.

I believe this was a key discussion point in some paperwork exercise that we’ve had foisted on us in the UK recently.

The 20 uplinks at that short cadence were merely an “extreme example” to clarify which edge cases I see. And ChirpStack was the background information on why I am bothering with these thoughts, but I considered the question itself not vendor-specific.

And as I understood it, both ends of the air channel have to respect DC limits: devices and gateways. I learned that RAK gateways don’t deal with DC, while others do and drop/deny packets. Is that contrary to your experience?

A gateway has to follow the DC limits too. It has to fulfil the same requirements as a node (it is an SRD, a short range device, as well).

I realize that I have so far completely disregarded the fact that the network server has almost no real choice to “cleverly schedule” or postpone a downlink, because the downlink depends on the uplink, and at most it may choose between Rx1 and Rx2 (see the sketch below).
So, well then: what happens if there are downlinks for too many nodes on a single gateway? The gateway must discard some downlinks to stay within the DC limits, and the NS/AS won’t notice unless they were confirmed downlinks?
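To make that concrete, the only lever I can see looks roughly like this (a sketch building on the dcTracker from my earlier post, so not self-contained; the 10% figure for the 869.4-869.65 MHz sub-band that hosts the EU868 Rx2 default of 869.525 MHz is my understanding of ERC 70-03):

```go
// chooseRxWindow sketches the one scheduling lever the network server has:
// pick Rx1 or Rx2 depending on which sub-band still has duty cycle budget
// on the target gateway. dcTracker is the sliding-window tracker from the
// earlier sketch; keep one tracker per gateway per sub-band.
func chooseRxWindow(now time.Time, rx1, rx2 *dcTracker,
	rx1Airtime, rx2Airtime time.Duration) (window string, ok bool) {
	switch {
	case rx1.CanTransmit(now, rx1Airtime):
		return "RX1", true // Rx1 uses the uplink channel's sub-band
	case rx2.CanTransmit(now, rx2Airtime):
		return "RX2", true // 869.525 MHz sits in the 10% sub-band in EU868
	default:
		return "", false // would breach DC: drop, or queue for a later uplink
	}
}
```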

People reading the thread will assume the limits being discussed are those that apply to TTN - and they do, but only in the legal sense.

However, most often the legal limits are not the limiting factor on TTN; the fair usage limit is, so clarity is needed.
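For scale, a rough comparison of the two budgets (a sketch in Go; the 30 s/day uplink airtime figure is my recollection of TTN’s published Fair Use Policy, so check the current wording):

```go
package main

import "fmt"

func main() {
	legalPerDay := 0.01 * 3600.0 * 24 // 1% duty cycle: 864 s of airtime per day
	fupPerDay := 30.0                 // TTN Fair Use Policy: ~30 s uplink airtime per day
	fmt.Printf("legal DC budget: %.0f s/day, TTN FUP: %.0f s/day (%.1fx stricter)\n",
		legalPerDay, fupPerDay, legalPerDay/fupPerDay)
}
```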

Oh, I totally missed that, I’m sorry. It makes sense now that I’m reading it.
My idea was just to ask people with “real experience”, and for that I considered TTN a perfect fit. I didn’t intend to stir things up.

Agreed. But the Fair Use limits are recommendations; the DC limits are given by law. If a user’s gateway exceeds the DC limits, the operator of the gateway (and not TTN) might have problems with the authorities.

But my last question remains:
Is it your experience, too, that (some) gateways just drop downlinks if they would otherwise violate the DC restrictions? And if they do, would the network server ever know (if not via a missing ACK)?