How many gateways are needed to support 1,000 nodes sending 10 bytes every 5 minutes?

Hi everyone,

My colleagues and I are building a system for monitoring various parameters of electrical devices (current, voltage, etc.). One of our ideas is to use LoRa technology for data transmission.
But when we started doing some calculations, we were not sure whether it will be able to handle all the data once a larger number of devices is connected.
Maybe you have some experience and could tell us if it's the right solution for us?

Here is an example of one of our system sites:
We have 1,000 objects in a small area (a range of 500 m), in Europe.
Each of them should transmit data once every 5 minutes.
Payload size is around 10 bytes from each device.
Let's say it's an open area and the lowest SF should be 7 or 8.

We made some calculations ourselves, and if we are correct we would need at least 20 gateways to meet our requirements? Or are we doing something completely wrong?

Thanks in advance!

Care to share your calculations?

First of all: I am not an expert.

10 bytes at SF8 take about 114 ms air time to send. When repeating that every 5 minutes, you’d send (24 × 60) / 5 = 288 messages per day, totalling 33 seconds. That’s just above the TTN Fair Access Policy, which is roughly based on 1,000 nodes per gateway (but also on an average duty cycle of 5%, which may be high). Using that math, one gateway would suffice?

It might be stretching the limits though, if all nodes are using the maximum daily air time. On the other hand: on SF7 the air time would be about half of this.
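For what it's worth, that arithmetic as a quick Python sketch, using the rounded air times from above (114 ms at SF8 and roughly half of that at SF7), so not exact values:

```python
# Rough daily-airtime check against the TTN Fair Access Policy (~30 s of uplink airtime per day).
# The air times below are the rounded values used above, not exact calculations.
AIRTIME_S = {"SF8": 0.114, "SF7": 0.057}  # ~10-byte application payload, rounded
INTERVAL_MIN = 5                          # one uplink every 5 minutes

messages_per_day = 24 * 60 // INTERVAL_MIN  # 288
for sf, toa in AIRTIME_S.items():
    daily_airtime = messages_per_day * toa
    print(f"{sf}: {messages_per_day} messages/day, {daily_airtime:.1f} s airtime/day")
# SF8: 288 messages/day, 32.8 s airtime/day  (just above the ~30 s fair-use budget)
# SF7: 288 messages/day, 16.4 s airtime/day
```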

When not taking the Fair Access Policy into account, but using a lower duty cycle of 1%, you could send 10 bytes at SF8 every 12 seconds (on the same channel). So, assuming it’s a good idea to keep the network’s duty cycle below 1% too, then when sending only once per 5 minutes while you could repeat that every 12 seconds, I guess you could support (5 × 60) / 12 = 25 nodes per channel, per gateway. With 8 channels, that’s 200 nodes per gateway, so would require 5 gateways for your 1,000 nodes?
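And the same reasoning for the 1% duty cycle as a sketch, again with the rounded figures from this post:

```python
# Nodes per channel and per gateway when dimensioning the network itself to a 1% duty cycle,
# using the rounded figures from this post ("every 12 seconds" at SF8).
SEND_INTERVAL_S = 5 * 60   # each node transmits once per 5 minutes
MIN_GAP_S = 12             # rounded minimum time between transmission starts at a 1% duty cycle
CHANNELS = 8
NODES = 1000

nodes_per_channel = SEND_INTERVAL_S // MIN_GAP_S   # 25
nodes_per_gateway = nodes_per_channel * CHANNELS   # 200
gateways_needed = -(-NODES // nodes_per_gateway)   # ceiling division: 5
print(nodes_per_channel, nodes_per_gateway, gateways_needed)
```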

It’s hard to come up with definitive numbers. As nodes just send when they feel like it, the duty cycle aims to lower the chances of collisions, also with non-LoRa signals. That’s also why, in the previous paragraph, I did not multiply the result by 100 to get to 100% air time usage. (For those 25 nodes above, the air time would only total 25 × 114 ms = 2.9 seconds every 5 minutes, just below 1% of the available 300 seconds.) The overall duty cycle of the network could probably be higher without introducing too many collisions. But if all sensors would react to similar events, then they would transmit simultaneously despite the duty cycle. Also, adding more gateways assumes the network will tell nodes to lower their transmission power, to ensure only a few gateways receive the transmissions of a given node.

Did I say I am not an expert? :wink:


I think Semtech is a bit more optimistic with the calculations regarding the capacity of a LoRa gateway

as you can read in their FAQ. Sure, the document does not mention the packet size.

Also, I think different spreading factors can be handled simultaneously, like virtual sub-channels, not affecting each other? If true, then how does one guess how much is sent per sub-channel…

But that might not even be feasible for all gateways, if any:

Thanks for the answer.

We tried to do the calculations with the maximum time on air for an end device first, without including the gateway side.
So if we have 10 bytes of payload + ~13 bytes of LoRaWAN overhead, it's about 40 ms (SF7) and 70 ms (SF8).
That means 750 (SF7) or 428 (SF8) messages per day (within the 30 seconds of daily air time). So it looks like from this side it meets the requirements and 1 gateway should be enough.
But when it comes to calculating the channel capacity and the number of gateways needed, the required numbers increase and we are not sure about the best way to do the calculations.
The number of 20 gateways came from a calculation with data transmitted every minute.

Also, since a large number of devices will be placed in a small area, it will not be complicated to reduce the transmission power, though this might come at the cost of collisions.

That seems to be too low; following Semtech’s designer’s guide, I think those are 61.7 ms and 113.2 ms.
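For reference, here's a minimal sketch of that time-on-air formula (from Semtech's SX127x documentation), assuming a 23-byte PHY payload (the 10 application bytes plus ~13 bytes of LoRaWAN overhead mentioned above), 125 kHz bandwidth, coding rate 4/5, explicit header, CRC on, and an 8-symbol preamble. It reproduces the 61.7 ms and 113.2 ms figures, but treat it as a sketch rather than an official calculator:

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, coding_rate=1,
                    preamble_symbols=8, explicit_header=True, crc=True,
                    low_dr_optimize=None):
    """Time on air in ms, per Semtech's SX127x formula (a sketch, not a vendor tool)."""
    if low_dr_optimize is None:
        # Low data rate optimisation is mandated for SF11/SF12 at 125 kHz.
        low_dr_optimize = sf >= 11 and bw_hz == 125_000
    t_sym = (2 ** sf) / bw_hz                        # symbol duration in seconds
    t_preamble = (preamble_symbols + 4.25) * t_sym
    ih = 0 if explicit_header else 1                 # 1 = implicit header
    de = 1 if low_dr_optimize else 0
    crc_bits = 16 if crc else 0
    payload_symbols = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + crc_bits - 20 * ih)
                  / (4 * (sf - 2 * de))) * (coding_rate + 4),
        0)
    return (t_preamble + payload_symbols * t_sym) * 1000

# 10 application bytes + ~13 bytes of LoRaWAN overhead = 23-byte PHY payload:
print(round(lora_airtime_ms(23, sf=7), 3))  # 61.696 ms
print(round(lora_airtime_ms(23, sf=8), 3))  # 113.152 ms
```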

Be cautious of “official” capacity figures. These are usually based on a theoretical gateway with 64 channels. As the gateway chipsets only handle a maximum of 8, you would need a gateway with 8 RF heads in it to achieve the claimed capacity. Given that all the affordable gateways only have a single RF module, you first have to divide the Semtech figure by 8.
With so many nodes, collisions are going to be your biggest issue. You will need to work out a strategy to minimise them. One suggestion is to divide up the time window available for a poll cycle and allocate devices to fixed time slots; you should also spread your devices out so they start transmitting on different channels. But this means you have to maintain tight control of the clock in your devices to keep them synced. Unless you have added RTCs to your nodes, you will have an issue keeping time. Which is why Class B would be more appropriate than Class A mode.
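Just to illustrate the time-slot idea (a toy sketch, not a recommendation; it assumes 1,000 nodes, 8 channels, a 5-minute poll cycle and perfectly synchronised clocks):

```python
# A toy illustration of the fixed-time-slot idea, with numbers from this thread:
# 1,000 nodes, 8 uplink channels, a 5-minute poll cycle and perfectly synchronised clocks.
NODES = 1000
CHANNELS = 8
CYCLE_S = 5 * 60

nodes_per_channel = NODES // CHANNELS   # 125
slot_s = CYCLE_S / nodes_per_channel    # 2.4 s per node on its channel

def schedule(node_id):
    """Hypothetical channel index and transmit offset (s) within each 5-minute cycle."""
    channel = node_id % CHANNELS
    offset = (node_id // CHANNELS) * slot_s
    return channel, offset

print(slot_s)                                  # 2.4, much more than the ~0.113 s airtime at SF8
print(schedule(0), schedule(1), schedule(42))  # (0, 0.0) (1, 0.0) (2, 12.0)
```

With 125 nodes per channel, each node gets a 2.4 s slot for ~0.113 s of air time at SF8, so the scheme only breaks down once the clocks drift by a large fraction of a slot.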

How did you come up with those 25 nodes per channel? Please correct me if I am wrong: a device can retransmit on the same channel (sub-band) only after 12 s, which means that 12 s / 114 ms = 105 nodes can transmit on the same channel before the first node can retransmit. So isn't it 105 nodes per channel?

Thanks for your patience !


A bit late (but maybe useful for future readers), your formula:

…is giving you the wrong idea due to the rounded numbers I used, and is not taking into account the (example) assumption that each node only sends once per 5 minutes:

  • When using the exact numbers, you’ll see that the airtime for 10 bytes at SF8 is 113.152 ms. Hence, for a 1% maximum duty cycle, the minimum time between subsequent transmission starts for the same node in the same sub-band is 11.3152 s.

    (So, after transmitting for 113.152 ms, a node needs to wait at least 11.3152 − 0.113152 = 11.202048 seconds to not exceed a maximum 1% duty cycle.)

  • So, my “you could send 10 bytes at SF8 every 12 seconds” should read “every 11.3152 seconds”. (With “every” referring to transmission starts, not to the time between the end of one transmission and the start of the next one.)

  • With those exact numbers, your 12 / 0.114 should read: 11.3152 / 0.113152 = 100.

    This is the number of nodes that would be supported per channel per gateway if all nodes have the same time on air, all (ab)use the maximum 1% duty cycle, and by sheer luck would never transmit at the same time (at that channel, and using the same SF).

    Of course, you won’t be that lucky. That’s why it’s important to interpret the duty cycle as a maximum: if during some short period of time a node needs to send rapidly after its last transmission, then it should not exceed that maximum. But it should not use the maximum all day long, unless its transmission power is low and many gateways have been deployed. And that’s why TTN also defines a 30-second maximum airtime for uplinks, per node, per day, for the public network.

  • So, you’re basically only proving that the math for the duty cycle conforms to the specification, like for EU868:

    The LoRaWAN enforces a per sub-band duty-cycle limitation. Each time a frame is transmitted in a given sub-band, the time of emission and the on-air duration of the frame are recorded for this sub-band. The same sub-band cannot be used again during the next Toff seconds where:

    Toff_subband = (TimeOnAir / DutyCycle_subband) − TimeOnAir

    …which is (0.113152 / 0.01 ) − 0.113152 = 11.202048 seconds

My (5 × 60) / 12 = 25 was based on: if a node’s maximum duty cycle allows for sending at most every 12 seconds, then when only sending once per 5 minutes (300 seconds) the node is only using 12 / 300 = 1/25th of its maximum duty cycle. (Good!) Hence, when also dimensioning the network to 1%, another 24 nodes could also send at a random time once every 5 minutes, and the number of collisions would then be acceptable. If, say, a 5% duty cycle for the network would be fine, that would allow for 5 times as many nodes.
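Or, putting the exact numbers from the bullets above in one place (same sketch as before, exact airtime instead of rounded):

```python
# Exact numbers for 10 application bytes (23-byte PHY payload) at SF8, 1% duty cycle per sub-band.
TOA_S = 0.113152            # exact time on air at SF8
DUTY_CYCLE = 0.01
SEND_INTERVAL_S = 5 * 60    # each node only transmits once per 5 minutes

t_off = TOA_S / DUTY_CYCLE - TOA_S         # 11.202048 s wait after each uplink
min_gap = TOA_S / DUTY_CYCLE               # 11.3152 s between transmission starts
per_channel_at_max_dc = min_gap / TOA_S    # 100 nodes per channel, if all (ab)use the 1% maximum
per_channel_at_1pct_network = SEND_INTERVAL_S / min_gap  # ~26.5 (the "25" above used the rounded 12 s)
print(t_off, min_gap, per_channel_at_max_dc, per_channel_at_1pct_network)
```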

Some more things to take into account:

  • All of the above numbers assume a single frequency and spreading factor. While (in EU868) for a node the duty cycle is indeed defined “per sub-band”, nodes should also adhere to frequency hopping. For EU868, for example, nodes should use 8 different frequencies within the same sub-band. So, if a network supports 8 frequencies and all nodes perform perfect frequency hopping, then, even if all nodes are using the same spreading factor, the network capacity might be 8 times as large.

  • In theory simultaneous transmissions using different spreading factors do not cause collisions, but apparently:

    Also, it’s hard to guess how many nodes will use which spreading factor.

  • Gateways might not be able to decode all they receive. Like for the Semtech SX1301:

    Several packet[s] using different data rates may be demodulated simultaneously even on the same channel.

    […]

    The SX1301 can detect simultaneously preambles corresponding to all data rates on all IF0 to IF7 channels. However it cannot demodulate more than 8 packets simultaneously. This is because the SX1301 architecture separates the preamble detection and acquisition task from the demodulation process. The number of simultaneous demodulation (in this case 8) is an arbitrary system parameter and may be set to any value for a customer specific circuit.

  • Uplinks and downlinks use “inverted IQ”, hence should not interfere. But as most (if not all) gateways today are half-duplex, and hence cannot listen for any incoming uplink while transmitting a downlink, downlinks decrease the capacity of a gateway. (But still, a downlink of one gateway should not interfere with uplinks received by other gateways.)

In short: it’s hard to give definitive numbers. (And I’m still not an expert!)


…to the OLD specification :slight_smile: The newer ones don't require such a straightforward approach to meet the DC limitations.