Gateway Capacity

Looking at the new SX1302 baseband receiver chip raised some questions about the operation and capacity of gateways using the existing SX1301 chip.

Gateways are typically constructed from modules consisting of an SX1301 baseband receiver and two SX1257 radio front-ends. This configuration provides up to 8 separate radio channels.

With this arrangement a gateway can receive from 8 nodes simultaneously. It's typically thought that this means the 8 nodes are on 8 different radio channels.

However, can someone confirm that this is not necessary? We could receive two nodes on the same radio channel provided their transmissions are orthogonal; in fact we could receive three nodes on the same channel provided all three transmissions are orthogonal to each other. The 8-node limit comes from the number of decoders in the SX1301 baseband receiver.

By orthogonal I mean a different chirp rate, which is the rate of change of frequency with time. For example SF7 BW=125kHz is orthogonal to SF9 BW=125kHz, whereas SF7 BW=125kHz is not orthogonal to SF9 BW=250kHz.
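A quick way to check this rule numerically: with symbol time T_sym = 2^SF/BW, a chirp sweeps BW Hz in T_sym seconds, so its slope is BW²/2^SF. A minimal sketch, assuming (as the post above does) that two signals are orthogonal if and only if their chirp slopes differ:

```python
# Sketch: compare chirp slopes for SF/BW combinations.
# Assumption (per the post above): two LoRa signals are orthogonal
# iff their chirp slopes differ. Slope = BW / T_sym = BW^2 / 2^SF.

def chirp_slope(sf: int, bw_hz: float) -> float:
    """Rate of change of frequency with time, in Hz per second."""
    return bw_hz ** 2 / 2 ** sf

def orthogonal(sf1, bw1, sf2, bw2) -> bool:
    return chirp_slope(sf1, bw1) != chirp_slope(sf2, bw2)

print(orthogonal(7, 125e3, 9, 125e3))  # True:  same BW, different SF
print(orthogonal(7, 125e3, 9, 250e3))  # False: identical slopes, so they collide
```

Adding 2 to the SF while doubling the bandwidth leaves the slope unchanged, which is exactly the SF7/BW125 vs SF9/BW250 pairing in the example.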

The number of nodes that can be received simultaneously depends on the number of decoders in the SX1301/SX1308, not the number of frequencies.
You are right, multiple nodes transmitting on the same frequency with different spreading factors can be received simultaneously.

(For the LoRaWAN frequency plans supported by TTN, currently only 125kHz and 500kHz bandwidths are used.)


Thanks @kersing,
The reason for checking: the new SX1302 has 16 decoders, which combined with 8 radio channels gives it the capacity to decode up to 16 nodes at the same time.

While the SX1302 is promoted as reducing power during standby and receive, its biggest benefit is doubling the capacity of a gateway with the same number of radio channels.

While this is theoretical capacity, looking at the mix of SFs being received on my gateway (a range of SFs at BW=125kHz or 500kHz), I think the actual capacity increase will be close to the theoretical one.


To be precise, the SX1302 actually has 8+8+1 “decoders”:

  • 8 for SF5 to SF12 at a bandwidth of 125kHz, on any of the 8 RF channels
  • 8 for SF5 to SF10, also at a bandwidth of 125kHz, on any of the 8 RF channels
  • The extra one, referred to as the service modem, can listen at bandwidths of 125, 250 and 500kHz (maybe also lower bandwidths, but I’m not sure), but only on one radio channel and one SF.

And about non-orthogonal combinations of SF/bandwidth: SF7/BW125 is orthogonal to SF9/BW250, but not to SF9/BW500 (if you add 1 to the SF you need to double the bandwidth to find at least one half of the chirp matching). The larger-bandwidth signal would not be affected that much, since only half (or in your example a quarter) of the symbol would be scrambled. The lower-bandwidth signal would have more of an issue, since one symbol in every 2 (or 4 in your SF7/SF9 example) would be completely killed, but even then it would depend on the relative power.


We are building a gateway with an SX1301 and two SX1257s (RPi v3.0), and each data packet is a 19-byte string, not counting the payload. My question is how many messages the gateway is able to handle per second (max data rate) with different numbers of end nodes and different channels…

Maybe you are asking the wrong question, because you are not just using your gateway but also a shared medium, the radio frequencies. If your goal is to send the maximum number of data packets possible, you are effectively using the frequencies to the max (and possibly exceeding legally allowed limits) and being the worst possible neighbor for everyone within 15 to 50km of your deployment.

Sorry, maybe my explanation wasn’t good; this is my first time using LoRa and I am a software engineer, not an electronics guy, so I don’t know the parameters in the Semtech datasheet perfectly.
We are using 32 end nodes (SX1278, 433MHz, on those 8 channels) in the field, each node has 16 sensors and actuators connected, and the project needs the fastest possible field monitoring. So I want to know the shortest period for sending sensor data without any conflict or loss at the gateway.
(And what about a scenario with 64 similar end nodes?)
We want to be sure this gateway (RPI -V3.0) is suitable for our project.

Then that’s not LoRaWAN and certainly not TTN (which has a Fair Use Policy), it might be LoRa but it’s hard to tell with such a small amount of actual detail - “fastest possible” should actually be a number - like updates every 10 seconds. For measuring tectonic plate shift, “fastest possible” is once a year except during an earthquake.

Given a perfect sphere, the electronics will operate at or close to the speed of light, so this has not much to do with electronics and quite a lot to do with comms, which is protocol, which is software.

It is unlikely you will need a data rate close to that order of magnitude.

So if you had a 9,600 baud serial link, would that be fast enough to move the bits you need?

But we don’t know how many bits you need …

So we won’t be doing engineering, but advanced guessing.

LoRaWAN is not a suitable use case for command & control if there are any time critical elements - the device needs to work autonomously and be sent settings that it can use to act on its environment that do not necessarily have to arrive in the next 12 hours or so.

That will most likely be determined by local radio regulations vs LoRa/GW capacity, and as Jac states it is perhaps the wrong question - just 'cause regulations allow you to do that does not mean you should! RF spectrum is a limited shared resource, and having everyone shout as fast and as often as they can vs should will quickly render the spectrum in the local area unusable for all… and with the range achieved by LoRa, ‘local’ in this context can be a very long way indeed. Start with the question “what is the minimum rate at which I can get away with sending messages for my application”, possibly adding some redundancy and mitigation - it’s RF so not every message will succeed! - and build up from there! :wink:

If everyone tried to drive all their cars (remember many have more than one!) at 70mph (or whatever the local limit is), traffic (flow) would quickly break down…

thanks for your comments @descartes @Jeff-UK
Well, maybe we need to run our own private web app and data brokers (we don’t have any software problems); we will consider all local regulations, and the project is in the desert.

I just want information about this specific gateway capacity.
How many 20-byte messages can be handled per unit of time by this hardware with the SX1301 chip and 32 or 64 end nodes?
Is there any way to calculate this max rate? (Considering the chip datasheet parameters: different SFs, time on air, the 49 emulated LoRa demodulators and 1 (G)FSK demodulator, DDR, dual digital TX & RX radio front-end interfaces, …)

Semtech had/has an emulator that looks at such theoretical possibilities (LoRa node/GW capacity), but you need to consider real-world deployment impact vs theoretical datasheet limits. The issue is orders of magnitude more complex than any of the questions that get asked on the forum with ‘simple’ use cases and scenarios (I have x nodes, I send y bytes of payload, or whatever…). For max capacity in a network, or even in a single GW, there are many, many academic evaluations of LoRa and LoRaWAN network capacity - GIYF! Key is how the nodes are distributed - both spatially and operationally - and frankly even those of us who have been around for many years, some even from before LoRaWAN was introduced and some from before LoRa as a technology was publicly announced to the world, will struggle to give you a definitive answer… :frowning:

The silicon has its own limits, which then add to the real-world scenario/deployment limits. E.g. IIRC the SX1301 digital bus interface limits out at around 2Mb/s throughput. Yes, there are potentially 48 demodulation options (8 channels / 6 SFs) for a standard original LoRa/LoRaWAN deployment (ignoring FSK TX/RX and ignoring the additional LoRa element for TX/LBT), but that doesn’t mean the device can handle 48 messages concurrently. Practically, think 8 concurrently, and even then it is best (to allow preamble detection and initiating/scheduling internal buses and demodulation resources) if there is a slight time offset between messages vs a precise overlap of all 8.

Then you get issues around the classic near/far problem of detecting RF. Yes, LoRa can detect below the noise floor (how well depending on the SF used) - where say FSK classically needs 6-8dB clearance from direct co-channel interferers - and the various SFs are (pseudo-)orthogonal so can avoid mutual interference, but it is imperfect, and some minimum headroom is still needed, with a maximum dB separation between the near and far signals to allow error-free detection and demodulation. Even at the same SF it is possible to detect and discriminate, but again imperfectly, and whilst near/far issues and even foreign-interferer issues are far better with LoRa modulation than with classical legacy modulations (FSK, GFSK, OOK, etc.), these factors will impact GW performance - as a result of maths, physics and real-world deployment behaviour more than any theoretical analysis of paper performance for a given GW build.

Best throughput comes where there is an ideal mix of SFs, channels and channel separations in use at any given point in space and time. In turn, the ‘ideal mix’ of SFs to get best capacity means using higher SFs, which then means lower battery life - how often do you want to trek out into your desert to change batteries?!
Local environmental issues will also come into play - not only the local RF noise floor and potential interferers, but also local reflections and absorptions, even reflections of ‘friendly’ signals - your own node or other nodes in your own fleet… Like I say, complex, and you are asking the wrong question… sorry… Also, for a given chipset combination the ‘theoretical’ capacity will vary by geography and regulatory regime… in the EU we can use SF11 & SF12, but in say the US, for practical payload purposes, you are limited to no more than SF10 - due to dwell-time regulations etc… (GIYF again)
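To put numbers on the airtime side of these capacity estimates, the LoRa time-on-air formula from the Semtech SX127x datasheets can be sketched as below. This is a rough sketch, assuming an 8-symbol preamble, explicit header, CRC enabled and coding rate 4/5:

```python
import math

def lora_airtime_ms(payload_len, sf, bw_hz=125e3, cr=1, preamble=8):
    """Time on air (ms) per the Semtech SX127x datasheet formula.
    Assumes explicit header and CRC enabled; cr=1 means coding rate 4/5."""
    t_sym = 2 ** sf / bw_hz * 1000                    # symbol time, ms
    de = 1 if (sf >= 11 and bw_hz == 125e3) else 0    # low-data-rate optimisation
    num = 8 * payload_len - 4 * sf + 28 + 16          # +16 for the payload CRC
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

for sf in range(7, 13):                               # 20-byte payload
    print(f"SF{sf}: {lora_airtime_ms(20, sf):7.1f} ms")
```

At roughly 56.6 ms per 20-byte packet at SF7/BW125, even eight fully loaded demodulation paths top out at around 140 packets per second in theory, long before the collision, near/far and regulatory effects described above are taken into account.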

FWIW, for a mixed node deployment (various tx periods/duty cycles, various payload sizes, SFs etc.) I typically plan for around 5k nodes per GW as a starting assumption - sometimes less than that may struggle, sometimes 2-3x that is practical… it all depends. I’ve seen >50k/GW work, and for some metering applications - small payload, 1 message per day type applications - even >100k nodes/GW is possible… depending on the resilience of the application (not the GW capacity) - if node ref abcd doesn’t get its value through today but does so tomorrow, will that really break something in the context of a 3-month metered billing cycle? etc. I watch for lost packets and dropped messages for more sensitive applications/deployments, and if my 5k/GW model starts to struggle, simples… densify and add another GW in the area (then all nodes in the area benefit and adjust/re-balance over time - if using e.g. ADR or an equivalent).

…what I don’t worry about at all today is the specific GW chipset used, or the ‘theoretical’ capacity calculated from some simple model of payload size, number of nodes and GW silicon specification…


There is no way to calculate this; there are too many variables depending on where you are going to deploy, what RF noise is in that environment, etc.

And please keep in mind that if you are using LoRa modulation, your deployment’s ‘local’ is at least 10km. That is not local for most people!
For your requirements WiFi, Bluetooth or something like them would be a better match. LoRaWAN is the technology for a few bytes an hour, not continuous transmissions.

BTW, you mention 434 but the repository you link to has 868 and 915 designs, not 434.

Thanks for the helpful advice @Jeff-UK @kersing

“Consider” isn’t considered a defence in the courts - if you flood the local airwaves you will find other users will ask for an intervention from the authorities.

This is irrelevant, totally irrelevant. What if you block local essential water monitoring services?

Maybe you didn’t see my comments above, but if you search the forum you’ll see the details on why continuous data, even within the typical 1% duty cycle (that’s 1 second of tx every 100 seconds), and the typical 10% packet loss we work with, make what you have told us thus far unworkable. Which is a shame, because if you told us what you were trying to achieve, we could be more definitive.

Can you give us a hint as to how much data you want each node to send, and how often?

You say ‘fastest possible’ but you must have some idea as to what the minimum requirement is, so can you tell the forum?

As an addendum - they all use chips from Semtech, so there is a limited amount of variation in “specific gateway capacity” - if the host MCU is slow it will make a difference, if the backhaul is slow it will make a difference, but overall a Pi3 with a 10Mbps connection will handle far more than you can legally transmit.

Dear @descartes
We are just asking about LoRa and its modules’ capacity to get information (nothing is implemented yet).
Because of moral and legal obligations we would never implement a system that interferes with other systems, and we always get advice from experts like you (and of course we check the rules).
Probably LoRa isn’t a suitable solution for us, since we need to seek a solution with a high data rate.

The main problem with ‘high’ data rates is nothing to do with LoRa but mostly because legally allowable duty cycles in a lot of places are at best 10% and often 1%. You will find it difficult to achieve a ‘high’ data rate when sending only 1% of the time.
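To make the duty-cycle arithmetic concrete: with a per-packet airtime t and a duty-cycle limit d, a node may occupy the channel for at most a fraction d of the time, capping its rate at d/t packets per second. A sketch; the 56.6 ms figure is an assumption for illustration (roughly the airtime of a 20-byte payload at SF7/BW125, the fastest common EU setting):

```python
def max_msgs_per_hour(airtime_s: float, duty_cycle: float) -> float:
    """Upper bound on transmissions per hour under a duty-cycle limit:
    each packet 'costs' airtime_s / duty_cycle seconds of wall-clock time."""
    return 3600 * duty_cycle / airtime_s

# 20-byte payload at SF7/BW125 is ~56.6 ms on air
print(round(max_msgs_per_hour(0.0566, 0.01)))  # ≈ 636 msgs/hour at a 1% duty cycle
```

That works out at roughly one message every 5.7 seconds per node, which is why "2 messages per second" style requirements do not survive a 1% duty cycle.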

As suggested, try the 2.4GHz WiFi band; mostly no duty cycle limits there.

It would have been an idea to include in your first post what sort of data rate you were actually after.

@salimi many also assume ‘fast’ means a fast data rate, but I wonder if you are perhaps looking at fast as in response time, as in limiting latency or delay for a message (up or down). As noted, the requirements are not clear, so it is hard to advise…

(Referring to your earlier comment that “the project needs the fastest possible field monitoring”…!)


Dear @kersing @Jeff-UK

We have done some searching for a solution but are still confused.
BLE is a good option for data rate, and BLE long range is available, but there are some issues like max end-device limits, real tested range and…
So I will try to explain the project; maybe you can guide me.
Our project is to make cultivated land smart. This farm is about 40 hectares (first phase) and has been divided into small research sections (about 100 sections), and each section has a different irrigation and fertilization program, so they have their own sensors and actuators.

Now our requirements:
Wireless communication (for fields and greenhouse) with 1000 meters range.
At least 32 end nodes.
No power limitations.
Each message is 20 bytes.
Each node sends 2 messages per second (so the gateway handles at least about 80 messages/s).
It would be great if the modules were popular ones like the ESP32, which are supported by a good community and examples.
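For what it’s worth, these requirements can be sanity-checked against LoRa airtime with some back-of-envelope arithmetic. A sketch only, assuming a 20-byte payload at SF7/BW125 (~56.6 ms airtime per the SX127x time-on-air formula), 8 channels, and uncoordinated (pure-ALOHA) channel access; all numbers are illustrative:

```python
# Back-of-envelope feasibility check of the stated requirements.
AIRTIME_S = 0.0566            # ~20-byte payload at SF7/BW125 (assumed)
NODES = 32
MSGS_PER_NODE_PER_S = 2
CHANNELS = 8

# Offered load per channel, in Erlangs (fraction of channel time requested).
offered_load = NODES * MSGS_PER_NODE_PER_S * AIRTIME_S / CHANNELS

# Fraction of time each individual node spends transmitting.
per_node_duty = MSGS_PER_NODE_PER_S * AIRTIME_S

print(f"offered load per channel: {offered_load:.2f} Erlang")
print(f"per-node duty cycle: {per_node_duty:.1%}")
# Pure ALOHA throughput peaks at ~18% around 0.5 Erlang of offered load,
# so ~0.45 Erlang per channel means heavy collision loss, and an ~11%
# per-node duty cycle exceeds a typical 1% regulatory limit tenfold.
```

Which is the quantitative version of the advice above: at 2 messages per second per node, the limiting factor is the shared spectrum and duty-cycle rules, not the gateway chipset.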