Increasing the number of gateways

I want to establish a highly dense network of about 1000 nodes within a 100~200 m radius. From what I’ve read, if a gateway receives two packets with the same reception power it will drop both. If that is the case, then when two nodes in the same place transmit a packet at the same time, none of the gateways will be able to receive it, no matter how many gateways there are.

I want to know if I can increase the efficiency of this network by just increasing the number of gateways.

There is a high probability that two signals using a legacy RF modulation, e.g. FSK, will be mutually destructive when at similar levels, and a weak one will almost certainly be masked/swamped out… LoRa’s ‘secret weapon’ is its Chirp Spread Spectrum implementation, which means two signals on the same freq/channel but using different spreading factors (SFs) can co-exist, even if one is many dB ‘louder’ than the other and would normally swamp the weaker one. Depending on the SF, the loudest can be >20 dB stronger and the weak SF will still get through, even where it is below the noise floor. As a digital modulation scheme the signals are said to be orthogonal to each other and non-interfering (not strictly true, but close enough :slight_smile: ).

Recommend doing a bit of Googling (or Binging… or…) on the specifics and properties of LoRa and its use in LoRaWAN before dismissing this great tech. Current implementations use 6 SFs (the latest chips can support one or two more for ‘denser’ LoRa networks, but these are not yet applied to LoRaWAN builds), which allows for much greater density compared with legacy and narrow-band radios and helps solve, or at least mitigate, the ‘near-far’ masking problem. The chirp trick also helps mitigate other RF interferers, with performance determined by the type of alien modulation (FSK/OOK/FM/etc.)
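To make the SF/airtime trade-off concrete, here is a sketch using the standard Semtech SX127x time-on-air formula. The figures are assumptions for illustration: 125 kHz bandwidth, coding rate 4/5, explicit header, CRC on, 8-symbol preamble, and a 10-byte application payload plus ~13 bytes of LoRaWAN framing (23 bytes on air):

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                    preamble_syms=8, explicit_header=True, crc=True):
    """Time-on-air of one LoRa packet (Semtech SX127x datasheet formula)."""
    t_sym = (2 ** sf) / bw_hz * 1000.0                 # symbol duration, ms
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0   # low-data-rate optimisation
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    payload_syms = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_syms + 4.25) * t_sym
    return t_preamble + payload_syms * t_sym

# 23 bytes on air: ~61.7 ms at SF7 up to ~1482.8 ms at SF12
for sf in range(7, 13):
    print(f"SF{sf}: {lora_airtime_ms(23, sf):.1f} ms")
```

Each SF step roughly doubles the time-on-air, which is why a handful of orthogonal SFs buys so much extra capacity only when the network can keep most nodes on the fast ones.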

While true: what are the chances that nodes in a small, dense network will not all use SF7? Would ADR even move some nodes to a worse SF just to spread the load, when that would also increase their time-on-air and hence their usage against the fair access policy?

I’d assume that all would be using SF7, so the only variation is the channel hopping?

A nice video:

People who know more about this than me may well chip in here, but it’s my understanding that ADR is a ‘dynamic’ scheme where the network and nodes adjust over time to improve overall reception. In the case of TTN I believe that around 20 messages are monitored to establish an appropriate TX power and SF, plus a ‘safety’ margin to improve the probability of the signal getting through. If a node is seen to miss signals, it is assumed to be in a weaker/more interfered area and its SF will be backed off (increased) over time along with its TX power, with nodes ‘shuffling around’ to help balance and ‘densify’ (*F. Hede et al) the network. Yes, increased SF or TX power increases power consumption, but at least the signal should get through… it is an inherent trade-off in any RF network, not just a LoRa phenomenon, and users have to decide what is more important: getting signals through, conserving power, or finding a balance.
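For illustration, a hedged sketch of an ADR-like decision, not TTN’s exact algorithm: the 10 dB installation margin, the 3 dB step size, and the function name are assumptions, though the demodulation floors match the commonly quoted SX127x figures. From the best SNR seen in recent uplinks, subtract the floor for the current SF and a safety margin, then spend the remaining headroom on a lower SF first and lower TX power second:

```python
# Approximate SNR demodulation floor per SF at 125 kHz (SX127x figures)
SNR_FLOOR_DB = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def adr_suggest(best_snr_db, sf, tx_dbm, margin_db=10.0,
                min_sf=7, min_tx_dbm=2, step_db=3):
    """Sketch of an ADR-style adjustment; margin and step size are assumptions."""
    headroom = best_snr_db - SNR_FLOOR_DB[sf] - margin_db
    steps = int(headroom // step_db)
    while steps > 0 and sf > min_sf:                # spend headroom on lower SF first
        sf -= 1
        steps -= 1
    while steps > 0 and tx_dbm - step_db >= min_tx_dbm:  # then on lower TX power
        tx_dbm -= step_db
        steps -= 1
    return sf, tx_dbm

print(adr_suggest(best_snr_db=5.0, sf=12, tx_dbm=14))   # strong signal: drop to SF7
print(adr_suggest(best_snr_db=10.0, sf=7, tx_dbm=14))   # already SF7: reduce power
```

The point is only the shape of the loop: headroom gets converted into faster SFs and lower power, and a node that starts missing downlinks walks back up the same ladder.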

This mechanism is one of the advantages of LoRa-based deployments: you can start off with a given number and array of GWs and, as you see the network starting to get congested, you can then drop in additional intermediate GWs to help further ‘densify’ it, with the NS recognising the availability of this additional resource and starting to re-balance node behaviour under ADR. That is in part why it is recommended/good practice to enable ADR when deploying a static node, even if it is not expected to be used when first deployed, as you never know how the local RF environment will evolve in the future or what new GWs may become available (or indeed may go offline/get decommissioned… a big risk with a hobbyist/hacker-led network like TTN!)

If your main concern is that every packet MUST get through, then don’t use any RF solution… use a wire! :wink: :slight_smile:

@arjanvanb as you point out, there are ‘rules’ which guide behaviour and performance, but note I said ‘can’ co-exist… not ‘will’ ;-)… there are limits, with foreign/alien interferers also having different impacts.

1000 nodes under 100 m, a node every 10 cm?

What is the use case, if I may ask? Because it’s very unusual… or is it, as I think, just a ‘virtual case’/idea :wink:

…consider also 3 dimensions! Note tags on assets/goods in transit could easily see this level of density, especially when palletised or in container/lorry/train loads etc.

These are dense indoor nodes in close proximity, so even if ADR is enabled they will get the same SF and DR. There won’t be a significant difference in the SNR and received power, and even if there were, I plan on using these nodes only at SF7–SF9. That leaves me with channel hopping, but in India there are only 3 channels that I can hop between.
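On the original question of whether more gateways help: collisions in this scenario happen on air (same SF, same few channels), so extra gateways cannot prevent them. What they do add is spatial diversity: each gateway sees a slightly different RF picture (capture effect, fading), so losses are not fully correlated across gateways. A rough, optimistic sketch that treats losses at each gateway as independent, a strong assumption, with the 10% single-gateway loss rate also assumed:

```python
# Assumed loss rate for one packet at a single gateway (illustrative only)
p_lose_one_gw = 0.10

# If losses at each gateway were independent, a packet is lost overall
# only when every gateway misses it: p ** n_gateways
for gateways in (1, 2, 4):
    p_all_lose = p_lose_one_gw ** gateways
    print(f"{gateways} gateway(s): packet lost with p ~ {p_all_lose:.4f}")
```

Real losses are partially correlated (a true on-air collision at equal power can blind every nearby gateway at once, as the original post notes), so this is an upper bound on the benefit, but it shows why densifying with gateways still pays off.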

The killer/decision maker for @ThejAravindan will likely be update rates vs physical density and observance of the FUP (note TTN <> LoRaWAN!) and duty-cycle regs… if there is a significant amount of Join Req/Accept traffic, or a significant number of confirmed payloads, then that may place a limit on the GWs vs the number of nodes, or risk node-to-node interference. If there are updates, say, once per day, no problem… once per minute and ooops! :wink:
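As a back-of-envelope check on update rates vs the FUP, with all figures assumed: ~62 ms on air for a short SF7 uplink, and TTN’s Fair Use Policy of roughly 30 s uplink airtime per node per day (a TTN policy, not a LoRaWAN or regulatory limit; local duty-cycle regulations apply separately and differ by region):

```python
# Assumed figures: ~62 ms per SF7 uplink, ~30 s/day FUP airtime budget
airtime_s = 0.062
fup_budget_s_per_day = 30.0

# Ceiling on uplinks per node per day under the FUP budget
max_uplinks_per_day = int(fup_budget_s_per_day / airtime_s)
print(max_uplinks_per_day)    # -> 483, i.e. roughly one uplink every 3 minutes
```

At SF9 the airtime roughly triples, so the same budget drops to a third of that, which is why update rate, not node count, is usually the first wall you hit.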

Interestingly, what I have observed in the past is that if you have a dense pallet of devices, the GW will see a weaker signal from the centre of the pallet, or from the far side vs the side facing the GW, so even in that situation you will see the network adjusting under ADR to accommodate :wink:

Also, part of Semtech’s decision to introduce support for lower SFs in the latest generation of chips is the fact that they will help in dense networks… The LoRa modulation scheme adds a silicon/test/yield overhead to a conventional radio, which meant that the first generation wasn’t viable for lower SFs due to the economic impact (higher SFs, 13/14/15…, would also add significant memory, driving up cost for limited return, so a ‘Goldilocks’ range of SF7–12 was implemented). With newer silicon implementations the lower SFs become viable again :wink:

Ah, then you’re looking to constrain the LoRaWAN implementation? Non-compliant? How will you stop nodes moving to higher SFs?

Without more knowledge of the physical deployment I can’t comment further on the absorption/reflection effects impacting received signal levels, but as noted above I have seen SF/power adjusting under ADR even in small spaces/highly packed deployments (packaging of goods/box contents etc. acting as signal absorbers; antennas in close proximity can also act as additional soaks for signal…)

I am surrounded by a number of nodes here in the office/lab area, and even with many GWs in close proximity under soak test and evaluation at 2–100 m range, I’m seeing ADR-driven SFs across a number of values…

I’m tracking products in an industrial warehouse

I’d guess the key is: how many transmissions do you expect? And can your application work with packet loss? (With LoRaWAN you should always expect some packet loss, even when no collisions occur at all, if only because the current gateways are half-duplex and so cannot listen while transmitting a downlink.)

See How many gateways are needed to support 1,000 nodes sending 10 bytes per 5 minutes?
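The gist of that estimate can be sketched as follows. The figures are assumptions: all nodes on SF7 (~61.7 ms on air for 10 bytes plus ~13 bytes of LoRaWAN framing), an 8-channel plan, and a pure-ALOHA collision model. A multi-channel gateway demodulates all its channels in parallel, so the bottleneck is on-air collisions rather than the gateway itself:

```python
import math

# Assumed scenario: 1000 nodes, one uplink per 5 minutes, SF7, 8 channels
nodes, interval_s, airtime_s, channels = 1000, 300, 0.0617, 8

# Per-channel offered load (Erlangs), assuming traffic spreads evenly
load = nodes / interval_s * airtime_s / channels

# Pure ALOHA: a packet survives only if no other starts within +/- one airtime
p_lost = 1 - math.exp(-2 * load)
print(f"per-channel load {load:.3f}, expected loss {p_lost:.1%}")
```

With only 3 usable channels, as in the IN865 case above, the per-channel load nearly triples and the expected loss rises accordingly, which is the quantitative version of the update-rate warning earlier in the thread.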