Nice ‘theoretical’ back-of-envelope paper analysis, but sadly nothing like what happens in a real-world deployment!
A few spanners in the works, off the top of my head:
1) In a Class A node deployment it's never going to happen - clock drift in the nodes (assuming ABP to limit the loss of GW capacity to the join process, MAC commands etc.) means they will gradually drift relative to each other over time in the field, slot discipline will be lost and conflicts will start to happen - see the rough drift sketch below.
A Class B deployment (using GW beacons to correct/reset timing) might help discipline, but that's an improbable deployment - few TTN-deployed GWs can support Class B, though some support Class C. As the nodes appear to be mains powered, that might be an option?
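To put a rough number on the drift point - this is only a back-of-envelope sketch, and the ±20 ppm crystal tolerance and 100 ms slot guard time are my own illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope: how long before free-running Class A/ABP nodes drift out
# of their assigned slots. 20 ppm and 100 ms are illustrative assumptions.
XTAL_PPM = 20        # typical crystal tolerance, parts per million
GUARD_S = 0.100      # guard interval either side of a scheduled slot, seconds

# Worst case, two neighbours drift in opposite directions: 2 * 20 ppm = 40 ppm
relative_drift_per_s = 2 * XTAL_PPM * 1e-6

seconds_to_conflict = GUARD_S / relative_drift_per_s
print(f"slots start to overlap after ~{seconds_to_conflict / 3600:.1f} hours")
# ~0.7 hours with these numbers; temperature swings in the field make it worse.
```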
2) Most probably the nodes will be distributed at varying distances from the GW, so one fixed DR isn't realistic - and even if they are theoretically at suitable distances, real-world issues (reflections, absorbers, building clutter, walking sacks of salt water, moving vehicles, topology, etc.) will perturb RF propagation and the actually usable reception at both ends of the link. The time-on-air sketch below shows how much a forced DR change costs.
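To show what being pushed up the DRs costs, here's a minimal time-on-air sketch using the standard Semtech airtime formula (EU868, 125 kHz, CR4/5, explicit header; the 33-byte packet - 20 bytes of application payload plus 13 bytes of LoRaWAN overhead - is my assumption):

```python
import math

def lora_airtime(payload_bytes, sf, bw_hz=125_000, cr=1, preamble=8,
                 explicit_header=True, crc=True):
    """Semtech LoRa time-on-air formula, result in seconds."""
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0   # low-data-rate optimisation
    ih = 0 if explicit_header else 1
    t_sym = (2 ** sf) / bw_hz
    payload_symbols = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + payload_symbols) * t_sym

for sf in (7, 9, 12):
    print(f"SF{sf:2d}: {lora_airtime(33, sf) * 1000:5.0f} ms on air")
# SF7 ~72 ms vs SF12 ~1810 ms: a node pushed out to SF12 by distance or clutter
# occupies roughly 25x the airtime of an SF7 node, so one fixed DR plan breaks.
```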
3) One of the characteristics of LoRa (and LoRaWAN) is that overall spectrum and GW capacity is best utilised with an idealised distribution of channels and, particularly, DRs around a given GW (think of the area around a GW as an archery target or dart board with the GW at the bullseye), and that idealised model will vary with node configuration and external factors outside your control. GWs never run at idealised capacity - the capacity sketch below gives a feel for how strongly the DR mix drives the ceiling.
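As a crude illustration of how strongly the DR mix drives the ceiling - the mix fractions are entirely made up, and this is a raw back-to-back packing limit, not achievable throughput:

```python
# Raw per-hour packing ceiling for an 8-channel GW under two DR mixes.
# Airtimes are the 33-byte figures from the time-on-air sketch above.
airtime_s = {"SF7": 0.072, "SF9": 0.247, "SF12": 1.810}
channels, window_s = 8, 3600

def ceiling(mix):
    avg = sum(frac * airtime_s[sf] for sf, frac in mix.items())
    return channels * window_s / avg

ideal = {"SF7": 0.6, "SF9": 0.3, "SF12": 0.1}   # bullseye-heavy, illustrative
edge  = {"SF7": 0.0, "SF9": 0.0, "SF12": 1.0}   # everyone stuck at SF12

print(f"idealised mix: {ceiling(ideal):8.0f} msgs/hour ceiling")
print(f"all-SF12 cell: {ceiling(edge):8.0f} msgs/hour ceiling")
# Real cells sit nowhere near either number once collisions, downlinks and
# duty cycle bite - the point is simply how much the mix moves the ceiling.
```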
4) Other nodes in range of your GW will also be transmitting, potentially causing uncontrolled conflicts/collisions even with the resilience of the LoRa modulation - a first-order collision estimate is sketched below.
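A first-order way to see the effect is pure-ALOHA collision maths on one shared channel - it ignores the capture effect, so it's a bit pessimistic, and the node count and message rate are illustrative only:

```python
import math

# Pure-ALOHA view of uncoordinated uplinks sharing one channel.
airtime_s = 0.072      # 33-byte packet at SF7, from the sketch above
other_nodes = 500      # foreign nodes also in range of the GW (assumed)
msgs_per_h = 4         # uplinks/hour each on this channel (assumed)

rate = other_nodes * msgs_per_h / 3600.0        # aggregate packets per second
# A packet survives only if no other packet starts within one airtime of it,
# either side, so the vulnerable window is 2 * airtime:
p_collision = 1 - math.exp(-2 * rate * airtime_s)
print(f"per-packet collision probability: {p_collision:.1%}")   # ~8% here
```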
5) Other nodes in the area may be initiating, or be the recipients of, GW downlinks - join processes, ADR updates, MAC commands (especially for misbehaving nodes outside your control), commanded downlinks, etc. - all of which turn the GW deaf to your nicely scheduled TXs and wipe out timeslots.
6) Given enough other nodes in the area, and even allowing for you limiting the need for DLs for your own use, there is a severe risk that in higher-traffic areas the GW may max out its duty cycle - whether for RX1 responses on the same channel and DR as the transmitting nodes (generally a 1% limit where applicable) or even the higher-duty-cycle RX2 window (10% on TTN where applicable). The downlink budget sketch below puts rough numbers on it.
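Rough numbers for that budget, reusing the uplink airtimes from the sketch above as an approximation (EU868 assumed: ~1% duty cycle on the RX1 sub-bands, 10% on the 869.4-869.65 MHz sub-band that carries RX2, which TTN runs at SF9):

```python
# Illustrative downlink budget per hour for a single EU868 gateway.
rx1_budget_s = 0.01 * 3600    # ~36 s of TX time/hour on a 1% sub-band
rx2_budget_s = 0.10 * 3600    # ~360 s of TX time/hour on the 10% sub-band

# (duty-cycle budget, approx airtime of one downlink in seconds)
budgets = {
    "RX1 to SF12 uplinkers": (rx1_budget_s, 1.810),
    "RX1 to SF7 uplinkers":  (rx1_budget_s, 0.072),
    "RX2 (SF9 on TTN)":      (rx2_budget_s, 0.247),
}
for name, (budget, airtime) in budgets.items():
    print(f"{name:22s}: {budget / airtime:5.0f} downlinks/hour max")
# Joins, ADR updates and confirmed uplinks from *other* people's nodes all eat
# into this budget - and the GW hears nothing at all while it transmits.
```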
7) Rogue/badly behaved nodes can really bugger things up, in line with the above!
8) Even if the network appears to settle into a reasonable near-steady state (though less performant than your ideal, by definition), it can still be perturbed and upset by transitory nodes passing through your area for a period, from which the system then has to recover back to its prior stable state - think cold-chain/logistics/asset-tracking nodes passing by your GW - especially if they have to rejoin after a link loss and all rejoin close to your GW, forcing it to send downlinks. I see that regularly with trains/rolling stock and cargo coming through some areas; it's also a common sight with road-freight truck rolls.
9) How resilient is the system/application to packet loss? Across my GW and node estate - at least what I have bothered to monitor at various times - I typically see 0.1-1.5% loss, though some deployments can peak at 2-3%, and TTI suggest you should actually model for up to 10% packet loss! (Due to some of the issues flagged above, but also external RF phenomena and interferers - plus what about packets received OK by the GW but then lost over the backhaul/internet en route to the LNS back end?) How do the numbers look if you start to model for such losses? The quick sketch below runs those figures.
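To answer my own question with a toy model - independent per-packet loss (optimistic, since real loss is bursty), and the six uplinks per node over the window is just an assumed figure:

```python
# Toy model: 1000 nodes, each sending a few uplinks over the operating window,
# with an independent per-packet loss probability. All figures are assumed.
nodes, msgs = 1000, 6

for p_loss in (0.005, 0.03, 0.10):
    p_all_ok = (1 - p_loss) ** msgs      # node gets every uplink through
    p_all_lost = p_loss ** msgs          # node contributes nothing at all
    print(f"loss {p_loss:5.1%}: ~{nodes * p_all_ok:4.0f}/{nodes} nodes deliver "
          f"every message; P(node delivers nothing) = {p_all_lost:.1e}")
# At 3% loss roughly 1 node in 6 drops at least one reading; at the 10% TTI
# planning figure it's nearly every other node - before the effects above.
```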
Of course, none of this reflects a mode of operation where 1,000 devices come up together and then run for just 1-2 hrs… Per my original post, I know from experience that some nodes (possibly many!) will never get a useful payload received during the operating window, even if they manage to join (they may send, but the GW, and hence the back end, may never see it…). Densifying the network with many more GWs is the only hope for this…
There are some legitimate use cases that might fall in scope, but they're unlikely to use this model - more likely they will stay live for much longer, retain state and not have to rejoin once they have joined. I'm thinking e.g. wide-area street-lighting deployments after first switch-on, etc. - but those use lots of GWs and 'bring up' by the dozen, or a couple of hundred at most, if they want a quick, reliable first start-up; they have learnt the hard way not to do it 1,000 at a time! Some years back I saw a potential application using hundreds, and potentially thousands, of nodes, but again with several GWs to monitor a very wide area - microclimate monitoring, path evaluation, and personnel/asset tracking & monitoring for forest-fire management, where you need as many sensors as possible as quickly as possible - but they used other mechanisms to manage start-up rather than a big bang.