Fair usage with own gateway, why not offer a more generous fair usage plan?

The fair usage policy for TTN is understandable, given it's provided free, but why doesn't TTN offer more usage to those who add their own gateway? As I understand it, as soon as someone connects a gateway to TTN it benefits everyone within range and expands coverage - even TTN's commercial customers benefit. Also, with devices that support ADR, a local gateway means lower transmission power is used, meaning better battery life, less time on air, and more reliable communication, making TTN look even better - plus less propagation of the signal, so reducing congestion.

So to encourage more people to add their own gateway, TTN could offer a more generous fair usage policy which reflects the benefits of a local gateway. This would be a nice way to say thank you for helping expand coverage, which TTN also benefits from commercially.

Given that there exist gateway simulators/emulators, I think that as soon as you allow such a usage boost for adding your own gateway, there would all of a sudden be A LOT more gateways connected that don't add anything in the real world.
Unfortunately, we humans are very greedy when it comes to taking advantage of ‘free’ stuff.

P.S. I fully agree and would hope that it would be an option, but I don't see it being viable.

This has often been discussed and proposed over many years within the TTN community and in discussions with the TTI core team, with many of the reasons you give amongst the points in favour.

Full disclosure: I would benefit massively from such a policy change, as I have >>50 GWs deployed (UK, EU & US) on the community network under my own name and have provided >150 more to community users and collaborators who support and host additional deployments around the world. However, as a community contributor, I accept that exceptions to the policy are not allowed.

The reality is that it would be challenging to monitor and 'police' IRL. More importantly, there are a few technical reasons why usage may continue to be limited on the 'free' instance.

To give just a couple:

LoRaWAN is a broadcast system: nodes just transmit, and messages are picked up by all GWs in range. Having your own GW does not mean only your GW hears your devices shout. Rather, experience shows that in many places - especially in larger towns and cities, or higher-density deployments - any given message may be heard by many GWs, all of which will forward it to the TTN LoRaWAN backend (TTS Sandbox). Hearing via 4, 5 or 6 GWs is typical, and in many places I have seen >>10-12 GWs regularly hear the same message. In some places, especially where there are adjacent or geographically overlaid peering LoRaWAN networks, the number of receiving GWs can be far higher.

Each message has to be handled by the LNS and de-duplicated. Allowing a user to increase TX rates would therefore not simply mean, say, 2x the allowed rate giving 2x the throughput on their own GW - it could mean 20, 24 or even >100 more messages to be handled just for dedupe. The impact scales significantly. And whilst you would add one unit of GW capacity, in reality you potentially absorb far more capacity across the network, as each hearing GW has to allocate receive/decode resources, and in the case of any associated downlinks (Join Accepts, ADR commands, confirms, network/device management etc.) the impact is even higher, as >95% of GWs are simplex and can't listen while transmitting to service your devices.
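To put rough numbers on that fan-out, here is a minimal sketch - my own illustration with assumed figures, not TTN internals - of how the backend workload scales with the number of gateways that hear each uplink:

```python
# Rough illustration (assumed numbers, not TTN internals) of how uplink
# fan-out multiplies backend work when many gateways hear each message.

def backend_messages_per_day(uplinks_per_day, gateways_hearing):
    """Each uplink arrives once per receiving gateway and must be
    deduplicated by the LNS before a single copy is processed."""
    return uplinks_per_day * gateways_hearing

# One device sending ~24 uplinks/day:
print(backend_messages_per_day(24, 1))    # 24  - the naive assumption
print(backend_messages_per_day(24, 6))    # 144 - typical urban overlap
print(backend_messages_per_day(24, 12))   # 288 - dense deployments

# Doubling the allowed rate doesn't add 24 messages of backend work,
# it adds 24 * N, where N is however many gateways are in range:
print(backend_messages_per_day(48, 12) - backend_messages_per_day(24, 12))  # 288
```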

To provide a timely response to ANY node transmitting at any time, essentially all device credentials for all devices (DevAddrs, associated Network, Application and Device Keys etc.) have to be held in memory for immediate action - you can't wait to pull them from remote HDD or even SSD storage to handle the message. Increasing the usage rate for any given device therefore has a significant impact on the operation and scale of the backend memory requirements.
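As a back-of-envelope illustration (the record layout below is my own guess, not TTS's actual schema), keeping every session hot in RAM adds up quickly:

```python
# Back-of-envelope memory estimate (assumed record layout, purely
# illustrative) for keeping every device session resident in RAM.

DEVADDR = 4               # bytes
NWK_SKEY = 16             # network session key
APP_SKEY = 16             # application session key
COUNTERS_AND_STATE = 64   # frame counters, MAC/ADR state etc. (a guess)

bytes_per_session = DEVADDR + NWK_SKEY + APP_SKEY + COUNTERS_AND_STATE

for devices in (1_000_000, 10_000_000):
    print(f"{devices:>10,} devices -> ~{devices * bytes_per_session / 2**20:,.0f} MiB")
```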

Hence the decision to limit use under the FUP on the freely provided Community network, which is increasingly positioned as a testbed and development network rather than an open, free-to-use/abuse resource.

The corollary, of course, is that because this is a broadcast technology, GWs will still hear all device TXs in range - including devices on alternate networks, or even on TTS-based private networks - and will have to forward them to their associated LNS. At least the back end will not need to be scaled to fully handle all of the messages received: many will not be set for peering, or will not be recognised as 'valid' messages for the given LNS instance - TTS Sandbox in our Community case - and can simply be dropped at the appropriate point.
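As a rough idea of why dropping foreign traffic is cheap, here is a hedged sketch of DevAddr prefix filtering; the exact prefixes, and the suggestion that TTN community addresses sit in the 0x26/0x27 blocks, should be treated as illustrative rather than definitive:

```python
# Sketch of the kind of cheap early filter an LNS can apply: the leading
# byte(s) of a DevAddr encode the operator's address block, so traffic
# belonging to foreign networks can be dropped before any expensive
# MIC checking or deduplication. (TTN community addresses have
# historically sat in the 0x26/0x27 blocks - illustrative only.)

OWN_PREFIXES = (0x26, 0x27)  # assumed: this LNS's DevAddr blocks

def worth_processing(devaddr: int) -> bool:
    """Return True if the uplink's DevAddr falls in our address space."""
    return (devaddr >> 24) in OWN_PREFIXES

print(worth_processing(0x26011F2A))  # True  - ours, go on to dedupe etc.
print(worth_processing(0xFE01AB3C))  # False - another network, drop cheaply
```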

No doubt this will be revisited in the future, and TTI, who fund this free offering, may yet relax the rules, but I would not hold out high hopes at this point as costs continue to escalate.

In the meantime, for all the beneficial reasons you state, please do go ahead and deploy, add to the Community network, and help us all improve and benefit from the increased capacity - remember, "You are the Network" :slight_smile: :+1:

Thank you for the interesting reply. I'm only just starting with this, so it's all great information.

I can see how it may cause extra traffic, but wouldn't devices with a local gateway just turn down their transmit power, so those more distant gateways stop receiving the signal, helping compensate for a bit more fair usage? It's not as though every single person using the community platform would immediately go out and buy a gateway just to get a bit more fair usage, or increase their usage if they already have a gateway - and we are talking, in today's terms, about tiny amounts of data.

A lowish fair usage policy is likely driving people to other alternatives, i.e. running their own network just for themselves (e.g. ChirpStack), where they can go all the way up to whatever airtime is legal in their country. This is then worse for TTN, as its gateways still receive all those packets, which can't be tamed by TTN's fair usage policy but must still be processed anyway.

I just thought it would be the right thing to do - some acknowledgment from TTN. After all, the gateways we buy and add at our own cost may be helping them with their commercial contracts and clients by providing extra coverage and accepting and passing on that traffic.

Yes, I have a gateway on order and will set it up for my local town. It will be interesting to see if any traffic starts arriving through it, and I'm quite happy to give back.

Thanks again.

Again, everything you say has been said before, with the same reasoning :slight_smile: As someone new you can be forgiven for that - there is a lot of history here, and it takes a long time to go back through all the historic posts and discussions (and yes, many were private discussions, so not all are discoverable by search). All valid in their own context, but ultimately they feel like waves breaking on the shores of real-life implementations. As I stated, I would also potentially benefit, but having been around for a few years now and having lived LoRa/LoRaWAN/TTN for a while, I see the challenges and can understand the reasons why not.

One point I would highlight is your comment that the gateways we buy and add at our own cost may be helping TTN with their commercial contracts and clients.

The reality here is that if GWs are used for commercial contracts, then usually the clients will be looking for assurances, SLAs and support (even if, being RF-based, the initial link cannot be guaranteed due to uncontrolled 3rd-party or self-inflicted transmissions, interference etc.) - something that is not offered with TTS Sandbox. So whilst the practical reality is that peering with TTN allows private instances and networks some additional coverage and redundancy, this is not to be relied on.

E.g. where I am based, in the UK's Thames Valley west of London, many of my GWs and devices are fortunate to benefit from a benevolent private network south of the Thames (and others in the wider area) which peers with TTN and often captures and forwards data from my devices to me; similarly, I see many of my GWs in the area capturing the private network's traffic and reciprocating. I've got a couple of GWs that are down at the moment - network issues and the switch-off of 3G cellular in the area - but I don't need to rush to sort them out, as the private network(s) and other community GWs provide enough backup, though ADR means slightly higher SFs short term until my GWs are back online. The Community devices don't need an SLA and can live with occasional lost packets.

TL;DR: no fully commercial network would/should rely on the availability of community GWs! :wink:

WRT alternate LoRaWAN instances, one other little piece of practical advice I would offer: whilst ISM band licensing allows for much higher TX rates, the reality (again) is that you may want to re-evaluate whether LoRaWAN is the right technology for any given application if you must push the limits. Remember, as a Long Range technology this has an impact on many others far and wide.

I recommend collaborators, users and, yes, commercial clients think hard about how resilient their end application is - can it provide effective utility if there is significant packet loss? (On average I see <2% loss/missed messages across my various deployments, but the recommendation is to plan for how the app will behave with up to e.g. 10% loss!) Also, does it 'really' need 5-min or 30-min or even hourly updates - is a longer TX cycle appropriate? Can data be packed in a way such that longer TX cycles carry the intervening actual sensor measurements etc.?

Remember the RF spectrum is a precious resource, and one that is shared… just because you can bang away at say 1% DC does not mean you should/have to! Think what would happen if every user decided to work at the legal limit for every use case and every device - the airwaves would quickly become unusable :frowning: So I often advise that rather than the FUP being a limit or restriction, in many cases it is really a guide to good practice, and many of the private network deployments I have supported over the years have stuck with the same limit, or possibly increased it very slightly where the use case demanded… just my 2 penneth! :slight_smile:

(Note the downlink limit is particularly good to adhere to, as downlinks are GW killers, limiting the ability of uplinks to be heard, and should be constrained where possible - don't think 10x per day but 10x per fortnight, or per month, or (in the case of static deployments!) even 10x across the deployed (battery-limited?) life :wink:)
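To make the airtime arithmetic concrete, here is a small Python sketch of the standard Semtech SX127x time-on-air formula; the payload size and radio settings below are my own assumptions, chosen for illustration:

```python
from math import ceil

def lora_time_on_air_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                        preamble_syms=8, explicit_header=True, crc=True):
    """LoRa time-on-air per the Semtech SX127x datasheet formula.
    cr=1 means coding rate 4/5. Low-data-rate optimisation is assumed
    on for SF11/SF12 at 125 kHz, as LoRaWAN mandates."""
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0
    ih = 0 if explicit_header else 1
    t_sym_ms = (2 ** sf) / bw_hz * 1000  # symbol duration in ms
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    payload_syms = 8 + max(ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + payload_syms) * t_sym_ms

# A 20-byte payload plus ~13 bytes of LoRaWAN overhead = 33 bytes on air:
for sf in (7, 9, 12):
    print(f"SF{sf}: {lora_time_on_air_ms(33, sf):.1f} ms")
# SF7 ~72 ms, SF9 ~247 ms, SF12 ~1810 ms - airtime roughly doubles per
# SF step, so rightsizing SF and interval matters far more than
# shaving a few payload bytes.
```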

Finally, whilst your point that devices with a local gateway would just turn down their transmit power is also valid, on a practical level it often doesn't happen that way. Firstly, the deployed device has to be well programmed and managed to effect that benefit, and many - even commercial devices - do not show that full behaviour; not least because, secondly, ADR doesn't immediately work that way (depending on network configuration and rules).

Typically, adjusting SF is used to effect good link quality at a distance under ADR, so a good link initially at say SF10 will be commanded to move to say SF9 or SF8 or… The benefit is reduced device power consumption through reduced time on air, and consequently lower network congestion and fewer message clashes - hence an improvement for all, compared with staying at SF10 and just reducing TX power to effect the same link budget. Only once down to SF7 does the idea of reducing TX power levels come into play, and by that time you are (in the context of LoRaWAN) already starting to consider the link as relatively short range - possibly only a few hundred metres in a dense urban environment (vs many 10s of km in open environments), and possibly less than that depending on construction techniques in the local environment.

When getting very close in range, it is often the case that swapping to a less efficient, lower-cost antenna has significant benefit. Changing out a $3-6 2.1/3 dBi antenna for a cheap <$1 patch antenna at say -0.3/-0.5 dBi to -1/-3 dBi saves money, reduces long-range interference, and often gives a smaller device… horses for courses, as they say. A lot of these alternate ways of thinking and approaching things only come with time and lived or shared (hence the Community and the Forum! :slight_smile: ) experiences.
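To illustrate why stepping SF down first wins, a quick sketch; the airtime figures were computed with the time-on-air formula above for a ~33-byte frame at 125 kHz, and the constant TX power figure is an assumption:

```python
# Why ADR prefers stepping SF down before cutting TX power: each SF step
# roughly halves time-on-air, so at fixed TX power the radio energy per
# uplink roughly halves too. Figures are illustrative only.

AIRTIME_MS = {10: 452.6, 9: 246.8, 8: 133.6, 7: 71.9}  # ~33 B frame, 125 kHz
TX_MILLIWATTS = 25  # ~14 dBm, held constant while SF steps down (assumed)

for sf in (10, 9, 8, 7):
    mj = TX_MILLIWATTS * AIRTIME_MS[sf] / 1000  # mW * ms / 1000 = mJ
    print(f"SF{sf}: {AIRTIME_MS[sf]:6.1f} ms on air ~= {mj:5.2f} mJ per uplink")

# SF10 -> SF7 cuts airtime (and hence congestion and radio energy) ~6x;
# only once at SF7 does trimming TX power become the remaining lever.
```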

Good luck with your GW deployment…..and welcome to the Network! :slight_smile:


The bit that TTI funds for us isn’t the gateway or the backhaul, it’s the servers which do all the heavy lifting with decryption, deduplication, decoding and forwarding.

Of all the possible metrics, 30 seconds of airtime a day covers a multitude of use cases and payload sizes - probably not totally scientific, but easy to measure on the device and via the console.

So as outlined above, adding a gateway actually has the potential to increase the workload of the servers if it has significant overlap with other gateways.

And if you are the lone wolf gateway in an area, maybe you should be penalised until you have two to provide some redundancy. And when you hit three, you get a bonus. But then penalised when you hit four.

Most people with a few devices host their own gateways anyway, to provide resilience for their own setup. Those who start out with devices, like to fiddle, and rely on the community network usually end up with a gateway to help with debugging.

If your device burns through all of the 30 seconds a day, you are probably doing it wrong - sending too frequently, too large, or too far away, or any combination thereof. So if you're hitting the limit, check back here for advice on rightsizing.

If you have your own gateway, possibly you can transmit at SF7. 30 seconds with SF7 is a lot of data.
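For a sense of scale, a quick back-of-envelope, assuming a 20-byte payload (~33 bytes on air) and the ~72 ms SF7 time-on-air from the formula sketched earlier in the thread:

```python
# Back-of-envelope: how far 30 s/day of airtime stretches at SF7.
# Assumes ~72 ms time-on-air per 33-byte frame at SF7/125 kHz -
# illustrative figures, not a TTN-published calculation.

FUP_AIRTIME_S = 30
TOA_S = 0.072

msgs_per_day = int(FUP_AIRTIME_S / TOA_S)
print(msgs_per_day, "uplinks/day")                         # ~416
print(f"one every ~{24 * 60 / msgs_per_day:.1f} minutes")  # ~3.5 min
```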
