Is V3 the end for TinyLora?

It is, however unpalatable - particularly as it represents an opportunity to upgrade / improve.

People can swap out devices, putting in something better for much less than the original cost - and then repurpose the old device.

Certainly devices based on an ATtiny are going to be a challenge to find reasonable firmware for, but as I said, we are collectively working on it.

Tried that and it wasn’t successful. The application keeps sending downlinks even after the 10th uplink. It might also be a bug that needs to be corrected. Maybe in the same category as “resets-f-cnt”, which is also not working…

Shouldn’t that be “it isn’t, however unpalatable…” for the reasons you list? :thinking:

No**, suggesting people buy new hardware IS a solution. I was rebutting the ISN’T

It is a solution. Like telling people to use carrier pigeon or Clacks.

** the No here is rebutting your suggestion.


:slight_smile: Ok.


Please don’t post screenshots of code and logs. See: How to format your forum post? [HowTo]

That’s unfortunate - TinyLoRa has never really been suited to production use; it’s more of a crude proof-of-concept.

That said, I can see where you are coming from

A lot of the replies here seem really gatekeepery for people who are just looking for a simple way to make use of LoRa with a really simple library.

LoRaWAN is in fact a rather complex protocol, and IMHO arguably has some design mistakes - it’s intended to support some rather complex situations which rarely occur in practice, and in order to do that pays some penalty in not only software complexity, but in the cost of the actual traffic transmitted, too.

But it’s equally important not to confuse LoRaWAN with LoRa - if you want something simple and barebones, LoRaWAN might not be what you want to use.

Low memory devices were likely ill-chosen to begin with - this has been pointed out for quite a while, but some still insist on buying more of them. That said, I suspect that with real care (and dropping Arduino compatibility) optimal code could get the job done. But it’s unlikely to be a productive investment of the time of someone capable of doing so.

It’s clear where this is coming from, though it’s not clear if it actually matches the needs of the community.

Regardless though, the major mistake at present is that it hasn’t been coded with safeguards that will make it degrade gracefully when faced with misbehaving and misconfigured nodes - something that it is unavoidably going to face.

Most basically, the stack needs downlink rate limiting, so that regardless of whether the cause is under-implementation of LoRaWAN on the node, misconfiguration, or even intentional DDoS-type abuse, the network refuses to transmit an excessive number of packets down towards a node.


So what does this mean for us amateurs with stateless TinyLora/Arduino sensors, using ABP and not caring about Frame Counter Checks and MAC Downlink Commands?
I mean, we support TTN with our own gateways as much as we can, but if we need to replace our entire hardware just for some compliance needed for industrial solutions, this is no longer a community project.
Fair enough, but please be transparent so we can find other solutions. To me it’s a bit like saying IPv4 is no longer good, so by the end of the year please switch to IPv6.


Nothing yet - it’s too early to say given the measured evaluation and testing that needs to be done and maybe some changes on the V3 stack to accommodate.

Or maybe compliance with the LoRaWAN specification as we all share the airwaves and need to play nicely with each other.

Somewhat analogous to drone flying. At first people just flew drones. Now in the UK you have to have a licence and pass a test to fly anything larger than a matchbox, almost entirely down to people not playing nicely.

It would be super useful if you could share what DR your devices are set to, their signal strength as reported on the console, how often they transmit, and whether ADR has been enabled. Many of the MAC requests are for setting things that you will have already set in the firmware, but ADR requests do make a useful difference. If a device isn’t even listening, it may be useful to allow all downlinks to be turned off, to save the gateway fruitlessly transmitting an ADR request.

It is important that once the network realises a node is being antisocial and ignoring such requests, it stops adding to the problem by spamming downlinks - that only makes the situation worse! I hope TTI adapt the stack to support this, and soon.

I would remind all users that the recommendation is that all non-mobile devices should have ADR enabled, for societal benefit and improved spectrum utilisation - even if some users don’t care and are irresponsible in thinking theirs is a ‘special case’ and they only want to use minimal hardware and crippled s/w that doesn’t implement the standards (yes, being militant here! :wink: ). It’s a bit like the S/DCPF’s - I’m forever hearing “but it’s just for my use”, or “it’s just for my tests/experiments”, or “it’s only on for 5/15/5000 minutes at a time” - delete as appropriate - “and isn’t hurting anyone else really”.

It is the responsible thing to do, and if everyone disregarded such guidelines (reduce SF and reduce Tx power to the minimum needed to sustain the connection, plus a reasonable safety headroom) we will find that the airwaves get clogged and regulations may get tougher… Ok, we all occasionally cock up settings, or mis-configure, or miss the fact we haven’t optimised the RF implementation, and that is forgivable if corrected quickly once identified or notified (reminder to self: must also practice what I preach! :wink: )


Ok, I see your points, but are we applying the same standards here? People fly drones based on 868 MHz; I’m pretty sure they use more bandwidth than my LoRa sensor transmitting a small packet every x minutes. In my case I bought an 8-channel gateway for 200€ which is extending the network. In my area no one else is providing any TTN accessibility. So are we discussing real-life problems where the 868 MHz spectrum or TTN gateways are really clogged up by such “antisocial” sensors? The “does it hurt anyone?” question is in my view quite relevant. And I would even say that by investing in a gateway you get a certain right of way for that connection (again, as long as no one else is hurt in the 868 MHz spectrum). So to me it’s the overall picture that’s relevant. And I think TTN needs to decide: either move to industrial solutions (which does not really fit the 868 MHz amateur spectrum in Germany) or promote a community network with contributions from many people, at the price of relaxing some standards. Again: both ways are fine, but please take transparent decisions.

I guess that is one for the TTI and TT Foundation core team - I’m just like you: a community user, contributor of GWs and a voluntary Moderator. The problem I am trying to raise is one of social responsibility vs any sense of entitlement. The TTN Manifesto allows equal access for all as a community initiative, and I would like to thank you for contributing a gateway even though I will likely never have need to use it personally (where in the world are you?). It’s very tempting to say “I have deployed a gateway and therefore I (should) have a better right of access”, but that doesn’t hold. Transmissions are heard by all GWs in range - not just TTN but also TTI, Loriot, Orange, Singtel, Orbiwise, Digimondo, public or private instances of ChirpStack, etc. So even if you don’t see other TTN users, or at least GWs, nearby, you have no way of knowing what else is around you!

Also there are transient users - my gateways often see other users passing through as they use the various transport networks in the area around the GWs: motorways, rail, trunk roads, even canals! And they may be affected by misbehaving installations, albeit for a short time.

I too feel a sense of entitlement - you are not alone in this - as I also deploy GWs, BUT one has to suppress that natural assumption and help keep a level playing field. There are many in the Community who have no interest in deploying GWs and who are focused solely on the sensors - or not even the sensors, rather looking at use cases and big-data applications/integrations etc. I feel I learn a lot from them, as they make their own contributions to the Community just as much as you & I deploying gateways. :slight_smile:

If I take your reasoning further, can I say that I have more entitlement than you? Sounding all very Soviet Communist, or ‘Animal Farm’-like, with some more equal than others. For reference, I have registered ~40 GWs to TTN, of which this morning around 33 are active - others will come back online over the next days/weeks as I regain access (Covid or frozen snowy weather allowing! :wink: ). In addition there must be at least another 150-200 deployed where I have bought or supplied for clients/friends/community members to deploy in their own names & for their own use, and probably 2x that again deployed based on my suggestions/specification and influence. Does that mean I have even more entitlement than you? Absolutely not! We deploy in the knowledge this is a Community deployment, and we get the benefit of a supported back end (industrial grade, as you suggest) and infrastructure, and the option (for our mobile nodes & sensors) to use deployed GWs put up by others.

All I ask in return for the deployments I do is that people act reasonably and responsibly and share alike. Please do not deploy S/DCPF’s near my GWs, as they disrupt for all, and please deploy nodes that are well behaved & follow the evolving standards at the time of deployment… :+1:

BTW, I thought that drones typically used 2.4 GHz, as the 1% duty cycle applicable in many parts of the world for the 868 MHz band segments used for LoRa/LoRaWAN would result in unsafe control of a potentially deadly flying mass?! And a few drones in a given area (density limited by physical spacing & collision avoidance) is very different from many hundreds, tens of thousands or even millions of nodes in a given RF coverage area - where spectral spacing as well as physical spacing is required.

I see your point, but let me rephrase the “right of way”: first of all, of course you need to comply with the general duty-cycle limitations in the spectrum (1% for 868 MHz). But my understanding is that the new requirements that come from the standard come on top, to organise the bandwidth use within the TTN sphere (because the TTN Stack also doesn’t know or care about the Loriot/Orange/… users). And by right of way I mean that I should have the right to neglect these TTN commands from the stack, even if it might affect other users of this gateway (which in my case I am pretty sure it doesn’t). So yes, the traffic on this gateway would be a bit less optimized, but at least the gateway is there at all. And this is the type of relaxation I think TTN should allow in order to keep the community support (and then evolve at a bit slower pace to the next level).

And for the drones, look up TBS Crossfire - these long-range modules use 868 MHz. Although there is quite some evolution ongoing to get long range also in the 2.4 GHz world.

Greetings from Berlin
(GW eui-b827ebfffeb7afe8)

There are actually no new requirements in this aspect for TTN V3 in regards to V2.

That is incorrect. Your gateway uses shared RF space in your area. It is also part of a shared public wide-area network, The Things Network. The purpose of ADR is to manage and optimize the availability and performance of the network in a geographic area. End devices (nodes) are not ‘connected’ to, and not limited to, your gateway; they are connected to the network. All gateways and nodes in your area share the same network and the same RF space.
Your gateway by definition is not the only one in your area. Tomorrow someone else may put up a gateway in the area, and I know people who take mobile gateways with them while they travel.

Your gateways and nodes therefore have to comply with national RF spectrum legislation, LoRaWAN and TTN policies and their restrictions.


Best regards from the UK :slight_smile:

There are legal limits - the 1% DC referenced - but there is also the TTN FUP, there to protect community access (30s of uplink airtime and <=10 downlinks, per device, per day). If a misbehaving node triggers more downlinks than that, even when not requested/confirmed etc., that is just as bad - indeed it can be thought of as worse. Please remember the system is simplex: while the GW is transmitting all those (even if not requested or needed) downlinks, it is deaf to ALL nodes transmitting in the area. That is disruptive, and may also help tip the GW into throttling back its own legitimate, needed downlinks, as it is a radiator in its own right and also has to comply with duty-cycle limits (though for a typical TTN GW in the selected band this is 10%).

Please remember also: just because you ‘can’ do something does not mean you ‘should’, other than perhaps in exceptional circumstances. Best practice is to design your apps to work at way below the DC or FUP limits, a) to allow for fair access, & b) to ensure your application is resilient in the face of lost packets… perhaps due to other, less considerate users hammering the airwaves ;-). I typically tell potential users or clients to think in terms of 0.1% vs 1% and see if their application still fits :wink:

I wish you well in trying to meet and comply with recommendations and as Nick suggested earlier working together, as a Community, many of these issues will be resolved.


P.s. kids starting to show interest in drones for when we come out of lockdown & I’ve been thinking of possibility for coverage mapping, though altitude vs near ground level may be a problem… so appreciate tip on long range units :wink:


My “worst” application has 0.1% airtime usage, so this should be ok :slight_smile:
I just think that my use case (a sensor with very low cost, ultra-low-power mode on battery, no critical data, and replay attacks irrelevant) is one of the major use cases for TTN. For critical data (such as smart energy meters) other frequencies such as 450 MHz are reserved in Germany, and companies will most likely use dedicated networks for this. For IoT with no space or energy limitations, GSM/LTE is often a more convenient option for the supplier. So what is the use case and the target audience for the public TTN?

If you know of devices that have similar power consumption and at least similar costs to an ESP32, and can hold the full LoRaWAN stack to support OTAA & Co., I am more than happy to try that for new devices. And if I am one of the few remaining ghost drivers, I will also accept this.


Going forward I am using STM32L micros + RFM95 modules. Power consumption in sleep is 3.5 µA, which is even better than my ATmega328P.

I would still like to be able to use my old hardware, which is fully functional for me.


Stay on topic and don’t try to start other discussions in this thread.
The last posts have hardly anything to do with the topic subject (is V3 the end for TinyLora?) anymore and are not related to the V2 to V3 upgrade at all. For other subjects a new topic can be created if desired (always first check whether a relevant topic already exists).


Actually the behavior of V3 imposes several new requirements which the behavior of V2 did not, and that’s the whole point of the thread.


Fair point. Yet the target audience and usage of TTN will have an impact on when the backward compatibility will end. So, coming back to the original topic, it would be very helpful to know how long we can operate the old hardware with ABP, disabled frame counter checks and no downlink response. I guess the current setup, where every misbehaving device triggers unwanted downlinks and clogs the network, will not be accepted. So it’s either changing V3 for backward compatibility, or I guess the devices will be kicked out (which I guess would be fine for some).