Limit ADR on LMIC to not use SF11 and SF12

But both scenarios require the same action. Using SF12 is not sustainable and missing packets are not sustainable; the remedy is always adding another gateway, and that takes time. So I think the “miss some packets” scenario is better than causing trouble by exceeding the FUP, because it needs no urgent action.

It’s always hard to agree with a strategy without any context, which is why I asked what the downside of uplinking less frequently would be.

But there still seems to be a conceptual problem that you’ve sort of acknowledged. If a device has ended up on SF12 as a result of ADR and you force it to SF10, it seems quite likely that you will experience more than the occasional missed packet - quite likely all of them, in fact.

@LoRaTracker is the man to know what the reduction in range is likely to be.

Implementing a change to LMIC and testing seems likely to take as long as buying a TTIG and finding somewhere to host it.

How is your own gateway situated? What antenna does it have? Can it have its antenna in a better position?

Understanding that you may well stop your device stone dead, the simple answer is to put LMIC_setDrTxpow(DR_SF10, 14); just before LMIC_setTxData2(1, payload, sizeof(payload), 0); to hot-wire the SF. AFAIK the power value isn’t implemented but is required.
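For context, a minimal sketch of where that would sit, assuming the usual arduino-lmic do_send() structure from the library examples (the clamp-only-if-needed check and the payload buffer are my own illustration):

// Sketch only: clamp the data rate to SF10 before each uplink.
// In EU868 a lower DR number means a higher SF (DR_SF12 == 0), so
// "slower than SF10" is LMIC.datarate < DR_SF10. The 14 is the TX
// power argument, which AFAIK LMIC requires but doesn't act on.
static uint8_t payload[2];                  // assumed defined elsewhere

void do_send(osjob_t* j) {
    if (LMIC.opmode & OP_TXRXPEND)
        return;                             // previous TX/RX still pending
    if (LMIC.datarate < DR_SF10)            // ADR drifted to SF11/SF12
        LMIC_setDrTxpow(DR_SF10, 14);       // hot-wire it back to SF10
    LMIC_setTxData2(1, payload, sizeof(payload), 0);
}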

With LMIC, the changes to cope with the FUP are hardly difficult.

Why?

Also @andonoc, SF isn’t ‘just’ about range - it impacts sensitivity, which can be a factor in range, but also penetration. Another factor often overlooked wrt LoRa modulation capability is the impact of stepping SF on noise immunity. IIRC, where legacy modulations need +ve SNRs, LoRa can operate below the local noise floor, with a capability that varies with the chosen SF. ADR can therefore adjust SF not only to reflect a suitable RSSI plus a network-determined safety headroom; it can also adjust to reflect the presence of impinging noise and a poor noise floor.

Again IIRC, where SF7 is good down to approx -7 SNR, the improved sensitivity of higher SFs (longer signals allowing more time to discriminate and correlate target symbols) means SF12 is good down to approx -20 SNR. So even if there is a good RSSI and some headroom, if the local RF environment is troublesome a higher SF may still be recommended to compensate. Forcing SF10 where the system is flagging SF11 or SF12 may mean you have another variable that will kill reception… best advice is therefore to adjust timing to suit the higher SF under the FUP :wink: Fewer packets but a much greater chance of getting through, and, depending on how the application is constructed and controlled, less need for recovery or retries (helping to also limit spectrum use by minimising on-air time overall).
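For reference, the demodulation floors in the Semtech SX127x data sheet line up with those IIRC figures; as a simple lookup (the array layout is mine, the dB values are the data sheet’s):

// SX127x data sheet demodulation SNR floors per spreading factor, in dB.
// Index 0 is SF7, index 5 is SF12.
const float SNR_FLOOR_DB[6] = { -7.5, -10.0, -12.5, -15.0, -17.5, -20.0 };
//                               SF7    SF8    SF9    SF10   SF11   SF12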

The data sheet sensitivity difference for a LoRa device is that SF12 has 5dB more sensitivity than SF10. 6dB of extra sensitivity would be double the range.
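As a quick check of that rule of thumb, under free-space assumptions range scales as

\[ \frac{r_2}{r_1} = 10^{\Delta/20}, \qquad 10^{5/20} \approx 1.78, \qquad 10^{6/20} \approx 2.0 \]

so 6dB of extra link budget is indeed a doubling of range, and the 5dB step from SF10 to SF12 is a bit less than that.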


In the code I published for the LMIC low power library, which worked around the duty cycle problem and allowed the node - a SEEED XIAO SAMD21 in this case - to go into a very low current deep sleep (5uA), the FUP adjustment was tied up with the deep sleep code.

However, in a normal LMIC setup there is a line like:

const unsigned TX_INTERVAL = 30;

Now you might question why the code examples for the libraries are such a major breach of the FUP, but if you make TX_INTERVAL a global variable, you can adjust it at every transmission to take account of the SF in use (LMIC.dndr) and the payload length in use (sizeof(payload)).

An exact calc of air time is not so simple, since the air time could be the same for a 20 byte packet as for a 22 byte packet, due to the vagaries of how LoRa works. However, if you take the length of the minimum packet and the maximum packet, you can have an average air time per payload byte, and with these constants it’s not difficult to estimate air time; then it’s easy enough to adjust the TX_INTERVAL to keep close to the FUP.
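A sketch of how that could look, using the full symbol-time formula from Semtech’s AN1200.13 rather than the per-byte average (the function names, the 30 s/day figure and the frame-length handling are my own illustration):

#include <math.h>
#include <stdint.h>

unsigned TX_INTERVAL = 30;                 // now a global, recalculated per TX

// Air time in seconds for one uplink: BW 125 kHz, CR 4/5, 8-symbol
// preamble, explicit header, CRC on, low data rate optimise for SF11/12.
float airTimeSeconds(uint8_t sf, uint8_t phyPayloadLen) {
    const float tSym = pow(2.0, sf) / 125000.0;     // symbol duration
    const int   de   = (sf >= 11) ? 1 : 0;          // LowDataRateOptimize
    float nPayload = 8 + fmax(ceil((8.0 * phyPayloadLen - 4.0 * sf + 28 + 16)
                              / (4.0 * (sf - 2 * de))) * 5.0, 0.0);
    return (8 + 4.25) * tSym + nPayload * tSym;     // preamble + payload
}

// Pick the interval that spreads the FUP's 30 s/day of air time evenly.
void updateTxInterval(uint8_t sf, uint8_t phyPayloadLen) {
    float perPacket = airTimeSeconds(sf, phyPayloadLen);
    TX_INTERVAL = (unsigned)ceil(86400.0 * perPacket / 30.0);
}

For a 2-byte payload (a 15-byte frame once the LoRaWAN overhead is added), this gives roughly 165 ms per packet at SF9, i.e. a minimum interval of about 8 minutes.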

A temperature sensor sending 2 bytes every 10 minutes is limited to SF9 with the FUP. So, from what I understand, I should NOT allow the device to go to SF10. Then, if I lose connectivity completely, I know to add another gateway or reposition the node. What is strange about my approach? How did you handle those situations?
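(For the record, that SF9 limit checks out against the air-time formula above: a 2-byte payload, so a 15-byte frame, every 10 minutes is 144 uplinks a day, and my arithmetic gives

\[ 144 \times 0.165\,\text{s} \approx 23.7\,\text{s at SF9 (inside the 30 s FUP)}, \qquad 144 \times 0.330\,\text{s} \approx 47.5\,\text{s at SF10 (over budget)} \]

so SF9 is indeed the slowest SF that fits that schedule.)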

If the node is in a location where it can end up using SF12, set the send interval to 60 minutes or so.

Do you really need to know temp that often? What is your application?


spring frost detection for orchards

I found the ADR margin setting in the device settings! I’ll go play around with those numbers.

8 minutes. But many temperatures don’t change that fast or need updating that fast …

OK, so send the temperature every hour, and more frequently as it changes and gets closer to the frost level - and only when there is a significant change (relative to the accuracy & precision of the temperature sensor). The FUP is over 24 hours, so more frequent uplinks when there are significant changes in the wrong direction can be accommodated.
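A rough sketch of that reporting policy - all names and thresholds here are illustrative and would need tuning to the sensor’s real accuracy:

#include <math.h>

const float FROST_LEVEL_C  = 0.0;     // temperature of interest
const float WATCH_BAND_C   = 5.0;     // watch closely below +5 C
const float SIGNIFICANT_C  = 0.5;     // smallest change worth a packet

float lastSentTempC = 99.0;           // forces a send on the first reading

// Decide whether this reading warrants an uplink right now.
bool shouldSend(float tempC, unsigned long secsSinceLastSend) {
    if (secsSinceLastSend >= 3600)
        return true;                                  // hourly heartbeat
    if (fabs(tempC - lastSentTempC) < SIGNIFICANT_C)
        return false;                                 // not a real change
    bool nearFrost     = tempC < FROST_LEVEL_C + WATCH_BAND_C;
    bool gettingColder = tempC < lastSentTempC;
    // significant move in the wrong direction near frost: report early,
    // but never more often than every 10 minutes (FUP headroom over 24 h)
    return nearFrost && gettingColder && secsSinceLastSend >= 600;
}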

Like many threads, you’ve got access to some people that have solved many problems like this so many times we confuse them with hot dinners, but getting enough information to advise is like pulling teeth. Be more open to providing information without us asking, rather than assuming that the proposed solution won’t work.

I send ‘normal’ weather temperatures as one byte with 1 decimal place - which, given the accuracy of the average sensor, is more than sufficient. I can alter the range using the port numbers. But TBH, providing overlapping coverage for anything being monitored that can cost money is a no-brainer.
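For what it’s worth, a sketch of how such a one-byte scheme can work - 0.1C resolution gives a 25.5 degree window per port; the particular offsets here are my invention, not necessarily the poster’s mapping:

#include <stdint.h>

struct Encoded { uint8_t port; uint8_t value; };

// One byte at 0.1 C resolution; the port tells the decoder which
// 25.5 C window the byte sits in. Assumes readings stay above -10 C.
Encoded encodeTemp(float tempC) {
    if (tempC < 10.0)                 // port 1: -10.0 .. +15.5 C (frost range)
        return { 1, (uint8_t)((tempC + 10.0) * 10.0 + 0.5) };
    return { 2, (uint8_t)((tempC - 10.0) * 10.0 + 0.5) };  // +10.0 .. +35.5 C
}

The decoder just reverses it from the port: temp = value / 10.0 + windowOffset.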

So I’d ask what the temperature sensor is, hoping against hope that you just tell us what it is in your next post.

And if this is a commercial orchard, or one you get a reasonable crop from, installing a gateway on cellular is a next-day-delivery away - you can then figure out a longer term plan but you will have some breathing room.

If your install is for commercial use, shhhh, I didn’t tell you this, but you could use your one gateway and your temperature sensor off FUP by registering for a Discovery instance on TTI.

In which case you won’t be dependent on just one sensor, right? (Single point of failure.) But also, if the orchard is of any decent size and area, then likely you will be literally ‘heatmapping’ by using at least 2, preferably up to 4 or maybe even 8 sensors per acre approx., to then also determine which direction any chill is coming from - or, if it is simply a still cold night, looking at wide area coverage to then trigger either blowers to move air, or possibly add in some space heating (keep it green folks!) to minimise any potential frost effects.

What you then do is have each sensor update approx every hour or so, but ensure they are not all updating at the same time, rather spread in time across the locality, so that you avoid/identify any localised temp phenomena (sheltered or exposed areas getting colder faster etc.) and get an impression of overall conditions much faster than the one hour update would imply, by virtue of sampling across the sensors - getting back to that 15 or even 10 min update you were thinking of, and without any sensor even getting close to the FUP.

Saw that with a client in Kent with ~25 sensors spread over an approx 100 acre mixed fruits orchard deployment (they had 20 across the orchards specifically for the temp monitoring, but then also had 5 for monitoring conditions close to a series of bee hives on site, so simply aggregated in that data also :wink:), and another client in the Napa Valley CA - where they were restricted by US dwell time limits to no better than SF10 - who had 30 sensors initially spread over just under 52 acres IIRC. Both worked a treat, providing the data and early warnings the clients wanted (the latter getting greater value out of the small handful of GWs they and neighbours had initially deployed by later adding soil condition monitoring and various other sensors into the mix over time!)
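The staggering itself is simple enough; one way to derive each node’s slot (names are mine):

// Spread N nodes evenly across a shared cycle so the network sees a
// fresh reading every INTERVAL_S / NUM_NODES seconds, even though each
// node only transmits once per cycle. nodeIndex comes from the device's ID.
const unsigned long INTERVAL_S = 3600;    // each node uplinks hourly
const unsigned      NUM_NODES  = 8;       // e.g. 8 sensors in the area

unsigned long firstSendOffsetSecs(unsigned nodeIndex) {
    return (nodeIndex % NUM_NODES) * (INTERVAL_S / NUM_NODES);
}
// 8 nodes on a 3600 s cycle => a reading somewhere every 450 s (~7.5 min)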

I have a recollection of a discussion where the user had many of the sensors at the edges of the plantation (I think it was a vineyard) with some co-ordination of the timing of the uplinks as they wanted to monitor how gusts of air moved into the area.

However at this point in time we seem to have a DIY LMIC based device, possibly only one, with no gateway in reasonable range. So without more detail, all I hear is Lalo Schifrin’s _ _ . . _ _ . . doodle-do doodle-do _ _ . . _ _


Sorry for the long delay, and thank you for the good and extensive feedback!!!
I don’t have much time, but I’ll quickly report what happened after changing the ADR margin:
I changed the margin from 15 to 5 on 3 of 19 nodes:
Node 9 went from SF11 to SF10 with only slightly more packets lost
Node 16 went from SF12 to SF7 without significant packet loss
Node 17 went from SF10 to SF8 without significant packet loss

It seems that I can affect the SF this way and keep within the FUP.
Can I ask what you think of this way of staying within the FUP?

I think with 19 devices in an orchard you’ll need to have a chat with @rish1 about commercial use.

I also think messing with the ADR margin, which is carefully tuned by the supreme experts of TTI, is somewhat hackish and likely to come unglued. Particularly as you appear to have inspired some considerable changes in the SF.

You should also look at the statistical variance of the node RSSI - before making a change, as afterwards it will only report the successful ones, of course! - and you will likely see at least +/-5dB of variance just sitting there over time. You then need to consider how the local environment will change as time goes by - cold will be associated with condensation/fog or rain, and yes, that frost/ice you are keen to avoid! A damp tree, especially in bud/leaf, will represent a fair attenuation even in the ~900MHz bands, though not as bad as would be the case for 2.4GHz; with an array of them as woodland or orchard, that averages out to a lot of attenuation overall (penetration in woods/forestry much discussed historically on the Forum over time, so best start there - Forum search is your friend!). So I suspect that, whilst OK ‘now’, even a small change will start to lose you lots of messages…

15dB network headroom is slightly conservative, but has been arrived at across lots of users/devices/applications. I often see 8-10, sometimes 12, as margin, but those are typically private networks where the deployment, conditions, application (and risks) are well understood and allowed for/accepted. Don’t think I have ever seen a ‘real’ network deployed reliably with just a 5dB margin! YMMV! :wink:

Can I ask you what can happen? Google found some posts about losing connectivity. If that happens, I have to bring them near the gateway, readjust the ADR margin, and bring up a new gateway in a better location. Or do I understand it wrong?

I didn’t know this was a problem. There are many things to measure and the orchards are spread around a valley.

Thank you, it’s good to have a reference.

It seems much easier to tweak the margin a little and keep track of the SF. A second gateway is in the mail, and now I have a much better idea of what location I can hang it in.

A reference, sure, but DON’T ignore the given context:

With respect, your questions and implied level of LoRa/LoRaWAN understanding and experience (correct me if I am wrong!) suggest

…may not be the case, yet!

Obviously one of the best ways to gain experience is to try, and TTN is in part about learning and testing LoRaWAN for your own use cases… just don’t do anything that may be disruptive to other users. You have been given some free but nonetheless very valuable guidance and advice by Forumites with a lot of varied experience, and I would recommend you follow what has been suggested and accumulate some real data and experience before starting to make breaking changes… :slight_smile:

Exactly what you have put: you’ll lose connectivity and have to add gateways, which, funnily enough, brings us full circle back to you adding more gateways - so 26 posts in, you’ve come to the same conclusion as the answer in post 2.

Totally irrelevant - if you gain a pecuniary advantage by the use of LoRaWAN via TTN and you aren’t just doing some initial Proof of Concept work, then that is commercial use, which should be on TTI. How far apart the orchards are makes no difference to the server use. And having many things to measure definitely makes a difference to the server use.

And then there is the breach of the FUP, which isn’t an issue on a paid-for TTI instance - apart from the LoRa Alliance requirement that service providers restrict habitual use of SF11 & 12. So an additional gateway it is, then.
