We have a project to control irrigation valves. In the application area we have the following setup:
3 Gateways (different manufacturers) in range
6 valve controllers (Class C end nodes) to open/close a pipe via downlink command (unconfirmed)
private network with self-hosted TTI instance
Valve controller end nodes are configured as shown in the screenshots below.
The problem is that downlinks often don't reach an end node, even though the gateway logs show the (unconfirmed) downlinks as successfully transmitted. We need to ensure that the end node receives the downlink message.
In a test where gateway and end node were only a few meters apart, we also found that downlinks are getting lost. So it's not just a bad-connection issue.
My questions are:
What is a realistic latency for LoRaWAN? Is it realistic to control many devices (100+) over LoRaWAN with a maximum latency of one minute? What should the setup look like in that case (number of antennas)? Or is the normal use case to transmit the schedule once and that's it?
What is the best practice for controlling critical infrastructure over LoRaWAN? Can we guarantee that a Class C end node receives a command via downlink, or is this by design not possible?
What do the nodes' local serial ports say/what do they see? It may be that they are out of range… or
Too close to each other and shouting in each other's ears, causing channel overspill and saturation. I recommend several meters of separation, preferably with an absorber (e.g. a wall) in between; look for RSSI values below about -45 dBm, better yet -55 to -60 dBm.
Ensure systems and connected devices are fine and safe in the case where they don't receive specific commands, for reasons beyond the application's and the deployed system's control. If controlling valves etc., do they have local shut-off and safety checks? What happens if a turn-on or turn-off command doesn't get through?
Basically it's RF: there is no guarantee of getting through (local conditions, interferers, risk of RF collision on air, etc.). Ensure the design allows for this as above, and design mechanisms for getting data through after the fact: either a rolling window of values being sent, or a mechanism for later supply on demand from a local log at the node.
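The "rolling window" idea can be sketched roughly like this (the names and the window size are made up for illustration): every uplink repeats the last few readings, so a single lost frame doesn't lose data permanently.

```python
from collections import deque

WINDOW = 3  # how many recent readings to repeat in every uplink (assumption)

class ReadingBuffer:
    """Keeps the last WINDOW readings; every uplink sends all of them."""

    def __init__(self, window=WINDOW):
        self.buf = deque(maxlen=window)

    def add(self, reading):
        self.buf.append(reading)

    def payload(self):
        # Oldest first; a dropped uplink is covered by the next ones.
        return list(self.buf)

buf = ReadingBuffer()
for value in (21.5, 21.7, 21.9, 22.1):
    buf.add(value)
print(buf.payload())  # -> [21.7, 21.9, 22.1]
```

The trade-off is payload size versus resilience: with a window of 3, two consecutive uplinks can be lost before any reading is gone for good.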
Can you clarify: TTI is a company. Do you mean TTS OS, as in the open source edition?
If so, which version?
Meaningless without knowing what the devices are; they could be on 1.0.4, which is a whole different level.
Not possible to the dictionary definition of "ensure" with RF in general as a single comms channel. LoRaWAN is not suited to command and control, as downlinks are an issue for all devices in the area that are trying to uplink, on top of any other interference that other legitimate radio users may bring to the scene. LoRaWAN for setting parameters for local autonomous control is fine, if you can cope for a few hours while the new settings arrive when the RF situation is bad.
On Class A with a good connection, in good weather (i.e. not raining hard) and no solar flares, there's about a 66% chance of delivery within 5 seconds of the device's next uplink. Class C: ditto for rain/solar/dumpster trucks blocking the antenna etc., about 5 seconds or any other random number depending on backhaul, but it's mostly about your config. The Class C spec doesn't define any latency; you have to measure your system to find out what it is.
Not in the slightest - not if you mean all 100 at once.
The NS picks the best gateway to send to a device, which makes "more antennas" pointless unless there are dead patches, and that would affect only a handful of devices, whereas you seem to have a general problem. And if "more" gateways somehow end up transmitting at about the same time, you'll make the situation worse.
Control? DO NOT DO IT, EVER.
Sensors & feedback, yes.
You can’t on any one tx. Overall, eventually, yes.
It was never considered in the design you are thinking of, so in a Schrödinger sort of way, it's neither possible nor impossible. It wasn't designed out; it's just not part of the spec. There's too much going on in the ISM bands for any radio system to guarantee a latency. LoRaWAN is low power, so while its chirp can punch through a fair amount of noise and crosstalk, it has its physical limits. But that doesn't mean YOU can't design a system on top of it that works to a reasonable margin of certainty. In the same way that cars have airbags etc., the final arbiter of the accident, or not, is the user.
Probably not the answers you were hoping for and yes, I’m being emphatic, it’s my middle name (along with Danger, obviously).
Ever since they had to send some kids down a water-filled cellar at Chernobyl, no one has put critical command & control on a single piece of comms, nor left out local processing to deal with the "oh dear, not good" and the "boom noise in 30 seconds" situations. This works very well with LoRaWAN: the device knows that it has to turn off the valve when the tank gets to 80%, it does so and phones home. You want it to fill to 90%, you send the command; a couple of Class A uplinks later it says "sure boss" and starts doing that.

You want to change the discharge rate, you put the command in, it doesn't arrive, and your firmware developer "forgot" to put a return ack in the next uplink (no, don't do anything confirmed, that makes the situation far worse). The tank gets to 30% and the device thinks, hmmm, I'm meant to carry on, but this is going to empty the tank before I get a reply to my "oh dear" alert. I know, I'll throttle back the discharge rate until I get a message from central command. It sends the "oh dear" message but doesn't get a response back. So it goes through a carefully thought out sequence of escalation in terms of uplinks, some of which may involve confirmations to establish whether the network is still available, until it almost drains the cooling ponds, the waste pellets get warm and the west coast of Cumbria starts to glow in the dark. Its radiation sensor goes off and, because the firmware developer has never worked on anything really critical and thinks TikTok is a thing, and management think a test plan is an optional extra, a bug is triggered and the Irish Sea is flooded with water that makes the fish glow in the dark.
Now that I’ve got that off my chest, next steps:
Can we have 10 RSSI & SNR values for each device for each gateway, so we can see, assuming some element of symmetrical tx/rx (not really valid, unless you can get us the values the device hears for downlinks), what the reception may be like.
Or if it's actually irrigating GM crops or some such and you can't share, you should look for an RSSI above -120 and an SNR above 4-ish. But @Jeff-UK is the radio expert; my expertise is in ranting on the forum about firmware developers half my age and, er, writing firmware. And flying planes without engines, which helps focus the mind on the actual definition of do or die.
As well as the above, an actual indication of what these valves are irrigating would be useful for context. Most plants can go a few hours without being rained on, three days for the contents of a typical garden; if it's hydroponics, obviously that's different, but it can still be run on local intelligence without stunting any growth if the message doesn't get through for a few tries. Unless it's a cannabis farm, then mum's the word.
Is this a client thing? As in the client wants the valves to respond at a finger snap? If so, find & eviscerate the sales guy, it will save you so much bother in the long term.
For such a small instance of TTS, is it worth the effort of having to look after it yourself? Or is it for a poppy field, if so, your secret is safe with us.
But fundamentally, this is likely to be a local antenna & interference issue as you just don’t have enough devices to have them clash.
But if there is something outside of your control, a huge amount can be done with firmware: central command sends a message, the device sends back "got that"; if nothing is heard back, try again a minute later, rinse, repeat.
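A minimal server-side sketch of that retry loop, with `send_downlink` and `got_ack` as hypothetical stand-ins for whatever your network server's API actually provides (e.g. queueing via the TTS MQTT or HTTP integration):

```python
# Application-level ack-and-retry: queue a downlink, wait for a "got that"
# in a subsequent uplink, and re-queue if nothing comes back. Function
# names and the retry budget are assumptions for illustration.

def deliver(command, send_downlink, got_ack, max_tries=5, wait_s=60):
    """Return the attempt number on success, or None after max_tries."""
    for attempt in range(1, max_tries + 1):
        send_downlink(command)          # queue via your NS API
        if got_ack(timeout=wait_s):     # app-level ack seen in an uplink?
            return attempt
    return None  # escalate: alert an operator, try another channel
```

The `None` return is the important part: the caller must have a plan for "the network never confirmed this", not just assume eventual delivery.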
Or layer something like NB-IoT alongside so there is a whole different frequency band & infrastructure to deliver the message, that way, like Pony Express, something will get through.
Not unless you increase the number of gateways to 1 for every 9 nodes. Gateways in the EU868 frequency band are allowed to transmit at most 10% of the time (during which they don't receive a single bit), so assuming a transmission of 0.5 seconds, you can address one node every 5 seconds. That means 60/5 = 12 nodes a minute in ideal circumstances without any data loss. As that is unrealistic, assume one in every 4 packets is not received correctly: 3 of the 12 get lost, resulting in a requirement of one gateway for every 9 nodes…
The numbers might be different with faster transmissions, however in that case chances are more interference occurs resulting in more data loss.
BTW, the 10% is for the shared frequency used for RX2, which carries Class C downlinks >99% of the time, so there will probably be a lot of interference from other gateways transmitting on the same frequency.
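The arithmetic above, spelled out with the same assumed numbers:

```python
# Back-of-the-envelope gateway downlink capacity, using the assumptions
# from the post above: 10% duty cycle on the RX2 channel, ~0.5 s per
# Class C downlink, and one in four frames lost.

duty_cycle = 0.10   # EU868 limit on the RX2 frequency
airtime_s = 0.5     # assumed downlink airtime
loss_rate = 0.25    # assumed fraction of downlinks lost

downlinks_per_minute = 60 * duty_cycle / airtime_s        # one every 5 s
effective_nodes = downlinks_per_minute * (1 - loss_rate)  # after losses

print(round(downlinks_per_minute), round(effective_nodes))  # -> 12 9
```

So one gateway can reliably address about 9 nodes per minute under these assumptions; 100 nodes with a one-minute deadline implies roughly a dozen gateways, all competing for the same RX2 frequency.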
All of the above should lead you to conclude what you want is not a realistic scenario. Redesign using the pointers provided by @descartes.
Hi guys, I've read the whole topic and I have a similar problem with downlinks.
I am carrying out a project based on LoRa. I currently have a RAK 7240 gateway (16 channels) and RAK 7431 nodes (LoRa to RS485 converters). The devices connected to the nodes are actuators that use RS485 communication, which is why that node was chosen.
These actuators expect a 22-byte command for operation, and this data must be updated every 10 seconds.
After various tests, we want to send this information to a total of more than 40 nodes, but with 12 nodes we already have problems and information is lost; we see that it does not reach all the nodes correctly. Together with the LoRa framing we have a frame size of 28 bytes (6 from LoRa and 22 from the RS485 equipment).
We know that LoRa technology is primarily intended for sensors, not actuators. In fact, in almost all gateways on the market, the majority of channels are intended for uplinks and not for downlinks. LoRa communication seemed ideal for our project, due to the coverage distance, the bit rate, and the few elements needed for communication.
However, I am running into the problem of information being lost. These are my questions:
Is LoRaWAN really what my project needs?
Is there a LoRa system designed for actuators rather than sensors?
No. LoRaWAN is for low bandwidth sparse communications. Not for updates every 10 seconds.
No. Most gateways cannot receive while transmitting. And all gateways are subject to the same airtime/dwell limitations as any device using the band, so a gateway cannot transmit many downlink frames without running out of allowed airtime.
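To put numbers on that airtime limit: here is a rough estimate using the standard LoRa time-on-air formula from the Semtech SX127x datasheet, assuming a 35-byte PHY payload (the 22 application bytes plus 13 bytes of LoRaWAN overhead), 125 kHz bandwidth, coding rate 4/5 and an 8-symbol preamble. All radio settings are assumptions for illustration.

```python
import math

def lora_airtime_s(payload_bytes, sf, bw_hz=125_000, cr=1,
                   preamble_syms=8, explicit_header=True, ldro=None):
    """Time on air (seconds) per the Semtech SX127x datasheet formula."""
    if ldro is None:
        # Low data rate optimization is typically on for SF11/SF12 @ 125 kHz
        ldro = sf >= 11 and bw_hz == 125_000
    de = 1 if ldro else 0
    h = 0 if explicit_header else 1
    t_sym = (2 ** sf) / bw_hz
    num = 8 * payload_bytes - 4 * sf + 28 + 16 - 20 * h
    payload_syms = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + payload_syms) * t_sym

for sf in (7, 12):
    print(f"SF{sf}: {lora_airtime_s(35, sf) * 1000:.0f} ms")
# -> SF7: 77 ms / SF12: 1810 ms
```

At SF12 a single 35-byte downlink burns ~1.8 s of airtime, so a 10% duty cycle allows roughly one such downlink every 18 seconds from one gateway; even at SF7 (~77 ms), 40 nodes every 10 seconds leaves no headroom once retries and other traffic are factored in.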
Basically LoRaWAN is the wrong technology for your use case. You might be able to implement your own protocol using plain LoRa.
PS: on TTN you would be breaching the FUP in a massive way; you are allowed at most 10 downlinks each day. Also, you would be an extremely unsociable spectrum user. While you might be able to do something that fits the legal limits, you would block any other use in the vicinity, and for LoRa that vicinity could be tens of kilometers.
The LoRaWAN overhead is 13 bytes, not 6. And it's not the majority of LoRaWAN gateways that are intended for uplinks, it's all of them: they all share the same vendor for the chipset, so they all have the same minimum of 8 channels for listening and 1 downlink at a time. Coupled with the duty cycle issue you've encountered, I'd suggest a trip to the Learn centre (link at the top of the forum pages) is worthwhile to learn what LoRaWAN actually is.
But only if you use a radio broadcast to all nodes with the exact same information, and only at DR5, thereby reducing the range.
In the ISM band, I don’t think any radio is a solution here.