I’m developing an application to control a water pump. My dev setup is:
- A MultiTech Conduit gateway that uses the mobile phone network to talk to the internet;
- A Feather M0 915 node;
- The MCCI LMIC library, and the AS923 frequency plan.
I have no control over the gateway and cannot see its logs etc - it's owned by the customer. I am almost certainly the only node using the gateway. The gateway is roughly 6 metres from my desk in the workshop (a garage), and farther when I'm in the house.
I am finding joins take a long time, sometimes up to an hour. After a fresh install of MCCI LMIC they seem better, but joins can still take up to 10 minutes.
I am finding downlinks hopelessly unreliable for this project's needs. They were much worse, but since the reinstall of MCCI LMIC they've improved to the point that it takes about 4 or 5 uplinks to get a downlink through.
The Feather sends an uplink of a single byte every 10 minutes; the byte is a set of bit flags giving the status of the pump. For testing I can send uplinks at any time, so they are much more frequent.
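For reference, the status byte is just packed bit flags, along these lines (the bit assignments below are illustrative, not my real ones):

```cpp
#include <cstdint>

// Illustrative bit assignments for the one-byte status uplink
// (the real flags in my application differ).
enum PumpStatus : uint8_t {
    PUMP_RUNNING = 1u << 0,
    TANK_FULL    = 1u << 1,
    FAULT        = 1u << 2,
    LOW_BATTERY  = 1u << 3,
};

// Pack the current state into the single payload byte.
uint8_t encodeStatus(bool running, bool full, bool fault, bool lowBatt) {
    uint8_t b = 0;
    if (running) b |= PUMP_RUNNING;
    if (full)    b |= TANK_FULL;
    if (fault)   b |= FAULT;
    if (lowBatt) b |= LOW_BATTERY;
    return b;
}
```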
I expect the downlinks to be 2 messages every few days: turn the pump on, then off a few hours later when the tank is full. But that really means 8+ downlink attempts on that day, and could easily be 20+, given how many attempts it takes for one to get through.
Using the TTN console I was manually queueing downlinks after each uplink until one got through, but I have changed to using the confirmed-delivery flag to get this done automatically. Is this going to cause problems, given how many of them have to be sent?
If downlinks take 4 or 5 uplinks before they are received, then a command to switch the pump on or off could take 40+ minutes - and that is an open-ended interval, because depending on factors I don't understand it could take many more uplinks before the downlink gets through. This is poor but workable. It's not much of a user experience, though, if the customer asks the pump to turn on manually and nothing happens for hours.
Are downlinks just this bad, or is it some local problem I have? Could it be caused by sending too many uplinks while I’m testing?
If the TTN console says a downlink has been sent, does that mean the gateway will definitely try to transmit it to the node, or is the gateway free to drop it if it decides I'm using too much airtime? I'm wondering whether the node is having trouble reading the data from the gateway, whether the gateway is dropping the downlink, or whether mobile-network latency between TTN and the gateway means the downlink arrives too late for the node's RX window.
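In case it's relevant, this is how I pick up downlinks on the node, which I believe is the standard MCCI LMIC pattern (`handleDownlink` is my own hypothetical handler):

```cpp
// Inside the LMIC event callback; field names as in MCCI LMIC's lmic.h.
void onEvent(ev_t ev) {
    if (ev == EV_TXCOMPLETE) {
        if (LMIC.txrxFlags & TXRX_ACK) {
            // The network acknowledged a confirmed uplink.
        }
        if (LMIC.dataLen > 0) {
            // A downlink arrived in RX1 or RX2; payload starts at dataBeg.
            const uint8_t *payload = LMIC.frame + LMIC.dataBeg;
            handleDownlink(payload, LMIC.dataLen);  // hypothetical handler
        }
    }
}
```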
Is the problem caused by the Feather/LMIC combination? Uplinks are very reliable.
Is the long join time caused by the fact that downlinks are so bad? I'm assuming the join-accept (key exchange etc.) is itself a downlink.
Does a gateway using the mobile phone network suffer more latency than one connected to a Wi-Fi or Ethernet LAN? Does this latency eat into the 2-second downlink window?
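On the RX-window timing front, one mitigation I've seen suggested for this board/library combination - assuming the MCCI LMIC API I'm using - is to tell LMIC to allow for clock error so it opens the receive windows a little earlier and wider. I haven't yet confirmed whether it helps in my setup:

```cpp
// In setup(), after os_init() and LMIC_reset():
// allow up to 1% clock error so RX windows open early/wide enough.
// MAX_CLOCK_ERROR is defined in MCCI LMIC's lmic.h.
LMIC_setClockError(MAX_CLOCK_ERROR * 1 / 100);
```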
Is LoRaWAN just the wrong technology for control projects? If so, what is the use of downlinks?