Bad connection between gateway and The Things Stack

Hello!

I’m new here. If any more information is needed, please let me know.

I have a Laird RG191 gateway connected to TTN V2, but from some research I saw that I can send messages to TTN V3 without upgrading.
However, some of the downlinks that are automatically sent for every uplink received are lost and never reach the end device.

Could this problem be due to the gateway being connected to TTN v2?

I set the downlink timeout to 5000 ms in the end-device’s configuration.
It appears that the downlink timing changes on some messages.

Is the communication between the gateway and the server not correct?

This is referred to as a confirmed message and should be used only sparingly. The TTN FUP allows a maximum of 10 downlinks per day… and that includes not just confirmations but the join process etc. Even then they should be avoided if possible: every downlink renders the gateway deaf to all uplinks from all other users for its duration… bad! What device are you using? Why confirmed? (I hope, given your (good!) choice of gateway, it is not a Laird RS1xx running version 1 of the Laird payload format, which indeed uses confirmed uplinks. You will need to switch to version 2, which has an option for unconfirmed messages; even then 1 in 10 is still confirmed IIRC, limiting the number of messages per day. Or select the Cayenne format.)

I want each uplink to trigger a downlink so that I can estimate packet loss for a university project.
But I didn’t change anything on either the gateway or the server to make it send a downlink for each uplink; it happened automatically when I registered the end device.

I didn’t quite understand the part about changing the downlink payload format to Cayenne.

A number of devices available off the shelf do that. On many LoRaWAN networks, including TTN, this breaks the FUP, so such devices then need to be reconfigured to bring them within the terms: either limit them to a maximum of 10 uplinks per day (in the case of TTN) to bring the downlink count down, or turn the confirmed option off, or reduce it so that only a few uplinks are confirmed. One device with a reputation for this out-of-the-box behaviour is the Laird RS1xx mentioned above; many users of Laird GW’s also choose to try the Laird sensors (they are both good and I use many myself!), but you have to ensure the config changes above are implemented. Can you confirm what device (node) you are using?

A noble use case I’m sure, and one raised by many users - esp in academia - over the years, but please read what I said (and use the forum search): you can’t do that and stay within the TTN rules with any more than 10 Tx per day! There are better options, such as looking at the local GW logs, which will show you the PL for the node - GW link directly: simply look for missing packets/F-counts in the sequence (see the sketch below). This can also be done through the TTN Console by looking at the GW traffic page and again looking for missing counts - though depending on the GW to NS comms you may see a tiny increase in loss due to the GW-NS internet link potentially dropping packets. Looking at conf DL you have two potential RF loss paths of course, so was the packet lost on the way from node to GW, or from GW back to node?
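For illustration, counting gaps in the frame counter is just arithmetic. A minimal sketch, assuming you’ve copied the FCnt values for one node out of the GW log into an array (the names and values here are made up):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Estimate uplink loss on the node-to-GW path from the FCnt sequence seen
// at the gateway. Assumes the counter did not reset (i.e. no rejoin)
// during the observation window.
double estimateUplinkLoss(const std::vector<uint32_t>& fcnts) {
    if (fcnts.size() < 2) return 0.0;
    uint32_t expected = fcnts.back() - fcnts.front() + 1; // frames the node sent
    uint32_t received = fcnts.size();                     // frames the GW heard
    return 1.0 - static_cast<double>(received) / expected;
}

int main() {
    // Hypothetical FCnt values: 3 and 6 are missing, so 2 of 7 frames were lost.
    std::vector<uint32_t> fcnts = {1, 2, 4, 5, 7};
    std::printf("Estimated loss: %.1f%%\n", estimateUplinkLoss(fcnts) * 100.0);
}
```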

Confirmed packets - don’t do it!

Can you explain how that will give you the information you need to estimate packet loss?

I’m using an Arduino Nano with the LoRa RF96 module.

The end device sends an uplink message every minute.
So is it this sending frequency that is affecting the packet loss?
Perhaps a solution is to develop an application so that a downlink is only sent after every so many messages received, thus reducing the loss?

It looks like that’s when the gateway sends the message to the end-device.

One point I still can’t understand is how to remove the confirmation messages.
Do I need to go to the TTS Console and disable them somewhere?

I’m counting every packet received (downlink) on the end device, and I send a counter of lost messages as an uplink. I’m using the Cayenne payload format for this. With that, I can see how many were lost. I don’t know if I explained it correctly; if you have any doubts, ask me again.
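In case it helps, the encoding side looks roughly like this. A simplified fragment, assuming the ElectronicCats CayenneLPP Arduino library; `encodeLossCounter` and the channel number are just my own illustrative choices:

```cpp
#include <CayenneLPP.h>  // ElectronicCats CayenneLPP library (assumed)

CayenneLPP lpp(51);                 // working buffer for the encoded payload
static uint16_t lostDownlinks = 0;  // bumped when an expected downlink is missed

// Encode the lost-downlink counter; the Cayenne "luminosity" type is a plain
// unsigned 16-bit field, a convenient carrier for a raw counter. The encoded
// bytes (lpp.getBuffer(), lpp.getSize()) then go out as the uplink payload.
void encodeLossCounter() {
    lpp.reset();
    lpp.addLuminosity(1, lostDownlinks);  // channel 1 carries the counter
}
```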

It’s a bit hard to tell what you are actually trying to analyse here. What is the premise of your research?

Gateways usually have a good antenna and stable power, and we don’t rely on downlinks, so this could be literally very academic. We are more exercised by lost uplinks - and the best way to lose an uplink is to have a gateway doing a downlink, as it can’t hear anything on any of its typical 8 channels whilst transmitting - something somewhat ironic going on here.

I can count the uplinks using the f_cnt value that comes with every uplink free of charge - if there are any gaps, I can see that straight away.

Downlinks are 10 per device per day - we strive for one a fortnight, if that.

The intention of the research is to analyze whether or not the device loses packets, both downlink and uplink.

In fact, what matters most is the uplink messages, but what I wanted to analyze is whether or not the device receives the downlinks.

About the confirmation messages that are sent for each uplink: how can I disable them? That way I reduce network usage, and I can try to build an application that sends a maximum of 10 downlinks per day. But I can’t find where to disable these messages.

I decreased the uplink sending frequency, which in turn reduced the frequency of the downlink confirmation packets; after that, the downlink packet loss rate seems to have decreased considerably.

Would you like to know how I can limit the automatic sending of confirmation downlinks?

Yeah, sure, I’m intrigued, do tell.


I’m sorry, I think I didn’t express myself well in the question - I also don’t know how to stop the automatic downlink sending. I looked in the gateway settings, but I couldn’t find a way to remove it.
There are some pre-set options, but if I remove any of them, the gateway stops working completely.

Unfortunately, you’ve been subject to some replies that were more condescending than technically helpful.

Downlink confirmations are sent if and only if the uplink traffic is sent in confirmed mode - a header bit typically set or not based on the data transmit function you call, or an option passed to it. So the way to avoid triggering confirmation downlinks is to send your uplink traffic in unconfirmed mode.
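For example, with MCCI arduino-lmic that header bit is simply the last argument of `LMIC_setTxData2()`. A minimal fragment (the port number and payload bytes here are placeholders):

```cpp
#include <lmic.h>

static uint8_t payload[] = { 0x01, 0x02 };

void queueUplink() {
    // Last argument is the confirmed flag: 0 = unconfirmed uplink, so the
    // network no longer has to answer every uplink with an ACK downlink.
    LMIC_setTxData2(/*port*/ 1, payload, sizeof(payload), /*confirmed*/ 0);
}
```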

However, modern full implementations of LoRaWAN like TTN v3 are likely to send some configuration MAC commands in downlinks to any new node, and to keep sending them until they receive a satisfactory response. You mentioned using an Arduino Nano, and if that also means you’re using an old, incomplete, deprecated LoRaWAN implementation like an obsolete version of LMiC, it’s quite likely that your node is not going to respond correctly, and the downlinks will continue, rapidly chewing up your usage allowance.

To implement a node properly, you absolutely must use a current, spec compliant LoRaWAN stack, such as a recent checkout of MCCI LMiC. This can be challenging to compile to fit on a smaller Arduino, but there’s really no option but to do it right with such a full LoRaWAN implementation. If you can’t get it to fit in your existing processor board, you’ll need to get a more capable one.
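As a pointer, with MCCI LMiC the region, radio, and optional feature trimming all live in the library’s `project_config/lmic_project_config.h`. A sketch assuming EU868 and an SX1276-family radio (the RFM9x/RF96 modules are SX1276 derivatives); pick the region you actually operate in:

```cpp
// project_config/lmic_project_config.h  (inside the MCCI arduino-lmic library)
#define CFG_eu868 1          // region: replace with CFG_us915 etc. as needed
#define CFG_sx1276_radio 1   // SX1276-family radio (RFM95/96/...)

// Class B features can be compiled out to help it fit on small Arduinos:
#define DISABLE_PING
#define DISABLE_BEACONS
```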

In terms of reliability studies, as already mentioned, you should look for gaps in the uplink frame counts. If you need the node’s view of affairs, give it meaningful serial console output (be careful not to generate serial output within the tight timing of the TX-RX windows) and collect those logs on something for later analysis.
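As one sketch of that, logging from LMiC’s event callback keeps the serial traffic clear of the RX1/RX2 windows: by `EV_TXCOMPLETE` both receive windows are over. This assumes MCCI arduino-lmic’s default `onEvent()` hook:

```cpp
#include <lmic.h>

void onEvent(ev_t ev) {
    if (ev == EV_TXCOMPLETE) {
        // Safe to spend time on slow Serial output here.
        Serial.print(F("FCntUp="));
        Serial.println(LMIC.seqnoUp);       // uplink frame counter
        if (LMIC.dataLen > 0) {
            Serial.print(F("downlink bytes="));
            Serial.println(LMIC.dataLen);   // a downlink arrived in RX1/RX2
        }
    }
}
```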