RN2483 confirmed uplink not working

I have a RAK813 gateway and a node based on an RN2483 running V1.04 firmware. With the exception of the appropriate keys and an SF of 7, it is working with the factory-set parameters and a serial terminal to simplify diagnostics.

The node appears to be working: I can join via OTAA, send unconfirmed uplinks and receive downlinks. However, confirmed uplinks are not working. I get the ‘ok’ and ‘mac_tx_ok’ responses but not the expected ‘mac_rx 1’.

If I look at my TTN console, both the application and the gateway traffic pages show the ack downlink being transmitted. Confirmed uplinks do work with the same gateway using another node based on an SX1272.

I think this shows that the gateway is working correctly and that the RN2483 is capable of transmitting and receiving so I suspect that I need to change the settings somewhere.

(The node will eventually be deployed at a remote location and the ability to do the occasional confirmed uplink is a necessary part of ensuring continuous service. )

I’ve checked some past posts and this one in particular “Issue with confirmed messages” but so far nothing has worked. Any help or pointers would be gratefully received. Thank you.

I can understand why you expected this response, because the example in the Microchip documentation is confusing.

However, you will only see a mac_rx response when there is downlink data. If there is no downlink data scheduled, the mac_tx_ok response signals that the confirmation has been received. If no confirmation is received even after retrying the transmission, the response will be mac_err. Keep in mind it might take a while before the error appears, due to the retries.
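To make the above concrete, here is a small sketch of how host-side code might interpret the asynchronous response line that follows the initial ‘ok’ after a `mac tx cnf` command. The function name and return values are illustrative, not part of any Microchip API; the response strings themselves are the ones discussed above.

```python
# Sketch: interpreting the second (asynchronous) response line from a
# `mac tx cnf <port> <data>` command on the RN2483, after the module has
# already replied "ok". Helper name and outcomes are illustrative only.

def classify_mac_response(line: str) -> str:
    """Map the RN2483's asynchronous mac response to an outcome."""
    line = line.strip()
    if line.startswith("mac_rx"):
        # ACK arrived together with downlink data: "mac_rx <port> <hexdata>"
        return "confirmed_with_downlink"
    if line == "mac_tx_ok":
        # For a confirmed uplink this also means the ACK was received,
        # just with no downlink payload attached.
        return "confirmed"
    if line == "mac_err":
        # All retries exhausted without receiving an ACK.
        return "not_confirmed"
    return "unexpected"

assert classify_mac_response("mac_tx_ok") == "confirmed"
assert classify_mac_response("mac_rx 1 AABB") == "confirmed_with_downlink"
assert classify_mac_response("mac_err") == "not_confirmed"
```

The key point is that `mac_tx_ok` is a success for a confirmed uplink, not just for an unconfirmed one.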


Thank you very much for the very prompt response. As you say it’s very confusing having a mac_tx_ok response for both confirmed and unconfirmed uplinks. I have been stuck for the last couple of days and can now move forward.

I would recommend against using network level confirmed uplink. The way it is designed, only the first attempt which reaches a gateway is confirmed by the network. So if it’s actually the downlink packet that gets lost for whatever reason, then none of the retries will ever receive another confirmation. The network will ignore all traffic from the node until the uplink frame count increments, and retries do not increment the frame count, so they simply get ignored.
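A toy model of the frame-count behaviour described above, assuming the network accepts an uplink only when its frame count is higher than the last one it has seen (retransmissions reuse the frame count, so once the original uplink got through, every retry is dropped even though only the ACK was lost):

```python
# Toy model: why lost-ACK retries of a confirmed uplink get ignored.
# Assumption (per the explanation above): the network accepts an uplink
# only if its FCnt exceeds the last FCnt it has recorded for the device.

def network_accepts(last_fcnt_seen: int, uplink_fcnt: int) -> bool:
    return uplink_fcnt > last_fcnt_seen

last_seen = -1
# First attempt with FCnt 10 reaches the network; only its ACK is lost.
assert network_accepts(last_seen, 10)
last_seen = 10
# Every retry reuses FCnt 10, so the network silently drops all of them.
for _ in range(3):
    assert not network_accepts(last_seen, 10)
```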

Instead, I’d suggest you implement your own confirmations at application level. This has several benefits, first each application packet gets a unique frame count, so none will be ignored by the network stack. Next, you can chose the retry strategy that is the best compromise between success and battery and airtime consumption. And whatever data you might include in the retry could be fresher measurements, rather than increasingly stale old ones. But most important, by generating an application-level downlink ack in your data backend infrastructure rather than having the TTN network servers generate a network-level one, your ack confirms what you probably actually care about - that your data back end got the message. TTN’s servers getting the message and it then getting lost on the way to your data backend is interesting from a debugging perspective, but probably irrelevant from a user one, where what you probably want is a truly end-to-end confirmation.
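A minimal sketch of such an application-level scheme, assuming a hypothetical helper `send_uplink(payload)` that transmits an unconfirmed uplink and returns any downlink payload received in the RX windows (or `None`). The one-byte sequence-number framing and all names are illustrative; the point is that each attempt gets a fresh frame count and fresh sensor data:

```python
# Sketch of application-level confirmation: tag each uplink with a
# sequence byte, have the backend echo it in a downlink, and retry with
# *fresh* measurements until acked. `send_uplink` is a hypothetical
# helper wrapping the radio; all names here are illustrative.

import itertools

_seq = itertools.count()

def send_with_app_ack(read_sensor, send_uplink, max_attempts=3):
    """Retry with fresh measurements until the backend echoes our seq byte."""
    for _ in range(max_attempts):
        seq = next(_seq) & 0xFF
        payload = bytes([seq]) + read_sensor()  # new FCnt + fresh data each try
        downlink = send_uplink(payload)
        if downlink and downlink[0] == seq:     # backend acked this attempt
            return True
        # else: back off / reschedule within your duty-cycle budget
    return False
```

The retry strategy (number of attempts, spacing) then becomes an application decision, tuned against battery and airtime budgets.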


Thank you very much. I hadn’t appreciated the first-attempt nuance. Fortunately, when these nodes are eventually deployed they will be monitoring water and pollution levels in our local river system every twenty minutes or so. Missing a data slot is perfectly recoverable.

I like the application-level solution you outline, and sending a scheduled downlink will also provide an RSSI to monitor the health of the link.

On a more general point, I’m finding it quite difficult to find descriptions of best practices for the actual implementation of a practical node. Although the available libraries are very welcome starting points, the ones I have found so far only provide basic functionality.

In terms of general best practice, so far I am relying on these



and I’m dipping into some of the academic literature, but if you know of any other helpful (TTN-approved) sites I’d be very grateful for the links.

That is a fair point. There is a lot of advice available on the forum but it is not concentrated in a few messages.

Most forum users come to ask questions, gain knowledge and then don’t take a moment to contribute back to the community. There are some blogs out there where people write about their experiences; however, the advice in them should be carefully considered, as both knowledgeable people and rookies write them, and the advice dished out by the latter category is questionable to say the least. (That something works does not mean it is implemented as it should be and will stay working.)

At least for forum contributions there will be many eyes looking at what has been written, and if it is questionable there are very likely going to be responses challenging the statements. :-)

Feel free to ask questions, but at least I would be grateful if you could document the best practices you find/get pointed to and create a nice write-up when you think it’s worthwhile doing so. I can guarantee others will enjoy it as well, even if they don’t express it.

Somewhat counter to my previous recommendation against literal confirmation, it’s worth taking some time to understand the ADR mechanism and seeing if you can use it. Granted, if you are sending a downlink anyway, including a signal report could cost only a byte (really, even just 2-3 bits would be informative), which is small compared to the downlink headers. But ADR can end up implying some things about connectivity and signal strength.

The real question might be whether you find the time constants in the RN2483 firmware’s ADR implementation appropriate, i.e., how long it waits without hearing from the network before it decides on its own to increase effort (lower data rate, more power).
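For reference, the device-side backoff in the LoRaWAN 1.0.x specification works roughly as sketched below, with regional defaults of ADR_ACK_LIMIT = 64 and ADR_ACK_DELAY = 32. Whether the RN2483 firmware uses exactly these constants is an assumption worth verifying against Microchip’s documentation.

```python
# Sketch of the device-side ADR backoff counters per LoRaWAN 1.0.x
# (defaults ADR_ACK_LIMIT=64, ADR_ACK_DELAY=32). Whether the RN2483
# firmware follows these exact constants is an assumption to verify.

ADR_ACK_LIMIT = 64
ADR_ACK_DELAY = 32

def adr_state(uplinks_without_downlink: int) -> str:
    """What the node should be doing after this many 'silent' uplinks."""
    n = uplinks_without_downlink
    if n < ADR_ACK_LIMIT:
        return "normal"
    if n < ADR_ACK_LIMIT + ADR_ACK_DELAY:
        return "set ADRACKReq"  # request any downlink from the network
    # After that, step down one data rate / raise power every ADR_ACK_DELAY
    steps = 1 + (n - ADR_ACK_LIMIT - ADR_ACK_DELAY) // ADR_ACK_DELAY
    return f"fallback step {steps}"

assert adr_state(10) == "normal"
assert adr_state(64) == "set ADRACKReq"
assert adr_state(96) == "fallback step 1"
```

At one uplink per slow periodic interval, those counter values translate into many hours before the node reacts, which is worth checking against your alerting requirements.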

Thank you. I can certainly try that and I’ll let you know if I get a reliable figure. First I have to upgrade the firmware to V1.05 so that I can also monitor the RSSI. I shall also be interested in whether it reacts to a deteriorating signal or just relies on not hearing from the network a certain number of times.

I’m still trawling the literature and came across

which seems to be addressing a similar problem, although if I understand you correctly it would be much better not to send a confirmed uplink.

The ADR might be a life saver if it operates sensibly. If there is a wait of a multiple of 64 tries before corrective action is taken then, in my case with uplinks every 15 minutes, this would mean a loss of at least 16 hours of data, which in a flood/pollution alert system could be significant. However, I’ll get to grips with the ADR system first before worrying about that.

So far I haven’t been able to find any Microchip documentation on the ADR algorithm they use, but there may yet be a paper somewhere that alludes to it.

(In my early days we would have referred to a LoRa-like protocol as ‘shout and hope’ which one of my early managers said was akin to sending up flares on the Titanic. We liked our acks.)

Thank you. I’d never thought of myself as an author but I’m collecting a lot of notes that relate to my problem so, if I do get to the bottom of it, then I’ll tidy them up and post them in the hope that they will save somebody else the same journey.


For best practices you might want to check the TTN YouTube channel as well. At last year’s TTN conference there was a talk by Nicolas Sornin (the inventor of LoRa) where he mentioned best practices.