My application's downlink is always queued for next uplink

I did some testing but never managed to get an application’s response message into one of the node’s receive windows. Instead, it always seems to be queued for the next uplink message…

So, should the following be feasible in the real world?

  1. Class A node sends uplink message.
  2. Application server, which has subscribed to the TTN back end using MQTT, receives the uplink message notification and immediately prepares a response message.
  3. Application server publishes the response message to TTN using MQTT within 1 second of receiving the uplink notification.
  4. Response message is transmitted as downlink message to the node in one of the two receive windows of the uplink message.

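The flow above can be sketched in a few lines of Python. This is only a sketch: the topic layout `<app_id>/devices/<dev_id>/down` and the `payload_raw`/`port` JSON fields are assumptions based on the TTN (v2-style) MQTT Data API, the application/device IDs are made up, and the actual broker publish (e.g. with an MQTT client library such as paho-mqtt) is left out so the example stays self-contained.

```python
import base64
import json

# Hypothetical application and device identifiers (for illustration only).
APP_ID = "my-app"
DEV_ID = "my-node"

def build_downlink(app_id: str, dev_id: str, payload: bytes, port: int = 1):
    """Build the MQTT topic and JSON body for a downlink message.

    Topic layout and field names assume the TTN v2 MQTT Data API.
    """
    topic = f"{app_id}/devices/{dev_id}/down"
    body = json.dumps({
        "port": port,
        # Raw payload bytes must be base64-encoded in the JSON body.
        "payload_raw": base64.b64encode(payload).decode("ascii"),
    })
    return topic, body

def on_uplink(message: dict):
    """Steps 2-3: react to the uplink notification and prepare the
    response immediately, before doing anything slow."""
    response = bytes([0x01])  # application-specific response payload
    return build_downlink(APP_ID, DEV_ID, response)

topic, body = on_uplink({"dev_id": DEV_ID})
print(topic)  # my-app/devices/my-node/down
print(body)
```

In a real handler, `topic` and `body` would be passed straight to the MQTT client's publish call; the point is that everything needed for the downlink is computed synchronously in the uplink callback.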
Yes, this is definitely possible; however, step 3 must happen immediately. If the application is not fast enough, the downlink is scheduled for the next opportunity.

How about increasing the RX1 delay to 2 or 3 seconds, to allow for gateways with slower internet connections and to give applications a decent amount of processing time as well?
Allowing this to be set on a node-by-node basis, for backward compatibility with already deployed nodes, would be the ultimate solution.
@htdvisser should I create an issue?


We are indeed planning to add a 5 (/6) second rx window in addition to the existing 1 (/2) second window.


That would be nice!


I’m wondering if this is still the plan? I’m working on a project where I would like to send a downlink IF the uplink contains something specific (triggered e.g. by an HTTP request)… Is this considered possible? I read somewhere else that if the downlink is received later than 100 ms after the uplink, it will be scheduled for the next transmission:

Is this still the case?

I am quite new to a lot of this, so I don’t know whether that makes it impossible to schedule a downlink in response to an incoming uplink?


And for future reference, this “next opportunity” refers to the next uplink (not to RX2 when the application is too late for RX1):

For future reference: even if it works at some point, it might still fail some time later. This also implies that trying to replace a scheduled downlink might fail: the already scheduled downlink is sent off to the gateway almost immediately when an uplink is being handled, and the replacement downlink is only sent after the next uplink.

It seems some integrations run from a data center in a different region, probably introducing additional network latency? See HTTP Integration pushes from European Azure server for application on ttn-handler-us-west?

If true, then using the MQTT Data API from one’s own region (and maybe not even using a decoder in the TTN Console) might give one the best chance of having a downlink transmitted in a receive window of the current uplink?

(While also scheduling the downlink as soon as possible, like by postponing any time consuming file, logging or database operations until after scheduling the downlink, if possible.)

Hi, doesn’t the MQTT Data API run from the same Azure-hosted VMs in Ireland, just like the HTTP endpoint? I am just trying to wrap my head around how the MQTT Data API could achieve better latency than HTTP calls? Thanks

An MQTT connection is persistent. For an HTTPS request, there must first be the three-way TCP handshake and then the TLS handshake before the request can be sent.

Since MQTT performs the TCP and TLS handshakes only once, when establishing the connection, the connection is immediately available for sending data as soon as data comes in from the device.
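The difference can be expressed in network round trips. The figures below are a back-of-the-envelope sketch assuming a full TLS 1.2 handshake (TLS 1.3 or session resumption needs fewer round trips) and ignoring DNS lookups; the 30 ms round-trip time is an arbitrary example value:

```python
# Round trips needed before application data flows.
TCP_HANDSHAKE = 1  # SYN / SYN-ACK / ACK
TLS_HANDSHAKE = 2  # ClientHello ... Finished (full TLS 1.2 handshake)
REQUEST = 1        # the actual request/response (or MQTT PUBLISH)

# A fresh HTTPS call pays for every handshake each time:
https_cold = TCP_HANDSHAKE + TLS_HANDSHAKE + REQUEST

# A persistent MQTT connection paid the handshakes once, at connect
# time, so a publish on the open connection is a single round trip:
mqtt_established = REQUEST

rtt_ms = 30  # example round-trip time to the broker/endpoint
print(f"cold HTTPS: {https_cold} round trips, ~{https_cold * rtt_ms} ms")
print(f"open MQTT:  {mqtt_established} round trip, ~{mqtt_established * rtt_ms} ms")
```

With these assumptions the cold HTTPS request spends four round trips where the established MQTT connection spends one, which matters when the downlink has to be scheduled within about a second of the uplink.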

Cloudflare has a good description:

TLS handshake