Missing Packets on Application Data

I have the same issue as in that topic, but I have activated the storage, and after refreshing the page I do see all the data there. Even in “live” mode the packets are not shown, and they are not forwarded to my Node-RED either; they only appear in the storage of the TTN backend.
I have the TTN Indoor Gateway and I can see the packets are received correctly. I use ABP activation.

Well… so, is this something that TTN should fix, or what? I’ll try the storage and see if I have all the data there. Thank you all for the replies; I will keep the topic updated.

I think it is. It looks like one node in an internal cluster could be eating messages…
FYI, as indicated on Dataloss in the backend?, what we did to “solve” the issue is to add a Data Storage integration to the TTN application and use the REST API on that storage instead of the MQTT interface.
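For anyone wanting to try this, a minimal sketch of such a query in Python could look like the following; the application ID and access key are placeholders, and the endpoint follows the v2 Data Storage API:

```python
# Query the Data Storage integration over REST instead of subscribing to MQTT.
import requests  # third-party: pip install requests

APP_ID = "my-app-id"                # placeholder application ID
ACCESS_KEY = "ttn-account-v2.xxxx"  # placeholder access key

resp = requests.get(
    f"https://{APP_ID}.data.thethingsnetwork.org/api/v2/query",
    headers={"Authorization": f"key {ACCESS_KEY}"},
    params={"last": "1h"},  # everything stored during the last hour
)
resp.raise_for_status()
for record in resp.json() or []:
    print(record)
```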

Cool. Just tried to integrate the Data Storage and I have all packets there. Let’s see if they solve this.
Thank you!!

You are right: I enabled the Data Storage integration and the missing data gets stored correctly.

But how would you fetch the data in an application server? The MQTT API makes this very easy by subscribing to the correct topic. How can this be achieved using Data Storage? Should I poll the server via the web API once in a while? That does not seem like a very nice solution to me. I hope this can be solved soon.
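For comparison, the MQTT subscription referred to here is roughly the following (a sketch using the paho-mqtt 1.x API; the region host, application ID and access key are placeholders):

```python
# Subscribe to all uplinks of an application over the TTN MQTT API.
import paho.mqtt.client as mqtt  # third-party: pip install "paho-mqtt<2"

APP_ID = "my-app-id"                # placeholder application ID
ACCESS_KEY = "ttn-account-v2.xxxx"  # placeholder access key

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.username_pw_set(APP_ID, ACCESS_KEY)
client.on_message = on_message
client.connect("eu.thethings.network", 1883)  # pick your region's handler
client.subscribe(f"{APP_ID}/devices/+/up")    # uplinks of all devices
client.loop_forever()
```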

Sounds nice! I just did a quick test with Postman, but only got an authentication error. Any hints?

Yes, we use InfluxDB / Telegraf and have replaced the MQTT client with a scheduled REST request that queries the history (with some overlap…) every few minutes. It’s no coincidence I put the word “solved” in quotes :wink:
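Something like the following sketch, assuming the v2 Data Storage API; the overlap between the query window and the polling interval covers messages that arrive around a poll, and the (device, time) set de-duplicates that overlap:

```python
# Poll the Data Storage API every few minutes, fetching a slightly larger
# window than the polling interval and de-duplicating the overlap.
import time
import requests  # third-party: pip install requests

APP_ID = "my-app-id"                # placeholder application ID
ACCESS_KEY = "ttn-account-v2.xxxx"  # placeholder access key
INTERVAL = 300                      # poll every 5 minutes...
WINDOW = "10m"                      # ...but query 10 minutes of history

seen = set()
while True:
    resp = requests.get(
        f"https://{APP_ID}.data.thethingsnetwork.org/api/v2/query",
        headers={"Authorization": f"key {ACCESS_KEY}"},
        params={"last": WINDOW},
    )
    resp.raise_for_status()
    for record in resp.json() or []:
        key = (record.get("device_id"), record.get("time"))
        if key not in seen:  # skip records seen in the previous poll
            seen.add(key)
            print(record)    # e.g. hand off to Telegraf / InfluxDB here
    time.sleep(INTERVAL)
```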

You can find an authentication key in the application overview tab.
Then you’ll have to add it to your request as an Authorization header. Something like the following, IIRC:

Authorization: key ttn-account-v2.blablabla-blabla

@DeltaQ yes, I also think scheduled REST requests are not completely nice. But I would have to do this every 200 ms, because I want to catch the motion events of a PIR sensor; here it is important to get each event in near real time.

LoRaWAN and near real time? You know there is no guarantee any transmission makes it to the network? Let alone all packets (events) sent by a node.

@kersing yes, I know about that issue of transmission between node and gateway. But I think the backend should forward every packet the gateway receives via Node-RED or MQTT, and not lose it!

Keep in mind the Semtech-based forwarders use UDP, which will result in some loss between gateway and back-end.
For packets received by the back-end we all expect them to be delivered to the application; however, experience teaches this will not always be the case, and because the community network does not have any service levels, we can only accept the service we are getting.
I hope the V3 back-end, when finally deployed to the community network, will result in fewer issues for all of us.
In the meantime you could check the TTI offerings if you need reliability…


We have monitored packet loss in TTN very thoroughly recently (see dataloss-in-the-backend), and the backend currently eats about 25% of our packets.

Regarding the original SPF (Semtech packet forwarder), things are really a bit odd. Semtech has implemented an uplink acknowledgement, so the gateway actually knows whether a packet was delivered or not; it just does not use it. Kerlink has implemented a new forwarder they call the common packet forwarder (CPF), which can retransmit unacknowledged packets. That helps if a single packet was lost over UDP.
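To illustrate: every PUSH_DATA datagram in the Semtech UDP protocol carries a two-byte token that the server echoes back in a PUSH_ACK, so a forwarder can retransmit when no acknowledgement arrives. A minimal sketch of that exchange (the router address and gateway EUI below are placeholders):

```python
# One uplink over the Semtech UDP protocol, retransmitted while no PUSH_ACK
# arrives -- roughly the behaviour the CPF adds on top of the original SPF.
import json
import os
import socket

ROUTER = ("router.example.com", 1700)            # placeholder router address
GATEWAY_EUI = bytes.fromhex("AA555A0000000000")  # placeholder gateway EUI

def push_data(payload: dict, retries: int = 3, timeout: float = 2.0) -> bool:
    token = os.urandom(2)  # random token, echoed back in the PUSH_ACK
    # PUSH_DATA: version 0x02, token, identifier 0x00, gateway EUI, JSON body
    datagram = b"\x02" + token + b"\x00" + GATEWAY_EUI + json.dumps(payload).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(datagram, ROUTER)
            try:
                ack, _ = sock.recvfrom(1024)
            except socket.timeout:
                continue  # datagram (or its ack) lost over UDP: retransmit
            # PUSH_ACK: version, the same token, identifier 0x01
            if len(ack) >= 4 and ack[1:3] == token and ack[3] == 0x01:
                return True
    return False
```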

We found that loss rates are still high. If a packet is lost between gateway and backend, it is not possible to distinguish whether the loss was caused by UDP or whether the backend was just not listening for some time. So it may be a UDP issue, but it may also be caused by some other part of the system.

This seems to be a different case, because the data is available in the Data Storage integration. So the back-end has received and processed the data, it just has not forwarded it to MQTT.

Precisely what we had: packets seen in the gateway traffic, but not in the device traffic.

Same problem here. It started on Monday morning and is still present.

Of course, as this is a community service we cannot demand anything, but the MQTT forwarding system is not usable at the moment; the data loss rate is simply too high. I don’t know if we can help solve this in any way.
Yesterday I ran some tests using the HTTP integration, and that forwarding system seems to work properly at the moment. I suggest everyone switch to it for now, as a better solution than simply polling the TTN Data Storage.
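If anyone wants to try the HTTP integration, the receiving end can be as small as this sketch; the field names (dev_id, counter, payload_raw) follow the v2 uplink format, so adjust them to whatever your integration actually sends:

```python
# Minimal endpoint for the TTN HTTP integration: accept one JSON uplink per
# POST, decode the raw payload and acknowledge with a 200.
import base64
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class UplinkHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        uplink = json.loads(self.rfile.read(length))
        payload = base64.b64decode(uplink.get("payload_raw", ""))
        print(uplink.get("dev_id"), uplink.get("counter"), payload.hex())
        self.send_response(200)  # acknowledge receipt
        self.end_headers()

if __name__ == "__main__":
    # Point the HTTP integration's URL at this host and port.
    HTTPServer(("0.0.0.0", 8080), UplinkHandler).serve_forever()
```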

Have you checked the status page? If nothing is listed, please use the ops channel on Slack to notify TTN.

Quick note for people not on Slack: This seems to be resolved. It was caused by a bug (?) regarding uplink MQTT packets. See #ops on Slack for details.

Yes! I was about to post that. I have been doing some testing over the past couple of days and all packets were shown in the application data. Thanks, all.