Thingspeak stops receiving data from 1 of 3 nodes suddenly, data from other 2 continues

I hope someone can help me resolve this problem.

I have 3 end devices (nodes) in my TTN application and I am using a Webhooks integration to send the data to a Thingspeak channel. It has been working fine for many months, but a few days ago the data from one device began to arrive erratically and then stopped altogether. Data from the other 2 continues to be received fine. The Webhook status is “healthy”. I tried deleting and re-creating it - made no difference. If I look at the end device Live Data, I can see the new payload uplink every 15 minutes, so the device is transmitting OK, but there are also “fail to send webhook” messages. If I open those I don’t see any clues that I can understand. The first line is "name": "".

I think this is unrelated, but my LPS8 gateway had a power supply failure around the same time, but the other 2 nodes did not seem to be affected - their data kept coming in. I’ve since got it running again. (I thought my gateway was the only one around here, but there must be others or else I would not have received anything into Thingspeak).

Any suggestions as to where to look are appreciated.

What are the RSSI, SF & SNR of the nodes?

If the data arrives at the TTN console OK, the RSSI, SF and SNR shouldn’t matter.

Do you have an uplink decoder on TTN? Does the payload of that node look identical to that of the other nodes? It may be the case that one of the sensors is reporting invalid values and therefore breaks the payload formatting, which Thingspeak in turn rejects.
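To illustrate the point about one bad sensor reading breaking the forwarded payload: TTN’s actual uplink decoders are written in JavaScript, but the validation idea can be sketched in Python. The function name and the temperature limits here are assumptions for illustration only, not part of TTN or Thingspeak.

```python
import math

def validate_fields(decoded, limits=(-55.0, 125.0)):
    """Keep only the field1..field8 entries that hold sane, finite numbers.

    A single NaN or out-of-range reading from one sensor can make the
    whole forwarded update unusable even though the uplink itself
    arrives at the console fine.
    """
    lo, hi = limits
    ok = {}
    for key, value in decoded.items():
        if not key.startswith("field"):
            continue  # Thingspeak only consumes field1..field8
        if isinstance(value, (int, float)) and math.isfinite(value) and lo <= value <= hi:
            ok[key] = value
    return ok
```

For example, `validate_fields({"field3": 28.1, "field4": float("nan")})` would drop the broken `field4` and keep `field3`.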

For the node that is not getting through to Thingspeak:
rssi = -96, channel rssi = -96
snr = 5.5
sf (spreading factor?) = 10
But I assume that if it’s displaying all this in the TTN console, then that’s not the issue?

Yes, there is a decoder and it seems to be working - the values are reasonable.
Here is an extract from an uplink:

"received_at": "2023-12-12T09:38:34.758738418Z",
    "uplink_message": {
      "session_key_id": "AYmKagVuyUb7PPmWFj/oXw==",
      "f_port": 2,
      "f_cnt": 13622,
      "frm_payload": "DloBGQA9DAEOARY=",
      "decoded_payload": {
        "ADC_CH0V": 0.061,
        "BatV": 3.674,
        "Digital_IStatus": "L",
        "Door_status": "OPEN",
        "EXTI_Trigger": "FALSE",
        "TempC1": 28.1,
        "TempC2": 27,
        "TempC3": 27.8,
        "Work_mode": "3DS18B20",
        "field3": 28.1,
        "field4": 27,
        "field5": 27.8

You may want to check some of the questions listed in the FAQ on Thingspeak to make sure that you are not exceeding any plan/storage limits on that side before further troubleshooting the device on TTN.

Thanks - yes, I am well within limits there, as far as I can tell. I’m only sending data every 5 min on one node and every 15 min on the others. I just noticed that some of the “fail to send webhook” messages have some additional information - “too many requests”. What does that mean?

"details": [
        "@type": "",
        "value": {
          "body": "error_too_many_requests"

What is the status code?

"status_code": 429,

If it is 429, it means you are posting too frequently.
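For background: HTTP 429 (“Too Many Requests”) is the server telling the client to slow down, and the usual client-side response is to back off exponentially before retrying. A minimal sketch of that idea (the helper name and constants are assumptions, not part of TTN’s or Thingspeak’s retry logic):

```python
def backoff_delay(attempt, base=15.0, cap=900.0):
    """Seconds to wait before retry number `attempt` (0-based).

    Starts at 15 s - roughly Thingspeak's free-tier minimum update
    interval - doubles each attempt, and caps at 15 minutes so a
    long outage doesn't produce absurd waits.
    """
    return min(base * (2 ** attempt), cap)
```

So the first retry would wait 15 s, the next 30 s, then 60 s, and so on up to the 900 s cap.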

Certainly worth checking the limits, but 5 over 15 minutes seems a bit tight. And it is rather odd that it only affects one node, but worth @Sgrobler clarifying whether the node that has issues is the one sending every 5 minutes.

If Thingspeak is rate limiting at the TTN level, maybe someone is hammering away at the integration. If changing the rate on the 5 minute node doesn’t help, we can ask TTI to check activity.

Thanks - yes, it’s code 429.
The node that has issues is sending every 15 min, the other two are at 5 min and 15 min respectively.
All 3 have been operating fine at those rates for months. I will try to slow the rate on the problem node…

How many seconds apart are the messages from the two that are sending every 15 min?

It seems they rate limit the free version at one measurement every 15 sec, so if those two are synchronized and their messages arrive within 15 sec of each other, you will have a problem.
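One way to check is to compare the `received_at` timestamps of consecutive uplinks in the TTN console. A small sketch of that comparison, using the RFC 3339 timestamp format TTN emits (the nanosecond fraction has to be trimmed, since Python’s `fromisoformat` only accepts up to microseconds):

```python
from datetime import datetime

def seconds_apart(ts_a, ts_b):
    """Absolute gap in seconds between two TTN 'received_at' timestamps."""
    def parse(ts):
        # Normalize: drop the trailing 'Z', keep at most 6 fractional
        # digits (TTN emits 9), and re-attach an explicit UTC offset.
        head, _, frac = ts.rstrip("Z").partition(".")
        return datetime.fromisoformat(head + "." + frac[:6].ljust(6, "0") + "+00:00")
    return abs((parse(ts_a) - parse(ts_b)).total_seconds())
```

If the result for the two 15-minute nodes is under 15, their updates fall inside the same free-tier window and one of them would be rejected.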


I decided to use the “IT Crowd” recommendation - “have you tried turning it off and on again?”. I sent a downlink reboot instruction to the node and the problem cleared. Back up and running, and the webhook error messages have disappeared. Probably should have tried the reboot before posting to the forum…
Thanks for all the help offered.
