Data Storage not showing decoded data and only storing data twice a day

I activated Data Storage Integration for my Application with two devices.
Both devices are sending and data is correctly decoded and used with a HTTP Integration successfully.
One device fires every minute, the other every 30 seconds.
The storage seems to store only two sets a day and does not decode the data.
This puzzles me, because I could not find any documented restriction on the storage frequency. And two records a day is not sufficient, since one device requires 3 cycles to communicate all its data.
This is what I get when asking for the last day time period:

[
  {
    "device_id": "iv_home_room_sensor",
    "raw": "A2cA/wRodAD/ASg=",
    "time": "2020-08-18T02:04:43.58415383Z"
  },
  {
    "device_id": "iv_home_room_sensor",
    "raw": "A2cA9ARobQD/ASc=",
    "time": "2020-08-18T11:08:43.399893614Z"
  },
  {
    "device_id": "promag400_test1",
    "raw": "MU9LIjYwMDgxODUuNTAwMCJsIjYxMDM2NjQuMDAwMCJs",
    "time": "2020-08-18T15:08:20.749709696Z"
  }
]

Whereas the decoded format should look like this for the “iv_home_room_sensor”:

{
  "acceleration_x": null,
  "acceleration_y": null,
  "acceleration_z": null,
  "activity": null,
  "activity_count": null,
  "battery_voltage": 2.94,
  "break_in": null,
  "bytes": "A2cA/gRoeQD/ASY=",
  "decode_data_hex": "0x03,0x67,0x00,0xfe,0x04,0x68,0x79,0x00,0xff,0x01,0x26",
  "external_input": null,
  "external_input_count": null,
  "humidity": 60.5,
  "impact_alarm": null,
  "impact_magnitude": null,
  "light_detected": null,
  "mcu_temperature": null,
  "moisture": null,
  "reed_count": null,
  "reed_state": null,
  "temperature": 25.4
}

and for “promag400_test1”:

{
  "bytes": "ME9LIjE4MDk1Ljg3NzAibC9oIjE4MDk1Ljg3NzAia2cvaA==",
  "decode_data_hex": "0x30,0x4f,0x4b,0x22,0x31,0x38,0x30,0x39,0x35,0x2e,0x38,0x37,0x37,0x30,0x22,0x6c,0x2f,0x68,0x22,0x31,0x38,0x30,0x39,0x35,0x2e,0x38,0x37,0x37,0x30,0x22,0x6b,0x67,0x2f,0x68",
  "health_status": "OK",
  "mass_flow_unit": "kg/h",
  "mass_flow_value": 18095.877,
  "totalizer1_unit": null,
  "totalizer1_value": null,
  "totalizer2_unit": null,
  "totalizer2_value": null,
  "totalizer3_unit": null,
  "totalizer3_value": null,
  "volume_flow_unit": "l/h",
  "volume_flow_value": 18095.877
}

Is there anything wrong with my data?

That is far too often to comply with the TTN fair access policy.

Each device is allowed to use an average of 30 seconds of airtime each day. That means that with just one byte at the highest speed you can send about every 3 minutes. Anything more often (or larger packets at that frequency) exceeds the limits of what you are allowed to use on TTN.
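As a rough sketch of that arithmetic (the 30-second daily budget is the actual policy figure; the 62 ms per-message airtime for a minimal SF7 uplink is an assumed round number, not an official value):

```javascript
// Fair-access arithmetic: daily airtime budget divided by per-message airtime.
// The 62 ms airtime for a tiny packet at the highest data rate is an assumption.
const dailyBudgetMs = 30 * 1000;   // 30 s of airtime per device per day
const airtimePerMessageMs = 62;    // assumed airtime of a minimal SF7 uplink

const messagesPerDay = Math.floor(dailyBudgetMs / airtimePerMessageMs);
const minutesBetween = (24 * 60) / messagesPerDay;

console.log(messagesPerDay);             // 483
console.log(minutesBetween.toFixed(1));  // "3.0", i.e. roughly every 3 minutes
```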


Thank you for your fast response!
For sure! I’m aware of that. This is only a setup for development purposes.
I’ve read the fair access policy.
As far as I remember, exceeding it is acceptable when additional infrastructure (gateways) is installed; that’s the case in this region, where I’m the one and only LoRaWAN user in range.
I will strip it down to an acceptable level as soon as I have a grip on the technology.
But is this the reason for the behavior of the Data Storage?
I cannot imagine it is! If so, what would be a feasible rate for development purposes?
Following the fair access policy I could only send the data 4 times a day, which is far too slow for development.

We’ve seen people who abused the Fair Access Policy only get partial results for each API request, but not as low as just 2; see Is there any limit in the number of results when querying the Data Storage Integration?

So, what value are you using for ?last= in the API request? And did it ever work before? It may just be a temporary flaw which may need reporting; see The Things Network Status Page [HowTo]

…but then you’re really abusing the fair access policy a lot right now? Even for 51 bytes at the worst data rate, SF12BW125, you would only be allowed about 10 messages/day.


If you have a local gateway, as you suggest, you can use SF7, and that should allow for 4-6 messages an hour, not 4 a day. That should work for most development purposes.


Thank you all! I may have overloaded the system! I’ll take your advice.
I am currently working with a very crude prototype.
Hopefully the modification will help. I’m already using SF7.

I activated Data Storage last night and selected “1d” in the query. In the meantime I have 4 entries with “1d”, the same as with “2d”. With “12h” the oldest is skipped.
So it looks like this really is all the data that is in storage.
I only reactivated the one device this afternoon.
So it looks as if the data is only stored twice a day and is not decoded.

If the Data Storage cannot handle this, I can find another solution. It would just have been so much easier for the prototype.

If you still have an idea, I would be very grateful, if not I will switch to another solution.

Thank you very much for your dedication! I am very convinced of your concept!
Not least because of that, I will stick with it :wink:

I doubt that.

I just checked for one of my devices that uses the Data Storage Integration: I get all expected results, and all is decoded fine as well.

Despite the above: could it be that the code of your Decoder is failing, hence making the storage fail as well? Or maybe you’re using some erroneous Converter script too? (I’d expect the data to be stored regardless of any errors in the Decoder, but who knows…)

In case you’re not sure whether the HTTP Integration is indeed giving you all details: do you see all data and the decoded values on the device’s Data page (when you have it open while data is received), or when using the MQTT API?

Anything funny when comparing data that is received in the HTTP Integration but not stored, to data that is handled by both?

I think you are rushing your work, sending too often just to get results. If you need to see an uplink, set it to once per hour and add a switch to send when you are doing a test, but don’t keep hitting it.

My test setup at the home office has half a dozen devices with three gateways. At TTN the data has a decoder and is relayed via HTTP Integration to a web server and also kept in Data Storage. I have a database application that downloads twice per hour from Data Storage.

Most of the outage has been my fault. In the last 6 months I think there has been about 4 hours of HTTP Integration outage and I’ve seen a few MQTT messages that have timed out the JavaScript decoding. But mostly, it all just works.

So, take some time to split up the issues, have one device sending every 15 minutes, preferably with a serial debug running, leave the console web page open for it, use the Data Storage Swagger page and then compare what is being sent from your device, up to TTN and then coming out of Data Storage.


Answering my own question: I guess they differ in null values?

It seems the Data Storage Integration does not support null values…

One can easily test by just returning some hardcoded JSON result in the Decoder, and then use Simulate uplink (with a random payload) on a device’s Overview page to trigger the Decoder and the integrations. No need to abuse the Fair Access Policy for that:


(To automate this, look at ttnctl devices simulate.)

Storage silently fails for this:

function Decoder(bytes, port) {
  return {
    "bytes": "ME9LIjE4MDk1Ljg3NzAibC9oIjE4MDk1Ljg3NzAia2cvaA==",
    "decode_data_hex": "0x30,0x4f,0x4b,0x22,0x31,0x38,0x30,0x39,0x35,0x2e,0x38,0x37,0x37,0x30,0x22,0x6c,0x2f,0x68,0x22,0x31,0x38,0x30,0x39,0x35,0x2e,0x38,0x37,0x37,0x30,0x22,0x6b,0x67,0x2f,0x68",
    "health_status": "OK",
    "mass_flow_unit": "kg/h",
    "mass_flow_value": 18095.877,
    "totalizer1_unit": null,
    "totalizer1_value": null,
    "totalizer2_unit": null,
    "totalizer2_value": null,
    "totalizer3_unit": null,
    "totalizer3_value": null,
    "volume_flow_unit": "l/h",
    "volume_flow_value": 18095.877
  }
}

Removing the attributes with null values, or using undefined, does work:

function Decoder(bytes, port) {
  return {
    ...,
    "totalizer1_unit": undefined,
    "totalizer1_value": undefined,
    "totalizer2_unit": undefined,
    "totalizer2_value": undefined,
    "totalizer3_unit": undefined,
    "totalizer3_value": undefined,
    ...
  };
}

I wonder if things changed, but I guess not.

I’ve always seen null values in the output of the Data Storage Integration, but that’s probably due to the following: changing the Decoder, or conditionally including specific attributes, also affects fetching existing/other data from the Data Storage Integration. Any attributes added after data was already stored, or included conditionally, are output with null values for old/other data. Likewise, any removed attributes are still output with null for new data (even when only new data is returned in the time period of the query).

In other words: it seems all returned items have the very same set of attributes, though some may be null when not defined for that very item. That makes it look like null was supported, but I guess it never was.
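The underlying JavaScript behavior makes the difference easy to see: JSON.stringify keeps attributes that are null, but drops attributes that are undefined, so the two Decoder return values above really do produce different JSON (a minimal illustration, not TTN-specific code):

```javascript
// null is serialized; undefined attributes are dropped from the JSON entirely.
const withNull = { temperature: 25.4, totalizer1_value: null };
const withUndefined = { temperature: 25.4, totalizer1_value: undefined };

console.log(JSON.stringify(withNull));      // {"temperature":25.4,"totalizer1_value":null}
console.log(JSON.stringify(withUndefined)); // {"temperature":25.4}
```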

This does not explain why some items are stored without any decoded values, but I’ll leave that investigation to you. :wink:

Aside: note that the Decoder is executed upon receiving the data, not when fetching it from the Data Storage Integration. You can easily test by adding something like dummy: new Date().toISOString() in the output.
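A minimal sketch of such a test Decoder (the dummy field name and the byte handling here are arbitrary): if the value returned by the Data Storage Integration matches the receive time rather than the fetch time, the Decoder clearly ran on receipt:

```javascript
// Hypothetical test Decoder: tags each uplink with the time it was decoded.
function Decoder(bytes, port) {
  return {
    first_byte: bytes.length > 0 ? bytes[0] : undefined,
    dummy: new Date().toISOString(),  // time of decoding, not of fetching
  };
}

// Quick local check outside TTN:
const result = Decoder([0x03, 0x67], 1);
console.log(result.first_byte);  // 3
console.log(result.dummy);       // the current timestamp in ISO 8601 format
```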


Wow! That’s a lot of really useful information.
Many thanks to you all.
That is outstanding!
I will now first reduce my sample rates; then, tomorrow, I will check for the “null”s, which I already suspected after reading other posts dealing with decoding.
Once again, many thanks to you all.
I will show up again when I have interesting findings.

So! The problem is identified and solved:
It is the “null” element in the JSON which makes the Data Storage stumble.
It was not the data rate!
Setting the defaults in the decoder from “null” to “undefined” solved the problem.

Funny thing is that the Data Storage does not collect the data per device, but collects all elements of all devices in one JSON structure, setting all elements to “null” that were not in the incoming JSON.
Fortunately, I do not have a name clash.

My problem is solved. Thank you so much for your help!

Or: simplify the decoder to not set defaults at all. In JavaScript, there is no need to define everything in advance. (In fact, doing that without a proper editor may just introduce funny errors when you make a typo.) You may want to show your decoder if you’re open to a review.
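A sketch of that approach, using a hypothetical payload layout (a 16-bit temperature, a humidity byte, and an optional battery byte; the layout and scaling factors are made up for illustration): attributes are only added when the payload actually contains them, so no null or undefined defaults are needed at all:

```javascript
function Decoder(bytes, port) {
  // Hypothetical layout: [tempHi, tempLo, humidity, (optional) battery]
  var decoded = {};
  if (bytes.length >= 2) {
    // signed 16-bit temperature in tenths of a degree
    var raw = (bytes[0] << 8) | bytes[1];
    if (raw & 0x8000) raw -= 0x10000;
    decoded.temperature = raw / 10;
  }
  if (bytes.length >= 3) {
    decoded.humidity = bytes[2] / 2;  // half-percent steps
  }
  if (bytes.length >= 4) {
    decoded.battery_voltage = (bytes[3] + 150) / 100;  // assumed offset encoding
  }
  // Attributes for bytes that are absent are simply never added.
  return decoded;
}
```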

I just took the payload decoder referenced by the manufacturer.


I wrote the decoder for the self-developed device accordingly.
It’s a benefit to see which parameters could be contained.

Not this time, no, but please be aware that the back end servers are a shared community resource.
