Experiencing Packet Loss on v3 but not v2

Hello jpmeijers!

I also see packet loss on EU v3 end nodes, which I never saw on v2, at least not in this amount and with this consistency!
At my home I run three gateways, one Laird RG186 connected through cable internet to TTNv2 (UDP), one Laird RG186 connected through cable internet to TTNv3 (UDP), one Mikrotik connected through LTE to TTNv2 (UDP).
My nodes' messages are received and forwarded by all three gateways, but sometimes messages still get lost, so I suspect they get lost inside TTN v3 somewhere…

Hi @mat89, it would be useful for others comparing whether they see similar problems if you could provide more details on what you are seeing and what you are flagging as a problem. Also, if you can, provide more information on the physical deployment at your home: relative positions of GWs, nodes, physical environment, local connectivity and connection types etc.

Sadly we can no longer monitor 'live' traffic on V2 GWs (or nodes) directly in the console, due to V2 being read-only and the degrading service, so when you say

is that based on what you see in local logs and direct GW monitoring, or on what you see (in the case of the 2nd Laird GW) in the V3 console?

We know UDP is an inherently lossy internet comms mechanism, though traditionally I haven't seen too much loss myself. Hence TTI have been encouraging folk to use the opportunity of V3 migration to also move over to more robust GW-to-NS comms, e.g. the Basic Station (WebSockets based) protocol, which is both more secure and more robust through 'the net' :slight_smile: I believe your Laird can support BS (though I do not know your firmware revision, so it may need an update!); it may be useful to try that to see if matters improve for you.

If your nodes are on V3, then your V2 GWs will be servicing them by delivering packets through Packet Broker. Using LTE is generally fine for backhaul if in a decent reception area (I use both 3G & 4G connections in part of my GW fleet, with little problem), but I know of some folk who have struggled with the additional latency of cellular connections. In the case of the Mikrotik, that latency will be added to the latency through Packet Broker, so it would be really good to get more details and logs.

When you do see data arrive in your applications (or on the device traffic tab) in the Console, does the metadata show that all three GWs are servicing each message, or is it patchy, with perhaps one GW catching most while the other two miss the odd message or appear not to contribute? I suspect that if one GW handles a message quickly, and the other two are on long latency/delay, the NS may ignore the late-arriving copies as part of its de-duplication process and window (I do not know the window setup).
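For anyone considering the Basic Station route: the gateway no longer speaks Semtech UDP but connects to the LNS over an authenticated WebSocket, configured through a few small files next to `station.conf`. A rough sketch of what that looks like for the EU1 cluster (exact file handling depends on your gateway firmware, and the API key shown is a placeholder):

```
# Files LoRa Basics Station reads for its LNS ("tc") connection:
tc.uri    wss://eu1.cloud.thethings.network:8887
tc.trust  (PEM CA certificate used to verify the server)
tc.key    Authorization: Bearer NNSXS...   (a TTN gateway API key)
```

Because the transport is TCP with TLS, drops between gateway and Network Server become visible as disconnects in the logs rather than silently lost datagrams.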

Are you able to quantify that in any way, empirically or by anecdote: are we looking at 0.1%, 1%, 10%? More? As you may be aware, with their world view the TTI team say you should plan on up to around 10% message loss for a single GW instance; this is typical of what might be expected in potentially crowded or bursty ISM RF bands. Once into the GW and beyond, through internet comms and the back end, losses should not increase significantly, but they will increase overall, even if in a small way, for reasons such as those above. Again, IIRC the TTI team claim high-90s (97%+?) routing performance overall.

If it helps set expectations: over time I have typically seen 2-7% loss across my fleet, though I suspect some increase in that over the last 6-12 months as the airwaves get more heavily used. For some installations I am also seeing more (mostly private) networks get deployed, or load on TTN getting higher, and collisions increasing (I suspect if I looked closer now, some sites would be more like 3-8/9%?). It is therefore wise to design your applications to be resilient to such losses.

In some areas where I am keen to minimise losses I look at increasing the local density of GWs to increase reception redundancy, though where these are co-located or closely located it can be of only limited help: a bursty interferer in close proximity may affect several local GWs (all three in your case?), or intermittent masking/shadowing may affect all GWs in a general direction (I have similar close-proximity deployments in some areas). Looking at historic data I have managed to get some sites down to <<0.1% loss (<1 message in 1000), but the cost is additional GW coverage, which can start to get expensive if only deploying a few nodes, and it seems to be a law of diminishing returns: 2 GWs, say, 300-500m apart give good local-area improvement; 3 at 100-300m apart improve again, but less so, and so on! :wink:
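To put numbers on loss rates like these, the simplest empirical check is to look for gaps in the LoRaWAN frame counter (`f_cnt`) of the uplinks that did arrive. A minimal sketch, assuming the counter increments by 1 per uplink and does not roll over within the observation window:

```python
def estimate_loss(f_cnts):
    """Estimate uplink loss from the frame counters of received messages.

    Assumes f_cnt increments by 1 per uplink and does not roll over
    within the observation window.
    """
    if len(f_cnts) < 2:
        return 0.0
    expected = max(f_cnts) - min(f_cnts) + 1  # counters that should exist
    received = len(set(f_cnts))               # distinct counters actually seen
    return (expected - received) / expected
```

For example, seeing 97 distinct counters across a 100-counter span corresponds to 3% loss, which is the kind of figure being discussed here.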

Sure.
Laird RG186 connected with cable to the network, ISP is UPC Switzerland (cable internet), mounted below the roof, approx 10 meters above ground, registered on the Swiss v2 cluster.
Laird RG186 connected with cable to the network, ISP is UPC Switzerland (cable internet), mounted on the desk, registered on EU1 v3.
Mikrotik connected via LTE, ISP is Sunrise, RSRQ values are very good, the ISP's LTE cell is about 500 meters away with no buildings in between, only the roof; mounted below the roof, approx 10 meters above ground.

The node is mounted approx 1 meter above the ground and has one wall between it and the v2 gateway, and some more walls for the v3 gateway; RSSI is always >-100.

Yes, it is based on what I see on my gateways and also on my firewalls.

I agree on this: UDP is a lossy protocol. But it is very unlikely that the uplink messages from three gateways, delivered to two different servers (two to v2, one to v3) over two different ISPs, all get lost at the same time.

For the last 24 hours (I don't take the last calendar day, as in that time there were Packet Broker issues which could influence the loss percentage), I see a loss percentage of 2% for the node I have in v3.
I've been running some pcap software on the firewall since I first analysed this issue; I can see all messages from that specific node in the last 24 hours sent 'into the internet' from all three gateways.

If I check a v2 node which is at the same place, over the same timeframe as the one I have in v3, I see 0% packet loss.

I was able to spot one missing uplink message last night.
I see the uplink message in the EU1 v3 gateway console log, but not in the console log of the node… unfortunately there were some 'Console stream connection closed and reconnected' messages at the same time, so maybe it even reached the application. But it did not reach the webhook integration: I've set up three webhooks, each at a different provider, so it is very unlikely that all of them were unreachable…
There is no connection logged by the webhook integration for the missing uplink. And yes, I use the same servers as webhook endpoints for v2 too… :slight_smile:

Good clear inputs and explanation, thanks… will think on it, but one observation:

Given:

And given that both cabled GWs go out through the same local ISP (I assume on the same router/internet connection?), it is likely that all three will converge in a local exchange. The cell ISP may well contract tower backhaul through your other ISP's service, or at least as a colo with a common upstream, so if all three are UDP based there is a good chance any UDP issues at a given time will indeed impact all three message routes! I had to switch one of my sites from VM in the UK to BT, as it turned out the local 3G backup route to the main DSL/cable connection was also being handled in the same local exchange, and a troublesome shelf with heat problems was proving unreliable… and the deaf-eared local exchange engineers wouldn't listen when I kept telling them :frowning: They came back to me 2 months later and apologised when the whole shelf went down! (Known the guys for years… this was the second major problem I called out… they listen now! :slight_smile: :+1: )

Is your application in V2 also, or just a node?… or have you migrated the app but not the device registration? (:thinking: is that even possible?) Has the node been migrated into V3 but is it also still active in V2?…

Really need some logs from the console and apps to start debugging, and if you can capture/expand the metadata as requested, so we can see whether all GWs are contributing to message handling, please…

@mat89 & @Jeff-UK - moved this as case specific and not related to migrating OTAA devices.

I’ve lost track of where I said it, but I have a project folder for hacking the latest mp_packet_forwarder to provide much clearer logs that I can then correlate with: a device with an SD card on it, a copy of FireHose.js* logging the gateway console (as we don’t appear to have that), a copy of FireHose.js logging the real-time device console, Data Storage, a webhook or two, and possibly MQTT for good measure. If I can figure out how to run an SDR I can use something to log the actual RF as well; I may be able to get a local radio peep who likes tracking balloons to help.

This should then allow end-to-end tracing of everything. But this is NOT going to happen quickly: probably over Q4, with data analysis in the quiet of the Mid-Winter Solstice of Excess.

* FireHose.js is a Node.js variant of my web console so it can write to file. Still need to trap for disconnects.


Unlikely, but yes, there is a chance; then the v2 node should also suffer from packet loss…
And the same packet loss happens if I first tunnel the messages from the v3 gateway through a tunnel to Germany (Frankfurt). I see all packets in Frankfurt 'sent towards TTN', but the loss is still the same, so I don't think it is a local ISP issue.

Sorry, I need to be clearer here.
v3 node = v3 application and v3 device (new node, never migrated)
v2 node = v2 application and v2 device (old node, not migrated)

If I see something interesting in the console, I will do that.
Yes, all three gateways are contributing to message handling according to the metadata:

  {
      "gateway_ids": {
        "gateway_id": "packetbroker"
      },
      "packet_broker": {
        "message_id": "01FEDYPDQ707KKHCBCS245JZJ9",
        "forwarder_net_id": "000013",
        "forwarder_tenant_id": "ttnv2",
        "forwarder_cluster_id": "ttn-v2-ch",
        "forwarder_gateway_eui": "C0EE40FFFF2940EC",
        "forwarder_gateway_id": "eui-c0ee40ffff2940ec",
        "home_network_net_id": "000013",
        "home_network_tenant_id": "ttn",
        "home_network_cluster_id": "ttn-eu1"
      },
      "rssi": -75,
      "channel_rssi": -75,
      "snr": 7.5,
      "location": {
        "latitude": 47.431269,
        "longitude": 8.465581
      },
      "uplink_token": "eyJnIjoiWlhsS2FHSkhZMmxQYVVwQ1RWUkpORkl3VGs1VE1XTnBURU5LYkdKdFRXbFBhVXBDVFZSSk5GSXdUazVKYVhkcFlWaFphVTlwU25kaE1VNU1WbGhhZEZGdGVITmtSMnhEVFVodk0wbHBkMmxrUjBadVNXcHZhVTB3VmxwWU0xSnZZakJ3VEZsWFRuaFVNVlpJWTBVMWVsZEdPREZWVTBvNUxqRXlSbFJ5TUZwYVpHbGlRbFJvU25vMGMwbHVUM2N1WkRoSlRqRnRTQzFFTUhGYVVuTkVXQzVsVGxWUmEweHRkVEJCVXkxMGEwMUJZM1Z6VTI5WlkwUndRVGN0YlZaemJqZHphbVJRVFdkaFVGOTVlWG8zVTNSbVgxcEdYMGhFYkZwd1NqZHphemRYVEVjeFRXUkhWREZ1Um05TlFVbHlPQzFOVkcwMWNtcEdOVkZrYzJRellUVlNOMGM1VTNaV1JrSktOUzB3YUVVd1F6TkpTMlZzYUdSUFlrMUdRMmh6YkVKdmRGSjZXRjltTlhNdFZGbHdha3B5VDB0aWNIWldOMGROTTFGdGFsUmpSekF6TTJjeldHTlVPVlF0U2pWWkxsWk5PR1JzYzB4Sk5WTlJXSE5uYUhWeWVGOXFkMUU9IiwiYSI6eyJmbmlkIjoiMDAwMDEzIiwiZnRpZCI6InR0bnYyIiwiZmNpZCI6InR0bi12Mi1jaCJ9fQ=="
    },
    {
      "gateway_ids": {
        "gateway_id": "packetbroker"
      },
      "packet_broker": {
        "message_id": "01FEDYPDQC6QWG016AX4NKNHM0",
        "forwarder_net_id": "000013",
        "forwarder_tenant_id": "ttnv2",
        "forwarder_cluster_id": "ttn-v2-ch",
        "forwarder_gateway_eui": "313330371D005500",
        "forwarder_gateway_id": "eui-313330371d005500",
        "home_network_net_id": "000013",
        "home_network_tenant_id": "ttn",
        "home_network_cluster_id": "ttn-eu1"
      },
      "time": "2021-08-31T11:14:48.337980Z",
      "rssi": -68,
      "channel_rssi": -68,
      "snr": 7,
      "uplink_token": "eyJnIjoiWlhsS2FHSkhZMmxQYVVwQ1RWUkpORkl3VGs1VE1XTnBURU5LYkdKdFRXbFBhVXBDVFZSSk5GSXdUazVKYVhkcFlWaFphVTlwU2xwaVJrWmFWRlJqZUZkWE1UUlRTRVkyV2xkM01rbHBkMmxrUjBadVNXcHZhV0ZUTVhSa1JXaERWRVJTV2s0emF6QlBWV1JPVkVkR01XRXpTbE5WVTBvNUxsOU1iMHhCZHpabmEwYzViVFpNYkV4a1pXUkVTM2N1TVZKblJESkxZMUJ4T0VsR2JVOVNaQzVNUVRKQlZXWlZaRlZCZWpWTVNsUnJSMHM0VnpNemVUWXlNelZXUVhGbFV6VklSRjlDVmxWcU5scGhTWEZIVkVsQlpsZG1ZMll3VldaRmVtWnZjRVZoUmtRek1uTmFTR1JwZEd4QlExZG1NREZZYWpaSWRFMVZUbkZVY0hKRk16Rk9MVXBTZGtaRloxZE5NRTVCTWtvMWRVRnRhVlUwZVRGb1IwaEpNM2xsYW05UmNDMTNWbmwwYW14YU1VaHNZMFJ0WkRCVFVuSnhSREpYT0VZeU9FeHhMV1E1WDAxU1QwdFFhVVJqVDB4ekxsbEZPR05vTFVRNGRqQnBkR2hRVWtKc0xTMXdOMUU9IiwiYSI6eyJmbmlkIjoiMDAwMDEzIiwiZnRpZCI6InR0bnYyIiwiZmNpZCI6InR0bi12Mi1jaCJ9fQ=="
    },
    {
      "gateway_ids": {
        "gateway_id": "v3-testgw",
        "eui": "C0EE40FFFF2940E9"
      },
      "timestamp": 3272836931,
      "rssi": -89,
      "channel_rssi": -89,
      "snr": 7.8,
      "uplink_token": "ChcKFQoJdjMtdGVzdGd3EgjA7kD//ylA6RDDjs6YDBoMCKieuIkGEPngmMYCILi7oaKgnRY=",
      "channel_index": 2
    }
  ],

@Jeff-UK so today I discovered loss in another situation…

Device and App registered to EU1v3, one gateway in reach, also registered to EU1v3.
Gateway is a Mikrotik UDP gateway again (yes, I know, UDP…) with LTE (yes, I know, LTE, baaaad!).
I see the uplink from the node in the TTN gateway 'Live data' tab, but not in the TTN device 'Live data' tab.
Also, the packet is missing on all three of my webhook endpoints (which are in different datacenters).

So to me it clearly looks like TTN is losing messages within their own system, which should not happen, at least not on a regular basis… and no Packet Broker involved… and yes, the packet arrived at the TTN side :wink:

Metadata of the correctly processed uplink:

{
  "name": "gs.up.receive",
  "time": "2021-08-31T20:03:49.745908419Z",
  "identifiers": [
    {
      "gateway_ids": {
        "gateway_id": "hir-ttn01v3"
      }
    },
    {
      "gateway_ids": {
        "gateway_id": "hir-ttn01v3",
        "eui": "4836372047001D00"
      }
    }
  ],
  "data": {
    "@type": "type.googleapis.com/ttn.lorawan.v3.UplinkMessage",
    "raw_payload": "QEIKCyYA3CcBE01ET2RMCTrWtOHygexSMP0=",
    "payload": {
      "m_hdr": {
        "m_type": "UNCONFIRMED_UP"
      },
      "mic": "7FIw/Q==",
      "mac_payload": {
        "f_hdr": {
          "dev_addr": "260B0A42",
          "f_ctrl": {},
          "f_cnt": 10204
        },
        "f_port": 1,
        "frm_payload": "E01ET2RMCTrWtOHygQ=="
      }
    },
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 7
        }
      },
      "coding_rate": "4/5",
      "frequency": "868300000",
      "timestamp": 2394773225,
      "time": "2021-08-31T20:03:50.991572Z"
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "hir-ttn01v3",
          "eui": "4836372047001D00"
        },
        "time": "2021-08-31T20:03:50.991572Z",
        "timestamp": 2394773225,
        "rssi": -73,
        "channel_rssi": -73,
        "snr": 9.25,
        "uplink_token": "ChkKFwoLaGlyLXR0bjAxdjMSCEg2NyBHAB0AEOmt9fUIGgwIpZa6iQYQ8dDP4wIgqPztnNnrHQ==",
        "channel_index": 1
      }
    ],
    "received_at": "2021-08-31T20:03:49.745793649Z",
    "correlation_ids": [
      "gs:conn:01FEB3H9WXPK7GD0AZ7K6AZVRK",
      "gs:uplink:01FEEWZ2VHYCPYDT6SE3J9PJYT"
    ]
  },
  "correlation_ids": [
    "gs:conn:01FEB3H9WXPK7GD0AZ7K6AZVRK",
    "gs:uplink:01FEEWZ2VHYCPYDT6SE3J9PJYT"
  ],
  "origin": "ip-10-100-5-46.eu-west-1.compute.internal",
  "context": {
    "tenant-id": "CgN0dG4="
  },
  "visibility": {
    "rights": [
      "RIGHT_GATEWAY_TRAFFIC_READ",
      "RIGHT_GATEWAY_TRAFFIC_READ"
    ]
  },
  "unique_id": "01FEEWZ2VHFM930T1MTXMT9QFX"
}

Metadata of the missing uplink:

{
  "name": "gs.up.receive",
  "time": "2021-08-31T20:08:49.148113249Z",
  "identifiers": [
    {
      "gateway_ids": {
        "gateway_id": "hir-ttn01v3"
      }
    },
    {
      "gateway_ids": {
        "gateway_id": "hir-ttn01v3",
        "eui": "4836372047001D00"
      }
    }
  ],
  "data": {
    "@type": "type.googleapis.com/ttn.lorawan.v3.UplinkMessage",
    "raw_payload": "QEIKCyYA3ScBkpJWAk1cyoAv+Gti376IhpE=",
    "payload": {
      "m_hdr": {
        "m_type": "UNCONFIRMED_UP"
      },
      "mic": "voiGkQ==",
      "mac_payload": {
        "f_hdr": {
          "dev_addr": "260B0A42",
          "f_ctrl": {},
          "f_cnt": 10205
        },
        "f_port": 1,
        "frm_payload": "kpJWAk1cyoAv+Gti3w=="
      }
    },
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 7
        }
      },
      "coding_rate": "4/5",
      "frequency": "868500000",
      "timestamp": 2694196729,
      "time": "2021-08-31T20:08:50.415727Z"
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "hir-ttn01v3",
          "eui": "4836372047001D00"
        },
        "time": "2021-08-31T20:08:50.415727Z",
        "timestamp": 2694196729,
        "rssi": -78,
        "channel_rssi": -78,
        "snr": 8.25,
        "uplink_token": "ChkKFwoLaGlyLXR0bjAxdjMSCEg2NyBHAB0AEPnb2IQKGgsI0Zi6iQYQrZnIRiCoqY7VtPQd",
        "channel_index": 2
      }
    ],
    "received_at": "2021-08-31T20:08:49.147983533Z",
    "correlation_ids": [
      "gs:conn:01FEB3H9WXPK7GD0AZ7K6AZVRK",
      "gs:uplink:01FEEX877WCQEYNSYM67DA29CP"
    ]
  },
  "correlation_ids": [
    "gs:conn:01FEB3H9WXPK7GD0AZ7K6AZVRK",
    "gs:uplink:01FEEX877WCQEYNSYM67DA29CP"
  ],
  "origin": "ip-10-100-5-46.eu-west-1.compute.internal",
  "context": {
    "tenant-id": "CgN0dG4="
  },
  "visibility": {
    "rights": [
      "RIGHT_GATEWAY_TRAFFIC_READ",
      "RIGHT_GATEWAY_TRAFFIC_READ"
    ]
  },
  "unique_id": "01FEEX877W5FJJ11Z7ED57Z1CS"
}

And the screenshots:

Gateway live data:
2021-08-31 22_18_34-Gateway data - hir-ttn01v3 - The Things Network

Device live data:
2021-08-31 22_19_15-Live data - fi-weather-dev01 - The Things Network


Cool, something to dig into.

Do you have Data Storage turned on? If not, please turn it on, as that’s all internal to the servers and can be cross-referenced with the webhooks and any future examples you get screenshots of.

No, never used it.
I’ve enabled it now… but I have to check the docs for how to get some useful info out of it…

This works well: TheThingsStack-Integration-Starters/DataStorage-to-Tab-Python3 at main · descartes/TheThingsStack-Integration-Starters · GitHub

You need to fill in the top two lines. It’s a starter script, so not so bright: it will download whatever you ask; it won’t deduplicate or look at previous downloads. Data Storage only holds about 36 hours’ worth of data on v3, so it needs to be kicked off at least once a day.

In the repo there is also the same sort of functionality for webhooks; no configuration required. I can set one up on a server if you’d like. There is also the same thing but with MQTT.
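For reference, the retrieval behind that starter script is a single authenticated GET against the Application Server's Storage Integration. A sketch of building the request (the application ID and API key are placeholders; check the Storage Integration docs for your cluster):

```python
def storage_request(app_id: str, api_key: str, last: str = "36h",
                    cluster: str = "eu1"):
    """Build the URL and headers for pulling stored uplinks from the
    Storage Integration. Retention on the public network is short,
    hence fetching at least once a day."""
    url = (f"https://{cluster}.cloud.thethings.network/api/v3/as"
           f"/applications/{app_id}/packages/storage/uplink_message"
           f"?last={last}")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Accept": "text/event-stream",  # responses come as one JSON record per line
    }
    return url, headers

# url, headers = storage_request("my-app", "NNSXS.XXXX...")
# then e.g. requests.get(url, headers=headers, stream=True)
```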


This morning, for the node which is in reach of the two v2 gateways and one v3 gateway, I captured a missing uplink.
Again it is shown in the ‘Live data’ tab of the v3 gateway, but not in the device’s ‘Live data’ view.

Metadata of the correctly processed uplink:

{
  "name": "gs.up.receive",
  "time": "2021-09-01T03:58:08.100765704Z",
  "identifiers": [
    {
      "gateway_ids": {
        "gateway_id": "v3-testgw"
      }
    },
    {
      "gateway_ids": {
        "gateway_id": "v3-testgw",
        "eui": "C0EE40FFFF2940E9"
      }
    }
  ],
  "data": {
    "@type": "type.googleapis.com/ttn.lorawan.v3.UplinkMessage",
    "raw_payload": "QFNPCyYA/gIBkK+R/kYLjG9glRc8+bznOpU=",
    "payload": {
      "m_hdr": {
        "m_type": "UNCONFIRMED_UP"
      },
      "mic": "vOc6lQ==",
      "mac_payload": {
        "f_hdr": {
          "dev_addr": "260B4F53",
          "f_ctrl": {},
          "f_cnt": 766
        },
        "f_port": 1,
        "frm_payload": "kK+R/kYLjG9glRc8+Q=="
      }
    },
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 7
        }
      },
      "coding_rate": "4/5",
      "frequency": "867300000",
      "timestamp": 4175510243
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "v3-testgw",
          "eui": "C0EE40FFFF2940E9"
        },
        "timestamp": 4175510243,
        "rssi": -91,
        "channel_rssi": -91,
        "snr": 10,
        "uplink_token": "ChcKFQoJdjMtdGVzdGd3EgjA7kD//ylA6RDj9YTHDxoLCND0u4kGEITX/i8guI24/sLwAw==",
        "channel_index": 4
      }
    ],
    "received_at": "2021-09-01T03:58:08.100641668Z",
    "correlation_ids": [
      "gs:conn:01FEF7V0BJQHC31NP9WF5RWG18",
      "gs:uplink:01FEFR3J74YCBH3SZVE285GRK3"
    ]
  },
  "correlation_ids": [
    "gs:conn:01FEF7V0BJQHC31NP9WF5RWG18",
    "gs:uplink:01FEFR3J74YCBH3SZVE285GRK3"
  ],
  "origin": "ip-10-100-5-46.eu-west-1.compute.internal",
  "context": {
    "tenant-id": "CgN0dG4="
  },
  "visibility": {
    "rights": [
      "RIGHT_GATEWAY_TRAFFIC_READ",
      "RIGHT_GATEWAY_TRAFFIC_READ"
    ]
  },
  "unique_id": "01FEFR3J74J8WCMG6JBPJEWC6H"
}

Metadata of the missing uplink:

{
  "name": "gs.up.receive",
  "time": "2021-09-01T04:03:07.693527097Z",
  "identifiers": [
    {
      "gateway_ids": {
        "gateway_id": "v3-testgw"
      }
    },
    {
      "gateway_ids": {
        "gateway_id": "v3-testgw",
        "eui": "C0EE40FFFF2940E9"
      }
    }
  ],
  "data": {
    "@type": "type.googleapis.com/ttn.lorawan.v3.UplinkMessage",
    "raw_payload": "QFNPCyYA/wIBD3aTUlLREeufRkdWg9KWK1w=",
    "payload": {
      "m_hdr": {
        "m_type": "UNCONFIRMED_UP"
      },
      "mic": "0pYrXA==",
      "mac_payload": {
        "f_hdr": {
          "dev_addr": "260B4F53",
          "f_ctrl": {},
          "f_cnt": 767
        },
        "f_port": 1,
        "frm_payload": "D3aTUlLREeufRkdWgw=="
      }
    },
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 7
        }
      },
      "coding_rate": "4/5",
      "frequency": "868500000",
      "timestamp": 180152179
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "v3-testgw",
          "eui": "C0EE40FFFF2940E9"
        },
        "timestamp": 180152179,
        "rssi": -83,
        "channel_rssi": -83,
        "snr": 8,
        "uplink_token": "ChcKFQoJdjMtdGVzdGd3EgjA7kD//ylA6RDzzvNVGgwI+/a7iQYQ2ri5ygIguLKgj5/5Aw==",
        "channel_index": 2
      }
    ],
    "received_at": "2021-09-01T04:03:07.693001306Z",
    "correlation_ids": [
      "gs:conn:01FEF7V0BJQHC31NP9WF5RWG18",
      "gs:uplink:01FEFRCPSDWT0HEYVPDJ8PX5KP"
    ]
  },
  "correlation_ids": [
    "gs:conn:01FEF7V0BJQHC31NP9WF5RWG18",
    "gs:uplink:01FEFRCPSDWT0HEYVPDJ8PX5KP"
  ],
  "origin": "ip-10-100-5-46.eu-west-1.compute.internal",
  "context": {
    "tenant-id": "CgN0dG4="
  },
  "visibility": {
    "rights": [
      "RIGHT_GATEWAY_TRAFFIC_READ",
      "RIGHT_GATEWAY_TRAFFIC_READ"
    ]
  },
  "unique_id": "01FEFRCPSD8FW5WW5HRHHH84TN"
}

And the screenshots:

Gateway live data:
2021-09-01 10_12_38-

Device live data:
2021-09-01 10_13_25-

Nick @descartes has flagged your issue/experience to the TTI core team to investigate, so if you see more examples please post them: the more ‘evidence’ and instances we have, the better we can try to establish whether the issue is real/repeatable and why…

One more from this morning, for the node which is in reach of the two v2 gateways and one v3 gateway, I captured another missing uplink.
Again it is shown in the ‘Live data’ tab of the v3 gateway, but not in the device’s ‘Live data’ view.

Metadata of the correctly processed uplink:

{
  "name": "gs.up.receive",
  "time": "2021-09-01T08:07:46.726035599Z",
  "identifiers": [
    {
      "gateway_ids": {
        "gateway_id": "v3-testgw"
      }
    },
    {
      "gateway_ids": {
        "gateway_id": "v3-testgw",
        "eui": "C0EE40FFFF2940E9"
      }
    }
  ],
  "data": {
    "@type": "type.googleapis.com/ttn.lorawan.v3.UplinkMessage",
    "raw_payload": "QFNPCyYAMQMBFej8rkkOHT9DxKyoUCfnqyg=",
    "payload": {
      "m_hdr": {
        "m_type": "UNCONFIRMED_UP"
      },
      "mic": "J+erKA==",
      "mac_payload": {
        "f_hdr": {
          "dev_addr": "260B4F53",
          "f_ctrl": {},
          "f_cnt": 817
        },
        "f_port": 1,
        "frm_payload": "Fej8rkkOHT9DxKyoUA=="
      }
    },
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 7
        }
      },
      "coding_rate": "4/5",
      "frequency": "868100000",
      "timestamp": 1973979307
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "v3-testgw",
          "eui": "C0EE40FFFF2940E9"
        },
        "timestamp": 1973979307,
        "rssi": -88,
        "channel_rssi": -88,
        "snr": 9.2,
        "uplink_token": "ChcKFQoJdjMtdGVzdGd3EgjA7kD//ylA6RCrkaKtBxoMCNLpvIkGEJncycQCIPi319K5pAc="
      }
    ],
    "received_at": "2021-09-01T08:07:46.680685081Z",
    "correlation_ids": [
      "gs:conn:01FEF7V0BJQHC31NP9WF5RWG18",
      "gs:uplink:01FEG6CNS6X84CKTJHS8P59XA1"
    ]
  },
  "correlation_ids": [
    "gs:conn:01FEF7V0BJQHC31NP9WF5RWG18",
    "gs:uplink:01FEG6CNS6X84CKTJHS8P59XA1"
  ],
  "origin": "ip-10-100-5-46.eu-west-1.compute.internal",
  "context": {
    "tenant-id": "CgN0dG4="
  },
  "visibility": {
    "rights": [
      "RIGHT_GATEWAY_TRAFFIC_READ",
      "RIGHT_GATEWAY_TRAFFIC_READ"
    ]
  },
  "unique_id": "01FEG6CNS6A7WYG6EZ13WRWRN9"
}

Metadata of the missing uplink:

{
  "name": "gs.up.receive",
  "time": "2021-09-01T08:12:45.971283817Z",
  "identifiers": [
    {
      "gateway_ids": {
        "gateway_id": "v3-testgw"
      }
    },
    {
      "gateway_ids": {
        "gateway_id": "v3-testgw",
        "eui": "C0EE40FFFF2940E9"
      }
    }
  ],
  "data": {
    "@type": "type.googleapis.com/ttn.lorawan.v3.UplinkMessage",
    "raw_payload": "QFNPCyYAMgMBbgnj4bWAmssoJDR2gTmkcmc=",
    "payload": {
      "m_hdr": {
        "m_type": "UNCONFIRMED_UP"
      },
      "mic": "OaRyZw==",
      "mac_payload": {
        "f_hdr": {
          "dev_addr": "260B4F53",
          "f_ctrl": {},
          "f_cnt": 818
        },
        "f_port": 1,
        "frm_payload": "bgnj4bWAmssoJDR2gQ=="
      }
    },
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 7
        }
      },
      "coding_rate": "4/5",
      "frequency": "867300000",
      "timestamp": 2273462987
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "v3-testgw",
          "eui": "C0EE40FFFF2940E9"
        },
        "timestamp": 2273462987,
        "rssi": -94,
        "channel_rssi": -94,
        "snr": 9,
        "uplink_token": "ChcKFQoJdjMtdGVzdGd3EgjA7kD//ylA6RDLlYm8CBoMCP3rvIkGELSth88DIPjR0KeVrQc=",
        "channel_index": 4
      }
    ],
    "received_at": "2021-09-01T08:12:45.971101876Z",
    "correlation_ids": [
      "gs:conn:01FEF7V0BJQHC31NP9WF5RWG18",
      "gs:uplink:01FEG6NT0K2SRE99FAAA9FZ69G"
    ]
  },
  "correlation_ids": [
    "gs:conn:01FEF7V0BJQHC31NP9WF5RWG18",
    "gs:uplink:01FEG6NT0K2SRE99FAAA9FZ69G"
  ],
  "origin": "ip-10-100-5-46.eu-west-1.compute.internal",
  "context": {
    "tenant-id": "CgN0dG4="
  },
  "visibility": {
    "rights": [
      "RIGHT_GATEWAY_TRAFFIC_READ",
      "RIGHT_GATEWAY_TRAFFIC_READ"
    ]
  },
  "unique_id": "01FEG6NT0KTX5CZ6928XH0GPQH"
}

And the screenshots:
Gateway live data:
2021-09-01 13_32_50-10.0.0.20 - Remotedesktopverbindung

Device live data:
2021-09-01 13_33_28-10.0.0.20 - Remotedesktopverbindung

Does it appear in Data Storage?

And one more missing uplink from this morning, for the node which is only connected through a v3 gateway.

Metadata of the correctly processed uplink:

{
  "name": "gs.up.receive",
  "time": "2021-09-01T09:17:24.611990238Z",
  "identifiers": [
    {
      "gateway_ids": {
        "gateway_id": "hir-ttn01v3"
      }
    },
    {
      "gateway_ids": {
        "gateway_id": "hir-ttn01v3",
        "eui": "4836372047001D00"
      }
    }
  ],
  "data": {
    "@type": "type.googleapis.com/ttn.lorawan.v3.UplinkMessage",
    "raw_payload": "QEIKCyYAeygBublRh2GZrgjfAsQDPVf8Wrc=",
    "payload": {
      "m_hdr": {
        "m_type": "UNCONFIRMED_UP"
      },
      "mic": "V/xatw==",
      "mac_payload": {
        "f_hdr": {
          "dev_addr": "260B0A42",
          "f_ctrl": {},
          "f_cnt": 10363
        },
        "f_port": 1,
        "frm_payload": "ublRh2GZrgjfAsQDPQ=="
      }
    },
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 7
        }
      },
      "coding_rate": "4/5",
      "frequency": "868100000",
      "timestamp": 2764948561,
      "time": "2021-09-01T09:17:25.929362Z"
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "hir-ttn01v3",
          "eui": "4836372047001D00"
        },
        "time": "2021-09-01T09:17:25.929362Z",
        "timestamp": 2764948561,
        "rssi": -77,
        "channel_rssi": -77,
        "snr": 9,
        "uplink_token": "ChkKFwoLaGlyLXR0bjAxdjMSCEg2NyBHAB0AENGIt6YKGgwIpIq9iQYQ/KyEoAIg6LibnrzVKA=="
      }
    ],
    "received_at": "2021-09-01T09:17:24.604051068Z",
    "correlation_ids": [
      "gs:conn:01FEB3H9WXPK7GD0AZ7K6AZVRK",
      "gs:uplink:01FEGAC5R3EH3Q1W2Y698CSNBY"
    ]
  },
  "correlation_ids": [
    "gs:conn:01FEB3H9WXPK7GD0AZ7K6AZVRK",
    "gs:uplink:01FEGAC5R3EH3Q1W2Y698CSNBY"
  ],
  "origin": "ip-10-100-5-46.eu-west-1.compute.internal",
  "context": {
    "tenant-id": "CgN0dG4="
  },
  "visibility": {
    "rights": [
      "RIGHT_GATEWAY_TRAFFIC_READ",
      "RIGHT_GATEWAY_TRAFFIC_READ"
    ]
  },
  "unique_id": "01FEGAC5R3R327JN6V6AW4SQXM"
}

Metadata of the missing uplink:

{
  "name": "gs.up.receive",
  "time": "2021-09-01T09:22:24.106628075Z",
  "identifiers": [
    {
      "gateway_ids": {
        "gateway_id": "hir-ttn01v3"
      }
    },
    {
      "gateway_ids": {
        "gateway_id": "hir-ttn01v3",
        "eui": "4836372047001D00"
      }
    }
  ],
  "data": {
    "@type": "type.googleapis.com/ttn.lorawan.v3.UplinkMessage",
    "raw_payload": "QEIKCyYAfCgBDVwadgF8GAsEz3XIAW/04W8=",
    "payload": {
      "m_hdr": {
        "m_type": "UNCONFIRMED_UP"
      },
      "mic": "b/Thbw==",
      "mac_payload": {
        "f_hdr": {
          "dev_addr": "260B0A42",
          "f_ctrl": {},
          "f_cnt": 10364
        },
        "f_port": 1,
        "frm_payload": "DVwadgF8GAsEz3XIAQ=="
      }
    },
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 7
        }
      },
      "coding_rate": "4/5",
      "frequency": "868500000",
      "timestamp": 3064348225,
      "time": "2021-09-01T09:22:25.329676Z"
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "hir-ttn01v3",
          "eui": "4836372047001D00"
        },
        "time": "2021-09-01T09:22:25.329676Z",
        "timestamp": 3064348225,
        "rssi": -73,
        "channel_rssi": -73,
        "snr": 8,
        "uplink_token": "ChkKFwoLaGlyLXR0bjAxdjMSCEg2NyBHAB0AEMH8mLULGgwIz4y9iQYQwtHw1AMg6NuMy5feKA==",
        "channel_index": 2
      }
    ],
    "received_at": "2021-09-01T09:22:23.983312578Z",
    "correlation_ids": [
      "gs:conn:01FEB3H9WXPK7GD0AZ7K6AZVRK",
      "gs:uplink:01FEGANA7A7C1CHHN81XTW19AQ"
    ]
  },
  "correlation_ids": [
    "gs:conn:01FEB3H9WXPK7GD0AZ7K6AZVRK",
    "gs:uplink:01FEGANA7A7C1CHHN81XTW19AQ"
  ],
  "origin": "ip-10-100-5-46.eu-west-1.compute.internal",
  "context": {
    "tenant-id": "CgN0dG4="
  },
  "visibility": {
    "rights": [
      "RIGHT_GATEWAY_TRAFFIC_READ",
      "RIGHT_GATEWAY_TRAFFIC_READ"
    ]
  },
  "unique_id": "01FEGANA7AN90YFHGG36CC64EW"
}

And the screenshots:
Gateway live data:
gw
Device live data:
node

I will check that later and update here once more :slight_smile:

Hello, the script downloads some rows, yes, but then exits with an error:

Traceback (most recent call last):
  File "TTS.DataStorage.Tab.ch-weather.py", line 76, in <module>
    f_port = uplink_message["f_port"]
KeyError: 'f_port'

So I was at least able to download the interesting rows, and there I can clearly see that the missing uplinks are also missing in the storage integration.

Yes it will, it’s an homage to my favourite grumpy programmer:

If you want to fix it, I use this in production code:

f_port = theJSON.get("f_port", 0)

You’ll need to do the same for any other fields that may come back as zero, null, blank or uninitialised.
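Concretely, for the uplink JSON shown earlier in this thread, guarding every omit-when-zero field looks like this (field names as in the `gs.up.receive` events above):

```python
def safe_uplink_fields(mac_payload: dict) -> dict:
    """Read fields that the protobuf->JSON translation omits when they
    are zero/empty, supplying explicit defaults instead of a KeyError."""
    f_hdr = mac_payload.get("f_hdr", {})
    return {
        "f_port": mac_payload.get("f_port", 0),
        "f_cnt": f_hdr.get("f_cnt", 0),
        "dev_addr": f_hdr.get("dev_addr", ""),
        "frm_payload": mac_payload.get("frm_payload", ""),
    }
```

Uplinks on f_port 0 (MAC-only messages) then come through as `0` instead of crashing the download loop.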

In my not very humble opinion it’s a mistake in the design of Google’s Protocol Buffers-to-JSON translation utilities that has been perpetuated by programmers half my age, because they all think Google has been around forever so must be right, whereas I was coding for hire before the web even existed.

@johan and @htdvisser, per my PM, we have two examples and some correlation from Data Storage, so something appears to be amiss here.

I paged them on Slack as well.

@mat89, can you confirm which connection / join method you are using?

I know you originally posted this in the “How to Migrate OTAA Devices from V2 to V3” topic, but support have noted that if there is some channel / frequency configuration issue (both dropped packets are on 868.5 MHz) then the Network Server may not be able to process the uplink correctly.
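If the channel theory holds, it should show up as a skew when tallying the gateway-side `gs.up.receive` events that never reached the application. A sketch, using the event field paths from the JSON dumps above (`delivered_ids` would come from the webhook or Data Storage side):

```python
from collections import Counter

def missing_by_frequency(gateway_events, delivered_ids):
    """Count gateway-received uplinks that never reached the application,
    grouped by the RF channel frequency they arrived on."""
    missing = Counter()
    for event in gateway_events:
        if event["unique_id"] not in delivered_ids:
            missing[event["data"]["settings"]["frequency"]] += 1
    return missing
```

A result dominated by one frequency would point at channel-plan configuration rather than random transport loss.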