TTN & GPRS: dealing with the latency

Hello everyone.
Ok, I got a mission here. I found several threads touching this problem, but none really gets to the point, and their conclusions are too vague to be of real help to most readers, so I’m willing to clear things up and reach a stable point. I’ll try to be as clear and succinct as possible, and I hope for a constructive “debate”.

EQUIPMENT
I have a Kerlink Wirnet Station expected to work over GPRS using the Semtech Packet Forwarder (SPF). It is configured and connects as expected.
Ping round-trip is about 400 ms. I tried the same with a Huawei WiFi/GPRS hotspot and the situation is more or less the same, so the gateway itself is probably not to blame.

As mote I have a Libelium Waspmote P&S! with LoRa, or an Arduino MKRWAN 1300… take your pick, the outcome is the same.
In this example I use the Libelium EU, data rate: “SF = 9, BW = 125 kHz, BitRate = 1760 bps”.

Obviously a TTN account with both gateways and nodes configured.
Assume the LoRaWAN keys are all correct.

DESIRED
Nodes are flashed with a simple task: join via OTAA and send a string of about 20 bytes as a confirmed uplink.
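
For reference, a minimal Arduino MKRWAN 1300 sketch for exactly that task might look like this (the AppEUI/AppKey are placeholders and the payload/interval are made up; this is a sketch of the idea, not the exact firmware used):

    #include <MKRWAN.h>

    LoRaModem modem;

    // placeholders: substitute your own TTN OTAA credentials
    String appEui = "0000000000000000";
    String appKey = "00000000000000000000000000000000";

    void setup() {
      Serial.begin(115200);
      if (!modem.begin(EU868)) {            // EU 868 MHz band, as used in these tests
        while (true) {}                     // module not responding
      }
      if (!modem.joinOTAA(appEui, appKey)) {
        while (true) {}                     // join failed: the very problem discussed here
      }
    }

    void loop() {
      modem.beginPacket();
      modem.print("about twenty bytes..");  // ~20-byte payload
      int err = modem.endPacket(true);      // true = confirmed uplink, so a downlink is needed
      if (err <= 0) {
        // no ACK received: the node retries, just like the behaviour described below
      }
      delay(300000);                        // 5 minutes, to stay well within fair use
    }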

BEHAVIOUR
Nodes send the uplink (either JoinRequest or ConfirmedDataUp), the network server receives it and replies, but the nodes fail to receive the downlink (either JoinAccept or UnconfirmedDataDown) in 95% of cases. So they keep retrying until they give up (or occasionally succeed).

DEBUG
So I took a tcpdump on the Kerlink and the (useful) Gateway Traffic from the TTN console to understand what’s going on.

aside: Kerlink tcpdump is possible with tcpdump -xi ppp0 -n -v port 1700.

Confirmations don’t reach the nodes because the gateway refuses to radio-transmit the downlinks, reporting a “TOO_EARLY” scheduling error in its SPF TX_ACK reply.

aside: a little explanation: “TOO_EARLY” means that the transmission time scheduled by the NS is already in the past with respect to the gateway’s internal timer, yet (due to unsigned timestamp arithmetic, see further down) it is reported as “too much in advance”. This fooled me for a long time, so I put this hint here.

EXAMPLE DEBUG
A debug example case follows.

  • node sends a PUSH_DATA with the JoinRequest (EDIT: actually it is the gateway that wraps the received bytes in a PUSH_DATA)
  • gateway detects it at 10:20:12.013977 and forwards it to the NS with a tmst of 702008636
  • NS replies with PUSH_ACK at 10:20:13.514408, nearly 1.5 seconds later
  • node retries: the gateway forwards a second PUSH_DATA at 10:20:18.296313, nearly 5.3 seconds after the first JoinRequest, with a tmst of 708289364
  • NS sends back PULL_RESP with the JoinAccept (for the first request) at 10:20:18.791382, nearly 5.8 seconds after the first JoinRequest, and with a scheduled transmission time (tmst) of 707008636, the usual 5 seconds after the request

The gateway’s timer is already at 708289364 (as you can see from the second JoinRequest), so the gateway cannot schedule the downlink for transmission at 707008636: that instant is already in the past.
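
A minimal sketch of that arithmetic (values from the trace above; the 5 s offset is the LoRaWAN JOIN_ACCEPT_DELAY1, and the signed cast keeps the comparison valid across 32-bit counter roll-over):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t uplink_tmst   = 702008636;              /* JoinRequest, as time-stamped by the gateway */
        uint32_t downlink_tmst = uplink_tmst + 5000000;  /* +5 s (JOIN_ACCEPT_DELAY1) in us = 707008636 */
        uint32_t gw_now        = 708289364;              /* counter at the 2nd JoinRequest; it is even
                                                            later by the time the PULL_RESP arrives     */
        int32_t margin = (int32_t)(downlink_tmst - gw_now);
        printf("margin: %d us\n", (int)margin);          /* -1280728: the slot is already in the past   */
        return 0;
    }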

QUESTION
This happens only over GPRS; the same Kerlink in Ethernet mode works fine.
With a private ChirpStack instance I can raise the RX1 delay (as stated in other threads too) to 5 seconds, which is enough to complete the handshake successfully.
I tried changing the RX1Delay node-side, but it doesn’t seem to help.

So, when facing a situation like this, what do we need to do to get downlinks through using TTN over GPRS?
Thank you.

REFERENCES
Most relevant referenced threads:
https://www.thethingsnetwork.org/forum/t/gprs-delay-too-much-for-downlinks/19139
https://www.thethingsnetwork.org/forum/t/gateway-latency/6942/2

1 Like

While you can customize the RX delays in your own network, on TTN you have to use the same settings as everyone else.

The delay of five seconds is a bit surprising, but perhaps not completely: older packet forwarders can’t tolerate having more than one packet outstanding. So for a join/accept exchange, which has a later RX window, it’s possible the network holds onto downlink packets until closer to their due time and only pushes them to the gateway then, since pushing them earlier could cause a conflict with ordinary downlinks sent in response to later-arriving non-join uplinks. More recent versions of the Semtech code implement a software packet queue, but TTN perhaps cannot assume people are using those.

It would be interesting to see, in the wired Ethernet case, how long before their actual due time packets are pushed to the gateway.

But this is probably beside the point, as even if you got it to work for the join/accept loop, your GPRS delay simply isn’t going to work for ordinary traffic with TTN’s current shared settings.

And it’s not only going to fail for you, but also for other users whose downlinks might be unworkably routed to your gateway.

2 Likes

I assume you also see PULL_DATA requests from the gateway:

5.2. PULL_DATA packet

That packet type is used by the gateway to poll data from the server.

This data exchange is initialized by the gateway because it might be impossible for the server to send packets to the gateway if the gateway is behind a NAT.

When the gateway initialize the exchange, the network route towards the server will open and will allow for packets to flow both directions. The gateway must periodically send PULL_DATA packets to be sure the network route stays open for the server to be used at any time.

Are those in time, that is: is there a PULL_DATA often enough for the NS to respond in time? Or do you see the PULL_DATA just before you see a PULL_RESP packet being received from the NS?
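
For reference, a PULL_DATA datagram is just 12 bytes; a sketch of its layout per the Semtech UDP protocol (the EUI value is the one visible in the tcpdump later in this thread):

    #include <stdint.h>
    #include <string.h>

    #define PROTOCOL_VERSION 2
    #define PKT_PULL_DATA    0x02   /* identifiers: PUSH_DATA=0x00, PUSH_ACK=0x01,
                                       PULL_DATA=0x02, PULL_RESP=0x03, PULL_ACK=0x04 */

    /* version, 2-byte random token, identifier, 8-byte gateway EUI */
    void build_pull_data(uint8_t buf[12], uint16_t token, const uint8_t eui[8]) {
        buf[0] = PROTOCOL_VERSION;
        buf[1] = (uint8_t)(token >> 8);
        buf[2] = (uint8_t)(token & 0xFF);
        buf[3] = PKT_PULL_DATA;
        memcpy(&buf[4], eui, 8);    /* e.g. 72 76 ff 00 0b 03 24 75 */
    }

Sending one of these every few seconds is what keeps the NAT mapping open, so their cadence directly limits when the NS can deliver a PULL_RESP.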

And seemingly unrelated to your gateway’s latency problems:

Is that the same node? Then that’s odd: as it has not received a Join Accept in RX1 (5 seconds), it should at least await RX2 (6 seconds), and preferably even longer, before trying again. If you’re sure it’s the same node, that makes me wonder whether there’s actually some delay in the forwarder itself too, making it create and/or send the PUSH_DATA packet much too late? Can you access the gateway log files for any additional timing details, or even errors?

(And I don’t think it matters, and I guess you know, but: a node doesn’t really “send a PUSH_DATA”. Instead, it sends/transmits an “uplink”, and when a gateway forwards that using the Semtech protocol then it does that using a “PUSH_DATA” packet, which holds the received bits and some additional metadata. Of course, if all is well, that PUSH_DATA packet is sent off right after the gateway has received the uplink. But logging might reveal something.)

@fieldtronics I would similarly discourage use of GPRS with such long ping times: as stated, you can end up disrupting other legitimate users of the network. Whilst their nodes may work fine with their own or other gateways in their general area, the minute you drop in your gateway and ‘go live’ you risk becoming the RF path of choice, with the TTN NS selecting your gateway to deliver join accepts, confirmations/ACKs or command-and-control downlinks, which then simply won’t work :frowning:
I have a number of gateways running on 3G and 4G networks and check that they have reasonable and consistent ping times before letting them run full time. Also, one quick test at set-up isn’t enough: you need to configure and run over several hours/days, as even local weather and e.g. local vehicle traffic can degrade the cellular network’s RF performance or saturate the cells at certain times. I had one in range of traffic near the M25 London orbital motorway, and several times per day/week the cells in the area saturated and my backhaul suffered. (Fortunately the situation was 3G falling back to what I think was a poor version of EDGE; the solution was moving to a 4G network, which mitigated the problem well enough to continue use…)

1 Like

If it has a choice, I wonder if TTN will select a gateway that is often/always rejecting its PULL_RESP packets with an error?

Correct. On the other hand, I dare not change those shared settings.

I’ll provide that information soon to make this thread as much complete as possible.

The gateway has been up for hours and regularly sends out PULL_DATA roughly every 10 seconds.

Yes, it is.

I’ll investigate, didn’t think about that actually.

Sure. I’ll try to track down those of the example above.

You are right, I mistyped.

You are right, but I forgot to mention that the same happened with ping times of about 70 ms (during a period when, I guess, weather and cell conditions were optimal).

Do you also have a Kerlink Wirnet Station among them?

1 Like
20191218-102001:INFO:IP link OK on nominal bearer
20191218-102031:INFO:KMS CONNECTION n°75 RECEIVED from 127.0.0.1:46661 (socket 16)
20191218-102031:INFO:Thread "KMS RX" created, id=0xb6901f14, stack_size=262144 min_stack_size=16384
20191218-102031:INFO:APPLI 50 connected (nbconnected=2)
20191218-102031:INFO:End of thread=0xb6901f14
20191218-102033:INFO:IP link OK on nominal bearer

This is from the Kerlink “trace_agent” and it’s all I can find there for the time span of the example above. Maybe there is a way to make it more verbose?

No, sadly not. Whilst I had/have a number of clients running Kerlink kit and used to refer customers to them in the early days, they never sent me one to evaluate and work with myself, despite several requests a few years back, so I have lost touch with their implementations compared to others. I therefore don’t know if there is something specific to the Wirnet and its GPRS backhaul, but generally my advice stands: avoid long ping times and latency no matter which gateway and packet-forwarder implementation you use, especially if running a UDP-based packet forwarder :wink:

…if Yannick or any of the Kerlink old boys pick up on this thread and want to send me units from across the family to evaluate and play with I and/or one of my associates will happily allocate some time for testing in the new year! :slight_smile:

I did the test using the same gateway and node (Kerlink and Libelium) but through Ethernet; the results follow:

  • node sends uplink with JoinRequest
  • gateway detects and forwards as PUSH_DATA to NS at 16:02:05.125199 with tmst of 4466668
  • NS acks with PUSH_ACK at 16:02:05.175328, nearly 50 ms later
  • NS responds with PULL_RESP carrying the JoinAccept at 16:02:09.179659, with transmission scheduled (tmst) for 9466668 (the usual 5 seconds)
  • gateway replies with TX_ACK at 16:02:10.090052, with no errors
  • gateway transmits JoinAccept, node receives and join is complete

Since there is no further PUSH_DATA reporting the gateway tmst, it must be deduced. I think the gateway’s timer when receiving the PULL_RESP was about 8519553, that is nearly 4.1 s after the JoinRequest (4.052885 s), as confirmed by the log entries’ times.
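
A sketch of that deduction (the wall-clock delta between PUSH_DATA and PULL_RESP mapped onto the µs counter; values from the trace above):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t uplink_tmst = 4466668;                  /* counter at the JoinRequest           */
        uint32_t elapsed_us  = 4052885;                  /* ~4.05 s of wall time until PULL_RESP */
        uint32_t gw_now      = uplink_tmst + elapsed_us; /* ~8519553, the deduced timer          */
        uint32_t tx_tmst     = uplink_tmst + 5000000;    /* 9466668, as seen in the PULL_RESP    */

        printf("margin: %d us\n", (int)(int32_t)(tx_tmst - gw_now)); /* ~947115: still in the future */
        return 0;
    }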

1 Like

I think the issue is unrelated to the gateway model/manufacturer. In the past I deployed some dozens of Wirnet Stations with two different cellular-module models and ran into the bad-ping issue much more than once, but it was always caused by poor cellular coverage and other operator-related issues (or, in the lab, by noise levels or other equipment that effectively jammed the cellular channels). Since that was our own private LoRaWAN network, we finally increased the RX window delays and also organized our own APN.

1 Like

Indeed. Working with a private network one can increase the window and “fix” the problem. But of course that cannot be done with TTN, as stated above too.
Did you ever deploy your Wirnet Stations with TTN?

Indeed. Not only can you not change the RX delay with TTN, but TTN’s architectural assumption of queue-less packet forwarders means that even in the join/accept case where TTN has a long delay, it doesn’t use it in a way which could overcome slow backhaul.

Essentially, TTN is holding onto the join accept packet and only pushing it to the gateway at a point where no other traffic could come in that could prompt a downlink which would legitimately transmit first.

And TTN can’t even remove the time already elapsed on the uplink path from the duration for which it holds the packet, because it doesn’t have a reliable way to tell how stale an uplink actually is before it gets there.

My conclusion is thus: if a network server is going to use long RX delays to allow for backhaul latency, then it’s necessary to use a modern packet forwarder with queueing in the gateway. I have a custom non-TTN system that works that way: although backhaul latency has not really been an issue there, the network is configured with long RX delays just to be safe. Since upgrading to the packet forwarder with the “JIT” queue, the network server no longer needs to hold packets. It waits only long enough to receive all timely gateway uplink reports, then prepares the downlink packet and immediately pushes it to the gateway, well in advance of the transmit time.

(To be fair, it’s not immediately clear whether you are using a packet forwarder with the JIT queue. That typically correlates with a different embedded UDP protocol version number, so it’s possible that if you use a packet forwarder with the queue, that fact could be noticed and downlink packets not held back. But that would only fix the join/accept case: in theory one might run with no other downlinks, but ADR would fail, probably eventually triggering rejoin attempts over and over.)

2 Likes

Well, you caught me unprepared, as this is not clear to me either. I’ll dig this information out of the Kerlink and update here.

By the way, this problem is present also for regular uplinks and not only for join messages. This fits and doesn’t change your statements, right?

You might be able to build a newer generation forwarder with the jit queue from source.

But I’m only allowing for the possibility that TTN server software could act differently in this case, I don’t actually expect that it does. And that still wouldn’t get ordinary downlinks working.

By the way, this problem is present also for regular uplinks and not only for join messages. This fits and doesn’t change your statements, right?

There’s no solution for regular downlinks on TTN.

Join/Accept should have enough time to work, except that the need to queue packets in the network server (since a basic gateway cannot) means that the actual implementation doesn’t work in the high-backhaul-delay case either.

1 Like

Just as a reference, from some posts that I undeleted just now:

In August 2017, in Multiple new devices trying to OTAA - #17 by kersing one of the developers wrote:

In December 2017 some timing was changed from 800ms to 1000ms:

I don’t fully grasp the code that handles this.

And dating back to April 2017:

@kersing, is this still valid, and do you think that TTN would know that the gateway has such queue?

Yes, but that was really long ago.

You can just run the packet forwarder manually and watch the console for a while. If the build you use supports the JIT queue, it will print some statistics on its usage every 30 seconds or so.

1 Like

Inspired by your suggestion I went deeper and actually found that the Kerlink Wirnet logs the SPF output in /mnt/fsuser-1/spf/var/log/spf.log. There I found these records:

### [JIT] ###
Dec 20 10:18:40 Wirnet local1.notice spf: /home/drd/jenkins/workspace/spf_release/lora_pkt_fwd/src/jitqueue.c:448:jit_print_queue(): INFO: [jit] queue is empty

so it definitely supports the JIT queue.

Anyway, I live-captured a failed join handshake to snip its SPF log. What I got is a session where the first JoinRequest fails and the first retry succeeds (maybe this is what @arjanvanb asked for a few posts ago?):

Dec 20 10:45:33 Wirnet local1.notice spf: INFO: Received pkt from mote: D00XXXX0 (fcnt=46037)
Dec 20 10:45:33 Wirnet local1.notice spf: JSON up: {"rxpk":[{"tmst":2276695764,"time":"2019-12-20T10:45:33.971123Z","chan":1,"rfch":1,"freq":868.300000,"stat":1,"modu":"LORA","datr":"SF9BW125","codr":"4/5","lsnr":13.8,"rssi":-20,"size":23,"data":"AKBjAtB+1bNwa1VnUmROYUrS
Dec 20 10:45:33 Wirnet local1.notice spf: WARNING: [up] ignored out-of sync ACK packet
Dec 20 10:45:39 Wirnet local1.notice spf: INFO: [down] PULL_ACK received in 1915.86 ms
Dec 20 10:45:40 Wirnet local1.notice spf: INFO: Disabling GPS mode for concentrator's counter...
Dec 20 10:45:40 Wirnet local1.notice spf: INFO: host/sx1301 time offset=(1576836457s:275255µs) - drift=-9µs
Dec 20 10:45:40 Wirnet local1.notice spf: INFO: Enabling GPS mode for concentrator's counter.
Dec 20 10:45:40 Wirnet local1.notice spf: WARNING: [gps] GPS out of sync, keeping previous time reference
Dec 20 10:45:40 Wirnet local1.notice spf: INFO: Received pkt from mote: D00XXXX0 (fcnt=46037)
Dec 20 10:45:40 Wirnet local1.notice spf: JSON up: {"rxpk":[{"tmst":2282977668,"time":"2019-12-20T10:45:40.253025Z","chan":0,"rfch":1,"freq":868.100000,"stat":1,"modu":"LORA","datr":"SF9BW125","codr":"4/5","lsnr":11.0,"rssi":-19,"size":23,"data":"AKBjAtB+1bNwa1VnUmROYUp3
Dec 20 10:45:40 Wirnet local1.notice spf: WARNING: [up] ignored out-of sync ACK packet
Dec 20 10:45:40 Wirnet local1.notice spf: ##### 2019-12-20 10:45:40 GMT #####
Dec 20 10:45:40 Wirnet local1.notice spf: ### [UPSTREAM] ###
Dec 20 10:45:40 Wirnet local1.notice spf: # RF packets received by concentrator: 2
Dec 20 10:45:40 Wirnet local1.notice spf: # CRC_OK: 100.00%, CRC_FAIL: 0.00%, NO_CRC: 0.00%
Dec 20 10:45:40 Wirnet local1.notice spf: # RF packets forwarded: 2 (46 bytes)
Dec 20 10:45:40 Wirnet local1.notice spf: # PUSH_DATA datagrams sent: 3 (651 bytes)
Dec 20 10:45:40 Wirnet local1.notice spf: # PUSH_DATA acknowledged: 0.00%
Dec 20 10:45:40 Wirnet local1.notice spf: ### [DOWNSTREAM] ###
Dec 20 10:45:40 Wirnet local1.notice spf: # PULL_DATA sent: 3 (133.33% acknowledged, ping 1915.86 ms)
Dec 20 10:45:40 Wirnet local1.notice spf: # PULL_RESP(onse) datagrams received: 0 (0 bytes)
Dec 20 10:45:40 Wirnet local1.notice spf: # RF packets sent to concentrator: 0 (0 bytes)
Dec 20 10:45:40 Wirnet local1.notice spf: # TX errors: 0
Dec 20 10:45:40 Wirnet local1.notice spf: # TX rejected (collision packet): 0.00% (req:3, rej:0)
Dec 20 10:45:40 Wirnet local1.notice spf: # TX rejected (collision beacon): 0.00% (req:3, rej:0)
Dec 20 10:45:40 Wirnet local1.notice spf: # TX rejected (too late): 0.00% (req:3, rej:0)
Dec 20 10:45:40 Wirnet local1.notice spf: # TX rejected (too early): 100.00% (req:3, rej:3)
Dec 20 10:45:40 Wirnet local1.notice spf: # BEACON queued: 0
Dec 20 10:45:40 Wirnet local1.notice spf: # BEACON sent so far: 0
Dec 20 10:45:40 Wirnet local1.notice spf: # BEACON rejected: 0
Dec 20 10:45:40 Wirnet local1.notice spf: ### [JIT] ###
Dec 20 10:45:40 Wirnet local1.notice spf: /home/drd/jenkins/workspace/spf_release/lora_pkt_fwd/src/jitqueue.c:448:jit_print_queue(): INFO: [jit] queue is empty
Dec 20 10:45:40 Wirnet local1.notice spf: ### [GPS] ###
Dec 20 10:45:40 Wirnet local1.notice spf: # Valid time reference (age: 1 sec)
Dec 20 10:45:40 Wirnet local1.notice spf: # GPS coordinates: latitude 43.00000, longitude 13.00000, altitude 20 m
Dec 20 10:45:40 Wirnet local1.notice spf: ##### END #####
Dec 20 10:45:40 Wirnet local1.notice spf: JSON up: {"stat":{"time":"2019-12-20 10:45:40 GMT","lati":43.00000,"long":13.00000,"alti":20,"rxnb":2,"rxok":2,"rxfw":2,"ackr":0.0,"dwnb":0,"txnb":0,"ping":1916}}
Dec 20 10:45:40 Wirnet local1.notice spf: WARNING: [up] ignored out-of sync ACK packet
Dec 20 10:45:43 Wirnet local1.notice spf: INFO: [down] PULL_RESP received  - token[102:229] :)
Dec 20 10:45:43 Wirnet local1.notice spf: JSON down: {"txpk":{"imme":false,"tmst":2281695764,"freq":868.3,"rfch":0,"powe":14,"modu":"LORA","datr":"SF9BW125","codr":"4/5","ipol":true,"size":33,"ncrc":true,"data":"IF6CIVc2FVjRUd+TSgMfGlFsLerNi8EoSbMNRwQIGy/2"}}
Dec 20 10:45:43 Wirnet local1.notice spf: /home/drd/jenkins/workspace/spf_release/lora_pkt_fwd/src/jitqueue.c:251:jit_enqueue(): ERROR: Packet REJECTED, timestamp seems wrong, too much in advance (current=2285897020, packet=2281695764, type=0)
Dec 20 10:45:43 Wirnet local1.notice spf: ERROR: Packet REJECTED (jit error=2)
Dec 20 10:45:44 Wirnet local1.notice spf: INFO: [down] PULL_RESP received  - token[137:221] :)
Dec 20 10:45:44 Wirnet local1.notice spf: JSON down: {"txpk":{"imme":false,"tmst":2287977668,"freq":868.1,"rfch":0,"powe":14,"modu":"LORA","datr":"SF9BW125","codr":"4/5","ipol":true,"size":33,"ncrc":true,"data":"IHai0EtvQHLYVvQtjkxz57XNFHFkro3SIMcYsjbKzZbp"}}
Dec 20 10:45:50 Wirnet local1.notice spf: INFO: [down] PULL_ACK received in 765.89 ms

As you can see, comparing tmst :

  • packet: 2281695764
  • gw timer: 2285897020
  • delta: -4201256 (TOO_EARLY)

whilst the retry:

  • packet: 2287977668
  • gw timer: 2286897020 (deduced)
  • delta: 1080648 (OK)
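
For clarity, each delta is the signed difference of two unsigned 32-bit counters; a small sketch reproducing them (values from the log above):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* first attempt: the scheduled tmst is already behind the concentrator counter */
        printf("%d\n", (int)(int32_t)(2281695764u - 2285897020u)); /* -4201256 -> TOO_EARLY */
        /* retry: roughly one second of margin left */
        printf("%d\n", (int)(int32_t)(2287977668u - 2286897020u)); /*  1080648 -> OK        */
        return 0;
    }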

Besides this, trying to write down what one might conclude at this point, and being open to corrections: I understand that GPRS gateways, although they can work well when covered by a strong and jam-free signal, might not be a good idea with TTN, because interoperability between the backhaul latency and TTN’s fixed RX delays is poor (or at least not guaranteed). Moreover, they could potentially become a sort of “bad” or “weak” spot in the network, losing downlink packets (that is, JoinAccept and (Un)confirmedDataDown) meant for nearby motes.

Does this make sense?

2 Likes

This is not an answer, but just in case it helps:

  •  INFO: Received pkt from mote: D00XXXX0 (fcnt=46037)
    

    This is a Join Request. The printed DevAddr (which is not a secret) and counter are bogus but that’s not an issue.

  • Seeing the first Join Request at the gateway’s wall time of 10:45:33.971123Z, and then the retry at 10:45:40.253025Z is more than 6 seconds later. Also, 2287977668 - 2281695764 yields 6.281904 seconds now. That’s nicely after RX2, so that’s good; earlier it seemed we saw a difference of only 5.3 seconds, but I guess we can discard that occurrence.

  • WARNING: [up] ignored out-of sync ACK packet
    ...
    PULL_DATA sent: 3 (133.33% acknowledged, ping 1915.86 ms)
    

    This out-of-sync error and the funny 133.33% also seem to confirm latency issues.

Does anyone know if TTN supports JIT queues for the Semtech UDP protocol? (And above all: would it then send the downlink commands much earlier? If TTN supports it, but is clearly not showing that in this case, then the protocol version in the tcpdump messages might be relevant? That’s apparently the first byte of the UDP payload.)

Also, I wonder if “TOO_EARLY” isn’t a bug which should read “TOO_LATE”. If it is, then I’d assume Kerlink would have fixed it long ago, making me wonder if there’s more recent software available. But then: even if the error is reported back to TTN, I wonder if TTN acts on it (by, say, sending the downlink command earlier next time).

2 Likes

V2 of the TTN stack does not take JIT into consideration for UDP connected gateways.

2 Likes

I stumbled over this for a while, and eventually it turned out that “TOO_EARLY” is phrased from the scheduler’s point of view. So the meaning isn’t the blaming “Oh no, you just arrived too late!”, but the obliging “Sir, you scheduled the packet so far in advance that I have already gone past it”.
Anyway, it could become a bug when considering the roll-over due to unsigned arithmetic, as stated in the source:

*  Warning: unsigned arithmetic (handle roll-over)
                t_packet > t_current + TX_MAX_ADVANCE_DELAY
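
In other words, the effective, roll-over-safe reading of that check is something like the following sketch (TX_MAX_ADVANCE_DELAY is the forwarder’s look-ahead limit; the value here is a placeholder):

    #include <stdint.h>
    #include <stdbool.h>

    #define TX_MAX_ADVANCE_DELAY 120000000u   /* placeholder, in us; the forwarder defines its own */

    /* unsigned subtraction handles counter roll-over: a t_packet that is already in
       the past wraps to a huge positive difference, hence "too much in advance" */
    bool rejected_too_early(uint32_t t_packet, uint32_t t_current) {
        return (uint32_t)(t_packet - t_current) > TX_MAX_ADVANCE_DELAY;
    }

With the values from my log, rejected_too_early(2281695764, 2285897020) is true, which is exactly the ERROR printed above.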

I don’t know if the dump is still relevant for this, but the following is the tcpdump of the case explained in the topic question (with packet types added):

10:20:12.013977 IP (tos 0x0, ttl 64, id 957, offset 0, flags [DF], proto UDP (17), length 271)
    10.70.191.217.58240 > 52.169.76.203.1700: UDP, length 243
	(ed: PUSH_DATA: JoinRequest 1)
        0x0000:  4500 010f 03bd 4000 4011 ea8d 0a46 bfd9
        0x0010:  34a9 4ccb e380 06a4 00fb 25d0 0279 4d00
        0x0020:  7276 ff00 0b03 2475 7b22 7278 706b 223a
        0x0030:  5b7b 2274 6d73 7422 3a37 3032 3030 3836
        0x0040:  3336 2c22 7469 6d65 223a 2232 3031 392d
        0x0050:  3132 2d31 3854 3130 3a32 303a 3132 2e30
        0x0060:  3031 3535 315a 222c 2263 6861 6e22 3a31
        0x0070:  2c22 7266 6368 223a 312c 2266 7265 7122
        0x0080:  3a38 3638 2e33 3030 3030 302c 2273 7461
        0x0090:  7422 3a31 2c22 6d6f 6475 223a 224c 4f52
        0x00a0:  4122 2c22 6461 7472 223a 2253 4639 4257
        0x00b0:  3132 3522 2c22 636f 6472 223a 2234 2f35
        0x00c0:  222c 226c 736e 7222 3a31 322e 382c 2272
        0x00d0:  7373 6922 3a2d 3238 2c22 7369 7a65 223a
        0x00e0:  3233 2c22 6461 7461 223a 2241 4b42 6a41
        0x00f0:  7442 2b31 624e 7761 3156 6e55 6d52 4f59
        0x0100:  5571 5652 4e5a 3755 6a41 3d22 7d5d 7d
10:20:12.234614 IP (tos 0x0, ttl 64, id 964, offset 0, flags [DF], proto UDP (17), length 40)
    10.70.191.217.56746 > 52.169.76.203.1700: UDP, length 12
	(ed: PULL_DATA)
        0x0000:  4500 0028 03c4 4000 4011 eb6d 0a46 bfd9
        0x0010:  34a9 4ccb ddaa 06a4 0014 f8b4 023d 3402
        0x0020:  7276 ff00 0b03 2475
10:20:13.514393 IP (tos 0x0, ttl 40, id 53116, offset 0, flags [DF], proto UDP (17), length 32)
    52.169.76.203.1700 > 10.70.191.217.56746: UDP, length 4
	(ed: PULL_ACK)
        0x0000:  4500 0020 cf7c 4000 2811 37bd 34a9 4ccb
        0x0010:  0a46 bfd9 06a4 ddaa 000c 99b2 023d 3404
10:20:13.514408 IP (tos 0x0, ttl 40, id 53117, offset 0, flags [DF], proto UDP (17), length 32)
    52.169.76.203.1700 > 10.70.191.217.58240: UDP, length 4
	(ed: PUSH_ACK)
        0x0000:  4500 0020 cf7d 4000 2811 37bc 34a9 4ccb
        0x0010:  0a46 bfd9 06a4 e380 000c 7aa3 0279 4d01
10:20:18.296313 IP (tos 0x0, ttl 64, id 1408, offset 0, flags [DF], proto UDP (17), length 270)
    10.70.191.217.58240 > 52.169.76.203.1700: UDP, length 242
	(ed: PUSH_DATA: JoinRequest 2)
        0x0000:  4500 010e 0580 4000 4011 e8cb 0a46 bfd9
        0x0010:  34a9 4ccb e380 06a4 00fa 7731 02bc 5f00
        0x0020:  7276 ff00 0b03 2475 7b22 7278 706b 223a
        0x0030:  5b7b 2274 6d73 7422 3a37 3038 3238 3933
        0x0040:  3634 2c22 7469 6d65 223a 2232 3031 392d
        0x0050:  3132 2d31 3854 3130 3a32 303a 3138 2e32
        0x0060:  3832 3237 385a 222c 2263 6861 6e22 3a32
        0x0070:  2c22 7266 6368 223a 312c 2266 7265 7122
        0x0080:  3a38 3638 2e35 3030 3030 302c 2273 7461
        0x0090:  7422 3a31 2c22 6d6f 6475 223a 224c 4f52
        0x00a0:  4122 2c22 6461 7472 223a 2253 4639 4257
        0x00b0:  3132 3522 2c22 636f 6472 223a 2234 2f35
        0x00c0:  222c 226c 736e 7222 3a39 2e30 2c22 7273
        0x00d0:  7369 223a 2d32 382c 2273 697a 6522 3a32
        0x00e0:  332c 2264 6174 6122 3a22 414b 426a 4174
        0x00f0:  422b 3162 4e77 6131 566e 556d 524f 5955
        0x0100:  7071 5370 3044 2f67 4d3d 227d 5d7d
10:20:18.791382 IP (tos 0x0, ttl 40, id 53455, offset 0, flags [DF], proto UDP (17), length 237)
    52.169.76.203.1700 > 10.70.191.217.56746: UDP, length 209
	(ed: PULL_RESP: JoinAccept 1)
        0x0000:  4500 00ed d0cf 4000 2811 359d 34a9 4ccb
        0x0010:  0a46 bfd9 06a4 ddaa 00d9 b338 02db da03
        0x0020:  7b22 7478 706b 223a 7b22 696d 6d65 223a
        0x0030:  6661 6c73 652c 2274 6d73 7422 3a37 3037
        0x0040:  3030 3836 3336 2c22 6672 6571 223a 3836
        0x0050:  382e 332c 2272 6663 6822 3a30 2c22 706f
        0x0060:  7765 223a 3134 2c22 6d6f 6475 223a 224c
        0x0070:  4f52 4122 2c22 6461 7472 223a 2253 4639
        0x0080:  4257 3132 3522 2c22 636f 6472 223a 2234
        0x0090:  2f35 222c 2269 706f 6c22 3a74 7275 652c
        0x00a0:  2273 697a 6522 3a33 332c 226e 6372 6322
        0x00b0:  3a74 7275 652c 2264 6174 6122 3a22 4941
        0x00c0:  4832 546c 4755 4173 4144 4167 6b58 7547
        0x00d0:  6b41 5854 4b49 4d61 5554 6238 2f66 3630
        0x00e0:  2f56 3763 6c38 6f49 6959 227d 7d
10:20:18.796338 IP (tos 0x0, ttl 64, id 1450, offset 0, flags [DF], proto UDP (17), length 74)
    10.70.191.217.56746 > 52.169.76.203.1700: UDP, length 46
	(ed: TX_ACK: "TOO_EARLY")
        0x0000:  4500 004a 05aa 4000 4011 e965 0a46 bfd9
        0x0010:  34a9 4ccb ddaa 06a4 0036 fb9c 02db da05
        0x0020:  7276 ff00 0b03 2475 7b22 7478 706b 5f61
        0x0030:  636b 223a 7b22 6572 726f 7222 3a22 544f
        0x0040:  4f5f 4541 524c 5922 7d7d
10:20:19.343295 IP (tos 0x0, ttl 40, id 53505, offset 0, flags [DF], proto UDP (17), length 32)
    52.169.76.203.1700 > 10.70.191.217.58240: UDP, length 4
	(ed: PUSH_ACK)
        0x0000:  4500 0020 d101 4000 2811 3638 34a9 4ccb
        0x0010:  0a46 bfd9 06a4 e380 000c 6860 02bc 5f01
10:20:22.194664 IP (tos 0x0, ttl 64, id 1558, offset 0, flags [DF], proto UDP (17), length 40)
    10.70.191.217.56746 > 52.169.76.203.1700: UDP, length 12
	(ed: PULL_DATA)
        0x0000:  4500 0028 0616 4000 4011 e91b 0a46 bfd9
        0x0010:  34a9 4ccb ddaa 06a4 0014 b5a3 024e 7702
        0x0020:  7276 ff00 0b03 2475
10:20:22.293346 IP (tos 0x0, ttl 40, id 54187, offset 0, flags [DF], proto UDP (17), length 32)
    52.169.76.203.1700 > 10.70.191.217.56746: UDP, length 4
	(ed: PULL_ACK)
        0x0000:  4500 0020 d3ab 4000 2811 338e 34a9 4ccb
        0x0010:  0a46 bfd9 06a4 ddaa 000c 56a1 024e 7704
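
As for the protocol version @arjanvanb asked about: it’s the first byte of the UDP payload, which starts at offset 0x1c of each dump above (20 bytes of IP header plus 8 of UDP). A small decode of the head of “JoinRequest 1”, with the bytes copied from the dump:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* bytes 0x1c..0x27 of the first packet above */
        uint8_t p[] = {0x02, 0x79, 0x4d, 0x00,
                       0x72, 0x76, 0xff, 0x00, 0x0b, 0x03, 0x24, 0x75};

        printf("version=%d token=0x%02x%02x id=0x%02x (PUSH_DATA)\n",
               p[0], p[1], p[2], p[3]);   /* the PUSH_ACK later echoes token 0x794d */
        printf("gateway EUI: ");
        for (int i = 4; i < 12; i++) printf("%02x", p[i]);
        printf("\n");                     /* 7276ff000b032475 */
        return 0;
    }

So the gateway here speaks protocol version 2.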

I’d also like to know your thoughts on the question above, so as to end up with some solid guidance for this scenario, which I believe is broad enough to be of interest to a large number of users.