That might not even be needed? The network server assumes the Handler/application would tell if a downlink is needed well within the time frames anyhow? (Though for 3G gateways, that might be troublesome.)
So one of the ideas I'm working on now is first to detect if a node is out of sync, and then to handle it.
First, detecting that a node is out of sync is the essential information. In my application, plus or minus 10 seconds does not really matter. So I send the time from my mote directly and compare it with the timestamp from the gateway. On the server I have the transmission information, can calculate the expected airtime, and can then reach a fair conclusion about whether the device is out of sync.
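As a sketch of that detection step (all function names here are mine, not from any real codebase): the server subtracts the expected airtime from the gateway's receive timestamp and compares the result with the time the mote reported. The airtime calculation below follows the standard Semtech LoRa time-on-air formula.

```python
import math

def lora_airtime(payload_len, sf=7, bw=125_000, cr=1, preamble=8,
                 explicit_header=True, crc=True):
    """Approximate LoRa time-on-air in seconds (Semtech SX127x formula)."""
    t_sym = (2 ** sf) / bw                       # symbol duration
    de = 1 if t_sym > 0.016 else 0               # low-data-rate optimisation
    ih = 0 if explicit_header else 1
    n_payload = 8 + max(
        math.ceil((8 * payload_len - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

def clock_drift(node_time, gateway_rx_time, payload_len, **radio):
    """Estimated node clock error in seconds (positive = node clock is behind)."""
    return gateway_rx_time - lora_airtime(payload_len, **radio) - node_time

# A mote whose reported time is ~30 s behind the gateway timestamp is
# clearly outside the +/-10 s window mentioned above:
drift = clock_drift(node_time=1_000_000.0,
                    gateway_rx_time=1_000_030.06, payload_len=20, sf=7)
print(abs(drift) > 10)  # True -> flag the device as out of sync
```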
Now I know the mote is out of sync and have an estimation of how much. The server can then prepare a downlink packet with a correction factor. The next time the mote sends a signal, it will receive the correction factor. The mote applies the correction factor and sends the time again. This loop can continue until a certain threshold is reached. Of course, it is very important to compute a well-qualified correction factor, to avoid needing too many transmissions.
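The loop described above could look roughly like this (a toy simulation under my own assumptions; in reality each round costs an uplink plus a downlink, which is why a good correction estimate matters):

```python
def sync_loop(node_clock_error, threshold=1.0, damping=0.9, max_rounds=10):
    """Iteratively shrink a node's clock error with downlink corrections.

    `damping` < 1 models an imperfect drift estimate: each round the server
    measures the remaining drift and sends back the negated estimate, the
    mote applies it, and the loop stops once the error is under `threshold`.
    """
    rounds = 0
    while abs(node_clock_error) > threshold and rounds < max_rounds:
        correction = -damping * node_clock_error  # server's estimate, via downlink
        node_clock_error += correction            # mote applies the correction
        rounds += 1
    return node_clock_error, rounds

residual, rounds = sync_loop(node_clock_error=30.0)
print(round(residual, 3), rounds)  # converges in 2 rounds for a 30 s error
```

With a 90 %-accurate estimate, a 30-second error drops under the 1-second threshold after only two transmissions, which illustrates why a highly qualified correction factor keeps the airtime budget small.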
To sync the nodes' RTCs during production (and later during once-a-year maintenance) you can use Bluetooth.
You should have said that the accuracy of the synchronisation was not so important; 10 s is an eternity. +/-1 s accuracy should be achievable with redundant downlinks every now and then, nothing special required.
There is another thread on this topic on GitHub, where user GuntherSchulz explains his solution:
Hello, I am new to this technology. Could someone explain to me what the impacts of bad time synchronisation are, and give me some examples?
Thank you in advance.
example would be… ’ you set your alarm clock (node) to wake up at 0700… but the clock itself is off by 53 minutes ’
how do you set (synchronize) your clock… remotely?
and the impact of this bad time synchronisation would be that you’re too late for your appointment.
For future reference, see the LoRaWAN Application Layer Clock Synchronization Specification v1.0.0:
This document proposes an application layer messaging package running over LoRaWAN to synchronize the real-time clock of an end-device to the network’s GPS clock with second accuracy.
An end-device using LoRaWAN 1.1 or above SHOULD use DeviceTimeReq MAC command instead of this package. […] End-devices with an accurate external clock source (e.g.: GPS) SHOULD use that clock source instead.
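For illustration, the package's AppTimeReq/AppTimeAns messages can be packed and unpacked in a few lines. The field layout below is my reading of the v1.0.0 spec (CID 0x01, DeviceTime as uint32 seconds since the GPS epoch modulo 2^32, little-endian, a 4-bit token in the Param byte); treat it as a sketch to check against the document itself, not a reference implementation.

```python
import struct

CID_APP_TIME = 0x01
GPS_EPOCH_OFFSET = 315_964_800  # Unix seconds at 1980-01-06T00:00:00Z

def encode_app_time_req(unix_time, token, ans_required=False):
    """Uplink: CID | DeviceTime (uint32 LE) | Param (TokenReq bits 3:0, AnsRequired bit 4)."""
    device_time = (int(unix_time) - GPS_EPOCH_OFFSET) & 0xFFFFFFFF
    param = (token & 0x0F) | (0x10 if ans_required else 0)
    return struct.pack("<BIB", CID_APP_TIME, device_time, param)

def decode_app_time_ans(payload):
    """Downlink: CID | TimeCorrection (int32 LE, seconds) | Param (TokenAns bits 3:0)."""
    cid, correction, param = struct.unpack("<BiB", payload)
    assert cid == CID_APP_TIME
    return correction, param & 0x0F
```

A signed 32-bit TimeCorrection is what lets the server nudge the device clock in either direction with a single 6-byte downlink.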
(With contributions from @johan, it seems.)
It took some time to get the actual time on the node via LoRaWAN.
Well, meanwhile we did it. Precision is still to be proved by some serious measurements with a digital oscilloscope and a precise time base. Nevertheless, first trials seem to show that we get better than +/-1 second precision, and that was the goal.
Meanwhile, probably everyone who joined the TTN conference 2019 knows the use case for this.
Note: There is a patent filed for the algorithm used. Not to make money from this, but to prevent others from doing so. So feel free to check and test the source code. Pull requests as well as criticism are highly welcome.
Funny to read my old comments, from when I didn’t see a use case for time sync over LoRaWAN.
Wall clock time syncing a LoRaWAN node in the TTN v2 stack - it’s doable. First tests show we are already close to the goal of around 50 ms precision.
DeviceTimeReq is quite accurate: about 5 ms, even without subtracting the time taken to print to serial.
[2019-03-21 07:20:02.186] at+gpstime
[2019-03-21 07:20:06.035] 1237206024030
[2019-03-21 07:20:06.035] OK
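That returned value is milliseconds on the GPS timescale (epoch 1980-01-06, no leap seconds). Converting it with the 18-second GPS-UTC leap-second offset that applied in 2019 gives 12:20:06.030 UTC, so the serial log timestamps above are presumably local time. A small conversion sketch (function name is mine):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_UTC_OFFSET_S = 18  # GPS-UTC leap-second offset as of 2019

def gps_ms_to_utc(gps_ms):
    """Convert GPS milliseconds (as returned by at+gpstime) to UTC."""
    return GPS_EPOCH + timedelta(milliseconds=gps_ms - GPS_UTC_OFFSET_S * 1000)

print(gps_ms_to_utc(1237206024030))  # 2019-03-21 12:20:06.030000+00:00
```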