Time synchronisation of a Node

The point is to have a time source for sites where GPS reception is impossible even with an external antenna (think of the insides of big buildings and of tunnels, not in .NL), and where the node hardware has no GPS, GSM, or other time source on board to sync its RTC, due to cost and/or battery power. So there is a use case; at least I have one (with many nodes).

Back to the question. LoRaWAN has a built-in latency for class A devices of 1 second when using the RX1 slot and 2 seconds with RX2. When using RX2, the data rate for the downlink is fixed, and the node itself knows the data rate of its uplink. So it should be possible to give a good estimate of the latency on the node's side when querying time over a LoRa RX2 connection. This would make it possible to calculate absolute time over a query loop, like the NTP protocol does.
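A minimal sketch of that idea, under one assumption that is not stated above: the responder puts its own timestamp of the moment the uplink was received into the downlink payload. If that is the case, the fixed RX2 delay and the downlink airtime do not even need to be estimated, because the node only needs the local time elapsed since its transmission completed. All names (onTxComplete, estimateUtcMs) are made up for illustration.

```cpp
#include <cstdint>

// Hypothetical node-side sketch: the uplink is a time request, and the downlink
// (received in RX2) carries the server's timestamp of the moment the uplink was
// received, in milliseconds UTC.

static uint32_t txDoneMs;   // local millis() captured when the uplink TX completed

// Called from the radio/LMiC event handler when TX completes (e.g. EV_TXCOMPLETE).
void onTxComplete(uint32_t localMillis) {
    txDoneMs = localMillis;
}

// Called when the time downlink arrives. serverMsAtUplinkRx is the timestamp the
// responder put into the payload; localMillisNow is the node's clock right now.
// Returns the estimated current UTC time in milliseconds.
uint64_t estimateUtcMs(uint64_t serverMsAtUplinkRx, uint32_t localMillisNow) {
    // End of the uplink on air is practically the moment the gateway finished
    // receiving it, so only the locally elapsed time since TX done is needed.
    uint32_t elapsed = localMillisNow - txDoneMs;   // unsigned math handles wrap-around
    return serverMsAtUplinkRx + elapsed;
}
```

If the responder stamps the start of reception instead, the uplink airtime has to be added as well, which the node can compute from its own data rate, as noted above.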

The time responder (an “NTP server” of sorts) could be a process running on the LoRa gateway, to keep the round-trip times for time requests short.

So far for my theory. I haven’t tried it yet; I’m wondering if someone has done it already?

Any updates on how you solved this problem? I’m in the same boat, where I need to synchronise the RTC after a battery change. So I’m very interested to hear your solution, or what you have tried since your last post.

For use cases where relying on the gateway/backend timestamps suffices:

…make sure the node has joined before the first data is sent. (For example, when not joined yet, LMiC will start joining as soon as the first data is queued, and one does not know how long joining can take.) Once joined, the node will send right away, unless its LoRaWAN stack implements some random delay, as mentioned in the specifications, emphasis mine:

2.1 LoRaWAN Classes

The transmission slot scheduled by the end-device is based on its own communication needs with a small variation based on a random time basis (ALOHA-type of protocol).

If no such random variation is used, or if the value that was used can be added to the data, then the node’s time can be calculated based on the time the packet was received.
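As for the join-before-send advice above: with the classic Arduino LMiC API this boils down to only queueing data once EV_JOINED has fired. A rough sketch (payload, port, and the sendTimeProbe name are just examples; the usual key/pin configuration is omitted):

```cpp
#include <lmic.h>
#include <hal/hal.h>

// Only queue data once the join has completed, so the uplink (and hence the
// gateway's receive timestamp) is not delayed by an unknown join duration.
static bool joined = false;

void onEvent(ev_t ev) {
    switch (ev) {
        case EV_JOINED:
            joined = true;
            break;
        case EV_TXCOMPLETE:
            // uplink done (and RX windows handled); schedule the next send elsewhere
            break;
        default:
            break;
    }
}

void sendTimeProbe() {
    if (!joined || (LMIC.opmode & OP_TXRXPEND)) {
        return;                       // not ready: still joining, or a TX is pending
    }
    uint8_t payload[1] = { 0x01 };    // example "time request" marker
    LMIC_setTxData2(1, payload, sizeof(payload), 0);  // port 1, unconfirmed
}
```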

(Well, assuming one knows whether the gateway’s timestamp and/or the backend’s timestamp are taken when reception starts, or when it is complete.)

When only synchronizing on a battery change, I’d guess that the gateway/server time is much more precise (as it can sync to internet time regularly) than the node’s RTC, which will suffer from drift?

It’s a bit old-style, but there are several atomic time transmitter stations around the world that you can pick up with low-cost hardware, very low power consumption and great signal penetration.

Read about DCF77 (a German station that works across Europe), https://en.m.wikipedia.org/wiki/DCF77, and look for modules like this: https://world.taobao.com/item/548186965810.htm?fromSite=main


We’re working on a time sync protocol.

You are probably aware of this: note that the current gateways are half-duplex (so they cannot listen on any channel/SF while transmitting a downlink), which is why downlinks are limited to at most 10 messages per day. So any use case for which the gateway/backend timestamps suffice should really, really, really just rely on those.

(That said: I’m curious what you’ll come up with!)

This seems sufficient for our use case, because our nodes have a buffered on-board RTC which can keep time accurate to within ±0.5 seconds for up to 7 days. That means we only need a time sync with the backend once every 7 days to keep the nodes’ RTCs synchronized.
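As a quick, illustrative calculation of what such a budget implies (the ±0.5 s / 7 days figure is from the post above; the tighter ±0.1 s target is a made-up example):

```cpp
#include <cstdio>

// Translate an RTC accuracy spec (error over a given period) into a drift rate,
// and from that the resync interval needed for an arbitrary tolerance.
int main() {
    const double specErrorSec = 0.5, specDays = 7.0;
    const double driftPpm = specErrorSec / (specDays * 86400.0) * 1e6;      // ~0.83 ppm
    const double toleranceSec = 0.1;                                        // example: tighter target
    const double resyncDays = toleranceSec / (driftPpm * 1e-6) / 86400.0;   // ~1.4 days
    std::printf("drift %.2f ppm -> resync every %.1f days for +/-%.1f s\n",
                driftPpm, resyncDays, toleranceSec);
    return 0;
}
```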

(PS: This is what’s going on here: http://dmm.travel/news/artikel/lesen/2017/01/ueberwachung-von-bahnhofsuhren-78979/ )

@Verkehrsrot We’re looking to do something similar (sending a clock sync message once a day or so, in order to keep a local high-precision RTC from drifting over time). Unless I’m missing something, this is as simple as implementing an application-level downlink message that the node understands (i.e. one that contains the time value, in our case “seconds since the Unix Epoch”). Did you find that this is any more complex than I have described?
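The thread does not pin down a payload format, but as an illustration of how simple the application-level message can be, here is one possible encoding: 4 bytes, big-endian, seconds since the Unix epoch, on some application port. The function names and setRtc() are made up.

```cpp
#include <cstdint>
#include <cstddef>

// Server side: build the 4-byte big-endian sync downlink payload.
void encodeEpoch(uint32_t epochSeconds, uint8_t out[4]) {
    out[0] = (epochSeconds >> 24) & 0xFF;
    out[1] = (epochSeconds >> 16) & 0xFF;
    out[2] = (epochSeconds >> 8) & 0xFF;
    out[3] = epochSeconds & 0xFF;
}

// Node side: decode the payload and set the RTC via whatever driver is available.
bool handleTimeDownlink(const uint8_t *payload, size_t len) {
    if (len != 4) return false;
    uint32_t epochSeconds = (uint32_t)payload[0] << 24 |
                            (uint32_t)payload[1] << 16 |
                            (uint32_t)payload[2] << 8  |
                            (uint32_t)payload[3];
    // setRtc(epochSeconds);  // hypothetical RTC driver call
    (void)epochSeconds;
    return true;
}
```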

It is much more complicated. If you want exact time, you need to calculate the latency of the connection.

But how?

Theoretically that sounds plausible. You will have to take lost downlinks into account, though. So your node should confirm downlinks, which is a bit of an upside-down world. Otherwise the node might still get out of sync, or your gateway would have to send a lot of synchronisation downlinks, possibly violating the airtime limits.

The question is whether LoRaWAN is the right technology for this job. You would be scheduling downlinks at the gateway level, while that is the job of the network server. That will effectively make your gateway non-compliant.

In my experience working with WSNs for phasor measurements: if you want pinpoint accuracy, you have to shell out some extra bucks.

That might not even be needed? The network server assumes the Handler/application will tell it whether a downlink is needed well within the time frames anyhow? (Though for gateways on a 3G backhaul, that might be troublesome.)

So one of the ideas I’m working on now is to first detect whether a node is out of sync, and then handle it.
First, detecting that a node is out of sync is essential information. In my application, plus or minus 10 seconds does not really matter. So I send the time from my mote directly and compare it with the timestamp from the gateway. On the server I have the transmission information, so I can calculate the expected airtime and reach a fair conclusion about whether the device is out of sync.
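For reference, “calculate the expected airtime” can be done with the standard LoRa time-on-air formula from Semtech’s AN1200.13. A sketch of that formula (parameter names are illustrative; LoRaWAN uplinks use an explicit header with CRC on, and payloadLen is the full PHY payload length):

```cpp
#include <algorithm>
#include <cmath>

// Time-on-air of a LoRa packet (Semtech AN1200.13), which the server can use to
// relate the gateway's receive timestamp to the moment the node started sending.
// payloadLen in bytes; bwHz in Hz; codingRate 1 means 4/5.
double loraAirtimeSec(int payloadLen, int sf, double bwHz,
                      int codingRate = 1, int preambleSymbols = 8,
                      bool crcOn = true, bool explicitHeader = true) {
    const double tSym = std::pow(2.0, sf) / bwHz;                  // symbol duration
    const bool lowDrOpt = (bwHz == 125000.0 && sf >= 11);          // low data rate optimisation
    const double num = 8.0 * payloadLen - 4.0 * sf + 28.0
                       + 16.0 * (crcOn ? 1 : 0) - 20.0 * (explicitHeader ? 0 : 1);
    const double den = 4.0 * (sf - (lowDrOpt ? 2 : 0));
    const double payloadSymbols = 8.0 + std::max(std::ceil(num / den) * (codingRate + 4.0), 0.0);
    const double tPreamble = (preambleSymbols + 4.25) * tSym;
    return tPreamble + payloadSymbols * tSym;
}
// Example: a 20-byte PHY payload at SF9 / 125 kHz gives roughly 0.185 s on air.
```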

Now I know whether the mote is out of sync, and I have an estimate of how far out of sync it is. The server can then prepare a downlink packet with a correction factor. The next time the mote sends a signal, it will receive the correction factor. The mote then applies the correction factor and sends its time again. This loop can continue until a certain threshold is reached. Of course, it is very important to give a well-qualified correction factor, to avoid too many transmissions.
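A rough sketch of the server side of that loop, under the assumptions above (names are made up; the airtime value would come from a formula like the one in the previous sketch):

```cpp
#include <cmath>

// Decide whether a correction downlink is needed for a mote that reports its time.
// gatewayRxSec: gateway timestamp for the uplink; nodeReportedSec: time the mote
// put into its payload; airtimeSec: expected time-on-air of that uplink.
// Returns true and fills correctionSec when a correction downlink should be queued.
bool needsCorrection(double gatewayRxSec, double nodeReportedSec, double airtimeSec,
                     double thresholdSec, double &correctionSec) {
    // The mote stamps the payload roughly when transmission starts; the gateway
    // stamps it when reception ends, so subtract the airtime before comparing.
    double offset = gatewayRxSec - airtimeSec - nodeReportedSec;
    if (std::fabs(offset) <= thresholdSec) {
        correctionSec = 0.0;
        return false;                 // close enough: save the downlink
    }
    correctionSec = offset;           // positive means the mote's clock is behind
    return true;                      // queue a downlink carrying this correction
}
```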

To sync the nodes’ RTCs during production (and later during once-a-year maintenance) you can use Bluetooth to set the time.

You should have said that the accuracy of the synchronisation was not so important; 10 s is an eternity :wink:. ±1 s accuracy should be achievable with redundant downlinks every now and then, nothing special required.

There is another thread on this topic on GitHub, where user GuntherSchulz explains his solution:

Timesyncing a node on a LoRaWAN network

Hello, I am new to this technology. Could someone explain to me what the impacts of bad time synchronisation are, and give me some examples?

Thank you in advance.
Regards

An example would be… ‘you set your alarm clock (node) to wake up at 07:00… but the clock itself is off by 53 minutes.’
How do you set (synchronize) your clock… remotely?

and the impact of this bad time synchronisation would be that you’re too late for your appointment. :wink:

you’re welcome


For future reference, see the LoRaWAN Application Layer Clock Synchronization Specification v1.0.0:

This document proposes an application layer messaging package running over LoRaWAN to synchronize the real-time clock of an end-device to the network’s GPS clock with second accuracy.

Note:

An end-device using LoRaWAN 1.1 or above SHOULD use DeviceTimeReq MAC command instead of this package. […] End-devices with an accurate external clock source (e.g.: GPS) SHOULD use that clock source instead.

(With contributions from @johan, it seems.)
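For nodes whose stack supports it, MCCI’s arduino-lmic fork exposes the DeviceTimeReq MAC command via LMIC_requestNetworkTime() (when built with LMIC_ENABLE_DeviceTimeReq). A rough sketch, assuming that fork; the GPS-to-Unix conversion uses 18 leap seconds (the value at the time of writing), and setRtc() is a hypothetical RTC driver call:

```cpp
#include <lmic.h>
#include <hal/hal.h>

// Callback invoked once the network has (or has not) answered the DeviceTimeReq.
static void onNetworkTime(void *pUserData, int flagSuccess) {
    if (flagSuccess != 1) return;                       // network did not answer
    lmic_time_reference_t ref;
    if (LMIC_getNetworkTimeReference(&ref) != 1) return;
    // ref.tNetwork is GPS seconds at local tick ref.tLocal; convert to Unix time.
    // 315964800 = Unix time of the GPS epoch; 18 = GPS-UTC leap seconds (assumed current).
    uint32_t unixSeconds = ref.tNetwork + 315964800UL - 18UL;
    uint32_t ticksSince = os_getTime() - ref.tLocal;    // local ticks elapsed since the reference
    unixSeconds += osticks2ms(ticksSince) / 1000;
    // setRtc(unixSeconds);  // hypothetical RTC driver call
    (void)unixSeconds;
}

void queueTimeRequestingUplink() {
    LMIC_requestNetworkTime(onNetworkTime, nullptr);    // piggybacks on the next uplink
    uint8_t payload[1] = { 0x00 };
    LMIC_setTxData2(1, payload, sizeof(payload), 0);    // port 1, unconfirmed
}
```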