I’m trying to use Application Layer Clock Synchronization.
I enabled it in the Console under General Settings, and I’ve implemented communication based on the TS003 specification. I believe I’m sending valid uplink packets, but I’m not receiving any downlinks.
I haven’t been able to find detailed information on what exactly needs to be configured or how this feature is supposed to work in practice.
From what I understand, it should work starting from version 3.25.0 when enabled, using FPort 202, as stated in the specification.
However, I haven’t found any practical examples or usage guidelines anywhere.
For context: I’m using an Adafruit Feather 32u4 LoRa board with the SlimLora library.
Due to the limited flash memory (32kB), I’m looking for a lightweight way to synchronize real-time clock data.
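For reference, this is roughly how I build the uplink. Note this is my reading of TS003 from memory, so treat the field layout as an assumption and check it against the spec: the clock-sync uplink on FPort 202 is the AppTimeReq package command, i.e. CID 0x01 followed by a 4-byte DeviceTime (seconds since the GPS epoch, 1980-01-06, little-endian) and a Param byte carrying the 4-bit token plus the AnsRequired bit.

```cpp
#include <cstddef>
#include <cstdint>

// Build a TS003 AppTimeReq payload for FPort 202.
// Field layout is my assumption from the spec, please verify against TS003:
//   byte 0    : CID 0x01 (AppTimeReq)
//   bytes 1-4 : DeviceTime, seconds since GPS epoch, little-endian
//   byte 5    : Param - TokenReq in bits 0..3, AnsRequired in bit 4
// Returns the number of bytes to send on FPort 202.
size_t buildAppTimeReq(uint8_t *buf, uint32_t deviceTime,
                       uint8_t token, bool ansRequired) {
    buf[0] = 0x01;                         // CID: AppTimeReq
    buf[1] = deviceTime & 0xFF;            // LoRaWAN multi-byte fields
    buf[2] = (deviceTime >> 8) & 0xFF;     // are little-endian
    buf[3] = (deviceTime >> 16) & 0xFF;
    buf[4] = (deviceTime >> 24) & 0xFF;
    buf[5] = (token & 0x0F) | (ansRequired ? 0x10 : 0x00);  // Param
    return 6;
}
```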
As v1.0.3 is the most common LoRaWAN version in use and comes with the DeviceTimeReq MAC command, I believe this application-layer functionality was included to cover v1.0.2 devices, where a time request has to be built into the user firmware. And most devices don’t need to know the absolute real time, so it’s not something that’s going to be in common use.
Given the hardware you are using, SlimLoRa is a sensible option, assuming you are using @civ’s fork; coincidentally, their repo has just had an update for DeviceTimeReq, so that may be your best avenue to explore. Or use LMIC, which can be put on a diet to make it fit.
All that said, for “context”, telling us the WHY would allow us to come up with some better suggestions. The main why is why such an old piece of hardware. The second is why you need the actual time. The third is how you are going to keep the clock accurate on a device/MCU that will drift by seconds a day even in a temperature-controlled environment; if the temperature varies, you will have to rely on frequent TimeReqs, and lots of downlinks aren’t great for the network.
I suggested @vitkolar use an RTC. With a rare DeviceTimeReq, the device could re-set the RTC once a year, I suppose.
@descartes a) What about asking for a downlink every month to correct the unstable AVR timer? Is that again an overuse of the network? b) Since the device will ask for a downlink after ADR_ACK_LIMIT anyway, maybe it’s a good idea to request the LNS time together with ADRACKReq? One downlink, two problems solved.
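On the device side, applying the answer is the easy part. A minimal sketch of decoding the AppTimeAns downlink, again based on my reading of TS003 (CID 0x01 on FPort 202, then a signed little-endian 32-bit TimeCorrection in seconds, then a Param byte whose low nibble echoes the token from the matching AppTimeReq), so verify the layout before relying on it:

```cpp
#include <cstddef>
#include <cstdint>

// Decode a TS003 AppTimeAns downlink received on FPort 202.
// Layout is my assumption from the spec:
//   byte 0    : CID 0x01 (AppTimeAns)
//   bytes 1-4 : TimeCorrection, signed int32 seconds, little-endian
//   byte 5    : Param - TokenAns in bits 0..3
// Returns false if the payload doesn't look like an AppTimeAns or the
// token doesn't match; on success, *correction is the number of seconds
// to add to the device clock / RTC.
bool parseAppTimeAns(const uint8_t *buf, size_t len,
                     uint8_t expectedToken, int32_t *correction) {
    if (len < 6 || buf[0] != 0x01) return false;
    if ((buf[5] & 0x0F) != (expectedToken & 0x0F)) return false;
    uint32_t raw = (uint32_t)buf[1] | ((uint32_t)buf[2] << 8) |
                   ((uint32_t)buf[3] << 16) | ((uint32_t)buf[4] << 24);
    *correction = (int32_t)raw;
    return true;
}
```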
Absolutely for the best, because I tried to calibrate an old-series AVR to use as a clock. The proof of concept ran alongside my RF-receiver alarm clock for comparison for a few weeks, until I got tired of correcting its drift every few days. Even cheap RTCs keep the right time for a few months; it all depends on what the use case is for having precision time. The AVR would be off by about 1.5 seconds every day, despite all sorts of algorithms to take feedback from the adjustments, and this was in a temperature-stable environment!
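For scale (my arithmetic, not measured beyond the 1.5 s/day figure above): that drift is roughly 17 ppm of frequency error, so even with a monthly correction downlink the clock would be up to about 45 seconds out just before each sync.

```cpp
// Back-of-the-envelope numbers for the 1.5 s/day drift anecdote.
const double kSecondsPerDay = 86400.0;
const double kDriftPerDay   = 1.5;                                  // seconds
const double kDriftPpm      = kDriftPerDay / kSecondsPerDay * 1e6;  // ~17.4 ppm
const double kDriftPerMonth = kDriftPerDay * 30.0;                  // ~45 s/month
```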
As for downlinks, for a device that’s ‘safe’ - ie easily heard by two or more gateways, so link checks don’t need to be too frequent - there seems to be a general consensus that one a fortnight is about right. And sure, add ADR to get the most out of the downlink.
Which is why I asked the OP why they wanted to do this.
I’ve only got one deployment that uses the actual time via TimeReq, and even then it doesn’t impact the uplink time, ie the devices aren’t designed to all uplink at the same time.
Can’t remember the drift direction; it was like nailing jelly to the wall over the space of a month, for some mad-cap scheme that would have cost far more than the RTC or a SAMD/STM32 upgrade would have …