For use cases where the following suffices:
…make sure the node has joined before the first data is sent. (For example, when not yet joined, LMIC will start joining as soon as the first data is queued, and one does not know how long joining will take.) Once joined, the node will send right away, unless its LoRaWAN stack implements some random delay, as mentioned in the specifications, emphasis mine:
> **2.1 LoRaWAN Classes**
>
> The transmission slot scheduled by the end-device is based on its own communication needs *with a small variation based on a random time basis* (ALOHA-type of protocol).
If no such random variation is used, or if the value that was used can be included in the data, then the node’s time can be calculated from the time the packet was received.
(Well, provided one knows whether the gateway’s timestamp and/or the backend’s timestamp is taken when reception starts, or when it completes.)
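To make the calculation above concrete, here is a minimal sketch (with hypothetical names and an assumed setup): the node includes its own RTC reading, taken at the start of transmission, in the uplink payload, and the gateway is assumed to timestamp the *end* of reception, so the packet’s airtime must be subtracted first.

```python
def node_clock_offset(gateway_rx_time: float, airtime: float,
                      node_rtc_time: float) -> float:
    """Estimate how far the node's RTC deviates from gateway time.

    All values in seconds. Assumes `gateway_rx_time` marks the end of
    reception and `node_rtc_time` was sampled at transmit start. Any
    random ALOHA-style delay is assumed to be zero or already reported
    by the node and accounted for in `node_rtc_time`.
    """
    # True transmit-start moment, by the gateway's (internet-synced) clock:
    tx_start = gateway_rx_time - airtime
    # Positive result: node's RTC runs ahead; negative: it lags behind.
    return node_rtc_time - tx_start

# Example: packet fully received at t=1000.0 s, airtime 0.2 s, node
# reported 999.0 s -> the node's clock is about 0.8 s behind.
print(node_clock_offset(1000.0, 0.2, 999.0))
```

The same correction would apply on the backend side, with extra care for any network latency between gateway and server if the backend’s timestamp is used instead.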
When only synchronizing on battery change, I’d guess that the gateway/server time is much more precise (as it can sync to internet time regularly) than the node’s RTC, which will suffer from drift?
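A back-of-the-envelope calculation (with assumed, typical numbers) shows why that drift matters: a common 32.768 kHz RTC crystal with a ±20 ppm tolerance can accumulate several minutes of error over the lifetime of one battery.

```python
def max_drift_seconds(ppm: float, days: float) -> float:
    """Worst-case clock drift, in seconds, for a given crystal
    tolerance (parts per million) over a number of days."""
    return ppm * 1e-6 * days * 24 * 3600

# Assumed example: +/-20 ppm over one year is roughly +/-10.5 minutes.
print(max_drift_seconds(20, 365) / 60)
```

So even a yearly battery-change resync would leave the node’s clock off by minutes near the end of the interval, which is why deriving time from received packets is attractive.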