LMIC os_runloop() conflicts with sensor reading function

Hi folks,

I’m hitting a bit of a dead end trying to work around the os_runloop() routine of LMIC. I’m using an ultrasonic sensor that also relies on critical timing in order to measure distance. When the reading function runs in setup(), before doing anything LMIC related, it works fine and the sensor spits out the right data, so it’s not the hardware, nor my reading function. But when I call my sensor read inside the do_send(osjob_t* j) function, before sending a packet, the sensor does not have time to perform the measurement. Note that my hardware is based on a TPL5111 nano timer to raise a hardware interrupt and wake up my system; all of this works very well and I get very good OTAA stability and very low sleep current, 30 µA.

If someone can point me to where to execute this time-critical sensor reading, that would be fantastic.

Here is my code: https://pastebin.com/fYJKUkXJ

Thanks !

You will probably run into several issues with your code.

First, the code which reads sensor data may break LMIC timing or vice versa, e.g. if you’re using blocking routines. The solution is to keep time-critical work out of an LMIC job. Put it in a separate RTOS task instead, then take care of RTOS task priorities and context switching.

Second, your application must take care of a few things when putting the CPU to sleep while LMIC is running. Look here for details.

Thank you for your answers! Any suggestion as to how such a system could be implemented simply in the Arduino dev environment? Couldn’t I just halt LMIC when I know for sure there is no RX/TX planned, i.e. when my node has just woken up and LMIC does not need to do anything?

The Arduino environment does not support real multitasking, as far as I know.
But LMIC implements a simple OS job scheduler (osjob_t) which you could use.

Put your sensor reading code in a separate osjob_t job, e.g. read_sensor_job(), and remove it from the send job. Take care that your code in read_sensor_job() has no blocking parts, so the LMIC OS scheduler can do its job. Structure your code, e.g., following the concept of an event-driven FSM. This way coexistence with LMIC should work.
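As a hedged sketch of that split (function names like start_measurement()/read_result() are illustrative, and the 50 ms settle time is a placeholder for whatever the sensor actually needs):

```cpp
static osjob_t read_sensor_job, send_job;

void do_send(osjob_t* j) {
    uint16_t d = read_result();                 // hypothetical: collect the finished reading
    uint8_t payload[2] = { (uint8_t)(d & 0xFF), (uint8_t)(d >> 8) };
    LMIC_setTxData2(1, payload, sizeof(payload), 0);
}

void do_read(osjob_t* j) {
    start_measurement();                        // hypothetical non-blocking trigger
    // Don't busy-wait here; return so the scheduler keeps running,
    // and come back once the sensor has had time to finish.
    os_setTimedCallback(&send_job, os_getTime() + ms2osticks(50), do_send);
}
```

The key point is that do_read() returns immediately and lets os_runloop() keep servicing LMIC while the measurement settles.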

For sleeping while LMIC is running, follow the link given above. You will have to implement a solution which advances millis() after wake-up from sleep.

A simple solution to avoid having to advance millis() could be to restart the device every time it wakes up from sleep.

About the stability of LMIC after deep sleep: I have actually never had any issues, and this without implementing any of the recommendations you suggest. It simply works; packets get sent properly and arrive at the destination, and this over a long period of time. I have a node transmitting every 5 minutes that has been running for weeks and has never skipped a beat. So I’m not sure whether I should modify the current code. Resetting the device after deep sleep would make me lose my OTAA credentials and therefore force me to join every 5 minutes, which is not ideal either.

Thank you for the suggestion about the OS job scheduler; I will see what I can do with it and whether it sorts out my issues.

You won’t be able to receive downlink data on your node after waking up from sleep.
If you only need to receive data during the OTAA join, this may not affect you. But it may affect your LoRaWAN network provider, because your node then can’t receive MAC commands sent by the network controller, which could result in your node being blocked from the network.


Yes, in my case no downlink data is expected from the gateway, only RX window after join.

Implementing the OS job scheduler does not seem to work; it yields the same result, with the sensor not having time to perform the measurement.

void wakeUp() {
    // empty ISR, only used to wake the MCU from sleep
}

void do_read(osjob_t* j) {
    digitalWrite(en_pin, HIGH);           // power up the sensor
    sensor_data = get_distance();
    data[0] = sensor_data & 0xFF;         // low byte
    data[1] = (sensor_data >> 8) & 0xFF;  // high byte
}

Lots of mistaken information is being posted here.

For a class A node, the timing-critical part of LMiC is between transmit and receive. Delaying transmit by waiting for something is really not an issue.

Scheduling another job could be a cleaner solution, but LMiC’s scheduler is cooperative and does not account for the time jobs will take, so anything that could result in the sensor job getting run between transmit and receive would often break things, and there isn’t an easy way to avoid that with the existing code. Doing your sensor readings on the way to transmitting may be simplest.
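In sketch form (a suggestion, not the library’s prescribed pattern; the pin name and the 100 ms warm-up are placeholders for your hardware), “on the way to transmitting” just means doing the slow reading at the top of do_send(), before handing the packet to LMiC:

```cpp
void do_send(osjob_t* j) {
    digitalWrite(en_pin, HIGH);     // power the sensor
    delay(100);                     // warm-up: placeholder, check your sensor's datasheet
    uint16_t d = get_distance();    // a blocking read is harmless *before* TX
    uint8_t payload[2] = { (uint8_t)(d & 0xFF), (uint8_t)(d >> 8) };
    LMIC_setTxData2(1, payload, sizeof(payload), 0);
}
```

Everything here happens before the transmit, so the tight TX-to-RX timing is never at risk.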

In terms of updating millis(), this is primarily an issue if you sleep between transmit and receive - so it’s simplest if you just stay awake for that second.

The second reason for wanting millis() to keep advancing is that if you are in a region where LMiC needs to do duty-cycle limiting, not having millis() advance during sleep will cause it to think time is passing very slowly, and it will absurdly limit itself compared to the actual passage of time.

A simple solution to avoid having to advance millis() could be to restart the device every time it wakes up from sleep.

This is a truly terrible idea. It fixes the problem with rate limiting only by throwing away the rate limiting history.

Worse, cold-restarting LMiC has to be avoided: a compliant LoRaWAN node must track and never repeat its frame count within a session, so you’d have to have a place to save and restore it.

Some will try to use OTAA to create a new session instead, but this simply moves the problem and creates a new one. Each join request must use a fresh, not-previously-used join nonce, so you then have to keep track of those. And you can only join so many times before the device EUI itself is worn out. Plus, joining is very expensive for the network, as it costs at bare minimum one downlink, often more. So don’t do a cold restart and rejoin after each sleep, either.
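If a cold restart is truly unavoidable, MCCI arduino-lmic exposes enough state to persist the session instead of rejoining. A hedged sketch (the storage step is left to the reader; the LMIC field and function names below are from MCCI arduino-lmic, but verify them against your version):

```cpp
#include <lmic.h>

struct Session {                   // state that must survive the power cycle
    u4_t netid, seqnoUp, seqnoDn;
    devaddr_t devaddr;
    u1_t nwkKey[16], artKey[16];
};

void save_session(Session &s) {    // call before powering down
    LMIC_getSessionKeys(&s.netid, &s.devaddr, s.nwkKey, s.artKey);
    s.seqnoUp = LMIC.seqnoUp;
    s.seqnoDn = LMIC.seqnoDn;
    // ...write s to EEPROM/flash here...
}

void restore_session(Session &s) { // call after os_init() + LMIC_reset()
    LMIC_setSession(s.netid, s.devaddr, s.nwkKey, s.artKey);
    LMIC_setSeqnoUp(s.seqnoUp + 1);  // a frame count must never repeat
    LMIC.seqnoDn = s.seqnoDn;
}
```

Bumping seqnoUp on restore guards against the last saved value already having been used on air.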


Digging a little deeper, I think the root cause of my problem is mostly independent of LMIC.

The sensor runs perfectly fine when en_pin is kept HIGH, which points to some critical start-up timing the ultrasonic sensor needs to initialise: the sensor I am using (SRF05) has a slave MCU that performs the ultrasonic transducer measurement, although no source code or information is available on this timing. So the issue should be that I am not waiting long enough before sending my ping request after waking up. However, and this could be LMIC dependent, I can’t seem to force a delay when my node wakes up from deep sleep; any delay I write does not get executed (weirdly).

A workaround would be to use the interrupt signal from the TPL5111 to enable the boost converter as well as wake up the MCU: an RC circuit or 555 timer would delay the interrupt going into the MCU, while the boost-enable pin would get it un-delayed, with the value tuned to a delay that satisfies the sensor’s slave MCU. This seems complicated, but at this point I find such a solution more elegant than messing around with the very fragile LMIC runtime.

If anyone has a better suggestion that does involve a hardware fix, I would be very happy to hear it.

Regarding the other issue of waking an LMIC node from deep sleep and the LoRaWAN standard: I have done some more research and it seems I am not doing anything wrong. And again, this part of the node works like a charm.


Seems like what you really need to do is understand, preferably by documentation but failing that by experiment, what the actual requirements of your sensor are.

Likely you could wake up, set the sensor pin, go back to sleep, and then wake up to get the answer. With many processors, you wouldn’t actually have to be fully awake to hold a pin in a state, though some of the simpler ones may force that, or keep you in the lightest sleep states to do it.

Realistically though, it’s unlikely that running your main processor at an ordinary clock rate for whatever small amount of time it takes to warm up and prepare the sensor and take a reading is going to be a serious battery expense, compared to transmitting the reading.

Unless your main controller is a bit primitive rather than being a modern traditional MCU, adding hardware is not the answer; understanding the requirement and leveraging the hardware you have is.

Hello, excuse my English. I don’t know if this is the right place to post my problem, but I want to read a sensor via a sampling function that runs for 1 second; when the sensor detects a certain value after sampling, it sends the data. However, after adding any code that involves a delay, the downlink stops working. For example, I call this function to do the sampling:
int muestra() {
    NumPulsos = 0;                              // incremented in the interrupt routine
    interrupts(); delay(1000); noInterrupts();  // count pulses for 1 second
    frecuencia = NumPulsos;                     // pulses per second = frequency in Hz
    return frecuencia;
}
I tried removing the interrupts() and noInterrupts() calls just to check whether this function conflicts with the downlink, but it still doesn’t work; the downlink only fails when I put in a delay. Also, I need to monitor the sensor continuously to know whether it is necessary to send the data or not.

If anyone has code that does something similar, I would appreciate it very much.

I would greatly appreciate any help you can give me. Thanks

Typically what this would mean is that you should avoid dealing with the sensor during the interval between transmitting and the end of the second receive window.

Also, I need to monitor the sensor continuously to know whether it is necessary to send the data or not.

That’s going to be very difficult.

It’s also probably a bad idea to begin with - since you tend to transmit based on a sensor reading, you probably should not try to read the sensor and transmit again so immediately anyway.

If you really want to do it, your simplest option might be to add another processor to manage the sensor. Or switch to a node where the LoRaWAN stack runs in its own processor (probably in a module combining that with the radio).

If you really want to try to work out how to do it all in one, you’d probably want to start a non-blocking read of the sensor about halfway in between the end of the transmission and the start of the receive window. Then you can claim the result a second later. The really tricky part though is that if you want to keep your sensor reading times perfectly regular you’ll have to schedule your transmission by backing up the packet duration from when you want it to end.
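One hedged way to make the 1-second pulse count non-blocking using LMiC’s own scheduler (variable names follow the earlier snippet; placing these jobs safely relative to the RX windows is exactly the hard part this post warns about):

```cpp
static osjob_t sample_job;

void end_sample(osjob_t* j) {
    frecuencia = NumPulsos;          // pulses counted in the last second = Hz
    // decide here whether the value warrants queuing a transmit
}

void start_sample(osjob_t* j) {
    NumPulsos = 0;                   // reset the counter the ISR increments
    os_setTimedCallback(&sample_job, os_getTime() + sec2osticks(1), end_sample);
}
```

Unlike delay(1000), this returns control to os_runloop() during the whole sampling second, so LMiC can still hit its receive windows.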

That’s going to be fairly expert usage and re-working of LMiC.

Did you solve it? I am having exactly the same problem, using TPL5110 for waking up from deep sleep.

You can use an ESP32, running LMIC on one core and the sensor application on the other. The tasks can communicate, e.g., via queues.
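A hedged sketch of that two-core split (get_distance() stands in for your blocking sensor read; stack size and sample rate are placeholders): LMIC loops on the Arduino core, the sensor task samples on core 0, and finished readings travel over a FreeRTOS queue.

```cpp
#include <Arduino.h>

QueueHandle_t readings;

void sensorTask(void *arg) {
    for (;;) {
        uint16_t d = get_distance();          // blocking is fine on this core
        xQueueSend(readings, &d, portMAX_DELAY);
        vTaskDelay(pdMS_TO_TICKS(1000));      // sample once a second
    }
}

void setup() {
    readings = xQueueCreate(4, sizeof(uint16_t));
    xTaskCreatePinnedToCore(sensorTask, "sensor", 2048, nullptr, 1, nullptr, 0);
    // os_init(); LMIC_reset(); ... as usual
}

void loop() {
    os_runloop_once();                        // LMIC stays responsive on this core
    uint16_t d;
    if (xQueueReceive(readings, &d, 0) == pdTRUE) {
        // stash d for the next do_send()
    }
}
```

The zero timeout on xQueueReceive keeps loop() non-blocking, so LMIC’s timing is never starved.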

Why do you need to read the sensor so frequently?

What is it that could change so fast that you’d need to re-read the values whilst transmitting?

And how would you control the number of times the device sends, if it is changing quickly, to keep within the fair use policy and local legal requirements?

To fully capture what is going on, any sensor needs to be read in excess of its analog bandwidth.

Something like temperature doesn’t normally change that quickly (though seeing that it does is exactly what one type of fire alarm does); but there are things which change rapidly, such as motion, light used to detect people moving around, temperature indicating an exterior door opened, etc.

If the situation being measured can change quickly, and software needs to be able to evaluate that, then the sensor needs to be read at a high rate, with the evolution of the readings distilled down in software into metrics such as an average and “event of interest” reports which can fit through the narrow capacity of a LoRaWAN network.

In the long run, getting such distillation right is one of the most fundamental challenges in sensing.

Indeed, but in this instance the OP appears to be trying to read the sensor whilst transmitting which as you said:


So if we know why they need to read the sensor so frequently, we may be able to suggest other ways of achieving the desired effect - or, as you suggest:

And at one reading a second, I doubt this is an oversampling issue to increase resolution.