Power consumption with LMIC

Hello,

What is your current consumption in standby mode when using the LMIC library?

I am testing with a Whisper Node and I measure around 90 uA. I am using a multimeter for the measurement, for lack of better equipment.

While testing with a script provided by the maker I measure around 15 uA during sleep.
As far as I can see it uses the LowPower library for Arduino, and it also puts the radio to sleep.
LMIC is supposed to do the latter, and I also inserted the former into the code according to this.

I am just looking for the reason for such a difference, and whether I can drop the current usage to around 15 uA.
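To put the two readings in perspective, here is a quick back-of-envelope battery-life calculation. The 1000 mAh cell capacity is an assumption for illustration, as is the simplification that sleep current dominates the average draw:

```python
# Back-of-envelope battery life from average sleep current.
# Assumes a hypothetical 1000 mAh cell and that the node spends
# almost all of its time asleep, so sleep current dominates.

def battery_life_days(capacity_mah: float, avg_current_ua: float) -> float:
    """Battery life in days for a given average current draw."""
    hours = capacity_mah * 1000.0 / avg_current_ua  # mAh -> uAh
    return hours / 24.0

for current in (90, 15):  # the two measured sleep currents, in uA
    print(f"{current:>3} uA -> {battery_life_days(1000, current):.0f} days")
```

At 90 uA the node lasts a bit over a year; at 15 uA it lasts over seven, so chasing the lower sleep current is well worth the effort.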

Regards

I have a Sensebender here at 23 uA.

(image: 20180618_213452)

Thanks, that helped set some expectations. I reconfigured everything from scratch and, after lots of testing, I get the same measurements as you, so I think I am good.
I must have misconfigured the temperature sensor and was wasting excessive battery.
I am using the BME280 in standby as well, and it seems to work well.

I’m around 2.4 uA on a 3V battery powered device based on a STM32L4 MCU and a NiceRF frontend. I’m putting the MCU into stop mode but I’m pretty sure I could lower that consumption if I start playing with standby or shutdown modes.

LMiC has no knowledge of how to put an MCU’s core or peripherals into a low power mode.

That is really up to whatever “glue” you use to adapt LMiC to your hardware.

LMiC’s scheduling mechanism, while perhaps not perfectly efficient, is on the whole reasonable about running only when it needs to. But making sure it is not wasting power while waiting for the next scheduled job, and making sure I/Os are left in a low-power state, is up to the MCU platform integration code.
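As a rough illustration of why that integration code dominates the power budget, here is a duty-cycle model of average current. All currents and timings are illustrative assumptions, not measurements of any particular board:

```python
# Average current is a duty-cycle-weighted mix of active and idle draw,
# so what the glue code does between LMIC jobs matters far more than
# the jobs themselves. All numbers here are illustrative assumptions.

def avg_current_ua(active_ma, active_s, idle_ua, period_s):
    """Average current over one wake/sleep period, in microamps."""
    active_ua = active_ma * 1000.0
    return (active_ua * active_s + idle_ua * (period_s - active_s)) / period_s

# Node active 2 s out of every 600 s (e.g. one uplink per 10 minutes).
busy_wait = avg_current_ua(10, 2, 10_000, 600)  # idles at 10 mA awaiting next job
deep_sleep = avg_current_ua(10, 2, 10, 600)     # sleeps at 10 uA between jobs
print(f"busy-wait idle: {busy_wait:.0f} uA, deep sleep: {deep_sleep:.0f} uA")
```

With the node active only 2 s out of every 600, busy-waiting at 10 mA keeps the average at 10 mA, while deep-sleeping at 10 uA between jobs brings it down to roughly 43 uA.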

Hello,
I am running LMIC on the classic Arduino Pro Mini / RFM95 combination. To reduce power consumption during sleep, the LEDs and voltage regulator are removed and the sensors are shut off. But… I think some power can also be saved in active mode: first by disabling the bootloader, and second by shortening the wait for an ack after a connection with the gateway/server. For that last issue: is it possible to tell LMIC not to wait for an ack?

An ATmega bootloader has no impact on power consumption once control is transferred to the main program / sketch. At that point the bootloader no longer plays any role in operating the chip; it is just data sitting passively in flash (code memory).

It’s possible to disable receive windows but perhaps a bad idea.

  • First, LoRaWAN is “transmit mostly” but is really designed to use occasional receives for important purposes. Even if you don’t use OTAA, you should still try to use ADR (unless your nodes and gateway never move and you manually tune the data rate and power).

  • Next, LMiC’s “what should I do next” logic is a bit complex, so ripping this out will require spending a fair amount of time to understand how it works.

  • Whatever you do, please don’t send uplink packets which imply they should result in a downlink, and then not bother to receive that downlink. It is far more “expensive” for the network to send a downlink than it is for you to receive it, because transmitting a downlink takes the gateway off the air on all 8 uplink channels, whereas an uplink occupies only one channel, and even that only partially.
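The airtime cost behind that asymmetry can be quantified with the time-on-air formula from the Semtech SX127x datasheet. A sketch, assuming a 20-byte payload, 125 kHz bandwidth, coding rate 4/5, explicit header and CRC:

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                    preamble_syms=8, explicit_header=True, crc=True):
    """LoRa packet time-on-air per the Semtech SX127x datasheet formula."""
    t_sym = (2 ** sf) / bw_hz * 1000.0                 # symbol time in ms
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0   # low-data-rate optimisation
    h = 0 if explicit_header else 1
    payload_syms = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * h)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + payload_syms) * t_sym

for sf in (7, 12):
    print(f"SF{sf}: {lora_airtime_ms(20, sf):.1f} ms")
```

At these settings an SF12 packet occupies the air roughly 23 times longer than an SF7 packet (about 1319 ms vs 57 ms), and for that entire time a gateway sending a downlink cannot receive on any of its channels.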

If you really want a transmit-only node, you might do better finding a simplified codebase that just does that, and doesn’t fully implement LoRaWAN. I think there was one from Adafruit mentioned here recently.

Also, as an aside, some LMiC repos have a bug where if you try to use ADR without OTAA, and ADR fails, it will get confused and get stuck in an eternal loop mistakenly thinking it should do an OTAA join. I believe that’s fixed in MCCI LMiC, not sure about others.

If you use (deep) sleep mode, that’s fine. But if you shut down the power supply completely, it has to boot every time.

I don’t worry about missing some samples; battery lifetime is more important. Maybe a no-ack mode in the LoRaWAN protocol would be less time- (and power-) consuming for the node, gateway and server. It’s also possible to disable the message counter.

I found the “Adafruit TinyLoRa” discussion, thanks…

Without the bootloader, you save about 1.4 seconds of runtime.

Of course one could modify a bootloader to check a boot mode GPIO and run the main sketch immediately.

But more practically, is it really worth rebooting each time just to save 5 or 6 uA?

Sometimes not; rebooting can cost you more power than a swift wake-up from sleep mode…
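A quick break-even sketch makes this concrete. The 1.4 s boot time is the bootloader figure mentioned above; the boot current and sleep current are illustrative assumptions:

```python
# Break-even between deep sleep and a full power-off + reboot.
# Boot takes ~1.4 s (the bootloader wait mentioned above); assume the
# ATmega draws ~5 mA while booting, and that deep sleep costs ~6 uA.
# Both currents are illustrative assumptions, not measurements.

BOOT_S, BOOT_MA, SLEEP_UA = 1.4, 5.0, 6.0

boot_charge_uc = BOOT_S * BOOT_MA * 1000.0  # charge spent per reboot, in uC
break_even_s = boot_charge_uc / SLEEP_UA    # off-time needed to win it back
print(f"power-off only pays off for intervals > {break_even_s / 60:.0f} minutes")
```

Under those assumptions, cutting power completely only beats deep sleep when the node stays off for more than about 19 minutes between wake-ups.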

It seems your suggestions will create a device which is not LoRaWAN-compliant, by not implementing receive windows etc. Are you sure that is the way to go?

OK, I don’t like to ignore or go against the standard, but maybe it’s an option to change the standard if you can spare time/energy at the end nodes, gateways and servers. If your … and your application do not mind missing some samples, you could make it an option, like the frame counter check.

Some LoRa libs (TinyLoRa) don’t look at the receive windows at all(?).

The key there would be to see if TTN is ever trying to transmit to it.

If not, then (assuming you chose an appropriate spreading factor) it might be sort of okay.

But if it’s triggering downlinks that it’s never acknowledging which thus keep repeating, you really shouldn’t use it, as triggering pointless and especially pointlessly repeating downlinks is detrimental to the network as a whole.

Which makes them non-LoRaWAN-compliant, and potentially less energy efficient. If anything changes in a node’s deployment, for instance a new gateway is installed resulting in better reception and the back-end uses MAC commands to tell compliant nodes to use less transmission power, these nodes will miss out. Also, when a gateway is removed or off-line, there is no way for the node to know its data is not reaching the network, so it cannot compensate by increasing power or spreading factor.
Please don’t tell me the idea is to deploy nodes at SF12, as the amount of energy spent on SF12 transmissions will dwarf the power required for receive windows. To save battery, any node that might require SF12 some of the time should use ADR to get back to lower SFs as soon as possible.
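A rough charge comparison backs this up. The radio currents and receive-window length below are illustrative assumptions for an RFM95-class module, and the SF12 airtime assumes a roughly 20-byte payload:

```python
# Rough charge budget: one SF12 uplink vs. keeping both receive windows.
# Currents and window lengths are illustrative assumptions for an
# RFM95-class radio (~120 mA TX at +17 dBm, ~12 mA in RX).

TX_MA, RX_MA = 120.0, 12.0
SF12_AIRTIME_S = 1.32          # ~20-byte payload at SF12 / 125 kHz
RX_WINDOW_S = 0.05             # brief preamble scan when no downlink arrives

tx_mc = TX_MA * SF12_AIRTIME_S           # charge per uplink, in mC
rx_mc = RX_MA * RX_WINDOW_S * 2          # charge for both RX windows
print(f"TX: {tx_mc:.1f} mC, RX windows: {rx_mc:.1f} mC "
      f"({tx_mc / rx_mc:.0f}x)")
```

Under these assumptions a single SF12 uplink costs on the order of a hundred times more charge than keeping both receive windows open, so dropping RX to “save power” while transmitting at SF12 is a false economy.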

Indeed so, but the TTNMapper setup recommends that nodes using their service stick to SF7, which makes sense for plotting coverage. Is TinyLoRa a good match for this application?

Good question; however, would TTNmapper require TinyLoRa? LMIC works quite well for mapping, even on resource-constrained devices, in my experience.

+1.
With platforms like the 32u4 + GPS, LMIC is adequate. TinyLoRa was initially built for the ATtiny85.