Enable LMIC interrupts on Adafruit Feather M0

To let LMIC use interrupts for more precise timing, does it suffice to set #define LMIC_USE_INTERRUPTS for my Feather M0 board, or do I need to perform additional steps?
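For context, in the MCCI fork of arduino-lmic this define goes into the project configuration header, roughly as sketched below (the region and radio values are examples, not from this thread). Note that the define alone only takes effect if the HAL for your board actually routes the DIO pins through interrupts, which is part of what the question is asking:

```cpp
// project_config/lmic_project_config.h (MCCI arduino-lmic fork; the older
// IBM/matthijskooijman fork configures this in src/lmic/config.h instead)
#define CFG_eu868 1           // example region - set yours
#define CFG_sx1276_radio 1    // RFM95 radio on the Feather M0 LoRa
#define LMIC_USE_INTERRUPTS   // request interrupt-driven DIO handling
```

Whether the SAMD21 HAL honors this on the Feather M0 depends on the LMIC branch in use, so treat this as a starting point rather than a complete answer.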

Why do you think interrupts will enable more precise timing?

Typically a node is just busy-waiting in the only situation where precise timing actually matters, which is the downlink windows.

It’s more of a straw to cling to when everything else so far has failed. I would imagine that the radio has more precise timing with respect to the timebase that matters - the radio timebase. But this might be because of my background in “big” radios, with precise clock synch for OFDM and the like.

It’s more of a straw to cling to

Somewhat as I suspected, a desperate gesture not grounded in any fact. And in fact it’s not going to help you.

I would imagine that the radio has more precise timing

No, node class radios have no timebase at all, that has to come from the hosting processor.

Concentrate your efforts instead on understanding what is actually going wrong. Use a maintained branch of LMiC. Toggle GPIOs and use a scope or logic analyzer to see if things are not happening at the times they should be.

The key thing you are looking for is that after the radio’s transmit-done DIO changes, the radio should be put back into receive mode just before the receive window delay has elapsed - 1 or 2 seconds for ordinary RX windows, 5 or 6 for join RX windows.


More precise radio timing via a hardware interrupt only leads LMIC to start the radio “sharper”, which means later, not earlier. Thus, this approach does not increase the chances of receiving a weak or mistimed packet. The hardware interrupt feature is primarily a battery saver.


By the way, does anybody know how to find the LoRaWAN MAC version and regional parameters version of a LoRa module? Mine is an SX1276-based RFM95 module from HopeRF.

Thanks in advance

That’s determined by the software which implements LoRaWAN on the node, NOT the radio hardware.


Thanks for your response.
Actually, somehow I have received the LoRa packets at the gateway, but I am not able to see them in the device data. I don’t know what configuration I have missed.
I am working with the ABP method; I entered the NwkSKey and AppSKey as well and saved them. I could not see those packets under the device list.

Since you’ve given no relevant information, no one can help you.

Again, the relevant information is not the hardware, but the EXACT software identity and version.

If you are using the Feather M0 LoRa with LMIC you need to connect DIO1 to GPIO6. “DIO1” is the pin labeled “io1” on the Feather board (bottom-right pin). I’m not sure if that is needed for ABP mode, but it is definitely needed for OTAA (which is what I am using). I think without it the device doesn’t receive downlink messages. This is documented in the LMIC-node example project (see the special note under “wiring”).
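The pin mapping that goes with this wiring is the one Adafruit and the TTN documentation give for the Feather M0 LoRa; adjust dio[1] if you jumpered io1 to a different GPIO than 6:

```cpp
// Pin mapping for the Adafruit Feather M0 LoRa (RFM95) with arduino-lmic.
// DIO1 is not routed on the board and must be jumpered from "io1" to GPIO 6.
const lmic_pinmap lmic_pins = {
    .nss  = 8,                       // radio chip select
    .rxtx = LMIC_UNUSED_PIN,         // no antenna switch control
    .rst  = 4,                       // radio reset
    .dio  = {3, 6, LMIC_UNUSED_PIN}, // DIO0 = 3 (on-board), DIO1 = 6 (jumper), DIO2 unused
};
```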

For LMIC-node I am using MAC version 1.0.3 in OTAA with good results on a Feather M0 LoRa. If using OTAA, pay particular attention to the LSB/MSB byte ordering of the DEVEUI (LSB) and APPKEY (MSB) in the lorawan_keys.h file. The TTN console has the DEVEUI in MSB by default, but it has a handy button to flip it to LSB when in array view mode.
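As an illustration of that byte ordering, here is the classic arduino-lmic style of key definition (LMIC-node's lorawan_keys.h uses its own macro names, but the ordering rule is the same); all values below are zero placeholders, not real keys:

```cpp
// Placeholder values only - copy your real keys from the TTN console.
// APPEUI and DEVEUI are little-endian (LSB first): use the console's
// byte-order toggle in array view before copying.
// APPKEY is big-endian (MSB first), exactly as the console shows it.
static const u1_t PROGMEM APPEUI[8]  = { 0, 0, 0, 0, 0, 0, 0, 0 }; // LSB
static const u1_t PROGMEM DEVEUI[8]  = { 0, 0, 0, 0, 0, 0, 0, 0 }; // LSB
static const u1_t PROGMEM APPKEY[16] = { 0, 0, 0, 0, 0, 0, 0, 0,
                                         0, 0, 0, 0, 0, 0, 0, 0 }; // MSB
void os_getArtEui(u1_t* buf) { memcpy_P(buf, APPEUI, 8); }
void os_getDevEui(u1_t* buf) { memcpy_P(buf, DEVEUI, 8); }
void os_getDevKey(u1_t* buf) { memcpy_P(buf, APPKEY, 16); }
```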

For ABP mode, a possible problem could be the frame counter checks for the device in TTN. If you aren’t persisting your frame counter between restarts, disable frame checks in General Settings > Network Layer > Advanced MAC Settings > Reset Frame Counters. Don’t use that setting for a production device, it makes you vulnerable to replay attacks.


For ABP, DIO1 is not needed, but if it is not connected this should be properly reflected in the LMIC pin mappings.
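If DIO1 really is left unwired on an ABP-only node, the mapping would look something like this sketch, so LMIC is not pointed at an unconnected GPIO (note the later replies in this thread, which argue DIO1 is still needed on TTN V3):

```cpp
// Feather M0 LoRa pin mapping with DIO1 deliberately left unwired (ABP only).
const lmic_pinmap lmic_pins = {
    .nss  = 8,
    .rxtx = LMIC_UNUSED_PIN,
    .rst  = 4,
    .dio  = {3, LMIC_UNUSED_PIN, LMIC_UNUSED_PIN}, // DIO0 only; no DIO1/DIO2
};
```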

It is better to connect DIO1 regardless, so the device can be used with either OTAA or ABP (OTAA is preferred anyway).

Actually DIO1 (or a software workaround) is universally needed.

TTN V3 absolutely REQUIRES that even an ABP node be able to receive downlinks - and correctly process and respond to MAC commands.

LMiC could be modified to implement receive timeout in software rather than via DIO1 and thus work without needing that pin. I’ve done this in custom code of different heritage, but I don’t think it’s commonly seen.

You are correct.

DIO1 is needed for downlinks, which I forgot to mention.

For V2 this was not an issue when downlinks were not used by the application.
For V3, handling of MAC commands is required, which uses downlinks, and therefore DIO1 is needed.