Strange problem with MCCI LMIC, TTN v3 and ABP (SF7 to SF12)

Arduino Pro Mini (ATmega328P) with an RFM95W. Example code (MCCI LMIC example) I use for testing:
main.cpp (12.0 KB) platformio.ini (817 Bytes)

Circuit diagram: lorapromini_shematic.pdf (121.2 KB)

Thank you in advance! :smiley:

The platformio.ini references v3.3.0, which is/was pretty compliant and not much different in compile size from v4.1.1, but there are changes to the channel handling.

That said, my original hand-built ProMini + RFM95 devices used a much older Classic LMIC (matthijskooijman’s) and were set up with ABP. Two are in the garden (somewhere) but migrated over from v2 to v3 without issue.

I’m not sure about some of the add-ons in your main.cpp, particularly the saving of keys to NVM - not really required for something that’s meant to be on all the time &/or is technically session-less. I’d strongly advise using the vanilla ABP example that comes with v4, but please change the uplink interval to >300s so if it’s left running, it’s FUP friendly.
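For reference, in the stock ttn-abp sketch that ships with the library the interval is a single constant plus the reschedule in the EV_TXCOMPLETE handler; the names below are as I recall them from that example, so treat them as assumptions and check your copy:

// Seconds between uplinks; the example ships with a much shorter value,
// which is too chatty for an always-on node under the TTN Fair Use Policy.
const unsigned TX_INTERVAL = 300;

// In the EV_TXCOMPLETE case of onEvent(), the next uplink is scheduled with:
os_setTimedCallback(&sendjob, os_getTime() + sec2osticks(TX_INTERVAL), do_send);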

Sorry you’re having problems.

It is kind of hard to keep straight what’s going on from the description and the discussion. However, nothing in the V4 LMIC will prevent it from listening to downlink MAC commands in response to Class A uplinks.

The description suggests that you are getting a downlink, but that’s kind of irrelevant, because you can override all that after it’s all done.

Why do you not call LMIC_setDrTxpow() right before each uplink, to force the data rate you want? The LMIC will always honor the most recent setting. As long as you wait until the previous uplink is complete – which may take a while if there are MAC downlinks after the initial uplink – that call will override anything the network tells you. Use LMIC_queryTxReady() to determine whether the LMIC is ready to accept another uplink. The pattern is something like this; sorry that I have no time to test it.

if (LMIC_queryTxReady()) {
    // force the data rate we want; check for success
    bit_t fSuccess = LMIC_setDrTxpow(desiredDr, KEEP_TXPOW);
    if (! fSuccess) {
        Serial.println("LMIC_setDrTxpow failed!");
        while (1)
            /* loop forever */;
    }

    // queue the uplink; check that the LMIC accepted it
    lmic_tx_error_t txError = LMIC_setTxData2_strict(port, data, nData, /* confirmed */ 0);
    if (txError != LMIC_ERROR_SUCCESS) {
        Serial.print("LMIC transmit rejected, error code ");
        Serial.println(txError);
        while (1)
            /* loop forever */;
    }
}

As always, when debugging, it’s important to check all APIs to see if an error is returned. Also, bear in mind that these APIs have changed between classic LMIC, V3.3, and V4. The above example is for V4; the error codes and so forth may not be defined in earlier versions, and the APIs won’t always return useful error codes if they fail.
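If you do build that V4-style error handling, a compile-time guard can make a version mismatch obvious instead of mysterious. A sketch only - the ARDUINO_LMIC_VERSION / ARDUINO_LMIC_VERSION_CALC macros are what I recall recent MCCI releases defining via lmic.h, so verify them in your copy:

#include <lmic.h>

// Fail the build loudly if the library is too old for lmic_tx_error_t,
// LMIC_setTxData2_strict(), etc. (macro names assumed from recent MCCI
// releases; classic LMIC defines neither, so it lands in the #else).
#if defined(ARDUINO_LMIC_VERSION)
# if ARDUINO_LMIC_VERSION < ARDUINO_LMIC_VERSION_CALC(4, 0, 0, 0)
#  error "This sketch expects MCCI arduino-lmic v4.0.0 or later"
# endif
#else
# error "No ARDUINO_LMIC_VERSION - this looks like classic LMIC or a very old MCCI release"
#endif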

Good luck!

If I do the PRs, can we change the uplink interval for the TTN examples to something that is within the FUP?

Also, the “Hello world” payload is cute, but is the direct opposite of all the advice here - perhaps I could change the payload to a byte array too?


Hopefully not in reasonable situations.

But earlier in the thread they got an absurd run-on sentence of multiple stacked MAC commands that had somehow been queued up (perhaps because they were previously ignored, and maybe some interaction with ignoring frame count resets allowed that).

YNHpCyYAAAAAUghdhbGWw4Y0Ck4EsCUrWs2T/1mt7BBq8bVENVzlFfRjX4va/a5C/b2UH87RTvU=

Which unfortunately is long enough that it’s sent as port 0 traffic encrypted with the network key rather than in FOpts, so we can’t know exactly what it contains, though there was an earlier log file that gave some of the details.
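For anyone following along, the port-0 part is visible without any key, because only the FRMPayload is encrypted. A rough sketch in plain C (not LMIC code; field layout per the LoRaWAN 1.0.x spec) that prints the cleartext header of a base64-decoded downlink:

#include <stdint.h>
#include <stdio.h>

/* Cleartext layout of a LoRaWAN 1.0.x data frame:
 *   MHDR | DevAddr (4, LE) | FCtrl | FCnt (2, LE) | FOpts (0..15) | FPort | FRMPayload | MIC (4)
 * Everything up to and including FPort is readable without a key. */
static void dump_header(const uint8_t *phy, size_t len) {
    if (len < 12) { puts("frame too short"); return; }
    uint8_t  mhdr     = phy[0];
    uint32_t devaddr  = (uint32_t)phy[1] | ((uint32_t)phy[2] << 8) |
                        ((uint32_t)phy[3] << 16) | ((uint32_t)phy[4] << 24);
    uint8_t  fctrl    = phy[5];
    uint16_t fcnt     = (uint16_t)(phy[6] | (phy[7] << 8));
    uint8_t  foptslen = fctrl & 0x0F;
    printf("MType=%u DevAddr=%08lX FCnt=%u FOptsLen=%u\n",
           (unsigned)(mhdr >> 5), (unsigned long)devaddr, (unsigned)fcnt, (unsigned)foptslen);
    size_t portIdx = 8 + foptslen;   /* FPort follows FOpts; the last 4 bytes are the MIC */
    if (portIdx < len - 4)
        printf("FPort=%u, encrypted FRMPayload of %u bytes\n",
               (unsigned)phy[portIdx], (unsigned)(len - 4 - portIdx - 1));
}

If I’ve transcribed the frame correctly, feeding it the base64-decoded bytes of the downlink above gives FPort = 0, i.e. the MAC commands went down as NwkSKey-encrypted FRMPayload rather than in FOpts.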

It wouldn’t be hard, in concept, to imagine LMIC - especially on a '328 - choking on that, as it’s likely outside of its test cases. What would happen with an equivalent length of legitimate, repeating MAC commands is perhaps something that could be tested.

But where that downlink comes from is likely itself some sort of unusual case in the server, triggered by “shouldn’t happen” behavior - not of the node stack per se, but of how it’s being (ab)used. The poster wants to do atypical and unnecessary on-air transmissions to “research” something that should simply be modeled, or tested into a dummy load in the lab.

Hopefully not in reasonable situations.

Well, I was perhaps being over-precise. It is always listening during the receive windows. The radio may not catch the packet; if caught, the LMIC will look at the packet. It may not act, but that’s a different question. This is true in both OTAA and ABP scenarios.

YNHpCyYAAAAAUghdhbGWw4Y0Ck4EsCUrWs2T/1mt7BBq8bVENVzlFfRjX4va/a5C/b2UH87RTvU=

Which unfortunately is long enough that it’s sent as port 0 traffic encrypted with the network key rather than in FOpts, so we can’t know exactly what it contains, though there was an earlier log file that gave some of the details.

It’s pretty easy to decode this if you have the network key. There are several online or offline decoders that will help with this. For ABP, you have the network key, so… why not decode?

The OP might; I don’t, unless it’s leaked into the thread somewhere I didn’t notice.

Okay, it turns out it was easier to grep these out of the earlier posted log file than I thought.

And in looking through them, the “repetition” I was seeing in an earlier hasty attempt is because the network is adding all the additional channels it can’t be sure the node already knows about.

MACCommand.RxTimingSetupReq
MACCommand.LinkADRReq
MACCommand.NewChannelReq
MACCommand.NewChannelReq
MACCommand.NewChannelReq
MACCommand.NewChannelReq
MACCommand.NewChannelReq
MACCommand.RxParamSetupReq

And most of these do look like they’re getting responded to by the node.

In my mind the question then becomes whether, as a side effect of implementing the intent of one of them, LMIC coincidentally (or even in a spec-mandated fashion) goes to DR0.

And the answer is, of course, that’s exactly what a MACCommand.LinkADRReq does! Only it’s not obvious, because the log file only breaks out the enabled-channels portion of it, and not the data rate and power command portion…

The heart of the matter is that LoRaWAN doesn’t make it possible to set the channel map without also commanding the node’s data rate and transmit power. In a way, the spec makes a bit of ADR mandatory whether one wants it or not.

But TTN may not yet have a good “ADR link model” for a node it’s still trying to basically configure (and one that’s not requesting ADR anyway), and so may just be playing it safe with DR0.

LinkADRReq carries the DataRate and TxPower in byte 1, and that sets the ‘requested’ data rate and power for uplink.

NewChannelReq carries the min and max permitted data rates for each added channel (in addition to the frequency), but it doesn’t mandate the current requested data rate.
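For concreteness, this is the on-air layout of the two request payloads (per the LoRaWAN 1.0.x spec; the structs below only illustrate the byte layout, they are not LMIC code and shouldn’t be overlaid directly on a receive buffer):

#include <stdint.h>

/* LinkADRReq (CID 0x03): 4 bytes after the CID. The DataRate/TXPower byte
 * is not optional, so a ChMask update always drags a data-rate command along. */
struct LinkADRReqPayload {
    uint8_t DataRate_TXPower;   /* high nibble: DataRate, low nibble: TXPower */
    uint8_t ChMask[2];          /* bitmap of channels 0..15, little-endian on air */
    uint8_t Redundancy;         /* bits 6..4: ChMaskCntl, bits 3..0: NbTrans */
};

/* NewChannelReq (CID 0x07): 5 bytes after the CID. DrRange only bounds what
 * the new channel may use; it does not set the node's current data rate. */
struct NewChannelReqPayload {
    uint8_t ChIndex;            /* index of the channel being (re)defined */
    uint8_t Freq[3];            /* frequency in 100 Hz units, little-endian, 24-bit */
    uint8_t DrRange;            /* high nibble: MaxDR, low nibble: MinDR */
};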

So the network is definitely telling the LMIC something about what data rates to use. And the intent of the V4 LMIC is to follow spec-mandated requests. Would have to look more closely at the payloads to know what the network is saying.

The LMIC V4 code does not allow ABP nodes to prevent the LMIC from automatically following the network (at least temporarily). The client code for an ABP node basically has to repeat all the downlink-adjustable settings before each uplink, because the LMIC unconditionally overrides them based on MAC-layer downlinks.
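As a sketch of that “repeat the settings” step (EU868/TTN assumed for the DR_SF9 RX2 value; desiredDr is whatever the test calls for, and in real code check LMIC_setDrTxpow’s result as in the earlier example):

// Re-assert everything a MAC downlink may have changed, immediately
// before queuing the next uplink (i.e. once LMIC_queryTxReady() says
// the LMIC is idle).
void reassertSessionSettings(dr_t desiredDr) {
    LMIC_setAdrMode(0);                            // we are not asking for ADR
    LMIC_setLinkCheckMode(0);                      // no link-check exchanges
    (void)LMIC_setDrTxpow(desiredDr, KEEP_TXPOW);  // check the result in real code
    LMIC.dn2Dr = DR_SF9;                           // RX2 data rate for TTN EU868 (assumption)
}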

Things get even more complicated if you consider Class B or Class C; but as these are not supported in V4.1, it’s not (yet) an issue.

Current best practice is to use OTAA. Failing that, second best is to enter the channel plan on the console for the device so that the NS doesn’t feel the need to tell it what it can use.

But rather more importantly, based on the number of times I and others have dealt with issues here:

Please?

Failing that, second best is to enter the channel plan on the console for the device so that the NS doesn’t feel the need to tell it what it can use.

Well, the LMIC is not TTN-specific, and ABP application requirements are network-specific, so you can’t guarantee you can even do that.

If I do the PRs, can we change the uplink interval for the TTN examples to something that is within the FUP?

I accept PRs, though I admit it takes me a while to get to them. I suggest you start by raising an issue on the LMIC repository; I hadn’t thought about the FUP in the context of the sample apps because I view them as being of limited utility – a check to make sure the device is working, not a “start from this”. Of course, that’s just short-sightedness on my part.

Bear in mind that FUP (Fair Use Policy) is a Things Network concept, not a LoRaWAN concept. We have a lot of users who are using other networks: ChirpStack, Helium, etc. But the “ttn*.ino” sketches could be modified to adhere to the policy – as long as we clearly document that the delay is needed for TTN compliance (not LoRaWAN compliance).

Sorry that I don’t spend much time on this forum; it’s not because I don’t care, but … I’m not a person of independent means. I follow issues on the LMIC github site much more closely.

By the way, the biggest single problem that people report against the LMIC is that they can’t get downlinks working. It would be wonderful to improve the sample sketches so that there’s a step-by-step way to “get your radio working”.

I’m not saying remove it from the LMIC; I’m putting it in the context of its application to TTS.

Already anticipated.

:rofl:

Same here. It’s not usually an LMIC issue, more about placement of gateway & node (usually too close, sometimes a 3rd-party gateway too far away). But there may be something I can create with some bullet points in it - one for general use and one for TTN (where the FUP allows 10 downlinks/day and the preference is 1/fortnight).

A universal challenge - I have LMIC under control so I answer questions here and don’t spend much time on GitHub.