Strange problem with MCCI LMIC, TTN v3 and ABP (SF7 to SF12)

I wanted to investigate the power consumption of different spreading factors for research purposes. I noticed that my node instantly jumps from SF7 to SF12 after the first message.

To reproduce the problem I have prepared the following setup:

  • Example ttn-abp.ino sketch from MCCI LMIC with fixed SF7
  • MCCI LMIC v4.1 (also tested with v3.3)
  • Fresh ABP Node in TTN v3 with ADR disabled and reset counter enabled

[Screenshot: Live data - eui-70b3d57ed004b628 - The Things Network]

If I prevent the device from getting downlink messages, it works like I know it from v2. FYI: I disabled downlinks by changing the packet forwarder’s downlink port in the gateway to a wrong UDP port.

Please don’t do that. If you want to ‘experiment’, set up or purchase your own instance. TTN(CE) aka V3 rightly expects associated nodes to correctly handle, process and, where necessary, react/respond to any downlinks, including MAC commands, ADR settings, channel settings etc. In many cases, if you cripple the node, the NS will continue to retry, even if only for a short while. This is a waste of both NS resources and of spectrum/gateway capacity… your experiment may be forcing additional gateway downlinks that render said gateway deaf to other uplinks in the community… :frowning:

No problem. I only did that for a few minutes of testing, but I will respect that in the future.

But what about my problem? If there is a bug in TTN v3 that forces devices to SF12, I think that’s a bigger issue because of the waste of airtime…

There’s nothing to “investigate” - you can “dry lab” this since it’s all very predictable.

Higher spreading factors take longer to transmit a message. You can calculate exactly (and I really do mean exactly, as it’s a critically key part of the protocol) how long the airtime of a given message is for given air settings, including the spreading factor.

You then simply multiply that by the power consumption of the radio at a given power setting, and that of the MCU if it stays awake rather than sleeping until the DIO completion interrupt…
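For reference, the airtime can be computed directly from the datasheet formula. A minimal sketch, assuming a typical EU868 LoRaWAN uplink (125 kHz bandwidth, coding rate 4/5, 8-symbol preamble, explicit header, low-data-rate optimization at SF11/SF12):

```cpp
#include <cmath>

// LoRa time-on-air per the Semtech SX1276 datasheet formula.
// Assumed settings: 125 kHz bandwidth, CR 4/5, 8-symbol preamble,
// explicit header, low-data-rate optimization on for SF11/SF12.
double loraTimeOnAirMs(int sf, int phyPayloadBytes) {
    const double bandwidthHz = 125000.0;
    const int cr = 1;                    // CR field: 1 => coding rate 4/5
    const int preambleSymbols = 8;
    const int h = 0;                     // 0 => explicit header present
    const int de = (sf >= 11) ? 1 : 0;   // low data rate optimization

    const double tSymMs = (std::pow(2.0, sf) / bandwidthHz) * 1000.0;
    const double tPreambleMs = (preambleSymbols + 4.25) * tSymMs;

    const double n = std::ceil((8.0 * phyPayloadBytes - 4.0 * sf + 28 + 16 - 20 * h)
                               / (4.0 * (sf - 2 * de)));
    const double payloadSymbols = 8.0 + std::fmax(n * (cr + 4), 0.0);

    return tPreambleMs + payloadSymbols * tSymMs;
}
```

With these assumed settings, a 26-byte PHYPayload takes roughly 62 ms at SF7 but roughly 1.65 s at SF12, which is exactly why an unexpected jump to SF12 matters for a power-consumption study.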

I am working on this as part of a scientific thesis. A normal MCU does not always behave deterministically, especially when confirmed messages come into play. A dry test is not an option. But that is not the point here.

My problem is that the NS upgrades the device to SF12 after its first message. That’s not normal, is it?

There is nothing in the log that shows that the NS has sent a message.

I’m sure you are fully aware you can only send 10 per day and that if you search the forum, we prefer people to send one a week or less.

Which you can simulate with a point to point setup.

I see two downlinks (20:33:45 and 20:33:53). The data preview says it’s a change of RX1 delay, but you can see the SF go from 7 to 12.

I see one downlink scheduled after the uplink had been received.

You shouldn’t be using confirmed messages on TTN, and really not on LoRaWAN in general.

You also shouldn’t base a thesis on uncontrolled air conditions at a given instant in time.

That or controlled lab circumstances are actually the only sort of thing that would have any true validity. Maybe you’d prefer to transmit into a resistive dummy load.

Show the actual contents of the downlink packet from the gateway raw traffic page, and show it as text, not a picture.

What’s in the overview image you posted isn’t any valid sequence of MAC commands in an obvious encoding, so we need to see the actual raw packet in base64 or hex or already broken down form, that’s being pushed back towards the node.

Sure. TTN Console:
eui-70b3d57ed004b628_live_data_1642598183147.json (67.1 KB)

gateway-log.txt (479 Bytes)

Yes, I was wondering about that too. I hope the logs help. I don’t know what else to try. The same node (hardware) with OTAA activation works without problems.

arduino-source-code.txt (12.0 KB)

I can’t dig into this fully at the moment, but what I think I’m seeing is an astoundingly large number of downlink configuration items, possibly even self-repeating ones, all crammed into a packet of absurd size. Since they don’t fit in the FOpts, they’re sent instead as a payload on port 0, which also means they get encrypted with the network key and aren’t recognizable in cleartext the way a smaller number of items in the FOpts would be.
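To see why a port-0 downlink looks opaque in the gateway log, here is a simplified (assumed, illustration-only) parse of the downlink frame header; it only checks whether MAC commands ride in the cleartext FOpts or as an encrypted FRMPayload on FPort 0, and does no MIC verification or decryption:

```cpp
#include <cstdint>
#include <cstddef>

// Minimal LoRaWAN downlink header parse (illustration only; no MIC
// check, no decryption). Frame layout:
// MHDR(1) | DevAddr(4) | FCtrl(1) | FCnt(2) | FOpts(0..15) | [FPort(1) | FRMPayload] | MIC(4)
struct DownlinkHeader {
    uint8_t foptsLen;   // low nibble of FCtrl: MAC commands piggybacked in FOpts
    int     fport;      // -1 if no FPort/FRMPayload present
    bool    macOnPort0; // true when MAC commands travel as an encrypted FRMPayload
};

DownlinkHeader parseDownlink(const uint8_t *buf, size_t len) {
    DownlinkHeader h{0, -1, false};
    if (len < 12) return h;              // MHDR + minimal FHDR + MIC
    h.foptsLen = buf[5] & 0x0F;          // FCtrl is byte 5
    size_t fportIndex = 8 + h.foptsLen;  // after MHDR(1)+DevAddr(4)+FCtrl(1)+FCnt(2)+FOpts
    if (len > fportIndex + 4) {          // anything left before the 4-byte MIC?
        h.fport = buf[fportIndex];
        h.macOnPort0 = (h.fport == 0);
    }
    return h;
}
```

When `macOnPort0` is true, the FRMPayload is MAC commands encrypted with the NwkSKey, so nothing recognizable shows up in the raw gateway traffic.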

I’m talking about this packet in particular


It’s entirely possible that’s breaking LMIC’s parsing.

I notice far too many of your uplinks have the frame count zero: is it possible your sketch is crashing and rebooting?

You should probably try an absolute vanilla example sketch for TTN ABP in your region.

But again, the tests you’re trying to run don’t belong on TTN at all, and for your findings to be repeatable they’d have to use simulated circumstances - transmitting into a dummy load, etc. Any attempt to consider actual network performance would have to be based on a theoretical model, since you can’t permissibly collect enough real-world data to factor out uncontrolled variations.

Forget the background. My introduction was well intended, but now there is more discussion about it than I expected. I will stop my tests and switch to calculations.

But the problem remains that ABP is not working with the “absolute vanilla example sketch for TTN ABP” from MCCI LMIC.

I have just done a test against a ChirpStack instance. With ADR enabled in the code, it also switches to SF12 after the first uplink, but when I disable ADR (the example code with the only change being that I added LMIC_setAdrMode(0);), the problem does not occur! Same code with TTN: the problem exists.
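In EU868, DR5 is SF7 and DR0 is SF12, so “switching to SF12” means the node adopting DR0. The following toy model (an assumption about expected behavior, not actual LMIC or NS internals) sketches why LMIC_setAdrMode(0) should keep the data rate pinned even if a LinkADRReq arrives:

```cpp
// Toy model of ADR handling; not actual LMIC code.
// EU868 mapping (LoRaWAN regional parameters): DR0 = SF12 ... DR5 = SF7.
struct Node {
    int dataRate;    // current uplink data rate (e.g. 5 for SF7)
    bool adrEnabled; // what LMIC_setAdrMode(1/0) toggles
};

// Apply the DataRate field of an incoming LinkADRReq: with ADR disabled
// the node keeps its configured data rate instead of adopting the request.
int applyLinkAdrDataRate(Node &n, int requestedDr) {
    if (n.adrEnabled) {
        n.dataRate = requestedDr;
    }
    return n.dataRate;
}
```

This matches the ChirpStack observation above; the open question is why the same sketch against TTN still ends up at SF12.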


How did you configure your device on the console?

[Screenshots: General settings - eui-70b3d57ed004b628 - The Things Network]

You’re still showing an illegal repeat of frame count 0.

You need to figure out why that’s happening and fix it.

I would watch the serial monitor and see if your node is crashing and restarting.

Also you may want to disable “reset frame counters”; I could imagine ways in which trying to non-compliantly waive that part of the LoRaWAN spec could actually be contributing to that absurd stackup of multiple MAC commands.
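The frame-counter check alluded to above can be sketched like this (a simplified model, not actual TTN code): without “reset frame counters”, an uplink whose FCnt does not advance past the last seen value is dropped as a replay, so a device that reboots back to FCnt 0 goes silent from the NS’s point of view.

```cpp
#include <cstdint>

// Toy model of the NS uplink frame-counter check (not actual TTN code).
struct SessionState {
    uint32_t lastFCnt;
    bool seenAny;       // has any uplink been accepted in this session?
    bool resetAllowed;  // the "reset frame counters" console option
};

bool acceptUplink(SessionState &s, uint32_t fcnt) {
    if (!s.seenAny) { s.seenAny = true; s.lastFCnt = fcnt; return true; }
    if (fcnt > s.lastFCnt) { s.lastFCnt = fcnt; return true; }
    if (fcnt == 0 && s.resetAllowed) { s.lastFCnt = 0; return true; } // device rebooted
    return false; // replay or non-compliant repeated counter
}
```

That is why the repeated FCnt 0 in the logs is worth chasing: it either means the sketch is crashing and rebooting, or the “reset frame counters” waiver is masking a reboot loop.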

But… if you have ChirpStack, and you’re only sending dummy data anyway, why does interaction with TTN matter to you? It’s possible you’re hitting an unexpected corner-case bug in the TTN stack - but the fact remains that what you’re doing is not normal usage - or all sorts of people would be hitting it.

Have you added the extra frequencies?

Have you altered the LMIC library to set Rx1 to 5s?
Have you altered the LMIC library to disable the Class B features?

No, I use the EU868 plan in LMIC and TTN

No. When I set it to 1s in TTN, I have the same problem.


build_flags =
    -D CFG_eu868
    -D CFG_sx1276_radio
    -D LMIC_ENABLE_arbitrary_clock_error

Not what I asked - have you added the extra frequencies? If you look at your screenshot you’ll see there is a button for it. The Network Server won’t know the device knows the full 8 channels, which is why it is trying to send those settings.

The Network Server won’t know that the device is set to 1s which is why it is trying to send that setting.
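For illustration, one of the fields involved in those channel-settings downlinks is the LinkADRReq ChMask, a 16-bit bitmap of enabled channels. A small (assumed, simplified) decoder:

```cpp
#include <cstdint>
#include <vector>

// Decode a LinkADRReq ChMask bitmap (illustration only): bit i set
// means channel i of the current bank is enabled for uplinks.
std::vector<int> enabledChannels(uint16_t chMask) {
    std::vector<int> channels;
    for (int i = 0; i < 16; ++i) {
        if (chMask & (1u << i)) channels.push_back(i);
    }
    return channels;
}
```

For example, a mask of 0x00FF enables channels 0 through 7, the full EU868 eight-channel plan. Roughly speaking, once the console knows the device’s factory-preset frequencies, the NS has less reason to keep pushing channel settings at the device via downlink.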

Not strictly a yes, but same end result.

Remind us again what is wrong with OTAA??

Are you sure? The help text says it’s only needed if the device uses non-default frequencies, but I have entered the frequencies - no change.


OTAA works fine. More than 120,000 messages in the last 6 months from 6 nodes. Same hardware, same software, but ABP makes problems.

With ABP, still the same problem with TTN v3. After the first uplink with SF7, the device switches to SF12. The same code worked with v2, and with ChirpStack it still works. I can’t make sense of it.

No, I made it up to give you something to do :wink:

OTOH, I’ve had ABP devices on V3 for almost a year now.

You asked about all the downlinks, the reason for them is, erm, mentioned above …

What is your device - please be very specific - and I may be able to setup a test as I have approximately one of everything.