I wanted to investigate the power consumption of different spreading factors for research purposes. I noticed that my node instantly switches from SF7 to SF12 after the first message.
To reproduce the problem I have prepared the following setup:
Example ttn-abp.ino sketch from MCCI LMIC with fixed SF7
MCCI LMIC v4.1 (also tested with v3.3)
Fresh ABP Node in TTN v3 with ADR disabled and reset counter enabled
If I prevent the device from getting downlink messages, it works like I know it from v3. FYI: I disabled downlinks by changing the packet forwarder's downlink port in the gateway to a wrong UDP port.
Please don’t do that; if you want to ‘experiment’, set up or purchase your own instance. TTN (CE), aka V3, rightly expects associated nodes to correctly handle, process and, where necessary, react or respond to any downlinks, including MAC commands, ADR settings, channel settings etc. In many cases, if you cripple the node, the NS will continue to retry, even if only for a short while. This wastes both NS resources and spectrum/gateway capacity… your experiment may be forcing additional gateway downlinks that render that gateway deaf to other uplinks in the community…
There’s nothing to “investigate” - you can “dry lab” this since it’s all very predictable.
Higher spreading factors take longer to transmit a message. You can calculate exactly (and I really do mean exactly, as it’s a critical part of the protocol) how long the airtime of a given message takes at given air settings, including spreading factor.
You then simply multiply that by the power consumption of the radio at a given power setting, and that of the MCU if it stays awake rather than sleeps until the DIO completion interrupt…
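The airtime-times-power calculation above can be sketched in a few lines. This is a minimal illustration based on the Semtech SX127x datasheet time-on-air formula, assuming the usual LoRaWAN uplink settings (125 kHz bandwidth, explicit header, CRC on, coding rate 4/5, 8-symbol preamble); the function names and the current/voltage figures in the note below are my own placeholders, to be replaced with your radio’s datasheet values.

```cpp
#include <algorithm>
#include <cmath>

// LoRa time-on-air per the Semtech SX127x datasheet formula.
// Assumptions: explicit header, CRC on, coding rate 4/5, 8-symbol
// preamble (typical LoRaWAN uplink settings). Low-data-rate
// optimisation kicks in when the symbol time exceeds 16 ms,
// i.e. SF11/SF12 at 125 kHz, as the regional parameters require.
double lora_airtime_ms(int sf, double bw_hz, int payload_bytes) {
    const int preamble_syms = 8, cr = 1 /* 4/5 */, crc = 1, ih = 0;
    double t_sym = std::pow(2.0, sf) / bw_hz * 1000.0;   // symbol time in ms
    int de = (t_sym > 16.0) ? 1 : 0;                     // LDR optimisation
    double num = 8.0 * payload_bytes - 4.0 * sf + 28 + 16 * crc - 20 * ih;
    double den = 4.0 * (sf - 2 * de);
    double n_payload = 8 + std::max(std::ceil(num / den) * (cr + 4), 0.0);
    return (preamble_syms + 4.25) * t_sym + n_payload * t_sym;
}

// Energy of one uplink in millijoules: airtime times TX current times
// supply voltage. Take current/voltage from your radio's datasheet at
// your configured TX power; the values here are illustrative only.
double tx_energy_mj(double airtime_ms, double tx_current_ma, double vcc_v) {
    return airtime_ms / 1000.0 * tx_current_ma * vcc_v;
}
```

For a 13-byte frame this gives roughly 46.3 ms at SF7 versus roughly 1155 ms at SF12 (both at 125 kHz), so at, say, 120 mA and 3.3 V the per-uplink radio energy differs by a factor of about 25.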
I am working on this as part of a scientific thesis. A normal MCU does not always behave deterministically, especially when confirmed messages come into play, so a dry-lab calculation is not an option. But that is not the point here.
My problem is that the NS switches the device to SF12 after its first message; that is not normal, is it?
You shouldn’t be using confirmed messages on TTN, and really not on LoRaWAN in general.
You also shouldn’t base a thesis on uncontrolled air conditions at a given instant in time.
That or controlled lab circumstances are actually the only sort of thing that would have any true validity. Maybe you’d prefer to transmit into a resistive dummy load.
Show the actual contents of the downlink packet from the gateway raw traffic page, and show it as text, not a picture.
What’s in the overview image you posted isn’t any valid sequence of MAC commands in an obvious encoding, so we need to see the actual raw packet that’s being pushed back towards the node, in base64, hex, or already broken-down form.
I can’t dig into this fully at the moment, but what I think I’m seeing is an astoundingly large number of downlink configuration items, possibly even self-repeating ones, all crammed into a packet of absurd size. Given that they don’t fit in the FOpts field, they’re sent instead as a payload on port 0, which also means that they get encrypted with the network key and aren’t recognizable in cleartext, as they would be if a smaller number of items were sent in FOpts.
It’s entirely possible that’s breaking LMIC’s parsing.
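For comparison, a well-formed ADR downlink carried in FOpts is easy to read off by hand. Here is a sketch of a decoder for a single LinkADRReq MAC command (CID 0x03 plus four payload bytes), following the LoRaWAN 1.0.x field layout; the struct and function names are my own for illustration, and the DR-to-SF mapping in the comments is EU868-specific (DR0 = SF12BW125 … DR5 = SF7BW125).

```cpp
#include <cstdint>

// Minimal decoder for a single LoRaWAN 1.0.x LinkADRReq MAC command:
// CID 0x03 followed by four payload bytes (DataRate_TXPower, ChMask
// little-endian, Redundancy). Names are mine, not from any library.
struct LinkAdrReq {
    uint8_t  data_rate;    // region-specific index; EU868: 0=SF12 ... 5=SF7
    uint8_t  tx_power;     // region-specific TX power index
    uint16_t ch_mask;      // bit n set = channel n enabled
    uint8_t  ch_mask_cntl; // interpretation of ch_mask (Redundancy bits 6:4)
    uint8_t  nb_trans;     // uplink repetitions; 0 = keep current setting
};

bool decode_link_adr_req(const uint8_t *buf, LinkAdrReq &out) {
    if (buf[0] != 0x03) return false;          // not a LinkADRReq
    out.data_rate    = buf[1] >> 4;            // upper nibble: data rate
    out.tx_power     = buf[1] & 0x0F;          // lower nibble: TX power
    out.ch_mask      = static_cast<uint16_t>(buf[2] | (buf[3] << 8));
    out.ch_mask_cntl = (buf[4] >> 4) & 0x07;
    out.nb_trans     = buf[4] & 0x0F;
    return true;
}
```

A single five-byte command like this in FOpts would be enough to move a node to SF12 on EU868, which makes the oversized encrypted port-0 payload described above all the more suspicious.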
I notice far too many of your uplinks have the frame count zero: is it possible your sketch is crashing and rebooting?
You should probably try an absolute vanilla example sketch for TTN ABP in your region.
But again, the tests you’re actually trying to run don’t belong on TTN at all, and for your findings to be repeatable they’d have to use simulated circumstances: transmitting into a dummy load, etc. Any attempt to consider actual network performance would have to be based on a theoretical model, since you can’t permissibly collect enough real-world data to factor out uncontrolled variations.
Forget the background. My introduction was well intended, but it has led to more discussion than I expected. I will stop my tests and switch to calculations.
But the problem remains that ABP is not working with the “absolute vanilla example sketch for TTN ABP” from MCCI LMIC.
I have just done a test against a ChirpStack instance. When ADR is enabled in the code, it also switches to SF12 after the first uplink, but when I disable ADR (the example code with the only change that I add LMIC_setAdrMode(0);), the problem does not occur! With the same code against TTN, the problem exists.
You’re still showing an illegal repeat of frame count 0.
You need to figure out why that’s happening and fix it.
I would watch the serial monitor and see if your node is crashing and restarting.
Also, you may want to disable “reset frame counters”; I could imagine ways in which trying to non-compliantly waive that part of the LoRaWAN spec could actually be contributing to that absurd stack-up of multiple MAC commands.
But… if you have ChirpStack, and you’re only sending dummy data anyway, why does interaction with TTN matter to you? It’s possible you’re hitting an unexpected edge-case bug in the TTN stack, but the fact remains that what you’re doing is not normal usage, or all sorts of people would be hitting it.
Not what I asked: have you added the extra frequencies? If you look at your screenshot you’ll see there is a button for it. The Network Server won’t know the device knows the full 8 channels, which is why it is trying to send those settings.
The Network Server won’t know that the device is set to 1s which is why it is trying to send that setting.
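For an ABP device the node-side channel table has to match what the console is told. As a sketch of what that looks like on the device, the stock MCCI ttn-abp.ino example configures the extra EU868 channels explicitly, roughly like this (assuming the EU868 band plan; frequencies in Hz):

```cpp
// EU868: channels 0-2 are the default channels; an ABP device must
// mirror the remaining ones itself, since the NS cannot negotiate
// them if the device ignores MAC downlinks.
LMIC_setupChannel(0, 868100000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(1, 868300000, DR_RANGE_MAP(DR_SF12, DR_SF7B), BAND_CENTI);
LMIC_setupChannel(2, 868500000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(3, 867100000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(4, 867300000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(5, 867500000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(6, 867700000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(7, 867900000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);
LMIC_setupChannel(8, 868800000, DR_RANGE_MAP(DR_FSK,  DR_FSK),  BAND_MILLI);

// TTN uses SF9 for its RX2 window.
LMIC.dn2Dr = DR_SF9;
```

Registering the same frequencies for the device in the TTN console (the button mentioned above) should then stop the NS from trying to push those channel settings down on every uplink.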