Issues w/ Uplink Using SAM R34 and ABP

Hello!

I recently took a dive into LoRa. I have a SAM R34 Xplained Pro evaluation module and The Things Kickstarter Gateway (TTKG), both of which have been configured using the Console in The Things Stack: Community Edition. I used the LoRaWAN Mote Application provided by Microchip to test these devices.

My first attempt to connect to TTN using the SAM R34 and the TTKG was successful, as I was using OTAA to join. However, being inexperienced, I turned off both devices at the end of the day in the hopes of using them the next day. I have since learned that OTAA requires a unique DevNonce field, which I will eventually figure out how to increment and store in the NVM of the SAM R34 (but that’s for another day). Since I am still learning/developing, I took some advice from other forum posts to use ABP.

After setting up a new end device in the Console and hard-coding the required fields into the end device and the Console, the SAM R34 was able to “join” the network successfully and reports having done so in the terminal:

(screenshot: SAM R34 terminal output reporting a successful join)

However, this is where I began to run into problems. I don’t see any uplink messages in the end device’s Live Data tab in the Console, and I see no incoming payloads in the gateway’s Live Data tab (these were previously working when using OTAA). Furthermore, the Overview tab of the end device reports that there have been no uplinks or downlinks and that no activity has ever been detected from the end device.

These are things I have tried:

  • Triple-checking that I have all of the right EUIs and Keys on the Console and the SAM R34
  • Making sure that the correct frequency plan (NA_915) and sub-band (FSB 2) were set on the Console, the SAM R34, and the Gateway
  • Reprogramming the SAM R34
  • Restarting the Gateway

Would anyone be able to shed some light on what I may be missing? Perhaps some documentation or configuration detail that I missed along the way?

Thank you all in advance!

Oops, which ones? :crossed_fingers: it wasn’t me …

ABP does not involve a join.

Frame counters for ABP are LoRaWAN 101. And for OTAA, the DevNonce in v1.0.4 isn’t something that we/you increment - that happens automagically; all you have to do is save it. But the SAMR34’s lack of explicit NVM means the LoRaMAC-node code base doesn’t have that built in like the other examples do.
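
As a very rough illustration of the save/restore idea (this is not the Microchip API - every function name below is a hypothetical placeholder; the real SAMR34 stack has its own PDS/NVM layer and attribute calls):

```c
/* Sketch only: persisting the DevNonce across power cycles.
 * nvm_read_word()/nvm_write_word() and the stack_*() calls are
 * hypothetical placeholders standing in for whatever NVM/PDS and
 * attribute API your stack actually provides. */
#include <stdint.h>

#define DEVNONCE_NVM_ADDR  0x00u   /* some reserved NVM slot */

extern uint16_t nvm_read_word(uint32_t addr);               /* hypothetical */
extern void     nvm_write_word(uint32_t addr, uint16_t v);  /* hypothetical */
extern uint16_t stack_get_devnonce(void);                   /* hypothetical */
extern void     stack_set_devnonce(uint16_t nonce);         /* hypothetical */

/* Call before issuing the OTAA join request. */
void devnonce_restore(void)
{
    uint16_t saved = nvm_read_word(DEVNONCE_NVM_ADDR);
    if (saved != 0xFFFFu) {          /* 0xFFFF = erased flash, nothing saved yet */
        stack_set_devnonce(saved);
    }
}

/* Call after every join attempt so the next power cycle carries on from here. */
void devnonce_save(void)
{
    nvm_write_word(DEVNONCE_NVM_ADDR, stack_get_devnonce());
}
```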

I like the SAMR34/WLR089U because the pre-built, certified library is unalterable by us mortals, which pretty much implies a certifiable device - but at the cost of absolute adherence to the spec.

Whilst you can turn off frame counter checks for ABP, you still lose the benefits of OTAA, which not only clears the frame counter for you but also sets up some other super-important things like channels and Rx1 delay and so forth. So you have two options: the PitA one of hitting the MAC reset button on the console for the device, or using the CLI to turn off the DevNonce check.
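
For reference, the CLI route is a single command - something along these lines, assuming ttn-lw-cli is already set up and logged in (the application and device IDs are placeholders, and this is from memory, so check `ttn-lw-cli end-devices set --help` for the exact flag name on your stack version):

```bash
# Allow DevNonce re-use for a development device (not LoRaWAN compliant).
# "my-app-id" and "my-device-id" are hypothetical - substitute your own.
ttn-lw-cli end-devices set my-app-id my-device-id --resets-join-nonces=true
```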

For development overall I do much of my work radio-off - the firmware does everything right up to the point where it would pass the payload over to the stack, but instead just prints it (along with any other info) to the debug console. As the LoRaWAN stacks are pretty much a black box, it seems pointless waiting for a send, delay, receive cycle - your debug output can detail what’s happened and give you the final payload byte array, which you can check with any number of tools. Payload creation is sufficiently simple that you shouldn’t really need to keep checking it, though - the debug output can show the real-world values for you to sense-check.
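
A minimal sketch of that radio-off pattern (plain C with made-up sensor values, nothing SAMR34-specific - the encoding is just an example, not your actual payload format):

```c
/* Radio-off development sketch: build the uplink payload exactly as the
 * real firmware would, then print it instead of handing it to the stack. */
#include <stdint.h>
#include <stdio.h>

/* Pack a temperature (0.01 degC steps) and battery (mV) into 4 bytes. */
static uint8_t build_payload(uint8_t *buf, int16_t temp_centi, uint16_t batt_mv)
{
    buf[0] = (uint8_t)(temp_centi >> 8);
    buf[1] = (uint8_t)(temp_centi & 0xFF);
    buf[2] = (uint8_t)(batt_mv >> 8);
    buf[3] = (uint8_t)(batt_mv & 0xFF);
    return 4;
}

int main(void)
{
    uint8_t payload[4];
    int16_t temp_centi = 2312;   /* pretend sensor reading: 23.12 degC */
    uint16_t batt_mv   = 3287;   /* pretend battery voltage */

    uint8_t len = build_payload(payload, temp_centi, batt_mv);

    /* Instead of handing this to the stack's send call, just show what would go out. */
    printf("temp=%d.%02d C batt=%u mV payload=",
           temp_centi / 100, temp_centi % 100, (unsigned)batt_mv);
    for (uint8_t i = 0; i < len; i++) {
        printf("%02X", payload[i]);
    }
    printf("\n");
    return 0;
}
```

You can then paste the hex string into whatever decoder or payload formatter you use on the other end to confirm it round-trips to the same values.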

For testing the Application Server to my backend, I use the ‘Simulate uplink’ on the console as it saves me waiting for a device to transmit and I can do it faster than the FUP would normally allow.

For end to end testing I use the CLI to turn off the DevNonce check as pressing the console button isn’t always feasible.

For deployment, at worst a device may get restarted, in which case it only takes a few join attempts to catch up with the previous DevNonce value; at best the DevNonce is saved and restored on the next join attempt.

So, I’d go with OTAA and press the MAC reset button or turn off checks via the CLI.

PS, Please do not post text that can be copied & pasted.

Even apart from registration, frame counter, and dev nonce type issues, there should still be raw traffic picked up at the gateway when the device transmits - it just wouldn’t make it through validation and decoding.

Can you log the actual transmission frequencies from the end device side? As a side effect, that would confirm that a transmission has actually happened - which is unclear from your screenshot: it gives no hint of the application payload or overall packet length, nor the spreading factor, nor the frame count - it might be showing the settings that would be used, rather than confirming that a packet has actually been sent.

One concern is that it should be the second sub-band, so it’s #2 if counting from one but #1 if counting from zero, as is more typical in programming. To put it another way, you should be uplinking on channels 8-15, counting from 0 the way the LoRaWAN spec counts them, and also, theoretically, on wideband channel 65. In the LoRaWAN spec, OTAA nodes are supposed to randomly try to join across all legal sub-bands, so that’s a configuration issue that formally exists only with ABP and not with OTAA - though practically speaking a lot of people cheat the spec and rig their OTAA firmware to only try the assumed sub-band.
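
To make the numbering concrete, here’s a quick sketch of the US915 uplink channel arithmetic - sub-band 2 (counting from 1) works out to the 125 kHz channels 8-15 plus the 500 kHz channel 65:

```c
/* US915 uplink channel frequencies, to show the sub-band numbering.
 * 125 kHz channels 0-63:  902.3 MHz + 0.2 MHz * ch
 * 500 kHz channels 64-71: 903.0 MHz + 1.6 MHz * (ch - 64)
 * "FSB 2" (counting from 1) = 125 kHz channels 8-15 + 500 kHz channel 65. */
#include <stdio.h>

int main(void)
{
    int sub_band = 2;                   /* counting from 1, like the TTN console */
    int first_ch = (sub_band - 1) * 8;  /* 8  */
    int last_ch  = first_ch + 7;        /* 15 */
    int wide_ch  = 64 + (sub_band - 1); /* 65 */

    for (int ch = first_ch; ch <= last_ch; ch++) {
        printf("ch %2d: %.1f MHz (125 kHz)\n", ch, 902.3 + 0.2 * ch);
    }
    printf("ch %2d: %.1f MHz (500 kHz)\n", wide_ch, 903.0 + 1.6 * (wide_ch - 64));
    return 0;
}
```

If the frequencies your device logs don’t fall in that 903.9-905.3 MHz range (or 904.6 MHz for the wideband channel), the gateway will never hear it.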

And you need the sync word set for a public network (0x34), though that’s usually the default.
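
For reference, the values involved - the stack should already be doing this for you, and the register-write helper below is hypothetical:

```c
/* LoRa sync word values (SX127x RegSyncWord, address 0x39).
 * Public LoRaWAN networks such as TTN use 0x34; 0x12 is the bare
 * radio's "private" default. radio_write_reg() is a hypothetical
 * placeholder for whatever register access your firmware exposes. */
#include <stdint.h>

#define LORA_REG_SYNC_WORD      0x39u
#define LORA_SYNC_WORD_PUBLIC   0x34u
#define LORA_SYNC_WORD_PRIVATE  0x12u

extern void radio_write_reg(uint8_t addr, uint8_t value);  /* hypothetical */

static void radio_use_public_network(void)
{
    radio_write_reg(LORA_REG_SYNC_WORD, LORA_SYNC_WORD_PUBLIC);
}
```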

Hi descartes, thank you for your reply

I got this advice from another forum post on this topic (I’ll link it here: Microchip SAMR34 Joining Denied - #19 by Afremont) and I should have mentioned that I was going to use ABP for development purposes, and then use OTAA for production.

I realize this, but my understanding is that, with the way Microchip’s stack API is built, it is considered a “join” on the Microchip side, even if there’s no actual join procedure. Hence the quotation marks.

This was incredibly helpful, thank you. I will try to go ahead with OTAA without DevNonce checks and see how far I get.

I’m not sure I understand what you mean by this. Can’t all text be copied and pasted?

Hi cslorabox, thanks for your reply.

I’m not sure how I would go about doing this. I think I would have to take some time to do a deep dive into Microchip’s stack and change the example code around a bit to show this information in the terminal when the program runs. Unless you are referring to the end device Console, in which case one of my problems is that I cannot see anything in the end device Console.

I suppose this could be an issue, but the example code starts out with the sub-band being defined as 1 rather than 2. If the sub-bands were 0-indexed, then I would imagine I wouldn’t have had to change this value to 2, though this didn’t do anything immediately noticeable either. Furthermore, another forum post on this topic corroborates this (Microchip SAMR34 Joining Denied - #32 by Afremont).

It seems that OTAA should be the way to go whether for development or production. Is there a benefit to rigging firmware to only try that one sub-band?

The spec’s instruction to randomly try across all sub-bands means that, with a typical 8-channel gateway listening on a single sub-band, on average only 1 out of every 8 join attempts can actually be received.

Of course that’s an improvement on the 0 out of every 8 that occurs if the configured sub-band is wrong.

But it’s still unclear that your device is even trying to transmit at all.

Yes, that’s definitely the issue I should try to figure out. But in the meantime, I had the OTAA activation working, saw the messages in the cloud Consoles, and the only hiccup was the DevNonce field. So I think I can assume it’s just an ABP configuration issue on my part.