You could always just send the raw register data and do the calculation at the other end, or halfway along using TTN’s decoder. The calibration registers would have to be read once and, since most of them have different values for each device, you’d have to make provision for that in the calculation.
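For the “decode at the other end” approach, a TTN payload formatter is plain JavaScript. Here is a minimal sketch; the `decodeUplink` function name and its input/output shape are TTN’s (v3) convention, but the two-byte big-endian register layout and the gain/offset numbers are purely illustrative — the real layout and per-device constants would come from the sensor’s datasheet and its one-time calibration read.

```javascript
// TTN (v3) payload formatter sketch.
// Assumption: the node sends one raw 16-bit register, big-endian.
// The gain/offset values are placeholders for the per-device
// calibration constants read once from the chip.
function decodeUplink(input) {
  var b = input.bytes;

  // Reassemble a signed 16-bit big-endian raw register value.
  var raw = (b[0] << 8) | b[1];
  if (raw & 0x8000) raw -= 0x10000;

  // Apply the (hypothetical) calibration: value = raw * gain + offset.
  var gain = 0.0625;
  var offset = 0;

  return { data: { value: raw * gain + offset } };
}
```

Per-device constants could be kept in a lookup table here, or sent once by the node on a separate port and applied in the application instead.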
Would not be all that hard to do, hadn’t considered that option. Thanks
I have an issue with my new RM1xx. I want to simply test it on TTN but, for some reason, the example code is not compiling. I set the EUIs and Keys (just randomized numbers for testing here), run ‘XCompile + Run’ and get this error. My firmware version is 184.108.40.206.
I hope someone can help me with this issue!
For future search and better legibility, please don’t post images for what is basically text. Also, when the text is properly formatted in a monospace font, the ----^ in the compiler output will point to a specific location.
Also, it seems the black part of the screenshot is not related to your programming at all? And obviously, seeing some code will help people help you. The source of the error might very well be in the previous line as well.
Did you search the forum for the error? It’s mentioned earlier in Part 1, and also on GitHub:
@linssenste, just check that you have defined (DIM) the variable tkn$ near the start of your code. If that does not fix the problem, then try to post your code here (in a code window). I can see from the background of your screenshot that you have erased and set parameters to flash memory. They seem OK.
Does anyone know if the RM1XX will be capable of receiving BLE advertising packets from multiple RN4870/71? With the aim of then sending them over LoRa.
Best bet is to contact their tech support …They have excellent technical support …and get the info you need …
Thank you, will do
I used the RM1XX to receive BLE advertising packets from multiple sources and then pushed them up to LoRa. Note that I am not specifically referring to RN4870/71 as I don’t know what it is.
Has anyone actually achieved Laird’s quoted 4uA standby doze? The only way we’ve managed so far is to use the LoraMacSleep mode - which renders the Lora chip unusable… Any attempt to subsequently use a Lora function results in the SmartBasic code dropping out with a “!~FAULT~!” error.
Deep sleep is not ideal because we lose the OTAA session when we restart, and we can’t find a way of storing the session keys to subsequently use ABP.
We’re in discussion with Laird tech support on this but I’m interested if anyone has found a work-around.
Which version of the RM1XX firmware are you using, and which region (AU/AS/EU)? That sounds like issues I had seen in the original firmware on the RM191 modules. I have played around with the RM191 modules and yes, doze mode draws the ~4.7uA stated in the documentation, but only with the latest firmware. Double-check the Laird website for firmware updates; they keep a log of changes/bug fixes etc. The documentation for the module is really good.
We’re using Peripheral RM186 EU v220.127.116.11 which is the latest for the RM186 listed on the website (unless I’ve missed something). If you’ve managed 4.7uA while retaining the session keys and a LoraWan system that can send without re-joining then maybe there’s some light at the end of the tunnel…
We need the “Peripheral” version so that we can easily connect with a mobile phone app via BLE for setting our device up. Everything works OK except for the current consumption.
We’ve done all we can with turning off peripherals, setting the DIO pull-ups etc. If we call LoraMacSleepMode() our current consumption drops by ~100uA, so we know what the culprit is.
I’m using the same firmware and the node (with additional hardware) consumes well below 100uA. To get there I had to set the digital I/O used for auto-booting the app to a defined state. And I had to remove IoC, as it either consumed a lot of power (one solution) or caused issues with the LoRaWAN stack (the other solution). I’m working with Laird support on that.
Have you found a way to change the name of the module for Bluetooth discovery? I’m able to change it after connect but would like that before to distinguish between devices before connecting.
“IoC” -> Do you mean “I2C” or have I missed something?
We have nAutoRun pulled to ground with 10k at the moment - will try your suggestion to set the state after running.
Here’s what we use to change the name of the BT:
#DEFINE BLE_APPEARANCE_GENERIC_COMPUTER 128
#DEFINE DEVICENAME "DEV-"
#DEFINE DEVICENAME_WRITABLE 0
#DEFINE APPEARANCE BLE_APPEARANCE_GENERIC_COMPUTER
SPRINT #advDevName$, DEVICENAME;devSerialNum
rc = BleGapSvcInit(advDevName$, DEVICENAME_WRITABLE, APPEARANCE, MIN_CONN_INTERVAL, MAX_CONN_INTERVAL, CONN_SUP_TIMEOUT, SLAVE_LATENCY)
Interrupt on Change.
I am using:
rc = GpioSetFunc(28,1,4)
rc = GpioSetFunc(17,1,1)
rc = GpioSetFunc(0,1,1)
rc = GpioSetFunc(3,1,1)
The hardware has nAutoRun pulled down and SIO_28 pulled up for VSP command mode. There is hardware connected to SIO_0 and SIO_3, that is why I initialize them as well.
OK - we’ve made a little progress on the power consumption. Turns out we weren’t closing the BT down cleanly: we needed to call BleDisconnect() as well as BleAdvertStop(). This has shaved ~40uA off the overall consumption - still a long way to go to reach the published 4uA though.
I’m not sure what we can do to “remove IoC” - we have just one pin used as “wake-up” - and we use GPIOASSIGNEVENT() to trigger a Lora send.
The other issue we have is the EVLORAMACTIMEOUT event is getting thrown frequently when we attempt an OTAA join. Using the debug facility we can see the Rx window isn’t being opened when it should. Seems to be an issue with the size / complexity of our code. It’s an intermittent problem which is making it hard to track down…
Sounds like you might be experiencing the issues I’m running into. My tests using GpioAssignEvent result in failures in the LoRaWAN stack (check the packet counter of subsequent transmissions in the TTN console; I’m not seeing the counter increment).
I’m working with Laird support on how to resolve/work around my issues.
Yes I think we have the same problem. I’ve replaced GPIOASSIGNEVENT() with a simple timer that ticks every few seconds, checks the “wake-up” pin and triggers the Lora transmission sequence if needed - this works just fine. It’s less than ideal though. Overall current consumption hasn’t risen significantly.
I did try re-assigning the wake-up pin to SIO_5 since this is wired to a push-button on the Dev-Kit - just in case Laird had only tested the firmware on that board. Made no difference.
I also tried every trick I could think of to decouple the wake-up event from the actual Lora transmission, including using GPIOUNASSIGNEVENT() and the SendMsgApp() function to allow the GPIO event to complete first.
I’ll drop another email to Laird - hopefully we’ll get a fix. The frustration is that we’ve spent more time on a few lines of SmartBasic in an attempt to nail this problem than on the thousands of lines of C code for the processor that looks after everything else…