Big STM32 boards topic

Ok. Keep me in the loop when you get the solution :wink:
Thanks !

When you try exactly this:

int analogValue = analogRead(PA1);

Does this return 0 for analogValue, or was the 0 maybe the result from calculating the actual voltage?
(I had this problem somewhere during my testing.)

I have it working now, but I have tried and tested so many things that I do not remember what exactly fixed it.
I have first done some tests with a bluepill board because that allowed me to test with different cores: ‘Arduino STM32’, ‘Arduino Core STM32’ and STM32GENERIC.

On the bluepill I used PA1 for the measurements and added a 2x100k voltage divider, similar to the LoRaM3-D boards.
Arduino STM32 on the bluepill worked. It uses the full 12-bit ADC resolution (max value: 4095).
Arduino Core STM32 on the bluepill also worked, but uses only 10-bit resolution (max value: 1023); its readings were around 10% too low.
STM32GENERIC failed to compile as soon as I included #include <U8x8lib.h>, so I didn’t bother trying it any further.

It now also works with BSFrance-stm32 and a LoRaM3-D F103 board. BSFrance-stm32 also uses only 10-bit ADC resolution.

With no battery connected and powered via USB (from a computer), the multimeter measured a constant 2.007V on PA1, but analogRead reported values fluctuating between 507 and 600+ (around 1.64 to 1.95V).
With a LiPo battery connected to the battery connector and no USB connected, the multimeter measured a constant 1.864V on PA1 and analogRead was more stable, reporting values between 539 and 542 (1.74 to 1.75V).

#define ANALOG_MAX_VALUE 1023    // 1023 for 10-bit resolution, 4095 for 12-bit resolution
int analogValue = analogRead(PA1);
float voltage = analogValue * (3.3 / ANALOG_MAX_VALUE);
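Note that the snippet above only yields the voltage at the pin; with the 2x100k divider in place, the battery voltage is twice that. A minimal sketch of the full conversion (plain C++ so it can be checked off-target; the divider ratio of 2 is an assumption based on the 2x100k divider mentioned earlier):

```cpp
// Convert a raw ADC reading to battery voltage.
// Assumptions (not from the original post): 3.3 V ADC reference, and the
// 2x100k divider described above, which halves the battery voltage at PA1.
float adcToBatteryVoltage(int raw, int adcMaxValue) {
    float pinVoltage = raw * (3.3f / adcMaxValue);  // voltage at the pin
    return pinVoltage * 2.0f;                       // undo the 1:2 divider
}
```

With a 10-bit reading of 542 this gives about 3.50 V, a plausible LiPo level, which is a quick sanity check for the divider assumption.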

@bluejedi
OK, it is a bit late as the thread has moved on to other topics, but I can confirm that HW I2C works as you describe on my F103.


Hi
New challenge that I am facing today: getting a Waveshare e-Paper (1.54") running on the F103.
Yes, I know there is an OLED, so what do I need an e-Paper for … :wink:
And yes, this could even be off-topic because it has nothing specifically to do with the radio part.

Thing is: with the radio module sitting on SPI, I am not sure whether it is me, the library, … why it won’t work.
And I always struggle with SPI; I2C seems easier to handle. Fewer wires, maybe.

The soonuse-library https://github.com/soonuse/epd-library-arduino looked promising, simple and straight forward, so I gave it a try. Starting with the Pin description:
3.3V --> 3V3
GND --> GND
DIN --> D11
CLK --> D13
CS --> D10
DC --> D9
RST --> D8
BUSY --> D7

… it is easy to change some of the pins in epdif.h, like this:
#define RST_PIN PA8
#define DC_PIN PA0
#define CS_PIN PB14
#define BUSY_PIN PB9

These should be free to choose, except DIN (MOSI) and CLK (SCLK).
I am not sure whether I can share them with the SPI radio pins (PA7, PA5) or not.
The code stops at epd.Init() and I have no real clue where to look next.
If I need to use SPI2 (PB15, PB13), I have no idea how to do that.
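For what it is worth, on the ‘Arduino STM32’ (Roger Clark) core a second SPI port can typically be instantiated by number; an untested sketch (the constructor form is core-specific, so verify against your core’s SPI documentation before relying on it):

```cpp
#include <SPI.h>

// On the Roger Clark 'Arduino STM32' core, SPI ports are numbered:
// SPIClass(2) maps to SPI2 on PB15 (MOSI), PB14 (MISO), PB13 (SCLK).
SPIClass SPI_2(2);

void setup() {
  SPI_2.begin();  // libraries that accept an SPIClass& can then be handed SPI_2
}

void loop() {}
```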

Anybody have experience with these e-Papers?


OK, as editing is no longer possible, I will reply to my own question…

  • SOLVED -

I moved over to the library: https://github.com/ZinggJM/GxEPD
I had a bit of trouble finding the parts and pins and all, but it runs, sharing SPI between the LoRa radio and the ePaper.
As there is only a suggested pinout for generic STM32F103s but not for the BSFrance board, I chose the following:

#include <GxEPD.h>
#include <GxGDEW0154Z04/GxGDEW0154Z04.cpp>  // The waveshare 1.54" black/white/red 200x200
#include <GxIO/GxIO_SPI/GxIO_SPI.cpp>
#include <GxIO/GxIO.cpp>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306_STM32.h>

GxIO_Class io(SPI, /*CS=*/ PA4, /*DC=*/ PB12, /*RST=*/ PB1);
GxEPD_Class display(io, /*RST=*/ PB1, /*BUSY=*/ PA8);
//CLK=PA5; DIN=PA7

No changes are necessary in the libraries, just in the code that uses them.
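To round out the constructor snippet above, a minimal “hello world” along the lines of the GxEPD demo sketches might look like this (untested; the method names follow the GxEPD examples, so verify against the library’s own example code):

```cpp
#include <GxEPD.h>
#include <GxGDEW0154Z04/GxGDEW0154Z04.cpp>  // Waveshare 1.54" black/white/red 200x200
#include <GxIO/GxIO_SPI/GxIO_SPI.cpp>
#include <GxIO/GxIO.cpp>
#include <Adafruit_GFX.h>

GxIO_Class io(SPI, /*CS=*/ PA4, /*DC=*/ PB12, /*RST=*/ PB1);
GxEPD_Class display(io, /*RST=*/ PB1, /*BUSY=*/ PA8);

void setup() {
  display.init();                   // also brings up the shared SPI bus
  display.fillScreen(GxEPD_WHITE);  // clear the frame buffer
  display.setTextColor(GxEPD_BLACK);
  display.setCursor(0, 20);
  display.println("Hello ePaper");
  display.update();                 // full refresh: transfer buffer to panel
}

void loop() {}
```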

Btw, regarding I2C:
Sharing the pins SDA: PB7 and SCL: PB6 with some other I2C devices also works fine (as it should). However, I noticed one case (a MOD-1023 from Embedded Adventures) where u8g2 seemingly interfered with the data transmission at times (I have no clue why or how).
Interestingly, the problem went away when using the Adafruit GFX library, which I now also use for the OLED (and which GxEPD uses for the ePaper). Having two instances running is no problem. Though I prefer u8g2 over Adafruit’s lib for its fonts, I was not able to resolve the problem with the scrambled data.

Maybe this might help somebody in the future.


@GrumpyOldPizza

Continued from here: Murata CMWX1ZZABZ-xxx LoRaWAN Module Breakout Board - #17 by GrumpyOldPizza

With all respect, what docs? :wink:
The code doesn’t contain any (and the LoRaWAN library code is rather cryptic).
Adding some documentation to the code and adding a description of the library would increase its usability for others tenfold.

I tried the included basic examples TTN_OTAA.ino and TTN_ABP.ino using #define REGION_EU868 (on a B-L072Z-LRWAN1 board).
Unfortunately I directly ran into several issues:

  • TTN_ABP.ino
    The examples use LoRaWAN keys/id’s in string format.
    The only (byte array) string format that TTN Console provides is msb-first.
    (TTN Console supports both msb-first and lsb-first only for the array initializer notation with brackets.)
    Unfortunately the TTN_ABP example expects the devAddr string variable to be in lsb-first format, which is not consistent with the format that TTN Console provides, and neither is it documented that devAddr requires lsb-first instead of msb-first format. nwkSKey and appSKey use msb-first format, as presented on the TTN Console, but again information about the required format is missing.

  • TTN_OTAA.ino
    Different from TTN_ABP.ino, TTN_OTAA.ino expects all LoRaWAN keys/id’s in msb-first format.
    That is at least the only way I can get a JOIN ACCEPT on a JOIN REQUEST.
    I tried devEui, appEui and appKey in different msb-first / lsb-first combinations, but only when all were msb-first did I actually see JOIN ACCEPTs on the gateway and application consoles.
    But then it stops: the JOIN ACCEPT is not followed up by an upstream message from the node, and I see no data arriving.
    What can this be? Why doesn’t the node send any data messages?

  • Both the OTAA and ABP examples start communication at SF12 instead of the usual SF7.
    The examples do not specify a spreading factor so SF12 appears to be the default.
    I think the default should be SF7.
    Where can the spreading factor be specified?


Yes, as soon as this goes to beta. I still have a long list of things to address. While addressing those I really need to keep the freedom to mess around with the API.

The code is not that cryptic … it just needed to be as small as possible. I still have this notion that a decent LoRaWAN sensor application might fit into an STM32L052.

Thanx for pointing that out. I’ll update the comments in the examples. Yes, “devAddr” is LSB and the keys are MSB. I followed the common convention there (Arduino MKR 1300, Murata’s own AT command set …).

Keys are MSB first per common convention; the rest is LSB first. So perhaps this is the issue. Does the gateway send out a CFList in the JOIN_ACCEPT to populate the rest of the channels? I have heard different things from users (I am US915-based, so testing EU868 is somewhat tricky).

EU863-870

But even if the gateway did not send a CFList, you’d still be able to use the three core channels. However, then there is the duty-cycle issue, which means the system will wait until it is allowed to send again. Perhaps for debugging you’d want to use:

LoRaWAN.setDutyCycle(false);

Yes, by default ADR is enabled, so the setup starts with SF12BW125 (i.e. DR_0). SF7 is unclear. There is SF7BW125 (DR_5) and SF7BW250 (DR_6).

What you’d do is:

LoRaWAN.setADR(false);
LoRaWAN.setDataRate(5);
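For context, these calls would go in setup() before joining; a hedged sketch of how they might fit together (the begin/join call names are taken from the library’s TTN_OTAA example, so treat them as assumptions and check against the actual example; the key strings are placeholders):

```cpp
#include <LoRaWAN.h>

// Placeholder keys; use the msb-first strings from your TTN Console.
const char *appEui = "0000000000000000";
const char *appKey = "00000000000000000000000000000000";
const char *devEui = "0000000000000000";

void setup() {
  LoRaWAN.begin(EU868);          // select the regional band plan
  LoRaWAN.setDutyCycle(false);   // debugging only: not ETSI-conformant
  LoRaWAN.setADR(false);
  LoRaWAN.setDataRate(5);        // DR_5 = SF7BW125 in EU868
  LoRaWAN.joinOTAA(appEui, appKey, devEui);
}

void loop() {}
```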

Thanx for the good feedback.

My bad. All keys and IDs are MSB first. Only “devAddr” is currently LSB first. I’ll change the latter one.

Sorry for the confusion. The actual JOIN_REQUEST uses LSB order on the air … it always trips me up.

The compiled form of less cryptic code does not have to be larger. :sunglasses:
It is more related to the brief names and the lack of descriptions.
(But when you have written the code yourself, that will probably be less obvious.)

Does the gateway send out a CFlist in the JOIN_ACCEPT to populate the rest of the channels ? I heard different things from users (I am in US915 based, so testing EU868 is somewhat tricky).

I have no experience with checking the CFList in a join accept. I normally use LMIC-Arduino and didn’t have to bother with that before. How do I check that?

Sure, testing EU868 is difficult when you are situated in the US.

Great. That makes it consistent with the TTN Console (when using strings), so the keys/id’s can simply be copy-pasted. But please make the user aware of it.

SF7 is unclear. There is SF7BW125 (DR_5) and SF7BW250 (DR_6).

I’m not sure which one it should be.
The code below is from the LMIC-Arduino ttn-abp.ino example, but it does not directly provide an answer.

I do know that LMIC-Arduino first tries an OTAA join at SF7 and then gradually steps up to higher spreading factors if the join does not succeed (it takes a long time before it finally reaches SF12).

#if defined(CFG_eu868)
    // Set up the channels used by the Things Network, which corresponds
    // to the defaults of most gateways. Without this, only three base
    // channels from the LoRaWAN specification are used, which certainly
    // works, so it is good for debugging, but can overload those
    // frequencies, so be sure to configure the full frequency range of
    // your network here (unless your network autoconfigures them).
    // Setting up channels should happen after LMIC_setSession, as that
    // configures the minimal channel set.
    // NA-US channels 0-71 are configured automatically
    LMIC_setupChannel(0, 868100000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(1, 868300000, DR_RANGE_MAP(DR_SF12, DR_SF7B), BAND_CENTI);      // g-band
    LMIC_setupChannel(2, 868500000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(3, 867100000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(4, 867300000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(5, 867500000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(6, 867700000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(7, 867900000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(8, 868800000, DR_RANGE_MAP(DR_FSK,  DR_FSK),  BAND_MILLI);      // g2-band
    // TTN defines an additional channel at 869.525Mhz using SF9 for class B
    // devices' ping slots. LMIC does not have an easy way to define set this
    // frequency and support for class B is spotty and untested, so this
    // frequency is not configured here.
#elif defined(CFG_us915)
    // NA-US channels 0-71 are configured automatically
    // but only one group of 8 (a subband) should be active
    // TTN recommends the second sub band, 1 in a zero based count.
    // https://github.com/TheThingsNetwork/gateway-conf/blob/master/US-global_conf.json
    LMIC_selectSubBand(1);
#endif //defined

I do have a bunch of gateways around here (including EU868). However, they are configured either as simple local gateways or as packet forwarders, where I can see directly in the log files what happens packet by packet … But the TTN package was kind of invasive last time I tried.

I don’t think this is spec-conformant. Most “devices on our network” style papers from Orange/KPN/Senet/machineQ seem to imply a rather strong preference to use the lowest datarate for an OTAA join, and then subsequently (first user data packet) either ADR or a user-configured value. In fact the first non-user packet has to include ADR, or there is no way for a US915-based setup to get the channel mask set …

I can see the logic of seeding ADR during the JOIN_REQUEST. But looking at my gateways and their ADR logic: after 6 packets it has figured out the datarate, and from there it takes only the minimal number of steps to get to the lowest TxPower (I think that’s at packet 14 … still not that great, as the gateway could have figured out that 30 dBm is not legal if it has only 8+2 frequencies).

The following is copied from the LMIC-Arduino ttn-abp.ino example but does not directly give an answer.

Actually this code is only relevant for ABP. There the gateway does not know when you are joining, and hence cannot send NEW_CHANNEL_REQ commands or a ADR_REQ to add new channels and/or enable/disable them.

Again I suspect that you just got caught out by the “why does my first packet take 2 minutes before it is sent” effect. LoRaWAN.setDutyCycle(false) will fix that, although it’s not ETSI-conformant. Perhaps I should change that for the TTN examples to avoid that trap …

That did it. I have TTN_OTAA.ino working now. :+1:
(Not necessary to disable ADR.)

Actually this code is only relevant for ABP.

Yes I know, I only showed it as an example.

I am confused now.

What did address your issue:

(a) LoRaWAN.setDutyCycle(false)

(b) LoRaWAN.setDataRate(5)

I would be rather confused if it’s a datarate problem … Unless your gateway is seriously misconfigured.

LoRaWAN.setDataRate(5) is what addressed my issue (“not necessary to disable ADR”).

LoRaWAN.setDataRate(5)

I don’t like magic numbers. Are there any enums or macros defined for these? Where can I find them?

Ah, that is a very good point … one that I have no good answer to.

Here is the problem: the LoRaWAN protocol works with datarates 0 to 15 (not all are used); 0 is the lowest, 15 the highest.

What a specific datarate means is region specific. So for EU868, DR_0 means SF12BW125; for US915, DR_0 means SF10BW125 … All of that is nicely documented in the LoRaWAN Regional Parameters document, which you can download upon request from the LoRa Alliance … after you register …

Not user friendly at all. TxPower has the same issue, except that there I do the “dBm” to “tx-power-index” translation (which is really power-dbm = EIRP/ERP - 2 * power-index).

So there I am stuck. Whether you use “0” or “DR_0” does not matter much; it’s bad either way. If I start using enums along the lines of SF[sf]BW[bw] it becomes more intuitive, except that then the question comes up “why does SF12BW125 not work for me?” … “are you on US915?” … “yes” … “that’s not supported”. So you end up needing to reference the LoRaWAN Regional Parameters document again.

I could of course have a dual set enum scheme:

DR_0 = 0,
DR_1,
…
SF12BW125 = 16,
SF11BW125,
…
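Such a dual scheme would need a region-dependent translation behind it; a plain C++ illustration of the mapping (the EU868 and US915 uplink values are my reading of the LoRaWAN Regional Parameters, so double-check them against the document for your region):

```cpp
#include <string>

// Translate a LoRaWAN datarate index into its physical meaning for two
// regions. Uplink datarates only; values per the LoRaWAN Regional
// Parameters document.
std::string datarateName(const std::string &region, int dr) {
    if (region == "EU868") {
        static const char *eu[] = {"SF12BW125", "SF11BW125", "SF10BW125",
                                   "SF9BW125",  "SF8BW125",  "SF7BW125",
                                   "SF7BW250"};
        if (dr >= 0 && dr <= 6) return eu[dr];
    } else if (region == "US915") {
        static const char *us[] = {"SF10BW125", "SF9BW125", "SF8BW125",
                                   "SF7BW125",  "SF8BW500"};
        if (dr >= 0 && dr <= 4) return us[dr];
    }
    return "unsupported";  // datarate not defined for this region's uplink
}
```

This makes the “SF12BW125 on US915” problem explicit: the lookup simply has no entry for it, rather than silently accepting an invalid combination.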

Yes but I assume that would have to be region dependent.
A dual scheme probably causes confusion and inconsistencies in applications.
A one-fits-all solution is probably not feasible due to region dependent implementation differences.

I did additional testing. These are the results:

Data Rate       Join Request   Join Accept   After Join Accept
---------       ------------   -----------   -----------------
setDataRate(0)  SF12BW125      SF12BW125     Nothing, then retry after 2:30
setDataRate(1)  SF11BW125      SF12BW125     Nothing, then retry after 1:20
setDataRate(2)  SF10BW125      SF10BW125     Recognized, starts uploading messages
setDataRate(3)  SF9BW125       SF9BW125      Recognized, starts uploading messages
setDataRate(4)  SF8BW125       SF8BW125      Recognized, starts uploading messages
setDataRate(5)  SF7BW125       SF7BW125      Recognized, starts uploading messages
setDataRate(6)  Exactly like setDataRate(0)

So when doing Join Requests at SF11BW125 or SF12BW125, the gateway sends Join Accepts, but they are not recognized / not properly handled by the node.

Notice the different behavior when doing Join Requests at SF11BW125: the Join Accept uses SF12BW125 while the request was done at SF11BW125.

Can you explain the different (accept on SF12) behavior when doing Join Request on SF11BW125?
Why are the Join Accepts when doing requests on SF11BW125 and SF12BW125 not recognized / not properly handled?

Do you have some more info about the frequency on which the gateway sees the incoming JOIN_REQUEST, and on which frequency it sends its JOIN_ACCEPT?

The RX2 window for TTN is non-standard:

869.525 - SF9BW125 (RX2 downlink only)

And yes, you found that for EU868 you can set the datarate before the join, and it will use that one … I’ll check with my gateway what it answers with. The join accept should be at the same datarate, unless it’s answering on the RX2 slot. This is why I think your gateway answers for SF12BW125 and SF11BW125 on the RX2 slot, but on 869.525MHz with SF12BW125 (which is the EU868 standard).

The LoRaWAN_TTN_OTAA.ino contains this line:

LoRaWAN.setRX2Channel(869525000, 3);

Mind removing it and retrying the series? That would remove the TTN-specific RX2 window.

Three sequences for DR 5: (reset before each sequence)

[image: capture 2018-06-05 19·15·37]

There are image issues with the forum software the last months, so you will have to right-click the image and then select ‘Open image in new tab’ (for Chrome) or whatever option your browser has.

Note: setRX2Channel(…) still in place (not yet removed).

Three sequences for DR 4:

[image: capture 2018-06-05 19·25·26]

Two sequences for DR 2:

[image: capture 2018-06-05 19·30·02]

Well, that does not show the join behaviour … The series without the setRX2Channel(…) line would be the really interesting one …