ADR moving from SF7 to SF8 despite high SNR margin

Hi, my TTGO LoRa32 node running LMIC with ADR enabled (region AS923) starts at SF7 after joining (it is only 20 m from the gateway), but after a few days it moves to SF8 for reasons I cannot understand, and I’d like some advice on how to troubleshoot this. I have already decreased the ADR margin on the console from 15 to 12, but it still happens.
As far as I understand the ADR doc, there is no reason to decrease the DR when
SNRmargin = SNRmax - SNRrequired(DR5/SF7) - ADRmargin = 10 - (-7.5) - 12 = 5.5, so Nstep = int(SNRmargin / 2.5) = 2
Nstep is clearly > 0, so I don’t understand why ADR wants to decrease the DR.
This is a graph of the SNR and SF of the node
[image: sf-snr-lorawan005]
So it is at SF8 now, and I would expect it to move back to SF7, because Nstep is 3:
SNRmargin = SNRmax - SNRrequired(DR4/SF8) - ADRmargin = 11 - (-10) - 12 = 9, so Nstep = int(SNRmargin / 2.5) = 3
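
To make the arithmetic explicit, here is a small sketch of that step calculation as I read the TTN ADR doc (the required-SNR table is taken from that doc; everything else is my simplification, not the actual Network Server code):

/* Sketch of the ADR step calculation as I read the TTN ADR doc.
 * The required-SNR table is from that doc; everything else is a
 * simplification, not the actual Network Server code. */
#include <math.h>
#include <stdio.h>

/* Required SNR in dB per data rate, DR0 (SF12) .. DR5 (SF7) */
static const float required_snr[] = { -20.0f, -17.5f, -15.0f, -12.5f, -10.0f, -7.5f };

/* A positive Nstep lets ADR raise the DR and/or lower TX power. */
static int adr_nstep(float snr_max, int current_dr, float adr_margin)
{
    float snr_margin = snr_max - required_snr[current_dr] - adr_margin;
    return (int)floorf(snr_margin / 2.5f);
}

int main(void)
{
    printf("DR5/SF7: Nstep = %d\n", adr_nstep(10.0f, 5, 12.0f));  /* -> 2 */
    printf("DR4/SF8: Nstep = %d\n", adr_nstep(11.0f, 4, 12.0f));  /* -> 3 */
    return 0;
}

Either way Nstep comes out positive, so as far as I can tell the network-side algorithm should not be asking for a lower data rate.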

Just for reference, I have similar nodes (further away) that stay on SF7, or occasionally move to SF8 and then go back to SF7, as expected.
[image: sf-snr-lorawan001]

The MAC data on the console (for that first node) shows this; does it give any clues?
Thanks for any advice. I could switch ADR off and force SF7 (see the LMIC snippet after the MAC data below), but I’d like to understand what is going wrong with ADR here.

  "mac_state": {
    "current_parameters": {
      "max_eirp": 16,
      "adr_data_rate_index": 4,
      "adr_tx_power_index": 6,
      "adr_nb_trans": 1,
      "rx1_delay": 1,
      "rx2_data_rate_index": 2,
      "rx2_frequency": "923200000",
      "ping_slot_frequency": "923400000",
...
    "desired_parameters": {
      "max_eirp": 16,
      "adr_data_rate_index": 4,
      "adr_tx_power_index": 6,
      "adr_nb_trans": 1,
      "rx1_delay": 1,
      "rx2_data_rate_index": 2,
...
    "rejected_adr_data_rate_indexes": [
      5
    ],
    "rejected_adr_tx_power_indexes": [
      7
    ],
    "last_downlink_at": "2022-11-22T01:13:13.749282621Z",
    "last_adr_change_f_cnt_up": 6844
  },
  "mac_settings": {
    "rx2_data_rate_index": 2,
    "rx2_frequency": "923200000",
    "supports_32_bit_f_cnt": true,
    "status_time_periodicity": "86400s",
    "status_count_periodicity": 200,
    "desired_rx1_delay": 1,
    "desired_rx1_data_rate_offset": 0,
    "desired_rx2_data_rate_index": 2,
    "desired_rx2_frequency": "923200000",
    "desired_max_duty_cycle": "DUTY_CYCLE_1",
    "adr": {
      "dynamic": {
        "margin": 12
      }
    }
  }
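
For completeness, switching ADR off and pinning SF7 would look roughly like this with the MCCI arduino-lmic API (untested sketch; the 14 dBm TX power is just an example value):

/* Untested sketch: disable ADR and pin the data rate to SF7 with the MCCI
 * arduino-lmic API; the 14 dBm TX power is just an example value.
 * Call this after joining (or from setup() when using ABP). */
#include <lmic.h>

void disable_adr_force_sf7(void)
{
    LMIC_setAdrMode(0);           /* stop following network ADR commands */
    LMIC_setLinkCheckMode(0);     /* don't request link checks either */
    LMIC_setDrTxpow(DR_SF7, 14);  /* fixed data rate SF7 at 14 dBm */
}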

Detailed list of uplinks when the change occurred (the 3-minute uplink interval is just within the Fair Use Policy at SF7):
[image: sf-snr-lorawan005-list]
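
To back up the Fair Use Policy remark, a quick time-on-air check using the SX127x datasheet formula (the 23-byte PHY payload, i.e. 13 bytes of LoRaWAN overhead plus roughly 10 bytes of application payload, is an assumption about my packets):

/* Sketch: LoRa time-on-air (SX127x datasheet formula) and daily airtime
 * at a 3-minute uplink interval. The 23-byte PHY payload is an assumption. */
#include <math.h>
#include <stdio.h>

static double lora_airtime_ms(int sf, double bw_hz, int payload_bytes)
{
    const int preamble = 8, cr = 1;          /* 8 preamble symbols, coding rate 4/5 */
    const int header = 0, lowdr = 0;         /* explicit header, no low-DR optimisation */
    double tsym = pow(2, sf) / bw_hz * 1000; /* symbol time in ms */
    double tpre = (preamble + 4.25) * tsym;
    double nsym = 8 + fmax(ceil((8.0 * payload_bytes - 4 * sf + 28 + 16 - 20 * header)
                                / (4.0 * (sf - 2 * lowdr))) * (cr + 4), 0);
    return tpre + nsym * tsym;
}

int main(void)
{
    double t = lora_airtime_ms(7, 125000.0, 23);   /* SF7, 125 kHz, 23-byte PHY payload */
    double per_day = t * (24 * 60 / 3) / 1000.0;   /* one uplink every 3 minutes */
    printf("airtime %.1f ms -> %.1f s/day (FUP allows 30 s)\n", t, per_day);
    return 0;
}

With those assumptions that works out to roughly 61.7 ms per uplink, about 29.6 s of airtime per day at one uplink every 3 minutes, so just within the 30 s/day allowance.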

Each SF has a minimum viable SNR level: SF7 needs a much better SNR than, say, SF12 (which can go down to -20 dB). Read up on those values, check them against the min/max/avg SNR above, and allow some margin as above… what might you conclude?

Thanks @Jeff-UK, but I did use those values in my calculations above and concluded that it did not make sense, which is why I posted my issue.
[image: Screenshot from 2022-11-24 13-36-36]

I think I found the problem. I noticed today that downlinks were not working for any of my nodes: I did not see the usual ns.down.data.schedule.attempt event after an uplink.
[image: Screenshot from 2022-11-24 12-51-55]
I had a look at my Dragino LPS8 gateway: no downlinks there either, and the home screen was showing the IoT Service LoRaWAN as offline, even though uplinks were still getting through. I restarted the gateway and all the scheduled downlinks came through.
[image: Screenshot from 2022-11-24 12-52-17]
So now I’m thinking that the node did not receive the occasional ADR housekeeping downlinks and therefore decided to move to a lower data rate (SF7 to SF8 some days ago, and SF8 to SF9 this morning), presumably the standard LoRaWAN ADR backoff: after too many uplinks with no downlink at all, the device sets ADRACKReq and then lowers its data rate step by step. The situation seems resolved now, although I don’t understand how my gateway ended up in a state where it forwarded uplinks to TTN but did not handle downlinks from TTN.
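
For anyone finding this later, a rough sketch of that device-side ADR backoff as I understand the LoRaWAN spec (ADR_ACK_LIMIT and ADR_ACK_DELAY are the 1.0.x defaults; on_uplink/on_downlink are just illustrative hooks, not LMIC functions):

/* Rough sketch of the device-side ADR backoff that I assume caused the
 * SF7 -> SF8 -> SF9 steps once downlinks stopped. ADR_ACK_LIMIT and
 * ADR_ACK_DELAY are the LoRaWAN 1.0.x defaults; the real LMIC code
 * differs in detail. */
#include <stdbool.h>
#include <stdio.h>

#define ADR_ACK_LIMIT 64   /* uplinks without any downlink before requesting one */
#define ADR_ACK_DELAY 32   /* further uplinks to wait before lowering the data rate */

static int dr = 5;               /* DR5 = SF7 in AS923 */
static unsigned adr_ack_cnt = 0;

/* Called once per uplink; returns true if the ADRACKReq bit should be set. */
static bool on_uplink(void)
{
    adr_ack_cnt++;
    if (adr_ack_cnt >= ADR_ACK_LIMIT + ADR_ACK_DELAY && dr > 0) {
        dr--;                        /* still no downlink: step down, SF7 -> SF8 -> ... */
        adr_ack_cnt = ADR_ACK_LIMIT; /* and step again every ADR_ACK_DELAY uplinks */
    }
    return adr_ack_cnt >= ADR_ACK_LIMIT;
}

/* Called whenever any downlink is received: the link is confirmed. */
static void on_downlink(void)
{
    adr_ack_cnt = 0;
}

int main(void)
{
    int last_dr = dr;
    for (int i = 1; i <= 200; i++) {  /* simulate uplinks while the downlink path is dead */
        on_uplink();
        if (dr != last_dr) {
            printf("uplink %3d: data rate lowered to DR%d\n", i, dr);
            last_dr = dr;
        }
    }
    on_downlink();                    /* once downlinks flow again the counter resets */
    return 0;
}

Once a downlink gets through again, the counter resets and ADR can move the node back up.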