You haven’t actually provided any information that could support such a conclusion. If you were in contact with the gateway owners and could get a feed of all of their raw traffic to analyze — not just your nodes, not just TTN, not just LoRaWAN packets — then you might begin to be able to make such a claim. Even then you would still be missing the possibility of non-LoRaWAN users of the same band, broadband or power-line-conducted interference at those locations, and so on.
However, through ADR any node can be forced to SF12 eventually
Well, first, in some parts of the world SF12 isn’t even a possibility, as the headers themselves wouldn’t fit in a packet. But even where it is, the fact that the network server suggests it does not, as a practical matter, mean the node has to comply (though if it declines the suggestion it would probably be best to stop setting the ADR bit on uplinks, or only set it occasionally to check whether conditions have improved).
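To make the point above concrete, here is a minimal sketch of node-side logic for deciding whether to follow an ADR suggestion down to SF12. The region table, function names, and the 1-in-10 probe rate are all illustrative assumptions, not values from any stack; consult the LoRaWAN Regional Parameters document for the authoritative limits.

```python
# Highest uplink SF permitted per region (illustrative: US915 uplinks
# stop at SF10, so an ADR suggestion of SF12 cannot be followed there)
MAX_UPLINK_SF = {"EU868": 12, "US915": 10}

def can_accept_sf(region, requested_sf):
    """True if an ADR-requested spreading factor is usable in this region."""
    return requested_sf <= MAX_UPLINK_SF.get(region, 12)

def set_adr_bit_after_decline(uplinks_since_decline):
    """After declining a suggestion, mostly leave the ADR bit clear,
    setting it only occasionally to see if conditions have improved
    (a hypothetical 1-in-10 probe rate)."""
    return uplinks_since_decline % 10 == 0

print(can_accept_sf("EU868", 12))  # SF12 exists as an uplink rate in EU868
print(can_accept_sf("US915", 12))  # but not for US915 uplinks
```

The point of the sketch is only that the decision is the node’s: the network server proposes a data rate, and a well-behaved node checks it against its regional limits before acting on it.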
In this case one single packet from one node is received by multiple gateways.
It’s unclear whether you are talking about the exact same packets or a statistical total. But if it’s the former, it could be interesting to graph those across gateways, i.e., using fCnt or time as an axis.
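As a sketch of what such a comparison could look like, the snippet below cross-tabulates the same uplinks by fCnt across gateways, so that a packet received by one gateway but missed by another shows up as a gap in that row. The field names and sample values are assumptions standing in for whatever your actual gateway-metadata export contains.

```python
from collections import defaultdict

# Hypothetical per-gateway reception records for the same node
rows = [
    {"fcnt": 101, "gateway_id": "gw-good-1", "snr": 7.5},
    {"fcnt": 101, "gateway_id": "gw-bad-1",  "snr": -18.0},
    {"fcnt": 102, "gateway_id": "gw-good-1", "snr": 8.0},
    # fCnt 102 is absent for gw-bad-1: a drop seen only at that gateway
    {"fcnt": 103, "gateway_id": "gw-good-1", "snr": 7.0},
    {"fcnt": 103, "gateway_id": "gw-bad-1",  "snr": -17.5},
]

# Pivot to fcnt -> {gateway_id: snr}: each row of the table is one packet
pivot = defaultdict(dict)
for r in rows:
    pivot[r["fcnt"]][r["gateway_id"]] = r["snr"]

gateways = sorted({r["gateway_id"] for r in rows})
for fcnt in sorted(pivot):
    cells = {gw: pivot[fcnt].get(gw, "miss") for gw in gateways}
    print(fcnt, cells)
```

Plotting SNR per gateway against fCnt in the same way would quickly show whether the “bad” gateways are hearing the same transmissions weakly or simply not hearing them at all.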
From these gateways only 2 suffer from poor performance at low SF. If the node was to blame, all gateways would have shown the same result. Therefore the issue is not caused by the node.
This does not necessarily follow: if the node is behaving in a way that is marginal, different gateways may respond to that in different ways. For example, a transmission that is slightly off frequency or borderline in signal quality may be decodable by one gateway’s radio and concentrator but not by another’s.
Something very interesting to do would be to repeat your test, but temporarily install the same known-good gateway at each of these locations in turn, especially at the location of one of the “bad” ones.