Is there a packet forwarder that has a “local filter mode”?
Such a mode would not forward traffic with NetID 00 or 01 to the internet, only to 127.x.x.x addresses.
I run loraserver and TTN in parallel and would like not to upload packets that TTN cannot use anyway.
Thank you. This does not seem to fully go in the direction I want.
I want packets with NetID 00/01 to be delivered only “locally”; all other packets can go to TTN.
It seems I need to take a look at poly_pkt_fwd.
I took a look at poly_pkt_fwd and implemented a prototype. That was easier than I thought.
Filtering works and keeps NetID 00/01 inside the local network.
I will test this implementation over the next weeks, refine it, and post a patch for poly_pkt_fwd.c here.
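For the record, the filtering idea can be sketched independently of the poly_pkt_fwd internals. Everything below uses my own naming, not code from that tree; it only assumes classic LoRaWAN addressing, where the most significant 7 bits of the DevAddr (the NwkID) identify NetID 0x00 and 0x01:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Top 7 bits of a classic DevAddr are the NwkID; NetID 0x00 and 0x01
 * map to NwkID 0 and 1. */
static uint8_t nwkid_from_devaddr(uint32_t devaddr)
{
    return (uint8_t)(devaddr >> 25);
}

/* Decide whether an uplink PHYPayload should stay on local servers only.
 * payload[0] is the MHDR; data uplinks (MType 2 = unconfirmed up, 4 =
 * confirmed up) carry the DevAddr little-endian in bytes 1..4.
 * Join-requests have no DevAddr and are forwarded everywhere so OTAA
 * keeps working. */
static bool uplink_is_local_only(const uint8_t *payload, size_t len)
{
    if (len < 5)
        return false;
    uint8_t mtype = payload[0] >> 5;
    if (mtype != 0x2 && mtype != 0x4)
        return false;
    uint32_t devaddr = (uint32_t)payload[1]
                     | ((uint32_t)payload[2] << 8)
                     | ((uint32_t)payload[3] << 16)
                     | ((uint32_t)payload[4] << 24);
    uint8_t nwkid = nwkid_from_devaddr(devaddr);
    return nwkid == 0x00 || nwkid == 0x01; /* NetID 00/01: keep local */
}
```

In the forwarder’s uplink path such a check would sit right before the per-server send loop, skipping the non-local server sockets when it returns true.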
What do you do when both want the gateway to transmit at the same time? Silently drop the traffic? What if another gateway with slightly weaker signal could have sent the packet you dropped?
The issue is compounded in the older codebases: even if the packets don’t actually overlap in time, the old code can only have one packet waiting or sending. The newer version with the JIT queue at least handles multiple outstanding packets, as long as they don’t overlap.
Also, do you configure the gateway for the public or private preamble?
I did not change the behavior of the forwarder when sending packets to nodes (downlink). The code of poly_pkt_fwd seems to work in FIFO mode for that part. As it is multithreaded with one thread per server, it is not strictly FIFO in reality, but very close. If downlink packets come in from the servers faster than the forwarder can handle them, the operating system’s network buffer eventually fills up and packets are lost.
My network(s) use sync word 0x34 (lorawan_public).
It is not my intention to change the existing code that extensively. My goal is not to send packets with NetID 00/01 to non-local servers (and maybe not to accept them from non-local servers either).
I also took a look at mp_pkt_fwd. I saw the change in the downlink handling using a queue and understand the concept. Is there any real-world data on how often these conflicts happen? I guess this is especially problematic in heavily downlink-oriented areas? A side-by-side comparison with poly in the same situation would be interesting. There is (necessary) sorting going on, so I guess the CPU has more to do than with poly.
For uplink handling I could not detect substantial changes apart from it being multi-protocol (which, I might add, was a joy to look at). I tested my uplink filter implementation on mp_pkt_fwd and it also works (at least at first sight).
Unfortunately I cannot get downlinks to work with loraserver for mp_pkt_fwd. I can see the packets are scheduled in mp_pkt_fwd, but OTAA is not working. Same with the (I guess) original codebase lora_pkt_fwd.
@kersing any hints on where I could start debugging?
I do not seem to be the only one: https://forum.loraserver.io/t/otaa-loop-debuging/1644/3
Not really. The hardware can deal with one downlink packet at a time, and packets are sent to the hardware as soon as they come in from the net, where they wait until their scheduled time. So even if the packets don’t actually conflict in time, if two are present in the gateway box at once, they conflict in that design and at least one is dropped.
But even with a more recent forwarder with the JIT queue, where packets can arrive over the net in a different order than they transmit, there is still an architectural problem with packets that do overlap. If you advertise yourself to a server as available to send a packet (by reporting an uplink) and then don’t actually send that packet but rather a conflicting one from a private server, then in the case where another gateway was in range that could have been assigned the transmit job, but you got the job by reporting better received signal quality, you are impairing the functioning of TTN.
The current design of TTN doesn’t accommodate “partial” participation.
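The conflict being described reduces to a simple interval-overlap test on the concentrator clock. A minimal sketch (my own struct and names, not from any forwarder; real code would also handle the 32-bit counter wrapping around):

```c
#include <stdbool.h>
#include <stdint.h>

/* One scheduled downlink: start time on the concentrator clock plus its
 * time on air (and any guard margin), both in microseconds. */
struct tx_slot {
    uint32_t count_us;
    uint32_t airtime_us;
};

/* Two transmissions conflict when their windows overlap: each one
 * starts before the other ends. */
static bool slots_conflict(const struct tx_slot *a, const struct tx_slot *b)
{
    return (a->count_us < b->count_us + b->airtime_us) &&
           (b->count_us < a->count_us + a->airtime_us);
}
```

With the single-slot design, even two slots for which this returns false still conflict, because there is only room to hold one of them; the JIT queue removes that limitation but still has to drop one of any pair for which `slots_conflict()` is true.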
Yes, pull the latest updates to both packet_forwarder and lora_gateway from my github repos and rebuild. In the last few weeks I fixed a buffer overflow bug (thanks to @jpmeijers for helping find it) that caused downlink issues when using the Semtech transport.
I can confirm that legacy transport works now. Thank you. 200 bytes more I see….
Testing my filter.
I understood that you are referring to a very basic/fundamental issue.
What would be an alternative? I can only imagine setting up an additional gateway. One that handles TTN and one that handles my private network exclusively.
I then see the following effect for your use case, where two packets are sent at the same time or overlap while sending. When two independent gateways transmit this way, most probably both packets are lost, as the collision that was formerly “solved” inside the packet forwarder has now moved into the frequency spectrum. I see this as a much worse impact on the availability of a LoRaWAN network, as none of the sensors (private or TTN) receive a packet.
Trying to keep the collision inside the packet forwarder at least keeps the spectrum usable.
This raises the question of how I am supposed to pick which one to send. This is a very fundamental fairness aspect (one could even say net neutrality), as the spectrum we are talking about is to be used by everyone, not just LoRaWAN/TTN. I think the answer should be left open to the individual, as we are talking about an unlicensed frequency spectrum, free for everyone and, especially for TTN, on a best-effort basis. Maybe multiple selection/prioritization algorithms could be implemented in future packet forwarders to reflect that individual choice.
Only if they are at the same frequency. There’s even some degree of capability in LoRa to receive distinct spreading factors at the same frequency.
IIRC there is also a mechanism whereby GWs ignore traffic from other GWs - I believe it is something like an upchirp vs. downchirp mechanism - long since forgotten, so I will have to go away and refresh… Otherwise GWs would generate incremental workloads for each other, the backhaul and the NS(s) when in range of each other, so co-located GWs handling different network traffic is not such a problem. Indeed, if it were, then the concept of shared mast infrastructure, for example, would be a real problem! The only issue to be aware of is if they are too close, in a region allowing RF to shout loud (less of an issue in EU868): then it is possible for one GW’s output to de-sensitise and even partially saturate the RF front end of another very close by. Therefore keep some meters of antenna separation horizontally (and possibly vertically) in such situations.
Yes, the IQ inversion setting is opposite between uplink and downlink, and in places where frequency allocation allows it, the frequencies are also different. The bandwidth is typically different as well - a gateway can receive only one 500 kHz uplink channel as compared to eight 125 kHz uplink channels, but most downlinks are at 500 kHz, and thus only those aligned with the configured uplink would be receivable.
Worth noting that the IQ mismatch does “leak” occasional packets when all else is a match.
Only as a small increment, for many reasons beyond the incompatible settings. Downlink is infrequent compared to uplink to begin with, and the cycle would stop at the server which would take no action in response to an unintentionally intercepted downlink. Backhaul bandwidth is considered to be cheap in the very design of the backhaul protocols used.
That’s a misunderstanding of the issue. The hypothetical issue was that if two gateways tried to transmit (on the same or overlapping frequency, with the same or overlapping spreading factor), the packets could interfere, with neither intelligible to its recipient. That can indeed happen - but collisions are less likely between gateways than between two servers trying to command one gateway, where any temporal overlap (or even insufficient padding space) causes a conflict.
So for someone who has only one antenna in a good position, will not build a new installation, and wants to run multiple networks/backends, the best option would be prioritizing packets?
This just isn’t something that works with the current server<>gateway schemes.
A simple solution would be for TTN to implement a “downlink will not be accepted” flag and a “downlink was not accepted” NAK.
If you then run the private network with a 2 second RX1 (which is probably a good idea anyway), then for ordinary traffic, by the time you hear a TTN uplink you would already know if you needed to transmit at the same time as its RX1, and could flag your signal report as one you could not respond to. Meanwhile, for an OTAA join you would not know at the time of receipt, but the RX window for a join is enough later that you could decline the packet after it was assigned and still leave time for TTN to assign it to another gateway. At the least, you would no longer be breaking the contract of the protocol by not transmitting something you were asked to.
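The timing argument can be made concrete with a bit of arithmetic. The sketch below is illustrative only: the 1 s TTN RX1 delay is the usual default, the “decline” flag itself does not exist in the current protocol, and one shared airtime value stands in for both packets’ durations.

```c
#include <stdbool.h>
#include <stdint.h>

#define TTN_RX1_DELAY_US     1000000u /* 1 s, the usual RX1 delay */
#define PRIVATE_RX1_DELAY_US 2000000u /* private network on a 2 s RX1 */

/* When a TTN uplink is heard at t_ttn, a private downlink answering an
 * uplink heard earlier at t_private is already committed. If the two
 * transmit windows overlap, the TTN signal report could carry the
 * (hypothetical) "downlink will not be accepted" flag. All times are in
 * microseconds on the concentrator clock. */
static bool must_decline_ttn_rx1(uint32_t t_private, uint32_t t_ttn,
                                 uint32_t airtime_us)
{
    uint32_t tx_priv = t_private + PRIVATE_RX1_DELAY_US;
    uint32_t tx_ttn  = t_ttn + TTN_RX1_DELAY_US;
    return (tx_priv < tx_ttn + airtime_us) &&
           (tx_ttn < tx_priv + airtime_us);
}
```

The point is that both inputs are already known when the TTN uplink is reported, so the decision needs no new information, only a place in the protocol to express it.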
Unfortunately, TTN’s architects seem focused on a grand scheme of high-level cooperation that is far out on the roadmap, rather than simply implementing a low-level solution to let gateways feed multiple networks.
You’re right, it’s the IQ inversion bit that counts - I couldn’t remember offhand. However, the issue is not misunderstood: the risk of collisions is inherent in any shared spectrum space where the GWs’ transmissions are on similar settings. As you say, that is lower probability and not something we can control or manage unless we control both networks, so I was ignoring it at this point. What people also forget is that it isn’t just co-located GWs on the same config - one has to consider the receiving node’s perspective. If the two GWs are in totally opposite directions with respect to the node’s position, but with the same TX settings and the same/similar RF path length (allowing for in-path absorption factors vs. the physical direct path lengths), then the effect is just the same - even if the GWs are out of reach of each other but both in range of the node!
Here a small difference in path length helps us discriminate at the node, as the inverse-square-law effect of RF transmission accentuates and effectively amplifies the difference, such that there is often just enough difference between signals of the same frequency, SF and amplitude that the receiver can still discriminate at least one of the transmitted signals. LoRa is more robust in this way than many legacy RF modulation schemes. Another reason for some separation of antennas when co-locating: one offset 5 m one way and the other offset 5 m the other way provides 10 m of separation, which at lower SFs and shorter distances - say 50-250 m from the GWs - becomes significant under the ISL and helps signal separation. If it is a real potential concern, I look to get a >15 m offset for each (>30 m total), though this only helps nodes roughly in the plane of the separation - those roughly perpendicular still effectively see the GWs as co-located. Basic geometry gets us, I’m afraid.
In fairness to team TTN, that is probably no different from any other LoRaWAN network service provider. They need to focus on what they can manage vs. trying to boil oceans at this point - they have too many other problems to work on and fix (roll out V3, get Packet Broker up and running/promoted, fix TTIG & TTOG supply issues, deployments and firmware, fix Basic Station bridging (V3 native?), manage/scale out UDP handling, etc.) - so personally I’m willing to cut them a little slack vs. asking for such additional mechanisms.
That’s why I have a problem with their insistence on doing things the hard way requiring huge amounts of development, rather than the easy way that could be quickly implemented.
People who set up gateways because they need their data can’t rely on getting it through TTN infrastructure, but TTN infrastructure lacks a “sorry I can’t do that right now” signal to make it able to take contributions from gateways to which it has only shared access.
Yes I understand that would be a solution.
But for now, under the current conditions, if one would like to prioritize TTN, could one not just “simply” (a bit more logic would be needed) throw out all private packets from the JIT queue that conflict with the timing of a TTN packet? That will create problems for the private network, but it is probably still acceptable.
Now you are talking solutions! Well, at least if that prioritization works for you.
As it happens, I recently grafted a crude priority mechanism into the Semtech JIT queue (albeit for a different reason).
Being in a hurry, rather than figure out how to unqueue the “loser”, what I found I could do was simply set:
queue->nodes[i].post_delay = 0;
which causes the dequeuing mechanism to drop it and look for another.
(“post_delay” is the JIT queue’s term for what is mostly the packet air duration, perhaps plus a small margin.)
There’s probably a better way of doing this with a true scheme for deleting from the queue, but the above actually works. The rest of it was just parsing the priority field out of the json and passing it into the jit queue objects, along with some log messages so I could see that it was working.
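For the thread’s benefit, here is roughly what that looks like as standalone code. The struct is a minimal stand-in modeled on the JIT queue’s node array (the priority field and the function are my own additions, not upstream code); the only real trick is the post_delay = 0 marking described above:

```c
#include <stdint.h>

#define JIT_MAX 8

/* Minimal stand-in for a JIT queue node: scheduled start, air time
 * (post_delay, where 0 marks a node the dequeuer should discard), and
 * a hypothetical priority parsed out of the downlink JSON. */
struct jit_node {
    uint32_t count_us;
    uint32_t post_delay;
    int priority;
};

struct jit_queue {
    struct jit_node nodes[JIT_MAX];
    int num;
};

/* Mark every queued lower-priority packet whose transmit window overlaps
 * [count_us, count_us + post_delay) as droppable by zeroing its
 * post_delay, as described above. Returns the number of losers marked. */
static int jit_mark_losers(struct jit_queue *q, uint32_t count_us,
                           uint32_t post_delay, int priority)
{
    int marked = 0;
    for (int i = 0; i < q->num; i++) {
        struct jit_node *n = &q->nodes[i];
        if (n->priority >= priority || n->post_delay == 0)
            continue;
        if (n->count_us < count_us + post_delay &&
            count_us < n->count_us + n->post_delay) {
            n->post_delay = 0; /* dequeuer drops it, looks for another */
            marked++;
        }
    }
    return marked;
}
```

A real deletion scheme would compact the array instead, but as noted, zeroing post_delay is enough for the existing dequeuing logic to skip the packet.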
Yes, first discussion, then solutions.
Thank you for the enlightenment. I will check how I can implement this in mp_pkt_fwd with as few resources as possible.