Legacy packet forwarder: local filter mode

I can confirm that the legacy transport works now. Thank you. 200 bytes more, I see… :slight_smile:
Testing my filter.


I understood that you are referring to a very basic/fundamental issue.
What would be an alternative? I can only imagine setting up an additional gateway. One that handles TTN and one that handles my private network exclusively.
I then see the following effect for your use case, where two packets are sent at the same time or with overlapping timing: when two independent gateways transmit like this, most probably both packets are lost, as the collision that was formerly “solved” inside the packet forwarder has now moved into the frequency spectrum. I see this as a much worse impact on the availability of a LoRaWAN network, as none of the sensors (private or TTN) receives a packet.
Trying to keep the collision inside the packet forwarder at least preserves the usability of the spectrum.
This raises the question of how I am supposed to pick which one to send. This is a very fundamental fairness aspect (one could even say net neutrality), as the spectrum we are talking about is to be used by everyone, not just LoRaWAN/TTN. I think the answer should be left to the individual, as we are talking about an unlicensed frequency spectrum, free for everyone - and TTN especially operates on a best-effort basis. Maybe multiple selection/prioritization algorithms could be implemented in future packet forwarders to reflect that individual choice.

Only if they are at the same frequency. There’s even some degree of capability in LoRa to receive distinct spreading factors at the same frequency.


IIRC there is also a mechanism whereby GWs ignore traffic from other GWs - I believe it is something like the up-chirp vs. down-chirp mechanism - long since forgotten, so I will have to go away and refresh… Otherwise GWs would generate incremental workloads for each other, the backhaul and the NS(s) when in range of each other, so co-located GWs handling different network traffic is not such a problem. Indeed, if it were a problem, then the concept of shared mast infrastructure, for example, would be a real problem! The only issue to be aware of is that if they are too close, and in a region that allows RF to shout loud (less of an issue in EU868), then it is possible for one GW’s output to de-sensitise and even partially saturate the RF front end of another very close by… therefore keep some metres of antenna separation horizontally (and possibly vertically) in such situations.

Yes, the IQ inversion setting is opposite between uplink and downlink, and in places where the frequency allocation allows it, the frequencies are also different. The bandwidth is typically different too - a gateway can receive only one 500 kHz uplink channel as compared to eight 125 kHz uplink channels, but most downlinks are at 500 kHz, so only those aligned with the configured uplink would be receivable.

Worth noting that the IQ mismatch does “leak” occasional packets when all else is a match.

Only as a small increment, for many reasons beyond the incompatible settings. Downlink is infrequent compared to uplink to begin with, and the cycle would stop at the server which would take no action in response to an unintentionally intercepted downlink. Backhaul bandwidth is considered to be cheap in the very design of the backhaul protocols used.

That’s a misunderstanding of the issue. The hypothetical issue was that if two gateways tried to transmit (on the same or overlapping frequency, with the same or overlapping spreading factor), the packets could interfere, with neither intelligible to either recipient. That indeed can happen - but collisions are less likely between gateways than between two servers trying to command one gateway, where any temporal overlap (or even insufficient padding space) causes a conflict.

So for someone who only has one antenna in a good position, will not build a new installation, and wants to run multiple networks/backends, the best option would be prioritizing packets?

This just isn’t something that works with the current server<>gateway schemes.

A simple solution would be for TTN to implement a “downlink will not be accepted” flag and a “downlink was not accepted” NAK.

If you then run the private network with a 2 second RX1 (which is probably a good idea anyway), then for ordinary traffic, by the time you hear a TTN uplink you would already know if you needed to transmit at the same time as its RX1, and could flag your signal report as one you could not respond to. For an OTAA join you would not know at the time of receipt, but the RX window for a join is enough later that you could decline the packet after it was assigned to you and still leave time for TTN to assign it to another gateway. At the least you would no longer be breaking the contract of the protocol by not transmitting something you were asked to.
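
To make the timing side concrete, the decision could be as simple as the sketch below. Everything in it is hypothetical glue (no such flag exists in the current protocol): a 1 second public RX1 is assumed, and jit_window_is_busy() stands in for a check against the private traffic already committed in the jit queue.

    #include <stdbool.h>
    #include <stdint.h>

    #define PUBLIC_RX1_DELAY_US 1000000u   /* assumed 1 s RX1 on the public network */
    #define MAX_AIRTIME_US       400000u   /* generous bound on downlink time on air */

    /* hypothetical helper provided elsewhere: is any private downlink already
     * committed inside [start_us, end_us] on the concentrator timeline? */
    extern bool jit_window_is_busy(uint32_t start_us, uint32_t end_us);

    /* decide whether the forwarded uplink report should carry the proposed
     * "downlink will not be accepted" flag (counter wrap-around ignored) */
    static bool downlink_would_be_refused(uint32_t uplink_count_us)
    {
        uint32_t rx1_start = uplink_count_us + PUBLIC_RX1_DELAY_US;
        uint32_t rx1_end   = rx1_start + MAX_AIRTIME_US;
        return jit_window_is_busy(rx1_start, rx1_end);
    }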

Unfortunately the TTN architects seem focused on a grand scheme of high-level cooperation that is long out on the roadmap, rather than simply implementing a low-level solution to let gateways feed multiple networks.

You’re right, it’s the IQ inversion bit that counts - I couldn’t remember offhand. However, the issue was not misunderstood: the risk of collisions is inherent in any shared spectrum space where the TXs from the GWs use similar settings… as you say, that is lower probability and not something we can control or manage unless we control both networks, so I was ignoring it at this point. What people also forget is that it isn’t just co-located GWs with the same config - one has to consider it from the receiving node’s perspective: if the two GWs are in totally opposite directions with respect to the node’s position, but with the same TX settings and the same/similar RF path length (allowing for in-path absorption factors), as opposed to necessarily the same/similar physical direct path lengths, then the effect is just the same - even if, say, the GWs are out of reach of each other but both in range of the node! :slight_smile:

Here a small difference in path length helps us discriminate at the node, as the inverse square law of RF transmission accentuates and effectively amplifies the difference, so that there is often just enough difference between signals of the same frequency, SF and amplitude to allow the receiver to discriminate at least one of the transmitted signals… LoRa is more robust in this way than many legacy RF modulation schemes :grinning: Another reason for some separation of antennas when co-locating - one offset 5 m one way, the other offset 5 m the other way, gives 10 m of separation, which on lower SFs at shorter distances - say 50-250 m from the GWs - becomes significant under the inverse square law and helps signal separation. If it is a real potential concern for me, I look to get a >15 m offset for each (>30 m total)… though that only helps nodes roughly in the plane of the separation - those roughly perpendicular still effectively see the GWs as co-located - basic geometry gets us, I’m afraid :rofl:

In fairness to team TTN, that is probably no different to any other LoRaWAN network service provider. They need to focus on what they can manage vs. trying to boil oceans at this point - they have too many other problems to work on and fix (roll out V3, get Packet Broker up and running and promoted, fix TTIG & TTOG supply issues, deployments and firmware, fix Basic Station bridging (V3 native?), manage/scale out UDP handling, etc.), so personally I’m willing to cut them a little slack vs. asking for such additional mechanisms :+1:

That’s why I have a problem with their insistence on doing things the hard way requiring huge amounts of development, rather than the easy way that could be quickly implemented.

People who set up gateways because they need their data can’t rely on getting it through TTN infrastructure, yet TTN infrastructure lacks a “sorry, I can’t do that right now” signal that would let it take contributions from gateways to which it has only shared access.

Yes, I understand that would be a solution.
But for now, under the current conditions, if one would like to prioritize TTN, could one not just “simply” (a bit more logic would be needed) throw out all private packets from the jit queue that conflict with the timing of a TTN packet? That will create problems for the private network, but it is probably still acceptable.

Now you are talking solutions! Well, at least if that prioritization works for you.

It happens that I recently grafted a crude priority mechanism into the Semtech jit queue (albeit for a different reason).

Being in a hurry, rather than figure out how to unqueue the “loser” what I found I could do was simply set:

queue->nodes[i].post_delay = 0;

Which causes the dequeuing mechanism to drop them and look for another.

(“post_delay” is the jit’s term for what is mostly the packet air duration, perhaps plus a small margin)

There’s probably a better way of doing this with a true scheme for deleting from the queue, but the above actually works. The rest of it was just parsing the priority field out of the JSON and passing it into the jit queue objects, along with some log messages so I could see that it was working.
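
In case anyone wants to replicate it, a minimal sketch of the idea (not my actual patch) could look like this. It assumes the node and queue field names from the reference jitqueue.h (nodes[], pkt.count_us, pre_delay, post_delay, JIT_QUEUE_MAX), plus a hypothetical priority member added to struct jit_node_s, and it ignores wrap-around of the 32-bit concentrator counter for brevity.

    #include <stdint.h>
    #include "jitqueue.h"   /* reference Semtech packet forwarder header */

    /* Sketch: when a high-priority downlink has been accepted, zero the
     * reserved air time (post_delay) of any lower-priority entry whose
     * reserved window would overlap it, so the existing dequeue path skips
     * and discards it. 'priority' is a hypothetical extra member of
     * struct jit_node_s; counter wrap-around is ignored here. */
    static void jit_drop_losers(struct jit_queue_s *queue,
                                uint32_t win_count_us,   /* winner's TX timestamp */
                                uint32_t win_pre_delay,
                                uint32_t win_post_delay,
                                uint8_t  win_priority)
    {
        int i;
        uint32_t win_start = win_count_us - win_pre_delay;
        uint32_t win_end   = win_count_us + win_post_delay;

        for (i = 0; i < JIT_QUEUE_MAX; i++) {
            struct jit_node_s *n = &queue->nodes[i];
            if (n->post_delay == 0) {
                continue;                      /* empty slot or already dropped */
            }
            if (n->priority >= win_priority) {
                continue;                      /* equal or higher priority: keep it */
            }
            /* same style of overlap test as the queue's "criteria 3" check */
            if ((win_start <= n->pkt.count_us + n->post_delay) &&
                (n->pkt.count_us - n->pre_delay <= win_end)) {
                n->post_delay = 0;             /* dequeuer will drop it and move on */
            }
        }
    }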

Yes, first discussion, then solutions.
Thank you for the enlightenment. I will check how I can implement this in mp_pkt_fwd with as few resources as possible.

The Class C time frame algorithm with prio was a bit tricky, but I think I got it.
How can I test it? Are the util_* tools useful for this? I do not have a network with very heavy downlink traffic.

I don’t think the util_* tools work with the packet forwarder at all; they are rather things you can run instead of it for low-level tests.

Could you maybe spin up two nodes, one TTN and one private, that use a common trigger so they both uplink at the same time with the confirmation request bit set? Or you could just send the packet forwarder “invented” downlink traffic with conflicting timing and differing priorities. One of the things I really like about MQTT-in-the-middle backhaul schemes is that it’s far easier to write tests and analyzers that look at what is happening, inject test events, etc.
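
If you go the “invented downlink” route, one way that needs no forwarder changes at all is to point one of the configured server entries at a throwaway fake network server. Below is a rough, untested sketch written against the legacy Semtech UDP protocol identifiers (0x00 PUSH_DATA, 0x01 PUSH_ACK, 0x02 PULL_DATA, 0x03 PULL_RESP, 0x04 PULL_ACK); the port and txpk values are purely illustrative. It injects an immediate downlink on every keep-alive; for collision tests you would instead compute a tmst from a recently forwarded uplink plus the RX1 delay, and vary whatever priority field your patched forwarder parses.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = {0}, gw;
        socklen_t gw_len = sizeof(gw);
        unsigned char buf[1024];

        addr.sin_family = AF_INET;
        addr.sin_port = htons(1780);                 /* point one "servers" entry here */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));

        for (;;) {
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&gw, &gw_len);
            if (n >= 4 && buf[3] == 0x00) {          /* PUSH_DATA -> PUSH_ACK */
                unsigned char ack[4] = { buf[0], buf[1], buf[2], 0x01 };
                sendto(sock, ack, sizeof(ack), 0, (struct sockaddr *)&gw, gw_len);
                continue;
            }
            if (n < 4 || buf[3] != 0x02) {
                continue;                            /* only inject on PULL_DATA */
            }
            unsigned char ack[4] = { buf[0], buf[1], buf[2], 0x04 };   /* PULL_ACK */
            sendto(sock, ack, sizeof(ack), 0, (struct sockaddr *)&gw, gw_len);

            /* invented downlink; swap "imme" for a "tmst" taken from a recent
             * uplink (plus RX1 delay) to provoke jit queue collisions */
            const char *json =
                "{\"txpk\":{\"imme\":true,\"freq\":869.525,\"rfch\":0,\"powe\":14,"
                "\"modu\":\"LORA\",\"datr\":\"SF9BW125\",\"codr\":\"4/5\","
                "\"ipol\":true,\"size\":8,\"data\":\"YWJjZGVmZ2g=\"}}";
            unsigned char resp[512] = { buf[0], 0x00, 0x00, 0x03 };    /* PULL_RESP */
            memcpy(resp + 4, json, strlen(json));
            sendto(sock, resp, 4 + strlen(json), 0, (struct sockaddr *)&gw, gw_len);
            printf("injected a downlink\n");
        }
        close(sock);
        return 0;
    }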

Ok, I have a test version, for the brave. The filter function has been tested, the prioritization has not.

What did I change?

Added boolean parameters priv_filter, pub_filter and priority to the server configuration directive.

priv_filter:
filters private NetID packets; if set to true, all data uplink+downlink packets with NetID 00/01 are neither sent to this server nor accepted from it
defaults to false, which means no filtering

pub_filter:
filters public NetID packets; if set to true, all data uplink+downlink packets not having NetID 00/01 are neither sent to this server nor accepted from it
defaults to false, which means no filtering

If filtering was detected, a message is shown during startup.

priority:
if set to true (only one server may have it), all downlinks from this server are prioritized over all other servers; conflicting packets from other servers in the jit queue will be silently dropped and not sent
defaults to false, which means packets from all servers are handled equally, which could lead to the problems discussed in this thread

If priority was detected, a message is shown during startup.
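
For anyone trying the test version: the flags sit next to the existing per-server settings in the JSON configuration, and the parsing side could be done with parson, the JSON library the packet forwarders already bundle. The sketch below is illustrative only (the example server entry and all names besides the three new keys are made up), not a copy of the actual patch.

    /* Example of where the new flags could sit in a "servers" entry of the
     * JSON configuration (key names from the description above, everything
     * else illustrative):
     *
     *   "servers": [
     *     {
     *       "server_address": "router.example.net",
     *       "serv_port_up": 1700,
     *       "serv_port_down": 1700,
     *       "serv_enabled": true,
     *       "priv_filter": true,
     *       "pub_filter": false,
     *       "priority": true
     *     }
     *   ]
     */
    #include <stdbool.h>
    #include "parson.h"

    /* illustrative parsing sketch; defaults fall back to false when a key
     * is absent, matching the description above */
    static void parse_server_flags(const JSON_Object *serv_obj,
                                   bool *priv_filter, bool *pub_filter, bool *priority)
    {
        /* json_object_get_boolean() returns -1 when the key is missing */
        *priv_filter = (json_object_get_boolean(serv_obj, "priv_filter") == 1);
        *pub_filter  = (json_object_get_boolean(serv_obj, "pub_filter")  == 1);
        *priority    = (json_object_get_boolean(serv_obj, "priority")    == 1);
    }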

Some implementation remarks:

  • If priority is enabled, the fetch thread will only fetch one packet from the concentrator per fetch cycle. That was for me (for now) the easiest way to implement it, as usually 8 packets are fetched per cycle, then fused into one UDP packet and sent to all servers - not useful for per-packet filtering. Maybe I will change this later by rewriting that logic completely and ordering based on NetID. How big is the RX buffer of the concentrator anyway?
    The fetch cycle had a sleep of 10 ms when no packets were received. To counteract the additional latency of one fetch cycle per packet and sending, I decreased the sleep time by half. My measurement showed that this adds additional CPU load (from 2.4% to 3.6% on a Pi Zero W). I might be too cautious here, or not cautious enough, but I am missing real-world data.

  • The peek and dequeue order in thread_jit is not consistently mutex-protected, which might now lead to inconsistencies if packets are removed between a peek and a dequeue. I fused both functions into jit_peek_and_dequeue (a sketch of such a helper follows below this list).

  • There are many different cases for how the queue can look when time frame alignment with a Class C ASAP packet happens. I did not test this at all. I am pretty sure I got the basics right, but the devil is in the detail. The idea was to simply ignore all low-priority packets when checking for time collisions. In the final overlap loop (criteria 3 overlap check) I then simply dequeue all packets that conflict and have lower priority.
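
A minimal sketch of what such a fused helper could look like (not the code from the test version): it assumes jit_peek()/jit_dequeue() variants with their internal locking removed, so that the helper can hold the queue mutex across both steps - in the stock jitqueue.c each call takes the mutex itself and would deadlock if wrapped this way.

    #include <pthread.h>
    #include <sys/time.h>
    #include "jitqueue.h"

    static pthread_mutex_t mx_jit_queue = PTHREAD_MUTEX_INITIALIZER;

    /* Fused peek+dequeue so a packet removed by the priority logic cannot
     * invalidate the peeked index in between. Assumes lock-free variants of
     * jit_peek()/jit_dequeue(), as noted above. */
    enum jit_error_e jit_peek_and_dequeue(struct jit_queue_s *queue,
                                          struct timeval *now,
                                          struct lgw_pkt_tx_s *pkt,
                                          enum jit_pkt_type_e *pkt_type)
    {
        int idx = -1;
        enum jit_error_e err;

        pthread_mutex_lock(&mx_jit_queue);
        err = jit_peek(queue, now, &idx);                 /* which entry is due next? */
        if (err == JIT_ERROR_OK && idx >= 0) {
            err = jit_dequeue(queue, idx, pkt, pkt_type); /* remove it under the same lock */
        }
        pthread_mutex_unlock(&mx_jit_queue);
        return err;
    }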

Any constructive code review and testers are welcome.


Could you update the main readme.md at the top to indicate there are (untested) changes?

I did, at the top right after the changelog chapter, and in the text of the prio description.


I suspect that will cause problems in heavy usage (wasn’t part of the concern with USB concentrators on busy networks getting all the received packets dequeued before more came in?). Couldn’t you just do the filtering at the UDP packet assembly stage? I’d actually have been tempted to have the packet forwarder talk to a single local intermediary process, and have that parcel packets out to destinations, especially as it might want to do format conversions (like feeding the private server JSON or protobuf over MQTT, as something like LoRaServer typically wants).

The peek and dequeue order in thread_jit is not consistently mutex-protected, which might now lead to inconsistencies if packets are removed between a peek and a dequeue. I fused both functions into jit_peek_and_dequeue.

That complexity is why I opted to just zero the duration (what the code calls post_delay) of any packets in the queue that lose priority to a higher-priority packet being added, but leave them in the queue for the consumer to remove and ignore. (If the packet being added is lower priority than a conflicting one already there, the usual behavior of declining to add in the case of a conflict endures.)

That said, big congrats for taking your ideas from the forum to a code editor and actually implementing them!


So I decided on an internal simulated test of the new jit queue and uploaded that new version.

I defined new _SIM packet types that differ only in that they are not sent out by the radio. All other handling is identical. A jit_injector_thread then injects a bazillion downlink packets - I guess much more than real-life gateways need to handle. I tested a lot of different scenarios, especially when the queue is full. Does that even happen?
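
For illustration only - this is not the actual _SIM code, just the rough shape such an injector could take against the stock jit API (jit_enqueue() signature and lgw_pkt_tx_s fields as in the reference packet forwarder; get_concentrator_time() is a made-up helper, and a real _SIM packet type would replace the Class A type used here so the TX path skips the radio).

    #include <string.h>
    #include <sys/time.h>
    #include "jitqueue.h"
    #include "loragw_hal.h"

    extern struct jit_queue_s jit_queue;
    extern uint32_t get_concentrator_time(void);   /* hypothetical helper */

    /* enqueue one synthetic timestamped downlink, offset_us into the future */
    static void inject_sim_downlink(uint32_t offset_us)
    {
        struct lgw_pkt_tx_s pkt;
        struct timeval now;
        memset(&pkt, 0, sizeof pkt);

        pkt.tx_mode    = TIMESTAMPED;
        pkt.count_us   = get_concentrator_time() + offset_us;  /* when to "send" */
        pkt.freq_hz    = 869525000;
        pkt.rf_chain   = 0;
        pkt.rf_power   = 14;
        pkt.modulation = MOD_LORA;
        pkt.bandwidth  = BW_125KHZ;
        pkt.datarate   = DR_LORA_SF9;
        pkt.coderate   = CR_LORA_4_5;
        pkt.invert_pol = true;
        pkt.preamble   = 8;
        pkt.size       = 8;
        memset(pkt.payload, 0xAB, pkt.size);        /* dummy payload */

        gettimeofday(&now, NULL);
        jit_enqueue(&jit_queue, &now, &pkt, JIT_PKT_TYPE_DOWNLINK_CLASS_A);
    }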

I also moved the filter to the sending stage as suggested.

After looking at thousands of log lines and the interesting cases, I am pretty sure it works as intended. I am using this as a daily driver for my home installation from now on.

So, back to the original question: I (now) give TTN packets priority over private ones and make sure that private packets are not sent to TTN servers. That should be as much in line as possible with one shared gateway and the current protocol specification, if I understood everything correctly.

I plan to implement this at my university as well. If the packet loss for the private network turns out not to be acceptable, I can at least argue for money for a second gateway and antenna spot.
