Thanks @pe1mew, that’s a correct illustration.
For those interested: the reasons why we do not route traffic from Packet Broker to The Things Network V2 are all of the following:
The Things Network V2 has an RX1 delay of 1 second. This means that a downlink message needs to be ready for transmission by the gateway 1 second (RX1) or 2 seconds (RX2) after the uplink was received. The whole roundtrip cannot be made that fast. This is why we increased the default RX1 delay in The Things Network V3 to 5 seconds.
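To make the timing constraint concrete, here is a minimal sketch of Class A receive-window timing. The function name and the zero-based time origin are my own illustration; the fixed 1-second gap between RX1 and RX2 is from the LoRaWAN Class A specification.

```python
# Sketch of LoRaWAN Class A receive-window timing (simplified model).
# rx_window_deadlines is a hypothetical helper, not a TTN API.

def rx_window_deadlines(uplink_done_s: float, rx1_delay_s: float):
    """Return the times at which RX1 and RX2 open after an uplink ends.
    In LoRaWAN Class A, RX2 opens exactly one second after RX1."""
    rx1 = uplink_done_s + rx1_delay_s
    rx2 = rx1 + 1.0
    return rx1, rx2

# With V2's default RX1 delay of 1 s, the downlink must reach the gateway
# within ~1-2 s of the uplink; V3's 5 s default leaves far more headroom.
print(rx_window_deadlines(0.0, 1.0))  # V2: (1.0, 2.0)
print(rx_window_deadlines(0.0, 5.0))  # V3: (5.0, 6.0)
```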
V2 has its own mechanism to route traffic from one region to another, known as cross-region peering. This is what Packet Broker does for V3. It is not reliable to keep both V2 cross-region peering and Packet Broker in place, and we can also not reliably bypass V2 cross-region peering for incoming traffic, so we don't support any incoming traffic from outside of V2
Slightly related: we recommend against migrating the device session from V2 to V3, and we want to add a barrier for this by not routing data back to V2. There is a lot of traffic in the DevAddr blocks that are allocated to TTN V2. For TTN V3, we have fresh new DevAddr blocks (260B0000/16). If Packet Broker were to route all traffic for TTN V2 DevAddrs also to TTN V3 (or from V3 to V2, which is what this topic is about), we would get two new problems:
- We cannot guarantee that the device session exists only in TTN V3. The same message could end up in, and get handled by, both V2 and V3 concurrently, leading to MAC state corruption and conflicting downlinks, effectively disconnecting the device
- We would be sending all 600 messages per second to TTN V3. While that is certainly possible in terms of performance, it comes with a serious cost (€€€ + ops time) in network capacity, and it makes logging and tracing really hard, while we want to start with a clean slate and gradually grow TTN V3
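For illustration, here is how a DevAddr-block check works. The `/16` notation is CIDR-style: the first 16 bits of the 32-bit DevAddr must match the prefix. The helper name and example addresses are mine, not a Packet Broker API; 260B0000/16 is the V3 block mentioned above.

```python
# Hypothetical helper: does a DevAddr fall in the TTN V3 block 260B0000/16?
# Prefix matching works like IP CIDR: compare the top prefix_len bits.

TTN_V3_PREFIX = 0x260B0000
TTN_V3_PREFIX_LEN = 16

def in_ttn_v3_block(dev_addr: int) -> bool:
    # Mask keeping only the top TTN_V3_PREFIX_LEN bits of a 32-bit address.
    mask = (0xFFFFFFFF << (32 - TTN_V3_PREFIX_LEN)) & 0xFFFFFFFF
    return (dev_addr & mask) == (TTN_V3_PREFIX & mask)

print(in_ttn_v3_block(0x260B1234))  # True: inside the V3 block
print(in_ttn_v3_block(0x26001234))  # False: an address outside the V3 block
```

Routing on these prefixes is what lets Packet Broker keep V2 and V3 traffic separated without inspecting session state.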
Finally, the roundtrip would be: gateway → TTN V3 Gateway Server → TTN V3 Packet Broker Agent → Packet Broker Data Plane → Packet Broker Router → Packet Broker Data Plane → TTN V2 Packet Broker Agent → TTN V2 Broker → TTN V2 Handler → TTN V2 Broker → TTN V2 Packet Broker Agent → Packet Broker Data Plane → Packet Broker Router → Packet Broker Data Plane → TTN V3 Packet Broker Agent → TTN V3 Gateway Server → gateway. You can imagine that this only works in ideal circumstances: on one continent only, with very low latency gateway backhauls, etc. There are so many things that can break the downlink flow that we decided not to support it in the first place, to avoid "why does it work for them but not for me" questions here that need significant troubleshooting effort.
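A back-of-the-envelope latency budget shows why this path only fits the 1-second RX1 window under ideal conditions. All per-hop latencies and the processing overhead below are made-up illustrative numbers, not measurements, and the model collapses the many hops above into just two variables.

```python
# Rough latency model for the V3 -> V2 -> V3 roundtrip sketched above.
# roundtrip_ms is a hypothetical helper with illustrative constants.

def roundtrip_ms(backhaul_ms: float, inter_region_ms: float) -> float:
    """Two gateway-backhaul legs, four cross-region Packet Broker legs,
    plus a fixed 200 ms for all server-side processing (an assumption)."""
    return 2 * backhaul_ms + 4 * inter_region_ms + 200

RX1_BUDGET_MS = 1000  # V2's 1-second RX1 delay

# Same continent, fast backhaul: might just fit.
print(roundtrip_ms(50, 30))    # 420 ms, within the budget
# Intercontinental peering or a slow (e.g. cellular) backhaul: misses RX1.
print(roundtrip_ms(300, 150))  # 1400 ms, past the RX1 window
```

Even the optimistic scenario leaves little margin for retries or jitter, which is the "works for them but not for me" failure mode described above.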