Newbie question - best practice research

Hello everyone!
I have a question regarding the best practice on how to solve a performance issue I am facing.
I have installed an outdoor The Things Network gateway and added several third-party sensors, but the packets I am receiving are far fewer than expected. The gateway is placed in a very central area, and I was wondering whether I am having this issue because of traffic on that gateway.
What would be the best way to solve this? Should I make the network private (and how)? Is there another way to ensure that only my sensors connect to it?
Thank you in advance!!!

First, some information on what gateway & devices you are using would be useful. And what your definition of range is - 100m? 1km? And where you are - not just country, but actual location - which informs the likely activity in the area as well as other gateways. How many actual sensors, how often are they sending, what is the approximate data size, and are you joining with OTAA or ABP? Do the devices support ADR?

Height is good - getting the antenna up high is always useful. And having a device that you can put in different places to try to see what happens.

Unless you are getting hundreds of uplinks hitting the gateway a minute with a slow internet connection to the servers, I doubt that is the reason you are not seeing the number of packets you would like.

You can make your setup private by having your own network server. But it won’t stop the gateway hearing the other transmissions and passing them on to your network server, only for them to be rejected.

As for using TTN, it’s against the terms of service to block any uplinks.

Thank you for your answer.
First, some details about the gateway:
Gateway
Brand: The Things Industries
Model: The Things Outdoor Gateway
Location: https://goo.gl/maps/gHNL4k2jVwYTRQVu7
Altitude: 210 m
20 uplinks per minute, ~30 bytes
Definition of range: 2 km
I do not know whether the devices support ADR.
I have 8 sensors that each send approximately 25 bytes every hour, joining with OTAA.
The gateway is using 3G.
Can you help me with what I should do next? What should I check for regarding the packet loss?

Nice location and thank you for filling in the details.

You are inevitably going to hear many devices, being relatively high in the city centre, so if someone has a deployment of ~1,000 devices calling in once an hour, then you are going to see that traffic.

An internet connection on 3G can vary a lot depending on what’s going on - and in a city centre, all sorts of other mobile use could affect the internet speed & latency you get. So the first thing I’d do is move the gateway, along with a couple of devices, to somewhere with an internet connection that’s not on mobile, to test that the gateway and devices are working well. Then you could try to find somewhere with far less mobile use if you want to check the 3G side of things.

Another quick test, if at all possible, is to try 4G - maybe using a mobile phone as an access point, as you can run an internet speed & latency test on it - I use http://speedof.me as a simple test.
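
If you want a number closer to what the gateway actually experiences than a browser speed test, a minimal sketch like this (run from a laptop tethered to the same 3G/4G connection) times a few TCP connects to the network server. The eu1.cloud.thethings.network host is an assumption about which cluster you are on - adjust it to wherever your gateway actually points:

```python
# Rough backhaul latency check: time a few TCP connects to the network
# server. The host below is an assumption (the EU1 community cluster);
# point it at whichever cluster or network server your gateway uses.
import socket
import statistics
import time

HOST = "eu1.cloud.thethings.network"   # assumption - adjust to your cluster
PORT = 443
SAMPLES = 10

def connect_ms(host: str, port: int) -> float:
    """Return the time taken to open a TCP connection, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

samples = [connect_ms(HOST, PORT) for _ in range(SAMPLES)]
print(f"min {min(samples):.0f} ms, "
      f"median {statistics.median(samples):.0f} ms, "
      f"max {max(samples):.0f} ms")
```

Anything consistently in the hundreds of milliseconds, or wildly variable, is worth knowing about before you blame the radio side.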

I’m assuming your devices are on all the time - joining with OTAA and then transmitting uplinks as required - rather than powering off. If they do power off, they will have to rejoin, which puts pressure on the internet connection with a time-sensitive request & response.

One other solution to consider, if you are on a budget as most churches are, would be to switch the devices over to using plain LoRa with a central receiver that sends your transmissions over the 3G - whilst the central receiver may still hear some of the other uplinks, it wouldn’t have to send that data over the 3G connection at all. This would depend on what the devices are and whether they can be changed - a good project for a 2nd/3rd year electronics university student!

Assuming you mean a 25 byte packet once an hour: while keeping usage low is admirable, that is probably too infrequent.

You may want to consider something like transmitting new readings every 10-15 minutes, in the hope that, while you will certainly miss some data points, on average enough of the packets will get through to meet your needs.

(If instead you mean you send short packets at more frequent intervals, that is probably OK up to a point as well, but keep in mind that each packet has 13 bytes of overhead, so 25 one-byte packets is not 25 bytes but rather 25 * (1 + 13) = 350 bytes.)
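
To put numbers on that, here is a minimal sketch of the airtime arithmetic using the standard LoRaWAN time-on-air formula for EU868 (125 kHz bandwidth, coding rate 4/5, explicit header, CRC on). The 13-byte framing overhead and the ~30 s/day fair-use airtime figure mentioned below are assumptions worth checking against the current TTN Fair Use Policy:

```python
# Minimal sketch of LoRaWAN uplink time-on-air for EU868, 125 kHz, CR 4/5.
# The 13-byte framing overhead (MHDR + FHDR + FPort + MIC) and the ~30 s/day
# fair-use airtime budget are assumptions - check the current policy.
import math

OVERHEAD = 13  # MHDR(1) + FHDR(7) + FPort(1) + MIC(4)

def time_on_air_ms(app_bytes: int, sf: int, bw_hz: int = 125_000) -> float:
    """Uplink time on air in milliseconds (explicit header, CRC on, CR 4/5)."""
    pl = app_bytes + OVERHEAD          # PHYPayload size in bytes
    t_sym = (2 ** sf) / bw_hz * 1000   # symbol duration, ms
    de = 1 if sf >= 11 else 0          # low data rate optimisation at SF11/12
    payload_symbols = 8 + max(
        math.ceil((8 * pl - 4 * sf + 28 + 16) / (4 * (sf - 2 * de))) * 5, 0)
    preamble = (8 + 4.25) * t_sym
    return preamble + payload_symbols * t_sym

for sf in (7, 10, 12):
    toa = time_on_air_ms(25, sf)
    uplinks_per_day = 24 * 60 / 15     # one uplink every 15 minutes
    print(f"SF{sf}: {toa:.0f} ms per uplink, "
          f"{toa * uplinks_per_day / 1000:.1f} s airtime/day at 15 min intervals")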

Hopefully you mean that your nodes do an OTAA join once when they are started up and then run on those session details for months.

If you are re-joining every time, that’s not only a grossly inefficient misuse of the protocol, which in practice prompts multiple uplink and downlink messages, but likely a cause of failure in itself, as you will rapidly exhaust the join nonce space, and join attempts with a re-used nonce are simply ignored.

In terms of 3G, you are unlikely to saturate it, as the receiver can only decode so many packets no matter how many signals hit the antenna, but being aware of periods of outage would be useful. Normally a gateway sends a stats message once or twice a minute, with or without any received signals. I forget if TTN exposes these readily, but having a task on a cloud box that keeps track of them is very useful - if you see a data outage from your sensors, you can go and check whether the gateway was checking in during that time period or not.
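
As a sketch of that kind of watchdog, assuming The Things Stack v3 and its gateway connection-stats endpoint - the cluster host, gateway ID and API key below are placeholders, and field names can differ between stack versions, so verify against your console or the API docs:

```python
# Sketch of a "was the gateway checking in?" logger, assuming The Things
# Stack v3 and its gateway connection-stats endpoint. Cluster host, gateway
# ID and API key are placeholders - replace with your own values.
import datetime
import json
import time
import urllib.request

CLUSTER = "https://eu1.cloud.thethings.network"   # assumption: your cluster
GATEWAY_ID = "my-outdoor-gateway"                 # placeholder
API_KEY = "NNSXS.XXXX"                            # placeholder API key

URL = f"{CLUSTER}/api/v3/gs/gateways/{GATEWAY_ID}/connection/stats"

def fetch_stats() -> dict:
    req = urllib.request.Request(URL, headers={"Authorization": f"Bearer {API_KEY}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

while True:
    try:
        stats = fetch_stats()
        line = (f"{datetime.datetime.utcnow().isoformat()} "
                f"last_status={stats.get('last_status_received_at')} "
                f"last_uplink={stats.get('last_uplink_received_at')} "
                f"uplinks={stats.get('uplink_count')}")
    except Exception as exc:                      # gateway or API unreachable
        line = f"{datetime.datetime.utcnow().isoformat()} error={exc}"
    with open("gateway_checkins.log", "a") as log:
        log.write(line + "\n")
    time.sleep(60)                                # roughly the stats interval
```

If you log those timestamps once a minute, a gap in the gateway’s own check-ins during a sensor outage points at the backhaul, while a gateway that kept checking in points back at the radio side.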

Adding custom daemons on the gateway hitting other endpoints might be an option, too. Realistically, my gateways have seen far more effort put into the systems which monitor and maintain them, and only minor tweaks to the actual “being a LoRaWAN gateway” aspect.
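
If the gateway firmware lets you run your own scripts (many commercial units are locked down, and the TTOG may be one of them), the daemon can be as simple as this hypothetical heartbeat, where the monitoring URL is a placeholder for whatever uptime service or cloud box you use:

```python
# Minimal heartbeat daemon sketch to run on the gateway itself, if its
# firmware allows custom scripts. The URL is a placeholder for your own
# monitoring endpoint; the gap in the monitor's log is the signal.
import time
import urllib.request

HEARTBEAT_URL = "https://example.com/heartbeat/my-outdoor-gateway"  # placeholder

while True:
    try:
        urllib.request.urlopen(HEARTBEAT_URL, timeout=10)
    except Exception:
        pass  # backhaul down - nothing to do locally, the monitor sees the gap
    time.sleep(60)
```

Either way, the value is in correlating the monitoring record with your sensor data, not in the script itself.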