Quick search of the forums didn't show anything useful, so starting a new thread…
I am evaluating coverage vs not-spots in a small region (say 5 x 5 km), targeting key sites and some general areas. I find it useful to monitor the gateway Console pages to see whether a packet from any one of several known test nodes is picked up as they wander around, or are deliberately moved within an area. This lets me determine the location of not-spots vs hot-spots, either for optimisation or for future infill GW placement.
Assume I have 4 GWs called North, South, East and West, with partially overlapping coverage for a small degree of resilience. As the nodes move around I can check the GW Console data 'live' to see if sequence-numbered packets are received. If I see a gap in seq# on, say, the North console, I can quickly check the others to see whether that sequence # was captured. If it was, great: I have near-constant coverage, as the data will still reach the back end. If not, I check the other two consoles in turn, and if none captured that TX I know I have a potential not-spot. I can then either infill with another GW, or iterate a given GW's placement (or possibly antenna height) to try to re-establish coverage and improve/optimise as needed.
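To frame what I'm after: the manual cross-check above is the kind of thing I'd eventually want scripted. A minimal sketch of the logic (assuming I could somehow get each gateway's received (DevAddr, seq#) pairs into a list; the function name and sample data are purely illustrative):

```python
# Sketch: find sequence numbers heard by NO gateway (potential not-spots).
# Assumes per-gateway logs of (dev_addr, seq) pairs have been captured somehow.

def find_not_spots(gateway_logs, dev_addr):
    """Return seq numbers in the observed range that no gateway heard."""
    seen = set()
    for log in gateway_logs.values():
        seen.update(seq for addr, seq in log if addr == dev_addr)
    if not seen:
        return []
    full_range = set(range(min(seen), max(seen) + 1))
    return sorted(full_range - seen)

# Illustrative data only: North missed 3 and 4, but South heard 3,
# so only seq 4 was missed by every gateway.
logs = {
    "North": [("26011A2B", 1), ("26011A2B", 2), ("26011A2B", 5)],
    "South": [("26011A2B", 2), ("26011A2B", 3)],
    "East":  [("26011A2B", 5)],
    "West":  [],
}
print(find_not_spots(logs, "26011A2B"))  # -> [4]
```

The hard part, of course, is getting those logs in the first place, which is the question below.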
This is fine for quick live checking. I also have another, more detailed set-up capturing and storing some sensor GPS and other data, plus the metadata, for later offline examination of one specific GW location (independent of the N, S, E and W gateways), to build up sequence coverage and map GPS locations and tracks from wherever that one system is used.
My problem is that the GW Console data I see is 'transient': I can see a reasonably long list until it gets truncated, but then it's gone. Also, on say an iPad, if I switch between the browser tabs for each GW, any recently received packets (minutes) quickly update and I see the full list to visually parse for the target DevAddr and check for missing seq#s. On a PC with Firefox or IE the behaviour is different: if I go off-tab for too long I get a 'Fatal Error' and need to manually refresh the page to update, losing any prior data. On a different thread where I flagged this, a forumite suggested ad-blocking might be the issue with respect to Pusher.com, used by the TTN back end, but even though I have whitelisted it the behaviour continues, and I lose the ability to cross-check across all the GWs if I wait too long to look at any given console page.
So my question is: is there an easy/simple method of capturing and storing the traffic data for any given console, so I can analyse and review it offline? Possibly even as an audit or archive of captured coverage data, to allow much later (3 months, 6 months, years?) analysis and comparisons to see how coverage has changed over time, e.g. the impact of a new tower block, new warehouse, or new motorway bridge causing shadowing, seasonality, adverse extreme weather, etc. I did consider something like Squix's (Daniel's) HTTP-integration push of application data to Google Sheets: https://blog.squix.org/2017/07/thethingsnetwork-how-to-use-google-spreadsheet-to-log-data.html but that works at the application-integration level rather than the GW level, and I'm not much of a Softie, so I couldn't hack an equivalent for a GW, if that's even possible through that route. Besides, I am not interested in a given test node's application data, just 'can I see it, and how well?', so there is no need to decode payloads etc. Hence the appeal here.
For now, a simple grab of the DevAddr and seq# is a good start to tell me whether a packet was seen or not. Ideally, though, I would like to store the expanded info (timestamp, RSSI, SNR, etc., as seen in the drop-down for any listed received packet in the Console) for more detailed offline analysis or comparisons.
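One direction I've wondered about, rather than scraping the Console at all: if a gateway runs the standard Semtech UDP packet forwarder, its PUSH_DATA datagrams already carry exactly these fields (base64 PHYPayload plus rssi/lsnr) in JSON, and the DevAddr and FCnt sit unencrypted in the LoRaWAN frame header, so no payload decoding is needed. A rough untested sketch, assuming the forwarder could be pointed (additionally) at a box of mine on the default port 1700; the CSV layout and function names are my own invention:

```python
# Sketch: passively log (DevAddr, FCnt, RSSI, SNR) from Semtech UDP
# packet-forwarder PUSH_DATA traffic, instead of watching the Console.
# Assumes the forwarder is configured to (also) send to this host on UDP 1700.
import base64, csv, json, socket, struct, time

def parse_phy(data_b64):
    """Extract (DevAddr, FCnt) from a base64 LoRaWAN uplink PHYPayload."""
    phy = base64.b64decode(data_b64)
    mtype = phy[0] >> 5
    if mtype not in (2, 4):          # 2/4 = unconfirmed/confirmed data uplink
        return None
    # FHDR: DevAddr is 4 bytes little-endian at offset 1, FCnt 2 bytes at 6.
    dev_addr = "%08X" % struct.unpack_from("<I", phy, 1)[0]
    fcnt = struct.unpack_from("<H", phy, 6)[0]
    return dev_addr, fcnt

def log_forever(csv_path="coverage.csv", port=1700):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    with open(csv_path, "a", newline="") as f:
        w = csv.writer(f)
        while True:
            pkt, _addr = sock.recvfrom(65535)
            # Header: version, 2-byte token, type (0x00 = PUSH_DATA), 8-byte EUI.
            if len(pkt) < 13 or pkt[3] != 0x00:
                continue
            for rx in json.loads(pkt[12:]).get("rxpk", []):
                parsed = parse_phy(rx["data"])
                if parsed:
                    w.writerow([time.time(), *parsed,
                                rx.get("rssi"), rx.get("lsnr")])
                    f.flush()
```

That would give me a per-gateway archive of exactly the DevAddr/seq#/RSSI/SNR rows I want, without touching the application side at all, but I don't know whether it's the sensible route, hence asking.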
Folks, any good pointers?