Job one is to avoid such a situation. Depending on how many GWs can service the nodes you will run into several problems; it's easiest to explain assuming a single GW. I'd also recommend reading up on the documentation and checking out LoRaWAN 101, as the problems then become obvious.
Basically, when a node starts up it will initialise and then start its join process (assuming OTAA, as best practice here). The node sends a join request and, all being well, a short time later the NS will issue a join accept via the GW. 1000 nodes all doing literally the same thing at the same time is impractical due to collisions - a GW can only listen to and decode a limited number of signals at a time.

The next issue - even if a given node does get its message through - is that the response is a join accept downlink. Transmitting that downlink renders the GW briefly deaf to ALL other nodes that may be sending uplinks - including, by then, likely retries of the join request from ALL the not-yet-joined nodes. So all the unjoined nodes back off and try again shortly after, and so on... you get a cascade blockage where the nodes, if they are lucky, gradually succeed in joining, but where the GW also struggles to send all the join accepts back in a reasonable time.

The third issue is that the GW is itself a transmitter and, depending on where in the world it is, will have restrictions on its operation - in EU regions there is a duty cycle limit - and with lots of nodes joining at the same time there is a good chance the GW will be legally throttled from sending further join accepts until it falls back within its DC limits. So the drawn-out process above gets stretched even further.

In the meantime, your 1000 nodes are also effectively mounting a DDoS attack on all the other nodes in range of that community GW - and that is not a good thing. And depending on your location and the density and activity of the other community nodes, your nodes will have to share access: they won't just be competing with each other but with the community, so even the drawn-out process above is an ideal case, and the community traffic will likely stretch your overall join process out even further!
Ok, that's the problem - what is best practice?
- Limit the number of nodes joining in any close time frame
- Ensure that any given group joins with a truly random delay & dither - many nodes running the same firmware/code, perhaps with the same pseudo-random mechanisms (or the same seed), will still end up largely synchronised - bad!
- Ensure that if the initial join request fails they don't all have the same retry timing - again, use a truly random delay before retry, and even then add a random 'dither' if you think timings will be close
- Use an extended back-off period - make the time between each retry gradually longer and longer (with a random dither element) to give the GW time to breathe and recover from DC restrictions, and to limit clashing nodes
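The back-off points above can be sketched as follows - a minimal illustration, not node firmware; the base delay and cap values are assumptions you'd tune for your deployment, and on a real node the RNG must be seeded from a genuine entropy source (e.g. ADC noise or a per-device value such as the DevEUI), otherwise identical firmware produces identical "random" delays:

```python
import random

def next_retry_delay(attempt, base=30.0, cap=3600.0):
    """Exponential back-off with full random jitter ('dither').

    attempt: 0 for the first retry, 1 for the second, and so on.
    The ceiling doubles each attempt, capped at `cap`, and the actual
    delay is drawn uniformly from [0, ceiling] so that nodes running
    identical code spread out rather than retrying in lockstep.
    Returns the delay in seconds.
    """
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0.0, ceiling)
```

Full jitter (uniform over the whole window) spreads retries better than adding a small dither to a fixed schedule, which is exactly what you want when hundreds of nodes fail their first join at the same moment.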
Note ABP may solve part of the problem (at least the GW TX capacity part) but will cause others, and if the assumption is "I transmit, therefore I will be heard", with no way to manage things at the node or via the application, it can be far worse... even where some random timings and dither are introduced.
TL;DR: 1000 nodes all turning on at the same time cannot be handled in one hit with just one or only a few GWs. Further, with normal LoRaWAN timings and event handling, if they start up and only run for 1-2 hours it is possible some stragglers may never join in time to send a useful message before turning off again! (Do the maths once you understand all the individual timings.)
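To put rough numbers on the duty-cycle throttling alone (the airtime figure here is an assumption - in the region of the SF9 airtime of a join accept; check your actual region and band plan):

```python
def max_join_accepts(duty_cycle, airtime_s, window_s=3600.0):
    """Upper bound on downlinks a GW may legally send per window.

    duty_cycle: the legal duty cycle of the band, e.g. 0.01 for 1%.
    airtime_s:  airtime of one downlink in seconds (assumed value).
    Returns the maximum number of such downlinks in window_s seconds.
    """
    return int(duty_cycle * window_s / airtime_s)
```

At a 1% duty cycle and roughly 0.25 s per join accept, that is on the order of 144 join accepts per hour - so 1000 nodes would need several hours of downlink budget from a single GW on that band alone, before counting confirmed-uplink ACKs or ADR downlinks. (EU868 RX2 sits on a 10% sub-band, which helps, but the order of magnitude is the point.)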
One other thought: if deploying such a fleet for a short period of operation before shutdown - like your 1-2 hrs - that doesn't allow much time for the network to balance and optimise operation (SF & TX power), as mechanisms like ADR take time to adjust. It may be worth setting a random range of SFs for initial operation for any given joining group (note the LoRa Alliance, who define the LoRaWAN specs, don't allow networks to accept and join devices at SF12, so avoid that; and in the US, SF11/SF12 can't be used anyhow due to dwell-time limits). If running for more time, then be sure to enable ADR, as the NS can then start the process of optimising operation... (worthwhile if running for, say, a minimum of 30-100 TXs per node, as it usually takes 20-25 uplinks on TTN for ADR to start to kick in and improve/optimise node behaviour).
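Spreading initial SFs across a joining group could be sketched like this - the region names and SF choices are illustrative assumptions, not a definitive mapping; check the Regional Parameters for your deployment:

```python
import random

def initial_spreading_factor(region="EU868"):
    """Pick a random starting SF for a node in a joining group.

    Avoids SF12 everywhere (networks won't accept joins at SF12),
    and additionally avoids SF11 for US915 (dwell-time limits).
    The region strings and SF sets here are illustrative only.
    """
    if region == "US915":
        choices = [7, 8, 9, 10]        # SF11/SF12 ruled out by dwell time
    else:
        choices = [7, 8, 9, 10, 11]    # avoid SF12 for joins
    return random.choice(choices)
```

Randomising the starting SF also spreads the group across different airtimes, which reduces the chance of two joins colliding even on the same channel.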