USA & Canada Sub-Band Plan

With the 72 uplink channels we have in the USA, and the limitation of current gateways to 8 channels, we have some frequency planning to do. The 72 uplink channels are divided into 8 sub-bands, each containing eight 125kHz wide channels and one 500kHz wide channel. The question is, as we deploy gateways (eventually blanketing metropolitan areas or even whole states), how do we want to assign gateways to sub-bands?
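For reference, here is the arithmetic behind those sub-bands per the LoRaWAN US902-928 uplink plan, as a quick sketch (using 0-indexed sub-band numbers; note the numbering convention itself comes up later in this thread):

```python
def subband_khz(s):
    """Uplink channels of 0-indexed sub-band s in the US915 plan:
    eight 125 kHz channels (902.3 MHz + n * 200 kHz, n = 8s .. 8s+7)
    plus one 500 kHz channel (903.0 MHz + s * 1.6 MHz).
    Returns (list of 125 kHz centre freqs, 500 kHz centre freq), in kHz."""
    narrow = [902_300 + 200 * n for n in range(8 * s, 8 * s + 8)]
    wide = 903_000 + 1_600 * s
    return narrow, wide

# Sub-band 1 (0-indexed) is channels 8-15 plus 500 kHz channel 65:
# 903.9 - 905.3 MHz, plus 904.6 MHz
narrow, wide = subband_khz(1)
print(narrow[0], narrow[-1], wide)  # 903900 905300 904600
```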

We could put all gateways on one sub-band (say #7). The principal advantage of the single sub-band plan is redundancy/localization: multiple adjacent gateways can pick up the same signal, providing redundancy in case of interference/loss, and allowing poor man's localization using RSSI (and, eventually, localization via DToF?).

Another plan would involve a “cellular” assignment of sub-bands, where adjacent gateways listen on distinct sub-bands. This makes better use of spectrum (unless you figure that more retransmissions may be required due to lack of redundancy).

My opinion is that we should stick to a single sub-band plan for USA until we hit capacity issues, and only then (re)assign gateways to a secondary sub-band.



The plan is indeed to start with a common sub-band for North America. This will maximise coverage initially, and get the project off the ground. We do the same in EU with a single channel plan.

We probably should not be using sub-bands #1 and #8, as they might be used as default by other systems.

Do you have any insight in what a good choice would be? We have been using #7 a bit so far, but maybe something in the middle, like #4 or #5 is better?

How about sub-band #0 ?

In USA the 915MHz band is shared with many services, so it’s hard to predict what portions of the band will be less utilized.

LoRaWAN’s frequency hopping should help avoid narrow-band interferers, but we are still vulnerable to wide-band interferers (such as 802.11ah). Using sub-band #0 would minimize interference from the wideband interferers because they do not operate so close to the band edge.

Sorry Chris - it was a trap. A facetious trap to flag the issue that sub-bands don’t exist (officially).
In the thread above one post uses #1 to #8; elsewhere I've seen (and suggested) #0 to #7.

We need to get the LoRa Alliance to include the concept of sub-bands into the standard so that we can all agree on the numbering used, otherwise we'll run into trouble between different vendors of nodes/gateways/servers.

Sorry for bringing up such an old thread. Maybe there is more official documentation or a more recent thread now about this.

The FCC regulates that if you are operating in the 915 MHz ISM band within the US you need to bounce between 50 different channels. By bouncing between only 8 (or 8 + 1) channels, would these LoRa devices be in violation of this regulation?

Based on FCC Part 15.247: “the system shall use at least 50 channels if the 20 dB signal bandwidth (measured from the peak power down to the points 20 dB below it, i.e. how wide the channel is) is less than 250 kHz”. LoRa UPLINK in US 915 MHz has 64 channels at 125 kHz BW, so we're good (64 channels of 125 kHz each is more than the minimum of 50). It also states that the 20 dB signal bandwidth of any given channel within that band (ISM 902 - 928 MHz / aka 915 MHz) will not exceed 500 kHz, and the 8 alternative 500 kHz channels are also OK there.
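A quick sanity check of those numbers against the US915 uplink channel grid (my own arithmetic, in kHz, not quoted from any regulation text):

```python
# 64 uplink channels, 125 kHz wide, spaced 200 kHz starting at 902.3 MHz;
# 8 uplink channels, 500 kHz wide, spaced 1.6 MHz starting at 903.0 MHz.
narrow = [902_300 + 200 * n for n in range(64)]
wide = [903_000 + 1_600 * n for n in range(8)]

assert len(narrow) >= 50                   # >= 50 hopping channels for < 250 kHz BW
assert narrow[-1] + 125 // 2 < 928_000     # top 125 kHz channel edge stays inside 902-928
assert wide[-1] + 500 // 2 < 928_000       # top 500 kHz channel edge stays inside 902-928
```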

Now, to your question, which is a very good one and one I've pondered myself: it is my understanding that when they say 'system' they mean 'technology', such as LoRa as a whole. In other words, you can use ANY of the 64 channels as a starting point and jump between 1, 8 or 64 of them in any fashion you like, as long as the above FCC requirements are met.

The frequency hopping mechanism inside the LoRa sensor, based on what I've read in FCC and Semtech docs, will look for a 'default' first carrier/channel (based on programming, OTAA/ABP etc.) and will 'jump' 2 carriers over, then 1 carrier over, then jump back 2, and will cycle through this four-channel sequence until it gets an ACK/JA message. As an example, I can start in Sub Band 1 (channels 0-7 of the 64) on channel 1… if nothing, I jump to 3… if still nothing, I jump to 4… if still nothing, I jump 'back' to the last of the original 4… if nothing, I jump to the next 4 channels. To me this makes sense: try as many contiguous channels on a default/programmed sub-band as possible rather than going 'all over the place'.

How many channels a gateway's radio can 'listen' to at once is what defines how many of the 64 will be used - and how pricey the gateway will be. The Laird Sentrius ones that I'm using are 8 channels (Sub Band 2, channels 8-15, typically used by TTN in the US). All my sensors will OTAA and 'jump' on any of these 8 channels - but they don't have to. As long as we're under the FCC requirements above, we could OTAA/jump to any sub-band. Cisco gateways and pricier base stations can even do 16 channels. But remember that FCC Part 15 states that a “device must accept any interference received”, hence there is no guaranteed QoS. So you could program your LoRa GW to whatever sub-band (or pair of sub-bands for the pricier ones) if you encounter too much 'noise' in any particular area/portion of the 915 MHz spectrum - we just have to deal with it.

This is why LoRa's Rx sensitivity and signal-to-noise ratio come into play; with such high sensitivity, and SNRs that can even be negative (meaning the signal/burst can be 20 dB, i.e. 100 times, lower than the surrounding noise), we'll still be able to make it through, although it may take a couple of seconds for the message to arrive. If you still can't, then Part 15 and LoRa say we're screwed: either find the noise interference and do something about it, get another GW in that area, or change the sub-band. Again, no guarantees.

I hope this is not more 'confusing' than before, and if anyone has any additional comments OR something I have understood incorrectly, by all means please let me/us know!!! j

Hi. New to Lora and TTN, so please forgive me if this has been previously discussed and answered. I did make a pass at search and have reviewed the LoRaWAN 1.1 Regional Parameters document, but have not come up with an answer to the question about why TTN specifies a limited subset of channels for use in the United States:

The reason I ask is that I've set up a LoRa gateway and am now getting a couple of nodes online. The latter are based on the Dragino/RFM based devices and I am using the LMIC 1.6 adaptation by wklenk :

I noticed right away that my node was only getting about 1-in-8 packets through to the gateway. After some research I realized it's because the LMIC code chooses random upstream channels from the full range (0-63). Since TTN only uses 8 of the 125 kHz upstream channels, I was randomly losing ~7/8 of my upstream transmissions.

I’ve adapted the LMIC code to limit the channel selection to only those supported by TTN in the US and now have good results.
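In case it helps others, the adaptation boils down to masking the channel list down to the gateway's sub-band before the random pick. A minimal sketch of the idea (illustrative Python, not the actual LMIC C code; some LMIC ports also offer a helper like `LMIC_selectSubBand()`, but check your version):

```python
import random

def enable_subband(s):
    """Enabled uplink channels for 0-indexed sub-band s:
    125 kHz channels 8s .. 8s+7 plus the matching 500 kHz channel 64+s."""
    return list(range(8 * s, 8 * s + 8)) + [64 + s]

def pick_channel(enabled, last=None):
    """FCC-style hop: pick a random enabled channel, never repeating the last one."""
    choices = [c for c in enabled if c != last]
    return random.choice(choices)

enabled = enable_subband(1)   # TTN NA: channels 8-15 plus 65
ch = pick_channel(enabled)
assert ch in enabled
```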

In any case, I wanted to understand why TTN specifies such a small subset of the available channels.


Eric Davis


From a simplistic perspective a GW comprises 4 key elements - a concentrator/baseband processor (using an appropriate front end with either SX1301 or SX1308), an embedded control processor, some kind of network/backhaul interface, and a (software) packet forwarding engine. A 'full' (vs single channel) concentrator handles 8 channels, as you have found. To cover all 64 channels simultaneously would therefore require the core basic elements with 8 concentrators - on board or as cards - and then likely more expensive and complex RF switches and front ends (if not also multiple discrete antenna I/Os requiring distinct separation). Some of the more expensive solutions use 2x concentrators (16 channels - as noted elsewhere), but it is rare to find 8-off (full 64 channels). As you can imagine this would make for a very expensive, larger and more power-hungry GW! :frowning: What you are seeing is similar to where a single channel GW is installed in an 8 channel environment/deployment, with a 7/8 chance of losing packets if nodes are using all 8 channels.

Deployments need to be planned well to ensure nodes & gw’s have matching channel capacities and operating frequencies for economic & reliable coverage.

Separately (IMHO) we should refrain from deploying single channel GW’s as they lead to inherent packet loss in communities…indeed they shouldn’t even be called GW’s (again IMHO! :wink: ) but rather should be described as a basic ‘bridge’ or something similar to avoid confusion. People see a ‘GW’ deployed on the TTN map and often mistakenly assume they will have coverage when what is really there is only a single channel device…


Eric and Jeff, way better explained and more simplified than I did! I agree about the Single Channel ‘bridges’ vs ‘true’ 8 channel ‘gateways’.

I've noted that some manufacturers/network server providers I've worked with have already 'declared' some sub-bands as default; for example, MultiTech and Cisco gateways connected to Actility (non-TTN) will use US Sub Band 1 (channels 0-7); Laird, MyDevices and almost every other TTN-connected hardware/sensor/gateway manufacturer will start in US Sub Band 2 (channels 8-15). Every device that was programmed for OTAA (Over The Air Activation), which is typically the most common 'join' procedure used (as opposed to ABP, or manual programming), didn't have any issues (whether Actility or TTN) as long as the device in question (sensor, gateway) had firmware already 'set' for such network operator and sub-band. The only exception to this rule was my Adeunis Field Test device, US version. From 'default', it OTAA'd right away on Cisco/Actility Sub Band 1. But when I tried to 'use' it on Laird/TTN, it didn't work right away. After a firmware/registers update from the manufacturer, it switched/worked successfully on Sub Band 2/TTN. The same happened with the latest 'off the shelf' sensors I got, which are primarily defaulted to work with Comcast's machineQ - and were happily returned to the manufacturer for new ones after they admitted the firmware had 'issues' with TTN operation and they would take care of it themselves. :slight_smile:

I note this as an FYI to all: before you get into the 'code' of the device (especially if it's 'off the shelf' and/or you don't want to or don't know how to code it), if you suspect your device is trying to OTAA on the wrong sub-band, as Jeff-UK pointed out you can always contact the mfr and they 'should' be able to send you a firmware/doc update.

Thanks all for the thread and the info! j


From memory, both Cisco & MultiTech offer base units that are 'expandable' via add-in comms cards - e.g. the Conduit family - including but not limited to LoRa cards - though MTech also offers a fixed-fn device like the CAP - and I believe the Cisco (IR family?) options include 16 channel versions (2 cards?), so it may be they support both 0-7 & 8-15… have you tried Cisco with TTN in the US?

No, I didn't use Cisco with TTN, only with Actility, as it was a private LoRaWAN network that used Cisco, and in the US Cisco will only use (for private projects) Actility as their Network Server/Long Range Controller, unless the client provides their own. We didn't use the IR829 either (it came as an option from Cisco to provide wireless backhaul, but we were inside a 'controlled' indoor test environment so no need). We used their standalone IXM (Interface Module) LoRa Packet Forwarder GW which, as you say, came with 16 channels already 'set' for Actility/Cisco/MultiTech on Sub Band 1 (I don't remember what the other band was, to be honest). If you google 'Cisco IXM' in 'images' you'll find it. IMHO it wasn't very good - lots of trouble to deal with, not very reliable, bugs here and there, no GUI - all through the UNIX console, etc. I personally didn't like it at all. But it is an outdoor-rated 16 channel 'rugged' small base station with GPS and 2 antenna ports, with the capability to be powered by PoE+ 30 watts (only one cable), so it is an option out there, just not for me :slight_smile:

Good to get your perspective & experience, thanks for sharing with the community :slight_smile:


Likewise; glad to exchange the knowledge - educate and be educated - in this global effort! j

In LoRaWAN 1.1 (not yet supported by TTN), the Join Accept response can include a CFList, to tell the node which channels to use. Of course, that still requires a node to first find a suitable channel for the OTAA Join Request. For this, LoRaWAN 1.1 Regional Parameters defines (emphasis mine):

If using the over-the-air activation procedure, it is recommended that the end-device transmit the Join-request message alternatively on a random 125 kHz channel amongst the 64 channels defined using DR0 and a random 500 kHz channel amongst the 8 channels defined using DR4. The end-device SHALL change channel for every transmission. For rapid network acquisition in mixed channel plan environments, it is further recommended that the device follow a channel selection sequence (still random) which efficiently probes the groups of nine (8 + 1) channels which are typically implemented by smaller gateways (channel groups 0-7+64, 8-15+65, etc.).
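That recommended probing behaviour could be sketched roughly as follows (a hypothetical illustration of the "groups of nine" idea, not code from any actual stack; `join_channel_sequence` is an invented name):

```python
import random

def join_channel_sequence(groups=8, rounds=1):
    """Yield (channel, datarate) pairs for Join-Requests: for each group g,
    alternate a random 125 kHz channel from 8g..8g+7 (DR0) with the group's
    500 kHz channel 64+g (DR4), visiting the groups in random order so the
    sequence stays random while still probing one 8+1 group at a time."""
    for _ in range(rounds):
        order = list(range(groups))
        random.shuffle(order)
        for g in order:
            yield random.choice(range(8 * g, 8 * g + 8)), 0  # 125 kHz, DR0
            yield 64 + g, 4                                  # 500 kHz, DR4

seq = list(join_channel_sequence())
# 8 groups x (one 125 kHz try + one 500 kHz try) = 16 transmissions per pass
assert len(seq) == 16
```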

For ABP, this states:

Personalized devices SHALL have all 72 channels enabled following a reset and shall use the channels for which the device’s default data-rate is valid.

I don't quite understand that: when using ABP one would already lock a device to a specific provider, so one could also choose specific channels if a provider only supports some channels? Like TTN has made a selection already.

As an aside, LoRaWAN 1.0.1 also supports the CFList for, e.g., EU868, but does not for US902-928:

7.2.4 US902-928 JoinAccept CFList

The US902-928 LoRaWAN does not support the use of the optional CFlist appended to the JoinAccept message. If the CFlist is not empty it is ignored by the end-device.

LoRaWAN 1.0.3 supports it for US915 as well, but I think TTN does not support this.


Thanks @Jeff-UK. This makes sense and gives me an area to study for better understanding. The gateway I am using is a "full" gateway. I am using the SandboxElectronics LoraGo Pi development board which uses the SX1257 radio. From a quick glance I see that it supports the full range of the European and NA ISM bands, and based on your response, it makes perfect sense that its bandwidth is limited to a subset of its tunable range.

As I mentioned in my OP, the loss of packets was not due to my gateway being single channel - it is in fact a multi-channel gateway. The loss was occurring because the IBM LMIC code was randomly choosing upstream channels in the range of 0-63. I've modified that code to limit the channel selection to just those channels supported by the TTN frequency plan for NA.

FWIW, I agree with your suggestion to refrain from deploying single channel GWs. If they were characterized as bridges, where the channel was known, then at least fixed-node LoRa devices within range could make more effective use of them.


Thanks @arjanvanb.

This has been an insightful exchange. From this I realize that my changes to the LMIC code break the implementation from a standards perspective. As I noted in my OP, the LMIC code was selecting random channels from the full range of channel groups during both the join and tx processes. I "fixed" it by limiting the selection to only the range supported by TTN in NA. I think now I must study this code a bit further to see if it supports the CFList, which would answer the question in my mind: "why would the driver code use a full range of channels when regional operations dictate that only a subset are provisioned?"

If indeed the LMIC code expects that the join response should include the CFList, and TTN does not support it as you indicated, do you have any thoughts about how to handle this?

Best Regards,



I’m not even sure if LMiC expects/supports CFList for US915 (it does for EU868). And if it does, you might need LMiC 1.6, as I think 1.5 only supports LoRaWAN 1.0.

Are you using OTAA? Maybe TTN is actually sending a CFList in LoRaWAN 1.0.x for US915 too? See if you want to investigate. (That example might only work for EU868.)

Finally, the V3 network stack, which supports 1.1, is almost due; maybe it will be released during the LoRaWAN Conference in a few weeks…?

Ah, and I suddenly remembered this about ADR:

There are a several moments when an ADR request is scheduled or sent:

  1. The initial ADR Request (for US915 and AU915). This is sent immediately after join and is mainly used to set the channel mask of the device. This one is a bit tricky, because we don’t have enough measurements for setting an accurate data rate. To avoid silencing the device, we use an extra “buffer” of a few dB here. This request is only needed with pre-LoRaWAN 1.1 on our v2 stack. With LoRaWAN 1.1 devices on our v3 stack, we can set the channel mask in the JoinAccept message. ABP devices pre-LoRaWAN 1.1 will only get this message once, if they reset after that, they won’t get the message again; this issue is also solved by LoRaWAN 1.1.

  2. […]


Sorry for being so late to answer this one - I’ve been on a “no-TTN January” for my new years resolution.

There's a lot to read in the thread above, so I'm going to jump back to the question from 28 days ago - are these 8-channel gateways violating the FCC regs? The answer is no, they are fine.

FCC describes two types of spectrum ‘fair use’. One is hopping over 50+ channels and the other is DTS: Digital Transmission System. Both are really trying to ensure that a user doesn’t hog and block a channel, either by hopping off it or by keeping the spectral density low.

There is also an option for ‘hybrid’ mode which as the name suggests, is a hybrid of the two schemes. Easiest to link to here rather than re-write it all:

If you search Google for FCC DTS Hybrid there is plenty of material.

When the LoRaWAN spec was originally written, the USA channel plan was based on 64 channels just because it could be. It satisfied the 50+ restriction and gave good capacity. All sounds great, but the gateways were damn expensive. I remember LinkLabs was promising to make them but didn't, and I also remember having to beg Senet for one of their early 64ch gateways to test against.

So some smart dude noticed the ‘hybrid’ rule and I guess plenty of 8ch EU gateways were available to be converted to US frequencies. So an 8ch system is a capacity restriction but still good enough for most cases, and the price is right.

The problem is that the LoRaWAN spec was written for 64ch gateways at that time and so these 8ch hybrid sub-bands were an unofficial de-facto standard that was deviating from the official spec. Some companies labelled the sub-bands as 0-7, some as 1-8, I think Semtech was suggesting A-H. Interoperability nightmare ensued and as written above, devices needed to be aligned to whatever random choice the network operator decided.

I guess what I’m saying is that there was no centralised design by the LoRa Alliance here, just practical implementations by its members.

All a real pain for devices/modules/stacks trying to match the 64ch spec while knowing they would not find service on 7/8 of the channels. And it's not just during join; it also needs to be considered in the ADR back-off strategy if you fall off your network.

As correctly stated above, the solution is now that the LoRaWAN spec is including ways to steer devices to the right sub-band using a channel mask. Happy days

Maybe 64ch gateways will come back if the cost of SX1301 comes down or a future version gets more demodulators integrated. 16ch seems like a reasonable compromise


Question: is there anything in the FCC regulation that prohibits using 128 channels, each 125 kHz wide, for both up and down packets? I've looked and can't find it, but I haven't been as involved as others on this forum. As far as I can tell, it just limits each packet's air time and power, and states hopping has to be over more than X channels. Of course this wouldn't be the LoRaWAN protocol, but it is still good to understand what the FCC allows and doesn't.