To make a good comparison you have to include the cost of the deployment.
A LoRa network can have shorter range, but it is cheaper, so could more gateways be added to get the same PER as Sigfox?
Exactly. Have you seen a Sigfox gateway? I don’t know the cost, but it’s a very big (19" rack), expensive-looking box, so I assume it would cost several €k.
By comparison, LoRaWAN gateway cost is dropping rapidly, with TTN now setting the bar at €200.
@nestorayuso do you have more information about the technology used by Waviot? The table on the website is commercially targeted and will only show the best case.
e.g. the number of nodes per gateway -> the spectrum is the bottleneck in most cases, not the gateway.
About TDMA vs. random access: TDMA will be more performant but needs more synchronization overhead, and thus listening by the node, which will impact power consumption. Moreover, TDMA only works when you are in control of the network, which is very difficult in unlicensed spectrum.
On the topic of using more gateways: even with a TX power of 2 dBm and SF7 (sensitivity -123 dBm), according to the Hata propagation model for cities the range is 860 m (suburban 1.7 km), resulting in a coverage of 3 km². So adding gateways will of course always have an impact.
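The range figure above can be reproduced roughly with the Okumura-Hata urban model. A minimal sketch, assuming 868 MHz, a 30 m base station antenna and a 1.5 m node antenna with the small/medium-city correction factor (the post doesn't state its exact assumptions, so the result lands near, not exactly at, 860 m):

```python
import math

def hata_urban_loss_db(f_mhz, d_km, h_base_m=30.0, h_mobile_m=1.5):
    """Okumura-Hata median path loss for a small/medium city (dB)."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
         - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

def hata_range_km(link_budget_db, f_mhz, h_base_m=30.0, h_mobile_m=1.5):
    """Invert the model: solve for the distance where loss equals the budget."""
    fixed = hata_urban_loss_db(f_mhz, 1.0, h_base_m, h_mobile_m)  # loss at 1 km
    slope = 44.9 - 6.55 * math.log10(h_base_m)                    # dB per decade
    return 10 ** ((link_budget_db - fixed) / slope)

# 2 dBm TX minus -123 dBm sensitivity -> 125 dB link budget
r = hata_range_km(2 - (-123), 868)
print(f"urban range ~ {r:.2f} km, coverage ~ {math.pi * r**2:.1f} km^2")
```

With these assumed antenna heights the range comes out just under 1 km and the coverage close to the 3 km² quoted; different height assumptions shift it toward the 860 m figure.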
Waviot uses ultra-narrowband DBPSK modulation at 50 bps (Sigfox is 100 bps). On top of that, it uses FEC/coding gain, improving sensitivity and reducing the data rate to 8 bps or 12 bps.
The Waviot base station receiver bandwidth is 500 kHz versus 192 kHz in Sigfox: more capacity, and more processing power needed (they use Nvidia CUDA). They also use three sector antennas versus an omnidirectional antenna to get more range and capacity.
Of course TDMA synchronization has the drawbacks you mentioned and precludes the possibility of making an uplink-only device.
I think TDMA can be an option in UNB solutions, but it should be a must in spectrum-unfriendly solutions like spread spectrum or LoRa.
thx for the info!
But 8 bps means a 25-byte message takes 25 seconds, and according to ETSI you can only transmit a maximum of 36 s/hour, so that means at most 1 message/hour and no potential retransmission, correct?
@maartenweyn Yes, 8 bps = 1 byte per second. With the ETSI duty cycle, a maximum of 36 bytes per hour including headers.
Of course this limits the use cases.
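The arithmetic behind this exchange, assuming the 1% ETSI duty cycle in the common sub-band (36 s of air time per hour):

```python
# ETSI duty-cycle sketch: 1% in the common sub-band -> 36 s/hour on air.
DUTY_CYCLE = 0.01
ON_AIR_BUDGET_S = 3600 * DUTY_CYCLE          # 36 s per hour

def messages_per_hour(frame_bytes, bitrate_bps):
    """Air time of one frame, and how many such frames fit in the budget."""
    airtime_s = frame_bytes * 8 / bitrate_bps
    return int(ON_AIR_BUDGET_S // airtime_s), airtime_s

msgs, airtime = messages_per_hour(25, 8)     # 25-byte frame at 8 bps
print(f"airtime = {airtime:.0f} s -> {msgs} message(s)/hour")  # 25 s -> 1/hour
```

So a single 25-byte frame at 8 bps consumes almost the entire hourly budget, leaving no room for a retransmission.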
To improve reliability, Sigfox transmits every message three times, so the effective data rate is 33 bps. Waviot does not use retransmissions; it uses coding gain, which is a better implementation.
Sigfox transmits every message three times on random channels to avoid co-channel interference. In Waviot, maybe (not sure) the gateway can analyze the spectrum and the cloud server coordinates the nodes to avoid the noisy channels.
Is the model always the same if you put more gateways in the same area?
I mean: if two gateways cover the same area (with a correct distance between them), will the same messages be lost or not?
Yes, the model shows sensor nodes which are transmitting, no matter how many gateways there are or how many LoRa networks there are next to each other. At this moment it also expects the spectrum to be used only by LoRa.
When discussing with @aliekens I used the analogy: “it does not matter how many beautiful cities there are or how many people fit in them, if you cannot all cross the bridge at the same time”.
I would like to thank you for sharing this information.
Could you explain how to find the overhead of a LoRa message compared to a Sigfox message?
You’re saying the packet size of both is 25 bytes. I think the payload size isn’t the same then?
Because I think LoRa has more overhead due to the coding gain it uses to spread the signal across the spectrum?
I’ve read the LoRaWAN specification: a LoRa message consists of a preamble, PHDR, PHDR_CRC, PHYPayload and a CRC. But I have no idea about their sizes.
And in comparison against Sigfox, which one has more overhead?
Indeed, I did not take into account the impact on the payload, because for LoRa it is not so straightforward since it depends on SF and coding rate.
For Sigfox a 12 byte payload results in a 25 byte message on air.
For LoRa, http://www.semtech.com/forum/images/datasheet/LoraDesignGuide_STD.pdf gives some information. On top of the PHY header there is the MAC overhead of 5 bytes (a header of 1 byte and a MIC of 4 bytes); then you have to add the PHY header, CRC (uplink only) and preamble, and then the frame header. In total you need at least 13 extra bytes, but it can be more; a good discussion is in the “Spreadsheet for LoRa airtime calculation” thread.
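The air time itself can be computed from the symbol-count formula in the Semtech design guide linked above. A sketch (parameter names are mine; cr=1 corresponds to coding rate 4/5, and low-data-rate optimisation is assumed on for SF11/SF12 at 125 kHz):

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                    preamble_syms=8, explicit_header=True, crc=True):
    """LoRa frame air time per the Semtech design-guide formula (ms)."""
    t_sym = (2 ** sf) / bw_hz * 1000.0                 # symbol time, ms
    de = 1 if (bw_hz == 125_000 and sf >= 11) else 0   # low-data-rate optimisation
    ih = 0 if explicit_header else 1
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0)
                   - 20 * ih) / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_syms + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

# 25-byte frame, SF7, 125 kHz, CR 4/5
print(f"{lora_airtime_ms(25, 7):.1f} ms")
```

This gives roughly 62 ms for a 25-byte frame at SF7, and the figure grows quickly as SF increases.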
It is a little higher than a LoRaWAN gateway, but not by much.
Interesting, this is rather straightforward. Assuming a packet error rate of 10% is the upper bound for proper operation, for a single base station we have:
LoRa using 125 kHz (but actually 200 kHz channel spacing) and optimal (i.e. impractical) adaptive data rate: 100 messages/minute
Sigfox using 200 kHz: 1,500 messages/minute
That is an application-layer uplink capacity of 0.0008 bit/s/Hz for LoRa and 0.012 bit/s/Hz for Sigfox, i.e. 15 times more uplink capacity/spectrum efficiency for Sigfox compared to LoRa. It would therefore require upward of 3 MHz of spectrum for LoRa to accommodate as many uplink messages/minute as Sigfox.
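These spectral-efficiency figures check out if one assumes a 12-byte application payload (the Sigfox payload size mentioned earlier in the thread):

```python
def uplink_efficiency_bps_per_hz(msgs_per_min, payload_bytes, bw_hz):
    """Application-layer uplink throughput normalised by occupied bandwidth."""
    return msgs_per_min * payload_bytes * 8 / 60 / bw_hz

lora   = uplink_efficiency_bps_per_hz(100,  12, 200_000)   # 0.0008 bit/s/Hz
sigfox = uplink_efficiency_bps_per_hz(1500, 12, 200_000)   # 0.012  bit/s/Hz
print(lora, sigfox, sigfox / lora)                         # ratio: 15x
```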
The proposed “solution” of increasing the gateway density defeats the purpose of LPWAN, and will make the use of LoRa CSS obsolete as a physical layer. More than ever it is time to use flexible base station designs, like the SDR approaches used in Sigfox, Weightless or Cellular IoT. Using a hardware receiver, as is typically done for LoRaWAN, requires a lot of faith…
There might be a reason after all why Sigfox, Weightless-P, NB-IoT and NB-LTE have all elected to use (ultra) narrowband in 200 kHz channels to cater for the expected IoT traffic.
As for synchronization and TDMA/FDMA, this is indeed a way to scale, as used by Weightless-P or NB-IoT. It is inaccurate to claim the EU has to go with duty-cycle limitations: it is indeed the case when you use LoRa CSS, but not when you use more spectrum-efficient narrowband modulations.
@petitgrf Fabien - regarding the duty cycle/TDMA point: if the gateway itself is limited in its ability to transmit, it can’t waste much time coordinating slots, etc. I think this applies to any gateway, NB or CSS. NB systems have the advantage of many more channels, so mitigating collisions is less of a problem.
In the US, with LoRa Symphony Link, we use a 2-second frame header transmission which sets slots by spreading factor. Nodes then randomly select some number of slots, based on QoS, and then choose a random LBT interval. This increases capacity by about 400%.
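A toy illustration of the node-side behaviour described (the slot count, frame size and LBT window here are hypothetical, not Symphony Link's actual parameters):

```python
import random

def pick_slots(n_slots, n_requested, rng=random):
    """A node randomly selects distinct transmit slots within the frame
    (count driven by QoS), then draws a random LBT backoff for each."""
    slots = rng.sample(range(n_slots), n_requested)
    backoffs_ms = [rng.uniform(0, 5) for _ in slots]  # hypothetical 0-5 ms window
    return sorted(zip(slots, backoffs_ms))

# e.g. a frame with 64 slots, node needs 2 transmit opportunities
for slot, backoff in pick_slots(64, 2):
    print(f"slot {slot:2d}, listen-before-talk delay {backoff:.2f} ms")
```

Randomising both the slot choice and the LBT interval is what de-correlates two nodes that would otherwise collide frame after frame.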
The reason we still like LoRa over NB for Symphony Link is that the downlink link margin is very good. Systems like Sigfox or Weightless-P (I think) rely on a standard FSK downlink, which has 20 dB less margin than LoRa, which can be better than -136 dBm on both ends. Implementing an FHSS-compliant, high-gain downlink in a cheap chip is a big advantage of LoRa.
For “city-wide” uplink applications, I agree that thousands of narrowband channels will provide much higher capacity.
@emery02, NB or CSS does matter, as this has implications for whether you can qualify for FHSS. NB also makes LBT usable (remember LTE-U does use LBT for Wi-Fi coexistence). Therefore, duty-cycle limitation does NOT necessarily apply in the EU.
Synchronization is a sensible thing to do, very obviously, and from the little I know about Symphony Link it seems a good patch to LoRaWAN. The slight overhead it carries will be long forgotten once the density of end devices really picks up. This overhead also decreases as the density of base stations increases.
It was clear very early on that LPWAN would not be about sheer range; like always in wireless communications, it is all about capacity, interference management and spectrum efficiency.
To your point, LoRa CSS does have slightly better sensitivity than regular FSK at a given data rate. This is not so true if you compare it to other standard physical layers like 802.15.4g OQPSK (-126 dBm @ 6.25 kbps) or its NB extensions like Weightless-P (-136 dBm @ 625 bps).
Just some notes:
FHSS makes LBT very difficult. In fact, I can’t think of a great way to do LBT using FHSS while also meeting the hopping requirements. CSS or DSSS, or just some method of FDMA or even “trunking,” however, can work well with LBT.
To optimize LBT, you’ll also want a method for low-power group synchronization. DASH7 has the best scheme for this, but there are others.
Semtech’s sensitivity figures are a bit rigged. They factor in FEC, which is OK, but there’s basically no description of the implementation of LoRa’s FEC model. I’ve reverse-engineered it, however. I’m pretty sure it’s a base-32 Reed-Solomon scheme performed on 40-bit payload chunks. You can simulate that or look up book values of the processing gain; it’s about 4 dB. On top of that, they utilize a coherent receiver at the gateway, which adds 3 dB over the endpoint receiver (this is the somewhat disingenuous part). Semtech also tends to use low SF in their sensitivity figures, which reduces some of the SNR loss due to de-spreading. At the end of the day, Semtech’s sensitivity figures have more to do with the quality of the front end AS WELL AS the over-engineered reference design (it has TX/RX switches ahead of the antenna, unlike most of its competitors… again disingenuous, because any compact endpoint can’t accommodate the size of this reference design) than they do with LoRa modulation. A good DSSS QPSK scheme with a more advanced error-correction model (e.g. a convolutional code concatenated with base-256 Reed-Solomon) will outperform LoRa any day of the week, even in multipath environments. TI’s CC13xx devices can do this, although you need to do the RS in firmware.
TDMA is perfectly achievable with low overhead, low duty cycle, and roaming as long as you have a good clock source on the endpoint as well as some protocol mechanisms for self-synchronization and re-synchronization. There’s emerging work on this topic, although the first part of it is little more than adding a good RTC to your device. Incidentally, this is a hell of a lot cheaper and smaller than implementing the LoRa reference design (a big IC with lots of extra components).
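The clock-source point can be made concrete with a back-of-the-envelope guard-time calculation: the slot guard a TDMA scheme needs is set by how far two clocks can drift apart between resynchronisations (the 20 ppm / 1 hour numbers below are illustrative, not from any specific protocol):

```python
def guard_time_ms(drift_ppm, resync_interval_s):
    """Worst-case two-sided slot guard: both ends of a link may drift
    apart at drift_ppm between (re)synchronisations."""
    return 2 * drift_ppm * 1e-6 * resync_interval_s * 1000

# 20 ppm crystal, resync once per hour -> a modest per-slot guard
print(f"{guard_time_ms(20, 3600):.0f} ms")  # 144 ms
```

A better RTC (lower ppm) or more frequent resynchronisation shrinks the guard, which is exactly the overhead-vs-accuracy trade-off being discussed.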
@jpnorair, these are very interesting comments. Agreed FHSS with LBT is tricky, though LBT is not required except for contended access, in which case you can decide on your own LBT scheme, as there is no regulatory constraint (except in Japan). I do not think DSSS and LBT work well together, especially if you do not want to sacrifice spectrum efficiency and want a CDMA-like multiple-access scheme.
Agreed on channel coding as well; actually Weightless-P does support the rate-1/2, K=7 convolutional coding found in 802.15.4g OQPSK (as well as Wi-Fi). Concatenation with RS is attractive; it was considered as well but was left out (of the Weightless-P Core Specification v1.0) for complexity reasons. It may be added in a later revision though.
Also agreed on the LoRa modulation comments, and as it turns out Weightless-P does implement DSSS OQPSK (with a limited SF = 8 max), again similar to 802.15.4g, except that it is also defined for operation in 12.5 kHz channels (while 802.15.4g is 100 kHz with 6.25 kbps as the lowest data rate). Sensitivity figures are indeed no worse than LoRa’s.
Finally, Weightless-P is TDD (easier downlink/uplink load balancing, open-loop power control, etc.) and TDMA. Yes, it does require synchronization overhead and an RTC (or equivalent), but that’s quickly compensated as soon as the network really scales, as you have much improved capacity/spectrum efficiency.
It seems that Semtech is well aware of the capacity issues in this patent application. CSS followed by UNB:
Also interesting discussion here with Stråle and M2MCOMM’s CTO:
Feels a bit like an advertisement for Weightless-P here. I assume you are involved? I’ve built an adaptive RS codec that runs very fast on Cortex-M. You’ll want to update your PHY, though, to make the best use of any type of error correction. It’s sort of odd to me that 802.15.4g – and IIRC Weightless-P – specify convolutional-code FEC but then specify MAC frames that make it less effective than it should be. At least this is something LoRa gets right, even if (as I’ve been told) in present silicon there are glitches in some of the coding modes.
Anyway, you can find me pretty easily if you want to chat more, either through this forum or even by googling my username. I’d be happy to chat.
Apart from the technical difficulties of using LBT with DSSS and FHSS, I’m guessing that LoRaWAN, like others, doesn’t specify LBT (when not mandated) because of the prisoner’s dilemma: at a crowded dinner party, if you wait for a pause in the conversation you almost never get to speak. If you stick your fingers in your ears and talk over someone else, there is still a good chance you will be heard as long as your target is nearer to you. The third party’s message may also get through as well.
On the other hand, if you don’t get to talk because you’re too polite to interrupt others, your message has zero chance of being heard. Perhaps not good for overall spectrum capacity, but it would need the regulators to get involved to maximise that. Also, LBT is not fair to those in the centre of a cluster of active transmitters compared to those at the edges. I’m not aware of any rules which prevent you from adopting both strategies, though: take advantage of the greater LBT duty cycle when traffic is light, and revert to shouting with lower duty cycles when contention increases.
@jpnorair Indeed I am involved in Weightless-P, but this does not prevent me from being unbiased (or trying to be).
I am not sure I get your point about the MAC frame not taking full advantage of the CC FEC; could you elaborate on this?
I remember we exchanged before (I believe on LinkedIn), RS is interesting, especially for long transmission time like LPWAN, and we may get in touch later.