V3 - MQTT - single payload field subscription?

Hi,
In V2, subscribing to single payload fields over MQTT was disabled.

Will it stay that way in V3?

Background for my question:
I am developing a device based on a single ESP8266 - and this device is getting its data via MQTT. (Yes - I am using an ESP to GET data from TTN, not to send it :wink: ) Subscribing to one device works fine - but handling the whole message requires a huge buffer.
Later I would like to subscribe to up to 10 devices - but with all the unnecessary data I am running into memory problems. Subscribing to the payload fields only would save a lot of memory and make the code a lot leaner.

Thanks in advance for your answers!

It sounds like you want your own custom component in the cloud: one that subscribes to the incoming feed, distills it into exactly what you want to pass on to your embedded system, and then forwards that.
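A minimal sketch of the distilling step, assuming the V2 uplink JSON shape (decoded values under a top-level `payload_fields` key); the MQTT wiring around it (e.g. with an MQTT client library on a small server) is left out:

```python
import json

def distill(uplink_json: str) -> str:
    """Reduce a full TTN V2 uplink message to just the parts the device needs.

    Assumes the V2 uplink shape, where decoded values live under the
    top-level "payload_fields" key.
    """
    uplink = json.loads(uplink_json)
    slim = {
        "dev_id": uplink.get("dev_id"),              # keep the device identity
        "fields": uplink.get("payload_fields", {}),  # drop payload_raw, metadata, ...
    }
    return json.dumps(slim)

# A shortened, hypothetical V2 uplink message for illustration.
full = json.dumps({
    "app_id": "my-app",
    "dev_id": "sensor-01",
    "payload_raw": "AQIDBA==",
    "payload_fields": {"temperature": 21.5, "humidity": 48},
    "metadata": {"frequency": 868.1, "gateways": [{"gtw_id": "eui-..."}]},
})
print(distill(full))
```

The component would run this on every incoming message and republish the slim result on a topic of its own, so the ESP only ever sees the small messages.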


There isn’t really a way to specify which fields you want from an MQTT topic.

There are other APIs where you can specify which fields you want, but then you have to poll rather than have the data pushed to you.
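The polling pattern could look roughly like this. Note that the endpoint and the parameter names (`fields`, `last`) here are hypothetical placeholders - check the documentation of the actual API you end up using:

```python
from urllib.parse import urlencode

def build_query_url(base: str, device: str, fields: list[str], last: str) -> str:
    """Build a polling URL that asks for only the named fields.

    The parameter names ("fields", "last") are hypothetical - they stand
    in for whatever field-selection parameters the real API offers.
    """
    return f"{base}/{device}?" + urlencode({
        "fields": ",".join(fields),  # request only the fields you need
        "last": last,                # e.g. everything from the last hour
    })

url = build_query_url("https://example.invalid/api", "sensor-01",
                      ["temperature", "humidity"], "1h")
print(url)
# The device (or a cron job) would then fetch this URL on a timer,
# instead of receiving a push from the MQTT broker.
```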

If the first thing you do is parse out just the payload fields, how often are these 10 devices uplinking that you still run into a memory issue - or are they all arriving at the same time?


I wondered about that, too. I would think that an embedded MQTT client could be set to accept only a single message per cycle of the loop, even if more are available (so that it would immediately get the next one after finishing the first). But admittedly I haven’t actually looked.

I am still in beta development and do not yet know how often the 10 devices will uplink. And yes - handling one dataset per loop could be a way to avoid the flood of data. But the idea of using another API to parse first and fetch the data from there seems to solve my problem - I will have a look! Thanks for that idea!!

But my question may also be interesting for others - will V3 make subscribing to single fields available again?

If this is something you want to see, perhaps file an issue on GitHub?