I’m a little confused by your question about installing ttn-lw-stack separately, though. I was merely following the instructions from here: Configuration | The Things Stack for LoRaWAN
It says to set up the folder structure with ttn-lw-stack-docker.yml inside config/stack.
From there I changed all the necessary settings within that file (i.e. set the domains, SendGrid, certs, etc.).
Then I did the same for the docker-compose.yml file (chose Postgres, added any additional services, etc.).
The result of a docker ps is three containers: redis, postgres, and stack.
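For anyone comparing notes, the shape of my docker-compose.yml is roughly this. It’s a sketch, not a drop-in file — the image tags, volume paths, and port list are my assumptions, so check them against the official example:

```yaml
# Sketch of the three-service layout (image tags, paths, and ports are assumptions)
version: '3.7'
services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=root
      - POSTGRES_USER=root
      - POSTGRES_DB=ttn_lorawan
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
  redis:
    image: redis
    command: redis-server --appendonly yes
    volumes:
      - ./data/redis:/data
  stack:
    image: thethingsnetwork/lorawan-stack
    entrypoint: ttn-lw-stack -c /config/ttn-lw-stack-docker.yml
    command: start
    depends_on:
      - redis
      - postgres
    volumes:
      - ./blob:/srv/ttn-lorawan/public/blob
      - ./config/stack:/config:ro
    ports:
      - "80:1885"
      - "443:8885"
      - "1700:1700/udp"
```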
I ran ttn-lw-stack config | grep localhost because I saw somewhere on the forums that it would list all the localhost entries left behind after the steps above. In hindsight it makes sense that the host binary isn’t configured to look at the settings in my config/stack/ttn-lw-stack-docker.yml file, so it wouldn’t reflect the changes I made.
If I run docker-compose run stack config | grep localhost now, with the containers already running, the output shows --console.ui.gcs.base-url="http://localhost:1885/api/v3"
But that is likely because I had to manually add the following to my docker-compose.yml to get rid of the token refused error:
environment:
TTN_LW_BLOB_LOCAL_DIRECTORY: /srv/ttn-lorawan/public/blob
TTN_LW_REDIS_ADDRESS: redis:6379
# If using CockroachDB:
# TTN_LW_IS_DATABASE_URI: postgres://root@cockroach:26257/ttn_lorawan?sslmode=disable
# If using PostgreSQL:
TTN_LW_IS_DATABASE_URI: postgres://root:root@postgres:5432/ttn_lorawan?sslmode=disable
TTN_LW_CONSOLE_UI_AS_BASE_URL: https://tts.mydomain.com/api/v3
TTN_LW_CONSOLE_UI_IS_BASE_URL: https://tts.mydomain.com/api/v3
TTN_LW_OAUTH_SERVER_ADDRESS: https://tts.mydomain.com/oauth
TTN_LW_IS_OAUTH_UI_CANONICAL_URL: https://tts.mydomain.com/oauth
TTN_LW_IS_OAUTH_UI_IS_BASE_URL: https://tts.mydomain.com/api/v3
TTN_LW_APPLICATION_SERVER_GRPC_ADDRESS: tts.mydomain.com
TTN_LW_DEVICE_CLAIMING_SERVER_GRPC_ADDRESS: tts.mydomain.com
TTN_LW_CONSOLE_OAUTH_AUTHORIZE_URL: http://tts.mydomain.com:1885/oauth/authorize
TTN_LW_CONSOLE_OAUTH_TOKEN_URL: http://tts.mydomain.com:1885/oauth/token
@descartes @benolayinka Unless I missed something, I didn’t notice any core differences in the config examples between Open Source and Enterprise, outside of the obvious like multi-tenancy.
Getting somewhat obsessed here. Very disappointed that in this day and age something like Ubuntu Server 20.04 installs with a borked DNS entry, but I have a fresh install.
The Enterprise files just shouted a lot about a missing license, so I switched back to Community.
Multiple incantations later, I removed all the https entries from the ttn-lw-stack-docker.yml and it let me log in.
I tried kickstarting the ACME/Let’s Encrypt part by using cURL at the command line. I think the automagical configurator needs kicking several times before things appear in the acme folder, and it seems that’s what’s triggering some sort of token refused issue: all the settings say https, but if that’s not working, something glitches.
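For reference, the ACME section of ttn-lw-stack-docker.yml I’m working from looks roughly like this. The field names are my reading of the TTS config reference, and the email, host, and dir are placeholders:

```yaml
# TLS via ACME/Let's Encrypt in ttn-lw-stack-docker.yml (placeholder values)
tls:
  source: acme
  acme:
    dir: /var/lib/acme          # must exist and be writable by the container
    email: admin@mydomain.com
    hosts:
      - tts.mydomain.com
    default-host: tts.mydomain.com
```

As far as I can tell, Let’s Encrypt has to be able to reach that host on port 443 from the outside for the challenge to succeed, which would explain silent failures on an internal-only VM.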
Yikes. Does that mean you’re avoiding the token refused error by eliminating https altogether?
Not ideal. I had a similar problem where my gas tank was empty once so I bought a new car.
For the purposes of getting something running so I can see what it will take to back up the databases, and given the VM is on my internal network with no routes from the outside world, I’m good.
Obviously this needs resolving before it’s put on an external server.
It would be nice as well if the senior TTI staff didn’t act as if telling us what needs backing up isn’t their problem.
One of my online colleagues asked a blindingly obvious question about the https config and pointed out that Let’s Encrypt doesn’t issue certificates for internal IP addresses, even if they’re set up in an external DNS.
So it turns out that it takes a few seconds on the first go to try to set up the certificate, and it fails silently. I tried setting up a certificate with Certbot, which gave me the details of the failure to confirm this.
Then I tried to create an OpenSSL config and change the ttn-lw-config.yml at /run/secrets/, but the startup complains it can’t find those files, and I can’t break into the stack container to see if the files need to be there.
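For a purely internal VM, a self-signed certificate is the usual fallback. A sketch of generating one with OpenSSL (the CN and file names are placeholders):

```shell
# Generate a self-signed key + cert for local testing (CN and paths are placeholders)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=tts.mydomain.com"

# Sanity-check the subject on the resulting certificate
openssl x509 -in cert.pem -noout -subject
```

My understanding is you’d then set tls.source: file with tls.certificate and tls.key pointing at those files in ttn-lw-stack-docker.yml (and mount them into the container), but verify those keys against the docs. Browsers and gateways will still complain about the untrusted issuer, of course.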
All of this would explain why making it all http-only works.
I’ll have a go using an external server on Linode or similar over the weekend.
Question, since you sound like a bit of a pro at debugging: have you played around at all with the Prometheus endpoint? I’m interested in piping some of those metrics into a third-party monitoring tool.
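From my first read of the Prometheus docs, I’m guessing the scrape job would look something like this. The metrics path and target port here are assumptions on my part, to be checked against the TTS metrics documentation:

```yaml
# prometheus.yml scrape job sketch (metrics path and target port are assumptions)
scrape_configs:
  - job_name: 'tts'
    metrics_path: /metrics
    scheme: https
    static_configs:
      - targets: ['tts.mydomain.com:8885']
```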
The instructions pull and run the stack in a Docker container, without ever copying the ttn-lw-stack binary to your PATH, so if you simply follow them, it should not be possible to run ttn-lw-stack from the command line.
Running docker-compose run stack config | grep localhost runs the same command on the stack instance inside a Docker container. From the response, it looks like your server addresses are configured correctly. If you are still getting Token Exchange refused, it could be because certificates are not generating correctly. Are you using ACME on a domain you own?
Ah, yes now I understand. You are right, I did install separately.
Yeah, I used the ACME script. I’m not sure how they could be generated incorrectly; I did a manual inspection of the files in the acme folder and they do appear to be fine. I also assumed that the automated cert process worked because I do get this on some URLs (typically once I can log in):
But I also get this on most of the auth URLs:
Adding env variables for those routes that tell it to skip SSL is the only way I could get past the token error.
Now I’ve got one running on the public interweb, I can revisit a local install and set up the certificates, now that I understand a little more about how to sacrifice chickens, goats, and firstborn to get paths in Docker to cooperate.
It definitely looks like your certificates are being retrieved correctly. Unfortunately, the only thing I can recommend is starting over and leaving docker-compose.yml as is, i.e. don’t add those environment variables.
Once you have updated ttn-lw-stack-docker.yml with your server address, client secret, and HTTP keys, run docker-compose run stack config to verify that your configuration settings are all loaded correctly, and you should be good to go.
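To spell out that last step a bit, the fields that typically need editing in ttn-lw-stack-docker.yml are roughly the following (placeholder values; the key-length comments reflect my reading of the docs, so double-check them):

```yaml
# Typical minimum edits in ttn-lw-stack-docker.yml (placeholder values).
# Your server address also appears in the various *-url fields shown earlier.
console:
  oauth:
    client-secret: 'console-secret'   # must match the OAuth client you register
http:
  cookie:
    block-key: 'CHANGE-ME'            # 32-byte hex, e.g. openssl rand -hex 32
    hash-key: 'CHANGE-ME'             # 64-byte hex, e.g. openssl rand -hex 64
```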