Hi all! This is my first real post on the fediverse, coming to you live from my own Lemmy instance, which took me way too long to set up. It turns out the docker-compose.yml file provided by the official Lemmy documentation does not allow the containers outbound access to the internet, which prevents users on your instance from seeing any other instances. For SEO's sake, here is the error message I was receiving:
error trying to connect: dns error: failed to lookup address information: Try again
Anywho, I updated the docker-compose.yml file to put all containers on one network, and allow that network outbound access while restricting inbound access to only ports 80 and 443, which worked a treat.
version: "3.3"

networks:
  lemmy:
    # internal: false lets containers on this network reach the internet;
    # inbound access is still limited to the ports published on the proxy
    internal: false

services:
  proxy:
    image: nginx:1-alpine
    networks:
      - lemmy
    ports:
      # the only ports exposed to connections from outside
      - 80:80
      - 443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      # set up your certbot and letsencrypt config
      - ./certbot:/var/www/certbot
      - ./letsencrypt:/etc/letsencrypt/live
    restart: always
    depends_on:
      - pictrs
      - lemmy-ui
  lemmy:
    image: dessalines/lemmy:0.17.4
    hostname: lemmy
    networks:
      - lemmy
    restart: always
    environment:
      - RUST_LOG=warn,lemmy_server=info,lemmy_api=info,lemmy_api_common=info,lemmy_api_crud=info,lemmy_apub=info,lemmy_db_schema=info,lemmy_db_views=info,lemmy_db_views_actor=info,lemmy_db_views_moderator=info,lemmy_routes=info,lemmy_utils=info,lemmy_websocket=info
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on:
      - postgres
      - pictrs
  lemmy-ui:
    image: dessalines/lemmy-ui:0.17.4
    networks:
      - lemmy
    environment:
      # this needs to match the hostname defined in the lemmy service
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      # set the outside hostname here
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=localhost:1236
      - LEMMY_HTTPS=true
    depends_on:
      - lemmy
    restart: always
  pictrs:
    image: asonix/pictrs:0.3.1
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    # options can be passed to pict-rs like this; here we set the max image size and force conversion to webp
    # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
    networks:
      - lemmy
    environment:
      - PICTRS__API_KEY=API_KEY
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt
    restart: always
  postgres:
    image: postgres:15-alpine
    # this needs to match the database host in lemmy.hjson
    hostname: postgres
    networks:
      - lemmy
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
    restart: always
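With this in place, a quick sanity check is to confirm a container on the network can actually resolve an outside hostname. The nginx alpine image ships busybox's nslookup, so something like this should now succeed (the lemmy image itself doesn't include DNS tools, as far as I know):

docker-compose exec proxy nslookup lemmy.ml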
I can only imagine this is intentional, to keep people from accidentally exposing their instance to the internet before they intend to. That said, a comment in the file explaining this would be welcome.
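For anyone wondering about the mechanism: Docker refuses all external connectivity for networks declared with internal: true, which produces exactly the DNS failure above. In other words, a definition like this (network name just illustrative) is what keeps the containers offline:

networks:
  lemmy:
    internal: true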
Interesting. I took a different approach to solve this issue.
I left all the containers on their internal-only network, but added a secondary, external-facing network for lemmy so it could make outbound calls (otherwise it couldn't reach SMTP or search for communities on other instances).
I think it is more secure to leave the backend services on the internal network only; otherwise they might be exposed.
Hmm, so only the UI needs to be able to make outbound calls? Because as far as I understood it, the backend needs to be able to do so to automatically aggregate whatever you’ve subscribed to… But if not, that’s a good workaround for sure!
Here is the problem I had (both SMTP and external community searches were timing out) and how I fixed it. I think the only service I added to the new external network was lemmy. I can double check in the morning, but it’s documented here in this post:
https://lemmy.ml/comment/494632
My thought process was that I wanted the lemmy services to all communicate on the internal network, but allow the lemmy service to make outgoing calls.
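Roughly, the relevant bits looked like this; I'm going from memory, so treat the network names as illustrative rather than exact:

networks:
  lemmyinternal:
    internal: true
  lemmyexternal:

services:
  proxy:
    # the proxy was already externally reachable so it can publish 80/443
    networks:
      - lemmyinternal
      - lemmyexternal
  lemmy:
    # the one change: attach lemmy to the external network for outbound calls
    networks:
      - lemmyinternal
      - lemmyexternal
  # postgres, pictrs, and lemmy-ui stay on lemmyinternal only

That way the database, pict-rs, and the UI never get a route out, but the lemmy backend can still reach SMTP and other instances.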