I’m to the point now where my little home device has enough services and such that bookmarking them all as http://nas-address:port is annoying me. I’ve got 3 docker stacks going on (I think) and 2 networks on my Synology. What’s the best or easiest way to be able to reach them by e.g. http://pi-hole and such?

I’m running it all on a Synology 920+ behind a modem/router from my ISP, so everything is on the 192.168.1.0/24 subnet, and I’ve got Tailscale on it with the NAS as an exit node, if that helps.

  • beeng · 4 points · 1 year ago

    Everybody is saying a reverse proxy, which is correct, but you said docker stacks, so if that means docker compose then the names of your containers are also in DNS and you can use those.

    Can’t remember whether the port is still needed or not, however.
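    For example, a minimal compose sketch (the service names here are just illustrative, not OP’s actual stack):

    ```yaml
    # docker-compose.yml (hypothetical stack, for illustration only)
    services:
      pihole:
        image: pihole/pihole:latest
        # no published ports are needed for container-to-container traffic

      some-app:
        image: alpine:latest
        # from inside this container, "pihole" resolves via Docker's embedded
        # DNS; you still use the container's own port (80 here), not a port
        # published to the host
        command: sh -c "wget -qO- http://pihole:80/admin/ || true"
    ```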

    • Perhyte@lemmy.world · 12 points · 1 year ago

      AFAIK docker-compose only puts the container names in DNS for other containers in the same stack (or in the same configured network, if applicable), not for the host system and not for other systems on the local LAN.

      • beeng · 1 point · 1 year ago

        If you don’t set the network, doesn’t it default to host?

        I’m pretty sure it’s available locally… yes, but maybe not over the network, so it might not be as useful for OP. Correct!

        • CalicoJack@lemmy.dbzer0.com · 1 point · 1 year ago

          Yes, that’s how it’s supposed to work if they’re all on the same Docker network (same yaml). In practice, it can be flaky and you’re much better off using ip:port.

        • emax_gomax@lemmy.world · 1 point · 1 year ago

          In general, yes. You can think of each container in a docker network as a host, and docker makes these hosts discoverable to each other. Docker also supports some other network types that may not follow this concept if you configure them that way: for example, if you force all containers to use the same networking stack as one container (I do this with gluetun so I can run everything through a VPN), all services will be reachable only via the gluetun host instead of via individual service hosts.
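          For reference, a rough compose sketch of that kind of gluetun setup (the images and ports here are illustrative, not an exact config):

          ```yaml
          services:
            gluetun:
              image: qmcgaw/gluetun:latest
              cap_add:
                - NET_ADMIN
              ports:
                # ports for the containers behind the VPN get published here,
                # on gluetun, since they share its network stack
                - "8080:8080"

            qbittorrent:
              image: lscr.io/linuxserver/qbittorrent:latest
              # reuse gluetun's networking stack: this container gets no DNS
              # name of its own on the docker network and is reached via gluetun
              network_mode: "service:gluetun"
          ```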

          Furthermore, services in a container are not exposed outside of it by default. You must explicitly state which container ports are reachable from your host (the ports: option).
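          Something like this, with the host port on the left and the container port on the right (the service and the numbers are just an example):

          ```yaml
          services:
            sonarr:
              image: lscr.io/linuxserver/sonarr:latest
              ports:
                # host:container, so http://nas-address:8989 reaches port 8989 inside
                - "8989:8989"
          ```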

          But getting back to the question at hand, what you’re looking for is a reverse proxy. It’s a program that accepts requests from multiple clients and forwards them somewhere else. So you connect to the proxy and it can tell, based on how you connect (the URL), whether to send the request to sonarr or radarr. http://sonarr.localhost and http://radarr.localhost will both route to your proxy, and the proxy will pass them to the respective services based on how you configure it. For this you can use nginx, but I’d recommend caddy as it’s what I’m using and it makes setting up things like this a breeze.
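          As a rough idea, a Caddyfile for that could look like this (the hostnames and upstream ports are placeholders, and it assumes caddy runs on the same docker network so the container names resolve):

          ```
          # Caddyfile: reverse proxy sketch
          sonarr.localhost {
              reverse_proxy sonarr:8989
          }

          radarr.localhost {
              reverse_proxy radarr:7878
          }
          ```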

        • i_am_not_a_robot · 1 point · 1 year ago

          It might work if you put them on the same Docker network? I use Kubernetes and it definitely has this feature.
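          If the stacks are separate compose projects, one way to try that (the network and service names are hypothetical) is a pre-created network that both stacks declare as external:

          ```yaml
          # once, outside compose:  docker network create shared
          # then in each stack's docker-compose.yml:
          services:
            pihole:
              image: pihole/pihole:latest
              networks:
                - shared

          networks:
            shared:
              external: true
          ```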