So my SWAG container can't see the other containers on the same Docker network by name; all the conf files need the IP and port to work.

The other containers can see each other (sonarr and sab for example) and they are all on the same network.

Anyone know why?

Found the fix:

  • scaredofplanes@lemmy.world · 1 year ago

    Swag has to have its own docker network, and the containers proxied through swag have to be on that network. It can't be bridge or host. Spaceinvaderone did a good video on setting this up and covers that part very clearly, I think. Maybe I misunderstood, but since you said they're all on the same network, I assumed it was their original network.
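
    A user-defined network like that can be sketched in compose form (service names here are illustrative, not from the video):

    ```yaml
    networks:
      proxynet:
        driver: bridge   # user-defined bridge, so Docker's embedded DNS resolves container names

    services:
      swag:
        image: lscr.io/linuxserver/swag
        networks:
          - proxynet
      sonarr:
        image: lscr.io/linuxserver/sonarr
        networks:
          - proxynet
    ```

    With both services on proxynet, swag can reach sonarr by name instead of a hard-coded IP; the default bridge network does not offer this name resolution.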

  • Faceman🇦🇺 · 1 year ago

    I run multiple LANs for different services on my server, so I was hitting these brick walls all the time too. I just hard-coded the LAN IPs for everything, and it's been absolutely perfect for years now.

  • Tiff@reddthat.com · 1 year ago

    I get hit by this all the time.
    The worst thing is when docker containers scale up/down, so they get new IPs.
    The proxies (mostly nginx) only do DNS resolution at startup, which is why people say to add a resolver configuration to your nginx: it forces a re-resolution every 30 seconds.

    You'll have containerA/B/C with IPs 172.20.0.[2-4], all sharing the hostname "container". Then if you add a new container (scale=4), containerD comes up with 172.20.0.5.
    Your nginx container still resolves "container" to [2-4], and it will never resolve to the new container unless you restart nginx or you have the resolver configuration (which forces a fresh resolution every 30 seconds).
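
    A minimal version of that resolver configuration looks something like this (127.0.0.11 is Docker's embedded DNS server; the port and variable name are just for illustration):

    ```nginx
    # Re-resolve names via Docker's embedded DNS, caching answers for 30s
    resolver 127.0.0.11 valid=30s;

    # Assigning the upstream to a variable forces nginx to resolve it
    # at request time instead of once at startup
    set $upstream http://container:8080;
    proxy_pass $upstream;
    ```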

    This one feature makes me hate using nginx as a reverse proxy for containers, but it's still more intuitive than having to write constant Traefik middlewares just so I can have everything the way I want it.

    This hit us at Reddthat recently and was part of the reason we had some downtime. The UI containers were scaling out with load, but the proxy wasn't resolving the new containers and was still sending traffic to the original container. :eyeroll:

    • Entropy@lemmy.world (OP) · 1 year ago

      Surely that’s only an issue if you’re telling nginx the internal IP of the container, instead of the IP it’s mapped to on the host? The host IP would always be the same, assuming the host IP is static.

      Always better to use the container names where possible to get around all of this crap.

      I’ve considered using traefik but it seems to have more features than I need, I know nginx and I’m comfortable with what I know.

      • Tiff@reddthat.com · 1 year ago

        that's only an issue if you're telling nginx the internal IP of the container

        container names

        Oh, how naive. I thought so too. Nope.

        If you have an nginx container (swag) inside the docker network without a resolver 127... configuration line, then on initial loading the container resolves all upstreams. In this case yours are sab and sonarr. These resolve to 127.99.99.1 and 127.99.99.2 respectively (for example purposes). Those answers are kept in memory and are not resolved again until the container reloads.

        Let's say sab was a service that could scale out to multiple containers. You would now have two containers called sab and one sonarr. The IP resolutions are 127.99.99.1 (sab), 127.99.99.2 (sonarr), 127.99.99.3 (sab).
        Nginx will never forward a packet to 127.99.99.3, because as far as nginx is concerned the hostname sab only resolves to 127.99.99.1. Thus, the second sab container will never get any traffic.

        Of course this wouldn't matter in your use case, as sab and sonarr aren't able to run highly available. BUT, let's say your two containers restarted/crashed at the same time and they swapped IPs or got new IPs because docker decided the old ones were still in use.

        Swag thinks sab = 127.99.99.1 and sonarr = 127.99.99.2. In reality, sonarr is now 127.99.99.3 and sab is 127.99.99.4. So you launch http://sonarr.local and get greeted with a "sonarr is down" message. That is why the resolver lines around the web say to set a short TTL (valid=5s) to force the name to be constantly re-resolved.
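
        For reference, SWAG's bundled proxy confs follow roughly this variable-based pattern (the sonarr port is its usual default; valid=5s matches the short TTL described above):

        ```nginx
        # Docker's embedded DNS; re-validate names every 5 seconds
        resolver 127.0.0.11 valid=5s;

        # Variables defer resolution to request time, so a restarted
        # or rescheduled container is picked up within the TTL
        set $upstream_app sonarr;
        set $upstream_port 8989;
        proxy_pass http://$upstream_app:$upstream_port;
        ```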

        This issue is exactly what happened here: https://reddthat.com/comment/1853904

        I know nginx

        Oh, don't get me wrong, nginx/Swag/NPM are all great! I've been trialling NPM myself. But the more I use nginx with docker, the more I think maybe I should look into this k8s or k3s thing, because with the amount of networking issues I end up getting and the hours I spend dealing with them… it might just be worth it in the end :D

        /rant

  • gazoinksboe@lemmy.world · 1 year ago

    Have you tried using the container name instead of the IP address? I've had that work with proxies in the past.

    • Entropy@lemmy.world (OP) · 1 year ago

      That's what's not working, but I can ping them from within the SWAG container using the container name.