Hello,

I have two Podman containers: one running Linkstack and another running Nginx Proxy Manager. I want Nginx Proxy Manager to serve the website from the Linkstack container, but unfortunately this does not work.

I connected the two containers to a shared network, which I set up with podman-compose.

First, I created the network with “podman network create n_webservice”.

Compose.yaml

services:
  NGINXPM:
    image: docker.io/jc21/nginx-proxy-manager:latest
    container_name: NGINXPM
    networks:
      - n_webservice
    volumes:
      - /home/fan/pod_volume/npm/data/:/data/
      - /home/fan/pod_volume/npm/letsencrypt/:/etc/letsencrypt
    ports:
      - 8080:80
      - 4433:443
      - 9446:81
  linkstack:
    image: docker.io/linkstackorg/linkstack
    container_name: linkstack
    networks:
      - n_webservice
    ports:
      - 4430:80

networks:
  n_webservice:
    external: true

I have tried every destination I can think of in the Nginx Proxy Manager entry, but unfortunately I can't get any further. Neither http://linkstack:4430 nor http://127.0.0.1:4430 works.

Can someone please tell me how I can reach the linkstack container from the NGINXPM container?
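
A likely culprit, judging from the compose file above: on a shared Podman network, containers reach each other by container name on the container's internal port, not on the port published to the host. A proxy-host entry along these lines should work (a sketch only – the values are taken from the compose file above, not from a verified setup):

```text
# Nginx Proxy Manager → Hosts → Proxy Hosts (sketch)
Scheme:            http
Forward Hostname:  linkstack   # container_name, resolved via the n_webservice network
Forward Port:      80          # the container's internal port, not the published 4430
```

Name resolution between containers requires a DNS-enabled network, which `podman network create` sets up by default on current Podman versions.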

  • makiOP · 3 months ago

    Thank you 🙏 Can you post a sample for the configuration, please?

    • static09@lemmy.world · 3 months ago

      Took a while, but here’s how to get the previously mentioned article working at a basic level. I’ll write this out for future readers in case they come across this post.


      If you would like to reset Podman to factory defaults (i.e. absolutely nothing configured), you can start with the command below. I used it a lot while testing different things in Podman to give myself a clean slate.

      podman system reset --force
      

      Create the pod, then the containers inside it. Note that the pod is treated like a container itself, so we publish the ports on the pod instead of on the individual containers.

      podman pod create --restart unless-stopped -p 8080:80 -p 4443:443 -h podhost testpod
      
      podman run -dt --pod testpod --name httpd docker.io/jitesoft/lighttpd:latest
      
      podman run -dt --pod testpod --name alpine docker.io/library/alpine:latest
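
      As an aside, the pod built above can be exported to Kubernetes YAML, which makes the whole setup easy to recreate later (a sketch; assumes a reasonably recent Podman):

      ```shell
      # Serialize the running pod and its containers to Kubernetes YAML
      podman generate kube testpod > testpod.yaml

      # Later, recreate the whole pod from that file
      podman play kube testpod.yaml
      ```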
      

      And to test I did:

      podman exec -it alpine sh -c "apk update && apk upgrade"
      
      podman exec -it alpine apk add curl
      
      podman exec -it alpine curl http://localhost
      

      Which will return the default 404 page since lighttpd is not configured.

      And then running curl outside the container on localhost will present with the same default 404 page:

      curl http://localhost:8080
      

      This may not fit your exact use case, but together with the article it should get you going. Whether you use pods, macvlan, or slirp4netns, it should point you in the right direction.

      I’ll repost the sources that led me down this path here:

      Podman starting tutorial
      https://github.com/containers/podman/blob/main/docs/tutorials/podman_tutorial.md

      Podman network tutorial
      https://github.com/containers/podman/blob/main/docs/tutorials/basic_networking.md

      Redhat Container Networking article
      https://www.redhat.com/sysadmin/container-networking-podman

      Baeldung Communication Between Containers
      https://www.baeldung.com/linux/rootless-podman-communication-containers

      • static09@lemmy.world · edited · 3 months ago

        To do some further testing, I added a mariadb container to the pod, installed mycli in the alpine container, and was able to connect to the mariadb database from the alpine container.

        podman run -dt --pod testpod --restart unless-stopped --name testdb --env MARIADB_ROOT_PASSWORD=a_secret_pass \
        --volume /fake/path/databases:/var/lib/mysql:z docker.io/library/mariadb:11.2
        

        This command is all one line, but I added a line break for readability. I used MariaDB 11.2 because that’s what I had on hand from another project. Note the “:z” suffix on the volume – this relabels the directory so SELinux allows Podman to access it.

        podman exec -it alpine apk add mycli
        
        podman exec -it alpine mycli -u root -p a_secret_pass
        

        This connects to the database successfully and, as you can see, it looks as if the database were running right inside Alpine; however, the database is not accessible from outside the pod.

        It’s also worth noting that I initially had some trouble accessing a webapp from outside the VM hosting the container. This was due to firewalld blocking the connection. Since I’m using AlmaLinux, which runs firewalld by default, I had to add a rule to allow traffic on port 8080.
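
        The firewalld change mentioned above looks roughly like this (a sketch – run as root, and the port must match the one published on the pod):

        ```shell
        # Permanently open TCP port 8080, then reload firewalld to apply the change
        firewall-cmd --permanent --add-port=8080/tcp
        firewall-cmd --reload

        # Confirm the port now appears in the active rules
        firewall-cmd --list-ports
        ```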

        edit: 1) a capital was missed | 2) added info about firewalld

        • makiOP · edited · 2 months ago

          Thank you. What can I do if several containers use the same port – for example, more than one nginx container in one pod?

          Pod (NginX Proxy Manager :8080, Nginx1 :80, Nginx2 :80, Nginx3 :80)
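
          For what it’s worth, a sketch of the two usual ways around this: containers in a pod share a single network namespace, so only one process can bind port 80. You can either give each nginx its own listen port inside the pod, or drop the pod and attach the containers to a shared network, where every container keeps its own port 80 and the proxy reaches them by name (http://nginx1:80 and so on – the names here are illustrative).

          ```nginx
          # Sketch: distinct listen ports inside one pod (ports and roots are illustrative)
          server { listen 8081; root /var/www/site1; }   # Nginx1
          server { listen 8082; root /var/www/site2; }   # Nginx2
          server { listen 8083; root /var/www/site3; }   # Nginx3
          ```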

    • static09@lemmy.world · edited · 3 months ago

      Here is the article I used to help me understand what I wanted to do. Hiding away in the actual Podman tutorials lol. Once I get my laptop up and running, I’ll post my config, since it’s only running in my learning environment and I haven’t done anything with Podman in my prod homelab yet. This setup did let me get two containers (a database and a webapp) talking to each other.

      https://github.com/containers/podman/blob/main/docs/tutorials/basic_networking.md#Communicating-between-containers-and-pods

      My environment is Podman on AlmaLinux 9.4 (SELinux enforcing) inside a Hyper-V VM on Windows 11. I can access the webapp in the Podman pod from outside my laptop.