Hi all,

as I’m running a lot of Docker containers in my “self-hosted cloud”, I’m also a little worried about pulling a malicious Docker image at some point. And I’m not a dev, so I have very limited ability to inspect the source code myself.

Not every Docker image is a “Nextcloud” with hundreds of active contributors and many eyes on the source code. Many self-hosted projects are quite small, GitHub accounts can be hacked, etc. …

What I’m doing at the moment:

Project selection:
- only select docker projects with high community activity on GitHub and a good track record

Docker networks:
- use a separate isolated network for every container, without internet access
- if certain APIs need internet access (e.g. for geolocation data), I use an NGINX proxy that forwards to that domain only (a self-made outgoing application firewall)
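
A minimal sketch of that layout in a compose file (service names, image names, and the proxy config path are made up for illustration): `internal: true` keeps a network off the host's external routing, and only the proxy sits on a second network that can reach the internet.

```yaml
# Hypothetical docker-compose.yml: the app lives on an internal-only network;
# an NGINX forward proxy bridges to the internet for allowed domains only.
services:
  app:
    image: example/app:latest          # placeholder image
    networks:
      - backend                        # no direct internet access from here
    environment:
      HTTP_PROXY: "http://egress-proxy:8888"

  egress-proxy:
    image: nginx:alpine
    networks:
      - backend
      - egress                         # the only network with internet access
    volumes:
      - ./nginx-allowlist.conf:/etc/nginx/nginx.conf:ro

networks:
  backend:
    internal: true                     # Docker blocks external traffic entirely
  egress: {}
```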

Multiple LXC containers:
- I split my Docker containers across multiple LXC instances via Proxmox; some sensitive containers like Bitwarden run on their own LXC instance

Watchtower:
- no automatic updates; instead, manual updates once per month, with testing afterwards

Any other tips? Or am I worrying too much? ;)

  • nukacola2022@alien.topB · 1 year ago

    Since you are using LXC/LXD, make sure AppArmor is enabled on the host and that a configuration profile exists (a decent default one should be available) that blocks the containers from reading things like the /etc/passwd file.

    I personally run all containers on CentOS/Alma/Fedora systems specifically to take advantage of their strong SELinux container policies.

    Other things you can do would be to rebuild public images, patch them, and save them to your private registry. I find that not all container maintainers patch as aggressively as I would like. Furthermore, you can look into running containers as non-root and using a rootless engine like Podman instead of Docker.
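
    As a sketch of that rebuild-and-patch idea (registry hostname, image names, and versions are illustrative), a compose file can build a thin local image on top of the upstream one and tag it for a private registry:

```yaml
# Hypothetical: rebuild the public image with fresh OS patches and tag it
# for a private registry, instead of running upstream's build directly.
services:
  app:
    build:
      context: .
      # The Dockerfile in this context would be roughly:
      #   FROM upstream/app:1.2.3
      #   USER root
      #   RUN apk upgrade --no-cache   # or apt-get upgrade, per base image
      #   USER app
    image: registry.internal.example/patched/app:1.2.3
```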

  • ck_ · 1 year ago

    You are not worrying too much.

    Docker containers are notoriously riddled with outdated, vulnerability-laden content. Even reputable creators (e.g. Nextcloud) really only maintain their own part of the container: they rarely release new builds when system dependencies get updates, and even less often for the base images they depend on. So yes, Docker containers should always be run in a very secure environment, and doing that is by no means trivial, given that Docker itself runs as root. Best advice, if you can follow it: don’t run Docker containers if you don’t really have to, and don’t run them if you are not sure what you are getting into.

  • Charming-Molasses-22@alien.topB · 1 year ago

    It depends on your security priorities and whether you trust the software you plan on using. Securing software and Docker containers can be as deep a rabbit hole as you’re willing to go.

  • jesuisoz@alien.topB · 1 year ago

    You’re not worrying too much. Project selection and general awareness are definitely the crucial points.

    Isolated networks and separation of concerns are also important.

    To avoid data leaks, take time to review your firewall rules. Do not “allow to any” from the LAN interface; take time to allow just the ports you need. It takes effort, and everyone at home is going to scream when they use a new app, but it’s worth the price.

    You can also add an IDS/IPS on the LAN side to prevent malicious apps from establishing outside connections. Have a look at ZenArmor or CrowdSec.

    You can also have a look at Proxmox’s internal firewall to isolate VMs and limit their accessibility scope.

  • SamanthaSass@alien.topB · 1 year ago

    I’m a big fan of having a testing environment that is isolated. Even something as simple as a VM that isn’t connected to the production network can be incredibly valuable when testing new software or new processes.

  • WiseCookie69@alien.topB · 1 year ago

    Granted I use Kubernetes, but here you go:

    • I run stuff with user namespaces, so even a root process within the container is unprivileged on the host
    • I isolate namespaces via NetworkPolicies
      • Even my Nextcloud instance has no business checking upstream for updates (I have Renovate for that)
    • I use securityContexts to make my containers as unprivileged as possible
      • drop all capabilities
      • enforce a read-only container filesystem
      • enforce running as a specific UID/GID (many maintainers are lazy and just run their stuff as root)
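
    Put together, a pod spec implementing those points might look roughly like this (pod name, image, and UID/GID are placeholders):

```yaml
# Hypothetical Kubernetes pod: unprivileged UID/GID, all capabilities
# dropped, read-only root filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000                    # enforce a specific non-root UID
    runAsGroup: 1000
  containers:
    - name: app
      image: example/app:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
      volumeMounts:
        - name: tmp
          mountPath: /tmp              # writable scratch space, since / is read-only
  volumes:
    - name: tmp
      emptyDir: {}
```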
  • Not_your_guy_buddy42@alien.topB · 1 year ago

    It’s funny how, as a self-hoster with no open ports, supply-chain attacks are almost my biggest worry… Here are the tidbits I’ve collected so far, but I’m just getting into this, so take it with a grain of salt…

    1. working out how to run my containers as non-root… Most support this already. It’s a matter of adding `user: "UID:GID"` to the compose file and making sure that user can read and write any dirs you want to map, and it’s done. Now whatever runs in the container does not have root, and there’s less chance of shenanigans inside the container and on the host.
      Some smaller projects you have to tweak or rebuild.*
    2. If I can manage it, I’ll also run the Docker daemon rootless as the next milestone. I already had this working on a Proxmox Ubuntu VM, but could not get it to work on a netcup VPS, for example.
    3. Docker sock proxy
    4. VLANs
    5. in compose files, if the containers can handle it:
       security_opt:
         - no-new-privileges:true
       cap_drop:
         - ALL
    6. (I still have to work out the secrets stuff: secrets in files, Ansible Vault, …)

    (* One example of de-rooting a container: I got Tempo running as non-root the other night. It is based on an nginx Alpine Linux image, and after a while I found an nginx.conf online where all the directories are redirected to /tmp, so nginx can still run when a non-root user launches it. I mapped that config file over the one in the container, set it to run as my user, and it worked. I did not even have to rebuild it.)
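
    Combining points 1, 5, and 6 above, one compose service might look like this (image name, UID/GID, and the secret name are placeholders):

```yaml
# Hypothetical compose service combining a non-root user, dropped
# privileges, and file-based secrets.
services:
  app:
    image: example/app:latest
    user: "1000:1000"                  # run as an unprivileged UID:GID
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    read_only: true                    # optional: read-only container filesystem
    tmpfs:
      - /tmp                           # writable scratch space
    secrets:
      - db_password                    # exposed at /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
```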