• xmunk@sh.itjust.works · 17 hours ago

      Honestly? Pretty fucking awesome if you get it configured correctly. I don’t think it’s super useful for production (I prefer chef/vagrant) but for dev boxes it’s incredible at producing consistent environments even on different OSes and architectures.

      Anything that makes it less painful for a dev to destroy and rebuild an environment that’s corrupt or even just a bit spooky pays for itself almost immediately.
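A minimal sketch of what such a disposable dev box might look like (base image and packages are illustrative, not from the thread):

```dockerfile
# Hypothetical dev-box image: pin the base so every dev, on any host OS,
# gets the same toolchain.
FROM ubuntu:22.04

# Install build tools in one layer; clean the apt cache to keep it small.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace
```

Destroying and rebuilding is then just two commands: `docker build -t devbox .` and `docker run -it --rm -v "$PWD":/workspace devbox bash` — the `--rm` flag throws the container away on exit, so a "spooky" environment never survives a restart.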

      • MajorHavoc@programming.dev · 15 hours ago

        > I don’t think it’s super useful for production (I prefer chef/vagrant)

        Yeah!

        Docker and OCI get abused a lot to thoughtlessly ship a copy of the developer’s laptop into production.

        Life is so much simpler after taking the time to build thoughtful, correct recipes in an orchestration tool.
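For illustration, a declarative recipe in Chef might look like this (recipe and template names are hypothetical) — you describe the desired state of the host instead of shipping a snapshot of a laptop:

```ruby
# Hypothetical Chef recipe (recipes/default.rb): declare desired state;
# Chef converges the node toward it on every run.
package 'nginx'

service 'nginx' do
  action [:enable, :start]
end

# Render the site config from a template and reload nginx when it changes.
template '/etc/nginx/sites-available/app.conf' do
  source 'app.conf.erb'
  notifies :reload, 'service[nginx]'
end
```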

        > Anything that makes it less painful for a dev to destroy and rebuild an environment that’s corrupt or even just a bit spooky pays for itself almost immediately.

        Exactly. The learning curve is mean, but it’s worth it quickly as soon as the first mystery bug dies in a rebuild fire.

    • Platypus@sh.itjust.works · 15 hours ago

      In my experience, very, but it’s also not magic. Being able to package an application with its environment and ship it to any machine that can run Docker is great, but it doesn’t change the fact that modern deployment architectures can get extremely complicated, and Docker adds one more component that needs configuration and debugging to an already complicated stack.
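As a sketch of that extra configuration surface: even a toy two-service stack (service and credential names below are made up) already carries its own networking, volume, and environment settings that can need debugging on top of the application itself.

```yaml
# Hypothetical docker-compose.yml: the app plus a database is already
# a port mapping, a virtual network, a named volume, and env wiring.
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```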

      • skuzz · 12 hours ago

        And a new set of dependency problems, depending on the base image. Then fighting the layers, both to optimize size and, with some image hubs, to answer “why won’t it upload that one file change? It’s a different file now! The hashes can’t possibly be the same!” And having to find hacky ways to slap it until the correct files end up in the correct places.
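One common tactic for both the size fight and the surprise-layer-hash fight is a multi-stage build: order layers so rarely-changing steps cache well, and copy only the final artifact into the shipped image. A sketch (module and binary names are made up):

```dockerfile
# Hypothetical multi-stage build for a Go service.
FROM golang:1.22 AS builder
WORKDIR /src
# Copy the dependency manifests first so this layer's cache only
# invalidates when dependencies change, not on every source edit.
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Final image contains just the static binary — nothing from the toolchain.
FROM scratch
COPY --from=builder /out/app /app
ENTRYPOINT ["/app"]
```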

        Then manipulating multi-arch manifests to work reliably for other devs in a cross-processor environment so they don’t have to know how the sausage is made…
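A rough sketch of that multi-arch workflow with `docker buildx` (registry and tag are placeholders): it builds per-architecture images and publishes a single manifest list, so devs on x86 and ARM machines pull the same tag and get the right image.

```
# Create and select a buildx builder (one-time setup).
docker buildx create --use

# Build for both architectures and push one manifest list.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/team/app:latest \
  --push .

# Inspect the resulting manifest list to verify both arches are present.
docker buildx imagetools inspect registry.example.com/team/app:latest
```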

    • marcos@lemmy.world · 16 hours ago

      It’s a way to provide standard configuration for your programs without one configuration interfering with another.

      Honestly, almost all the alternatives work better. But Docker is the one you can run on any system without large changes.

    • Gamma@beehaw.org · 16 hours ago

      I think they’re really useful. There are alternatives that I’d say have reached feature parity at this point, but the concepts of containerization are the same.