I’d expected this, but it still sucks.

  • 0110010001100010@lemmy.world · 5 months ago

    Really glad I made the transition from ESXi to Docker containers about a year ago. Easier to manage too and lighter on resources. Plus upgrades are a breeze. Should have done that years ago…
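
    For what it’s worth, the upgrade flow that makes it such a breeze is roughly this (a minimal sketch, assuming a docker-compose.yml is already in place; the service specifics are your own):

    ```sh
    # Pull newer images for every service defined in docker-compose.yml,
    # then recreate only the containers whose image actually changed.
    docker compose pull
    docker compose up -d

    # Optional: reclaim disk space from the old, now-unreferenced images.
    docker image prune -f
    ```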

    • kalpol@lemmy.worldOP · 5 months ago

      I need full-on segregated machines sometimes, though. I’ve got stuff that only runs on Win98 or XP (old radio programming software).

          • DeltaTangoLima@reddrefuge.com · 5 months ago

            No headaches here - running a two-node cluster with about 40 LXCs, many of them using Docker, and an OPNsense VM. It’s been flawless for me.

            • TCB13@lemmy.world · 5 months ago

              If you’re already using LXC containers, why are you stuck with their questionable open-source and ass of a kernel when you can just run LXD/Incus and have a much cleaner experience on a pure Debian system? It boots way faster, fails less, and is more open.

              Proxmox will eventually kill the free/community version; it’s just a question of time, and they don’t offer anything particularly good over what LXD/Incus offers.

              • DeltaTangoLima@reddrefuge.com · 5 months ago

                I’m intrigued, as your recent comment history keeps taking aim at Proxmox. What did you find questionable about them? My servers boot just fine, and I haven’t had any failures.

                I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

                • TCB13@lemmy.world · 5 months ago

                  your recent comment history keeps taking aim at Proxmox. What did you find questionable about them?

                  Here’s the thing: I ran Proxmox professionally in datacenters from 2009 until the end of last year, multiple clusters of around 10-15 nodes each. I’ve been around for all the wins and fails of Proxmox; I’ve seen the rise and fall of OpenVZ, all the SLES/RHEL compatibility issues, and then the move to LXC containers.

                  While it worked most of the time and their paid support was decent, I would never recommend it to anyone since LXD/Incus became a thing. The Proxmox PVE kernel has a lot of quirks and hacks. Besides the fact that it’s built on Ubuntu’s kernel, which is already a dumpster fire of hacks (waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations), they add even more garbage on top of it. I’ve been burned countless times by their kernel when it comes to drivers, having to wait months for fixes already available upstream, or for them to fix their own shit after they introduced bugs.

                  At some point not even simple things such as OpenVPN worked fine under Proxmox’s kernel. Realtek networking was probably broken more often than it worked, ZFS support was introduced with guaranteed kernel panics, and upgrading between versions was always a shot in the dark: half of the time you’d get a half-broken system that could boot and pass a few tests but would randomly fail a few days later. Their startup is slow, slower than any other solution’s; it even includes daemons that exist just to ensure other things are running (because most of them don’t even start properly with the system on the first try).

                  Proxmox is considerably cheaper than ESXi, so some businesses use it, like we did, but it’s far from perfect. Eventually Canonical invested in LXC, and a very good container solution, much better than OpenVZ and co., was born. LXC got stable and widely used, and LXD added the higher-level hypervisor management, networking, clustering, etc. And since the Incus fork, we now have all that code truly open-source, with its creators working on the project without Canonical’s influence.

                  There’s no reason to keep using Proxmox, as LXC/LXD got really good in the last few years. Once you’re already running LXC containers, why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated, and free?

                  I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

                  Well, if you have some time to spare on testing stuff, try LXD/Incus and you’ll see. Maybe you won’t replace all your Proxmox instances, but you’ll run a mixed environment like I did for a long time.

                  • DeltaTangoLima@reddrefuge.com · 5 months ago

                    OK, I can definitely see how your professional experiences as described would lead to this amount of distrust. I work in data centres myself, so I have plenty of war stories of my own about some of the crap we’ve been forced to work with.

                    But, for my self-hosted needs, Proxmox has been an absolute boon for me (I moved to it from a pure RasPi/Docker setup about a year ago).

                    I’m interested in having a play with LXD/Incus, but that’ll mean either finding a spare server to try it on, or unpicking a Proxmox node to do it. The former requires investment, and the latter is pretty much a one-way decision (at least, not an easy one to roll back from).

                    Something I need to ponder…

              • fuckwit_mcbumcrumble@lemmy.world · 5 months ago

                why are you stuck with their questionable open-source and ass of a kernel

                Because you don’t care about it being open source? Just working (and continuing to work) is a pretty big motivating factor to stay with what you have.

                • TCB13@lemmy.world · 5 months ago

                  Because you don’t care about it being open source?

                  If you’re okay with the risk of one day ending up like the people running ESXi now, then you should be fine. Let’s say that not “ending up with your d* in your hand” when you least expect it is also a pretty big motivating factor to move away from Proxmox.

                  Now, I don’t see how someone in a self-hosting community on Lemmy could bluntly state what you just did.

                  • fuckwit_mcbumcrumble@lemmy.world · 5 months ago

                    What makes you think that can’t happen to something just because it’s open source? And of all companies, it’s from Canonical.

                    It’s “Selfhosted”, not “SelfHostedOpenSourceFreeAsInFreedom/GNU”. Not everyone has drunk the entire open source punch bowl.

      • eerongal@ttrpg.network · 5 months ago

        I agree with the other poster; you should look into Proxmox. I migrated from ESXi to Proxmox 7-8 years ago or so, and honestly it’s been WAY better than ESXi. The migration process was pretty easy too; I was able to bring the images over from ESXi and load them directly into Proxmox.
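
        In case it helps anyone doing the same move, the import is roughly this (a sketch from memory; the VM ID, path, and storage name are made-up examples):

        ```sh
        # Create an empty VM shell in Proxmox (ID 100 is just an example).
        qm create 100 --name imported-vm --memory 4096 --net0 virtio,bridge=vmbr0

        # Import the exported ESXi disk into a Proxmox storage ("local-lvm" here);
        # qm importdisk accepts VMDKs directly and converts them.
        qm importdisk 100 /path/to/exported-disk.vmdk local-lvm

        # Attach the imported disk and make it the boot device.
        qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0
        ```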

      • TCB13@lemmy.world · 5 months ago

        Fear not, my friend. Get yourself into LXC/LXD/Incus, as it can do both containers and full virtual machines. It’s available in Debian’s repositories and is fully and truly open-source.
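
        Roughly what that looks like in practice (a minimal sketch; the instance names are made up):

        ```sh
        # A system container (shares the host kernel, LXC underneath).
        incus launch images:debian/12 my-container

        # A full virtual machine (QEMU/KVM underneath), from the same tooling.
        incus launch images:debian/12 my-vm --vm

        # From here on, the same commands manage both.
        incus exec my-container -- apt update
        incus list
        ```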

    • TCB13@lemmy.world · 5 months ago

      So… you replaced a proprietary solution with a free one that depends on proprietary components and a proprietary distribution mechanism? Get yourself into LXC/LXD/Incus, which does both containers and VMs and is available in Debian’s repositories. Or Podman, if you really like the mess that Docker is.
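
      Getting started is about as simple as it gets (a sketch, assuming a Debian release where the incus package is available, e.g. from stable or backports):

      ```sh
      # Install from Debian's repositories.
      apt install incus

      # Interactive first-run setup: storage pool, network bridge, etc.
      incus admin init
      ```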

      • kalpol@lemmy.worldOP · 5 months ago

        I’ve seen you recommending this here before. What’s its selling point vs, say, qemu-kvm? Does Incus do virtual networking without having to straight up learn iptables or whatever? (Not that there’s anything wrong with iptables, I just have to choose what I can learn about.)

        • TCB13@lemmy.world · 5 months ago

          Does Incus do virtual networking without having to straight up learn iptables or whatever?

          That’s just one of the things it does. It goes much further: it can create clusters; download, manage, and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (even live, sometimes). Another big advantage is that it provides a unified experience for both containers and VMs: no need to learn two different tools/APIs, as the same commands and options manage both. Even profiles defining storage, network resources, and other policies can be shared and applied across both containers and VMs.
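
          For the networking question specifically, here’s a hedged sketch (the bridge name, subnet, and profile name are all made up):

          ```sh
          # A managed bridge: Incus handles NAT, DHCP and DNS on it,
          # with no hand-written iptables/nftables rules.
          incus network create demobr0 ipv4.address=10.10.10.1/24 ipv4.nat=true

          # One profile bundling that network plus a root disk...
          incus profile create demo
          incus profile device add demo eth0 nic network=demobr0
          incus profile device add demo root disk path=/ pool=default

          # ...applied identically to a container and a VM.
          incus launch images:debian/12 c1 --profile demo
          incus launch images:debian/12 vm1 --profile demo --vm
          ```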