So, I’ve been self-hosting for decades, but on physical hardware. I’ve had things like MythTV and an Asterisk VoIP system, but those have been abandoned for years. I’ve got a web server, but it’s serving static content that’s only viewed by bots and attackers.
My mail server, which has been running for more than two decades, is still in active use.
All of this makes me weird in the self-hosted community.
About a month ago, I put in a beefy system for virtualization with the intent of branching out my self-hosting. I primarily considered Proxmox and xcp-ng. I went with xcp-ng, mainly because it seems to have more enterprise features. I’m early enough in my exploration that switching isn’t a problem.
For those of you further along with a home-lab hypervisor, what did you go with and why? Right now, I’m pretty agnostic. I’m comfortable with xcp-ng but have no problem switching. I’m especially interested in strongly negative opinions of one or the other, so long as you explain why.
I’ve personally used Proxmox in the past because it’s easy to use, and it served pretty well for what I wanted to do (simple services like Headscale, Bitwarden, etc.).
But I’m kind of a noob so you should probably ask more people.
It looks like !selfhosted@lemmy.world is more active and has become the “replacement” for r/selfhosted.
If you post there you’ll probably get more helpful answers.
If you don’t actually want to allow external untrusted people to access your server, why go the VM route? That seems like a huge waste of resources and just complicates things compared to using containers (Podman is best IMHO).
I have no problems with untrusted people accessing resources I intend to be public. A VM provides an extra layer of protection in that scenario, as does a container. I’ve been playing with Lemmy containerized in an xcp-ng VM.
But really, it’s a chance to learn and play with something new.
I mean as in renting out servers (VMs), where untrusted people have full root access.
Ah. Yes, I have no plans to do something like that.
My answer still applies. If there’s a remote code execution exploit that can be used to gain root, compromising a containerized service only gets the attacker root inside that container, and compromising a VM only gets them root inside that VM. Both provide a layer of protection for the underlying OS.
Indeed, VMs are more secure than containers, but they come at a pretty heavy price performance-wise and are also harder to maintain. With Podman you can manage containers just like any other systemd service, which is really convenient.
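To make that concrete, here’s a rough sketch of a Quadlet unit (assumes Podman 4.4 or newer; the `whoami` name and image are just placeholders I picked for illustration, not anything from this thread):

```
# ~/.config/containers/systemd/whoami.container
# Quadlet turns this file into a regular systemd service at daemon-reload time.

[Unit]
Description=Example container managed as a systemd service via Podman

[Container]
# Placeholder image; swap in whatever service you actually self-host.
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
# Restart it like any other long-running systemd unit.
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload` it shows up as `whoami.service`, and you can start, stop, and check logs on it exactly like any other unit.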
Please do add a tag to your post as stated on the sublemmy sidebar! Thank you. :)
I’ve tried all the main “homelab hypervisors” in my lab: VMware, Hyper-V, Proxmox, XCP-NG/XenServer. I always come back to Proxmox because it offers all of the features I need (HA and backups, primarily) in an extremely easy-to-use fashion.
I had a lot of problems getting XCP-NG/Xen Orchestra’s backup process to function correctly.
Proxmox Backup Server just works. It’s the first time in many years of homelabbing/sysadmin work in general where a solution does what it’s supposed to without needing to contact support.
I started with Proxmox in my homelab as a beginner, and the Proxmox forums were an amazing resource for learning everything Proxmox. I decided to stick with it because it was so easy for a beginner like me.
I like Proxmox for home, and XCP-NG for work. I’m just significantly more resource-constrained at the house than at work, so container management in the main interface is nice. At work, everything is a VM with containers on top (when needed).