Hey all,
I know this is a bit off-topic but this sub is one of the few where one can talk about technical stuff and not about tech careers lol.
I have been thinking about this a lot lately. Are there use cases where bare metal is better than virtualization? Have you ever encountered a use case where the virtualization overhead was an issue? I'd love it if you shared your experiences. Thanks =)
As someone who deals with this both at home and at work: 10+ years ago there were valid reasons not to use virtualization, but these days there really aren’t – especially at home, unless you enjoy the pain of rebuilding things from scratch (on your own time) when something breaks. Disk I/O is good, GPU passthrough works, USB passthrough works, etc., etc. Maybe very, very edge-case scenarios like mining Chia would still benefit from bare-metal access, but that’s not what we self-host for, right? ;)
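To give a feel for how routine it’s gotten, here’s a minimal sketch of USB passthrough on a libvirt/KVM host (the guest name “nas-vm” and the device IDs below are made up; grab real ones from lsusb):

```sh
# Describe the host USB device to hand to the guest.
# Vendor/product IDs come from `lsusb`; these values are examples only.
cat > usb-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0781'/>
    <product id='0x5583'/>
  </source>
</hostdev>
EOF

# Hot-attach it to a running guest (domain name is hypothetical).
virsh attach-device nas-vm usb-hostdev.xml --live
```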
I’d say it depends. I run BSDs in my setup, with a FreeBSD “development” server running a bunch of jails for projects. I find that easier to maintain than a VM host lately. I just have one more VM to port over to a jail, then I’m retiring my 12-year-old server and looking for a new one.
I have another FreeBSD box running as a file server for the house, where I drop all our media and a copy of important backups.
I kind of use BSD jails like containers. Once I get one working, I can package it, send it to a server, unpackage it, and run it with minimal effort/reconfiguration.
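In practice it’s little more than this (the jail name “projA” and the /usr/jail layout are just my conventions; assumes the jail is stopped and already has an entry in the target’s /etc/jail.conf):

```sh
# Package the jail's filesystem (stop the jail first for a clean copy).
tar -C /usr/jail -czf projA.tgz projA

# Ship it to the new server and unpack it in place.
scp projA.tgz newhost:/usr/jail/
ssh newhost 'tar -C /usr/jail -xzf /usr/jail/projA.tgz'

# Start it on the new host (needs a matching entry in /etc/jail.conf there).
ssh newhost 'service jail start projA'
```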
Sounds complicated, but worth considering!
One big advantage of VMs is better resource allocation. If you run multiple different server types, load leveling is better (fewer idle cores) with VMs. You also have the security implications. These days I tend to run Docker, though, and eliminate the VM overhead even further.
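Rough sketch of what that looks like (the image, name, and port are just examples): containers share the host kernel, so there’s no hypervisor layer at all, but you can still cap resources per service.

```sh
# Run a service as a container with resource limits so one workload
# can't starve the rest of the box (image/name/ports are examples).
docker run -d --name webapp \
  --restart unless-stopped \
  -p 8080:80 \
  --memory 512m --cpus 1.5 \
  nginx:stable
```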
Another major benefit is security. Say someone hacks your web server: from there they’re stuck inside the VM, and assuming you’ve practiced zero trust, compromising that one server is useless against the others. Beyond backups, there’s also maintenance and recovery. If a server has a hardware failure (say one of a couple of PSUs, or a fan), in a VM environment you can just move the VMs over, or set them to auto-boot on a second hardware server. And recovery is simply copying over the VM and booting it, in seconds.
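On a libvirt/KVM host that recovery path is literally a copy plus two commands (the domain “web01”, the host “standby”, and the paths are hypothetical; copy the disk while the VM is shut down, or from a backup):

```sh
# Export the VM definition and copy it, plus the disk image, to the standby host.
virsh dumpxml web01 > web01.xml
scp web01.xml standby:~/
scp /var/lib/libvirt/images/web01.qcow2 standby:/var/lib/libvirt/images/

# Register and boot it on the standby host.
ssh standby 'virsh define ~/web01.xml && virsh start web01'
```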
I ran performance tests years ago on a couple of Dells with VMware vs Xen vs bare metal. What I found is that VMware has better advertising than Xen but basically uses the same software on a buried RHEL host. Performance-wise, on any load, it was something like 99.7% throughput/CPU vs bare metal. So there is a difference, but it comes down to roughly the overhead you’d expect if the virtualization layer were just another process running on the host.
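If you want to reproduce that kind of comparison, the idea is just to run identical loads on both sides and compare the numbers; sysbench and dd below are common stand-ins, not the actual test suite from back then:

```sh
# Run the same tests on bare metal and inside the VM, then compare results.
sysbench cpu --threads=4 --time=60 run                       # CPU throughput (events/sec)
dd if=/dev/zero of=testfile bs=1M count=4096 oflag=direct    # crude disk write throughput
rm testfile
```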
As far as passthrough hardware goes, this has become less and less of a thing. Only a few annoying products “require” bare metal (TrueNAS). Not so much licensing as just stupid implementations. You aren’t “losing the benefits of VMs” though, except perhaps storage-allocation flexibility or sharing a GPU.
Latency-sensitive operations like gaming and downloading Linux torrents. Makes me think Unraid’s pitch to “hard-core gamers” is a joke :p
For self-hosting stuff, I don’t see any requirement to be bare metal. The exception is the router/firewall. If you want to go the software route there, then at least keep a dumb all-in-one router as a spare for when you inevitably break something and have family waiting on you to get the network back up. In an enterprise network, I would 100% keep networking physical, and only keep physical servers for applications that need every ounce of performance.
This…
It’s good to have the router/firewall as its own device. An argument could be made for a NAS too, doing nothing but NAS functions.
For servers I see no point, because in a home environment you can squeeze more out of the system using a hypervisor. Even in an enterprise environment you’re likely thinking about clustering/HA, which will still be utilizing a hypervisor.