Small installations, rPi vs PC-based VM server?

iroc409

[H]ard|Gawd
Joined
Jun 17, 2006
Messages
1,385
For smaller lightweight services, do you prefer to run a lightweight PC-based system with a hypervisor of your choice, or is it just as reliable and efficient to run the services on an rPi? I'm mostly thinking of a batch of small production network services.

I've gone both ways and I'm having trouble deciding which makes more sense. The rPi footprint is certainly appealing, but Proxmox has been pretty solid for me and I could consider virtualizing the router.
 
For actually running a hypervisor, I think I'd rather go with an x86 platform. The apps/tools (e.g., Proxmox, VMware, KVM) are more flexible and mature, there are fewer compatibility problems (e.g., Docker containers not compiled for ARM), and the hardware is just generally more powerful.

If all you wanted was a basic system that could run pretty much any service/daemon needed for home or a similar environment, then an RPi would be more than sufficient.

FWIW, for home I'm currently running Proxmox on a ~5-6 year-old 4-core Xeon. Works really well. About the only negative I can think of is that there's no native Docker support (I've been using LXC containers for most services), but it would be easy enough to just fire up a VM with the needed support.
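If you ever want to script container creation instead of clicking through the web UI, Proxmox exposes a REST API. A rough, untested Python sketch; the host, node name, API token, and template are all placeholders for your own setup:

```python
# Rough sketch, untested: create an LXC container through the Proxmox REST API.
# Host, node name, token, and template below are placeholders, not real values.
import requests

PROXMOX = "https://proxmox.example.lan:8006"  # hypothetical host
HEADERS = {"Authorization": "PVEAPIToken=root@pam!mytoken=xxxxxxxx-xxxx"}  # your API token

resp = requests.post(
    f"{PROXMOX}/api2/json/nodes/pve/lxc",  # 'pve' = node name, adjust to yours
    headers=HEADERS,
    data={
        "vmid": 105,
        "ostemplate": "local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz",
        "hostname": "pihole",
        "memory": 512,
        "net0": "name=eth0,bridge=vmbr0,ip=dhcp",
    },
    verify=False,  # self-signed cert on a home box; fine on a LAN, don't do this elsewhere
)
resp.raise_for_status()
print(resp.json())
```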
 
Depends on the requirements. Most things I run in containers - VMs are too bloated and resource-hungry.
For my edge router, I use a PC Engines box. I don't want my WAN traffic to hit the same physical gear as my other devices. It's mostly safe with passthrough, but not entirely.

For IoT things like my 3D printer (OctoPi) and home automation (HASS), it's Raspberry Pis.

Right tool for the right job.
 
A lot of really great responses in here. A lot of it comes down to personal preference, budget, and how much you want to tinker. There is definitely something sexy (in my mind anyway) about having a few rPis set up and doing simple low-and-slow services: let them handle those workloads and let your main machine do its thing. But there is also something to be said for running a hypervisor and making it all virtual.

A lot comes down to manageability. If you have a farm of rPis, you have to patch them and maintain them all, while making sure they have power and all the other odds and ends (a sketch of one way to script the patching is below). If one of them dies, the service it hosts dies. =( You may also run into packages you want that aren't compiled for your rPi. If you virtualize, you really only have one physical machine to mess with, but you have to maintain a bunch of VMs. However, you gain the ability to easily move those VMs to other physical machines down the road. But now all your eggs are in one basket: if that box dies, you're really screwed.

I've thought a lot about what you're trying to do, and for me it comes down to: how much do I want to mess with it daily, how critical are these services to me, are there compromises in my goals, and how easy is this thing to maintain long term? Maybe painful, but try doing both for a little while until you settle on what works best for you.
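For the patching part, even a dumb SSH loop goes a long way. A rough sketch (untested; assumes key-based SSH, a 'pi' user with passwordless sudo, and made-up hostnames):

```python
# Quick-and-dirty fleet patcher, untested sketch. Assumes key-based SSH
# to each Pi and a 'pi' user with passwordless sudo; hostnames are made up.
import subprocess

PIS = ["octopi.lan", "hass.lan", "dns.lan"]  # your Pi hostnames here

for host in PIS:
    print(f"--- {host} ---")
    result = subprocess.run(
        ["ssh", f"pi@{host}",
         "sudo apt-get update && sudo apt-get -y upgrade"],
        capture_output=True, text=True, timeout=600,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(f"!! {host} failed:\n{result.stderr}")
```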
 
My biggest problem with using Raspberry Pis is SD storage and the performance and reliability problems that come with it. I've had a few cheap Microcenter SD cards fail, and I've also had some filesystem corruption from power disruptions.

That being said, Raspberry Pis are extremely convenient and easy if you need to throw together a quick server for whatever reason. I personally don't want a power-hungry Xeon hypervisor running 24x7 for the few things I might need it for. I recently migrated my home webserver to a cheap Vultr instance.
 
My biggest problem with using Raspberry Pis is SD storage and the performance and reliability problems that come with it. I've had a few cheap Microcenter SD cards fail, and I've also had some filesystem corruption from power disruptions.

That being said, Raspberry Pis are extremely convenient and easy if you need to throw together a quick server for whatever reason. I personally don't want a power-hungry Xeon hypervisor running 24x7 for the few things I might need it for. I recently migrated my home webserver to a cheap Vultr instance.

Damn, good point. I didn't even think about the SD failing part. If your project is mostly memory- and CPU-related, that might be a good fit for your rPis. Maybe use a NAS to serve the data to those rPis? Maybe even remote boot? Outside of that, physical may be your direction.

This sounds crazy, but I'm still rocking my AMD 939 Vishera CPUs from 2010/2011, and I managed to write an entire video streaming platform dedicated to the stuff I DJ using that hardware as my development environment with VMware Player. I'm transcoding essentially two feeds in real time to 1280x720 at ~1 Mbit bitrates, more or less in memory (roughly the kind of thing sketched below). I know I can push it harder if I wanted. In fact, I'm developing my own CDN on that hardware to support my project.

You may not need a monster Xeon box for your needs. You may be able to get away with something way lower in terms of demand, still be just as successful, and at a price point where your wallet doesn't hate you.
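For the curious, the transcoding side is basically two ffmpeg processes running side by side. A rough sketch (untested; the feed URLs and RTMP endpoint are placeholders, not my actual setup):

```python
# Untested sketch of "two real-time 720p transcodes": spawn two ffmpeg
# processes in parallel. Input URLs and RTMP endpoints are placeholders.
import subprocess

FEEDS = [
    ("rtmp://source.lan/live/deck_a", "rtmp://cdn.example/live/deck_a"),
    ("rtmp://source.lan/live/deck_b", "rtmp://cdn.example/live/deck_b"),
]

procs = []
for src, dst in FEEDS:
    procs.append(subprocess.Popen([
        "ffmpeg", "-i", src,
        "-vf", "scale=1280:720",          # 720p output
        "-c:v", "libx264", "-b:v", "1M",  # ~1 Mbit video
        "-c:a", "aac", "-b:a", "128k",
        "-f", "flv", dst,
    ]))

for p in procs:
    p.wait()
```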
 
My longest-running Raspberry Pi is a little VPN gateway at my parents' house; they use it to watch region-locked content from overseas. It's running on a very old Raspberry Pi 1 Model B+.
 
My biggest problem with using Raspberry Pis is SD storage and the performance and reliability problems that come with it. I've had a few cheap Microcenter SD cards fail, and I've also had some filesystem corruption from power disruptions.

That being said, Raspberry Pis are extremely convenient and easy if you need to throw together a quick server for whatever reason. I personally don't want a power-hungry Xeon hypervisor running 24x7 for the few things I might need it for. I recently migrated my home webserver to a cheap Vultr instance.
This is SUPER easy to get around:
1) Boot from a USB drive instead, or
2) netboot / PXE boot
 
This is SUPER easy to get around:
1) Boot from a USB drive instead, or
2) netboot / PXE boot

Up until the Pi4 with USB 3.0, USB performance wasn’t really much better than SD. And I don’t want to run a second server so I can boot the first server.
 
Up until the Pi4 with USB 3.0, USB performance wasn’t really much better than SD. And I don’t want to run a second server so I can boot the first server.
Well, for most tasks with rPis, you don't need wonderful disk performance. And USB drives are very, very much more reliable than SD cards. And since the Pi4 has been out the better part of a year now, I think it's safe to include it in the case for booting from USB.
 
I've had pretty good luck with SanDisk SD/microSD cards (touch wood). SanDisk has a (new?) "High Endurance" series that's supposed to be more reliable, more waterproof, etc.: Amazon Link. Booting from USB pretty much solves all of that, though. I should check that out.

I think my biggest problem, as well, is that with a more powerful Proxmox rig I have practically unlimited power for what I do, while with an SBC I don't necessarily know when I'll run out of CPU, so proper sizing can be an issue.
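One cheap way to take the guesswork out of sizing is to just log headroom for a week and see how close the box actually gets to its limits. A minimal sketch (assumes 'pip install psutil'; untested):

```python
# Minimal sketch: log CPU and memory headroom every minute so you can
# see how close an SBC actually gets to its limits. Needs psutil installed.
import psutil

while True:
    cpu = psutil.cpu_percent(interval=60)  # average over the last minute
    mem = psutil.virtual_memory().percent
    load1, load5, load15 = psutil.getloadavg()
    print(f"cpu={cpu:.0f}% mem={mem:.0f}% load={load1:.2f}/{load5:.2f}/{load15:.2f}")
```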
 
For actually running a hypervisor, I think I'd rather go with an x86 platform. The apps/tools (e.g., Proxmox, VMware, KVM) are more flexible and mature, there are fewer compatibility problems (e.g., Docker containers not compiled for ARM), and the hardware is just generally more powerful.

If all you wanted was a basic system that could run pretty much any service/daemon needed for home or a similar environment, then an RPi would be more than sufficient.

FWIW, for home I'm currently running Proxmox on a ~5-6 year-old 4-core Xeon. Works really well. About the only negative I can think of is that there's no native Docker support (I've been using LXC containers for most services), but it would be easy enough to just fire up a VM with the needed support.
I used to run an RPi as a DNS server with Pi-hole (to save bandwidth; I'm stuck on satellite internet). I also used to use Proxmox. I now just run everything on my old Xeon server (Dell R710, 2x 6-core/12-thread Xeons, so 12 cores/24 threads total) with 96GB of RAM. I just fire up a new Docker container whenever I want to play around with something. It's much simpler than Proxmox and way more responsive than my rPi.

I even tried running OctoPrint on my old rPi and it made my 3D printer stutter horribly; it couldn't even keep up just sending data from the network to the printer. I just throw my gcode files onto an SD card and plug it in now. Slightly less convenient, but it saves me a lot of time (and quality) while printing. I never had the stuttering issue when I ran it directly from my PC, so I know it's not the USB interface.

I paid like $150 for the server, $40 for the new processors (L5640s, as I wanted the low-power variant, not that it really makes much difference), and $100 for the 96GB of RAM (thanks Hard Forum!). I did this in stages, as the CPUs that came with the server worked fine, as did the original 24GB of RAM. Honestly, I didn't even max out the 24GB, but... I wanted more :). Heck, I barely ever hit 10% CPU and 10% RAM usage on this. My CPU spikes to like 15% when I'm transcoding a stream, but normally it just sits around 0-1% most of the time.

I have a couple of RPis laying around but can't think of a real use. Maybe I'll put together an arcade system at some point, or maybe some sort of display for one of my car projects. However neat they are, I haven't been able to find a great use for them. Anything I need for an electronics project, I just grab a microcontroller and program it, since I can meet real-time deadlines much more consistently. Anything that requires a little more CPU, I just fire up a Docker image; it's super easy to configure and change, with plenty of horsepower.
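If you'd rather script that "fire up a container" step than type docker run every time, the Docker SDK for Python makes it a few lines. A small sketch (the image and port mapping are just examples, untested):

```python
# Sketch of the "just fire up a container" workflow with the Docker SDK
# ('pip install docker'); image and port mapping below are examples only.
import docker

client = docker.from_env()
container = client.containers.run(
    "pihole/pihole:latest",  # example image
    detach=True,
    name="pihole",
    ports={"53/udp": 53, "80/tcp": 8080},
    restart_policy={"Name": "unless-stopped"},
)
print(container.short_id)
```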
 