Help me clear up some things regarding my planned virtualisation build

I'm currently eyeing a fully virtualised setup.

I'm thinking about allocating resources like this:
CPU: https://www.sabrepc.com/p-3686-amd...-6366-he-18ghz-abu-dhabi-server-cpu-tray.aspx
2 cores: hypervisor
4 cores: Linux Handbrake encoding VM
2 cores: Linux NAS + SMB/CIFS + nothing else
2 cores: pfSense firewall
4 cores: Linux Apache server + MySQL + Ruby on Rails
2 cores: HTPC + GPU - probably needs some sort of passthrough + probably a dedicated Linux XBMC setup

Random questions, bear with me.

For the HTPC - if I run the GPU output from the box hosting the virtual HTPC, will monitors or projectors hooked up to it work as if it were a single-purpose box?

For the pfSense firewall - I only need two dedicated NICs, right? One for WAN in, one for LAN out.

That would mean all the other virtual instances communicate over a single physical gigabit port - will this work?

In all honesty, I think I'll just end up building a more redundant fileserver to act as the filesystem for the virtualisation box... and probably scale it back to an 8-core AMD setup... but I do want to rock a Supermicro board and some ECC RAM to make it as robust as I can.

The only thing giving me the jeepers is the design and setup, as I don't want to blow cash on cool hardware only to realise I didn't think it through properly and end up with hardware that won't do what I want, or find out I didn't get what I should have.
 
Why do you think you need a 16-core processor if you aren't going to be using all of the VMs simultaneously? The whole point of virtualization is to reduce hardware and power costs.

Using a bare-metal hypervisor (ESXi) and expecting video output from a VM (the HTPC) means having two video cards: one for ESXi and one passed through to the HTPC VM.

It sounds like you have grossly overestimated your processor needs. If you do set up your system the way you intend, your disk drives are going to be what holds you back.

If your intention is to have one box, you would be better off with 8 cores, 32GB+ of memory, one array of disks (non-RAID) passed through to a NAS running ZFS (for the encoding and the Apache VM), a hardware RAID array for the rest, a small boot drive, and an SSD for ESXi swap.
 
I noticed that adding up the cores gives 16 - was there a particular reason for this?

You don't need to dedicate cores to VMs. Let's say you have a 4-core system. You can have more than 4 VMs. I could, for example, have 10 VMs, each allocated 2 vCPUs. The encoding VM seems to be about the only one that will really burn through CPU time.

So that CPU, while it's nice that it has 16 cores, isn't necessary for your design. I might go with a CPU with fewer but faster cores. That should give you a lot more options.
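
To put a number on that example (4 physical cores, 10 VMs at 2 vCPUs each), here's a quick Python sketch of the overcommit ratio; it's just the arithmetic from the paragraph above, with nothing vendor-specific assumed:

# Rough back-of-the-envelope sketch of vCPU overcommit (numbers are purely
# illustrative): total vCPUs assigned can exceed the physical core count,
# because the hypervisor time-slices them; only sustained load matters.

physical_cores = 4
vm_count = 10            # hypothetical VM count
vcpus_per_vm = 2         # hypothetical allocation per VM

total_vcpus = vm_count * vcpus_per_vm
overcommit = total_vcpus / physical_cores

print(f"{total_vcpus} vCPUs on {physical_cores} physical cores "
      f"-> {overcommit:.1f}:1 overcommit")
# 20 vCPUs on 4 physical cores -> 5.0:1 overcommit, which is usually fine
# when only one VM (the encoder here) pegs its vCPUs for long stretches.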
 
My understanding of virtualisation is still in its infancy, it seems.

I figured I'd dedicate cores to the VMs so that there was no question each VM would have the resources it needed. For example, if there's heavy traffic on the LAN, I don't want the pfSense VM to be starved for CPU time while the other VMs are running full bore.

The more I think about it, the more I need to figure out the disk subsystem first. How do you guys do it? I don't quite understand the drive pass-through thing.

I know I need a disk for the hypervisor, and some sort of storage for the VMs, probably virtualised, but I'm not sure how to conceptualise it just yet.

Is there a performance hit if, say, a few VMs are all communicating over the same physical gigabit port?
 

You can set up resource pools to guarantee a minimum number of MHz to each VM at all times, or use shares to prioritize which VMs get more pCPU time only when there is actually contention going on.

You definitely do not need to dedicate a core to each VM vCPU. As stated above, spend less on a faster processor with fewer cores.

As far as the storage piece, I'm running on the assumption that this is all going to be on local storage, since it's a one-host setup, right? After you load ESXi on bare metal, the remaining space will make itself available as a VMFS datastore that you can store virtual machine files on.
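
Here's a toy model of that reservation/shares behaviour in plain Python (not the ESXi API; every MHz and share figure is invented for illustration), just to show why shares only matter once the host is actually saturated:

# Toy model (not ESXi code) of reservations vs. shares; all numbers below
# are made up purely for illustration.
host_mhz = 4 * 3300  # hypothetical 4-core host at 3.3 GHz

# name: (reservation_mhz, shares, current_demand_mhz)
vms = {
    "pfsense": (500, 2000, 600),     # small demand, guaranteed floor, high shares
    "encoder": (0,   1000, 12000),   # wants far more than its fair slice
    "web":     (0,   1000, 3000),
}

def allocate(host_mhz, vms):
    # 1. every VM first gets min(demand, reservation) -- the guaranteed floor
    alloc = {n: min(d, r) for n, (r, s, d) in vms.items()}
    spare = host_mhz - sum(alloc.values())
    # 2. spare MHz is handed out in proportion to shares, repeating until the
    #    spare is exhausted or every VM's demand is satisfied
    while spare > 1:
        hungry = {n: vms[n][2] - alloc[n] for n in vms if vms[n][2] - alloc[n] > 1}
        if not hungry:
            break
        total_shares = sum(vms[n][1] for n in hungry)
        handed_out = 0
        for n, want in hungry.items():
            grant = min(want, spare * vms[n][1] / total_shares)
            alloc[n] += grant
            handed_out += grant
        spare -= handed_out
    return alloc

for name, mhz in allocate(host_mhz, vms).items():
    print(f"{name:8s} ~{mhz:5.0f} MHz")
# pfsense still gets the full 600 MHz it asked for; the encoder is the only
# VM that gets squeezed, and only because the host is saturated.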
 
I'm planning a similar setup, and I agree a faster 8-core CPU would probably serve you much better.

I've got a question about the post above saying he would need a GPU for ESXi and one to pass through to the HTPC VM. After the initial setup, wouldn't ESXi be running headless anyway? Why does it need a GPU of its own? All of the other VMs are headless, and ESXi won't need much but maintenance once everything is set up. Doesn't ESXi have SSH or Telnet or something for remote management? Isn't there a specific VMware product to manage the hypervisor from another system? So why exactly does ESXi need its own GPU? Would it not run on a system without a GPU? If it is absolutely necessary, would one of those USB video cards (USB to HDMI) work as the ESXi card, or would a PCI video card be the minimum?
 

VMware vCenter or the vSphere Client would be the remote application you're referring to; you can manage everything from it on a separate machine. I believe there are ways of using Telnet/SSH as well, I just haven't gone down that road yet.

I believe the virtualization layer requires video hardware on the host to present virtualized hardware to the VMs. That's why, if you pass the video card through to your HTPC, there won't be any video hardware left on the host for the virtualization layer to provide to the remaining VMs that aren't using the passed-through hardware. I'm sure others will have a better explanation than I can provide. :)

As for using the USB-to-HDMI hardware, I'm not sure. I've only used PCI & PCIe cards with hypervisors. I'd assume hardware compatibility might be a challenge with hypervisor drivers. In theory, if you just pass the USB ports through to the HTPC VM, you may have better luck installing the drivers on the VM itself.
 
Does pfSense really need 2 cores? I don't know how much traffic you're pushing, but I was sitting pretty on a 1.2GHz Celeron in a house full of torrenters / gamers. Ran out of bandwidth way before maxing out the proc.
 
pfSense definitely doesn't need more than 1 core. I've run a 40/20 internet connection on a P3 with 512MB of RAM just fine (no Snort or anything, though).
 
>2 cores hypervisor
In the case of ESXi, around 200-500MHz will be more than enough. And you can't dedicate cores to the hypervisor anyway; the hypervisor itself doesn't use many resources.
>4 cores linux handbrake encoding vm
That's OK, but I have a 2-core encoding VM (1080p -> lower quality for when I want to watch a film away from home) which works perfectly.
>2 cores linux nas + smb + cifs + nothing else
Overkill, unless you have an insane load on the fileshare.
>2 cores pfsense firewall
Even my 800MHz MIPSBE MikroTik can handle 120Mbps of NAT with 4,000-15,000 connections plus some VPNs without problems...
>4 cores linux apache server + mysql + ruby on rails
How many clients? This is more like a high-end webhost/VDS/etc...
>2 cores htpc + gpu - probably need some sort of passthrough + linux dedicated xbmc setup probably
Yes, an IOMMU (VT-d on Intel, AMD-Vi on AMD) is needed to pass PCIe devices through, and it will work in that case. Are two cores really needed, though? With XBMC, almost all of the load will be on the GPU.

Overall, your setup looks a bit overkill. As already said, virtualization is about minimizing power consumption and the like, and I doubt you will use all of that at once. I run almost the same setup (except XBMC, which runs on a Raspberry Pi) on a 4-core Xeon E3 without any problems. With encoding I see a load of 6-7GHz, but typically it's around 1-2GHz. Even if you fire off all the tasks at once, the hypervisor will take care of distributing CPU among the VMs so everything can work flawlessly.
Oh, and finally - these AMD CPUs are power hungry and generate a LOT of heat. I have a 1U server with 2x 16-core Opterons at work and that bastard is extremely noisy: fans always at full speed, CPUs always warm... well, unless you have a basement or somewhere similar to put your server, this is definitely a bad idea for a home box.
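
If you want to sanity-check that with the same aggregate-GHz arithmetic, here's a quick sketch; the per-VM figures are rough guesses for illustration, not measurements from a real host:

# Back-of-the-envelope headroom check (all figures are rough assumptions):
# compare the host's aggregate CPU budget to the sum of typical VM demand.

cores, clock_ghz = 4, 3.3            # hypothetical Xeon E3-class host
budget = cores * clock_ghz           # ~13.2 GHz of aggregate CPU

# typical sustained demand per VM, in GHz (guesses, not measurements)
demand = {
    "encoding (peak)": 6.5,
    "web + db":        1.0,
    "nas":             0.5,
    "pfsense":         0.3,
    "xbmc/htpc":       0.5,
    "hypervisor":      0.4,
}

total = sum(demand.values())
print(f"budget {budget:.1f} GHz, worst-case demand {total:.1f} GHz, "
      f"headroom {budget - total:.1f} GHz")
# Even with everything firing at once there is headroom, and outside of an
# encode the demand drops to a couple of GHz.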
 
Get away from the typical processors you would use for a bare-metal install. I run most of my servers with 1 virtual core.

Heck, on my little host I have 10 VMs, with 12 vCores assigned in total, on a dual-core host.
 
Thanks for the feedback, guys.

I definitely did go overboard on cores.
 
See, this is why you guys are legit.

I'm actually getting pretty excited about this BD setup.
 
I would personally get a Norco RPC-4220... but I share a LOT of files and love knowing I will probably never have to worry about running out of HD slots.
Also, are you sure this motherboard will work with ESXi?
Please update and let us know how it goes. I'm thinking about upgrading, and if it all works out for you I might just order the same thing.
 