New server E5-2670 Virtualization or not

I finally managed to build a new server with these specs:

E5-2670
32GB ECC RAM
Supermicro X9SRL
Areca 1220 RAID 6 card
20TB disk space

I want to run 2-3 virtual machines
* Plex server
* torrent
* lab / testing

I want to separate Plex and torrent just to make it more secure, but what about the storage: is it possible to divide it between two virtual servers without any problems?

Or is it better/easier just to run everything under one operating system?

I have access to Win 7 / Win 10 / Win Server 2012 and Win Server 2016.
Linux would be an option as well if it is more stable and secure than Windows.
 
If I were you I would pick an OS to start with and learn it in depth.
You will gain experience with what does and does not work at that moment, which may help you in the future as you progress.
I don't have an opinion about stability as long as I'm not on a bare-metal install with multiple users with root/admin access screwing with the box.

You'll also want to build up your networking knowledge.
I would advise you to monitor disk I/O; sometimes a single dedicated SSD is a better solution than days of array test/config/retest/reconfig.
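If you want a quick way to watch that before committing to an array layout, something like this works (a minimal sketch; it assumes the psutil package is installed, and the output format is just my choice):

Code:
import time
import psutil  # assumed installed: pip install psutil

def monitor(interval=1.0):
    # Print per-disk IOPS and throughput once per interval.
    prev = psutil.disk_io_counters(perdisk=True)
    while True:
        time.sleep(interval)
        cur = psutil.disk_io_counters(perdisk=True)
        for disk, now in cur.items():
            before = prev.get(disk)
            if before is None:
                continue
            r_iops = (now.read_count - before.read_count) / interval
            w_iops = (now.write_count - before.write_count) / interval
            r_mb = (now.read_bytes - before.read_bytes) / interval / 1e6
            w_mb = (now.write_bytes - before.write_bytes) / interval / 1e6
            print(f"{disk}: {r_iops:.0f} r/s  {w_iops:.0f} w/s  "
                  f"{r_mb:.1f} MB/s read  {w_mb:.1f} MB/s write")
        prev = cur

if __name__ == "__main__":
    monitor()

Run it while Plex transcodes or a torrent hashes and you'll see pretty quickly whether the array or a single disk is the bottleneck.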
 
Sharing disks between multiple VMs sucks with anything except the very fastest SSD arrays. It would be significantly faster to have a dedicated drive for each VM, even if it's a 5k spinning-rust drive, and then just share the 20TB for non-OS tasks. It's always easy to run everything under one OS, but given your hardware it should run very well under VMs as long as each VM has a dedicated disk.
 
We should have held off on the drive input until he learned firsthand what a drive array is and isn't good for.
I mean, he is asking instead of just trying KVM vs Hyper-V.
OP: build and deploy, gather actual data on your gear.
 
We should have held off on the drive input until he learned firsthand what a drive array is and isn't good for.
I mean, he is asking instead of just trying KVM vs Hyper-V.
OP: build and deploy, gather actual data on your gear.

Possibly, but I have seen far too many people give up and swear against virtualizing ever again as a result of poor performance caused by a terrible drive setup. Might as well save OP the time of figuring that out.
 
Sharing disks between multiple VMs sucks with anything except the very fastest SSD arrays. It would be significantly faster to have a dedicated drive for each VM, even if it's a 5k spinning-rust drive, and then just share the 20TB for non-OS tasks. It's always easy to run everything under one OS, but given your hardware it should run very well under VMs as long as each VM has a dedicated disk.

Huh?

Shared storage was used on all-spinning disk for a long time before SSDs came around, even for virtualization. You have to balance your requirements against your resources; spinners have fewer resources, sure, but that doesn't mean they're useless for a home lab.
 
Possibly, but I have seen far too many people give up and swear against virtualizing ever again as a result of poor performance caused by a terrible drive setup. Might as well save OP the time of figuring that out.
Sure, if you put a set of 5400 RPM drives into a RAID 5 it'll be pretty slow, but that all depends on what you're using it for. If you're learning/testing/etc, performance is far less important than having it, and every enterprise setup out there is based on shared storage.
 
Sure, if you put a set of 5400 RPM drives into a RAID 5 it'll be pretty slow, but that all depends on what you're using it for. If you're learning/testing/etc, performance is far less important than having it, and every enterprise setup out there is based on shared storage.

Maybe I wasn't very clear: having a single hard drive (or possibly an array) dedicated to each VM will be far faster in many cases than sharing a RAID array (even a fast one).
 
Maybe I wasn't very clear: having a single hard drive (or possibly an array) dedicated to each VM will be far faster in many cases than sharing a RAID array (even a fast one).

IOPS are not like space; they cannot be saved for later. Unless every VM is doing something heavy at the same time (most machines idle in the 5-15 IOPS range), you can easily share the spare ones, as long as your simultaneous workloads are not significantly greater than what the array can supply.

And for that matter, "RAID array" is somewhat ambiguous. You can have thousands of drives in an array, even spinning ones, that provide a significant amount of performance, and it's very possible (even easy) to build a high-performance shared storage system out of home parts as well. Modern hypervisors are very good at scheduling simultaneous I/O, which is why even local controllers have significant queue depths.
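To put rough numbers on that, here's a back-of-the-envelope sketch. Every figure in it (spindle count, per-drive IOPS, per-VM load) is an assumption for illustration, not a measurement:

Code:
# Back-of-the-envelope IOPS budget for sharing one array between VMs.
DRIVES = 8                 # spindles in the array (assumed)
IOPS_PER_DRIVE = 75        # rough figure for a 7200 RPM SATA drive
ARRAY_IOPS = DRIVES * IOPS_PER_DRIVE   # read IOPS, ignoring write penalty

vms = {
    "plex":    {"idle": 10, "peak": 150},
    "torrent": {"idle": 15, "peak": 200},
    "lab":     {"idle": 5,  "peak": 100},
}

idle_total = sum(v["idle"] for v in vms.values())
peak_total = sum(v["peak"] for v in vms.values())

print(f"array supplies roughly {ARRAY_IOPS} read IOPS")
print(f"all VMs idling: {idle_total} IOPS -> plenty of spare")
print(f"all VMs peaking at once: {peak_total} IOPS -> "
      f"{'still fits' if peak_total <= ARRAY_IOPS else 'over budget'}")

The point is just that the budget only blows up when everything peaks at the same time, which a Plex/torrent/lab box rarely does.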
 
IOPS are not like space; they cannot be saved for later. Unless every VM is doing something heavy at the same time (most machines idle in the 5-15 IOPS range), you can easily share the spare ones, as long as your simultaneous workloads are not significantly greater than what the array can supply.

And for that matter, "RAID array" is somewhat ambiguous. You can have thousands of drives in an array, even spinning ones, that provide a significant amount of performance, and it's very possible (even easy) to build a high-performance shared storage system out of home parts as well. Modern hypervisors are very good at scheduling simultaneous I/O, which is why even local controllers have significant queue depths.


Of course it's possible; I used to do it with a fairly fast flash array. However, as I found out after much testing, in every case I tried it was almost always faster to have each VM on a separate drive and then just share the RAID array for the useful things, i.e. a download location. And as a disclaimer, I was assuming multiple VMs were active at the same time; obviously if only one is ever active they would have no issue.
 
Then to be utterly blunt, you don't know what you're doing, or you're doing it wrong.

More drives/spindles means better performance for each individual IO stream, assuming there is not a conflict. A single spinning drive will, at best, get you around 250 IOPS, and that's if it's a 15k SAS spinner. That can be exceeded with a set of 7200 RPM drives on a controller with a bit of cache and a decent queue depth.

The math on RAID is well understood at this point, and there's a reason that you're the only one using single individual drives for VMs. I don't know which hypervisor you're using, but I suspect strongly you have something configured very very wrong - I've been in this industry for well over a decade now, and I'm well known in it too. We wouldn't be anywhere near where we are with shared storage if the only way to make it effective and sane was for individual LUNs/Volumes per-VM, or an all-flash array.
 
Then to be utterly blunt, you don't know what you're doing, or you're doing it wrong.

More drives/spindles means better performance for each individual IO stream, assuming there is not a conflict. A single spinning drive will, at best, get you around 250 IOPS, and that's if it's a 15k SAS spinner. That can be exceeded with a set of 7200 RPM drives on a controller with a bit of cache and a decent queue depth.

The math on RAID is well understood at this point, and there's a reason that you're the only one using single individual drives for VMs. I don't know which hypervisor you're using, but I suspect strongly you have something configured very very wrong - I've been in this industry for well over a decade now, and I'm well known in it too. We wouldn't be anywhere near where we are with shared storage if the only way to make it effective and sane was for individual LUNs/Volumes per-VM, or an all-flash array.

OK, this will probably be my last message on this topic because there seems to be a bit of a miscommunication. It could very well be my fault :p but simply put, 2 drives in RAID will get you less than double the IOPS of one. Whether or not the VMs are intensive enough to use anywhere near that amount depends entirely on what you're doing.

And this was done with VMware and Proxmox.
 
So I worked for VMware for a long time.

2 drives in RAID will give differing performance depending on the RAID level. For RAID0, it's double read / double write. For RAID5, it's generally far better read / fractional write, depending on your particular implementation (e.g. a 4+1 set will be 5x read, .2x write), etc. In other words, it all depends.

The VMs don't have anything to do with it - it's all the hypervisor scheduler. Trust me on this - there's almost no one in the world doing single volume/single VM anymore, it's all in designing the storage behind it. :)
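If you want to ballpark it yourself, the usual textbook model is per-drive IOPS times spindle count, with each write costing a penalty of backend IOs per RAID level. A rough sketch (the penalties and per-drive numbers are the standard textbook assumptions; a controller with cache will move them around a lot):

Code:
# Rough RAID IOPS estimator using the common write-penalty model.
# Per-drive IOPS and penalties are textbook approximations, not measurements.
WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def array_iops(drives, iops_per_drive, level, write_fraction=0.3):
    # Each frontend read costs 1 backend IO, each frontend write costs
    # `penalty` backend IOs, so the array's raw IOPS get divided up.
    raw = drives * iops_per_drive
    penalty = WRITE_PENALTY[level]
    return raw / ((1 - write_fraction) + write_fraction * penalty)

print(array_iops(1, 250, "raid0", 0.0))   # single 15k SAS spinner, reads only
print(array_iops(6, 80, "raid6", 0.0))    # 6x 7200 RPM in RAID 6, reads only
print(array_iops(6, 80, "raid6", 0.3))    # same array with 30% small writes

Even with those conservative assumptions, the 7200 RPM set beats the single 15k drive on reads; it's the write penalty on small random writes that eats into it.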
 
OP, so you are getting a lot of input.
Probably shouldn't get too deep into running out and buying a bunch of SAS spindles on a real filer head to compare to an SSD & against the spindles you have. Kinda pointless; you can do that at work.

To clarify, there is a lot of nuance to virtualization; most of it is easy to dive into.
Sadly, a lot of people just go off marketing jargon; then you see a Docker daemon die and take out a monolithic environment, and people start spewing HA/DR/FT concepts incorrectly (they are the same for your given platform), and you go outside to chain smoke.
The initial assumption that carries over into public cloud is treating your workloads as a 1:1 mapping from P2V.
Forget all that for now.

Kinda drives me nuts, as I still run into "decision makers" that see virtualization public/private as either:
A) Applications that can fall into itself, thereby guaranteeing uptime
B) Costing less because Dev can self-deploy infrastructure "instantly" that magically tests/updates/monitors itself... misuse of Terraform comes to mind.

1) Pick the underlying OS you are most comfortable with.
2) Launch a VM.
3) Google yourself a script that will run the CPU to 100%; see what it does as you scale cores and RAM.
4) Google yourself a script that will slam writes and reads to your volume. You really want to pay close attention to on-instance vs mounted volumes here. Probably want to try a few different schemes for your VMs. (Rough sketches for steps 3 and 4 follow after this list.)
5) Now spin up a 2nd instance, install a DB of your choice, throw a load balancer in between the instances, and slam them with writes. Load balance the volume(s).
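For steps 3 and 4, throwaway scripts along these lines are enough (rough sketches; the file path and sizes are placeholders, so point them at whatever volume you're testing):

Code:
import multiprocessing
import os
import time

TEST_FILE = "/mnt/testvol/burn.dat"   # placeholder: point at the volume under test
FILE_SIZE = 2 * 1024**3               # 2 GiB
BLOCK = 1024 * 1024                   # 1 MiB per write/read

def burn_cpu(seconds=60):
    # Busy-loop one core for `seconds`.
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x += 1

def burn_all_cores(seconds=60):
    # Step 3: one busy process per core.
    procs = [multiprocessing.Process(target=burn_cpu, args=(seconds,))
             for _ in range(multiprocessing.cpu_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

def slam_disk():
    # Step 4: sequential write then read of TEST_FILE, reporting MB/s.
    # Note: the read pass may be served from the page cache, so use a
    # file larger than the VM's RAM if you want honest read numbers.
    block = os.urandom(BLOCK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    write_s = time.time() - start
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(BLOCK):
            pass
    read_s = time.time() - start
    mb = FILE_SIZE / 1e6
    print(f"write {mb / write_s:.0f} MB/s, read {mb / read_s:.0f} MB/s")

if __name__ == "__main__":
    burn_all_cores(60)
    slam_disk()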

There, now you've slogged thru the required basic labs that every platform makes everyone do.

Now you can go compare a different platform, separating obsessing over the platform itself from administering your environment.
There's a difference, see what I did there?
 
A lot of input here, even if some went a little off target.

1. If I go with ESXi and virtualization then I'll probably run 2-3 SSDs to start with for the OS.
2. The RAID array and the 20TB is of course for storage and nothing else.
3. Main goal is stability and security, and my requirements aren't that high; I want a secure and stable Plex server, which is the main "server".

Of course I could try out all possible solutions, but that seems irrelevant and nothing I really have time for; it's not like I am running a corporate server.
My current server is running Windows 7 without any "big problems"; the only thing I hate about Windows is that almost all the updates require a reboot.

Another possibility would be running one OS with the Plex server, and when I need a lab machine or torrent I could fire up VirtualBox.
 
A lot of input here, even if some went a little off target.

1. If I go with ESXi and virtualization then I'll probably run 2-3 SSDs to start with for the OS.
2. The RAID array and the 20TB is of course for storage and nothing else.
3. Main goal is stability and security, and my requirements aren't that high; I want a secure and stable Plex server, which is the main "server".

Of course I could try out all possible solutions, but that seems irrelevant and nothing I really have time for; it's not like I am running a corporate server.
My current server is running Windows 7 without any "big problems"; the only thing I hate about Windows is that almost all the updates require a reboot.

Another possibility would be running one OS with the Plex server, and when I need a lab machine or torrent I could fire up VirtualBox.

If you're going ESXi, get a 16-32GB small-footprint thumb drive for the OS. Most modern motherboards will even have a USB port directly on the board for this purpose. For the RAID, make sure you have a real RAID card, and not softraid.
 
For my home setup I went this route:
HP Microserver Gen8 running Hyper-V with 2 DCs and a Plex server
Norco ZFS box running OpenIndiana/napp-it on bare metal.

I've played with all-in-one/passthrough in the past, but it was more hassle than it's worth. If I need more compute I can set up an ESXi/HyperV box with minimal storage and provision space on the ZFS box for iSCSI.
 
So how much of the server's resources will I lose if I run Win Server 2016 Hyper-V instead of ESXi?
 
So how much of the server's resources will I lose if I run Win Server 2016 Hyper-V instead of ESXi?
I think that you'll gain in memory usage with Server 2016, especially because of Dynamic Memory.
I have 5 VMs now with 28GB RAM and an i5 2400, so your E5 will be fine. And I also use Plex.
For home you don't need to worry about utilization of RAID :)
RAID 6 will work fine for that.
 
So how much of the server's resources will I lose if I run Win Server 2016 Hyper-V instead of ESXi?

Depends on a couple of million things. You may get more, you may get less, it all depends on the workloads, configuration, hardware, and phase of the moon. Both are great hypervisors.
 
Possibly, but I have seen far too many people give up and swear against virtualizing ever again as a result of poor performance caused by a terrible drive setup. Might as well save OP the time of figuring that out.

I've been running multiple VMs on a single SSD for AGES.

IOPS on an SSD are an order of magnitude or more better than spinning rust. With all that being said, I mostly run low-IOPS VMs as well, and mostly on Linux (a quick way to measure the difference follows after this list):
DNS server
web server
mail server
etc...
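If you want to see that gap yourself, a crude 4K random-read test like this will do it; run it once against a file on the SSD and once against the spinners. The path is a placeholder, and the test file should be bigger than RAM so the page cache doesn't flatter the result:

Code:
import os
import random
import time

TEST_FILE = "/mnt/testvol/iops.dat"   # placeholder: a pre-created file larger than RAM
BLOCK = 4096
SECONDS = 10

def random_read_iops():
    # Hammer the file with 4K reads at random offsets and count them.
    size = os.path.getsize(TEST_FILE)
    blocks = size // BLOCK
    ops = 0
    end = time.time() + SECONDS
    with open(TEST_FILE, "rb", buffering=0) as f:
        while time.time() < end:
            f.seek(random.randrange(blocks) * BLOCK)
            f.read(BLOCK)
            ops += 1
    print(f"{ops / SECONDS:.0f} random 4K read IOPS")

if __name__ == "__main__":
    random_read_iops()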
 