Need help deciding if I should go fully virtual

rpeters83

I currently have a Windows Server 2012 box on an older Core 2 Duo with 8 GB of DDR2. On it I host a couple of Linux VMs (Plex, ownCloud, and SpamAssassin, primarily, but I fire up others for development every now and then). The host itself also serves IIS/.NET, TFS, FTP, MySQL, and media file sharing.

Recently I've run into issues upgrading parts of the host (such as TFS from 2010 to 2013) and wished I could either roll back cleanly or rebuild from scratch, with the latter being a problem since I only have one physical machine.

I'm looking into whether going fully virtual would be of benefit, and what I would lose out on. If I were to wipe the server and start fresh with either Hyper-V Server or something like ESXi, installing a Windows Server VM for .NET development plus a couple of other VMs, what would the cons of that kind of setup be? So far the main benefits I see are that I can snapshot/roll back and, if necessary, rebuild a server without affecting the original.
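For context, the checkpoint/rollback workflow I'm picturing is roughly this (a minimal sketch assuming Hyper-V Server and its stock PowerShell cmdlets; the Python wrapper and the VM name are just for illustration):

```python
import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command and return its output (assumes the Hyper-V module is installed)."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

vm = "dev-tfs"  # hypothetical VM name, just for illustration

# Take a checkpoint before a risky upgrade (e.g. TFS 2010 -> 2013)...
ps(f'Checkpoint-VM -Name "{vm}" -SnapshotName "pre-tfs-upgrade"')

# ...and roll back if the upgrade goes sideways.
ps(f'Restore-VMSnapshot -VMName "{vm}" -Name "pre-tfs-upgrade" -Confirm:$false')
```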

How much would performance suffer? Keep in mind this is an older CPU, so it lacks some of the newer virtualization extensions (I forget the name).

This may be a topic for a different thread, but for those who have a similar, simple setup, is one host platform preferred over the others?

Thanks.
 
Since you only have the one machine, going fully virtual seems like the better option, IMO. The host OS will eat up resources whenever it's also running VMs, so it seems easier to have every OS be a VM and only run the ones you need at a given time, especially with just a dual core.
You are asking a lot of that poor little machine, though! :D
 
I used to run ESXi on an old Core 2 Duo with 8 gigs of RAM and a similar workload. I ran out of RAM way before I ran out of CPU.
 
Also, does anyone see a benefit to splitting IIS/MSSQL/TFS/MySQL out onto their own VMs (e.g., SQL Server on one Windows Server guest, a web server on another, etc.) instead of having them all on one box like I do now? It makes sense from a maintainability standpoint, though I'm sure it would use more RAM, and maybe more CPU?
 
If you want to pass hardware devices through to a specific VM (such as HBAs or NICs), your CPU/motherboard must support VT-d. I'm not sure how prevalent that was in the C2D days. My Supermicro server board supported it, and I passed through an HBA and a NIC to access hard drives and share media.

Overall, it worked well, but I found out (and you might too) that 8 GB was way too small, since I could only run a few VMs before running out of RAM.
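As for whether that old C2D supports any of this: booting a Linux live USB on it and looking at the CPU flags will tell you most of the story. A rough sketch (VT-d also depends on the chipset and BIOS, so treat a negative result with some suspicion):

```python
import os

# Look for the virtualization-related tokens in /proc/cpuinfo.
# Newer kernels list EPT under a separate "vmx flags" line; reading the
# whole file and splitting on whitespace catches it either way.
with open("/proc/cpuinfo") as f:
    tokens = f.read().split()

print("VT-x (vmx) :", "vmx" in tokens)   # basic hardware virtualization
print("EPT / SLAT :", "ept" in tokens)   # second-level address translation

# /sys/class/iommu has entries only when the IOMMU (VT-d) is present and active.
try:
    iommu = os.listdir("/sys/class/iommu")
except FileNotFoundError:
    iommu = []
print("VT-d/IOMMU :", iommu if iommu else
      "not active (may just be disabled in the BIOS or not enabled on the kernel command line)")
```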
 
If you split up the services, it will use more RAM, since each installation of Windows has its own RAM overhead. I don't think CPU would be affected much. My Windows domain controller VM does nothing but be the DC, and almost all of the time it's hovering at only 20 MHz or so.
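To put a very rough number on that overhead, here's a back-of-the-envelope budget. Every figure below is a placeholder guess for illustration, not a measurement, so plug in your own:

```python
# Rough RAM budget if each service gets its own guest.
# All numbers are placeholder estimates; adjust them for your actual workloads.
guests_gb = {
    "web (IIS/.NET)":        1.5,  # idle Server 2012 plus small app pools
    "sql (MSSQL + MySQL)":   2.5,
    "tfs":                   1.5,
    "plex (Linux)":          1.0,
    "owncloud/spamassassin": 0.5,
}
host_reserve_gb = 1.0  # hypervisor / host overhead

total = host_reserve_gb + sum(guests_gb.values())
print(f"Estimated commitment: {total:.1f} GB of 8 GB")
for name, gb in guests_gb.items():
    print(f"  {name:<24}{gb:.1f} GB")
```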
 
I virtualized my Server 2008 box on ESXi, and after realizing that running on the hypervisor didn't give me native OS caching, I went back to running Server 2008 on bare metal.

My disk performance dropped so much while virtualized that I would only do it again if I used ZFS.
 
Also, does anyone see a benefit to splitting IIS/MSSQL/TFS/MySQL out onto their own VMs (e.g., SQL Server on one Windows Server guest, a web server on another, etc.) instead of having them all on one box like I do now? It makes sense from a maintainability standpoint, though I'm sure it would use more RAM, and maybe more CPU?

It will most definitely use more RAM. However, you are right that from the maintenance standpoint, it will make life easier. What is the maximum RAM that your system will support?

I've got a server at work with very similar specs. RAM is the biggest issue: it averages around 10% CPU usage, but it's almost always using 90%+ of its RAM.
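If you're not sure off-hand, Windows will usually tell you what the board claims it supports. A small sketch using the stock wmic tool (some boards under-report this value, so cross-check with the motherboard manual):

```python
import subprocess

# Win32_PhysicalMemoryArray reports the board's claimed max memory (in KB)
# and the number of DIMM slots.
out = subprocess.check_output(
    ["wmic", "memphysical", "get", "MaxCapacity,MemoryDevices", "/value"],
    text=True,
)
info = dict(pair.split("=", 1) for pair in out.split() if "=" in pair)

print("DIMM slots  :", info["MemoryDevices"])
print("Max capacity:", int(info["MaxCapacity"]) // (1024 * 1024), "GB")
```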
 
What is native OS caching and how does virtualizing not support the feature?

When you install a modern-day operating system on bare metal, it will use IO caching to make certain workloads faster. IO caching happens at multiple levels: the application, the OS, and the controller. It's exactly what it sounds like: a cache that buffers IO to maximize your performance when reading from and writing to disk.

I've done some simple tests, such as creating a Windows software RAID 5 array. When writing to it, the OS will store the data in available memory and tell you the job completed (reporting the write at 200+ MB/sec), but even after the job is "done" you will still see the HD light blinking as it commits the data to disk. However, when you run Windows on a hypervisor, ESXi cannot guarantee it the bare-metal resources and does not do that kind of IO caching at all. What this means is my write speed dropped to around 30 MB/sec. ESXi makes sure my data gets written the slow and steady way, by writing it directly to disk.
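You can see the same effect at the file level without any RAID involved: a plain buffered write gets absorbed by the OS cache, while forcing a flush shows the disk's real speed. A minimal sketch (the path and sizes are arbitrary, and the numbers will vary wildly with hardware):

```python
import os, time

PATH = "cache_test.bin"        # scratch file, ~256 MiB total
chunk = os.urandom(1 << 20)    # 1 MiB of random data

def write_mib_per_sec(force_flush: bool) -> float:
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(256):
            f.write(chunk)
        if force_flush:
            f.flush()
            os.fsync(f.fileno())  # make the OS actually commit the data to disk
    return 256 / (time.time() - start)

print("buffered (write-back cache):", round(write_mib_per_sec(False)), "MiB/s")
print("flushed  (really on disk)  :", round(write_mib_per_sec(True)), "MiB/s")
os.remove(PATH)
```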

However, ZFS operates in a completely different manner. ZFS handles everything at the application level and does not rely on OS- or controller-level caching, so it still offers the same performance virtualized as it does on bare metal.
 
Is this Windows-specific? It sounds really strange to me: if the guest OS uses RAM for a filesystem cache, I don't see how it would know or care that it's virtual. Also, "ZFS handles everything at the application level and does not rely on OS- or controller-level caching"? What on earth are you talking about?
 