Understanding a One Box solution (ESXi & NAS)

t3chn0g33k

n00b
Joined
May 22, 2011
Messages
6
Looking for some initial guidance...

End Goal:
One Physical machine with many purposes

What this is being used for:
- Centralizing, limiting footprint, testing, education, mimicking systems I work with

Desired Virtual Systems:
- One physical system that runs 7 VMs (24/7)
- 4 of the VMs will have SQL databases (Windows 2k8 R2 64-bit)
- 1 AD server (Windows 2k8 R2 64-bit)
- 1 Exchange 2k10 server (Windows 2k8 R2 64-bit)
- 1 SuSE Linux server
- Approximately 3 additional guests that will be started from time to time

NOTE: Even though these systems will have databases and will be running constantly, I will most likely only be actively using 1 or 2 of the 7 running VMs, with the others running solely to provide the various services that the system I am actively using may need in order to mimic the work environment.

Additional Functionality of system:
- Remotely access this system when traveling (OpenVPN?)
- NAS for centralized storage of everything (VMs, backups, family info, media, etc...)
- Backup & restore
- Snapshots of VMs from different periods
- Restore system state
- As in, the machine could die and I could boot to a restore interface, pick my backup, and the machine would be completely restored to that point in time

My system will consist of: (May purchase differently depending on feedback)
- Quad-core processor (w/HT support?) (supporting 64-bit & virtualization extensions) (AMD vs. Intel undecided)
- Motherboard (4-8 SATA ports) (16GB memory support minimum)
- 12GB to 16GB memory
- 4x 1TB SATA (6Gb/s) 7200RPM HDDs --> RAID 10
- 2x ??GB for OS (RAID 1), or possibly just boot from a USB drive

The Question:
Will this work (in theory)? Am I going to run into huge performance issues if I try to meet my goal? What additional or more specific information can I provide to help you assist me? Can this be accomplished with one machine running VMware ESXi 4, with a guest acting as a NAS that holds all my VM images and from which they run?
I am familiar with the concepts of VMware and use images every day, but I have nothing to do with the configuration or administration of this technology, only the use of the guests residing there. I have spent a lot of time researching but am still missing what is in front of my face. Thanks in advance...

First post so not sure if I'm giving you all what's needed...
 
Have you read the ZFS all-in-one thread over on the Data Storage Systems board? If you have a motherboard and CPU that support VT-d (AMD-Vi/IOMMU on AMD), this can work with virtually zero overhead (virtualizing the SAN/NAS). I am running a Xeon E3-1230 on a Supermicro X9SCL mobo with 16GB RAM. An OpenIndiana appliance on a local ESXi datastore serves up an NFS datastore back to ESXi, which has all the other VMs on it. Works fine.
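In rough strokes, the plumbing looks like this (the pool/dataset names and the IP here are just placeholders for illustration):

Code:
# On the OpenIndiana storage guest: carve out a dataset and share it over NFS
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# On the ESXi 4 host (console or SSH): mount that share as a datastore
esxcfg-nas -a -o 192.168.1.10 -s /tank/vmstore nfs-vmstore

ESXi then sees nfs-vmstore as a regular datastore and the rest of the VMs live on it.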
 
I have been reading a ton of data on this topic and I've gotta admit (sheepishly) that I haven't researched the ZFS thread. Partially because I didn't think to and partially due to it being even further outside my comfort zone. I'll read up on it tomorrow...thanks for the heads-up and push!

And, thanks for sharing your setup and thoughts....definite help.
 
You're welcome. You certainly don't need to use ZFS for the NAS, but you'd be crazy not to; the features it makes available are really powerful for a NAS.
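For instance, snapshotting and rolling back the whole VM datastore is a couple of commands (the dataset, snapshot, and host names here are just examples):

Code:
# Instant, copy-on-write snapshot of the VM dataset
zfs snapshot tank/vmstore@pre-upgrade

# See what snapshots exist
zfs list -t snapshot

# Roll the entire dataset back to that point in time
zfs rollback tank/vmstore@pre-upgrade

# Or stream it to another box for backup
zfs send tank/vmstore@pre-upgrade | ssh backupbox zfs recv backup/vmstore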
 
Disk I/O is probably one of my biggest concerns. Based on cost and reality, my hope is that by using SATA 6Gb/s 7200RPM drives in a hardware RAID 10 configuration I may still be okay. I currently have a dual-core, 8GB machine with one RAID for the OS and applications and a RAID 10 for my file server and all my VMs. This is a straight Windows 2k8 R2 box with VMware Server running on it. I can run 3 or 4 VMs with little issue. The main issues I see are when I am actively using the host (things like streaming or application use), along with having those 3 VMs running, where I am actively in one that is constantly writing data.

Making it an ESXi box (or a Win 2k8 core install) gives me some resources back, and upping my SATA drives from 3Gb/s to 6Gb/s will hopefully give me the added boost to meet my goal.

My concern (for disk I/O), though, is that I kill any of the gains mentioned above the second I have my NAS virtualized and my VMs running from that virtual NAS.

Still need to read up on the ZFS side of the house though...
 
Is your disk load mostly reads, writes, or both? Also, if you can virtualize the NAS (and I strongly recommend using a ZFS solution), the virtualization overhead is insignificant. To give you an idea, I have six 600GB SATA drives (WD Blue) in a raidz2 pool (ZFS's take on RAID6). Reading a file from this inside the virtualized OpenSolaris NAS, I get 381MB/sec. Writing is about 25% slower, but that is just about what I got on bare metal.
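If it helps, building that kind of pool is a one-liner, something like the below (your device names will differ), and zpool iostat lets you watch the numbers yourself:

Code:
# Six-drive raidz2 pool (double parity, ZFS's RAID6 analogue)
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# Per-vdev bandwidth and IOPS, sampled every 5 seconds
zpool iostat -v tank 5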
 
Both, but more reads than writes typically. I've started playing with FreeNAS and the ESXi apps from USB inside VMware Workstation as I read through these forums, to get a better idea of what questions to ask and where my lack of knowledge lies. Now that you've mentioned RAID6 I'm seeing it in more articles and am itching to play with that configuration. Wouldn't you know it, one of my drives failed today. No big deal, as it was in a mirror, but fine timing to help justify my next step.

Many thanks for some real numbers and stats on what you are seeing; I'm feeling better about the idea, just need to figure out ways to test pieces of it with my current hardware if I can. The ESXi CD boots and installs within VM guests on both my laptop and server, but the moment I try to install on the host itself I get errors, so I need to research some more. Since I know what I want to do, I might just get the new hardware that I know has worked for others and start building. There were some 1TB SATA 7200RPM drives w/32MB cache on sale for $50 yesterday that were pretty tempting....
 
RAID6 is for data protection, not performance. It's horrible for performance, in fact - the write penalty is absurd. ZFS does things a wee bit differently than traditional RAID, but there's no way I'd use RAID6 for SQL unless it's simply an archive that is read from. Especially since MB/s is not what you care about - IOPS are - and all SATA disks are lousy for IOPS.
 
I can't speak to standard RAID6, but ZFS does things more than 'a wee bit differently'. The write penalty you refer to doesn't really exist. In fact, for sequential writes it can be faster than RAID10, due to writing to more spindles. And yes, I know SQL is a random-write job mix - the trick here is that with ZFS, all writing is sequential, due to the copy-on-write nature of the filesystem. In general, I can't disagree with your statement about SATA disks and IOPS :) Where RAID10 wins over RAID5/6 for ZFS is random reads, as ZFS is smart enough to spread the reads around...
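To make the layout difference concrete, here's the same four disks built both ways (device names are just examples):

Code:
# Striped mirrors (the RAID10 equivalent) - best random read IOPS
zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

# raidz2 (RAID6-style double parity) - more usable space, sequential-friendly
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0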
 
I can't speak to standard RAID6, but ZFS does things more than 'a wee bit differently'. The write penalty you refer to doesn't really exist. In fact, for sequential writes it can be faster than RAID10, due to writing to more spindles. And yes, I know SQL is a random-write job mix - the trick here is that with ZFS, all writing is sequential, due to the copy-on-write nature of the filesystem. In general, I can't disagree with your statement about SATA disks and IOPS :) Where RAID10 wins over RAID5/6 for ZFS is random reads, as ZFS is smart enough to spread the reads around...

With standard RAID6 it's roughly a 6x (if I remember my math right, don't have the calc up in front of me) write penalty, which means that for every front-end write IOP you burn about 6 IOPS on the back end to commit it to disk, due to the double parity. In other words, very, very slow.
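The back-end work for a single small random write breaks down like this:

Code:
read old data  + read old parity P + read old parity Q  = 3 reads
write new data + write new P       + write new Q        = 3 writes
                                                  total = 6 back-end I/Os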
 
Understood. And I agree. Just saying that ZFS is a lot better in that respect... ZFS also doesn't have the write-hole issue.
 
Hopefully I'll know firsthand soon enough. You guys have given me some good topics to deep-dive on, and potentially build from once I determine the parts.
 
I was 100% sure that Intel was for me, but after reading, pricing, and comparing, along with looking at a lot of specs posted here, I find myself looking very hard at AMD. Thanks for the info on upcoming chips, as I was only looking at current ones.

Kind of a side note, but same idea: my laptop's (Dell Latitude E6400) SATA HD crashed today and another one is being sent. This is a work laptop, almost 100% used for VMs, so I need every ounce of resources it has to offer. The ESXi USB installer worked fine on this machine, and that has me wondering if I can install ESXi and a NAS on this one drive and keep on trucking. I currently (before the crash) would run Windows 7 and could have 3 VMs running well'ish, even 4 if needed. I probably should just stick with VMware Workstation, but I am ESXi 'happy' right now and want to use it. I am looking at getting a hard drive caddy, so that would give me two drives....

just pondering.....
 
I was 100% sure that Intel was for me, but after reading, pricing, and comparing, along with looking at a lot of specs posted here, I find myself looking very hard at AMD. Thanks for the info on upcoming chips, as I was only looking at current ones.

I built an AMD whitebox myself some time ago. Intel wasn't an option, since the only chipset that supports ECC memory and runs with an i5/i7 is the 3450 chipset (if I'm not mistaken). AMD has ECC support on a wide range of chipsets.

http://communities.intel.com/thread/12334
 