Hardware for VMware home lab and NAS

Ghost26
Hi all

I finally want to jump into ZFS for my personal storage.

The same server will run a VMware home lab for studying toward VMware and Microsoft certifications.

I want 200 MB/s+ of storage performance. The box will also serve as a home media server, workstation and VM backup server, Time Machine backup server, and DNS/DHCP/FTP/web server, all while running the lab VMs. So I need something reliable and fast.

What hardware would you recommend while staying at a decent price?

I was thinking of a SuperMicro motherboard on socket LGA2011 (E5-1620 v2) with 32 GB of Kingston ECC RAM.

Is a quad-core enough for everything, or would a 6-core not be too much?

I need something like 8 TB of usable storage for now.

I think a quad-port Intel i350 NIC would be very nice for my setup; I could build a LAG with it.
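Roughly what I have in mind on the 2960S side; if I understand correctly, a standard vSwitch only supports a static EtherChannel with the "Route based on IP hash" policy (no LACP without a distributed switch), so the channel-group would be static, something like:

interface range GigabitEthernet1/0/1 - 4
 channel-group 1 mode on

(Port-channel1 gets created automatically when the first member port joins; the port numbers are just an example.)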

It can be rackmounted (I'm also looking at a wall-mountable 9/12U cabinet like this one: http://www.tripplite.com/en/products/model.cfm?txtModelID=4266 to hold the server, a Cisco 2811 router, and a Cisco 2960S-24TS-L switch).

Since my storage capacity is relatively small, I don't think I need a dedicated RAID/SAS controller.

I also thought about going with Dell PowerEdge servers. Are those good for this usage? I must admit, though, that I'm a lot more interested in building this server myself than buying a pre-assembled one.

I'm a PC builder, but I've never built a server-grade machine, which is why I need your help. Budget is around $2000.

Thank you very much for your help.
 
ESXi lab + ZFS NAS = I would go All-In-One

- use a case (Norco, SuperMicro, Chenbro, etc.) with a backplane
- buy a server board from SuperMicro with a 4- or 6-core Xeon and
at least 16 GB ECC RAM (RAM is more important than CPU);
for up to $2000 you may think of a SuperMicro X9SRH-7TF,
where you can use up to 512 GB RAM with a high-end LSI SAS controller
and a dual Intel 10 GbE NIC onboard
- cheaper alternative: SuperMicro X10SL7-F

- you need a dedicated SAS controller for an All-In-One (onboard or add-in card);
use an LSI HBA all the way (or a rebrand like the IBM M1015); see the pool sketch below.
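As a rough sketch of the pool side once the HBA is passed through to the storage VM (disk names are placeholders; e.g. six 3 TB disks in RAIDZ2 land comfortably above your 8 TB usable target):

# create a RAIDZ2 pool over the six disks on the passed-through HBA
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# LZ4 compression is nearly free and usually helps throughput
zfs set compression=lz4 tank

# one filesystem shared back to ESXi over NFS, one for media over SMB
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
zfs create tank/media
zfs set sharesmb=on tank/media

ESXi then mounts tank/vmstore as an NFS datastore and the lab VMs live there.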

Dell or HP servers are OK if you avoid their SAS controllers, because
they are often not reflashable to a raidless mode.

some more info about All-In-One:
http://www.napp-it.org/doc/downloads/all-in-one.pdf
http://napp-it.org/manuals/hardware_en.html
 
Many thanks for your input.

HBAs are better than RAID cards? RAID cards often have onboard memory. Would the system be faster with this onboard cache?

My switch is only Gigabit but can do 802.3ad.

So the main question is whether to go with 4 or 6 cores. I don't really know if the two additional cores would be worthwhile, since the server will handle storage while virtualizing multiple OSes.
 
More cores are better... but they cost more, obviously. We can't say if you NEED them because we don't have enough sizing information to go by. Storage doesn't use a lot of CPU, so that, to me, isn't a big deal.
 
Do you have a budget?

Budget is around $2000.

Well, I'm totally in the dark because I don't know what the requirements are for a decent lab for VMware certifications. I'm currently a student; my employer told me he would pay for every exam I pass, but I will have to pay for the lab to study for the certifications.

Maybe for the VMs I would get a 1 TB Samsung 840 or a 960 GB Crucial M500 for fast storage. I hate waiting, definitely :p

I thought ZFS used a lot of CPU time for checksums and parity calculations.
 
If you need a setup for certifications, then just get a 128 GB SSD (or larger, if desired), 16 GB of RAM (hopefully you already have that), and a VMware Workstation license (there is a trial version too), and load Autolab on a system. Done, and on the cheap. You don't NEED an SSD, however it sure helps. I'll even help you deploy it if you need help.
 
But I need vSphere and ESXi to pass the vSphere exams :p

I can virtualize ESXi, but I don't think it's a very nice setup to virtualize a hypervisor :p

The company I work for is growing. They want to offer virtualization services, and vSphere definitely interests them.
 
You do run vSphere in Autolab. It's the 60-day trial version, but it's still vSphere.
 
A T320 will do all of this for you; even a T110 II should meet your requirements. I bought a T320 last year and it runs my home ESXi environment just fine, however it's not super cheap. I got it through the Dell rep I was dealing with, so I did OK there. Building server hardware yourself is boring and not that interesting; it's typically no different from workstation hardware, except it's stuffed into a rack chassis and more expensive.

The free ESXi will get you running, and the 60-day trial license for vCenter should give you enough time to get familiar with it.

HBAs and RAID cards do very different things. HBAs, in my experience, are for connecting Fibre Channel devices in SANs, which in a home setup isn't going to happen. A RAID card is a must if you want to run a RAID 10 for performance or anything; the Dell H310/H710 options work fine, just grab the Dell ESXi 5.5 ISO and load that.

For CPU, don't bother going high end; my vSphere environment at work, in production, averages 10% utilization. Go for RAM and fast disk, you'll get way more out of it. For a home lab just for testing I would go with the SSD suggestion, but if you're going to keep using it afterward for anything, figure out your use case before you buy drives. I have 8 WD Blacks stuffed in mine in a RAID 10 for space/speed.

Also, just to make sure: you do know you have to attend a vSphere training course in order to write the exam, correct? This isn't like a Microsoft exam where you can just go pay and write it. I don't want you to get blindsided by that.

Edit: A T110 II like these, with nothing but an H200 RAID controller, would easily fit the bill for running a learning environment. You could easily add more RAM and hard drives after the fact, as Dell servers have never given me any grief about third-party RAM and drives. http://www.dell.com/ca/business/p/poweredge-t110-2/fs
 
Well, I don't know a lot about vSphere training. My employer just told me he would pay for the certifications and I would only have to pay for the lab to practice on. I haven't gotten any information yet about how to take the courses.

I didn't know about Autolab. It seems nice, but I'd prefer to do it the "right" way, on physical hardware.

I think I might go with an Ivy Bridge-E quad-core, just to keep the option of adding more RAM later: a Xeon E5-1620 v2 with 32 GB of RAM to start.
 
You might also look into the videos for learning. They are really great!
 
HBAs and RAID cards do very different things. HBAs, in my experience, are for connecting Fibre Channel devices in SANs, which in a home setup isn't going to happen. A RAID card is a must if you want to run a RAID 10 for performance or anything; the Dell H310/H710 options work fine, just grab the Dell ESXi 5.5 ISO and load that.

HBAs are a more general piece of hardware. They are used to connect disks and present them to the OS without RAID functionality, similar to onboard SATA in AHCI mode.

If you look at the LSI HBAs (they are also used by Dell, HP, IBM and others), you can run the controller with an IT firmware (completely raidless) or with an IR firmware (basic RAID 0/1/10), together with different drivers.

This option to reflash the HBA is common on LSI, but some of the Dell and other second-source cards lack this feature even when using the same base hardware. This is the reason for the success of the IBM M1015, which can be reflashed to an LSI 9211 (IR or IT firmware). It costs less than half of the LSI.

The main reason you should care about this feature is your configuration. If you need a RAID for Windows or for the local ESXi datastore, you can use any supported RAID card. If you intend to go the ZFS + SAN route, you should not use anything beside an LSI 9207, an LSI 9211 in IT mode, or a card that can be reflashed to them.
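If you go the IBM M1015 route, the crossflash to the LSI 9211-8i IT firmware looks roughly like this, from memory of the usual guides (done from a DOS or EFI boot stick; check a current guide and note the SAS address on the card's sticker first):

megarec -writesbr 0 sbrempty.bin          # clear the IBM SBR
megarec -cleanflash 0                     # wipe the old flash, then reboot
sas2flsh -o -f 2118it.bin -b mptsas2.rom  # flash the 9211-8i IT firmware + boot BIOS
sas2flsh -o -sasadd 500605bxxxxxxxxx      # restore the SAS address from the sticker

The boot BIOS (-b mptsas2.rom) is optional if you never boot from disks on that controller, and skipping it shortens POST.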

You do not need a RAID cache with battery; it is useless for ZFS. You should have plenty of RAM for better performance (all RAM beyond about 2 GB is used to improve read performance). CPU is not as critical. My home machine uses a 17W dual-core Xeon and that's OK with some VMs on SSDs and 32 GB ECC RAM (8 GB or more for ZFS).
 
<snipped for brevity>
Also, just to make sure, you do know you have to attend a vSphere training course in order to write the exam correct? This isn't like a Microsoft exam where you can just go pay and write it. I don't want you to get blindsided by that.
</snipped>

The above is a good point, as it may change your budget.
The mandatory course costs in the thousands of dollars, while the exam costs in the hundreds.
So is your employer's offer for the course, the exam, or both?


Well, I don't know a lot about vSphere training. My employer just told me he would pay for the certifications and I would only have to pay for the lab to practice on. I haven't gotten any information yet about how to take the courses.

I didn't know about Autolab. It seems nice, but I'd prefer to do it the "right" way, on physical hardware.

I think I might go with an Ivy Bridge-E quad-core, just to keep the option of adding more RAM later: a Xeon E5-1620 v2 with 32 GB of RAM to start.

I have used the T110 IIs in the past: http://hardforum.com/showpost.php?p=1037953079&postcount=474
The only downside to them is that the memory costs a little more and maxes out at 32 GB, last I checked.

For $2000, you should be able to build a decent 3-node lab (2 hosts + 1 storage).
 
But I need vSphere and ESXi to pass the vSphere exams :p

I can virtualize ESXi, but I don't think it's a very nice setup to virtualize a hypervisor :p

The company I work for is growing. They want to offer virtualization services, and vSphere definitely interests them.

I wanted to mention: don't write off virtualizing your hypervisor, especially for a home lab. Obviously this isn't ideal for a production environment, but the main goal in a home lab is to play around with various technologies.

I went the route of building a 2-CPU server and installing as much RAM as I could (you have a decent budget; ECC RAM is easy to come by on eBay). Our company uses VMware exclusively, but we want to explore Hyper-V as an alternative. I was able to follow the various guides for virtualizing Hyper-V, got a 3-node Hyper-V cluster up and running, and it let me test things like cluster-aware updating, failover, etc. In a lab scenario you want as much flexibility as possible, but at the same time lots of hardware means a bigger electric bill. The bottleneck is generally RAM, not CPU; one system with more RAM lets you put the money to work where it does the most good.
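For reference, on ESXi 5.1/5.5 the nested part is just a couple of lines added to the nested VM's .vmx (there is also a checkbox for it in the web client), roughly this, as far as I remember:

vhv.enable = "TRUE"
hypervisor.cpuid.v0 = "FALSE"

The first line exposes VT-x/AMD-V to the guest; the second hides the "running in a VM" CPUID flag, which Hyper-V setup otherwise chokes on.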
 
But I need vSphere and ESXi to pass the vSphere exams :p

I can virtualize ESXi, but I don't think it's a very nice setup to virtualize a hypervisor :p

If the choice is between blowing $2k on a setup you basically don't need and just running ESXi on VMware Workstation/Fusion on your existing desktop, then for me the desktop would win, because I can think of a lot of better things to spend the $2k on.

My home lab is ESXi running on Workstation, and I also run Windows 2008 and 2012 servers on it. The Windows 2012 machine provides iSCSI targets and NFS exports for shared storage.
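The iSCSI side on the 2012 box is only a few PowerShell commands once the iSCSI Target Server role is in place; roughly this, from memory (the target, IQN and path names are just examples, and parameter names may differ slightly between 2012 and 2012 R2):

# install the iSCSI Target Server role
Add-WindowsFeature FS-iSCSITarget-Server

# a target limited to the lab ESXi initiator, a VHD-backed LUN, and the mapping
New-IscsiServerTarget -TargetName "esxi-lab" -InitiatorIds "IQN:iqn.1998-01.com.vmware:esxi01"
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\lab1.vhd" -SizeBytes 100GB
Add-IscsiVirtualDiskTargetMapping -TargetName "esxi-lab" -Path "C:\iSCSIVirtualDisks\lab1.vhd"

After that it's just adding the software iSCSI adapter on the ESXi side, pointing it at the 2012 VM, and formatting the LUN as VMFS.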

For anyone who just wants to occasionally spin up a lab environment, I cannot recommend a Workstation/Fusion-based solution highly enough. It's easy and the cost is low (just buy local SSDs to run it on, no need for any kind of RAID at all). Totally inexpensive.

Other than that, the minimum cost for the commercial training required to actually be eligible for the VCP exam is about $4k, which you can get discounted down to roughly $3.2k with VMUG Advantage. Alternatively, you can check whether a local college is a vITA school and take the required course over one semester at whatever the tuition for that class is. Passing the exam without a decent amount of hands-on experience is difficult unless you study from brain dumps.
 
What kind of usage do you expect on the file storage? What network protocols, if any, and how many simultaneous clients?

What about all those virtual machines? Even if they are all up all the time, will you actually be using them, or will they otherwise be busy?

If you really want to remove bottlenecks, you should go with a proper SMP system with multiple memory banks. 200 MB/s net out of ZFS means a lot of raw throughput, and the memory bank doing that will not be happy to, say, rebuild FreeBSD with itself at the same time. ZFS also uses lots of RAM, and if you want to use the machine interactively at the same time, you should have as many RAM slots as possible (so you can buy cheaper, smaller modules) and you want the option of going with registered RAM (which is not the same thing as ECC).
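As a rough back-of-the-envelope for that 200 MB/s target (assuming a 6-disk RAIDZ2 and gigabit clients):

6-disk RAIDZ2              -> 4 data disks
200 MB/s over 4 data disks -> ~50 MB/s per disk sequential, easy for 7200 rpm drives
1 GbE client               -> ~110-120 MB/s per stream at best
200 MB/s over the LAG      -> needs two or more simultaneous clients, since IP hash will not split a single stream across links

So the 200 MB/s figure is really about local VM traffic plus several clients at once, not any single machine on the network.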
 