Home Lab - VMware / Storage

mikkhail (n00b - Joined Aug 23, 2012 - Messages: 45)
Hi Guys,

I'm after some advice on setting up a home lab to advance my skill set in this area of expertise, and I have been reading the forums for a couple of days. I think I have found my passion for IT again :)

I'm working in cloud computing at the moment, so I have exposure to a vast range of technology, but I want to move up to third-level support and get away from the desk, and I think this is the best way to do it.

Anyway, I'll quit with the life story.

I want to set up a ZFS SAN/NAS - can this be done with a whitebox and a Synology drive?

Also, does anyone have any recommendations on builds for 2 x whiteboxes for a clustered VM setup?

Last question: I have steered clear of networking and am purely a systems person, but do I need any network gear for this setup?

Thanks :)
 
You could always do what I did and pick up a used Dell PE1950/2950 and run those. They are pretty cheap and only lack VT-d.

You will always need networking gear. Used Cisco gear can be had cheap - I just picked up two Catalyst 2948Gs with fiber GBICs for $50. They are only 10/100, but that will do for lab use. Just link aggregate with multi-port NICs and pass data between the switches over fiber. Plenty of power.
 
What sort of price range should I be looking at to set up the three whitebox machines for ESX / storage?

Any ideas?
 
Most of the cost would likely be in the raid setup. The card alone could easily cost a few hundred.

I haven't done the white box server thing in years so I cannot help you much.
 
Are you doing this at home to learn?

What is your budget?

Are you interested in putting together your own storage server, or would you prefer a pre-built one?

How much storage do you want/need?

How much power do you want/need the ESX box(es) to have?
 
I am doing this at home to learn.

Budget would be $3000 for now.

I want to put together my own storage server if I can do it on the cheap (otherwise, would you suggest a Synology NAS?).

5TB should be plenty, with room for expansion.

I'd want them to run fairly smoothly if possible, but I guess it doesn't really matter, as it is for learning purposes.
 
Simply build a couple of whiteboxes with AMD Phenom II or Bulldozer CPUs, 32GB of normal DDR3 RAM, and a quad-port Intel NIC from eBay (PRO/1000 VT). The CPU, mobo, RAM, and NIC shouldn't cost more than $400 per box, plus a case and PSU.

Next you can build an OpenIndiana + napp-it whitebox SAN with an AMD CPU, 32GB of RAM, and another Intel quad-port NIC. Load it up with a bunch of fast SATA drives in mirrored vdevs and set up NFS and iSCSI. Or go for the gusto and get some nice SSDs instead.

Connect it all together with an HP 1810-24G switch and you're good to go!

The whole thing shouldn't cost more than $1,500 and you'll have two hosts, each with 4+ cores and 32GB of RAM, plus a nice SAN that can serve up iSCSI and NFS.
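If you're curious what napp-it is actually doing behind its web GUI, here's a rough Python sketch of the underlying ZFS/COMSTAR steps on OpenIndiana. All the device, pool, and dataset names are placeholders I made up, so treat it as an illustration rather than something to paste in as-is:

```python
#!/usr/bin/env python3
"""Rough sketch of what napp-it drives under the hood on an
OpenIndiana/illumos box: a pool of mirrored vdevs, an NFS share for
datastores, and an iSCSI LUN via COMSTAR. Device names (c2t0d0 etc.),
pool/dataset names, and sizes are all placeholders."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Two mirrored vdevs striped together: decent random I/O for VMs,
# and each mirror can lose a disk.
run(["zpool", "create", "tank",
     "mirror", "c2t0d0", "c2t1d0",
     "mirror", "c2t2d0", "c2t3d0"])

# NFS datastore for the ESX hosts.
run(["zfs", "create", "tank/nfs_ds"])
run(["zfs", "set", "sharenfs=on", "tank/nfs_ds"])

# Block storage: a zvol exported over iSCSI (COMSTAR must be enabled,
# e.g. `svcadm enable stmf` plus the iSCSI target service).
run(["zfs", "create", "-V", "500G", "tank/iscsi_lun"])
run(["stmfadm", "create-lu", "/dev/zvol/rdsk/tank/iscsi_lun"])
# then `stmfadm add-view <LU GUID>` and `itadm create-target` to expose it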
 
Do you guys think I should go and buy a server, or build my own machines?

I'm guessing for the SAN you guys would suggest I buy a second-hand server so I can learn all the hardware aspects and RAID.

I already do a bit of work with iLO/iDRAC when physical servers crash, though.
 
I think you should buy the storage server for sure. It is just plain cheaper and it works.

For the ESX boxes, personally, I would buy them. I do think it is a good idea to know how to build servers, but if you've built one high-end server, you've built them all (as long as you are an experienced PC builder).

In all reality, how many fully custom-built servers are in data centers? I sure don't see that many. Most companies value the simplicity of pre-built servers because of the warranty that comes with them.

Even if you are comfortable with building a server, I would just buy it all pre-built.

I mean really, what you REALLY need to learn is the software and networking.
 
Do the two ESX hosts need to be servers, or can I just buy pre-built standard machines?

I guess I'm a little confused by the whole 'whitebox' scenario (I'm thinking these are standard-built PCs which are beefed up and act as servers).
 
Yeah, you can, like Child of Wonder's builds above. If you plan on using them a lot, you could even go up to FX-8120 CPUs in the whiteboxes.
 
You could use a beefed-up desktop for ESX, but if you are going for the full experience, why not use servers? Most high-end workstations (which most people aim to use for their whiteboxes) use server boards, as well as server memory and hard drives. Using a rackmount server will just make everything easier to manage space-wise.

Those Rackable servers will even work in a 2-post relay rack; you can mount the ears mid-server. Great for home use on or under a desk.

Really, for $96 more than that processor you could have an entire server with 500GB of storage, 16GB of ECC memory, and four 2.2GHz cores, which is plenty for learning on. Can't beat that for bang for the buck.

BTW, I think it is important to learn on a setup with two ESX servers and common storage.
 
Yeah, those Rackable systems would be cheaper than building your own if it's just for learning or light use, IMO. You'd just need the 2/4-port NICs for them; not sure if they come with more than one or two.
Yeah, it's important to have the two ESXi systems and the shared storage to use the advanced and nice features it has to offer.
 
I'm a little wary of the servers if I buy them second-hand, as most of what I have read mentions bad experiences with hardware failures... where would be the best place to buy server hardware?

Any recommendations?
 
"You'd just need the 2/4-port NICs for them; not sure if they come with more than one or two."

Yeah, those only come with two gigabit NICs, but at least those Intel dual gigabit NICs are cheap.

That setup would give you one NIC for the management network, one NIC for VM network access, and the dual-port NIC for the SAN. That would be a great setup IMO. Use link aggregation on the dual-port NICs on the ESX boxes, as well as on the two onboard NICs on the storage server.
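If it helps, here's a rough sketch of how that layout might be scripted on each host with the classic esxcfg-vswitch tool (assuming you can get a Python interpreter where it runs, or just translate the commands by hand). The vmnic numbering and the vSwitch/portgroup names are assumptions on my part - check `esxcfg-nics -l` on your own box first:

```python
#!/usr/bin/env python3
"""Sketch of the suggested NIC layout on one ESX(i) host using the old
esxcfg-vswitch CLI. vmnic numbers and vSwitch/portgroup names are
assumptions -- adjust to whatever `esxcfg-nics -l` shows on your host."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# vSwitch0 typically exists already with vmnic0 attached: keep it for
# the management network.

# vSwitch1: VM network access on the second onboard NIC.
run(["esxcfg-vswitch", "-a", "vSwitch1"])
run(["esxcfg-vswitch", "-L", "vmnic1", "vSwitch1"])
run(["esxcfg-vswitch", "-A", "VM Network 2", "vSwitch1"])

# vSwitch2: storage traffic, with both ports of the dual-port Intel NIC
# teamed so the iSCSI/NFS path gets the extra bandwidth and failover.
run(["esxcfg-vswitch", "-a", "vSwitch2"])
run(["esxcfg-vswitch", "-L", "vmnic2", "vSwitch2"])
run(["esxcfg-vswitch", "-L", "vmnic3", "vSwitch2"])
run(["esxcfg-vswitch", "-A", "Storage", "vSwitch2"])
```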
 
"I'm a little wary of the servers if I buy them second-hand... where would be the best place to buy server hardware?"

I'm running two of them at home right now and have never had a problem. There are certain motherboard models to stay away from, but in general the c2004 platform is pretty good.

You also have to stay away from certain sellers.
 
Can anyone recommend any sellers?

Not sure if I'll go down the eBay path unless someone has a recommended seller in Australia..
 
"Simply build a couple of whiteboxes... Next you can build an OpenIndiana + napp-it whitebox SAN... load it up with a bunch of fast SATA drives in mirrored vdevs and set up NFS and iSCSI."

Would it be possible to set up RAID with this kind of setup?
 
What do you guys think?

ESX Whitebox:


2 x AMD FX Series FX 8150 Black Edition Processor - Eight-Core - 3.6 GHz (Turbo Core boost up to 4.20 GHz) - 8MB L2 Cache 8MB L3 Cache - 125W - 32nm - Zambezi - Socket AM3+ - $222
2 x Gigabyte GA-970A-D3 Motherboard - AMD Socket AM3+ / AM3 - AMD 970 & SB950 Chipset - 4x DDR3-2000 - 2x PCIe x16, 3x PCIe x1 & 2x PCI - Support for AMD CrossFireX - 6x SATA3 - 14x USB2.0 & 2x USB3.0 - Gigabyte LAN - 7.1 Channel Audio - ATX - $109
2 x Corsair Dominator 32GB (4x 8GB) Quad Channel DDR3 Memory Kit - DIMM 240-pin - 1600MHz (PC3-12800) - CL10 - DHX Pro Connector - $281
2 x Intel Pro1000 MT Gigabit NIC Quad 4 port server Adaptor - $119

SAN:

1 x AMD FX Series FX 8120 Black Edition Processor - Eight-Core - 3.1 GHz (Turbo Core boost up to 4.00 GHz) - 8MB L2 Cache 8MB L3 Cache - 125W - 32nm - Zambezi - Socket AM3+ $169
1 x Gigabyte GA-970A-D3 Motherboard - AMD Socket AM3+ / AM3 - AMD 970 & SB950 Chipset - 4x DDR3-2000 - 2x PCIe x16, 3x PCIe x1 & 2x PCI - Support for AMD CrossFireX - 6x SATA3 - 14x USB2.0 & 2x USB3.0 - Gigabyte LAN - 7.1 Channel Audio - ATX
1 x Corsair 16GB(2x 8GB) Vengeance Dual Channel Memory Kit - PC3-12800 / DDR3-1600MHz - CL 10 - Unbuffered - $109
1 X Intel Pro1000 MT Gigabit NIC Quad 4 port server Adaptor - $119
1 x Adaptec RAID 1430SA card - SATA II - RAID 0 1 & 10 - 4 Port - Low Profile - PCIex4 - $141

I haven't put hard drives in yet, but how are the specs so far?
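Quick sanity check against my $3000 budget (a rough tally only - I've assumed the SAN board is the same $109 GA-970A-D3 as in the ESX boxes since I haven't priced it separately, and cases, PSUs, drives, and a switch still need to be added):

```python
# Rough budget check for the parts list above. The SAN motherboard price
# is assumed to match the ESX-box board ($109); cases, PSUs, drives and
# a switch are not included yet.
esx_box = {
    "FX-8150 CPU": 222,
    "GA-970A-D3 board": 109,
    "32GB DDR3 kit": 281,
    "Intel quad-port NIC": 119,
}
san_box = {
    "FX-8120 CPU": 169,
    "GA-970A-D3 board": 109,   # assumed, same board as the ESX hosts
    "16GB DDR3 kit": 109,
    "Intel quad-port NIC": 119,
    "Adaptec 1430SA": 141,
}

esx_total = 2 * sum(esx_box.values())
san_total = sum(san_box.values())
grand_total = esx_total + san_total

print(f"2x ESX whitebox: ${esx_total}")
print(f"SAN (no drives): ${san_total}")
print(f"Total so far:    ${grand_total}")
print(f"Left of $3000:   ${3000 - grand_total}")
```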
 
Both are a bit overkill for home lab / "wanting to learn" use IMO, but I'm not the one to tell you your hardware requirements. That's up to you to decide, based on the expected load and your end goals for the lab. Also, why two ESX boxes? If you're planning on playing with vMotion and such, you'll need to budget for licensing after your trial is up (or, alternatively, keep reinstalling ESX when your trial is up, but that would be a big PITA if you ask me).
I also see that you mention a SAN (block-level storage), but you have no FC HBA listed. Are you just going to go with a NAS (file-level) instead? (Not a bad thing - just trying to make sure that if you do intend to go SAN, you get the right hardware.) Also, you didn't mention an OS for the possible NAS, so you may need to add in cost there as well.
 
Thanks for your input. I'm throwing around ideas and trying to gain as much feedback as possible since, as you're aware, I'm building from the ground up.

A couple of people at work are recommending a single machine running VMware Workstation (in which I can still have two ESX hosts) with an Openfiler VM for storage.

What do you think of this setup?
 
The question is what you want to achieve with your ESX whiteboxes. If you are not going to run ultra-heavy stuff, then a Core i5 and 32GB of memory will suffice. Think about licensing too if you want two ESX systems.

The storage server is overspecced; I've seen people fill gigabit with an AMD E-350.
Some tips:
- Go software RAID. There is nothing fancy to learn about hardware RAID apart from knowing how much TLER disks cost.
- As it's going to run 24/7, keep power usage in mind. I run my NAS on an Intel Celeron G530.
- You'll need to decide whether you'll use ZFS (needs more RAM, fancy perks) or mdadm (less RAM, standard RAID) - see the sketch after this list.
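To give a feel for what each option boils down to, here's a minimal sketch. The device names are placeholders, and it will happily destroy data if pointed at the wrong disks:

```python
#!/usr/bin/env python3
"""Minimal sketch of the two software-RAID routes for four data disks.
Device names are placeholders; point it at the right disks or you will
wipe something you care about."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

CHOICE = "mdadm"   # or "zfs"

if CHOICE == "mdadm":
    # Classic Linux md RAID5 plus a filesystem on top.
    run(["mdadm", "--create", "/dev/md0", "--level=5",
         "--raid-devices=4", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"])
    run(["mkfs.ext4", "/dev/md0"])
else:
    # ZFS raidz does RAID, volume management and the filesystem in one
    # step, at the cost of wanting more RAM for its ARC cache.
    run(["zpool", "create", "tank", "raidz",
         "c2t0d0", "c2t1d0", "c2t2d0", "c2t3d0"])
```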

Regarding the Workstation idea: it depends on what you want to do. I run a few Avamar Virtual Editions, and while that already pushes my NAS to its limit as it is, putting it all together on one workstation is going to cripple the whole box. If you just want to create some Debian/WinServ08 VMs to play with DRS and vMotion, then Workstation should suffice.

In the end, see what you really want and go from there:
- Fancy/exotic VMs, or just generic Linux/Windows?
- Will the NAS only serve datastores, or also home stuff?
- Do you mind the power bills?
 
Everyone is different and it's all personal preference, but I used to have my own home ESXi lab with VLAN trunking on the switches, multiple vSwitches, pfSense with VLAN support for my View labs, etc.

The problem I had is that it was a bit overkill for what I needed. It was great from a hands-on perspective for the hardware and the common snafus you run into without jumbo frames on, or with VLAN issues, etc., but outside of that it was a bit much.

I've recently ripped everything down and have my core machine with 4 SATA drives in it dedicated to VMware Workstation... Workstation is now what I'm using for my lab.

Basically, in the Workstation setup I have two ESXi servers with 4GB each, a domain controller, vCenter, an Openfiler server (iSCSI), and a FreeNAS server (NFS), and all of this can be shut down and brought up at will without breaking the bank and/or using a ton of electricity and, more importantly (the reason I got away from physical boxes), generating more heat than a coal stove. It seems to be working out pretty well, and there are a few snafus you'll likely run into, but for VCP testing it should get you everything you need and then some.
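For a rough idea of how it fits on one box, a back-of-the-envelope RAM budget - the only figure from my actual setup is the 4GB per nested ESXi; the host size and the other allocations are just illustrative guesses:

```python
# Back-of-the-envelope RAM budget for the nested Workstation lab.
# Only the 4GB-per-ESXi figure comes from the setup described above;
# the other allocations (and the 32GB host) are illustrative guesses.
host_ram_gb = 32
vms = {
    "nested ESXi #1": 4,
    "nested ESXi #2": 4,
    "domain controller": 2,
    "vCenter": 4,
    "Openfiler (iSCSI)": 1,
    "FreeNAS (NFS)": 1,
}
used = sum(vms.values())
print(f"Guest RAM committed: {used} GB of {host_ram_gb} GB host RAM")
print(f"Left for Workstation + host OS: {host_ram_gb - used} GB")
```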
 
" - You'll need to decide whether you'll use ZFS (needs more RAM, fancy perks) or mdadm (less RAM, standard raid)."

This is a bit of a misconception. ZFS can certainly use any RAM you throw at it for ARC (read cache), but will work just fine with a couple GB.
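And if you ever do want to rein the ARC in on a RAM-starved box, it's just a tunable. A sketch for an OpenIndiana/illumos box (the 2GB cap is an arbitrary example value):

```python
# Sketch: cap the ZFS ARC on an illumos/OpenIndiana box by emitting the
# /etc/system tunable. The 2GB cap is just an example value; a reboot is
# needed for /etc/system changes to take effect.
arc_cap_gb = 2
arc_cap_bytes = arc_cap_gb * 1024**3

line = f"set zfs:zfs_arc_max = {arc_cap_bytes}"
print("Add this line to /etc/system and reboot:")
print(line)
```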
 
Maybe we should post a sticky whitebox thread that shows entry, midrange, and enterprise-class builds for lab use.
 
+1 for this. Maybe also show two configs: an AIO, and a 2-whitebox + storage solution?

A 32GB, 8-core system is overkill if it's purely for lab/learning. hehe
 
I vote for a sticky, I'm still so undecided on what to buy!
 
Would it be possible for someone to spec me up two whiteboxes with storage?

I'm just so lost as to what hardware I would need for the storage... in terms of dual-port NICs, why are these required, and does each whitebox need one?

I will even pay someone to talk me through it!
 
I use the following whitebox:
- Intel i3 2100
- Gigabyte H67MA-USB3 (lots of PCIe)
- 16GB DDR3

The only real limit I have seen is obviously the memory, but an upgrade to 32GB is a breeze.
Apart from that it's great for home fiddling, and it only draws 39-40 watts. Another plus is that it doesn't break the bank, so you can easily get two.

Storage is handled by a Celeron G530-powered NAS with 5x 2TB in RAID 5, drawing 60-ish watts.
If you want two whiteboxes to try vMotion and such, you'll need centralized storage.
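To put those wattages in perspective, a quick running-cost estimate for two whiteboxes plus the NAS on 24/7 - the electricity rate is an assumed figure, so plug in your own:

```python
# Rough 24/7 running-cost estimate: two ~40W whiteboxes plus a ~60W NAS.
# The $0.25/kWh rate is an assumption -- substitute your local tariff.
watts = 2 * 40 + 60              # two ESX whiteboxes + the storage box
rate_per_kwh = 0.25              # assumed electricity price in $/kWh
hours_per_month = 24 * 30

kwh_per_month = watts * hours_per_month / 1000
cost = kwh_per_month * rate_per_kwh
print(f"{watts} W continuous is about {kwh_per_month:.0f} kWh/month,")
print(f"or about ${cost:.2f}/month at ${rate_per_kwh}/kWh")
```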
 
Last year I picked up some low-end refurb Dell R210s from the Dell Outlet and maxed out the memory for ESXi servers. I added a Dell SAS 6Gbps HBA and a Norco chassis with 20 drives for a ZFS storage server. I also run an older QNAP for shared NFS storage, and it works for a dozen VMs to test things with. Paired with a couple of Dell/Linksys/Netgear semi-managed GbE switches, it's a good testing lab. With the VMs stored on the shared NFS storage, I can reinstall ESXi on USB drives every 60 days and test out the full feature set, or just leave it as-is in free mode.

To me it was worth the extra cost of getting a low-end server, adding my own RAM and hard drives, and not having to deal with the hardware part. It's easier to just spend the extra $50-100, and I also get two years of hardware support, which saves time spent troubleshooting or hunting for a compatible-but-cheap part. Five years ago I would have definitely built my own, but it's cost- and time-efficient to just buy used or refurb, and I have something that might be easier to resell when I want to upgrade. In production we would have hardware support because it's not worth my time to troubleshoot stupid stuff like a bad power toggle switch or track down replacement parts online. :-p

If I had to start over now, I would pick up a couple of Rackable Systems / OEM Dell servers from eBay with 32GB of RAM and dual 4-6 core CPUs for an ESXi box or to set up a small cloud. There are some good deals online... I'd still keep a Norco 4U to provide file server storage... With the Dell SAS 6Gbps external HBA, I can use any machine that supports VT-d to serve as a ZFS file server and upgrade as hardware drops in price...
The onboard dual NICs are sufficient unless you want to play with VLANs or test other features, but I would save money and camp eBay / the For Sale forums for good deals on dual or quad NICs, or wait until the need arises.

I run ESXi on some Dell D820/E6500 laptops and OptiPlex 75x/76x desktops as ESXi sandboxes at work, with Intel gigabit NICs. Good for some Linux VMs or testing software installs in Windows. The CPU doesn't matter as much since I'm not running any heavy database/processing on it. Go cheap, then spend the extra $ when you run into performance issues... Occasionally I borrow a couple of E6500s from work to test multiple VMs at home, e.g. a Hadoop cluster :-p - purely for proof of concept and for breaking things to see how they work... :-D
 