Your home ESX server lab hardware specs?

Made some changes to the lab. One of my X8SIL-F boards had a DIMM slot go bad so I figured it was time to bump that host. It's now:

- X9SCM-iif
- Intel Xeon E3-1230 v2 CPU (3.3GHz)
- 32GB of RAM

I can tell already it's running cooler than the old X3430 as the fan doesn't spin up nearly as much under heavy load. I'll rev the other two hosts at some point.

Also put an Intel S3700 100GB SSD in each host and am using PernixData's FVP for read/write caching....well, I was. I will again when I get the new bits for vSphere 5.5. Currently running the RC of 5.5.
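
For anyone who hasn't dealt with it, getting FVP back onto a host is basically just a bundle install from the ESXi shell, something along these lines (the bundle path and filename here are made up; use whatever PernixData actually ships for 5.5):

    # host should be in maintenance mode for the install
    esxcli software vib install -d /vmfs/volumes/datastore1/fvp-host-extension-5.5.zip
    # confirm the extension is present afterwards
    esxcli software vib list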

Put two Samsung 840 Pro 128GB SSDs in my Synology DS3611xs using the new SSD Read Caching in DSM 4.3. That's getting me by until I get FVP working again.

And finally... I have two Fusion I/O 785GB ioDrive2 PCIe cards on the way. :) Does everyone's home lab do like 250K IOPS? Hah....
 
Nice setup, Jason. Not too far from mine. I upgraded to the 1812 from the 1512 so I could have more spindles and tiers of storage. Where did you get the RAM from? I need to upgrade a host to 32GB and have only been able to find it at outrageous prices.
 
They should really focus on scaling RAM for these NAS solutions; 8GB on a $3k NAS is absurd if you ask me. I would take that over having to populate SSDs for cache.

I'm revamping my storage as we speak, adding a Samsung 1TB EVO to my Iomega for a dedicated SSD store. I've also just added some Corsair 3s in each ESXi host, and I'm waiting on PernixData to provide the demo as we are looking into becoming a partner... we shall see.

Patiently waiting for 5.5 to GA so I can rebuild. Plan is to get vCAC going, with a net new vCloud deployment.
 
All-in-one setup

OpenIndiana as storage, with PCI passthrough of two IBM M1015 HBA cards. Storage is an 8-disk RAID-Z2 array with a 200GB read cache. 16GB of RAM is dedicated to storage; the rest is for virtual machines.
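
Roughly, the pool layout works out to something like this (pool and device names here are placeholders, not the actual ones):

    # create the 8-disk RAID-Z2 pool
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
    # add the ~200GB SSD as L2ARC read cache
    zpool add tank cache c2t0d0
    # sanity check the layout
    zpool status tank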


1x Norco 2424 case
1x Supermicro X9SRL-F
1x Intel Xeon quad-core E5-2609
2x IBM ServeRAID M1015 (flashed with IT firmware)
1x Intel 240GB MLC SSD (L2ARC cache)
32GB of RAM
1x Corsair 750W ATX PSU (should have gone modular; the cabling is ugly)
1x Intel quad-port NIC

Of course the picture....

 
They should really focus on scaling RAM for these NAS solutions; 8GB on a $3k NAS is absurd if you ask me. I would take that over having to populate SSDs for cache.

I'll take 200GB of very fast SSD cache over another 8GB of RAM. You also have to remember: many of these NAS units use Atom CPUs. The one in the DS1813+, like I have on the way, maxes out at 4GB. Nothing Synology can do unless you want them to go to a larger, more expensive CPU.

Most enterprise arrays have smaller DRAM cache than you'd think.
 
Lol. I have an 1813 on the way too. I had someone willing to take my 1812 at cost, so I couldn't pass up upgrading. The 1813 is sporting the i210 NICs too.
 
I'll take 200GB of very fast SSD cache over another 8GB of RAM. You also have to remember: many of these NAS units use Atom CPUs. The one in the DS1813+, like I have on the way, maxes out at 4GB. Nothing Synology can do unless you want them to go to a larger, more expensive CPU.

Most enterprise arrays have smaller DRAM cache than you'd think.

That's because the cost of the cache memory on enterprise arrays is astronomical.

I'm moving to a fully physical Nexenta system. It delivers great performance, provides read/write cache, and supports VAAI, all at less than half the cost of the beefy Synologys, and that's including disks. On top of that, I can add as much RAM as the motherboard will allow and it will make full use of every bit of it.

My issue with the high-end NAS units is really the cost. While they offer a lot of functionality, you can build something with similar functionality for a lot less.
 
You can always build your own for less. That is, until you factor in the cost of YOUR time, which may or may not be constrained.
 
Lol. I have an 1813 on the way too. I had someone willing to take my 1812 at cost, so I couldn't pass up upgrading. The 1813 is sporting the i210 NICs too.

What drives are you putting in it? My 1813+ that shipped today will have 8x 3TB WD Reds, and then I'm going to use a DX213 w/ 2x SSDs for front-end cache. I'm hoping the spare RAM I have here will work to upgrade it to 4GB.
 
The drives are just being swapped over from the DS1812. I will have/already have 6x 1TB 7200 RPM Seagate Barracudas (the new Barracudas) and 2x 128GB Samsung 830s. One SSD for the Gold tier, a 4-disk RAID 5 for Silver, and a 2-disk RAID 1 for Bronze. The other SSD has yet to find a role, so I may just unload it. I'm not finding much time to work on it right now since my ECM ISA cert course just started.
 
Good question. I had them originally in RAID but thought there really was no advantage/disadvantage either way. If I am incorrect, please LMK.
 
Don't mind me over here, I'm just watching you all talk about storage since it's very relevant to myself for a project and my home lab.
 
Good question. I had them originally in RAID but thought there really was no advantage/disadvantage either way. If I am incorrect, please LMK.

What do you mean? :)

I'm saying take the six SATA drives and put them in a single RAID 5 or even RAID 10 (if the capacity is good enough). Then put the two SSDs in and set them as read cache for that RAID set. You can do that in the new DSM 4.3. Works well.
 
Still on DSM 4.2. :)

I'll think about switching it up. I like the idea of having different storage tiers.
 
Same reason we all want things... TO PLAY! :)

I like researching certain scenarios for some of the situations I come across at work.
 
Well, in the awesomeness of people, the buyer of the 1812 backed out when the funds were supposed to be sent this AM. Got to love it. I may be staying with the 1812 for now. Not too sure yet.
 
Very minor change going to the 1813. Shouldn't be much difference at all. My 1813+ should be here on Monday.
 
Just finished ordering the new storage for the lab, so I'm going with this setup to start:

Nexenta CE w/ VAAI
Cooler Master HAF XB case
Corsair 650W modular PSU (repurposed from my first dedicated ESXi host)
Supermicro X8SI6-F (repurposed from my first dedicated ESXi host; built-in 6Gb/s LSI SAS2008 controller)
Xeon 3440 (repurposed from my first dedicated ESXi host)
Corsair H60
32GB Kingston UDIMM 1333 ECC (repurposed from my first dedicated ESXi host)
2 x Corsair 3 60GB, mirrored ZIL
1 x Corsair 3 120GB, L2ARC (ZIL/L2ARC layout sketched after this list)
4 x Seagate 2.5" hybrid 1TB drives w/ 8GB flash (haven't decided on mirrors or RAID-Z yet)
1 x 100GB G.Skill Phoenix Pro as a local SSD datastore to run recovery-site VMs for SRM testing
1 x dedicated onboard 10/100 IPMI port for out-of-band management
2 x onboard Intel GbE ports
1 x Dell/Silicom PCIe 6-port GbE adapter (3 x Intel dual-port controllers)
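
For the ZIL/L2ARC bit, it's basically just a couple of zpool adds once the pool exists; something like this (pool and device names are placeholders, not the real ones):

    # attach the two 60GB SSDs as a mirrored SLOG (ZIL) device
    zpool add tank log mirror c3t0d0 c3t1d0
    # attach the 120GB SSD as L2ARC read cache
    zpool add tank cache c3t2d0
    # verify the log and cache vdevs show up
    zpool status tank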

I wanted to accomplish a couple of things here. First, I wanted a smaller form factor while still being able to leverage the investment I had already made in the full-ATX Supermicro, and since it includes an onboard LSI SAS2008 controller it was a no-brainer to keep it. I also didn't want to worry about speed. Like most of you, I'm sure, my time is limited and I need things to happen when I initiate a command; I don't want a bottleneck. My lab is being rebuilt and most workloads will again run on the latest vCloud Director software, but this time I'm going to incorporate the automation and DR pieces.

If I start to feel this setup is being stressed, I'm going to slowly start adding a full SSD tier with the Samsung 1TB EVOs (hopefully by that time they'll have come down to a reasonable price), along with a dual-port 10Gb card to match the Intel AF DP cards in my vCloud hosts. Right now one port on each host is cabled to the other via TwinAx for 10Gb vMotion, which leaves one port free on each for storage connectivity down the road.
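
For anyone curious, the direct-connect 10Gb vMotion setup on each host is just a standard vSwitch with a tagged vmkernel port; roughly like this from the ESXi shell (the vSwitch, vmnic, and IP values are placeholders, not my actual ones):

    # vSwitch backed by the 10Gb port that is cabled host-to-host
    esxcli network vswitch standard add -v vSwitch1
    esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic4
    esxcli network vswitch standard portgroup add -v vSwitch1 -p vMotion-10G
    # vmkernel interface on a dedicated, non-routed subnet
    esxcli network ip interface add -i vmk1 -p vMotion-10G
    esxcli network ip interface ipv4 set -i vmk1 -t static -I 172.16.10.11 -N 255.255.255.0
    # tag it for vMotion traffic
    esxcli network ip interface tag add -i vmk1 -t VMotion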

I'll post pics of the build and benchmarks when I'm done in the coming weeks.
 
I guess I could post it there; I never thought about it. Thanks for the recommendation. I will be on the lookout to see if your RAM from the 1511 works.
 
I have the leftovers from our previous production stuff.

2x HP DL380 G6s
Each has 2x Xeon X5570 (quad-core @ 2.93GHz)
48GB memory
3x 146GB SAS each

Storage comes from a NetApp FAS 2050 shelf with about 4.5 TB usable

I tend to keep it powered down when I don't need to actively test - the power suckage is terrible when I leave this shit on all the time :eek:
 
I had a FAS2050 as well and it was a nice little unit, but yeah it was a huge power drain so I never powered it on.
 
So here's my home setup. I've been poking around here for a bit now and built an all-in-one system. This is all from hardware that I had lying around, so it is what it is. There is fairly low load on this system as it's just my home server running Plex and whatnot. The file server is running ZFS.

2x Xeon L5420
20GB FB-DIMM
Supermicro X7DWN+
Norco RPC-4216
Corsair TX750
2x M1015 flashed to IT (passthrough for ZFS)
6x 80mm misc fans
8x 500GB SATA (connected to M1015)
4x 2TB SATA (connected to M1015)
2x 120GB SATA (datastore)
 
CPU: Intel Xeon E3-1230 v2
Mobo: Supermicro X9SCM-iif
Memory: Kingston DDR3 1600MHz ECC unbuffered, 4x 8GB
Storage: Samsung 840 Pro 256GB (management/critical VMs)
WD Black 1TB (lab/test VMs)
WD Raptor 150GB (ancient... holding ISOs... must purge)
500GB iSCSI file extent on FreeNAS (hookup sketched after this list)
NIC: Intel PRO 1000 PT Dual Port
Case: Fractal Design Define Mini
PSU: Seasonic G450 Modular
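
On the FreeNAS iSCSI extent mentioned above, hooking it up on the ESXi side is basically just enabling the software initiator and pointing it at the FreeNAS box; roughly like this (the adapter name and IP are placeholders):

    # enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true
    # point dynamic discovery at the FreeNAS target portal
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.20:3260
    # rescan so the new LUN shows up
    esxcli storage core adapter rescan --adapter=vmhba33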

A little much on some of the components for a first try, but hey, the budget was earmarked for a second box anyway.

The IPMI and KVM over IP are phenomenal, although my antivirus decided to flag the KVM Java applet as malware.
 
Got the DS1813+ full of 3TB WD Reds yesterday. Very nice. Doing some performance testing on it right now. Going to do a post showing iSCSI File/iSCSI Block/NFS performance with vSphere. Spoiler Alert: Use NFS.
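
For anyone who wants to follow along, the NFS side is a one-liner from the ESXi shell; something like this (the IP, export path, and datastore name are placeholders for whatever your Synology exports):

    # mount the Synology NFS export as a datastore
    esxcli storage nfs add --host=192.168.1.50 --share=/volume1/vmware --volume-name=DS1813-NFS
    # confirm it mounted
    esxcli storage nfs list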
 
Wooo, just learned at STH that 4,1 (2009) and later Mac Pros make very nice ESXi boxes in terms of compatibility and, of course, performance. Have a few at work gathering dust!
 
Yes. They work well. Send me one...or make me a deal on one. :) I just want one to use as another desktop.
 
Heh, not mine to get rid of unfortunately, or one would be at home as my second host. That would allow me to try multiple hosts, clustering, vMotion, and whatnot.
 
Wow, this is so damn sweet. I guess it never occurred to me that this might be possible at all in the first place, seeing as how you need Boot Camp to bootstrap a Windows DVD and all. *nix obviously works "natively" like OS X though, and I guess I knew that, now that I actually think about it.
 
Got the DS1813+ full of 3TB WD Reds yesterday. Very nice. Doing some performance testing on it right now. Going to do a post showing iSCSI File/iSCSI Block/NFS performance with vSphere. Spoiler Alert: Use NFS.
Oooooh I can't wait to see this information! I always assumed iSCSI in any fashion would beat out NFS.
 