Your home ESX server lab hardware specs?

ESXi1/2:
Intel Xeon X3430
Supermicro X8SIL-F-O
8GB ECC
16GB Flash Drive

Storage:
QNAP 459 Pro II
4x 1TB drives

Network:
1810G-24

Should be enough to carry me through VCP and down the XenServer path as well. Thanks NetJunkie for the build info. Stupid me bought non-HT CPUs by accident though. Ah well, it's just a lab.
 
Happy to help. Not sure if I mentioned I've built a 3rd server. Needed one more for some demos I'm working on. Come to the VMware User Summit in Charlotte on the 5th and you can see my wondrous lab in person. :)
 
We just got our new ESX blades running... yup, blades.
I don't have vCenter set up yet, but it's gonna be awesome!!!!

4 x HP c7000 chassis with currently 12 ESX blades... 256GB RAM EACH... and dual AMD 12-core CPUs!!!!!
 
Used my lab at the Carolina Regional VMUG yesterday to do a live install/config of the dvSwitch and Nexus 1000v. Worked perfectly and got several comments on it. :)
 
I decided to downsize to a single system again and run my environment virtually via VMware Workstation.

Here is what the hardware looks like:

Norco 4020 w/Corsair 750TX
Asus P6T
i7 920 w/Corsair H50 - it is in an office, so I need to keep the noise down.
24GB G-Skill Ripjaws DDR3 1600
Nvidia Quadro 370
3 x Intel PRO/1000 PT dual-port NICs
LSI RAID
2 x WD640 Blacks - RAID 1 - OS
2 x 15K 300GB SAS - RAID 0 - Uber VNX VSA
8 x 1TB WD Blacks - Future File Server storage
Dell 2407

Software:
Windows Server 2008 R2 - AD/DNS
VMware Workstation 7.1.4
VMs:
2k8 R2 SP1 vCenter/View Composer/VUM
2 x ESXi 4.1 U1 w/8GB allocated each
Uber VNX VSA
Cisco UCS Platform Emulator 1.4
WHS 2011

Embedded VMs (inside the virtualized ESXi hosts):
1 x Windows Server 2008 SP2
2 x Windows XP SP3 PCs (View testing, etc.)
vCenter Mobile Access - for the iPad app

I have to look for some good switches so I can set up VLANs, port channeling, trunking, etc. Right now I'm just breaking out my connections with different subnets, along the lines of the sketch below.
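
For planning those subnets, a quick Python sketch like this does the job; the 192.168.10.0/24 base network and the role names are just examples, not my actual layout:

[code]
# Carve one /24 into /26 blocks, one per traffic type that would
# normally get its own VLAN. Base network and role names are examples.
import ipaddress

base = ipaddress.ip_network("192.168.10.0/24")
roles = ["management", "vMotion", "storage", "vm-traffic"]

for role, subnet in zip(roles, base.subnets(new_prefix=26)):
    usable = subnet.num_addresses - 2  # minus network and broadcast
    print(f"{role:12s} {subnet}  ({usable} usable hosts)")
[/code]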

Works quite well. Unfortunately, I can only run 32-bit VMs since the ESXi hosts are themselves virtualized, and I can't play with FT, nor does it support EVC. Not that big of a deal for testing and studying, though.
 
Just picked up some Qlogic 8200 CNAs and a 3200 10Gb NIC. Qlogic has some neat things coming out for their adapters that I'll be blogging about soon. Working on getting another 2960S switch for the house so I can get 2 more 10Gb ports. Cisco doesn't have any Nexus 5010s for "long term loan" right now....
 
Two more 2960S switches showed up today...but I need a stacking module and I can't find one.
 
In July I'm taking the VMware View class (VMware View: Install, Configure, Manage V4.5) in order to prepare for the VCA4-DT exam.

I'm planning on putting this together for a home lab in the next week to practice on.

SUPERMICRO MBD-X8SIL-F
Intel Xeon X3440
(2x) 4GB Kingston ValueRAM
Patriot Xporter XT Boost 8GB Flash Drive
Western Digital Caviar Black WD5002AALX 500GB
OCZ ModXStream Pro 500W
LIAN LI PC-A04B
 
New additions to the lab. Three Cisco 2960S switches w/ 10Gb uplinks. I had the 48-port already...the two 24-port PoE+ are new. Two Qlogic QLE8242 CNAs and a Qlogic QLE3242 10Gb NIC. And the little ASA5505...

[Image: cisco_gear.jpg]
 
And the lab continues to grow...yeesh. Another Synology unit is on the way. This time a DS1511.
 
Intel Xeon X3220 2.4GHz quad core
8GB DDR3 RAM
EVGA 790i SLI FTW
1 x 2TB WD Green (for VMs and their storage; lacks redundancy)
1 x 500GB (hypervisor install, plus ISO storage for installers and such)
 
That HP 1810G didn't last long in the lab, did it?

The 1810G-24 is still doing just fine... These 2960S switches are too loud to run all the time...lab is in my office. I use them when I'm doing testing and got them for the 10Gb uplinks...no one at Cisco could find me a Nexus 5010 to use...
 
New lab gear! Synology DS1511+ and 5 x 2TB Seagate Barracuda LP drives. I already have a DS1010+ that I love. This one will be dedicated to media, taking that load off the DS1010+ that I use for my vSphere lab.

[Image: DS1511.jpg]
 
I have decided that it's time to learn more about ESXi
rather than just being a user and doing simple server consolidation.

Server 1:
- Dell PowerEdge 1900
- two E5335 Quad Core Processors @ 2.0 GHz
- 16GB Ram
- Perc 5i
- Six 1TB enterprise class Seagate drives (ST31000340NS)
- Two Intel E1G42ET 10/100/1000Mbps PCI-Express dual-port server adapters
- Built-In Broadcom Gb NIC

Server 2:
- Dell PowerEdge T110
- Intel Xeon X3440 Quad Core
- 16GB Ram
- Perc H700
- Four 1TB retail class drives (Hitachi)
- Two Crucial C300 64GB SSDs
- Two Samsung 640GB 2.5"

Server 3:
- Dell PowerEdge T110
- Intel Xeon X3440 Quad Core
- 16GB Ram
- Perc H700
- Four 1TB retail class drives (Hitachi)
- Two Crucial C300 64GB SSDs
- Two Samsung 640GB 2.5"

Server 4: (coming soon)
- Dell PowerEdge T110
- Intel Celeron
- 8GB Ram


I am thinking of rebuilding and moving the local storage in Servers 2/3 to a NAS,
which is what "Server 4" will be for (4 x 3.5" drives + 4 x 2.5" drives, or maybe
even 8 x 2.5" drives + 4 x 3.5").
I will move the dual-port NICs to the NAS box too.

I am also thinking about making Server 1 a NAS running StarWind - I need advice here.
The system requirements surprisingly recommend a quad-core 5600-series processor to
run StarWind. I want to try StarWind because it seems they lifted the 2TB limit, but I'm not sure I want to run my PE1900 24x7 any more.

I was also weighing the possibility of getting a Synology DS1511 or QNAP SS-839 unit,
but the price tag was double vs. buying "Server 4", which was $310 delivered (I already had the RAM from the other T110s)... I can still refuse delivery :(

I might try StarWind iSCSI on the Celeron to see how good/bad it performs, or just
run FreeNAS 8 with NFS? I dunno.

Input appreciated.

I am already thinking of selling off some server stuff to acquire some network equipment (switches), but
I'm not sure what I should get.

EDIT: If those HP 1800-series switches ^^^^^ are quiet, that may be for me, as quiet is better,
which is why I have been getting towers over rack servers.

EDIT: Search for SAN and replace with NAS :confused:
 
Hello everyone, I just bought a base T110 tonight to set up as an ESXi box at work to start learning more.

I got the $379 unit:

X3430
2GB memory
250GB hard drive

What inexpensive RAID controller does everyone recommend? I have several Seagate 7200.12 1TB drives. I would really like to do RAID 5 and have one of my VMs run a FOG server for imaging machines. I keep seeing the PERC 5 mentioned - is there a specific model of the 5? Also, any problems with using those drives?

Eventually I would like to play around with NAS or, better yet, SAN/iSCSI VM storage, but for now it would be sweet to get a RAID array going in the machine.
 
The 5i's are OK. For your future, forget NAS; most companies and such use SAN/iSCSI,

which is nice. Fiber iSCSI networks are better than sex.
 
Fiber iSCSI? Wtf?

You'll find many companies, even large ones, use NAS (NFS) for VMware. The majority of large installs are on Fibre Channel, but there's plenty of NFS out there. iSCSI is quickly moving out of favor for several reasons, mainly because storage vendors are doing really cool integrations between vSphere and NFS on the arrays.
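
To show how simple the vSphere side of NFS is, here's a rough pyVmomi sketch of mounting an NFS export as a datastore; the vCenter address, credentials, and the NAS host/path are all placeholders, so treat it as an untested sketch:

[code]
# Hypothetical sketch: mount an NFS export as a datastore on the first
# host found. All names, addresses, and paths below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab only; skips cert checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Grab a host to mount the datastore on.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]

# Describe the NFS export and mount it read-write.
spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.1.50",                # the NAS/filer
    remotePath="/volume1/vmlab",              # exported path
    localPath="nas-nfs",                      # datastore name on the host
    accessMode="readWrite")
host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
[/code]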
 
Fiber iSCSI? Wtf?

You'll find many companies, even large ones, use NAS (NFS) for VMware. The majority of large installs are on Fibre Channel, but there's plenty of NFS out there. iSCSI is quickly moving out of favor for several reasons, mainly because storage vendors are doing really cool integrations between vSphere and NFS on the arrays.

I've seen fibre used around here for TCP/IP, so I suppose it can be done. Though I had your reaction at first. I think people in large organizations like to pick their poison: do you want to deal with the storage group or the network group? Which is more evil? :D
 
Stop linking that, damn it! I want to buy a couple of 2-port NICs for my hosts, and then I saw your Twitter feed with this and now this! :D

Must resist....
 
Stop linking that, damn it! I want to buy a couple of 2-port NICs for my hosts, and then I saw your Twitter feed with this and now this! :D

Must resist....

Sorry. :) Mine got here today and I just popped it in one of my lab boxes. Makes it easier for DPM to shut down two of the three hosts most of the time.
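
If you'd rather script that than click through the cluster settings, a rough pyVmomi sketch like this should turn DPM on in fully automated mode; the vCenter address, credentials, and cluster name are placeholders, and it's untested:

[code]
# Hypothetical pyVmomi sketch: enable DPM (fully automated) on a cluster.
# vCenter address, credentials, and cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; skips cert checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the cluster by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "LabCluster")

# Reconfigure the cluster with DPM enabled and set to automated.
spec = vim.cluster.ConfigSpecEx(
    dpmConfig=vim.cluster.DpmConfigInfo(
        enabled=True, defaultDpmBehavior="automated"))
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
[/code]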
 
My ESXi box:
Motherboard: Asus M4A785TD-V, AMD 785G, SB710
CPU: AMD Athlon II X4 630 AM3
Memory: 12GB ECC Kingston ValueRAM TS DDR3 1333MHz CL9
PSU: 80+ Tagan 600W
NIC: Intel PRO/1000 MT Dual Port Server Adapter

RAID Controllers
IBM M1015 (in PCI-E x16 #1)
DELL PERC5/i (in PCI-E x16 #2)

Disks
4 x Seagate Barracuda Green 2TB 5900RPM SATA3 64MB (ST2000DL003)
3 x Samsung 1.5TB 5400RPM SATA2 32MB (HD153WI)
3 x Samsung 1TB 7200RPM SATA2 32MB (HD103UJ)
 
New Lab!
Dell PowerEdge 2900 with dual quad-core 3GHz 8MB-cache Xeons, 32GB DDR2 FB-DIMMs, 4 x 146GB 15K SAS + 4 x 2TB 7.2K Hitachi. Running OpenIndiana + ESXi all-in-one.

This replaces my Dell Precision dual quad-core 1.86GHz, 16GB RAM, 2 x 146GB / 2 x 250GB box. I tossed the new box in another room so I can no longer hear it, and the Precision 690 has been donated to my brother as a new work PC to replace his dual dual-core. :)
 
Stop linking that, damn it! I want to buy a couple of 2-port NICs for my hosts, and then I saw your Twitter feed with this and now this! :D

Must resist....

I've been using dual-port Broadcom 5709s I find on eBay for around $60. Anyone seen any better alternatives?
 
I would use the Open-E free version instead, since your storage is <2TB. Open-E's iSCSI target is supposedly much improved over the old IET target in OpenFiler. Open-E is pretty easy to use, as well. Hopefully OpenFiler will integrate the LIO iSCSI target and take the lead again.

Solaris w/ZFS is an even better choice IMHO - I had no trouble maxing out gigabit while running Solaris/ZFS/iSCSI on a VM in my above ESXi box, but Solaris isn't as easy to use as a NAS distro.

OpenFiler 2.99 allows use of the SCST iSCSI target (2.0), and it works really well. I have a VM running it with PCI passthrough, and right now it's running great in my test environment.

ZFS is awesome, but to use it to its full potential lots of memory is required; see the rough sizing sketch below. My friend uses Nexenta on his ESXi clusters; I'm the OpenFiler guy. It's all about the flavor of how you want your storage to run.
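
As a rough back-of-the-envelope, the community rule of thumb is about 1GB of RAM per TB of pool for general use, and on the order of 5GB per TB more if you turn on dedup; treat these as guidelines, not vendor numbers:

[code]
# Rough ZFS RAM sizing from common community rules of thumb:
# ~1GB RAM per TB of pool, plus ~5GB per TB if dedup is enabled.
def zfs_ram_estimate(pool_tb, dedup=False):
    base_gb = 1.0 * pool_tb                     # general ARC guideline
    dedup_gb = 5.0 * pool_tb if dedup else 0.0  # dedup table overhead
    return base_gb + dedup_gb

for tb in (4, 8, 12):
    print(f"{tb} TB pool: ~{zfs_ram_estimate(tb):.0f} GB RAM, "
          f"~{zfs_ram_estimate(tb, dedup=True):.0f} GB with dedup")
[/code]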
 
I've been using dual port Broadcom 5709's I find on ebay for around $60. Anyone seen any better alternatives?

I was gonna pick up two of these. A bit more than your Broadcoms, but if they're on the HCL I might get yours.

Intel pro/1000 PT PCI-E Dual Port NIC Card Dell X3959 | eBay: http://cgi.ebay.com/Intel-pro-1000-PT-PCI-E-Dual-Port-NIC-Card-Dell-X3959-/180685585740?pt=LH_DefaultDomain_0&hash=item2a11b33d4c
 
Just upgraded the DSM on my Synology (DS1010) in my lab. It rebooted and came up fast enough that I didn't lose any VMs on the NFS datastore. Nice. :)
 
Just upgraded the DSM on my Synology (DS1010) in my lab. It rebooted and came up fast enough that I didn't lose any VMs on the NFS datastore. Nice. :)

How would you rate the performance of your Synology? I'm running a custom-built iSCSI server at home running Debian with a dual-core AMD, 2GB RAM, a Dell PERC 5/i, 4 x WD 250GB RE3 drives in RAID 5, and two dedicated Gb NICs for multipathing (RR).

Everything works fine, but it doesn't scale very well. Have you done any stress tests on the Synology? Something like the quick check below is what I have in mind.
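
For a rough sequential-write number against a mounted share, a tiny Python check like this works; the mount point is a placeholder, and a real tool like Iometer would obviously tell you more:

[code]
# Crude sequential-write throughput check against a mounted share.
# The path is a placeholder; this is a sanity check, not a benchmark.
import os, time

TEST_FILE = "/mnt/nas-test/throughput.bin"   # placeholder mount point
BLOCK = b"\0" * (1024 * 1024)                # 1MiB writes
TOTAL_MB = 1024                              # write 1GiB in total

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())                     # push data out to the array
elapsed = time.time() - start

print(f"sequential write: {TOTAL_MB / elapsed:.1f} MB/s")
os.remove(TEST_FILE)
[/code]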
 
It does anything I ask. It's a 5-bay NAS box. The limitation isn't on the CPU or memory in the box...it's the 5 WD Black drives I have in there.
 
New user and new custom-built ESXi box:
Intel Xeon E3 1230
Motherboard Intel S1200BTS
8GB Kingston ECC memory
2 x VelociRaptor 150GB hard drives, no RAID
Antec Sonata case with Enermax Liberty power supply

So far, the only problem I encountered was memory: the mobo only accepts ECC.
I know other Intel server boards on socket 1156 work with non-ECC, but not this one.
 