Your home ESX server lab hardware specs?

I just built one about 3 months ago:

single quad core E5410 (dual quad capable)
6 TB of drives (SATA), 4 TB usable (RAID1 and RAID5)
40 GB of RAM (128 GB capable)
RAID controller with 128 MB cache and battery backup

Running ESX 3.5 with 20-30 VMs. Need more spindles/controllers + RAM for more VMs.
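For anyone checking the 6 TB raw vs. 4 TB usable figure, one drive layout consistent with those numbers (purely an assumption; the poster didn't give drive counts or sizes) is a 2x 1TB RAID1 mirror plus 4x 1TB in RAID5:

```shell
# Hypothetical layout (assumed; the poster didn't give drive counts):
#   RAID1: 2 x 1TB mirrored    -> half the drives usable
#   RAID5: 4 x 1TB with parity -> (n-1) drives usable
raid1_usable=$((2 / 2))               # 1 TB from the mirror
raid5_usable=$((4 - 1))               # 3 TB from the RAID5 set
total=$((raid1_usable + raid5_usable))
echo "${total} TB usable of 6 TB raw" # -> 4 TB usable of 6 TB raw
```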
 

This is your play lab? 40GB of RAM? :confused: :eek: :confused:
 
Why can't this just be it? :)

Well, this thread has been pretty much all text, and it is called "Your home ESX server lab hardware specs?" But if a new thread is started, don't limit it to just ESX; allow Xen, OpenVZ, and others.
 
Nothing Fancy:
Q6700
4GB DDR2
WD 80GB 7200 SATA 3GBPS

Running Win2k3 Standard as a DC with DNS/AD/TS and a few XP images joined to the domain.
 
Not quite as [H]ard as some of you guys, but it does what I need it to for now, with room for expandability.

Asus Z8NA-D6 (wish I could get the ASMB4 management board to work)
1 x Xeon L5520 Quad (once the price comes down a bit, I'll pick up a second)
6GB DDR3 1333 (waiting for the 12GB kit to come down a bit)
1 x Hitachi 250GB SATA
1 x Seagate 320GB SATA
2 x Seagate 1.5TB
2 x WD Green 1.5TB
1 x Intel PRO/1000 Dual Port PCI

ESX 4
WHS Guest
Untangle Guest
XP Guest

This replaces an ESX 3i setup with the same guests on the following:

MSI GM965 motherboard
Core2Duo T5450 Socket P mobile CPU
4GB DDR2 800
Adaptec 5805 controller
Same hard drives (transferred them to the new system)
Intel PRO/1000 Dual Port PCI (transferred to the new system)

The original system was kind of strange for an ESX/ESXi setup, I know, but I was shooting for low power. What's interesting is that the new system with the same guests/drives only draws about 18-20W more than the mini-ITX system. I guess going for the L-series Xeon paid off. That, and I'm saving about 13.5W by not having the Adaptec in the system. Granted, the system is only utilizing a fraction of its power, but it's interesting when you notice how much more computing power the new system has while consuming only a little more. On the flip side, I'm sure at 100% utilization the gap between the two systems would be a whole lot larger.

I'm still very much a noob when it comes to ESX/ESXi, so I'm really only using a small portion of its features (basic features at that). I feel a bit overwhelmed. Any suggestions for good resources on learning more about ESX/ESXi? Something not too fast-paced, preferably. I'm also still trying to decide what other VMs to add for learning purposes. I'll probably add a Server 2k8 guest; not sure what else though. Any suggestions for useful/interesting/fun stuff to add as guests for learning?
 
A 3GHz quad-core with 4GB of RAM works fine for dual domain controllers and multiple clients here - I keep the VM size around 8GB total and map an iSCSI drive for storage.

Dual or quad NICs can help if you have high traffic.
Just set up a virtual switch to use a different physical NIC and you'll have more dedicated bandwidth if needed.
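On classic ESX (3.5/4) with a service console, mapping a spare physical NIC to its own virtual switch is a few `esxcfg-vswitch` commands. A sketch, assuming `vmnic1` is the spare NIC; the switch and port group names below are made up:

```shell
# Assumes a spare uplink named vmnic1; vSwitch1 and the port group
# name are example values, not from the original post.
esxcfg-vswitch -a vSwitch1                  # create a new virtual switch
esxcfg-vswitch -L vmnic1 vSwitch1           # link the physical NIC as its uplink
esxcfg-vswitch -A "VM Network 2" vSwitch1   # add a port group for VM traffic
esxcfg-vswitch -l                           # list switches to verify the layout
```

VMs attached to "VM Network 2" then get `vmnic1` to themselves rather than sharing the management NIC.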

Since it's all across the wire when communicating with the VMs, I haven't noticed much lag in any situation.

Greatest free software package to date in my book -
 
I ended up going with the following:

2 ESX 3.5 hosts:

M2N-E motherboard
8GB RAM
Dual-core Athlon 64 X2 5200+
Extra Intel gigabit NIC

Openfiler iSCSI/NFS host:
MSI Motherboard
2x Athlon MP 2000+
2GB RAM
Adaptec 2100S RAID controller
5x 147GB U320 10k SCSI disks
Dual-port Intel gigabit NIC

Virtual Center Server/Domain Controller
IBM ThinkCentre desktop
- Dual core processor
- 2GB RAM
- 80GB SATA
- 300GB SATA



Had a lot of the parts except for the core components for the ESX hosts which is why I started this thread. Motherboards, power supplies, RAM, CPU and video cards ran me about $400 (some from eBay, some from NewEgg).

So far things are rock solid. But I haven't REALLY started taxing things yet. Just a clustered SQL 2005 server running on a pair of Win2k3 64-bit VMs.
 

ASUS Z8NA-D6 (ASMB4-IKVM) -- when you said you couldn't get the ASMB4 management board to work, did you mean that feature, or the board itself (the one whose model number is ASMB4-IKVM)?

I'm trying to mirror your setup and want to make sure I get the right stuff. Can you give me links maybe for what to get? One last question, do you know of another processor I could use instead of that? I could get that one, but prefer to get something cheaper. Any help/advice you can give me would be awesome.

Thanks!!
 

I have the board (ASMB4-IKVM) attached and configured, and it seems to be functioning, but I'm unable to connect to it. I believe it has something to do with the fact that the board only has 2 onboard NICs, while the ASMB4-IKVM was apparently designed around boards with 3 or more NICs. When I try to perform the configuration, it only allows me to set up NIC3 (which doesn't exist on this board), not NIC1 (the ASMB4-IKVM can only be set up on NIC1 or NIC3). I'm unsure which actual NIC it's binding to, but I am able to ping the address assigned to NIC3; I'm just unable to get the ASUS software to connect. I finally gave up. I'll try to take another look at it to refresh my memory on the problems. Here's the pertinent stuff I ordered from Newegg:

http://www.newegg.com/Product/Product.aspx?Item=N82E16835233029 - I believe 2 of these will fit, but I have no way to know for sure as I'm only using 1 cpu

http://www.newegg.com/Product/Product.aspx?Item=N82E16835185058 - For the heatsink.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813131389 - Keep in mind there is the cheaper Z8NA-D6C, which doesn't have the SAS ports (which you need an ASUS add-on card to use anyway), and I believe it doesn't have the ASMB4-IKVM either.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820148247 - Just wanted something cheap for temporary RAM until the 12GB kits come down a little. Side note: this RAM has already gone up; it was $80.99 when I bought it.

I got the L5520 because I wanted hyperthreading, which is only offered on the 5520s and above, and because I wanted as low power as possible. Alternatively, you could get the E5520, which is the 80W version of my processor, or, if you're not worried about hyperthreading, the E5506 or E5504. The Intel dual-port NIC I bought on eBay a while back. Much cheaper there, but rarely are they retail boxed.

Be careful with the heatsink. It seems to be real hit or miss which current 1366 heatsinks fit the 1366 Xeon boards, due to slight changes in the heatsink mounting setup. Something to do with the backing plates on the boards, I think. There's some talk about it over at 2cpu.com. I went with the Dark Knight because there was one report that it worked on a Supermicro board. I can say that, mounting problems aside, I don't think you'll get two of anything much bigger than the Dark Knight on this board. Again, I'm not sure two Dark Knights will fit.
 
My ESX server is an HP ProLiant ML110 G5 running ESXi v4.0. It's mainly used for trying out various server OSes and running game servers at LAN parties now and then.

Specs:
CPU: Intel Xeon X3220
RAM: 8 GiB ECC DDR2
Storage: 2 TB RAID 10 array of 4 x 1 TB SATA HDDs connected to HP Smart Array E200 controller
NICs: Onboard Broadcom and Intel Pro/1000 GT PCI.
 
Just got a shiny new Dell PE T300: 2.83GHz quad-core, 16GB RAM, 4x 500GB RAID 6, DRAC, hot-swap, redundant PSUs for ESXi 4.0. Hooray for a 20% off Small Biz coupon!

I'm running iSCSI targets for the Hyper-V boxes and other misc stuff. It'll host a DAG node when Exchange 2010 comes out.
 
Well, I am set up for the ESX 3.5U4 class starting 8/3-8/7 (fast track).

My boss approved it, but I'm waiting for his boss to approve it.

but my current test bed setup is..


*************************************************
Dell PowerEdge 2600
4GB RAM
dual 2.8GHz Xeon CPUs (HT, not dual core)
PERC 4i
6 x 146GB U320 drives

1 x onboard NIC
1 x Intel PRO/100 NIC
4 x Intel PRO/1000 NIC
*************************************************
2 x Dell GX620 desktops
P4 2.8GHz
2GB RAM
40GB SATA drives
*************************************************
Dell GX270 - Openfiler
P4 2.8GHz
2GB RAM
40GB SATA drive
*************************************************

Now, I couldn't figure out why the GX620 would install and run ESX but wouldn't show any available drive space... turns out SATA is not supported by ESX 3.5U4...

So my thinking now is to take a Dell GX270 and install Openfiler on it... and use it as a NAS for space...

My goal for the class (online) is to remote in and do what I am supposed to do... but I also want to mess around at night with what we learned... so I need 2 machines minimum for clustering and VMotion... and we will see...
 
I would use the Open-E free version instead since your storage is <2TB. Open-E's iSCSI target is supposedly much improved over the old IET target in Openfiler. Open-E is pretty easy to use, as well. Hopefully Openfiler will integrate the LIO iSCSI target and take the lead again.

Solaris w/ZFS is an even better choice IMHO - I had no trouble maxing out gigabit while running Solaris/ZFS/iSCSI on a VM in my above ESXi box, but Solaris isn't as easy to use as a NAS distro.
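For reference, on the Solaris 10 / OpenSolaris releases of that era, exporting a ZFS volume over iSCSI was roughly the sketch below. The pool name, disk device names, and volume size are placeholders, not details from the post:

```shell
# Placeholders: 'tank', the c*t*d* disk names, and the 100G size
# are example values for illustration only.
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0  # RAID-Z pool from three disks
zfs create -V 100G tank/esxlun                # a 100GB zvol to serve as the LUN
zfs set shareiscsi=on tank/esxlun             # legacy Solaris 10-era iSCSI sharing
iscsitadm list target                         # confirm the target is exported
```

ESX then discovers the target through its software iSCSI initiator. (Later Solaris releases replaced `shareiscsi` with COMSTAR, so this only applies to the old built-in target.)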
 

What about freenas +ZFS? :confused:
 

I'll look into Open-E free... Openfiler is free too... I know some of their add-ons cost money... but I'm not sure if iSCSI is free...

Yeah, I'm just looking to set up a lab beforehand so I'm ready...
Solaris... yeah, not up for that learning curve at the moment...

Openfiler seemed easy... but I'll check them out...

Never used it. My other concern is the iSCSI target; I don't know if the one in FreeNAS is any better than Openfiler's. I could be wrong, though.

I only used FreeNAS once... it is plain Jane. I liked the quota support in Openfiler and all the GUI-based eye candy in Openfiler... I do know that I can download FreeNAS as a standalone VM appliance and just fire it up on, say, my Q6600 really easily... nothing to install, just go...
Openfiler may have that too...

So that could be the storage source for my GX620s... my Q6600 as the NAS for either of them... instead of the GX270...
 
well I am setup with the ESX 3.5U4 class starting on 8/3-8/7 (fast track)

You should be stoked on that fast track. I just finished 3.5 Install & Config and it was pretty much a waste of time, but I did get some knowledge gaps filled. My vendor threw in 30 credits in a package deal, so I had enough to take I&C but not enough to do the fast track.

Either way I'm now "allowed" to sit for the VCP. What a hustle.
 
Single box lab. :) Mac Pro w/ 2x Xeon 5400s at 2.8GHz, 10GB RAM, 2x 640GB RAID0. Running ESX under Fusion and EMC's Celerra VSA for storage. Normally I do this when I create my demo videos, so it's not about how many VMs I can run but about showing some feature set.

Two labs at the office. One with two EMC Celerras replicating and 6 HP DL360s, a Cisco Nexus 5000 10Gb switch, an MDS fibre switch... a few other things. The other has a single Celerra and 6 DL360s plus a lot of other EMC gear.
 
I managed to get ESXi working for a little while on an HP server I had (don't have it anymore), an ML350 or MLG350 or something.
 
CPU: Intel 2.4GHz Q6600
RAM: 4x 2GB DDR2 800
Storage: PERC 5i with 5x 500GB 7200RPM Samsung HD501LJs
NICs: 2x Intel 1000GTs
 
Wooo. Thread necro. :D


To see if I could, I installed a clustered ESX 3.5 environment on four laptops: 2 hosts, 1 openfiler and one Virtual Center server running on a Windows Domain Controller. It ran like crap, but it was fun to pop open four laptops and a switch and have a VM cluster.
 
What about Hyper V users?

HP ProLiant DL360 G5
Windows Server 2008 R2
2x Intel Xeon 2.0GHz
20GB DDR2 RAM
3x 72GB SAS 10k HDD
2x 10/100/1000 NIC
2x 700W PSU
Fortinet FortiWifi 60B Firewall
HP ProCurve 1800-24G Switch
 
I have 3 identical PowerEdge T300s... two running 2008 R2, one running ESXi. Live migration is sweet!

Xeon X3363 (2.83GHz)
16GB RAM
4x 500GB hot-swap, RAID 6
PERC 6/i
dual-port Intel gigabit NIC
redundant PSUs
 

What do you use for shared storage?
 
I got a good deal on a Dell T605, which I'm assuming was being discontinued (about 4 months ago), as it was insanely cheap and I've never seen them on the Dell site since:
Dell T605
Dual AMD Opteron 2376 quad-core 2.3GHz
16GB of DDR2 ECC
2 x 250GB hot-swap WD SEs in a mirror for local storage
Dell PERC 6i SAS controller with 256MB of BBWC
1Gb onboard with a quad-port Intel PCIe NIC (I don't recall the model offhand.. anyone?)
Additional storage served out via iSCSI off my Solaris file server running 7 x 750GB Samsung F1s in a RAIDZ array.

I also have a monster of an old IBM server tower filled with 146GB 15k drives and 16GB of memory, but the thing sounds like a jet taking off and only has 2 x 3.4GHz single-core Xeons, which don't support VT on this board, so I quit using it after about a month of running 3.5 and sweating from the heat output.
 
Windows Storage Server 2008 from Technet. I have it running on ESXi.

I just set up Windows Storage Server with an iSCSI target yesterday in my VMware Workstation ESX cluster.

It actually works. Wouldn't mind moving it into my dev setup and seeing the performance; from what I'm reading it's not that great.
 
Old Dell 2650, dual 3GHz and 12GB of RAM.

Will probably get rid of it soon; I need something x64.
 
To study for my VCP cert update:

1.) AD (Win2k3 x86 R2) - VIA C7D mini-ITX, 2x512MB PC2-5300, 80GB Seagate 2.5"
2.) ESX 1 (vSphere 4.1) - Abit KN9 SLI, 4x2GB PC2-8500, x2 5600+, 80GB Seagate 2.5"
3.) ESX 2 (vSphere 4.1) - Abit KN9 SLI, 4x2GB PC2-8500, x2 4200+, 80GB Seagate 2.5"
4.) vCenter (Win2k3 x86 R2) - VM in the environment
5.) WHS (PowerPack 3) - ASUS M3A78-EMH HDMI, 2x1GB PC2-6400, x2 4850e on Water, Dell SAS6/iR, Supermicro 8-in-2 2.5" SAS enclosure w/expander, 2x 80GB Seagate 2.5", 4x 36.7GB SAS Seagate Savvio 2.5" 10k, iSCSI target software
6.) 8port D-Link GbE switch
 
DL580 G2
ESXi 3.5
quad 2.7GHz Xeons
8GB of RAM
dual Intel PRO/1000
2x 33GB 10k SCSI drives for the operating system

For VM storage I use FreeNAS with iSCSI set up against one of my RAID5 arrays, built with 3x 500GB SATA drives.

Next month I will probably be replacing my ESXi server with a DL585 with 4 quad-core Opteron processors, will be building a new FreeNAS box, and will be going to ESXi 4.
 
I've been adding to my lab and grabbed two of these.
Everything works for ESX4 out of the box.

Gateway SX2800-03 Desktop PC
Intel Core 2 Quad Q8200 2.33 GHz / 4GB RAM / 640GB Hard Drive / DVD±R/RW Drive

Uses DDR3 and it's small w/ low power draw.

*I wouldn't use them for production machines though*
 
My box sucks, but it gets the job done. Barely.

Whitebox, ESX 4.0
E2180, 2GB, 400GB IDE
Server 2003, Win XP, and Backtrack
 