Your home ESX server lab hardware specs?

NetJunkie,

I'm in the process of setting up something similar to yours. From the whitepapers, VMware seems to want you to dedicate NICs to a lot of different functions, which requires a large number of NICs. I'll have a single onboard NIC for management and a dual-port Intel NIC. I was thinking one to attach to my HP VSA iSCSI SAN and one for heartbeat/vMotion, etc.

Is that roughly how you'd do it with three NICs?

Thanks!

The whitepapers from VMware are for production environments, not for two-host lab setups. There's no way I'd set up a production cluster like I have in my home office. Right now I have both of my onboard NICs in a single port-channel and just let it balance across. I set priority for vMotion to go over one, VM traffic over the other, etc., but they're in the same channel and will fail over if needed.
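If you ever want to script that kind of thing instead of clicking through the vSphere Client, here's a rough pyVmomi sketch of the per-portgroup NIC-order idea (explicit active/standby overrides, rather than a switch-side port-channel like mine). The host address, credentials, and portgroup/vmnic names are made up, so treat it as untested:

Code:
# Rough, untested sketch: give each portgroup on the same vSwitch a
# different active NIC, with the other NIC as standby for failover.
# Host address, credentials, and portgroup/vmnic names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password",
                  sslContext=ctx)

# Assumes the first host in the first datacenter -- adjust for your inventory.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
netsys = host.configManager.networkSystem

def set_nic_order(pg_name, active, standby):
    """Override the vSwitch teaming order for one portgroup."""
    for pg in netsys.networkInfo.portgroup:
        if pg.spec.name == pg_name:
            spec = pg.spec
            spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
                policy="loadbalance_srcid",  # default vSwitch balancing
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=active, standbyNic=standby))
            netsys.UpdatePortGroup(pg_name, spec)

set_nic_order("vMotion", ["vmnic0"], ["vmnic1"])     # vMotion prefers vmnic0
set_nic_order("VM Network", ["vmnic1"], ["vmnic0"])  # VMs prefer vmnic1
Disconnect(si)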

You should have no problem.
 
Just a note... really happy with choosing SuperMicro for my lab servers. One motherboard had a bad DIMM slot, and they're cross-shipping a new one overnight at no charge. Glad I didn't have to go through Newegg.
 
Planning to do some testing and compare the Synology to the Iomega. Looks like I have an IX4-200r on long-term "loan"...so we'll see.
 
Appreciate the blogs, very informative. I'm waiting on my second combo to arrive before I start the build.

I've been working with Hyper-V for the last couple of years, and we have an ESX 3.5 VDI environment running on some Dell blades and an EMC CX4 at work. I handled the Hyper-V environment and my partner handled the VDI environment, so unfortunately I didn't get the opportunity to really get into VMware like I wanted. Ya know, same story: tons of work, no staff, and no time. I was finally able to hire two additional server admins, so I'll be able to get to some proactive work.

We're planning an upgrade to vSphere 4.1 in the near future, which I'll be handling, so I have a lot of catching up to do. We may also be migrating from our EMC CX4 to an HP P4000 VSA 10Gb iSCSI environment, because of a request to separate our commercial business from our government clients. I want to test the performance of the virtual SAN appliance beforehand, so this will be a start on that. I'll post pics, etc. when I get there.
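When the VSA testing starts, I'll probably script the iSCSI hookup rather than doing it host by host. Something like this pyVmomi sketch (untested; the vCenter address, VSA VIP, and vmhba name are placeholders) should add the VSA as a dynamic-discovery target on every host's software iSCSI adapter and rescan:

Code:
# Untested sketch: point every host's software iSCSI initiator at the VSA's
# cluster VIP and rescan. Addresses, credentials, and the vmhba name are
# placeholders; the software iSCSI adapter must already be enabled.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)

for dc in si.content.rootFolder.childEntity:
    for cr in dc.hostFolder.childEntity:
        for host in cr.host:
            stor = host.configManager.storageSystem
            stor.UpdateInternetScsiSendTargets(
                iScsiHbaDevice="vmhba33",   # software iSCSI adapter name varies
                targets=[vim.host.InternetScsiHba.SendTarget(
                    address="10.0.0.50",    # placeholder VSA VIP
                    port=3260)])
            stor.RescanAllHba()             # pick up the new LUNs
Disconnect(si)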
 
About to test an Iomega IX4-200r in my lab. EMC gave me one to help move some lab VMs for a VMUG we hosted for them. They said I could keep it until someone (if ever) asked for it back, so I'm going to snag it and bring it here to test. Interested to see how it compares to the Synology.

Funny. I ordered a new HP switch...and found one two days later. Bought the Synology...and EMC gives me an IX4 (though I'd rather have the Synology).
 
Lab is up and fully functional:

HP LeftHand P4000 VSA (running under Hyper-V), 1TB iSCSI

ESXi Node 1
Asus Maximus III Gene
i7-860, Corsair H50
8GB G.Skill DDR3-1600
NVIDIA Quadro FX 370
Intel PCIe dual-port Gigabit NIC
Corsair 4GB Voyager Mini USB flash drive w/ESXi 4.1

ESXi Node 2
Intel DP55WB
i7-875K, Corsair H50
8GB Corsair XMS DDR3-1600
NVIDIA Quadro FX 370
Intel PCIe dual-port Gigabit NIC
Corsair 4GB Voyager Mini USB flash drive w/ESXi 4.1

[Screenshot: hpvsa.png]

[Screenshot: vlab.png]
 
What hardware are you putting behind that HP P4000? It's supported for XenServer 5.6 StorageLink, so I'm curious. I'm planning a home lab to study this stuff, since my job has taken a turn toward Citrix. That's just the trial, right? 60 days, I believe.

I have a storage server running Windows 2008 R2 w/AD, DNS, and Hyper-V roles:
Norco 4020
Corsair 750W PSU
Gigabyte 880GMA
AMD 1050T 6-core
16GB DDR3-1333
2 x Supermicro AOC-SASLP-MV8
Mixture of Western Digital Black 1TB and 500GB drives
 
Update...

I'm running the Essentials Plus package.

Here's my setup:
ESXi on 3 identical Dell T300s:
Xeon X3663 2.83GHz quad-core
24GB RAM
4 x 500GB in RAID 6 on a PERC 6/i
Intel dual-port gigabit card
Redundant PSUs

Each has four ports trunked into a PowerConnect 2748 gigabit switch.

Storage:
nfs1: 4 x 500GB RAID 10 for VMs, 8 x 2TB RAID 6 for file storage (Fedora 14)
nfs2: 4 x 500GB RAID 10 for VMs, 12 x 2TB RAID 6 for file storage/backup (Fedora 14)

vMotion is sweet....
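Since vMotion needs all three hosts to see the same datastores, the NFS exports have to be mounted everywhere. If you'd rather script that than click through the vSphere Client three times, a minimal pyVmomi sketch like this would do it (the hostnames, export paths, and datastore names below are placeholders, not my actual config):

Code:
# Minimal sketch: mount the same NFS exports on every host in vCenter so
# vMotion has shared storage everywhere. All names/paths are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)

exports = [("nfs1.lab.local", "/export/vms", "nfs1-vms"),
           ("nfs2.lab.local", "/export/vms", "nfs2-vms")]

for dc in si.content.rootFolder.childEntity:
    for cr in dc.hostFolder.childEntity:
        for host in cr.host:
            ds_sys = host.configManager.datastoreSystem
            mounted = {ds.name for ds in host.datastore}
            for remote_host, remote_path, name in exports:
                if name not in mounted:   # skip datastores already mounted
                    ds_sys.CreateNasDatastore(
                        vim.host.NasVolume.Specification(
                            remoteHost=remote_host, remotePath=remote_path,
                            localPath=name, accessMode="readWrite"))
Disconnect(si)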
 
Got rid of the HP P4000 Hyper-V VSA and went with the Uber Celerra VSA. It's free, it supports both iSCSI and NFS, and it's working great. I'll be sticking with this for my lab.

[Screenshot: unisphere.jpg]
 
Found out the cause of the lockups on one of my lab servers was the CPU. Now I'm fighting Intel to get it replaced. They're claiming it's not a retail boxed CPU and therefore the warranty is through my reseller...when I have the retail box sitting right here.
 
So I'm thinking I want to consolidate my lab into VMware Workstation so I have some more room in my basement office. I have too much in there as it is. I'm thinking along the lines of this hardware:

SuperMicro MBD-X8DTE-F-O
2 x Intel Xeon E5504 (Nehalem)
24GB memory
3 x Intel dual-port PCIe Gigabit NICs (eight NICs total with the onboard pair, plus integrated IPMI 2.0 with a dedicated LAN port)

This will run two 2-node ESXi clusters, plus 2 x EMC Uber Celerras for DR testing, etc. Windows 7 as the host OS with the latest version of VMware Workstation.

Will this cut it?

Also...does anyone know how to pass physical disks through to the Uber VSA? I'd rather have that ability than have to carve out virtual disks and then run the VSA's disks on top of them. Or am I missing something here? When I add the physical disk, the Uber VSA doesn't see it.

I'm getting horrible performance over NFS trying to set up a 2K3 R2 box for vCenter. It could be the box I'm hosting the VSA on, though. I'll have to look into it more, but alas, I need to take a break from all things IT...lol.
 
Pretty basic compared to what many of you are doing, but I'm really just trying to get my feet wet with it. My main interest in the beginning was the virtual networking and utilizing remote storage. A bonus is being able to remote into various versions of Windows at the click of a button. I currently work in the technical support field and being able to quickly access whichever version of Windows a client is using makes some processes far easier.

For the past few months I've been using a pair of dual Xeon systems...

Intel SE7500WV2
2x 2.2GHz 400FSB Xeons
1GB PC2100 ECC Registered
2x 36.7GB Seagate 15k Cheetahs in RAID1 on a Dell Perc 4/sc

A very basic system running ESXi 3.5u5, virtualizing IPCop, a basic Ubuntu 10.04 server (very basic website), and a PXE (UDA2) server.

Intel SE7320VP2
2x 2.8GHz 800FSB Xeon LVs
8GB PC2700 ECC Registered
1GB USB stick

A bit more respectable as far as RAM is concerned. Also running ESXi 3.5u5 off the 1GB USB flash drive. No local storage on this machine; it had been using an NFS datastore on a FreeNAS box (dedicated second gigabit connection directly to the NAS). Had a few Windows installations and a couple of *nix flavors.

I ended up picking up a pair of dual Opteron 275 (dual-core 2.2GHz) combos that I was going to replace the above Xeon systems with, but when I saw their power consumption on my Kill-A-Watt compared to the Xeons, I couldn't justify the cost of running them 24/7 at home.

I'm waiting for a couple more parts to show up, but I'm basically replacing the above systems with a single quad-core 2.8GHz Phenom II setup, starting with 4GB RAM and upgrading to 12GB once I get rid of a few older parts to fund a pair of 4GB sticks. Far more power-efficient, and it will be more than capable for what I'll be doing with it.
 
I'm looking to order a couple of cheap Dells to start testing ESXi. In production we're moving from a DroboPro on our single host to an MD3200i, and I was wondering if a DroboPro could do iSCSI for two ESXi hosts even though the documentation says it shouldn't. Performance and reliability aren't a huge concern, since it's something I'm going to wipe and start over with fairly regularly. Will this work, or is a DroboPro actually single-host only?
 
Basic setup here at home for VMware testing and lab:

2 x Dell 2950, 2 x quad-core Xeon [email protected], 16GB RAM + local storage (146GB 10k & 15k drives)
1 x Dell R5400 workstation (dual 2.0GHz quad-core, 8GB RAM + 2 x 500GB SATA local storage)

1 x Dell MD3000i (4 x 146GB 15k drives for now; will add more 146/300GB 15k drives in the future)

1 x QNAP TS-809 (8 x 2TB Seagate SATA drives) - utilizing this for backup storage and NFS/iSCSI mounts for not-so-important VMs.

Hoping to sell the TS-809 and replace it with an OpenSolaris-type machine (ZFS w/block-level dedupe) if I can find someone that wants it.

It's been a pretty good setup so far. Just re-configuring everything now... Nice to see everyone else's setups here.
 
Basic setup here at home for VMware testing and lab:

2 x Dell 2950, 2 x quad-core Xeon [email protected], 16GB RAM + local storage (146GB 10k & 15k drives)
1 x Dell R5400 workstation (dual 2.0GHz quad-core, 8GB RAM + 2 x 500GB SATA local storage)

1 x Dell MD3000i (4 x 146GB 15k drives for now; will add more 146/300GB 15k drives in the future)

1 x QNAP TS-809 (8 x 2TB Seagate SATA drives) - utilizing this for backup storage and NFS/iSCSI mounts for not-so-important VMs.

Hoping to sell the TS-809 and replace it with an OpenSolaris-type machine (ZFS w/block-level dedupe) if I can find someone that wants it.

It's been a pretty good setup so far. Just re-configuring everything now... Nice to see everyone else's setups here.

An MD3000i for home use? Wow, I'm jealous... Was it expensive?
 
An MD3000i for home use? Wow, I'm jealous... Was it expensive?

Well, I do use it to host my company's Exchange and Lync (PBX+) servers too, so it does have some business purpose. 50/10Mbps business service at home, with UPS & generator backup. :)
 
OK...well, here's the deal. My plan was to consolidate into one system with more memory, etc., and use VMware Workstation to run everything. I'll list the issues I've encountered as a warning to all:

1. You cannot run 64-bit nested VMs on an ESXi host that is itself a VM under VMware Workstation, because ESXi only sees a virtual CPU and can't tell what you're really running.
2. EVC does not work.
3. FT does not work.

I built my lab for the sole purpose of learning, and obviously some of the functions above are necessary to get the most out of the learning experience. I'll have to go back to another i7/MB combo running straight ESXi off my USB keys. Before I do that, though, I want input from others. From researching and scouring the internet, the above appears to hold true, but I'd like to hear if anyone has found workarounds for these issues.
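For what it's worth, the one lead I've seen is that newer Workstation releases (8 and up, newer than what I'm running) are supposed to be able to pass hardware virtualization through to a guest via a .vmx flag, which is exactly what nested 64-bit VMs need. Something like this (untested, and the .vmx path is a made-up example) would add the flag with the VM powered off:

Code:
# Untested: append Workstation's "Virtualize VT-x/EPT" flag to a VM's .vmx
# if it's missing. Only helps on Workstation versions that support nested
# hardware virtualization; edit with the VM powered off. Path is a placeholder.
vmx_path = r"C:\VMs\esxi-node1\esxi-node1.vmx"

with open(vmx_path, "r+") as vmx:
    if "vhv.enable" not in vmx.read():
        # read() left the file pointer at EOF, so this appends
        vmx.write('vhv.enable = "TRUE"\n')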

Lesson learned.
 
Just a note... really happy with choosing SuperMicro for my lab servers. One motherboard had a bad DIMM slot, and they're cross-shipping a new one overnight at no charge. Glad I didn't have to go through Newegg.

I have one board that doesn't boot and another with bad LAN ports; both are being RMA'd tonight, and hopefully I'll have them back within a week. But I'd agree, their support has been great overall.
 
Why do you need to get an i7/MB for ESXi? I ran it on a Conroe-L Celeron for a while. The cheap newer chips do have 64-bit and all that, so what's the need for an i7?
 
I have a Q6600 and decided to buy a Lian Li PC-V351 (shout-out to Netjunkie), a Supermicro X7SBL-LN2 (plus an add-on card for IPMI), and a 550W power supply.

I have 6GB of DDR2. I'll probably just run a 2008 R2 box and a Cisco CCM 8.0 box, and dabble with Untangle/pfSense.

I wouldn't mind getting an X3440, but I don't want to build a real lab setup until I delve into the VCP course.
 
I have an HP GX609AA that I got in trade as an ESXi server. Right now it has a Q6700, 8GB RAM, an Intel dual-port NIC, and 1TB of storage.

I'm running 7 VMs on it right now.

My current gaming rig will become my next-gen ESXi server when the Sandy Bridge processors become available. It's an i7 920 @ 3.5GHz with 16GB RAM.

I'm also looking into an 8TB iSCSI box so I can build a VM to replace my Windows Home Server. My 1TB storage will then become 1.5TB RAID 5.

Anyone know if Windows Home Server can be installed in a VM? I can't get it to recognize storage in ESX for some reason...
 
Not technically my home setup, but close enough. Sadly, I'm not able to get 64-bit guests running on the DL140 yet :(
PVESXi - HP DL360 G6 running ESXi
  • Dual Xeon E5506
  • 32GB RAM
  • 2 x 146GB 10k RPM SAS drives in RAID 1
DEVESXi - HP DL140 running ESXi
  • Dual Xeon 5140s
  • 32GB RAM
  • 2 x 73GB 10k RPM SAS drives in RAID 1
DEVSTOR - Dell something-or-other running Openfiler
  • LSI SCSI card
  • HP Smart Array P411i w/256MB battery-backed cache
  • 6 x 146GB 15k RPM SAS drives in RAID 5
  • 3 x 320GB 7.2k RPM SATA drives in RAID 5
  • 3 x 1TB 7.2k RPM SATA drives in RAID 5
  • 8 x 180GB 7.2k SCSI drives in RAID 10
 
I have just assembled the PC that will host ESXi. As per the post above, I have:

Intel Q6600
Lian Li PC-V351 (this thing is a nightmare for cable management with the power supply)
Supermicro X7SBL-LN2 (boo...no HD audio, and the manual was poor in stating the necessary pinout)
IPMI DIMM (for remote management)
Antec BP550 Plus RT 550W power supply
750GB Seagate HDD (just had it lying around)
6GB of DDR2 (I know, I know, the more RAM the merrier)

I don't plan on getting more RAM, as I plan to use this as a simple test bed. When I take the vSphere class, I'll invest in two Xeon boxes (hopefully Intel updates the line soon).

My question: are the two onboard NICs good enough for hosting ESXi? (I only plan to have maybe four VMs: a 2008 R2 DC/DNS/etc., Cisco UCM 8.0, and maybe a VM to test Unix, etc.)

Also, since I can boot ESXi from a thumb drive, could I install Win 7 or whatever on the HD and just put the thumb drive higher in the boot order?

Thanks.
 
Drop the ESXi CD in the system with the empty USB key installed. The ESXi CD will boot and let you install to the USB key. When it's done, remove the CD and you have a bootable ESXi install on your USB key.

Takes about 10-15 minutes.

If it works and is that simple, why did someone create such a crazy workaround to make a bootable ISO on a jump drive?

Also, can an ESXi host be managed by a vCenter Server? Example: Netjunkie has his twin setup; can he add the two Lian Li whiteboxes into vCenter? Sorry for the incessant questions.
 
If it works and is that simple, why did someone create such a crazy workaround to make a bootable ISO on a jump drive?

Also, can an ESXi host be managed by a vCenter Server? Example: Netjunkie has his twin setup; can he add the two Lian Li whiteboxes into vCenter? Sorry for the incessant questions.

Because ESXi 4.1 makes it stupid simple. Boot the disc and point the installer at the USB drive. I use 4GB thumb drives.

ESXi doesn't matter; it's the license level you have that matters. ESXi goes from the free base hypervisor all the way up to Enterprise Plus, and it can do everything the "full" ESX can do. That's a common misconception. So yes, you can manage ESXi from vCenter. I'm doing it right now.
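If you want to see that for yourself from the API side, here's a quick pyVmomi sketch (the vCenter address and credentials are placeholders) that connects to vCenter and walks every managed host, ESXi or otherwise:

Code:
# Quick sketch: list every host vCenter manages and what it's running.
# vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], recursive=True)
for host in view.view:
    # e.g. "esxi01: VMware ESXi 4.1.0 build-xxxxxx"
    print(f"{host.name}: {host.summary.config.product.fullName}")
Disconnect(si)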
 
Here's my "all-in-one" ESXi / NAS server:

Intel Xeon 3440 Quad Core Processor (w/HT)
SuperMicro X8SIL-F Motherboard (dual nics, IPMI)
16GB ECC DDR3 Memory
LSI SAS 9211-8i HBA (PCI-e x8)
8GB Flash Drive (ESXi Boot)
2 x 1TB Drives (ESXi Datastore)
Fractal Design R3 Case

I have a NexentaStor appliance running as a VM with pass-through access to the HBA, and the following ZFS volumes:

2 x 1TB drives (ZFS mirror, 885GB actual)
6 x 1.5TB drives (ZFS raidz, 6.63TB actual)
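If anyone wonders where the "actual" numbers come from, it's mostly decimal-vs-binary units plus the raidz parity drive. A quick back-of-the-envelope check:

Code:
# Sanity-checking the "actual" capacities above: drive makers sell decimal
# terabytes, ZFS reports binary units, and raidz gives up one drive to parity.
TB = 10**12    # decimal terabyte (what's on the drive label)
TIB = 2**40    # binary terabyte (what ZFS reports)

mirror = 1 * TB / TIB             # 2 x 1TB mirror -> one drive of data
raidz = (6 - 1) * 1.5 * TB / TIB  # 6-drive raidz1 -> five drives of data

print(f"mirror: {mirror:.2f} TiB")  # ~0.91 TiB before overhead
print(f"raidz : {raidz:.2f} TiB")   # ~6.82 TiB before overhead
# The posted 885GB and 6.63TB land a bit under these; the difference is
# ZFS metadata/reservation overhead.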
 
ESXi doesn't matter; it's the license level you have that matters. ESXi goes from the free base hypervisor all the way up to Enterprise Plus, and it can do everything the "full" ESX can do. That's a common misconception. So yes, you can manage ESXi from vCenter. I'm doing it right now.

I see, so the more advanced features depend on the license level. Man, that Enterprise Plus license is pretty expensive. Could I get by on vSphere Advanced? I'm going to have to see how I can nab a license from my employer for my lab without raising eyebrows.

Fatguy, that's a pretty boss setup.

I'm going to post pictures of my box once I have ESXi loaded up (and have the license situation figured out).
 
I'm currently running 2008 R2 on my server, but I'm going back to ESXi when I finally pick up a real NAS. I recently replaced my hardware with an E6600 and 8GB of RAM; it was an older AMD system running 2003 Server/VMware Server.

I also run VMware Server on the box in my sig, and it's my current "lab" since my normal machine is still being tweaked.
 
Okay, so I officially have the new rig up and running.

MSI 785GM-E51 mATX
Phenom II X2 B53 (2.8GHz) unlocked to X4 B93
16GB DDR3
4x 250GB WD RE drives
1GB USB flash drive
1GB SD card on SD-to-IDE adapter

With the system booted up and idling at the ESXi yellow/black screen, my Kill-A-Watt reads a pretty steady 97W. While these may not be the most accurate devices, it's the same one that measured my old dual-Xeon 2.2GHz ESXi 3.5 box at 110-120W with two 10k SCSI drives (RAID 1), and my P4-M 1.2GHz NAS at 42W with two 5900RPM Seagate LP drives (RAID 1). When the 250GB 7200RPM drives get replaced with newer 2TB drives, I'll probably drop another 10-15W at idle. With this setup I'm getting far more performance while using far less power, and that's with an older Antec TruePower 430 (efficiency in the low-to-mid 70s). Even a new 80+ Bronze unit would shave off more.
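Just to put rough numbers on the consolidation (the $0.10/kWh rate below is an assumption; your rate will vary):

Code:
# Rough annual savings from consolidating: old dual-Xeon ESXi box (~115W
# midpoint) plus the P4-M NAS (42W), versus the new box at 97W idle.
# The electricity rate is an assumption; plug in your own.
old_watts = 115 + 42   # old ESXi box + NAS, both idle
new_watts = 97         # new consolidated box, idle
rate = 0.10            # USD per kWh (assumed)

saved_kwh = (old_watts - new_watts) * 24 * 365 / 1000
print(f"~{saved_kwh:.0f} kWh/yr saved, about ${saved_kwh * rate:.0f}/yr")
# -> ~526 kWh/yr, roughly $53/yr at the assumed rate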

The onboard Realtek 8111DL gigabit NIC works great after injecting the driver found on vm-help.

The SATA controller is in AHCI mode on the SB710 southbridge, and ESXi detects the drives fine. FreeNAS will be given direct access to them via physical RDM, either to stripe a pair of ZFS mirrors or to run a RAIDZ1. FreeNAS will provide network storage/backup for my laptops/HTPC and will be an NFS target for ESXi itself. Ideally no drive will fail until 2TB drives can be had in the $40-50 range; at that point I'll pick up four of those, plus a 30GB SSD for ZIL/cache to fill the fifth internal SATA port.

At the moment I'm limited by the SD card. ESXi boots from the USB flash drive fine, but I need the SD card to be a local datastore for the RDM VMDKs, my FreeNAS installation, and my IPCop installation. I thought I'd be able to do all of this with a 1GB SD card, but the vSphere Client tells me I need at least 1.2GB of usable space to create a datastore. Maybe I can get around this with some vmkfstools commands; I haven't researched it yet. Either way, I ordered a 2GB card for $6 last night.
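For the RDMs themselves, my understanding is it's one vmkfstools -z per disk, with the small mapping .vmdk living on that local datastore. Something like this (untested; the device IDs and datastore name below are made up, find the real ones with ls /vmfs/devices/disks/) would generate the commands to paste into Tech Support Mode:

Code:
# Untested sketch of the physical-RDM trick as I understand it: one
# "vmkfstools -z" per disk writes a small mapping .vmdk to the local
# datastore, and FreeNAS then sees the raw disk. This just prints the
# commands; device IDs and the datastore name are made-up placeholders.
disks = [
    "t10.ATA_____WDC_WD2500_____EXAMPLE1",
    "t10.ATA_____WDC_WD2500_____EXAMPLE2",
    "t10.ATA_____WDC_WD2500_____EXAMPLE3",
    "t10.ATA_____WDC_WD2500_____EXAMPLE4",
]
datastore = "sdcard-local"   # placeholder local datastore name

for i, disk in enumerate(disks, 1):
    print(f"vmkfstools -z /vmfs/devices/disks/{disk} "
          f"/vmfs/volumes/{datastore}/rdm-disk{i}.vmdk")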

The only issue I'm having is getting both of my Intel gigabit NICs detected in addition to the Realtek. For some reason it was only seeing one of the Intel NICs, but I hadn't really played with it much yet; I'd been more concerned with getting the den cleaned up and looking nice. UPDATE: Resolved the NIC issue. Turns out one wasn't quite seated correctly due to a slight bend in the card's bracket. Reversed the bend slightly, and the NIC now seats correctly.

VMs planned:
IPCop (I've had good luck with this virtualized in the past)
FreeNAS (eliminates a physical machine and improves performance over my old one)
XP, Vista, and 7, since it's a huge benefit at work to have these available via LogMeIn in a few mouse clicks (service desk tech)
Ubuntu 10.04 LTS server (fairly basic Apache/PHP website)
Ubuntu 10.x desktop
OpenIndiana (OpenSolaris fork)
Server 2K8 (strictly for testing/learning)

EDIT: Ah, and I also run VMware Workstation on my laptop. Upgrading my hard drive and bumping RAM from 2GB to 4GB made a nice improvement. I constantly have an Ubuntu VM running now, as well as an XP one most of the time...zero performance drop when running them now.
 