Your home ESX server lab hardware specs?

Finally built my ESXi whitebox, after waiting on another video card because the one I ordered was DOA. That was frustrating. Running ESXi 5.1 in trial mode while I study and practice with all of the features.


  • Gigabyte 990FXA-UD3
  • AMD FX 8320
  • 16GB G-Skill RAM
  • 1 TB Western Digital Black hard drive for datastores
  • MSI R5450 video card
  • Corsair CX750 power supply
  • Corsair Carbide 200R case
  • And a lowly 8GB flash drive to boot ESXi off of
I appreciate everyone's input on this forum; I did a lot of research before making the purchase. I love the fact that I nailed a setup with IOMMU and even a compatible video card, all for under $700. VMware sees 8 logical processors. :cool: Life's good.
 
I've recently come into some 1U servers that I want to build into an ESX cluster. I want to boot them from a USB key connected to the internal USB connectors on the motherboard, but being 1U boxes, I can't just stick a regular USB key in and be done.

Can anyone recommend a low-profile or right-angle USB stick, or some other solution that has worked for them?
 
Do let me know if the video card passthrough works on that whitebox build.
 
Video passthrough worked perfectly. I haven't created any Windows VMs yet, but VMware recognizes the device and I can mark it for passthrough. For $40, the video card gets the job done.
 
Well, Windows or Linux, whichever you're able to get working in vanilla passthrough mode would be awesome. I'm trying to do the same thing but ran into all sorts of problems with Ubuntu.
 
I'll test it tonight and let you know.
 
Ubuntu 12.10 will detect the card under restricted drivers. I installed the ATI/AMD proprietary FGLRX graphics driver before I added the video card to the VM. I don't have a Windows 7 disc/ISO to play with, but Server 2008 blue-screens when I add the video card to it. Might be time to resurrect this thread to discuss video passthrough.
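For reference, a rough sketch of how that driver install might look on Ubuntu 12.10 (this assumes the fglrx package from the restricted repository and is only an outline, not the exact steps used above):

Code:
# inside the Ubuntu 12.10 VM, before the passed-through card is attached
sudo apt-get update
sudo apt-get install fglrx       # proprietary ATI/AMD ("restricted") driver
sudo aticonfig --initial         # generate a basic xorg.conf for the card
sudo reboot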
 
Restricted drivers? Did you just apt-get the drivers without the card being installed? How did that work?

I just want to make sure it will work as an HTPC.

Thanks for looking into that, by the way.
 
I'll nuke my Ubuntu install and start fresh; this time I'll check whether the restricted drivers pick it up right off the bat. I'll post my results to the other thread to avoid hijacking this one.
 
It might very well be hijacking, but lots of people are interested in your findings. Yes, I'll look for your info in the other thread.
 
I built a dual-motherboard system in a MountainMods case (and call it my Borg Cube).

The left half is based on a
- SuperMicro X9SCM-F
- Xeon E3-1240 v2
- 16GB ECC
- LSI SAS9207-8i

running ESXi 5.1 for my

- pfSense
- Ubuntu with Plex
- Ubuntu for R
- test install of FreeBSD/ZFS (problem with the SAS9207 while booting; hw.pci.enable_msi=0 & hw.pci.enable_msix=0 helped; one CPU; still in test mode; see the loader.conf sketch below)
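For anyone hitting the same boot problem with a passed-through SAS9207, a minimal sketch of where those two tunables might live if set persistently (assuming FreeBSD's /boot/loader.conf rather than typing them at the loader prompt):

Code:
# /boot/loader.conf -- disable MSI/MSI-X for PCI devices; this is the
# workaround mentioned above for the SAS9207 hanging during boot
hw.pci.enable_msi="0"
hw.pci.enable_msix="0"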

Soon another 16 GB and more disks will join that half so it can become my home server.


The right half is based on a SuperMicro C7Q67 with an i7. That one is a sandbox for playing with ESXi 5.1 and passthrough (not yet very successful with graphics cards) and should eventually host Windows 8 along with more Ubuntu/FreeBSD VMs.
 
Hyper-V Setup

AMD A6-3670
Gigabyte A55M-DS2
8GB RAM
2x Western Digital Blue 500GB, RAID 1
Windows Server 2008R2

I didn't know if it could handle it or not.
 
Oh yeah, I thought you had 2012 running as a host to a 2K8R2 VM, lol. No, I've played with Hyper-V on 2012 and it's cool. I bet 2K8R2 is pretty legit too.
 
I've tried that to no avail. Still got the storm, probably because I had two SAS2008-based cards passed through.
 
Well, I'll definitely look into this again. I've got a beefy NAS being built and would love to get FreeBSD virtualized.

X8DTE-F-B
48GB DDR3 ECC
2x L5520s
IBM 5015
Intel SAS expander
15 3TB drives, soon to be 20

Awesome find, man. Thanks.
 
Thanks for pointing to and sharing those hints, especially the different MSI setup; I'll give them a shot tonight.
The VM BIOS settings I have already done, and I've disabled the floppy etc.
I get 137 MB/s write speed on a single disk with the 9207 and a WD Red 2TB.
 
Christian, that's awesome. Those are pretty much native speeds for that drive, right?
 
So it's up:

AMD FX 8120
Asus 990FX Sabertooth
32GB Corsair Vengeance
1 TB Seagate
80 GB Seagate
NFS (from WHS11)
Broadcom NIC
ATI 2400 Pro


[screenshot: esxi_zps52583279.jpg]


Does everything look right?
 
Yep, looks good. What kind of performance are you getting from Windows serving NFS? Just curious.
 
If you tell me how to test that, I will. To be honest, I don't think the NFS share is working properly. I put some data in there but I can't see it.

OK, the NFS is working. I will install something tonight and test out the speed.
 
If you have a Linux or Unix VM on the host, and the VM is hosted on an iSCSI disk, you could try using the dd tool to test writes.
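As a rough illustration, such a test might look like the sketch below (assuming GNU dd inside a Linux VM; /tmp/ddtest is just a hypothetical path on the datastore-backed disk, and conv=fsync forces a flush so the write number isn't just cache):

Code:
# write ~4GB of zeros and flush to disk before reporting
dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 conv=fsync
# drop the Linux page cache so the read-back isn't served from RAM
echo 3 | sudo tee /proc/sys/vm/drop_caches
# read the file back
dd if=/tmp/ddtest of=/dev/null bs=1M
rm /tmp/ddtest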
 
I fear my measurements are a bit "unscientific" :eek: (it's ok; I'm a n00b :D )

Directly after rebooting the VM, with this command (run in a directory on the ZFS filesystem behind the 9207):
Code:
dd if=./test of=/dev/zero bs=1M count=1024

Result
1073741824 bytes transferred in 9.051990 secs (118'619'422 bytes/sec)

The next two read attempts already return cached values:
1073741824 bytes transferred in 0.108633 secs (9'884'139'916 bytes/sec)
1073741824 bytes transferred in 0.097115 secs (11'056'390'709 bytes/sec)

When I recreate the test file with
Code:
dd of=./test if=/dev/zero bs=1M count=1024

1073741824 bytes transferred in 16.024828 secs (67'004'889 bytes/sec)
1073741824 bytes transferred in 3.142094 secs (341'728'115 bytes/sec)
1073741824 bytes transferred in 7.699660 secs (139'453'147 bytes/sec)

and a new file
Code:
dd of=./test1 if=/dev/zero bs=1M count=1024
Result:
1073741824 bytes transferred in 3.070250 secs (349'724'554 bytes/sec)

What does it mean? I need some hints/help on how to measure the disk I/O consistently, not the cache.

Update:
Code:
diskinfo -vt /dev/da1

Result
Code:
/dev/da1
	512         	# sectorsize
	2000398934016	# mediasize in bytes (1.8T)
	3907029168  	# mediasize in sectors
	4096        	# stripesize
	0           	# stripeoffset
	243201      	# Cylinders according to firmware.
	255         	# Heads according to firmware.
	63          	# Sectors according to firmware

Seek times:
	Full stroke:	  250 iter in   8.155693 sec =   32.623 msec
	Half stroke:	  250 iter in   5.796541 sec =   23.186 msec
	Quarter stroke:	  500 iter in   9.527554 sec =   19.055 msec
	Short forward:	  400 iter in   3.213514 sec =    8.034 msec
	Short backward:	  400 iter in   2.787221 sec =    6.968 msec
	Seq outer:	 2048 iter in   0.234938 sec =    0.115 msec
	Seq inner:	 2048 iter in   0.180347 sec =    0.088 msec
Transfer rates:
	outside:       102400 kbytes in   0.740011 sec =   138376 kbytes/sec
	middle:        102400 kbytes in   0.838504 sec =   122122 kbytes/sec
	inside:        102400 kbytes in   1.452345 sec =    70507 kbytes/sec

Update 2: dd file size bigger than the VM's memory (3GB assigned):

Write test
Code:
dd if=/dev/zero of=./test bs=1M count=4000
4194304000 bytes transferred in 33.403257 secs (125'565'720 bytes/sec)
4194304000 bytes transferred in 34.241038 secs (122'493'483 bytes/sec)
4194304000 bytes transferred in 34.052467 secs (123'171'811 bytes/sec)

Read test
Code:
dd of=/dev/zero if=./test bs=1M count=4000

4194304000 bytes transferred in 33.485603 secs (125'256'936 bytes/sec)
4194304000 bytes transferred in 26.600475 secs (157'677'787 bytes/sec)
4194304000 bytes transferred in 26.119918 secs (160'578'758 bytes/sec)
 
I'm not sure about writes and the cache; I understand how getting results that are just from the controller cache is frustrating. What if you ran dd to write or read with a count= value greater than the size of the cache?
 
Did that in my later approach with a 4GB file size (and 3GB of assigned memory): the data looked more consistent.
Write speed around ~123 MB/s;
read speed around ~135 MB/s.
That was also backed by "zpool iostat".
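For anyone wanting to cross-check dd numbers the same way, a minimal example of watching pool throughput while a test runs (the pool name "tank" is just a placeholder; the trailing 5 prints a new sample every five seconds):

Code:
# per-vdev read/write throughput, refreshed every 5 seconds
zpool iostat -v tank 5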
 
Not exactly ESXi, but still my home VM server.

HP MicroServer N40L
  • RAC
  • 8GB RAM
  • 2x320GB RAID 1 Boot Array
  • 2x40GB RAID 1 VM Array (only 2)
  • 4x1TB RAID 5 Data Array (toying around w/ switching it to RAID 10)

Server 2008 R2 SP1
  • Hyper-V
  • WSUS
VMs
  • Ubuntu Server 12.04 LTS running Amahi
    • 20GB HD
    • 1 vCPU
    • 2GB RAM
  • PBX in a Flash
    • 10GB HD
    • 1 vCPU
    • 1GB RAM

I know it's not a supported config, but it works for what I need. I'd use ESXi, but my "RAID" cards are not detected by it (onboard softraid & a Silicon Image 3132).

Maybe in the near future I can get an actual HW raid card.
 
I'm currently running 4 boxes:
pfSense 1 x 250GB
WS08 2 x 1.5TB
WHS 2 x 1.5TB
Ubuntu 1 x 250GB

I just picked up the following setup based on several reviews. I managed to get the 2nd NIC working ;)
The software RAID controller does not work with ESXi, so I'm looking to pick up an HP Smart Array P410 controller.

3 Western Digital Red WD30EFRX 3TB IntelliPower SATA 6.0Gb/s 3.5" Internal Hard Drive -Bare Drive
$539.97

2 Kingston 8GB 240-Pin DDR3 SDRAM DDR3 1333 ECC Unbuffered Server Memory Server Hynix M Model KVR13E9/8HM
$159.98

1 Intel Xeon E3-1220 V2 Ivy Bridge 3.1GHz (3.5GHz Turbo) LGA 1155 69W Quad-Core Server Processor BX80637E31220V2
$214.99

1 SUPERMICRO MBD-X9SCM-F-O LGA 1155 Intel C204 Micro ATX Intel Xeon E3 Server Motherboard
$204.99

What's the best way to set up the RAID arrays? (See the rough capacity math below.)
4 x 1.5TB RAID 1+0
3 x 3TB RAID 5
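For a rough usable-capacity comparison of those two options (drive counts and sizes from the list above, before filesystem overhead): 4 x 1.5TB in RAID 1+0 gives (4/2) x 1.5TB = 3TB usable with mirrored-pair redundancy, while 3 x 3TB in RAID 5 gives (3-1) x 3TB = 6TB usable with single-parity redundancy.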
 
I would go with the ESX all-in-one setup (check the napp-it all-in-one setup) and wouldn't buy a P410 controller...

You could save some money (you would still need to buy a SATA controller; an IBM M1015 goes for around 100€) and spend the rest of the money on more hard drives (the same machine could later be turned into a NAS if that is what you need)... Anyway, take a look at that solution...

As far as RAID goes, it's up to you... RAID 10 is faster for reads and writes, but you lose space, whereas with RAID 5 you don't, but you lose some read and write speed... It depends on your I/O operations...

Matej
 
Posting my home lab setup; I've been reading this forum for a lot of years but have always been too lazy to post :p

Primary ESXi Host:

[image: 3.jpg]


ESXi 5.0
Fractal design XL R2
Asrock 990FX Extreme 9
AMD FX 8120
Intel Pro GT NIC PCI
32 GB Corsair Vengeance 1600 MHz
3x 3TB Seagate
1x 1TB Seagate
1x 250 GB Samsung 840
500W Corsair PSU
Passively cooled Nvidia something
Corsair H100

(Note the Muffins!)


Secondary ESXi Host:

[image: 1.jpg]


ESXi 5.1
NZXT Silent Whisper
MSI 890FXA-GD70
AMD X6 1155T
16 GB Corsair Vengeance 1600 MHz
Some HDDs 2TB -> 1TB
2x 120 GB OCZ SSD
430W Corsair PSU
Note the awesome GPU heatsink (AMD 5450)
Antec 620
2 Random sata controllers


Vmware Workstation/Backup server

[image: 2.jpg]


Vmware Workstation 9
Fractal design XL
Gigabyte motherboard (can't remember the model)
AMD x4 955 Black edition
Some NICs
16 GB A-Data 1600 MHz
6x 2TB WD Green
1x 500 GB Samsung OS
400W Corsair PSU



Running the two ESXi hosts in a cluster; the workstation is only for some tests. Everything runs 24/7.

Everything is soon to be rack-mounted.

[image: 6.jpg]



Just got some lab gear to play with from work! :D

[images: 4.jpg, 5.jpg]
 
eBay has the P410 with 256MB BBWC for ~$165 USD, which makes it about 30% more expensive than the M1015, but the M1015 has no cache, which will cause a performance hit on your volume.

I'd suggest going with the RAID 10 volume on the P410, making sure you find a card with the battery-backed cache enabled. If you want to splurge a bit, find a card with 512MB of flash-backed cache for even bigger performance gains on your RAID volume.


I just finished putting together my home lab and posted about it here, but the highlights are:

- five Supermicro barebones servers with an AMD 8-core CPU and 16GB each (two for a Microsoft cluster and three in an ESXi cluster)
- NetApp FAS2050 iSCSI filer with 20x 300GB 15k FC drives

Yum!

[screenshots: ESXcluster.PNG, FailoverClusterManager.PNG]



The one thing I am bummed about is not having enough NIC ports on each server to split out the heartbeat network from the failover network, but whatever...
 
That's got to be the first time I've seen a DIY setup with failover and the like. What's your use case? Just tinkering? Concept testing?

I mostly do shit for shits and giggles, but that's pretty cool.
 
I bet that 2050 is loud. I want one at home, but I can't justify the noise levels.
 
I built this to keep my tech chops up. A homework assignment in HA, if you will. And now that it's built, I'm going to be doing a bunch of data warehouse stuff on it for a little while.


I agree. The FAS2050 sounds like a jet engine. I'm keeping it just long enough to get everything running, then I'm going to sell it off and build an iSCSI target with less performance, but far, far quieter and with less power draw.
 