Your home ESX server lab hardware specs?

I don't mean to take this thread off topic, but here's what I've got; maybe you can help me out.

ESXi is installed on the following box:

Dual L5639s
72GB RAM
20x 3TB drives
etc.

I want to run FreeBSD for ZFS but also leverage that storage for the VMs.

Can I route it back to ESXi via NFS and keep the VMDKs on it?

And also carve out some of it for a network file share?
 
Provide an NFS export for ESXi and then build VMs using it as a datastore. Easy stuff.
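Roughly, it's just an export on the FreeBSD side plus a mount on the ESXi side. A minimal sketch, assuming a ZFS dataset at /tank/vmstore, the storage box at 192.168.1.10, and a 192.168.1.0/24 LAN (all placeholder names and addresses):

```
# /etc/rc.conf -- enable the NFS server
nfs_server_enable="YES"
rpcbind_enable="YES"
mountd_enable="YES"

# /etc/exports -- export the dataset, root-mapped so ESXi can write
/tank/vmstore -alldirs -maproot=root -network 192.168.1.0 -mask 255.255.255.0

# on the ESXi host, mount it as a datastore (ESXi 5.x syntax)
esxcli storage nfs add --host 192.168.1.10 --share /tank/vmstore --volume-name zfs-vmstore
```

The VMDKs then live as files on that datastore, and you can export other datasets from the same pool for general file sharing.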
 
Sweet! Thanks guys, I'll look into that, and maybe even make a thread for it later on. Back to gawking at sick hardware.
 
Gotta thank eBay for making this system happen. The system is running great and is very capable. I've got a couple more parts to get before things are set up the way I want.

Chassis: Supermicro SC846TQ
Motherboard: Supermicro X8DAH+
CPU: 1x Intel Xeon E5620 (will add another when funds permit)
Memory: 12GB Corsair Dominator 1600MHz (would have gone ECC, but funds have been tight recently)
Storage: WD 500GB VelociRaptor and 4x 3TB WD Reds
GPU: XFX 7750 Core Edition
NIC1: Intel X540-T2 10GbE
NIC2: Intel i350-T4 1GbE
VMs: Ubuntu Server 13.04, pfSense, and FreeNAS for now.
 
Got a ten gigabit switch?

Yep. Netgear XS708E 8-port switch. It's been solid so far handling the bandwidth between my machines. I had to tweak some settings on the network cards and within Windows to push the speeds my RAID arrays are capable of. Haven't tested 10GbE bandwidth in non-Windows environments yet.
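For non-Windows testing, a plain iperf run between two boxes is the usual sanity check. A rough sketch (the address and interface name are placeholders; parallel streams and jumbo frames are often needed to actually fill a 10GbE pipe):

```
# on the receiving machine
iperf -s

# on the sending machine: 4 parallel streams for 30 seconds
iperf -c 10.0.0.2 -P 4 -t 30

# jumbo frames usually help at 10GbE -- both NICs and the switch must agree
# (ix0 is the FreeBSD name for an Intel 10GbE port; adjust for your OS/driver)
ifconfig ix0 mtu 9000
```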
 
Damn, I need one.

But I don't need one at $814.

I've got some Dell X997s that I want to put to use... but I think I'll just direct-connect them.

Nice find with the switch, though.
 
I have 3 or 4 CNAs for my home lab but no 10Gb SFP+ switch. The ones I can get are all too loud... Oh well.
 
I used a Plextor M5M 128GB recently in a Gigabyte BRIX. It was faster than I expected, and the price is pretty reasonable. I think 64GB is the smallest they make, though.
 
I was looking at the Plextor, but its write speeds were much slower than the others I've looked at when comparing the 30GB and 60/64GB models.
 
Scored a Xeon X3480 ES for only $100 on eBay to swap out the i7-870 in my P55 ESXi host. I can use 32GB of RAM now (and it's a much more fitting chip anyway).
 
I'm planning to build an ESXi home server as well. I've decided on the Supermicro X9SCM-iiF, since I'm using an E3-1200 series CPU on my desktop. Has anyone tried installing non-ECC RAM on this motherboard? Because of budget issues, I'm still considering whether to switch to an AMD platform, since what I want to test is basically firewalls, routers, and switches. Here is my ideal list.

MB: X9SCM-iiF
CPU: Intel Xeon E3-1230
RAM: 24GB (Non-ECC)
Hypervisor: ESXi 5.1
 
Xeons require ECC memory; otherwise the PC won't start...
Also, be careful to buy the right ECC memory (unregistered/unbuffered).
 
OK, well, I know Xeons of course support ECC, but if it's indeed a "requirement", that must only apply to server mainboards, because I'm running the above-mentioned X3480 just fine in my P55 'consumer' board with non-ECC (3.5GHz at stock volts, woo). Though I don't really understand that, since the memory controller is on the CPU now.
 
So what's the deal with OS X guests, exactly? Certain versions are supported officially if the ESXi host is Apple hardware (and it "just works" like any other OS)? Otherwise you need the "unlocker", a la a hackintosh?

At first I thought I'd read that 5.5 was the first version with official support, but on further review that looks to be about the Web Client now being supported on OS X.
 
Per the EULA it has to be on Apple hardware, and vSphere checks for that. So either run it on real Apple hardware or apply a hack to enable it.

5.5 added OS X support for things like OVA deployment and console access in the Web UI.
 
Well, my home server is at least halfway decent:

SuperServer 7046A-HR+F

Chassis - CSE-745TQ-R1400B
Motherboard - X8DAH+-F
CPU - 1x L5639
Memory - Kingston KVR1333D3D4R9S/4G x 4, KVR1333D3D4R9SK2/16G
Storage - 2x 160GB VelociRaptor, 1x 500GB Constellation.2


*On my "To Get" list*

-Another L5639 (sometime this week)
-One more KVR1333D3D4R9SK2/16G
-M1015
-X520-DA2
-5x 2TB Seagate ST2000VN000
 
Are Dell PowerEdge C1100 servers (2x Xeon L5520 @ 2.26GHz) good enough to run ESXi deployments these days? I know they're 3-4 years old, but would they suffice for a home ESXi cluster (two of them, obviously)? I'm looking to do some testing, yes, but also to run a media server VM (Plex & Subsonic) that serves 10+ clients (multiple transcodes at once), as well as a few different Windows Server VMs.
 
@jimphreak

Those processors should be able to handle at least 4 simultaneous transcodes, but I doubt 10. Also, I hope you have fast storage.
 
And yeah, I don't have 10 clients transcoding at once, but often 2-4. My plan is to use SSDs for local VM storage on the Dell servers, and I'm in the process of putting together a new RAIDZ2 NAS (FreeNAS) for all my media, VM backups, etc.
 
@jimphreak

At least the 5530s have turbo, I think. You might do better with higher-clocked i7 26xx or 37xx chips, etc.
 
@jimphreak

I've had two of these with 72GB of RAM each for several months now. While I'm not running ESXi anymore, they're running Hyper-V 3.0 and I've had no issues with them so far.
Currently I'm running 11 VMs between the two (one of them is a Plex server).
 
I thought about it, but went with two HP MicroServer Gen8s. I replaced the processors and added 8GB of RAM and 3TB of storage to each for my testing.
 
Just updated my whitebox over the weekend from an Asus Rampage IV Gene; I almost went with a GIGABYTE GA-6PXSV4.

Supermicro X9SRL-F
Kingston KVR16R11D4/16HA DDR3-1600 16GB


 
How well would an HP Z200 workstation work for some ESXi action?

It's i3-530 based, with a 16GB RAM ceiling.

I ended up snagging a couple of them from work, as they're now decommissioned, out-of-warranty hardware.
 
Added a second MicroServer and created a cluster; DRS is doing a very nice job of keeping it balanced.

 
Lab storage expansion coming!

Moving from 10x2TB drives in Windows Storage Spaces serving up VMware, Hyper-V, and file share vdisks to:

VM Storage Space in Windows 2012 R2 using SSD tiering
4x 512GB Toshiba SSDs
8x 600GB WD VelociRaptors

File Share Storage Space
5x 3TB 5,400RPM drives

While my original setup ran very well, I typically had 15+ VMs running at a time, plus the family randomly playing music or movies from the file share, and whenever I performed any disk-intensive task my read and write latencies would spike dramatically. Having 30% of my VM LUNs on SSD and the rest on 10,000RPM drives, with the file shares on their own disks, should alleviate that.
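For anyone who wants to replicate the tiering, it's all driven from PowerShell in 2012 R2. A rough sketch under assumed names and sizes (the pool, tier, and vdisk names are made up; size the tiers to your actual disks):

```powershell
# pool up every disk that's eligible for Storage Spaces
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "VMPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# define the SSD and HDD tiers within the pool
$ssd = New-StorageTier -StoragePoolFriendlyName "VMPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "VMPool" -FriendlyName "HDDTier" -MediaType HDD

# carve out a mirrored, tiered virtual disk: 500GB of SSD plus 1TB of HDD
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMSpace" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 500GB, 1TB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB
```

Storage Spaces then promotes hot blocks to the SSD tier on a scheduled optimization job (nightly by default).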
 
Just got my new ESXi host (first of two) delivered:
- Core i5 4570S
- ASRock B85M-ITX
- Crucial 16GB DDR3

Already have:
- Intel quad gigabit NIC
- 8GB stick
- PicoPSU 150W

The board maxes out at 16GB, which is not that big a deal since I'm getting two of them for VCP study and other stuff; 32GB total will be sufficient for testing.
The ASRock board has an Atheros chip, which ESXi doesn't support, so I'll be equipping it with the quad gigabit NIC.
I went mini-ITX because I'm trying to find a chassis that can house two boards.
 
My current home lab build, running ESXi 5.5 Enterprise Plus:

HP DL360 G7 + iLO 3 Advanced
Dual quad-core Xeons with Hyper-Threading @ 2.2GHz
104GB RAM
2x OCZ Vertex 4 512GB SSDs
6x 1TB WD Red in RAID 5
Dual 480W PSUs
10 NICs - 4 onboard, 4 on a quad-port PCIe card, and 2 on a dual-port card

I used to have a whitebox build, but realised it was easier to pick up an HP server for £500 and add hardware over time to end up with a really powerful and power-efficient box.

[screenshot: HP G7 spec summary]
 