Your home ESX server lab hardware specs?

So now I know my 290 will 100% not work. Anyone got an ideal replacement?

I can then sell my old workstation/ESXi host, which is a 4770 with 32GB RAM. It would be awesome if I could use some of the power of this server, as I could create a fully virtualized domain test lab and still give myself 32GB of RAM.

How does it fail?

I have read some things about needing to manually edit .vmx files under ESXi to add memory holes (see the snippet below) when passing GPUs through to guests with more than 2GB of RAM.

I wonder if this is related.

It's a different platform than your DL380, but it might be related:

See here:
https://communities.vmware.com/message/2330282
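For anyone hitting that, the workaround usually described is adding a PCI hole to the guest's .vmx while the VM is powered off. The exact values vary by card, so treat these lines as a starting point rather than a known-good fix:

pciHole.start = "2048"
pciHole.end = "4096"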
 
I just started playing with ESXi about a week ago, and I'm liking it a hell of a lot more than XenServer so far.

Current Clients include MineOS, Win7, and I'm just now getting around to installing WinServer 2k8.

System was built in the case of an old 1U D525 system that gave up the ghost. Now it's running an Intel i5 3330S, 16GB Memory, 120GB SSD, and an Intel Quad Gigabit NIC. Uses minimal power / idles really well :D
[ESXi host screenshot]
 

Nice,

What motherboard are you using? From your screenshot I see that DirectPath I/O is supported. This means VT-d is working. Most non-D Intel CPUs support it, but it is usually very hit or miss to find a consumer motherboard that does.
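If anyone wants to sanity-check a board before committing to ESXi, one quick-and-dirty test (assuming you can boot a Linux live USB on it with VT-d enabled in the BIOS) is to look for the IOMMU being initialized:

dmesg | grep -e DMAR -e IOMMU

If nothing shows up, the CPU, the board, or a missing BIOS option is the likely culprit.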
 
I'm using an ASRock H61MV-ITX 1155 board. I actually have three of them, but the other two are running G1610's + 16GB memory + 32GB SSDs for XenServer.

For a $50 motherboard that fits in tiny places, I've loved it.
 

Nice, good to know.

Someone ought to maintain a list of VT-d supporting consumer Intel motherboards.
 
I watch this thread all the time, but I've never posted in it. I'm stoked to finally have some hardware worth posting about.

I just picked up an HP ProLiant DL380p Gen8 with a single six core, 12 thread E5-2620v2 xeon and 16 gigs of ram for cheap. It's the smart buy version from the website with a P420i and 2GB of cache and 12LFF drive bays. I spent Saturday getting ESXi 5.5u2 onto an SD card in the internal reader. So far it's a beast compared to the machine it's replacing (Dual Quad-core Opteron HE procs, L1n64WS/b board, 8 gigs of ram).

I'm planning to purchase a second processor in the future, but I want to take the RAM up to 64 gigs in the meantime (3 more 16 gig sticks). I just snagged a 1.2TB Fusion IO ioDuo from TType85... I have a feeling this machine is going to be a beast.

Ultimately, I'd like to consolidate a few other machines onto this new host if possible. My old host would not support passthrough (way too old) in any way, but the new one does. I have an 18TB ReadyNAS Pro Business Edition that has all of my media content for XBMC / Kodi, the actual media center system running the XBMC front-end is an Apple Mac Mini, and a low power Soekris Engineering net6501-70 that I use to run pfSense. If it's possible, I'd love to throw an appropriately sized graphics card in the host and pass it through to a VM to replace the Mini, and virtualize the other two systems as well.
 
I just started a new build thanks to some killer flea-bay deals. I am moving away from 1366 machines to 2011/1356. I should have all the parts here next week.

Lab Server 1(Hyper-V) :
2x E5-2450L 8 Core 1.8Ghz processors
Tyan S7045GM4NR Motherboard
96GB DDR3 ECC Ram
Brocade 1020 10GB Adapter
Norco 4220
Intel S3500 DC 800GB SSD
4x1TB SAS
4x300GB SAS
Going to try to use iSCSI (see the sketch after these specs) to:

Lab Server 2 (ESXi)
2x E5-2418L 4 core 2.0Ghz Processors
Tyan S7042AGM2NR Motherboard
64GB DDR3 ECC Ram
Brocade 1020 10GB Adapter
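Curious how that works out. For the ESXi side, the software iSCSI initiator setup is roughly the following; vmhba33 and the target address are placeholders, and the real adapter name comes from the list command:

esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.1:3260
esxcli storage core adapter rescan --adapter=vmhba33

On the Hyper-V/Windows side, if it's Server 2012 or later, the built-in iSCSI Target Server role is the usual way to expose the LUN.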
 
Does anyone run a whitebox in a motherboard+cpu with Intel vPro?

I'm wondering if the hardware reporting in ESXi will work with the vPro CIM information.

So far, I've yet to find any sort of confirmation for this on the web.
 

Very nice, using that $99 Motherboard from ebay? Where did you find a good deal on the ram?
 


Nice on the 10GbE Brocades.

Would love to pick up a couple of these, but I don't have two servers I need to direct attach. It would be great to run them to a switch, but I don't want to spend $1000+ on a switch just to get 10GbE uplink ports. :(
 

To what are you connecting the Brocade adapters? Do you already have a 10G switch or are you just doing a crossover connection between these two machines?
 
Very nice, using that $99 Motherboard from ebay? Where did you find a good deal on the ram?

Yes, these are the $99 boards from eBay. Awesome deal for these boards.
I have had a few good deals from the FS section here on RAM and had some others lying around.


Zarathustra[H] said:
Nice on the 10GbE Brocades.

Would love to pick up a couple of these, but I don't have two servers I need to direct attach. It would be great to run them to a switch, but I don't want to spend $1000+ on a switch just to get 10GbE uplink ports. :(

To what are you connecting the Brocade adapters? Do you already have a 10G switch or are you just doing a crossover connection between these two machines?

The Brocades are just going to be connecting the hosts to each other with a twinax cable, no switch. The plan is for the ESXi server's storage to live on the Hyper-V box, exposed via iSCSI. The SAS drives are going to be controlled by a hardware RAID controller. My media server will be on this also, but it will be just JBOD protected by SnapRAID.

I have everything but the Norco 4220 in hand and have bench tested the systems. I did a quick initial test with the E5-2450Ls on Cinebench R15. It gave me a 1352 multi-core rating and 80 for single core. The dual X5650 (95W) listed in the benchmark is 1297 and 93.
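For the direct twinax link, the usual recipe on the ESXi end is a dedicated vSwitch and vmkernel port with a static IP in its own small subnet. Something like this, with vSwitch1, vmnic2, vmk1 and the 10.10.10.0/24 range all being placeholder names:

esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.2 --netmask=255.255.255.0 --type=static

The Hyper-V side just gets a static IP in the same subnet on its Brocade port.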
 
The Brocades are just going to be connecting the hosts to each other with a twinax cable, no switch. The plan is for the ESXi server's storage to live on the Hyper-V box, exposed via iSCSI. The SAS drives are going to be controlled by a hardware RAID controller. My media server will be on this also, but it will be just JBOD protected by SnapRAID.

Nice! If I didn't run my storage AIW inside my ESXi box, this is what I would do.

I would LOVE to grab a couple of brocades and run a line from my server to my main rig though, as data transfer to storage is currently limited by gigabit Ethernet, and I do a lot of transfers back and forth. Cable length would be a problem though.

What's the longest SFP+ direct attach cable confirmed to be working with the Brocades these days? I understand they can be quite picky regarding the cables (and the cables are on the expensive side for trial and error...).
 
Zarathustra[H] said:
Nice! If I didn't run my storage AIW inside my ESXi box, this is what I would do.

I would LOVE to grab a couple of brocades and run a line from my server to my main rig though, as data transfer to storage is currently limited by gigabit Ethernet, and I do a lot of transfers back and forth. Cable length would be a problem though.

What's the longest SFP+ direct attach cable confirmed to be working with the Brocades these days? I understand they can be quite picky regarding the cables (and the cables are on the expensive side for trial and error...).

Not sure, mine is only like 1m, but you could do SFP+ transceiver --> fiber --> SFP+ transceiver, and I think that may go 300-400m.
 
Picked up two Supermicro barebone Intel E5-24xx servers and one 4U case, all for $800 :D

Supermicro SC113MTQ-600CB + X9DBL-3F no cpus yet

Supermicro SC826E16-R1200LPB + X9DBi-TPF 2x E5-2420 v2

Supermicro SC846E16-R1200B + X9SRL-F 1x E5-2658 v1



 

Nice find.

I am liking my E5-24xx CPUs. I'm running E5-2450Ls in one box (8c/16t) and am putting up another with E5-2418Ls (4c/8t). So far the only downside is memory density on the motherboards. Mine are both Tyan and one only has 8 slots, the other has 12; still plenty for a home lab though.
 
The prices on eBay are just too good and I was lucky to get local pickup. I missed out on the 2011 barebones with 10Gb. I know memory is limited compared to a 2011 setup, but it's perfect for my needs.

I also got a Tyan S7045GM4NR board with two ES 2.2GHz E5-2420 v2s, I think, or close to it.
No VT-d, so I may use SmartOS.

[Photos of the hardware]
 
[Photo of the rack]


Switch: HP ProCurve 1810G-24
3U: Supermicro MicroCloud SYS5038ML-H8TRF w/ 2xE3-1220v3 (+6 blades waiting on CPU/RAM)
4U: Supermicro SC846 / X9SCM / E3-1230v2. 1TB SSD datastore, 24x2TB NAS
4U: Supermicro SC846 / X9SCM / E3-1230v1. 24x2TB Backup

Each host has a 128GB or larger 840 or 850 Pro SSD for vFRC. I'm waiting to finish populating the 3U until the E3-12xx v4 CPUs come out. Total noise level is 60dBA @ 3'. About the same amount of noise your average kitchen refrigerator makes. Oddly, the 3U blade system is quieter than the 4U NAS units. Not that I'm complaining! Only the NAS is sitting on the battery backup. It hosts the "important" VMs. Everything else is connected to a surge protector. It can crash and I won't care, but that one box needs to shut down nicely.
 
Um, holy freaking awesome, dude. Is that a special 10U enclosure?
 
Finally get to do some "upgrades" thanks to retired hardware from work ;)

Old home lab
  • Host 1 - Supermicro X8SIL-F-O with an Intel X3440 and 32GB RAM
  • Host 2 - Supermicro X83ST-F-O with an Intel E5640 and 24GB RAM
  • Dell 2848 managed network switch
  • SAN - Supermicro X8SIL-F-O with an Intel X3430 and 16GB RAM, 6x 3TB Seagate drives (3 mirrors), 4x SanDisk Extreme 240GB SSD (RAIDZ)

New lab gear consists of ;
  • 2x Dell R710 servers (dual Intel X5650 procs and 64GB RAM). I have the option of one more R710 but would need to grab dual procs and heatsinks from ebay... already have an extra 64GB RAM.
  • Cisco 3750G 48 port gigabit switch

Hosts are already in place... now I have to learn enough about Cisco to configure the switch (it was reset when taken out of service). Gives me something new to start learning :p Trying to decide if I really need the third host. Looks like I can get 2x X5650s for around $150-160 and another $30 or so for a second heatsink (so we'll just call it an even $200). After stripping out the SAS drives and throwing in a single SSD, these seem to be averaging around 160W on my typical workload... but I can run everything (for now) on one host if I want.
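If it helps, a bare-minimum config to get management access on a freshly reset 3750G looks something like this (hostname, addresses and credentials are placeholders):

enable
configure terminal
 hostname lab-3750g
 username admin privilege 15 secret SomePassword
 enable secret SomeOtherPassword
 interface vlan 1
  ip address 192.168.1.2 255.255.255.0
  no shutdown
 exit
 ip default-gateway 192.168.1.1
 line vty 0 15
  login local
 end
write memory

That gets you console and telnet management on VLAN 1; SSH needs a couple of extra steps (ip domain-name plus crypto key generation) and a crypto-capable IOS image.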

Now I need a real rack. I've been using an old AV rack (not nearly deep enough), so I just have to sit the second R710 on top for now. I really want a "half size" cabinet, but whenever I see one pop up on Craigslist they seem to want $400+ for it, when I could get a full size one for around $200...
 
Probably not as sexy as some of the gear I see posted here, but I grabbed two servers local to me that were comparable to what I would have paid on eBay.
I was going to use some HP Z200s I grabbed from work, but my patience has run thin trying to get compatible RAM to load them up to 16GB.
So, the "new" toys are two HP DL380 G5s. Each came with dual 5150s and eight 146GB 10K drives; one has 32GB of memory, the other only 8GB.

Yeah, power hogs, but they will not be run 24/7.
Thinking about having the one with 32GB set up with two nested ESXi hosts (see the .vmx note below), connected to the other one with 8GB, which would be set up to provide the storage. The one with 32GB of RAM would of course be diskless.

Not sure how this will function; it should be adequate for labbing.
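For what it's worth, the .vmx tweaks people usually quote for a nested ESXi 5.x guest are just these two lines. One caveat with that particular hardware: exposing VT-x to the nested host (vhv.enable) needs EPT, which the Xeon 5150s predate, so the nested hosts would be fine for playing with vCenter and cluster features but not for running much inside them.

guestOS = "vmkernel5"
vhv.enable = "TRUE"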
 
Single server lab at home:
HP Proliant ML150G6
2x Xeon E5540 @ 2.5GHz (16 threads, CPUs maxed out)
12x 4GB = 48GB DDR3-1066 ECC REG (maxed out)
Dell PERC6 RAID controller with 2x 4 port SFF-8484 to SATA cables
1x 256GB Crucial C300 SSD
1x 256GB Samsung 830 SSD
6x 500GB desktop HDDs on PERC6 in RAID10 (1.5TB usable, 3TB raw)


[Screenshots and photos of the server]
 
Running ESXi 5.5 Free

ESXi01:
HP XW6600
32GB RAM
2x E5450 Xeon (quad core)
WD Blue 500GB
Mounted 2 extra Gbit Ethernet cards (HP-branded Broadcom).

ESXi02:
HP XW6600
32GB RAM
2x X5260 Xeon (dual core)
320GB WD Blue
Mounted 2 extra Gbit Ethernet cards (HP-branded Broadcom).

External Storage:
Seagate BlackArmor NAS 440 with 4x 1TB Seagate drives in JBOD.

Network:
Zyxel GS1910-24
Ubnt ERL5Poe
 
Running ESXi 5.5 Enterprise Plus and vCenter 5.5 Standard - contact me if you are interested in learning how to purchase discounted licenses.

HQ-ESX-01 - Internal and QC environment
HP Z800 Workstation with 003 rev board
2x Xeon X5650 (hex core)
48GB RAM - half of the memory slots are populated. Planning on going higher if needed
1x 120GB Samsung 850 SSD - OS and cache
1x 40GB POS WD drive - Swap
Intel i350-t2

HQ-ESX-02 - DMZ and some Internal
MSI H67MA-E45 (B3)
1x Core i5-2300
32GB RAM
1x 120GB Samsung 850 SSD - OS and cache
1x 40GB POS WD drive - Swap
Intel i350-t4 - built-in NIC is incompatible with ESXi 5.5

External Storage - pure iSCSI implementation
QNAP TS-251 with 2x 480GB Samsung 845DC EVO JBOD - VM OSs
Netgear ReadyNAS NVX with 4x 1TB WD Red RAID5 - Data

iSCSI backend has a dedicated (5 port) switch running jumbo frames.
Management Network has a dedicated (5 port) switch for vMotion and management capabilities.
My PC has an Intel i350-T2 dedicated to the Management Network and the iSCSI backend to manage them.
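For reference, the jumbo-frame part on the ESXi side is just two MTU settings, assuming a standard vSwitch and vmkernel port named something like vSwitch1/vmk1 (placeholders here), plus the same MTU on the physical switch and the NAS:

esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

A quick end-to-end check is a don't-fragment ping with a near-9000-byte payload: vmkping -d -s 8972 followed by the target's IP.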

My "DMZ" host is directly connected to Verizon FIOS and doesn't use the Verizon router at all. I only use FIOS for Internet; I don't have any TV services through them.

VMs:
PROD (all running 2008 R2)
2x DCs
Exchange
File
vCenter
Web/FTP/TeamSpeak
TMG 2010 for firewall between DMZ and Internal
Jumpbox

QC (all running 2012 R2) will be promoted to PROD in about a week
2x DCs
File
Web/FTP/TeamSpeak
TMG 2010
Jumpbox
Also, QC has a vCenter Appliance which will be promoted to PROD as well.
 
Primary Hypervisor:
2*L5520's (two quad xeons w/HT @ 2.26Ghz)
48GB Memory
8*1TB HD's in RAID10
10gbps NIC

Secondary Hypervisor:
2*L5520's (two quad xeons w/HT @ 2.26Ghz)
48GB Memory
4*1TB HD's in RAID10
4*1gbps bonded NIC

Primary NAS:
2*L5640's (two six-core xeons w/HT)
72GB Memory
20*2TB HDs in ZFS effective RAID10 (pool sketch below)
60GB SSD - Ubuntu Server 14.04
10gbps NIC
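In case anyone wants the ZFS-on-Ubuntu flavour of that "effective RAID10", it's just a pool of mirrored pairs. A sketch with placeholder device names (by-id paths are the safer choice in practice):

zpool create tank \
  mirror /dev/disk/by-id/ata-DISK01 /dev/disk/by-id/ata-DISK02 \
  mirror /dev/disk/by-id/ata-DISK03 /dev/disk/by-id/ata-DISK04
# ...and so on, one "mirror diskA diskB" per pair, ten mirrors total for 20 drives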

VMs:


My main switch is a 48-port 500w Ubiquiti Edgeswitch.
 
Not as elaborate as what you guys are running... but it works.

Used to run 3x Dell Precision T3500s:
Quad-core Xeon
24GB RAM

Now the HP Z800 (dual CPU, 48GB RAM) takes charge...
I know I am not taxing them and it serves my purpose well...


[vSphere client screenshots]
 
Wow some of y'all have some beastly systems!

Note: I am not running my box primarily for work, but to lower power bills at home.

Running ESXi 6 free:
i7-4790 undervolted (-0.2 offset)
32GB of Kingston Value 1600
ASRock B85M Pro4
M1015 (IT mode)
i350-T4

7x 2TB Seagate Constellations
1x 256GB Samsung 850 Pro

Runs:
pfSense
Mythbuntu
Lubuntu (mdadm RAID 6, LVM, ext4 - see the sketch below)
2x Windows 10 desktops for "clean work machines"/RDP
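For anyone curious, the mdadm RAID 6 + LVM + ext4 stack on those seven Constellations boils down to something like this (device names and volume group names are placeholders):

mdadm --create /dev/md0 --level=6 --raid-devices=7 /dev/sd[b-h]
pvcreate /dev/md0
vgcreate vg_media /dev/md0
lvcreate -l 100%FREE -n lv_media vg_media
mkfs.ext4 /dev/vg_media/lv_media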
 
Hi :)

An ongoing project at home: third-party backup to Amazon AWS

http://www.virtualolivia.com/virtualolivia-home-data-center-amazon-aws-project-crazy/

[Home data center / Amazon AWS project diagram]


Supermicro X10SLH-F with Intel Xeon E3-1270 v3 on board
4 x 8GB GOODRAM ECC UNBUFFERED DDR3 1600MHz PC3-12800E UDIMM | W-MEM1600E38G
Be Quiet! Dark Power PRO 10 650W 80PLUS Gold
LSI MegaRAID SAS 9271-8i
6 x Seagate SV35 Series (3TB, 64MB, SATA III-600) (ST3000VX000)
Fractal Design Define XL R2 Grey

Soon I will migrate to rack mount chassis.

[ESXi vSphere screenshot]
 
Wow, you guys go all out on home labs! I don't have much money to throw at equipment right now, so I'm making do with an HP DL360 G6 || 2x quad-core E5540 @ 2.53 || 68GB PC3-10600R || 8x 300GB drives.
 

Nothing wrong with that. That is still a strong server.

Have fun with her!
 

Agreed. FCLGA1366 Xeons are still very viable servers. Mine is a similar vintage to yours, but my own build, with a Supermicro motherboard, Norco case and two L5640s at 2.26GHz (turbo up to 2.8).

I have no need for anything more. It runs my ~10 guests just fine. The only reason I'd upgrade is to get the power use down. It seems to hum along at just south of 300W 24/7 (then again, I DO have 12 3.5" drives in it...). I'd love to use less power, but thus far newer, more power-efficient chips haven't come down enough in price to justify their power savings, so I'll probably have this setup for some time to come.
 
I have two hosts at home:
G620, 24GB DDR3, 128GB SSD, 1.5TB HDD, single-port Intel NIC
i3-2100, 16GB DDR3, 1TB HDD, quad-port Intel NIC in a LAG to a Dell switch.

Combined they pull 70 watts idle and around 130 at load.
 
I'm probably going to put vSphere/ESXi 6.0 U1 on my quad Opterons with 128GB of RAM.
 

I will say that the Z800 is definitely a great system to be a hypervisor. I love mine. You can start relatively small with it (a single E5540 and 16GB) and upgrade it for cheap (I bought two X5650s for $150 on eBay).
It cost me about $800 to outfit it. It's not a deep pizza box, so I don't need a deep rack for it, and it's lightweight enough to sit on top of the shallow wall-mount 8U rack I use for comms. AND it's quiet compared to a pizza box.
 