Your home ESX server lab hardware specs?

Here's my setup, post wiring/chassis tornado. I'm just getting into ESXi and setting up the goods. This will hopefully be much cleaner when done.

Left to right:

Left/Top - 4-port D-Link gigabit switch
Left/Bottom - 48-port Cisco 10/100 switch (temporary)
Mid/Top - S5500GC w/ 2x E5506 and 28GB, 500GB disk. Used to run vCenter and hopefully as a NAS (more disks to be added)
Mid/Bottom - Intel WS board w/ Core 2 Quad Q9450 and 8GB, 2x 500GB disks. "Prod" for game and chat servers; will eventually move to virtual
Right/Top - HP DL380 G6 w/ 2x E5520 and 64GB. NO DISKS. Running ESXi
Right/Mid - HP DL380 G6 w/ 2x E5520 and 64GB. NO DISKS. Running ESXi
Right/Bottom - IBM x3550 7978 w/ 1x 5110 and 32GB. NO DISKS. Used as a doorstop and workbench :)

I still need to find a home for the hardware in my basement. This is how it sits until then. :/

Oh yeah. Sorry for the crappy pic. I unplugged the overhead light to plug in the monitor since I ran out of outlets with the servers/switches.
 
I'll post pictures later. I'm currently moving it from my network closet (too hot) to my office.

Dell R710
2x Intel Xeon E5507
64GB RAM
3x 1TB hard drives in RAID 5
2x 570W PSUs

All connected through an 8-port Meraki switch.


It's nothing special, just a home lab to screw around in.
 

Nice! Here are my specs. This is still a WIP and I don't have most of it racked yet. Natex forgot to ship my heatsinks with the server and I am waiting on the rails for my UPS in the mail.

VM Host:

Quanta QSSC-2ML
2x Intel Xeon E5-2650 v1
128GB ECC Registered DDR3-1333
10GbE NIC
VMware ESXi 6.0

Storage Server:

Dell PowerEdge R610
2x Intel Xeon E5520 (Soon to be replaced with L5620 CPUs)
24GB ECC Registered DDR3-1333
Mellanox ConnectX-2 NIC
LSI SAS9200-8e connected to a Promise vTrak E310s DAS unit with 8x 2TB SATA Drives
2x 600GB 10K SAS drives
2x 146GB 15K SAS drives

VM management server:

Dell PowerEdge R210 II
Intel Xeon E3-1220v1
16GB ECC DDR3 (unbuffered)
500GB SATA HDD
VMware vSphere

UPS:

Dell 2700W UPS with additional battery shelf

Networking:

150/10 cable internet
Ubiquiti Networks Edgerouter Lite
24-port unmanaged 3COM gigabit switch

I want to upgrade to some PoE gear, as well as a 10GbE switch.
 
Well fellas .... not as sexy as some of your setups, but this is what I have with my limited budget.
-Linksys WRT1200AC
-TP-LINK TL-SG1016DE
-HP Z200, i3-530 - Windows Server 2012 Standard (Core), file server (SMB, NFS, Plex)
-HP DL380 G5, dual L5148, 16GB, running HP-customized ESXi 6.0
-HP DL380 G5, dual L5148, 16GB, running HP-customized ESXi 6.0


What kind of case is housing everything?
 
Freaking sweet... 2 Synology (XPEnology) VMs set up for clustering. No wonder I break shit all the time :) Always tinkering.
 

Got XPEnology running on my home ESXi as well. Via PCI passthrough it has my HP P420 2GB RAID controller (with 8x 6TB HDDs in RAID 6).
Runs fine and I get very good performance over the 10Gbit/s network. But as soon as I try to host iSCSI storage internally for the host itself, performance over the internal 10Gbit/s network drops drastically. External iSCSI works just fine. Dunno, probably some performance issue with that small HP server...
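
For what it's worth, here is a rough sketch of how one might compare sequential write throughput on the internal vs. external iSCSI-backed mounts; the mount paths and test size are placeholders, not the actual setup:

```python
# Rough sequential-write throughput test for two iSCSI-backed mount points.
# The mount paths below are placeholders; adjust to the actual datastores.
import os
import time

def seq_write_mb_per_s(path, size_mb=1024, block_mb=4):
    """Write size_mb of data to a temp file under `path` and return MB/s."""
    block = os.urandom(block_mb * 1024 * 1024)
    target = os.path.join(path, "iscsi_bench.tmp")
    start = time.time()
    with open(target, "wb") as f:
        for _ in range(size_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # make sure the data actually hits the target
    elapsed = time.time() - start
    os.remove(target)
    return size_mb / elapsed

for label, mount in [("internal iSCSI", "/mnt/iscsi-internal"),
                     ("external iSCSI", "/mnt/iscsi-external")]:
    print(f"{label}: {seq_write_mb_per_s(mount):.0f} MB/s")
```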
 
Finally getting around to setting up a few different VMs on my dual E5-2670 ESXi box :)

Grr, it shrank down my image. Anyhow -- Kali Linux 2016 rolling, Ubuntu 16.04 LTS, Mac OS X 10.11.6, Windows Server 2016 Standard x64 eval.


I just got an Ubuntu 16.04 VM set up myself, enabled native ZFS with passthrough to an HBA, set up a 6TB pool and installed CrashPlan. I'm in the early stages of testing its viability as a long-term archive-offload box for the cold (stale) data from my main storage box. So far so good!
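
As a sketch of the offload step being tested, assuming "cold" just means files untouched for a while (the paths and age cutoff below are placeholders, not the actual layout):

```python
# Move "cold" files (untouched for N days) from the main storage mount to the
# ZFS archive pool. Paths and the age threshold are placeholders.
import os
import shutil
import time

SRC = "/mnt/main-storage"      # hypothetical mount of the main storage box
DST = "/archive/cold"          # hypothetical dataset on the 6TB pool
DAYS = 180

cutoff = time.time() - DAYS * 86400

for root, _dirs, files in os.walk(SRC):
    for name in files:
        src_path = os.path.join(root, name)
        if os.path.getmtime(src_path) < cutoff:
            dst_path = os.path.join(DST, os.path.relpath(src_path, SRC))
            os.makedirs(os.path.dirname(dst_path), exist_ok=True)
            shutil.move(src_path, dst_path)   # frees space on the hot pool
            print("archived", src_path)
```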
 
@w1retap
How did you get Mac OS X running? I once failed lol

Same here, and I wondered the exact same thing after I saw w1retap's post, so I Googled around and tried again; I was up and running a few hours later. The only thing I haven't sorted out is the resolution issue (upon reboot it reverts back to 1024x768), but to get started you'll need the Unlocker script.
 
Unlocker script over at InsanelyMac. (I don't recommend using this on a production box or anything with data of personal importance.) Then you can use an OS X image to create the VM on ESXi. I don't have the resolution issue Phantum mentioned on mine.
 
Have you passed through a GPU in order to get full OpenCL support on it? The back-end imaging support seems to really slow down the OS if it's not present.
 
No, I don't have a GPU in my ESXi server that is capable of being passed through. But as is, the OS X VM runs pretty fast with minimal lag. It isn't bad at all.
 
I too have Sierra running in a VM; it was pretty easy. If you need, I can point you to the YouTube channel I followed. Straightforward and easy. I'm not a Mac guy, so now that I have it I have no clue what to do with it... hah, did it just because.
 
I've had Mac OS X running in a VM; the performance just seemed to lag a little for me, and I didn't really need it, so I got rid of it.
 
I really need to get into this. My new job is in love with virtual server instancing. It would be nice to brush up on my skills.
 
For sure. At work I just downsized 24 physical servers across 5 sites down to just 12. Certainly will help the hardware budget going forward.
 
Revamped the lab and happy to be rid of the old AMD Opterons! These Intel CPUs just scream. Everything is so much more responsive now.

2x hosts - dual booting ESXi 6.5 and Hyper-V Core 2016

- Intel Xeon E5-2650 v4 ES
- Supermicro X10SRL-F
- 4x 32GB DDR4-2133 LRDIMMs
- Intel X520-DA2 dual 10Gb NIC

Shared Storage

- QNAP TVS-471
- Intel i3-4130T
- 4GB RAM
- 4x Toshiba 512GB SSDs in RAID 5
- Intel X520-DA2 dual 10Gb NIC

Network

- Ubiquiti US-16-XG 10Gb switch
 

Very nice. I assume you are using optics to connect your hosts and the QNAP to the US-16-XG? I tested that switch and couldn't get any DACs to work, and wound up spending the extra dough on a Cisco SG350XG-24F instead.
 
Cisco 3M TwinAx cables

I tried those and they would not connect to my Dell X1052 switch. Every cable I tried (3-4 different brands) would connect some devices (US-16-XG to Supermicro boards, for example) but not all devices. So keep that in mind if you ever add an additional switch you want to connect to it; it will be difficult. Ubiquiti is aware of this limitation and suggests mainly using optics right now.
 
I had the same issue. Had to ditch the Cisco TwinAx and buy 10GTek 1M TwinAx and those took right off like a champ.
 
Intel X520-DA2 cards

Ahh yes. From what I've been seeing, people who find a DAC that works are able to get it working with one device but not many. I needed a switch that would connect to both my Dell X1052 SFP+ ports and my Intel X552 SFP+ ports, and no cables would connect both. Glad you were able to get your setup working, though.
 
Yeah, point me to the YouTube channel.
 
Not the latest and greatest, just a cheap home lab to play around with ESXi, Windows Server and Linux...

Dell Precision T5500
Xeon E5620 @ 2.4GHz (4c/8t) - looking to upgrade to a low-power six core, eventually 12c
18GB ECC DDR3-1066 (will need more)
640GB WD Blue
2TB Samsung 5400RPM
PNY GeForce 7300 GS 256MB (bad VRAM, but it works fine as a text console)
MSI GeForce 670 2GB (PCIe passthrough - for gaming and a GUI console)
ESXi 6.0.0 U2
Windows 8.1 Pro VM w/ GPU passthrough for GUI console and gaming
pfSense/FreeBSD VM for OpenVPN endpoint, dynamic DNS and certificate management.
Various Windows Server/Linux VMs for educational purposes

Total cost so far - $150 plus some spare hard drives and gfx cards.

One problem I'm having is power consumption and heat output. Part of the drive to do this was to have just one rig in my office; otherwise things get rather toasty. However, even with just a couple of VMs running 24/7, mostly idle, this thing puts out a lot of heat. It also bumped up the power bill last month. Any suggestions for reducing heat output/power consumption? I've already enabled P-/C-states and the low-power policy in ESXi, and enabled C-states and low-power mode in the BIOS.
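
In case it's useful, the host power policy can also be checked and switched programmatically. Here's a minimal pyVmomi sketch; the hostname and credentials are placeholders, and it assumes the host exposes a "low" policy:

```python
# Query and set the ESXi host power policy via the vSphere API (pyVmomi).
# Hostname/credentials are placeholders; requires `pip install pyvmomi`.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab box with a self-signed cert
si = SmartConnect(host="esxi.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    host = view.view[0]
    power = host.configManager.powerSystem

    print("Current policy:", power.info.currentPolicy.shortName)
    for p in power.capability.availablePolicy:
        print("  available:", p.key, p.shortName)

    # Switch to the low-power policy if the host offers one.
    low = next((p for p in power.capability.availablePolicy
                if p.shortName == "low"), None)
    if low:
        power.ConfigurePowerPolicy(key=low.key)
finally:
    Disconnect(si)
```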
 
Did some server consolidation last month:

Main:
Supermicro 846E16-R1200B chassis
2x E5-2690 v4
Supermicro X10DRi
256GB (16x 16GB) Crucial ECC DDR4
24x 3TB Toshiba
Intel S3710 (400GB)
Intel S3700 (100GB)
Samsung 840 EVO (1TB)
Samsung 840 Pro (256GB)
Intel X540-T2
LSI SAS2008, IT mode (internal)
LSI SAS2308, IT mode (external connection to the JBOD)
ESXi 6.5

JBOD:
Supermicro 846E16-R1200B chassis
Supermicro CSE-PTJBOD-CB1
24x 3TB Toshiba
 

Nature of the (old) beast... CPU, RAM, and add-on boards are your culprits.
Sell it and put the funds toward a newer (but still used) build if you want less heat and power use.
 
Big changes in store... at least for me lol

2x DL380 G6s with dual L5630s and 64GB are en route.

Got rid of my G5s this past summer. Been running a piecemeal all-in-one for the time being to tide me over.
 
Beauty. Why the variation in SSDs, if you don't mind me asking?
 
The two Samsungs are my main VM datastores for the critical VMs (AD, Solaris/napp-it, etc.). The S3710 serves as a write cache for Plex (transcodes) and NZBGet. The S3700 is passed through as the primary SLOG for the zpool.
 

So far, I'm a fan of the G6s. Been running them for a while. It's nice that they still have somewhat of an upgrade path, with a max of 2x 6-core and up to 192GB supported. I have 2x 80W CPUs and I will say they are pretty quiet, all things considered (not sure how the G5s were). Might need to get a few of those L5630s; could cut my power usage in half.

Here is my slightly cleaned up setup.

Left (under monitor)
-Management PC for when I'm in the basement (tri-core AMD w/ 6GB)
-48-port Cisco gigabit switch

Right (under the two towers)
-DL380 G6 w/ 2x E5540 and 72GB, 2x 146GB
-DL380 G6 w/ 2x E5540 and 72GB, 3x 1TB
-MSA2012sa w/ 6x 300GB 15K SAS, 3x 1TB SATA, 3x 2TB SATA

Front tower (w/ USB stick hanging out the back)
-S5500GC w/ 2x E5506 and 28GB. Currently in between uses


 
Just upgraded from an i5 1.2GHz NUC w/ 12GB RAM, 256GB SSD + 1TB data drive to a Dell PowerEdge T320 from work: 1.9GHz hex-core Xeon w/ 96GB RAM, a 3x 256GB SSD RAID 5 array and a 3x 1TB 7200RPM RAID 5 array.

Kind of makes my laptop less necessary...I can go back to nesting things on a server instead of carrying around my lab and not using it.

Although, whenever I get a proper license, I'll likely go back to a few NUCs and a NAS. But, for now, this will suffice. Especially with the price.
 