Your home ESX server lab hardware specs?

You're going to pay that much?

Not sure what this post means. Quad Opterons seem like a great platform, and you always need lots of RAM for guests.

If anything, it probably uses a bit more power than I'd like, but to each their own :p
 
I don't have any more evals for ESX so it's now a no-go.

Been playing with docker though so I might forgo ESX altogether and just rock KVM when I need it.

It just means getting a Windows VM with GPU passthrough is going to be that much more difficult with KVM.

Ok I'm done hijacking this thread.
 
AFAIK you don't get 4P capability from vmware with eval or free versions thus the question ;)
(Directed @ Giga)
 
Oh hmm. All of my trials were on X8DTEs.

Right now I'm just rocking hyper-v
 
AFAIK you don't get 4P capability from vmware with eval or free versions thus the question ;)
(Directed @ Giga)

Hope these are as helpful as I was trying to be :)

Response on Free ESXi 5.5 limits
https://communities.vmware.com/thread/458412
vSphere 5.5 supports 320 physical CPUs per ESXi host
ESXi host supports 4 TB of memory in vSphere 5.5
16 NUMA nodes per host in vSphere 5.5
4096 maximum vCPUs per host
Support for 40 Gbps physical network adapters
62 TB VMDK virtual disk support
16 Gb end-to-end Fibre Channel support
Increased VMFS heap: up to 64 TB of open VMDKs per host
Virtual machine hardware version 10 in vSphere 5.5



Referring to Free ESXi 5.1 limits
http://www.vladan.fr/esxi-5-1-free/
It's an article. Didn't want to mess up the post.


Also, while searching "esx eval socket limitations", I found this link.
https://www.vmware.com/products/vsphere-hypervisor/gettingstarted
 
I'm very glad I got my server. I may be working a contract job soon that wants Mirage implemented. I've experimented with it before, but that was a year and a half ago, so I need to refresh my memory :D
 
I'm very glad I got my server. I may be working a contract job soon that wants Mirage implemented. I've experimented with it before, but that was a year and a half ago, so I need to refresh my memory :D

What's that you say? Lab environments are helpful?
 
It's a start.

0ofyZB5.jpg


Ubuntu machine runs a Unifi controller for my home wireless.


sEUnCDA.jpg


Win 7 machine is connected to a dedicated DSL line back to the CO of the phone company I work for. I have the DSLAM connected to our office network, so even if the public internet is down I can still see all my devices on my work network. This way I can monitor DSLAMs for alarms and the MRTG of our town's bandwidth.
 
Just ordered:

2x Supermicro 1U Server E3-1240v2 3.4Ghz Quad Core w/ 32 GB

to complement/potentially replace my existing:

2x Supermicro 1U Server X3470 2.93Ghz Quad Core w/ 32 GB

I'm hoping they're going to perform slightly better and maybe even consume a little less power.

The existing servers currently have 2x 500 GB Samsung EVO 840 SSDs, which I'm running alongside a Synology DS1813+. But I'm intending to stick a single 1 TB Samsung EVO 850 SSD + 3 TB WD Red in each and ditch the NAS (I will likely use something like Veeam to back up between servers).

I will update once all received and up and running.
 
What needs to perform better? The E3 is definitely a better CPU but the X3470 is still a capable chip.
 
Nothing particularly needs to perform better. I'm in the process of moving office and will need to be running both offices in parallel for a short period (which means additional hardware).

The hosts run quite a range of VMs;

Active Directory
SQL Server
Microsoft Windows Servers (web servers/application servers)
Linux Servers (web servers)

Any additional performance would be a bonus, but not a necessity. Primary aim was to not break the bank and keep ongoing cost (power consumption) low.
 
Putting together a new server for ESXi - putting in the E5-2695v3 I just got. Too bad I couldn't get two....
 
TLDR - title says it all...
"Scope Creep .. Over Budget ... Out of Time... WAF Score = -10"

Stale thread ... let's take a trip down memory lane:

My Lab V1.0
- 2x Dell PowerEdge 1900s with 2x Quad-Core E5335 Xeon CPUs each
- 1x PowerVault 745N re-flashed/converted to NAS duty

My Lab V2.0
- 1x PowerEdge 1900
- 3x PowerEdge T110 (Gen 1)

My Lab V3.0
- 3x PowerEdge T110-II (Gen 2)

= ----- = Power/cooling/noise takes its toll after a while, so I started shrinking

Old:
Netwerkz101_Compute_old.png


My Lab V4.0
2x each of the following
- Intel DQ67SW3 Motherboard
- Intel i7-2600 CPU
- GSkill Ares 32GB RAM (4x 8GB)
- InWin BK644 mATX case
- 4x 2.5" in 5.25" drive cage (SNT-SAS425)
- 3x Crucial RealSSD C300 64GB


Netwerkz101_XenServerLab.PNG

I actually multi-boot different hypervisors, but XenServer is primary.

Netwerkz101_Compute.png

I sit in front of my lab so it's "Workstation" material.

Netwerkz101_Network.png

2x Cisco SG300-20 switches

Netwerkz101_Storage.png

1x QNAP TS-659 Pro II NAS
1x Dell PowerEdge T110 w/ PERC H700 (yes from my second lab) NAS

I'll try to find my old images and update links but I have my first pic of my newest rendition .. aka Lab 5.0.

I started this project about a year ago and still... I am not completely done with it.
I sold off all my compute node stuff to raise funds and still went way over budget. :eek:

What you see below is _not_ what was supposed to happen..
I simply wanted a small desktop rack with my "Always On" gear in it.
Not the extreme ....but low noise and low power

It was supposed to be:

2) Cisco SG300-20 Switches
1) SuperMicro A1SRi-2758F based server for infrastructure (Access Gateway)
1) Rackmount UPS

And that's it!!! That's all it was supposed to be - something I could remote into and
boot up the remaining nodes as needed.

Then I got a whiff of the new X10SDV-TLN4F .... I knew my new compute nodes
were going to be small and powerful ....but then I got the sticker shock:
Oh hell no!!! Not giving up a grand for a limited CPU/mobo combo.

<Sigh> .. maybe I should have gone that route......psych!!!

My inspiration ... a circa 2003 CCNA lab I had sitting in the basement:


From that to this....


Anyhow ... the stats as of now:

Gateway Node:
(AD/DNS/DHCP/File/Print + VM management/appliances + Remote Access)
- SuperMicro A1SRi-2758F
- Intel Atom C2758 8 core / 8 thread 2.4GHz
- Crucial 32GB DDR3 (4 x 8GB ECC UDIMM)

Compute nodes:
- SuperMicro X10SRI-F
- Intel Xeon E5-2620v3 6 core / 12 thread 2.4GHz
- 64GB DDR4 (2x32GB modules - up to 256GB possible)
- Mellanox Connectx3 dual port 10Gb (DAC to host and storage)

Storage node:
- SuperMicro X10SL7-F
- Intel Xeon E3-1220v3 4 core / 4 thread 3.1GHz
- Crucial 32GB DDR3 (4 x 8GB ECC UDIMM)
- Mellanox Connectx3 dual port 10Gb (DAC to hosts)
- Samsung EVO SSD (8 x 500GB)
- Can expand up to 24 7mm drives if needed ...not likely though.
- NAS4Free 10.2
- Plan to back up to existing QNAP unit that backs up to a single 3TB drive.

Networking:
- Cisco SG300-20 switches

Power
- CyberPower 1u UPS PR500LCDRT1U


At this point I simply have everything rewired and powered up ...nothing configured.
 
After moving, I finally got my lab up and running again with new hardware.

Supermicro 2022TG-HTRF Server chassis

2x Hyper-V 2016 TP4 hosts, each with the following hardware:
- 2x AMD Opteron 6320 8 core CPUs
- 128GB ECC RAM
- 64GB Kingston SSD boot drive
- 6x Intel 1Gb network ports

2x VMware vSphere 6.0 hosts, each with the following hardware:
- 1x AMD Opteron 6320 8 core CPUs
- 64GB RAM
- USB thumb drive boot
- 6x Intel 1Gb network ports

QNAP TVS-471-i3-4G-US
- 4x 512GB Toshiba Q Series SSDs in RAID 5
- 4x 1Gb network ports for iSCSI storage presented to Hyper-V and VMware hosts
 
What's the game plan?


Sell it all and get some Raspberry Pis!
Regain my sanity!
Save my marriage!
Put kids through college!

Take up bird watching from my back deck... it's a much cheaper hobby!
Seriously... I have a problem ... I need to sell this stuff and break the cycle.
Just as soon as my Eval Experience expires ;)

VCP/MCSE .... but mostly more VDI testing.
 



Running vSphere 6 & vSAN. However I can't figure out why I can't upload any files to the vSAN or use the desktop app to view the console for any VMs.
 
Running vSphere 6 & vSAN. However I can't figure out why I can't upload any files to the vSAN or use the desktop app to view the console for any VMs.

Because vSAN is an object store:

Virtual SAN manages data in the form of flexible data containers called objects. Virtual machine files are referred to as objects.

There are four different types of virtual machine objects:
  • VM Home
  • VM swap
  • VMDK
  • Snapshots

Virtual machine objects are split into multiple components based on performance and availability requirements defined in the VM storage policy.
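The component math can be sketched in a few lines of Python - a simplified model of the default RAID-1 (mirroring) policy, not VMware's actual placement logic, which also depends on stripe width, object size, and cluster layout:

```python
# Simplified vSAN RAID-1 component count per object (illustrative only).

def vsan_components(ftt: int) -> dict:
    """Components for an object with a RAID-1 (mirroring) storage policy.

    ftt = number of failures to tolerate.
    """
    replicas = ftt + 1   # full copies of the object's data
    witnesses = ftt      # quorum tie-breakers (at least this many)
    return {"replicas": replicas, "witnesses": witnesses,
            "total": replicas + witnesses}

# Default policy (FTT=1): 2 replicas + 1 witness = 3 components per object
print(vsan_components(1))  # -> {'replicas': 2, 'witnesses': 1, 'total': 3}
```

So even a single VMDK object ends up as several components spread across hosts, which is why the datastore can't just be browsed like a plain filesystem.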

So I assume you want to "put" files on vSAN.

You can use Nexenta Connect for VSAN to carve out space on vSAN for sharing over NFS/CIFS (or you can even deploy something like FreeNAS and put its disks on top of vSAN).

Regarding the console - I have no clue. Can't you access it from the browser? (Right-click on the deployed VM.) Right now I can't see any VMs deployed in your cluster...
 
Because vSAN is an object store:



So I assume you want to "put" files on vSAN.

You can use Nexenta Connect for VSAN to carve out space on vSAN for sharing over NFS/CIFS (or you can even deploy something like FreeNAS and put its disks on top of vSAN).

Regarding the console - I have no clue. Can't you access it from the browser? (Right-click on the deployed VM.) Right now I can't see any VMs deployed in your cluster...

I meant I can't upload ISOs to use to install VMs. However, I figured it out. I had redone my router config and forgotten to set the primary DNS to my home lab domain controller; once I did that, everything works great (at least on wired - I don't care much about WiFi right now).
 
Well fellas .... not as sexy as some of your setups, but this is what I have with my limited budget.
-Linksys WRT1200AC
-TP-LINK TL-SG1016DE
-HP Z200, i3-530 - Windows Server '12 Standard, Core, file server (smb, nfs, plex)
-HP DL380 G5, Dual L5148, 16gb, running HP customized ESXi 6.0
-HP DL380 G5, Dual L5148, 16gb, running HP customized ESXi 6.0


Z7uwnR0.jpg
 
This is my contribution

The rack is a Toten 37U 1800x1000x600mm

1x Virtual Sophos UTM Firewall
1x Brocade 300 8Gbit SAN fiber switch
1x D-Link DGS-3120-24SC (Core Switch, WAN is connected here)
2x D-link DGS-1224T (2x uplinks in trunk from both switches to the core switch)

1x QNAP TS-219 PII, 2x 2TB in RAID 1, used for backup of VMs,
and then scheduled backup from there to a USB disk.

1x Dell PowerEdge R710, ESXi 6.0 U1, used as backup for the ESXi cluster.
[Booting from internal USB stick]
[2x Xeon E5520 2.26GHz]
[72GB RAM DDR3]
[3x 146GB SAS 15K RPM in RAID 5]
[1x QLogic QLE220E FC card]

1x Dell PowerEdge R610, ESXi 6.0 U1, ESXi cluster.
[Booting from internal USB stick]
[2x Xeon E5540 2.53GHz]
[96GB RAM DDR3]
[1x Emulex LPe12000 8Gbit FC card]
[1x Intel PRO/1000 PT dual-port NIC]

1x Dell PowerEdge R610, ESXi 6.0 U1, ESXi cluster.
[Booting from internal SD card]
[2x Xeon E5540 2.53GHz]
[96GB RAM DDR3]
[1x Emulex LPe1150 FC card]
[1x Intel PRO/1000 PT dual-port NIC]

.:: Disk Array ::.
1x Fujitsu Eternus DX60, [2x Controllers][2x PSU], Running all VMs
[12x 450GB SAS 15K RPM in RAID 50]

2x APC Smart-UPS 1500VA

Temperature and video surveillance of the room 24/7.
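For what it's worth, the usable space on that Eternus RAID 50 set is easy to work out - a quick sketch, assuming the 12 drives are split into two 6-drive RAID-5 spans (the usual layout, though the span count is my assumption):

```python
def raid50_usable(drives: int, spans: int, size_gb: float) -> float:
    """Usable capacity of a RAID 50 set: each underlying RAID-5 span
    loses one drive's worth of capacity to parity."""
    assert drives % spans == 0, "drives must divide evenly into spans"
    return (drives - spans) * size_gb

# 12x 450 GB in two 6-drive spans -> 10 x 450 = 4500 GB usable
print(raid50_usable(12, 2, 450))  # -> 4500
```

Same formula applies to the R710's 3-drive RAID 5 above (one span): (3 - 1) x 146 GB = 292 GB usable.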

 
Just a fresh spawn here.
Upgrades on ram and NICs are in the future. (Actually have more RAM on the way)

J1UgXWR.png
 
Lenovo TS440 I got on sale at TigerDirect for sub-$300.

Xeon E3-1225 v3
32GB DDR3 ECC 1600MHz (shipped with 4GB)
2x 240GB Sandisk SSD
1x Intel X25-M G2 80GB SSD
1x 500GB 7200RPM 2.5"
3x 4TB Toshiba 7200RPM drives
Intel I350-T2
AMD R5 230
PCI-E 2.0 1x SATA controller
PCI USB controller

Customized ESXi 6.0 ISO via the v-front.de "customizer" for SATA controller support, the HTML5 client, and improved CPU microcode info.

Daily it runs:
2x 2012 R2 VMs for AD, DNS, DHCP, File Server/Backup, Plex, Headphones, Sonarr, and CouchPotato (serv01 and serv02)
1x Windows 10 Pro with the AMD GPU and USB controller passed through, which outputs to my Pioneer VSX-70 receiver and Panasonic 58" HDTV (win10-htpc)

I interface with the HTPC with a microsoft wireless multimedia keyboard with a built in trackpad.

I'm incredibly happy with my build and its capabilities. I've completely filled every port the machine has to offer to expand functionality, and it's clear to me how useful this device has been, so I intend to build or purchase another (or a more capable one) sometime in the future. A dual-proc box with more RAM and more PCI-E slots is a must. I'd like to run FreeNAS but just don't have the ability to do so with my current hardware setup. I definitely need more HDDs.
 
Just upgraded my mobo and CPU last week.

2x E5-2670
TYAN S7050
32 GB DDR3 ECC REG
12x WD RE SAS 4TB
2x Intel S3500 160GB
1.28 TB FusionIO Duo
Dell PERC H700
Supermicro CSE846 with 2x 1200W PSUs

Runs my fileserver, usenet and torrent clients, UPS management software, and iSCSI LUNs. Pretty much every VM is CentOS.
 
From that to this....


Anyhow ... the stats as of now:

Gateway Node:
(AD/DNS/DHCP/File/Print + VM management/appliances + Remote Access)
- SuperMicro A1SRi-2758F
- Intel Atom C2758 8 core / 8 thread 2.4GHz
- Crucial 32GB DDR3 (4 x 8GB ECC UDIMM)

Compute nodes:
- SuperMicro X10SRI-F
- Intel Xeon E5-2620v3 6 core / 12 thread 2.4GHz
- 64GB DDR4 (2x32GB modules - up to 256GB possible)
- Mellanox Connectx3 dual port 10Gb (DAC to host and storage)

Storage node:
- SuperMicro X10SL7-F
- Intel Xeon E3-1220v3 4 core / 4 thread 3.1GHz
- Crucial 32GB DDR3 (4 x 8GB ECC UDIMM)
- Mellanox Connectx3 dual port 10Gb (DAC to hosts)
- Samsung EVO SSD (8 x 500GB)
- Can expand up to 24 7mm drives if needed ...not likely though.
- NAS4Free 10.2
- Plan to back up to existing QNAP unit that backs up to a single 3TB drive.

Networking:
- Cisco SG300-20 switches

Power
- CyberPower 1u UPS (sorry..capacity and model escape me right now)


At this point I simply have everything rewired and powered up ...nothing configured.

Really liking that setup.
Is that one chassis for the compute nodes?
I am about to pull the trigger on something very similar. Just trying to pick the right chassis for the job.
Very slick.
 
Well fellas .... not as sexy as some of your setups, but this is what I have with my limited budget.
-Linksys WRT1200AC
-TP-LINK TL-SG1016DE
-HP Z200, i3-530 - Windows Server '12 Standard, Core, file server (smb, nfs, plex)
-HP DL380 G5, Dual L5148, 16gb, running HP customized ESXi 6.0
-HP DL380 G5, Dual L5148, 16gb, running HP customized ESXi 6.0


Z7uwnR0.jpg

Your homelab looks great ;-)

Can you tell me what rack chassis you have there?

Thx
 
Really liking that setup.
Is that one chassis for the compute nodes?
I am about to pull the trigger on something very similar. Just trying to pick the right chassis for the job.
Very slick.

OnStage Rack Stand RS7030 $30+

1 gateway node at the top under the bottom switch. (1U)
1 storage node at bottom with drive array under it. (2U + 1U)
2 compute nodes in middle (2U x 2)

Cases are Supermicro SC504-203B
 
Your homelab looks great ;-)

Can you tell me what rack chassis you have there?

Thx

They are 2 of the IKEA LACK tables placed together with a couple of L-brackets, and I put 4 wheels on the bottom panel.
LACK Side table - birch effect, 21 5/8x21 5/8
$7.99 a piece for the 22" black table.. works great..

All of the weight bears on the bottom flat piece and not the legs (except for my switch).
 
Adding a new VM box to my network.. just ordered:
Intel S2600CP Dual LGA 2011 Motherboard
128GB (16x8GB) Kingston 2Rx4 PC3L-10600R ECC RAM
2x Intel Xeon E5-2670 2.60 GHz, 20MB cache

VM box specs currently in production:
Norco RPC-4220
3x Arctic Cooling F12 PWM Fans
2x Arctic Cooling F8 v2 PWM Fans
1x Arctic Cooling F11 Plus PWM CPU Cooler
Corsair RM1000 PSU
Xeon E3-1240 CPU
16GB Supertalent DDR3-1333 ECC Unbuffered RAM
Supermicro X9SCM Motherboard
3x IBM M1015's flashed to IT Mode
2x 250GB Samsung 850 Pro SSDs
7x 4TB Western Digital SSHDs
13x 2TB Western Digital Green HDDs
 
front-covered.jpg front-uncovered.jpg rear.jpg


I have been working on a home build for awhile and I am just about done. Here is what I have.
24U Rack Server Enclosure
Avocent Rackmount Console
TRENDnet TK-803R 8 port KVM
Netgear Prosafe Plus Switch 24 port Gigabit Ethernet
Startech Rackmount 8 outlet PDU (x2)
Dell PowerEdge R610 2x Hex X5650 2.66 GHz 12 Core 96GB RAM 1TB x 6 HDDs (x2)
Dell PowerEdge R710 2x Hex L5640 2.26 GHz 12 Core 144GB RAM 3TB x 6 HDDs (x2)
Dell PowerVault MD1200 3TB x 12 HDDs
19in Rackmount Type UPS Battery Backup 2200VA/1320W
3U Drawer (x2)
1U Drawer

So far I have the top R610 running ESXi 6.0. It is my Plex server. I have three VMs, each running an instance of Plex. Since I share my Plex with family and friends, I wanted to split it out so there are no interruptions when multiple movies are playing.

I have an R710 that I just finished updating firmware. It is running ESXi 6.0. This will be my work server and I will be installing multiple Windows Server VMs and a SQL Server VM. I will be connecting it to the MD1200. Still waiting on a Perc H800 so I can install and connect.

Still working on the other two servers... not sure yet what I will do with them.

Liking what I have so far... seeing that I'm coming from a Dell PowerEdge 2900. That thing is loud as hell compared to the R610/R710s.
 
This is my semi-portable dev lab I use for various projects:
2x HP MicroServer Gen8, running Hyper-V or VMware from SD (depends what I need)
1x HP MicroServer NL, running FreeNAS iSCSI storage
1x HP 2610 (Layer 3 switch)
1x Gigabit Layer 2 switch
1x WiFi bridge
1x BlueWalker UPS
1x Huawei AR1220v
Firewall is a Zentyal VM

WP_20160304_19_25_48_Pro_LI 1.jpg
 
Current Virtualization Machine:

  • MB: SuperMicro X9DRD-EF-B
  • CPUs: 2x Xeon E5-2670 2.6GHz
  • RAM: 64GB (8x8) DDR3 1333MHz Low Voltage (will be getting another 64 gigs soon)
  • Case: Supermicro SC846TQ 24 hot-swap bay
  • PSU: Seasonic X-850W
  • CPU Coolers: 2x Noctua NH-U9DXi4 90mm SSO2
Being a new build, it's still a WIP in terms of how it's all set up. I'm currently running Ubuntu Server 16.04 and using VMware from that, but I'll probably switch over to Proxmox shortly. Overall I'm new to virtualization, but I'm a quick learner.

This is actually a new build overall. I had ORIGINALLY planned on making this my home NAS; however, I thought it had a little too much horsepower and deserved a better fate than that :D

I have a line on several Rosewill RSV-L4000 chassis that someone only wants 40 bucks each for, so I am probably going to pick up 1...or 3 of them, and swap this build into one to free up the hot-swap case for my NAS coming shortly.
 
Since I'm not CPU constrained, I went with this...

2x HP Z420 workstations ($200-300 each):

Specs on each setup:
8x 8GB ECC DDR3 Registered (it works even though HP says registered doesn't work on this board... I'd love 16GB Reg sticks to try)
1x E5-2670 v1 8-core + HT CPU
5x Intel DC S3500 SSDs in each
Dell H310 HBA
QLogic 8Gb FC card

Going to do 10GbE soon since it's gotten so cheap.

FreeNAS runs long-term storage, and some slower-IO VMs via FC.
 
Just completed the following build this weekend.

image.png


Specs from top of the rack to bottom:

[1U] 24-port Cat6 Patch Panel

[1U] Dell X1052 Switch (48 x 1Gb, 4 x 10Gb)

[1U] pfSense Firewall
[2U] vSAN Node #1 (Main computing host)
[2U] vSAN Node #2 (Failover computing host)
  • Same as Node #1
[3U] vSAN Node #3 (Backup/Slave Bulk Storage Array)
[4U] vSAN Node #4 (Main Bulk Storage Array + vSAN contributing storage)
[2U] CyberPower PR1000LCDRT2U 900w UPS


Total CPU cores: 22
Total vCPUs: 44
Total RAM: 176GB
vSAN Datastore: 2.16TB
Bulk Storage Arrays: 64TB (56TB usable) + 960GB cache.
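Those summary numbers hang together - here's a quick sanity check, assuming Hyper-Threading (2 threads per core) and that the bulk array is something like 8x 8 TB with single-drive parity (my assumption; the drive layout isn't stated above):

```python
# Sanity-check the cluster summary figures.

total_cores = 22
total_vcpus = total_cores * 2   # Hyper-Threading: 2 threads per core
print(total_vcpus)              # -> 44, matching the post

raw_tb, drive_tb = 64, 8        # assumed: 8x 8 TB drives, single parity
usable_tb = raw_tb - drive_tb   # one drive's capacity lost to parity
print(usable_tb)                # -> 56 TB usable, matching the post
```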
 
Just completed the following build this weekend.

image.png


Specs from top of the rack to bottom:

[1U] 24-port Cat6 Patch Panel

[1U] Dell X1052 Switch (48 x 1Gb, 4 x 10Gb)

[1U] pfSense Firewall
[2U] vSAN Node #1 (Main computing host)
[2U] vSAN Node #2 (Failover computing host)
  • Same as Node #1
[3U] vSAN Node #3 (Backup/Slave Bulk Storage Array)
[4U] vSAN Node #4 (Main Bulk Storage Array + vSAN contributing storage)
[2U] CyberPower PR1000LCDRT2U 900w UPS


Total CPU cores: 22
Total vCPUs: 44
Total RAM: 176GB
vSAN Datastore: 2.16TB
Bulk Storage Arrays: 64TB (56TB usable) + 960GB cache.


Nice dude, just beautiful. I actually followed your build over at STH. Was wondering how those Xeon D's are working out for you?
 
Nice dude, just beautiful. I actually followed your build over at STH. Was wondering how those Xeon D's are working out for you?

I could not be happier with them. Extremely low power usage, and the performance of my cluster has been fantastic. I continue to add more and more VMs to my cluster each week. I'm just waiting on the new Ubiquiti ES-16-XG 10Gb switch so I can start to utilize the 2nd SFP+ port in each of my servers.
 