Your home ESX server lab hardware specs?

Zarathustra[H];1041106456 said:
Where is your Norco? My 4216 is actually rather quiet. I got the optional 120mm fan divider, and use a temp probe fan controller to automatically adjust fan speeds.

That being said, everything is relative, as I came from an HP DL180 G6 2U rack server which was a freaking jet engine, and wasn't even quiet enough for basement use (I could hear it from bed on the second floor!), which is why it only lasted about a month before I replaced it with the Norco.

It's housed in a small 18U mobile media rack (not a full depth rack, so it sticks out in the back) sitting right next to my desk in the office room (1st floor, all other bedrooms are on the second).

Unfortunately, it is the original 4220... which can't take their new 120mm fan wall... and the only fan walls for it were custom-made by cavediver. I'm thinking about downsizing, but I haven't found a case I like without having to replace the motherboard/CPU setup completely for something like a UNAS 800.
 
This is my setup at home.

1x D-Link DFL-800
1x D-link DGS-1248T
1x Brocade 300 SAN fibre switch
1x QNAP TS-212 with 2x 750GB in RAID1, dedicated backup for VMs via Veeam.

1x Fujitsu Primergy RX300 S4 ESXi 5.5 U1
1x Dell PowerEdge R710 ESXi 5.5 U1
2x Dell PowerEdge R610 ESXi 5.5 U1

1x DotHill 2730T dual controllers
1x Fujitsu Eternus DX60 dual controllers

2x APC Smart-UPS 1500VA
1x APC Smart-UPS 1000VA

http://wickedworld.org/url_20140904_210152.jpg

http://wickedworld.org/url_20140904_210040.jpg

http://wickedworld.org/url_20140904_210053.jpg

http://wickedworld.org/url_20140905_184354.jpg

Nice.

I just picked up one of those APC SUA1500s.

Working on my shutdown script using apcupsd in a Linux guest.

How accurate/reliable do you find the "time remaining" they calculate?

I'm trying to figure out how much of a buffer to give myself for shutdown time.
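Something like this rough sketch is what I'm playing with (it assumes apcupsd's apcaccess tool is on the PATH; the 15-minute threshold and the shutdown-lab.sh script name are just placeholders I made up):

```python
#!/usr/bin/env python3
"""Rough sketch: poll apcupsd for remaining runtime and kick off a shutdown
once the estimate drops below a buffer. Assumes apcupsd is running locally
and its `apcaccess` tool is on the PATH; the threshold and shutdown script
below are placeholders."""

import subprocess
import time

MIN_MINUTES_LEFT = 15   # hypothetical buffer before triggering shutdown
POLL_SECONDS = 30


def ups_status():
    """Parse `apcaccess status` output ("KEY : value" lines) into a dict."""
    out = subprocess.check_output(["apcaccess", "status"], text=True)
    status = {}
    for line in out.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            status[key.strip()] = value.strip()
    return status


def main():
    while True:
        s = ups_status()
        on_battery = "ONBATT" in s.get("STATUS", "")
        fields = s.get("TIMELEFT", "0 Minutes").split()  # e.g. "27.5 Minutes"
        minutes_left = float(fields[0]) if fields else 0.0
        if on_battery and minutes_left <= MIN_MINUTES_LEFT:
            # Placeholder: shut down the guests and then the host here,
            # e.g. via SSH to the ESXi box.
            subprocess.call(["/usr/local/bin/shutdown-lab.sh"])
            break
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    main()
```

apcupsd can also trigger its own shutdown via the MINUTES/BATTERYLEVEL settings in apcupsd.conf, but a script makes it easier to stagger the VM shutdowns before the host goes down.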

Thanks,
Matt
 
Zarathustra[H];1041144847 said:
Nice.

I just picked up one of those APC SUA1500s.

Working on my shutdown script using apcupsd in a Linux guest.

How accurate/reliable do you find the "time remaining" they calculate?

I'm trying to figure out how much of a buffer to give myself for shutdown time.

Thanks,
Matt

They are OK, but I would like to get the rack-mount model instead :)
I do not run any shutdown scripts; my hosts keep running until the battery is drained.
The last time I did a test of the batteries, the calculated time was not 100% accurate, but it was "ok".
 
Changed up the home lab a little.

3x VMware 5.5 hosts running VSAN
- AMD FX-6300 6 core CPU
- 32GB RAM per host
- Intel PRO/1000VT quad port Gb NIC
- Seagate 600 Pro 240GB SSD, 2x WD Raptor 10k 600GB drives for VSAN

2x Hyper-V 2012 R2 hosts
- AMD Opteron 6320 8 core CPU
- 64GB ECC RAM per host
- Intel I340-T4 quad port Gb NIC

Windows 2012 R2 Storage server
- Intel i3-4130T dual core CPU
- 8GB RAM
- 8x Toshiba Q Series 512GB SSD for Hyper-V storage
- Intel I340-T4 quad port Gb NIC

2x HP v1910-24g stacked 24 port Gb switches
 
They are OK, but I would like to get the rack-mount model instead :)
I do not run any shutdown scripts; my hosts keep running until the battery is drained.
The last time I did a test of the batteries, the calculated time was not 100% accurate, but it was "ok".

Yeah, I was looking at the rack-mount versions as well before buying mine, but with identical specs, the rack-mount versions on eBay were 3-4 times the money...
 
Zarathustra[H];1041147485 said:
Yeah, I was looking at the rack-mount versions as well before buying mine, but with identical specs, the rack-mount versions on eBay were 3-4 times the money...

I know, I had the same problem. I wonder why the rack-mount models are always more expensive, no matter the brand.
 
Stale thread ... let's take a trip down memory lane:

My Lab V1.0
- 2x Dell PowerEdge 1900s with 2x quad-core E5335 Xeon CPUs each
- 1x PowerVault 745N re-flashed/converted to NAS duty

My Lab V2.0
- 1x PowerEdge 1900
- 3x PowerEdge T110 (Gen 1)

My Lab V3.0
- 3x PowerEdge T110-II (Gen 2)

= ----- = Power/cooling/noise takes its toll after a while, so I started shrinking

Old:
Netwerkz101_Compute_old.png


My Lab V4.0
2x each of the following
- Intel DQ67SW3 Motherboard
- Intel i7-2600 CPU
- GSkill Ares 32GB RAM (4x 8GB)
- InWin BK644 mATX case
- 4x 2.5" in 5.25" drive cage (SNT-SAS425)
- 3x Crucial RealSSD C300 64GB


Netwerkz101_XenServerLab.PNG

I actually multi-boot different hypervisors, but XenServer is primary.

Netwerkz101_Compute.png

I sit in front of my lab so it's "Workstation" material.

Netwerkz101_Network.png

2x Cisco SG300-20 switches

Netwerkz101_Storage.png

1x QNAP TS-659 Pro II NAS
1x Dell PowerEdge T110 w/ PERC H700 (yes from my second lab) NAS
 
Lab update:

For over 7 years I've had some sort of VMware lab running 24/7. First it was VMware Server 2.0, then ESX 3.02 once I finally got some licenses.

Lately as I've gotten more and more comfortable with Hyper-V, I've been moving many of my VMs over to that environment since I prefer Dynamic Memory to save on RAM usage. Last night I got to a point where the only VMs running in my VMware lab were vCOPS, VDP, Log Insight, vCAC, vCM, and vCO. I decided I don't need those running 24/7 so I shut the whole VMware lab down, only to be turned on again when I need to monkey with all of VMware's extra products (or when I get some NSX NFR keys). Keeping them off saves about 250W of electricity or almost $1 per day.
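For anyone curious, the back-of-the-envelope math behind that ~$1/day figure (the electricity rate is just an assumed local price):

```python
# Rough cost of leaving the VMware hosts powered on 24/7.
watts = 250
rate_per_kwh = 0.16                    # assumed local electricity rate in $/kWh

kwh_per_day = watts * 24 / 1000        # 6.0 kWh per day
cost_per_day = kwh_per_day * rate_per_kwh
print(f"{kwh_per_day:.1f} kWh/day -> ~${cost_per_day:.2f}/day")  # ~$0.96/day
```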

Here's where the lab stands now....

2x Hyper-V 2012 R2 hosts clustered
- AMD Opteron 6320 8 core CPU
- Supermicro H8SGL-F motherboard
- 64GB ECC DDR3 RAM per host
- 2 onboard Intel NICs, vSwitch for host management, CSV heartbeats, and VM vNICs
- 4 addon Intel Gb ports, used for SMB3 file share access and Live Migration (yay 4Gbps storage and Live Migrations!)

1x Hyper-V 2012 R2 standalone host
- Zotac ID91 mini-PC
- Intel i3-4130T 35W dual core CPU
- 8GB DDR3L SO-DIMM
- 240GB Seagate 600 Pro SSD (OS and local VM storage)
- 2 onboard Realtek NICs, 1 vSwitch connected directly to ISP, 1 vSwitch connected to home network
- 1 RRAS VM, 1 Domain Controller/DHCP VM
- Standalone host allows me to play with main Hyper-V cluster without taking down home internet
- Draws 10W of power

Windows Server 2012 R2 File Server
- Intel i3-4130T 35W dual core CPU
- Some Asus motherboard, I don't know, lots of SATA3 ports
- 8GB RAM
- 2x 80GB 2.5" 5400RPM drives for OS
- 8x 512GB Toshiba Q SSDs for Hyper-V file shares
- 5x 3TB Seagate 7200RPM drives for home file share
- Intel quad port Gb NIC for SMB3 Hyper-V file share access

3x VMware ESXi 5.5 hosts (peacefully sleeping until needed again)
- AMD FX-6300 6 core CPU
- 32GB RAM
- Intel quad port Gb NIC
- VSAN cluster, each host sporting 1x 240GB Seagate 600 Pro SSD, 2x 600GB WD Raptor 10k drives
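Rough sizing note on that VSAN cluster: in hybrid VSAN the SSD is purely a cache tier, so only the Raptors count toward capacity, and the default FTT=1 storage policy mirrors every object. A quick sketch of the math:

```python
# Approximate usable capacity of the 3-node hybrid VSAN cluster above.
hosts = 3
hdds_per_host = 2
hdd_gb = 600        # WD Raptor 10k
ftt = 1             # default VSAN policy: 1 failure to tolerate (mirrored)

raw_gb = hosts * hdds_per_host * hdd_gb      # 3600 GB of spinning disk
usable_gb = raw_gb / (ftt + 1)               # ~1800 GB before metadata/slack
print(f"raw {raw_gb} GB, roughly {usable_gb:.0f} GB usable at FTT={ftt}")
```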
 
My lab update:

Supermicro 19U rackmount - love this rack; the filter system is very nice
Monster Power Conditioner
Supermicro FAT TWIN 6026TT-HDTRF 2 x Sleds/96GB RAM/2 x Xeon L5520's/Intel AF DA 10Gb
Supermicro 6026TT-BTRF 4 x Sleds/48GB RAM/2 x Xeon L5520's/Intel DP PT1000
Supermicro 2u 24 x 2.5 Nexenta Server 14 x 1TB Hybrid Drives/2 x Intel S3700 100GB
EMC PX4-300d 3 x 3TB WD RED NAS - 1 x Intel S3700 100GB SSD (Cache)
2 x HP 1800-24G Switches
Ubiquiti EdgeMax Lite
Ubiquiti Unifi AC

Soon to be added:
3D GPU offload for View
Rackmount UPS (more than likely Dell)
Color-coded patch cables in different lengths
Cable management

Running the full VMware set of products, and I'll soon be adding OpenStack. I'm also waiting on NFR NSX keys.
 
Probably best for a new thread, but you have a lot of usable hardware there, that's for sure.
 
2 x ESXi 5.5 Hosts:

Supermicro X9SRH-7TF
Xeon E5-2620
64GB ECC DDR3
Seasonic X-400 Platinum Fanless PSU
Noctua NH-U12
Fractal Define R4
Intel quad port NIC

The always-on host has one 240GB SSD and 4x Crucial M4 128GB SSDs for local VM storage,
plus 6x 2TB drives in RAIDZ2 attached to the onboard LSI 2308 in IT mode, passed through to a Linux VM running ZFS on Linux, SABnzbd, Sick Beard, CouchPotato, and Plex.
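For reference, the rough usable space on that RAIDZ2 set (two disks' worth of parity out of the six, before ZFS metadata overhead):

```python
# Usable space in a 6 x 2 TB RAIDZ2 vdev: capacity of (disks - parity) drives.
disks, disk_tb, parity = 6, 2, 2
usable_tb = (disks - parity) * disk_tb
print(f"~{usable_tb} TB usable from {disks} x {disk_tb} TB in RAIDZ2")  # ~8 TB
```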

Nexenta CE 4.0.3 Shared storage:

Supermicro X9SCM-F
Xeon E3-1230
16GB ECC DDR3
Seasonic 750W Gold PSU
Norco 4220
Intel quad port NIC
M1015 flashed to 9211-8i
8x Samsung SM843 480GB SSDs in RAID10 for the main VM storage pool
12x 1TB 7200rpm drives for bulk storage (currently only using 4 in a RAIDZ1 config for VDP backups)

Cisco SG300-28 Switch
Tripp-lite 1000W UPS
Tripp-lite 18U rack

 
All of that runs on 400W PSUs? That's awesome. I bet the whole setup is quiet?
 
Yep, very quiet. One host with all the VMs listed above, plus the switch, access point, storage server, and cable modem, only draws ~260 watts. The loudest component is the storage server, but even that is quiet with the 120mm fan wall.
 
All of that runs on 400W PSUs? That's awesome. I bet the whole setup is quiet?

Honestly, once you don't have a fancy video card and run at stock speeds, it becomes rather difficult to draw more than 300W, unless you throw A LOT of drives at it.
 
Here's my lil' vSphere 5.5 lab that I'm using to study for my VCDX

- Synology DS412+ w/ Western Digital Red 4TBs in a RAID10
- HP ProCurve 1810-24G
- Shuttle SZ87R6 (x2 - parts ordered for number two, will probably go for a 3rd once vSAN GAs)
-- i7-4770
-- 32GB DDR3
-- Intel Pro/1000 PT Dual Port 1GbE*
-- Syba Dual Port 1GbE

* The system never properly detected the quad-port card... I ended up replacing it with a dual-port.

KF4ZZxDl.jpg

qNUIfECl.jpg

EppVckKl.jpg

rLkVPrMl.png


Pics of the mini-host:
WfJXyFbl.jpg

SrPIa6wl.jpg

LewuaBNl.jpg

What unit/device/PC is that sitting on top of the Synology DS412+?
 
Custom workstation with ASUS Maximus V Gene, Intel Core i5-3570K, 32GB RAM, and 500 GB Samsung EVO SSD, which is running Windows 7 Ultimate x64 and VMware Workstation with nested hypervisors/VMs for testing.
Synology DS413 w/ 4x4TB HDS disks running file storage, DNS/DHCP, etc.

I used to have a much larger setup but found that it was quite a waste. With what's listed above I've been able to do all the testing that I could ever need.
 
Here is my build for my first ESXi server, running ESXi 5.5 Update 2.
I have successfully passed through the Radeon 7850 with audio, as well as a USB controller.


Computer Case: Lian-Li PC-V358
Power Supply: Silverstone ST60F-PS
Motherboard: ASRock H87M-Pro
Processor: Intel Core i7-4770
RAM: Corsair Dominator 6GB (3x 2GB)
System Hard Drive: Patriot Supersonic Boost XT 8GB
SSD Hard Drive: Samsung 850 Pro 256GB
Storage Hard Drive: will come later
Video Card: XFX Radeon 7850 1GB

http://www.michons.us/2014/11/24/esxi-whitebox/
 
Here is my build for my first ESXi server, running ESXi 5.5 Update 2.
I have successfully passed through the Radeon 7850 with audio, as well as a USB controller.


Computer Case: Lian-Li PC-V358
Power Supply: Silverstone ST60F-PS
Motherboard: ASRock H87M-Pro
Processor: Intel Core i7-4770
RAM: Corsair Dominator 6GB (3x 2GB)
System Hard Drive: Patriot Supersonic Boost XT 8GB
SSD Hard Drive: Samsung 850 Pro 256GB
Storage Hard Drive: will come later
Video Card: XFX Radeon 7850 1GB

http://www.michons.us/2014/11/24/esxi-whitebox/


I am curious what you are looking to do with this setup.


It's a little bit unusual to see someone put a gaming GPU in an ESXi box; those are typically used for servers (or, in the lab case, for learning about server virtualization).
 
Zarathustra[H];1041275351 said:
I am curious what you are looking to do with this setup.


It's a little bit unusual to see someone put a gaming GPU in an ESXi box; those are typically used for servers (or, in the lab case, for learning about server virtualization).

It's not that unusual, based on what I saw on several forums.
In my case, the GPU is used for some light gaming by my wife and by my friends when they come to the house. I have a Windows 7 VM to which I pass through the GPU, and this gives the user a pretty nice gaming rig. Most of the time, the ESXi server is running other stuff.

From my research on the internet, people do this for gaming rigs, Plex media servers, and other uses.
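If anyone wants to check what their own host will allow before buying a card, here's a small sketch using the pyvmomi library that lists the PCI devices ESXi reports as passthrough-capable (the host address and credentials are placeholders; the same info is visible under the host's Advanced Settings in the vSphere Client):

```python
"""Sketch: list PCI devices an ESXi host reports as passthrough-capable.
Assumes the pyvmomi library is installed; the host address and credentials
below are placeholders. Depending on your pyvmomi version you may also need
to relax SSL certificate checking for a self-signed host certificate."""

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi.example.lan", user="root", pwd="password")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Map PCI device ids to human-readable vendor/device names.
        names = {dev.id: "%s %s" % (dev.vendorName, dev.deviceName)
                 for dev in host.hardware.pciDevice}
        for info in host.config.pciPassthruInfo:
            if info.passthruCapable:
                state = "enabled" if info.passthruEnabled else "capable"
                print("%s: %s (%s)" % (host.name, names.get(info.id, info.id), state))
finally:
    Disconnect(si)
```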
 
Zarathustra[H];1041275351 said:
I am curious what you are looking to do with this setup.


It's a little bit unusual to see someone put a gaming GPU in an ESXi box; those are typically used for servers (or, in the lab case, for learning about server virtualization).

This is slowly becoming the norm for home users. It allows us to take what used to be 3-4+ boxes/servers, put it all into one machine, and still be able to game or use it as a workstation.
 
My lab update:

Supermicro 19U rackmount - love this rack; the filter system is very nice
Monster Power Conditioner
Supermicro FAT TWIN 6026TT-HDTRF 2 x Sleds/96GB RAM/2 x Xeon L5520's/Intel AF DA 10Gb
Supermicro 6026TT-BTRF 4 x Sleds/48GB RAM/2 x Xeon L5520's/Intel DP PT1000
Supermicro 2u 24 x 2.5 Nexenta Server 14 x 1TB Hybrid Drives/2 x Intel S3700 100GB
EMC PX4-300d 3 x 3TB WD RED NAS - 1 x Intel S3700 100GB SSD (Cache)
2 x HP 1800-24G Switches
Ubiquiti EdgeMax Lite
Ubiquiti Unifi AC

Soon to be added:
3D GPU offload for View
Rackmount UPS (more than likely Dell)
Color-coded patch cables in different lengths
Cable management

Running the full VMware set of products, and I'll soon be adding OpenStack. I'm also waiting on NFR NSX keys.

I've been looking at those Supermicro 6026TTs. How loud are they? I'm planning on using one to consolidate the two hosts I have in my ESXi home lab. Multi-node seems like an easy way to bring additional hosts online when I need more resources.
 
I've been looking at those Supermicro 6026TTs. How loud are they? I'm planning on using one to consolidate the two hosts I have in my ESXi home lab. Multi-node seems like an easy way to bring additional hosts online when I need more resources.

They aren't that bad, and they have a quiet fan setting in the BIOS. The power supplies have a bit of a whine, though.
 
Just picked up a Brocade X448 and an Intel 10GbE fiber card. I'll post specs and pics once I have all that set up after the holidays.

Question: Do you guys mind XenServer posts in this thread, or just ESX?
 
Hey now, Hyper-V is awesome. It could use some better VM resource utilization monitoring, though.
 
It's not that unusual, based on what I saw on several forums.
In my case, the GPU is used for some light gaming by my wife and by my friends when they come to the house. I have a Windows 7 VM to which I pass through the GPU, and this gives the user a pretty nice gaming rig. Most of the time, the ESXi server is running other stuff.

From my research on the internet, people do this for gaming rigs, Plex media servers, and other uses.

That makes a lot of sense actually.

If virtualization were around back in my college days when I had limited space/budget, this is definitely something I would have done.

How does one manage ESXi in this case, though? Using the ESXi client from the guest that has the video card forwarded to it? What if there is a problem? You'd need a secondary way of reaching the ESXi server in that case; you can't rely on the guest.
 
I disagree. Hyper-V has way too many limitations; for example, USB passthrough is a pain in the neck, and there's no passthrough for audio.

USB difficulty could be an issue, but remember, the target audience for these VM hosts is servers sitting in racks in a server room, so it's really not fair to bash them for things like a lack of audio passthrough.

I consider anything desktop-friendly to be a bonus, more than an expectation.
 
Zarathustra[H];1041283181 said:
Audio, I agree, but USB? Many things use USB in a server environment.

That being said, if you REALLY need USB, I guess you could just pass through a USB controller...

In a cluster, why would you want to pass it through on the host anyway? It defeats the whole purpose of a VM cluster if the VM can't move between hosts. We use network USB hubs in both our VMware and Hyper-V environments.

Client Hyper-V is lacking, I will agree with that, but in the data center they trade blows. They both have pros and cons, and that's why we have both.
 