Your home ESX server lab hardware specs?

It is in the basement. The subpanel’s upstairs in the garage.

They must do things differently down south. I keep all my equipment in the basement room with the HVAC; the subpanels are right there, and I don't need to worry about sound.
 
There aren't many basements down south; we don't need to build below the frost line for foundation stability because, well, what's a frost line?
Combine that with all the rivers, swamps, and lakes keeping the water table close to surface level, and a basement becomes a flooding liability and a costly add-on.

If you put the electrical panel in a basement that's going to flood, frying the panel is a matter of when, not if.
 

I'm talking south of Denver in the Springs...
 
If you get snow yearly you ain't in the south :p.
My wife insists that southern Texas isn't "The South".

Me: It's one of the furthest points south you can get and still be in the US...
Her: No, 'The South' is like North Carolina.
Me: But North Carolina has 'North' in the name, how is that 'The South'?
Her: Texas isn't 'The South' or 'Deep South' it's just Texas.
Me: Well, OK. As long as Texas is still Texas, I guess.

Then we both agree that Alabama is weird and move on. The way I see it, anything south of the Mason-Dixon line is "The South". (And yes, I'm well aware that NC qualifies.) She thinks that any place that has sweet BBQ doesn't count. Pfft. It's like all the radio stations in Denver that talk about being "Southern Colorado's Number One X Station!". Bitch, you're above the halfway point in the state. You ain't south. You're north. Knock it off.
 
Except for Austin; they're weird too and basically a California extension.
 
I have a good bit of experience with the Dell T3500. Before popping that X5680 in, be sure it has BIOS 'A07' or later, or it won't boot with a Westmere CPU.

I'll make sure to check the BIOS version before I upgrade. This used to be an old work PC of mine, so I think I kept the BIOS up to date. My X5680 should be arriving any day now, so I'll report back if it works.

So far it's been a capable Minecraft server for my kids. I just need to decide what other games I want to run servers for...
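For anyone else planning this swap, here's a rough sketch (Python, run on the Windows box itself) of how you might check the installed BIOS revision first. The 'A07' minimum is the figure quoted above, not something I've verified independently; the query just reads the standard Win32_BIOS SMBIOS version string.

```python
# Rough sketch: check the T3500's BIOS revision from Python before the CPU swap.
# Assumes a Windows install with PowerShell available; the "A07" minimum comes
# from the post above.
import subprocess

REQUIRED = "A07"  # minimum Dell T3500 BIOS said to boot Westmere CPUs

# Win32_BIOS.SMBIOSBIOSVersion holds the "Axx" revision string on Dell workstations.
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "(Get-CimInstance Win32_BIOS).SMBIOSBIOSVersion"],
    capture_output=True, text=True, check=True,
)
bios = result.stdout.strip()
print(f"Installed BIOS: {bios}")

# Dell's "Axx" revisions sort lexicographically, so a plain string compare works.
if bios >= REQUIRED:
    print("Should be new enough for the X5680 -- still worth checking Dell's release notes.")
else:
    print(f"Flash BIOS {REQUIRED} or later before installing the X5680.")
```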
 
Well, it's not ESX; I use Proxmox, but I run a Dell R710 with 2x L5640 (12 cores / 24 threads), 96GB of RAM, and a 5-disk RAID array. It's getting a bit old, but it still works well for me.
 
Installing the Xeon X5680 in my T3500 was a success. I was already on BIOS A17, so it worked with no issues. Now to think of what VMs to install to actually make use of these extra cores and threads, lol.
 
I've gotta get my stuff back up and running... my ESXi boxes aren't the issue... I set up an HP Z420 with an E5 Xeon and 64GB of ECC RAM for FreeNAS.

Will have to do pics, etc...

So running a few non-server-class machines has been so quiet...


HP ProCurve 6400cl, 6 ports and all full...
  1. Server 2019 - file share - i5
  2. Windows 10 - Plex - i7
  3. ESXi 6.7u2 - VM lab - E5 Xeon
  4. ESXi 6.7u2 - VM lab - E5 Xeon
  5. FreeNAS - iSCSI - E5 Xeon
  6. Windows 10 - gaming rig - E5 Xeon

The newest FreeNAS won't see my 10Gb ConnectX-1 card... but the older release will... put a ticket in with them and gave logs, etc.; they said:
Can you try to update to FreeNAS 11.3-nightly to see whether the new driver version there may fix the issue? Unfortunately we do not use Mellanox cards widely, so I can't reproduce the problem. If the problem is still reproducible in 11.3, I'd recommend you report it to the FreeBSD/Mellanox developers, since we never modified the driver.
So I will have to go back to that older version, as I want to have an iSCSI SSD datastore for my ESXi boxes...
 
TeleFragger, why not update your card?

Well, I am going to find which machine has the ConnectX-2 card in it and try that...
These cards use a proprietary port, so not a typical SFP+ or RJ45...
I'm $75 in on all 6 machines being 10Gb, and wow is it fast...
Even though it's old hardware, it still runs and works great.

Copying files from my Win10 gaming rig to my Server 2019 box, I get 1.09GB/s.
Copying files from my Win10 gaming rig to the older FreeNAS version, I was getting 1.9GB/s!
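Assuming those are gigabytes per second (the units the Windows copy dialog shows), here's the quick back-of-the-envelope math: 10GbE tops out around 1.25GB/s of raw line rate, so ~1.09GB/s is close to wire speed, and anything above 1.25GB/s, like the 1.9GB/s to FreeNAS, is most likely the server's RAM write cache absorbing the copy rather than the disks keeping up.

```python
# Back-of-the-envelope check on the copy speeds above (assuming GB/s figures).
LINK_GBPS = 10                  # nominal 10GbE link speed, gigabits per second
line_rate = LINK_GBPS / 8       # = 1.25 GB/s before protocol overhead

for label, observed in [("Win10 -> Server 2019", 1.09), ("Win10 -> FreeNAS", 1.9)]:
    pct = observed / line_rate * 100
    print(f"{label}: {observed} GB/s = {pct:.0f}% of raw 10GbE line rate")
```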
 
The lab servers will be getting rebuilt into Windows Server 2019 with the Hyper-V role.

The DL380 G6s seem to have no issues taking Server 2019, including all hardware/drivers installed without any fuss. I was only able to run ESXi 6.0U3 on these; no matter what I tried (versions, etc.), I could never get ESXi 6.5 to work on them without locking up. 6.0U3 ran without issues, but it's starting to approach the point of not being "modern".
 

Using StarWind Converter, I was able to get all of my VMs converted over to Hyper-V format and moved them to my other DL380, running W2K19 + the Hyper-V role.
- It seems to import them as Gen 1 VMs with a SCSI disk. Once I detached the hard drive and re-added it as IDE, they booted up fine and discovered their "new hardware" on the Hyper-V host.

Unmounted my iSCSI datastore from the other DL380 running ESXi 6.0 and shut it down.
Set up MPIO + iSCSI on the new DL380 Hyper-V host, connecting to the same iSCSI LUN that was previously attached to my ESXi host.
Two paths to the target, Round Robin, both paths showing "Active/Optimized"... all good.
Rescanned disks in Computer Management, brought the disk online, set up a simple volume, formatted... boom, all good so far.
Migrated one of my lower-I/O VMs (my Plex server) off local storage onto the new drive mounted via iSCSI.

So far, so good. Not too difficult to manage from my Windows 10 computer with the built-in Hyper-V management tools (plus a few extra commands to get connected to the non-domain Hyper-V host).
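If anyone wants to script that SCSI-detach / IDE-re-attach step instead of clicking through Hyper-V Manager for every imported VM, here's a rough sketch that drives the standard Hyper-V PowerShell cmdlets from Python. The VM name and VHDX path are placeholders for your own, and it assumes the Hyper-V PowerShell module is present on the host.

```python
# Sketch: move an imported Gen 1 VM's disk from SCSI to IDE 0:0 so it can boot.
# VM name and VHDX path below are placeholders -- adjust for your environment.
import subprocess

VM_NAME = "plex-server"           # hypothetical VM name
VHDX_PATH = r"D:\VMs\plex.vhdx"   # hypothetical path to the converted disk

def ps(command: str) -> str:
    """Run a PowerShell command on the Hyper-V host and return its output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Drop the SCSI attachment the converter created...
ps(f"Get-VMHardDiskDrive -VMName '{VM_NAME}' -ControllerType SCSI | Remove-VMHardDiskDrive")

# ...then re-add the same VHDX on IDE 0:0, which a Gen 1 guest can boot from.
ps(f"Add-VMHardDiskDrive -VMName '{VM_NAME}' -ControllerType IDE "
   f"-ControllerNumber 0 -ControllerLocation 0 -Path '{VHDX_PATH}'")

# Confirm what's attached now.
print(ps(f"Get-VMHardDiskDrive -VMName '{VM_NAME}' | Format-Table ControllerType, Path"))
```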
 
I have a very modest "home lab": I picked up a Dell R710 with 96GB of RAM and threw a couple of 1TB SSDs in it. It runs ESXi 6.7 as the host, with one VM running Ubuntu Server for my Plex install and Nextcloud. I did have pfSense running on it as well, but mucked up my network "tweaking" it. Now I'm just keeping an eye out for a cheap 1U to run pfSense on as a standalone box.

I'm new to the Plex game, so my media currently isn't taking up much space, but I have 6 empty drive slots to expand into when needed.
 
Picked up a Dell PowerEdge R620 with 2x Xeon E5-2640 v2, 192GB DDR3, an Intel X540 dual-port 10Gb NIC, an Intel I350 quad-port 1Gb NIC, redundant power supplies, and a couple of 300GB SAS drives in a 1U chassis. I'll probably end up replacing its fans with quieter Noctua models, increasing the memory to at least 256GB, and going for SSDs.
 
Picked up an old-ass Dell CS24-SC 1U server with no drives for next to nothing:
dual Xeon L5420
32GB DDR2
added 1x 250GB SSD

It's running pfSense to handle firewall and routing, plus pfBlockerNG for network-wide ad and malware blocking.

I pay the same amount for electricity every month, so no hit there. Just need/want to mod the fans to run at a lower speed. It's currently no louder than my gaming PC back in the day, but I wouldn't mind it being quieter.
 
My wife insists that southern Texas isn't "The South".

Me: It's one of the furthest points south you can get and still be in the US...
Her: No, 'The South' is like North Carolina.
Me: But North Carolina has 'North' in the name, how is that 'The South'?
Her: Texas isn't 'The South' or 'Deep South' it's just Texas.
Me: Well, OK. As long as Texas is still Texas, I guess.

Then we both agree that Alabama is weird and move on. The way I see it, anything south of the Mason-Dixon line is "The South". (And yes, I'm well aware that NC qualifies.) She thinks that any place that has sweet BBQ doesn't count. Pfft. It's like all the radio stations in Denver that talk about being "Southern Colorado's Number One X Station!". Bitch, you're above the halfway point in the state. You ain't south. You're north. Knock it off.

Where I'm from we consider Rhode Island to be the deep South :p
 
Using StarWind Converter, I was able to get all of my VMs converted over to Hyper-V format and moved them to my other DL380

I used StarWind V2V Converter for virtualizing physical servers for some customers; the main advantage is its support for different output formats. They also have a free VSAN version, which lets you play with high availability in a home lab without a physical SAN.
 
Updated today a bit...


Added 64GB of RAM...

Added an IBM 5110 SATA/SAS-81 controller and 2x 1.8TB SAS drives...

All for my home screw-around lab...

upload_2019-9-18_23-3-18.png
 
Going to bump this with an update of mine. Looks like I last posted it back in 2017... Unfortunately, not too much has changed.

2017
1585943757419.png
1585943787813.png


2020
1585943831139.png
20200403_150319.jpg
20200403_150343.jpg

Shuttle - AMD 5600+, 4GB, running pfSense // Internet modem // Iomega NAS for backups, 6TB usable
24-port HP gigabit switch
Nutanix CE server - 2x E5540, 32GB, 1x 256GB SSD, 1x 600GB WD Raptor
Synology RS815RP+ - currently 1x 2TB and not really used yet
DL380 G6 ESXi - 2x L5638, 72GB, 1x 146GB
MSA2012 - currently 12TB (3x 2TB SATA // 8x 450GB SAS)
DL380 G6 ESXi - 2x L5638, 72GB, 1x 146GB, 2x 1TB
HP UPS
1U PDU

Going to try giving some of you cabling guys a heart attack, lol. I cringe every time I see it, but it turned into a "I just need to plug this in and get it working to test stuff" situation. I do need a full-depth cabinet, though, so I can actually mount stuff in here.
 

Not bad, especially for at home. I'm just glad I have a decent lab at work I can play with. Migrating VMs between hosts on a common 10Gb switch is nice. It must be badass on those 100Gb backplanes.
 

I wish I had a good work lab for hands-on time. They do offer us virtual labs where we can still get experience with the software, but when you are an installer, nothing beats hands-on learning.

Not sure if you were talking to me about 100Gb backplanes, but I don't have anything near that in my systems.
 

No, just commenting about the blade chassis and how smooth that must be.
 
I have two identical servers in different houses; they replicate each other overnight:

20200507_085401.jpg


1588106495908.png


1588106623400.png

FSP 1200W 80 PLUS Platinum
Adaptec 6805T
2x 256GB SSD, system (RAID 1)
2x 3TB HDD, system (RAID 1)
3x 8TB HDD, data (RAID 5)
1x 5TB HDD, backup
1x NVIDIA GTX 1070 8GB (for video encoding and streaming)
All installed in a Fractal Design XL2 with 6x 140mm PWM fans
+
APC Smart-UPS 1000 with an AP9619 network card and temperature sensor

All bought second-hand (pretty cheap), except the case and disks, which were new.
What's impressive about this build, in my opinion:
- It all fits into a single E-ATX case; externally you see only one Ethernet cable and the power lead coming from the APC.
- Very quiet thanks to the 140mm PWM fans.
- Original 2U Supermicro passive heatsinks, each with a pressure-optimised 120mm PWM fan fitted on top.
- It reports only 1/5 load capacity on the APC front panel when running.
- While video encoding, the fans spin slightly faster and the load goes up to 3/5.

You probably don't need this many case fans, but since they are PWM, I have tweaked the minimum rotation speed via ipmitool so they spin at the minimum unless needed.
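The post doesn't spell out which ipmitool knobs were changed; one common approach on Supermicro boards is to lower the fan sensors' lower RPM thresholds so slow, quiet PWM fans don't trip the BMC into full-speed ramps. Here's a hedged sketch along those lines; the sensor names and RPM values are assumptions, so check "ipmitool sensor list" on your own board first.

```python
# Sketch: lower the BMC's fan thresholds so quiet PWM fans don't trigger ramp-ups.
# Sensor names and RPM values are assumptions -- run "ipmitool sensor list" first
# and pick thresholds a notch below your fans' real minimum RPM.
import subprocess

FAN_SENSORS = ["FAN1", "FAN2", "FAN3", "FAN4"]   # assumed sensor names
LNR, LCR, LNC = 100, 200, 300                    # assumed lower non-recoverable/critical/non-critical RPM

for fan in FAN_SENSORS:
    # ipmitool sensor thresh <sensor> lower <lnr> <lcr> <lnc>
    subprocess.run(
        ["ipmitool", "sensor", "thresh", fan, "lower", str(LNR), str(LCR), str(LNC)],
        check=True,
    )
    print(f"Lowered thresholds on {fan}")
```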

Overall exceeding my expectations :)
 
Just disassembled it for another build, but I had ESXi 6.7 with:
AMD FX-8350
ASRock 970 Extreme3
CM Hyper 212
HD 5970
GTX 580
Thermaltake 875W (850W?) RGB PSU
24GB DDR3
A few extra NICs
Some SSDs, some HDDs

Just moved it to a lower-power Proxmox build (I was only testing GPU passthrough on ESXi) with:
Athlon II X4 610e
Stock cooler from my 1090T
ASUS M4A785TD-M EVO
120GB Intel SSD
24GB DDR3
Antec Neo ECO 420C
A few extra NICs
 
Crazy that this thread is twelve years old. How things have evolved in that time.

My home office / moonlighting gig setup is currently a three-host ESXi 6.7 cluster with a dedicated FreeNAS SAN over a 10Gb fiber network (prices are what I paid on eBay):

Each host (3):
Supermicro X9SRi8-F motherboard ($132)
Intel Xeon E5-2665 8-core 2.4GHz ($33)
96GB (6x 16GB) PC3-12800 Registered ECC RAM ($24/stick - $144)
16GB Sandisk low-profile USB key ($9)
Total: $318 each host

iSCSI storage:
FreeNAS-11.1-U7
Supermicro X8DTE-F motherboard ($92)
(2) Intel Xeon E5620 quad-core @ 2.40GHz ($5 each)
96GB (6x 16GB) PC3-10600 Registered ECC RAM ($24/stick - $144)
(2) IBM M1015 8-port SATA-III HBA ($35 each)
(2) Mini-SAS SFF-8087 Male to 4x SATA Ports Cable ($10 each)
(16) Samsung 850 EVO 250GB SSD ($42 each)
Norco RPC-4020 4U Case with 20 Hot-Swap SATA/SAS Drive Bays ($300?)

Total: $1308

Network:
(4) Mellanox MNPH29D-XTR ConnectX-2 2-port Fiber HBA ($45)
(16) AOI A7EL-SN85-ADMA 10Gb 850nm MMF SFP+ Transceiver LC-LC ($5 each)
(8) 3 Meter 10Gb OM3 Multimode Duplex Fiber Optic Cable (50/125) - LC-LC ($15)
Dell Force10 S4810 48-Port 10GbE & 4x 40GbE Ethernet Switch ($450)

Total: $830

Hardware totals:
24 cores @ 2.4GHz = 57.6GHz aggregate
3 x 96GB = 288GB RAM
4TB (2TB usable) SSD in RAID-10
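For anyone following the math on those totals, a quick sketch with the numbers from the parts list above:

```python
# Sanity check on the totals (values copied from the parts list above).
hosts = 3
cores_per_host, clock_ghz = 8, 2.4
ram_per_host_gb = 96
ssd_count, ssd_gb = 16, 250

total_cores = hosts * cores_per_host                 # 3 hosts x 8 cores = 24
aggregate_ghz = round(total_cores * clock_ghz, 1)    # 24 x 2.4 = 57.6
total_ram_gb = hosts * ram_per_host_gb               # 3 x 96 = 288
raw_tb = ssd_count * ssd_gb / 1000                   # 16 x 250GB = 4 TB raw
usable_tb = raw_tb / 2                               # RAID-10 mirrors half away -> 2 TB

print(total_cores, "cores /", aggregate_ghz, "GHz aggregate /",
      total_ram_gb, "GB RAM /", raw_tb, "TB raw /", usable_tb, "TB usable")
```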



IMG_8859.JPG


IMG_8907.jpg
IMG_8910.JPG
 
Yummy, that's a very cool setup :)
 

That is immaculate; the fiber runs at the top are super clean. What do you do for work, if you don't mind me asking?
 

I work for AWS, but this kit is a combination of my home lab and DR sites for a few customers that I support through a consulting gig I run on the side.
 
Going to try giving some of you cabling guys a heart attack, lol. I cringe every time I see it...

While I love seeing some of the sexy setups, mine looks the same as yours, although perhaps a little jankier because I don't have a rack for it. They just sort of lie on top of shit or on the floor. My philosophy is the same: functionality. Plug in what I need, test what I need, and be done. It's out of sight, so I'm not gonna spend hours fixing it to be pretty.
 

I'm with you, but it still bothers me, lol. I just don't have a proper cabinet to do quality cable management; even some cable-management rails to shove the cables behind would be helpful. There's nothing like a quality cable mullet though :D
 
^ If anyone's in Austin I could score you a similar deal really cheap; we have stacks of R710/720s we're planning on e-cycling or selling to a wholesaler (and a few R810/820 models that we'll be selling as well).
PM me and we'll work out details.
 
I'm with you, but it still bothers me, lol. I just don't have a proper cabinet to do quality cable management; even some cable-management rails to shove the cables behind would be helpful. There's nothing like a quality cable mullet though :D

FYI - I built a 21U cabinet many years ago and it was an afternoon's work that cost less than $100. The most expensive thing was the metal rails themselves, for $63 back then. I think I posted it around here somewhere but it was over a decade ago...

dimensions.jpg
 