Your home ESX server lab hardware specs?

FYI - I built a 21U cabinet many years ago; it was an afternoon's work that cost less than $100. The most expensive part was the metal rails themselves, at $63 back then. I think I posted it around here somewhere, but that was over a decade ago...

View attachment 279462


That looks really nice, and a great write-up. Even a 34U rail set is ~$75, so with some lumber you'd have a relatively cheap rack. (y)
 
Yep. Basic construction pine 2x4s and 2x6s are dirt cheap, sand down relatively well, and take stain readily.
 
Yep. Basic construction pine 2x4s and 2x6s are dirt cheap, sand down relatively well, and take stain readily.

Awesome, thanks! I saw the casters on yours. Did you mount them under the posts where the rails attach, or somewhere else? I'm curious from a weight perspective.
 
Yeah, the casters fit straight into holes drilled directly into the upright posts (the two 2x4s and two 2x6s). I was expecting them to crumble when I had the cabinet fully loaded, but these casters proved very sturdy: they were replacement casters for a couch, I believe.
 
My desktop is an ESXi server, which has been amazingly convenient for running any OS/software I want. I use GPU passthrough for gaming.

Hardware is:
3900X (great processor for ESXi)
MSI AM4 motherboard (terrible for passthrough: peripherals don't all work, some of the PCIe lanes won't work, just barely acceptable)
32GB RAM
1TB NVMe

This setup is much less acceptable for ESXi than the older X99 and dual LGA 2011 setups, where everything just worked.

Any recommendations for an AM4 motherboard that works well for ESXi?
 
Hmm... I'd say both..

What's hot? Anyone moved to containers, versus VMs?
I am really digging Proxmox these days: a mixture of containers and full-blown VMs. Hardware, I would say, depends on your goals. There are a lot of cheap OEM SFFs that make great nodes, and with the fall of SSD prices...
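In case it's useful, here's a minimal sketch of what that mix looks like from the Proxmox CLI. The IDs, storage names, template, and ISO filename below are placeholders, not anything from the posts here:

# LXC container for a lightweight service (IDs/storage/template are example values)
pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname dns --cores 1 --memory 512 \
    --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --unprivileged 1

# Full-blown VM for anything that wants its own kernel
qm create 201 --name lab-vm --cores 4 --memory 8192 \
    --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
    --net0 virtio,bridge=vmbr0 \
    --ide2 local:iso/your-installer.iso,media=cdrom \
    --boot 'order=scsi0;ide2'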
 
Hmm... I'd say both..

What's hot? Anyone moved to containers, versus VMs?
I have moved most of my stuff to containers and/or VMs that run on Unraid, for the all-in-one convenience.
Anything ESXi I leave for my work stuff at the datacenter nowadays.
 
I have moved most of my stuff to containers and/or VMs that run on Unraid, for the all-in-one convenience.
Anything ESXi I leave for my work stuff at the datacenter nowadays.
Agreed. For home labs there are just way too many better options than ESXi.
 
My home lab has progressed in so many ways over the past few years. I went from HP DL380 G6s running in a Windows Server 2019 cluster with Hyper-V...
I didn't mind the power usage; it was just the heat those put off. So I sold them off and went with some HP ProDesk 600 G1s, and went through various phases of running XCP-ng, Proxmox, ESXi, and Hyper-V.
Then I wanted new toys, so I sold those off too and now have 3 Intel i5-based NUC10s, maxed out at 64GB DDR4 each, with a 256GB NVMe plus a 1TB Samsung SSD.

I started with Hyper-V on those, but then wanted to go back and try ESXi 7. Then just recently my vCenter bombed out and doesn't want to start up normally... so, as of today, I gave Proxmox a try, even though I'd read various reports that NUCs didn't cooperate well with installing Proxmox.
All are running the latest Proxmox v8.0.3 in a 3-node cluster, with just the one built-in NIC per host. Nothing fancy; KISS method.
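For anyone following along, clustering three nodes like this is only a couple of commands. A rough sketch, with the cluster name and IP as placeholders:

# on the first node: create the cluster
pvecm create nuc-cluster

# on each of the other two nodes: join it using the first node's IP
pvecm add 192.168.1.10

# verify membership/quorum from any node
pvecm status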

*edit*
So much easier to manage than those loud, power-hungry DL380 beasts.
 
While technically not an ESXi lab (in fact, I migrated from ESXi 5.5.0 to 6.7.0u3, then Proxmox VE at work because nobody wanted to eat the insane cost for new ESXi licenses and our aging HPE Gen8 servers wouldn't have been on the HCL anyway), I'm starting to get hit by the homelab bug pretty hard, to the point that I bought a 12U rack cart just to start housing it all.

-Lenovo System x3650 M5 (dual Xeon E5-2640 v3s, 768 GB DDR4-2133R, basically something I got because it had that ludicrous amount of RAM fitted)
-OWC Mercury Rack Pro mini-SAS JBOD enclosure (driven by an LSI SAS9207-4i4e, easy way to add four 3.5" bays to a server that only has 2.5" bays)
-Dell PowerConnect 5524 switch (less fan noise than the 7024, also doesn't need optional modules in the back to add 10GbE)
-APC 1000VA/600W UPS

The original plan was to run Proxmox VE on that server, but too many frustrations with trying to set up the LSI HBA for passthrough led me to run TrueNAS SCALE bare-metal instead and plan around that being my VM hypervisor.

It's also thrown another wrench into the works in that I can't install NVMe drives without the fans ramping up to unacceptably loud hairdryer mode; they technically work, but I can't even override the fan behavior with ipmitool commands, because the BMC just reverts it unless I constantly re-apply it from a shell script.
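In case anyone hits the same thing: the workaround really is just a script that keeps re-applying the override, because the BMC keeps reverting to its own fan curve. The raw bytes below are example values only; they're vendor-specific, so substitute whatever command actually works on your BMC:

#!/bin/sh
# Re-apply a manual fan setting every 30 seconds, since the BMC reverts
# to its own curve. The raw bytes are EXAMPLE VALUES ONLY and vary by
# vendor/BMC -- replace them with the command that works on your hardware.
while true; do
    ipmitool raw 0x30 0x30 0x02 0xff 0x20 2>/dev/null   # example bytes only
    sleep 30
done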

Part of me wants something much quieter and less of a space heater, but I don't think I can pull that off without doing a custom ATX build on something that supports DDR4 RDIMMs so I can move all that RAM over. Alas, my Threadripper box is not it - UDIMMs only on that platform.
 
I also just moved from ESX to Proxmox, but at home. Work is all KVM on Red Hat.

Home Proxmox servers are all Xeon D-1541 Supermicro boards.
 
My home lab has progressed in so many ways over the past few years. I went from HP DL380 G6s running in a Windows Server 2019 cluster with Hyper-V...
I didn't mind the power usage; it was just the heat those put off. So I sold them off and went with some HP ProDesk 600 G1s, and went through various phases of running XCP-ng, Proxmox, ESXi, and Hyper-V.
Then I wanted new toys, so I sold those off too and now have 3 Intel i5-based NUC10s, maxed out at 64GB DDR4 each, with a 256GB NVMe plus a 1TB Samsung SSD.

I started with Hyper-V on those, but then wanted to go back and try ESXi 7. Then just recently my vCenter bombed out and doesn't want to start up normally... so, as of today, I gave Proxmox a try, even though I'd read various reports that NUCs didn't cooperate well with installing Proxmox.
All are running the latest Proxmox v8.0.3 in a 3-node cluster, with just the one built-in NIC per host. Nothing fancy; KISS method.

*edit*
So much easier to manage than those loud, power-hungry DL380 beasts.
View attachment 604647
Does Proxmox have storage clustering like ESXi does for fast local storage?
 
While technically not an ESXi lab (in fact, I migrated from ESXi 5.5.0 to 6.7.0u3, then Proxmox VE at work because nobody wanted to eat the insane cost for new ESXi licenses and our aging HPE Gen8 servers wouldn't have been on the HCL anyway), I'm starting to get hit by the homelab bug pretty hard, to the point that I bought a 12U rack cart just to start housing it all.

-Lenovo System x3650 M5 (dual Xeon E5-2640 v3s, 768 GB DDR4-2133R, basically something I got because it had that ludicrous amount of RAM fitted)
-OWC Mercury Rack Pro mini-SAS JBOD enclosure (driven by an LSI SAS9207-4i4e, easy way to add four 3.5" bays to a server that only has 2.5" bays)
-Dell PowerConnect 5524 switch (less fan noise than the 7024, also doesn't need optional modules in the back to add 10GbE)
-APC 1000VA/600W UPS

The original plan was to run Proxmox VE on that server, but too many frustrations with trying to set up the LSI HBA for passthrough led me to run TrueNAS SCALE bare-metal instead and plan around that being my VM hypervisor.

It's also thrown another wrench into the works in that I can't install NVMe drives without the fans ramping up to unacceptably loud hairdryer mode; they technically work, but I can't even override the fan behavior with ipmitool commands, because the BMC just reverts it unless I constantly re-apply it from a shell script.

Part of me wants something much quieter and less of a space heater, but I don't think I can pull that off without doing a custom ATX build on something that supports DDR4 RDIMMs so I can move all that RAM over. Alas, my Threadripper box is not it - UDIMMs only on that platform.
I am really surprised at the passthrough issues. I'm passing an LSI card straight through to a TrueNAS Core VM within Proxmox on an ASRock AMD 470 "server" M-ATX board; it supports ECC, but only UDIMMs. Zero issues. I even got my Quadro P400 passed through to my Plex container for hardware encoding/decoding. It's a single AIO box with compute and storage in one. Idle power consumption is a bit high at around 75 watts, but under load it stays near 100-125 watts, which is not bad for a Ryzen 5900 chip and 7 4TB spinners.
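For reference, the HBA passthrough part of a setup like this is short on the Proxmox side. A sketch, where the PCI address and VM ID are placeholders for whatever lspci and your VM list actually show, and pcie=1 assumes a q35 machine type:

# find the HBA's PCI address (01:00.0 below is just an example)
lspci -nn | grep -i -e LSI -e SAS

# hand the whole device to the TrueNAS VM (VM ID 100 is an example)
qm set 100 --hostpci0 0000:01:00.0,pcie=1

# IOMMU has to be enabled in the BIOS and visible to the kernel first
dmesg | grep -e DMAR -e AMD-Vi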
 
I run Proxmox VE on a CWWK mini PC with 6x Intel i226-V NICs. I purchased it as a barebones unit and added the RAM and storage myself. I found an unlocked/modified BIOS which enables HWP and gives more options.

Specs of the PC are:
i5-1235U
2 x 32GB Crucial CT2K32G4SFD832A
TeamGroup Z440 2TB NVMe

Any recommendations for an AM4 motherboard that works well for ESXi?

MSI is the only motherboard vendor that doesn't support ECC on AM4, so avoid another MSI board if you'll ever use ECC. You'll have to research boards with good IOMMU group support; alternatively, you can look into ACS Override on your current board or other boards. It will let you break up the IOMMU groups and pass through individual devices. From what I've seen, people can generally use it with no issue, but it could potentially introduce instability.
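Worth noting that the ACS override patch is a Linux/KVM thing (the Proxmox kernel ships it), not an ESXi option, but if you end up on a KVM-based hypervisor this is roughly what it looks like. Treat it as a sketch:

# /etc/default/grub -- append the ACS override parameter to the kernel
# command line, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_acs_override=downstream,multifunction"

# after rebooting, check how the IOMMU groups actually split up
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"; ls "$g/devices"
done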
 
My desktop is an ESXi server, which has been amazingly convenient for running any OS/software I want. I use GPU passthrough for gaming.

Hardware is:
3900X (great processor for ESXi)
MSI AM4 motherboard (terrible for passthrough: peripherals don't all work, some of the PCIe lanes won't work, just barely acceptable)
32GB RAM
1TB NVMe

This setup is much less acceptable for ESXi than the older X99 and dual LGA 2011 setups, where everything just worked.

Any recommendations for an AM4 motherboard that works well for ESXi?
A little spendy, but the ASRock line of M-ATX motherboards based on the X470 and X570 is worth a look. I have had good luck with most of my passthroughs, except for a stability issue trying to pass an Nvidia Tesla GPU to a Windows VM for cloud gaming. I think the issue was more the hacked drivers, or the PCIe power to the mobo, causing crashes only under load ;/
 
Well, that Lenovo x3650 M5 decided that it wanted to brick itself out of the blue. The IMM2 hangs trying to get through U-Boot, and without the IMM2 BMC fully booted up, the actual server will hang on "System initializing" forever while ramping the fans up to eardrum-offensive levels.

The official solution is to replace the entire damn motherboard rather than just reflash the U-Boot ROM chip on the board - not worth it on an old C612/Haswell-EP platform, unless the replacement mobo were free.

Lesson learned - no more Lenovo servers, ever. Even HPE ranks above them now, and I can't stand the way HPE paywalls firmware updates on top of how awful their BIOS is on the Gen8 systems.

Now I get to find a new home for all of those DDR4 RDIMMs, how frustrating! Can't use the Threadripper 1950X build for that, UDIMMs only. The 768 GB RAM dream sure was a short-lived one.
 
Does Proxmox have storage clustering like ESXi does for fast local storage?

I believe on the Proxmox side that would be "Ceph".
3 nodes recommended (required?). It uses up quite a bit of additional RAM... I assume the same as VMware vSAN.
Or you could just use local ZFS on some fast storage and set up replication for your volumes between two nodes...

No need to set up Ceph, and I wouldn't recommend 3-node Ceph, only 4 nodes or more; that way you can still create storage volumes when a single node is offline.
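A sketch of the two-node ZFS replication route, assuming both nodes have ZFS-backed storage with the same storage name; the VM ID, node name, and schedule are placeholders:

# replicate guest 100's disks to the node named pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# check replication state across the cluster
pvesr status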
 
ESXi server: Dell R740 with 2x Xeon Gold 5217, 256GB RAM, 74TB of SSD storage
Nutanix CE server: Dell R740 with 2x Xeon Gold 6136, 128GB RAM, 9TB of PCIe storage
Harvester server: HPE DL360 Gen9 with 2x E5-2640, 256GB RAM, 72TB of SSD storage

I have the Nutanix and Harvester servers set up to test replacements for ESXi. Eventually I'll be down to just that first Dell server.
 
Can never beat that...

I started on the 10Gb network journey all-in at $125 for cables, 10Gb NICs, and a 6-port switch...
Now I'm upgrading to newer technology... away from the ConnectX-1 and -2 cards with CX4 ports...
That's a pretty good entry point!

My former employer went out of business, so I got a bunch of 10Gb networking stuff and servers for free. I love having 10GbE around the house.

I have 25Gb cards and SFP28s now, but no 25Gb switch. I've been looking at Arista, since that's what my SFPs are coded for, but I haven't had the budget for it. I really need to sell some of my stuff.
 
That's a pretty good entry point!

My former employer went out of business, so I got a bunch of 10Gb networking stuff and servers for free. I love having 10GbE around the house.

I have 25Gb cards and SFP28s now, but no 25Gb switch. I've been looking at Arista, since that's what my SFPs are coded for, but I haven't had the budget for it. I really need to sell some of my stuff.
Well, what about just going port to port between, say, the 2 endpoints that you do most of your copying from?

For me...
my gaming rig: I DVR TV with HDHomeRuns, rip it through MCEBuddy, plus the kids' sports photography and videos... so I always have a lot of data to move around, and could just go port to port to my file server.

Plex, the Hyper-V lab, and the ESXi lab aren't crucial to have on 10Gb, but why not, lol... When 10Gb internet comes, I'm not going to be sitting here wondering if I can get myself to that. I'll know my house is not the bottleneck.
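If anyone wants to try the port-to-port route, it's basically just putting the two NICs on their own tiny subnet and pointing your shares at that address. A sketch for the Linux/NAS end; the interface name and addresses are made up, and a Windows box would get the matching static IP in its adapter settings:

# give the direct-attach NIC its own /30 (enp3s0 is an example name -- check `ip link`)
ip addr add 10.10.10.1/30 dev enp3s0
ip link set enp3s0 up mtu 9000   # jumbo frames optional; both ends must match

# the other endpoint gets 10.10.10.2/30, then mount shares via 10.10.10.1
# so bulk copies take the direct 10Gb path instead of the regular LAN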
 
Well, what about just going port to port between, say, the 2 endpoints that you do most of your copying from?

For me...
my gaming rig: I DVR TV with HDHomeRuns, rip it through MCEBuddy, plus the kids' sports photography and videos... so I always have a lot of data to move around, and could just go port to port to my file server.

Plex, the Hyper-V lab, and the ESXi lab aren't crucial to have on 10Gb, but why not, lol... When 10Gb internet comes, I'm not going to be sitting here wondering if I can get myself to that. I'll know my house is not the bottleneck.
Not a bad idea, if most of my copying weren't from a NUC to a Synology and then into Plex, which is on the ESXi box. I will eventually replace the Synology with something like OpenMediaVault on a 2nd virtualized server. At that point I'll probably get that 25Gb working; I only have what I have because I got it free. Otherwise 10Gb has been treating me well enough. :)
 