Your home ESX server lab hardware specs?

Discussion in 'Virtualized Computing' started by agrikk, Dec 18, 2008.

  1. Spartacus09

    Spartacus09 Limp Gawd

    Messages:
    466
    Joined:
    Apr 21, 2018
    Post it anyway!~ I love to ogle any setups and specs.
     
  2. Grimlaking

    Grimlaking 2[H]4U

    Messages:
    2,423
    Joined:
    May 9, 2006
    I'll get right on that tomorrow when I am online to the office.
     
  3. Grimlaking

    Grimlaking 2[H]4U

    Messages:
    2,423
    Joined:
    May 9, 2006
[screenshot: dev environment]

    That is my Dev environment.

[screenshot: prod cluster]

The first is a 3-host setup, with capacity for one host to fail.

    The second is one of our prod clusters: 5 hosts, with one set up as a hot spare for failover. Clearly not as active yet, but that's coming. We do our dancing in dev.
     
  4. dgingeri

    dgingeri 2[H]4U

    Messages:
    2,818
    Joined:
    Dec 5, 2004
    My self training lab:
MSDN OS subscription (I highly recommend this for any sysadmin doing self-training. Getting 5 licenses for any server OS for $900/year is cheaper than buying licenses and less aggravating than dealing with perpetual trial licenses.)

Storage Server: Windows Storage Server 2012 R2, Core i5 2500K, 8GB, Dell H710 RAID, 120GB boot SSD, 6x 4TB HGST Ultrastar in RAID 10, 4x 1TB WD Blue laptop drives for low-tier VM storage; runs several iSCSI targets plus file storage for my main machine

Server 1: Ryzen 1700X, 32GB DDR4-2933, 128GB NVMe boot/primary VM storage, 500GB Samsung 850 EVO for secondary VMs, 2TB iSCSI from the storage server (runs pfSense router, local DC, DHCP, DNS, WSUS, and WDS), 2x 500GB iSCSI LUNs from the laptop drives in the storage server for playing around with Ubuntu

Server 2: Core i7 4790K, 16GB DDR3-2400, 256GB NVMe for web host VMs (Windows and Ubuntu Server), 6TB iSCSI for SQL Server Express and PostgreSQL servers

I happen to be looking to sell a pair of CPUs that were used for servers, a Core i7 5930K with 16GB of DDR4-2400 and a Core i7 4930K with 32GB of low-latency DDR3-1600, if anyone is interested.
     
  5. TeleFragger

    TeleFragger Gawd

    Messages:
    724
    Joined:
    Nov 10, 2005
^^^ do you find the i series better than Xeon for VMs?
     
  6. Spartacus09

    Spartacus09 Limp Gawd

    Messages:
    466
    Joined:
    Apr 21, 2018
I use a 3770K for mine. Frankly, it depends on your use case: Xeon processors are qualified to handle heavier, more intensive loads consistently.
    They also support ECC RAM, so your maximum RAM capacity is often much higher.

    The i series has the benefits of overclockability, lower price, and a better performance/cost ratio per GHz; they also generally drop in price a lot quicker over the long term.

    It's not a matter of one being better per se, but of which best suits your use case.
    If you're running a heavy workload 24/7 at 50%+ CPU usage, a Xeon would probably be best.
    If you barely hit 15% CPU, and only during certain peak hours, the i series would save you money.

    EDIT: Just realized you wanted a Ryzen comparison, I have a coworker that runs one I'll grab his 2 cents.
     
    TeleFragger likes this.
  7. dgingeri

    dgingeri 2[H]4U

    Messages:
    2,818
    Joined:
    Dec 5, 2004
    I find no differences, as far as the CPU goes.

    The Core chips are just what I had around for my home lab. I did have a Xeon E5-2603 v2 on my old P9X79-E WS board for a while, but it was just too slow for most of what I did, so I switched it out for the 4930k. I also had a couple Dell T110 II servers with Xeon E3 (v1) chips for a while, but ended up selling them. The last Xeon I had personally was a T110 II with an E3-1230, and I sold that over 3 years ago, I think.

I work with Xeons all the time at work, though. Server platforms do many things better, but it's more about the surrounding hardware than the processor when it comes to a VM host. That server hardware is usually pretty easy to get cheap on eBay, though, like server NICs and RAID controllers. The other surrounding piece that makes a difference is memory. Obviously, we can't install nearly as much memory in a Core system as in a Xeon system in most cases, because the Core chips lack support for ECC registered memory. ECC itself only makes a difference in stability when memory is going bad, so it doesn't matter most of the time. Buying reliable memory makes ECC moot for a home lab, and most people aren't going to spend the money for 256GB of memory on their home lab anyway.

    The processor itself is no different otherwise, even under heavier workloads. A Core chip would be just as reliable under a heavy load as a Xeon, as long as the memory doesn't start throwing errors. (Games are heavier workloads than server apps in most cases, and the heaviest games and benchmarks would actually be harder than any server app, as servers should have some headroom to operate reliably, whereas benchmarks and some games just take all the CPU they can.)

    It's the same with Ryzen and Epyc. Ryzen even supports ECC memory, with the right motherboard, so that part of the comparison makes no difference. Ryzens make very good servers for either home or small business use, with the right surrounding hardware.
     
    Grimlaking and TeleFragger like this.
  8. beyonddc

    beyonddc Limp Gawd

    Messages:
    392
    Joined:
    Sep 25, 2002
Someone please tell me that I got a good deal on this R420 from eBay. The server cost me $440 including shipping.

It will be used as a home lab for learning virtualization and also for hosting some VMs.

[screenshot: eBay listing]
     
  9. Spartacus09

    Spartacus09 Limp Gawd

    Messages:
    466
    Joined:
    Apr 21, 2018
About average to a little below average in price, but it's a solid unit.
     
  10. TeleFragger

    TeleFragger Gawd

    Messages:
    724
    Joined:
    Nov 10, 2005
Very nice... 4x 2TB Dell drives... nice. It definitely needs more memory, though, depending on the number of VMs you run. I find that I run out of memory long before CPU.
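That "memory before CPU" rule of thumb is easy to put numbers on. A quick budget sketch, using hypothetical figures (a 48GB host, ~2GB held back for the hypervisor, 4GB per VM — none of these come from the posts above):

```shell
# Rough VM-count budget (all numbers hypothetical, adjust for your lab).
host_gb=48        # total host RAM
hypervisor_gb=2   # assumed ESXi/hypervisor overhead
per_vm_gb=4       # assumed allocation per VM, no overcommit

vms=$(( (host_gb - hypervisor_gb) / per_vm_gb ))
echo "Roughly ${vms} VMs before memory runs out"   # Roughly 11 VMs
```

With overcommit (ballooning, TPS, swap) you can stretch past this, but a no-overcommit budget like the above is the honest starting point.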
     
  11. CombatChrisNC

    CombatChrisNC [H]ard|Gawd

    Messages:
    1,080
    Joined:
    Apr 3, 2013
    2x ESXi 6.0 Hosts in a cluster
    HP ProLiant DL360p Gen8
    Intel(R) Xeon(R) CPU E5-2620 @ 2.00GHz (2x 6-core, 12-thread each host)
    192GB DDR3 each
    Brocade 1020 HBAs (FCoE and iSCSI capable)

    1x FreeNAS (SAN)
    HP ProLiant DL360p Gen8 (same CPU and RAM spec as above)
6x 600GiB disks (1.8TiB usable, 3x RAIDZ1 vdevs)
    2x 100GiB SSDs for the ZIL
    1TB iSCSI share for the ESXi hosts

Oh, and it's 10Gb iSCSI/networking through a single port on each box.

It's perfectly quick enough for what we need. I'm only worried I'll fill up the SAN before anything else. I'm kicking myself that I didn't dig out the 900GiB drives I had laying around and use those instead. But I suppose I can migrate the array to them one at a time.
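The capacity figure in the layout above checks out. A minimal sketch of the arithmetic, assuming the "3x Z1's" means three 2-disk RAIDZ1 vdevs (each giving up one disk's worth of space to parity):

```shell
# Sanity-check usable capacity: 6x 600GiB disks in 3x RAIDZ1 vdevs
# of 2 disks each; each vdev loses one disk's worth to parity.
disks=6
disk_gib=600
vdevs=3

usable_gib=$(( (disks - vdevs) * disk_gib ))
echo "Usable: ${usable_gib} GiB"   # Usable: 1800 GiB ~= 1.8 TiB
```

Which matches the 1.8TiB usable quoted in the post (before ZFS metadata and reservations take their cut).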
     
    jlbenedict and TeleFragger like this.
  12. beyonddc

    beyonddc Limp Gawd

    Messages:
    392
    Joined:
    Sep 25, 2002
My R410 arrived a couple of days ago. I finally had time to install ESXi 6.7 on it.
    - 2x Intel Xeon E5-2440 @ 2.4GHz
    - 48GB of RAM
- 2x 1GbE NICs
    - 4x 2TB hard drives configured in RAID5
    - DVD Drive
    - Dual PSU

Note in the picture that I'm using a UPS. That CyberPower consumer-level UPS turned out not to be useful for my server: it didn't have enough capacity to drive the server while all the fans were spinning up during startup, and it just turned itself off. Right now I have the server plugged into a regular surge protector; I'll get a better UPS when I have a chance.

[photo of the server]
     
  13. ChRoNo16

    ChRoNo16 [H]ard|Gawd

    Messages:
    1,308
    Joined:
    Feb 3, 2011
    I don't have a UPS at all. Been playing chicken for 15+ years so far.
     
    jlbenedict likes this.
  14. Grimlaking

    Grimlaking 2[H]4U

    Messages:
    2,423
    Joined:
    May 9, 2006
Yeah, get a good UPS for a dual power supply setup. I would put that on two CyberPowers if the price is right. Then you'll have the juice needed to run the server just fine and be protected from power fluctuations.
     
  15. TeleFragger

    TeleFragger Gawd

    Messages:
    724
    Joined:
    Nov 10, 2005
    No ups here... just a vm lab that gets powered down...
     
  16. TeleFragger

    TeleFragger Gawd

    Messages:
    724
    Joined:
    Nov 10, 2005

Any issues with this case?
    I've got one locally on CL that the seller wants $175 for... I was gonna offer $150...


    Thoughts? Steer clear because of the warning?
     
  17. ChRoNo16

    ChRoNo16 [H]ard|Gawd

    Messages:
    1,308
    Joined:
    Feb 3, 2011
I wouldn't be afraid of it. Plan on not using that bay, or buy a replacement backplane; it can't be that expensive.
     
  18. TeleFragger

    TeleFragger Gawd

    Messages:
    724
    Joined:
    Nov 10, 2005
I got a new setup running...

    I didn't care to RAID the SSDs, as this is a test box... who knows what I'm going to do, since I could...

    but here is what I've got to play with...

    got an HP ProCurve CX4 ConnectX switch coming... 6 ports... and I need 5. WOOT!!!! lol

[screenshots: host, storage, network]
     
    Outlaw85 likes this.
  19. Spartacus09

    Spartacus09 Limp Gawd

    Messages:
    466
    Joined:
    Apr 21, 2018
    TeleFragger likes this.
  20. TeleFragger

    TeleFragger Gawd

    Messages:
    724
    Joined:
    Nov 10, 2005
Drats... I don't think I have HT turned on... it only sees 16 CPUs... should be 32... LOL
     
  21. Orddie

    Orddie 2[H]4U

    Messages:
    2,230
    Joined:
    Dec 20, 2010
Yeah, well... I have fried chicken.


    You already learned something that should help you when you do this stuff in production.
     
    TeleFragger likes this.
  22. Spartacus09

    Spartacus09 Limp Gawd

    Messages:
    466
    Joined:
    Apr 21, 2018
It might not be your fault. Depending on the version, I think VMware turns HT off automatically to "fix" the Spectre/Meltdown vulnerabilities.
    The fix is turning off HT, which is dumb and not an acceptable solution.
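For anyone hitting the same missing-threads symptom, a hedged sketch of how you'd check this from the ESXi shell. The `hyperthreadingMitigation` kernel setting is the knob VMware's side-channel patches for 6.x introduced, but verify these commands against your ESXi version's documentation before running them:

```shell
# Run on the ESXi host (ESXi 6.x; sketch only, verify against your version).

# Does the hardware report hyperthreading present and enabled?
esxcli hardware cpu global get

# Check whether the side-channel scheduler mitigation is active
# (when enabled, ESXi stops using the second thread on each core):
esxcli system settings kernel list -o hyperthreadingMitigation

# To accept the risk and keep both threads per core (reboot required):
esxcli system settings kernel set -s hyperthreadingMitigation -v FALSE
```

Also worth checking the BIOS: if `esxcli hardware cpu global get` shows hyperthreading as unsupported or disabled, the setting was never on at the firmware level and the mitigation isn't to blame.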
     
    TeleFragger likes this.
  23. Modder man

    Modder man [H]ard|Gawd

    Messages:
    1,770
    Joined:
    May 13, 2009
It's not like VMware has an alternate option... a hardware issue can't be fixed in software in this case, only mitigated.
     
  24. Spartacus09

    Spartacus09 Limp Gawd

    Messages:
    466
    Joined:
    Apr 21, 2018
I'm aware; it's still annoying that cutting the CPU thread count in half is the only solution. I wasn't directing that at VMware specifically.