Discussion in 'Virtualized Computing' started by agrikk, Dec 18, 2008.
Post it anyway! I love to ogle any setups and specs.
I'll get right on that tomorrow when I'm logged in to the office.
That is my Dev environment.
The first is a 3-host setup with capacity for one host to fail.
The second is one of our prod clusters: 5 hosts, with one set up as a hot spare for failover. Clearly not as active yet, but that's coming. We do our dancing in dev.
My self training lab:
MSDN OS subscription (I highly recommend this for any sysadmin for self training. Getting 5 licenses for any server OS for $900/year is cheaper than buying licenses and less aggravating than dealing with perpetual trial licenses.)
Storage Server: Windows Storage Server 2012 R2, Core i5 2500k, 8GB, Dell H710 RAID, 120GB boot SSD, 6x 4TB HGST Ultrastar in RAID 10, 4x 1TB WD Blue laptop drives for low-level VM storage, running several iSCSI targets plus file storage for my main machine
Server 1: Ryzen 1700X, 32GB DDR4-2933, 128GB NVMe boot/primary VM storage, 500GB Samsung 850 EVO secondary VMs, 2TB iSCSI from storage server (runs pfSense router, local DC, DHCP, DNS, WSUS, and WDS), 2x 500GB iSCSI LUNs from the laptop drives in the storage server for playing around with Ubuntu
Server 2: Core i7 4790k, 16GB DDR3-2400, 256GB NVMe web host VM (Windows and Ubuntu Server), 6TB iSCSI for SQL Express and PostgreSQL servers
I happen to be looking to sell a pair of CPUs that were used for servers, a Core i7 5930k with 16GB of DDR4-2400 and a Core i7 4930k with 32GB of low-latency DDR3-1600, if anyone might be interested.
^^^ Do you find the i series better than Xeon for VMs?
I use a 3770k for mine. Frankly, it depends on your use case; Xeon processors are qualified to handle heavier, more intensive loads consistently.
They also support ECC RAM, so your maximum RAM capacity is often much higher.
The i series has the benefits of overclockability, lower price, and a better performance/cost ratio per GHz; additionally, they generally drop in price a lot quicker over the long term.
It's not a matter of better per se, but which use case suits you best.
If you're running a heavy workload 24/7 with 50%+ CPU usage, a Xeon would probably be best.
If you barely use 15% CPU, and that's only during certain peak hours, the i's would save you money.
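The duty-cycle argument can be sketched with some back-of-the-envelope math. All the wattages and the electricity rate below are made-up placeholders, not measured numbers for any particular chip:

```python
# Rough yearly power cost at different duty cycles.
# idle_w, load_w, and kwh_price are hypothetical example figures.

def annual_power_cost(idle_w, load_w, duty_cycle, kwh_price=0.12):
    """Estimate yearly electricity cost for a host that spends
    `duty_cycle` (0..1) of its time at load and the rest idle."""
    avg_w = load_w * duty_cycle + idle_w * (1 - duty_cycle)
    return avg_w / 1000 * 24 * 365 * kwh_price

# A host loaded most of the day vs. one that only peaks a few hours
busy = annual_power_cost(idle_w=60, load_w=180, duty_cycle=0.6)
light = annual_power_cost(idle_w=60, load_w=180, duty_cycle=0.10)
print(f"busy host:  ${busy:.0f}/yr")
print(f"light host: ${light:.0f}/yr")
```

The point is just that utilization, not the badge on the chip, dominates the running cost; the purchase-price gap between a Core and a Xeon works the same way.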
EDIT: Just realized you wanted a Ryzen comparison. I have a coworker that runs one; I'll grab his 2 cents.
I find no differences, as far as the CPU goes.
The Core chips are just what I had around for my home lab. I did have a Xeon E5-2603 v2 on my old P9X79-E WS board for a while, but it was just too slow for most of what I did, so I switched it out for the 4930k. I also had a couple Dell T110 II servers with Xeon E3 (v1) chips for a while, but ended up selling them. The last Xeon I had personally was a T110 II with an E3-1230, and I sold that over 3 years ago, I think.
I work with Xeons all the time at work, though. The servers do many things better, but it's the surrounding hardware, more than the processor, that makes any difference in a VM host. The server hardware is usually pretty easy to get cheap on eBay, though, like server NICs and RAID controllers. The other piece of surrounding hardware that makes a difference is memory. Obviously, we can't install nearly as much memory in a Core system as in a Xeon system in most cases, because of the Core's lack of support for ECC registered memory. ECC only makes a difference in stability when memory is going bad, so it makes no difference most of the time. Getting reliable memory makes ECC moot for a home lab, and most people aren't going to spend the money for 256GB of memory on their home lab anyway.
The processor itself is no different otherwise, even under heavier workloads. A Core chip would be just as reliable under a heavy load as a Xeon, as long as the memory doesn't start throwing errors. (Games are heavier workloads than server apps in most cases, and the heaviest games and benchmarks would actually be harder than any server app, as servers should have some headroom to operate reliably, whereas benchmarks and some games just take all the CPU they can.)
It's the same with Ryzen and Epyc. Ryzen even supports ECC memory, with the right motherboard, so that part of the comparison makes no difference. Ryzens make very good servers for either home or small business use, with the right surrounding hardware.
Someone please tell me that I got a good deal on the R420 on eBay. This server cost me $440 including shipping.
Will be used as a home lab for learning virtualization and also hosting some VMs
About average to a little below average in price; it's a solid unit, though.
Very nice... 4x 2TB Dell drives... nice... It definitely needs more memory, though, depending on the number of VMs you run. I find that I run out of memory long before CPU.
2x ESXi 6.0 Hosts in a cluster
HP ProLiant DL360p Gen8
Intel(R) Xeon(R) CPU E5-2620 @ 2.00GHz (2x 6-core, 12-thread each host)
192GB DDR3 each
Brocade 1020 HBAs (FCoE and iSCSI capable)
1x FreeNAS (SAN)
HP ProLiant DL360p Gen8 (same CPU and RAM spec as above)
6x 600GiB disks (1.8TiB usable, 3x Z1's)
2x 100GiB SSD for ZIL
1TB iSCSI share for the ESXi hosts
Oh, it's 10Gb/s iSCSI/network through a single port on each box.
It's perfectly quick enough for what we need. I'm only worried the SAN is going to fill up before anything else. I'm kicking myself that I didn't dig out the 900GiB drives I had lying around and go with those. But I suppose I can migrate the array to them one at a time.
My R410 arrived a couple of days ago. I finally have time to install ESXi 6.7 on it.
- 2x Intel Xeon E5-2440 @ 2.4GHz
- 48GB of RAM
- 2x 1Gb NICs
- 4x 2TB hard drives configured in RAID5
- DVD Drive
- Dual PSU
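The usable-capacity math for a layout like the 4x 2TB RAID5 above is easy to sanity-check. A minimal sketch (raw marketing TB; real formatted capacity will be a bit lower):

```python
# Rough usable capacity for common RAID levels, ignoring
# filesystem overhead and the TB-vs-TiB difference.

def usable_tb(level, drives, size_tb):
    if level == "raid0":
        return drives * size_tb          # no redundancy
    if level == "raid1":
        return size_tb                   # full mirror
    if level == "raid5":
        return (drives - 1) * size_tb    # one drive of parity
    if level == "raid6":
        return (drives - 2) * size_tb    # two drives of parity
    if level == "raid10":
        return drives // 2 * size_tb     # mirrored pairs
    raise ValueError(f"unknown level: {level}")

print(usable_tb("raid5", 4, 2))    # -> 6   (the R410 above)
print(usable_tb("raid10", 6, 4))   # -> 12  (the 6x 4TB storage server earlier)
```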
Note in the picture that I am using a UPS. The CyberPower consumer-level UPS is not useful for my server: it didn't have enough capacity to drive the server when all the fans were spinning during startup. The UPS just turned itself off, so right now I have the server plugged into a regular surge protector. I will get a better UPS when I have a chance.
I don't have a UPS at all. Been playing chicken for 15+ years so far.
Yeah, you want a good UPS for a dual power supply setup. I would put that on two CyberPowers if the price is right. Then you'll have the juice needed to run the server just fine and be protected from power fluctuation.
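The startup trip described above is a sizing problem: fans at full speed on POST draw well above steady state. A hedged sketch of the check (all wattages and the 2x startup multiplier are illustrative guesses, not measurements; check your UPS's rated watts, not just its VA):

```python
# Hypothetical UPS sizing check: can the UPS carry the server's startup draw?

def ups_ok(ups_rated_w, steady_w, startup_multiplier=2.0, headroom=0.8):
    """True if the UPS can carry the startup load with some headroom.

    startup_multiplier: guess at how far POST-time draw exceeds steady state.
    headroom: don't plan to run a UPS at 100% of its rating.
    """
    startup_w = steady_w * startup_multiplier
    return startup_w <= ups_rated_w * headroom

print(ups_ok(ups_rated_w=600, steady_w=250))   # -> False: trips at startup
print(ups_ok(ups_rated_w=900, steady_w=250))   # -> True
```

With dual PSUs split across two units, each UPS only needs to carry roughly half the load in normal operation, which is why two smaller units can work.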
No ups here... just a vm lab that gets powered down...
Any issues with this case?
I've got one locally on CL that he wants $175 for... was gonna offer $150...
Thoughts? Stay clear due to the warning?
I wouldn't be afraid of it. Plan to not use that bay, or buy a replacement backplane. Can't be that expensive.
I got a new setup running...
I didn't care to RAID the SSDs as this is a test box... who knows what I'm going to do, though I could...
but here is what I got to play with...
Got an HP ProCurve CX4 ConnectX switch coming... 6 ports... and I need 5. WOOT!!!! lol
In case you decide you wanna fix those crappy NIC names:
(the section about halfway down, on changing the names assigned by the ESXi host)
Drats... I don't think I have HT turned on... it only sees 16 CPUs... should be 32... LOL
Yeah, well... I have fried chicken.
You already learned something that should help you when you do this stuff in production.
Might not be your fault; VMware turns it off automatically, I think, depending on the version, to "fix" the Spectre/Meltdown vulnerabilities.
The fix is turning off HT; it's dumb and not an acceptable solution.
It's not like VMware has an alternate option... A hardware issue cannot be fixed in software in this case, only mitigated.
I'm aware. It's still annoying that cutting the CPU thread count in half is the only solution; I wasn't directing that at VMware specifically.