Dual E5-2670 Motherboard Advice Needed For VMware Workstation Lab

Greetings,

I've recently picked up two E5-2670 (v1) processors with the intention of building an everyday workstation PC that would also be used as a VMware Workstation Lab, plus some occasional gaming. I would also like 128 GB RAM (minimum of 64 GB to start).

I need advice on what motherboard to buy for this type of use. A few that I was looking at were:

ASRock Rack EP2C602-4L/D16
SuperMicro X9DR3-LN4F+
ASUS Z9PA-D8

Should I just consider building a system with one processor and keep the other for a spare, or is two CPUs a good way to go for what I want?

Any other advice is welcomed as well, such as memory, power supply, video and sound cards, storage controller, case, liquid cooling, etc.

Thanks very much for all your help!
 
What's your budget? You might even consider getting one of the used dual 2670 Dell or Lenovo workstations that are pretty common on eBay. No need to worry about building a system in that case, especially since overclocking is all but impossible with these chips.

I would strongly recommend against water cooling if you're going to be using it for production purposes and leaving it unattended. The failure rate of water cooling systems, while acceptable for home or gaming purposes, might be too high if you're going to be using it when actual money or time is on the line.
 

Thanks. A build sounded like fun; however, I'm not opposed to saving money if the Dell setup is that much cheaper.

My budget is pretty flexible, but I guess I'm looking to stay around $1,300, maybe less.

This would be used as my Home PC, VMware Workstation Lab, plus some gaming.

The idea of putting all this in a big windowed case with liquid cooling and lights sounded cool to me. I also have a 12U rack server cabinet, though, and was wondering whether building it in a 4U case and rack mounting it in the cabinet would be the better move. But if I'm using it as a desktop PC, I guess I'd have to keep it pretty close to my desk.
 
HP Z420 for a single CPU; the entire setup can be bought for under $300.

Z620/Z820 for dual. Motherboards go for as low as $150 on eBay and average around $200, which is literally half the price of the boards you mentioned.
 
I recently put my E5-2670 setup into an Air 540 case with a pair of Intel AIO coolers.
I am using an Intel S2600COE board with 128 GB of RAM, an EVGA 750 W PSU, a GTX 960 4GB, and a Plugable USB sound card.
[Photo of the build attached: IMG_0843.JPG]
 

Yeah, I had to add more standoffs, so I drilled and screwed four more in. The bottom right has no spot for a standoff since the large hole and grommet are there, so there's no material to drill into.
The red holes are the new standoffs, and the white holes are where I couldn't drill because there was no material there; those are the pass-through holes for cables.

[Image attached showing the marked standoff holes: xeon1-holes marked.jpg]
 
I'm still on a 4-core E3 Xeon and 32 GB of RAM.
Currently its main use is as an Ubuntu and CentOS Docker host running on Hyper-V.
I check out builds, test them, and push them to AWS or back to Dev with my notes.
I really need to get the devs more involved in maintaining their own applications, but just getting them to care at all about doing that was a huge victory.
I mean, why else did I build out a near push-button CI/CD pipeline on my work box and AWS?
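
For what it's worth, the loop is nothing fancy. Here's a minimal Python sketch of the check-out / build / test / push flow wrapped around the git and docker CLIs; the repo URL, image tag, registry, and the pytest smoke test are placeholders for illustration, not my actual setup:

```python
# Minimal sketch of a check-out / build / test / push loop.
# Assumes git and the docker CLI are on PATH. The repo URL, image tag,
# and ECR registry below are hypothetical placeholders.
import subprocess
import sys

REPO = "https://example.com/dev-team/app.git"                   # hypothetical
IMAGE = "app:candidate"                                          # hypothetical
REGISTRY = "123456789012.dkr.ecr.us-east-1.amazonaws.com/app"    # hypothetical


def run(cmd):
    """Echo and run a command; stop the pipeline on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main():
    run(["git", "clone", "--depth", "1", REPO, "build-src"])
    run(["docker", "build", "-t", IMAGE, "build-src"])
    # Smoke test: the image's own test runner must exit 0.
    run(["docker", "run", "--rm", IMAGE, "pytest", "-q"])
    # Push the candidate image for the dev team / AWS side to pick up.
    run(["docker", "tag", IMAGE, REGISTRY + ":candidate"])
    run(["docker", "push", REGISTRY + ":candidate"])


if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as exc:
        sys.exit(f"pipeline step failed: {exc}")
```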

I got as far as building out vSphere 5.5 clusters on my own as principal architect for an ISP, then had to shift focus to AWS.
A lot of guys I know found themselves without a mandate for their own datacenters from 2013 onward.
Many have moved on to learning how to decouple applications, because until only recently people were just paying lip service to production velocity.
Now people are losing jobs to make budget for "agility through DevOps".
Bottom line: the enterprises that found other solutions, saving anywhere from $1M to $1B by not buying VMware licenses, have been changed by it.

If your use case requires a lot of cores and RAM, you may want to put some focus on IOPS (rough sanity-check sketch below).
At one point I was running 6x Intel 320s and 8x 520s for storage pooling.
It's amazing how, in my case, all my devs dropped relational DBs and wanted to slam NoSQL DBs in test.
Currently they just want to connect containers to a data source, or try design patterns that run a swarm of DBs.
Personally I'd rather they pick utility over random "let's try this".
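
On the IOPS point: if you want a quick sanity check on a volume before piling VMs onto it, a rough sketch like the one below gives you a ballpark random-read number. It reads through the OS page cache, so it's optimistic; for real numbers use a purpose-built tool such as fio or diskspd. The scratch file path and sizes are just placeholders:

```python
# Rough sanity check of random 4K read rate on a candidate datastore path.
# Sketch only: reads go through the OS page cache, so results are optimistic.
import os
import random
import time

TEST_FILE = "iops_test.bin"   # hypothetical scratch file on the volume under test
FILE_SIZE = 1 << 30           # 1 GiB scratch file
BLOCK = 4096                  # 4 KiB reads, the usual IOPS block size
DURATION = 10                 # seconds to sample

# Create the scratch file once, filled with random data.
if not os.path.exists(TEST_FILE) or os.path.getsize(TEST_FILE) < FILE_SIZE:
    with open(TEST_FILE, "wb") as f:
        for _ in range(FILE_SIZE // (1 << 20)):
            f.write(os.urandom(1 << 20))

ops = 0
deadline = time.monotonic() + DURATION
with open(TEST_FILE, "rb", buffering=0) as f:
    while time.monotonic() < deadline:
        f.seek(random.randrange(0, FILE_SIZE - BLOCK))
        f.read(BLOCK)
        ops += 1

print(f"~{ops / DURATION:,.0f} random 4K reads/sec (cached, optimistic)")
```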

VMware had a lot of product thrash from 5.5 onward.
They seemed to not really know what their offerings were going to be.
I found myself dealing with a lot of uncertainty over storage choices.

In the end, system virtualization is a lot more than just standing a system up or covering the synthetic day-to-day tasks.
Rarely are you best served by sizing resources 1:1 when going P2V.
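
To put a number on that: a toy sketch of right-sizing from observed utilization instead of copying the physical box's specs. The utilization figures here are made up purely for illustration:

```python
# Illustration of why 1:1 P2V sizing is usually waste: size from observed
# peak utilization plus headroom. All sample numbers are made up.
import math

physical = {"cores": 16, "ram_gb": 128}

# Hypothetical monitoring data: 95th-percentile utilization over a busy month.
p95_cpu_util = 0.22   # 22% of 16 cores ~ 3.5 cores actually used
p95_ram_util = 0.40   # 40% of 128 GB ~ 51 GB actually used

headroom = 1.3        # 30% buffer over observed peak

vcpus = max(2, math.ceil(physical["cores"] * p95_cpu_util * headroom))
ram_gb = max(4, math.ceil(physical["ram_gb"] * p95_ram_util * headroom))

print(f"1:1 P2V:      {physical['cores']} vCPU / {physical['ram_gb']} GB")
print(f"Right-sized:  {vcpus} vCPU / {ram_gb} GB")
```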

When I do hardware evals, we always include Supermicro as a vendor, because technically getting their gear isn't white box.
I wouldn't look at ASRock Rack because I have no experience with them swapping in cold spares for me at a colo.
I have had experience with ASRock/ASUS RMA, and the downtime and chasing were unacceptable.

ymmv in all of this.
 