Dual Intel® Xeon® Processor E5-2699 v3 Build

SpeedyVV

For those with questions about the E5-2699 v3, check out my build in progress.

Parts List so far:

(photo of the parts)
 
Damn, yeah... $5k, wow!

Was going to ask what the cost of this build was... obviously pretty high!
 
I hate you... whoops, I mean very sweet setup... common typo...

But yeah, $5k on CPUs and then throwing in EVOs seems kinda silly unless they're just placeholders.

Why not use the M.2 slot?
 
With only 4 sticks of RAM, isn't it only going to be running in dual-channel mode, since you will only be filling 2 slots for each processor?
 
With only 4 sticks of RAM, isn't it only going to be running in dual-channel mode, since you will only be filling 2 slots for each processor?

I'm guessing that with the SSDs and the memory, money was tight after the CPUs :-D The 4x16 gives him the option to go to 128GB once money becomes available? Just guessing.
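On the dual- vs quad-channel question above, here are rough theoretical numbers; the ~17 GB/s per DDR4-2133 channel figure is my own assumption, not something from this build.

Code:
# Haswell-EP has 4 memory channels per socket; theoretical peak per DDR4-2133
# channel is roughly 2133 MT/s * 8 bytes = ~17 GB/s. Ballpark only.
per_channel_gb_s = 2133 * 8 / 1000

for dimms_per_socket in (2, 4):               # one DIMM per channel populated
    channels = min(dimms_per_socket, 4)
    print(f"{dimms_per_socket} DIMMs/socket: ~{channels * per_channel_gb_s:.0f} GB/s peak per CPU")

So 2 DIMMs per socket gives roughly half the theoretical memory bandwidth of a fully populated quad-channel config.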
 
Why 1500W PSU ??
Quadruple SLI with TITAN X ??

With dual-socket Xeons locked down, why a dual-socket mobo with only 8 total RAM slots??
36 cores with only 64GB of RAM???
If I want a dual socket, I want tons of RAM slots, period.

4x SSDs in RAID 0??? Did you know that the X99/C612 is saturated by 3 SSDs in the best case, thanks to the DMI 2.0 link between the hub and the CPU? Better to buy an LSI MegaRAID controller so you're free to run even 8 SSDs at full speed.
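Rough back-of-the-envelope math on that DMI point; the ~550 MB/s per SATA SSD and ~1.6 GB/s of usable DMI 2.0 bandwidth are my ballpark assumptions, not benchmarks.

Code:
# DMI 2.0 is essentially a x4 PCIe 2.0 link: ~2 GB/s raw, ~1.6 GB/s usable
# after protocol overhead. A single SATA 6Gb/s SSD tops out around 550 MB/s.
dmi2_usable_gb_s = 1.6
sata_ssd_gb_s = 0.55

for n in range(1, 5):
    aggregate = n * sata_ssd_gb_s
    limited = "DMI-limited" if aggregate > dmi2_usable_gb_s else "fits"
    print(f"{n} SSDs: {aggregate:.2f} GB/s aggregate ({limited})")
# Around the 3rd SSD the array already exceeds the link, so a 4th adds little.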

If I want the fastest game rig, I'll go for a Xeon 1680 v3 / i7-5960X, both unlocked.
If I want a server, I'll go for a Supermicro board with 16/24 RAM slots...
 
Why 1500W PSU ??
Quadruple SLI with TITAN X ??

No, 390X in Crossfire :p (sorry, bad joke)
Mostly because it had all the connectors I needed, and just maybe 3 video cards in the future. 4x Titan X is highly unlikely.

With dual-socket Xeons locked down, why a dual-socket mobo with only 8 total RAM slots??
36 cores with only 64GB of RAM???
If I want a dual socket, I want tons of RAM slots, period.

Even so, 64GB of RAM is enough for my needs now, and I can add another 64GB if required.


4x SSDs in RAID 0??? Did you know that the X99/C612 is saturated by 3 SSDs in the best case, thanks to the DMI 2.0 link between the hub and the CPU? Better to buy an LSI MegaRAID controller so you're free to run even 8 SSDs at full speed.

That I did not know. Thanks, and I will look into this in more detail.

If I want the fastest game rig, I'll go for a Xeon 1680 v3 / i7-5960X, both unlocked.
If I want a server, I'll go for a Supermicro board with 16/24 RAM slots...

Neither; this is an HTPC with some good transcoding horsepower. :D

Joking aside, this is not intended to be a balanced system by any means. It was selected for a very specific purpose, which I explained elsewhere (basically a bunch of small websites and databases, each running in its own VM).

Oh yeah, it helped that the customers were willing to pay for it.
 
No use in 72 threads if you only have 64GB of RAM. Optimal price/performance is at 2GB/thread (tested up to 16 threads).
"Many VMs" means lots of RAM.
At least use a PCIe, M.2, or NVMe SSD. Those 850s will be like HDDs for your purposes compared to those. Personally, I would go for 2 SSDs in RAID 0 at most.
 
Joking aside, this is not intended to be a balanced system by any means. It was selected for a very specific purpose, which I explained elsewhere (basically a bunch of small websites and databases, each running in its own VM).

Oh yeah, it helped that the customers were willing to pay for it.

No offense, but why would a customer pay for that?

Running webservers and databases on what is essentially a desktop computer is not anywhere close to professional unless they don't care about data integrity, reliability, downtime, redundancy, etc.

It's a sweet rig though.
 
No use in 72 threads if you only have 64GB of RAM. Optimal price/performance is at 2GB/thread (tested up to 16 threads).
"Many VMs" means lots of RAM.
At least use a PCIe, M.2, or NVMe SSD. Those 850s will be like HDDs for your purposes compared to those. Personally, I would go for 2 SSDs in RAID 0 at most.

512GB or bust for that many cores.
 
512GB or bust for that many cores.

128GB should be fine; this will give a big performance increase compared to 64GB when he's actually using the 72 threads. 512GB will only increase performance by under 5% for RAM-intensive tasks (estimated; I've never seen 72 threads with these absurd amounts of RAM). Even if your PC shows it's maxing out all your RAM, it doesn't mean that adding extra RAM actually increases performance by a lot. Also, compare this small performance increase (from the needed 128GB to 512GB) to the price of upgrading to 512GB and everyone knows it makes no sense.
 
No offense, but why would a customer pay for that?

Running webservers and databases on what is essentially a desktop computer is not anywhere close to professional unless they don't care about data integrity, reliability, downtime, redundancy, etc.

It's a sweet rig though.

They wanted the most powerful "workstation" for the initial research.

These websites are then farmed out to "smaller" workstations for all the non-functional requirements you mentioned. One or two of them might even run on a laptop that is online every now and then.
 
No offense, but why would a customer pay for that?
Running webservers and databases on what is essentially a desktop computer is not anywhere close to professional unless they don't care about data integrity, reliability, downtime, redundancy, etc. It's a sweet rig though.

I can imagine such a system being used for development, debugging, and testing of servers or server-type apps with full realtime high-resolution GUI visualization. Otherwise I cannot imagine workstation-type applications needing 36 cores/72 threads at a low processor speed of 2.3 GHz. Maybe somebody knows an example where it could be useful?
 
I can imagine such a system being used for development, debugging, and testing of servers or server-type apps with full realtime high-resolution GUI visualization. Otherwise I cannot imagine workstation-type applications needing 36 cores/72 threads at a low processor speed of 2.3 GHz. Maybe somebody knows an example where it could be useful?

High-end SQL transactions.
 
I can imagine such a system being used for development, debugging, and testing of servers or server-type apps with full realtime high-resolution GUI visualization. Otherwise I cannot imagine workstation-type applications needing 36 cores/72 threads at a low processor speed of 2.3 GHz. Maybe somebody knows an example where it could be useful?

Running Counter-Strike, maybe? :D
 
No use in 72 threads if you only have 64GB of RAM. Optimal price/performance is at 2GB/thread (tested up to 16 threads).
"Many VMs" means lots of RAM.

How did you derive this? How is price:performance related to memory:thread?
 
How did you derive this? How is price:performance related to memory:thread?

I'm interested in this myself.

Obviously I am currently not bottlenecked by processor threads, but I do wonder when the RAM bottleneck will show up.
 
I can imagine such a system being used for development, debugging, and testing of servers or server-type apps with full realtime high-resolution GUI visualization. Otherwise I cannot imagine workstation-type applications needing 36 cores/72 threads at a low processor speed of 2.3 GHz. Maybe somebody knows an example where it could be useful?
High-end SQL transactions.
This is a workstation-type app???
Not what I am doing, but gambling would be a good example?
Why would gambling need so many threads and dual processors??
 
This is a workstation-type app???

Why would gambling need so many threads and dual processors??

They do a lot of gambling?

Seriously, I use a lot of applications and write applications that could definitely use that many cores. I'm dealing with 1.7 billion objects that I need to process, and using that many cores vs. a standard i7 4-core/8-thread machine means the difference between getting it done in an hour vs. a day.
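As a trivial illustration of the kind of job that scales with core count, here is a minimal multiprocessing sketch; process_object and the object count are made up for illustration, not the actual workload.

Code:
from multiprocessing import Pool

def process_object(obj):
    # stand-in for the real per-object work (hypothetical)
    return obj * obj

if __name__ == "__main__":
    objects = range(10_000_000)   # stand-in for the 1.7 billion real objects
    # Pool() defaults to one worker per logical CPU, so the same script
    # scales from an 8-thread i7 to a 72-thread dual Xeon with no changes.
    with Pool() as pool:
        results = pool.map(process_object, objects, chunksize=10_000)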
 
They do a lot of gambling? Seriously, I use a lot of applications and write applications that could definitely use that many cores. I'm dealing with 1.7 billion objects that I need to process, and using that many cores vs. a standard i7 4-core/8-thread machine means the difference between getting it done in an hour vs. a day.

No doubt the need for computing power is unlimited, but these are mostly server/data center applications. Here we have the case of a workstation. Is your application workstation-type? By a workstation-type app I mean one needing realtime high-res (3D) visualization.

Now, regarding your i7 example: the full power of the i7 should include overclocking, in my opinion. Thus a 4 GHz OC'd i7 with 8 cores will, for many apps, be equivalent or better than a 16-core Xeon locked at 2 GHz. Only if this power is fully utilized should one think about moving to dual-socket configurations, and only if there is a need for workstation-type large in-memory operations. If there is no such need, two single-socket machines might be even better.
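Put crudely, for perfectly parallel CPU-bound work the aggregate core-GHz comes out about the same; this sketch ignores IPC, turbo, and memory bandwidth, and the numbers are only illustrative.

Code:
# Crude aggregate-throughput comparison; ignores IPC, turbo and memory bandwidth.
oc_i7   = 8  * 4.0    # 8 cores overclocked to 4.0 GHz -> 32 core-GHz
xeon_16 = 16 * 2.0    # 16 cores locked at 2.0 GHz     -> 32 core-GHz
print(oc_i7, xeon_16)  # roughly a wash for perfectly parallel work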
 
I can imagine such a system being used for development, debugging, and testing of servers or server-type apps with full realtime high-resolution GUI visualization. Otherwise I cannot imagine workstation-type applications needing 36 cores/72 threads at a low processor speed of 2.3 GHz. Maybe somebody knows an example where it could be useful?

Developers.

We load up systems like this, but with 256GB of RAM or more, a SAS card with SAS SSDs, and 10k RPM 2.5" HDDs.

Developers can then run all their own virtual SQL, Hadoop, and other virtualized development environments locally. They build their stuff on CentOS locally and move to the testing VMs, then the staging VMs, then we move everything off to Red Hat production. So yeah, there is a use for this many cores on a workstation... but those CPUs are going to look cheap once you start pricing out the type of storage you need as well.

But most people don't need to test Hadoop clusters or deal with large SQL transactions locally. It makes far more sense to just spin up VMs on a server cluster for development. It only makes sense to do it locally when you have contract isolation requirements.
 
Ok, let me be clear as the OP.

There are almost zero reasons for a 36-core dual-CPU workstation if performance/£ is what you are looking at. Amdahl's law makes this pretty obvious.
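For anyone who hasn't run the numbers, a quick sketch of what Amdahl's law implies; the 95% parallel fraction below is an illustrative assumption, not something measured from our workload.

Code:
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p = parallel fraction.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (4, 8, 18, 36):
    print(cores, round(amdahl_speedup(0.95, cores), 1))
# Even at 95% parallel code, 36 cores only buys ~13x, nowhere near 36x.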

But in our particular scenario £s were not the limiting factor. Time was, and the workstation pretty much paid for itself already.

Time in this case is more related to Time to proof-of-concept, and Time to Live.

A single 18-core CPU would have been plenty for our needs, but we decided to get a dual-socket workstation mobo, and then we just loaded it up with a 2nd CPU for the cost of a consultant for 2 days.

Since I get to keep the equipment after they go operational, I did not argue :-p It makes a great rig to just load up VMs pretty much ad hoc without worrying about CPU limitations... I am sure that in the future RAM will be a limiting factor, but we will deal with that when the time comes.
 
Ok, let me be clear as the OP.

There are almost zero reasons for a 36-core dual-CPU workstation if performance/£ is what you are looking at. Amdahl's law makes this pretty obvious.

But in our particular scenario £s were not the limiting factor. Time was, and the workstation pretty much paid for itself already.

Time in this case is more related to Time to proof-of-concept, and Time to Live.

A single 18-core CPU would have been plenty for our needs, but we decided to get a dual-socket workstation mobo, and then we just loaded it up with a 2nd CPU for the cost of a consultant for 2 days.

Since I get to keep the equipment after they go operational, I did not argue :-p It makes a great rig to just load up VMs pretty much ad hoc without worrying about CPU limitations... I am sure that in the future RAM will be a limiting factor, but we will deal with that when the time comes.

Much as I hate to break this to you, you are still CPU-limited when it comes to VMs! You're also RAM-limited, and your storage is abysmal for it. That's the thing: real power-user stuff (SQL, VMs) is not like the bullshit children's toys people here call "enthusiast".

There is no limit to CPU, memory, or storage once you get into real power-user stuff.

This isn't about price/performance. It's that honestly anything under 384GB of RAM is a waste with those CPUs. Furthermore, the type of things that can use all of that require SAN storage using 12Gb SAS arrays of SAS SSDs and SAS 2.5" HDDs, preferably as an iSCSI target; if not, you'd better have a similar storage situation inside, which is going to make $5k for CPUs look dirt cheap. You'll also need a proper quad-port NIC to make this thing run properly on the network side (probably just under a grand).

I know, dealing with VMs in the development, testing, staging, and production levels is what I do for most of my work day.

The system just seems schizophrenic. It's as if an enthusiast went out and bought only the components really covered on enthusiast sites (CPU, mobo, SSD, PSU), and bought them from enthusiast brands (ASUS, Samsung, Corsair), without using any of the actual brands a workstation or VM host would use, and skipping over all the power-user/professional/enterprise components that are essential to actually doing workstation tasks... starting with a proper SAS card and moving all those disks and SSDs to SAS!

It almost makes me wonder what version of Windows 7 you have, and whether it's one with multi-socket support. Because if it wasn't a version with multi-socket support... well, that would be less odd than other things about this build.
 
4 EVOs at ~1.5GB/s read (after saturation) and probably ~1.5GB/s write (lower write, but hidden by saturation perhaps).

If you run 30 VMs with 2GB of RAM each for small web servers, while leaving 4GB for the OS, that leaves you 50MB/s of simultaneous bandwidth per VM. Each EVO can handle about 90k IOPS R/W, so you're left with 360k IOPS (assuming you can queue them all properly with the bottleneck).

With losses you'll have about 11k IOPS per VM, which isn't much. The real problem will be having so many simultaneous threads running... it won't be quite as efficient as you think.

If you're planning on running a max of 20 VMs, you'll probably be OK.
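Here's the rough per-VM arithmetic behind those numbers; the 90k IOPS per EVO and the ~10% loss allowance are ballpark assumptions, not benchmarks.

Code:
# Rough per-VM math for a 4x EVO RAID 0 behind the chipset bottleneck.
array_bandwidth_mb_s = 1500      # ~1.5 GB/s after DMI saturation
array_iops = 4 * 90_000          # ~90k IOPS per EVO before losses
overhead = 0.9                   # hand-wavy allowance for RAID/virtualization losses

for vms in (20, 30):
    bw_per_vm = array_bandwidth_mb_s / vms
    iops_per_vm = array_iops * overhead / vms
    print(f"{vms} VMs: ~{bw_per_vm:.0f} MB/s and ~{iops_per_vm/1000:.0f}k IOPS each")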
 
4 EVOs at ~1.5GB/s read (after saturation) and probably ~1.5GB/s write (lower write, but hidden by saturation perhaps).

If you run 30 VMs with 2GB of RAM each for small web servers, while leaving 4GB for the OS, that leaves you 50MB/s of simultaneous bandwidth per VM. Each EVO can handle about 90k IOPS R/W, so you're left with 360k IOPS (assuming you can queue them all properly with the bottleneck).

With losses you'll have about 11k IOPS per VM, which isn't much. The real problem will be having so many simultaneous threads running... it won't be quite as efficient as you think.

If you're planning on running a max of 20 VMs, you'll probably be OK.

The issue isn't just IOPS; it's that consumer (aka enthusiast/gamer) gear does not have the features you need and is vastly more prone to the sort of errors that cause problems with VHDs, SQL transactions, and the other sorts of things these CPUs are built for.

If you're going dualie and can use all those cores, you need to be on SAS, end of story.
 
The issue isn't just IOPS; it's that consumer (aka enthusiast/gamer) gear does not have the features you need and is vastly more prone to the sort of errors that cause problems with VHDs, SQL transactions, and the other sorts of things these CPUs are built for.

If you're going dualie and can use all those cores, you need to be on SAS, end of story.

No, you really don't. There are plenty of configurations that don't involve SAS that could work in a workstation environment.
 
The issue isn't just IOPS; it's that consumer (aka enthusiast/gamer) gear does not have the features you need and is vastly more prone to the sort of errors that cause problems with VHDs, SQL transactions, and the other sorts of things these CPUs are built for.

If you're going dualie and can use all those cores, you need to be on SAS, end of story.

So, did you just come up with that randomly? If not, please explain your thought process.
 
I appreciate the info you guys are providing. I really do, but you are bringing in stuff that has nothing to do with our application.

We are NOT running a data centre, a cloud service, or even an HPC workstation.

As I already said, 18 cores would have been enough CPU for us, and 64GB is also enough (for the 18 cores).

We decided to go with workstation-class motherboards, and it is proving to be a good choice for us.

The primary reason it is working out is that it is much simpler for us to manage/support remote sites (which don't always have top IT support), plus ease of procurement/availability and a reduced risk of theft.

NO, we are not sending this setup out to remote sites. This is a one-off, for our lab.

I do appreciate the input. 4 EVO drives for the OS are not required, but now we have the choice of RAID config, and a spare drive for when one fails, before a replacement is procured.

Last but not least, it was a requirement for these workstations not to be COTS servers, but something that could be "built" with off-the-shelf parts.

BTW, I am a gaming enthusiast, but that has nothing to do with this.

In 1992 I also designed Canada's largest call centre, with 28 sites running off-the-shelf PCs (as required by our executive), and our operation ran for 6 years with zero downtime! Even crazier, we coded our own IVR and CTI applications in-house (as opposed to paying IBM to do it) using a combination of mostly MS-DOS machines, some NT Workstation, and 2 OS/2 machines, rather than UNIX, which was the standard OS of choice in telecom.

I have since worked in defence, telecom, and finance, and I hate to break it to you, but not all successful systems run on standard data-centre-type equipment. As a matter of fact, more and more large organisations are coming to that conclusion themselves.

There is a LOT more to well-architected systems, or systems of interconnected systems, than just using the latest and greatest enterprise-grade server environments. There is a place for them of course, but they are not the only option.
 
As an aside, I just had a crazy idea!

This would be a great HTPC.

I need to throw in an Nvidia 960 GPU so that I can do 4K with 4:4:4 chroma, and think of the horsepower for ripping Blu-rays and transcoding.

Sounds like a fun project/proof-of-concept!
 
So how do you make money on this so you can pay for the equipment? I want a business model so I can do the same :) Need to pay for future hardware somehow, haha. Anyway, cool build.
 
So how do you make money on this so you can pay for the equipment? I want a business model so I can do the same :) Need to pay for future hardware somehow, haha. Anyway, cool build.

The money is mostly in software, and a bit in services.
 