P9X79-E WS vs Rampage IV Extreme

frankiee

Hi there,

I plan to build an "all-purpose workstation" that will be used mainly for software development and content creation, but also for (occasional!) gaming. The machine should also be able to handle a lot of VMs for testing purposes.

The current plan is to equip it with a 4930K (mildly OCed), 64GB of RAM and an NVIDIA Titan (maybe with a second one later on).

I see these two mobos as my current favorites, and I am looking for additional input, as I am a rookie and bad at making decisions :confused:

Of course you could say that the WS (= Workstation) might be better suited, since I do not want to OC to the max (the machine should be very stable!). It also has more SATA ports and the FireWire header that I need, which would save me an additional card.

But what I do not like about the WS is that it only has one(!) USB header onboard; I'm not sure if I can also use the additional standard onboard port as a header. The R4E, on the other hand, seems to have about the same overall build quality as the WS, has dual BIOS, and _much_ better BIOS fan control (also with temp sensors), so this board might save me an additional fan controller. A minus on the R4E is the onboard fan, however; I also want a rather quiet machine, and I'm not sure if it can be tamed so that you cannot hear it anymore.

Plus, I'm not sure if both boards can reliably handle 64GB of 1866 RAM. I've heard that the WS has some problems there, but that might also be due to incompatible RAM or an older BIOS.

So, what would be your opinion on this? Anything else I should be aware of?
 
The WS would probably suit you better overall I think. The fan control between the boards should be the same in UEFI (barring the temp sensors on the R4E).

64GB at DDR3-1866 is going to come down to the memory controller on the CPU as well. Intel only supports 1 DIMM per channel as a stock operation at this frequency, which means getting 64GB 100% stable may need manual tuning (voltages and timings).
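
To put rough numbers on that point (a minimal sketch; the 8 GB module size is my assumption about how a 64GB kit would be populated):

```python
# Rough DIMMs-per-channel check for a 64GB X79 build.
# Assumption: 8 GB unbuffered modules (the usual way to reach 64GB on these boards).

total_gb = 64
module_gb = 8          # assumed module size
channels = 4           # LGA2011 / X79 runs quad-channel memory
spec_dpc_at_1866 = 1   # stock spec at DDR3-1866: one DIMM per channel

dimms = total_gb // module_gb          # 8 modules
dimms_per_channel = dimms // channels  # 2 per channel

print(f"{dimms} DIMMs across {channels} channels -> {dimms_per_channel} per channel")
if dimms_per_channel > spec_dpc_at_1866:
    print("Above the 1-DPC spec at DDR3-1866, so plan on manual voltage/timing tuning.")
```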
 
We are adding more ECC DRAM kits to the QVL for the WS as well:

[QVL screenshot]

ECC Memtest with 64GB:

[Memtest screenshot]
 
The WS would probably suit you better overall I think.

Thanks for replying! Yes, I am looking at the WS board now.

The fan control between the boards should be the same in UEFI (barring the temp sensors on the R4E).

Yeah, but the temp sensors make the difference. With the WS, you seem to have only CPU temps as an input source. But I think I'll go with an external controller anyway.

64GB at DDR3-1866 is going to come down to the memory controller on the CPU as well.

But the 4930K should be able to do that?

As for the RAM, you posted some ECC models, which I could not use with the 4930K. But do you think ECC will make a clear difference in terms of reliability? (My current machine has ECC, but I only got _one_ corrected error in the past 12 months, so I wonder if it really makes such a difference.)

Ah and one more question:

So I am only running 2 GPUs. What about the PLX chip on the "E WS" - will it introduce additional latency, can it be switched off, or is it only active when it is actually needed (e.g. for 16/16/16 and above)?

Any clarification would be greatly appreciated!
 
But the 4930K should be able to do that?

Most can, some samples with weaker memory controllers may need timings relaxed or increased voltages.

As for the RAM, you posted some ECC models, which I could not use with the 4930K. But do you think ECC will make a clear difference in terms of reliability? (My current machine has ECC, but I only got _one_ corrected error in the past 12 months, so I wonder if it really makes such a difference.)

It depends how mission-critical your stuff is. For some, that would be one error too many. With 64GB of memory in tow, you do increase the possibility of issues, especially in an overclocked configuration. Just depends how much the potential of a BSOD at a critical time would bother you I guess.


Ah and one more question:

So I am only running 2 GPUs. What about the PLX chip on the "E WS" - will it introduce additional latency, can it be switched off, or is it only active when it is actually needed (e.g. for 16/16/16 and above)?

Any clarification would be greatly appreciated!


You can't switch the PLX off manually. I don't know which slots it's assigned to off the top of my head; there may be two slots that aren't routed via it, in which case placing the GPUs in them would circumvent it. The latency penalty is not that large in real-world apps. Again, I suppose this comes down to how critical these things are to you.

-Raja
 
First of all, thanks for the insights Raja!

Most can, some samples with weaker memory controllers may need timings relaxed or increased voltages.

Ah, so that could be a bit tricky - would a Xeon / ECC setup have the same (potential) problems?

Just depends how much the potential of a BSOD at a critical time would bother you I guess.

Hard to say ... I mean, an occasional crash would be tolerable; my main gripe would be corrupted data that goes unnoticed.

You can't switch the PLX off manually. I don't know which slots it's assigned to off the top of my head; there may be two slots that aren't routed via it, in which case placing the GPUs in them would circumvent it.

OK, I see. Of course, it would be interesting to know which slots would be best then (for a dual config), so if you stumble over this information, I'd be interested to hear it. I didn't find anything about that in the manual, but it does at least recommend using slots 1 and 5 for a dual-GPU setup. Maybe that is the configuration where the PLX does not kick in? That would be perfect, of course.
 
Hi,

Anything past DDR3-1600 with more than one DIMM per channel is overclocked - Xeon or not. With 64GB, at the speeds you wish to run, there is the chance you'd have to make some changes to voltages/timings.

I will ask HQ about the lane routing and report back.

-Raja
 
I just got a reply on the PLX routing:

Slots 1,2,3,5,6,7 are all PLX routed. One PLX for slots 1,2,3 another for 5,6,7. Slot 4 is direct to the CPU.
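
If anyone wants to double-check the routing on their own board from the OS side, here is a minimal sketch (Linux only; it assumes the pciutils `lspci` tool is installed, and relies on PLX Technology's PCI vendor ID being 10b5):

```python
# Minimal sketch: list PLX bridges and GPUs, then dump the PCIe tree so you can
# see whether a GPU sits behind a PLX switch or on a CPU root port directly.
# Assumes Linux with pciutils (`lspci`) installed.
import subprocess

listing = subprocess.run(["lspci", "-nn"], capture_output=True, text=True).stdout
plx_bridges = [l for l in listing.splitlines() if "[10b5:" in l]  # 10b5 = PLX Technology
gpus = [l for l in listing.splitlines() if "VGA compatible controller" in l]

print("PLX bridges:")
print("\n".join(plx_bridges) if plx_bridges else "  none found")
print("\nGPUs:")
print("\n".join(gpus) if gpus else "  none found")

# In the tree view, a GPU behind a PLX switch shows up as a child of that bridge.
print("\nPCIe topology:")
print(subprocess.run(["lspci", "-t"], capture_output=True, text=True).stdout)
```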
 
I just got a reply on the PLX routing: Slots 1,2,3,5,6,7 are all PLX routed. One PLX for slots 1,2,3 another for 5,6,7. Slot 4 is direct to the CPU.

Thanks again for this quite interesting information! I just saw that slot 4 is only x8, so it seems the GPU will be routed through the PLX chip in any case (even with single and dual setups).

The only remaining question would be how big the performance hit actually is (with only single/dual GPUs). I just did some general research, and the hit caused by such chips is said to be anything between 1% and 5%, depending on the implementation. 1% wouldn't matter much, but 5% would.
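
Just to translate those percentages into frames (purely illustrative numbers, not measurements):

```python
# Purely illustrative: what a 1%, 3% or 5% PLX penalty means at common frame rates.
for base_fps in (60, 120):
    for penalty in (0.01, 0.03, 0.05):
        print(f"{base_fps} fps with a {penalty:.0%} hit -> about {base_fps * penalty:.1f} fps lost")
```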

So Raja, if - by chance - you can get your hands on some harder facts & numbers, I would really appreciate it (and I think this might be interesting to others as well). Otherwise, the WS (non-"E" version) could also be an option (it has no PLX), but on the other hand I'd still prefer the "E" because it has many more SATA ports available.

Again, thanks for the insights.
 
It would be between one and five percent - it's always that. You can see what you can find on websites that perform FPS tests with PLX mobos. I don't have boards here to set up and run these benchmarks for you to compare - sorry.

-Raja
 
@Raja

Hi,
The P9X79-E WS looks like a fantastic board for a workstation with a Xeon E5 16xx.
I'm looking forward to buying it as soon as I can find an E5 16xx V2 (it's out of stock everywhere in Europe).
Regarding ECC memory, I'd like to know whether there are settings in the BIOS to enable/disable it and other related options (scrubbing, ...).
I'd like to test the RAM without any corrections to better find out whether it's stable.
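
For the stability testing itself, my plan would be to watch the corrected/uncorrected counts from the OS while stressing the machine; a minimal sketch of what I mean, assuming a Linux install with the EDAC driver loaded (these are the generic EDAC sysfs paths, nothing specific to this board):

```python
# Minimal sketch: read the ECC error counters exposed by the Linux EDAC subsystem.
# Assumes the EDAC driver for the memory controller is loaded; the paths are the
# standard EDAC sysfs locations, not anything specific to the P9X79-E WS.
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")

for mc in sorted(EDAC_ROOT.glob("mc*")):
    ce = (mc / "ce_count").read_text().strip()  # corrected (single-bit) errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```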

Thanks.
 
This is all I have from HQ. Not sure about other options - you'll need to check when you get the board.

ECC option becomes available if ECC DRAM is used with a supported Xeon:

[BIOS screenshot]
 
It would be between one and five percent - it's always that. You can see what you can find on websites that perform FPS tests with PLX mobos. I don't have boards here to set up and run these benchmarks for you to compare - sorry.

-Raja

It's been tested in previous articles. There is tons of data out there on it as well. Typically, 1 or 2 frames is the difference you see in most games. A latency penalty exacting a 5% performance hit would be a worst-case scenario; my experience says it's closer to 1-3% most of the time. It's the type of difference you only see in a benchmark test. In actual gaming you'd never notice it.

It was the same with the NF100 / NF200 as well. Pretty much anything that multiplexes the lanes is going to introduce a slight amount of latency.
 