HAF XB Server - Dual Xeon Sandy Bridge-EP LGA 2011

Here are some initial measurements regarding the impact a top fan has on this build. Initially, I tried using a stock CoolerMaster 200mm fan, but it is 30mm high and didn't clear the top of both Arctic Freezer i30 heat pipes. I decided to complete the build without a top fan, then measure the temperatures of both Xeon CPUs. Thereafter, I mounted a thinner (20mm) BitFenix Spectre 200mm case fan on top, and I was happy to see that it does clear both heat pipes:


tronTopFanInstalled.png


The Xeon E5-2670 processor has a 100C Tj Max core temperature limit. The Arctic Silver 5 thermal compound takes a while to cure, so I'm expecting the following initial temperature readings to drop a bit over the next few days during burn-in.

Idle Performance - No Top Fan

  • CPU1 Average Core Temperature 32C
  • CPU2 Average Core Temperature 34C

Idle Performance - BitFenix Top Fan

  • CPU1 Average Core Temperature 30C
  • CPU2 Average Core Temperature 32C

Prospects for cooling performance look promising-- average core temperatures at idle are simply fantastic with the top fan in place. The BitFenix case fan dropped idle temperatures by 2C, or about 6%. The fan is extremely quiet, mainly because it spins at just 700 RPM, so I will happily take this modest improvement at idle. All of the other case fans are Noctua PWM hardware, controlled by the ASUS motherboard's BIOS, and all are spinning between 600 and 800 RPM at idle. The power supply unit fan does not spin at idle, operating in total silence.
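
For the curious, the percentage figure is just the before/after delta over the no-fan baseline-- a trivial sketch of the arithmetic using the averages listed above:

Code:
# Idle average core temperatures (C) with and without the top fan.
no_fan  = {"CPU1": 32, "CPU2": 34}
top_fan = {"CPU1": 30, "CPU2": 32}

for cpu in no_fan:
    delta = no_fan[cpu] - top_fan[cpu]
    pct = 100 * delta / no_fan[cpu]
    print(f"{cpu}: -{delta}C ({pct:.1f}% cooler at idle)")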

This platform is silent at idle. We'll see if that remains true after the K5000 is installed later today.

I am looking forward to future measurements at higher/hotter loads...

Note: Only two of the four BitFenix fan mountings align with the case top holes, but using two Nexus silicone fan mounts has nevertheless kept the fan very stable:


tronUnbox02.jpg

 
The KingWin 1000W power supply replaced the older Enermax PSU, which meant a second round of cable-management fisticuffs. The good news is the new unit is smaller and its flat modular cables are much easier to manage. Overall, clearances and open space improved, which should help airflow and provide slightly better thermal performance.


tronKingWin1.png


tronKingWin2.png


tronKingWin3.png


tronFinalCable1.png


tronFinalCable2.png


tronFinalCable3.png


tronFinalCable4.png


tronFinalCable5.png


The final piece of the puzzle is at hand-- time to kick the passive Radeon adapter to the curb and seat the Quadro K5000. Despite the shorter PCB, the board itself is still rather long, and the 6-pin power connector is located behind the board, not on top, so clearances are rather tight up front next to the 140mm Noctua intake case fan. Fortunately, clearances between the CPU coolers and other slot hardware are good, with plenty of room for good airflow on both sides.



tronQuadroInstall1.png


tronQuadroInstall3.png


tronQuadroInstall4.png

 
After the motherboard RMA issue, I was never happier to see these rather old-school ASUS server BIOS screens. Server BIOS software is boring compared to exotic gaming platforms: Xeon processors are locked, and server ECC registered memory is made for the long haul, so there's no juicing that up either. Basically, there is nothing to overclock, and that makes for a boring BIOS. Nevertheless, there was much relief and satisfaction when these crude screens reported both processors and all the memory correctly:


tronBIOS1.png


tronBIOS2.png


tronBIOS3.png


tronBIOS4.png

 
Initial thermal testing looks extremely promising. CPU and GPU temperatures at idle and load look wonderful. I am truly amazed how well the CoolerMaster HAF XB case, Noctua fans, KingWin PSU and Arctic Freezer heat pipes are performing together-- operating in silence at idle, with nothing noticeable even while Prime95 is running a torture test! The Quadro K5000 is also silent at idle and while working with Adobe CS6 applications. The only perceptible fan noise finally appears from the Quadro card while running GPU 3D stress tests.

That's it as far as system noise-- the lowest dB levels I have ever encountered-- all the more impressive given the amount of computing power involved and the lack of any water cooling...

The numbers are better than hoped for. The first CPU idles with an average core temperature of 29.75C. As expected, given the push-pull Arctic Freezer heat pipe configuration, the second CPU's cores idle slightly hotter on average at 32C. The Quadro K5000 idles at a very cool 26C. During Prime95 stress testing, the first CPU's average core temperature stabilizes at 55.5C, and the second CPU's average is 57.625C. The highest reported core temperature in the first CPU was 58C, and the highest core reading from the second CPU was 61C-- 39 degrees below the 100C Tj Max core temperature limit.
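
To put those numbers in context, here is the thermal headroom arithmetic-- a trivial sketch using the readings reported above:

Code:
# Thermal headroom against the E5-2670's 100C Tj Max limit.
TJ_MAX = 100.0  # C

readings = {
    "CPU1 average (Prime95)": 55.5,
    "CPU2 average (Prime95)": 57.625,
    "CPU1 hottest core":      58.0,
    "CPU2 hottest core":      61.0,
}
for label, temp in readings.items():
    print(f"{label}: {temp}C ({TJ_MAX - temp:.1f}C below Tj Max)")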

The system remained virtually silent during the entire test-- superb results!

Besides the case/fan cooling setup, other factors behind these thermal results include the Xeon E5 series' improvements in power efficiency (9W idle, 115W max), the roughly 20 percent power reduction of the K5000's Kepler architecture compared with the prior Fermi generation, and a very efficient, amazingly silent KingWin PSU.


TronServerInfo.png

Xeon E5-2670 and Quadro K5000

TronServerIdleTemps.png

Idle Temperatures

TronServerLoadTemps.png

Prime95 Stress Test Temperatures

TronServerWindowsSystem.PNG

Windows 7 System Information

TronServerNVIDIAControlPanel.PNG

NVIDIA Control Panel

TronServerDesktop.png

The Server Desktop
 
Looks great. Your thread has me strongly considering a HAF XB for my next build.
 
Brilliant build and quite an interesting case. I may have to pick one of these up for review.
 
Looks great. Your thread has me strongly considering a HAF XB for my next build.

This CoolerMaster case has my strongest endorsement-- super easy to work on, inexpensive, and it lives up to its name by exhibiting excellent cooling characteristics.
 
Who needs Prime95 to stress test a system? It appears good old BOINC provides a harder workout. Do you want to evaluate your CPU cooling design? Well, it certainly seems like searching for extra-terrestrial signals in the universal background ether for never-ending hour upon hour is a good approach, no?


TronBonicCrunch.PNG


Although the Xeon E5-2670 base clock rate is 2.6GHz, note how under heavy load CoreTemp reports TurboBoost in play, bumping all cores up to a sustained 3GHz clock rate. After running BOINC for a few hours with all 32 threads at 100% utilization, the average core temperature in CPU1 was 59.875C. It was 4C higher in CPU2 at 63.875C. The second/rear CPU doesn't intake as much cool air as the first/front CPU, and that accounts for this 4C delta at maximum CPU loading. After hours of madly searching for E.T. throughout the cosmos, the average core in the hotter CPU was still roughly 36C below the Tj Max core temperature threshold.

I am more than satisfied with the CPU cooling situation. It is time to stress the Quadrophonic Kepler 5000...
 
The server has successfully passed its burn-in stage. For ten solid days at 100 percent utilization across all 16 cores, this machine has crunched and munched without a single hiccup, without a single core ever coming closer than 30C to the Tj Max core temperature limit. It is time for this hardware to enter service, marking the conclusion of the build log.



Desktop.PNG

3D desktop with one Tron interface symbol for each available thread
 
Hi SonataSys,
Could you tell us how the system is doing? Are you happy with it?
Thanks

Thanks for your question. The server has remained up and stable. It's supporting two remote software engineers during the day, along with a part-time 3D modeler who works on the machine directly. Everybody remarks how silent the server is, and how the room remains agreeable for the modeler.

The CoolerMaster case continues to impress with its ability to house considerable power in a very small space, its open test-bench design that makes servicing a breeze, and most importantly its superb cooling performance. It's the next-best thing to having the server live within a chilled rack...

The Quadro K5000 adapter is working very well. The modeler is enjoying fast render times in the LightWave viewport, even with larger models. It was also very easy to get all of the Adobe CS6 applications (Premiere Pro, After Effects) to recognize the K5000. Video timeline editing is a breeze with the GPU hardware assist-- no lag, no stuttering, etc.

On a related note, here are some gaming and professional 3D benchmarks for the K5000...


Gaming Benchmarks

To evaluate high-stress 3D gaming load, benchmarks were gathered for both the server's Quadro K5000 and a gaming platform featuring cross-fired Radeon HD 7970 GPUs using FurMark v1.10.4 at 1920x1080 resolution. All video cards were running at stock speed (no overclocking).

FurMark_benchmarking_K5000.PNG


The Quadro K5000 system scored 2,011 points, averaging 33 FPS with a 60C maximum GPU temperature:

FurMark1.10.4_K5000.PNG


Compared to the cross-fired Radeon HD 7970 setup, this chart and graph reveal how the Radeon hardware is far better suited for gaming workloads:

FurMarkBarChart.PNG


Based on these results, it is clear the gaming setup performed much better under heavy gaming load, providing three times the performance for half the cost. How far behind is the Quadro K5000 in terms of gaming experience? The gaming score produced by the Quadro K5000 platform is roughly equivalent to prior-generation AMD Radeon HD 5870 or NVIDIA GeForce GTX 580 gaming cards.


Professional Benchmarks

To evaluate heavy professional modeling load, benchmarks were gathered using SPECviewperf 11 at 1600x1200 resolution.

SPECviewperfBenchmarking7.png


SPECviewperfBenchmarking8.png


The Quadro K5000 system scored an average of 73.14 and a Radeon HD 7970 CrossFireX system averaged 21.13:

SPECviewperf1600x1200_K5000.PNG


The chart and graph reveal how the Quadro hardware is far superior when rendering large 3D models. In fact, the tables turn completely, with the professional card providing well over three times the performance on average. Moreover, several vendor-specific tests (CAD and Maya especially) are six to ten times slower when running the gaming card:

SPECviewperfBarChart.PNG


These numbers present a clear distinction between professional and gaming video hardware: Both video platforms do well within their intended domain, and both perform poorly when loaded outside their domain.
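
For anyone who wants to check the ratios behind those conclusions, here's a quick sketch using the composite scores above (the crossfire FurMark result is taken as roughly three times the Quadro's, per the chart, so treat that half as approximate):

Code:
# SPECviewperf 11 composite averages reported above.
quadro_spec, radeon_spec = 73.14, 21.13
print(f"Quadro advantage in pro workloads: {quadro_spec / radeon_spec:.2f}x")  # ~3.46x

# FurMark: the gaming rig delivered roughly 3x the performance at
# roughly half the price (assumed from the text above), so its raw
# price/performance advantage for gaming works out to about:
perf_ratio, price_ratio = 3.0, 0.5
print(f"Gaming price/performance advantage: {perf_ratio / price_ratio:.0f}x")  # 6x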
 
Sorry to revive this thread, but I was grinning from ear to ear reading it, with a sheer "glad I found this" feeling! I have the Asus Z9PA-D8 on order, along with an E5-2680 SB, after reviewing the Supermicro X9DRL-EF-0; I wanted an ATX-size motherboard and the Asus had better PCI-E support of the two. I really appreciate the pictures and commentary in this thread, which have confirmed some things for me and raised some concerns before I continue my build purchases-- concerns with which I'm hoping you might be of some assistance. I am switching over from AMD, as I needed more powerful cores with better floating-point performance for some serious number-crunching applications that perform portfolio optimization runs-- the more fast cores the better, up to what I can afford; this is a personal build. I will not need more than 8GB RAM for this at all, just a lot of fast cores.

Specifically, I have struggled mightily trying to decide which PSU to use with this motherboard to feed its two 8-pin EPS connectors, and am leaning toward the Corsair AX760i, as I will not be using many disks (1 SSD, 2 HDD & 1 DVD-RW), no GPUs or other peripherals-- though I'm considering re-writing my application to take advantage of an Intel Phi PCI-E card for parallel number crunching... at some future point (a suboptimal use of my time right now). This Corsair is the lowest-wattage PSU in the line built by Flextronics rather than Seasonic for Corsair, so the audible whine of the Seasonics won't be an issue; it has the PMBus capability that will work with this motherboard, and importantly it's ~$190. Any concerns here or other thoughts?

The pictures of the Arctic coolers were most appreciated, and also the most disconcerting, as I was leaning toward the Noctua NH-U12S coolers that are compatible with this motherboard. The Noctua actually looks wider at the base than the Arctic coolers, which seem to be "extremely" close to the DIMMs nearest to them! So, I am concerned the Noctuas are too wide, and wondered whether that was why you opted for the Arctics instead, given all your fans are Noctuas-- the price delta isn't much between them, particularly if you're swapping out the stock fans. Can you help explain your decision and how much clearance you have between the cooler and DIMMs? Can't argue with the cooling performance you're getting, though, and if clearance isn't really an issue, then I may just go that route-- though I've a question in to Noctua about the NH-U12S regarding DIMM clearance.

Which leads to the RAM: this motherboard has a limited list of validated DIMMs on the Asus website (yours aren't listed, but clearly they're working for you), but Kingston claims they've a compatible low-cost ValueRAM kit for it (8GB as 4x2GB for $100), and, well, I've always used and trusted Kingston ECC memory given their lifetime warranty. But now I'm wondering whether I should be using Very Low Profile DIMMs instead, given the cooler clearance issue (though none seem supported), or switch from air to liquid cooling. Would appreciate your thoughts here as well.

I am curious whether this system is still running without hardware issues? Thanks very much in advance.
 
Yes, we went with the Asus mainboard for the same reason. The server has been working flawlessly since the burn-in earlier this year.

Regarding PSU selection, to avoid possible heartache, the feedback I received from Asus tech support was to opt for a very recent PSU that fully conforms to all of the latest Intel power standards-- basically, avoid the older products, even products a few years old. Don't underestimate power needs either, because just having two E5 CPUs can eat some serious wattage when they are pushed hard.

Regarding memory, I would strongly encourage selecting from the Asus compatibility list. I tried to go with cheaper Kingston sticks that weren't on the list and had to return them, since the system would not POST. I took another chance by selecting 8GB Samsung sticks that weren't on the vendor list, but other smaller-capacity Samsung sticks were. Tech support told me they cannot validate every server module, especially the more costly, higher-capacity memory-- but Samsung is a good choice because they have tested the smaller-capacity sticks, and Samsung also supplies compatible memory to other vendors as well. The Samsung memory works well and was reasonably priced. Whatever you choose, make sure you purchase memory from a vendor that will accept returns/exchanges.

Regarding CPU coolers, the Arctic coolers perform rather well with Noctua's latest Focused Flow fans, and the Arctic towers also provide reasonable memory stick and case clearance. However, if the new NH-U12S from Noctua fits in your case, then you should go with that cooler for sure, since its taller/thinner design delivers improved clearances in most directions. The NH-U12S was not available during our server build.

Finally, you may want to consider running two CPUs instead of one, depending on your math-intensive application's use of threads/cores. For the same money, you may be better off with two Xeons that deliver more cores at a slower clock than a single monster Xeon with fewer total cores and a higher top clock rate. Moreover, having two CPUs on hand allows you to confirm the Asus board is fully operational with both processors installed. Our first Asus motherboard did not POST with both processors installed and had to be returned. Thankfully, the second board has worked perfectly thus far.
 
Great, thanks for the info!

The Corsair AX###i units are the top-of-the-line professional platinum-series PSUs; I looked briefly at the older AX750 series due to cost, but ruled that out. Appreciate the comment on watts. I'd found some materials from IBM and AnandTech putting peak watts for dual E5-2680 SB at ~440W, and for dual E5-2697 v2 at ~550W (perhaps I can afford the 24 cores someday, but based on how well Xeon prices hold up, that may be a pipe dream). I also ran my expected configuration through an online PSU calculator and at most it stated ~700W; so, I think I'll be ok-- not a lot of headroom for sure, but I will think about this a bit more and check Black Friday/Cyber Monday prices for an AX860i if I can get it close to my target price of $190ish.
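
Here's the quick headroom arithmetic I'm working from (all figures are estimates quoted above, not measurements):

Code:
# Rough PSU headroom check (estimates only).
psu_capacity = 760      # Corsair AX760i rated watts
est_peak_draw = 700     # worst case from the online PSU calculator
headroom = psu_capacity - est_peak_draw
print(f"Headroom: {headroom}W ({100 * headroom / psu_capacity:.0f}% of capacity)")
# ~60W, or roughly 8% -- tight, hence the interest in the AX860i.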

Yes, I recognize the risk of buying just one CPU and not knowing whether the 2nd socket works, particularly given your experience, but $$ is preventing that decision. I was originally going to go with dual E5-2620s at $730, but was able to get a new (damaged box only) E5-2680 for $990 with an Amazon guarantee if I cannot get it to work (not an Engineering Sample, either); it had a better price/performance ratio and I couldn't pass it up. The cheapest I've seen a retail E5-2680 is ~$1570, which is a great price, but not within my spouse-sanctioned budget.

I heard back from Noctua today; they suggested either the NH-U12S or the NH-U9DXi4, which has identical specs except for improved mounting options and is slightly less expensive. So, I will go with that and revert to the Arctic like you if I run into issues. I am glad there are no clearance issues with the memory; thanks for that!

I will hunt around for a compatible Samsung 8GB kit, which I've found challenging, unlike with Kingston's memory checker. Any suggestions here?

Thanks ever so much!
 
You are doing a great job and should avoid all of the pitfalls out there.

The parting overall impression I'd like to leave is this: the SuperMicro and Asus dual LGA 2011 ATX mainboards are tightly packed and touchy-picky-finicky, especially with regard to the following:

  • memory compatibility
  • power supply quality
  • dual CPU POST issues (some boards POST with one CPU, not two)
  • cramped CPU socket and memory slot clearances
  • tight clearance between the CPU sockets and bus slot 1

Some other little snippets that you may be interested in:

There have been several BIOS upgrades this year, so I would highly encourage flashing to the very latest version ASAP.

For the very best memory performance, I've read that populating all four memory channels is the only way to achieve quad-channel performance/bandwidth, so consider filling up with memory instead of leaving open banks on either CPU.
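
The back-of-the-envelope arithmetic behind that advice, for DDR3-1600 (theoretical peak figures, not measured throughput):

Code:
# Peak-bandwidth arithmetic for DDR3-1600 on a Sandy Bridge-EP Xeon.
# Theoretical figures only; real-world throughput is lower.
transfers_per_sec = 1600e6        # DDR3-1600 = 1600 MT/s
bytes_per_transfer = 8            # one 64-bit channel
per_channel_gbs = transfers_per_sec * bytes_per_transfer / 1e9
print(f"One channel:   {per_channel_gbs:.1f} GB/s")      # ~12.8 GB/s
print(f"Four channels: {4 * per_channel_gbs:.1f} GB/s")  # ~51.2 GB/s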

There is a stock fan-control annoyance with the ASUS Z9PA-D8 motherboard-- zero ability to monitor or adjust fan speed outside of the BIOS during start-up. I tried many tools and none of them detected any PWM fan rates until I found AIDA64 Extreme Edition. The support group and website at AIDA are both fantastic.

Because the ASUS Z9PA-D8 server motherboard fans are connected to the IPMI BMC chip instead of the on-board Nuvoton Super I/O sensor chip, only AIDA64 EE detects and renders all fan speed/RPM values-- on the "Computer / IPMI" page instead of the "Computer / Sensor" page:

AIDA64-EXTREME-ASUS-Z9PA-D8.PNG


P.S.
I should note that there is another way to monitor fan speed from outside the BIOS, but it requires installing the entire ASWM Enterprise management software from Asus, which involves considerable overhead, including Microsoft SQL Server and other components. I did go through the entire install/configuration process and verified the monitoring capability, but we decided the enterprise management suite was overkill for our team's (only) server-- it's really designed to monitor a bunch of servers...
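
P.P.S. For anyone who would rather script the fan monitoring than install a heavyweight suite, the same readings can in principle be pulled straight from the board's BMC with ipmitool over the network. A minimal, illustrative sketch-- the host address and credentials are placeholders, and this is not what we actually use (AIDA64 EE is):

Code:
import subprocess

# Placeholder BMC address and credentials -- substitute your own.
BMC_HOST = "192.168.1.50"
BMC_USER = "admin"
BMC_PASS = "admin"

def read_fan_speeds():
    """List fan sensor readings from the BMC via IPMI-over-LAN."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", BMC_USER, "-P", BMC_PASS, "sdr", "type", "Fan"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        # Typical row: "FAN1 | 41h | ok | 29.1 | 5800 RPM"
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 5 and fields[4].endswith("RPM"):
            print(f"{fields[0]}: {fields[4]}")

read_fan_speeds()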
 
Thanks for that feedback. I've researched TONS on this setup in the hopes of avoiding costly mistakes and really appreciate what you've shared. This interchange has been helpful.

I did see your other posting on another site about the AIDA64 software, but was hoping that since this board supports PMBus, and the Corsair has an interface for it and claims to be able to see/control the fans/PSU, this will work. However, I've now entered a pre-sales ticket with their tech support to determine compatibility, given your feedback about the IPMI chip. If this comes back ok, then it seems BestBuy has the AX860i at $200, which is close enough to my budget and provides some extra headroom. If not, then I'll consider less expensive options and go the AIDA64 route.

I heard back from Kingston support stating the KVR16R11S8K4/8 kit should work, as it was also recommended for that motherboard by their site's compatibility checker, but we shall see. This is a 4-RDIMM kit totaling 8GB, so I'd intended to populate all of the CPU's DIMM slots for the performance reason you state. I searched and couldn't find any 2GB Samsung RDIMMs that would be compatible; it's a bit frustrating there's not more RAM listed for this board, but fingers crossed the Kingstons work. If not, I'll end up buying the cheapest I can get off the RAM listing regardless of performance, just for cost reasons, and upgrade later with the same type.

I also heard back from Noctua; it seems the NH-U12DXi4 or NH-U12S coolers would work, however they're taller than my tower case allows, but they confirmed that the slimmer NH-U9DXi4 will work, with a caveat: "We've tested this cooler with both fans installed using 130w TDP single CPU LGA2011 systems and I'm able to state they're suitable for typical workloads, but you will have to expect increased CPU temperatures under continuous heavy CPU load. I would recommend you to fit both coolers blowing towards the top of the enclosure if you're using a tower type case and ensure good case ventilation." A bit perplexed by what he means here: place the coolers on the CPUs such that the fans are pointing towards the PSU at the top of the case, or somehow retrofit the fans on the cooler in that direction? Could be that they don't recommend having the CPU1 fan blowing hot air into the CPU2 fan intake... not sure. The Arctics are also too tall; so, I'm now considering other cooling options like the Thermaltake NiC F3 or F4, or perhaps the Supermicro SNK-P0050AP4 with a Noctua 200mm fan like you did with the Arctics... bummer.

Thanks again; assuming you're in the USA, Happy Thanksgiving.
 
Given the dimensional envelope you are having success with-- the Arctic CPU cooler with the 120mm Noctua fan at 161mmH x 139mmW x 100mmD, with just the height being my case's obstacle at 155mm-- I've run a bunch of comparisons and have come up with these options:

Option 1: $53 for 5 direct-contact heat pipes at 126 x 96 x 105mm
Cooler: Supermicro SNK-P0050AP4 (recommended for the X9DRL-EF)
Fan: Noctua NF-B9 92mm (1600RPM, 37.8CFM / 64 m3/h max & 17.6dBA)

Option 2: $64 for 4 direct-contact heat pipes at 155 x 140 x 50mm (1mm wider than the Arctic)
Cooler: Thermaltake NiC F4
Fan: Noctua NF-F12 PWM 120mm (1500RPM, 55CFM / 93 m3/h max & 22.4dBA)

I'm leaning toward option 1 at the moment, but would appreciate your thoughts on it.
 
I also heard back from Noctua; it seems the NH-U12DXi4 or NH-U12S coolers would work, however they're taller than my tower case allows, but they confirmed that the slimmer NH-U9DXi4 will work, with a caveat: "We've tested this cooler with both fans installed using 130w TDP single CPU LGA2011 systems and I'm able to state they're suitable for typical workloads, but you will have to expect increased CPU temperatures under continuous heavy CPU load. I would recommend you to fit both coolers blowing towards the top of the enclosure if you're using a tower type case and ensure good case ventilation." A bit perplexed by what he means here: place the coolers on the CPUs such that the fans are pointing towards the PSU at the top of the case, or somehow retrofit the fans on the cooler in that direction? Could be that they don't recommend having the CPU1 fan blowing hot air into the CPU2 fan intake... not sure. The Arctics are also too tall; so, I'm now considering other cooling options like the Thermaltake NiC F3 or F4, or perhaps the Supermicro SNK-P0050AP4 with a Noctua 200mm fan like you did with the Arctics... bummer.

Noctua appears to be advising that the towers be placed so the fans push air upward instead of out the back (by rotating each tower 90 degrees from the usual placement). However, the CPU sockets are too close together to allow this configuration, although you could easily mount the tower this way if there's only one CPU in the system.

Regardless, I would take Noctua's advice seriously. Our server's dual CPU configuration cannot follow Noctua's advice, yet there are important differences that explain why the Arctic coolers are performing well under heavy load:

  • The E5-2670 processors are 115W, not 130W, under max load. This may not sound like a big delta, but it is: it means meaningfully lower power usage/cost over time, and an even bigger difference in heat output under max load-- which is what Noctua is referring to.
  • The Arctic coolers are full-sized with five heat pipes designed for 130W CPUs, but they are only dealing with 115W hardware in our server.
  • In addition, the Noctua fans perform better with less noise than the stock Arctic fans, which further improves thermal performance under load.

Despite the tower arrangement that Noctua is warning about, our server's second CPU runs only 3C warmer than the first CPU during normal workloads-- 66C below the 100C max. Under max load, the delta between the CPUs grows only from 3C to 5C, with the second CPU still 30C below the threshold.
 
Option 1: $53 for 5 direct-contact heat pipes at 126 x 96 x 105mm
Cooler: Supermicro SNK-P0050AP4 (recommended for the X9DRL-EF)
Fan: Noctua NF-B9 92mm (1600RPM, 37.8CFM / 64 m3/h max & 17.6dBA)

Option 2: $64 for 4 direct-contact heat pipes at 155 x 140 x 50mm (1mm wider than the Arctic)
Cooler: Thermaltake NiC F4
Fan: Noctua NF-F12 PWM 120mm (1500RPM, 55CFM / 93 m3/h max & 22.4dBA)

Given all of the tight clearances you must overcome, I suspect option #1 is the way to go. Regarding clearance, it has nearly every advantage when space is tight all around. The cooler is small enough to mount two towers side-by-side in either configuration, pushing air out the back or up to the top of the case. My only concern is that the cooler sits rather low to the mainboard, and its fan will very likely block one memory bank if the cooler is mounted in the direction that blows air upward to the top of the case. The other configuration wouldn't have that problem. Regardless, the stock fan would be very loud under load, so buying a Noctua fan is definitely the right choice.

The Thermaltake double-fan configuration is too wide and won't work in any orientation with two CPUs. Going with just one Noctua fan would make things fit, but I'm concerned about a big thermal risk under heavy load if only one fan were used with that heat pipe, especially with a 130W CPU glowing white hot.
 
Option 1: $53 for 5 direct-contact heat pipes at 126 x 96 x 105mm
Cooler: Supermicro SNK-P0050AP4 (recommended for the X9DRL-EF)
Fan: Noctua NF-B9 92mm (1600RPM, 37.8CFM / 64 m3/h max & 17.6dBA)

Option 2: $64 for 4 direct-contact heat pipes at 155 x 140 x 50mm (1mm wider than the Arctic)
Cooler: Thermaltake NiC F4
Fan: Noctua NF-F12 PWM 120mm (1500RPM, 55CFM / 93 m3/h max & 22.4dBA)

I'm leaning toward option 1 at the moment, but would appreciate your thoughts on it.

I've communicated with another builder who really likes the Dynatron R17 cooler: all-copper heat pipes in a small 4U package with a 92mm fan. The fan is loud during this stress test, but the temperatures are really good-- and I think these are 3.8GHz 135W CPUs. Perhaps a Noctua would deliver the same temps with less ROAR:

http://www.youtube.com/watch?v=wH9ole4ypTs
 
I was wondering why you didn't get the newer "XB EVO"? Or better yet, the Corsair Carbide Series Air 540 case? Just wondering what your thoughts were for getting the older-model XB, and why that one over the Corsair.
 
Appreciate that; I'm returning the Thermaltake NiC F4 unopened. After researching all the cooling options, and after debating whether to replace my case to allow for better cooling options, I agree, and have ordered the Dynatron R17, which seems like the best choice: I can rotate CPU1's cooler/fan to push air to the case top, with CPU2 in the normal position pushing air to the rear. I will look at other relatively quiet/high-performance 92mm fans later. Components are starting to come in and I expect to start building next weekend; will post results for future Z9PA-D8 owners, or those considering becoming owners...
 
Components are starting to come in and I expect to start building next weekend; will post results for future Z9PA-D8 owners, or those considering becoming owners...

Excellent. Much good luck with your build-- and please keep us in the loop along the way.
 
Well, I have great news to report! Enough of the components came in early that I was able to get it built, and everything worked as planned! I'll post pics along with build/config/cost info later, as I'm still waiting on a couple of case fans; thus, stressing the system will have to wait, though I'm collecting baseline info. As I've only the one CPU, I'm a bit concerned about the viability of the 2nd socket, but the Kingston RAM works, as did migrating my SSD/HDD from an AMD to an Intel motherboard with just a Windows OS re-activation. Sigh!
 
Solid-- I can almost feel your relief . . . and excitement!

I am very pleased to hear about your clean build-- not an easy thing to pull off with these tightly packed configurations and heavyweight parts.

I know others are also looking forward to seeing your new machine and your thermal measurements.

Congrats!
 