Backbreaker, my 1950x build

thesmokingman · Supreme [H]ardness · Joined Nov 22, 2008 · Messages: 6,617
Backbreaker, because it's getting so heavy that it hurts to move. And the name kind of goes with the whole Threadripper thing. I don't usually do build threads because I tend to finish them too quickly, but this one is going to take a little longer than usual and I didn't want to keep hijacking other people's threads. :stinkyfeet:

The goal is to combine my htpc/server and daily rig into one machine that does everything I need in one package. What that essentially means is handing down my relidded 6700k to my son and selling off the old 3930k server. I think I can hold onto this TR build for a good many years, crosses fingers.

Main parts:
cpu - Threadripper 1950x
mb - Asus Zenith Extreme *Note: the Zenith's armor does not clear the mb tray grommets and bends the board (fix is to remove the grommets)
ram - G.Skill TridentZ 3733mhz 4x8gb
gpu - Nvidia Titan X Pascal
m.2 - Samsung 960 EVO 500gb m.2
ssd - SanDisk Ultra II 960gb SSD x2
psu - EVGA 1300G2
case - ThermalTake Core X9
raid - Highpoint RocketRaid 3530 12port RAID
raid - Adaptec 5805z 8 port RAID
disk - Seagate 3TB x12 = RAID 6 27TB (capacity math sketched after this list)
disk - Seagate 10TB Ironwolf, Seagate 6TB x2
disk - WD 8TB Red x2
lcd - BenQ XL2420TE x3, surround 144hz
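For anyone wondering where 12 x 3TB went in that 27TB figure: RAID 6 spends two drives' worth of space on parity, and the rest of the gap is just decimal TB (how drives are sold) vs binary TiB (how the OS reports it). A quick sanity-check sketch, nothing more (the function is mine, not from any RAID tool):

```python
# RAID 6 usable capacity: two drives' worth goes to parity.
def raid6_usable_tb(drives: int, size_tb: float) -> float:
    return (drives - 2) * size_tb

usable = raid6_usable_tb(12, 3.0)   # 30.0 decimal TB after parity
as_tib = usable * 1e12 / 2**40      # ~27.3 TiB, which the OS reports as "27TB"
print(f"{usable:.1f} TB decimal -> {as_tib:.1f} TiB reported")
```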

Cooling parts:
cpu - EK Threadripper EVO waterblock
gpu - EK Titan X waterblock
res - EK RES X2 150
pump - Swiftech MCP35X w/ Acool metal heatsink body
rad - XSPC RX480mm
rad - TFC Xchanger 480mm
fan - Gentle Typhoon AP15
fan - Sunbeam Rheosmart 6


The stock Core X9, while flexible, is astonishingly weak on storage, especially for such a large cube-type case. The thing only comes with 9 drive bays total. I really cannot comprehend the lack of foresight by TT. Thus, on this build, storage was the key to making the case work for me. Cooling-wise, I have a pair of EK Coolstream XE480s, but when I got down to it, I was just too lazy to tear them out of the Enthoo Primo they currently reside in, which also happens to be the pc I'm typing on. I'll get around to swapping the rads in 2018, maybe. I figure I'm in no hurry because 960mm of good but old rads is still more than enough for the load.

The first thing I did was start modding the X9 to hold 20 drives while still retaining the tray system that TT uses. For this I found the Rosewill 4x3.5in server drive cages. They come with a junk 120mm fan in a somewhat decent-looking grill/frame, but I didn't need the grill either. I ordered 5 of these and stripped everything attached to them. I measured, drilled, and used some 6-32 screw/nut sets to attach them to the existing TT drive trays. A quick coat of flat black and voila.

https://smile.amazon.com/gp/product/B005FHHOXE/

Test fitting Rosewill cages

oVcfgLdh.jpg


Painted black and mounted

kB1p6PDh.jpg


PSU side of case with pump mounted and last cage

VCk3ziTh.jpg


Just got the board and I had to see it in there

vhPCIGLh.jpg


Test fitting drives and drive cabling

D5pmNUXh.jpg
 
Very impressive! I hope my build comes out as nice as this. The X9 is not a case you take to a LAN party!
 
I had one of those cases for a little while. +1 on the lack of storage foresight, TT designed some pretty awful and space inefficient drive racks. It's a downright pleasure to work with water cooling in that case though, and they do have lots of removable parts.
 
Very impressive! I hope my build comes out as nice as this. The X9 is not a case you take to a LAN party!

Yea, it is starting to get real heavy and I've only got half the drives installed atm. To be honest, I'm kind of worried the Core X9 will end up too flexy. I removed the psu/hard drive divider. It connected the middle of the case to the back and the left side, giving some triangulation. W/o the divider in there, the bottom of the case is pretty flexy. I suppose that confirms my fears.
 
LMAO! I had to move my X9 which is similar to the OP's (dual 480 rads, etc.) but not quite as many hard drives. A visit to my chiropractor was scheduled soon after.
 
Yea, it is starting to get real heavy and I've only got half the drives installed atm. To be honest, I'm kind of worried the Core X9 will end up too flexy. I removed the psu/hard drive divider. It connected the middle of the case to the back and the left side, giving some triangulation. W/o the divider in there, the bottom of the case is pretty flexy. I suppose that confirms my fears.

How difficult would it be to fabricate something to replace the divider and put rigidity back into it?

LMAO! I had to move my X9 which is similar to the OP's (dual 480 rads, etc.) but not quite as many hard drives. A visit to my chiropractor was scheduled soon after.

Maybe OP should integrate a foldable dolly into the bottom or something (y).
 
How difficult would it be to fabricate something to replace the divider and put rigidity back into it?


Maybe OP should integrate a foldable dolly into the bottom or something (y).

Fabricating one wouldn't be hard, but it's probably not worth it given it's just an L-shaped stamping that connects 3 sides together. I will stick the divider back in. I removed it to improve cooling on the drives, since the divider encloses all the drives under the mb tray. It is what it is, I suppose, but luckily there is a 140mm fan opening that will help.
 
Word of warning: don't follow EK's blob TIM method. I did, and I had 10C worse temps than when I redid it as a star (big X plus small vertical/horizontal lines). Use the whole 1g tube (although I use Kryonaut rather than the included Hydronaut).
 
Word of warning: don't follow EK's blob TIM method. I did, and I had 10C worse temps than when I redid it as a star (big X plus small vertical/horizontal lines). Use the whole 1g tube (although I use Kryonaut rather than the included Hydronaut).

Thanks for the warning. You think you know large, but damn, seeing it in your hands I was taken aback a little by how massive the IHS is. I use CLU and it took nearly a whole tube to cover the IHS. Add to that, the IHS's surface is so fresh and clean that the CLU did not want to stick. Luckily, with CLU the spread and trapped air bubbles are not a concern.

And I had to move the rig onto the floor dolly. It's too darn heavy to spin around now.


Update: I reinstalled the psu/drive divider and installed the psu. I also routed the HD power cables, and man, was that a pain!

VRfvR2Ph.jpg


UfTcyZih.jpg


toFEh18h.jpg
 
Looking great! I bet that WAS a pain. Did you custom make the power cables for the drives? I'm guessing the loose wires are for the pump and that you are going to sleeve them? Twist them together - makes them easier to sleeve.
 
I was bummed; I was looking around for some hose to plumb this thing up, but it seems I was out. Have to wait a couple days, man... But wait, I then remembered my server is external, so there's 20ft of tubing right there lol. Hand to forehead, doh!
 
I got it plumbed up last night and went about cleaning and flushing the loop. The rads had been sitting for a while and needed a good cleaning.

ZGJFGuJh.jpg


Today I managed to get the drives in and check the arrays. I had a scare for a bit as one of the RAID cards overheated, doh, and I thought it was dead. It was just taking a break.

AzJ9VHyh.jpg




The build is almost done. After some more testing and tuning, I'll fix up the rest of the cabling. It's running at 3.7ghz with 3200mhz memory. I'm still waiting on another set of dram.
 
May wanna check total system draw if you're overclocking later on. 1300W is probably enough, but that thing will be a beast under load. The Seagate drives use 6-8W each according to the specs, but I don't know what they draw during spin-up.
 
May wanna check total system draw if you're overclocking later on. 1300W is probably enough, but that thing will be a beast under load. The Seagate drives use 6-8W each according to the specs, but I don't know what they draw during spin-up.

It's not that much actually, and this rig is much more efficient than the last one. Worst case, spin-up power is at most 600w for the drives, and obviously it all ran in the old rig w/o issue. I ran the same level of gear on a 3930K clocked around 4.4ghz, give or take the weather, with all those drives. It idled around 260w at the wall and could go as high as 600w depending on the task. I didn't game on it much though. I will overclock this rig for sure. When I finish it I'll put it thru its paces with a Kill A Watt in hand.

**This morning I connected the Kill A Watt and boot draw averaged 300w at a 4ghz overclock. Later I'll connect my UPS to complete it. Doh, it's idling at 4ghz with 260-270w draw. Not bad, not bad at all.
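For the curious, here's the rough arithmetic behind that "at most 600w" spin-up estimate. This is a back-of-the-napkin sketch where every per-component number is a ballpark assumption of mine, not a spec-sheet figure:

```python
# Back-of-the-napkin worst-case draw (all figures are ballpark
# assumptions, not measured or from spec sheets).
DRIVES = 17                  # 12 + 1 + 2 + 2 spinning disks in this build
SPINUP_W = 30                # typical 12V surge while a 3.5" drive spins up
RUNNING_W = 7                # mid-range of the 6-8W each figure quoted above
CPU_W, GPU_W, MISC_W = 350, 250, 100  # OC'd 1950x, Titan XP, board/pump/fans

spinup_surge = DRIVES * SPINUP_W                         # ~510w, brief, at boot
sustained = DRIVES * RUNNING_W + CPU_W + GPU_W + MISC_W  # ~820w flat out
print(f"spin-up surge ~{spinup_surge}w, sustained load ~{sustained}w")
# Both fit comfortably inside the 1300G2, even leaving headroom for derating.
```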
 
Update:

Zenith Extreme - This is one annoying board for a variety of reasons, some of which may be game breaking, and the ROG forum is not that helpful. First on my list of gripes is that PWM control doesn't work, so the pump just runs at max rpm; this is pretty high up there on game-breaking bugs. Second, the vrm fan is annoyingly loud and there is not much control over it. Third, regular old SATA RAID is broken/doesn't work, not to mention requiring the setup of a webserver (for admin) for something as ubiquitous as SATA RAID. Fourth, the screws on the socket latch are just a tad too short, so it's a little hairy having to put so much force on them to make a thread grab. Fifth, the cpu temp is whacked; when overclocked, idle temps drop to 8c in Ryzen Master. Another annoyance is that Asus chose to go with only 6 sata ports vs 8 like Asrock and MSI. At times I really could have used those two extra ports.

Case issues - The drives are running too warm. I'm going to space the cages apart to fit an AP15 between them. I'll throw a couple more on either side to get more air moving thru the cages.

I managed to do a couple Firestrike runs. The rig still needs a good tuning. Btw, it was drawing around 500-550w during the FS runs.

https://www.3dmark.com/fs/13587871
https://www.3dmark.com/fs/13585505
 
Nice. I like the modded drive bays you added.
The X9 was one of the cases I was going to get for either my dual Xeon or 4790k setups. I opted for the Corsair Air 740 since I like the exterior design, but this huge case only holds three 3.5" drives.
If they are 7200rpm performance drives, they will be cooking, since the drive bay has all three sandwiched together and there is no fan in that chamber.
I ended up leaving one drive in the stock area and putting the other 2 on the floor of the chassis; those 2 are just sitting there unsecured.
21433239_1638962579456043_8887505032614215827_n.jpg
 
Fifth, the cpu temp is whacked; when overclocked, idle temps drop to 8c in Ryzen Master.

Are you sure the cpu temp is off? Or is it that you're just applying less voltage than at stock? For example, when mine is set to stock, the voltage can fluctuate all the way up to 1.488 due to XFR kicking in. Subsequently, when I "lock" the voltage, the cpu actually idles at a lower temp.
 
Are you sure the cpu temp is off? Or is it that you're just applying less voltage than at stock? For example, when mine is set to stock, the voltage can fluctuate all the way up to 1.488 due to XFR kicking in. Subsequently, when I "lock" the voltage, the cpu actually idles at a lower temp.
There are issues. I filed documentation with AMD last week, no replies yet.
 
Nice. I like the modded drive bays you added.
The X9 was one of the cases I was going to get for either my dual Xeon or 4790k setups. I opted for the Corsair Air 740 since I like the exterior design, but this huge case only holds three 3.5" drives.
If they are 7200rpm performance drives, they will be cooking, since the drive bay has all three sandwiched together and there is no fan in that chamber.
I ended up leaving one drive in the stock area and putting the other 2 on the floor of the chassis; those 2 are just sitting there unsecured.

Omfg, only 3 drive spots? That is ludicrous for a cube case, much like this X9.

Are you sure the cpu temp is off? Or is it that you're just applying less voltage than at stock? For example, when mine is set to stock, the voltage can fluctuate all the way up to 1.488 due to XFR kicking in. Subsequently, when I "lock" the voltage, the cpu actually idles at a lower temp.

Yea, I'm using manual volts atm. Does your cpu idle at 10c though?
 
Hey Kyle, have you tested the PWM fan control on any of these boards yet? I don't know if it's all boards or just the Zenith that cannot control my mcp35x pump via PWM. It just runs at max, and the PWM settings in bios don't do anything.
I used the fan control, but I set everything for max. Dan will be looking into this. Also, I gave ASUS a heads up on your post above.
 
Case issues - The drives are running too warm. I'm going to space the cages apart to fit an AP15 between them. I'll throw a couple more on either side to get more air moving thru the cages.
I was afraid you were going to have issues with drive temps. I'm guessing there is insufficient space between the side panel and the drive cages to get fans in there? Doesn't look like there is any space down the middle either. Top and (I'm assuming) bottom of the cages are too solid to allow any significant air flow from above or below so that's out. If you could move the pump/res up to the MB tray level then you could move one of the drive racks over to the other side and make room for the fans.

Just throwing some ideas out there. :)
 
I was afraid you were going to have issues with drive temps. I'm guessing there is insufficient space between the side panel and the drive cages to get fans in there? Doesn't look like there is any space down the middle either. Top and (I'm assuming) bottom of the cages are too solid to allow any significant air flow from above or below so that's out. If you could move the pump/res up to the MB tray level then you could move one of the drive racks over to the other side and make room for the fans.

Just throwing some ideas out there. :)

There's roughly a .5 inch gap between each cage, so I could slip one of those 12mm thick Scythe fans in and hot glue it in place if needed. That way I don't have to take the cages apart to re-drill them. Four cages = 3 12mm Scythes. Crosses fingers.
 
lol, with the downtime, everyone is like where's the server? I can't get to my shows? Even my daughter is like what's up, lol. Little do they know it's gonna go down again for a mb swap. And I'm going to upgrade the RAID 6 array from 3Gb/s to 6Gb/s with a Highpoint 3740a... so much to do.
 
There's roughly a .5 inch gap between each cage, so I could slip one of those 12mm thick Scythe fans in and hot glue it in place if needed. That way I don't have to take the cages apart to re-drill them. Four cages = 3 12mm Scythes. Crosses fingers.
If you find that the 12mm fans won't fit, you might be able to move the first and last cages out another .5"-.6" and put a full-size fan between cages 1 and 2 and another between 3 and 4. Then you'd only have to move/re-drill 2 cages, and it should provide adequate cooling to all 16 drives. If you put the 200mm fan in the front, that should provide cooling for the 4 drives on the other side.

Thought maybe you could put the 12mm fans between the cages and the side panel, but the pics look like you only have about a quarter inch there. Bummer.
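Quick unit check on the clearances being kicked around here; the gap figures are the eyeballed numbers from the posts above, so treat them as rough:

```python
# Does a 12mm slim fan clear each gap? Gap sizes are the rough
# eyeballed measurements from the posts above.
MM_PER_IN = 25.4
fan_in = 12 / MM_PER_IN   # ~0.47"

for spot, gap_in in [("between cages", 0.5), ("cage to side panel", 0.25)]:
    verdict = "fits" if fan_in <= gap_in else "won't fit"
    print(f'{spot}: {gap_in}" gap vs {fan_in:.2f}" fan -> {verdict}')
```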
 
Update:

Zenith Extreme - This is one annoying board for a variety of reasons, some of which may be game breaking, and the ROG forum is not that helpful. First on my list of gripes is that PWM control doesn't work, so the pump just runs at max rpm; this is pretty high up there on game-breaking bugs. Second, the vrm fan is annoyingly loud and there is not much control over it. Third, regular old SATA RAID is broken/doesn't work, not to mention requiring the setup of a webserver (for admin) for something as ubiquitous as SATA RAID. Fourth, the screws on the socket latch are just a tad too short, so it's a little hairy having to put so much force on them to make a thread grab. Fifth, the cpu temp is whacked; when overclocked, idle temps drop to 8c in Ryzen Master. Another annoyance is that Asus chose to go with only 6 sata ports vs 8 like Asrock and MSI. At times I really could have used those two extra ports.

Case issues - The drives are running too warm. I'm going to space the cages apart to fit an AP15 between them. I'll throw a couple more on either side to get more air moving thru the cages.

I managed to do a couple Firestrike runs. The rig still needs a good tuning. Btw, it was drawing around 500-550w during the FS runs.

https://www.3dmark.com/fs/13587871
https://www.3dmark.com/fs/13585505


1. What EFI are you using?
2. What headers do you have the Swiftech MCP35X pump attached to at the moment? Are you trying to use the EFI or Fan Xpert for control of the pump?
3. The VRM fan has a couple of different profiles in the latest EFI that should help the noise level, depending on CPU thermals.
4. What AMD driver release are you using? Also, the webserver requirement is AMD's, not ASUS's. RAID works fine, regardless of EFI version.
5. As for the CPU installation, we have a how-to in the ROG forum; if followed, the amount of force can be greatly reduced.

PM me if you have further questions.
 
First on my list of gripes is that PWM control doesn't work, so the pump just runs at max rpm; this is pretty high up there on game-breaking bugs.

This is something I want to start testing going forward, though this wasn't my experience using an AIO on this server, or if it was, it didn't annoy me with the setup I had on it.

Second, the vrm fan is annoyingly loud and there is not much control over it.

I didn't experience this.

Third, regular old SATA RAID is broken/doesn't work, not to mention requiring the setup of a webserver (for admin) for something as ubiquitous as SATA RAID.

This is something I did experience with BIOS updates past 0301. It worked again on a later BIOS though. SATA RAID arrays failing to detect in 2017 is bullshit, I'd agree. I found that the arrays wouldn't come up in Windows, but did show up in the regular BIOS. On 0301, SATA performance is crap. This was resolved on later BIOS revisions, but it didn't always work, which is a bigger problem.

Fourth, the screws on the socket latch are just a tad too short, so it's a little hairy having to put so much force on them to make a thread grab.

I didn't have this issue.

Fifth, the cpu temp is whacked; when overclocked, idle temps drop to 8c in Ryzen Master.

Temperatures do some odd things with some BIOS revisions. I didn't use Ryzen Master all that much for monitoring and used AI Suite III instead. I never saw temperatures drop that low. I did, however, see some BIOS revisions double the idle temps of the CPU when nothing had changed.
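If I remember right, part of the confusion is that first-gen Threadripper reports Tctl, which sits a fixed offset above the actual die temp (+27C on the 1950X, per my recollection of AMD's Ryzen temperature notes, so verify that number), and monitoring tools disagree on whether to subtract it. A sketch of how that single offset produces both whacked readings:

```python
# Tctl vs Tdie on a 1950X: the sensor reports Tctl, which sits a
# fixed offset above the real die temp (+27C, if I recall AMD's
# guidance correctly -- treat that figure as an assumption).
TCTL_OFFSET_C = 27

tdie = 35.0                              # plausible water-cooled idle die temp
tctl = tdie + TCTL_OFFSET_C              # raw sensor value: 62.0 (looks doubled)
double_corrected = tdie - TCTL_OFFSET_C  # a tool subtracting twice: 8.0
print(tctl, double_corrected)            # 62.0 8.0 -- both symptoms above
```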

Another annoyance is that Asus only chose to go with 6 sata ports vs 8 ports like Asrock and MSI. At times I really could have used those two extra ports.

Yeah, I don't get this either. I use 9 of 10 SATA ports on my X99 system.
 
1. What EFI are you using?
2. What headers do you have the Swiftech MCP35X pump attached to at the moment? Are you trying to use the EFI or Fan Xpert for control of the pump?
3. The VRM fan has a couple of different profiles in the latest EFI that should help the noise level, depending on CPU thermals.
4. What AMD driver release are you using? Also, the webserver requirement is AMD's, not ASUS's. RAID works fine, regardless of EFI version.
5. As for the CPU installation, we have a how-to in the ROG forum; if followed, the amount of force can be greatly reduced.

PM me if you have further questions.

1. I'm currently on 0601, but was on 9960. 9960 caused the bios to read double the idle temp, so I went back to 0601. However, 0601 causes Ryzen Master to read the idle temp at 10c.
2. I've tried using the cpu and waterpump headers with no luck. The pump rpm shows max speed with no attenuation. I'm using the uefi for PWM control. The headers read a regular fan ok.
3. The problem is related to the bios: with 9960 the vrm fan runs loud, but with 0601 it's manageable; however, cpu temps read too low. Catch-22. For example, with 0601, hammering the cpu with Prime only gets temps to 40c. It's hard to have faith in the system when I don't know what figures to trust.
4. 17.30
5. The how-to should be printed in the mb box.
https://community.amd.com/thread/219286

2a. I've been using ROG boards for years and have never had an issue with the uefi and pump control. That's one of the reasons I'm partial to Asus. However, on the Zenith it does nothing. The pump runs at max, and fundamentally that's not the desired approach on a 35x. I never want to go over 52% duty cycle on this pump; it's killing me.
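For context on that 52% number, a rough sketch of how PWM duty maps to pump speed. It assumes a simple linear response between the MCP35X's min and max rpm, and both endpoints are ballpark figures of mine, not Swiftech's spec:

```python
# Rough PWM duty -> rpm map for the MCP35X, assuming a linear
# response between min and max speed (both endpoints are ballpark
# assumptions, not from Swiftech's spec sheet).
MIN_RPM, MAX_RPM = 1300, 4500

def rpm_at_duty(duty_pct: float) -> float:
    return MIN_RPM + (MAX_RPM - MIN_RPM) * duty_pct / 100

print(round(rpm_at_duty(52)))    # ~2960 rpm: the quiet sweet spot
print(round(rpm_at_duty(100)))   # 4500 rpm: where a dead PWM header pins it
```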
 
Fourth, the screws on the socket latch are just a tad too short, so it's a little hairy having to put so much force on them to make a thread grab.
If you watch my videos on the install, this is a socket issue, not just on ASUS. Grab a thread on each mount point, then tighten in the order instructed.
 
I'm telling you, there are serious tolerance discrepancies. On my socket, there is no way to install it like you did in your video. I close the latch and the screw just spins, on all three points too. There's no way to tighten it down; it's exactly like the complaints in the amd forum links. Eventually I have to apply a LOT more pressure to force the latch down in order to get a thread to engage. And it really pissed me off to have to deal with tolerances like this and apply that sort of pressure on a $550 board and a $1k cpu. You know what I mean?
Yes, this is exactly what I showed and why I outlined it that way. "There's no way to tighten it down," but yes there is. Gotta put some elbow grease into it.

I have beat the hell out of this socket and CPU, no issues yet.
 
It just feels too sketchy; my worst nightmare is an accidental slip that gouges the board or worse. I'm gonna have to pull out the small torque wrench I use for bike repair and measure this someday.
Pretty hard to slip off the bolt head with a Torx wrench. That socket is built like a tank.
 
It just feels too sketchy; my worst nightmare is an accidental slip that gouges the board or worse. I'm gonna have to pull out the small torque wrench I use for bike repair and measure this someday.

It worked fine for me. Once I got used to the design, I grew to like it.
 