Build: 3970x, dual 2080ti, 8TB m.2 RAID = Render Monster

Hello thesmokingman

I am putting together a similar rendering/editing build for a friend but replacing the 2080ti with a single RTX Titan (for now) and I am planning on using an ATX or EATX motherboard. I am not OCing anything and I am going with an AIO instead of a custom loop so maintenance will be less. I just have a couple of questions about your build.

1. My friend wants 64 gigs of ram, should I talk him into 128 or 256 gigs?
2. What software and plugins are you using?
3. Would a 280mm AIO be able to take the heat of 8 hr rendering sessions with a 3960x or 3970x?
4. Do you have a scratch SSD drive? Is it worth it?
5. What would you change about your build to make it a reliable workstation?
 

1. Ram speed can make a big difference in performance on TR3, huge even. Throw your dimms into a different machine to run DRAM Calculator, since TR3 isn't supported by DRAM Calculator yet. We went with 64GB to start because we chose a very tightly timed ram kit and it was rather expensive; if it goes on sale again we'll add another 64GB. Aside from speed, more ram is always better in a production rig, though 64GB hasn't held this rig back afaik. Capacity is probably the weakest link, followed by the cooling imo.
2. Apps used run the whole gamut from Adobe to Houdini to Redshift. Btw, AMD just released some plugins for Houdini. Also, try the Nvidia Studio Driver.
3. 280mm is a bit small imo; it would barely cover one of your two blocks. This rig runs two 60mm-thick 480mm rads and they are maxed out when all three devices (cpu/gpu/gpu) are rocking. Water temp delta is 10c, and deep into a 5-hour render water temps will hit their max of 38c. Idle water temps are 27c/28c with a typical ambient of 23c, to give you an idea. Your choice of case will determine what rad size you can fit, so choose wisely. I'd suggest two 360mm rads, ideally with some padding, especially if you don't want to end up in an overloaded heat situation, ie. a bad steady state: heat cannot be removed from the system as fast as it's produced, it stays stuck there, and your cpu throttles. Also keep in mind that above 70c the boost clocks are reduced, ie. reduced performance. This cooling state can be really misleading for ppl: the temp stops going up precisely because the chip is throttling, bam. An easy way to show this is with a Kill A Watt or, at the least, HWiNFO64 monitoring power draw (see the sketch after these answers). First establish a baseline reading of power draw (the first 5 minutes), then note whether the power rises or drops as time passes. Granted, if you run a bench you need it to run for an hour or more (similar to a real-world workload). If the power draw drops, that means the heat is trapped and the system is throttling. We want the power draw to be steady through the job; you want the longest stretch of high-efficiency rendering, not 5 hours of throttling. You'll have to weigh the pros and cons of it all. In this rig's case, I knew it was a compromise going in. Yea, "only" two 480mm rads is a compromise lmao, but it is.
4. Yes, one gen4 drive is used as a dedicated scratch drive for Adobe. I don't know if it's worth it, as it's what was requested, but I guess it's de rigueur with Adobe. The real meat of the storage system is the 8TB gen4 m.2 RAID array though, and that's what holds the projects and assets during the job. It obviously removes drive speed as a limiting factor (there's a quick write-speed sketch at the end of this post if you want to verify your own drives).
5. To improve the current machine: the ram size and the cooling. I would have plumbed the 1080mm external rad from post 100 into the loop, which would be a huge boost to lowering the water temp delta.
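On the throttle check from #3: here's a minimal sketch that chews through an HWiNFO64 sensor log (CSV export) and compares the first five minutes of power draw against the last five. The file name, column label, and polling interval are all assumptions; HWiNFO's column names vary by version and hardware, so adjust them to match your own log.

```python
# Minimal sketch: spot heat-soak throttling in an HWiNFO64 CSV sensor log.
# Assumptions (adjust to your setup): the log is "render_session.csv", power
# was logged under "CPU Package Power [W]", and the polling period was 2 s.
import csv

LOG_FILE = "render_session.csv"
POWER_COL = "CPU Package Power [W]"
INTERVAL_S = 2.0
BASELINE_S = 300  # first 5 minutes establish the baseline

watts = []
with open(LOG_FILE, newline="", encoding="latin-1") as f:
    for row in csv.DictReader(f):
        try:
            watts.append(float(row[POWER_COL]))
        except (KeyError, ValueError):
            continue  # skip the footer and any malformed rows

n = int(BASELINE_S / INTERVAL_S)
baseline = sum(watts[:n]) / n
late = sum(watts[-n:]) / n  # last 5 minutes of the run

print(f"baseline: {baseline:.0f} W, end of run: {late:.0f} W")
if late < 0.95 * baseline:
    print("draw sagged >5%: heat is trapped and the chip is throttling")
else:
    print("draw held steady: the cooling is keeping up")
```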

As for the AIO: if you really must go AIO, I suggest you check out the customizable ones like the Eisberg etc., or better yet the Apogee Drive II, which uses a really powerful pump.
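And since the scratch/RAID question came up in #4, here's a crude sequential-write sanity check. It's only a sketch, not a replacement for a real benchmark, and the path is a hypothetical scratch volume; point it wherever yours lives and make sure you have ~8GB free.

```python
# Crude sequential-write test for a scratch volume. The path is hypothetical.
import os
import time

PATH = r"D:\scratch\write_test.bin"
CHUNK = 64 * 1024 * 1024   # 64 MiB per write
TOTAL = 8 * 1024 ** 3      # 8 GiB total, to blow past DRAM caches

buf = os.urandom(CHUNK)
start = time.perf_counter()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    os.fsync(f.fileno())   # make sure it actually hit the drive
elapsed = time.perf_counter() - start
os.remove(PATH)

print(f"~{TOTAL / elapsed / 1e9:.2f} GB/s sequential write")
```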
 


Sweet, I was in touch with EK's customer support; they are releasing new products for their Phoenix line. Might hook up two 280mm dedicated to the cpu.

Thank you for the advice
 

Btw, you should avoid 140mm-size rads; go with the 120mm fan/rad size instead. 140mm fans are not as strong as 120mm fans: they have lower static pressure, which makes the larger rad area moot. This was proven years ago by Martinsliquidlab and, well, physics. And if you are building and choosing parts, you REALLY need to get 360mm to 480mm rads, not a bunch of 280mm rads. The more small rads you string together, the more losses you introduce.
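To put rough numbers on it, here's the back-of-the-envelope math, assuming the old rule of thumb of very roughly 100W shed per 120mm rad section at a ~10c water-to-air delta with moderate fans. Real capacity depends on rad thickness, fin density, and fan speed (Martinsliquidlab has the measured curves), so treat this strictly as a sketch:

```python
# Rough rad-capacity sketch. ~100 W per 120 mm section at ~10 C water-air
# delta is an assumed rule of thumb, not measured data for any specific rad.
W_PER_120MM_SECTION = 100

heat_w = 280 + 2 * 280  # roughly: TR under render load + two GPU blocks

configs = {
    "single 280mm AIO": 280 / 120,  # and its 140mm fans have less pressure
    "two 360mm rads": 2 * 3,
    "two 480mm rads": 2 * 4,
}

for name, sections in configs.items():
    cap = sections * W_PER_120MM_SECTION
    verdict = "has headroom" if cap > heat_w else "will heat-soak over a long render"
    print(f"{name}: ~{cap:.0f} W capacity vs ~{heat_w} W load -> {verdict}")
```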
 

The problem I have is my friend isn't as tech-savvy as I am, and I have to work around that to give him the maximum amount of power with the least amount of maintenance. I am thinking the 3960x might be the ideal processor if I can solve this heat issue. I live in Canada, so everything is more expensive, and that's a problem.

Also, what case would you suggest without TG or RGB?
 
Assuming cooling isn't an issue, what is the max 24/7 peak core voltage you would set in ryzen master for a 3970x during manual overclocking?
 
If you use one RTX Titan and you render using CUDA or OpenCL, a Threadripper might even be useless.
You can go with a 3900X and 128GB of RAM, or 64GB may be enough (it's difficult to find compatible 32GB UDIMMs), and no AIO. I'd even recommend ECC RAM; no need for high-speed ram, 2666 ECC would be enough. If he's worried 64GB isn't enough, he can grab a 480GB U.2 Optane and put a huge pagefile on it, plus PrimoDisk to cache the SSDs from the Optane (I'd rather put it in RAID 1 using two different brands). A cheaper solution would be a 512GB M.2 Samsung 970 Pro for the same purpose. To make better use of the Titan, you need another small graphics card for display and the Titan in compute-only mode (TCC mode). You will keep the full 24GB of memory on the Titan for compute and your computer won't freeze during rendering.
It is even possible to use a 2700 + 64GB of ECC RAM with a Titan RTX on a micro-ATX B450 motherboard if you limit yourself to one graphics card (and on some micro-ATX boards even add a single-slot card for display, like a small Quadro). As long as it's using CUDA, the CPU only needs to be good enough to feed data to the graphics card; it's not really going to affect the rendering speed. The RTX Titan is something like 6 times faster than a Threadripper 3970X in the worst case, and the RTX Titan doesn't even benefit from PCIe 4.0 (it's a PCIe 3.0 card). Sometimes you need to move the case somewhere else, and at micro-ATX size that's way easier; some of those cases even have a handle for that purpose.
To use multiple Titan RTX cards from the beginning, you can even grab an "old tech" Threadripper 2000, like the much cheaper 2950X, or even the 1920X, which is very cheap now, on an X399 motherboard, and use the difference to buy the second RTX Titan and bump the RAM to 128GB as 8x16GB 2666 ECC. And again, buy a small graphics card for display to leave the two Titans fully on compute. This would be so much more clever for rendering on the same budget: that would be 12 times the speed, at worst, against the 3970X doing the job alone. In some situations, by using OptiX together with CUDA, you can easily get more than 20 times the speed of that CPU.
If you use GPUs for rendering, don't think about the speed of the CPU; just pick whatever motherboard and CPU is enough to handle the graphics cards and nothing more (meaning several full PCIe x16 or x8 slots). Intel HEDT is fair enough, even older Intel HEDT.
If you want to use a 3970X with high-end graphics, you should probably wait for Ampere, which will likely support PCIe 4.0 if not better. Not sure what AMD is up to, but AMD lacks CUDA while Nvidia also supports OpenCL. Don't look at Big Navi (yet).

If you use the graphics cards to render, what's needed on the CPU side is not so much lots of high-end cores as plenty of RAM to hold the scene you're working on in your software, which may be Blender or 3ds Max or something else. Some of those programs are not even multi-threaded (so much for your cores, when a Pentium 4 would be enough).
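One way to sanity-check that on your own scene: watch whether the render actually loads the GPU rather than the CPU. A quick sketch that polls nvidia-smi once a second while a job runs (the query flags are standard nvidia-smi options):

```python
# Poll GPU load and power once a second during a render to confirm the GPU,
# not the CPU, is doing the work.
import subprocess
import time

for _ in range(30):  # watch for 30 s; run this while the job is going
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,power.draw",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # e.g. "0, 98 %, 271.33 W" when CUDA has the card pegged
    time.sleep(1)
```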
 

I went with the Threadripper platform because the AM4 socket is getting an upgrade this year and I wanted to futureproof the system a bit; if he wants to upgrade to a 3990x he can do it no problem.

How would you assign software to use the Titan RTX only for compute and not display, with the lower-end card doing the opposite?
 

Yer better off ignoring...
 
Assuming cooling isn't an issue, what is the max 24/7 peak core voltage you would set in ryzen master for a 3970x during manual overclocking?

The max FIT voltage for high-current loads is 1.325v, though keep in mind that is a generalization; each chip is different and that limit will vary with the silicon. When I tested all-core overclocks, 4.2ghz, which iirc was around 1.2v-ish, was pushing the cooling to the limit, so for practicality that was the ceiling for all-core oc'ing. An all-core overclock will net some gains if you can keep it cool and the cores loaded, but ofc it is not as practical across a wide array of loads: you give up speed in lightly threaded workloads, ie. Adobe crap, but gain in a long cpu render, etc. And given the cost of said cpu, we just left it at stock since it slays everything thrown at it anyway.
 
I don't think AM4 will be replaced by AM5 this year; most probably next year, and most probably at the end of 2021. Most of the time you buy a CPU and motherboard and change both. AM4 has had a nice life, but don't count on needing upgradability at the CPU level. What's most interesting is that if your motherboard ever fails, you can still buy another one, which is difficult when, as with Intel, some platforms change after 6 months.
Furthermore, don't count on the current TRX40 platform staying socket-compatible any longer than AM4 does. The main change is going to come with DDR5 support, which will require new memory slots. AMD used to support several types of RAM on the same kind of socket in the past, including motherboards with slots for two kinds of RAM, but I doubt that is going to happen now, and most probably not on the Threadripper platform, which needs lots of slots of the same kind.

There is TCC mode (Tesla Compute Cluster) on the Titan line. It is available on all Titan cards, even older ones; in this mode they act like cheap Tesla cards. You need to put your less expensive graphics card (a low-end Quadro is a good choice) as the main card on the motherboard. This depends on the motherboard.
TCC mode:
https://docs.nvidia.com/gameworks/content/developertools/desktop/nsight/tesla_compute_cluster.htm
Related to the Titan RTX:
https://devtalk.nvidia.com/default/...roblems-with-setting-a-titan-rtx-to-tcc-mode/

Sorry, I googled: it's all I've got. I don't own a Titan RTX; I owned a previous model and this was the way.
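If the docs above are right, the switch itself is done with nvidia-smi from an elevated prompt, followed by a reboot. Something like the sketch below, where the GPU index for the Titan is an assumption (check the -L listing for yours, and keep the display on the WDDM card):

```python
# Sketch: put the Titan into TCC (compute-only) driver mode on Windows.
# Run elevated; the display must stay on the other (WDDM) card. Index 1 for
# the Titan is an assumption -- confirm with the -L listing first.
import subprocess

subprocess.run(["nvidia-smi", "-L"], check=True)       # list GPUs and indexes
subprocess.run(["nvidia-smi", "-i", "1", "-dm", "1"],  # -dm: 0 = WDDM, 1 = TCC
               check=True)
print("Reboot for the driver-model change to take effect.")
```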
About the 3990x, I'm not so sure it is a great upgrade for a workstation. It has more cores but runs them slower to stay at a TDP close to the 3970X's. And if he ever considers upgrading to the 3990X, he may have a better option in a future 4970X in less than a year (Zen 3 is all but announced; Lisa Su, AMD's CEO, is already talking about it more than about Big Navi), and I'm rather sure it is going to be compatible with TRX40 and take full advantage of it. There may also be a future chipset that runs the 3990X at a higher TDP and doubles the memory channels.
But if you look at the price of these things, the motherboard price hardly matters. In fact, I myself mostly weigh the time spent reinstalling everything, which is what I want to avoid the most.
 

It's cool thx for the info
 
Title was a typo, since fixed. It's an 8TB array. To my knowledge there are no 4TB gen4 drives out w/o getting into ludicrous enterprise drives. Then again, 1.5k is enough spent in that area.


The Sabrent Amazon store now lists 4TB m.2 NVMe 4.0 drives for sale for 900 Canadian dollars as of a few weeks ago. Phison also had 4TB and 8TB m.2 80mm PCIe 3.0 and 4.0 drives at CES, coming this quarter or next.
Either way, your setup blows mine right out of the water!! Congrats.
 
I've never actually done a custom loop before, but may have to since there are no AIO options for the 3960x... how hard is it to set up a loop? I may just use the one that thesmokingman did here since I'm using the same form factor...
 

It's not super hard, but it can be rather involved. I'd research it a bit and figure out how you're going to monitor the cooling system. Watch some tutorial vids, etc. Don't try to do anything fancy on your first go; focus on reliability and performance, ie. get the right parts for the job and keep it simple.

Btw, you might want to look at the modular AIO setups like the Eisbaer or the Swiftech Apogee Drive II. Swiftech has a TR4 mount for the AP II. The AP II is as close to a full loop as you are going to get, especially if you stick a 35X pump in it.
 
Why do you assume gpu rendering as the only option?
I'm not. I'm in the unfortunate position of choosing one over the other, being restricted by budget considerations. All the signs are pointing to gpu, but there's interference caused by recent launches, like the 3990x from AMD.
That said, I ask again: is it smarter to invest in CPU or GPU rendering today?!
 

You are looking at it wrong, and I'm not even sure why one needs to ask this question. Money dictates what one does, and by that I mean the money one gets paid to complete any project. That is dictated by the tools needed to complete a paying job, and those tools vary in their use of cpu and gpu rendering. This build was built for both cpu and gpu rendering because both are used. If you don't know what your apps demand at this point, you have some things to figure out.
 
Update: I have the rmonster back in the lab for a few days to do some upgrades and maintenance. Swapping in a pair of HWL GTX 480 rads and an EK 250mm reservoir. The system ram was recently doubled to 128GB, so I'll dig into the memory timings as well.
 
Nemesis GTX rads with fans mounted.


Oh the carnage...

 
U gonna get 3090's?

Yea, eventually; probably next spring-ish though. No hurry atm: waiting to see how the production apps update for the new gpus, and also waiting on the work Redshift is doing on the AMD side of things.
 
Here's the new reservoir setup. It's the Bitspower DDC res/pump combo. At first I tried the EK X3 res, but quickly pulled that pos and returned it. There's a serious design flaw with that res tube: it only holds on with a few threads and there is hardly any acetal material, so with too much pressure the res pops off. Now imagine that with a tube full of water! WTF were they thinking? Anyway, the BP setup is solid and with it the system is back to full stability. Now to tweak it some more.



Over 1kW whilst running The Witcher 3 at 6.5MP at 125fps, damn.



I got the system to run the ram at 3400MHz with CAS 16 timings at 128GB density. Primed it cpu-only for roughly an hour at those settings. 3600MHz was a no-go btw. Considering how long this thing takes to boot, I don't know if I want to spend the time investigating 3600MHz at 128GB ram density...
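For anyone following along, here's the arithmetic behind pairing those clocks, as a quick sketch: DDR4 "speed" is in megatransfers per second, the real memory clock is half that, and on Zen 2 you want FCLK to match the memory clock 1:1 or you eat a latency penalty.

```python
# Zen 2 memory-clock arithmetic: MEMCLK = DDR rate / 2, and the 1:1 sweet
# spot has FCLK equal to MEMCLK.
def fclk_target(ddr_mt_s: int) -> int:
    return ddr_mt_s // 2  # DDR transfers twice per clock

for speed in (3200, 3400, 3600, 3800):
    print(f"DDR4-{speed}: MEMCLK {speed // 2} MHz -> set FCLK to {fclk_target(speed)} MHz")
# DDR4-3400 -> 1700 MHz FCLK, which is exactly the pairing above; 3800 needs
# a 1900 MHz fabric, which not every chip will do.
```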
 
Came out nice, man! I run 2x 480 rads also; I would hate to try and run with much less rad. My 3960 can suck some juice when it's running 4.4GHz Prime95... What's your idle power? Mine runs at about 250-280w just sitting in Windows... Also, yeah, they do take a minute to boot, but they are such kick-ass setups.
 

Have you tried tuning your ram on TR yet?
 
Yeah, I get one notch under 1900 imc/ram at cl14 and tight timings. I get about 64ns and around 110GB/s read/write if I remember correctly; I'll get a ss when I get home tonight.

What is your ram density? I haven't been able to run very tight timings since going over 64GB, and now at 128GB it's so freaking slow, but it is what it is. Be happy with 3400MHz and 1700MHz FCLK, I guess. I'm currently trying to drop the latency below 80ns.
 
Can this run Win 7? Cuz you know, stuff runs 10% faster on Win 7 than 10...


I kid. I do wish I had a purpose for a build like this, but my days of needing, or even wanting, a beast like this are waning. I can still appreciate a decent build though, and you reaffirm my lack of faith in EK product quality.
 


Haha, lmao @ windoze 7!

Regarding EK: yea, the X3 res is a total POS, there's no other way to put it. I only got it because the X2 res was such a solid res and I just assumed the RGB version would be the same. How wrong I was; they really went for RGB over logical design with the X3, smh. The BP res/pump combo, on the other hand, is pretty stout; I'm so glad it was in stock for Prime shipping. I need to button this rig back up and get it back into the field. The guy who works on this machine is dying to get it back. He's working on a rental 3970x system and it apparently sucks; it's air-cooled. Though I would say that this rig, on water and given its state of tune, is not a fair comparison to a bone-stock workstation. But for him the rental worked out ok given the downtime, even though the animation-station rental techs have no clue about setting air fan profiles in relation to Zen's context-switching speed.
 
I still don't understand air on a TR. I bet he will be happy to have this one back; it's a nice box.
 
Hi thesmokingman,

You mentioned earlier that you had added a fan to your Gigabyte AORUS Gen4 AIC adaptor. How did you mount it, please?

I am having issues with mine with the enclosure on. If I take it off, it works fine, but I am worried it will eventually overheat.

Appreciate your time,

Henrik
 

You have to take off the heatsink that is mounted to the cover. Peel back the thermal pad to reveal the screws that attach the heatsink to the cover. Once it's removed, you have to attach the heatsink to the card somehow; I used zipties. You'll see in my pics that, hmm, only one ziptie is visible, but I did use two. Then I used Goop glue and glued a fan to it. For the final build I used an 80mm Noctua pwm fan.
 