Build: 3970x, dual 2080ti, 8TB m.2 RAID = Render Monster

gwertheim

n00b
Joined
Jan 21, 2020
Messages
53
Hello thesmokingman

I am putting together a similar rendering/editing build for a friend but replacing the 2080ti with a single RTX Titan (for now) and I am planning on using an ATX or EATX motherboard. I am not OCing anything and I am going with an AIO instead of a custom loop so maintenance will be less. I just have a couple of questions about your build.

1. My friend wants 64 gigs of ram, should I talk him into 128 or 256 gigs?
2. What software and plugins are you using?
3. Would a 280mm AIO be able to take the heat of 8 hr rendering sessions with a 3960x or 3970x?
4. Do you have a scratch SSD drive? Is it worth it?
5. What would you change about your build to make it a reliable workstation?
 

thesmokingman

Supreme [H]ardness
Joined
Nov 22, 2008
Messages
5,972
1. RAM speed can make a big difference in performance on TR3, huge even. Put your DIMMs in a different machine to run DRAM Calculator, since DRAM Calculator doesn't support TR3 yet. We started with 64GB because we chose a very tightly timed RAM kit and it was rather expensive; if it goes on sale again we'll add another 64GB. Aside from speed, more RAM is always better in a production rig, though 64GB hasn't held this rig back as far as I know. Capacity is probably the weakest link, then the cooling, in my opinion.
2. The apps used run the whole gamut from Adobe to Houdini to Redshift. By the way, AMD just released some plugins for Houdini. Also, try the Nvidia Studio Driver.
3. A 280mm AIO is a bit small in my opinion; it would barely cover one of your two blocks. This rig runs two 60mm-thick 480mm rads, and it is maxed out when all three devices (CPU/GPU/GPU) are working. The water temp delta is 10C, and deep into a 5-hour render water temps hit their max of 38C. Idle water temps are 27-28C with a typical ambient of 23C, to give you an idea. Your choice of case determines what rad sizes you can fit, so choose wisely. I'd suggest two 360mm rads plus some headroom, especially if you don't want to end up in an overloaded heat situation, i.e. a steady state where heat cannot be removed from the system and is stuck there, which means your CPU is throttled. Also keep in mind that above 70C the boost clocks are reduced, i.e. reduced performance. This state of cooling can be really misleading: the temperature stops climbing precisely because the chip is throttling. An easy way to show this is with a Kill A Watt, or at the least HWiNFO64 monitoring power draw. First establish a baseline reading of power draw (the first 5 minutes), then note whether the power rises or drops as time passes. Granted, if you run a bench you need it to run for an hour or more, similar to a real-world workload. If the power draw drops, that means heat is trapped and the system is throttled; we want the power draw to stay steady through the job. You want the longest stretch of high-efficiency rendering, not just 5 hours of throttling. You'll have to weigh the pros and cons of it all. In this rig's case, I knew it was a compromise going in. Yeah, "only" two 480mm rads is a compromise, lmao, but it is.
4. Yes, one Gen4 drive is used as a dedicated scratch drive for Adobe. I don't know if it's worth it, as it's what was requested, but I guess it is de rigueur with Adobe. The real meat of the storage system is the 8TB Gen4 M.2 RAID array, though; that's what holds the projects and assets during the job. It obviously removes drive speed as a limiting factor.
5. To improve the current machine, it would be the RAM capacity and the cooling. I would have plumbed the 1080mm external rad from post 100 into the loop, which would be a huge boost to lowering the water temp delta.

As for the AIO: if you really must go AIO, I suggest you check out the customizable ones like the Eisbaer etc., or better yet the Apogee Drive II, which uses a really powerful pump.
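To make the power-draw check in point 3 concrete, here's a rough sketch. The function name, window size, and the 10% drop threshold are placeholders of mine, and the wattage samples stand in for readings you'd pull from a Kill A Watt or an HWiNFO64 log:

```python
# Hypothetical helper: detect thermal throttling on a long render by comparing
# sustained power draw against a baseline taken at the start of the run.
# If the loop is heat-soaked, the chip throttles and package power sags.

def is_throttling(watts, baseline_samples=5, drop_threshold=0.10):
    """Return True if mean power after the baseline window has fallen
    more than drop_threshold (10% by default) below the baseline mean."""
    if len(watts) <= baseline_samples:
        raise ValueError("need more samples than the baseline window")
    baseline = sum(watts[:baseline_samples]) / baseline_samples
    rest = watts[baseline_samples:]
    sustained = sum(rest) / len(rest)
    return sustained < baseline * (1 - drop_threshold)

# A healthy loop holds power steady; a heat-soaked one sags over the run.
steady = [280, 282, 279, 281, 280, 278, 281, 280, 279]
soaked = [280, 282, 279, 281, 280, 250, 235, 220, 210]
print(is_throttling(steady))  # False
print(is_throttling(soaked))  # True
```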
 

gwertheim

n00b
Joined
Jan 21, 2020
Messages
53

Sweet. I was in touch with EK's customer support; they are releasing new products for their Phoenix line. I might hook up two 280mm rads dedicated to the CPU.

Thank you for the advice
 

thesmokingman

Supreme [H]ardness
Joined
Nov 22, 2008
Messages
5,972
Btw, you should avoid 140mm rads; go with 120mm fan/rad sizes instead. 140mm fans are not as strong as 120mm fans. This was proven years ago by Martins Liquid Lab, and, well, physics. Even though a 140mm rad is larger, which is good, the fan has lower static pressure, making the larger rad size moot. And if you are building and choosing parts, you REALLY need 360mm to 480mm rads, not a bunch of 280mm rads. The more small rads you have, the more losses you introduce.
 

gwertheim

n00b
Joined
Jan 21, 2020
Messages
53
The problem I have is that my friend isn't as tech savvy as I am, and I have to work around that to give him the maximum amount of power with the least amount of maintenance. I am thinking the 3960X might be the ideal processor if I can solve this heat issue. I live in Canada, so everything is more expensive, and that's a problem.

Also, what case would you suggest without TG or RGB?
 

mikeo

Gawd
Joined
May 17, 2006
Messages
622
Assuming cooling isn't an issue, what is the max 24/7 peak core voltage you would set in Ryzen Master for a 3970X during manual overclocking?
 

Jandor

Gawd
Joined
Dec 30, 2018
Messages
530
If you use one RTX Titan and you render using CUDA or OpenCL, a Threadripper might even be useless.
You can go with a 3900X and 128GB of RAM eventually, or 64GB may be enough (it's difficult to find compatible 32GB UDIMMs), and no AIO. I'd even recommend using ECC RAM; there's no need for high-speed RAM, 2666 ECC would be enough. If he's worried that 64GB isn't enough, he can grab a 480GB Optane U.2 drive, put a huge pagefile on it, and use something like PrimoCache to cache the SSDs from the Optane (I'd rather run it as RAID 1 using two different brands). A cheaper solution would be a 512GB M.2 Samsung 970 Pro for the same purpose. To make better use of the Titan, you need another small graphics card for display, with the Titan in compute-only mode (TCC mode). You will maximize the 24GB of RAM on the Titan for compute, and your computer won't freeze during rendering.
It is even possible to use a 2700 + 64GB of ECC RAM with a Titan RTX on some micro-ATX B450 motherboard if you limit yourself to one graphics card (and on some micro-ATX boards even a single-slot graphics card for display, like a small Quadro). As long as it's using CUDA, the CPU needs to be good enough to handle data exchange with the graphics card, but it's not really going to count toward rendering speed. The RTX Titan is something like 6 times faster in the worst case than a Threadripper 3970X, and the RTX Titan doesn't even benefit from PCIe 4.0 (it's a PCIe 3.0 card). Sometimes you need to move the case to another place, and at micro-ATX size that's way easier; some of those cases even have a handle for that purpose.
To use multiple Titan RTX cards from the beginning, you can even grab an "old tech" Threadripper 2000, like the much cheaper 2950X, or even the 1920X, which is very cheap now, on an X399 motherboard, and use the difference to buy the second RTX Titan and up the RAM to 128GB as 8x16GB 2666 ECC. And again, buy a small graphics card for display to leave the two Titans for full compute. This would be much more clever for rendering on the same budget: that would make 12 times the speed, at worst, against the 3970X doing the job alone. In some situations, by using OptiX together with CUDA, you can easily get more than 20 times the speed of that CPU.
If you use the GPU for rendering, don't think about the speed of the CPU; think about which motherboard and CPU are enough to handle the graphics cards and nothing more (meaning several full PCIe x16 or x8 slots). Intel HEDT is fair enough, even older HEDT from Intel.
If you want to use a 3970X with high-end graphics, you need to wait for Ampere, which will probably support PCIe 4.0 if not better. Not sure what AMD is up to, but it lacks CUDA, while Nvidia supports OpenCL too. Don't look to Big Navi (yet).

If you use the graphics cards to render, what is needed on the CPU side is not so much a high-end part with plenty of cores as bigger RAM to hold the scene while you work in your software, be it Blender or 3ds Max or something else. Some of that software isn't even multi-threaded (so much for your cores, when a Pentium 4 would be enough).
 

gwertheim

n00b
Joined
Jan 21, 2020
Messages
53
I went with the Threadripper platform because the AM4 socket is getting an upgrade this year and I wanted to future-proof the system a bit; if he wants to upgrade to a 3990X, he can do it no problem.

How would you assign software to use the Titan RTX only for compute and not display, and vice versa with the lower-end card?
 

thesmokingman

Supreme [H]ardness
Joined
Nov 22, 2008
Messages
5,972
Yer better off ignoring...
 

thesmokingman

Supreme [H]ardness
Joined
Nov 22, 2008
Messages
5,972
The max FIT voltage for high-current loads is 1.325V, though keep in mind that's a generalization; each chip is different, and that limit will vary with the silicon. When I tested all-core overclocks, 4.2GHz, which was IIRC around 1.2V-ish, was pushing the cooling to the limit, so for practicality that was the ceiling for all-core OCing. An all-core overclock will net some gains if you can keep it cool and the cores loaded, but of course it is not as practical across a wide array of loads: you give up speed in lightly threaded workloads, i.e. Adobe crap, but gain in a long CPU render, etc. And given the cost of said CPU, we just left it at stock since it slays everything thrown at it anyway.
 

Jandor

Gawd
Joined
Dec 30, 2018
Messages
530
I don't think AM4 will be replaced by AM5 this year; most probably next year, and most probably at the end of 2021. Most of the time you buy CPU and motherboard and change both. AM4 has had a nice life, but don't count on needing something upgradable at the CPU level. What is most interesting is that if your motherboard ever fails, you can still buy another motherboard, which is difficult when, as with Intel, some platforms change after 6 months.
Furthermore, don't count on the current TRX40 platform staying compatible at the socket level any longer than AM4 does. The main change is going to come with DDR5 support, which will need new memory slots. AMD used to support several types of RAM over the same kind of socket in the past, including motherboards with dual RAM support. I doubt that is going to happen now, and most probably not on the Threadripper platform, which needs lots of slots of the same kind.

There is TCC mode (Tesla Compute Cluster) on the Titan line. This is specific to all Titan cards, even older ones; in this mode they act like cheap Tesla cards. You need to put your less expensive graphics card (a low-end Quadro is a good choice) as the main card on the motherboard. This depends on the motherboard.
TCC mode:
https://docs.nvidia.com/gameworks/content/developertools/desktop/nsight/tesla_compute_cluster.htm
Related to the Titan RTX:
https://devtalk.nvidia.com/default/...roblems-with-setting-a-titan-rtx-to-tcc-mode/

Sorry, I googled: it's all I've got. I don't own a Titan RTX. I owned a previous model, and this was the way.
About the 3990X, I'm not so sure it is a great upgrade for a workstation. It has more cores but runs slower to stay at a TDP close to the 3970X. And if he ever considers upgrading to the 3990X, he may have a better option in a future 4970X in less than a year (Zen 3 is all but announced; Lisa Su, AMD's CEO, is already speaking of it more than Big Navi), and I'm rather sure it is going to be compatible with TRX40 and take full advantage of it. There may also be a future chipset that runs the 3990X at a higher TDP and doubles the memory channels.
But if you look at the price of these things, the motherboard price hardly matters. In fact, I myself only consider the problem in terms of the time spent reinstalling everything, which is what I want to avoid most.
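For what it's worth, the TCC switch itself is a one-liner with nvidia-smi from an elevated prompt on Windows. The device index (1 here) is an assumption that depends on slot order, so list the cards first:

```shell
# List GPUs with index, name, and current driver model (WDDM or TCC)
nvidia-smi --query-gpu=index,name,driver_model.current --format=csv

# Put the Titan (assumed to be index 1; the display card stays on WDDM) into TCC
nvidia-smi -i 1 -dm TCC

# A reboot is required; "-dm WDDM" on the same index switches it back
```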
 

gwertheim

n00b
Joined
Jan 21, 2020
Messages
53
It's cool thx for the info
 
Joined
Jun 10, 2004
Messages
3,657
Title was a typo, since fixed. It's an 8TB array. To my knowledge there are not 4TB gen4 drives out w/o getting into ludicrous enterprise drives. Then again, 1.5k is enough spent in that area.

The Sabrent Amazon store now lists 4TB M.2 NVMe 4.0 drives for sale for 900 Canadian dollars as of a few weeks ago. Phison also had 4TB and 8TB M.2 80mm PCIe 3.0 and 4.0 drives at CES, coming this quarter or next.
Either way, your setup blows mine right out of the water!! Congrats.
 

MattyS22

n00b
Joined
Jan 6, 2020
Messages
22
I've never actually done a custom loop before, but I may have to, since there are no AIO options for the 3960X... how hard is it to set up a loop? I may just use the one thesmokingman did here, since I'm using the same form factor.
 

thesmokingman

Supreme [H]ardness
Joined
Nov 22, 2008
Messages
5,972
It's not super hard, but it can be rather involved. I'd research it a bit and figure out how you're going to monitor the cooling system. Watch some tutorial vids, etc. Don't try to get fancy on your first go; focus on reliability and performance, i.e. get the right parts for the job and keep it simple.

Btw, you might want to look at the modular AIO setups like the Eisbaer or the Swiftech Apogee Drive II. Swiftech has a TR4 mount for the AP II. The AP II is as close to a full loop as you are going to get, especially if you stick a 35X pump in it.
 

coccosoids

n00b
Joined
Aug 27, 2015
Messages
7
Why do you assume gpu rendering as the only option?
I'm not. I'm in the unfortunate position of choosing one over the other, being restricted by budget. All the signs point to GPU, but there's interference from recent launches, like the 3990X from AMD.
That said, I ask again: is it smarter to invest in CPU or GPU rendering today?
 

thesmokingman

Supreme [H]ardness
Joined
Nov 22, 2008
Messages
5,972
You are looking at it wrong. I'm not even sure why one needs to ask this question. Money dictates what one does, and by that I mean the money one gets paid to complete a project, which in turn dictates the tools needed to complete the paying job. Those tools vary in their use of CPU and GPU rendering. This build was obviously built for both CPU and GPU rendering, because both are used. If you don't know what your apps demand at this point, you have some things to figure out.
 