Build: 3970X, dual 2080 Ti, 8TB M.2 RAID = Render Monster

Why ECC for a rendering workstation? Note: I'm not asking what ECC is or does.

Also, for Optane, isn't StoreMI not that great when it comes to caching? Wouldn't he need to use something like PrimoCache?
Why suggest StoreMI and not PrimoCache? Because it's free instead of some tens of dollars? I'm not even sure StoreMI is compatible with RAID. Get the real thing, aka PrimoCache. And a YouTube video suggests it works great with Optane on an AMD CPU+MB.
 
Why suggest StoreMI and not PrimoCache? Because it's free instead of some tens of dollars? I'm not even sure StoreMI is compatible with RAID. Get the real thing, aka PrimoCache. And a YouTube video suggests it works great with Optane on an AMD CPU+MB.
I didn't suggest StoreMI over PrimoCache. In fact, I suggested the opposite.
 
My friend is out of the country, so I can't get my hands on the 2080 Tis. It will be a day or two before I can throw blocks on them. For now, I've validated the memory, which is 3600MHz 16-16-16 at stock CPU speeds. I'm priming it right now at [email protected] (loaded), and it's kind of crazy watching the power draw climb. Edit: even after dropping it down 50MHz as well as lowering some volts, the draw numbers are getting large.

Mmm, temps peak at 76C with Prime95 AVX at max load (not in test mode) and water temps are chilling at 27C. Damn, these ML120s can get loud, though they are running at 2.5K RPM. Since the water temp is low, the heat is all in the block. Going to lower the voltage some more and bring the fan PWM scaling down some so it's not a tornado in here, lol.
 
So how about running some render tests, CPU only, stock then overclocked?

Not quite there yet, though I did manage to run a Puget AE script. The bench result is on the rig, which is down right now while I'm gluing some HD fans and redoing the wiring, looms, etc.

20191215_163025.jpg
 
Cool.
Does your buddy have any existing projects you can pull and run?
My concern would be RAM use for whatever projects are running now.

There was another TR thread where they didn't get my points.
A build optimized for a given project, or slate of projects, changes once the next set of contracts gets signed.

The builds change, and so on and so forth.

PCIe reserves for future capture cards, NICs, storage, etc. are abundant.

So headroom to accommodate new invoices speeds up the velocity of work in a way that I find exciting and enjoyable.

I've spent enough time in studios where, at the end of a given product arc, I'll look over 12-24 main builds like this TR box, and up to 50 generic workstations feeding the mains.

It's crazy, but in smaller studios you get the same amount of gear or more, just with fewer people running clusters of it dedicated to project lines or workflows.
 
Do not touch StoreMI

WARNING

I'm literally saving you from calamity, disaster, and a failed friendship in the making. Trust me!!

Create a stable RAID 10 using a true NVMe RAID HBA, like the Broadcom or LSI NVMe cards.

Or use motherboard software-level RAID 10. Don't use RAID 0. And for the love of God, do not use tiered caching at all. It's like politicians: it sounds great behind the pulpit, but when it's allowed to run the office, it's a public disaster in the making.

This reply is for anyone reading.
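
Rough math for anyone weighing 10 vs. 0, assuming the 8TB array is 4x 2TB NVMe drives (my guess, not confirmed above): RAID 10 stripes across two mirrored pairs, so you get 4TB usable and the array survives any single drive failure. RAID 0 stripes across all four, so you get the full 8TB and roughly 4x the sequential throughput, but one dead drive kills the whole array and everything on it.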
 
I get the dual 2080 Tis tonight. While adding those into the loop, I'll be plumbing QDCs in as well, and maybe adding some casters.

Don't use StoreMI. RAID 0 is fine for the render drive; that's kind of the whole point of the bifurcation AIC.
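
For anyone unfamiliar with bifurcation AICs, a rough sketch, assuming a passive x16-to-quad-M.2 card like the ASUS Hyper M.2 (my assumption; the exact card isn't named above): the slot gets split into 4x x4 links, one per drive. On PCIe 4.0 each x4 link is good for about 7.9GB/s on paper, so a 4-drive RAID 0 can in theory scale sequential reads well past 20GB/s, though the drives themselves will be the real limit. That's exactly the scratch/render-drive use case where RAID 0's fragility is an acceptable trade.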
 
So are you putting it one county over or two when you put it under load?

;)

There's no need to run them past 1100 RPM. They go to 3700, which is crazy.

I run them via a Corsair iCUE profile, so they're not based on the motherboard's CPU-load throttle-up/down stuff.

I have a Corsair Commander Pro, which is a very good PWM fan controller.

I'll make a video later with them running quietly.

I hand-built silent, workstation, and gaming profiles, and these are hands down the absolute BEST radiator fans I have ever laid my hands on.
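
If anyone wants to copy the idea, here's a hypothetical coolant-temp-based curve (not my actual profile): with the Commander Pro reading a loop temp sensor, something like 27C -> 600 RPM, 30C -> 850 RPM, 33C -> 1100 RPM keeps the ML120s under that 1100 RPM ceiling and ties fan speed to water temp instead of CPU load spikes.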
 
I think you're gonna "Jizz in My Pants" -Lonely Island

when you get those dual cards running.

I would love to see some rendering benches that can use SLI. Don't forget your NVLink bridge; they are not included with the GPUs.

They aren't cheap either.
 
I think you're gonna "Jizz in My Pants" -Lonely Island

when you get those dual cards running.

I would love to see some rendering benches that can use SLI. Don't forget your NVLink bridge; they are not included with the GPUs.

OMFG, finding 2-slot Quadro 6K or 8K bridges is a PITA. I will get one eventually, but SLI is secondary since it's not important for rendering. It would be nice for game benching, though.

Update: got the blocks on but am waiting for the backplates. Will use a Scalar bridge also. It's annoying, though, that the CPU block is ARGB but the GPU block is 12V. Now I need more RGB cables, smh. Will also plumb QDCs to the rads; then one could pull both rads as one unit to access the motherboard/cards, etc.
 
The bridges are a problem, always have been.
Test before and after to see what the real benefits are, because I have my doubts based on the rigs I've built.
 
Getting late here, but I ran some Time Spy and Fire Strike. Darn, just realized it's running the RAM at 3200 instead of 3600. Oh well, I'll fix it tomorrow.

https://www.3dmark.com/spy/9752103
https://www.3dmark.com/fs/21270633

Also, your FS score will be low because you're not NVLinking your cards. But I think you said you were not going to SLI.

Here's to hoping NV actually releases checkerboard rendering soon. I would toss a 2060 alongside my 2080 Ti so I could get more FPS while ray tracing. Let the 60 supplement from behind.
 
Also, your FS score will be low because you're not NVLinking your cards. But I think you said you were not going to SLI.

Here's to hoping NV actually releases checkerboard rendering soon. I would toss a 2060 alongside my 2080 Ti so I could get more FPS while ray tracing. Let the 60 supplement from behind.

I ordered one at a rip-off rate, shrugs. We don't need SLI on a production machine, but I'll use it for gaming. Hopefully it gets here soon so I can throw some benches at it.
 
Thanks, man. At this point, running a higher all-core overclock is possible, but the cooling is the limiting factor, that and of course the power draw. In SLI on V-Ray, it was pushing over 1100W, higher at the wall. 24/7 use will be with the CPU stock with PBO and the GPUs overclocked, for the best balance of heat/power vs. the cooling on hand.
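
Rough math on "higher at the wall," for anyone curious: wall draw = DC load / PSU efficiency. At a typical ~90% efficiency under that kind of load, 1100W of component draw works out to roughly 1100 / 0.9 = ~1220W from the outlet. The exact figure depends on where you land on the PSU's efficiency curve.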
 
My trusty old EVGA 1200W PSU was able to push 2 old Titans in SLI easily, something my Antec 1600W Platinum couldn't do.

Is the EVGA 1600W PSU that much better?
 
My trusty old EVGA 1200W PSU was able to push 2 old Titans in SLI easily, something my Antec 1600W Platinum couldn't do.

Is the EVGA 1600W PSU that much better?

I'm guessing you had a multi-rail PSU then. That has nothing to do with quality, just design. Single-rail PSUs like the EVGA G/P/T are great for high-power setups where you may not know what your exact power draw per component is. Multi-rail PSUs were kind of popular a few years back, but it seems not so much anymore. I gave up on multi-rail designs back then since I was a long-time quad-GPU user/bencher. With multiple high-power devices, it became a nightmare balancing what went on which rail, and so you'd get rails that were overtaxed and rails that were underused, etc.
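
A worked example of that balancing problem, with made-up but typical numbers: a 1200W multi-rail unit might split its 12V output into four rails with 30A OCP each, i.e. about 360W per rail. Put two 300W GPUs on the same rail and you trip OCP at 600W even though the PSU as a whole is barely half loaded, while a single-rail 1200W design would shrug it off.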
 
I'm guessing you had a multi-rail PSU then. That has nothing to do with quality, just design. Single-rail PSUs like the EVGA G/P/T are great for high-power setups where you may not know what your exact power draw per component is. Multi-rail PSUs were kind of popular a few years back, but it seems not so much anymore. I gave up on multi-rail designs back then since I was a long-time quad-GPU user/bencher. With multiple high-power devices, it became a nightmare balancing what went on which rail, and so you'd get rails that were overtaxed and rails that were underused, etc.
I'm still using the EVGA 1200. No more SLI, but I do have a [email protected] 2080 Ti and 6 hard drives. Takes it all in stride. Best PSU I've ever owned. The Antec 1600 Platinum was almost double the price and was junk.
 
My Corsair AX1200i can easily run anything thrown at it.

But before the days of huge kW PSUs, in past years I ran 4x R9 290X cards, lol. It would pop a breaker in my panel. I had to run the quad cards on a separate PSU.
 
My Corsair AX1200i can easily run anything thrown at it.

But before the days of huge kW PSUs, in past years I ran 4x R9 290X cards, lol. It would pop a breaker in my panel. I had to run the quad cards on a separate PSU.

Hehe, the good ole days!

r16Fkuz.jpg
 
Thanks, man. At this point, running a higher all-core overclock is possible, but the cooling is the limiting factor, that and of course the power draw. In SLI on V-Ray, it was pushing over 1100W, higher at the wall. 24/7 use will be with the CPU stock with PBO and the GPUs overclocked, for the best balance of heat/power vs. the cooling on hand.

There is a reason why studios run dedicated 20A circuits as their workloads increase.

At some point your single/dual-box TR user is going to have to contemplate subpanel upgrades if they go multi-GPU, or if TR4 draws 2x what TR3 does.

Storage arrays are easy to take to single phase.
 
There is a reason why studios run dedicated 20A circuits as their workloads increase.

At some point your single/dual-box TR user is going to have to contemplate subpanel upgrades if they go multi-GPU, or if TR4 draws 2x what TR3 does.

Storage arrays are easy to take to single phase.

Nah, don't need a 20A circuit. System draw doesn't approach the limits of 15A.
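
For reference, the numbers behind that: a 15A/120V circuit is 1800W, derated to 1440W under the usual 80% continuous-load rule; a 20A circuit is 2400W, or 1920W continuous. So one ~1200W-at-the-wall box fits fine on 15A, but stack a second render box, a NAS, and a charging UPS on the same circuit and the 20A argument starts making sense.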
 
Nah, don't need a 20A circuit. System draw doesn't approach the limits of 15A.

The system doesn't on its own; I was thinking NAS and other job-flow peripherals.

Add a 2nd box, or GPU-expand this build, with battery backups for everything.
 
My APC SUA1500 goes into overload status with a single 2080 Ti and a 3960X both at full load (never happens for me outside of load testing). Still works, though. No way I can afford another SUA2200, nor do I have room for it.
 
My APC SUA1500 goes into overload status with a single 2080 Ti and a 3960X both at full load (never happens for me outside of load testing). Still works, though. No way I can afford another SUA2200, nor do I have room for it.

You should look at the PR1500LCD; it's pretty well priced for a 1500W UPS. The problem with UPS labeling is they go by VA and not watts.
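
The conversion, for anyone shopping: real power in watts = VA x power factor. A lot of line-interactive units are built around a ~0.6-0.7 power factor, so a "1500VA" UPS may only support roughly 1500 x 0.65 = ~975W of actual load, which is exactly how a single 2080 Ti plus an HEDT CPU tips one into overload.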

The system doesn't on its own; I was thinking NAS and other job-flow peripherals.

Add a 2nd box, or GPU-expand this build, with battery backups for everything.

Ok, but that's not my problem.
 
My APC SUA1500 goes into overload status with a single 2080 Ti and a 3960X both at full load (never happens for me outside of load testing). Still works, though. No way I can afford another SUA2200, nor do I have room for it.

A 1500VA UPS can't even handle my [email protected] with a custom loop and an RTX 2080 Ti. There is no chance it will handle a modern HEDT CPU.
 