Having some trouble with routing on a quad GPU setup

First of all, my apologies if this is in the wrong forum.

Secondly, further apologies if this ends up as a crosspost because I don't get the answer I'm hoping for here on [H]ard.

Thirdly, let's get to the shit.

I'm putting together a quad 2080 Ti build. I'm running into two issues. One of them is important and the other one isn't. Here's the important one:

I'm trying to run this as two completely independent loops, with each loop running a pair of GPUs (the CPU is on its own third loop but it mostly sits idle anyway). THIS IS NOT IN ANY WAY INTENDED TO BE OPTIMIZED FOR GAMING SO PLEASE STFU ALREADY THANKS. Also, I'm not mining with this setup either - so STFU about that as well. Kindest regards, please do the needful.

Quad blocks don't work because of the two-loop thingy I mentioned just now. Twin double blocks don't really work either, because they're too wide to fit two of them side by side. That leaves me manually routing between cards. I initially hoped/dreamed that I could fit 90° fittings in between the card pairs. It turns out that this is impossible:
IMG_1382.jpg

But one of the nice things about Alphacool is that they also give you a 90°/vertical adapter with each GPU block. OMG I'M SAVED!!@!#!!..... right?

IMG_1385.jpg

LMAO wrong. Expletive. Heavy breathing. One more Expletive. Ok, we're good now.

I think the easiest solution is to just plumb the odd cards together on one loop and the even cards on a second loop. It will be ugly, but it will be effective (your mom joke goes here). For the primary usage, I really don't see an issue here since all cards will be worked evenly when the CUDAs are crunching. When I'm slacking off and playing games, however, cards 1 & 3 are likely going to be working harder since they'll be NVLink-connected. They'll be going through a 480x60mm radiator, so I guess WGAF, it'll be fine, but I'll still know on the inside that this isn't ideal. Sometimes, it's what's on the inside that counts (joke #2 about your mom goes here).

So, with that out of the way, any suggestions on how to clean this up and make it work as I had dreamed?


As a distraction, Issue #91 out of 100 is this:
IMG_1381.jpg

What's a good way to get these parallel with each other? The button heads down below are interfering with each other, and that ends up showing big time with the reservoirs.

IMG_1380.jpg

The tiny little Corsair 1000D doesn't have mounting on the back wall for two reservoirs. It has mounting for one and the rest of the space is taken up by grommets and cutouts for wires which don't exist.

Any suggestions?
 
Here's my thought: right-angle elbows pointing straight back from the rear ports of all the cards, then use a D-shaped bend between them. That would let you link cards 1+2 and 3+4. Might even look pretty cool if you're using hard tubing.
 
I apologize for not contributing, and no snark intended because I'm genuinely curious: if not for mining or gaming, what under the sun are you using all that GPU powah for? :eek:
 
Here's my thought: right-angle elbows pointing straight back from the rear ports of all the cards, then use a D-shaped bend between them. That would let you link cards 1+2 and 3+4. Might even look pretty cool if you're using hard tubing.

That's probably what I'll do, but with soft tubing. I'm still new enough that I'm not ready to take on hard tubing. That really would look cool though.

I apologize for not contributing, and no snark intended because I'm genuinely curious: if not for mining or gaming, what under the sun are you using all that GPU powah for? :eek:

I run a bunch of finite element sims (CFD, RF sims, mechanical FEA) which are CUDA-accelerated. Being able to run some of the simpler simulations in near real time is incredible, as is running the more complex ones in minutes instead of hours or days. That kind of quick feedback really changes the development process. Oh, and I also dabble in rendering but that's mostly just for making pretty images for my website and the occasional usage for a client.
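
Since people are curious about the CUDA side: the guts of most of these FE/CFD solvers end up being huge sparse linear solves, which is exactly the kind of work GPUs chew through. Just to illustrate the idea, here's a toy CuPy sketch, not my actual solver stack or anything from the commercial packages; the sizes, tolerances, and the solve_poisson_gpu name are all made up for this example:

```python
# Toy illustration only: a 1-D Poisson "stiffness" matrix assembled on the
# CPU, then a plain conjugate-gradient solve on the GPU via CuPy.
import numpy as np
import scipy.sparse as sp
import cupy as cp
import cupyx.scipy.sparse as cpsp


def solve_poisson_gpu(n=200_000, tol=1e-8, max_iter=20_000):
    # Tridiagonal matrix like a 1-D FE assembly would produce.
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A_cpu = sp.diags([off, main, off], [-1, 0, 1], format="csr")
    b_cpu = np.ones(n)

    # Move the problem onto the GPU.
    A = cpsp.csr_matrix(A_cpu)
    b = cp.asarray(b_cpu)

    # Plain conjugate gradient; every line below launches GPU kernels.
    x = cp.zeros(n)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if cp.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x


if __name__ == "__main__":
    x = solve_poisson_gpu()
    print("solution norm:", float(cp.linalg.norm(x)))
```

The real packages obviously do far more than that, but the matrix-vector products and dot products in that loop are where most of the wall-clock time goes, and that kind of arithmetic is exactly what the CUDA cores spend their time on.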
 
That's probably what I'll do, but with soft tubing.
I run a bunch of finite element sims (CFD, RF sims, mechanical FEA) which are CUDA-accelerated. Being able to run some of the simpler simulations in near real time is incredible, as is running the more complex ones in minutes instead of hours or days. That kind of quick feedback really changes the development process. Oh, and I also dabble in rendering but that's mostly just for making pretty images for my website and the occasional usage for a client.
That's awesome! Does the raytracing and deep learning hardware in the RTX cards help with your workload too?
 
That's awesome! Does the raytracing and deep learning hardware in the RTX cards help with your workload too?

They definitely will help in the future, but they're not directly supported just yet. The timeline for that varies by software package, naturally, but I'm guessing it will be within the next 12 months for most of what I use. I am a bit curious what the performance benefit will be. There are certain types of calculations which these cards (and the Volta cards) do extremely quickly, but I'm not sure how much that matters for FE workloads. For rendering, those capabilities enable AI that isn't used to do the same renders more quickly, but instead to make some very good guesses about what _not_ to calculate in order to get a very similar render in much less time. I'm not sure how that would translate to FE performance unless it can be used for something like a dynamically variable mesh. To be honest though, I don't really know much about the underpinnings of the tech. I just use it to make funny colored images.
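
Just to put some shape on "certain types of calculations these cards do extremely quickly": the tensor cores are essentially built for large half-precision matrix multiplies. None of my packages exercise them yet (as above, that support is still coming), so treat this as a toy CuPy timing sketch with arbitrary sizes, not a benchmark of my workload:

```python
# Toy comparison of FP32 vs FP16 matrix multiply on the GPU. On Volta/Turing,
# cuBLAS can route the FP16 case through the tensor cores. Sizes are arbitrary.
import cupy as cp

n = 8192
a32 = cp.random.rand(n, n, dtype=cp.float32)
b32 = cp.random.rand(n, n, dtype=cp.float32)
a16, b16 = a32.astype(cp.float16), b32.astype(cp.float16)

for label, a, b in (("fp32", a32, b32), ("fp16", a16, b16)):
    _ = a @ b                          # warm-up so timing skips one-time setup
    cp.cuda.Stream.null.synchronize()
    start, end = cp.cuda.Event(), cp.cuda.Event()
    start.record()
    c = a @ b
    end.record()
    end.synchronize()
    print(f"{label}: {cp.cuda.get_elapsed_time(start, end):.1f} ms")
```

Whether any of that helps an FE solver depends on whether the solver can live with reduced precision somewhere in its inner loops, which is exactly the part I can't answer.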
 
Still waiting on parts from Alphacool. At least their customer service is top notch and they definitely never fail to ship you parts that their website shows as in stock. There definitely is no sarcasm here.

Got tired of waiting. The first build is going to be routed as suggested earlier in the thread. But I also got annoyed at the lack of parts availability and decided to put this together. Going to send it out to a local machine shop and, if it fits and doesn't leak, I'll be going with a pair of them for my loops. The images can be hard to make sense of, but these are 2-slot SLI bridges with spacing/mounting to match up with the Alphacool GPX blocks on my GPUs. Apologies for being terrible at creating decent renders - I'm really bad at anything resembling art.

Anyway, this will run one block "backwards." As best I can tell, that has no meaningful impact outside of some bubbles on the initial purge/fill. My plan here is to use the QDCs to run the second block forwards (first block backwards) when filling, and then swap the QDCs and run them with the first block forwards from then onwards.

Waterblock ASY Rev 01 9.jpg Waterblock ASY Rev 01 10.jpg
 
There is no forwards or backwards on fullcover GPU blocks. They don't have the space (height) necessary for it. The only time forwards or backwards matters is on blocks with a center inlet directing flow onto a micro-pin array, which is most CPU blocks.
 
There is no forwards or backwards on fullcover GPU blocks. They don't have the space (height) necessary for it. The only time forwards or backwards matters is on blocks with a center inlet directing flow onto a micro-pin array, which is most CPU blocks.
My Phanteks GPU block definitely has a forward. At least, a flow direction is recommended. The channeling is exactly as you describe: water is forced down over the GPU die micro fins.
 
My Phanteks GPU block definitely has a forward. At least, a flow direction is recommended. The channeling is exactly as you describe: water is forced down over the GPU die micro fins.

Welp, exceptions exist… those really are not the favored designs by most GPU waterblock manufacturers though.
 
Did a quick test fit last night. The connectors on the cards are still so close together that the soft tubing (Alphatube 13/10) starts to kink. Dang it.
 
Thunderdolt, the second image in your OP gives me an idea.

Those two 90° rotaries you have on those blocks - do they have the soft tube compression fittings integrated? Or are they just 90° rotaries, like this:

1-141115150g1359.jpg

Because if they're just 90° rotaries that you can thread any fitting you want into, it looks to me like you've got something like 20-40mm of space between them.

Perhaps you could use a telescoping fitting like this?

Barrow-G1-4-Thread-9mm-Micro-Adjust-Telescopic.jpg


Using those, you could go left to right, card to card, in a "Z" sort of pattern from the bottom card to the top.
 
Oooo. That definitely looks like it could work. I'll take some measurements tonight and see which option matches up.

Thanks!
 
I ended up bending some hard tubing to use for bridges until the custom pieces come in. Fired up the machine and did the first test render a few minutes ago. Hooray - it's alive!
 
Got the first samples in to check my dimensions. I think this might work. Not sure why they came in clear-coated, but that doesn't look too bad.
IMG_1516.jpg IMG_1515.jpg
 