Massive Water Cooling Setup Suggestions Welcomed

Use a closed loop to water-cool the cards using generic VGA water blocks instead of full-cover ones. That way you can re-use the blocks, and they are much cheaper as well.

Then cool the closed loop's water with a heat exchanger pushing the heat into the mains water as suggested above.
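
To put rough numbers on that, here is a minimal sketch in Python (the figures are my assumptions, not measurements: ~50 cards at ~300 W apiece, and the mains water allowed to warm 15 C on its way through the exchanger):

# Sketch: mains water flow needed to carry away the rig's heat.
# HEAT_WATTS and DELTA_T are assumed figures, not measurements.

HEAT_WATTS = 50 * 300        # ~15 kW of heat to dump (assumed)
CP_WATER = 4186              # J/(kg*K), specific heat of water
DELTA_T = 15                 # K, allowed temperature rise (assumed)

kg_per_s = HEAT_WATTS / (CP_WATER * DELTA_T)   # Q = m_dot * cp * dT
liters_per_min = kg_per_s * 60                 # 1 kg of water ~ 1 L

print(f"{liters_per_min:.1f} L/min of mains water")   # ~14.3 L/min

That's roughly garden-hose flow running 24/7, so the water bill becomes part of the equation too.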

BTW, this is incredibly bad for the environment and I kinda hope the whole bitcoin/litecoin etc thing crashes and burns.

Your mileage obviously varies.
 
The first thing I would do is contact EK sales directly. You may be able to get the blocks at a better price if you're literally placing an order equivalent to an order from one of their e-tailers.
 
As someone else pointed out, I'd look at getting something like the NZXT G10 to replace the stock coolers. In theory that would let you either run at lower temps or push more KH/s, since you could then overclock; either way you'd cut the noise down a lot. Once you have that, you might be able to build a rig/wall to mount all the radiators on, trapping the hot air from the rads in a single enclosed section with a high-power blower to exhaust it all outside. This would in theory make it easier to cool the rest of the room; it's very similar to how new data centers work with cold and hot aisles.
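
For the enclosed hot section, a quick blower-sizing sketch (my assumed numbers: ~15 kW of heat, and air allowed to warm 10 C between intake and exhaust):

# Sketch: sizing the exhaust blower for an enclosed hot section.
# HEAT_WATTS and DELTA_T are assumptions, not measurements.

HEAT_WATTS = 15000       # total heat trapped in the hot section (assumed)
RHO_AIR = 1.2            # kg/m^3, air density
CP_AIR = 1005            # J/(kg*K), specific heat of air
DELTA_T = 10             # K, rise across the section (assumed)

m3_per_s = HEAT_WATTS / (RHO_AIR * CP_AIR * DELTA_T)
cfm = m3_per_s * 2118.88          # cubic feet per minute

print(f"{cfm:.0f} CFM blower needed")   # roughly 2600 CFM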

The biggest concern I'd have is this looks to be in a basement... That much heat will be a major issue and will likely cause problems with humidity and condensation, among other things.
 
Yes, the closed loop with a heat exchanger would be an option if you wanted to use mains water. You just have to make sure that you aren't running the mains water through the water blocks! :)
 
This is not a good idea for PC watercooling, though it is fine for what you are doing. The reason it is bad for PC watercooling comes down to the design of the water blocks doing the cooling. To properly cool the cores, they rely on fine fin structures to increase the heat-transfer surface right on the core. Using the city water supply, you will quickly clog these structures with all the small contaminants in that supply, not to mention intentional additives like chlorine, which is corrosive. There is a reason people use distilled water and not tap water for cooling; this is just one of them.

Thermoelectric cooling will only exacerbate his main problem, which is the amount of heat being generated that needs to leave the room. Thermoelectric cooling generates *more* heat than simply air cooling; you don't get something for nothing with the laws of thermodynamics!
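
A back-of-the-envelope illustration of that point, assuming a Peltier COP of about 0.7 (my number; real modules vary with load and delta-T):

# Why thermoelectric (Peltier) cooling makes the room-heat problem worse:
# the hot side must reject the card's heat PLUS the TEC's own input power.

CARD_HEAT = 300          # W pulled off one card (assumed)
COP = 0.7                # heat moved per watt of electricity (assumed)

tec_power = CARD_HEAT / COP          # electricity the TEC itself draws
room_heat = CARD_HEAT + tec_power    # total heat dumped into the room

print(f"{room_heat:.0f} W into the room instead of {CARD_HEAT} W")  # ~729 W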

Using glycol for coolant is a bad idea: it isn't as good at heat transfer (about 20% worse) as plain water, and it has a higher viscosity, which lowers your flow rate and may put undue stress on your coolant pump. However, mixing in some glycol isn't a bad idea if you need it as a corrosion inhibitor and biocide. You are used to cars; they use glycol because it doesn't freeze, not because it is better at heat transfer.
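
For what it's worth, the heat-capacity side of that penalty is small; most of the ~20% comes from worse convection and the viscosity hit. A sketch with rough textbook property values (my figures, approximate):

# Volumetric heat capacity: plain water vs. ~25% glycol mix.
# Property values are rough textbook numbers, not measurements.

water = {"rho": 998,  "cp": 4186}   # kg/m^3, J/(kg*K)
mix   = {"rho": 1030, "cp": 3800}   # ~25% ethylene glycol, approximate

def mj_per_m3_k(fluid):
    # heat carried per cubic meter per kelvin of temperature rise
    return fluid["rho"] * fluid["cp"] / 1e6

print(f"water: {mj_per_m3_k(water):.2f} MJ/(m^3*K)")   # ~4.18
print(f"mix:   {mj_per_m3_k(mix):.2f} MJ/(m^3*K)")     # ~3.91, only ~6% less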

I actually got the glycol idea from high-output transmit tubes like those used in broadcast TV stations. I figured they used it for a reason and maybe they knew something we didn't. Most people that race run pure water and maybe a bottle or two of Water Wetter, for the reason that you stated. As far as corrosion, maybe that could be fixed with water filters; they make some that are very good and the price is not bad. The ones we use can be changed as easily as an oil filter. Also, maybe check out http://en.wikipedia.org/wiki/Cooling_bath
 
ASICs are for Bitcoin, not Litecoin-type coins, currently. I have 2 of them at 7.4 and 7.6 GH/s, which is ~12 GPUs' worth of performance each at a fraction of the power use.

It may be making X LTC or other alt-coins per day, but the power use, and of course the heat output he now wants to get rid of, need to be part of that equation.

Yeah, the bitcoin farms are astounding in scope, considering they have custom cases full of thousands of blades pumping out 100s of GH/s per "rack".

There is a reason why the ASIC companies are quite wealthy now, not to mention the actual pools; they may be paying out X amount per share, but because they are the pool, they keep on average 92% of the payout.

With this amount of power, you would be better off solo mining, and declocking the cards to massively reduce their heat output.

I know for me, if I take my 7870 and declock it almost as low as it goes, and reduce the voltage even below the lowest the card does on auto, it will mine at 250 KH/s and never hit 34C at 43% fan. If I leave it stock, it does about 320 KH/s at 45C, 62% fan; if I tune it, 380 KH/s at 53C, 75% fan; if I max the speed, 64C at 78% fan. Voltages are 0.825 V, 1.118 V, 1.050 V, and 1.175 V respectively.
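
Out of curiosity, the relative power of those four settings can be ballparked from dynamic power ~ frequency x V^2, using hash rate as a clock proxy. This is a scaling argument, not a measurement, and the max-speed hash rate wasn't stated, so ~400 KH/s is my assumption:

# Rough relative-power estimate for the four 7870 settings above.

settings = [            # (KH/s, core voltage) from the post
    (250, 0.825),       # declocked and undervolted
    (320, 1.118),       # stock
    (380, 1.050),       # tuned
    (400, 1.175),       # max speed; hash rate not stated, ~400 assumed
]

base_khs, base_v = settings[-1]
for khs, v in settings:
    rel_power = (khs / base_khs) * (v / base_v) ** 2
    print(f"{khs} KH/s at {v} V -> ~{rel_power:.2f}x the max-speed power")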

I couldn't even comprehend the sheer amount of power this must suck back, but I suppose when you've got $ you get to play.

As far as ASIC efficiency, to give an idea: most GPUs will get ~300 MH/s at 150 W; the smallest ASIC plugs into USB and can do 350+ MH/s at 2.5 W. The ones I have are supposed to be 5 GH/s at 35 W or so, but mine do 7.3-7.8 depending on how I set the config file, and on average use 28 W of power. The biggest single unit I believe at this point is 600+ GH/s per unit at ~600 W. So yeah, ASICs dwarf GPUs hands down in regards to hashing power, cost to performance, raw heat output, etc.

Like an F1 car: purpose-built to do what they do VERY well, but they are a one-trick pony.
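
To put the figures above on one scale (MH/s per watt; the numbers are the ones quoted, the arithmetic is just spelled out):

# Efficiency comparison from the figures above (MH/s per watt).

devices = {
    "typical GPU":     (300,    150),   # ~300 MH/s at 150 W
    "USB stick ASIC":  (350,    2.5),   # 350+ MH/s at 2.5 W
    "my units":        (7500,   28),    # ~7.5 GH/s at ~28 W
    "big single unit": (600000, 600),   # ~600 GH/s at ~600 W
}

for name, (mhs, watts) in devices.items():
    print(f"{name:>15}: {mhs / watts:7.1f} MH/s per watt")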

That's some fairly outdated info these days: Bi-Fury units will do 5 GH/s within USB 3 specs (900 mA at 5 V, so <5 W), while Blue Fury and Red Fury units do 2.5 GH/s within USB 2.0 specs (500 mA at 5 V, or 2.5 W).

The largest mining units I know of are the CoinTerra 2 TH/s and the KNC Jupiter, which is 3 TH/s; KNC and a few others have 2 TH/s and 1 TH/s offerings also. The days of 1 W per GH/s are over; the new 28nm stuff uses much less than that. The KNC Jupiter is supposed to take around 850 W, which is around 3.5 GH/s per watt.


As for the OP, I'd say water cool it all, using compression fittings and quick disconnects. It's going to cost more, but with 50+ GPUs you are going to want it to be easy to work on cards and to have the lowest chance of failure possible. I'd also make sure you don't put too much pressure through the blocks; the higher the pressure, the faster those o-rings are going to fail. I'd get a couple of triple-core car rads and put one outside and one inside; that way you can easily heat your house in the winter or dump the heat outside in the summer.

Don't forget about silver kill coils and the like; you don't want stuff growing in your loops.

I'd also use black tubing to reduce light and make it less likely that things will grow.

EDIT: And don't mix metals; stick with 100% copper. The last thing you need is galvanic corrosion on a system that big.
 
IMO, you would probably be better served sticking with air cooling solutions. A problem I see with most of the water-cooling approaches is that you connect several cards in series. While I'm sure you can overcome the math involved in building the loop (flow rates, heat dissipation, etc.), you'll have some annoying issues during maintenance.

If anything goes wrong with the loop that forces you to halt coolant flow (bio growth, corrosion, pump goes out, etc.), you'll have several systems down and will take a big hit to your hash rate while you fix the issue and go through restart prep (bleeding out air, leak-testing the loop if you have to disassemble a large part of it, etc.).

Maybe I'm missing a key concept when scaling water up that big... if so, please enlighten me, as it seems like an interesting problem to overcome.

You may also want to look at other enterprise-level angles to ensure the longevity of your investment: environmental monitoring, a UPS to handle blips, and graceful automated shutdown should either get out of whack while you're away. It would be sad to have the cards cook themselves in a cooling failure, to replace hardware due to dirty power, or to have to restart processes should a system not pick itself back up after a power outage.
 
Most people are suggesting the use of quick disconnect fittings. The way they are suggesting it, each "system" is basically connected in parallel via the quick disconnect fittings. If a card or system has a problem, you just disconnect that system from the loop and everything else continues operating.
 
I personally would also try to keep it as cheap as possible, because let's face it, with 53 GPUs you don't want to spend any more than you have to per GPU.

I'd personally go for universal blocks + VRAM heatsinks, or just the cheapest copper full-coverage blocks one can find. IIRC 7970 blocks fit on 290s, but I'm not 100% sure.
 
You could always put a copper heatsink on every one of them, then make a big hole in your basement concrete and pipe all the heatsinks into the earth. Ahhh, global warming though! :p
 
You guys are still treating this too much like a PC. A water cooling rig for this would be partially PVC or ABS pipe, glued, with shutoff valves to isolate probably sets of 4 cards.
You'd probably need multiple pumps and a large reservoir.
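
A quick sketch of what that manifold works out to; the flow per block is an assumption on my part (~1 L/min each), not a measured requirement:

# PVC manifold layout: cards in valve-isolated sets of 4, in parallel.

import math

TOTAL_CARDS = 53
CARDS_PER_SET = 4
LPM_PER_CARD = 1.0        # assumed minimum flow per water block

sets = math.ceil(TOTAL_CARDS / CARDS_PER_SET)    # 14 ball-valve sections
total_lpm = TOTAL_CARDS * LPM_PER_CARD           # flow the pumps must supply

print(f"{sets} isolation sections, ~{total_lpm:.0f} L/min total flow")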
 
That's where I was going: using PVC for rails, drilling individual tap points along the rails and epoxying in the nipples for the flex lines to the heat blocks. It really wouldn't cost that much to build the PVC frame; the water blocks, pump, and radiator would be the expensive parts. I'd just use a (new) automotive radiator. It's already got a siphon hose to the reservoir, and you wouldn't need a huge external reservoir, just an overflow, since the radiator would act as the reservoir.
 
Question: is that your basement? It doesn't look like a normal living room. If that is a basement (I see pipes), why do you care about noise/heat?

Because people still need to be in the area around the rig, and the amount of heat kicked out is significant enough that it could impact other areas of the house, especially during the summer months.
 
I would not drill holes; I would use T-junctions with a threaded end, stacked 4 per card, on 1/2 inch or 1 inch PVC, then ball valves to block off flow to sections for maintenance.
 
Yeah, I've seen that bitcointalk post. Pretty impressive indeed. I'd like to go much bigger, but the problem is power, and stability of the current mining software with the R9 290 drivers. Hopefully in the next week or two I'll get these units dialed in and rock solid stable. I have no plans on overclocking these units; I'd rather add more cards than overclock, as overclocking decreases the performance-per-watt ratio. I was happy getting these cards at $399 each through Newegg with BF4 copies :)

Soo... Any extra BF4 copies you don't want or need :D lol
 
The most efficient approach would be to calculate his flow needs and buy an appropriately sized pump.

Lunas hit the nail on the head: with standard plumbing supplies he wouldn't need to do anything but cut some of the lines to the appropriate length.
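
A rough sketch of what "appropriately sized" might mean; the static lift and the friction losses here are both guesses on my part, since the real numbers come from the pipe runs and block specs:

# Rough pump head estimate for the plumbing above.

RHO, G = 1000, 9.81          # water density (kg/m^3), gravity (m/s^2)

static_head_m = 2.0          # pump to the top of the rails (assumed)
friction_head_m = 1.5        # guessed losses in pipe, fittings, blocks

total_head_m = static_head_m + friction_head_m
kpa = RHO * G * total_head_m / 1000

print(f"need ~{total_head_m:.1f} m of head (~{kpa:.0f} kPa) at design flow")

Then pick a pump whose curve still delivers the total flow (e.g. the ~53 L/min figured earlier) at that head.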
 
Of course, declocking can increase the hash per watt if you take the time to find the optimal thread concurrency, core, and memory clocks at the right voltage. For my 7870, just as a specific example: 983 core and 1375 memory at 1.175 V gets just under 400 KH/s; the next bump has to be 983 with 1535 memory at 1.180 V, a difference of ~35 W for 415 KH/s.
I use these settings in CGMiner 2.11.4, as I found that version best so far for stability and performance.
These are my conf settings:
"intensity" : "13",
"vectors" : "2",
"kernel" : "scrypt",
"scrypt" : true,
"thread-concurrency" : "13456",
"worksize" : "256",
"shaders" : "1536",
"lookup-gap" : "0",
"temp-cutoff" : "90",
"temp-overheat" : "75",
"temp-target" : "60",
"api-mcast-port" : "4028",
"api-port" : "4028",
"gpu-dyninterval" : "4",
"gpu-platform" : "0",
"gpu-threads" : "1",
"hotplug" : "5",
"log" : "15",
"no-pool-disable" : true,
"no-submit-stale" : true,
"failover-only" : true,
"load-balance" : true,
"expiry" : "70",
"scan-time" : "40",
"queue" : "3",
"temp-hysteresis" : "3",
"shares" : "0",

I did find that going up 1 step in the thread concurrency helped, between 10-15 KH/s in my case. My thread concurrency was based on the 7870XT's shader count; in the 290's case you would base it on the shaders of the 290X.
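
A sketch of that shader-based starting point (the shader counts are the real ones for these cards; the multipliers to try are my assumption, since scrypt tuning guides generally suggest trying multiples of the shader count and then nudging from there):

# Candidate thread-concurrency values from shader counts.

SHADERS = {"7870XT": 1536, "R9 290": 2560, "R9 290X": 2816}

for gpu, shaders in SHADERS.items():
    candidates = [shaders * k for k in (4, 6, 8)]
    print(f"{gpu}: try thread-concurrency near {candidates}")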

I have read more than once that running room-temperature cooling rather than forced (chilled) cooling is better for the chips in question, and it obviously takes far less power, since you only need to pump the liquid elsewhere to get rid of the extra heat. IBM and HP (probably others now) do warm-water cooling for servers, in which the water is pumped out to heat swimming pools and sits at around 50C. This ends up saving a boatload of power compared to chilling the water to 30C: they don't need AC or whatever to force-cool it, they just make sure there are enough heat exchangers and such to wick the heat away. It's easier on the chips, with less trouble making sure they stay above ambient, and so on.
 
My other concern is that most pumps have a maximum height (head) they can pump water to. I am trying to think whether you would be better off pumping water into a tank of sorts up top and letting gravity pull it down through the cards and out. Then the question is how to feed the sections: if you have an adjustable pump that keeps the upper tank at a set level (1/2 to 3/4 full), I'm not sure if you should split the feed into two sections or just one.
The whole setup would likely consist of:
1 fifty-gallon drum as a tank
1 radiator, probably a full car radiator with a big, slow fan on it
2-4 water pumps: one for the radiator, circulating water from the drum through it to dump the whole setup's waste heat
1-2 pumps to feed the intake side, and 1 pump to pull out of the cards and feed the return side
I might also consider making the return side a tower with a series of baffles, so the water interacts with the air to dump heat.

As for the coolant, eh, 75% water to 25% car antifreeze, or whatever your pleasure is.

Tempting to make the top of the big tank one of these: http://hardforum.com/showthread.php?t=1421690&highlight=bong+cooler
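
One nice side effect of a 50-gallon drum is thermal buffering. A sketch of how long it buys you if the radiator side fails, with the heat output assumed at ~15 kW (my figure):

# How much buffer does a 50 gallon drum actually buy? If the radiator
# stops shedding heat, how fast does the tank warm up?

TANK_LITERS = 50 * 3.785        # 50 US gallons
HEAT_WATTS = 15000              # assumed rig heat output
CP_WATER = 4186                 # J/(kg*K)

kelvin_per_min = HEAT_WATTS / (TANK_LITERS * CP_WATER) * 60
print(f"tank warms ~{kelvin_per_min:.1f} C/min with no radiator")  # ~1.1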
 
$500 average each for one of those cards x 50 is $25,000, for average Bitcoin mining of 700-800 MH/s per card, or about 37.5 GH/s total. The average power for those devices is 300 watts per card, or 15,000 watts total. According to the Bitcoin calc, your setup would take 4 years and 8 months to break even, before power costs.

Should have just bought a ButterflyLabs 600 GH/s PCI-E for 5 grand; that would break even in 18 days.
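
Spelling out that arithmetic (daily revenue depends on difficulty and coin price, so it's an input here rather than something I can hard-code):

# The break-even arithmetic from the post above.

CARDS = 50
COST_PER_CARD = 500.0        # USD (from the post)
MHS_PER_CARD = 750           # midpoint of the 700-800 MH/s range
WATTS_PER_CARD = 300

hardware_cost = CARDS * COST_PER_CARD            # $25,000
total_ghs = CARDS * MHS_PER_CARD / 1000.0        # 37.5 GH/s
total_kw = CARDS * WATTS_PER_CARD / 1000.0       # 15 kW continuous draw

def breakeven_days(usd_per_day):
    # days to recoup hardware cost, ignoring power (as the post does)
    return hardware_cost / usd_per_day

# the quoted "4 years and 8 months" implies revenue around $14-15/day:
print(f"{total_ghs} GH/s, {total_kw} kW, {breakeven_days(14.7):.0f} days")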
 
Wow, so that one Bitcoin mining card can mine better than 50 R9 290s at ~1/50th the power draw? My mind is blown. Although I don't know much at all about mining.
 
He mines scrypt coins; one would have to be incapable of basic reading and math skills to invest this much into GPUs just to mine SHA-256. And never, EVER would anyone who cares about their money buy something from BFL; they never deliver on time and have zero customer support. If he wanted to mine straight BTC, he should have jumped on early pre-orders for KNC miners.
 
It's an ASIC, so its only purpose is to mine coins by hashing SHA-256. It's basically useless for anything else not involving SHA-256 calculations. A GPU is a much more general-purpose chip.
 
So the only use it has is for Bitcoin, and the OP is farming Litecoin, which goes for about 20-30 USD per coin, so his setup makes him about 17 LTC a day... And if the OP does things right, he will trade the LTC for BTC on a low swing and then sell the BTC high...
 
Wow... :eek:

I vote for industrial fans. In all seriousness, that's more hardware than some entire F@H teams have. It really is an amazing array of hardware, and I hope you can find something effective to cool it.
 
I should have my cooling setup ready to go by end of next week hopefully. I'll post pictures of the cooling setup and final rig pictures.
 
Awesome project you're working on. I'm curious how it turns out. Can't wait for the next pictures :D
 