2kW+ PSU options?

I'm not sure this would be a good idea. Most internal PC power connectors aren't designed for hot swapping, so you'd run a risk unplugging the water pumps on the loops being modified; and unless you've got T connectors with unused ends, breaking the loop to change connections around would leave the pump running but unable to move water, which isn't good for it either.
It'd still be flowing in the CPU loop... unless he has the pumps supplying the loops individually? But it didn't sound like that.
 

The pumps could keep running the whole time and no electrical would need to be unplugged or have a plug swap. It would be a dual pump setup with one example of potential routing being pump1 -> rad1 -> GPU1 -> GPU2 -> pump2 -> rad2 -> GPU3 -> GPU4 -> pump1. Or, if a pump died, it could be replumbed out of the loop entirely. The QDCs are all gender matched for consistent input/output, so it wouldn't be difficult to route differently. I'm not really sure of the value in such a scenario though. This is more about "could" than "would," especially since the Alphacool QDCs are pretty leaky even with the pumps off. The basic idea behind easy re-routing (separate from the goal of modularity for service & swaps) is for "oh shit" recovery if, for example, a pump dies during an extended compute. I could, in theory, just swap the loop routing without having to restart the job (the majority of them can't be paused).
 
Normally, each loop is completely standalone aside from sharing an Aquaero for fan control. The CPU pump supplies only the CPU loop, the GPU Loop 1 pump supplies only GPU Loop 1, etc. If something broke, combining loops in various ways would be as simple as swapping QDCs and using QDC-equipped jumpers wherever existing tubing was too short. The QDCs are all gender matched with male as the output and female as the input, so the output of any component can be connected to the input of any other component. The matching here isn't about maintaining flow direction (nothing other than the pumps is directional); it's just a simple way to ensure that every component has one of each gender on it.

Oh, and I forgot this in the previous reply: the GPUs are paired via terminal blocks on their water blocks, so they always get plumbed as pairs.
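In case it helps picture it, here's a toy sketch of the re-routing idea; the component names and loop orders are just placeholders based on the description above, and it only encodes the two real constraints (every loop needs a pump, and the terminal-blocked GPU pairs stay adjacent):

```python
# Toy model of the QDC re-routing idea above. Component names are placeholders;
# since every block has one male (output) and one female (input) QDC, any
# ordering can be plumbed, so the only constraints worth checking are that a
# loop contains a pump and that terminal-blocked GPU pairs stay together.

NORMAL_LOOPS = {
    "cpu":       ["cpu_pump", "cpu_rad", "cpu_block"],
    "gpu_loop1": ["pump1", "rad1", "gpu1", "gpu2"],
    "gpu_loop2": ["pump2", "rad2", "gpu3", "gpu4"],
}

# "Oh shit" recovery routing from the post: both GPU loops merged into one.
MERGED_GPU_LOOP = ["pump1", "rad1", "gpu1", "gpu2",
                   "pump2", "rad2", "gpu3", "gpu4"]

GPU_PAIRS = [("gpu1", "gpu2"), ("gpu3", "gpu4")]

def loop_is_valid(loop):
    """At least one working pump, and GPU pairs neither split nor separated."""
    if not any("pump" in c for c in loop):
        return False
    for a, b in GPU_PAIRS:
        if (a in loop) != (b in loop):            # pairs can't span two loops
            return False
        if a in loop and abs(loop.index(a) - loop.index(b)) != 1:
            return False                          # pairs share a terminal block
    return True

if __name__ == "__main__":
    for name, loop in list(NORMAL_LOOPS.items()) + [("merged", MERGED_GPU_LOOP)]:
        print(f"{name}: {'ok' if loop_is_valid(loop) else 'invalid'}")
```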
 
Necroing my own thread here. The AX1600i seems to be having some trouble with the GPUs capped at 300W. Getting hard power cycles when renders run longer than 4-5min.

Did anything new come out since the OP? I’m not seeing anything in my Googling.
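In case anyone else is chasing a similar shutdown, here's a minimal sketch of one way to watch for it, assuming NVIDIA cards with nvidia-smi on the PATH; the 300W cap and 1-second poll interval are just example values:

```python
# Minimal sketch: apply a 300W power limit and log per-GPU draw so a
# transient spike right before a hard power cycle shows up in the log.
# Assumes NVIDIA GPUs with nvidia-smi available; values are examples.
import subprocess, time

POWER_LIMIT_W = 300

def set_power_limit(watts: int) -> None:
    # -pl applies to all GPUs when no -i index is given (needs admin rights)
    subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)

def read_power_draw() -> list[float]:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [float(line) for line in out.strip().splitlines()]

if __name__ == "__main__":
    set_power_limit(POWER_LIMIT_W)
    while True:
        draws = read_power_draw()
        print(time.strftime("%H:%M:%S"),
              [f"{d:.0f}W" for d in draws],
              f"total={sum(draws):.0f}W")
        time.sleep(1)
```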
 
We announced a 2050W unit last month; it's shipping now and should be available in North America by the end of the year:
https://www.silverstonetek.com/product.php?pid=1023

Thanks for the link and I appreciate that you're posting here.

As silly as it sounds, not having a power switch is kind of a deal breaker for me. That is definitely an unexpected design decision. I may just hold off for the Rev 1 release.
 
Unfortunately we won't be able to just add a power switch to the current design. The PSU, at only 180mm deep (a typical size for many 850W modular PSUs from other brands), is already as packed as it can be with components!
 
That's fine. I'll just buy the 220mm long EVGA unit instead I guess. The 40mm of empty space that can't be used for anything and offers zero operational benefit shouldn't be a bragging point for you guys. It's like trading modular cables for fixed ones and then telling us it was done so the housing could be painted an extra dark shade of black.
 
There are actually lots of use cases for shorter high-wattage PSUs; we know because we design cases too! Below is one example (our RM42-502 case). The PSU in the photo is 140mm deep, so a 180mm one could still work, while a 220mm one would not fit with the adjacent 5.25" bay occupied by something like hot-swap bays:

rm42-502-4.jpg
 
Your example here is a system that tops out at under 1000W TDP. Why would I want to put a 2000W PSU into that?
 
Because power efficiency drops off a cliff if you're maxing out your PSU?
Maybe we just have different definitions of "cliff?" The HX1200i "falls off a cliff" when it goes from 94% efficiency at 50% load all the way down to 93% at 83% load (1000W). It's even worse at 100% load where efficiency plummets to 92%. I mean, you're literally setting an entire penny on fire if you do that. If your electricity is expensive, that could even be TWO pennies!

If you're running this at 1000W 24/7/365 (a PSU without a power switch is obviously never going to be turned off) and your electricity is $0.21/kWh, the amount of money getting wasted is staggering. At a planet-killing 93% efficiency, you're losing $129/yr to heat. Compare that with the 94% efficient HELA 2050, which loses only $110/yr at that load.

With the $290 difference in list price, this puts your breakeven at a paltry 15 years. That's 1.5 decades - for some definitions of "decade."
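For anyone who wants to poke at the numbers, here's a quick sketch reproducing the arithmetic above; it approximates waste as (1 - efficiency) * load, which is how the $129/$110 figures fall out, and the load, electricity rate, and price delta are the ones quoted:

```python
# Reproduces the efficiency arithmetic from the post above.
# Waste is approximated as (1 - efficiency) * load, which matches the
# $129/yr and $110/yr figures; load, rate, and price delta are as quoted.
LOAD_W = 1000           # continuous load
HOURS_PER_YEAR = 24 * 365
RATE = 0.21             # $/kWh
PRICE_DELTA = 290       # list price difference between the two units

def yearly_waste_cost(efficiency: float) -> float:
    waste_w = (1 - efficiency) * LOAD_W
    return waste_w / 1000 * HOURS_PER_YEAR * RATE

hx1200i = yearly_waste_cost(0.93)    # ~$129/yr
hela2050 = yearly_waste_cost(0.94)   # ~$110/yr
savings = hx1200i - hela2050         # ~$18/yr
print(f"HX1200i:   ${hx1200i:.0f}/yr lost to heat")
print(f"HELA 2050: ${hela2050:.0f}/yr lost to heat")
print(f"Breakeven: {PRICE_DELTA / savings:.1f} years")
```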
 

Attachments

  • efficiency-2.jpg
  • efficiency.jpg
  • Wk4R6FdXrUwwrYeZZWVmLZ-1310-80.png
Your example here is a system that tops out at under 1000W TDP. Why would I want to put a 2000W PSU into that?
Sorry, I should have made it clear that we are showing an example chassis that could benefit from having a shorter PSU. The system components in the photo are for illustrative purposes (to give buyers of the chassis an idea of size and component clearance); you can of course build a more powerful system into the same chassis than the one shown in the photo!
 
Yes, a more powerful system will also fit a standard-sized 200mm PSU.

I'm having trouble coming up with a system that's powerful enough to justify a 2000W PSU yet also can't fit a standard 200mm one.
 
I'm not sure this would be a good idea. Most internal PC power connectors aren't designed for hot swapping, so you'd run a risk unplugging the water pumps on the loops being modified; and unless you've got T connectors with unused ends, breaking the loop to change connections around would leave the pump running but unable to move water, which isn't good for it either.
I killed an X58 board doing something similar in 2009.
 
Yes, a more powerful system will also fit a standard-sized 200mm PSU.

I'm having trouble coming up with a system that's powerful enough to justify a 2000W PSU yet also can't fit a standard 200mm one.

I have an Azza Genesis 9000, which is an XL-ATX case, meaning I could stuff all the GPUs I want into it. My Cooler Master V1000 just barely fits with a 280mm radiator mounted on the bottom, and it is 170mm long; 200mm would be out of the question, and the case can surely handle 2000 watts' worth of equipment. I would not consider the lack of a power switch a deal breaker if it were the only PSU on the market that fit my needs. Delivering poor power would be the deal breaker.
 
I don't use this kind of power in ATX settings anymore, so this might not be applicable, but back in the day some of us ran multiple PSUs, usually two of them. Some components would plug into one PSU and some into the other, and there were various means of turning both on at the same time.
 
Did he ever post any pics of this PC of his? I really wanted to see if this thing actually exists...
 
If you mean my rig, it exists as just a 3x3090 machine due to the lack of a suitable power supply. I would like to give the 2kW EVGA unit another shot, but I don't want to gamble $600 plus 1-2 days of system rebuilding to find out whether it works, and that's compounded by the fact that it wouldn't be enough for 4x4090 anyway. Thus, the build is largely a dead end, stuck as a 3-GPU rig unless a magical 2.5kW PSU comes to market.
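For context, here's the napkin math on why 2kW doesn't get there for 4x4090; the 450W figure is stock 4090 board power, while the CPU/platform draw and transient multiplier are assumptions for illustration, not measurements:

```python
# Back-of-envelope power budget for a 4x4090 build vs. a 2kW PSU.
# 450W is the stock RTX 4090 board power; the CPU/platform number and
# the transient-spike multiplier are assumptions, not measurements.
GPU_COUNT = 4
GPU_BOARD_POWER_W = 450
CPU_PLATFORM_W = 400        # CPU + board + fans + pumps + drives (assumed)
TRANSIENT_FACTOR = 1.5      # short GPU excursions above board power (assumed)

steady = GPU_COUNT * GPU_BOARD_POWER_W + CPU_PLATFORM_W
spikes = GPU_COUNT * GPU_BOARD_POWER_W * TRANSIENT_FACTOR + CPU_PLATFORM_W
print(f"steady-state: ~{steady}W")          # ~2200W, already past 2kW
print(f"with transients: ~{spikes:.0f}W")   # ~3100W peak demand
```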

I did investigate running multiple PSUs since the 1000D has mounting for an SFF PSU in addition to the main ATX. The more I looked into running multiple ATX/SFF PSUs, however, the more I realized that was a bad idea for a desktop - even a seemingly large one like a 1000D. The 1600W EVGA units are supposed to be much more tolerant of transient spikes than the Corsair AX1600i (which very consistently power cycles at 1350-1400W continuous load), so simply changing over to one of those might have been enough to run 4x3090 with TDPs limited to 300W, but I just don't have the energy to swap PSUs between machines these days.

In an open frame mining rig, I can see how dual and triple ATX PSUs become viable given the right PCIe risers. I build production workhorses that end up on users' desks, however, so those open frame rigs don't really work for me. Given the limitations of today's ATX PSUs, 1600W bricks powering just 2 GPUs really seem to be the sweet spot. And, honestly, when it comes to rendering, simulation, and other compute-focused uses, 2x4090 is looking like it could be equivalent to 4x3090. That's such a crazy amount of compute capability (crammed into just a 5000D worth of space, too!) that users truly aren't ready to adjust their workflows to optimize for it and probably won't be until after the 50-series launches.

I did end up building a couple of 10x GPU rigs, but those were in proper server chassis with proper server PSUs. I would like to build another as a hobby project, but for now I think I'm going to focus on building large rackmount water cooling arrays using gamer parts.
 