Any reason not to use dual PSUs?

EnderW

[H]F Junkie
Joined: Sep 25, 2003
Messages: 11,249
I have an O11 Mini case, which does not fit an ATX PSU. It does, however, support 1-2 SFX PSUs.

I currently have an SF600 Platinum. Thinking of upgrading the computer to a 12900K and 3090 Ti.

For less than the price of the SilverStone 1000 watt PSU, I could simply add another SF600 or even SF750.

My plan would be to run the GPU from the new PSU and leave the motherboard and CPU on the existing SF600.
 
You can. I would make sure they are both quality supplies, though. I did it in most of my mining rigs.

With that said, it may be better to just buy one nice PSU with components that pricey.
 
idk any reason not to, someone will chime in I'm sure, but all you need is a dual-PSU adapter for ~$10-15
 
The biggest thing is that since you are not using redundant PSUs, you are simply increasing the number of possible failure points that can keep the system from working. Given your use case and the units involved, I would lean towards a different solution, personally.
 
The biggest thing is that since you are not using redundant PSUs, you are simply increasing the number of possible failure points that can keep the system from working. Given your use case and the units involved, I would lean towards a different solution, personally.
I'm not sure if I get this. Are you saying it can be harmful or just that if one has issues it can be harder to diagnose? I have thought of doing the same thing as OP so I am interested in the reality of the situation if you care to expand.
 
If you're running 2 PSUs without redundancy, you're essentially doubling your chance of PSU failure, especially if you use the same model (or PSUs from the same mfg batch). Imagine RAID 0 but with PSUs: you get all the same pitfalls.

If the machine isn't mission critical and/or you can tolerate failure and data loss then it's totally fine.
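To put a number on the RAID 0 analogy: if each PSU independently has some failure probability, a system that needs both supplies alive fails when either one does. A quick sketch, where the 2% annual failure rate is a made-up illustrative figure, not a measured one:

```python
# Probability the system loses power when BOTH PSUs must stay alive
# (non-redundant, like RAID 0) vs. a single PSU. The 2% annual failure
# rate is an illustrative assumption, not a real reliability figure.

def p_system_fails(p_single: float, n_psus: int) -> float:
    """Chance that at least one of n independent PSUs fails."""
    return 1 - (1 - p_single) ** n_psus

p = 0.02  # assumed annual failure probability of one PSU

single = p_system_fails(p, 1)
dual = p_system_fails(p, 2)

print(f"single PSU: {single:.4f}")  # 0.0200
print(f"dual PSU:   {dual:.4f}")    # 0.0396 -- almost exactly double
```

For small per-unit failure rates, 1 - (1 - p)^2 ≈ 2p, which is why "essentially doubling" is the right intuition.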
 
The biggest thing is that since you are not using redundant PSUs, you are simply increasing the number of possible failure points that can keep the system from working. Given your use case and the units involved, I would lean towards a different solution, personally.
What about my use case? I didn’t state this earlier but it’s a gaming system which is rarely used as my #3 desktop computer.

I thought the Corsair Platinum units were well regarded as the best SFX PSUs available. Am I missing something? The alternative is the 1000 watt SilverStone, which seems to have iffy reviews.

I'm not interested in switching cases to an ATX PSU.
 
Am I missing something?

Yeah, you're missing the complicated dual PSU setup where you're relying on janky adapters to keep both PSUs running in unison, and relying on clean mains power to keep one of the PSUs from shutting off from power issues.

Also missing the fact that one SF600 doesn't have enough PCIe power connectors for a 3090 Ti, which means you'd have to use both supplies to power the card, which is extremely dangerous if one PSU drops offline.

If there's any hiccup that causes one of the PSUs to fail, the other PSU is going to be grossly overloaded and potentially blow up and cause very expensive damage.

I've run many dual non-redundant PSU setups for a variety of situations, and more than once had one of the PSUs shut off and cause component damage. In my case, I didn't care because the hardware being powered was old and not really worth much if it died. You on the other hand want to run a $2000-3500 video card and a $700 processor on what would equate to building a $5000 system and using a Logisys power supply and expecting everything to be fine.

If you're dropping that much money on components, then you need to have a proper power solution, not janky dual non-redundant PSU nonsense. Get a proper 1000W PSU or change your system expectations to fit within the power envelope of your current PSU, which does not cover a 3090 Ti and 12900K.
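For reference, a rough back-of-envelope of the proposed split, using the published limits (12900K PL2 = 241 W, 3090 Ti TGP = 450 W) and an assumed figure for the rest of the board; note that transient spikes on high-end Ampere cards can run well above TGP, which is what actually trips over-power protection:

```python
# Rough power budget for the proposed dual-SF600 split. The CPU/GPU
# numbers are Intel/NVIDIA published limits; the motherboard/RAM/
# drives/fans figure is an assumption. Transient GPU spikes (often
# quoted well above TGP on high-end Ampere) are what actually trips
# OCP/OPP, not these averages.

loads_w = {
    "12900K (PL2)": 241,        # Intel spec, sustained boost
    "3090 Ti (TGP)": 450,       # NVIDIA spec, average gaming load
    "board/RAM/SSD/fans": 60,   # assumed
}

psu_a = loads_w["12900K (PL2)"] + loads_w["board/RAM/SSD/fans"]  # SF600 #1
psu_b = loads_w["3090 Ti (TGP)"]                                 # SF600 #2

for name, watts in (("PSU A (mobo+CPU)", psu_a), ("PSU B (GPU)", psu_b)):
    print(f"{name}: {watts} W of 600 W -> {watts / 600:.0%} load")
```

The averages technically fit inside two 600 W units, but the GPU-side supply is sitting at 75% of rating before any transients, which is the crux of the disagreement in this thread.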
 
Yeah, you're missing the complicated dual PSU setup where you're relying on janky adapters to keep both PSUs running in unison, and relying on clean mains power to keep one of the PSUs from shutting off from power issues.

Also missing the fact that one SF600 doesn't have enough PCIe power connectors for a 3090 Ti, which means you'd have to use both supplies to power the card, which is extremely dangerous if one PSU drops offline.

If there's any hiccup that causes one of the PSUs to fail, the other PSU is going to be grossly overloaded and potentially blow up and cause very expensive damage.
Running in unison? It's not like PSUs are locomotive engines that need to be synchronized or something. They're either on or off. A simple relay switch isn't janky any more than a power switch or sleeved cable set is. Also, wouldn't a power loss on the mains result in everything shutting off? Either way, the system is on a battery backup.

The SF600 has 3x PCIe/CPU 8pin connections. Since the secondary PSU would be used only for the GPU, all 3 would be available.

In the event of one power supply failing, there shouldn't be any way for the other to become "overloaded". And by keeping the load relatively low on each unit, they would be less likely to fail.
 
I've run many dual non-redundant PSU setups for a variety of situations, and more than once had one of the PSUs shut off and cause component damage. In my case, I didn't care because the hardware being powered was old and not really worth much if it died. You on the other hand want to run a $2000-3500 video card and a $700 processor on what would equate to building a $5000 system and using a Logisys power supply and expecting everything to be fine.
I really don't think this plan is equivalent to what you're describing there at all.
 
Running in unison? It’s not like PSUs are locomotive engines that need to be synchronized or something. They’re either on or off.

Wrong. SMPS outputs will interfere with each other if you connect them together. No two supplies will have the exact same output voltage, leading to load balancing issues and regulation issues. Whichever supply has the highest voltage is going to end up sourcing most of the current. Since the controller on the supply measures the output for regulation, both supply controllers will constantly be fighting each other whenever loads change.

And a SMPS is not "on or off". ATX power supplies have soft start circuitry that relies on the motherboard for some of the protections to work. If you just force a supply on by grounding the PS_ON wire, you're completely ignoring the PWR_GOOD signal, which the motherboard uses to determine if there's a fault with the power supply and whether to allow the system to turn on or not.

Also, wouldn't a power loss on the mains result in everything shutting off? Either way, the system is on a battery backup.

You're making the erroneous assumption that cabling and plugs will always be good. I've come across more than a few bad power cables, plugs, IEC connectors and individual ports on UPSes and power strips over the years. Many of which caused one or more redundant PSUs in a server to shut off. Having such a setup for NON-redundant PSUs is just begging for trouble.

The SF600 has 3x PCIe/CPU 8pin connections. Since the secondary PSU would be used only for the GPU, all 3 would be available.

Wrong. The PSU has TWO PCIe and ONE EPS12v cable. They are NOT interchangeable, the polarity is reversed. You can attempt to rig up your own EPS12v -> 8 pin PCIe power adapter, but you assume all risk for blowing up your $2000+ video card.

In the event of one power supply failing, there shouldn't be any way for the other to become "overloaded". And by keeping the load relatively low on each unit, they would be less likely to fail.

Since you cannot power the 3090 Ti with one SF600, yes, the remaining PSU can become overloaded and anything can happen. I don't know why you're trying to justify playing Russian roulette with janky, dangerous power setups when you can just do it the proper way and buy a higher-capacity power supply.
 
Wrong. SMPS outputs will interfere with each other if you connect them together. No two supplies will have the exact same output voltage, leading to load balancing issues and regulation issues. Whichever supply has the highest voltage is going to end up sourcing most of the current. Since the controller on the supply measures the output for regulation, both supply controllers will constantly be fighting each other whenever loads change.

And a SMPS is not "on or off". ATX power supplies have soft start circuitry that relies on the motherboard for some of the protections to work. If you just force a supply on by grounding the PS_ON wire, you're completely ignoring the PWR_GOOD signal, which the motherboard uses to determine if there's a fault with the power supply and whether to allow the system to turn on or not.

You're making the erroneous assumption that cabling and plugs will always be good. I've come across more than a few bad power cables, plugs, IEC connectors and individual ports on UPSes and power strips over the years. Many of which caused one or more redundant PSUs in a server to shut off. Having such a setup for NON-redundant PSUs is just begging for trouble.

Wrong. The PSU has TWO PCIe and ONE EPS12v cable. They are NOT interchangeable, the polarity is reversed. You can attempt to rig up your own EPS12v -> 8 pin PCIe power adapter, but you assume all risk for blowing up your $2000+ video card.

Since you cannot power the 3090 Ti with one SF600, yes, the remaining PSU can become overloaded and anything can happen. I don't know why you're trying to justify playing Russian roulette with janky, dangerous power setups when you can just do it the proper way and buy a higher-capacity power supply.
I'm not talking about connecting the GPU with an EPS12v cable. I would use 3x PCIe cables.

The power supplies wouldn’t really be connected together imo if one is running the GPU alone and the other the rest of the system.

GPU mining rigs use a similar configuration and I’m not aware of widespread issues or failures and they’re running higher loads continuously.

The reason I’m considering dual PSUs is my case only fits SFX units and the only 1000 watt SFX unit doesn’t have good reviews.

Either way I don’t think we’re going to agree here. I appreciate your input and I’ll post back if I decide to try it.
 
I am not caught up on my motherboards lately, so take that as is. If the power supplies have a common power rail connection in parallel, that can cause the voltage to oscillate between them as they try to regulate each other, or show up as a fault and shut down. This occurs at startup, because two supplies do not power up at exactly the same rate, even if identical.

So if there is a +12V connection on the PCIe slot that connects between the two supplies, you can have a problem. Might need to add some capacitance in there to buffer it.

Once up and running you should be fine.

If the power rails are isolated from each other, I don't see an issue. Running these power supplies at low load is very inefficient, though, so you would be wasting power on your lightly loaded supply, depending what its curve looks like.
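The light-load efficiency point can be put in numbers using the 80 PLUS Platinum checkpoint efficiencies (at 115 V: 90% at 20% load, 92% at 50%, 89% at 100%). Below 20% load the certification says nothing, and real efficiency often falls off steeply; the 75% used here for that region is purely an assumption:

```python
# Wall draw for a 600 W Platinum SFX unit, using the 80 PLUS Platinum
# checkpoint efficiencies (115 V): 90% @ 20% load, 92% @ 50%, 89% @ 100%.
# Below 20% load the spec is silent; the 75% figure is an assumption
# for illustration only. This is a crude step model, not a real curve.

def wall_draw(dc_load_w: float, rating_w: float = 600) -> float:
    load_frac = dc_load_w / rating_w
    if load_frac < 0.20:
        eff = 0.75  # assumed, below the certified range
    elif load_frac < 0.50:
        eff = 0.90
    else:
        eff = 0.92 if load_frac < 1.0 else 0.89
    return dc_load_w / eff

# e.g. an idling desktop pulling 60 W DC from one SF600:
print(f"{wall_draw(60):.0f} W from the wall")  # 80 W
```

An idle-ish 60 W load on a dedicated 600 W unit sits at 10% of rating, which is exactly the region where these supplies waste the most.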
 
Not an expert here and don't understand most of what is being said, but this is really common in mining rigs and is used extensively and with a high power draw, as usually miners will try to max the number of GPUs they can fit in a rig. Is it actually inefficient and possibly damaging?
 
Not an expert here and don't understand most of what is being said, but this is really common in mining rigs and is used extensively and with a high power draw, as usually miners will try to max the number of GPUs they can fit in a rig. Is it actually inefficient and possibly damaging?
It's fine. The potential for damage only exists if you are using some garbage power supplies. Most power supplies will overload or fail in a peaceful manner.

Connecting the two power supplies is trivial. All those boards do is tie the green wire on the 24-pin connectors together; that is all that is required, and it is relatively foolproof. You can just splice the wires together if you feel like it.

Power the CPU and motherboard off one and the GPUs off another. It will probably be a few percent less efficient, but nothing to be concerned about.

Technically it's best practice to isolate the GPU's PCIe slot power and 8-pin power with a riser, as those two hold the potential to be slightly different. Although it will run just fine just plugging a second PSU into a GPU seated in a PCIe slot.
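The "tie the green wires together" trick can be written down as a toy model. In the ATX spec the green wire on the 24-pin connector is PS_ON# (pin 16), an active-low signal: pulling it to ground turns the supply on. The adapter boards just put both supplies on the same PS_ON# net (with commoned grounds). The class and function names below are made up for illustration; this is a sketch of the logic, not firmware:

```python
# Toy model of an add2psu-style dual-PSU adapter. ATX PS_ON# (24-pin
# pin 16, the green wire) is active-low: the PSU runs when the line is
# pulled to ground. The adapter ties both supplies' PS_ON# lines (and
# grounds) together, so the motherboard's power-on signal starts both.

class AtxPsu:
    def __init__(self, name: str):
        self.name = name
        self.on = False

    def set_ps_on(self, level: int) -> None:
        # Active-low: 0 = pulled to ground = run; 1 = floating high = off.
        self.on = (level == 0)

primary = AtxPsu("SF600 #1")
secondary = AtxPsu("SF600 #2")

def motherboard_power_signal(level: int) -> None:
    # The adapter means the same PS_ON# net reaches both supplies.
    for psu in (primary, secondary):
        psu.set_ps_on(level)

motherboard_power_signal(0)  # press the power button: both start
print(primary.on, secondary.on)  # True True
```

What this model leaves out is exactly the earlier objection in the thread: the real PWR_GOOD handshake and the fact that two supplies never ramp up at identical rates.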
 
If the power supplies have a common power rail connection in parallel, that can cause the voltage to oscillate between them as they try to regulate each other, or show up as a fault and shut down. This occurs at startup, because two supplies do not power up at exactly the same rate, even if identical.

So if there is a +12V connection on the PCIe slot that connects between the two supplies, you can have a problem. Might need to add some capacitance in there to buffer it.

Once up and running you should be fine.

If the power rails are isolated from each other, I don't see an issue. Running these power supplies at low load is very inefficient, though, so you would be wasting power on your lightly loaded supply, depending what its curve looks like.
I think the reason it works in practice is due to the capacitance of the relatively large mobo power plane and the tiny switching delay of the PSUs.

People can have issues running the PCIe riser off a different PSU than the GPU in mining rigs. For mining rigs it's always best practice to run the GPUs independently of the mobo supply, but the issues are often due to poor components and shoddy setups.
 
It's fine. The potential for damage only exists if you are using some garbage power supplies. Most power supplies will overload or fail in a peaceful manner.

Connecting the two power supplies is trivial. All those boards do is tie the green wire on the 24-pin connectors together; that is all that is required, and it is relatively foolproof. You can just splice the wires together if you feel like it.

Power the CPU and motherboard off one and the GPUs off another. It will probably be a few percent less efficient, but nothing to be concerned about.

Technically it's best practice to isolate the GPU's PCIe slot power and 8-pin power with a riser, as those two hold the potential to be slightly different. Although it will run just fine just plugging a second PSU into a GPU seated in a PCIe slot.
What does adding a riser do?
 
I thought it was 75 watts and how does a riser isolate it?
Yes, that was a typo. I think high-end cards will use a smaller portion of that and just rely on the auxiliary power.

Commonly the PCIe risers used for mining only run data and ground. They use an onboard 6-pin for the 75W the PCIe slot can supply, and that can be fed by the same PSU powering the GPU.

You're limited to an x1 lane, though. The ribbon risers can be kinda sketchy sometimes and just pull the 75W from the motherboard.
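For reference, the per-connector limits in the PCIe spec are 75 W from the x16 slot and 150 W per 8-pin auxiliary connector, which is why a 450 W card ships with a three-8-pin (or 12/16-pin) input. A rough tally:

```python
# PCIe power budget per the spec: 75 W from the x16 slot, 150 W per
# 8-pin auxiliary connector. A 3090 Ti's 450 W TGP is why it needs
# the equivalent of three 8-pins on top of slot power.

SLOT_W = 75
EIGHT_PIN_W = 150

def max_board_power(n_eight_pin: int, slot_powered: bool = True) -> int:
    """Spec-limit power available to a card, in watts."""
    return n_eight_pin * EIGHT_PIN_W + (SLOT_W if slot_powered else 0)

print(max_board_power(3))         # 525 -> covers a 450 W TGP card
print(max_board_power(3, False))  # 450 -- riser with no slot power
```

This is also why the "which PSU feeds the slot" question matters: the 75 W slot budget comes from whichever supply powers the motherboard (or the riser's own 6-pin).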
 
I'm not talking about connecting the GPU with an EPS12v cable. I would use 3x PCIe cables.

The power supplies wouldn’t really be connected together imo if one is running the GPU alone and the other the rest of the system.

GPU mining rigs use a similar configuration and I’m not aware of widespread issues or failures and they’re running higher loads continuously.

The reason I’m considering dual PSUs is my case only fits SFX units and the only 1000 watt SFX unit doesn’t have good reviews.

Either way I don’t think we’re going to agree here. I appreciate your input and I’ll post back if I decide to try it.

The golden rule for multi-PSU mining is that each GPU is only powered by a single PSU, meaning the riser and PCIe cable are powered from the same PSU. If the card is plugged into the mobo, you will be powering the slot from one PSU and the external power from another. Bottom line is that what you are suggesting is not a great idea. It may work OK, but it's really asking for trouble.
 
I'm no expert here, but at a high level it's not ideal. You're adding complexity for a tiny bit of cost savings. It's going to be louder, potentially mess with airflow, double the chance of a hardware failure, give you headaches when trying to diagnose a problem, etc. Keep it simple; it's probably worth the extra cost to sell or toss your old PSU and buy a new one. They're not that expensive and will probably outlast the whole build along with the case.
 