Feed 3080 with separate 8-pin inputs from PSU?

sphinx99 ([H]ard|Gawd), joined Dec 23, 2006, 1,059 messages
My system is a 3960X TR3 with an EVGA 3080 XC3 Ultra. I have a Corsair AX1000 PSU, and the system was utterly stable until I added the RTX. The problem I experienced was a very intermittent, sudden power-off of the entire system, as though a fuse had tripped. It happened rarely, only during extended gaming (typically with some stuff going on in the background) and only after some hours--in fact, only in CP2077.

I had a single 8-pin cable from the PSU that split into two 8-pin connectors, with each of the two daisy-chained connectors powering the RTX. A couple of days ago I switched to running a separate cable to each 8-pin connector on the RTX, and thus far I have not seen the issue re-emerge. However, it's not a clean A-B-A root cause and solution, since the issue was happening so infrequently. I wanted to check in here with those more knowledgeable than I am: is there reason to believe that I could over-saturate a single connector (rail?) on the PSU?
 
I do know that many of the recent cards can spike power demand faster than a lot of older PSU designs were built to handle. Sustained power draw != power spike, even at the same wattage.

Moving to separate cables probably fixed this issue.

This is all a guess, of course, based on previous reading.

If this solution works for you, I wouldn't be worried at all.
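To put rough numbers on the spike-vs-sustained point, here's a quick Python sketch; every wattage in it is made up purely for illustration:

# Hypothetical millisecond-scale power samples: mostly modest draw, a few brief spikes.
samples_w = [320] * 990 + [550] * 10

avg_w = sum(samples_w) / len(samples_w)
peak_w = max(samples_w)

print(f"average draw: {avg_w:.0f} W")  # ~322 W -- looks fine on paper
print(f"peak draw:    {peak_w} W")     # 550 W -- what the PSU's protection actually sees

The average says the card is well within budget, but protection circuits react to the instantaneous peak, which is why a PSU can hard-cut on a card whose "rated" draw it handles easily.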
 
Just look up single-12V-rail PSU issues. Funny, this isn't something you hear about much until you get a new, incredibly power-hungry card like these; then you have to make sure the rails are split evenly, or you'll get exactly the issues you're describing. The cure for these types of problems is a single-12V-rail PSU, which is always labeled as such. Cheers
 

The single-rail vs. multi-rail argument is, for the most part, baseless. The vast majority of PSUs are single-rail designs with multiple OCPs added to create the additional "rails," if they have multiple rails at all. Nearly all quality multi-rail PSUs balance the connectors across the rails such that they wouldn't get overloaded if used as intended. It can in fact be argued that the multi-rail design was safer, because multiple lower-power OCPs monitored each set of connectors instead of one large OCP monitoring everything. The only consumer PSU I can think of with a true multi-rail design (genuinely separate 12V sources, not just separate OCPs) is the old HX1000 from 2008.

That said, almost every modern design is single rail. A few bad apples ruined the reputation of multi-rail PSUs, particularly early on, when power-hungry 12V GPUs were just arriving on the scene and PSU manufacturers hadn't properly accounted for them yet. Implementing the OCPs meant more board space and higher cost for a negligible benefit that the vast majority of consumers didn't care about, so going single rail saved PSU companies money.

The OP's PSU is a modern single-rail PSU, so rails have nothing to do with it. What may have happened is that transient loads caused an under-voltage situation at the PSU connector, and distributing the load over a second connector kept the voltages more stable. However, that will generally manifest as instability before it causes a shutdown.
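To illustrate that under-voltage mechanism with round numbers (both the cable resistance and the transient current below are assumptions, not measurements), Ohm's law on the cable run shows why a second cable keeps the 12V closer to nominal:

# Assumed values for illustration only.
cable_r = 0.015      # ohms, round-trip resistance of one cable run (assumption)
transient_a = 40.0   # amps, hypothetical GPU transient

droop_one_cable = transient_a * cable_r          # all current down one run
droop_two_cables = (transient_a / 2) * cable_r   # current split across two runs

print(f"one cable:  -{droop_one_cable:.2f} V at the card")   # 0.60 V of droop
print(f"two cables: -{droop_two_cables:.2f} V at the card")  # 0.30 V of droop

Halving the current per run halves the resistive droop, which would be consistent with the OP seeing fewer shutdowns after splitting the load.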
 

I'm using an old AX750 PSU with my 3090.
It needs three power connections, and a problem with one of them made a game shut the PC down.
These PCIe power cables have seen a lot of use; one of the four PCIe power connectors was too worn.
Swapping the bad one out for the last remaining unused one fixed it.
I'm not worried it will happen again, but if it does I'll buy a new PCIe power cable.

You're in the same boat as me now: a better PSU with the same connections.
It will be fine.
 
Yeah, I should have started by looking up the PSU to see if it was a single 12V rail. Derp. lol
 
Running two separate cables instead of using the daisy chain is probably a good idea. If the PSU-side pins are the same type as the video-card-side pins, and the video card is pulling more current than the spec for a single connector, you're over spec on the PSU side. Pulling more current than the pins can handle leads to heat, which leads to more resistance (which leads to more heat... etc.), and lower voltage at the card. That could be enough to crash the card, I suppose. My computers aren't [H]ard enough to see this, but I see similar things in pinball machines: if you don't correct the issue, eventually the heat burns the plastic housing and unsolders the connectors. More wires are usually the proper remedy (but pinball makers like cutting corners, and problems that take years to show up don't get fixed).
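You can put rough numbers on that heat/resistance loop; the contact resistances here are assumptions for the sake of illustration:

# Dissipation in a connector contact scales as I^2 * R.
current_a = 9.0          # amps, near a Mini-Fit Jr pin's ceiling
good_contact_r = 0.010   # ohms, healthy crimp (assumed)
worn_contact_r = 0.050   # ohms, worn or oxidized contact (assumed)

print(f"healthy pin: {current_a**2 * good_contact_r:.2f} W")  # ~0.81 W
print(f"worn pin:    {current_a**2 * worn_contact_r:.2f} W")  # ~4.05 W in one tiny pin

A five-fold rise in contact resistance means five times the heat concentrated in the same pin, which is how the runaway (and the burnt housings) gets started.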
 
I'm not sure he means daisy-chained; I think he had a bad connection like I did.
I have the smaller version of the very same PSU. The PCIe power cables come in pairs, with two connecting to one connection block on the PSU.
But they run parallel to each other as separate cables; they are not daisy-chained.
It's possible there are two designs and his is different, but I wouldn't have thought Corsair would sell a high-end PSU with that kind of cable.
 

The PCI-E spec vastly underrepresents the actual hardware limits. The Mini-Fit Jr pin is rated for up to 9 amps. Three 12V circuits in each 8-pin connector mean it's technically capable of 27 amps, or 324 watts. Load spikes combined with sub-optimal contacts could push the PSU-side connector beyond what it's capable of if two 8-pin connectors are run off one cable, assuming the PSU-side PCI-E connectors use the same pins as the GPU side.
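Spelled out as a quick sketch (the 9A pin rating is from above; the 150W figure is the commonly cited PCI-E spec allocation per 8-pin connector):

pin_rating_a = 9.0   # Molex Mini-Fit Jr per-circuit rating
circuits = 3         # 12 V circuits in an 8-pin PCIe connector
rail_v = 12.0
pcie_spec_w = 150.0  # PCI-E spec draw per 8-pin connector

hardware_limit_w = pin_rating_a * circuits * rail_v  # 324 W
daisy_chain_w = 2 * pcie_spec_w                      # 300 W through one PSU-side connector

print(f"hardware limit per 8-pin:         {hardware_limit_w:.0f} W")
print(f"two daisy-chained 8-pins at spec: {daisy_chain_w:.0f} W")

At spec draw alone, a daisy-chained pair already sits near the connector's hardware ceiling, so spikes plus a marginal contact leave essentially no headroom.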
 
Hey all, OP here. I have some updates and at least for me, not so good news.

The sudden power-off behavior has re-emerged; thinking that switching to parallel 8-pin cables might help proved to be a red herring. The behavior is still the same: a sudden power-off under fairly heavy load while gaming. For reference, it does not happen when I OC my 3960X (~4.2-3.4GHz all-core, FurMark CPU burner), but if I add the GPU burner on top of that, I get the shutdown.

Here's what I've tried that didn't work:
- Removing all power-hungry hardware (three 14TB drives and a quad 10GbE adapter). Still happens.
- Trying different circuits at the wall, to make sure I wasn't dealing with a brownout scenario unique to a particular circuit in my home.
- Switching from the wall to a sinusoidal UPS (CyberPower 1500). No change (didn't get better, didn't get worse).
- Seriously bumping up cooling. I thought I might have an overtemp issue somewhere, so I set all fans, including the CPU watercooling pump and the GPU fans, to 100%. It was loud as heck, but the system can still be coaxed into a reset during heavy gaming.

Here's what's worked:
- Going into EVGA Precision X, underclocking the GPU by ~60MHz, and reducing the GPU power ceiling from 100% to ~90%. The issue *definitively* goes away at that point.

I'm open to any ideas on next steps. Again, the PSU is an AX1000, about a year old and never particularly pushed.
 
Best guess is you have a faulty power supply. You could also have a faulty GPU. Does it also shut down when running the GPU burner by itself at stock clock speed and power ceiling?

Hard drives are about 15 watts each when actually running, and 5 watts at idle. I doubt your 10GbE adapter uses more than 25 watts. That's a drop in the bucket for a 1000-watt PSU.
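Putting those numbers together as a quick budget check (wattages as estimated above):

drives_w = 3 * 15  # three 14TB drives, ~15 W each when active
nic_w = 25         # generous estimate for the quad 10GbE adapter

print(f"hardware removed: ~{drives_w + nic_w} W")  # ~70 W
print(f"PSU rating:        1000 W")

If ~70 W of margin made the difference, the system would already be riding the PSU's limit; the shutdowns point at transient spikes or a fault rather than steady-state capacity.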
 
Indeed, 3080s have to be on all independent lines; doubling up is a major no-no, from what I've heard.
 