What's y'all's read on the 12-pin PCIe rumor surrounding the 3080ti?

Also, probably a good time to bring back this meme.

[attached meme image]
 
My initial reaction to a new power connector was that there'd be no positives, but after reading about it more, I understand.

There are two main problems with our current system: needing to use multiple power cables for high-end GPUs, and power supply manufacturers violating spec by daisy-chaining multiple PCI-E connectors on a single cable (which is a major source of stability problems with 2080 Tis, from what I've read).

So the new 12-pin connector is designed to be far more robust, using higher quality electrical connections and thicker wires, allowing it to handle 4x as much power as an 8-pin. A single one of these would suffice even on the most powerful GPUs, simplifying board design with fewer pins and connectors and keeping your case cleaner. It could also potentially be split into two 6-pin blocks for use with lower-powered GPUs (it would not be compatible with current 6-pins, though).
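Quick back-of-the-envelope sketch of what those numbers imply (assuming the 8-pin's official 150W rating and 4x that, i.e. 600W, for the 12-pin; the per-pin figures are my own arithmetic, nothing official):

```python
# Rough connector math: official 150 W rating for the 8-pin, and the
# rumored 4x figure (600 W) for the 12-pin. Everything else here is
# back-of-the-envelope assumption, not spec.

VOLTS = 12.0

def per_pin_amps(total_watts: float, power_pins: int) -> float:
    """Current each +12V pin must carry to deliver total_watts."""
    return total_watts / (power_pins * VOLTS)

print(f"8-pin  (3 +12V pins @ 150 W): {per_pin_amps(150, 3):.2f} A/pin")  # ~4.17
print(f"12-pin (6 +12V pins @ 600 W): {per_pin_amps(600, 6):.2f} A/pin")  # ~8.33

# 600 W = 4 x 150 W, so one of these would replace four in-spec 8-pins,
# at the cost of each pin carrying roughly double today's current.
```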

Chances are, if it does come to fruition, it will only be on Ampere Founders Edition cards, with an included adapter. In the meantime, nVidia will be working to get the connector added to the ATX power supply spec (or maybe even Intel's new ATX12VO?) to standardize it for future use.

What does AMD actually have? Some rumored “BIG NAVI OMG” that is gonna blow away everything Nvidia has? Where have we heard that before...
nVidia themselves are actually concerned about AMD beating them this time around. An inside source at nVidia reported that they're expecting AMD to release a ~300W GPU that's 40-50% faster than a 2080 Ti. AMD's keeping such a tight lid on GPU development this year that this is literally the only information we have, but if nVidia's expecting it, it's probably going to happen. Of course, the exact numbers are always up in the air and subject to change at any time due to chip binning and clock speed changes.
 
No fucking way they release a new card that makes us buy a whole new power supply to run it.

If it is indeed a 12-pin of sorts, they'll just include some sort of adapter that'll allow everyone to run their card perfectly fine.
 
Seems odd yet understandable at the technical level, but my gut tells me this is likely server-related and not consumer hardware.

OR this is related to supporting Intel's new PSU spec they're trying to push, but where 2x 6-pins from standard-spec PSUs can also be plugged in.

Those are my guesses.
 
Adapter and done. Simple.
If I read the spec right, and the assumptions on power delivery for this new connector are true, then in order to use an adapter I will need to have two 8-pin connectors available from my PSU to begin with.

So... why wouldn't Nvidia just put two 8-pin plugs on their card? Like everyone else has done for higher wattage cards forever?
 
It's not guaranteed an adapter would work; we just don't know at this point.

No, but we can make educated guesses. When was the last time a new power connector came out where an adapter wouldn’t work? My first PC build was in the Pentium 3 days and I can’t remember a single one in all this time. Unless nvidia is also introducing a fundamental change to the ATX standard and this connector uses an input voltage that’s not 12, 5 or 3.3 volts, which is very highly unlikely, an adapter will definitely work.
 
If I read the spec right, and the assumptions on power delivery for this new connector are true, then in order to use an adapter I will need to have two 8-pin connectors available from my PSU to begin with.

So... why wouldn't Nvidia just put two 8-pin plugs on their card? Like everyone else has done for higher wattage cards forever?

Problem with the 8-pin is that the only difference between it and the 6-pin is 2 extra ground wires. It uses the same number of hot leads that actually supply the power as the 6-pin. I’m willing to bet this new connector increases those hot leads.
 
12-pin? What does that do that two 8-pins can't? Assuming 2 pins are ground...

Those are 10 power pins. Two 8-pins would have 12 power pins anyway.

Not anti-NV here. Just wondering, since I haven't read much about this topic.
 
Problem with the 8-pin is that the only difference between it and the 6-pin is 2 extra ground wires. It uses the same number of hot leads that actually supply the power as the 6-pin. I’m willing to bet this new connector increases those hot leads.
The difference between the 6- and 8-pin is one more +12V, one more ground, and a second sense. The 6-pin is only supposed to have two +12V lines and grounds, plus a sense. But most implementations just used the sense wire as a return for another +12V.
 
12-pin? What does that do that two 8-pins can't? Assuming 2 pins are ground...

Those are 10 power pins. Two 8-pins would have 12 power pins anyway.

Not anti-NV here. Just wondering, since I haven't read much about this topic.
You have to have a ground for every power pin, so at most 6 power pins. However, the 12-pin ups the power limit to 600W, so you’d need four 8-pins to provide the same capability (and still be within spec).
 
12-pin? What does that do that two 8-pins can't? Assuming 2 pins are ground...

Those are 10 power pins. Two 8-pins would have 12 power pins anyway.

Not anti-NV here. Just wondering, since I haven't read much about this topic.

I would bet more than 300W.
 
Literally zero chance a GPU with a new power connector will require a brand new PSU. That would be a disaster for Nvidia sales-wise.

The GPUs will have an adapter bundled with them if true.
 
You have to have a ground for every power pin, so at most 6 power pins. However, the 12-pin ups the power limit to 600W, so you’d need four 8-pins to provide the same capability (and still be within spec).
Something doesn't make sense with this math. An 8-pin has 3 +12v pins, 3 VREF pins, and two sense pins, and can supply 150W of power. If the 12-pin connector has six +12v pins and six VREF pins, and no sensing at all, it could deliver 300w of power at most - unless the 12-pin connector also implements thicker gauge wiring to achieve the increased ampacity, in which case an adapter would not work (safely).

Of course, that's assuming that 150w through a triad of 18ga wires is pushing the ampacity limit to begin with, which it isn't. If we assume an even split with 50 watts for each conductor, at 12v, that's only 4.16a per. The max "chassis wiring" ampacity for an 18ga conductor is 16a. If we go by that maximum, a single 8-pin should already be capable of delivering 576 watts.

If there's some spec for max ampacity per wire in an ATX PSU, I can't see it, but it's got to be somewhere between this measly 4a per conductor and the full-bore 16a per conductor. Maybe it's derated for temperature or something.

In either case, if they aren't messing with the wire gauge or the voltage, then this supposed 12-pin connector cannot be capable of delivering more than two 8-pins. Which brings me back to my original question: why?

As many others have noted, it's definitely not that big a deal. It's, at worst, an annoying change made for no reason.
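Same arithmetic as a quick script, if anyone wants to poke at the assumptions (the 16A chassis-wiring figure is from a standard ampacity table; the actual ATX derating is the unknown):

```python
# Max power through an 8-pin's three +12V conductors under different
# per-conductor current assumptions -- all back-of-the-envelope.

VOLTS = 12.0
PINS = 3  # +12V conductors in an 8-pin PCIe connector

scenarios = {
    "150 W spec, actual draw": 150 / (PINS * VOLTS),  # ~4.17 A
    "9 A per-pin limit":       9.0,
    "16 A chassis-wiring max": 16.0,
}

for label, amps in scenarios.items():
    watts = amps * VOLTS * PINS
    print(f"{label:24s} {amps:5.2f} A/pin -> {watts:4.0f} W")
# -> 150 W, 324 W, and 576 W. The 150 W spec sits far below what the
#    copper itself could carry, so it must be heavily derated.
```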
 
I haven't seen anybody say anything about it being harder to plug in and unplug a 12-pin vs an 8-pin or 6-pin, but it definitely would be. Some of the 8-pin cables are a bit of a pain to get seated all the way already; a 12-pin would make it that much harder.

Maybe Nvidia will also start selling or packaging stand-alone PSUs for the new GPUs.... That would be silly as well.
 
I don't really care one way or the other what connector it uses; I don't know why that's bothering some people. I'm sure it will come with an adapter if it does require a new type of connector, so the point is moot. Your power supply should be of sufficiently high quality before considering a new high-end card anyway, and I'm sure my 1000W unit will do just fine, with a possible adapter or not.
 
Nvidia can make money off of a special non-standard power connector with different configurations, bends, 45-degree angles, colors, RGB with an Nvidia logo or RTX or your video card model... $$$$$$$$ :D

So one can magically increase the power by feeding two 8-pin power connectors into an uber 12-pin adapter? Nah, you're still limited by the 8-pin connectors, which have more wires to begin with. I hope this is just a rumor, because it sounds rather stupid; but then again it could be about a unique location on the card where the adapter (with options as well) may fit better, given the oddball 400W+ cooler design.

AMD's 400W+ cooler design was quiet and worked rather well on the Vega 64 LC. I pushed close to 500W out of that card and it kept it cool enough (while not gaining much of anything), and it was indeed very quiet at normal core settings with a cooler fan curve.
 
I don't like the idea of using adapter cables, too many horror stories.
I'm hoping the custom models (non-FE) will have standard 6/8 pin connectors. Otherwise, using a 12-pin adapter is going to be a hard sell for me.
 
Something doesn't make sense with this math. An 8-pin has 3 +12v pins, 3 VREF pins, and two sense pins, and can supply 150W of power. If the 12-pin connector has six +12v pins and six VREF pins, and no sensing at all, it could deliver 300w of power at most - unless the 12-pin connector also implements thicker gauge wiring to achieve the increased ampacity, in which case an adapter would not work (safely).

Of course, that's assuming that 150w through a triad of 18ga wires is pushing the ampacity limit to begin with, which it isn't. If we assume an even split with 50 watts for each conductor, at 12v, that's only 4.16a per. The max "chassis wiring" ampacity for an 18ga conductor is 16a. If we go by that maximum, a single 8-pin should already be capable of delivering 576 watts.

If there's some spec for max ampacity per wire in an ATX PSU, I can't see it, but it's got to be somewhere between this measly 4a per conductor and the full-bore 16a per conductor. Maybe it's derated for temperature or something.

In either case, if they aren't messing with the wire gauge or the voltage, then this supposed 12-pin connector cannot be capable of delivering more than two 8-pins. Which brings me back to my original question: why?

As many others have noted, it's definitely not that big a deal. It's, at worst, an annoying change made for no reason.
From what I’m seeing, it’s going to require 16ga wire.

In most cases, it’s not the wire that limits ampacity, it’s the pins. If I remember right, the pins used in PCIe connectors are limited to 9a each.
 
From what I’m seeing, it’s going to require 16ga wire.

In most cases, it’s not the wire that limits ampacity, it’s the pins. If I remember right, the pins used in PCIe connectors are limited to 9a each.
That's a good point. But in that case, at 9a per pin, the 8-pin on 18ga wire is still capable of reaching 324w, theoretically.
 
From what I’m seeing, it’s going to require 16ga wire.

In most cases, it’s not the wire that limits ampacity, it’s the pins. If I remember right, the pins used in PCIe connectors are limited to 9a each.
18 AWG can go up to 20 amps @ 1.8ft (~21"), so 16 gauge is not a requirement to hit the proposed 8.5-amp new spec... Then again, as you mentioned, the connector they are using already far exceeds the spec (the 6-pin spec is ~2 amps). The specs allowed very cheap connectors to be used without melting. In reality, an 18ga PSU cable with a good connector can easily supply 300W on a single 6-pin; it's just that they are now mandating the use of decent quality ;). If you tried to pull 300W through a low-quality connector it would create enough heat to melt the plastic and short out your pins. To avoid this possibility they will have to ensure people aren't trying to pull too much through a cheap PSU.

PS: The biggest benefit of 16ga wire is reduced loss/resistance in the wiring. This keeps heat down and voltage up.
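To put rough numbers on that PS (the AWG resistance values are standard copper-wire table figures; the 8.5A and ~2ft of cable are just illustrative assumptions):

```python
# I^2*R loss per +12V conductor, 18ga vs 16ga, assuming ~2 ft of wire
# and the ~8.5 A per-pin figure floated above. Resistance values are
# standard annealed-copper AWG table numbers (ohms per foot).

OHMS_PER_FT = {"18ga": 6.385e-3, "16ga": 4.016e-3}
AMPS, LENGTH_FT = 8.5, 2.0

for gauge, ohms_ft in OHMS_PER_FT.items():
    r = ohms_ft * LENGTH_FT       # resistance of one conductor
    loss_w = AMPS**2 * r          # power burned as heat in the wire
    drop_v = AMPS * r             # voltage sag at the card end
    print(f"{gauge}: {loss_w:.2f} W lost, {drop_v:.3f} V drop per conductor")
# 18ga: ~0.92 W / ~0.109 V; 16ga: ~0.58 W / ~0.068 V -- less heat and
# a higher voltage at the connector, exactly the stated benefit.
```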
 
That's a good point. But in that case, at 9a per pin, the 8-pin on 18ga wire is still capable of reaching 324w, theoretically.
The 6-pin is capable of the same amount, since they both have the same number of +12V pins and physically use the same pins. They probably made the spec that low not thinking we'd be ramping up power this quickly, but I fail to see why they decided 2 amps in a Molex connector was a good spec in the first place. They could have easily started at 4-5 amps on a 6-pin and used the same connectors, as can be seen from the advent of the 8-pin, which uses the exact same 6 wires for twice the power with no changes besides making sure the connector you're using isn't horribly built.
 
The 6-pin is capable of the same amount, since they both have the same number of +12V pins and physically use the same pins. They probably made the spec that low not thinking we'd be ramping up power this quickly, but I fail to see why they decided 2 amps in a Molex connector was a good spec in the first place. They could have easily started at 4-5 amps on a 6-pin and used the same connectors, as can be seen from the advent of the 8-pin, which uses the exact same 6 wires for twice the power with no changes besides making sure the connector you're using isn't horribly built.
I figure it must be derated for temperature or in anticipation of poor quality control. In any case, this new 12-pin can, at most, deliver the same as two 8-pins without the sense leads. Whatever the spec actually is, 3+3=6. The more I think about it, the less sense it makes. I figure at this point that it's got to be an internal connector if the rumor is real at all; the cooling solution will need to deliver all the power to the oddly-shaped PCB if those speculations are accurate, or else Nvidia would have to put the power connectors in the middle of the front of the card.
 
A 12-wire cable is going to be fat enough to start being a PITA to route through the case, due to stiffness when trying to bend it. I'm hoping this is either fake, or just something for their proprietary form-factor server cards.

With the PCIe spec itself topping out at 300W/card (225W from external wires, 75W from the slot), the 600W cable capacity claim seems silly for anything consumer-focused, and is a big part of why I'm dubious. Also, needing to connect 4x 8-pin PCIe plugs to safely deliver the rated power level would make any sort of adapter dongle an utter mess.
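For reference, here's how the current 300W ceiling breaks down (standard PCIe power allocations):

```python
# How a card reaches the PCIe spec's 300 W per-card ceiling today.
BUDGET_W = {"x16 slot": 75, "6-pin aux": 75, "8-pin aux": 150}
print(sum(BUDGET_W.values()), "W total")  # 300 W: the spec maximum

# A 600 W connector plus the 75 W slot would be 675 W, which is nowhere
# near anything in the consumer spec -- hence my skepticism.
```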
 
The 6-pin is capable of the same amount, since they both have the same number of +12V pins and physically use the same pins. They probably made the spec that low not thinking we'd be ramping up power this quickly, but I fail to see why they decided 2 amps in a Molex connector was a good spec in the first place. They could have easily started at 4-5 amps on a 6-pin and used the same connectors, as can be seen from the advent of the 8-pin, which uses the exact same 6 wires for twice the power with no changes besides making sure the connector you're using isn't horribly built.
The original spec for the 6-pin was two power wires (which was an increase from the single +12V provided by the 4-pin Molex connectors that were being used). Remember, this was being drawn up in the late '90s to early 2000s; 150W total card power seemed like a lot at the time.
 
The original spec for the 6-pin was two power wires (which was an increase from the single +12V provided by the 4-pin Molex connectors that were being used). Remember, this was being drawn up in the late '90s to early 2000s; 150W total card power seemed like a lot at the time.

For context, for anyone who wasn't building PCs 15-20 years ago: the parallel PCI slot was limited to 10W, and consumer AGP card slots to 48W (a workstation variant could do 110W), with even a supplemental 4-pin Molex connector being uncommon. (Looking through images on Wikipedia, the only nVidia AGP cards I see with them were some top-end 5xxx series cards - the last generation before PCIe.) Most cards were either passively cooled or had a single 40-60x10mm fan that could only move a little bit of air around.
 
The original spec for the 6-pin was two power wires (which was an increase from the single +12V provided by the 4-pin Molex connectors that were being used). Remember, this was being drawn up in the late '90s to early 2000s; 150W total card power seemed like a lot at the time.
The 6-pin is 3 power and 3 ground, with 2 amps per pin, giving it 75W. The 8-pin is 3 power and 5 ground, with 4 amps per pin, giving it 150W. The number of pins for power did not change; just the amps they could carry was doubled. Since current 6-pin connectors are typically just 8-pins with the last two separated, it stands to reason the 3 power lines can supply 4 amps each just like when the other 2 grounds are plugged in (aka 150W). It is not 2 +12V lines like you're thinking. If they had made the spec match what the connector could handle, then a 6-pin could handle around 9 amps per pin, which would have given it over 325W. The 8-pin could then have added one more power pin and hit 432 watts. Of course, neither of these is 600W like the single 12-pin would be, but dual 6-pins would have been roughly equivalent. They need a new connector that's incompatible so people don't plug in low-spec connectors that meet the old spec (2 or 4 amps) and end up with melting connectors. It's more of a safety issue, since they made the spec so low to start with. I have a hard time imagining they didn't see GPUs breaking 150 watts, but I'm not on the board or a member, so I can't really say why they made the decision.
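Sanity-checking the numbers in that post (pin counts and per-pin amps as stated above, not pulled from the actual spec document):

```python
# Connector wattage = (+12V pins) x (amps per pin) x 12 V, using the
# figures from the paragraph above (not from the spec document itself).

def watts(power_pins: int, amps_per_pin: float) -> float:
    return power_pins * amps_per_pin * 12.0

print(watts(3, 2.0))  # 6-pin at spec:              72 W (~75 W rating)
print(watts(3, 4.0))  # 8-pin at spec:             144 W (~150 W rating)
print(watts(3, 9.0))  # 6-pin at the pin limit:    324 W
print(watts(4, 9.0))  # 8-pin w/ one extra +12V:   432 W
print(watts(6, 8.5))  # rumored 12-pin:            612 W (~600 W rating)
```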
 
I figure it must be derated for temperature or in anticipation of poor quality control. In any case, this new 12-pin can, at most, deliver the same as two 8-pins without the sense leads. Whatever the spec actually is, 3+3=6. The more I think about it, the less sense it makes. I figure at this point that it's got to be an internal connector if the rumor is real at all; the cooling solution will need to deliver all the power to the oddly-shaped PCB if those speculations are accurate, or else Nvidia would have to put the power connectors in the middle of the front of the card.
Yeah, it had to be for cost savings or something, to allow cheap/low-cost PSUs to meet spec. Hindsight and all, but the only way to fix it is a new connector with higher ratings. It makes sense that we'll eventually need a new spec, so I could see this rumor being true. They can't just up the spec on 6-pin lines, due to backwards compatibility causing issues with lower-quality parts. A new connector is/would be warranted.
 
We survived the Molex -> PCI-E power connector change; we'll survive moving to a single 12-pin connector. There is not a chance they'd change up the power connector without making it adaptable. If it comes with a new connector, expect a 2x 6- (or 8-) pin to 1x 12-pin adapter in every box for years to come. Absolute non-issue.

I actually sort of embrace it... the 6+2 pin connectors are a pain in the ass. I absolutely welcome the idea of a single, non-break apart cable for all future GPUs.
 
The 6-pin is 3 power and 3 ground, with 2 amps per pin, giving it 75W. The 8-pin is 3 power and 5 ground, with 4 amps per pin, giving it 150W. The number of pins for power did not change; just the amps they could carry was doubled. Since current 6-pin connectors are typically just 8-pins with the last two separated, it stands to reason the 3 power lines can supply 4 amps each just like when the other 2 grounds are plugged in (aka 150W). It is not 2 +12V lines like you're thinking. If they had made the spec match what the connector could handle, then a 6-pin could handle around 9 amps per pin, which would have given it over 325W. The 8-pin could then have added one more power pin and hit 432 watts. Of course, neither of these is 600W like the single 12-pin would be, but dual 6-pins would have been roughly equivalent. They need a new connector that's incompatible so people don't plug in low-spec connectors that meet the old spec (2 or 4 amps) and end up with melting connectors. It's more of a safety issue, since they made the spec so low to start with. I have a hard time imagining they didn't see GPUs breaking 150 watts, but I'm not on the board or a member, so I can't really say why they made the decision.


What is implemented and what is specified are two different things. The original PCIe 1.0 specification called for two +12V lines, because there was only room for two grounds due to the sense pin (well, three, but then you were out of pins for the third +12V). The purpose of the sense pin was to tell the add-in card that extra power was available. In reality, you could just ground the controller's sense pin to any of the plug's ground lines and know you had extra power available (that doesn't work with two sense pins, though). So pin 5 becomes an extra ground, allowing for a third +12V on pin 2.


As for why they only went with 75W: more than likely they set the power target for the slot first, doubled that with an auxiliary connector, and then looked for a part that would work.
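Here's a little sketch of the pin shuffle being described (pin positions as per the post; illustrative, not an official pinout):

```python
# 6-pin PCIe aux pinout: original spec vs. what most PSUs actually wired,
# per the description above. Pin numbers here are illustrative.

SPEC_6PIN   = {1: "+12V", 2: "N/C",  3: "+12V", 4: "GND", 5: "sense", 6: "GND"}
COMMON_6PIN = {**SPEC_6PIN, 2: "+12V", 5: "GND"}  # sense repurposed as ground

for name, pinout in (("spec", SPEC_6PIN), ("common", COMMON_6PIN)):
    n = sum(v == "+12V" for v in pinout.values())
    print(f"{name}: {n} +12V lines")  # spec: 2, common: 3
```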
 
The power numbers don't really line up with the A100 PCIe... hint-hint.


Yeah, if they were launching this seriously, they would have announced it a month ago with the A100 PCIe. This is the finest grade-A horseshit rumor.
 
I haven't seen anybody say anything about it being harder to plug in and unplug a 12-pin vs an 8-pin or 6-pin, but it definitely would be. Some of the 8-pin cables are a bit of a pain to get seated all the way already; a 12-pin would make it that much harder.

Maybe Nvidia will also start selling or packaging stand-alone PSUs for the new GPUs.... That would be silly as well.

I suspect you haven't seen it mentioned because that is beyond a trivial complaint.
 
I suspect you haven't seen it mentioned because that is beyond a trivial complaint.
To me, "having two cords is a hassle and I wish there were only one" is beyond a trivial complaint, but I've seen it mentioned as a reason for the change. Anyone with this tier of GPU will have a PSU equipped to supply two eight pins.
 
Yeah, if they were launching this seriously, they would have announced it a month ago with the A100 PCIe. This is the finest grade-A horseshit rumor.

Welp, surprise... it's not a rumor; it's likely OEM/reference only.

(starts off at the part talking about the 12-pin connector)
 
I wonder if the ultimate goal is to avoid issues due to users using 6- and 8-pin splitters to power higher-end cards. I seem to vaguely recall that some of the RX 5700 crash issues were being blamed on how the auxiliary power was being delivered to the card.
 