What's y'all's read on the 12-pin PCIe rumor surrounding the 3080ti?

I don't mind the idea of the next gen being more power hungry. If it's more advanced silicon AND more OF it, I'm all for it. That's what my custom loop is for.

Still, I find the idea of introducing a new PCIe power connector a little hard to swallow. I'm likely going to be in the market for a new PSU anyway - my current 650W probably won't be sufficient - so I don't mind needing more wattage. However, not being able to buy that supply in advance because the requisite connector doesn't exist yet is pretty annoying.

AIBs have been doing 2X8 and even 3X8 for years now, and I'm willing to bet they will continue to do so even if Nvidia drops a new connector.

My hope is that it's not true, or - as one article I read suggested - that the connector will be an interface between the PCB and the cooling solution, where the cooling solution has normal 8-pin and/or 6-pin connectors on the exterior.
 
Even if it's true, it's a huge nothing burger - who cares? A simple adapter included in the box, like they used to do when 6-pin and 8-pin first got introduced, would solve the problem. Any current PSU with two 6-pin connectors available can be tied together to supply this. It's maybe $3 more worth of material, and almost zero effort for anyone to plug in. Nobody needs to buy a new PSU to support this; the sky isn't falling.
 

Agreed - I also see the card coming with the needed adapter if they do come out with this new 12-pin connector on the high-end Nvidia GPUs, so I really don't see why all the fuss is being made. Some purists might want a PSU that provides this new cable/plug natively... if so, they can decide to go with a new PSU. Most folks building a brand-new rig would probably be buying a new PSU anyway. PSU companies just lap this stuff up when they have something new/different/novel to market and drive sales. Life goes on.
 
Those same purists most likely have a modular PSU, which can easily accept a new cable if the PSU isn't too old and the manufacturer (or a third party) feels like making one - or, worst case, they can build their own connector like they already do when they sleeve their wires. Since it's a similar enough connector, it would probably be enough to just de-pin two 6-pin connectors and push the terminals into the 12-pin in the right order. People are making it out like everyone will have to buy a new PSU, but in reality very few will have to, unless their PSU is too weak to begin with.
 
Still points to the fact that this new card from Nvidia is going to be power hungry if this rumor is true.
 
I don't see it as an issue.
Perhaps a good long term thing to think about now that we see where power is really going in most systems - the GPU.
Also, easily adapted into older systems.

Sure, it does imply at least one company wanting to increase headroom on a halo product. I think that's fine. Push the highest-end up further - there's a market for that.
Although for most, it may just be that you get one nice connector instead of running two for even mid-range cards. Also good.
 
I hope it's power hungry. Nvidia needs to abandon this notion that the target market for desktop gaming GPUs gives a crap about power efficiency.

You're all correct though - for a person who isn't looking to upgrade PSUs anyway, this is a pretty minor thing. An adapter will easily convert two 6-pins into this new form factor. I am one of those "purists" though, who would prefer my PSU to support the form factor natively, or at least be upgradeable with modular cables so it seems that way. Here's hoping CableMod is quick on the adoption as well!
 
If you have to buy a new PSU I'm gonna pass on that. However, I feel like that is highly unlikely.

As long as there is a simple adapter that is included, I don't see any issue.
 
No one wants another GTX 480 or R9 29x equivalent space heater in their box.
 
My EVGA 1300 Supernova G2 will be able to handle it, no problem.

I was running 3-way 980Ti SLI on it 5yrs ago. lol
 
No way in hell would Nvidia sell a consumer-level GPU that requires a new power supply. Virtually no one would buy it.
 
If the heat is there because the performance is there, then yes - yes I do want that space heater in my box. I paid good money for a custom loop and would LOVE for Nvidia to make a card with all the stops pulled out so I can really use it.
 
I am still waiting for a logical reason why a 'new' 12 pin power connector will deliver more power versus the 'old' 6+6, 6+8 or 8+8 adapters if the gauge of the wire stays the same.
An 8-pin only has 3 power wires and 5 ground wires. The 12-pin has 6 and 6, so it's twice the power lines, which leads to twice the power (about 4 amps per pin). This 12V connector can (in theory) carry up to 600 watts.

The spec on a 6-pin is only 75W (although most any recent implementation can carry 150W just like an 8-pin, since they both have the same number of +12V lines), with the 8-pin being 150W. So in theory dual 8-pins can carry 300W, while a single 12-pin can carry double that. This makes some sense, as dual 6-pins won't be enough to feed a 350W GPU (if used within spec), but dual 8-pins plus 75W of PCIe slot power will be just enough (150 + 150 + 75 = 375W). If they are indeed making a 600W GPU, this would be much better than 4x 8-pin connectors.

They broke compatibility because the 8-pin has only 3 +12V and 5 ground pins, meaning one ground pin is on the same side as the power. This connector keeps +12V on one row and the grounds on the other. Not sure WTF they were thinking with 8-pin connectors, honestly.

PS, just wanted to make sure I answer your questions:
The OG 6-pin spec is only ~2 amps per pin; they increased it to 4 amps per pin for the 8-pin but didn't increase the number of +12V lines. Since most connectors use the same terminals for 6/8-pin, both can actually handle 4 amps and 150 watts, but that's not "spec" even if it's true in practice.
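
For anyone who wants to sanity-check those numbers, here's a minimal sketch of the per-connector math being argued over in this thread. The amps-per-pin values are the figures quoted in these posts (and refined a few replies down), not official spec tables:

```python
# Rough per-connector power math, using the figures tossed around in this thread.
# Assumed values (not official spec numbers): 12 V rail, amps-per-pin as quoted above.

def connector_watts(power_pins: int, amps_per_pin: float, volts: float = 12.0) -> float:
    """Total wattage a connector can carry through its +12 V pins."""
    return power_pins * amps_per_pin * volts

print(connector_watts(3, 2.08))   # 6-pin at the original ~2 A/pin spec  -> ~75 W
print(connector_watts(3, 4.17))   # 8-pin at ~4 A/pin                    -> ~150 W
print(connector_watts(6, 4.17))   # 12-pin if it stayed at ~4 A/pin      -> ~300 W
print(connector_watts(6, 8.5))    # 12-pin at the ~8.5 A/pin cited later -> ~612 W
```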
 
As long as it comes with an adapter for current 6/8 pin stuff...whatever.
 
A 12-pin can only have up to six +12V pins, since it must have at least one ground pin for each one. That's only 300W, not 600.

As for the 8-pin having a ground on the same side as the power: it's actually not a ground. It's one of the two sense wires the card uses to detect whether an 8-pin, 6-pin, or no connector is plugged in, so it can set the correct power limit to operate at.
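
Purely as an illustration of what that sense-wire scheme buys the card - the pin states and wattage limits below are hypothetical stand-ins for the idea, not the actual spec tables:

```python
# Illustrative only: a card picking its auxiliary power budget from the sense pins.
# The mapping below is a hypothetical example of the concept, not real spec behavior.

def aux_power_limit(sense0_grounded: bool, sense1_grounded: bool) -> int:
    """Return the connector power budget (watts) the card will allow itself."""
    if sense0_grounded and sense1_grounded:
        return 150   # full 8-pin plug detected
    if sense0_grounded:
        return 75    # only a 6-pin plug present
    return 0         # nothing plugged in - don't pull auxiliary power at all

print(aux_power_limit(True, True))    # 150
print(aux_power_limit(True, False))   # 75
print(aux_power_limit(False, False))  # 0
```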


Nvidia would never sell a consumer GPU that requires a new power supply? Just like they didn't when 6-pin PCIe connectors replaced 4-pin Molex connectors, or when the PCIe 8-pin replaced the 6-pin? No. Just like before, they'll throw an adapter in the box for the first couple of generations of cards.
 
[Image: NVIDIA's next-gen Ampere GeForce RTX 30 series rumored new 12-pin PCIe power connector]


change for the sake of change

They also say there's an extra 4-pin, too.

https://www.tweaktown.com/news/7378...tx-30-series-new-12-pin-pcie-power/index.html
 
I like the idea of using one connector for the GPU. Anytime you can reduce the number of cables in the case, I like it (I hate you, RGB). But it looks like they're doing this to get more power to the GPU - like 350W. So instead of having 2x 8-pin and 1x 6-pin, they came up with a new connector.

As someone said earlier, if AMD made a GPU with 2x 8-pin and 1x 6-pin, people would be talking about all the power it uses. So if you don't see all the cables going to the GPU, people might not think it's using that much power.
 
Yep - people are just complaining about change.

I could understand if this was a proprietary connector like Apple/Dell tend to do. This isn't. It's just an extension of an open standard.
 
I think the timing is the only thing that bothers me. I would like to just try to get the video card at launch; now I'll have to try to get a proper power supply and a video card at launch. If the power supplies had already been out for 6 months prior to the video card launch I wouldn't care. Hopefully there's an adapter included if the power supplies are hard to find.
 
This is no different from cards that require 8-pins for people who don't have 8-pin leads. Every manufacturer throws an adapter in the box for free. I just don't see this being an issue unless you have an ancient PSU without enough 6-pin PCIe leads anyway, and that's a pretty old PSU at this point. Can you power a 2080 Ti? If so, you've got no reason to worry.
 

I think this adapter is going to need 2 x 8 and 1 x 6. I don’t think they’d do this just for 2 x 8 pins. That’s a lot of power. Something we haven’t seen since the dual gpu cards.
 
I agree, three 6-pins is possible. But again, almost anyone with a system currently running something like a 1080 Ti or better has a PSU with three 6/8-pin leads available. If they don't, that seems a little off to me, or they already know they're running something that's old now.
 

3x 6-pin doesn't even equal 2x 8-pin (3 x 75W = 225W vs. 2 x 150W = 300W).
 
On the "complaining about change" point: people are complaining about unnecessary change. Two 8-pins can deliver the same wattage and are already an established method that's compatible without any need for an adapter.

I just don't see any kind of advantage. If you use an adapter, you're still going to need at minimum two 6-pin connectors available from your PSU.
 
True, but I really doubt a consumer GPU initially using this standard is actually going to need all nine of those +12V leads. You'll likely be fine getting away with less if you don't have them. Again though, on most PSUs being sold the PCIe leads are 8-pins anyway. You'd have to have a pretty old or fairly low-end PSU not to have that. And we're talking about $1000+ GPUs here...
 
Sorry, I should have been more clear. The 12-pin doubles the amperage of the 8-pin from 4 to a bit over 8 amps (~8.5) AND doubles the pin count, hence you can get 4 times the watts (150 -> 600).
Yes, I understand they have a sense pin, but that makes no sense - why not just supply one more +12V and one more ground to deliver more power? What good does the sense pin do besides letting the GPU know it's got an 8-pin plugged in? They could have easily done the same thing by checking whether that pin had 12V on it, or with a slightly different connector so someone couldn't plug a 6-pin in where an 8-pin is required. I wonder how many GPUs actually use the sense pin versus just ignoring it and assuming. It just seems like a wasted opportunity where it could have had 200 watts per 8-pin instead of 150 watts (they can supply 50 watts per pin at 12V).

I'm more curious whether we'll start seeing higher voltages passed around to keep the amps down. Now that would require a new PSU, but efficiencies would go up (and wire sizes could go down and/or connectors could have fewer pins, and voltage losses on cables wouldn't be as critical).
12v @ 4 amps = 48 watts
24v @ 2 amps = 48 watts
48v @ 1 amp = 48 watts

So with 48V you could reduce wire sizes AND reduce pin counts on connectors. A single 18-gauge wire can carry ~10 amps over the short runs in a case, so a SINGLE 48V line could carry 480 watts. Make a 4-pin connector like we have for motherboards that can carry 10 amps per pin, and you'd have close to 1000 watts available for any device with just 4 pins (of course they'd want a safety margin, so say 750 watts). This would be something I'd be interested in, much more than just making bigger connectors. You can get a lot more power through the same size wire/connector with higher voltages. This is why electric vehicles run such high voltages - the wiring would be as heavy as the car (and impossible to run/land) if they didn't. OSHA considers anything under 50 volts safe to work on live (meaning the voltage is low enough not to harm people if touched directly while running), which is why I don't think it should or will go any higher than that.
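
A quick sketch of that voltage-versus-current trade-off, using the same rough numbers as the post above (the ~10 A per 18-gauge wire figure is that post's short-run assumption, not a spec value):

```python
# Same power, less current at higher voltage (P = V * I).
# The 10 A per 18 AWG conductor limit is the rough in-case assumption from the post above.

for volts in (12, 24, 48):
    amps = 48 / volts
    print(f"{volts:2d} V needs {amps:.1f} A to deliver 48 W")   # 4.0 / 2.0 / 1.0 A

wire_amp_limit = 10
for volts in (12, 48):
    print(f"one wire at {volts} V carries ~{volts * wire_amp_limit} W")  # 120 W vs 480 W
```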
 
Agreed with that, the consumer market needs to start supporting 48v like the server market has for years.
 
I just don't see it being real.

We're not going to see power consumption go above 450w, and the vast majority of cards already have a single power connector. Even the RTX 2080 Ti only used two connectors.

You jump much above 400W, and you're going to need a triple-slot cooler to keep up... or a 240mm water cooler. That's not cheap. You would also require 1000W PSUs, doubling your PSU price.

For 300W cards, it would save you 4 pins over having two connectors - I just don't see the value generated by releasing a whole new PSU standard to save 4 pins.
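
For context on where a figure like "1000W PSUs" comes from, here's a rough back-of-the-envelope; the CPU/system numbers and the headroom factor are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope PSU sizing for a hypothetical 450 W card.
# All component numbers and the headroom factor are assumed, purely illustrative.

gpu_watts = 450          # hypothetical next-gen flagship
cpu_watts = 150          # high-end desktop CPU under load (assumed)
rest_of_system = 75      # board, RAM, storage, fans, pump (assumed)
headroom = 1.4           # margin for transients and the PSU's efficiency sweet spot

total = gpu_watts + cpu_watts + rest_of_system
print("sustained draw:", total, "W")                    # 675 W
print("suggested PSU:", round(total * headroom), "W")   # ~945 W, i.e. shop for ~1000 W
```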
 
Oops, I misread - I thought the 12-pin was still 4 amps per pin.

All of the +12v pins are tied together so that each wire (ideally) will have the same load pulled through it as the others. So it would be impossible to tell if a 6-pin or 8-pin connector is plugged in by adding another power handling line. So you still have an isolated line there, but now you have to add complexity to ensure that voltages are stable and correct before reading that active high signal (and depending on the logic used to do that you may need to convert it to safe levels).


As for 48V: converting 48V to 1.x V requires much faster-switching VR circuits (i.e. more expensive) or double conversion (i.e. inefficient on a small scale). It's good for datacenters, since the reduction in ongoing costs quickly outweighs the higher initial cost, but consumer devices would never see the payback.
 
Yes, they are tied together, but you can still put a sense on a single leg of that line if you want (current sense would be my guess?). Changing the plug slightly so a 6-pin wouldn't fit would of course have been annoying, because PSUs would have had to include both connectors instead of a single one that can split to support both, but it would have made the transition easier without needing to sense anything.


It does require faster switching, but at this point it's not too big of an issue like it used to be. Right now we're already doing multiple conversions anyway: your PSU does 120VAC -> DC, then regulates it at 12VDC, which is then regulated down to 5V and 3.3V. Also, the 12V that's sent to your GPU and CPU is regulated down to the 1.xx V required by the CPU, ~1.35V (give or take) for your RAM, and a slew of other voltage levels (GPU core, GPU memory, misc signals for DVI/HDMI/DisplayPort, timers, control voltages, etc.). If the power supply only had to generate a single 48V rail, those conversions would be almost negligible. Of course your motherboard/GPU would have to convert 48V into its required voltages (GPU, RAM, etc.), but it already has to do that from 12V anyway. This would have a bigger impact on SSDs and HDDs, but we see Intel is already trying to get the motherboard to supply those voltages with their 12V-only PSU spec. It would allow the power traces going to the VRMs to be much smaller, but after that it'd be pretty much the same. I could understand PSU manufacturers not wanting this, as it would mean you don't need as good regulation and far fewer components in the PSU, which would cut into their sales tremendously. It would also take an awfully large paradigm shift, so the chances of it happening anytime soon are slim to none, but hey :). I could see this happening in the server space eventually, though, and then trickling down to consumer level.
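
To put numbers on why the higher input voltage helps the board side of that chain (the 300W card figure below is just an assumption chosen for comparison):

```python
# Illustrative: input current into a card's VRM for the same board power at 12 V vs 48 V.
# The 300 W board power is an assumed figure, used only for comparison.

board_power = 300  # watts drawn by a hypothetical GPU
for input_volts in (12, 48):
    amps = board_power / input_volts
    print(f"{input_volts} V input: {amps:.2f} A into the card")
# 25.00 A at 12 V vs 6.25 A at 48 V - thinner cables/traces for the same power,
# though the VRM still has to step down to ~1 V for the GPU core either way.
```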
 
It points to Nvidia underestimating AMD - and once they realized it, they started cranking the TDP through the roof to win some canned benchmarks.
 
If true, I think it's a bunch of bullshit. No way I'm upgrading additional components just to add a video card.
 

You have to know that there's something plugged in before you can enable power so that you know how much power you're allowed to draw. Current sense isn't going to work so well if there's negligible current flowing. Also, it's still more complexity than just popping a ground onto an open collector and reading what's there.



But with 48V distribution you're now putting double conversion EVERYWHERE.


It's already happening in the server space.
 
Like I said, an 8-pin that wasn't backward compatible would have been simpler from a circuit-design standpoint - or even a split plug of sorts, so you still use the 6-pin but plug in a single extra wire or something to flag that it can support 4 amps per pin instead of 2. Then the plug wouldn't even have to change, and the sense wire would be a very tiny jumper.

You're not doing double conversion everywhere, you're just converting from a different voltage. I'm not saying it isn't a bit more effort, but it's not like it'd be difficult. I figured servers would need to start moving to higher voltages eventually; I don't keep up enough to have realized they're already starting, so that's cool to know!!
 
No way in hell would NVIDIA do something so stupid. I can see the 12-pin being used in workstations and other professional-level equipment, but not consumer PCs. First of all, cooling a 350W+ GPU would be an enormous undertaking even for most enthusiasts, not to mention most people are running around 700W PSUs, so they won't go grab a $1200 card plus a $500 PSU just to stay with NVIDIA, LOL!
 
I almost hope this is true so AMD can cream them.

BUT... I still want one, and I don't exactly like the idea of a new PSU plus the hassle of redoing all the wiring.

I had to do that when the 2080 Ti came out, not the end of the world, but also not a fun time either.

This would be a hard sell for a lot of people.
 