New NVidia 12 pin GPU connector is only for space savings, not increasing power.

Snowdog

[H]F Junkie
Joined
Apr 22, 2006
Messages
11,262
There are a lot of bad assumptions flying around about the new NVidia 12 pin connector. From everything we have seen, this is just a smaller replacement for dual 8 pin connectors, which, for reasons that will be made clear, is NOT meant to carry more power than dual 8 pins.

Some people have just looked at the Molex Micro-Fit specs and taken the theoretical amperage per circuit to mean that is what the GPU connector can pull.

Just to make clear what the specs are:

Currently the PCIe (8 Pin) power connectors use Mini-Fit power connectors. There are two types, and here is how they are rated:

Mini-Fit Junior: 9 Amps/Circuit.
Mini-Fit Standard: 13 Amps/Circuit.

The standard connector is the 13 Amp connector:
https://www.molex.com/molex/products/family/minifit_power_connector_solutions


The new PCIe 12 pin is a standard Molex Micro-Fit connector. There are also two types, and here are their ratings:

Micro-Fit Standard: 8.5 Amps/Circuit.
Micro-Fit Plus: 12.5 Amps/Circuit.

The standard connector is the 8.5 Amp connector:
https://www.molex.com/molex/products/family/microfit_30

So how much power can these connectors theoretically carry at 12 V?

Dual 8 pin carries 6 circuits (some pins lost to sense)
Single 12 pin carries 6 circuits (none lost to sense)

(Old) Mini-Fit 6 circuits 12V * 9-13 Amps = 648 Watts - 936 Watts
(New) Micro-Fit 6 circuits 12V * 8.5-12.5 Amps = 612 Watts - 900 Watts

So theoretically, the new (single 12 pin) and old (dual 8 pin) connectors have similar theoretical power limits. If anything, the old, larger Mini-Fit connector can handle more power.
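The comparison above can be sketched as a quick calculation. The amp ratings are the Molex per-circuit figures quoted earlier; this is theoretical connector capacity, not what any PCIe/ATX spec actually allows:

```python
# Theoretical 12 V capacity of a 6-circuit connector at a given
# per-circuit amp rating: watts = volts * amps * circuits.
VOLTS = 12
CIRCUITS = 6  # dual 8 pin and single 12 pin both carry six 12 V circuits

def theoretical_watts(amps_per_circuit):
    return VOLTS * amps_per_circuit * CIRCUITS

# (Old) Mini-Fit: 9 A (Junior) to 13 A (Standard)
mini_fit = (theoretical_watts(9), theoretical_watts(13))       # (648, 936)
# (New) Micro-Fit: 8.5 A (Standard) to 12.5 A (Plus)
micro_fit = (theoretical_watts(8.5), theoretical_watts(12.5))  # (612.0, 900.0)
```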

So it's a faulty assumption to just look at the pin rating of the new connector and assume that it carries more power. If you want to go that route, you need to compare to the old connector, which is even more robust.

Finally, the way power will actually get to most cards using this connector, for years to come, is through dual 8 pin connectors, so the limit must NOT be higher than what dual 8 pins can supply.

The following makes no sense:
--> Dual 8 Pin (300 W) -> Adapter -> Single 12 pin (600 W).

This does:
--> Dual 8 Pin (300 W) -> Adapter -> Single 12 pin (300 W).
 
I figured the same.
It also rubbishes those images showing the 3090 is HUGE!
If it was so much larger, there wouldn't be a need to implement much smaller power connections.
 
Correct, we don't know what the spec will be yet for the 12-pin, even though it can theoretically carry a lot of power. As can be seen in the current specs, 6-pin connectors are only required to support 2 A per line, even though the connector can supply much more than that. I think this had more to do with limiting the amount of power a PSU had to supply in general than limitations on the connector/wiring. Same with 8-pin, where nothing changed connector-wise (same line of connectors, same connector specs, same PSU wire gauge), yet they were able to carry 2x the current.

For some reason the ATX spec likes to under-rate things in order to keep PSUs from having requirements that price out low end stuff. I am not sure if the new spec will bring the 12-pin up closer to its actual specs, or do like they've done in the past and castrate the values in order to make sure PSUs don't have to be crazy sized to support the 'spec'. I.e., if you spec the 12-pin @ 650 watts, the PSU needs to be well over that in order to meet spec (and would require 54 amps on a single 12V rail), which is a big ask for some lower end PSUs.

The thing with this connector right now is (as far as I know, please point me in the right direction if I'm wrong!) that it's not even part of the official spec, so everything is speculation.
So far the only indication is the adapter from Seasonic, which is intended for use in their 850W+ PSUs. We could assume that since it's fed by dual 8-pin it would be limited to 300W just like normal 8-pin, but if that's the case, why the recommendation of an 850W PSU? Surely you can use a 300W GPU on a 650W PSU normally. The other assumption would be that, since they recommend an 850W PSU, the connector is intended to go over the ATX spec of dual 8-pin (but stay within the actual Molex/AWG specs).
I think we're both on the same page overall; all of these things from ATX are under-specced vs safe capacity limits, and at this point we can only make assumptions. It seems logical that if the GPU pulls 350W, then it can easily get 50W from PCIe and the other 300W from dual 8-pin or 12-pin, so while it doesn't appear there is really any defined spec, right now it's probably safe to assume it's only meant to supply ~300W of power regardless of actual capacity. I'm sure any slight overclock will easily put it over this 300W theoretical limit, and people will be pulling over 300W through it, and it would be perfectly safe to do so up to near the actual Micro-Fit specs, as long as your PSU can handle the loads.

So as you can see, there is some conflicting information, especially this (right on the box): "It is recommended to use a power supply rated 850W or higher with the cable". You don't need 850W for a 300W GPU, so it seems Seasonic is assuming more than 300W may be pulled through this connector. It's all just a moot point anyway, since there is no official ATX spec. This is the point of the specs, even if they seem overly cautious: to make sure there are known quantities for PSU builders to meet. Of course, this doesn't stop crappy PSUs falling short on their rated capacities and shutting down even when you are in spec ;).
 
I figured the same.
It also rubbishes those images showing the 3090 is HUGE!
If it was so much larger, there wouldn't be a need to implement much smaller power connections.
The smaller connector is likely due to the short PCB. The GPU is so large due to the heatsink. If you look at the nekked pictures, the PCB area is limited. Of course, being triple slot and having a wide-FOV lens taking the picture, it's easy for the 3090 to look larger than it is, but it still looks to be a pretty good-sized card, mostly due to the cooler, not the actual PCB. I don't think they 'needed' a new connector, but it probably made sense to be able to put it on the angle like they did, to keep the wires from hitting in some cases due to height.
 
What I don't get from this is why Igor's Lab was emphasizing the point about wire gauges and whatnot if so. Gamers Nexus OTOH didn't think it mattered. Was Igor just incorrect in his reasoning for the new connector?
 
6-pin connectors are only required to support 2 A per line, even though the connector can supply much more than that. I think this had more to do with limiting the amount of power a PSU had to supply in general than limitations on the connector/wiring. Same with 8-pin, where nothing changed connector-wise (same line of connectors, same connector specs, same PSU wire gauge), yet they were able to carry 2x the current.

6 Pin spec is 2 circuits = 75 Watts. 37.5 Watts/circuit = ~3 Amps/circuit.
8 Pin spec is 3 circuits = 150 Watts. 50 Watts/circuit = ~4 Amps/circuit.

Both are well under the weakest Mini-Fit connector. Mini-Fit Junior is 9 Amps/Circuit.

Despite running well under spec, you will still see people melt their PCIe connectors.
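The per-circuit currents above fall out of dividing the spec wattage evenly across the 12 V circuits; a quick illustrative sketch:

```python
# Amps per circuit implied by a PCIe power cable spec: the spec wattage
# spread evenly across the connector's 12 V circuits.
def amps_per_circuit(spec_watts, circuits, volts=12.0):
    return spec_watts / circuits / volts

six_pin = amps_per_circuit(75, 2)     # ~3.1 A, vs the 9 A Mini-Fit Junior rating
eight_pin = amps_per_circuit(150, 3)  # ~4.2 A
```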
 
The smaller connector is likely due to the short PCB. The GPU is so large due to the heatsink. If you look at the nekked pictures, the PCB area is limited. Of course, being triple slot and having a wide-FOV lens taking the picture, it's easy for the 3090 to look larger than it is, but it still looks to be a pretty good-sized card, mostly due to the cooler, not the actual PCB. I don't think they 'needed' a new connector, but it probably made sense to be able to put it on the angle like they did, to keep the wires from hitting in some cases due to height.

The images I am talking about are those where the cooler is so large it won't fit in most cases because the hard drive cages are in the way.

When using a heatsink of much larger length and height, they are no longer limited to the PCB size needed with the smaller cooler.
Changing the power connector to something completely new is a desperate act to save space, done because there were no other feasible options.
When there is no need to restrict the size of the PCB, there is no need to shrink the power connector.

However, contradicting myself :)
A new single 400W+ power connector replacing 3x 150W power connectors is nicer and simpler.
It means they can still exceed 350W on future GPUs and gain the benefit of less space required on future designs.
So perhaps it needed to happen anyway.
 
What I don't get from this is why Igor's Lab was emphasizing the point about wire gauges and whatnot if so. Gamers Nexus OTOH didn't think it mattered. Was Igor just incorrect in his reasoning for the new connector?

I think GN is right on this one. The main factor here is that, for the vast majority of cases for years to come, power into this connector will come from the current standard 2x 8 pin connectors (300 Watts) running through an adapter.

It simply can't be specified to be better than what it is going to be practically supplied by.

Dual 8 pin running through an adapter can't be better than dual 8 pin plugging into native dual 8 pin connectors; in fact it can only be worse.
 
However, contradicting myself :)
A new single 400W+ power connector replacing 3x 150W power connectors is nicer and simpler.
It means they can still exceed 350W on future GPUs and gain the benefit of less space required on future designs.
So perhaps it needed to happen anyway.

Except it's a new 300W connector replacing 2X 150W connectors.
 
6 Pin spec is 2 circuits = 75 Watts. 37.5 Watts/circuit = ~3 Amps/circuit.
8 Pin spec is 3 circuits = 150 Watts. 50 Watts/circuit = ~4 Amps/circuit.

Both are well under the weakest Mini-Fit connector. Mini-Fit Junior is 9 Amps/Circuit.

Despite running well under spec, you will still see people melt their PCIe connectors.
No, check the current ATX12V specs... the 3rd power pin is no longer optional.
6-pin is 3 circuits @ ~2 amps each = 75 watts
8-pin is 3 circuits @ ~4 amps each = 150 watts

12-pin isn't specified, but it's pulling from dual 8-pin so we know it's able to support at least 300w based on ATX spec, but in reality it can (and most likely will if anyone bumps up their GPU to overclock) support more.

Neither is pushing the connector, but the 6-pin hasn't been 2-circuit for some time now (I don't recall ever seeing one that wasn't fully populated when it was optional).

I haven't seen a well-connected PCIe connector melt. My assumption is either bad QC or a bad connection causing excess resistance in the connection. Another reason I don't know why they still insist on these connectors that suck when there are much more consistent connectors that have much better ratings. This is why (higher current) R/C vehicles abandoned these types of connectors ages ago; they get a little dirty and overheat and melt. The good thing about not switching connector (I mean overall type, not category) is that if you build your own, you can use the same crimper and keep your wiring looking nice.
 
Maybe I'm conflating the design guide vs the spec, because I can't find actual 'specs' anywhere that even talk about the connectors. The ATX PSU spec mostly states tolerances of the rails, not the power the connectors supply.

Section 4.2.2.4 - It only states that it's used for cards that require more than 75 watts; it doesn't say how many watts it can draw.
https://www.intel.com/content/www/us/en/technology-provider/power-supply-design-guide.html?wapkw=power supply design

The design guide also suggests (for modular power supplies) supplying 6-8 amps per circuit, or 18-24 A for a 6-pin connector (Section 2.2.1).

If you have better information please share, as I may be looking in the wrong places; if that's the case I apologize, I'm just going by the information I've been able to find.

Edit: Apparently the specs for the PCIe connectors are part of the PCIe specs, not the ATX specs, hence the confusion. The PCIe specs are not free.
http://www.playtool.com/pages/psuconnectors/connectors.html#pciexpress
"Nonetheless, information leaks out from the specification and the 6 pin PCI Express power cable is actually rated at an extremely conservative 75 watts. I have no idea why the wattage is rated so low because the specifications from Molex clearly allow substantially more power. Part of the reason may be that pin 2 (listed above as a 12 volt line) may be listed as not connected in the specification. I've never seen a 6 pin PCI Express power cable with pin 2 not connected."

Basically what I was saying: I've never seen it not connected, and it's only rated at 75W. Without seeing the specs I can only assume that since it's 75 watts with all 3 pins populated, it'd be even less (50W) if you decided to only include 2 pins, or the spec may say something about 2 amps with 3 wires or 3 amps with 2 wires and both would support the same power? I'm not really sure, but it's pretty moot since no PSUs come that way.
 
Basically what I was saying: I've never seen it not connected, and it's only rated at 75W. Without seeing the specs I can only assume that since it's 75 watts with all 3 pins populated, it'd be even less (50W) if you decided to only include 2 pins, or the spec may say something about 2 amps with 3 wires or 3 amps with 2 wires and both would support the same power? I'm not really sure, but it's pretty moot since no PSUs come that way.

Never seeing something not connected is not the same as it being part of the spec.

The spec taken from the PCI-SIG shows this kind of image, which is 2 circuits for 6 pin and 3 circuits for 8 pin, like I said:

6 pin 8 pin.png
 
Never seeing something not connected is not the same as it being part of the spec.

The spec taken from the PCI-SIG shows this kind of image, which is 2 circuits for 6 pin and 3 circuits for 8 pin, like I said:

View attachment 274457
Yes, but which revision is that from, and what is the difference in wattage with that pin present or not present? The image provided is not an official PCI-SIG image, just someone's interpretation of it at some point. I don't have access to the PCI-SIG specs, so... I can only guess. If you have access, please do share the information that's relevant. Most everything I see says the spec states it's optional, but always included. So is the 75W based on it being included or not included? I'm not sure if it is with or without (not that it really matters, as 2 pins can easily sustain 75 watts safely). And even in the image you show, it says that the official standard lists it as no connection but most power supplies are supplying +12V on this pin.

So do you want to go by the spec, or by what reality is for any current PSU (and by current, ~10+ years)? I don't know which is "proper", but every PSU that I know of has all 3 pins, so that's the only thing I can speak to; it's what Intel recommends for compatibility and it's what PSU manufacturers state in their specs. This is also the same with 6+2 connectors; they obviously have to have that pin populated, otherwise it wouldn't be compatible. So, the reality is they are all 3/3-circuit connectors today and going forward.

And again, based on Corsair diagrams, this is populated: https://mainframecustom.com/shop/cable-sleeving/corsair-pinout-diagrams/ As well as Intel's design guide, which I linked above and which a PSU has to follow if it claims to be Intel compatible. The only thing I can surmise is people felt it was better to use all the pins and supply the current across 3/3 instead of 2/2, and that's what everyone does and suggests for compatibility. In the real world, all PSUs supply 75W across 3 circuits on the 6-pin connector, which means ~2 amps per wire. We already know the connector and wiring can support >75W; heck, it could have been 1 circuit and still supplied 75W (6.5 amps) with a good safety margin, although voltage drop/droop may have been a concern with a single circuit.

Anyways, it's a silly argument, because just about any 6-pin in existence could easily and safely exceed the spec with 2 or 3 circuits.

Edit: Was thinking maybe it had something to do with being able to take 2 peripheral connectors and make a single 6-pin. The 12V line on the 4-pin peripheral is 60W I believe, so 2x 60W = 120W. Minus some safety for having multiple connectors? Meh, who knows; sometimes I think these groups just throw darts at a board to make decisions on things ;).
 
Ok, I haven't seen that mentioned.
Yes, PCIe slot (x16) can supply up to 75w... so a 300w connector + 75w bus = 375w total, which is enough to meet 350w requirements. Any overclocking and you can easily blow past this.
 
Yes, PCIe slot (x16) can supply up to 75w... so a 300w connector + 75w bus = 375w total, which is enough to meet 350w requirements. Any overclocking and you can easily blow past this.
I mean I haven't seen mention that the 3090 will use 2 sockets for power.
 
I mean I haven't seen mention that the 3090 will use 2 sockets for power.
Ahh, sorry. The Nvidia one will be using a single 12 pin adapter. The Seasonic adapter for it uses 2 8-pin plugs to feed it, so it's indirectly using 2 8-pin plugs. It is assumed most AIBs will use 2 or possibly 3 8-pin connectors, but won't know for sure until they are announced.
 
Ahh, sorry. The Nvidia one will be using a single 12 pin adapter. The Seasonic adapter for it uses 2 8-pin plugs to feed it, so it's indirectly using 2 8-pin plugs. It is assumed most AIBs will use 2 or possibly 3 8-pin connectors, but won't know for sure until they are announced.

The 3090 is rated at 350W though.
It was said above it will use the 12 pin for 300W and a 6 pin for 75W.
I haven't seen mention of this anywhere else.
I'm looking for evidence the 12 pin connector will only handle 300W and will need another power connection.
Igor reckons the 12 pin is capable of 600W under ideal conditions and is easily capable of 400W.

ps I'm an EE, I'm good with power requirements and methods.
 
It was said above it will use the 12pin for 300W and a 6pin for 75W.

No, I said a single 12 pin and the slot, as in the PCIe slot. The PCIe edge connector that connects the card to the board supplies 75 watts.

Images here show dual 8 pin for AIB. I expect there will be dual 8 pin and triple 8 pin designs:
https://videocardz.com/newz/gainward-geforce-rtx-3090-and-rtx-3080-phoenix-leaked-specs-confirmed

For the FE cards they will use a single 12 pin. In the recent NVidia video they show a PCB drawing starting with dual 8 pin connectors, eventually substituting one 12 pin.

While they haven't specified how many 150 W cables will plug into the adapter, I think it's a pretty safe bet, that it will be TWO 8 pin connectors down to ONE 12 pin connector.
 
The 3090 is rated at 350W though.
It was said above it will use the 12 pin for 300W and a 6 pin for 75W.
I haven't seen mention of this anywhere else.
I'm looking for evidence the 12 pin connector will only handle 300W and will need another power connection.
Igor reckons the 12 pin is capable of 600W under ideal conditions and is easily capable of 400W.

ps I'm an EE, I'm good with power requirements and methods.
Yes, going by pure specs all of these connectors can handle 300 watts, even a single 6-pin. The connector itself is rated at 13 amps per pin, so 3 x 13 = 39 amps, and 39 A x 12 V is 468 watts! Now would you want to run it on the ragged edge? No, and these do get pulled, replugged and dirty/dusty, but really the PCIe spec is way under-rated relative to the wire gauge/connector specs. Just because the connector can handle it doesn't automatically mean they will make the spec anywhere close, as can be seen with existing connectors. I still don't know why they didn't just make the 8-pin 4 complete circuits and not worry about the "sense". They could have easily specced it much higher. As I mentioned, Intel "suggests" 6-8 amps per leg. That would equate to 4 x 6 = 24 amps, x 12 V is 288 watts. This would have held up over time much better, but who will ever need more than 640K I guess.
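As a check on that arithmetic (3 circuits at the 13 A rating works out to 39 A and 468 W):

```python
VOLTS = 12

# Raw Mini-Fit Standard capacity of a 6 pin: 3 circuits at 13 A each.
raw_6pin_watts = 3 * 13 * VOLTS      # 468 W at the bare connector rating

# A hypothetical 8 pin with 4 full circuits at Intel's suggested 6 A per leg.
four_circuit_watts = 4 * 6 * VOLTS   # 288 W
```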
 
So if these are under spec and we’re only pulling 300W + 75W, that would explain why one of the leaked AIB cards had 3 x 8 pin connector ports on it. Either way I see this card pushing 400W on most enthusiasts systems so personally I’ll wait to evaluate AIB cards rather than FE.
 
So if these are under spec and we’re only pulling 350W + 75W, that would explain why one of the leaked AIB cards had 3 x 8 pin connector ports on it. Either way I see this card pushing 400W on most enthusiasts systems so personally I’ll wait to evaluate AIB cards rather than FE.

It's 300W + 75W to be in spec with the incoming 2 x 8 pin.

Basic 3090 looks to be 2 x 8 pin, just like basic 2080 Ti.

There are AIB 2080 Ti cards with 3 x 8 pin connectors as well, so with 3090 being a higher power card, I expect they will be more common.
 
Yes, going by pure specs all of these connectors can handle 300 watts, even a single 6-pin. The connector itself is rated at 13 amps per pin, so 3 x 13 = 39 amps, and 39 A x 12 V is 468 watts! Now would you want to run it on the ragged edge? No, and these do get pulled, replugged and dirty/dusty, but really the PCIe spec is way under-rated relative to the wire gauge/connector specs. Just because the connector can handle it doesn't automatically mean they will make the spec anywhere close, as can be seen with existing connectors. I still don't know why they didn't just make the 8-pin 4 complete circuits and not worry about the "sense". They could have easily specced it much higher. As I mentioned, Intel "suggests" 6-8 amps per leg. That would equate to 4 x 6 = 24 amps, x 12 V is 288 watts. This would have held up over time much better, but who will ever need more than 640K I guess.

The sense wires are there to tell the card it can draw more than the 75w the slot allows. If sense0 is grounded, the card knows it has an extra 75w available. If sense1 is grounded, it has an extra 150w.
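The sense-pin "contract" described here can be sketched as a small lookup. The pin names and wattage budgets follow the post above; whether real cards implement exactly this logic is an assumption, so treat it as illustrative only:

```python
def aux_power_budget(sense0_grounded, sense1_grounded):
    """Extra watts available beyond the 75 W PCIe slot, per the
    sense-pin scheme described above (illustrative, not a spec quote)."""
    if sense1_grounded:
        return 150  # 8 pin connected: sense1 grounded
    if sense0_grounded:
        return 75   # 6 pin connected: only sense0 grounded
    return 0        # no auxiliary connector detected

aux_power_budget(True, True)    # 150 (full 8 pin budget)
aux_power_budget(True, False)   # 75  (6 pin budget)
aux_power_budget(False, False)  # 0   (slot power only)
```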
 
It's 300W + 75W to be in spec with the incoming 2 x 8 pin.

Basic 3090 looks to be 2 x 8 pin, just like basic 2080 Ti.

There are AIB 2080 Ti cards with 3 x 8 pin connectors as well, so with 3090 being a higher power card, I expect they will be more common.

Oops the 350W was a typo but yah I think AIB will be the way to go this time, especially those of us watercooling.
 
The sense wires are there to tell the card it can draw more than the 75w the slot allows. If sense0 is grounded, the card knows it has an extra 75w available. If sense1 is grounded, it has an extra 150w.
I know the concept, but if you just used an incompatible connector you wouldn't need to sense anything, and a really simple adapter (just a plastic connector with pins on 2 sides, or even a 90° adapter for space constraints) could go from 8 to 6 pin. Instead they used a connector that a 6 pin could connect into, allowing the GPU to possibly run at lower specs, which isn't a thing; no GPU manufacturer includes an 8-pin and says, please plug in a 6-pin if you want it to run slower. It just seemed a wasted opportunity to actually advance, but when your group is paid based on incremental changes to the spec, I guess you try not to future-proof too far ahead :p
 
I know the concept, but if you just used an incompatible connector you wouldn't need to sense anything, and a really simple adapter (just a plastic connector with pins on 2 sides, or even a 90° adapter for space constraints) could go from 8 to 6 pin. Instead they used a connector that a 6 pin could connect into, allowing the GPU to possibly run at lower specs, which isn't a thing; no GPU manufacturer includes an 8-pin and says, please plug in a 6-pin if you want it to run slower. It just seemed a wasted opportunity to actually advance, but when your group is paid based on incremental changes to the spec, I guess you try not to future-proof too far ahead :p

This is pretty much life with standards. We could do much better if we would just start over with a clean sheet of paper, but that seldom happens.

I can see a problem with the new connector not having sense pins and using adapters. That means cards will start up with only one 6 pin connected to the adapter, and will attempt to draw all that power through one 6 pin connector, when it's intended to be supplied by 2 x 8 pin.
 
This is pretty much life with standards. We could do much better if we would just start over with a clean sheet of paper, but that seldom happens.

I can see a problem with the new connector not having sense pins and using adapters. That means cards will start up with only one 6 pin connected to the adapter, and will attempt to draw all that power through one 6 pin connector, when it's intended to be supplied by 2 x 8 pin.
No, I mean from 8 pin to 6 pin you could use an adapter to still work with a 6-pin GPU. Let's be honest though, 6+2 connectors just jumper 2 wires over to the sense pin anyway, lol. But there is nothing stopping someone from connecting only one of the 2 6-pins for a single 8 pin. Just like there is nothing to stop someone from plugging the new 12-pin into a single 8-pin and letting it ride.
I mean backwards compatibility is good up to a point.
 
No, I mean from 8 pin to 6 pin you could use an adapter to still work with a 6-pin GPU. Let's be honest though, 6+2 connectors just jumper 2 wires over to the sense pin anyway, lol. But there is nothing stopping someone from connecting only one of the 2 6-pins for a single 8 pin. Just like there is nothing to stop someone from plugging the new 12-pin into a single 8-pin and letting it ride.
I mean backwards compatibility is good up to a point.

Your analogy fails. If you only connect one 6 pin to a 2 x 6 pin -> 8 pin adapter, it will fail, because one of the sense pins will be unconnected...

That is exactly my point. Sense pins protect from people under-connecting the power system. This new connector apparently has none, so no protection reminding people to connect 2 x 8 connectors.
 
Your analogy fails. If you only connect one 6 pin to a 2 x 6 pin -> 8 pin adapter, it will fail, because one of the sense pins will be unconnected...

That is exactly my point. Sense pins protect from people under-connecting the power system. This new connector apparently has none, so no protection reminding people to connect 2 x 8 connectors.
That obviously depends on which plug you plug in :p. If you plug in the one with the sense pin jumpered, the card wouldn't know the difference. If you plugged in the one without the sense line, then it may notice, if it even has any sense logic on board (I'm not sure if they do in general). So, it would only protect some of the time; it's not really there to protect the user from doing something wrong, but is a signal to the GPU to let it know it has enough power available.
 
That obviously depends on which plug you plug in :p. If you plug in the one with the sense pin jumpered, the card wouldn't know the difference. If you plugged in the one without the sense line, then it may notice, if it even has any sense logic on board (I'm not sure if they do in general). So, it would only protect some of the time; it's not really there to protect the user from doing something wrong, but is a signal to the GPU to let it know it has enough power available.

Definitely glad this moved to its own thread, since it's going into the weeds.

8 Pin spec has 2 sense pins. A proper 2 x 6 pin -> 8 Pin adapter, will supply one sense pin from each 6 pin connector.

So if you only plug in one 6 connector, you only get one sense pin.
 
Good point, sorry, I forgot there were 2 sense pins; I kept thinking one was ground and not "sense". Again, not sure how much sensing GPUs actually do these days; I'd be curious if they actually implement it all properly. Well, either way, the 12-pin has no mechanism for this, so it'll be up to the user to plug them in properly. Same as they could have done with the 8-pin: it could have been 4 circuits and just used a connector that a 6-pin couldn't plug into (a different key). Funny though, as most 6-pin connectors are 6 wires to the PSU, and so are 8-pin with jumpers at the plug, so there is no real reason they can't supply the same current besides an odd choice of specs.
 
Like I said, legacy imposes penalties. GPUs started out using only slot power, then added the 6 pin at 75W. Since the 6 pin was spec'd for 75 Watts, everything on the back end might only be good enough for 75W.

So, when you move to 8 pin, you add another sense pin to increase the "contract" to 150 W. Unless you have that extra sense pin guaranteeing the contract, you can't be sure you don't have some old PSU only supplying 75W which was the original "contract".

So you kind of get stuck forever with these legacy standards, and it is VERY hard to break away.

It will be interesting how many issues will happen with the new connector, adapters and no sense pins to enforce proper connections.
 
Igor's Lab has some commentary on how a PSU panic started from inadequate information sharing:
https://www.igorslab.de/en/nvidias-...nd-a-patch-cable-from-sonic-background-story/

While he doesn't directly state it, now that I look at the Seasonic box, they list their cable as 9 Amps. It looks like they just read the connector specs and made a cable without talking to NVidia.

9 Amps x 12V x 6 circuits = 648 Watts. So they specified their cable for 650 Watts! Which is no doubt why they said to use it with an 850W PSU. :)
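Seasonic's number lines up exactly with the per-circuit rating on the box, as a quick check shows:

```python
# Seasonic cable rating: 9 A per circuit over six 12 V circuits.
cable_watts = 9 * 12 * 6   # 648 W, hence the ~650 W class cable
```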

Now that NVidia cards are out in the open, it's pretty clear the connector is only meant to replace 2 X 8 Pin power (300 Watts), as I have been arguing all along.
 