RX 480 is apparently killing PCIe slots

That puts the 6-pin connector even further out of spec.
They need to reduce power use.

The RAM speed drop won't net much power saving.
If they have overvolted all cards then they can save some there, but it's likely some cards will suffer. They could allow RMA replacements of those cards.
Otherwise it will have to be clock drops to allow a safe voltage drop.
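
For rough intuition on why (with made-up numbers, not AMD's): dynamic power in CMOS scales roughly as P ≈ C·V²·f, so a voltage drop pays off with the square while a clock drop only pays off linearly. A toy sketch:

```python
# Toy model of dynamic power scaling, P ~ C * V^2 * f.
# All figures below are illustrative guesses, not measured RX 480 values.

def scaled_power(base_w, v_scale, f_scale):
    """Scale a baseline dynamic power by relative voltage and clock changes."""
    return base_w * (v_scale ** 2) * f_scale

core_w = 110.0  # assumed share of board power burned by the GPU core
mem_w = 30.0    # assumed share burned by the memory subsystem

# Memory clock drop 8000 -> 7000 MT/s at the same voltage: saves ~4 W.
print(f"{mem_w - scaled_power(mem_w, 1.0, 7000 / 8000):.1f} W saved")

# Core undervolt 1.15 V -> 1.08 V at the same clock: saves ~13 W.
print(f"{core_w - scaled_power(core_w, 1.08 / 1.15, 1.0):.1f} W saved")
```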

That's just bitching right now and sounds so fuckin' political. A 6-pin can pull a lot of juice; this bullshit about it drawing another 10W is not going to change the world. Man, I hate it when there is a solution and people find another thing to bitch about. From what I have read, the RX 480's 6-pin is wired in a way that it doesn't need an 8-pin to deliver more power. So you can call it out of spec, but it sort of avoids having to fit an 8-pin. Now you are going to complain if it draws 5 more watts? How do you think all the overclocking was done back in the day? A 6-pin is more than capable of handling a little more juice. If they can patch it so the card draws power for the GPU not from the PCI-E but from the 6-pin, it's all good. At that point it just becomes a witch hunt, when we all know cards draw more out of the 6-pin and have been doing it for ages.
 
It's nothing to do with what I like; it's not my problem.

No it's not, it's AMD's problem, but at least they could have a temporary solution until they figure out how to reduce the power usage so that both the PCI-E slot and the 6-pin fall inside the specs.

That would give some end users peace of mind.
 
It's out of spec.
For a well set up system it's no problem.
But with a bad connection it will cause damage faster and do more damage.

Selling a new product that doesn't conform to basic specs is painful to see.

Sounds like propaganda talk. Cards have pulled more from the 6-pin forever; that's nothing new, bro. This card is around 160W stock in its current state; if they don't do anything else and draw another 10-15W from the 6-pin, do you really think that is going to blow things up? Come on now, I've seen cards pull an extra 100W out of it back in the day, and back then people had shittier PSUs than now due to lack of competition. Now you can get Gold-certified PSUs for like 65 bucks, which wasn't the case before.

I have never seen people worry about their 6-pin when they overclocked the shit out of their cards; the power draw from the board is the main concern here. If you draw too much from the 6-pin while overclocking, at least it doesn't blow up your board. Ever heard of a failsafe? If you try to pull too much from your PSU it will just shut down, though that usually doesn't happen over 10-15W.
 
I already tied this to overclocking. A lot of GPUs will suck down two to three times their rated TDP when unleashed.

I couldn't find this. Please show me one instance of that card drawing 800W while overclocked, for science :)

thats just bitching right now and sounds so fuckin political. 6 pin can pull a lot of juice. this bullshit about it draws another 10w is not going to change the world. Man I hate when there is a solution people find another thing to bitch about. from what I have read RX480 6 pin is wired in the way that it doesn't need 8 pin to deliver more power. So you can call it out of spec, it sort of avoids you having to put 8 pin. Now you are going to complain if it draws 5 more watts? How did you think all the overclocking was done back in the days? 6 pin is more than capable of handling little more juice. If they can patch it so the card doesn't draw any of power to gpu from pci-e and more from 6pin Its all good. At that point it just becomes witch hunt when we all know the cards draw more out of 6 pin and they have been doing it for ages.

To be fair, he has a point. As I stated in another post. It's ok if the user goes out of spec while overclocking. That's their choice. A card should never be out of spec out of the box, unless it's an ultra enthusiast product with warnings. The 6-pin PCI-E connector is rated for 75W. Yes, you, me, and most here know that, with the proper wiring, it can go WAY beyond that. But a cheaper yet still adequate PSU may decide to adhere to that spec. And if that occurs, you have a 6-pin connector that only delivers 75W + some degree of tolerance. You're going to have problems with an out of box card routing over 100W to that cable.

So, like it or not, Nenu is correct. The card needs to be designed to be within spec out of the box, ESPECIALLY a reference card. Overclocking is up to the user, including any pitfalls that come with it.

The easiest way to fix this is to cap the card's power draw at 150W, with 75/75 being the split, and if the power level is increased by the user, draw the extra from the 6-pin, as this causes the least harm. Drawing directly from the PSU won't damage anything, as over-current protection will cause a shutdown/reboot before any harm can be done. A motherboard isn't designed for power delivery; it's a conduit, so no such protection is afforded. A motherboard delivering power is like a person without the sense of touch putting their hand on the stove: you don't know there's a problem until, well, you get the idea.
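
A minimal sketch of the split policy being proposed here (hypothetical values and behaviour, not AMD's actual firmware):

```python
# Hypothetical sketch of the proposed power-split policy: stock draw capped
# at 150 W split 75/75, any user-raised power limit routed to the 6-pin
# only, since the PSU has over-current protection and the slot does not.

PCIE_SLOT_LIMIT_W = 75.0  # PCIe x16 slot spec
SIX_PIN_SPEC_W = 75.0     # 6-pin connector spec

def split_power(total_draw_w):
    """Return (slot_watts, six_pin_watts) under the proposed policy."""
    slot = min(total_draw_w / 2, PCIE_SLOT_LIMIT_W)  # never exceed slot spec
    six_pin = total_draw_w - slot                    # overage goes to the 6-pin
    return slot, six_pin

for draw in (150, 165, 180):  # stock, then two raised power limits
    slot, six = split_power(draw)
    note = "  (6-pin out of spec)" if six > SIX_PIN_SPEC_W else ""
    print(f"{draw:>3} W total -> slot {slot:.0f} W, 6-pin {six:.0f} W{note}")
```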
 
If there's someone that needs to be blamed, it's most likely AMD's marketing team, yet again. They wanted a card which can be powered through a 6-pin connector, and the engineering team had to design the card so that it draws pretty much 50/50 from both power inputs to get that PCI-E certification.

If this thing had an 8-pin connector, they could have powered the chip from it, and the memory, memory subsystem and display controllers from the PCI-E slot, like they did with earlier series.
 
To be fair, he has a point. As I stated in another post. It's ok if the user goes out of spec while overclocking. That's their choice. A card should never be out of spec out of the box, unless it's an ultra enthusiast product with warnings. The 6-pin PCI-E connector is rated for 75W. Yes, you, me, and most here know that, with the proper wiring, it can go WAY beyond that. But a cheaper yet still adequate PSU may decide to adhere to that spec. And if that occurs, you have a 6-pin connector that only delivers 75W + some degree of tolerance. You're going to have problems with an out of box card routing over 100W to that cable.

So, like it or not, Nenu is correct. The card needs to be designed to be within spec out of the box, ESPECIALLY a reference card. Overclocking is up to the user, including any pitfalls that come with it.

True, but it's not drawing a whole lot over spec; that's what I am trying to get at. 10-15 watts over the 6-pin connector really won't harm anything, unless you really have a crappy PSU that can't even handle the graphics card in the first place. That's my point. Most PSUs will be just fine. Out of the box, yeah, I am sure they will throttle it enough to keep it around 75W, but the way the card is wired it can pull much, much more; from what I have read it's wired in a way that the 6-pin connector will act like an 8-pin, because it's already grounded like an 8-pin.
 
To be fair, he has a point. As I stated in another post. It's ok if the user goes out of spec while overclocking. That's their choice. A card should never be out of spec out of the box, unless it's an ultra enthusiast product with warnings. The 6-pin PCI-E connector is rated for 75W. Yes, you, me, and most here know that, with the proper wiring, it can go WAY beyond that. But a cheaper yet still adequate PSU may decide to adhere to that spec. And if that occurs, you have a 6-pin connector that only delivers 75W + some degree of tolerance. You're going to have problems with an out of box card routing over 100W to that cable.

So, like it or not, Nenu is correct. The card needs to be designed to be within spec out of the box, ESPECIALLY a reference card. Overclocking is up to the user, including any pitfalls that come with it.

The easiest way to fix this is to cap the card's power draw at 150W, with 75/75 being the split, and if the power level is increased by the user, draw the extra from the 6-pin, as this causes the least harm. Drawing directly from the PSU won't damage anything, as over-current protection will cause a shutdown/reboot before any harm can be done. A motherboard isn't designed for power delivery; it's a conduit, so no such protection is afforded. A motherboard delivering power is like a person without the sense of touch putting their hand on the stove: you don't know there's a problem until, well, you get the idea.

Yeah, I think that is what will happen. They will cap it at 150W, and if people want to take the risk of overclocking, well, they can run it out of spec all they want. Probably the best solution.
 
Do tell me, which one do you prefer: 85W from the PCI-E slot with its puny pins, or 100W from a 6-pin connector which can handle 250W+, at least on the RX 480 because it has all six pins populated :meh:
I agree with this... the 6-pin being overloaded is much easier to deal with. I personally don't care how OEMs or system builders deal or don't deal with it... my interest is how well the card works out for me, lol. Starting to look like the cards with 8-pin connectors are going to be good to go. :)
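
For anyone wondering where the "can handle 250W+" figure comes from: it's just live pins × per-pin current × 12V. A rough sketch, using ballpark Mini-Fit Jr terminal ratings (assumed figures that vary by terminal grade, not the spec allowances):

```python
# Rough electrical capacity of a PCI-E power connector: live 12 V pins
# times per-pin current times 12 V. Per-pin amp ratings are ballpark
# Mini-Fit Jr figures (assumed), not the 75 W / 150 W spec allowances.

PIN_AMPS = 8.0  # assumed high-current-grade terminal; standard is ~6 A
VOLTS = 12.0

def capacity_w(live_12v_pins):
    return live_12v_pins * PIN_AMPS * VOLTS

print(capacity_w(2))  # 192 W: a 6-pin with the usual two live 12 V pins
print(capacity_w(3))  # 288 W: all three 12 V pins wired, as claimed for
                      # the RX 480's 6-pin in the posts above
```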
 
True, but it's not drawing a whole lot over spec; that's what I am trying to get at. 10-15 watts over the 6-pin connector really won't harm anything, unless you really have a crappy PSU that can't even handle the graphics card in the first place. That's my point. Most PSUs will be just fine.

Here's the problem. If a PSU can only handle 75W over the 6-pin, it's not "crappy." It's "spec compliant." This is what you're failing to understand (and I TRULY do not mean that as any kind of insult). A GPU SHOULD try to stay within this spec out of the box so that it works on as many configurations as possible, again, out of the box, especially for a budget card sold at Best Buy. This is being sold to the tech-illiterate, not us. It needs to work.
 
Here's the problem. If a PSU can only handle 75W over the 6-pin, it's not "crappy." It's "spec compliant." This is what you're failing to understand (and I TRULY do not mean that as any kind of insult). A GPU SHOULD try to stay within this spec out of the box so that it works on as many configurations as possible, again, out of the box, especially for a budget card sold at Best Buy. This is being sold to the tech-illiterate, not us. It needs to work.

Oh, I won't argue about out of the box. As I said, they will probably make that happen with an update. But on the other point, all electronics have variance built in; if it is designed to handle 75W it does have some wiggle room, though I know that's not the issue here. And yes, out of the box it should not hit over 150W. They should cap it at that.
 
It's not political, IT IS A MUST FOR THEM TO DO; they can't get PCI-E certification if they are out of spec, how hard is that for any of you guys to understand? There is some flexibility in the specs, but not so much that AMD can run rampant with whatever they want to do. The specs tend to be outdated as well, but there is nothing AMD can do about that right now (with the 6-pin and 8-pin), unless there is something I'm missing, which I don't think there is.

The BIOS solution is a good solution for the time being, but AMD still has to fix the root cause of the spec violation for ANY OEMs or SYSTEM BUILDERS to pick them up.

THIS MEANS APPLE TOO. Who knows, maybe Apple will make AMD drop their pants further and ask for a discount so they will cover the warranty costs...
 
I can draw 1600W at the wall with my 3930K and two 290Xs, easily. Need proof of that too?

Not in games. With Furmark + Prime it's doable on water. I had the same setup clocked at 4.7GHz and 1250/1600MHz, and I saw 1250W as the max power draw at the wall when playing BF4 on a 64-player server. Fire up the power virus and CPU burner and my Gold-rated 1200W PSU shut down instantly.
 
It's not political, IT IS A MUST FOR THEM TO DO; they can't get PCI-E certification if they are out of spec, how hard is that for any of you guys to understand? There is some flexibility in the specs, but not so much that AMD can run rampant with whatever they want to do. The specs tend to be outdated as well, but there is nothing AMD can do about that right now (with the 6-pin and 8-pin), unless there is something I'm missing, which I don't think there is.

The BIOS solution is a good solution for the time being, but AMD still has to fix the root cause of the spec violation for ANY OEMs or SYSTEM BUILDERS to pick them up.

THIS MEANS APPLE TOO. Who knows, maybe Apple will make AMD drop their pants further and ask for a discount so they will cover the warranty costs...

I am not arguing the in-spec part. Yeah, that must be done! I am just talking more generally: the extra 5-10W drawn from the 6-pin won't hurt, and it's still better than it coming from the motherboard. Yeah, it has to consume 150W or less; they gotta make sure it stays around there. Now if it's doing 151-152W, that is still within the margin of error.
 
I'm not just talking to you, but to everyone in the last page and a half: IT'S NOT A NEGOTIABLE OPTION to be certified, lol. Either you get it or you don't.

The RX 480 has more than one issue: the PCI-E power overdraw, the current overdraw on top of that, and then the overall wattage of the card pulled through the PCI-E connectors. All must be in spec.
And right now they are all out of spec, lol.

So for a temp fix, yeah, pushing it over to the PCI-E connector is a GREAT way to go. At least it will put people's minds at ease about not screwing around with their motherboards. But AMD can't stop there; as NKD just stated, it has to consume 150 watts or less overall, with the other problems fixed as well. So this is what AMD is working on.
 
Unless they screw you. They converted my 650i Ultra from lifetime to 2-year, stating that mine was never eligible. The product listing I ordered from on Newegg had a lifetime warranty. Their product page on the EVGA website showed it as lifetime, and the box for the product (the physical box!) had their old "Lifetime warranty upon registration" sticker! From what I gather, at least at that time, the guys who did GPU and mobo warranty were different departments. So I avoid EVGA motherboards, but I still go for their other products.
Will keep that in mind :(
 
Jesus guys.
I'm a realist, I tell it how it is.
When Nvidia conned everyone with the 3.5GB 970 and then went back on their promises, I gave them a lot of shit.
edit: at that time I owned an R9 290X, no agenda for either side. But I correct fanboy statements on either side because, well, they aren't realistic.

Fanboyism makes people crazy!
 
There's too much variance in the quality of these chips; some are inside the spec and some are not.

14LPP is a shit node, but that has been common knowledge for some time already. No wonder everyone is avoiding that node like the plague, except for AMD...
 
Yeah, I don't know how you could work with that thing, way too small...

Well... it's a digital PWM controller, which is supposedly in charge of the power delivery on the card. You don't have a lot of current going through that particular part.
 
Not in games. With Furmark + Prime it's doable on water. I had the same setup clocked at 4.7GHz and 1250/1600MHz, and I saw 1250W as the max power draw at the wall when playing BF4 on a 64-player server. Fire up the power virus and CPU burner and my Gold-rated 1200W PSU shut down instantly.

I didn't need to run Furmark/Prime to hit that; instead it was in Firestrike Ultra, IIRC. At that time I was running quad 290Xs at 1300/1600MHz and a 5GHz 3930K, to give you an idea. I ran dual PSUs, with two GPUs plus the CPU on one PSU and two GPUs on the other.
 
There's too much variance in the quality of these chips; some are inside the spec and some are not.

14LPP is a shit node, but that has been common knowledge for some time already. No wonder everyone is avoiding that node like the plague, except for AMD...
Well, AMD is bound in WSA chains.
 
I just read one of the undervolt reports and unfortunately there is one critical factor missing from the article: improving the fan profile.
I doubt anyone keeps reference cards on the default fan profile. This was an issue with the 1080 (to a lesser extent the 1070): the fan profile was just not aggressive enough at, say, 80C. It just needed a tweak, and it looks like the 480 does as well.
Reading the following undervolt test: AMD Radeon RX 480 Undervolting Performance - Legit Reviews
They note that most of the time the card is not holding its boost clock speed in the default setup. The primary reason would be throttling, as that is the sort of behaviour one sees with the more dynamic boost algorithms that calculate clocks from voltage and temperature.

So really the test needs to be done with the fan profile set a bit more aggressively IMO, and tbh every consumer should do that anyway with reference cards.
Worth noting LegitReviews' BF4 default behaviour is very similar to that shown here:

[Images: BF4 voltage graph and two animated graphs]

However, there are games where it has a greater impact, such as The Division and RoTR, shown in the LegitReviews undervolt test.
Point is, the undervolt may not actually improve performance if one sets the fan profile more aggressively to keep the boost algorithm from 'throttling'.
But then that is not its primary purpose, and unfortunately it is still not known how much of an undervolt is required to hit a safe margin for the x16 PCIe slot.
Cheers
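
To illustrate the boost behaviour being described, here's a toy model (illustrative only, not AMD's real PowerTune/WattMan logic) of a temperature-driven boost algorithm, which is why a more aggressive fan profile can hold the boost clock on its own:

```python
# Toy model of a voltage/temperature-driven boost algorithm (illustrative
# only, not AMD's actual PowerTune/WattMan logic). The card derates clocks
# once core temperature crosses a threshold, so a more aggressive fan
# profile that keeps temps down lets the boost clock hold on its own.

BOOST_MHZ, BASE_MHZ = 1266, 1120            # RX 480 reference clocks
THROTTLE_START_C, THROTTLE_FULL_C = 80, 90  # assumed throttle window

def boosted_clock(temp_c):
    """Linearly derate from boost to base clock across the throttle window."""
    if temp_c <= THROTTLE_START_C:
        return BOOST_MHZ
    if temp_c >= THROTTLE_FULL_C:
        return BASE_MHZ
    frac = (temp_c - THROTTLE_START_C) / (THROTTLE_FULL_C - THROTTLE_START_C)
    return BOOST_MHZ - frac * (BOOST_MHZ - BASE_MHZ)

print(boosted_clock(84))  # default fan profile, ~84 C: ~1208 MHz (throttling)
print(boosted_clock(76))  # aggressive fan profile, ~76 C: 1266 MHz (full boost)
```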
 
There's too much variance in the quality of these chips; some are inside the spec and some are not.

14LPP is a shit node, but that has been common knowledge for some time already. No wonder everyone is avoiding that node like the plague, except for AMD...


It's a possibility, or it's a combination of that specific architecture and that node, which is more likely.
 
I didn't need to run Furmark/Prime to hit that; instead it was in Firestrike Ultra, IIRC. At that time I was running quad 290Xs at 1300/1600MHz and a 5GHz 3930K, to give you an idea. I ran dual PSUs, with two GPUs plus the CPU on one PSU and two GPUs on the other.

I know how much those cards can take when benchmarking; I ran one at 1320/1600 with the OCP limiters disabled. Power draw increased exponentially after I pushed more than 1.3V through the chip. But I'm 99% sure that 1300/1600 wasn't game stable :D
 
I know how much those cards can take when benchmarking; I ran one at 1320/1600 with the OCP limiters disabled. Power draw increased exponentially after I pushed more than 1.3V through the chip. But I'm 99% sure that 1300/1600 wasn't game stable :D

I've run up to 1.55V on the reference card and over 1.6V on Lightnings. And yes, at 1300/1600 it's on the edge with watercooling. If I had phase change it would drop temps and be stable for gaming. In the past I had a reference 7970 that did 1400MHz on water with very low ambient. I sent it to a friend in Norway, and with his stupidly low ambient he was gaming on it at 1350/1800MHz, lol. Running phase change is not my style though, wasting another 1kW, lol, you know...


AMD Radeon HD 7970 video card benchmark result - Intel Core i7-3930K Processor,ASUSTeK COMPUTER INC. RAMPAGE IV FORMULA

^^ Using a tight memory timing BIOS; 1700MHz on memory here is equivalent to roughly 1900MHz.
 
It's not a solution, but it is helpful. However, it is dangerous.

In its default state, half of the VRM phases are connected to the PCI-E bus. These feed half the GPU plus the memory, which will potentially allow more current to flow from the PCI-E than from the 6-pin connection. However, this can be compensated for using the voltage controller.
So while bridging the power inputs can "help" reduce saturation of the PCI-E bus power, it still doesn't guarantee that it won't be exceeded, because current will now be divided equally between both power inputs.
If the card draws more than 132W + 9% (144W), the PCI-E 12V power delivery will be outside spec.
(PCI-E 12V is specced to 66W + 9%.)

Bridging the power connections is dangerous because AMD have not implemented the sense pin on the 6-pin power connection; they appear to be sensing actual power availability to detect a non-connection.
(This assumes they do have a method of detection; we have to assume they haven't completely lost the plot.)
Once the power rails are bridged, both the 6-pin and the PCI-E power are connected to everything on the card simultaneously.
The card is no longer able to tell if the 6-pin power connection is missing, because there will always be voltage on all circuits.
Should the 6-pin power connector come loose or be left unplugged, the full power draw will come through the PCI-E bus power connections only.
It will fry the power connections.
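
To put numbers on the 50/50 division described above (a back-of-envelope check using the same 12V figures):

```python
# Back-of-envelope check of the bridged-rail arithmetic above.
# PCI-E slot 12 V rail: 66 W nominal, +9% tolerance, per the figures quoted.

SLOT_12V_NOMINAL_W = 66.0
TOLERANCE = 0.09

slot_12v_max = SLOT_12V_NOMINAL_W * (1 + TOLERANCE)  # ~71.9 W
print(f"Slot 12 V ceiling: {slot_12v_max:.1f} W")

# With the rails bridged, current divides roughly equally between the slot
# and the 6-pin (assuming similar path impedance), so the slot stays in
# spec only while total 12 V draw is below twice its ceiling:
bridged_total_max = 2 * slot_12v_max                 # ~143.9 W
print(f"Max total 12 V draw before the slot goes out of spec: "
      f"{bridged_total_max:.1f} W")  # matches the 132 W + 9% = ~144 W above
```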

Even without bridging, since half of the core voltage VRM phases are connected to the PCI-E, wouldn't it pop if you don't have a 6-pin plugged in, whether you bridge or not? Especially during POST, since at that time every component is using full power, plus there are no drivers to keep things in check.

Also, if they're gonna change the power balance, that's gonna put a lot more stress on the 6-pin VRMs. I wonder if those are capable of handling the extra load...
 
Even without bridging, since half of the core voltage VRM phases are connected to the PCI-E, wouldn't it pop if you don't have a 6-pin plugged in, whether you bridge or not? Especially during POST, since at that time every component is using full power, plus there are no drivers to keep things in check.
I posed this question assuming they must have some method of detecting a power connection on the 6-pin.
Someone outside of AMD must have performed this test by now, so it's probably safe to assume they are detecting a voltage level on the 6-pin power side.

Also, if they're gonna change the power balance, that's gonna put a lot more stress on the 6-pin VRMs. I wonder if those are capable of handling the extra load...
My point too.
The 6-pin power connection is already out of spec before this change.

If the supply connection is solid, the wires can handle up to 3 times the load; the wires won't be a problem.
But one of the potential fail conditions is when there is a bad connection.
This is worse than losing a wire entirely, because a bad connection is a tiny connection point (by definition) that is trying to pass high current for its contact size.
This will heat up rapidly, and if it doesn't disconnect by burning up, it will continue to get hotter and risks more physical damage or fire.
In one respect, higher current flow could be safer due to faster oxidation of the connection. But this assumes the metal parts are not being forced together, which can still be the case even when there is a bad connection.
If so, the oxidation would help keep the contact area small and prolong the burning event, which could set fire to the connector.
This makes it a non-tolerable situation.

That's not to say the same type of problem can't occur when it's running to spec, but the spec exists for a reason: to minimise risk.
With less current flow, less heat is generated and degradation is slower.
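
The bad-connection failure mode above is just I²R heating concentrated in a tiny contact: heat grows with the square of the current, so even modest overdraw makes a marginal pin noticeably worse. A rough illustration (the contact resistance is an assumed figure for a degraded pin, and all current is treated as passing through that one contact):

```python
# Rough I^2 * R illustration of why overdraw makes a bad contact worse.
# The 0.05 ohm contact resistance is an assumed figure for a degraded pin,
# not a measured value; a good crimp is in the milliohm range. All of the
# current is treated as flowing through the one bad contact.

BAD_CONTACT_OHMS = 0.05

def contact_heat_w(power_w, volts=12.0, ohms=BAD_CONTACT_OHMS):
    """Heat dissipated in the contact for a given 12 V power draw."""
    current = power_w / volts
    return current ** 2 * ohms

print(f"{contact_heat_w(75):.1f} W")   # in-spec 6-pin draw: ~2.0 W at the pin
print(f"{contact_heat_w(100):.1f} W")  # ~100 W overdraw: ~3.5 W, ~78% more heat
```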
 
Even without bridging, since half of the core voltage VRM phases are connected to the PCI-E, wouldn't it pop if you don't have a 6-pin plugged in, whether you bridge or not? Especially during POST, since at that time every component is using full power, plus there are no drivers to keep things in check.

Also, if they're gonna change the power balance, that's gonna put a lot more stress on the 6-pin VRMs. I wonder if those are capable of handling the extra load...
Hopefully this works, but try this video.

[Embedded video]

It explains in great detail the power delivery on the card, with comparisons to the Fury.
 
I posed this question assuming they must have some method of detecting a power connection on the 6-pin.
Someone outside of AMD must have performed this test by now, so it's probably safe to assume they are detecting a voltage level on the 6-pin power side.


My point too.
The 6-pin power connection is already out of spec before this change.

If the supply connection is solid, the wires can handle up to 3 times the load; the wires won't be a problem.
But one of the potential fail conditions is when there is a bad connection.
This is worse than losing a wire entirely, because a bad connection is a tiny connection point (by definition) that is trying to pass high current for its contact size.
This will heat up rapidly, and if it doesn't disconnect by burning up, it will continue to get hotter and risks more physical damage or fire.
In one respect, higher current flow could be safer due to faster oxidation of the connection. But this assumes the metal parts are not being forced together, which can still be the case even when there is a bad connection.
If so, the oxidation would help keep the contact area small and prolong the burning event, which could set fire to the connector.
This makes it a non-tolerable situation.

That's not to say the same type of problem can't occur when it's running to spec, but the spec exists for a reason: to minimise risk.
With less current flow, less heat is generated and degradation is slower.

It would be a much safer practical solution to shift load from the slot to the 6-pin, but it's still technically even worse out-of-spec operation on the 6-pin, and completely valid grounds to recall or refund this POS, no matter how many AMD diehards claim or show how many multiples of the 75W maximum the 6-pin can carry.
 
I can't think of many cards that come out of spec on the 6-pin at stock. It's nowhere near as big a deal as the PCIe problem, but it's still technically not allowed, and more importantly to me, I don't understand how they ended up in this situation, lol.
 
I can't think of many cards that come out of spec on the 6-pin at stock. It's nowhere near as big a deal as the PCIe problem, but it's still technically not allowed, and more importantly to me, I don't understand how they ended up in this situation, lol.

AMD fanboys' favorite punching bag, the 960 Strix, did very slightly and briefly exceed 75W on the 6-pin out of the box in PCPer's testing. Not saying it should get a free pass, but at least it is advertised as a factory-OCed card and is faaaaaaar from a PCIe slot disaster in the making.

If this fiasco makes card makers pay attention to slot power and put 8-pins everywhere, even on <150W cards, then every consumer wins. If users insist on being cheapskates and using adapter cables instead of a real 8-pin cable/PSU, then they can only blame themselves for being dumb if shit happens.
 
I can't think of many cards that come out of spec on the 6-pin at stock. It's nowhere near as big a deal as the PCIe problem, but it's still technically not allowed, and more importantly to me, I don't understand how they ended up in this situation, lol.


No card that I know of was out of spec on the PCI-E bus, in amps or wattage draw. On the PCI-E connectors it has happened once, the 6990 I think; all other top-end cards, nope, and this is why nV and AMD never let AIBs make their own versions of those, because there usually isn't much room to play with.
 
It would be a much safer practical solution to shift load from the slot to the 6-pin, but it's still technically even worse out-of-spec operation on the 6-pin, and completely valid grounds to recall or refund this POS, no matter how many AMD diehards claim or show how many multiples of the 75W maximum the 6-pin can carry.


Across a 6-pin PCI-E connector it's not much of an issue at all; the specs for the connectors really need to be redone. In this case though, AMD just f'ed up, no ifs and buts about it.
 
No card that I know of was out of spec on the PCI-E bus, in amps or wattage draw. On the PCI-E connectors it has happened once, the 6990 I think; all other top-end cards, nope, and this is why nV and AMD never let AIBs make their own versions of those, because there usually isn't much room to play with.

What about the 295X2, Titan Z, Pro Duo, etc.?
 
All of those had enough power connectors for PCI-E compliance.

Power Consumption: Gaming - Radeon R9 295X2 8 GB Review: Project Hydra Gets Liquid Cooling

Tom's Hardware power draw: from the bus, the 295X2 is drawing 30 watts or so.

Actually, they did go over the PCI-E connector limits on this card as well.


The NVIDIA GeForce GTX TITAN Z Review | 3DMark, Power, Sound and Conclusions

They didn't show us the details, but they wrote about it:

Very well in fact! The Radeon R9 295X2 still uses the most power, drawing 676 watts as a full system including our Sandy Bridge-E processor and platform. The pair of GeForce GTX 780 Ti cards in SLI uses 624 watts, 52 watts less than the 295X2. The GeForce GTX Titan Z was actually quite efficient, with a total system power consumption of 530 watts, 146 watts less than the 295X2.

That is not a small difference. With a TDP of 500 watts, we knew that the Radeon R9 295X2 was on the outside looking in when it comes to graphics cards, but the Titan Z is clearly keeping things more in line with the expected values. The TDP of 375 watts is within specification of the PCIe standards: 75 watts from the PCIe bus and 150 watts from each of the two 8-pin power connectors. There are no strict power supply amperage requirements like we saw on the second page of our R9 295X2 review, which gives users and system builders more flexibility in chassis and design.
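
The compliance arithmetic the review is applying, slot allowance plus connector ratings, is simple to sketch:

```python
# The PCI-E compliance arithmetic from the review: total board power must
# fit within the slot allowance plus each auxiliary connector's rating.

SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def max_in_spec_power(six_pins=0, eight_pins=0):
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(max_in_spec_power(eight_pins=2))  # 375 W: the Titan Z's TDP fits exactly
print(max_in_spec_power(six_pins=1))    # 150 W: the RX 480's ceiling with one 6-pin
# The 295X2 also has two 8-pins (375 W ceiling), so its 500 W TDP is the
# "outside looking in" case the review describes.
```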
 
All of those had enough power connectors for PCI-E compliance.

Power Consumption: Gaming - Radeon R9 295X2 8 GB Review: Project Hydra Gets Liquid Cooling

Tom's Hardware power draw: from the bus, the 295X2 is drawing 30 watts or so.

Actually, they did go over the PCI-E connector limits on this card as well.

Yeah, the connectors have been overdrawn before; it's no big deal really, and especially on a dual-GPU card it's pretty forgivable in my book, so long as it's not something ridiculous like +80% at stock, lol. PCIe? Never.
 
Yeah, the connectors have been overdrawn before; it's no big deal really, and especially on a dual-GPU card it's pretty forgivable in my book, so long as it's not something ridiculous like +80% at stock, lol. PCIe? Never.
Yeah, on these super high-end cards I don't see people using cheapo components anyway, so there should be no issue.
 