AMD's Statement On Reports Of Radeon RX 480 Excessive Power Draw

I have a 290X that consistently trips an arc-fault breaker if I run the MSI Kombustor stress test. My rig and monitors are the only things on the circuit. I've never had an issue with any other card. It always seemed fishy to me, but I know little to nothing about electricity.
 
A 290X will draw a lot of power. I had 290X CF and an overclocked X58-era i7 in my room. I had to get a 2nd AC duct installed.
 
I have a 290X that consistently trips an arc-fault breaker if I run the MSI Kombustor stress test. My rig and monitors are the only things on the circuit. I've never had an issue with any other card. It always seemed fishy to me, but I know little to nothing about electricity.

You don't need to know much except that you're probably overloading that circuit. Also, there's no good reason to run a power virus unless you have a very specific purpose, like load-testing a circuit or heat-testing your cooling.
 
You don't need to know much except that you're probably overloading that circuit. Also, there's no good reason to run a power virus unless you have a very specific purpose, like load-testing a circuit or heat-testing your cooling.
I like to use my rig as a space heater :) The room temp is 62°F in this basement. I have resorted to a sweatshirt and sweatpants, in the middle of summer!
 
I like to use my rig as a space heater :) The room temp is 62°F in this basement. I have resorted to a sweatshirt and sweatpants, in the middle of summer!

Man, I would loooooove to have a full, or hell, even a partial basement for my mining rigs and my home server. I live nearly as far east as one can go in MD without falling into the Atlantic, and the high water table coupled with the crappy sandy soil means no basement love. I did get lucky in that my house is a huge split-level, and my downstairs living room is huge and stays about 4-6°F cooler in the summer and 10-12°F cooler in the winter, so that helps a bit.
 
How can a driver control the amount of power the card takes from the slot vs the power socket?

Because the driver contains a copy of the VBIOS. The proper/permanent way of fixing this issue would be a BIOS update to the card.
 
No, the driver won't flash a new BIOS. The driver will communicate with the VRM controller IC over I2C and modify the power distribution so that the three phases hooked up to the PCI-E connector run at a higher duty cycle than the other three.
 
PCI-E 6-pin wires are generally 16 AWG, stranded. At 12 V, that gives a current rating of 20 A. (This is just a simple wire rating; the "safe" current would be limited by any connector limitation and, of course, the ability of the PSU to supply that current.)

I*V=P

2 12v wires on a PCI-E 6 pin plug gives:

2 * 20 A * 12 V = 480 W

The wires could safely pull 480 Watts, all day, every day.

Household electrical circuits are limited to 80% of their current limit. (So a 12-gauge, 20 A wire would support a circuit designed not to exceed a 16 A steady load.) Applying the same 80% limit to the 480 W rating gives a constant power rating of 384 W.

The PCI-E 6 pin spec'ed rating is far below the actual capacity of the wires to safely conduct the current. As long as the connectors are similarly robust, the limiting factor would be the PSU.

(If you look this up yourself, make sure you're looking at 12v DC ratings. Most tables assume household current of 110v AC.)

There is plenty of headroom available beyond the 75w "limit" imposed on PCI-E 6 pin connectors.

Ken
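The arithmetic in Ken's post above can be sketched in a few lines of Python. Note the 20 A per-conductor rating and the 80% continuous-load derating are the post's own assumptions, not values taken from the PCI-E spec:

```python
# Sketch of the wire-capacity math from the post above. The 20 A rating
# for 16 AWG and the 80% continuous-load derating are the post's
# assumptions, not PCI-E spec values.
def six_pin_wire_capacity(wires=2, amps_per_wire=20.0, volts=12.0, derate=0.80):
    """Return (raw_watts, derated_watts) for the 12 V conductors."""
    raw = wires * amps_per_wire * volts   # 2 * 20 A * 12 V = 480 W
    return raw, raw * derate              # 80% rule -> 384 W continuous

raw_w, continuous_w = six_pin_wire_capacity()
print(raw_w, continuous_w)  # 480.0 384.0
```

Either number is comfortably above the 75 W the 6-pin spec allows, which is the post's point: the wires are not the limiting factor.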
 
The amount of energy you can put through even smaller wires is quite astonishing really, enough to still power this card. Spikes use some of that headroom though.
 
Works fine for me. I specced my system to handle heavier loads.

Consider how many graphics cards from all manufacturers include Molex-to-PCIe power adapters. Those are definitely not spec compliant, but the "average Joe" consumers everyone is so worried about all of a sudden wouldn't think twice about using something included in the box.

I wonder how well the average OEM power supply (which often have a much lower practical limit than their sticker rating) handles using one of those.
 
c3k is correct on the wire rating. The 5.5 A +8% figure was from the PSU side, meaning the PSU itself should be able to send 142 A, but yes, the cable will withstand much more, as you proved.
 
I have a 290X that consistently trips an arc-fault breaker if I run the MSI Kombustor stress test. My rig and monitors are the only things on the circuit. I've never had an issue with any other card. It always seemed fishy to me, but I know little to nothing about electricity.

Are your outlets hooked to a GFCI? As crazy as it sounds

1. Replace your surge protector. Sometimes cheap MOV-based surge strips can cause power spikes when there's a sudden inrush as they get close to end of life.
2. Replace your GFCI
 
I have a 290X that consistently trips an arc-fault breaker if I run the MSI Kombustor stress test. My rig and monitors are the only things on the circuit. I've never had an issue with any other card. It always seemed fishy to me, but I know little to nothing about electricity.
Arc-fault breakers, when first required by code, were practically faulty to begin with, and it took a while to iron out the kinks. They are very easy to replace YOURSELF. Take a picture of your breaker and I can give you very easy instructions on where to pick one up and about how much you would have to pay. In most cases you can pick them up at Home Depot or Lowes for around 35 bucks. It's also possible you have a loose wire in your outlet or leading up to your outlet, but in MOST cases it's a faulty breaker.
 
Are your outlets hooked to a GFCI? As crazy as it sounds

1. Replace your surge protector. Sometimes cheap MOV-based surge strips can cause power spikes when there's a sudden inrush as they get close to end of life.
2. Replace your AFCI

Fixed it for you. I know you're an engineer, but GFCIs are mostly just in kitchens and bathrooms :)
 
Our PSUs can handle the current over the PCIe connector.

Each pin on the PEG is rated for about ~1.25 amps. There are 4 of them, for a total of 5 amps. With some 8% leeway, that gets you about 5.4 amps total.

So what happens when you draw too much current through a conductor/pin? That conductor/pin becomes the weak point and you start to get voltage drops at that point. That in the end affects stability. And when people start to overclock the RX480 they note the PEG bus pin starts to show significant voltage drops proving that the motherboard power path is under stress.

So what happens to those lost volts? (Spec - Actual) * Current = Power Lost = HEAT.

So let's say you are running 11.85 volts. Spec is 12 volts.
12 - 11.85 = 0.15 volts

0.15 V * 5.5 A = 0.825 watts of heat. Not serious. The problem is AMD is really cranking a lot of juice over that PEG.
0.15 V * 6.5 A = 0.975 watts of heat. Again, not that serious. Anything over 1 watt in a small area starts to become a problem, IMHO; 5-watt Atom processors need a heatsink.

But as you start cranking amps, your voltage drop increases and your problem starts to grow exponentially. PLUS, you are supplying less than 12 volts to your DC-DC VRMs, which puts strain on them.
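For what it's worth, the drop-heat figures in the post above reduce to a one-liner. The 11.85 V reading and the 5.5 A and 6.5 A currents are the post's example numbers:

```python
# Heat dissipated at the weak point: (V_spec - V_actual) * I.
# Voltages and currents are the example numbers from the post above.
def drop_heat_watts(v_spec, v_actual, amps):
    return (v_spec - v_actual) * amps

print(round(drop_heat_watts(12.0, 11.85, 5.5), 3))  # 0.825
print(round(drop_heat_watts(12.0, 11.85, 6.5), 3))  # 0.975
```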
 
Fixed it for you. I know you're an engineer, but GFCIs are mostly just in kitchens and bathrooms :)

I have a finished basement. They hooked the outlets to the GFCI in the garage. (Likely because of flooding potential in a basement)

But you are right. He could have AFCI. They really didn't start taking hold till 2005+.
 
Didn't the GTX 960 draw up to 225 W from PCI-E when they first came out? Why are we making such a huge deal about AMD doing a much lower voltage? lol - EDIT: WATTAGE, AND LOWER IN COMPARISON TO THE 960's WATTAGE <-- because this was too difficult for some to pick up
 
Didn't the GTX 960 draw up to 225w from PCI-e when they first came out? Why are we making such a huge deal about AMD doing a much lower voltage lol

No, it didn't and it's not a lower "voltage". Maybe you should avoid talking about things you don't know anything about.
 
So much of a non-issue that Raja had a driver team working over time during 4th of July weekend.

When you suddenly, out of nowhere, have people and websites claiming your video card will destroy everyone's computers, remarks which could, and possibly will, tank your company, you're damn right they'll work over the 4th of July to clear their name.

No disrespect, but something like 9/10 of the world doesn't care much about the 4th of July.
 
Wattage, not voltage. I agree I used the wrong wording; that's just about the only thing you got me on. Maybe you should avoid speaking before someone has a chance to read what crap you spew?

AMD RX 480 Power Issue Detailed - VRMs Beefier Than GTX 1080 Founder's Edition "AMD’s RX 480 Isn’t The First Graphics Card To Exhibit Such Behavior – Custom GTX 960s Caused Similar Concerns Last Year

Read more: AMD RX 480 Power Issue Detailed - VRMs Beefier Than GTX 1080 Founder's Edition"

The graph shows the 960 pulls close to 225 W at PEAK. But you may want to squint; it may be too difficult for you to tell that the graph for the 960 goes up to 300 W, whereas the graph for the 480 maxes out at 150 W. Just in case, maybe pick up a monocle to go with your colossal attitude.


One thing to note for those who aren't fanboys: the 960 did not consistently remain at that power band.

Yes, you've refuted my point about you not knowing anything about electricity by posting some articles that don't prove your point at all and just make you look stupid.

So, a single model of custom GTX 960 has a problem with insufficient filtering, which you can only see if you monitor it with 1 ms readings on a high-resolution oscilloscope. You somehow believe that this is relevant to the current situation around the Radeon RX 480 pulling a constant overcurrent through the PCI-E slot.
 
Yes, you've refuted my point about you not knowing anything about electricity by posting some articles that don't prove your point at all and just make you look stupid.

So, a single model of custom GTX 960 has a problem with insufficient filtering, which you can only see if you monitor it with 1 ms readings on a high-resolution oscilloscope. You somehow believe that this is relevant to the current situation around the Radeon RX 480 pulling a constant overcurrent through the PCI-E slot.

I addressed your nitpicking about an incorrect term being used and accepted that I made a mistake, as you should too. That said, the article shows that they tested the RX 480 in the same manner as they did the 960; also, they tested multiple 960s, and there is a whole article on other board partners.

That said, I also did state that the 960 PEAKS and doesn't remain at a constant, however it peaks much higher than anything the 480 does. I should also note that the primary cause of hardware failure is thermal fatigue, which can be caused by the RX 480 and the 960 alike. HOWEVER, 90% of all high-end motherboards include VRM heatsinks for this reason; when closely monitored, there is very little danger to most motherboards these days from consistently sitting higher than 75 W. The theoretical (I'd hate for it to be used out of context) maximum for PCIe 3.0 is 300 W.


The point is no one raised an eyebrow when Nvidia's (EDIT) board partners were doing this, or should I say continue to do this, as this issue was never rectified by their board partners in the first place.
 
All that energy blasting away on every website on the planet, the Nvidia trolls went into full attack mode... And the result? A big recall? AMD going under, finally? I'll give them credit for one thing: they annoyed the living crap out of everybody! But that's where it ended lol :)
 
And this discussion is becoming a vicious cycle now. Kyle said it was a non-issue yesterday; AMD fixed it and released the driver today and made it even more of a non-issue than it was. It went from violating the spec, to being within 10% of the allowed maximum, and when compatibility mode is enabled, it is in spec. (Or at least mostly.)

Case closed.
 
All that energy blasting away on every website on the planet, the Nvidia trolls went into full attack mode... And the result? A big recall? AMD going under, finally? I'll give them credit for one thing: they annoyed the living crap out of everybody! But that's where it ended lol :)

But they are STILL out of spec on the PCIe. But this is no big deal provided the PSU is big enough. HOWEVER, the overclock should be disabled as any overclock will likely put the PEG outside spec again.

The rightful thing to do is to RECALL the cards. But we know that will cost way too much money. The bean counters are doing a dice roll against a lawsuit possibility.


And now all reviews are invalidated until we get new numbers up. It was, quite simply, a boneheaded design move to make it look like a low power card, when in fact it isn't. Quite a black eye IMHO.
 
But they are STILL out of spec on the PCIe. But this is no big deal provided the PSU is big enough. HOWEVER, the overclock should be disabled as any overclock will likely put the PEG outside spec again.


Overclocking is at your own risk for anything and is not warranted by any manufacturer. If you were to overclock and have a failure, it's your problem, not AMD's.

This is by no means an enthusiast part. It's a reference card. Wait for AIB models.
 
Overclocking is at your own risk for anything and is not warranted by any manufacturer. If you were to overclock and have a failure, it's your problem, not AMD's.

This is by no means an enthusiast part. It's a reference card. Wait for AIB models.

Car Driver: Hey, let's go 160 MPH on 150 MPH-rated tires.....

Car Mfg: Ummm, NO! Sorry, but the ECM will throttle back to keep you from going over 150 MPH, because that is beyond the engineered limits of the tires we spec'd, and we released it stock to run at a 65 MPH average.

In other words, intentionally allowing anyone to go outside of spec on a known weak part is a legal liability. Therefore it should be DISABLED.

It is possible to OVERCLOCK and stay well within spec on a PROPERLY DESIGNED PART. (Hey, let's swap those tires out for Z-rated ones and reprogram the ECM so it knows we have Z-spec tires.) It is up to the designer and owner, however, to make sure you have a robust enough system and components to do that. So overclocking is valid for THOSE individuals. Yes, it's still a mea culpa if you crash and burn. But allowing overclocking on a system that is already outside spec is, quite frankly, GROSS NEGLIGENCE, as any good engineer would state.

AMD KNOWS the PEG bus isn't designed for currents over 5.5 amps.
 
Provided the driver keeps the PCIE bus under its proper power limit, just how far is it overdrawing the 6-pin?

I'm finding it super hard to believe that anyone would bother worrying about a few extra watts on a 6-pin. It's the MB we're trying to keep from killing here. There's plenty of current headroom in the 6-pin wiring.

(I suppose to answer my own question, the one exception would be business use where you MUST meet all relevant industry specs or you can't use the part. But if whatever driver gets loaded automatically by Windows doesn't select the by-the-book power options no one is going to spec any 480 cards for any mission critical applications.)
The safety level isn't just down to the ability of the wire to carry power; it's also down to the ability of the connector to remain safe under a bad connection.
The more current flows, the hotter a bad connection will get.
This is why the limit exists.

The loss of a wire is less of a problem than a bad connection on that wire, as long as the remaining connections are well made.
A bad connection causes incredible heat in a tiny space.
If it doesn't burn out, it can continue long enough to melt plastic and even start a fire.
 
So, all in all, a non-issue, and again it was only something that affected computers with bargain-basement crap parts, and it was easily fixed to calm fears.

Just like I said two days ago

Methinks the Nvidia shills be working heavy overtime. (And I own a GTX 1080, to boot.)
Only if you bury your head.
 
PCI-E 6-pin wires are generally 16 AWG, stranded. At 12 V, that gives a current rating of 20 A. (This is just a simple wire rating; the "safe" current would be limited by any connector limitation and, of course, the ability of the PSU to supply that current.)

I*V=P

2 12v wires on a PCI-E 6 pin plug gives:

2 * 20 A * 12 V = 480 W

The wires could safely pull 480 Watts, all day, every day.

Household electrical circuits are limited to 80% of their current limit. (So a 12-gauge, 20 A wire would support a circuit designed not to exceed a 16 A steady load.) Applying the same 80% limit to the 480 W rating gives a constant power rating of 384 W.

The PCI-E 6 pin spec'ed rating is far below the actual capacity of the wires to safely conduct the current. As long as the connectors are similarly robust, the limiting factor would be the PSU.

(If you look this up yourself, make sure you're looking at 12v DC ratings. Most tables assume household current of 110v AC.)

There is plenty of headroom available beyond the 75w "limit" imposed on PCI-E 6 pin connectors.

Ken

Do the calculations for a bad connection.
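Taking that suggestion literally: heat generated in the contact itself goes as I²R, so a degraded contact turns the same current into far more heat than a sound one. The resistance values below are illustrative assumptions, not measurements:

```python
# Hypothetical bad-connection math: power dissipated in the contact is
# P = I^2 * R. Both resistance values here are illustrative assumptions.
def contact_heat_watts(amps, contact_ohms):
    return amps ** 2 * contact_ohms

current = 6.25  # ~75 W / 12 V through one path, for illustration
print(round(contact_heat_watts(current, 0.005), 3))  # good crimp: 0.195 W
print(round(contact_heat_watts(current, 0.1), 3))    # bad contact: 3.906 W
```

A few watts concentrated in a plastic connector housing is exactly the melt-and-fire scenario the previous post describes, which is why connector specs are set well below raw wire ampacity.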
 
I must apologize for my rants on the RX 480 I have posted. Sometimes I get my head stuck in a bad place. I was going to send the Sapphire back, but then I read a couple of the 1671 tests, figured what the hell, and installed the card in an old 2008 Gigabyte P35-DS3 mobo with a Q6600, 8 GB DDR2-800, and an old 80 GB Intel X25 SSD. I installed the drivers and ran Heaven and Valley.

Heaven, 1920x1080, ultra, tessellation medium:
A10-7870K / GTX 660 Ti / 16 GB DDR3-2133 / Intel 240 GB SSD: 45.4 fps, score 1143
Q6600 / RX 480 / 8 GB DDR2-800 / Intel 80 GB X25: 71.8 fps, score 1808

Valley, 1920x1080, low, default:
A10-7870K / GTX 660 Ti / 16 GB DDR3-2133 / Intel 240 GB SSD: 68.1 fps, score 2849
Q6600 / RX 480 / 8 GB DDR2-800 / Intel 80 GB X25: 46.8 fps, score 1958

Is the old Q6600 the factor in the low Valley score?

I will be looking at building an Intel Core i5 system; I just don't have the extra cash. I have to admit AMD did a good job fixing the power issue. I used both compatibility and normal modes: a 2-3 fps difference. It's an older board, so I will leave it in compatibility mode.
 
I must apologize for my rants on the RX 480 I have posted. Sometimes I get my head stuck in a bad place. I was going to send the Sapphire back, but then I read a couple of the 1671 tests, figured what the hell, and installed the card in an old 2008 Gigabyte P35-DS3 mobo with a Q6600, 8 GB DDR2-800, and an old 80 GB Intel X25 SSD. I installed the drivers and ran Heaven and Valley.

Heaven, 1920x1080, ultra, tessellation medium:
A10-7870K / GTX 660 Ti / 16 GB DDR3-2133 / Intel 240 GB SSD: 45.4 fps, score 1143
Q6600 / RX 480 / 8 GB DDR2-800 / Intel 80 GB X25: 71.8 fps, score 1808

Valley, 1920x1080, low, default:
A10-7870K / GTX 660 Ti / 16 GB DDR3-2133 / Intel 240 GB SSD: 68.1 fps, score 2849
Q6600 / RX 480 / 8 GB DDR2-800 / Intel 80 GB X25: 46.8 fps, score 1958

Is the old Q6600 the factor in the low Valley score?

I will be looking at building an Intel Core i5 system; I just don't have the extra cash. I have to admit AMD did a good job fixing the power issue. I used both compatibility and normal modes: a 2-3 fps difference. It's an older board, so I will leave it in compatibility mode.


Good deal, glad to see it's working great. The slot must be a PCI-E 2.0 slot, so no worries about a smoke situation. Just make sure the slot is clean and the contacts are free from oxidation.
 
But they are STILL out of spec on the PCIe. But this is no big deal provided the PSU is big enough. HOWEVER, the overclock should be disabled as any overclock will likely put the PEG outside spec again.

The rightful thing to do is to RECALL the cards. But we know that will cost way too much money. The bean counters are doing a dice roll against a lawsuit possibility.


And now all reviews are invalidated until we get new numbers up. It was, quite simply, a boneheaded design move to make it look like a low power card, when in fact it isn't. Quite a black eye IMHO.


The issue seems to stem from not having enough performance to compete. Before final clocks were set, I'm willing to bet it was in spec just fine, till they realized it needed a lot more balls.
 