AMD in Trouble? RX 480 Powergate

Lol. Listen. If I post this, I'm gonna get threatened with lawsuits :p Not you :p
Already got like 5 threats.

Just post the link then, with an FYI. Let the people who published that document get sued. :)

BTW, googling "AMD reddit" shows your thread as the in-the-news result. :)
 
I did actually check Tom's review a few hours before your post. It showed ~50-60W average from the slot with peaks slightly over 70W. Note that peaks are not worth a damn, since they last half a microsecond at most.

Those peaks also violate the PCI Express spec, if they are not a measurement artifact. The card isn't allowed to change its current draw faster than 0.1A per microsecond: that's the "maximum current slew rate" part of the spec image posted above. A 1A spike for 0.5 microsecond is 2A per microsecond: 20 times the max allowed slew rate and seriously no bueno.
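A minimal sketch of that arithmetic for anyone who wants to plug in their own numbers (the 0.1 A/µs limit and the ~1 A over ~0.5 µs peak are the figures quoted above, nothing beyond that):

```python
# Back-of-the-envelope check of the slew-rate claim above.
SPEC_MAX_SLEW = 0.1                      # A/us, the "maximum current slew rate" from the spec image

def slew_rate(delta_current_a, duration_us):
    """Average slew rate of a current step, in A per microsecond."""
    return delta_current_a / duration_us

spike = slew_rate(delta_current_a=1.0, duration_us=0.5)   # the ~1 A, ~0.5 us peak
print(f"spike slew: {spike:.1f} A/us -> {spike / SPEC_MAX_SLEW:.0f}x the allowed rate")
# spike slew: 2.0 A/us -> 20x the allowed rate
```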
 
Those peaks also violate the PCI Express spec, if they are not a measurement artifact. The card isn't allowed to change its current draw faster than 0.1A per microsecond: that's the "maximum current slew rate" part of the spec image posted above. A 1A spike for 0.5 microsecond is 5A per microsecond: 50 times the max allowed slew rate and seriously no bueno.

god I love these threads
 
If you take, let's say, a capacitor that is rated for a certain uF, and you keep going over it and stay over it, what will happen? You will blow it.

But if you peak and go back under its rating it will be fine. Capacitors have tolerance ratings because of this.

Not quite, razor. Farads is the amount of energy it can store. It can't go past its rating. And if you go past the voltage rating... POP goes the CAP! You can however cycle them so much and so often with so much power that they heat up and reach their Tmax. If you are using cheap electrolytic paste caps out of TW or China, god help ya. You'll have an oozing mess for sure.

Now resistors have a wattage rating. You can exceed this, and if you do, you will slowly burn out the carbon film (typically used on low-wattage resistors).
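For reference (not something the posts above spell out): the energy a capacitor actually stores ties together capacitance and voltage, so the farad rating alone doesn't pin it down; the voltage across the cap matters just as much.

```latex
E = \tfrac{1}{2} C V^2
```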
 
Not quite, razor. Farads is the amount of energy it can store. It can't go past its rating. And if you go past the voltage rating... POP goes the CAP!
Well, in real life, there is usually a substantial safety margin (to compensate for manufacturing variations, thermal variation, and aging, at least), but still it's risky to exceed a capacitor's voltage spec.
 
Not quite, razor. Farads is the amount of energy it can store. It can't go past its rating. And if you go past the voltage rating... POP goes the CAP! You can however cycle them so much and so often with so much power that they heat up and reach their Tmax. If you are using cheap electrolytic paste caps out of TW or China, god help ya. You'll have an oozing mess for sure.

Now resistors have a wattage rating. You can exceed this, and if you do, you will slowly burn out the carbon film (typically used on low-wattage resistors).

He wrote uF instead of V.
 
So I spent a little bit of time looking at an article on Tom's HW about how they do their power measurements (The Math Behind GPU Power Consumption And PSUs). Overall it looks pretty good, but it seems there is something wrong with how they are measuring the PCI Express voltages. This article indicates that they are measuring the 12V and 3.3V at the motherboard power connector. This seems wrong to me. They really should be measuring the PCI Express voltages at the PCI Express riser card they are using. If there are any voltage drops between the motherboard power connector and the PCI Express slot, that could cause an error in the measured power consumption. Also, there will generally be a voltage droop on the power supply when the current demand increases. I wish they provided the raw data they measured (V and I for each PS voltage).

The current probes they are using are rated to +/- 1% accuracy +/- 2mA. For the max PCI Express power (12V@5.5A and 3.3V@3A) this error is < 1W, so that seems ok. However, we also don't know if their probes and scopes have been calibrated.
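A rough sketch of how that <1W figure falls out; the probe spec is the one quoted above, while the 5.5A @ 12V and 3A @ 3.3V slot limits are my assumption of the usual PCIe numbers:

```python
# Worst-case power error from a probe rated +/-1% of reading +/- 2 mA.
def current_error_a(i_amps, pct=0.01, offset_a=0.002):
    """Probe current error at a given reading, in amps."""
    return i_amps * pct + offset_a

err_12v = 12.0 * current_error_a(5.5)    # ~0.68 W on the 12 V rail at its 5.5 A limit
err_3v3 = 3.3 * current_error_a(3.0)     # ~0.11 W on the 3.3 V rail at its 3 A limit
print(f"worst-case power error: {err_12v + err_3v3:.2f} W")   # ~0.79 W, under 1 W
```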

Anecdotally, I have made measurement errors at work with different types of current probes than the ones they are using. We have some Tektronix probes that will give you a significant measurement error if the probe is not closed all the way. I'm not saying this happened here, but it is worth noting. I would like to see other people independently verify the measurements THG made using similar methodology.
 
Why is this such a big deal? If it's out of spec by a non-trivial margin when running stock, OK, that's a problem they need to deal with. AMD has straight up said that they are looking into it... It's been a day... so let them look into it.

Why does overclocking even come into this conversation? Yeah, it sucks that it's not a great overclocker... but if you're running the card out of spec, you can't really expect all the other values to stay in spec. You're off book; you're going to get off-book results.

It's a budget card, so it's not as likely to be CrossFired or overclocked by the masses that buy it.

So I'm fine with AMD doing an analysis and coming back. We can crucify them then if necessary if it's BS or a bad analysis.

Leldra just seems to be frothing for the sake of frothing.
 
Not quite, razor. Farads is the amount of energy it can store. It can't go past its rating. And if you go past the voltage rating... POP goes the CAP! You can however cycle them so much and so often with so much power that they heat up and reach their Tmax. If you are using cheap electrolytic paste caps out of TW or China, god help ya. You'll have an oozing mess for sure.

Now resistors have a wattage rating. You can exceed this, and if you do, you will slowly burn out the carbon film (typically used on low-wattage resistors).


Yeah, I was thinking V when I typed it up, lol.
 
Why is this such a big deal? If it's out of spec by a non-trivial margin when running stock, OK, that's a problem they need to deal with. AMD has straight up said that they are looking into it... It's been a day... so let them look into it.

Why does overclocking even come into this conversation? Yeah, it sucks that it's not a great overclocker... but if you're running the card out of spec, you can't really expect all the other values to stay in spec. You're off book; you're going to get off-book results.

It's a budget card, so it's not as likely to be CrossFired or overclocked by the masses that buy it.

So I'm fine with AMD doing an analysis and coming back. We can crucify them then if necessary if it's BS or a bad analysis.

Leldra just seems to be frothing for the sake of frothing.

I agree we need to stay calm. At this point it's a sit and wait. AMD deserves time to reply. (A week is reasonable.)
 
Those peaks also violate the PCI Express spec, if they are not a measurement artifact. The card isn't allowed to change its current draw faster than 0.1A per microsecond: that's the "maximum current slew rate" part of the spec image posted above. A 1A spike for 0.5 microsecond is 5A per microsecond: 50 times the max allowed slew rate and seriously no bueno.
I am positive your math has a fail here.

Also, isn't the average slew over a microsecond the relevant one?

Why is this such a big deal? If it's out of spec by a non-trivial margin when running stock, OK, that's a problem they need to deal with. AMD has straight up said that they are looking into it... It's been a day... so let them look into it.

Why does overclocking even come into this conversation? Yeah, it sucks that it's not a great overclocker... but if you're running the card out of spec, you can't really expect all the other values to stay in spec. You're off book; you're going to get off-book results.

It's a budget card, so it's not as likely to be CrossFired or overclocked by the masses that buy it.

So I'm fine with AMD doing an analysis and coming back. We can crucify them then if necessary if it's BS or a bad analysis.

Leldra just seems to be frothing for the sake of frothing.
Mainly because the card gets even further out of spec when overclocking. But yes, it should be done with a BIOS update. How AMD will get it to all the first-batch cards, however, is another story.

And yes, the power distribution is a firmware thing; it's outside of driver update control.
 
I am positive your math has a fail here.

Also, isn't the average slew over a microsecond the relevant one?


Mainly because the card gets even further out of spec when overclocking. But yes, it should be done with a BIOS update. How AMD will get it to all the first-batch cards, however, is another story.

And yes, the power distribution is a firmware thing; it's outside of driver update control.
No, not really. Firmware updates can be delivered via driver updates. You never heard of Intel issuing a uCode patch for their CPUs? After the whole Pentium floating-point error, Intel and most large IC makers put in updateable microcode to fix hardware issues.
 
I am positive your math has a fail here.
Fixed it, thanks for the catch. Only 20x over the spec.

Also, isn't the average slew over a microsecond the relevant one?
No. Maximum slew rates exist to prevent at least two things: voltage spikes across the inherent inductance in the power supply delivery wires, and instability in the poor VRMs trying to keep everything jake. Change the current draw too quickly on some VRM topologies and they may go metastable on you, ringing like a drunken bell and at a not-insubstantial magnitude. Both of these bad things are caused by instantaneous changes in the current draw; there's no averaging.

Besides, averaged over time, the slew rate is zero: what goes up must come back down, after all.
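A toy illustration of why the averaging window matters: the 1 A / 0.5 µs spike matches the figures earlier in the thread, while the sample spacing and window sizes are arbitrary assumptions.

```python
import numpy as np

dt = 0.01                                         # sample spacing, us
t = np.arange(0.0, 4.0, dt)                       # 4 us of synthetic current trace
i = np.where((t >= 1.0) & (t < 1.5), 1.0, 0.0)    # 1 A spike lasting 0.5 us

def max_slew(current, dt_us, window_us):
    """Largest current change over a given window, in A/us."""
    n = max(1, int(round(window_us / dt_us)))
    di = current[n:] - current[:-n]
    return np.max(np.abs(di)) / (n * dt_us)

for w in (0.01, 0.5, 3.0):
    print(f"window {w:>4} us -> max slew {max_slew(i, dt, w):.2f} A/us")
# window 0.01 us -> max slew 100.00 A/us  (the raw edge of the step)
# window  0.5 us -> max slew   2.00 A/us  (the 20x figure from earlier)
# window  3.0 us -> max slew   0.00 A/us  (average the spike away and it "disappears")
```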
 
No, not really. Firmware updates can be delivered via driver updates. You never heard of Intel issuing a uCode patch for their CPUs? After the whole Pentium floating-point error, Intel and most large IC makers put in updateable microcode to fix hardware issues.
Actually, you are correct. Firmware updates can be issued through driver/system updates. BIOS updates cannot. So now it's down to whether the power distribution is controllable via firmware.
Fixed it, thanks for the catch. Only 20x over the spec.


No. Maximum slew rates exist to prevent at least two things: voltage spikes across the inherent inductance in the power supply delivery wires, and instability in the poor VRMs trying to keep everything jake. Change the current draw too quickly on some VRM topologies and they may go metastable on you, ringing like a drunken bell and at a not-insubstantial magnitude. Both of these bad things are caused by instantaneous changes in the current draw; there's no averaging.

Besides, averaged over time, the slew rate is zero: what goes up must come back down, after all.
Well, then apparently the whole thing is a non-issue, since all GPU manufacturers violate the spec, unless proven otherwise.
 
Fixed it, thanks for the catch. Only 20x over the spec.


No. Maximum slew rates exist to prevent at least two things: voltage spikes across the inherent inductance in the power supply delivery wires, and instability in the poor VRMs trying to keep everything jake. Change the current draw too quickly on some VRM topologies and they may go metastable on you, ringing like a drunken bell and at a not-insubstantial magnitude. Both of these bad things are caused by instantaneous changes in the current draw; there's no averaging.

Besides, averaged over time, the slew rate is zero: what goes up must come back down, after all.

He's quite right. Add to this that just past the VRM there's usually some form of filter to average out the PWM power. This is dependent on the choice of VRM, the power design target, and how quickly the VRM can pick up before the filter runs out of "leveling" juice. There's also a problem with sudden down slews, which result in a buck feedback where current heads back to the VRM (sort of like throwing a large beach ball against a wall that wasn't there before... it kinda bounces back at ya'). This requires a shunt that can handle the power, or you risk burning out the VRM. If the shunt is external, no problem, easy to design. But if they went cheap and used a limited internal shunt sized on the expectation that the card would stay within the slew rate, you have a problem on your hands. This is all dependent on whether the trace is current regulated. Any motherboard designer should use an external shunt by default.
 
This is a DC-DC buck converter paper, but VRMs operate in much the same manner. You can see how the capacitor value is mated to the specifications of the power regulator for a given design envelope, and how it breaks down if you don't pair them properly or push them outside spec.

http://www.ti.com/lit/an/slva301/slva301.pdf
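Not from the paper itself, but a minimal example of the kind of pairing it walks through, using the common ripple-based sizing rule for a buck converter's output capacitor (all numbers below are made up for illustration):

```python
def min_output_cap_f(ripple_current_a, switch_freq_hz, max_ripple_v):
    """Common minimum output-capacitance rule of thumb: C >= dI / (8 * f_sw * dV)."""
    return ripple_current_a / (8.0 * switch_freq_hz * max_ripple_v)

c_min = min_output_cap_f(ripple_current_a=3.0,   # inductor ripple current, A
                         switch_freq_hz=300e3,   # switching frequency, Hz
                         max_ripple_v=0.02)      # allowed output voltage ripple, V
print(f"minimum output capacitance: {c_min * 1e6:.0f} uF")   # ~63 uF for these numbers
```

Shrink the capacitor, slow the switching frequency, or demand a larger load step than the design envelope allows, and the output ripple (and transient droop) grows accordingly; that's the pairing the paper is getting at.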
 
They really should be measuring the PCI Express voltages at the PCI Express riser card they are using. If there are any voltage drops between the motherboard power connector and the PCI Express slot, that could cause an error in the measured power consumption. Also, there will generally be a voltage droop on the power supply when the current demand increases. I wish they provided the raw data they measured (V and I for each PS voltage).

Agree, voltage needs to be measured at the same location as the current and at the same frequency. If there's enough distance between the two, you increase the chance of resistive losses. The voltage drop that occurs when the current increases for a GPU load surge needs to be measured and taken into account. Otherwise you can end up with a recorded surge in current (there to make up for a voltage drop) that looks like an increase in power.

A terrible example: you have a PC with a static workload of 500W. The power supply is pulling 500W/120V = ~4.2A. When your A/C comes on, the voltage on the PC's circuit will drop, say to 110V. The power supply will increase the current draw to maintain the same power (500W/110V = ~4.5A). If you were measuring current and didn't record the voltage drop, you would find an increase in power use that didn't make sense. As long as voltage is being accurately recorded for each power calculation, it should be fine.
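The same arithmetic spelled out, plus what the error looks like if you keep assuming the old voltage (this is just the example above in code, nothing new):

```python
LOAD_W = 500.0                     # constant real power drawn by the PC

for mains_v in (120.0, 110.0):
    amps = LOAD_W / mains_v
    print(f"{mains_v:.0f} V -> {amps:.2f} A for the same {LOAD_W:.0f} W")
# 120 V -> 4.17 A
# 110 V -> 4.55 A

# If you kept multiplying the new current by the old 120 V, you'd "measure":
print(f"apparent power: {120.0 * (LOAD_W / 110.0):.0f} W")   # ~545 W, a phantom 45 W jump
```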

I think the average power (>75W) is likely still an issue here (unless there's a significant and constant voltage drop across the motherboard, which I doubt). It is possible that the power peaks - of all cards, not just the RX 480 example - could be exaggerated by this measurement method.
 
Agree, voltage needs to be measured at the same location as the current and at the same frequency. If there's enough distance between the two, you increase the chance of resistive losses. The voltage drop that occurs when the current increases for a GPU load surge needs to be measured and taken into account. Otherwise you can end up with a recorded surge in current (there to make up for a voltage drop) that looks like an increase in power.

A terrible example: you have a PC with a static workload of 500W. The power supply is pulling 500W/120V = ~4.2A. When your A/C comes on, the voltage on the PC's circuit will drop, say to 110V. The power supply will increase the current draw to maintain the same power (500W/110V = ~4.5A). If you were measuring current and didn't record the voltage drop, you would find an increase in power use that didn't make sense. As long as voltage is being accurately recorded for each power calculation, it should be fine.

I think the average power (>75W) is likely still an issue here (unless there's a significant and constant voltage drop across the motherboard, which I doubt). It is possible that the power peaks - of all cards, not just the RX 480 example - could be exaggerated by this measurement method.

I believe Tom's Hardware does measure at the riser card; they only mention the 24-pin ATX to explain that it is essentially the power source.
 
I believe Tom's Hardware does measure at the riser card; they only mention the 24-pin ATX to explain that it is essentially the power source.

In the article I linked above it looks like they are measuring the voltage at the motherboard header. They are measuring the current at the riser using their current probes, but it doesn't look like they are measuring the voltage there.
 
This had to be known to AMD pre-release, and I wonder why they went for it. They should have known THG does this kind of testing that would reveal it.

Using a 6-pin just for marketing reasons seems very, very strange to me. Most people do not care, and they could easily have defended an 8-pin pre-release while still saying 150W TDP, e.g. with something like "more consistent power delivery" or "more overclocking potential" or whatever.

Perhaps they went to OEMs and asked: do you want (a) an 8-pin, or (b) a 6-pin and being slightly out of spec on PCIe at stock? And the OEMs said to proceed with option (b)?
 
I believe Tom's Hardware does measure at the riser card; they only mention the 24-pin ATX to explain that it is essentially the power source.

The write-up that Schmave linked to shows they measure at the ATX connector:
Measuring Power Consumption: A Practical Implementation - The Math Behind GPU Power Consumption And PSUs

We’re looking at several voltage rails here: 3.3, 5 and 12V. As mentioned, these voltages are looped through the motherboard and can thus be measured at the 24-pin power connector, since the motherboard doesn’t influence them.

The motherboard shouldn't influence them, but at the end of the day they're taking a big wire from the PSU, plugging it into a black box (the motherboard), and hoping it's identical at the other end. It's definitely a little different - it would be a little different even if you measured from the other end of the wire inside the power supply. The question is how much, and does it affect the data. It's just a lot of variables to add to something which needs some fairly high specificity.
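To put a rough number on "how much", here's a sketch of the error you'd pick up by measuring voltage upstream of some series resistance; the 25 mΩ figure is purely an assumption for illustration, not a measured board value:

```python
R_SERIES_OHM = 0.025     # assumed connector + trace resistance between ATX header and slot
V_HEADER = 12.0          # voltage measured back at the 24-pin connector
I_SLOT = 5.5             # current drawn through the slot's 12 V pins

v_slot = V_HEADER - I_SLOT * R_SERIES_OHM     # voltage actually present at the slot
p_reported = V_HEADER * I_SLOT                # power computed with the upstream voltage
p_actual = v_slot * I_SLOT                    # power the card really receives
print(f"reported {p_reported:.1f} W vs actual {p_actual:.1f} W "
      f"({p_reported - p_actual:.2f} W overstated)")
# reported 66.0 W vs actual 65.2 W (0.76 W overstated)
```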

The article is dated Nov 2014, so it's possible they have changed methods.
 
The write-up that Schmave linked to shows they measure at the ATX connector:
Measuring Power Consumption: A Practical Implementation - The Math Behind GPU Power Consumption And PSUs



The motherboard shouldn't influence them, but at the end of the day they're taking a big wire from the PSU, plugging it into a black box (the motherboard), and hoping it's identical at the other end. It's definitely a little different - it would be a little different even if you measured from the other end of the wire inside the power supply. The question is how much, and does it affect the data. It's just a lot of variables to add to something which needs some fairly high specificity.

The article is dated Nov 2014, so it's possible they have changed methods.

I was just about to post the same thing. I wouldn't assume that those voltages pass straight through the motherboard without any kind of filtering (even just putting some capacitors on these voltages would be enough to change the measurement at the motherboard header vs. the PCI express slot).
 
Agree, voltage needs to be measured at the same location as the current and at the same frequency. If there's enough distance between the two, you increase the chance of resistive losses. The voltage drop that occurs when the current increases for a GPU load surge needs to be measured and taken into account. Otherwise you can end up with a recorded surge in current (there to make up for a voltage drop) that looks like an increase in power.

A terrible example: you have a PC with a static workload of 500W. The power supply is pulling 500W/120V = ~4.2A. When your A/C comes on, the voltage on the PC's circuit will drop, say to 110V. The power supply will increase the current draw to maintain the same power (500W/110V = ~4.5A). If you were measuring current and didn't record the voltage drop, you would find an increase in power use that didn't make sense. As long as voltage is being accurately recorded for each power calculation, it should be fine.

I think the average power (>75W) is likely still an issue here (unless there's a significant and constant voltage drop across the motherboard, which I doubt). It is possible that the power peaks - of all cards, not just the RX 480 example - could be exaggerated by this measurement method.

Anybody have access to voltage sag limits on the bus to stay compliant? 5%? 10%? The filter past the bus power should compensate for some of this.
 
I was just about to post the same thing. I wouldn't assume that those voltages pass straight through the motherboard without any kind of filtering (even just putting some capacitors on these voltages would be enough to change the measurement at the motherboard header vs. the PCI express slot).

Even if they do, think of all the cases you used to see of burnt ATX power connectors. This is literally one of the biggest possible sources of resistive losses on the motherboard, and they are measuring before it. Really not best practice.

I'd be curious to see other reviewers' setups. I know that I've been bitten by something as small as the capacitance introduced by my probe throwing off numbers when measuring small voltage deltas.
 
On a side note, having predictable voltage and current swings is why Intel put a lot of the power regulation circuitry for Broadwell and ?Haswell? on the CPU package. Lack of flexibility led them to put it back on the board, but Intel had to put a crapload of capacitors on the chip to act as a filter for sudden current demand changes, which can create its own issues, which I'm sure Intel let their motherboard partners know about.
 
If the 750 Ti can draw 100W through the PCIe connection and Tom's Hardware isn't concerned, but the RX 480 goes a whole 8 watts over and the earth is going to implode.

Seriously, these Nvidia shills are just trying to do anything to discredit it.
 
This had to be known to AMD pre-release, and I wonder why they went for it. They should have known THG does this kind of testing that would reveal it.

Using a 6-pin just for marketing reasons seems very, very strange to me. Most people do not care, and they could easily have defended an 8-pin pre-release while still saying 150W TDP, e.g. with something like "more consistent power delivery" or "more overclocking potential" or whatever.

Perhaps they went to OEMs and asked: do you want (a) an 8-pin, or (b) a 6-pin and being slightly out of spec on PCIe at stock? And the OEMs said to proceed with option (b)?

Wasn't a problem for all the 750 Ti owners, won't be a problem for all the RX 480 owners.

Power Consumption: Gaming - GeForce GTX 750 Ti Review: Maxwell Adds Performance Using Less Power

Remember, the link above is from the same source that started it all.

Nvidia does it: it's OK. AMD does it: bring out the pitchforks!
 
If the 750 Ti can draw 100W through the PCIe connection and Tom's Hardware isn't concerned, but the RX 480 goes a whole 8 watts over and the earth is going to implode.

Seriously, these Nvidia shills are just trying to do anything to discredit it.
Look, I did come to the conclusion that overall it won't go anywhere beyond a firmware/driver update (if possible).

But where did the 750 Ti constantly eat around 100 watts? I dare you to show me. And yes, peaks are relevant, but less so.
Thought this was a non-issue years ago? Standards guys don't seem to care. Why should we?
AMD's Radeon HD 6990: The New Single Card King
Standards guys don't care because they would have to strip both nV and AMD of their PCIe licenses, and it's just not worth the trouble. And finally, when was the last time you saw a 6990 sitting on a $20 mobo and a $20 PSU? That's the sort of machine the RX 480 may end up in, and where it will cause issues (mostly because the RX 480 goes over its TDP in power consumption).
 
Look, I did come to the conclusion that overall it won't go anywhere beyond a firmware/driver update (if possible).

But where did the 750 Ti constantly eat around 100 watts? I dare you to show me. And yes, peaks are relevant, but less so.

Standards guys don't care because they would have to strip both nV and AMD of their PCIe licenses, and it's just not worth the trouble. And finally, when was the last time you saw a 6990 sitting on a $20 mobo and a $20 PSU? That's the sort of machine the RX 480 may end up in, and where it will cause issues (mostly because the RX 480 goes over its TDP in power consumption).

I owned a 6990 and bitcoined the fuck outta it for over 3 years on the same PCI-e. I had no issues.

The only issue I have is people showing benchmarks of the issue from the same website that shows Nvidia doing it and sees no issue at all.

It boggles my mind how far shills will go to discredit a company.
 
Look, I did come to the conclusion that overall it won't go anywhere beyond a firmware/driver update (if possible).

But where did the 750 Ti constantly eat around 100 watts? I dare you to show me. And yes, peaks are relevant, but less so.

Standards guys don't care because they would have to strip both nV and AMD of their PCIe licenses, and it's just not worth the trouble. And finally, when was the last time you saw a 6990 sitting on a $20 mobo and a $20 PSU? That's the sort of machine the RX 480 may end up in, and where it will cause issues (mostly because the RX 480 goes over its TDP in power consumption).

The peaks/spikes are also a violation of a different spec. Two specs violated...

The larger variations in power from the MB are potentially more damaging to components...

And yes, the 750 Ti and 960 were also geared toward low-end systems with inexpensive motherboards and PSUs, so it should have been a major cause for concern for the same reasons...
 
So if you run the card out of spec it requires more power.

*Looks at his sig and smiles a big cheesy grin*
 
Read that article again. It never went over 75 watts AVERAGE. Short bursts are okay.

OOOO, so only short bursts? So now it's "well, it can do it slightly and it's OK"? Come the fuck on, people. This is seriously a non-issue and people are making a mountain out of a molehill.

Same shit as the Benghazi report. Waste of time and space, yet people will bitch about it!
 
It boggles my mind how far shills will go to discredit a company.

GamerGate was the beginning. The public has woken up to guerrilla marketing tactics through news media and social media... The public is waking up to the idea of refusing to be told what to think by other people :)
 
The peaks/spikes are also a violation of a different spec. Two specs violated...

The larger variations in power from the MB are potentially more damaging to components...

And yes, the 750 Ti and 960 were also geared toward low-end systems with inexpensive motherboards and PSUs, so it should have been a major cause for concern for the same reasons...
Peaks/spikes are a common thing, and that's the reason why I think this issue won't be enforced legally (well, someone crazy can file a lawsuit, but crazies gon' craze).

OOOO, so only short bursts? So now it's "well, it can do it slightly and it's OK"? Come the fuck on, people. This is seriously a non-issue and people are making a mountain out of a molehill.

Same shit as the Benghazi report. Waste of time and space, yet people will bitch about it!
PCPer said:
I asked around our friends in the motherboard business for some feedback on this issue - is it something that users should be concerned about or are modern day motherboards built to handle this type of variance? One vendor told me directly that while spikes as high as 95 watts of power draw through the PCIE connection are tolerated without issue, sustained power draw at that kind of level would likely cause damage. The pins and connectors are the most likely failure points - he didn’t seem concerned about the traces on the board as they had enough copper in the power plane to withstand the current.

Well, not really a waste, heh.

I owned a 6990 and bitcoined the fuck outta it for over 3 years on the same PCI-e. I had no issues.

The only issue I have is people showing benchmarks of the issue from the same website that shows Nvidia doing it and sees no issue at all.

It boggles my mind how far shills will go to discredit a company.
Nvidia actually has a reference card violating the mobo slot consumption standard? Show me this reference card you speak of!
 
Peaks/spikes are a common thing, and that's the reason why I think this issue won't be enforced legally (well, someone crazy can file a lawsuit, but crazies gon' craze).




Well, not really a waste, heh.


Nvidia actually has a reference card violating the mobo slot consumption standard? Show me this reference card you speak of!

Already linked. Sorry man, try again.

Sick and tired of people making shit into such a big deal when it isn't. Getting old. Next, the AMD fan is 2dB louder than the reviews said. Time to investigate all AMD fans to see if the reviewers lied about them!
 