Power Draw RX 480 - gaming and Furmark

rbuass

First we created a device to isolate the rails (3.3V and 12V from the PCI Express slot) and compared it against oscilloscope results; we found the precision to be very close.

So we made an easier-to-understand video, with closed captions, to share.

I hope it clears some things up.

Sorry for the broken English.
 

I saw your very first 480 review on YouTube, where you measured power :) Huge props for your work, and thank you for uncovering this clusterfuck.
 
How in the hell will AMD fix this, and how is it not a fire hazard? I wanted to replace my 280X with this, but not while it is pulling that kind of power straight from the slot.
 
Any chance you could repeat the tests with a 960 or 750ti to shut down the NV comparisons and really drive home this issue?
 
Yeah, I would agree with testing a different card, just to make sure the test is producing accurate results.
 
The problem is that if your measurement is off, you have no point of reference. A card you measure as drawing 50W when it only draws 30W is going to look fine.
 
As we have explained, we used our own tool and then compared it against oscilloscope measurements, and we checked that the difference is not significant.
The reason was to make it easy for the public to understand, and clear to follow.
We have also explained that the difference in power demand and signal introduced by the tool is likewise not significant enough to explain 50W+ over the limit with no overclocking.
Note that we did this live, in a 5-hour live stream (we usually test PSUs, FPS, overclocking, hardmodding, etc. on live streams).
This was not intended to harm AMD, just to share an important finding.

Best Regards
Ronaldo
TecLab
 
Copy-pasting my reply to a friend in another topic... hahaha

I will try to explain.
Measuring with an oscilloscope gives very close information, and we did compare, but only if you understood Portuguese.
Here, though, the audience is not as experienced as in Germany, the USA, or Japan... if we had used an oscilloscope to explain the power draw (the main point), too many people would simply leave the video, because they would think it is too complex.
So we created that simple tool.
We explained that power (W) = current (A) x voltage (V)... and explained that there are three power paths:
12V PCIe slot
3.3V PCIe slot
12V PSU cable

Then everyone will not only understand the issue, but also be motivated to learn more about it.

Regarding the tool, we have explained that the signal is not perfect compared to measuring directly at the PCIe slot, for sure...
A new way to bench.

We have known this for a long time...
The power draw will also change a bit...

But we are not looking for the highest precision, as in calibration procedures... we are not debating whether it is 126.2W or 126.1W... the question is whether 75 reads as 80, or worse, as 81 or 82...
For this level of violation it is not 2 or 3%; we are talking about 68% above the limit, and we explained that even in our live streams.

We have a lot of work on our own power surge protector that we are developing together with APC in Brazil, and we also need to test and show all the Galax 1070 and 1080 models... but in a few days we will post the difference and precision with a proper pair of oscilloscopes.

Best wishes
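
For reference, here is the per-rail arithmetic the tool exposes, as a minimal sketch. The readings are hypothetical, for illustration only (not TecLab's data); the spec limits in the comments are the PCIe slot and 6-pin budgets.

```python
# Per-rail power arithmetic (P = I * V) across the three paths listed above.
# All current/voltage readings below are hypothetical, for illustration.
rails = {
    "12V PCIe slot":  {"volts": 11.5, "amps": 7.0},  # spec: 5.5 A (66 W)
    "3.3V PCIe slot": {"volts": 3.3,  "amps": 2.0},  # spec: 3.0 A (~10 W)
    "12V PSU cable":  {"volts": 12.0, "amps": 6.5},  # 6-pin spec: 75 W
}

total = 0.0
for name, r in rails.items():
    watts = r["volts"] * r["amps"]  # P = I * V for this rail
    total += watts
    print(f"{name}: {watts:.1f} W")
print(f"Total board power: {total:.1f} W")
```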
 
One thing to note that makes this even a bit worse than it appears from the power percentage over "spec" alone is that the spec allows for a voltage tolerance. These measurements show the voltage is usually around 11.5V. That is fine in itself, but to maintain a given power at the lower voltage the card has to pull higher current, because power = current x voltage. The spec lists a hard current limit, and it is being exceeded by an even higher percentage than the power number alone would suggest, because the voltage is less than 12V. Unfortunately, it is the current flowing through the smaller traces and connections that actually matters and would cause problems. All of this can be seen in the PCPer article and numbers.

This likely gets worse under higher loads (from multiple cards), as the voltage sags further and the current rises to supply each card with the same amount of power, and worse still with bad motherboards and power supplies that sag even more.
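
To make the current-vs-power point concrete, a quick sketch: the 66W / 5.5A limits on the slot's 12V rail are from the PCIe spec, while the 110W draw and 11.5V sag are illustrative numbers in the range the measurements discussed here show.

```python
# Current exceeds its limit by a larger margin than power does when the
# voltage sags below 12 V. Slot 12 V rail limits per PCIe spec: 5.5 A / 66 W.
SPEC_AMPS, SPEC_WATTS = 5.5, 66.0

volts = 11.5          # sagged slot voltage (illustrative)
watts = 110.0         # measured slot draw (illustrative)
amps = watts / volts  # I = P / V

print(f"Power over spec:   {100 * (watts / SPEC_WATTS - 1):.0f}%")  # ~67%
print(f"Current over spec: {100 * (amps / SPEC_AMPS - 1):.0f}%")    # ~74%
```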
 
Yeah, this guy leldra is definitely happy about this. Especially since he's posting multiple times at 4 AM CST wondering if AMD has responded yet.

Good work leldra.
 
Thanks man, appreciate the support
 
Sorry that I dropped this in the wrong thread before:

So the first graph was a slight OC to 1300 MHz with the power limit set to 150%. I get that, because you may not have been able to hold the OC without the PL increase. But why is the power limit still at 150% when testing at stock frequency with FurMark? Can you vouch for whether or not this card acts this way when the power limit is set to default?

Reason I ask is that my 390 is running stock voltage and stock PL at 1100/1650. No fluctuations in frequency, temps are steady, and GPU-Z reports that it consumes around 220W on average while playing Witcher 3. If I leave the frequencies alone and just increase the PL to 150%, the card now consumes around 245W. No change in performance, and it's a couple of degrees warmer, but there was no increased demand on the card to warrant the higher draw. So I'm wondering if the simple act of increasing the PL is affecting how the board manages its power draw, because it does on my 390.
 
I really wish you'd reconsider investigating this, Kyle. Not to fault the reviewers currently looking at this problem, but I have more respect for [H]ardOCP's combined competence and integrity than I do for any other journalism organization, on the web or off.

Your only agenda is giving your readers the straight dope, and you know your tech. That's a rare combo.
There are a lot of people better qualified than I am, with EE degrees, to look into this. I do not see this issue losing traction.
 
I had one in hand to purchase today but backed out because of this. I wish AMD would officially respond. They've lost one sale from me so far.
 

I don't doubt the work you've done on this. It's good to see involved members in the community. The only thing I could possibly point out is the measurement of variance in delivered power. The reason the 750 Ti is compliant is that you have to measure the consumption over time, by the millisecond. If you just looked at the peak draw each second, the 750 Ti would also be very out of spec, because at that scale you'd see most seconds reporting draws around 100-110W. However, if you drop down into the millisecond range, you see that the 100W+ spikes occur fewer than 10 times per second (and sometimes not at all), and most of the second is spent around 25-50W. So on a second-based measurement system the 750 Ti looks like it should be burning things out left and right, but when you measure much more precisely you see it's actually pretty decent.

edit: This isn't meant to be a dig/slight at you, rbuass. I've seen a few people bring up the 750 Ti charts at Tom's, and the fact that they broke it down to a millisecond view really helps illustrate why the 750 Ti is actually compliant when the second-based chart they post initially looks horrible for power consumption. I actually use that set of charts to explain to some of the folks in the office why precision in measurement is important.
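
A minimal sketch of why the sampling window matters, with a made-up waveform (not Tom's actual data): the same card looks wildly out of spec from per-second peaks but fine from millisecond averages.

```python
# Hypothetical 1-second power trace at 1 ms resolution: eight 1 ms spikes
# to 110 W, with the rest of the second sitting near 40 W.
trace = [110.0 if i % 125 == 0 else 40.0 for i in range(1000)]

peak = max(trace)              # what a per-second peak reading reports
avg = sum(trace) / len(trace)  # what millisecond-level averaging shows
print(f"Per-second peak:     {peak:.0f} W")  # 110 W -- looks badly out of spec
print(f"Millisecond average: {avg:.1f} W")   # ~40.6 W -- actually compliant
```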
 
Can someone maybe educate us a little: why is the motherboard providing more than the spec's 75W to the video card over the PCIe slot?

I guess I would look at this as: where does the in-spec responsibility lie? With the motherboard manufacturers ("your card must function with only 75W from the PCIe slot") or with the PCIe card manufacturers ("your card must only draw a max of 75W from the PCIe slot")?

I know in the past the responsibility has lain with the motherboard, but that may be different for PCIe.
 
Could this have come about from a duff BIOS that somehow got slapped onto a load of cards in error, and AMD realized this but went ahead anyway?

Something seems VERY off with this, especially as it's such a large error.

I won't deny I'm a long-term AMD fan, but this really has put any thought of picking up an RX 480 off the list.

...and they have just given nVidia a free swing to smash the 1060 out of the park!
 
No doubt... more accuracy, more precision, much better...
No doubt, for electronics 1 second is far too long...
Even at 60 FPS, within just one second (16.6 ms average per frame), we can have half the frames running at about 1 ms and the rest at a terrible ~38 ms... it will be a bad second, perfect for stuttering, tearing, and other nice phenomena... hahaha

But with the oscilloscope we have a much faster time response, and I can assure you: it is not about peaks, it is the sustained behaviour...
We completely disassembled the card for analysis, and found that the power demand on the 12V PCIe rail comes down, IMHO, to a flawed design...
So... it is as if they shared the power 50/50 between the PSU cable and the PCI Express slot... or something like that.

Thanks for your good point.
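
The frame-time point is the same averaging idea; a minimal sketch with illustrative numbers (two runs with similar average frame times but very different smoothness):

```python
# Two 60-frame runs: similar average frame time, very different pacing.
# Times in milliseconds; purely illustrative.
steady = [16.6] * 60       # even pacing, smooth ~60 FPS
jerky = [1.0, 38.0] * 30   # alternating fast/slow frames

for name, frames in (("steady", steady), ("jerky", jerky)):
    avg = sum(frames) / len(frames)
    print(f"{name}: avg {avg:.1f} ms, worst frame {max(frames):.1f} ms")
# The averages are close, but the jerky run stutters badly -- averages hide it.
```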
 

The motherboard gets a certain amount of power, and that is divided among the rest of the components attached to it. Now that the RX 480 is pulling more than it should through the PCIe slot, the motherboard doesn't have as much power left to sustain the other components. This is what causes instability in the short term if the motherboard is of lower quality (less tolerance). Over a period of time, though, if you keep stressing them, it can cause damage.

Now, people are pointing to the GTX 960 Strix and the 750 Ti, where they have spikes that go over the total PCI spec of the board and/or the PCIe connector.

Spikes are OK, because they last such a short time that the motherboard can tolerate them and switch power to the other components that need it in that short window.
 
I would like to say...

This forum is really crazy in terms of interaction.

After Facebook killed the Brazilian forums, it feels like home here... I love it.

Very nice team here.

Too many crazy people like me...
 
OK, so doing some research: does anyone know the PCIe specification for power? The PCIe 3.0 specification is 300W total and doesn't specify a slot maximum of 75W. I see references to 25W from the slot + 75W/150W from the plugs...

If the RX 480 is getting 126W from the slot, then I suspect something else is going on...
 

Think of the PCIe spec like the redline on your car's transmission. You certainly can push the transmission harder than the redline suggests, but it becomes a risk to the reliability of the parts.

As for who has the responsibility: each manufacturer involved. The motherboard vendor is responsible for actually being able to provide steady power up to the spec draw through the slot. They guarantee that as long as you don't push beyond that limit, your board won't fail (that's your warranty). The vendor producing the card is likewise responsible for producing a card that will not draw more than the guaranteed spec. If it does, they can't sell it as a "safe" part to put in your computer.

The contention here is that AMD insists the card is PCI-SIG certified, while a number of people are claiming otherwise and are now starting to show how they measured these claims.
 
The total power the card used was in the video... it is more than 250W....
From my personal point of view:

The design split the power distribution in a bad way.
Almost 50/50... PSU cable and PCIe slot.

Why didn't AMD use a 2x4 (8-pin) or a 4x3 connector?

Maybe it was about image against NV: the launch of the powerful 1080 with a 2x3 (6-pin) forced the idea on AMD.

Why does NV get to use a 2x3 (6-pin) while AMD again needs too much power... a 2x4 or 4x3 would look ridiculous on a 200 USD card...
This is just my 2 cents...
Maybe I am wrong, but I can't understand why a powerful, capable AMD team, with a good new chip and technology, didn't pay attention to this...
I really don't know.
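
A rough sketch of the arithmetic behind the split complaint, using the >250W figure from the video and assuming (as described above) a near-even slot/cable split:

```python
# If total board power is ~250 W and the design splits it roughly 50/50
# between the PSU cable and the PCIe slot, the slot is far over budget.
total_watts = 250.0  # from the video (overclocked test)
slot_share = 0.5     # near-even split, per the analysis above
SLOT_LIMIT = 75.0    # PCIe slot power budget per spec

slot_draw = total_watts * slot_share
overage = 100 * (slot_draw / SLOT_LIMIT - 1)
print(f"Slot draw: {slot_draw:.0f} W ({overage:.0f}% over the 75 W limit)")
# ~125 W from the slot, ~67% over -- in line with the ~68% figure cited earlier.
```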
 

Neither of the NV Pascal cards launched with a 6-pin connector. They're both 8-pin, by spec. The 1060 may be a 6-pin, though. We'll see.
 
My guess is they're waiting for the updated VBIOS to pass QA before talking about it. They'll phrase it as a technical issue for OEM validation with no real impact for consumers, but say that concerned consumers can flash the new VBIOS if they're raw about it.

That assumes it doesn't really burn out people's PCIe slots, which sounds like internet bullshit to me. But if that's really happening, all bets are off and AMD is fuh-screwed.

jbc029: Leaks say the GTX 1060 is only 120W TDP, so a 6-pin connector would be appropriate.

It "should" be, but possibly, with the shitstorm brewing for AMD, I'd consider erring on the side of caution, slapping an 8-pin on there, and marketing the card as being able to "OC to the moon" :p
 
Neither of the NV Pascal cards launched with a 6-pin connector. They're both 8-pin, by spec. The 1060 may be a 6-pin, though. We'll see.
Hahaha...
Again my poor English doesn't let me be clear.
I know, we tested them.
I meant that back when the cards had not yet launched, at design time, before NV disclosed anything or anyone found out, they should have been thinking:
"F*ing NVIDIA will launch a next-gen card with very low power requirements, and mimimi..."
Then they failed, trying to show they could also build a crazy 6-pin card...
Sorry again, but be sure... I hate having bad communication.
 

No worries. :)
 
They said pretty clearly on Reddit that they are investigating the reports, but that the card had passed all the tests.

This could be as simple as a batch of cards with faulty/incorrect resistors (or whatever) applied by whoever assembled the PCBs. Maybe a bad batch of VRMs.

There is no way for them to make a definitive statement until they can repro it in their own labs. At this point, saying something that turns out to be wrong would be far worse for them.


Yet every single reviewer that did extensive power testing reports the same thing, and we have retail board buyers saying the same thing. Must be a pretty big "small batch" of cards, lol.
 
Kyle, you guys have one of those heat cams, right? Well, when you test those dual RX 480s, make sure to take a really good look around :D

And rbuass: great job with the video! Can you do a comparison with, say, a 390X or another such card? Maybe even an NVidia one, for comparison's sake?
 
They said pretty clearly on Reddit that they are investigating the reports, but that the card had passed all the tests.
I think AMD merely claims that it has passed a compliance check performed by the PCI-SIG. Do not be fooled by that!
If you look into it, you'll discover that the PCI-SIG compliance program does not check for compliance with the max-power parts of the spec.
Nor does the PCI-SIG check under a stress test like FurMark or while running a game. They leave all that to the OEMs, unfortunately.

Now, if Underwriters Laboratories (UL) were running the tests, they probably would FurMark the card. But I cannot find any documentation (like, for example, a user manual) for the RX 480 online. Anyone have a box or a manual, and can check to see if the card is UL certified?
 
Everybody knows that FurMark stresses a card WAAAY beyond what it normally runs at, so has any testing been done with a normal game or something like Heaven or Valley?

edit: Never mind, I re-watched the video and saw they used Valley/Heaven. I skipped through the first time.
 
I think they used Heaven in the video here also. In that test, the PL was set to +50%, the clock was 1300 MHz (so a slight OC), and the slot power draw was measured at 110W. After looking through a lot of the videos and threads, it seems that most of these huge overdraws are happening when the PL has been raised significantly. In the past that meant you might kill your PSU, not necessarily your mobo. In either case, tweaking that invokes AMD's disclaimer about overclocking (even if you don't actually change the frequency, adjusting the power delivery management trips the same clause). AMD may be banking on the fact that PL increases make them not liable.
 