XFX GTR RX 480 running 1500 core / 2100 memory on AIR.

Why can't you believe it? It has the exact same problem: GPU power is split 50/50 between the motherboard and the PSU.
Then I'd ask where the sea of burnt-out motherboards is, since every 250+W 480 *must* be pulling over 120W from the PCIe slot for hours at a time, right?
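For anyone who wants the arithmetic behind that, here's a rough back-of-envelope sketch (my own numbers; it assumes the 50/50 split applies to the card's whole board power, which isn't exact, since memory and fan draw complicate things):

[code]
# Back-of-envelope slot draw under an assumed 50/50 split (illustrative only).
SLOT_12V_LIMIT_W = 66.0  # PCIe CEM spec for the x16 slot's 12 V rail

def slot_draw(total_board_w, slot_fraction=0.5):
    """Estimated slot draw if slot_fraction of the power comes via the slot."""
    return total_board_w * slot_fraction

for total in (150, 200, 250):
    draw = slot_draw(total)
    status = "OVER" if draw > SLOT_12V_LIMIT_W else "within"
    print(f"{total:>3} W total -> ~{draw:.0f} W from the slot ({status} the 66 W limit)")
[/code]

A 250W card on a true 50/50 split would indeed put ~125W on the slot, which is where that 120W figure comes from.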
 
Then I'd ask where the sea of burnt-out motherboards is, since every 250+W 480 *must* be pulling over 120W from the PCIe slot for hours at a time, right?
The problem may not manifest straight away as the connector melting or burning. Consider the EVGA 1080 FTW and a couple of their other cards designed for large OCs but shipped without thermal pads on the VRMs: you will see many complaining about other intermittent issues for now, with the very rare few burnt-out cards.
Also worth pointing out that most users will not be running more than 1.15V with their 480, due to the BIOS.
Cheers
 
Then I'd ask where the sea of burnt-out motherboards is, since every 250+W 480 *must* be pulling over 120W from the PCIe slot for hours at a time, right?

I specifically asked if anyone knows the exact duty cycle set by the BIOS. The point is that there will be a power threshold above which it is out of spec again; a quick sketch of that threshold is below.
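To make that concrete, here's a trivial sketch (my own, purely illustrative; the actual controller duty cycles aren't public in this thread) of where the threshold sits for a few assumed splits:

[code]
# Total core power at which the slot's 12 V rail exceeds spec, for a duty split d
# (fraction of core power fed from the slot). Threshold = 66 / d. Illustrative only.
SLOT_12V_LIMIT_W = 66.0

for d in (0.50, 0.40, 0.33):
    threshold = SLOT_12V_LIMIT_W / d
    print(f"split {d:.0%} to slot -> out of spec above ~{threshold:.0f} W core power")
[/code]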
 
Yeah, let's just ignore the manufacturers' (meaning TSMC and Samsung/GF) performance envelope and the voltage sensitivity of the silicon node and crank it to 11 for normal gaming. And let's ignore the PCI-SIG spec again, since the 480 never gave anyone problems with the PCIe slot, so let's also crank that baby to a fair bit above 100W.
How many here have their Skylake at 1.4-1.45V all the time for normal gaming and expect it to be fine for years?
Anyway, there is a limit to how far one should go, and the extreme OCers who set world records or work for IHVs give good advice on day-to-day voltage limits.

The OP's test is interesting, but that does not mean it should be run permanently for day-to-day gaming; some like to push as far as possible just to see what can be achieved.

Cheers
[H] members have been degrading their Intel chips with too much 24/7 voltage since at least the C2Q era. Feel free to buy a used Q6600 from anyone other than a grandmother and see if it still overclocks like it did when it was new! Consumers have the right to destroy their hardware with unsafe voltages for an extra % and some epeen, just so long as they don't RMA it or sell it without notifying buyers. If we as a community were to condemn such a practice, we might as well delete every last thread referencing synthetic benchmarks such as Firestrike, Heaven, etc., because without such tools there would be far less unstable overclocking going on in the world.
 
[H] members have been degrading their Intel chips with too much 24/7 voltage since at least the C2Q era. Feel free to buy a used Q6600 from anyone other than a grandmother and see if it still overclocks like it did when it was new! Consumers have the right to destroy their hardware with unsafe voltages for an extra % and some epeen, just so long as they don't RMA it or sell it without notifying buyers. If we as a community were to condemn such a practice, we might as well delete every last thread referencing synthetic benchmarks such as Firestrike, Heaven, etc., because without such tools there would be far less unstable overclocking going on in the world.
Again, you're missing my point, which is specific to Skylake, not earlier Intel CPUs, especially as I mentioned this becomes more of a challenge as the node shrinks.
So how many here run their Skylake above 1.4V permanently for all operations/gaming?
So far it would seem to be a minority, which goes against your point that everyone is overclocking their CPUs and GPUs to breaking point for 24/7 operation.
 
Again, you're missing my point, which is specific to Skylake, not earlier Intel CPUs, especially as I mentioned this becomes more of a challenge as the node shrinks.
So how many here run their Skylake above 1.4V permanently for all operations/gaming?
So far it would seem to be a minority, which goes against your point that everyone is overclocking their CPUs and GPUs to breaking point for 24/7 operation.

So are you the forum nanny now?
 
So are you the forum nanny now?
No, just trying to provide some much-needed common sense. Most in this thread are posting as if it is normal to push 24/7 operation this far, and as if no further investigation is needed into how far over 100W the XFX goes on the PCIe slot when it is at 1.3V.
Note that AMD made the BIOS change for normal operation, and the OC limit back then was only 1.15V.

In the same way, I am critical of the EVGA Pascal cards that lack thermal pads/cooling on the VRMs; IMO they should not be OC'd (so, left at their custom defaults, albeit higher than reference) until an owner deals with that issue or improves the cooling around the card.
Cheers
 
Don't worry, if some new rookie talks about using 1.4 volts for 24/7 we'll explain it's a bad idea :woot: ..... most here are way past noob level lol
 
Don't worry, if some new rookie talks about using 1.4 volts for 24/7 we'll explain it's a bad idea :woot: ..... most here are way past noob level lol
Great, so we can all agree that the OP's XFX result is not normal operation and should not be attempted by most :)
Cheers
 
Again, you're missing my point, which is specific to Skylake, not earlier Intel CPUs, especially as I mentioned this becomes more of a challenge as the node shrinks.
So how many here run their Skylake above 1.4V permanently for all operations/gaming?
So far it would seem to be a minority, which goes against your point that everyone is overclocking their CPUs and GPUs to breaking point for 24/7 operation.
Feel free to quote the portion of my post that implies "everyone" is overclocking like that. Are you so new here that you don't realize this has been going on for generations of processors? What is so different about Skylake in your eyes that degrading it with too much voltage is a totally different concept from degrading a chip from ten years ago?
 
Feel free to quote the portion of my post that implies "everyone" is overclocking like that. Are you so new here that you don't realize this has been going on for generations of processors? What is so different about Skylake in your eyes that degrading it with too much voltage is a totally different concept from degrading a chip from ten years ago?
And feel free to actually take my posts in this thread in context and within reason; it goes both ways.
OK, let's change this a bit with regard to Skylake: is anyone here OCing the voltage to 1.4V for 24/7 operation?
How many take their 6+ core Broadwells to 1.4V permanently?
As this is HardOCP, there will be owners of Skylake i7s or extreme Broadwells.
The difference, as I said earlier, is the challenge of higher voltages on ever-shrinking silicon nodes. I explained why earlier: because this generation is smaller, it has greater voltage/thermal sensitivity and its silicon-node envelope cannot go as high as in the past. Pascal versus Maxwell, with all safeguards removed and tested to the extreme, is just one example.
Extreme OCers mention the limits of Polaris (I mentioned that earlier), and you will notice that while Skylake can go above 1.4V, you will not find many doing that 24/7.
But that digresses if taken out of context, especially as this thread is now skewing from 1.3V, and a fair amount over 100W on the PCIe slot (when the limit is 75W), being 'normal' operation for the XFX, to what you now say:
'[H] members have been degrading their Intel chips with too much 24/7 voltage since at least the C2Q era. Feel free to buy a used Q6600'.
And yet no one reading this thread is doing what you describe, even with their Skylake or 6+ core Broadwell, so your sarcasm is rather misplaced.
People do not want to degrade or damage their HW to the extremes in 24/7 operation, even members here.

Cheers
 
And feel free to actually take my posts in this thread in context and within reason; it goes both ways.
OK, let's change this a bit with regard to Skylake: is anyone here OCing the voltage to 1.4V for 24/7 operation?
How many take their 6+ core Broadwells to 1.4V permanently?
As this is HardOCP, there will be owners of Skylake i7s or extreme Broadwells.
The difference, as I said earlier, is the challenge of higher voltages on ever-shrinking silicon nodes. I explained why earlier: because this generation is smaller, it has greater voltage/thermal sensitivity and its silicon-node envelope cannot go as high as in the past. Pascal versus Maxwell, with all safeguards removed and tested to the extreme, is just one example.
Extreme OCers mention the limits of Polaris (I mentioned that earlier), and you will notice that while Skylake can go above 1.4V, you will not find many doing that 24/7.
But that digresses if taken out of context, especially as this thread is now skewing from 1.3V, and a fair amount over 100W on the PCIe slot (when the limit is 75W), being 'normal' operation for the XFX, to what you now say:

And yet no one reading this thread is doing what you describe, even with their Skylake or 6+ core Broadwell, so your sarcasm is rather misplaced.
People do not want to degrade or damage their HW to the extremes in 24/7 operation, even members here.

Cheers

Stop hijacking this thread; make your own if you need one, for whatever purpose.
 
I buy a new video card every generation. Who cares if it degrades? I have a box of AMD and Nvidia cards, AMD and Intel CPUs, and motherboards that have been run OC'd to hell and back in 24/7 operation for 5+ years. This is a website dedicated to running shit out of spec. I bought a 10,000 DPI mouse and was going to overclock that, but the damn pointer moves too fast as it is. Hell, the other day I yanked my water pump to make sure that it was running at the fastest possible speed. Then I looked into overclocking that for shits n' giggles.

I asked a buddy a month ago if he could change out my electrical box for a 200 amp one. "Why?" he asked me. I dunno. Never can have too much power? I'm looking at adding a small home theater system with a few subs.

 
I buy a new video card every generation. Who cares if it degrades? I have a box of AMD and Nvidia cards, AMD and Intel CPUs, and motherboards that have been run OC'd to hell and back in 24/7 operation for 5+ years. This is a website dedicated to running shit out of spec. I bought a 10,000 DPI mouse and was going to overclock that, but the damn pointer moves too fast as it is. Hell, the other day I yanked my water pump to make sure that it was running at the fastest possible speed. Then I looked into overclocking that for shits n' giggles.

I asked a buddy a month ago if he could change out my electrical box for a 200 amp one. "Why?" he asked me. I dunno. Never can have too much power? I'm looking at adding a small home theater system with a few subs.


Man, what size TV is that? 105"? Man, those speakers are so huge they make that TV look small :LOL:.
 
Seriously, you are in the wrong forum. This is an overclockers' forum, POWER USAGE BE DAMNED. So you can stop trotting out the signboards claiming we must save Mother Earth.

Maintaining proper power envelopes within a safety margin is a hard limit... unless you like burnt components that you replace on a regular basis, or lost data. I mean, you don't see engineers go, "Let's turn this nuclear reactor up to 11" (well, they did that in Ukraine, with sadly bad results).
 
Then I'd ask where the sea of burnt-out motherboards is, since every 250+W 480 *must* be pulling over 120W from the PCIe slot for hours at a time, right?
It can be a sudden failure, or a slow long-term one.

Running a lot above the limit will lead to sudden failure on a cheap motherboard (e.g. Biostar/ASRock). On something like an ASUS Sabertooth with heavy copper traces, it will be a slow decay: stability will gradually decrease as the voltage drop across the pins increases with age, leading to voltage stability issues on the card. All high-power components fail over time when you run them close to the limit, and the closer to the limit you run them, the quicker they fail.
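The slow-decay mechanism is easy to put numbers on. The sketch below is mine, not from any review: the commonly cited figure is five +12V pins on the x16 slot at ~1.1A each (hence the 66W), and the per-pin contact resistance is an assumed value for illustration. The point is that pin heating grows with the square of the current, and contact resistance rises as connectors age:

[code]
# Per-pin current, I^2*R heating, and voltage drop at the slot's 12 V pins.
# Pin count per the commonly cited PCIe CEM figure; contact resistance is assumed.
N_12V_PINS = 5          # five +12 V pins at ~1.1 A each => the 66 W spec
R_CONTACT_OHM = 0.010   # assumed 10 mOhm per pin contact (illustrative)

def per_pin(slot_watts):
    amps_per_pin = slot_watts / 12.0 / N_12V_PINS
    heat_w = amps_per_pin ** 2 * R_CONTACT_OHM   # dissipated in the contact
    drop_v = amps_per_pin * R_CONTACT_OHM        # drop the card has to tolerate
    return amps_per_pin, heat_w, drop_v

for w in (66, 95, 125):
    a, h, v = per_pin(w)
    print(f"{w:>3} W slot draw: {a:.2f} A/pin, {h * 1000:.0f} mW heat/pin, {v * 1000:.1f} mV drop/pin")
[/code]

Double the resistance as the contacts oxidize and both the heat and the drop double at the same draw; that is the slow decay described above.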

Currently there are only a few cases, where people were really pushing the limits or running multi-GPU, in which a motherboard went up. And they weren't new motherboards either; they were older ones suffering the age-based loss of performance I talked about.

Personally, I wouldn't touch a 480 unless it had a 33%/66% power split with an 8-pin plug (it's 6-phase, right? I can't remember); if it's 8-phase I would go with 25%/75% (rough numbers for both splits are sketched below). It's not that hard to design a new PCB to redirect the VRM source traces. If I were AMD, I would swallow what pride I had left and redesign the reference board this way.
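For what it's worth, here's how those proposed splits pencil out (my sketch; it assumes the phases share load evenly, which real controllers only approximate):

[code]
# Slot vs connector share for a given phase split, assuming even load sharing.
def split(phases_on_slot, phases_total, core_watts):
    slot = core_watts * phases_on_slot / phases_total
    return slot, core_watts - slot

for ps, pt in ((3, 6), (2, 6), (2, 8)):
    slot_w, aux_w = split(ps, pt, 200)
    print(f"{ps}/{pt} phases on slot @ 200 W core: slot {slot_w:.0f} W, connector {aux_w:.0f} W")
[/code]

With 6 phases, 2 on the slot gives the ~33/66 split and puts a 200W core load right around the 66W mark, while 2 of 8 phases gives comfortable headroom.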
 
And yet no one reading this thread is doing what you describe, even with their Skylake or 6+ core Broadwell, so your sarcasm is rather misplaced.
People do not want to degrade or damage their HW to the extremes in 24/7 operation, even members here.
It wasn't sarcasm. [H] members degrade their CPUs and GPUs regularly. If they do so without understanding the risks, well, that is on them. I was pissed as all hell when I killed my 1090T exactly a year after I purchased it. Would I have been as pissed had it lasted two years versus one... not nearly so, and I think any rational member here can understand why. I specifically mentioned the Q6600 not as hyperbole but as a very concrete example of a processor that was widely loved on this very forum for its overclocking potential versus initial cost, compared to buying more expensive chips with higher stock clocks at the time. What happened when users here bought a Q6600 high on the hype and found out they had to increase the voltage more than normal to reach stable operation?? They DID IT ANYWAY, KNOWING THE RISKS. What is it that you don't seem to get? This thread is about an overclocker achieving higher than normal results, and so far all you've had to add to the discussion is buu-bu-but how long will it last??
 
It wasn't sarcasm. [H] members degrade their CPUs and GPUs regularly. If they do so without understanding the risks, well, that is on them. I was pissed as all hell when I killed my 1090T exactly a year after I purchased it. Would I have been as pissed had it lasted two years versus one... not nearly so, and I think any rational member here can understand why. I specifically mentioned the Q6600 not as hyperbole but as a very concrete example of a processor that was widely loved on this very forum for its overclocking potential versus initial cost, compared to buying more expensive chips with higher stock clocks at the time. What happened when users here bought a Q6600 high on the hype and found out they had to increase the voltage more than normal to reach stable operation?? They DID IT ANYWAY, KNOWING THE RISKS. What is it that you don't seem to get? This thread is about an overclocker achieving higher than normal results, and so far all you've had to add to the discussion is buu-bu-but how long will it last??
They still set voltages within reason (especially those with an engineering background), and especially so if following the spec capabilities as nodes shrink; my posts also referenced internationally known OCers who talk about the ceilings and 24/7 limits for both Pascal and Polaris.

Actually, the OP was about hitting 1500MHz on air, with the later inference that this could be normal. However, the OP makes no mention of the 1.3V, nor of the limits and capabilities of the 14nm Samsung die, nor, importantly, that this is so far over the PCIe spec that it is definitely of note now; the reference card does 95W with a 4% overclock while still within 1.15V, bearing in mind the slot is rated 75W (66W for 12V). Pretending this would not matter is just wrong when the XFX is using 1.3V in that example.

It comes back to 'Broadwell/Skylake can run at 1.4V+, so let's do it, as we do not care about degradation', yet I doubt anyone here is doing that; I use that example because there would be quite a few owners of those CPUs reading this topic.
There must be quite a few 480 owners here, so how many of you have unlocked your cards and run at 1.3V, or as close to that as you can get, in 24/7 operation?
The point comes back to you saying this is HardOCP and members degrade their CPUs and GPUs regularly.

BTW, I have said multiple times the challenge comes as nodes shrink, not before; I even gave the difference in voltage sensitivity limits between 16nm Pascal and 28nm Maxwell (which handles seriously high voltage and abuse) with the safeguards removed, so I'm not sure why you bring the Q6600 into this debate.
 
They still set voltages within reason (especially those with an engineering background), and especially so if following the spec capabilities as nodes shrink; my posts also referenced internationally known OCers who talk about the ceilings and 24/7 limits for both Pascal and Polaris.

Actually, the OP was about hitting 1500MHz on air, with the later inference that this could be normal. However, the OP makes no mention of the 1.3V, nor of the limits and capabilities of the 14nm Samsung die, nor, importantly, that this is so far over the PCIe spec that it is definitely of note now; the reference card does 95W with a 4% overclock while still within 1.15V, bearing in mind the slot is rated 75W (66W for 12V). Pretending this would not matter is just wrong when the XFX is using 1.3V in that example.

It comes back to 'Broadwell/Skylake can run at 1.4V+, so let's do it, as we do not care about degradation', yet I doubt anyone here is doing that; I use that example because there would be quite a few owners of those CPUs reading this topic.
There must be quite a few 480 owners here, so how many of you have unlocked your cards and run at 1.3V, or as close to that as you can get, in 24/7 operation?
The point comes back to you saying this is HardOCP and members degrade their CPUs and GPUs regularly.

BTW, I have said multiple times the challenge comes as nodes shrink, not before; I even gave the difference in voltage sensitivity limits between 16nm Pascal and 28nm Maxwell (which handles seriously high voltage and abuse) with the safeguards removed, so I'm not sure why you bring the Q6600 into this debate.
The point most are making here is: why do you feel the need to be the police? The OP didn't present it as a definite 24/7 clock. And really, if someone did try to emulate this OCer's setup and did in fact fry their system, how is that our fault?

Fact is, every single positive AMD post garners the attention of some self-righteous poster who feels that bringing negativity, even if unfounded, is warranted and must be done. I haven't been able to read a single thread in the AMD sections without having to deal with the asinine and irrelevant posting of the usual everyday AMD-bashers. Granted, you aren't the worst, but even you are known as not AMD-supportive. Others are far worse, and they show up as well.

I just want to read the good news on AMD, when we can get it. You don't see me in the Nvidia/Intel forums, so I'm not sure why I see some of you here.
 
Viral marketing. I guess they are appealing to some new breed of Millennials or something.
 
They still set voltages within reason (especially those with an engineering background), and especially so if following the spec capabilities as nodes shrink; my posts also referenced internationally known OCers who talk about the ceilings and 24/7 limits for both Pascal and Polaris.

Actually, the OP was about hitting 1500MHz on air, with the later inference that this could be normal. However, the OP makes no mention of the 1.3V, nor of the limits and capabilities of the 14nm Samsung die, nor, importantly, that this is so far over the PCIe spec that it is definitely of note now; the reference card does 95W with a 4% overclock while still within 1.15V, bearing in mind the slot is rated 75W (66W for 12V). Pretending this would not matter is just wrong when the XFX is using 1.3V in that example.

It comes back to 'Broadwell/Skylake can run at 1.4V+, so let's do it, as we do not care about degradation', yet I doubt anyone here is doing that; I use that example because there would be quite a few owners of those CPUs reading this topic.
There must be quite a few 480 owners here, so how many of you have unlocked your cards and run at 1.3V, or as close to that as you can get, in 24/7 operation?
The point comes back to you saying this is HardOCP and members degrade their CPUs and GPUs regularly.

BTW, I have said multiple times the challenge comes as nodes shrink, not before; I even gave the difference in voltage sensitivity limits between 16nm Pascal and 28nm Maxwell (which handles seriously high voltage and abuse) with the safeguards removed, so I'm not sure why you bring the Q6600 into this debate.

And honestly, nobody cares. If you want a thread investigating what kind of power circuitry a card has and how it balances delivery between the PEG slot and the auxiliary connectors, make your own thread and dedicate it to spreading the apocalypse of all motherboards; everything you have said in this thread is off-topic, unrelated, and unneeded, as time has proven.

People are free to do what they want with their purchased hardware. Overclocking is inherently risky, as it puts your hardware out of spec to gain more performance, and that has been the basis of this forum since..... always. That's the basis of [H]OCP: go near and beyond the limits of a specific piece of hardware, no matter if it's a 1% or 20% or 80% gain; in this forum, every bit of percentage increase in performance used to matter. Hot? Risky? Loud? Who cares; it's important to some people and not to others, and there is no need to go into every possible thread and speak about an issue that is now months old and was all but forgotten. I've been here for like 10 years; this place used to be full of people willing to go beyond the limits no matter the cost, and it should remain that way.

As far as this thread goes, you are not needed here.
 
Actually, the OP was about hitting 1500MHz on air, with the later inference that this could be normal. However, the OP makes no mention of the 1.3V, nor of the limits and capabilities of the 14nm Samsung die, nor, importantly, that this is so far over the PCIe spec that it is definitely of note now; the reference card does 95W with a 4% overclock while still within 1.15V, bearing in mind the slot is rated 75W (66W for 12V). Pretending this would not matter is just wrong when the XFX is using 1.3V in that example.

I'm glad you've done the research to confirm that the specific card the OP referenced has the same power delivery system as a reference card.
 
I'm glad you've done the research to confirm that the specific card the OP referenced has the same power delivery system as a reference card.
They do not, and as I have mentioned, one can do rough calculations from the historical core-voltage-to-power-demand behaviour of the 480.
Reference is 1.15V max without unlocking; that is what the reference card was locked to back then, and it would hit 95W on the PCIe slot when OC'd by 4%.
The XFX tested in the OP is at 1.3V......
That means it is a fair chunk over 100W, when the limit is 75W (66W specifically for the 12V rail); a rough scaling estimate is below.
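As a sanity check on 'a fair chunk over 100W', here is a rough estimate (my own sketch; it assumes dynamic power scales roughly with f × V², ignores leakage, which only makes things worse at 1.3V, and assumes the 50/50 split holds):

[code]
# Scale the measured ~95 W slot draw to 1500 MHz / 1.30 V using P ~ f * V^2.
# Two baselines from this thread: "4% OC at <=1.15 V" (4% over the 1266 MHz
# reference boost ~= 1317 MHz) and "1250 MHz sustained at ~1.06 V".
REF_SLOT_W = 95.0

for ref_mhz, ref_v in ((1317.0, 1.15), (1250.0, 1.06)):
    est = REF_SLOT_W * (1500.0 / ref_mhz) * (1.30 / ref_v) ** 2
    print(f"from {ref_mhz:.0f} MHz @ {ref_v:.2f} V -> ~{est:.0f} W on the slot")
[/code]

Either baseline lands well north of 100W on the slot, so the point stands even with generous assumptions.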

Cheers
 
As far as this thread goes, you are not needed here.

Sorry to continue posting, but you do realise that recently I was only responding to specific posts made in response to mine.... in other words, people engaging me.
Funnily enough, no one says that to me with regard to my negative opinions of EVGA in the specific NVIDIA threads.
 
Sorry to continue posting, but you do realise that recently I was only responding to specific posts made in response to mine.... in other words, people engaging me.
Funnily enough, no one says that to me with regard to my negative opinions of EVGA in the specific NVIDIA threads.

I think the issue people are having is that you seem to be assuming that because the reference 480s, at launch, had a 50/50 split between the PCIe slot and the six-pin, this card has the same issue.

I don't know of any non-reference-PCB 480 that followed that particular design route. I think it's far more likely that the card is limited to 66W on the PCIe slot and draws the rest from the 8-pin.

If you've seen something that shows that this card over draws the PCIe slot, post it up for everyone.
 
I think the issue people are having is that you seem to be assuming that because the reference 480s, at launch, had a 50/50 split between the PCIe slot and the six-pin, this card has the same issue.

I don't know of any non-reference-PCB 480 that followed that particular design route. I think it's far more likely that the card is limited to 66W on the PCIe slot and draws the rest from the 8-pin.

If you've seen something that shows that this card over draws the PCIe slot, post it up for everyone.
Buildzoid, whose video is in the OP, has done a separate breakdown of the XFX and showed it is also a 50/50 split. Unfortunately, because not all of my posts in this thread are being taken together, and I am responding to bits and pieces, that gets missed later on; as I said at one point, some custom models still keep the reference power distribution, meaning not all custom AIB models do.
The problem is that only four sites can accurately measure the power draw and distribution, and they have not been given an XFX yet.
So, as I keep saying, falling back to the performance envelope of the silicon node and of Polaris (or Pascal) for a rough guideline is the only option, but it is a reasonable approach: there is no way you are getting similar power demand at 1.3V as you do at 1.15V (there is enough information out there showing how Polaris behaves once clocked beyond its ideal performance envelope without safeguards), even allowing, as I mentioned, for better thermal leakage and energy waste.
As for those who keep telling me this is an OC forum about pushing boundaries, I would assume those dismissing me in this thread have taken the time to follow such information from various sources (I have posted several in the past, and so have a few others).

Cheers
 
Buildzoid, whose video is in the OP, has done a separate breakdown of the XFX and showed it is also a 50/50 split. Unfortunately, because not all of my posts in this thread are being taken together, and I am responding to bits and pieces, that gets missed later on; as I said at one point, some custom models still keep the reference power distribution, meaning not all custom AIB models do.
The problem is that only four sites can accurately measure the power draw and distribution, and they have not been given an XFX yet.
So, as I keep saying, falling back to the performance envelope of the silicon node and of Polaris (or Pascal) for a rough guideline is the only option, but it is a reasonable approach: there is no way you are getting similar power demand at 1.3V as you do at 1.15V (there is enough information out there showing how Polaris behaves once clocked beyond its ideal performance envelope without safeguards), even allowing, as I mentioned, for better thermal leakage and energy waste.
As for those who keep telling me this is an OC forum about pushing boundaries, I would assume those dismissing me in this thread have taken the time to follow such information from various sources (I have posted several in the past, and so have a few others).

Cheers
The dismissal is because you never gave positive feedback on the topic but rather came in with doom and gloom. Come on, surely you have read enough of these threads to know that in the AMD forums there is always a negative poster who keeps posting the same thought over and over for as long as threads show up on that particular hardware.

In this case, the number of verifiable instances of board failure due to PCIe overcurrent is next to zero. There have even been reports from a number of mobo manufacturers claiming their slots can handle anywhere from 150W to 300W. In general, a single-GPU user may never have an issue, whereas multi-GPU users have a greater chance, though even then it's not guaranteed.

Now, had there been a poster shooting for this particular clock and asking for OUR opinion on the risk, then by all means convey your fears, though a more positive manner would be nice.
 
Buildzoid, whose video is in the OP, has done a separate breakdown of the XFX and showed it is also a 50/50 split. Unfortunately, because not all of my posts in this thread are being taken together, and I am responding to bits and pieces, that gets missed later on; as I said at one point, some custom models still keep the reference power distribution, meaning not all custom AIB models do.
The problem is that only four sites can accurately measure the power draw and distribution, and they have not been given an XFX yet.
So, as I keep saying, falling back to the performance envelope of the silicon node and of Polaris (or Pascal) for a rough guideline is the only option, but it is a reasonable approach: there is no way you are getting similar power demand at 1.3V as you do at 1.15V (there is enough information out there showing how Polaris behaves once clocked beyond its ideal performance envelope without safeguards), even allowing, as I mentioned, for better thermal leakage and energy waste.
As for those who keep telling me this is an OC forum about pushing boundaries, I would assume those dismissing me in this thread have taken the time to follow such information from various sources (I have posted several in the past, and so have a few others).

Cheers

Then, being honest about the issue, this should have been your source:
https://www.pcper.com/reviews/Graph...ion-Concerns-Fixed-1671-Driver/Power-Testing-
 
Then, being honest about the issue, this should have been your source:
https://www.pcper.com/reviews/Graph...ion-Concerns-Fixed-1671-Driver/Power-Testing-
OMG it was and I LINKED IT!!!
https://hardforum.com/threads/xfx-g...e-2100-memory-on-air.1916001/#post-1042620182

Again, you are all just proving my point about ignoring context and picking bits of posts to treat individually. PCPer shows it draws 95W with a 4% OC, meaning at most it is using 1.15V.
Ironically, you even commented on and quoted my post straight afterwards!!!! https://hardforum.com/threads/xfx-g...e-2100-memory-on-air.1916001/#post-1042620227
The non-issue all of you perceive applies when it is operating within its reference spec clock and voltage, not when it starts to use higher clocks/voltages. However, engineers are still not entirely happy with how far north of 66W it pushes without the BIOS update (check what Allyn Malventano at PCPer states; he has a strong engineering background as a nuclear technician and Navy engineer, and he is still a bit leery of the situation even without the 4% OC).

Just Reason, by your logic we should also accept the situation with EVGA, because when operated normally those cards are within spec; however, no one has an issue with us all being critical (I am critical as well) of pushing them out of spec and using too quiet a fan profile (a key point for a lot of users), causing failures.
I am just pointing out the facts that need to go with the OP.

I remember the first time Pascal was shown working at 2500MHz (it has since been benchmarked up to 2800MHz), but some, even in this thread, were saying it did not hold much weight due to being under LN2 and not indicative for gamers...
Now, I did not have an issue with comments like that when the Pascal clocks were presented, because these are facts that need to be considered and weighed; it is the same situation here.

There are several BIOSes now that disable some of the NVIDIA Pascal Boost 3.0 checks to allow 1.25V on certain Pascal cards, but I also strongly recommended they be used very carefully, acknowledging that this pushes the spec of the silicon node hard (even reputable extreme OCers say the same).
Cheers

Edit:
Just for clarification, please note my post linking to PCPer is in relation to the earlier post where someone asked for the source:
https://hardforum.com/threads/xfx-g...e-2100-memory-on-air.1916001/#post-1042620144
 
OMG it was and I LINKED IT!!!
https://hardforum.com/threads/xfx-g...e-2100-memory-on-air.1916001/#post-1042620182

Again, you are all just proving my point about ignoring context and picking bits of posts to treat individually. PCPer shows it draws 95W with a 4% OC, meaning at most it is using 1.15V.
Ironically, you even commented on and quoted my post straight afterwards!!!! https://hardforum.com/threads/xfx-g...e-2100-memory-on-air.1916001/#post-1042620227
The non-issue all of you perceive applies when it is operating within its reference spec clock and voltage, not when it starts to use higher clocks/voltages. However, engineers are still not entirely happy with how far north of 66W it pushes without the BIOS update (check what Allyn Malventano at PCPer states; he has a strong engineering background as a nuclear technician and Navy engineer, and he is still a bit leery of the situation even without the 4% OC).

Just Reason, by your logic we should also accept the situation with EVGA, because when operated normally those cards are within spec; however, no one has an issue with us all being critical (I am critical as well) of pushing them out of spec and using too quiet a fan profile (a key point for a lot of users), causing failures.
I am just pointing out the facts that need to go with the OP.

I remember the first time Pascal was shown working at 2500MHz (it has since been benchmarked up to 2800MHz), but some, even in this thread, were saying it did not hold much weight due to being under LN2 and not indicative for gamers...
Now, I did not have an issue with comments like that when the Pascal clocks were presented, because these are facts that need to be considered and weighed; it is the same situation here.

There are several BIOSes now that disable some of the NVIDIA Pascal Boost 3.0 checks to allow 1.25V on certain Pascal cards, but I also strongly recommended they be used very carefully, acknowledging that this pushes the spec of the silicon node hard (even reputable extreme OCers say the same).
Cheers

Edit:
Just for clarification, please note my post linking to PCPer is in relation to the earlier post where someone asked for the source:
https://hardforum.com/threads/xfx-g...e-2100-memory-on-air.1916001/#post-1042620144

Reading is fundamental. You linked (and have been referencing) the June 30 article. I linked (and am pointing out that what you referenced is no longer the case) the July 7 article, where the driver changes the VRM duty cycle. That's explicitly why I said you had to load up drivers older than 16.7.1 in order to have the card behave the way you're harping about.

So, correction... ahem...
OMG I linked the article that actually reflects how the card functions TODAY. Even without the compatibility switch on, the power delivery had been changed to favor the PSU and away from the PCIe slot. I LINKED IT!!!!
 
Reading is fundamental. You linked (and have been referencing) the June 30 article. I linked (and am pointing out that what you referenced is no longer the case) the July 7 article, where the driver changes the VRM duty cycle. That's explicitly why I said you had to load up drivers older than 16.7.1 in order to have the card behave the way you're harping about.

So, correction... ahem...
OMG I linked the article that actually reflects how the card functions TODAY. Even without the compatibility switch on, the power delivery had been changed to favor the PSU and away from the PCIe slot. I LINKED IT!!!!
Sigh,
I also mentioned that the BIOS (or driver) changed the behaviour for the 'reference' spec, meaning it redistributed the power, and only when in compatibility mode; however, that does not help when one runs over/out of this spec.

You did notice that your updated page only uses the chart of the original 480 clocks (from the article I linked) as the reference against which the power is reduced and redistributed in compatibility mode, and also that they did not test overclocking in compatibility mode (there is a very good reason for that)?
It has no relevance to the XFX, or to anyone not running compatibility mode, and my point was not about reference clocks but about OC, even just by 4%, as shown in the original article, which is still a valid reference point.
The reason is that the on-the-fly driver controller adjustment puts a greater work/power demand (heat, waste, etc.) on some of the VRMs, which is why a mode had to be enabled for those who wanted to be closer to PCI-SIG standards, as it limits overall power.

Cheers
 
Last edited:
Sigh,
I also mentioned that the BIOS (or driver) changed the behaviour for the 'reference' spec, meaning it redistributed the power, and only when in compatibility mode

Well, there's your problem right there. You're wrong. You see all those tables they conveniently put in each game test scenario? The ones that have columns named like <16.6.2>, <16.7.1>, <16.7.1 compat on>. The duty cycle change is *always* active. The compatibility flag limits the overall draw to 150W. It has nothing to do with altering the VRM loads. They have done the tests. It's pretty damn obvious, if you just look at it.
 
Well, there's your problem right there. You're wrong. You see all those tables they conveniently put in each game test scenario? The ones that have columns named like <16.6.2>, <16.7.1>, <16.7.1 compat on>. The duty cycle change is *always* active. The compatibility flag limits the overall draw to 150W. It has nothing to do with altering the VRM loads. They have done the tests. It's pretty damn obvious, if you just look at it.
It DOES adjust the VRM loads, because of the phases; that is shown going from the earlier charts, where the split is 50/50, to the latest, where it no longer is.
Case in point: the July article you linked even has a section titled 'Reconfiguring the power phase controller'.
It means a higher duty cycle for three phases and less for the other three, putting more strain on one group, which is why you would be limited in what you can do outside the reference spec.

If it were simple, they would also have tested OC and compared that as well, as they did with reference power, but they did not; nor could AMD shift much more between the mainboard phases and the auxiliary.

Before the update, the reference card in 4K Metro: Last Light drew 79W from the PCIe slot, but a 4% OC took that to 90-95W, with real sustained clocks of 1250MHz at 1.05-1.06V.
16.7.1 reduced the 79W by 8% to 71W and increased the 6-pin by 12% (adding more pressure to three phases of the VRM, but acceptable at reference spec); with compatibility on, it reduced the slot by 12% to 67W (very close to the PCI-SIG 12V spec of 66W) and increased the 6-pin by 5%. None of this was tested OC'd (the slot figures are summarised below).
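Putting those reported slot figures side by side (numbers as quoted above; note that even the post-fix values sit at or above the 66W 12V spec):

[code]
# Slot 12 V draw as reported for the reference card (4K Metro: Last Light).
slot_w = {"pre-16.7.1": 79, "16.7.1": 71, "16.7.1 + compat": 67}

for driver, watts in slot_w.items():
    verdict = "within" if watts <= 66 else "still above"
    print(f"{driver:>15}: ~{watts} W slot ({verdict} the 66 W 12 V spec)")
[/code]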

Well, I learnt something new. I must admit I thought the driver needed compatibility mode enabled for the phase power redistribution, because when used it adds pressure to three phases on any model using this circuit design; that is possibly why none of the sites (only three have the right gear and methodology) that can accurately measure the watts have tested this situation with OC (maybe at AMD's request, *shrug*). It's interesting that both PCPer and Tom's Hardware did not do an OC test when testing the AMD fix, even though they did for the launch review and the analysis of its behaviour.

Easier to read from Tom's Hardware, but it shows a similar trend to PCPer:
[Image: Tom's Hardware power consumption comparison chart]

Well, this raises even more the need to see the XFX measured by Tom's Hardware, PCPer, or Hardware.fr: we need to see how the phase distribution behaves at the XFX GTR's normal custom clocks and then OC'd. Still, as I say, the mainboard draw is going to be at least 100W if OC'd to 1.3V, and we need to see how much additional demand that creates for the three other phases.
Of course, that comes down to whether or not Buildzoid made a mistake in his breakdown when identifying the phase split as 50/50 for the XFX.

BTW, my initial post that has everyone in an uproar about watching the mainboard power behaviour only says:
OK anyone with the card still needs to be careful OC if looking at 24/7 gaming type setup. Seems half the Vcore goes to the PCIe slot
The context for 'half' is GPU core phases, not necessarily watts, although they match.
Anyway, to reiterate, my posts relate more to running beyond AMD's standard 480 spec, meaning OC (such as the 4%, and much higher).

The 1.3V-related posts are more about being realistic on real-world 24/7 clocks/voltages, with reasons explained multiple times now, including why this is no longer great even for NVIDIA with Pascal compared to Maxwell.
Cheers

Edit:
Here is an image showing how the phases are split 50/50 between mainboard and auxiliary for the 'reference' launch 480.
[Image: RX 480 reference PCB with the core VRM phases marked, showing the 50/50 mainboard/auxiliary split]

Cheers
 