ATI HD 6970 actual benchmarks

Which makes it even stranger that they don't compare it to a 580. Looking at the settings used, is it possible that the 570 is VRAM limited, making the 2GB 6970 look better by comparison to the 570 than it would to the 580?

It's still roughly a 20% improvement at 1920x1080/1200, which is the resolution I suspect most people who buy this card will be gaming at.

Even if it's equal to the 580 in terms of performance, positioning it against the 570 is a very smart move, because a lot more people buy video cards in the $300-400 range than at $500, and they'll be able to flat-out say it's better than the competition in every way.
 
Still, based on those numbers (if they are correct), the 6970 is 25% faster than the GTX 570 on average, which makes it around 5% faster than a GTX 580...

Got to watch out for AMD's 'optimisations' in 'AF16' mode, and be sure it's comparing apples-to-apples
;)
 
Which makes it even stranger that they don't compare it to a 580. Looking at the settings used, is it possible that the 570 is VRAM limited, making the 2GB 6970 look better by comparison to the 570 than it would to the 580?

I wouldn't call it "strange", I would call it "marketing". ;)
 
Got to watch out for AMD's 'optimisations' in 'AF16' mode, and be sure it's comparing apples-to-apples
;)

oh wow, please stop bringing this garbage up. Catalyst 10.12 defaults the setting to High Quality anyway.
 
oh wow, please stop bringing this garbage up. Catalyst 10.12 defaults the setting to High Quality anyway.

Yeah, I know it's a touchy subject, but I think that when the reviews come out, it'll be worth making sure they're using comparable quality levels...
 
oh wow, please stop bringing this garbage up. Catalyst 10.12 defaults the setting to High Quality anyway.

You knew it was going to happen. If the 6970 beats the 580 in the price/performance category you'll see more claims about AMD "cheating" appear. It's a given.
 
From Neliz:

Here's some food for thought.

All the benchmarks you see are for the 190W-TDP-limited card. You can still adjust the card to a 250W TDP (closer to the 580); what do you think that would do to the performance numbers?

The plot thickens...
 
You knew it was going to happen. If the 6970 beats the 580 in the price/performance category you'll see more claims about AMD "cheating" appear. It's a given.

They just need to turn down the 'optimisation' on the AF16 mode a little
(i.e. not do so many pixels in tri-linear mode)
:D
 
You knew it was going to happen. If the 6970 beats the 580 in the price/performance category you'll see more claims about AMD "cheating" appear. It's a given.

That'll be BS if it's HQ vs. HQ like here on HardOCP. As long as AMD does no degradations at HQ, then even the most hardcore Nvidia zealots should accept whatever results come in. I guess it never fails and you can expect the worst out of some. It'll be funny if all the results are at 190W and we have yet to see the 250W results! :D Perhaps even the 250W results were skewed on Cat 10.11, with both modes showing 190W numbers, and on the 10.12 WHQL driver at 250W it beats a GTX 580 and comes out at $399! :D
 
From Neliz:

Here's some food for thought.

All the benchmarks you see are for the 190W-TDP-limited card. You can still adjust the card to a 250W TDP (closer to the 580); what do you think that would do to the performance numbers?

The plot thickens...

Wouldn't this just amount to auto-overclocking? I mean sure it might give it a 5-10% perf. boost, but it's not going to be a magical switch that makes it infinitely better than everything else on the market.

Then again, earlier AMD slides did indicate that it was a 250W part...
 
That'll be BS if it's HQ vs. HQ like here on HardOCP. As long as AMD does no degradations at HQ, then even the most hardcore Nvidia zealots should accept whatever results come in. I guess it never fails and you can expect the worst out of some. It'll be funny if all the results are at 190W and we have yet to see the 250W results! :D Perhaps even the 250W results were skewed on Cat 10.11, with both modes showing 190W numbers, and on the 10.12 WHQL driver at 250W it beats a GTX 580 and comes out at $399! :D

I think it depends on whether the card is hitting the limiter in normal use
- it probably isn't being limited when running normal games
- and would only hit the limiter in Furmark and extreme shader apps

- but what the limiter allows them to do is set normal operation closer to this limit, knowing that when a Furmark-type app is run, it will just run into the limiter and not cause massive power usage
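
Here's a rough sketch of how I picture that limiter behaving - purely illustrative, with a made-up linear power model and made-up numbers, not AMD's actual algorithm:

Code:
# Toy model of the limiter (all numbers invented): drop the clock whenever
# estimated board power goes over the cap, creep back up when there's headroom.

TDP_CAP_W = 190        # assumed power cap for illustration
MAX_CLOCK_MHZ = 880    # assumed default clock for illustration
STEP_MHZ = 10

def board_power(clock_mhz, watts_at_max_clock):
    # Crude assumption: power scales linearly with clock for a given workload.
    return watts_at_max_clock * clock_mhz / MAX_CLOCK_MHZ

def settled_clock(watts_at_max_clock, ticks=100):
    clock = MAX_CLOCK_MHZ
    for _ in range(ticks):
        if board_power(clock, watts_at_max_clock) > TDP_CAP_W:
            clock -= STEP_MHZ                             # hit the limiter: back off
        else:
            clock = min(clock + STEP_MHZ, MAX_CLOCK_MHZ)  # headroom: restore clock
    return clock

print("game-like load (~170W at full clock):    settles at", settled_clock(170), "MHz")
print("Furmark-like load (~250W at full clock): settles at", settled_clock(250), "MHz")

In this toy version the game-like load never touches the cap, so it just sits at the full clock, while the Furmark-style load gets clamped to whatever clock keeps it under 190W - which is exactly why they can set stock clocks that close to the limit.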
 
So you promise you won't complain?

Yes, I promise :D I'll probably also quickly buy two if that happens :D I just want everything to be fair. Around the web, when the many sites test Quality vs. Quality and the degradations come into play, I'll bring that up whenever some idiot posts that kind of result, though. Fair is HQ vs. HQ, unless Nvidia proves to the world that even at HQ, AMD is only as good as their Quality mode, which they haven't done AFAIC.

I think it depends on whether the card is hitting the limiter in normal use
- it probably isn't being limited when running normal games
- and would only hit the limiter in Furmark and extreme shader apps

- but what the limiter allows them to do is set normal operation closer to this limit, knowing that when a Furmark-type app is run, it will just run into the limiter and not cause massive power usage

From what I understand, the two power profiles were also performance profiles. Maybe we'll learn that in 190W mode it runs 1600 SPs and at 250W it runs 1920 SPs and PWNs everything! :D
 
I'm guessing the 6970 is going to basically be a value 580. I was hoping it was going to clown the 480 and beat the 580. I'm not interested in MGPU anymore so Antilles doesn't mean a thing to me.

*le sigh*

Bored already.
 
From Neliz:

[graph not shown]
If that's a legit graph, would it mean that AMD's new PowerTune could automatically increase performance just by adjusting its power usage? :eek:

So, let's say it's a demanding game like Metro 2033: the power usage goes up 10% and we get increased performance.

It makes me wonder though about two things:

  1. How do you fairly benchmark the card if PowerTune increases the performance of the card based on its load? Or, do you manually set the power usage to maximum and test it at that power level?
  2. Does the Radeon 6970 actually have more performance potential than what is being shown here? Didn't one slide show it having a 190W TDP with a maximum of 250W TDP? So, at 190W it's slower than the GTX 580, and matches (or beats) the GTX 480/GTX 570. But at a full load of 250W TDP, it is closer to the GTX 580 and potentially faster?
 
Wouldn't this just amount to auto-overclocking? I mean sure it might give it a 5-10% perf. boost, but it's not going to be a magical switch that makes it infinitely better than everything else on the market.

Then again, earlier AMD slides did indicate that it was a 250W part...

Yeah, the question is whether that gives normal everyday performance increases, or just better overclocking. Also, why do we assume that the people with retail cards are testing at 190W TDP? Do they not know enough to flip the switch? Or are we still hanging onto the theory that the retail cards come with some kind of crippled low-power drivers?
 
Yeah, the question is whether that gives normal everyday performance increases, or just better overclocking. Also, why do we assume that the people with retail cards are testing at 190W TDP? Do they not know enough to flip the switch? Or are we still hanging onto the theory that the retail cards come with some kind of crippled low-power drivers?

No point in AMD letting all the reviews be done @190W, and then saying that they should have been done @ 250W
;)

- so the cards out there in reviewers' hands must be 250-watters
 
No point in letting all the reviews be done @190W, and then saying that they should have been done @ 250W
;)

Yeah, exactly. It wouldn't seem like a fair comparison if they only test it at one power level.

If this is true about the Radeon 6970 and it's starting to look like it is, a review site like [H] should test it at BOTH 190W and 250W.

I'm fairly positive we'll see it at or above a GTX 570 at 190W.

And, at 250W, we'll probably see something surprising, maybe something closer to a GTX 580 or faster.
 
Yeah, the question is whether that gives normal everyday performance increases, or just better overclocking. Also, why do we assume that the people with retail cards are testing at 190W TDP? Do they not know enough to flip the switch? Or are we still hanging onto the theory that the retail cards come with some kind of crippled low-power drivers?
From my understanding, the way it works is similar to Intel's Turbo Boost, except in reverse. With Turbo Boost, Intel dynamically increases one core's clock speed when the entire chip is under a set thermal threshold. ATI's solution does it backwards: by default, the card will run at X MHz, and if the core starts using too much power, the clock speed is dropped to 90% of X, or whatever it needs to be to stay below the threshold. This isn't really overclocking; it's dynamic underclocking.
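
If it helps, the difference in direction looks something like this - just my reading of it, with invented thresholds and step sizes, not Intel's or AMD's actual logic:

Code:
# Turbo Boost is opportunistic: raise the clock above base while the chip is
# under its limit. PowerTune (as described above) is protective: never exceed
# the default clock, only pull it down once the board goes over its cap.
# All numbers below are invented for illustration.

def turbo_boost_step(clock_mhz, power_w, base=3300, boost=3600, limit_w=95):
    if power_w < limit_w and clock_mhz < boost:
        return clock_mhz + 100            # headroom: bin up past the base clock
    return max(clock_mhz - 100, base)     # over the limit: fall back toward base

def powertune_step(clock_mhz, power_w, default=880, cap_w=190):
    if power_w > cap_w:
        return clock_mhz - 20                  # over the cap: dynamic underclock
    return min(clock_mhz + 20, default)        # otherwise hold the default clock

print(turbo_boost_step(3300, 70))   # CPU under its limit -> 3400 (clocks up)
print(powertune_step(880, 240))     # GPU over its cap    -> 860  (clocks down)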
 
From my understanding, the way it works is similar to Intel's Turbo Boost, except in reverse. With Turbo Boost, Intel dynamically increases one core's clock speed when the entire chip is under a set thermal threshold. ATI's solution does it backwards: by default, the card will run at X MHz, and if the core starts using too much power, the clock speed is dropped to 90% of X, or whatever it needs to be to stay below the threshold. This isn't really overclocking; it's dynamic underclocking.

So will this be a driver solution like Powerplay or a hardware solution like the 580 employs?
 
The GTX 580 is not by any means 20-30% faster than the GTX 570 or 480. Trust me, it has been stated numerous times. It is about 15% faster on average.

Agreed. Was just going to comment the same. If the diff between 480/570 was 30% I would have bought one at launch without thinking twice.

If true, the above table puts the 6970 about 5-10% faster than the 580.
 
Have you seen the tags over at Xtreme? :p

Tags
amd and mudkipz, amd did it for the lulz, amd king of trolls, box fetish, can it play wow, cayman = steven seagal., cayman spreads its wings, caymans don't have wings, chuck norris has a cayman, damn you tsmc, don't you?, endless pile of rumors, field of broken dreams, has an am/fm radio switch, nvidia loses again, performance over 900, some think rumours=truth, then get disappointed, you will say wow

http://www.xtremesystems.org/forums/showthread.php?t=261195&page=127
 
So will this be a driver solution like Powerplay or a hardware solution like the 580 employs?

Not quite...


From my understanding, the way it works is similar to Intel's Turbo Boost, except in reverse. With Turbo Boost, Intel dynamically increases one core's clock speed when the entire chip is under a set thermal threshold. ATI's solution does it backwards: by default, the card will run at X MHz, and if the core starts using too much power, the clock speed is dropped to 90% of X, or whatever it needs to be to stay below the threshold. This isn't really overclocking; it's dynamic underclocking.


Kind of...

So the default clock is just the max clock the core will reach. The card adjusts the clock to keep itself within the TDP envelope. From the graph, it's obvious that with a 200W maximum in Perlin Noise, the 6950 fluctuates from 650 to 800MHz (spending more time closer to 650). Adjust the TDP up 10% to 220W, though, and the 6950 runs at a full 800MHz the entire time.

To OC, you just move the clock from 800MHz to 850MHz or whatever. In other words, the TDP slider and the OC slider just set the maximums the card can run at, within each other's envelope.

What's interesting is that this shows benchmarks aren't reliable numbers for performance - synthetics can stress components all the way to the TDP limit, so it'll be important to raise the power slider to see the card's true performance. Look at the FPS for Perlin Noise (a component of 3DMark11 as well) - BIG difference, and possibly a huge change in scores.
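
Quick back-of-the-envelope version of that, assuming (big assumption) that power scales roughly linearly with clock, with numbers eyeballed from the description above rather than measured:

Code:
# Sustained clock under a power cap, with a crude linear power-vs-clock model.
# Illustrative only: the real curve isn't linear, which is presumably why the
# actual card spends time nearer 650MHz instead of holding one average clock.

MAX_CLOCK_MHZ = 800      # 6950 default clock
DRAW_AT_MAX_W = 215      # guessed Perlin Noise draw at full clock

def sustained_clock(cap_w):
    if DRAW_AT_MAX_W <= cap_w:
        return MAX_CLOCK_MHZ                         # limiter never kicks in
    return MAX_CLOCK_MHZ * cap_w / DRAW_AT_MAX_W     # limiter pins power at the cap

print(round(sustained_clock(200)), "MHz")   # stock 200W cap -> ~744 MHz average
print(round(sustained_clock(220)), "MHz")   # +10% slider    -> the full 800 MHz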
 
This is just getting confusing now. What a release. First it was 30-40% faster than the GTX580, then it was suddenly barely faster than the 5870, and now it's at 'probably faster than the GTX580', with some disturbing new features which might be messing up the benchmarks. Oh well, guess it'll make the proper reviews all the more exciting.
 
From my understanding, the way it works is similar to Intel's Turbo Boost, except in reverse. With Turbo Boost, Intel dynamically increases one core's clock speed when the entire chip is under a set thermal threshold. ATI's solution does it backwards: by default, the card will run at X MHz, and if the core starts using too much power, the clock speed is dropped to 90% of X, or whatever it needs to be to stay below the threshold. This isn't really overclocking; it's dynamic underclocking.



I think I'm starting to understand how AMD's new PowerTune works, going by what I'm reading here.

The Radeon 6970 works like this (in theory):

  • Default clock of 880 MHz at a 250W TDP
  • Normal running mode of less than 880 MHz at a 190W TDP
PowerTune dynamically underclocks the card (to quote siliconnerd above) down to 190W during normal operation.

The new AMD Catalyst Control Center allows a person to adjust the power usage of the card, hence the completely redesigned interface in the preview drivers for 10.12.

So, let's say I want to run Metro 2033: I go to CCC and adjust the power up 10%. My performance goes up in the game as a result, because it's now running at or close to the actual speed of the GPU.

That or I can set PowerTune to automatically and dynamically adjust the power and speed based on its load. If it's at idle, it stays at a lower clock speed and 190W TDP. If it's under load, it goes up and stays within or close to 250W. If I want to OC the GPU, I manually adjust it myself.

Right?
 
This is just getting confusing now. What a release. First it was 30-40% faster than the GTX580, then it was suddenly barely faster than the 5870, and now it's at 'probably faster than the GTX580', with some disturbing new features which might be messing up the benchmarks. Oh well, guess it'll make the proper reviews all the more exciting.

"AMD King of Trolls"
 
related: [image: fot023.jpg]


Read my post...

The "default" clock is just the max clock of the card. The TDP max is just the max TDP the card will reach.

Typically, in games, the TDP is always lower than the max, and so it should have no problems running at the clocks it needs to. It's in synthetics (Furmark is an example of this) where TDP often goes way above and beyond the TDP, and so this adjusts clocks.

Just like Orthos can stress all 4 cores at 100%, which isn't typically going to happen, synthetics can stress all 1536SP's at once and drive power way up, so this can change clocks to dynamically keep card within limits

Increasing power 10% isn't automatically going to raise power 10%. To increase max clocks, you still need to OC the card to like 950MHz for example.

However, increasing TDP 10% means the card can run at 275W max for example, so the card will not lower clocks as easily anymore since the threshold is higher - and thus performance overall can increase if your clocks are high enough that it requires a higher power maximum.

For instance, lets say I OC the card to 1000Mhz - and now I draw 280W.. But TDP is kept at 250W. Then, the card keeps hitting the 250W barrier and so my card runs from 800MHz to 1000Mhz up and down, like in that graph, to keep the card below 250W.. However, I slide the TDP power to +20% to 300W, and now my card runs constantly at 1000MHz since it is drawing 280W, less than the 300W. (all #'s are just examples fyi)
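
Running that 1000MHz example through the same crude "power roughly tracks clock" assumption (illustrative only, like the numbers above):

Code:
# 1000MHz OC drawing ~280W, checked against a 250W cap and the +20% (300W) cap.

OC_CLOCK_MHZ = 1000
DRAW_AT_OC_W = 280

for cap_w in (250, 300):
    if DRAW_AT_OC_W <= cap_w:
        clock = OC_CLOCK_MHZ                          # cap never reached
    else:
        clock = OC_CLOCK_MHZ * cap_w / DRAW_AT_OC_W   # limiter bounces around this average
    print(f"{cap_w}W cap -> roughly {clock:.0f} MHz sustained")

Which lines up with the graph: clocks bouncing between 800 and 1000MHz at the 250W cap (around ~893MHz on average in this toy model), and flat at 1000MHz once the cap is 300W.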
 
Oh please. Do you even remember a month ago when the 580 came out? Half the board was mocking Nvidia for releasing such an "unimpressive" refresh, and talking trash about how AMD was going to kill them this round.

It was an unimpressive refresh for those that already owned the 480... for all others, it was 'Fermi done right' and a very good release... if the 6970 cannot overtake Nvidia after such an 'unimpressive' refresh, then the only thing left is for Nvidia to sit out a year and let AMD catch up :D
 
Yeah, the question is whether that gives normal everyday performance increases, or just better overclocking. Also, why do we assume that the people with retail cards are testing at 190W TDP? Do they not know enough to flip the switch? Or are we still hanging onto the theory that the retail cards come with some kind of crippled low-power drivers?

There is no switch to turn up the TDP; from what I have been reading, you adjust the power control in CCC. The default power control runs the card at a lower speed, while adjusting the power control higher bumps up the clocks. The drivers that everyone is testing with don't have the option to adjust it. I think the newer 10.12 does, and that's probably the one shipped to reviewers.
 
There is no switch to turn up the TDP; from what I have been reading, you adjust the power control in CCC. The default power control runs the card at a lower speed, while adjusting the power control higher bumps up the clocks. The drivers that everyone is testing with don't have the option to adjust it. I think the newer 10.12 does, and that's probably the one shipped to reviewers.
There is a small slider switch on the board next to the CrossFire connectors. It can be seen in the PCB image here: http://semiaccurate.com/2010/12/13/radeon-hd-6900-appears-all-over/ I don't think its use has really been explained yet. Some rumors were pointing to it being a switch for a dual BIOS, with one able to cold start. I think there have also been some people suggesting that it changes the max TDP.
 
I see PowerTune maybe being beneficial for people that wish to keep their GPU temps and fan noise under control. But what's the point really? To keep Furmark from killing cards? If that's the goal, fine, but don't start dicking with my clocks when the action gets heavy and I need every frame I can get.
 