ATI HD 6970 actual benchmarks

Seems like both the 6950 and 6970 have a vapor chamber cooler. Pic at the Overclockers forums.
 
There is no switch to turn up the TDP; from what I have been reading, you adjust the power control setting in CCC. The default power control runs the card at lower speeds, while adjusting it higher bumps up the clocks. The drivers everyone is testing with don't have the option to adjust it. I think the newer 10.12 does, and that's probably the one shipped to reviewers.

Which kind of all leads back to the question of exactly what drivers they are shipping with the retail cards. Seems strange that the shipping drivers would not support such a key feature of the card.

As far as performance goes, I guess we'll just have to wait to see how much the card is TDP constrained - might be big gains, might be no gains at all, depending on the game.
 
it was an unimpressive refresh for those who already owned the 480...for everyone else it was 'Fermi done right' and a very good release...if the 6970 cannot overtake Nvidia after such an 'unimpressive' refresh, then the only thing left is for Nvidia to sit out a year and let AMD catch up :D

That makes no sense.

AMD was second to Fermi all this time, and everyone said Nvidia was doomed. Now AMD is around GTX 580 performance with a single GPU, and AMD is doomed? :rolleyes:

The 4890 was also a refresh of the 4870, came 10 months later, and still lost to the GTX 285, but people thought it was a great card. This comes out 15 months later, has more changes, and is a far bigger improvement over the 5870 than the 4890 was over the 4870, and it's a failure? Sigh.
 
I see PowerTune maybe being beneficial for people who wish to keep their GPU temps and fan noise under control. But what's the point, really? To keep Furmark from killing cards? If that's the goal, fine, but don't start dicking with my clocks when the action gets heavy and I need every frame I can get.
It basically allows higher clocks while staying within a thermal budget. Current CPUs and GPUs are past the point where overclocking is limited by clock speed; it's limited more by the cooling solution. This should help with overclocking, since you will be able to overclock further without having to upgrade the cooling or make the fan loud. The big question is when it will clock down. If it clocks down when the game is already running at a low frame rate, it will hurt performance; if it clocks down when you are closer to your maximum or average frame rate, it will help. The interesting thing is that the power-intensive parts of a game don't always align with the minimum frame rate. Look at StarCraft II when it came out: the menu screen was what was causing cards to overheat.
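To make the clock-down question concrete, here's a rough sketch of what a power-capped clock governor does conceptually (Python, purely illustrative - this is not AMD's actual PowerTune algorithm, and the cap, clock range, and power model are all made-up numbers):

```python
# Toy model of a power-capped clock governor (illustrative only;
# the real PowerTune logic lives in hardware and is far more involved).

POWER_CAP_W = 250.0          # user-set power budget (made-up value)
CLK_MAX, CLK_MIN = 880, 500  # clock range in MHz (assumed numbers)
STEP = 10                    # MHz adjustment per control interval

def estimated_power(clock_mhz, activity):
    """Fake power model: draw grows with clock speed and unit activity."""
    return 100.0 + 0.2 * clock_mhz * activity

def next_clock(clock_mhz, activity):
    """Throttle only when the estimate exceeds the cap;
    otherwise recover toward the full boost clock."""
    if estimated_power(clock_mhz, activity) > POWER_CAP_W:
        return max(CLK_MIN, clock_mhz - STEP)
    return min(CLK_MAX, clock_mhz + STEP)

# A normal gaming load never hits the cap, so clocks stay at maximum;
# a Furmark-style worst case (activity near 1.0) gets clamped instead.
clock = CLK_MAX
for activity in (0.6, 0.6, 1.0, 1.0, 1.0, 0.7):
    clock = next_clock(clock, activity)
    print(f"activity={activity:.1f} -> {clock} MHz")
```

The point of the sketch: a typical load never touches the cap, so clocks stay maxed, and only a Furmark-style load gets clamped - which is why it matters so much whether the throttle kicks in during your minimum-framerate scenes or somewhere else.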
 
That makes no sense.

AMD was second to Fermi all this time, and everyone said Nvidia was doomed. Now Nvidia is around GTX 580 performance with a single GPU, and AMD is doomed? :rolleyes:

The 4890 was also a refresh of the 4870, came 10 months later, and still lost to the GTX 285, but people thought it was a great card. This comes out 15 months later, has more changes, and is a far bigger improvement over the 5870 than the 4890 was over the 4870, and it's a failure? Sigh.

It's all about expectations. Everyone thought (hoped?) the 6970 would be a 580 killer, so anything less than that is going to seem bad.
 
that sentence made absolutely no sense

Meant AMD, you know what I mean. And the point stands


It's all about expectations. Everyone thought (hoped?) the 6970 would be a 580 killer, so anything less than that is going to seem bad.


You mean those who read rumor websites? Don't get me wrong, 1920 SPs probably would have beaten the 580, but once it became clear it was 1536 with other changes, it's time to rein in expectations, especially since we're still on the same 40nm node

And considering AMD hasn't had a single GPU go toe-to-toe with Nvidia's best in 4 years now, I'd say this is a big change for them to even be contending at this point
 
Meant AMD, you know what I mean. And the point stands

point doesn't stand...Nvidia's release of the 480 was not the card they wanted to put out there...it was way too hot, loud, and had its features cut...in short, it was a disaster...yet they were still able to have a 'faster' card than ATI...now, faster does not necessarily mean better...I think the 5870 won that round, all things considered

the 580 is what the 480 should have been...and once again ATI is falling short, this time with the difference being that, when you factor in everything, the 6970 will not be the 'better' card
 
the 580 is what the 480 should have been...and once again ATI is falling short, this time with the difference being that, when you factor in everything, the 6970 will not be the 'better' card

Isn't it kinda early to say that one way or the other? And if anything, things are kinda pointing in the opposite direction; that the 6970 will actually be faster on average. Also, from the looks of it, cheaper and less power hungry. Again, I'm not concluding anything until the 15th, just saying how things have started pointing in AMD's direction again the last couple of days.
 
The 580 is roughly 25% faster than the 5870...
If ATI can't pull off a 25% boost from the 5870 to the 6970, that's pretty disappointing. I'm not saying ATI needs to do that; they can still fight a pricing war, I guess.
 
Isn't it kinda early to say that one way or the other? And if anything, things are kinda pointing in the opposite direction; that the 6970 will actually be faster on average. Also, from the looks of it, cheaper and less power hungry. Again, I'm not concluding anything until the 15th, just saying how things have started pointing in AMD's direction again the last couple of days.

cheaper, yes...less power hungry, also probably yes (but not the big difference we saw from the 480 to the 5870)...faster, no...it seems as though ATI was caught unprepared and either didn't think Nvidia would release a 480 refresh so soon or didn't think Nvidia would be able to produce a better-performing, less power-hungry, and less noisy Fermi-based card
 
580 only 25% faster? Depends on the game suite, but it's 40% faster on average across a wide array of games

And again, as I posted in that long post, compare the 6970 to the 480/580, and not to the 5870, since the gap between the 5870 and the 480/580 is a HUGE performance delta. In some games the 5870 trails by 10% - in others, by 60-80%
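To put toy numbers on why the baseline matters (illustrative percentages only, not real benchmark results):

```python
# Toy arithmetic: the same hypothetical card looks very different
# depending on which baseline you normalize against.
hd5870 = 100.0          # baseline score, arbitrary units
gtx580 = hd5870 * 1.40  # assume the 580 averages 40% faster
hd6970 = hd5870 * 1.25  # assume a 25% uplift over the 5870

print(f"6970 vs 5870: {hd6970 / hd5870 - 1:+.0%}")  # +25%
print(f"6970 vs 580:  {hd6970 / gtx580 - 1:+.0%}")  # roughly -11%
```

A "25% boost over the 5870" and "competitive with the 580" are only the same claim if the 580 itself sits ~25% above the 5870; at a 40% gap, the very same card reads as ~11% behind.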

point doesn't stand...Nvidia's release of the 480 was not the card they wanted to put out there...it was way too hot, loud, and had its features cut...in short, it was a disaster...yet they were still able to have a 'faster' card than ATI...now, faster does not necessarily mean better...I think the 5870 won that round, all things considered

the 580 is what the 480 should have been...and once again ATI is falling short, this time with the difference being that, when you factor in everything, the 6970 will not be the 'better' card

First of all, the 480 not being what Nvidia wanted to put out there is irrelevant. They failed to deliver, just like AMD failed to deliver with the R600, etc. Blame it on the process or whatever; it was bad engineering, and Nvidia admitted as much

Second, how do you know the 6970 will not be the better card? Do you have pricing #'s? Do you have power #'s? Heck, do you even have reliable performance #'s? Do you even know what AMD is targeting with the 6970?

AMD hasn't competed with Nvidia for the single high-end GPU crown since the X1950XTX. But unless they've explicitly said that the 6900 was intended for the crown (anything else was just rumors spread by the same people who were wrong before and are wrong again this time around), it's silly to think that this card is a failure on their end, especially since you aren't taking the whole card in context and are just looking at performance #'s.

Is the 6970 a failure if it's less than 10% slower than the 580 but priced $100 cheaper and typically draws 30 watts less? The latest pricing from OCUK put it at the same price as the 5870 at launch, meaning $400, and this time supposedly without the supply issues the 5870 had.

So look at where the card sits after reviews on Dec 15th, in the whole context. The 4870 came 1 1/2 months after the GTX 280 and was 20% slower, and that card was a failure, right?

P.S. The 480-5870 gap was an all-time high. However, the #'s given for the 6970 are from one or two scenarios against undervolted GTX 480s. The actual power usage is supposed to be in the 190W range
 
Second, how do you know the 6970 will not be the better card? Do you have pricing #'s? Do you have power #'s? Heck, do you even have reliable performance #'s? Do you even know what AMD is targeting with the 6970?

AMD hasn't competed with Nvidia for the single high-end GPU crown since the X1950XTX. But unless they've explicitly said that the 6900 was intended for the crown (anything else was just rumors spread by the same people who were wrong before and are wrong again this time around), it's silly to think that this card is a failure on their end, especially since you aren't taking the whole card in context and are just looking at performance #'s.

so AMD is not targeting the high-end enthusiast with the 6970?...AMD and Nvidia ALWAYS strive to have the fastest-performing card on the market...they never launch a new product saying "OK, we want to be #2 this time"...there are different market segments for each card - budget, mainstream, and enthusiast...to say that AMD is not aiming to have the best-performing card in each segment is silly
 
so AMD is not targeting the high-end enthusiast with the 6970?...AMD and Nvidia ALWAYS strive to have the fastest-performing card on the market...they never launch a new product saying "OK, we want to be #2 this time"...there are different market segments for each card - budget, mainstream, and enthusiast...to say that AMD is not aiming to have the best-performing card in each segment is silly
It isn't so much that they say "we are going to be #2." With the RV770, ATI realized that making the largest die they could and throwing in everything including the kitchen sink was taking too much development time. Instead, they moved to making the best GPU for a given die area (read the RV770 and RV870 stories on AnandTech). It isn't that they set out to make a worse GPU; the strategy is more about making something that can be released on time. How well it compares to Nvidia's card is determined by what Nvidia brings to the table.
 
I don't think AMD is doing a bad thing at all. If they can make a card that is pretty close to the GTX 580, make it widely available, and price it 100 bucks cheaper, then it's a win-win. They could have made the 6900 series bigger and faster, but then their profits would be low because the yields would be low. I am sure that if 32nm hadn't been scratched, you would have seen this card with close to 2000 shaders, and it would have fit in the same power envelope. If the card had 1920 shaders and performed like this, then it would have been a fail.

This all just points to one thing: I think this was probably the best they could do on 40nm while staying power efficient and getting the yields right.
 
so AMD is not targeting the high-end enthusiast with the 6970?...AMD and Nvidia ALWAYS strive to have the fastest-performing card on the market...they never launch a new product saying "OK, we want to be #2 this time"...there are different market segments for each card - budget, mainstream, and enthusiast...to say that AMD is not aiming to have the best-performing card in each segment is silly

Uhm... The HD 3870 and HD 4870 definitely didn't target nVidia's top guns in their respective generations. The HD 3870 was AMD's recovery from the HD 2900XT's failed push for the top, while the HD 4870 was an incremental step up in terms of pricing and performance targets, but AMD still didn't design a card to compete with a monster like the GTX 280.

Those two releases led up to the HD 5870, where AMD imo did resume its quest for the top while still being observant of price/performance/heat/power/etc. (all the factors which doomed the HD 2900XT when it failed to prove viable against the 8800GTX). Imo, the HD 5870 beat the GTX 480. It didn't do it in straight-up performance, but AMD forced nVidia into a position where, if nVidia wanted the performance crown, it would have to make an HD 2900XT. The only reason the GTX 480 is remembered a little better than the ill-fated HD 2900XT is that the GTX 480 did manage to get the performance it required... but at massive cost. When all was said and done, market share shifted heavily in AMD's favor, and no doubt that speaks a fair bit to sales as well.

And as that works out, you'd assume that AMD should next be able to take a proper victory, or nVidia should have to keep doing what it was doing.

But you've got three major events at play that will impact this launch. First, you have the GTX 460. That seems a wtf consideration for the high-end, but the GTX 460 proved to be one of those interesting little beasts that nVidia has an uncanny knack for occasionally churning out (see the 6600GT and 8800GT). AMD had successfully relegated nVidia's whole Fermi line-up to relatively terrible price/performance up until that point. But anyway, the 460 threatened AMD's mainstream cards, and those are the moneymakers, so the HD 68xx cards were aggressively slapped out first (whereas AMD might have focused on the HD 69xx cards first otherwise). That would be fine, and so far has been quite fine, but nVidia used the time that bought to plop out the GTX 580 and GTX 570 first, and for both nVidia opted to just refine what they had. Again, this is all fine, because AMD was still ahead on this front... but then we run into the 32nm issue. Both companies expected to be on 32nm by this point, and imo AMD was on schedule for that; nVidia, I'd suspect, was knocked off schedule by the GTX 480. The extended 40nm cycle combined with the other two factors allowed nVidia to do the housekeeping they needed to.

So anyway, looking at all that, the HD 69xx cards wanted to be 32nm and to be out now or soonish, competing with nVidia's current crop from that vantage point (while nVidia would have been forced to decide between a 40nm refresh, as they have now, or trying to follow AMD to 32nm despite trailing), but TSMC screwed that all to hell. But unless/until AMD drops its concerns about producing another HD 2900XT, when the two companies are at an even stage in the game I think we're always going to see their top guns either trading blows or nVidia's coming out slightly ahead. Viewed through that context, the HD 6970 just has to land between the GTX 570 and GTX 580 to be a good card for AMD.
 
AMD hasn't competed directly with Nvidia for the top-performing single GPU since 2007. The other posters covered the points nicely, but I wanted to add to the 32nm part.

AMD was always ahead of Nvidia on process nodes. 32nm was due at this time, and its cancellation screwed AMD's plans.

Remember, AMD was the first to 65/55nm. It then pushed to 40nm with the 4770 many, many months before the 5xxx generation. Had 32nm been on time, Cayman would probably have been on 32nm, with more functional units and higher clocks, all in a smaller package, roughly a 4890-sized die. TSMC cancelled the node, and Cayman on 40nm was what ended up happening.

Anywho, even beyond all those facts, people who automatically assume performance is the whole package forget that a LOT goes into a GPU's final result. Performance is a big one, of course, but so are manufacturing costs, software support (drivers AND games), heat output and power draw (a big issue with OEMs), future scalability, etc.

You can't take performance in a bubble and say that's all that matters. If I created a GPU that was 30% faster than the GTX 580 but drew 10x more power, cost me $1000 to manufacture, and was hotter than the surface of the sun, would it be a success? Probably not, right?

The good news for AMD is that because their 389mm^2 GPU apparently gets this close to Nvidia's 520-530mm^2 GPU, TSMC's cancellation of 32nm and possible delay of 28nm until 2012 mean AMD has a LOT more breathing room to scale up. It would not surprise me if AMD creates a 6975 or 6980 or something, like the RV790 (4890), with higher clocks and possibly more functional units, and outright takes the crown sometime next year if 28nm really is delayed. Nvidia already did their 40nm refresh, so it'll be interesting to see what AMD pushes to next.
 
AMD hasn't competed directly with Nvidia for the top-performing single GPU since 2007. The other posters covered the points nicely, but I wanted to add to the 32nm part.

AMD was always ahead of Nvidia on process nodes. 32nm was due at this time, and its cancellation screwed AMD's plans.

Remember, AMD was the first to 65/55nm. It then pushed to 40nm with the 4770 many, many months before the 5xxx generation. Had 32nm been on time, Cayman would probably have been on 32nm, with more functional units and higher clocks, all in a smaller package, roughly a 4890-sized die. TSMC cancelled the node, and Cayman on 40nm was what ended up happening.

Anywho, even beyond all those facts, people who automatically assume performance is the whole package forget that a LOT goes into a GPU's final result. Performance is a big one, of course, but so are manufacturing costs, software support (drivers AND games), heat output and power draw (a big issue with OEMs), future scalability, etc.

You can't take performance in a bubble and say that's all that matters. If I created a GPU that was 30% faster than the GTX 580 but drew 10x more power, cost me $1000 to manufacture, and was hotter than the surface of the sun, would it be a success? Probably not, right?

The good news for AMD is that because their 389mm^2 GPU apparently gets this close to Nvidia's 520-530mm^2 GPU, TSMC's cancellation of 32nm and possible delay of 28nm until 2012 mean AMD has a LOT more breathing room to scale up. It would not surprise me if AMD creates a 6975 or 6980 or something, like the RV790 (4890), with higher clocks and possibly more functional units, and outright takes the crown sometime next year if 28nm really is delayed. Nvidia already did their 40nm refresh, so it'll be interesting to see what AMD pushes to next.

Well, to look at it one way, AMD had a clean run with the top-performing single GPU for almost 6 months straight - longer than the combined reigns of the top Fermi dogs :p (top dog being a good thing...)

So even if it doesn't fit into the "generation" framing, that was when AMD started seriously gathering market share/volume and momentum.
 
Well, to look at it one way, AMD had a clean run with the top-performing single GPU for almost 6 months straight - longer than the combined reigns of the top Fermi dogs :p (top dog being a good thing...)

So even if it doesn't fit into the "generation" framing, that was when AMD started seriously gathering market share/volume and momentum.
I would say the 4870/4850 launch is where AMD started gathering market share. Look at the Steam hardware survey: the 4800 series is still #1. Even though the card wasn't as fast as the 280, it forced Nvidia to rapidly drop prices to compete. Even with low availability after the release of the 5800 series and the price gouging by retailers, AMD soundly won the market share game this round.
 
AMD was always ahead of Nvidia on process nodes. 32nm was due at this time, and its cancellation screwed AMD's plans.

Nvidia already did their 40nm refresh, so it'll be interesting to see what AMD pushes to next.

that whole post was a 'what if' scenario...what if AMD released Cayman on 32nm?...they didn't so it's a mute point

'Nvidia already did their 40nm refresh' means they're precluded from releasing another refresh?...how do you know how far along Nvidia is with their 28nm part?

if you're basing market share on the Steam hardware survey, then you left out the part where Nvidia has a 59% market share compared to 32% for ATI
 
that whole post was a 'what if' scenario...what if AMD released Cayman on 32nm?...they didn't so it's a mute point

'Nvidia already did their 40nm refresh' means they're precluded from releasing another refresh?...how do you know how far along Nvidia is with their 28nm part?

if you're basing market share on the Steam hardware survey, then you left out the part where Nvidia has a 59% market share compared to 32% for ATI

Hint: for his relative sample data, click "more info."
Don't reply till you figure it out.
 
I take these benchmarks as rumors. The day an official review comes out, I will begin to form an opinion about the HD 6900 series cards.
 
that whole post was a 'what if' scenario...what if AMD released Cayman on 32nm?...they didn't so it's a mute point

'Nvidia already did their 40nm refresh' means they're precluded from releasing another refresh?...how do you know how far along Nvidia is with their 28nm part?

It doesn't preclude them, but physically they're not going to be able to get much bigger due to actual manufacturing limitations. You can't just make 1000mm^2 chips out of nowhere here, to say nothing of power restrictions.

Also, 28nm isn't done by Nvidia, it's done by TSMC, so who cares how far Nvidia is on their 28nm part - they can't get it out until TSMC is ready

You're seriously arguing those points? If so, then you need to read through the 4800 and 5800 behind-the-scenes articles from AnandTech to get some info on the GPU business.

And you managed to ignore the rest of my points too, seeing as how my 32nm blurb was just to inform people where Cayman was targeted and what the reality of the situation is :rolleyes:

if you're basing market share on the Steam hardware survey, then you left out the part where Nvidia has a 59% market share compared to 32% for ATI

I don't know who you're directing this at, since I didn't bring it up.

However, if you want actual GPU market share, you can take info from analysts:
[chart: Jon Peddie Research graphics market share, Q3 2010]


Google Jon Peddie GPU research and you'll see

Oh and that's integrated AND discrete - but if you compare AMD to Nvidia, including mobile stuff, AMD has a larger share than Nvidia now

You can also google AMD's analyst call where they stated they had 80%+ of the DX11 market share - and AMD isn't going to lie, unless they want to be investigated by the SEC

I don't have time to look up his latest discrete GPU breakdown, but last I remember, AMD had gone up from a 30/70 split around the time after the 8800s (before the 4800s) to something like 45/55 now, which is a huge difference

The entire shift began with the 4800 series. Prior to that, TWIMTBP was a big deal and Nvidia clearly ruled. Since then, though, the 4800s and 5800s have sold in enormous numbers, and AMD seeded DX11 cards to developers long before Nvidia did; hence AMD has managed to turn a LOT more games around to supporting their hardware / being less biased towards Nvidia than in the past.
 
if you're basing market share on the Steam hardware survey, then you left out the part where Nvidia has a 59% market share compared to 32% for ATI

A huge portion of that is 8800/9800-era GPUs, though.

Check out the DX11 section, and ATI has 80% of the market share.
 
that whole post was a 'what if' scenario...what if AMD released Cayman on 32nm?...they didn't so it's a mute point
There is a difference between "mute" and "moot", seriously.

'Nvidia already did their 40nm refresh' means they're precluded from releasing another refresh?...how do you know how far along Nvidia is with their 28nm part?
It doesn't matter if NVidia is ready to release a 28nm part tomorrow, or last week, or in 3 months; the only thing that matters is that TSMC doesn't have a 28nm process ready yet.

if you're basing market share on the Steam hardware survey, then you left out the part where Nvidia has a 59% market share compared to 32% for ATI
How about DX11 parts? That should be more indicative of the latest sales.


Not really, they strive to make money.
Making lots of money doesn't require having the #1 card.
Exactly. The whole point of designing, producing, and advertising video cards is to make money so the whole process can be repeated, hopefully ad infinitum. AMD/ATI has been systematically and purposely targeting NVidia's cash cows, at least in the retail consumer segment. The 68XX was targeted at the GTX 460, the 69XX at the 5XX series. The AMD offerings are cheaper to produce, so AMD can afford a certain leeway in pricing that NVidia can't. NVidia has to either cut prices to maintain market share, and lose money, or maintain margins and lose market/mindshare. NV is slowly but surely being squeezed. Hopefully NV can hang tough until TSMC gets off their ass and gets 28nm ready to go on schedule. I have no desire for a monopoly.
 
That'll be BS if it's HQ to HQ like here on HardOCP. As long as AMD does no degradations on HQ, then even the most Nvidia zealots should accept whatever results come in. I guess it never fails, and you can expect the worst out of some. It'll be funny if all the results are @ 190W and we have yet to see the 250W results! :D Perhaps even the 250W results were skewed on Cat 10.11, both showing 190W #s, and on 10.12 WHQL at 250W it beats a GTX 580 and comes out for $399! :D

This new powersave feature slide really gives me pause, because that's a huge FPS difference, nearly 20+, between the maximum TDP mode and the 190W powersave mode. It really would be the best explanation for the otherwise entirely confusing performance numbers of the 6970 to date. Also, why did AMD feel it important enough to put a 'nipple' switch on the 6970? Is power saving really that important to them, or what does the toggle actually do? How much of this eco mode is driver related/controlled might explain things as well.

It brings to mind Nvidia's anti-Furmark chip, which was a truly bizarre addition. Are all these efforts on the part of Nvidia and AMD just to bring in sexy low TDPs, or what? In any case, I hope that [H] is on top of this and gives us a review of the 6970 with the TDP both maxed and at 'eco settings' so we can properly understand the discrepancies in performance and the FUD that is floating around everywhere right now.
 
Why exactly would anyone want a powersave switch?
Marketing gimmick? Set it to max, and forget about it forever?
 
Why exactly would anyone want a powersave switch?
Marketing gimmick? Set it to max, and forget about it forever?

That, I don't understand either. But AMD has already made some weird decisions based on marketing, such as renaming the successor to the 5770 the 6870 and dropping the ATI name for good. So a nipple switch to enable some weird super eco mode might make sense to their marketing department. Other rumors I've heard are that the 'nipple' switches between dual BIOSes for the video card. Although again, one has to ask, WHY?
 
I really doubt that is a power switch; that could easily be done in software. It is believed to be a BIOS switch. The card has dual BIOSes, and manufacturers can put an OC BIOS on the card; if someone wants to go back to the defaults, they flip the switch.
 
I really doubt that is a power switch; that could easily be done in software. It is believed to be a BIOS switch. The card has dual BIOSes, and manufacturers can put an OC BIOS on the card; if someone wants to go back to the defaults, they flip the switch.
Yeah, it might also allow users to modify the two BIOSes: one for a custom OC and another keeping the factory firmware.
Still seems a little pointless, but at least people could flash a BIOS and not worry about bricking their card.

That assumes, of course, that both BIOSes will be modifiable.
 
Yeah, it might also allow users to modify the two BIOSes: one for a custom OC and another keeping the factory firmware.
Still seems a little pointless, but at least people could flash a BIOS and not worry about bricking their card.

That assumes, of course, that both BIOSes will be modifiable.

I can't remember where I saw it, but it mentioned that the second BIOS was for modifications (like dealing with cold bugs when overclocking with liquid nitrogen and stuff like that) and the other was the stock factory BIOS. Not sure if that was accurate. Damn, I can't remember.
 
I can't remember where I saw it, but it mentioned that the second BIOS was for modifications (like dealing with cold bugs when overclocking with liquid nitrogen and stuff like that) and the other was the stock factory BIOS. Not sure if that was accurate. Damn, I can't remember.
I really would find it hard to believe a reference card would have that. I mean, how many people really have that issue - .001%? I could see someone like XFX putting that on a special version of the card, though. Power throttling would make more sense, though you would think that would be done in software. Like the rest of the questions about this card, Kyle should tell us soon. :D
 