There is no switch to turn up the TDP. From what I have been reading, you adjust the power control setting in CCC: the default power control runs the card at lower clocks, while raising it bumps the clocks up. The driver everyone is testing with doesn't have the option to adjust it; I think the newer 10.12 does, and that's probably the one shipped to reviewers.
it was an unimpressive refresh for those that already owned the 480...for all others it was 'Fermi done right' and a very good release...if the 6970 cannot overtake Nvidia after such an 'unimpressive' refresh then the only thing left is for Nvidia to sit out a year and let AMD catch up
Quote: "I see PowerTune maybe being beneficial for people that wish to keep their GPU temps and fan noise under control. But what's the point really? To keep Furmark from killing cards? If that's the goal, fine, but don't start dicking with my clocks when the action gets heavy and I need every frame I can get."

It basically allows higher clocks while staying within a thermal budget. Current CPUs and GPUs are past the point where overclocking is limited by the achievable clock speed; it's limited more by the cooling solution. This should help with overclocking, since you will be able to overclock further without having to upgrade the cooling or put up with a loud fan. The big issue will be when it clocks down. If it clocks down when the game is already running at a low frame rate, it will hurt performance; but if it clocks down when you are closer to the maximum or average frame rate, it will help. The interesting thing is that the power-intensive parts of a game don't always align with the minimum frame rate. Look at StarCraft II when it came out: the menu was what was causing cards to overheat.
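A minimal sketch of the clock-down behavior described above, assuming a fixed power budget and step-wise clock adjustment; the numbers, names, and step logic are illustrative only, not AMD's actual PowerTune algorithm.

```python
# Illustrative sketch only: a governor that drops the core clock when estimated
# board power exceeds a fixed budget and raises it back toward stock otherwise.
# Budget, clocks, and step size are made-up numbers, not AMD's real values.

POWER_BUDGET_W = 190
STOCK_CLOCK_MHZ = 880
MIN_CLOCK_MHZ = 500
STEP_MHZ = 10

def next_clock(current_mhz: int, estimated_power_w: float) -> int:
    """Pick the clock for the next interval from the current power estimate."""
    if estimated_power_w > POWER_BUDGET_W:
        return max(MIN_CLOCK_MHZ, current_mhz - STEP_MHZ)   # over budget: throttle
    return min(STOCK_CLOCK_MHZ, current_mhz + STEP_MHZ)     # headroom: recover

# A Furmark-style power spike gets clocked down; a typical game load stays at stock.
clock = STOCK_CLOCK_MHZ
for power in (170.0, 185.0, 205.0, 210.0, 195.0, 180.0):   # fake per-interval power samples
    clock = next_clock(clock, power)
    print(f"power={power:.0f}W -> clock={clock}MHz")
```

Whether this kind of throttling ever hits the frames you actually care about depends entirely on how the power estimate lines up with the demanding scenes, which is the open question in the post above.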
AMD was second to Fermi all this time, and everyone said Nvidia was doomed. Now Nvidia is around GTX 580 performance with a single GPU, and AMD is doomed?
That makes no sense.
The 4890 was also a refresh of the 4870, came 10 months later, and still lost to the GTX285, but people thought it was a great card. This comes out 15 months later, has more changes and is a far bigger improvement over the 5870 than the 4890 was over the 4870, and it's a failure? sigh
that sentence made absolutely no sense
It's all about expectations. Everyone thought (hoped?) the 6970 would be a 580 killer, so anything less than that is going to seem bad.
Meant AMD, you know what I mean. And the point stands
Quote: "seems like both 6950 and 6970 have vapor chamber cooler. Pic at overclocker forums."

^ Anybody have the link for that?
the 580 is what the 480 should have been...and once again ATI is falling short, this time with the difference being that when you factor in everything the 6970 will not be the 'better' card
Isn't it kinda early to say that one way or the other? And if anything, things are kinda pointing in the opposite direction; that the 6970 will actually be faster on average. Also, from the looks of it, cheaper and less power hungry. Again, I'm not concluding anything until the 15th, just saying how things have started pointing in AMD's direction again the last couple of days.
point doesn't stand...Nvidia's release of the 480 was not the card they wanted to put out there...it was way too hot, loud and had its features cut...in short it was a disaster...yet they were still able to have a 'faster' card than ATI...now faster does not necessarily mean better...I think the 5870 won that round all things considered
^ Anybody have the link for that?
Just spent 10 minutes checking forums and nothing.
Quote: "580 only 25% faster? depends on the game suite, but it's 40% faster on average from a wide array of games"

How about 20 games? (A rough sketch of how the choice of suite moves that average follows this post.)
Second, how do you know the 6970 will not be the better card? Do you have pricing #'s? Do you have power #'s? Heck, do you even have reliable performance #'s? Do you even know what AMD is targeting with the 6970?
AMD hasn't competed with Nvidia for the single high end GPU crown since the X1950XTX. But unless they've explicitly said that the 6900 was intended for the crown (anything else was just rumors spread by the same people who have been wrong again and again this time around), it's silly to think that this card is a failure on their end, especially since you aren't taking the whole card in its context, and just looking at performance #'s.
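Since the "25% vs. 40% faster" dispute above comes down to which games get averaged, here is a tiny illustration of how the suite changes the headline number. The per-game ratios are invented for the example; only the averaging method (geometric mean of per-game performance ratios) is the point.

```python
import math

def geomean(ratios):
    """Geometric mean of per-game performance ratios (card A fps / card B fps)."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-game ratios - made up purely for illustration.
small_suite = [1.22, 1.25, 1.28]                      # a few titles
wide_suite  = small_suite + [1.55, 1.48, 1.35, 1.52]  # add some shader-heavy titles

print(f"small suite: {geomean(small_suite):.2f}x")    # ~1.25x -> "25% faster"
print(f"wide suite:  {geomean(wide_suite):.2f}x")     # ~1.37x -> closer to "40% faster"
```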
Quote: "AMD and Nvidia ALWAYS strive to be the fastest performing card on the market"

Not really, they strive to make money.
Ahh, missed the link for a second.
Quote: "so AMD is not targeting the high end enthusiast with the 6970?...AMD and Nvidia ALWAYS strive to be the fastest performing card on the market...they never launch a new product saying 'OK we want to be #2 this time'...there are different market segments for each card - budget, mainstream and enthusiast...to say that AMD is not aiming to be the best performing card in each segment is silly"

It isn't so much that they say "we are going to be #2." With the RV770, ATI realized that making the largest die they could and throwing in everything including the kitchen sink was taking too much development time. Instead, they moved to making the best GPU for a given die area (read the RV770 and RV870 stories on AnandTech). It isn't saying they will make a worse GPU; the strategy is more to make something that can be released on time. How well it compares to Nvidia's card is determined by what Nvidia brings to the table.
AMD hasn't competed directly for the top-performing single GPU from Nvidia since 2007. The other posters covered the points nicely, but I wanted to add to the 32nm part.
AMD was always ahead of Nvidia on process nodes. 32nm was due at this time, and its cancellation screwed AMD's plans.
Remember, AMD was the first to 65/55nm. It then pushed to 40nm with the 4770 many months before the 5xxx generation. Had 32nm been on time, Cayman would probably have shipped on 32nm with more functional units and higher clocks, all in a smaller package, probably a roughly 4890-sized die. TSMC cancelled it, and Cayman on 40nm was what we ended up with.
Anywho, even beyond all those facts, people who automatically assume performance is the whole package forget a LOT goes into a GPU's final result. Performance is a big one, of course, but so are costs to manufacture, software support (drivers AND games), heat output and power intake (a big issue with OEMs), future scalability, etc.
You can't take performance in a bubble and say that's all that matters. If I created a GPU that was 30% faster than the GTX 580 but drew 10x more power, cost me $1000 to manufacture, and was hotter than the surface of the sun, would it be a success? Probably not, right?
The good news for AMD is that because their 389mm^2 GPU is apparently this close to Nvidia's 520-530mm^2 GPU, TSMC's cancellation of 32nm and possible delay of 28nm until 2012 means AMD has a LOT more breathing room to go up. It would not surprise me if AMD creates a 6975 or 6980 or something, like the RV790 (4890) with higher clocks and possibly more functional units, and outright takes the crown sometime next year if 28nm really is delayed. Nvidia already did their 40nm refresh, so it'll be interesting to see what AMD pushes to next.
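To put rough numbers on the die-size point above: a common first-order candidates-per-wafer approximation (no yield or scribe-line modeling) shows why a ~389 mm^2 die leaves a lot more pricing headroom than a ~525 mm^2 one on the same 300 mm wafers. The formula and wafer size are standard; the die areas are the approximate figures quoted in the thread, and everything here is back-of-the-envelope.

```python
import math

WAFER_DIAMETER_MM = 300.0

def dies_per_wafer(die_area_mm2: float) -> int:
    """First-order estimate of die candidates per wafer (ignores yield and scribe lines)."""
    d = WAFER_DIAMETER_MM
    gross = math.pi * (d / 2) ** 2 / die_area_mm2           # wafer area / die area
    edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)   # partial dies lost at the edge
    return int(gross - edge_loss)

for name, area in [("Cayman (~389 mm^2)", 389.0), ("GF110 (~525 mm^2)", 525.0)]:
    print(f"{name}: ~{dies_per_wafer(area)} candidates per 300 mm wafer")
# Roughly 147 vs 105 - about 40% more candidates for the smaller die,
# before yield differences are even considered.
```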
Quote: "Well, to look at it one way, AMD had a complete runaround with the top performing single GPU for almost 6 months straight - longer than the combined reign of each of the top Fermi dogs (top dog being a good thing...). So even if it doesn't fit into the 'generation' thing, it was then when AMD started seriously gathering market share/volume and momentum."

I would say the 4870/4850 launch is where AMD started gathering market share. Look at the Steam hardware survey: the 4800 series is still #1. Even though the card wasn't as fast as the 280, it caused Nvidia to rapidly drop prices to compete. Even with low availability after the release of the 5800 series and the price gouging by retailers, AMD soundly won the market share game for this round.
Quote: "AMD was always ahead of Nvidia on process nodes. 32nm was due at this time, and its cancellation screwed AMD's plans."

that whole post was a 'what if' scenario...what if AMD released Cayman on 32nm?...they didn't so it's a mute point

Quote: "Nvidia already did their 40nm refresh, so it'll be interesting to see what AMD pushes to next."

Nvidia already did their 40nm refresh means it precludes them from releasing another refresh?...how do you know how far along Nvidia is with their 28nm part?

if you're basing market share on the Steam hardware survey then you left out the part where Nvidia has a 59% market share compared to 32% for ATI
Quote: "that whole post was a 'what if' scenario...what if AMD released Cayman on 32nm?...they didn't so it's a mute point"

There is a difference between "mute" and "moot", seriously.
Quote: "Nvidia already did their 40nm refresh means it precludes them from releasing another refresh?...how do you know how far along Nvidia is with their 28nm part?"

It doesn't matter if Nvidia is ready to release a 28nm part tomorrow, or last week, or in 3 months. The only thing that matters is that TSMC doesn't have a 28nm process ready yet.
Quote: "if you're basing market share on the Steam hardware survey then you left out the part where Nvidia has a 59% market share compared to 32% for ATI"

How about DX11 parts? That should be more indicative of the latest sales.
Quote: "Not really, they strive to make money. Making lots of money isn't inclusive of having the #1 card."

Exactly. The whole point of designing, producing, and advertising video cards is to make money so the whole process can be repeated, hopefully ad infinitum. AMD/ATI has been systematically and purposely targeting Nvidia's cash cows, at least in the retail consumer segment: the 68XX was targeted at the GTX 460, the 69XX at the 5XX series. The AMD offerings are cheaper to produce, so AMD can afford a certain leeway in pricing that Nvidia can't. Nvidia has to either cut prices to maintain market share and lose money, or maintain margins and lose market/mindshare. NV is slowly but surely being squeezed. Hopefully NV can hang tough until TSMC gets off their ass and gets 28nm ready to go on schedule. I have no desire for a monopoly.
Quote: "I have no desire for a monopoly."

QFT, the GTX 280 launch was a glimpse of what would happen price-wise if either Nvidia or AMD left the market.
That'll be BS if it's HQ to HQ like here on HardOCP. As long as AMD does no degradations on HQ, then even the most Nvidia zealots should accept whatever results come in. I guess it never fails and you can expect the worst out of some. It'll be funny if all the results are @ 190W and we have yet to see the 250W results! Perhaps even the 250W results were skewed on Cat 10.11, with both results showing 190W numbers, and on the 10.12 WHQL under 250W it beats a GTX 580 and comes out for $399!
Why exactly would anyone want a powersave switch?
Marketing gimmick? Set it to max, and forget about it forever?
Quote: "I really doubt that is a power switch, it could easily be done with software. It is believed that is a bios switch. The card has dual bios and manufacturers can put an OC bios on the card, and if someone wants to go back to default they flip the switch."

Yeah, it might also allow users to modify the 2 bioses; one for an OC (custom) and another for the factory firmware.
Still seems a little pointless, but at least people could flash a bios and not worry about bricking their card.
That assumes, of course, both the bioses will be modifiable.
Quote: "I can't remember where I saw it but it mentioned that the second bios was for modifications like cold bugs when overclocking with liquid nitrogen and stuff like that, and the other was the stock factory bios. Not sure if that was accurate. Damn I can't remember."

I really would find it hard to believe a reference card would have that. I mean, how many people really have that issue, .001%? I could see someone like XFX putting that on a special version of the card, though. Power throttling would make more sense, though you would think that would be done in software. Like the rest of the questions on this card, Kyle should tell us soon.