Now that HD 8XXX cards are months away, will ATI skip the 7990?

I vote Nvidia for releasing solid drivers.

Get AMD if you want crap drivers and 3-5 percent faster FPS in Metro and maybe one other game.

Lol, I vote for whoever has the best card at the price I want to pay when I want to buy.
 
First, let's make it clear that the 7970 is using 2x MSAA.

Can't go wrong with either card in terms of performance, but in the end it still comes down to who has better driver support, heat, fan speed, and power draw.

Nvidia is clearly the winner this gen; anyone who denies it is a die-hard AMD/ATI extremist.

I admit it. I'm a die-hard AMD extremist.
 
I'm glad I gave you a laugh, Kyle. Life is short, and we can use all the good cheer we can get!

Didn't know I could actually rouse the big guy to respond to my humble rant! I feel strangely honored. :)

Yes, yes you did give ONE gold award to an AMD or AMD-based product in the last 3 years. OK, so you're not really mad at AMD then!

Seriously, though, I am looking forward to the day that AMD graphics cards and CPUs strangely get the benefit of the doubt/mysteriously good reviews all of a sudden. It will feel like old times all over again.

They're reviewed pretty positively given their performance...
 
I vote Nvidia for releasing solid drivers.

Get AMD if you want crap drivers and 3-5 percent faster FPS in Metro and maybe one other game.

Just ONE other game? Like maybe:

-Battlefield 3:

BF3.png


Hexus wrote for the above AMD Battlefield 3 results:

"Look at the results, and then look at them again. AMD made a specific, concerted point about describing the GCN-related improvements for the Frostbite 2 engine that underpins Battlefield 3 - and by consequence, MOH: Warfighter. We had expected to see a 15 per cent improvement, perhaps a shade less, but the Catalyst 12.11 numbers are 33.9 per cent higher at 1080p and 19.4 per cent higher at 1600p.

We've double- and triple-checked the results and run the test multiple times; the reported figures are consistent."

-Aliens vs. Predator:

AVP.png


-Sleeping Dogs:

dogs.jpg


-DiRT Showdown:

showdown.jpg


-Batman Arkham Asylum:

batman.jpg



Did you mean that ONE game where the AMD 7970 wins against the GTX 680? Sorry, which ONE game did you mean again?
 
That's why clock for clock is important. If you have the GPUs running at the same speed and one does more work, it shows the strengths of the architecture. Since the 7970 is faster, a good starting consideration is indeed the memory bandwidth. It's not unreasonable to think the extra memory bandwidth helps performance, which again shows why the faster 7970 is the better purchase right now.
Clock-for-clock comparisons compare architectures and designs, not raw speed, something it seems you don't understand. 1.3 GHz is an arbitrary number; you can select any overclock, and Kepler and Tahiti will clock relatively the same. In fact, hwbot shows their averages as being almost identical: http://hwbot.org/hardware/videocard/geforce_gtx_680/ . Therefore, if both cards clock relatively the same but the 7970 is faster at the same clocks, clearly the 7970 is the higher-performing part.
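To put rough numbers on the bandwidth point, here's a back-of-the-envelope sketch using the public reference specs (purely illustrative; the helper function is mine, not from any review):

[CODE]
# Peak memory bandwidth = bus width (bits) / 8 * effective data rate (GT/s).
def bandwidth_gb_s(bus_bits, data_rate_gt_s):
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_bits / 8 * data_rate_gt_s

print(bandwidth_gb_s(384, 5.5))  # HD 7970:         264.0 GB/s
print(bandwidth_gb_s(384, 6.0))  # HD 7970 GHz Ed.: 288.0 GB/s
print(bandwidth_gb_s(256, 6.0))  # GTX 680:         192.0 GB/s
[/CODE]

A 50% wider bus at the same data rate is the whole story of why Tahiti pulls ahead wherever bandwidth matters.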

You will never, ever convince me of the logic behind clock-for-clock comparisons between two different architectures. That memory bandwidth BS also made zero sense to me. You don't seem to like including memory OC in your clock-for-clock point of view. Knowing the difference in stock memory bandwidths of Tahiti and Kepler, it's clear that those two respond very differently to memory OC. Not only that, but as a result of that difference in bandwidth, they also respond very differently to core OC when the bandwidth is held constant. Kepler will be much more bandwidth-restricted, which is not necessarily so when you take memory OC into account.

That's why you use raw data, which I showed you. Looks like you got lousy clockers; it happens. The reasonable expectation when buying a part is that you will fall at the 50th percentile, which in this case is 1214 MHz.

Similar clocks are the trend, I just proved it and provided references.

Wanna know the problem with your raw data? GTX 680 clocks are all over the place. There are many, many hwbot users who report the clocks you see on the first tab of GPU-Z. Those are not the actual clocks the card is running at; you need to check those in the sensors tab. Take the actual clocks, get an average out of those, and then talk about raw data.

Then there is the fact that for Tahiti cards it is much more common to bench with a setup not suitable for 24/7 use. Take the average air-cooled 7970 owner: for benching he cranks the voltage to max, clocks to where they are stable at that max voltage, makes his bench run, and then turns the voltage and clocks back down. Take your average air-cooled 680 owner: he cranks the clocks up as high as they are stable, makes his bench run, and then goes on playing his games.

Furthermore, I can show the opposite: there's no power difference. For example:
That's an overclocked 7970 @ 1210 MHz, and it's using 13 W more than a stock GTX 680. I was being generous with 50 W; again, power isn't the issue you're trying to make it into.

You can show it, but you don't seem to understand it. That XFX is overclocked with no voltage control. Overclocking with no overvolting results in exactly that: a minuscule power consumption increase. Just as with the 680 I showed you. Take your average 7970 and your average "raw data" overclock: on average you will need extra voltage to get there. That extra voltage will increase the power consumption roughly with the square of the voltage. There is no way out of it: voltage tweaking will destroy your power efficiency, simple as that.
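The physics here is the standard CMOS dynamic-power approximation, P ≈ C · f · V², so at a given clock the power grows with the square of the voltage. A quick sketch with hypothetical numbers (925 MHz at ~1.175 V is a typical reference 7970 operating point; the 1200 MHz / 1.30 V overclock is illustrative only):

[CODE]
# Dynamic power scales roughly as P ~ C * f * V^2 (leakage ignored,
# which only makes the real-world hit worse at higher voltage).
def relative_power(clock_ratio, voltage_ratio):
    """Power draw relative to stock for given clock and voltage ratios."""
    return clock_ratio * voltage_ratio ** 2

# Hypothetical overvolted OC: 925 -> 1200 MHz core, 1.175 -> 1.30 V.
print(relative_power(1200 / 925, 1.30 / 1.175))  # ~1.59x stock power
[/CODE]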

Now we can argue about these things forever, but I think it's safe to say we are never going to agree on the details. The thing is, we are agreeing on the big question: AMD is the thing to get right now. So let's not go any further into it, ok?
 
I like seeing how fanboys argue over 680 vs 7970 when the frame rates are within 5%.

This is a gross oversimplification, to be honest. The HD 7970 OC editions can be purchased for $369 after rebate, and they are overall faster than a GTX 680, come with more RAM, and are the better card at higher resolutions.

http://www.newegg.com/Product/Product.aspx?Item=N82E16814202008

So it isn't as simple as stating it's only 5% faster. You need to factor in that it is also ~$70 cheaper to purchase and comes with a great games bundle. Faster and cheaper = better price/perf. When the GTX 680 was released it had better price/perf, and it was rightly received as a great card at a (then) great price. Now AMD have turned the tables and the kudos goes to them; unfortunately, there are too many brand loyalists on both sides who refuse to give credit where it is deserved.
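To make that price/perf arithmetic explicit, here's a toy calculation using only the figures claimed in this thread ($369 after rebate, ~$70 cheaper, ~5% faster); forum numbers, not measured data:

[CODE]
# Relative performance per dollar, from the figures quoted above.
def perf_per_dollar(relative_perf, price_usd):
    return relative_perf / price_usd

hd7970 = perf_per_dollar(1.05, 369)  # ~5% faster, $369 after rebate
gtx680 = perf_per_dollar(1.00, 439)  # baseline, ~$70 more

print(hd7970 / gtx680 - 1)  # ~0.25, i.e. ~25% better perf/$ for the 7970
[/CODE]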
 
So it isn't as simple as stating it's only 5% faster. You need to factor in that it is also ~$70 cheaper to purchase and comes with a great games bundle. Faster and cheaper = better price/perf. When the GTX 680 was released it had better price/perf, and it was rightly received as a great card at a (then) great price. Now AMD have turned the tables and the kudos goes to them; unfortunately, there are too many brand loyalists on both sides who refuse to give credit where it is deserved.

What you describe is a problem for AMD. They have to sell the far more expensive product, including a game bundle, for a lower price. Good business execution is something different. Their Q3 financial results were a nightmare, and even the GPU division did not do well in the last quarter. They had a profit, but revenue took a hit. We will see when Nvidia posts numbers, but my feeling is they are doing far better. BTW, AMD still has no answer for the high-quality GTX 690.
 
What you describe is a problem for AMD. They have to sell the far more expensive product, including a game bundle, for a lower price. Good business execution is something different. Their Q3 financial results were a nightmare, and even the GPU division did not do well in the last quarter. They had a profit, but revenue took a hit. We will see when Nvidia posts numbers, but my feeling is they are doing far better. BTW, AMD still has no answer for the high-quality GTX 690.

Your post is such BS I don't know where to begin. Seriously, dude, take a step back and read the utter bullshit you just typed. It doesn't matter how AMD is doing financially; this is purely and simply about which GPUs are the best bang for the buck. RIGHT NOW the AMD cards are better price/perf compared to their equivalent Nvidia cards.

How the hell is AMD's current financial position pertinent to how good the AMD GPUs are?

This is HardOCP, not the Financial Times. :rolleyes:
 
You will never, ever convince me of the logic behind clock-for-clock comparisons between two different architectures. That memory bandwidth BS also made zero sense to me. You don't seem to like including memory OC in your clock-for-clock point of view. Knowing the difference in stock memory bandwidths of Tahiti and Kepler, it's clear that those two respond very differently to memory OC. Not only that, but as a result of that difference in bandwidth, they also respond very differently to core OC when the bandwidth is held constant. Kepler will be much more bandwidth-restricted, which is not necessarily so when you take memory OC into account.
Kepler's lack of memory bandwidth can become restrictive, and much more quickly than on a 7970. That's a disadvantage. You don't need to complicate it further or be purposefully dense to try to negate or deflect that point.
Wanna know the problem with your raw data? GTX 680 clocks are all over the place. There are many, many hwbot users who report the clocks you see on the first tab of GPU-Z. Those are not the actual clocks the card is running at; you need to check those in the sensors tab. Take the actual clocks, get an average out of those, and then talk about raw data.
The same can be said for the 7970, which also has boost. The great thing about averages is that they tend to minimize differences.
Then there is the fact that for Tahiti cards it is much more common to bench with a setup not suitable for 24/7 use. Take the average air-cooled 7970 owner: for benching he cranks the voltage to max, clocks to where they are stable at that max voltage, makes his bench run, and then turns the voltage and clocks back down. Take your average air-cooled 680 owner: he cranks the clocks up as high as they are stable, makes his bench run, and then goes on playing his games.
That's a baseless assumption on your part. Do you have any data to show that or are you now just making up things because your argument is so weak?
You can show it, but you don't seem to understand it. That XFX is overclocked with no voltage control. Overclocking with no overvolting results in exactly that: a minuscule power consumption increase. Just as with the 680 I showed you. Take your average 7970 and your average "raw data" overclock: on average you will need extra voltage to get there. That extra voltage will increase the power consumption roughly with the square of the voltage. There is no way out of it: voltage tweaking will destroy your power efficiency, simple as that.
And the point is that the 7970 can clock the same as the GTX 680 without touching the voltage control. If you want to go higher, you can give it voltage to keep going, which is another advantage the 7970 has over the GTX 680. Again, not complicated.
Now we can argue about these things forever, but I think it's safe to say we are never going to agree on the details. The thing is, we are agreeing on the big question: AMD is the thing to get right now. So let's not go any further into it, ok?
What has happened is that you have no rebuttals or data to counter my points, so you want to bow out. Just say that instead of this "let's agree to disagree" nonsense.

Your post is such BS I don't know where to begin. Seriously, dude, take a step back and read the utter bullshit you just typed. It doesn't matter how AMD is doing financially; this is purely and simply about which GPUs are the best bang for the buck. RIGHT NOW the AMD cards are better price/perf compared to their equivalent Nvidia cards.

How the hell is AMD's current financial position pertinent to how good the AMD GPUs are?

This is HardOCP, not the Financial Times. :rolleyes:
Lol, amen.
 
Then there is the fact that for Tahiti cards it is much more common to bench with a setup not suitable for 24/7 use. Take the average air-cooled 7970 owner: for benching he cranks the voltage to max, clocks to where they are stable at that max voltage, makes his bench run, and then turns the voltage and clocks back down. Take your average air-cooled 680 owner: he cranks the clocks up as high as they are stable, makes his bench run, and then goes on playing his games.



You can show it, but you don't seem to understand it. That XFX is overclocked with no voltage control. Overclocking with no overvolting results in exactly that: a minuscule power consumption increase. Just as with the 680 I showed you. Take your average 7970 and your average "raw data" overclock: on average you will need extra voltage to get there. That extra voltage will increase the power consumption roughly with the square of the voltage. There is no way out of it: voltage tweaking will destroy your power efficiency, simple as that.

Where are you pulling this rubbish from? The only thing that I sort of agree with is the clock-for-clock argument. But your other arguments are really weak, I mean, seriously? Just read K6's reply to you above. I was going to use nearly the exact same words he did.
 
Your post is such BS I don't know where to begin. Seriously, dude, take a step back and read the utter bullshit you just typed. It doesn't matter how AMD is doing financially; this is purely and simply about which GPUs are the best bang for the buck. RIGHT NOW the AMD cards are better price/perf compared to their equivalent Nvidia cards.

How the hell is AMD's current financial position pertinent to how good the AMD GPUs are?

This is HardOCP, not the Financial Times. :rolleyes:

Brilliant post!! :)
 
Your post is such BS I don't know where to begin. Seriously, dude, take a step back and read the utter bullshit you just typed. It doesn't matter how AMD is doing financially; this is purely and simply about which GPUs are the best bang for the buck. RIGHT NOW the AMD cards are better price/perf compared to their equivalent Nvidia cards.

How the hell is AMD's current financial position pertinent to how good the AMD GPUs are?

This is HardOCP, not the Financial Times. :rolleyes:

Because people like winners and not losers. Nvidia has a lot of other stuff going for them. A great brand is definitely one of them. This justifies a higher price, plain and simple, and the majority, whether you like it or not, seems to agree, as the numbers clearly show.
 
Because people like winners and not losers. Nvidia has a lot of other stuff going for them. A great brand is definitely one of them. This justifies a higher price, plain and simple, and the majority, whether you like it or not, seems to agree, as the numbers clearly show.
No, brainless sheep without a pair tie themselves to a company because they lack a personality and a functional social life. It's a video card company; move on.
 
Because people like winners and not losers. Nvidia has a lot of other stuff going for them. A great brand is definitely one of them. This justifies a higher price, plain and simple, and the majority, whether you like it or not, seems to agree, as the numbers clearly show.

LOL, Nvidia is not a great brand for GPUs; how can it be in an industry populated by just two manufacturers in the gaming GPU market? I guarantee you that if you ask the average person in the street, they won't have heard of Nvidia. This isn't Ford vs BMW we are talking about; this is very minor, small-fry gaming GPUs that the vast majority of people will never have heard of. How many people outside of gamers actually even buy discrete GPUs? In fact, the enthusiast gamer GPU market is so small only Nvidia and AMD care about it. Think about that when you spout the "good brand" drivel.
 
So it took almost a year for AMD to unleash the potential of the HD 79XX series? Good thing I didn't waste my time and bought my 680 GTX at launch instead.
 
Did you mean that ONE game where the AMD 7970 wins against the GTX 680? Sorry, which ONE game did you mean again?


Nvidia is on the 310.33 drivers; are you living in the past? The 680 cards just got a boost. BTW, how is that hairdryer and portable heater doing in your computer today? The heater must be nice in the winter, but that hairdryer must get annoying all year round.
 
So it took almost a year for AMD to unleash the potential of the HD 79XX series? Good thing I didn't waste my time and bought my 680 GTX at launch instead.
So you bought a card with identical performance and lost out on a year of Bitcoin mining profits instead. Strong work.

Honestly, it's interesting to see the fanboys downplay the performance boost. Competition is good for all consumers, even when it hits your favorite company in the nuts.
 
Kepler's lack of memory bandwidth can become restrictive, and much more quickly than on a 7970. That's a disadvantage. You don't need to complicate it further or be purposefully dense to try to negate or deflect that point.

It is a disadvantage, and you maximize that disadvantage by not taking into account the possibility of OCing the memory on the 680. That of course fits your need to include everything pro-AMD and exclude everything pro-Nvidia.

The same can be said for the 7970, which also has boost. The great thing about averages is that they tend to minimize differences.

Seriously, stop talking nonsense. GPU-Z reports just one core clock for the 7970 GHz Edition, and it is the boosted clock. How about getting your facts straight before attacking my arguments?

That's a baseless assumption on your part. Do you have any data to show that or are you now just making up things because your argument is so weak?

Data? You mean like the age-old fact that benchmarking is all about suicide runs with settings not suitable for 24/7 usage? You can say whatever bad things you like about Nvidia's decision not to allow voltage tweaking on Kepler, but the one thing it means is that suicide runs do not happen with Keplers. They do with Tahitis. Deny it all you want, but I've made my fair share of suicide runs with my 7970s and I know perfectly well that it is not the same thing as getting to a balanced and stable 24/7 OC. How exactly do you do that with a locked voltage?

And the point is that the 7970 can clock the same as the GTX 680 without touching the voltage control. If you want to go higher, you can give it voltage to keep going, which is another advantage the 7970 has over the GTX 680. Again, not complicated.

And again, you are cherry-picking your data. You want everyone to know how great the 7970 overclocks with its unlocked voltage, and the next thing you do is tell them to ignore the effect overvolting has on the power consumption of a graphics card. When the facts are already on the side of AMD, why do you feel the need to distort the obvious truth? 1125/1575 are pretty much the clocks that most 7970s do on stock voltage. Some do more, 1150 MHz isn't too rare, but then there are cards that come in shy of 1100 MHz. 1200 MHz is very much in the overvolting category. One card making it to 1210 MHz (with a higher-than-original stock voltage of 1.2 V, mind you) doesn't change that.

What has happened is that you have no rebuttals or data to counter my points, so you want to bow out. Just say that instead of this "let's agree to disagree" nonsense.

You want to keep going? I'm all for it. I don't have an agenda to put forth and feel no personal need to argue with you, but keep posting stuff like this and I will for sure keep countering you.
 
So you bought a card with identical performance and lost out on a year of Bitcoin mining profits instead. Strong work.

Honestly, it's interesting to see the fanboys downplay the performance boost. Competition is good for all consumers, even when it hits your favorite company in the nuts.

Performance was identical when I bought my 680 GTX? Yeah right guy.
Right, I'm going to spend all my time mining. Wait for that next price plunge.

You're just bitter that it took AMD almost a year to realize the potential of their GAMING card.
 
It is a disadvantage, and you maximize that disadvantage by not taking into account the possibility of OCing the memory on the 680. That of course fits your need to include everything pro-AMD and exclude everything pro-Nvidia.
Where did I exclude anything? Don't try to make me into a fanboy like yourself. I'm stating an advantage which can come into play: the 7970 has much higher memory bandwidth, and that is a big advantage in some applications. That's a simple point; don't go ad hominem because you don't like it.
Seriously, stop talking nonsense. GPU-Z reports just one core clock for the 7970 GHz Edition, and it is the boosted clock. How about getting your facts straight before attacking my arguments?
And what about when boost is disabled while overclocking, which is what we're talking about in the first place? It seems like, once again, you don't have your facts straight.
Data? You mean like the age-old fact that benchmarking is all about suicide runs with settings not suitable for 24/7 usage? You can say whatever bad things you like about Nvidia's decision not to allow voltage tweaking on Kepler, but the one thing it means is that suicide runs do not happen with Keplers. They do with Tahitis. Deny it all you want, but I've made my fair share of suicide runs with my 7970s and I know perfectly well that it is not the same thing as getting to a balanced and stable 24/7 OC. How exactly do you do that with a locked voltage?
Again, you're making up anecdotal nonsense because your arguments are completely transparent and baseless. I'm not the only one telling you this.
And again, you are cherry-picking your data. You want everyone to know how great the 7970 overclocks with its unlocked voltage, and the next thing you do is tell them to ignore the effect overvolting has on the power consumption of a graphics card. When the facts are already on the side of AMD, why do you feel the need to distort the obvious truth? 1125/1575 are pretty much the clocks that most 7970s do on stock voltage. Some do more, 1150 MHz isn't too rare, but then there are cards that come in shy of 1100 MHz. 1200 MHz is very much in the overvolting category. One card making it to 1210 MHz (with a higher-than-original stock voltage of 1.2 V, mind you) doesn't change that.
The thing is, you're trying to make this black and white because that's what Nvidia has restricted you to, and of course you're going to bat for them. With AMD you have a choice: you can run a nice overclock at lower power consumption, or you can boost the voltage and shoot for the moon. You can't do that with a Kepler-based card, period. And furthermore, at the same clocks, the Kepler card is slower. No matter how you try to deflect or change parameters, you can't escape the fact that Kepler is a gimped GPU.
You want to keep going? I'm all for it. I don't have an agenda to put forth and feel no personal need to argue with you, but keep posting stuff like this and I will for sure keep countering you.
You're not countering anything; just stop. You haven't provided a shred of evidence to prove your points, and you rely on anecdotal hearsay and your opinion (which, may I remind you, counts for nothing) as evidence. Keep going; you're digging your own hole and this is fun to watch.
Performance was identical when I bought my 680 GTX? Yeah right guy.
Right, I'm going to spend all my time mining. Wait for that next price plunge.

You're just bitter that it took AMD almost a year to realize the potential of their GAMING card.
Wait, you come into this thread to stamp your feet and I'm the one that's bitter? Sorry you don't like being called out on your behavior, here's a tissue.
 
Performance was identical when I bought my 680 GTX? Yeah right guy.
Right, I'm going to spend all my time mining. Wait for that next price plunge.

You're just bitter that it took AMD almost a year to realize the potential of their GAMING card.

I bought an HD 7970 when they were released in January; right from the off it was able to reach an 1100 MHz core clock at stock voltage. That might seem like a moderate overclock, but this is roughly 20% over the stock 7970's 925 MHz. I call this an excellent overclock, and at those speeds it was faster than a GTX 680, which came out almost 3 months later.

I also have a GTX 680, bought in April, and it is a great card; it overclocks to ~1300 MHz core, which is exceptional. At stock clocks the GTX 680 is the better card (vs the non-GHz-edition 7970). The problem is that even though it gets 1300 on the core, it still only trades blows with my 7970 at ~1150 core.

So in my experience the HD 7970 has always given roughly equal performance compared to a GTX 680. If you have a GTX 680, then be happy that you got a great card, and by waiting you got it at a better price than if you had bought a 7970 on release. But don't make the mistake of assuming the GTX 680 is the faster card. The honest truth is they trade blows, and with the latest drivers the HD 7970 is winning more often than not.
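Taking those anecdotal clocks at face value, the implied per-clock gap is easy to work out (one person's two cards, not benchmark data):

[CODE]
# If a ~1300 MHz GTX 680 only trades blows with a ~1150 MHz HD 7970,
# the implied per-clock throughput advantage for Tahiti is:
print(1300 / 1150 - 1)  # ~0.13, i.e. ~13% more work per MHz
[/CODE]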

When the GTX 680 was released it was cheaper, and at stock clocks it was better than the HD 7970, and it quite rightly got praise for taking the price/perf crown. Now the HD 7970 GHz is faster at stock settings and is cheaper, so it has regained the price/perf crown. Why is it hard for some people to accept that AMD has the price/perf crown as of this date?

I suppose the high-end Nvidia users have always been safe in the knowledge that they had the best card available, money no object. Now that this is no longer true, they come out to reaffirm their superiority in the most idiotic ways possible.

  • The GTX 680 uses less power, even though this never seemed an issue for Nvidia fans last gen.
  • Nvidia is a better brand name, so they are worth the premium. If that myth makes you sleep better at night then you need to re-evaluate your priorities in life.
  • AMD drivers suck. Strangely, not in my experience; both drivers do the job they were designed to do. I especially love it when people say this and they haven't owned an ATI/AMD card for almost 10 years.
  • AMD are in serious financial difficulty. So we should judge the performance of the 7970 based on that metric? How idiotic is that.
  • Nvidia multi-GPU is far superior. This is actually partly true, though SLI is not without its problems. Also consider that the vast majority use single GPUs only, so this shouldn't be a problem.

Every one of these "reasons" is not a reason at all; it's an excuse for Nvidia-biased fans to justify their purchase. Thankfully most people are realists and will look at price/perf and go with what is best for their wallet.
 
Performance was identical when I bought my 680 GTX? Yeah right guy.
Right, I'm going to spend all my time mining. Wait for that next price plunge.

You're just bitter that it took AMD almost a year to realize the potential of their GAMING card.

He is right; the performance was nearly identical between the two cards around April. I purchased the 680 GTX in April only because it was cheaper.

If I was buying now I would buy the 7970 because it now offers better performance for less. And the three games thrown in, love them or hate them, they are still a nice add-on to the deal.

And why would any 7970 owners be bitter? That's about the stupidest thing I have read in this thread so far. They are getting a fairly substantial upgrade for free; what's not to like? Seriously?
 
Well, I had a GeForce Ti 4200, which was great; the ATI X800 XL was a great card, never had a problem with it; the 8800 GTX had driver problems, I got blue screens in a couple of games; and the last one was a 4870, loved that card. Now I'm going with a 7950.
 
LOL, Nvidia is not a great brand for GPUs; how can it be in an industry populated by just two manufacturers in the gaming GPU market? I guarantee you that if you ask the average person in the street, they won't have heard of Nvidia. This isn't Ford vs BMW we are talking about; this is very minor, small-fry gaming GPUs that the vast majority of people will never have heard of. How many people outside of gamers actually even buy discrete GPUs? In fact, the enthusiast gamer GPU market is so small only Nvidia and AMD care about it. Think about that when you spout the "good brand" drivel.

We are talking about gamers. And gamers know the brand "Nvidia" very well, because after 3dfx stopped existing, Nvidia became the synonym for gaming, especially enthusiast gaming. And the latter is a good selling factor for performance, mid-range, and low-end gaming. Simple market mathematics. It does not matter how you want to spin it: the majority of customers prefer Nvidia, plain and simple. Live with it. It's AMD's fault if their brand recognition is as lousy as it is.
 
We are talking about gamers. And gamers know the brand "Nvidia" very well, because after 3dfx stopped existing, Nvidia became the synonym for gaming, especially enthusiast gaming. And the latter is a good selling factor for performance, mid-range, and low-end gaming. Simple market mathematics. It does not matter how you want to spin it: the majority of customers prefer Nvidia, plain and simple. Live with it. It's AMD's fault if their brand recognition is as lousy as it is.

Nvidia has always had around 60% of the discrete GPU market, with AMD/ATI at around 40%, so I am aware of how simple market mathematics work. Please bear in mind I was not referring to the fact that the majority prefer Nvidia, but to the statement that Nvidia is a "great brand" that "justifies the higher price". Unless I am wrong, the implication was that Nvidia cards = higher quality than AMD cards, hence the price premium. My analogy of BMW vs Ford is pertinent because, having owned both types of cars, it is clear BMW has better build quality and is made of more upmarket materials. In this case the brand premium pricing is almost justified; this is why the BMW brand commands a premium compared to a Ford. The same is not true of Nvidia vs AMD (or ATI). Nvidia cards are not made of better parts or better materials, nor do they have better build quality. All you are paying for is brand-name recognition.

Correlation does not imply causation. Just because 60% of people assume Nvidia = higher quality does not make it true. Yes, Nvidia does command a slightly better brand image among gamers, but that still does not make Nvidia = higher quality a fact.

Don't make the assumption that brand image always = higher quality.
 
No, brainless sheep without a pair tie themselves to a company because they lack a personality and a functional social life. It's a video card company; move on.

You are being a bit harsh on AMD fans. :p
 
So it isn't as simple as stating it's only 5% faster. You need to factor in that it is also ~$70 cheaper to purchase and comes with a great games bundle. Faster and cheaper = better price/perf. When the GTX 680 was released it had better price/perf, and it was rightly received as a great card at a (then) great price. Now AMD have turned the tables and the kudos goes to them; unfortunately, there are too many brand loyalists on both sides who refuse to give credit where it is deserved.

This is pretty much my point as well. I simply don't care about brand, though I must admit that the activities of Nvidia fanboys on this forum try my patience. I acknowledged the 680's superiority when it was warranted. It no longer is.
 
Where did I exclude anything? Don't try to make me into a fanboy like yourself. I'm stating an advantage which can come into play: the 7970 has much higher memory bandwidth, and that is a big advantage in some applications. That's a simple point; don't go ad hominem because you don't like it.

Maybe there, where you went ballistic with your clock-for-clock comparisons? You know, the ones where you compare core clock to core clock and exclude memory OC? Yeah, there. The 7970 has the bandwidth advantage, which in effect means that if you exclude memory OC from the OC equation, you maximize that advantage. That is where your selective comparison goes south.

And what about when boost is disabled while overclocking, which is what we're talking about in the first place? It seems like, once again, you don't have your facts straight.

Seriously, dude. Get. Your. Facts. Straight. The GTX 680 reports a different clock than it actually runs at. The 7970 GE does not: what you see on the first tab of GPU-Z is what you see on the sensors tab (except when throttling). See, that is a fact, and I got it straight. You did not. Whether or not boost is active has zero effect on that fact.

Again, you're making up anecdotal nonsense because your arguments are completely transparent and baseless. I'm not the only one telling you this.

My facts are your anecdotal nonsense, I get it. But if you're going to deny the concept of suicide runs when benchmarking, then you are the one making up nonsense. As it happens, you didn't answer my question, either: how do you do a suicide run with a voltage-locked card?

The thing is, you're trying to make this black and white because that's what Nvidia has restricted you to, and of course you're going to bat for them. With AMD you have a choice: you can run a nice overclock at lower power consumption, or you can boost the voltage and shoot for the moon. You can't do that with a Kepler-based card, period. And furthermore, at the same clocks, the Kepler card is slower. No matter how you try to deflect or change parameters, you can't escape the fact that Kepler is a gimped GPU.

Nvidia doesn't restrict me in the slightest and doesn't make me do anything. I'm a consumer who buys whatever makes me tick; right now that happens to be 7970 CF, so you can save those fanboy accusations. I have all the same choices that you do. Getting similar clocks with no voltage tweaking is better than getting them with voltage tweaking. If 7970s ran at 1250 MHz on stock voltage, then it would be a clear-cut case for overclocking. They don't, and it's not. With a 7970, you get the choice of running stock voltage with about the same performance as a 680 and slightly higher power consumption, or you can voltage tweak, get better performance, and use way more power. With a 680, your choices are very limited, but you do get those 1250 MHz core clocks with practically no hit to power consumption. Why this is impossible for you to admit, I do not know.

You're not countering anything; just stop. You haven't provided a shred of evidence to prove your points, and you rely on anecdotal hearsay and your opinion (which, may I remind you, counts for nothing) as evidence. Keep going; you're digging your own hole and this is fun to watch.

I've provided tons of evidence, reaching as far as the laws of physics, such as the fact that a voltage increase means roughly quadratic growth in power consumption. You have provided some statistical data on which you are not willing to accept any critique. Combined with your selective include/exclude data processing, it is clear that you're not too concerned with the real strengths and weaknesses of these GPUs.

Wait, you come into this thread to stamp your feet and I'm the one that's bitter? Sorry you don't like being called out on your behavior, here's a tissue.

Providing an opinion that differs from yours is stamping one's feet? That's classy.
 
And why would any 7970 owners be bitter? That's about the stupidest thing I have read in this thread so far. They are getting a fairly substantial upgrade for free; what's not to like? Seriously?

You know how Nvidia fanboys like to trash ATI, now AMD, for their drivers? How about you affix a sticker to your card: in about a year, performance will be what it should have been.
 
Wait, you come into this thread to stamp your feet and I'm the one that's bitter? Sorry you don't like being called out on your behavior, here's a tissue.

This isn't the AMD GPU sub forum. You're such a fanboy you can't even take a joke. From the [H] review:
NVIDIA has also surprised us by providing an efficient GPU. Efficiency is out of character for NVIDIA, but surely we welcome this cool running and quiet GTX 680. NVIDIA has delivered better performance than the Radeon HD 7970 with a TDP and power envelope well below that of the Radeon HD 7970. NVIDIA has made a huge leap in efficiency, and a very large step up from what we saw with the GeForce GTX 580.

NVIDIA has raised the performance metric at the $499 price point. This is what we expect out of next generation video cards, moving efficiency forward as well as performance at a given price point. The $500 segment just became a lot more interesting, and will give you more performance now than ever before.

We've given many awards to Radeon HD 7970 video cards, and those were well deserved. Now that the GeForce GTX 680 is here, the game changes again, and there is no doubt in our minds that this video card has earned HardOCP's Editor's Choice Gold Award. NVIDIA’s GeForce GTX 680 has delivered a more efficient GPU, lower in TDP, that is better or competitive in performance, at a lower price.

The GeForce GTX 680 truly is a win right out of the gate. It has been a long time since we've said that about a new GPU from NVIDIA, and it is about time the company got something right the first time! Perhaps the stigma of power hungry, hot, inefficient GPUs is gone thanks to the GeForce GTX 680? NVIDIA needed to build its "green" reputation back up with hardware enthusiasts and gamers, and the GeForce GTX 680 is an excellent start. Let’s just hope we see NVIDIA’s next flagship GK110 do the same.

Maybe you should return to your basement to mine for coins. If I buy another AMD card I'll remember to affix a sticker to it that reads: "drivers will be fixed, I mean improved, right before the next gen of cards is released."

Since I'm looking at buying a Windows laptop I'll make sure to avoid AMD:

We’re not done with the AMD Enduro driver story, of course. With this release, AMD is starting on the road to delivering reference drivers that will in theory work with all Enduro (PowerXpress 4.0 or later) laptops. In practice, there are still some teething problems, and long-term AMD needs to get all the kinks straightened out. They’re aware of issues with other Enduro laptops and the 12.9 Beta drivers (they’re beta for a reason, right?), and hopefully the next major release after the Hotfix will take care of the laptop compatibility aspect. I’ve stated before that AMD’s Enduro feels like it’s where NVIDIA was with Optimus about two years (2.5 years) back, and that continues to be the case. The first public Enduro beta driver is a good place to start, and now AMD just needs to repeat and refine the process a few more times. Hopefully by the end of the year we’ll see a couple more driver updates and compatibility will improve.
 
Off-topic nonsense

You're digging a pretty deep grave for yourself and your position. We're talking about the market landscape and price/performance right now.

Purchasing a 680 over a 7970, right now, means exactly the following:


1. You're heartbroken about 50 watts at most at full load.
2. You want 1 GB less RAM.
3. You want reduced performance (take a look at post #126 of this thread).
4. You want to pay, at a minimum, $50 more.
5. You want castrated compute performance.


Respond to these simple facts. I'm waiting.
 
You're digging a pretty deep grave for yourself and your position. We're talking about the market landscape and price/performance right now.

Why don't you go back and read my first post? I bet you're happy that AMD finally released drivers that let the card shine. Wow. It only took AMD almost a year. Ironically, right before the next product refresh. You can then restart your wait for performance drivers.

I don't have to talk about the quality of AMD's drivers. They do a pretty damning job themselves. Hey, Enduro owners, keep waiting for those hotfixes. They will come soon, promise.
 
You know how nVidia fanboys like to thrash ATI, now AMD, for their drivers? How about you affix a sticker to your card: in about a year performance will be what it should have been.

Wow, just when I thought you couldn't say anything stupider than your last post, you go and top it with this one!! Look at the lifespan of any card: its performance is better at the end than at release. Heck, look at the first Fermi; Nvidia couldn't fix that crap with drivers, they had to release a whole new card.

And you are right, I will do that: I will affix a sticker to my 680 GTX saying exactly that. And just in case you missed it, I HAVE A 680 GTX!!

Just a few other points: it's 10 months, not a year. Also, there is no sign of the next round of cards from either company; it could be 6 months before they are released. I read somewhere that it will be Q2 2013. So go away with your BS statement "right before the next product refresh".

And lastly, there was nothing wrong with the performance of the 7970 at launch. It traded blows with the 680 GTX when that card launched. Now 7970 owners are just getting icing on the cake, with extra performance for nothing. And I am sure those same owners will hope the next drivers bring more improvements.

If people are deciding between the 7970 and the 680 GTX right now, and they don't need 3D, then it's a no-brainer: 7970 all the way, faster and cheaper.
 
Wow, just when I thought you couldn't say anything stupider than your last post, you go and top it with this one!! Look at the lifespan of any card: its performance is better at the end than at release. Heck, look at the first Fermi; Nvidia couldn't fix that crap with drivers, they had to release a whole new card.

And just when I thought you couldn't say anything more stupid than your last post, you go and top it. Add a disclaimer to the next AMD release: "Will perform as it should in about 11 months. Thanks for the money, suckers."

OP, skip the next AMD release for at least 10 months. Buy it way cheaper and don't finance AMD's driver team. They're still pretty busy trying to fix the Enduro line.
 
And just when I thought you couldn't say anything more stupid than your last post, you go and top it. Add a disclaimer to the next AMD release: "Will perform as it should in about 11 months. Thanks for the money, suckers."

OP, skip the next AMD release for at least 10 months. Buy it way cheaper and don't finance AMD's driver team. They're still pretty busy trying to fix the Enduro line.
Wait, AMD unlocks more performance in their new architecture and you try to negatively spin it as performance they should have had at launch. Nvidia is late to the game and tries to respond WITH THE EXACT SAME THING and not a peep.

Get out of here with your hypocritical fanboy bullshit.

EDIT: Looks like [H] is holding strong in the no-bullshit zone and banned a certain Nvidia website. :D
 
Since I'm looking at buying a Windows laptop I'll make sure to avoid AMD:

LOL at this.

Enduro isn't great at the moment, but Optimus was the buggiest pile of crap for over a year. And besides, the article is out of date; the 12.11 beta driver has brought a lot of Enduro fixes.
 