Kepler in March/April?

@Viper-X, numbers are from TPU launch reviews for all cards in question. 1920x1200. That's pretty representative to me. You have a better site or settings that you consider more representative?

And now we have clowns claiming it's 60% faster, lol. Amusing.
 
@ICDP, are you arguing with yourself? When did I say the 7970 is a crap card? You should stick to numbers. My point stands - it's a lower increase than other architecture and process migrations, hence my surprise at the hype. You can harp on the 480 all you want, but it's funny that it was crippled and was still a bigger jump than the 7970 is! :)

@Viper-X, read my post. I already said exactly where the numbers are from.

Your original table, made-up BS to suit an agenda, was as follows:

Let's see, fastest card of one full generation to the next:

X1950XTX->8800GTX: +92%
8800GTX->GTX280: +63%
GTX280->GTX480: +51%
GTX580->HD7970: +30% ===> beast?

http://www.computerbase.de/artikel/...rten-evolution/3/#abschnitt_leistung_mit_aaaf

Standards sure are falling :p


You stated the 7970 was a lower-than-usual jump in speed for a process migration. You tried to demonstrate this by saying it was only 30% faster than the GTX 580. What I pointed out was that the HD 7970 should be compared to the HD 5870, as that was the true process jump for AMD. You can't be biased and include the GTX 580 in your charts while ignoring AMD cards such as the HD 5870 or HD 6970 completely.

I and others have shown you that the actual process migration from the 40nm to 28nm AMD parts falls well within the expected jump in speeds.
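
To make the arithmetic explicit: per-step gains multiply rather than add, so spanning extra cards compounds the headline number. A rough sketch of that compounding (placeholder percentages only, not figures from any review):

[code]
# Per-step gains compound multiplicatively across generations.
# The step percentages below are placeholders for illustration only,
# not numbers taken from any review.

def chain_gains(step_gains_pct):
    """Combine per-step gains (in %) into one overall gain (in %)."""
    total = 1.0
    for g in step_gains_pct:
        total *= 1.0 + g / 100.0
    return (total - 1.0) * 100.0

# Two hypothetical ~20% refresh steps plus one ~30% jump already
# compound to roughly an 87% overall increase over the starting card.
print(round(chain_gains([20, 20, 30]), 1))  # 87.2
[/code]

So whether the same stretch of history comes out looking like "+30%" or "+70%" depends heavily on how many intermediate cards sit inside the span being measured.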
 
@Viper-X, numbers are from TPU launch reviews for all cards in question. 1920x1200. That's pretty representative to me. You have a better site or settings that you consider more representative?

And now we have clowns claiming it's 60% faster, lol. Amusing.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_480_Fermi/32.html
Still showing 35% for the 280 =p (and really we should be using the GTX 285).

Plus, why are you so stuck on TPU? How about using the [H] review? :p We are on the [H] forums after all, and at least we can be sure everything was tested in a fashion that represents what YOU will experience playing games with a card, vs. whatever TPU uses (I'm not sure if it's a canned run-through or actual gaming).
 
@ICDP: Nope my original post was asking why people are so hyped for a 30% bump. All your subsequent beating around the bush was just to distract from the question :) Don't get me wrong. I'm not asking you not to be impressed. Just expressing my surprise at the whole thing. At least we agree on the numbers, if not their significance.

@Viper, [H] reviews are too inconsistent and subjective to be of any use in comparisons across generations. TPU, computerbase.de and probably Anand are the best in that regard. Those sites also include a much larger and diverse set of games.
 
Anybody know what the average performance delta is between the 6970 and the 7970? Wouldn't that give you the correct performance jump between old and new technologies? All these freaking percentages people are throwing around are making me dizzy! :p
 
Anybody know what the average performance delta is between the 6970 and the 7970? Wouldn't that give you the correct performance jump between old and new technologies? All these freaking percentages people are throwing around are making me dizzy! :p

It is around 40% faster on average.
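
If anyone wants to sanity-check an "X% faster on average" figure themselves, one reasonable way is to average the per-game ratios (a geometric mean) rather than the raw percentages. A quick sketch with made-up FPS numbers as placeholders, not data from any review:

[code]
# Turning per-game FPS into an average "X% faster" figure.
# The FPS values below are made-up placeholders, not review data.
from math import prod

old_fps = [42.0, 61.0, 55.0, 90.0]   # hypothetical older-card results
new_fps = [60.0, 82.0, 78.0, 120.0]  # hypothetical newer-card results

ratios = [n / o for n, o in zip(new_fps, old_fps)]

# Geometric mean of the per-game ratios keeps one very high-FPS game
# from dominating the average.
geo_mean = prod(ratios) ** (1.0 / len(ratios))
print(f"average delta: {(geo_mean - 1.0) * 100:.1f}%")  # roughly 38% here
[/code]

Sites aggregate differently (some average raw FPS, some average percentages, some use different game lists), which is part of why the same pair of cards can land on noticeably different headline numbers depending on where you look.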
 
@ICDP: Nope my original post was asking why people are so hyped for a 30% bump. All your subsequent beating around the bush was just to distract from the question :) Don't get me wrong. I'm not asking you not to be impressed. Just expressing my surprise at the whole thing. At least we agree on the numbers, if not their significance.

@Viper, [H] reviews are too inconsistent and subjective to be of any use in comparisons across generations. TPU, computerbase.de and probably Anand are the best in that regard. Those sites also include a much larger and diverse set of games.

I don't find that to be true. If anything, [H] uses the same system to test all the cards present as well as the latest drivers, so when you have a 6970-7970 comparison, this is really the ONLY way to look at it, whereas TPU, ComputerBase.de and Anand, for example, most likely use numbers from the last time they tested the 6970, with older drivers. I find that to be far less consistent. And again you've completely ignored the other questions, like where are you getting 51% from :) and why not use the GTX 285?
 
@Viper-X, numbers are from TPU launch reviews for all cards in question. 1920x1200. That's pretty representative to me. You have a better site or settings that you consider more representative?

And now we have clowns claiming it's 60% faster, lol. Amusing.

Take a look at the [H]ard review, fanboy. You're absolutely flagrantly wrong. Your numbers are bullshit.

Fact: 7970 overclocked is 40-60% faster than 580 overclocked. http://hardocp.com/article/2012/01/09/amd_radeon_hd_7970_overclocking_performance_review/7

You: hurr hurr 30%


Of course we all know the real reason your little fanboy soul is crying tears of no-longer-top-of-the-line sorrow: that 580 in your sig.
 
I don't find that to be true. If anything, [H] uses the same system to test all the cards present as well as the latest drivers, so when you have a 6970-7970 comparison, this is really the ONLY way to look at it, whereas TPU, ComputerBase.de and Anand, for example, most likely use numbers from the last time they tested the 6970, with older drivers. I find that to be far less consistent.

[H] tests only a few games in each review and has no convenient way of aggregating data from multiple reviews. I don't regard any one site as gospel and [H] certainly isn't at the top of that list. I read all sites' reviews and decide who's reliable based on their track record. As far as I know TPU and Anand keep their test systems stable. Computerbase may be guilty of reuse.

From TPU: "All video card results were obtained on this exact system with the exact same configuration."
http://www.techpowerup.com/reviews/AMD/HD_7970/6.html

And again you've completely ignored the other questions, like where are you getting 51% from :) and why not use the GTX 285?

No, I just don't like repeating myself when people decide not to read :) The 285 is a slightly overclocked 280; it tells me nothing about the transition to 65nm. I really don't understand why you guys are making such a simple thing so complicated. Just take the transition cards for each architecture and/or process migration and go from there. Ignoring refreshes simply makes the comparison easier when evaluating the 7970 as a transition card.

The only point of contention is the 480 - it's obviously a real product but with a cut down chip. Either way I'm fine with using the 480 as nVidia's 40nm entry. So that leaves 65nm (280, 4870) -> 40nm (480, 5870) -> 28nm (???, 7970). Pretty straightforward. Now just run the numbers. I used TPU, you can use whatever site you feel comfortable with.

Btw, the 51% is right there in the computerbase link that I included in my post. At least read my entire post before asking me redundant questions :)
 
Here is your post that prompted my first reply to you.

You just have low expectations. The 580 is 70% faster than a 280. Why should we start thinking 30% improvements are spectacular? Personally, if the big boy Kepler isn't over 2x the speed of a 580 I'll be disappointed. As you said they fucked up perf/watt with Fermi so I'm inclined to believe them when they say they've improved things in that area. Your numbers are based on the assumption that Kepler perf/watt will continue to be shit....

You posted stating that the HD 7970 was "over hyped" because it was only 30% faster than a GTX 580, ergo it was unspectacular. To prove this point you posted that the GTX 580 was 70% faster than the GTX 280. Unfortunately you neglected to include TWO Nvidia products (GTX 285 and GTX 480) between the GTX 280 and GTX 580, which is where your 70% number comes from. That is why I posted the following.

Going by that logic the HD 7970 is ~70% faster than the HD 5870; you can't just skip cards and pretend they never existed. I would not expect over 2x the performance; 60-70% would be a good return IMHO.

Now looking at your new post.

The only point of contention is the 480 - it's obviously a real product but with a cut down chip. Either way I'm fine with using the 480 as nVidia's 40nm entry. So that leaves 65nm (280, 4870) -> 40nm (480, 5870) -> 28nm (???, 7970). Pretty straightforward. Now just run the numbers. I used TPU, you can use whatever site you feel comfortable with.

You have come full circle and have agreed with my reason for disagreeing with you in the first place! Namely, that you cannot just skip cards when measuring the performance delta between top-end GPU releases; full process migrations, no matter how small, count, and only pure revisions should be omitted. One final point: your post still has some factual errors or omissions, despite being corrected numerous times. E.g., the GTX 285 is missing, and whether we like it or not it was on a new process for Nvidia (55nm instead of 65nm), and the HD 4870 was 55nm, not 65nm.
 
You have come full circle and have agreed with my reason for disagreeing with you in the first place! Namely, that you cannot just skip cards when measuring the performance delta between top-end GPU releases; full process migrations, no matter how small, count, and only pure revisions should be omitted.

No we have always agreed on that point. We just disagree on what's considered a top-end GPU release or full process migration. GTX 285's don't fit in that category in my book. In any case we've beaten this thread to death. It's all not going to matter in a few weeks anyway - there'll be a lot more interesting things to argue over ;)
 
No we have always agreed on that point. We just disagree on what's considered a top-end GPU release or full process migration. GTX 285's don't fit in that category in my book. In any case we've beaten this thread to death. It's all not going to matter in a few weeks anyway - there'll be a lot more interesting things to argue over ;)

I have to ask, why does the GTX 285 not fit? :p It was a process transition card too, 65nm -> 55nm.
 
No we have always agreed on that point. We just disagree on what's considered a top-end GPU release or full process migration. GTX 285's don't fit in that category in my book.

I thought the GTX 285 was a refresh part, so I was not counting it in my original assessment. But since it is on a different process it does fit in, no matter what your book says. :)

In any case we've beaten this thread to death. It's all not going to matter in a few weeks anyway - there'll be a lot more interesting things to argue over ;)

Do you mean when the HD 7950 is released? Because if it's Kepler you aren't talking a few weeks. In my book (and everyone else's) a few weeks is less than four; otherwise it counts as a month, which is a multiple of four weeks. I love how some people on these forums put a winky icon at the end of their posts, almost like you are trying to say you know a secret... I can do that as well ;)
 
Well, it is a new process *55nm*, but not a new architecture.

Agreed, but a new die-shrink process doesn't relate to the GTX 285 in the context you wanted it to. The 260/275/280/285 were all the same architecture, which is the point the other poster was trying to make. What is so difficult for you to understand about that? The 6970 is not the same die shrink or architecture as the 7970. Can't make it any simpler than that!
 
No we have always agreed on that point. We just disagree on what's considered a top-end GPU release or full process migration. GTX 285's don't fit in that category in my book. In any case we've beaten this thread to death. It's all not going to matter in a few weeks anyway - there'll be a lot more interesting things to argue over ;)

Unless Kepler is coming out in a few weeks, I'm not sure what's going to be more interesting than 7970 vs GTX 580. Clearly the 7970 is a big step up over the 6970, and it soundly beats the 580; not to mention that when you overclock both cards, the gap increases even more.

http://www.hardwareheaven.com/revie...-vs-radeon-6970-overclocked-introduction.html

http://www.hardocp.com/article/2012/01/09/amd_radeon_hd_7970_overclocking_performance_review/7

What does that leave really? Just wait for Kepler. It's the only thing that stands a chance of taking back the performance crown. The question is how much of a premium Nvidia will charge for its new part. Will we be seeing the return of $650 sticker prices, or will it come in at a slightly less outrageous $599 MSRP?

As for GTX 280 vs 285, didn't that provide about a 15-20% performance jump? The same thing happened with GTX 480 vs GTX 580, again about a 15-20% performance jump. But everyone praised the 285 and the 580 highly. So the 7970 has a huge performance jump over the 6970 and manages to beat the 580 by 30%-50%, and it's "no big deal"? Now that does not compute.
 
As for GTX 280 vs 285, didn't that provide about a 15-20% performance jump? The same thing happened with GTX 480 vs GTX 580, again about a 15-20% performance jump. But everyone praised the 285 and the 580 highly. So the 7970 has a huge performance jump over the 6970 and manages to beat the 580 by 30%-50%, and it's "no big deal"? Now that does not compute.

280 -> 285 and 480 -> 580 were not new architectures, so smaller gains are to be expected. Better comparison for 6970 -> 7970 would be 285 -> 480.
 
280 -> 285 and 480 -> 580 were not new architectures, so smaller gains are to be expected. Better comparison for 6970 -> 7970 would be 285 -> 480.

That's exactly how I would look at it as well. AMD and Nvidia just need to keep 'em coming! :)
 
280 -> 285 and 480 -> 580 were not new architectures, so smaller gains are to be expected. Better comparison for 6970 -> 7970 would be 285 -> 480.

This is the problem: what is included or excluded when measuring performance increases as a result of process migrations? Do you include refreshes or stopgap cards? For example, the GTX 580 is a refresh of the GTX 480, and the HD 6970 is a cut-down, stopgap card due to the cancellation of 32nm.

I feel that if we are looking at process-migration performance increases then you don't include stopgaps or refreshes. You also only measure against the previous generation from the same manufacturer.

For example, top-end Nvidia process jumps are GTX 285 -> GTX 480 -> Kepler

For AMD it is HD4870 -> HD 5870 -> HD 7970

On the other hand, if we are measuring performance increases from GPU to GPU then all cards from both manufacturers should be included.
 
Unless they shock everyone and this turns out to be the 570 replacement, with a higher-end 2+GB card to come later.

That's what I feel is going to happen. GTX 670, 2GB RAM, 780MHz core clock? Smells like a 660 Ti or GTX 670. It would blow everyone's mind if that was the 660 Ti that just underperformed against a 7970, and it would give Nvidia time to get the top end (GTX 680) ramped for April. :p
 
Well, I hope that 2GB card is not the GTX 680, or 780, or whatever it is going to be called. If it has a 256-bit memory bus it would be a huge epic fail, and I don't see it outperforming or performing close to the 7970. Seriously, what's with NVIDIA and their small VRAM? The GTX 480 and 580 had only 1.5GB when it should have been 3GB; at least ATI was smart enough to slap 2/3GB on when the 69xx and 79xx came out.
 
There is zero chance that Nvidia's next top-of-the-line card will have a 256-bit bus. The fastest memory speed they would likely have from the factory would be around 6000MHz. On a 256-bit bus that means they would only be able to just match the bandwidth of their current GTX 580.
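
For what it's worth, the back-of-the-envelope math behind that point, assuming roughly 6000MHz effective GDDR5 and comparing against the GTX 580's stock 384-bit bus at 4008MHz effective:

[code]
# Peak memory bandwidth = bus width in bytes * effective data rate.
# The 6000MHz figure is a rough assumption, not a confirmed spec.

def bandwidth_gb_s(bus_width_bits, effective_mhz):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gb_s(256, 6000))  # hypothetical 256-bit card -> 192.0
print(bandwidth_gb_s(384, 4008))  # GTX 580 at stock          -> ~192.4
[/code]

So a 256-bit card would need memory clocked well past 6000MHz just to pull ahead of the 580 on raw bandwidth.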
 
There is zero chance that Nvidia's next top-of-the-line card will have a 256-bit bus. The fastest memory speed they would likely have from the factory would be around 6000MHz. On a 256-bit bus that means they would only be able to just match the bandwidth of their current GTX 580.

Agreed... someone's feeding hellchip bogus info. That will definitely be GTS 650 or GTX 650 territory. If they were to release that as the flagship card then holy crap, we are all screwed as gamers and customers. But this almost sounds like those 28nm Fermi refresh cards they were planning to release.
 
The flagship card is still many months away, like Q3/Q4 away. The rumored card has always been believed to be the GTX 560 replacement.
The GTX 580 replacement will have a 512-bit bus and the GTX 570 replacement a 384-bit bus. These are all rumors until we see some hardware.

GK100 - GTX 680 - 670
GK104 - GTX 660

GK104 is rumored to be Nvidia's first release.
 
I just don't understand why they are letting the market get saturated with AMD's flagship. What are they waiting for?

IMO, they could just do a paper release like AMD did. They could let reviewers show us the benches for the new flagship, give us the price, and then do product launch once they've produced enough cards. At least people will know what they're waiting for at that point.

BTW, I really hope that their new cards have 3+ DP connectors and that Surround works like Eyefinity does; that way we won't be limited to 3-way SLI anymore.
 
I don't know; there is a pretty big difference between launching a product a couple of weeks before it is available and launching a product several months before it is available. IMO nVidia would get much more heat for doing this than for waiting and doing a proper launch.

The reason nVidia has usually had the fastest single GPU over the past several generations is that they have a different design philosophy than ATi. nVidia aims for the fastest single GPU, no compromises. To do that, they need to push die size and thermals to their limits. This is going to cause difficulties when a new die manufacturing process is getting ramped up. So it's no surprise that the 480 (and 680) took longer to get released than the ATi counterpart.

Given the stories you heard about ATi barely having enough cards to send out to reviewers during the 7970 launch, nVidia probably doesn't have enough 680 cards yet to do a paper launch even if they wanted to.

 
I just don't understand why they are letting the market get saturated with AMD's flagship. What are they waiting for?

IMO, they could just do a paper release like AMD did. They could let reviewers show us the benches for the new flagship, give us the price, and then do product launch once they've produced enough cards. At least people will know what they're waiting for at that point.

BTW, I really hope that their new cards have 3+ DP connectors and that Surround works like Eyefinity does; that way we won't be limited to 3-way SLI anymore.

You can't send reviewers samples of cards you don't have, not to mention they would have to cherry-pick the heck out of the GPUs to send out as sample cards.

As far as the 3+ DP goes, I doubt it is going to happen. Wouldn't surprise me if they stuck with the current requirement of using SLI. Nvidia and change don't really go hand in hand, and unless it's totally broken they won't change it. I have a feeling they will leave it to the AIBs to fix it again like Galaxy did, but at least now the AIBs know it works, so you'll probably see all of them carrying it on the non-reference cards.
 
I think NVidia will eventually allow 3 monitors per card, but it's not going to happen overnight.

http://www.anandtech.com/show/2937 It took AMD 3 years to make the 5870, and Eyefinity was included in the design from the beginning. [post=1035205858]When NVidia discovered in 2009[/post] that they wanted multi-monitor gaming, it was likely too late to change Kepler. I would not entirely discount the possibility, but Maxwell seems more likely for single-GPU surround (barring special solutions like the Zotac MultiView and Galaxy MDT cards).
 
I think NVidia will eventually allow 3 monitors per card, but it's not going to happen overnight.

http://www.anandtech.com/show/2937 It took AMD 3 years to make the 5870, and Eyefinity was included in the design from the beginning. [post=1035205858]When NVidia discovered in 2009[/post] that they wanted multi-monitor gaming, it was likely too late to change Kepler. I would not entirely discount the possibility, but Maxwell seems more likely for single-GPU surround (barring special solutions like the Zotac MultiView and Galaxy MDT cards).

At this point they almost have to support it - it has become too much of a selling point for $500+ cards to not include it in some way, even if they have to kludge something together like Galaxy did.
 
I think NVidia will eventually allow 3 monitors per card, but it's not going to happen overnight.

http://www.anandtech.com/show/2937 It took AMD 3 years to make the 5870, and Eyefinity was included in the design from the beginning. [post=1035205858]When NVidia discovered in 2009[/post] that they wanted multi-monitor gaming, it was likely too late to change Kepler. I would not entirely discount the possibility, but Maxwell seems more likely for single-GPU surround (barring special solutions like the Zotac MultiView and Galaxy MDT cards).

Not really much of a problem. Gaming on a single card with triple monitors is a big no; no card is powerful enough on modern games at decent frame rates. Those Eyefinity 6 cards and Galaxy 5-monitor cards cannot properly play games on 6/4 monitors. To properly go multi-monitor you will need dual-card SLI/CrossFire at minimum, unless you are willing to play with low detail or low frame rates. But there is a difference between Surround and Eyefinity. Unlike Eyefinity, Surround doesn't need all monitors attached to one card, so there's little point in having more than 2 connectors, as you are going to have another card with another 2, and every card after that gives you 2 more. Basically, whereas you need the first card to have all the connectors with Eyefinity, Surround allows you to add more later. 1 card per monitor is a good ratio, especially with 3D/larger monitors. Even as cards get more powerful, (hopefully) games will come along to make them seem feeble again.
 