Kepler in March/April?

Just listen to Kyle and quit reading this crap from fud, fart, and frap. Unless HE says the top of the line is coming out March/April, it's June for that card.

When games are released on day 1, Nvidia has a driver out. AMD, 1 to 2 weeks after. It's been that way from my first Radeon DDR to my last Radeon 1900XT to the present day. All of you talking about HUGE performance increases 6 months after the 7970 has been released can stuff it. Like I said before, it's been the same line of bullshit since the original Radeon. Besides that, why would you have to wait 4-6 months after release to get good drivers?? They've had working silicon in shop for 6 months or more before release. Remember the drivers for the Rage Fury MAXX?? That was a hoot!

It's like there is a whole generation here now that didn't deal with ATI through the 2000s. It's no different now. You should have been at Rage3D when a few of us got banned for asking Mr. Makedon about the hardware flaw in the T&L engine in the original Radeon 8500 64MB/128MB (and other things they were doing driver workarounds for). No nasty comments, just really informative questions and answers THEY didn't want to hear. Banned and disposed of. That right there is one of many reasons I do not put an ATI card in any of my family's or friends' computers to this day.

Another thing: the reason this thing (7970) is overpriced by $100 is the BD CPU. If you have one side of the company making money, with a head start on the competition, and one side losing money, you have to try and make that up somewhere (basic business). ATI/AMD have a full quarter's jump on Nvidia to make back some of the lost profit from the CPU division through the GPU division. So instead of, let's say, a $100 profit on this card, it's now $200. Go get it, lemmings!! :p

Apart from the Kyle-related stuff, you just posted an utter load of crap. I have used GPUs from both Nvidia and AMD recently and I've had my fair share of day-1 game problems from both. AMD release WHQL drivers every month and CAP updates a few times a month, and for the most part the AAA games get fixes very quickly if there are problems. They dropped the ball with Skyrim and Rage, but they had Tahiti coming up so I gave them a pass. Rage had serious problems with Nvidia SLI as well. If you think Nvidia drivers are all that, then go and Google the Nvidia 196.75 driver problem. Nvidia even had to pull those drivers because they were blowing up GPUs. I saw this first hand, because one of my SLI GTX 260's died when its fan decided to stop while playing a game. I RMA'd the card, then sold the replacement and my surviving GTX 260 over Nvidia's fuck up, I really was that pissed with them. I bought a HD 5870 and it was a great card, then a couple of GTX 460's in SLI (nothing blew up this time), then a HD 6970 on release, and now a new HD 7970.

http://hardocp.com/news/2010/03/05/nvidia_19675_drivers_kill_your_gpu

The reason the HD 7970 is so expensive is that it beats the 3GB GTX 580 by a fair margin and the 3GB GTX 580 is priced higher than the HD 7970. AMD looked at the market, saw the inflated price of the 3GB GTX 580, and priced the HD 7970 lower, but no, it's a fucking conspiracy I tell you, where's my tinfoil hat. Don't start all the conspiracy theory BS here, what was the conspiracy behind the prices for the GTX 480 ($499), or the GTX 280 ($650), or the 8800 Ultra ($840), or the X1950XT ($499)? I could list more but I think the point is made: they were all introduced at very high or downright extortionate prices. Nvidia and AMD have almost always priced their undisputed top-end single GPUs at premium or rip-off prices. Add to this the fact that retailers have always price-gouged on the latest and greatest GPUs, always have, always will.

Sorry for the rant :)
 
System power != graphics board power
50W is very significant. Perf/W has suffered considerably on the 6970 compared to the 5870.

Sorry, my bad. I had two tabs open and hastily linked the wrong one:

http://hardocp.com/article/2010/12/14/amd_radeon_hd_6970_6950_video_card_review/8

That 16% (technically 15.5%) was based on 238 watts (6970) vs. 206 watts (5870) at full load. At 2560x1600, [H] had the 6970 averaging about 22% (ballpark) faster than the 5870. My conclusion is that the 6970 seemed to be running more efficiently.
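For anyone who wants to sanity-check that conclusion, here's a minimal sketch (Python) of the arithmetic, using the system-level wattages and the ballpark 22% performance figure quoted above; it's only as good as those inputs, and system power is not board power, as pointed out below:

```python
# Rough perf-per-watt check using the system-level numbers quoted above.
power_6970, power_5870 = 238.0, 206.0   # watts at full load, per [H]
perf_gain = 1.22                        # ~22% faster at 2560x1600 (ballpark)

power_ratio = power_6970 / power_5870           # ~1.155 -> ~15.5% more power
perf_per_watt_ratio = perf_gain / power_ratio   # >1.0 means efficiency improved

print(f"power: +{(power_ratio - 1) * 100:.1f}%, "
      f"perf/W: {(perf_per_watt_ratio - 1) * 100:+.1f}%")
# -> power: +15.5%, perf/W: +5.6%  (on these system-level numbers)
```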
 
So? Other loads (other benchmarks, other scenes) give other numbers. There is no such thing as "the" power consumption.
And both of you still cannot differentiate between system power and board power...

HT4U measured the power consumption of the card and the card only with their equipment:
http://ht4u.net/reviews/2010/amd_radeon_hd_6970_6950_cayman_test/index22.php

The 6970 uses 30% (not watts, percent) more power in their game test than the 5870. Is it 30% faster than a 5870? No. Thus, barring tests where the 5870 is limited by its VRAM or tessellation, efficiency took a hit. It's as simple as that.
 
Let's see, fastest card of one full generation to the next:

X1950XTX->8800GTX: +92%
8800GTX->GTX280: +63%
GTX280->GTX480: +51%
GTX580->HD7970: +30% ===> beast?

http://www.computerbase.de/artikel/...rten-evolution/3/#abschnitt_leistung_mit_aaaf

Standards sure are falling :p


And if you look at the numbers, they fall lower and lower with each generation, because we are getting to the point of diminishing returns. Also, if you look at the differences, both Nvidia and AMD are becoming more power conscious. Back in the days of the X1950XTX and 8800GTX, neither ATI nor Nvidia gave a rat's ass about how much power their cards used as long as it was within PCIe spec. Since then you have seen both ATI/AMD and Nvidia drop the max power usage of their cards, albeit the 400 series was a fuck-up on Nvidia's part when it comes to its power/performance ratio.

So in the end, that 30% increase from the 7970 is right where it should be if you look through the generations of cards. Honestly, I don't even see Nvidia's flagship card being more than 10% faster than the 7970, unless they completely disregard power usage, in which case we might see 15-20% more out of a card using 50-100W more than the 7970.
 
This chart from computerbase includes old games from 2005 up to 2011. In new games the cards are doing much, much better.
 
So in the end, that 30% increase from the 7970 is right where it should be if you look through the generations of cards. Honestly, I don't even see Nvidia's flagship card being more than 10% faster than the 7970, unless they completely disregard power usage, in which case we might see 15-20% more out of a card using 50-100W more than the 7970.

You just have low expectations. The 580 is 70% faster than a 280. Why should we start thinking 30% improvements are spectacular? Personally, if the big boy Kepler isn't over 2x the speed of a 580 I'll be disappointed. As you said they fucked up perf/watt with Fermi so I'm inclined to believe them when they say they've improved things in that area. Your numbers are based on the assumption that Kepler perf/watt will continue to be shit....
 
You just have low expectations. The 580 is 70% faster than a 280. Why should we start thinking 30% improvements are spectacular? Personally, if the big boy Kepler isn't over 2x the speed of a 580 I'll be disappointed. As you said they fucked up perf/watt with Fermi so I'm inclined to believe them when they say they've improved things in that area. Your numbers are based on the assumption that Kepler perf/watt will continue to be shit....

Going by that logic the HD 7970 is ~70% faster than HD 5870, you can't just skip cards and pretend they never existed. I would not expect over 2x the performance, 60-70% would be a good return IMHO.
 
Going by that logic the HD 7970 is ~70% faster than HD 5870, you can't just skip cards and pretend they never existed. I would not expect over 2x the performance, 60-70% would be a good return IMHO.

Ah, but I'm not skipping anything. I agree that it's a big boost over the 5870, however it's not the 5870 that it's being compared to now is it? :)

According to hardware.fr a 7970 draws about the same power as a 580 when clocked to 1100/1350 with powertune limitations removed. At those speeds the overclocked 7970 is about 29% faster than the 580 at the games and settings tested. So for a given 28nm Kepler part at ~230w power draw where do you expect performance to fall relative to the 580 considering that they "fucked up" with Fermi?

http://www.hardware.fr/focus/59/xfx-radeon-hd-7970-overclocking.html
 
Different sites can have wildly different power consumption numbers. Xbit showed the GTX 580 and 7970 using similar power at stock speeds, but the 7970 using more when both were OC'd.



 
This is not what HardOCP observed. The Galaxy MDT 580 (which was overclocked but its power consumption is in line with stock cards) consumed considerably more power than a 7970@1125 MHz, while the 7970 is ahead 33-54% in single monitor and 38-62% in Eyefinity.
http://www.hardocp.com/article/2012/01/09/amd_radeon_hd_7970_overclocking_performance_review/4

[H]ardOCP has one of the least scientific power consumption reporting methods out there, i.e. not useful for any sort of analysis. Use Xbit, Hardware.fr or Techpowerup for more accurate numbers.
 
At least HardOCP give more relevant performance numbers, and do not base their evaluation on 1080p resolution as your linked hardware.fr article does.
Besides, while the absolute power consumption numbers may be questionable, I consider the relative consumption figures still valid despite the "unscientific" method.
 
[H]ardOCP has one of the least scientific power consumption reporting methods out there, i.e. not useful for any sort of analysis. Use Xbit, Hardware.fr or Techpowerup for more accurate numbers.

What is unscientific about it? They compare power draw between one card and the other in the same system. The wattage difference between the two is going to be the difference between the cards. What's the issue? The same system with a 580 uses a good bit more than with the 7970, even with the OC sliders maxed at stock voltage. End of story IMO.
 
What is unscientific about it? They compare power draw between one card and the other in the same system. The wattage difference between the two is going to be the difference between the cards. What's the issue? The same system with a 580 uses a good bit more than with the 7970, even with the OC sliders maxed at stock voltage. End of story IMO.
Agreed.......

System without GPU.
System with GPU at idle.
System running 3D graphics at 100% utilization.

What else do you need?
 
The measured difference is not entirely down to the cards. It doesn't take the inefficiency of the PSU into account, and it cannot tell how much power the CPU consumes in each situation.
Having said that, it is still reasonable to assume that the card with higher system power consumption will also have higher consumption when measured individually.
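To put an illustrative number on that caveat, here's a minimal sketch (Python); the 60W wall delta and 85% PSU efficiency are hypothetical values, not measurements from any of the linked reviews:

```python
# Illustrative only: a PSU's conversion loss inflates wall-socket deltas.
# Both numbers below are hypothetical, chosen just to show the effect.
wall_delta = 60.0        # watts of difference measured at the wall
psu_efficiency = 0.85    # assumed PSU efficiency at this load

dc_delta = wall_delta * psu_efficiency   # power that actually reached the components
print(f"wall delta: {wall_delta:.0f} W -> DC-side delta: ~{dc_delta:.0f} W")
# -> wall delta: 60 W -> DC-side delta: ~51 W
# (and this still assumes the CPU drew the same power in both runs)
```

So the ranking between two cards is usually preserved, but the absolute gap at the wall overstates the gap at the card.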
 
At least HardOCP give more relevant performance numbers, and do not base their evaluation on 1080p resolution as your linked hardware.fr article does.
Besides, while the absolute power consumption numbers may be questionable, I consider the relative consumption figures still valid despite the "unscientific" method.

What exactly is "irrelevant" about 1080p? Don't tell me you think Eyefinity resolutions are somehow more relevant - that would be a pretty bad joke. You can consider [H]'s methodology valid if you like. I will continue to use readings from sites with consistent methods that isolate GPU power consumption. To each their own :)
 
Ah, but I'm not skipping anything. I agree that it's a big boost over the 5870, however it's not the 5870 that it's being compared to now is it? :)

http://www.hardware.fr/focus/59/xfx-radeon-hd-7970-overclocking.html

And the GTX 580 wasn't compared to the GTX 280, it was compared to the GTX 480 and HD 5870. Why one rule for Nvidia but not for AMD/ATI?

You posted a statement saying the GTX 580 was great because it was 70% faster than a GTX 280, conveniently forgetting the GTX 480 existed. So GTX 280 to GTX 480 = 50%.

So if we use the same rules you initially applied then we get the following (pointless measurement).

GTX 280 to GTX 580 = ~70% performance gain
HD 5870 to HD 7970 = ~70% performance gain
 
And the GTX 580 wasn't compared to the GTX 280, it was compared to the GTX 480 and HD 5870. Why one rule for Nvidia but not for AMD/ATI?

You posted a statement saying the GTX 580 was great because it was 70% faster than a GTX 280, conveniently forgetting the GTX 480 existed. So GTX 280 to GTX 480 = 50%.

So if we use the same rules you initially applied then we get the following (pointless measurement).

GTX 280 to GTX 580 = ~70% performance gain
HD 5870 to HD 7970 = ~70% performance gain

I made no statements about the 580 being great or not great - that's all in your head. My original question was why people are so hyped over a 30% gain from 580->7970 and consider it par for the course. Or do you disagree that people are hyping up a 30% gain? Not really sure what point you're trying to make here.....

Btw, the reason I don't mention the 480 is because it's a partially disabled chip - sort of pointless when comparing performance gains between architectural generations.
 
I made no statements about the 580 being great or not great - that's all in your head. My original question was why people are so hyped over a 30% gain from 580->7970 and consider it par for the course. Or do you disagree that people are hyping up a 30% gain? Not really sure what point you're trying to make here.....

I don't think it's just the % gain. This is one of the most overclockable GPUs of all time - THAT, I think, is one of the reasons it's really being hyped, more so than the actual performance, as well as the fact that it came with the 3GB of VRAM that the 580 SHOULD have come with.
 
I don't think it's just the % gain. This is one of the most overclockable GPUs of all time - THAT, I think, is one of the reasons it's really being hyped, more so than the actual performance, as well as the fact that it came with the 3GB of VRAM that the 580 SHOULD have come with.

I have a feeling the 7970 will be viewed in a slightly different light soon.
 
This is essentially in line with that leaked roadmap from November... which, if correct, means you won't see the high-end single-GPU card until close to 2013. The mid-to-high-end cards might be released by summer.
 
http://www.chiphell.com/thread-346223-1-1.html

RE: GTX680 (not sure if this is the final model number)

It may be seen as early as February (not a paper launch but retail), as Mr. Huang doesn't want the HD 7970 to be the one shining brightest in the 28nm era.

The performance would be around the same as the HD 7970's, depending on the drivers.

So far the core clock is at 780MHz, with 2GB of VRAM.

The source is an AIC manufacturer, who says that even the retail packaging is being printed now.
 
I made no statements about the 580 being great or not great - that's all in your head. My original question was why people are so hyped over a 30% gain from 580->7970 and consider it par for the course. Or do you disagree that people are hyping up a 30% gain? Not really sure what point you're trying to make here.....

Btw, the reason I don't mention the 480 is because it's a partially disabled chip - sort of pointless when comparing performance gains between architectural generations.

The GTX 480 still existed, or is that all in my head as well? :)

It doesn't matter if it was castrated/disabled/retarded, it still existed; Nvidia released it at £430 here in the UK. Or are you treating it like a retarded little brother that you keep hidden because he embarrasses you, so you pretend he doesn't exist?

You are conveniently comparing how the GTX 580 was 70% faster than the GTX 280, while forgetting that the HD 5870 ever existed. Let's play your game and forget the GTX 480 existed: that means the GTX 580 came one year later than the HD 5870 and was only ~40% faster.

Stop cherry-picking info or even ignoring facts altogether to make an invalid point. The fact is that each NEW process generation for the past few years has had AMD and Nvidia leapfrog the previous top end by ~20-40%.

GTX 280 -> HD 5870 -> GTX 480 -> GTX 580 -> HD 7970, all no more than a 40% increase at stock speeds. Note I am only comparing the actual undisputed fastest top-end single-chip GPU, so no, I am not including the HD 6970.
 
Waiting 2 months for a new game to work with AMD completely rules AMD out for me. You guys can argue percentages, but the real-world truth is that AMD's drivers suck: they are never ready for new releases and they have bugs that never get fixed. Sorry AMD, it's not just about the hardware...

And yes, the 480 existed, I owned it. It was never a "partially disabled chip", it kicked the shit out of anything AMD had at the time. The 580 unlocked slightly more performance but the 480 was a great card on its own.
 
Waiting 2 months for a new game to work with AMD completely rules AMD out for me. You guys can argue percentages, but the real-world truth is that AMD's drivers suck: they are never ready for new releases and they have bugs that never get fixed. Sorry AMD, it's not just about the hardware...

And yes, the 480 existed, I owned it. It was never a "partially disabled chip", it kicked the shit out of anything AMD had at the time. The 580 unlocked slightly more performance but the 480 was a great card on its own.

Spoken like a true fanboy

I've never really had any show-stopper game issues with my AMD/ATI cards, but I've only ever used single cards, not Crossfire or SLI. To each his own I guess.
 
GTX 280 -> HD 5870 -> GTX 480 -> GTX 580 -> HD 7970, all no more than a 40% increase at stock speeds. Note I am only comparing the actual undisputed fastest top-end single-chip GPU, so no, I am not including the HD 6970.

Ok, let's try this with no interpretation, just raw numbers relative to each company's former generation. All numbers based on launch reviews at TPU, 1920x1200 resolution.

65nm -> 40nm:
GTX 280 -> GTX 480 (+54%)
GTX 280 -> GTX 580 (+79%)
HD 4870 -> HD 5870 (+64%)

40nm -> 28nm
HD 5870 -> HD 7970 (+52%)
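As an aside, those per-step numbers multiply rather than add when you chain them, which matters for the 280 -> 580 figure; here's a minimal sketch (Python), using the TPU-derived percentages quoted in this post and not independently verified:

```python
# Generational gains compound multiplicatively; they don't simply add.
# Figures are the TPU-derived numbers quoted in the table above.
gain_280_to_480 = 0.54   # GTX 280 -> GTX 480
gain_280_to_580 = 0.79   # GTX 280 -> GTX 580

# The 480 -> 580 step implied by the two figures above:
implied_480_to_580 = (1 + gain_280_to_580) / (1 + gain_280_to_480) - 1
print(f"implied GTX 480 -> GTX 580 gain: ~{implied_480_to_580 * 100:.0f}%")
# -> ~16%, i.e. the 280 -> 580 jump rolls two steps into one number
```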
 
Spoken like a true fanboy

I've never really had any show-stopper game issues with my AMD/ATI cards, but I've only ever used single cards, not Crossfire or SLI. To each his own I guess.

I have, I've owned almost every AMD card in the last 5-6 years with the exception of the dual-GPU cards. So I've got the experience with their products to back up my claims.

I'm not a fanboy of any GPU company, I'm a fanboy of being able to play a game on day one without having to deal with driver bullshit and excuses. If expecting to play a game on day one makes me a fanboy, then so be it.
 
I have, I've owned almost every AMD card in the last 5-6 years with the exception of the dual-GPU cards. So I've got the experience with their products to back up my claims.

I'm not a fanboy of any GPU company, I'm a fanboy of being able to play a game on day one without having to deal with driver bullshit and excuses. If expecting to play a game on day one makes me a fanboy, then so be it.

So your experience is better than my experience...OK

I've been able to play all my games on day one with either AMD or Nvidia cards I have owned, which like yourself is quite extensive. I know others have had issues, which is why I don't lump one company or the other in a single good/bad category.

What you seem to be implying is that all your game issues are caused by the graphics card(s), in this case AMD, so yes, I'd call you an NV fanboy.
 
So your experience is better than my experience...OK

I've been able to play all my games on day one with either AMD or Nvidia cards I have owned, which like yourself is quite extensive. I know others have had issues, which is why I don't lump one company or the other in a single good/bad category.

What you seem to be implying is that all your game issues are caused by the graphics card(s), in this case AMD, so yes, I'd call you an NV fanboy.

..and you sound like an AMD fanboy, so we can just agree to disagree...
 
Ok, let's try this with no interpretation, just raw numbers relative to each company's former generation. All numbers based on launch reviews at TPU, 1920x1200 resolution.

65nm -> 40nm:
GTX 280 -> GTX 480 (+54%)
GTX 280 -> GTX 580 (+79%) <- Shouldn't be here
HD 4870 -> HD 5870 (+64%)
HD 4870 -> HD 6970 (+80%) <- Shouldn't be here either, I added it to counter your obvious bias

40nm -> 28nm
HD 5870 -> HD 7970 (+60%)

Lol, I love how you slip the GTX 280 to GTX 580 in there, even though it wasn't on a new process and was a GTX 480 refresh. You keep doing this, cherry-picking or even changing facts to suit some strange agenda to prove the HD 7970 is actually a crap card. Also, the GTX 580 was on average 18% faster than the GTX 480, not 25% as you are showing. Even at that it still doesn't belong in that list, or if it does then so does the HD 4870 -> HD 6970 (+80%).

Looking at most reviews the HD 7970 is around ~60% faster than the HD 5870, so I fixed that in your original table.

So overall we see the usual 50-70% jump in overall speed from process to process (not including the two refresh parts). The fact that 32nm was cancelled meant both Nvidia and AMD had to launch another part on 40nm, which only gave a marginal 15-20% increase (GTX 580 and HD 6970). So we can discount both of those to get our true measurement of the process-to-process increase. Incidentally, I am not counting the GTX 285, which was a GTX 280 on a 55nm process and really wasn't a true process jump for Nvidia.

So now that we have a true table showing the approximate overall % increase from process to process, we can see that the HD 5870 -> HD 7970 falls well within the usual 50-70% jump in speed at stock clocks. So, no, it isn't a crap card in the grand scheme of things.
 
Ok, let's try this with no interpretation, just raw numbers relative to each company's former generation. All numbers based on launch reviews at TPU, 1920x1200 resolution.

65nm -> 40nm:
GTX 280 -> GTX 480 (+54%)
GTX 280 -> GTX 580 (+79%)
HD 4870 -> HD 5870 (+64%)

40nm -> 28nm
HD 5870 -> HD 7970 (+52%)

Why don't we compare using the proper cards? Or did we all forget about the 285? :p

GTX 285 -> GTX 480 (+??%) 55nm->40nm not 65->40nm
GTX 285 -> GTX 580 (+??%) 55nm->40nm
HD 4890 -> HD 5870 (+??%) 55nm->40nm

40nm -> 28nm
5870 -> 7970 is def more than 52%. Where are you getting these #s?
 
Why don't we compare using the proper cards? Or did we all forget about the 285? :p

GTX 285 -> GTX 480 (+??%) 55nm->40nm not 65->40nm
GTX 285 -> GTX 580 (+??%) 55nm->40nm
HD 4890 -> HD 5870 (+??%) 55nm->40nm

40nm -> 28nm
5870 -> 7970 is def more than 52%. Where are you getting these #s?

I think this should be 6970, not 5870, right?
 
@IDCP, are you arguing with yourself? When did I say the 7970 is a crap card? You should stick to numbers. My point stands - it's a lower increase than other architecture and process migrations, hence my surprise at the hype. You can harp on the 480 all you want, but it's funny that it was crippled and was still a bigger jump than the 7970 is! :)

@Viper-X, read my post. I already said exactly where the numbers are from.
 
@IDCP, are you arguing with yourself? When did I say the 7970 is a crap card? You should stick to numbers. My point stands - it's a lower increase than other architecture and process migrations, hence my surprise at the hype. You can harp on the 480 all you want, but it's funny that it was crippled and was still a bigger jump than the 7970 is! :)

@Viper-X, read my post. I already said exactly where the numbers are from.

That's my point, I'm not sure where you get GTX 280 -> 480 being 51% when TPU shows 33%, and only 27% going from the GTX 285 -> GTX 480.

On top of that, does TPU re-test with all cards, or do they use previous numbers obtained on different systems? That would make your comparison even more invalid (most of the recent ones are using the same system, but the GTX 280 was tested on a Core 2 Duo @ 3.6GHz, vs. the 480 on a 920 @ 3.8GHz). To drive the nail in a bit more, TPU shows perf per watt to be 24% better on the GTX 285 vs. the 480 :p

And why are we only using 1920x1200? They have a summary that applies to all tested resolutions, might as well use that one, since nothing else really matches in your comparison.
 