Leaked Radeon R9 290X Benchmarks?

Hope AMD's R9 is a low-power monster. I want to see at least 30% faster performance for the flagship vs. its predecessor, and a significant drop in power and heat.

I wouldn't bet on that. The screens I saw put it at 349W during the 3DMark Fire Strike Combined test and 88W at idle; on the same chart, Titan scored 361/84W, the 780 got 331/81W, and the 7970 GHz Edition got 322/88W. So there isn't much difference between the top cards (if the results are true), and only 27W more between the 7970 GHz and this one, which is quite a nice result.

And as for multiscreen gaming: apart from some of the hardcore gamers, most casual and normal folks don't go for triple-monitor setups, so I could safely bet that yes, a lot of people play at 1080p. I might consider going multiscreen myself, but when you think of the cost of SLI/CrossFire plus good triple monitors, it's way too much.
 
Unless you go for some triple or quad videocard setup running $2000+ so games don't look like a choppy slideshow with constant dips into the 10s, yeah. Lots of people still game at 1080p.

Say what now? I've had a 30" monitor since around 2008 and have never had a single game drop into the 10s, and I never turn my settings down. What you folks keep forgetting is that at 2560x1600 (and 1440) the pixel density is already high enough that you don't need a shitload of AA.
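If anyone wants to sanity-check the pixel-density claim, the math is simple; here's a quick back-of-the-envelope sketch (Python, with common panel sizes assumed rather than anyone's specific monitor):

```python
# Pixel density (PPI) = diagonal resolution in pixels / diagonal size in inches.
import math

def ppi(width_px, height_px, diagonal_inches):
    return math.hypot(width_px, height_px) / diagonal_inches

print(f'30" 2560x1600: {ppi(2560, 1600, 30):.0f} PPI')  # ~101 PPI
print(f'27" 2560x1440: {ppi(2560, 1440, 27):.0f} PPI')  # ~109 PPI
print(f'24" 1920x1080: {ppi(1920, 1080, 24):.0f} PPI')  # ~92 PPI
```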
 
Give it up with the drivers crap. nVidia's drivers F'ed up my hard drive, requiring a complete reinstall, because they kept resetting my PC in the middle of games.

Somehow I think you have a bigger problem than drivers can fix.
 
Say what now? I've had a 30" monitor since around 2008 and have never had a single game drop into the 10s, and I never turn my settings down. What you folks keep forgetting is that at 2560x1600 (and 1440) the pixel density is already high enough that you don't need a shitload of AA.

Haha, I was gonna call BS on this until I finished reading it all.

Totally agreed, you don't need 8x AA at 2560x1600.

And I find the people that complain about it are just silly.

But then again, you always have the "OMG I can't run 8x AA and max settings on every game out, I need to drop $2000 in videocards" types.

It's like people forget about actually playing games!

There was no AA on a Sega Genesis or a Super Nintendo, yet some of my best gaming memories were on those early consoles.
 
Just how memory-bound is the 7970? I'm trying to figure out whether a 512-bit memory bus is marketing or a tangible performance improvement, since we can assume only a fairly modest increase (20% at best) in performance from the GPU itself compared to the 7970.
 
IMO a 512-bit bus card seems a bit too complex PCB-wise for that price point, or for anything besides a "halo" niche product like Titan, at this moment. I think they'll stick with 384-bit buses and 3/6GB combos until 20/22nm.

A quote that I think people need to remember, from Ryan at AnandTech:

AMD made one other major change to improve efficiency for Barts: they’re using Redwood’s memory controller. In the past we’ve talked about the inherent complexities of driving GDDR5 at high speeds, but until now we’ve never known just how complex it is. It turns out that Cypress’s memory controller is nearly twice as big as Redwood’s! By reducing their desired memory speeds from 4.8GHz to 4.2GHz, AMD was able to reduce the size of their memory controller by nearly 50%. Admittedly we don’t know just how much space this design choice saved AMD, but from our discussions with them it’s clearly significant. And it also perfectly highlights just how hard it is to drive GDDR5 at 5GHz and beyond, and why both AMD and NVIDIA cited their memory controllers as some of their biggest issues when bringing up Cypress and GF100 respectively.

AMD will always opt for a chip design with just enough pad space for a larger bus if feasible: RV670, Pitcairn, Redwood... even R600 (but that's a different can of worms). The justifications are many, but look at Pitcairn vs. GK106 as one example. Same die size, yet AMD essentially fit two more CUs in the space it took nvidia to build a faster 192-bit bus, and they are still more power/design efficient within 150W. Die space for the controllers, power savings for similar bandwidth thanks to a lower voltage and clock, cheaper RAM, and in this case... appropriate market choices: 2GB for 1080p (likely in a salvage SKU), 4GB for 2560x or 1080p CrossFire, and 8GB for multi-monitor/UHD/workstation use or a single 2560x display with CrossFire.

If you want the point hammered home a little more...what is Pitcairn's die size? What is the rumored die size of this chip? Not a coincidence.
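For anyone who wants the bus-width math spelled out: peak GDDR5 bandwidth is just bytes per transfer (bus width / 8) times the effective data rate. A minimal sketch, using typical shipping data rates rather than any leaked figures:

```python
def bandwidth_gbs(bus_width_bits, effective_rate_gtps):
    # Bytes per transfer (bus width / 8) times effective transfers per second.
    return bus_width_bits / 8 * effective_rate_gtps

print(bandwidth_gbs(384, 5.5))  # HD 7970, 384-bit @ 5.5GHz effective -> 264.0 GB/s
print(bandwidth_gbs(384, 6.0))  # 7970 GHz Edition / Titan, 384-bit @ 6GHz -> 288.0 GB/s
print(bandwidth_gbs(512, 5.0))  # rumored 512-bit part @ 5GHz -> 320.0 GB/s
```

So a 512-bit bus at a conservative 5GHz still clears a 384-bit bus at 6GHz by about 11%, with the cheaper, lower-clocked RAM described above.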
 
Haha, I was gonna call BS on this until I finished reading it all.

Totally agreed, you don't need 8x AA at 2560x1600.

And I find the people that complain about it are just silly.

But then again, you always have the "OMG I can't run 8x AA and max settings on every game out, I need to drop $2000 in videocards" types.

It's like people forget about actually playing games!

There was no AA on a Sega Genesis or a Super Nintendo, yet some of my best gaming memories were on those early consoles.

And ya know what? When framerates dipped on Sega and SNES games, it sucked then too. The entire point is that even if I drop $1000, I still can't get smooth gameplay with everything maxed. Running smoothly is part of gameplay. Obviously I could just turn the details down, but I don't want to. I could just buy another videocard, but that doesn't get me the minimum FPS I want either.
 
Just how memory-bound is the 7970? I'm trying to figure out whether a 512-bit memory bus is marketing or a tangible performance improvement, since we can assume only a fairly modest increase (20% at best) in performance from the GPU itself compared to the 7970.

Bandwidth-bound? It isn't. You could think of the 7950 as essentially performing like a GPU with perfect GPU efficiency (i.e., about 29-30 CUs) at default clock, because of the extra bandwidth it has.

The 7970 could run at 1250MHz with 6GHz/384-bit and still be efficient... this is why a 256-bit refresh design makes sense. They could run 1792 SPs at 1100/7000, just as an example, and get close to boilerplate 7970 GHz performance (because the 7970 has too many shaders for its ROPs outside of compute tasks, and way too much bandwidth while running at a slow clock to keep power down).

The bus choice is probably some combination of the things I mentioned above.

512-bit/5GHz could run 2816 SPs at around 1010MHz and maintain efficiency.

If you want another example... Pitcairn and its little brother are optimized at 1GHz/4500, with a reasonable overclock for Pitcairn purposely being 1200/5400 (default voltage greater than 1.2V)... which is still efficient. This chip may have more units comparably (scaled up), which would help boilerplate efficiency (if 48 ROPs) and texture capabilities versus nvidia, but be optimized to run at a lower clock for power efficiency while maintaining similar bandwidth efficiency.
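To see why those hypothetical configs "maintain efficiency", you can compare their compute-to-bandwidth ratios. A rough sketch (peak single-precision FLOPS assumes 2 ops per shader per clock; the configs are the hypotheticals from this post, not confirmed specs):

```python
def gflops(shaders, clock_ghz):
    # Peak single-precision GFLOPS: one FMA (2 ops) per shader per clock.
    return 2 * shaders * clock_ghz

def bandwidth_gbs(bus_width_bits, effective_rate_gtps):
    # Peak memory bandwidth in GB/s.
    return bus_width_bits / 8 * effective_rate_gtps

configs = [
    ("7970 GHz: 2048sp @ 1.05GHz, 384-bit/6GHz", gflops(2048, 1.05), bandwidth_gbs(384, 6.0)),
    ("refresh:  1792sp @ 1.10GHz, 256-bit/7GHz", gflops(1792, 1.10), bandwidth_gbs(256, 7.0)),
    ("512-bit:  2816sp @ 1.01GHz, 512-bit/5GHz", gflops(2816, 1.01), bandwidth_gbs(512, 5.0)),
]

for name, gf, bw in configs:
    print(f"{name}: {gf:.0f} GFLOPS / {bw:.0f} GB/s = {gf / bw:.1f} FLOPS per byte")
```

The 256-bit refresh and the 512-bit part land within about 1% of each other on FLOPS per byte, which is roughly what keeping the compute-to-bandwidth balance means here.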
 
They may drop the Titan's price, but definitely not to 400-450, since it still serves very well for CUDA tasks and workstation environments... either they have an ace up their sleeve or the 780 will take a tumble... maybe the 770 too.
They have an ace up their sleeve: Maxwell is coming within 3-4 months.
Titan has been out for 8+ months already, and the GTX 780 for 3+. They've already sold a crapton of them, and they can afford to drop the prices a little and still sell them, since ATI, as always, has driver problems and no support for CUDA/PhysX/LightBoost.
Seriously, with this small performance gap, Nvidia has nothing to fear. They will just release Maxwell in Q1 2014 and crush ATI again until 2015...

Just to give an idea, GK110 is like 18 months old, and ATI is only now catching up with it in performance.
The question is: how much has Nvidia advanced in the last 18 months while ATI was catching up? I have the feeling Maxwell will easily be double the performance of a Titan...
 
They have an ace up their sleeve: Maxwell is coming within 3-4 months.
Titan has been out for 8+ months already, and the GTX 780 for 3+. They've already sold a crapton of them, and they can afford to drop the prices a little and still sell them, since ATI, as always, has driver problems and no support for CUDA/PhysX/LightBoost.
Seriously, with this small performance gap, Nvidia has nothing to fear. They will just release Maxwell in Q1 2014 and crush ATI again until 2015...

Just to give an idea, GK110 is like 18 months old, and ATI is only now catching up with it in performance.
The question is: how much has Nvidia advanced in the last 18 months while ATI was catching up? I have the feeling Maxwell will easily be double the performance of a Titan...

18 months old? You just said the Titan came out 8 months ago... Technically, AMD is only a little late to the party on this one, and if they bring something faster, what's it matter at that point? They are both refreshes; neither is a new design on red or green.

NV never crushed AMD in the GTX 6xx vs. AMD 7xxx series. Also, I do believe the 7xxx released earlier than the 6xx series, if I remember right...
 
18 months old? You just said the Titan came out 8 months ago... Technically, AMD is only a little late to the party on this one, and if they bring something faster, what's it matter at that point? They are both refreshes; neither is a new design on red or green.

NV never crushed AMD in the GTX 6xx vs. AMD 7xxx series. Also, I do believe the 7xxx released earlier than the 6xx series, if I remember right...
GK110, the chip used in Titan, was presented at the beginning of 2012, and then put in Teslas.
Only the next year was it released in consumer devices, with Titan first and the 780 later.

The GTX 6xx series was Nvidia trolling: ATI was so far behind that Nvidia had no reason to release GK110 for consumers, so they just released a low-cost, high-margin GK104 video card.
And this was ATI's fault, since not only do they always play catch-up, they also suck at it. How long do we actually have to wait until ATI can put some fire under Nvidia's ass? How long do we have to wait until we actually have some fucking competition in this expensive market?

Ffs, the new consoles are coming, and shit like Star Citizen is gonna push computing requirements stupidly high, yet Nvidia/Intel aren't even bothering to increase performance on their stuff, just trying to milk customers as much as they can with power-saving crap, while AMD/ATI just stands and watches as we get ripped off, and computing-power progress in the PC field moves at the pace of a snail...

Sorry for the rant, but this is getting really, really annoying.
 
Hmmm... So while they were having yield problems with the GK104 chips, the GK110 was abundant in supply, but they held onto it forever, giving the market to AMD for 3 months? Then, even after giving them the market, they still only produced a card to rival AMD, not trump them? Then they wait a year and release a $1000 card that isn't even aimed at the majority of the enthusiast market. Then they finally release a more consumer-like card about 4 months ahead of AMD's card.

It's back and forth, dude; accept it. Neither is further ahead at this point in the overall picture, only ahead for a short time.
 
Hmmm... So while they were having yield problems with the GK104 chips, the GK110 was abundant in supply, but they held onto it forever, giving the market to AMD for 3 months? Then, even after giving them the market, they still only produced a card to rival AMD, not trump them? Then they wait a year and release a $1000 card that isn't even aimed at the majority of the enthusiast market. Then they finally release a more consumer-like card about 4 months ahead of AMD's card.

It's back and forth, dude; accept it. Neither is further ahead at this point in the overall picture, only ahead for a short time.

Giving AMD the market? Hardly. Allowing themselves to slack is more like it, since the bulk of the work was already done on the GK110 and there was no point in launching it as a consumer product with zero competition (which absolutely sucks for the consumers). The fact is that if AMD had actually got a competitive product to market, it would have lit a fire under Nvidia's ass and we would have seen the GK110 on the consumer market much sooner.
 
GK110, the chip used in Titan, was presented at the beginning of 2012, and then put in Teslas.
Only the next year was it released in consumer devices, with Titan first and the 780 later.

They didn't start shipping any mass-produced GK110s until Sept '12...
 
The GK110 had a soft launch and abysmal yields, which is why they were initially so damn expensive...

It took a good 6 months for decent availability...

By the time they got the yields up, they were able to gimp them for the 780 and still make them profitable...

It's odd, however, that now that the yields are up, they haven't dropped the price on the Titan.

Bear in mind the Titan is also gimped from the workstation card; GPGPU is downgraded...
 
That 512-bit-wide memory bus gets even more interesting when you look at it in the context of their upcoming hUMA architectural changes.
 
Giving AMD the market? Hardly. Allowing themselves to slack is more like it, since the bulk of the work was already done on the GK110 and there was no point in launching it as a consumer product with zero competition (which absolutely sucks for the consumers). The fact is that if AMD had actually got a competitive product to market, it would have lit a fire under Nvidia's ass and we would have seen the GK110 on the consumer market much sooner.

Dude, are you that ignorant? Did you read anything I posted? Provided the benchmarks are true, if AMD comes out @ $650 with a little better than Titan performance, where is NVidia? They are a good 6 months out from their next generation, if not more. Then AMD will be 6 months behind that.

Neither is ahead of the other in the big picture. All you seem to see is black and white (or green).
 
They might as well gimp it even more and push it out as a 770 Ti; plenty of morons will buy it.
 
I am still tired of playing these games. Still rocking a used MSI GTX 580 Lightning 3GB card.
 
Dude, are you that ignorant? Did you read anything I posted? Provided the benchmarks are true, if AMD comes out @ $650 with a little better than Titan performance, where is NVidia? They are a good 6 months out from their next generation, if not more. Then AMD will be 6 months behind that.

Neither is ahead of the other in the big picture. All you seem to see is black and white (or green).

That's a hell of an assumption on your part (that nvidia wouldn't be able to come up with anything within 6 months of the launch of this particular AMD card).

Furthermore, even if we take a serious look at what AMD has been producing for the past couple of years: they get beat by nvidia, eventually release a new card that catches up, get beat, catch up, get beat, catch up. That's not competition that drives prices down (I'm intentionally ignoring the cost of the Titan, since you can't assume its cost won't change, and it was a halo product that did actually sell at its ridiculous price point). For AMD to be a serious competitor in the market, they need to beat nvidia, and they can't.

"But you're just an nvidia fanboy": I hope that's not what you're thinking, because it couldn't be further from the truth, as I've owned plenty of cards over the years besides nvidia (3dfx, a couple of Matrox cards, various versions of ATI chipsets). The sad fact is that aside from the driver improvements ATI had finally started making during the "reign" of the 4000-series cards, AMD has done diddly-squat the past few years and keeps playing catch-up with nvidia instead of leapfrog (the same could be said about their desktop CPUs and competition with Intel, but that's not even really catching up).
 
I think both companies are out of sync now.

Is this to compete with Titan, or nVidia's next gen card? Titan is still Kepler.
 
I think both companies are out of sync now.

Is this to compete with Titan, or nVidia's next gen card? Titan is still Kepler.

Same node, current gen.
20nm is another ball game and another round of competition.
 
That's a hell of an assumption on your part (that nvidia wouldn't be able to come up with anything within 6 months of the launch of this particular AMD card).

1 fps faster costs you $1000 US more, but you buy it happily.
Nvidia laughs its asse(t)s off at the bank and does high fives.

Even if they could do it, it would either be a limited-edition, high-priced card, or PR telling you Maxwell is coming and will be so great; but that would be on the 20nm node, and again AMD will answer with their own 20nm card.

Nvidia won't have an answer.
:p
 
That's a hell of an assumption on your part (that nvidia wouldn't be able to come up with anything within 6 months of the launch of this particular AMD card).

Furthermore, even if we take a serious look at what AMD has been producing for the past couple of years: they get beat by nvidia, eventually release a new card that catches up, get beat, catch up, get beat, catch up. That's not competition that drives prices down (I'm intentionally ignoring the cost of the Titan, since you can't assume its cost won't change, and it was a halo product that did actually sell at its ridiculous price point). For AMD to be a serious competitor in the market, they need to beat nvidia, and they can't.

"But you're just an nvidia fanboy": I hope that's not what you're thinking, because it couldn't be further from the truth, as I've owned plenty of cards over the years besides nvidia (3dfx, a couple of Matrox cards, various versions of ATI chipsets). The sad fact is that aside from the driver improvements ATI had finally started making during the "reign" of the 4000-series cards, AMD has done diddly-squat the past few years and keeps playing catch-up with nvidia instead of leapfrog (the same could be said about their desktop CPUs and competition with Intel, but that's not even really catching up).

Not to be rude, but WTF are you babbling on about!? The 4000 series was far superior on a price/performance scale to the Nvidia 200 series, and the AMD 7000 series came out months before the ultimately slower and more expensive Nvidia 600 series. The companies play leapfrog: one company comes out with the 'fastest' card, then the other comes back and steals the title. You can't interpret that as one company 'always losing'. Remember the GTX 590, declared 'the world's fastest graphics card!', and later AMD's clever ad for the 6990: 'faster than the world's fastest graphics card.' And now the 7990 is the fastest card on the market; it's dual-GPU, but it is still the fastest card. Nvidia will probably release a dual-GPU card that will outpace it. At that point YOU will probably look at that as AMD losing...
 
Not to be rude, but WTF are you babbling on about!? The 4000 series was far superior on a price/performance scale to the Nvidia 200 series,

You ask what I'm babbling about, and then post the obvious. You didn't even read my post. I said AMD hadn't done squat since ATI produced a competing product with the 4000 series. I suppose now I've typed it in as plain English as I possibly can, but holy crap, guy. I've been talking about single-GPU cards; now you want to bring up dual-GPU cards. If you're going to go that route, you might as well take it to the full extreme of quad SLI and toss the budget right out the window, just because ATI can slap a couple of GPUs on the same board.

WHOOPDEEDOO

Sorry pal, I want serious competition from AMD, something to actually convince me to buy their products.
 
From the leaked benches, the 780 at overclocked spec can already surpass this card, since as far as I understood the benches were done at 1020MHz (already an overclock). If this is the case, and assuming less OC headroom beyond 1020, it is safe to say the cards are evenly matched on performance.

Factor in drivers, the features that come with nVidia, and 3D support, and I am willing to pay $50 more per card for nVidia, just for the stability of my system and my personal sanity, rather than go back to AMD or run a CFX system.

With that argument, aren't nVidia cards overclocked as well, with the same exact speed-boost feature that AMD now uses on their cards? So in that sense it doesn't really hold up, does it?


Giving AMD the market? Hardly. Allowing themselves to slack is more like it, since the bulk of the work was already done on the GK110 and there was no point in launching it as a consumer product with zero competition (which absolutely sucks for the consumers). The fact is that if AMD had actually got a competitive product to market, it would have lit a fire under Nvidia's ass and we would have seen the GK110 on the consumer market much sooner.


OK, this is the best one I've read in a while... are you serious? Did you put the horse blinders on and completely forget about the clusterfuck GK110 was when it was supposed to be released almost two years ago? It had absolutely nothing to do with AMD not forcing them to release GK110. To top it off, it took them another six months to even release the Tesla K20 series, which was originally announced in May of that year.

So stop trying to act like nvidia was just sitting there twiddling their thumbs waiting for AMD, because that was never the case. nvidia screwed up and was forced to fix it to keep their shareholders happy, end of story.
 
You ask what I'm babbling about, and then post the obvious. You didn't even read my post. I said AMD hadn't done squat since ATI produced a competing product with the 4000 series. I suppose now I've typed it in as plain English as I possibly can, but holy crap, guy. I've been talking about single-GPU cards; now you want to bring up dual-GPU cards. If you're going to go that route, you might as well take it to the full extreme of quad SLI and toss the budget right out the window, just because ATI can slap a couple of GPUs on the same board.

WHOOPDEEDOO

Sorry pal, I want serious competition from AMD, something to actually convince me to buy their products.

5870 was pretty pimped out :)
 
And ya know what? When framerates dipped on Sega and SNES games, it sucked then too. The entire point is that even if I drop $1000, I still can't get smooth gameplay with everything maxed. Running smoothly is part of gameplay. Obviously I could just turn the details down, but I don't want to. I could just buy another videocard, but that doesn't get me the minimum FPS I want either.

That is my point.

There is always going to be a game that isn't coded properly, even on a $2000 3-way SLI setup.

This obsession with max settings is pointless.

Just play your games and move on; you just said yourself that buying another videocard doesn't get you the minimum fps you require, and it may never happen in all games.
 
Impossible, Nvidia have never written a bad driver ever in the history of videocards.
Never!
Nvidia can't make a mistake, ever.
It's told in the tablets left by Moses.

I find the mythical tales people tell about how well nvidia works sad at best.

Speak the truth, brother/sister!

/s
 
This obsession with max settings is pointless.

Buut, I would like to be able to run at least a 6-year-old game maxed out:
[benchmark chart]


:(
 
I tried that line once around here ... put on your fire suit!

Hehe, I'm well aware of my present location; this is [H]. Go big or go home.

But even on a site like this there has to be some common sense and decisions based on logic :p

We're both entitled to our opinions.

And for me, I will take 1600p on very high with no AA to get an 80fps average and 40fps minimums.

While there are others that need to have 1600p with 8x AA, getting a 45fps average and 20fps minimums, and they are fine with that.

I will sacrifice max settings for performance; others prefer to throw $1000 at the problem.

To each their own.

Buut, I would like to be able to run at least a 6-year-old game maxed out:
[benchmark chart]


:(

I would drop 4x SSAA and go to 4x MSAA, and like magic: 60fps!
 