Has Nvidia Forgotten Kepler? The GTX 780 Ti vs. the 290X Revisited

There is more than enough information in this article to conclude that Nvidia has not deliberately crippled Kepler; it has simply shifted its optimization focus to Maxwell.

The 290X has improved, but in my opinion it should have: AMD is still focused on the 390X, which is essentially the same card as the 290X with better cooling and more RAM.

Has Nvidia Forgotten Kepler? The GTX 780 Ti vs. the 290X Revisited - BabelTechReviews

I'm going to ask you guys to please discuss the data in the linked article, not fight over whether it's a shill site, a biased article, etc. etc. (y)
 
This was always my opinion from the get-go: AMD has continued to optimize Hawaii-based cards while NVIDIA has simply been focusing their efforts on Maxwell. It makes no sense for NVIDIA to go in and purposefully hurt Kepler-based cards. The most infuriating thing throughout all this is when people think that their 780 Ti should be on par with the performance of the GTX 980 whenever data comes out on a new AAA release. Technology is constantly advancing.
 
This was always my opinion from the get-go: AMD has continued to optimize Hawaii-based cards while NVIDIA has simply been focusing their efforts on Maxwell. It makes no sense for NVIDIA to go in and purposefully hurt Kepler-based cards. The most infuriating thing throughout all this is when people think that their 780 Ti should be on par with the performance of the GTX 980 whenever data comes out on a new AAA release. Technology is constantly advancing.


Yeah that is my take too.
 
It doesn't matter if Nvidia is actively sabotaging Kepler performance or not. The end result is the same. Look at the release dates and prices of the cards in question:

February 2013 - Nvidia Titan - $1,000
October 2013 - AMD R9 290X - $550
November 2013 - Nvidia GTX 780Ti - $700

All three cards released the same year. Two of the cards released within weeks of each other (290X and 780Ti). Yet today, the AMD 290X is faster than not only the more expensive GTX 780Ti but also the MUCH more expensive Titan. This only reinforces my opinion that if you can afford to continually purchase the newest video cards available, you'll probably buy Nvidia. If you expect to hang on to your card for a few years or want the best bang-for-your-buck, you're better off with an AMD card.
 
Nah, there aren't enough console ports on PC to show that kind of influence. Gotta remember the Xbox One didn't have DX12 until just a few months ago. The game list used in this review doesn't even separate console ports from PC-only titles either, although they did break the DX11 games into newer/older brackets, and all of them seem to show the same effect. If it were due to console influence, the newer games should be skewed in GCN's favor, which doesn't seem to happen.
 
Problem is communication! The argument wasn't sabotage, it was no longer optimizing for Kepler. Besides, the article is trash and only states obvious conclusions. Benching 2-3 year old games won't likely show much change for either vendor. Look at the 2015-16 games and the difference between the 780 Ti and 290X. Granted, enough time has passed today that the issue is of little importance, but a year ago it was valid and obvious to those being reasonable. Really think about it from a purchaser's standpoint. The 780 Ti matching the Titan almost 9 months later seemed to ruffle a few feathers, but that pales in comparison to the 980 Ti a year later. So I get some of the frustration.
 
It seems to be the opinion of this guy that the consoles have something to do with it.
AnandTech Forums - View Single Post - Has Nvidia Forgotten Kepler? The GTX 780 Ti vs. the 290X Revisited
What do you guys think?

Happy, here are his posts from the other thread. His explanation makes sense, but like I stated, it would help if a respected tech site did a deep dive as well.

AnandTech Forums - View Single Post - Why Doesn't Anyone Get to the Bottom of the Aging of Kepler and GCN?

AnandTech Forums - View Single Post - Why Doesn't Anyone Get to the Bottom of the Aging of Kepler and GCN?
 
Yeah, he missed a big topic in there: occupancy isn't the same as utilization. GCN currently gets great occupancy, but many other things in the GPU stop the ALUs from being fully utilized. Again, both GCN and Maxwell are good at certain things, and Maxwell's utilization and occupancy of its shader array is definitely stronger than GCN's.

NV doesn't need to move to a more GCN-like architecture to be competitive. AMD has been moving toward a scalar architecture with GCN; nV has been scalar since the G80. If anything, AMD can learn from nV's pipelines: doing more with less. But by going down that road, there must be silicon in nV's architecture that gives it better occupancy and utilization to "keep up" with GCN's raw theoretical numbers. Which way is better? I don't know, because there are advantages to both. Just look at Intel vs. AMD in the CPU space: which one can do more with less? Intel, currently, and they have a large lead because of a mistake AMD made, but looking at multi-core application performance, AMD fares well. Of course, the GPU space isn't as divergent as the CPU space. But one can't say an architecture is "better" or "more advanced" by looking at these things, nor can they say one IHV has to move toward the other. It will come down to what new things can help overcome current bottlenecks and how they can be implemented without restricting older code paths or creating other bottlenecks.

Overall it's give and take from both sides; it's never one-sided. Since DX and Vulkan are both built with the support of both vendors, they are well aware of the features that are coming, so neither of them is left out in the cold when designing their GPUs. Granted, in the short term it might seem GCN has an inherent advantage, but we have seen this in the past: consoles tend not to be the deciding factor in the PC benchmark wars.
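To put rough numbers on the "raw theoretical numbers" point, here is a back-of-the-envelope sketch. The shader counts and clocks are the published reference/boost specs; the effective-utilization figures are purely hypothetical placeholders, only there to illustrate the occupancy/utilization distinction discussed above:

```python
# Back-of-the-envelope FP32 throughput comparison (published shader counts and
# typical boost clocks). The "effective utilization" figures further down are
# made-up placeholders, only there to show why peak TFLOPS alone doesn't
# decide real-world performance.

cards = {
    #              (shader ALUs, clock in MHz)
    "R9 290X":     (2816, 1000),
    "GTX 780 Ti":  (2880,  928),
    "GTX 980":     (2048, 1216),
}

def fp32_tflops(alus, mhz):
    # 2 FLOPs per ALU per clock (fused multiply-add)
    return alus * mhz * 1e6 * 2 / 1e12

for name, (alus, mhz) in cards.items():
    print(f"{name:11s} peak ~{fp32_tflops(alus, mhz):.1f} TFLOPS")

# Hypothetical per-architecture utilization of that peak in some game engine:
utilization = {"R9 290X": 0.70, "GTX 780 Ti": 0.75, "GTX 980": 0.90}  # placeholders
for name, (alus, mhz) in cards.items():
    delivered = fp32_tflops(alus, mhz) * utilization[name]
    print(f"{name:11s} delivered ~{delivered:.1f} TFLOPS at {utilization[name]:.0%} utilization")
```

On paper the 290X and 780 Ti are within a few percent of each other, which is exactly why the occupancy/utilization argument matters more than the headline numbers.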
 
I think the guy is missing part of what AMD did for the 290X by letting board partners redesign it with better cooling and power delivery, to keep the GPU from throttling under massive heat. The last 290X design by Sapphire was one of the best, the Tri-X 290X New Edition, which was clocked at 1020/1350. Now add the drivers and it's easy to see why some 290Xs have gained a lot of performance.
 
Cliff notes: Nvidia started with better drivers; AMD's drivers were a dumpster fire and they've been getting better, since AMD hasn't moved architectures in over three years.
This is what makes me worry about Polaris in the early game.
 
^ By all indications, GCN 1.x to Polaris will not be the same sort of architectural jump as Terascale to GCN. In other words, my fingers are crossed, too.
 
This was always my opinion from the get-go: AMD has continued to optimize Hawaii-based cards while NVIDIA has simply been focusing their efforts on Maxwell. It makes no sense for NVIDIA to go in and purposefully hurt Kepler-based cards. The most infuriating thing throughout all this is when people think that their 780 Ti should be on par with the performance of the GTX 980 whenever data comes out on a new AAA release. Technology is constantly advancing.

I think the bitching started when the GTX 960 was trading blows with the GTX 780 in some GameWorks titles (and even winning, if I remember correctly). Even though Maxwell has architectural improvements, the GTX 780 has more raw power than the GTX 960 (as evidenced by the fact that it beats the card in pretty much everything out there, plus it has 48 ROPs versus 32 ROPs), so it made sense that people would be mad to see Project Cars running better on a 960 than a 780 (and 780 Ti?).
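For reference, the raw-spec gap being described works out roughly as follows (published core/ROP counts and typical boost clocks; just a paper comparison, not benchmark data):

```python
# Paper-spec comparison of the GTX 780 and GTX 960 (typical boost clocks).

def fp32_tflops(cores, mhz):
    return cores * mhz * 1e6 * 2 / 1e12   # 2 FLOPs per core per clock

def pixel_fillrate_gpix(rops, mhz):
    return rops * mhz * 1e6 / 1e9         # pixel fill rate in GPixels/s

specs = {
    "GTX 780": {"cores": 2304, "rops": 48, "mhz": 900},
    "GTX 960": {"cores": 1024, "rops": 32, "mhz": 1178},
}

for name, s in specs.items():
    print(f"{name}: ~{fp32_tflops(s['cores'], s['mhz']):.1f} TFLOPS, "
          f"~{pixel_fillrate_gpix(s['rops'], s['mhz']):.0f} GPixel/s")
```

On paper the 780 has roughly 70% more shader throughput plus the wider ROP backend, which is why the Project Cars results rubbed people the wrong way.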
 
Doesn't surprise me in the least.

AMD is still generating a lot of revenue from the GCN architecture, whereas nVidia is no longer making any Kepler cards, so the revenue from them comes solely from any leftover stock (if not actually zero).

It coincides with my understanding that nVidia drivers are usually better up front, while AMD usually takes a while to catch up, and they often do.
 
I would also think that developers, tools, etc. get more efficient with a given architecture if it sticks around longer, like GCN has. So my 290X was a great long-standing card that even today does well, as shown by the data. My Nano, on the other hand, I don't think will be a long-standing card, mostly due to its 4 GB of memory. So the article doesn't predict the future longevity of any video card; it just gives some data on the current state of older cards with a more current configuration and newer drivers.
 
I would also think that developers, tools, etc. get more efficient with a given architecture if it sticks around longer, like GCN has. So my 290X was a great long-standing card that even today does well, as shown by the data. My Nano, on the other hand, I don't think will be a long-standing card, mostly due to its 4 GB of memory. So the article doesn't predict the future longevity of any video card; it just gives some data on the current state of older cards with a more current configuration and newer drivers.

AMD is creating a cache in system memory for the Fury type cards. If you purchased fast, low latency system memory you have nothing to worry about.
 
AMD is creating a cache in system memory for the Fury type cards. If you purchased fast, low latency system memory you have nothing to worry about.


Not just Fury cards; good ol' 280X cards are also getting huge RAM caches as well:

[Screenshot: graphics memory properties showing the large system-RAM cache]
 
Doesn't surprise me in the least.

AMD is still generating a lot of revenue from the GCN architecture, whereas nVidia is no longer making any Kepler cards, so the revenue from them comes solely from any leftover stock (if not actually zero).

It coincides with my understanding that nVidia drivers are usually better up front, while AMD usually takes a while to catch up, and they often do.

Pretty much.

Want 100% performance now and don't mind only 105% 2 years down the road? Buy nVidia.

Don't mind 85% performance now to get 120% 2 years down the road? Buy AMD.

Obviously other factors come into play, but if you play the long game, you should definitely consider AMD first.
 
Pretty much.

Want 100% performance now and don't mind only 105% 2 years down the road? Buy nVidia.

Don't mind 85% performance now to get 120% 2 years down the road? Buy AMD.

Obviously other factors come into play, but if you play the long game, you should definitely consider AMD first.

85% of what? The 290X was way closer than 15% to the 780 Ti.
 
Yakk, I checked out my properties and I have a 20 GB graphics memory cache in addition to the 4 GB that comes with my R9 290. Bulletproof!
 
85% of what? The 290X was way closer than 15% to the 780 Ti.

85% of itself.

Don't mind the percentages, it's just my way of saying AMD's cards usually don't perform at 100% right out the gate, but give them time and you'll easily see an extra 20-30% performance from driver maturity.
 
Nice article. The need to tune drivers for specific games and cards is an unfortunate reality. The longer an architecture stays on the shelves the more TLC it will get.

What's up with the drop in Titan performance though?
 
^ By all indications, GCN 1.x to Polaris will not be the same sort of architectural jump as Terascale to GCN. In other words, my fingers are crossed, too.
What indications? According to AMD it's their biggest jump ever.
 
Nice article. The need to tune drivers for specific games and cards is an unfortunate reality. The longer an architecture stays on the shelves the more TLC it will get.

What's up with the drop in Titan performance though?
Kill the used market for older cards?
 
An interesting analysis, and one I think some people had suggested in the "Why does [H] suck" thread. I think the analysis shows that it's not so much a conspiracy on NVidia's part to reduce performance on older cards as it is in AMD's interest to improve performance on their older architectures (which largely still make up their current line-up). When AMD tunes drivers for the 390X, the improvements also apply to the 290X since they are both the same chip (Hawaii); when console game core code is developed (which then carries over to x86), it's developed for Southern Islands GPUs.
 
Nah, there aren't enough console ports on PC to show that kind of influence. Gotta remember the Xbox One didn't have DX12 until just a few months ago. The game list used in this review doesn't even separate console ports from PC-only titles either, although they did break the DX11 games into newer/older brackets, and all of them seem to show the same effect. If it were due to console influence, the newer games should be skewed in GCN's favor, which doesn't seem to happen.
It has nothing to do with DX12 or Async Compute. Heck, we can leave DX12 and Async out of it. Take the 2015 titles, isolate console vs. non-console titles, and this is what you get:

R9 290x vs GTX 780 Ti (non-console)
1080p: GTX 780 Ti is 8.3% faster
1440p: GTX 780 Ti is 1.7% faster

R9 290x vs GTX 780 Ti (console)
1080p: R9 290x is 16.2% faster
1440p: R9 290x is 20.8% faster

See a pattern?

Let's do the same thing but comparing a GTX 980 vs R9 290x..

R9 290x vs GTX 980 (non-console)
1080p: GTX 980 is 33.5% faster
1440p: GTX 980 is 29.3% faster

R9 290x vs GTX 980 (console)
1080p: GTX 980 is 8.1% faster
1440p: GTX 980 is 5.6% faster

Now, as we move towards DX12 and Async Compute titles, the 10-20% boost Async Compute offers, as well as the API overhead alleviation of DX12, should result in the R9 290X being around 5-15% faster than a GTX 980. (We can ignore the Rise of the Tomb Raider DX12 patch as it is broken, but once it's fixed you'll see.)

What we're seeing is the console effect. With Microsoft pushing unity between the PC and console platforms, this is going to push NVIDIA towards a more GCN-like uarch, or they won't be able to compete.
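For anyone who wants to reproduce that kind of split from the article's charts, here is a minimal sketch of the method. The FPS numbers below are placeholder values, not the article's data; swap in the real per-game results:

```python
from math import prod

# Per-game average FPS for the two cards, tagged by whether the title is a
# console port. These numbers are placeholders -- substitute the article's
# own results to recompute the percentages quoted above.
results = {
    # game:              (r9_290x_fps, gtx_780ti_fps, is_console_port)
    "Console title A":       (62.0, 53.0, True),
    "Console title B":       (48.0, 41.0, True),
    "Non-console title A":   (71.0, 76.0, False),
    "Non-console title B":   (55.0, 60.0, False),
}

def geomean(values):
    return prod(values) ** (1.0 / len(values))

for bucket, label in ((True, "console ports"), (False, "non-console titles")):
    ratios = [amd / nv for amd, nv, is_port in results.values() if is_port == bucket]
    lead_pct = (geomean(ratios) - 1.0) * 100.0
    print(f"{label}: 290X vs 780 Ti average {lead_pct:+.1f}% (geometric mean of per-game ratios)")
```

A simple arithmetic mean of the ratios shifts the exact numbers a little, but the console/non-console gap should still be visible either way.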
 
What games are you looking at? I don't see that based on the games in that review.

Mahigan, I know you are interested in this stuff, but why don't you post over at B3D? None of the programmers, engineers, etc. there think consoles will influence PC development to any appreciable degree. Even async is a short-term effect, and that is it with GCN 1.0 to 1.2.
 
What games are you looking at? I don't see that based on the games in that review.

Mahigan, I know you are interested in this stuff, but why don't you post over at B3D? None of the programmers, engineers, etc. there think consoles will influence PC development to any appreciable degree. Even async is a short-term effect, and that is it with GCN 1.0 to 1.2.
I have this ability to look at a graph full of numbers and intuitively discern patterns. But I understand not everyone is an aspie like me. So I did the math instead.

Console titles:
Star Wars Battlefront
Mad Max
Assassin's Creed
Just Cause 3
Rainbow Six
Dirt Rally
Far Cry Primal
The Division

Non console? All the others from 2015 on.

If you separate the two you get drastically differing outcomes. I don't think that this is a fluke.
 
What games are you looking at? I don't see that based on the games in that review.

Mahigan, I know you are interested in this stuff, but why don't you post over at B3D? None of the programmers, engineers, etc. there think consoles will influence PC development to any appreciable degree. Even async is a short-term effect, and that is it with GCN 1.0 to 1.2.

Well I don't think Pascal will have Async Compute + Graphics support. I think Pascal will brute force its way to a win like the GTX 980 Ti did.

As for Polaris, it will benefit from Async Compute...perhaps more so than previous GCN iterations due to the CU improvements.
 
Err, you missed quite a few console games in that list: Dying Light, GTA V, Wolfenstein, Project Cars, etc. So you missed the games that heavily favor nV?

These are console titles too. Just check: search the title name plus "Xbox One" on Google and they all pop up.

I can't believe you took out Arkham Knight; we knew that was a horrid PC port lol.
 
Well I don't think Pascal will have Async Compute + Graphics support. I think Pascal will brute force its way to a win like the GTX 980 Ti did.

As for Polaris, it will benefit from Async Compute...perhaps more so than previous GCN iterations due to the CU improvements.

I think the first part is TBS, and it's premature to even think that.
 
Non console? All the others from 2015 on.

If you separate the two you get drastically differing outcomes. I don't think that this is a fluke.

... try cross-referencing that with the game engine used for even more interesting results...
 
Err, you missed quite a few console games in that list: Dying Light, GTA V, Wolfenstein, Project Cars, etc. So you missed the games that heavily favor nV?

These are console titles too. Just check: search the title name plus "Xbox One" on Google and they all pop up.

The console versions of those titles are drastically different; they were reworked from scratch for the PC. The console titles from the end of 2015 and the beginning of 2016 onward kept most of their code optimizations intact. It's the pattern we've been seeing in most new game releases.

One game which should prove interesting is Doom. The alpha kept its console optimizations, but we'll see what happens once the PC port is completed. Another game is Quantum Break, but something tells me it will be similar to GoW upon release and will require a patch before we can compare it.

Lately, console ports have added GameWorks options as an afterthought rather than having been designed around NVAPI. I think game studios are doing this to save money, and unless NVIDIA helps bankroll titles, we're not likely to see the same results that we used to see at the beginning of 2015.
 
There is no way you can run console code on a PC; it would not work well performance-wise, so all games have to be rewritten for PC. And if you want to talk about async code, there is another thread for that, but it is very sensitive to different generations of cards, so don't expect Fiji to have the same resilience as the 290X. This has happened in every generation of consoles: the IHV gets a short-term advantage for that generation and then it disappears.
 
I think the first part is TBS, and it's premature to even think that.
It is premature but the design time frame for Pascal doesn't appear to indicate that it will be drastically different from Maxwell. I could be wrong but that's my take on it.
 
It is premature but the design time frame for Pascal doesn't appear to indicate that it will be drastically different from Maxwell. I could be wrong but that's my take on it.


Yeah, and the G80 was a 7800 on steroids. PS: it was ready when the 7900 launched, a full year before DX10 was released.

Yes, if you don't believe me, go to the G80 review sites with die shots and look at the date on the chip.

This is why I have been saying: don't mistake timing for coincidence, or downplay it because of how current technologies are marketed.
 