NVIDIA GPU Generational Performance Part 1 @ [H]

This was after 15 minutes of running around at 4K on the Very High preset. I never saw my 1080 Ti drop below 2000 MHz (a quick way to log that yourself is sketched below).

[Attachment 91744: screenshot]

I got stuck on that TR level and stopped playing the game o_O

But back on topic, great article! The Pascal cards are awesome; my 1070 has served me well for over two years now. I only wish it were the Ti, as I have a 4K screen and have to dial down the graphics settings more than I'd like to hit 60 fps.
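
Side note for anyone who wants to sanity-check boost behavior the same way: here's a minimal sketch, assuming the nvidia-smi CLI is installed and the card is GPU index 0, that polls the core clock once a second and tracks the lowest reading over a play session. The sample count and interval are arbitrary placeholders.

```python
import subprocess
import time


def log_core_clock(samples: int = 900, interval_s: float = 1.0) -> None:
    """Poll the graphics clock via nvidia-smi and track the lowest reading."""
    lowest = None
    for _ in range(samples):  # 900 x 1 s is roughly a 15-minute session
        result = subprocess.run(
            ["nvidia-smi", "--id=0", "--query-gpu=clocks.gr",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        )
        mhz = int(result.stdout.strip())
        lowest = mhz if lowest is None else min(lowest, mhz)
        print(f"core clock: {mhz} MHz (lowest so far: {lowest} MHz)")
        time.sleep(interval_s)


if __name__ == "__main__":
    log_core_clock()
```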
 
Dope idea Kyle :D

Thank you for doing this. Terrific article. Much awesome. Such majestic.
 
That said, in my experience, even with a +250 MHz core clock boost, the 980 Ti I had could "only" equal my 1070 (non-Ti). The 1070 would bench higher, but the 980 Ti would keep a higher minimum framerate. I just give these as comparison points; I also could have been CPU-bound.

Just some thoughts on Pascal vs Maxwell.

Sounds about right. I get similar framerates with my overclocked 980 Ti to my friend's ASUS Strix 1070, perhaps even close to a reference 1080. That's why I haven't bothered upgrading and will jump straight to the 11 series (unless a used 1080 Ti turns up at a very affordable price). Nvidia was incredibly conservative with the clocks on the reference 980 Ti, and it annoys me that it is used in a lot of GPU comparisons because it's not even close to what the card can handle.
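
On the "benches higher but keeps a lower minimum" point above: average fps and minimum (or 1% low) fps both come from per-frame render times, and the two can easily disagree. Here is a small illustrative sketch in Python; the frametime logs are made-up numbers, not data from this article, and the 1% low calculation is just one common convention.

```python
def fps_summary(frametimes_ms: list[float]) -> dict[str, float]:
    """Average fps and a '1% low' figure from a list of frametimes in ms."""
    avg_ms = sum(frametimes_ms) / len(frametimes_ms)
    # Average the slowest 1% of frames to capture stutter.
    slowest = sorted(frametimes_ms, reverse=True)
    worst_slice = slowest[:max(1, len(slowest) // 100)]
    low_ms = sum(worst_slice) / len(worst_slice)
    return {"avg_fps": 1000.0 / avg_ms, "one_pct_low_fps": 1000.0 / low_ms}


# Two hypothetical runs with nearly identical averages but very different lows:
steady = [16.0] * 990 + [18.0] * 10   # consistent pacing, mild dips
spiky = [15.0] * 990 + [120.0] * 10   # faster most of the time, nasty stutter
print("steady:", fps_summary(steady))  # ~62 fps average, ~56 fps 1% low
print("spiky: ", fps_summary(spiky))   # ~62 fps average, ~8 fps 1% low
```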
 
Very cool idea, but the information isn't going to be useful to most people without mid-tier cards being used as well. A 970/1070 tier comparison would likely engage a lot more readers.

I'll look into that possibility if we get a lot of good feedback. Right now, the focus on high-end is a good starting point. We will be testing 6 AMD cards, FYI.
 
Nice review. The only thing I'd have liked to see in addition would have been the 70 series, but that's only because I'm using a 970, so bias. :) I get why the focus is on the 80 and Ti cards.

I somewhat agree with the question posed near the end of conclusion 3, that except for a handful of games the 1080 hasn't really been taxed much by anything at 1440p. Of course, 4K is becoming the new hotness, so it might behoove [H] to conduct the performance review of the 1080 Ti at both 1440p and 4K, since it's just going to stomp all over 1440p. The 4K results will give a better picture of where we may be heading with the 1180 and 1180 Ti.

Thanks for the feedback and suggestion.
 
Going from 28 nm to 16 nm gave Nvidia a 60% performance boost; 16 nm to 12 nm should give us about a 40% boost from a GTX 1080 to a GTX 1180. It should be a good upgrade.

Going by past history, an 1180 should be about 25% faster than a GTX 1080 Ti at about the same price point, and a GTX 1170 should be about equal to a GTX 1080 Ti and about $175 cheaper.
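
For what it's worth, here is the compounding arithmetic behind that kind of projection as a quick sketch. The percentages are the assumptions stated above (speculation, not measurements), and the GTX 980 baseline is arbitrary.

```python
# Speculative generational scaling expressed as relative performance multipliers.
# These percentages are the assumptions from the post above, not benchmark data.
steps = [
    ("GTX 980  -> GTX 1080   (28 nm -> 16 nm)", 1.60),  # claimed ~60% uplift
    ("GTX 1080 -> 'GTX 1180' (16 nm -> 12 nm)", 1.40),  # assumed ~40% uplift
]

relative = 1.0  # GTX 980 as the baseline
for label, multiplier in steps:
    relative *= multiplier
    print(f"{label}: x{multiplier:.2f}  (cumulative x{relative:.2f} vs GTX 980)")
```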
 
I would not have even come close to guessing the performance advances. Many benchmarks and reviews have been done over the years, but this article nailed it on generational hardware and game development. Makes me wonder whether the hardware is pushing game development or the games are pushing the hardware. Maybe both. [H]ardOCP again looks set up to be the one best able to accurately review how Turing (or whatever Nvidia calls the next generation) performs from a proper historical point of view. Bottom line: no one does GPU testing, particularly gaming, as well as [H]ardOCP.
 
I'll look into that possibility if we get a lot of good feedback. Right now, the focus on high-end is a good starting point. We will be testing 6 AMD cards, FYI.
Absolutely love this article and look forward to the AMD comparison as well. I really hope you can include the 7970, even though it's from, what, 2011? I think the AMD review may actually shed some light on the "fine wine" idea and where it comes from, or whether it's a fallacy like many here will tell you.
 
Man... 980 to 1080 was an i7 920 to i7 2700K kind of bump. 780 to 980 is what we've been seeing from Intel ever since.

Taking bets: 1080 to 1180 will be a gain (as a percentage) closer to 780 -> 980 than 980 -> 1080. Call it a hunch; I don't see lightning striking twice back to back.
The 980 was the end of four generations of cards on 28 nm; the gains were not impressive because the node was already near its limit. We're moving from 16 nm to 12 nm with the next generation. We might not see a 70% improvement across the line like we did with Pascal, but it should certainly be better than the 20% we saw with Maxwell.
 
Excellent article! Enjoyed it, as it answered many questions, but I do agree with A little teapot: you don't see 1080 cards in the Steam survey until you get down to the bottom 5%. I'm still gaming at 1920x1200 at max settings on a 970 SSC. I like the 1440p testing, but there should be 1080p in there too. Heck, I'm waiting for 4K. 1180? Maybe.
 
Good article. I know that you are pushing the cards by running at 1440p, but I would have also run benchmarks at 1080p. From my perspective, 1080p was the "upper limit" for most monitors at the time of the 780's release, and measuring the improvement from the 780 to the 1080 at 1080p would have been interesting.

Also, since the ten-year-old "Can it run Crysis?" joke is still somewhat valid, I'm surprised that the original Crysis was not included.
 
Putting some glorious GPU meat back onto the front page, I fucking love it. While other sites are reposting rumors as clickbait crapfests that change their 'inside story' every day, you guys are still delivering real journalism on relevant topics, getting your hands dirty and putting in the time to actually create content. I'll never stop lauding HARDOCP for delivering videocard porn.

It is really something to see exactly how much of an improvement the GTX 1080 was. Considering the trend of smaller and smaller incremental ticks that had been established, Nvidia put out a comparatively much better performer across virtually every metric: speed, architecture, power consumption, heat, and overall features, not to mention the raw power it delivered in VR, which was only eclipsed by the Titan through brute force. If 4K weren't such a monumental leap over 1080p, I think history would have remembered the GTX 1080 in an even more favorable light.

I'm honestly really excited to see the AMD side reach back to the transition to GCN architecture, and their incremental improvements on the boutique market.

Also:
"We are running the game at the default "Ultra High" setting, which note is not the highest the game can go, it can go even higher with distance sliders.

This game puts the GeForce GTX 780 and GeForce GTX 980 in their coffins" -Brent Justice on Kingdom Come: Deliverance

Sorry fanboys, he is not trying to tell us these videocards are vampires.

Thanks Brent.
 
I was a big hater on the 780; I went AMD for that cycle and was still doing well when the 980 came about. Looks like I planned it right, as I was able to land a good deal on the 1080 shortly after release. The upgrade to the 1080 at 1440p was like adding a Voodoo 1 to my system back in the day: it was so effortless in everything I played, maxing out games without thinking twice. I think the GTX 1080 might be one of my favorite GPU releases of all time, right there with the ATI 9800 Pro/4870 and the 7800 GTX.
 
Great article, but on page 16 you mentioned "Far Cry 5 doesn't seem to be as demanding as Far Cry 4 was." If Far Cry 4 has better average and minimum frame rates on all three cards, then how is it more demanding than Far Cry 5?

Far Cry 4 both looks better and runs better, but people still consider Far Cry 5 better optimized. It's interesting that even Kyle assumes Far Cry 5 is less demanding without looking at his own benchmarks.
 
Great article, but on page 16 you mentioned "Far Cry 5 doesn't seem to be as demanding as Far Cry 4 was." If Far Cry 4 has better average and minimum frame rates on all three cards, then how is it more demanding than Far Cry 5?

Far Cry 4 both looks better and runs better, but people still consider Far Cry 5 better optimized. It's interesting that even Kyle assumes Far Cry 5 is less demanding without looking at his own benchmarks.
We don't use benchmarks. We play the game and up the eye candy so that we still get acceptable frame rates, so cross-comparing old data is not going to expose what you are looking for.
 
OK, you don't use canned benchmarks, but I just wanted to point out that there is a contradiction between your results and your comment that FC5 is less demanding than FC4.

Page 6, Far Cry 4 average frame rate: GTX 780 = 43 fps, GTX 980 = 54 fps, GTX 1080 = 93 fps
Page 16, Far Cry 5 average frame rate: GTX 780 = 31 fps, GTX 980 = 50 fps, GTX 1080 = 85 fps

Considering your results from this article, how can you say "Far Cry 5 doesn't seem to be as demanding as Far Cry 4 was"?

You need to look beyond the pure FPS and observe the settings used in-game (High vs. Very High vs. Ultra, stuff like AA, etc.).
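
One way to read those quoted averages without cross-comparing the two games directly is to look at each game's own generational scaling, which is closer to the comparison the article actually makes. A quick sketch using only the numbers quoted above (keeping in mind that the in-game settings differ between the two games, so the absolute fps are not comparable):

```python
# Average fps quoted above. The two games were tested at different settings,
# so absolute numbers are not directly comparable across games; the per-game
# generational scaling is the more meaningful comparison.
avg_fps = {
    "Far Cry 4": {"GTX 780": 43, "GTX 980": 54, "GTX 1080": 93},
    "Far Cry 5": {"GTX 780": 31, "GTX 980": 50, "GTX 1080": 85},
}

for game, fps in avg_fps.items():
    gain_780 = fps["GTX 1080"] / fps["GTX 780"] - 1
    gain_980 = fps["GTX 1080"] / fps["GTX 980"] - 1
    print(f"{game}: 780 -> 1080 = +{gain_780:.0%}, 980 -> 1080 = +{gain_980:.0%}")
```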
 
OK, you don't use canned benchmarks, but I just wanted to point out that there is a contradiction between your results and your comment that FC5 is less demanding than FC4.

Page 6, Far Cry 4 average frame rate: GTX 780 = 43 fps, GTX 980 = 54 fps, GTX 1080 = 93 fps
Page 16, Far Cry 5 average frame rate: GTX 780 = 31 fps, GTX 980 = 50 fps, GTX 1080 = 85 fps

Considering your results from this article, how can you say "Far Cry 5 doesn't seem to be as demanding as Far Cry 4 was"?
You cannot cross-compare frame rates in our reviews like that.
 
I'll look into that possibility if we get a lot of good feedback. Right now, the focus on high-end is a good starting point. We will be testing 6 AMD cards, FYI.

I like this. I was hoping for the same kind of deal with AMD.
 
OK, you don't use canned benchmarks, but I just wanted to point out that there is a contradiction between your results and your comment that FC5 is less demanding than FC4.

Page 6, Far Cry 4 average frame rate: GTX 780 = 43 fps, GTX 980 = 54 fps, GTX 1080 = 93 fps
Page 16, Far Cry 5 average frame rate: GTX 780 = 31 fps, GTX 980 = 50 fps, GTX 1080 = 85 fps

Considering your results from this article, how can you say "Far Cry 5 doesn't seem to be as demanding as Far Cry 4 was"?

Simply put, Far Cry 4 supports features Far Cry 5 does not. Far Cry 4 has tessellated Godrays; Far Cry 5 uses a simpler form of volumetric fog and Godrays. Far Cry 4 supports HBAO+ ambient occlusion; Far Cry 5 uses a lesser, less accurate form. Far Cry 4 supports true soft shadows; Far Cry 5 does not. Far Cry 4 supports simulated fur; FC5 does not. Far Cry 5 also seems to have taken a step backwards by reducing the LOD distance and world detail. Having now played Far Cry 4 and 5 back-to-back for this review, I can unequivocally state that Far Cry 4 looks superior to and more detailed than FC5, and that FC5 is a step backwards. Therefore, FC5 is not as demanding in graphics, features, or image quality.
 
The 980 to 1080 jump was huge, and Pascal is the best-executed lineup to date. I'm definitely not expecting the next generation to provide a bigger or even similar jump, at least under current conditions. And honestly, I'm very happy to see that even after two years my 1080 has so much performance left in it at 1440p (except for unoptimized junk like KCD) that it can most likely live another couple of years before getting demoted to an HTPC card. Without going nuts with ultra settings, there's quite a bit of headroom.
 
The 980 to 1080 jump was huge, and Pascal is the best-executed lineup to date. I'm definitely not expecting the next generation to provide a bigger or even similar jump, at least under current conditions. And honestly, I'm very happy to see that even after two years my 1080 has so much performance left in it at 1440p (except for unoptimized junk like KCD) that it can most likely live another couple of years before getting demoted to an HTPC card. Without going nuts with ultra settings, there's quite a bit of headroom.

Citation needed...
 
Very cool idea, but the information isn't going to be useful to most people without mid-tier cards being used as well. A 970/1070 tier comparison would likely engage a lot more readers.

As much more of an x60 or x70 guy myself, I would love to see that comparison. I still found this one very interesting, though! :)
 
The 980 to 1080 jump was huge, and Pascal is the best-executed lineup to date. I'm definitely not expecting the next generation to provide a bigger or even similar jump, at least under current conditions. And honestly, I'm very happy to see that even after two years my 1080 has so much performance left in it at 1440p (except for unoptimized junk like KCD) that it can most likely live another couple of years before getting demoted to an HTPC card. Without going nuts with ultra settings, there's quite a bit of headroom.

KCD is on CryEngine, which is a highly optimized engine by now. It is simply very demanding because of its high-quality graphics, detail, and polygon usage.
 
Brent, Kyle, thanks so much for the article. A lot of work went into it.

It's a hell of a lot better than the cookie-cutter trash on most other tech websites.

Looking forward to part 2.
 
KCD is on CryEngine, which is a highly optimized engine by now. It is simply very demanding because of its high-quality graphics, detail, and polygon usage.

I'm tempted to try it now; there were so many complaints about bugs that the game also had an aura of being unoptimized on top of that.
 
I'm tempted to try it now; there were so many complaints about bugs that the game also had an aura of being unoptimized on top of that.

A patch was released early on after its launch that improved performance quite a bit actually. I remember testing it before and after the patch and noticed a big difference.
 
Really enjoyed the article, especially because I just upgraded from a GTX 760 to a GTX 1070 Ti, so I know firsthand how large the performance jump can be when going up multiple generations.

One question: In 2013, how many people were even playing at 1440p? I don't really think 2K monitors were that common back then, and I bet even the majority of enthusiasts were still on 1080p resolution. Reducing the resolution is also one of the best ways to extend the useful life/performance of an older card. I feel like you could've kept the 780 in "usable" territory even on 2018 games at 1080p and medium settings.

I realize that probably wasn't what you were going for, because the 980 and 1080 would be putting up ridiculous numbers at 1080p. I was just wondering if you had considered evaluating the performance benefits of dropping the resolution for the older-gen cards.
 
I like what you're doing, as I've been building my own 4K setup, but I'm an older guy and like third-person, world-view games; back in my Eyefinity iRacing days I loved the memory bus width of the AMD cards of the time, since they just held up under demand, as if built for it. Here is a Ryzen 1400 at stock speed with 16 GB of 2133 MHz RAM and a 290X NE at stock, going through a Club 3D DP-to-HDMI 2.0 adapter to a Seiki 42UM 4K 60 Hz panel (hhgregg, $249) with a firmware upgrade, playing World of Warships over USB Wi-Fi at 4K Very High in DX11 at around 40-60 fps. My phone camera sucks, but the point is this is what you get if you plan for the game you play and spend about $1,000 including the panel, on current drivers.

my kitchen TV

Using a handheld mobile cam recording to display "quality" is like putting the cart before the horse...my poor eyes :/
 
Really enjoyed the article, especially because I just upgraded from a GTX 760 to a GTX 1070 Ti, so I know firsthand how large the performance jump can be when going up multiple generations.

One question: In 2013, how many people were even playing at 1440p? I don't really think 2K monitors were that common back then, and I bet even the majority of enthusiasts were still on 1080p resolution. Reducing the resolution is also one of the best ways to extend the useful life/performance of an older card. I feel like you could've kept the 780 in "usable" territory even on 2018 games at 1080p and medium settings.

I realize that probably wasn't what you were going for, because the 980 and 1080 would be putting up ridiculous numbers at 1080p. I was just wondering if you had considered evaluating the performance benefits of dropping the resolution for the older-gen cards.

Back in 2013 the cool thing was still 30" 2560x1600 displays, but the transition to 1440p was already happening even before that. Sadly, 1440p won the standards war, even though 2560x1600 was so much better in my opinion.
 
You guys should have used World of Warcraft as one of the test games, and then tell me why my fps on a 1080 is in the gutter!
 
Really enjoyed the article, especially because I just upgraded from a GTX 760 to a GTX 1070 Ti, so I know firsthand how large the performance jump can be when going up multiple generations.

One question: In 2013, how many people were even playing at 1440p? I don't really think 2K monitors were that common back then, and I bet even the majority of enthusiasts were still on 1080p resolution. Reducing the resolution is also one of the best ways to extend the useful life/performance of an older card. I feel like you could've kept the 780 in "usable" territory even on 2018 games at 1080p and medium settings.

I realize that probably wasn't what you were going for, because the 980 and 1080 would be putting up ridiculous numbers at 1080p. I was just wondering if you had considered evaluating the performance benefits of dropping the resolution for the older-gen cards.
I had a 2K monitor in 2013. I went to 1440p in 2014.
 
I'd hope a card four years newer would show a marked improvement over its predecessors, but that's not always the case, and Nvidia had that issue from the 200 series to the 600 series of cards. So I can't quite take these numbers seriously. Not because I doubt the veracity or the methodology of the tester, but simply because the premise assumes a level playing field with no variables, and that's simply not the case.

For one thing, it's known that the coding in many of these triple-A titles is sloppy and often wastes resources (rendering an entire ocean when a pond is all you can see, for example). Take GTA 5: it is still considered a poorly optimized game that can drop even midrange current-gen GPUs below 30 fps at the test settings, five years later!

So the standard is essentially based on performance comparisons of poorly optimized software, where the only fix is to throw more hardware at it or fudge the drivers. We accept that as a good thing? What do we really learn there? Even Prime95 actually tried to do something useful with the resources it consumed while we all burned in our CPUs!

There's also the factor of Nvidia drivers that tend to gimp older hardware in favor of newer. AMD does it too, BTW. I've seen this for myself with a pair of GTX 680s that found better performance after rolling back a driver update. And what about the driver tweaks and optimizations for so many of these games that artificially hobble older cards and AMD GPUs? We know Nvidia isn't above artificially "tweaking" drivers to help market its new cards.

Look, there's no doubt that a 1080 is a better-performing, more efficient card than a 780, just as a 780 is compared to a 480. I just don't believe it's a revolution so much as an evolution. And with all the hijinks going on behind the scenes between triple-A titles and GPU vendors, I think the comparisons rest on yet another flawed premise: comparisons based on such games, with claims of 50% or greater performance deltas, can't be trusted, simply because there's too much subjectivity, brought on not by Brent but by the subjective nature of video game performance, not to mention the influence of marketing on things like drivers and optimizations that are unseen and unchangeable by the average gaming enthusiast.

We see it with AMD as well: things like checks for driver strings that won't allow a game to run (I'm thinking of BF1 here), where a simple edit in a config file to masquerade as a newer GPU allowed it to run without issue.

That's marketing at play, and as long as it exists you can only use game benchmarks as a secondary indicator of the REAL performance delta between generations.

Performance testing is boring, clinical work done in strict, controlled environments where all the variables are known. Games don't offer that; instead you're really testing an experience, which is subjective.
 
I'd hope a card four years newer would show a marked improvement over its predecessors, but that's not always the case, and Nvidia had that issue from the 200 series to the 600 series of cards. So I can't quite take these numbers seriously. Not because I doubt the veracity or the methodology of the tester, but simply because the premise assumes a level playing field with no variables, and that's simply not the case.

For one thing, it's known that the coding in many of these triple-A titles is sloppy and often wastes resources (rendering an entire ocean when a pond is all you can see, for example). Take GTA 5: it is still considered a poorly optimized game that can drop even midrange current-gen GPUs below 30 fps at the test settings, five years later!

So the standard is essentially based on performance comparisons of poorly optimized software, where the only fix is to throw more hardware at it or fudge the drivers. We accept that as a good thing? What do we really learn there? Even Prime95 actually tried to do something useful with the resources it consumed while we all burned in our CPUs!

There's also the factor of Nvidia drivers that tend to gimp older hardware in favor of newer. AMD does it too, BTW. I've seen this for myself with a pair of GTX 680s that found better performance after rolling back a driver update. And what about the driver tweaks and optimizations for so many of these games that artificially hobble older cards and AMD GPUs? We know Nvidia isn't above artificially "tweaking" drivers to help market its new cards.

Look, there's no doubt that a 1080 is a better-performing, more efficient card than a 780, just as a 780 is compared to a 480. I just don't believe it's a revolution so much as an evolution. And with all the hijinks going on behind the scenes between triple-A titles and GPU vendors, I think the comparisons rest on yet another flawed premise: comparisons based on such games, with claims of 50% or greater performance deltas, can't be trusted, simply because there's too much subjectivity, brought on not by Brent but by the subjective nature of video game performance, not to mention the influence of marketing on things like drivers and optimizations that are unseen and unchangeable by the average gaming enthusiast.

We see it with AMD as well: things like checks for driver strings that won't allow a game to run (I'm thinking of BF1 here), where a simple edit in a config file to masquerade as a newer GPU allowed it to run without issue.

That's marketing at play, and as long as it exists you can only use game benchmarks as a secondary indicator of the REAL performance delta between generations.

Performance testing is boring, clinical work done in strict, controlled environments where all the variables are known. Games don't offer that; instead you're really testing an experience, which is subjective.
Sorry to have wasted your time. You are due a full refund.
 
I'd hope a card four years newer would show a marked improvement over its predecessors, but that's not always the case, and Nvidia had that issue from the 200 series to the 600 series of cards. So I can't quite take these numbers seriously. Not because I doubt the veracity or the methodology of the tester, but simply because the premise assumes a level playing field with no variables, and that's simply not the case.
Tesla to Fermi was a change in NVIDIA's philosophy toward power above all else. They threw efficiency out the window, and it showed in the lack of performance gains and the heat generated. Kepler was the first architecture developed under their new philosophy of a balanced approach toward efficiency. NVIDIA had gotten complacent after the reign of the Tesla architecture. Still, ATi saw similarly "anemic" performance gains during that time as they dealt with their own issues in design philosophy.
For one thing, it's known that the coding in many of these triple-A titles is sloppy and often wastes resources (rendering an entire ocean when a pond is all you can see, for example). Take GTA 5: it is still considered a poorly optimized game that can drop even midrange current-gen GPUs below 30 fps at the test settings, five years later!

So the standard is essentially based on performance comparisons of poorly optimized software, where the only fix is to throw more hardware at it or fudge the drivers. We accept that as a good thing? What do we really learn there? Even Prime95 actually tried to do something useful with the resources it consumed while we all burned in our CPUs!
So do you have code samples and application profiling results from these games to share with us that prove that?
There's also the factor of Nvidia drivers that tend to gimp older hardware in favor of newer. AMD does it too, BTW. I've seen this for myself with a pair of GTX 680s that found better performance after rolling back a driver update. And what about the driver tweaks and optimizations for so many of these games that artificially hobble older cards and AMD GPUs? We know Nvidia isn't above artificially "tweaking" drivers to help market its new cards.
This has been debunked several times. Lack of updates to a previous architecture's code base does not mean "gimping."
Look, there's no doubt that a 1080 is a better-performing, more efficient card than a 780, just as a 780 is compared to a 480. I just don't believe it's a revolution so much as an evolution. And with all the hijinks going on behind the scenes between triple-A titles and GPU vendors, I think the comparisons rest on yet another flawed premise: comparisons based on such games, with claims of 50% or greater performance deltas, can't be trusted, simply because there's too much subjectivity, brought on not by Brent but by the subjective nature of video game performance, not to mention the influence of marketing on things like drivers and optimizations that are unseen and unchangeable by the average gaming enthusiast.
The testing parameters are laid out right in the article and they are consistent across the spectrum of tests, including game patches. What subjectivity are you referring to, specifically?
We see it with AMD as well: things like checks for driver strings that won't allow a game to run (I'm thinking of BF1 here), where a simple edit in a config file to masquerade as a newer GPU allowed it to run without issue.

That's marketing at play, and as long as it exists you can only use game benchmarks as a secondary indicator of the REAL performance delta between generations.
No, that is quality assurance at play because the hardware is not supported and the developer doesn't want to get complaints or support tickets for unsupported hardware. GPU architecture is more than the total number of transistors in the core.
Performance testing is boring, clinical work done in strict, controlled environments where all the variables are known. Games don't offer that; instead you're really testing an experience, which is subjective.
The only variable in [H]'s testing is the RNG in the game loop, because they want to show us, the readers, what one can expect in real-world usage. Benchmarking can't provide that. The most recent case of a benchmark being vastly different from the actual experience was Deus Ex: Mankind Divided, which is a perfect example of why benchmarks should not be used as a metric for the gameplay experience.

Seeing as you seem to lurk the FS/FT section and come out randomly every couple of years to post thoughts in other threads, I've probably wasted my time with this post. But it seems that [H] is not the hardware site for you.
 
I’ll add that Kepler was pretty good for desktop gaming but it was also awesome with mobile.

After a stretch of anemic mobile upgrades from Nvidia and practically zero from AMD, we got some seriously powerful stuff in mobile again.
 