NVIDIA Video Card Driver Performance Review @ [H]

FrgMstr

NVIDIA Video Card Driver Performance Review - We take the NVIDIA GeForce GTX 980 Ti and NVIDIA GeForce GTX 1080 for a ride in 11 games using drivers from the Windows 10 release to the latest drivers in January of 2017. We will see if and by how much game performance has changed over the course of time on NVIDIA GPUs. Have drivers really improved performance?
 
Damn good article. And in the bottom line you asked two very good questions.

Maybe the reason Nvidia doesn't see much performance increase is that a lot of those games are Nvidia-sponsored games? So they had already eked out all the performance?

Either way, people think Nvidia has better drivers than AMD. It will take a lot for AMD to change people's perception.
 
Great review!! (Both of them, the NV and AMD driver-update reviews.) This is the kind of review I like to see!! (y)

P.S. I have the following comments:
- Since we had the Fury X's comparable, which is the 980 Ti, I believe it would have been better if we also had the RX 480's comparable (the GTX 1060) instead of the GTX 1080.
- We should take into account that the 980 Ti is the reference model, which means we should add roughly 5-10% more performance for aftermarket models of the 980 Ti. (The Fury X doesn't have any aftermarket models, so its performance will remain the same.)
 
Nice work, nice read, since I'm planning on getting a 1080 in a few months when the IRS lets me have my money.
 
There was a time when Nvidia drivers were leaps and bounds better than AMD drivers. This is not the case anymore.

I just recently built my PC after taking a hiatus of about 15 years... I remember back then I looked forward to new drivers every time; it was noticeable with each update... so I have to agree with you here for sure.
 
I must admit that Nvidia drivers in the last two years have been far from flawless at times, but you seem to get max performance for your games from the get-go (or at least with the Game Ready drivers), while with AMD cards you get your card's full potential if you can wait long enough. AMD's drivers do seem to be less buggy than Nvidia's in recent years, though.

I can at least respect AMD for keeping at it and trying to improve their hardware's performance over time.
 
Nice work. It reinforces what was generally known IMO, but either way, *grabs popcorn* because it's gonna be interesting to read the comments.
I mean, yes, it does affirm something, but in another way it just feeds into whatever people feel is right, be it fanboy one way or the other.
So that begs the questions... Does AMD launch video cards with performance left on the table in terms of drivers? Does NVIDIA launch video cards that are optimized to the utmost out of the gate?
Or?
Does AMD keep its driver engineers' noses to the grindstone eking out every bit of performance that it can find as time passes? Does NVIDIA let performance optimizations go undiscovered over time?
 
I think both sides discover improvements and release them as competition merits.
AMD is playing catch-up, so it releases them quickly.
NVIDIA releases theirs when AMD pushes hard enough.

There's no point in pushing performance to the max unless you have to, when it's harder to support (faster generally = hotter) or can be more buggy.
And it also takes away from the performance difference the next-gen cards will give.
 
Interesting article. I think this is a key question that's hard for anyone except nVidia and AMD to answer:

"So that begs the questions... Does AMD launch video cards with performance left on the table in terms of drivers? Does NVIDIA launch video cards that are optimized to the utmost out of the gate?

Or?

Does AMD keep its driver engineers' noses to the grindstone eking out every bit of performance that it can find as time passes? Does NVIDIA let performance optimizations go undiscovered over time?"

I'd guess it's the former, but obviously can't prove it.
 
I think both sides discover improvements and release them as competition merits.
AMD is playing catch-up, so it releases them quickly.
NVIDIA releases theirs when AMD pushes hard enough.

There's no point in pushing performance to the max unless you have to, when it's harder to support (faster generally = hotter) or can be more buggy.
And it also takes away from the performance difference the next-gen cards will give.

I've talked to people who think 1 fps more makes one card god-tier over another. As long as it gets the longer bar on a graph, it is considered the winner.
 
I've talked to people who think 1 fps more makes one card god-tier over another. As long as it gets the longer bar on a graph, it is considered the winner.

And that's stupid. I don't consider a card better than another until it can provide at least a 10 FPS difference in the same game at the same settings (of course within the same tier/category, in a +/-$50 price bracket). Below that margin it's just a matter of being competitive; I don't care about power use or temps if it can stay within that margin. That's competition.
 
And that's stupid. I don't consider a card better than another until it can provide at least a 10 FPS difference in the same game at the same settings (of course within the same tier/category, in a +/-$50 price bracket). Below that margin it's just a matter of being competitive; I don't care about power use or temps if it can stay within that margin. That's competition.

Agreed 100%!
 
AMD seems to be attempting to improve their hardware performance over time, something Nvidia puts much less emphasis on.



I've talked to people who think 1 fps more makes one card god-tier over another. As long as it gets the longer bar on a graph, it is considered the winner.

I think that when it's a near tie situation, improvements over time can make a difference.

I'd love to see a 7970 today versus, say, a GTX 680. At launch, the 680 was slightly faster, forcing AMD to cut prices and respond with the 7970 GHz Edition. It's kind of like a tiebreaker, or a factor to consider when you have two cards in a similar tier (e.g., RX 480 vs GTX 1060 today).

The other part is that AMD did add async abilities to their GPUs, which may or may not be a huge advantage for those who keep their cards over time (e.g., those who bought, say, a 290X over a 780 Ti, or a 290 over a 780). If so, the difference will be bigger than a few percentage points. In that regard, you could argue AMD was more "forward-looking", even if the architecture came at the expense of power consumption.



Interesting article. I think this is a key question that's hard for anyone except nVidia and AMD to answer:

"So that begs the questions... Does AMD launch video cards with performance left on the table in terms of drivers? Does NVIDIA launch video cards that are optimized to the utmost out of the gate?

Or?

Does AMD keep its driver engineers' noses to the grindstone eking out every bit of performance that it can find as time passes? Does NVIDIA let performance optimizations go undiscovered over time?"

I'd guess it's the former, but obviously can't prove it.



From what I understand, AMD does not have the resources (keep in mind they're short on cash) to optimize their drivers as much. On the other hand, I have found that AMD's drivers as of late have gotten better.
 
And that's stupid. I don't consider a card better than another until it can provide at least a 10 FPS difference in the same game at the same settings (of course within the same tier/category, in a +/-$50 price bracket). Below that margin it's just a matter of being competitive; I don't care about power use or temps if it can stay within that margin. That's competition.

The challenge, though, is seeing the behaviour of frame delivery: two cards may have a similar average fps, but one could have much worse frametime behaviour at the low end, and that does not come across clearly with just a min fps next to an avg fps, which is what a lot of other review sites give (it's much better here with what they do in the reviews).
This also applies to CPUs, and how hyperthreading can improve the lower-percentile frame rates in some games without really being picked up in the overall average, which may only be a bit higher.
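To make that concrete, here is a minimal sketch (purely synthetic frame times, not data from any review) of how two runs can share the same average fps while one has far worse 1% lows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical captures: both runs average ~16.7 ms/frame (~60 fps),
# but run B spends 2% of its frames stuttering at ~45 ms.
run_a = rng.normal(16.7, 1.0, n)
run_b = np.where(rng.random(n) < 0.02, 45.0, rng.normal(16.1, 1.0, n))

for name, ft in (("A", run_a), ("B", run_b)):
    avg_fps = 1000.0 / ft.mean()
    # "1% low" fps = fps at the 99th percentile of frame time,
    # i.e. the slowest 1% of frames.
    low_fps = 1000.0 / np.percentile(ft, 99)
    print(f"run {name}: avg {avg_fps:.1f} fps, 1% low {low_fps:.1f} fps")
```

Both averages land around 60 fps, but run B's 1% low collapses to roughly 22 fps, which is exactly the kind of difference a min/avg pair can hide.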

Cheers
 
AMD seems to be attempting to improve their hardware performance over time, something Nvidia puts much less emphasis on.

It is easy to say that if you only look at FPS numbers. But if you look at it in terms of raw performance, a 5.6 TFLOPS card consistently matching or even exceeding an 8.6 TFLOPS card in the majority of games is quite an achievement.
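For reference, those TFLOPS figures fall out of a simple back-of-envelope: 2 FLOPs per shader per clock (fused multiply-add) times shader count times clock. A quick sketch using reference shader counts and approximate clocks:

```python
# Peak FP32 throughput: 2 FLOPs per shader per clock (FMA).
def tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz * 1e6 / 1e12

print(f"Fury X: {tflops(4096, 1050):.1f} TFLOPS")  # 4096 shaders @ 1050 MHz -> ~8.6
print(f"980 Ti: {tflops(2816, 1000):.1f} TFLOPS")  # 2816 shaders @ ~1000 MHz boost -> ~5.6
```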

I'd love to see a 7970 today versus, say, a GTX 680. At launch, the 680 was slightly faster, forcing AMD to cut prices and respond with the 7970 GHz Edition.

Hardware Unboxed did a test recently of the GTX 680 vs GTX 1050 Ti vs 7970 (GHz Edition). Surprisingly, the GTX 680 was able to keep up with the 7970 just fine in new titles, when many expected the 7970 to murder the 680 outright FPS-wise.



 
many expected the 7970 to murder the 680 outright FPS-wise.

Interesting, they ran a ref 7970, which will throttle itself when it heats up. This type of testing is very typical when you don't really want to run them straight up. If you run them clock for clock, the 7970 will/should leave the 680 behind. These videos are lame in that they really are at the mercy of the whims of the source, without any defining specs. There's too much we don't know about the cards: ref or custom, what clocks were run, boost speeds, temps, etc. We just have to take their word for it due to the lack of data.
 
Nvidia - Why waste resources improving the experience on older cards when we want our customers to UPGRADE to our newest cards?

JHH is nothing if not totally mercenary in his drive for profits and growth.
 
All this says to me is NVIDIA focuses on giving maximum performance as soon as possible, and that NVIDIA's architecture is better suited to PC games today. How can there be a larger performance jump over time when there is nowhere left for the hardware to go?
 
You ended with these questions "Does AMD keep its driver engineers' noses to the grindstone eking out every bit of performance that it can find as time passes? Does NVIDIA let performance optimizations go undiscovered over time?"

My guess would be that Nvidia has performance-optimized their drivers by the time they are released, but I would also say that AMD has put a lot of work into making their drivers better in recent times (they needed to; remember the early FCAT results?),
so a bit of both, really.

Nice article BTW.
 
Interesting, they ran a ref 7970, which will throttle itself when it heats up. This type of testing is very typical when you don't really want to run them straight up. If you run them clock for clock, the 7970 will/should leave the 680 behind. These videos are lame in that they really are at the mercy of the whims of the source, without any defining specs. There's too much we don't know about the cards: ref or custom, what clocks were run, boost speeds, temps, etc. We just have to take their word for it due to the lack of data.

You are probably confusing the 7970 with the R9 290X. The 7970 and 7970 GHz don't throttle with temperature; in fact, the 7970 was a cool card at the expense of being louder than its Nvidia counterpart. The reference 7970 definitely kept better temps, even overclocked, than the reference GTX 680 cooler, and the reference 7970 board was also a better and more stable overclocker than most AIB cards.

The PowerTune boost introduced with the GHz Edition just switched between base clock (1000 MHz) and boost (1050 MHz) depending on load. If a game didn't need full power (for example, vsync'd at 60 Hz in a non-demanding game) the card ran at 1000 MHz; in a demanding game it would boost to 1050 MHz. In fact, speaking of throttling, that card only had three performance power states (P-states) in its power tables:

*3D load: 1000 MHz core + boost / 1500 MHz memory
*Multimedia playback: 500 MHz core / 1500 MHz memory
*Idle: 150 MHz core / 300 MHz memory

And that's it: everything fixed, nothing dynamic like the clocking engine used since Hawaii. Only a VRM overload (as in Kombustor or FurMark) would have been able to "throttle" the card from 1050 MHz to 1000 MHz.
 
PowerTune is the sole reason I switched to the green team.

It's just a giant hassle I don't wish to endure.

*Swapped my 380 for a 960 within a week because of clock issues.
 
Hardware Unboxed did a test recently of the GTX 680 vs GTX 1050 Ti vs 7970 (GHz Edition). Surprisingly, the GTX 680 was able to keep up with the 7970 just fine in new titles, when many expected the 7970 to murder the 680 outright FPS-wise.





The settings they used were turned down a lot so as not to trash the 680's 2 GB buffer (and they did state that it was done on purpose), whereas the 7970 is actually able to run at the higher settings: better visuals and very playable.

As an example, here's a bench pitting a 7950 against a GTX 770. The 770 used to be heaps faster, but now it's similar or even slower.




Note that the 770's competitor would be the 280X or 7970 GHz Edition, which is ~20% faster than the 7950.
 
And here's a detailed review of the GTX 780Ti in 2016 from Gamers Nexus.

Compared to the Hawaii (and even Maxwell) GPUs, it has fallen far behind.



In some games it's even slower than the GTX 960, which is an utter disgrace considering that on the release of the GTX 970 and 980 in late 2014, the GTX 780 Ti traded blows with those GPUs.

The thing is, today AMD GPUs such as the 290 and 290X, which were the 780 and Kepler Titan/780 Ti competitors, still hang with the RX 470 and 480, up there with the 390 and 390X.

If you own a 290/X from 2013, you are still getting excellent performance in all the new games maxed at 1080p, while owners of the 780, 780 Ti, and Kepler Titan have to turn down settings at 1080p.

I'm sure [H]/Kyle have a 290X laying around. It used to get smashed by the GTX 980 and the 780 Ti; it would be great if [H] took out these old dogs and tested them in major 2016 games. The results would be enlightening.
 
For the sake of full disclosure, I am currently using an nVidia graphics card in the PC I game on. While I typically alternate between nVidia and AMD based on who has the best bang for buck at the time, or which company most recently gave me a less-than-ideal experience, I can't help but feel it's a little bit of column A and a little bit of column B for the last two questions posed in the article. I don't think either company is so good or so bad that it's clear-cut one way or the other.

While I definitely don't claim any great technical knowledge of particular graphics card architectures (I'm particularly clueless on what changed with the RX 480, sorry), I hope it isn't too contentious to say that from reading around various forums I got the impression it was AMD's turn to be a little refresh-happy in recent years. Not to take anything away from AMD's driver improvements, but could the fact that larger portions of the architecture/implementation have carried over mean that older cards benefit, by extension, from the work being done to improve the current series, with less dedicated resource or attention to the older series needed to still see improvements?

Probably a few misconceptions on my part here. Nonetheless, I would be curious to know just how big a shift or fundamental change there actually is going from the 200 and 300 series to Polaris and Vega, versus Kepler, Maxwell, Pascal, and Volta, and what impact that may have on support and improvements for previous series.
 
The settings they used were turned down a lot so as not to trash the 680's 2 GB buffer (and they did state that it was done on purpose), whereas the 7970 is actually able to run at the higher settings: better visuals and very playable.

As an example, here's a bench pitting a 7950 against a GTX 770. The 770 used to be heaps faster, but now it's similar or even slower.

That makes sense, as I just knew there was some BS going on in that video. Back in the day I ran a clock-for-clock test between a watercooled 680 and 7970, and the gap was pretty sizeable. Then we compared them in high-overclock benching, and it took a 680 at 1500+ MHz to match a 1350 MHz 7970 in various 3DMark and Heaven runs.
 
That makes sense, as I just knew there was some BS going on in that video. Back in the day I ran a clock-for-clock test between a watercooled 680 and 7970, and the gap was pretty sizeable. Then we compared them in high-overclock benching, and it took a 680 at 1500+ MHz to match a 1350 MHz 7970 in various 3DMark and Heaven runs.

I'll have to see proof of an OC'd GTX 680 at 1500 MHz; I would call that BS even at 1400 under water, since 1300 MHz stable was already hard to achieve beyond a couple of fast benchmarks. Also, comparing clock for clock on GPUs is nonsense; only a fool would run clock-for-clock tests as if they were CPUs. You can do any kind of clock-scaling test, but clock-for-clock is pointless. And in any case, in clock-scaling tests, yes, the 7970 scales way better than the GTX 680, and in fact the 7970 scales much better with clocks than any other GCN card, maybe due to architectural deficiencies. As an example, a 7970 going from 1000 MHz to 1200 MHz has bigger gains than a 290X going from 1000 to 1200 MHz, and scales even better than a 380X under the same scenario. A 1350 MHz HD 7970 in its time was normally able to reach GTX 780 performance levels, even with the tessellation deficiency in some games.

In my opinion the HD 7970 is one of the best GPUs ever made, but please speak with facts and avoid nonsense like clock-for-clock comparisons on GPUs and 1500 MHz GTX 680s.
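As a side note, "scales better with clocks" can be put into a number as the ratio of relative fps gain to relative clock gain; here is a quick sketch with purely illustrative figures (not measured data):

```python
# Scaling efficiency: what fraction of a relative clock increase
# shows up as a relative fps increase (1.0 = perfect scaling).
def scaling_efficiency(fps_base, fps_oc, clk_base_mhz, clk_oc_mhz):
    fps_gain = fps_oc / fps_base - 1
    clk_gain = clk_oc_mhz / clk_base_mhz - 1
    return fps_gain / clk_gain

# Illustrative only: a card going 1000 -> 1200 MHz (+20%) and gaining 18% fps.
print(scaling_efficiency(60.0, 70.8, 1000, 1200))  # -> 0.9
```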
 
Cute... you could also say that NVIDIA keeps its pipeline fed almost optimally from day one... and it takes AMD a longer time to do the same.

I know I pay for performance today... just saying.

AMD, unfortunately, is short on funding, which I suspect is the culprit.

The thing is, most people keep their GPUs for a few years. [H] in that regard might have a selection bias because people here are more likely to be hardcore enthusiasts.

I personally care about the long term, over the expected life of the GPU, because I don't upgrade every year.


All this says to me is NVIDIA focuses on giving maximum performance as soon as possible, and that NVIDIA's architecture is better suited to PC games today. How can there be a larger performance jump over time when there is nowhere left for the hardware to go?


Because features are not being used in existing games.

A very good example of that was async compute, as demonstrated by Ashes of the Singularity. If more games use async, then those who opted for Hawaii/Tahiti rather than Kepler got a very good deal indeed, assuming they still have their cards.

The same will be true until Volta, when Nvidia is expected to offer async. So depending on how Vega turns out, it may be like the 7970, which introduced GCN, compared to the Nvidia GPUs in that price range.





And here's a detailed review of the GTX 780Ti in 2016 from Gamers Nexus.

Compared to the Hawaii (and even Maxwell) GPUs, it has fallen far behind.



In some games it's even slower than the GTX 960, which is an utter disgrace considering that on the release of the GTX 970 and 980 in late 2014, the GTX 780 Ti traded blows with those GPUs.

The thing is, today AMD GPUs such as the 290 and 290X, which were the 780 and Kepler Titan/780 Ti competitors, still hang with the RX 470 and 480, up there with the 390 and 390X.

If you own a 290/X from 2013, you are still getting excellent performance in all the new games maxed at 1080p, while owners of the 780, 780 Ti, and Kepler Titan have to turn down settings at 1080p.

I'm sure [H]/Kyle have a 290X laying around. It used to get smashed by the GTX 980 and the 780 Ti; it would be great if [H] took out these old dogs and tested them in major 2016 games. The results would be enlightening.



I too would like these tests to be shown.

At maximum settings, it's probable that the results won't be too different from the Gamers Nexus results. For AMD, the worst case is that it's similar to release day, with the 290X somewhat slower. Best case, the gap may have increased with the latest Crimson ReLive drivers, to the point of Ashes with async on.
 
I'll have to see proof of an OC'd GTX 680 at 1500 MHz; I would call that BS even at 1400 under water, since 1300 MHz stable was already hard to achieve beyond a couple of fast benchmarks. Also, comparing clock for clock on GPUs is nonsense; only a fool would run clock-for-clock tests as if they were CPUs. You can do any kind of clock-scaling test, but clock-for-clock is pointless. And in any case, in clock-scaling tests, yes, the 7970 scales way better than the GTX 680, and in fact the 7970 scales much better with clocks than any other GCN card, maybe due to architectural deficiencies. As an example, a 7970 going from 1000 MHz to 1200 MHz has bigger gains than a 290X going from 1000 to 1200 MHz, and scales even better than a 380X under the same scenario. A 1350 MHz HD 7970 in its time was normally able to reach GTX 780 performance levels, even with the tessellation deficiency in some games.

In my opinion the HD 7970 is one of the best GPUs ever made, but please speak with facts and avoid nonsense like clock-for-clock comparisons on GPUs and 1500 MHz GTX 680s.

BS? Call BS? Go ahead, keep calling BS. Lmao, you are making a stink about this without a clue. I never stated that there was a rule on how the 680 got to 1500 MHz. smh

http://www.overclock.net/t/1322119/12-11-vs-310-33/100#post_19009313
 
AMD, unfortunately, is short on funding, which I suspect is the culprit.

The thing is, most people keep their GPUs for a few years. [H] in that regard might have a selection bias because people here are more likely to be hardcore enthusiasts.

I personally care about the long term, over the expected life of the GPU, because I don't upgrade every year.

This is a personal preference, so it really doesn't matter; I want games that come out around the time of my graphics card purchase to run as fast as possible.



Because features are not being used in existing games.

A very good example of that was async compute, as demonstrated by Ashes of the Singularity. If more games use async, then those who opted for Hawaii/Tahiti rather than Kepler got a very good deal indeed, assuming they still have their cards.

The same will be true until Volta, when Nvidia is expected to offer async. So depending on how Vega turns out, it may be like the 7970, which introduced GCN, compared to the Nvidia GPUs in that price range.

Still going on about Pascal not having async? The 1060 and 480, last I remember, ran neck and neck.


http://www.guru3d.com/index.php?ct=...dmin=0a8fcaad6b03da6a6895d1ada2e171002a287bc1



Is async really helping AMD right now? Yeah, they can have more possible performance gains due to async, and that is what we are seeing when developers pay particular attention to AMD products and spend more dev time on them.


I too would like these tests to be shown.

At maximum settings, it's probable that the results won't be too different from the Gamers Nexus results. For AMD, the worst case is that it's similar to release day, with the 290X somewhat slower. Best case, the gap may have increased with the latest Crimson ReLive drivers, to the point of Ashes with async on.


The latest Crimson drivers haven't done anything spectacular for performance; it looks like around a 1% change.
 
This is a personal preference, so it really doesn't matter; I want games that come out around the time of my graphics card purchase to run as fast as possible.

It comes down to: do you hold your GPU for a couple of years, or do you upgrade each generation (which some people here do)?

If you hold for a couple of years, you are better off with AMD over the life of the card.

If you upgrade each generation, go with Team Green.



Still going on about Pascal not having async? The 1060 and 480, last I remember, ran neck and neck.

...

Is async really helping AMD right now? Yeah, they can have more possible performance gains due to async, and that is what we are seeing when developers pay particular attention to AMD products and spend more dev time on them.

Correct. The GTX 1060 and RX 480 are very close competitors, at least right now. AMD fans will go with the RX 480, Nvidia with the GTX 1060, and the rest with whatever is on sale.

The other consideration is that AMD does have one advantage in terms of optimization that matters more than you might think: console ports. Console devs have to optimize around AMD. That is especially important for games that push console hardware to the limit, and increasingly so over time, because as PC GPUs get better, consoles are stuck on their release cycle of a new console every few years.

Both Nvidia and AMD sponsor titles. An example of Nvidia sponsoring is HairWorks, which hurts AMD disproportionately due to their weak triangle performance, although with the RX 480 AMD has made progress here, and Vega may close the gap (we don't know yet).

On the note of async: at higher resolutions, async leaves the gap closer than you might think:
https://www.computerbase.de/2016-05/geforce-gtx-1080-test/11/




With async, Vega could end up in a situation relative to the 1080 similar to the 1080 relative to the 980 Ti. The real question is: will more games use it?




And here's a detailed review of the GTX 780Ti in 2016 from Gamers Nexus.

Compare it to the Hawaii (& even Maxwell) GPUs, it has fallen far behind.



In some games it's even slower than the GTX 960 which is an utter disgrace considering on release of the GTX 970 and 980 in late 2014, the GTX 780Ti traded blows with these GPUs.

The thing is today, AMD GPUs such as the 290 and 290X which were 780 and Titan Kepler/780Ti competitors, these GCN 290/X still hang with the RX 470 and 480, up there with the 390 and 390X.

If you are an owner of 290/X from 2013, you are still getting excellent performance in all the new games max at 1080p, while owners of 780, 780Ti and Titan Kepler have to turn down settings at 1080p.

I'm sure [H]/Kyle have a 290X laying around. It used to get smashed by the GTX 980 and the 780Ti, it would be great of [H] took out these old dogs and test them in major 2016 games. The results would be enlightening.




Just found one - ComputerBase.de

Compare the Nvidia stuff:
https://www.computerbase.de/2017-01/geforce-gtx-780-980-ti-1080-vergleich/2/

To AMD
https://www.computerbase.de/2017-01/radeon-hd-7970-290x-fury-x-vergleich/2/

Of great interest are the 290X vs 780 Ti and the 7970 vs GTX 680.
 
It comes down to: do you hold your GPU for a couple of years, or do you upgrade each generation (which some people here do)?

If you hold for a couple of years, you are better off with AMD over the life of the card.

If you upgrade each generation, go with Team Green.

So what do you think will happen when the base polycounts of objects double in next-gen games? (Yes, it will happen, as I'm making a game right now that is doing it!) You think all AMD products, Fiji and prior, will get hurt? Yeah, that is what will happen.

Correct. The GTX 1060 and RX 480 are very close competitors, at least right now. AMD fans will go with the RX 480, Nvidia with the GTX 1060, and the rest with whatever is on sale.

The other consideration is that AMD does have one advantage in terms of optimization that matters more than you might think: console ports. Console devs have to optimize around AMD. That is especially important for games that push console hardware to the limit, and increasingly so over time, because as PC GPUs get better, consoles are stuck on their release cycle of a new console every few years.

Both Nvidia and AMD sponsor titles. An example of Nvidia sponsoring is HairWorks, which hurts AMD disproportionately due to their weak triangle performance, although with the RX 480 AMD has made progress here, and Vega may close the gap (we don't know yet).

On the note of async: at higher resolutions, async leaves the gap closer than you might think:
https://www.computerbase.de/2016-05/geforce-gtx-1080-test/11/




With async, Vega could end up in a situation relative to the 1080 similar to the 1080 relative to the 980 Ti. The real question is: will more games use it?








Just found one - ComputerBase.de

Compare the Nvidia stuff:
https://www.computerbase.de/2017-01/geforce-gtx-780-980-ti-1080-vergleich/2/

To AMD
https://www.computerbase.de/2017-01/radeon-hd-7970-290x-fury-x-vergleich/2/

Of great interest are the 290X vs 780 Ti and the 7970 vs GTX 680.


Not going to break down your post because you need to look at what games are pushing what.

With async I haven't seen anything that would put AMD products over nV's current products, so banking on Vega to help with that, well, yeah, it's anyone's pick.



And now you can't sit here and tell me async is doing the same thing to Pascal as it did to the 980 Ti. Do you see performance degradation using async in AotS with Pascal? Why not?

I don't care how much you want to swing it and try to show me AotS or other AMD-sponsored titles, because there are DX12 games that are nV-sponsored with async that give Pascal a boost too. So if you are going to sit here and show one side of the story, all the while there have been in-depth threads on how and why things are different between nV and AMD architectures, and why developers will not get the best out of async without paying close attention to each IHV's architecture, be my guest; but before you go down that path, please look at those threads for actual information on async.

You are specifically picking things that point out certain things in a specific time slice; that is a poor data set to start from. Widen that base and increase your base knowledge about why async does what it does, and then we might have a decent conversation. First off, you already implied Pascal can't do async, which is just false.
 
Awesome review. Thanks!

I haven't used AMD/ATI in a while (last was an HD 2600 or something), so I don't have any meaningful comparisons as such.

For NV though, this very much matches what I've experienced over the last 10 years. From my 560 Tis to the SC 780 I would occasionally see a 5-10 fps increase with driver updates and loved it. From the 970s to the 1080s I usually only see a couple of FPS gained, if that, and mostly just hope the driver doesn't cause more problems, as has happened all too often in the last couple of years. Since I'm still running both of those pairs, I can still relate relevant info for both on the same/current drivers. I am, however, considering updating the 970s to whatever follows this summer if my budget allows. I'm kind of excited at the thought of rocking a Ti or whatever on my Z68/2600K setup.

Edit: I just forgot to mention these have all been in setups using 64-bit versions of 7/8/8.1/10. Presently using 10 on both systems.
 
Good to see AMD drivers have been improving. Let's hope this continues with Vega...
 
Interesting, they ran a ref 7970, which will throttle itself when it heats up. This type of testing is very typical when you don't really want to run them straight up. If you run them clock for clock, the 7970 will/should leave the 680 behind. These videos are lame in that they really are at the mercy of the whims of the source, without any defining specs. There's too much we don't know about the cards: ref or custom, what clocks were run, boost speeds, temps, etc. We just have to take their word for it due to the lack of data.

AFAIK the 7970 did not have complex turbo boost tech like the one AMD implemented with the 290X/290, and during the initial reviews of the 7970 I saw no reviewer mention or notice the GPU core throttling because of heat. Tests have shown that a reference 7970 GHz Edition usually ends up around 84C max (not including manual OC):

https://www.techpowerup.com/reviews/AMD/HD_7970_GHz_Edition/32.html
http://www.tomshardware.com/reviews/radeon-hd-7970-ghz-edition-review-benchmark,3232-16.html

So in those tests there is no concern about throttling with the reference cooler on the 7970 (including the GHz Edition). The reference cooler only started becoming a real problem for AMD with the 290X, because that GPU runs so hot that a blower-type cooler cannot keep up without sounding like a jet engine.
 
The settings they used were turned down a lot so as not to trash the 680's 2 GB buffer (and they did state that it was done on purpose), whereas the 7970 is actually able to run at the higher settings: better visuals and very playable.

As an example, here's a bench pitting a 7950 against a GTX 770. The 770 used to be heaps faster, but now it's similar or even slower.




Note that the 770's competitor would be the 280X or 7970 GHz Edition, which is ~20% faster than the 7950.


Still, it proves the point that the 680's disadvantage comes from its lack of VRAM, not from Nvidia simply neglecting or even gimping Kepler performance in order to sell newer cards, like some people want to believe.
 
On the note of async: at higher resolutions, async leaves the gap closer than you might think:
https://www.computerbase.de/2016-05/geforce-gtx-1080-test/11/




With async, Vega could end up in a situation relative to the 1080 similar to the 1080 relative to the 980 Ti. The real question is: will more games use it?
I would be wary of games that have their own internal benchmarks, and especially their own internal counter (AotS does not use the presented-frame data perceived by the player, which is what we are used to traditionally capturing with DX11-type tools).
Here is an example from a recent game benched by GamersNexus (albeit still in beta); it is worrying how the internal benchmark skews compared to the actual game:
[image: For Honor internal benchmark vs GamersNexus manual bench]



Back to AoTS and async compute: other reviews that use independent tools have the gap between the Fury X and the 1080 larger than shown by ComputerBase.
As a real example, PCGamesHardware has AotS Extreme at 1080p (not 1440p or 4K) with the gap at 16% when using PresentMon (still with the internal preset weighted test), and this with a 1080 FE GPU.
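For anyone wanting to crunch that kind of capture themselves, here is a minimal sketch of summarizing a PresentMon CSV; it assumes PresentMon's standard MsBetweenPresents column and a hypothetical file name:

```python
import csv

# Summarize a PresentMon capture: average fps and "1% low" fps.
def fps_stats(path):
    with open(path, newline="") as f:
        times = sorted(float(row["MsBetweenPresents"]) for row in csv.DictReader(f))
    avg_fps = 1000.0 * len(times) / sum(times)
    p99_ms = times[min(len(times) - 1, int(0.99 * len(times)))]  # 99th-percentile frame time
    return avg_fps, 1000.0 / p99_ms

avg, low = fps_stats("aots_1080fe_capture.csv")  # hypothetical capture file
print(f"avg {avg:.1f} fps, 1% low {low:.1f} fps")
```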
Personally, I feel reviews really should also use custom AIB Nvidia and AMD cards where they exist; even though the Fury X is an AMD-only design, that does not mean one should use only a blower Nvidia card for comparison.
Here is one that has both an FE and a custom AIB 1080, uses the internal benchmark for the run, and uses PresentMon to measure performance just like PCGamesHardware.
HardwareCanucks:
[image: HardwareCanucks GTX 1080 (EVGA) benchmark chart]


Cheers
 