AMD Video Card Driver Performance Review - Fine Wine? @ [H]

Love these types of articles.

Differing driver version performance/quality attributes have always been a difficult thing to aggregate and quantify. These numbers, all in one spot, benefit gamers' decision-making and add value to their gaming experience.

As someone else noted earlier in the thread, retesting anew all of the driver versions in all the titles featured was/is no cheap meal ticket. More like a serious labor of love that reflects the sense of purpose and commitment [H] continually demonstrates toward its community, as well as the larger enthusiast community as a whole.

Thank you.
Thanks for the kind words, Brent put in a lot of hours on this one to make it right.
 
My impression of FineWine is that AMD's older cards stay more relevant longer than Nvidia's older cards as newer games get released
 
Excellent review Kyle, thanks!

The guy that did the original 'Fine Wine' video did mention that AMD's stepped architectural improvements were partially the reason that you get improvements with all the GCN cards across the board, and that Nvidia hasn't maintained a similar enough architecture to do the same thing over time.
 
Excellent review Kyle, thanks!

The guy that did the original 'Fine Wine' video did mention that AMD's stepped architectural improvements were partially the reason that you get improvements with all the GCN cards across the board, and that Nvidia hasn't maintained a similar enough architecture to do the same thing over time.
Yeah, doing a video and stealing everyone else's benchmarks for our data then talking about it for a while is a lot less resource intensive than actually doing the work. I would have loved to have more cards, but when it comes to doing the research yourself and KNOWING what you are talking about, there tend to be limitations.
 
Excellent review Kyle, thanks!

The guy that did the original 'Fine Wine' video did mention that AMD's stepped architectural improvements were partially the reason that you get improvements with all the GCN cards across the board, and that Nvidia hasn't maintained a similar enough architecture to do the same thing over time.


Nvidia COULD have delivered optimizations to the older generations, as their budgets are much larger. They CHOSE to only optimize current gens along the way. Makes sense for the bottom line, keeps driving new sales, but sucks for folks that invested in their cards and don't upgrade every gen. Or is it that the older gen Nvidia cards were inherently less capable/flexible and performance upgrades hit a wall?
 
Excellent review Kyle, thanks!

The guy that did the original 'Fine Wine' video did mention that AMD's stepped architectural improvements were partially the reason that you get improvements with all the GCN cards across the board, and that Nvidia hasn't maintained a similar enough architecture to do the same thing over time.

Nvidia COULD have delivered optimizations to the older generations, as their budgets are much larger. They CHOSE to only optimize current gens along the way. Makes sense for the bottom line, keeps driving new sales, but sucks for folks that invested in their cards and don't upgrade every gen. Or is it that the older gen Nvidia cards were inherently less capable/flexible and performance upgrades hit a wall?

My only issue with GCN's stepped iteration upgrades has been fragmentation within the same product line. When we had cards like 390/X running the older Hawaii iteration alongside the 380/X running the newer Tonga iteration alongside even 360 using the even newer Bonaire iteration all in the same product cycle, it made feature support confusing. 390/X was faster, but also the oldest iteration of GCN. I like things all nice and tidy, and keeping the same "generation" within the same product line from top to bottom makes sense to me. Having to explain that the 380 and 360 were faster and more efficient at Tessellation than the 390/X even though the 390/X was faster overall in other things, was probably confusing for a lot of consumers.
 
Nvidia COULD have delivered optimizations to the older generations, as their budgets are much larger. They CHOSE to only optimize current gens along the way. Makes sense for the bottom line, keeps driving new sales, but sucks for folks that invested in their cards and don't upgrade every gen. Or is it that the older gen Nvidia cards were inherently less capable/flexible and performance upgrades hit a wall?

There could be any number of reasons and unless someone from Nvidia breaks what are likely some pretty strict NDAs all anyone can do is make baseless assumptions based on their own views and biases regarding either company. The only real facts we have are the benchmarks and all they reveal is how the cards perform, not the reasons for that performance. Any argument against one company can be countered with an equally valid argument against the other company.
 
Nvidia COULD have delivered optimizations to the older generations, as their budgets are much larger. They CHOSE to only optimize current gens along the way. Makes sense for the bottom line, keeps driving new sales, but sucks for folks that invested in their cards and don't upgrade every gen. Or is it that the older gen Nvidia cards were inherently less capable/flexible and performance upgrades hit a wall?

Nvidia did. I think you're confusing what developers develop for with the IHV. The benefit for AMD is they essentially haven't changed anything for the last 5 years. It's called stagnation.

Look at games like Civ6, Ark, Halo Wars 2, GoW, For Honor and Conan Exiles for example.
 
My only issue with GCN's stepped iteration upgrades has been fragmentation within the same product line. When we had cards like 390/X running the older Hawaii iteration alongside the 380/X running the newer Tonga iteration alongside even 360 using the even newer Bonaire iteration all in the same product cycle, it made feature support confusing. 390/X was faster, but also the oldest iteration of GCN. I like things all nice and tidy, and keeping the same "generation" within the same product line from top to bottom makes sense to me. Having to explain that the 380 and 360 were faster and more efficient at Tessellation than the 390/X even though the 390/X was faster overall in other things, was probably confusing for a lot of consumers.
Uh, to nitpick a little, Bonaire and Hawaii are the same GCN iteration.
 
yes yes, you get my point though

390/x Hawaii is 2nd gen GCN, 380/x Tonga is 3rd gen, 370/x Pitcairn is 1st gen, 360 Bonaire is 2nd gen

fury/furyx debatable 3rd or 4th gen with Fiji, I say 4th gen cause it did have new features

I made this a long time ago - http://www.hardocp.com/image/MTQzNTEwODU5MTlTMEhPT1prR0FfMV82X2wuZ2lm

a completely fragmented lineup from top to bottom
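The fragmentation described above can be sketched as a simple mapping (the generation numbers follow the poster's own labeling, including the debatable Fiji case):

```python
# GCN generation per R9 300-series SKU, per the post above (poster's numbering).
# The point: ordering by performance does NOT order by architecture generation.
gcn_generation = {
    "R9 390/390X": ("Hawaii", 2),    # fastest in the stack, but older IP
    "R9 380/380X": ("Tonga", 3),     # slower, yet newer IP than the 390
    "R9 370/370X": ("Pitcairn", 1),  # oldest IP in the whole lineup
    "R9 360": ("Bonaire", 2),
    "R9 Fury/Fury X": ("Fiji", 3),   # 3rd or 4th gen, debated above
}

# Walking the stack from fastest to slowest shows the mixed generations.
ordered = ["R9 390/390X", "R9 380/380X", "R9 370/370X", "R9 360"]
gens = [gcn_generation[sku][1] for sku in ordered]
print(gens)  # [2, 3, 1, 2] -- not monotonic top to bottom
```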
Yeah, and on top of that, they renamed some existing chips to the new naming scheme. I made sense of it at the time, but I don't remember all of that anymore.

Good devs help old cards stay relevant by writing good engines. My 7870 plays Battlefield 1 better than Battlefield 4, at the same resolution, and looks better doing it.
 
As said ad infinitum, AMD has used the same GPU architecture since the 7970, meaning improvements made to newer hardware improve the old without any additional work. With the R&D budget Radeon has, they have to stretch their technology from one iteration to the next out of necessity.

Actually, it's quite funny that razor1 and the rest are not correcting you here for some strange reason, as they have gone into depth ad nauseam about how it was not as straightforward as you put it to program across the various GCN versions; they are indeed quite different from a programming POV, especially the later versions versus the earlier ones.
But of course that would now undercut the effort to detract from these results. Crickets from the usual suspects; we can't possibly paint AMD in a positive light, that would probably mean a pay hit.

I look forward to seeing the Nvidia results to put this to bed once and for all. Other sites have done it, and some of us already know that both do advance over time, but one does more so. Bad drivers from the outset, perhaps, but the performance in the end is often higher than when the cards are first launched and compared.

I had a guy sell me his non-throttling, factory-clocked 290X DCUII so he could 'upgrade' to a 780 Ti. Looking at 2+ year old benchmarks cost him some performance and VRAM; I said to his face that it's a sidegrade at best, not to mention the results from other sites confirm this. The 290X pulled ahead to be the superior card over time, on average. Whoops! I'm not complaining.

Same goes for the few titles where you see the ageing Fury X, a last-generation 28nm GPU with half the RAM, competing with a 16nm 1080....


But no, AMD drivers are just shit eh guys?
 
GCN versions aren't that different. It's tiny changes, and the shader core is exactly the same. And this is why the aging has been good, from a stagnation point of view, when a developer ditched the Kepler uarch in their optimizations. The big driver myth that ran like viral PR has been debunked.

Fury X competing with a 1080? That was a funny one! :D
 
Same goes for the few titles where you see the ageing Fury X, a last-generation 28nm GPU with half the RAM, competing with a 16nm 1080....


But no, AMD drivers are just shit eh guys?

The fact that the Fury X and the 1080 have basically the same TFLOPS (8.6 vs. 9), and that the Fury X matches the 1080 in enough games for it not to be a statistical anomaly, is not a testament to architectural prowess as much as to the card delivering on what it is capable of doing.
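For what it's worth, the TFLOPS figures quoted above fall straight out of the usual peak-throughput arithmetic (shader count x 2 ops per FMA x clock), using the public reference specs for each card:

```python
# Peak FP32 throughput sketch: shaders x 2 ops (one FMA per cycle) x clock.
def tflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical peak single-precision TFLOPS."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

fury_x = tflops(4096, 1050)    # R9 Fury X: 4096 SPs at 1050 MHz
gtx_1080 = tflops(2560, 1733)  # GTX 1080: 2560 CUDA cores at 1733 MHz boost

print(f"Fury X:   {fury_x:.2f} TFLOPS")   # ~8.6
print(f"GTX 1080: {gtx_1080:.2f} TFLOPS") # ~8.9
```

These are theoretical peaks, of course; sustained game performance depends on how well each architecture keeps those shaders fed.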
 
fury/furyx debatable 3rd or 4th gen with Fiji, I say 4th gen cause it did have new features
Hm, now these are features I do not quite remember from Fiji. Always thought of it as two Tongas glued together with an HBM controller.

In fact, looking at the AT article http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/2 I see that 3 of the things you list in the Fiji-only category were in the R9 285 launch presentation.

HEVC does look new to Fiji.

But yeah, i got your point in the first place, 3xx line-up was a mess.
 
My only issue with GCN's stepped iteration upgrades has been fragmentation within the same product line. When we had cards like 390/X running the older Hawaii iteration alongside the 380/X running the newer Tonga iteration alongside even 360 using the even newer Bonaire iteration all in the same product cycle, it made feature support confusing. 390/X was faster, but also the oldest iteration of GCN. I like things all nice and tidy, and keeping the same "generation" within the same product line from top to bottom makes sense to me. Having to explain that the 380 and 360 were faster and more efficient at Tessellation than the 390/X even though the 390/X was faster overall in other things, was probably confusing for a lot of consumers.
[Note: I did read your later post, and the rest of the thread]

I'll admit my ignorance when it comes to nV's architectures and card releases; while I recognize their (and Intel's) superiority in the cases where they are superior, I just refuse to buy or recommend them due to how they operate their business when it comes to their competition (neither has any qualms over continually punching below the belt).

That being said, didn't nV do this with the 7xx or 9xx series as well? I thought that in one of them, later on in the product cycle, the 760 or 960... perhaps even the -50's, had gotten updated. Originally it was something like a G?108 and later utilized a G?107. I dunno specifically what the case was, but even whatever news site I had seen it on thought it sorta odd they did it, since it broke with conformity. Come to think of it, it may have been a Ti model... but regardless, I do at least stand by the memory of it occurring :p heh
 
Great article, really well done there Brent/Kyle.

I guess my takeaway is that yes, improvements occur over time, but the only meaningful improvement comes when a card/driver combo has some substantial "issues" in a game that need to be "fixed" in a new driver. This isn't really a surprise; once you clear the initial roadblock, it's not like there's tons of unused performance left on the table for the driver team to unlock. Considering how long some of AMD's SKUs have been sold under various names, it's not surprising that they are still being supported and those "fixes" are being made. After all, there's no point discontinuing support for a 3 year old GPU if you're still selling it with a new name. I know many accuse NV of planned obsolescence, but is AMD really doing this out of the kindness of their hearts, or as a strategy, or is it just the reality of where their product lineup has been the last few years? Sure, it's worth some goodwill, but if they were refreshing the full stack with new architectures more frequently, would this still be the case?
 
The only stagnation here is the fact that Nvidia trolls are barking up the same fucking tree, not even changing the tune. The GCN path will continue with Vega and beyond. This in part will ensure some backwards compatibility on the console side of things. Everyone knows the story of Nvidia's aging GPUs, so no justifying needed here. I'm pretty sure that Vega will be a great card and HardOCP will repair its strained relationship with AMD's graphics division :D
 
Great article, really well done there Brent/Kyle.

I guess my takeaway is that yes, improvements occur over time, but the only meaningful improvement comes when a card/driver combo has some substantial "issues" in a game that need to be "fixed" in a new driver. This isn't really a surprise; once you clear the initial roadblock, it's not like there's tons of unused performance left on the table for the driver team to unlock. Considering how long some of AMD's SKUs have been sold under various names, it's not surprising that they are still being supported and those "fixes" are being made. After all, there's no point discontinuing support for a 3 year old GPU if you're still selling it with a new name. I know many accuse NV of planned obsolescence, but is AMD really doing this out of the kindness of their hearts, or as a strategy, or is it just the reality of where their product lineup has been the last few years? Sure, it's worth some goodwill, but if they were refreshing the full stack with new architectures more frequently, would this still be the case?
The theory is that fixes for later GCN products propagate to earlier GCN products because the architecture (and its slow points, etc.) is so similar. Also, AMD must apply more clever tweaks to paddle against the flow of NVIDIA-based thinking. Games (DX11) tend to be optimized for NVIDIA and not GCN. Or vice versa.
 
The theory is that fixes for later GCN products propagate to earlier GCN products because the architecture (and its slow points, etc.) is so similar. Also, AMD must apply more clever tweaks to paddle against the flow of NVIDIA-based thinking. Games (DX11) tend to be optimized for NVIDIA and not GCN. Or vice versa.

That's a flawed theory, though.

It's easy to see with Kepler how it works, because it's hit or miss, so to say, across titles. Either the developer optimized for the uarch or they didn't. And for someone now selling the same thing in year 5, there is a bonus that way.

The answer to why AMD cards age well is called stagnation.

Maxwell and Pascal are quite similar. But when Volta hits, then we all know what will happen over time to Maxwell/Pascal cards. They will slowly lose developer focus.
 
I may have missed something, but wasn't the AMD claim of 4%-8% gains implied to be in relation to the Crimson (pre-ReLive) drivers? If so, 5-6% improvement from Omega to ReLive would be rather disappointing, no?
 
I may have missed something, but wasn't the AMD claim of 4%-8% gains implied to be in relation to the Crimson (pre-ReLive) drivers? If so, 5-6% improvement from Omega to ReLive would be rather disappointing, no?
The article hit the nail on the head by isolating driver performance with a consistent benchmark platform: the same current OS, current drivers and current game state (all patches and updates). Outstanding work!

AMD and Nvidia also work with the game developers, Microsoft, hardware designers and others to improve the performance of software. If you took the performance of many games at first launch and again later, without a driver change, you would most likely see marked performance improvements in many games as well. HardOCP was right on the money, very professional in how they tested and isolated driver refinements and improvements. Also, a 5%-6% improvement on average can include some improvements of over 20% due to drivers, which was the case here, and some games with no improvement at all.
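The averaging point is worth making concrete: a handful of big per-game driver gains mixed with many near-zero results can still land on a ~5% overall average. The numbers below are purely made up for illustration, not from the article:

```python
# Hypothetical per-game driver gains in percent: two large outliers,
# several small gains, and a few games with no change at all.
gains = [22.0, 15.0, 3.0, 2.0, 1.0, 0.0, 0.0, 1.0, 4.0, 6.0]

avg = sum(gains) / len(gains)
print(f"average gain: {avg:.1f}%")  # average gain: 5.4%
print(f"biggest gain: {max(gains):.0f}%")  # biggest gain: 22%
```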

If you combine the two above with Microsoft updates, game patches, BIOS updates, etc., you would most likely see more than a 5%-6% increase in performance, but that would be very hard to tabulate accurately. What HardOCP did in this article had to be very hard to do in itself. One would almost have to keep endless time capsules of hardware and software, untouched for the years in between, to test years later and get truly meaningful data.

Now, one question: is this good, bad, or just plain good advancement? Bad in that AMD is not giving the buyer the full use or potential of the hardware they paid for, and they must wait some time before having it; like limiting your 12-second quarter-mile car to 14 seconds until the bugs are worked out of the computer control system. Nvidia gives you a 13-second car right off the bat, giving you the full potential right now, but it is not going to get any faster down the road. Nvidia gets the customer the full potential, or closer to it than AMD does, at purchase; that would be a positive for Nvidia. Good in that AMD made hardware that is more forward looking and will take advantage of better techniques as time passes, so with AMD you hand your hardware down generation to generation to your kids, while Nvidia hardware you chuck in the garbage can outside.

Personally I like that AMD keeps finding ways to improve their performance on older hardware - that to me is taking care of their buyers giving long term service and not just take my money and run.
 
Kyle, performance gains with drivers are only one part of "FineWine".

The other part and the important one, is that tier for tier, AMD GPUs tend to hold up better over time.

Example: 7950 vs 660Ti, 7970 vs 680. 290 vs 780, 290X vs Titan Kepler/780Ti, 380 vs 960.

Whatever the cause, their relative performance improves, to a point where a GPU like the 7970 is still a very capable 1080p gaming GPU today in most of the new games whereas the 680 fell behind.

The proposed idea of FW is that a gamer on an AMD GPU gets more value out of it over the years, so they can go longer without upgrading and still get a good gaming experience.


AND less likely to have hardware failures related to poorly implemented driver releases... it's like BizarroWorld. In the last 2+ years AMD drivers have been relatively solid, whereas it seems NV can't go six months without introducing a serious flaw/issue (looking at YOU, "GameReady" drivers)
 
AND less likely to have hardware failures related to poorly implemented driver releases... it's like BizarroWorld. In the last 2+ years AMD drivers have been relatively solid, whereas it seems NV can't go six months without introducing a serious flaw/issue (looking at YOU, "GameReady" drivers)
What's bizarre is how no one seems to have noticed that. My 1080 had DPC latency audio issues for two months after launch.
 
What's bizarre is how no one seems to have noticed that. My 1080 had DPC latency audio issues for two months after launch.

I've had no video driver issues at all with my 1070 and Windows 10 Insider Edition. The latency issue never affected everyone.
 
Guess I will just forget that issue then, my bad.

Exactly. Just like everyone ignored the 970 3.5/0.5GB issue. Even though there was a lawsuit and they had to pay people... nothing to see here, move on!
 
Whoa whoa, I wasn't commenting that you guys didn't cover it. But it is well known that the Nvidia... loyalists tried to downplay it as much as possible.
That is why we actually tested it and gave our thoughts. Whole bunch of folks arguing, but we actually tested it in real world gaming.
 
Guess I will just forget that issue then, my bad.

I'm not saying you should ignore it. There was a whole thread on OCN about the latency issue people were having, and I believe they truly are or were affected, just as you are or were. Not everyone is affected by it (in this case, neither my wife's system nor mine has the issue).
 
What's bizarre is how no one seems to have noticed that. My 1080 had DPC latency audio issues for two months after launch.

It probably does not help that AIB partners did not necessarily stick to the exact spec of the FE; a classic and massive example was the Micron/Samsung memory issue with the 1070, which required a specific BIOS change by Nvidia to also support the Micron VRAM.
Not saying this is the same issue, but it highlights the complexity once AIB partners start to change components, although Nvidia may have messed up the audio components itself; its integral VR development was another layer of complexity.
Cheers
 
It seems it's about time in the perennial cycle of driver supremacy for the shift.. in fact it's perhaps overdue in this case. AMD really should be applauded for their efforts - they benefit the consumers directly. It's been some time since I've purchased a card from them and I moved away specifically because of driver issues, but with Vega around the corner and an aging 670 to replace, I'm definitely keeping an open mind.
 
This article was great! Thank you [H]!

Kyle, I see you guys were planning on doing an NVIDIA one soon - I think this would make a really neat "recurring feature" type of article. Like, in a year's time, you could test the RX 480 as your "old" card and then something relatively fresh, Vega or whatever is out, as the "new" one. Then a year after that, the Vega-ish card gets to be the old card and something else as new.

Just a quick, once-a-year check in on something would be really neat, from a historical perspective, and wouldn't take as much effort as doing a big multi-card thing.
 