AMD's FineWine Technology: What is it & why do AMD GPUs age well?

He got a lot of things right. I wish he had pointed out that some of the benchmarks were well below 30 fps ;) on both IHVs' hardware, so you have to watch for that, because it's pointless to even bring those up. And then there's the big thing about the HC review ;)
 
For a second I thought it was going to be satire, but that was pretty informative. I didn't realize the difference between Nvidia's and AMD's architecture timelines until it was laid out here.

It's pretty cool that improvements will trickle down to older GCN versions since the core architecture remains intact.
 
I see it both ways. With a new architecture, drivers do take time to develop, and going from VLIW to scalar is a big change. Just look at the G80: even three years in, nV was still getting performance benefits from drivers for it. But then again, they also stuck with the Tesla architecture for three years, which is outside the norm for nV...

So yeah, I can't disagree that AMD needs better drivers out of the gate, but I also have to say that the longer you keep an architecture, the more benefit you can see from drivers later on.
 
Just imagine the landscape if AMD could get their drivers together at release and not years later. Gains years later don't matter to the bottom line... just to people hanging on for dear life to their aging video cards.


It's not all drivers either. AMD cards have more potential shader capability (until this gen of cards; Pascal kind of equalized that with higher clocks) and, in the past, more memory. So it's a combination of things that we are seeing as the end result. If they can't show that right at launch, then they don't get the bottom line: sales and profits.

If you look back at when it was ATi vs. nV, the same situation can be seen. The X1800 had low shader capability vs. nV's 7xxx series; by the time the X1900 series came out from ATi, it had 3x the shader throughput, but games didn't push that much, so the results, in FPS, didn't show that capability. By the time they did, the G80 crushed the X19xx series.

The only time ATi was able to capitalize was when nV faltered, with the FX series. Now we can say they did do well with their 4xxx and 5xxx series, but then again, nV deviated from their normal life cycle of 1 to 1.5 years for next-gen chips (new architectures) when they milked their Tesla architecture, and they did it for just a bit too long, as the G80 was ready on the same node and on similar timing as the G72 (there was a 6-month differential).

Having a great product is a good thing, but timing matters too. nV makes GPUs that work well with the games that are coming out at that time. We can see this over and over again. Is it because of their dev rel relationships? Yes, it could be; when they know what is going to be important to their next-gen chips, they can push that to devs early on without anyone being the wiser. But it's a combination of many things. If devs don't have certain hardware in hand, they can't push certain features.
 
It's because they have to improve via software since they can't get the hardware side correct.



Disclaimer
Didn't watch the video.
Currently an AMD user.
 
Just imagine the landscape if AMD could get their drivers together at release and not years later. Gains years later don't matter to the bottom line... just to people hanging on for dear life to their aging video cards.

Did you actually watch the video?

I'm very happy with my R9 290x 3 years on. The only card that's lasted me longer was my Voodoo 3 way back in the day. I still feel no pressure to upgrade.

It plays Kerbal Space Program and my other games just fine on Linux Mint. I was worried I was going to have to pick up an Nvidia card to play anything on Linux based on all the anti-AMD FUD I'd read.
 
When you haven't changed uarch in ~5 years and developers get more and more experience with it, that's why.

But at this point even small changes like primitive discard can send older cards out in the cold very fast.
 
Maybe AMD makes their video cards too damn good; they last forever and folks just don't need to buy another AMD card after 3 years. Now Nvidia owners have to use a strap-on with a yearly update :LOL:
 
When you haven't changed uarch in ~5 years and developers get more and more experience with it, that's why.

But at this point even small changes like primitive discard can send older cards out in the cold very fast.
Ok, then AMD has the superior hardware if it overtakes the Nvidia hardware consistently over time.

Not all the games that were in the tests were updated, meaning it was AMD improving performance through the drivers themselves. This also gives credence to the idea that Nvidia in the past just had better drivers at the launch of games. Now the 480 has improved rather dramatically over a shorter period of time, which is most likely due to the new hardware features added, which also gives credence to RTG running the graphics division more efficiently.
 
Ok, then AMD has the superior hardware if it overtakes the Nvidia hardware consistently over time.

Not all the games that were in the tests were updated, meaning it was AMD improving performance through the drivers themselves. This also gives credence to the idea that Nvidia in the past just had better drivers at the launch of games. Now the 480 has improved rather dramatically over a shorter period of time, which is most likely due to the new hardware features added, which also gives credence to RTG running the graphics division more efficiently.

Did the 480 improve dramatically? Or is it just a matter of games benched?

The perf/watt of 14nm LPP parts is close to that of the competition's 28nm parts. That's how big a change there has been. Five years back the two were equal.

There are pros and cons to both approaches; however, only one of them is viable long term. And this is why the cards today are what they are.
 
Did the 480 improve dramatically? Or is it just a matter of games benched?

The perf/watt of 14nm LPP parts is close to that of the competition's 28nm parts. That's how big a change there has been. Five years back the two were equal.

There are pros and cons to both approaches; however, only one of them is viable long term. And this is why the cards today are what they are.
We just don't know if AMD's current cards will age like the 7970 and 290X did in the past. I don't see Fiji aging particularly well in the long run due to its 4 GB memory restriction. The past may hint at what the future brings, but there is no guarantee it will flow that way. Yes, the 480 has improved since launch due to drivers.

Power is much better for AMD but still not in Nvidia's league. Hopefully AMD will be able to gain some more ground with Vega.

AMD just needs to win the race with a good race car and driver, not build a race car that initially loses all the races but could beat all the other cars after two years with a better driver. Except after two years all the other racers have new, faster cars anyway, so that race car never wins even with a better driver.
 
We just don't know if AMD's current cards will age like the 7970 and 290X did in the past. I don't see Fiji aging particularly well in the long run due to its 4 GB memory restriction. The past may hint at what the future brings, but there is no guarantee it will flow that way. Yes, the 480 has improved since launch due to drivers.

Power is much better for AMD but still not in Nvidia's league. Hopefully AMD will be able to gain some more ground with Vega.

AMD just needs to win the race with a good race car and driver, not build a race car that initially loses all the races but could beat all the other cars after two years with a better driver. Except after two years all the other racers have new, faster cars anyway, so that race car never wins even with a better driver.

The problem is those things require fundamental changes. Picking low-hanging fruit and focusing on bigger titles isn't going to win the race. Fiji was pretty much obsolete the day it was released.

Polaris 10, for example, uses 33% more memory bandwidth, 30% more transistors and 37% more power than GP106. From a consumer perspective you can ignore the first two as such, because that's something AMD has to pay for. But it shows the big problem.

If you want to play titles like Divinity: Original Sin 2, Skyrim SE, Cossacks 3, Dishonored 2, or even their own titles like Civ6 and Warhammer, then there isn't much aging or driver benefit. Just to mention a few examples.
 
The problem is those things require fundamental changes. Picking low-hanging fruit and focusing on bigger titles isn't going to win the race. Fiji was pretty much obsolete the day it was released.

Polaris 10, for example, uses 33% more memory bandwidth, 30% more transistors and 37% more power than GP106. From a consumer perspective you can ignore the first two as such, because that's something AMD has to pay for. But it shows the big problem.

If you want to play titles like Divinity: Original Sin 2, Skyrim SE, Cossacks 3, Dishonored 2, or even their own titles like Civ6 and Warhammer, then there isn't much aging or driver benefit. Just to mention a few examples.
Yes, Kyle had a heads-up on the issues prior to launch, but AMD has at least made this GPU sell well enough in the meantime. The 1060 has some disadvantages as well: no SLI, 6 GB or less of memory, and an almost unusable 3 GB version. Still, when I built my daughter an SFF system the 1060 won over the 480 for exactly the last reason you mention: power = heat. AMD's lack of competition at the high end for over 6 months is the biggest failure I've seen from them; Vega needs to be good if not awesome, with a near perfect launch.
 
Did you actually watch the video?

I'm very happy with my R9 290x 3 years on. The only card that's lasted me longer was my Voodoo 3 way back in the day. I still feel no pressure to upgrade.

It plays Kerbal Space Program and my other games just fine on Linux Mint. I was worried I was going to have to pick up an Nvidia card to play anything on Linux based on all the anti-AMD FUD I'd read.

Yes, I watched the video. Judging by my reply, and by the replies in 90% of the remaining posts in here, you did not.
 
AMD cards often have more raw processing power than their Nvidia competitors at launch, but the drivers aren't as fine-tuned out of the gate.

Case in point: the Fury X has 8.5 teraflops of processing power while the 1070 has 6.5 teraflops. That explains why the Fury X has been able to gain so much ground over its original launch benchmarks. It had a lot of untapped potential at launch and still does.

The Fury X has roughly 30% more potential via hardware than the 1070 if the drivers on each card were 100% efficient. And the Fury X was released a full year earlier than the 1070. Benchmarks between the two are getting more equitable overall as AMD matures its drivers, and ultimately I expect the Fury X to overtake the 1070 across the board. But by that time the elite consumers will have long since left the Fury X and the 1070 for the newest generation or two, so this type of info is known to the deeper-diving community but not often published by the benchmark sites, which typically only show battles between the newest-gen stuff.
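As a quick sanity check of that 30% figure, here is a minimal sketch using only the peak-TFLOPS numbers quoted above (approximate spec-sheet values, not measured performance):

```python
# Rough check of the "30% more potential" claim using the peak FP32 figures
# quoted above (approximate vendor spec-sheet values, not measured results).
fury_x_tflops = 8.5    # R9 Fury X, approximate peak FP32
gtx_1070_tflops = 6.5  # GTX 1070, approximate peak FP32

advantage = (fury_x_tflops / gtx_1070_tflops - 1) * 100
print(f"Fury X peak FP32 advantage: ~{advantage:.0f}%")  # prints ~31%
```

Of course that only bounds the potential; whether drivers and the rest of the pipeline can actually feed those shaders is the whole debate here.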

I.e., all the benchmarks these days pit the newer RX 480 against Nvidia's stuff, meanwhile the 1.5-year-old Fury X is a MUCH faster card than it was at launch and a MUCH faster card than the RX 480 in pretty much everything. At ~$300 right now the Fury X is about the best GPU value available (considering all variables, including the price and availability of FreeSync displays).

Generation after generation, AMD has produced the computationally faster cards. But they aren't optimized as well via software/drivers at launch (and maybe never fully optimized). In direct raw computational performance they are stronger. That's why password crackers and miners primarily use AMD cards.
 
AMD cards often have more raw processing power than their Nvidia competitors at launch, but the drivers aren't as fine-tuned out of the gate.

Case in point: the Fury X has 8.5 teraflops of processing power while the 1070 has 6.5 teraflops. That explains why the Fury X has been able to gain so much ground over its original launch benchmarks. It had a lot of untapped potential at launch and still does.

The Fury X has roughly 30% more potential via hardware than the 1070 if the drivers on each card were 100% efficient. And the Fury X was released a full year earlier than the 1070. Benchmarks between the two are getting more equitable overall as AMD matures its drivers, and ultimately I expect the Fury X to overtake the 1070 across the board. But by that time the elite consumers will have long since left the Fury X and the 1070 for the newest generation or two, so this type of info is known to the deeper-diving community but not often published by the benchmark sites, which typically only show battles between the newest-gen stuff.

I.e., all the benchmarks these days pit the newer RX 480 against Nvidia's stuff, meanwhile the 1.5-year-old Fury X is a MUCH faster card than it was at launch and a MUCH faster card than the RX 480 in pretty much everything. At ~$300 right now the Fury X is about the best GPU value available (considering all variables, including the price and availability of FreeSync displays).

Generation after generation, AMD has produced the computationally faster cards. But they aren't optimized as well via software/drivers at launch (and maybe never fully optimized). In direct raw computational performance they are stronger. That's why password crackers and miners primarily use AMD cards.

The 980 Ti has 5.6 TFLOPS and still beats the Fury X today.
 
AMD cards often have more raw processing power than their Nvidia competitors at launch, but the drivers aren't as fine-tuned out of the gate.

Case in point: the Fury X has 8.5 teraflops of processing power while the 1070 has 6.5 teraflops. That explains why the Fury X has been able to gain so much ground over its original launch benchmarks. It had a lot of untapped potential at launch and still does.

The Fury X has roughly 30% more potential via hardware than the 1070 if the drivers on each card were 100% efficient. And the Fury X was released a full year earlier than the 1070. Benchmarks between the two are getting more equitable overall as AMD matures its drivers, and ultimately I expect the Fury X to overtake the 1070 across the board. But by that time the elite consumers will have long since left the Fury X and the 1070 for the newest generation or two, so this type of info is known to the deeper-diving community but not often published by the benchmark sites, which typically only show battles between the newest-gen stuff.

I.e., all the benchmarks these days pit the newer RX 480 against Nvidia's stuff, meanwhile the 1.5-year-old Fury X is a MUCH faster card than it was at launch and a MUCH faster card than the RX 480 in pretty much everything. At ~$300 right now the Fury X is about the best GPU value available (considering all variables, including the price and availability of FreeSync displays).

Generation after generation, AMD has produced the computationally faster cards. But they aren't optimized as well via software/drivers at launch (and maybe never fully optimized). In direct raw computational performance they are stronger. That's why password crackers and miners primarily use AMD cards.

That is part of it. As Shintai stated, the 980 Ti, with roughly 40% less shader compute capability, keeps up with it, and it is easy to see why: Fiji had many other bottlenecks that stopped it from using its shader array to the fullest.

With every generation of GPUs, the amount of fixed-function units has to increase too, maybe not as much as the shader array, but the increase still needs to be there; that's something AMD wasn't able to do with the 20nm node failing. Their backup design, Fiji on 28nm, just didn't have enough space for all the other units, and it hurt them in the end. This is where generational changes are important: efficiency per transistor comes into play when there isn't enough space to put in everything a company needs. nV took it in stride; yeah, Kepler doesn't hold up as well as GCN, but by the point where games become unplayable on Kepler, those same games become unplayable on earlier GCN because of other bottlenecks.

We have seen VRAM limits on Fiji in newer games where min/max frame rate swings are all over the place; we don't see that happening to the 980 Ti as much, but newer games stress the shader array more, so overall frame rates drop a bit more on the 980 Ti. Then the polygon throughput problems with newer games on AMD cards are there too. So, end result: there is no such thing as future-proofing, because both IHVs' older-generation products have their own problems when it comes to newer games. But when they were released, it was more important that they cater to the games that were out at the time, since sales are based on those benchmarks, not on when next-gen cards/games come out.

Let's look back at DX12 vs. DX11. Was it really worth it for GCN to push so hard for LLAPIs? Do we see an advantage for AMD from this? Not really; the advantage only shows up in games they have sponsored. So by forgetting about DX11 and trying to push a new standard, AMD shot themselves in the foot. How much market share did they lose over the DX11-to-Mantle-to-DX12 transition? How much money did they lose? How many years will it take to get that market share back? Will they be able to get that market share back? How much money will they lose during that time frame while regaining market share, if they do?

Companies can't forget the present, which is based on past experience, and think only about the future. A company cannot survive that way, because it can't make money.
 
Power is much better for AMD but still not in Nvidia's league. Hopefully AMD will be able to gain some more ground with Vega.
Is that necessarily true, though? Power curves are exponential. Equalize performance between a 480 and a 1060 and how much of a power difference still exists? If a 480 with proper drivers ends up 10% ahead of a 1060, the actual perf/watt would probably be in the same neighborhood. Then you have to get past people constantly using results from the very first, less-than-optimal benchmarks to draw a conclusion.
 
Is that necessarily true, though? Power curves are exponential. Equalize performance between a 480 and a 1060 and how much of a power difference still exists? If a 480 with proper drivers ends up 10% ahead of a 1060, the actual perf/watt would probably be in the same neighborhood. Then you have to get past people constantly using results from the very first, less-than-optimal benchmarks to draw a conclusion.


No, it doesn't. Proper drivers, you mean with "Chill"? Because that drops performance on the RX 480, so you end up with 10% less power usage and less performance. And Chill only works if it's integrated by the developer, and it doesn't work that well with movement.

http://www.tomshardware.com/reviews/amd-radeon-chill-ocat-relive,4846-2.html

Unless you like to stand still (in one spot) in games that support it, you don't get any power savings.

[Image: 06-Power-Consumption.png, the Tom's Hardware power consumption chart from the review linked above]


And now if you are talking about something other than Chill, that means what? You need to drop the performance of the 1060 in DX11 games; then you drop the frequency of the 1060 and it ends up at much lower power consumption than it's at right now. Either way, the perf/watt difference is still in favor of the 1060. Or, if you increase the frequency of the RX 480 to match the performance of the 1060 in DX11 games, its power levels go up exponentially.

Now, if you look at the perf/watt of the 1050 series and the 1070/1080 series, they are much higher than the 1060 series. That means the 1060 is sitting higher on its power curve than the rest of the cards, and we know the RX 480's power curve is already near the top. So you end up with the RX 480 being pushed too much, and the 1060 not being pushed to its top limits but sitting at the higher end of its ideal perf/watt range.
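A note on "exponentially": strictly speaking the curve is closer to cubic, since dynamic power scales roughly with V^2 x f and voltage has to climb with clocks near the top of the range. Below is a minimal sketch of that shape, with made-up voltage/clock points that are purely illustrative, not measurements of any real card:

```python
# Why pushing clocks near the top of the curve costs disproportionate power.
# Dynamic power is roughly P ~ C * V^2 * f; the operating points below are
# invented for illustration, not measurements of an RX 480 or GTX 1060.
def dynamic_power(voltage_v, clock_mhz, c=1.0):
    """Relative dynamic power for a given core voltage and clock."""
    return c * voltage_v ** 2 * clock_mhz

stock  = dynamic_power(1.00, 1200)  # hypothetical stock operating point
pushed = dynamic_power(1.15, 1350)  # hypothetical overclock needing more voltage

print(f"Clock gain: {1350 / 1200 - 1:+.1%}")   # +12.5%
print(f"Power gain: {pushed / stock - 1:+.1%}")  # about +49%
```

That asymmetry is why a part already sitting near the top of its curve pays a much bigger power bill for the last few percent of performance than one running in the middle of its range.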
 
So since the AMD cards gradually overtake the Nvidia cards, does that mean Nvidia never gets its drivers right or does that mean AMD has better hardware, but needs to learn how to write drivers?

It has a bit to do with what Oz explained about the raw amount of FLOPS. Nvidia is probably better at exposing their hardware for the most part, and AMD is not as fast but eventually gets there.
There is a video out with a Mantle Q&A from APU13 where the driver guy from AMD touches upon this problem (DX11 optimization). AMD's driver team has to figure out exactly what is going on before they can improve things, and that is where AMD is slower; even if they're involved with a title during the development phase, it does not always mean they are able to get maximum performance.
 
It has a bit to do with what Oz explained about the raw amount of FLOPS. Nvidia is probably better at exposing their hardware for the most part, and AMD is not as fast but eventually gets there.
There is a video out with a Mantle Q&A from APU13 where the driver guy from AMD touches upon this problem (DX11 optimization). AMD's driver team has to figure out exactly what is going on before they can improve things, and that is where AMD is slower; even if they're involved with a title during the development phase, it does not always mean they are able to get maximum performance.


It has very little to do with that. Applications are made to run based on the hardware that is out there, so AMD bet the house on shader needs increasing at a much higher rate than the fixed-function units. That did not happen, hence why you see with Polaris that they spent transistors on fixed-function units and on feeding those units rather than increasing the size of their shader array. They know where the issues are and have addressed some of the problems they were having.

AMD's DX11 problem is inherent across the board in all DX11 games; the games that don't exhibit that problem are the ones where they specifically worked with the devs and sponsored through their dev program. AMD knows the weaknesses and bottlenecks in their hardware, and they are well equipped to advise game companies in their dev program on the best way to avoid those bottlenecks.
 
It has very little to do with that. Applications are made to run based on the hardware that is out there, so AMD bet the house on shader needs increasing at a much higher rate than the fixed-function units. That did not happen, hence why you see with Polaris that they spent transistors on fixed-function units and on feeding those units rather than increasing the size of their shader array. They know where the issues are and have addressed some of the problems they were having.
AMD's DX11 problem is inherent across the board in all DX11 games; the games that don't exhibit that problem are the ones where they specifically worked with the devs and sponsored through their dev program. AMD knows the weaknesses and bottlenecks in their hardware, and they are well equipped to advise game companies in their dev program on the best way to avoid those bottlenecks.

We're not talking about bottlenecks, because ~10% is not a bottleneck; it is about driver optimizations.
 
Show me the games and time periods you are comparing. If you are talking about the same games over a certain period of time, nV gets a similar amount of increase when you look at averages. And then you factor in memory differences, and how newer games use more memory or how the ratio of shader needs increases, and the picture is pretty clear.

This is what I was saying in the other thread about HC's review of new vs. old games and new drivers vs. old drivers: it's impossible to draw a conclusion when everything is mashed together like that, and this video draws part of its conclusion from that. The same goes for other reviews that clearly show games running under 30 FPS on both IHVs' hardware, which should not even be considered appropriate for benchmarking unless you are looking at theoreticals, and in that case you can throw everything out, because for end users it won't matter.

That is why I stated he got a lot of things right, but without really understanding what is going on.
 
I couldn't care less if my old 7950 is able to marginally beat Kepler cards by 2016 when I can get a 1070 that is more than 3x faster.
 
Hmm, I'm really not convinced by the argument, because now that their architecture is evolving more, we are seeing the same issues on AMD as we did with Nvidia with regard to older designs.
PCGamesHardware is one of the few sites that uses many generations of cards to review new games, and quite a few of the latest titles have hit Fiji and Hawaii hard....
In an earlier thread I commented on this in the past:
It seems Fury models are hurting more than in the past, with only a rare few modern games delivering the performance one expects from such a card, and it's usually pretty poor at launch for AAA titles.
To see how dire it is at launch for the Fury X, look at PCGamesHardware, who use PresentMon/frame-analysis tools/etc., go into careful detail, and repeat testing early on when issues are identified.
Dishonored 2, Watch Dogs 2, Call of Duty: Infinite Warfare, Forza Horizon 3.
I am just listing the games where the performance could be deemed to have really dropped off, sometimes nearly the same as the Polaris 480 or just ahead, and even occasionally behind the Polaris 480; I'm not listing games where the 980 Ti could be deemed to be outperforming the Fury X, such as Shadow Warrior 2 by 20%.
They also do not use canned benchmarks for their capture measurements/results.

To list some:
Dishonored 2: OK-ish with Polaris but dire with Fiji and Hawaii even after patches: http://www.pcgameshardware.de/Dishonored-2-Spiel-54640/Specials/Patch-13-Benchmark-Test-1214990/
Watch Dogs 2: again Polaris pretty much matches Pascal, but Fiji and Hawaii fall down on performance: http://www.pcgameshardware.de/Watch-Dogs-2-Spiel-55550/Specials/Test-Review-Benchmark-1214553/
Call of Duty: Infinite Warfare, same again: http://www.pcgameshardware.de/Call-...591/Specials/Technik-Test-Benchmarks-1212463/
Forza Horizon 3: Fiji and Hawaii behind the 470: http://www.pcgameshardware.de/Forza.../Specials/Benchmarks-Test-DirextX-12-1208835/

So it can be hit and miss now for Fiji and Hawaii. I agree there are games where they do well, and others, such as these examples, where they suffer compared to Polaris; this issue goes beyond tessellation or the Primitive Discard Accelerator (although more could be focused towards those at the cost of earlier hardware).
Cheers
 
Basically, this "fine wine" thing has ALWAYS existed, for both AMD and Nvidia.

AMD/Nvidia driver optimizations, or game engine patch optimizations, improve performance on existing cards. And developers who now have previous experience with a card can reuse the same tricks in new games, delivering high performance on release.

But newer games are still a mixed bag. Some games are really more like an expansion pack, mostly reusing an engine. But others are completely new tech, and a completely new optimization problem. And for many of these newer games, the older cards hit bottlenecks just not seen by newer generations. As CSI_PC pointed out above, sometimes it's hard to make new features work on older tech.

So labeling this as "fine wine" is just self-selective marketing bullshit, meant to make you forget that AMD will be around a year late with their consumer GTX 1080 competitor.
 
I think most of the performance increase of AMD cards has more to do with the Nvidia cards not having a base of 4 GB of VRAM. Like he said in the video, the 600 series only came with 2 GB of VRAM (they were available with 4 GB, so I'm not sure why he didn't show that in the vid when he did say the AMD cards came with multiple memory configs). I just upgraded from a 4 GB GTX 670 and it ran nearly everything but the newest AAA titles at max settings. I'd be surprised if a 3 GB 7970 was able to pull more than a few fps more than a 4 GB 670/680.
 
I think most of the performance increase of AMD cards has more to do with the Nvidia cards not having a base of 4 GB of VRAM. Like he said in the video, the 600 series only came with 2 GB of VRAM (they were available with 4 GB, so I'm not sure why he didn't show that in the vid when he did say the AMD cards came with multiple memory configs). I just upgraded from a 4 GB GTX 670 and it ran nearly everything but the newest AAA titles at max settings. I'd be surprised if a 3 GB 7970 was able to pull more than a few fps more than a 4 GB 670/680.

No, that is only one aspect of it. Kepler and the 6xx/7xx generations aging relative to the GCN 1.0 7xxx/2xx series is due to a large combination of factors, many of which were one-off situational factors.

At a basic level, the situation is really this: graphics cards at launch need to be priced against each other based on what they offer at that time, because future benefits are often extremely difficult to convey as a tangible selling point to consumers. So this is what determines at the onset which graphics cards compete with each other for consumers. But that does not necessarily mean those graphics cards are equal in other aspects or considerations (for example, where each company positions them).

Also, a side note here: FLOPS is becoming one of those buzzword jargon terms that people throw around with very little understanding. The FLOPS ratings for graphics cards are just a theoretical calculation based on clock speed x cores x ops per core (basically 2 for FP32 for everyone). A graphics card with half the memory speed, for example, will have the same FLOPS rating. It is not even derived from some standardized benchmark (supercomputer FLOPS ratings for the TOP500, for example, are at least obtained by actually running Linpack as a benchmark).
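To make that calculation concrete, here is a minimal sketch; the shader counts and clocks below are the commonly published spec figures and should be treated as approximations:

```python
# Theoretical peak FP32 as described above: cores x clock x ops per clock
# (2, i.e. one fused multiply-add per clock). Shader counts and clocks are
# the commonly published spec figures, used here only as approximations.
def peak_fp32_tflops(cores, clock_mhz, ops_per_clock=2):
    return cores * clock_mhz * 1e6 * ops_per_clock / 1e12

print(f"Fury X  : {peak_fp32_tflops(4096, 1050):.1f} TFLOPS")  # ~8.6
print(f"GTX 1070: {peak_fp32_tflops(1920, 1683):.1f} TFLOPS")  # ~6.5
```

Note that memory bandwidth, geometry throughput and everything else never enter that number, which is exactly why peak FLOPS on its own says so little about game performance.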
 
No, that is only one aspect of it. Kepler and the 6xx/7xx generations aging relative to the GCN 1.0 7xxx/2xx series is due to a large combination of factors, many of which were one-off situational factors.

At a basic level, the situation is really this: graphics cards at launch need to be priced against each other based on what they offer at that time, because future benefits are often extremely difficult to convey as a tangible selling point to consumers. So this is what determines at the onset which graphics cards compete with each other for consumers. But that does not necessarily mean those graphics cards are equal in other aspects or considerations (for example, where each company positions them).

Also, a side note here: FLOPS is becoming one of those buzzword jargon terms that people throw around with very little understanding. The FLOPS ratings for graphics cards are just a theoretical calculation based on clock speed x cores x ops per core (basically 2 for FP32 for everyone). A graphics card with half the memory speed, for example, will have the same FLOPS rating. It is not even derived from some standardized benchmark (supercomputer FLOPS ratings for the TOP500, for example, are at least obtained by actually running Linpack as a benchmark).

Floating-point Operations Per Second.

So yes, it does have a time-based component to the metric.
 
Basically, this "fine wine" thing has ALWAYS existed, for both AMD and Nvidia.

AMD/Nvidia driver optimizations, or game engine patch optimizations, improve performance on existing cards. And developers who now have previous experience with a card can reuse the same tricks in new games, delivering high performance on release.

But newer games are still a mixed bag. Some games are really more like an expansion pack, mostly reusing an engine. But others are completely new tech, and a completely new optimization problem. And for many of these newer games, the older cards hit bottlenecks just not seen by newer generations. As CSI_PC pointed out above, sometimes it's hard to make new features work on older tech.

So labeling this as "fine wine" is just self-selective marketing bullshit, meant to make you forget that AMD will be around a year late with their consumer GTX 1080 competitor.
I pretty much 100% agree with this statement. It is the reason that for years we have updated game patches and retested cards with pretty much every driver rev. It is why you do not see a graph with 30 video cards from us... because we don't reuse old results, as those have a tendency to change greatly over time for both Red and Green and can be very game-dependent as well.
 