AMD GPU Generational Performance Part 1 @ [H]

FrgMstr

AMD GPU Generational Performance Part 1

Ever wonder how much performance you are truly getting from a GPU-to-GPU upgrade in games? What if we took GPUs from AMD and compared the performance gained from 2013 to 2018? This is our AMD GPU Generational Performance Part 1 article, focusing on the Radeon R9 290X, Radeon R9 390X, Radeon R9 Fury X, and Radeon RX Vega 64 in 14 games.

If you like our content, please support HardOCP on Patreon.
 
I'm not really familiar with AMD video cards. Where do the RX 570 / RX 580 fall on the performance spectrum? I'm guessing between Fury and Vega?
 
Glad I sold my 290Xs during the mining craze and jumped to the V56s. Awesome cards, both clock to 1650/1100 (and I was one of the lucky few who got them at MSRP). Looking forward to the 7nm iteration and Navi following!
 
No 4K testing? Remember how AMD used to market the Fury X as a 4K card? I'd love to see how it fared. I think I know the answer, though.
 
No 4K testing? Remember how AMD used to market the Fury X as a 4K card? I'd love to see how it fared. I think I know the answer, though.
That said, there have been a few cards marketed as 4K cards, and as we have told you guys repeatedly, it just was not so. The 1080 Ti was the first true 4K card in our eyes. But let's keep this on topic, please.
 
BTW, people bitching about the 35%+ performance increase of Turing vs. Pascal should take a look at this.

Anyway, can't wait for the next installment. Good job.
 
I really like these articles - as someone who generally skips a generation or two between updates, it's great to see what the real difference between my 290 and a modern high-end card is.

Gotta say though, while I know everyone has their own performance preferences, calling performance that never drops below 30 fps "unplayable" is just asinine, and I've seen this comment come up a lot in recent articles. It may not be ideal for some people, but performance that averages 40-50 fps and *never drops below 30 fps* is far from "unplayable".
 
I'm not really familiar with AMD video cards. Where do the RX 570 / RX 580 fall on the performance spectrum? I'm guessing between Fury and Vega?

Mainstream:

RX 580 MSRP - $229
RX 570 MSRP - $169

The cards we evaluated today started at $549 (290X) and $449 (390X refresh at the cheapest), going up to $649 for the Fury X and $499 for the air-cooled Vega 64.

That should put Polaris in perspective for you.
 
No 4K testing? Remember how AMD used to market the Fury X as a 4K card? I'd love to see how it fared. I think I know the answer, though.

And we all know how that turned out...

4K testing would be a nightmare on the Fury X; we all know Vega 64 would surpass it greatly.

I wish there was a good 4K comparison we could make with AMD cards, but even Vega 64 struggles in most games at 4K since it is only "on par" with 1080 performance; there is no match for the 1080 Ti.
 
I really like these articles - as someone who generally skips a generation or two between updates, it's great to see what the real difference between my 290 and a modern high-end card is.

Gotta say though, while I know everyone has their own performance preferences, calling performance that never drops below 30 fps "unplayable" is just asinine, and I've seen this comment come up a lot in recent articles. It may not be ideal for some people, but performance that averages 40-50 fps and *never drops below 30 fps* is far from "unplayable".

It is very dependent on the game; some games are acceptable at lower framerates, some are not. We base our conclusions not just on the framerate, but on how the gameplay experience feels while playing, looking for lag, hitches or pauses, inconsistencies, and changes in fps that we can notice, and we take all of this into consideration, not just what the fps number is.
 
These comparison graphs really remind me of my prior concern that one of the greatest issues holding back Fury, and especially Vega, has been limited memory bandwidth. Although anecdotal due to my small sample size, with all the prior generations of ATI/AMD (and my one not-too-ancient Nvidia) graphics cards I've overclocked, when core clock and memory bandwidth increased by the same percentage, frame rate scaled linearly. However, increasing core count or core speed without a commensurate memory bandwidth increase would eventually lead to rapidly diminishing returns.

The 290/390 transition to Fury had a more than 40% increase in cores with only a 1/3 increase in memory bandwidth, along with a real problem in only having 4GB of memory. Remember how, before Vega came out, many of us thought there would be a 40-50%+ improvement in frame rate based on the core speed increase and possible architecture improvements? And yet with Vega there's roughly a 40-50% general increase in core clock speed (depending on heat soak) with a REDUCTION in memory bandwidth, leading to a mere 20-30ish percent overall gain.
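
To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The shader counts, reference clocks (MHz), and bandwidth figures (GB/s) are approximate spec-sheet values, and "compute" here is just shaders times clock, ignoring architectural differences entirely:

```python
# Approximate reference specs: (shaders, boost clock MHz, bandwidth GB/s).
specs = {
    "R9 290X": (2816, 1000, 320),
    "R9 390X": (2816, 1050, 384),
    "Fury X":  (4096, 1050, 512),
    "Vega 64": (4096, 1546, 484),
}

def pct(new, old):
    """Percent change from old to new."""
    return 100.0 * (new - old) / old

cards = list(specs)
for prev, curr in zip(cards, cards[1:]):
    (c0, f0, b0), (c1, f1, b1) = specs[prev], specs[curr]
    print(f"{prev} -> {curr}: "
          f"compute {pct(c1 * f1, c0 * f0):+.0f}%, "
          f"bandwidth {pct(b1, b0):+.0f}%")
```

Raw compute grows far faster than bandwidth at every step, and bandwidth actually shrinks going from Fury X to Vega 64 (+47% compute, about -5% bandwidth), which lines up with the diminishing returns described above.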
 
This article makes me feel better about my impulse upgrade from my Fury Nano to a Vega 64 LE (at MSRP, thankfully). I think for my next card, I'd like something not weirdly power limited for a change.
 
Really enjoying this series of articles. I get asked all the time by people about upgrading and builds so this is a terrific source of information to pass along.

Thanks for all the hard work guys!
 
This article makes me feel better about my impulse upgrade from my Fury Nano to a Vega 64 LE (at MSRP, thankfully). I think for my next card, I'd like something not weirdly power limited for a change.

The soft PowerPlay tables mod will fix the power limiting issues; my clocks are rock solid and not all over the map as I've seen in many reviews. It's an extra step in overclocking, but it's worth it IMO.
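
For anyone curious what the mod actually touches: it stores a binary "soft PowerPlay table" in the display driver's registry key, which the driver reads in place of the power limits baked into the card's BIOS. A minimal, Windows-only Python sketch to check whether one is installed; this assumes the Vega card is adapter "0000" under the standard display class GUID, which may differ on your system:

```python
import winreg

# Standard display adapter class GUID; adapter "0000" is an assumption,
# yours may be 0001, 0002, etc.
CLASS_KEY = (r"SYSTEM\CurrentControlSet\Control\Class"
             r"\{4d36e968-e325-11ce-bfc1-08002be10318}\0000")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as key:
    # The mod writes the table as a REG_BINARY value under this name.
    table, _ = winreg.QueryValueEx(key, "PP_PhmSoftPowerPlayTable")
    print(f"soft PowerPlay table present: {len(table)} bytes")
```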
 
Pretty fun read. I had the Nvidia Part 1 article open at the same time, and I've gotta say I'm actually impressed by how the 290X/390X compared to the 780 and 980. They aged much better in my opinion (but yes, I do realize the Ti cards are better).

I wonder if how well the 290X held up slightly makes the ~80% improvement to Vega seem smaller when compared with the 780/1080 numbers. Granted, once again, the Ti versions throw a wrench into that theory. AMD needs something to compete above the non-Ti level.

Great article! Any chance there will be anything looking at Polaris?
 
There is one game that gives us hope: Far Cry 5. We see that Vega 64 is 93% faster than the R9 290X. This is more like it, but we feel this is only happening because of a technology called Rapid Packed Math. On the one hand, this is a good thing, and technologies like this can push Vega 64 forward and provide wider performance advantages over time. On the other hand, it's sad that it takes something like this to move performance forward. It's sad that AMD's high-end GPUs can't just brute-force FPS like NVIDIA's GPUs are able to from generation to generation. You just haven't gotten a lot of performance uplift from AMD over the past five years in the high end.
Cough... GameWorks... cough.

Also... this isn't just about AMD but also about GlobalFoundries, correct? The process that AMD was forced to build Vega on (due to the wafer agreement and other issues) was optimized for low power, as far as I understand it. It is inferior in performance, at higher clocks, to the TSMC process Nvidia has been able to use.
 
Would have liked to see Wolfenstein in there as well. You said FC5 was the only game to have RPM, but Wolfenstein II does as well?
 
Fantastic article. The Fury X actually still holds its own at 1440p in most titles despite only having 4GB of VRAM. It would have been interesting to see the 7970 compared here, as I feel that card held onto being relevant longer than the others, but alas, it's a tad too old to keep in line with the Nvidia comparison and the game suite.
 
Cough... GameWorks... cough.

Also... this isn't just about AMD but also about GlobalFoundries, correct? The process that AMD was forced to build Vega on (due to the wafer agreement and other issues) was optimized for low power, as far as I understand it. It is inferior in performance, at higher clocks, to the TSMC process Nvidia has been able to use.
The point of the article is to show gaming performance, not to evaluate the technical and business decisions of the devs and foundries. So let's stay on topic, please. There are plenty of other threads to take those discussions up in.
 
Thanks Brent & Kyle for this series.

Been team green since MS Vista. My last red card was an HD 2600 (not sure if Pro or XT). Got it in a Black Friday sale for ~$100 and loved it, at least when my games didn't crash in my old Pentium 4 rig. Still have it. It was awesome because, aside from its speeds, it also allowed some DX10 functionality on XP. Have a feeling if I fired that thing up now I might be able to grab an updated driver that would help it a bit.

I could never really wrap my head around AMD's naming schemas, so your history here really helped. Fascinating about that custom closed loop on the R9 Fury X, pretty cool.
 
Would have liked to see Wolfenstein in there as well. You said FC5 was the only game to have RPM, but Wolfenstein II does as well?
As reported by AMD, you are correct. Wolf2 did support RPM. https://gaming.radeon.com/en/wolfenstein-2-new-colossus-recommended-settings/

Wolfenstein® II took full advantage of the Vulkan™ API and latest features found on Radeon RX Vega graphics cards. By using the advanced features on the “Vega” GPU such as rapid packed math, multi-threaded command buffer recording, asynchronous compute and shader intrinsics, I averaged 60+ fps at 3440×1440 with Radeon™ RX Vega 56 and at 4K with Radeon™ RX Vega 64.

FYI Brent_Justice
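
For anyone wondering what "rapid packed math" actually means at the hardware level: two FP16 values fit exactly into one 32-bit register lane, so a Vega ALU can operate on both halves with a single instruction, roughly doubling FP16 throughput. A toy Python sketch of just the packing idea (the throughput win itself only happens on the GPU, of course):

```python
import struct

# Two half-precision floats occupy exactly one 32-bit word, which is
# what lets a 32-bit ALU lane process a packed FP16 pair per clock.
a, b = 0.5, -1.25

# Pack two FP16 values into a single 32-bit integer.
packed = struct.unpack("<I", struct.pack("<2e", a, b))[0]
print(f"packed word: 0x{packed:08x}")

# Unpack them again; both values round-trip (within FP16 precision).
lo, hi = struct.unpack("<2e", struct.pack("<I", packed))
print(lo, hi)  # 0.5 -1.25
```

The catch, as the article notes, is that the game has to opt in: values like HDR color or normals tolerate 16-bit precision, but the developer has to mark them as such for the driver to pack them.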
 
Go ahead and ban me for discussing something your article discusses — features like Rapid Packed Math being implemented by developers and how that impacts gaming.

GameWorks is very similar to Rapid Packed Math in that it is one company's game performance enhancement tech vs. another's. Clearly, Nvidia has been more successful than AMD at getting such tech (for their cards) implemented by devs.

If you can prove what I wrote isn't on-topic, as it pertains directly to the quoted text from your article, then I will retract my posts.
 
Go ahead and ban me for discussing something your article discusses — features like Rapid Packed Math being implemented by developers and how that impacts gaming.

GameWorks is very similar to Rapid Packed Math in that it is one company's game performance enhancement tech vs. another's. Clearly, Nvidia has been more successful than AMD at getting such tech (for their cards) implemented by devs.

If you can prove what I wrote isn't on-topic, as it pertains directly to the quoted text from your article, then I will retract my posts.
Should you wish to discuss the technology and how it impacts performance, I would love to discuss that. However, you seem to want to discuss the politics behind the technology and the foundry process, which is something I do not want to discuss in this review thread. There are plenty of other threads for that.
 
BTW, people bitching about the 35%+ performance increase of Turing vs. Pascal should take a look at this.

Anyway, can't wait for the next installment. Good job.
You mean bitching about the price increase relative to the performance gained. The price-to-performance ratio.
 
Just one little nitpick. It looks like the article doesn't say whether DX11 or DX12 was used in Rise of the Tomb Raider and Deus Ex: Mankind Divided, unless it is implied by "Highest Settings."
 
Just one little nitpick. It looks like the article doesn't say whether DX11 or DX12 was used in Rise of the Tomb Raider and Deus Ex: Mankind Divided, unless it is implied by "Highest Settings."

DX11 is used in those games in all articles unless specifically noted otherwise; DX11 is faster in those games in our testing.
 
Vega struck me, since its release, as a stretch for AMD. They really had to milk it to get 1080-level performance out of it. Where Ryzen exceeded many performance targets, Vega only just barely made it.

Seeing these benchmarks, Vega isn't bad. But it's not what AMD really needed, either. It's somewhere in between, neither an utter failure nor an unqualified success. Still, this is better than no high-end presence at all.

I hope the next GPU out of AMD brings us more competition. Nvidia is killing it from a performance angle, but their pricing and anti-competitive practices are just screaming for real competition again.

And, lol, my old 7970 still continues to serve in my HTPC, and dual 7970s still run my old arcade/console emulator build. GCN forever, lolol.
 
Excellent review [H]!

I love looking back at the older cards and seeing how much performance increases between generations. It should make some of us rethink our impulsive, every-generation upgrade tendencies, since they are not always much of an upgrade.

What is also interesting is how certain games just perform insanely better than all of their peers (Doom). I know the Vulkan API helps, but I doubt it accounts for all of the difference. You guys are really familiar with all of the "eye candy features" in these games. Would you think that Doom has fewer of those features than, say, Crysis 3? I pick Crysis since it is on the other end of the performance spectrum, but when you lowered the in-game options to really low, it almost performed as well as Doom. In doing that, what visual quality differences or sacrifices were there? Or does doing that just put them at the same visual quality level, explaining the performance? Or is Doom just that much faster while looking great?

Any chance, once the AMD series is done, that the 2080 and 2080 Ti can get thrown into this blender? Even if the games don't support the ray tracing features, it will be a great jumping-off point to be able to compare the new cards' performance to all these generations, going back five years in all these older games.
 
DX11 is used in those games in all articles unless specifically noted otherwise; DX11 is faster in those games in our testing.
Really? I was under the impression that DX12 was faster in those games on AMD hardware.
 
You mean bitching about the price increase relative to the performance gained. The price-to-performance ratio.
Well, we're looking at a 35%+ performance increase with pre-release drivers, plus the RTX thingy. So the jury is still out.
I agree the 2080Ti price is ludicrous, but you can just get the AMD performance equival... no wait... :rolleyes::rolleyes:
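
The complaint is easy to make concrete: if performance goes up less than price does, performance per dollar falls. A quick sketch with illustrative numbers; the fps values below are placeholders, while the prices are the 1080 Ti and 2080 Ti FE launch MSRPs:

```python
# Hypothetical fps numbers; prices are launch MSRPs ($699 / $1,199).
old_fps, old_price = 100.0, 699.0   # 1080 Ti class card
new_fps, new_price = 135.0, 1199.0  # 2080 Ti class card, +35% performance

old_ppd = old_fps / old_price
new_ppd = new_fps / new_price
print(f"old: {old_ppd:.3f} fps/$   new: {new_ppd:.3f} fps/$")
print(f"perf per dollar change: {100 * (new_ppd / old_ppd - 1):+.0f}%")
```

With those placeholder numbers, a 35% performance gain at a 72% price hike works out to roughly a 21% drop in fps per dollar, which is the actual source of the grumbling.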
 
What is also interesting is how certain games just perform insanely better than all of their peers (Doom). I know the Vulkan API helps, but I doubt it accounts for all of the difference. You guys are really familiar with all of the "eye candy features" in these games. Would you think that Doom has fewer of those features than, say, Crysis 3? I pick Crysis since it is on the other end of the performance spectrum, but when you lowered the in-game options to really low, it almost performed as well as Doom. In doing that, what visual quality differences or sacrifices were there? Or does doing that just put them at the same visual quality level, explaining the performance? Or is Doom just that much faster while looking great?

You have to remember that Doom is mostly played in rather small indoor areas. Crysis 3 has levels with far more small details and much longer viewing distances. Aside from that, the Doom developers have done a fantastic job of optimizing the game. It also seems to be one of those games where AMD's architecture really shines.

I think for any follow-up articles it would be a good idea to include the 2016 Hitman game as well, since, if I remember correctly, it was also a game that performed well on AMD cards.

It really sucks that we are still a few years away from AMD putting out something new. They have to put out something really stellar to compete with whatever Nvidia is offering at that point.
 
I think the next iteration of Vega will be a great leap forward: doubling the bandwidth and cutting core power consumption by 40%, along with the speed improvements that will come from moving to TSMC's 7nm process. Vega this gen was hamstrung by memory bandwidth, but the new iterations of HBM2 like Aquabolt clock much better. 7nm is gonna be a game changer across all segments, similar to what going from 28nm to 14nm did.
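
The bandwidth side of that is easy to sanity-check, since HBM2 bandwidth is just total bus width times per-pin data rate divided by 8. A quick sketch using public per-stack figures; the four-stack 7nm configuration at the end is my speculation, not an announced spec:

```python
def hbm2_bandwidth_gbs(stacks, gbps_per_pin, bits_per_stack=1024):
    """Peak bandwidth in GB/s: total pins * per-pin rate / 8 bits per byte."""
    return stacks * bits_per_stack * gbps_per_pin / 8

print(hbm2_bandwidth_gbs(2, 1.89))  # Vega 64 today: ~484 GB/s
print(hbm2_bandwidth_gbs(2, 2.4))   # same 2 stacks at Aquabolt speed: ~614 GB/s
print(hbm2_bandwidth_gbs(4, 2.0))   # speculative 4-stack 7nm part: 1024 GB/s
```

So even without faster pins, going to four stacks roughly doubles bandwidth, which is where the "doubling" hope comes from.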
 