Will the new Radeons on 7nm compete with Nvidia?

I've always wondered why AMD cards look so great on paper but can't take the crown.

I know architectures are not directly comparable, but the Radeon VII looks like a monster (as did Vega, for that matter). Spec-wise it looks like it should pretty much kill the RTX 2080, yet it only manages to keep up with it.

Just take a look at the specs

[attached image: Radeon VII vs RTX 2080 spec comparison]



I mean, the Radeon VII has better specs on all counts but ROPs. It even has twice the memory size, more than twice the bandwidth, and 38% more compute power.
Then again, it draws more power: 300W, compared to the RTX 2080's 225W and even the RTX 2080 Ti's 260W.
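For what it's worth, that ~38% figure roughly checks out against the public spec sheets. A quick back-of-the-envelope sketch (the shader counts and boost clocks below are the publicly listed numbers; sustained clocks in games will differ):

```python
# Peak FP32 throughput = shader units x 2 ops/clock (fused multiply-add) x clock.
# Shader counts and boost clocks are the publicly listed spec-sheet figures
# (assumptions, not measured values).
def peak_tflops(shaders, boost_mhz):
    return shaders * 2 * boost_mhz * 1e6 / 1e12

radeon_vii = peak_tflops(3840, 1800)  # ~13.8 TFLOPS at AMD's listed peak clock
rtx_2080 = peak_tflops(2944, 1710)    # ~10.1 TFLOPS at Nvidia's reference boost

print(f"Radeon VII: {radeon_vii:.1f} TFLOPS")
print(f"RTX 2080:   {rtx_2080:.1f} TFLOPS")
print(f"Delta:      {radeon_vii / rtx_2080 - 1:.0%}")  # ~37%, close to the 38% above
```

Of course, paper TFLOPS only tell you peak ALU throughput, not how well either architecture keeps those units fed in games.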

AMD still has a lot to learn about performance and efficiency.
 
I've always wondered why AMD cards look so great on paper but can't take the crown.

Here's the fun part: they do, depending on what you compare.

To keep in mind: Nvidia usually cuts down significantly on compute hardware in their consumer-oriented GPUs; only the Gx100 part (currently the GV100 'Volta') has outsize compute relative to the rest of the lineup, whereas AMD typically doesn't make that split. Nvidia is also likely more efficient on-die at setting up workloads for gaming (or compute, for their Quadro/Tesla parts), more efficient on the software/driver side as well, and those two sides are probably better integrated.

The delta in design goals and funding, which influence each other, helps explain what we see on paper and how that plays out with various workloads.
 
I've always wondered why AMD cards look so great on paper but can't take the crown.

I know architectures are not directly comparable, but the Radeon VII looks like a monster (as did Vega, for that matter). Spec-wise it looks like it should pretty much kill the RTX 2080, yet it only manages to keep up with it.

Just take a look at the specs

[attached image: Radeon VII vs RTX 2080 spec comparison]


I mean, the Radeon VII has better specs on all counts but ROPs. It even has twice the memory size, more than twice the bandwidth, and 38% more compute power.
Then again, it draws more power: 300W, compared to the RTX 2080's 225W and even the RTX 2080 Ti's 260W.

AMD still has a lot to learn about performance and efficiency.

That's been the same story for years. AMD's architecture is more compute-oriented, and that performance doesn't translate to games. On top of that, RTX 2080s boost much higher than the stock boost clocks Nvidia gives out in the specs. But here is the biggest difference between the architectures, one that could instantly improve AMD:

Immediate-mode tile-based rasterization. I believe this gave Nvidia up to 30% more performance at the same power. Raja tried it in Vega, but I think it was broken in hardware and never really gave the results they expected. I believe Navi, or the generation after for sure, should have it working. Nvidia added it with Maxwell, and AMD just hasn't made any significant changes to GCN; they tried with Vega, but the ball was dropped on that.

AMD's rasterization performance hasn't really improved, while Nvidia made its biggest improvement with Maxwell, and that difference has been there ever since. I was really excited about DSBR, but after the initial architecture reveal, as the launch got closer, it wasn't a feature that was being talked about anymore. That showed something didn't go as planned. It pretty much went away with the wind.
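To illustrate what tile-based (binned) rasterization buys you, here's a toy sketch in Python. It only shows the binning step: triangles are assigned to fixed-size screen tiles by bounding box, so the rasterizer can then process one tile at a time and keep that tile's color/depth data in on-chip cache instead of hitting VRAM. The tile size and data layout here are arbitrary choices; real hardware (Maxwell's tiled caching, Vega's DSBR) is far more involved.

```python
# Toy sketch of tile binning: assign each triangle (by its bounding box) to
# the screen tiles it may cover, then rasterization can proceed tile-by-tile.
TILE = 32  # tile size in pixels (arbitrary choice for illustration)

def tiles_for_triangle(tri, width, height):
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    # Clamp the bounding box to the screen, then convert to tile coordinates.
    x0 = max(0, int(min(xs)) // TILE)
    x1 = min((width - 1) // TILE, int(max(xs)) // TILE)
    y0 = max(0, int(min(ys)) // TILE)
    y1 = min((height - 1) // TILE, int(max(ys)) // TILE)
    return [(tx, ty) for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]

def bin_triangles(triangles, width, height):
    bins = {}  # (tile_x, tile_y) -> indices of triangles touching that tile
    for i, tri in enumerate(triangles):
        for t in tiles_for_triangle(tri, width, height):
            bins.setdefault(t, []).append(i)
    return bins

tris = [((5, 5), (60, 10), (10, 60)),          # spans four tiles
        ((100, 100), (110, 100), (100, 110))]  # fits in one tile
print(bin_triangles(tris, 1920, 1080))
```

The locality win comes after binning: all shading for one tile finishes before moving to the next, so overdraw within a tile is resolved in cache rather than with repeated memory round-trips.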
 
We shall see with The Division 2; AMD specifically mentioned it using over 8GB at 4K.

And AMD was doing really well in Vulkan; with the new DX12 push coming, we should see the gap narrow. Nvidia will be pushing DX12 now as well, since their cards do better in it than in DX11, in BF V at least.
 
We shall see with The Division 2; AMD specifically mentioned it using over 8GB at 4K.

And AMD was doing really well in Vulkan; with the new DX12 push coming, we should see the gap narrow. Nvidia will be pushing DX12 now as well, since their cards do better in it than in DX11, in BF V at least.

No, Nvidia is going to push it because they need it for DXR. Hopefully they don't somehow gimp AMD cards while working with developers, lol.
 