AMD RDNA 2 gets ray tracing

LukeTbk

Gawd
Joined
Sep 10, 2020
Messages
826
Games developed for the next-gen consoles using RT will most likely be DXR 1.1 -> Dirt 5.
Others, according to AMD: Far Cry 6, World of Warcraft: Shadowlands, Godfall, and The Riftbreaker.

From what I can see of those titles, for the ones released the RT is either quite limited or limited to screen space, but with good performance (maybe because of the previous statement):

We'll see with updates over time.
 

XoR_

[H]ard|Gawd
Joined
Jan 18, 2016
Messages
1,093
Video from AMD talking about MS DXR 1.1. Kinda dry, but for real comparisons between Nvidia and AMD RT, the application/game will really need DXR 1.1. In other words, I do not think DXR 1.0 applications will effectively use RDNA2's ray tracing ability; most current DX12 RT games use DXR 1.0, and some Vulkan RT games use the Nvidia RT extension. Games developed for the next-gen consoles using RT will most likely be DXR 1.1 -> Dirt 5.
I would not expect a big performance improvement (AMD vs NV) from DXR 1.1, because Turing and Ampere cards will benefit the same from optimized DXR 1.1 code, them having full hardware support and all.

Actually, the RT performance of RDNA2 is more or less where I thought it would be => solidly below Turing.
Frame-rate RT-on / RT-off * 100% | Control 1440p | Metro Exodus 1440p
2070    | 54.2% | 58.2%
2080 Ti | 57.5% | 61.2%
3080    | 59.4% | 67.5%
6800 XT | 42.1% | 53.6%
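Those percentages are just the RT-on frame rate divided by the RT-off frame rate. A quick sketch of the math, with made-up frame rates rather than any reviewer's raw numbers:

```python
# RT "cost" expressed as the fraction of RT-off performance retained
# when RT is enabled: fps_rt_on / fps_rt_off * 100%.
def rt_retention(fps_rt_on: float, fps_rt_off: float) -> float:
    """Percentage of the RT-off frame rate kept with RT enabled."""
    return fps_rt_on / fps_rt_off * 100.0

# Hypothetical example: a card doing 90 fps with RT off and 48 fps with
# RT on retains 53.3% of its performance, roughly the 6800 XT's
# Metro Exodus figure in the table above.
print(round(rt_retention(48.0, 90.0), 1))  # → 53.3
```

The higher the percentage, the smaller the relative hit from enabling RT, which is why it is a fairer cross-vendor comparison than raw frame rates.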

Your old post (noko @ May 17, 2020)
Yes, performance much better than Turing, since Turing has pretty much sucked with RT gaming from a performance perspective; a lack of really innovative games, or games where one would say -> I want that!
Back then we had only Turing (for more than a year), and expectations for both Ampere and RDNA2 were quite high, with people generally calling Turing unproven tech with a poor, unacceptable RT performance hit.
Today we see that NV did not manage to substantially improve RT performance despite some wild claims from the leaks. Ampere has only an evolutionary RT performance improvement over Turing.
AMD, on the other hand, just as I suspected, added RT acceleration but has a higher performance hit. Maybe it also has a lower transistor cost and overall is not so much worse, but the performance targets were different. NV, after all, changed the name to RTX, and this was the main selling feature, so they pushed for more RT performance from the get-go.

Given the RT hit scores from these two games, NV actually did a pretty good job with the RT implementation on Turing. They themselves could not improve it much after two years of development, managing only minor improvements, and I am pretty sure they tried hard, because the RT performance drop is the difference that would make many people want to replace older Turing cards. As it is, any upgrade other than for the sake of generally higher performance doesn't make any sense, e.g. I have an RTX 2070, and in this class the card will rock for quite some time to come. Ampere with similar rasterization performance will have almost identical RT performance, making it only marginally better, and mostly in power draw.

AMD could not get to Turing levels with their first DXR capable GPU architecture.

One of my quotes
RDNA2 does not really need to be faster.
If their DXR implementation is slower than Turing then so what? What can you do?
What can we do?
I won't do anything because my RTX 2070 is plenty fast. I wanted RDNA2 to be super fast at RT for PS5's sake. It isn't... no biggie really. It will take years until developers really grasp what RT can be used for. I expect beautiful games for PS5.

You?
Either still hold on to the idea that for whatever reason DXR 1.1 will somehow run much, much faster on RDNA2 than on Turing/Ampere, or just accept that Turing was not bad. Not a hastily pushed RT implementation just so that NV could change the GTX name to RTX and increase card prices, as it was made out to be.

You posted (noko @ May 21, 2020)
Cyberpunk 2077 will be getting an update to support the Xbox Series X console's new features; the original console version will work from the start, but what the update will do to take advantage of the new console features remains to be seen. It could be using the AMD RT method for ray tracing. One would think that would also carry over to RDNA2 GPUs that support RT.
Yes, Cyberpunk 2077 will be the better benchmark for an actual RDNA2 performance comparison, with possible hard-coded optimization for console hardware / generally new DXR 1.1 features, which might be less taxing for the AMD RT method.

That said, I do not expect relative performance to change much, if at all, and it will still be RDNA2 < Turing < Ampere, with more difference between RDNA2 and Turing than between Turing and Ampere.
Turing is supposed to have full hardware DXR 1.1 support, and the new RT optimization methods were simply missing from the original DXR API. Meaning they were not AMD-specific additions for AMD hardware but general improvements in the API, which allow getting more performance by better grouping of dispatched rays to avoid cache misses and by more aggressive geometry optimization of distant objects.
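The "better grouping of dispatched rays" idea can be sketched in a few lines. This is only an illustrative CPU-side binning of rays by direction octant (a coherence technique sometimes called ray binning), not actual DXR 1.1 API code; the names are made up:

```python
from collections import defaultdict

# Illustrative ray binning: group rays by the sign octant of their
# direction so rays traversing similar parts of the acceleration
# structure get processed together, improving cache coherence.
def octant(direction):
    dx, dy, dz = direction
    return (dx >= 0, dy >= 0, dz >= 0)  # 8 possible buckets

def bin_rays(rays):
    """rays: list of (origin, direction) tuples -> dict of octant -> rays."""
    bins = defaultdict(list)
    for ray in rays:
        bins[octant(ray[1])].append(ray)
    return bins

rays = [((0, 0, 0), (1, 2, 3)),
        ((0, 0, 0), (-1, 2, 3)),
        ((1, 1, 1), (1, 5, 0))]
bins = bin_rays(rays)
print(len(bins))  # first and third ray share an octant → 2 buckets
```

A real driver or engine would do something far more sophisticated, but the principle is the same: dispatch order matters, and the API changes give implementations more freedom to exploit it.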

It is very unlikely the proportions will change in Cyberpunk 2077 vs current games, and RDNA2 will never beat Turing at RT, i.e., in performance RT-on / performance RT-off. With that big of a gap in current DXR games it simply seems completely unlikely.
...but let's see what future software brings. New games, new tech, new AMD drivers, NV nerfing Turing, etc. all might make a big difference. Maybe, e.g., two years from now the performance gap won't be there, or it will even be slightly in AMD's favor. We have seen this before, with GCN cards beating NV cards that benchmarked faster at launch. I do not expect any drastic changes in the next few months, though.
 

noko

Supreme [H]ardness
Joined
Apr 14, 2010
Messages
5,912
XoR_ said:
(post quoted in full above)
I don't expect many drastic changes in 2 months either. As for RT, it will depend more on the application and implementation; while Dirt 5 at present has RDNA2 beating both Ampere and Turing, I just don't think it means much. That could turn around, but the implementation does not appear to add anything worthwhile to the game in the end. CD Projekt is maybe the best hope for defining both Nvidia's and AMD's capability, that is, if they can. Working for years with Nvidia on RT may not translate well to a few months with AMD on RT. As for gamers using RT, the share of users enabling it when available is very small. Consoles may turn that around, but most will probably not even know or care that RT is being used on the consoles; they just want a good gaming experience however it comes.

Yes, both Nvidia and AMD will benefit from DXR 1.1; who will benefit more is the question. Using old titles to compare AMD's RT ability is, I think, pointless and probably very misleading.
 

noko

Supreme [H]ardness
Joined
Apr 14, 2010
Messages
5,912
Riftbreaker RT, Shadows and Ambient Occlusion, plus other DX 12 Ultimate items



DX 12 Ultimate games and features supported with RT:

[attached image: RTgames.jpg]
 

RanceJustice

Supreme [H]ardness
Joined
Jun 9, 2003
Messages
6,037
How easily can DX12 Ultimate RT be ported to/used via Vulkan? Likewise, the FidelityFX features, or whatever the AMD equivalent to DLSS is called? While I'm pleased that DXR is seemingly hardware agnostic, I'm really hoping to see platform-independent implementations of these functions with something like Vulkan. I know that DX12 is in theory very "close" to Vulkan in terms of overall features, low-level access, and compatibility, but I don't know what sort of porting is necessary from "normal" DX12 to Vulkan, much less from DXR / DX12 Ultimate. Hopefully the Vulkan devs, other open source/spec platforms, and AMD have been working on this a bit behind the scenes, rather than users having to give up all these features if they aren't interested in using a Microsoft-compatible OS like Windows or Xbox.
 

LukeTbk

Gawd
Joined
Sep 10, 2020
Messages
826
I'm really hoping to see platform independent implementations of these functions with something like Vulkan.
With both the PS5 and Xbox apparently not supporting the Vulkan API (it seems only Nintendo does), I am not sure Vulkan is perceived as especially more platform independent by a game development studio.

Most of the games above work on a PS5 (thus, from what we understand, not on DX12); that would suggest there is probably an abstraction in their engines between the lower-level graphics API and the game's graphics code, making the use of multiple APIs possible.
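That kind of engine-side abstraction can be sketched as a thin interface the game code talks to, with one backend per platform API. This is purely illustrative, with made-up class and platform names:

```python
from abc import ABC, abstractmethod

# Illustrative render-backend abstraction: game code only talks to
# GraphicsBackend, and each platform supplies its own implementation
# (e.g. D3D12 on Xbox/Windows, GNM on PS5, Vulkan elsewhere).
class GraphicsBackend(ABC):
    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def draw_frame(self, scene: list) -> str: ...

class D3D12Backend(GraphicsBackend):
    def name(self) -> str:
        return "d3d12"
    def draw_frame(self, scene: list) -> str:
        return f"d3d12: drew {len(scene)} objects"

class VulkanBackend(GraphicsBackend):
    def name(self) -> str:
        return "vulkan"
    def draw_frame(self, scene: list) -> str:
        return f"vulkan: drew {len(scene)} objects"

def make_backend(platform: str) -> GraphicsBackend:
    # The game never branches on platform outside this factory.
    return {"windows": D3D12Backend, "linux": VulkanBackend}[platform]()

backend = make_backend("windows")
print(backend.draw_frame(["car", "road"]))  # d3d12: drew 2 objects
```

With this layout, adding a console target means writing one new backend class; the rest of the game's rendering code never changes, which is presumably how the same titles ship on DX12-less platforms.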
 

chameleoneel

2[H]4U
Joined
Aug 15, 2005
Messages
3,820
LukeTbk said:
(post quoted above)
According to TechPowerUp's pages on the PS5 and new Xbox GPUs, they support Vulkan.

https://www.techpowerup.com/gpu-specs/playstation-5-gpu.c3480

https://www.techpowerup.com/gpu-specs/xbox-series-x-gpu.c3482

https://www.techpowerup.com/gpu-specs/xbox-series-s-gpu.c3683
 

cybereality

Supreme [H]ardness
Joined
Mar 22, 2008
Messages
6,664
Maybe the chip does, but consoles typically use somewhat proprietary APIs.
 