AMD RDNA 2 gets ray tracing

I look at it this way: what is more important, better rasterization per dollar or better RT per dollar? Better rasterization will help in just about every game. Better RT, assuming it is even usable and makes a noticeable difference, is only useful in RT games. Most Turing RTX owners never or seldom used RT, for many reasons: very poor performance, difficulty telling any difference, and having to degrade other settings and IQ to use it, giving an overall worse gaming experience. Anyway, I ended up with the conclusion that rasterization/$ is king.

With the above benchmark, where the driver version etc. is unclear, the 6800 is 33% faster in that RT test, but will that even be significant for RT at all? We need some real games that are optimized for both AMD and Nvidia RT for a better view.
 
There are many processing units per CU. How that breakdown works and how much of it is dedicated to RT, we don't know. It's not software, if that's what you're wondering.

**EDIT**
FROM ANAND:

"Ray tracing itself does require additional functional hardware blocks, and AMD has confirmed for the first time that RDNA2 includes this hardware. Using what they are terming a ray accelerator, there is an accelerator in each CU. The ray accelerator in turn will be leaning on the Infinity Cache in order to improve its performance, by allowing the cache to help hold and manage the large amount of data that ray tracing requires, exploiting the cache’s high bandwidth while reducing the amount of data that goes to VRAM."
Awesome, thanks. Yeah, I was wondering if it was software-based, so thanks for kicking that info my way.
 
Dirt 5 and the new tech being used for RDNA 2 GPUs: ray-traced shadows (very little performance hit); they did not talk about reflections, but that was indicated elsewhere by them; many AMD optimizations:

 
I look at it this way: what is more important, better rasterization per dollar or better RT per dollar? Better rasterization will help in just about every game. Better RT, assuming it is even usable and makes a noticeable difference, is only useful in RT games. Most Turing RTX owners never or seldom used RT, for many reasons: very poor performance, difficulty telling any difference, and having to degrade other settings and IQ to use it, giving an overall worse gaming experience. Anyway, I ended up with the conclusion that rasterization/$ is king.

With the above benchmark, where the driver version etc. is unclear, the 6800 is 33% faster in that RT test, but will that even be significant for RT at all? We need some real games that are optimized for both AMD and Nvidia RT for a better view.
Also seldom used because very few games have RTX features.
 
That game looks really nice. Definitely gonna try that when I get my new AMD card.
 
Dirt 5 and the new tech being used for RDNA 2 GPUs: ray-traced shadows (very little performance hit); they did not talk about reflections, but that was indicated elsewhere by them; many AMD optimizations:
Sony came out during their hardware presentation and said global illumination and shadows have very little performance impact on their hardware.
 
Sony came out during their hardware presentation and said global illumination and shadows have very little performance impact on their hardware.
Interestingly, Dirt 5's global illumination is done via voxel cone-traced bounce lighting on GPU compute.
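For anyone curious what that means in practice, here's a rough CPU-side sketch of the general idea (all names, parameters, and the toy scene are mine for illustration, not Codemasters' implementation): direct lighting gets injected into a voxel grid, the grid is mip-mapped, and then cones are marched through the mip chain, sampling a wider (higher-mip) footprint the farther they travel.

```python
# Minimal sketch of voxel cone-traced bounce lighting. Illustrative only; the
# grid contents, mip scheme, and step sizes are assumptions, not Dirt 5's code.
import numpy as np

def build_mips(voxels):
    """Build a simple mip chain by averaging 2x2x2 blocks (RGB radiance + occlusion)."""
    mips = [voxels]
    while mips[-1].shape[0] > 1:
        v = mips[-1]
        n = v.shape[0] // 2
        v = v[:2 * n, :2 * n, :2 * n].reshape(n, 2, n, 2, n, 2, 4).mean(axis=(1, 3, 5))
        mips.append(v)
    return mips

def sample_mip(mips, pos, level):
    """Nearest-neighbour sample of the mip chain at a position in [0, 1)^3."""
    level = int(min(level, len(mips) - 1))
    res = mips[level].shape[0]
    idx = np.clip((pos * res).astype(int), 0, res - 1)
    return mips[level][idx[0], idx[1], idx[2]]

def trace_cone(mips, origin, direction, aperture=0.6, max_dist=1.0):
    """March one cone: widen the footprint (higher mip) with distance and
    composite radiance front-to-back until the cone is mostly occluded."""
    radiance = np.zeros(3)
    occlusion = 0.0
    dist = 1.0 / mips[0].shape[0]            # start one voxel out to avoid self-hits
    while dist < max_dist and occlusion < 0.95:
        diameter = max(2.0 * aperture * dist, 1.0 / mips[0].shape[0])
        level = np.log2(diameter * mips[0].shape[0])
        s = sample_mip(mips, origin + direction * dist, level)
        radiance += (1.0 - occlusion) * s[3] * s[:3]
        occlusion += (1.0 - occlusion) * s[3]
        dist += diameter * 0.5               # step proportional to cone width
    return radiance

# Toy usage: a 32^3 grid with one bright emissive voxel, one cone traced upward.
grid = np.zeros((32, 32, 32, 4))
grid[16, 24, 16] = [1.0, 0.8, 0.6, 1.0]
mips = build_mips(grid)
print(trace_cone(mips, origin=np.array([0.5, 0.1, 0.5]), direction=np.array([0.0, 1.0, 0.0])))
```

The appeal for GI is that a handful of cones per pixel approximate an entire hemisphere of bounce lighting, which is why the performance hit is small compared to tracing many individual rays.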
 
Ray Tracing, Variable Rate Shading, FidelityFX CAS and Luminance Preserving Mapper


Impressive! Even at 4K there are good frame rates with RT at max settings and 12 GB of textures (sorry, 10 GB cards, lower those settings). VRS is also key in keeping performance up when using very complex scenes and rendering methods at high resolutions.
 
The Big Navi cards are impressive overall but not when it comes to ray tracing...they're taking a different route than Nvidia and using more of a targeted ray-tracing approach...performance is meh...they don't get outright beaten down by Nvidia, but it seems that if you're looking for the best RT performance then you need to stick with Nvidia

but what really sucks is that certain games will now be RT exclusive to Nvidia or AMD...starting with Godfall, which is an AMD timed exclusive
 
The Big Navi cards are impressive overall but not when it comes to ray tracing...they're taking a different route than Nvidia and using more of a targeted ray-tracing approach...performance is meh...they don't get outright beaten down by Nvidia, but it seems that if you're looking for the best RT performance then you need to stick with Nvidia

but what really sucks is that certain games will now be RT exclusive to Nvidia or AMD...starting with Godfall, which is an AMD timed exclusive

The cards just got released; I am pretty sure those games will need some tweaking to run as well on AMD hardware as they do on Nvidia hardware. Ray tracing is still a niche thing and most people really don't care; we're still years out from it even mattering for most people. Also, exclusives have been going on forever and will continue no matter what, since they're a marketing tool. I look forward to a competitive market rather than a monopoly, where I can choose what matters to me.
 
I haven't seen a lot of reviews yet about discernible image quality with RT on (if it ends up having any). I saw a lot of difference in Watch Dogs, but apparently AMD said they have driver issues with that particular game.
 
The cards just got released; I am pretty sure those games will need some tweaking to run as well on AMD hardware as they do on Nvidia hardware. Ray tracing is still a niche thing and most people really don't care; we're still years out from it even mattering for most people. Also, exclusives have been going on forever and will continue no matter what, since they're a marketing tool. I look forward to a competitive market rather than a monopoly, where I can choose what matters to me.
The thing is, if their RT code follows the DXR standard, it should run on both AMD and Nvidia. Sure, there could be some opportunity for optimization tweaks, but it should run. This is literally holding back a standard feature due to an exclusivity deal. Which is dumb and also hilarious, because Nvidia has had RTX for two years; AMD has had it for a day and now they are paying to keep it from Nvidia????

I'm an AMD fan but this is not a good look.
 
The Big Navi cards are impressive overall but not when it comes to ray tracing...they're taking a different route than Nvidia and using more of a targeted ray-tracing approach...performance is meh...they don't get outright beaten down by Nvidia, but it seems that if you're looking for the best RT performance then you need to stick with Nvidia

but what really sucks is that certain games will now be RT exclusive to Nvidia or AMD...starting with Godfall, which is an AMD timed exclusive
AMD has never in its life done feature exclusives. I really seriously doubt they are going to start now. It's a timed exclusive because it's going to consoles first.
 
"Brace yourself for ray tracing in new games being AMD or Nvidia exclusive"

https://www.pcgamer.com/ray-tracing-amd-nvidia-exclusivity/
That is the biggest load of shit I have heard in a while, and one of the biggest clickbait articles I have seen in a while. I think it has NVIDIA's fingerprints all over it, covering for its hardware currently being broken and not in line with the DX12 spec. A little PC birdy told me this morning that there have been several other "updates" offered to that article yesterday, but NVIDIA's is the only one that has been published.
 
That is the biggest load of shit I have heard in a while, and one of the biggest clickbait articles I have seen in a while. I think it has NVIDIA's fingerprints all over it, covering for its hardware currently being broken and not in line with the DX12 spec. A little PC birdy told me this morning that there have been several other "updates" offered to that article yesterday, but NVIDIA's is the only one that has been published.
DX12 ray tracing is working in other titles too, Watch Dogs and Dirt 5. Maybe those games had to use a non-DX12 API to make it work with Nvidia, but with how much partnering Godfall did with AMD, it could be this title wanting to push "runs better on AMD" to the maximum.
 
DX12 ray tracing is working in other titles too, Watch Dogs and Dirt 5. Maybe those games had to use a non-DX12 API to make it work with Nvidia, but with how much partnering Godfall did with AMD, it could be this title wanting to push "runs better on AMD" to the maximum.
My comment was aimed at Godfall alone. Please do not read more into it.
 
My comment was aimed at Godfall alone. Please do not read more into it.
Well, yes, but if Nvidia cards not respecting the DX12 ray tracing convention is the reason Godfall has issues working on them, why wasn't there any issue for Dirt 5 or Watch Dogs? They are not using proprietary Nvidia tech either, right?
 
Well, yes, but if Nvidia cards not respecting the DX12 ray tracing convention is the reason Godfall has issues working on them, why wasn't there any issue for Dirt 5 or Watch Dogs? They are not using proprietary Nvidia tech either, right?
Well, that would have to assume that RT in every game is done the exact same way with the same feature sets.
 
I think at this point AMD has no choice but to "GameWorks" their titles. AMD has tried for so long to be a good corporate citizen and push open standards. Where did it get them?
 
I think at this point AMD has no choice but to "GameWorks" their titles. AMD has tried for so long to be a good corporate citizen and push open standards. Where did it get them?
https://media.ycharts.com/charts/6a9877c1d25c67ae28b01a55a638f69d.png
 
Now look up market share.

CPU:
https://www.techspot.com/images2/news/bigimage/2020/11/2020-11-03-image-32.jpg

PC GPU:
https://www.statista.com/statistics/754557/worldwide-gpu-shipments-market-share-by-vendor/
As of the second quarter of 2020, Intel was the biggest vendor in the PC GPU market worldwide, occupying 64 percent of the market. AMD, which has shipped over 500 million GPUs since 2013, occupied 18 percent of the market, whilst Nvidia took a market share of 19 percent.

If we looked at all GPUs, including PS5/Xbox, it probably does not look bad at the moment.
 
Is the poor ray tracing performance something that can be improved on the 6800 series with time? Meaning, is this a fundamental hardware limitation, or something that can be improved with driver or software updates and optimizations to bring it more in line with the current RTX line?
 
Is the poor ray tracing performance something that can be improved on the 6800 series with time? Meaning, is this a fundamental hardware limitation, or something that can be improved with driver or software updates and optimizations to bring it more in line with the current RTX line?
Take a stab, guess, opinion -> Yes, Yes, Yes, Yes, No. I am not sure how different the performance will be once DXR 1.1 games are plentiful, what optimizations can be had for both architectures, and so on. Ampere does have great compute capability, which RT uses a lot of, with the double FP32 per SM for the compute portions of RT, plus the Tensor cores not only help with DLSS but also with denoising. I have yet to see a meaningful must-have or even significantly desired RT outcome in games; the added benefits are largely outweighed by the performance penalty incurred. On the flip side, AMD's cache should help tremendously for RT workloads, BVH-to-triangle intersection for light rays (a rough sketch of that test is below), and also compute. Basically, we have to wait and see.

The older RT games will be meaningless for judging the RT capability of RDNA2; only with the newest games, using DXR 1.1 and optimized for both vendors, will a clear picture of its capability be seen.
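For reference, the per-triangle test at the bottom of that pipeline is just a ray/triangle intersection. Here's a minimal sketch of the standard Möller-Trumbore version of that test (purely illustrative; it has nothing to do with AMD's actual hardware implementation):

```python
# Minimal ray/triangle intersection (Moller-Trumbore), the kind of test the ray
# accelerators evaluate after BVH traversal narrows down candidate triangles.
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the hit distance t, or None if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None      # hit only if in front of the origin

# Toy usage: a ray shot down the +z axis at a triangle sitting in the z=5 plane.
print(ray_triangle_intersect(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                             np.array([-1.0, -1.0, 5.0]), np.array([1.0, -1.0, 5.0]),
                             np.array([0.0, 1.0, 5.0])))
```

Both vendors do this test in fixed-function hardware; the bigger architectural differences are in how traversal is scheduled and how well the caches feed it, which is why real DXR 1.1 games are needed for a fair comparison.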
 
Yeah, definitely in Control the performance hit is worth it for the graphics.

Metro also. The scenes on the train with RT and HDR almost look real.
 
Discussion on reddit, someone commented the following about AMD's version of DLSS: "I think AMDLSS is going to be more an RDNA3 thing than an RDNA2 thing. RDNA2 doesn't have any dedicated hardware for running ML workloads, so any ML upsampling they do with this hardware is coming right out of your shader budget. They'll have something, but it probably won't be as good as DLSS."

I'm not knowledgeable enough on this -- anyone agree/disagree that the 6800 XT will not be able to match the 3080's performance on DLSS and RT even after some time has passed and drivers are more optimized?
 
Discussion on reddit, someone commented the following about AMD's version of DLSS: "I think AMDLSS is going to be more an RDNA3 thing than an RDNA2 thing. RDNA2 doesn't have any dedicated hardware for running ML workloads,
A. I don't think that is strictly correct. However, AMD's current design does seem to be built more around the shader cores, whereas Nvidia has the RT cores and Tensor cores basically separate.

B. Even if they do have to use some of your shader potential, I doubt it's going to be a meaningful impact. You could even look at the RTX 2060, which requires about 5 ms to run DLSS (1-2 ms for the bigger cards with more tensor cores). That sounds long, but when the overall benefit is around 30% more FPS in many cases, it doesn't really matter (rough numbers sketched below).

C. Additionally, the actual AI training happens on a supercomputer. What actually ends up being used by DLSS/DirectML will be a derivative based on data points from that training. I'm sure there are a number of ways that can be adapted to all the different resources a GPU has available (shader cores, compute architecture, RT accelerators, tensor cores, etc.).
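Rough numbers for point B (every frame time here is hypothetical; only the ~5 ms DLSS cost comes from above), just to show why a fixed few-millisecond upscaling pass barely matters:

```python
# Back-of-the-envelope illustration: rendering at a lower internal resolution
# saves far more frame time than the fixed upscaling pass costs. Frame times
# are made-up assumptions, not measurements.
native_4k_ms = 25.0       # hypothetical frame time rendering natively at 4K
internal_1440p_ms = 14.0  # hypothetical frame time rendering internally at 1440p
upscale_ms = 5.0          # roughly the RTX 2060 DLSS cost quoted above

fps_native = 1000.0 / native_4k_ms
fps_upscaled = 1000.0 / (internal_1440p_ms + upscale_ms)
print(f"native 4K: {fps_native:.0f} fps, upscaled: {fps_upscaled:.0f} fps "
      f"({100.0 * (fps_upscaled / fps_native - 1.0):.0f}% faster)")
```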
 
Discussion on reddit, someone commented the following about AMD's version of DLSS: "I think AMDLSS is going to be more an RDNA3 thing than an RDNA2 thing. RDNA2 doesn't have any dedicated hardware for running ML workloads, so any ML upsampling they do with this hardware is coming right out of your shader budget. They'll have something, but it probably won't be as good as DLSS."

I'm not knowledgeable enough on this -- anyone agree/disagree that the 6800 XT will not be able to match the 3080's performance on DLSS and RT even after some time has passed and drivers are more optimized?

That someone is stating the obvious. We know AMD doesn't have tensor cores, no shit! But if you read what the Xbox Series X has to say about RDNA 2, I would have to disagree with that post.

"Quantic Dream CEO David Cage pointed out that one of the biggest hardware advantages for the Xbox Series S and X consoles over the competition (chiefly, Sony's PS5) could be in their shader cores, reportedly more suitable for Machine Learning tasks thanks to hardware extensions allowing for up to 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations" so RDNA clearly has some tricks in shader core. I am not a big fan of dedicated hardware and fixed function. If you cook it in the shaders and make them better and smarter that tends to help long term in making chips leaner and meaner.
 
Discussion on reddit, someone commented the following about AMD's version of DLSS: "I think AMDLSS is going to be more an RDNA3 thing than an RDNA2 thing. RDNA2 doesn't have any dedicated hardware for running ML workloads, so any ML upsampling they do with this hardware is coming right out of your shader budget. They'll have something, but it probably won't be as good as DLSS."

I'm not knowledgeable enough on this -- anyone agree/disagree that the 6800 XT will not be able to match the 3080's performance on DLSS and RT even after some time has passed and drivers are more optimized?
No, this is incorrect. AMD will be implementing AI upscaling via DirectML; this will be their DLSS equivalent. From everything AMD has mentioned, it should come out sometime in 2021. It will not be restricted to RDNA3 because DirectML works via a DirectX shader. Technically, any GPU that supports DX12 will be able to use DirectML.
 
No, this is incorrect. AMD will be implementing AI upscaling via DirectML; this will be their DLSS equivalent. From everything AMD has mentioned, it should come out sometime in 2021. It will not be restricted to RDNA3 because DirectML works via a DirectX shader. Technically, any GPU that supports DX12 will be able to use DirectML.
But will it perform similarly to 3080 or quite a bit worse, you think?
 
But will it perform similarly to 3080 or quite a bit worse, you think?
Nobody can tell you that, because we haven't seen it used. Since it doesn't use tensor cores, it's a new thing to all of us. If I remember correctly, I think I've read it uses compute shaders (just google DirectML super resolution AMD Microsoft and you'll come up with stuff), but I'm not certain (toy sketch of the idea below). It *might* (TOTAL speculation) perform similarly or better on AMD, since they've had an edge on compute because their architecture is organized around more shader cores.

PS: there you go, found a decent link. The picture they show is pretty impressive, we'll see about the performance in the coming weeks/months.
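To make the "upscaling runs on compute" idea concrete, here's a toy sketch of the general shape of ML upsampling (the 3x3 kernel is a hand-made stand-in for learned weights; this is not Microsoft's DirectML super resolution, just an illustration of per-pixel work that would come out of the shader budget):

```python
# Toy ML-style upscaling: enlarge a low-res frame, then run a small filter over
# it to restore detail. The kernel stands in for learned network weights.
import numpy as np

def upscale_2x(img):
    """Nearest-neighbour 2x upscale of an HxW greyscale image."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def conv3x3(img, kernel):
    """Naive 3x3 convolution with edge clamping; on a GPU without ML hardware,
    this per-pixel work runs on the same ALUs as regular shading."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * kernel)
    return out

# Stand-in for learned weights: a mild sharpening kernel.
kernel = np.array([[ 0.0,  -0.25,  0.0 ],
                   [-0.25,  2.0,  -0.25],
                   [ 0.0,  -0.25,  0.0 ]])

low_res = np.random.rand(16, 16)          # pretend 16x16 rendered frame
restored = np.clip(conv3x3(upscale_2x(low_res), kernel), 0.0, 1.0)
print(restored.shape)                     # (32, 32) upscaled output
```

A real network would use many learned kernels and temporal data, but the key point stands: without dedicated matrix hardware, every one of those multiply-adds competes with shading work.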
 
But will it perform similarly to 3080 or quite a bit worse, you think?
Seeing as it's in the Xbox Series X, yeah, I think Microsoft is working with AMD to make it better. They don't want to miss out on the faster frame rates.
 
Video from AMD talking about MS DXR 1.1. Kinda dry, but for real comparisons between Nvidia and AMD RT, the application/game will really need DXR 1.1 (which adds inline ray tracing callable from any shader stage). In other words, I do not think DXR 1.0 applications will effectively use RDNA2's ray tracing ability, and most current DX12 RT games use DXR 1.0, while some Vulkan RT games use the Nvidia RT extension. Games developed for the next-gen consoles using RT will most likely be DXR 1.1 -> Dirt 5.

 
This is what I was hoping for: hardware RT support in ProRender 2. Looks really good in the different scene tests, and the speedup using hardware RT:

 