[WCCF] [IgorsLab] Alleged performance benchmarks for the AMD Radeon RX 6800 XT "Big Navi" graphics card have been leaked.

Yes, DXR is part of DirectX12 and is not tied to Nvidia. However, that doesn't mean that older DXR games will work optimally on AMD's implementation without additional work.
But that's just a fact of life; it's not unique to Ray Tracing. AMD will have to do the work to make sure their implementation works, just like they have to do at the moment with any normal game.

So it is not an automatic thing, even though ideally it should be.

It's not automatic for normal games, so why should it be for Ray Tracing? Or do we want PCs to basically become consoles?
 
Well, if the info that was floating around here recently about each company being better at a specific method of RT calculations (I believe it was box cubic vs trilinear) is true, it could make a big difference. It seems like Nvidia's plan is to push RTX like a GameWorks feature, but unless they make it replace the regular RT method on PC, it would simply put them on an even playing field when it is added. It might not even be that simple, though, since it sounded like the different methods were better at specific RT features as well, so it might not be a drop-in addition or replacement.

I think you’re confusing RT with image filtering. There’s no such thing as bicubic or trilinear RT.

RT has such a big hit on performance that developers are going to go the most efficient route possible to reach their goals. You are going to see RT used with a heavy reliance on things like global illumination, which is as close to free as you can get with RT, and shadows, which can be done somewhat well. Reflections are not worth the performance hit. AMD RT is going to be better in certain areas, and that is where they will use it.

Interesting. What exactly would make an RT implementation faster at shadows but slower at reflections? They both require a single RT bounce and both need to handle transparency. Global illumination is harder than both as you really want multiple bounces for a realistic effect.
 
Textures/models love cache, but geometry is new each frame. Are we sure a video card with 2x RX 5700 XT raster performance is going to be okay with that same old bus?

When you add ray tracing to the mix (very hard to cache, and it also requires higher bandwidth), suddenly you're going to have trouble using that cache effectively. I'm not buying that this architecture is designed for the future until we see how it handles today's RTX titles.

Showing synthetic benchmarks is taking the easy way out. I will also be amazed if this card is actually available at retail on Nov 05.
Nvidia has its own RT method and AMD has their own. You can't compare a lizard to a moon rock like you are. Wait till game devs write code optimized for AMD's method.


Ray tracing works by sending out rays that then interact with objects. Cache is absolutely vital in ray tracing because the faster it can obtain access to the information it needs to interact with, the fewer rays need to be sent out. The slower it has access to it, the more latency in the ray tracing, which results in very weird phenomena like missing objects in reflections and other aberrations. Modern ray tracing utilizes mapping to give a basic location of the objects that the rays need to interact with; this cuts down on the number of rays that are sent out, and thus the computational requirements. Also, cache frees up memory bandwidth for things that cannot be cached and need the memory bandwidth.
This is how I understand it.
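To make the "mapping" idea above a bit more concrete, here is a minimal, purely illustrative sketch of an acceleration structure (a BVH) being walked for one ray. All names and the node layout are hypothetical, not any vendor's actual implementation; the point is simply that every traversal step fetches node data from memory, which is the traffic a large on-chip cache would absorb.

```cpp
// Hypothetical, simplified BVH traversal sketch (not any vendor's actual code).
#include <cstdint>
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 lo, hi; };

struct BvhNode {
    Aabb bounds;
    int32_t left;      // index of left child, or -1 if this node is a leaf
    int32_t right;     // index of right child, or -1 if this node is a leaf
    int32_t firstTri;  // leaf only: first triangle index
    int32_t triCount;  // leaf only: number of triangles
};

// Slab test: does the ray hit this node's bounding box?
bool hitAabb(const Aabb& b, Vec3 o, Vec3 invDir, float tMax) {
    float tx1 = (b.lo.x - o.x) * invDir.x, tx2 = (b.hi.x - o.x) * invDir.x;
    float ty1 = (b.lo.y - o.y) * invDir.y, ty2 = (b.hi.y - o.y) * invDir.y;
    float tz1 = (b.lo.z - o.z) * invDir.z, tz2 = (b.hi.z - o.z) * invDir.z;
    float tmin = std::fmax(std::fmax(std::fmin(tx1, tx2), std::fmin(ty1, ty2)), std::fmin(tz1, tz2));
    float tmax = std::fmin(std::fmin(std::fmax(tx1, tx2), std::fmax(ty1, ty2)), std::fmax(tz1, tz2));
    return tmax >= std::fmax(tmin, 0.0f) && tmin <= tMax;
}

// Walks the tree; every iteration is another node fetched from memory,
// so keeping hot nodes in cache directly reduces how long a ray waits.
int countLeafTriangles(const std::vector<BvhNode>& nodes, Vec3 origin, Vec3 invDir) {
    int triangles = 0;
    std::vector<int32_t> stack{0};            // start at the root node
    while (!stack.empty()) {
        int32_t idx = stack.back(); stack.pop_back();
        const BvhNode& n = nodes[idx];        // one memory (or cache) access per node
        if (!hitAabb(n.bounds, origin, invDir, 1e30f)) continue;
        if (n.left < 0) { triangles += n.triCount; continue; }  // leaf: triangle tests happen here
        stack.push_back(n.left);
        stack.push_back(n.right);
    }
    return triangles;
}
```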
 
Interesting. What exactly would make an RT implementation faster at shadows but slower at reflections? They both require a single RT bounce and both need to handle transparency. Global illumination is harder than both as you really want multiple bounces for a realistic effect.

I would like to see an explanation too. I am not that well up on RT, but I thought reflections and shadows were easier to do than global illumination.
 
But if AMD's implementation is not using dedicated hardware and/or is sharing the standard compute units, then that would lead me to believe adjusting settings could make a bigger difference in balancing performance. This could be a big deal on the mid to lower end, even if the top card doesn't beat Nvidia outright.

We know from the patents that AMD's solution has dedicated hardware for doing RT calculations. They are using a repurposed TMU that can either do texturing or RT acceleration, but not both at the same time. Texturing is normally done at the end of the pipeline, with lighting and geometry at the start. This way the TMU does the BVH calculations at the start and then switches over to texturing at the end.
 
I think you’re confusing RT with image filtering. There’s no such thing as bicubic or trilinear RT.



Interesting. What exactly would make an RT implementation faster at shadows but slower at reflections? They both require a single RT bounce and both need to handle transparency. Global illumination is harder than both as you really want multiple bounces for a realistic effect.
Detail is not needed in global illumination. You can run the illumination at a lower frame rate, and lower resolution than the scene. You can map the scene and use an approximation instead of needing a single ray per pixel. Accurate detail of global illumination is simply not necessary, whereas if you want shadows, the level of detail needed increases, and then increases further with reflections.
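A minimal sketch of the approximation being described above: trace the GI at a fraction of the screen resolution and refresh it less often than the frame rate, then upsample, while shadows/reflections run at full rate. The structure and names here are hypothetical, just to show where the savings come from.

```cpp
// Hypothetical low-resolution, low-refresh-rate GI buffer sketch.
#include <vector>
#include <cstddef>

struct GiSettings {
    int scale = 4;          // GI buffer is (width/4) x (height/4)
    int refreshEveryN = 2;  // re-trace GI only every 2nd frame
};

struct GiBuffer {
    int w = 0, h = 0;
    std::vector<float> irradiance;  // one value per low-res texel (illustrative)
};

// Returns true on frames where GI rays are actually traced; on other frames
// the previous buffer is simply reused, which is why GI can be made cheap.
bool updateGi(GiBuffer& gi, int frameIndex, int screenW, int screenH, const GiSettings& s) {
    gi.w = screenW / s.scale;
    gi.h = screenH / s.scale;
    gi.irradiance.resize(static_cast<std::size_t>(gi.w) * gi.h);
    if (frameIndex % s.refreshEveryN != 0)
        return false;                       // reuse last frame's result
    // ... trace a small number of GI rays per low-res texel here ...
    return true;
}

// At 1920x1080 with scale=4 and refreshEveryN=2, the GI pass touches
// 480*270 = 129,600 texels, and only on half of the frames — far fewer
// than the ~2 million pixels a full per-pixel effect would need.
```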
 

The ones built into DX and Vulkan vs the RTX-specific implementation. From what I understand, Nvidia is using extensions to add RTX RT to them, though they can handle standard RT as well.

I think you are mixing up a lot of info. Nvidia helped develop the Software Ray Tracing API on Vulkan. That's a Ray Tracing API that will work on any GPU but won't have any hardware acceleration.

Currently there is no Ray Tracing built into DirectX; that's coming when DX12 Ultimate is released.
 
I think you are mixing up a lot of info. Nvidia helped develop the Software Ray Tracing API on Vulkan. That's a Ray Tracing API that will work on any GPU but won't have any hardware acceleration.

Currently there is no Ray Tracing built into DirectX; that's coming when DX12 Ultimate is released.
Not quite. DXR was an add-on for DirectX 12 and was released as an experimental SDK in 2018, which many games have shipped with (Control, Metro, etc.)

https://www.anandtech.com/show/12547/expanding-directx-12-microsoft-announces-directx-raytracing

DX12 Ultimate I guess is the official release, which will include AMD support as well as Nvidia.
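Since the DXR SDK mentioned above ships as a capability of the D3D12 device rather than as a vendor feature, a minimal sketch of how an engine would check for it looks like the following. This is the standard D3D12 capability query; it assumes a created device and the Windows SDK headers, nothing vendor-specific.

```cpp
// Minimal sketch: ask D3D12 whether hardware DXR is available on this device.
#include <d3d12.h>

bool SupportsHardwareDxr(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    // TIER_1_0 is the baseline DXR tier; TIER_NOT_SUPPORTED means no DXR,
    // regardless of whether the GPU is from Nvidia, AMD, or anyone else.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```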
 
Detail is not needed in global illumination. You can run the illumination at a lower frame rate, and lower resolution than the scene. You can map the scene and use an approximation instead of needing a single ray per pixel. Accurate detail of global illumination is simply not necessary, whereas if you want shadows, the level of detail needed increases, and then increases further with reflections.
Huh? Detail is absolutely paramount. Without it you might as well stick to rasterization. If it doesn't mesh with the background, it's going to stick out like a sore thumb.

Personally, I think that rasterization has gotten so good that I really don't see the draw for RT. It's too computationally heavy for very little gain.
 
Detail is not needed in global illumination. You can run the illumination at a lower frame rate, and lower resolution than the scene. You can map the scene and use an approximation instead of needing a single ray per pixel. Accurate detail of global illumination is simply not necessary, whereas if you want shadows, the level of detail needed increases, and then increases further with reflections.

What you seem to be saying is that lowering the quality of the global illumination makes it less GPU intensive than shadows or reflections? But can't you do the same with shadows and reflections too: render them lower than the scene, control the number of rays, etc.? I am pretty sure I remember reading an Nvidia whitepaper that said global illumination was more performance intensive than reflections.

But maybe that was in specific situations.
 
You can do tricks and optimizations on anything (reflections, lighting, shadows, etc.) but global illumination needs multiple bounces to look good, which quickly adds up when you are talking about shooting out huge numbers of rays.

Reflections and shadows only really need one collision in the base case (unless you want a hall of mirror effect, but even in Control they limit to 1 bounce on the glass).
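Rough back-of-the-envelope arithmetic for the point above, showing how the per-frame ray count grows with bounce depth. The resolution and per-pixel budget here are made-up illustrative numbers, not measurements from any game.

```cpp
// Illustrative only: rays per frame for single-bounce effects vs multi-bounce GI.
#include <cstdio>
#include <cstdint>

int main() {
    const std::uint64_t pixels = 2560ull * 1440ull;   // assumed render resolution
    const std::uint64_t samplesPerPixel = 1;          // typical real-time budget

    std::uint64_t shadowRays     = pixels * samplesPerPixel * 1;  // 1 bounce
    std::uint64_t reflectionRays = pixels * samplesPerPixel * 1;  // 1 bounce (no hall of mirrors)
    std::uint64_t giRays2Bounce  = pixels * samplesPerPixel * 2;  // each extra bounce extends every path
    std::uint64_t giRays4Bounce  = pixels * samplesPerPixel * 4;

    std::printf("shadows:        %llu rays/frame\n", (unsigned long long)shadowRays);
    std::printf("reflections:    %llu rays/frame\n", (unsigned long long)reflectionRays);
    std::printf("GI (2 bounces): %llu rays/frame\n", (unsigned long long)giRays2Bounce);
    std::printf("GI (4 bounces): %llu rays/frame\n", (unsigned long long)giRays4Bounce);
}
```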
 
I think you are mixing up a lot of info. Nvidia helped develop the Software Ray Tracing API on Vulkan. That's a Ray Tracing API that will work on any GPU but won't have any hardware acceleration.

Currently there is no Ray Tracing built into DirectX; that's coming when DX12 Ultimate is released.
I realize that Nvidia was more involved with RT in Vulkan but from what I've read they're still using an extension for RTX RT in it.

I certainly could be mixing some things up since I've been on info overload mode with this stuff for a couple months now trying to get up to speed on the new tech.
 
Sure but that doesn’t have anything to do with the post I responded to.
Well, I'm agreeing with you on the original point (sorry if that wasn't clear).

Merely showing that AMD was not starting from nothing once Nvidia's plans were known.
 
Not quite. DXR was an add-on for DirectX 12 and was released as an experimental SDK in 2018, which many games have shipped with (Control, Metro, etc.)

https://www.anandtech.com/show/12547/expanding-directx-12-microsoft-announces-directx-raytracing

DX12 Ultimate I guess is the official release, which will include AMD support as well as Nvidia.

Yes, I know that.

If you read the posts from the guy I was responding to, you will understand my response. He thinks that there is a standard version of Ray Tracing in DXR and Vulkan that can run games on any GPU.

There is in Vulkan, but not in DirectX until DX12 Ultimate is released.
 
I realize that Nvidia was more involved with RT in Vulkan but from what I've read they're still using an extension for RTX RT in it.

Yes, because Nvidia still has graphics cards capable of doing Ray Tracing.

But Khronos also developed the VK_KHR Ray Tracing API, which is a software based API that will work on all GPUs.

And when AMD release their Ray Tracing solution, they will need to create a Vulkan extension to work with their hardware.
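For reference, a minimal sketch of how an application checks whether a Vulkan driver exposes the Khronos ray tracing extension being discussed. The extension naming changed during standardization, so this assumes the finalized VK_KHR_ray_tracing_pipeline name rather than the earlier provisional VK_KHR_ray_tracing.

```cpp
// Minimal sketch: does this Vulkan device advertise the cross-vendor KHR ray tracing extension?
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool hasKhrRayTracing(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());

    for (const VkExtensionProperties& e : exts)
        if (std::strcmp(e.extensionName, "VK_KHR_ray_tracing_pipeline") == 0)
            return true;   // driver exposes the Khronos extension
    return false;          // older drivers may only expose the NV-specific extension
}
```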
 
I can only hope that AMD's RT implementation is going to be platform agnostic and A) is able to make use of titles with Nvidia 2000/3000-style RT with a minimal (if any) performance penalty, and B) will offer a competing platform- and OS-agnostic RT setup that is more open (source and spec), which will in time become the favored methodology. Nvidia seems to base their tech, from PhysX to GameWorks to G-Sync to CUDA, on a proprietary NV-only platform, which is quite frustrating, whereas AMD seems to choose the open standard (FreeSync etc.) instead, a practice I like to support. However, I also want AMD's open alternatives to be comparable or better.

Nvidia has been sitting on RT tech for 2 generations now, so it's of pivotal importance that AMD RDNA2 comes onto the scene with an answer proving that their open way of doing things is at least comparable. This is not just RT support and performance, but also features like DLSS that depend upon having RT hardware support. Let's hope for the best, but even from what we see here, things seem promising.
Both AMD's and Nvidia's RT implementations are agnostic, as both are based on the version in DX12U that was built by Microsoft in conjunction with Nvidia. Nvidia's RTX was based on Microsoft's early releases, which were refined and rolled into DX12 as an official release, and that is what both new products are basing their implementations on. DLSS, on the other hand, was developed completely in house by Nvidia, and they have spent a lot of time getting its API as simple and refined as possible. If Nvidia is able to keep it in use consistently for the next 2 years, then I can see AMD releasing their version of it eventually, but I don't see it being something AMD has working this gen, and possibly not next either.
 
Yes, because Nvidia still has graphics cards capable of doing Ray Tracing.

But Khronos also developed the VK_KHR Ray Tracing API, which is a software based API that will work on all GPUs.

And when AMD release their Ray Tracing solution, they will need to create a Vulkan extension to work with their hardware.
Here’s hoping, but I’m not sure we will see it in the near future.
 
Well, I'm agreeing with you on the original point (sorry if that wasn't clear).

Merely showing that AMD was not starting from nothing once Nvidia's plans were known.
Yeah there has been a ton of ray tracing research done by AMD, Intel and Nvidia over the last 30 years. It’s old ass tech.
 
You can do tricks and optimizations on anything (reflections, lighting, shadows, etc.) but global illumination needs multiple bounces to look good, which quickly adds up when you are talking about shooting out huge numbers of rays.

Reflections and shadows only really need one collision in the base case (unless you want a hall of mirror effect, but even in Control they limit to 1 bounce on the glass).

That's the way I was thinking too. I must see if I can find it, but I do remember some website doing a performance analysis on Control. Global illumination was the most intensive, followed by reflections, then shadows.

I don't know enough about Ray Tracing, so if someone can illuminate me, I would appreciate it :)
 
AMD releasing their version of it eventually, but I don't see it being something AMD has working this gen, and possibly not next either.


AMD are releasing their version of DLSS early next year according to the latest leaks.
 
What you seem to be saying is that lowering the quality of the global illumination makes it less GPU intensive than shadows or reflections? But can't you do the same with shadows and reflections too: render them lower than the scene, control the number of rays, etc.? I am pretty sure I remember reading an Nvidia whitepaper that said global illumination was more performance intensive than reflections.

But maybe that was in specific situations.
You are maximizing the image quality while minimizing the number of rays produced. All forms of modern ray tracing do this. There will never be a ray tracing effect that uses a full single ray per pixel with unlimited bounces. They will all be limited. Lighting simply requires the fewest rays to provide a visually appealing image. Illuminating a static object with very little change needs very few rays per frame. Whereas if you take a reflected object, you need a ray per pixel of the model you are reflecting; if you lower that number, the resolution is decreased. So the higher the resolution of the model, the more rays you need. Spider-Man on PS5 is a perfect example: the reflections look like a terrible pixelated mess with missing objects.
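Quick illustrative arithmetic for the trade-off described above: reflection rays scale with the resolution you trace the reflection at, so cutting that resolution is exactly what produces the pixelated look being described. The output resolution here is an assumed example, not a measured figure.

```cpp
// Illustrative only: reflection ray count at full, half, and quarter resolution.
#include <cstdio>
#include <initializer_list>

int main() {
    const long long fullW = 3840, fullH = 2160;                  // assumed output resolution
    for (int divisor : {1, 2, 4}) {                              // full, half, quarter res reflections
        long long rays = (fullW / divisor) * (fullH / divisor);  // ~1 ray per reflection texel
        std::printf("1/%d res reflections: %lld rays per frame\n", divisor, rays);
    }
}
```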
 
Personally, I think that rasterization has gotten so good that I really don't see the draw for RT. It's too computationally heavy for very little gain.
RT is the beginning of a new era, while rasterization is approaching its end. I would also think that RT takes less development time than figuring out how to do rasterization tricks.
 
Detail is not needed in global illumination. You can run the illumination at a lower frame rate, and lower resolution than the scene. You can map the scene and use an approximation instead of needing a single ray per pixel. Accurate detail of global illumination is simply not necessary, whereas if you want shadows, the level of detail needed increases, and then increases further with reflections.
For simple one-bounce diffuse global illumination, you're right that it requires fewer rays than high-quality reflections. However, proper global illumination requires multiple bounces, which ramps up complexity significantly. You can get away with one bounce for shadows and reflections.
 
Yes, because Nvidia still has graphic cards capable of doing Ray Tracing.

But Khronos also developed the VK_KHR Ray Tracing API, which is a software based API that will work on all GPUs.

And When AMD release their Ray Tracing solution, they will need to create a Vulkan extension to work with their hardware.
I did a little digging and it does look like I was confused on the RTX label. :oops: I was thinking it was their GameWorks-specific implementation of RT (among other things), but I see it just refers to their hardware-based implementation. I think where I got confused was reading about things like the denoiser module that is (or is supposed to eventually be) part of the GameWorks SDK and uses RTX, though it looks like it's just for DX.

I should add that I think Nvidia will still have the leg up in RT this gen regardless, simply because they're on their second gen of hardware-based RT. My original comment was mainly to point out that I had read recently that the two hardware implementations each favored a different method of implementing RT (I think it came down to each being better at certain RT features), which could be a factor worth considering when looking at the impact of the new consoles on game development. I'm also almost certain I got the terms for them wrong, and after reading a couple more things today I'm thinking it might have been bounding box vs something.
 
RT is the beginning of a new era, while rasterization is approaching its end. I would also think that RT takes less development time than figuring out how to do rasterization tricks.
It does. It is also leading to the death of blue- and green-screen effects in the TV and movie industry. Having RT effects manage the lighting frees up a significant portion of texture-effects and in-game lighting work. Additionally, because the textures require less work and contain less data, it leads to a pretty large reduction in GPU memory usage. Pairing RT effects with the direct-access APIs gives developers a lot more to work with.
 
Both AMD's and Nvidia's RT implementations are agnostic, as both are based on the version in DX12U that was built by Microsoft in conjunction with Nvidia. Nvidia's RTX was based on Microsoft's early releases, which were refined and rolled into DX12 as an official release, and that is what both new products are basing their implementations on. DLSS, on the other hand, was developed completely in house by Nvidia, and they have spent a lot of time getting its API as simple and refined as possible. If Nvidia is able to keep it in use consistently for the next 2 years, then I can see AMD releasing their version of it eventually, but I don't see it being something AMD has working this gen, and possibly not next either.
All DLSS does is use an outside supercomputer to do upscaling of a game, and then allow the GPU to utilize the locally stored data that's already been rendered to upscale the image. There is essentially an equivalent upcoming for DX12 called DirectML.
 
All DLSS does is use an outside supercomputer to do upscaling of a game, and then allow the GPU to utilize the locally stored data that's already been rendered to upscale the image. There is essentially an equivalent upcoming for DX12 called DirectML.

DirectML is equivalent to CUDA; it's just a framework. DLSS is a neural network trained using CUDA.

Is Microsoft training image upscaling models and including them as part of DirectML? Or did you mean game developers will use DirectML to train similar models to DLSS?
 
I take WCCFtech about as seriously as I take my horoscope. Fun to look at, and the odd time it sort of lines up with reality out of coincidence, but I don't ever take it seriously. Just wait until actual benchmarks come out.
 
For simple one bounce diffuse global illumination you’re right that it requires fewer rays than high quality reflections. However proper global illumination requires multiple bounces which ramps up complexity significantly. You can get away with one bounce for shadows and reflections.
Apparently you don't want to take my word for it that it's a lot more complicated than that. Global illumination allows far more approximation than shadows and reflections. Should I again post the image I posted earlier? It's the scale of resources ray tracing takes up, from least difficult (sound) to most difficult (full ray tracing). It's from Sony's PS5 presentation on ray tracing.
 
Yes, because Nvidia still has graphics cards capable of doing Ray Tracing.

But Khronos also developed the VK_KHR Ray Tracing API, which is a software based API that will work on all GPUs.

And when AMD release their Ray Tracing solution, they will need to create a Vulkan extension to work with their hardware.

The NV extension was a proof of concept/example; it's done and has served its purpose.

All major vendors will implement VK_KHR_ray_tracing. Nvidia is obviously the only one with driver (and hardware) support as it stands.
 
All DLSS does is use an outside supercomputer to do upscaling of a game, and then allow the GPU to utilize the locally stored data that's already been rendered to upscale the image. There is essentially an equivalent upcoming for DX12 called DirectML.
Not quite: they use an outside supercomputer to train and refine the AI. That AI is then implemented locally on your machine and run locally on your GPU. Your computer isn't phoning out to some GPU cluster on the internet and replacing the textures; it's doing the work on the fly, locally on your machine.

But yes, just as Nvidia took Microsoft's DXR 1.1 and launched it as RTX, for it to be re-released by Microsoft as 2.0 in DX12U, they are doing the same with DLSS and DirectML; when that launches, AMD, following the DX12U spec, will get access to it there. But there is no ETA on DirectML from Microsoft last I checked, just that it's supposed to work with the new Xbox, and nothing has been announced using it.
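To illustrate the split being described here (offline training on a cluster, local per-frame inference on the player's GPU), a purely conceptual sketch follows. Every name in it (NetworkWeights, runUpscaler, etc.) is hypothetical; this is not the DLSS or DirectML API, just the shape of the workflow.

```cpp
// Conceptual only: training happens offline, inference happens locally every frame.
#include <vector>
#include <cstdint>
#include <cstddef>

struct NetworkWeights { std::vector<float> data; };   // shipped with the driver/game, not learned locally
struct Frame { uint32_t w, h; std::vector<float> rgb; };

// Offline, done once per model on the training cluster — never on the player's PC.
NetworkWeights trainOffline(/* thousands of high-resolution reference frames */);

// Online, done every frame on the player's GPU: fixed weights in, upscaled frame out.
Frame runUpscaler(const NetworkWeights& w, const Frame& lowRes, const Frame& motionVectors)
{
    Frame out{lowRes.w * 2, lowRes.h * 2, {}};
    out.rgb.resize(static_cast<std::size_t>(out.w) * out.h * 3);
    // ... evaluate the fixed network here; no training/learning happens locally ...
    (void)w; (void)motionVectors;
    return out;
}
```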
 
Not quite: they use an outside supercomputer to train and refine the AI. That AI is then implemented locally on your machine and run locally on your GPU. Your computer isn't phoning out to some GPU cluster on the internet and replacing the textures; it's doing the work on the fly, locally on your machine.
I never said it was phoning out. There is no AI running on your local computer, just data generated by an AI. AIs "learn"; the data on your computer is just an algorithm specific to the game you play. The AI is running on the computer that generated the upscaling algorithm. It's all just buzzwords used by Nvidia.
 
No, it's running on your machine. That is the whole point.

How else could it work when everything is dynamic?
 