AMD RDNA 2 gets ray tracing

Unlike consoles, PC graphics APIs don't have support for explicit cache management. If AMD does go this route, they may have to work some driver magic for each game to ensure the right data stays in cache for maximum reuse.
Why would it need to? The graphics drivers compile the DX/Vulkan API calls to control the hardware. The most-used shaders and the BVH can be cached, and there is already a shader cache that allows faster loading of games. The drivers have knowledge of and priorities over what gets cached, so extending this into a more robust on-chip cache that addresses the most memory-intensive operations seems logical. HBCC analyzes what VRAM content is needed for a game and keeps what is needed local. We probably have to wait until October anyway to see if this is true, but it looks like it.
 
HBCC sits between VRAM and the hard drive and mostly deals with managing texture data to avoid running out of VRAM. The on-chip cache will be most useful for transient buffers rendered on the GPU and helps reduce VRAM bandwidth usage.

Each game creates and uses buffers differently. For the most effective usage of the cache, they will need per-game optimization or some really swanky heuristics to automagically do the right thing for each game.
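Purely as an illustration of what such a heuristic could look like (this is not AMD's actual driver logic; the buffer names, sizes, and the LRU policy are all made up for the example), here is a toy residency policy that decides which transient render targets stay in a fixed-size on-chip cache:

```cpp
#include <cstddef>
#include <iostream>
#include <list>
#include <string>
#include <unordered_map>

// Toy model of an on-chip cache residency policy (illustrative only):
// keep the most recently touched render targets resident, evicting the
// least recently used ones when the cache budget is exceeded.
class OnChipCachePolicy {
public:
    explicit OnChipCachePolicy(std::size_t budgetBytes) : budget_(budgetBytes) {}

    // Called whenever the GPU reads or writes a buffer this frame.
    void Touch(const std::string& name, std::size_t sizeBytes) {
        auto it = lookup_.find(name);
        if (it != lookup_.end()) {
            used_ -= it->second->size;   // drop the old accounting
            lru_.erase(it->second);
        }
        lru_.push_front({name, sizeBytes});  // most recently used goes to the front
        lookup_[name] = lru_.begin();
        used_ += sizeBytes;
        Evict();
    }

    bool IsResident(const std::string& name) const {
        return lookup_.count(name) != 0;
    }

private:
    void Evict() {
        while (used_ > budget_ && !lru_.empty()) {
            const auto& victim = lru_.back();  // least recently used
            used_ -= victim.size;
            lookup_.erase(victim.name);
            lru_.pop_back();                   // conceptually spills back to VRAM
        }
    }

    struct Entry { std::string name; std::size_t size; };
    std::list<Entry> lru_;
    std::unordered_map<std::string, std::list<Entry>::iterator> lookup_;
    std::size_t budget_ = 0;
    std::size_t used_ = 0;
};

int main() {
    OnChipCachePolicy cache(128ull * 1024 * 1024);   // hypothetical 128 MB budget
    cache.Touch("gbuffer_albedo", 32ull * 1024 * 1024);
    cache.Touch("gbuffer_normals", 32ull * 1024 * 1024);
    cache.Touch("shadow_map", 64ull * 1024 * 1024);
    cache.Touch("hdr_color", 64ull * 1024 * 1024);   // over budget: oldest targets get evicted
    std::cout << cache.IsResident("gbuffer_albedo") << "\n";  // prints 0: evicted
}
```

The interesting part is exactly what the post says: a fixed policy like this LRU works fine for some frame graphs and badly for others, which is why per-game tuning or smarter heuristics would matter.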
 
Then maybe they have some software breakthroughs similar to the new consoles, which claim on average 2.5x efficiency in I/O. Something like Sampler Feedback Streaming.
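For context, the idea behind sampler-feedback-style streaming is to load only the texture mips the GPU actually sampled instead of whole textures. A rough conceptual sketch (this is not the real D3D12 sampler feedback API; the types, sizes, and feedback values here are invented for illustration):

```cpp
#include <iostream>
#include <set>
#include <vector>

// Conceptual sketch of sampler-feedback-style streaming (not a real API):
// the GPU records which mip levels were actually sampled last frame, and the
// streaming system loads only those mips from disk instead of the whole texture.
struct Texture {
    int mipCount = 0;
    std::set<int> residentMips;  // mips currently in VRAM
};

// Pretend this came back from a GPU "feedback" pass: per texture, the finest
// mip level that was actually needed when rendering the last frame.
std::vector<int> ReadBackFeedback() { return {2, 0, 5}; }

std::size_t MipSizeBytes(int mip) {
    // Hypothetical 4K RGBA texture: each finer mip is 4x larger than the next.
    return (4096ull >> mip) * (4096ull >> mip) * 4;
}

int main() {
    Texture t;
    t.mipCount = 12;
    std::vector<Texture> textures(3, t);
    std::size_t bytesStreamed = 0;

    std::vector<int> neededMip = ReadBackFeedback();
    for (std::size_t i = 0; i < textures.size(); ++i) {
        // Stream in only the mips from the needed level down to the coarsest,
        // skipping anything already resident.
        for (int mip = neededMip[i]; mip < textures[i].mipCount; ++mip) {
            if (textures[i].residentMips.insert(mip).second)
                bytesStreamed += MipSizeBytes(mip);
        }
    }
    std::cout << "streamed " << bytesStreamed / (1024 * 1024) << " MiB\n";
}
```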

I just don't see AMD getting BIIIIIG NAAAAAVVVI performance with the 256-bit GDDR6 bus which early leaks suggest.
Why not? Do you know how eDRAM works? Have you heard of the 5775C? I remember back then, just overclocking it by 200 MHz gave a bump in performance in games of between 29.6% and 31.3%...
 
That per-game optimization is what the engines are for. Yes, you're correct that the developer may need to get specific in their use of the cache, but I think the MS console has the same eDRAM; if so, then it's not that big of a deal. In fact, this is the first generation where current video cards match the consoles.

The last consoles were more about multithreading on the CPU. This time I think it's more about the GPU.
 
DirectX and Vulkan don't expose functions for explicitly managing cache, so there isn't much game devs can do without updates to those APIs. This would have to be handled by intelligent hardware and software from AMD.
 
They don't need to. Many hardware-specific functions are included in game engines that add additional functionality, from DLSS to PhysX. If there's engine support then it very well could be easily accessed, regardless of whether that's console or PC.
 
How do you think game engines get access to "hardware specific functions"?
 
If you think it's all through Vulkan or DirectX you would be mistaken. Nor does every developer make their own engine to support them. Unreal Engine in particular supports all sorts of APIs outside of DirectX/Vulkan. More accurately, I'd say in tandem.
 
No, I don't think that. Yes, the IHVs provide proprietary libraries to access functionality not exposed in DirectX/Vulkan.

You're saying the same thing I said several posts ago. AMD either has to provide an API to devs or add game-specific logic to the driver for each game.
 
Uh... no? That's what happened in the 90s; there's a reason we have "universal" (more or less) APIs now. API = application programming interface, the "path" through which hardware interfaces with an application by being programmed in a specific way. Whatever proprietary libraries anyone wants to use, they'll be shipped with the game, but it shouldn't need any driver help from AMD/Nvidia (they might choose to publish optimized drivers for better performance, but it's not necessary at all), and they certainly don't "provide an API to devs"; the APIs are controlled by Microsoft (DirectX) or Khronos (Vulkan). Why would any dev voluntarily choose not to use DirectX/Vulkan functionality and make it harder for anyone to run their game with "plug-n-play" (so to speak) ease? I mean, can you give me an example of any functionality that doesn't go through one of the two main APIs? Otherwise I don't quite understand what you're getting at with your post.
 

Whoa, punctuation exists for a reason.

You seem to not be following the conversation. When new hardware capabilities are brought to market, they must be exposed by an API. Often DirectX and Vulkan don't support the functionality initially, or ever, so proprietary extensions are necessary.

One recent example is Nvidia's proprietary VK_NV_ray_tracing extension, which enabled ray tracing in Vulkan before the feature was formally adopted into the core API. Then there's NVAPI, which is Nvidia's core extension lib.
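As a rough sketch of how an engine opts into a vendor Vulkan extension like that (assuming a VkPhysicalDevice and a queue create info have already been set up elsewhere; error handling and the extension's own dependency extensions are omitted), the extension is enumerated and then requested at device creation:

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Check whether the driver advertises a vendor extension (here VK_NV_ray_tracing)
// and, if so, request it when creating the logical device.
bool HasExtension(VkPhysicalDevice gpu, const char* name) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> props(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, props.data());
    for (const auto& p : props)
        if (std::strcmp(p.extensionName, name) == 0) return true;
    return false;
}

VkDevice CreateDeviceWithRayTracing(VkPhysicalDevice gpu,
                                    const VkDeviceQueueCreateInfo& queueInfo) {
    std::vector<const char*> extensions;
    if (HasExtension(gpu, "VK_NV_ray_tracing"))
        extensions.push_back("VK_NV_ray_tracing");   // vendor extension, pre-KHR

    VkDeviceCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    info.queueCreateInfoCount = 1;
    info.pQueueCreateInfos = &queueInfo;
    info.enabledExtensionCount = static_cast<uint32_t>(extensions.size());
    info.ppEnabledExtensionNames = extensions.data();

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(gpu, &info, nullptr, &device);
    return device;
}
```

The point being: the extension ships in the driver, but the game or engine still has to know about it and ask for it, which is exactly the "proprietary library/extension" path being discussed.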

I don't know where you got the idea that extensions died in the '90s. Might want to check your sources.
 
Aaaaah, apologies, I misunderstood what was being said. Yes, indeed something like VK_NV_ray_tracing was added to the API, and that is indeed something Nvidia would distribute via drivers. I guess I don't think of that as an "extension" anymore, I just think of it as one more API (isn't that what it is?). When I think of extensions, I think of all the little GL additions we received per-game in the 90s, or when Glide got a few new tricks in game X or Y. Then again, I guess those were parts of the API just like VK_NV_ray_tracing is. Do we just call them different things now (a small API vs. an extension), or am I mistaken?
 
I think it's pretty common to call them extensions nowadays, for OpenGL and Vulkan at least. And both the standard libraries like DirectX and the proprietary extensions are considered APIs (which they are).
 
According to the latest "whispers" going around, seems like AMD might actually give us even more VRAM:

[attached screenshot of rumored AMD VRAM configurations]


Given the other rumors of higher-VRAM GPU versions from Nvidia, I'm inclined to believe these are probably accurate. Having more VRAM, if it's for the same or a similar price, might win me around this time. I can't tell you the number of times I've run out of memory and it's hurt performance (granted, I'm on a 1060 3GB). Considering the higher levels of texture and world detail and the ray tracing that'll happen in next-gen games, VRAM could become even more important than it is now.
 
Crysis Remastered pegs the 5700 XT's 8 GB at 1440p; sometimes it shows the typical out-of-VRAM signs with dramatic frame time hitches, which can be corrected by going into the settings, changing something and going back to where you were. This is with Very High settings, no motion blur and Performance RT. On the Vega FE, the same place that had the problems went up to 9 GB of VRAM at 3440x1440, and I've seen as high as 9.8 GB of VRAM used so far. The game looks like it actually uses the VRAM it allocates: if you turn down the settings, VRAM usage automatically drops to less than 5 GB even though I have 16 GB on the Vega FE, so it's not just filling up whatever VRAM is available. The game uses 8K textures where you can see the veins on leaves when you get close. Amazing looking game with Very High settings. I have not tested the "Can it run Crysis" settings much for VRAM usage.
 
Navy Flounder is the midrange part that will more likely compete with the 3060(?). Sienna Cichlid is the one we're all thinking about, hoping it comes out swinging against the 3080. Also, these are awful codenames.


Couldn't agree more on the AMD code names, horrible.

Hope performance is competitive, but given the launch is the day AFTER their quarterly report, I am somewhat doubtful; the implications are clear. It doesn't scream Nvidia Killer!
 
Can Big Navi even compete with the 3080, or is it meant to compete against the 3070? Reportedly 80 compute units and a 256-bit bus; the lower-end card will be 40 compute units and a 192-bit bus.
Apparently they are using some kind of console speed-up tech with increased on-board cache. Not sure how well this will play out in actual game FPS.
Also, it will be interesting to see how AMD's ray tracing stacks up.

I hope team red is competitive but I'm not very hopeful right now. They have let us DOWN so many times in the GPU arena!
 
They have let us down before, but they've also had cards that were good that weren't well received.

For example, the RX 580/570 (or even 480/470) were pretty well priced and competed with Nvidia in that segment, but didn't seem to get a lot of sales.

Also, the Radeon VII was actually a really nice card. You still see people today claim AMD hasn't beaten the 1080 Ti, when they have. Most people don't even talk about or notice the Radeon VII.

So, I think the bigger problem is that AMD can (and most likely will) offer a competitive alternative to the 3070/3080 and people still might not buy it.
 
The problem with the Radeon VII was that they only made very limited numbers. It was just to say that they made the first 7nm gaming GPU. It was more of a PR exercise than any real attempt to win market share.
 
Seems like they're going all out to make sure there is no hype machine this time, from the lack of leaks to the code names, lol.
 
Well they were very effective in shutting down any further leaks on the "Nvidia Killer" GPU. :)
 
This is true: AMD could offer a competitive card and people still might not buy it. Though I sell PCs and components over the shelf and have sold (and still sell) a TON of RX 580s and RX 570s (they're still rock-solid and the best GPUs for their price), the truth is that Nvidia could take a bloody backed-up dump in a box, slap a GeForce logo on it, charge $1200 for it, and still sell them faster than AMD could give away their cards for free.

That said, Nvidia isn't lazy when it comes to the technology they sell. They keep producing fast, industry-defining products, and that is what has earned them their mindshare. AMD does not need a halo-beating super-product; they need three or four generations of industry-defining products before people will trust them like they do Nvidia.
 
I really like this lack of hype, as to me it's a potentially good sign.... AMD has a long history of hyping subpar products. Maybe, just maybe, they have something decent this time?
 
I'm hopeful, but not looking to buy day one or anything. I'll wait to see where the dust settles after all the cards from both companies are out for a little while.
 
Will Big Navi be more gaming-focused compared to Ampere?...and will it be better for 1440p compared to Ampere?...Nvidia seemed to overload their new cards with compute performance vs. strictly gaming, and they seem best served at 4K resolution...
 
4K probably looks better because it removes the CPU bottleneck.

But they are very different beasts, so we'll have to wait for reviews to see how they compare.
 
So we've come full circle, and that's why I mentioned the problem of not knowing how much of a scene is ray traced and just blindly assuming that ray tracing was the reason. Not unless you are literally telling everyone here that the only games that look good are the ones that have Nvidia RT support.

One thing I really like about ray tracing is the deeper black representation in scenes. It reminds me of that delicious black that CRTs used to recreate so well, deepening the immersion.

It will probably be comparable.

AMD seems to be one step behind Nvidia, so beating the 2080 Ti is possible. Beating Nvidia's new best is probably out of the question.

But I would like to be surprised.
If AMD can come out with a chip within 10% of the 3080, at a better price, they are going to have a hell of a card. Add the increased financial commitment from the last 3 years of Ryzen revenue being funneled in, and they are going to be able to recommit to their GPU division like never before.
 
Those deeper blacks are one of the real benefits of true global illumination. No need for a fake "ambient light" that lights up parts of the scene that shouldn't be getting any light. It gives that nice depth and contrast, like OLED vs. LCD.

I think Metro Exodus is the only game that tries to do proper GI, but it's far from perfect.
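To make the "fake ambient light" point concrete, here is a minimal shading sketch (illustrative only, not any particular engine's code): the traditional path adds a constant ambient term so nothing goes fully black, whereas a GI solution replaces that constant with irradiance actually gathered from the scene, letting fully occluded areas stay genuinely dark.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 Scale(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3 Add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 Mul(const Vec3& a, const Vec3& b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }

// Classic rasterized shading: a constant ambient term keeps shadowed areas from
// going fully black, even in spots that should receive no light at all.
Vec3 ShadeWithFakeAmbient(const Vec3& albedo, const Vec3& normal,
                          const Vec3& lightDir, const Vec3& lightColor) {
    const Vec3 ambient = {0.08f, 0.08f, 0.08f};           // the "fake" floor
    float ndotl = std::max(Dot(normal, lightDir), 0.0f);  // Lambert diffuse
    return Mul(albedo, Add(ambient, Scale(lightColor, ndotl)));
}

// With ray traced GI, the constant is replaced by irradiance gathered from the
// scene (here just a parameter standing in for the result of tracing bounce
// rays). In a fully occluded corner it is simply zero, so the pixel can go
// truly black, which is what gives the OLED-like contrast described above.
Vec3 ShadeWithGI(const Vec3& albedo, const Vec3& normal,
                 const Vec3& lightDir, const Vec3& lightColor,
                 const Vec3& gatheredIrradiance) {
    float ndotl = std::max(Dot(normal, lightDir), 0.0f);
    return Mul(albedo, Add(gatheredIrradiance, Scale(lightColor, ndotl)));
}
```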
 
Upcoming games supporting RDNA2 Ray Tracing (plus other tech):

Dirt 5 -> Nov 6 2020
Godfall -> Nov 12 2020
Far Cry 6 -> Feb 18 2021
World of Warcraft: Shadowlands -> Oct 27 2020
The Riftbreaker -> 2021?

Since some of these come out before the launch of the 6000 series cards, we should get an idea of RT performance. These should have been at least reasonably optimized for RDNA2.
 
Did AMD implement dedicated hardware for RT, or is it being done by the same CUs as rasterization? I didn't catch any of that so far from what I've read.
 
There's a dedicated ray accelerator, as they call it, per CU. Benches put it on par with Turing.
 
His work was Vega.

IIRC Raja was upset that he had team members pulled to work on Navi due to Navi powering consoles.
I remember it happening this way... he wanted people on Vega, not Navi.
That sounds a lot like a no to fully dedicated hardware, then. So the accelerators will help with the processing of RT, but a large part of the RT workload remains with the main CUs.
 
There are many processors per CU. How that breaks down and how much of it is dedicated to RT, we don't know. It's not software, if that's what you're wondering.

**EDIT**
FROM ANAND:

"Ray tracing itself does require additional functional hardware blocks, and AMD has confirmed for the first time that RDNA2 includes this hardware. Using what they are terming a ray accelerator, there is an accelerator in each CU. The ray accelerator in turn will be leaning on the Infinity Cache in order to improve its performance, by allowing the cache to help hold and manage the large amount of data that ray tracing requires, exploiting the cache’s high bandwidth while reducing the amount of data that goes to VRAM."
 