AMD RDNA 2 gets ray tracing

Oh yeah it has begun.

Halo Infinite gameplay on XSX is universally panned for looking flat and last-gen. Digital Foundry breaks the problem down to the use of last-gen indirect lighting in scenes not directly lit by the sun. The solution, of course, is higher-precision bounce lighting, including hardware RT.

Halo is also a great example of why shitty graphics at 4K still look shitty. DF recommends lowering resolution and spending the horsepower on better lighting. I couldn't agree more.

So much for people not noticing the benefits of pixel perfect light and shadows. This is a promising sign for this coming generation. Open world games with the sun as the primary source of light will need RT or SVOGI to avoid looking "flat" as people's expectations are changing.



"But, but but...the console are supposed to make the PC gaming look last-gen" ;)
 
It feels like a calculated plan. Total blackout silence.

That means they have good cards. Sort of like when you get pocket Aces in Texas Hold 'em and you stop talking real quick.
I agree, and I feel the lack of info and promises is a nice change. Hopefully it means the AMD hype machine resources are being put into the product instead ;).
 
Oh yeah it has begun.

Halo Infinite gameplay on XSX is universally panned for looking flat and last-gen. Digital Foundry breaks the problem down to the use of last-gen indirect lighting in scenes not directly lit by the sun. The solution, of course, is higher-precision bounce lighting, including hardware RT.

Halo is also a great example of why shitty graphics at 4K still look shitty. DF recommends lowering resolution and spending the horsepower on better lighting. I couldn't agree more.

So much for people not noticing the benefits of pixel perfect light and shadows. This is a promising sign for this coming generation. Open world games with the sun as the primary source of light will need RT or SVOGI to avoid looking "flat" as people's expectations are changing.


Good grief. So without RT everything looks flat? No, the problem is that Microsoft made the choice of releasing all games on both the older platform and the new one, meaning development focuses on the lowest common denominator, aka the Xbox One, and not the Series X. So what you're looking at is typical of how games are designed for the PC, because that's how Microsoft thinks.

Microsoft isn't planning on releasing a Series X-optimized game for 3 years. In terms of console development that's suicide. Console development does not equal PC development.
 
Good grief. So without RT everything looks flat? No, the problem is that Microsoft made the choice of releasing all games on both the older platform and the new one, meaning development focuses on the lowest common denominator, aka the Xbox One, and not the Series X. So what you're looking at is typical of how games are designed for the PC, because that's how Microsoft thinks.

Microsoft isn't planning on releasing a Series X-optimized game for 3 years. In terms of console development that's suicide. Console development does not equal PC development.

Raytracing actually makes lighting hacks look bland FYI
 
Good grief. So without RT everything looks flat?

No, without proper GI everything looks flat. The video explains it well but we are moving away from lighting baked into textures and light maps to more dynamic assets where the lighting has to come from an actual light source. If you use dynamic assets with bad lighting you get Halo.
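As a toy numeric sketch of why that matters (all numbers and names here are made up, just to show why a single flat ambient term kills the shape of anything in shadow):

```python
# Toy shading model: direct sunlight plus an indirect term.
# Static geometry with baked lighting gets per-texel indirect light from a
# lightmap; a dynamic character often falls back to one flat ambient value.
# With dynamic GI (probes, SVOGI, ray-traced bounce) the indirect term varies
# with position and surface direction, so objects in shadow keep their shape.

def shade(albedo, sun_visibility, sun_intensity, indirect):
    return albedo * (sun_visibility * sun_intensity + indirect)

albedo, sun = 0.8, 3.0

# A character standing fully in shadow (sun_visibility = 0):
flat_ambient = 0.3                 # constant ambient: no directionality at all
probe_up, probe_side = 0.45, 0.15  # hypothetical GI samples that differ by facing

print(shade(albedo, 0.0, sun, flat_ambient))  # ~0.24 everywhere -> looks flat
print(shade(albedo, 0.0, sun, probe_up))      # ~0.36 on upward-facing surfaces
print(shade(albedo, 0.0, sun, probe_side))    # ~0.12 on side-facing surfaces
```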

No, the problem is that Microsoft made the choice of releasing all games on both the older platform and the new one, meaning development focuses on the lowest common denominator, aka the Xbox One, and not the Series X. So what you're looking at is typical of how games are designed for the PC, because that's how Microsoft thinks.

Yes this clearly isn't an issue with XSX hardware. But it is a nice demonstration of what's missing from games without dynamic GI. A lot of people never paid attention because lighting has been baked with excellent results in the past - TLOU, Uncharted etc. It's mostly a reminder for the folks who think good dynamic lighting and shadows aren't a big deal.
 
Raytracing actually makes lighting hacks look bland FYI
So we've come full circle, and that's why I mentioned the problem of not knowing how much of a scene is actually ray traced and people just blindly assuming that ray tracing was the reason it looked good. Unless you are literally telling everyone here that the only games that look good are the ones that have Nvidia RT support.
 
No, without proper GI everything looks flat. The video explains it well but we are moving away from lighting baked into textures and light maps to more dynamic assets where the lighting has to come from an actual light source. If you use dynamic assets with bad lighting you get Halo.
Actually the video doesn't explain it well. Laying all of that on lighting is lazy. First off, the textures themselves are really low quality. The use of high quality bump mapping is basically non-existent, well... everywhere. Lighting alone does not give objects texture; bump mapping does that. GI does nothing more than give realistic shadows, but in the demo that game is missing more than that.

This rush (and it's what I was afraid of) to blame any game that doesn't look great on it not being ray traced/sanctioned by Nvidia is problematic. This is how Apple users think, but it's not how people who build their own computers should think. That DF video literally acts like all of the problems are lighting related when there are way more problems than that.

Without even trying I see low quality textures, memory constraints on the game engine that are too tight, pop-in that is damn near at PS2 levels (might as well have added fog everywhere), and minimal use of volumetric lighting. Hence why I said before the problems with that game have waaaay more to do with development goals than anything else. As someone said before, an Xbox 360 could almost pull that off minus the FPS/resolution, and that's pretty darn accurate.

Yes this clearly isn't an issue with XSX hardware. But it is a nice demonstration of what's missing from games without dynamic GI. A lot of people never paid attention because lighting has been baked with excellent results in the past - TLOU, Uncharted etc. It's mostly a reminder for the folks who think good dynamic lighting and shadows aren't a big deal.
Please see above. It's way more than just lighting. The more videos I see from DF the more I dislike them. Nvidia released a hotfix today for the DLSS corruption in Death Stranding that everyone else caught when the game released but DF somehow missed. So come to find out, everyone else wasn't hallucinating. I don't know if they are getting lazy or what, but so far I haven't been impressed.
 
It's strange because Crysis Remastered on Nintendo Switch even has real-time voxel global illumination.



The XSX should definitely be able to pull it off; it might have just been a developer choice to keep the classic look, not a hardware limitation on the AMD board.
 
Actually the video doesn't explain it well. Laying all of that on lighting is lazy.

The video specifically addresses the flat look of characters in shadow. There are a ton of other problems with the game but none of them take away from the fact that the lighting is poor.

This rush (and it's what I was afraid of) to blame any game that doesn't look great on it not being ray traced/sanctioned by Nvidia is problematic.

Ummm what does Halo running on XSX have to do with Nvidia? I think your hate for team green is clouding your judgement a bit.

That DF video literally acts like all of the problems are lighting related when there are way more problems than that.

This is clearly not true. They also criticized pop-in, uninspired physically based materials, lack of texture detail and clunky particle effects. Did you actually watch the video?
 
Good grief. So without RT everything looks flat? No, the problem is that Microsoft made the choice of releasing all games on both the older platform and the new one, meaning development focuses on the lowest common denominator, aka the Xbox One, and not the Series X. So what you're looking at is typical of how games are designed for the PC, because that's how Microsoft thinks.

Microsoft isn't planning on releasing a Series X-optimized game for 3 years. In terms of console development that's suicide. Console development does not equal PC development.

It wasn't a decision made by Microsoft, it was their internal developer. My take is that 343 Industries is either completely inept or they planned to be heavily reliant on RT to differentiate the XSX version.
 
Have details leaked about AMD's ray-tracing implementation?...that's the only thing having me considering Big Navi over Ampere...if Big Navi can be slightly faster than a 2080 Ti with better ray tracing hardware at a relatively affordable price then it sounds like a major win
 
You all think big navi will outperform nvidias 3080?
It will probably be comparable.

AMD seems to be one step behind Nvidia, so beating the 2080 Ti is possible. Beating Nvidia's new best is probably out of the question.

But I would like to be surprised.
 
We just have to wait and see; unless Big Navi has some of the headline features that the 3080 has, such as DLSS, it's somewhat doubtful for me. I too like to be surprised though.
Since FidelityFX CAS is already a thing, it would be no surprise if AMD kept refining it rather than worrying about DLSS.
My guess is that would be enough for the AMD faithful.

It would make financial sense to use what they have, maybe tweak and re-brand it a bit?
 
Big Navi is going to have to be pretty damn big to outdo Nvidia's claims for 3080 performance, which is double the performance of a 2080. It will also need to deal with Nvidia's price point of $699.

AMD has a pretty good shot at the 3080.

I expect Big Navi to have at least the same 1 TB/s 16GB HBM as the VII. Based on what we’re seeing from the consoles, clocks should be in the neighborhood of 2 GHz. That should deliver ~20 TFLOPS.

That’s up against 760 GB/s and ~30 TFLOPS from the 3080. Pricing will be key as HBM isn’t cheap.
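For reference, a quick back-of-the-envelope sketch of where a ~20 TFLOPS figure comes from; the 80 CU count and 2 GHz clock are the rumored numbers from this thread, not confirmed specs:

```python
# Rough FP32 throughput estimate for an RDNA-style GPU.
# Assumed/rumored inputs: 80 CUs, 2.0 GHz boost clock.
cus = 80
lanes_per_cu = 64              # 2x SIMD32 per CU
flops_per_lane_per_clock = 2   # one fused multiply-add counts as 2 FLOPs
clock_ghz = 2.0

tflops = cus * lanes_per_cu * flops_per_lane_per_clock * clock_ghz / 1000
print(f"~{tflops:.1f} TFLOPS FP32")  # ~20.5 TFLOPS
```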
 
AMD has a pretty good shot at the 3080.

I expect Big Navi to have at least the same 1 TB/s 16GB HBM as the VII. Based on what we’re seeing from the consoles, clocks should be in the neighborhood of 2 GHz. That should deliver ~20 TFLOPS.

That’s up against 760 GB/s and ~30 TFLOPS from the 3080. Pricing will be key as HBM isn’t cheap.
Not sure if they will call it Big Navi, more like RDNA2, but in any case if they have better performance than the 3080 and more VRAM, they should just charge more, like $799, and maybe $899 for an LC version. They will sell if 3080s are not around to be bought anyway. I don't see them having to match Nvidia pricing if Nvidia cannot remotely have enough supply. They can change pricing next year or as needed later. The next version down, $599.
 
My thoughts on possible RDNA2 cards...!

Radeon RX 6700 / Navi 23 / 36CU / 8GB GDDR6 / $299 / equal to RTX 3060
Radeon RX 6700 XT / Navi 23 / 40CU / 8GB GDDR6 / $399 / equal to RTX 3060 Super

Radeon RX 6800 / Navi 22 / 56CU / 12GB GDDR6 / $499 / equal to RTX 3070
Radeon RX 6800 XT / Navi 22 / 64CU / 12GB GDDR6 / $599 / equal to RTX 3070 Super

Radeon RX 6900 / Navi 21 / 72CU / 16GB GDDR6 / $699 / equal to RTX 3080
Radeon RX 6900 XT / Navi 21 / 80CU / 16GB HBM2e / $899 / equal to RTX 3080 Super

Radeon RX 6950 XT / Navi 21 / dual 80CU / 32GB HBM2e total / $1799 / surpasses the RTX 3090

Raytracing across the board...!!

Winning...!!! ;^p
 
AMD has a pretty good shot at the 3080.

I expect Big Navi to have at least the same 1 TB/s 16GB HBM as the VII. Based on what we’re seeing from the consoles, clocks should be in the neighborhood of 2 GHz. That should deliver ~20 TFLOPS.

That’s up against 760 GB/s and ~30 TFLOPS from the 3080. Pricing will be key as HBM isn’t cheap.

HBM will also help AMD keep power in check.
 
Not sure if they will call it Big Navi, more like RDNA2, but in any case if they have better performance than the 3080 and more VRAM, they should just charge more, like $799, and maybe $899 for an LC version. They will sell if 3080s are not around to be bought anyway. I don't see them having to match Nvidia pricing if Nvidia cannot remotely have enough supply. They can change pricing next year or as needed later. The next version down, $599.

You're probably right. I think the original idea of Big Navi was essentially just more Navi CUs with the possibility of HBM. Also for it to not take very long to put out to market. RDNA2 is actual new tech. I still make the mistake of saying big navi instead of rdna2.
 
AMD is not using the term "Big Navi" as a name for the forthcoming RDNA2-based GPUs. It is a term that the internet is using to talk about the largest die size for RDNA2 GPUs, hence "Big" Navi...
 
cybereality the key word is "announced"; until cards are in reputable testers' hands, I will take their claims with a grain of salt for now. NVIDIA is famous for making such massive claims only to be proven wrong once the cards are in the hands of actual users.
 
cybereality the key word is "announced"; until cards are in reputable testers' hands, I will take their claims with a grain of salt for now. NVIDIA is famous for making such massive claims only to be proven wrong once the cards are in the hands of actual users.
How famous?
 
Well, marketing in general is usually somewhat BS from all companies, don't think Nvidia is worse than anyone else.
 
cybereality the key word is "announced"; until cards are in reputable testers' hands, I will take their claims with a grain of salt for now. NVIDIA is famous for making such massive claims only to be proven wrong once the cards are in the hands of actual users.

I don't think they have any history of faking benchmarks, or hiring marketing companies that do that. They aren't Intel. ;)

So I don't doubt the game numbers we have seen.

Naturally they are showing the games that show them in the best light. That is how marketing works for everyone. No one pulls out the games that makes them look worse.

Games that have some CPU limitations aren't going to show big gains, so they will be skipped.

The new SM with FP32 + (FP32 or INT32) is how they doubled CUDA cores, but they won't be near double in Games that mix in a lot of INT32, so these games won't boost as much either, so they aren't here...
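A rough illustration of why the doubling doesn't show up in INT32-heavy games. The instruction mixes below are illustrative assumptions (the middle one is in the ballpark of the ~36 INT ops per 100 FP ops ratio Nvidia has cited for typical game shaders), not measurements:

```python
# Toy throughput model of a Turing vs. Ampere SM (per clock, per SM).
# Turing: 64 FP32 lanes + 64 INT32 lanes running concurrently.
# Ampere: 64 FP32 lanes + 64 lanes that do either FP32 or INT32.

def cycles_turing(n_instr, int_frac):
    fp, integer = n_instr * (1 - int_frac), n_instr * int_frac
    return max(fp / 64, integer / 64)       # the busier pipe sets the pace

def cycles_ampere(n_instr, int_frac):
    integer = n_instr * int_frac
    # Best case: the shared pipe absorbs all INT32, both pipes split the FP32.
    return max(n_instr / 128, integer / 64)

n = 10_000
for int_frac in (0.0, 0.26, 0.5):
    speedup = cycles_turing(n, int_frac) / cycles_ampere(n, int_frac)
    print(f"INT32 fraction {int_frac:.2f}: ~{speedup:.2f}x per SM")
# 0.00 -> 2.00x, 0.26 -> ~1.48x, 0.50 -> 1.00x
```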

So of course we need to see a wider range of games from third parties, but I don't think there is much doubt that they dropped some seriously powerful new GPUs for good prices, and the benches we have seen are essentially honest, just incomplete.
 
I don't think they have any history of faking benchmarks, or hiring marketing companies that do that. They aren't Intel. ;)

So I don't doubt the game numbers we have seen.

Naturally they are showing the games that show them in the best light. That is how marketing works for everyone. No one pulls out the games that makes them look worse.

Games that have some CPU limitations aren't going to show big gains, so they will be skipped.

The new SM with FP32 + (FP32 or INT32) is how they doubled CUDA cores, but they won't be near double in Games that mix in a lot of INT32, so these games won't boost as much either, so they aren't here...

So of course we need to see a wider range of games from third parties, but I don't think there is much doubt that they dropped some seriously powerful new GPUs for good prices, and the benches we have seen are essentially honest, just incomplete.
Also some games have fps caps which won't reflect the added performance at lower resolutions. Pretty exciting overall; not sure what AMD has planned in the next couple of months. Release Navi 22 to compete against the 3070 or 3060? Navi 21 against the 3070 or 3080? Both chips? Until AMD reveals their plans we will just be in the dark. RDNA2 may be released this year, but what SKUs? Midrange, high end, ultra high end? With Zen 3 coming out (top priority most likely) and the consoles, which really means Sony and Microsoft (I would not be surprised if a huge chunk of the 7nm process was allocated to this), they will be busy no doubt, and may only release a few RDNA2 SKUs this year.
 
Yep, I wonder about two things. For one, AMD is not getting a big node change yet is gaining a 50% performance-per-watt increase. The 36 CU PS5 is being shown doing 60 FPS at 4K, not checkerboard, with RT reflections at 1080p; the 40 CU 5700 XT can't even come close to usable 4K, let alone with RT. I can't believe RDNA2 is an utterly new uber design; it is most likely based off the original RDNA. So what is this secret sauce that it appears to be using?

This was shown last year in March:

[Attachment 255791]

I think some do not understand that Coreteks mostly uses available patents, AMD's official slides/whitepapers, and source code from available programs like the Linux open drivers, and then he speculates; he'll also consider any leaks that appear. So he is basically speculating as best he can with the available data. Anyways, 3D-stacked memory, either SRAM or some other form, maybe MRAM, but I doubt that since TSMC has it on a 22nm process, unless they are working with AMD. Putting the most intensive memory operations in a local, fast, very large pool/cache, which for textures and traversal of the BVH would free up the regular GPU RAM, could allow them to use cheaper, slower GDDR6 and a smaller-width bus while beating previous performance. I really don't know, but something big I think has to be incorporated to get AMD another 50%/watt boost from basically the same process.
We had a question some time back; it looked like RDNA2 would get a 50% perf/W increase, and from this rumour AMD is getting a 60% perf/W improvement. Anyways, what is being reported is that indeed AMD is using a large on-die cache, around 128 MB, allowing use of slower GDDR6 at a cheaper cost, some energy savings, and increased overall performance. Some of the other stuff that might interest a few:
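Back-of-the-envelope on why a big on-die cache plus a narrower GDDR6 bus could still work out. The hit rate, cache bandwidth and pin speed below are pure assumptions for illustration, not leaked specs:

```python
# Effective bandwidth estimate for a "large cache + narrow bus" design.
# All numbers are illustrative assumptions, not confirmed specs.
gddr6_bus_bits = 256
gddr6_gbps_per_pin = 16
dram_bw = gddr6_bus_bits / 8 * gddr6_gbps_per_pin  # 512 GB/s raw DRAM bandwidth

cache_bw = 2000   # assumed on-die cache bandwidth in GB/s
hit_rate = 0.6    # assumed fraction of requests served by the ~128 MB cache

effective_bw = hit_rate * cache_bw + (1 - hit_rate) * dram_bw
print(f"DRAM only: {dram_bw:.0f} GB/s, blended: {effective_bw:.0f} GB/s")
# 512 GB/s raw vs ~1405 GB/s blended under these assumptions
```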

RDNA2
3 SKUs
  • Radeon RT? 6700 - compete with the 3070, maybe faster and AMD may undercut the cost
  • Radeon RT? 6800 - compete with the 3080
  • Radeon RT 6900 - compete with the 3090 but maybe slower
  • XT versions or variations added to the above to compete with Nvidia Ti or Super versions
RT performance unknown but hints at Turing-level or better RT performance -> I just don't see how AMD can outdo Nvidia in RT, due to the limited number of rays that can be cast in real time and the heavy denoising needed to clean up the rendering; Nvidia has Tensor cores which work for denoising and also DLSS.

While it will be great if true that AMD is competing SKU for SKU, hopefully with some price or feature advantage such as additional VRAM, if AMD is basically the same for games but behind in RT and features then I see no reason to wait, just buy an Ampere GPU. The gist is also that AMD may have better supply than Nvidia; if so, many will just buy what is available if they are indeed similar in performance. We will see.

 
We had a question some time back; it looked like RDNA2 would get a 50% perf/W increase, and from this rumour AMD is getting a 60% perf/W improvement. Anyways, what is being reported is that indeed AMD is using a large on-die cache, around 128 MB, allowing use of slower GDDR6 at a cheaper cost, some energy savings, and increased overall performance. Some of the other stuff that might interest a few:

RDNA2
3 SKUs
  • Radeon RT? 6700 - compete with the 3070, maybe faster and AMD may undercut the cost
  • Radeon RT? 6800 - compete with the 3080
  • Radeon RT 6900 - compete with the 3090 but maybe slower
  • XT versions or variations added to the above to compete with Nvidia Ti or Super versions
RT performance unknown but hints at Turing-level or better RT performance -> I just don't see how AMD can outdo Nvidia in RT, due to the limited number of rays that can be cast in real time and the heavy denoising needed to clean up the rendering; Nvidia has Tensor cores which work for denoising and also DLSS.

While it will be great if true that AMD is competing SKU for SKU, hopefully with some price or feature advantage such as additional VRAM, if AMD is basically the same for games but behind in RT and features then I see no reason to wait, just buy an Ampere GPU. The gist is also that AMD may have better supply than Nvidia; if so, many will just buy what is available if they are indeed similar in performance. We will see.



I'd be surprised at this point, given the inferences from AMD's statements and what we know about RDNA 1 and 2, if the Navi 2 lineup strays too far from what you lay out above. My main question is around pricing at this point.

We may see some parallels with the HD 5870 vs GTX 480 in some respects: somewhat less expensive and lower power for Navi 2; higher cost, higher power draw and higher performance (with a few more largely unsupported features) for Ampere.
 
I'd be surprised at this point, given the inferences from AMD's statements and what we know about RDNA 1 and 2, if the Navi 2 lineup strays too far from what you lay out above. My main question is around pricing at this point.

We may see some parallels with the HD 5870 vs GTX 480 in some respects: somewhat less expensive and lower power for Navi 2; higher cost, higher power draw and higher performance (with a few more largely unsupported features) for Ampere.
Still rumors, though the claimants attest to reliable sources. I would expect more rumors and eventual images to leak out over time, with some benchmark results mixed in with the usual FUD. For AMD to really get a 50%+ perf/W increase from almost the same process would be amazing, unless it is some obscure, non-meaningful measurement not related to gaming. In AMD's case they stated that improvement came from a game benchmark, so it is at least meaningful if true. If AMD is getting great performance benefits from a large cache, keeping the data mostly in the GPU at super speeds and allowing use of much cheaper RAM, board configuration etc., they should have great leeway on pricing, which mostly I think will go into getting their profit margin up since they are low on that aspect overall.

Some think the Nvidia FE versions are token symbols: highly overbuilt, with an extraordinary cooler design and cost, which will always be out of stock. Nvidia is basically underselling them but will make the money up with more readily available, higher-priced AIB Ampere cards. Since Nvidia now sells most AIB cards from their own website, they still make money even if their FEs are virtually always out of stock.

For one, I doubt AMD will have the trophy cooler Nvidia put on their card, Lamborghini vs Chevy, so AMD's card may look downright cheap by comparison, and every review will show the two side by side -> looks sell, except in Turing's case where $ considerations overtook the sleek design cards. Here you go, the beefy, highly engineered 3070 cooler next to an AMD card that looks plastic with a bent shape on it or something else stupid; some people will subconsciously just stop there and think Nvidia has the superior, higher quality product, which may be the case, except it can't be bought.

Nvidia are masters at manipulative marketing: trying to control the narrative, how it's presented, who gets cards to review and how you will test them, and always the threat that you may not receive their goodness next time around. Marketing slides that appear remarkable until you really look at them, like the "Greatest Generational Leap" slide where the baseline is the 980 as 1x :ROFLMAO:. When you actually calculate from Turing to Ampere it is more like 35%, not even remotely as good as Kepler to Maxwell (>50%) or Maxwell to Pascal (>60%). Or the power chart. Is Nvidia lying? Well, from the 980 as a baseline, no, but from one generation to the very next generation, yes.

Anyways, dealing with pricing, we have to see what the real availability and actual pricing is. Nvidia's $999 MSRP for the 2080 Ti was an utter joke, with only a token few put out at that price; will Ampere cards follow suit in a similar fashion but throughout the range?
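To illustrate the baseline trick: the per-generation multipliers below are hypothetical round numbers, not benchmark data, just to show how a cumulative ratio against the 980 flatters the newest jump:

```python
# Hypothetical cumulative performance vs. a GTX 980 baseline (980 = 1.0x).
# These multipliers are made up for illustration; they are not measurements.
cumulative = {"980 (Maxwell)": 1.0,
              "1080 (Pascal)": 1.7,
              "2080 (Turing)": 2.2,
              "3080 (Ampere)": 3.0}

gens = list(cumulative.items())
for (prev_name, prev), (cur_name, cur) in zip(gens, gens[1:]):
    print(f"{cur_name}: {cur:.1f}x vs 980, but only "
          f"{(cur / prev - 1) * 100:.0f}% over {prev_name}")
# A "3x the 980" headline works out to only ~36% over the card it replaces.
```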
 
I always thought that Navi was misjudged: the RX 5700 has shown that it can run within 5% of the XT core using the same BIOS/driver despite lower specs, which was kind of a point that never got brought to light.
 
...Anyways, what is being reported is that indeed AMD is using a large on-die cache, around 128 MB, allowing use of slower GDDR6 at a cheaper cost, some energy savings, and increased overall performance...

Sounds like a version of eDRAM akin to the Xbox One. I guess AMD is going for a more layered approach. The Series X has a similar approach, to a lesser extent, with its hybrid VRAM.

Honestly, it makes sense. After all, how much data can a single frame pack? Games will need sufficient VRAM, and some of that at very high bandwidth, but it is probably a waste having 16 GB capable of swapping out at 800+ GB/s.
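A quick sanity check on the per-frame numbers; the frame rate and bandwidth here are just round illustrative figures:

```python
# How much data a GPU can actually move per frame at a given bandwidth.
# Round illustrative numbers, not specs for any particular card.
bandwidth_gb_s = 800   # e.g. "16 GB swapping out at 800+ GB/s"
fps = 60

gb_per_frame = bandwidth_gb_s / fps
print(f"{gb_per_frame:.1f} GB touched per frame at {fps} fps")  # ~13.3 GB
# Most of a 16 GB pool can't be re-read every frame anyway, which is why a
# smaller, much faster cache in front of it can make sense.
```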
 
Sounds like a version of eDRAM akin to the Xbox One. I guess AMD is going for a more layered approach. The Series X has a similar approach, to a lesser extent, with its hybrid VRAM.

Honestly, it makes sense. After all, how much data can a single frame pack? Games will need sufficient VRAM, and some of that at very high bandwidth, but it is probably a waste having 16 GB capable of swapping out at 800+ GB/s.

Unlike consoles, PC graphics APIs don’t have support for explicit cache management. If AMD does go this route they may have to work some driver magic for each game to ensure the right data stays in cache for maximum reuse.
 
why don't more games use software ray tracing?...Crysis Remastered uses it...can't developers give players the option to use software ray tracing?
 
why don't more games use software ray tracing?...Crysis Remastered uses it...can't developers give players the option to use software ray tracing?
I'm still extremely skeptical of Crytek's solution, at least in terms of fidelity even beginning to approach what hardware RT is capable of.

At the same time, I can't disagree that it's worth a hard look, especially since they seem to have included hardware RT support as well. In the race to ray trace all the things, Crytek might just have found a way to bridge the gap, and while their engine may not be the most popular, they're at least showing that it can be done.

Similar to the auto-generated detail system being used in Unreal, Crytek's soft-RT might be helpful for developers if there isn't a need for separate experiences to be developed for different levels of hardware.
 
Well, Crytek developed it for their engine and probably spent considerable time and effort to pull it off. Using hardware RT would probably be easier, since the API handles most of the complexity.

That said, many games have been using hybrid ray tracing techniques for certain elements. Unreal 4 has supported this for a while for shadows, also games like Quantum Break had a pretty advanced global illumination system.

And there have been all sort of tricks and hacks done over the years to "fake" ray tracing to an extent, so it's not like this is new.
 
Unlike consoles, PC graphics APIs don’t have support for explicit cache management. If AMD does go this route they may have to work some driver magic for each game to ensure the right data stays in cache for maximum reuse.

Then maybe they have some software breakthroughs similar to the new consoles, which claim on average 2.5x efficiency in I/O (Sampler Feedback Streaming).

I just don't see AMD getting BIIIIIG NAAAAAVVVI performance with the 256-bit GDDR6 that early leaks suggest.
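For context, the raw math on a 256-bit bus; the pin speeds are assumed, since nothing is confirmed:

```python
# Raw GDDR6 bandwidth for a few assumed bus width / pin speed combinations.
def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(256, 14))   # 448 GB/s (5700 XT-class setup)
print(bandwidth_gb_s(256, 16))   # 512 GB/s
print(bandwidth_gb_s(320, 19))   # 760 GB/s (the 3080's configuration)
# Without something like a big on-die cache, 256-bit GDDR6 leaves a large gap.
```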
 