AMD RDNA 2 gets ray tracing

If I were to guess, I would say that Big Navi will be roughly equivalent to the 3080 Ti, maybe faster in some games and slower in others, but below the 3090. Without getting into anything technical at all, if the 3090 rumour is true, and it sounds like it is, the only really good explanations are these:

1. They are dropping the 3080 Ti for just the 3090 and will later do Super versions.

2. They will still have a 3080 Ti, but from whatever information they have cobbled together they think it might fall short of Big Navi, so they need a non-Titan variant to keep the speed crown.

Also, again without getting technical, the PS5 seems to be able to do 4K 60 FPS in some new games with only 36 CUs. Since Big Navi is rumored to have 80, it would seem it should be a beast.

I've been disappointed by AMD too many times.

Because of this, I'm assuming Big Navi won't be competitive at all in the high end. That way, if it is, I'll get a positive surprise :p
 
Yeah, I feel you.

Realistically, it's going to trade blows with the 2080 Ti, or somewhere around there.

Then Nvidia will release another monster, and we can sit here and speculate for another 2 years...
 
Also, Super models are just clocked-up models. Technically AMD can do the same thing as binning improves, but that has never been part of their strategy.
The 50th Anniversary Edition 5700 XT disagrees with you: they did do it. And they should do it more; as a collector, those are the cards I'm interested in, a bit like the 'Phantom Editions' (Platinum Edition?), e.g. the X800 XT PE. They were vapourware back then though; I tried to get one, lol.

They could have done a Vega PE version too if they wanted. Mine (and quite a few other late-production V64s) undervolts significantly better than most earlier cards: 1.6 GHz (max rated boost) at 180-190 W with 1100 MHz HBM2 in most titles, including Battlefront II, NMS, etc., is a big difference from 300 W... and it also makes the stock blower pretty damn quiet, lol.
 
Just hoping we get a decent £400-£500 option this autumn.
I'm looking for a $200-and-under option... It'll probably be a while till the lower-end stuff comes out (early 2021?). I have 3-4 systems to upgrade for the family, so no $1,000 cards here! In all honesty though, even the RX 560 does fine at medium settings @ 1080p in most games, so there's no huge rush.
 
I'm looking for a $200-and-under option... It'll probably be a while till the lower-end stuff comes out (early 2021?). I have 3-4 systems to upgrade for the family, so no $1,000 cards here! In all honesty though, even the RX 560 does fine at medium settings @ 1080p in most games, so there's no huge rush.
I had to do the same a while back. Even at $200, multiple systems = multiple cards and it adds up.
 
I had to do the same a while back. Even at $200, multiple systems = multiple cards and it adds up.
Yeah, this winter I'll probably upgrade the GPUs in 2 of the systems and then play the component shuffle. I just started upgrading my son's PC and am waiting on the parts to show up, hopefully this weekend! B550 + 3700X + 16 GB 3200. It should be a noticeable difference from his 6600K. The 6600K + motherboard will go into my wife's desktop; she has an older i5 3450. Not a ton of swapping right now, but stuff at my house tends to get passed down a bit, lol. I try to keep 5-6 desktops usable, so I agree, even $200 can add up quickly.
 
AMD/Microsoft version of DLSS for RDNA2; I missed this report:
But it doesn't end there. AMD and Microsoft also seem to be targeting Nvidia's DLSS technology with RDNA 2 and the Xbox Series X. If you're not familiar with DLSS, or deep learning super sampling, it's a technology that uses dedicated hardware on Turing graphics cards to upscale images through AI.

Nvidia graphics cards have dedicated Tensor cores that handle this, but AMD is taking another approach. Instead, AMD will be relying on the raw throughput of the GPU, and executing the machine learning workloads through 8- and 4-bit integer operations – much lower precision than the 32-bit operations that are typically used in graphics workloads. This should result in a huge amount of power for this up-scaling without sacrificing too much.
https://www.techradar.com/news/xbox-series-x-specs

Which may also explain some of the PS5's performance at 4K with ray tracing. Does this also account for that phantom 50% performance boost, or part of it, on the 7nm node? I don't know. It looks like we will need to evaluate once the hardware is out there to be tested. In any case, I would want a solution that is more universal than DLSS, as in available in all DX12/Vulkan games. How much performance you gain, and whether the IQ is as good as DLSS (which has rather outstanding results), if this is even true, will all have to be evaluated. This may be the biggest launch of new GPUs ever.
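For anyone wondering what "executing the machine learning workloads through 8- and 4-bit integer operations" actually means, here's a toy NumPy sketch of low-precision integer inference in general. It is purely illustrative and has nothing to do with AMD's actual implementation: FP32 weights and activations get quantized to int8, multiplied with integer math, and rescaled back.

```python
import numpy as np

def quantize_int8(x):
    """Map a float32 tensor to int8 values plus a per-tensor scale."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a_f32, b_f32):
    """Emulate an int8 matrix multiply, then rescale back to float32."""
    qa, sa = quantize_int8(a_f32)
    qb, sb = quantize_int8(b_f32)
    # Accumulate in int32, as int8 dot-product hardware does,
    # then rescale the result back into the float domain.
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (sa * sb)

rng = np.random.default_rng(0)
activations = rng.standard_normal((4, 64)).astype(np.float32)
weights = rng.standard_normal((64, 8)).astype(np.float32)
exact = activations @ weights
approx = int8_matmul(activations, weights)
print(np.max(np.abs(exact - approx)))  # small error, far fewer bits of math
```

The point the article is making is simply that this kind of integer math can run at much higher rates on shader ALUs than full FP32, so you trade a little precision for a lot of throughput; whether the RDNA 2 upscaler looks anything like this is pure speculation.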
 
I hadn't seen that report, but I assumed AMD was working on something. This confirms that.
 
Does this also account for that phantom 50% performance boost, or part of it, on the 7nm node?

It's not a 50% performance boost, it's a 50% performance-per-watt improvement. And I would think it's highly unlikely that AMD included figures obtained by upscaling in their performance-per-watt estimates.
 
It's not a 50% performance boost, it's a 50% performance-per-watt improvement. And I would think it's highly unlikely that AMD included figures obtained by upscaling in their performance-per-watt estimates.
I agree with that, but seeing 50% better perf/W basically on the same node is probably a first, and speculating leaves much to be desired. It remains to be seen whether AMD is comparing their 40 CU Navi 10 part at 225 W to an 80 CU part restricted to something like 225 W, meaning it runs at a lower frequency and voltage to meet that wattage. They did something similar with the Nano previously to claim 30% more performance with 30% less power over a 290X, both on the 28nm process; AMD did not compare the Fury X at 275 W to the 290X for perf/W, but the 175 W Nano. Best just to evaluate when all the hardware is available.

AMD's claim for the Nano of 30% less power and 30% more performance:
https://www.amd.com/en/press-releases/amd-radeon-r9-nano-2015aug27
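To put rough numbers on why the choice of comparison point matters so much, here is the back-of-the-envelope math implied by those two kinds of marketing claims (nothing official, just the stated ratios):

```python
# Perf-per-watt ratios implied by the marketing claims discussed above.

# "30% more performance at 30% less power" (Nano vs. 290X style claim):
nano_vs_290x = 1.30 / 0.70       # ~1.86x perf/W, i.e. +86%
# A flat "+50% perf/W" claim (RDNA 2 vs. RDNA 1), stated directly as a ratio:
rdna2_vs_rdna1 = 1.50

print(f"Nano-style claim: {nano_vs_290x:.2f}x perf/W")
print(f"RDNA 2 claim:     {rdna2_vs_rdna1:.2f}x perf/W")
```

A carefully chosen low-power SKU makes the perf/W ratio look dramatic even on the same node, which is exactly why it matters which parts AMD ends up comparing for the RDNA 2 figure.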
 
I remember when AMD was bragging about how amazingly efficient Polaris was, and then only a year later rebadged overvolted/overclocked variants as new cards.
 
I remember when AMD was bragging about how amazingly efficient Polaris was, and then only a year later rebadged overvolted/overclocked variants as new cards.

I am still laughing over "Poor Volta".
If AMD focused more on GPUs than on marketing fluff, they might have a better product.
 
We'll just have to wait for the release to get the real scoop, then pick and choose whether anything is worthwhile to buy.
 
I am still laughing over "Poor Volta".
If AMD focused more on GPUs than on marketing fluff, they might have a better product.
Well, it's not as if a marketing agent can pick up a copy of some VLSI software and produce a better product... In all seriousness though, I'm hoping that with Raja gone and a bit more competence at the top of the company, we will be getting better results and less fluff. But it's hard to be optimistic based on prior history. I still don't think they'll magically catch Nvidia on the top end while significantly undercutting prices or anything crazy, but being competitive at the 2080 Ti level for a more reasonable price would be a great step forward.
 
I am still laughing over "Poor Volta".
If AMD focused more on GPUs than on marketing fluff, they might have a better product.
It was pretty cringe, but Volta didn't really end up doing much for mainstream dGPUs either, so they must have known something we didn't and got it partially right.
DLSS was also pretty shit the first time around; it barely managed to look better, and only in tunnel shooters. Does it work in open-world games yet?

Looks like AMD has some good uArch advancements in the pipeline; I just hope we see them before the end of the year.
 
It was pretty cringe, but Volta didn't really end up doing much for mainstream dGPUs either, so they must have known something we didn't and got it partially right.
DLSS was also pretty shit the first time around; it barely managed to look better, and only in tunnel shooters. Does it work in open-world games yet?

Looks like AMD has some good uArch advancements in the pipeline; I just hope we see them before the end of the year.

Tell me the difference between Volta SM/CUDA cores and Turing SM/CUDA cores? ;)
 
Well, Volta kicked GCN's ass in HPC. It also made CUDA even more of the standard, which AMD now has to convert so it can be used on their hardware. And Volta wasn't even the gaming version anyway; Pascal trumped Vega not only in gaming but in mining as well for the most desired GPUs. That is history, but as some say, history repeats itself over and over, so AMD can repeat theirs too, having been top dog for a while in the past. Best to see what becomes available this fall and choose wisely from whoever gives the best value/performance per $.
 
Tell me the difference between Volta SM/CUDA cores and Turing SM/CUDA cores? ;)
Seen lots of weird shit about that but from what I can tell in more recent benches, only one does the paytracing properly and the other is a bit slower. I always assumed they were the same but allegedly they are not. It's really not that interesting because almost no one is going to use Volta to run RT titles.
 
Well, Volta kicked GCN's ass in HPC. It also made CUDA even more of the standard, which AMD now has to convert so it can be used on their hardware. And Volta wasn't even the gaming version anyway; Pascal trumped Vega not only in gaming but in mining as well for the most desired GPUs. That is history, but as some say, history repeats itself over and over, so AMD can repeat theirs too, having been top dog for a while in the past. Best to see what becomes available this fall and choose wisely from whoever gives the best value/performance per $.
Are you talking about hardware or the software ecosystem?
 
Is it just me, or has the Big Navi / RDNA2 rumor mill ground to a halt?

AMD has been pretty tight-lipped these last few years when it comes to rumors. You might see something when they are 30 days or so from launch, but it's pretty quiet right up until then.
 
What was the “Poor Volta” thing? I must’ve missed that.
(attached image: 1594694730599.png)
 
Seen lots of weird shit about that but from what I can tell in more recent benches, only one does the paytracing properly and the other is a bit slower. I always assumed they were the same but allegedly they are not. It's really not that interesting because almost no one is going to use Volta to run RT titles.

Volta doesn't have RT cores ;)
 
Volta doesn't have RT cores ;)
But it has almost the same die size and similar features otherwise... nearly the same core. It's not like ngreedia did a complete revamp, just a partial one.
 
But it has almost the same die size and similar features otherwise... nearly the same core. It's not like ngreedia did a complete revamp, just a partial one.

And that kills the argument that Nvidia spent too much die area on RT.
You cannot have your cake and eat it too.
 
AMD has been pretty tight-lipped these last few years when it comes to rumors. You might see something when they are 30 days or so from launch, but it's pretty quiet right up until then.
I agree, and that's good. The hype machine is slowing down, so hopefully that means they don't need it as badly anymore, like with their CPUs, which carry their own hype because they actually perform well.
 
The silence is probably a good sign. Although the RT video was underwhelming, I think the PS5 videos showed a lot of promise, so they might actually have the goods this time around.
 
I agree with that, but seeing 50% better perf/W basically on the same node is probably a first, and speculating leaves much to be desired. It remains to be seen whether AMD is comparing their 40 CU Navi 10 part at 225 W to an 80 CU part restricted to something like 225 W, meaning it runs at a lower frequency and voltage to meet that wattage. They did something similar with the Nano previously to claim 30% more performance with 30% less power over a 290X, both on the 28nm process; AMD did not compare the Fury X at 275 W to the 290X for perf/W, but the 175 W Nano. Best just to evaluate when all the hardware is available.

AMD's claim for the Nano of 30% less power and 30% more performance:
https://www.amd.com/en/press-releases/amd-radeon-r9-nano-2015aug27
Here you go. It's a good review.

"Death Stranding: PC graphics performance benchmark review - NVIDIA DLSS 2.0 and AMD FidelityFX CAS" https://www.guru3d.com/articles-pages/death-stranding-pc-graphics-performance-benchmark-review,4.html
 
Looks like AMD fine wine, lol. The 5700 XT, a card you could have gotten for $349 at Dell not too long ago, beats a 2070 Super at 1080p, ties it at 1440p, and isn't even significantly behind a 2080. Looking at FidelityFX in RE3, it does give some nice improvements without the usual texture issues you get from regular sharpening. The performance mode for DLSS in this title looks like a non-starter and degrades the image, judging from their samples, while the quality mode definitely enhances the IQ while also improving performance. Maybe AMD will have a FidelityFX 2 for sharpening and upscaling. In any case, FidelityFX is better IQ-wise than DLSS 1.0, and maybe even the 1.5 used in the original Control. Plus, FidelityFX is not restricted to one brand of current-generation GPUs; even 1080 Tis, 1080s, etc. get the benefit of that tech, which is very cool.
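For context on what "contrast-adaptive" sharpening does differently from a plain sharpen filter, here is a heavily simplified NumPy sketch of the general idea. This is not AMD's actual CAS shader, just an illustration of weighting the sharpen strength by local contrast:

```python
import numpy as np

def adaptive_sharpen(img, strength=0.8):
    """Toy contrast-adaptive sharpen of a 2D luminance image in [0, 1].

    The per-pixel sharpening weight shrinks where the local neighbourhood
    already spans a wide range, which is the basic trick CAS-style filters
    use to add detail without obvious halos (NOT AMD's actual shader)."""
    p = np.pad(img, 1, mode="edge")
    c = p[1:-1, 1:-1]                      # centre pixel
    n, s = p[:-2, 1:-1], p[2:, 1:-1]       # vertical neighbours
    w, e = p[1:-1, :-2], p[1:-1, 2:]       # horizontal neighbours
    mn = np.minimum.reduce([c, n, s, w, e])
    mx = np.maximum.reduce([c, n, s, w, e])
    # Headroom-based amplification: near 0 where the neighbourhood already
    # covers the full range, larger where there is room to add contrast.
    amp = np.sqrt(np.clip(np.minimum(mn, 1.0 - mx) / np.maximum(mx, 1e-5), 0.0, 1.0))
    w_k = -amp * strength / 5.0            # negative weight on the 4 neighbours
    out = (c + (n + s + w + e) * w_k) / (1.0 + 4.0 * w_k)
    return np.clip(out, 0.0, 1.0)

frame = np.random.default_rng(1).random((8, 8)).astype(np.float32)
print(adaptive_sharpen(frame))
```

As I understand it, CAS's upscale mode folds a resize into the same pass, which is part of why it runs on any GPU rather than needing tensor cores; whether that matches the exact shader AMD ships is not something I can confirm from this review.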
 
Looks like AMD fine wine, lol. The 5700 XT, a card you could have gotten for $349 at Dell not too long ago, beats a 2070 Super at 1080p, ties it at 1440p, and isn't even significantly behind a 2080. Looking at FidelityFX in RE3, it does give some nice improvements without the usual texture issues you get from regular sharpening. The performance mode for DLSS in this title looks like a non-starter and degrades the image, judging from their samples, while the quality mode definitely enhances the IQ while also improving performance. Maybe AMD will have a FidelityFX 2 for sharpening and upscaling. In any case, FidelityFX is better IQ-wise than DLSS 1.0, and maybe even the 1.5 used in the original Control. Plus, FidelityFX is not restricted to one brand of current-generation GPUs; even 1080 Tis, 1080s, etc. get the benefit of that tech, which is very cool.
CAS actually does upscale.
 