Big Navi VRAM specs leak: 16GB Navi 21 and 12GB Navi 22 go head to head with GeForce RTX 3080 and RTX 3090

If the above chart is correct, then in a nutshell: when a scene calls for G Box, Navi 21 will blaze through it while the Nvidia card slows down. Nvidia appears to have the upper hand in RT.

Reading AMD's patents on this topic, it appears Big Navi or a future AMD GPU will have the ability to redirect priorities.

Let’s say Ampere has 20% of its hardware dedicated to RT. When RT is not needed, that's 20% of the rendering power going to waste.
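To put rough numbers on that, here's a quick sketch. The 80/20 split is purely the hypothetical from this post, not a measured die breakdown, and the "fully shared pool" case is idealized:

```python
# Toy utilization math for a hypothetical GPU with a fixed 80/20 raster/RT hardware split.
# The 20% figure is just the assumption from the post above, not a real die analysis.

TOTAL_UNITS = 100      # abstract "units" of rendering hardware
FIXED_RT_UNITS = 20    # units usable only for ray tracing in the fixed design

def fixed_split_utilization(rt_share: float) -> float:
    """Fraction of the GPU doing useful work when RT hardware is dedicated."""
    raster_units = TOTAL_UNITS - FIXED_RT_UNITS
    # Raster units are always busy; RT units only help on the RT portion of the frame.
    return (raster_units + FIXED_RT_UNITS * rt_share) / TOTAL_UNITS

def shared_pool_utilization(rt_share: float) -> float:
    """An idealized fully shared pool keeps every unit busy regardless of the mix."""
    return 1.0  # ignores scheduling overhead and the efficiency of dedicated RT units

for rt_share in (0.0, 0.25, 0.5):
    print(f"RT share {rt_share:4.0%}: fixed split {fixed_split_utilization(rt_share):.0%}, "
          f"shared pool {shared_pool_utilization(rt_share):.0%}")
```

On a pure-raster frame that works out to 80% vs 100% of the hardware doing useful work, which is the "waste of 20%" being pointed at here. The flip side, which this toy math ignores, is that dedicated RT units handle BVH traversal and intersection tests far more efficiently than general shaders do.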

Interesting read: https://www.hardwaretimes.com/amd-big-navi-to-pack-80cus-5120-cores-20-cus-to-be-used-for-ray-tracing-20-tflops-of-fp32/

They are quoting CyberPunkCat, and everyone else has already debunked it. I believe it's going to be implemented like the Xbox, where the entire chip can dedicate resources to ray tracing. Haven't heard from CyberPunkCat for a while, and this really seems like wrong info. It's going off a statement from AMD that it will be a hybrid approach. Hybrid usually means a combination of two things, so it's likely the entire chip. I have no idea where CyberPunkCat made up the 20 CU number from "hybrid", lol.
 
I am hopeful, but with RDNA 2 / Big Navi I’ll believe it when I see it. Even if they match the 3080, they don’t have DLSS, which has been a great feature since 2.0. Additionally, it took a year for them to finally get the performance out of the 5700 XT that it should have had long ago.

That said, I do hope they succeed. More competition, the better. If they can match Nvidia in ray tracing, and come up with something like DLSS, all while being a better value, I would happily buy their card.
 
I think a lot of people are only waiting for Big Navi because Nvidia failed us so hard.

They hyped people up so much with their "cheap" GPUs, when the pricing was really just back in the range used for years before the RTX 2000 rip-off; it wasn't innovative or cheap at all. And now this "shortage" is going to drive prices up again, just like it did with their GTX 1000 line. The performance also wasn't what they showed, but that's not news; companies always use cherry-picked benchmarks and scores. The worst part is all the people who preorder parts they know nothing about yet and keep giving money to companies with these deplorable practices.
 

"Infinity Cache" is the marketing term. It is not clear what are the technical concepts covered under it.

There is a related patent for dynamic (adaptive) pooling (clustering) of the L1 caches.

This acts as a virtual layer in front of the L2 cache, thus avoiding costly L2 look-ups.

The L1 clustering configuration is dynamic: a sampling phase determines how much clustering/pooling of the L1 caches should happen.

AMD researchers claim roughly 50% power savings for a minimal increase in die size.

Performance effects vary from a 50% increase in bandwidth down to a 4% decrease in the worst case.

https://hardforum.com/threads/confi...-to-disrupt-4k-gaming.1992290/post-1044754256

There's a possibility that the Infinity Cache may be related to a patent that AMD filed last year on Adaptive Cache Reconfiguration Via Clustering. Subsequently, the authors published a paper on the topic. It talks about the possibility of sharing the L1 caches between GPU cores.

Traditionally, each GPU core has its own individual L1 cache, while the L2 cache is shared among all the cores. The suggested model proposes that each core is allowed to access the others' L1 caches. The objective is to optimize cache use by eliminating the data replicated in each slice of the cache.

The results are pretty amazing. Across a suite of 28 GPGPU applications, the new model improved performance by 22% (up to 52%) and energy efficiency by 49%.
(In the worst case, performance drops by 4%.)
(Area overhead is 0.09 mm²/core.)

https://www.tomshardware.com/news/a...e-big-navis-rumored-mediocre-memory-bandwidth
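To make the replication-elimination idea concrete, here's a toy software model of the lookup path. Everything in it (the slice count, the modulo "home slice" hash, the eviction rule) is invented for illustration, and the patent's sampling-based choice of cluster size isn't modeled; it just contrasts fully private L1 slices with fully clustered ones:

```python
# Toy model of clustered vs. private L1 slices; all parameters are illustrative only.

NUM_CORES = 4          # cores (or CUs) in one cluster of this toy model
L1_LINES_PER_SLICE = 8 # tiny caches so the effect is easy to see

class ToyGPU:
    def __init__(self, cluster: bool):
        self.cluster = cluster                       # shared ("clustered") vs private L1s
        self.l1 = [set() for _ in range(NUM_CORES)]  # one L1 slice per core
        self.l2_lookups = 0

    def _home_slice(self, core: int, addr: int) -> int:
        # Clustered mode: the address picks one "home" L1 slice for the whole
        # cluster, so each line is cached once instead of once per core.
        return addr % NUM_CORES if self.cluster else core

    def access(self, core: int, addr: int) -> None:
        slice_ = self.l1[self._home_slice(core, addr)]
        if addr not in slice_:
            self.l2_lookups += 1           # L1 miss: fall through to the costly L2 look-up
            if len(slice_) >= L1_LINES_PER_SLICE:
                slice_.pop()               # crude eviction to respect the slice capacity
            slice_.add(addr)

# All four cores stream over the same small working set (heavy data sharing).
working_set = list(range(8)) * 4
for clustered, label in ((False, "private L1 slices"), (True, "clustered L1 slices")):
    gpu = ToyGPU(cluster=clustered)
    for core in range(NUM_CORES):
        for addr in working_set:
            gpu.access(core, addr)
    print(f"{label}: {gpu.l2_lookups} L2 look-ups")
```

In this toy run the clustered configuration ends up with 8 L2 look-ups instead of 32, because the shared working set is cached once across the cluster rather than replicated four times. The 4% worst case quoted above presumably corresponds to the opposite situation: workloads with little data sharing, where going to a remote slice adds latency without removing much replication.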
 
The L1 clustering configuration is dynamic: a sampling phase determines how much clustering/pooling of the L1 caches should happen.

It makes a lot of sense to do this if you can get L1 to L1 latencies down.

Navi 10 already shares an L1 cache across the 10 CUs in a shader array. The private, CU-specific caches are called L0. I assume the paper is actually referring to those L0 caches.
 
Why is Navi 21 the higher-end part while Navi 22 is below it?...shouldn't the higher number signify the higher-end part?

Anyhow, I like the 16/12 GB VRAM...mainly because it forced Nvidia to release 20GB 3080 and 16GB 3070 parts :D

I have never been able to wrap my brain around AMD's naming scheme. I think it's akin to Boeing, where the size of the plane doesn't matter; it's just whatever comes out next. Hence why the 787 is smaller than the 777.
 
I have never been able to wrap my brain around AMD's naming scheme. I think it's akin to Boeing, where the size of the plane doesn't matter; it's just whatever comes out next. Hence why the 787 is smaller than the 777.
Nvidia's is much more logical, isn't it?
 