Navi 21 XT to clock up to 2.4GHz

No clue who Patrick Schur is, but if what he says is true, then that's impressive. 16GB of GDDR6 sounds rather expensive, however. Seems like it might be a tad overkill for something that's supposed to be a 3080 competitor. We'll see where prices land, but that doesn't make the card sound like it's going to be much cheaper.
 
Hoping these rumors prove true and the thing is a competitive beast. Feels like we've come a long way since the initial rumor of 'between 3070 and 3080 levels of performance'.

I've recently made the switch back to 1440p. I went from a 32" 1440p HP Omen to a 27" LG 4K panel, and now to a 32" BenQ 1440p 144Hz. So I'm not as into a 4K-slaying GPU anymore.
 
16GB is nice, but is anyone concerned about it being GDDR6 vs GDDR6X?

Not really, no. They could probably get away with less and still be just fine. GDDR6X supply is probably still pretty limited, so going with regular GDDR6 might mean they're able to get more cards out.
 
16GB is nice, but is anyone concerned about it being GDDR6 vs GDDR6X?
I think it's valid to wonder whether GDDR6 and the narrower memory bus they're supposed to be using will provide enough bandwidth to properly utilize 16GB; however, it sounds like they're confident that the new Infinity Cache will eliminate any bottleneck.
 
Sounds more and more like AMD will have options nestled right below and above the 3080 respectively: the 6800 XT and the 6800 (6900?) XTX.

All of this is sadly academic, however, since no one will be able to get their hands on retail cards until 2022.
 
I think it's valid to wonder whether GDDR6 and the narrower memory bus they're supposed to be using will provide enough bandwidth to properly utilize 16GB; however, it sounds like they're confident that the new Infinity Cache will eliminate any bottleneck.

Rumors seem pretty solid that it uses a caching system, so no, there really shouldn't be a concern. The proof will be in the benches, and we aren't that far away now.
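
For a sense of the raw numbers that cache has to make up for, here's a quick back-of-envelope comparison. The 16Gbps GDDR6 speed is an assumption on my part; the 3080 figures are its published specs.

```python
# Rough peak-bandwidth comparison (GDDR6 speed is assumed, not confirmed by the leak).
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

navi21 = peak_bandwidth_gbs(256, 16)   # rumored 256-bit GDDR6, assuming 16 Gbps modules -> 512 GB/s
rtx3080 = peak_bandwidth_gbs(320, 19)  # RTX 3080: 320-bit GDDR6X at 19 Gbps -> 760 GB/s

print(f"Navi 21 (rumored): {navi21:.0f} GB/s")
print(f"RTX 3080:          {rtx3080:.0f} GB/s")
print(f"Raw deficit: {(1 - navi21 / rtx3080):.0%}")  # ~33% less raw bandwidth
```

That ~33% raw deficit is exactly the gap the Infinity Cache rumor is supposed to paper over.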
 
https://www.techpowerup.com/273490/...y-confirmed-to-run-at-2-3-2-4-ghz-clock-250-w

And supposedly there is an XTX version with higher performance.

With the 5700XT clocking around 2GHz, that's a 20% clock increase on top of the doubled compute units.

Actually, it's even more. You're looking at 5700 XT AIB cards that clock around 2GHz with more power, but at stock the cards stay around 1900MHz max, and the reference card stayed around 1850-1900MHz. 2GHz on the 5700 XT is more the territory of top-end cards, not the norm.

Also, the 2.4GHz figure comes from an AIB testing it, so it's likely an overclocked model. I think by default they'll land around 2000-2100MHz out of the box, with headroom to overclock higher, and the XTX version will likely be clocked higher than the regular XT.
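
To put rough numbers on "doubled compute units plus a clock bump", here's a quick theoretical FP32 throughput sketch. The 80-CU count follows from the doubled-compute-units rumor, and the clocks below are my own assumptions based on the estimates in this thread.

```python
# Theoretical FP32 throughput: shaders * 2 ops/clock (FMA) * clock.
def fp32_tflops(compute_units: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    return compute_units * shaders_per_cu * 2 * clock_ghz / 1000

rx5700xt = fp32_tflops(40, 1.9)   # RX 5700 XT at a typical sustained ~1.9 GHz -> ~9.7 TFLOPS
navi21xt = fp32_tflops(80, 2.1)   # rumored 80 CUs at an assumed ~2.1 GHz out of the box -> ~21.5 TFLOPS

print(f"RX 5700 XT:           {rx5700xt:.1f} TFLOPS")
print(f"Navi 21 XT (rumored): {navi21xt:.1f} TFLOPS ({navi21xt / rx5700xt:.1f}x)")
```

Paper TFLOPS never translate 1:1 into frames, but it shows why people expect this to land well above the 5700 XT tier.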
 
If the XT is 250W, I could see AMD doing an XTX at 325W or so. That wouldn't be much more power hungry than the 3080.

The power draw for these high-end cards is starting to creep up. It ain't easy loving SFF like I do. At first I saw the short PCBs on those nVidia cards and got all excited, but there's a lot more planning required for all that heat.
 
No clue who Patrick Schur is, but if what he says is true, then that's impressive. 16GB of GDDR6 sounds rather expensive, however. Seems like it might be a tad overkill for something that's supposed to be a 3080 competitor. We'll see where prices land, but that doesn't make the card sound like it's going to be much cheaper.
Maybe; they could be making up for lower memory speed with a greater quantity of it.
 
Eh? GPUs generally love clock speed. These are going to be high-performing parts.

Subject to the power constraints.

AMD promised a 50% performance-per-watt improvement for RDNA 2 over RDNA 1.

I wonder how much sustained high clock speed is possible, or whether AMD managed to push that improvement to 60-70%.

They've had what seems like two years to optimise RDNA 2 now.
 
 
If the XT is 250W, I could see AMD doing an XTX at 325W or so. That wouldn't be much more power hungry than the 3080.
250W was not total board power, just total graphics power (TGP), so add a bit to that for the memory, power conversion, and board circuitry and you're looking at closer to 300W. Still not bad if it can perform at 3080 levels.
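
A rough way to sanity-check that ~300W figure; every component number below is my own assumption for illustration, not anything from the leak.

```python
# Back-of-envelope board power from a 250 W TGP (GPU only, per the leak's wording).
# All component figures are assumptions for illustration.
tgp_w = 250                  # rumored total graphics power (GPU)
gddr6_modules = 8            # 16 GB as 8x 2 GB modules on a 256-bit bus
watts_per_module = 2.5       # assumed GDDR6 power per module
fans_and_misc_w = 10         # assumed fans, display outputs, misc board logic
vrm_efficiency = 0.90        # assumed power-conversion efficiency

load_w = tgp_w + gddr6_modules * watts_per_module + fans_and_misc_w
board_power_w = load_w / vrm_efficiency   # conversion losses on top of the load
print(f"Estimated total board power: ~{board_power_w:.0f} W")  # ~310 W
```

Which lines up with the "closer to 300 watts" guess above.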
 
That was a great architecture with abysmal OS-level support. It's too old and too moot an argument at this point.
I used an FX-8320 as my main system for four years. For my main uses of multimedia encoding, goofing around with VMs, and learning about undervolting, it was a champ. It played the games I had at the time well, too. Windows 10's scheduler finally felt pretty solid for it, but Linux was the only platform where it reliably felt great.

Lord, I hope RT Navi is good.
 
250W was not total board power, just total graphics power (TGP), so add a bit to that for the memory, power conversion, and board circuitry and you're looking at closer to 300W. Still not bad if it can perform at 3080 levels.
With 16GB of RAM, I'd say you should expect something north of 300W.
 
Nope.

It's like being concerned your Ferrari is only 389hp compared to your friend's Ferrari that is 390hp.


Not quite. You're assuming they can pull off having as much cache on the die as an 8-chiplet Zen 2 Epyc (128MB of cache takes die space, and a massive interconnect). Will that really keep die space in check (over just going 384-bit), and not make Big Navi the "limited supply" part here?

They are also supposedly sticking with 256-bit GDDR6.

https://thesportsrush.com/amds-big-navi-gpu-specs-leaked-here-is-what-we-found-out/

Tell us how well the original Xbox One played games using DDR3 RAM feeding off a 32MB cache. Early games were pure shit compared to the PS4.

The drivers are going to have to be optimized as hell to make a massive cache work (and you've already seen what a cluster they were with little Navi... these will require just as much custom work before they get stable).

The closer Big Navi gets, the more unlikely it is to be impressive. At least Nvidia can eventually fix the GDDR6X supply issues.
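
For a rough sense of the die-space side of that question, here's a back-of-envelope estimate; the area-per-MB figure is a ballpark assumption for 7nm SRAM, not a known Navi 21 number.

```python
# Back-of-envelope: how much 7 nm die area might 128 MB of on-die cache cost?
# ~1 mm^2 per MB (cells plus tags/routing overhead) is an assumed round ballpark.
cache_mb = 128
mm2_per_mb = 1.0                 # assumption, not a confirmed figure
cache_area_mm2 = cache_mb * mm2_per_mb

navi10_area_mm2 = 251            # RX 5700 XT (Navi 10) die size, for scale
print(f"~{cache_area_mm2:.0f} mm^2 of cache vs. a {navi10_area_mm2} mm^2 Navi 10 die")
```

If the ballpark is even roughly right, the cache alone is a big chunk of a Navi 10, which is why the "does it keep die size in check" question is fair.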
 
Not quite. You're assuming they can pull off having as much cache on the die as an 8-chiplet Zen 2 Epyc (128MB of cache takes die space, and a massive interconnect). Will that really keep die space in check (over just going 384-bit), and not make Big Navi the "limited supply" part here?

They are also supposedly sticking with 256-bit GDDR6.

https://thesportsrush.com/amds-big-navi-gpu-specs-leaked-here-is-what-we-found-out/

Tell us how well the original Xbox One played games using DDR3 RAM feeding off a 32MB cache. Early games were pure shit compared to the PS4.

The drivers are going to have to be optimized as hell to make a massive cache work (and you've already seen what a cluster they were with little Navi... these will require just as much custom work before they get stable).

The closer Big Navi gets, the more unlikely it is to be impressive. At least Nvidia can eventually fix the GDDR6X supply issues.

You can only feed the GPU cores as fast as they can suck up the electrons from the VRAM. There's a point where GDDR6X was possibly a calculated sales pitch.

I.e., let's slap GDDR6X on these cards, even if they don't need it, because it has an X and is a little faster; we pay $5 more per GB and we can charge the consumer $30 per GB, and because it has an X and AMD doesn't, it must be better. = profit

In Western psychology, bigger is always better, right?

I feel like GDDR6X, in the case of the 3000 series, really just boils down to marketing jive.

384-bit must be better than 256-bit because it's mo' bigger! What does that 384 bits address? Your wallet. And I absolutely agree with the earlier meme about overhype posted above. Overhype for nV superiority.

The purpose of nVidia and AMD, or Walmart for that matter, is to make a profit: earn money, make people wealthy so they can buy big houses, faster cars, and retirement, right? So they are doing what it takes to make a profit.

Lisa Su completely flipped AMD into a total winner, and she is making Intel look like a has-been. nVidia is not immune to her explosive talent; she is targeting them directly as well.

Now here is where you're able to flame me, stating how I don't know what I'm talking about.
 
You can only feed the GPU cores as fast as they can suck up the electrons from the VRAM. There's a point where GDDR6X was possibly a calculated sales pitch.

I.e., let's slap GDDR6X on these cards, even if they don't need it, because it has an X and is a little faster; we pay $5 more per GB and we can charge the consumer $30 per GB, and because it has an X and AMD doesn't, it must be better.

In Western psychology, bigger is always better, right?

I feel like GDDR6X, in the case of the 3000 series, really just boils down to marketing jive.

384-bit must be better than 256-bit because it's mo' bigger.

Now here is where you're able to flame me, stating how I don't know what I'm talking about.

I'm sorry, you don't understand numbers. It seems to be your big failing.

Keep the "pointlessly-hopeful AMD fan" alive, I guess?

Navi had an impressive improvement over Polaris in memory compression, but I wouldn't expect any miracles from Navi 2. You have to feed over 2x the performance of the RX 5700 off that same memory bus.

Betting it all on a big new cache design could be a bigger risk than just going wider (384-bit).
 
Not quite. You're assuming they can pull off having as much cache on the die as an 8-chiplet Zen 2 Epyc (128MB of cache takes die space, and a massive interconnect). Will that really keep die space in check (over just going 384-bit), and not make Big Navi the "limited supply" part here?

They are also supposedly sticking with 256-bit GDDR6.

https://thesportsrush.com/amds-big-navi-gpu-specs-leaked-here-is-what-we-found-out/

Tell us how well the original Xbox One played games using DDR3 RAM feeding off a 32MB cache. Early games were pure shit compared to the PS4.

The drivers are going to have to be optimized as hell to make a massive cache work (and you've already seen what a cluster they were with little Navi... these will require just as much custom work before they get stable).

The closer Big Navi gets, the more unlikely it is to be impressive. At least Nvidia can eventually fix the GDDR6X supply issues.
The cache system in RDNA worked well... well enough that if you cut the bandwidth by 50%, you only lost ~20% performance on average. RDNA2 has an improved cache system/structure, so hopefully it's a non-issue. Obviously third-party benchmarks will be the final deciding factor on how well it works, but there is reason to believe it will work to a good extent. Memory overclocking will also give us an indication of how much performance may have been left on the table by going with slower/narrower memory, but it's hard to know what the cost difference may have been.
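
A simple way to think about why a big on-die cache can offset a narrower bus; the hit rates below are made-up illustrative values, not AMD numbers.

```python
# Toy model: every cache hit is traffic the GDDR6 bus never sees,
# so effective bandwidth ~= raw DRAM bandwidth / (1 - hit_rate).
def effective_bandwidth(dram_gbs: float, hit_rate: float) -> float:
    return dram_gbs / (1.0 - hit_rate)

raw = 512  # GB/s for 256-bit GDDR6 at an assumed 16 Gbps
for hit_rate in (0.0, 0.4, 0.6):
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth(raw, hit_rate):.0f} GB/s effective")
# Even a 40% hit rate would put effective bandwidth above a 384-bit GDDR6 bus (~768 GB/s).
```

It's a toy model (real hit rates vary wildly by game and resolution), but it shows why the bandwidth-scaling result above makes people optimistic.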
 
For work we're going to be locked to nVidia for the next while, or at least until we do a full refresh of both hardware and software, but for home I'm somewhat excited about this.
 
My question is: what's going to stop the Navi 2 launch from being as frustrating as the Nvidia launch? Will the bot scalpers be out in force for these as well?
The only reason scalpers were able to do what they did is because nVidia had no real stock. As long as AMD has a decent amount of stock, scalpers won't be in a position to make much from scalping and will have no reason to buy up the few cards that are released. nVidia's non-existent stock was the reason scalping was a problem.
 
If this is anywhere near as good at pro workloads as the VII was, then these will be the go-to cards. I am EXCITE.
Nvidia has massively bolstered raw compute this product cycle, so I'm also keen to see how RDNA2 stacks up. I'm pretty excited based on the buzz and an already-inspiring bump from the RX 5700 series. If AMD delivers a solid RTX 3070-class card with 12+GB of RAM that isn't a volcano, I'll happily grab one, and hope that someday ROCm support emerges on Linux... 'Til then, that's what the Vega FE I snagged on these forums is for.
 
I'm sorry, you don't understand numbers. It seems to be your big failing.

Keep the "pointlessly-hopeful AMD fan" alive, I guess?

Navi had an impressive improvement over Polaris in memory compression, but I wouldn't expect any miracles from Navi 2. You have to feed over 2x the performance of the RX 5700 off that same memory bus.

Betting it all on a big new cache design could be a bigger risk than just going wider (384-bit).

So, you have no real argument?
 