Navi 21 XT to clock up to 2.4GHz

The bots can do those now, and if they run them out of Amazon's AWS for dirt cheap then they can also randomize the outgoing IP easily enough. So each can seem like its own person.
How do bots do 2 factor to a phone?

And as others pointed out days ago, they could use AI to profile the difference between a real person interacting with the system and a bot. Speed alone would identify a bot.
 
If Navi 21 XT pulls 255W, performs a bit worse than the 3080, and costs the same or a little less, then I'll likely grab one as an upgrade to my 2080 Super. I'm wary of the 3080's 320W+ draw so this would be a marked performance improvement with same/very similar thermals.

Frankly, if Navi 21 XT is within 10ish% of the 3080's performance for 50-70W less, then I can understand why Nvidia rushed so few units out the door so quickly.
Again, the 255W was only GPU power, not board power. Expect board power to be more. I would imagine ~300W depending on exactly how much the RAM pulls... being 16GB on a 256-bit bus, it's hard to really say.
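
Back-of-the-envelope, with every non-GPU number being a guess on my part (per-chip GDDR6 draw, misc board power, VRM efficiency):

```python
# Rough board-power estimate starting from the rumored GPU-only figure.
# Every non-GPU number below is a guess for illustration, not a leak.

gpu_power_w = 255            # rumored GPU-only ("TGP"-style) draw
gddr6_chips = 8              # 16GB on a 256-bit bus = 8x 16Gbit (2GB) packages
power_per_chip_w = 2.5       # guessed draw per GDDR6 package at ~16Gbps
misc_power_w = 10            # guessed fans, display outputs, misc board logic
vrm_efficiency = 0.92        # guessed VRM conversion efficiency

memory_power_w = gddr6_chips * power_per_chip_w
board_power_w = (gpu_power_w + memory_power_w + misc_power_w) / vrm_efficiency
print(f"Estimated board power: ~{board_power_w:.0f} W")  # ~310 W with these guesses
```

Which lands right in that ~300-320W ballpark.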
 
The AMD FX 8370 - Bulldozer overclocked like crazy, and got very, very little in return for it.

I am saying you can overclock RDNA 2 and compare it to RDNA 2. Last I checked, GPUs have done fine when OC-ing. I'm not comparing GPU to CPU architecture. Bulldozer was shit.
 
The faster the GPU clock, the higher the effective latency to memory becomes, so cache becomes very important. The network-connected L1 cache of RDNA2 should dramatically help keep the shaders busy with fewer accesses to the RAM.

Now, will this pose a number of issues? No clue; I do look forward to seeing it tested and how it all comes together. If 2400MHz is correct, with maybe 2500MHz on an OC, we are looking at a ~25% higher clock speed than Navi or Ampere (the 3080), with up to double the number of shaders of Navi. Maybe a lot of fun is coming.
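
Rough math on both points, assuming ~1905MHz for the 5700 XT boost and ~1900MHz for typical in-game Ampere clocks (my guesses, could be off), plus a guessed DRAM round-trip latency:

```python
# Rough math behind the clock comparison and the latency point.
# Reference clocks are my assumptions (RX 5700 XT boost ~1905MHz, typical
# in-game Ampere clocks ~1900MHz), as is the DRAM round-trip latency.

rumored_clock_mhz = 2400
references = {"Navi 10 (5700 XT)": 1905, "Ampere (3080, in-game)": 1900}

for name, ref_mhz in references.items():
    print(f"vs {name}: {rumored_clock_mhz / ref_mhz - 1:+.0%} clock speed")

# The same DRAM round-trip (fixed in nanoseconds) costs more GPU cycles at a
# higher clock, which is why on-die cache matters more as clocks go up.
dram_latency_ns = 300
for clock_mhz in (1905, 2400):
    stall_cycles = dram_latency_ns * clock_mhz / 1000
    print(f"{clock_mhz}MHz: a {dram_latency_ns}ns miss ~ {stall_cycles:.0f} cycles")
```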
 
Navi 21 actually 320W+, memory is 16Gbit GDDR6

Sounds more plausible than 255W; hitting those rumored clocks is gonna eat a lot of power. Memory bandwidth would be 512GB/s with 16Gbit/s on a 256-bit bus, which seems awfully low, but AMD knows what they're doing, so they must have decided the tradeoff of a simpler/cheaper board design is worth it and their cache system will offset the lack of bandwidth. I'm still hoping for a ridiculous liquid-cooled version with a thicc 240mm rad lol
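
The bandwidth math, for anyone who wants it (the 3080 comparison uses its 19Gbit/s, 320-bit config):

```python
# Bandwidth = per-pin data rate x bus width. The quoted 512GB/s:
gddr6_rate_gbps = 16          # Gbit/s per pin
navi21_bus_bits = 256         # rumored bus width
print(f"Navi 21: {gddr6_rate_gbps * navi21_bus_bits / 8:.0f} GB/s")   # 512 GB/s

# Versus the RTX 3080's GDDR6X config (19 Gbit/s on a 320-bit bus):
print(f"RTX 3080: {19 * 320 / 8:.0f} GB/s")                           # 760 GB/s
```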
 
No clue who Patrick Schur is, but if what he said is true then that's impressive. 16GB of GDDR6 sounds rather expensive, however. Seems like it might be a tad overkill for something that's supposed to be a 3080 competitor. We'll see where prices land, but that does not make the card sound like it's going to be much cheaper.
Nvidia's 10GB is stingy; there are already a few games that perform best with 12GB of memory. It is a good move, and GDDR6 is not that expensive. You are thinking of GDDR6X.
 
Nvidia's 10GB is stingy; there are already a few games that perform best with 12GB of memory. It is a good move, and GDDR6 is not that expensive. You are thinking of GDDR6X.
Yup, most of the Assassin's Creed stuff can easily hit those numbers.
 
Since RDNA2 is very efficient, this would indicate that AMD saw how close they were to 3090 and decided to go all out and compete at the top end.
Kinda reminds me of when AMD did the liquid Vega 64 with maxed-out clocks to go after the 1080 Ti, except this time the architecture looks more promising and the gap between the 3080 and 3090 is a lot smaller than 1080 -> 1080 Ti. I would be surprised if an RDNA2 part beats the 3090, but if they can match the 3080 - which looks certain - then a binned, high-clock halo card could shake things up, given the huge price gap and low performance gap between the 3080 and 3090.
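
Rough numbers on that, using launch MSRPs and assuming a ~10% 4K gap (my ballpark, not a measured figure):

```python
# Price gap vs performance gap between the 3080 and 3090 at launch MSRPs.
# The ~10% 4K performance delta is my ballpark assumption, not a measured number.
price_3080_usd, price_3090_usd = 699, 1499
assumed_perf_delta = 0.10

price_delta = price_3090_usd / price_3080_usd - 1
print(f"3090: {price_delta:+.0%} price for roughly {assumed_perf_delta:+.0%} performance")
# Plenty of room in between for a binned, high-clock halo card.
```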
 
Nvidia's 10GB is stingy; there are already a few games that perform best with 12GB of memory. It is a good move, and GDDR6 is not that expensive. You are thinking of GDDR6X.

Yup, most of the Assassin's Creed stuff can easily hit those numbers.
So why do the game benchmarks show little difference between the 3080 and the 3090, even though the latter has more than twice the memory of the former? The Eurogamer 3090 review shows 7 frames between them at 4K.
MS Flight Sim 2020 is the same; the actual difference is negligible.
 
You can only feed the GPU cores as fast as they can suck up the electrons from the VRAM. There is a point where possibly GDDR6X was a calculated sales pitch.

I.e. let's slap GDDR6X on these cards, even if they don't need it, because it has an X and is a little faster; we pay $5 more per GB and we can charge the consumer $30 per GB, and because it has an X and AMD doesn't, it must be better. = profit
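
To be clear, those per-GB figures are my own guesses, not actual BOM or pricing data, but taken at face value the play on a 10GB card looks like this:

```python
# Taking my speculated per-GB figures at face value on a 10GB card.
# These are guesses, not actual BOM or pricing data.
capacity_gb = 10
extra_cost_per_gb = 5      # speculated GDDR6X premium over GDDR6, per GB
charged_per_gb = 30        # speculated amount passed on to the consumer, per GB

extra_cost = capacity_gb * extra_cost_per_gb      # $50 more on the BOM
extra_charged = capacity_gb * charged_per_gb      # $300 more charged
print(f"Speculated extra margin from the X: ${extra_charged - extra_cost}")  # $250
```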

In Western psychology bigger is always better, right?

I feel like GDDR6X in the case of the 3000 series really just boils down to marketing jive.

I don't think that makes a ton of sense. You're assuming the average consumer doesn't know the difference between GDDR6 and GDDR6X, while also assuming they know enough about graphics cards to check what type of VRAM it has. Most likely the average consumer isn't going to notice at all, so the only reason Nvidia would spend the extra $50 per card is if it shows up in benches. On top of that, GDDR6X may be supply constrained. Nvidia could possibly have produced more cards (and made more money) by using GDDR6 over GDDR6X.

My guess, if it was really just for marketing purposes - they would have used GDDR6 and come up with a fancy marketing term for it. They don't need to spend more money to put an X somewhere.

On the flip side of your argument - what if there really is tangible benefit from GDDR6X over GDDR6? Maybe AMD is the one playing marketing games, knowing the average consumer will see 16GB on their card and assume it's better than 10GB on Nvidia's card without checking benches to see if it's true.
 
I don't think that makes a ton of sense. You're assuming the average consumer doesn't know the difference between GDDR6 and GDDR6X, while also assuming they know enough about graphics cards to check what type of VRAM it has. Most likely the average consumer isn't going to notice at all, so the only reason Nvidia would spend the extra $50 per card is if it shows up in benches. On top of that, GDDR6X may be supply constrained. Nvidia could possibly have produced more cards (and made more money) by using GDDR6 over GDDR6X.

My guess, if it was really just for marketing purposes - they would have used GDDR6 and come up with a fancy marketing term for it. They don't need to spend more money to put an X somewhere.

On the flip side of your argument - what if there really is tangible benefit from GDDR6X over GDDR6? Maybe AMD is the one playing marketing games, knowing the average consumer will see 16GB on their card and assume it's better than 10GB on Nvidia's card without checking benches to see if it's true.

These cards were/are not targeting average consumers. NVidia knows the 3080s/90s are for those that know.

The average consumer doesn't spend $700 or $1500 on a GPU. They spend $700 on an entire prebuilt PC, or $1500 on a Mac or higher-end prebuilt PC. They are cool with it just having a sticker on the case that says "powered by GeForce" and have no idea what model of GPU is in the thing, as long as their kid can play Fortnite.

NVidia in my opinion is working over enthusiasts like chumps with GDDR6X, a RAM that is extremely hard to get and supply, and they knew that going in. These cards were released just to counter AMD. Nvidia wasn't looking for volume sales with these cards. They knew exactly what AMD was building and wanted to have bigger, faster numbers on the market to take all the mindshare they could before AMD could release. NVidia's corporate CIA agents, per se, knew 10x more about Big Navi's details than any rumor-monger site like videocardz or wccftech's forum leakers ever knew.
As far as profit and the stock market go:

That will be the 3060 and 3050 Ti equivalents. That is the cash cow for their GPUs, not 3090s.

In my opinion the [H] forum is an echo chamber of sorts, but outside the echo chamber is where reality is. Not saying anything negative, just what I believe.
 
These cards were/are not targeting average consumers. NVidia knows the 3080s/90s are for those that know.
OK, but now you're moving the goalposts. Previously you said Nvidia used GDDR6X because people don't know what it is, but now these cards are for people that know?
NVidia in my opinion is working over enthusiasts like chumps with GDDR6X, a RAM that is extremely hard to get and supply, and they knew that going in. These cards were released just to counter AMD. Nvidia wasn't looking for volume sales with these cards.
As you said in your earlier post, Nvidia's whole goal is to make money. How does this strategy help them make money? They're spending extra $$ on RAM they don't need, to build hype for cards they can't sell?
Lisa Su completely flipped AMD into a total winner and she is making Intel look like a has-been; nVidia is not immune to her explosive talent. She is targeting them directly as well.
Ok now this just sounds like a religion or cult 🤪 I like Lisa Su as much as the next person, and I like the direction AMD is headed, but come on!
 
OK, but now you're moving the goalposts. Previously you said Nvidia used GDDR6X because people don't know what it is, but now these cards are for people that know?

As you said in your earlier post, Nvidia's whole goal is to make money. How does this strategy help them make money? They're spending extra $$ on RAM they don't need, to build hype for cards they can't sell?

Ok now this just sounds like a religion or cult 🤪 I like Lisa Su as much as the next person, and I like the direction AMD is headed, but come on!
The first one yes, I could have worded my reply more clearly.

Nvidia is a $300 billion enterprise; they purposefully lose money to gain far more later. If you haven't taken business classes I don't have the energy to explain.

And it's not a cult, it's a fact. I invest in stocks and AMD is in my portfolio. I'm cautiously but highly optimistic about Su's performance.
 
Spoken like a true cult member :D Sorry, I'm just giving you a hard time. I know people can get really attached to these companies.
Oh God, sometimes I have to catch myself because I am a fanboy at times and I have to tell myself STOP IT haha

But I'm wrong more than I want to be. What I say on these forums carries no authority on the subject matter.
 
Allocation is not use.
You do realize this doesn't actually rebut what I said, right? No, allocation doesn't mean in use. However, a lack of memory still has an effect. This has been tested and proven. It's a nonsensical statement because without context it doesn't mean anything.

If that's the case then save money and buy a card with the lowest amount of memory available. See how that works out for you.
 
So why do the game benchmarks show little difference between the 3080 and the 3090, even though the latter has more than twice the memory of the former? The Eurogamer 3090 review shows 7 frames between them at 4K.
MS Flight Sim 2020 is the same; the actual difference is negligible.
They do. You need the right game. If you pick a game that has minimal FOV objects then of course it doesn't matter as much. Having people say memory means absolutely nothing on this forum is quite ridiculous, but hey, feel free to parrot "memory doesn't matter." Obviously they add memory as a joke. 4GB? Why not just 1GB? I'm sure that's enough, right? ;)
 
You can only feed the GPU cores as fast as they can suck up the electrons from the VRAM. There is a point where possibly GDDR6X was a calculated sales pitch.

I.e. let's slap GDDR6X on these cards, even if they don't need it, because it has an X and is a little faster; we pay $5 more per GB and we can charge the consumer $30 per GB, and because it has an X and AMD doesn't, it must be better. = profit
As someone that knows virtually nothing about this: is that remark true for what I imagine will be NVidia's main market for GDDR6X in the near future, deep learning/rendering farms and whatnot?

Because I guess it has a lot of potential; it didn't deliver per watt or per cost, but they still went with it anyway, at least on the gaming side. But on some metrics those cards are so much more powerful than the 2080 Ti that it makes you wonder if that faster RAM doesn't play a role.
 
They do. You need the right game. If you pick a game that has minimal FOV objects then of course it doesn't matter as much. Having people say memory means absolutely nothing on this forum is quite ridiculous, but hey, feel free to parrot "memory doesn't matter." Obviously they add memory as a joke. 4GB? Why not just 1GB? I'm sure that's enough, right? ;)
Has anyone actually been saying memory doesn't matter? I think most people understand that it does, but that you don't gain any advantage by having more than you need.
 
Navi 21 actually 320W+, memory is 16Gbit GDDR6

Sounds more plausible than 255W; hitting those rumored clocks is gonna eat a lot of power. Memory bandwidth would be 512GB/s with 16Gbit/s on a 256-bit bus, which seems awfully low, but AMD knows what they're doing, so they must have decided the tradeoff of a simpler/cheaper board design is worth it and their cache system will offset the lack of bandwidth. I'm still hoping for a ridiculous liquid-cooled version with a thicc 240mm rad lol
Igor is the same person that said AMD was only competing with the 3070 and we know that's wrong.
 
Has anyone actually been saying memory doesn't matter? I think most people understand that it does, but that you don't gain any advantage by having more than you need.
Did I say you need to have MORE THAN YOU need? Did I?
 
Did I say you need to have MORE THAN YOU need? Did I?
No, you said people were claiming "memory means absolutely nothing," which I don't believe to be true. I haven't seen anyone claiming anything remotely close.
 
16GB is nice, but is anyone concerned about it being GDDR6 vs GDDR6X?

I don't think I've ever been memory bandwidth limited on any GPU I've ever owned, so not really that concerned, no.

Same reason I questioned the benefit of HBM for consumer applications when that was the way AMD was going a few years back.
 
As someone that knows virtually nothing about this: is that remark true for what I imagine will be NVidia's main market for GDDR6X in the near future, deep learning/rendering farms and whatnot?

Because I guess it has a lot of potential; it didn't deliver per watt or per cost, but they still went with it anyway, at least on the gaming side. But on some metrics those cards are so much more powerful than the 2080 Ti that it makes you wonder if that faster RAM doesn't play a role.
For 90% of the AI workloads out there you need lots of memory, and the faster the better, since almost everything is done right on the card; the AI upscaling stuff out there does a great job on NVidia's gear and demonstrates this very well.
 
That's really high. I'll wait to see reviews....

This is when I miss those long-form, written reviews from the [H].

I understand why they don't happen now, but I'm old and I miss them.

Ditto.

All the ad money has gone to YouTube, and it's a crying shame, because video is a terrible format for GPU/CPU/Motherboard reviews.

This is why I started giving via Patreon while HardOCP was still up, in a futile attempt to counteract the trend.
 
I don't think I've ever been memory bandwidth limited on any GPU I've ever owned, so not really that concerned, no.

Same reason I questioned the benefit of HBM for consumer applications when that was the way AMD was going a few years back.
I'm sure there was a specific set of use cases that NVidia found where the increased memory speeds made enough of a difference to warrant the design choice; the question is whether most gamers will encounter them. But alternatively, starting with GDDR6X out of the gate gives them the option, later on, to release cards with slower non-X memory at a cheaper price, or with more memory at the same price, should the need arise, and if there is minimal impact on actual gaming then yay?
 
I'm sure there was a specific set of use cases that NVidia found where the increased memory speeds made enough of a difference to warrant the design choice; the question is whether most gamers will encounter them. But alternatively, starting with GDDR6X out of the gate gives them the option, later on, to release cards with slower non-X memory at a cheaper price, or with more memory at the same price, should the need arise, and if there is minimal impact on actual gaming then yay?

Yeah, I mean, Nvidia is doubling down HARD on raytracing though. I have next to no experience with ray tracing, so I don't know how that changes the memory bandwidth equation. For raster, however, I have always seen great gains from increased core clocks, but only very marginal gains from increased VRAM speeds.
 