Navi 21 XT to clock up to 2.4GHz

I'm sure there was a specific set of use cases where Nvidia found the increased memory speeds made enough of a difference to warrant the design choice; the question is whether most gamers will encounter them. But alternatively, starting with GDDR6X out of the gate gives them the option, later on, to release cards with slower non-X memory at a cheaper price, or with more memory at the same price, should the need arise. And if there's minimal impact on actual gaming, then yay?

You're not going to see GDDR6X on the 3060 and 3050 Ti cards. Those are the cash-crop cards for Nvidia, especially going by the Steam hardware survey.

As far as the general user goes, they see a 30% uplift in performance, but they don't see the little old man behind the curtain. I'm betting it takes voodoo magic like DLSS 2 to reach that level.

I'm not shilling for AMD here, just being fair. It would appear that AMD doesn't have DLSS-like tech as of now, and their card demo teaser shows raw raster performance faster than Nvidia's 3080 with all of its voodoo magic tricks enabled. That means AMD has, so far, a much more powerful card, looking at this objectively.

So I have to ask: with all the voodoo magic on or off, at least for gaming, does (X) mean anything other than a bragging bullet point?
 
Yeah, I mean, Nvidia is doubling down HARD on raytracing though. I have next to no experience with ray tracing, so I don't know how that changes the memory bandwidth equation. For raster, however, I have always seen great gains from increased core clocks, but only very marginal gains from increased VRAM speeds.
It really depends on the ray-tracing algorithms used; there are a few different methods out there with different demands, but many of the AI and predictive ones do need fast memory, because they skip much of the super-accurate number crunching in favor of approximations and spit results out as fast as possible. In practice you get 90% of the quality at 300% of the speed, but you need to refresh the memory on it pretty hard.
 
You are getting overly worked up over a simple statement. If it's proven that modern AC (and others) NEED 10+ GB, then show the proof. Otherwise your words are nothing more than hot air.
I'm not worked up at all. So I'm confused: does memory matter or not?
 
No you said people were claiming "memory means absolutely nothing" which I don't believe to be true. I haven't seen anyone claiming anything remotely close.
Well then, what's the point being made? "Allocation doesn't mean use" tells me nothing, as I wasn't implying that it did. So what's the point?
 
Allocation is not use.
People still fail to understand this.

Yep.

Most modern game engines leave stuff that is not in active use in VRAM, either as a matter of sloppy housekeeping or for "just in case it is needed again" purposes.

This does not mean that you would lose frame rate at all by having less VRAM than you see allocated during gameplay.

Depending on how the game is designed, however, there may be a slight improvement in load times, as textures and shaders already in VRAM don't need to be loaded again.
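For what it's worth, you can read the same allocation figure the overlays show straight from the driver, which only tracks what it has handed out, not what the game touches each frame. A minimal sketch, assuming the nvidia-ml-py bindings (pip install nvidia-ml-py):

```python
# Reads device-level VRAM counters via NVML. "used" here means allocated by
# the driver -- it cannot distinguish memory a game actively touches from
# memory parked "just in case it is needed again".
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU
info = pynvml.nvmlDeviceGetMemoryInfo(handle)      # total / used / free, in bytes

print(f"total:     {info.total / 2**30:.1f} GiB")
print(f"allocated: {info.used / 2**30:.1f} GiB (not necessarily in active use)")
print(f"free:      {info.free / 2**30:.1f} GiB")

pynvml.nvmlShutdown()
```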
 
Yep.

Most modern game engines leave stuff that is not in active use in VRAM, either as a matter of sloppy housekeeping or for "just in case it is needed again" purposes.

This does not mean that you would lose frame rate at all by having less VRAM than you see allocated during gameplay.

Depending on how the game is designed, however, there may be a slight improvement in load times, as textures and shaders already in VRAM don't need to be loaded again.
Yeah, managing garbage collection is a lost art; DX12U goes a long way towards automating and simplifying it.
 
Yep.

Most modern game engines leave stuff that is not in active use in VRAM, either as a matter of sloppy housekeeping or for "just in case it is needed again" purposes.

This does not mean that you would lose frame rate at all by having less VRAM than you see allocated during gameplay.

Depending on how the game is designed, however, there may be a slight improvement in load times, as textures and shaders already in VRAM don't need to be loaded again.

The only real thing I could see excess VRAM being used for, in the case of gaming, is that DirectStorage thingy.
 
Well then, what's the point being made? "Allocation doesn't mean use" tells me nothing, as I wasn't implying that it did. So what's the point?
You seem to be the one intent on making a point, so why don't you tell me? Part of your point seemed to be the claim that people think "memory means absolutely nothing", but now you're deflecting.

Best I can gather, you think AC performs best with 12GB of memory. Evidently there are some people who disagree with you on the basis that "memory means absolutely nothing" but they're hiding somewhere, out of sight.

I am curious about your claim. We don't have any 12GB variants of the 3080 to test with, so it seems pretty bold to say it would unequivocally work better.
 
You're dodging, provide proof to back up the claims and stop trying to deflect.
I'm not dodging. I'm proving how stupid the statement is. Are you calculating how much memory you're going to need per game before you buy the video card? Are you contacting the developer?

No, you're not. Regardless of whether textures are reused or go unused, the metric developers give you, from now until the end of time, is their recommendation. Say you can prove that a texture goes unused or gets reused; what are you going to do with that info? Buy a card with less memory? What's the action to take? What's your recommendation? Are you right-sizing per game?

Repeating something that Gamers Nexus said without understanding what to do with it means nothing.
 
The only real thing I could see excess VRAM being used for, in the case of gaming, is that DirectStorage thingy.
https://www.kitguru.net/components/...s/nvidia-rtx-3080-founders-edition-review/24/

That shows VRAM usage in a number of games; how much VRAM is needed to keep gameplay smooth is another thing. I would think someone would have a program out there to consume VRAM without putting any load on the GPU, to see when a given game has issues maintaining frame rates or frame times. Of course, such vital information, or even the thought of that aspect, escapes reviewers. I have done tests independently using other programs to consume VRAM, like 3DMark, which when minimized keeps its VRAM allocated while having very little impact on GPU usage. It's a pain to do and not clean, but some games do only allocate what they need, and they start to have erratic frame rates if there isn't enough.
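A tool like that is easy enough to improvise, for what it's worth. A rough sketch of the idea, assuming PyTorch with a CUDA build (the ballast size is a made-up example):

```python
# Pins a fixed chunk of VRAM while generating essentially no GPU load, so the
# game under test has that much less memory to work with.
import time
import torch

BALLAST_GIB = 4  # hypothetical: shrink a 10 GB card to roughly 6 GB usable

ballast = torch.empty(BALLAST_GIB * 2**30, dtype=torch.uint8, device="cuda")
print(f"Holding {BALLAST_GIB} GiB of VRAM; start the game, Ctrl+C to release.")
try:
    while True:
        time.sleep(60)   # idle: near-zero GPU utilization, memory stays claimed
except KeyboardInterrupt:
    pass                 # tensor is freed when the process exits
```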

In any case, with most of the current crop of games at 4K, 10GB is good enough. I would have been much more pleased if the 3080 had a 384-bit bus with 16Gbps GDDR6, which would give about the same bandwidth as the GDDR6X setup, use less energy, and offer 2GB more memory (12GB total). Since I keep my cards in use for 3-5 years in most cases, 10GB is not enough for me long term. I don't have a crystal ball, but trends would indicate that will be the case. For those who update their GPU frequently, the 3080 is a very nice card; as for how it compares to what AMD has, we have to wait for the results.
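Back-of-the-envelope, the bandwidth claim checks out; a trivial sanity check (the shipping 3080 is 320-bit GDDR6X at 19Gbps):

```python
# Memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
def bandwidth_gb_s(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits * rate_gbps / 8

print(bandwidth_gb_s(320, 19))  # 3080 as shipped: 760.0 GB/s
print(bandwidth_gb_s(384, 16))  # hypothetical 384-bit GDDR6 card: 768.0 GB/s
```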
 
You seem to be the one intent on making a point, so why don't you tell me? Part of your point seemed to be the claim that people think "memory means absolutely nothing", but now you're deflecting.

Best I can gather, you think AC performs best with 12GB of memory. Evidently there are some people who disagree with you on the basis that "memory means absolutely nothing" but they're hiding somewhere, out of sight.

I am curious about your claim. We don't have any 12GB variants of the 3080 to test with, so it seems pretty bold to say it would unequivocally work better.
If you believe that any amount of memory is fine, go for it, dude.
 
I'm not dodging. I'm proving how stupid the statement is. Are you calculating how much memory you're going to need per game before you buy the video card? Are you contacting the developer?

No, you're not. Regardless of whether textures are reused or go unused, the metric developers give you, from now until the end of time, is their recommendation. Say you can prove that a texture goes unused or gets reused; what are you going to do with that info? Buy a card with less memory? What's the action to take? What's your recommendation?

Repeating something that Gamers Nexus said without understanding what to do with it means nothing.
YOU made the claim that AC games need more than 10GB. My reply was directly addressing that. Then you went off on some bullshit rant about something no one said, in an attempt to justify not backing up your claim with proof. You continue to deflect here and go off on your made-up claim, which anyone with a functioning brain can see was never made in the first place.
 
You seem to be the one intent on making a point, so why don't you tell me? Part of your point seemed to be the claim that people think "memory means absolutely nothing", but now you're deflecting.

Best I can gather, you think AC performs best with 12GB of memory. Evidently there are some people who disagree with you on the basis that "memory means absolutely nothing" but they're hiding somewhere, out of sight.

I am curious about your claim. We don't have any 12GB variants of the 3080 to test with, so it seems pretty bold to say it would unequivocally work better.
It's pretty easy to downclock a 3090 to run at 3080 speeds while keeping all that memory, to test whether the difference is noticeable. Given the average performance difference between the two in gaming, I would say a resounding no, but until I find a reputable review that has done this I can't speak with any authority on the matter. That said, any game that doesn't allocate 100% of the VRAM it can may hit bottlenecks when background processes try to claim some for themselves. The game may only need 4GB, but if it doesn't claim all 8, 10, 12, 16 or whatever, it can lead to problems if (a) the developers have done a bad job at garbage collection, or (b) your sprawling list of active Chrome tabs decides it wants in on the VRAM action for whatever reason. Then out of nowhere your game crashes, hangs, or the FPS goes to the dumps, all because the game tried to be too efficient and didn't claim what was there.
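For anyone wanting to try the clock half of that experiment, recent NVIDIA drivers let you pin core clocks from the command line. A sketch, assuming admin rights and a driver with clock-locking support; the target values are illustrative, and note it does nothing about the 3090's extra cores or wider bus:

```python
# Lock the 3090's core clock range near a 3080's rated 1710 MHz boost, run a
# benchmark with the full 24 GB still available, then restore defaults.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["nvidia-smi", "--lock-gpu-clocks=210,1710"])  # cap boost near 3080 levels
input("Run your benchmark now, then press Enter to restore defaults...")
run(["nvidia-smi", "--reset-gpu-clocks"])
```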
 
If you believe that any amount of memory is fine. Go for it dude.
Thanks man. I definitely will. And while we're just randomly putting words in people's mouths, enjoy the trip to Canada you said you were taking.
 
It's pretty easy to downclock a 3090 to run at 3080 speeds while keeping all that memory, to test whether the difference is noticeable. Given the average performance difference between the two in gaming, I would say a resounding no, but until I find a reputable review that has done this I can't speak with any authority on the matter. That said, any game that doesn't allocate 100% of the VRAM it can may hit bottlenecks when background processes try to claim some for themselves. The game may only need 4GB, but if it doesn't claim all 8, 10, 12, 16 or whatever, it can lead to problems if (a) the developers have done a bad job at garbage collection, or (b) your sprawling list of active Chrome tabs decides it wants in on the VRAM action for whatever reason. Then out of nowhere your game crashes, hangs, or the FPS goes to the dumps, all because the game tried to be too efficient and didn't claim what was there.

Would that be apples to apples, given the extra CUDA cores the 3090 has over the 3080, and the 384-bit bus? I wonder if you could instead limit the 3090's VRAM and run the test that way.

It's certainly something I'd be interested in given all the claims floating around right now.
 
YOU made the claim that AC games need more than 10GB. My reply was directly talking about that. Then you went off on some bullshit rant about something no one said in an attempt to justify not backing up your claim with proof. You continue to deflect here and go off on your made up claim that anyone with a functioning brain can see was never made in the first place.
I'm basing this on having the game myself. If I increase the FOV, the amount of memory goes up. It starts off at 4GB and goes past 8GB; once it does, there's noticeably more stuttering. I'm not going to load up the game because someone on this forum doesn't believe me. If you are going to start off by not believing someone on here, then fine: from now until the end of time I will assume you're lying and being disingenuous. That help?
 
Thanks man. I definitely will. And while we're just randomly putting words in people's mouths, enjoy the trip to Canada you said you were taking.
I think that's what you did, not me, so I guess enjoy your trip.
 
Would that be apples to apples, given the extra CUDA cores the 3090 has over the 3080, and the 384-bit bus? I wonder if you could instead limit the 3090's VRAM and run the test that way.

It's certainly something I'd be interested in given all the claims floating around right now.
It would certainly be closer, more like Gala vs. Fuji; I mean, they are both apples... But if the performance decrease scaled with the clock decrease, it would at least show that the extra memory doesn't play a tangible role in the game tested. Is it concrete? No, but it would go a long way towards ending the argument.
 
If I install more games on my hard drive, the amount of space goes up. Do I need a 12TB HDD for AC? Pls help.
Actually, if you install more games on your HDD and the amount of space on it goes up, then I can see who has the problem. You're just trolling and being disingenuous. Got it.
 
It would certainly be closer, more like Gala vs. Fuji; I mean, they are both apples... But if the performance decrease scaled with the clock decrease, it would at least show that the extra memory doesn't play a tangible role in the game tested. Is it concrete? No, but it would go a long way towards ending the argument.
I'd read it.

Now I'm really hoping that 20GB 3080 comes out soon, just because I'm interested in the comparison to the 10GB variant.
 
I realize, reading through this thread (except for the last dozen posts...sheesh), I have had an underlying assumption that AMD will price their cards somewhat below Nvidia's price-gouging (to my eyes) numbers.

I could be very wrong.

I've been holding out for a while for a new card. (Check sig: I'm limping along. ;) ) I've been thinking AMD will bring a competitive card at a consumer-friendly price. If they do, I'll buy 2 or 3. Or 4. If not...shrug.
 
I'm sure there was a specific set of use cases where Nvidia found the increased memory speeds made enough of a difference to warrant the design choice; the question is whether most gamers will encounter them. But alternatively, starting with GDDR6X out of the gate gives them the option, later on, to release cards with slower non-X memory at a cheaper price, or with more memory at the same price, should the need arise. And if there's minimal impact on actual gaming, then yay?
Not only that, but if there wasn't a difference, AMD wouldn't have felt the need to beef up the cache to counteract it. The real question is whether Nvidia's approach increases performance enough to justify the extra expense and power consumption compared to AMD's cache approach.

AMD's approach seems much riskier, and Nvidia's problems with cost and availability will only improve over time, but if it works it might be the leg up needed to be competitive this round.
 
Not only that, but if there wasn't a difference, AMD wouldn't have felt the need to beef up the cache to counteract it. The real question is whether Nvidia's approach increases performance enough to justify the extra expense and power consumption compared to AMD's cache approach.

AMD's approach seems much riskier, and Nvidia's problems with cost and availability will only improve over time, but if it works it might be the leg up needed to be competitive this round.
The increased speed may be a result of a manufacturing issue: TSMC's process and AMD's efforts have allowed them to greatly increase cache sizes in a small area, a technological feat that Samsung may not be able to reproduce at this time. So the increased speed was required to counter the smaller cache. These new cards all leave a lot to unpack; very interesting things they are. I really look forward to the games that can take advantage of them. Larian Studios has been putting a lot of behind-the-scenes effort into all sorts of AI, and I would love to see what their RPG engines could do with that sort of tech running on a home device.
 
The increased speed may be a result of a manufacturing issue: TSMC's process and AMD's efforts have allowed them to greatly increase cache sizes in a small area, a technological feat that Samsung may not be able to reproduce at this time. So the increased speed was required to counter the smaller cache. These new cards all leave a lot to unpack; very interesting things they are. I really look forward to the games that can take advantage of them. Larian Studios has been putting a lot of behind-the-scenes effort into all sorts of AI, and I would love to see what their RPG engines could do with that sort of tech running on a home device.
I agree that TSMC gives them an advantage over Samsung's improved 10nm node, and I hadn't really thought about how that affects cache sizes. I still think Nvidia's approach is more traditional and thus less risky, but you might be right that they didn't really have that option.
 
I agree that TSMC gives them an advantage over Samsung's improved 10nm node, and I hadn't really thought about how that affects cache sizes. I still think Nvidia's approach is more traditional and thus less risky, but you might be right that they didn't really have that option.
It really depends on how the new interface is presented to developers. Nvidia's design doesn't change much for any existing software, so we shouldn't have to wait for "driver optimizations" to bring out the performance capabilities of the card. That may not necessarily be true of AMD's design choice, and we could be in a situation where we need to wait for drivers and game patches to catch up and take advantage of the cache size. Going forward, I am sure AMD's design choice is the cleaner option, but I fear there will be a painful transition period (which is kind of AMD's thing).
 
I'm basing this on having the game myself. If I increase the FOV, the amount of memory goes up. It starts off at 4GB and goes past 8GB; once it does, there's noticeably more stuttering. I'm not going to load up the game because someone on this forum doesn't believe me. If you are going to start off by not believing someone on here, then fine: from now until the end of time I will assume you're lying and being disingenuous. That help?

That doesn't mean it's using all of that VRAM. The numbers reported by apps like Afterburner aren't accurately displaying what is being used; they display what the game allocates. Games generally allocate more VRAM than they need.

PS: Asking someone to back up their claims is natural. You don't need to get defensive when asked to do so.
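For the curious, the per-process allocation figure those apps display can also be read straight from the driver. A sketch, assuming the nvidia-ml-py bindings (per-process numbers can be unavailable on Windows under WDDM):

```python
# Lists VRAM *allocated* per graphics process as NVML reports it. Allocation,
# not active use: the driver has no idea how much of it the game touches.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    mib = (proc.usedGpuMemory or 0) / 2**20   # None when the driver can't say
    print(f"pid {proc.pid}: {mib:.0f} MiB allocated")
pynvml.nvmlShutdown()
```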
 
I'm pretty much committing to AMD if they have cards available reasonably close to Nvidia. The only way I won't is if they are just really bad from a price vs. performance standpoint.
 



cc cybereality

There's a new update from Igor's Lab on the maximum frequencies the Navi 2X "Big Navi" GPUs can run at. The information comes to him straight from an AIB partner of AMD, and a very special one at that, who has provided a BIOS of its next-generation Radeon RX 6800 XT custom board. Igor used the "MorePowerTool" to evaluate the BIOS and found that the boost clock is set to 2577 MHz.

The maximum boost limit that can be set is also 2800 MHz, which regular users will be unlikely to achieve, but LN2 enthusiasts look like they will be having a lot of fun with Big Navi when it launches.
 
I'm pretty much committing to AMD if they have cards available reasonably close to Nvidia. The only way I won't is if they are just really bad from a price vs. performance standpoint.

If they're reasonably close, expect the botting d-bags to be all over them, scalping as many as possible.
 
So why do the game benchmarks show little difference between the 3080 and the 3090, even though the latter has more than twice the memory of the former? The Eurogamer 3090 review shows 7 frames between them at 4K.
MS Flight Simulator 2020 is the same; the actual difference is negligible.
Because the 3080 and 3090 do not overclock well. It's the chip architecture and the high transistor density on an inferior Samsung 8nm process. Memory is not the issue with the poor overclocking at all, though the 3080's meager 10GB of VRAM will affect some games. I suggest you wait two weeks and buy an AMD RX 6800XT or 6900XT: better thermals, better overclocking, and decent availability.
 
Because the 3080 and 3090 do not overclock well. It's the chip architecture and the high transistor density on an inferior Samsung 8nm process. Memory is not the issue with the poor overclocking at all, though the 3080's meager 10GB of VRAM will affect some games. I suggest you wait two weeks and buy an AMD RX 6800XT or 6900XT: better thermals, better overclocking, and decent availability.

Yep, 5 more days and we will know Lisa's Secret.
 
Because the 3080 and 3090 do not overclock well. It's the chip architecture and the high transistor density on an inferior Samsung 8nm process. Memory is not the issue with the poor overclocking at all, though the 3080's meager 10GB of VRAM will affect some games. I suggest you wait two weeks and buy an AMD RX 6800XT or 6900XT: better thermals, better overclocking, and decent availability.
I don't think he was asking about overclocking, I think he was curious why the 24GB of VRAM in the 3090 doesn't show very much advantage at all over the 10GB in the 3080. It's a good question.

Waiting to see AMD's offering would be the smart move at this point. We can't say they'll have better thermals, overclocking, and availability until they actually launch though.
 
I don't think he was asking about overclocking, I think he was curious why the 24GB of VRAM in the 3090 doesn't show very much advantage at all over the 10GB in the 3080. It's a good question.

Waiting to see AMD's offering would be the smart move at this point. We can't say they'll have better thermals, overclocking, and availability until they actually launch though.
Because the extra memory is just a marketing ploy by Nvidia to move the 3090 for $1500 when it is only 10% faster than a 3080.
 
Today's news is that an engineering sample of the AMD RX 6800XT pretty much destroys the RTX 3080 in synthetic benchmarks. Remember, this is an engineering sample, and the released product will likely be a little faster. The 6800XT is on the Navi 21 die with 72 CUs; it is Navi 21's second-best SKU. The 6900XT is rumored to be equal to or better than the RTX 3090 for considerably less money. It will all be revealed on Wednesday, October 28th. All I can say is Jensen is shitting bricks now.
 
Hopefully supply is better than the 3xxx series. I'll buy Navi just because it's in stock. Those leaked benchmarks are very promising though.
 