Is 10GB of VRAM really enough for 1440p?

8GB got problematic when I ran Monster Hunter's HD texture pack at 1440p with max settings, so I wouldn't trust 10GB for future games. You're definitely safer with 16GB.
 
This thread reminds me of that time ppl said ya don't need more than four cores, or ya don't need more than 2GB of VRAM. Yea...
 
Yea, no... not even close. When AMD can produce a similar card with nearly double the VRAM for less, that suggests the margin is tight, even without knowing the actual figures.
To be fair, GDDR6 is cheaper than GDDR6X. Also, Nvidia actually had more memory chips on their 3080 than AMD has on any of their 6800/6900 series cards.

The 3090 has 24 GDDR6X memory chips. That has to be pretty expensive.
 
To be fair, GDDR6 is cheaper than GDDR6X. Also, Nvidia actually had more memory chips on their 3080 than AMD has on any of their 6800/6900 series cards.

The 3090 has 24 GDDR6X memory chips. That has to be pretty expensive.
And? Are you suggesting the margin on the 3090 is tight??
 
I'm not a GPU expert, but I think people latch onto VRAM because it's much easier to say X is bigger than Y than it is to understand the many other aspects of performance, like memory bus width.

I don't think you should be concerned about VRAM at 1440p. I think you should look at the games you want to play and see which card performs best that fits within your budget.

Put it this way - do you buy a GPU to play games, or to open your system information and look at how much VRAM it has?

I have a 3080, and I haven't run out of VRAM. There are already games (Cyberpunk, MSFS) where I can't quite max out the settings, and this tells me the card will be limited in other ways before VRAM will be an issue. Same can be said of the 6800/6900 series, and they have more VRAM.

Hell, even a 3090 with its 24GB isn't going to run Cyberpunk well with RT on and DLSS off. It sure as shit isn't running out of VRAM.
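
To put a rough number on the bus-width point: peak memory bandwidth is just bus width times effective data rate. A minimal sketch, using the published specs for these cards (and ignoring things like Infinity Cache and memory compression, so treat it as a first-order comparison only):

```python
# Back-of-the-envelope peak memory bandwidth: (bus width in bits / 8) * data rate.
# Bus widths and effective data rates below are the published specs for each card;
# real-world throughput also depends on caches, compression, access patterns, etc.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "RTX 3080 (10GB GDDR6X, 320-bit)": (320, 19.0),   # ~760 GB/s
    "RTX 3090 (24GB GDDR6X, 384-bit)": (384, 19.5),   # ~936 GB/s
    "RX 6800 XT (16GB GDDR6, 256-bit)": (256, 16.0),  # ~512 GB/s + Infinity Cache
}

for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
```

More VRAM on a narrower, slower bus isn't automatically the faster card.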
 
Decision to be made: 3080 vs. 6800 XT

I don't care about ray tracing, streaming, or NVENC.

Some games, like Doom Eternal and many others, are already maxing out 8GB. 8GB is clearly not sufficient today (I'm talking about VRAM usage, not just allocation), let alone in two years.

Back to the 3080 and the question: will 10GB be enough for many years to come? I don't plan to swap my GPU every two years; I plan to keep it for 3-5 years.

For those who think 10GB is enough today, will it be enough in 3+ years? Do you have stats on VRAM usage to support your arguments?

I think 10GB may be enough for some time at 2560x1440. At 3440x1440 or 3840x2160, I don't think it will be. I would even go so far as to argue that 10GB is insufficient today. Of course, without ray tracing, VRAM usage usually isn't nearly as bad. Using the RT Ultra preset in Cyberpunk 2077 at 4K, with DLSS set to Balanced, HDR on, and the Cinematic RTX launch parameter, the game has used upwards of 13GB of VRAM on my RTX 3090 FE. Obviously, that's beyond 10GB and is encroaching on the limits of even the 16GB cards such as the 6800XT and 6900XT. However, those cards aren't fast enough at ray tracing, nor do they support DLSS 2.0 or an equivalent, so I'm not sure what the VRAM usage would look like on one of them.

Granted, Cyberpunk 2077 is an edge case. However, there is no reason to believe there aren't other games consuming near 10GB of VRAM today. I'm sure there are; I just don't know what they are. For most games, even AAA titles, the RTX 3080's 10GB of VRAM isn't a problem. Destiny 2 with HDR at 4K and everything else maxed out doesn't consume more than 4GB of VRAM. I think I clocked Ghost Recon Breakpoint eating up 7.5-8GB using Ultra settings at 3840x2160. To answer the question more directly, the answer is "no." While the 10GB on the RTX 3080 may be good enough for 2560x1440 for some time, it's marketed as a 4K card and, frankly, I don't think it will age well for that purpose. It's even possible that it won't age well at 2560x1440 three years from now. The RTX 3080 has less VRAM than its immediate predecessor and less than my GeForce Titan X (Maxwell). Make no mistake about it: the RTX 3080 was gimped RAM-wise to cut costs enough to get the card below $1,000. More than likely, that's because NVIDIA had some idea of what the 6800XT and 6900XT would bring to the table and what their pricing would look like. On the high end, the RTX 3080 is a value option; we haven't seen a flagship gaming card that cheap in some time, and there are good reasons for that. In this post-pandemic world, costs are even higher. It's a wonder the thing is that cheap in the first place.

Conversely, the 6800XT and 6900XT are something of a letdown today, primarily because their poor ray tracing performance and lack of a DLSS 2.0-like feature make them dog shit for 4K gaming right now. The lack of something equivalent to DLSS 2.0 is certainly the bigger of those two issues. If AMD can get things up to scratch on the software side, I think those cards will age very well comparatively. Now, there are arguments from many that ray tracing performance isn't that big a deal. To be honest, Cyberpunk 2077 is one of the few titles where I think it's implemented well enough to make a case against that argument. However, this topic is about how these cards will age, and ray tracing will only become more important as things progress. If you're concerned about how the cards behave in three years, I think the RTX 3080 is going to be a disappointment down the line, and whether or not AMD can get the 6800XT/6900XT up to scratch is a crapshoot. Software has always been something of an issue for AMD, and they've got a ton of catch-up work to do.
 
^ I can run CP2077 at 4K, HDR, Ultra RT, and Balanced DLSS on my 3080 without issue. FPS is 45-50, with no issues related to insufficient VRAM. I play it on DLSS Performance to get 55-60 fps instead, but it is quite playable on Balanced. Another case of allocation != required.

Not sure what the cinematic RTX launch parameter is, so that may change things.
 
Using the RT Ultra preset in Cyberpunk 2077 at 4K, with DLSS set to Balanced, HDR on, and the Cinematic RTX launch parameter, the game has used upwards of 13GB of VRAM on my RTX 3090 FE. Obviously, that's beyond 10GB and is encroaching on the limits of even the 16GB cards such as the 6800XT and 6900XT. However, those cards aren't fast enough at ray tracing, nor do they support DLSS 2.0 or an equivalent, so I'm not sure what the VRAM usage would look like on one of them.

Curious how you are measuring VRAM usage? My understanding is you can't measure it unless you get down to the driver level. Apps such as HWINFO only report allocation.
 
^ I can run CP2077 at 4K, HDR, Ultra RT, and Balanced DLSS on my 3080 without issue. FPS is 45-50, with no issues related to insufficient VRAM. I play it on DLSS Performance to get 55-60 fps instead, but it is quite playable on Balanced. Another case of allocation != required.

Not sure what the cinematic RTX launch parameter is, so that may change things.
Same here. I think he is reporting allocation.
 
Curious how you are measuring VRAM usage? My understanding is you can't measure it unless you get down to the driver level. Apps such as HWINFO only report allocation.

Well, this is according to GPU-Z and the games themselves. NVIDIA FrameView 1.x also reports this, or can, if I'm not mistaken. I use that as well.
 
^ I can run CP2077 at 4K, HDR, Ultra RT, and Balanced DLSS on my 3080 without issue. FPS is 45-50, with no issues related to insufficient VRAM. I play it on DLSS Performance to get 55-60 fps instead, but it is quite playable on Balanced. Another case of allocation != required.

Not sure what the cinematic RTX launch parameter is, so that may change things.

He said near 10GB of VRAM. If it isn't going over your VRAM capacity then you aren't going to run into issues...
 
Yea, no... not even close. When AMD can produce a similar card with nearly double the VRAM for less, that suggests the margin is tight, even without knowing the actual figures.

Really? I didn't even think it was a debate that the 3080 has lower margins than Nvidia is used to. AMD traditionally works with lower margins to stay relevant.
 
^ I can run CP2077 at 4K, HDR, Ultra RT, and Balanced DLSS on my 3080 without issue. FPS is 45-50, with no issues related to insufficient VRAM. I play it on DLSS Performance to get 55-60 fps instead, but it is quite playable on Balanced. Another case of allocation != required.

Not sure what the cinematic RTX launch parameter is, so that may change things.

I never said you would have an issue. The allocation reported by GPU-Z on my RTX 3090 was only 9.5GB of VRAM. If you bothered to fully read what I wrote, you'd also have realized I enabled the cinematic RTX mode via a launch parameter, which alters the game's lighting. This is in addition to running the game at 4K with HDR, DLSS 2.0 set to Balanced, and the same RT Ultra preset. This increases VRAM usage considerably, or at least allocation. After doing that, GPU-Z reports 12.5-13GB of VRAM usage.

EDIT: I didn't catch the bottom sentence of your post.
 
I never said you would have an issue. The allocation reported by GPU-Z on my RTX 3090 was only 9.5GB of VRAM. If you bothered to fully read what I wrote, you'd also have realized I enabled the cinematic RTX mode via a launch parameter, which alters the game's lighting. This is in addition to running the game at 4K with HDR, DLSS 2.0 set to Balanced, and the same RT Ultra preset. This increases VRAM usage considerably, or at least allocation. After doing that, GPU-Z reports 12.5-13GB of VRAM usage.

If you bothered to fully read my post you’d see that I had the caveat that I don’t know what cinematic RTX is and that may change things.
 
Nope. He said 13 GB.

Yes, I did. I also later mentioned running the game without the cinematic RTX mode. Without using that, GPU-Z reports 9.5GB of VRAM usage at 4K, with HDR and DLSS 2.0 set to Balanced.

If you bothered to fully read my post you’d see that I had the caveat that I don’t know what cinematic RTX is and that may change things.

See the edit.
 
Yes, I did. I also later mentioned running the game without the cinematic RTX mode. Without using that, GPU-Z reports 9.5GB of VRAM usage at 4K, with HDR and DLSS 2.0 set to Balanced.

See the edit.

Fair enough! Wonder what it reports at Quality DLSS (playable on a 3080 to a console peasant used to 30 fps) and native 4K (unplayable on both).
 
Fair enough! Wonder what it reports at Quality DLSS (playable on a 3080 to a console peasant used to 30 fps) and native 4K (unplayable on both).

I can certainly find out. BTW, the cinematic RTX command affects the game's lighting quite a bit. However, it doesn't impact performance negatively. Some people even report more stable FPS and fewer dips. I can't speak to that, as I switched CPUs at the same time. It does require additional VRAM to use it, though. It's available to all RTX cards, but only recommended for the 30 series. You might try it and see what it does. I'm curious as to whether or not the 3080 would run into issues due to having less VRAM.
 
Without using that, GPU-Z reports 9.5GB of VRAM usage at 4K, with HDR and DLSS 2.0 set to Balanced.
Ah. That is allocation. That isn't usage.

We spoke to Nvidia’s Brandon Bell on this topic, who told us the following: “None of the GPU tools on the market report memory usage correctly, whether it’s GPU-Z, Afterburner, Precision, etc. They all report the amount of memory requested by the GPU, not the actual memory usage. Cards with larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available.”
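
For what it's worth, querying the driver directly through NVML reports the same thing. A minimal sketch, assuming the pynvml bindings are installed (pip install nvidia-ml-py); the "used" figure is framebuffer memory the driver has handed out, i.e. allocation, not what a game is actively touching:

```python
# Minimal NVML query via pynvml. Note that "used" here is memory the driver has
# allocated to processes -- the same allocation-vs-usage caveat from the quote above.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)       # first GPU in the system
info = pynvml.nvmlDeviceGetMemoryInfo(handle)       # total / free / used, in bytes

print(f"Total VRAM:     {info.total / 2**30:.1f} GiB")
print(f"Allocated VRAM: {info.used / 2**30:.1f} GiB")   # allocated, not actively used

pynvml.nvmlShutdown()
```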
 
Fair enough! Wonder what it reports at Quality DLSS (playable on a 3080 to a console peasant used to 30 fps) and native 4K (unplayable on both).

9.9GB at 4K native with RT on according to TPU.


9.3GB at 4K Quality DLSS and RT on according to Guru3D.
 
Isn't it pretty standard for Nvidia to gimp the non-Ti models, though? The 1080 Ti was 11GB and the 2080 was 8GB. At least the gap is closer between the 2080 Ti and the 3080. Still dumb though, I agree.
Fair, but with the 3080 being a 102-class die, like the Ti cards usually are, it was an odd decision.
 
On these low-VRAM cards you won't be able to take advantage of SAM either. It could be a decent loss of potential as support for SAM increases.
 
9.9GB at 4K native with RT on according to TPU.

9.3GB at 4K Quality DLSS and RT on according to Guru3D.

That's generally in line with what I've seen the game do. Again, we only have allocation to go off of, and we don't know how it would perform if you were to constrain it to a lower amount of VRAM on an otherwise equal GPU. I can make the game "use" more VRAM as stated earlier, but I don't know how much is actually needed to use the feature. Supposedly, usage was only supposed to go up by 2GB of VRAM from what I read when I discovered the feature. However, it pulls 3GB of VRAM from my system.
 
Isn't it pretty standard for Nvidia to gimp the non-Ti models, though? The 1080 Ti was 11GB and the 2080 was 8GB. At least the gap is closer between the 2080 Ti and the 3080. Still dumb though, I agree.

It is, but typically, non-Ti models aren't directly comparable. The GPUs are gimped in a variety of ways as well. A 1080 wouldn't be as fast as a 1080 Ti if you gave it the same memory configuration. The subsequent RTX 2080 was also not a direct replacement for the flagship GTX 1080 Ti; the RTX 2080 Ti was.

Fair, but with the 3080 being a 102-class die, like the Ti cards usually are, it was an odd decision.
It is, but it has fewer CUDA cores and a reduced memory bus. It's gimped compared to the RTX 3090. Supposedly the RTX 3080 Ti's GPU will actually be identical to the GPU found on the RTX 3090, albeit with unknown clocks.
 
Think this thread is moot and needs to be archived for 6 months, because the best GPU you can actually buy right now without paying scalpers is a 1650 4GB...
In my neck of the woods, it's more like a GT 710 or HD 7550.
It is, but it has fewer CUDA cores and a reduced memory bus. It's gimped compared to the RTX 3090. Supposedly the RTX 3080 Ti's GPU will actually be identical to the GPU found on the RTX 3090, albeit with unknown clocks.
I get what you are saying. But considering they marketed the 30-series at announcement as a replacement for Pascal owners, it seems odd to me that 1080 Ti owners would be happy going from an 11GB card to a 10GB card effectively three years later. If you actually look at the FE PCB for the 3080, they omitted two memory chips, so it would have been possible for them to make it a 12GB card over a 384-bit bus from the start. I honestly think if they had done that, there would have been less stink about VRAM capacity. And the 70-class cards being 8GB for three generations seems silly to me.

Regarding cost... does anyone actually know how much VRAM costs? I doubt it's as much as people think, and what we get is a high markup instead.
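
On the chip math, here is a quick sketch of the configurations being thrown around, assuming 1GB (8Gb) GDDR6X packages on 32-bit channels as on the launch 3080/3090 boards. The 12-chip row is the hypothetical fully populated 3080 PCB mentioned above, not a real SKU; the 3090 doubles up two chips per channel (clamshell), which is how it gets 24GB on the same 384-bit bus:

```python
# Rough VRAM-configuration arithmetic. Assumes 1GB (8Gb) GDDR6X packages, each
# channel 32 bits wide, matching the launch 3080/3090 boards. The 12-chip 3080
# row is the hypothetical fully populated PCB discussed above, not a real product.

def config(chips: int, chips_per_channel: int = 1, gb_per_chip: int = 1):
    channels = chips // chips_per_channel
    bus_width_bits = channels * 32       # one 32-bit channel per chip position
    capacity_gb = chips * gb_per_chip
    return bus_width_bits, capacity_gb

print("RTX 3080, 10 chips:           %d-bit bus, %d GB" % config(10))
print("Hypothetical 12-chip 3080:    %d-bit bus, %d GB" % config(12))
print("RTX 3090, 24 chips clamshell: %d-bit bus, %d GB" % config(24, chips_per_channel=2))
```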
 
Same reason enterprise Java apps, SQL Server, and Oracle do it. If it's mapped by the app, you don't have to worry about something else claiming it, and it's way easier that way.
Oh yeah, that reminds me of this one place I worked at. They had a SQL process that was just a runaway memory leak over time. We periodically had to reboot it or it would grab all possible RAM.
 
9.9GB at 4K native with RT on according to TPU.

9.3GB at 4K Quality DLSS and RT on according to Guru3D.
1. I wonder how they measured VRAM used, if none of the tools on the market (as quoted by the Nvidia rep) report actual usage - only allocation.
2. If one of the most demanding games, CP2077, uses (or even better, only allocates) 6GB with no RT at 1440p, then 10GB should definitely be enough for a while still (possibly excluding future AAA games). Less demanding/indie games should still be able to run with textures completely maxed out comfortably on 10GB for a while, if this is any indication to go by.
 
Oh yeah, that reminds me of this one place I worked at. They had a SQL process that was just a runaway memory leak over time. We periodically had to reboot it or it would grab all possible RAM.
I do sizing for those for a living. If you don't right-size a SQL VM and just give it a bunch of RAM, it will use it all. So you increase it, and it "uses" that too. If you check in the DB, 1/10th of it is actually in use - the MSSQL service just claimed all the RAM because it was there :p
 
Same reason enterprise Java apps, SQL Server, and Oracle do it. If it's mapped by the app, you don't have to worry about something else claiming it, and it's way easier that way.

Maybe in some cases, but what you listed certainly doesn't apply. Let's stick to games anyway.

What I'd like to know from Nvidia and whoever else is: what difference does it make to me as a gamer whether it's allocation or usage? Do I just play my games with less VRAM, telling myself "it's allocation, not usage"? What are the effects of playing with less VRAM if it's just allocation?

Will a game allocate 2GB on a 2GB card and 10GB on a 10GB card at the same settings? Is that what we're saying here?

I do sizing for those for a living. If you don't right-size a SQL VM and just give it a bunch of RAM, it will use it all. So you increase it, and it "uses" that too. If you check in the DB, 1/10th of it is actually in use - the MSSQL service just claimed all the RAM because it was there :p

Very bad example. SQL Server is designed to use all available memory.
 
Same concept for some of these games: they allocate all available VRAM regardless of what they're going to use. That's exactly the point we were trying to make. Actual usage is hard to measure.
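
A toy system-RAM analogy of allocated vs. actually used (Linux-only, and obviously not a way to probe VRAM, just an illustration of the distinction): reserve a big anonymous mapping, touch only a sliver of it, and the resident set barely moves.

```python
# Toy analogy only: demonstrates "allocated != used" for system RAM on Linux.
# We map 4 GiB of anonymous memory but write to just the first 1 MiB, so the
# resident set size (VmRSS) stays far below the 4 GiB reservation.
import mmap
import os

RESERVED = 4 * 2**30                 # "allocate" 4 GiB of address space
buf = mmap.mmap(-1, RESERVED)        # anonymous mapping; pages not backed yet

buf[:2**20] = b"\x42" * 2**20        # actually touch only the first 1 MiB

with open(f"/proc/{os.getpid()}/status") as f:
    rss = next(line for line in f if line.startswith("VmRSS"))

print(f"Reserved: {RESERVED / 2**30:.0f} GiB | {rss.strip()}")
buf.close()
```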
 
Same concept for some of these games: they allocate all available VRAM regardless of what they're going to use. That's exactly the point we were trying to make. Actual usage is hard to measure.
I am not sure we ever saw this happen (i.e., is there any game that shows VRAM usage going up to 20-22GB on a 3090?).

Maybe some allocate more and place more data in VRAM when more is available (i.e., they're less careful about whether they'll need that data during the current scene versus after the next load), but allocate all available VRAM?
 
I am not sure we ever saw this happen (i.e., is there any game that shows VRAM usage going up to 20-22GB on a 3090?).

Maybe some allocate more and place more data in VRAM when more is available (i.e., they're less careful about whether they'll need that data during the current scene versus after the next load), but allocate all available VRAM?
Probably up to a certain maximum; the 3090 is the first consumer card with that much on it. You have to manage allocated RAM, after all.
 
8GB got problematic when I ran Monster Hunter's HD texture pack at 1440p with max settings, so I wouldn't trust 10GB for future games. You're definitely safer with 16GB.

I ran MH with the HD textures at 1440p and never had a problem on my 1080, so I am not sure what you were seeing.
 
Games definitely can allocate more VRAM than they actually need; we have seen this numerous times, and various sources have reported on it.

https://www.resetera.com/threads/vram-in-2020-2024-why-10gb-is-enough.280976/

https://www.gamersnexus.net/news-pc/2657-ask-gn-31-vram-used-frametimes-vs-fps

Yep, the question is how much more; you have only one example in your sources. I'd like to see a list of games and the differences in allocation vs. actual usage. I'm also willing to believe there's some technical benefit to allocating more than you need ahead of time, but of course we will never know any specifics.

How wide is the gap that justifies this constant allocation != usage argument?
 