The Slowing Growth of VRAM in Games

Despite what some say, I strongly believe the 2080 will be VRAM limited @ 4K in the near future.


Agreed. Asset loading takes up a bunch of VRAM, and then higher res textures used for AA, etc...
 
I would predict November 22nd depending on your game preference:
https://www.tweaktown.com/news/66779/doom-eternal-gameplay-shown-48-minute-video-quakecon/index.html

"10x higher geometry detail as well as more texture details than any other ID software game before."

This game is shaping up to be a real vram monster should you want to play at the highest settings.
Hot damn, I thought the 5GB+ requirement for DOOM 2016's nightmare setting was steep.
Really exciting to see how this plays out for VRAM usage at 2K and 4K resolutions.

Thanks for sharing!
 
Hot damn, I thought the 5GB+ requirement for DOOM 2016's nightmare setting was steep.
Really exciting to see how this plays out for VRAM usage at 2K and 4K resolutions.

Thanks for sharing!

I am sure it will be optimized to scale well. Doom Eternal on Ultra will probably look similar to Doom 2016 Nightmare while using similar VRAM. However, if you want to play Nightmare on Doom Eternal and see every gory detail...
 
Despite what some say, I strongly believe the 2080 will be VRAM limited @ 4K in the near future.

It seems more and more likely that the 2080 will be GPU limited and not vram limited for future games.
Take for example the 4GB Fury X. Since this was seen as a 1440p ultra GPU when released, it was thought that it would quickly fall short of the 980 Ti in newer games due to VRAM limits. Now, however, it is more of a 1080p high GPU in games like Borderlands 3 and Control. At those settings, 4 GB of VRAM is a non-issue. Doom Eternal may be the exception, but we saw this with Wolfenstein 2 before.

Same story for the GTX 1080 Ti. Upon release, that card was a 4K ultra card. In games over two years later, it is now closer to a 1440p very-high card in terms of GPU power. In that case, the 8 GB of the similarly powerful RTX 2080 looks to be enough. In a few more years, it will most likely be playing the newest games at 1080p high, so the 8 GB of VRAM will be even less of an issue.

Here are some links to the newest game tests with VRAM requirements at 1080p/1440p/4K on the highest settings. Not much has changed on the VRAM front despite the vastly higher GPU requirements:

https://www.techpowerup.com/review/control-benchmark-test-performance-nvidia-rtx/5.html
4279 / 4979 / 6520 MB
https://www.techpowerup.com/review/gears-5-benchmark-test-performance-analysis/4.html
4731 / 5105 / 6050 MB
https://www.techpowerup.com/review/greedfall-benchmark-test-performance-analysis/4.html
5637 / 5917 / 6625 MB
https://www.guru3d.com/articles_pages/borderlands_3_pc_graphics_performance_benchmark_review,4.html
4235 / 4805 / 6145 MB (R7 used, may be slightly higher with a 2080ti)
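If you want to put a rough number on the scaling, here's a quick back-of-the-envelope sketch in Python using just the figures above; the split into a fixed baseline plus a per-megapixel slope is my own simplification, not something taken from the reviews:

```python
# Rough scaling check using the TechPowerUp/Guru3D figures listed above
# (VRAM in MB at 1080p / 1440p / 4K). Split into a resolution-independent
# baseline plus a per-megapixel slope -- an illustrative simplification.
games = {
    "Control":       (4279, 4979, 6520),
    "Gears 5":       (4731, 5105, 6050),
    "Greedfall":     (5637, 5917, 6625),
    "Borderlands 3": (4235, 4805, 6145),
}
mp_1080p = 1920 * 1080 / 1e6   # ~2.07 megapixels
mp_4k    = 3840 * 2160 / 1e6   # ~8.29 megapixels

for name, (v1080, _, v4k) in games.items():
    slope = (v4k - v1080) / (mp_4k - mp_1080p)   # extra MB per extra megapixel
    base  = v1080 - slope * mp_1080p             # roughly the asset/baseline cost
    print(f"{name}: ~{base:.0f} MB baseline + ~{slope:.0f} MB per megapixel")
```

Run against those numbers, the baseline (mostly assets) dominates and the per-resolution growth is only a few hundred MB per megapixel, which is why even 4K stays well under 8 GB in these titles.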
 
It seems more and more likely that the 2080 will be GPU limited and not vram limited for future games.
Take for example the 4GB Fury X. Since this was seen as a 1440p ultra GPU when released, it was thought that it would quickly fall short of the 980 Ti in newer games due to VRAM limits. Now, however, it is more of a 1080p high GPU in games like Borderlands 3 and Control. At those settings, 4 GB of VRAM is a non-issue. Doom Eternal may be the exception, but we saw this with Wolfenstein 2 before.

Same story for the GTX 1080 Ti. Upon release, that card was a 4K ultra card. In games over two years later, it is now closer to a 1440p very-high card in terms of GPU power. In that case, the 8 GB of the similarly powerful RTX 2080 looks to be enough. In a few more years, it will most likely be playing the newest games at 1080p high, so the 8 GB of VRAM will be even less of an issue.

Here are some links to the newest game tests with VRAM requirements at 1080p/1440p/4K on the highest settings. Not much has changed on the VRAM front despite the vastly higher GPU requirements:

https://www.techpowerup.com/review/control-benchmark-test-performance-nvidia-rtx/5.html
4279 / 4979 / 6520 MB
https://www.techpowerup.com/review/gears-5-benchmark-test-performance-analysis/4.html
4731 / 5105 / 6050 MB
https://www.techpowerup.com/review/greedfall-benchmark-test-performance-analysis/4.html
5637 / 5917 / 6625 MB
https://www.guru3d.com/articles_pages/borderlands_3_pc_graphics_performance_benchmark_review,4.html
4235 / 4805 / 6145 MB (R7 used, may be slightly higher with a 2080ti)

Well... one difference is ray tracing apparently eats VRAM. That’s one reason DLSS actually working is important. The lower res it upscales from uses a lot less VRAM, along with the inherent fps boost.
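To illustrate the render-resolution part of that: the buffers a renderer writes every frame scale with the internal resolution, so upscaling from 1440p instead of rendering native 4K saves a chunk of memory. The buffer layout below is a made-up, simplified deferred-renderer example, not any specific engine's:

```python
# Simplified render-target memory estimate at native 4K vs. a DLSS-style
# 1440p internal resolution. The bytes-per-pixel list is a hypothetical
# layout (a couple of RGBA16F G-buffers, an HDR target, depth, motion vectors).
def render_target_mb(width, height, bytes_per_pixel=(8, 8, 8, 4, 4)):
    return width * height * sum(bytes_per_pixel) / 2**20

native_4k = render_target_mb(3840, 2160)
internal  = render_target_mb(2560, 1440)
print(f"native 4K: ~{native_4k:.0f} MB of targets, 1440p internal: ~{internal:.0f} MB")
```

Textures and (with RTX) the acceleration structures still dominate the total, but every per-pixel buffer shrinks the same way, on top of the fps gain.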
 
No one is running out to buy overpriced 8K screens. Almost all the high-end cards have over 8 GB and the mid-range have around 8 GB, which is fine; why pay for more RAM you will never use? 4K is barely seeing any traction even now that prices are low enough for most, and the broadcast world is stuck on 1080p. 16GB consumer cards would be a waste and add zero benefit.

Kind of hard to justify one when nothing can drive it. I imagine there'd be a ton of people rushing out to buy 8K screens if the prices got competitive and there was hardware to utilize them.


Pretty much what everyone said about 4K also, yet almost every TV being released (in increasingly smaller sizes, btw) is 4K, and I'd bet the adoption rate is pretty damn high for 4K. Yet it's barely driveable by current top-end GPUs, and only at the old "60fps is sufficient for games" standard, which I think most would agree few of us are happy with any longer. Even though I don't play FPSs much or competitively, I won't be buying a sub-120Hz monitor again. I think we're gonna need to just drop that 60fps baseline moving forward. The next major GPU series cycle needs to be able to push 4K 120Hz at the flagship level to be relevant IMO, and I certainly agree we'd better see 16GB of VRAM.

Personally, I no longer buy ultra-expensive components and peripherals at the insane prices GPUs and monitors are back to and just say "oh well, being able to run a majority of games at my res, refresh, and max settings is good enough." Running into a VRAM ceiling in even a single game I care about, on a card that I paid over $500 for, is going to piss me off. Not being able to hit 120Hz at 3440x1440 at ultra settings with a flagship card pisses me off, and the 1080 Ti will run into this in some games, when they were touting 4K performance with that release.


May not be the popular opinion, but I think the performance jump of the RTX 20xx series is pretty damn weak for such a significant number iteration from a "perception" standpoint, and I feel very confident in saying that if it weren't for the runaway sales of the mining craze, Nvidia would have *never* released the 20xx series the way it stands, let alone with the price jump. The Super models are what the standard models *should* have been, and they shouldn't have even released a 2080 Ti until they could offer it with a similar increase as the other Super cards see.


Here's hoping AMD will get their shit together enough to challenge NVidia on the top end, so they get nervous enough to actually offer us something with a real performance jump and value proposition next cycle.
 
The last VRAM spike came with the PS4/Xbox One, as game development moved towards that console generation. This is when requirements were pushed from 2GB being more than ample to 4GB-6GB for 1080p with everything on, stagnating once those levels were reached.

In the end, the major multi-platform games will heavily focus on asset optimization and delivery for the fixed hardware of those specs, as you can't rely on hardware changes or game setting changes the way you can on the PC side. Ultimately the biggest VRAM consumer will be texture assets and how they are handled (e.g. the game may save cycles on things like optimizing LOD and just use memory, as that is available in abundance).

The concern now will be how upcoming 2020 consoles impact PC game requirements post launch. This will be dependent on what hardware they have, what guidelines they put forth for developers to target and leverage those specs, and how fast development moves to next gen focused or only.

Here's hoping AMD will get their shit together enough to challenge NVidia on the top end, so they get nervous enough to actually offer us something with a real performance jump and value proposition next cycle.

The most competitive pressure next cycle will come not from AMD nor Intel but from the new consoles. If you just look at the last two console launches, the GPU cycles that competed with them were rather aggressive. In Nvidia's case it was the 8800 GT vs. the PS3/Xbox 360 and the GTX 970 vs. the PS4/Xbox One.
 
The most competitive pressure next cycle will come not from AMD nor Intel but from the new consoles. If you just look at the last two console launches, the GPU cycles that competed with them were rather aggressive. In Nvidia's case it was the 8800 GT vs. the PS3/Xbox 360 and the GTX 970 vs. the PS4/Xbox One.

Agree on this. VRAM requirements in games trend upward as a direct result of VRAM on consoles. VRAM on PC cards somewhat stagnated after the release of PS3/Xbox 360 and definitely after PS4/Xbox1, which was directly related to games being developed for consoles, and keeping within the VRAM available on consoles.

One of the more interesting unknowns (CPU and GPU are basically known quantities at this point, in my view) on the upcoming consoles is their memory configs. Will it be shared between CPU/GPU? Will there be a DRAM buffer of some sort? GDDR5, GDDR6, HBM dare I say? I know the Xbox folks have said it's going to be a very interesting configuration, as well as storage. I'm betting on 12GB of shared GDDR6, at least on the PS5, but that's just me spitballing. If it's 12GB shared, we'll see VRAM utilization in games go up, although not monumentally.
 
Agree on this. VRAM requirements in games trend upward as a direct result of VRAM on consoles. VRAM on PC cards somewhat stagnated after the release of PS3/Xbox 360 and definitely after PS4/Xbox1, which was directly related to games being developed for consoles, and keeping within the VRAM available on consoles.

One of the more interesting unknowns (CPU and GPU are basically known quantities at this point, in my view) on the upcoming consoles is their memory configs. Will it be shared between CPU/GPU? Will there be a DRAM buffer of some sort? GDDR5, GDDR6, HBM dare I say? I know the Xbox folks have said it's going to be a very interesting configuration, as well as storage. I'm betting on 12GB of shared GDDR6, at least on the PS5, but that's just me spitballing. If it's 12GB shared, we'll see VRAM utilization in games go up, although not monumentally.
With RAM prices the way they are now, I doubt we see anything less than 16GB. Maybe more if it is shared.
 
With RAM prices the way they are now, I doubt we see anything less than 16GB. Maybe more if it is shared.

I'd agree, but I know Sony and MSFT are going to continue to push incremental console upgrades every 3-4 years. If they can get away with 12GB for the first iteration, they will, and slapping on an extra 4GB of memory for a PS5 2.0 is an easy/cheap marketing point.
 
I'd agree, but I know Sony and MSFT are going to continue to push incremental console upgrades every 3-4 years. If they can get away with 12GB for the first iteration, they will, and slapping on an extra 4GB of memory for a PS5 2.0 is an easy/cheap marketing point.
I don't think they're going to do that again. I can see shorter console cycles though: PS6 in 2024. This time around they seem to be releasing a console that has zero compromises. Consoles have always been lacking in something, be it RAM, CPU, or GPU power; they are never balanced right. The PS5 seems to buck that trend.
 
Agree on this. VRAM requirements in games trend upward as a direct result of VRAM on consoles. VRAM on PC cards somewhat stagnated after the release of PS3/Xbox 360 and definitely after PS4/Xbox1, which was directly related to games being developed for consoles, and keeping within the VRAM available on consoles.

One of the more interesting unknowns (CPU and GPU are basically known quantities at this point, in my view) on the upcoming consoles is their memory configs. Will it be shared between CPU/GPU? Will there be a DRAM buffer of some sort? GDDR5, GDDR6, HBM dare I say? I know the Xbox folks have said it's going to be a very interesting configuration, as well as storage. I'm betting on 12GB of shared GDDR6, at least on the PS5, but that's just me spitballing. If it's 12GB shared, we'll see VRAM utilization in games go up, although not monumentally.

This is completely rubbish, to put it nicely. This 'theory' has been regurgitated time and time again and it makes zero sense.
At console-optimized settings, these games are not using anywhere near the VRAM that a PC on ultra is using, so how is the console limiting the max that can be utilized?
Have games been limited to 60 fps since then, because that is the limit of the console CPU?
 
This is completely rubbish, to put it nicely. This 'theory' has been regurgitated time and time again and it makes zero sense.
At console-optimized settings, these games are not using anywhere near the VRAM that a PC on ultra is using, so how is the console limiting the max that can be utilized?
Have games been limited to 60 fps since then, because that is the limit of the console CPU?

I wouldn't say it's total rubbish. Games are mostly developed with consoles in mind, so that tends to set the boundaries. And it may not be as easy as people think to just upscale all the game textures and assets for the PC port after console development is over. Nor would they want to spend time doing that only to alienate potential customers who now can't run the game due to the higher requirements, losing sales.
 
A big part of optimization that occurs for console games is at the design side. The game is designed on the art side to be optimal for the hardware specs of the console. The design/art side knows from the software engineering side what limitations they have to work with and try to make the best looking game possible within those constraints.

The PC version may then have what are basically add-on effects that effectively have poor scaling relative to the visual quality gain. This is compounded by how much in the way of resources companies want to put into this, as ultimately game developers don't directly sell or benefit from hardware sales and therefore from showcasing PC hardware.
 
Xbox 360 Release Date: Nov 2005
PS3 Release Date: Nov 2006

Xbox One release date: Nov 2013
PS4 release date: Nov 2013

PS5 projected: Nov 2020
Xbox projected: Nov 2020

I'm seeing a 7-8 year window. What indication do we have this would change to a shorter cycle?
 
I'd agree, but I know Sony and MSFT are going to continue to push incremental console upgrades every 3-4 years. If they can get away with 12GB for the first iteration, they will, and slapping on an extra 4GB of memory for a PS5 2.0 is an easy/cheap marketing point.
Isn't 12 GB of GDDR5 what the Xbox One X has now?
 
Xbox 360 Release Date: Nov 2005
PS3 Release Date: Nov 2006

Xbox One release date: Nov 2013
PS4 release date: Nov 2013

PS5 projected: Nov 2020
Xbox projected: Nov 2020

I'm seeing a 7-8 year window. What indication do we have this would change to a shorter cycle?

This is a bit muddied as the enhanced consoles came out in 2017. The jump from One to One X will be every bit as big as One X to 'Scarlett': BR vs 4K BR; HUGE GPU boost; 8GB of DDR3 to 12GB of GDDR5. The only thing that wasn't changed much is the CPU.
 
This is a bit muddied as the enhanced consoles came out in 2017. The jump from One to One X will be every bit as big as One X to 'Scarlett': DVD vs 4K BR; HUGE GPU boost; 8GB of DDR3 to 12GB of GDDR5. The only thing that wasn't changed much is the CPU.
Eh? The original Xbox one was also a Blu-ray player. I think with the S they added the UHD Blu-ray.
 
A big part of optimization that occurs for console games is at the design side. The game is designed on the art side to be optimal for the hardware specs of the console. The design/art side knows from the software engineering side what limitations they have to work with and try to make the best looking game possible within those constraints.

The PC version may then have what are basically add-on effects that effectively have poor scaling relative to the visual quality gain. This is compounded by how much in the way of resources companies want to put into this, as ultimately game developers don't directly sell or benefit from hardware sales and therefore from showcasing PC hardware.

Except that most games are hitting nowhere near the 8 GB vram limit, even on the Pro. I am assuming this is all based on the lowest common denominator and not the 12 GB gddr5 of the OneX.
If games were just putting polish over a turd, they would consume WAY more than 8 GB at 4k Ultra/insane settings, but that is just not the case.
 
The last VRAM spike came with the PS4/Xbox One, as game development moved towards that console generation. This is when requirements were pushed from 2GB being more than ample to 4GB-6GB for 1080p with everything on, stagnating once those levels were reached.

In the end, the major multi-platform games will heavily focus on asset optimization and delivery for the fixed hardware of those specs, as you can't rely on hardware changes or game setting changes the way you can on the PC side. Ultimately the biggest VRAM consumer will be texture assets and how they are handled (e.g. the game may save cycles on things like optimizing LOD and just use memory, as that is available in abundance).

The concern now will be how upcoming 2020 consoles impact PC game requirements post launch. This will be dependent on what hardware they have, what guidelines they put forth for developers to target and leverage those specs, and how fast development moves to next gen focused or only.



The most competitive pressure next cycle will come not from AMD nor Intel but from the new consoles. If you just look at the last two console launches, the GPU cycles that competed with them were rather aggressive. In Nvidia's case it was the 8800 GT vs. the PS3/Xbox 360 and the GTX 970 vs. the PS4/Xbox One.


Interesting, I hadn't thought about this, but it definitely makes sense.
 
Except that most games are hitting nowhere near the 8 GB vram limit, even on the Pro. I am assuming this is all based on the lowest common denominator and not the 12 GB gddr5 of the OneX.
If games were just putting polish over a turd, they would consume WAY more than 8 GB at 4k Ultra/insane settings, but that is just not the case.


Yeah but in the console space it's not just a question of the vram limit, but a synergy between that, and the rest of the performance "box" they have to work within to maintain frames, especially at higher resolutions. Just because there's X-amount more of this resource, doesn't mean utilizing it is free.


Almost any game that has released optional high resolution texture packs for PC (which always seems visually a big step up to me), ends up pushing at or over the 8gb vram ceiling, and I'd say the ones that don't exceed it, did so intentionally, at the cost of some fidelity. It's why I bought a replacement 1080ti when I killed my last one, instead of RTX, since I couldn't justify paying almost double for a 2080ti, though I'd have bought a 2080 Super just to try the rtx features if it weren't for the 8gb limit.
 
Almost any game that has released optional high resolution texture packs for PC (which always seems visually a big step up to me), ends up pushing at or over the 8gb vram ceiling, and I'd say the ones that don't exceed it, did so intentionally, at the cost of some fidelity. It's why I bought a replacement 1080ti when I killed my last one, instead of RTX, since I couldn't justify paying almost double for a 2080ti, though I'd have bought a 2080 Super just to try the rtx features if it weren't for the 8gb limit.

If they were the same price, I really think you would have been better off with the RTX 2080 as DLSS is finally showing promise and some games really take advantage of the new architecture.

Even if the new consoles are 16 GB, I imagine developers will want to build games that can port to the older consoles as well. I really don't think VRAM will increase nearly as fast as hardware demands. Here is a huge compilation of VRAM data using TechPowerUp.com info. All of them fit rather accurately with a linear equation; the first couple of games needed to have the megapixels squared to make a linear chart between resolutions.

[Attached chart: vram.png]
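For anyone curious, here's roughly what that kind of linear fit looks like; this is just my sketch of the idea described above, using NumPy and the Control numbers quoted earlier in the thread as sample input (the attached compilation covers far more games):

```python
# Sketch of the "linear equation" fit described above: VRAM (MB) as a linear
# function of rendered megapixels. Sample input: the Control figures quoted
# earlier in the thread; the real compilation covers many more games.
import numpy as np

megapixels = np.array([1920 * 1080, 2560 * 1440, 3840 * 2160]) / 1e6
vram_mb    = np.array([4279, 4979, 6520])

slope, intercept = np.polyfit(megapixels, vram_mb, 1)
naive_8k = intercept + slope * (7680 * 4320 / 1e6)   # naive extrapolation, no new assets
print(f"fit: {intercept:.0f} MB + {slope:.0f} MB/MP -> ~{naive_8k / 1024:.1f} GB at 8K")
```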
 
If they were the same price, I really think you would have been better off with the RTX 2080 as DLSS is finally showing promise and some games really take advantage of the new architecture.

Even if the new consoles are 16 GB, I imagine developers will want to build games that can port to the older consoles as well. I really don't think VRAM will increase nearly as fast as hardware demands. Here is a huge compilation of VRAM data using TechPowerUp.com info. All of them fit rather accurately with a linear equation; the first couple of games needed to have the megapixels squared to make a linear chart between resolutions.


Yeah, it's a valid opinion, and I certainly considered it. My animosity toward the mediocre iterative performance increase, coupled with the VRAM, just made it a non-starter for me. I can handle turning down a few settings I don't even appreciate, but I will *never* be OK with hitting a VRAM ceiling that precludes me from enjoying a significant texture-resolution bump, when the cost of that component is such a drop in the bucket, especially compared to the huge vertical increase in GPU cost. I know that increase was only possible because of a momentary aberration (maybe strong words; if mining gets a large and sustained resurgence, I'll happily admit I was wrong on this point), which emboldened the largest price-versus-(lack-of-)performance jump in recent history.


I'll be really interested to see a comparison of the GPU performance potential of these upcoming consoles versus their price, factoring in the other components, and I bet we see performance on par with or better than the RTX 2080 Ti in a $400-500ish total-price console. I'm spitballing, because I don't follow console news other than to know we're expecting the next iteration.


I fully expect to see a reasonable jump next round that is fiscally valid, but I'm not feeling too tempted by the 20xx series myself.
 
Most games even now use only about 3.5GB VRAM lol

Yep, and if they do, taking away some of the more 'placebo-level' settings like 16x aa and dense fog typically pulls them back to well under 4 GB.

I wonder what the actual vram consumption on consoles is using some of these more optimized settings.

So did the One X have 12GB of memory to cover VRAM limitations? I am thinking they did it for the added bandwidth, which is why the One X has a big advantage over the PS4 Pro beyond the gflop difference.

Case in point: we don't see nearly the same performance difference between the RX 570 and RX 590 as we do between the Pro and One X, despite similar gflop deltas (Digital Foundry has some great videos on this). Team Xbox knew that the older GCN architecture was very bandwidth-bottlenecked at a given gflop count (compared to Nvidia offerings), which is why they went the 384-bit route.
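For reference, the bandwidth gap from that wider bus is easy to put a number on (both consoles use roughly 6.8 Gbps GDDR5; treat the figures as approximate):

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

one_x   = bandwidth_gbs(384, 6.8)   # Xbox One X: 384-bit GDDR5 -> ~326 GB/s
ps4_pro = bandwidth_gbs(256, 6.8)   # PS4 Pro:    256-bit GDDR5 -> ~218 GB/s
print(f"One X ~{one_x:.0f} GB/s vs PS4 Pro ~{ps4_pro:.0f} GB/s ({one_x / ps4_pro:.1f}x)")
```

That roughly 1.5x bandwidth gap tracks the gflop gap much more closely than the raw capacity difference does.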
 
Additionally, the VRAM advantage of the One X is not really 50% over the Pro. The Pro has 1 GB of DDR3 to free up an extra 512 MB for graphics. It's also said the Xbox One dedicates 3 GB to the system. Factor in a lighter-weight OS, and the Pro can likely dedicate 7 GB to the GPU while the One X has at best 9 GB to work with.
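Putting that budget into numbers (the OS reservations here are the assumptions from this post, not official figures):

```python
# Rough game-available memory on each console, using the reservations assumed above.
ps4_pro_gddr5 = 8.0    # GB of GDDR5
ps4_pro_freed = 0.5    # ~512 MB freed for games by offloading to the 1 GB of DDR3
ps4_pro_os    = 1.5    # assumed remaining system reservation (illustrative guess)
one_x_gddr5   = 12.0   # GB of GDDR5
one_x_os      = 3.0    # reported system reservation

ps4_pro_game = ps4_pro_gddr5 + ps4_pro_freed - ps4_pro_os
one_x_game   = one_x_gddr5 - one_x_os
print(f"PS4 Pro ~{ps4_pro_game:.0f} GB vs One X ~{one_x_game:.0f} GB for games "
      f"(+{one_x_game / ps4_pro_game - 1:.0%})")
```

That lands at roughly 7 GB vs 9 GB, i.e. closer to a 30% advantage than 50%.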
 
R6 Siege on 1440P Ultra, and I mean every setting cranked, Temporal AA, uses about 6.5GB VRam.
 
Unless a developer wants to deliberately push things I wouldn't worry about 8GB any time soon.

And there really isn't the power to fully use it all anyway. If all 8GB is actually being used for rendering, and not just as a cache, then more than likely the GPU itself will be the bottleneck.
 
Unless a developer wants to deliberately push things I wouldn't worry about 8GB any time soon.

And there really isn't the power to fully use it all anyway. If all 8GB is actually being used for rendering, and not just as a cache, then more than likely the GPU itself will be the bottleneck.
High res textures can eat up vram and have little to no impact on GPU power.
 
High res textures can eat up vram and have little to no impact on GPU power.

It'll be compressed in memory though, and 8GB is actually quite a lot in terms of raw compressed data.

That's less than a whole fresh install of Windows 7!

MMMmmmmm, Windows XP running in a magical cuda VM entirely in GPU memory... :p
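To put rough numbers on "compressed in memory": GPUs keep textures resident in block-compressed formats (BC1, BC7, etc.) at a fixed 4 or 8 bits per pixel, plus about a third extra for the mip chain. A quick sketch, with the formats chosen just as typical examples:

```python
# Approximate resident size of a single texture, including a full mip chain (~1.33x).
def texture_mb(width, height, bits_per_pixel, with_mips=True):
    size_mb = width * height * bits_per_pixel / 8 / 2**20
    return size_mb * 4 / 3 if with_mips else size_mb

for fmt, bpp in [("RGBA8 uncompressed", 32), ("BC1 (opaque)", 4), ("BC7 (high quality)", 8)]:
    print(f"4096x4096 {fmt}: ~{texture_mb(4096, 4096, bpp):.0f} MB")
```

So even a few hundred 4096x4096 BC7 textures resident at once only comes to a handful of gigabytes, which is roughly the point about 8GB of compressed data being a lot.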
 
It'll be compressed in memory though, and 8GB is actually quite a lot in terms of raw compressed data.

That's less than a whole fresh install of Windows 7!

MMMmmmmm, Windows XP running in a magical cuda VM entirely in GPU memory... :p
My point was that GPU power is nearly irrelevant in the one setting that can impact vram the most.
 
[Attachment 189792]

Then there are games like these where HBCC does wonders, but this is still probably mostly the fault of the developers and consoles.

So HBCC is a better allocation of dynamic ram? It looks like the GTX was able to 'save' itself with dynamic ram since system ram usage was higher, but this may be misleading since we don't know what the actual minimums are on those settings. It is still sort of a moot point since you would not run a GTX at 1440p ultra on that game.
 
My point was that GPU power is nearly irrelevant in the one setting that can impact vram the most.

I kind of wonder what the 'theoretical' max data usage is for a single frame (similar to a raw file?). The GPU obviously needs several orders of magnitude more than that, as it would never be able to swap out textures that fast.

So I guess my question is, do VRAM requirements go down as speed increases? i.e. GDDR5 vs GDDR6.

It would be interesting to see which GPU gets in trouble first: the 980 Ti or the 1660 Ti.
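One way to frame the GDDR5 vs GDDR6 question: bandwidth bounds how much data can even be touched (or swapped) per frame, but it doesn't shrink what has to stay resident. The figures below are just representative 256-bit configurations:

```python
# Upper bound on data touched per frame at a given memory bandwidth and frame rate.
# Representative 256-bit configs: ~256 GB/s GDDR5 vs ~448 GB/s GDDR6 (RTX 2080 class).
def gb_touchable_per_frame(bandwidth_gbs, fps):
    return bandwidth_gbs / fps

for name, bw in [("GDDR5 ~256 GB/s", 256), ("GDDR6 ~448 GB/s", 448)]:
    print(f"{name}: ~{gb_touchable_per_frame(bw, 60):.1f} GB per frame at 60 fps")
```

In practice most of that bandwidth is spent re-reading the same resident textures and render targets many times over, so faster memory mainly softens the penalty when something has to be streamed in; it doesn't straightforwardly lower the capacity requirement.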
 
Nobody should be citing VRAM usage as an indicator of VRAM need, period. Properly coded games dynamically allocate based on whatever the GPU's VRAM capacity may be, NOT what the game necessarily or actually needs. Any 8GB cards currently out there will run out of GPU power way before some silly hypothetical future scenario requires more VRAM than GPU power.

The only current scenario where 8gb vram may be insufficient is ray tracing, and here no 8gb cards can ray trace worth a damn anyway.
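A minimal sketch of why usage tracks capacity (entirely hypothetical logic, not any real engine's code): if the streaming pool is sized from whatever VRAM is detected, the same scene will simply report higher "usage" on a bigger card:

```python
# Hypothetical texture-pool sizing: the budget scales with detected VRAM, so
# reported "usage" follows card capacity rather than what the scene strictly needs.
def texture_pool_budget_mb(total_vram_mb, reserved_for_targets_mb=1500, headroom=0.10):
    usable = total_vram_mb * (1 - headroom) - reserved_for_targets_mb
    return max(usable, 1024)   # always keep some minimum pool

for card_mb in (8192, 11264):   # e.g. an 8 GB RTX 2080 vs an 11 GB 1080 Ti
    print(f"{card_mb} MB card -> ~{texture_pool_budget_mb(card_mb):.0f} MB texture pool")
```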
 
Nobody should be citing VRAM usage as an indicator of VRAM need, period. Properly coded games dynamically allocate based on whatever the GPU's VRAM capacity may be, NOT what the game necessarily or actually needs. Any 8GB cards currently out there will run out of GPU power way before some silly hypothetical future scenario requires more VRAM than GPU power.

The only current scenario where 8gb vram may be insufficient is ray tracing, and here no 8gb cards can ray trace worth a damn anyway.
Mods are a use case that could cause you to hit VRAM limits, but I personally don't care for mods, and the games that are extremely popular for it, like Skyrim, are trash anyway.
 
Mods are a use case that could cause you to hit VRAM limits, but I personally don't care for mods, and the games that are extremely popular for it, like Skyrim, are trash anyway.

Mods are notorious for that! And it's mostly due to bad exploitation of an engine not designed with such large assets in mind. Had it been, said textures would be better cached and optimised for streaming so it wouldn't be an issue with a proper production release.

How much RAM does id's MegaTexture technology chew up? I don't remember the Rage engine getting cited for memory issues.

I'd say about the only card in recent memory that was/is truly RAM-starved (besides those stupid 3GB versions of 6GB cards) was the 780 Ti with its 3GB buffer. That's powerful enough to genuinely suffer with well-optimised games demanding 4GB+.

A theoretical RTX Titan/2080ti with only 8GB would probably be starved, but I'd say the 2080 is about perfectly balanced. You'll be sacrificing other settings before reducing texture size/quality in any foreseeable game I would say.
 
Mods are a use case that could cause you to hit VRAM limits, but I personally don't care for mods, and the games that are extremely popular for it, like Skyrim, are trash anyway.

This factor often amuses me. Those mods are optimizing in many cases for screenshots - not gameplay. In motion it matters far less.

There are serious diminishing returns on texture size past a point, and this varies per texture and its usage. Once you get past "my god look at these pixels", it's hard to tell and you need to flip between before/after to see it at all.

Now yes, sometimes in games with lots of textures they clamp the budget per frame too low and mods can help. But overall, I find most mods to be extreme overkill, just to say "4k texpack!". I don't need 4k on a brick on a wall seen in passing.
 