The Slowing Growth of VRAM in Games

Good article on VRAM:
https://www.techspot.com/article/2670-vram-use-games/

It breaks down how much VRAM a single frame actually uses and how Microsoft PIX can analyze real-world usage.
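For anyone who wants to poke at this themselves: PIX gives you the full per-resource capture, but a rough, programmatic way to watch real-world usage is to poll the OS-reported budget through DXGI. A minimal sketch (Windows only; assumes the GPU you care about is adapter 0, and this is the process-wide view, not PIX's per-resource breakdown):

```cpp
// Poll the OS-reported VRAM budget and current usage for this process via
// IDXGIAdapter3::QueryVideoMemoryInfo. Build on Windows and link dxgi.lib.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;  // assumption: GPU 0

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    // LOCAL = dedicated VRAM; NON_LOCAL would report shared system memory.
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter3->QueryVideoMemoryInfo(
            0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info))) return 1;

    std::printf("VRAM budget for this process: %.2f GB\n", info.Budget / double(1ull << 30));
    std::printf("VRAM currently in use:        %.2f GB\n", info.CurrentUsage / double(1ull << 30));
    return 0;
}
```

Staying under the Budget figure is generally what keeps a game out of stutter territory; going over it is when the OS starts demoting resources to system memory.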

While the amount of VRAM used in AAA games has shot up in the last year, there are still ways to make lower-VRAM cards work beyond dropping resolution or settings.

Still, it's a bit crazy how the eight-year-old R9 390 had 8 GB of VRAM while midrange cards today ship with the same. 8 years before 2015 was 2007 (0.5 to 1 GB on the 9800 GTX and HD 4850/70!!).

So yeah, definitely a slowing growth in VRAM.
 
For Nvidia at least, and it's showing now for 1080p gaming.
 
Sure, at least a 2-3x increase over the last 8 years would have helped, though nothing like the 8x jump of the 8 years prior was needed.
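Just to put rough numbers on that (the capacities here are assumptions for illustration, roughly a midrange-to-high-end card of each era, not measured data):

```cpp
// Implied VRAM growth multiples and compound annual rates for the two
// 8-year periods discussed above. Capacities are illustrative assumptions.
#include <cmath>
#include <cstdio>

int main() {
    struct Period { const char* label; double startGB; double endGB; int years; };
    const Period periods[] = {
        {"2007 -> 2015 (~1 GB -> 8 GB R9 390)",     1.0,  8.0, 8},
        {"2015 -> 2023 (8 GB -> 8-12 GB midrange)", 8.0, 12.0, 8},
    };
    for (const Period& p : periods) {
        double multiple = p.endGB / p.startGB;
        double cagr = std::pow(multiple, 1.0 / p.years) - 1.0;  // compound annual growth
        std::printf("%-42s %4.1fx total, ~%2.0f%% per year\n", p.label, multiple, cagr * 100.0);
    }
    return 0;
}
```

Roughly 30% per year in the first period versus about 5% per year since, which is the slowdown in a nutshell.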

And it's not as if developers would use 64 GB if they had it (at least not without being completely sloppy about it).

It's sort of like digital pictures 20 years ago. Quality and size (in MB/megapixels) were comparatively small. 10 years ago they dramatically increased in size and quality. Since then, digital photos haven't increased much in file size. That's NOT because memory card capacity or camera processing power held them back, as both continued to increase.

Not a perfect analogy, but the point is, there is really only so much you can stuff into a single frame without seeing massively diminishing returns.
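To make the "single frame" point concrete, here's a back-of-the-envelope sketch of what just the per-frame render targets cost at 4K. The target list and formats are made-up but typical of a deferred-style renderer; the real hogs are still textures, geometry and streaming pools:

```cpp
// Rough per-frame render-target footprint at 4K for a hypothetical
// deferred-style target layout. Illustrative only.
#include <cstdio>

int main() {
    const int width = 3840, height = 2160;
    struct Target { const char* name; int bytesPerPixel; };
    const Target targets[] = {
        {"Albedo (RGBA8)",            4},
        {"Normals (RGB10A2)",         4},
        {"Material params (RGBA8)",   4},
        {"HDR scene color (RGBA16F)", 8},
        {"Depth/stencil (D32S8)",     8},  // 5 logical bytes, commonly padded to 8
    };
    double totalMB = 0.0;
    for (const Target& t : targets) {
        double mb = double(width) * height * t.bytesPerPixel / (1024.0 * 1024.0);
        totalMB += mb;
        std::printf("%-28s %6.1f MB\n", t.name, mb);
    }
    std::printf("Total: ~%.0f MB per frame of render targets -- a small slice of 8 GB,\n"
                "which is why textures and streaming decide how much VRAM a game needs.\n",
                totalMB);
}
```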
 
Despite what some say, I strongly believe the 2080 will be VRAM limited @ 4K in the near future.
Hello, fellow [H] member 4 years from the future.

So looking at one of the current VRAM hogs, Hogwarts Legacy: that card doesn't get anywhere near 4K ultra even without RTX, and is limited to 1080p ultra with a 76 fps average.
4K ultra gets you 28 fps with no evidence of VRAM limitations (when compared to more powerful cards with 8 GB).

Turning on RTX DOES show solid evidence of VRAM limitations, as the 2080 gets a mere 22 fps while the VRAM-sufficient 2080 Ti gets 54 fps. However, with a few small tweaks you should be able to get a console-like 40 fps at 1080p with RTX.

It won't be until the next generation of GPUs (the 30 series) that 8 GB becomes a major problem, once high-end cards have enough grunt to actually run out of VRAM.
 
An interesting comparison I just saw:

The RTX 3080 is normally ~50% faster than the RX 6800 XT in ray tracing, but in Hogwarts Legacy at 1440p with ray tracing it runs into stuttering issues (poor 1% lows).

https://www.techspot.com/review/2671-geforce-3080-vs-radeon-6800-xt/
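For anyone unfamiliar with the "1% lows" metric in reviews like that one: it's essentially the average of the slowest 1% of frames expressed as fps, which is why VRAM thrashing tanks it long before the average moves. A quick sketch of one common way to compute it (exact methodology varies a bit between outlets, and the sample data below is synthetic):

```cpp
// One common interpretation of "1% lows": average the worst 1% of frame
// times and express the result as fps.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <numeric>
#include <vector>

double onePercentLowFps(std::vector<double> frameTimesMs) {
    // Put the slowest frames (largest frame times) first.
    std::sort(frameTimesMs.begin(), frameTimesMs.end(), std::greater<double>());
    const size_t count = std::max<size_t>(1, frameTimesMs.size() / 100);  // worst 1%
    const double avgMs = std::accumulate(frameTimesMs.begin(),
                                         frameTimesMs.begin() + count, 0.0) / count;
    return 1000.0 / avgMs;
}

int main() {
    // Mostly smooth 10 ms frames with a few VRAM-thrash style spikes mixed in.
    std::vector<double> samples(1000, 10.0);
    samples[100] = samples[500] = samples[900] = 80.0;
    const double avgFps = 1000.0 /
        (std::accumulate(samples.begin(), samples.end(), 0.0) / samples.size());
    std::printf("average fps: ~%.0f, 1%% lows: ~%.0f fps\n", avgFps, onePercentLowFps(samples));
}
```

A handful of 80 ms spikes barely moves the average (~98 fps) but drags the 1% lows down to ~32 fps, which is exactly the pattern in that chart.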
 
I heard that Nvidia's texture compression, texture streaming, etc., would overcome the issues of having a small VRAM buffer!

That is sarcasm. Years ago I figured that it likely wouldn't be enough, and so I stopped buying xx70 and xx80 class cards (the last xx80 that I had was an EVGA 2080 Super that I won).
 
All things considered, the 3080's 10 GB held up rather well for that performance level, even with RT, after three and a half years. Among 50 games, Hogwarts was really the only exception. Plague Tale took a dump at 4K RTX, but that game would have been running sub-30 fps even with sufficient VRAM.
 
There are always market cycles; they said similar things when the 360 and PS3 launched, which were graphically ahead of PC hardware, especially for the money for the average consumer. Only time will tell, but personally I don't think PC gaming is going away. There will always be people willing to pay more for their preference. If supply chains get back to where they were, inflation comes down, and they learn to limit scalpers, decent prices will naturally come back. Otherwise these pricing trends will only continue in this market climate. Here's hoping for an amazing 50 series / RDNA 4 launch cycle!
 
Guess what: AMD uses the same kinds of texture compression and streaming techniques to make memory and bandwidth usage more efficient.
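Worth noting what those techniques actually buy you: classic block compression is a fixed ratio per format, so it scales the whole texture budget by a constant factor rather than removing the ceiling. A rough sketch of the footprint math (the texture size here is just an illustrative assumption):

```cpp
// Footprint of a single 4096x4096 texture with a full mip chain in a few
// common GPU formats. Block compression is a fixed ratio per format, so it
// stretches a VRAM budget by a constant factor -- it doesn't remove the cap.
#include <cstdio>

// A full 2D mip chain adds roughly one third on top of the base level.
double textureMB(int width, int height, double bitsPerPixel) {
    const double baseBytes = double(width) * height * bitsPerPixel / 8.0;
    return baseBytes * (4.0 / 3.0) / (1024.0 * 1024.0);
}

int main() {
    const int w = 4096, h = 4096;
    std::printf("Uncompressed RGBA8 (32 bpp): %6.1f MB\n", textureMB(w, h, 32.0));
    std::printf("BC7 / BC3 (8 bpp):           %6.1f MB\n", textureMB(w, h, 8.0));
    std::printf("BC1 (4 bpp):                 %6.1f MB\n", textureMB(w, h, 4.0));
}
```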
 
Neural Texture Compression could be an answer, and obviously GDDR7 will bring changes to VRAM requirements. I have a feeling more will have to be accomplished on the software side of things, as it really needs to catch up a good amount to be in harmony with the current capabilities of modern hardware. I don't think just throwing a bigger bus and more VRAM at the problem is the solution; engineers' goals are usually to improve efficiency.
 
Except most people did not buy a 3080, or weren't even able to get hold of one, until much later. I don't think it is holding up well at all.
 
Comparison of 8 GB vs 16 GB 4060 Ti cards in various games:

HUB makes the case for spending $100 extra for more VRAM

Better performance:
The Last Of Us
Resident Evil
Callisto Protocol
Plague Tale Requiem

Better Texture Quality:
Halo Infinite
Forspoken

 
I don't know how the 4200 Ti did it, keeping me alive on MOHAA public servers with only 64 MB of VRAM to use. I just couldn't afford the 128 MB model back then!
 
I don't know why Nvidia decided to go with lower VRAM this gen. I have 8 GB of VRAM on my 880M from 2014, and 9 years later we're still seeing 8 GB of VRAM.

They should have done
24GB 4090
20GB 4080
16GB 4070
12GB 4060 Ti
8GB 4060

Now they have a $399 4060 Ti 8 GB and a $499 4060 Ti 16 GB, only $100 away from a 4070, which has less VRAM (12 GB).

I experienced VRAM stuttering back with a 690 and replaced it with a Titan. Ever since (especially since I play at 4K) I've always gone for the card with more VRAM.

It's definitely an unpleasant experience running out of VRAM, but I guess these days it's not as bad as 13 years ago. Games were completely unplayable if you didn't have enough VRAM.
 
I'll just quote a fellow member here since it bears repeating.

Nvidia’s workstation silicon is the same as their gaming silicon, the only difference being the memory, firmware and drivers.
They intentionally put less memory on the consumer cards to make them incapable of stepping on the toes of their workstation brethren to protect their artificial market segmentation.

Whether you agree with this practice or not is a completely separate debate, but we need to stop acting like we don't understand why they do it.
 
The only way Nvidia stops doing this is when their workstation silicon is different from their desktop silicon. I doubt AMD and Intel will manage to take any significant margin from Nvidia any time soon, but gaming and AI compute are going on different paths and Nvidia may soon be forced to split their stack.
Nvidia doesn’t like printing silicon they can’t charge for, and removing the parts they disable via firmware and software would let them cut back on die size by a good amount.
 

Speculating of course, but GDDR6 bus lanes have become silly expensive. It seems that they want to minimize the number of dies, with a set bus limit on each. It also seems like Nvidia really likes them in pairs as well, i.e. AD102 at 384-bit (12 chips), AD103 at 256-bit (8 chips), AD104 at 192-bit (6 chips), and both AD106 and AD107 at 128-bit (4 chips).

AMD does have the 6700 10 GB (5 chips), but that's likely them fusing off some of the lanes.

But yeah, that still leaves out 320-bit (10 chips). As much as they charge for the 4080, AD103 should have been 320-bit to make the rest of the product stack work. That would have left 256-bit for the 4070 Ti (AD104), while both the 4070 and 4060 Ti (AD106) could have been 192-bit, and so on. If they wanted to segment the 4070 and 4060 Ti, they could have fused off some bus lanes on the 4060 Ti to make it 160-bit with 10 GB of VRAM, akin to the RX 6700. Even that small bump to bandwidth and VRAM would have been huge for the 4060 Ti.
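The arithmetic behind those chip counts, for anyone following along: each GDDR6/GDDR6X package hangs off a 32-bit channel, and the consumer densities are 1 GB or 2 GB per package, so bus width basically dictates the capacity options (clamshell, two packages per channel, is the exception and is how the 16 GB 4060 Ti exists). A quick sketch, with the 160-bit entry being the hypothetical cut-down config floated above:

```cpp
// Bus width -> memory package count -> capacity options for GDDR6/GDDR6X,
// assuming one 32-bit package per channel and 1 GB or 2 GB densities.
#include <cstdio>

int main() {
    struct Config { const char* die; int busBits; };
    const Config configs[] = {
        {"AD102 (4090)",                384},
        {"AD103 (4080)",                256},
        {"AD104 (4070 Ti / 4070)",      192},
        {"AD106/AD107 (4060 Ti / 4060)",128},
        {"hypothetical 160-bit cut",    160},
    };
    for (const Config& c : configs) {
        const int chips = c.busBits / 32;  // one package per 32-bit channel
        std::printf("%-30s %3d-bit -> %2d packages -> %2d GB or %2d GB\n",
                    c.die, c.busBits, chips, chips * 1, chips * 2);
    }
    // Clamshell mode (two packages per channel) doubles capacity without
    // widening the bus -- that's the 16 GB 4060 Ti.
}
```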
 