Why do video cards need 2GB video memory?

Like the 5970, 4870X2, the 2GB GTX 285, etc. Isn't this overkill, since most reviews show no performance difference for anything over 1GB? Or is there some underlying benefit to having more than a 1GB frame buffer, like smoother gameplay? I've heard Mass Effect is one game where more video memory might kick in, but it seems like such a select few. Most developers probably gear their games to run in 512MB, I would think. Are there any applications besides games that 2GB of video memory might serve well?

I think Crysis may use over 1GB of frame buffer at times; perhaps this is why the 5970 will still dip into the 30s and 40s at enthusiast settings. That may well be the only thing holding the 5970 back.

I wonder when we will see 2GB 5850 and 5870s....
 
It is now; will it be tomorrow?
The answer to that question of course depends on how long you are going to keep your GPU.
I was kind of bummed when GTA4 told me I could only have medium graphics even with 1GB. Then again, it seems to be a shitty console port.

The question is not when we will see 2GB enthusiast-level cards, but when we will see games that require >1GB.
 
You haven't done your research. All the dual-GPU cards have to store ALL the texture data twice, once for each GPU, so while they may have 2GB of video RAM, effectively they only have 1GB.
 
We're approaching the point where playing games at 1080p-class resolutions with maximum-quality textures requires that amount of RAM, but only at that top tier of performance. Then again, it's always good to be ahead of the curve.
 
On this note I'm curious: is there any difference in how video memory is handled between a dual-GPU card and two single-GPU cards in CrossFire? For instance, do I effectively have any more video memory with one 5970 versus two 5870s?
 
On this note I'm curious: is there any difference in how video memory is handled between a dual-GPU card and two single-GPU cards in CrossFire? For instance, do I effectively have any more video memory with one 5970 versus two 5870s?

Nope. They are two separate GPUs merged onto one PCB. Nothing too fancy, unfortunately.

I wish they could somehow 'share' that 2GB of memory, but currently they really are just two separate GPUs.
 
I really do wish the 5970 and the GTX 295 had more RAM. It seems like we're already butting up against that barrier at 2560x1600, which I use exclusively now, and I don't want to choose between AA and max textures; I want both.
 
It is now; will it be tomorrow?
The answer to that question of course depends on how long you are going to keep your GPU.
I was kind of bummed when GTA4 told me I could only have medium graphics even with 1GB. Then again, it seems to be a shitty console port.

The question is not when we will see 2GB enthusiast-level cards, but when we will see games that require >1GB.

GTA4 sucked balls graphically IMO. I definitely would not buy a 2GB card for that game.
 
GTA4 sucked balls graphically IMO. I definitely would not buy a 2GB card for that game.

I want to see what that game looks like at maximum settings at 60fps. I think it's another two GPU generations until I'll get that level of performance from a midrange GPU.
 
GTA4 sucked balls graphically IMO. I definitely would not buy a 2GB card for that game.

It doesn't matter if GTA4 sucked balls. Games are heading in that direction (graphics requirement-wise) at any rate.
 
I was kind of bummed when GTA4 told me I could only have medium graphics even with 1GB. Then again, it seems to be a shitty console port.

I could set it to high on a GTX 260 with 896MB and on my current 1GB 5770. Drop your draw/detail distances to compensate.
 
The new GPU-Z shows how much memory your video card is using. I've yet to see mine hit over 800MB, and that's just L4D2. Everything else is sub-500MB.
 
I'm sure Eyefinity has upped the ante here. On the newer setups I expect 2GB to become common. On a single screen, the only reason I can see is high levels of AA.
 
Who needs a 32MB video card?

640KB of RAM ought to be enough for anyone

64-bit is pointless, why would you need more than 2GB of RAM?
 
GTA4 was a good game; I liked it. It ran fine (60 fps in the in-game benchmark) at 40 draw distance, 70, 70, and then 10, with everything on high. GTX 285 1GB, 1920x1080, 3.4GHz Q9550.
 
Who needs a 32MB video card?

640KB of RAM ought to be enough for anyone

64-bit is pointless, why would you need more than 2GB of RAM?

You have a point; my video card has 16MB of memory and my PC has 128MB of RAM, and it's more than enough!
 
You haven't done your research. All the dual-GPU cards have to store ALL the texture data twice, once for each GPU, so while they may have 2GB of video RAM, effectively they only have 1GB.

It kind of sucks that they haven't made use of that extra VRAM. Hopefully in the near future the graphics companies or developers will figure out something to make multi-GPU games use memory more efficiently.
 
Eyefinity and high resolutions. A lot of people are getting bigger monitors now. A bigger resolution means the game uses bigger textures, which makes for bigger file sizes. 1GB is already hitting the limit with heavy antialiasing and anisotropic filtering.
 
Eyefinity and high resolutions. A lot of people are getting bigger monitors now. A bigger resolution means the game uses bigger textures, which makes for bigger file sizes. 1GB is already hitting the limit with heavy antialiasing and anisotropic filtering.

Not really. The game is going to use the same resolution textures regardless of your display resolution. A bigger resolution only means bigger framebuffers (and some other rendering buffers), but those are static in size (they will always be the same size for a given resolution and AA level). AF doesn't increase VRAM usage, AFAIK.

So we really are at the point where resolution is quickly going to become irrelevant when talking VRAM usage (unless we start seeing higher-res screens, of course). Triple buffering 2560x1600 requires less than 50MB of VRAM for framebuffers, for example. AA affects that, but still. Resolution-dependent VRAM usage makes up a rather small portion of the card's VRAM, and it's only going to get smaller as VRAM increases.
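
For anyone curious where that "less than 50MB" figure comes from, here's a rough back-of-the-envelope sketch in Python. It assumes 32-bit color (4 bytes per pixel) and ignores depth/stencil buffers and driver overhead, so real usage will be somewhat higher:

# Rough framebuffer cost of triple buffering at 2560x1600.
# Assumption: 4 bytes per pixel (32-bit color); depth/stencil and
# driver overhead are not counted, so treat this as a lower bound.
width, height = 2560, 1600
bytes_per_pixel = 4
buffers = 3  # front buffer + two back buffers (triple buffering)

total_bytes = width * height * bytes_per_pixel * buffers
print(round(total_bytes / 2**20, 1), "MB")  # ~46.9 MB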
 
From what I know, they probably went with the two-copies-of-data-in-memory model because it is fast and less complicated than sharing memory. The more complex the scheme (such as shared memory), the slower things go.
 
I was looking for 2GB vs. 1GB video card reviews and couldn't really find much...
Can anyone provide any reviews of 2GB cards that show a difference from their 1GB counterparts?
 
I was looking for 2GB vs. 1GB video card reviews and couldn't really find much...
Can anyone provide any reviews of 2GB cards that show a difference from their 1GB counterparts?

There aren't any 5xxx series cards with 2GB, and Nvidia doesn't have anything to compete with yet.
 
Who needs a 32MB video card?

640KB of RAM ought to be enough for anyone

64-bit is pointless, why would you need more than 2GB of RAM?

I don't think anyone is suggesting that graphics cards won't ever need that amount of VRAM.

It's just a question of whether we need it right now. Why now versus two or three years ago? Or why now versus next-gen GPUs?
Why not put 2GB on a GeForce4 Ti years ago?

My point is that new technology, or in this case a new amount of VRAM on a graphics card, has its own time and conditions under which it becomes feasible. That doesn't mean it's feasible all the time.

And then there is also the marketing gimmick where GPUs are given more RAM than they can actually utilize, often at the cost of slower RAM. This gimmick has been around for as long as I can remember, and I'm sure we can all agree it brings no gain in performance; a 256MB Radeon 9600 Pro or a 1GB GeForce 9500, for example. Back when we had the 32MB GeForce 2, giving it 128MB wasn't going to do it any good, even though we eventually needed that much and more some years later.

So it's a valid question when someone asks whether we really need that much VRAM right now.
Can current GPUs actually utilize that much RAM?
Or is this merely a marketing gimmick?

Now I'm not saying it's necessary or not; I'm just pointing out that it is a very valid question that the thread starter is asking. :)
 
The faster we prepare people to work ahead of the curve (2GB cards), the faster the market can push forward to produce the next series. So by the time 2GB is needed, 3-4GB cards are already out and in the hands of avid gamers. By pushing the curve up a notch and building for tomorrow instead of today, you encourage game companies to make games that take advantage of such standards. Thus, progress gets a kick in the pants.

Besides, having 2GB in one card is nothing more than having a gold-plated toilet. So what? It doesn't need to be gold, but do your friends have a gold-plated john? Case in point.
 
Not really. The game is going to use the same resolution textures regardless of your display resolution. A bigger resolution only means bigger framebuffers (and some other rendering buffers), but those are static in size (they will always be the same size for a given resolution and AA level). AF doesn't increase VRAM usage, AFAIK.

So we really are at the point where resolution is quickly going to become irrelevant when talking VRAM usage (unless we start seeing higher-res screens, of course). Triple buffering 2560x1600 requires less than 50MB of VRAM for framebuffers, for example. AA affects that, but still. Resolution-dependent VRAM usage makes up a rather small portion of the card's VRAM, and it's only going to get smaller as VRAM increases.

It's true that more video memory doesn't magically create higher-detail textures, but with more memory available and the bandwidth to use it, developers will provide higher-res textures, as they already are doing.
Also, game modders give us higher-res textures that have already pushed some games past 1GB of memory at modern resolutions.

Framebuffer size matters a lot when using AA or extra buffering, because both apply to the whole screen.
So as your gaming display area increases (i.e. Eyefinity, new higher-res displays...), your video memory requirements can increase dramatically.

Gfx cards are now being used for more than just generating 3D images; they also process some of the interactions, as with PhysX on the GPU.
This causes more objects to be drawn as well and can use a significant amount of memory.
So far only NVIDIA has joined the party, with a proprietary solution, but AMD will soon be doing the same independently, and both are already using DX11's GPU streaming features.

Take Batman, for example.
With a GTX 260 running at 1920x1080 with 16xQAA and max PhysX, the card runs out of memory; I have measured memory use and confirmed this.
Going back to 16xAA (not QAA) frees up enough memory to stop the memory thrashing.
So the amount of AA you use can make all the difference, even at only single-screen 1080p!
This is from a game, as released, with no mods.
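
As a rough illustration of why one AA notch can be the tipping point, here is a sketch of how the render-target footprint scales with stored samples at 1920x1080. The stored-sample counts are assumptions based on how NVIDIA's CSAA modes are usually described (16x storing 4 color/Z samples, 16xQ storing 8); textures, PhysX buffers and everything else come on top of this:

# Approximate render-target footprint at 1920x1080 for different AA modes.
# The per-pixel stored sample counts are assumptions; CSAA coverage samples
# are small and ignored here. 4 bytes each for color and depth per sample.
width, height = 1920, 1080
modes = {
    "no AA": 1,
    "4xMSAA": 4,
    "16x CSAA (assumed 4 stored samples)": 4,
    "16xQ CSAA (assumed 8 stored samples)": 8,
}

for name, samples in modes.items():
    size = width * height * samples * (4 + 4)  # color + depth per sample
    print(f"{name}: ~{size / 2**20:.0f} MB")

On an 896MB card already near its limit, the extra ~60MB between the 4-sample and 8-sample modes is plausibly enough to start thrashing.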
 
Enhanced E-Peen, very important.

Also, bigger numbers sell better.
 
Yeah, I would say that game developers don't use smart caching at all; what you would expect is that memory gets used better while gaming.

Yet when you have 4GB of normal RAM, it is hardly ever used 90% of the time, while things could go much more smoothly if cached.

Let alone video memory, which falls more or less under the same principle: it can be used for caching but isn't.

It is like the whole argument of single-threading vs. multi-threading.

Too few developers make smart use of this and too many are stuck, programming-wise, in the last century, even though we have had things like Linux and BeOS, dare I even say Amiga. :)

And then people claim that we learn from our mistakes. :)
 
Yeah, I would say that game developers don't use smart caching at all; what you would expect is that memory gets used better while gaming.

Yet when you have 4GB of normal RAM, it is hardly ever used 90% of the time, while things could go much more smoothly if cached.

Let alone video memory, which falls more or less under the same principle: it can be used for caching but isn't.

It is like the whole argument of single-threading vs. multi-threading.

Too few developers make smart use of this and too many are stuck, programming-wise, in the last century, even though we have had things like Linux and BeOS, dare I even say Amiga. :)

And then people claim that we learn from our mistakes. :)

There's more to the story.
Win32 process space by default is limited to 2GB per process and can be expanded to 3GB per process if the right linker flag and OS boot option are set.
As Win32 is the most common environment, many devs choose not to use a process space larger than the default.
Reason #1: it's what most people have.

Also, for the most part there is no need for more than 2GB per process.
The hardware is flexible enough to allow realtime streaming of data into RAM while a game is running; DMA on hard drives made this possible years ago, and that's pretty decent caching.

So this is why 4GB of RAM is still fine for nearly everyone out there.
The reason gamers need more than 4GB of addressable memory space (note I said addressable space, not actual memory) is if their hardware steals from the address space so there is not enough space left to fit the system RAM.
i.e. CrossFire with 2x 2GB gfx cards.
With a single 1GB card in my Win32 system, I get 3.3GB of system RAM to use; with 2x 1GB in CrossFire, I get 3GB of system RAM to use, still just enough to run a game and the OS.
So when the available addressable space isn't enough to fit your system RAM, then you need a 64-bit OS.
At the moment, the games themselves don't need a 64-bit OS or more address space than 4GB.

The biggest bottlenecks occur getting data from system memory / CPU to the gfx card or between gfx cards.
Not only can they be bandwidth limited but latency limited too.
Because of these bottlenecks, devs DO implement good caching techniques by pre-loading data into system RAM so it is ready to be shoved across the PCI-E bus with the least delay.
There's only so much that's worth caching though; filling your system RAM to the brim won't help.
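
To put the address-space arithmetic in one place (a sketch only; the exact amount the hardware reserves varies by board, BIOS and driver, and the reservation figures below are simply chosen to match the numbers quoted above):

# 32-bit address space arithmetic: anything mapped into the 4GB space
# (GPU apertures, other MMIO) is unavailable for system RAM.
# The reserved_gb values are illustrative, matching the figures above.
ADDRESS_SPACE_GB = 4.0

def usable_ram_gb(installed_gb, reserved_gb):
    return min(installed_gb, ADDRESS_SPACE_GB - reserved_gb)

print(round(usable_ram_gb(4.0, 0.7), 1))  # single 1GB card: ~3.3GB usable
print(round(usable_ram_gb(4.0, 1.0), 1))  # 2x 1GB CrossFire: ~3.0GB usable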
 
So we really are at the point where resolution is quickly going to become irrelevant when talking VRAM usage (unless we start seeing higher-res screens, of course). Triple buffering 2560x1600 requires less than 50MB of VRAM for framebuffers, for example. AA affects that, but still. Resolution-dependent VRAM usage makes up a rather small portion of the card's VRAM, and it's only going to get smaller as VRAM increases.
AA makes an enormous difference at high resolutions. We're talking >400MB for 8xAA at 2560x1600. 16xAA or 16xQAA could probably chew through 1GB on its own.

You're right about AF, though - memory overhead there is effectively zero.
 
AA makes an enormous difference at high resolutions. We're talking >400MB for 8xAA at 2560x1600. 16xAA or 16xQAA could probably chew through 1GB on its own.

You're right about AF, though - memory overhead there is effectively zero.

Alright, but how come reviews don't show any difference in FPS numbers with a lot of AA? I've seen reviews where lots of AA at low resolutions shows a difference, but at high resolutions with AA there is no difference. You would think that at higher resolutions video RAM would become more of a limiting factor.
 
AA makes an enormous difference at high resolutions. We're talking >400MB for 8xAA at 2560x1600. 16xAA or 16xQAA could probably chew through 1GB on its own.

You're right about AF, though - memory overhead there is effectively zero.

Oh, yes, I know AA has a huge impact, but I was trying to point out that it's a fixed impact. 8xAA at 2560x1600 will use the same amount of VRAM in 20 years as it does today (assuming MSAA, of course), but the amount of VRAM on cards will continue to increase.

That said, I believe your 8xAA calculation is *waaay* off. 8xAA (supersampling) at 2560x1600 with triple buffering would be <150MB for framebuffer use (with 16xSSAA being <300MB). Remember, Nvidia's 16xAA isn't 16xMSAA, it's CSAA.

Of course, there are also things like 8xCSAA and 12xCFAA.
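
For what it's worth, most of the gap between the ">400MB" and "<150MB" estimates comes down to what gets counted. Here is a sketch of the arithmetic at 2560x1600 with 8 stored samples, under explicitly stated assumptions (4 bytes per pixel for color and for depth, one multisampled render target, two resolved display buffers):

# How much an 8-sample render target at 2560x1600 can cost, depending
# on what you count. Estimates only, not measurements.
width, height, bpp, samples = 2560, 1600, 4, 8
pixels = width * height

msaa_color = pixels * samples * bpp  # 8 color samples per pixel
msaa_depth = pixels * samples * bpp  # 8 depth/stencil samples per pixel
resolved = pixels * bpp * 2          # resolved front + back buffers

def mb(n):
    return round(n / 2**20)

print("color samples only:   ", mb(msaa_color), "MB")                          # ~125 MB
print("color + depth samples:", mb(msaa_color + msaa_depth), "MB")             # ~250 MB
print("plus resolved buffers:", mb(msaa_color + msaa_depth + resolved), "MB")  # ~281 MB

Count only the multisampled color buffer and you land near the low estimate; count multisampled depth and extra buffered copies too and you drift toward the high one.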

Framebuffer size matters a lot when using AA or extra buffering, because both apply to the whole screen.
So as your gaming display area increases (i.e. Eyefinity, new higher-res displays...), your video memory requirements can increase dramatically.

Yes, but Eyefinity is the only thing that stands to increase resolution at this point. Even the brand-new DisplayPort still only has enough bandwidth to drive 2560x1600.

Gfx cards are now being used for more than just generating 3D images; they also process some of the interactions, as with PhysX on the GPU.
This causes more objects to be drawn as well and can use a significant amount of memory.
So far only NVIDIA has joined the party, with a proprietary solution, but AMD will soon be doing the same independently, and both are already using DX11's GPU streaming features.

And all of that is 100% resolution independent. :)

Take Batman, for example.
With a GTX 260 running at 1920x1080 with 16xQAA and max PhysX, the card runs out of memory; I have measured memory use and confirmed this.
Going back to 16xAA (not QAA) frees up enough memory to stop the memory thrashing.
So the amount of AA you use can make all the difference, even at only single-screen 1080p!
This is from a game, as released, with no mods.

You don't necessarily know whether it was running out of VRAM that caused the stuttering, or a lack of bandwidth, or a lack of GPU power, etc. I'm not saying it couldn't be the difference in AA; it quite possibly was, just saying. If it was purely running out of VRAM, it just means the game was using most of your VRAM and AA pushed it over the edge. Even just going from your 260's 896MB to 1GB of VRAM would have been enough extra space for AA's needs.
 
Yes, but Eyefinity is the only thing that stands to increase resolution at this point. Even the brand-new DisplayPort still only has enough bandwidth to drive 2560x1600.

I'm not sure why you are discussing bandwidth; the issue is available memory.
a) The higher the AA mode you use, the more memory is required.
b) As resolution scales, memory requirements for AA increase dramatically.
I demonstrated that even at 1080p, video memory is at a premium.
This gets worse with the higher-resolution texture packs that are readily available for many games.

And all of that is 100% resolution independent. :)
How can you discount other uses of video memory from contributing to the problem?

You don't necessarily know whether it was running out of VRAM that caused the stuttering, or a lack of bandwidth, or a lack of GPU power, etc. I'm not saying it couldn't be the difference in AA; it quite possibly was, just saying. If it was purely running out of VRAM, it just means the game was using most of your VRAM and AA pushed it over the edge. Even just going from your 260's 896MB to 1GB of VRAM would have been enough extra space for AA's needs.

The thing is, I know for a fact that it was running out of memory; I even said that in the post you quoted!
Here it is again:
"I have measured memory use and confirmed this"
 
I'm sure Eyefinity has upped the ante here. On the newer setups I expect 2GB to become common. On a single screen, the only reason I can see is high levels of AA.

Agreed. I'm expecting to sell my 1GB 5870s to replace them with 2GB versions at some point during my rig's lifetime. I'm pretty sure I'm pushing the limits of 1GB in some games already at 3600x1920.
 
At 2560x1600 and below, only a few games utilize more than 1GB with stock textures (GTA IV being one of them). However, some games - notably Oblivion - have texture mods that can take them well over 1GB.

Eyefinity, on the other hand, will give users resolutions like 5760x1080 or 3240x1920 with three 1920x1080 monitors, and double that with six monitors. Those setups will eat video card RAM like crazy. 2GB per GPU wouldn't be enough for a six-monitor setup.
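
For a rough sense of scale, here is a sketch that counts only triple-buffered color buffers at 4 bytes per pixel: no AA, no depth, and none of the textures and geometry that actually dominate VRAM.

# Pixel counts and plain triple-buffered color-buffer cost for a few
# single-screen and Eyefinity layouts. No AA, no depth, no textures.
layouts = {
    "1920x1080 (single)": (1920, 1080),
    "2560x1600 (single)": (2560, 1600),
    "5760x1080 (3x1 of 1920x1080)": (5760, 1080),
    "3240x1920 (3x1 portrait)": (3240, 1920),
    "5760x2160 (3x2 of 1920x1080)": (5760, 2160),
    "7680x3200 (3x2 of 2560x1600)": (7680, 3200),
}

for name, (w, h) in layouts.items():
    size_mb = w * h * 4 * 3 / 2**20  # 4 bytes/pixel, triple buffered
    print(f"{name}: {w * h / 1e6:.1f} MP, ~{size_mb:.0f} MB of color buffers")

AA multiplies these buffer sizes, and that is before textures and everything else the game keeps in VRAM.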
 
Alright, but how come reviews don't show any difference in FPS numbers with a lot of AA? I've seen reviews where lots of AA at low resolutions shows a difference, but at high resolutions with AA there is no difference. You would think that at higher resolutions video RAM would become more of a limiting factor.

Can you link to these reviews? I have yet to see a review where enabling AA at high resolutions did not result in an FPS drop.
 
When I said that, I was speaking in terms of more video card RAM making a difference... such as a 1GB 4890 vs. a 2GB 4890. In one review the 2GB 4890 performed better at 1680x1050 with AA, but performed the same as the 1GB card at 1920x1200 with AA. You would think the 2GB 4890 would perform better at 1920x1200 with AA, but it didn't - just at the lower resolution, which doesn't make any sense IMO.
 
When I said that, I was speaking in terms of more video card RAM making a difference... such as a 1GB 4890 vs. a 2GB 4890. In one review the 2GB 4890 performed better at 1680x1050 with AA, but performed the same as the 1GB card at 1920x1200 with AA. You would think the 2GB 4890 would perform better at 1920x1200 with AA, but it didn't - just at the lower resolution, which doesn't make any sense IMO.

That could be because at that resolution there may be a different bottleneck, possibly memory bandwidth or the GPU itself. Look at some of the low-end cards that have 1GB of memory; they're not even powerful enough to make use of it. The video card is its own system, just like the entire computer is. The card can only be as fast as its weakest link, which could be the amount of RAM, the bandwidth, or the GPU itself. So just because card X didn't get an improvement with 2GB of VRAM doesn't mean card Y won't.
 
Eyefinity, on the other hand, will give users resolutions like 5760x1080 or 3240x1920 with three 1920x1080 monitors, and double that with six monitors. Those setups will eat video card RAM like crazy. 2GB per GPU wouldn't be enough for a six-monitor setup.

Sure it would. 6x 2560x1600 only needs ~300MB (no AA).

I'm not sure why you are discussing bandwidth; the issue is available memory.
a) The higher the AA mode you use, the more memory is required.
b) As resolution scales, memory requirements for AA increase dramatically.
I demonstrated that even at 1080p, video memory is at a premium.
This gets worse with the higher-resolution texture packs that are readily available for many games.

A) Yes, but there are diminishing returns on AA modes. 12xCFAA vs. 8xMSAA vs. 24xCFAA - all fun modes, but it gets pretty hard to tell the difference between them in game. So aside from being able to, there generally isn't much point in going far past 8xAA, and like I said, even 8xAA at 2560x1600 is only about 150MB of VRAM used - a whopping 15% of a modern card. Bump VRAM up to 2GB, and 8xAA at 2560x1600 will use 7% of the card's VRAM.

B) Because resolution can't scale past 2560x1600, with the sole exception of Eyefinity. The cable just doesn't support it.

How can you discount other uses of video memory from contributing to the problem?

I'm not. I'm discounting resolution as the major use of VRAM that people around here like to claim it is. People regularly say things like you only need more VRAM if you play at 2560x1600, or that you don't need as much if you only play at 1680x1050 - that just isn't true. Resolution makes up a small portion of VRAM usage, and that percentage is rapidly shrinking as VRAM increases. The primary uses of VRAM are things like textures, models, and physics simulations, all of which are going to be the same regardless of resolution.

The thing is, I know for a fact that it was running out of memory; I even said that in the post you quoted!
Here it is again:
"I have measured memory use and confirmed this"

I was calling into question *HOW* you measured it and *HOW* you "confirmed" it.
 