Battlefield 4 recommended system requirements: 3 GB VRAM

BusyBeaver040197695 said:
Most games today don't need more than 2GB of VRAM to max at 1080p, but I don't think it will stay that way in 2014... not with games designed around the XB1 as the baseline.

Current games are designed to scale well with small video memory budgets; even BF4 is designed with the X360's and PS3's 512MB of memory in mind. Even bleeding-edge visual tour-de-forces like Crysis 3 have this scaling. These games will work well with a reasonable amount of VRAM for two reasons:

1. Limited, modular environment: This is one of the main reasons why Crysis 3's visual fidelity is so high compared to other games. In addition to having the efficient Crytek engine, the environment was rather walled off, so asset density increased as the environmental scope decreased. While the artists did a great job of concealing game boundaries, I am very sensitive to the openness of an environment and have to say Crysis 3 was linear and cordoned off in terms of level design.

Crysis 3 was less claustrophobic than Crysis 2, but nowhere near the openness of the first Crysis. This allowed Crysis 3 to fit right into the X360 and PS3, while melting the highest-end PC with lighting, particle effects, AA, tessellation, etc.

2. Limited non-modular asset density: Let's define a non-modular asset first: stuff that is designed into the environment and can't be easily scaled, like buildings, mountains, rocks, and even trees to a certain extent. While you can easily scale lighting/shadows, particle effects, textures, etc., polygon meshes are not so easily scaled, and hence they are the distinguishing factor between the current generation and the next. Take one look at Witcher 3's non-modular asset density vs Skyrim's, and you will see that there are many art assets that cannot be scaled without ruining the design.

Given the baseline constraint imposed by all the current-gen ports (512MB of total RAM), no game today has both the open environmental scope and the high non-modular asset density that would truly force VRAM usage up, even on the lowest settings.

I don't think BF4--or any of the current games available, for that matter--should be considered when future-proofing hardware, as they are still designed to scale down to low VRAM.

I can't wait to see how much VRAM Witcher 3 will use on maximum settings; I sense this game will be the litmus test of next-gen VRAM usage.

When you say it'll be the litmus test... I think we need to clarify what we're debating in this thread. Are you saying that next-gen games will require >2GB when maxed at 1080p, 1200p, 1440p, 1600p? Obviously triple monitor resolutions require more than 2GB (and more than 1 GPU in most circumstances). I highly doubt that you will see performance dropoff on next gen games at 1080p/1200p with a high end GPU that has 2GB of VRAM (GTX 680/770).
 
When you say it'll be the litmus test... I think we need to clarify what we're debating in this thread. Are you saying that next-gen games will require >2GB when maxed at 1080p, 1200p, 1440p, 1600p? Obviously triple monitor resolutions require more than 2GB (and more than 1 GPU in most circumstances).
My bet is that next-gen games using XB1 as the baseline (and not X360 or PS3) will take advantage of >2GB VRAM, even on 1080p.

I highly doubt that you will see performance dropoff on next gen games at 1080p/1200p with a high end GPU that has 2GB of VRAM (GTX 680/770).
I predict one simple thing: max out the draw distance in games like Witcher 3 and voila, all your VRAM are belong to the program.
 
The open beta starts October 1st. I suppose we'll find out then if the 3GB recommendation is true. I highly suspect that it's marketing fluff tied to the AMD/EA/DICE affiliation, but as mentioned, we'll find out soon enough. Hopefully it isn't the case, but if it is, time to upgrade!
 
From what I remember, the BF3 alpha/beta wouldn't let users adjust graphics settings at all. Hopefully they will allow it this time around.
 
Since we all know 'recommended' is the real minimum, and you should add at least 50% to that to play the game remotely smoothly...

No need for me to read on. This is what I wanted to say and it was the first post lol.
 
People seem to be forgetting that Microsoft says new games will keep coming out for the 360 for a while longer. This means things won't be pushed hard at the beginning. Later, developers will fill up the Xbone and PS4 memory, and finally, when they reach the limit, they will optimize.
 
People seem to be forgetting that Microsoft says new games will keep coming out for the 360 for a while longer. This means things won't be pushed hard at the beginning. Later, developers will fill up the Xbone and PS4 memory, and finally, when they reach the limit, they will optimize.

And then I will have SLI 4 gb 880s
 
People seem to be forgetting that Microsoft says new games will keep coming out for the 360 for a while longer. This means things won't be pushed hard at the beginning. Later, developers will fill up the Xbone and PS4 memory, and finally, when they reach the limit, they will optimize.

The range will just get larger - or you'll have 'cross-platform' games developed for the current and next-gen consoles along with mainstream PCs, and then other games developed separately for the next-gen consoles and gaming PCs.

That's actually kind of what we should expect to happen, but it's also (maybe not so) common sense.
 
Sheesh - can't believe this thread has gotten so big. Folks - we're just going to need to test the game out when it drops. That's really all there is to it.
 
Killzone: Shadow Fall, as demonstrated at E3 (video of the demo and engine walkthrough), uses around 4.6GB of memory, 3GB of which is VRAM, with FXAA:

Originally Posted by Eurogamer's Inside Killzone: Shadow Fall:

"Guerrilla also reveals that it is using Nvidia's post-process FXAA as its chosen anti-aliasing solution, so that 3GB isn't artificially bloated by an AA technique like multi-sampling."

"Factoring in that this is just a demo of a first-gen launch title, 3GB is an astonishing amount of memory being used to generate a 1080p image - as developers warned us recently, lack of video RAM could well prove to be a key issue for PC gaming as we transition into the next console era."

Killzone SF is a PS4 exclusive with memory streaming and processes that are super-optimized. I can only imagine the amount of VRAM required on less optimized systems like PCs, especially with additional effects and more expensive AA available. The scope of the environment combined with the sheer asset density is astounding. As a first-generation title of the next generation, it's a taste of what's to come.

I'm getting as much VRAM as I possibly can.
 
Are you saying that next-gen games will require >2GB when maxed at 1080p, 1200p, 1440p, 1600p? Obviously triple monitor resolutions require more than 2GB (and more than 1 GPU in most circumstances).
Increasing display resolution doesn't actually increase memory usage by all that much.

Even with Eyefinity resolutions, the additional memory usage is mostly because of the higher FOV (fewer resources can be culled from the scene because the viewport is wider).

And I've never personally needed multi-GPU to handle triple-monitor gaming. Single high-end cards have quite a bit of grunt these days.
 
"Factoring in that this is just a demo of a first-gen launch title, 3GB is an astonishing amount of memory being used to generate a 1080p image
That's a grossly misleading statement. I very highly doubt the game ever requires 3GB for any single frame. The more correct way to have said it would've been "3GB is an interesting amount of memory being used to generate a very large number of 1080p images across a long span of time".

It's interesting, but not astonishing.
 
Killzone SF is a PS4 exclusive with memory streaming and processes that are super-optimized.
Optimized? Hardly. They're targeting a 30 FPS framerate; that alone is enough for me to give the game a pass. Brand new hardware with a hugely reduced performance handicap, and they can't even manage to put out a title running at 60 FPS? Meh...

They're still using LOD models, which are horribly inefficient. They state they upped the number of LOD models from 4 to 7 for this game. That means loading 7 different models per object, which is a great way to waste RAM (or cause endless HDD thrashing by swapping from disk).

If it were being efficient, it would be using tessellation. All that has to be loaded is the low-poly model and a detail map (from which the high-poly model can be rendered on demand). Detail can be ramped up and down based on distance and performance, dynamically, smoothly, with no swapping or pop-in.
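
To put rough numbers on that trade-off, here's a quick sketch - the vertex counts and map size are invented, not pulled from KZSF or any real engine:

Code:
# Illustrative only: vertex counts and map sizes are invented, not taken
# from any real game. Compares resident memory for a discrete LOD chain
# against a tessellation setup (coarse base mesh + displacement map).
BYTES_PER_VERTEX = 32  # position + normal + UV, a typical vertex layout

def mesh_bytes(vertex_count):
    return vertex_count * BYTES_PER_VERTEX

# Discrete LODs: all seven levels stay resident so the engine can swap
# between them at runtime.
lod_chain = [100_000, 50_000, 25_000, 12_000, 6_000, 3_000, 1_500]
lod_total = sum(mesh_bytes(v) for v in lod_chain)

# Tessellation: only the coarse mesh and an 8-bit 1K x 1K displacement map
# are resident; the high-poly surface is generated on the GPU each frame.
tess_total = mesh_bytes(3_000) + 1024 * 1024

print(f"7 discrete LODs          : {lod_total / 2**20:.2f} MB")
print(f"base mesh + displacement : {tess_total / 2**20:.2f} MB")

With those made-up figures the LOD chain is several times the memory of the base-mesh-plus-heightmap approach; scale that across every unique object in a scene and the difference adds up.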
 
Increasing display resolution doesn't actually increase memory usage by all that much.

Even with Eyefinity resolutions, the additional memory usage is mostly because of the higher FOV (fewer resources can be culled from the scene because the viewport is wider).

And I've never personally needed multi-GPU to handle triple-monitor gaming. Single high-end cards have quite a bit of grunt these days.

Yup - as noted by most Eyefinity/Surround reviews, if detail settings are kept in check, increasing resolution does very little to increase RAM usage. And as I noted above, a 4K frame is just under 32MB - but let me spell out the math here for everyone else:

4K, as implemented for HDTV (there's more than one standard), is literally 4x1080p - so that's a resolution of 3840x2160. That's 8,294,400 pixels. Now, we run desktops at 32bpp - that's 32 'bits per pixel', which means you multiply the number of pixels by 32, giving a total of 265,420,800 bits per frame. Divide by 8 to convert to bytes, and you get 33,177,600 bytes per frame. Then divide by 1024 twice to convert from bytes to megabytes, and you get 31.640625MB, just under the 32MB that Microsoft spec'd as the onboard cache for the Xbox One.

Of course, anything you do that is 'per pixel' can multiply that number quickly, and there's a whole lot of stuff that needs to be done per pixel, as that's how shaders work. But if you keep that in check, increasing resolution can be accomplished without significantly increasing VRAM requirements.
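
If anyone wants to check the arithmetic at other resolutions, here's the same math as a few lines of Python (a single 32bpp color buffer only - depth/stencil buffers, G-buffers, and MSAA all add on top of this):

Code:
# Size of a single 32bpp color buffer, ignoring depth/stencil, G-buffers
# and any MSAA multiplier.
def framebuffer_mb(width, height, bits_per_pixel=32):
    return width * height * bits_per_pixel / 8 / 1024 / 1024

for name, (w, h) in {"1080p": (1920, 1080),
                     "1600p": (2560, 1600),
                     "4K UHD": (3840, 2160)}.items():
    print(f"{name:7s}: {framebuffer_mb(w, h):6.2f} MB")
# 4K UHD lands at ~31.64 MB, just under the Xbox One's 32MB of on-chip cache.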
 
That's a grossly misleading statement. I very highly doubt the game ever requires 3GB for any single frame. The more correct way to have said it would've been "3GB is an interesting amount of memory being used to generate a very large number of 1080p images across a long span of time".

It's interesting, but not astonishing.

Uhm, hold on there cowboy. There's the amount of memory that the frame-buffer needs for each rendered frame- which is excruciatingly small- and the amount of memory that the game needs to generate said frame, such as textures and vertices, which is what we're talking about here with Killzone. And yeah, needing 3GB of VRAM to play the game- which means generating those 1080p frames- is entirely reasonable.
 
Uhm, hold on there cowboy. There's the amount of memory that the frame-buffer needs for each rendered frame- which is excruciatingly small- and the amount of memory that the game needs to generate said frame, such as textures and vertices, which is what we're talking about here with Killzone.
No, that's not what's stated:
3GB of RAM is reserved for the core components of the visuals, including the mesh geometry, the textures and the render targets (the composite elements that are combined together to form the whole).
It does not specifically state that any given frame requires a 3GB dataset. It states that 3GB is reserved for "mesh geometry, the textures and render targets". Any single frame may only use a small subset of that dataset. That large a dataset may be resident but not necessarily used for any given frame.

There's a significant difference between what data may reside in memory at any given time and what is actually used in the process of rendering a given frame.
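
A toy example of the distinction - the split and the 'visible fraction' figures below are completely made up, purely to show that what's reserved and what's touched per frame are different quantities:

Code:
# Hypothetical budget: 3GB can be *reserved* for meshes, textures and
# render targets while any single frame only samples whatever the camera
# can actually see. All figures below are invented.
resident_mb = {"textures": 1800, "mesh_geometry": 900, "render_targets": 300}
visible_fraction = {"textures": 0.25, "mesh_geometry": 0.30, "render_targets": 1.0}

resident_total = sum(resident_mb.values())
frame_total = sum(resident_mb[k] * visible_fraction[k] for k in resident_mb)

print(f"resident in VRAM    : {resident_total} MB")           # the quoted 3GB
print(f"touched in one frame: {frame_total:.0f} MB (rough)")  # far smaller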
 
No, that's not what's stated:

It does not specifically state that any given frame requires a 3GB dataset. It states that 3GB is reserved for "mesh geometry, the textures and render targets". Any single frame may only use a small subset of that dataset. That large a dataset may be resident but not necessarily used for any given frame.

There's a significant difference between what data may reside in memory at any given time and what is actually used in the process of rendering a given frame.

We're in agreement then- see my post above about the size of a 4k frame-buffer, and I'll add that I fully understand that only a sub-set of what's loaded in VRAM will actually be used to render a frame at any particular FOV, both due to elements being out of the frame and elements that are obscured and culled from the rendering process.

Still, it's not like the VRAM usage on the PC is going to be any less- while it's possible that assets may be streamed from main memory, such a situation is far from ideal and will tank performance if those assets are needed immediately for a frame.

And after we consider VRAM usage, we have to consider how much VRAM we'd actually need - if a game needs 3GB of VRAM, we'll likely need 4GB just to make sure that the game doesn't run out, if we don't reduce game settings to compensate.
 
Yeah, I'd expect usage to be similar on a PC...assuming there's that much memory available, anyway.
 
Still, it's not like the VRAM usage on the PC is going to be any less- while it's possible that assets may be streamed from main memory, such a situation is far from ideal and will tank performance if those assets are needed immediately for a frame.
You're forgetting that the consoles are FORCED to pre-load a lot more data, because if the consoles need to load something that's not in RAM, they have to wait for the hard disk.

You want to talk about tanking performance? That'll do it...

PCs will be using look-ahead to pre-load data, same as the consoles, but they can pre-load into system RAM. A cache miss on a PC's video RAM will hurt a LOT less than a cache miss on a console's unified RAM.
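
To put some ballpark numbers on it (assumed bandwidth and latency figures, not measurements): fetching a 50MB asset that missed VRAM from system RAM over PCIe is a hiccup, while fetching the same asset from a hard disk is a visible stall.

Code:
# Rough stall estimate for a 50MB asset that wasn't resident where it was
# needed. Bandwidth/latency figures are assumptions, not measurements.
asset_mb = 50
pcie_mb_per_s = 8000   # ~PCIe 3.0 x16 practical throughput
hdd_mb_per_s = 100     # ~7200rpm HDD sequential read
hdd_seek_ms = 10       # plus seek latency per request

stall_from_ram_ms = asset_mb / pcie_mb_per_s * 1000
stall_from_hdd_ms = hdd_seek_ms + asset_mb / hdd_mb_per_s * 1000

print(f"miss serviced from system RAM: {stall_from_ram_ms:.1f} ms")  # ~6 ms
print(f"miss serviced from HDD       : {stall_from_hdd_ms:.0f} ms")  # ~510 ms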
 
Actually, this point I concede- not that I didn't understand it as it has been brought up throughout this thread, but I haven't addressed it yet. Still, this does depend on the game involved, and it's one of the things that we'll just have to wait and see to evaluate. I stand by my evaluation that games built for the new consoles may be able to make positive use of more than 4GB of RAM on the PC, and that 6GB of RAM is the best 'safe' bet for buying a new GPU today.
 
Optimized? Hardly. They're targeting a 30 FPS framerate; that alone is enough for me to give the game a pass. Brand new hardware with a hugely reduced performance handicap, and they can't even manage to put out a title running at 60 FPS? Meh...
The 30 FPS is a design decision that has more to do with artistic optimization (like assets-on-screen, poly-count, draw distance, and level design) rather than engine optimization. Engine optimization means getting the most out of the hardware constraints at the programming level (like lighting calculation, memory allocation/streaming, and processor utilization), and you simply can't beat a console exclusive in this regard.

And seeing games of this generation versus those coming out of the PS4, I ask: what game currently on the market, utilizing the power of less than a 7870, has the visual fidelity of KZSF? None! Not even close! This engine is considered optimized in the sense that it's squeezing more rendering power out of that little GCN chip than any before it.

They're still using LOD models, which are horribly inefficient. They state they upped the number of LOD models from 4 to 7 for this game. That means loading 7 different models per object, which is a great way to waste RAM (or cause endless HDD thrashing by swapping from disk). If it were being efficient, it would be using tessellation. All that has to be loaded is the low-poly model and a detail map (from which the high-poly model can be rendered on demand). Detail can be ramped up and down based on distance and performance, dynamically, smoothly, with no swapping or pop-in.
If I were a developer, I'd weigh the pros and cons of LOD modelling vs tessellation. Having tried both options, it's only logical to decide on the one that gives the best performance for the visual fidelity.

In KZSF's case, the developers probably thought about this decision many times over and went ahead with LOD because it's cheaper to render than tessellation, and with the PS4 having so much RAM to spare, HDD thrashing is unlikely to be an issue.

Considering the drawbacks you listed, the LOD models are not as efficient from a memory-footprint standpoint, but they would be less costly to render than tessellation, which historically has been one of the more demanding DX11 features.

Additional notes on the difficulties of implementing tessellation, as found by Witcher 3's developer decision to exclude it from certain aspects of the game:

http://www.dsogaming.com/news/red-e...d-in-the-witcher-3-new-tech-details-unveiled/:

"CD Projekt RED has also experimented with DX11 tessellation on characters, however it seems that The Witcher 3 won’t support it (do note that the game will most probably take advantage of DX11 tessellation for its environments). According to the developers, the company experimented with two tessellation methods: PN Triangles and Displacement mapping. The first technique did not bring much additional detail to the characters, while the second technique was promising. Still, in order to use Displacement mapping as a tessellation solution, CDPR would need to change its pipeline. Not only that, but that particular technique brought a ‘swimming’ effect and some hole that were caused by the tessellation itself."
 
Tessellation isn't easy to do right - that may be one reason they skipped it. If not tuned properly, it can result in some funky geometry issues (remember the Serious Sam demos on the ATi 8500 Pro?). Now, if done right, it can easily result in a smaller memory footprint, and it's not like you're going to go batty with it on a console, but LOD modelling was probably quicker and easier and, as BBHP says above, was probably settled on because the memory headroom was there in KZSF's usage scenario.

Expect tessellation to be put to much more use in the future, especially with games that present relative hordes of unique models, to keep the memory footprint and bandwidth in check.
 
Actually, this point I concede- not that I didn't understand it as it has been brought up throughout this thread, but I haven't addressed it yet. Still, this does depend on the game involved, and it's one of the things that we'll just have to wait and see to evaluate. I stand by my evaluation that games built for the new consoles may be able to make positive use of more than 4GB of RAM on the PC, and that 6GB of RAM is the best 'safe' bet for buying a new GPU today.

Is this supposed to be useful advice? The only card I'm aware of with 6GB is a GTX Titan, and the price/performance on a Titan is so terrible that you are better off buying a 780, or even below that, and keeping money in your pocket for Nvidia's next-gen cards if you are trying to future-proof, because you will end up with a better card before you actually need to play any of these hypothetical games that might require 6GB.

If games substantially benefit from that RAM, you can be sure the flagship Maxwell will have at least 4-6GB, and it's far smarter to wait for that than to buy a Titan today and pay $350 for 3GB of VRAM.

Of course, if money isn't an issue it doesn't matter anyway, because you will always buy the highest-performing card that meets your needs as soon as it releases, and 'future proofing' means nothing to you. For everyone else, it's better to buy for price/perf than to try to future-proof by overbuying for the current gen, because future performance is always cheaper than present performance.

The best way to be ready for future console to PC ports is with cash in your pocket.
 
There are 6GB HD 7970s, but finish reading the thread first. And understand that when an application needs more memory than is available, performance tanks.

And the point is that if you actually want to turn on all of the graphics settings, you're going to need more VRAM than cards are shipping with - 6GB to be safe. And that means that if you're looking at buying a card now, you should wait, not necessarily buy a Titan or shell out the extra cash for the 6GB HD 7970.
 
The 30 FPS is a design decision that has more to do with artistic optimization (like assets-on-screen, poly-count, draw distance, and level design) rather than engine optimization. Engine optimization means getting the most out of the hardware constraints at the programming level (like lighting calculation, memory allocation/streaming, and processor utilization), and you simply can't beat a console exclusive in this regard.

That's kind of beside the point, isn't it? They're targeting 30 FPS because 60 FPS is likely not possible while still maintaining that same visual fidelity.
 
There are 6GB HD 7970s, but finish reading the thread first.

Paying $200 for 3GB of VRAM isn't much better than paying $350, and anyway, the card is a guaranteed loser because the 3GB 780 is a better buy unless you are explicitly targeting extreme resolutions (4K, pretty much). And if you are, you're already required to buy 780s or Titans because you need the raw performance more than anything else. (And you've now fallen into the category of people spending so much money that you shouldn't care about future-proofing.)

And that means that if you're looking at buying a card now, you should wait, not necessarily buy a Titan or shell out the extra cash for the 6GB HD 7970.

Edit: Nevermind, then, we're in agreement. If you're buying a card now for 2+ years you're making a mistake, but that is as much a function of where we are in the graphics card release cycle(in the middle) as anything else, honestly.
 
Well, it could be 2+ years- or it could be six months. We'll have to wait and see. But the thing is this- if you have the memory, even if the GPU is old, you can still run the game; if you don't have the memory, well, life sucks.

Argument still applies for the 'buy one upper-midrange card now, buy another later' crowd too, but that's still just a small fraction of us.
 
I'm sure I'll be fine with my pair of 2GB 680s

This is what people were saying about BF3.
I remember in an interview, one of the DICE developers said,
"If you have tri GTX 580s you will be fine."

And I could run the game on high settings on my SLI 460 build just fine.

I'm not believing the hype.
 
Well, it could be 2+ years- or it could be six months. We'll have to wait and see. But the thing is this- if you have the memory, even if the GPU is old, you can still run the game; if you don't have the memory, well, life sucks.

Argument still applies for the 'buy one upper-midrange card now, buy another later' crowd too, but that's still just a small fraction of us.

You can run the game either way actually.
 
That's kind of beside the point, isn't it? They're targeting 30 FPS because 60 FPS is likely not possible while still maintaining that same visual fidelity.
Beside what point? 30 FPS means they're purposely giving up smoothness for visual fidelity. I still don't see any current game that has KZ's level of visual fidelity at less than a 7870's processing power.
 
I'm wondering how my GTX690 will do at 2560x1600 with only 2GB of memory? I hope I can still get decent performance with close to max settings.
 
I'm wondering how my GTX690 will do at 2560x1600 with only 2GB of memory? I hope I can still get decent performance with close to max settings.

I'll be finding out with you- but I don't even run close to max settings with a pair of GTX670s. I prefer to have the reaction time when shit goes down.
 
Beside what point? 30 FPS means they're purposely giving up smoothness for visual fidelity. I still don't see any current game that has KZ's level of visual fidelity at less than a 7870's processing power.

And you probably won't ever on a PC, but that's why we have GPUs far stronger than 7870s and CPUs far more powerful than what's in the PS4. Fidelity without the FPS cap. The downside is having to wait longer, since consoles will get next-gen games first. To say you haven't seen anything like KZ is a pretty worthless metric at this point in time. You're comparing next gen to current gen and saying next gen is better.
 
And you probably won't ever on a PC, but that's why we have GPUs far stronger than 7870s and CPUs far more powerful than what's in the PS4. Fidelity without the FPS cap. The downside is having to wait longer, since consoles will get next-gen games first. To say you haven't seen anything like KZ is a pretty worthless metric at this point in time. You're comparing next gen to current gen and saying next gen is better.
KZSF is an absolutely worthy and relevant metric at this point: a few people are saying 2GB is enough for 1080p even with next-gen, and I point to KZSF as evidence to the contrary.

It's a long thread and there are lots of side discussions to keep track of, but I don't think you understand - I wasn't saying anything about what's better in terms of next-gen vs last-gen. I'm explaining why current-gen games use so little VRAM even when maxed out on a PC, and why they're not a good metric for assessing future VRAM usage of next-gen games.
 
So when the BF4 beta comes out October 1st and shows that 2GB is fine for 1080p/1200p, the reasoning is going to be that it's not a true next-gen game, correct? Come to think of it, I'm sure that's going to be the reasoning for every game that comes out until there's a game that needs more than 2GB of VRAM at aforementioned resolutions, then that will be a true next generation game.

The argument that >2GB of VRAM hasn't been necessary yet partly because developers aren't creating assets that use large amounts of memory, due to the constraints of current-generation consoles, seems silly to me. The assets are clearly much more detailed in the PC ports, and it's not like they remade all the assets from scratch for the PC version. The developers create the assets on PC and scale them down to fit consoles.
 
So when the BF4 beta comes out October 1st and shows that 2GB is fine for 1080p/1200p, the reasoning is going to be that it's not a true next-gen game, correct? Come to think of it, I'm sure that's going to be the reasoning for every game that comes out until there's a game that needs more than 2GB of VRAM at aforementioned resolutions, then that will be a true next generation game.

The argument that >2GB of VRAM hasn't been necessary yet partly because developers aren't creating assets that use large amounts of memory, due to the constraints of current-generation consoles, seems silly to me. The assets are clearly much more detailed in the PC ports, and it's not like they remade all the assets from scratch for the PC version. The developers create the assets on PC and scale them down to fit consoles.

Actually, I think that's completely right. BF4 will run on the previous gen consoles, and thus was designed with their limitations in mind; that means that it's more of a 'bridge' game and not fully a 'next-gen' game. And it should be obvious that the game will be both very playable and still look great on 2GB cards, but it won't be running at 'max settings', and that's the point.

As for true next-gen games, you're going to want 6GB/GPU to be able to turn on just the detail settings; you might need three to be able to actually run them at max, depending on just what developers put in these games for the PC releases. Just know that 'live assets' in games are about to go through the roof.
 
Actually, I think that's completely right. BF4 will run on the previous gen consoles, and thus was designed with their limitations in mind; that means that it's more of a 'bridge' game and not fully a 'next-gen' game. And it should be obvious that the game will be both very playable and still look great on 2GB cards, but it won't be running at 'max settings', and that's the point.

As for true next-gen games, you're going to want 6GB/GPU to be able to turn on just the detail settings; you might need three to be able to actually run them at max, depending on just what developers put in these games for the PC releases. Just know that 'live assets' in games are about to go through the roof.

Quoted for logic.

The 360 has 512MB of shared memory, yet console ports from the 360 today are beginning to regularly utilize 2+GB of VRAM.
The XBONE will have 8GB of shared DDR3 RAM (no dedicated VRAM) and
The PS4 will have 8GB of shared GDDR5 RAM.

I fully expect that dedicated next-gen console ports coming out in the next 1-2 years will be utilizing 6GB VRAM on PC with ease.
 