Battlefield 4 recommended system requirements: 3 GB VRAM

1600p has nearly 2x the pixels of 1200p. Not really apples to apples there. I don't think it'll be a problem for at least the next couple of years
 
You may be right for BF4 (I'll say that you'll be dialing back a slider or two not related to GPU rendering performance), but you're dead wrong for fully next-gen games built exclusively for incoming consoles and the PC.
You still have no hard basis for making that claim, if you're referring to video RAM requirements again.

You're making an assumption about how developers will allocate RAM on next-gen consoles, and based on the worst-case scenario derived from that assumption, you're making an assumption that said allocation will impact the PC side of things in such a way as to require inordinate quantities of video RAM.

Making predictions based on information like that isn't much better than consulting tarot cards or tea leaves.
 
Because there's another game out there with fewer bugs that provides the same or better gameplay? We've been playing BF3 successfully since the first public beta. It hasn't been smooth, I'll give you that, but you can hardly characterize the game as being any more 'bug-filled' than any other AAA title, and none exist that match BF3's depth and scope.

How did you like Skyrim, by the way?

I didn't play Skyrim... but I've had enough of DICE, I had serious issues with BF3, and I'm choosing not to pay into the EA/DICE money grab that BF4 is, imo. YES, I think BF4 is just another money grab and it's gonna be more of the same. Enough with the flames, I didn't mean to upset anyone.
 
BF3 ran like shit.

They should fix their shitty netcode before they hand out requirements.
 
You may be right for BF4 (I'll say that you'll be dialing back a slider or two not related to GPU rendering performance), but you're dead wrong for fully next-gen games built exclusively for incoming consoles and the PC.

For me, at 1600p, 2GB was just barely enough, but I knew that these cards would last for the two years I needed them. And yeah, I'll be dialing back settings until I replace them- but I'm already doing that to keep BF3 at competitive frame-rates.

@1600p you are turning things down? What are your settings? I've got everything turned up with no problems. Well, for me anyway, it's pegged at 60fps+ to match vsync. (I'm assuming 60fps+ is your target too @ 60Hz.)
 
You still have no hard basis for making that claim, if you're referring to video RAM requirements again.

You're making an assumption about how developers will allocate RAM on next-gen consoles, and based on the worst-case scenario derived from that assumption, you're making an assumption that said allocation will impact the PC side of things in such a way as to require inordinate quantities of video RAM.

Making predictions based on information like that isn't much better than consulting tarot cards or tea leaves.

You're not wrong- but obviously that's not how I feel about it :).

Essentially, the prediction is based on history- a lot of it. We've had the same consoles for eight years, and the consoles before those were similarly resource constrained relative to the PCs at the time.

So it's a prediction, of course. It's certainly possible that developers won't make use of that VRAM, and it's certainly possible that if they do, the PCIe bus will be fast enough on modern PCs to stream that data from main memory without a performance hit.

But that's not what I expect, because that's not what's happened before- anytime we've had a large jump in computing and memory resources, developers have raced to make good use of it. Sometimes it happens slowly, like the jump to 8GB of RAM as standard for PCs in a crashing RAM market and the ever-expanding CPU core counts of top shelf CPUs, but we're already seeing developers make use of that- even for games.

It might take a bit of time- maybe even a year. But since we typically buy hardware to last two years or more, and recommend hardware the same way, it's real hard not to recognize that developers will have unprecedented resources for their games on these new consoles, resources that PCs just don't have, outside of those very few running GPUs with 6GB of VRAM. And memory is cheap; it's why these consoles have 8GB of RAM in the first place, and it's why we should have more on our GPUs.
 
@1600p you are turning things down? What are your settings? I've got everything turned up with no problems. Well, for me anyway, it's pegged at 60fps+ to match vsync. (I'm assuming 60fps+ is your target too @ 60Hz.)

I haven't looked at it in a while, but I do turn down a number of things to keep fast-paced, close-in action with lots of particle effects and destruction going on around me smooth. I can turn most things up all the way (except for literally any MSAA) and get a very good experience until the shit hits the fan; but that's when I actually need the system to be fast and consistent, so that's what I calibrate for.

I'd need more VRAM, and far more rendering power (and more CPU power!) to turn everything up with some MSAA at 1600p and meet my performance requirements. As it stands, though, I'm happy with what I've got- BF4 may very well change that.
 
You're not wrong- but obviously that's not how I feel about it :).

Essentially, the prediction is based on history- a lot of it. We've had the same consoles for eight years, and the consoles before those were similarly resource constrained relative to the PCs at the time.

So it's a prediction, of course. It's certainly possible that developers won't make use of that VRAM, and it's certainly possible that if they do, the PCIe bus will be fast enough on modern PCs to stream that data from main memory without a performance hit.

But that's not what I expect, because that's not what's happened before- anytime we've had a large jump in computing and memory resources, developers have raced to make good use of it. Sometimes it happens slowly, like the jump to 8GB of RAM as standard for PCs in a crashing RAM market and the ever-expanding CPU core counts of top shelf CPUs, but we're already seeing developers make use of that- even for games.

It might take a bit of time- maybe even a year. But since we typically buy hardware to last two years or more, and recommend hardware the same way, it's real hard not to recognize that developers will have unprecedented resources for their games on these new consoles, resources that PCs just don't have, outside of those very few running GPUs with 6GB of VRAM. And memory is cheap; it's why these consoles have 8GB of RAM in the first place, and it's why we should have more on our GPUs.

I think this won't happen until the next-gen console user base is big enough that publishers (not owned by MS or Sony) can afford to make a game that won't run on 360/PS3. That may take 3 years.
 
I don't think that it will be incredibly common- but given that game engines already support vastly differing tiers of hardware and software, games developed specifically with the new consoles in mind (that perhaps are nearly unreasonably gimped in the current-console-gen releases) will demand more VRAM on the PC side to enable all of the eye candy.

The potential is undeniable, as is the inevitability; the only question is when. Will we see a game that can use 4GB+ of VRAM this fall? Will it be next fall? Will anyone actually want to play it? All still up in the air :).
 
Essentially, the prediction is based on history- a lot of it. We've had the same consoles for eight years, and the consoles before those were similarly resource constrained relative to the PCs at the time.
These new consoles aren't any less memory-constrained; they just have the option of stealing some from places where they weren't allowed to steal it before. Given how weak the GPU in the Xbox One is, I don't foresee the need to steal more than 2GB for graphical assets.

Perfect example of this? Pull up the (very similar) Radeon HD 7770, load up Skyrim with a pile of texture enhancements, see what happens... it's a stutter fest (go figure :rolleyes: ) but it's not because it burned through all 2GB of video RAM... it's because the video RAM can't feed the GPU data quickly enough. There's a bandwidth bottleneck on the HD 7770, and the Xbox One has even less memory bandwidth than that.

How do you intend to use more than around 2GB of RAM when that data literally cannot all make it to the GPU quickly enough?

Let's go by the numbers: Assuming you want to maintain 60 FPS, that gives you 1/60th of a second to run all the data needed for each frame through the GPU.
The Xbox One has 68 GB/s of total memory bandwidth, shared between the CPU and the GPU. Even if you dedicated 100% of memory bandwidth to the GPU (and choked out the CPU entirely), that's a grand total of 1.13 GB that can make it through the GPU per frame.

1.13GB, theoretical max, per-frame. In reality, the CPU will be using some of that bandwidth constantly, so it'll more likely be less than 1GB of data that can be crunched by the GPU every frame.
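A quick back-of-the-envelope sketch of that arithmetic (just the division above, using the 68 GB/s figure as given; assumes the entire bus serves the GPU, which it never does in practice):

```python
# Rough per-frame memory bandwidth budget.
# Assumption: the whole bus is dedicated to the GPU, which never happens in practice.
def per_frame_budget_gb(total_bandwidth_gb_s, fps):
    """How much data can cross the memory bus during a single frame."""
    return total_bandwidth_gb_s / fps

xbox_one_ddr3 = 68.0  # GB/s, shared between CPU and GPU
print(per_frame_budget_gb(xbox_one_ddr3, 60))  # ~1.13 GB per frame at 60 FPS
print(per_frame_budget_gb(xbox_one_ddr3, 30))  # ~2.27 GB per frame at 30 FPS
```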

I ask again, how the hell are you coming up with needing 4+GB of RAM strictly for graphical assets on the consoles?

It might take a bit of time- maybe even a year. But since we typically buy hardware to last two years or more, and recommend hardware the same way, it's real hard not to recognize that developers will have unprecedented resources for their games on these new consoles, resources that PCs just don't have, outside of those very few running GPUs with 6GB of VRAM. And memory is cheap; it's why these consoles have 8GB of RAM in the first place, and it's why we should have more on our GPUs.
You keep saying memory is cheap, when it really, really isn't...

If memory were really that cheap, the consoles would have 16GB of RAM rather than 8GB. If memory were that cheap, we'd already have consumer-level graphics cards with 6, 8, and 12GB of RAM.

We don't see any of that, know why? If you said "lack of demand" you'd be wrong, there are plenty of examples of graphics cards coming with ridiculous quantities of RAM in order to appear "better" than competitors (GeForce FX 5200 with 1GB of RAM, remember those?)

Doubling the RAM in a fixed design like a console motherboard or a graphics card causes a slew of very expensive upgrades in architecture.
- Double the number of chips, twice as many points of failure, increase in poor bins.
- Increase in graphics core complexity. You'll want to use those chips in parallel, which requires a wider bus.
- Increase in PCB complexity to handle that wider bus and to address all of those new chips. Increased per-card cost.

Do-able? Sure. Do-able without drastically increasing per-unit costs? Nuh-uh...

The only reason we have 2GB / 4GB and 3GB / 6GB variants right now is because it's easy to simply leave half the chips off the board (or swap the chips out for slower, higher-density variants in order to win the "who can put the largest number on the box" war). Adding RAM beyond the initial designs of these cards and consoles is NOT cheap.
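To put rough numbers on the 'wider bus' point, here's a hypothetical sketch assuming 32-bit-wide GDDR5 devices, where bus width scales with the number of chips run in parallel rather than with their density:

```python
# Hypothetical illustration of why adding chips (rather than density) gets expensive.
# Assumes 32-bit-wide GDDR5 devices; real board layouts and memory controllers vary.
def bus_width_bits(num_chips, bits_per_chip=32):
    return num_chips * bits_per_chip

def bandwidth_gb_s(bus_bits, data_rate_gbps):
    # data_rate_gbps = effective per-pin transfer rate, e.g. 6.0 for 6 Gbps GDDR5
    return bus_bits * data_rate_gbps / 8

print(bandwidth_gb_s(bus_width_bits(8), 6.0))   # 8 chips  -> 256-bit bus -> 192 GB/s
print(bandwidth_gb_s(bus_width_bits(16), 6.0))  # 16 chips -> 512-bit bus -> 384 GB/s
# Doubling density instead keeps the same 256-bit bus: more capacity, same bandwidth.
```

Going from 8 chips to 16 is what drags in the wider bus, the bigger memory controller, and the more complex PCB; swapping in double-density chips avoids all of that, which is exactly why it's the route vendors usually take.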
 
How do you intend to use more than around 2GB of RAM when that data literally cannot all make it to the GPU quickly enough?

Let's go by the numbers: Assuming you want to maintain 60 FPS, that gives you 1/60th of a second to run all the data needed for each frame through the GPU.
The Xbox One has 68 GB/s of total memory bandwidth, shared between the CPU and the GPU. Even if you dedicated 100% of memory bandwidth to the GPU (and choked out the CPU entirely), that's a grand total of 1.13 GB that can make it through the GPU per frame.

1.13GB, theoretical max, per-frame. In reality, the CPU will be using some of that bandwidth constantly, so it'll more likely be less than 1GB of data that can be crunched by the GPU every frame.

The Xbox One uses DDR3; the PS4 and discrete GPUs use GDDR5. The PS4 will have about 170.6 GB/s, basically 3 times as much. But the Xbox One does have that 32MB of eSRAM that will be used for frame buffers, and that clocks in at 102 GB/s. So the PS4 could be actively pushing close to 3 gigs of VRAM per frame at 60 FPS. There's typically a lot loaded and streamed into VRAM that isn't being displayed, so you don't need loading screens. I know UDK was planning on having a 3D voxel grid of lighting data that would require a large amount of data to be stored, but not actively accessed every frame, to work properly. That voxel cone tracing got scrapped from UDK, but it has been tested in other game engines like Unity; tech like that could really take off with more VRAM to play with.
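Running the same per-frame arithmetic on those figures (taking the 170.6 GB/s and 102 GB/s numbers quoted above at face value):

```python
# Same per-frame budget math as before, applied to the figures quoted above.
def per_frame_budget_gb(total_bandwidth_gb_s, fps):
    return total_bandwidth_gb_s / fps

print(per_frame_budget_gb(170.6, 60))  # ~2.84 GB per frame from the PS4's GDDR5
print(per_frame_budget_gb(102.0, 60))  # ~1.70 GB per frame through the Xbox One's eSRAM
# (the eSRAM pool is only 32MB, so that's heavy re-use of a small pool, not 1.7 GB of unique data)
```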
 
You can keep coming up with corner cases- I can keep spelling it out line by line. PcZac has a decent explanation before the voxels part (which I do find interesting, by the way), but to address the 'memory is cheap' debate, remember that this generation was supposed to only have 4GB- which is about average for a PC with no discrete GPU- but got upgraded to eight solely on the stupid cheap pricing of RAM this last year or two. Remember that the last two generations had maybe half of what PCs did on average; and now we're getting consoles released with twice as much. This is unprecedented in modern gaming.

So yeah, memory is fricken' cheap. And yeah, it's not hard to put more memory on the GPUs- you just have to use modules that are more dense, which is the easiest way, or you use more modules. Just note that GK110 cards exist with 12GB of RAM now, and there's nothing stopping them from making an 8GB GK104, and there's nothing stopping AMD from following suit. RAM is a very, very easy upgrade, even if it does result in a bit of a redesign. Trust me, the motherboard makers are good at it. Video cards are easy.
 
Hate to still bring up the memory debate, but wasn't there a BF4 alpha benchmark showing the game using over 2GB easily?

That explains why 3GB is needed.

Also I THINK they make a 7870 with 4GB of memory, but I'm not 100% sure on that one.
 
BF3 does a lot of caching; the memory used isn't necessarily what it needs for stutter/lag-free performance. It's not unreasonable to assume BF4 does the same. Someone over at the AnandTech forum explained how the engine worked; I'll see if I can locate the post.
 
BF3 does a lot of caching; the memory used isn't necessarily what it needs for stutter/lag-free performance. It's not unreasonable to assume BF4 does the same. Someone over at the AnandTech forum explained how the engine worked; I'll see if I can locate the post.

I found it, it was moon something who posted it.
 
Hmmm, looks like at 1080p you will be JUST good enough with 2GB with no AA on. With 4xAA you go over 2GB.

http://www.bf4blog.com/battlefield-4-alpha-gpu-and-cpu-benchmarks/

Hmm. According to those charts, the 690 performs just as well as a 7990, average-FPS-wise, @1600p + 4xAA (2GB and 3GB respectively).

Seems to me that BF4 uses as much RAM as it needs, if it's available. Kinda like BF3 (up to a certain limit, of course).

We can only hope it gets better once it gets past alpha. Beta test soon!
 
I'll bet that with next gen games the only resolutions that will require more than 2GB of VRAM will be triple monitor and 4K (which you'll need quite a beefy, likely SLI/Crossfire setup for anyway). I'll bet the benchmarks will show that even at 1600p, there won't be much if any difference between, say, a 2GB GTX 770 and a 4GB GTX 770. But of course until these games are actually released, it's all speculation.
 
Have they released BF4 videos actually running on the XBONE, rather than on a quad SLI $10,000 PC? I wonder if it'll be an Aliens Colonial Marines style disappointment when they do?
 
Isn't saying that because consoles have much more memory we need more VRAM on GPUs a lot like saying that because the PS3 had 8 cores we all need at least 8 cores in our PCs? It is a crazy argument in my opinion. I hear all these arguments about BF4 using more VRAM with no benchmarks to show that it is affecting performance at all. I've seen people show that BF3 was taking up 2.5GB of VRAM... but I don't see how that is relevant at all unless they are comparing it to the same card with less VRAM and finding that the card with more VRAM was performing better. There is no hard data that I know of to suggest that we will need more VRAM! And when you consider that most people are buying for 1080p... it quickly becomes ridiculous. In my experience it is a terrible idea to upgrade your PC now for what MAY happen in the future, because whatever feature you need in the future will be much cheaper to purchase then than beforehand.
 
I'll bet that with next gen games the only resolutions that will require more than 2GB of VRAM will be triple monitor and 4K
Again, though, terms like "need" and "require" are both simultaneously too strict and too fuzzy to be worthwhile to use in these contexts. You may not need more memory, but you may still want more, because it can still be beneficial.
 
Again, though, terms like "need" and "require" are both simultaneously too strict and too fuzzy to be worthwhile to use in these contexts. You may not need more memory, but you may still want more, because it can still be beneficial.

There is no question that more of any resource on a computer is better than less. But the question that is really being asked is whether or not increased VRAM results in increased performance, and in almost EVERY case the answer is no. This is especially true of the average PC gamer who games at 1080p and 60Hz.
 
Again, though, terms like "need" and "require" are both simultaneously too strict and too fuzzy to be worthwhile to use in these contexts. You may not need more memory, but you may still want more, because it can still be beneficial.

When I say need and require, I mean when you benchmark, for example, a GTX 770 2GB version and a 4GB version and the 2GB's performance completely drops off, which you see in some very rare instances in games running at triple-monitor resolutions at high levels of detail and AA. I think it'll be a while before we see that drop-off at 1080p or even 1440p/1600p.
 
Keep in mind that while 670 SLI may perform a bit faster than, say, a single 780 in certain situations, the 670 SLI setup will not be better than a 780 in its own contribution to the whole input lag chain from keypress to eye retinas. SLI usually does not improve the lag (at all) over a single card. So you're running the framerates of a 780, but with the GPU lag of a 670. (It's possible to have 60fps with the input lag of 30fps.) All depends on how the game handles the SLI.
 
All the math I've seen done says that input lag and FPS are not independent of each other. In other words, increasing FPS has the inherent effect of decreasing input lag.
 
All the math I've seen done says that input lag and FPS are not independent of each other. In other words, increasing FPS has the inherent effect of decreasing input lag.

This, sir, is right on the money. Well, decreasing FPS increases 'apparent' input lag, while increasing FPS decreases it, but essentially correct. It depends on how the game engine in question is set up, as to how often inputs are polled and how often those actually get processed, but it's reasonable to conclude that input polling happens at least every frame, and that input polling that happens entirely between frames may (or may not) be utilized by the engine.

But the long and short of it, as you summarize above, still stands- increasing framerate must decrease input lag. One of the main reasons people seek 120Hz and faster monitors for competitive gaming, I believe.
 
Keep in mind that while 670 SLI may perform a bit faster than, say, a single 780 in certain situations, the 670 SLI setup will not be better than a 780 in its own contribution to the whole input lag chain from keypress to eye retinas. SLI usually does not improve the lag (at all) over a single card. So you're running the framerates of a 780, but with the GPU lag of a 670. (It's possible to have 60fps with the input lag of 30fps.) All depends on how the game handles the SLI.

While I support RamonGTP's generalized statement above (I'm reading backwards today, apparently), my experience with Crossfire and SLi, and with gaming in general over the last couple of decades, supports your statement.

One simplified way to evaluate what you're saying, for those that are getting confused by the 'GPU lag' idea, is to imagine this:

If you have a single card putting out a solid, smooth, Vsync'd 60FPS, then the frame-time for each frame is 1/60 seconds, or about 16.7ms. Now, assuming that each frame is a newly rendered frame based on totally updated information from the game engine, then your minimum 'input lag' between a signal from a user input device (mouse, keyboard, gamepad, joystick, etc.) and its result showing up on screen is that 16.7ms. It's certainly likely that it's far more than 16.7ms, since the game engine must first poll the input(s), then process them, and then pass those results on to the GPU for rendering, which means that both the game engine's work and the GPU's rendering work for a new frame must be completed in the 16.7ms 'cycle' between monitor refreshes in order for the inputs to have had an effect.
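A minimal sketch of that budget, assuming input is sampled once per frame and that a frame only reflects an input if the engine's work and the GPU's work both finish inside one refresh interval (the function names and numbers here are just illustrative):

```python
# Minimal input-lag budget sketch. Assumptions: input sampled once per frame,
# and a frame only reflects an input if sim + render finish before the next
# vsync'd refresh; otherwise the result slips to the refresh after that.
def frame_time_ms(refresh_hz):
    return 1000.0 / refresh_hz

def min_input_lag_ms(refresh_hz, sim_ms, render_ms):
    budget = frame_time_ms(refresh_hz)
    refreshes_needed = 1 if (sim_ms + render_ms) <= budget else 2
    return refreshes_needed * budget

print(frame_time_ms(60))                              # ~16.7 ms per refresh at 60 Hz
print(min_input_lag_ms(60, sim_ms=6, render_ms=9))    # fits in one cycle -> ~16.7 ms best case
print(min_input_lag_ms(60, sim_ms=10, render_ms=12))  # doesn't fit       -> ~33.3 ms best case
```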

If the game takes too long, now we're waiting for the next frame- or rendering a partially updated frame, if we're running without Vsync, as most people do in reaction-oriented FPSs. I still do this in BF3, and just live with the tearing, as the difference in input lag is enough to get me killed, from experience.

So CPU speed, and everything else involved in the per-frame rendering process, has a huge effect here. This is also one of the reasons why we advocate much higher performance CPUs than most benchmark houses would have the masses believe was 'enough', by the way. Even with the 4.5GHz 2500k in my system below, I find the CPU to be lacking in 'intense' situations in BF3. GPUs are fine, as I calibrate my in-game settings for the worst case scenario, but the CPU still can't keep up, and I find that my system still gets a little 'laggier'. All other things being equal, it means that my opponents have the upper hand. Now, there's a whole metric shit ton of other variables here, from individual play-styles to input maps to the network's performance at that very moment, but when we build a 'system' to accomplish a goal, we try to minimize or eliminate every limitation we can.

Now back to the subject at hand. Remember that the original SLi stood for 'Scan-Line Interleave', meaning that each card rendered every other line, and that the master card put the output of both cards together, over an analog interface, for output to an analog monitor. Realistically, this was probably the most efficient way to combine the rendering power of two GPUs, but of course it runs into limitations: if you do nothing but add a second card, you increase your 'fill rate', but this only makes a difference if your settings were such that you were getting half of what you needed with one card, or if you were already getting the framerates you were looking for but wanted to increase settings for greater fidelity, particularly resolution, assuming that the increase in settings wouldn't result in some other bottleneck like too much RAM usage or too much CPU load. Still, 3Dfx's SLi was a very potent setup and worked very, very well, because it really didn't need anything from the games to work. Both cards ran at full speed, essentially, because they worked on the same parts of the same frames at the same time. That means that, all other things being equal, adding a second card did not increase 'GPU lag' in any way.

When we move on to a modern multi-GPU configuration, we're now talking about 'Alternate Frame Rendering', or AFR. AFR isn't the only way to combine GPU horsepower; like SLi above, there are other methods that have been tried, but fundamental changes to the graphics rendering pipeline have made most of them unfeasible. Now GPUs must have access to whole frames independently of each other, and splitting up the work of a single frame requires significant high-bandwidth, low-latency, low-level core-logic communication, which just isn't feasible, even when you weld two GPUs to the same card. Making that work, whether it be an SLi-type process, a tiling process, or a split-frame process, would require that the GPUs involved essentially be on the same internal bus sharing the same memory controller; otherwise, the latencies involved in inter-GPU communication for each frame would negate some or all of the advantages of multi-GPU rendering.

So AFR it is. Each GPU gets its own frame to work on, one on the current frame and one on the next, so that inter-GPU communication stays at a minimum; with AFR, it's largely handled by the driver stack, which allows the driver engineers to tweak arrays of GPUs to the quirks of different game engines and the individual games that run on them. Note that the main reason AFR is the most effective method (not the best, mind you) is that the DX10/DX11 unified shader model and the resulting shader 'code' written for games necessitate that each GPU have access to the rendering results of the whole frame at any given point in time. This means that, if you're not doing AFR, one GPU might stall while waiting for the other GPU to send data that it needs for the part of the frame it's working on. This is going to happen every time there's a boundary in the frame that divides sectors being rendered by different GPUs, and is exacerbated by most games, where some parts of the frame are typically very quick and easy to render while others are much more complex and require more time; thus one GPU may get quite far ahead of the other(s) and then wind up waiting for them to catch up.

And AFR has a profound effect on 'GPU lag'. Essentially, it means that you're always about a frame behind, which is almost always noticeable in fast-paced games. It means that you're adding at least one frame's worth of frame latency to the input 'chain', which can be bad (well, IS bad), because you need to have two sequential frames in flight. Now, SLi/CFX-aware games and properly written drivers can mitigate the effect somewhat, especially if you're running Vsync on, to the point that you're only ever one 'frame' behind. In situations where a single card would only get you half the frame-rate, and in situations where adding a second card or more increases frame-rate significantly above your personal reaction thresholds, and where you'd be at least a frame behind with just one (or fewer) card(s), multi-GPU setups make a lot of sense.
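One way to picture that AFR penalty is a toy model where each frame in flight adds one frame-time of lag, ignoring driver-level mitigation (the function and numbers here are purely illustrative):

```python
# Toy model of AFR 'GPU lag': each additional frame in flight adds one frame
# time of latency, even when the multi-GPU setup raises the framerate.
def gpu_lag_ms(fps, frames_in_flight):
    return frames_in_flight * 1000.0 / fps

print(gpu_lag_ms(60, 1))   # single card at 60 FPS      -> ~16.7 ms
print(gpu_lag_ms(60, 2))   # AFR pair, also 60 FPS      -> ~33.3 ms ("60fps with the lag of 30fps")
print(gpu_lag_ms(100, 2))  # AFR pair pushed to 100 FPS -> ~20.0 ms, still behind the single card
```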

But, as RamonGTP stated above, two slower cards are going to add measurable, if not noticeable, input lag over a single faster card every time. This isn't a 'golden rule' of multi-GPU; vendors, their driver engineers, game engine developers, and game developers can help to mitigate or eliminate this lag, but they have to work for it. In particular, cracking this egg is going to require either significant investment in game-engine awareness of AFR systems, or new shared rendering setups like tiling that can intelligently balance the workload of each GPU while minimizing the impact of cross-GPU rendering as well as final frame stitching, while being cognizant of current and emerging shader program conventions.

The last bit I have to add is this- I'm a hardware geek, but I've not really been heavy into the 'industry', as some of you are, or say Kyle or Brent, or Anand or the guys at TR. But I've definitely seen the need to measure 'frame times' instead of average frame-rates for a long, long time, and now I'm seeing the need to develop a methodology to measure the full 'input to output' lag of gaming implementations.

Particularly, as a community, we need to study just how much lag exists between moving a mouse or pushing a button and then seeing the result of those actions rendered on the screen, as well as seeing the update packets containing those movements leave the network. We need to be able to measure the total time it takes our systems to poll an input, to update the game engine, to process it, and to output it to the network stack and the GPU and sound driver stacks; for the audio stack to push it to the DAC, for the GPU to push it out of the frame buffer, and for the network stack to transmit it over the media; and for the speakers to produce sound, the monitor to display the graphics, and the local network to push the packets out to the ISP- and then for them to come back. We should also measure how long it takes the system to process incoming packets and for those to make it to the audio and GPU stacks as well.

This whole 'system' lag needs to begin to be explored- somehow, some way- and measured and quantified, so that we can hold every party involved to a standard and demand, as a community, that they improve against that standard, just like TR and Anandtech called out AMD on their 'frame pacing' issue. Now, no one can get away with ignoring frame latencies in favor of higher average frame rates, and no one should be able to get away with producing products used for gaming that add significantly to 'system' lag without labeling them as such.
 
There is no question that more of any resource on a computer is better than less. But the question that is really being asked is whether or not increased VRAM results in increased performance, and in almost EVERY case the answer is no. This is especially true of the average PC gamer who games at 1080p and 60Hz.

In almost every case we have today, the answer is most definitely no. If we were talking about today, you'd be exactly right!

But we're not. We don't have consoles on the shelf that have 8GB of RAM today, with over 4GB of that memory available to games today. That's later this year.

And we don't have games that were designed specifically to take advantage of that memory today either- so our best examples are those games that did take advantage of extra memory without the usage of that memory significantly decreasing rendering performance- in particular, the two examples I've cited were Dragon Age II with the 'High Resolution Texture Pack' and Skyrim with texture mods that tremendously increase the graphics fidelity of the game without killing framerates. Larger textures and related assets aren't the only thing I expect developers to use the extra RAM on these new consoles for, but they are the first thing they're most likely to use, since most textures are usually created at far higher resolution than needed and then selectively downsampled and compressed to fit the game onto each platform for which it's released.

Now, like Skyrim, you have the option to turn that stuff off- so those of us, like myself, with GPUs that have less than 6GB of RAM will be fine by lowering settings.

But the point, restated, is this: in order to use all of the graphical 'goodies' likely to ship with the PC releases of games developed for the next generation consoles, you're going to need far more VRAM than we've conventionally thought was even useful, let alone necessary. Expect developers to ship the PC versions with even more assets than the console versions, simply because PC users can just turn the detail levels down, while those with the hardware to make use of them will definitely appreciate their inclusion. Expect AMD and Nvidia to promote this, since it means shipping new GPUs with more RAM, something inexpensive for them but with very high margins, since they can ask more, at least initially, for these cards. And expect those price premiums to erode quickly as vendors compete for sales, adjusting the retail market tiers we have now in turn.
 
Can we find any benchmarks comparing different vram amounts on the same video card in newer games? I found some benchmarks for the 8800gt and the 200 series card, but nothing for newer games. I did find an interesting benchmark comparing DDR3 to GDDR5 on the same card, which resulted in a 17-34% increase in FPS. http://im.tech2.in.com/gallery/2011/jul/graph_211723598385_640x360.jpg

The benchmarks I did find for those older cards only showed a 2-3 fps edge over the lower-VRAM cards, but those were much older games that might not have been designed to take advantage of more VRAM.
 
The Xbox One uses DDR3; the PS4 and discrete GPUs use GDDR5. The PS4 will have about 170.6 GB/s, basically 3 times as much.
Irrelevant. He's attempting to point out how games have to be limited in their scope in order to accommodate consoles.

The Xbox One is the lowest common denominator. He wants a baseline, the Xbox One is it.

But the Xbox One does have that 32MB of eSRAM that will be used for frame buffers, and that clocks in at 102 GB/s. So the PS4 could be actively pushing close to 3 gigs of VRAM per frame at 60 FPS.
And, if IdiotInCharge is correct, the PS4 will be hamstrung because games will have to be developed to operate within the Xbox One's specs :rolleyes:

The PS4 will, no doubt, be better able to utilize its RAM. It has a GPU with more compute units and has many times more bandwidth... but by IdiotInCharge's logic, it's in the same boat as PC's.

You can keep coming up with corner cases
I've been hitting you with facts, benchmarks, and some pretty damning evidence.

A rundown of 30 games tested with the same GPU and 2GB vs. 4GB of video RAM was pretty clear cut. This included games that even next-gen consoles have no hope of running at the settings that the PCs they tested were capable of.

A GPU as weak as the one in the Xbox One actively using more than 2GB of video RAM is a corner case, as far as I'm concerned.

I can keep spelling it out line by line.
You mean ignoring all the evidence to the contrary, without actually giving a valid reason for doing so? :rolleyes:

Simply saying a game "isn't next gen so it doesn't count" because it doesn't meet some arbitrary cut-off date you've decided on is not acceptable.

to address the 'memory is cheap' debate, remember that this generation was supposed to only have 4GB- which is about average for a PC with no discrete GPU- but got upgraded to eight solely on the stupid cheap pricing of RAM this last year or two.
They were upgraded to double-density chips. I mentioned this exact possibility.

You get additional capacity, but no additional bandwidth. Higher density chips also tend to have worse timings than their low-density counterparts, leading to lower overall performance from the RAM.

They took the easy way out, and it shows. The Xbox One could have really used some extra bandwidth.

So yeah, memory is fricken' cheap.
Nope, not if you want to take full advantage of it. I already explained why, but you're just going to continue to ignore me...

Now, like Skyrim, you have the option to turn that stuff off- so those of us, like myself, with GPUs that have less than 6GB of RAM will be fine by lowering settings.
I know for a fact Skyrim does not require anywhere near 6GB of video RAM, even with a ridiculous number of graphical enhancements installed.

Know why? It's based on DirectX9 and uses a 32bit executable. One of the biggest issues with DX9 is how it causes system RAM usage to bloat in concert with graphical complexity. Loading up Skyrim with too many mods will cause it to overflow 4GB of system RAM (crashing the game) before you're anywhere near 6GB of video RAM being in use.

In almost every case we have today, the answer is most definitely no. If we were talking about today, you'd be exactly right!

But we're not. We don't have consoles on the shelf that have 8GB of RAM today, with over 4GB of that memory available to games today. That's later this year.
There you go with that same false assumption. You're still assuming that current games are not using PC's to their fullest simply because a console version was also developed. Even when we're talking about games that are able to swamp a Titan, that's still not good enough for them to count as an example to you.

You're effectively arguing that your argument is un-testable because all tests disagree with you...

I can't take you seriously until you stop making that claim.
 
So much to read! Anyway, there is a 3GB 660, for those who keep saying it's a 2GB card... yes, there is a 2GB version, which was released first.
 
Irrelevant. He's attempting to point out how games have to be limited in their scope in order to accommodate consoles.
The Xbox One is the lowest common denominator. He wants a baseline, the Xbox One is it.
And, if IdiotInCharge is correct, the PS4 will be hamstrung because games will have to be developed to operate within the Xbox One's specs :rolleyes:
The PS4 will, no doubt, be better able to utilize its RAM. It has a GPU with more compute units and has many times more bandwidth... but by IdiotInCharge's logic, it's in the same boat as PC's.
They do have a limited scope- that's pretty easy to understand. While many, many things can be scaled, the basic run-times of games that are actually responsible for a game's 'experience' have to be very nearly the same. It's a small part of the game's code and resulting executable, to be sure, but the point is that they'll be different to really take advantage of the next-gen consoles. Maybe developers are willing to sacrifice some of that 'experience' (and I don't mean graphics/sound/physics) for some games to get them running on the current-gen stuff. That's pretty reasonable, I think.

But again, the point is that they’ll have to be different.

As for the 'lowest common denominator', well, that remains to be seen- as it stands, I expect the Xbox One and PS4 to be different. I have no idea which will be more effective, though I fully expect the PS4 to pump out more graphical fidelity. But again, that's just a matter of rendering the game textures at a lower resolution and/or turning back some shader precision and/or lowering AA, all things that are very easy to do.
What you can take from that situation, though, is that such high-end assets will be created- not might be, but will be- and that running them on PCs is going to require more VRAM.

I've been hitting you with facts, benchmarks, and some pretty damning evidence.

A rundown of 30 games tested with the same GPU and 2GB vs. 4GB of video RAM was pretty clear cut. This included games that even next-gen consoles have no hope of running at the settings that the PCs they tested were capable of.

A GPU as weak as the one in the Xbox One actively using more than 2GB of video RAM is a corner case, as far as I'm concerned.


You mean ignoring all the evidence to the contrary, without actually giving a valid reason for doing so? :rolleyes:

Simply saying a game "isn't next gen so it doesn't count" because it doesn't meet some arbitrary cut-off date you've decided on is not acceptable.

Yeah, you can't use current PC benchmarks to determine how games developed for consoles that haven't been released yet will perform when released on the PC. The games don't exist, so not only can I not prove that with hard evidence, but no one can disprove it either. Sorry, I'd love to be able to point out solid benchmarks, but unfortunately the situation we're looking at is unique. This is the first time that a console generation has entered the market with significantly more memory than the 'average' PCs at the time, which is why I keep pushing my point- we haven't seen this before. So yes, this is a forecast if you will, but I'm standing by it.

They were upgraded to double-density chips. I mentioned this exact possibility.
You get additional capacity, but no additional bandwidth. Higher density chips also tend to have worse timings than their low-density counterparts, leading to lower overall performance from the RAM.
They took the easy way out, and it shows. The Xbox One could have really used some extra bandwidth.
Nope, not if you want to take full advantage of it. I already explained why, but you're just going to continue to ignore me...

You don't actually need extra memory bandwidth to take advantage of extra RAM. It helps, but expect developers to code masterfully around that limitation. And understand that this limitation isn't really analogous to what we've seen on the PC- what we've seen does contradict what I'm saying, up until you realize that what we've seen was based on different conventions, as explained above. And do not, ever, underestimate the ingenuity of game developers.

I know for a fact Skyrim does not require anywhere near 6GB of video RAM, even with a ridiculous number of graphical enhancements installed.

Know why? It's based on DirectX9 and uses a 32bit executable. One of the biggest issues with DX9 is how it causes system RAM usage to bloat in concert with graphical complexity. Loading up Skyrim with too many mods will cause it to overflow 4GB of system RAM (crashing the game) before you're anywhere near 6GB of video RAM being in use.

6GB for Skyrim? No, for the reasons you explain above (to begin with). 3GB? Sure, more than possible. Thing is, if the game uses some amount of memory on the consoles, it's going to need more on the PCs. The simple reason is that PCs are different- put another way, PCs have a whole lot more going on, for the OS and other applications, which means that at the very least more CPU, main memory, GPU, and VRAM will be needed. It may not be much more, depending on the game, but it will be more. Even Windows 8.1 isn't as efficient as the OSes on the incoming consoles. PCs will need a little more of everything just to match performance (not that that's really hard, except for the VRAM).

There you go with that same false assumption. You're still assuming that current games are not using PC's to their fullest simply because a console version was also developed. Even when we're talking about games that are able to swamp a Titan, that's still not good enough for them to count as an example to you.
You're effectively arguing that your argument is un-testable because all tests disagree with you...
I can't take you seriously until you stop making that claim.
I talked above about trying to apply current benchmarks- it doesn't really make sense unless you go out of your way to find those 'corner cases' that can reasonably reflect the situations game developers will be facing when developing for these new consoles. Sure wish that wasn't the case; that'd make this easier!
 
Most games today don't need more than 2GB of VRAM to max at 1080p, but I don't think it will stay that way in 2014... not with games designed around the XB1 as the baseline.

Current games are designed to scale well with small amounts of video memory; even BF4 is designed with 512MB of memory in mind for the X360 and PS3. Even bleeding-edge visual tours de force like Crysis 3 have this scaling. These games will work well with a reasonable amount of VRAM because of two things:

1. Limited, modular environment: This is one of the main reasons why Crysis 3's visual fidelity is so high compared to other games. In addition to having the efficient Crytek engine, the environment was rather walled off, so asset density increased in relation to the decreasing environmental scope. While the artists did a great job at concealing game boundaries, I am most sensitive to the openness of the environment and have to say Crysis 3 was linear and cordoned off in terms of level design.

Crysis 3 was less claustrophobic than Crysis 2, but nowhere near the openness of the first Crysis. This allowed Crysis 3 to fit right into the X360 and PS3, while melting the highest-end PC with lighting, particle effects, AA, tessellation, etc.

2. Limited non-modular asset density: Let's define non-modular assets first: stuff that is designed into the environment and can't be easily scaled, like buildings, mountains, rocks, and even trees to a certain extent. While you can easily scale lighting/shadows, particle effects, textures, etc., polygon meshes are not so easily scaled, and hence they're the distinguishing factor in current generation vs. next generation. Take one look at Witcher 3's non-modular asset density vs. Skyrim's, and you will see that there are many art assets that cannot be scaled without ruining the design.

Given the baseline constraint imposed by all the current-gen ports- 512MB of total RAM- no game today has both open environmental scope and high non-modular asset density that would truly force VRAM usage up even on the lowest settings.

I don't think BF4--or any of the current games available for that matter--should be considered when future-proofing hardware, as they still are designed around low VRAM scalability. The games that will shock the baseline up to XB1's standard will be the likes of Witcher 3: an open-world game with high non-modular asset density, as will be many next-gen console ports.

I can't wait to see how much VRAM Witcher 3 will use on maximum settings, I sense this game will be the litmus test of next-gen VRAM usage.
 
I can't wait to see how much VRAM Witcher 3 will use on maximum settings, I sense this game will be the litmus test of next-gen VRAM usage.

Hell, I was cranking Witcher 2 down considerably; Ultra shaders really were :). Wonder how Titans are doing on that game.
 
Hell, I was cranking Witcher 2 down considerably; Ultra shaders really were :). Wonder how Titans are doing on that game.

Almost playable with two 780s w/ubersampling on. I would need 60 locked to say it's playable, but it dips into the low 40s in the forest outside of Flotsam with the rain coming down.
 
So CPU speed, and everything else involved in the per-frame rendering process, has a huge effect here. This is also one of the reasons why we advocate much higher performance CPUs than most benchmark houses would have the masses believe was 'enough', by the way. Even with the 4.5GHz 2500k in my system below, I find the CPU to be lacking in 'intense' situations in BF3. GPUs are fine, as I calibrate my in-game settings for the worst case scenario, but the CPU still can't keep up, and I find that my system still gets a little 'laggier'. All other things being equal, it means that my opponents have the upper hand. Now, there's a whole metric shit ton of other variables here, from individual play-styles to input maps to the network's performance at that very moment, but when we build a 'system' to accomplish a goal, we try to minimize or eliminate every limitation we can.

It's the network. I doubt they're all running i7s.
 
I'm looking forward to putting this argument to rest during the load test beta next month. Personally I'm looking forward to moving past 2GB with Maxwell- gimme those high-quality assets... or just buttloads of assets in a large environment that match Crysis 3 or Metro LL for quality.
 