BusyBeaver040197695 said:
"Most games today don't need more than 2GB of VRAM to max at 1080p, but I don't think it will stay that way in 2014... not with games designed around the XB1 as the baseline."
Current games are designed to scale well with small video memories; even BF4 is designed with 512MB of memory in mind for the X360 and PS3. Even bleeding-edge visual tour-de-forces like Crysis 3 have this scaling. These games will work well with a reasonable amount of VRAM because of two things:
1. Limited, modular environment: This is one of the main reasons why Crysis 3's visual fidelity is so high compared to other games. In addition to having the efficient Crytek engine, the environment was rather walled off, so asset density increased in relation to the decreasing environmental scope. While the artists did a great job at concealing game boundaries, I am most sensitive to the openness of the environment and have to say Crysis 3 was linear and cordoned off in terms of level design.
Crysis 3 was less claustrophobic than Crysis 2, but nowhere near the openness of the first Crysis. This allowed Crysis 3 to fit right into the X360 and PS3, while melting the highest-end PC with lighting, particle effects, AA, tessellation, etc.
2. Limited non-modular asset density: Let's define non-modular asset first: stuff that is designed into the environment and can't be easily scaled, like buildings, mountains, rocks, and even trees to a certain extent. While you can easily scale lighting/shadows, particle effects, textures, etc., polygon meshes are not so easily scaled, and hence they are the distinguishing factor between current generation and next generation. Take one look at Witcher 3's non-modular asset density vs Skyrim's, and you will see that there are many art assets that cannot be scaled without ruining the design.
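To put rough numbers on that scaling asymmetry, here's a back-of-the-envelope sketch in Python. The formats and sizes (DXT-style compression, 32 bytes per vertex) are assumptions for illustration, not figures from any real game:

```python
# Illustrative numbers only: why textures scale down cheaply but meshes don't.
def texture_mb(size, bytes_per_texel=0.5):
    # 0.5 bytes/texel ~ DXT1-style block compression; x4/3 for the mip chain.
    return size * size * bytes_per_texel * 4 / 3 / 2**20

def mesh_mb(vertices, bytes_per_vertex=32, indices_per_vertex=6):
    # Vertex buffer plus a typical index buffer (4-byte indices).
    return vertices * (bytes_per_vertex + indices_per_vertex * 4) / 2**20

# Textures: drop the top mip level and memory falls ~4x, with no re-authoring.
print(round(texture_mb(2048), 2), round(texture_mb(1024), 2))  # 2.67 0.67
# Meshes: there is no equivalent free knob; a lower-poly version of a building
# or mountain is a separate asset someone has to make (hence discrete LODs).
print(round(mesh_mb(100_000), 2))  # 5.34 MB for a 100k-vertex hero asset
```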
Given the baseline constraint imposed by all the current-gen ports (512MB of total RAM), no game today has both an open environmental scope and a high non-modular asset density that would truly force VRAM usage at the lowest settings.
I don't think BF4--or any of the current games available, for that matter--should be considered when future-proofing hardware, as they are still designed around low VRAM scalability. The games that will shock the baseline up to XB1's standard will be the likes of Witcher 3: an open-world game with high non-modular asset density, as will many next-gen console ports.
I can't wait to see how much VRAM Witcher 3 will use on maximum settings; I sense this game will be the litmus test of next-gen VRAM usage.
My bet is that next-gen games using XB1 as the baseline (and not X360 or PS3) will take advantage of >2GB VRAM, even at 1080p.

When you say it'll be the litmus test... I think we need to clarify what we're debating in this thread. Are you saying that next-gen games will require >2GB when maxed at 1080p, 1200p, 1440p, 1600p? Obviously triple monitor resolutions require more than 2GB (and more than 1 GPU in most circumstances).
"I predict one simple thing: max out the draw distance in games like Witcher 3 and voila, all your VRAM are belong to the program."

I highly doubt that you will see a performance dropoff in next-gen games at 1080p/1200p with a high-end GPU that has 2GB of VRAM (GTX 680/770).
Since we all know 'recommended' is the real minimum, and you should add at least 50% to that to play the game remotely smoothly...
People seem to be forgetting: Microsoft says that new games will keep coming out for the 360 for a while longer. This means things won't be pushed hard at the beginning. Later, developers will fill up the Xbone and PS4 memory, and finally, when they reach the limit, they will optimize.
And then I will have SLI 4GB 880s.
Originally Posted by Eurogamer's Inside Killzone: Shadow Fall:
"Guerrilla also reveals that it is using Nvidia's post-process FXAA as its chosen anti-aliasing solution, so that 3GB isn't artificially bloated by an AA technique like multi-sampling."
"Factoring in that this is just a demo of a first-gen launch title, 3GB is an astonishing amount of memory being used to generate a 1080p image - as developers warned us recently, lack of video RAM could well prove to be a key issue for PC gaming as we transition into the next console era."
"Are you saying that next-gen games will require >2GB when maxed at 1080p, 1200p, 1440p, 1600p? Obviously triple monitor resolutions require more than 2GB (and more than 1 GPU in most circumstances)."
Increasing display resolution doesn't actually increase memory usage by all that much.
Even with Eyefinity resolutions, the additional memory usage is mostly because of the higher FOV (fewer resources can be culled from the scene because the viewport is wider).
And I've never personally needed multi-GPU to handle triple-monitor gaming. Single high-end cards have quite a bit of grunt these days.
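A quick illustration of why resolution alone barely moves the needle: render-target memory grows linearly with pixel count, but it stays small next to the asset pool. Same assumed five-target, 4-bytes-per-pixel setup as in the sketch above, purely illustrative:

```python
# Render-target memory across resolutions (assumed setup, illustrative only).
BYTES_PER_PIXEL, NUM_TARGETS = 4, 5

def targets_mb(w, h):
    return w * h * BYTES_PER_PIXEL * NUM_TARGETS / 2**20

for w, h in [(1920, 1080), (2560, 1600), (5760, 1080)]:
    print((w, h), round(targets_mb(w, h)), "MB")
# (1920, 1080) 40 MB   (2560, 1600) 78 MB   (5760, 1080) 119 MB
# Tripling the pixels adds under 100 MB of targets; the textures and meshes
# (hundreds of MB to GBs) don't change with resolution, only with FOV/culling.
```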
"Factoring in that this is just a demo of a first-gen launch title, 3GB is an astonishing amount of memory being used to generate a 1080p image..."

That's a grossly misleading statement. I very highly doubt the game ever requires 3GB for any single frame. The more correct way to have said it would've been "3GB is an interesting amount of memory being used to generate a very large number of 1080p images across a long span of time".

It's interesting, but not astonishing.

"Killzone SF is a PS4 exclusive with streaming memory and processes super-optimized."

Optimized? Hardly. They're targeting a 30 FPS framerate; that alone is enough for me to give the game a pass. Brand-new hardware with a hugely reduced performance handicap, and they can't even manage to put out a title running at 60 FPS? Meh...
"Uhm, hold on there cowboy. There's the amount of memory that the frame-buffer needs for each rendered frame, which is excruciatingly small, and the amount of memory that the game needs to generate said frame, such as textures and vertices, which is what we're talking about here with Killzone."

No, that's not what's stated:

"3GB of RAM is reserved for the core components of the visuals, including the mesh geometry, the textures and the render targets (the composite elements that are combined together to form the whole)."

It does not specifically state that any given frame requires a 3GB dataset. It states that 3GB is reserved for "mesh geometry, the textures and render targets". One single frame may comprise only a small subset of that dataset; that large a dataset may be resident but not necessarily used for any given frame.
There's a significant difference between what data may reside in memory at any given time and what is actually used in the process of rendering a given frame.
"You're forgetting that the consoles are FORCED to pre-load a lot more data, because if the consoles need to load something that's not in RAM, they have to wait for the hard disk."

Still, it's not like the VRAM usage on the PC is going to be any less: while it's possible that assets may be streamed from main memory, such a situation is far from ideal and will tank performance if those assets are needed immediately for a frame.
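Both posts hinge on the distinction between what's resident and what a single frame actually touches. A toy model of that distinction (all numbers hypothetical):

```python
# Toy model of "resident" vs "used this frame" (all numbers hypothetical).
# A game keeps a large asset pool resident so the renderer never waits on
# disk, but any single frame's visible set is a small slice of that pool.
import random

random.seed(0)
resident = {f"asset_{i}": random.randint(1, 8) for i in range(600)}  # MB each
pool_mb = sum(resident.values())

visible = random.sample(sorted(resident), k=60)  # post-culling visible set
frame_mb = sum(resident[a] for a in visible)

print(f"resident: ~{pool_mb} MB, touched this frame: ~{frame_mb} MB")
# If a needed asset is NOT resident, the options are a stall (wait on disk
# or a PCIe transfer) or visible pop-in with a low-detail stand-in, which
# is why consoles pre-load aggressively and PCs want the same headroom.
```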
"Optimized? Hardly. They're targeting a 30 FPS framerate; that alone is enough for me to give the game a pass. Brand-new hardware with a hugely reduced performance handicap, and they can't even manage to put out a title running at 60 FPS? Meh..."

The 30 FPS target is a design decision that has more to do with artistic optimization (like assets-on-screen, poly count, draw distance, and level design) than with engine optimization. Engine optimization means getting the most out of the hardware constraints at the programming level (like lighting calculation, memory allocation/streaming, and processor utilization), and you simply can't beat a console exclusive in this regard.
"They're still using LOD models, which are horribly inefficient. They state they upped the number of LOD models from 4 to 7 for this game. That means loading 7 different models per object, which is a great way to waste RAM (or cause endless HDD thrashing by swapping from disk). If it were being efficient it would be using tessellation. All that has to be loaded is the low-poly model and a detail map (from which the high-poly model can be rendered on demand). Detail can be ramped up and down based on distance and performance, dynamically, smoothly, with no swapping or pop-in."

If I were a developer, I'd weigh the pros and cons of LOD modelling vs tessellation. Given that I'd tried both options, it's only logical that I'd decide on the one that gives me the best performance for the visual fidelity.
http://www.dsogaming.com/news/red-e...d-in-the-witcher-3-new-tech-details-unveiled/:
"CD Projekt RED has also experimented with DX11 tessellation on characters, however it seems that The Witcher 3 won’t support it (do note that the game will most probably take advantage of DX11 tessellation for its environments). According to the developers, the company experimented with two tessellation methods: PN Triangles and Displacement mapping. The first technique did not bring much additional detail to the characters, while the second technique was promising. Still, in order to use Displacement mapping as a tessellation solution, CDPR would need to change its pipeline. Not only that, but that particular technique brought a ‘swimming’ effect and some hole that were caused by the tessellation itself."
Actually, this point I concede- not that I didn't understand it as it has been brought up throughout this thread, but I haven't addressed it yet. Still, this does depend on the game involved, and it's one of the things that we'll just have to wait and see to evaluate. I stand by my evaluation that games built for the new consoles may be able to make positive use of more than 4GB of RAM on the PC, and that 6GB of RAM is the best 'safe' bet for buying a new GPU today.
There are 6GB HD 7970s; but finish reading the thread first.
And that means that if you're looking at buying a card now, you should wait, not necessarily buy a Titan or shell out the extra cash for the 6GB HD 7970.
I'm sure I'll be fine with my pair of 2GB 680s
Well, it could be 2+ years- or it could be six months. We'll have to wait and see. But the thing is this- if you have the memory, even if the GPU is old, you can still run the game; if you don't have the memory, well, life sucks.
Argument still applies for the 'buy one upper-midrange card now, buy another later' crowd too, but that's still just a small fraction of us.
"That's kind of beside the point, isn't it? They're targeting 30fps because 60fps is not likely possible while still maintaining that same visual fidelity."

Beside what point? 30 FPS means they're purposely giving up smoothness for visual fidelity. I still don't see any game currently that has KZ's level of visual fidelity at what is less than a 7870's processing power.
I'm wondering how my GTX 690 will do at 2560x1600 with only 2GB of memory. I hope I can still get decent performance at close to max settings.
"And you probably won't ever on a PC, but that's why we have GPUs far stronger than 7870s and CPUs far more powerful than what's in the PS4. Fidelity without the fps cap. Downside is having to wait longer, since consoles will get next-gen games first. To say you haven't seen anything like KZ is a pretty worthless metric at this point in time. You're comparing next gen to current gen and saying next gen is better."

KZSF is an absolutely worthy and relevant metric at this point: a few people are saying 2GB is enough for 1080p even with next-gen, and I point to the contrary in the case of KZSF.
So when the BF4 beta comes out October 1st and shows that 2GB is fine for 1080p/1200p, the reasoning is going to be that it's not a true next-gen game, correct? Come to think of it, I'm sure that's going to be the reasoning for every game that comes out until there's a game that needs more than 2GB of VRAM at the aforementioned resolutions; then that will be a true next-generation game.
The argument that >2GB of VRAM hasn't been necessary yet partly because developers aren't creating assets that use large amounts of memory, due to constraints on current-generation consoles, seems silly to me. The assets are clearly much more detailed in the PC ports; it's not like they remade all the assets from scratch for the PC version. The developers create the assets on PC and scale them down to fit consoles.
Actually, I think that's completely right. BF4 will run on the previous gen consoles, and thus was designed with their limitations in mind; that means that it's more of a 'bridge' game and not fully a 'next-gen' game. And it should be obvious that the game will be both very playable and still look great on 2GB cards, but it won't be running at 'max settings', and that's the point.
As for true next-gen games, you're going to want 6GB per GPU just to be able to turn on the detail settings; you might need three GPUs to actually run them at max, depending on just what developers put in these games for the PC releases. Just know that 'live assets' in games are about to go through the roof.