NVIDIA GeForce GTX 780 SLI Video Card Review @ [H]

As far as people asking you to wait for this frame pacing patch from AMD... how many damn years now has it been that CrossFire has been bugged out with crappy feel? I've been burned by ATI drivers more times than I care to admit. They will have to put up a few years of solid performance before I let team red back in the house... so sad.

When will NVIDIA start using a 22nm process and shrink down that GK110 while speeding up the clock?

That's pretty much how I feel, too... so many years of horrific drivers. I'm glad they've taken a good first step toward getting their act together, but it's going to take a good track record and solid fixes for a while before I'll really go for it (at least without a good return policy! And even then I'd be wary of them not keeping it up, if I were going for them soon).

20nm GPUs are due a few months into 2014 ;). The next nVidia architecture is Maxwell... I doubt we'll see high-end 20nm Keplers.
 
I just put one of my cards under water and it's stable at 1306 on the core. My second one is not under water yet; I'm waiting a month to decide if I keep it. It's running 1215 on the core, and I think the memory is set to 1600MHz, but I can't remember what I set the memory to.
 
I'd like to see some testing with the NVIDIA EVGA GTX 760, including in SLI, since two 760s in SLI beat one $1,000 Titan for $500.
 
I'd like to see some testing with the NVIDIA EVGA GTX 760, including in SLI, since two 760s in SLI beat one $1,000 Titan for $500.
Can't really compare two GTX 760s to a Titan; the 760s still have horribly nerfed double-precision performance (and only 2GB of video RAM, vs. the Titan's 6GB). There are still situations where the Titan has a massive advantage (especially in compute performance).

It makes more sense to compare two GTX 760s (2GB of video RAM and $500) against a single GTX 780 (3GB of video RAM and $650)... but I honestly think I'd pay the extra $150 just so I don't have to deal with SLI. You could also save a bit by going with a smaller PSU (a single card isn't as hard to drive, after all) and make the price difference more like $75.
 
I moved up to two 780s (from 680s) to future-proof my next build. The cards are extremely quiet while coming very close to Titans. It does suck that the pricing models have exploded recently, and I wish they would return to the $500 mark so more folks would be interested. Great review; multi-monitor setups and resolutions are what I am looking for, to see how they scale (I hope to move to three 30" monitors now that I know good games are playable).

Excellent work as always Kyle.
 
So with the Xbox One and PS4 coming, how soon before new games eat up more than 3GB of VRAM and you see 780 owners having to turn down MSAA in order to get playable FPS vs. Titan owners? I'm thinking very soon.
 
So with the Xbox One and PS4 coming, how soon before new games eat up more than 3GB of VRAM and you see 780 owners having to turn down MSAA in order to get playable FPS vs. Titan owners? I'm thinking very soon.
Highly doubt it'll be a problem. Both the Xbox One and the PS4 only have about 4 to 5GB of RAM available to games (and that's shared between the CPU and the GPU; there's no dedicated video RAM).

Meanwhile, in PCs with a GTX 780, 16GB of dedicated system RAM and 3GB of dedicated video RAM are pretty standard.

Even so, isn't this part of the reason FXAA was created? FXAA has been a godsend for Surround users for ages, where the high resolution alone makes MSAA impractical in most newer titles anyway.

You also have to consider that MSAA is arguably both performance-heavy AND not great as far as quality is concerned. MSAA can't anti-alias shaders; FXAA can (and it can do it without the HUGE performance hit of SSAA). Some people like to combine MSAA and FXAA in order to up the sample size that FXAA has to work with (improving accuracy), but that's fairly niche. Some games don't even give you the option of MSAA anymore; BioShock Infinite only allows FXAA, and Metro: Last Light only has the option of FXAA or SSAA. These games are both shader-heavy, so I imagine MSAA would have looked like crap anyway.
 
Unknown-One, if this forum had rep, you would get +999999999 for that post. Perfectly said.
 
The problem I see is that the new consoles are going to set a minimum standard- and you're right, they're not going to use all that RAM for AA. They'll be using it for assets.

With the artistic limits raised substantially, I expect desktop games to be able to make use of 6GB to 8GB per GPU in the near future; that's only reasonable, and that's at 1080p. Put 4K-level resolutions, in Surround or otherwise, into the mix, and 12GB per GPU doesn't sound terribly unreasonable in the next couple of years.

Remember that memory is cheap- we were grabbing 16GB kits for $50 recently; how much more could memory for a GPU cost? Even three times as much would be reasonable for many of us, if we had the option.

And that's what I'm waiting for- AMD has a chance to really push things with their renewed focus on drivers; they could slap 12GB on a tweaked HD7970 GHz this fall and call it a win with ease. If they put a decent cooler (blower please) on it that's in range of the Titan cooler, I might be in line for two :).
 
The problem I see is that the new consoles are going to set a minimum standard- and you're right, they're not going to use all that RAM for AA. They'll be using it for assets.
Consoles aren't going to use any of that RAM for AA, because they're not going to be using MSAA most of the time. The Xbox One and PS4 are going to do exactly the same thing that the Xbox 360 and PS3 do: use shader-based AA (Edge AA, FXAA, MLAA, etc.) because it looks good enough, anti-aliases shaders without resorting to super-sampling, and has almost zero performance impact.

Same reasons you should be using it on the PC. :rolleyes:

With the artistic limits raised substantially, I expect desktop games to be able to make use of 6GB to 8GB per GPU in the near future; that's only reasonable, and that's at 1080p. Put 4K-level resolutions, in Surround or otherwise, into the mix, and 12GB per GPU doesn't sound terribly unreasonable in the next couple of years.
12GB per GPU sounds wholly unreasonable for games being designed initially for a unit that only has 4 or 5 GB available for BOTH the CPU and GPU to utilize.

What's going to cause these titles to suddenly use three times more RAM, on the video card alone, when they're run on a PC? Resolution alone sure as hell won't do it.
 
Latency is still superior on 780s in games, and they have no runt-frame issue. Until this is really resolved, paying for a CrossFire solution is illogical. Different sites are reporting progress, but it's far from perfect.
 
It just bums me out that the 780s and Titans coming out killed the resale value of my cards. In real-world gaming there is no advantage for me, or anyone else with a 680 (single, if that's all you want, or SLI like I'm comparing here). Yes, I know, 6GB VRAM and Boost 2.0... whatever. These incremental releases by nvidi$ are insulting.

My cards are Classifieds, but most of the mainstream 680s can do these clocks easily. I'm not going to post an elaborate personal comparison of countless games; I will post my 3DMark11 score: http://www.3dmark.com/3dm11/7096633
 
It just bums me out that the 780s and Titans coming out killed the resale value of my cards. In real-world gaming there is no advantage for me, or anyone else with a 680 (single, if that's all you want, or SLI like I'm comparing here). Yes, I know, 6GB VRAM and Boost 2.0... whatever. These incremental releases by nvidi$ are insulting.

My cards are Classifieds, but most of the mainstream 680s can do these clocks easily. I'm not going to post an elaborate personal comparison of countless games; I will post my 3DMark11 score: http://www.3dmark.com/3dm11/7096633

They're not really incremental- just the current iteration of the same approach Nvidia has been taking for the last decade with big chips, mid-size chips, and small chips. The GTX 780 is a bit of a quandary, though- the price/performance over the GTX 680/GTX 770 doesn't really justify the price (other than against the super-premium price of the Titan), and the 3GB of RAM is a joke- it should have been 6GB, and the Titan should have had 12GB available. We're going to need that RAM in the very near future, I think, and those who invested in the GTX 780 are likely to feel shortchanged!
 
the 3GB of RAM is a joke- it should have been 6GB, and the Titan should have had 12GB available. We're going to need that RAM in the very near future, I think, and those who invested in the GTX 780 are likely to feel shortchanged!
You keep saying that without any rational basis :rolleyes:

I've already stated an argument to the contrary, but you go right on believing 16GB of DDR3 (System RAM) combined with 3GB of 6 GHz GDDR5 (video RAM) isn't enough :rolleyes:

Consoles have to suffer through with only about 4 or 5GB of (usable) RAM shared between the CPU and GPU. Worse yet, the consoles don't have as much bandwidth: the shared RAM on the Xbox One clocks in at 68.3 GB/s, whereas the video RAM on the GTX 780 is all the way up at 288 GB/s (and easily over 300 GB/s if you start overclocking). There is no world in which a GTX 780 is not adequate to run next-gen games.
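For anyone who wants to check the math, here's a quick back-of-the-envelope sketch (Python, using the published 384-bit bus and 6 Gbps effective memory clock for the GTX 780, and the 68.3 GB/s figure quoted above for the Xbox One):

```python
# Back-of-the-envelope memory bandwidth math (figures are published specs, not measurements).
# GTX 780: 384-bit GDDR5 bus, 6 Gbps effective per pin.
gtx780_bw_gbs = 384 / 8 * 6      # bus width in bytes times data rate -> 288 GB/s
xbox_one_bw_gbs = 68.3           # Xbox One main-memory bandwidth as quoted above

print(f"GTX 780 memory bandwidth: {gtx780_bw_gbs:.0f} GB/s")
print(f"Advantage over Xbox One:  {gtx780_bw_gbs / xbox_one_bw_gbs:.1f}x")  # ~4.2x
```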
 
You keep saying that without any rational basis :rolleyes:

I've already stated an argument to the contrary, but you go right on believing 16GB of DDR3 (System RAM) combined with 3GB of 6 GHz GDDR5 (video RAM) isn't enough :rolleyes:

Consoles have to suffer through with only about 4 or 5GB of (usable) RAM shared between the CPU and GPU. Worse yet, the consoles don't have as much bandwidth: the shared RAM on the Xbox One clocks in at 68.3 GB/s, whereas the video RAM on the GTX 780 is all the way up at 288 GB/s (and easily over 300 GB/s if you start overclocking). There is no world in which a GTX 780 is not adequate to run next-gen games.

Your argument to the contrary is... telling me that I'm wrong?

Not sure how you can do that- we don't have any way to prove it! But we can take the past few console generations and the behavior of developers in making games for both the consoles and the PC, and forecast that out- and that leads me to believe that console developers will easily be using ~4GB of memory on the consoles for graphics alone. Because they can.

If they can use 4GB on a console, how hard is it to believe that they could use 6GB on the PC? Do you really expect them to see consoles as a hard limit for game assets? They've never done that! Now, of course, you'll be able to turn the settings down for cards with less memory, if history is any indication, but I consider 8GB of VRAM as a minimum to be able to max out all of the settings on games that are designed to fully take advantage of these incoming consoles.

By minimum, I mean that's what I expect the games to need on the PC to keep VRAM from being a limiting factor; obviously scaling back settings will work to a point. We are [H] here, after all, why not set the bar properly?
 
Your argument to the contrary is... telling me that I'm wrong?
Reading comprehension fail?

Go back and re-read post #88 and post #91 in this thread. Made plenty of good points there.

we can take the past few console generations and the behavior of developers in making games for both the consoles and the PC, and forecast that out- and that leads me to believe that console developers will easily be using ~4GB of memory on the consoles for graphics alone. Because they can.
Except console developers can't, and we know they can't. Both Microsoft and Sony have said that only 4 to 5GB of RAM will be available for games to use. The rest is used by the console OS, hypervisor, and background services.

If they use up 4GB on graphics resources alone, there's no room for the rest of the game...

If they can use 4GB on a console, how hard is it to believe that they could use 6GB on the PC?
They can't use 4GB on the console (aside from tech demos with no actual gameplay), so I find it very hard to believe they'll IMMEDIATELY jump to 6GB of usage on a PC.

Resolution (even triple-monitor) won't push it that high, and neither will AA, thanks to advances in shader-based AA. Next-gen engines all tend to use a framerate target; texture quality and poly count are now automatically varied based on distance, and that distance is a function of the current framerate. This means you'll always get 60FPS and always have the optimal screen quality that your hardware is capable of, with no changes to settings required.

With quality being dynamically varied to maintain 60FPS, settings screens are going to be a thing of the past; there will be no reason to have them in most games.
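To illustrate the idea (a hypothetical sketch in Python, not taken from any actual engine), a framerate-targeted renderer just nudges a detail/draw-distance scale factor up or down each frame based on how the last frame time compares to the target:

```python
# Hypothetical sketch of framerate-targeted quality scaling (not from any real engine).
TARGET_FRAME_TIME = 1.0 / 60.0   # aim for 60 FPS
lod_scale = 1.0                  # multiplier applied to draw distance / texture detail

def adjust_quality(last_frame_time: float) -> None:
    """Nudge the detail level toward whatever the hardware can sustain."""
    global lod_scale
    if last_frame_time > TARGET_FRAME_TIME * 1.05:      # running slow: drop detail
        lod_scale = max(0.25, lod_scale * 0.95)
    elif last_frame_time < TARGET_FRAME_TIME * 0.90:    # headroom: raise detail
        lod_scale = min(2.0, lod_scale * 1.02)

# Example: three slow frames in a row gradually reduce the detail scale.
for frame_time in (0.020, 0.021, 0.019):
    adjust_quality(frame_time)
print(f"lod_scale after slow frames: {lod_scale:.3f}")
```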

Do you really expect them to see consoles as a hard limit for game assets? They've never done that! Now, of course, you'll be able to turn the settings down for cards with less memory, if history is any indication, but I consider 8GB of VRAM as a minimum to be able to max out all of the settings on games that are designed to fully take advantage of these incoming consoles.
Of course I expect to see consoles used as a hard limit for game assets; developers have done that for years. They design to whatever the consoles support, and then throw a small bone to PC gamers, like upgraded textures.

Thankfully, both the Xbox One and the PS4 support tessellation, so PC gamers will also have the option of higher-poly models using whatever headroom they have left over.

And I ask you again, how is a game that has to be designed for far less than a 4GB envelope going to suddenly require 8GB of video RAM when it moves from the Xbox One to a PC? What on earth is going to cause it to arbitrarily require more than double the resources?

By minimum, I mean that's what I expect the games to need on the PC to keep VRAM from being a limiting factor; obviously scaling back settings will work to a point. We are [H] here, after all, why not set the bar properly?
Your entire argument is based on a false premise; go back to the drawing board :rolleyes:

The last-gen and next-gen consoles are requiring developers to get creative with resources, because they STILL don't have anywhere near as much to play with as PCs have.

Also keep in mind that PC graphics cards have a secondary bank of memory they can pull resources from, quickly, at any time. It's called system RAM, and a GTX 780 has a 15 GB/s pipe directly to it (assuming you have PCIe 3.0). Consoles have no such luxury; that 4 or 5GB is all that developers have to work with for graphics resources, and that's all they can directly count on. They don't have a 16GB (or larger) secondary memory space to pull from; all they have is a painfully slow hard disk with read/write rates more in the neighborhood of 80 MB/s.

On the consoles, games will have to load what they need for the immediate game area, and nothing more. They will need to leave space available in memory to cache data from the hard disk (just in case the player decides to move to one of the available new areas surrounding themselves). If a developer wants to use the maximum amount of available RAM on a console, they will have to resort to using loading screens so they have a somewhat fixed memory footprint.
On a PC, the developer can go nuts and use almost the entire graphics memory space without fear. Any additional resources can be piled into system RAM and loaded on-the-fly from system RAM into video RAM with minimal delay (compared to streaming directly from a hard disk, this is a night-and-day difference).

Put simply, very little look-ahead needs to stay in video memory on a PC (it can live in system RAM for the most part). The shared memory space on the consoles puts them at a serious disadvantage because developers now have to chip away at video RAM if they want additional look-ahead buffer, rather than just dumping the extra data to system RAM.
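To put rough numbers on that (a sketch using the figures from this thread, ~15 GB/s over PCIe 3.0 and ~80 MB/s from a console hard disk; actual rates will vary):

```python
# Time to stream a 1 GB batch of assets from the two "secondary" stores.
asset_gb = 1.0
pcie_gb_per_s = 15.0     # system RAM -> video RAM over PCIe 3.0 x16 (rough figure)
hdd_mb_per_s = 80.0      # sustained read from a console hard disk (rough figure)

pcie_ms = asset_gb / pcie_gb_per_s * 1000
hdd_s = asset_gb * 1024 / hdd_mb_per_s

print(f"Over PCIe from system RAM: {pcie_ms:.0f} ms")   # ~67 ms
print(f"From the hard disk:        {hdd_s:.1f} s")      # ~12.8 s
```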
 
You win- please feel free to continue using 3GB cards for the next console generation, and ignore history outright. There's always a few in the crowd.
 
You win- please feel free to continue using 3GB cards for the next console generation, and ignore history outright. There's always a few in the crowd.
Ignore history outright? My expectations are perfectly in line with history. Consoles have always been memory constrained, and the new generation is no different. Hell, just look at the current consoles (the 360 and PS3)... they don't have nearly enough GPU horsepower for what developers are trying to do. Even on the Xbox 360, with only 512MB of RAM to work with, memory isn't as big of a concern as the GPU itself.

If you want to talk about history repeating itself, then I guarantee you a GTX 780's performance will be limited by its available GPU horsepower long before memory becomes a concern. :rolleyes:


Edit: And if you want to see this effect in action, take a look at the GTX 770 (which has both 2GB and 4GB versions available). Here's a showdown where 30 games are tested on both variants of the GTX 770: http://alienbabeltech.com/main/gtx-770-4gb-vs-2gb-tested/3/
There isn’t a lot of difference between the cards at 1920×1080 or at 2560×1600. We only start to see minimal differences at 5760×1080, and even so, there is rarely a frame or two difference. If we start to add even more AA, in most cases, the frame rates will drop to unplayable on both cards.
See that? Both the 2GB and 4GB cards ran out of raw power before the difference in video RAM capacity could make an appreciable difference.

A GTX 780 is roughly 30% faster than a GTX 770.
A GTX 780 has 50% more video RAM than the 2GB variant of the GTX 770.

That means a GTX 780 is even more likely than a GTX 770 to run out of GPU power before it runs out of RAM.
 
Like I said, you win!

You've made the idea that things will always stay the same your hallmark. I'm not trying to change that!
 
You're using false arguments and false references. For the record:

-You're using games built around today's consoles (which are stupidly RAM-limited) and ported to the PC as an example of how games will work on the PC once they're developed for the next-gen consoles
-You're claiming that, with ~5GB of memory available to games, they'll need >1GB for non-graphics-related assets, when the current consoles don't have 1GB of memory total, for everything!

My perspective is to not underestimate developers, especially artists. They'll take everything you give them and then a little more. RAM is cheap; there's no reason we can't have 8GB to 12GB on every gaming-grade card, and as such, there's no reason for developers porting to the PC not to build those assets into their games!

But please, nothing to see here. Put your head back in the sand; it's cooler there than it is out here :).
 
-You're using games built around today's consoles (which are stupidly RAM-limited) and ported to the PC as an example of how games will work on the PC once they're developed for the next-gen consoles
I used a huge sample of 30 games, some of which have very well-done PC ports that really do strain modern hardware (case in point, they were able to easily overwhelm a GTX 770 with many of them).

Even with these modern high-end PC games, games that the next-gen consoles cannot DREAM of running at the settings they used in the review, the raw computing power of the core was an issue long before RAM became a problem. Even at very high triple-monitor resolutions.

-You're claiming that, with ~5GB of memory available to games, they'll need >1GB for non-graphics-related assets, when the current consoles don't have 1GB of memory total, for everything!
Yup, exactly. What's your point?

They have 4 to 5GB of RAM to work with; that's their new baseline, and they're going to use it all. I never said they wouldn't. I'm sure they'll find plenty of ways to fill >1GB of RAM with non-graphical assets.

But, again, PCs have a separate bank of system RAM, which can be 16GB or larger, where all manner of assets (graphical or not) can be cached. On the PC, devs can load pretty much as much data as they want into system RAM without eating into video RAM at all.

Consoles only have that 4 to 5GB of unified memory; they have no secondary bank with a high-speed bus to defer to. This means a large portion of RAM will have to be used by the engine for non-graphical assets, and for caching data from disk in case it's needed (this would be done in system RAM on a PC).

The unified memory setup on the Xbox One and PS4 is a compromise, plain and simple. It allows the developer to decide how much RAM gets allocated to graphics vs. the rest of the game, rather than being stuck with a fixed value (which could lead to RAM going to waste). It allows for more efficient use of the limited available memory.

My perspective is to not underestimate developers, especially artists. They'll take everything you give them and then a little more.
Of course, but the new consoles don't offer much. Most games WILL NOT use anywhere near 4GB of the available memory for graphics assets because that RAM will be needed for other aspects of the game (a rolling disk cache being a huge one).

The only thing that will use that much RAM for graphics alone is a tech demo.

RAM is cheap; there's no reason we can't have 8GB to 12GB on every gaming-grade card
RAM is cheap, but the infrastructure required to support it is expensive (as far as board complexity goes). You're also looking at a larger, more expensive GPU if you want additional memory channels to take advantage of capacity AND bandwidth from all those extra chips. The costs of adding more RAM rack up quickly.

You'd be looking at some INSANELY expensive PCBs to get that much RAM on the graphics card alone (because you're doubling the number of chips)... and you still haven't explained why the graphics card ITSELF needs that much RAM.
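As a rough illustration of where the board complexity comes from (a sketch; it assumes 2 Gb GDDR5 chips on 32-bit channels, which is what the reference GTX 780 ships with, and clamshell mode to double capacity without widening the bus):

```python
# Why more VRAM isn't just "add more cheap RAM": chip count is tied to bus width.
bus_width_bits = 384        # GTX 780 memory bus
chip_width_bits = 32        # one GDDR5 chip per 32-bit channel
chip_capacity_mb = 256      # 2 Gb chips

chips = bus_width_bits // chip_width_bits             # 12 chips
base_gb = chips * chip_capacity_mb / 1024             # 3 GB

print(f"{chips} chips -> {base_gb:.0f} GB")                          # reference card
print(f"{chips * 2} chips (clamshell) -> {base_gb * 2:.0f} GB")      # double-sided PCB
# Going past that means higher-density chips or a wider bus, i.e. a bigger GPU die
# and/or a much more complex PCB.
```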

You completely ignored the fact that when a PC graphics card runs low on RAM, it has a 15 GB/s pipe to the gigantic system RAM bank where assets can be freely swapped... but when one of these consoles runs out of RAM, it has to start thrashing the hard disk at a rate closer to 80 MB/s. To avoid stuttering and loading screens, the consoles will need to cache data from disk into RAM ahead of time, but that chews through RAM that could have been used for graphics...

I'll reiterate one more time: that same scenario is no problem on a PC. That data can be cached into system RAM and will not reduce available video RAM. The advantage here goes to the PC with 3GB + 16GB of RAM, easily.

and as such, there's no reason for developers porting to the PC not to build those assets into their games!
What assets? The assets they had to design to fit into a <4GB envelope for the consoles?

They aren't going to magically get larger just because the game was ported to the PC. Developers will have to do extra work to generate assets that the consoles cannot handle if they want to push PCs.

But please, nothing to see here. Put your head back in the sand; it's cooler there than it is out here :).
You're the one ignoring actual evidence and benchmarks... and the fact that PCs have four times (or more) the total RAM to work with, not me...
 
You still don't get it- and that's okay! There are many more like you. Maybe they'll get it instead.

I'd go back and respond to your reply line-by-line, but then I'd just be repeating the obvious points that you can't bring yourself to see, that have already been said (or are really just that obvious). Can't help everyone :).
 
You haven't made a single point that holds water. You're the one not getting it.

I can throw more and more evidence at you, but you're just going to keep ignoring it...

You're continually claiming that a system with more/faster RAM will struggle due to games being designed for a system with less/slower RAM. That is nonsense, and you have yet to give any worthwhile reason otherwise.
 
You haven't made a single point that holds water. You're the one not getting it.

You're continually claiming that a system with more/faster RAM will struggle due to games being designed for a system with less/slower RAM. That is nonsense, and you have yet to give any reason otherwise.

If I'm reading your post here correctly, I think I see the reason for confusion, so I'll try to clear it up:

I'm claiming that we need more VRAM on current/future GPUs because developers will not only make use of the massive jump in available VRAM on the new consoles, using anywhere from ~3GB to ~5GB for graphics depending on the development budget and focus of the game, but will probably overshoot that mark considerably and just dial the graphics assets back to suit each platform they release the game on.

In reality, I fully expect games designed for the incoming consoles releasing in the next year or two on all platforms to be able to actually make use of even more VRAM- and that's why I set my 'minimum' at 6GB. That's how much memory, per GPU, I expect us to need to be able to turn all of the 'asset' settings all the way up; obviously, certain settings like AA and other intensive shader routines depend on both the amount of memory and a number of other variables. So my 'recommended' VRAM goes to 8GB and 12GB, for 128/256-bit and 192/384-bit memory controllers, respectively.

That gives games the room to employ all of their assets on the PC, and for PC gamers to turn up higher-fidelity settings and run at far higher resolutions than the consoles can muster.

Now, I'm not suggesting that these games won't run on current 1GB to 3GB GPUs; just that we'll have to dial back the settings more than we'll like.

Also, please know that I'm neither trying to insult you nor to be argumentative. I'm only standing by my point; please feel free to continue to pick at my ideas. In the setting of this forum, we have the opportunity to make extensive use of each other's knowledge, experience, and intelligence to really hammer out better ideas.

And last- I wanted to respond directly to your post above, but I literally am not sure what you're talking about- and it's not that I'm not trying to understand, just that I'm missing how that relates to something I've said, so please feel free to clarify when you get a chance :).
 
I'm claiming that we need more VRAM on current/future GPUs because developers will not only make use of the massive jump in available VRAM on the new consoles, using anywhere from ~3GB to ~5GB for graphics depending on the development budget and focus of the game
*BUZZER* Sorry, I'm going to have to stop you right there. You keep basing conclusions on the same false premise.

These consoles CANNOT use a full 5GB of RAM for graphical assets. That's the entirety of game-available RAM (and that includes requesting flexible RAM from the OS on the PS4, which may or may not be available at all times).

The only thing that will get anywhere close to 5GB of that RAM being allocated to graphics is a tech demo, not a game. There simply wouldn't be any room for a game left...

Even using 3GB for graphics alone strains believability. That's a massive waste for the weak GPU the consoles use, especially considering how little memory bandwidth they have. Remember, the Xbox One only has 68.3 GB/s of bandwidth between its RAM and the GPU, whereas the video RAM on the GTX 780 is all the way up at 288 GB/s (that's 4.2 times more bandwidth).

but will probably overshoot that mark considerably and just dial the graphics assets back to suit each platform they release the game on.
And as we've already seen with all 30 of the games in that review I posted (which includes fully PC-optimized titles like Metro: Last Light), even when a developer goes hog-wild, the GPU core runs out of steam before video RAM becomes a limiting factor.

In reality, I fully expect games designed for the incoming consoles releasing in the next year or two on all platforms to be able to actually make use of even more VRAM- and that's why I set my 'minimum' at 6GB.
Designed for consoles that have GPUs many times slower than what we're using, which are even less capable of using all that RAM before running out of steam? Designed for GPUs with hilariously little bandwidth feeding them?

Yeah, again, your argument isn't making much (read: any) sense... I guarantee you most of the RAM in these consoles is going to be geared towards cache, just like system RAM is on a PC. Preventing direct hits to the hard disk is going to be the biggest problem for them, and I can see them spending quite a bit of RAM on that.

Now, I'm not suggesting that these games won't run on current 1GB to 3GB GPUs; just that we'll have to dial back the settings more than we'll like.
All evidence points, once again, to the GPU in question being the limiting factor on performance, not video RAM.

Go ahead and compare a GTX 780 against a Titan; you won't see much difference between them in ANY game (even fully PC-centric ones like Metro: Last Light and Crysis 3). That's because the core is the limiting factor, not the RAM (though the Titan does have a slightly beefier core as well). Our graphics cards simply aren't fast enough to warrant that much RAM, and there is little to no evidence to the contrary.

I wanted to respond directly to your post above, but I literally am not sure what you're talking about
What's not to understand?

You're claiming a PC with more/faster RAM will struggle because games are being designed for consoles with less/slower RAM. I call that nonsense. In what world does a machine with 4 to 5GB of available RAM trounce a machine with 3GB + 16GB of available RAM?

Then take into consideration that the settings required to even begin to make RAM matter will cause the core to bog down to unplayable performance levels, making the entire point moot. Piling more RAM onto a card won't suddenly make it better (remember those GeForce FX 5200s with 1GB of video RAM? Yeah... pointless. Same idea here).
 
Try not to get hung up on ultra-specific details- I am trying to meet you in the middle here :).

I agree that few games will actually use close to 5GB of RAM for graphics, and that the average will likely be closer to 3GB or 4GB, but the capacity is definitely there. We can't know which games that we'll actually want to play will use that extra memory or not; but one thing we can be certain of is that games on the current consoles have to run their executables in stupidly small memory footprints, somewhere between 256MB and 512MB, so if you give a console game access to 5GB of memory for everything, using 4GB of it for graphics sounds pretty reasonable to me. The PC versions always use more, typically shipping with higher-detail assets, and that's where I'm coming from.

Now, while I appreciate the reference to current benchmarks, I can't really consider that as evidence here- not because it's not relevant or because there's some anomaly in the testing, but because all of these games have been developed as cross-platform games that must also be scaled down to the current generation of consoles, which means that their engines and assets were restricted in any number of ways. You can scale most of that stuff up, of course, but they're still 'current-gen' or 'bridge' games like Crysis 3 and BF4 are. None of them are really 'next-gen' games, developed entirely for the PS4/Xbox One/near-future PCs.

I'll leave the last point concerning VRAM usage vs. performance alone, as I've covered that elsewhere, except to say that while there are plenty of examples where turning the details up too high results in overwhelming the GPU before running out of VRAM, there are also examples of the opposite and there are innumerable things that developers can do to make use of extra memory without killing the framerate, even on the relatively anemic next-gen consoles.
 
I agree that few games will actually use close to 5GB of RAM for graphics, and that the average will likely be closer to 3GB or 4GB
How many times are you going to use that same falsity? No games will use 5GB for graphics alone. It will not happen. Only 4.5GB is guaranteed to be available at any given time (the additional 512MB has to be requested from the OS, and can be taken back if it's needed for background tasks).

Not all 4.5GB can be used for graphics, either. Some of that has to be used for the game itself. That's easily a few gigabytes shot, right there.

You'll be hard pressed to see more than 2GB used for graphics alone in most games. Maybe they can go wild with memory utilization during the cut-scenes (if gamers don't mind a MASSIVE loading time just to watch a non-interactive segment). But why go through all that trouble when a pre-rendered 1080p video takes less time to load and looks better than what the integrated GPU can pump out? Oh, right, a tech demo :p

And 2GB is still pretty insane for a GPU that (in the case of the Xbox One) is basically a Radeon HD 7770 with a horribly handicapped memory bus. You're basically arguing that memory is going to be used for a GPU that has no way to use all of those resources effectively.

Now, while I appreciate the reference to current benchmarks, I can't really consider that as evidence here- not because it's not relevant or because there's some anomaly in the testing, but because all of these games have been developed as cross-platform games that must also be scaled down to the current generation of consoles, which means that their engines and assets were restricted in any number of ways
A game that can easily bring a Titan to its knees, with full DirectX 11 support, is designed with consoles in mind? Riiiiiight...

Crysis 3 and Metro: Last Light might have console versions, but they were very obviously designed with PCs in mind first and foremost. The console versions look and perform like crap (which tends to happen when you design for a higher spec and then have to try squeezing it onto inadequate hardware). The console versions are simply pared-down cash grabs.

Those benchmarks are perfectly relevant, and it's a HUGE sample size. In every single case, the GTX 770's core was too slow for 2GB vs. 4GB to matter. The same thing is holding true for the GTX 780 and the Titan from what I've seen.

Memory capacity on modern high-end cards has long been in excess of what they're actually capable of using effectively. The new consoles do not suddenly invalidate this long-running trend (and in fact, are subject to it themselves).

there are also examples of the opposite and there are innumerable things that developers can do to make use of extra memory without killing the framerate, even on the relatively anemic next-gen consoles.
I pointed out 30 examples to the contrary. Care to post your examples?

How exactly would a game like Crysis 3 use more video RAM than it does now, to improve visuals, and without impacting performance whatsoever? Where's the example of that? :rolleyes:
 
So you agree that a console game could easily use 4GB for graphics- not that they will, but that they could. We're getting somewhere!

Now if you would try and think ahead instead of using current benchmarks of 'last-gen' or 'bridge' games, we'd really be getting somewhere :D.
 
So you agree that a console game could easily use 4GB for graphics- not that they will, but that they could. We're getting somewhere!
With the only case being a tech demo (meaning minimal/no user interaction), which is what I've been saying from the beginning... It's a pointless show-off that doesn't translate into any actual games. Ergo, irrelevant (unless you REALLY love playing 3DMark...).

No actual game is going to use 4GB for graphics alone, plain and simple. There's no point with a GPU that slow and with memory bandwidth that limited.

Now if you would try and think ahead instead of using current benchmarks of 'last-gen' or 'bridge' games, we'd really be getting somewhere :D.
Like I said, and gave multiple reasons for, those benchmarks are perfectly valid. Just because you choose to ignore them doesn't invalidate them. It just makes you willfully ignorant.

If you'd like to dismiss them, you need to have a valid basis for dismissal.

They tested games that have no trouble overwhelming a Titan, a GPU many times in excess of what any sane home user would own. I'm sorry, but that is impressively forward-thinking on the part of the developers as-is. Just because they managed to nerf them enough to get them to run on the 360 doesn't mean they're not next-gen titles.
 
If I want to run SLI, do the cards have to be exactly the same? Or can one be superclocked and the other a normal card?
 