NVIDIA Kepler GeForce GTX 680 Video Card Review @ [H]ardOCP

I meant camera shots. They show the blurring on 120 Hz LCDs compared to CRTs.
And? We're talking about the perception of smooth motion. How are you suggesting that 'blurring' detracts from said perception of smooth motion?

The only argument you could really have against LCDs is that the pixel response time varies based upon what intensity each pixel is changing from and what intensity each pixel is changing to, but that's largely (entirely) negated by the content you're typically drawing for a game.

Yes, but it is not a plus, nor does it make for a smoother, more fluid picture.
You're agreeing and disagreeing at the same time. I don't know how to respond to that.
 
And? We're talking about the perception of smooth motion. How are you suggesting that 'blurring' detracts from said perception of smooth motion?
Because they are incapable of showing 120 distinct frames per second, thus providing less fluid motion. Reacting slowly and leaving residual ghost images does not make motion more fluid beyond a certain point (it helps at very low frame rates like 24 fps in movies, but becomes a big drawback at higher frame rates).

Another reason why LCDs are inferior for motion is explained here:
www.avsforum.com/avs-vb/showthread.php?t=960548
msdn.microsoft.com/en-us/windows/hardware/gg463407.aspx#EEH

On second thought, I guess smoothness or fluidity could be the wrong term here. Let's just say that the picture on LCDs is worse when it comes to motion.
 
By the way, does anyone know what the rules are for running Surround? Are they the same as for Eyefinity now, or is there a way to connect three screens to three cards just like you can connect them to one? Do you get screen tearing like in Eyefinity when you mix DVI and DP?
 
Nice card. Maybe we will finally see a price war that drops the prices of well-performing cards into a reasonable range. Currently it seems there is about $200 extra tacked onto the top end. :cool:
 
Impressive. Love the GPU Boost Technology. I will try to get one of these.

Good Review.
 
Good review guys. Been reading your reviews and articles since the Radeon X800 days. That said, this is an incredible architecture. I actually believe Nvidia when they say that this was originally intended to be a midrange card. A few reasons:

1. Its 256-bit bus. This actually gives it a very slight decrease in memory bandwidth compared to the GTX 580 (quick math after this list).

2. Only 2GB of memory. Now, two gigs is plenty, but it isn't a huge step up from the previous high end.

3. The fact that it runs cool, quiet, and power-efficient. This seems out of character compared to the big, power-hungry designs of the past.
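On point 1, a quick back-of-the-envelope comparison using the commonly quoted memory clocks (treat the exact figures as approximate):

Code:
# Rough memory bandwidth comparison: bus width (bits) x effective memory
# data rate (MT/s). Clock figures are the commonly quoted specs, not measured.
def bandwidth_gb_s(bus_bits, effective_mt_s):
    return bus_bits / 8 * effective_mt_s / 1000  # bytes per transfer x rate

print(f"GTX 680 (256-bit, 6008 MT/s): {bandwidth_gb_s(256, 6008):.1f} GB/s")
print(f"GTX 580 (384-bit, 4008 MT/s): {bandwidth_gb_s(384, 4008):.1f} GB/s")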

Also, if it is true that this card is a midrange Kepler (which I believe it is), then Nvidia must be laughing all the way to the bank. Why? Because this card was meant to compete in the $300 - $400 space. And now, because of what the competition offers, they are able to charge $500 for it - and they get rewarded while doing so! That said, I hope this starts a price war soon. It's a good time to be a PC gamer.
 
If their midrange chip (if that is what this is and nVidia was not trolling AMD :p) was designed for the $300 price space and it competes with AMD's $500 card, I say let them laugh all the way to the bank for a while. The market will allow high prices for a while, even if I don't like it, and maybe they'll turn that major profit back around into future development, to the benefit of future architectures!
 
The simple fact that this chip was designated GK104 lends credence to the speculation that it was originally a mid-range chip. Up until now their high-end stuff has been xx110 or xx100.
 
I can't show you in this review, but if you read other review articles on this very site, you'll find situations where [H] themselves attributed drops in frame rates for the GTX cards to textures exceeding the card's memory. I don't know why the tests they just did at Eyefinity/Surround resolutions didn't reveal a problem. Perhaps they didn't push the AA high enough? In any case, unless NVidia is doing some on-the-fly texture compression and decompression which somehow allows more than 2GB of textures to be stuffed into 2GB, this IS a documented issue. And in the event that NVidia HAS figured out how to squeeze larger textures into smaller amounts of video memory with no performance penalty or loss in quality, then I'd love for [H] to let us hear about how this marvellous new technology works.

In the meantime, this isn't really a debatable point in the absence of such magical technology. It's like asking me to prove that 2.5 gallons of water won't fit into a 2-gallon container. The proof is self-evident.
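For a rough sense of scale, here's a back-of-the-envelope sketch of how quickly render targets alone eat into 2GB at surround resolutions. The buffer layout is assumed (32-bit color and depth per sample plus a few full-resolution post-processing buffers), and it ignores driver overhead, compression, and the textures themselves:

Code:
# Rough estimate of render-target memory at a surround resolution with MSAA.
# Assumed layout: multisampled color + depth/stencil, a resolved backbuffer,
# and a couple of post-processing buffers. Real engines differ considerably.
width, height, msaa = 5760, 1200, 4
pixels = width * height

color = pixels * 4 * msaa            # 32-bit color, one value per MSAA sample
depth = pixels * 4 * msaa            # 32-bit depth/stencil, multisampled
resolve_and_post = pixels * 4 * 3    # resolved backbuffer + two post buffers

total_mb = (color + depth + resolve_and_post) / 1024**2
print(f"Render targets alone: ~{total_mb:.0f} MB before a single texture")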

You should definitely check out the SLI review, because the GTX 680's 2GB of VRAM simply wasn't a problem. I hadn't seen this card's 2GB of VRAM be a problem before, and it turned out that it wasn't. That's why I originally questioned your reasoning.
 
You should definitely check out the SLI review, because the GTX 680's 2GB of VRAM simply wasn't a problem. I hadn't seen this card's 2GB of VRAM be a problem before, and it turned out that it wasn't. That's why I originally questioned your reasoning.

I'm not doubting the reviews or the performance of the 680. I'd love to have a pair in SLI myself.

I'm running a 6970 and a 6950 in CrossFire, each with 2GB on board. The fact is, 2GB and 3GB video memory configurations have been around for a while (a while being defined as the previous generation).

The 2GB limit isn't a huge deal right now - but what about any AAA titles coming out late this year or in 2013? For the extra cost of $50 worth of memory, a given card (be it AMD or nvidia) might get an extra 12 months of life for some people.
 
I read somewhere that a vast majority of gamers have 1GB or less. Maybe it was here? Not to mention that with the flagship cards, AMD has 3GB of RAM and nVidia has 2GB, and nVidia meets or beats AMD with two-thirds the RAM; there is more to memory than just the raw quantity.

Kyle said:
Going forward, don't know. All I can tell you right now is NVIDIA is doing a much more efficient job in terms of memory footprint and bandwidth usage.

Sure, people might not think you need more than 640kB of RAM (:D), but I find it difficult to believe that a 2GB card could not last as long as a 3GB card, given that the smaller-memory card seems to have better memory management. That being said, if you've got the extra $50 and the card has a longer warranty, maybe it is worth it. Otherwise your card will be dead (or not good enough) in 3-5 years anyway, and that $50 goes towards something then.
 
In December, my system was a Core i7 920, OC to 3.5GHz, and dual GTX470 cards. My idle power was sitting at 252 watts, and 596 watts while playing WoW.

I upgraded, thanks to my tax return, to a Core i7 2600K and a GTX 680. I just put the GTX 680 in on Tuesday evening. My idle power dropped to 126 watts, down from 200 watts with the 2600K and the dual GTX 470s, and to 333-343 watts while playing WoW, down from 496 watts with the 2600K and the dual GTX 470s.

Through all these upgrades, I saw a 40-50% increase in performance with the CPU upgrade, and another 20% with the GPU upgrade.

I have never, in 26 years of custom PC building, had my power usage go down while performance went up. This GPU is a masterwork by Nvidia. They really nailed it. (Intel gets credit for SB, too; that's an awesome chip.)
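Just for fun, a rough sketch of what that idle-power drop is worth on the electric bill, assuming 8 idle hours a day and $0.12/kWh (both numbers are guesses, plug in your own):

Code:
# Rough yearly value of the idle-power difference reported above.
# Assumes 8 hours/day at idle and $0.12 per kWh; adjust to taste.
old_idle_w, new_idle_w = 252, 126
idle_hours_per_year = 8 * 365
kwh_saved = (old_idle_w - new_idle_w) * idle_hours_per_year / 1000
print(f"~{kwh_saved:.0f} kWh/year saved, roughly ${kwh_saved * 0.12:.0f}/year")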
 
I read somewhere that a vast majority of gamers have 1GB or less. Maybe it was here? Not to mention that with the flagship cards, AMD has 3GB of RAM and nVidia has 2GB, and nVidia meets or beats AMD with two-thirds the RAM; there is more to memory than just the raw quantity.

I go for the goal of an 8:1 ratio of main memory to video memory, or as close as I can get. It's been working well for me for 26 years. (4MB main and 512k video in my first system, 8MB and 1MB in my first rebuild, etc.) I propose it as something people might want to aim for.
 
In December, my system was a Core i7 920, OC to 3.5GHz, and dual GTX470 cards. My idle power was sitting at 252 watts, and 596 watts while playing WoW.

I upgraded, thanks to my tax return, to a Core i7 2600K and a GTX 680. I just put the GTX 680 in on Tuesday evening. My idle power dropped to 126 watts, down from 200 watts with the 2600K and the dual GTX 470s, and to 333-343 watts while playing WoW, down from 496 watts with the 2600K and the dual GTX 470s.

Through all these upgrades, I saw a 40-50% increase in performance with the CPU upgrade, and another 20% with the GPU upgrade.

I have never, in 26 years of custom PC building, had my power usage go down while performance went up. This GPU is a masterwork by Nvidia. They really nailed it. (Intel gets credit for SB, too; that's an awesome chip.)

My system went from about 200W idle to 95W idle when I upgraded from an i7-920 / GTX 275 SLI to the i7 2600K / GTX 570 combination. Even though GPU performance was probably about the same, I'll gladly take a single-card solution and lower power consumption. But yeah, the last few years have been exciting in terms of performance per watt.

I go for the goal of an 8:1 ratio of main memory to video memory, or as close as I can get. It's been working well for me for 26 years. (4MB main and 512k video in my first system, 8MB and 1MB in my first rebuild, etc.) I propose it as something people might want to aim for.

While that might be a good ratio for you, it doesn't exactly correlate with any performance gains.
 
I go for the goal of an 8:1 ratio of main memory to video memory, or as close as I can get. It's been working well for me for 26 years. (4MB main and 512k video in my first system, 8MB and 1MB in my first rebuild, etc.) I propose it as something people might want to aim for.

I would argue that ratio is a bit high these days. Most people probably use 4-8 GB of system RAM, and that's really all you need. By your ratio you would be getting 24 GB of RAM for a 3 GB VRAM card. That seems considerably excessive.

4:1 is probably a safe ratio at the moment.
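Side by side, here's what the two rules of thumb imply at the VRAM sizes being discussed (purely illustrative arithmetic):

Code:
# What the 8:1 and 4:1 system-RAM-to-VRAM rules of thumb suggest for
# the video card memory sizes mentioned in this thread.
for vram_gb in (1, 2, 3):
    print(f"{vram_gb}GB VRAM -> 8:1 says {vram_gb * 8}GB RAM, "
          f"4:1 says {vram_gb * 4}GB RAM")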
 
24 gigs of ram sounds very sexy, but I agree. 4:1 sounds like a better ratio.
 
I go for the goal of an 8:1 ratio of main memory to video memory, or as close as I can get. It's been working well for me for 26 years. (4MB main and 512k video in my first system, 8MB and 1MB in my first rebuild, etc.) I propose it as something people might want to aim for.

How are the two even related? People who don't game but run VMs with 24GB of RAM would, by that ratio, need a 3GB video card, while people who ONLY game don't need 16-24GB of RAM but do need a 2GB video card.

Your ratio makes no sense at all :p VRAM and system RAM have no relation to how or when they are used.
 
How are the two even related? People who don't game but run VMs with 24GB of RAM would, by that ratio, need a 3GB video card, while people who ONLY game don't need 16-24GB of RAM but do need a 2GB video card.

Your ratio makes no sense at all :p VRAM and system RAM have no relation to how or when they are used.

On the contrary, they are used quite a bit. I currently have 16GB of RAM and 2GB on my graphics card. Sure, a game, especially World of Warcraft, doesn't use a ton of memory, but the extra caching helps immensely. I zone into dungeons in 10-12 seconds, ahead of anyone else. This allows me to do certain things before people completely zone in. (Like buffing the group as the characters appear, but before they go running off in all directions.)

Sure, the 3GB graphics cards are available, and that ratio would demand 24GB, but both are overblown by about the same proportion, which is in line with the ratio I suggest. I wouldn't think I'd need 3GB of video memory anyway, at least for now. Both 3GB of video memory and 24GB of system RAM would stay overkill for about the same amount of time, would become mainstream at about the same time, and would probably even become antiquated at about the same time.
 
Sure, the 3GB graphics cards are available, and that ratio would demand 24GB, but both are overblown by about the same proportion, which is in line with the ratio I suggest. I wouldn't think I'd need 3GB of video memory anyway, at least for now. Both 3GB of video memory and 24GB of system RAM would stay overkill for about the same amount of time, would become mainstream at about the same time, and would probably even become antiquated at about the same time.

With the 384-bit bus, AMD only had the choice of 768MB, 1.5GB, 3GB, or 6GB of VRAM. They already knew that 1.5GB was not enough for people running Eyefinity with AA in demanding games. You can look at any 580 SLI setup and see a pretty decent drop-off in performance at these resolutions.
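Those are the only sizes because a 384-bit bus is normally populated with twelve 32-bit GDDR5 chips, so capacity comes in steps of twelve times the per-chip density. A quick sketch (assuming the usual chip densities):

Code:
# Why a 384-bit card comes in 768MB / 1.5GB / 3GB / 6GB steps:
# twelve 32-bit GDDR5 chips, so total capacity = 12 x per-chip density.
chips = 384 // 32                       # 12 chips on a 384-bit bus
for density_mb in (64, 128, 256, 512):  # 512Mbit, 1Gbit, 2Gbit, 4Gbit chips
    total_mb = chips * density_mb
    print(f"{density_mb}MB chips -> {total_mb}MB total ({total_mb / 1024:g} GB)")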

Heck, with my current 6870 1GB CrossFire setup, I have the GPU power to run a 30" display at 2560x1600, but with only 1GB of VRAM, performance drops off sharply in most games at that resolution.

In the case of tri-SLI and quad-SLI, even the mighty GTX 680 gets stomped pretty hard by the 7970s in tri-fire/quad-fire at Eyefinity resolutions with things like 4x MSAA and higher.

For just SLI vs CrossFire, the 680's performance is quite acceptable, as even then you don't have the GPU power to run more than 2x MSAA + FXAA. Look at the performance data for CrossFire 7970 vs 680 SLI in Battlefield 3 64-player multiplayer. More than 2GB is needed.
 
On the contrary, they are used quite a bit. I currently have 16GB of RAM and 2GB on my graphics card. Sure, a game, especially World of Warcraft, doesn't use a ton of memory, but the extra caching helps immensely. I zone into dungeons in 10-12 seconds, ahead of anyone else. This allows me to do certain things before people completely zone in. (Like buffing the group as the characters appear, but before they go running off in all directions.)

Sure, the 3GB graphics cards are available, and that ratio would demand 24GB, but both are overblown by about the same proportion, which is in line with the ratio I suggest. I wouldn't think I'd need 3GB of video memory anyway, at least for now. Both 3GB of video memory and 24GB of system RAM would stay overkill for about the same amount of time, would become mainstream at about the same time, and would probably even become antiquated at about the same time.

Do you run WoW off an SSD? Your system RAM has little effect on load times. I would consider 10 - 12 sec rather slow.

I actually ran out of VRAM on my 460 SLI setup in NVSurround in places like Stormwind so this upgrade was needed for me.
 
Look at the performance data for CrossFire 7970 vs 680 SLI in Battlefield 3 64-player multiplayer. More than 2GB is needed.

Take a look at Kyle's review. With less RAM, the 2GB GTX 680 simply outperforms the 3GB 7970. I don't know if a 4GB GTX 680 would make a difference. At this point it's probable that nVidia has better memory management to get around the lower VRAM quantity.

There is one interesting game to look at right now for video card memory usage, and that is Battlefield 3 multiplayer. There is a big difference between the VRAM usage in single player and multiplayer. When we cranked this game up in a 64 player server at the highest in-game settings we saw it get near to 3GB of VRAM usage on Radeon HD 7970 at 4X AA at 5760x1200. This game seems certainly capable of maximizing VRAM usage on video cards in multiplayer in NV Surround or Eyefinity resolutions. It makes us really want to try out 4GB, or higher, video cards. A couple of 4GB GTX 680 video cards are looking real interesting to us right now in this game, and from what rumors we have heard, Galaxy is very likely to make this happen for us.

(...)


Personally speaking here, when I was playing between GeForce GTX 680 SLI and Radeon HD 7970 CrossFireX, I felt GTX 680 SLI delivered the better experience in every single game. I will make a bold and personal statement; I'd prefer to play games on GTX 680 SLI than I would with Radeon HD 7970 CrossFireX after using both. For me, GTX 680 SLI simply provides a smoother gameplay experience. If I were building a new machine with multi-card in mind, SLI would go in my machine instead of CrossFireX. In fact, I'd probably be looking for those special Galaxy 4GB 680 cards coming down the pike. After gaming on both platforms, GTX 680 SLI was giving me smoother performance at 5760x1200 compared to 7970 CFX. This doesn't apply to single-GPU video cards, only between SLI and CrossFireX.
 
Take a look at Kyle's review. With less RAM, the 2GB GTX 680 simply outperforms the 3GB 7970. I don't know if a 4GB GTX 680 would make a difference. At this point it's probable that nVidia has better memory management to get around the lower VRAM quantity.

As has been said earlier, the Frostbite 2 engine dynamically changes LOD depending on available VRAM. This is why the 1.5GB and 3GB 580s usually perform similarly in BF3. There's a YouTube video from GeForce LAN 2011 last year where a DICE developer stated this. Of course, the difference is minimal and hardly a reason to buy a bigger-VRAM card. The user experience matters more, and the 680 does just fine.
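Purely as an illustration of the general idea (this is not DICE's code, just a hypothetical sketch of budget-driven streaming): the engine can bias mip levels against whatever VRAM budget it detects, so a smaller card quietly streams slightly lower-resolution assets instead of falling off a performance cliff.

Code:
# Hypothetical sketch of VRAM-budget-driven texture streaming, in the spirit
# of what's described above for Frostbite 2. Numbers and structure are made
# up for illustration; the real engine is far more sophisticated.
def pick_mip_bias(vram_budget_mb, resident_texture_mb):
    """Return an extra mip bias (0 = full resolution) based on budget pressure."""
    usage = resident_texture_mb / vram_budget_mb
    if usage < 0.80:
        return 0  # plenty of headroom: stream full-resolution mips
    if usage < 0.95:
        return 1  # getting tight: drop one mip level on new streams
    return 2      # nearly full: drop two mip levels and evict cold textures

for budget_mb in (1536, 2048, 3072):  # 1.5GB, 2GB, 3GB cards
    print(f"{budget_mb}MB budget -> mip bias {pick_mip_bias(budget_mb, 1700)}")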
 
As has been said earlier, the Frostbite 2 engine dynamically changes LOD depending on available VRAM. This is why the 1.5GB and 3GB 580s usually perform similarly in BF3. There's a YouTube video from GeForce LAN 2011 last year where a DICE developer stated this. Of course, the difference is minimal and hardly a reason to buy a bigger-VRAM card. The user experience matters more, and the 680 does just fine.

Yep. Wish more people understood that BF3 does dynamic caching and management of VRAM. It works beautifully.
 
Probably the only way would be to test with High texture settings, which is supposed to disable the feature.

I wonder how many benchmarkers are going to be caught out by this? Because if the card is silently stepping down image quality, then comparative benchmarks aren't going to be valid.
 
I wonder how many benchmarkers are going to be caught out by this? Because if the card is silently stepping down image quality, then comparative benchmarks aren't going to be valid.

It would be nice if there was a way to selectively set the amount of VRAM available so you could test and see if it makes a performance difference. So take a 7970 and test it with Ultra and 3GB, then limit the card to 2GB and see if the performance changes. If it is loading fewer textures, it might be faster with 2GB than it is with 3GB. That could actually be silently helping the 680, if that is in fact what happens.
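There's no official switch for that as far as I know, but one crude way to approximate it (a hypothetical workaround, assuming pyopencl is installed and the card has an OpenCL driver) is to pin a dummy buffer in VRAM before launching the game. Drivers may allocate lazily or page the buffer out under pressure, so treat any results with caution:

Code:
# Crude sketch: "hide" ~1GB of VRAM by holding a dummy OpenCL buffer while
# the benchmark runs. Not a supported feature; the driver may still evict
# or page this allocation, so it only roughly simulates a smaller card.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
reserve = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=1024**3)
# Touch the buffer so the driver actually commits the memory on the device.
cl.enqueue_copy(queue, reserve, np.zeros(1024**3, dtype=np.uint8)).wait()
input("~1GB of VRAM reserved; run the benchmark, then press Enter to free it.")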
 
[attached benchmark chart]


Just like what I said... the 7970s pull away when the settings go up.

Tri-fire and quad-fire show this clearly.

Another thing I didn't exactly like about the [H] SLI comparison is how they tested the idle wattage. Everyone knows the monitors have to be off in order to take full advantage of ZeroCore, so why did they test with the monitors on and state that ZeroCore was working? The fan on the second card will stop running if it's not under load; that doesn't confirm ZeroCore is working. ZeroCore only comes into play when the monitors are OFF. Even AMD's slides say this.
 
^ We tested fairly with the displays all turned on.

The point was to test idle power in a typical scenario where a user has left their PC for some time with the monitors on, since most people leave their displays on.

It's a valid test, and you can't deny the fact that the 680 SLI is more power efficient at idle with the monitors turned on; this is a factual difference users should know about.
 
The point was to test idle power in a typical scenario where a user has left their PC for some time with the monitors on, since most people leave their displays on.

Most people's displays automatically go to sleep/standby. But by all means, keep running those 50+ watt LCDs.

I see the point of your comparison, but these numbers would affect me none, as my displays automatically go into standby/sleep after 15 minutes. So this test would only apply for those first 15 minutes. It still doesn't explain why they explicitly said that ZeroCore was working just because one of the fans wasn't spinning. ZeroCore only works with the monitors off or in standby.
 
I see the point of your comparison, but these numbers would affect me none, as my displays automatically go into standby/sleep after 15 minutes. So this test would only apply for those first 15 minutes. It still doesn't explain why they explicitly said that ZeroCore was working just because one of the fans wasn't spinning. ZeroCore only works with the monitors off or in standby.

I agree a "long term idle" comparison with monitors off should have been included, although according to AnandTech's 7970 review "long idle" was just 10 watts lower than "regular idle", so if people REALLY cared they'd use sleep or hibernate anyway.
 
I thought all [H] members ran nvidia's 3D geoforms screensaver anyways? :D
 
I would argue that ratio is a bit high these days. Most people probably use 4-8 GB of system RAM, and that's really all you need. By your ratio you would be getting 24 GB of RAM for a 3 GB VRAM card. That seems considerably excessive.

4:1 is probably a safe ratio at the moment.

There is no relation between system RAM and VRAM... totally unrelated.
 
I want to see how these 2GB cards perform under GTA IV with iCEnhancer. That game with that mod is probably the most demanding thing you can run on a video card today. Maybe except for the original Crysis all modded up.
 
You obviously haven't played ut2k4 or ut3 with Mr. Pant's Excessive Overkill mod...load up all the bots and watch your PC grind to a halt. ;)
 
I want to see how these 2GB cards perform under GTA IV with iCEnhancer. That game with that mod is probably the most demanding thing you can run on a video card today. Maybe except for the original Crysis all modded up.

GTA IV is the most poorly coded console port of all time. That doesn't help.
 
So is anyone going to talk about how reviewer cards may turbo up a different amount than the cards people are receiving? Didn't the [H] get a card that went over 1100 MHz by default? Though I can't find the quote now. Shouldn't reviewer cards be locked to a 1058 MHz peak, since that's all a buyer is guaranteed to get?
 
So is anyone going to talk about how reviewer cards may turbo up a different amount than the cards people are receiving? Didn't the [H] get a card that went over 1100 MHz by default? Though I can't find the quote now. Shouldn't reviewer cards be locked to a 1058 MHz peak, since that's all a buyer is guaranteed to get?

Mine goes to 1110 MHz stock. There isn't a way (that I know of) to lock the clocks, so there really isn't any way to make the test completely repeatable. All they can do is report the speed their review card went to, I guess.
 
So is anyone going to talk about how reviewer cards may turbo up a different amount than the cards people are receiving? Didn't the [H] get a card that went over 1100 MHz by default? Though I can't find the quote now. Shouldn't reviewer cards be locked to a 1058 MHz peak, since that's all a buyer is guaranteed to get?

I think it depends on the game.
In PC Perspective's live video review recap with Nvidia's director of marketing Tom Petersen, their GTX 680 was boosting to 1110 MHz in BF3 with normal settings.
It's around the 31 minute mark of the video:
http://www.pcper.com/news/Graphics-Cards/PC-Perspective-Live-Review-Recap-NVIDIA-GeForce-GTX-680
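To put those numbers in perspective, here's the worst-case skew if a review sample boosts to 1110 MHz while a retail card only hits the guaranteed 1058 MHz boost, assuming performance scales roughly with core clock (which is optimistic):

Code:
# Upper bound on how much a higher-boosting review sample could inflate
# results, assuming performance scales linearly with core clock.
base_mhz, guaranteed_boost_mhz, review_boost_mhz = 1006, 1058, 1110
print(f"vs guaranteed boost: {(review_boost_mhz / guaranteed_boost_mhz - 1) * 100:.1f}% higher clock")
print(f"vs base clock:       {(review_boost_mhz / base_mhz - 1) * 100:.1f}% higher clock")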
 