NVIDIA Kepler GeForce GTX 680 Overclocking Review @ [H]

FrgMstr

NVIDIA Kepler GeForce GTX 680 Overclocking Review - We evaluate the overclocking performance of the new NVIDIA GeForce GTX 680 video card. Can we squeeze any more performance out of the 680 than GPU Boost is already delivering on its own? We compare to a standard GeForce GTX 680 and Radeon HD 7970, and we also compare both video cards at their highest overclocks head-to-head.
 
great review, 1 comment:

my Galaxy cards had the same default voltage, but the EVGA tool let me set the voltage to 1.175 (if I push the slider to 1.2, it just snaps back to 1.175). I'm wondering if other cards have different voltage limits.
 
great review, 1 comment:

my Galaxy cards had the same default voltage, but the EVGA tool let me set the voltage to 1.175 (if I push the slider to 1.2, it just snaps back to 1.175). I'm wondering if other cards have different voltage limits.

Here is some insight on that voltage slider which is worth reading for sure.

One note on the GPU voltage slider: EVGA's tool names it poorly. In NVIDIA reference documents it is called “GPU voltage minimum.” It effectively sets a floor under the GPU voltage; the software will continue to scale voltage up when the boost algorithm determines that it will help. It is primarily there to help with memory overclocking, believe it or not. As you increase memory frequency, it may be necessary to prevent the GPU voltage from going too low. Low GPU voltages affect the logic that interfaces to the memory and can lead to unstable memory when the GPU clocks down. It is not super obvious.

We would not expect adjusting the voltage slider to affect the max GPU overclock very much. Instead it should mostly just raise the power consumed – and make higher memory overclocks possible in some cases.
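
To make the "floor, not a cap" idea concrete, here's a minimal Python sketch of that behavior; the values and function name are illustrative only, not NVIDIA's actual boost algorithm:

```python
# Minimal sketch of a "GPU voltage minimum" floor (illustrative values only,
# not NVIDIA's actual boost algorithm).

def apply_voltage_floor(boost_requested_v, voltage_min=1.175):
    """The slider only raises the lower bound; boost can still go higher."""
    return max(boost_requested_v, voltage_min)

# Boost asks for a low voltage in a low clock state:
print(apply_voltage_floor(1.025))  # -> 1.175, floor kicks in (helps memory OC stability)

# Boost asks for a higher voltage under load:
print(apply_voltage_floor(1.212))  # -> 1.212, unchanged; the slider is not a cap
```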
 
I am curious why this review was done on a PCIe 2.0 system and not a PCIe 3.0 system. Could you benchmark this product on a PCIe 3.0 system and show us the comparison to the 2.0 system, so that we may evaluate the value of using it in an LGA2011-based system?

Thanks!
 
Thanks for the review guys. This is by far the most important of the reviews to me, as I'll be throwing whatever I buy under water as soon as I verify it's functional. Looks like I'll be going team green after a long stint on team red (and for some reason, I almost feel guilty!).

Thanks again!
 
I am curious why this review was done on a PCIe 2.0 system and not a PCIe 3.0 system. Could you benchmark this product on a PCIe 3.0 system and show us the comparison to the 2.0 system, so that we may evaluate the value of using it in an LGA2011-based system?

Thanks!

I'm almost certain this would make no difference. I went from a P8P67 board to a Gen3 P8Z68 and throughput isn't different at all; we're not even close to saturating the bandwidth of PCIe 2.0 yet. Also, my understanding is that PCI Express 3.0 can't be utilized properly without an IB processor anyway. I could be wrong on that last part, however. It's been a while since I've read up on it - if anyone can clarify that last part.
 
As soon as one of these comes back in stock on Newegg, I'm going to find out how high one of these will go when modded with a Corsair H80 for cooling :p
 
Good review guys, nice to see the cards competing well with each other. I'm glad to see the 680 still competes well with an OC'd 7970. I was thinking that my card would be easily exceeded by an OC'd AMD card. My only buyer's remorse now is the low voltage cap on the 680. I'm able to do +150 on the core and it seems to be stable so far, and I've seen my 680 run over 1250MHz consistently.

Looks like I'll have to sell my card once some custom PCBs are made. Being able to push the voltage as high as a Radeon would be sweet, I'm sure.

*edit* Hm, that is, if higher voltages even help the max OC much, after reading the first few posts here.
 
I'm almost certain this would make no difference. I went from a P8P67 board to a Gen3 P8Z68 and throughput isn't different at all; we're not even close to saturating the bandwidth of PCIe 2.0 yet. Also, my understanding is that PCI Express 3.0 can't be utilized properly without an IB processor anyway. I could be wrong on that last part, however. It's been a while since I've read up on it - if anyone can clarify that last part.

It can on SB-E/X79 systems. My 7970 utilizes PCIe 3.0, but Nvidia has opted not to support the PCIe 3.0 implementation in X79 on the 680 for some reason. They only support it on upcoming Ivy Bridge systems. Don't know why.

For single cards running in x16 mode PCIe 3.0 is currently overkill. You won't notice a difference.

Where it would be useful is in multi-card setups, or when installing another card (sound card, RAID card, etc.) forces an x16 slot to switch down to x8 or x4 mode. In these cases, an x4 3.0 slot gets performance as if it were an x8 2.0 slot, and an x8 3.0 slot gets performance as if it were an x16 2.0 slot.

In the future, as cards get faster, we will likely get to the point where PCIe 3.0 will be useful in x16 mode, but for now, we are not there.
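
For anyone curious about the rough math behind those equivalences, here's an illustrative Python sketch; the per-lane figures are just the published line rates minus encoding overhead, and the function names are my own:

```python
# Rough per-direction bandwidth math behind the "x8 3.0 ~ x16 2.0" rule of thumb.
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding    -> ~500 MB/s per lane
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~985 MB/s per lane

PER_LANE_MB_S = {
    "2.0": 5e9 * (8 / 10) / 8 / 1e6,     # ~500 MB/s
    "3.0": 8e9 * (128 / 130) / 8 / 1e6,  # ~985 MB/s
}

def link_bandwidth_mb_s(gen, lanes):
    """Approximate one-direction link bandwidth in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

print(link_bandwidth_mb_s("2.0", 16))  # ~8000 MB/s
print(link_bandwidth_mb_s("3.0", 8))   # ~7877 MB/s, close to a 2.0 x16 link
print(link_bandwidth_mb_s("3.0", 4))   # ~3938 MB/s, close to a 2.0 x8 link
```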
 
Are there any free testing tools that can reliably push clocks to the max in a way that is easily reproducible? The OC Scanner tool from EVGA incorporated into Precision X seems a bit lacking...

EDIT: erm, might be because adaptive VSync was forced on for some reason. Whoops. Regardless, is there anything better?
 
Nice review, looks like it's a decent overclocker. Will be interesting to see if anyone can push the voltages more at a later time and go from there.
 
I am curious why this review was done on a PCIe 2.0 system and not a PCIe 3.0 system. Could you benchmark this product on a PCIe 3.0 system and show us the comparison to the 2.0 system, so that we may evaluate the value of using it in an LGA2011-based system?

Thanks!

In LGA2011 systems you're still limited to PCIe 2.0 due to a restriction in the NVIDIA driver. You can't run 3.0 unless you're on LGA1155 or use a registry hack (which causes instability for some people). And yes, for a single card, for all intents and purposes there's no difference.
 
So I guess both my EVGA cards are great overclockers, as I can run 1300MHz core on both in SLI with no problems. They run a little hot, so I am switching to aftermarket air coolers, but I can benchmark and play BF3 at 1316MHz core (+180 offset) without issue.
 
Yup, seems like the OC tested was pretty average. Props to whoever's responsible for not sending/using a cherry-picked card.
 
Yeah, I felt like the overclocked speeds used were fair. Would be kind of silly to have used an unrealistic overclock on either card. Kudos to you guys for that.
 
Nice review guys. Thanks for taking a bit of extra time to test several games and OC both the 7970 and the 680.

I'm really liking the auto-overclocking the card does, right out of the box. This is a great feature, quite innovative.

Another small article, showing how much OC'ing helps with 5760x1200 performance would be sweet :)
 
Very nice, it looks like I may be headed back to team blue after all...

From the article though:
On one hand we can appreciate that the GTX 680 is nearer to its maximum threshold out-of-the-box, giving you the best possible performance. Yet on the other hand we can also appreciate AMD giving add-in-board manufacturers more room to play with for overclocking specific products, or give end-users more options in terms of overclock potential.

I don't see how we can "appreciate" this from AMD, and it seems especially bad for end users. For the manufacturer, I see it as both bad and good: bad because AMD is slower out of the box than NVIDIA and more expensive, good because you can now put out "OC Edition" cards and charge $50 more for them without breaking a sweat. Either way, I think the end user loses out with respect to choice and price. As for the end user, excluding the above, it looks like AMD has the potential to be near its maximum threshold out of the box but simply chooses not to be (so manufacturers can sell us an OC Edition, maybe?).

In today's world, even for [H] members, what really matters more: 1) the potential overclock of 15% vs. 30%, or an "OMG 100% better overclock," or 2) clock-for-clock performance (on overclocked cards)? I think [H]OCP put it best in all your testing: it isn't the benchmark numbers (overclock percentage) that matter, but what we see in real-world eye candy (clock vs. clock). I hope this taught AMD a lesson that being a big fish in a little pond doesn't cut it when you could be a big fish in a big pond.
 
I am curious why this review was done on a PCIe 2.0 system and not a PCIe 3.0 system. Could you benchmark this product on a PCIe 3.0 system and show us the comparison to the 2.0 system, so that we may evaluate the value of using it in an LGA2011-based system?

Thanks!

The PCIe system has nothing to do with the card's overclocking ability.
 
Are there any free testing tools that can reliably push clocks to the max in a way that is easily reproducible? The OC Scanner tool from EVGA incorporated into Precision X seems a bit lacking...

EDIT: erm, might be because adaptive VSync was forced on for some reason. Whoops. Regardless, is there anything better?

The problem with using a tool to push the clocks is that if it is a highly demanding, power-hungry tool, GPU Boost won't utilize higher clocks, but will keep clocks at the TDP, or whatever power setting you've set. It's hard to explain, but let's take 3DMark for example: it's a high-powered app, it demands a lot of power, and GPU Boost won't raise the clock speed as high because the card reaches TDP sooner than in games. Now take a game like BF3, which is a lower-powered app; GPU Boost can utilize higher clock speeds because the card doesn't reach TDP.

Therefore, tools like 3DMark are a BAD representation of clock speed and of what you will actually experience in real-world games. The only way to see relevant clock speed in games is to use games. What I'm talking about can best be shown in graphs, using EVGA Precision, to show power and clock speed. I might do that in a followup eval to look closer at GPU Boost, so it's clearer. The point is, the only way you are going to know the highest clock speed in your game is to watch the clock speed IN THAT GAME.
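
If it helps to picture it, here's a toy Python model of that idea; the numbers (power per MHz, boost bin size, power target) are made up for illustration and this is not NVIDIA's real algorithm:

```python
# Toy model of the GPU Boost behavior described above (illustrative numbers,
# not NVIDIA's real algorithm): clocks step up only while estimated board power
# stays under the power target, so a heavier app tops out closer to base clock.

BASE_CLOCK = 1006   # MHz, GTX 680 base clock
BOOST_BIN = 13      # MHz per boost step (approximate)
TDP = 195           # W, reference board power

def boosted_clock(watts_per_mhz, power_target=1.00):
    """Raise the clock one bin at a time until the power estimate hits the target."""
    clock = BASE_CLOCK
    while (clock + BOOST_BIN) * watts_per_mhz <= TDP * power_target:
        clock += BOOST_BIN
    return clock

print(boosted_clock(0.155))                     # lighter, game-like load: boosts well past base
print(boosted_clock(0.192))                     # heavy, 3DMark/FurMark-like load: stuck near base
print(boosted_clock(0.192, power_target=1.32))  # same heavy load with the power slider raised
```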
 
@Kyle, Brent:
Do you have wattage numbers on the 7970 OC for each game to compare with the 680 OC?
 
Nice review guys. Thanks for taking a bit of extra time to test several games and OC both the 7970 and the 680.

I'm really liking the auto-overclocking the card does, right out of the box. This is a great feature, quite innovative.

Another small article, showing how much OC'ing helps with 5760x1200 performance would be sweet :)

I really want to erase auto-"over"clocking from our vocabulary in reference to GPU Boost.

It is auto-clocking, not auto-overclocking, because overclocking can still be done manually by the user over the capabilities of GPU Boost. So it isn't auto-overclocking, it's simply an auto-clocking feature, and quite a smart one at that.
 
Zarathustra[H] said:
As soon as one of these comes back in stock on Newegg, I'm going to find out how high one of these will go when modded with a Corsair H80 for cooling :p

I've thought of doing this with an H50 I have lying around. What's your plan for keeping the RAM cooled?
 
The problem with using a tool to push the clocks is that if it is a highly demanding, power-hungry tool, GPU Boost won't utilize higher clocks, but will keep clocks at the TDP, or whatever power setting you've set. It's hard to explain, but let's take 3DMark for example: it's a high-powered app, it demands a lot of power, and GPU Boost won't raise the clock speed as high because the card reaches TDP sooner than in games. Now take a game like BF3, which is a lower-powered app; GPU Boost can utilize higher clock speeds because the card doesn't reach TDP.

Therefore, tools like 3DMark are a BAD representation of clock speed and of what you will actually experience in real-world games. The only way to see relevant clock speed in games is to use games. What I'm talking about can best be shown in graphs, using EVGA Precision, to show power and clock speed. I might do that in a followup eval to look closer at GPU Boost, so it's clearer. The point is, the only way you are going to know the highest clock speed in your game is to watch the clock speed IN THAT GAME.

Well, that was kind of my point; the OC Scanner tool and most other GPU stressing programs seem to just pour on the power draw. I was just curious whether, aside from testing individual games endlessly, there was another app that might be more representative of a current-generation game's power draw rather than a benchmark's. Wishful thinking when I asked, I know, but I was curious nonetheless. Off to test a bunch of games then.
 
Another great review!! Thanks!

I know that we normally don't see this from [H], but are there liquid cooling plans for GPUs this round? I would like to see if the auto-clocking keeps scaling as the cards get cooler. If not, I will rely on the forums :)

How about Eyefinity/Surround? Does this help the NV story at all when OC'd?
 
Nice OC results, the 680 is a win all around it seems. Nice to see the 7970 still keep up in OC results as well. However AMD does need to drop the price so maybe I could buy a second. :)
 
You didn't post the fan setting for the 7970. For apples to apples, you should be manually setting the 7970 to a high fan speed. My 6950 chokes on OCs unless I prop the fan speed up above 55%.
 
It is auto-clocking, not auto-overclocking, because overclocking can still be done manually by the user over the capabilities of GPU Boost. So it isn't auto-overclocking, it's simply an auto-clocking feature, and quite a smart one at that.

I don't know if it sounds all that smart to me. Pretty much, when you really, really need the extra clock speed in a high-power situation (say, a game that pushes as hard as 3DMark) is when you get downclocked. When you're running at 100+ fps in a low-power situation is when you get an added boost, because you're at low power...

I would think you would want the high-power situation to be the time that the card ramps up extra voltage and clocks to push you through the demanding segment you're in.
 
I'm quite impressed with the performance of the GTX 680 and even more so when overclocked.

Kyle, do you see better performance if you favor memory clocks rather than GPU clocks?

It seems to me that the 680 is not bandwidth limited even though the 7970 has a massive bandwidth advantage. I think NVIDIA improved on the compression technology that was implemented on the 9800 GTX.
 
I don't know if it sounds all that smart to me. Pretty much, when you really, really need the extra clock speed in a high-power situation (say, a game that pushes as hard as 3DMark) is when you get downclocked. When you're running at 100+ fps in a low-power situation is when you get an added boost, because you're at low power...

I would think you would want the high-power situation to be the time that the card ramps up extra voltage and clocks to push you through the demanding segment you're in.

Not downclocked, just held to the baseclock. It keeps the TDP in check; now, if you set the TDP power slider higher, it can exceed that.

I understand what you are saying though, but in a high-power app the card basically acts as it would if GPU Boost didn't exist, operating at the baseclock.

I'm going to look at this more closely in a followup though: run some high-power stuff vs. low-power stuff and see what is really happening.
 
I'm quite impressed with the performance of the GTX 680 and even more so when overclocked.

Kyle, do you see better performance if you favor memory clocks rather than GPU clocks?

It seems to me that the 680 is not bandwidth limited even though the 7970 has a massive bandwidth advantage. I think NVIDIA improved on the compression technology that was implemented on the 9800 GTX.

The 680 is engine limited; sacrifice memory clocks in favor of raising the core clock as high as you can.
 
Not downclocked, just held to the baseclock. It keeps the TDP in check; now, if you set the TDP power slider higher, it can exceed that.

I understand what you are saying though, but in a high-power app the card basically acts as it would if GPU Boost didn't exist, operating at the baseclock.

I'm going to look at this more closely in a followup though: run some high-power stuff vs. low-power stuff and see what is really happening.

By high power do you mean under heavy load? What would be the point of boosting under low load, other than to get even higher fps and raise the "average" numbers we see? That seems almost like cheating if that's the case, and if they put engineering time into just raising the "average/max" rather than the "average/min," that's just a shame.
 
Excellent review, as always! Thanks for taking the time to evaluate the 680 on every possible facet.
 
By high power do you mean under heavy load? What would be the point of boosting under low load, other than to get even higher fps and raise the "average" numbers we see? That seems almost like cheating if that's the case, and if they put engineering time into just raising the "average/max" rather than the "average/min," that's just a shame.

High power as in pushing the TDP; a "power virus," as NV describes it. Apps like FurMark; NV showed a test in 3DMark that was also like this.

Note that this isn't typical in games, and that's why games are the right application to use to test clock speed, overclocking, and so forth. Games don't behave like some of these specific tools; those tools don't show you what you are going to get in real games.

I'm going to flesh this out more in a closer look at GPU Boost. I'll show you graphs in these apps and in games, and how it relates to power and clock speed. The important thing to keep in mind is simply that none of those tools or specific benchmarks relate in any way to what you will experience in real games. This is also why we've started reporting power in each game; it's necessary now. Same with clock speed: if clock speed is different in games (and it will be), I'll show that in the next article.
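
As a rough idea of how a clock-vs-power chart could be put together from a monitoring log, here's a Python sketch; the CSV column names and file name are hypothetical placeholders, not EVGA Precision's actual log format, so adjust them to whatever your tool writes out:

```python
# Sketch of charting GPU clock and power-target % over time from a monitoring log.
# The CSV layout and file name below are hypothetical placeholders, not EVGA
# Precision's documented log format; adjust the column names to your tool's output.

import csv
import matplotlib.pyplot as plt

def plot_clock_and_power(path):
    t, clock_mhz, power_pct = [], [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t.append(float(row["time_s"]))
            clock_mhz.append(float(row["gpu_clock_mhz"]))
            power_pct.append(float(row["power_percent"]))

    fig, ax_clock = plt.subplots()
    ax_clock.plot(t, clock_mhz, label="GPU clock (MHz)")
    ax_clock.set_xlabel("Time (s)")
    ax_clock.set_ylabel("GPU clock (MHz)")

    ax_power = ax_clock.twinx()  # second y-axis for the power reading
    ax_power.plot(t, power_pct, color="tab:red", label="Power (% of TDP)")
    ax_power.set_ylabel("Power (% of TDP)")

    fig.tight_layout()
    plt.show()

plot_clock_and_power("bf3_gpu_log.csv")  # hypothetical log captured during a BF3 run
```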
 
I've thought of doing this with an H50 I have lying around. What's your plan for keeping the RAM cooled?

From pictures online where people have taken their 680s apart, it looks like they are of the variety where the heatsink and fan can come off, but the heat spreader for the VRMs and RAM is separate.

[Image: gtx680-metal-bracket.jpg]


My plan is to leave that heat spreader on there, and have one of my case fans blow cool air in from the outside over it.

From the looks of it the heatsink doesn't even make contact with the heat spreader in the stock configuration, so at worst it should be no worse than the stock design.

It won't be pretty, but it should work very well.
 
Not downclocked, just held to the baseclock. It keeps the TDP in check; now, if you set the TDP power slider higher, it can exceed that.

I understand what you are saying though, but in a high-power app the card basically acts as it would if GPU Boost didn't exist, operating at the baseclock.

I'm going to look at this more closely in a followup though: run some high-power stuff vs. low-power stuff and see what is really happening.

Excellent. I look forward to this.

For those of us who plan to use custom cooling that will likely exceed the capabilities of the stock cooler, do you know if there is any way to override the power limit and set it higher than 130%?
 
Brent: Is Power Target the same as AMD's PowerTune? They seem very similar, and if so, I would like to see PowerTune boosted from +20 to +32, especially as the card has 8-pin and 6-pin power connectors rather than double 6-pin.

I would also love for PowerTune not to arbitrarily reset on reboots or in the middle of games... particularly on the second card in a CrossFire setup. Nothing like having that slider pop back, which starves the card for power, leads to throttled processing, then a game crash, and usually finishes with a blue screen shortly thereafter.
 
High power as in pushing the TDP; a "power virus," as NV describes it. Apps like FurMark; NV showed a test in 3DMark that was also like this.

Note that this isn't typical in games, and that's why games are the right application to use to test clock speed, overclocking, and so forth. Games don't behave like some of these specific tools; those tools don't show you what you are going to get in real games.

I'm going to flesh this out more in a closer look at GPU Boost. I'll show you graphs in these apps and in games, and how it relates to power and clock speed. The important thing to keep in mind is simply that none of those tools or specific benchmarks relate in any way to what you will experience in real games. This is also why we've started reporting power in each game; it's necessary now. Same with clock speed: if clock speed is different in games (and it will be), I'll show that in the next article.

My biggest concern is for people who are going to keep the card for a while. I'm on a 3-year upgrade cycle (after 3 years, I pick at what point I want to upgrade -- this time it's Ivy). Back in '08 when I picked up my 4870, I'm sure most games didn't push the card as hard as 3DMark '08 did. I'll bet that BF3 pushes my 4870 just as hard as or harder than 3DMark '08 does.

What I'm saying is, in 3 to 4 years I suspect I won't be getting much over base clock anyway, since games will (hopefully) be much more demanding. Sure, I'll still WC the card and all will be well; I'm just not sure the dynamic clock is really something to be all that excited about in the long term.
 