Official 660Ti Overclocking Thread

Something I'd like to encourage more of, when reporting clock speeds with the Kepler generation, is the actual GPU clock speed when gaming. As you know, most cards actually run at a frequency well above the GPU Boost clock speed, so the real clock speed they run at is not the GPU Boost clock. I'd suggest calling it the "Top GPU Boost Clock" or the "Actual GPU Clock Speed". It can be seen by running a utility with an on-screen display that shows your GPU clock in real time in games, or with monitoring software that logs the actual GPU clock over time in the background.
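For anyone who prefers a background log to an on-screen overlay: if the system has NVIDIA's `nvidia-smi` command-line tool available (an assumption on my part; Afterburner or GPU-Z logging does the same job), a minimal polling sketch might look like this. The function names are my own:

```python
import subprocess
import time

def parse_clock_mhz(csv_value):
    """Turn an nvidia-smi CSV value like '1202 MHz' into an integer MHz."""
    return int(csv_value.strip().split()[0])

def read_actual_gpu_clock():
    """Query the current graphics clock (the real in-game clock,
    not the advertised boost clock) via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=clocks.gr", "--format=csv,noheader"],
        text=True,
    )
    return parse_clock_mhz(out)

def log_clocks(interval_s=1.0):
    """Sample and print the actual clock while a game is running."""
    while True:
        print(read_actual_gpu_clock(), "MHz")
        time.sleep(interval_s)
```

Calling `log_clocks()` while a game runs shows the real clock the card settles at, which is the number worth reporting.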
 
Good call, Brent. I changed OP to reflect your comments.
 
Something I'd like to encourage more of, when reporting clock speeds with the Kepler generation, is the actual GPU clock speed when gaming. As you know, most cards actually run at a frequency well above the GPU Boost clock speed, so the real clock speed they run at is not the GPU Boost clock. I'd suggest calling it the "Top GPU Boost Clock" or the "Actual GPU Clock Speed". It can be seen by running a utility with an on-screen display that shows your GPU clock in real time in games, or with monitoring software that logs the actual GPU clock over time in the background.

I wish more reviewers would do this; quite a few just post a screenshot of GPU-Z at their highest stable clock, which doesn't tell the whole story.
 
Good call.
With the advent of these new cards, they simply "run themselves" after we put in the numbers... which takes some getting used to, I must say. :D

On my 670s I see wonderful results, far and away beyond what typical reviews show.
 
Something I'd like to encourage more of, when reporting clock speeds with the Kepler generation, is the actual GPU clock speed when gaming. As you know, most cards actually run at a frequency well above the GPU Boost clock speed, so the real clock speed they run at is not the GPU Boost clock. I'd suggest calling it the "Top GPU Boost Clock" or the "Actual GPU Clock Speed". It can be seen by running a utility with an on-screen display that shows your GPU clock in real time in games, or with monitoring software that logs the actual GPU clock over time in the background.

Amen, so many people still don't understand this concept.
 
Amen, so many people still don't understand this concept.

That's because Boost can be confusing. My boost clock is 980 or whatever, and in game it's 1110 by default. You can understand why there would be some confusion for those who don't own a Kepler card. I know I was a bit lost until I actually had the card, and even then I still wanted more information.
 
Final clocks after testing for stability for 6 hours on my Galaxy 3GB are 1330/7000. Here's a pic of it with boost to 1322.

 
1199 core, and I can't raise memory more than single digits... couldn't pass a minute of Heaven on anything higher.

But this card is going back anyhow (heatsink rattles), so who knows.
 
1199 core, and I can't raise memory more than single digits... couldn't pass a minute of Heaven on anything higher.

But this card is going back anyhow (heatsink rattles), so who knows.

Maybe those two issues are related. Hopefully the new one works out better for you.
 
Final clocks after testing for stability for 6 hours on my Galaxy 3GB are 1330/7000. Here's a pic of it with boost to 1322.


can you post benchmarks with that on bf3? just curious if it will come close to the 680 @ 1080p
 
Well, one of my pair of MSI 660 Ti PEs (not the overclocked model, just reference-clocked) will do 1249/7200 MHz, and unfortunately the other one appears to be DOA... Haven't had a piece of hardware arrive DOA in years.
 
My MSI 660Ti PE seems happy with me maxing out the power/voltage settings in MSI Afterburner (+100 mV and I think 114% power), along with going to 1170 on the core and 7000 (or 3500, or 1750, depending on which software you ask) on the memory.
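Those three memory numbers (7000, 3500, 1750) are the same GDDR5 clock expressed under different reporting conventions: the command clock, the DDR-doubled figure, and the quad-pumped "effective" rate. A quick sketch of the relationship; the function name is my own:

```python
def gddr5_readings(command_clock_mhz):
    """Return the three numbers different tools report for one
    physical GDDR5 memory speed."""
    return {
        "command_clock": command_clock_mhz,   # what GPU-Z's Memory field shows
        "data_rate": command_clock_mhz * 2,   # DDR-style doubled figure
        "effective": command_clock_mhz * 4,   # quad-pumped "effective" MHz
    }

# The 660 Ti overclock above: 1750 -> 3500 -> 7000, all one clock.
print(gddr5_readings(1750))
```

So whichever number a tool reports, dividing or multiplying by 2 or 4 maps it back to the others.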

I've still got to do an hours-long Kombustor/Furmark test, I suppose, but so far this thing hasn't even made it to 80C, nor has the fan gotten near to 50% speed. It's quieter than the rest of the components in my PC, something I can't say for the Galaxy 1GB GTX460s I had in SLI previously.
 
Thought I would post these two GPU-Z shots. The first is the settings I use, and the second is me messing with MSI Afterburner to get GPU-Z to show the actual pixel/texel fill rate at the highest boost my card reaches:

 
I've still got to do an hours-long Kombustor/Furmark test, I suppose, but so far this thing hasn't even made it to 80C, nor has the fan gotten near to 50% speed. It's quieter than the rest of the components in my PC, something I can't say for the Galaxy 1GB GTX460s I had in SLI previously.

Why would you run Furmark for an hour (or at all)? It's a totally unrealistic load, and it places unneeded stress on the system for no real benefit.
 
Why would you run Furmark for an hour (or at all)? It's a totally unrealistic load, and it places unneeded stress on the system for no real benefit.

Any real piece of hardware installed into a properly cooled setup should be able to run full tilt forever. CPUs have always been held to this standard. Don't know why GPUs get a waiver on this by so many people.
 
Any real piece of hardware installed into a properly cooled setup should be able to run full tilt forever. CPUs have always been held to this standard. Don't know why GPUs get a waiver on this by so many people.

Because it makes no sense? Why would I want to put that kind of unneeded stress on any component? Furmark has very little connection with card stability or performance, all it does is thermally and electrically stress the card to totally unrealistic levels. Why test a card at 300W and 90C when in even the most demanding game it only uses 250W and runs at 75C? You don't put your car up on blocks and run it at redline for a few hours just to be sure it can "handle the load".

There's a reason Furmark is now throttled at the driver level.
 
Furmark has very little connection with card stability

This should be enough reason to ignore Furmark as a stability tool, at the very least for Kepler and maybe others. I can run Furmark for hours and then crash in 15 seconds in Heaven or BF3.

Now if you are thermally testing your card sure go ahead.
 
Because it makes no sense? Why would I want to put that kind of unneeded stress on any component?

Properly designed, it shouldn't experience any stress considered damaging. The car analogy fails, but everyone loves it for some reason. Thermal testing and speed-path testing are two different tests; thermal testing should expose excessive vdroop (similar to when BF3 was introduced and there was a rash of 560 Ti cards that needed vcore bumps...).
 
Well, I was a little surprised to find that on the MSI 660, when I overclocked it to 1170/7000, MSI Kombustor (which is pretty much Furmark, as it's even made by the same guys) actually wound up doing basically the same thing as just idling in the Heaven benchmark or that Nvidia "A New Dawn" demo. Power usage was 95-104%, GPU usage was mostly pegged at 100%, and the GPU would reach 80C and then drop right back down to 79C for a while. Fan speed stayed at 45% or less.

I guess that's what Forceman meant by "throttled at the driver level". Either way, it's been a couple of years since I installed and stress-tested a card, so I guess I was a bit behind.
 
My 2GB Galaxy is sitting at 1228/7000 made it about 2 hours through Heaven before it crashed...

I was hoping for more, as I imagine I'll probably have to turn it down some more.

The cooler is pretty sweet though.
 
This Gigabyte just keeps rocking. Now stable at 1341 core and 7000 mem!
 
Curses!

My replacement card from Galaxy (which runs pretty well and pretty quiet, I must say) clocks the same on the core in Heaven (won't break 1200). It can at least do 7000 on memory, which I'm happy about, but it looks like the monster overclock has eluded me again.

How are y'all getting these ridiculous 1300 numbers anyhow?

I'm going to be reinstalling Windows later to rid myself of any AMD ghosts on my computer to see if that makes any difference, but I'm not holding my breath (it's not a terrible result anyhow, and I already max out everything at my resolution, so I'm happy).

EDIT: Also, is it normal that Heaven crashes even when GPU power is only at 65-70%? Does that still mean unstable overclock or could that be something else?
 
I am doing testing with my PNY 660 Ti too, I would LOVE to hear how to get to 1300 as well = )
 
EDIT: Also, is it normal that Heaven crashes even when GPU power is only at 65-70%? Does that still mean unstable overclock or could that be something else?

Normally means an unstable overclock. Just crashes, or does it artifact first?
 
Normally means an unstable overclock. Just crashes, or does it artifact first?
Just plain crashes, with some error message about a missing device from Heaven or the generic "stopped responding" message from Windows. I've yet to see any artifacts in Heaven running either of these 660s, though I did on my 7950 when pushing past 1150 core.

Could it actually not be related to the card and OC?
 
Just plain crashes, with some error message about a missing device from Heaven or the generic "stopped responding" message from Windows. I've yet to see any artifacts in Heaven running either of these 660s, though I did on my 7950 when pushing past 1150 core.

Could it actually not be related to the card and OC?

That means your driver crashed and restarted, which does mean you are unstable. The only way to see artifacts in Heaven is with a memory overclock; as far as I can tell, the GPU just crashes before it produces visual artifacts, and this is true of the other Keplers I have messed with.
 
Curses!

My replacement card from Galaxy (which runs pretty well and pretty quiet, I must say) clocks the same on the core in Heaven (won't break 1200). It can at least do 7000 on memory, which I'm happy about, but it looks like the monster overclock has eluded me again.

How are y'all getting these ridiculous 1300 numbers anyhow?

I'm going to be reinstalling Windows later to rid myself of any AMD ghosts on my computer to see if that makes any difference, but I'm not holding my breath (it's not a terrible result anyhow, and I already max out everything at my resolution, so I'm happy).

EDIT: Also, is it normal that Heaven crashes even when GPU power is only at 65-70%? Does that still mean unstable overclock or could that be something else?

It's good to hear your new card is quiet. My card seems to be having fan issues now too: sometimes it sounds like the fans are grinding and rattling around, other times it's quiet. My temps have gone up even though my ambients have dropped, which is pretty weird... I don't want to RMA mine, though, if the binning is really that bad...

I hope you break 1200 core man!
 
Well, I RMA'ed my Galaxy. I'm sure Andrew would have gotten back to me eventually, but it took less than 15 hours for Amazon to get me another card....

I went with the 3GB Superclocked card, and right out of the box I could tell there had to have been issues with my old card. The reference cooler is about as loud as the Galaxy and almost as cool, running about 3-4C hotter. I also found it weird that I gained 60 points and almost a 5% FPS increase clock-for-clock over the Galaxy in Heaven, and I'm not even using the extra memory.

1238/7000, as long as I keep it under 70C with a fan blowing into my case. The default speeds past 70C are:

70C — 1228 @ 1.162 V
73C — 1215 @ 1.150 V

which honestly sucks, because regardless of which card you're running, you're more than likely going to break 70C at times, which causes the voltage to fluctuate. I have a hard time believing people are running 1300 for hours in Heaven without hitting 70C at least once or twice; the rapid drop in voltage kills the stability, and I see [H] dropped their review card under 1300 as well.

Did I mention Kepler throttling sucks?
 
Well, I RMA'ed my Galaxy. I'm sure Andrew would have gotten back to me eventually, but it took less than 15 hours for Amazon to get me another card....

I went with the 3GB Superclocked card, and right out of the box I could tell there had to have been issues with my old card. The reference cooler is about as loud as the Galaxy and almost as cool, running about 3-4C hotter. I also found it weird that I gained 60 points and almost a 5% FPS increase clock-for-clock over the Galaxy in Heaven, and I'm not even using the extra memory.

1238/7000, as long as I keep it under 70C with a fan blowing into my case. The default speeds past 70C are:

70C — 1228 @ 1.162 V
73C — 1215 @ 1.150 V

which honestly sucks, because regardless of which card you're running, you're more than likely going to break 70C at times, which causes the voltage to fluctuate. I have a hard time believing people are running 1300 for hours in Heaven without hitting 70C at least once or twice; the rapid drop in voltage kills the stability, and I see [H] dropped their review card under 1300 as well.

Did I mention Kepler throttling sucks?

That's not the way it works. The clock frequency rolls off ~15 MHz at each of several temperature limits (70C is one of them), but there is no hard limit. If you have 1300 at 60C, you'll have ~1285 at 71C. Mine does 1267 @ 69C and 1254 at 71C, for instance.
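The rolloff described above can be sketched as a simple model: one ~15 MHz bin dropped each time a temperature threshold is crossed. The threshold list and step size here are illustrative assumptions, not exact firmware values:

```python
# Illustrative model of Kepler GPU Boost temperature rolloff.
# BIN_MHZ and THRESHOLDS_C are assumed for illustration only.
BIN_MHZ = 15
THRESHOLDS_C = [70, 80, 90]

def boost_clock_at_temp(cool_clock_mhz, temp_c):
    """Estimate the boost clock after temperature-based bin drops."""
    bins_dropped = sum(1 for t in THRESHOLDS_C if temp_c > t)
    return cool_clock_mhz - bins_dropped * BIN_MHZ

print(boost_clock_at_temp(1300, 60))  # 1300: below every threshold
print(boost_clock_at_temp(1300, 71))  # 1285: one bin dropped past 70C
```

This reproduces the example in the post (1300 at 60C becomes ~1285 at 71C) without any hard ceiling at 70C.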
 
That's not the way it works. The clock frequency rolls off ~15 MHz at each of several temperature limits (70C is one of them), but there is no hard limit. If you have 1300 at 60C, you'll have ~1285 at 71C. Mine does 1267 @ 69C and 1254 at 71C, for instance.

If I set my card to 1260 (which it runs, but not stably for more than a run), it blips down to 1228 regardless, and then 1215; it's not pegging 1267 consistently. It does the same thing when set to 1238: it will blip the same 1228/1215. The Galaxy was the same way.
 
If I set my card to 1260 (which it runs, but not stably for more than a run), it blips down to 1228 regardless, and then 1215; it's not pegging 1267 consistently. It does the same thing when set to 1238: it will blip the same 1228/1215. The Galaxy was the same way.

Well, I don't know what to tell you except possibly the 660 Ti cards are different. But my 680 sits at 1275 like a rock below 70C, then rolls off at temps above 70C. Did you turn up your power limit? Maybe your card is power throttling.
 