The Official GTX980/970 OC & Benchmark Perf. Thread

Is there a way to move the TDP target from the software side, or would I have to edit the BIOS to change the power consumption? Thank you.
You should be happy it has a higher TDP; that way you won't be hitting the limits and throttling. My card will throttle like crazy and drop voltage with any OC because I am hitting the TDP.
 
Sorry Marcdaddy, my man cave is closed for the night. My 5 year old is asleep and I can't wake him. I'll update benchmarks probably tomorrow. My PCIe 3.0 slots run 16x for one card or 8x/8x with 2 cards, just like the P67 chipset; however, they are PCIe 3.0, so 8x @ PCIe 3.0 has more bandwidth than 8x @ PCIe 2.0. In any case, I don't think we're at the point of saturating PCIe 2.0 8x bandwidth unless you get a dual GPU card, and even then I'm not sure.

As for quad channel RAM, that's for the new X99 chipset with DDR4. CAS latency is high on most of the DDR4 kits out there; if you're not spending over $450 on RAM in a DDR4 setup, you're not getting anything truly fast. That's why I chose Z97 instead. I'm running dual channel DDR3, but I paid less than half of what a 16 GB quad channel DDR4 kit would have cost me. :)

Also, benchmarks on my CPU @ 4.8 show that it keeps up pretty damn well against the newest 8 core and 6 core Intel chips. They are more future proof, but damn, that setup would have cost me a lot more. I'll skip X99 and see what's next. By then DDR4 prices will come down and speeds will come up tremendously.


Before I build my new system next weekend, I'm gonna test my 2600K @ 4800 MHz with the Gigabyte 980s and post some results to see if PCIe 2.0 is choking them. The cards will be here Tuesday, so that will give me a few days. I've been up since 5; my 7 and 3 year old sons are already playing games. The 7 year old is playing Minecraft and the 3 year old is on my rig playing BF4. Yes, he is only 3 and plays BF4. I feel bad for the people that jump in his helicopter lol.
 
Before I build my new system next weekend, I'm gonna test my 2600K @ 4800 MHz with the Gigabyte 980s and post some results to see if PCIe 2.0 is choking them. The cards will be here Tuesday, so that will give me a few days. I've been up since 5; my 7 and 3 year old sons are already playing games. The 7 year old is playing Minecraft and the 3 year old is on my rig playing BF4. Yes, he is only 3 and plays BF4. I feel bad for the people that jump in his helicopter lol.

Considering PCIe 2.0 16x is equivalent to PCIe 3.0 8x, I doubt it. Even the TechPowerUp 980 SLI review has them running at 3.0 8x speeds:

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_980_SLI/2.html

Now if you are playing at 4K, there might be a very slight choke with PCIe 2.0 @ 8x, but @ 16x you should be good.
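For reference, a rough back-of-the-envelope comparison (a minimal sketch, assuming the usual per-lane figures: PCIe 2.0 is about 500 MB/s per lane after 8b/10b encoding, PCIe 3.0 about 985 MB/s per lane after 128b/130b; real throughput is a bit lower once protocol overhead is counted):

# Approximate one-direction PCIe link bandwidth, ignoring overhead beyond line encoding.
PER_LANE_GBPS = {"2.0": 0.500, "3.0": 0.985}  # GB/s per lane

def link_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Rough usable bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("2.0", 8), ("2.0", 16), ("3.0", 8), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbps(gen, lanes):.1f} GB/s")

# PCIe 2.0 x16 (~8.0 GB/s) is essentially the same pipe as PCIe 3.0 x8 (~7.9 GB/s),
# which is why 3.0 x8 SLI generally isn't the bottleneck either.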
 
Beginning my Gigabyte 970 WF3 overclocks today. It's been a long time, so I'm running my plan by you folks:

1.) Max the power limit at 112%.
2.) Prioritize temperature, setting it to aim for 80C.
3.) Max core voltage, which is +87 mV (is this necessary to start?)
4.) Bump the GPU core in 25 MHz increments until I get artifacting or notice throttling (how will I notice this? see the logging sketch below).
5.) Work on memory in 50 MHz increments.
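On the "how will I notice throttling?" question, one low-effort option is to log clocks, power, and temperature while the stress test runs and look for the core clock sagging below (rated boost + your offset). This is just a sketch using nvidia-smi's query mode; it assumes your driver's nvidia-smi exposes these fields (on some GeForce drivers power.draw reads N/A, in which case the GPU-Z sensor tab or the Afterburner graphs do the same job):

# Minimal clock/power logger via nvidia-smi (assumes nvidia-smi is on PATH).
import csv, subprocess, time

FIELDS = "clocks.gr,power.draw,temperature.gpu,utilization.gpu"

def sample():
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
        text=True)
    return [line.strip() for line in out.strip().splitlines()]  # one line per GPU

with open("oc_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time"] + FIELDS.split(","))
    for _ in range(300):                      # ~5 minutes at 1 sample per second
        for gpu_line in sample():
            writer.writerow([time.strftime("%H:%M:%S")] + gpu_line.split(", "))
        time.sleep(1)

# If the logged core clock keeps dipping below (rated boost + offset) while power or
# temperature sits at its limit, the card is throttling even if nothing crashes.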

Can anyone recommend specific stress tests to use?

My first run:

http://www.3dmark.com/3dm/4191911?

Did not increase voltage on that run, +100mhz on core.

My best so far (no memory OC here):
http://www.3dmark.com/3dm/4192407

12461 graphics score on that last run.

Breaking 1500mhz pretty easily, assuming I'm doing this right.
 
Dang, I can't seem to get past GPU +175 MHz without getting artifacts in Tomb Raider @ 4K (I get sudden black polygons streaking across the image).

EVGA 980 reference. +500 memory. 125% Power. Doesn't matter what additional voltage I use.
 
Dang, I can't seem to get past GPU +175 MHz without getting artifacts in Tomb Raider @ 4K (I get sudden black polygons streaking across the image).

EVGA 980 reference. +500 memory. 125% Power. Doesn't matter what additional voltage I use.

Your memory OC is high though. Have you tried doing just the core first then the memory?
 
Not yet. Oh, is 200 considered high? It seems half the people here have cleared 1500 MHz boost, so I assumed my 175 MHz was modest!
 
Fire Strike:

SCORE
9545 with NVIDIA GeForce GTX 970(1x) and Intel Core i5-2500K Processor
Graphics Score 11862
Physics Score 7928
Combined Score 4422

MSI Gaming 970 @ Defaults
2500K @ 4.4 GHz

Put in a quick OC attempt here, card seems to scale nicely with clock increases:

10101 with NVIDIA GeForce GTX 970(1x) and Intel Core i5-2500K Processor
Graphics Score 12668
Physics Score 7883
Combined Score 4816

MSI Gaming 970 @ +100 / +500
2500K @ 4.4 GHz
 
Any 3DMark default Fire Strike results for a 980 SLI overclocked setup on a rig similar to mine (around 4.5 GHz)?

I break 15K in the new 3DMark with my mild overclocks on the 780s and the CPU at 4.5 GHz. I can do about 15.4-15.5K when I go all out with 1.3 V on the 780s and overclock the rest of the computer as per my sig.
 
What is your actual boost? Most of the 980 reviews get about 1450 MHz when OCing the reference cards.

I think I see about 1430 MHz while playing Tomb Raider (OSD from EVGA Precision X). Maybe a little less. What is confusing me is that GPU-Z thinks the boost is 1391 MHz. So maybe I am good to go anyway?

Oh - I switched to Afterburner because Precision X has been causing some glitches and crashes - particularly when I start enabling/disabling other monitors.

So I'll test again with Afterburner and see if I can get the OSD to tell me the boost again.
 
I've settled in happily with my GTX 970s in SLI, with a fully 24/7-stable and quiet clock of 1504 MHz core and 8004 MHz mem for now. Any higher and the power limiter nails me, but I have played probably 16+ hours of BF4, ArcheAge, and some Tomb Raider, rock stable at these speeds. I have zero doubt I could be running 1600 in SLI 24/7, but the power limiter and voltage down-binning from the drivers/BIOS stop me for the time being, until we have a new nvflash to flash custom BIOS files with. Can't complain, it is kicking everything's ass at 4K as is.
 
Is there a way to move the TDP target from the software side, or would I have to edit the BIOS to change the power consumption? Thank you.

Just a custom BIOS for the wattage amounts, but Afterburner allows a higher percentage than the default regardless, just within the stock BIOS range of course.
I think I see about 1430 MHz while playing Tomb Raider (OSD from EVGA Precision X). Maybe a little less. What is confusing me is that GPU-Z thinks the boost is 1391 MHz. So maybe I am good to go anyway?

Oh - I switched to Afterburner because Precision X has been causing some glitches and crashes - particularly when I start enabling/disabling other monitors.

So I'll test again with Afterburner and see if I can get the OSD to tell me the boost again.
Pay attention to the actual boost in the GPU-Z sensor window and Afterburner's overlay in games :). Those are your real clocks! I have heard others say Precision X is glitchy right now too.
 
I've settled in happily with my GTX 970s in SLI, with a fully 24/7-stable and quiet clock of 1504 MHz core and 8004 MHz mem for now. Any higher and the power limiter nails me, but I have played probably 16+ hours of BF4, ArcheAge, and some Tomb Raider, rock stable at these speeds. I have zero doubt I could be running 1600 in SLI 24/7, but the power limiter and voltage down-binning from the drivers/BIOS stop me for the time being, until we have a new nvflash to flash custom BIOS files with. Can't complain, it is kicking everything's ass at 4K as is.

Great clocks indeed :) If you really want to test whether you can get 1600 MHz, put the memory clocks back to normal and see how much higher your GPU clock goes.

Putting the memory clocks back to normal will lower the power you're using, allowing you to put more of the power budget toward the GPU.
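To make that trade-off concrete, here's a toy power-budget sketch. The wattage numbers are purely illustrative placeholders, not measured values; the real split between core and memory varies per card and per BIOS:

def core_headroom_watts(board_power_limit_w: float, power_limit_pct: float,
                        core_draw_w: float, mem_draw_w: float) -> float:
    """Watts left for the core before the limiter kicks in. All inputs are hypothetical."""
    budget = board_power_limit_w * power_limit_pct / 100.0
    return budget - (core_draw_w + mem_draw_w)

# Illustrative numbers only: a ~150 W-class card at a 110% power slider,
# comparing an overclocked-memory run against stock memory.
print(core_headroom_watts(150, 110, core_draw_w=130, mem_draw_w=30))  # ~5 W of slack
print(core_headroom_watts(150, 110, core_draw_w=130, mem_draw_w=22))  # ~13 W of slack

# Every watt the memory OC doesn't consume is a watt the boost algorithm
# can spend holding a higher core clock before it hits the TDP cap.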
 
Thanks :D... and great idea, I think I'll try it in the morning. By the way, you typo'd after "great", might want to edit that (lol!).
 
Thanks :D... and great idea, I think I'll try it in the morning. By the way, you typo'd after "great", might want to edit that (lol!).

LOL Yay for auto correct on the phone...How the fuck did it pick Cocks...LOL

Thanks man..
 
Thanks Golden for the advice - I'll keep an eye on the clocks. Hmmm, looks like 1415 MHz in TR @ 4K (with no AA, DOF turned off, and some other settings dialed down).

Damn, great clocks on your 970.

I know TR plays fine with +200 on my system, but the occasional black flickering in some polygons bothers me. :)
 
lol, if I lower the res in game so I am not pushing my GPU even close to its TDP limit, the card is actually perfectly stable at 1584 MHz with a voltage bump.

You can't say it's stable if it's not being used at 99%, though...
 
Thanks Golden for the advice - I'll keep an eye on the clocks. Hmmm, looks like 1415 MHz in TR @ 4K (with no AA, DOF turned off, and some other settings dialed down).

Damn, great clocks on your 970.

I know TR plays fine with +200 on my system, but the occasional black flickering in some polygons bothers me. :)

Thanks :D, and yeah, I don't consider a clock 24/7 stable unless it's artifact-free (no flickering polys or dots, etc.) in all games I have tried with it and it's been stable for a couple of continuous hours in a game + more time across various sessions. I'm plenty happy with this for now, but I'm going to try DASHIT's idea to see if TDP is really limiting my core like I think.

Question: how do you interpret the memory OC with Afterburner? Do I multiply the offset by 2?

There are three main ways to refer to memory speeds:

- Actual MHz clock, which is shown by GPU-Z on its main page. This is the literal "base" frequency of the memory, and it's how older system memory such as SDRAM was rated.
- DDR, aka Double Data Rate, which is the base clock multiplied by 2. This is how system memory such as DDR3 is rated. MSI Afterburner's memory offset for GPU memory is expressed in this form.
- QDR, aka Quad Data Rate, which is what GDDR5 uses for its final clock speeds and what the spec sheets for video cards list (e.g. 7000 MHz). This is the base MHz clock multiplied by 4, or alternatively the DDR speed multiplied by 2.

So, to your question: if you want the QDR speed, which is what people typically quote when they talk about their overclocks, you multiply Afterburner's offset by 2 and then add that to the card's rated memory speed (in this case, 7000 MHz + 2 x (Afterburner offset) = your QDR speed). :)
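A quick sketch of that arithmetic (7000 MHz is just the GTX 970/980 rated effective speed used as the example baseline):

def effective_memory_clock_mhz(rated_qdr_mhz: float, afterburner_offset_mhz: float) -> float:
    """Afterburner's memory offset is DDR-rated, so double it before adding to the QDR spec."""
    return rated_qdr_mhz + 2 * afterburner_offset_mhz

print(effective_memory_clock_mhz(7000, 500))  # 8000 MHz effective (QDR)
print(effective_memory_clock_mhz(7000, 612))  # 8224 MHz effective, the kind of number quoted below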
 
Guess who's a smart cookie? *cough*DASHIT is*cough*

So I toyed around with lower mem speeds and they drastically reduce TDP usage. By testing with stock memory I was able to determine a maximum core clock that produces triangle artifacts instead of just driver TDR'ing/crashing. Turns out everyone is probably going to be throttling at around 1500-1520 core if using 8 GHz+ memory for playing games (at least on the MSI Gaming GTX 970 and anything but the Gigabyte G1, which is known to have a higher BIOS TDP limit). This is also why we're seeing everyone report driver crashes instead of hard locks when they try to test ;). The clocks where this happens will vary with ASIC quality, but this is why.

Using stock memory and +0 mV, I find I start getting triangle artifacts at around 1540 MHz on the core in SLI, which doesn't throttle down, hit the power limiter, or crash for at least the couple of minutes I left it running. If I up the core voltage to just +12 mV instead of 0 mV, I hit the limiter, it throttles a little (and automatically lowers the volts), and it crashes :eek:. Long story short, custom BIOS flashing is going to unchain GTX 970 cards for people once the new nvflash is out and about. The TDP limit is what is holding back real OCing on these cards and causing driver crashes/TDRs instead of actual artifacting/hard crashes.

The cards, interestingly, will NOT go to 110% TDP if the power limit is being reached. Instead they will lower the volts (it won't show in the OSD, only in the TDP reading; voltage monitoring doesn't appear to report the proper substeps) and driver crash, or lower the clocks 5-10 MHz at a time in conjunction and then either crash or stay stable, but perhaps crash after 30-60 minutes of gaming once the cards warm up and the TDP limit is hit again.

Another interesting tidbit: with my 2600K @ 4.4 GHz, I am actually NOT loading the GPUs to 100% inside of Firestrike normal mode... I can see the GPU usage sitting at 85-89% the entire run, which is part of why some people's GPU score is lower here.


I then tested for the max memory clock inside of the TDP with as high a clock as I could run on the core. My best game-stable result (2 hours) is this:

http://www.3dmark.com/3dm/4200438

1514 MHz core and 8224 MHz mem, resulting in an 11042 GPU score in Firestrike Extreme. This was with no volt adjustment on the cards, with the TDP peaking (max reading) at 106% on one card and 101% on the other, and them sitting at 98-102% most of the time. It's about the same in games and does not throttle at all, while so far being rock-stable and artifact-free.

2aI774K.png
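To connect this with the earlier logging idea, here's a small sketch that scans clock/power samples and flags the 5-10 MHz down-steps that happen while the card is pinned at its power target. The sample list is made-up illustrative data, not a real capture:

# Flag samples where the core clock steps down while the card is at/over its power target.
samples = [  # (core MHz, TDP %)
    (1514, 98), (1514, 101), (1506, 104), (1501, 106), (1514, 96),
]

def throttle_events(samples, tdp_limit_pct=100, min_drop_mhz=5):
    events = []
    for (prev_clk, _), (clk, tdp) in zip(samples, samples[1:]):
        if tdp >= tdp_limit_pct and prev_clk - clk >= min_drop_mhz:
            events.append((prev_clk, clk, tdp))
    return events

for prev_clk, clk, tdp in throttle_events(samples):
    print(f"power-limit throttle: {prev_clk} -> {clk} MHz at {tdp}% TDP")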
 
Guess who's a smart cookie? *cough*DASHIT is*cough*

So I toyed around with lower mem speeds and they drastically reduce TDP usage. By testing with stock memory I was able to determine a maximum core clock that produces triangle artifacts instead of just driver TDR'ing/crashing. Turns out everyone is probably going to be throttling at around 1500-1520 core if using 8 GHz+ memory for playing games (at least on the MSI Gaming GTX 970 and anything but the Gigabyte G1, which is known to have a higher BIOS TDP limit). This is also why we're seeing everyone report driver crashes instead of hard locks when they try to test ;). The clocks where this happens will vary with ASIC quality, but this is why.

Using stock memory and +0 mV, I find I start getting triangle artifacts at around 1540 MHz on the core in SLI, which doesn't throttle down, hit the power limiter, or crash for at least the couple of minutes I left it running. If I up the core voltage to just +12 mV instead of 0 mV, I hit the limiter, it throttles a little (and automatically lowers the volts), and it crashes :eek:. Long story short, custom BIOS flashing is going to unchain GTX 970 cards for people once the new nvflash is out and about. The TDP limit is what is holding back real OCing on these cards and causing driver crashes/TDRs instead of actual artifacting/hard crashes.

The cards, interestingly, will NOT go to 110% TDP if the power limit is being reached. Instead they will lower the volts (it won't show in the OSD, only in the TDP reading; voltage monitoring doesn't appear to report the proper substeps) and driver crash, or lower the clocks 5-10 MHz at a time in conjunction and then either crash or stay stable, but perhaps crash after 30-60 minutes of gaming once the cards warm up and the TDP limit is hit again.

Another interesting tidbit: with my 2600K @ 4.4 GHz, I am actually NOT loading the GPUs to 100% inside of Firestrike normal mode... I can see the GPU usage sitting at 85-89% the entire run, which is part of why some people's GPU score is lower here.


I then tested for the max memory clock inside of the TDP with as high a clock as I could run on the core. My best game-stable result (2 hours) is this:

http://www.3dmark.com/3dm/4200438

1514 MHz core and 8224 MHz mem, resulting in an 11042 GPU score in Firestrike Extreme. This was with no volt adjustment on the cards, with the TDP peaking (max reading) at 106% on one card and 101% on the other, and them sitting at 98-102% most of the time. It's about the same in games and does not throttle at all, while so far being rock-stable and artifact-free.



Very interesting finds. Memory speed seems to be important for high-res gaming, but for people running a single 2560x1440, 1920x1080, or 1920x1200 monitor, boosting the GPU and leaving the memory down may be a good strategy to get the best possible performance within the limited TDP available on the GTX 970, and possibly the GTX 980 too. 7000 MHz GDDR5 may be enough for lower-res gaming, especially with the enhancements to compression. Thanks for sharing.
 
Guess who's a smart cookie? *cough*DASHIT is*cough*

So I toyed around with lower mem speeds and they drastically reduce TDP usage. By testing with stock memory I was able to determine a maximum core clock that produces triangle artifacts instead of just driver TDR'ing/crashing. Turns out everyone is probably going to be throttling at around 1500-1520 core if using 8 GHz+ memory for playing games (at least on the MSI Gaming GTX 970 and anything but the Gigabyte G1, which is known to have a higher BIOS TDP limit). This is also why we're seeing everyone report driver crashes instead of hard locks when they try to test ;). The clocks where this happens will vary with ASIC quality, but this is why.

Using stock memory and +0 mV, I find I start getting triangle artifacts at around 1540 MHz on the core in SLI, which doesn't throttle down, hit the power limiter, or crash for at least the couple of minutes I left it running. If I up the core voltage to just +12 mV instead of 0 mV, I hit the limiter, it throttles a little (and automatically lowers the volts), and it crashes :eek:. Long story short, custom BIOS flashing is going to unchain GTX 970 cards for people once the new nvflash is out and about. The TDP limit is what is holding back real OCing on these cards and causing driver crashes/TDRs instead of actual artifacting/hard crashes.

The cards, interestingly, will NOT go to 110% TDP if the power limit is being reached. Instead they will lower the volts (it won't show in the OSD, only in the TDP reading; voltage monitoring doesn't appear to report the proper substeps) and driver crash, or lower the clocks 5-10 MHz at a time in conjunction and then either crash or stay stable, but perhaps crash after 30-60 minutes of gaming once the cards warm up and the TDP limit is hit again.

Another interesting tidbit: with my 2600K @ 4.4 GHz, I am actually NOT loading the GPUs to 100% inside of Firestrike normal mode... I can see the GPU usage sitting at 85-89% the entire run, which is part of why some people's GPU score is lower here.


I then tested for the max memory clock inside of the TDP with as high a clock as I could run on the core. My best game-stable result (2 hours) is this:

http://www.3dmark.com/3dm/4200438

1514 MHz core and 8224 MHz mem, resulting in an 11042 GPU score in Firestrike Extreme. This was with no volt adjustment on the cards, with the TDP peaking (max reading) at 106% on one card and 101% on the other, and them sitting at 98-102% most of the time. It's about the same in games and does not throttle at all, while so far being rock-stable and artifact-free.

2aI774K.png

WOW, you got that on stock volts, AND you were able to keep the TDP below the power limit. Man, those MSI cards are truly badass.

Just been waiting for them to pop up on Newegg so I can snag 2. And is that score for 4K?
 
I noticed that lowering my core voltage (which I had maxed at +87 mV) to +35 mV caused my score to go up. Check this out:
http://www.3dmark.com/3dm/4203657?

My core was stable at 1542 during that run.

Very interesting. I'm now playing with this some more to see if I can get more out of the memory. If the above is correct, perhaps I'll be better served by leaving the memory at default and just pumping the core up.
 
Can a few of you do something for me? I wanted to know if you guys are seeing what I am seeing. Whether I go +87 mV or leave overvoltage disabled, my OSD shows the same voltages in Precision X. I'm going to uninstall Precision X and go back to Afterburner and try again tonight. Can you try with and without the voltage bump and let me know if your OSD shows different values?

I tested last night without any voltage bumps and noticed the same overclocking results, but with lower temperatures. I'm not sure if the OSD is broken and reporting the wrong voltage, but it shouldn't be showing the same thing. I am starting to believe the OSD is broken. The OSD says I'm at 1.25 V on the secondary card whether I bump the voltage up or leave it at stock. Card 1 still shows as .5 mV lower regardless. Hmm.. :confused:
 
I just achieved my best result, but I'll need to back off on the memory a tad (I noticed the screen flicker a few times). 10833 overall score, 12729 graphics. :D

http://www.3dmark.com/fs/2859229

This was with +150 core / +500 memory, with +.5 mV through Afterburner. I'm going to back off the memory a little and see where I end up. These cards are overclocking beasts.

Any suggestions on additional tests I should do to confirm stability with these settings?
 
I just achieved my best result, but I'll need to back off on the memory a tad (I noticed the screen flicker a few times). 10833 overall score, 12729 graphics. :D

http://www.3dmark.com/fs/2859229

This was with +150 core / +500 memory, with +.5 mV through Afterburner. I'm going to back off the memory a little and see where I end up. These cards are overclocking beasts.

Any suggestions on additional tests I should do to confirm stability with these settings?
Seems a bit low. I got over 11000 overall and over 13000 for the graphics score even when throttling quite a bit. Maybe my slightly faster CPU and RAM are making that little bit of difference in this benchmark, though.
 
Yeah, I would venture your other components push it over the top. I'm backing my OC off a bit; I think I'll end up at +150/+400, which is giving me about 1542 core in Fire Strike. I suppose I could create a custom fan profile, which might give me a little more room.
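If you do go the custom fan profile route, the idea is just a set of (temperature, fan %) points that the software interpolates between. A minimal sketch of that interpolation, with placeholder points rather than a recommended curve:

# Hypothetical fan curve: (temperature in C, fan speed %) points, linearly interpolated.
CURVE = [(30, 30), (60, 45), (70, 60), (80, 85), (90, 100)]

def fan_speed(temp_c: float) -> float:
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, s0), (t1, s1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]

print(fan_speed(75))  # 72.5% with these placeholder points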
 
Just got my MSI Gaming 970 today, put it in, and oh God it's beautiful!

BVrVKQV.png

This is how I've adjusted my timings so far, and how 3DMark went. I assume I can probably go higher, considering most of you are going WAY higher than this! I always put my fan at 100%, btw; I don't mind the noise and I like keeping it as cool as possible. Although in the winter, it's cold enough that I don't leave it at 100%.
 
Hmmm, looks like my memory wasn't doing great at +400; I noticed a few screen flickers during the second test of Fire Strike (where it goes through the cavern). +300 seems to be okay, but I'm hoping to find a way to make +400 work since it looks like it's damn close. Maybe just upping my fan speed a bit will be enough.
 
I was starting to wonder why a guy on Guru3D with the exact same system as mine was pulling 3DMark scores 2K higher than I was... then I had enough sense to turn off the fps limiter in the Afterburner OSD. My total jumped up to over 13K now. Good enough for me!
I can't be the only one who forgot to do that.
 
Where is this limiter setting in afterburner? Are you referring to the graph overlay?

EDIT: I think I found it: Settings -> On-Screen Display -> More button. My frame limit is set to 0, which is the default from when I installed it.
 