The Official GTX980/970 OC & Benchmark Perf. Thread

Quick question about 3DMark Fire Strike: what should someone score with this benchmark on Windows 7? I hear that Windows 8 gets you a huge boost in score. I ran the demo version from Steam. Is it possible it was an outdated version of the bench?

18798 is my best score so far.

http://www.3dmark.com/fs/2953754

Yeah, I was chatting with someone on the GeForce forums who was comparing Fire Strike scores from similar systems (Z97 ASUS boards with 4790 CPUs and 2x GTX 970). The poster was on Windows 7 64-bit and his friend was on Windows 8.1; the friend would get around 25k on the graphics score while the poster was getting what I am getting, between 20 and 21k. The poster had a much higher CPU overclock, but the card clocks were very close, and their 3DMark 11 scores were about even. So it would appear something about Windows 8.1 boosts the Fire Strike graphics score a good bit. edit: this thread
 
I also wonder whether memory clocks affect it. I can't get much over +300 MHz on the memory before I start to see some instability in Valley. I wonder if some of the people running +500 and more on the memory are hitting error correction without realizing it. One card could be overclocked too far on the memory, with the error correction kicking in and lowering performance.
 
Yeah, I was chatting with someone on the GeForce forums who was comparing Fire Strike scores from similar systems (Z97 ASUS boards with 4790 CPUs and 2x GTX 970). The poster was on Windows 7 64-bit and his friend was on Windows 8.1; the friend would get around 25k on the graphics score while the poster was getting what I am getting, between 20 and 21k. The poster had a much higher CPU overclock, but the card clocks were very close, and their 3DMark 11 scores were about even. So it would appear something about Windows 8.1 boosts the Fire Strike graphics score a good bit. edit: this thread

Yeah, I think Windows is holding me back, but there is something else I can't put my finger on. With the rig in my sig I should be scoring around 20k or so; I'm under 19k.

Cards in SLI were set to 1501 core / 8000 memory at +87 mV, +25 mV, and +0 mV, and that was my best run. I've even run 1520 core with 8000 memory with similar results. VSync off, etc.

Anyway, I've never cared about synthetic benchmarks but I'm bored. LOL

I think I'll just move on and play some Borderlands 2. :D
 
Yeah, I was chatting with someone on the GeForce forums who was comparing Fire Strike scores from similar systems (Z97 ASUS boards with 4790 CPUs and 2x GTX 970). The poster was on Windows 7 64-bit and his friend was on Windows 8.1; the friend would get around 25k on the graphics score while the poster was getting what I am getting, between 20 and 21k. The poster had a much higher CPU overclock, but the card clocks were very close, and their 3DMark 11 scores were about even. So it would appear something about Windows 8.1 boosts the Fire Strike graphics score a good bit. edit: this thread

Turns out that the graphics score improves with a more powerful CPU.
When I pushed my 4770K further, from 4.7 GHz to 4.8 GHz, I went from 15955 on the graphics subscore to my current best of 16162. It's a suicide run, in the sense that I get an artifact here and there, though it rarely crashes.

1531 MHz (+139) / 8512 MHz (+750)
http://www.3dmark.com/3dm/4344102

A 2% increase in CPU clock speed yields 1% higher graphics subscore on my system.
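As a sanity check on that ratio, here's the arithmetic behind it, using only the clocks and subscores quoted in the post above (a throwaway Python sketch, nothing assumed beyond the posted numbers):

```python
# Numbers from the post: 4.7 -> 4.8 GHz CPU, 15955 -> 16162 graphics subscore.
cpu_gain = (4.8 - 4.7) / 4.7 * 100            # percent CPU clock increase
score_gain = (16162 - 15955) / 15955 * 100    # percent graphics-subscore increase

print(f"+{cpu_gain:.1f}% CPU clock -> +{score_gain:.1f}% graphics subscore")
# +2.1% CPU clock -> +1.3% graphics subscore
```

So the exact figures work out closer to 2.1% CPU for 1.3% graphics, i.e. a bit better than the quoted 2-to-1.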

I also wonder whether memory clocks affect it. I can't get much over +300 MHz on the memory before I start to see some instability in Valley. I wonder if some of the people running +500 and more on the memory are hitting error correction without realizing it. One card could be overclocked too far on the memory, with the error correction kicking in and lowering performance.

Memory is critical in maxing out that score. From all of the combinations I've tested, I've concluded that a 1% core clock increase is equivalent to a 2.2% memory clock increase.

A couple of data points using the Fire Strike graphics subscore:
1379 MHz / 7000 MHz - 13920
1531 MHz / 7000 MHz - 15222
1379 MHz / 8096 MHz - 14742
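Those three data points are enough to back out the per-percent sensitivities behind the 2.2x claim. A rough sketch (values taken straight from the list above; this assumes each knob scales roughly linearly over this range):

```python
# Fire Strike graphics-subscore data points from the list above
# (core MHz, memory MHz, graphics subscore).
base = (1379, 7000, 13920)
core_run = (1531, 7000, 15222)  # only the core raised
mem_run = (1379, 8096, 14742)   # only the memory raised

def pct(new, old):
    """Percent change from old to new."""
    return (new - old) / old * 100

# Score gained per percent of clock gained, for each knob in isolation.
core_sens = pct(core_run[2], base[2]) / pct(core_run[0], base[0])
mem_sens = pct(mem_run[2], base[2]) / pct(mem_run[1], base[1])

print(f"core: {core_sens:.2f}% score per % clock")        # ~0.85
print(f"memory: {mem_sens:.2f}% score per % clock")       # ~0.38
print(f"core-to-memory ratio: {core_sens / mem_sens:.2f}")  # ~2.25
```

The ratio comes out around 2.25, which lines up with the 2.2 equivalence quoted above.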

I have to take my memory way up before I start seeing a performance hit. I still see a score improvement from +654 to +750; +797 is slower and artifacts heavily. It can also hit huge pauses at the +750 mark, though at that point it could be either the memory or the power limit causing it.

I don't get any artifacting at +591 (8192 MHz), which I've been using as my game-stable setting, for what it's worth. With the core bouncing between 1493 and 1518 MHz in games, I think this card is solid overall.
 
You mean to tell us that the benchmark score goes UP when you get a better CPU? That's SHOCKING!!!!!

I think what Womper is referring to is that the graphics portion of the benchmark score goes up with a faster CPU. In 3DMark it's split: you'll see a graphics score, a CPU physics score, and an overall (combined) score.

This proves that processors still suck @ss. We aren't feeling it in games too much, but I feel that GPUs are sitting there waiting for processors to give them work to crunch. GPUs are far beyond what today's processors can feed them.

If things were more competitive and AMD put a shiny boot in Intel's arse in the CPU arena, then we'd have more than these incremental updates and get some processors that really push the boundaries. Then our GPUs would shine even more.
 
Huh, interesting. Remind me to run some benches when I get my 970s back from RMA. I wonder if a hex-core 4930K will be enough to push the bottleneck back to the GPUs.
 
You mean to tell us that the benchmark score goes UP when you get a better CPU? That's SHOCKING!!!!!

lol

I think what Womper is referring to is that the graphics portion of the benchmark score goes up with a faster CPU. In 3DMark it's split: you'll see a graphics score, a CPU physics score, and an overall (combined) score.

Thank you.

In other words, unless you have an identical CPU setup, you can't even ask the question "Why are my identically overclocked GPUs giving me a crappier graphics subscore?" We all know the overall score will differ.
 
Yeah, I think Windows is holding me back, but there is something else I can't put my finger on. With the rig in my sig I should be scoring around 20k or so; I'm under 19k.

Cards in SLI were set to 1501 core / 8000 memory at +87 mV, +25 mV, and +0 mV, and that was my best run. I've even run 1520 core with 8000 memory with similar results. VSync off, etc.

Anyway, I've never cared about synthetic benchmarks but I'm bored. LOL

I think I'll just move on and play some Borderlands 2. :D

Haha, yeah, I haven't been this bench-crazy in a LONG time. The lowish scores got under my skin, though, and sent me on a bit of a tear. OT, but I even busted out my old water-cooling gear and bought a water block for my CPU. I haven't installed it yet but hope to this weekend. My load temps under Prime and burn-in are too high for my taste, but in games they are fine... it still bugs me, though. In the meantime I am getting back to just gaming too. :)

Turns out that the graphics score improves with a more powerful CPU.
When I pushed my 4770K further, from 4.7 GHz to 4.8 GHz, I went from 15955 on the graphics subscore to my current best of 16162. It's a suicide run, in the sense that I get an artifact here and there, though it rarely crashes.

1531 MHz (+139) / 8512 MHz (+750)
http://www.3dmark.com/3dm/4344102

A 2% increase in CPU clock speed yields 1% higher graphics subscore on my system.
[snip]

Yeah, that makes sense. Managing the graphics operations is a system-wide workload until you get into the specific image-quality work the GPU does (mostly) internally, especially with SLI and these synthetic benches. Taking my CPU from 4.25 to 4.6 GHz boosted my graphics score by about 2000 points (from 18k-something to 20k-something). I will have to go back and verify that, but it did help a lot. I still see Windows 8.1 people with graphics scores in the 25k range, significantly higher than similar Windows 7 setups.
 
I've had much greater success with stability by not increasing the voltage in my overclock. It seems fairly stable at +125/+300 in most games, including Shadow of Mordor. Going to keep playing, but this puts me over 1500 MHz in most games (BF4, etc.), which is what I was aiming for. I have been able to bench at 1542, but I'm not certain it's fully stable.
 

I thought about buying either 2x 970 or a 980, but seeing how well my 290 Crossfire setup works with the 14. drivers, I decided to kill that idea (they are under water as well).

My cards run at 1085/1425 (34C idle / 46C load) with no voltage increases, and I get a 17,168 score in Fire Strike.
http://www.3dmark.com/3dm/4375419?

I am not trying to start a war here, just wanted to put this as reference since I thought about switching.
 
PNY 970
Stock: 1228 MHz
+33 mV: 1450-1470 MHz
Any memory overclock is unstable
 
So for shits and giggles, I decided to lower my monitor's refresh rate from 120 Hz to 60 Hz to try something. And WHOA, look at that: I now have the options for DSR.

Interesting, so DSR doesn't like refresh rates over 60 Hz?
 
So for shits and giggles, I decided to lower my monitor's refresh rate from 120 Hz to 60 Hz to try something. And WHOA, look at that: I now have the options for DSR.

Interesting, so DSR doesn't like refresh rates over 60 Hz?
I have no problem with DSR at 144 Hz. I've confirmed through the OSD and frame rates with VSync on that it's actually running at 144 Hz with either 1440p or 4K downsampled to 1080p.

Monitor is an ASUS VG278HE.
 
So for shits and giggles, I decided to lower my monitor's refresh rate from 120 Hz to 60 Hz to try something. And WHOA, look at that: I now have the options for DSR.

Interesting, so DSR doesn't like refresh rates over 60 Hz?

I can do DSR at 120 Hz. I may try Surround with SLI again when I get home, but lower my refresh to 60 Hz just to see if that helps.

Funny thing is, Marcdaddy told me he had to turn down his refresh rate to get DSR working on his ASUS G-Sync monitor, if I'm not mistaken.
 
I can do DSR at 120 Hz. I may try Surround with SLI again when I get home, but lower my refresh to 60 Hz just to see if that helps.

Funny thing is, Marcdaddy told me he had to turn down his refresh rate to get DSR working on his ASUS G-Sync monitor, if I'm not mistaken.

Huh, makes me wonder if it just doesn't like these Korean monitors at 120 Hz. Who knows.
 
I can do DSR at 120 Hz. I may try Surround with SLI again when I get home, but lower my refresh to 60 Hz just to see if that helps.

Funny thing is, Marcdaddy told me he had to turn down his refresh rate to get DSR working on his ASUS G-Sync monitor, if I'm not mistaken.

Right now I can only get DSR on my ROG Swift with one card at 60 Hz; once I have it enabled I can turn my monitor back to 144 Hz. It never works in SLI for me.
 
Is anyone here running ASUS RealBench to test for stability?
The test proceeds without errors, but as soon as it finishes, or I halt it, I get an error saying the display driver stopped responding and has recovered.
 
Looks like NVIDIA is finally working on a fix for people with SLI and voltage problems.

https://forums.geforce.com/default/...y-lower-voltage-than-the-other-driver-bug-/6/

At least it has now been acknowledged.

I wish they would give us an option to disable the fluctuating boost voltage. I'm almost willing to bet that everyone with a 1430 MHz-and-up OC will have stability issues in less demanding games, because the card drops voltage way too much relative to core speed.
 
Has anyone with a reference-cooler 980 (and if EVGA, even better) tried replacing the thermal compound on their GPU? Just wondering if there's any significant thermal advantage to replacing it with a high-quality paste.
 
Put in a quick OC attempt here, card seems to scale nicely with clock increases:

10101 with NVIDIA GeForce GTX 970(1x) and Intel Core i5-2500K Processor
Graphics Score 12668
Physics Score 7883
Combined Score 4816

MSI Gaming 970 @ +100 / +500
2500K @ 4.4 GHz

New scores with a Strix 970, stock speeds:
9340 with NVIDIA GeForce GTX 970(1x) and Intel Core i5-2500K Processor
Graphics Score 11539
Physics Score 7908
Combined Score 4329

Strix 970, Power Target @120%:
9454 with NVIDIA GeForce GTX 970(1x) and Intel Core i5-2500K Processor
Graphics Score 11716
Physics Score 7904
Combined Score 4391

Strix 970, Power Target @120% + Boost Clock +143 (Max in ASUS GPU Tweak):
9994 with NVIDIA GeForce GTX 970(1x) and Intel Core i5-2500K Processor
Graphics Score 12561
Physics Score 7922
Combined Score 4671
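A side note on how these overall numbers relate to the subscores: 3DMark's overall Fire Strike score is a weighted harmonic mean of the graphics, physics, and combined subscores. The 0.75/0.15/0.1 weights in the sketch below are an assumption based on Futuremark's published scoring formula, but they do reproduce the runs posted above (the reported scores appear to be truncated, not rounded):

```python
def fire_strike_overall(graphics, physics, combined):
    """Weighted harmonic mean of the three Fire Strike subscores.
    Weights of 0.75 / 0.15 / 0.1 are assumed from Futuremark's docs."""
    return 1.0 / (0.75 / graphics + 0.15 / physics + 0.1 / combined)

# Runs posted above (graphics / physics / combined -> overall):
print(int(fire_strike_overall(12668, 7883, 4816)))  # 10101 (OC'd MSI Gaming 970)
print(int(fire_strike_overall(11539, 7908, 4329)))  # 9340  (stock Strix 970)
print(int(fire_strike_overall(11716, 7904, 4391)))  # 9454  (Strix, 120% power target)
```

The heavy weighting toward the graphics subscore is also why the Windows 8.1 graphics-score gap discussed earlier moves the overall score so much.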
 
I wish they would give us an option to disable the fluctuating boost voltage. I'm almost willing to bet that everyone with a 1430 MHz-and-up OC will have stability issues in less demanding games, because the card drops voltage way too much relative to core speed.

Interesting you mention that, because I've had some stability problems playing the Elite: Dangerous beta and wasn't really sure why, but that would make a lot of sense. I hope they do something about it, but in the meantime, what about turning on the "KBoost" option? In PrecisionX it says that "KBoost forces the card to operate at full boost speed, regardless of load; this will generate more heat and use more power, but can help in certain testing environments as well as in benchmarking."
 
Has anyone with a reference-cooler 980 (and if EVGA, even better) tried replacing the thermal compound on their GPU? Just wondering if there's any significant thermal advantage to replacing it with a high-quality paste.

You first :D

I have a pair of EVGA reference cards and they run fine; temps are great and the fan isn't noisy. I usually only replace TIM if I feel things aren't as good as they should be. What are your temps? Usually if you're asking about this, deep down you feel your temps are hotter than they should be, and if you feel that way, they probably are. ;)
 
Just received my EVGA GTX 980 Superclocked ACX 2.0. The listed boost clock is 1367 MHz, but it's boosting as high as 1430 MHz.
 
You first :D

Oh alright.

I have a pair of EVGA reference cards and they run fine; temps are great and the fan isn't noisy. I usually only replace TIM if I feel things aren't as good as they should be. What are your temps? Usually if you're asking about this, deep down you feel your temps are hotter than they should be, and if you feel that way, they probably are. ;)

Temps aren't terribly high. I have mine sitting at a fan-speed plateau of 58% until it hits higher temps, so it's not bad, but it's noticeable considering everything else in my system is super quiet. If I could knock a few degrees off and drop the fan speed a few notches, it would be nice.
 
Oh alright.
Temps aren't terribly high. I have mine sitting at a fan-speed plateau of 58% until it hits higher temps, so it's not bad, but it's noticeable considering everything else in my system is super quiet. If I could knock a few degrees off and drop the fan speed a few notches, it would be nice.

I have some Arctic MX5 here, so technically I could do it, but I would expect similar results. My temps are better than I thought they would be with reference EVGA cards. At best I think a TIM reapplication would be an even wash; I'd be absolutely shocked if I got better results.
 
Trying to overclock my Gigabyte 970, I can't seem to push the voltage without being throttled during the Fire Strike demo. If I add +25 mV, I get throttled down to stock voltage halfway through, and if I set +50 mV, I get throttled down to +25 mV. The clock speed drops along with the voltage.

I checked my TDP and it barely went above 90%, while in the [H] review the card seems to hit max TDP before throttling occurs. My temps look fine too; I set the max temp to 85C just in case, along with 112% TDP in the OC tool. Nothing helps :(

Does anyone know what else could be causing the throttling?
 
Trying to overclock my Gigabyte 970, I can't seem to push the voltage without being throttled during the Fire Strike demo. If I add +25 mV, I get throttled down to stock voltage halfway through, and if I set +50 mV, I get throttled down to +25 mV. The clock speed drops along with the voltage.

I checked my TDP and it barely went above 90%, while in the [H] review the card seems to hit max TDP before throttling occurs. My temps look fine too; I set the max temp to 85C just in case, along with 112% TDP in the OC tool. Nothing helps :(

Does anyone know what else could be causing the throttling?

Try maxing out voltage. Worked for me to keep from throttling.
 
Trying to overclock my Gigabyte 970, I can't seem to push the voltage without being throttled during the Fire Strike demo. If I add +25 mV, I get throttled down to stock voltage halfway through, and if I set +50 mV, I get throttled down to +25 mV. The clock speed drops along with the voltage.

I checked my TDP and it barely went above 90%, while in the [H] review the card seems to hit max TDP before throttling occurs. My temps look fine too; I set the max temp to 85C just in case, along with 112% TDP in the OC tool. Nothing helps :(

Does anyone know what else could be causing the throttling?
The [H] review was of the MSI 4G GAMING version. The Gigabyte G1 has a different power profile from the MSI, including higher core voltage at stock. Not every card is going to overclock the same. You should still be able to get a solid overclock before even touching the core voltage.
 
Trying to overclock my Gigabyte 970, I can't seem to push the voltage without being throttled during the Fire Strike demo. If I add +25 mV, I get throttled down to stock voltage halfway through, and if I set +50 mV, I get throttled down to +25 mV. The clock speed drops along with the voltage.

I checked my TDP and it barely went above 90%, while in the [H] review the card seems to hit max TDP before throttling occurs. My temps look fine too; I set the max temp to 85C just in case, along with 112% TDP in the OC tool. Nothing helps :(

Does anyone know what else could be causing the throttling?

Make sure the power slider is at maximum. Try leaving voltage at stock and adjusting only the GPU core boost clock, to see how far you can take the core alone at stock voltage and stock memory clocks with max power (this varies, but it's usually 110-120%).

Once you have that value, you can take your core down 10 MHz and boost the VRAM. If I were gaming at 2560x1440 and below, I would stick with the stock 7000 MHz value on the VRAM, but that's just me. I think core clock nets the best results, but that's debatable.

With most 970s, stock voltage seems to be the way to go. Higher voltage seems to cause the cards to throttle in benchmarks specifically; games seem to work fine from what I've seen.
 