The Official GTX980/970 OC & Benchmark Perf. Thread

Huh I don't know about you guys, but I don't even have DSR as an option in my drivers.

Newest WHQL for the 980/970 cards.

WTF is going on with these cards.

P.S. Both cards now have the same BIOS, and the voltages are still screwy.

P.P.S. DDU was used to uninstall all AMD drivers.

If I disable SLI and check under 3D Settings in the Global tab, it shows up right under CUDA GPUs. It's called DSR Factors. Check if you see it with SLI disabled when you have a chance.

I just heard from Marcdaddy, who doesn't have the option regardless of whether SLI is on or not.

Damn.

Nvidia will have a kick ass driver to fix most/all of this soon. They have MFAA coming in about 15-20 days, so I'm expecting some of these other issues to be fixed soon too.
 
They're also supposed to introduce DSR to GK110-based cards at some point, as well. I'm curious how well that will perform.
 

Ahh yea, I see it now with SLI disabled. I knew SLI didn't work with DSR; I just figured it would gray out the option, not remove it entirely.

No offense, but after this driver launch, no one should bitch about AMD drivers anymore if you ask me. These haven't been the greatest drivers for a launch.
 
FWIW, I had trouble getting the DSR resolutions to work outside of games that had the option in the "Geforce Experience" control panel, but they now show up in all games. I'm not sure if I just needed to reboot the system or what - I tried setting them up almost immediately after getting the card two weeks ago, and just noticed that they were working in other games last night. So I'm not sure what caused the extra resolutions to start working, but maybe try a reboot.
 
Screenshot running in game or it never happened :D

Seriously though, can you post a screenshot? I haven't met anyone who actually has both running at the same voltage.

[screenshot: CtgMXrD.jpg]
 
For those that do not get DSR even when not using SLI, what monitors do you have attached, and what resolutions (including custom resolutions) are listed in the control panel for each monitor?

I've had DSR options disappear on me when I had both my 4K and 1080p (120hz) monitor attached at the same time. But then the option comes back when I start changing one of them to be primary instead of the other, or when I disable a monitor then re-enable it.

It kind of makes sense in a way - running at 4K removes the option. However, it gets complicated on a multimonitor setup where one of them is less than 4K. It may be dependent on which monitor you open the NVIDIA control panel on, or which monitor is the primary.
 
I'm using a ROG Swift, 1440p 144Hz. G-Sync on or off doesn't matter; I just tried 60Hz as well and it's still a no-go.
 

My story is kind of strange. The option is there if I disable SLI but disappears when I enable it. At first the option was always there.

I have three 120Hz 1080p monitors hooked up as a portrait NV Surround setup: 3240x1920.
One is in the single DVI port I have and two are in the DisplayPorts (far left & far right).

Funny you mention that, though. I did swap my two DisplayPort monitors recently, and that's when I lost the option. Maybe when I get home I'll swap them back. Hmm.
 
I CAN CONFIRM DSR ONLY WORKS WITH GSYNC DISABLED and on one card; SLI will not work with or without G-Sync enabled!
 
EDIT: The driver below is for the 980M and 970M only, it seems. I'm blind. Try at your own risk.

Guys there's a 344.24 WHQL Driver just for the 980 and 970 on the UK website. Hasn't hit the US site yet but thought I'd let you guys know.

The driver string is older than the current "official" WHQL drivers, though.
 
On my setup, DSR with SLI never works. I just reinstalled the newer drivers after trying the older ones and got the same results.

Yes, I think DSR and SLI usually don't work together, but I can't say never, because I actually ran it once for 10 minutes.

Now the option is gone.:(
 
I'll check when I get home, which will be 5 hours later at the very least. :p

Also, when I did +160 core without touching volts, both of my cards boosted to 1506MHz, and one was at 1.2V while the other was at 1.212V. This was the case in all benchmarks (3DMark11, Heaven, Valley) as well as Sleeping Dogs.
 
marcdaddy, Lord_Exodia: Yeah, I wonder if using a different DP port (if available - guess not on most 970s) will get that option back?

If you don't have another DP port available on the card, you could temporarily try the HDMI or DVI connection, leaving the other unplugged, and see if the driver gets a kick in the ass and gives you back DSR. Although it would suck if, when you plug DP back in, you lose it again. :(
 
We are going to have to re-think the way we overclock. In the past, priority was put on voltage, but with the current TDP/power limits in place, and the way GPU Boost and PowerTune work, overvolting can hurt, or lessen, the potential of your overclock. This is mostly true on the 970 and 980, and also the 290X. It's something that I think everyone pretty much gets and understands now, but it is not said enough or thought out on paper to explain it well. I know, right: mind frack. I will discuss this in the upcoming review.
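To make that concrete, here's a minimal toy model in Python (my own illustration; the P = k * V^2 * f relation and the constant are simplifying assumptions, not Nvidia's actual boost algorithm) of why more voltage can mean a lower sustained clock once the power cap, rather than stability, is the binding constraint:

    # Toy model of a power-limited boost clock. Purely illustrative:
    # real GPU Boost behavior is more complex, and k is a made-up
    # constant picked so the numbers land near real GTX 980 clocks.

    def sustained_clock_mhz(vcore, power_cap_w, k=0.083):
        """Highest clock (MHz) the power budget allows, assuming P = k * V^2 * f."""
        return power_cap_w / (k * vcore ** 2)

    for v in (1.162, 1.200, 1.250):
        print(f"{v:.3f} V -> ~{sustained_clock_mhz(v, 180):.0f} MHz under a 180 W cap")

At a fixed cap, bumping Vcore from 1.200V to 1.250V costs over 100 MHz of sustainable clock in this model, which is exactly the "overvolting can hurt" effect.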

Good article Brent, it must've been a painstaking process to find the clocks for all those cards.

I'm curious if Brent's findings are similar to other 980 owners out there; mine definitely doesn't behave like Brent's. Before we get into this, I have the EVGA SC edition, so it could all be due to that. In terms of max clocks, I have 4 choices. Clock speeds are what the OSD measured most of the time during Crysis 3 and 3DMark.
1. 1480MHz (+114core) / 8192MHz with +0mV
2. 1493MHz (+127core) / 8192MHz with +0mV
3. 1493MHz (+127core) / 8192MHz with +87mV
4. 1506MHz (+139core) / 8192MHz with +87mV

Turns out option 4 is the fastest. Furthermore, options 1 and 3 are faster than option 2. So maybe my card in particular benefits more from overvolting and hitting the next clock bin than it does by not overvolting and having more power budget available.

Here's something odd- 1480MHz is outperforming 1493MHz. There's more to this situation than just the reported clock speed you hit. It also turns out 1506MHz barely beats 1493MHz- it doesn't scale as well in actual FPS as it should. I can actually bump the core up again to 1518MHz, and at that point things get unstable, but also a whole lot slower- like equivalent to 1400MHz.

Without the overvoltage, 1493MHz experiences occasional instability, and 1506MHz is completely unstable. Even with overvoltage, 1506 sometimes crashes. I think options 2 and 4 are too unstable to yield the FPS increase they should, but it's still definitely better to overvolt.
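A quick way to check whether a higher reported bin is actually paying off is to compare FPS scaling against clock scaling. A small helper sketch (my own, with placeholder numbers rather than actual benchmark results):

    # Ratio of FPS gain to clock gain between two OC settings.
    # 1.0 means FPS scaled perfectly with clock; well below 1.0 means
    # the extra clock is being eaten by throttling or instability.

    def scaling_efficiency(clk_base, fps_base, clk_oc, fps_oc):
        return (fps_oc / fps_base) / (clk_oc / clk_base)

    # e.g. 1480 MHz @ 60.0 fps vs 1506 MHz @ 60.4 fps:
    print(f"{scaling_efficiency(1480, 60.0, 1506, 60.4):.3f}")  # ~0.989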
 

Try only +25 on the voltage. I find that this one works best for me. Others have reported differently. When you bump +87 it only gives you +25 more, or at least that's what my OSD shows. I did this and got better stability. My theory is that the +62 you're not getting still penalizes you and makes you hit that wall faster. In my case, with my cards, the +25 gave them the extra oomph needed to stabilize an extra 20MHz. I'm sitting around 1521MHz with both cards. I may be able to do more, in fact, but with one card running 50mV lower, I don't think that card has it in it to give me any more at the 1.162V it's getting. C'mon Nvidia, fix this pls :(
 
For what it's worth I have my monitor connected to the DVI-D port and DSR is working with SLI.

I'm not changing ports for fear of the DSR option disappearing, lol.
 

I just tried it and no luck. I also tried the other DisplayPorts and it's a no-go. I did disable G-Sync just to be sure. It might be a bug with the ROG Swift?
 

Actually, even +50mV was crashing on 1493MHz. I'll rerun it in case it was caused by the OSD.

By the way, your sig says "Big Maxwell 5021mhz core " :p
 

No mistake there. I'm actually beta testing the GTX 980 Ti for Nvidia.

J/K fixed
 

It's interesting that your cards are boosting less than mine, even though I have my offset lower. Maybe that is how GPU Boost 2.0 works? When I set my boost to 150 I get around 1542 on the core. If I set it to 125 (which is where stability appears best) I'm at around 1516.
 

You're running a single card, right? Maybe I should try with SLI off and see what happens. I get this feeling one card is dragging the other down (probably the one that needs 1.212V).
 

Hmm, seems like even in tri-SLI you're not completely immune to the bug. TBH, I've never seen an OSD on a triple-SLI GTX 980/970 setup before. It looks like you still have one card running below voltage, and it looks like you're running stock voltage too @ 1.215. I bet if you bump your voltage, card 2 would run around 1.162 or so and the others probably @ 1.250, like my card runs. Hopefully the two you have stable stay that way. If you have time, can you try that?

If possible, post a screenshot if that happens. I want to send it to Manuel from Nvidia, who is looking into the voltage discrepancy bug in multi-card setups.
 
No matter what I set my voltage to, it will drop down to 1.187V after a bit, even if it's nowhere near the TDP limit.
 
This issue really makes me mad, to be honest. We've been listing the actual in-game frequency since GPU Boost was invented. I can't possibly see any other way to do it. I want to know what it is actually running at, not what it isn't. I get mad when other sites only list the base/boost clock when overclocking and never mention what it is actually running at. It is misleading. This is also how you detect throttling, or GPU clock changes over time while gaming, else you'd never know what was happening with the clocks. We have always, and will always, report it the right way.
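For anyone who wants to watch the actual clock the same way at home, here's a minimal logging sketch using the pynvml bindings (assumes the nvidia-ml-py package is installed; note that NVML exposes clocks, power, and temperature but not Vcore, so voltage still has to come from Afterburner or GPU-Z):

    # Sample the real graphics clock once a second to spot throttling
    # while a game or benchmark runs. Requires nvidia-ml-py (pynvml).
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

    try:
        while True:
            clk = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
            temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
            print(f"{time.strftime('%H:%M:%S')}  {clk} MHz  {watts:.1f} W  {temp} C")
            time.sleep(1)
    except KeyboardInterrupt:
        pynvml.nvmlShutdown()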

Hey Brent, I'm not sure if you've been following my thread or have poked in from time to time in between your review work, but would you or Kyle test a dual-card SLI 970/980 setup and note if you have a voltage discrepancy between the cards? So far it seems to be a software bug, since the issue doesn't follow a specific card: each works fine alone, but when you put them in an SLI setup the issue pops up regardless of which card is on top or bottom.

If you do see the same, can you contact the people at Nvidia you would usually reach during a review when something strange turns up, and see if they will look into it? The voltage discrepancy has caused instability issues for many of us in this thread.

Goldentiger posted a thread on the Nvidia Forums https://forums.geforce.com/default/...y-lower-voltage-than-the-other-driver-bug-/1/ and someone said they'd look into it, but it's been two weeks and no word. Maybe you guys can ask someone else; hopefully the person you nudge is a bit more aggressive in following up on the issue.
 

This is with +87mV:

[screenshot: zCGQRpT.jpg]

(1080p, max in-game settings, btw)
 
That's pretty awesome. Notice any negatives with tri-SLI? Stuttering at all?

No stuttering, though I have never noticed it with a multi-GPU setup.

BTW, settings in Tomb Raider for that bench:

[screenshot: punl0vd.png]

I usually run with no added voltage and let the cards boost to whatever they can, usually 1316MHz. With tri-SLI I don't need to mess with overclocking them, though they are 100% stable at 1500MHz.
 

Thanks for the pic in the other post. In your setup, it looks like there is some light at the end of the tunnel for you. By boosting your voltage +87 you're getting that 2nd card up to normal voltage levels while only boosting your top and bottom cards by 22mV, which isn't terrible and is actually good.

I wish there was something I could do to get my card to hit just the default voltage of 1.215V. Until Nvidia fixes it, it looks like my potential overclock will suffer. Anyway, this is all preliminary, because we all know that once BIOS flashing to disable the power-saving TDP wall is available, this is all just practice :D
 
Okay, so:

DSR + SLI = Sometimes
DSR + G-Sync = No
DSR + Single monitor (no G-Sync) = Yes

Seems like it. I don't have a G-Sync monitor, and with or without SLI the DSR option is always available for me.

Gigabyte 970s and driver 344.11
 
Alright guys, I finally got around to toying with the cards as I promised. In order to establish how volts and OC affect the actual boost clock, I decided to OC only the core, to keep things simple and avoid confounding variables.

Test methodology:
Unigine Heaven @ 1080p, single monitor, w/ 8x MSAA. Loop for 2 minutes at each voltage setting with a pre-defined OC, and monitor the boost clock as well as Vcore in real time using Afterburner 4.0. Power limit set to 112% in all cases.

At bone stock settings, boost clock is 1354 MHz and Vcore is 1.212V/1.200V.

With Unigine Heaven, I was able to determine that the absolute max my Gigabyte 970s would go in SLI is +173 core. Anything above that and it's an instant crash, and even then +173 core didn't seem to be stable. Boost clock was 1553 MHz initially and GPUs at 1.256V/1.243V, but after 30 seconds it dropped back to 1539 MHz and 1.231V/1.218V.

So I decided to stick with +170 core for my max OC. Then just for giggles I also did the same test with +160 core to see if there would be any difference. Results are below:



The results make good sense at +160 core, but at +170 core there are some weird anomalies. For some strange reason Heaven would always insta-crash at +170 core and +10 mV no matter how many times I tried to run it. And then for both OCs, the highest voltage setting +87 mV actually seemed to destabilize things, as at +160 OC Heaven just insta-crashed no matter what I did, and at +170 OC it crashed after 30 seconds and dropped back to stock boost.

What's really interesting is that, at least at +160 core, feeding the cards more volts helped to bump the boost clock up to a certain point. Oddly, while the cards didn't seem to want to do the full +160 core without upping volts, they were more than happy to do the full +170 core without asking for more power.

+40 mV seems to be close to the sweet spot for maxing out the boost clock on my cards, although somewhere between +40 mV and +50 mV is needed for +170 core to be sustainable. And as almost everyone has observed, the cards don't actually go above 1.25V, yet adding more volts clearly has a stabilizing effect up to a certain point.

One last thing: power draw never once exceeded 84% of TDP, even at the highest tolerated voltage levels, and indeed setting the power limit to 100% or 112% made zero difference to either the boost clock or the Vcore. I also discovered that needlessly adding volts past the optimal point only resulted in increased thermals and TDP with absolutely zero gain. Again, consistent with what everybody has reported so far.

Hopefully somebody will find this useful.
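If you dump your per-setting clock samples to a CSV (the offset_mv,clock_mhz layout below is a hypothetical example, not a format any tool emits by default), a few lines are enough to pull out the clock each voltage setting actually sustained:

    # Summarize a voltage-sweep log into the clock each offset held.
    import csv
    from collections import defaultdict

    samples = defaultdict(list)
    with open("oc_sweep.csv", newline="") as f:
        for row in csv.DictReader(f):
            samples[int(row["offset_mv"])].append(int(row["clock_mhz"]))

    for offset in sorted(samples):
        clocks = sorted(samples[offset])
        print(f"+{offset} mV: median {clocks[len(clocks) // 2]} MHz, min {clocks[0]} MHz")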
 
Finally got my water blocks in.
[screenshot: cine152akim.png]

Could still push further, but I'm good for now; time to actually play some games!
 