Overclocking and SLI?

Zarathustra[H]

Hey all,

I just set up SLI for the first time with two 980 Tis.

I haven't started overclocking yet (I want to get some custom cooling going first) but I have experimented with just bumping the power limits, temp limits and fans up to max, and have made some observations.

With a single card (EVGA Superclocked 980 Ti ACX 2.0+) installed, or with two cards installed but running in single-GPU mode, it clocks itself up to 1349 MHz (!?) and stays there for the duration of the game/bench.

Once both cards are installed and running in SLI mode, with all the same settings, they only clock themselves up to 1278 MHz, and stay there for the duration of the game/bench.

I would blame heat, since there are now two heat-producing cards next to each other rather than one, but they don't seem to be running any hotter, and my case has pretty good airflow.

Is there an automatic protection mode that kicks in under SLI, making it harder to overclock than a single card, or should I just proceed as usual?

Appreciate any thoughts, or links where I can read up on this.

Thanks,
Matt
 
Check voltages. With Maxwell cards, the NVIDIA drivers tend to sync the cards' clocks to the lowest common denominator to provide the highest stable clock possible, since SLI frame pacing depends a lot on the clocks, so it is normal for them to overclock less. Also, regarding "but they don't seem to be running any hotter": what are those temps? And what PSU are you using?
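
If you want to log those numbers per card outside of Afterburner, here is a minimal sketch using the pynvml bindings (this assumes the NVIDIA driver and the nvidia-ml-py package are installed; note that NVML does not expose core voltage, so GPU-Z or Afterburner is still needed for that part):

```python
# Minimal sketch: poll per-GPU graphics clock and temperature via NVML.
# Assumes the NVIDIA driver and the nvidia-ml-py (pynvml) package are installed.
import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]

try:
    while True:
        for i, h in enumerate(handles):
            name = pynvml.nvmlDeviceGetName(h)
            if isinstance(name, bytes):  # older pynvml versions return bytes
                name = name.decode()
            clock = pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_GRAPHICS)    # MHz
            temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)  # deg C
            print(f"GPU{i} {name}: {clock} MHz, {temp} C")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

Run that while the bench is looping and you can see exactly when and where the two cards diverge.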
 

Thanks for the suggestions.

I'm actually blanking on the temps right now, will have to check when I get home after work, I just remember them seeming very close to the load temps with a single card.

As for the PSU, it is a SilverStone Strider 1200 W Gold, the 1200 W version of the one [H] has a pretty good review of.

I feel fairly confident in the PSU, but that being said, I bought it in 2011, so it is getting older...
 
There are several things that can affect this: power supply, heat, or the silicon lottery. Have you tried overclocking the cards at all in SLI?
 
Yep, definitely aware of the silicon lottery, and I suspected it would sync the two cards to the one clocked lowest, I just wasn't sure.

I also didn't know if there was any other form of "stepping back" that they do simply because of SLI.

I'm considering picking up a pair of Corsair HG10 N980's and possibly a couple of Corsair H90's to cool them. I'm trying to determine if lowering the core temp will be worth my while, or if I should just save the cash and be happy as is.
 
If you want to overclock your Maxwell cards you're going to need to desync the card settings and clock the higher ASIC card a little faster to try and get them both running at the same voltage. When I had a pair of 970s I clocked the higher ASIC card 25 MHz faster than the other to get the result I wanted.

The alternative is to flash both cards with a custom BIOS that keeps the voltage constant on both cards.
 

Appreciate the suggestion.

How do I know the ASIC difference?
 
With the two 970s I had, I didn't experience any difference with one card vs. two. Both clocked to around 1500 on the core. Personally I wouldn't even bother messing with different clocks for different cards; just clock them in sync.
 
Zarathustra[H] said:
Appreciate the suggestion.

How do I know the ASIC difference?
With GPU-Z running, right-click the title bar and the option will be at the bottom of the menu, IIRC. Use the pulldown menu at the bottom of the program itself to switch between cards to get both ratings. You can also see which one needs to be offset higher by looking at the voltages under load: the one running at the lower voltage needs the higher clock.
 
Have you looked at GPU usage? No reason for a GPU to go faster if it's not being pushed to.
 

Agreed with the desync approach above. Another way to do it is to increase the voltage on the card with the higher ASIC.
 
Are you guys aware of any site that does a good writeup on this? I'm not really familiar with what ASIC means or why I'd want to clock one card a little higher.

Would like to understand the theory behind what I am doing before I do it.

Thanks,
Matt
 
No one is precisely clear on what the ASIC quality number actually measures.

Through observation, the general consensus is that the higher the ASIC quality, the higher the potential overclock without adding voltage. Higher-ASIC cards can therefore maintain higher Boost clock speeds without feeding the core more power; at the same time, such a card is not going to respond well to forced increases in voltage. I believe this is why my Titan X, which GPU-Z reports at 78% ASIC quality, did not respond well to the custom MAXAIR2 BIOS I tried. One can assume that this means the lower the ASIC quality, the higher the leakage. This is why the window in GPU-Z says lower ASIC is better for water cooling and higher voltage, while higher ASIC is better for air cooling and lower voltage.

The idea behind trying to match voltages in SLI is stability. With the way Maxwell cards aggressively respond to requested power states for a given clock, the sync in performance is going to cause the weaker card to flinch. This can lead to crashes or degraded performance over the length of a gameplay session. By matching voltages, both cards are theoretically synced in their power states and not just their clock speeds, which minimizes the ill effects of changing states under 3D load.

But this is all armchair engineering. If you believe what NVIDIA says then the differing voltages are really not a problem, despite the preponderance of anecdotal evidence to the contrary.
 
Well, there's my problem.

I got one at 82.3% and the other at 64.4%.

That 64.4% seems REALLY low. Makes me wonder if I should return it, eat the restocking fee, and try again.
 
If you're really concerned about it, it could be worth it because that much of a difference between the two cards is going to really limit your Boost clocks and overclocking ability in SLI. ASIC quality is a lottery, though. You have to consider if it will be worth the time and effort to keep exchanging cards until you get close to a match over the lifespan of your setup.
 
I would not worry about your PSU. These cards with an overclock on my system don't draw more than 750 W from the wall, and that is at 70% of 4K resolution with both GPUs at 99% usage in 3D rendering.
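
For a rough sanity check on that, here is a back-of-the-envelope power budget; the TDPs are the published reference figures, while the overclock headroom, rest-of-system draw, and PSU efficiency are assumptions:

```python
# Back-of-the-envelope power budget for a two-card 980 Ti system.
# TDPs are reference figures; headroom, rest-of-system draw and efficiency are assumptions.
gpu_tdp_w = 250          # GTX 980 Ti reference TDP
oc_headroom = 1.2        # assume ~20% extra draw with a raised power limit
cpu_tdp_w = 130          # Sandy Bridge-E TDP (e.g. i7-3930K/3960X)
rest_of_system_w = 75    # assumed: board, RAM, drives, fans, pumps

dc_load_w = 2 * gpu_tdp_w * oc_headroom + cpu_tdp_w + rest_of_system_w
efficiency = 0.90        # assumed for an 80 Plus Gold unit at this load
wall_draw_w = dc_load_w / efficiency

print(f"Estimated DC load: {dc_load_w:.0f} W")     # ~805 W
print(f"Estimated at wall: {wall_draw_w:.0f} W")   # ~895 W, well inside a 1200 W unit
```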


I would have no problem making a trade for one of my 2 cards.

One is 77.7% and the other is 69.6%

You can choose which one you'd like. They have never been run on air since I bought them; I installed waterblocks on them as soon as they left the packaging.

But if you want to go the RMA/return route I'd understand too; I've been contemplating doing that myself, as I've been having some hiccups with games crashing occasionally.
 
Could you tell us the load temperatures of the top and bottom cards?

Alright, so now I am REALLY scratching my head...

I captured the Afterburner charts below after a run in Unigine Heaven, with everything maxed at 4K.



GPU1, which boosts to 1349 MHz without overclocking when used alone, has the 82.3% ASIC, and runs at the lower voltage, is 15 C hotter than GPU2, which has the 64.4% ASIC and the higher voltage. Together, everything is limited to 1278 MHz (again, no overclocks).

Everything I know is telling me that GPU1 should be cooler than GPU2, not hotter....

That being said, the way I have my case set up might be partially to blame (really old pic, with a different motherboard and GPUs, but it still tells the story).



As you can see, fresh air is pulled in from the bottom through a 180 mm rad on the right, while the 180 mm fan on the left pulls in cold air.

GPU1 is probably sucking in hotter air that has been warmed by the CPU on its way in through the radiator.


My plan is to get a new case, a Corsair H110i GTX exhausting heat out of the case, and an HG10 with an H90 for each of the GPUs blowing out their heat as well. Hopefully this will solve the issue.

I'd do a proper custom water cooling loop, but I really don't want to mess with it, so AIO's will have to do.

Those damned HG10 N980s can't be released soon enough.

Also, holy frame time, Batman...
 
Since my 980 Ti G1 blows hot air inside the case, my second 980 Ti will be water cooled using an AIO (probably an HG10 with an H80i GT).
 
Hey, GPU1 is the top card and GPU2 is the bottom card in most orientations, so that is correct, but your case isn't like most cases. o_o
 
Those frame times actually are not that bad for SLI, but that case is a thermodynamic nightmare...
 

I was under the impression that "good" frame times in SLI are on the order of 17ms, and static, without peaks. Maybe that's an ideal perfect world that never occurs, but...
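
For reference, that ~17 ms figure is just the per-frame budget at 60 FPS; a quick sketch of the conversion, plus how the spikes rather than the average show up in a capture (the sample numbers below are made up purely for illustration):

```python
# Frame time <-> FPS conversion and a simple look at spikes in a capture.
# The sample frame times below are made up purely for illustration.
frame_times_ms = [16.4, 17.1, 16.8, 22.5, 16.9, 31.0, 16.7, 17.3]

def fps_to_frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

avg = sum(frame_times_ms) / len(frame_times_ms)
print(f"60 FPS budget:      {fps_to_frame_time_ms(60):.1f} ms per frame")  # ~16.7 ms
print(f"Average frame time: {avg:.1f} ms (~{1000.0 / avg:.0f} FPS)")
print(f"Worst spike:        {max(frame_times_ms):.1f} ms")  # the spikes are what you feel
```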

In the game I predominantly play (Red Orchestra 2), frame time variability is atrocious. I'm not yet certain if it is an issue with my setup or just a poorly coded game.

I dropped the CPU back down to base clocks the last time I updated my BIOS and haven't gotten around to re-overclocking it yet, so it's possible it's getting CPU-starved (Sandy Bridge-E, base 3.2, Turbo 3.8).

Also, it is an X79 system. I enabled PCIe 3 using the "force gen3" exe file that NVIDIA distributed, but the notes said NVIDIA had found some timing issues in the Sandy Bridge-E X79 PCIe 3 implementation, and I wonder if that is exacerbating things and whether I'd be better off dropping back down to Gen 2. Is x16/x16 on PCIe 2 sufficient for SLI on two 980 Tis these days? Five years ago there were articles suggesting that PCIe bandwidth really wasn't as important as we all thought, but a lot has happened since then.
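
For context on the bandwidth side of that question, the theoretical per-direction numbers work out as below; these are just the published per-lane rates and encoding overheads, not benchmark results:

```python
# Theoretical per-direction bandwidth of an x16 PCIe link.
# Gen 2: 5 GT/s with 8b/10b encoding; Gen 3: 8 GT/s with 128b/130b encoding.
def x16_bandwidth_gb_s(gt_per_s: float, encoding_efficiency: float, lanes: int = 16) -> float:
    # GT/s * efficiency = usable Gbit/s per lane; divide by 8 for GB/s
    return gt_per_s * encoding_efficiency * lanes / 8

gen2 = x16_bandwidth_gb_s(5.0, 8 / 10)     # ~8.0 GB/s
gen3 = x16_bandwidth_gb_s(8.0, 128 / 130)  # ~15.8 GB/s
print(f"PCIe 2.0 x16: {gen2:.1f} GB/s per direction")
print(f"PCIe 3.0 x16: {gen3:.1f} GB/s per direction")
```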

As far as the thermodynamics of the case go, yeah, it's not ideal for my current application, but it made lots of sense back when I built it that way. At that time I had the dual Asus 6970 DirectCU II three-slot cards pictured. The three-slot-wide cooler on those things meant that heat was not holding me back on my GPU overclocks.

I was, however, on a Phenom II X6 1090T at the time, and I was HORRIBLY CPU-limited in my favorite game, so my focus was to get as much of a CPU overclock as possible, and hopefully not hurt my GPU clocks, by pulling fresh, cold exterior air directly into my huge (and rare) 180 mm AIO cooler (and that actually turned out to be the case, at the time).

With my current hardware, this particular case setup is no longer ideal, which is why my plan is to switch it up.

My plan is as follows:

- Fractal Design Define S case
- Corsair H110i GTX blowing out the top of the case
- Corsair HG10 N980s on the GPUs, with two Corsair H90s blowing out the front of the case
- 4 additional Corsair SP140 fans, so all rads are push-pull
- Included case fans sucking in through the back vent and bottom vent.
- Aftermarket dust filter on the back vent.

I know I'd be moving the air back to front, rather than front to back, which is more customary, but this way I can make sure I am exhausting all the hot air rather than filling the case with it.
 
Related Question:

Is the top GPU usually hotter simply because of its location, and heat rises, or is it also hotter because it does more work than the second GPU? Does it - for instance - take more of an active role in stitching everything together to display it, or controlling GPU2 in a master/slave type relationship or something like that?

In other words, if you had a theoretical SLI setup where one GPU were separated thermally from the other, would one still be hotter, or is it entirely because of its location in the case?
 
It's the heat coming off the uncooled back of the PCB on the second card. A blower card will suck in that hotter air. A backplate can help with this issue.

If you have a dual- or triple-fan cooler then the heat can be reflected back to the top card from the second if the air isn't being routed out the side of the case. All things being equal, both cards should have the same thermals if they were to be completely isolated from each other.
 
Ahh, cool.

The ACX2.0+ cards come with back plates.

I hope the provided screws that come with the HG10s fit through them properly.
 
Those frame times actually are not that bad for SLI, but that case is a thermodynamic nightmare...

Can confirm. Actually, my frame times are all over the place even with a single 980 Ti (my second one is still coming via Step-Up) in the Heaven benchmark. I've been SLI/CFX'ing for the last 5-6 years, and this is pretty standard. To be honest, it doesn't really impact my gameplay, especially on a G-Sync monitor.
 
I've been SLI'ing for several years now and I have also experienced this issue. I never bothered to investigate the cause, though, and just chalked it up to the additional power demands of running a second GPU. To compensate, I'd ultimately just end up overclocking the GPUs.

Now, I don't know this for a fact, but I suspect that the issue was potentially related to some of my previous motherboards. The last two mobos I've owned have had an additional connector, for either an extra 8-pin or SATA power cable, that is dedicated to the PCI-E slots. On my current mobo, this extra power has been the difference between a slight or negligible SLI overclock and a significant one. I would suggest that if you have a dedicated PCI-E power connector like this, you make sure it is connected to your PSU.

With regards to your temperatures, the top GPU is always going to run hotter than the bottom one if your cards have the stock blower-style cooler. Before I put my 980 Tis on water, the top GPU would consistently stay around 10-12 C hotter than the bottom.
 

What were your temps before watercooling if I may ask?
 

My cards ran pretty hot. With the stock fan profile, overclocking was pretty much out of the question, since the top card would break the 80 C threshold easily and then throttle. With a custom fan profile (fan pegged at 85 to 90%, usually) I could up the power target and get a decent overclock, but the top card still got up to around 81/82 C. The bottom card was directly below the top (no extra gaps between the two) and generally 10-12 C cooler.
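
A custom fan profile like that is really just a temperature-to-duty-cycle mapping; here is a hypothetical curve of roughly that shape, purely for illustration (the points are made up, not my actual Afterburner profile):

```python
# Hypothetical aggressive fan curve, expressed as (temperature C, fan duty %)
# points with linear interpolation between them. The points are illustrative only.
CURVE = [(30, 30), (50, 50), (65, 75), (75, 90), (80, 100)]

def fan_duty(temp_c: float) -> int:
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            # linear interpolation between the two surrounding points
            return round(d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0))
    return CURVE[-1][1]  # pin to 100% above the last point

for t in (40, 60, 70, 78, 85):
    print(f"{t} C -> {fan_duty(t)} % fan")
```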
 

Hey Zarathustra, it looks like your mobo may have the same supplemental PCI-E power connection that I described a few posts ago. It depends on whether or not you have the -E version of the Asus P9X79 WS (the WS-E does not appear to have this connection).

On page 2-33 of your mobo's manual this power connector is circled and described (standard hard drive 4-pin power connection at the top right of the board):

https://www.asus.com/us/Motherboards/P9X79_WS/HelpDesk_Manual/

The manual states that this power connection may be necessary when running 3+ GPUs (my mobo manual says the same) but I have to use it even with just two cards to get a decent overclock in SLI.
 

Interesting, I never noticed it before. It says it's for 3 or more GPUs, but I'll definitely try it.
 

Actually, never mind, it's connected. Don't remember plugging it in, but I built the system with this motherboard in November 2011, so my memory is a little hazy :p
 

Ah damn, well it was worth a shot! Hope your new case configuration helps with the temps at least. :)
 
Lol.

I should make a point of opening my case while it is running more often.

Either my fan controller has gone bad, or one of my 180mm fans has died. I wonder how long it's been like that :p

I need to just retire this case and proceed with my rebuild. Just waiting for the H110i GTX to become available...
 
Actually, it's available. Ordering it plus the new case now. Do you guys know of any good 140 mm fans, other than Corsair's SP140s? They are kind of pricey.
 