The Official GTX980/970 OC & Benchmark Perf. Thread

I've had two people confirm to me that you get way better benchmark scores and better in-game performance with these drivers, though.

How much performance in games are we talking about? And is that with or without DSR?

Free performance is always great, but +10% FPS is kinda pointless if I have to worry about random BSODs while playing :/ It's doubly bad for me because these days I either go on 4-hour+ sprees or don't play at all.
 
I don't think it is much, probably +10%, but if you're BSOD crashing like DASHIT then I'd stay away. I'll try some gaming tonight if possible to see if I run into stability issues. Benchmarks, on the other hand, have improved substantially. Marcdaddy, for example, has roughly 30% higher scores across all benchmarks.
 
Firestrike was 70 pts more on the gfx subscore for me, a 0.5% increase. Nearly as much performance as adding 13 MHz on the core.

LOL, I'm not being specific about the MHz numbers; I'm using rough examples. Still, my findings stand for what I've tested and seen.

I've also never seen a 14% memory increase add 5 fps at lower resolutions, but I have seen it (or close enough) in Surround gaming. The reasoning is that you're memory-limited at higher resolutions: moving textures in and out of VRAM faster, and freeing VRAM up sooner, helps things there. It's logical, but it will vary by scenario, game, and your settings.

At lower resolutions, faster VRAM may net you barely any real-world gain, maybe +1 fps (rough example), so in many (possibly most) cases it's not worth boosting your VRAM 1000 MHz if it forces you to scale your core back roughly 15-20 MHz (it can vary) and lose 2-3 fps. You may also hit the TDP wall faster on these cards by boosting your VRAM, since the memory draws power and you're already running on a small budget.

You also need to understand that NVIDIA has new memory compression technology on Maxwell 2.0 GPUs, so the VRAM works significantly more efficiently. It should theoretically be able to handle more work at 7000 MHz than older architectures could at the same speed. Whether they're fully tapping into that with current drivers is still debatable.

So if someone asked me whether they should push VRAM as far as it will go on these 970/980 cards while gaming at 1080p, I'd tell them to first find their stable core clock, and after that boost VRAM until they hit the TDP wall, but not to sacrifice core clock for VRAM. Core clock > VRAM at low resolutions on Maxwell 2.0.
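
To put rough numbers on that trade-off, here's a quick back-of-envelope sketch. The baselines are assumptions on my part (a ~1500 MHz boosted core and the 970/980's reference 7000 MHz effective GDDR5 on a 256-bit bus), not figures from the posts above:

```python
# Back-of-envelope numbers for the core-vs-VRAM trade-off described above.
# Baselines are assumptions, not from the post: ~1500 MHz boosted core and
# 7000 MHz effective GDDR5 on the 970/980's 256-bit bus (224 GB/s stock).

BASE_CORE_MHZ = 1500
BASE_MEM_EFFECTIVE_MHZ = 7000
BUS_WIDTH_BITS = 256

def mem_bandwidth_gbs(effective_mhz):
    """Theoretical bandwidth in GB/s for a given effective memory clock."""
    return effective_mhz * 1e6 * (BUS_WIDTH_BITS / 8) / 1e9

for offset in (0, 500, 1000):  # treating the offsets as effective-MHz bumps
    gain = offset / BASE_MEM_EFFECTIVE_MHZ * 100
    print(f"mem +{offset:>4}: {mem_bandwidth_gbs(BASE_MEM_EFFECTIVE_MHZ + offset):.0f} GB/s ({gain:+.1f}%)")

for offset in (0, 20, 63):
    print(f"core +{offset:>3}: {offset / BASE_CORE_MHZ * 100:+.1f}% core throughput")
```

Even a full +1000 effective MHz on the memory is only about a 14% bandwidth bump, which is why it tends to matter most when bandwidth is actually the bottleneck.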

Core clock is of course the first thing to do, since you benefit across the board (geometry, pixel, and compute shader horsepower). But the story doesn't change at higher resolutions: you're not changing the number of megabytes required per pixel, you're just running more pixels, which means more work for the cores and the memory equally. So the relative benefit from a core OC and a mem OC stays about the same; memory doesn't suddenly become way more important at a certain resolution. There is overhead from the larger data set, but it's not significant enough to alter the way you should overclock... these new architectures and games address that problem.

My point: if you're ignoring the memory overclock just to get 25 MHz more out of your core, you're leaving performance on the table. Maxing out your memory should have minimal effect on your max core anyway.

Some quick FRAPS observations in Crysis 3 from a scene of interest. Looks like +290 on the memory is the same as +63 on the core at 1080p. At 4K the fps is too low to make fine comparisons, but I'm not seeing memory being any more important than at 1080p. I saw the same perf scaling in my Firestrike score.

1080p (core offset / mem offset - fps range)
+0 / +0 - 58 to 62 fps
+0 / +290 - 60 to 64
+0 / +591 - 61 to 65
+127 / +591 - 65 to 70
+127 / +290 - 64 to 68
+127 / +0 - 62 to 66
+101 / +0 - 61 to 66
+76 / +0 - 61 to 65
+63 / +0 - 60 to 64

4K @ 33% smoothness (core offset / mem offset - fps range)
+0 / +0 - 19 to 21
+0 / +290 - 20 to 22
+0 / +591 - 21 to 22
+127 / +591 - 22 to 24
+127 / +290 - 21 to 23
+127 / +0 - 20 to 22
+76 / +0 - 20 to 22
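
Turning the 1080p table above into percentages over the +0/+0 baseline (using the midpoint of each fps range) makes the "+290 mem is about the same as +63 core" point easier to see. Nothing here is new data; it just reprints the numbers already posted:

```python
# Midpoints of the 1080p ranges above, keyed by (core offset, mem offset),
# expressed as a percentage over the +0/+0 baseline.
fps_1080p = {
    (0, 0):     (58 + 62) / 2,
    (0, 290):   (60 + 64) / 2,
    (0, 591):   (61 + 65) / 2,
    (63, 0):    (60 + 64) / 2,
    (127, 0):   (62 + 66) / 2,
    (127, 591): (65 + 70) / 2,
}

base = fps_1080p[(0, 0)]
for (core, mem), fps in sorted(fps_1080p.items()):
    print(f"core +{core:>3} / mem +{mem:>3}: {fps:.1f} fps ({(fps / base - 1) * 100:+.1f}%)")
```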
 
These weren't my results at all back when I tested. I'm not sure DSR is really hitting your setup and VRAM usage the way a real 4K monitor would; that may be what's going on there.

Benchmarks rarely cause you to run out of VRAM; as far as I know they throw medium-sized textures at you, just a shitload of them. I've never seen someone say they were running out of VRAM in a benchmark. Unfortunately, the only way you'll be able to test my findings accurately is with a Surround setup or a real 4K display.

Keep in mind that with my Surround setup I don't like to compromise on IQ, so I game with at least 4x AA and high settings in all games.
 
I'm also unsure what the impact of DSR is. From NVIDIA's description, it is doing real 4K (or whatever you choose), plus an additional downsampling step at the end of each completed frame. That's the best I can do in terms of testing at higher than 1080p. And Crysis 3 is just one game; there are games and benches out there that act differently. Valley rewards memory speed a lot, for example.

Running out of VRAM is a totally different issue, and just like running out of system RAM, your performance takes a dive. That's when I would expect to see potentially high sensitivity to VRAM speed also. You're definitely right about a surround setup-- with some MSAA and high texture IQ, you would definitely be in the realm of running out of VRAM like with 4k.

Anyways, anybody been playing with a custom BIOS?
 
Yeah, I like to play right at the cusp of running out of VRAM; usually 4x MSAA @ 3240x1920 does it, depending on the game. In your defense, I should have been clearer upfront.

Regarding custom BIOSes, I created and flashed one to both my cards but ended up flashing back because the driver issues made testing a pain in the ass. The voltage delta bug doesn't let me benefit while in SLI since one card is always holding the other one back. Even with the trick I made a thread about, the highest voltage I saw was 1.257 V instead of what others were seeing. Once the driver issues get hashed out I'll revisit that process. Has anyone else been customizing/flashing BIOSes? If so, what were your results?
 
I still don't understand how I don't have this issue, yet others do. My top card runs at 1.212 V and the bottom at 1.193 V, a difference of 0.019 V. This is due to the difference in ASIC quality (top card 68, bottom 71), so the bottom card will always use less voltage.
 
Very likely. The more I got to understand the issue, the more I believe it's a combination of ASIC quality and the driver not compensating.

If your cards' ASIC values are close to each other, I'd imagine the issue wouldn't be as severe.

Where I believe the driver comes in is that it's not compensating for the discrepancies. E.g.:

If card 1 can do 1510 MHz @ 1.195 V
but card 2 can do 1510 MHz @ no less than 1.215 V,

it's supposed to compensate the voltage and boost card 1's voltage so that both run the same clock speed and voltage.

The same goes for another scenario:

If card 1 can do 1510 MHz with a boost of +200
but card 2 can do 1510 MHz with a boost of no less than +230,

it's supposed to compensate for that as well and lower the clocks on card 1 so that the two cards are in sync.

My theory is that the driver bug is not taking the ASIC discrepancy between the cards into account. That's why, when we manually compensate, we get a poor man's (sort of) working SLI setup.
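
A toy sketch of the voltage-matching idea being described (purely illustrative; this is not how NVIDIA's driver actually works, and the per-card numbers are just the example values above):

```python
# Purely illustrative sketch of the compensation described above; this is NOT
# how NVIDIA's driver actually works, and the numbers are the example ones
# from the post.

# Minimum voltage each card needs to hold 1510 MHz (example values from above).
vmin_at_1510 = {"card 1": 1.195, "card 2": 1.215}

def matched_operating_point(target_mhz, vmin_table):
    """Run every card at the target clock, at the highest per-card minimum
    voltage, so the weaker card never drags the pair out of sync."""
    v = max(vmin_table.values())
    return {card: (target_mhz, v) for card in vmin_table}

print(matched_operating_point(1510, vmin_at_1510))
# {'card 1': (1510, 1.215), 'card 2': (1510, 1.215)}
# The clock-offset scenario is the mirror image: clamp the stronger card's
# boost down to whatever keeps both cards at the same real clock.
```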
 
In regards to custom BIOSes: with no way of monitoring VRM temps, no thanks. I have a feeling those running a custom BIOS at 1.275 V are hitting 100C+ VRM temps.
 
That's why I purchased an infrared temp monitoring gun. I can just point it at the VRMs and get their surface temp :D

Even with fairly poor VRM cooling (NZXT Kraken G10), I've only seen the VRMs on my GTX 780 hit 100C while stress testing with FurMark.
 
I have a Kraken. All I did was throw some oversized heatsinks on all the VRMs with Arctic Silver epoxy. It makes the card hard to resell, but I plan to run it into the ground anyway.

You're right that if you plan to OC you need to be smart about it: a full-cover water block, something like what I did, or I believe the Gigabyte G1 cooler contacts everything. An all-metal case helps to contain potential flames ;)

As far as I could tell, even the reference 980 had thermal pads contacting the VRMs. I think where you have to watch out more is the 970s.
 
Well in that crazy custom BIOS thread over on overclock.net, some people with good bins are hitting 1640 on air with 970s...wonder how long that will last :)
 
Yep, definitely ASIC at work here.

My previous set of Gigabyte 970s had ASICs of 64% and 66%, and ran like a charm in SLI with a Vdelta of only 6 mV. Of course they squealed like stuck pigs, so they promptly went back (had them for a grand total of 4 days, ROFL).

The second set of cards have ASICs of 74% and 66%, and even out of the box the Vdelta is 50 mV. :eek:

Like you, I'm also compensating by making the higher-ASIC card lag behind the lower-ASIC card in order to match volts and boost as much as possible. For me the optimal lag was 20 MHz, and now I can do 1506 boost / 7600 mem without touching volts, and it's 100% game stable.

I tried my hand at overvolting, but it's just a total shitshow right now requiring way too much finesse and tinkering, so I basically said screw it and pushed the cards on stock volts until they broke during Firestrike, then dialed them back a good 15% and that's my 24/7 speed (for now anyway).

On the VRM temps concern: get a Gigabyte 970. With its active VRM cooling the VRMs don't run past 65C at stock, and I imagine even at 1.25+ V they still shouldn't shoot past 85C.
 
I agree, these cards really do cool well. I'm just waiting for a really good BIOS mod to give me everything I need to unleash these beasts.

I have my 2 H55's sitting here waiting with my G10 brackets :)
 
Been testing this bios out:

http://www.overclock.net/t/1517316/...ld-and-new-nvflash-5-190/550_50#post_23078677

This is my latest 970 MSI 4Gaming BIOS for anyone who would like to use it. This BIOS has given me fantastic results: zero throttling and constant voltage in 3D applications, in both single-card and SLI configurations. GPUs will still idle down as well. Information about the BIOS is below. I am not responsible for any damage done to your GPU. Hope everyone enjoys.

TDP increased to 300 W
Power limit increased to 300 W at 120%, 250 W at 100%
Both 12 V rails increased to 120 W
Temp limit set to 95C default
Voltage set to 1.250 V default
Voltage minimum limit while in 3D apps set to 1.250 V default
Running SLI, both GPUs will run at 1.250 V default
CLK 56-74 set to 1.250 V default voltage in the minimum and maximum brackets
XBAR, SYS, and L2C values set to 1455.5 MHz

FYI, the way the BIOS is currently configured, the default vGPU cannot be increased beyond 1.250 V. This, however, can be altered. At this time I simply don't care to run more than 1.250 V, as the additional core clock gain is not worth it.

Edit: Since the default TDP, power limit, and voltage have been increased to those specified levels, Boost 2.0 recognizes no immediate limits and allows the maximum boost table entry of 1455.5 MHz to be reached at default.

Have my cards at 1570 MHz core, 3800 MHz mem. No throttling at all. BOTH cards are locked at 1.250 V!

Been gaming with it for the last 2 hours now. :cool:
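
For anyone wondering why removing the limits makes the card just park at the top of its boost table, here is a rough, simplified picture of the Boost 2.0 behavior described in that edit. The modded thresholds are the ones quoted for this BIOS; the "stock-ish" ones are my own approximations:

```python
# Rough picture only: Boost 2.0 holds the top of the boost table (1455.5 MHz
# here) while NO limiter trips. Raise every limit and nothing trips, so the
# clock just sits there.

def holds_max_boost(power_w, power_limit_w, temp_c, temp_limit_c, volts, volt_limit_v):
    """True when no power/temperature/voltage limiter is active."""
    return power_w <= power_limit_w and temp_c <= temp_limit_c and volts <= volt_limit_v

# Approximate stock-ish limits on a reference 970 (~160 W, 80C, ~1.212 V):
print(holds_max_boost(power_w=210, power_limit_w=160, temp_c=72, temp_limit_c=80,
                      volts=1.212, volt_limit_v=1.212))   # False -> boost backs off
# Modded limits from the BIOS above (300 W, 95C, 1.250 V):
print(holds_max_boost(power_w=210, power_limit_w=300, temp_c=72, temp_limit_c=95,
                      volts=1.250, volt_limit_v=1.250))   # True -> stays at 1455.5 MHz
```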
 
Hey, do you happen to know the exact process to fix the voltage discrepancy via a BIOS tweak? I read through a thread at OCN and it wasn't clear. Something about right-clicking the GPU boost table, choosing fix table, and then flashing that BIOS to both cards?
 
I'm not sure, but if you post in that thread I linked, I'm sure someone will respond with an answer.
 
Played a bit with my Asus. Without raising the power limit I can get 1476 MHz max; otherwise it throttles. With it topped out at 120%, I managed to get 1522 MHz GPU and 4000 MHz RAM. Temps didn't cross 68C, and the fan was at 38-40% (totally inaudible). Higher than those settings, not a single pass of Heaven/Valley could be completed. In Valley I got a max score of 2766.

Power consumption, according to the GPU-Z log file, went from 96% TDP to 115% TDP, so if the TDP is 145 W, the rise of about 30 W is unnoticeable :)
Loving this card.
 
For those with reference cards, can you post what stable OC you're running with & without extra voltage?
I've been running +210 core, +400 mem, 125% power limit @ 1450 MHz with a stock fan profile. It's been completely stable until I fired up Titanfall, where every so often a driver reset occurs unless I add +10 mV. Everything else (BF4, Mordor, Valley, etc.) has been fine. Reviewing the graphs in Afterburner, I suspect it's a random voltage dip in one of the cards, but I'm not quite sure.

My ASIC quality is OK-ish for consecutive serial-numbered chips off the factory line: around 68.9% & 67.6%.
 
For me the overclock is about the same: I can get roughly 1500 MHz / 8000 MHz without a voltage bump and 1520 MHz / 8000 MHz with a voltage bump, which kinda makes the extra voltage unnecessary. I will note that since I have the voltage discrepancy bug, I have to boost one card about 35 MHz more than the other to get the voltages close, and I think this is playing with my stability. As far as ASIC goes, I have one golden card and one turd card: ASIC 1 = 84, ASIC 2 = 67.

Titanfall seems to have gone to shit with the latest patch for me. My voltages are unusually low in that game for some reason, even with GPU voltage maxed in Afterburner, so my cards are severely undervolted in Titanfall. The only thing I haven't tried is "Prefer maximum performance" under Power management mode in the NVIDIA profile, which I'll try now and post back. This is at 3240x1920 with max settings.

Yup, 100 mV or so undervolted; I'm running around 1.14-1.18 V in that game now on each card. /sigh

Borderlands 2 runs fine.
 
My pair of EVGA GTX 970 FTWs just came in last night. Temps are running a bit hotter than I'd like. This is with a +120 overclock on a stock BIOS:

[Valley screenshot: Unigine Valley demo @ 6040x1080 on Ultra settings]

I think I'm pretty much capped as is... I think the highest I got was a 1540 MHz clock, but it wasn't stable.
 
Thanks for the heads up. Titanfall has been stable at a lower clock for me: +200 instead of +210 plus extra voltage. Maybe my OC wasn't as stable as I thought it was. I'll use my +210 at stock volts for normal use, just not for this game. Doh...
 
What do you monitor/modify with on that BIOS? Both Afterburner and GPU-Z are reading a constant 1455 for me at stock clocks (PL 120%) and with +75 in AB... not understanding what's going on.

Metro LL bench: max temp 81C, 1.25 V, GPU load up to 96%, TDP didn't go over 83%.
 
For most people, around 1500 MHz is when the cards will throttle back.

I actually went back to the stock BIOS for now. 1500 MHz stable; not worth flashing the BIOS just for 70 MHz more.
 
Yeah, I went back to a power-limit-modded version of my stock BIOS on my 980. I was having a weird issue with the "no limits" BIOS where my card would spike up to around 1575 MHz core momentarily when loaded from cold and would often crash the driver. After reaching a fairly steady-state temp, my card would still throttle back to right around the same speed it would hit with the stock BIOS with the power levels increased anyway.

Gonna give this BIOS flashing thing a rest until someone figures out a BIOS with boost completely disabled.
 
After much fiddling I've determined that my reference Zotac 980 requires additional core voltage to provide a stable memory overclock. I'm using Mordor as my stability benchmark, and I maxed out vcore at +87 mV. Am I correct in my conclusion that additional vcore over what's needed produces more heat and power consumption, but otherwise just increases stability at a given overclock? What I'm getting at is: if I'm OK with my GPU temps and +87 mV is stable, is there any point in dialing the vcore down until I find the minimum this card needs?
 
You will be running into a TDP wall in some games with that extra voltage long before you reach the highest boost that card is capable of.
 
I'm trying to understand how that would be possible.

It's all in the magic of "boost".

My 980 will boost to the same max clocks and will display the same GPU voltage with the voltage set at anything from +40 to +87 mV. And from starting "cold" to a stable loaded ~70C, my core clock will drop about 20 MHz and a couple of ticks in voltage.

Setting the voltage slider is not actually making a change to the voltage applied. It's really more of a "recommended maximum". And it will never actually get there at +87 with air cooling.
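
A rough way to see why that TDP wall shows up so fast: dynamic power scales roughly with V² × f. The baseline point below is an assumption on my part (a 980-ish card at ~1.212 V / 1450 MHz near its 165 W reference TDP), purely to illustrate the scaling:

```python
# Why extra voltage eats the power budget: dynamic power scales roughly with
# V^2 * f (leakage ignored). Baseline numbers are assumptions, a 980-ish card
# at ~1.212 V, 1450 MHz, drawing around its 165 W reference TDP.

BASE_V, BASE_MHZ, BASE_W = 1.212, 1450, 165

def approx_power(volts, mhz):
    """Very rough dynamic-power estimate scaled from the assumed baseline."""
    return BASE_W * (volts / BASE_V) ** 2 * (mhz / BASE_MHZ)

# If the full +87 mV actually got applied for a ~50 MHz bump:
print(f"{approx_power(1.299, 1500):.0f} W")   # ~196 W, ~19% more power for ~3% more clock
```

In other words, if the full +87 mV were actually applied you'd be asking for roughly 19% more power for about 3% more clock, which is exactly the kind of thing the power limiter steps in to prevent.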
 
Hi,
I am returning my GTX 980 SLI setup to the shop for a refund; the shop agreed to refund my money due to the DisplayPort issue.

What should I buy now?
 
Hexus looked at the new 8GB card from AMD in Shadow of Mordor, for those interested in whether 4GB is too little for NVIDIA's cards:

http://hexus.net/tech/reviews/graphics/76685-sapphire-radeon-r9-290x-vapor-x-8gb/?page=9

Good article; the conclusion was interesting to read. I think they're being a little short-sighted in that they're not accounting for the laziness, or the lack of development time developers are given, when PC versions are ported from the current-gen consoles and their shared memory pools. This has been snowballing VRAM requirements in games, and I expect that trend to continue. In 2015 I think 8GB is a safe bet.

It's better to not need the VRAM and have it than to need the VRAM and not have it. ~ Lord_Exodia
 
To sum up:
The 4GB GTX 980 is noticeably faster than the 8GB R9 290X at 1080p.
The 4GB GTX 980 is the same speed as the 8GB R9 290X at 4K.

Oh, and neither card delivered a playable experience (minimum FPS above 30) at 4K + ultra settings... so that extra RAM means squat :p
They're going to have to put two 4GB GTX 980s in SLI and compare them to two 8GB R9 290Xs in Crossfire to see any real useful gains from that extra RAM.
 
4K is really not a viable option until graphics cards can sustain at least 60 fps. Don't tell me 30 fps is acceptable, because that is just downright not fun.
 
Thanks for posting this article.
As I thought, Shadow of Mordor doesn't need 6GB, since it runs great on ultra textures on my GTX 980 SLI.
It probably uses up to 6GB for buffering, but that doesn't mean it's really needed; most of the time my VRAM usage is around 3.8GB.

No news on a GTX 980 8GB yet?
When do you think we will see 8GB cards from NVIDIA in the "non-Titan" series?
 
NVIDIA's board partners used to make versions with double the stock memory, but I don't know if they did that for the 780s. I don't see why they wouldn't do it again, but it might be a couple more months.

I think Marcdaddy pretty much hit the nail on the head: by the time you really need more than 4GB, you won't have enough cores to push a decent framerate anyway. Hexus also pointed out that the 8GB card has slightly slower memory timings; I guess AMD overclocked the memory a bit to compensate.

SLI and Crossfire won't show anything different, though: VRAM is not additive across multiple cards. You'd just be testing SLI versus Crossfire.

What's interesting is that the 980M has 8GB. I guess that's because the high-end laptops that pack a 980M would also pack a 4K-ish screen.
 
NVidia's board partners used to make versions with double the stock memory, but I don't know if they did that for the 780's.
There's a 6GB version of the GTX 780 floating around.

Didn't make much difference then, either.

SLI and crossfire won't show anything different though- VRAM is not additive across multiple cards. You'd just be testing SLI versus crossfire.
Not true at all... you even pointed out precisely why SLI vs. Crossfire would be a valid test.

by the time you really need more than 4GB, you won't have enough cores to push a decent framerate anyways.
SLI and Crossfire give you more cores utilizing the same-size pool of memory. So by the time you need more than 4GB, you might in fact have enough cores to push a decent framerate.

They should test two 4GB 290Xs in Crossfire, two 8GB 290Xs in Crossfire, and two 4GB GTX 980s in SLI. The results would tell us whether 8GB is useful in ANY way at all with current games and hardware.
 
SLI and Crossfire use alternate frame rendering, right? That means per frame, you only get one card's cores and VRAM. Neither the cores nor the VRAM are additive across multi-GPU setups; comparing two 8GB 290Xs is not equivalent to testing a theoretical 8GB card with twice the number of cores.

There's no new information to glean from testing SLI or Crossfire. The formula is (FPS from one card) * (number of graphics cards) * (SLI/Crossfire scaling factor).

So the only thing you're testing in the end is how good SLI/Crossfire are: the scaling factor would be 100% in a perfect world, but it's less than that in the real world.
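
Plugging numbers into that formula, with the scaling factor assumed to be about 90% (it varies by game and driver):

```python
# The scaling formula above: fps_multi = fps_single * n_gpus * scaling_factor.
def multi_gpu_fps(fps_one_card, n_gpus, scaling=0.90):  # 0.90 = assumed scaling
    return fps_one_card * n_gpus * scaling

print(multi_gpu_fps(45, 2))        # 81.0 -> two cards at 90% scaling
print(multi_gpu_fps(45, 2, 1.0))   # 90.0 -> the perfect-world case
```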
 