NVIDIA Kepler GeForce GTX 680 Overclocking Review @ [H]

My biggest concern is for people who are going to keep the card for a while. I'm on a 3-year upgrade cycle (after 3 years, I pick at what point I want to upgrade; this time it's Ivy). Back in '08 when I picked up my 4870, I'm sure most games didn't push the card as hard as 3dmark 08 did. I'll bet that BF3 pushes my 4870 just as hard or harder than 3dmark 08 does.

What I'm saying is, in 3 to 4 years I suspect I won't be getting much over base clock anyway, since games will (hopefully) be much more demanding. Sure, I'll still WC the card and all will be well; I'm just not sure the dynamic clock is really something to be all that excited about in the long term.
100% load is 100% load. Games that load up the card more in the future might load different areas of the GPU, but I'm seeing 100% utilization on my 680s in BF3 now, so I don't think you will be worse off in the future.
 
What is the actual rated speed for the memory chips on the 680?

Also, I'm wondering if you can look at whether or not the card is memory-bandwidth limited; basically, whether it gains more from OCing the memory relative to the core.
 
Overclocking seems really hit or miss on these. I see people/sites getting like +250 on the core, but quite a few others and I are seeing <100 MHz. I can run the OC Scanner well over +100, and I've tried several games well over that as well with no issue, but Heaven won't do anything over +75 without crashing. I realize Heaven is much more stressful than the typical game, so I wonder what everyone else is using for stability testing.
 
What is the actual rated speed for the memory chips on the 680?

Also, I'm wondering if you can look at whether or not the card is memory-bandwidth limited; basically, whether it gains more from OCing the memory relative to the core.

According to Brent the 680 is GPU limited, I expected otherwise.
The 680 is engine limited; sacrifice memory clocks in favor of raising the core clock as high as you can.
 
Well done guys, it's a very good review. I found during my testing that the GPU clock and voltage will each downclock one step at 70C. Watching the OSD, right when it hits 70C both of them drop, from 1215 MHz/1.175 V to 1202 MHz/1.162 V respectively.
 
Nice review guys. Thanks for taking a bit of extra time to test several games and OC both the 7970 and the 680.

I'm really liking the auto-overclocking the card does, right out of the box. This is a great feature, quite innovative.

Another small article, showing how much OC'ing helps with 5760x1200 performance would be sweet :)

I really want to erase auto-"over"clocking from our vocabulary in reference to GPU Boost.

It is auto-clocking, not auto-overclocking, because overclocking can still be done manually by the user over the capabilities of GPU Boost. So it isn't auto-overclocking, it's simply an auto-clocking feature, and quite a smart one at that.

Well, it speeds up the clock, which still feels like "overclocking" to me, but I can see what you mean since it is an automatic-clock-speed-on-the-fly-adjustment. Gonna have to give us time to get used to it, it's new and all.

You never did mention if a small overclocked triple-screen performance test was on your plate. /Piles_it_On
 
High power as in pushing the TDP: a "power virus," as NV describes it, apps like FurMark. NV showed a test in 3DMark that behaved like this as well.

Note that this isn't typical in games; that's why games are the right application to use to test clock speed, overclocking, and so forth. Games don't behave like some of these specific tools, and those tools don't show you what you are going to get in real games.

I'm going to flesh this out more in a closer look at GPU Boost; I'll show you graphs in these apps and in games, and how it relates to power and clock speed. The important thing to keep in mind is that none of those tools or specific benchmarks relate in any way to what you will experience in real games. This is also why we've started reporting power in each game; it's necessary now. Same with clock speed: if clock speed is different in games (and it will be), I'll show that in the next article.

Thanks for the info :) I'm looking forward to the closer look into GPU Boost!
 
Great review; really looking forward to you guys exploring how the card reacts under different conditions. Seeing how the GTX 680 and 7970 are the first cards of the generation, the future is looking mighty competitive.
 
I don't know if it sounds all that smart to me. Pretty much, when you really, really need the extra clock speed in a high-power situation (say, a game that pushes as hard as 3dmark) is when you get downclocked. When you're running at 100+ fps in a low-power situation is when you get an added boost, because you're at low power...

I would think you would want the high-power situation to be the time the card ramps up extra voltage and clocks, to push you through the demanding segment you're in.

I'm not sure it actually works that way in practice. My card clocks up to the max frequency (1260 for me) and stays there all the time. I have never seen it drop below 1260, even when the GPU is at 99% load in BF3. So I'm not sure that under demanding situations it is really going to push the clock speed down, unless you are really pushing the voltage or something. My card doesn't exceed 120-125% power draw, so maybe it just hasn't hit a situation where it needs to downclock, but so far, in practice, it runs flat out all the time.
 
This. My cards might drop 10-20 MHz if they get really hot (85C+), but I've never seen them clock down below ~1200 if FPS is below 60. The only time the GPU clocks back to normal at 1006 MHz is when I'm maxed out at 60 FPS with vsync on.
 
Brent,

How do you feel these will perform under water cooling? I bet these things could clock pretty high and still stay cool under water, but will the card feed itself enough voltage? How are we going to get them to really crank up under more advanced cooling?
 
I'm currently holding off on a purchase of a 680 until I find out a bit more info on this (that, and prices in Australia atm are at the $700 mark, which is bound to drop a fair bit in the coming weeks). I'd hate to be artificially limited.

Or perhaps until one of these becomes available.
 
Kyle/Brent: Great article. Even though there are some people who would want an additional 15% in performance, I'm not seeing a really big need to push these for anything, especially if life expectancy may be sacrificed!

I've thought of doing this with an H50 I have lying around. What's your plan for keeping the RAM cooled?

I have an H50 in the box (got it from Best Buy for $40!) somewhere. I'm waiting for IB to drop, and I had planned on putting the H50 on it, but I think I will put an H100 on the CPU instead and place the H50 on whatever card I get. My case has a fan that mounts above the PCIe slots, so I will put some memory heat spreaders on, do the same for the VRMs, and let the fan cool it all down.
 
Great f'n review Kyle and Brent, thanks! I am pleased to see you have a "typical" OC 680 and found an above-average 7970 OC (not just sticking to CCC limits). My limited OC testing on my new SLI 680s scales in the linear fashion shown here; the 7970 OC results seem to be all over the place for performance. Except for Skyrim, I also see much better MINs, even when the 7970 wins the AVG/MAX, and in Skyrim the MINs are neck and neck.

I've only had an hour today to toy with OC, but the results in Dirt 3 are perfectly linear (i.e. a ~10% increase in OC gets a ~10% increase in FPS), and now 4xMSAA Ultra doesn't have any slowdowns/funny business like it did with my SLI 580s: butter smooth and FPS-rippin' at avg 83 / min 70 with 1110/3004 clocks at 6010x1080 (with a 4th screen extended to display Precision X, RealTempGT, and 2x GPU-Z).

At stock, my SLI 680s lock at 1110 core and 3004 mem. With +106/198 offsets, they lock at 1215/3206. With +123/198, they lock at 1233/3206, and I haven't found the best OC yet. Voltages go to 1.175 V, and sometimes card 1 drops to 1.16 V. I didn't have time to go any further, but I do know from playing with it a couple days ago that I get artifacts with a mem offset of +400.
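For what it's worth, the locked clocks line up with stock lock + offset almost exactly. A quick Python sanity check on my numbers (I'm assuming the few-MHz mismatches are just boost-bin rounding):

```python
# Observed locked clocks vs. stock lock + offset on my cards.
# The 1-4 MHz mismatches are presumably just boost-bin rounding (my assumption).
runs = [
    # (label, stock lock MHz, offset MHz, observed lock MHz)
    ("core +106", 1110, 106, 1215),
    ("core +123", 1110, 123, 1233),
    ("mem  +198", 3004, 198, 3206),
]
for label, stock, offset, observed in runs:
    print(f"{label}: {stock} + {offset} = {stock + offset} MHz (observed {observed} MHz)")
```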

I am very pleased with the performance/mem boost and the added features of the 680s vs. the 580s, and I think I'm going to go ahead and waterblock these, but I'm in no hurry for now.

Can't wait for Dirt: Showdown
 
Nice review. The card performs great, but it is a bit of a letdown that the PCB is gimped to prevent greater overclocking performance.

I wouldn't be surprised if a GTX 670 Ti with a custom PCB, like an MSI TF3 flavor, is able to overclock and perform better than the current GTX 680.
 
I'm currently holding off on a purchase of a 680 until I find out a bit more info on this (that, and prices in Australia atm are at the $700 mark, which is bound to drop a fair bit in the coming weeks). I'd hate to be artificially limited.

Or perhaps until one of these becomes available.

I've had my eye on the EVGA Hydro Coppers myself. They are out now, but I don't know if any have been delivered yet.

I'm waiting on 4 GB, which means waiting for FTW or Classy editions :(.

Hopefully we'll know a lot more about the water cooled overclocking potential of these things by then.
 
I don't know if it sounds all that smart to me. Pretty much, when you really, really need the extra clock speed in a high-power situation (say, a game that pushes as hard as 3dmark) is when you get downclocked. When you're running at 100+ fps in a low-power situation is when you get an added boost, because you're at low power...

I would think you would want the high-power situation to be the time the card ramps up extra voltage and clocks, to push you through the demanding segment you're in.

It's counter-intuitive. Because the auto-clocking keeps the system stable, it lets you overclock higher, more safely. Sure, in a few cases you might lose some MHz when it auto-clocks down, but I think overall the gain outweighs the loss, especially since those brief game slowdowns are often due to inefficient GPU usage, when auto-clocking is more likely to boost than throttle. Depends on the game, of course.
 
Nice review. The card performs great, but it is a bit of a letdown that the PCB is gimped to prevent greater overclocking performance.

I didn't read it that way exactly. It looks to me like it may be a software limit on voltage adjustment. Or, yes, it might be a limit of the reference design. How can we be sure where the 680 voltage limit in this review is coming from?
 
I don't know if it sounds all that smart to me. Pretty much, when you really, really need the extra clock speed in a high-power situation (say, a game that pushes as hard as 3dmark) is when you get downclocked. When you're running at 100+ fps in a low-power situation is when you get an added boost, because you're at low power...

I would think you would want the high-power situation to be the time the card ramps up extra voltage and clocks, to push you through the demanding segment you're in.

I thought this was possible too, but looking at most graphs it seems rather consistent, without too much variation. I did find one test where the clock varied by up to 170 MHz during a 3DMark 11 run based on power fluctuations, but that is not the norm. It did, though, do exactly what you said at times: when most needed, it downclocked, giving a lower low right when you needed it most. It was probably a misconfiguration on their part that caused the card's GPU Boost to be so severely power limited.
 
Great review. Thanks [H]

Now I have the urge to wait for those custom-cooler versions. It does look like this GPU has more potential in it if we can keep it cool.
 
Hi guys, first off I'd like to say that I'm a pretty regular visitor here; I just never got round to registering til now, as I didn't need another forum to hang out on. Guess that's changed, the primary reason being that I have a few issues with this overclocking article.

First issue is to do with over-volting the card: basically you can't, at least not on a reference card. You can play with the voltage adjustment in Precision as much as you like, but the card will not accept any voltage increase over the stock default of 1.175V. Incidentally, the reason your voltage slider only goes as high as 1.15 is that you are using the beta version of Precision; upgrading to 3.0.1 would solve this, not that it would make much difference. What you are actually doing by setting voltage in Precision is telling the card not to drop below the voltage you set; in your tests this is 1.15V, just a little lower than the stock voltage due to using the beta version. As to whether or not there is a benefit in setting a minimum voltage, I'm not sure without testing it further, but it makes no noticeable difference that I can tell so far.
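To put that slider behaviour in concrete terms, here's a minimal Python sketch of the semantics as I've described them (a model of the observed behaviour on a reference card, not NVIDIA's actual firmware logic):

```python
# The Precision slider sets a floor under the dynamic voltage; the card
# still caps everything at the stock 1.175 V. Model of observed behaviour only.
STOCK_MAX_V = 1.175

def effective_voltage(dynamic_v, slider_v):
    # Never below the slider "floor", never above the stock cap.
    return min(max(dynamic_v, slider_v), STOCK_MAX_V)

for dyn in (1.05, 1.15, 1.175):
    print(f"dynamic {dyn:.3f} V, slider 1.15 V -> {effective_voltage(dyn, 1.15):.3f} V")
```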

Secondly, there is no reference to the thermal throttling feature, where clock speeds and voltage will automatically throttle downwards at each 5-degree step once above 70 degrees. And this is the single most important factor here in the difference between your test results.

In your first set of tests you increase the Power Target and clock offsets, but without increasing fan speed; by allowing the temps to rise to over 80 degrees, the card is throttling itself by at least 2 steps, where each step is a 13.5 MHz drop in clock speed and also a drop in voltage. A 3rd step follows at 85 degrees, but I can't tell if the card ever reached this temp. You can see in the first screenshot of Precision monitoring that the card actually reaches its stock voltage of 1.175 very briefly, before the card reaches 70 degrees and is throttled downwards for the remainder of your testing as temps rise.
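To make the stepping concrete, here's a minimal Python sketch of the behaviour I'm describing. The 13.5 MHz bin and the thresholds are from observation of reference cards, not NVIDIA's documented algorithm, and the exact thresholds seem to vary a little between reports, so treat it as an approximation:

```python
# Stepped thermal throttle as observed on reference GTX 680s: one ~13.5 MHz
# bin (plus one voltage step) comes off at each 5 C threshold the GPU crosses,
# starting at 70 C. Observation-based approximation only.
def throttled_clock(boost_mhz, temp_c, step_mhz=13.5, start_c=70, step_c=5):
    if temp_c < start_c:
        return boost_mhz
    steps = (temp_c - start_c) // step_c + 1
    return boost_mhz - steps * step_mhz

for temp in (65, 70, 75, 80, 85):
    print(f"{temp}C -> {throttled_clock(1215, temp):.1f} MHz")
```

At 70 degrees that gives 1201.5 MHz from a 1215 MHz boost clock, which lines up with the 1215 -> 1202 drop reported earlier in this thread.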

In the second set of results, where you increase fan speed, you are keeping the card below the thermal throttling threshold; as a result your clock speeds aren't being throttled and you are running at the default stock voltage of 1.175V for periods of time. Assuming the card never went over 70, any drop below 1.175V would be part of the dynamic clock process or the Power Target feature.

So the main problem is you believe you are over-volting the card when you aren't, but you have reinforced this belief by attempting to over-volt at the same time as increasing fan speed; as a result, temps have dropped, thermal throttling has lifted, and clock speeds and voltage have risen. This is reported as a success for over-volting. It isn't; it is a success for managing your temps.

The end result is that all the before-and-after comparison results are skewed and imply over-volting is possible when it isn't, but they do show very well the benefits of turning up the fan speed.

Finally, you have also confused the Power Target and its relation to TDP, as seems to be quite common; TDP and typical power draw are two different things. The TDP is 195W, but 100% Power Target is the typical power draw, which is 170W. At 132% Power Target you are at ~225W, which is the limit for 2x 6-pin connectors plus the PCIe slot. To be at the TDP of the card you would need to be at ~115% Power Target.
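If it helps to see the arithmetic, here's a quick Python illustration using the figures above:

```python
# The Power Target slider is a percentage of typical board power (170 W),
# not of the 195 W TDP.
TYPICAL_W = 170   # 100% Power Target
TDP_W     = 195

print(f"132% Power Target -> {1.32 * TYPICAL_W:.0f} W")     # ~224 W, i.e. the 225 W limit of 2x 6-pin + PCIe slot
print(f"TDP as a Power Target -> {TDP_W / TYPICAL_W:.0%}")  # ~115%
```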

I apologise if this sounds harsh or critical; it certainly isn't meant that way, and I have huge respect for the work you guys do. I appreciate the effort that went into this article, but I do feel it should make proper reference to the inability to increase voltage over stock, and to the significance of thermal throttling and temperature management.
 
@Kyle, Brent:
Do you have wattage numbers on the 7970 OC for each game to compare with the 680 OC?
I was looking for this as well. I suspect reference numbers could be found in the 7970 OC article, but it'd be good to have them here for comparison's sake, since the real "bottom line" for me (and I'm guessing many [H] users) is how the two cards compare when OC'd.
 
In the one game it lost by a lot (Deus Ex), you should check out those minimum FPS. There was a difference of 13 FPS (GTX 680: 42 FPS, 7970: 29 FPS). Might have been an anomaly, but it seems in most games the 680 had better minimum FPS.

So it's cheaper, performs better at stock, and performs better or on par when overclocked. Damn, sounds like a real winner to me. What will the arguments against the 680 be now?
 
Well, look at the graph... the drop to 29 is not consistent, simply spikes that could be due to loading in textures. Whereas with the 680, there was a large gap where it was under 60 frames.

I'm not convinced the 680 is a "winner." The performance between both is pretty comparable. If I was buying a card now I'd go Nvidia simply because they tend to have better drivers. The price is debatable... it can be argued that the 7970 is a better value because it has 3GB VRAM. If you don't need the 3GB, then the 680 is a better buy of course.
 
I tried out overclocking on my 680 right away. I set the power target to +32%, the GPU offset to +100 MHz, and didn't touch the memory. It pulls a 1227 MHz clock rate most of the time with a 1.175 V reading. Easy.

After reading this article, I tried out overclocking my memory a bit. It ran without trouble at +500MHz, so I decided to stick with that.

This is the easiest overclocking I've ever done. Even my 2600k overclocking was harder than this.
 
Well, look at the graph... the drop to 29 is not consistent, simply spikes that could be due to loading in textures. Whereas with the 680, there was a large gap where it was under 60 frames.

I'm not convinced the 680 is a "winner." The performance between both is pretty comparable. If I was buying a card now I'd go Nvidia simply because they tend to have better drivers. The price is debatable... it can be argued that the 7970 is a better value because it has 3GB VRAM. If you don't need the 3GB, then the 680 is a better buy of course.

And it was already shown in another review that this whole 3 GB thing didn't help the HD 7970, because with higher AA settings it couldn't run smoothly anyway, negating the usefulness of the 3 GB. Maybe in quad-fire it will pull ahead because of the 3 GB, but not really anywhere else.
 
I don't see how we can "appreciate" this from AMD, and this seems especially bad for end users. For the manufacturer, I see it as both bad and good: bad because AMD is slower out of the box than nVidia and more expensive; good because you can now put out "OC Edition" cards and charge $50 more for them without breaking a sweat. Either way, I think the end user loses out with respect to choice and price.

How can the end user be the loser when AMD's policy enables a wider range of card choices? This doesn't make any sense to me.

If AMD is setting the default clock rates more modestly for reference cards, it means people who want the latest hardware can buy it for less and maybe even throw their own cooling on it, and if they want the card pre-overclocked, they can buy an OC edition for a few bucks extra, which gives them a guaranteed level of overclocked performance. This is a good thing in my view, not a bad thing, as it means people who want to water-cool can pick up a cheaper reference card, pull the HSF off it, and throw on their water-cooling solution.

nVidia must be aware of this, and that's why they're setting the price of their reference cards at $499: to undercut AMD so that AMD will eventually lower their prices. It's partly because AMD is ahead of nVidia in terms of product cycles; AMD's partners are already putting OC edition cards on store shelves, while nVidia is only just trying to get reference cards out the door to stores now.
 
First issue is to do with over-volting the card: basically you can't, at least not on a reference card. You can play with the voltage adjustment in Precision as much as you like, but the card will not accept any voltage increase over the stock default of 1.175V.

...Secondly, there is no reference to the thermal throttling feature, where clock speeds and voltage will automatically throttle downwards at each 5-degree step once above 70 degrees. And this is the single most important factor here in the difference between your test results.

Bob, thanks for this heads-up. I was just wondering about something: when you said you basically can't over-volt the card, did you mean specifically that you can't over-volt a GTX 680 reference card, or any reference card? Surely the regulator on a card can be adjusted somehow. Perhaps you mean the regulator cannot be adjusted in software because the firmware prohibits this?

Also, you mention that thermal throttling is taking place during the testing of the GTX 680. Is there any means of disabling the thermal throttling in software, or would a modified BIOS be required? Or is this a circuit-level feature of the GPU which cannot be overridden?

Thanks for any answers in advance.
 
I believe that the inability to over-volt these cards is enforced at the hardware level, using a sensor on the reference boards, so without bypassing it at the hardware level you are unable to over-volt. I'm only referring to the GTX 680 reference cards here.

The thermal throttling also can't be disabled; it's part of the dynamic clocking of the card, working as a stepped thermal protection system from 70 degrees up to the card's thermal threshold.
 
I'm not sure you would want to disable the thermal throttling, at least not when using the stock HSF.

It seems to me the obvious solution is to water-cool, and the card will automatically adjust its clock upwards, farther, as long as you are still within the power envelope. This alone might yield nice performance gains.

I would have to assume that some manufacturers will come out with cards that can take higher voltage settings as part of the design, for example the cards that come with waterblocks.

In the case of the reference cards, perhaps a newer/modded BIOS would add these options as well, in cases where some form of aftermarket cooling is added.

Should be some exciting stuff coming in the next few months.
 
I'm surprised that the GTX 680 starts to throttle its clocks at 70C, particularly as the thermal shutdown temperature of this GPU is 98C according to NVIDIA. Considering almost every other high-end GPU hits 75-85C under heavy load, 70C seems a little overcautious IMO. 75C would have been a better throttling point, especially as I've not seen any game go above that temperature with my custom fan profile during my testing and gaming.

My own GTX 680 can handle 1,287 MHz in many games with the GPU clock offset at +135 MHz and the memory also set to 6.8 GHz, but I have seen it drop the GPU clock to 1,274 MHz, presumably because of the thermal throttling above 70C (that 13 MHz drop is about one throttle step). Such a small drop is insignificant in terms of framerates anyway, so it isn't that big of a deal, I guess.
 
I'm not sure if it's been mentioned before in this thread, but NVIDIA should add a per-game boost clock customization option to the custom game profiles in their NV control panel. This would totally own.

Maybe bring back the good old coolbits mod :)
 
Thanks for the clarifications, Bob.

I guess disabling a sensor would be something you'd either have to do in the firmware, or you'd have to butcher the card physically. The latter would seem to be very undesirable!
 
Truly a fine article; thanks for going into the finer details to project a comparable scenario between the GTX 680 and HD 7970. I've been looking for this no-holds-barred OC comparison, and while it's clear there is much more to learn about these beasts (given their different natures and traits), this is a great starting point. On a different note, the more I read about the 680, the more I can't help thinking that AMD should've gone with a Pitcairn-like approach (die space and power efficiency, higher base clock) for the high end too. I mean, Tahiti might be a great GPU, but the HD 7970 isn't the best gaming graphics card.
 
The OC benchmarks for single-GPU cards must be done at the highest playable settings with maximum IQ, and for BF3 single-player that is 2560x1600 Ultra 4xAA.

http://hardocp.com/article/2012/03/20/msi_r7970_lightning_video_card_review/7

The testing has been done at 2560x1600 Ultra FSAA. Is it that the GTX 680 performs lower at 2560x1600 Ultra 4xAA because of bandwidth/VRAM issues?

Also, CrossFire Eyefinity / SLI Surround testing at stock clocks is of no use. Enthusiasts who spend USD 500+ on a graphics card do overclock their cards, so the testing must be done at max stable overclocks on stock voltage and at max voltage.

http://www.hardocp.com/article/2012/03/28/nvidia_kepler_geforce_gtx_680_sli_video_card_review/5

In BF3 single-player the HD 7970 is faster at 5760x1080 Ultra 4xAA. The gap will widen when you bring in OC, as the HD 7970 has more OC headroom.

Batman: Arkham City is Nvidia's strongest game. The performance of the GTX 680 is far superior at 4xAA/8xAA, and no overclock on the HD 7970 can undo that.

Skyrim too could be close enough at 5760x1080 CrossFire/SLI, if you test at max stable overclocks, that you could call it a tie. Deus Ex runs better on AMD.

The Radeon HD 7970 is the better overall card because of superior compute performance, more bandwidth, and more VRAM. Compute shaders are going to be used more pervasively in games in the future, and in a more demanding fashion, and compute performance benefits from more bandwidth. One of the most demanding games even today is Metro 2033, which uses compute shaders for DOF; the performance of the HD 7970 is superior to the 680 when compared in the playable range at 1080p max settings. 2560 is unplayable at max settings.


http://www.legitreviews.com/article/1886/10/

The Crysis games, including Crysis 2 (at max stable OC), run better on the HD 7970 because they are bandwidth-demanding, especially at 2560x1600 and higher.


http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/7
http://www.guru3d.com/article/asus-radeon-hd-7970-directcu-ii-review/23
http://www.guru3d.com/article/geforce-gtx-680-overclock-guide/14


I think there is a little bit of bias towards Nvidia on all websites. Most of the websites, excepting a few, are refraining from doing a direct shootout at 2560x1600 and 5760x1080, single-GPU and multi-GPU, at max stable overclocks. There are 680 OC articles, but they do not include an HD 7970 OC in the same comparison chart. Maybe it's because websites are also businesses and depend on Nvidia for significant advertising money spent on their sites.

The GTX 680 is a good, competitive gaming card, but the HD 7970 is the better overall card. A $500+ piece of hardware needs to be better at things other than just gaming, and GPGPU is definitely one of them. More and more apps are becoming GPGPU-enabled.

Nvidia's strength is the TWIMTBP developer relations program and the driver team. There are more games in which Nvidia works closely with the dev team to extract the best out of their hardware; that is why you have a game like Batman: Arkham City working far better on the GTX 680. So AMD needs to improve their developer relations and driver quality. Gaming Evolved is just a start, but it has a long way to go to compete with TWIMTBP. Ultimately it's in gamers' interest that things remain competitive; otherwise we will be faced with a situation like the CPU market, where one company controls the high end. So get your act together, AMD. Can't wait to see Big Kepler GK110 vs the Radeon HD 8970.
 
Did you read the [H] GTX 680 overclocking review? Because they compared an overclocked 680 to an overclocked 7970, like you wanted.

http://www.hardocp.com/article/2012/04/04/nvidia_kepler_geforce_gtx_680_overclocking_review/5

The original GTX 680 review was done with 2x MSAA and was barely playable (40 fps on both cards), so I doubt 4x MSAA is going to be worth testing on a single card.

http://www.hardocp.com/article/2012/03/22/nvidia_kepler_gpu_geforce_gtx_680_video_card_review/5

And isn't the point of a gaming card gaming? GPGPU is nice and all, but I'd much rather have a lower-power card that performs better in games than a general-purpose card (like Fermi was). I guess it just depends on what you want to do with the card.
 
Forceman, please check the link I referred to. The MSI Lightning was overclocked to 1190 MHz core / 5.9 GHz memory.

http://hardocp.com/article/2012/03/20/msi_r7970_lightning_video_card_review/7

This is what HardOCP says:

" Once we overclocked the MSI R7970 Lightning we had enough performance left to increase Anti-Aliasing settings to 4X MSAA. This provided an all around better gameplay experience with better image quality and smoother images. At these more demanding settings the overclocked MSI R7970 Lightning averaged 45.6 FPS.At its stock settings, the MSI R7970 Lightning was overall playable at 4X MSAA and averaged 41 FPS, but we felt that the periodic lag during intense fighting which was enough to force us to lower it back to 2X MSAA despite averaging over our 40 FPS standard in this game. "

So what do you mean, "I doubt 4x MSAA is going to be worth testing for a single card"?

Power is not a factor for people on the bleeding edge of technology. Acoustics and temps do matter, but custom air-cooled cards like the MSI Lightning, and water-cooled cards, are going to handle both the 680 and 7970 GPUs pretty well. Maximum stable performance is what counts.
 
Forceman, also note that the particular MSI Radeon HD 7970 Lightning which HardOCP got did not overclock well; it is on the lower end of HD 7970 max-voltage overclocks. The Gigabyte Radeon HD 7970 sample HardOCP got clocked to 1.3 GHz core / 6.4 GHz memory and scored 50.5 fps avg. That is amazing.

http://hardocp.com/article/2012/02/08/gigabyte_radeon_hd_7970_oc_video_card_review/6

"As we have seen in our last two games the overclocked achieved on the GIGABYTE Radeon HD 7970 has provided ample performance. We were able to increase MSAA to 4X and still have better performance than its stock speeds at 50.5 average FPS. This was by far the most enjoyable gameplay and the best performance. "

I would like to see if the GTX 680 can beat this performance at 2560x1600 Ultra 4X MSAA.
 