NVIDIA Kepler GeForce GTX 680 Video Card Review @ [H]ardOCP

Got my 2 cards this morning, been testing a couple of hours. I am straight up not happy with the overclocking and SLI scaling in the titles I have tested so far. It is not as good as the 7970s I came from. I'm hoping that's a driver issue, but that aspect is not pleasing. Comparing SLI 680 to xfire 7970, there are areas that are 10-20 fps slower in Crysis 1/2 at 2560 resolution with ultra textures / DX11... same for Metro 2033 and Witcher 2 at the same resolution. On the other hand, Skyrim runs much faster on the 680s, Dragon Age 2 is about the same on both setups, and BF3 is faster on the 680s. This is comparing against OC'ed 7970s in crossfire.

Based on what I've seen so far from reviews, it seems like SLI scaling is not what it should be. Probably some driver updates to come.
 
Day 1 adoption is always pretty shaky, particularly when it comes to new architectures. Always has been; always will be.
 
yeah xoleras don't return them just yet. btw what is that gpu shroud you got there?
 

It's the one that comes with the Cooler Master HAF X case. I like them; I can attach a 120mm fan on the end and it generally lowers GPU temps by a good 4-5C.

The case is awesome for cooling GPUs on air :cool:

As far as SLI, I guess it's only specific titles. BF3, Batman and Skyrim scale ridiculously well; Crysis / Crysis 2 / Metro 2033 at ultra settings, not so much (compared to 7970 xfire). With the latest WHQL the 7970 scales better in those last 3 titles... OTOH Batman: AC was a treat, 680 SLI is great in that game. It also doesn't take as big a performance hit with PhysX as the 580s did. Hopefully NV will get some new drivers out soon.

PS I'm still amazed at how damn quiet these cards are
 
On day 1 of the 7970 launch, wasn't crossfire completely non-functioning? So yeah, it's not really a fair comparison to make just yet as this is day 1 of the card.
 

Can't really argue with that. It took more than a month to shake those issues out...point taken :cool:
 
The more I hear about these cards the more I want two, haha. I need one but want two, haha. Shit.
 
I don't think pci-e 2.0 versus 3.0 is even an issue bro, we're not even close to saturating pcie 2.0 yet.

Then Nvidia should not represent that their new card is PCIe 3.0 compliant. And per your logic they should not bother even creating a card that is...right?

And, by the way, Tri and Quad Fire 7970 systems already do show gains on PCIe 3.0.

It's seriously lame to release a card that is PCIe 3.0 compliant that cannot run 3.0 on the only available 3.0 platform. Pretty silly really. AMD figured it out with the 7000 series....and those idiots at AMD cannot even create good drivers.
 
Show us ONE application that is actually hurt by only having PCIe 2.0 x16. Give nVidia time to figure this one out. AMD might have got it right, but they still have a whole lot of other issues to fix.
 
I don't think pci-e 2.0 versus 3.0 is even an issue bro, we're not even close to saturating pcie 2.0 yet.

Look at it this way: why put it on the box if it's not true? Whether or not it makes a difference doesn't matter. It's the classic "if they lie about something so negligible, what else could they lie about" example.

I feel for him, because way back in the day this happened to me with nVidia and the 6800 GT. I was so excited over my launch-day card, reading about all its features on the box on my way home. Then I remember the utter disappointment of the video processor never working, and after massive complaints they simply removed it from the box and pretended it never existed. The slap in the face was that they then released a $60-70 card that had all the video features it promised. It's easy to argue from your chair, but it makes a world of difference when it's your new baby. :D


Sorry to hear about your performance issues, though. Considering the card shines everywhere else, I'm sure it was just timing; they'll eventually improve it.
 
I'm guessing this has more to do with X79 than it does the GTX 680. Let's see what happens with a true PCIe 3.0 implementation like Ivy Bridge before we get too excited.
 


Other than the SLI scaling, I'm amazed at how cool and quiet these cards are. Even with a 60% manual fan I can't really hear it, whereas the AMD 7970 fan is audible even at 45%. The fan is much better designed and quieter.

I'm not sure if I'm keeping the 680s; I'm very impressed by how cool and quiet they are. The SLI scaling is downright puzzling, though, because I remember 580 SLI was *very* good with scaling. The 680s appear to be NOT as good and need some driver work. 7970 xfire seems to scale well with a few exceptions (Batman: AC being one).

Back to more testing

Followup: Deus Ex is about the same on both setups; both scale well. Surprising, since Deus Ex: HR is an AMD title. Alan Wake does not scale well on 680 SLI. Back to more testing.
 
Wouldn't have thought of it that way. I can't wait to see what the mobile GPUs do. Unfortunately, nV only announced a couple of rebranded laptop cards today.

Mobile cards are often several tiers below similarly named cards on desktop. So it still won't be close to desktop performance. But it will be better than the current best card on laptop at least.
 
Followup: Deus Ex is about the same on both setups; both scale well. Surprising, since Deus Ex: HR is an AMD title. Alan Wake does not scale well on 680 SLI. Back to more testing.

It looks like you're using an SB system. Do you think your 8x PCIe bandwidth could be the issue?

Mobile cards are often several tiers below similarly named cards on desktop. So it still won't be close to desktop performance. But it will be better than the current best card on laptop at least.

Yep, but given how this card scales up and down, there's really no reason they can't just run it at 51.2% power to meet the MXM 3.0b 100 W limitation. Assuming the scaling were linear (which it isn't), that would still be the power of half a 6990, or roughly one 6950. This is quite a step forward for laptops if it works even remotely like that.
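For what it's worth, here's a quick sketch of the arithmetic behind that ~51% figure. It assumes the desktop GTX 680's 195 W rated board power and a 100 W MXM 3.0 Type B budget, and the variable names are purely illustrative:

[code]
# Back-of-the-envelope check of the "run it at ~51% power" idea.
# Assumes the desktop GTX 680's 195 W rated board power and a
# 100 W MXM 3.0 Type B module budget, as cited above.
desktop_board_power_w = 195.0
mxm_limit_w = 100.0

fraction = mxm_limit_w / desktop_board_power_w
print(f"Mobile part could draw {fraction:.1%} of desktop board power")
# -> roughly 51%, in line with the 51.2% figure above. Performance would
# not scale linearly with power, as the post notes, so the real-world
# mobile result would land somewhat lower.
[/code]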
 
I don't think pci-e 2.0 versus 3.0 is even an issue bro, we're not even close to saturating pcie 2.0 yet.

For single cards, you are right. No need for PCIe 3.0.

Many motherboards, however, especially upcoming PCIe 3.0 Ivy Bridge systems, have a limited number of PCIe lanes.

Ivy Bridge will have 16 PCIe 3.0 lanes and 4 PCIe 2.0 lanes (plus a few more PCIe 2.0 lanes off the chipset).

Let's say you put two boards in that system.

Then you wind up with 8x-8x (if that's how the switching is laid out)

At PCIe 3.0 speeds, 8x-8x would be just as good as a 16x-16x PCIe 2.0 configuration, but because the GTX 680 limits itself to 2.0 in the drivers, it's just going to run at 8x-8x PCIe 2.0.

Now 8x-8x PCIe 2.0 really isn't bad for current SLI. But what if we add another card (provided the motherboard layout and switching support it)? Then you wind up with 8x-4x-4x. If you have PCIe 3.0 that's equivalent to 16x-8x-8x at 2.0, which is great, but if you don't, you are going to see some performance slowdowns.

I agree that for most people it's likely not an issue, but for those who want to put multiple boards in upcoming Ivy Bridge systems it might be.
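To put rough numbers on those lane splits, here's a small sketch using the usual approximate per-lane figures (~500 MB/s for PCIe 2.0 after 8b/10b encoding, ~985 MB/s for PCIe 3.0 after 128b/130b). The function name and layouts are just illustrative:

[code]
# Approximate one-direction slot bandwidth for the lane splits discussed above.
# Per-lane usable bandwidth after encoding overhead (approximate):
#   PCIe 2.0: 5 GT/s, 8b/10b    -> ~500 MB/s per lane
#   PCIe 3.0: 8 GT/s, 128b/130b -> ~985 MB/s per lane
PER_LANE_MB_S = {"2.0": 500, "3.0": 985}

def slot_bandwidth_gb_s(lanes, gen):
    """Approximate one-direction bandwidth of a slot, in GB/s."""
    return lanes * PER_LANE_MB_S[gen] / 1000

for layout in [(16, 16), (8, 8), (8, 4, 4)]:
    for gen in ("2.0", "3.0"):
        slots = " / ".join(f"x{n}: {slot_bandwidth_gb_s(n, gen):.1f} GB/s" for n in layout)
        print(f"PCIe {gen}  {slots}")
# An x4 slot at PCIe 2.0 (~2 GB/s) is where a fast card could start to feel
# the pinch; the same x4 slot at 3.0 (~3.9 GB/s) behaves like x8 at 2.0.
[/code]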
 
Keep in mind nVidia is still using that fucked up branding system of old for the mobile parts, meaning you can have 40nm Fermis, 40nm architecturally improved Fermis (with respect to perf-per-watt) and 28nm Keplers using the same name...

The good news is that the Fermis will be replaced by Keplers wearing the same names/numbers, but who the hell knows when.
 
Yep, but given how this card scales up and down, there's really no reason they can't just run it at 51.2% power to meet the MXM 3.0b 100 W limitation. Assuming the scaling were linear (which it isn't), that would still be the power of half a 6990, or roughly one 6950. This is quite a step forward for laptops if it works even remotely like that.


That would be pretty badass, but even in that case, they have the choice of going even lower power and giving you more battery life instead. My guess is that most mainstream laptops will do the latter, but that a few gaming models might try for a system that utilizes the full 100W MXM standard.
 
Zarathustra[H] said:
At PCIe 3.0 speeds, 8x-8x would be just as good as a 16x-16x PCIe 2.0 configuration, but because the GTX 680 limits itself to 2.0 in the drivers, it's just going to run at 8x-8x PCIe 2.0.

For now, the PCIe 2.0 limitation is only for the X79 chipset - I haven't seen any indication that the card will still be limited to 2.0 speeds when placed in an Ivy Bridge system.
 
Zarathustra[H] said:
That would be pretty badass, but even in that case, they have the choice of going even lower power and giving you more battery life instead. My guess is that most mainstream laptops will do the latter, but that a few gaming models might try for a system that utilizes the full 100W MXM standard.

Of course. I am only speaking of the ultra-high-end gaming laptops such as Clevo/Alienware and, to a lesser extent, Asus/MSI.
 
Followup: Deus Ex is about the same on both setups; both scale well. Surprising, since Deus Ex: HR is an AMD title. Alan Wake does not scale well on 680 SLI. Back to more testing.

Hey bud, are you doing a memory OC on the 7970s and a memory offset on the 680s?

Thanks
 
Two cards max on these. Hopefully it's the drivers and not hardware limitations.

Actually, it's probably the limitation of only 2GB of RAM. I must say that NVIDIA has made a very strange decision to position a 2GB GTX 680 at $500 against the 3GB 7970. The problem is that a lot of people who are willing to spring $500 for a video card are probably also either running a 3-monitor setup or seriously considering it. Which means that, with some current games already going over the 2GB mark for textures, these GTX 680s are already obsolete. I for one don't give a damn how much faster this card is at 1920x1080, because I usually game at 5760x1050. At that resolution, the difference in frame rates between a GTX 680 and a 7970 is non-existent EXCEPT when the GTX 680 runs out of video memory.

Honestly, the mid-range 600 series should have come with 1.5 or 2GB of video memory, but the high end should have come with 3 or 4GB. There's no doubt the 600 series is a nice chip, but what a silly move by NVIDIA to starve its top card of memory where it matters most.
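To make the multi-monitor point concrete, here's a very rough sketch of how just the render targets scale with resolution. The helper below is purely illustrative (32-bit color + depth per sample, 4x MSAA, double buffering) and ignores textures, shadow maps, and G-buffers, which are the real memory hogs:

[code]
# Very rough illustration of why triple-monitor resolutions chew through VRAM.
# Counts only basic render targets (32-bit color + 32-bit depth per sample);
# textures, shadow maps and deferred G-buffers add far more on top of this.
def render_target_mb(width, height, msaa=1, buffers=2):
    bytes_per_pixel = 4 + 4  # color + depth/stencil, per sample
    return width * height * bytes_per_pixel * msaa * buffers / (1024 ** 2)

for w, h in [(1920, 1080), (2560, 1600), (5760, 1050)]:
    print(f"{w}x{h} @ 4x MSAA: ~{render_target_mb(w, h, msaa=4):.0f} MB in render targets")
# 5760x1050 has ~2.9x the pixels of 1920x1080, so every per-pixel buffer
# grows by the same factor -- which is why a 2GB card gets tight sooner
# at Eyefinity/Surround resolutions than a 3GB card does.
[/code]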

To make matters worse, here in Canada the GTX 680 is $550, not $500, and most are being priced at $599.
 
Nice review, guys.

Very surprising indeed to see the 680 more power efficient than the AMD cards. Took a while for me to grasp that. I first saw the pictures of the GTX 680, noticed only two 6-pin connectors, and was just waiting to laugh at Nvidia. I will eat my crow now; in regards to power efficiency, good job Nvidia.

One thing that disappoints me as of late in your video card reviews: you always overclock the card being reviewed, but never pit it against the other cards overclocked as well. Maybe something to look into.

You guys have to be slammed with articles to write with all this new tech coming out. I hope to see a 7870 CrossFire review and GTX 680 SLI vs 7970 CrossFire (maybe even tri-SLI/quad-SLI vs TriFire/QuadFire).
 
One thing that disappoints me as of late in your video card reviews: you always overclock the card being reviewed, but never pit it against the other cards overclocked as well. Maybe something to look into.

Maybe you can give some examples of this. All I see in [H] reviews are out-of-the-box experiences. I don't consider auto-clock a true overclock. This has been beaten to death in this thread already.
 
One thing that disappoints me as of late in your video card reviews: you always overclock the card being reviewed, but never pit it against the other cards overclocked as well. Maybe something to look into.

They pit it against the stock version of itself, so you can see how much overclocking actually benefits you. And even then, it's a crapshoot whether you're going to reach the same speeds. Doing overclock versus overclock is a doubly complex crapshoot, so that seems even less useful to me.
 
Actually, it's probably the limitation of only 2GB of RAM. I must say that NVIDIA has made a very strange decision to position a 2GB GTX 680 at $500 against the 3GB 7970. The problem is that a lot of people who are willing to spring $500 for a video card are probably also either running a 3-monitor setup or seriously considering it. Which means that, with some current games already going over the 2GB mark for textures, these GTX 680s are already obsolete. I for one don't give a damn how much faster this card is at 1920x1080, because I usually game at 5760x1050. At that resolution, the difference in frame rates between a GTX 680 and a 7970 is non-existent EXCEPT when the GTX 680 runs out of video memory.

Honestly, the mid-range 600 series should have come with 1.5 or 2GB of video memory, but the high end should have come with 3 or 4GB. There's no doubt the 600 series is a nice chip, but what a silly move by NVIDIA to starve its top card of memory where it matters most.

To make matters worse, here in Canada the GTX 680 is $550, not $500, and most are being priced at $599.

Can you please show me where in the review the GTX 680 ran out of memory? I went through the review again and couldn't find it.
 
I ran my 7970s at 1125/1700 for 24/7 use, I'm about to fiddle with memory offset more on the 680s.

Please let us know the results.

If anything, I think that the 680 may be more limited by memory bandwidth due to its narrower bus, so in order to get good results out of an overclock, you probably really need to dial in the RAM.
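For reference, a quick sketch of the theoretical numbers behind the "narrower bus" point, assuming the reference memory clocks (256-bit at 6008 MHz effective on the GTX 680, 384-bit at 5500 MHz effective on the 7970); the helper function is just for illustration:

[code]
# Theoretical peak memory bandwidth from bus width and effective data rate.
def mem_bandwidth_gb_s(bus_width_bits, effective_mhz):
    return (bus_width_bits / 8) * effective_mhz / 1000  # bytes per transfer * MT/s -> GB/s

print(f"GTX 680 (256-bit @ 6008 MHz eff.): {mem_bandwidth_gb_s(256, 6008):.0f} GB/s")
print(f"HD 7970 (384-bit @ 5500 MHz eff.): {mem_bandwidth_gb_s(384, 5500):.0f} GB/s")
# ~192 GB/s vs ~264 GB/s at stock. On the narrower 256-bit bus, each extra
# 1000 MHz of effective data rate only buys about:
print(f"+{mem_bandwidth_gb_s(256, 1000):.0f} GB/s per +1000 MHz effective")
[/code]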
 
Actually, it's probably the limitation of only 2GB of RAM. I must say that NVIDIA has made a very strange decision to position a 2GB GTX 680 at $500 against the 3GB 7970.

It's not strange at all when you consider the GPGPU market. Memory size is MUCH more important there than in gaming, and nVidia doesn't want people buying $500 GeForce cards in place of $2000 Tesla cards. AMD doesn't really have that problem, since no one seriously uses their cards for GPGPU (other than the Bitcoin crowd and some hackers).
 
http://www.brightsideofnews.com/new...e-from-the-architects-of-g80-or.aspx?pageid=2

we'll give you an efficiency example. Samaritan demo by Epic Games required three GTX 580 boards in 3-Way SLI to run at 30 frames a second in Full HD resolution. Even by going Quad SLI with two GTX 590 boards were not a guarantee of smooth framerates. In terms of finances, we're talking about a $1647 investment; $1107 after the last round of GTX 580 price cuts. A single GeForce GTX 680 manages to achieve the identical framerate and we're talking about a $499.99 investment (the cost of plain vanilla GTX 680).

Link to samaritan video if anyone has forgotten:
http://www.youtube.com/watch?v=XgS67BwPfFY
 
That means nothing, really; the demo running on the 680 was using FXAA instead of 4x MSAA, and they've had over a year to optimize it. It's just marketing hype.
 
Anyone know if this Samaritan demo is available for download? Would make a nice bench.
 