I don't get why people are declaring the 680 the winner already??

Stop being so defensive. The OP didn't say anything remotely similar to what you are saying.
It's a fair and valid point.
That doesn't mean we don't respect that Nvidia has a great technology approach. It's a smart system for the card, and it's great. But as enthusiasts, I think determining which card wins when both are OCed would be a huge benefit to this community and many others.

Wasn't directed @ the OP.
 
Hilarious! The 7970 comes in at $50 more than the GTX 580, lays a pretty decent smack on it, and it's considered "meh." Tack on 5% more performance and $50 off the price (and two months late) and the GTX 680 is the greatest card evarrrrrrr.
For the trash that they talked, in my mind NVIDIA failed to deliver. Instead of a beatdown we got "me too, but I'm slightly cheaper." I'm glad I spent the extra $50 over two months ago to enjoy a performance increase. AMD will no doubt drop prices to match, and then we can actually get excited about a potential price war.

Brent is finally on the right track, getting down and dirty with overclocking. Overclock potential is still the big selling point for the OG [H] members. I know I didn't pay more for an Opteron 64 because it had the same "out of the box experience" as an Athlon 64.
 
NVIDIA’s GeForce GTX 680 has delivered a more efficient GPU, lower in TDP, that is better or competitive in performance, at a lower price.

/end thread.
 
Doesn't AMD already have this tech in place with the 6900/7900 series in the form of PowerTune, except it only downclocks?
 
why is it a winner?

out of the box it's
$50 cheaper
5-10% faster
uses 5-10% less power

So let's look at it again: cheaper, less power, faster.

how is that not winning? :confused:

You left out the most important bit! Better drivers.

Now I know Nvidia has had some issues, but using AMD drivers for the last year has been dreadful. I have had to go back and forth between an Eyefinity layout and a normal 3-monitor extended layout a ridiculous number of times, and it NEVER remembers which monitors are which. Sometimes when I switch to Eyefinity, nothing happens and it just shows me the same content on all three monitors. The fact that the card downclocks anytime Flash media plays (even while gaming) is annoying as hell. To solve that you need to turn off Flash acceleration, which sort of defeats the point. When I was running Crossfire, I am pretty sure I spent more time playing Crossfire roulette than playing games. There were plenty of times when I just turned off Crossfire and forgot about it because whatever I was playing at the time had negative scaling. Plus, waiting for CAP drivers, which come out every other week, just to play new games was giving me a migraine.

By no means do I think Nvidia is perfect, but I certainly think it is the less painful choice.
 
The 7970 hardware appears to have much untapped potential. BUT if you want to play with 2 or 3 7970s in a Crossfire/tri-fire config, it's just plain awful. I get horrible scaling in half the games I play, and it's not all solely due to shitty engines (SWToR, Tera, Skyrim, EVE-Online); there's also very noticeable stuttering at anything below 60fps.

Hell, I get negative scaling in SWToR if I run tri-fire, and with Skyrim my 3rd video card isn't used at all because it's disabled in the Crossfire profile. That's not even taking into account that at launch we didn't even have a proper driver for the 7970.

Right now I get all sorts of BSoDs with the newest betas from Guru3D just sitting idle in Windows, though they are faster (so I'll give them that much). Other drivers just perform like shit. If it weren't for the buggy AMD drivers, with negative tri-fire scaling and the stuttering with Crossfire/tri-fire enabled, I honestly think these cards could perform so much better.

At this point, after having owned Crossfire rigs since the 5850 (and I regret getting rid of my 5850s, since they were awesome), I am going to go back to Nvidia. No idea if the grass is any greener on the other side of the fence, but we will see!
 
Nvidia did their work right this time, and I am sure ATI will adjust prices accordingly. Any of you saying "well, it's $50 cheaper" should know that this is not going to be the case for very long.
 
The 7970 hardware appears to have much untapped potential. BUT if you want to play with 2 or 3 7970s in a Crossfire/tri-fire config, it's just plain awful. I get horrible scaling in half the games I play, and it's not all solely due to shitty engines (SWToR, Tera, Skyrim, EVE-Online); there's also very noticeable stuttering at anything below 60fps.

Hell, I get negative scaling in SWToR if I run tri-fire, and with Skyrim my 3rd video card isn't used at all because it's disabled in the Crossfire profile. That's not even taking into account that at launch we didn't even have a proper driver for the 7970.

Right now I get all sorts of BSoDs with the newest betas from Guru3D just sitting idle in Windows, though they are faster (so I'll give them that much). Other drivers just perform like shit. If it weren't for the buggy AMD drivers, with negative tri-fire scaling and the stuttering with Crossfire/tri-fire enabled, I honestly think these cards could perform so much better.

At this point, after having owned Crossfire rigs since the 5850 (and I regret getting rid of my 5850s, since they were awesome), I am going to go back to Nvidia. No idea if the grass is any greener on the other side of the fence, but we will see!

That was why I returned the 3rd 7970. I'm only running 2 now, and everything I've played (BF3, Mass Effect 3) has been perfect. I was having BSOD issues with the 12.2 driver, which forced me to go back to RC11, but with the newest one (said to be 12.2 but more like 12.3) everything works fine. I'm waiting for the drivers to get better before I go tri-fire with the 7970.
 
The 7970 hardware appears to have much untapped potential. BUT if you want to play with 2 or 3 7970s in a Crossfire/tri-fire config, it's just plain awful. I get horrible scaling in half the games I play, and it's not all solely due to shitty engines (SWToR, Tera, Skyrim, EVE-Online); there's also very noticeable stuttering at anything below 60fps.

Hell, I get negative scaling in SWToR if I run tri-fire, and with Skyrim my 3rd video card isn't used at all because it's disabled in the Crossfire profile. That's not even taking into account that at launch we didn't even have a proper driver for the 7970.

Right now I get all sorts of BSoDs with the newest betas from Guru3D just sitting idle in Windows, though they are faster (so I'll give them that much). Other drivers just perform like shit. If it weren't for the buggy AMD drivers, with negative tri-fire scaling and the stuttering with Crossfire/tri-fire enabled, I honestly think these cards could perform so much better.

At this point, after having owned Crossfire rigs since the 5850 (and I regret getting rid of my 5850s, since they were awesome), I am going to go back to Nvidia. No idea if the grass is any greener on the other side of the fence, but we will see!

I experienced all the exact same problems as you, going either Crossfire or tri-fire with the Radeon 7970. I also sold everything and went back to Nvidia today. Time will tell if I made the right choice, but by the look of things I don't see AMD doing a 180 and coming up with proper drivers by some miracle.
 
I experienced all the exact same problems as you, going either Crossfire or tri-fire with the Radeon 7970. I also sold everything and went back to Nvidia today. Time will tell if I made the right choice, but by the look of things I don't see AMD doing a 180 and coming up with proper drivers by some miracle.

Maybe it's luck. I never had problems with ATI cards or Crossfire. But when I moved to a 580, a 480, and then a 560, I had driver problems all the time, and I was never able to get 560 SLI to work properly, so I just went back to ATI.
 
The problem with GPU Boost is that it doesn't seem to be the same between cards. I have read some reviews where it never went past 1056MHz, and Brent said he saw 1.3GHz. Reviews have been all over the place too. I don't know how much I'd personally trust this new technology.

If reviewers happen to get cards that hit higher clocks than you'd get, you'd end up with a slower GTX 680, even though you paid the same. The difference between 1056MHz and 1.3GHz is pretty massive.

In one review I've seen, the two cards are pretty close and GPU Boost only seems to go up to 1.1GHz. In others, the 7970 gets creamed, and the reviewers either don't talk about GPU Boost clocks (which is what I think NV wants from them) or it ends up being a pretty large boost.

I really, really want to see a graph overlaying FPS with frequency. It'd be interesting to see whether GPU Boost is only padding FPS results by clocking higher during low-load renders (like looking at the floor) or whether it kicks in when it matters. Right now it's impossible to tell. If it's the latter, GPU Boost is great. If it only raises your FPS from 80 to 100 and you're on a 60Hz screen, it does nothing for the gaming experience, but it will raise the average FPS and make the card look really good, which ends up being a cheap marketing trick.

When we get the [H] OC review, it'd be awesome to see them both at the same clocks and at maximum clocks.
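
For anyone wanting to make that overlay themselves, here's a rough sketch of the idea. It assumes you've already logged per-second GPU core clock and FPS to a CSV (for example by merging a GPU-Z sensor log with a frametime capture); the file name and column names below are made up for illustration:

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-second log with columns: time_s, core_clock_mhz, fps
log = pd.read_csv("gpu_log.csv")

fig, ax_fps = plt.subplots(figsize=(10, 4))
ax_clk = ax_fps.twinx()  # second y-axis sharing the same time axis

ax_fps.plot(log["time_s"], log["fps"], color="tab:blue")
ax_clk.plot(log["time_s"], log["core_clock_mhz"], color="tab:red")

ax_fps.set_xlabel("Time (s)")
ax_fps.set_ylabel("FPS", color="tab:blue")
ax_clk.set_ylabel("Core clock (MHz)", color="tab:red")
ax_fps.axhline(60, linestyle="--", color="gray")  # 60Hz refresh reference

fig.suptitle("GPU Boost clock vs. frame rate over a run")
plt.tight_layout()
plt.show()

If the red clock line only spikes when the blue FPS line is already well above 60, Boost is mostly padding averages; if it rises during the dips, it's helping where it matters.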
 
Maybe it's luck. I never had problems with ATI cards or Crossfire. But when I moved to a 580, a 480, and then a 560, I had driver problems all the time, and I was never able to get 560 SLI to work properly, so I just went back to ATI.

trolling much?
 
We have yet to see overclocking performance on HardOCP, so the gap might narrow. The 7970 has crazy overclocking headroom.
 
So far the 2GB of RAM hasn't become an issue, as can be seen in the [H] multi-monitor review.

Am I missing something? Neither card showed great frame rates as a single card at 5760x1200. Sure, the 680 is sometimes better than the 7970, but both companies' single cards had poor frame rates. I think sites need to do multi-monitor testing with multi-card setups (SLI/Crossfire) and not just single cards; otherwise the tests are out of context. It should be done this way to demonstrate how these cards will work in real setups, and to show whether or not they are designed to scale in multi-monitor SLI/Crossfire configurations. Without showing how multiple cards scale with multi-monitor setups, a review gives a dubious impression of the cards being compared.

The question that hasn't been answered, except perhaps in the Swedish review (unless the issues there are just a driver problem), is whether the 2GB of memory on the 680 is enough to scale with two-way, three-way, or even four-way SLI/Crossfire at 5760x1200 and 7680x1600. If a performance comparison shows the 680 not increasing in performance at the same rate as the 7970 as cards and resolution are added, that is a significant weakness for people looking to run a serious multi-monitor setup.
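
Once multi-card numbers do exist, a simple scaling-efficiency calculation would make that comparison concrete. The FPS values in this sketch are placeholders, not real benchmark results:

# Multi-GPU scaling efficiency from review FPS numbers (placeholder values).
single_fps = {"GTX 680": 38.0, "HD 7970": 35.0}   # 1 card at 5760x1200
triple_fps = {"GTX 680": 92.0, "HD 7970": 98.0}   # 3 cards at 5760x1200
n_cards = 3

for card in single_fps:
    ideal = single_fps[card] * n_cards
    efficiency = triple_fps[card] / ideal * 100
    print(f"{card}: {triple_fps[card]:.0f} fps with {n_cards} cards "
          f"= {efficiency:.0f}% of ideal scaling")

If the 680's efficiency falls off faster than the 7970's as cards and resolution are added, that would point to the 2GB frame buffer (or the drivers) becoming the bottleneck.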
 
The problem with GPU Boost is that it doesn't seem to be the same between cards. I have read some reviews where it never went past 1056MHz, and Brent said he saw 1.3GHz. Reviews have been all over the place too. I don't know how much I'd personally trust this new technology.

If reviewers happen to get cards that hit higher clocks than you'd get, you'd end up with a slower GTX 680, even though you paid the same. The difference between 1056MHz and 1.3GHz is pretty massive.

In one review I've seen, the two cards are pretty close and GPU Boost only seems to go up to 1.1GHz. In others, the 7970 gets creamed, and the reviewers either don't talk about GPU Boost clocks (which is what I think NV wants from them) or it ends up being a pretty large boost.

I really, really want to see a graph overlaying FPS with frequency. It'd be interesting to see whether GPU Boost is only padding FPS results by clocking higher during low-load renders (like looking at the floor) or whether it kicks in when it matters. Right now it's impossible to tell. If it's the latter, GPU Boost is great. If it only raises your FPS from 80 to 100 and you're on a 60Hz screen, it does nothing for the gaming experience, but it will raise the average FPS and make the card look really good, which ends up being a cheap marketing trick.

When we get the [H] OC review, it'd be awesome to see them both at the same clocks and at maximum clocks.

Excellent post :)
You know your stuff and covered all the angles.
 
Damn, some of you take your video card companies way too personally. I never said nVidia was "cheating" or that there was anything wrong with the [H] review! In fact, I think GPU Boost is a cool feature, and I'm pleasantly surprised with the 680's performance after some of the negative rumors. But it is overclocking itself, so I think it's only fair to compare it to an overclocked 7970. Some of you guys need to chill out! I honestly don't care whether Nvidia or AMD/ATI wins this round, and I think all this fanboyism is immature and obnoxious.

Instead of thinking of it as overclocking itself, just think of it as a variable clock speed, since, you know, that is what it's actually doing.
 
The problem with GPU Boost is that it doesn't seem to be the same between cards. I have read some reviews where it never went past 1056MHz, and Brent said he saw 1.3GHz. Reviews have been all over the place too. I don't know how much I'd personally trust this new technology.

If reviewers happen to get cards that hit higher clocks than you'd get, you'd end up with a slower GTX 680, even though you paid the same. The difference between 1056MHz and 1.3GHz is pretty massive.

In one review I've seen, the two cards are pretty close and GPU Boost only seems to go up to 1.1GHz. In others, the 7970 gets creamed, and the reviewers either don't talk about GPU Boost clocks (which is what I think NV wants from them) or it ends up being a pretty large boost.

I really, really want to see a graph overlaying FPS with frequency. It'd be interesting to see whether GPU Boost is only padding FPS results by clocking higher during low-load renders (like looking at the floor) or whether it kicks in when it matters. Right now it's impossible to tell. If it's the latter, GPU Boost is great. If it only raises your FPS from 80 to 100 and you're on a 60Hz screen, it does nothing for the gaming experience, but it will raise the average FPS and make the card look really good, which ends up being a cheap marketing trick.

When we get the [H] OC review, it'd be awesome to see them both at the same clocks and at maximum clocks.

Excellent point about when it's boosting - as you said, it may be easier to boost when the load is low and produce meaningless frame rates.

I don't agree with the other part of your comment, though - the part about the difference between cards. Just like different cards overclock differently, different cards are going to boost differently, so I don't really see how this is a problem. If you consider anything above the default boost (1056MHz) as free, then you really have nothing to complain about - just like if you happen to get a 7970 that only does stock clocks or very small overclocks. Additional boost over 1056MHz isn't guaranteed, just like overclocking isn't. It does make it hard to do a straight benchmark comparison though, I'll grant you that.
 
Hilarious! The 7970 comes in at $50 more than the GTX 580, lays a pretty decent smack on it, and it's considered "meh." Tack on 5% more performance and $50 off the price (and two months late) and the GTX 680 is the greatest card evarrrrrrr.
For the trash that they talked, in my mind NVIDIA failed to deliver. Instead of a beatdown we got "me too, but I'm slightly cheaper." I'm glad I spent the extra $50 over two months ago to enjoy a performance increase. AMD will no doubt drop prices to match, and then we can actually get excited about a potential price war.

Brent is finally on the right track, getting down and dirty with overclocking. Overclock potential is still the big selling point for the OG [H] members. I know I didn't pay more for an Opteron 64 because it had the same "out of the box experience" as an Athlon 64.

Don't forget the 680 also introduces a great new AA method, is more power efficient, and runs cooler. The new Vsync feature is also very interesting. Between the new tech, performance, efficiency, and price, it's a clear winner.
 
Since overclocking is such a big part of marketing cards these days, why not include an overclocked Radeon? One card does it automatically, the other manually; either way, it's still overclocking.

Except it isn't. Nvidia has a fluctuating clock speed that dynamically changes based on certain criteria, but the upper end of that range still isn't overclocked; it's the factory clock speed of a GTX 680. The 7970 ships with its highest factory clock speed enabled by default.

Not the same.
 
Hilarious! The 7970 comes in at $50 more than the GTX 580, lays a pretty decent smack on it, and it's considered "meh." Tack on 5% more performance and $50 off the price (and two months late) and the GTX 680 is the greatest card evarrrrrrr.
For the trash that they talked, in my mind NVIDIA failed to deliver. Instead of a beatdown we got "me too, but I'm slightly cheaper." I'm glad I spent the extra $50 over two months ago to enjoy a performance increase. AMD will no doubt drop prices to match, and then we can actually get excited about a potential price war.

Brent is finally on the right track, getting down and dirty with overclocking. Overclock potential is still the big selling point for the OG [H] members. I know I didn't pay more for an Opteron 64 because it had the same "out of the box experience" as an Athlon 64.
I jumped on two 680s to replace my two 580s, but not simply for performance reasons. Yes, the performance is better, which is the primary motivation - but they are also 65W lower in TDP per card, and living in FL with summer coming, that is actually something I care about to an extent. I also really like the new technology and am curious to see how TXAA and adaptive vsync work. The combination of lower heat output, TXAA, adaptive vsync AND better performance, while being introduced at the same price I paid for my 580s 18 months ago, is a good deal in my opinion.
 
Except it isn't. Nvidia has a fluctuating clock speed that dynamically changes based on certain criteria, but the upper end of that range still isn't overclocked; it's the factory clock speed of a GTX 680. The 7970 ships with its highest factory clock speed enabled by default.

Not the same.

Yeah, it's really quite ingenious. AMD has to set one clock speed for all their cards, meaning they have to use the lowest common denominator. With Nvidia's auto-clock system, they don't need to bother with the lowest denominator, since the software adjusts the clock for each and every card individually. You'll get cards with varying stock performance, sure, but you'll also squeeze everything possible out of your product.
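
To picture what that per-card adjustment looks like, here's a deliberately simplified sketch of a boost control loop. The limits, step size, and sensor values are invented for illustration; this is the general concept, not NVIDIA's actual algorithm:

# Toy boost loop: step the clock up while power/thermal headroom exists,
# step down when a limit is hit, never dropping below the guaranteed range.
BASE_CLOCK_MHZ = 1006     # example base clock
STEP_MHZ = 13             # one hypothetical boost bin
POWER_TARGET_W = 170      # example board power target
TEMP_LIMIT_C = 98         # example thermal limit

def next_clock(current_mhz, board_power_w, gpu_temp_c, chip_max_mhz):
    """Pick the next clock bin from measured power and temperature."""
    if board_power_w < POWER_TARGET_W and gpu_temp_c < TEMP_LIMIT_C:
        # Headroom available: step up, but only as far as this particular
        # chip is stable (that ceiling varies card to card).
        return min(current_mhz + STEP_MHZ, chip_max_mhz)
    return max(current_mhz - STEP_MHZ, BASE_CLOCK_MHZ)

# Two hypothetical samples under the same load end up at different clocks.
for label, chip_max in [("golden sample", 1300), ("average sample", 1110)]:
    clock = BASE_CLOCK_MHZ
    for _ in range(30):
        clock = next_clock(clock, board_power_w=150, gpu_temp_c=70,
                           chip_max_mhz=chip_max)
    print(f"{label}: settles at {clock} MHz")

Same settings, different silicon, different final clock - which is exactly why stock performance varies between cards.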

At any rate, that feature is all fine and dandy but rather meaningless for us, since we'll all be overclocking our cards anyway. It's going to be interesting to see the OC review and which card really is faster. I would have liked a few more games in the review, though; I can't help fearing that Nvidia might have sacrificed something to get 7970 performance out of a smaller die. I'm also somewhat disappointed by the rather small performance advantage the card has. I've gotten used to seeing Nvidia with a 20% lead across the board.
 
Excellent point about when it's boosting - as you said, it may be easier to boost when the load is low and produce meaningless frame rates.

I don't agree with the other part of your comment, though - the part about the difference between cards. Just like different cards overclock differently, different cards are going to boost differently, so I don't really see how this is a problem. If you consider anything above the default boost (1056MHz) as free, then you really have nothing to complain about - just like if you happen to get a 7970 that only does stock clocks or very small overclocks. Additional boost over 1056MHz isn't guaranteed, just like overclocking isn't. It does make it hard to do a straight benchmark comparison though, I'll grant you that.

It's a problem because a GTX 680 that runs at 1.3GHz in Unigine Heaven will score much better than a GTX 680 that can only run 1056MHz. If a review site, like [H], gets a good GTX 680 that can run 1.3GHz, it's going to score more. The review isn't going to say anything other than "GTX 680 beat 7970 by 20%" or whatever, and that's it. No mention of clocks. Someone will then see benchmarks of a GTX 680 running between 1.2 and 1.3GHz beating a stock 7970 at 925MHz and conclude that the GTX 680 is indeed superior.

If 1056MHz is the highest guaranteed value, they could end up buying a GTX 680, bringing it home, and only having GPU Boost kick in to 1056MHz. Suddenly, that card that looked so good in reviews isn't really all that good after all. The difference between 1056MHz and 1.3GHz in a GPU is massive; in the case of the 7970, a 300MHz change in clock gives a 15-25% increase in performance.
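
Some back-of-the-envelope math on why that spread matters. The 0.8 scaling factor (performance growing at roughly 80% of the clock increase) is an assumption for illustration, not measured data:

# How much a reviewer's 1.3GHz sample could differ from a retail card
# that only boosts to the guaranteed 1056MHz.
guaranteed_mhz = 1056
golden_mhz = 1300
scaling = 0.8  # assumed: perf grows at ~80% of the clock increase

clock_gain = (golden_mhz - guaranteed_mhz) / guaranteed_mhz  # ~23% more clock
perf_gain = clock_gain * scaling                             # ~18% more performance

print(f"Clock advantage: {clock_gain:.1%}")
print(f"Estimated performance advantage: {perf_gain:.1%}")

That's roughly a 20% gap between two "stock" cards, which is the scale of difference being worried about here.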

The bottom line is that NV is saying you're guaranteed 1056MHz out of Boost, and everything after that is luck; yet everyone reviewing these chips seems to have samples that boost well over 1056MHz. It's not like either team hasn't cherry-picked chips before.

If every chip hit the same clocks, I wouldn't have a problem with it. That's what separates NV's Boost from CPU turbo. It's very secretive and Apple-esque, and I don't like it when companies dumb things down and hide things from customers.
 
An interesting discussion, but I'd wait until we have definitive data on different cards boosting to different levels. Right now I haven't seen any site go into great detail on the boost feature, let alone plot graphs of it in action with real min/max recordings in a variety of situations across multiple cards. Hopefully the HardOCP article on the boost feature covers some of this.
 
I'd wait until we have definitive data on different cards boosting to different levels. Right now I haven't seen any site go into great detail on the boost feature, let alone plot graphs of it in action with real min/max recordings in a variety of situations across multiple cards.

We really need to see GPU Boost analyzed. The fact that NV says GPU Boost only hits around 1.1GHz when [H] saw 1.3GHz is a little disturbing. I don't want to make any rash accusations, but it's odd that a review site would get a GPU that automatically clocks itself 200MHz higher than what the company claims it should.
 
Honestly, GPU Boost makes sense when you realize that the majority of consumers don't overclock. Does it make things murkier for enthusiasts who want to know every intimate detail of the GPU and push it to the max? Yes. But that's not the average person. I don't see how you can fault NVIDIA for designing a feature that makes the product better out of the box. I would love to see AMD introduce this. I can't imagine there would have been much of an outcry if AMD had done this on the 7970 - it's only a problem because AMD was (imo) too conservative on the stock clocks this round.

Looking at all the products on paper, I think AMD is having some driver issues. The 7970 should be scoring better than it is.
 
I agree. This isn't an enthusiast card; it's a cheaply made gaming/mainstream card pushed to extreme clocks at the factory. There's no overclocking headroom and the scaling is pathetic. This won't take any HWBot records from the 7970.

For the gaming crowd this is the card to have; however, for the enthusiast crowd (overclockers/benchers) it isn't.

Just for you Profumo: GTX 680 takes #1 on HWBot from 7970, Crow???

[HWBOT] GTX 680 takes record @ 1900 Mhz

http://hwbot.org/submission/2267660_kingpin_3dmark11___performance_geforce_gtx_680_15327_marks/

 
Just for you Profumo: GTX 680 takes #1 on HWBot from 7970, Crow???

[HWBOT] GTX 680 takes record @ 1900 Mhz

Very nice.

However, he did completely bypass the card's voltage regulators and use an external power board for it - something that didn't have to be done for the 7970 record.

Awesome achievement, and it took an extreme degree of skill to do (and wow, that was fast - not just the card, but his hardware modding).
 
Yeah, I saw it ;).. impressive, as were the binning efforts that went into it :)

The GPU is less efficient per clock than Tahiti (no surprise, since it's considerably smaller and comes with a significant transistor disadvantage), hence the higher clocks needed to get there, and the card needed a proper VRM for it - the EPower board (the reference VRM is simply insulting) - as opposed to the Lightning, which held the record until today. But this should change once the Matrix, SuperOverclock, and Lightning versions show their faces.

I still wouldn't buy this card (the board implementation is no better than a ~$100 GTX 460's); I'd wait for custom solutions, and hopefully they'll get the TDP restrictions removed or pushed a lot further too.
 
Though it may be slight, and I am not getting either card, I would call a card that is cheaper, runs cooler, runs faster, and uses less power a pretty clear winner.

Overall Performance at all Resolutions (Techpowerup): http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_680/images/perfrel.gif

That, and the considerably better driver support from nVidia...

Also, while SLI may not scale as well in benchmarks compared to Crossfire, even [H] stated in an article that the real-world experience was much better/smoother with SLI, especially at lower FPS. Once again, drivers come into play here.
 
Also, while SLI may not scale as well in benchmarks compared to Crossfire, even [H] stated in an article that the real-world experience was much better/smoother with SLI, especially at lower FPS. Once again, drivers come into play here.

I agree 100% with this. It doesn't matter if adding a second 7970 gets you a 50% increase in frame rate if you are going to stutter like hell every time your FPS dips below 60.
 
We really need to see GPU Boost analyzed. The fact that NV says GPU Boost only hits around 1.1GHz when [H] saw 1.3GHz is a little disturbing. I don't want to make any rash accusations, but it's odd that a review site would get a GPU that automatically clocks itself 200MHz higher than what the company claims it should.

Except Kyle and Brent saw those clocks while looking at live demos. They didn't say they saw them during their own testing.
 
Faster, quieter, cheaper, less power, better features, better drivers. Nvidia really hit it out of the park. You could argue about the performance, especially in the games that count, but when you look at everything else it's a no-brainer which card is better and the better value.

This is Nvidia's best card since the 8800 GTX, its all-time best, a legend.
 
Except Kyle and Brent saw those clocks while looking at live demos. They didn't say they saw them during their own testing.

What's keeping Nvidia or their partners from sending the best 1% of chips to the most relevant reviewers? Does anyone really think this doesn't happen? If I were Nvidia, I would send the best-performing chips to the most viewed reviewers on the net. Most customers certainly aren't getting those chips.
 
What's keeping Nvidia or their partners from sending the best 1% of chips to the most relevant reviewers? Does anyone really think this doesn't happen? If I were Nvidia, I would send the best-performing chips to the most viewed reviewers on the net. Most customers certainly aren't getting those chips.

You are correct. This could very well be the case. I was just pointing out that the review stated they saw those numbers during a demo, not during their testing.
 
It's a problem because a GTX 680 that runs at 1.3GHz in Unigine Heaven will score much better than a GTX 680 that can only run 1056MHz. If a review site, like [H], gets a good GTX 680 that can run 1.3GHz, it's going to score more. The review isn't going to say anything other than "GTX 680 beat 7970 by 20%" or whatever, and that's it. No mention of clocks. Someone will then see benchmarks of a GTX 680 running between 1.2 and 1.3GHz beating a stock 7970 at 925MHz and conclude that the GTX 680 is indeed superior.

If 1056MHz is the highest guaranteed value, they could end up buying a GTX 680, bringing it home, and only having GPU Boost kick in to 1056MHz. Suddenly, that card that looked so good in reviews isn't really all that good after all. The difference between 1056MHz and 1.3GHz in a GPU is massive; in the case of the 7970, a 300MHz change in clock gives a 15-25% increase in performance.

That's a good point - I hadn't thought of that angle. We'll have to wait and see as more sites do GPU Boost testing, and see what clocks the cards really do boost to when left at their defaults.
 
Sure, it's true that it may lose by a fair bit to an OC'ed 7970, and for enthusiasts the new Turbo feature might seem like little more than a gimmick. However, that Turbo feature will do wonders for casual players who don't usually bother OC'ing their cards. It also makes the card look very decent in reviews (though mostly thanks to AMD's pricing). As such, I'd call it a definite win. It just remains to be seen how long until AMD comes up with a similar feature. It's likely that Nvidia will have it in every card from now on, so unless AMD wants to be left behind in reviews and in the casual market, they'll need a similar trick.

The amount of overclocking is minimal; I doubt a casual gamer will notice.
 