NVIDIA GeForce GTX 660 Ti at High AA Settings Review @ [H]

But I do agree, it's mainly 660 Ti vs. 670. Hands down, at 1080p with low AA the 660 Ti is a great value card. At high AA? Well, it's not terrible, but it's no longer as great a value, and the 670 shows it deserves the premium.

really? so 14-25% is worth the price increase of $70-$100 to you?
Not to me. Based on the results, you say the price premium is worth it. But according to the [H] reviews, the playable settings are virtually the same. There's only a few cases where the GTX 670 is "playable" and the 660 is not.

MP3
2560x1600 - 4x MSAA + FXAA - 16X AF
Important FACT: Neither video card was actually playable at this setting in this game.
2560x1600 - 2x MSAA + FXAA - 16X AF
Important FACT: Neither video card really felt playable with 2X MSAA enabled; when we turned 2X MSAA off and just used FXAA, performance was a lot better on both cards and the game had smooth gameplay.

BF3
2560x1600 - 4x MSAA + FXAA - 16X AF
Important FACT: Neither video card was truly playable at this setting of 4X MSAA at 2560x1600. The GTX 670 was close, but still dropped to unplayable levels throughout the run-through, and the GTX 660 Ti was most definitely not playable.
2560x1600 - 2x MSAA + FXAA - 16X AF
Important FACT: The GTX 670 is just on the cusp of being playable at 2X MSAA at 2560x1600 in this game. For the most part it's playable, with only a few scenes that may drop to 30 FPS. The GTX 660 Ti is slower but still in the playable zone as well, except for that one spot where it dropped to 22 FPS. On the whole, it's pushing it to call this playable on the 660 Ti. Dropping both cards to FXAA only makes the game very playable at 2560x1600 with smooth performance.

Batman AC
2560x1600 - 32x CSAA - 16X AF
Important FACT: Neither video card is actually playable at this setting. Yes, the GTX 670 is 25% faster than the GTX 660 Ti, but it doesn't have high enough framerates to actually play at this level. Therefore, both are equally unplayable.

2560x1600 - 8x MSAA - 16X AF
Important FACT: Again, neither video card was actually playable at 8X MSAA in this game at 2560x1600, just not possible.
2560x1600 - 4x MSAA - 16X AF
Important FACT: The GTX 670 is just on the border of being playable at this setting; it is very close, but still drops below 30 FPS a bit during tessellation scenes. The GTX 660 Ti is definitely not playable.

Skyrim
2560x1600 - 8x MSAA + FXAA + AO - 16X AF
Important FACT: Neither video card was actually playable at this setting. Though the GTX 670 was faster, its performance was still sluggish while gaming and would not be considered playable in this game. It was very laggy.

2560x1600 - 8x MSAA + 8X TR SSAA + FXAA
Important FACT: Both video cards are unplayable at this setting; these both come in under 30 FPS a lot.

2560x1600 - 8x MSAA + 4X TR SSAA + FXAA
Important FACT: The GTX 670 was barely playable at 4X TR SSAA; there were still some sticky situations where the game got a bit choppy. The GTX 660 Ti was definitely not playable.

2560x1600 - 8x MSAA + 2X TR SSAA + FXAA
Important FACT: The GTX 670 is playable at this setting, no question, but the GTX 660 Ti is right on the border of being playable. All in all, we wouldn't consider it playable; there was still a bit of choppiness. With TR SSAA turned off, the 660 Ti was playable at 2560x1600.

From the OC Review:
BF3: 2560x1600 - FXAA - 16X AF
We were surprised to see the GALAXY GTX 660 Ti so close to the GTX 670's overclocked performance; the two are close enough that it would be impossible to tell them apart in actual gaming.

In Max Payne 3, performance is close between the overclocked GTX 670 and the overclocked HD 7950; literally 1 FPS average separates them. The GALAXY GTX 660 Ti GC overclocked video card is the slowest, if you consider averaging 68 FPS slow, with only a 5 FPS average difference from the overclocked GTX 670.

Batman AC:
A 2.4 FPS difference: 54.8 and 57.2 FPS for the 660 Ti and 670, respectively.

In Skyrim we were impressed by how well the HD 7950 did when we overclocked it to this level. It received a large performance increase from stock HD 7950 clock speeds and came out on top, delivering 16% better performance than the overclocked GTX 670 with 8X MSAA at 2560x1600. The overclocked GTX 670 was 17% faster than the GALAXY GTX 660 Ti GC overclocked video card. Note that while the differences were large in this game, this setting was still playable on each video card tested here at these clock speeds.

In Witcher 2 we see the overclocked HD 7950 again take a large lead over the other video cards, providing the best performance from start to finish. The GALAXY GTX 660 Ti overclocked video card lagged quite a bit behind the other two in this game; it was still a playable setting, but you could definitely tell there was a performance difference.
 
Everyone who wanted us to do a clock-for-clock 7950 vs. 660 Ti comparison: this just proves that other things, like core frequency, matter more than memory bandwidth. At the out-of-box settings the 7950 has a 66% memory bandwidth advantage, but that doesn't translate into real-world gaming; performance is similar between the 660 Ti and 7950, and in some cases the 192-bit 144GB/sec video card gave us higher framerates than the 384-bit 240GB/sec video card in games at high AA settings. Core speed is a more important factor than memory bandwidth between the NV GPU and the AMD GPU.
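To put numbers behind that 66% figure: peak memory bandwidth is just bus width times the effective memory transfer rate. A quick sketch, assuming the reference memory speeds of the two cards (6.0 GT/s for the 660 Ti, 5.0 GT/s for the 7950):

```python
def mem_bandwidth_gbs(bus_bits: int, transfer_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes x effective GT/s."""
    return bus_bits / 8 * transfer_gtps

gtx_660_ti = mem_bandwidth_gbs(192, 6.0)  # 144.0 GB/s
hd_7950 = mem_bandwidth_gbs(384, 5.0)     # 240.0 GB/s

advantage = (hd_7950 / gtx_660_ti - 1) * 100
print(f"{gtx_660_ti:.0f} GB/s vs {hd_7950:.0f} GB/s: {advantage:.1f}% more on paper")
```

That on-paper gap is exactly what the high-AA testing shows doesn't carry over to framerates.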
 
Thanks for the additional 660 Ti reviews. I love the multiple angles of the separate 660 Ti/7950 reviews. It's also great that you listen to feedback and use it to generate follow-up reviews (or, if not a full review, at least address the question on the forum).

Have you guys exhausted the review options for this card yet or do you have plans for more?
 
Everyone who wanted us to do a clock-for-clock 7950 vs. 660 Ti comparison: this just proves that other things, like core frequency, matter more than memory bandwidth. At the out-of-box settings the 7950 has a 66% memory bandwidth advantage, but that doesn't translate into real-world gaming; performance is similar between the 660 Ti and 7950, and in some cases the 192-bit 144GB/sec video card gave us higher framerates than the 384-bit 240GB/sec video card in games at high AA settings. Core speed is a more important factor than memory bandwidth between the NV GPU and the AMD GPU.

Thanks for clarifying. Makes perfect sense. So really this is more of an esoteric article narrowly addressing the bandwidth issue (or non-issue, as it turns out), and people looking for all-around card comparisons should look elsewhere.
 
really? so 14-25% is worth the price increase of $70-$100 to you?
Not to me. Based on the results, you say the price premium is worth it. But according to the [H] reviews, the playable settings are virtually the same. There's only a few cases where the GTX 670 is "playable" and the 660 is not.

Maybe you should look at the 1080p results, where there's still a big lead for the 670, and it's very playable.

14-25% is worth the difference of $70-100 when it's $400. What's the price difference in %?

Note for future reviews: http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7950-mit-925-mhz/11/

Without PowerTune at +10% in the CCC slider, AMD's boost BIOS is barely functional and irrelevant.
 
Maybe you should look at the 1080p results, where there's still a big lead for the 670, and it's very playable.

14-25% is worth the difference of $70-100 when it's $400. What's the price difference in %?

You're funny.
Now you're arguing that the 670 is worth the premium for 1080p results?
$400 for 1080p. Yeah, right. Definitely not worth the 33% price premium.

At 1080p the 660Ti was most definitely "playable" in all their benchmarks.

Anybody who spends $400+ on a video card should have a higher resolution than 1080p. It's preposterous to spend that much otherwise.
 
I don't care what anyone says, NVIDIA is devious for putting out such an awesome card. We all fell for the infamous banana in the tailpipe. A card that can do what the 670 does for less money on any given day, except when things go on sale. This discussion should be over. Everyone who has a 670 is pissed because they could have gotten the same performance for less cash, but getting a lesser card would crush their pride, lol. And everyone who held out for something with better price/performance is getting a sweet deal.
 
You're funny.
Now you're arguing that the 670 is worth the premium for 1080p results?
$400 for 1080p. Yeah, right. Definitely not worth the 33% price premium.

At 1080p the 660Ti was most definitely "playable" in all their benchmarks.

Anybody who spends $400+ on a video card should have a higher resolution than 1080p. It's preposterous to spend that much otherwise.

Depends on someone's buying cycle. If you buy new video cards every generation or every other generation, I would agree.

Myself, I'm still running 768MB GTX 460s. These cards, even in SLI, are choking on Skyrim @ 1080p on Ultra and Saints Row 3 at max in-game.

When they came out, most websites reported that the 1GB cards were not worth the extra money and that two 768MB cards could keep up with a 480. But now, those small frame buffers are showing their age.

So for someone who will keep their card for quite some time, I would think it still makes sense to buy a 670, as it will take longer to show its age.

As early as next year we could be looking at a new generation of console (Xbox Infinity/PS4) ports that will push these new mid-range cards pretty hard.
 
Everyone who wanted us to do a clock-for-clock 7950 vs. 660 Ti comparison: this just proves that other things, like core frequency, matter more than memory bandwidth. At the out-of-box settings the 7950 has a 66% memory bandwidth advantage, but that doesn't translate into real-world gaming; performance is similar between the 660 Ti and 7950, and in some cases the 192-bit 144GB/sec video card gave us higher framerates than the 384-bit 240GB/sec video card in games at high AA settings. Core speed is a more important factor than memory bandwidth between the NV GPU and the AMD GPU.

Now we have data that proves core frequency matters more.

Then why do you compare an oc card to a reference card? Where is the logic?
 
Now we have data that proves core frequency matters more.

Then why do you compare an oc card to a reference card? Where is the logic?

Because the focus of the article is to look at memory bandwidth, bus width, and super high AA settings in MSAA that push memory bandwidth differences. We've already done overclocking articles. This article's focus is to use high AA settings, push the memory bus, and see if the 192-bit 144GB/sec card crumbles before the 384-bit 240GB/sec card. In our testing, when we found out that they were rather close, and the 660 Ti was even faster at 8X MSAA at 1080p in a couple of games, it made us question where the benefit of that 240GB/sec of memory bandwidth is with the 7950. We found out that memory bit depth, and memory bandwidth, isn't everything to performance, even at high AA settings. That was the result of our testing. Core frequency was not the focus here; yes, we could have gotten faster performance out of the 7950 by overclocking the core, but that wouldn't have helped us on this article. Testing the fastest 660 Ti vs. the fastest 7950 wasn't the goal here; the goal was to look at high AA settings and how they relate to memory bandwidth. We have two previous articles that focus on overclocking.
 
I feel good about my conclusions, but if you disagree with them, that is fine; as Kyle said, all the testing and data are there, so you are able to look at them and draw your own conclusions. I performed the testing, then drew mine. I've made sure to include as much information as I can in the article so that you can reach your own conclusion.

There are always limits to how much can be done in an article, and I appreciate the feedback. It took me literally all the time from the last overclocking article until this morning to deliver this amount of information and testing. Yes, it was a lot of comparisons, especially in the clock-for-clock section. In hindsight, perhaps clock-for-clock with the 7950 could have been done, but in order to do that I would have had to drop the 660 Ti clock, since I don't have a 7950 that hits 1215, doubling the testing. This is the fourth article I've worked on back-to-back with a focus on the 660 Ti; I've literally been working every day since the 660 Ti launch to deliver these articles for you all. I know it is an important topic and you all want to see these cards from every angle. I do it out of love, nothing but love for you all and a passion for gaming hardware.

I wish I could include all kinds of comparisons for you all; alas, only so much can be done, and other projects loom on my shoulders right now, one with an embargo date. Hopefully you can take the tests I've performed and draw your own educated conclusions based on what you see there, and I hope it helps somebody out there in making a buying decision either for or against this GPU. In the end it all comes down to your budget, how much you can spend on a gaming video card, and what delivers the best experience at your budget. Happy gaming!
 
Great article as always. I think the crux of the confusion in the comments is that this isn't really a review but an article on the extent of the benefits of higher mem bandwidth and bus width.

It probably would have been better to leave the 7950 out of it and just compare the 660 Ti and 670. If there ever were two modern cards that differ only in memory bandwidth, the 660 Ti and 670 are it, making them prime candidates for an examination of the advantages of high memory bandwidth.

And I am glad someone took the time and effort to do an in-depth article on it based on real-world gaming. This is something I won't get on any other site, and it's why I come here every day.
 
Clock for clock makes no sense; the architectures are completely different.

Heck, let's downclock the 660 Ti for a clock-for-clock comparison with the 560 Ti, or even better, an 8600GT.
 
Yup, my GTX 660 Ti wins hands down against my GTX 480 at 2560x1600, even though the GTX 660 Ti has half the bandwidth and bus width. A bandwidth bottleneck on a 660? I think not. I think Kepler is just a very efficient architecture, and bandwidth, memory, etc. are just specs on paper; the performance means something else. I'm so impressed with my GTX 660 Ti that I'm going to buy a few more. ;)
 
Yup, my GTX 660 Ti wins hands down against my GTX 480 at 2560x1600, even though the GTX 660 Ti has half the bandwidth and bus width. A bandwidth bottleneck on a 660? I think not. I think Kepler is just a very efficient architecture, and bandwidth, memory, etc. are just specs on paper; the performance means something else. I'm so impressed with my GTX 660 Ti that I'm going to buy a few more. ;)

I have noticed some new 480s out there for sale for $200 and I have been mighty tempted, thinking "Sure, it's an older generation, but it's a former flagship and it's way ahead of my current card." I'm so glad you made that comparison before I bought!
 
Clock for clock makes no sense; the architectures are completely different.

Heck, let's downclock the 660 Ti for a clock-for-clock comparison with the 560 Ti, or even better, an 8600GT.

This. Anyone who asks for clock-for-clock between two different architectures is an idiot.
 
Could the ROPs be the bottleneck on the 660 Ti? It's 24 vs. 32, a 25% difference, which is a bit less than the difference in memory bandwidth and obviously closer to the performance difference we are seeing between the 660 Ti and the 670. How about downclocking the 670's memory to 4GHz and then comparing it to the 660 Ti?
 
yes, but it also isn't fair to measure an overclocked card versus a stock card

Comparing two different architectures at the same clock is a waste of time because they have different IPC and are not meant to run at remotely similar clock speeds, so trying to do a similar-clockspeed comparison is beyond comical.
 
Could the ROPs be the bottleneck on the 660 Ti? It's 24 vs. 32, a 25% difference, which is a bit less than the difference in memory bandwidth and obviously closer to the performance difference we are seeing between the 660 Ti and the 670. How about downclocking the 670's memory to 4GHz and then comparing it to the 660 Ti?

Thank you for bringing this up! It is about damn time! The differences between the various GTX 660 Ti, GTX 670, and HD 7950 cards are NOT just memory bandwidth (a function of memory clock times data path width) and the frame buffer; there are ROPs too! You remember those, don't you? The part of a graphics card that actually does the calculating of traditional multisample AA and supersample AA? Yes, bandwidth to and from the frame buffer is an important aspect of AA performance (as is the amount of frame buffer available at high res like 3x1080p with high levels of AA), but in addition to the shaders and texture units doing the leg work leading up to final rasterization, it is the ROPs that add that last touch to smooth out the picture and make it extra pretty. The GTX 660 Ti has 24, and both the GTX 670 and HD 7950 have 32. The number of ROPs is tied to the width of the memory bus (8 ROPs per 64 bits), at least on the GTX 600s.

The drop from 32 ROPs on the GTX 670 to 24 on the 660 Ti is the same 25% as the drop in bandwidth at the same VRAM speed: 256 bits = 32 bytes x 6 gigatransfers/sec = 192GB/s, while 192 bits = 24 bytes x 6GT/s = 144GB/s.
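Since the ROP partitions on GK104 hang off the 64-bit memory controllers (8 ROPs per controller, as noted above), the ROP count and the bandwidth scale together; disabling one controller cuts both by the same 25%. A sketch of that relationship, assuming 6 GT/s reference memory on both cards:

```python
def gk104_backend(bus_bits: int, transfer_gtps: float = 6.0):
    """ROP count and peak bandwidth from bus width on GK104-style parts,
    assuming 8 ROPs per 64-bit memory controller."""
    rops = bus_bits // 64 * 8
    bandwidth_gbs = bus_bits / 8 * transfer_gtps
    return rops, bandwidth_gbs

print(gk104_backend(256))  # GTX 670: (32, 192.0)
print(gk104_backend(192))  # GTX 660 Ti: (24, 144.0) -- both figures down 25%
```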

The bigger performance gap between the GTX 660 Ti and the 670 (compared to the gap between it and the 7950) at the same clock speed can likely be attributed in large part to the GTX 670 having 33% more ROPs (or the GTX 660 Ti having 25% fewer; potayto, potahto). As for the Galaxy GTX 660 Ti 3GB vs. the HD 7950 with the boost BIOS ("7950b" has a nice ring to it, like the old G92b or GT200b 55nm NVIDIA GPUs shrunk from 65nm for lower power, lower cost, and better performance, though the 7950b has higher power use). Got off track a bit... back to the point:

1.2GHz x 24 ROPs (Galaxy 660 Ti) = 28.8 gigapixels/sec fill rate; 0.9GHz x 32 ROPs (7950b) = 28.8 gigapixels/sec fill rate.
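That equal-fill-rate claim checks out: peak pixel fill rate is simply core clock times ROP count. A quick sketch using the clocks cited in this post:

```python
def pixel_fill_gpix(core_mhz: int, rops: int) -> float:
    """Peak pixel fill rate in Gpixels/s: core clock (MHz) x ROPs / 1000."""
    return core_mhz * rops / 1000

galaxy_660_ti = pixel_fill_gpix(1200, 24)  # 28.8
hd_7950_boost = pixel_fill_gpix(900, 32)   # 28.8
# Despite very different clocks, the two cards come out dead even on ROP throughput.
print(galaxy_660_ti, hd_7950_boost)
```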

According to the article, in the comparison between the 7950 and the Galaxy 660 Ti, the Radeon was at a constant 925MHz (a little over 900) and the GeForce at a constant 1188MHz (a little shy of 1200) throughout testing of all the games. That equates to roughly equal ROP performance. Now Kyle and Brent, I love how you generally do your GPU reviews on this site compared to most others (TechReport is different in some good ways and some bad, but you mentioned in something I read that you would eventually procure the software/hardware needed to record the amount of time it takes for a card to render individual frames, and TechReport said that one or more awesome sites had inquired about methods of replicating their findings in their reviews of the last year or so). But why on earth didn't you go ahead while you had your 660 Ti, 670, and 7950 max clocked and get high-level AA results like that? And what about high AA results at an even more demanding 3x1080p res for the 3GB cards? Too late, I guess.

(Edit: This is what I suspect many readers were hoping for/expecting to get out of this particular article. I was.)

Really though, while the Radeon shaders used to be far less efficient than GeForce shaders (examples: the GTX 470 with a little over 1088 GFLOPS vs. the HD 5850 at 2088 GFLOPS, or the 480 at 1345 vs. the 5870 at 2720), the combination of GCN and new drivers on the Radeon side, and the move to a matched shader/core clock with 3x the number of shaders on the GeForce side, have narrowed the gap a lot. Example: the HD 7970 GHz Edition at over 4 TFLOPS vs. the GTX 680 at over 3 TFLOPS. Both Tahiti and GK104 run at fairly high clocks by default (1GHz give or take), and both are capable of going much higher.
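For anyone wanting to check those shader-throughput numbers: peak single-precision GFLOPS for these parts works out to shader count times shader clock times 2 (a multiply-add counts as two ops per cycle). A sketch using the reference specs of the cards mentioned above:

```python
def peak_gflops(shaders: int, shader_clock_mhz: int) -> float:
    """Peak FP32 GFLOPS: shader count x shader clock (MHz) x 2 ops, / 1000."""
    return shaders * shader_clock_mhz * 2 / 1000

print(peak_gflops(448, 1215))   # GTX 470: ~1089
print(peak_gflops(1440, 725))   # HD 5850: 2088
print(peak_gflops(1536, 1006))  # GTX 680: ~3090 ("over 3 TFLOPS")
```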

Thing is, when you overclocked the 7950 (can be had for $300), the GTX 660 Ti (3GB Galaxy at $340, non-OC cards around $300), and the GTX 670 (used to be at least $400, now down to around $380), the 7950 on the whole ended up with better performance (even at a lower overclock than average, compared to two very high OCs).

If only we had very high AA results based on that (and/or 3x1080p results for the 3GB cards, or all the cards).

You said back before Christmas that the 7970 was the first single-GPU card to allow playable performance with high settings at 1080p in current games (with launch drivers). When the GTX 670 came out it was found to have similar performance at lower res (1920x1080/1200, 2560x1600/1440), but since the driver that came with the 7970 GHz Edition, the normal 7970 performs better. Considering that it is fairly easy to overclock a US$300 7950 to levels that easily beat a normal 7970, that seems like the best value proposition among the tested cards. Sure, GeForce has TWIMTBP, PhysX, TXAA, and adaptive AF, but there are ways to make Radeons use vsync and triple buffering; the visual difference between 4x AA and 8x AA is marginal when you're running around actually playing games; PhysX support seems mighty slim judging by NVIDIA's feature page and the list of those games on the Wikipedia PhysX page; and while a few TWIMTBP games do run notably better on GeForces than on Radeons at comparable price/performance levels, most run about the same (a few even run notably better on a Radeon with otherwise similar price/perf).

Oh yeah, I guess (right now, anyway) that GK104 is better suited to high levels of tessellation than Tahiti, but those games are few and far between too. The ones that use it still look great on normal, and ones like Crysis 2 with the DX11 update are better run at lower levels (extreme tessellation of a flat street barricade... are they serious? That feature could have been used to much greater benefit on both NVIDIA and AMD cards without such a tremendously negative impact on performance).

Getting off my high horse now...

Again Kyle and Brent, love the work y'all been doin' over the years. Please just try to be more careful in the future about disregarding important aspects of what makes a GPU tick.

Edit:
P.S.: I saw/heard Kyle (on YouTube?) doing [H] news and/or evals of the HD 5870, 5850, GTX 480, and 470 (single GPU and SLI/CF?) noise and/or temp levels. Someone recently posted in the forums here with a link to Brent's playthrough/commentary of Star Trek Online on YouTube. I believe at the time Kyle was sporting a somewhat scruffy Grizzly Adams look and had a deeper, top-salesman tone of voice, while Brent's voice was higher pitched and he talked fast a lot (if memory serves, it was an episode about upgrading your max-level ensign's gear and outfitting a new ship, then testing it a bit out in space). It's neat to put other aspects of a person's individuality together with what they write online. Another reason you two are awesome.
 
yes, but it also isn't fair to measure an overclocked card versus a stock card

This article is not an examination of price vs. performance. It is an examination of whether memory bus width and data throughput play any significant role in improving visual fidelity and gaming experience.

Regarding the comparison between the Galaxy GC 3GB card and the boosted 7950: both cards come equipped with 3GB frame buffers. Out of the box, the Galaxy card concedes 50% of the memory bus width (192-bit vs. 384-bit) and 40% of the throughput (144GB/s vs. 240GB/s). Both cards deliver very similar gaming experiences in instances where their respective 3GB frame buffers are being saturated. Essentially, the Galaxy GTX 660 Ti GC does more with its few resources than the 7950 does with its bounty of resources.

Should the 3GB GTX 660 Ti be rewarded its $30 premium over a stock $300 7950 just for being a more efficient worker bee? Not in my book... but again, price vs. performance is not the goal of the article.
 
A lot of people in here, mostly if not all the red fans, don't have a clue what this benchmark was all about.

If you want to see how the 660 Ti vs. 7950 compare at the same MHz, then go here: http://www.hardocp.com/article/2012/08/23/galaxy_gtx_660_ti_gc_oc_vs_670_hd_7950

Now back to the topic, cough, cough. I think memory bandwidth has hit a wall for gaming, just like regular PC RAM, you know, like DDR3-1600 vs. DDR3-2000, etc.

Maybe when we get into DX13 gaming in the future, where games eat up more resources, it will go back the way it was, but right now it's not that important with today's games.
 
This article is not an examination of price vs. performance. It is an examination of whether memory bus width and data throughput play any significant role in improving visual fidelity and gaming experience.

Regarding the comparison between the Galaxy GC 3GB card and the boosted 7950: both cards come equipped with 3GB frame buffers. Out of the box, the Galaxy card concedes 50% of the memory bus width (192-bit vs. 384-bit) and 40% of the throughput (144GB/s vs. 240GB/s). Both cards deliver very similar gaming experiences in instances where their respective 3GB frame buffers are being saturated. Essentially, the Galaxy GTX 660 Ti GC does more with its few resources than the 7950 does with its bounty of resources.

Should the 3GB GTX 660 Ti be rewarded its $30 premium over a stock $300 7950 just for being a more efficient worker bee? Not in my book... but again, price vs. performance is not the goal of the article.

I agree the article primarily attempted to isolate the impact of the bandwidth differences among the tested cards, but I take issue with the parts on the last page about prices (GTX 670 vs. 660 Ti) and the bottom line of the 660 Ti being such a great value. Right now it is better to find an older-model 7950 at $300 (or get a newer card with the new BIOS and flash an older BIOS; they SHOULDN'T cost more than before) and overclock it yourself. Even with a relatively mild overclock it will meet or beat a GTX 660 Ti ($300 reference, $340 for the 3GB OC model), a GTX 670 ($380 to more than $400), or a regular 7970 (a little over $400). Well, maybe not an overclocked/GHz Edition 7970, but we are talking a $100+ difference vs. $40 (Galaxy 660 Ti 3GB vs. 670).

Yes, Tahiti has a 128-bit wider bus and is larger, more gluttonous, hotter, and more expensive for AMD to make at 28nm and 4.3B transistors because of it, while GK104 was downsized from 384-bit to 256-bit (and halved the shader clock but tripled the number of shaders) to get a smaller, cooler, leaner GPU that is less expensive to produce at 28nm and 3.5B transistors. Tahiti might well have had the same performance it does now with only a 256-bit VRAM bus, but adding data width (and PCB complexity, maybe even more VRAM than is really needed) cost AMD this round. That wider memory controller must be a huge part of the die. The move from Cayman to Tahiti saw shaders go from only 1536 to 2048 and texture units from 96 to 128, plus DX11.1, but the transistor count went from 2.64B to a whopping 4.3B! (40nm to 28nm.)

G80 had 686M transistors with 384-bit memory and 24 ROPs on 90nm, while G92(b) had only 16 ROPs and 256-bit memory; it also went from 32 texture units with 32 address/64 sample to 64 full units, at 754M transistors on 65nm. (Maybe texture address units are relatively large on the die compared to other features?)

The HD 2900 XT, with 700M+ transistors and a 1024-bit ring bus controller (512 each way) at 80nm, was shrunk to 666M transistors (insert "mark of the beast" comment here) with a simpler 256-bit crossbar and added DX10.1 at 55nm for the HD 3870, with otherwise similar clocks, GPU guts, and performance. 16 ROPs on 'em, btw.

The move from the 3870 to the 4870 (on the same 55nm process), and from 320 shaders to 800, saw an increase in transistor count of only 290M (666 + 290 = 956). So I guess shaders are small. Texture units actually went from 16 to 40 too. The HD 4890 (RV790, 959M) had higher clocks.

Also at 65nm, G92(b) went from 754M transistors (specs as above: 8 texture units and 16 shaders per cluster, x8 clusters) to 1.5B on GT200(b), with a 512-bit VRAM path, 240 shaders (10 groups of 24), and 80 texture units. That's +25% texture units, 2x the bandwidth and ROPs, but less than 2x the shaders.

4870 to 5870: 956M to 2.15B; everything doubled save the VRAM path (same width), plus DX11 (55nm to 40nm).

GT200(b) to GF100/110: 1.5B to 3B transistors, 16 groups of 32 shaders (512 total), only 64 texture units, and 55nm to 40nm. The memory bus was also cut from 512-bit to 384-bit, while ROPs grew in number (32 to 48), then dropped back to 32 with GK104.

That explains how the 3B transistor count of GF100/110 grew by only 500M when they removed 16 ROPs and 128 bits of VRAM width, added DX11.1, and grew the shaders from 512 to 1536 (40nm vs. 28nm). Texture units stayed the same, but even bottom-level AMD and NVIDIA cards can do high-quality 16x AF without issue.

I don't know quite why I wrote all that, but it seems higher-end Radeons have had the same ratio of texture units to shaders for a while (4 per block of 80 on VLIW5, 4 per 64 with VLIW4/GCN). In the transition from G92(b) to GT200(b) and forward, NVIDIA seems to be gradually shrinking the ratio of texture units to shaders on their cards. Again, even bottom-level cards from both camps have been running 16x high-quality AF with little to no performance loss for some time. Maybe AMD would stand to save a little there next round too.
 
A lot of people in here, mostly if not all the red fans, don't have a clue what this benchmark was all about.

If you want to see how the 660 Ti vs. 7950 compare at the same MHz, then go here: http://www.hardocp.com/article/2012/08/23/galaxy_gtx_660_ti_gc_oc_vs_670_hd_7950

Now back to the topic, cough, cough. I think memory bandwidth has hit a wall for gaming, just like regular PC RAM, you know, like DDR3-1600 vs. DDR3-2000, etc.

Maybe when we get into DX13 gaming in the future, where games eat up more resources, it will go back the way it was, but right now it's not that important with today's games.

I don't know if this reference was supposed to dis the HD 7950 or praise it, but the overclock article had the XFX 7950 they used at 1200MHz (many in that article's discussion said that was low) vs. the GK104 cards at 1300MHz. That, my friend, is not a clock-for-clock comparison. And the 7950 did better! From the last page:

"All of this comes with a price though. The Galaxy GTX 660Ti GC can be found today for $339.99 at Newegg. Amazon has sold out of the card through its more reasonable retailers and now Amazon is showing an inflated price of $364.41. Galaxy is telling us that more cards are on the way in to all retailers. TigerDirect also has the card at MSRP of $339.99.

The ASUS GTX 670 DirectCU II TOP is pretty much a discontinued product from what we can tell. The last price we see on it was $429.99 from Amazon.

The XFX Radeon HD 7950 is currently priced at $383.87 at Amazon and in stock but that one has a bit different fan configuration. You can find the XFX "Double D" Black Edition HD 7950, which is the one we used, at Newegg for $349.99 with a $30.00 MIR, which makes it a tasty deal should you want to go with the Red Team.

Hopefully our OC testing will give you a solid basis for your enthusiast purchase or at least fuel the fires for another healthy forum argument discussion."

So yeah, for $320 after rebate (add about $8.00 for shipping), the 7950 is the better price/performance buy.

I do agree about bandwidth becoming less relevant. More and more games offer shader-based FXAA as an option in the settings menu, while in-game support for more traditional MSAA/SSAA is gradually being phased out. Makes sense: texture unit and ROP counts are fairly static, but the number of shaders in newer cards keeps growing and growing. Shaders seem to take up less space on a die than other GPU components, so they are likely a more economical feature for AMD and NVIDIA to add.

Kind of makes me wonder what would have happened if Intel had kept pushing forward with the Larrabee project. With smaller processes allowing more floating-point units on smaller, less power-hungry dies, cards based on that architecture might have been able to compete with ones from the two discrete GPU titans. Larrabee was all floating point except for some texture units. Look it up on Wikipedia if you want to know more.

Personally, I am not a fanboy for either side. I've owned two Radeons and am currently rocking a GeForce2 MX, 'cuz it was all I had left after the fans on the Radeons died (both after several years of service). I've followed the introduction of the GTX 660 Ti with great interest, have considered other GeForces in the past (6800 class), and am even thinking about a low-end DX11 GeForce on old-school PCI (yes, I'm currently stuck on AGP). Really, go with whatever gives you the most bang for your hard-earned buck. Right now, at $300-380, that's the HD 7950.

Maybe Microsoft will skip from DX12 to DX14. That's likely a while down the road yet, though.
 
But I take issue with the parts on the last page about prices (GTX 670 vs. 660 Ti) and the bottom line of the 660 Ti being such a great value. Right now it is better to find an older-model 7950 at $300.

Do you see what you did there? Bait and switch, maybe? The conclusion at the end of the article suggested that gamers might find greater value in choosing a GTX 660 Ti (and overclocking it) rather than a GTX 670. Nowhere in the conclusion did Brent say anything about the GTX 660 Ti being a greater value than the 7950. You take exception to Brent's conclusion but then go on to ramble about a video card he categorically does not mention in his conclusion. I don't know why you wrote so much either, but I enjoyed reading it nonetheless.
 

It was in response to a link directing to the last page of the three OCed cards article. Maybe I should have gone on about that there, but it was already in here, so....

Edit: actually, probably some weird combo based on the last pages from this article and the last one.

Glad someone got something out of my ramblings. (Edit: whether it be useful info, an interesting/fun read, etc.)
 

The links were in reference to the overclockability of the GTX 660 Ti, not an empirical observation of the card as a better value than the 7950.
 

Isn't the overclockability of a card part of its value (in comparison to the overclockability of competing cards)? You know how they perform next to one another stock, and if you have a good idea of the limits you could push each of them to and their relative performance at that level, doesn't that factor into the value equation as well? Granted, you may not get the best overclock possible on the card you end up choosing, but you will likely be quite satisfied with the level of performance it provides (given your budget, and provided your desired combo of resolution and settings is playable to you on it). Of course no card is guaranteed to OC well, but most well-built, newer dies on a more mature process should bin with the highest clocks and full functionality (stable). It becomes a matter of the manufacturer sacrificing some good dies to sell more cards at lower price points, making more money than they would if they put all the fully functional, high-clock dies into the best, most expensive cards and lost money, because only a small percentage of customers are willing to spend that much and far more are on tighter budgets for such an upgrade. (Edit: dies are binned, not cards. Boy, it's late here... zzzzz.....)

Edit: value was not specifically analyzed at that point vs. the HD 7950, but mentioning value and the 7950 (not at the same time) higher up on the page in question (or just mentioning them at all throughout the course of the article in general), then going on to mention the GTX 660 Ti's value when OCed compared to the GTX 670 with nothing said about the 7950, IMPLIES (to me anyway, not sure about others) that they would not endorse the 7950 as a good value as well. I did take issue with that (the implication I apparently made up in my head).

Edit: they did say in the last article that the on-sale XFX was a hot deal for folks on the red team, though.
 
Thank you for bringing this up! It is about damn time! The differences between the various GTX 660 Ti, GTX 670, and HD 7950 cards are NOT just memory bandwidth (a function of memory clock times data path width) and the frame buffer; there are also the ROPs! You remember those, don't you? The part of a graphics card that actually does the calculating for traditional multisample AA and supersample AA. Yes, bandwidth to and from the frame buffer is an important aspect of AA performance (as is the amount of frame buffer available at high resolutions like 3x1080p with high levels of AA), but in addition to the shaders and texture units doing the legwork leading up to final rasterization, it is the ROPs that add that last touch to smooth out the picture and make it extra pretty. The GTX 660 Ti has 24, and both the GTX 670 and HD 7950 have 32. The number of ROPs is tied to the width of the memory bus (8 ROPs per 64 bits), at least on the GTX 600 series.

The drop from 32 ROPs on the GTX 670 to 24 on the 660 Ti matches the drop in bandwidth at the same VRAM speed (256 bit = 32 bytes × 6 GT/s = 192 GB/s; 192 bit = 24 bytes × 6 GT/s = 144 GB/s).

The bigger performance gap between the GTX 660 Ti and the 670 (vs. between it and the 7950) at the same clock speed can likely be attributed in large part to the GTX 670 having 33% more ROPs (or the GTX 660 Ti having 25% fewer; potayto, potahto). As for the Galaxy GTX 660 Ti 3GB vs. the HD 7950 with the boost BIOS (7950B has a nice ring to it, like the old G92b or GT200b 55nm NVIDIA GPUs shrunk from 65nm for lower power, lower cost, and better performance), the 7950B has higher power use, though. Got off track a bit... back to the point:

1.2 GHz × 24 ROPs (Galaxy 660 Ti) = 28.8 Gpixel/s fill rate; 0.9 GHz × 32 ROPs (7950B) = 28.8 Gpixel/s fill rate.
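The arithmetic above can be sketched in a few lines. The clocks, bus widths, and ROP counts below are just the figures quoted in this thread, so treat the output as illustrative rather than datasheet-authoritative:

```python
# Back-of-envelope GPU throughput math, using the numbers from this thread.

def memory_bandwidth_gbs(bus_width_bits, effective_gtps):
    """Bandwidth in GB/s: bus width in bytes times effective transfer rate (GT/s)."""
    return (bus_width_bits / 8) * effective_gtps

def pixel_fill_rate_gpixels(rop_count, core_clock_ghz):
    """Peak pixel fill rate in Gpixel/s: ROP count times core clock."""
    return rop_count * core_clock_ghz

# GTX 670: 256-bit bus, 6 GT/s GDDR5
print(memory_bandwidth_gbs(256, 6))        # 192.0 GB/s
# GTX 660 Ti: 192-bit bus, 6 GT/s GDDR5
print(memory_bandwidth_gbs(192, 6))        # 144.0 GB/s

# Galaxy GTX 660 Ti at 1.2 GHz with 24 ROPs
print(pixel_fill_rate_gpixels(24, 1.2))    # ~28.8 Gpixel/s
# HD 7950 Boost at 0.9 GHz with 32 ROPs
print(pixel_fill_rate_gpixels(32, 0.9))    # ~28.8 Gpixel/s
```

Which is why the two cards land on the same theoretical fill rate despite the very different ROP counts: the clock difference cancels it out.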

Yes, as you said, the ROPs form an important part of the performance equation with MSAA or SSAA. Shader-based AA like FXAA does not require ROP performance, but traditional MSAA does.

Also, the HD 7950 with Boost does not perform like an HD 7950 (925 MHz), for two reasons.

1. The boost voltage of 1.25v raises power consumption, and the HD 7950 performs closer to an HD 7950 (850 MHz) because it's not able to run at 925 MHz consistently, as it is TDP constrained.

2. At default settings with the power option at 0%, its clocks are throttled and do not reach 925 MHz. If you max out the power option slider to +20%, it will always run at 925 MHz.

http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7950-mit-925-mhz/12/

This review shows the difference from pushing the power option to +10% in a variety of games. Manual overclocking to 1.1 GHz speeds would make it no contest against the GTX 660 Ti OC, as was shown in the previous OC article. The HD 7950 might not be well balanced in terms of resources, but it has tremendous performance scaling with overclocking, and that is something the GTX 660 Ti cannot match. The HD 7950 is the best value-for-money card in the high-end space. There are nice designs which use the HD 7970 PCB, like the Sapphire HD 7950 950 MHz edition and the Sapphire HD 7950 Vapor-X. These cards overclock to 1150 MHz with voltage tweaking and will compete with a GTX 670 (1250 MHz) easily. The GTX 660 Ti is not a competitor for the HD 7950.
 

OK, so after rereading parts of this [H] article and also using Google Translate to decipher the web page you linked, I think I know what you are saying.

Brent did check clock levels on the hand-overclocked 660 Ti 2GB and the 670 during games to make sure they were running at 1215 MHz all the time, but only listed 1188 MHz (maybe the card's max boost clock?) for the Galaxy 660 Ti 3GB, and said that the 7950 was running 925 MHz in all games without saying they checked (depending on it being at 100% boost all the time?). They did mention the voltage increase but nothing about changing the PowerTune limit. I wonder if the 7950 (maybe the 660 Ti too?) did throttle a bit in some games. In a way, that could be considered by some as even better for the 7950, since it was performing like that at lower clocks, or maybe the same or worse if the GTX 660 Ti was throttling too/more.

The German article you linked tested a lot more games, but included all the ones from these past few articles on the [H]. It seems the best cases on the foreign site were Batman: AC and Skyrim, while the worst case was Metro 2033 (not tested here). All other games fell in between a non-overclocked, old-BIOS 7950 and a hand-overclocked 7950 (925 MHz). For those reading this but not that article: that page was testing the boosted 7950 vs. non-boosted (old reference clock) and hand-overclocked (to 925 MHz) versions of the same card only.

I do agree about hand tuning the OC on a non-updated 7950 being better (running higher clocks without pushing the volts so hard, and adjusting PowerTune as well), or getting a new card and flashing an older BIOS (wonder if you'll find those online later on?). That still makes the HD 7950, with greater headroom than the 660 Ti, a better deal at a similar price.

I might mention that the article you linked noted the fan of the hand-OCed 7950 being a bit louder than the boosted 7950, though depending on the details that could be better. I think a lot of overclockers set the fan speed to a constant RPM that gives the best combination of cooling and tolerable noise levels. The boosted 7950 probably throttled the fan speed too, but maybe not as fast as the hand-overclocked card; an up and down in RPMs could get pretty annoying.

Edit: and yeah, a lot of games are going more toward shader-based AA solutions (that stay on the die?) such as FXAA, versus old-school AA, which substantially taxes the memory, bandwidth, and ROPs.
 
jtenorj, your posts make my eyes bleed; please use better formatting and punctuation.

Is it possible to calculate the memory/bandwidth theoretically needed for a game at various resolutions?
 

I will try to do better on punctuation. Not entirely sure what you mean by formatting, though.

If you mean smaller paragraphs, I can try, but my 1152x864 monitor only creates an initial input window so wide. Not sure if I can widen it; maybe I can by editing after the fact. However, some of my posts are quite long, so I think it might be very difficult.

As for calculating VRAM/bandwidth use for a game, I'm not sure about the best way to do that. Bandwidth is likely very hard, and it's hard to account for all the high-res textures in VRAM at once. That's likely why developers specify a certain VRAM size in a game's minimum/suggested requirements (from low settings and low-res textures up to high/high).
 
jtenorj, your posts make my eyes bleed; please use better formatting and punctuation.

I agree. Holy crap! I am interested in what he has to say, but the lack of a capitalized letter at the beginning of each sentence really makes it look like a run-on ramble.

It's amazing how such a little thing can have such an impact.
 
Edit: value was not specifically analyzed at that point vs. the HD 7950, but mentioning value and the 7950 (not at the same time) higher up on the page in question (or just mentioning them at all throughout the course of the article in general), then going on to mention the GTX 660 Ti's value when OCed compared to the GTX 670 with nothing said about the 7950, IMPLIES (to me anyway, not sure about others) that they would not endorse the 7950 as a good value as well.

In the instances where the performance of the GTX 660 Ti GC and the 7950 were close, the 7950 was still ahead by a healthy number of frames (drawn from the overclocking comparison published prior to this article). Considering the in-market cost of the Galaxy card in comparison to that of any 7950, it would be foolhardy, if not completely mental, to suggest that a GTX 660 Ti is of "greater value" than a 7950. The editors of this site don't strike me as those kinds of blokes.
 
I have to agree with jtenorj. Mentioning the 7950 last gave me the impression that the author didn't really like the 7950 or wouldn't recommend it, even though it was unquestionably the best performer in the review and is the best value. The last few reviews, in my opinion, have been biased toward NVIDIA cards. So much so that in their conclusions it seems as if they think NVIDIA was the better performer even though the actual data points the other way.
 

You're doing the same thing that jtenorj did. Brent did not mention the 7950 last; he specifically said that the GTX 670 wasn't significantly faster in memory-intense situations and that the GTX 660 Ti holds its value when you consider its overclockability. Whatever chip you already have on your shoulder regarding the 7950, you are applying it to the way you interpret the ending of the article. jtenorj concedes to doing it, and you may want to do the same.
 

Just like its big brother the GeForce GTX 670, the GeForce GTX 660 Ti is just as capable of a performer when it comes to overclocking. It is, after all, the same GPU, just on a narrower memory bus. What will set apart GeForce GTX 660 Ti video cards is going to be how well each manufacturer designs its printed circuit boards, power supply and circuitry, and its overall focus on building an enthusiast overclocking video card. GALAXY has taken care to focus on overclocking ability with its version of the GTX 660 Ti in the GC edition.

GALAXY designed this video card with a full-length PCB, instead of the shortened PCB you'll find on reference GTX 670 cards and 660 Ti cards which gives Galaxy more room to build an efficient trace pattern to deliver the power needed. GALAXY also designed this GC video card with an 8-pin and 6-pin power supply for a more stable power capacity compared to reference. GALAXY is using 5+2 phase power supply versus the 4+2 in the reference design. GALAXY is using a custom cooling solution that also works extremely well keeping the temperatures down.

All of these things combined give the GALAXY GTX 660 Ti GC a high capacity and potential for overclocking performance. These kind of features are what will separate each GTX 660 Ti video card from one another. If you want a high level of overclockability that can reach near an overclocked GTX 670, you are going to need one with these features. If not, you may not receive the kind of overclock that can compete.

All of this comes with a price though. The Galaxy GTX 660Ti GC can be found today for $339.99 at Newegg. Amazon has sold out of the card through its more reasonable retailers and now Amazon is showing an inflated price of $364.41. Galaxy is telling us that more cards are on the way in to all retailers. TigerDirect also has the card at MSRP of $339.99.

The ASUS GTX 670 DirectCU II TOP is pretty much a discontinued product from what we can tell. The last price we see on it was $429.99 from Amazon.

The XFX Radeon HD 7950 is currently priced at $383.87 at Amazon and in stock but that one has a bit different fan configuration. You can find the XFX "Double D" Black Edition HD 7950, which is the one we used, at Newegg for $349.99 with a $30.00 MIR, which makes it a tasty deal should you want to go with the Red Team.

So what card was mentioned last?
 

Really, dude? The 7950 was mentioned in the last article, not this one, which you are now commenting about in the forum. And you might have missed what Brent wrote about the 7950 being a "tasty deal" for gamers looking to buy a card from AMD. Sheesh. Look up self-awareness; it may help you.
 