NVIDIA GeForce 3-Way SLI and Radeon Tri-Fire Review @ [H]

I did not know that about Tom's

lame

Me neither. I know their forums are pretty bad because of a lack of moderator control, but I always go there for their articles. Granted, I always read as many articles/reviews as I can from different websites to come to an unbiased conclusion.
 
Me neither. I know their forums are pretty bad because of a lack of moderator control, but I always go there for their articles. Granted, I always read as many articles/reviews as I can from different websites to come to an unbiased conclusion.

Yeah, I used to read THG back in the 90's before I knew of their shady practices. These days it's: [H] > Anand > Guru3D
 
Thanks for this post. I was thinking of getting an ATI Radeon 6990, as the reviews have it neck and neck almost the whole way. People are saying the build quality of the 6990 is a lot better than the GTX 590's, and the benchmarks prove it to a certain extent. Thanks for this read!
 
Zarathustra[H] said:
Yeah, I used to read THG back in the 90's before I knew of their shady practices. These days it's: [H] > Anand > Guru3D

How about techpowerup, tweaktown, and overclock3d? I'm pretty impressed with techpowerup for calling out Nvidia on the poor VRMs on the GTX 590. Though I still don't see why techpowerup uses benchmarks at resolutions below 1680x1050, especially on high-end cards....
 
Agreed. Also, it seems that the Nvidia SLI setup is less sensitive to the PCIe lane limitation than the AMD setup. SLI was only affected by about 10% at 1080p using PCIe x4, and even less at 1600p.

Well, the GTX 570 has LESS VRAM bandwidth than the 6950. Not by much, though (152 vs 160 GB/s).

And here in HardOCP's test it's the other way around, with Nvidia having more bandwidth, and this being a multi-GPU test.

3x GTX 580 vs 6990 + 6970/50*

3 x 190 GB/s vs 3 x 160 GB/s
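For reference, here's how those per-GPU numbers fall out of the usual bandwidth formula (effective memory transfer rate x bus width / 8). A quick Python sketch; the clocks and bus widths below are the commonly quoted reference specs, assumed rather than measured:

```python
# Rough sketch: per-GPU memory bandwidth = effective transfer rate (MT/s) * bus width (bits) / 8.
# Clocks and bus widths are assumed reference specs, not measured values.
cards = {
    "GTX 570":           (3800, 320),   # MT/s effective, bus width in bits
    "HD 6950":           (5000, 256),
    "GTX 580":           (4008, 384),
    "HD 6990 (per GPU)": (5000, 256),
    "HD 6970":           (5500, 256),
}

for name, (mts, bus_bits) in cards.items():
    gb_per_s = mts * 1e6 * bus_bits / 8 / 1e9   # bytes/s -> GB/s
    print(f"{name:>18}: {gb_per_s:6.1f} GB/s")
```

That gives 152 vs 160 GB/s for the GTX 570 / 6950 pair, and roughly 192 vs 160-176 GB/s per GPU for the 580 vs 6990/6970 comparison above.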


Frankly, all this speculation is pointless; maybe it's best to just get a new mobo:

[Far Cry 2 PCIe scaling chart: farcry22.png]


[Enemy Territory: Quake Wars HQ PCIe scaling chart: et-qw-hq2.png]


*Typo in the above images*

MSI Eclipse is actually running at x16 / x16 / x4
 
Here are some other results directly contradicting [H]'s results:

[PCIe x16 vs x4 scaling charts: image021.png, image022.png, image023.png]


Even with only a single monitor and a single GPU there is up to an 18% performance loss going from x16 to x4 on slower cards that don't even have to swap frame buffers between GPUs.

The 18% drop in performance is on an AMD card, the Nvidia card (GTX570) loses 6-11%. The percentage of drop actually decreases the higher the resolution goes on the Nvidia card, how much would it be at Eyefinity res? My own experience at eyefinity resolutions is that going from 16x/16x PCIe to 16x/4x PCIe gave a 3-5% performance drop on a Crossfire HD6970 system.

None of this detracts from the fact that even with this probable 20% boost (being generous here) the 3 way SLI would be about as fast, or marginally faster than a much cheaper AMD TriFire solution. Even if money is no object paying $500-$555 for a tiny increase in performance would be a very poor move.

Just curious, if as some seem to imply there was some sort of CPU bottleneck, then why is the AMD TriFire system getting 18% higher speeds on the same CPU? Surely if there was a bottleneck with the CPU, both systems would have hit a wall and had identical FPS on most if not all tests? Instead the AMD system is up to 18% faster in some cases, the CPU bottleneck excuse doesn't add up given the evidence IMHO.
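For anyone following along, the percentages being argued over are just simple relative drops; a throwaway sketch (the FPS numbers here are made-up placeholders, not figures from any of the charts above):

```python
# Relative performance loss going from PCIe x16 to x4.
# The FPS values are hypothetical placeholders, purely to show the arithmetic.
def pcie_drop(fps_x16: float, fps_x4: float) -> float:
    """Percent of performance lost when moving from x16 to x4."""
    return (fps_x16 - fps_x4) / fps_x16 * 100

print(f"{pcie_drop(60.0, 49.2):.0f}% loss")   # ~18%, like the worst AMD single-GPU case cited
print(f"{pcie_drop(60.0, 56.4):.0f}% loss")   # ~6%, like the mildest GTX 570 case cited
```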
 
The 18% drop in performance is on an AMD card, the Nvidia card (GTX570) loses 6-11%. The percentage of drop actually decreases the higher the resolution goes on the Nvidia card, how much would it be at Eyefinity res? My own experience at eyefinity resolutions is that going from 16x/16x PCIe to 16x/4x PCIe gave a 3-5% performance drop on a Crossfire HD6970 system.

None of this detracts from the fact that even with this probable 10% boost the 3 way SLI would be about as fast, or marginally faster than a much cheaper AMD TriFire solution. If as some seem to imply there was some sort of CPU bottleneck, then why is the AMD TriFire system getting 18% higher speeds on the same CPU? Surely if there was a bottleneck with the CPU, both systems would have hit a wall and had identical FPS on all tests.



no more logic. just find a way for Nvidia to win!
 
There is no CPU limitation affecting this evaluation in any way. People keep bringing it up like it's something that needs to be examined, but it's just not an issue. 7MP is approaching two 2560 x 1600 monitors. Would an i7 @ 3.6GHz HT bottleneck a 2560 monitor? No, it wouldn't, and it definitely wouldn't bottleneck almost two of them. This setup's weakest link is its GPU, not its CPU. I stand by this post, and will reference it when the re-test proves my claim.
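The pixel-count claim is easy to check; a quick sketch, assuming the surround array is 3 x 1920x1200 as in the review:

```python
# Pixel counts: triple-wide surround vs a single 30-inch panel.
# Assumes the array is 3 x 1920x1200 (5760x1200), as used in the review.
surround = 3 * 1920 * 1200      # 6,912,000 px, the "7MP" figure
panel_30 = 2560 * 1600          # 4,096,000 px

print(f"surround:   {surround / 1e6:.1f} MP")
print(f"2560x1600:  {panel_30 / 1e6:.1f} MP")
print(f"ratio:      {surround / panel_30:.2f}x a single 30-inch panel")   # ~1.69x
```

So ~6.9 MP, or about 1.7x a single 2560x1600 panel - a bit short of two of them, but in that ballpark.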


3 GPUs is also a LOT of driver overhead, bro
 
Both configurations are CPU constrained. I'm seeing a 25-30% improvement going from 3.6 to 5 GHz in Warhead, BF:BC2 and Metro (TriFire 6970).
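Just to put numbers on what that scaling implies (the 25-30% gain and the 3.6/5 GHz clocks are from the post above; the rest is plain arithmetic):

```python
# How much of the 3.6 -> 5.0 GHz clock increase actually shows up as FPS?
clock_gain = 5.0 / 3.6 - 1                    # ~38.9% more CPU clock
for fps_gain in (0.25, 0.30):                 # the 25-30% range quoted above
    print(f"{fps_gain:.0%} FPS gain / {clock_gain:.0%} clock gain "
          f"= {fps_gain / clock_gain:.0%} scaling")   # ~64-77%: partially CPU bound
```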
 
I am quite happy I went with AMD Tri-Fire (6990 + unlocked 6950) to push my ZR30w. The gaming experience is just phenomenal! It's satisfying to know that my $950 GPU configuration matches and even exceeds a $1500 setup, while consuming less power.
 
Both configurations are CPU constrained. I'm seeing a 25-30% improvement going from 3.6 to 5 GHz in Warhead, BF:BC2 and Metro (TriFire 6970).

Just for fun, try searching for "low GPU usage" / utilization, and see how many hits you get for Red cards and how many for Green.
The CPU-constrained case is less bad on AMD than on Nvidia.

This is a 3DMark 11 Physics test, and it clearly demonstrates that rendering the same scene on Nvidia takes more CPU time, leaving the tested i7-980X to perform 10% worse in pure physics calculations.



[3DMark 11 Physics score chart: i7-980x.jpg]




None of this detracts from the fact that even with this probable 20% boost (being generous here) the 3 way SLI would be about as fast, or marginally faster than a much cheaper AMD TriFire solution.

When benchmarking you do it right, or not at all.

+/- 20% fps... that much I can guess myself without benching.
 
The bolded part is simply not true, as both anisotropic filtering quality and AA are currently better on Nvidia cards. High-sample SLI antialiasing, SGSSAA and TrAA (supersampling) all work in DirectX 10 and DirectX 11, with nothing similar from AMD to counter this
(MLAA aside, which tbh is more of a performance preset than a quality one).

I don't think anyone will disagree that nVidia has implemented features that can provide better image quality, but if the frame rates are unusable, then what's the point? We can all sit around a table and state that, given an infinite budget, power, and PCIe bandwidth, nVidia could achieve superior quality and frame rates. However, cost, efficiency, and bandwidth are real constraints that we all have to deal with every day, and at least to me they seem like awfully reasonable evaluation criteria.
 
Well, I didn't say all those AA modes are viable at multi-monitor resolutions.

Crossfire took this round @ [H], so that (quoting the review)

"2GB of RAM per GPU on the Radeon HD 6970 will allow you to do this and provide a noticeable gameplay experience"


but you (not you personally xD) cannot at the same time claim a visual improvement over GTX 580 3-Way SLI
AND better performance
 
I'm willing to accept the results from the Sandy Bridge build as the final result. After reading the last page, I now definitely have reasonable doubt about the results. Having the 3 cards in a build that should have no PCIe or CPU bottleneck will put this issue to rest for me.
 
Makes you wonder why you have to go for a high-priced CPU to get the same performance you can get with a mid-range CPU and AMD GPUs. Are Nvidia drivers that inefficient?
 
BTW, to address VRAM concerns even more, I will make some apples-to-apples graphs at 2560x1600 in the Quad GPU testing I'm doing, and in the re-testing here. We certainly want to address that and eliminate it completely as a factor. I've already done that, of course, but I will do 2560x1600 too, so those that find value in that will have it.
 
BTW, to address VRAM concerns even more, I will make some apples-to-apples graphs at 2560x1600 in the Quad GPU testing I'm doing, and in the re-testing here. We certainly want to address that and eliminate it completely as a factor. I've already done that, of course, but I will do 2560x1600 too, so those that find value in that will have it.

It sounds like you'll be pretty busy for a while. I assume that Kyle got you the new platform already? Can't wait to see as you guys are probably the first review site to test this much graphics power with this much CPU power.
 
It sounds like you'll be pretty busy for a while. I assume that Kyle got you the new platform already? Can't wait to see as you guys are probably the first review site to test this much graphics power with this much CPU power.

Yes, behold the power! *watches as all electricity within a 20 mile radius shuts down*
 
BTW, to address VRAM concerns even more, I will make some apples-to-apples graphs at 2560x1600 in the Quad GPU testing I'm doing, and in the re-testing here. We certainly want to address that and eliminate it completely as a factor. I've already done that, of course, but I will do 2560x1600 too, so those that find value in that will have it.



You guys should just use the 3GB 580's for Eyefinity/Surround resolutions if you can get your hands on them.
 
Yes, behold the power! *watches as all electricity within a 20 mile radius shuts down*

Nah, you'll be fine. Now if you head over to the DC subforum, we have some folders who probably cause a brownout every time they switch their folding farms back on after a holiday. I have to admit that it has to be an awesome feeling, though. What you have is the equivalent of taking delivery of an 800hp Shelby Cobra Super Snake and flooring the gas off the dealer lot.

You guys should just use the 3GB 580's for Eyefinity/Surround resolutions if you can get your hands on them.

They would probably have to buy it, and that would get expensive.
 
They would probably have to buy it, and that would get expensive.


Fair enough, they could at least use a better game selection to base their conclusion on, though. F1 and DA2 are known to run far better on AMD hardware; to keep things fair, either add some known games that favour Nvidia or use a wider and more balanced set of games when making claims.
 
Fair enough, they could at least use a better game selection to base their conclusion on, though. F1 and DA2 are known to run far better on AMD hardware; to keep things fair, either add some known games that favour Nvidia or use a wider and more balanced set of games when making claims.

Civ 5/Metro/BC2/Crysis have been known to favor NV in the past...

I really don't want to argue about that though. Save that for another rant.

We are in desperate need of new games, there are some good ones coming out this year I can't wait for.
 
Civ 5/Metro/BC2/Crysis have been known to favor NV in the past...

I really don't want to argue about that though. Save that for another rant.

We are in desperate need of new games, there are some good ones coming out this year I can't wait for.

Sure, but come on, in a 5-game review F1 and DA2 were used - the 2 games that perform better on AMD hardware relative to Nvidia hardware than anything else out there. I feel that if you include them, it's only fair to have a wider variety of games and some Nvidia-favoured titles too.
 
Exactly how would you do that? The other 3 games in the test already fall under the "The Way It's Meant To Be Played" program. Isn't that good enough?
 
Fair enough, they could at least use a better game selection to base their conclusion on, though. F1 and DA2 are known to run far better on AMD hardware; to keep things fair, either add some known games that favour Nvidia or use a wider and more balanced set of games when making claims.

How much fairer could it be? Yes, F1 and DA2 are ATI-sponsored games, but the other games in their suite are TWIMTBP games. Civ 5 might not be (haven't fired it up in a LONG time), but it still favors nVidia hardware.

Civ 5/Metro/BC2/Crysis have been known to favor NV in the past...

I really don't want to argue about that though. Save that for another rant.

We are in desperate need of new games, there are some good ones coming out this year I can't wait for.

Lionhead, err, I mean Microsoft has supposedly made some real improvements to Fable III when it comes out in a couple of weeks for the PC, but do you think this game might make it in? So far we have only been given a promise that the PC version has been significantly improved (we've heard this one before, LOL), but it would be a change of pace for your testing suite. I imagine that BF3 will probably replace BC2.
 
I'd just use a wider variety of games to make blanket statements. It's no coincidence that the 2 games AMD runs best out of any games on the market are AMD-sponsored games; that's fair enough, but at least make the sample size of games bigger.
Those games are in effect to AMD what Lost Planet 2 is to Nvidia, which I think would not be appropriate in a 5-game review.

Would be nice to see some OC results if possible, and a couple of synthetic benches too.
 
I'd just use a wider variety of games to make blanket statements. It's no coincidence that the 2 games AMD runs best out of any games on the market are AMD-sponsored games; that's fair enough, but at least make the sample size of games bigger.
Those games are in effect to AMD what Lost Planet 2 is to Nvidia, which I think would not be appropriate in a 5-game review.

Would be nice to see some OC results if possible, and a couple of synthetic benches too.

Both camps use driver profiles when in-game benchmarks are used. This can skew results either way. That is why [H] uses only in-game performance.
 
I'd just use a wider variety of games to make blanket statements. It's no coincidence that the 2 games AMD runs best out of any games on the market are AMD-sponsored games; that's fair enough, but at least make the sample size of games bigger.
Those games are in effect to AMD what Lost Planet 2 is to Nvidia, which I think would not be appropriate in a 5-game review.

Would be nice to see some OC results if possible, and a couple of synthetic benches too.

You guys do realize that [H] doesn't use in-game benchmarking utilities and timedemos, right? It is easy to request everything including the kitchen sink, but at some point time becomes an issue, as the way they test is much more time intensive. Aren't you also forgetting about the Stalker games? Those are ATI sponsored and kill nVidia in performance as well. The fact that they run 3 ATI-sponsored games, 3 TWIMTBP games, and Civ 5 is already going to be a time-intensive chore by itself. This in my opinion is a pretty diverse lineup, as the TWIMTBP games already outnumber the ATI-sponsored games. BC2 is an ATI-sponsored game and it already performs better on Fermi.
 
The way I look at games is: games are games, people play those games, so how is a given game going to perform and which card gives me a better experience? Throw "this game favors xxx" out the window. Fact is, game X performs one way on card A and one way on card B, so which one is the better experience for that game? If you have card A and play game X, wouldn't you be curious how it plays on card B? Well, we can do that for you and show you which delivers the best experience at a similar price point :) Any issues about "favoring", take them to the game devs.... They are, after all, making the game to perform a certain way. We just show you what to expect.
 
I really thought Nvidia would have creamed AMD in this type of scenario, since with Nvidia each card powers its own monitor - no need to transfer a frame buffer over the PCIe bus/bridge. With AMD, one card has to drive all three monitors, meaning the other card(s) have to transfer their frame buffers to it every frame. So the Nvidia solution appears to need less bandwidth.

Now what would be interesting is the AMD configuration: having the 6990 drive the three monitors with the 6970 sending its frame buffer to the 6990, compared to the 6970 driving the three monitors with the 6990 sending its frame buffers to the 6970. I would think it would be more efficient having the 6990 drive the three monitors, because it would only need to receive one card's frame buffer (the 6970's) over the bus, while internally the 6990's two GPUs are not handicapped by an external bus for this. With the 6970 driving the three monitors, it would have to receive two frame buffers (all three monitors' worth) from the GPUs on the 6990. For AMD I would think x4 PCIe speeds would be a rather big hindrance for Eyefinity and CrossFire use, so having three 6970s in CrossFire may end up slower (but what do I know without tests?).
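For a rough sense of how much bus traffic that frame-buffer hand-off actually is, here's a back-of-the-envelope sketch. It assumes 32-bit color at 5760x1200 and roughly 500 MB/s of usable bandwidth per PCIe 2.0 lane; those are assumptions, not anything measured in the review:

```python
# Back-of-the-envelope: bandwidth to ship finished frames to the display-driving card.
# Assumes 32-bit color at 5760x1200 and ~0.5 GB/s usable per PCIe 2.0 lane (assumptions only).
width, height, bytes_per_px, fps = 5760, 1200, 4, 60

frame_mb   = width * height * bytes_per_px / 1e6   # ~27.6 MB per finished frame
stream_gbs = frame_mb * fps / 1e3                  # ~1.66 GB/s at a steady 60 fps

for lanes in (4, 8, 16):
    link_gbs = lanes * 0.5                         # PCIe 2.0 usable bandwidth, per direction
    share = stream_gbs / link_gbs
    print(f"x{lanes:<2}: ~{link_gbs:.0f} GB/s link, one 60 fps frame stream uses ~{share:.0%}")
```

On those assumptions a single 60 fps stream already eats most of an x4 link, which would line up with x4 hurting Eyefinity CrossFire more than it hurts x16.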

As for the 3GB 580 cards, what do they cost? Since the original tests were not VRAM limited, there would be no performance benefit on those same tests; at best it might allow for higher AA setting comparisons. Still, we would be talking about a more than $500 difference with no clear performance advantage.

Interested in the new tests with different PCIe slot bandwidth - x16/x8/x8? What will it be? Will this give a 10-20% jump? We will see.

As for games, HardOCP seems to do it best. Obviously what counts most is the games one plays or will play, but how can anyone ever know that or test them all? I see nothing wrong with HardOCP's game selection here. I guess having more wouldn't hurt, but readers need to be reasonable as well.
 
I'm just happy we have enough competition in the high end to warrant this type of discussion.

I hope/wish Bulldozer delivers this type of competition to the high end CPU market.
 
Civ 5/Metro/BC2/Crysis have been known to favor NV in the past...

I really don't want to argue about that though. Save that for another rant.

We are in desperate need of new games, there are some good ones coming out this year I can't wait for.


Civ 5, Metro, BC2, Crysis (surely you mean the crappy-looking Warhead, not the original Crysis)
running better on NV does not mean favoring Nvidia.
At least not in an unfair sense, because these are not TWIMTBP-exclusive games where AMD was cut out of the development process.
(dunno what ppl who claim otherwise have been smoking)

Unlike F1 2010 and Dragon Age 2, which are exclusively AMD-sponsored and developed games.

So if you think that having 2 out of 5 AMD partnership games,
and not having a single exclusively Nvidia-sponsored TWIMTBP title,
is either fair or representative of the gaming industry, oh well...

Also...you say you need good games.

How about S.T.A.L.K.E.R.: Call of Pripyat - one of the first DX11 games, a game with phenomenal gameplay and replay value, loved by many on this forum, yet never a part of your benchmark set?
But a cartoonish-looking corridor RPG with 1 cave and DX11 features only on paper enters your bench set, no problemo.

And no, we cannot take this to the developers, because you are the ones who choose which games to bench, and frankly this set could be larger.

How about widening it?
Because I don't think you can really say "X gives a better gameplay experience than Y" with a straight face after benching 4 games.

How's that for a rant :cool:
 