NVIDIA 3-Way SLI and AMD Tri-Fire Redux @ [H]

So really, unless money is no object and you plan on going quad-SLI for the highest-end rig possible, why do 3x 580? It doesn't make any sense to me from either a cost or a performance perspective; either Tri-Fire or quad-fire 6970s do it for less. Shoot, if you go with unlocked 6950s... the extra power you get for the dollar from AMD this generation is just ridiculous.

The 3x 580 setup was faster overall and supports S3D, so there's a reason. I wonder how a hexa-core would affect these results.
 
Even if you're OC'ing to the bleeding edge, unless you want to spend $500 for an average of ~10% more across the five games tested here, the AMD solution is the better choice. Keep in mind, too, that going 3x 6970 would narrow that gap even further, plus give you an extra 512MB per card for more IQ.

But really, for about $100 less than 3x 580s you can get 4x 6970s, each with the larger framebuffer. It's almost $300 cheaper if you compare against the 3GB 580s.

Prices use Newegg's lowest price for each card; the quick per-card math is sketched below.

3x580: $1470
3x580(3GB): $1620
4x6970: $1360
4x6950: $1020 (HOLY CRAP now there is value!)
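
For anyone curious where those deltas come from, here's a rough sketch of the per-card math, assuming the totals above divide evenly per card (Newegg prices move around daily, so treat these as illustrative numbers only):

[CODE]
# Illustrative per-card math behind the totals above. The per-card prices
# are simply the listed totals divided by the card count, not fresh quotes.
setups = {
    "3x GTX 580":     (3, 1470),
    "3x GTX 580 3GB": (3, 1620),
    "4x HD 6970":     (4, 1360),
    "4x HD 6950":     (4, 1020),
}

baseline = setups["3x GTX 580"][1]  # compare everything against 3x 580
for name, (count, total) in setups.items():
    per_card = total / count
    print(f"{name}: {count} x ${per_card:.0f} = ${total} "
          f"({total - baseline:+d} vs. 3x 580)")
[/CODE]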

So really, unless money is no object and you plan on going quad-SLI for the highest-end rig possible, why do 3x 580? It doesn't make any sense to me from either a cost or a performance perspective; either Tri-Fire or quad-fire 6970s do it for less. Shoot, if you go with unlocked 6950s... the extra power you get for the dollar from AMD this generation is just ridiculous.

And that just doesn't make sense with the 6950s providing that kind of power. I just don't see how AMD lets this slide; it would be nice if we could get Kyle and Co. to run a review focusing on the 6970 vs. an unlocked 6950.
 
Upping the frequencies is good for raw power... but I think we all know that we should do reviews at lower frequencies, right? I mean... overclockers are a niche...

Most users will just play with everything at stock ... or at low overclocks ...

You DO know what website this is, right??
 
Good to see the comparison, but as a bang-for-buck shopper, nothing changes. If, however, you are in the "more money than brains" category, then tri-SLI puts up some very impressive numbers.
 
And that just doesn't make sense with the 6950s providing that kind of power. I just don't see how AMD lets this slide; it would be nice if we could get Kyle and Co. to run a review focusing on the 6970 vs. an unlocked 6950.

So that would be a review of a 6970 vs. a 6970...
 
@Kyle - I am curious why you didn't just overclock the i7 920 in the original setup to the 4.2GHz range and rerun the tests. The reason I ask: does SB favor Nvidia more than AMD?

It would be interesting to know if the platform was the reason for the differences in the second review. It might also explain the negative result in the one game for AMD. It could change someone's buying decision if they were on a high-end X58 setup. If AMD performed better on the X58 setup versus Nvidia, then it would make sense to go that route. However, if it was truly CPU-bound, bumping the 920 up to 4.2GHz might reveal that.

The other thing is I see a lot of posts talking about the MSI Eclipse having only an x4 third PCIe slot; is this correct? That seems a little odd on a higher-end X58 board.

Edit: Also, great job on both reviews. It's nice to see a reviewer willing to say they were wrong if they were. Kudos to both you and Brent; it shows you guys have class.
 
Don't worry, you're okay. I put a :p at the end of the post to signify that I was joking, considering that Brent joked in the previous thread about how his new 4.8GHz test bed was going to cause a power outage or something.

I take that statement back too; power efficiency is much better with the new system. I noticed 100-150W less at the wall at full load with 3-way SLI.
 
Great job on both reviews, props to Kyle and Brent!

I think I'll stick with my 6990+6970 combo:D

3x 3GB 580s + waterblocks & backplates = $2100+ :eek:
 
Most interesting, thanks for doing it. Looking forward to the 3x3 and 4x4 comparisons very much.
 
I'm sick of hearing about $400-500 more for only 15% more performance. You are not only paying for the extra performance, you are also paying for IQ enhancements, features, and support. Overpriced? Sure it is, but what good is a cheaper, slower GPU if it can't do what you want it to?
 
I'm sick of hearing about $400-500 more for only 15% more performance. You are not only paying for the extra performance, you are also paying for IQ enhancements, features, and support. Overpriced? Sure it is, but what good is a cheaper, slower GPU if it can't do what you want it to?

Not true. None of the settings for AMD are different from the Nvidia settings, and none have unplayable framerates.
 
Not true. None of the settings for AMD are different from the Nvidia settings, and none have unplayable framerates.


Huh? I'm talking about IQ enhancements like SSAA & TrMSAA in DX10/11, mixed AA modes, AO, 3D Surround, 3D support, and yes, PhysX (sure, PhysX is barely used, but even in that review there was a game that supports it). The point is you get better IQ, functionality, performance, and support; that's what you pay extra for, not performance alone.

What good is 100 fps in Crysis on AMD hardware if it's still a jaggy mess without SSAA/TrMSAA? Or BC2 with no SSAA? There's no AO to enhance almost any game on the market, plus no native 3D support with constant profiles and updates, etc. MLAA is nice, but it still doesn't compare quality-wise to traditional AA methods.
 
Huh? I'm talking about IQ enhancements like SSAA & TrMSAA in DX10/11, mixed AA modes, AO, 3D Surround, 3D support, and yes, PhysX (sure, PhysX is barely used, but even in that review there was a game that supports it). The point is you get better IQ, functionality, performance, and support; that's what you pay extra for, not performance alone.

What good is 100 fps in Crysis on AMD hardware if it's still a jaggy mess without SSAA/TrMSAA? Or BC2 with no SSAA? There's no AO to enhance almost any game on the market, etc., plus no native 3D support with constant profiles and updates...

First, AO in the nVidia panel gives you shadow glitches in like 99/100 games I have tested, including CS: Source and any casual games.

Second, TrSSAA is horrible to begin with. I have tested on a GTX 480 with 8xMSAA or 32xCSAA, and NONE OF THEM got rid of the aliasing. It's still very visible in many games compared to CFAA on ATi. Not to mention you still need a tool to turn SGSSAA on with nVidia, and it's not widely supported, just like ATi's SSAA, but once it works it's fantastic.

ATi also has 3D; not quite sure what you're smoking. It does work, I have tried it, but it still works like crap and hurt my eyes just like 3D Vision did.
IIRC, Sapphire did Eyefinity 3D before; go look it up.

About PhysX: IT DOES NOTHING... NOTHING... Plus, Metro 2033 natively supports CPU PhysX. Unless you are a total Batman whore, it does nothing to enhance anything. Give me a game where it actually does something, rather than completely unrealistic particle effects where objects shatter into more pieces than they should.
 
Opinions like "PhysX does nothing" aside...

You are right about AO (it flickers with Source, for example) and about ATI having 3D (LOL, but you're right, it does).

The AA part you got totally wrong. TrMSAA does not even exist :)

EDIT: Oh, sorry. You mean multisampling? TBH I never use that, as it is inferior to TrSS, which works in pretty much anything, unlike... (fill in the void plz xD)
 
ATi also has 3D; not quite sure what you're smoking. It does work, I have tried it, but it still works like crap and hurt my eyes just like 3D Vision did. IIRC, Sapphire did Eyefinity 3D before; go look it up.

AMD's level of S3D support is nowhere near nVidia's.
 
AMD's level of S3D support is nowhere near nVidia's.

Yep, but the market is too small.

Until they get rid of those ugly, eye-hurting glasses, I don't think it will go anywhere.

I would love to try 3D again, but only glasses-free, such as the Nintendo 3DS; that is why I bought one ;)

Opinions like "PhysX does nothing" aside...

You are right about AO (it flickers with Source, for example) and about ATI having 3D (LOL, but you're right, it does).

The AA part you got totally wrong. TrMSAA does not even exist :)

EDIT: Oh, sorry. You mean multisampling? TBH I never use that, as it is inferior to TrSS, which works in pretty much anything, unlike... (fill in the void plz xD)

Sorry, it's TrSSAA, I wrote it wrong...

The main game I play is Source engine based, and TrSSAA doesn't even seem to do anything. SGSSAA/SSAA and CFAA with edge detect are far superior.
 
First, AO in the nVidia panel gives you shadow glitches in like 99/100 games I have tested, including CS: Source and any casual games.

Second, TrMSAA is horrible to begin with. I have tested on a GTX 480 with 8xMSAA or 32xCSAA, and NONE OF THEM got rid of the aliasing. It's still very visible in many games compared to CFAA on ATi. Not to mention you still need a tool to turn SGSSAA on with nVidia, and it's not widely supported, just like ATi's SSAA, but once it works it's fantastic.

ATi also has 3D; not quite sure what you're smoking. It does work, I have tried it, but it still works like crap and hurt my eyes just like 3D Vision did.
IIRC, Sapphire did Eyefinity 3D before; go look it up.

About PhysX: IT DOES NOTHING... NOTHING... Plus, Metro 2033 natively supports CPU PhysX. Unless you are a total Batman whore, it does nothing to enhance anything. Give me a game where it actually does something, rather than completely unrealistic particle effects where objects shatter into more pieces than they should.


There is no Transparency MSAA or SSAA on AMD hardware in DX10/11. It's called Adaptive AA on AMD and it only works in DX9 or lower. CSAA is not Transparency MSAA or SSAA. There is also no support for full-screen SSAA.

AMD does not support 3D natively; it is handled by third parties, is inferior, and is not updated with new profiles, bug fixes, etc. anywhere near as frequently.

AO works fine in many games; I use it all the time.

PhysX is a nice bonus in games that support it, especially in 3D.

3D Surround is, you know, 3D + Eyefinity, which AMD does not support.

I also left out more texture filtering options, the ability to set frames to render ahead, and custom levels of mixed-mode AA (e.g. 16xCSAA + 4xTrMSAA + 2x2 SSAA, etc.).

So you don't only pay for more performance; you are also paying for more IQ enhancements, features, and support.
 
There is no Transparency MSAA or SSAA on AMD hardware in DX10/11. It's called Adaptive AA on AMD and it only works in DX9 or lower. CSAA is not Transparency MSAA or SSAA. There is also no support for full-screen SSAA.

AMD does not support 3D natively; it is handled by third parties, is inferior, and is not updated with new profiles, bug fixes, etc. anywhere near as frequently.

AO works fine in many games; I use it all the time.

PhysX is a nice bonus in games that support it, especially in 3D.

3D Surround is, you know, 3D + Eyefinity, which AMD does not support.

I also left out more texture filtering options, the ability to set frames to render ahead, and custom levels of mixed-mode AA (e.g. 16xCSAA + 4xTrMSAA + 2x2 SSAA, etc.).

So you don't only pay for more performance; you are also paying for more IQ enhancements, features, and support.

Not quite sure what you're smoking again.

ADAA does work in DX10 and DX11 titles...

Full-screen SSAA (SGSSAA) is supported on nVidia hardware through the SGSSAA tool, which was released officially.

I don't care how many options they have for AA, or if you add them all together; it STILL does NOT get rid of as much aliasing as CFAA/ADAA does.

About 3D, http://www.amd.com/us/products/technologies/amd-hd3d/Pages/hd3d.aspx
Take a look first.

AMD's 3D works entirely fine, with profiles constantly updated. Not quite sure where you're getting your opinion from. It's just that Eyefinity 3D is not fully supported yet (IIRC).

Again, you are not paying more for more features; they are both equivalent. Not quite sure where you're going with this, but it sounds like you are leaning toward your own flavor rather than what is actually there. You keep putting up arguments that do not even make sense.

Once again, I am done arguing with your nonsense, especially that "IQ FEATURE" claim, which is the most utter bullshit I have ever heard.

Uhhh... in Source you can do whatever you want.

Ask me via PM.

There is still tons of aliasing on the bridge. I know I am picky, but I have tried the exact same thing in CS: Source, and in the office map it's more obvious. That is why I went straight to SGSSAA, which is far better in my opinion, but it does smooth out the textures, which is unacceptable for some people.
 
So the Cliff's Notes version is:

1.) Buy a CPU that can overclock to 4GHz+
2.) If you're going to go tri or quad GPUs, make sure the board has an NF200 chip for Nvidia (maybe for AMD?)


I really didn't think that NF200 chip would make that much of a difference. Good work Nvidia :)
 
Shansoft, I just showed you those because you said it's not working. Not the best AA possible, just a working TrSS.

BTW :) You are not that picky if you don't notice Adaptive AA not working in DX10/11.
 
2.) If you're going to go tri or quad GPUs, make sure the board has an NF200 chip for Nvidia (maybe for AMD?)


I really didn't think that NF200 chip would make that much of a difference. Good work Nvidia

Of course, it very well could have been just the extra bandwidth provided by the higher CPU clock speed that allowed the cards to perform, and had nothing whatsoever to do with the increase in PCIe lanes?!

Yer fuel pump may be fine at 800BHP, but increase the power output to 1000BHP and chances are you're gonna have to upgrade the fuel pump?
 
Shansoft, I just showed you those because you said it's not working. Not the best AA possible, just a working TrSS.

BTW :) You are not that picky if you don't notice Adaptive AA not working in DX10/11.

Now that you remind me of it: I was testing Warhead a long time ago with ADAA, and it didn't work.

But when I tested in BF:BC2, it did... Or maybe it runs in DX9 mode somehow...
 
SO... there are no stock CPUs that can back the tri-SLI up now?

Hahah, oh wow... I'm so getting the ATI setup XD
 
I assume you're talking about the clock speed of the CPU?

I'm fairly certain the original review didn't use a stock CPU?
 
I really didn't think that NF200 chip would make that much of a difference. Good work Nvidia :)

There are differing opinions on this subject.

The way I understand it, if you have a motherboard/platform that supplies x16 lanes to all of your video cards, the NF200 chip is a waste of money and heat dissipation.

If, however, like on a P67/H67 chipset, you don't have sufficient PCIe lanes for a multi-card setup like this, then an NF200 chip helps out a good amount, at least if you have Nvidia cards.

The jury is still out on what effect (if any) it has on AMD cards, though.
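
For a rough sense of scale, here's a back-of-the-envelope sketch of what different lane splits mean in raw link bandwidth. The splits below are illustrative examples only, not the exact layout of the MSI Eclipse or the P8P67 WS Revolution:

[CODE]
# PCIe 2.0 moves roughly 500 MB/s per lane, per direction (after 8b/10b
# encoding overhead). The lane splits below are hypothetical examples.
MB_PER_LANE = 500

def per_slot_bandwidth(lane_split):
    """Per-slot link bandwidth in GB/s for a given lane split."""
    return [lanes * MB_PER_LANE / 1000 for lanes in lane_split]

examples = {
    "x16/x16/x4 (X58-style board with a weak third slot)": [16, 16, 4],
    "x8/x8 straight off a P67 CPU (two cards only)":        [8, 8],
    "x16/x16/x8 behind an NF200-style switch":              [16, 16, 8],
}

for name, split in examples.items():
    bw = per_slot_bandwidth(split)
    print(f"{name}: " + ", ".join(f"x{l} = {b:.1f} GB/s" for l, b in zip(split, bw)))

# Caveat: behind a switch the slots still share the 16 upstream lanes to the
# CPU (~8 GB/s each way), so the wider links mainly help bursty traffic and
# card-to-card transfers rather than sustained total bandwidth.
[/CODE]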
 
Zarathustra[H] said:
Jury is still out on what effect (if any) it has on AMD cards though.

But why is the Nvidia setup getting better with increased CPU speed while the AMD setup is not, and is even getting slower in one benchmark? :confused:

Something doesn't compute here.
 
The 3x 580 setup was faster overall and supports S3D, so there's a reason. I wonder how a hexa-core would affect these results.

Yes, but for less money than 3x 580 you can get 4x 6970 and even better performance. The [H] quad review is coming, so we'll see if I am right.
 
Yes, but for less money than 3x 580 you can get 4x 6970 and even better performance. The [H] quad review is coming, so we'll see if I am right.

You forgot about S3D; that was the main reason I got the 580s. There's really no debate that the 6900s offer a much better performance-per-price ratio, but not everyone buys just for that.
 
HMM,

This article raises a big question: is it actually a CPU-limited factor here, or was the speed boost simply due to the newer motherboard's improved PCIe lanes letting SLI work better?
 
"" Designed for true power users, the P8P67 WS Revolution uses a built-in NF200 controller that enhances bandwidth availability between the board and the four graphics card expansion slots. ""

How much did the new P67 design add to the performance boost versus X58? This clearly shows that more than just the CPU speed bump could be the cause of the rise in performance. You don't show an equalized test here with both CPUs clocked the same, so we could see how much the newer P67 board with that NF200 adds to the differences; a CPU-boost-only test isn't really conclusive, since it's swayed by unknown board performance differences.

Are we sure it's not the new improved PCIe lanes adding to the boost here? Or is it just the CPU speed ramp-up alone that shows the enhancement in performance?
 
"" Designed for true power users, the P8P67 WS Revolution uses a built-in NF200 controller that enhances bandwidth availability between the board and the four graphics card expansion slots. ""

How much did the new P67 design add to the performance boost versus X58? This clearly shows that more than just the CPU speed bump could be the cause of the rise in performance. You don't show an equalized test here with both CPUs clocked the same, so we could see how much the newer P67 board with that NF200 adds to the differences; a CPU-boost-only test isn't really conclusive, since it's swayed by unknown board performance differences.

Are we sure it's not the new improved PCIe lanes adding to the boost here? Or is it just the CPU speed ramp-up alone that shows the enhancement in performance?

You might read these...

http://www.hardocp.com/article/2010/08/16/sli_cfx_pcie_bandwidth_perf_x16x16_vs_x16x8/

http://www.hardocp.com/article/2010/08/23/gtx_480_sli_pcie_bandwidth_perf_x16x16_vs_x8x8/

http://www.hardocp.com/article/2010/08/25/gtx_480_sli_pcie_bandwidth_perf_x16x16_vs_x4x4/
 
I think that in order to narrow down the variables, a test of a higher-clocked X58 system needs to be done.
 
I think that in order to narrow down the variables, a test of a higher-clocked X58 system needs to be done.
MORE TESTING MORE TESTING haha

Seriously, though, if Kyle/Brent/etc. get some time, I too would be curious about that same X58 test bench clocked closer to 4GHz :)
 
MORE TESTING MORE TESTING haha

Seriously, though, if Kyle/Brent/etc. get some time, I too would be curious about that same X58 test bench clocked closer to 4GHz :)

This is interesting indeed.
Could you check this by running your i7 at 2.33GHz and at low resolutions to actually create a bottleneck for the 580 tri-SLI, then at 2560x1600, to find where it breaks and where ATI doesn't, and find out whether it's really because of how the surround software is done or not?

For those who say: who runs tri-SLI and doesn't clock their CPU to 4.6GHz?

I know two people with quad-fire... they run a stock 3.2GHz Intel i7. And to be honest, I only know like two of my close friends who actually overclock; they're all hardcore gamers, hate lag, and have to run everything maxed out.
Not everyone takes an interest in hardware beyond performance, and a lot don't do Eyefinity just because of competitive reasons...

So yes, there are a lot of high-end system buyers who run their CPU at stock, more than you'd think (I know it's stupid to have those rigs with a low-clocked CPU).
 
I wish some of the folks who have tri setups could post some of their own testing. I can't believe we are all spectators here (well, I guess I am at least, since I have only a one-card/GPU system, which is utterly outdated now). I like how HardOCP stayed focused: max CPU speed, three GPUs, how do they perform, period. Going into the effects of the CPU with a tri setup would probably be best done in a separate article. Besides, hopefully with Bulldozer coming out soon we'll get to see some interesting testing there as well.
 