NVIDIA 3-Way SLI and AMD Tri-Fire Redux @ [H]

Also reiterating what Kyle said:



From ASUS's page, I have one question.

Isn't the NF200 chip optimized for NVIDIA? Or maybe it's not stated like that, but I wouldn't be surprised if the chip had something like "AMD detected, cripple performance!" Haha, just kidding, but the results make me want to question everything, just like how some games are optimized for NVIDIA or AMD. I would be interested in seeing the tests rerun on the AMD side on a board without the Lucid chip.

Wasn't the Lucid chip originally an NVIDIA-only thing? I don't believe it was present on early socket 775 Intel boards where one could run CrossFireX.

I'd also be interested to see how this goes with a board that doesn't have an auxiliary chip....
 
So what I take away from this is that NVIDIA's drivers use the CPU more than ATI's? That is really interesting. I'm wondering if they are doing some shader work (or post-processing on frames) in the driver/CPU instead of on the GPU. Not saying that's necessarily wrong, just interesting why they would need a better CPU and ATI doesn't.

Surround is software driven... Eyefinity is hardware driven... so yes.
 
Many people have commented on the fact that they never see their CPU utilization go over 30 or 40%. Keep in mind most games don't take advantage of more than 2 or 3 cores. So your CPU utilization may still only be 30% or so on a quad-core setup with Hyper-Threading on, while those two cores are maxed out at 100%. This is why there can be such a big advantage in going to a higher-clocked system with fewer cores. I remember seeing this with the Core 2 Duo vs. the Core 2 Quad: the Core 2 Duo could game better because of its higher clocks.
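If you want to see this on your own box, here's a rough Python sketch (assuming the third-party psutil package is installed) that compares per-core load against the overall average:

```python
# Minimal sketch: sample per-core CPU load and compare it to the
# aggregate figure that tools like Task Manager report.
import psutil

per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # one value per logical core
overall = sum(per_core) / len(per_core)                   # the "total" utilization

print(f"Overall CPU utilization: {overall:.0f}%")
for i, load in enumerate(per_core):
    note = "  <-- saturated" if load > 90 else ""
    print(f"  Core {i}: {load:.0f}%{note}")

# On an 8-thread quad core with HT, a game pegging two cores at 100%
# while the rest idle shows up as only ~25% overall -- CPU-bound anyway.
```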

Most people probably already know this, but I thought I would point it out.
 
I'm sure NVIDIA engineers lost hours and hours and hours, staying up late at night with a BIG budget, to be sure the NVIDIA NF200 chip would be "optimized" for all the AMD graphics cards... their only competitor.

Just like showing a card with wood screws and saying "it's done and ready" to stop people from buying cards from AMD.

Shady tactics are shady.
 
Not to be a drag, but that NF200 decelerator is actually hurting the AMD setup and giving it less bandwidth.

It works like this:
The NF200 takes 16 lanes of the PCIe bus (from the chipset) and splits them into 32 lanes down to the graphics cards. So no matter how much bandwidth a card has, it is still communicating with the chipset at the same speed.

This hurts the AMD setup the most: where the 6990 previously had 16 lanes available, in reality it now has 8, or 4 lanes per GPU. The 6970 had 16 lanes before; now it has 8.

LOL no. Blame the shitty CrossFire bridge.

Because the AMD cards were again running at a full PCIe x16.
It's NV that was crippled at x16/x8/x8.

(There would be BLOOD if Civ 5, LP2, Mafia 2 and HAWX 2 were tested on say UD9 :D)
 
Good Redux. Pretty much ended up like I said it would. Use a good CPU overclock like a gamer would, and don't put the third 580 in an x4 slot, and it walks all over the AMD setup. ;)
 
Thanks for taking the time and effort to run these tests. The results were certainly surprising, and it is informative for a budget gamer like myself to know how CPU-hungry the SLI setup is.

At the end of the day I'm pretty happy with my single monitor setup, but it's great to know what the real world cost requirements are to make a tri-SLI setup happy.
 
Good Redux. Pretty much ended up like I said it would. Use a good CPU overclock like a gamer would, and don't put the third 580 in an x4 slot, and it walks all over the AMD setup. ;)

Vega, I have a question suited to your very particular level of knowledge.

I currently have a single 1.5GB GTX 580 (which doesn't seem to like to overclock AT ALL).

I plan on going SLI. Would I gain anything by selling it and going with two 3GB versions instead, if I only plan on playing at 2560x1600 on one screen? (And occasionally maybe three screens in a PLP setup using SoftTH.)

In other words, is there any game/setting (including heavy AA) where the 1.5GB memory limitation becomes an issue on a single 2560x1600 monitor?

And if I sell, maybe a more cost-effective method would be to get three HD 6950s, flash them to 6970s, and Tri-Fire them...
 
So I guess we're waiting on an improved CrossFire chipset or an improved set of Catalyst drivers for Sandy Bridge?
 
Very, very interesting results and an awesome review. Thanks for listening to the readership and re-running the results! SLI and CrossFire sure are interesting; I wonder how many more generations before they get more predictable? Seems like every SLI vs. CrossFire review gets re-run a few months later under different circumstances and some things change. Great for me, as it gives me more content to read over instead of working.

All that being said though, for $400-$500 more, SLI better have beaten Tri-Fire. I'm glad you addressed that so thoroughly in the review. So many forum warriors who will never buy any of these cards may debate them in a vacuum as if money is no object, but as a real review to base a buying decision on, this was fantastic.
 
That's what really surprised me: for anyone who doesn't have a 4.5GHz+ machine, your 3-way SLI rig is being gimped.

On the plus side, Tri-Fire isn't, and runs fine at slower CPU speeds.

Which in itself is curious. I mean, I trust your numbers 100%; I'm only wondering, from a technical perspective, what is actually going on with SLI?

Has anyone from AMD or Nvidia commented on your findings?

Perhaps it is to do with the way memory reads and writes are handled in NVIDIA chips. The Cayman article on realworldtech.com was quite informative on how ATI do things in an almost purely graphical way when designing their hardware, whereas NVIDIA have the GPGPU thing running deeply all the way through their designs. In a sense, maybe memory transactions have a higher administrative overhead on the NVIDIA side than on the ATI side.
 
Zarathustra[H];1037202845 said:
I've been running my i7-920 at stock speeds (2.67GHz) with my GTX 580 since I got it.

My reasoning for this was that my CPU loads never exceeded 35% while playing games, so I figured it was always GPU limited anyway, especially since I run at 2560x1600...

Now I am going to have to try to overclock and see what happens :p (this won't happen for a while though as I need a new case, mobo and cooler)

An i7-920 at stock speed will definitely bottleneck a GTX 580. Some games prefer higher clocks rather than more cores. I would say at least 3.5GHz, and 4.0GHz would be the sweet spot.

Here are some interesting articles on CPU scaling. Some games scale very well.

http://www.techspot.com/review/336-cod-black-ops-performance/page7.html
http://www.techspot.com/review/379-crysis-2-performance/page7.html
http://www.techspot.com/review/368-bulletstorm-performance/page7.html
 
Seriously, this is what I've been meaning to say: bench all of this again at 880/5500 for the AMD setup and see how big a difference it makes. I still think it's a fairer comparison to use 6970 Tri-Fire, because then both setups take up the same number of slots and exhaust most of their hot air out of the case, instead of exhausting ~130 watts into the case, and they're both running at AMD's/NVIDIA's fastest default clocks, instead of AMD being held back ~8-10%.
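(A quick sanity check on that "~8-10%" figure, assuming the stock clocks of 830MHz core / 5.0GHz memory for the 6990 and 880MHz / 5.5GHz for the 6970:)

```python
# Clock deficit of a stock HD 6990 vs. a stock HD 6970
# (assumed defaults: 830 MHz / 5.0 GHz vs. 880 MHz / 5.5 GHz)
core_deficit = 1 - 830 / 880    # ~5.7% lower core clock
mem_deficit  = 1 - 5000 / 5500  # ~9.1% lower effective memory clock

print(f"Core:   -{core_deficit:.1%}")
print(f"Memory: -{mem_deficit:.1%}")
```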

My $0.02.
 
Also reiterating what Kyle said:



From ASUS's page, I have one question.

Isn't the NF200 chip optimized for NVIDIA? Or maybe it's not stated like that, but I wouldn't be surprised if the chip had something like "AMD detected, cripple performance!" Haha, just kidding, but the results make me want to question everything, just like how some games are optimized for NVIDIA or AMD. I would be interested in seeing the tests rerun on the AMD side on a board without the Lucid chip.

Wasn't the Lucid chip originally an NVIDIA-only thing? I don't believe it was present on early socket 775 Intel boards where one could run CrossFireX.

You are really mixing things up. Lucid didn't exist on motherboards until MSI's P55 Big Bang Fuzion. The goal of Lucid is to mix and match different cards for improved performance, e.g. an HD 6970 with a GTX 580. Most testing has shown regular CrossFire and SLI scaling through the Lucid chip to be subpar at best.

The NF200 chip simply increases the number of PCIe lanes available for the graphics cards to communicate with each other. Most people familiar with SLI/CrossFire know that communication between graphics cards occurs over the PCIe bus rather than the SLI/CrossFire link. The SLI link is roughly equivalent to PCIe x1, whereas the PCIe bus has x8 bandwidth.
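(Rough numbers behind that comparison, assuming PCIe 2.0's usual ~500MB/s per lane per direction — rule-of-thumb figures, not measurements:)

```python
# Back-of-envelope bandwidth comparison (PCIe 2.0, per direction)
GBPS_PER_LANE = 0.5  # ~500 MB/s per PCIe 2.0 lane

sli_link = 1 * GBPS_PER_LANE    # SLI bridge, ~x1-class
pcie_x8  = 8 * GBPS_PER_LANE
pcie_x16 = 16 * GBPS_PER_LANE

print(f"SLI link (~x1): {sli_link:.1f} GB/s")
print(f"PCIe x8:        {pcie_x8:.1f} GB/s")
print(f"PCIe x16:       {pcie_x16:.1f} GB/s")
# Inter-GPU traffic that goes over the bus instead of the bridge has
# roughly 8-16x the headroom, which is where the NF200's extra lanes help.
```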

Also, you would need to overclock a Phenom II to 6GHz or more to match a Sandy Bridge running at 4.8GHz. So good luck trying to run this on an AMD system.
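(The arithmetic implied by that claim, treating it as a rough per-clock performance ratio:)

```python
# If a 6.0 GHz Phenom II ~= a 4.8 GHz Sandy Bridge, the implied
# per-clock (IPC) advantage of Sandy Bridge is:
ipc_gap = 6.0 / 4.8 - 1
print(f"Implied per-clock advantage: {ipc_gap:.0%}")  # 25%
```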

Great review though, and it's always nice to know more about computer systems. Not that I would go that high-end... maybe someday if I win the lottery ;)
 
Zarathustra[H];1037203381 said:
Vega, I have a question suited to your very particular level of knowledge.

I currently have a single 1.5GB GTX 580 (which doesn't seem to like to overclock AT ALL).

I plan on going SLI. Would I gain anything by selling it and going with two 3GB versions instead, if I only plan on playing at 2560x1600 on one screen? (And occasionally maybe three screens in a PLP setup using SoftTH.)

In other words, is there any game/setting (including heavy AA) where the 1.5GB memory limitation becomes an issue on a single 2560x1600 monitor?

And if I sell, maybe a more cost-effective method would be to get three HD 6950s, flash them to 6970s, and Tri-Fire them...

Zara, I did a few tests on games with one 30" monitor with my 3GB cards. Only a few games like Metro and Crysis reached above the 1.5GB VRAM limit, with around 1900MB @ 4x AA. The other games I tested were at 1.5GB or less.

Now the good thing is the 3GB cards are really future-proof. I could see games like BF:BC2 using a ton of VRAM: the largest maps they ever produced plus 64+ players. If I was sticking with the sole 30" monitor, it would be a hard choice between 2x 3GB 580s and 3x 6950s. On one hand the 6950s will be slightly faster, but with more scaling issues and micro-stutter versus the dual 3GB 580s.
 
... maybe someday if I win the lottery ;)

I shall have 4 quad-fire machines running on liquid nitrogen and [H] on staff to maintain my 10GHz overclock, lol.

Really though, [H] is where excellence in journalism is found. Sometimes I think they sound a bit AMD-biased, but they admit when they are wrong, which is outstanding.
 
Good Redux. Pretty much ended up like I said it would. Use a good CPU overclock like a gamer would, and don't put the third 580 in an x4 slot, and it walks all over the AMD setup. ;)

You predicted an anomalous result? Seriously, if one setup LOSES performance significantly while the other gains significantly from a system change that should increase performance for both, I don't see how you can draw any credible conclusions.
 
Very interesting results. Overall though, I'm still fairly unimpressed with the 580s compared to the 6990/6970 given the cost increase. #1, you have the necessity of a much more expensive system just to get the 580s to perform. #2, you have the seriously limiting 1.5GB framebuffer. #3, you have significantly higher power consumption. #4, you have the extra $400-500 (or more) for the cards themselves. #5, the performance is really not that much better:
DA2: AMD +21.7%
F1: AMD -11.5% (due to the clearly anomalous results here, I'm using the numbers from the original test for Tri-Fire vs. the updated results for Tri-SLI)
Metro 2033: AMD -6%
BF:BC2: AMD -24.3%
Crysis:WH: AMD -7.5%

If you take this all together, you average less than a 10% performance increase for an absolute minimum of $400+ more. In not one of those situations did that ~10% difference actually change the playable settings; in fact, those are the max playable settings for the 580s, which can't go higher due to the framebuffer, while the AMD cards can definitely go higher in many cases.
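(Working the average from the five deltas above, with positive meaning AMD ahead:)

```python
# Average NVIDIA advantage across the five games listed above
amd_delta = {
    "DA2":        +21.7,   # AMD ahead
    "F1":         -11.5,   # NVIDIA ahead
    "Metro 2033":  -6.0,
    "BF:BC2":     -24.3,
    "Crysis:WH":   -7.5,
}

nv_avg = sum(-d for d in amd_delta.values()) / len(amd_delta)
print(f"Average NVIDIA advantage: {nv_avg:.1f}%")  # ~5.5%, under the 10% figure
```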

Kyle and Brent: you guys are the best! I really appreciate the fact that you listen to your readers and are willing to adjust. That is why I read this site and am active here in the forums. I'm also very happy with the results here. I am a fan of both AMD and NVIDIA, and while I think AMD clearly has the win right now as far as price/performance and power go, I'm glad that NVIDIA's Tri-SLI wasn't a total loss and, with the proper power behind it, is still a true performance beast.
I am very much looking forward to seeing what happens with the Quad GPU shootout and the tri-6970 follow-up.
 
Thanks for the follow-up and the very interesting results. F1 does indicate a problem, but where? I hope this is an eye-opener in how changes in a system can have dramatic effects on the outcome, so benchmarks really relate more to a given configuration than to general results. I would think folks getting three 580s will also invest in hardware to run them, so this round is a more valid configuration. As for the AMD solution, even in this configuration it looks great considering the price.
 
Zarathustra[H];1037203381 said:
Vega, I have a question suited to your very particular level of knowledge.

I currently have a single 1.5GB GTX 580 (which doesn't seem to like to overclock AT ALL).

I plan on going SLI. Would I gain anything by selling it and going with two 3GB versions instead, if I only plan on playing at 2560x1600 on one screen? (And occasionally maybe three screens in a PLP setup using SoftTH.)

In other words, is there any game/setting (including heavy AA) where the 1.5GB memory limitation becomes an issue on a single 2560x1600 monitor?

And if I sell, maybe a more cost-effective method would be to get three HD 6950s, flash them to 6970s, and Tri-Fire them...

I'm using tri-SLI 580s w/ 1.5GB RAM on a 2560x1600 monitor, and it seems fine and/or overpowered for the games I've tried so far.

I think it will depend on the game; I'm sure the newer the game and the higher-resolution the textures, the less AA you'll be able to apply. I was playing Company of Heroes over the weekend with 16xQ antialiasing and fantastic framerates.

I figure by the time I feel like I need a larger set of RAM for 2560x1600, the Next Big Thing in GPUs will be out, and I'll get that anyway.
 
I wonder if six cores and/or hyperthreading enabled would have changed the results at all?

And thanks for posting these updated results.
 
You predicted an anomalous result? Seriously, if one setup LOSES performance significantly while the other gains significantly from a system change that should increase performance for both, I don't see how you can draw any credible conclusions.

This. +1
 
Wow, that's really useful information. Well, actually, no it's not.

The majority don't care what scales faster on 4.8GHz+ systems, for the simple reason that we don't use them; this article was a huge waste of time to satisfy the fanboys.

Even after the update, the CrossFire solution is still the best option for most users.

Oh, and I use a GTX 580.
 
I'm using tri-SLI 580s w/ 1.5GB RAM on a 2560x1600 monitor, and it seems fine and/or overpowered for the games I've tried so far.

I think it will depend on the game; I'm sure the newer the game and the higher-resolution the textures, the less AA you'll be able to apply. I was playing Company of Heroes over the weekend with 16xQ antialiasing and fantastic framerates.

I figure by the time I feel like I need a larger set of RAM for 2560x1600, the Next Big Thing in GPUs will be out, and I'll get that anyway.

I agree. I am sticking with my 1.5GB ones simply because the $$ needed to go get 2 or 3 more 3GB versions doesn't make sense when in 6 months something new will be out, and hopefully NV will have sufficient RAM on them for huge resolutions.
 
Good follow-up article.

It's also interesting to see the "cost" of NVIDIA's software approach vs. AMD's hardware approach. It's much higher than I would have thought.

Some may remember a few years ago: Winmodems, "chipless" sound cards, network cards, etc., all costing CPU resources.

The more things change, the more they remain the same...
 
Thanks for the review, Kyle & Brent. As I thought, the CPU clock speed was really holding back the 580s. I expected more, but it's definitely not worth $500 more. I can get a nice IPS panel for that much moolah.
 
This just complicates things even more for people shopping around.

I say, buy whatever makes you feel good, to each his/her own.

BTW, I blame DirectX for all this nonsense.
 
Wow, that's really useful information. Well, actually, no it's not.

The majority don't care what scales faster on 4.8GHz+ systems, for the simple reason that we don't use them; this article was a huge waste of time to satisfy the fanboys.

Even after the update, the CrossFire solution is still the best option for most users.

Oh, and I use a GTX 580.

Considering you can get a 4.8GHz CPU/mobo for $350 before the heatsink, I'm pretty damn sure there will be plenty of people using them. Most of my friends in the hobby have sold their gen-1 Core i-series/Phenom stuff to switch to 2500K setups, because the cost to swap was so low.
 
I have been telling a lot of different people about the same outcome as your article found today... SUPER REVIEW BY HARDOCP! I am running a 920 D0 on X58 @ 4.4GHz with Tri-Fire 5870s, and I can feel the speed increases in gameplay off the mouse and by eye... I agree 110% with this article... I wish I at least had the Extreme Edition CPU or Sandy Bridge on my X58! A++ review, that's [H]ard!
 
Most of the time people don't max out their graphics card solutions like they think. We need 4GB-8GB of RAM on video cards before we see DX11 really come to life. Even Crysis with HD files and modded Extreme Ultra quality files, along with CINELTA HD, brings Tri-Fire 1GB 5870s to their knees in 8x SSAA / 16x AF mode using max in-game detail at 1920x1080... and it barely plays at 4x SSAA, with 2x SSAA being playable. That's with 1GB, so 2GB might not even cut it. 4GB VIDEO CARDS, COME OUT NOW!
 
I am interested in the problems with HT mentioned before.

What are the issues being brought up with Hyper-Threading, and how would they affect SLI performance?
 
Considering you can get a 4.8GHz CPU/mobo for $350 before the heatsink, I'm pretty damn sure there will be plenty of people using them. Most of my friends in the hobby have sold their gen-1 Core i-series/Phenom stuff to switch to 2500K setups, because the cost to swap was so low.

Agreed. It's pretty safe to assume that anyone willing to spend $1000+ on video cards is going to back them up with some pretty decent hardware. I can't imagine anyone who has the know-how and desire to build a 3-GPU system not being willing or able to overclock the CPU for it. Unless of course it's some trust-fund noob who bought the most expensive thing he could configure on Alienware.
 
Awesome article.

Sadly crawls back to the basement to play on a Q6600 and 460 SLI.
 
Wow, that's really useful information. Well, actually, no it's not.

The majority don't care what scales faster on 4.8GHz+ systems, for the simple reason that we don't use them; this article was a huge waste of time to satisfy the fanboys.

Even after the update, the CrossFire solution is still the best option for most users.

Oh, and I use a GTX 580.

I care what scales faster with a 4.8GHz+ system.

I do not own one, I will not own one until it's cheap and quasi-normal to do so, and I won't have more than a single graphics card in my system. But I care because it's INTERESTING, and it furthers my knowledge and my understanding of PC hardware.

I like to think of myself as a PC enthusiast: I tweak my own systems, read articles, and comment on forums. Because I find these things interesting, because it answers questions people have, because it answers questions from the last article, and it shows the reviewers were curious and open to new ideas too.

If this and the last article didn't pique your interest, then I'm sure Tom's Hardware Guide has a nice warm seat for you.
 
Am I the only one that gets the impression that on HardOCP, when an NVIDIA configuration beats an AMD configuration the entire subject is dropped until the next generation, BUT when an AMD configuration beats an NVIDIA configuration the topic gets revisited over and over and over again until NVIDIA takes the lead again?!

TBH... I would expect a 3x 6970 Tri-Fire configuration to at least close the gap with the 3x 580 configuration where the CPU is less of a limitation. This series of articles has all been comparing 3x NVIDIA cards versus 2x AMD cards (not exactly an apples-to-apples comparison, ladies... the AMD configuration may have 3 GPUs, but if history is any indication, 3 actual 6970s in Tri-Fire would outperform that config by a good margin).
 
LOL, didn't even get past the 2nd post on OCN before those guys started blaming the NF200 :D
 
Am I the only one that gets the impression that on HardOCP, when an NVIDIA configuration beats an AMD configuration the entire subject is dropped until the next generation, BUT when an AMD configuration beats an NVIDIA configuration the topic gets revisited over and over and over again until NVIDIA takes the lead again?! Shouldn't all these revisits be 3x 580 versus 3x 6970 anyway (as a 3x 6970 configuration should be marginally faster than 1x 6990 plus 1x 6970)?

I couldn't care less which one happens to be on top this week, but it's getting a bit silly :/

The conclusion of the article recommends getting the AMD setup. How on earth can you conclude that [H] is NVIDIA-biased?
 