NVIDIA GeForce 3-Way SLI and Radeon Tri-Fire Review @ [H]

The point is that a mere 10-20% performance increase from adding a third 580 is incredibly low, and it isn't replicated in other benchmark reviews or in user tests. Scaling has been much better than that ever since SLI was released. There are other factors at play.
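To put rough numbers on the scaling complaint (a hypothetical sketch; the FPS values below are invented for illustration, not taken from the review):

Code:
# Hypothetical sketch: per-card scaling from FPS results measured with
# 1, 2, 3... cards. The numbers are invented purely to illustrate the
# "10-20% for a third card" complaint; they are not review data.

def scaling_steps(fps_by_card_count):
    """Yield (card count, % gain over the previous configuration)."""
    for n in range(1, len(fps_by_card_count)):
        prev, cur = fps_by_card_count[n - 1], fps_by_card_count[n]
        yield n + 1, (cur / prev - 1.0) * 100.0

fps = [60, 110, 125]  # FPS with 1, 2, 3 cards (invented)
for cards, gain in scaling_steps(fps):
    print(f"{cards} cards: +{gain:.0f}% over {cards - 1}")
# -> 2 cards: +83% over 1
# -> 3 cards: +14% over 2   (the "incredibly low" case)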

I am guessing it's the triple-monitor setup. Nvidia's third card has never scaled well in that situation.
 
There's a clear CPU limitation in fully utilizing the third card. Vega has proved this to the satisfaction of anyone evaluating the data objectively. When it comes to maximum playable settings, the 3GB GTX is a requisite. Since the Radeons scale similarly in correlation with clock speed, though, I doubt an apples-to-apples comparison will shift much, except to demonstrate much better performance across the board.
 
Objectively, this is fair. Will the SLI setup improve with a better, higher-clocked CPU? If it does, logic dictates the Crossfire setup will as well. We could very well end up with the same difference in results.
 
I don't see how you can compare their results to yours. Last I heard, they don't use the built-in benchmarks; they pick a specific area in the game and run through it manually. So unless you can mimic exactly what they do, your results can't be compared to theirs at all.

Yep. Some people should let the professionals do their work in properly controlled environments, instead of trying to be home-made "experts" and challenging the results of real professionals. Spending a lot of money on a computer doesn't automatically transform you into an expert or a professional reviewer. You are just a random guy with an expensive computer. :)

HardOCP uses real-time, actual gaming benchmarks. They run games the way people actually play them. Their results will differ from Vega's, or any other random guy's, because of the methodology they use. Why is this so hard to understand? You can't expect their results to match when they use a completely different method than Vega does.

Vega is using in-game benchmarks to compare his results with theirs, which were obtained using a totally different methodology. It's totally illogical. They are doing the right thing in using real-time, actual gaming benchmarks. Vega is not.

They are professional reviewers. He is not.
 
Objectively, this is fair. Will the SLI setup improve with a better, higher-clocked CPU? If it does, logic dictates the Crossfire setup will as well. We could very well end up with the same difference in results.

I think Crossfire and SLI require some CPU load to hand tasks to each graphics card and to divide up what each GPU will do.
I don't know how much this requires, but if you look at GPGPU workloads, you can hit a bottleneck on calculations done by a single graphics card with a dual-core 2.66GHz i3.
How it is for games and SLI, I don't know, but it could be that SLI requires more CPU.

I'm too lazy to test anything so I'll leave it to you guys.
 
This is a wonderful benchmark. Apart from those super-high resolutions, which make my eyes bleed, look at the wattage consumed! But the result is not surprising. It's well proven that this generation of Radeons scales much better in Tri/Quad-Fire setups than the GeForces do in SLI. AnandTech had a similar result. For me, though, today's single-GPU cards provide more than enough power for even a modest Eyefinity setup. I'm not going to buy one of those Tri-Fire/SLI monstrosities for 24/7 gaming.
 
I think some of you are confusing in-game and out-of-game benchmarks. In-game benchmarks are a good thing. For example, the F1 2010 in-game benchmark is perfect, as all it does is replay a real recorded race using the in-game engine and show the FPS. This is ideal because every Joe Snuffy in the world can run the same benchmark of real in-game play and compare against others using the same settings. If [H] uses some custom run-through of their own devising, then unless they release what they did, it is of little use to other gamers as a reference.

Now, out-of-game benchmarks that don't take actual gameplay into account, like Heaven 2.5 or the Metro 2033 benchmark, are the ones that can be "tweaked" by different drivers/companies to get the maximum possible score. Those are the ones to worry about.

In my SLI scaling thread, the A-10C, Crysis 2, Batman, and Metro 2033 numbers were all in-game FPS measurements, not canned "benchmarks".
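For what it's worth, turning a manual run-through into numbers is straightforward if you log frame times (a minimal sketch, assuming a frame-time log in milliseconds such as the one FRAPS can export; the sample values below are made up):

Code:
# Minimal sketch: average and minimum FPS from a list of logged frame
# times in milliseconds (e.g., a FRAPS frametimes export).

def fps_stats(frame_times_ms):
    """Return (average FPS, minimum FPS) for a run-through."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s   # frames over the whole run
    min_fps = 1000.0 / max(frame_times_ms)    # the single slowest frame
    return avg_fps, min_fps

# Made-up run: mostly ~16.7 ms frames (60 FPS) with one 50 ms stutter.
times = [16.7] * 300 + [50.0]
avg, low = fps_stats(times)
print(f"avg {avg:.1f} FPS, min {low:.1f} FPS")  # avg 59.5 FPS, min 20.0 FPS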
 
So why are the 580s running at PCIe 4x? Because the Eclipse mobo used has three slots: two x16 and one x4.

You are benching the 6990+6970 at 16+16 and the 580s at 16+16+4, which is why there isn't much benefit in games like BC2. An 11% increase? lol, no.

The benches are invalid. At least you guys didn't bench AAA vs. TrMSAA/SSAA in DX10/11 like countless previous HardOCP reviews, so I guess that's a plus.

By the way, try to pick a more balanced set of games next time. F1 and DA2 in a five-game review, published first and used for a blanket statement about the performance of the two systems? Seriously, HardOCP.
 

Wow, good catch, demowhc. I didn't think to check the PCIe lane allocation on the third 580. A 4x slot would seriously cripple the tri-580 setup under alternate frame rendering, since the whole setup is slowed to the slowest link. Now I can definitely see why my 3x 580 tests at 16x/16x/16x would be way faster.
 
So why are the 580s running at PCIe 4x? Because the Eclipse mobo used has three slots: two x16 and one x4.

Then surely only one of the 580s would be running at 4x, not all of them as you imply in the underlined part of your quote.

You are benching the 6990+6970 at 16+16 and the 580s at 16+16+4, which is why there isn't much benefit in games like BC2. An 11% increase? lol, no.

Nvidia fanboy much? How come you neglected the part where the HD 6990 is running in its slower mode? In essence, two of the three AMD GPUs are running underclocked, so any benefit of running the third 580 at 16x is more than negated. Or are you going to argue that the third 580 at 16x PCIe would magically make the tri-SLI run much better?

The benches are invalid. At least you guys didn't bench AAA vs. TrMSAA/SSAA in DX10/11 like countless previous HardOCP reviews, so I guess that's a plus.

A (not so) thinly veiled attack on HardOCP's integrity; nice way to make a point.

By the way, try to pick a more balanced set of games next time. F1 and DA2 in a five-game review, published first and used for a blanket statement about the performance of the two systems? Seriously, HardOCP.

Yeah, maybe they could throw in some Lost Planet 2 benchies :rolleyes:

Neither of the games you mentioned is specifically designed with AMD enhancements in mind. Did you complain when HardOCP kept using the woefully awful Mafia II, Batman: Arkham Asylum, or Lost Planet in their reviews? Also, how come you don't take umbrage at HardOCP's inclusion of Battlefield: Bad Company 2 or Metro 2033? Both of these games currently favour, or in the past favoured, Nvidia hardware by a large margin. I'm seeing a pattern here.

If I were to guess (it isn't in your post), you are probably running a couple of GTX 580s; would that be a good guess? Maybe you are trying to justify to yourself that two 580s are a much better deal than an equivalently priced Tri-Fire AMD setup. You do have a valid point about the 16x/16x/4x issue, and HardOCP should have caught it. Unfortunately, the way you deliver this message stinks of a disgruntled Nvidiot.

1. You neglect to mention the fact that the HD 6990 was running at slower speeds.
2. You call HardOCP biased for not picking a "more balanced set of games", obviously because you think the games picked heavily favour AMD GPUs, which of course is BS.
3. You attack the way HardOCP conducts its reviews (the AA rant).

All of this shows your true agenda here: quite simply, your aim is to discredit the entire review because it doesn't match your biased Nvidiot views. I will go out on a limb and say that you wouldn't be posting "hey, the HD 6990 is clocked at full speed, the review is invalid" if the SLI 580s had come out on top. Don't answer that; it's a hypothetical question.
 
I didn't ignore anything; 830MHz is the default clock for the 6990 out of the box. I don't care if 880MHz is used, but to be fair, that isn't the default, unchanged out-of-the-box speed.

I'm no fanboy; I have owned far more AMD/ATI systems than Nvidia systems. But the testing is flawed and not an accurate depiction of 3 vs. 3.

I only mentioned the AAA vs. TrMSAA thing because reading so many reviews where they didn't realise it doesn't actually work in DX10/11 made me lose faith in HardOCP. They would report higher scores on the AMD system while in fact the Nvidia system was using 4x SSAA, etc.
 
So why are the 580s running at PCIe 4x? Because the Eclipse mobo used has three slots: two x16 and one x4.

You are benching the 6990+6970 at 16+16 and the 580s at 16+16+4, which is why there isn't much benefit in games like BC2. An 11% increase? lol, no.

The benches are invalid. At least you guys didn't bench AAA vs. TrMSAA/SSAA in DX10/11 like countless previous HardOCP reviews, so I guess that's a plus.

By the way, try to pick a more balanced set of games next time. F1 and DA2 in a five-game review, published first and used for a blanket statement about the performance of the two systems? Seriously, HardOCP.

That's at most 11% slower in one game on one card, whereas all three of the AMD GPUs are running at a roughly 6% slower clock speed than they would have been if 6970s had been used.

It was actually SirPauly at Rage3D who noticed this, btw.
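The ~6% figure follows directly from the clocks discussed in this thread (830MHz stock on the 6990 vs. 880MHz on the 6970):

Code:
# Back-of-envelope check of the "~6% slower" claim, using the clocks
# quoted above: HD 6990 default core clock vs. HD 6970 core clock.
hd6990_mhz = 830.0  # 6990 out-of-the-box clock
hd6970_mhz = 880.0  # 6970 stock clock

deficit = (hd6970_mhz - hd6990_mhz) / hd6970_mhz
print(f"clock deficit: {deficit:.1%}")  # -> clock deficit: 5.7%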
 
Actually, minaelromany at Rage3D picked up on the 4x PCIe slot being used. And I already said I don't care if 880MHz is used; in fact, I'd like to see some overclocked results too.
 
Wow, good catch, demowhc. I didn't think to check the PCIe lane allocation on the third 580. A 4x slot would seriously cripple the tri-580 setup under alternate frame rendering, since the whole setup is slowed to the slowest link. Now I can definitely see why my 3x 580 tests at 16x/16x/16x would be way faster.

Wow, good catch indeed... all three cards in AFR would be severely slowed down by this.
 
Wow, that's very interesting. And yes, they will rerun the tests with a nice Sandy Bridge build using the Asus WS Revolution mobo and a 2600K at around 4.7GHz. I don't understand the discrepancy in performance between your run and HardOCP's, but that overclock and faster CPU shouldn't account for it, especially at such a high resolution (I know you probably don't think it's that high, Vega). It doesn't make sense. Something is weird for sure. I'm sure the next follow-up may shine some light on the issue.

I feel like running it on Sandy Bridge may handicap a tri-SLI setup due to the fewer PCIe lanes. An overclocked Gulftown EE would have been a better choice.
 
What is wrong with using the stock 6990 clocks? The 580s are at stock clocks.

People do not realize the massive bandwidth requirements over the PCIe bus and the CrossFire/SLI bridges once you start frame-swapping at Eyefinity/Surround resolutions. Previous tests of PCIe slot-speed differences on a single monitor don't apply here. This is one of the reasons my 4x 6970 setup could not push 12.3 megapixels. As soon as I lowered the resolution of the 4x 6970 setup past a certain point, it magically started working again.

I can easily see a 10-20%+ performance loss with the third 580 in a 4x slot. I have done a lot of testing on this subject, and it was very apparent that something was not right here.
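A rough back-of-envelope illustration of why slot speed can start to matter at these resolutions (a sketch with assumed numbers: PCIe 2.0 lane rates, 32-bit frame buffers, a 60 FPS target; real AFR traffic patterns are more complicated than this):

Code:
# Rough sketch: raw bandwidth needed to move completed frames at an
# Eyefinity/Surround resolution vs. what a PCIe 2.0 slot offers.
# Assumed numbers, not measured AFR traffic.

PCIE2_GBPS_PER_LANE = 0.5  # ~500 MB/s usable per PCIe 2.0 lane

def frame_traffic_gbps(width, height, fps, bytes_per_pixel=4):
    """GB/s to transfer one completed frame per displayed frame."""
    return width * height * bytes_per_pixel * fps / 1e9

# Triple 2560x1600 (~12.3 megapixels, as mentioned above) at 60 FPS:
need = frame_traffic_gbps(3 * 2560, 1600, 60)
for lanes in (4, 8, 16):
    have = lanes * PCIE2_GBPS_PER_LANE
    print(f"x{lanes}: {have:.1f} GB/s available vs. {need:.2f} GB/s of frames")
# An x4 slot offers ~2 GB/s while the frame copies alone approach
# ~3 GB/s, so x4 can plausibly become the bottleneck at this resolution.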
 
Wow, good catch, demowhc. I didn't think to check the PCIe lane allocation on the third 580. A 4x slot would seriously cripple the tri-580 setup under alternate frame rendering, since the whole setup is slowed to the slowest link. Now I can definitely see why my 3x 580 tests at 16x/16x/16x would be way faster.

Then a quick test on your system with one of the PCIe slots forced to run at 4x would show how much of a difference it makes.

I wanted to test whether it would be better to separate my two 6970s by one slot for the extra airflow. My testing showed no major difference in speed between 16x/16x and 16x/4x. So I separated the cards for airflow rather than run them at 16x/16x for a very slight performance boost of about 3%.
 

There is no way I can get a slot to run at 4x on my UD9 motherboard. Do you run three monitors?
 
People do not realize the massive bandwidth requirements over the PCIe bus and the CrossFire/SLI bridges once you start frame-swapping at Eyefinity/Surround resolutions. Previous tests of PCIe slot-speed differences on a single monitor don't apply here. This is one of the reasons my 4x 6970 setup could not push 12.3 megapixels. As soon as I lowered the resolution of the 4x 6970 setup past a certain point, it magically started working again.

I can easily see a 10-20%+ performance loss with the third 580 in a 4x slot.

Is a $500-555 increase in price worth a 10-20% improvement in performance? Especially considering that most of that increase could be negated by flicking the switch to fast mode on the HD 6990, or by simply using three HD 6970s.
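Put as plain arithmetic (a sketch using only the figures quoted above):

Code:
# Dollars per percentage point of extra performance, using the price
# delta and gain range quoted above (illustrative, not measured).
for price_delta in (500, 555):
    for gain_pct in (10, 20):
        print(f"${price_delta} for +{gain_pct}%: "
              f"${price_delta / gain_pct:.0f} per point")
# Best case ~$25 per point, worst case ~$56 per point.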
 
There is no way I can get a slot to run at 4x on my UD9 motherboard. Do you run three monitors?

Yes, three 22" 1920x1080 monitors. When I did my testing, there was no tangible difference between running 16x/16x and 16x/4x. Was there a difference? Yes, a "massive" one of around 3-5%; sorry for the sarcasm. :)

Eventually a new BIOS came out for my board that made the third slot 8x PCIe. There was no noticeable performance increase with 16x/8x.
 
It would be interesting to see if the results differ from earlier experience with just SLI. Tri-SLI should require more bandwidth, but would it cripple the cards?
http://www.hardocp.com/article/2010/08/25/gtx_480_sli_pcie_bandwidth_perf_x16x16_vs_x4x4/

As for setting the 6990 to 6970 clocks, that would be fair, since they are supposed to be comparing three 6970s vs. three 580s.

You must remember, though, that those are single-monitor tests on last-gen equipment. Increase the resolution and overhead to Eyefinity/Surround levels and the numbers will change.
 
Wow, even on a single monitor, I'm totally surprised at the minimal difference in those 4+4 vs. 16+16 results.
 
Here are some other results directly contradicting [H]'s results:

[Tom's Hardware charts: image021.png, image022.png, image023.png]
Even with only a single monitor and a single GPU, there is up to an 18% performance loss going from 16x to 4x on slower cards, without even having to frame-swap anything.
 
You must remember, though, that those are single-monitor tests on last-gen equipment. Increase the resolution and overhead to Eyefinity/Surround levels and the numbers will change.
Correct, but by how much is what we all want to know. Enough to cripple the cards, produce some slowdown, or hardly any at all?
 
My friend tried to run CrossFire on a P55 16+4 board and the results were poor in BC2, but that could be CPU limitation. However, he gave me my 5850 back, lol, after complaining that the improvement was small and that very bad microstutter and choppy gameplay made the game unplayable for him.
 
It's an interesting point. Either way, they're redoing the testing on a different mobo, one that doesn't have an x4 slot, though it does have an x8. I'm curious which would make a bigger difference now, if any: higher CPU clock speeds, or x8 over x4. It also makes a valid argument for triple 6970s instead of 6990+6970, to see if there is a problem with PCIe lanes. Plus, the 6970s would be at stock speeds instead of downclocked.
 
Now, I once again state that AMD definitely has the price-to-performance crown, as I have always said...

... power consumption and multi-GPU flexibility. So please lay out for us a testing methodology that pleases you and would show Nvidia's GTX 580s/590s in a more positive light. :rolleyes:
 
... power consumption and multi-GPU flexibility. So please lay out for us a testing methodology that pleases you and would show Nvidia's GTX 580s/590s in a more positive light. :rolleyes:

No one cares about having Nvidia's GTX 580s/590s shown in a more positive light, or at least that should not be the goal of any testing methodology. But...

Among all the statistical/hardware/software uncertainties involved in a particular benchmark, having to think about this:

Correct, but by how much is what we all want to know. Enough to cripple the cards, produce some slowdown, or hardly any at all?

Having to think about how much a particular setup was crippled :eek: kind of defeats the purpose of a benchmark.

"AMD Radeon 6990/6970 Tri-Fire is better in terms of value, efficiency, and gaming performance than GTX 580 3-Way SLI. If you want to utilize that performance, the 2GB of RAM per GPU on the Radeon HD 6970 will allow you to do this and provide a noticeable gameplay experience and visual improvement over GTX 580 3-Way SLI. "

The bolded part is simply not true, as both anisotropic filtering quality and AA are currently better on Nvidia cards. High-sample SLI antialiasing, SGSSAA, and TrAA (supersampling) all work in DirectX 10 and DirectX 11, with nothing similar from AMD to counter them
(MLAA notwithstanding, which tbh is more of a performance preset than a quality one).
 
Nvidia has abandoned its core business of delivering the fastest 3D gaming experience by playing around with other crap! Long live AMD!
 
Did anyone notice in the Tom's Hardware results that 4x is actually running better at the higher resolutions? Or am I reading that wrong? :confused:
 
People do not realize the massive bandwidth requirements over the PCIe bus and the CrossFire/SLI bridges once you start frame-swapping at Eyefinity/Surround resolutions. Previous tests of PCIe slot-speed differences on a single monitor don't apply here. This is one of the reasons my 4x 6970 setup could not push 12.3 megapixels. As soon as I lowered the resolution of the 4x 6970 setup past a certain point, it magically started working again.

AMD's answer to your claims (which you posted on 50 different forums, btw) was: "the guy probably doesn't know how to install drivers properly, since it's working properly in-house on our test systems."

So who's right? ;) Do you know how to install drivers properly, big V?

I will know more this week when I receive my 3x 30" monitors. ;) Until then, you still have the benefit of the doubt. :)
 
Did anyone notice in the Tom's Hardware results that 4x is actually running better at the higher resolutions? Or am I reading that wrong? :confused:

I don't know; I haven't read their article. But keep in mind that THG is completely disreputable. They have been known to take kickbacks from the industry to paint hardware reviews one way or another. I haven't even bothered reading their reviews in over 10 years.
 
Here are some other results directly contradicting [H]'s results:

[Tom's Hardware charts: image021.png, image022.png, image023.png]
Even with only a single monitor and a single GPU, there is up to an 18% performance loss going from 16x to 4x on slower cards, without even having to frame-swap anything.

Again, THG has been known to completely fabricate their results and be paid for it by hardware vendors, so nothing they say can be trusted.

I trust you more than I trust THG.
 
Tom's. The same place that gives favorable reviews to Cooler Master PSUs (because they use CM's lab).
 
Blah blah blah... PCIe lane speeds, 4x, 8x, 16x.

So, I'm beating a lot of 580 quad-SLI setups in GPU scores in both 3DMark Vantage and 3DMark 11 (even 990X/980X setups with those awesome 16x lanes), with my "crippled" P67 motherboard, its "crippled" PCIe lanes, and my lowly 2600K at 5.3GHz. So how do you explain that? Please. Enlighten us. :)
 
You must remember, though, that those are single-monitor tests on last-gen equipment. Increase the resolution and overhead to Eyefinity/Surround levels and the numbers will change.

Agreed. Also, it seems the Nvidia SLI setup is less sensitive to the PCIe lane limitation than the AMD setup. SLI was only affected by about 10% at 1080p using PCIe x4, and even less at 1600p.
 
LOL. Blame Nvidia for not letting people run a 590+580 on two PCIe slots.

It's not AMD's fault that they can do three GPUs on two slots while Nvidia can't. :)
 
After reading through some arguments in this thread, a new slogan came to mind:

AMD CrossFire: no hocus-pocus necessary.

+1 for ease of use.
 