NVIDIA GeForce 3-Way SLI and Radeon Tri-Fire Review @ [H]

OK, that proves it: Spoony is a troll/spammer. 23 posts per day... :rolleyes:

And thanks, Kyle, for going the extra mile and trying to get 4.8GHz just to make everyone happy :)

I think it shows that Nvidia is doing everything possible to make AMD look bad.

Even by making themselves look stupid, they keep trying.

Internet trolls FTL
 
Running 5x1 portrait under quad 6970s with a Sandy @ 5.25GHz. What may be unbeknownst to OCN devotees is that CrossFire scaling blows through ceilings with high clocks, just as well as NVIDIA's. I see 99% usage on 3x1 portrait, landscape, and 11-megapixel 5x1.

Wow! You really have my interest here!

So the Nvidia fanboys posting in here and on 50 different forums that the AMD CrossFire bridge CAN'T support the bandwidth for Quad-Fire and 3 x 30'' LCDs, scaring everyone, were wrong all that time????

I really hope so, since I'm waiting for 2 more 30'' LCDs to arrive next week so I can go 3 x 30'' LCD with Quad-Fire.
 
Why would anyone want to SLI $1K in cards for that resolution anyways (2560x)?

Ans: because they can! But you are right in what you say :)

Wrong, wrong and wrong.

A single GTX 580 or 6970 is completely insufficient to play the newest games with AA and acceptable frame rates on a single 2560x1600 monitor.

Even less demanding titles like S.T.A.L.K.E.R.: Call of Pripyat can fall below 40fps at that resolution on my GTX 580. In the Crysis 2 demo, I was unable to get adequate frame rates at 2560 at any setting and was forced to drop down to 1920x1200. Even with everything set to lowest, I would hover at about 38fps (sometimes dipping lower) and the game would feel unresponsive. At 1920x1200 the framerate would go up to about 60 even at the highest settings, and still feel playable.

Those of you who say an SLI setup is a waste on a single 2560x1600 monitor simply don't know what you are talking about. There is no single-board solution in existence (not even the 6990 or 590) that can adequately handle the latest games at max settings with AA at 2560x1600. It simply does not exist.
 
Ya, you don't need mega cards for 2560x. [H]ard showed GTX 460 1GB SLI @ 810MHz on the core pushing Metro maxed at that res.


Disagree completely.

FPS games simply don't feel fluid at much under 60fps. The above, to me, would be inadequate.

I'd be OK if the min FPS occasionally dropped into the high 50's, but averaging in the 40's is just not acceptable.

That and I refuse to play any game at anything below max settings, with at least 4X AA on.
 
Wow, this reminds me of the 8800 series pwnage of ATI back in the day! Truly, nvidia fanboys need to pipe down because they have the hot, loud, and slow tri setup!! Maybe they should crack their wallets open and donate to nVidia so that they can be competitive again with AMD :p
 
/bow to Kyle.

You have really shown us a great deal in this thread, and going the extra mile, I guess, isn't something you are doing out of pressure, but because you have your own point of view from first-hand experience and want to convey it to us without a shadow of a doubt. That makes you a great journalist, quite honestly.

Thanks again to Brent and to you, you guys are doing an amazing job here.
 
Why do I have the feeling that someone will come in here and say: "You're STILL CPU LIMITED!!!Rage!!! You need to use an SR-2 with dual 990Xs!!!TrollRage!!!!"?

Kyle, Brent, the majority of us thank you guys for doing things like this, where you use REAL gameplay and not some canned benchmarks.

This whole thing just goes to show that one should look at the price/performance ratio instead of 'equal number of GPUs'. Maybe you guys could test dual 6990s vs. quad GTX 580s; I shudder to think what kind of behemoth power supply it would take. :eek:

Thanks guys, and keep up the [H]ardWork!

PS: I guess the Nvidia fanboys still can't get over the fact that AMD has progressed past the HD5850/5870 Crossfire scaling issues.
 
About the Nvidia "CPU limited" cards in that article...

I know it's just a synthetic benchmark... But just for the fun of it.

The highest 3DMark 11 GPU score I have seen on the internet for Nvidia 2 x 590 is 19,000. So we can already extrapolate the results from here, since my GPU score is 22,452. :)

So AMD cards also scale well with higher CPU speed, not just Nvidia's. The relative results will probably stay the same at a higher CPU speed: 6990 + 6970 will still beat 580 Tri-SLI... ;)

[attached: 3DMark 11 score screenshot]
 
Excellent, thanks for this. This has given me much more of a reason to buy ATI next time; if the Linux support is up to scratch at the time, I may jump ship. Glad to hear you're repeating with a faster CPU. Now if you also overclock both cards under water and use the 3GB 580s and ATI still beats Nvidia, then the fanboys have nothing left to whine about.
 
Zarathustra[H];1037186379 said:
As much as I doubt we are looking at CPU limitation issues, I do appreciate you guys taking the extra step to make sure that the CPU isn't the issue.

My stock-clocked Core i7-920 (2.67GHz) usually sits at about 35% utilization in most FPS games, and CPU utilization usually doesn't scale up much with resolution, at least not in FPS games.

35% usage of one core, or 35% usage across all cores? Assuming you mean the latter, with 8 threads on a 920 you may still be bottlenecked because the game isn't written to take advantage of all the threads, no? Just because you have spare threads doesn't necessarily mean that a clock increase won't help.
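
One way to check which it is: sample per-core load while the game runs. A minimal sketch using Python's psutil package (an assumption; no one in the thread names their monitoring tool):

Code:
import psutil

# Sample each logical core's utilization over one second while the game runs.
per_core = psutil.cpu_percent(interval=1.0, percpu=True)
overall = sum(per_core) / len(per_core)

print(f"overall: {overall:.0f}%")
for core, load in enumerate(per_core):
    print(f"core {core}: {load:.0f}%")

# 35% overall with one core pegged near 100% is still a CPU bottleneck:
# the game's main thread can't run faster than that one core allows.
if max(per_core) > 90:
    print("at least one core is saturated -- likely CPU limited")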
 
Zarathustra[H];1037186379 said:
As much as I doubt we are looking at CPU limitation issues, I do appreciate you guys taking the extra step to make sure that the CPU isn't the issue.

I think there will be some differences in performance, but I doubt it will actually change the outcome. Fun to see for sure though. :)
 
Excellent, thanks for this. This has given me much more of a reason to buy ATI next time; if the Linux support is up to scratch at the time, I may jump ship. Glad to hear you're repeating with a faster CPU. Now if you also overclock both cards under water and use the 3GB 580s and ATI still beats Nvidia, then the fanboys have nothing left to whine about.

Clearly nVidia derives greater benefit from the lower temps provided by LN2, thus a waterblock won't show its full powers. :rolleyes:

Apparently the 580s are like some kind of astral phenomenon...when Jupiter is in the house of Venus and there is a full moon, its true power will be revealed!
 
As an owner of an SLI 580 system, I can assure you that TWO 580s are enough to drive most games at 5760x1200, let alone three. 2560x1600 is simply not an issue for top-end multi-GPU setups; it would be a waste.

They get crap framerates with 3 cards. Why would yours be any better with 2? The phrase "most games" is misleading because most games are crappy console ports that don't put a dent in a good GPU. Graphically intensive games are another story. Framerates should only dip below 60 on rare occasions yet their test of 3 cards shows that most of the games are spending most of the time in sub-60 territory. Just because you've convinced yourself that bad framerates are acceptable doesn't make it so.

2560x1600 is an issue for multi-GPU setups in the most graphically intense games. I have a pair of 480s with an overclock that probably puts them close to stock 580s, and there are tons of games that can dip below 60fps. I'd consider 80-90fps a good baseline average to ensure vsync is rarely going to be broken, and triple monitor isn't even remotely close to that even with three of the best cards.
 
35% usage of one core, or 35% usage across all cores? Assuming you mean the latter, with 8 threads on a 920 you may still be bottlenecked because the game isn't written to take advantage of all the threads, no? Just because you have spare threads doesn't necessarily mean that a clock increase won't help.

Surprisingly, I find CPU usage to be spread pretty well over the cores.

I wasn't expecting this myself either.
 
They get crap framerates with 3 cards. Why would yours be any better with 2? The phrase "most games" is misleading because most games are crappy console ports that don't put a dent in a good GPU. Graphically intensive games are another story. Framerates should only dip below 60 on rare occasions yet their test of 3 cards shows that most of the games are spending most of the time in sub-60 territory. Just because you've convinced yourself that bad framerates are acceptable doesn't make it so.

2560x1600 is an issue for multi-GPU setups in the most graphically intense games. I have a pair of 480s with an overclock that probably puts them close to stock 580s, and there are tons of games that can dip below 60fps. I'd consider 80-90fps a good baseline average to ensure vsync is rarely going to be broken, and triple monitor isn't even remotely close to that even with three of the best cards.
I have a 4.4GHz i7 and run my GTX 580s at 890MHz core, so I don't think my system is exactly comparable to your 480s. Also, 60fps or 40fps is irrelevant in the majority of games outside of FPS titles. Do you need 80-90fps, or even 60fps, in Dragon Age, Civ 5, Dirt 2, etc.? No.

The reality of multi-monitor gaming right now is that you are not going to be able to max out the most intensive games with all settings cranked without 3-4 cards, and even then, getting 60fps is not a guarantee. 2560x1600, however, is not the same story at all.

I don't think the problem is that I've convinced myself that "bad framerates are acceptable" so much as that you've convinced yourself that 80fps is required to enjoy a game. Newsflash: you aren't a pro gamer; you don't need 200fps to play online FPS games.
 
Newsflash: you aren't a pro gamer; you don't need 200fps to play online FPS games.

Nope. No one (pro or not) needs 200fps to play any game.

I personally prefer my minimum framerates in FPS games to be about 60fps. When this isn't possible, I deal with less.

In engines that have the function, I usually cap the framerate at 120fps, as this stops the video card from generating unwanted heat in simple scenes by rendering far more frames than I need. Also, being an even multiple of my monitor's refresh rate (60Hz), it results in less tearing without having to turn on vsync and be stuck at 60.
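
The cap itself is just a per-frame time budget. A hypothetical sketch of the idea in Python (not any particular engine's limiter):

Code:
import time

TARGET_FPS = 120            # an even multiple of a 60Hz refresh rate
FRAME_BUDGET = 1.0 / TARGET_FPS

def render_frame():
    pass                    # stand-in for the engine's actual render call

for _ in range(600):        # a few seconds' worth of frames
    start = time.perf_counter()
    render_frame()
    # Sleep off whatever remains of this frame's budget so the GPU
    # never renders faster than the cap.
    remaining = FRAME_BUDGET - (time.perf_counter() - start)
    if remaining > 0:
        time.sleep(remaining)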

Frame rates are important in FPS titles. I found the Crysis 2 demo to feel like moving through molasses at 40fps. When I dropped from 2560x1600 to 1920x1200, the game felt light, free, and unrestricted. Visually it still looked the same, with no skipping at either setting. This is why I usually try to get a minimum of 60fps in all FPS titles.

In something like Civ 5, however, unless the frame rate is REALLY bad, it really doesn't matter. Even 15fps feels pretty decent and playable, and only skips a tiny bit when scrolling. Higher frame rates are nice, as they make the game scroll more smoothly, but the benefit is marginal.
 
Wow. Did not expect those results at all. Good job, [H] team!

Do you guys think this is a hardware limitation or a driver one? Or both?
 
I think it's the drivers to some extent, but the memory limitation on the 580s does come into play if you turn everything on, like SSAO.
 
I think it's the drivers to some extent, but the memory limitation on the 580s does come into play if you turn everything on, like SSAO.
The tests in this article were run specifically to ensure that VRAM was not a limitation. They did not exceed the 1.5GB VRAM on the 580. That was not a factor, period. Read the article :)
 
It's like people come in here to talk about the review but didn't even read it.

The VRAM limitation excuse goes on and on and on and on... even after it has been explained in detail countless times already.
 
Didn't read the review, eh? That excuse for Nvidia has been put to bed with this review.

More excuses to come, though... If performance favors AMD again, they'll simply say that the selection of games wasn't big enough :rolleyes:
 
I just got around to reading this review. I must say I highly question some of these nVidia results. How did you measure VRAM usage to make sure you were not hitting a VRAM wall? CPU limiting might have also played a significant role in the results.

I have never seen SLI scaling that bad under any circumstances. A 9.8% FPS increase going from dual 580s to triple 580s, out of a theoretical maximum of 50%, in F1 2010? Something is seriously bottlenecking there. I saw no less than a 43% performance increase in my testing going from two to three 580s (YouTube videos comparing 2x, 3x, and 4x 580s to prove it). Granted, I am using 3GB 580s, but there is definitely something bottlenecking nVidia in this review: either VRAM, serious CPU limiting, or driver issues. Just to put some of this into perspective: running F1 2010 with all settings maxed and 4x AA / 16x AF, just like your settings, using the built-in benchmark, I get virtually the same FPS as you received with Tri-SLI at a 78% higher resolution. Granted, my 580s are slightly overclocked. ;)

Then there's only a 30% increase in Metro 2033, which is known to fully utilize GPU power; I reached an almost perfect 49% scaling in my Metro 2033 tests. Then on to only an 11% increase in BF:BC2. Something is definitely awry.

Now, I once again state that AMD definitely has the price-to-performance crown, as I have always stated. I just found some of these results unusual, especially the claim that VRAM limits were not reached, as I have never seen the 69xx series keep up with my 3GB 580s in benchmarks. I am also not sure why you keep mentioning things like "This is bad news for the GTX 580 3-Way SLI folks, considering the Radeon HD 6990 wasn't even running in its OC performance mode" when the comparison is against stock-clocked 580s. You do realize that GTX 580s can overclock too?

Now I know nVidia bashing is in vogue, and sometimes rightfully so. They put too little VRAM on their flagship cards and they are definitely overpriced. I just call out unusual test results when I see them.
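
The scaling math being argued over here is easy to sanity-check. A small Python sketch with hypothetical FPS values (not [H]'s or Vega's measured data):

Code:
def scaling_gain(fps_before, fps_after):
    """Percent FPS gain from adding one more GPU."""
    return (fps_after / fps_before - 1.0) * 100.0

# Going from 2 to 3 GPUs adds 50% more hardware, so ~50% is the ceiling.
# Hypothetical numbers to illustrate the two scenarios in the post:
print(f"{scaling_gain(61.0, 67.0):.1f}%")   # ~9.8%, like the review's F1 2010 step
print(f"{scaling_gain(61.0, 87.0):.1f}%")   # ~42.6%, close to the ~43% claimed above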
 
I just got around to reading this review. I must say I highly question some of these nVidia results. How did you measure VRAM usage to make sure you were not hitting a VRAM wall? CPU limiting might have also played a significant role in the results.

Vega, can you run one of these games at the same exact settings and resolution, record it with FRAPS, and post what you get? Granted, it'll be different as it's not Brent's exact playthrough, but it'll be informative nonetheless. Try to use all the same variables as in this article: driver revision, resolution, downclocked cards. It's a lot of work, but I've seen you do more. Only one game is all I'm suggesting. BTW, your proc will have to be downclocked too. LOL. Are you up to the task? Also measure VRAM usage at these settings and be sure to post that too.
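
If Vega does record a run, the FRAPS frametimes log makes the avg/min numbers easy to pull out. A sketch assuming FRAPS' cumulative-milliseconds CSV layout and a hypothetical file name:

Code:
import csv

def fps_stats(path):
    # FRAPS "frametimes" logs one row per frame with a cumulative
    # timestamp in milliseconds (a header row, then "frame, time").
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    times = [float(row[1]) for row in rows[1:]]           # skip the header
    deltas = [b - a for a, b in zip(times, times[1:])]    # per-frame ms
    avg_fps = 1000.0 * len(deltas) / (times[-1] - times[0])
    min_fps = 1000.0 / max(deltas)                        # worst single frame
    return avg_fps, min_fps

avg_fps, min_fps = fps_stats("frametimes.csv")            # hypothetical capture
print(f"avg {avg_fps:.1f} fps, min {min_fps:.1f} fps")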
 
Vega, can you run one of these games at the same exact settings and resolution, record it with FRAPS, and post what you get? Granted, it'll be different as it's not Brent's exact playthrough, but it'll be informative nonetheless. Try to use all the same variables as in this article: driver revision, resolution, downclocked cards. It's a lot of work, but I've seen you do more. Only one game is all I'm suggesting. BTW, your proc will have to be downclocked too. LOL. Are you up to the task? Also measure VRAM usage at these settings and be sure to post that too.

I believe Kyle said he will re-run the tests with a better CPU setup. I will wait to see those results. Just screwing around, though, I set my setup to 5700x1200, 4x AA, 16x AF, max in-game settings and ran the in-game benchmark using Tri-580s. It was showing only 1430MB VRAM usage. The benchmark had an average FPS of 71, or 48% faster than the [H] Tri-SLI config. Granted, I am running a 990X @ 4.83GHz and 580s at 1020MHz. I am not sure that would make up the almost 50% performance difference, though!
 
I believe Kyle said he will re-run the tests with a better CPU setup. I will wait to see those results. Just screwing around, though, I set my setup to 5700x1200, 4x AA, 16x AF, max in-game settings and ran the in-game benchmark using Tri-580s. It was showing only 1430MB VRAM usage. The benchmark had an average FPS of 71, or 48% faster than the [H] Tri-SLI config. Granted, I am running a 990X @ 4.83GHz and 580s at 1020MHz. I am not sure that would make up the almost 50% performance difference, though!

Wow, that's very interesting. And yes, they will rerun the tests with a nice Sandy build using the Asus WS Revolution mobo and the 2600K @ around 4.7GHz. I don't understand the discrepancy in performance between your run and HardOCP's; that overclock and faster CPU shouldn't account for it, especially at such a high res (I know you probably don't think it's that high, Vega). It doesn't make sense. Something is weird for sure. I'm sure the next follow-up will shine some light on the issue.
 
At least one other user confirmed the results measured in this review (not the interpretation, though).
My personal experience with Tri-SLI in DA2 @ 5760x1080 matches up with what they are saying here. With max everything and the hi-res pack, 2x AA and SSAO off, I get framerates in the low 30s, choppy as hell, while showing at most 1480MB used in Afterburner. I guarantee you it is running out of mem regardless of what Afterburner says.
So whatever the cause is, it appears to be common to that user's system and [H]'s review system, and different on your system.
 
Fun review and thread; it does show, I think, that AMD listened to past complaints about CrossFire with this generation. Hopefully in the future a Bulldozer rig vs. Intel rig scenario can be tested. I can't see any of the review tests as being CPU limited, and I don't expect anything significant to change with the 2600K. I too am finding HardOCP's reviews uniquely great and meaningful, especially breaking things down into what really is playable and at what settings. The apples-to-apples comparison was kind of a switch, but for this review it seemed to be the right thing to do for the two comparisons.
 
I have a suspicion that VRAM readouts from apps like GPU-Z and MSI Afterburner only count the texture load in memory and not the overhead for things like SLI, triple buffering, Surround, etc. that also require a certain amount of VRAM. That would explain why I see strange behavior well before I actually hit a reported 1536MB of VRAM usage on a regular 580 (when I had them).
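
One way to cross-check the overlay readouts is to ask the driver directly. A minimal sketch using NVIDIA's NVML Python bindings (assuming the nvidia-ml-py package, tooling newer than most of this thread):

Code:
import pynvml  # from the nvidia-ml-py package

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)     # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)      # the driver's own accounting
print(f"VRAM used: {mem.used / 2**20:.0f} MiB of {mem.total / 2**20:.0f} MiB")
pynvml.nvmlShutdown()

NVML reports the driver's total framebuffer allocation, so it should include the SLI/Surround overhead an application-level counter might miss.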
 
I have a suspicion that VRAM readouts from apps like GPU-Z and MSI Afterburner only count the texture load in memory and not the overhead for things like SLI, triple buffering, Surround, etc. that also require a certain amount of VRAM. That would explain why I see strange behavior well before I actually hit a reported 1536MB of VRAM usage on a regular 580 (when I had them).

Yes, that is the case; I confirmed it when I had a GTX 280. At 996MB my fps would tank even though the cards have 1024MB. It's possible 1480MB out of the possible 1536MB or so is the max before you take an fps nosedive in SLI.
 
I just got around to reading this review. I must say I highly question some of these nVidia results. How did you measure VRAM usage to make sure you were not hitting a VRAM wall? CPU limiting might have also played a significant role in the results.

I have never seen SLI scaling that bad under any circumstances. A 9.8% FPS increase going from dual 580s to triple 580s, out of a theoretical maximum of 50%, in F1 2010? Something is seriously bottlenecking there. I saw no less than a 43% performance increase in my testing going from two to three 580s (YouTube videos comparing 2x, 3x, and 4x 580s to prove it). Granted, I am using 3GB 580s, but there is definitely something bottlenecking nVidia in this review: either VRAM, serious CPU limiting, or driver issues. Just to put some of this into perspective: running F1 2010 with all settings maxed and 4x AA / 16x AF, just like your settings, using the built-in benchmark, I get virtually the same FPS as you received with Tri-SLI at a 78% higher resolution. Granted, my 580s are slightly overclocked. ;)

I don't see how you can compare their results to yours. Last I heard, they don't use the built-in benchmarks; they pick a specific area in the game and run through it manually. So unless you can mimic what they do exactly, your results can't be compared to theirs at all.

http://enthusiast.hardocp.com/article/2008/02/11/benchmarking_benchmarks/4
 
I just got around to reading this review. I must say I highly question some of these nVidia results. How did you measure VRAM usage to make sure you were not hitting a VRAM wall? CPU limiting might have also played a significant role in the results.

I have never seen SLI scaling that bad under any circumstances. A 9.8% FPS increase going from dual 580s to triple 580s, out of a theoretical maximum of 50%, in F1 2010? Something is seriously bottlenecking there. I saw no less than a 43% performance increase in my testing going from two to three 580s (YouTube videos comparing 2x, 3x, and 4x 580s to prove it). Granted, I am using 3GB 580s, but there is definitely something bottlenecking nVidia in this review: either VRAM, serious CPU limiting, or driver issues. Just to put some of this into perspective: running F1 2010 with all settings maxed and 4x AA / 16x AF, just like your settings, using the built-in benchmark, I get virtually the same FPS as you received with Tri-SLI at a 78% higher resolution. Granted, my 580s are slightly overclocked. ;)

The problem with your benchmarks is that you used the in-game benchmarks, which both companies are known to optimize for to make their scores look better.

HardOCP uses in-game testing and real-time gaming. This is why the results are different.

They play the game... not run a benchmark.
 
The problem with your benchmarks is that you used the in-game benchmarks, which both companies are known to optimize for to make their scores look better.

HardOCP uses in-game testing and real-time gaming. This is why the results are different.

They play the game... not run a benchmark.

While I agree that actual gameplay is the best way to measure performance, I think that saying in-game benchmarks as a whole are useless and that drivers are always optimized for them is an overstatement.

I only have one of these games, Warhead, and wanted to confirm these results with it. I had never tested it on my sig rig, and much to my chagrin I'm getting very odd rendering issues with Warhead and actually WORSE performance than [H], which obviously means I have some problems, as my CPU and cards are faster than the ones [H] used.

I tested 2-way to 3-way SLI scaling when I got the 580s in November, and I think I've posted some results around here; I need to dig them up.
 
I set up triple CrossFire with 965/1425 clocks and ran through Dead City (a 5-minute run) with AAA, and got 67 fps. Very smooth, with a low of 42. I'm very interested to see the comparative benefit with the potential bottleneck removed, as it appears to hold significant influence. I ran at roughly 6000x1080 with bezel correction to (hopefully) simulate something nearer the 1200-pixel vertical resolution.
 
Clearly, AMD has been working on the drivers, and scaling on the 6xxx series is phenomenal.
Great review Kyle.
 
I don't see how you can compare their results to yours. Last I heard, they don't use the built-in benchmarks; they pick a specific area in the game and run through it manually. So unless you can mimic what they do exactly, your results can't be compared to theirs at all.

http://enthusiast.hardocp.com/article/2008/02/11/benchmarking_benchmarks/4

The problem with your benchmarks is that you used the in-game benchmarks, which both companies are known to optimize for to make their scores look better.

HardOCP uses in-game testing and real-time gaming. This is why the results are different.

They play the game... not run a benchmark.

That argument doesn't hold much water. The in-game benchmark uses graphics identical to the actual races you play, and I get virtually the same FPS across many different tracks and races as I do in the benchmark run. F1 2010 has very consistent FPS levels, as shown in their graphs, not the large swings in performance during scene changes seen in some other games. There will be slight differences in performance, but not huge ones. That said, I'd be more than happy to run the same run-through if the editors released the settings.

The point being that only attaining a 10-20% performance increase from adding a 3rd 580 is incredibly low and not replicated in many other benchmark reviews or user tests. Scaling has been much better than that ever since SLI was released. There are other factors at play.
 
I have a suspicion that VRAM readouts from apps like GPU-z and MSI Afterburner only count the texture load in memory and not any overhead for things required for SLI, triple buffering, Surround etc that also require a certain amount of VRAM. That would explain why I do see strange behavior well before I actually hit showing 1536 MB VRAM usage on a regular 580 (when I had them).

Using MSI Afterburner, I hit right at 1536MB of VRAM playing Crysis 2 in 3D on Extreme and get about 52 FPS, which is close to the max. Being right at the memory limit, if what you're saying is true, I'd expect to be getting some performance issues. Crysis 2 isn't that demanding, true, but in 3D at 5760x1080 almost everything becomes demanding.

That argument doesn't hold much water. The in-game benchmark uses graphics identical to the actual races you play, and I get virtually the same FPS across many different tracks and races as I do in the benchmark run. F1 2010 has very consistent FPS levels, as shown in their graphs, not the large swings in performance during scene changes seen in some other games. There will be slight differences in performance, but not huge ones. That said, I'd be more than happy to run the same run-through if the editors released the settings.

The point being that only attaining a 10-20% performance increase from adding a 3rd 580 is incredibly low and not replicated in many other benchmark reviews or user tests. Scaling has been much better than that ever since SLI was released. There are other factors at play.

Pretty much agree here. I almost feel like picking up F1 2010 and testing that.
 