GTX 480 SLI PCIe Bandwidth Perf. - x16/x16 vs. x8/x8 @ [H]

FrgMstr
Just Plain Mean · Staff member
Joined: May 18, 1997 · Messages: 55,701
GTX 480 SLI PCIe Bandwidth Perf. - x16/x16 vs. x8/x8 - In our continuing coverage of multi-GPU configurations with varying PCIe bandwidth, we put x16/x16 and x8/x8 PCIe to the test. Does having less PCIe bandwidth make a difference in gaming? We know that a "lesser" motherboard can save you money. Using GeForce GTX 480 SLI, we show you the real-world differences.
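For reference, the raw bandwidth at stake is easy to put numbers on. A rough sketch using the usual PCIe 2.0 per-lane figure (actual throughput is somewhat lower after protocol overhead):

[code]
# Rough per-direction PCIe 2.0 bandwidth: 5 GT/s per lane with
# 8b/10b encoding leaves ~500 MB/s of usable payload per lane.
PER_LANE_MB_S = 500

for lanes in (16, 8, 4):
    gb_s = lanes * PER_LANE_MB_S / 1000
    print(f"x{lanes} slot: ~{gb_s:.1f} GB/s per direction")

# x16 slot: ~8.0 GB/s per direction
# x8 slot: ~4.0 GB/s per direction
# x4 slot: ~2.0 GB/s per direction
[/code]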
 
Awesome news. Thanks for the review.

I just bought a Gigabyte GA-890GPA-UD3H, which runs x8/x8 when in CrossFire, and was really hoping there wouldn't be a performance decrease. I'll be running 2x 5770, powering a single monitor at 1920x1080. According to your review, it'll run great, without a performance decrease.
 
I'm kinda surprised by the results. I was expecting x8/x8 to be a lot worse!
 
Thanks for the article. This is the kind of info that every gamer and enthusiast out there should know and understand.
 
Again, thank you BIG TIME for the info.
Just like last week, this is something that has needed to be tested and posted on the front page for some time.
 
Excellent article. I had always wondered about this and just always went big on the mobo. I now see I could have saved myself some money. Thank you [H].
 
Awesome, thanks for putting this one out!

For those who have other bills to worry about, this means money for a gaming system can be saved by not hunting down a board with x16/x16 if you were going to pair two 5770s or GTX 460s in SLI or CrossFire.
 
I made a comment in the last pcie/sli bandwidth review regarding the impact of CPU speed on such benchmarks. It got overlooked.

Simple question: would an overclock on the CPU affect the results? 3.6 GHz seems pretty low. A 4.0 GHz or higher benchmark might show more of a difference and would be within the reach of most OCPers.

Processor speed affects single-card configurations significantly in relation to PCIe bandwidth. I imagine the same logic could be applied to SLI configurations.
 
I made a comment in the last pcie/sli bandwidth review regarding the impact of CPU speed on such benchmarks. It got overlooked.

Simple question: would an overclock on the CPU affect the results? 3.6 GHz seems pretty low. A 4.0 GHz or higher benchmark might show more of a difference and would be within the reach of most OCPers.

Processor speed affects single-card configurations significantly in relation to PCIe bandwidth. I imagine the same logic could be applied to SLI configurations.

Been there, done that.

http://hardocp.com/article/2009/05/19/real_world_gameplay_cpu_scaling/

And for what it is worth, neither of these articles has anything to do with benchmarks. You need to go to other sites for that information.
 
Awesome news. Thanks for the review.

I just bought a Gigabyte GA-890GPA-UD3H, which runs x8/x8 when in CrossFire, and was really hoping there wouldn't be a performance decrease. I'll be running 2x 5770, powering a single monitor at 1920x1080. According to your review, it'll run great, without a performance decrease.

Not trying to be an ass, but what is the purpose of CrossFire when running at 1920x1080? It seems like overkill.
 
Not trying to be an ass, but what is the purpose of CrossFire when running at 1920x1080? It seems like overkill.

It'd be overkill if I was crossfiring a couple of 5870s, but 2x 5770 is equivalent to about a 5870; and if you want to run Metro 2033, with high settings, at 1920x1080, you need more than a single 5770.

2x 5770, both OC'd to 950/1350, cost me $270 and perform almost spot-on with a 5870 (as long as ATI drivers support CrossFire for that game :p).

1x 5870 costs ~$370


**I owned one 5770 for a while before I bought a second one. It made more sense to spend $130 on another 5770 than to sell the one I already owned and then buy a 5870.
 
Please forgive my use of the word benchmark, Kyle. I use it as a general term for all tests on hardware. I will just use the word tests here on HardOCP from now on.

Thanks for replying, though I don't think I expressed my concerns well enough. My concern is that CPU speed will have an effect on SLI configurations in x8/x8 or x16/x8 mode. In short, who knows at what CPU speed a pair of GTX 480 cards will saturate the x8/x8 configuration. By pushing the CPU speed higher, we would improve the chances of finding this limit and potential bottleneck.

Hypothetically, what if the point at which an x8/x8 configuration becomes a bottleneck occurs when a CPU is clocked at 3.8 or 4.0 GHz? I think that information might be useful for new system builders who are considering whether to go socket 1156 or 1366. Who knows, maybe that point is at 6 GHz and thus would be a non-issue for normal gaming use. But I do think CPU speed, in relation to how it affects PCIe saturation in an SLI configuration, is relevant to your tests...

I doubt it. For one, he was using NVIDIA cards, and two, at that point you're going to be GPU bound, not CPU bound. I would imagine that higher CPU clocks would increase performance, but I find it hard to believe you're going to do much in the way of turning up settings at 7 MP.
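To put rough numbers on that, here is a purely hypothetical saturation estimate; the per-frame traffic figure is an assumption for illustration, not a measurement:

[code]
# Hypothetical: if the CPU feeds the GPUs N MB of commands,
# textures, and geometry per frame, the slot caps the frame rate
# at bandwidth / N. The 20 MB figure is assumed for illustration
# only; real per-frame traffic varies wildly by game.
X8_BANDWIDTH_MB_S = 4000   # PCIe 2.0 x8, per direction
TRAFFIC_PER_FRAME_MB = 20  # illustrative assumption

max_fps_on_bus = X8_BANDWIDTH_MB_S / TRAFFIC_PER_FRAME_MB
print(f"x8 link saturates around {max_fps_on_bus:.0f} fps")  # ~200 fps

# If the GPUs already top out near 60 fps at 2560x1600, even a big
# CPU overclock can't push traffic anywhere near that bus ceiling.
[/code]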
 
Thanks for the review. I've been wondering what impact, if any, there would be using this P55 motherboard if I re-entered multi-GPU.
 
Some games are more CPU bound than others. I am sure an even greater difference would have been apparent with a game like MW2, which is notoriously CPU bound.
 
Actually, this is pretty cool for one interesting reason.

All of us PC enthusiasts spend a lot of time and money, and honestly, the last thing we want to hear is that something isn't really needed, like the latest and greatest. For example, dual x16 PCIe slots: it was just proven we don't need them.

What else do we not need?

Certainly, no one out there is talking. It's honestly a question we bury and ignore. We lie to ourselves all the time. I know for a fact that I've been splitting hairs for years. Cutting edge costs a TON of money.

I've been building computers since '88 or '89, and I am finally coming full circle.

I just sold a top-of-the-line i7 920 @ 4 GHz, 6 GB of RAM, 6 TB of storage, CrossFire 5870s, dual SSDs in RAID, etc., etc.

And guess what: my new MSI NF980-G65 AMD SLI board, my AMD 1055T running at 4.1 GHz, 4 GB of dual-channel memory, my 120 GB SSD, and my ASUS GTX 460s in SLI are just as fast as what I had, with a hell of a lot less money spent.

When it comes to real-world performance, the differences between platforms and components often don't justify the cost, or just aren't needed.

I would love for Kyle and company to flip the hardware enthusiast sites on their ear and start debunking a lot of the so-called performance gains/specs we have been seeing for years.

Honestly, don't stop at x16/x8 real-world performance evaluations.

I would like to see the professionals at HardOCP tell their audience exactly what the performance differences are, if any, between DDR2 dual channel, DDR3 dual channel, DDR3 triple channel, etc. Real-world performance, as in games, OS functions, anything measurable.
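For reference, the theoretical peaks for that comparison are easy to work out. A quick sketch using rated transfer speeds (real-world gaming differences are usually far smaller than the peaks suggest):

[code]
# Peak theoretical memory bandwidth: transfer rate (MT/s) x 8 bytes
# per 64-bit channel x number of channels.
configs = {
    "DDR2-800 dual channel":    (800, 2),
    "DDR3-1333 dual channel":   (1333, 2),
    "DDR3-1333 triple channel": (1333, 3),
}

for name, (mt_s, channels) in configs.items():
    gb_s = mt_s * 8 * channels / 1000
    print(f"{name}: ~{gb_s:.1f} GB/s peak")

# DDR2-800 dual channel: ~12.8 GB/s peak
# DDR3-1333 dual channel: ~21.3 GB/s peak
# DDR3-1333 triple channel: ~32.0 GB/s peak
[/code]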

I know for a fact that HardOCP could build a system at 1/2 to 3/4 the cost of most of yours and outperform you guys. I can do it; I just did it.

If they did something like that, it would be the most exciting editorial piece I've seen on computer hardware in a long time.

It really is the one dark corner none of us really explore, since it goes against our enthusiast roots, so to speak.
 
Thanks for the review; this is information I needed to know. <- Wow, that sounded generic LOL.
 
Would running the highest FSAA and AF cause any saturation of the PCIe bandwidth, or is that handled on the card itself without bogging down the PCIe bus? For instance, if you're running 2560x1600 with the highest settings, would x16/x16, x16/x8, or x8/x8 show any severe fluctuations?
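My understanding (which could be wrong) is that the buffers FSAA inflates live in on-card VRAM and are rendered and resolved there, rather than being streamed across the slot every frame, so the extra load lands on the card's memory bus rather than on PCIe. A small sketch of the sizes involved:

[code]
# Size of a 2560x1600 color buffer at 4 bytes per pixel, with and
# without 4x MSAA sample storage. Both live in on-card VRAM.
width, height, bytes_per_pixel = 2560, 1600, 4

base_mb = width * height * bytes_per_pixel / 2**20
print(f"Plain color buffer:   ~{base_mb:.1f} MB")       # ~15.6 MB
print(f"With 4x MSAA samples: ~{4 * base_mb:.1f} MB")   # ~62.5 MB
[/code]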
 
In short, who knows at what CPU speed a pair of GTX 480 cards will saturate the x8/x8 configuration.

This article is not about CPU speed impact on frame rates. I appreciate your concern, but let's keep this thread on topic please.
 
Some games are more CPU bound than others. I am sure an even greater difference would have been apparent with a game like MW2, which is notoriously CPU bound.

We try to stay away from CPU-bound games for GPU testing... If we were testing CPU speed impact on games, which I linked you to earlier, then that would make more sense.

Back on topic please.
 
I would love for Kyle and company to flip the hardware enthusiast sites on their ear and start debunking a lot of the so-called performance gains/specs we have been seeing for years.

I don't really see where we have said differently, if you are specifically talking about gaming alone. If you read the above-mentioned CPU game scaling article, you will see what we told you a year ago. You are just now hearing it, though, for some reason. :) Now, when you get away from a PURE gaming machine, the performance metric changes greatly. But if you just want to surf the web and play games, AMD boxes and the i3 and i5 have been where it is at.
 
Any chance of an x4/x4 review for us poor folks who are still on x8/x8 PCIe 1.x?
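For the record, the raw numbers line up: a PCIe 1.x lane carries roughly half what a 2.0 lane does, so a 1.x x8 link and a 2.0 x4 link hit the same peak. A quick check:

[code]
# Per-direction peaks: PCIe 1.x ~250 MB/s/lane, PCIe 2.0 ~500 MB/s/lane.
print(8 * 250 / 1000, "GB/s  (PCIe 1.x x8)")  # 2.0 GB/s
print(4 * 500 / 1000, "GB/s  (PCIe 2.0 x4)")  # 2.0 GB/s
[/code]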
 
It'd be overkill if I was crossfiring a couple of 5870s, but 2x 5770 is equivalent to about a 5870; and if you want to run Metro 2033, with high settings, at 1920x1080, you need more than a single 5770.

2x 5770, both OC'd to 950/1350, cost me $270 and perform almost spot-on with a 5870 (as long as ATI drivers support CrossFire for that game :p).

1x 5870 costs ~$370

**I owned one 5770 for a while before I bought a second one. It made more sense to spend $130 on another 5770 than to sell the one I already owned and then buy a 5870.

Thanks for setting me straight! :)


Also, great review. It's very useful to have insight into the foundations of your build.
 
...It really is the one dark corner none of us really explore, since it goes against our enthusiast roots, so to speak.

No, it's not a dark corner, and it's a subject that gets discussed here on a regular basis.
You just have to do a bit of reading and a bit of thinking.

[H] has always been up front about the nature of bleeding-edge tech, especially when it comes to performance per $.

Oddly enough, they expect YOU to read the articles and discussions and extrapolate what price point best suits your needs.

Between the reviews and the numerous GPU and CPU scaling articles, you have enough information to decide for yourself whether or not you want to spend an extra $150 to get 10% higher performance.
Whether it's CPU, GPU, chipset or hard drive, there is plenty of information to allow you to choose how to best spend your money.
That's the whole point of archiving every single article/review for the past 10+ years.
 
Any chance of an x4/x4 review for us poor folks who are still on x8/x8 PCIe 1.x?

Having only personal opinion to state here: it is my belief that x4 is just no good anymore.

I run two 4870X2s in QuadFire. I was forced to separate the cards in the case as they got too hot side by side, thus I'm running x16/x4.

Frankly, it doesn't work. My gaming experience has been reduced with this config. I can consistently show that if I disable CrossFireX (quad) and just run one X2 (dual GPU), I get noticeable gameplay improvements. Re-enable CrossFireX and the performance drops.

So far the only sensible upgrade I can find is the ASUS Rampage Extreme, as this has the x16/x8 config I need to separate my cards and improve my bandwidth.

Thanks to HardOCP for the reviews!!
 
This is exactly what I was looking for. I was looking at motherboards for an upgrade and passed on anything less than x16/x16 for dual-GPU SLI. Now that I know x8/x8 works just fine, I will save money and buy the cheaper (mid-range) motherboard.
 
I knew this months ago :D As the video card drivers (and motherboard BIOSes) got better, x8/x8 matched x16/x16.
 
This is a bad experiment design that doesn't support the conclusion:

If you are running on a 30" display at 2560x1600 or below, an x8/x8 SLI or CFX configuration will perform the same as a x16/x8 or x16/x16 configuration. The only time that you should even be slightly concerned about running at x8/x8 is when you move up to a multiple display setup.

All the test shows is that the GTX 480 isn't usually saturating an x8 slot. It's ridiculous to assume this applies to CFX configurations, particularly when a 5970 probably needs a lot more bandwidth than a GTX 480. In fact, totally ignoring the issue of dual-GPU cards in an article like this is pretty sloppy.
 
But I do think CPU speed, in relation to how it affects PCIe saturation in an SLI configuration, is relevant to your tests...

Nope. He's testing at very high resolutions and multi-display. At those resolutions you are almost 100% GPU bound. Going from 3.6 GHz even up to 4.6 GHz would get you almost nothing, and it certainly wouldn't impact these specific tests.


All the test shows is that the GTX 480 isn't usually saturating an x8 slot. It's ridiculous to assume this applies to CFX configurations, particularly when a 5970 probably needs a lot more bandwidth than a GTX 480. In fact, totally ignoring the issue of dual-GPU cards in an article like this is pretty sloppy.

While I would like to see 5970 numbers as well, I don't think it would really play out much differently. In some games you'd get a speed hit; in others you wouldn't. One review of the 5970 I saw a while back, I think at HardwareCanucks, did testing at x8 and found that only one or two games exhibited a drop. I think Far Cry 2 was particularly sensitive to PCI Express speed.
 
Fine work as always gentlemen.

Am I alone in thinking that the next progression of this series of articles would be P45 SLI hacks and their performance?

There are plenty of people with P45 motherboards and fast C2Qs for whom the only thing keeping them from buying another NVIDIA GPU is the locked-down drivers...
 
Thanks for the review. Like the other one, it came out about where I was thinking it was going to go. This sort of review will help people save money and make fully informed decisions. Not sure what kind of articles you have up next, but I'm definitely looking forward to them.
 
This is a bad experiment design that doesn't support the conclusion:



All the test shows is that the GTX 480 isn't usually saturating an x8 slot. It's ridiculous to assume this applies to CFX configurations, particularly when a 5970 probably needs a lot more bandwidth than a GTX 480. In fact, totally ignoring the issue of dual-GPU cards in an article like this is pretty sloppy.


Your thoughts are noted. And thanks, "ridiculous" is the nicest thing I have been called all day.
 