pcie 3.0 vs 2.0

hahahaha... $2k in video cards. :eek: Wow. You got the biggest e-penis of all! :p Moe Powah 2 Ya. As for the rest of us mere mortals... from his statements, I infer the OP is not going for benchmark domination or trying to be the guy with the mighty quad-GPU setup. He seems to just want a solid, overclockable single-GPU system. Assuming I am correct, 3.0 will not give him much real-world benefit over 2.0. Sooooo, if there are some nicely equipped 2.0 mobos out there at reasonable prices (there are), he would be better off spending LESS on the mobo and MORE on the GPU. Am I wrong? (BTW, no idea why you are writing off ASUS... they make some of the best stuff out there, but hey, your $$$$, your choice. ;) )
 
All of this just reminds me of the AGP speed bumps of yesteryear. It basically seemed to take until the next speed bump before cards even needed anything close to the bandwidth the previous increase gave them. For the most part, the only tangible gain you got was from the lower latency that came along with each speed increase.

For the most part I wouldn't replace a current system just to get PCI-E 3.0, but if I were buying a new one I'd try to get one with it just to be slightly more future-proof. That is, unless the price difference was ridiculous, like when I bought my Sandy Bridge system last summer.
 
Obviously it would help for triple+ setups. I don't think the benefit would be very large for dual setups, though.

I've benchmarked my single GTX 680 running at x8 and x16 PCI-E 2.0 speeds, and when running at x8 I see around a 5-6% loss. So I figure 10-12% total if running dual cards.
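
For some context on what those percentages are fighting over, here's a quick back-of-envelope script (my own numbers, nothing from the benchmarks above) for the theoretical one-way bandwidth of each link:

Code:
# Back-of-envelope theoretical PCI-E bandwidth per direction, in GB/s.
# PCI-E 2.0: 5 GT/s per lane with 8b/10b encoding    -> 0.5 GB/s usable per lane
# PCI-E 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s usable per lane
PER_LANE_GBPS = {"2.0": 5.0 * 8 / 10 / 8, "3.0": 8.0 * 128 / 130 / 8}

for gen in ("2.0", "3.0"):
    for lanes in (8, 16):
        print(f"PCI-E {gen} x{lanes}: {PER_LANE_GBPS[gen] * lanes:.1f} GB/s")

# PCI-E 2.0 x8: 4.0 GB/s
# PCI-E 2.0 x16: 8.0 GB/s
# PCI-E 3.0 x8: 7.9 GB/s
# PCI-E 3.0 x16: 15.8 GB/s

Note that 3.0 at x8 lands almost exactly where 2.0 was at x16, which is why the x8/x8 vs x16/x16 comparisons keep coming up in this thread.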

Ya, if you are running a single monitor with 1-2 3.0 cards like a 680 or 7970, the difference between PCI-E 2.0 and 3.0 would be around 5-10% max. While that isn't staggering like the tests I did that put extreme stress on the PCI-E bus, it isn't something to dismiss either. Basically, there is no reason not to go with PCI-E 3.0 if you have a 3.0 GPU.
 
Ya, if you are running a single monitor with 1-2 3.0 cards like a 680 or 7970, the difference between PCI-E 2.0 and 3.0 would be around 5-10% max. While that isn't staggering like the tests I did that put extreme stress on the PCI-E bus, it isn't something to dismiss either. Basically, there is no reason not to go with PCI-E 3.0 if you have a 3.0 GPU.

I was hoping it was bus bandwidth or drivers, and not an architecture flaw, that was holding up the GTX 680's 3-way and 4-way SLI benchmarks. Glad to hear you got it sorted out! I guess it's time to seriously consider upgrading to X79 since I was planning on going 3-way SLI.
 
We're talking about quad SLI on release-day drivers here, with a configuration that you admit is very problematic (the PCI-E 3.0 hack).

SKYMTL said:
Funny. How does the registry key modification work for you when so many people have been reporting major issues? Did you do something different?

Callsign_Vega said:
I have major issues with it. Every time I power down my machine it won't boot back into Windows. I have to remove one of the monitors, then reboot and reconfigure my Surround setup every single time. Such a PITA; that's why it took me like 5 hours to get these benchmarks lol.
 
You rarely save money by spending money. By the time PCIe 3.0 is actually required, you will likely have upgraded to a different GPU and motherboard anyway. I wouldn't spend extra to get it.
 
Nice, glad to see someone else backing up my tests. Although the difference isn't as dramatic as mine due to the lower resolution and card count, those are still very impressive differences. :thumb:


[Image: PCI-E 2.0 vs 3.0 benchmark comparison, posted by psikeiro]
 
Well, PCIe 3.0 is the bee's knees, eh. The frame difference is real, but it's still not needed.
 
To the OP: this is the [H]ard forum, not [S]oft forum. Get the PCIe 3.0 board and be ready for whatever....
 
Nice, glad to see someone else backing up my tests. Although the difference isn't as dramatic as mine due to the lower resolution and card count, those are still very impressive differences. :thumb:


[Image: PCI-E 2.0 vs 3.0 benchmark comparison, posted by psikeiro]

I guess this means that people running two 680s in SLI or 7970s in CFX on PCIe 2.0 at 8x/8x will see an even bigger improvement moving to PCIe 3.0 at 8x/8x?

No doubt I'm gonna buy Ivy when it comes.
 
I'm sorry, but I'm not buying those results. I find it hard to believe that a GTX 680 saturates a PCI-E 2.0 16x slot. I would much rather see results on a stable platform.
 
I'm sorry, but I'm not buying those results. I find it hard to believe that a GTX 680 saturates a PCI-E 2.0 16x slot. I would much rather see results on a stable platform.

It doesn't show one card making a big difference, but multiple cards (which will run at 16x/8x on most platforms other than the 990FX and X79) show a difference.
 
It doesn't show one card making a big difference, but multiple cards (which will run at 16x/8x on most platforms other than the 990FX and X79) show a difference.

That much of a difference between 2.0 16x and 3.0 16x? I don't see that much of a difference even between 2.0 @ 16x and 2.0 @ 8x. Even at a lower res it should still show a limitation.

The fact that this hack is known to cause issues kind of makes me doubt the results.

Edit: I just saw this AnandTech article. Those results might be in the ballpark.
 
I'm sorry, but I'm not buying those results. I find it hard to believe that a GTX 680 saturates a PCI-E 2.0 16x slot. I would much rather see results on a stable platform.

A GTX680 doesn't saturate a 2.0 16x slot. (Although it does quite easily saturate 2.0 @ 8x)


Four of them in SLI having to talk across PCI-E (and the top connectors) would more than use all the available bandwidth, though. Keep in mind that to even run 'true' quad SLI (four physical cards, not two dual-GPU cards), you are forced to run the majority of the cards at x8 electrically anyway.

Of course 99% of the people out there aren't running those insane resolutions. Note how at a 'normal' resolution of 1080p it didn't make a difference.
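
To put some rough numbers on why surround plus multi-GPU leans on the bus so much harder, here's my own back-of-envelope sketch, assuming a good chunk of the finished frames ends up crossing PCI-E rather than the bridge connector at those resolutions:

Code:
# Rough size of one finished frame that has to move between cards.
# Assumption: at surround resolutions a good chunk of this traffic
# ends up on PCI-E instead of the SLI bridge.
width, height, bytes_per_pixel = 5760, 1080, 4   # 3x 1080p Surround, 32-bit color
frame_gb = width * height * bytes_per_pixel / 1e9

for fps in (60, 120):
    print(f"{fps} fps: {frame_gb * fps:.1f} GB/s just moving finished frames")

# 60 fps: 1.5 GB/s just moving finished frames
# 120 fps: 3.0 GB/s just moving finished frames

Against the ~4 GB/s a 2.0 x8 link gives per direction, with textures and geometry on the same link, it's not hard to see why the gap opens up at surround resolutions and mostly vanishes at plain 1080p.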
 
A GTX680 doesn't saturate a 2.0 16x slot. (Although it does quite easily saturate 2.0 @ 8x)

That's exactly what that graph above shows. That AnandTech article really makes me think those results might be accurate.
 
On 1155 with SLI or CrossFire at 8x/8x? I'm not so sure anymore.

I guess we'll find out for sure when Ivy is released in a couple of weeks. That'll be native PCIe 3.0 (not the X79 hack-job) so it should be easy to get some accurate benchmarks.
 
I guess we'll find out for sure when Ivy is released in a couple of weeks. That'll be native PCIe 3.0 (not the X79 hack-job) so it should be easy to get some accurate benchmarks.

Yeah, that's what I'm really waiting to see. I'm sure someone will do a solid comparison.
 
There will be zero difference when IB launches. X79 and SB-E are native 3.0 and work just fine at 3.0. As a matter of fact, X79/SB-E is the premier 3.0 platform, since it has 40 PCI-E 3.0 lanes versus Z77/IB's 16. The benchmarks I have posted are completely accurate.
 
That registry hack seems to be very problematic for most people. Can you actually get away with using that 24/7?
 
Yeah, everyone HERE KNOWS that PCIe 3.0 is just a marketing gimmick to get you to buy a flashy new mobo, there's NO tangible benchmarking-chart performance boost, and that's all that matters to THIS crowd. EVERYBODY... 'cept me and 2 other dudes :)
 
There will be zero difference when IB launches. X79 and SB-E are native 3.0 and work just fine at 3.0. As a matter of fact, X79/SB-E is the premier 3.0 platform, since it has 40 PCI-E 3.0 lanes versus Z77/IB's 16. The benchmarks I have posted are completely accurate.

Except it must not work fine, or Nvidia wouldn't have disabled it through the driver. Obviously there is something up with it - I really doubt Nvidia is withholding support just to drive people off X79 or something.

I'm not saying it will make a difference, but there is more here than just Nvidia not wanting people to enjoy PCIe 3.0 on X79 motherboards.
 
Except it must not work fine, or Nvidia wouldn't have disabled it through the driver. Obviously there is something up with it - I really doubt Nvidia is withholding support just to drive people off X79 or something.

I'm not saying it will make a difference, but there is more here than just Nvidia not wanting people to enjoy PCIe 3.0 on X79 motherboards.

People in this forum have tested PCIe 2.0 vs PCIe 3.0 speeds on X79 with the 680 and have posted pretty impressive benefits from PCIe 3.0. It may have been an oversight on their part. I sure hope they address this with the next driver version.
 
It's simply because 3.0 wasn't officially put into the Intel white papers at launch, since there were no 3.0 GPUs in existence to test at that time. On the NVIDIA forums they said they are already working on the fix. For the clueless out there, we couldn't care less what PCI-E version you use, but for high-end users there's a massive difference going with 3.0. Simple as that. Adding a simple registry key isn't exactly what I would call a hack or difficult to do.
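
For anyone trying the registry change, you don't have to guess whether it actually took: GPU-Z will show the negotiated link, and NVIDIA's nvidia-smi tool can report it too. A minimal sketch of that check (the query field names here are from memory, so verify them against nvidia-smi --help-query-gpu on your own system):

Code:
# Quick check that the card actually negotiated a PCI-E gen 3 link.
# Field names are an assumption from memory -- verify against
# `nvidia-smi --help-query-gpu`.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())   # e.g. "GeForce GTX 680, 3, 16"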
 
People in this forum have tested PCIe 2.0 vs PCIe 3.0 speeds on X79 with the 680 and have posted pretty impressive benefits from PCIe 3.0. It may have been an oversight on their part. I sure hope they address this with the next driver version.

I don't think it was an oversight since it was reportedly enabled in the beta drivers and removed from the WHQL ones.

It's simply because 3.0 wasn't officially put into the Intel white papers at launch, since there were no 3.0 GPUs in existence to test at that time. On the NVIDIA forums they said they are already working on the fix. For the clueless out there, we couldn't care less what PCI-E version you use, but for high-end users there's a massive difference going with 3.0. Simple as that. Adding a simple registry key isn't exactly what I would call a hack or difficult to do.

I wouldn't call 4-way GTX 680 high-end; I'd call that extreme. Regular high-end users still aren't going to see a benefit from PCIe 3.0 - it's not even clear that 2-way SLI makes a noticeable difference (beyond a few percent).

We should see all kinds of comparisons and tests once Ivy Bridge comes out.
 
It's simply because 3.0 wasn't officially put into the Intel white papers at launch, since there were no 3.0 GPUs in existence to test at that time. On the NVIDIA forums they said they are already working on the fix. For the clueless out there, we couldn't care less what PCI-E version you use, but for high-end users there's a massive difference going with 3.0. Simple as that. Adding a simple registry key isn't exactly what I would call a hack or difficult to do.

Most people, including yourself, seem to have issues with that "simple registry edit". Do you use PCI-E 3.0 for 24/7 usage?
 
Tests will come when the 680 officially supports 3.0. I've already seen some other people testing the difference between 680 SLI at x16/x16 PCI-E 3.0 and x8/x8 PCI-E 2.0, and the difference was minimal at 1920x1080 and very significant across three 1080p monitors.
 
Most people, including yourself, seem to have issues with that "simple registry edit". Do you use PCI-E 3.0 for 24/7 usage?

Yes, mine works just fine at 3.0, as you can see in the video I posted. I've found that my Surround setup resetting after a cold boot is not 3.0-related, since it happens under 2.0 also. It's a driver problem.
 