Please help - upgrade advice - worried about PCIe bottleneck.

Built this rig in 2011:

Mobo: ASRock Z68 Extreme4
CPU: i5-2500K (stable @ 4.5GHz)
RAM: CORSAIR Vengeance 8GB (2 x 4GB) DDR3 1600 (PC3 12800)
GPU: EVGA GTX 570 HD
PSU: ENERMAX MODU87+ 900W (Peak 990W) 80+ Gold
Monitor: 25" ASUS VE258Q 1080p

Thinking about moving from 1080p to a 120Hz 1440p single monitor in 2013 (27" QNIX). I will be playing a lot of BF4.

I am thinking about purchasing a GTX 880 when it arrives in 2014, and possibly a second one for SLI down the road.

I'm worried specifically about the motherboard (http://www.asrock.com/mb/overview.asp?Model=Z68 Extreme4...) limiting the bandwidth of the GPUs, not so much with a single GPU but if I purchased a second GPU for SLI. The mobo's PCIe specs are as follows:

3 x PCI Express 2.0 x16 slots (PCIE2/PCIE4: single at x16 or dual at x8/x8 mode; PCIE5: x4 mode)

So, I realize that with a single GPU, PCIe 2.0 x16 will only reduce my frames by 1 FPS or so, which is negligible, but with 2x GTX 880s it looks like the bandwidth available to each card drops to x8/x8 (at PCIe 2.0).

My main question is: with an SLI setup, how many frames will the outdated mobo steal from me vs. a newer one? Will it be worth it to upgrade the mobo/CPU? Or even just buy a 'newer', more fully featured LGA 1155 mobo and plug in my current CPU? From what I've researched, I'm looking at about a 4-6% reduction in my likely 'max' performance (vs. either 2.0 x16/x16 or 3.0 x8/x8).

One other question: my current CPU (Sandy Bridge i5-2500K) doesn't support PCIe 3.0. Assuming that I drop $100 to upgrade just the mobo (because I'm not worried about the CPU becoming an issue for a long time), will my CPU just treat a PCIe 3.0 slot like a 2.0 slot (because Sandy Bridge doesn't support 3.0), or will it not be able to use the 3.0 slot at all?

(Also, I realize that SLI GTX 880s is likely overkill for a single monitor, even 1440p @ 120Hz, but I'm trying to future-proof for at least a few more years.)
 
Anything said would just be theoretical since the GTX 880 isn't even out yet, but if I were to bet, x8/x8 SLI is fine. Some people make this out to be a huge issue when there are no benchmarks to support an appreciable decrease in performance going from x16 to x8.
 
The GTX 780 is still a new card on the market, so why start thinking about a future GTX 880? O_O Also, for the future I would not pair that CPU with 780 SLI; it simply won't have enough power to feed those 780s. Thinking about the market right now, I would wait for the new AMD 9000 series launch to see how well they perform versus the current 780/Titan, how they are priced, and whether they force a price drop from the green team. About the dual 2.0 x8 lanes: you maybe, and only MAYBE, will notice a decrease of about 1-3 FPS at that resolution, so don't worry about that.
 
The GTX 780 is king of the hill right now, at least until we see what AMD has up its sleeve, but they are expensive. I just bought a GTX 770, coming from a 670, and will go SLI before year end. This way, I am enjoying my games now at max settings, and when BF4 comes out in November I can go SLI and be sitting pretty for at least another year.
 
PCIe 1.0 @ x16 is still pretty good for just about any graphics card out there, and that's pretty much what two GPUs running in SLI @ PCIe 2.0 x8 would be. I don't think that is really something to be concerned about so long as you aren't going below PCIe 2.0 x8. By the time PCIe 2.0 x8 becomes a significant problem it will be many years down the road, mobos will be pushing 4.0 (possibly 5.0 with how fast 4.0 is due out), and 3.0 boards will have saturated the market.
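
Rough numbers, assuming the commonly quoted effective per-lane rates (~250 MB/s for 1.0, ~500 MB/s for 2.0, ~985 MB/s for 3.0), just to show where that equivalence comes from; treat this as a sketch, not gospel:

```python
# Approximate effective per-lane throughput in MB/s (assumed nominal figures).
PER_LANE_MBPS = {"1.0": 250, "2.0": 500, "3.0": 985}

def slot_bandwidth(gen, lanes):
    """Total one-direction slot bandwidth in GB/s (gigabytes per second)."""
    return PER_LANE_MBPS[gen] * lanes / 1000

print(slot_bandwidth("1.0", 16))  # 4.0  GB/s
print(slot_bandwidth("2.0", 8))   # 4.0  GB/s -> same ballpark as 1.0 x16
print(slot_bandwidth("2.0", 16))  # 8.0  GB/s
print(slot_bandwidth("3.0", 8))   # ~7.9 GB/s -> roughly 2.0 x16
```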

The difference with a single graphics card isn't even worth discussing (x16 vs x8); it's well within the margin of error, it's that small. Going from x16 to x8/x8 you might drop maybe 5% performance, but you gain so much from SLI that it more than makes up for it.

Now, with PCIe 3.0 there is a clear difference that many people aren't mentioning, and I think it plays a bigger role. The encoding scheme is entirely different from previous generations (128b/130b instead of 8b/10b), which is why you might see even a couple of frames more going from 2.0 x16 to 3.0 x16. The efficiency gained by changing how data is encoded across the lanes is probably what gives the slight boost, not just the raw bandwidth. Most GPU processing is done within the GPU (hence video cards) because it can process data much closer to the source/destination, and the on-card memory bandwidth is insane compared to travelling the dial-up-speed PCIe lanes.
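
A back-of-the-envelope sketch of what that encoding change means per lane, assuming the standard line rates (5 GT/s for 2.0, 8 GT/s for 3.0) and the 8b/10b vs. 128b/130b schemes:

```python
# Effective per-lane throughput = line rate x fraction of bits that are payload.
def effective_mbps_per_lane(gt_per_s, payload_bits, total_bits):
    raw_bits_per_s = gt_per_s * 1e9                      # 1 bit per transfer per lane
    payload_bits_per_s = raw_bits_per_s * payload_bits / total_bits
    return payload_bits_per_s / 8 / 1e6                  # bits -> bytes -> MB/s

print(effective_mbps_per_lane(5, 8, 10))     # ~500 MB/s per PCIe 2.0 lane (20% encoding overhead)
print(effective_mbps_per_lane(8, 128, 130))  # ~985 MB/s per PCIe 3.0 lane (~1.5% overhead)
```

So 3.0 is nearly 2x per lane, and only part of that comes from the higher clock; the rest is the leaner encoding.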
 
Thanks for the replies guys.

@Araxie - what do you mean by not pairing that CPU with 780 SLI? Are you saying the i5-2500K will bottleneck 2x 780s?
 
Worst-case scenario, sell your CPU and drop in a 3570K and your board is PCIe 3.0, lol. Would probably cost you like $50-$100 after selling yours.

Edit: Man, why didn't you buy the Extreme4 Gen3 instead of just the Extreme4? Looks like only the Gen3 Z68 boards support PCIe 3.0. Not that it matters; if you can afford an 880 when it comes out, you'll be fine selling your board and CPU and doing a couple-hundred-dollar platform upgrade. Even at 5760x1080 in SLI, 2.0 isn't bottlenecking anything as of yet. It's when you get into tri/quad SLI at that res that it starts to show. At the res you're talking about, with 2 cards you'll probably be fine. It'll be that CPU holding you back by the time the 880 rolls around.
 
Also, for the future I would not pair that CPU with 780 SLI; it simply won't have enough power to feed those 780s.


What crack are you smoking? A 2500K @ 4.5GHz is way more than enough to feed 780 SLI.
 
As far as I know the difference will be minimal unless you're running a large resolution (surround) WITH multiple GPUs. Vega's testing showed a difference of around 20 FPS with 3-4 GPUs and surround.
 
What crack are you smoking? A 2500K @ 4.5GHz is way more than enough to feed 780 SLI.

Not even close. The 3770K isn't even enough. Go play BF3 on a full 64-player server; no way GPU usage will be pegged, not at 1080p or 1440p. Maybe in surround, as it's WAY more graphics-dependent. It would bottleneck Crysis 3 MP as well at 1080p, probably not at 1440p. The 2500K bottlenecked the living shit out of my 7950 CF and 670 SLI in BF3: min FPS in the 50s and shit GPU usage. The 3770K made a big diff and the 3930K made a REALLY big diff in BF3.
 
Not even close. The 3770K isn't even enough. Go play BF3 on a full 64-player server; no way GPU usage will be pegged, not at 1080p or 1440p. Maybe in surround, as it's WAY more graphics-dependent. It would bottleneck Crysis 3 MP as well at 1080p, probably not at 1440p. The 2500K bottlenecked the living shit out of my 7950 CF and 670 SLI in BF3: min FPS in the 50s and shit GPU usage. The 3770K made a big diff and the 3930K made a REALLY big diff in BF3.


Regarding a single GPU, 2500K @ 4.5GHz vs. 3770K is irrelevant. You're honestly high if you think it's a bottleneck for the OP with a single GPU.

Crysis 3 is its own thing; bringing that up is not even worth discussing. The developers pride themselves on creating an engine that brings any video card to its knees, practically requiring SLI just to play it. If you're trying to play that game at MAX for both SP and MP on a single card, or even an SLI setup, and expect perfection, that's not going to be fun to watch.

Typically it's the opposite: lower resolutions bottleneck CPUs the most. I don't think 1080p is at that level yet, but there is still some solid evidence to show it can lose you a few frames against higher-end parts. The problem these days is there is NOTHING higher-end with significant gains that a previous generation OC'ed won't match, unless you move up to a 6-core part. Anything above 1080p is just going to start sucking for a single or SLI/CF GPU setup; that's where memory and raw GPU horsepower count the most. It isn't the CPU that's most limiting, it's the lack of 'oomph' the video card is capable of. I mean, we're talking 10-15% differences between each tier of GPUs, which is literally down to a few frames in some cases. It's not as if the 670 is 10-15 FPS faster than the 660 and the 680 is 10-15 FPS faster than the 670. The gains these days are getting smaller and smaller, but games are looking prettier and prettier.

You said your 7950 CF and 670 SLI were bottlenecked in BF3 with a MINIMUM of 50 FPS? My jaw is dropping right now...

[EDIT] FYI, BF3 is a well-known CPU-intensive game (one of the few) that makes the most of clock speed and CORE COUNT (it's heavily multi-threaded). It's only natural for this specific game that a CPU might show some sort of bottleneck. One must ask: was your CPU OC'ed, and if so, by how much? Because if you didn't take advantage of an easy 4.5GHz OC on that SB, then you most certainly wasted money.
 
Thanks for the replies guys.

@Araxie - what do you mean by not pairing that CPU with 780 SLI? Are you saying the i5-2500K will bottleneck 2x 780s?

Yup, it will, at 1080p @ 120Hz.

What crack are you smoking? A 2500K @ 4.5GHz is way more than enough to feed 780 SLI.

LOL, just NO!

Not even close. The 3770K isn't even enough. Go play BF3 on a full 64-player server; no way GPU usage will be pegged, not at 1080p or 1440p. ...

+1...


Regarding a single GPU, 2500K @ 4.5GHz vs. 3770K is irrelevant. You're honestly high if you think it's a bottleneck for the OP with a single GPU. ...

Yo, read the OP's line where 1080p @ 120Hz is stated. For up to 60Hz it will be fine for single-player games, but not for 120Hz while maintaining a constant 120 FPS. Also read my lines where I said FUTURE. With more and more next-gen console games, and big multiplayer maps coded for 8 cores, do you think games will stay as they are now? Or will they start to show the Battlefield 3 and Crysis 3 symptoms, or even the sloppily coded Assassin's Creed 3 that demands more from the CPU than the GPU, or others like Far Cry 3? You will have the answer there. :rolleyes: Just think.
 
Yo, read the OP's line where 1080p @ 120Hz is stated. For up to 60Hz it will be fine for single-player games, but not for 120Hz while maintaining a constant 120 FPS. Also read my lines where I said FUTURE. With more and more next-gen console games, and big multiplayer maps coded for 8 cores, do you think games will stay as they are now? Or will they start to show the Battlefield 3 and Crysis 3 symptoms, or even the sloppily coded Assassin's Creed 3 that demands more from the CPU than the GPU, or others like Far Cry 3? You will have the answer there. :rolleyes: Just think.


Just because you have a 120Hz monitor with 1440p resolution doesn't mean you're going to be gaming at that; it's just something you're going to strive for. Games you may be running at 120Hz now; in a year your chances will be much lower when bigger and badder triple-A games come out.

Your future statement wasn't very clear, and honestly I wasn't directly replying to you anyway. My point was made above. Games will eventually start to use more cores, which is why I hinted at moving up to the $500-$1000 CPUs with 6 cores if you're going to do any sort of upgrade. Going from an i5-2500K to an i7-3770K at this moment is hardly worth it unless you really have the extra money to blow; it isn't future-proof at all. An IVB-E or SB-E chip would be.

Consoles won't start to make heavy use of their hardware for another 2 years. Fact. It happens every time, except for those console-specific titles that we all love dearly, where they only need to optimize for one platform.

Games won't stay where they are now in terms of performance demands (BF3/Crysis); the point I was making stands. The person I was responding to was using Crysis 3 as an example. Crysis 3 kicks the crap out of any hardware, SLI or not, and you can Google interviews where the developers mention their fans love their games because they cripple hardware and push things forward. Good luck getting Crysis 3 to run, SLI or not, on this or the next generation of cards at 1440p @ 120Hz. It just ain't gonna happen, with or without a kickass CPU.

Lastly, we're talking about the CPU being the limiting factor, making SLI scale less than it would with a higher-end part.

This review, based on the i7-3770K, shows GTX 780 SLI landing around the 120 FPS mark at 1440p:

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780_SLI/5.html

What you're telling me is that the i5-2500K, which at stock 3.4GHz is roughly 5-10% slower than the i7-3770K, and which pulls roughly even with a stock 3770K in almost every benchmark once OC'ed to 4.5GHz, is going to perform drastically worse and be a "bottleneck" compared to the numbers shown above? Until I see otherwise, I'm not buying it.
 
Regarding a single GPU, 2500K @ 4.5GHz vs. 3770K is irrelevant. You're honestly high if you think it's a bottleneck for the OP with a single GPU. ...

One must ask: was your CPU OC'ed, and if so, by how much? Because if you didn't take advantage of an easy 4.5GHz OC on that SB, then you most certainly wasted money.
The 2500K was at 4.7GHz, the 3770K at 4.6GHz, and the 3930K at 4.4GHz. The review posted shows average frames; it will not net you a constant 120+ FPS. Best-case scenario for the 3770K with mesh quality on low in BF3, even overclocked to 4.6GHz, is like 70+ FPS. My 3930K just stays over 100 FPS in BF3. Now, I'm talking big Conquest on a full 64-player server. The review you posted is single-player, which means nothing compared to MP performance.
 
Thanks for all the input... I think I've sorted out my plan. Again, BF4 is kind of the benchmark that I use, as it consumes most of my gaming time. I will likely never go to a multi-monitor setup, but will definitely go from 1080p @ 60Hz to 1440p @ 120Hz.

For now, I will buy a second (used) GTX 570 for $100 or so and toss it in there, hopefully extending the life of my current rig for another year. I'm hoping to play BF4 at a high/ultra mix near 60 FPS @ 1080p with SLI 570s + the 2500K @ 4.5GHz.


Next summer, I'll upgrade to a new CPU/mobo, something like an i5-4670K, and get the GTX 880 along with a 1440p/120Hz monitor. I'll likely need a second GTX 880 at some point to push frames over 100.
 
Well, hopefully next gen comes with some mainstream hex-cores. Looks like the flagship Ivy-E may be an 8-core, so 6 cores may not be out of the question for the 5770K or whatever they decide to call it, lol. Six cores help a lot when shooting for that minimum 120Hz.
 
Well, hopefully next gen comes with some mainstream hex-cores. Looks like the flagship Ivy-E may be an 8-core, so 6 cores may not be out of the question for the 5770K or whatever they decide to call it, lol. Six cores help a lot when shooting for that minimum 120Hz.

You mean Haswell-E may be an 8-core? Because right now Ivy Bridge-E is just a minimal performance improvement over Sandy Bridge-E; it's more about lower power consumption than anything else... (Sadly :( )

http://hardforum.com/showthread.php?t=1779041
 
Is the replacement for the 4670K in Intel's next CPU cycle/generation going to be 8 cores?

I have been wanting to upgrade for a while, but if an 8-core is around the corner I figure I can hold out a few extra months.
 
Is the replacement for the 4670K in Intel's next CPU cycle/generation going to be 8 cores?

I have been wanting to upgrade for a while, but if an 8-core is around the corner I figure I can hold out a few extra months.

It's really hard to believe Intel will ship a mainstream chip with 8 cores anytime soon. At most, and really dreaming like a boss, the next Haswell-E 5960X (or whatever they call it) could shoot for 8 cores and push the mainstream core count up to 6 cores, or 6 cores + HT, keeping 8 cores for the Extreme series. But c'mon, that's just dreaming; maybe it happens with mainstream Skylake, not even Broadwell. Just a guess :confused:
 
Hey, I'm going to give some contrary advice: PCI-E bandwidth matters! Not in the future, right now.
I've just been running benchmarks testing the effect of switching my setup from a GTX 770 OC on PCI-E 2.0 x16 to a GTX 770 OC and a GTX 550 Ti (running PhysX) each on PCI-E 2.0 x8. I also tested the GTX 770 OC on x8 running PhysX on its own (the 550 Ti out of the picture entirely).

What I've found so far is that some games are really, really sensitive to the bandwidth. Metro 2033 (highest quality settings + 3D Vision) is the most extreme example I've tested, going from a 28 FPS average to a 16.33 FPS average. Minimum framerates drop from 9.21 FPS to 5.11 FPS. That's before adding the GTX 550 Ti for PhysX, at which point the average goes up to 33 FPS and the minimum jumps to 11.89.
Also of note is that max framerates go from the mid-90s to the high 60s in the switch from x16 to x8.

This is the most extreme example I've tested, but the other games I've tried (Just Cause 2, Metro: LL, Batman: AC) are similarly affected, just not to the same extreme. They all dip a fair bit going from x16 to x8, and all jump slightly above the original framerates with the addition of the 550 Ti as a PhysX card. Honestly, with that initial loss of performance at x8, it's hard to say a dedicated PhysX card is worth it for the paltry few FPS it contributes. SLI might be a different story, I don't know.
On top of that, JC2 has serious 3D Vision problems when you give it a dedicated PhysX card; the foliage and some buildings stop rendering stereoscopically.

In short, going from x16 to x8: your mileage may vary. Don't take that PCI-E 1.0 x16 equivalence at face value; it varies immensely from game to game.
 