Enthusiast setups now CPU limited?

geraltofrivia

GPUs have been improving at a more rapid pace than CPUs over the past few years. On my system, at 1080p, most games seem to be CPU limited. I don't see this changing in the next year or two, either: Haswell is only going to be 10-15% faster in the best-case scenario.

I wonder if this is because developers are aiming for the notebook market, where until very recently the CPU has always been vastly faster than the GPU. Hopefully with Haswell and subsequent generations we will get to a point where notebooks become more balanced and game developers stop making games so dependent on the CPU.
 
Yeah, it's because of the lack of competition on the CPU front. It's pretty annoying, tbh, as there are games that drop below 60fps even with the fastest CPUs available, let alone 120fps.
 
Yeah, it's because of the lack of competition on the CPU front. It's pretty annoying, tbh, as there are games that drop below 60fps even with the fastest CPUs available, let alone 120fps.

The first game I played after setting up my rig was Starcraft II HOTS, the 3rd or 4th level of the campaign, where you have like 400 banelings/zerglings half submerged in the water. I was getting all of about 25fps :(
 
They are CPU limited for a few reasons:
1. Multi-threading is still picking up steam, and there is still a huge amount of extra CPU resource that few programmers use. Instead, most tend to take the "more CPU cores means more simple jobs I can run at once" approach rather than "let's rewrite this code so the existing work can be spread across multiple cores to speed it up" (see the sketch at the end of this post). It's a tough transition, and I don't blame the programmers; it will take a long time for games to finally get there. In high-end niche markets like servers and HPC, this ability has been there for some time.
2. If you are playing DX9-based games, especially console ports, you are hardly stressing the GPU at all. We have DX11 GPUs now, where even the slowest DX11 card can process DX9 faster than the fastest DX9 GPU from 10 years ago. I would love to see a comparison between a GeForce 6800 and a GeForce 650 playing Doom 3.
3. On the super high end, you have multi-GPU setups. The more GPUs you have, the more CPU power you need to run them and the program you are running. Fortunately, #1 is not an issue here, since AMD and Nvidia both have very decent multi-threaded drivers.
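
To make #1 concrete, here's a minimal sketch of the second approach, assuming a hypothetical update_entity() step (not from any real engine) whose iterations are independent, so one existing hot loop can be split across worker threads:

```cpp
// Splitting one existing hot loop across N worker threads, rather than
// just running more independent jobs side by side. update_entity() is a
// hypothetical per-entity step with no cross-entity reads or writes.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Entity { float x = 0.0f, v = 1.0f; };

void update_entity(Entity& e, float dt) { e.x += e.v * dt; }

void update_all(std::vector<Entity>& entities, float dt, unsigned workers)
{
    std::vector<std::thread> pool;
    const std::size_t chunk = entities.size() / workers + 1;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            const std::size_t begin = w * chunk;
            const std::size_t end = std::min(begin + chunk, entities.size());
            for (std::size_t i = begin; i < end; ++i)
                update_entity(entities[i], dt); // safe: iterations don't interact
        });
    }
    for (auto& t : pool) t.join(); // wait for the whole frame's worth of work
}
```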
 
WTF? The problems you guys describe are not CPU limitations but design flaws in the games you're playing. Complain about the vast majority of software developers not even utilizing all available threads before complaining about the CPU market's stagnation.

The problem is not CPU limitations but a lack of software progression. CPUs have been progressing just as much as GPUs, to the point where the two will soon be integrating with each other in much bigger ways. That doesn't translate to higher clocks, though, and it's an ass-backwards way of looking at things if that's all you're looking at.

Moreover, what does it matter if we have 9GHz processors if lazy game design and/or execution bogs systems down to the point of near-freezing or a quarter of the expected FPS? You can have the best system in the world and it doesn't matter one bit when the software you're using hogs all resources until a limit is reached.
 
Yeah, it's because of the lack of competition on the CPU front. It's pretty annoying, tbh, as there are games that drop below 60fps even with the fastest CPUs available, let alone 120fps.
I don't attribute it to a lack of competition. Pricing is a problem on the Intel side, not performance. And Intel's not abandoning the tick-tock cycle that's made them so successful.

Many aspects of real-time simulation, as it pertains to games, are just hard to parallelize. Rasterization is embarrassingly parallel, as is non-interactive physics (mainly for particles), but things like AI and rigid-body physics, where you have interactions across a large range of other objects, are not. And as graphics complexity goes up, so does graphics API overhead in many cases. Just keeping a GPU constantly fed with work, given current CPU performance, is hard. That's going to get better out of sheer necessity, and as tools for heterogeneous execution get more fleshed out, but it's going to require more time.

Some developers are doing a better job than others at parallelization, but they're all still suffering through the complexity of distributing inherently non-distributable tasks, and trying to do it with a high enough degree of reliability.
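
A rough sketch of that contrast, with toy types that aren't from any real engine: the particle loop below parallelizes trivially because each iteration only touches its own element, while a Gauss-Seidel-style contact loop does not, because every iteration read-modifies two shared bodies:

```cpp
// Why particles parallelize and interacting rigid bodies don't.
// All types here are hypothetical stand-ins, not engine code.
#include <vector>

struct Particle { float x, v; };
struct Body     { float v; };
struct Contact  { int a, b; }; // indices of two touching bodies

// Embarrassingly parallel: each iteration owns its own data.
void integrate(std::vector<Particle>& ps, float dt)
{
    #pragma omp parallel for // safe: no iteration reads another's state
    for (int i = 0; i < (int)ps.size(); ++i)
        ps[i].x += ps[i].v * dt;
}

// Not so here: each contact writes to TWO bodies, and one body can appear
// in many contacts, so naively running iterations on different cores
// races on Body::v. Real solvers need graph coloring or similar tricks.
void solve(std::vector<Body>& bodies, const std::vector<Contact>& cs)
{
    for (const Contact& c : cs) { // inherently ordered (toy impulse step)
        const float impulse = 0.5f * (bodies[c.b].v - bodies[c.a].v);
        bodies[c.a].v += impulse;
        bodies[c.b].v -= impulse;
    }
}
```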
 
They are only priced that way because there is no real competition atm. But sure, devs need to make better use of multithreading, and this should hopefully happen with the next gen of console ports. As it stands, the way many current games are coded, they end up CPU limited and run below 60fps, which is BS even on $1k CPUs.

I want a reason to buy a 6 core, or possibly 8 core chip for a gaming PC.
 
It's all about the GPU; in most games a 2500K is really all you need for top-notch performance. Buying a 3770K or 3930K will only give you a few fps of difference in games. Some games are more CPU dependent, though... it depends on what the developers are doing, and if anything this will only get worse, IMO. Maybe when more games start utilizing 8 threads or cores, but who knows when that will be.
 
WTF? The problems you guys describe are not CPU limitations but design flaws in the games you're playing. Complain about the vast majority of software developers not even utilizing all available threads before complaining about the CPU market's stagnation.

The problem is not CPU limitations but a lack of software progression. CPUs have been progressing just as much as GPUs, to the point where the two will soon be integrating with each other in much bigger ways. That doesn't translate to higher clocks, though, and it's an ass-backwards way of looking at things if that's all you're looking at.

Moreover, what does it matter if we have 9GHz processors if lazy game design and/or execution bogs systems down to the point of near-freezing or a quarter of the expected FPS? You can have the best system in the world and it doesn't matter one bit when the software you're using hogs all resources until a limit is reached.

No. CPU bottlenecks don't ask "why". If the game is CPU bottlenecked, it just is. We laypersons do not have the competence or knowledge to judge which title could be better optimized and which one could not.
 
I really hope the next-gen engines take advantage of multi-core, multi-threaded CPUs.
 
WTF? The problems you guys describe are not CPU limitations but design flaws in the games you're playing. Complain about the vast majority of software developers not even utilizing all available threads before complaining about the CPU market's stagnation.

The problem is not CPU limitations but a lack of software progression. CPUs have been progressing just as much as GPUs, to the point where the two will soon be integrating with each other in much bigger ways. That doesn't translate to higher clocks, though, and it's an ass-backwards way of looking at things if that's all you're looking at.

Moreover, what does it matter if we have 9GHz processors if lazy game design and/or execution bogs systems down to the point of near-freezing or a quarter of the expected FPS? You can have the best system in the world and it doesn't matter one bit when the software you're using hogs all resources until a limit is reached.

What you're talking about doing is expensive. Are customers willing to pay? By the looks of the current software landscape, I would say no.
 
You can be GPU or CPU limited at any time. It all depends on whether the developers wrote the software to lean more on the CPU or the GPU. Nothing ever stopped being limited ;). Strategy games will always hog more CPU power, while everything else usually hogs the GPU more.
 
It's not a surprise. GPUs seem to have progressed much more over the past few years than CPUs. Look at the difference between the 8800 and the Titan, then look at Core 2 to Ivy Bridge.
 
What you're talking about doing is expensive. Are customers willing to pay? By the looks of the current software landscape, I would say no.

"Pay more for games to have better optimisation? pffft. I'd rather buy a new Cpu."

People are always so much more willing to pay for hardware then software :p
 
Sounds like game performance has more to do with the coding than with the specs of the CPU or GPU. Which totally sucks, because we will never be able to play GTA 4 at 100fps, even 5 years from now.
 
Anyone know some of the best-scaling games that are actually coded properly?
 
I don't know; I've got a Core i7-3970X Extreme Edition paired with a GeForce Titan, and the Titan is the bottleneck in Crysis 3.
 
I don't know; I've got a Core i7-3970X Extreme Edition paired with a GeForce Titan, and the Titan is the bottleneck in Crysis 3.

Does Crysis 3 take advantage of the extra 2 cores? We have comparable GPUs, but on my system at 1200p it's the CPU that keeps the game from maintaining a locked-in 60fps at max settings. I'd say it gets 60fps about 85% of the time. I'm going by CPU and GPU utilization: in the areas where it drops under 60, the CPU usage is very high while the GPU is nowhere near maxed out.
 
I have no idea if Crysis 3 uses 6 cores, but looking at the CPU meter in Windows 7 on another monitor while playing, I see next to nothing happening on cores 5 and 6. Core 4 gets used, but it is mostly the first 3 cores.

Sometimes they get some use for a few seconds, but that might be background software or Windows using them for something, not the game.

The Turbo on my CPU does sometimes hit 4.0GHz when playing Crysis 3, though.

Somebody would have to google CryEngine 3 and see how many cores and threads it can use.
 
http://www.neoseeker.com/news/14688-cryengine-3-can-use-up-to-8-cpu-cores/

OK, so CryEngine 3 can use up to 8 cores, but I don't know if that holds in practice. I don't think Crysis 3 ever needs to use 8 cores, or even 6. Maybe some day, in some game that uses CryEngine 3, but right now those cores don't seem to get much use.

The AI is pretty dumb, most of the animation and everything else seems to run off the GPU without needing the CPU, and there is not much else going on.
 
http://www.neoseeker.com/news/14688-cryengine-3-can-use-up-to-8-cpu-cores/

OK, so CryEngine 3 can use up to 8 cores, but I don't know if that holds in practice. I don't think Crysis 3 ever needs to use 8 cores, or even 6. Maybe some day, in some game that uses CryEngine 3, but right now those cores don't seem to get much use.

The AI is pretty dumb, most of the animation and everything else seems to run off the GPU without needing the CPU, and there is not much else going on.

Crysis 3 must be designed by 8-year-olds then, because it uses between 3 and 4 *MAIN* CPU threads.

http://www.techspot.com/review/642-crysis-3-performance/page6.html

See that tiny increase going from 2 to 4 cores with the Athlon II X4 640? The increase should be almost double if the engine used 4 cores, so I would say only 3 major threads. Further evidence is that all the quad-and-up chips have the same performance, separated only by the combination of IPC and turbo clocks.

To put the final nail in the coffin, the only Vishera chip tested shows about 10% higher performance, and what do you know: it has a 10% single-threaded boost over the 8150!

Don't waste your money on a new CPU. Crysis 3 is incredibly unoptimized (it uses essentially the same number of cores as Crysis 2 did)! Any additional cores you buy would be wasted.

I know this is not the sort of answer you'd like to hear, but that's how it is. I get the impression you want to run 120 Hz at 1080p (why else would you have SLI 680s?), but short of liquid nitrogen there's no way around this limit. This proves once again that Crysis 3 is a console port first, targeting the 3-core Xbox 360. I wouldn't waste my time with CryEngine games anymore.
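
As a rough sanity check on why 2 to 4 cores can show so little, here's Amdahl's law, speedup(n) = 1 / ((1 - p) + p/n), with an assumed (not measured) parallel fraction; even if half of every frame parallelized perfectly, 4 cores would only buy you 1.6x:

```cpp
// Back-of-envelope Amdahl's law: the parallel fraction p is a guess
// for illustration, not a measurement of Crysis 3.
#include <cstdio>

int main()
{
    const double p = 0.5; // assume half of each frame parallelizes
    const int cores[] = {2, 3, 4, 6, 8};
    for (int n : cores) {
        const double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%d cores -> %.2fx\n", n, speedup);
    }
    return 0;
}
```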
 
Crysis 3 must be designed by 8-year-olds then, because it uses between 3 and 4 *MAIN* CPU threads.

http://www.techspot.com/review/642-crysis-3-performance/page6.html

See that tiny increase going from 2 to 4 cores with the Athlon II X4 640? The increase should be almost double if the engine used 4 cores, so I would say only 3 major threads. Further evidence is that all the quad-and-up chips have the same performance, separated only by the combination of IPC and turbo clocks.

To put the final nail in the coffin, the only Vishera chip tested shows about 10% higher performance, and what do you know: it has a 10% clock boost over the 8150!

Don't waste your money on a new CPU. Crysis 3 is incredibly unoptimized (it uses essentially the same number of cores as Crysis and Crysis 2 did)! Any additional cores you buy would be wasted. I know this is not the sort of answer you'd like to hear, but that's how it is.

Bingo... the kicker is comparing the quad i5-3470 with the hex-core and 8150 AMD procs, and the i5 topping both. The fact is, the code is in the way here, not the core count. If core count were at fault, the AMD 8150 would be way higher up, not tied with an X58 Nehalem, and the 8350 wouldn't be merely tied with the quad i5.
 
Maldo: "First of all, I want to note that multithreading optimization in Crysis 3 is awesome. Asaid famous ropes, enabling HyperThreading makes a significant difference in performance."

Not the case? Or is there some kind of distinction between threads and cores?

Edit: Not saying he knows everything, though he did really know his way around Crysis 2, both with optimizing and with the texture fixes/pack.
 
PC games that are CPU bound are just bad console ports.

Look at BF3: it runs great on all CPUs, even on old Intel and AMD CPUs like Core 2, etc.
 
The only applications I have ever seen use all 6 cores and all hyperthreading at 100% on my PC are 3ds Max, Maya, Softimage, Mudbox, and ZBrush.

Everything else does not.
 
I know this is not the sort of answer you'd like to hear, but that's how it is. I get the impression you want to run 120 Hz at 1080p (why else would you have SLI 680s?)

I just wanted 60fps all the time at max settings. I've come to realize this is basically impossible no matter what kind of rig you have. You can get 60fps 95% of the time, or even 99%, but there will always be moments where it drops under. For example, in Tomb Raider there were a couple of moments when I was being shot at by 6 enemies up close at once, where my framerate dropped under 15 for a second. So, to keep 60fps at that moment, you would need something that had 400% the power of 680 SLI. Not even quad-Titan right now offers that.
 
I just wanted 60fps all the time at max settings. I've come to realize this is basically impossible no matter what kind of rig you have. You can get 60fps 95% of the time, or even 99%, but there will always be moments where it drops under. For example, in Tomb Raider there were a couple of moments when I was being shot at by 6 enemies up close at once, where my framerate dropped under 15 for a second. So, to keep 60fps at that moment, you would need something that had 400% the power of 680 SLI. Not even quad-Titan right now offers that.

Yup, same here. I wish we could have games running at 60fps or greater 100% of the time. Sigh... the only way to do that is to play some very old games, or some simple modern games that don't stress the GPU/CPU. Then we'd get our 60fps 100% of the time.
 
Yup, same here. I wish we could have games running at 60fps or greater 100% of the time. Sigh... the only way to do that is to play some very old games, or some simple modern games that don't stress the GPU/CPU. Then we'd get our 60fps 100% of the time.

There's more to life than 60fps.
 
PC games that are CPU bound are just bad console ports.

Look at BF3: it runs great on all CPUs, even on old Intel and AMD CPUs like Core 2, etc.

Makes a good point.

Every time I play BF3, I am amazed at how damn smooth that game is...

...needs 120Hz monitor...
 
Yup, same here. I wish we could have games running at 60fps or greater 100% of the time. Sigh... the only way to do that is to play some very old games, or some simple modern games that don't stress the GPU/CPU. Then we'd get our 60fps 100% of the time.

lol @ username

There's more to life than 60fps.

Good enough,

You're right, but the thing is, the moments when FPS drops are usually the moments when you would benefit the most from keeping it at 60: i.e. lots of enemies on screen and up close.
 
It's no lie that PC gaming has been held back by consoles for the past several years. With the release of next-gen consoles, the lowest common denominator will no longer be the ancient hardware of the 360 and PS3, but the 8 Jaguar cores and 8GB of memory of (at least) the PS4.

It's pretty much a given that next-gen games will not only have to be heavily multi-threaded to take advantage of this new hardware, but will most likely have to run x64 code to take advantage of the massive increase in memory.

I'm pretty sure this means PC versions will use every resource available in modern enthusiast CPUs and will get native x64 executables that can use more than the 2 or 4GB limit of current x86 executables.
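
For what it's worth, a trivial standard-C++ probe shows which limit a given build is under, since pointer width tracks the address-space cap (4-byte pointers mean an x86 binary capped at 2-4GB of address space, 8-byte pointers mean an x64 build):

```cpp
// Prints whether this binary was compiled as x86 (32-bit) or x64.
#include <cstdio>

int main()
{
    std::printf("pointer size: %zu bytes -> %s build\n", sizeof(void*),
                sizeof(void*) == 8 ? "x64" : "x86");
    return 0;
}
```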
 
It's no lie that PC gaming has been held back by consoles for the past several years. With the release of next-gen consoles, the lowest common denominator will no longer be the ancient hardware of the 360 and PS3, but the 8 Jaguar cores and 8GB of memory of (at least) the PS4.

Let's just hope the new lowest common denominator doesn't become all these Intel laptops.
 
Makes a good point.

Every time I play BF3, I am amazed at how damn smooth that game is...

...needs 120Hz monitor...



Not in my case. I got huge hiccups when playing on 64-player servers on my i3. Went to a 2600K and noticed HUGE improvements in average and minimum fps. My i3 was totally serviceable, but you can certainly tell the difference compared to an i5 or i7.
 
Maldo: "First of all, I want to note that multithreading optimization in Crysis 3 is awesome. Asaid famous ropes, enabling HyperThreading makes a significant difference in performance."

Not the case? Or is their some kind of distinction between threads and cores?

Edit: Not saying he knows everything though he did really know his way around Crysis 2. Both with optimising as well as the texture fixes/pack.

There are additional threads, but only four major ones, and the amount of multithreading DEPENDS ON THE MAP.

See here for another benchmark set:

http://www.xtremesystems.org/forums/showthread.php?285161-More-Crysis-3-CPU-benchmarks&

Notice that the 8350 Vishera blows away the Core i5 on Welcome to the Jungle, while also moving ahead of the 6-core Vishera. This map really does take advantage of more than 4 threads, although the effect is not as great as for the first 4 (see the massive boost the Core i3 gets thanks to its Hyperthreading). The Core i7 versus i5 gap is much more constrained, at only about 10% more performance.

Now look at the results for Post Human and The Root of All Evil: in neither case does more than 4 threads make a difference. The Vishera chips are all grouped tightly together, and so are the Ivy Bridge quads. The i3 is again a standout, but that's thanks to the 4 major threads.

So yeah, CryEngine supports more than 4 threads, but like any game, it's left to the game designer to actually take advantage of them. And in Crysis 3 there's just no sign of that: performance that varies widely from map to map, and is not that impressive even on the *FEATURE* map (the jungle), points to poor optimization in my book. This at least explains why TechSpot saw no performance improvement: they assumed each level would see the same hardware usage (a reasonable assumption) and only ran the one test.

EDIT: yep, TechSpot tested on Post Human, which means the two benchmark results are in agreement (no more than 4 threads).
 
I am of the certain persuasion that most people do not realize why CPU architecture has stagnated, or why CPU performance increases have been shrinking from generation to generation.

Quite simply, the x86 ISA was never intended to process more than one instruction through the datapath per cycle.

Yes, the industry has tricks up its hardware and software sleeves (duplicate registers, duplicate ALUs, pipeline-stall-handling code) that allow for a "multicycle" single-cycle datapath.

The core issue, though, is that the x86 multicycle datapath is nearing its maximum optimization point. This is evident in the decreasing performance delta between CPUs and their next-gen counterparts. You simply cannot throw more cores at the issue, because the fundamental laws of computing begin limiting maximum performance.

Clever multithreaded programming will only go so far.

What we need is a true multicycle datapath, not a souped-up single-cycle datapath.
 
Let me know when RISC ISAs quit failing at general-purpose computing.
 