Why no GHz progress?

I've had my Core i7 @ 4.2GHz for going on two years now. There isn't much faster out there unless you go crazy with cooling.

What's been keeping Intel from going to 4, 5, 6 GHz? Seems like we have been stuck at the same speeds for years now. They just toss more cores at us.

The issue with that is most game designers are still stuck in the stone age, where virtually all of the game processing gets done on one thread. The only way to speed that up is a higher clock speed. Tossing more and more cores at it doesn't help.

Does anyone know if Intel is coming out with anything on the horizon? The Core i7 sure is getting long in the tooth and isn't terribly much faster than when it came out.
 
Sandy Bridge will be out shortly; supposedly there have been some serious overclocks on air cooling.
 
One aspect that keeps Intel and AMD from going over 4GHz is power. It's interesting to me that both companies have settled on about 130W as the maximum a stock processor can use, while GPUs can use 200W+. I know that figure includes the whole video card.

The only way to speed that up is a higher clock speed.

That is true if the IPC does not change. However, both Intel and AMD have increased IPC with almost every iteration. I believe Sandy Bridge is supposed to be 15-25% faster than the current i7 clock for clock.
 
Toss more cores at us? We've had quad cores since 2006... and we just got to 6 cores earlier this year. That's almost four years with significant improvements in performance without an increase in clock speeds OR core counts. But if GHz are that important to you, I hear Pentium 4s are where it's at.

If you want an efficient processor that gets more work done per clock cycle... well, a lot of progress has been made since NetBurst. So much so that Intel hasn't needed to increase clocks to get a significant increase in per-core performance. The faster a processor runs, the more power it uses and the more heat it generates. Most people don't want 150+ watt CPUs in their desktops, but they DO want more performance.

As for games, a lot of games have been taking advantage of dual cores for a while now... and more and more need at least three or four cores to get decent performance (like BF-BC2).
 
toss more cores at us?

AMD may do that next year with a 4-module/8-core design; however, IPC predictions are hard to make with the info that has been released, so we can only speculate on how well it will perform.
 
As manufacturing process sizes decrease, wattage goes down for the same speed, or performance goes up relative to the maximum TDP. With the new 32nm Westmere-based CPUs, for example, from the mainstream parts up to the prohibitively expensive Gulftowns, clock speed has definitely been going up. It's nowhere near the 10GHz Intel originally predicted about 10 years ago, but the amount of relative work being done has been going up the whole time.

4GHz was the sweet spot for Bloomfield, and now Westmere CPUs can sometimes hit 4.5GHz.
 
As manufacturing process sizes decrease, wattage goes down for the same speed, or performance goes up relative to the maximum TDP.

There are exceptions to a smaller process leading to less heat; Prescott was a huge example. A smaller process tends to lead to more current leaking through the gate material. Unless you find a way to minimize this problem (like Intel did at 45nm), the only way to get around it is to lower clock speeds and voltage. Lower the voltage too much, and transistor switching time is affected and you can't reach the clock speeds you need to maintain the performance increases that are expected.
 
That is true if the IPC does not change. However, both Intel and AMD have increased IPC with almost every iteration. I believe Sandy Bridge is supposed to be 15-25% faster than the current i7 clock for clock.

This. I know very little about this, so until a few months ago I felt the same way, wondering why the clocks weren't rising. But a friend did his best to explain it to me, and from what I gather, as long as the instructions per clock keep increasing, we shouldn't give too much of a crap about clocks hovering around the same level, since we will see performance increases regardless.
 
Power/heat is a big portion of it; most of the rest is just approaching the physical limits of the tolerances, the materials, etc. Intel's tolerances with regard to stability are significantly tighter than an overclocker's. The chips from Intel have to be 100% stable at 120°F ambient, or whatever the number is. It's much harder for Intel than for an overclocker.
 
Here's the math for you...

In a processor, P = A·C·V²·f + t·A·V·I_short·f + V·I_leak

Where A is the activity factor, C is the switched capacitance, V is the voltage, f is the frequency, t is the non-ideal switching time, I_short is the short-circuit current in the transistors, and I_leak is the leakage current.

The first term is called active or dynamic power, meaning the power required to actually do something. Up until recently, it was the only important factor in CPU power. You can see that it scales directly with f, and since V also has to scale with f, scaling the frequency up can make power grow roughly with the cube of the frequency. In other words, on an arbitrary processor, you can't just scale the frequency up, because at some point you can't dissipate all the heat.

The second term is pretty easy to control with correct fabrication, etc., so it is generally ignored.

The obvious question raised by the increase in dynamic power is why not just drop the process size? The answer is that decreasing the process size increases the leakage current. So modern manufacturers have more or less settled at the 1-4 GHz range, and are now looking at other methods to improve throughput, namely more processing cores.

If you look at the equation, doubling the number of cores roughly doubles the power. The math isn't quite that simple, because you now also have power going to interconnect components like buses, but it's pretty close. Whereas if you double the frequency, you have to double f and increase V, meaning your power increase is much greater than linear.
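
To make the scaling concrete, here's a quick back-of-the-envelope sketch of just the dynamic term A·C·V²·f. Every number in it is a made-up illustrative value, not a real CPU figure:

Code:
# Back-of-the-envelope sketch of the dynamic-power term P_dyn = A*C*V^2*f
# from the equation above. All values are made up purely to show the scaling.

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Active (dynamic) switching power in watts: A * C * V^2 * f."""
    return activity * capacitance_f * voltage_v**2 * freq_hz

# Hypothetical baseline: one core's worth of logic at 3 GHz and 1.2 V.
base = dynamic_power(activity=0.1, capacitance_f=1e-7, voltage_v=1.2, freq_hz=3e9)

# Doubling the core count roughly doubles the switched capacitance -> ~2x power.
two_cores = dynamic_power(0.1, 2e-7, 1.2, 3e9)

# Doubling the frequency also drags the voltage up (say to 1.5 V) -> much more than 2x.
double_clock = dynamic_power(0.1, 1e-7, 1.5, 6e9)

print(f"baseline:     {base:6.1f} W")
print(f"2x cores:     {two_cores:6.1f} W  ({two_cores / base:.2f}x)")
print(f"2x frequency: {double_clock:6.1f} W  ({double_clock / base:.2f}x)")

The specific numbers don't matter; the point is that adding a second core roughly doubles power, while doubling the clock (and the voltage it drags up with it) roughly triples it.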
 
Can anyone explain to me what kind of performance increase can be expected from an out-of-order architecture? Does the performance come from less downtime (waiting time) for the processors? Does software need to be written specifically for it to take advantage of it?
 
Can anyone explain to me what kind of performance increase can be expected from an out-of-order architecture?

This improves IPC by allowing more instructions to execute at the same time than would have if all instructions were executed strictly in order. BTW, I believe every Intel x86 CPU made since the original Pentium, except Atom, has been out-of-order.
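
As a toy illustration (a made-up four-instruction program with made-up latencies, nothing like a real pipeline model), the win comes from letting independent instructions run while a slow load is still in flight:

Code:
# Toy model of in-order vs out-of-order issue. Instructions are
# (destination register, source registers, latency in cycles); the program
# and latencies are invented purely for illustration.
program = [
    ("r1", [],     10),  # slow load into r1
    ("r2", ["r1"],  1),  # add that needs the load result
    ("r3", [],      1),  # independent work
    ("r4", ["r3"],  1),  # more independent work
]

def run(instrs, in_order):
    ready = {}                       # register -> cycle its value is available
    issued = [False] * len(instrs)
    finish = []                      # completion cycle of each issued instruction
    cycle = 0
    while not all(issued):
        for i, (dst, srcs, lat) in enumerate(instrs):
            if issued[i]:
                continue
            if in_order and not all(issued[:i]):
                break                # in-order: can't skip an older, unissued instruction
            if all(ready.get(s, float("inf")) <= cycle for s in srcs):
                issued[i] = True
                ready[dst] = cycle + lat
                finish.append(cycle + lat)
                break                # at most one instruction issued per cycle
        cycle += 1
    return max(finish)

print("in-order finishes at cycle:    ", run(program, in_order=True))    # 13
print("out-of-order finishes at cycle:", run(program, in_order=False))   # 11

In this toy run the in-order version finishes at cycle 13 while the out-of-order version finishes at cycle 11, because the two independent instructions execute in the shadow of the load, and nothing about the program itself has to change.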
 
We've had quad cores since 2006... and we just got to 6 cores earlier this year. That's almost four years with significant improvements in performance without an increase in clock speeds OR core counts.
In late 2006 we got the first quad-core desktop processor, the QX6700, at 2.67GHz.

In 2009, around two and a half years later, we got the i7-975 at 3.33GHz with turbo to 3.6GHz, and it is indeed faster.

I looked up the Q6600 (which, like the QX6700, is a Kentsfield but has a slower clock speed) vs the i7-975 on AnandTech Bench (http://www.anandtech.com/bench/Product/53?vs=99). Results seem to vary a lot, between just under 50% faster and just over twice as fast. Correcting for the fact that the chip I compared was about 10% slower than the chip I wanted to compare, that would mean an improvement of 40-90% in two years. Not too bad really, though I don't think it's as good as the improvements made in the P3 and early P4 era.

However, in the year and a half since then, individual core performance has virtually stood still. The 975 is still the fastest quad core; there is an i5 with a 3.87 GHz single-core clock (including turbo boost), but it only has two cores.

I believe Sandy Bridge is supposed to be 15-25% faster than the current i7 clock for clock.
A 25% gain in instructions per clock and a 6% improvement in clock speed (at least if Wikipedia's model list is to be believed) over the i7-975 isn't much for a two-year wait.
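
To put rough numbers on that (single-threaded performance scales roughly as IPC × clock; the 25% and 6% figures are the speculative ones quoted above, not measurements):

Code:
# Rough compounding of the rumoured Sandy Bridge gains over the i7-975.
ipc_gain   = 1.25   # ~25% more instructions per clock (speculative figure from above)
clock_gain = 1.06   # ~6% higher clock, per Wikipedia's model list

combined = ipc_gain * clock_gain
print(f"combined single-thread gain: ~{(combined - 1) * 100:.1f}%")   # ~32.5%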
 
Can anyone explain to me what kind of performance increase can be expected from an out-of-order architecture? Does the performance come from less downtime (waiting time) for the processors? Does software need to be written specifically for it to take advantage of it?

The Wikipedia page explains it pretty well.

http://en.wikipedia.org/wiki/Out-of-order_execution

An actual example of when out-of-order execution is advantageous would take enough time to walk through that it's OT for this thread.

And to answer your second question: no, neither the software nor the compiler needs to be aware that the CPU is out-of-order. It is possible that the compiler may be able to take advantage of it if it is aware, however. This is why you can use the same software/compiler on an in-order platform like the Atom and an out-of-order platform like most Intel/AMD chips.
 
Eh, that really wasn't a true quad core CPU, just two dual cores on the same package using the fsb and northbridge to communicate between the two.

The first true desktop quad was the Phenom in 2007.
 
A 25% gain in instructions per clock and a 6% improvement in clock speed (at least if Wikipedia's model list is to be believed) over the i7-975 isn't much for a two-year wait.

I think there are 3 reasons for this.
1. The economy caused layoffs.
2. No competition from AMD at the high end.
3. This is the tick stage of Intel's tick/tock. Intel is just shrinking the process, not drastically changing the architecture.
 
Eh, that really wasn't a true quad core CPU, just two dual cores on the same package using the fsb and northbridge to communicate between the two.

The first true desktop quad was the Phenom in 2007.

That doesn't mean a damned thing. It's a true quad core in the sense that you could buy a single processor and get four complete cores with it. AMD's blabbering about being a TRUE quad-core turned out to be nothing but a bunch of BS trying to sell an inferior architecture.
 
One aspect that keeps Intel and AMD from going over 4GHz is power. It's interesting to me that both companies have settled on about 130W as the maximum a stock processor can use, while GPUs can use 200W+. I know that figure includes the whole video card.

I would say this is it. There is a giant thread about the 6-core AMDs blowing up motherboards because they draw too much power when overclocked.
 
Eh, that really wasn't a true quad core CPU, just two dual cores on the same package using the fsb and northbridge to communicate between the two.

The first true desktop quad was the Phenom in 2007.
How many cores were in a QX6700? I'm pretty sure the answer is four, which would make it a true quad-core CPU since it has four actual cores. The only "non-true" quad-core CPUs Intel has made are dual-cores with Hyperthreading.
 
The only "non-true" quad-core CPUs Intel has made are dual-cores with Hyperthreading.

And the only Intel dual-cores with HyperThreading enabled that have ever been made are the i3-5xx and i5-6xx. The first Intel dual-core processor (actually two separate single-core chips on the same package) was the Pentium D (LGA 775), which had HyperThreading disabled internally at the manufacturing stage. The Core 2 processors never had HyperThreading at all.
 
And the only Intel dual-cores with HyperThreading enabled that have been ever made have been the i3-5xx and i5-6xx. The first Intel dual-core processor (actually two separate single-core chips on the same package) was the Pentium D (LGA 775), which had HyperThreading disabled internally at the manufacturing stage. The Core 2 processors never had HyperThreading at all.
The Extreme Edition Pentium Ds had Hyperthreading.
 
Software/gaming companies need to catch up.

This is the problem with going the multi-core route over increasing core speed. It's all well and good to have a dozen cores, but when they are only 2.4GHz apiece and the application you are trying to run is only single-threaded, you will have the performance of a single 2.4GHz CPU.

Going with multiple cores is only faster if the software supports multithreading and can take advantage of more than one core at a time, or when you need to multitask with many different programs at the same time.
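
To put rough numbers on that, here's an Amdahl's-law sketch; the parallel fractions are made-up examples rather than measurements from any real game:

Code:
# Amdahl's law: speedup from N cores is limited by the serial fraction of the work.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for frac in (0.0, 0.5, 0.9):          # 0% / 50% / 90% of the work can run in parallel
    for cores in (1, 2, 4, 12):
        print(f"parallel={frac:.0%}  cores={cores:<2}  "
              f"speedup={amdahl_speedup(frac, cores):.2f}x")

With 0% parallel work a dozen cores buy you nothing, which is exactly the single-threaded case above, while even at 90% parallel, twelve cores get you less than 6x.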
 
This is the problem with going the multi-core route over increasing core speed.

Even single-threaded games, and any other application for that matter, will indirectly benefit from a multi-core CPU because other system processes, like the video, sound, and other I/O drivers, don't have to take up time on the same core that the game is running on.

That being said, game developers need to start being better about their games being multithreaded.
 
That being said, game developers need to start being better about their games being multithreaded.

And thus we find where the problem lies. Unfortunately, it's just going to take some time for more companies to adapt their applications now that multi-core has become the norm.
 