Has Clock Speed Outlived Its Usefulness as a Processor Specification?

The whole idea behind clock speed is to give the consumer a benchmark by which to judge how fast a chip actually is. Today, with all of the creative marketing and proprietary tweaking taking place, it's hard to know exactly what you are getting.

In simpler times there was no question what you'd receive when you bought a CPU. A 1GHz chip ran at 1GHz, end of story.
 
what's the point of that article? turbo modes (auto overclocks) are nothing new, and all it takes is a spec on how far beyond the base clock a turbo mode goes. there you go, usefulness restored. i hope i'm still alive when there are 5GHz retail CPUs, because 4GHz has been a wall since the Pentium 4 era. i understand that CPU architecture has more than made up for the lack of clock speed, but i want some horsepower, damnit :)
 
This isn't really new.

People didn't realize it as often in the past, but other components that support the CPU, including cooling, often made actual performance vary a lot. It's one of the reasons I loved ThinkPads: they typically outperformed machines with similar paper specs.

There's also the fact that 1GHz is not always equal to 1GHz. Look at a 2.4GHz P4 vs. a new 2.4GHz i5.
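
Back-of-the-envelope, performance is roughly IPC x clock, which is why the same clock can mean wildly different speed. A quick sketch with made-up IPC numbers (illustrative only, not measured values):

```python
# Toy comparison: performance ~ IPC x clock. The IPC figures below are
# illustrative guesses, not measurements.
chips = {
    "Pentium 4 @ 2.4 GHz": {"ipc": 0.8, "clock_ghz": 2.4},
    "Core i5 @ 2.4 GHz": {"ipc": 2.5, "clock_ghz": 2.4},
}

for name, c in chips.items():
    # billions of instructions retired per second, very roughly
    gips = c["ipc"] * c["clock_ghz"]
    print(f"{name}: ~{gips:.1f} GIPS")
```

Same clock, several times the throughput.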
 
I don't really get it. GHz works as long as you're comparing within the same consumer Intel/AMD line.

Core count and hyperthreading are what's hard to explain to family-tier people.
 
Better to have a machine throttle itself than cook itself. I remember losing a motherboard when the CPU fan stopped working on a 486 DX2-66 system I had. The chip survived without damage (Intel chips were built like tanks back then!) but the heat cooked the motherboard. Throttling is a GOOD thing since it saves your hardware. Unnecessary throttling is what's bad. It is unfortunate for laptop users, but they're always going to be faced with this problem. You'll never get the performance from a laptop that you'll get from an enthusiast PC with huge heatsinks and/or watercooling. Nobody's going to tote something like a HAF-932 case around, so you have to trade performance for portability. How much you're trading is the question.

There's a way to answer that question and determine which machine is faster: torture it with something like Prime95 and record what the CPU is doing frequency- and voltage-wise over time, as well as running application function tests to see which programs perform faster on which machine. That takes into account CPU core, memory, and chassis differences to show actual performance, so if you want to know whether brand X's AMD laptop is faster than brand Y's Intel laptop for a given task, you get some idea of what's going to happen with a real program and not just an artificial benchmark.

This is why sites like the [H] are so necessary. Someone has to stress test in a controlled environment and see what really happens to find out which machine is performing better. Testing takes the mystery out of it so you can make an informed buying decision.
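
For a poor man's version of that test at home, something like this rough sketch works (assumes the cross-platform psutil package is installed, and that you start your Prime95-style load separately):

```python
# Rough sketch: log actual CPU frequency over time while a stress load
# (e.g. Prime95) runs in another window. Requires: pip install psutil
import csv
import time

import psutil

DURATION_S = 600    # watch long enough to catch slow thermal throttling
INTERVAL_S = 1.0

with open("freq_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "freq_mhz", "cpu_percent"])
    start = time.time()
    while time.time() - start < DURATION_S:
        load = psutil.cpu_percent(interval=INTERVAL_S)  # blocks ~1 s
        freq = psutil.cpu_freq()  # may return None on some platforms
        writer.writerow([round(time.time() - start, 1),
                         freq.current if freq else "n/a",
                         load])
```

If freq_mhz sags after a few minutes pegged at 100% load, you've found a throttler.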
 
what's the point of that article? turbo modes (auto overclocks) are nothing new, and all it takes is a spec on how far beyond the base clock a turbo mode goes. there you go, usefulness restored. i hope i'm still alive when there are 5GHz retail CPUs, because 4GHz has been a wall since the Pentium 4 era. i understand that CPU architecture has more than made up for the lack of clock speed, but i want some horsepower, damnit :)

I agree. CPU scaling in terms of speed was pretty consistent until the Pentium 4. And CPU speed was really limited by three things: 1) heat, 2) distance between circuits, and 3) signal noise. Heat and signal noise are independent of Moore's law. If the distance between circuits is cut in half, then that CPU should clock twice as fast (ignoring 1 and 3).

AND YES, I REALIZE MOORE SAID DOUBLING OF TRANSISTORS, NOT SPEED, EVERY 18 MONTHS. But speed could also double on basic operations if you cut the distance between circuits in half.
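
To put toy numbers on the distance argument (this is napkin math, not a real timing model; real chips are limited by transistor switching and pipeline depth too):

```python
# Toy model: the clock period must at least cover the time a signal needs
# to cross the longest wire. All numbers are illustrative, not real.
SIGNAL_SPEED_M_S = 1.5e8  # assume signals travel at ~half the speed of light

def max_clock_ghz(longest_path_mm: float) -> float:
    delay_s = (longest_path_mm / 1000.0) / SIGNAL_SPEED_M_S
    return 1.0 / delay_s / 1e9

print(max_clock_ghz(20))  # 20 mm path -> ~7.5 GHz ceiling
print(max_clock_ghz(10))  # halve the distance -> ~15 GHz, i.e. doubled
```

Halve the longest path and the ceiling doubles, which is the scaling I mean.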

Technically speaking, if you look at the performance metrics, you can't sit and tell me today that the fastest CPU is 1024x faster than a CPU from 15 years ago on any metric (GPU-related tasks excluded), optimized or not.
 
This essentially started with the Pentium 3. This question gets brought up every year, and the answer is the same: "YES."
 
I have a few applications I would love to run on a factory 5+GHz core.
 
I have a few applications I would love to run on a factory 5+GHz core.

You DO realise that everything else would be a bottleneck due to clock timings, right? A 5+GHz CPU would spend half its time idle even at full load, waiting for the signals of other components to fall on clock sync.
 
A reporter wakes up from a 10-year coma and... I am sure there is a bad joke in there somewhere.


The reporter made one statement that was dumber than the rest. He mentioned he didn't think that the company was intentionally deceiving consumers. I beg to differ: that is EXACTLY what Intel and the OEMs are doing. Ever since we shifted to this mind-numbingly dumb i3/i5/i7 naming scheme, we have seen an explosion in moronic system configurations. The reporter mentions one where an ultrabook is clearly unable to handle the heat of a higher-end CPU, so the OEM is playing a spec game: look at this laptop, it's thinner and more powerful, it's clearly better, right?

Everyone here knows that the hands-down best value in CPUs is usually the i5 K series. But let me know the next time you see that wonderful CPU featured in any consumer-class device outside of boutique system builders. On top of that, Intel bogusly names ultra-low-power chips with the exact same naming convention, only swapping out some letters at the end. This of course confuses consumers into believing the crappy MacBook Air they just received actually has a CPU in it that is comparable to the i7 some guy has in his gaming PC.

The other major downside is that most systems seem to come configured with the lowest-end CPU in their class. Isn't it great when you can charge a certain price and just say "an i7" while giving them the lowest-end i7? And this shit absolutely works. Nowadays I ask people what CPU they have and almost all answer "it's an i7" or "an i5" or something, rather than giving any indication of the model, and the few that don't answer with the speed in GHz, which I then have to explain has not been relevant for more than a decade.
 
Yeah, this is really, really old news... October 9, 2001: the Athlon XP, when AMD switched to the PR naming scheme.
 
You DO realise that everything else would be a bottleneck due to clock timings, right? A 5+GHz CPU would spend half its time idle even at full load, waiting for the signals of other components to fall on clock sync.

you can OC some CPUs past 5GHz, and you see increased performance.

if 5GHz yields better performance than 4GHz, i'm taking 5GHz.
 
It has only ever been useful for comparing identical CPUs, and it has always been this way.

You could say the same thing in 1997 or 2015.

Obviously a 200MHz Pentium Pro was faster than a 180MHz Pentium Pro. But clock is useless otherwise. Obviously a 200MHz Pentium Pro performs differently from a 200MHz Pentium MMX (or a 200MHz AMD K6), you know, because they are not the same CPU.

If clock speed were useful for comparing different architectures, we would never have needed benchmarks lol.
 
Yeah, this is really, really old news... October 9, 2001: the Athlon XP, when AMD switched to the PR naming scheme.

It has only ever been useful for comparing identical CPUs, and it has always been this way.

You could say the same thing in 1997 or 2015.

Obviously a 200MHz Pentium Pro was faster than a 180MHz Pentium Pro. But clock is useless otherwise. Obviously a 200MHz Pentium Pro performs differently from a 200MHz Pentium MMX (or a 200MHz AMD K6), you know, because they are not the same CPU.

If clock speed were useful for comparing different architectures, we would never have needed benchmarks lol.

That is not the issue being described in the article. The issue is that identical CPUs (as listed on spec sheets) can now perform very differently depending on implementation, because they all dynamically vary clock speeds.

Let's use a hypothetical Intel CPU A as an example. This CPU, like most modern processors, has fine-grained dynamic clock rates (its clock speed will vary depending on specific conditions).

Now let's say Asus and Lenovo each use Intel CPU A in a laptop. All other internal components are identical, and Intel CPU A is listed in the spec sheet for both. However:

Asus laptop: allows a much higher max temperature and fan speed (noise) than the Lenovo
Lenovo laptop: allows a much lower max temperature and fan speed (noise) than the Asus

The end result is two very differently performing laptops despite identical listed specifications, as they will not reach (or at least sustain) the same clock speeds even though they both have an Intel CPU A.
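
You can watch that mechanism play out in a dumb little simulation (every constant below is invented; it only demonstrates the mechanism, not any real laptop):

```python
# Crude throttling simulation for the hypothetical Intel CPU A in two
# chassis that differ only in their temperature cap. Every constant here
# is made up; the point is the mechanism, not the numbers.

def sustained_clock_ghz(temp_cap_c: float, seconds: int = 300) -> float:
    temp_c, clock = 40.0, 3.5            # start cool, at full 3.5 GHz boost
    base, boost = 1.8, 3.5
    total = 0.0
    for _ in range(seconds):
        heat_in = 8.0 * clock            # faster clock -> more heat
        cooling = 0.5 * (temp_c - 25.0)  # chassis sheds heat vs. ambient
        temp_c += (heat_in - cooling) * 0.05
        if temp_c > temp_cap_c:
            clock = max(base, clock - 0.1)    # throttle down
        else:
            clock = min(boost, clock + 0.1)   # recover toward max boost
        total += clock
    return total / seconds

print(f"'Asus'   (95 C cap): {sustained_clock_ghz(95.0):.2f} GHz average")
print(f"'Lenovo' (75 C cap): {sustained_clock_ghz(75.0):.2f} GHz average")
```

Same chip on paper, different sustained clocks, exactly because one chassis tolerates more heat.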

This issue also isn't just CPUs; GPUs now behave this way too. Nvidia's GPU Boost table is set according to the chip's individual ASIC score, so two identical-model GPUs will perform differently at stock, as they will clock to different speeds.

There's a way to answer that question and determine which machine is faster: torture it with something like Prime95 and record what the CPU is doing frequency- and voltage-wise over time, as well as running application function tests to see which programs perform faster on which machine. That takes into account CPU core, memory, and chassis differences to show actual performance, so if you want to know whether brand X's AMD laptop is faster than brand Y's Intel laptop for a given task, you get some idea of what's going to happen with a real program and not just an artificial benchmark.

This is why sites like the [H] are so necessary. Someone has to stress test in a controlled environment and see what really happens to find out which machine is performing better. Testing takes the mystery out of it so you can make an informed buying decision.

Well, the issue is there are way too many end-user products, and really only a few of the most popular lines get tested, and often not in a representative configuration.

Also, hardware sites like HardOCP don't even test these devices, but let's look at something they do test a lot of: video cards. Even with video cards, there are quite a few models that no hardware site has tested. In fact, if you look at the models that do get tested, all the sites pretty much overlap. So if you want to find data on, say, the mainstream Asus and MSI cards, that's easy, but otherwise? And there are far fewer video card model variations than there are laptops alone.
 
meh -- my i5 2500K @ 4-something GHz has been holding me solid for quite a while. They could come out with an 8GHz super duper pussy magnet CPU and I really wouldn't care.
 
That's not what the article is talking about, though.

It's raising the issue that complete systems (laptops, in this example) are exhibiting different levels of performance even comparing the same architecture. In some cases, slower chips are outperforming faster chips because of other factors in the build.

You buy a laptop with a 2.9GHz chip and compare it to last year's 2.3GHz laptop, but the new one isn't faster because it's throttling. You aren't getting the performance you expected or paid for, and comparing the raw processor speed isn't an accurate description of what you should expect.

They're asking whether these kinds of situations indicate that raw processor speed is becoming less informative to the consumer.
 
I don't really get it. GHz works as long as you're comparing within the same consumer Intel/AMD line.

Core count and hyperthreading are what's hard to explain to family-tier people.

I don't think that's really true anymore. OEMs can manually set thermal limits so a CPU maintains a certain temperature under load and throttles accordingly to hit that target, or they can freely allow a CPU to adjust speed up to Intel's thermal limit, in which case cooling becomes a pretty significant factor as the processor balances core speed and GPU speed. Anandtech did a pretty good write-up on it after they noticed a lower-end Core M outperforming higher-end Core M CPUs in some benchmarks. Here's the article:

http://anandtech.com/show/9117/analyzing-intel-core-m-performance

So basically, yes, GHz matters, but the end user can't always exert full control over the overall system design, which can leave performance on the table or make a lower-end part perform better depending on the workload. It's kinda interesting stuff.
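
The underlying reason the power budget ends up setting the clock is the usual CMOS dynamic-power rule of thumb, P ~ C * V^2 * f. Since voltage has to rise roughly with frequency, power grows roughly with the cube of the clock. A sketch with a made-up constant:

```python
# Toy model: CMOS dynamic power rule of thumb P ~ C * V^2 * f, with voltage
# assumed to scale linearly with frequency, so P ~ f^3. K is invented.
K = 0.2  # lumped constant folding in capacitance and the V(f) slope

def max_freq_ghz(budget_w: float) -> float:
    return (budget_w / K) ** (1.0 / 3.0)

print(f"15.0 W budget -> ~{max_freq_ghz(15.0):.1f} GHz")  # ~4.2 GHz
print(f" 4.5 W budget -> ~{max_freq_ghz(4.5):.1f} GHz")   # ~2.8 GHz
```

Under that cube rule, roughly 3.3x the power budget buys only about 1.5x the clock, which is why those TDP numbers matter so much.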
 
If you asked AMD back in the Athlon XP days, it was all about the PR rating. :D:D

Clock speed is still an important performance spec, but it's not the most important anymore.

But today there are so many variables (multicore, hyperthreading, ISA, IPC, cache, memory bandwidth, HSA, et al.) that you can no longer base a performance metric on clock speed alone.
 
*cough Dell cough*

The XPS 16 series (and its Latitude cousins) had massive throttling issues that conveniently showed up only after about half an hour. Long enough for most review benchmarks to be completed.

Strangely, unthrottling using special programs (e.g. ThrottleStop) didn't damage the hardware in the least, as the CPU is rated to 105C and the GPU to 95C. Dell just wanted "quiet" over "actually does what it says on the box."
 
I think it still depends on what a computer is being used for, and especially the apps. If apps don't multithread and need performance, then clock speed still matters. For most general computing, though, I think clock speed hasn't been that important ever since the Core 2 Duo. I still have and use my Core 2 Duo laptop, and the processor is more than adequate; however, I did put an SSD in it. Today you'll see more benefit from having an SSD in your computer than from a higher-clocked CPU, that's for sure. As far as gaming goes, consoles hold the industry back, so there's really not much need for a really high-end box either.
 
meh -- my i5 2500K @ 4-something GHz has been holding me solid for quite a while. They could come out with an 8GHz super duper pussy magnet CPU and I really wouldn't care.

I'd be first in line to buy an "8GHz super duper pussy magnet CPU" just to see how it worked. I have NEVER seen a CPU with that kind of attractive power; the closest was this one girl who thought the fact that I had lights in my computer was "weird".
 
I don't think that's really true anymore. OEMs can manually set thermal limits so a CPU maintains a certain temperature under load and throttles accordingly to hit that target, or they can freely allow a CPU to adjust speed up to Intel's thermal limit, in which case cooling becomes a pretty significant factor as the processor balances core speed and GPU speed. Anandtech did a pretty good write-up on it after they noticed a lower-end Core M outperforming higher-end Core M CPUs in some benchmarks. Here's the article:

http://anandtech.com/show/9117/analyzing-intel-core-m-performance

So basically, yes, GHz matters, but the end user can't always exert full control over the overall system design, which can leave performance on the table or make a lower-end part perform better depending on the workload. It's kinda interesting stuff.

If the cooling of the device affects its performance that much, they should have to list what it actually runs at, not just the box performance.

Looking at the article, the i5-5200U (2C/4T) does beat the Core M 5Y10 (2C/4T) due to its higher clock speed in all the tests.

The only test it doesn't win is the DOTA 2 FPS test. The i5 should have the better GPU, an Intel HD 5500 vs. the HD 5300 on the Core M, but Dell says their i5 laptop has an Intel HD 4400. So if that's true, then yes, it would lose to the Core M.
 
If the cooling of the device affects its performance that much, they should have to list what it actually runs at, not just the box performance.

Looking at the article, the i5-5200U (2C/4T) does beat the Core M 5Y10 (2C/4T) due to its higher clock speed in all the tests.

The only test it doesn't win is the DOTA 2 FPS test. The i5 should have the better GPU, an Intel HD 5500 vs. the HD 5300 on the Core M, but Dell says their i5 laptop has an Intel HD 4400. So if that's true, then yes, it would lose to the Core M.

I don't at all disagree that the i5 has an advantage over the Core M, and that advantage is due to clock speed. However, it has a clock speed advantage in the first place because it has a higher power and thermal budget (15 watts for the i5 vs. 4.5 watts for the Core M). What I'm getting at is that thermal and power constraints, which can be partly controlled by the OEM to achieve a certain device skin temperature, play a significant role in the performance of a given piece of hardware on a given workload, and that isn't reflected in the processor's spec sheet where clock speeds are listed, or even in the processor's relative ranking by model number and sale price. That is what Anandtech was getting at in that analysis.
 
The whole idea behind clock speed is to give the consumer a benchmark by which to judge how fast a chip actually is. Today, with all of the creative marketing and proprietary tweaking taking place, it's hard to know exactly what you are getting.

It's still viable as a specification within a specific family. You can't compare the clock speeds of an FX-9590 and a Core i7 5960X and make a judgment call on that information alone; you would be led to the wrong conclusion. Comparing CPUs in the same family that differ only in clock speed is useful, though. The problem is that there are so many different processor lines, families, and per-model feature sets within the same series that telling CPUs apart requires more effort than it did back in the day.
 
We need to know the output of the CPU and not the speed, like how we rate hard drives.
 
We need to know the output of the CPU and not the speed, like how we rate hard drives.

There is a rating like that called FLOPS (floating-point operations per second).

It's not perfect, but it is better than using clock speed, and it can be used to compare different architectures to each other.
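
The theoretical peak is just arithmetic: cores x clock x FLOPs per cycle. Quick example (the 32 FLOPs/cycle figure assumes an AVX2 core with two FMA units; adjust for your architecture):

```python
# Theoretical peak FLOPS = cores x clock x FLOPs per cycle.
# 32 single-precision FLOPs/cycle assumes AVX2 with two FMA units
# (8 floats x 2 ops per FMA x 2 units); other architectures differ.
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int = 32) -> float:
    return cores * clock_ghz * flops_per_cycle

print(peak_gflops(4, 4.0))  # a 4 GHz quad core: 512.0 GFLOPS peak
```

Real workloads land well below peak, which is part of why it's not perfect.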
 

What are you blabbering about?

I said I sure could use a factory 5GHz chip for certain application workloads I have, and now you're going on about how it would not help at all. I have OC'd a 4790K from 4.0 to 4.8GHz, and in the applications in question (OCR) the output just goes up. On a 200,000-page doc, 4GHz vs. 5GHz will make a sizeable time difference. Is there a limit to the benefit of faster clocks? Sure. But I see a 25% reduction in time from 4 to 4.8GHz in OCR.

So it sure as shit would benefit me.
 
That is not the issue being described in the article. The issue is that identical CPUs (as listed on spec sheets) can now perform very differently depending on implementation, because they all dynamically vary clock speeds.

That isn't a new issue either, as laptops are completely custom one-off designs, not carbon copies of a reference design from Intel with nothing changed but a sticker.

Take a look at this roundup of Pentium 233 MMX laptops from 1997 and explain to me why the CPU scores aren't identical:

https://books.google.ca/books?id=oO...w#v=onepage&q=thinkpad 770 benchmarks&f=false

The cooling designs that OEMs and ODMs use are typically atrocious and change almost weekly. But that's just one piece of the puzzle. The locations of the memory, chipset, and I/O are all different and are bound to have some effect on system performance.

Brand-name PCs, even desktops, should always be reviewed as complete systems. Period. Specs don't matter if the manufacturers can change whatever they like.
 
I cringe every time I read this thread topic because this has been obvious for over a decade now.
 
I don't at all disagree that the i5 has an advantage over the Core M, and that advantage is due to clock speed. However, it has a clock speed advantage in the first place because it has a higher power and thermal budget (15 watts for the i5 vs. 4.5 watts for the Core M). What I'm getting at is that thermal and power constraints, which can be partly controlled by the OEM to achieve a certain device skin temperature, play a significant role in the performance of a given piece of hardware on a given workload, and that isn't reflected in the processor's spec sheet where clock speeds are listed, or even in the processor's relative ranking by model number and sale price. That is what Anandtech was getting at in that analysis.

Which is why I said that the OEM/manufacturer should have to list what their device can actually do at, say, 23C ambient, instead of the chip's stock speeds.

But that'd require some truth in advertising. :(
 
This essentially started with the Pentium 3. This question gets brought up every year, and the answer is the same: "YES."

I don't agree. I think clock speed is still relevant, but it's not the only metric involved, because the architectures have advanced.

I say this because, at the same time, you can't claim that, given an identical architecture, a difference in clock speed is irrelevant.
 
Duh doy! That's why we moved to core count as the best processor specification.
 