Moore's law and CPUs

Sometwo (Limp Gawd) · Joined Nov 7, 2004 · Messages: 202
I've been out of the hardware world for a while, but I remember in the '90s CPUs were:

100MHz
133MHz
166MHz
200MHz
...

Every single release was +33MHz. How does that work? You're telling me every single breakthrough in CPUs just happened to be an additional 33MHz, or consistent with Moore's law?

Are they trying to squeeze money out of us, or has there ever been a giant breakthrough in CPUs that went beyond Moore's law, such as a big jump from 1 GHz to 2 GHz like you would expect to happen once in a while?
 
In the history of CPUs, major speed increases came alongside one of two things (sometimes both): a new architecture or a decrease in die size. CPUs in the same family were also available at several speeds. For example, the original Pentium (P5-based) launched in 1993 at three clock speeds, 50MHz, 60MHz and 66MHz, the only difference being that the lower-clocked versions "might" not have passed testing at higher speeds. Then, when the process shrank from 0.8µm to 0.6µm in 1994, the speed went up to 100MHz. This pattern of speed increases alongside die shrinks went on until 1997 and the release of the Pentium II, based on the P6 architecture, and the same trend continues today.
 
I hate to be the party pooper, but Moore's law referred to transistor counts and had nothing to do with clock rates. Clock rate has always been kind of a superficial way of measuring a CPU. It didn't always feel that way, because our entire world was single-threaded, but eventually even the fastest single-threaded CPU (the P4) ran like crap compared to multi-threaded designs. It seems kind of silly looking back at how you used to wet yourself getting a 33-50MHz overclock stable on your system. Then 100MHz became the norm, and now 1GHz is pretty average, but unlike those days of old you hardly see any improvement without some benchmark suite hammering your CPU.
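
Just to put hand-wavy numbers on the transistor-count thing, here's a quick back-of-the-envelope sketch in Python. It assumes the usual rule-of-thumb doubling period of about two years and uses the original Pentium's roughly 3.1 million transistors as a starting point; the figures are illustrative, not exact:

Code:
# Rough Moore's-law projection: transistor count doubles about every 2 years.
# Starting point: original Pentium (1993), roughly 3.1 million transistors.
# Ballpark numbers only, just to show the shape of the curve.

def projected_transistors(year, base_year=1993, base_count=3.1e6, doubling_years=2.0):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1993, 1997, 2001, 2006, 2011):
    print(year, f"{projected_transistors(year) / 1e6:,.0f} million")

# Note the curve says nothing about clock speed -- that's the whole point:
# Moore's law is about transistor budget, not MHz.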

The only place I can point to where MHz stopped being relevant was around the 2-3GHz mark. I remember feeling the improvement an overclock made during the '90s and early 2000s; nowadays it's pretty much a waste to do, since the power draw skyrockets, so too does the heat, and I see nothing from it except maybe 1FPS faster H.264 encoding. Perhaps it's like that whole Google 1Gbps fiber debate the cable companies are having: build it and they will come, blah blah, because I think computing nowadays is infinitely more complex than it was 10 years ago.
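
On the "power draw skyrockets" bit, the usual rule of thumb is that dynamic power scales roughly linearly with clock and with the square of voltage (P ≈ C·V²·f). A tiny sketch with made-up overclock numbers shows why a modest clock bump that needs extra voltage gets expensive fast:

Code:
# Rule-of-thumb dynamic power: P ~ C * V^2 * f (capacitance, voltage squared, frequency).
# The stock voltage/clock below are made up; only the ratios matter.

def relative_power(v, f, v0=1.20, f0=3.0):
    """Power relative to stock voltage v0 (volts) and stock clock f0 (GHz)."""
    return (v / v0) ** 2 * (f / f0)

print(f"stock (1.20V, 3.0GHz)   : {relative_power(1.20, 3.0):.2f}x")
print(f"+10% clock, same volts  : {relative_power(1.20, 3.3):.2f}x")
print(f"+10% clock, +10% volts  : {relative_power(1.32, 3.3):.2f}x")  # ~33% more power for 10% more clock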

Are they milking consumers? Simple answer: yes. Intel especially, but when you're the king of the damn hill, who is really going to say much, let alone do anything about it? I don't believe there have been major breakthroughs in hardware, just breakthroughs in design by the artists who call themselves microprocessor engineers. Making things smaller and stuffing more of what we know into the same size package is pretty much all we've been doing for the past 20 years. Some of the logic that has been added onto CPUs has surely aided the evolution of raw CPU horsepower, though.

The more I researched how a CPU really works, the more I realized it's dumber than people think. Each part is so dumb, yet so fast, that we stuff a crapload of them in there and hope it works. I believe I was watching some video on YouTube about Intel engineering the 45nm chips, back when that was a breakthrough, and how all the engineers were shocked that it could even be done, let alone that hundreds of millions of transistors could come out perfect without one damaged during the whole process.
 
You can split the pipeline even more and get even higher clock speeds, but that wouldn't make it any faster.

A giant breakthrough in CPUs was branch predictors reaching a ~1% error rate.
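
To put hand-wavy numbers on both of those points, here's a toy model (all figures invented, not real CPU data): splitting the pipeline into more stages raises the clock, but every mispredicted branch now flushes more stages, so the net gain is tiny, while dropping the mispredict rate at the same depth helps a lot more.

Code:
# Toy pipeline model (made-up numbers, just to show the trade-off):
#   cycle time = work per instruction / stages + per-stage latch overhead
#   time/instr = cycle time * (1 + branch_fraction * mispredict_rate * stages)
# Deeper pipe = faster clock, but every branch miss flushes more stages.

WORK_NS = 10.0      # total logic work per instruction
LATCH_NS = 0.5      # overhead added by each pipeline stage
BRANCHES = 0.2      # fraction of instructions that are branches

def stats(stages, mispredict_rate):
    cycle = WORK_NS / stages + LATCH_NS
    per_instr = cycle * (1 + BRANCHES * mispredict_rate * stages)
    return 1.0 / cycle, per_instr   # (clock in GHz, ns per instruction)

for stages, miss in [(20, 0.10), (40, 0.10), (20, 0.01)]:
    ghz, ns = stats(stages, miss)
    print(f"{stages} stages, {miss:.0%} mispredicts: {ghz:.2f} GHz clock, {ns:.2f} ns/instr")

# Doubling the depth buys ~33% more clock but barely any real speed;
# a much better predictor at the same depth is the bigger win.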
 
Without one damaged? There are many thousands of dead transistors on every processor.
Since we have so much space on processors now, we can add redundancy.
If you know there's an important circuit on the chip, and that many other things would be affected if it were dead, just build two of them, or three, and design the circuit so it can be tested and fall back to one of the backup circuits if needed.
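
As a software analogy for that test-and-fallback idea (purely illustrative; real chips do this with fuses, spare rows and spare cores at manufacturing or boot time):

Code:
# Software analogy of built-in redundancy: several copies of a critical unit,
# a self-test, and a fallback to the first copy that passes.

class CircuitCopy:
    def __init__(self, name, healthy):
        self.name = name
        self.healthy = healthy

    def self_test(self):
        # Stand-in for a built-in self-test (BIST) pattern.
        return self.healthy

def select_working_copy(copies):
    """Return the first copy that passes its self-test, or None if all are dead."""
    for copy in copies:
        if copy.self_test():
            return copy
    return None

copies = [CircuitCopy("primary", healthy=False),   # a defect hit this one
          CircuitCopy("spare-1", healthy=True),
          CircuitCopy("spare-2", healthy=True)]

chosen = select_working_copy(copies)
print("using:", chosen.name if chosen else "no working copy -- bin the chip")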
 
Once CPUs get as small and power-efficient as possible, they will begin to get bigger and better again. :D

It's the only way for Intel and AMD to continue to improve design and profit.
 