10GHz Processors By 2011


Skynet disagrees: :mad:
pdcpu_rbcpu1_big_compare.jpg
 
Moore's law became irrelevant once Intel and AMD realized that silicon just wasn't going to cut it.

That happened around the time of the Athlon X2...

So screw Moore's law, My uncle's last name is Moore, he don't know nothin' bout 'puters

:D
 
I don't think anyone predicted how hard it would be to overcome the power wall as you crank up the clocks. Ultimately, that meant the end of the NetBurst and 10 GHz chip roadmaps.
 
I don't think anyone predicted how hard it would be to overcome the power wall as you crank up the clocks. Ultimately, that meant the end of the NetBurst and 10 GHz chip roadmaps.

This. That's why multi-core is the focus in CPUs these days, as it's our best hope for faster systems.
 
The comments on that article are absolutely ridiculous. This one is actually 10% correct, though.

"Personally, I think there will be problems making microprocessors go as fast as Moore's law would predict, due to RF interference generated by the ever shorter wavelengths of the data pulses internal to the processor, and due to excessive heat. My ideal view of a future computer would be massive multiprocessing – say, multiple 10-GHz chips each with their own memory and bus, working in parallel or each of them running different programs when I am multitasking."
 
The original prediction was based on the concept of single-core computing just getting faster and faster as time went by. Obviously, there are a lot of issues (and laws of thermodynamics and other things at work too) which simply make such a path untenable and unworkable.

Somewhere along the line someone got really wise and came to the conclusion that "parallel processing is the way" and because technology just kept producing smaller and smaller parts, well...

I know people don't look at multiple cores as additive but, I would say that with the introduction of the first Core 2 Quad processor, the much loved Q6600, Intel matched and fulfilled that prediction even in spite of it being just a 2.4 GHz processor natively (ok, the 4 x 2.4 = 9.6 GHz of potential effective processing power isn't 10 GHz, but dammit, it's close enough).

Of course, with the advent of quad cores that enabled Hyper-Threading to eke out even more performance, I would still say we've gone past the 10 GHz processing power prediction, and things will just get faster as time goes by - but NOT by increasing the raw clock speed of the cores/processors. Instead, it'll be done by massively parallel processors... 8 cores, 16, 32, 64, 128 cores...

Works for me.
 
Intel should have stayed with the Pentium 4 design. The GHz war was a lot more fun. I remember back then I'd always argue that I had more MHz/GHz than you, lol. Stuff was a lot more fun before. I feel like we are hitting a wall now, though. Not much has gone past 3 GHz stock-wise yet.
 
Sorry, forgot to quote. The new six-core Intel processor has more than one QPI link, so it is faster than 10x.

I've seen reference to a 6.5 GHz Phenom II X4 by an end-user, and 7 GHz by AMD themselves.

And an 8 GHz Pentium 4.


I just love the first comment, though. "If 10 GHz is the best that Intel can do by 2011, AMD or somebody else is going to eat their lunch."

I do agree with the "980X counts as 20 GHz", though. At the time the article was written, they were both misunderstanding Moore's Law and not taking into account multiple cores. For what the prediction meant (even if they didn't know it), we've exceeded it by a factor of 2.

Intel didn't quite hit the front side bus number, though. They used the NetBurst FSB as a basis: 100 MHz quad-pumped on a 64-bit bus, which is 3.2 GB/s. The fastest QPI is 25.6 GB/s, not quite 10x (8x, to be exact).
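For anyone who wants to sanity-check those numbers, here's a quick back-of-the-envelope calculation in Python (the bus figures are just the ones quoted above):

# Rough bus bandwidth comparison using the figures quoted above.
fsb_clock_hz = 100e6               # NetBurst front side bus base clock
transfers_per_clock = 4            # quad-pumped
bus_width_bytes = 8                # 64-bit data bus
fsb_bandwidth = fsb_clock_hz * transfers_per_clock * bus_width_bytes
print(fsb_bandwidth / 1e9)         # 3.2 (GB/s)

qpi_bandwidth = 25.6e9             # fastest QPI, as quoted
print(qpi_bandwidth / fsb_bandwidth)   # 8.0, so "not quite 10x"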
 
What I find more interesting are the comments by the readers way back in 2000. Most comments were that 2010 was going to be all great and stuff for computers, BUT from what I can see, most of it is still the same ol' same ol', just a little faster.

What? Seriously...you don't use computers for anything now that you didn't use them for in 2000? Even if hardware improvements have disappointed (I don't think they have), software improvements have exploded in the past ten years.

Also, we may not have made improvements in raw clock speed, but we have seen substantial improvements in IPC and parallel processing. In terms of total processing power, I would say a well-threaded application would perform more than 10x as well on a 2010 computer as on a 2000 computer. Intel has met their goal; they just took a different path than they anticipated.

Moore's Law simply states that the number of transistors on a chip will double roughly every two years. Willamette had 42 million transistors; the 980X has 1.17 billion. Moore's Law predicts 42 million * 2 ^ ((2010 - 2000) / 2) = 42 million * 2 ^ 5 = 1.344 billion, so Intel has very nearly fulfilled it over that ten-year span (and the year's not over yet).
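If anyone wants to plug the numbers in themselves, here's the same arithmetic as a quick Python check (transistor counts as quoted above):

# Moore's Law check with the transistor counts quoted above.
willamette = 42e6                          # Willamette, 2000
i7_980x = 1.17e9                           # Core i7-980X, 2010
years = 2010 - 2000
predicted = willamette * 2 ** (years / 2)  # doubling every two years
print(predicted / 1e9)                     # ~1.344 (billion)
print(i7_980x / predicted)                 # ~0.87, i.e. very nearly fulfilled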

I do agree with the "980X counts as 20 GHz", though. At the time the article was written, they were both misunderstanding Moore's Law and not taking into account multiple cores. For what the prediction meant (even if they didn't know it), we've exceeded it by a factor of 2.

QFT.
 
^EDIT: Well, almost QFT. As I said, by strict interpretation of Moore's Law we've just nearly met it. Applying the same law to processing power, we may very well have exceeded it.
 
I know people don't look at multiple cores as additive but, I would say that with the introduction of the first Core 2 Quad processor, the much loved Q6600, Intel matched and fulfilled that prediction even in spite of it being just a 2.4 GHz processor natively (ok, the 4 x 2.4 = 9.6 GHz of potential effective processing power isn't 10 GHz, but dammit, it's close enough).

Rather than additive, I'd say it's just a natural progression. 1 GHz today does a heck of a lot more than 1 GHz ten years ago :)
 
^EDIT: Well, almost QFT. As I said, by strict interpretation of Moore's Law we've just nearly met it. Applying the same law to processing power, we may very well have exceeded it.

And actually, I'm pretty sure there are some Xeon processors that are either out or coming out soon that are well over ~1.3B transistors.
 
Rather than additive, I'd say it's just a natural progression. 1 GHz today does a heck of a lot more than 1 GHz ten years ago :)

Whoever decided to measure processing power in GHz deserves the same painful death as whoever decided to measure the brightness of light bulbs in watts. :(
 
Well, if you count that there are now six cores running at 3.33 GHz, in highly threaded applications that is like one core at 20 GHz.

I'd rather have a single core running at 10 GHz than six cores running at 3.33 GHz, since so many tasks are not amenable to parallelization. Of course, multi-core 10 GHz chips would be even better, but as someone who is trying to squeeze speed out of a CPU, it's easier to get that speed from a single faster core than from multiple slower cores. There are advantages to multi-core, but they mostly revolve around running multiple applications simultaneously.
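To put a rough number on that trade-off, here's a little Amdahl's-law sketch in Python comparing a hypothetical 10 GHz single core against six 3.33 GHz cores. The serial fractions are purely illustrative assumptions, and it pretends both chips do identical work per clock:

# Amdahl's-law sketch: hypothetical 10 GHz single core vs. six 3.33 GHz cores.
# The serial fractions below are illustrative assumptions, not measured values.
def effective_speed(clock_ghz, cores, serial_fraction):
    # Relative time for one unit of work, normalized to a 1 GHz single core.
    time = (serial_fraction + (1 - serial_fraction) / cores) / clock_ghz
    return 1 / time

for serial in (0.05, 0.20, 0.50):
    single = effective_speed(10.0, 1, serial)
    hexa = effective_speed(3.33, 6, serial)
    print(f"serial={serial:.2f}  1x10GHz: {single:.1f}  6x3.33GHz: {hexa:.1f}")

On that toy model the hexacore only keeps its lead while the serial fraction stays under roughly 20%; past that, the fast single core wins, which is exactly the point about tasks that don't parallelize.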

And never forget that we can put two CPUs on a motherboard for dual-CPU goodness... imagine if we could have had the 10 GHz parts with 2 or 4 CPU cores (and maybe even hyperthreading!) right now......:eek:
 
The comments on that article are absolutely ridiculous. This one is actually 10% correct, though.

"Personally, I think there will be problems making microprocessors go as fast as Moore's law would predict, due to RF interference generated by the ever shorter wavelengths of the data pulses internal to the processor, and due to excessive heat. My ideal view of a future computer would be massive multiprocessing – say, multiple 10-GHz chips each with their own memory and bus, working in parallel or each of them running different programs when I am multitasking."

Um.... was that ME? That sounds like something I would have written back then, as I had postulated a problem with continuing increases in frequency causing RF that interfered with nearby circuits in a CPU back in 1995, while working on my master's... so this sounds like something I would have said 10 years ago in the original discussions!
 
I don't think CPU makers have such a massive goal in mind anymore. Instead of increasing clock speeds, more and more cores are being added.
 
Seems to me just about every architecture is becoming less efficient than the previous one (don't take that out of context); however, scalability seems to be increasing.
 
Seems to me just about every architecture is becoming less efficient than the previous one (don't take that out of context); however, scalability seems to be increasing.

Less efficient by what definition? Processors do more today per MHz, per dollar, and per watt of power usage than ever before.
 
Given that the article was written when multiprocessing predominantly meant single-core chips in multiple sockets, it is kind of irrelevant in hindsight.

Of course, now we've moved more along the lines of parallel processing and multi-threading, and there is still room to grow in that direction, since it has only really gone mainstream starting with the Intel Core/AMD Phenom lineups. As has been mentioned before, Intel and AMD have been pushing processing power and efficiency vs. cost and energy used (something Intel has run with since AMD "schooled" them with the Athlons during the P3/P4 era). Improvements in compilers and programming techniques will help push multi-threading and parallel processing even further. While gaming currently does well with just dual core, some other apps (such as video editing, 3D rendering, etc.) tend to do even better with quad core (assuming they are programmed to take advantage of the available hardware). We the users just have to hope that programmers don't use multi-core as a crutch for sloppy programming (I'm looking at you, GTA4).
 
Sometimes this multi-core business feels like one of those stupid "Buy 3, pay for 2" deals that some stores run in an attempt to sell stuff people don't really want...
During the "MHz wars", you could upgrade from a 500 MHz Pentium III to a 1.2 GHz Athlon, and everything would run over twice as fast, including poorly optimized games and applications. Now you're much more dependent on developers writing good, multi-threaded code...
 
Sometimes this multi-core business feels like one of those stupid "Buy 3, pay for 2" deals that some stores run in an attempt to sell stuff people don't really want...
During the "MHz wars", you could upgrade from a 500 MHz Pentium III to a 1.2 GHz Athlon, and everything would run over twice as fast, including poorly optimized games and applications. Now you're much more dependent on developers writing good, multi-threaded code...

You can't blame the technology for developers being lazy. The MHz wars were also largely marketing bullshit. We enthusiasts now know (or should) that superior architecture means far more than higher clock speeds.

3 GHz i7 > 10 GHz single core any day of the week.

I would like to see a comparison of past flagship chips vs current chips. Disable all but one core on an i7 and cut the speed down to match each chip.
 
This prediction was also made when Intel had more competition, a la AMD. AMD somewhat dropped the ball, so Intel was sitting pretty, all on its own at the top. Now tell me this: what company is going to start releasing chips that would cannibalize its own existing stock? Intel hasn't even tried to release faster processors because there is no one to compete with, most consumers don't need that power, and it doesn't stand to make more money by doing so. The only people who need a 10 GHz processor are currently buying two- and four-processor systems instead.
 
You can't blame the technology for developers being lazy. The MHz wars were also largely marketing bullshit. We enthusiasts now know (or should) that superior architecture means far more than higher clock speeds.

3 GHz i7 > 10 GHz single core any day of the week.

I would like to see a comparison of past flagship chips vs current chips. Disable all but one core on an i7 and cut the speed down to match each chip.

True, you can't blame lazy developers on the technology. But the thing is, higher clock speeds (and superior architectures) can compensate for developers being lazy. Your crappy code will always run nearly twice as fast on a 2 GHz processor compared to an otherwise identical 1 GHz CPU. Whether or not it will also run significantly faster on a new architecture depends to some extent on the compiler being used.

Parallelism at the thread level (multi-core, Hyper-Threading) requires that developers make the effort to optimize their code and separate it into threads that can each do a lot of work on their own. My point is that developers have much more responsibility these days, because they can't count on ever-increasing clock speeds to solve all their problems.
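As a concrete (if contrived) illustration of what "separating work into threads" looks like, here's a minimal Python sketch using the standard library's process pool; the toy workload and the chunking are my own assumptions, not anything from the article:

# Minimal sketch of splitting an embarrassingly parallel job across cores.
# The workload and chunk size are purely illustrative.
from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    # Stand-in for real per-chunk work (filtering a frame, encoding a block, etc.).
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with ProcessPoolExecutor() as pool:    # defaults to one worker per CPU
        total = sum(pool.map(crunch, chunks))
    print(total)

If the problem doesn't break into independent chunks like that, the extra cores just sit there, which is the whole responsibility shift being described above.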
 
Intel was still ramping up clock speeds then; they couldn't have anticipated Prescott giving them so much heat!
 
You still don't add the clocks. :p

True, but would the single 10 GHz processor perform calculations faster than the hexacore 3.33 GHz processor? Hard to say, but I'd give the edge to the multi-core processor, given of course that the program properly took advantage of multi-threading. As long as the computational performance is there, the clock speed is secondary IMO.
 
You still don't add the clocks. :p

Maybe not, but a 10 GHz single core ain't gonna crunch video with x264 like a quad/hex core will, ever. That's why people always say <x> GHz of combined processing power... code is moving towards proper multi-threading as time goes by, it's the only way we're ever going to really get things done.

Our brains are the most massively parallel computers ever to exist in this reality; Intel has a long damned way to go, I tell ya... :D
 
True, but would the single 10 GHz processor perform calculations faster than the hexacore 3.33 GHz processor? Hard to say, but I'd give the edge to the multi-core processor, given of course that the program properly took advantage of multi-threading. As long as the computational performance is there, the clock speed is secondary IMO.

It's not hard to say at all. Unless the app running the calculations is coded like shit, the hexacore should smoke it, assuming it's the same architecture.
 
It is more the other way around. NetBurst died because they couldn't ramp up the clock. At 10 GHz, NetBurst still would have been pretty awesome.

If they had taken NetBurst to 32 nm, I'm pretty sure 10 GHz would not have been very far off. They could have at *least* hit 6-8 GHz :). I sometimes wish they would try refining NetBurst at 32 nm just to see what it could do.
 
That was made at a time when Intel was thinking they'd just keep scaling up single CPUs, at the height of the P4 madness, before they canceled the 4 GHz P4 chip.
Then they came to their senses and re-engineered the current line of CPUs from their laptop line.
Nowadays, even if the actual clock speeds are lower than some of the P4's, they do much more work per clock cycle, plus they have more cores.
 