ISSCC 2015 - Moore's Law Delivers

HardOCP News

Moore’s Law continues delivering and chip designers continue to make use of the ever more sophisticated technology. At this year’s International Solid State Circuits Conference, papers describe progress made possible by Intel’s leading 14nm process, including the first complete wireline transceiver on 14nm technology (60 percent smaller than the smallest comparable link) and the world’s smallest SRAM bitcell, reducing peak write power by 24 percent compared to actively biased circuits. Intel Labs presents research in chip design to boost performance and energy efficiency. This includes a graphics execution core for SoCs with an 82 percent reduction in energy consumption at near-threshold voltage and 75 percent higher frequency at maximum performance.
 
We've been stuck in the 3GHz-4GHz range for what? Maybe 8 years now?

Even the old Pentium 4 "E"s were hitting 3GHz with virtual cores about 13 years ago, maybe? (Edit: just checked...it was November 2002 that the Northwood cores came out.) Meanwhile my wife's i3 laptop putts along at 1.7GHz. Those "power savings" really didn't do much for us.


Multiple cores are nice, but 16 cores isn't going to do squat unless you are running a server or media platform of some sort.

And Intel wonders why they aren't selling squat...

About bloody time they get some serious increases.
 
Higher "speeds" =/= performance
 
There is a lot more to processor power than simply frequency (IPC, low-penalty prefetch, and I'm sure a lot of new hotness that I haven't bothered to learn).
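
A toy illustration of that point, as a little Python sketch (the IPC numbers here are invented for the example, not measurements): useful work per second is roughly IPC times clock, which is how a wider modern core at a lower clock beats an old high-clocked one.

# Toy model: throughput ~ IPC x clock. IPC figures are illustrative guesses.
def perf_gips(ipc, ghz):
    """Billions of instructions retired per second."""
    return ipc * ghz

p4_style = perf_gips(ipc=0.8, ghz=3.8)  # long NetBurst-style pipeline
i7_style = perf_gips(ipc=2.0, ghz=3.4)  # wider core, smarter prefetch

print(f"P4-style @ 3.8GHz: {p4_style:.1f} GIPS")
print(f"i7-style @ 3.4GHz: {i7_style:.1f} GIPS")  # wins despite the lower clock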
 
It's not just clock speed, but the number of cores and the OPS (operations per second). Intel has been making each generation slightly faster at the same clock speed. The current laptops I've been buying are clocked at 2.7GHz, yet the chips benchmark about as fast on most things as the previous generation at 2.9GHz, which works out to roughly a 7 percent gain per clock (2.9 / 2.7 ≈ 1.07).


I was running a single-core 3.4GHz P4 @ 3.8GHz for several years before finally upgrading to my current i7. It was a huge performance boost at the time, as I was a few generations behind.

However, I've now been running my i7-860 for over 5 years, and I still don't see a need to upgrade.
Even with the higher ops and higher overclock speeds, I'd be looking at maybe a 33% increase, which is hardly noticeable unless you are running benchmarks.

As for more cores, I don't see the need on most desktop systems. Beyond 4 cores, the extras really won't help most people much.

Servers are a different story. I do like having dual CPUs with 6-8 cores each, as it really helps on large SQL servers and with virtualization.
 
For people who don't understand what Moore's law really is:

"Moore's law" is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years.

That's transistor count. Like the number of cute little flippy-floppy switch thingies. That number doesn't translate directly into performance or how many giggle-hertz a chip has, so try not to sticky-glue the two together, because it's really not the case.

Beyond that, Moore made that statement in 1965 and said it'd apply for about 10 years; past that, things were kinda iffy. Which, I personally think, makes it kinda silly to even dredge it up now like it was meant to be some great universal constant and not just an offhanded observation about the short-term future of computing made 50 years ago.
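
For what it's worth, here's what "doubling every two years" compounds to if you take it literally, as a little Python sketch (the 1965 baseline count is a round number picked for illustration, not a real chip):

# Naive compounding of Moore's observation: counts double every 2 years.
base_year, base_count = 1965, 64  # illustrative baseline, not a real part
for year in range(1965, 2016, 10):
    doublings = (year - base_year) / 2
    print(year, f"~{base_count * 2 ** doublings:,.0f} transistors")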
 
I know what Moore's law said...a doubling of transistors.

While latency and IPC have certainly improved over time, overall per-core performance (MIPS) has not kept pace with the shrinking die sizes. (Thank god NetBurst is gone.) In other words: all those fancy die shrinks were spent on duplicating cores and adding functionality instead of increasing raw speed. With the exception of GPGPU work (GPU matrix functions), code still depends on the raw speed of the CPU and memory, because it has been very hard to thread CPU tasks well. And all those new circuits bring signal noise, additional power consumption, and signal-propagation limits as circuit paths increase in length.

This is one reason why ARM has been steadily increasing in raw performance.
 
As the Pentium Anniversary Edition (overclockable to 4+ GHz) shows, it's not the cores but the raw speed that makes the most difference.
 
Well, that really depends a lot on the workload in question and how the software running that work was written. IPC on a per-core basis is super important, but so is the software's ability to scale (understanding that there's some stuff that can't be broken out and issued to lotsa different cores), so YMMV is totally something to keep in mind when deciding whether a processor is actually faster because it contains a gajillion CPU cores or one crazy-cakes fast one.
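
The "stuff that can't be broken out" part is basically Amdahl's law: the serial fraction caps your speedup no matter how many cores you throw at it. A quick Python sketch (the 80 percent parallel figure is just an assumed example):

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), p = parallel fraction.
def speedup(cores, parallel_fraction):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for n in (2, 4, 8, 16):
    print(f"{n:2d} cores -> {speedup(n, 0.8):.2f}x with 80% parallel code")
# 16 cores only buys ~4x here, which is why a pile of cores can disappoint.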

Which all still doesn't have a lot to do with Moore's Law (honestly, that thing sorta grew into a beast all its own and we really should just stoppit already) since transistor count is only one of a lot of factors to consider.
 
Are we really getting faster, or are we just getting "different" ?

It seems like every 6 months there's a chipset/processor change that fucks up any idea one may have had of buying a solid MB and keeping it for years, and simply updating the processor from time to time.

Yet I'm not seeing the kinds of improvements that Moore's Law is all about.

It seems like it's new for the sake of new and to re-generate sales. Not real improvements (other than maybe power and heat reduction).
 
http://slashdot.org/story/11/01/04/1922239/45-years-later-does-moores-law-still-hold-true

In 1965, an article in "Electronics" magazine by Gordon Moore, the future founder of chip juggernaut Intel, predicted that computer processing power would double roughly every 18 months. Or maybe he said 12 months. Or was it 24 months? Actually, nowhere in the article did Moore actually spell out that famous declaration, nor does the word 'law' even appear in the article at all.

http://www.foxnews.com/tech/2011/01/04/years-later-does-moores-law-hold-true/

"Yes, it still matters, and yes we're still tracking it," said Mark Bohr, Intel senior fellow and director of process architecture and integration. The company is certainly one reason Moore's Law has remained in the public's mind: A section on Intel's website details the law, explaining that "his prediction, popularly known as Moore's Law, states that the number of transistors on a chip will double about every two years. Intel has kept that pace for over 40 years, providing more functions on a chip at significantly lower cost per function."

Bohr told FoxNews.com that doubling the number of chips is far less important these days than making them smaller, which has other tangible benefits for consumers.

"Moore’s law isn’t tracking exactly, but the spirit of the law is still alive in that the dies are still shrinking, and CPUs become more and more capable every 12-18 months or so," said Joel Santo Domingo, lead analyst, desktops at PCMag.com. His former boss agrees.

"I did the math, and while it’s not exactly doubling every two years, it’s pretty close," agreed Michael Miller, the award-winning math geek and former editor in chief of PCMag.com.

Maybe, as Johnny Depp said in the Pirates of the Caribbean movies, it's really more of a guideline?

"Semiconductor chips haven't actually tracked the progress predicted by Moore's law for many years," said Tom Halfhill, the well respected chip analyst with industry bible the Microprocessor Report.

Halfhill is quick to note that Moore's law isn't truly a scientific law, merely "an astute observation." In fact, since Gordon Moore made his observation in '65, the law has been modified and manipulated to fit the actual progress of semiconductors to such an extent that it can arguably be said to have predicted nothing.

It's also been so frequently misused that Halfhill was forced to define Moron's Law, which states that "the number of ignorant references to Moore's Law doubles every 12 months."

Halfhill wrote a paper for "The Microprocessor Report," published in December of 2004, which debunked the connection between Moore's Law and reality. In it, he noted that Moore's Law was more like Bode's law, an observation by early astronomers that each planet in our solar system is roughly twice as far from the sun as the planet in the next inner orbit.

"Modern astronomers don't expect the distances between planets to add up exactly, and they don’t expect other solar systems to conform to the same rules," Halfhill explained. Likewise, engineers don't really require the latest generation of computer chips to exactly meet Moore's Law either.

In fact, to make it track more closely to actual transistor counts, he proposed Epstein's amendment, named after a fellow editor at "The Microprocessor Report," which adds a leveling factor that accounts for the law of diminishing returns.

Halfhill is quick to point out that the Law is meaningless -- but the idea that computing keeps relentlessly advancing, that's what's really important.
 
It's all about mobile (think smartphones, tablets, etc.). Longer battery life, lower power consumption, heat reduction, and smaller sizes are all that matter.
 
I see.
 