DOOOM - "We're not prepared for the end of Moore's Law"

BB Gun

[H]ard|Gawd
https://www.technologyreview.com/s/615226/were-not-prepared-for-the-end-of-moores-law/

It has fueled prosperity of the last 50 years. But the end is now in sight.
...
But what happens when Moore’s Law inevitably ends? Or what if, as some suspect, it has already died, and we are already running on the fumes of the greatest technology engine of our time?
...
Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.
 
Coding properly. At the moment it's not worth coding efficiently at all. Shit could run 1000 times faster or more, but it's currently a better use of programmer time to add functionality at whatever CPU cost, because the CPUs can take it.
 
From the article:
These days Keller sounds optimistic. He says he has been hearing about the end of Moore’s Law for his entire career. After a while, he “decided not to worry about it.” He says Intel is on pace for the next 10 years, and he will happily do the math for you: 65 billion (number of transistors) times 32 (if chip density doubles every two years) is 2 trillion transistors. “That’s a 30 times improvement in performance,” he says, adding that if software developers are clever, we could get chips that are a hundred times faster in 10 years.
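Spelling out the arithmetic behind that quote (nothing new, just the doubling assumption made explicit): ten years at one doubling every two years is five doublings, so

$$2^{10/2} = 2^5 = 32, \qquad 65\times 10^9 \times 32 \approx 2.1\times 10^{12}\ \text{transistors},$$

which is presumably where the "30 times improvement in performance" figure comes from.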
We saw a ~25% IPC gain from Sandy Bridge (2011) to Kaby Lake (2016) over a period of 5 years, and now he is saying we will get 30 times the amount of performance in 10 years?
If it weren't for AMD upping the competition, we wouldn't see that kind of performance gain for decades the way Intel has milked and stagnated the market. :meh:

Even if Jim Keller does indeed know what he is talking about, and can indeed innovate and engineer to get that level of performance, Intel would artificially stagnate it as long as possible.
Unless of course there is proper competition to force their hand.

Also from the article:
Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code.
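To make that concrete, here is a minimal sketch of the kind of hot loop they are describing (my own toy example, not the actual MIT benchmark), where the "switch to C" part is the compiled inner loop and the multi-core tailoring is a single OpenMP pragma:

```c
/* Toy example in the spirit of the article's benchmark (not the
 * actual MIT code): a naive matrix multiply.  In an interpreted
 * language every add and multiply goes through dynamic dispatch;
 * in C the inner loop compiles down to a handful of instructions,
 * and one pragma spreads the outer loop across all cores. */
#include <stdio.h>

#define N 1024

static double a[N][N], b[N][N], c[N][N];

int main(void)
{
    /* Fill the inputs with something deterministic. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = (double)(i + j);
            b[i][j] = (double)(i - j);
        }

    /* The "18 processing cores" tailoring amounts to something like
     * this: build with -fopenmp and the outer loop runs on every
     * available core; without it, the pragma is simply ignored. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }

    printf("c[0][0] = %f\n", c[0][0]);
    return 0;
}
```
Built with something like gcc -O3 -fopenmp, the same arithmetic that an interpreter dispatches one operation at a time runs as tight native loops spread across all cores.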
This is also the major issue, as idiomatic pointed out - lazy, unoptimized, inefficient, and ugly code.
Things are going to get to a point where there will be few true improvements left to be made in hardware, and coders are going to be forced to start optimizing their code.

Unlike eras of the past, with the amount of code in existence these days, and programs no longer being in the KB-to-MB range but in the GB+ range, truly optimizing by hand will most likely take much more time, and may come down to AI-based optimizations.
At that point, though, programmers might be looking at finding new work if their jobs are essentially being done for them.

This is going to be one hell of a balancing act over the next 30 years, for both megacorps, and their industry engineers and programmers.
A dark cyberpunk future indeed...
 
To some degree, the move to multi cores is the way around any limit on transistor count in individual cores. That also will have a limit, as most folks won't want a coffee-table-sized personalized tracking device just to get 2048 cores in the thing.
Unless there are some major physics breakthroughs, Moore's Law was going to end somewhere around the point they started creating molecule-sized transistors out of individual atoms.
Sadly, software bloat and function bloat have done a lot to undo the progress in chip speed. Multiple layers of abstraction may make code easier to write and more portable, but all those abstraction layers eat cycles. Plus, the number of scripts on websites seems to be following a corollary of Moore's Law.
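A toy sketch of my own (not anything from the article) to put a face on the abstraction point: the same sum written once as a tight loop and once routed through a generic per-element callback, the way layered frameworks tend to structure things. The indirect call on every element is exactly the kind of cost that stacks up when there are many such layers.

```c
/* Toy illustration of abstraction overhead: summing an array
 * directly vs. pushing every element through a generic visitor
 * callback.  The callback version forces an indirect function
 * call per element and defeats inlining/vectorization. */
#include <stdio.h>
#include <stddef.h>

#define COUNT 1000000

static double sum_direct(const double *v, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += v[i];                  /* tight, vectorizable loop */
    return s;
}

/* "Framework" style: a generic visitor applied element by element. */
typedef void (*visit_fn)(double value, void *ctx);

static void for_each(const double *v, size_t n, visit_fn fn, void *ctx)
{
    for (size_t i = 0; i < n; i++)
        fn(v[i], ctx);              /* indirect call per element */
}

static void add_to(double value, void *ctx)
{
    *(double *)ctx += value;
}

int main(void)
{
    static double data[COUNT];
    for (size_t i = 0; i < COUNT; i++)
        data[i] = 1.0;

    double layered = 0.0;
    for_each(data, COUNT, add_to, &layered);

    printf("direct: %f  layered: %f\n", sum_direct(data, COUNT), layered);
    return 0;
}
```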
 
Coding properly. At the moment it's not worth coding efficiently at all. Shit could run 1000 times faster or more, but it's currently a better use of programmer time to add functionality at whatever CPU cost, because the CPUs can take it.

This is why we can't have nice things. Too many programmers think like that. Most of them are complete noobs.
 
Moore's law is just a stupid marketing term. It's not a law. There are no institutions of physics that acknowledge it as a fundamental law.

I laugh when I hear laymen approach Moore's law as if NASA has to fundamentally include it in calculating fuel burn to land a rover on Mars or something. Hilarious...
 
Coding properly. At the moment it's not worth coding efficiently at all. Shit could run 1000 times faster or more, but it's currently a better use of programmer time to add functionality at whatever CPU cost, because the CPUs can take it.
While it's somewhat true that coders are coding 'inefficiently', it's not that better coding methods have been found, but that tools have become available that let them get the same output with less coding. And that's helpful even if the code runs a bit slower in places.

To note, while Python, which is an advanced scripting language, is dog slow, it's mostly used for serious processing by stitching together far more efficient programs coded in languages like C, as mentioned above. The only reason to code something intensive in Python is either laziness, or that the impact of using Python instead of something faster isn't big enough to be worth rewriting it.

Many times intensive stuff is first prototyped in Python and then coded in something else later.
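A minimal sketch of the C side of that workflow (the function name and interface here are hypothetical): profiling the prototype points at one hot inner loop, that loop gets ported to C, and the rest stays in the scripting language, calling into it through something like ctypes or cffi, or as an extension module.

```c
/* Hypothetical hot loop identified by profiling the prototype:
 * a plain dot product.  Everything else stays in the scripting
 * language; only this call crosses into compiled code. */
#include <stdio.h>
#include <stddef.h>

double hot_dot(const double *x, const double *y, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += x[i] * y[i];
    return s;
}

/* Standalone test harness so the sketch compiles and runs on its own;
 * in practice this would be built as a shared library instead. */
int main(void)
{
    double x[] = {1, 2, 3, 4};
    double y[] = {5, 6, 7, 8};
    printf("%f\n", hot_dot(x, y, 4));   /* prints 70.000000 */
    return 0;
}
```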

Which is essentially what is being proposed here in the face of the so-called 'Moore's Law' no longer being in full effect. Not necessarily because there are better programming methods being discovered, but simply because some things could be coded better.


The only obvious exceptions are the vague potential of applying machine learning techniques to compilers, which is certainly happening, with uncertain benefits, and the customization of hardware, which is happening far more in other domains than in desktop CPUs. Mobile processors are certainly being customized every year to better suit their workloads, increasing responsiveness and performance in processing-intensive tasks while (sometimes) keeping power usage reasonable.

The basic trade-off is that customized hardware is extremely good at specific types of work and pretty well unsuited for more general work. As hardware enthusiasts we tend to focus on a specific mix of general and special-purpose hardware, but that doesn't mean there aren't better mixes of either.
 
Moore's law is just a stupid marketing term. It's not a law. There are no institutions of physics that acknowledge it as a fundamental law.

I laugh when I hear laymen approach Moore's law as if NASA has to fundamentally include it in calculating fuel burn to land a rover on Mars or something. Hilarious...

Thank you. I've been saying this for a long time now.

Moore's law is nothing more than an observation - it is not a physical law. Anything that humans perpetuate artificially is not a law.

So here's an idea: forget about Moore's law and continue to research and build new and better parts.

So in short change nothing. If the world comes to an end it surely won't be because Moore's 'law' failed us.
 
Generally speaking, when things earn money by running faster, you'll find the software gets more time to be designed and optimized for speed, and runs faster.

Outside of those cases, it isn't a "coding issue", it's that the people making budgets and schedules are making a conscious choice to ship sooner. Doing everything in tight C++ with solid designs which are the result of collaboration and iteration would push schedules out. The check-signers won't pay for that.

Developers are doing what our overlords tell us to do. Generally, it's fighting money / time, and thus we optimize heavily on that front. Very rarely is it optimizing for throughput, in most markets. Give me a super short schedule, I'll make you a solution of some sort. Might involve, as IdiotInCharge says, stuff like Python for internal logic. You most likely won't be floored by performance.
Give me the same task but more time and an indication performance = money, I'll give you a dramatically different solution. Performance will be much better, but it costs a lot more too.

Everything is a trade-off. "Coding properly" is making the product mandated by your boss, at least if you enjoy being a developer very long.
 
To some degree, the move to multi cores is the way around any limit on transistor count in individual cores
Not everything can be multi-threaded or SMT-compatible, and depending on the application itself, scaling efficiency with an increasing number of threads/cores can also diminish after a certain point.
Look at both Windows and Linux while performing OS updates - there are many functions and programs that are strictly single-threaded (.NET Optimization and certain gzip operations, respectively), so throwing more cores at programs like that won't fix the issue, and sometimes it is purely a coding limitation itself, with no reasonable or efficient 'fix' or optimization available.
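The diminishing-returns part has a standard formalization, Amdahl's law (my addition; the paragraph above describes it informally): if a fraction of the work is strictly serial, that fraction caps the overall speedup no matter how many cores you add. A quick sketch with made-up numbers:

```c
/* Amdahl's law: with serial fraction s, speedup on n cores is
 * capped at 1 / (s + (1 - s) / n).  The 5% serial portion below
 * is an arbitrary illustrative number. */
#include <stdio.h>

static double amdahl(double serial_fraction, int cores)
{
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}

int main(void)
{
    const int cores[] = {2, 4, 8, 16, 64, 2048};
    const int ncases = sizeof cores / sizeof cores[0];

    for (int i = 0; i < ncases; i++)
        printf("%4d cores -> %5.1fx speedup (5%% serial)\n",
               cores[i], amdahl(0.05, cores[i]));
    return 0;
}
```
With just 5% of the work stuck on one thread, even the coffee-table 2048-core box tops out a little under 20x.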

I agree with you to an extent, that most major applications and programs in existence will eventually benefit from more cores, but the code base has to be there to support it, and more cores are not necessarily a one-size-fits-all solution, regardless of the ISA, platform, or OS.
 
For all this doom talk, I quote Jim Keller...
"Moore's Law isn't dead and if you think that, you're stupid."

So stop, stop the doom talk. Just because Intel is lagging behind with their 10nm, it's not the end of the road.

5LPE (low power early) from 2018 offers 29% higher density (which puts 2019's Zen 2 on track with Moore), and CLN5FF (2019) offers over 80% more density, which puts us well ahead.
 
We are at nano at the moment. We still have pico, femto, atto, zepto and yocto to go. I would say plenty of room for discovery, development and revolutions.
 
We are at nano at the moment. We still have pico, femto, atto, zepto and yocto to go. I would say plenty of room for discovery, development and revolutions.
You may want to add the /s, as someone might take this seriously. Once you hit pico, that's it. Atoms don't get smaller after that, and hydrogen, while tiny, isn't exactly usable lol.
 
Even if Jim Keller does indeed know what he is talking about, and can indeed innovate and engineer to get that level of performance, Intel would artificially stagnate it as long as possible.
Unless of course there is proper competition to force their hand.

I mean, Keller was a lead designer and team manager for the Zen architecture as well, so I feel he has some idea about what he's talking about. And his 30x quote was likely more in line with increased parallelism than IPC improvements... Since Moore's law only specifies transistor density, not performance per x unit where x can be any metric for performance, really.
 
I mean, Keller was a lead designer and team manager for the Zen architecture as well, so I feel he has some idea about what he's talking about. And his 30x quote was likely more in line with increased parallelism than IPC improvements... Since Moore's law only specifies transistor density, not performance per x unit where x can be any metric for performance, really.
Definitely agreed, and I do believe he knows what he is talking about.
I think it will come down more to Intel not letting him do what needs to be done, in order to keep up the "competition" and market stagnation, at least until they themselves can get a foothold in a different market.
 
Moore's law is nothing more than an observation - it is not a physical law.
It's more than an observation, it's something to aspire to, no matter the initial intention of the phrase. But I do think almost everyone here understands this, so nothing to really get worked up about (not saying you are, but sometimes these convos get heated in mixed company)
 
Coding properly. At the moment it's not worth coding efficiently at all. Shit could run 1000 times faster or more, but it's currently a better use of programmer time to add functionality at whatever CPU cost, because the CPUs can take it.

It's so funny that for the most part, productivity apps haven't gotten that much better, but we used to run them in less than 40K of memory.
 
Coding properly. At the moment it's not worth coding efficiently at all. Shit could run 1000 times faster or more, but it's currently a better use of programmer time to add functionality at whatever CPU cost, because the CPUs can take it.

This is why we're seeing more fixed-function parts, like the dedicated matrix math units and dedicated RT units in Turing.

It's much easier to build it once, instead of depending on programmers to do it constantly. It's also more efficient than software.

This is one of the easier ways to extract more efficiency when your power density is already way too high.
 
Since Moore's law only specifies transistor density, not performance per x unit where x can be any metric for performance, really.
Is it even specific to density? I thought it was simply how many transistors you could cram into one IC, so make a bigger IC to get more transistors? I mean, look at the 64-core Epyc; that thing is absolutely huge.
 
Is it even specific to density? I thought it was simply how many transistors you could cram into one IC, so make a bigger IC to get more transistors? I mean, look at the 64-core Epyc; that thing is absolutely huge.

Yeah, before the 90s, Intel was doing a die shrink about every three years. They increased throughput by increasing the size of the wafer, or by improving automation tolerances (better yields with larger dies).

When they finally hit the wall on wafer crystal size (300 mm), and had already optimized their process, they went all-in on optical shrinks.

There used to be several ways that combined to double fab throughput every two years - not any more!
 
From the article:

We saw a ~25% IPC gain from Sandy Bridge (2011) to Kaby Lake (2016) over a period of 5 years, and now he is saying we will get 30 times the amount of performance in 10 years?
If it weren't for AMD upping the competition, we wouldn't see that kind of performance gain for decades the way Intel has milked and stagnated the market. :meh:

Even if Jim Keller does indeed know what he is talking about, and can indeed innovate and engineer to get that level of performance, Intel would artificially stagnate it as long as possible.
Unless of course there is proper competition to force their hand.

Also from the article:

This is also the major issue, as idiomatic pointed out - lazy, unoptimized, inefficient, and ugly code.
Things are going to get to a point where there will be few true improvements left to be made in hardware, and coders are going to be forced to start optimizing their code.

Unlike eras of the past, with the amount of code in existence these days, and programs no longer being in the KB-to-MB range but in the GB+ range, truly optimizing by hand will most likely take much more time, and may come down to AI-based optimizations.
At that point, though, programmers might be looking at finding new work if their jobs are essentially being done for them.

This is going to be one hell of a balancing act over the next 30 years, for both megacorps, and their industry engineers and programmers.
A dark cyberpunk future indeed...


Programmers will never be forced to start optimizing their code. Most things don't need to be optimized, and the things that do already are optimized.

As hardware has gotten faster, fewer and fewer things have needed to be optimized for performance. The end of Moore's Law would just mean the status quo for what needs to be optimized doesn't change. If Moore's Law continued forever, eventually nothing would need to be optimized.
 
Most things don't need to be optimized, and the things that do already are optimized.
That is hilarious.

If Moore's Law continued forever, eventually nothing would need to be optimized.
haha, no, also hilarious.
Software does not just 'magically' take advantage of features and functions within CPUs; it has to be written and optimized for them.

This is why software like Crysis does not run better on a modern CPU than it did 13 years ago on C2D, despite the obvious and massive gains in IPC that have been made in that time.
Same reason the Pentium 4 was a flop when it was first released: no software took advantage of SSE2, and clock-for-clock it had lower IPC (without SSE2) than the Pentium III, which only had the original SSE.
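For a sense of what "taking advantage of SSE2" means at the code level, here's a trivial sketch of my own (the real SSE2 wins came from much heavier floating-point and multimedia loops): the same sum written as plain scalar C and again with explicit SSE2 double-precision intrinsics. Until software (or the compiler on its behalf) was written like the second version, the new silicon sat idle.

```c
/* Scalar vs. explicit SSE2: the hardware feature only helps if
 * the code (or the compiler's auto-vectorizer) actually emits
 * the new instructions. */
#include <stdio.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

#define N 1024

static double scalar_sum(const double *v, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += v[i];
    return s;
}

static double sse2_sum(const double *v, int n)
{
    __m128d acc = _mm_setzero_pd();
    for (int i = 0; i < n; i += 2)            /* two doubles per add */
        acc = _mm_add_pd(acc, _mm_loadu_pd(v + i));

    double lanes[2];
    _mm_storeu_pd(lanes, acc);
    return lanes[0] + lanes[1];
}

int main(void)
{
    static double data[N];
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    printf("scalar: %f  sse2: %f\n", scalar_sum(data, N), sse2_sum(data, N));
    return 0;
}
```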
 
Programmers will never be forced to start optimizing their code. Most things don't need to be optimized, and the things that do already are optimized.

As hardware has gotten faster, fewer and fewer things have needed to be optimized for performance. The end of Moore's Law would just mean the status quo for what needs to be optimized doesn't change. If Moore's Law continued forever, eventually nothing would need to be optimized.

Must be nice to be 12 again. Everything used to be so simple
 
Well, if anything else comes from this, at least we are seeing a number of research groups and such starting to ditch Python and go back to C and J to get the speed increases they now require, as the hardware isn't getting them where they want, and, shocker, they are astounded by how much faster the code runs...
 
That is hilarious.


haha, no, also hilarious.
Software does not just 'magically' take advantage of features and functions within CPUs; it has to be written and optimized for them.

This is why software like Crysis does not run better on a modern CPU than it did 13 years ago on C2D, despite the obvious and massive gains in IPC that have been made in that time.
Same reason the Pentium 4 was a flop when it was first released: no software took advantage of SSE2, and clock-for-clock it had lower IPC (without SSE2) than the Pentium III, which only had the original SSE.

Haha I never said it would.

No one writes slow, unoptimized code and counts on it to be fast in future.

Devs that write slow code for things like games that are supposed to be fast just suck at coding. They don't purposely write slow code and expect people to wait 10 years for fast enough hardware.

Good devs optimize their games to get as much detail and fps as possible for old, current, and future hardware.

99% of unoptimized code is boring, non-time-sensitive crap that no one cares about, or things that take 0.0001 seconds unoptimized and 0.00001 seconds optimized.
 
Haha I never said it would.

No one writes slow, unoptimized code and counts on it to be fast in future.

Devs that write slow code for things like games that are supposed to be fast just suck at coding. They don't purposely write slow code and expect people to wait 10 years for fast enough hardware.

Good devs optimize their games to get as much detail and fps as possible for old, current, and future hardware.

99% of unoptimized code is boring, non-time-sensitive crap that no one cares about, or things that take 0.0001 seconds unoptimized and 0.00001 seconds optimized.
99% of people who blame unoptimized code don't actually know what that means.
 
Well, if anything else comes from this, at least we are seeing a number of research groups and such starting to ditch Python and go back to C and J to get the speed increases they now require, as the hardware isn't getting them where they want, and, shocker, they are astounded by how much faster the code runs...
People are writing code in JavaScript.

I'm appalled that anyone uses Java where they don't have to -- I've seen where the limits get hit due to outdated assumptions not being worth readdressing and end users simply have to live with the resulting performance issues.

Hopefully we're getting to the point where automatic optimization is becoming a thing.

With respect to performance, developer time (which is a function of effort among other things) is being balanced against performance. Carmack once mentioned that there was assembly code he'd written for Quake that had survived through to the last engine revisions he'd worked on at id, simply because it still worked and no one had time to do that kind of optimization these days.

I'm wondering if 'common solutions' for particular problems can't be codified, recognized by compilers, and applied at compile time for any code that isn't run as a script.
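A narrow version of that already exists in GCC and Clang under the name "loop idiom recognition": at higher optimization levels, certain hand-written loops are spotted and replaced with calls to the library's tuned routines. A small sketch (my own example; check the generated assembly on your toolchain to see whether the substitution happens):

```c
/* Loop idiom recognition: both of these hand-written loops are
 * typically recognized at -O2/-O3 and replaced with calls to the
 * library's tuned memset/memcpy. */
#include <stdio.h>
#include <stddef.h>
#include <string.h>

#define N 4096

void zero_buffer(unsigned char *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = 0;                 /* usually becomes memset() */
}

void copy_buffer(unsigned char *dst, const unsigned char *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];            /* usually becomes memcpy() */
}

int main(void)
{
    static unsigned char a[N], b[N];

    zero_buffer(a, N);
    memset(b, 7, N);
    copy_buffer(a, b, N);

    printf("%d\n", a[N - 1]);       /* prints 7 */
    return 0;
}
```
It's a long way from the general pattern-matching being imagined above, but it's the same idea: the compiler recognizes a known problem shape and swaps in the canned, optimized solution.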
 
People are writing code in JavaScript.

I'm appalled that anyone uses Java where they don't have to -- I've seen where the limits get hit due to outdated assumptions not being worth readdressing and end users simply have to live with the resulting performance issues.

Hopefully we're getting to the point where automatic optimization is becoming a thing.

With respect to performance, developer time (which is a function of effort among other things) is being balanced against performance. Carmack once mentioned that there was assembly code he'd written for Quake that had survived through to the last engine revisions he'd worked on at id, simply because it still worked and no one had time to do that kind of optimization these days.

I'm wondering if 'common solutions' for particular problems can't be codified, recognized by compilers, and applied at compile time for any code that isn't run as a script.
Newer compilers are doing a lot to automatically "optimize" code, but even then they can only do so much, and in some cases they can make things worse, since what they do is essentially a black box for the developer. All developers can really do is remember to code for what is needed, document, and keep it clean and readable for the next programmer. None of that "look at me, I made this whole function fit onto one line, aren't I a super good programmer" BS. I have spent more time than I am comfortable with doing code reviews, and I have seen some janky shit from really great programmers who sometimes forget their place.
 
This is why software like Crysis does not run better on a modern CPU than it did 13 years ago on C2D, despite the obvious and massive gains in IPC that have been made in that time.
Same reason the Pentium 4 was a flop when it was first released: no software took advantage of SSE2, and clock-for-clock it had lower IPC (without SSE2) than the Pentium III, which only had the original SSE.

Are you sure you don't have the Crysis 24Hz lock issue? Runs pretty great on new hardware once you get around that problem
 
People are writing code in JavaScript.

I'm appalled that anyone uses Java where they don't have to -- I've seen where the limits get hit due to outdated assumptions not being worth readdressing and end users simply have to live with the resulting performance issues.

The only Java found in JavaScript is in the name.
 