Big Trouble at 3nm

I've read this article several times over the years.

Just saying. It's always something that is going to end the march of CPUs.

I guess if you say it every year for 30 years, eventually you'll be right and you can pretend you weren't wrong the other 29 times.

Sounds like global warming!
 
No, power consumption isn't directly proportional to clockspeed. When you cut clockspeed, you also cut voltage, which gives you much more power savings overall.
I support this statement.

http://www.siliconintelligence.com/people/binu/perception/node13.html

Cutting clock speed and voltage simultaneously can dramatically reduce power usage. Intel did a research project some years back where they folded the Pentium 4 logic onto itself (a two-layer stacked Pentium 4); they had to reduce clock speeds, but the processor was faster than a traditional Pentium 4 setup due to latency improvements.

https://ieeexplore.ieee.org/document/1347939/
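
As a rough sketch of why that works: dynamic switching power scales roughly as C·V²·f, so halving the clock alone halves power, while halving the clock and dropping the voltage cuts it much further. The capacitance, voltage, and frequency numbers below are made-up illustrative values, not figures from the paper.

```python
# Rough sketch of CMOS dynamic power scaling: P_dyn ~ alpha * C * V^2 * f.
# All numbers here are illustrative placeholders, not real chip data.

def dynamic_power(c_eff_farads, v_volts, f_hertz, activity=1.0):
    """Approximate dynamic (switching) power in watts."""
    return activity * c_eff_farads * v_volts**2 * f_hertz

baseline      = dynamic_power(1e-9, 1.2, 3.0e9)  # hypothetical 3 GHz @ 1.2 V
half_clock    = dynamic_power(1e-9, 1.2, 1.5e9)  # halve the clock only
half_clock_lv = dynamic_power(1e-9, 0.9, 1.5e9)  # halve the clock AND drop voltage

print(f"half clock only:      {half_clock / baseline:.2f}x power")     # ~0.50x
print(f"half clock + lower V: {half_clock_lv / baseline:.2f}x power")  # ~0.28x
```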
 
In my line of work I run a research center dealing with electron microscopy (imaging) and X-ray diffraction (structural determination based on crystallinity).
In this discussion I will demonstrate why sub-10 nm production is challenging and constantly put on hold.
Our scanning electron microscope (SEM) has a resolution of 5 nm and a magnification range from 10 to 300,000 times.
BTW these images are not public domain.

Photo A: fire ant head (mm-scale features)

Photo B: semiconductor-grade Si wafer with contamination (dust) (micron-scale features)

Photo C: gold sputter resolution target for SEM (30 nm-scale features)


Hmm, so what you're saying is that intel should make chips out of gold-covered fire ant heads???
 
When intel makes a smaller nm chip, can they claim they made the security risks smaller? ;)
Actually, yes. They can scrap multi-threading, for example, because CPU cores are smaller in relation to total die size. Pull out the MT stuff and just xerox a few more cores onto the die to compensate.
 
No, power consumption isn't directly proportional to clockspeed. When you cut clockspeed, you also cut voltage, which gives you much more power savings overall.
Except you're forgetting that the layer of vias and material between dies won't have perfect heat transfer either. Essentially there'll be a sort of insulation effect going on no matter what you do. So IRL what they're finding out is exactly how I explained it.

And that is the best case. Even with "simple" die stacking, it can sometimes work out to be worse than what I've described. That is why virtually no one is doing die stacking in mass-volume consumer products while MCMs are instead taking off.
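
To put a crude number on that insulation effect, here's a one-dimensional series thermal-resistance sketch. All of the resistance and power values are invented round numbers for illustration, not data for any real package.

```python
# Crude 1-D series thermal-resistance comparison: one die vs. a two-die stack
# at the same total power. All values (K/W, W) are invented round numbers.

r_die  = 0.20   # one die, junction to its outer face (K/W)
r_bond = 0.30   # bond/via layer between stacked dies (K/W) -- the "insulation"
r_hsf  = 0.25   # heat spreader + heatsink to ambient (K/W)

# Single die dissipating 80 W straight into the heatsink:
dt_single = 80.0 * (r_die + r_hsf)

# Two stacked dies at 40 W each: the heatsink still carries the full 80 W,
# but the bottom die's heat must also cross the bond layer and the upper die.
dt_bottom = 80.0 * r_hsf + 40.0 * (r_die + r_bond + r_die)

print(f"single 80 W die:            {dt_single:.0f} K over ambient")  # ~36 K
print(f"bottom die of 2x40 W stack: {dt_bottom:.0f} K over ambient")  # ~48 K
```

Even at equal total power, the buried die runs hotter in this toy model, which is the insulation effect described above.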

In the enterprise, AMD has a huge advantage, on the scale of Bulldozer vs. Skylake.
No, it won't be that big of a difference. It'll be closer to the P4 vs. Athlon 64 era, which is big but nowhere near that degree of a shutout. And the server market is slow to change: it took two years for AMD to get around 20% of the server market back then, and the general expectation of people in the industry is that we'll likely see a repeat of that.

Isn't it just because of performance, i.e. clock speeds?
No, NV was publicly complaining about the cost not being worth the gains years ago; there were some slides floating around about it. If it had been cheap to do, they most likely would have made the jump. Neither Apple nor Mediatek has had issues with it, and Mediatek (who aren't exactly known for their design prowess) got their SoCs to well over 2 GHz with it.
 
Bring on the gallium nitride, and the GHz wars again. Silicon has reached its practical limits; time to look at the alternatives.
 
Except you're forgetting that the layer of vias and material between dies won't have perfect heat transfer either. Essentially there'll be a sort of insulation effect going on no matter what you do. So IRL what they're finding out is exactly how I explained it.

And that is the best case. Even with "simple" die stacking, it can sometimes work out to be worse than what I've described. That is why virtually no one is doing die stacking in mass-volume consumer products while MCMs are instead taking off.

Well that's just an engineering problem, not a fundamental limitation like linear clock/power scaling. You're absolutely right that it's not economically feasible now, hence no one has really pursued it. But when we hit a shrinking wall, suddenly throwing a lot more silicon at the problem may become the only way forward. Sometimes flat MCM designs will work, but there will be situations where designs need the lower latency or extra connections that TSVs can provide.

Also, I think you're underestimating how much voltage/clock scaling affects power consumption. If you cut clock speeds in half, then based on my old conservative underclocking experiments you could cut power consumption to a quarter or less. With two identical chips stacked, that gives you half the overall power draw, which means the lower heat conductivity is less of an issue and you have a much more efficient chip overall, potentially without the latency or power penalty of an MCM design.
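
Putting rough numbers on that argument, using the conservative "half the clock ≈ a quarter of the power" figure from above; the 100 W baseline is just a placeholder, not a real part.

```python
# Back-of-the-envelope: stack two copies of a chip, each at half clock.
# Uses the conservative figure quoted above (half clock + lower voltage
# ~= one quarter of the power). The 100 W baseline is a made-up placeholder.

baseline_power = 100.0                  # one die at full clock (hypothetical)
per_die_power  = 0.25 * baseline_power  # each die at half clock + lower voltage
stack_power    = 2 * per_die_power      # two stacked dies

print(f"single die, full clock: {baseline_power:.0f} W")
print(f"two-die stack:          {stack_power:.0f} W "
      f"({stack_power / baseline_power:.0%} of baseline)")
# Throughput is roughly preserved (2 dies x ~1/2 clock each) while total
# power halves; whether the package can actually get that heat out is the
# separate insulation question raised in the quoted post.
```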
 
The "sizing" of a process node hasn't really correlated to real feature sizes for quite some time. Generally, your 10/12/14nm and soon to be 7nm chips have very little, if any, actual features of that size in them. In some cases, it's not even practical to actually build a feature of that size in a mass production IC. It's just MARKETING at this point. They could just start naming them sequentially, N1, N2, N3 and it would have as much meaning as the current scheme. In general, the next process generation is better than the last, nothing more or less regardless of what they decide to call it. How much better, well it depends.

There are challenges, as there have always been, but the options to keep making silicon work are getting slim, and what remains is currently difficult to scale and/or really expensive, e.g. extreme UV. Of course there are also other semiconductor groups, but they can be hard to make economical for mass production. We'll find other ways to keep making faster ICs for applications that can afford it, but the economical bag of tricks to keep silicon going is running low.
 
Physics is a bitch..
I don't think we will see a shift away from silicon any time soon. We will see more doping and experimenting with different things, but the essence will be the same. However, as the current tech gets old it gets cheaper, so I think we will get more silicon in our devices for equal or less money, and more parallel processing... it's not going to be bad.
 
Good question! Multithreaded development fails because of two things: Amdahl's Law and It's Hard.

Amdahl's law basically says you can't get multi-core improvements from the parts of a job that can't run in parallel. In real life, it doesn't matter if you have 40 doctors that could do the surgery. It's one step after another. Suture AFTER the stuff inside is fixed. Some things have a surprisingly large amount of work that can't be run in parallel. Graphics is the best way we've found for things to run in parallel, and we're already doing it.
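
For reference, here is Amdahl's law in a couple of lines of Python; the 95% parallel fraction is just an example figure.

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

# Even with 95% of the work parallelizable, 40 "doctors" get nowhere near 40x.
for n in (2, 8, 40, 1000):
    print(f"{n:>4} cores: {amdahl_speedup(0.95, n):5.2f}x")
# ->  2 cores: 1.90x | 8 cores: 5.93x | 40 cores: 13.56x | 1000 cores: 19.63x
```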

It's Hard just means that writing multi-threaded code is complicated and easy to mess up. Here, it's like trying to get 40 toddlers to all play together peacefully. You try it! Someone is always upset with someone else. With computers, no one gets punched in the face, but I have gotten a number of blue screens and hangs when things don't quite work. I keep telling management that my first rule of multithreading is "don't". Break that rule only if you have to, because it will cost money to make it work. Sometimes a little. Sometimes a lot. Maintaining someone else's SW is 90% of any SW budget, and multithreaded code is difficult to maintain.
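
A tiny illustration of the "easy to mess up" part: an unlocked read-modify-write on a shared counter can silently lose updates when two threads interleave. This is a generic toy demo, not code from any real project; the sleep(0) is only there to make the interleaving easy to trigger.

```python
# Minimal lost-update race: two threads doing an unlocked read-modify-write.
import threading
import time

counter = 0

def racy_increment(times):
    global counter
    for _ in range(times):
        current = counter      # read
        time.sleep(0)          # yield, so the other thread can run here...
        counter = current + 1  # ...and this write then clobbers its update

threads = [threading.Thread(target=racy_increment, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"expected 20000, got {counter}")  # usually far less than 20000
# Guarding the read-modify-write with a threading.Lock() makes it correct,
# at the cost of the serialization Amdahl's law charges you for.
```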


Thanks. I wish it wasn't that "hard," so that we could really see some benefits from more and more cores and threads.
 
One thing that people might not realize is how extremely tiny 3 nm actually is. Given that the diameter of a copper atom is ~0.28 nm, the wires in a 3 nm chip would be only about 11 atoms wide... Pretty crazy.
Intel said they're moving to cobalt, which at roughly 200 pm per atom works out to about 15 atoms across a 3 nm wire.
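
The arithmetic behind those figures, using the atomic diameters quoted in the two posts above:

```python
# How many atoms fit across a 3 nm line, using the diameters quoted above.
wire_width_nm = 3.0
atom_diameter_nm = {"copper": 0.28, "cobalt": 0.20}  # figures from the thread

for metal, diameter in atom_diameter_nm.items():
    atoms = wire_width_nm / diameter
    print(f"{metal}: ~{atoms:.0f} atoms across a {wire_width_nm:.0f} nm line")
# copper: ~11 atoms, cobalt: ~15 atoms
```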
 
Well that's just an engineering problem, not a fundamental limitation like linear clock/power scaling.
It absolutely is a fundamental problem because the vias and supporting layers work as heat insulators no matter what you do.

It's going to require new materials and manufacturing methods to "fix" that, and no one knows how to do either yet, much less do it in an economical fashion.

And tons of people have pursued it. They've been trying for years!

Also, I think you're underestimating how much voltage/clock scaling affects power consumption.
I'm not. I know power and heat scale non-linearly in CPUs. The thing is, so does heat in the package in certain situations, such as when you're effectively insulating a heat source and stacking another heat source on top of it.

It flat out doesn't work the way you think it does. And again, the fact that pretty much no one is doing consumer die stacking while lots are doing MCMs instead is your big tip-off that the problem is bigger than you realize.
 