runudownquick (Gawd; joined Nov 11, 2008; 831 messages)
> That does not bode well, but will spark innovation which is good.
Ahem, it would seem that 3nm may not node well.
I'm here all week.
> That does not bode well, but will spark innovation which is good.
I've read this article several times over the years.
Just saying: it's always something that is going to end the march of CPUs.
I guess if you say it every year for 30 years, eventually you'll be right and you can pretend you weren't wrong the other 29 times.
> No, power consumption isn't directly proportional to clock speed. When you cut clock speed, you also cut voltage, which gives you much more power savings overall.
I support this statement.
In my line of work I run a research center dealing with electron microscopy (imaging) and X-ray diffraction (structural determination based on crystallinity).
In this discussion I will demonstrate why sub-10 nm production is challenging and constantly put on hold.
Our scanning electron microscope (SEM) has a resolution of 5 nm and a magnification range from 10x to 300,000x.
BTW these images are not public domain.
Photo A: Fire ant head (mm-scale features)
Photo B: Semiconductor-grade Si wafer with contamination (dust) (micron-scale features)
Photo C: Gold sputter resolution target for SEM (30 nm scale features)
[Attachments 83777, 83778, 83779]
> When intel makes a smaller nm chip, can they claim they made the security risks smaller?
Actually, yes. They can scrap multi-threading, for example, because CPU cores are smaller in relation to total die size. Pull out the MT stuff and just xerox a few more cores onto the die to compensate.
> No, power consumption isn't directly proportional to clock speed. When you cut clock speed, you also cut voltage, which gives you much more power savings overall.
Except you're forgetting the layer of vias and material between dies won't have perfect heat transference either. Essentially there'll be a sort of insulation effect going on no matter what you do. So IRL what they're finding out is exactly how I explained it.
> in the enterprise AMD have a huge advantage on the scale of bulldozer vs skylake.
No, it won't be that big of a difference. It'll be closer to the P4 vs. Athlon 64 era, which is big but nowhere near that degree of a shutout. And the server market is slow to change: it took two years for AMD to get around 20% of the server market back then, and the general expectation among people in the industry is that we'll likely see a repeat of that.
> isn't it just because of performance, ie clock speeds?
No, NV was publicly complaining about the cost not being worth the gains years ago; there were some slides floating around about it. If it had been cheap to do, they most likely would have made the jump. Neither Apple nor MediaTek has had issues with it, and MediaTek (who aren't exactly known for their design prowess) got their SoCs to well over 2 GHz with it.
Except you're forgetting the layer of vias and material between dies won't have perfect heat transference either. Essentially there'll be a sort of insulation effect going on no matter what you do. So IRL what they're finding out is exactly how I explained it.
And that is the best case. Even with "simple" die-stacking situations, it can sometimes work out to be worse than what I've described. That is why virtually no one is doing die stacking in mass-volume consumer products while MCMs are instead taking off.
> ...but their practicality is being debated due to astronomical costs, new design challenges, and the lack of power/performance benefits
Good question! Multithreaded development fails because of two things: Amdahl's Law and It's Hard.
Amdahl's Law basically says you can't get multi-core improvements when you have jobs that can't run in parallel. In real life, it doesn't matter if you have 40 doctors who could do the surgery: it's one step after another, and you suture AFTER the stuff inside is fixed. Some workloads have a surprisingly large amount of stuff that can't be run in parallel. Graphics is the best case we've found for running things in parallel, and we're already doing it.
It's Hard just means that writing multi-threaded code is complicated and easy to mess up. Here, it's like trying to get 40 toddlers to all play together peacefully. You try it! Someone is always upset with someone else. With computers, no one gets punched in the face, but I have gotten a number of blue screens and hangs when things don't quite work. I keep telling management that my first rule of multithreading is "don't". Break that rule only if you have to, because it will cost money to make it work. Sometimes a little, sometimes a lot. Maintaining someone else's SW is 90% of any SW budget, and multithreaded code is difficult to maintain.
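The ceiling Amdahl's Law imposes is easy to see with a quick sketch (a minimal illustration added here, not from any poster; the 10% serial fraction is an arbitrary assumption):

```python
# Amdahl's Law: speedup on n cores is capped by the serial fraction s.
# speedup(n) = 1 / (s + (1 - s) / n)

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup on `cores` cores when `serial_fraction` of the
    work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With just 10% serial work, 40 cores give only ~8.2x, and no number
# of cores can ever beat 1/s = 10x.
for n in (2, 4, 40, 1000):
    print(n, round(amdahl_speedup(0.10, n), 2))
```

That hard 1/s ceiling is why "just add more cores" stops paying off so quickly.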
> they should make them in (little) China
I would also have accepted "Big Trouble in Little Process".
> One thing that people might not realize is how extremely tiny 3nm actually is. Given the diameter of a copper atom is ~0.28 nm, the wires in a 3nm chip are only 11 atoms wide... Pretty crazy.
Intel said they're moving to cobalt, which is 200 pm, or 15 atoms wide at 3 nm.
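The atom counts above are just simple division. A quick check, taking the thread's figures at face value (note that "3 nm" node names are marketing labels and don't literally equal a drawn feature size):

```python
# Back-of-the-envelope check of the atom-width figures quoted above.
COPPER_DIAMETER_NM = 0.28   # ~0.28 nm per copper atom (as quoted)
COBALT_DIAMETER_NM = 0.20   # 200 pm per cobalt atom (as quoted)
WIRE_WIDTH_NM = 3.0         # taking "3 nm" as a literal wire width

print(round(WIRE_WIDTH_NM / COPPER_DIAMETER_NM))  # ~11 copper atoms
print(round(WIRE_WIDTH_NM / COBALT_DIAMETER_NM))  # 15 cobalt atoms
```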
> Well that's just an engineering problem, not a fundamental limitation like linear clock/power scaling.
It absolutely is a fundamental problem, because the vias and supporting layers work as heat insulators no matter what you do.
> Also, I think you're underestimating how much voltage/clock scaling affects power consumption.
I'm not. I know power/heat scale non-linearly in CPUs. The thing is, so does heat in the package in certain situations, such as when you're effectively insulating a heat source and stacking another heat source on top of it.
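The voltage point being argued can be made concrete with the standard CMOS dynamic-power relation (a textbook sketch added here, not from any poster; the assumption that voltage can drop in step with frequency is a simplification of real DVFS curves):

```python
# Dynamic power in CMOS scales roughly as P ~ C * V^2 * f.
# Because lowering the clock also lets you lower the voltage,
# power falls much faster than the clock cut alone would suggest.

def dynamic_power(voltage: float, frequency: float, capacitance: float = 1.0) -> float:
    """Relative dynamic power, P = C * V^2 * f (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency

baseline = dynamic_power(voltage=1.0, frequency=1.0)
# Cut clock 20% and assume voltage can drop 20% along with it:
scaled = dynamic_power(voltage=0.8, frequency=0.8)
print(scaled / baseline)  # 0.512: a 20% clock cut roughly halves power
```

This cubic-ish scaling is exactly why "power isn't directly proportional to clock speed" in the earlier post: the V^2 term dominates the savings.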