Big Trouble at 3nm

Discussion in '[H]ard|OCP Front Page News' started by Megalith, Jun 23, 2018.

  1. Megalith

    Megalith 24-bit/48kHz Staff Member

    Messages:
    12,474
    Joined:
    Aug 20, 2006
    Vendors are already planning 3nm transistors even before 10nm/7nm have fully ramped, but their practicality is being debated due to astronomical costs, new design challenges, and diminishing power/performance benefits. Semiconductor Engineering explains that developing a complex chip could cost as much as $1.5B, yet neither major improvements in functionality nor acceptable transistor costs are guaranteed.

    Plus, the manufacturing costs are enormous. “3nm will cost $4 billion to $5 billion in process development, and the fab cost for 40,000 wafers per month will be $15 billion to $20 billion,” IBS’ Jones said. Then, even with new transistor structures, the benefits of scaling are shrinking while costs are rising. “Before 14nm, there was a 30% improvement in price/performance at each node.”
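    To put the quoted fab figures in perspective, here's a back-of-the-envelope Python sketch. The five-year straight-line amortization period is an assumption for illustration, not a figure from the article:

    ```python
    # Rough per-wafer capital cost implied by the quoted figures:
    # a $15B-$20B fab running 40,000 wafers per month.
    FAB_COST_USD = 17.5e9      # midpoint of the quoted $15B-$20B range
    WAFERS_PER_MONTH = 40_000
    DEPRECIATION_YEARS = 5     # assumed amortization period, for illustration only

    total_wafers = WAFERS_PER_MONTH * 12 * DEPRECIATION_YEARS
    capex_per_wafer = FAB_COST_USD / total_wafers
    print(f"~${capex_per_wafer:,.0f} of fab capex per wafer")
    ```

    And that's capex alone, before materials and operating costs, on top of the $4B-$5B in process development that Jones cites.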
     
  2. Mugato

    Mugato Muh Feelz!

    Messages:
    957
    Joined:
    Feb 25, 2014
    That does not bode well, but it will spark innovation, which is good.
     
    Armenius and jnemesh like this.
  3. Mugato

    Mugato Muh Feelz!

    Messages:
    957
    Joined:
    Feb 25, 2014
    Design costs are also a problem. Generally, IC design costs have jumped from $51.3 million for a 28nm planar device to $297.8 million for a 7nm chip and $542.2 million for 5nm, according to IBS. But at 3nm, IC design costs range from a staggering $500 million to $1.5 billion; the $1.5 billion figure involves a complex GPU at Nvidia.

    Sorry for the double post, but that's interesting: Nvidia already has a 3nm chip designed.
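    The IBS design-cost progression quoted above can be tabulated quickly; the 3nm entry below uses the midpoint of the quoted $500M-$1.5B range, which is an assumption:

    ```python
    # IBS design-cost figures quoted above, in millions of USD.
    design_cost_musd = {
        "28nm": 51.3,
        "7nm": 297.8,
        "5nm": 542.2,
        "3nm": 1000.0,  # midpoint of the quoted $500M-$1.5B range (assumption)
    }

    prev = None
    for node, cost in design_cost_musd.items():
        growth = f"{cost / prev:.1f}x" if prev else "  - "
        print(f"{node:>5}: ${cost:8.1f}M  ({growth} vs. previous listed node)")
        prev = cost
    ```

    Each listed node roughly doubles (or worse) the design cost of the one before it, which is the core of the article's argument.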
     
    Armenius, jnemesh and Madmeerkat55 like this.
  4. clockdogg

    clockdogg Gawd

    Messages:
    516
    Joined:
    Dec 12, 2007
    So...it's finally true... Less is More. Much more.
     
    Madmeerkat55, risc and GoldenTiger like this.
  5. Delicieuxz

    Delicieuxz Gawd

    Messages:
    631
    Joined:
    May 11, 2016
    Intel had $62.76 billion in revenue last year, should be a piece of cake.
     
    nEo717, SickBeast and Madmeerkat55 like this.
  6. Azphira

    Azphira [H]ard|Gawd

    Messages:
    1,753
    Joined:
    Aug 18, 2003
    When Intel makes a smaller nm chip, can they claim they made the security risks smaller? ;)
     
    katanaD, jnemesh, MrDeaf and 4 others like this.
  7. Gideon

    Gideon [H]ard|Gawd

    Messages:
    1,687
    Joined:
    Apr 13, 2006
    Yet they can't get 10nm to work properly. The wall is coming and money won't fix it; silicon's days are numbered.
     
    Armenius, nEo717, jnemesh and 5 others like this.
  8. gxp500

    gxp500 Gawd

    Messages:
    721
    Joined:
    Mar 4, 2015
    "Big Trouble at 10nm for Intel"

    Fixed the title for you...
     
  9. naib

    naib [H]ard|Gawd

    Messages:
    1,056
    Joined:
    Jul 26, 2013
    they should make them in (little) China :)
     
    hawkeye_wx, Meeho, Wyodiver and 2 others like this.
  10. c3k

    c3k [H]ard|Gawd

    Messages:
    1,779
    Joined:
    Sep 8, 2007
    Threadripper, 32 core, mmmmmm…….
     
    jnemesh and motqalden like this.
  11. oldmanbal

    oldmanbal [H]ard|Gawd

    Messages:
    1,642
    Joined:
    Aug 27, 2010
    Kurt Russell approves.
     
  12. Galvin

    Galvin 2[H]4U

    Messages:
    2,402
    Joined:
    Jan 22, 2002
    Whatever happened to stacking?
     
  13. Mchart

    Mchart 2[H]4U

    Messages:
    2,372
    Joined:
    Aug 7, 2004
    Doesn’t fix power consumption.
     
    Armenius likes this.
  14. Twisted Kidney

    Twisted Kidney 2[H]4U

    Messages:
    3,088
    Joined:
    Mar 18, 2013
    I've read this article several times over the years.

    Just saying. It's always something that is going to end the march of CPUs.

    I guess if you say it every year for 30 years, eventually you'll be right and you can pretend you weren't wrong the other 29 times.
     
    Armenius, Vokar, _l_ and 1 other person like this.
  15. ChoGGi

    ChoGGi [H]ard|Gawd

    Messages:
    1,207
    Joined:
    May 7, 2005
    I hope so, I want my diamond chip at 10 or 15 GHz
     
    Armenius, nEo717, Nobu and 4 others like this.
  16. R_Type

    R_Type [H]Lite

    Messages:
    93
    Joined:
    Mar 11, 2018
    To be fair, there were major hurdles to overcome over the years. In the 70s and 80s the rate of miniaturisation was *blistering*, a pace that seemed impossible to sustain. Quite a reasonable assumption. Then, more recently, there were real concerns that 1 micron (1000nm!) was unachievable. All through the semiconductor era, various issues were encountered that entailed reaching into the corners of the periodic table to solve. Now a cutting-edge process uses more elements than it leaves out. Where we are now, we're running out of new tricks to pull out of the bag, and there are some very hard physical barriers looming on the horizon.
     
    Armenius, jnemesh, Kinestron and 3 others like this.
  17. gigaxtreme1

    gigaxtreme1 2[H]4U

    Messages:
    3,318
    Joined:
    Oct 1, 2002
    After 3nm, or even before, it will come down to flipping bits with single-atom quantum states. There's not much matter left at that scale.
     
  18. viscountalpha

    viscountalpha 2[H]4U

    Messages:
    2,338
    Joined:
    Oct 16, 2011
    I'm betting the yield at 3nm is going to be horrendous. Someone is going to figure out a better way.

    Necessity is the mother of invention.

    Whoever figures it out first would stand to control the next stepping of technology for literally the planet.
     
  19. mesyn191

    mesyn191 2[H]4U

    Messages:
    2,918
    Joined:
    Jun 28, 2004
    Process developments don't automagically appear as revenue or profits increase though. Their revenue could be $1 trillion and they could still easily fail to produce a viable 3nm process just like they failed at 10nm.

    As Mchart notes, power (or IOW heat) makes stacking problematic at best. MCMs of some sort can work, though, so I'd expect to see more stuff like Epyc from everyone, if for no other reason than to scavenge dies.

    They're starting to hit the limits of physics though (as in atomic limits) and there is nothing on the horizon you can point to that will provide more scaling of the sort they used to be able to pull off.

    Quantum computers still aren't really viable and are only situationally better than current processors; they won't be a panacea even if their issues get worked out and they're made viable for affordable mass production.

    Potentially changing the fundamental way processing is done and switching to some sort of opto-electronic design could still give big performance wins, but nobody seems to be going that route because it's supposed to be incredibly hard to do. Plus it still won't give you die shrinks or cost savings like the industry used to get. It's a pure performance win only. The days of win-win-win (better performance, lower cost, lower power) with a process shrink are clearly numbered.

    Problems do not get magically fixed just because someone is looking for a solution or because the solution would provide lots of money/power/fame/etc. Reality doesn't work that way.

    There are problems that people have been researching for decades (ie. fusion, FTL, batteries that have the same energy density as gasoline, etc.) that still aren't solved and might not ever be.
     
    Armenius and Nightfire like this.
  20. GreenOrbs

    GreenOrbs Limp Gawd

    Messages:
    204
    Joined:
    Jun 7, 2017
    One thing that people might not realize is how extremely tiny 3nm actually is. Given the diameter of a copper atom is ~0.28 nm, the wires in a 3nm chip are only 11 atoms wide... Pretty crazy.
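    The arithmetic in that post checks out; a few lines of Python to verify, using ~0.28nm as an approximate metallic diameter for a copper atom:

    ```python
    # How many copper atoms fit across a 3nm-wide wire?
    COPPER_ATOM_DIAMETER_NM = 0.28  # approximate metallic diameter of a Cu atom
    WIRE_WIDTH_NM = 3.0

    atoms_across = WIRE_WIDTH_NM / COPPER_ATOM_DIAMETER_NM
    print(f"A {WIRE_WIDTH_NM}nm wire is only ~{round(atoms_across)} copper atoms wide")
    ```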
     
    Armenius and nEo717 like this.
  21. stormy1

    stormy1 [H]ard|Gawd

    Messages:
    1,034
    Joined:
    Apr 3, 2008
    Anyone remember when they were saying it would cost billions and take decades to break 40nm, and then it was done? And then getting below 28nm was going to cost billions and take decades, and then it was done too?
    So take these kinds of articles for the total BS that they are.
     
    nEo717 likes this.
  22. stormy1

    stormy1 [H]ard|Gawd

    Messages:
    1,034
    Joined:
    Apr 3, 2008
    Not really; nodes are named for the smallest possible feature the process can draw, but a chip made on one may have no feature that small.
    What actually matters more for chip density is the typical feature size, which is always larger than the node name.
     
  23. mesyn191

    mesyn191 2[H]4U

    Messages:
    2,918
    Joined:
    Jun 28, 2004
    Uh, no one said "decades" to do any of that at the time, but there were serious delays, mostly for the third-party foundries, and it did cost everyone billions.

    Problems with process improvements were obvious years ago and those problems haven't gone away.

    You're right that other stuff besides the smallest pitch matters more overall, but he didn't say otherwise, and he's quite correct about the size of 3nm copper wires too.
     
    Armenius likes this.
  24. drutman

    drutman n00bie

    Messages:
    39
    Joined:
    Jan 4, 2016
    In my line of work I run a research center dealing with electron microscopy (imaging) and X-ray diffraction (structural determination based on crystallinity).
    In this discussion I will demonstrate why sub-10nm production is challenging and constantly put on hold.
    Our scanning electron microscope (SEM) has a resolution of 5nm and a magnification range from 10 to 300,000 times.
    BTW these images are not public domain.

    Photo A Fire ant head mm scale features

    Photo B Semiconductor grade Si wafer with contamination (dust) micron scale features

    Photo C Gold sputter resolution target for SEM 30 nm scale features

     
    Armenius, IKV1476, c3k and 8 others like this.
  25. stormy1

    stormy1 [H]ard|Gawd

    Messages:
    1,034
    Joined:
    Apr 3, 2008
    Except the part about wires being 3nm in a 3nm-node chip.
    It will never happen.
    Wires in current CPUs are at minimum 56, 52, 67, or 70nm, depending on the foundry and process.
     
    Armenius and Brians256 like this.
  26. Red Falcon

    Red Falcon [H]ardForum Junkie

    Messages:
    9,707
    Joined:
    May 7, 2007
    "Have you paid your dues, Intel?"
    "Yes sir, the check is in the mail."

    :D
     
  27. mesyn191

    mesyn191 2[H]4U

    Messages:
    2,918
    Joined:
    Jun 28, 2004
    Actually, in a 3nm chip the nanosheets/nanowires are indeed looking to be 3nm wide. Other features will still be larger than 3nm, of course, and that is important. I don't know if they're going to use copper, though. But still.

    Which is irrelevant to the topic and his post. What processes are doing now and what processes will be doing in the future are two different things. And there are still some features of current processes that are indeed 16nm or 20nm, etc., even if the foundries are fudging the naming in other ways, so you're being kind of pointlessly pedantic here.
     
    Armenius likes this.
  28. N4CR

    N4CR 2[H]4U

    Messages:
    2,743
    Joined:
    Oct 17, 2011
    I find the 10nm issues quite interesting, because Nvidia also had issues at 10nm and abandoned it too. Perhaps it's some weird sizing issue, a little like an unexplained resonance at a certain frequency: with some features at that size it simply will not work correctly, but a few nm either way and it's not an issue...
    I really can't see why else Intel and Nvidia both had 10nm issues.

    Beautiful and staggering world that you are lucky enough to see each day... thanks so much for sharing with us!
    I've always wanted a decent small scale game to be made again, bit like 'a bugs life' on PSX, they always are looking for 'new' game ideas, something at insect scale or smaller could be quite the mind-blower.
     
    gigaxtreme1 likes this.
  29. Brians256

    Brians256 n00bie

    Messages:
    13
    Joined:
    May 31, 2016
    Not necessarily. There is more than one way to compete with silicon. For example, Threadripper isn't doing well because of a sophisticated process; it's doing well because of its architecture, excellent price/performance trade-offs, and its packaging. Better MCMs with more sophisticated interposers could also be a big win. Right now the interposers are using low-end tech and are ripe for improvement. Better multi-chip modules could make larger-scale GPU-style computation affordable, or allow better trade-offs for multi-core CPUs (more transistors per core without a too-large die). In another area, if you can improve transistor consistency by an order of magnitude, you could simply produce larger chips without suffering yield issues. You could also use more async vs. sync logic that would benefit from more precise/consistent gate thicknesses. Plenty to do.

    But, yes, the easy improvements in silicon have been seized. We won't be doubling our core MHz every year like in the heady days of the 286 through 486... not unless we find some new way of making logic. I'll hope for it, but I'll bet on slower but still useful improvements.
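    The yield point above can be illustrated with the classic Poisson die-yield model. The defect density below is purely an assumed illustrative number, not a figure for any real process:

    ```python
    import math

    # Poisson die-yield model: yield = exp(-defect_density * die_area).
    # Halving the defect density (i.e. better consistency) lets you roughly
    # double the die area while keeping the same yield.
    DEFECT_DENSITY = 0.2  # defects per cm^2 -- assumed, for illustration only

    for area_cm2 in (1.0, 2.0, 4.0, 8.0):
        y = math.exp(-DEFECT_DENSITY * area_cm2)
        print(f"{area_cm2:4.1f} cm^2 die -> ~{y:.0%} estimated yield")
    ```

    Under this simple model, yield falls off exponentially with die area, which is why bigger chips need cleaner processes.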
     
  30. Wyodiver

    Wyodiver [H]ard|Gawd

    Messages:
    1,935
    Joined:
    Aug 15, 2004
    Some tech barriers can be surpassed; perhaps others just can't. I remember reading many, many years ago that 486 processors could not get any faster due to RF interference. (I've searched for any info on that, but came up short.)

    I totally understand that I'm ignorant about a lot of cutting-edge tech, but since adding more cores is fine, where is the software side of things? Why can't software in 2018 properly use multiple cores?
     
  31. Fuzzy_3D

    Fuzzy_3D Limp Gawd

    Messages:
    138
    Joined:
    May 27, 2007
    Gotta love the interwebz. :D
     
  32. Brians256

    Brians256 n00bie

    Messages:
    13
    Joined:
    May 31, 2016
    I wouldn't say never, but you are likely right. 3nm wires are just about useless for anything at modern speeds unless you use superconducting wire or zero-capacitance gates.

    The process name (e.g. 14nm) used to apply to the smallest features on a chip, but it's not that easy anymore. Intel's 14nm is about 1.5x denser than other 14nm processes. Intel may be failing at its 5/7/10nm development, but its 14nm process family is really good.
     
  33. mesyn191

    mesyn191 2[H]4U

    Messages:
    2,918
    Joined:
    Jun 28, 2004
    The circumstances are different there. NV ditched 10nm because it wasn't a big enough improvement for the cost of fitting their design to it. Other companies used TSMC's and others' 10nm processes without issue. Apple's 10nm SoCs produced on TSMC's process seem to work just fine, and Apple seems to be sticking with TSMC for 7nm.

    Intel's issue with 10nm is apparently that they were far too ambitious in trying to get power down and logic density up while simultaneously using some new materials (i.e. cobalt, at least according to rumors that are leaking out) that turned out not to work like they expected. This is why people are saying Intel's 10nm process is "broken". They can apparently produce some chips on it, but they can't seem to get the yields up, nor can they produce chips at high clock speeds. Which is why they're forced to stick with their 14+ and 14++ processes for the next two years at least, and will probably mostly skip 10nm and go straight to 7nm by, hopefully, 2020. The alternative would put them out of business.

    Just come home from working at the salt factory eh?

    Software is very slow to change, particularly when it comes to supporting things that are difficult to implement (like multithreading). Generally, developers wait for the hardware to become commonplace before really trying to write software that makes use of it.

    It's taken years just for AMD64-compatible software to become somewhat common. The same thing happened in the switch from 16-bit to 32-bit i386-compatible software as well. And those were both changes that were seen as unambiguously good.
     
    Last edited: Jun 24, 2018
    N4CR likes this.
  34. Brians256

    Brians256 n00bie

    Messages:
    13
    Joined:
    May 31, 2016
    Good question! Multithreaded development fails because of two things: Amdahl's Law and It's Hard.

    Amdahl's law basically says you can't get multi-core improvements on jobs that can't run in parallel. In real life, it doesn't matter if you have 40 doctors who could do the surgery. It's one step after another. Suture AFTER the stuff inside is fixed. Some things have a surprisingly large amount of work that can't run in parallel. Graphics is the best way we've found for things to run in parallel, and we're already doing it.

    It's hard just means that writing multi-threaded code is complicated and easy to mess up. Here, it's like trying to get 40 toddlers to all play together peacefully. You try it! Someone is always upset with someone else. With computers, no one gets punched in the face, but I have gotten a number of blue screens and hangs when things don't quite work. I keep telling management that my first rule of multithreading is "don't". Break that rule only if you have to, because it will cost money to make it work. Sometimes a little. Sometimes a lot. Maintaining someone else's SW is 90% of any SW budget and multithreaded code is difficult to maintain.
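    Amdahl's law is easy to play with in a few lines of Python; the 95% parallel fraction below is just an illustrative assumption:

    ```python
    # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
    # of the work that can run in parallel and n is the number of cores.
    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # Even with 95% of the work parallelizable, 40 cores get nowhere near
    # 40x, and piling on more cores barely helps past that:
    for cores in (4, 40, 400):
        print(f"{cores:3d} cores -> {amdahl_speedup(0.95, cores):5.1f}x speedup")
    ```

    With a 5% serial fraction, the speedup can never exceed 20x no matter how many cores you throw at it.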
     
    Wyodiver and mesyn191 like this.
  35. AlphaAtlas

    AlphaAtlas Limp Gawd Staff Member

    Messages:
    372
    Joined:
    Mar 3, 2018
    Still being worked on IIRC, but my money is on that being the next big improvement. Even if chips have to run a little slower thanks to the higher power density, at this point it'll be the easier way to cram more logic together in one spot.
     
  36. N4CR

    N4CR 2[H]4U

    Messages:
    2,743
    Joined:
    Oct 17, 2011
    Great post and way of seeing the game. When I heard in the past that they were using cobalt, I figured it wasn't an issue and they'd gone that route because it works. Obviously it has not turned out well, with defect rates requiring them to disable the iGPU on what is likely a quad-core die running as a dual-core at lower clock rates than 14nm. It's really not looking good any way you slice it. Then they water-chilled that cherry-picked 10k Xeon... damn, Intel. Sort your shit out or you won't be competitive for long.

    Didn't know Apple was on 10nm; that makes sense of why they usually dominate mobile processors.
     
  37. mesyn191

    mesyn191 2[H]4U

    Messages:
    2,918
    Joined:
    Jun 28, 2004
    In general stacking requires you to cut power (heat) roughly in half per die in order to make it work. Which frequently means you have to cut the clock speed in half (for 2 dies stacked, if you're going to stack 3 or more god help you). So if you double your logic by stacking chips but cut your clock speed in half you're looking at about the same overall performance. Or worse.

    Maybe someone will have a eureka moment and somehow address this, but right now it's not looking fixable.

    That is why Intel is going the MCM route. You can get multiple dies on a substrate but still space them out so heat isn't a problem. (edit) Which in turn allows you to still crank up the clock speed.
     
  38. mesyn191

    mesyn191 2[H]4U

    Messages:
    2,918
    Joined:
    Jun 28, 2004
    Yeah Intel will be a mess for a couple of years or so.

    I sort of expect them to do some of the stuff AMD had to do with Bulldozer (i.e. blow out core counts and TDPs to get clock speeds up some, while also dropping prices to get sales up). They don't really have much of a choice. Still, I don't expect them to suffer for their error here as much as AMD did with Bulldozer, since Skylake is still a pretty good core overall and their 14nm++ process isn't bad either.

    They aren't used to playing second fiddle to anyone, though. It'll be interesting to see how it plays out for them. I wouldn't be surprised to see them resort to dirty tricks again, just like in the Athlon vs. P4 days.
     
  39. AlphaAtlas

    AlphaAtlas Limp Gawd Staff Member

    Messages:
    372
    Joined:
    Mar 3, 2018
    No, power consumption isn't directly proportional to clock speed. When you cut clock speed, you can also cut voltage, which gives you much more power savings overall.
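    A rough dynamic-power sketch shows why: CMOS switching power goes roughly as C·V²·f, so dropping voltage along with frequency compounds the savings. The 0.7x voltage figure below is an assumed illustration, not a measured pairing:

    ```python
    # Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
    def relative_power(v_rel: float, f_rel: float) -> float:
        """Power relative to baseline, given relative voltage and frequency."""
        return v_rel ** 2 * f_rel

    print(f"half frequency, same voltage: {relative_power(1.0, 0.5):.2f}x power")
    print(f"half frequency, 0.7x voltage: {relative_power(0.7, 0.5):.3f}x power")
    ```

    So halving the clock alone halves power, but halving it while dropping voltage to 70% cuts power to roughly a quarter, which is why stacked dies need much less than the naive "2x the power" estimate.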
     
    Armenius and serpretetsky like this.
  40. ole-m

    ole-m Limp Gawd

    Messages:
    334
    Joined:
    Oct 5, 2015
    While I agree, I also have to disagree: in the enterprise, AMD has a huge advantage, on the scale of Bulldozer vs. Skylake.
    That advantage is going to be absolutely huge with a process advantage and a design that is absolutely superior for high core counts in the enterprise.

    Isn't it just because of performance, i.e. clock speeds?
    We could do Ryzen on Intel's 14nm and we'd see higher power consumption but near 5GHz.
    Some processes are meant for low power (Ryzen), some for high performance (Intel); hence the mile-long AMD Vega pipeline to get 1600MHz, while Nvidia on 16nm achieves better clocks without sacrificing die space for pipeline. Efficiency and all that.
    Process nodes and pipelines are difficult to guesstimate, though.