TSMC now sinking assets into developing 2nm node!

This comes from DigiTimes, courtesy of our friends at The FPS Review. This shrinking of process nodes is coming fast and furious, it seems to me. I would love for someone in the thread to detail who exactly the major players developing these nodes are worldwide, and why TSMC seems to be killing it. How will this affect enthusiasts? What do these process shrinks yield in terms of performance, and can we expect to see products based on them soon? These were the thoughts that popped into my head. Here is the link: https://www.thefpsreview.com/2020/0...es-research-and-development-for-2-nm-process/
 
Unless Intel shifts their focus back to being a meritocracy, I don't see them catching up any time soon. So it's neck and neck between TSMC and Samsung. I look forward to some 2nm AMD chips.
 
2nm chips!?! Wow. The next bet will be some sort of quantum craziness or light-based chips or something, as we are fast approaching the limits of current chip design and physics.
 
How did Intel screw the pooch so badly? It's like they completely shut down all their R&D. Did all the engineers they let go end up at TSMC and Samsung?
 
Just like when we went from micrometers to nanometers, we will eventually go from nanometers to picometers, and probably from there eventually down to femtometers. By the time we even come close to thinking about femtometer process nodes, though, we will probably be engineering the protons, neutrons, and electrons of the atom to create some sort of quantum transistors rather than using "conventional" materials.
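For a sense of how little headroom is actually left, here's a quick back-of-the-envelope sketch in Python. The atomic figures are standard textbook values; keep in mind the node names themselves are marketing labels rather than literal feature sizes:

```python
# How process-node labels compare to the size of silicon itself.
NM = 1e-9   # nanometer in meters
UM = 1e-6   # micrometer in meters

silicon_lattice = 0.543 * NM   # silicon lattice constant, ~0.543 nm
si_atom_radius = 0.111 * NM    # silicon covalent radius, ~111 pm

for label, size in [("0.5um (a mid-1990s node)", 0.5 * UM),
                    ("14nm", 14 * NM),
                    ("2nm", 2 * NM)]:
    print(f"{label}: ~{size / silicon_lattice:.1f} silicon lattice constants")

print(f"For reference, one silicon atom is ~{2 * si_atom_radius / NM:.2f} nm across")
```

A literal 2nm feature would span fewer than four lattice constants of silicon, which is why anything labeled "femtometer" could only ever be a naming convention, not an actual feature size.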
 
They got greedy and lazy and thought they could milk us, the consumers, forever. I am ashamed to have an Intel processor.

Not only did they get greedy and lazy, they did some SERIOUSLY shady stuff over the years. I had an i7 920, then an i7 4790K, and now I'm happy with Ryzen :) Not only for the performance, but to support a company that isn't abusive like Intel.

Long-winded video; grab some popcorn and learn about Intel's abuse over the years:
 
2nm chips!?! Wow. The next bet will be some sort of quantum craziness or light-based chips or something, as we are fast approaching the limits of current chip design and physics.

We'll be in negative nanometers very soon. You know what that means...
 
next bet will be some sort of quantum craziness
Quantum is now a thing; it will just take some time before a larger group becomes enthused by it, as it'll require forming new habits around how we design software, and probably new programming languages as well. If you are new to application development, it might be cool to look into it, as you'll basically have no annoying baggage to deal with.
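For anyone curious, here's a minimal taste of what that looks like today: a two-qubit Bell state in Python using Qiskit (assuming `qiskit` and `qiskit-aer` are installed; this is just a sketch to show how different the mental model is, not a recommendation of any particular framework):

```python
# Entangle two qubits and measure them -- the "hello world" of quantum code.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                     # put qubit 0 into an equal superposition
qc.cx(0, 1)                 # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # collapse both qubits into classical bits

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)  # roughly half '00' and half '11'; never '01' or '10'
```

Notice there's no branching logic anywhere: you build a circuit and reason about probability distributions, which is exactly the kind of new habit the post above is talking about.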

 
Intel got greedy with their constantly changing sockets, forcing you to buy a new mobo every time you wanted to upgrade.
 
Intel got greedy with their constantly changing sockets, forcing you to buy a new mobo every time you wanted to upgrade.

That explains why Intel was sluggish about increasing core counts, but what about their process tech? That used to march forward more reliably. They were/are on 14nm literally forever. They were ahead of TSMC, then let them catch up and pass them. There must be some hidden story on the back end that has not been told.

Was it a case of not enough R&D money? Or was it a series of random setbacks and bad luck, combined with better luck for TSMC?
 
That explains why Intel was sluggish about increasing core counts, but what about their process tech? That used to march forward more reliably. They were/are on 14nm literally forever. They were ahead of TSMC, then let them catch up and pass them. There must be some hidden story on the back end that has not been told.

Was it a case of not enough R&D money? Or was it a series of random setbacks and bad luck, combined with better luck for TSMC?

Well, some of it is pooch-screwing; what exactly happened, I'm not sure anyone outside of Intel knows for sure. Some of it is just having issues: apparently their 10nm node has been a huge headache to get working. Remember, it isn't like you just head to Home Depot and pick up new fab tech; there's a shit ton of R&D involved, and that can go wrong. Part of it is also that Intel doesn't seem to be bullshitting around with their naming. TSMC is legitimately ahead of Intel on process, but not by as much as the naming suggests. Their 7nm stuff would not be called 7nm by the ITRS roadmap. Basically, when they improve their process, they just give it the next node name down, even if they didn't make a shrink anywhere near big enough to actually be that small.

It wasn't a lack of R&D spend, though; Intel has spent a lot on their 10nm node, it just hasn't paid off yet.
 
That explains why Intel was sluggish about increasing core counts, but what about their process tech? That used to march forward more reliably. They were/are on 14nm literally forever. They were ahead of TSMC, then let them catch up and pass them. There must be some hidden story on the back end that has not been told.

Was it a case of not enough R&D money? Or was it a series of random setbacks and bad luck, combined with better luck for TSMC?

Intel tried to change too many things in 10nm. In addition to a tighter gate pitch than other companies' 10nm processes, which would have made Intel's the densest of them, Intel also tried to replace the copper interconnects between transistors with cobalt, and had to move to quad-patterned conventional optics, which means higher failure rates. Cobalt is extremely hard to work with as an electrical interconnect, especially at such small sizes. You may wonder why anyone would ever switch away from copper given its electrical properties, but the reason is that when you get down to these extremely small nanometer sizes, copper no longer behaves very well. Cobalt reduces resistance by 2x at these sizes and gets a 5-10x improvement in electromigration resistance compared to copper. A very worthwhile improvement that would have made Intel's 10nm the best by far. Only, these changes, combined with conventional optical lithography, led to significant challenges. The density would have been much easier to hit with EUV instead of quad patterning, but when Intel started designing the node, they were so far ahead of the competition that EUV wasn't possible yet. Remember, Intel's 10nm products were originally set to go into production in 2015. They are approaching six years late on desktop chips now, yet we have only seen EUV actually used on production nodes within the past year.


edit: found a good wikichip article that actually gets into it more: https://fuse.wikichip.org/news/525/...els-10nm-switching-to-cobalt-interconnects/2/
So while the transistor performance has been improving, the copper wire resistance has actually been increasing as the wire got smaller. This means that the signal slows down, distance traveled decreases, and we’re consuming more power than desired. In other words, despite having higher performance transistors, there is a growing disparity between the transistor capabilities and the wire capabilities. The copper wires have become a serious bottleneck.

Cobalt on the other hand has higher resistivity, but its electron mean free path is considerably lower – in fact it’s 1/4 the average distance, down to single-digit nanometer. Additionally, in contrast to copper, it has been demonstrated that a single film, as thin as 1 nm, is sufficient to serve as both the liner and barrier for cobalt. This creates a new scaling path forward for cobalt interconnect. While there are a couple of other factors that determine the final resistance of the wire, it looks as though Intel has managed to hit the Copper-Cobalt Crossover point whereby cobalt results in a performance win over copper. We suspect foundries will follow Intel as they scale their interconnects in future nodes.

It’s worth noting that cobalt isn’t used for everything. It’s only used for the first two metal layers (i.e., M0 and M1) where you have your local interconnect that have very narrow pitches (e.g., 36nm) and where cobalt does benefit them. Intel claims this provides a 2x reduction in via resistance as well as 5-10x improvement in electromigration in those layers. For the global routing and the large power rails which are longer distance and much thicker, it makes sense to continue to use copper. With future nodes, as additional upper metal layers shrink below the Copper-Cobalt crossover point, we’ll start to see cobalt climbing up the stack.

In addition to the interconnect, Intel is also using cobalt for the Metal 2 through Metal 5 cladding layer, again to improve electromigration. Low-κ carbon-doped oxide (CDO) dielectrics are used on eleven of the thirteen layers. This is the same Low-κ that was used for the 14 nm process. Finally, due to the narrow line widths, Intel also introduced cobalt fill at the trench contact, replacing the tungsten contact metal that was used previously (note that tungsten continues to be used for the gate) and reducing resistance.
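To put rough numbers on that crossover, here's a toy model in Python. The resistivities, mean free paths, and barrier thicknesses below are ballpark literature figures rather than Intel's actual process data, and the thin-film scattering formula is a crude approximation, but it shows mechanically why copper loses its edge in very narrow trenches:

```python
# Toy model of the copper-cobalt crossover (illustrative numbers only).
# Two effects from the article: (1) copper needs a thicker liner/barrier
# shell than cobalt's ~1 nm film, and (2) copper's electron mean free path
# (~39 nm) is about 4x cobalt's, so surface scattering penalizes copper
# more as wires shrink. Effective resistivity uses the simple thin-film
# approximation: rho_eff = rho_bulk * (1 + 3*mfp / (8*width)).
RHO_CU, MFP_CU = 17e-9, 39e-9   # bulk resistivity (ohm*m), mean free path (m)
RHO_CO, MFP_CO = 60e-9, 10e-9

def wire_resistance(width_nm, barrier_nm, rho, mfp, length_nm=1000):
    """Resistance of a 1 um wire in a 2:1 aspect-ratio trench, in ohms."""
    core_w = (width_nm - 2 * barrier_nm) * 1e-9      # conductor width (m)
    core_h = (2 * width_nm - 2 * barrier_nm) * 1e-9  # conductor height (m)
    if core_w <= 0:
        return float("inf")                          # barrier fills the trench
    rho_eff = rho * (1 + 3 * mfp / (8 * core_w))     # surface-scattering penalty
    return rho_eff * (length_nm * 1e-9) / (core_w * core_h)

for w in (40, 20, 12, 10):
    r_cu = wire_resistance(w, barrier_nm=2.5, rho=RHO_CU, mfp=MFP_CU)
    r_co = wire_resistance(w, barrier_nm=1.0, rho=RHO_CO, mfp=MFP_CO)
    print(f"{w}nm trench: Cu ~{r_cu:.0f} ohm/um, Co ~{r_co:.0f} ohm/um")
```

In this sketch copper wins comfortably at 40nm and 20nm trench widths, but cobalt pulls ahead around ~12nm, which is qualitatively the Copper-Cobalt Crossover point the article describes.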
 
Intel tried to change too many things in 10nm. In addition to a tighter gate pitch than other companies' 10nm processes, which would have made Intel's the densest of them, Intel also tried to replace the copper interconnects between transistors with cobalt, and had to move to quad-patterned conventional optics, which means higher failure rates. Cobalt is extremely hard to work with as an electrical interconnect, especially at such small sizes. You may wonder why anyone would ever switch away from copper given its electrical properties, but the reason is that when you get down to these extremely small nanometer sizes, copper no longer behaves very well. Cobalt reduces resistance by 2x at these sizes and gets a 5-10x improvement in electromigration resistance compared to copper. A very worthwhile improvement that would have made Intel's 10nm the best by far. Only, these changes, combined with conventional optical lithography, led to significant challenges. The density would have been much easier to hit with EUV instead of quad patterning, but when Intel started designing the node, they were so far ahead of the competition that EUV wasn't possible yet.


edit: found a good wikichip article that actually gets into it more: https://fuse.wikichip.org/news/525/...els-10nm-switching-to-cobalt-interconnects/2/

I'm glad somebody is pointing out the technical challenges Intel faced with 10nm.

Lisa Su of AMD talked about how they "bet big" on TSMC's 7nm. History is written by the victors, and luckily for AMD, their bet paid off. Intel bet on some new material science, and their bet didn't pay off.

I'm not propping up one company or throwing shade at another. I'm just bringing light to the challenges the semiconductor industry faces in the pursuit of shrinking features, reducing power consumption, and increasing performance.

It's a battle.
 
Maybe there will be an announcement of an 800-picometer node in the near future.
 
Will this be the final silicon node allowed by quantum mechanics, or is it possible to go even lower, perhaps to 1 nm?
 
Will this be the final silicon node allowed by quantum mechanics, or is it possible to go even lower, perhaps to 1 nm?

Since it is nowhere near an actual 2nm pitch size, we'll keep seeing "smaller" nodes, if nothing else as marketing fluff, as things go forward. We have not reached the end of lithography yet; they'll keep improving the process, and TSMC's M.O. is that when they get an improved process tech, they call it a smaller node. So I fully expect to see "picometer" nodes in the future, even though the names will have nothing to do with actual feature size. As for how small we can actually make things? Who knows, but there is some solid research suggesting 1nm may be achievable.

At this point, though, don't pay any attention to the sizes you see coming out of fabs; they are largely detached from reality. Just take the name as a marketing term that means "newer, better node." Remember, even if there isn't a feature shrink, or only a minimal one, there are other things they can improve, like the materials used and so on. So you could very well have a new lithography process that isn't really any smaller than the previous one but still yields much faster chips.
 
This is the thing that will benefit AMD going forward, but it took a lot of adjustment of perception from everyone. A LOT of people thought it was a mistake when AMD spun off GloFo. I did, for sure. Not having the same capabilities as Intel seemed crazy. But it was the long game. By letting someone else do the R&D on the process node while they focus on their design, we are where we are now.
 
This is the thing that will benefit AMD going forward, but it took a lot of adjustment of perception from everyone. A LOT of people thought it was a mistake when AMD spun off GloFo. I did, for sure. Not having the same capabilities as Intel seemed crazy. But it was the long game. By letting someone else do the R&D on the process node while they focus on their design, we are where we are now.

You gonna sit there and tell me that fabless semi companies like Broadcom and Qualcomm haven't been successful for decades? :D

The only roadblock AMD had to overcome was giving GF enough of their business to keep the lights on (while they transitioned to bulk silicon)... but leaving for another lover was always the exit strategy here!

Remember the huge anchor weight that was SOI, and how it took AMD FIVE YEARS to fully integrate ATI's bulk-silicon libraries with their own and produce Llano/Trinity?

It was the beginning of a ten-year painful transition to bulk silicon for both the fab and AMD. At least they didn't have to eat GlobalFoundries' massive losses during those years!
 
My first CPU for a system I built myself was the AMD Athlon Thunderbird B 1GHz, back in 2001. It was on the 180nm process. That is almost two orders of magnitude bigger than this.

A PIII for me, in high school. 250nm. I've been thinking about the orders of magnitude too.
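For anyone who wants to check the "orders of magnitude" claim, here's a quick sanity check in Python (the node sizes are the commonly cited ones for those chips):

```python
import math

# Feature sizes of the processes mentioned above, in nanometers.
nodes_nm = {"Pentium III (Katmai)": 250,
            "Athlon Thunderbird": 180,
            "TSMC 2nm target": 2}
for name, nm in nodes_nm.items():
    print(f"{name}: {nm}nm is {math.log10(nm / 2):.2f} orders of magnitude above 2nm")
```

The Thunderbird's 180nm works out to a factor of 90, i.e. just shy of two orders of magnitude, so the post above checks out.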
 
After reading this thread I think the question becomes: when will AMD start getting greedy and screwing us?
I don't think they're at the point where they can risk getting too greedy yet, but don't be surprised if Zen 3 hits performance numbers at or better than Intel's and they decide to price comparably to Intel. The fact that Intel still hasn't lowered their prices outside of HEDT shows there's still meat on the bone for AMD to price their products a bit higher.
 
Agree with both of the last statements. AMD isn't there just yet to really screw the customer, and they know it. But I well remember buying my first Athlon X2 *AFTER* the price dropped from $1K to $600. I just couldn't justify spending $1K on a processor in my early twenties.
 
If Intel and Apple/ARM don't get their shit together soon, we might just see a repeat of the $1600 FX-60 sooner than we think.
Competition is good, but that works both ways!

You mean like how AMD's current top desktop processor is $3900? :p

Anyway, the FX-60 was $1K, not $1600, and Intel had processors approaching that price at the time too. Things are the same way once again: Intel has $4000 processors with a ton of cores, and AMD is just matching their prices now that they have the performance to do so.
 
I don't think they're at the point where they can risk getting too greedy yet, but don't be surprised if Zen 3 hits performance numbers at or better than Intel's and they decide to price comparably to Intel. The fact that Intel still hasn't lowered their prices outside of HEDT shows there's still meat on the bone for AMD to price their products a bit higher.

All things considered, the CEO (Lisa Su) is not dumb enough to do that. She is entirely aware of what it takes to be successful long term, knows what happens if you are not (Intel), and is smart enough to know the difference.
 
Name another 64-core desktop processor, or even a 32-core... or even just another top end processor that has a lower cost per core than the next lower model.

Looks like his post was removed. Yeah, the highest-end enthusiast processor is only $750, and that is a 16-core/32-thread monster.
 
All things considered, the CEO (Lisa Su) is not dumb enough to do that. She is entirely aware of what it takes to be successful long term, knows what happens if you are not (Intel), and is smart enough to know the difference.

She isn't ultimately in charge. If the board decides they want higher prices, she either does it, or they replace her.
 
She isn't ultimately in charge. If the board decides they want higher prices, she either does it, or they replace her.

Lol! Nope, AMD is not Intel, and they aren't going to pull an Intel, at least while Lisa Su is in charge.
 
Before people get lost in PR numbers: "nm" has been broken ever since TSMC renamed their 20nm FinFET process "16nm" and decided not to follow the pack, just fluffing the numbers instead.

Intel's 10nm node is in fact smaller than TSMC's 7nm:
Intel 10nm logic = 54nm x 36nm
TSMC 7nm logic = 54nm x 40nm

So stop looking at PR numbers... or you delude yourself... just saying ;)
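To put a number on that, here's a quick sketch in Python using the pitches quoted above, treating gate pitch x metal pitch as a crude proxy for logic density (it ignores cell height, fin counts, and everything else, so it's only a rough comparison):

```python
# Compare the quoted pitch numbers instead of the marketing node names.
# One "logic unit" of area = contacted gate pitch x minimum metal pitch.
intel_10nm = 54 * 36   # nm^2 per unit -> 1944
tsmc_7nm = 54 * 40     # nm^2 per unit -> 2160

print(f"Intel 10nm: {intel_10nm} nm^2")
print(f"TSMC 7nm:  {tsmc_7nm} nm^2")
print(f"Intel's unit is ~{(1 - intel_10nm / tsmc_7nm) * 100:.0f}% smaller, "
      f"despite the 'bigger' node name")
```

By this crude measure, Intel's "10nm" cell footprint comes out about 10% smaller than TSMC's "7nm," which is exactly the point about PR numbers.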
 
There's always a benefit to AMD CPUs: they hold their value like American cars. You can always buy last gen's on the cheap!
 