Chipmakers Are Unsure of Ability to Transition to Sub-10nm Technology

cageymaru

Chipmakers are slowing the transition to sub-10nm process nodes as money woes start to affect the foundries. Companies such as MediaTek and Qualcomm are rolling out new 14/12nm products while waiting for 7nm technology to mature for 5G products. As reported earlier, GlobalFoundries has given up on 7nm and UMC is focusing on mature process nodes. That leaves only TSMC and Samsung to carry the sub-10nm torch for the time being.
China has taken a different tack to secure the sub-10nm process node. It recruited a former TSMC deputy manager of technology, surnamed Chou, to steal confidential files detailing the entire 16nm and 10nm processes. He tried to take the data with him to a new job in China, but was arrested for breach of trust.

The cost of developing sub-10nm chips is prohibitively high. HiSilicon recently said it would spend at least US$300 million developing its new-generation SoC chip manufactured using 7nm process technology.
 
Does this mean AMD's 7nm GPU will suck? Ryzen is pretty small so it's OK, but for GPUs they need to push over 400mm² to compete with Nvidia.
 
Does this mean AMD's 7nm GPU will suck? Ryzen is pretty small so it's OK, but for GPUs they need to push over 400mm² to compete with Nvidia.

I doubt anyone knows at this point. It depends a lot on whether they are doing something different from just shrinking the last design... which they’ve been doing forever. I am personally not on the “wait for 7nm” hype train.
 
The next gen of Radeon, I believe, is a 7nm shrink of Vega. I think Navi is also 7nm but a different architecture (don't quote me on that). What makes things confusing is that TSMC's 7nm is supposed to be roughly equivalent to what Intel calls its 10nm process.
 
TSMC will have its hands full putting out Apple, Nvidia, and AMD chips for the foreseeable future, and Samsung will be hard pressed to keep up with DDR5 and GDDR6 demand. That doesn't leave a lot of room in the 7nm space for the rest of the pack, and for the majority of electronics the performance increase gained by going to 7nm doesn't justify the cost increase over the 14-10nm space, which has more competition and vastly better yields.
 
I doubt anyone knows at this point. It depends a lot on whether they are doing something different from just shrinking the last design... which they’ve been doing forever. I am personally not on the “wait for 7nm” hype train.
Well, Lisa Su said a couple of weeks back that 7nm Vega will not be a simple shrink, but will include many improvements to the design. Even so, die size is crucial: 7nm is expensive because it has low yields, and low yields drive the cost up further as die size increases. On the CPU side it's OK, because AMD makes small chips and glues them together with Infinity Fabric, but GPUs don't work that way.
For example, instead of making a 180mm² chip like Ryzen, they will have to make a 400-500mm² chip for Vega (or Navi) to be competitive at the high end. So if Ryzen yields 80% good chips and 20% waste, Vega might yield only 40-50% good chips and 50-60% waste. Guess who pays for all that waste?... you and me.
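The yield numbers above can be roughly reproduced with the simple Poisson defect model, Y = exp(-D*A): yield falls exponentially with die area for a fixed defect density. The defect density below is a made-up illustrative value chosen to give a Ryzen-sized die about 80% yield, not a real TSMC 7nm figure.

```python
import math

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson model: fraction of dies expected to be defect-free."""
    return math.exp(-defects_per_mm2 * area_mm2)

# Assumed defect density (illustrative only, not a published foundry number).
D = 0.0012  # defects per mm^2

small = die_yield(180, D)  # a Ryzen-sized die (~180 mm^2)
large = die_yield(500, D)  # a big-GPU-sized die (~500 mm^2)

print(f"180 mm^2 die yield: {small:.0%}")
print(f"500 mm^2 die yield: {large:.0%}")
```

With that assumed defect density the small die yields about 80% while the big die drops to roughly 55%, which is the shape of the argument: the bad dies' cost gets folded into the price of the good ones, and it bites harder the bigger the chip.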
 
TSMC will have its hands full putting out Apple, Nvidia, and AMD chips for the foreseeable future, and Samsung will be hard pressed to keep up with DDR5 and GDDR6 demand. That doesn't leave a lot of room in the 7nm space for the rest of the pack, and for the majority of electronics the performance increase gained by going to 7nm doesn't justify the cost increase over the 14-10nm space, which has more competition and vastly better yields.

Huawei is already producing the Kirin 780 on TSMC's 7nm process, and Qualcomm is sampling 7nm parts now too. This all bodes well for AMD's next-gen Zen.
 
I think it was the 130nm shrink around 2002 where AMD got delayed and ended up needing to use copper interconnects.

This was the beginning of the end. Every generation has taken longer and needed increasing amounts of rare-earth materials. Everything I have heard about post-silicon chips is still in the extreme-theory stage.

The last few generations of shrinks from different suppliers have used increasingly narrow metrics to define a shrink: well, this part of the chip is 14nm wide (28nm tall), etc., while parts of the rest of the chip can be 65nm.
 
Chip design starts peaked in the late '90s. So the thinning out started a long time ago.
 
The next gen of Radeon, I believe, is a 7nm shrink of Vega. I think Navi is also 7nm but a different architecture (don't quote me on that). What makes things confusing is that TSMC's 7nm is supposed to be roughly equivalent to what Intel calls its 10nm process.

See, I suspect Navi was going to be a warmed-over Vega, which itself was an upgraded Fury X, which itself was a descendant of the 7970. But after the Vega debacle (c'mon now, it was desperately disappointing) I reckon Navi was pushed back to make it more of a change for the better than originally intended. We'll see though.

I think it was the 130nm shrink around 2002 where AMD got delayed and ended up needing to use copper interconnects.

This was the beginning of the end. Every generation has taken longer and needed increasing amounts of rare-earth materials. Everything I have heard about post-silicon chips is still in the extreme-theory stage.

The last few generations of shrinks from different suppliers have used increasingly narrow metrics to define a shrink: well, this part of the chip is 14nm wide (28nm tall), etc., while parts of the rest of the chip can be 65nm.

Even before that point, breaking the micron barrier (1000nm) seemed a major hurdle. Nevertheless, the Intel 10nm / foundry 7nm stuff seems almost a bridge too far as it is, and might for all intents and purposes be the leading edge for years and years.
 
I think it was the 130nm shrink around 2002 where AMD got delayed and ended up needing to use copper interconnects.

This was the beginning of the end. Every generation has taken longer and needed increasing amounts of rare-earth materials. Everything I have heard about post-silicon chips is still in the extreme-theory stage.

The last few generations of shrinks from different suppliers have used increasingly narrow metrics to define a shrink: well, this part of the chip is 14nm wide (28nm tall), etc., while parts of the rest of the chip can be 65nm.
The tech is there, but I agree this is part of the larger issue: it is becoming prohibitively expensive to shrink nodes as we push the boundaries of the laws of physics. We're going to need a new type of transistor sooner rather than later if we are going to keep increasing our computational capacity.
 
My fear/concern with the next Vega is that it will use the same expensive HBM2, be power hungry, and underperform. I'd really like to see some solid competition, like AMD is providing with Ryzen and Threadripper.

Originally I thought Navi was an all-new design; if they just offer something subpar again, it will be really disappointing.
 
People keep saying this is what stacking was supposed to solve. But if stacking only works for memory, then we could end up hitting a wall for years to come.
 
Some hard choices are going to be needed soon: do chipmakers continue to invest in shrinking dies, with the massive research costs of exotic solutions that may or may not pay off, or do they look at other ways of meeting the demand for more processing power and risk other companies making breakthroughs they don't have access to?

I imagine on-die parallel processing will become more of a thing, assuming the software side can keep up.
 