How Intel Makes A Chip

Megalith

What does it really take to develop a microprocessor? This article sheds a little light on one of the costliest and riskiest feats in business.

Intel rarely talks about how it creates a new chip. When Bloomberg Businessweek visited the Hillsboro fab in May, we were given the most extensive tour of the factory since President Obama visited in 2011. The reticence is understandable, considering that the development and manufacture of a new microprocessor is one of the biggest, riskiest bets in business. Simply building a fab capable of producing a chip like the E5 costs at least $8.5 billion, according to Gartner, and that doesn’t include the costs of research and development ($2 billion-plus) or of designing the circuit layout (more than $300 million).
 
I wonder what kind of brains it would take to make a chip fab plant in America. Most Intel chips are from Vietnam, Costa Rica, or Malaysia.
 
I had no idea they had one there, props. But they have them all over the world for some reason. I never got a chip from Oregon =)
 
D1D and D1X, the development fabs, are located in Hillsboro, OR. The high-volume fabs copy exactly from D1D/D1X, with the current process generation located at Fab 32 and Fab 24 (AZ and Ireland). Chip packaging is performed at the location you see stamped on the CPU package.
 
How much went into making failed SoCs for the mobile market? How about for the server market? All that R&D, and here we have a 3% faster Skylake that costs more than Haswell. Money well spent. If Moore's Law exists, it certainly isn't for end-user consumers.
 
Moore's Law ended where the spike in R&D occurred on that graph. Now the competition between AMD and Intel is like the Cold War, with Intel as the West: they will win it by simply out-spending AMD until AMD is so far behind they just call it quits.
 
Moore's Law ended where the spike in R&D occurred on that graph. Now the competition between AMD and Intel is like the Cold War, with Intel as the West: they will win it by simply out-spending AMD until AMD is so far behind they just call it quits.

AMD has a rather large OEM market. They won't call it quits, but they may quit a market segment. Who knows.
 
As I've learned from the least informed people at [H] and elsewhere, chip design is primarily a duty of Intel's marketing department. :p
 
Whoever makes reverse hyperthreading first will set new sales records. This stuff actually exists; there was an article on it not too long ago. It wasn't called reverse hyperthreading, but the concept of having multiple cores run a single thread is real.
 
Whoever makes reverse hyperthreading first will set new sales records. This stuff actually exists; there was an article on it not too long ago. It wasn't called reverse hyperthreading, but the concept of having multiple cores run a single thread is real.

Nice Google bait. Read about it; most think it is fantasy stuff.
 
Whoever makes reverse hyperthreading first will set new sales records. This stuff actually exists; there was an article on it not too long ago. It wasn't called reverse hyperthreading, but the concept of having multiple cores run a single thread is real.
Yeah, it's called VLIW and went nowhere due to the (compiler, control software, etc.) complexity required for the corner cases where it could theoretically be useful. See: Itanium, Transmeta, Elbrus, and whatever the next chip is called that tries this approach.

The problem with the view of RHT being a solution is that throughput *is not* limited by the core itself. Modern CPUs are *capable* of retiring more instructions per clock than typical software achieves, due to limitations caused by execution dependencies, cache misses, mispredicted branches, etc.

Just to hammer this point home, all the strategies developed over the years to keep a core busy (superscalar execution, OoO execution, speculative execution, SMT, etc.) are NOT reverse hyperthreading. For some reason, the people who are leading experts in chip design have not tried to graft RHT onto a chip to help in strange code execution corner cases. I wonder why.
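To make the dependency point concrete, here is a toy retirement model (my own simplification, not a real CPU simulator): a core retires up to WIDTH instructions per cycle, but an instruction cannot retire until everything it depends on has retired in an earlier cycle. The WIDTH value and the whole scheme are illustrative assumptions.

```python
# Toy model (my own simplification, not a real CPU simulator): a core
# retires up to WIDTH instructions per cycle, but an instruction cannot
# retire until everything it depends on retired in an earlier cycle.
WIDTH = 4  # assumed retire width, roughly in line with modern cores

def cycles_to_retire(deps):
    """deps[i] = list of earlier instruction indices that i depends on."""
    retire_cycle = [0] * len(deps)
    retired_in = {}  # cycle -> how many instructions retired that cycle
    for i, d in enumerate(deps):
        # can't retire before the cycle after the last dependency retires
        earliest = max((retire_cycle[j] + 1 for j in d), default=1)
        c = earliest
        while retired_in.get(c, 0) >= WIDTH:  # find a free retire slot
            c += 1
        retired_in[c] = retired_in.get(c, 0) + 1
        retire_cycle[i] = c
    return max(retire_cycle)

independent = [[] for _ in range(16)]                # no dependencies
chained = [[i - 1] if i else [] for i in range(16)]  # serial chain

print(cycles_to_retire(independent))  # 4 cycles -> IPC of 4
print(cycles_to_retire(chained))      # 16 cycles -> IPC of 1
```

Same instruction count, 4x difference in throughput purely from the dependency structure; a wider core (or a second core) does nothing for the chained case.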
 
As I've learned from the least informed people at [H] and elsewhere, chip design is primarily a duty of Intel's marketing department. :p

You mean Intel was on top of their game when ARM just ran in and took the mobile market?
With a 3-year lead on 14nm technology, they still couldn't make a dent in the mobile market.
 
You mean Intel was on top of their game when ARM just ran in and took the mobile market?
With a 3-year lead on 14nm technology, they still couldn't make a dent in the mobile market.
What is this nonsense? It's not that I'm surprised, but it makes little sense even by your usual posting standards.

Intel failed in mobile for some of the same reasons ARM has a hard time moving out of mobile and embedded: market inertia and no benefits over the products dominating the established market.

Stepping back a few years before the rise of the smartphone, Intel did have the top-performing ARM chip with XScale. It sold very well for the market it was intended for: PDAs and embedded computing products. Intel decided to get out of that market and sold off that product line to Marvell in favor of developing Atom. As the smartphone market grew, Intel did not even have a product made for phones until years later, around 2012-2013 (45nm Penwell). In 2015 Intel released the first SoFIA SoCs (28nm) for low-end/sub-$100 smartphones. Unsurprisingly, neither was competitive. Those were the last handset-oriented chips Intel released, with Broxton and the next-gen SoFIA chips recently cancelled. It doesn't seem that Intel did anything more than dip its toe into the smartphone market, so the results shouldn't be surprising.

Neither of those things has anything to do with a process lead. Intel was caught flat-footed when smartphones became a thing. It takes years to design and release a new chip, and ARM was in the right place with complete, cheaply licensable CPU cores totally suitable for the smartphone market. The same missed opportunity happened to AMD when netbooks were a thing. That's the game. Sometimes you have the right product at the right time, and sometimes you don't and completely miss out.
 
What I see as frustrating is that none of the SC companies share their tricks of the trade (which is understandable), but the result is companies reinventing the wheel (sometimes multiple times within the same company). I have seen ideas come up to solve a problem that work beautifully, and then someone comes along and says "Oh ya, we did that like 15 years ago at NEC/Hitachi/IBM/Intel/Motorola".

Or they patent something simple, so that no one else can use it. People at my employer have patented things that I'm sure other people at other SC companies would have come up with on their own just as well.
 
As I've learned from the least informed people at [H] and elsewhere, chip design is primarily a duty of Intel's marketing department. :p
Or that every week people forget that Moore's Law has nothing to do with pure speed.
 
I wonder what kind of brains it would take to make a chip fab plant in America. Most Intel chips are from Vietnam, Costa Rica, or Malaysia.

Are you for SRS bro? Intel fabs almost everything in America. They have 3 in Oregon, 3 in Arizona, 2 in New Mexico, 1 in Massachusetts. They have only three total outside the US. You are thinking of their packaging plants. After the chips are fabbed they go off to different locations to be packaged. They have them all over the world, for all kinds of reasons. They have one in the US, one in Costa Rica, one in Ireland, and so on.

Intel is heavy, heavy in the US. If you buy a 14nm chip it came from the US or Ireland; that's the only place they fab them. The label on the chip is where it's packaged, not fabbed.
 
The same missed opportunity happened to AMD when netbooks were a thing. That's the game. Sometimes you have the right product at the right time, and sometimes you don't and completely miss out.

I thought this was about Intel? I'm sure Intel just missed their chance, exactly like you described.
 
I wonder what kind of brains it would take to make a chip fab plant in America. Most Intel chips are from Vietnam, Costa Rica, or Malaysia.

Um, no. Intel doesn't fab chips in Vietnam, Costa Rica, or Malaysia. Intel's actual fabs are, or have been, in Oregon, California, New Mexico, Arizona, Massachusetts, Ireland, and Israel (excluding the technology-restricted fab in China).

Vietnam, Costa Rica and Malaysia are all substrate assembly and test facilities. These facilities are fairly low tech in comparison with an actual semiconductor fabrication facility.
 
Whoever makes reverse hyperthreading first will set new sales records. This stuff actually exists; there was an article on it not too long ago. It wasn't called reverse hyperthreading, but the concept of having multiple cores run a single thread is real.

The technical term is "MultiScalar". A lot of the original research is from Sohi's research group at Wisconsin from the 90s.
 
Yeah, it's called VLIW and went nowhere due to the (compiler, control software, etc.) complexity required for the corner cases where it could theoretically be useful. See: Itanium, Transmeta, Elbrus, and whatever the next chip is called that tries this approach.

VLIW has nothing to do with Multiscalar (sometimes referred to as reverse hyperthreading by uninformed message board people). VLIW is an instruction set paradigm that is completely independent from explicit and implicit hardware contexts. VLIW is in the same category as RISC and CISC.
 
VLIW has nothing to do with Multiscalar (sometimes referred to as reverse hyperthreading by uninformed message board people). VLIW is an instruction set paradigm that is completely independent from explicit and implicit hardware contexts. VLIW is in the same category as RISC and CISC.
That's not what multiscalar processors are either. Arguably the reservation stations for various execution units in modern processors fit the "multiscalar" paradigm well, and GPU architectures also fit it to a degree (via both massive parallelism and thread suspension). The "RHT" concept (which is basically an AMD fan wet dream), which supposedly uses more than one core to execute a single thread, has been done on the architectures I listed. All happen to be VLIW CPU implementations. I get your point about VLIW in the general sense not being about RHT, but in the context of my post it should have been clear I was speaking about production processors, especially when I listed explicit examples.
 
I wonder what kind of brains it would take to make a chip fab plant in America. Most Intel chips are from Vietnam, Costa Rica, or Malaysia.

Money is the big thing. There are only a few companies in America that make 300mm (12in) wafers anymore, so the equipment is pricey (or used). The regulations to build a fab are through the roof if you want to do it anywhere resembling a populated area. It's crazy expensive to get one going. But the upshot is that so few are doing it that they are able to pay off the building fairly quickly due to very little competition.
 
That's not what multiscalar processors are either. Arguably the reservation stations for various execution units in modern processors fit the "multiscalar" paradigm well, and GPU architectures also fit it to a degree (via both massive parallelism and thread suspension). The "RHT" concept (which is basically an AMD fan wet dream), which supposedly uses more than one core to execute a single thread, has been done on the architectures I listed. All happen to be VLIW CPU implementations. I get your point about VLIW in the general sense not being about RHT, but in the context of my post it should have been clear I was speaking about production processors, especially when I listed explicit examples.

Um, what I described is PRECISELY what Multiscalar processors are. Not only am I well versed in the Multiscalar research, I worked for several years with one of the principal researchers of Multiscalar (and read his double-ream thesis on more than one occasion; well, skimmed it, it was a double-ream thesis after all). Now you might be confusing the terms Superscalar and Multiscalar, but they are completely independent. Multiscalar is the use of multiple hardware contexts to run a single instruction stream, either explicitly or implicitly, for the purpose of reducing total execution latency.

It has nothing to do with reservation stations and is completely orthogonal to GPUs, which are designed to run multiple independent threadlets.

None of the examples you listed have anything to do with multiscalar (sometimes referred to as RHT by uninformed forum people). It has not been done on any Itanium, Transmeta, or Elbrus designs. All are pretty straightforward VLIW architectures with varying degrees of speculation within the instruction set. All execute instruction streams strictly in order. None of them executed a single instruction stream using multiple hardware contexts. Anyone saying that they have is ignorant of their history or just generally uninformed.
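For readers unfamiliar with the term, a toy sketch of the multiscalar idea as just described: slice one sequential computation into tasks, run the tasks on separate contexts while each speculates on its live-in value, then squash and re-execute any task whose guess was wrong. Every detail here (the naive zero prediction, the function names) is my own illustrative assumption, not taken from any real design.

```python
# Toy sketch of the multiscalar idea (all details are illustrative
# assumptions, not any real design): slice ONE sequential computation
# into tasks, speculate on each task's live-in value, and squash plus
# re-execute a task when the prediction was wrong.

def run_task(chunk, live_in):
    s = live_in
    for x in chunk:
        s = s + x  # loop-carried dependency on s
    return s

def multiscalar_toy(xs, n_tasks=4):
    size = -(-len(xs) // n_tasks)  # ceiling division
    slices = [xs[i * size:(i + 1) * size] for i in range(n_tasks)]

    # Phase 1: every context executes (in parallel on real hardware),
    # each naively predicting that its live-in value of s is 0.
    speculative = [run_task(c, 0) for c in slices]

    # Phase 2: commit tasks in program order; squash and re-execute any
    # task whose predicted live-in (0) differs from the real one.
    s, squashes = 0, 0
    for chunk, spec in zip(slices, speculative):
        if s == 0:
            s = spec           # prediction was right: commit as-is
        else:
            squashes += 1      # mis-speculation: re-run with real value
            s = run_task(chunk, s)
    return s, squashes

xs = list(range(100))
print(sum(xs))              # 4950, the sequential answer
print(multiscalar_toy(xs))  # (4950, 3): same result, 3 squashes
```

Note that with this naive predictor nearly every task squashes, which is exactly the practical problem: unless live-in prediction is very accurate, the extra contexts mostly re-execute work serially.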
 
Money is the big thing. There are only a few companies in America that make 300mm (12in) wafers anymore, so the equipment is pricey (or used). The regulations to build a fab are through the roof if you want to do it anywhere resembling a populated area. It's crazy expensive to get one going. But the upshot is that so few are doing it that they are able to pay off the building fairly quickly due to very little competition.

A) there are numerous semiconductor fabrication plants in the US from both US and international companies.

B) location makes almost no difference on the cost to build a fab. It costs virtually the same to make one in China as in the US, and basically the same to run as well.

C) there have been only a few companies EVER that have MADE 300mm wafers. Just like there are only a few companies that have ever built a 300mm fab. A significant number of the 300mm fabs in the world are in the US.

D) The regulations to build a fab are pretty minimal compared to basically any other industrial building. Modern fabrication facilities are almost completely self contained and have an incredibly small environmental footprint.

E) the real driver of where a fab actually gets built is tax breaks and tax incentives.
 
B) location makes almost no difference on the cost to build a fab. It costs virtually the same to make one in China as in the US, and basically the same to run as well.

The company I work for has Test and Assembly facilities in multiple locations throughout Asia. They actually moved these operations from America to Asia, solely due to labor costs. Geophysical and infrastructure uncertainty actually makes the buildings themselves more costly, since they have to handle earthquakes and floods (as opposed to, say, the central United States).

The rest of it I think we agree on. My overall point was that it's very hard for a startup company to get into the 300mm business, thus the established companies are at a major advantage.
 
The company I work for has Test and Assembly facilities in multiple locations throughout Asia. They actually moved these operations from America to Asia, solely due to labor costs. Geophysical and infrastructure uncertainty actually makes the buildings themselves more costly, since they have to handle earthquakes and floods (as opposed to, say, the central United States).

The rest of it I think we agree on. My overall point was that it's very hard for a startup company to get into the 300mm business, thus the established companies are at a major advantage.

T&A is both significantly more labor intensive and significantly less capital intensive compared to fabrication. A top-end, state-of-the-art high-volume T&A facility with a 15-20 year lifetime can be built for ~$1 billion just about anywhere in the world. In contrast, a fab will set you back in the range of $5-8+ billion, with a lifetime closer to 6-9 years.
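Working those figures into an annualized capital cost makes the gap even starker. The ranges are from the post above; the arithmetic itself is just back-of-envelope:

```python
# Back-of-envelope annualized capital cost using the ranges quoted
# above (dollars, years); the arithmetic is purely illustrative.
fab_cost, fab_life = (5e9, 8e9), (6, 9)
ta_cost, ta_life = (1e9, 1e9), (15, 20)

def annualized(cost, life):
    # best case: cheapest build spread over the longest lifetime;
    # worst case: priciest build spread over the shortest lifetime
    return cost[0] / life[1], cost[1] / life[0]

print(annualized(fab_cost, fab_life))  # roughly $0.56B-$1.33B per year
print(annualized(ta_cost, ta_life))    # roughly $0.05B-$0.07B per year
```

So per year of useful life, a fab ties up on the order of 10-20x the capital of a T&A facility, before counting tooling refreshes.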
 