Moore's Law Is Dead

I honestly believe the number of "Moore's Law Is Dead" articles doubles every two years.

We can no longer guarantee that computer chips will get smaller and more powerful year on year; we're getting close to capacity. What does this mean for the future of computing?
 
Heh... it means engineers and scientists will come up with new ways to make things smaller and faster using different techniques. Like they've been doing. There are limits to each process - change the process, change your limits.
 
It's amazing that we have been able to push the engineering of planar MOS technology as far as we have.

Having said that, we are rapidly heading towards a problem far more fundamental than any encountered previously. We are simply running out of atoms. Eventually you reach a number of atoms where the "bulk material with a percentage of dopants" ideas that underlie semiconductor design break down.

The lattice constant of silicon is about 0.5 nm, so 20 nm is only about 40 atoms across.
 
Moore's Law is dead when Moore's Law fails, not before. I wish these "futurists" would just understand that fact.
 
I think it's software's turn to become more powerful and efficient.

Yeah, dump .NET! I still use VB6 because it has a much lower memory footprint and processor hit compared with .NET bloatware.
 
Heh... it means engineers and scientists will come up with new ways to make things smaller and faster using different techniques. Like they've been doing. There are limits to each process - change the process, change your limits.

Yep. I for one welcome our new Graphene overlords ;)
 
It's amazing that we have been able to push the engineering of planar MOS technology as far as we have.

Having said that, we are rapidly heading towards a problem far more fundamental than any encountered previously. We are simply running out of atoms. Eventually you reach a number of atoms where the "bulk material with a percentage of dopants" ideas that underlie semiconductor design break down.

The lattice constant of silicon is about 0.5 nm, so 20 nm is only about 40 atoms across.

Well, there are still some other technologies out there that might let us use more computing power in a smaller space.

For the same die area, we have vertical layering. I can see how we may not be able to take this too far, due to too much heat in too small a space.

Nanoscale optical computing might let us put a lot of CPU cores "nearby" on single processor wafers and give us crazy parallelism with a ton of fast cores.

And hell, within 20 years we might have discrete CPUs on a die for different tasks, to make what we do every day faster. We may even end up with a quantum processor on die.
 
With Intel planning to move away from silicon within the next 5-6 years, I would believe Moore's Law will change.
 
I mean, sure, eventually an electron won't fit through a circuit properly but come on, this story is getting old.
 
You can't get narrower than one atom, so there is an eventual wall, unless someone figures out a new kind of electrical component that can be placed on a chip and replaces the gobs of components that together try to simulate it.
 
What year was it when they said the same thing, because 100 MHz was going to be the practical limit for silicon?
 
Someday the "Moore's law is dead" people will be correct....
So will those nuts walking around naked except for a cardboard box that says "The end is near."
 
With Intel planning to move away from silicon within the next 5-6 years, I would believe Moore's Law will change.

Intel has no plans to move away from silicon in 5-6 years. They're researching alternatives between now and 2020, and they "expect" Moore's Law to carry them down to the 7nm mark, although their troubles with 14nm hint that Intel is struggling with whatever method they think will get them there.

Intel even admits that Moore's Law probably has no chance past 10 years without significantly new materials that take us out of the silicon-only world. It'll probably be a mixture of silicon and other materials at first, before moving to an entirely new material.


Moral of the story: Moore's Law is dead, long live Moore's Law!
 
Lol, that article was retarded. If the author had a clue he wouldn't have written it.
 
You can't get narrower than one atom so there is an eventual wall unless someone figures a new kind of electrical component that can be placed on a chip that replaces gobs of the component together try to simulate.

You'll run into trouble long before you get down to one atom. The biggest issue is that after you shrink the transistors down to a certain size, you enter the "quantum twilight zone" where things start behaving in radically different and unintuitive ways. In particular, as transistors shrink, you have to start worrying about quantum electron tunneling where the electrons have a chance of simply being on the other side of the transistor, regardless of whether the gate has power.
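For a sense of scale, here's a back-of-the-envelope sketch in Python of the standard rectangular-barrier tunneling estimate, T ≈ exp(-2·d·√(2m(V−E))/ħ). The ~3 eV barrier height and the gate-oxide-ish widths are illustrative assumptions, not figures for any real process:

```python
import math

# Physical constants (SI units)
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E  = 9.1093837e-31   # electron mass, kg
EV   = 1.6021766e-19   # one electronvolt in joules

def tunneling_probability(barrier_ev, width_nm):
    """Rough rectangular-barrier (WKB-style) estimate of the chance an
    electron leaks straight through a barrier it classically couldn't cross."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# A ~3 eV barrier is roughly the Si/SiO2 band offset; the widths below
# are illustrative insulator thicknesses, not a specific process.
for width in (3.0, 2.0, 1.0, 0.5):
    print(f"{width:.1f} nm barrier -> leak probability ~ {tunneling_probability(3.0, width):.1e}")
```

The point is the exponential: halving the barrier width doesn't double the leakage, it multiplies it by many orders of magnitude.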
 
You'll run into trouble long before you get down to one atom. The biggest issue is that after you shrink the transistors down to a certain size, you enter the "quantum twilight zone" where things start behaving in radically different and unintuitive ways. In particular, as transistors shrink, you have to start worrying about quantum electron tunneling where the electrons have a chance of simply being on the other side of the transistor, regardless of whether the gate has power.

This.
 
Meh, people are already making transistors that operate off of quantum tunneling, so that isn't really an issue. The biggest issue is transferring the pattern to the wafer. We are now using 193 nm wavelength light to make 14 nm features. This has been achieved through a combination of magnifying lenses (about a 5x shrink), immersion fluids (about another 2x), and, as of the 14 nm point, double patterning. Double patterning essentially means you do everything for each layer twice, but slightly offset. It doubles all the lithography costs and doubles the defect chance, BUT it gets you to 14 nm. So not bad, right? The issue is going smaller. Lenses have hit their practical limits, as have the immersion fluids. To go smaller we need a different method of lithography; we can't just make incremental improvements. We need to go to something like E-Beam, EUV, or imprint lithography. Something that changes the starting point.
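For intuition on those numbers, the standard Rayleigh criterion ties wavelength, lens aperture, and feature size together: minimum half-pitch ≈ k1·λ/NA. A minimal Python sketch, assuming ballpark NA and k1 values (and keeping in mind that a node name like "14 nm" doesn't map one-to-one onto a single printed dimension):

```python
def min_half_pitch_nm(wavelength_nm, na, k1):
    """Rayleigh criterion: smallest printable half-pitch = k1 * lambda / NA."""
    return k1 * wavelength_nm / na

# 193 nm ArF light, water immersion (NA ~ 1.35), k1 ~ 0.25 as a rough
# practical single-exposure floor -- ballpark values, not tool specs.
single = min_half_pitch_nm(193, 1.35, 0.25)
print(f"193i single exposure : ~{single:.0f} nm half-pitch")      # ~36 nm

# Double patterning splits a layer into two offset exposures,
# roughly halving the effective pitch at double the litho cost.
print(f"193i double patterned: ~{single / 2:.0f} nm half-pitch")  # ~18 nm

# EUV at 13.5 nm buys the same features back at relaxed k1/NA.
print(f"EUV, NA ~ 0.33      : ~{min_half_pitch_nm(13.5, 0.33, 0.4):.0f} nm")
```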

Intel is betting a lot of money on EUV. They wanted EUV for the 14nm step. It relies on a far smaller wavelength of light than the 193 nm we currently use. The issue is that EUV is absorbed by EVERYTHING. Even air absorbs it. This means you have to take the entire complicated lithographic system, which already requires incredibly pure, dust-free environments, and vacuum-seal it. You need to create a contaminant-free vacuum chamber into which you can put wafers, put your resist on, spin coat, bla bla bla. Oh, and traditional lens materials also absorb EUV, so you need new lenses as well, which must also be incredibly pure. It is a huge pain in the ass, but Intel is claiming they can do it. If you want more info, just ask and I can expand on most of this stuff.
 
Meh, people are already making transistors that operate off of quantum tunneling, so that isn't really an issue. The biggest issue is transferring the pattern to the wafer. We are now using 193 nm wavelength light to make 14 nm features. This has been achieved through a combination of magnifying lenses (about a 5x shrink), immersion fluids (about another 2x), and, as of the 14 nm point, double patterning. Double patterning essentially means you do everything for each layer twice, but slightly offset. It doubles all the lithography costs and doubles the defect chance, BUT it gets you to 14 nm. So not bad, right? The issue is going smaller. Lenses have hit their practical limits, as have the immersion fluids. To go smaller we need a different method of lithography; we can't just make incremental improvements. We need to go to something like E-Beam, EUV, or imprint lithography. Something that changes the starting point.

Intel is betting a lot of money on EUV. They wanted EUV for the 14nm step. It relies on a far smaller wavelength of light than the 193 nm we currently use. The issue is that EUV is absorbed by EVERYTHING. Even air absorbs it. This means you have to take the entire complicated lithographic system, which already requires incredibly pure, dust-free environments, and vacuum-seal it. You need to create a contaminant-free vacuum chamber into which you can put wafers, put your resist on, spin coat, bla bla bla. Oh, and traditional lens materials also absorb EUV, so you need new lenses as well, which must also be incredibly pure. It is a huge pain in the ass, but Intel is claiming they can do it. If you want more info, just ask and I can expand on most of this stuff.

I am interested. Please expand.
 
Why does it matter that an arbitrary observation is going to run into trouble? It's not like we are going to hit a process wall, and all of a sudden space aliens will appear and probe us for failing to do it.

We'll be absolutely fine, and one day something neat will be done, and we will start working on that. And useless journos will still find some nervous-nelly bullshit to write about it, vainly hoping for click-throughs.
 
Alright, so basically EUV has been a clusterfuck to implement, but we're getting a lot closer now. We have the lens systems, we have the whole setup. The biggest issue remaining is power. The way we transfer patterns with photolithography is by shining light through a cutout. The light lands on a surface and makes parts of that surface become hard or soft; the soft part is washed off, and you are left with a nice mold to add whatever you want. The issue is: how much light does it take for the surface to change? Remember, the average chip might have dozens of layers. If each layer takes even an hour, that is simply way too long. For this reason you want high-intensity light, so that you keep exposure times low. The issue with our EUV setups so far is that they simply aren't intense enough to pattern a wafer fast enough to be economical. Ideally you want to be able to measure the time in minutes, and we just aren't there yet.
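The power problem is just energy bookkeeping: the resist needs a fixed dose (energy per area), so exposure time scales inversely with the light actually reaching the wafer. A rough Python sketch with made-up but plausible numbers for dose and optics losses:

```python
import math

def wafers_per_hour(source_watts, optics_transmission, dose_mj_cm2,
                    wafer_diam_mm=300):
    """Idealized throughput: time to deliver the resist dose across a
    whole wafer, ignoring stage moves and all other overhead."""
    radius_cm = wafer_diam_mm / 10 / 2
    area_cm2 = math.pi * radius_cm ** 2
    energy_j = dose_mj_cm2 * 1e-3 * area_cm2          # joules needed at wafer
    power_at_wafer = source_watts * optics_transmission
    return 3600 * power_at_wafer / energy_j

# Illustrative assumptions: ~30 mJ/cm2 resist, a few percent of the
# source light surviving the optics train. Real losses are worse.
for src in (10, 80, 250):
    print(f"{src:3d} W source -> ~{wafers_per_hour(src, 0.02, 30):.0f} wafers/hour (ideal)")
```

Throughput is linear in source power, which is why so much of the EUV fight has been about getting brighter sources.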

A few other technologies are so-called up-and-comers: E-Beam, nanoimprint, and self-assembly. I'll give a basic rundown and you can ask questions after.

E-Beam is the most relatable to photolithography. If shrinking the wavelength of light gives you a way to make smaller features, why not use electrons instead? Electrons have incredibly small wavelengths. The issue: imagine a light source as small as, say, your laser pointer. It puts out countless photons at a time in a nice blanket layer. Electrons, on the other hand, can be made into individual beams, but since electrons have a negative charge and like charges repel, you can't just have a coherent blanket of electrons. Thus you have to go literally electron by electron to fill up the whole wafer, which takes FOREVER. Like, I'm talking, if you want to do one layer a cm by a cm, you'd better leave it overnight, and god help you if someone across the block decides to tear up their driveway, because you're screwed. To counter this they use fancy lenses to run thousands of beams at once, but it is still too slow.
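The "leave it overnight" claim is simple arithmetic: total charge to deposit divided by beam current. A minimal sketch, assuming a ~100 µC/cm² resist dose and a 1 nA beam (ballpark figures, not any specific tool):

```python
def write_time_hours(dose_uc_cm2, area_cm2, current_na):
    """Single-pass e-beam write time: total charge needed / beam current."""
    total_charge_c = dose_uc_cm2 * 1e-6 * area_cm2
    return total_charge_c / (current_na * 1e-9) / 3600

# ~100 uC/cm2 dose and 1 nA per beam are illustrative assumptions.
print(f"1 cm2, one 1 nA beam : ~{write_time_hours(100, 1.0, 1.0):.0f} h")        # ~28 h
print(f"1 cm2, 1000 beams    : ~{write_time_hours(100, 1.0, 1000.0) * 60:.1f} min")
# Even 1000 beams leaves a full 300 mm wafer (~700 cm2) at ~20 hours per layer.
```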

So next up is nanoimprint. Imagine you have a see-through glass stamp with your features in it. You simply stamp down on the wafer, and the photoresist on the surface now sits only where the grooves in the stamp are. Shine a light through the stamp, all at once, and ta-da: a perfect layer. Of course, you're aiming for features below 10 nm. If you get even the smallest contaminant on your stamp, or a few atoms transfer from the stamp to the photoresist on the wafer, the stamp is ruined and you need to make a new one. Btw, the stamps are made almost atom by atom and cost well over $20k, so try not to press too hard. It's a super cool technique if they get mask destruction figured out, because you do the whole wafer at once, and you can get atomic resolution because you can take weeks to make the perfect mask. Some groups have tried adding a release lubricant to the stamp, which helps some, but again, not close yet.

Last but not least: self-assembly. This is tricky. If you want to make a house, all you need is a square-shaped molecule with a female plug at the top and a triangle molecule with a male plug at the bottom. Mix them up and you will get a triangle on a square (a house), and you only needed one pair of binding sites. If you want to do this for an entire processor, with, say, thousands of unique patterns, you need thousands of unique binding sites, and you then need to construct all the molecules with the proper binding sites. People have done some cool stuff with DNA, smiley faces and the like, but nothing at the level of complexity of chips.
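As a toy illustration of the binding-site idea (hypothetical part and site names, just to show the matching rule that would have to scale to thousands of unique pairs):

```python
# Each part exposes named binding sites; two parts stick only where the
# sites are complementary (A <-> a), like the square/triangle "house".
COMPLEMENT = {"A": "a", "a": "A", "B": "b", "b": "B"}

parts = {
    "square":   {"top": "A"},       # female plug on top
    "triangle": {"bottom": "a"},    # matching male plug on the bottom
    "circle":   {"bottom": "B"},    # no partner in the mix: stays loose
}

def binds(site_a, site_b):
    return COMPLEMENT.get(site_a) == site_b

for lower, lsites in parts.items():
    for upper, usites in parts.items():
        if "top" in lsites and "bottom" in usites and binds(lsites["top"], usites["bottom"]):
            print(f"{upper} assembles on top of {lower}")  # -> triangle on square
```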
 
God forbid there be an edit button. To add a bit on E-Beam: the issue with scaling up the number of beams is that each beam of electrons carries negative charge. If you want to aim the electrons at the surface with nm precision, you need to account for the charge of every other beam as you pattern the surface. If you have 10k beams, you have to account for 10k interactions, for each beam, at each step. If I remember correctly, the data requirements were in the terabits for each beam. Simply getting the calculations done and the data sent to the machine to control all the parts is an incredibly expensive task.
 
I have created a new law, called Cyzzledyzzle's Law.

My law states that people will eventually stop saying Moore's Law is dead; the phenomenon is proven to occur upon death and is observable by the layperson.
 
Meh, people are already making transistors that operate off of quantum tunneling, so that isn't really an issue.

Perhaps it is a difference of terminology, but I am not aware of any practical transistors that operate "off of quantum tunneling". There are one-atom transistors, but they don't work unless cooled to near absolute zero (and I doubt that Microsoft is going to be releasing a Surface with a liquid-helium cooling system anytime soon).

Quantum tunneling basically means that you no longer have a guarantee that a given electron will be on a specific side of the transistor. Quantum physics dictates that a given subatomic particle doesn't have a definite position until you measure it; it exists as a superposition of multiple eigenstates (when you measure it, wave-function collapse "locks" it into a given position, that position being the result of the measurement). In essence, it is almost as if it is everywhere at once. With transistors that are big enough this isn't a problem, because as you scale things up you transition from the realm of the quantum to the realm of the classical. But at the quantum scale, you can no longer guarantee that an electron will be on a given side of a transistor, and a transistor that can't act as a gate and control the flow of electrons is very much useless.
 
Quantum tunneling isn't the same thing as uncertainty, and you don't need super-cooling. Basically, if you apply a voltage across a barrier that corresponds to a specific energy level on the other side, the electron can tunnel through the barrier to fill that energy level. It is not uncertain; it is definite and repeatable. I'll think about a good way to break it down and explain it by the weekend, if you're still interested by then. I unfortunately am not running on much sleep atm, so I feel like I am writing slightly crazy.

Chenming Hu's group at Berkeley (the same guy who came up with FinFETs) has been doing a lot of work on so-called Tunneling Field Effect Transistors (TFETs). If you care to read it, I found a recent thesis from his group which is publicly available. Linky

I'm fairly confident TFETs will be online before tunneling becomes an issue. The limits have almost always been in the lithographic tools, not the transistor design.
 
Every time I see an article that's a harbinger of the end of Moore's Law, a new technique is developed to make even smaller transistor gates. I imagine we will hit the wall eventually, but it's probably safe to assume that it isn't today. :D
 
I guess if they declare it's dead enough times, one of those times may end up being true. BUT! Today is not that day, and the near future isn't looking like it either.
 
There's actually another threshold for semiconductor manufacturing innovation: cost. While Intel continues to go it alone, and manages to stay a couple of years ahead of its competition in high-performance MPU manufacturing, a big chunk of the industry has formed the Common Platform Alliance to develop further processes.

The Alliance's upcoming 14/16nm node (reality note: 20nm is only being ramped up in Q1'14, and the schedules the CPA gives for new process nodes have always been too optimistic, so take any breathless "it's almost here" PR as basically hot air) is a hybrid node whose performance characteristics and density are not much improved over the 20nm node. Moore's Law is alive and well at Intel, but it's nearly dead at the foundries.
 