Rumor: Intel Releasing Three Generations Of 10nm Processors

Megalith · 24-bit/48kHz · Staff member · Joined: Aug 20, 2006 · Messages: 13,000
I am positive that “Icecake” is an error and is actually “Icelake”; the former sounds like something Google would use. The initial 10nm family, Cannonlake, is expected in 2017.

The third generation would be called Tigerlake, which indicates a second tock in the tick-tock model. This year we will see Kaby Lake based on 14nm, which is also an extra tock in the sequence. A tock basically means that Intel is not yet moving to a new production node with a smaller fabbed architecture but is sticking with the existing fabrication process.
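The cadence described above can be sketched as a small table. The tick/tock labels below are the commonly reported ones for recent generations circa 2016; treat this as an illustration of the pattern, not an official Intel roadmap:

```python
# Illustrative tick-tock cadence (as commonly reported, not an official table).
# tick = process shrink, tock = new microarchitecture on the same node.
cadence = [
    ("Sandy Bridge", "32nm", "tock"),
    ("Ivy Bridge",   "22nm", "tick"),
    ("Haswell",      "22nm", "tock"),
    ("Broadwell",    "14nm", "tick"),
    ("Skylake",      "14nm", "tock"),
    ("Kaby Lake",    "14nm", "tock"),  # the "extra tock" on 14nm
]

for name, node, phase in cadence:
    print(f"{name:13s} {node:5s} {phase}")
```

The extra 14nm entries at the bottom are exactly the break from strict tick-tock the post is talking about.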
 
Given how increasingly difficult it will be to keep shrinking the process, the tick-tock model is probably impossible to sustain going forward. So no surprise there, I guess.
 
Given how increasingly difficult it will be to keep shrinking the process, the tick-tock model is probably impossible to sustain going forward. So no surprise there, I guess.

1nm would be the absolute theoretical limit for feature sizes, since atomic distances are in that range. Realistically, quantum tunneling puts the minimum Si-based transistor feature size at around 5nm. We will almost certainly hit that as soon as they move past 10nm feature sizes.
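The reason tunneling becomes a hard wall is that leakage grows exponentially as barriers thin. A toy WKB estimate makes the point (assumed values: a 1 eV rectangular barrier and the free-electron mass; real devices differ considerably, so this is only an order-of-magnitude illustration):

```python
import math

# Toy WKB estimate of electron tunneling through a rectangular barrier:
# T ~ exp(-2 * kappa * d). Illustrative numbers only, not a device simulation.
M_E  = 9.109e-31   # electron mass, kg
HBAR = 1.055e-34   # reduced Planck constant, J*s
EV   = 1.602e-19   # one electron volt, J

def tunnel_probability(width_nm, barrier_ev=1.0):
    # Decay constant inside the barrier, 1/m
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

for d in (5.0, 2.0, 1.0, 0.5):
    print(f"{d:4.1f} nm barrier -> T ~ {tunnel_probability(d):.1e}")
```

Halving the barrier width doesn't double the leakage; it multiplies it by orders of magnitude, which is why shrinking below a few nanometers gets ugly so fast.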
 
Really? I'm pretty sure the typo is on the other half. It will actually be "Ricecake" not "Icecake".
 
Seeing as how I just got my shiny new 6600K setup humming along this weekend... I think I'm set for another 5 years at least. I'd been rocking a heavily overclocked 2500K since 2011. It took everything I threw at it with ease.

I hit 4.5 right out of the gate, fully stable, without even breaking a sweat. Hoping to hit 4.8 or so with a little fiddling and fine-tuning.
 
Without something to replace silicon, we're going to have to scale out rather than up. Consumer-grade multi-CPU motherboards might make a comeback. Skulltrail II, bitches.
 
It will be more about thermal management now. They could squeeze more out of the current technology if somehow they figure out a way to bend the laws of physics a bit.
 
Seeing as how I just got my shiny new 6600K setup humming along this weekend... I think I'm set for another 5 years at least. I'd been rocking a heavily overclocked 2500K since 2011. It took everything I threw at it with ease.

I hit 4.5 right out of the gate, fully stable, without even breaking a sweat. Hoping to hit 4.8 or so with a little fiddling and fine-tuning.

I just upgraded from my old E8400 system to a 6700K system this weekend ... I am also looking forward to sticking with my system for years before needing to upgrade to a new processor ... it was a bit of a shock going from Windows boot times measured in minutes to a fully running desktop in seconds.
 
So.... what do we get out of this? The whole processor arena has been very disappointing for the last 2 years.

Hell, I'd say anyone that still has an original i7 920 is still good to go for a while.
 
So.... what do we get out of this? The whole processor arena has been very disappointing for the last 2 years.

Hell, I'd say anyone that still has an original i7 920 is still good to go for a while.
Until we finally move away from silicon I don't think it will be getting any more exciting. Graphene, CNT, black phosphorus, InGaAs, photons... So many possibilities.
 
Without something to replace silicon, we're going to have to scale out rather than up. Consumer-grade multi-CPU motherboards might make a comeback. Skulltrail II, bitches.

I really doubt we will see multi-CPU setups, since to take proper advantage you need to be a threading wizard. As has been shown, getting a proper benefit going from 4 threads to 8+ has been hard as all hell on the software side: not everything can be threaded, and even when it can, the benefit may not be worth the cost in man-hours.
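The diminishing returns of piling on threads can be sketched with Amdahl's law, where p is the parallelizable fraction of the workload (the values of p below are made up for illustration):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
# p = parallelizable fraction, n = number of threads. Illustrative values only.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.80, 0.95):
    s4 = amdahl_speedup(p, 4)
    s8 = amdahl_speedup(p, 8)
    print(f"p={p:.2f}: 4 threads -> {s4:.2f}x, 8 threads -> {s8:.2f}x "
          f"(extra gain from doubling: {s8 / s4:.2f}x)")
```

Unless the code is almost entirely parallel, doubling from 4 to 8 threads buys surprisingly little, which is exactly why extra sockets are a hard sell for consumer workloads.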

And yeah, planar materials are going to be a huge leap when they finally end up being usable. We already know that graphene alone doesn't work, since it is a semimetal and thus has no bandgap. Then the new planar-material darling showed up in the form of black phosphorus, aka phosphorene, which does have a bandgap but is currently harder to mass-produce and requires an oxygen-free environment in which it can be manipulated. In theory, if they find a way to use both (graphene interconnects, phosphorene transistors), the jump would be huge.

But alas, that is only theory and a couple of small experiments for the time being; we will have to wait and see what happens from 2020 onward.
 
Without something to replace silicon, we're going to have to scale out rather than up. Consumer-grade multi-CPU motherboards might make a comeback. Skulltrail II, bitches.

Nah, Intel could fit 4x the cores on a chip by just ditching the iGPU.
 
It will be more about thermal management now. They could squeeze more out of the current technology if somehow they figure out a way to bend the laws of physics a bit.

We bend or break all other types of laws all the time; why do we hold the laws of physics to a special level?
 
Read an article recently about them using light emitters in processors with a very simple tooling change. They got something like 8x the performance out of the same area as a conventional transistor CPU. The researchers claimed it was a very easy process to retool for, and we should expect retail stuff in the next 5-7 years. I'll see if I can dig the article up on my lunch break.
 
Given how increasingly difficult it will be to keep shrinking the process, the tick-tock model is probably impossible to sustain going forward. So no surprise there, I guess.

We need a new triple cadence now.

Tick, Tack, Toe?

Snap, Crackle, Pop?
 
I thought Intel said they were leaving silicon for 7nm (http://arstechnica.com/gadgets/2015...d-to-10nm-will-move-away-from-silicon-at-7nm/). So no surprises here really. I suspect they are going to need an extra year or more if they are switching the semiconductor substrate.

I suspect others will be able to make silicon work at 7nm, but who knows. Similar to how Intel did FinFET at 22nm while others did planar 20nm.

People who have been complaining about the snail's pace of progress on the CPU side, just get prepared for it to get slower. Unless somebody comes up with a really novel solution, I suspect shrinks are going to become even more rare. Who knows, they might push out to every 3-5 years or longer. I think we are a ways from hitting a complete roadblock, though. I suspect they could even keep the pace at 2 years if it were economical, but I'm sure it takes a while to get yields up on all these exotic processes so they don't have to charge 2x as much.
 
So.... what do we get out of this? The whole processor arena has been very disappointing for the last 2 years.

The only thing I can think of is maybe AMD might actually catch up..? :D

AMD is releasing Zen @ 14nm later this year. I'm really hoping it might actually be able to compete this time around.
 
The only thing I can think of is maybe AMD might actually catch up..? :D

AMD is releasing Zen @ 14nm later this year. I'm really hoping it might actually be able to compete this time around.

It has no choice but to. AMD is dead if they can't compete for another generation. We all know that.
 
We bend or break all other types of laws all the time; why do we hold the laws of physics to a special level?

It's not special, it's just really difficult. Mitigating heat would lead to larger overclocks. Personally I'd like to see coolant channels running into a chip or between stacked chips.
 
So.... what do we get out of this? The whole processor arena has been very disappointing for the last 2 years.

Hell, I'd say anyone that still has an original i7 920 is still good to go for a while.

I'd say it's been disappointing for a lot longer than 2 years. As you alluded to, even Nehalem is still pretty fast when overclocked, and those chips were first released at the end of 2008. Everything since then has been only small incremental improvements. Those improvements will eventually add up to something worth upgrading to, but it will take a while.

I think only part of it is a limitation of physics/manufacturing processes though. The other part is a lack of competition. About the same time AMD stopped releasing competitive CPUs was when performance improvements from Intel slowed to a crawl. Since then Intel has just been reducing power consumption more than they are increasing performance. They are also holding back a bit in some areas. Skylake with eDRAM would be an impressive performance jump especially for gaming/single-threaded performance, but they aren't going to make such a chip...
 
We bend or break all other types of laws all the time; why do we hold the laws of physics to a special level?

US Export laws. We can break them, but they aren't. It's like an NSA encryption thing. They can break it, but we can't... :D

I'd say it's been disappointing for a lot longer than 2 years. As you alluded to, even Nehalem is still pretty fast when overclocked, and those chips were first released at the end of 2008. Everything since then has been only small incremental improvements. Those improvements will eventually add up to something worth upgrading to, but it will take a while.

This is my hope. Even if upgrading a generation or two isn't going to yield much, give it 4, 5, 6 generations of small upgrades and it'll end up being a much bigger upgrade. I am getting ready to finally upgrade. Clock speed won't be much different (hopefully I can overclock a bit), but going M.2, USB 3, etc. alongside some core improvements will net a pretty good upgrade experience.
 
I think only part of it is a limitation of physics/manufacturing processes though. The other part is a lack of competition. About the same time AMD stopped releasing competitive CPUs was when performance improvements from Intel slowed to a crawl. Since then Intel has just been reducing power consumption more than they are increasing performance. They are also holding back a bit in some areas. Skylake with eDRAM would be an impressive performance jump especially for gaming/single-threaded performance, but they aren't going to make such a chip...


I don't think it is a lack of competition. Everything hits the point of diminishing returns, and we have been there for a while on single-threaded performance. You pour in enormous resources and only get small gains. My first CPUs ran at less than 1 MHz; we have had a more-than-thousandfold gain in clock speed since then, but we hit a wall there as well.

The leapfrog gains of the 80's and 90's are gone forever, the new reality is incremental gains.

If you look at any mature technology, getting 3% performance increase/year would be a great number.

Incremental gains aren't the anomaly, they are the normal situation everywhere. It was the 80's-90's massive gains that were the anomaly. It is just that many people grew up with that and think it is normal.
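The point about incremental gains compounding can be made concrete with a quick calculation, comparing the ~3%/year figure above against a made-up '90s-style growth rate (both numbers are illustrative, not measurements):

```python
# Compound growth sketch: small annual gains vs. an old-school rapid cadence.
# The 3%/yr figure comes from the discussion above; 40%/yr is a made-up
# stand-in for '80s-'90s style yearly gains, purely for contrast.
def cumulative_gain(annual_rate, years):
    return (1 + annual_rate) ** years

for years in (2, 5, 10):
    print(f"{years:2d} years at  3%/yr -> {cumulative_gain(0.03, years):.2f}x total")
print(f"10 years at 40%/yr -> {cumulative_gain(0.40, 10):.1f}x total")
```

A decade of 3%/year gets you roughly a third more performance, which is why upgrading every generation stopped making sense but skipping five generations still pays off.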
 
I'd say it's been disappointing a lot longer than 2 years. As you alluded to even Nehalem is still pretty fast when overclocked, and those were first released at the end of 2008. Everything since then has only been small incremental improvements. Those improvements will eventually add up to something worth upgrading to, but it will take awhile.

I think only part of it is a limitation of physics/manufacturing processes though. The other part is a lack of competition. About the same time AMD stopped releasing competitive CPUs was when performance improvements from Intel slowed to a crawl. Since then Intel has just been reducing power consumption more than they are increasing performance. They are also holding back a bit in some areas. Skylake with eDRAM would be an impressive performance jump especially for gaming/single-threaded performance, but they aren't going to make such a chip...

Intel does have to align itself to market demands and the market is much more interested in power consumption ... laptops have been the dominant computing platform for a long time and making more power efficient chips helps Intel compete in that market

Also, without a fundamental shift in computing technologies there are likely not too many massive upgrades left to make to the existing chips ... the integration of more features into the die and reducing the number of motherboard chips (by integrating functions) has much greater impact and is substantially easier than speed jumps
 
I don't think it is a lack of competition. Everything hits the point of diminishing returns, and we have been there for a while on single-threaded performance. You pour in enormous resources and only get small gains. My first CPUs ran at less than 1 MHz; we have had a more-than-thousandfold gain in clock speed since then, but we hit a wall there as well.

The leapfrog gains of the 80's and 90's are gone forever, the new reality is incremental gains.

If you look at any mature technology, getting 3% performance increase/year would be a great number.

Incremental gains aren't the anomaly, they are the normal situation everywhere. It was the 80's-90's massive gains that were the anomaly. It is just that many people grew up with that and think it is normal.

Very good points, and spot on.

For me personally, I'm kind of enjoying not having to replace my main system components every 1.5 to 2 years now.
 
Incremental gains aren't the anomaly, they are the normal situation everywhere. It was the 80's-90's massive gains that were the anomaly. It is just that many people grew up with that and think it is normal.

Reminds me of a book review I just read, of "The Rise and Fall of American Growth". Not sure if I agree with its premise, but it seems to be very well written and interesting nonetheless. One of its premises is, just as you say, that the huge gains we saw were the anomaly, not the incremental growth we're seeing now.

http://www.nytimes.com/2016/01/31/books/review/the-powers-that-were.html
 
I don't think it is a lack of competition. Everything hits the point of diminishing returns, and we have been there for a while on single-threaded performance. You pour in enormous resources and only get small gains. My first CPUs ran at less than 1 MHz; we have had a more-than-thousandfold gain in clock speed since then, but we hit a wall there as well.

I don't think it's ONLY a lack of competition either, but I do think it's at least a small part of the reason for the lack of large performance increases on desktop CPUs. If AMD had a CPU right now that could match Skylake (or even Haswell), I think Intel would suddenly be offering up some Skylake eDRAM goodness (which wouldn't take "enormous resources" and provided a pretty substantial gain in single-threaded performance on the Broadwell chip that had it).

Of course part of it is also hitting the physics/technology wall. Even Intel isn't immune to that. Another reason for the small gains is that the only people who want faster consumer desktop CPUs at this point are gamers and some more niche uses like video encoding/editing. For the things most people do their CPU isn't the bottleneck. Slow HDDs, bloatware, and battery life are more likely to be bottlenecks for "normal" people.

For me personally, I'm kind of enjoying not having to replace my main system components every 1.5 to 2 years now.

Yeah that is the upside. At the rate things are going this CPU/mobo will probably have a lifespan of 7-8+ years in my system, whereas I used to upgrade every couple years. Of course I just end up spending that money on different PC components though. :)
 
My last system (Core i7 930) lasted me close to 5 years, and I would not complain if my current system lasts me another 5 years.

Just leaves more money for other stuff such as GPU and monitor
 
Very good points, and spot on.

For me personally, I'm kind of enjoying not having to replace my main system components every 1.5 to 2 years now.
I'm enjoying it immensely. While I still have fun putting a new system together, I don't miss hearing that clock rapidly counting down to obsolescence from the moment you first boot it up.
 
1nm would be the absolute theoretical limit for feature sizes, since atomic distances are in that range. Realistically, quantum tunneling puts the minimum Si-based transistor feature size at around 5nm. We will almost certainly hit that as soon as they move past 10nm feature sizes.

From what I understand, with current methods and materials the absolute limit may be closer to 7nm due to leakage current. Insulating regions aren't thick enough to fully prevent this even on larger process nodes, and the pace of shrinking has definitely slowed down. I'm not saying it's impossible, but we've gotten damn close to the point where we simply have to find something other than silicon to build on if we want to go smaller.
 