"Nehalem" Lead Architect Rejoins Intel to Work on New High-Performance Architecture

erek | [H]F Junkie | Joined: Dec 19, 2005 | Messages: 10,897
"Pat Gelsinger leading Intel is expected to have a big impact on its return to technological leadership in its core businesses, as highlighted in Gelsinger's recent comments on the need for Intel to be better than Apple (which he referred to as "that lifestyle company") at making CPUs, in reference to Apple's new M1 chip taking the ultraportable notebook industry by storm. The other front Intel faces stiff competition from, is AMD, which has achieved IPC parity with Intel, and is beating it on energy-efficiency, taking advantage of the 7 nm silicon fabrication process."

https://www.techpowerup.com/277533/...-to-work-on-new-high-performance-architecture
 
Fingers crossed on no more corner-cutting, security exploits, unnecessary market segmentation, high TDPs, or any more pluses to their existing 14nm+++ designs.
 
Fingers crossed on no more corner-cutting, security exploits, unnecessary market segmentation, high TDPs, or any more pluses to their existing 14nm+++ designs.
Not sure I would count on that.... I think Pat has a 1000% better chance of getting Intel through the next few years than Swan did. The bean counter was going to flush Intel. Pat may save it if he pulls most of their engineers' heads, and his own, out of their asses. :)

Let's not forget, Intel made all the design choices that led to a decade of crap security... including a cheating branch predictor that didn't bother to do basic security checks on reads. Pat was Intel's chief technical officer, and if he was paying attention he likely knew exactly what they were doing to gain a few percentage points of performance.

I hope Intel learns a little humility and gets down to it. Like it or not, for Intel, Apple isn't just a lifestyle company anymore... they are one of the leading fabless designers of high-performance, high-efficiency processors. AMD is a known quantity... but Pat is also going to have to deal with Nvidia in the next few years. At some point they close on ARM, and I wouldn't be shocked to see Nvidia produce socketed ARM chips and try to push the game industry to ARM. Leather Jacket Man so wants to say a 100% Nvidia system is the way it's meant to be played. Really, though, they're coming for Intel's server business. Three years from now, x86 will either be entrenched for another 20 years or on its way to being relegated to a niche within 5. And that sits 100% with Intel and Pat... if Nvidia comes out strong, AMD could switch gears to support ARM as well. Under Pat, x86 will either live or die. No pressure.
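For what it's worth, the "didn't bother to check reads" complaint maps onto the Spectre/Meltdown class of bugs. Here's a rough, illustrative C sketch (my own, not anything from Intel's designs) of the classic Spectre-v1 shape, plus the branchless index clamp in the spirit of Linux's `array_index_nospec()` mitigation; all function and array names are made up for the example:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: the classic Spectre-v1 shape. Architecturally the
 * bounds check is correct, but a speculative core may execute the
 * dependent loads before the branch resolves, leaking table[idx]
 * through the cache side channel. */
uint8_t table[16];
uint8_t probe[256 * 64];

uint8_t victim_unsafe(size_t idx, size_t len) {
    if (idx < len)                      /* predictor can guess "taken"... */
        return probe[table[idx] * 64];  /* ...and this load runs speculatively */
    return 0;
}

/* Branchless clamp modeled on Linux's array_index_nospec(): yields idx
 * when idx < len and 0 otherwise, with no branch for the predictor to
 * mispredict. Relies on arithmetic right shift of a negative value,
 * as the kernel's implementation does. */
size_t clamp_index(size_t idx, size_t len) {
    size_t mask = (size_t)((intptr_t)~(idx | (len - idx - 1))
                           >> (sizeof(intptr_t) * 8 - 1));
    return idx & mask;
}

uint8_t victim_masked(size_t idx, size_t len) {
    /* even a mispredicted speculative path cannot index outside table */
    return probe[table[clamp_index(idx, len)] * 64];
}
```

The point of the clamp is that even when the branch predictor guesses wrong, the speculatively executed load can never read outside `table`, so there is nothing secret to leak.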
 
Fingers crossed on no more corner-cutting, security exploits, unnecessary market segmentation, high TDPs, or any more pluses to their existing 14nm+++ designs.

I dunno, I was kind of curious to see how long they could keep selling Skylake to so many people.

It seemed to go so well for so long, and people are still eating it up ("10th" gen).
 
Fingers crossed on no more corner-cutting, security exploits, unnecessary market segmentation, high TDPs, or any more pluses to their existing 14nm+++ designs.
It is actually pretty amazing that Intel's chips on the larger process can hit 5.3 GHz. The clock-speed advantage gives them roughly a 10% advantage core vs. core, but their CPUs have fewer cores.

I know the transistors on Intel's 10nm chips are the same physical size as TSMC's 7nm transistors. The 14nm transistors are probably equal to 10 or 12 nm in "TSMC-scale" process numbers. It's basically one, maybe two, process generations behind. Can't believe after all this time they haven't figured out how to do 10nm (TSMC 7nm equivalent).
 
It is actually pretty amazing that Intel's chips on the larger process can hit 5.3 GHz. The clock-speed advantage gives them roughly a 10% advantage core vs. core, but their CPUs have fewer cores.

I know the transistors on Intel's 10nm chips are the same physical size as TSMC's 7nm transistors. The 14nm transistors are probably equal to 10 or 12 nm in "TSMC-scale" process numbers. It's basically one, maybe two, process generations behind. Can't believe after all this time they haven't figured out how to do 10nm (TSMC 7nm equivalent).

It's worse than that... Intel never figured out the 10nm they envisioned. All the talk of their 10nm being equal to TSMC 7nm is no longer correct; Intel never got that version of 10nm to work. Their engineers admitted a year ago already that they were too ambitious with their 10nm, and they had to completely scrap it and reduce the transistor density to make it work. In other words, their 10nm really is just 10nm; there is no longer any truth to the "Intel 10nm is denser than it sounds" line. The aggressive density is why they could never get a chip to work with all its cores, or its GPU... they always ended up with defects somewhere. So they had terrible yields and a handful of parts they could sell with the GPU disabled, or with low core counts.

Intel is working on their own 7nm... and perhaps at that node they can improve density enough to claim they are equal to TSMC's 5nm. I would hope they would learn from their own history and not swing for that... however, the more I hear Intel engineers open their mouths, the more I realize they haven't learned any humility (which is probably why they haven't been able to fix anything).
 
Link to source?
https://www.notebookcheck.net/Intel...admitted,than Moore's Law actually postulates.

"Intel's reason for lagging behind the competition when it came to 10nm CPUs (and consequently 7nm) was due to its over ambitious goals, according to CEO Bob Swan who made a public appearance for perhaps the first time at Fortune's Brainstorm Tech conference in Aspen, Colorado earlier this week. The ambitious goal was going after a 2.7x transistor density improvement over 14nm."
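As a back-of-envelope sanity check on that quoted 2.7x figure: using the approximate peak-density numbers commonly cited for these nodes (WikiChip-style estimates, not official Intel figures; real product densities run lower), the ratio does work out. A small sketch:

```c
/* Sanity-checking the "2.7x" density claim with approximate peak
 * densities as commonly cited (e.g. on WikiChip); treat all three
 * as ballpark numbers only.
 *   Intel 14nm                   ~  37.5 MTr/mm^2
 *   Intel 10nm (original target) ~ 100.8 MTr/mm^2
 *   TSMC N7                      ~  91.2 MTr/mm^2 */
static const double INTEL_14NM_MTR = 37.5;
static const double INTEL_10NM_TARGET_MTR = 100.8;
static const double TSMC_N7_MTR = 91.2;

/* scaling factor going from one node's density to another's */
double density_scaling(double from_mtr_mm2, double to_mtr_mm2) {
    return to_mtr_mm2 / from_mtr_mm2;
}
```

100.8 / 37.5 is about 2.69, i.e. the "2.7x over 14nm" goal Swan described; it also shows why the original 10nm target would have edged out TSMC's N7 on paper.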
 
They should bring back the Atom as their ultra-power-efficient mobile line and ramp it up to take the place of the i3-i9 mobiles.

Atom = mobile
i3-i9 = desktop
i3-i9 K and X = HEDT/enthusiast (Skull logo!)
Xeon = workstation and server

There's so much that Intel could accomplish by merely simplifying their CPU portfolio, IMO.
 
They should bring back the Atom as their ultra-power-efficient mobile line and ramp it up to take the place of the i3-i9 mobiles.

Atom = mobile
i3-i9 = desktop
i3-i9 K and X = HEDT/enthusiast (Skull logo!)
Xeon = workstation and server

There's so much that Intel could accomplish by merely simplifying their CPU portfolio, IMO.
Yeah, Intel is killing themselves with too many options catering to too many segments; even if they pulled it back a little they would be far more agile. Like a restaurant with 200 menu items, most of them half-assed with almost no difference between them: scale that back to a dozen and stick the landing.
 
<voice> "Good news everyone! The design is flowing again!" </voice>

But seriously - yes.
 
They should bring back the Atom as their ultra-power-efficient mobile line and ramp it up to take the place of the i3-i9 mobiles.

Atom = mobile
i3-i9 = desktop
i3-i9 K and X = HEDT/enthusiast (Skull logo!)
Xeon = workstation and server

There's so much that Intel could accomplish by merely simplifying their CPU portfolio, IMO.
I mean, with literally 99% of laptops having a ULV CPU, that's basically already done except for calling it an "Atom".
 
It'll be nice when, some day in the distant future, Intel can sell their CPUs on their merits and not through backroom deals and anti-competitive practices limiting how OEMs can utilize and market other manufacturers' chips.
Yeah, really the only reason we don't see more OEM AMD systems is that they have just as hard a time getting the chips as we do. If they manage to get that sorted, then things are golden.
 
Link to source?
Lakados already responded with the first Google return, which is Bob Swan himself saying they were too aggressive with transistor count.

For the record, I was thinking of Dr. Murthy Renduchintala, who was in charge over there... and a year or so back at one of the chip conferences he was very candid about what went wrong. Bottom line, as Bob Swan said, they had very aggressive goals, and when they ran into massive issues they didn't adjust; they kept going for a 2.7x bump in transistor count, which has never worked. At the time he said they would start shipping 10nm before the end of the year with an adjusted transistor count that would allow them to have actually decent yields. And for what it's worth, they fired him in the summer. lol
 
Pat Gelsinger going back to Intel will, I think, be a large boon to them. I owned a Nehalem Core i7 (the Core i7-920) that he played a huge part in designing, and yes indeed it was a giant step up from what I was using before (an AMD Athlon 64 FX-62 at the time). If there's anyone who could put both Apple and AMD in Intel's rear-view mirror for a good, long time once he really starts changing the game over there, it's Gelsinger. I've used a product he designed, and that gives me confidence Intel is going to start excelling again. Like LL Cool J said in one of his songs, it could very well be Gelsinger who makes Intel once again shout, "Don't call it a comeback... I've been here for years!" Out!
 
Lakados already responded with the first Google return, which is Bob Swan himself saying they were too aggressive with transistor count.

For the record, I was thinking of Dr. Murthy Renduchintala, who was in charge over there... and a year or so back at one of the chip conferences he was very candid about what went wrong. Bottom line, as Bob Swan said, they had very aggressive goals, and when they ran into massive issues they didn't adjust; they kept going for a 2.7x bump in transistor count, which has never worked. At the time he said they would start shipping 10nm before the end of the year with an adjusted transistor count that would allow them to have actually decent yields. And for what it's worth, they fired him in the summer. lol
I knew about the "aggressive" quote, but I have not seen anyone, anywhere, quoted as saying they lowered the transistor count.

Is this just inferred, or is there a verifiable quote?
 
I knew about the "aggressive" quote, but I have not seen anyone, anywhere, quoted as saying they lowered the transistor count.

Is this just inferred, or is there a verifiable quote?
Not sure there is one.
It was way back at Intel's 2018 Architecture Day.... I am not sure now if he actually said specifically that they lowered the transistor count; I think you are probably right, he likely just hinted at major changes that were "less aggressive". He did say they stuck with something that wasn't working for way too long and basically had to throw it all out and start over. Most companies don't admit exactly what went wrong with stuff like that, but Intel has blamed multi-patterning, since their 10nm doesn't use EUV. People smarter than me have read between the lines of what some of the techs have said and figure it has to do with their cobalt usage: Intel plates the cobalt instead of depositing it, which may be causing voids at pitches as low as 36nm. So while it's true Intel's 36nm pitch is around the same as TSMC's 40nm (Samsung's 8nm is 44nm), it sounds like the 10nm++ they are talking about using now has either solved the issue by changing the cobalt deposit/plating method or, more likely, by simply relaxing the pitch slightly. (They have also done a ton of work decoupling die designs from process nodes... which lets them backport to 14nm, but also makes it easier to use their old 10nm core designs on a 10nm++ that may have slightly different gate-size requirements.)

Bottom line, you're right, I am probably reading a bit into something I read a year and a half ago. :) I don't believe Dr. Murthy actually said specifically that they decreased the gate pitch... and I don't know for sure whether Intel did or didn't. To be honest, it sounds like they may have stayed hardheaded about 36nm at 10nm... because it still doesn't work right. lol. I joke, but yeah: they may have adjusted the pitch and transistor count, or they may have fixed their plating issues, but they haven't been public about the exact problems or fixes.

Without a doubt, though, they started 10nm over from scratch; the since-fired head of the Intel fabs said that much. Also, at the same conference, Raja Koduri said Intel's issues with 10nm showed them they needed to decouple their core designs from specific fab processes, which seems to have led to what we have now, with the 10nm core design being backported to 14nm. They can't stuff in as many cores, but they can implement the newer designs at 14nm, I guess. I read another Intel engineer piece a while ago (and I'm sorry for paraphrasing things I don't have handy to link) saying that with the decoupling of nodes Intel also realized AMD was doing the right thing with I/O dies, so going forward Intel was going to do its best to design its I/O bits to work with multiple designs and nodes.
 
If there's anyone who could put both Apple and AMD in Intel's rear-view mirror for a good, long time once he really starts changing the game over there, it's Gelsinger.
I'm not so sure about that. Nehalem was a performance win, but it wasn't that much greater in performance and especially price/perf ratio than AMD's Phenom II offerings IIRC. I remember looking at building a new computer, and comparing the i5 and the Phenom II 1060 - they were pretty neck and neck in the benchmarks, with the AMD chip a bit better on cost. Then Sandy Bridge and Bulldozer happened.

AMD is not the same company it was then, by many metrics. Intel would just go back to being the company they were then, but at much less advantage. Against AMD, the best case scenario I see for Intel (and consumers) is that we see a motivated Intel that starts innovating again, competing with a lean and energized AMD that is determined to continue its rise.

As far as ARM competition goes, that's anybody's guess. The M1 is a good chip because in addition to the CPU, there are many specialized accelerators added on. There isn't anything keeping Intel or AMD from doing the same thing, but this really only works in a closed hardware system like Apple's. The chip is tailor-made for Apple's computers, and to get the same type of performance, you'd have to abandon the generic general-purpose CPU idea completely. I feel rather that hyper-competition between AMD and Intel will leave ARM in the dust.

Just to update: apparently the market isn't too encouraged by these shakeups. As of this morning, Intel's stock price is down almost 9%.
 
I'm not so sure about that. Nehalem was a performance win, but it wasn't that much greater in performance and especially price/perf ratio than AMD's Phenom II offerings IIRC. I remember looking at building a new computer, and comparing the i5 and the Phenom II 1060 - they were pretty neck and neck in the benchmarks, with the AMD chip a bit better on cost. Then Sandy Bridge and Bulldozer happened.

AMD is not the same company it was then, by many metrics. Intel would just go back to being the company they were then, but at much less advantage. Against AMD, the best case scenario I see for Intel (and consumers) is that we see a motivated Intel that starts innovating again, competing with a lean and energized AMD that is determined to continue its rise.

As far as ARM competition goes, that's anybody's guess. The M1 is a good chip because in addition to the CPU, there are many specialized accelerators added on. There isn't anything keeping Intel or AMD from doing the same thing, but this really only works in a closed hardware system like Apple's. The chip is tailor-made for Apple's computers, and to get the same type of performance, you'd have to abandon the generic general-purpose CPU idea completely. I feel rather that hyper-competition between AMD and Intel will leave ARM in the dust.

Just to update: apparently the market isn't too encouraged by these shakeups. As of this morning, Intel's stock price is down almost 9%.
Intel already does some of that specialization with their myriad of specialized extensions, but too few things take advantage of them. Honestly, Intel and AMD need to sit at a table together and standardize a good number of them so Microsoft and Linux can adopt them properly as part of the x86 standard. They can call it x86-64+, then just add more pluses as they need to. Specialized hardware is only going to get cheaper and more abundant; Intel and AMD need to work together on updating the platform before it's too late.
That said, the stock drop was fair; they were a little overinflated after announcing their partnership with TSMC and their outsourcing plans.
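On the "myriad of extensions" point: because there is no agreed baseline, software has to probe for each extension at runtime and carry fallback code paths, which is part of why adoption lags. A minimal sketch of that dispatch pattern using GCC/Clang's `__builtin_cpu_supports` (the builtin and its feature-name strings are real; the function name and path labels are made up for the example):

```c
/* Sketch of runtime feature dispatch on x86: the kind of per-extension
 * probing every performance-sensitive library ends up shipping.
 * __builtin_cpu_supports is a GCC/Clang builtin (x86 targets only);
 * it returns nonzero when the running CPU advertises the feature. */
const char *pick_simd_path(void) {
    if (__builtin_cpu_supports("avx512f")) return "avx512";
    if (__builtin_cpu_supports("avx2"))    return "avx2";
    if (__builtin_cpu_supports("sse4.2"))  return "sse4.2";
    return "scalar";  /* portable fallback path */
}
```

Every one of those branches is a code path someone has to write, test, and maintain; the x86-64-v2/v3/v4 microarchitecture feature levels are one existing attempt at exactly this kind of standardization.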
 
I knew about the aggressive quote, but have not seen anyone, anywhere quoted saying lower transistor count.

Is this just inferred or verifiable quote?
They changed the design; they were trying something interesting and very aggressive with the design of their transistors and the dummy gates that surround them. It didn't pan out as planned. They have since gone back to a more traditional design, but their new patents on vertical transistors also look very interesting.
 
I'm not so sure about that. Nehalem was a performance win, but it wasn't that much greater in performance and especially price/perf ratio than AMD's Phenom II offerings IIRC. I remember looking at building a new computer, and comparing the i5 and the Phenom II 1060 - they were pretty neck and neck in the benchmarks, with the AMD chip a bit better on cost. Then Sandy Bridge and Bulldozer happened.

AMD is not the same company it was then, by many metrics. Intel would just go back to being the company they were then, but at much less advantage. Against AMD, the best case scenario I see for Intel (and consumers) is that we see a motivated Intel that starts innovating again, competing with a lean and energized AMD that is determined to continue its rise.

As far as ARM competition goes, that's anybody's guess. The M1 is a good chip because in addition to the CPU, there are many specialized accelerators added on. There isn't anything keeping Intel or AMD from doing the same thing, but this really only works in a closed hardware system like Apple's. The chip is tailor-made for Apple's computers, and to get the same type of performance, you'd have to abandon the generic general-purpose CPU idea completely. I feel rather that hyper-competition between AMD and Intel will leave ARM in the dust.

Just to update: apparently the market isn't too encouraged by these shakeups. As of this morning, Intel's stock price is down almost 9%.

Nehalem absolutely slaughtered the Phenom II.

Phenom II was competitive with the Core 2 architecture (though it still often lost out), but it was close. Phenom II did provide fantastic value for the money and also overclocked incredibly well.

https://www.anandtech.com/show/2658
 
Not sure there is one.
It was way back at Intel's 2018 Architecture Day.... I am not sure now if he actually said specifically that they lowered the transistor count; I think you are probably right, he likely just hinted at major changes that were "less aggressive". He did say they stuck with something that wasn't working for way too long and basically had to throw it all out and start over. Most companies don't admit exactly what went wrong with stuff like that, but Intel has blamed multi-patterning, since their 10nm doesn't use EUV. People smarter than me have read between the lines of what some of the techs have said and figure it has to do with their cobalt usage: Intel plates the cobalt instead of depositing it, which may be causing voids at pitches as low as 36nm. So while it's true Intel's 36nm pitch is around the same as TSMC's 40nm (Samsung's 8nm is 44nm), it sounds like the 10nm++ they are talking about using now has either solved the issue by changing the cobalt deposit/plating method or, more likely, by simply relaxing the pitch slightly. (They have also done a ton of work decoupling die designs from process nodes... which lets them backport to 14nm, but also makes it easier to use their old 10nm core designs on a 10nm++ that may have slightly different gate-size requirements.)

Bottom line, you're right, I am probably reading a bit into something I read a year and a half ago. :) I don't believe Dr. Murthy actually said specifically that they decreased the gate pitch... and I don't know for sure whether Intel did or didn't. To be honest, it sounds like they may have stayed hardheaded about 36nm at 10nm... because it still doesn't work right. lol. I joke, but yeah: they may have adjusted the pitch and transistor count, or they may have fixed their plating issues, but they haven't been public about the exact problems or fixes.

Without a doubt, though, they started 10nm over from scratch; the since-fired head of the Intel fabs said that much. Also, at the same conference, Raja Koduri said Intel's issues with 10nm showed them they needed to decouple their core designs from specific fab processes, which seems to have led to what we have now, with the 10nm core design being backported to 14nm. They can't stuff in as many cores, but they can implement the newer designs at 14nm, I guess. I read another Intel engineer piece a while ago (and I'm sorry for paraphrasing things I don't have handy to link) saying that with the decoupling of nodes Intel also realized AMD was doing the right thing with I/O dies, so going forward Intel was going to do its best to design its I/O bits to work with multiple designs and nodes.

They changed the design; they were trying something interesting and very aggressive with the design of their transistors and the dummy gates that surround them. It didn't pan out as planned. They have since gone back to a more traditional design, but their new patents on vertical transistors also look very interesting.
Thank you both for the clarifications. I asked just in case I had missed something.
 
Thank you both for the clarifications. I asked just in case I had missed something.
https://fuse.wikichip.org/news/525/...ntels-10nm-switching-to-cobalt-interconnects/

This article describes some of the changes they were trying to implement in their 10nm designs. I do not know which of these were kept, which got tossed, and which were scaled back, but if all of them had panned out it would have been a major breakthrough. Honestly, many of them may still be possible, but they will likely have to be implemented at something smaller than 7nm.
But the sheer number of changes is what bit them in the behind, and to make things worse, they were designing their chips for the 10nm process with all of those changes in place. So not only did they have to scale back the actual implementation of their 10nm, but all the designs they made during that 2-3 year stretch had to be tossed because they couldn't get the node to work.

https://www.extremetech.com/computi...ong-10nm-delay-caused-by-being-too-aggressive

This article describes the changes they were going to have to make for future designs, talks a little more about the features they couldn't implement, and covers how they were not able to scale their 10nm designs back to their 14nm process and would have to rethink their design process going forward.
 
Fingers crossed on no more corner-cutting, security exploits, unnecessary market segmentation, high TDPs, or any more pluses to their existing 14nm+++ designs.
With accountants at the helm, good luck. Cutting corners is all they know how to do.
 
With accountants at the helm, good luck. Cutting corners is all they know how to do.
Technically they didn't really cut corners; none of those attack types were known when the architecture was designed. It's an overall flaw that has become abundantly apparent in the modern age. Intel needs a completely new architecture, but since the 10nm failures scrapped all their new designs, they have to start from scratch and limp along with the flawed one for the time being. Really, they just keep stepping on rakes, and if it weren't so disastrously inconvenient for all of us, this would be funny as hell.
 
Technically they didn't really cut corners; none of those attack types were known when the architecture was designed. It's an overall flaw that has become abundantly apparent in the modern age. Intel needs a completely new architecture, but since the 10nm failures scrapped all their new designs, they have to start from scratch and limp along with the flawed one for the time being. Really, they just keep stepping on rakes, and if it weren't so disastrously inconvenient for all of us, this would be funny as hell.
I have a general disdain for accountants, so I still think having an accountant in a leadership position is bad. They don't directly affect technical issues, but they indirectly affect everything, typically negatively. Blinded by red and black.
As annoying as the semiconductor industry is right now, I do think all the issues Intel has been having are hilarious. I figure it's karma for their shady behavior toward AMD in the early 2000s.
 
I have a general disdain for accountants, so I still think having an accountant in a leadership position is bad. They don't directly affect technical issues, but they indirectly affect everything, typically negatively. Blinded by red and black.
As annoying as the semiconductor industry is right now, I do think all the issues Intel has been having are hilarious. I figure it's karma for their shady behavior toward AMD in the early 2000s.
I have a mixed relationship with accountants: in every IT position I've held over the last two decades, I have answered to the accounting department, since I am basically treated the same as maintenance. That said, I am given a great deal of autonomy to operate. Something the size of Intel is tricky; they need somebody at the top who is not only very technically minded but also understands the accounting situation they are in, otherwise the shareholders will eat them for lunch.
 
Technically they didn't really cut corners; none of those attack types were known when the architecture was designed. It's an overall flaw that has become abundantly apparent in the modern age.
They did take shortcuts, trading security for performance, whereas AMD more often than not did both well.
 
Technically they didn't really cut corners; none of those attack types were known when the architecture was designed. It's an overall flaw that has become abundantly apparent in the modern age. Intel needs a completely new architecture, but since the 10nm failures scrapped all their new designs, they have to start from scratch and limp along with the flawed one for the time being. Really, they just keep stepping on rakes, and if it weren't so disastrously inconvenient for all of us, this would be funny as hell.
The only exploit that wasn't down to a shortcut, and the one universal exploit (regardless of microarchitecture or ISA), was Spectre.
Outside of that, all of the rest of the exploits found in Intel's CPUs (dating back to 1995 with the Pentium Pro) were absolutely caused by taking shortcuts for performance gains, rather than implementing the checks properly and taking the standard performance hit for doing them right.

Intel screwed the pooch, and is paying the price for it.
 
The only exploit that wasn't down to a shortcut, and the one universal exploit (regardless of microarchitecture or ISA), was Spectre.
Outside of that, all of the rest of the exploits found in Intel's CPUs (dating back to 1995 with the Pentium Pro) were absolutely caused by taking shortcuts for performance gains, rather than implementing the checks properly and taking the standard performance hit for doing them right.

Intel screwed the pooch, and is paying the price for it.

Exactly. Intel's latest 9-point stock drop... isn't because Pat said "look, I'm bringing back past rockstars and we're going to fix the fabs," leading investors to go "ahhh snap, we wanted you to outsource the whole shebang."
No, no, no: the big investors are NOT stupid, believe it or not. What they heard from Pat was: hey, in order to fix what ails Intel... I'm going to bring back the team that FUCKED Intel once already. Because the security issue IS costing Intel billions right now, today. AWS is killing Intel right now... everyone and their dog is looking to switch their servers to ARM to save money, AND to bypass the need for Intel, what many see as a massive security hole, and a culture that has convinced many there are going to be even more security land mines down the road with Intel. Nehalem led to a decade of server chips with massive security fails. ARM servers got much, much more attractive when Intel's performance on many tasks got halved by a security patch.

So yeah, his plan so far seems to be to bring back the folks who decided a chip's branch-prediction system doesn't need to bother with a basic 0-or-1 rights check before a read, all to gain a little bit of performance. No one who designed that didn't understand exactly what it was doing. By bringing those folks back he also reminded investors: whoa, wait a second, Pat was the CTO when those decisions were made. So either he knew and said "forget it, we are smarter than everyone else and no one will ever figure out how to abuse this, so take the 5% performance bump we get from not bothering to rights-check reads," OR, perhaps worse, those decisions were made and Pat as CTO had no freaking clue. I am honestly not sure which is worse. (I tend to believe he knew, now that he is bringing those folks back... I mean, if he didn't know, wouldn't he be wondering how they missed that, and perhaps not thinking they are so great for the company?)

Anyway, yeah, investors went from excited to see the bean counter go... to waking up and realizing Intel replaced him with the guy who, as CTO, may have actually set Intel on a 10-20 year suicide run. I guess he gets a chance to fix his own mess. That recent 9-point dip, when this and the fab revival were talked about, came from some big investors considering that bringing Intel's previous failed CTO in as CEO may actually not be such a great plan. He sounds like the same arrogant ass that gave AMD, and more importantly ARM, their biggest-ever gift in the fight against Intel in the server market.
 
He's probably more qualified to be CEO than most CEOs of big corps that just wing it.
 
He's probably more qualified to be CEO than most CEOs of big corps that just wing it.
Bob Swan was a disaster, no doubt. All he was good at was hiding losses and inflating stock prices. He was not equipped to get Intel through the ARMageddon that is coming for them. Pat may be able to.... though I am disappointed to hear him talking like it's still 2010. I believe bringing back the Nehalem folks is a massive mistake, if nothing else for the optics, as plenty of the big Intel stakeholders are very unhappy about the whole security fiasco. And they are smart enough to know it all started with choices made in that generation of Intel chips.

I do hope Pat gets it together and I'm way off about him having his head up his keister. Intel needs a smart hand at the wheel... they are going to have to do almost everything exactly right if they're going to remain relevant in the way they have been.
 
I just want to have a drink with someone smart enough to advance the modern CPU in any meaningful way. Those people are voodoo smart, and I'd love to meet someone that smart. Never have, and probably never will. It goes back to the idea that if new inventions depended on the majority of us, we might have gotten about as far as the wheelbarrow in our entire lifetimes. The tiny percentage of true inventors and geniuses push the ball forward, and the rest of us enjoy the benefits. Even a company as big as Intel can't seem to find someone in their ranks, or hire externally, to advance the cause these last ~10 years.
 