Backward compatibility rules

i worked for 2 years ('98-'00) on porting an OS over to Itanium (#ifdef IA64), or the "Itanic" as we called it. our name for it fit because it was so horrible.
one fourth the speed at twice the price. what a bargain! intel paid my company hundreds of millions of dollars for this effort that in the end was not worth a single penny.
actually along the way there was a touch of RUHROH with the code mingling and the nice people at SCO had their lawyers file a nice lawsuit. granted it was deserved imho. but it clearly showed you can't have people in the same project - let alone the same hallway - working on the same code base but using different sources as their "inspiration".
yes SCO won...and at that time SCO was the sole owner of 32-bit AT&T System V Unix (HP had bought and owned the 64-bit rights). now look at either one of them. lignux has crushed sco into nothingness and hp did that on their own as well. even sun is now gone (at least the hardware) and sunos is now just a mauve version of lignux under boracle.
intel made so many revisions to itanic (was it 10 or 11 or?) to try and get its performance up to where it needed to be to compete with others...or even their own line of pentium/core chips. so much effort for such a nothing product.
Project Monterey died in the '01 time frame. it only took intel/hp another 20 years to figure out what others had already realized.
 
https://www.tomshardware.com/news/last-itanium-shipment

Intel and HP's multi-billion-dollar attempt at a clean break from x86. Something that sounded good in theory (and on message boards) but that no one signing checks actually wanted.
Remember this, those clamoring for Microsoft to drop legacy support. It ain't gonna happen.
I don't think it's so much asking Microsoft to drop legacy support, full stop, as it is asking Microsoft to further shy away from its "legacy support at all costs" mindset. This is the company that took flak in 2015 for discontinuing a 2009 feature (XP mode) that helped run a 2001 OS (XP) which let companies run 1996 apps (NT 4.0). There's a point where backwards compatibility shifts from "being considerate" to "holding technology back."

And there's a concern that Microsoft isn't doing enough to initiate a transition to (or at least, robust support for) ARM precisely because of that obsession with legacy. Yeah, x86 is fine now, but the world is not a static place. What if Apple gets to a point a few years from now where its chips offer an unambiguous advantage over most AMD and Intel equivalents? Remember, Apple's phone chip line evolved from being merely competitive to blowing away rivals in most respects. Microsoft could be right in hoping that Intel finally gets its act together and builds competent x86 chips... or it could be making a horrible mistake that creates an opening for Apple.
 
Wasn't HP making its own UNIX workstations with its own chip that predated the Itanium, and which the Itanium was based on when Intel and HP collaborated on its design?
 
I don't think it's so much asking Microsoft to drop legacy support, full stop, as it is asking Microsoft to further shy away from its "legacy support at all costs" mindset. This is the company that took flak in 2015 for discontinuing a 2009 feature (XP mode) that helped run a 2001 OS (XP) which let companies run 1996 apps (NT 4.0). There's a point where backwards compatibility shifts from "being considerate" to "holding technology back."

And there's a concern that Microsoft isn't doing enough to initiate a transition to (or at least, robust support for) ARM precisely because of that obsession with legacy. Yeah, x86 is fine now, but the world is not a static place. What if Apple gets to a point a few years from now where its chips offer an unambiguous advantage over most AMD and Intel equivalents?
If the world ran databases, photo editors and word processors, you might be right. But a lot of PC use in business/industry involves other equipment that is orders of magnitude more expensive to replace.
The Itanium was good technology; they just executed it poorly. I think they either needed to sell them at x86 prices to get a large enough user base, or charge the crazy price and just include a full-blown x86 processor on the card for backward compatibility.
 
I remember when Athlon came out, I knew a bunch of dumb asses who were all like "yeah, but wait until Intel releases Merced." :ROFLMAO:
 
i worked for 2 years ('98-'00) on porting an OS over to Itanium (#ifdef IA64), or the "Itanic" as we called it. our name for it fit because it was so horrible.
one fourth the speed at twice the price. what a bargain! intel paid my company hundreds of millions of dollars for this effort that in the end was not worth a single penny.
actually along the way there was a touch of RUHROH with the code mingling and the nice people at SCO had their lawyers file a nice lawsuit. granted it was deserved imho. but it clearly showed you can't have people in the same project - let alone the same hallway - working on the same code base but using different sources as their "inspiration".
yes SCO won...and at that time SCO was the sole owner of 32-bit AT&T System V Unix (HP had bought and owned the 64-bit rights). now look at either one of them. lignux has crushed sco into nothingness and hp did that on their own as well. even sun is now gone (at least the hardware) and sunos is now just a mauve version of lignux under boracle.
intel made so many revisions to itanic (was it 10 or 11 or?) to try and get its performance up to where it needed to be to compete with others...or even their own line of pentium/core chips. so much effort for such a nothing product.
Project Monterey died in the '01 time frame. it only took intel/hp another 20 years to figure out what others had already realized.
I'm very curious if you can explain why it sucked in non-programming terms. Was it just the cost/performance? Required funky and complex programming?
 
Wasn't HP making its own UNIX workstations with its own chip that predated the Itanium, and which the Itanium was based on when Intel and HP collaborated on its design?
yes they most certainly did. it was called pa-risc if my memory is right.
it was hp/hpux/pa-risc vs sun/sunos/sparc vs ibm/aix/rs6000 for the entire '90s
 
I'm very curious if you can explain why it sucked in non-programming terms. Was it just the cost/performance? Required funky and complex programming?

well i think i did say that it was 1/2 the speed at 4 times the cost. would you pay for, say, an internet service that was 1/2 the speed but cost you 4 times as much?

intel was little endian and we had to do byte swapping to big endian (i.e. the #ifdef IA64) in all the code to make it work.
had itanic had an option to switch to BE instead of LE on power up (which some chips can do nowadays)... the porting conversion would have been done in months, not years, and it probably would have run better. to what extent nobody knows.
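to give a sense of what that #ifdef churn looks like, here's a minimal sketch (my own illustration, not the actual port's code; the IA64 macro and helper names are just assumptions): a byte-swap helper of the kind that ends up sprinkled everywhere when the code assumed big-endian data but the target runs little-endian.

```c
#include <stdint.h>

/* Swap the byte order of a 32-bit value. */
static inline uint32_t swap32(uint32_t x)
{
    return (x >> 24) | ((x >> 8) & 0x0000ff00u) |
           ((x << 8) & 0x00ff0000u) | (x << 24);
}

/* Read a 32-bit value that was stored big-endian (on disk / on the wire). */
static inline uint32_t read_be32(uint32_t raw)
{
#ifdef IA64              /* little-endian target: swap before use */
    return swap32(raw);
#else                    /* original big-endian host: already in order */
    return raw;
#endif
}
```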
 
well i think i did say that it was 1/2 the speed at 4 times the cost. would you pay for, say, an internet service that was 1/2 the speed but cost you 4 times as much?

heh. Some people don’t have an option for internet service and have to do that anyways. :)

anyways, I remember Intel pushing Itanium hard, claiming it would be way faster running native code than x86, but still support x86 via emulation or translation or some such.

for some reason the "about the speed of a Pentium II @ 300MHz" sticks out in my mind, but this was after the Pentium III, which started at 450+ MHz and had SSE instructions, had already been available for a couple of years, and then there was the P3 Coppermine refresh. So before you even get to Athlon 64 (2003 for server parts), Itanium was basically slower than anything Intel had in the consumer space unless you went back to 1997-ish performance levels.

intel just figured everyone would bend over backwards because they were Intel, but they had been selling higher-performing parts to everyone for more than half a decade and no one wanted to go backwards. AMD took care of the server market when Opteron with x86-64 launched in 2003, so IA-64 didn't have anywhere to go but into the bin.
 
Was it at least fast running native code? I understand programming for it was insanely difficult.
 
I was at Match.com when HP duped us into buying a bunch of Itanium rigs. They were huge, power hungry and took up LOTS of cabinet space. We tried to make code for it and in short order told HP to come take their **** back.
 
Was it at least fast running native code? I understand programming for it was insanely difficult.

It could be fast, but only if it was running well-tuned code. The problem was it was hard for people to hand-tune code for it, and even harder for people to write compilers that would output tuned code.

The basic issue is that each instruction was more like 4ish? instructions mashed together, but there were complex rules about what could be mashed up with what, so often people and compilers would put nops in many of the slots. If you've got nops everywhere, most of the processor's computation units will be idle and your program is gonna be slow.

x86 optimization goes the other way: the instruction stream is single instructions, and the processor more or less looks to see what it can run out of order when it needs to wait for something slow, and predicts whether branches will be taken so it can get started on those instructions. You still get crap performance if the processor is mostly idle, but scheduling is simpler because the scheduler works on actual state at runtime instead of predicted state at compile time; there's still compile-time work at times, though.
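A rough C illustration of the difference (hypothetical code I'm making up for the example, not anyone's compiler output): the first loop has four independent operations per iteration that a static VLIW-style scheduler could pack into one wide bundle, while the second is a serial dependency chain where most bundle slots would end up as nops no matter how clever the compiler is.

```c
#include <stddef.h>

/* Plenty of instruction-level parallelism: the four adds are independent,
 * so a static scheduler can issue them together in one wide bundle. */
void add4(float *a, const float *b, size_t n)
{
    for (size_t i = 0; i + 4 <= n; i += 4) {
        a[i + 0] += b[i + 0];
        a[i + 1] += b[i + 1];
        a[i + 2] += b[i + 2];
        a[i + 3] += b[i + 3];
    }
}

/* Almost no ILP: every iteration depends on the previous one through acc,
 * so most slots in each wide bundle have nothing useful to hold but nops. */
float chain(const float *b, size_t n)
{
    float acc = 1.0f;
    for (size_t i = 0; i < n; i++)
        acc = acc * b[i] + 1.0f;   /* serial dependency chain */
    return acc;
}
```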
 
Och I don't recall ever seeing an example of a consumer-grade piece of software for it that ran at any pace other than what would be considered "proof of concept", but gaming press didn't get into "next gen" parts for mainframes (or whatever we called them in the early 2000s). I think I had a build of Windows.... 2000? for IA64? I don't think it was NT Workstation 4.0. But there were never any desktop IA64 processors so it just ended up as a coaster.

It was pretty clear with the initial release that it would never make it in the consumer and commodity server (like 1U, 2U rackmount stuff) market, so the only hope it had of gaining any traction was in megacompute farms like what SGI, Sun, etc. used to run. But as sunruh mentioned, all the companies that had products where it might have been useful already had their own processor lines, and they weren't interested in using something from a competitor, especially if it was slow as balls.

And then most of those companies folded, became patent trolls or were bought by other companies to be used as patent fuel anyways.
 
Alpha was a clear, legitimate challenger to x86. It was lost when Compaq bought DEC and HP bought Compaq, and then it (Alpha) was a threat to Itanium.
 
Alpha was a clear, legitimate challenger to x86. It was lost when Compaq bought DEC and HP bought Compaq, and then it (Alpha) was a threat to Itanium.
No. Alpha pretended the hardest, but at the end of the day NT4 on Alpha lost Compaq money.

The CPU prices seemed impressive (same price as the Pentium II for twice the performance), but the rest of the platform bled you dry: a proprietary motherboard with 128-bit RAM to match (plus no option to add an AGP GPU).

I'm sure the Alpha's motherboard probably cost you as much as the processor, while mass-produced Slot 1 Taiwanese motherboards started at a little over $100. Also, within a single year, Deschutes PIIs were dropping in price and raising clocks, while the Celeron with 128K onboard cache gave you 90% of the performance of the Pentium 300... for $120!

The final smackdown that doomed the Alpha - the entry-level Athlon 550 creamed almost every inexpensive DEC chip in existence.

https://www.macinfo.de/bench/specmark.html

What killed DEC: because they sold a lot fewer processors, it always took them several years just to revise a design - both Intel and AMD could do it in one!

Athlon turned into the 256K Thunderbird inside a year, and then shortly after that we got the Athlon XP (AMD rode that rev to over 2 GHz).
 
I understand the Chinese acquired an Alpha license and based their supercomputer on it.
 
I think you forgot MIPS and Alpha, lol.

yes mips in the sgi ... i just loved saying Crimson Reality Engine ... how could you not think that was the coolest name ever
alpha in the dec ... love how dec used compaq's money to buy them
heck even 68000 in the early sun that then went to sparc
it was an impressive decade of explosive growth
we could even mention the romp chip in the rt that then was the power chipset (it was 5, 7, 9 chips) in the rs6000 that then became just 1 in the 1st power chip (motorola/apple/ibm)
 
I understand the Chinese acquired an Alpha license and based their supercomputer on it.


Well, that explains why it only does LINPACK well; the architecture is in-order 4-wide (so you need an optimizing compiler, plus non-branchy code to really make magic happen.)

a high-level overview of the 21264 pipeline: seven stages, similar to the earlier in-order 21164

https://safari.ethz.ch/digitaltechnik/spring2019/lib/exe/fetch.php?media=alpha_21264.pdf

But yeah, all they did with the 21264 was add OOO, and developing that took DEC THREE YEARS! When you have such low unit sales, you tend to take longer to finance performance improvements (and when you are also handling the expense of porting to a completely new OS, you're drained twice as quickly)!
 
The company I was working for at the time acquired a few HP workstations running Itanium which we used for Computational Fluid Dynamics simulations. Our software vendor supported Itanium running HP-IA64 natively. I will tell you that, at the time, it was the fastest workstation in our fleet (we had a mixture of DEC-Alpha, HP PA-RISC, and Itanium workstations). I remember benchmarking and it was around 60-80% faster (at the time).

We were absolutely an isolated case, because CFD codes parallelize nearly perfectly (due to the nature of the coding and problem discretization).
But unfortunately even that was short-lived, because once x86-64 emerged (more specifically, the extra memory addressing that x86-64 allowed... CFD simulations require large amounts of memory), Linux clustering quickly began to rule, and the rest is history.
 
Itanium could have been something very different. I mean, it wouldn't have existed if not for HP... so I guess that market was where it was headed. However, the EPIC design is actually very interesting. Intel had the ISA that could have kept ARM from rising. A mobile Itanium could have competed very well with ARM in terms of efficiency and performance. (Perhaps; I mean, who knows really, the extra complication could have been an issue, but if Intel had nailed the internal prediction, perhaps.) Of course, with HP involved, the entire idea initially was to replace all the custom chips, the Alphas and the DEC stuff, etc., etc. Ironically Intel got this market anyway, as Itanium stopped all those other ISAs when everyone jumped on Itanium... and when it failed, well, in comes x86-64 now that the purpose-built stuff is gone.

Intel a decade ago though... instead of trying to make terrible x86 mobile chips, perhaps Itanium was the solution. I imagine the question would have been whether they could pack Itanium's more complicated internal prediction bits into a smaller package. I know Itanium got a rap for being power-hungry... but the design really isn't the issue imo; it's that by the time Intel got it out it was 2 process nodes behind in fab, and for the later Itanium 2 Intel didn't even try to make them on their latest fabs. When comparing Apples to different-flavor Apples, Itanium stacked up pretty well. Compared to IBM POWER, Itanium was actually sucking a good bit less power... which is impressive, as POWER has been the efficiency leader in those markets forever. Who knows, if Intel had built something from Itanium to compete with ARM... the mobile market might have looked very different at this point, and Intel wouldn't have had to worry about ARM doing the same thing they tried to do with Itanium (dominate a niche market... build a software ecosystem and then come for all the cookies). Worst case, it would have gone the same way the server Itanium went... cockblocking ARM, and when it failed x86 would still have been the de facto chip.
 
Itanium could have been something very different. I mean, it wouldn't have existed if not for HP... so I guess that market was where it was headed. However, the EPIC design is actually very interesting. Intel had the ISA that could have kept ARM from rising. A mobile Itanium could have competed very well with ARM in terms of efficiency and performance. (Perhaps; I mean, who knows really, the extra complication could have been an issue, but if Intel had nailed the internal prediction, perhaps.) Of course, with HP involved, the entire idea initially was to replace all the custom chips, the Alphas and the DEC stuff, etc., etc. Ironically Intel got this market anyway, as Itanium stopped all those other ISAs when everyone jumped on Itanium... and when it failed, well, in comes x86-64 now that the purpose-built stuff is gone.

Intel a decade ago though... instead of trying to make terrible x86 mobile chips, perhaps Itanium was the solution. I imagine the question would have been whether they could pack Itanium's more complicated internal prediction bits into a smaller package. I know Itanium got a rap for being power-hungry... but the design really isn't the issue imo; it's that by the time Intel got it out it was 2 process nodes behind in fab, and for the later Itanium 2 Intel didn't even try to make them on their latest fabs. When comparing Apples to different-flavor Apples, Itanium stacked up pretty well. Compared to IBM POWER, Itanium was actually sucking a good bit less power... which is impressive, as POWER has been the efficiency leader in those markets forever. Who knows, if Intel had built something from Itanium to compete with ARM... the mobile market might have looked very different at this point, and Intel wouldn't have had to worry about ARM doing the same thing they tried to do with Itanium (dominate a niche market... build a software ecosystem and then come for all the cookies). Worst case, it would have gone the same way the server Itanium went... cockblocking ARM, and when it failed x86 would still have been the de facto chip.


No, Intel Itanium was always guaranteed to suck power, regardless of process node - it was wasting so many cycles with unfilled instruction bundles. When you combined that with the much lower instruction density, you needed to waste more I/O performance loading "mostly-empty" instruction bundles into cache.

This is why Itanium had the LARGEST ON-DIE CACHE Intel had ever shipped at the time.

Itanium's VLIW instruction bundles offered speculative execution to avoid the cost of failed branch predictions, but the practice of executing calculations that were discarded most of the time ate into the CPU power budget, which was becoming an increasingly limited resource at the time Itanium was released (so no magical perf/watt similar to Transmeta).
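To make the trade-off concrete, here's a small C sketch of my own (illustrative only, not IA-64 code): the "predicated" version computes both sides of the conditional every time and selects one, so there's no branch to mispredict, but the losing computation still burned execution units and power.

```c
/* Branchy version: only the taken side is evaluated, but a mispredicted
 * branch costs a pipeline flush. */
int pick_branchy(int x)
{
    if (x & 1)
        return 3 * x + 1;   /* odd path  */
    else
        return x / 2;       /* even path */
}

/* If-converted / predicated style: both sides are always computed and a
 * predicate selects the winner. No branch to mispredict, but the losing
 * computation still consumed execution resources. */
int pick_predicated(int x)
{
    int odd  = 3 * x + 1;    /* always computed */
    int even = x / 2;        /* always computed */
    int p    = x & 1;        /* predicate */
    return p ? odd : even;   /* select; a compiler can lower this to a cmov */
}
```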

See here for more details

https://softwareengineering.stackex...m-processor-difficult-to-write-a-compiler-for

The reason Code Morphing worked at all was because people were willing to take 1/4 the performance of a Mobile Pentium 4 to get 1/2 the power consumption - I mean, look at the early success of the Via C3 - until Banias destroyed them both!

You can only hack your way so far at turning a high-latency VLIW platform into a low-latency one, while the low-latency platform can take better advantage of compiler optimizations and a more deterministic fetch.
 
I miss OpenVMS.

The Alphas really did kick ass in the early-to-mid '90s, when DEC wasn't making dumb moves and losing money and lawsuits (although selling their stuff to Intel for $700 million bought them a little extra life), then having Intel essentially kill off the Alpha when some of the multithreaded stuff in the pipeline was actually interesting. I guess they did get to live on in HyperTransport and AMD for a while.

Wish I still had my old AlphaStation; it was always interesting running Windows NT4 on it, which for the most part ran terribly except when crunching numbers. I was probably running SETI on it when it first came out just because it was interesting.

That and it had a metal 120mm fan on it that would probably cut fingers off. Thing was hoss.
 
No, Intel Itanium was always guaranteed to suck power, regardless of process node - it was wasting so many cycles with unfilled instruction bundles. When you combined that with the much lower instruction density, you needed to waste more I/O performance loading "mostly-empty" instruction bundles into cache.
Yes.
The bad thing - this was proven as the design was being pitched via code analysis. It was abundantly clear general purpose workloads would fill a fraction of the IW most of the time. The per-thread ILP simply isn't there. The "magic compiler" can't extract parallelism that doesn't exist.

Lots of arguments, lots of egos.
 
Yes.
The bad thing - this was proven as the design was being pitched via code analysis. It was abundantly clear general purpose workloads would fill a fraction of the IW most of the time. The per-thread ILP simply isn't there. The "magic compiler" can't extract parallelism that doesn't exist.

Lots of arguments, lots of egos.


Yup, after PA-RISC started to hit the "everyone else already has their own brand of RISC processor running their own brand of UNIX" wall in the 1990s, they started fishing for "the next big thing". And after they failed to find it, they settled for this 1200th-ranked option :D

Then somehow they convinced Intel to get involved....after that, it was guaranteed to be a catfight.

HP only bought in because they had nightmares of POWER and Sun and x86 servers rolling them over, and Intel only obliged them because they had dreams of AMD being barred from anything Itanium-compatible and 64-bit... neither one of them was actually trying to design an efficient architecture.
 
Then somehow they convinced Intel to get involved....after that, it was guaranteed to be a catfight.
It didn't take much arm twisting.

Intel desperately wanted something to which AMD (and any other competitor) had absolutely zero rights. They wanted to kick the stool out from under the people making what they viewed as their chips.
Some folks pitched a more traditional clean-slate design, but there was this pervasive love of the VLIW moonshot. Just look how well it does in ideal cases!
Couldn't get through that the ideal case had almost nothing to do with most workloads customers actually had. That was dismissed with a "compilers will improve". But... but... sigh.

And of course, we basically do handle the ideal cases for VLIW (batch processing) today with a degenerate targeted version of VLIW - SIMD instructions.
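For what it's worth, that SIMD point looks something like this (a tiny sketch using SSE intrinsics purely as an example; nothing here is from the thread): the 128-bit register is effectively the "bundle", and it only pays off when the lanes really are independent, which is exactly the batch-processing case.

```c
#include <xmmintrin.h>   /* SSE intrinsics */

/* Add two float arrays four lanes at a time. For brevity this sketch
 * assumes n is a multiple of 4. */
void vadd(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&dst[i], _mm_add_ps(va, vb));
    }
}
```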
 
And there's a concern that Microsoft isn't doing enough to initiate a transition to (or at least, robust support for) ARM precisely because of that obsession with legacy. Yeah, x86 is fine now, but the world is not a static place.
Both Microsoft and Intel have tried to take down x86. Neither company likes x86, but not for the reasons you think. AMD being able to make x86 chips is probably the driving force behind the creation of Itanium. The reason AMD can make chips is an age-old deal made between Intel and IBM so that Intel didn't dominate the market, and it's also why Intel was such a dick and eventually was sued to hell and back for preventing AMD from selling their chips. Microsoft obviously made Windows on ARM long before Apple even created the iPod, let alone the iPhone. The failure was that Windows Mobile was bad and Intel's Itanium was slower than x86 and more expensive.
[Image: Pocket PC handheld]
What if Apple gets to a point a few years from now where its chips offer an unambiguous advantage over most AMD and Intel equivalents? Remember, Apple's phone chip line evolved from being merely competitive to blowing away rivals in most respects. Microsoft could be right in hoping that Intel finally gets its act together and builds competent x86 chips... or it could be making a horrible mistake that creates an opening for Apple.
The rise of the Apple M1 is the failure of Intel. Intel for some reason didn't get into graphics and didn't want to improve their CPU design since nobody was able to compete. AMD, on the other hand, has long surpassed the M1 but chooses not to put their latest technology into their entire product line. Upcoming AMD APUs are going to have RDNA1 and not RDNA2, even though Valve's Deck is going to use RDNA2.

Apple is not a hardware company and they never will be. Their ARM CPU is not great because of Apple but because Intel failed, and everyone ignores AMD when comparing to x86. Apple's ARM products have a limited time to live because AMD is making even better CPUs, while Intel definitely has the resources and drive to make an even better product. The M1 is a combination of ARM's technology along with Imagination's GPU technology, all under the umbrella of Apple with a heavy sprinkle of Apple money. This won't last, and eventually Apple will have to go to someone else to make future ARM products.
 
The rise of the Apple M1 is the failure of Intel. Intel for some reason didn't get into graphics and didn't want to improve their CPU design since nobody was able to compete. AMD, on the other hand, has long surpassed the M1 but chooses not to put their latest technology into their entire product line. Upcoming AMD APUs are going to have RDNA1 and not RDNA2, even though Valve's Deck is going to use RDNA2.

Apple is not a hardware company and they never will be. Their ARM CPU is not great because of Apple but because Intel failed, and everyone ignores AMD when comparing to x86. Apple's ARM products have a limited time to live because AMD is making even better CPUs, while Intel definitely has the resources and drive to make an even better product. The M1 is a combination of ARM's technology along with Imagination's GPU technology, all under the umbrella of Apple with a heavy sprinkle of Apple money. This won't last, and eventually Apple will have to go to someone else to make future ARM products.
Yes and no. Apple was definitely helped by Intel's inability to move to 10nm, but Apple also has very, very good engineers. Remember, its first foray is also beating AMD chips in some areas, not just Intel's. And don't forget that Apple has made many, many customizations to the CPU and GPU. This isn't just a reference ARM design with PowerVR slapped on top. That's like saying some Call of Duty games are just Quake mods because they include some Q3A code. At a certain point you have to accept that it's different enough to be its own beast. And besides... if it really was 'just' ARM + Imagination, wouldn't it make the x86 landscape look that much worse?

The point is that you shouldn't count on the existing pecking order remaining in place. I don't picture Apple claiming a giant chunk of the computer market or outperforming every x86 chip (at least not by a wide margin), but there is a chance it could change the competitive landscape. I wouldn't underestimate the company that beat Qualcomm to 64-bit and makes mobile chips so fast that they often outperform rivals from the following year.
 
Yes and no. Apple was definitely helped by Intel's inability to move to 10nm, but Apple also has very, very good engineers. Remember, its first foray is also beating AMD chips in some areas, not just Intel's. And don't forget that Apple has made many, many customizations to the CPU and GPU. This isn't just a reference ARM design with PowerVR slapped on top. That's like saying some Call of Duty games are just Quake mods because they include some Q3A code. At a certain point you have to accept that it's different enough to be its own beast. And besides... if it really was 'just' ARM + Imagination, wouldn't it make the x86 landscape look that much worse?

The point is that you shouldn't count on the existing pecking order remaining in place. I don't picture Apple claiming a giant chunk of the computer market or outperforming every x86 chip (at least not by a wide margin), but there is a chance it could change the competitive landscape. I wouldn't underestimate the company that beat Qualcomm to 64-bit and makes mobile chips so fast that they often outperform rivals from the following year.
Apple also has a big lead on power requirements thanks to the M1. Look at what the M1 can do in that power bracket: it currently punches well above its weight class, and that is a problem for Intel and AMD. The new California and EU power requirements make Apple's M1 chips very attractive, especially in the enterprise space. With more and more business-critical systems being web-based or running off a central server that you access via Citrix or other RDP clients, the drawbacks of using Apple almost vanish; you are just left with the large power savings, and those savings are noticeable. Pair that with recent studies from a number of accounting firms showing the TCO on Apple being better than just about all other OEMs, and that leaves them in a strong position.

But the biggest thing that Intel and AMD need to worry about is the rate at which big tech companies (Amazon, Facebook, Google) are moving to ARM. As they implement more ARM-based servers, they create more services to run on them, and once they have made that conversion, it is a huge task to get them to convert back over to x86. That's not a sales pitch that either team could reasonably pull off, so those sales are just gone and the pie they are fighting over gets that much smaller.

This is one of the reasons that Intel needs their GPU division to take off: the big supercomputers aren't running hundreds of thousands of CPUs anymore; they are running just the number needed to process the feeds from those hundreds of thousands of GPUs, a market NVIDIA and AMD have the full run of the house on.
 
Yes and no. Apple was definitely helped by Intel's inability to move to 10nm, but Apple also has very, very good engineers.
Intel's inability to move to 10nm is not the driving force of their failure. Everything Intel has made since 2011 is based on Sandy Bridge; no massive changes have been made since. Pentium 3 to P4 was a massive but bad change. P4 to Core Duo was a massive change. Core Duo to Sandy Bridge was the last massive change Intel has made. Everything since then is 10%, 10%, 3%, barely any difference with each new iteration. There's a very good reason why lots of people are still using 2500Ks or similar CPUs: the technology hasn't evolved all that much. It's so bad that all we're doing is comparing power usage, as performance is rather boring.
Remember, its first foray is also beating AMD chips in some areas, not just Intel's.
It's beating AMD chips that are based on Zen 2 and Vega graphics. Zen 2 is like two years old and Vega is five years old. Not to mention AMD is on 7nm while Apple is on 5nm. AMD is a sleeping giant that for some reason isn't selling their latest technology to everyone, and I think a lot of that is thanks to Sony and Microsoft putting restrictions on their technology.
And don't forget that Apple has made many, many customizations to the CPU and GPU. This isn't just a reference ARM design with PowerVR slapped on top.
It's not too far off from the truth. I'm sure it took a lot of money and a lot of engineering to get the M1 where it is, but at this point why doesn't Apple just make their own CPU architecture? They certainly don't have a problem making developers learn to code for new hardware. It's far cheaper to use an ARM design and modify it than to build your own. It's nowhere near the same situation as AMD, where the only designs they got from Intel were the old 286 designs that they barely got their hands on. AMD made an entire CPU from scratch, while Apple edited ARM's and Imagination's.
That's like saying some Call of Duty games are just Quake mods because they include some Q3A code. At a certain point you have to accept that it's different enough to be its own beast.
Honestly, Call of Duty would probably be a better game for it.
And besides... if it really was 'just' ARM + Imagination, wouldn't it make the x86 landscape look that much worse?
I don't follow?
The point is that you shouldn't count on the existing pecking order remaining in place. I don't picture Apple claiming a giant chunk of the computer market or outperforming every x86 chip (at least not by a wide margin), but there is a chance it could change the competitive landscape. I wouldn't underestimate the company that beat Qualcomm to 64-bit and makes mobile chips so fast that they often outperform rivals from the following year.
My prediction is that Apple at some point won't be as profitable while engineering their own chips and will have to switch to Nvidia or even Qualcomm to stay competitive. Everyone is scrambling to make something better because for nearly a decade they've been using the same technology with small changes. That won't do anymore, since AMD has clearly surpassed Intel and Apple has clearly surpassed Qualcomm. Switching to a new CPU architecture is a risky venture that can have great rewards but could easily destroy the market for Apple. There's a reason why Itanium failed, and it isn't just that it was expensive and slow; it wasn't Wintel. Like I said, Intel and Microsoft have tried to kill Wintel, and that's their market.
 
My prediction is that Apple at some point won't be as profitable while engineering their own chips and will have to switch to Nvidia or even Qualcomm to stay competitive. Everyone is scrambling to make something better because for nearly a decade they've been using the same technology with small changes. That won't do anymore, since AMD has clearly surpassed Intel and Apple has clearly surpassed Qualcomm. Switching to a new CPU architecture is a risky venture that can have great rewards but could easily destroy the market for Apple. There's a reason why Itanium failed, and it isn't just that it was expensive and slow; it wasn't Wintel. Like I said, Intel and Microsoft have tried to kill Wintel, and that's their market.
Apple has been designing its own ARM chips since 2008; its first launch, in 2010, was the A4, and the rest of the industry has been playing catch-up ever since. Apple is currently worth $2.08 trillion. If there comes a point where some other manufacturer is building a better package, then I can see them moving over, but as it currently stands I don't see it happening in the immediate future. I mean, never say never, but Apple Silicon is 90% of what makes most Apple products Apples. They've optimized hardware and software to a point where other flagships can beat them on specific metrics, but as an overall device they feel better. Be it UI, screen quality, or weight, they have a better user experience from their top-down integration. You can get better-performing hardware from Intel, AMD, and yes, even Qualcomm, but when packaged up into a full bundle they tend to fall short. Some of that is software, but owning the full stack gives Apple an edge, and I don't see them giving it up without one hell of a fight. And Apple has the resources to go down swinging.
 
Apple also has a big lead on power requirements thanks to the M1. Look at what the M1 can do in that power bracket: it currently punches well above its weight class, and that is a problem for Intel and AMD. The new California and EU power requirements make Apple's M1 chips very attractive, especially in the enterprise space.

Not having used one, the M1 seems like a very nice chip, but as a single-customer CPU maker, Apple can do things Intel and AMD really can't. Apple doesn't have to play MHz war games, so it's OK that max clocks are 3.2 GHz rather than the 4+ GHz of mobile Intel and AMD chips. They make up for lower clocks with wider execution, and lower clocks mean less heat too, which is good because Apple was never one to provide for heat dissipation. Intel and AMD can't design a chip that peaks at 3.2 GHz, because the other one would throw their PR department at it; even if perf was outstanding, it's hard to convince someone to buy a 3.2 GHz processor instead of a 4.5 GHz one.

All that said, just because Apple makes a great processor for themselves, doesn't mean they would have an easy time selling it to others, even if they wanted to. Apple does not have a good history of enterprise support, and look at how little market share Epyc has in servers, even though I think it's a faster and more capable processor than Intel at every price point; it takes a lot to move enterprise clients, and outstanding performance isn't enough, you also need good hardware and software vendor relationships and a believable support commitment. Apple sells enough phones that it makes sense for them to design their own chips as long as they keep doing it well, and may as well sell them in computers too, since they can make that happen without a lot of extra fuss.

Intel's inability to move to 10nm is not the driving force of their failure. Everything Intel has made since 2011 is based on Sandy Bridge; no massive changes have been made since. Pentium 3 to P4 was a massive but bad change. P4 to Core Duo was a massive change. Core Duo to Sandy Bridge was the last massive change Intel has made. Everything since then is 10%, 10%, 3%, barely any difference with each new iteration. There's a very good reason why lots of people are still using 2500Ks or similar CPUs: the technology hasn't evolved all that much. It's so bad that all we're doing is comparing power usage, as performance is rather boring.

Those 10% differences are boring, but after a few of them, they do add up. Being stuck on getting 10 nm working really stalls their development pipeline though, which is a big problem. Most of the 14 nm respins didn't add much at all, and they've been doing those for years. Maybe now that 10 nm seems to be mostly working, we might see bigger changes in new processors, and they might get their groove back.
 
All that said, just because Apple makes a great processor for themselves, doesn't mean they would have an easy time selling it to others, even if they wanted to. Apple does not have a good history of enterprise support, and look at how little market share Epyc has in servers, even though I think it's a faster and more capable processor than Intel at every price point; it takes a lot to move enterprise clients, and outstanding performance isn't enough, you also need good hardware and software vendor relationships and a believable support commitment. Apple sells enough phones that it makes sense for them to design their own chips as long as they keep doing it well, and may as well sell them in computers too, since they can make that happen without a lot of extra fuss.
Apple doesn't really need to sell its chips to anybody else; their existing product stack is more than capable of meeting most day-to-day enterprise requirements. I can't say I have had a bad experience with Apple support for Enterprise or Education at this stage. The few times I have had to use them for anything, they have been very on point, all Enterprise/Education purchases come with AppleCare included, and they keep sales records for your account for auto-enrollment into partnered MDMs. In terms of mass deployment, the only easier products are Chromebooks.

EPYC's biggest problem is AMD's lack of support for OEMs. I currently operate 3 Dell EPYC servers (hope to make it 5 this year) and 6 Supermicro EPYC Embedded servers. All of them have had a number of BIOS and driver issues which make performing updates on them a pain. Dell has been awesome at working with me to get them all resolved, but they are issues that I have not had to deal with on a Xeon platform in over a decade, mostly involving stepping through specific driver/BIOS versions before performing specific OS updates. Or, in my most recent one, changing the power delivery mode from Single w/ Redundant (uses one single PSU with the second as the failover) to Load Balanced (shares load across both) to get the system to stop randomly soft-locking at idle. It started happening after an OS update and was fixed in a BIOS update, after which I could put it back to the default Single w/ Redundant mode for the power supplies. But those are the sorts of weird things you have to watch for with EPYCs; my Supermicros have had mostly storage controller issues, again fixed with a combination of BIOS, driver, and OS updates. This is really just a platform maturity issue. AMD has spent all their effort on building great chips, and now that they have that locked in they just need to shift their focus onto the surrounding platform to really dial that in; once they have that, I think OEMs will be more than happy to offer up more of their retail space to EPYC servers.

That said, my EPYCs are beasts, and I would need 2x the hardware from Intel to get that amount of work out of them, so the headaches are more than worth the space and electrical savings they give me.
 