Apple Plans to Use Its Own Chips in Macs from 2020, Replacing Intel

Nice wall of text, but none of that addresses that Intel stopped producing CISC processors with the Pentium over two decades ago, and AMD followed suit with the Athlon; even before that they were tacking on various types of extra compute units.

At best, the newest x86 designs can be considered hybrid designs. At their core they are still x86 instruction-set chips. They still have caches and prediction engines, and perform calculations across cycles, which is the very definition of a CISC design.

ARM is, no doubt about it, a pure RISC design. The faster memory gets, the faster ARM will get >.<

For those not up on the technical stuff: a RISC design stores its math in memory of some type, not transistors. So long, complicated math is slowed down not only by the generally lower clock rate of a RISC design but by the speed of the memory it's storing that math in. ARM chips have been catching up to x86 as memory speeds increase, and that isn't going to slow down.

Like it or not, x86 is not an architecture that is well suited to the future of general computing.

There are a lot of reasons why ARM will win in the end and end up running the vast majority of all computing devices, including high-end servers and even supercomputers.

http://semiengineering.com/coherency-cache-and-configurability/
Read this; it's in fairly plain English and it should give you a good idea why ARM will no doubt win in the end. It's simply not possible for Intel to roll out hardware coherency in the same way... the day Intel cracks that in an x86 design and gives us a CPU that includes cores that are completely different from each other, I will agree that x86 has a future.
 
"Sure, we've absolutely learned from our past mistakes. What we now know is that you need to have a significant market presence in order to force people to use proprietary technology."
 
ARM will win in the end!

I about fell out of my chair.

Glad you got a chuckle. Yes, if x86 is still around in 10-15 years I would be fairly shocked. As awesome as things like the Threadrippers of the world are, the market for that stuff is drying up.

Sure, some people got on the "ARM is going to take over the server market" wagon way too early a few years back... but that is still going to happen. In the last 12 months multiple ARM supercomputer projects have gotten underway... including Cray's highest-end machines using ThunderX2. Some called it too soon, but the ARM takeover of that market is still coming. One big reason is what I have been talking about: coherency of design and interconnect of co-processors on silicon, in a way that doesn't drive the costs of software development into orbit.

There is a ton of ARM R&D happening right now in some very interesting areas that are going to start filtering into everyday products at some point.
https://www.engadget.com/2018/03/27/nvidia-arm-ai-iot/
https://www.engadget.com/2018/02/13/arm-machine-learning-processors/

The market for high-end x86 chips is getting smaller... and yes, ARM is going to overtake x86 designs in performance sooner rather than later. If a company like Apple does start shipping Mac laptops and all-in-ones running ARM chips with specific ASIC units that let them chew through the types of software most people are using day to day, yes, it's going to eat into the x86 market even more.
 
x86 was supposed to die 20 years ago. Since it isn't really x86 anymore, it can be morphed into whatever is needed.
 
Well, it explains why Apple has virtually abandoned HPC.
They don't need high-powered computing anymore. Hipster flakes can edit photos and edit ARM-compatible Final Cut YouTube videos on future Apple products just fine. They are just abandoning markets that made them zero money. The desktop isn't dead, but it is to Apple.
It is the right move for the company and will be financially rewarding.
 
Well, if their CPU is not powerful enough to replace what is in the Mac Pro, they can still do it in notebooks and small desktops. I've also been thinking about AMD video cards... will the day come when they add a CPU to one and have a whole compute unit in there? Don't know, maybe that is how you make a Mac Pro... but that sounds too complex software-wise. Maybe they can do an ARM + Radeon SSG Mac Pro and still be very powerful anyway... Regardless, I still think this is about extracting things from Intel, which Intel may or may not care about; I think they are busy enough to maybe dump Apple.
 
I'd never buy a Mac, but I find this interesting. Yes, the later PowerPC chips were pretty awful. However, the earlier designs from Apple were extremely fast, well ahead of their time. Apple just fell behind badly as time went on.

ARM has proven itself to be very efficient. It's well established by now, so if they're using that I'm curious what they'll do with it.

The main problem with PPC was that IBM wasn't really able to clock the architecture up (due to PPC's design), and they ran out of easy ways to improve IPC. When Intel released Core 2, it was pretty much dead in the market. Prior to that, PPC chips were quite competitive with everything else out there; it was a solid architecture.
 
At the end of the day x86 is an ancient design... created for a single-threaded world where the solution to complicated math was to take a ton of CPU cycles to crunch things. It has never been a great instruction set for threading... almost all threading has to be done at the code level. The chips themselves don't lend themselves to doing their own math splitting, which is why they have to use things like thread prediction, which is very inefficient in general. The way x86 works, even doing some form of "auto threading" with a compiler is hard to accomplish.

The other side of the coin is that this isn't an issue for the ARM architecture... if a software developer compiles for ARM, the silicon on machine X can do that math on whichever core it deems best; an Android/iOS developer doesn't have to specify that X low-power core or Y high-power general core should do this list of things and this other one that. x86 software tends to suck unless it's created with multithreading in mind... and even then the programmer is going to have to decide what to send where... they are more likely going to have to know if they are dealing with 1/2/4/64 cores etc., and to make matters worse, because the math is done over multiple cycles and uses prediction etc., a lot of things are just forever going to have to be coded as single-threaded operations.

Uhhh...no.

Thread assignment is [typically] handled by the OS at runtime for both ARM and x86. Compilers have nothing to do with it [unless developers manually make assumptions about the underlying host CPU, which is typically frowned upon]. Even ARM's big.LITTLE is handled at runtime by the OS. All I have to do as a developer is spawn a thread and leave the rest up to the OS scheduler.
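To make that concrete, here is a minimal sketch in plain standard C++ (my own illustration, nothing architecture-specific assumed) of what "leave the rest up to the OS scheduler" looks like from the developer's side:

```cpp
// Minimal sketch: spawn a worker and let the OS scheduler decide which
// core actually runs it. The same source builds for x86 and ARM; no
// core affinity or big/LITTLE awareness is written into the code.
#include <iostream>
#include <thread>

int main() {
    std::thread worker([] {
        std::cout << "Running on whatever core the OS picked\n";
    });
    worker.join();  // wait for the worker; core selection was never our problem
    return 0;
}
```

Build it for an x86 box or an ARM phone and the source is identical; only the OS scheduler's decisions differ.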
 
I honestly believe the PPC Macs were the last real Macs.

Like the Motorola 68k, nothing lasts forever in the tech world - Not even Windows will last forever.
 
One big reason is what I have been talking about: coherency of design and interconnect of co-processors on silicon, in a way that doesn't drive the costs of software development into orbit.

That has basically nothing to do with ISA. The reality is that ISA largely doesn't matter. RISC vs CISC is largely a pointless debate devoid of any real substance or impact. ARM is basically confined to the markets they've always been confined to, and Intel processors likewise are largely confined to the markets they've always been confined to. The reality is that legacy software systems matter more than ISA. Always have, always will.

As far as machine learning/AI, that is all moving to dedicated hardware and away from both CPUs and GPUs.

You don't appear to have any fundamental understanding of computer architecture.
 
The main problem with PPC was that IBM wasn't really able to clock the architecture up (due to PPC's design), and they ran out of easy ways to improve IPC. When Intel released Core 2, it was pretty much dead in the market. Prior to that, PPC chips were quite competitive with everything else out there; it was a solid architecture.

No, the primary issue was that IBM/Motorola weren't making enough money off of the limited market available to sustain development of consumer level chips.
 
Well duh... we have been predicting this for years. Apple's silicon is mighty competitive compared to Intel's, and the power consumption is the icing on the cake.
 
One issue is x86 and the other is the PC platform, which are both built on a horrible pile of legacy stuff dating back to the early 1980s. It is absolutely holding implementers back, but Intel, being an x86 one-trick pony, will of course tell you otherwise.

I mean, how long did it take until we got rid of the A20 gate?
And Intel keeps piling cruft upon cruft. Just recently, we got 5-level paging (each level is one layer of indirection, translating a virtual address to another, until finally to a physical address).
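If you want a feel for what that fifth level means, here is a rough sketch (my own illustration; the example address is made up) of how a 57-bit virtual address gets carved into five 9-bit table indices plus a 12-bit page offset under x86-64 5-level paging:

```cpp
// Rough sketch of 5-level paging address decomposition on x86-64:
// five 9-bit table indices plus a 12-bit offset = 57-bit virtual address.
// The address below is arbitrary, purely for illustration.
#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t vaddr = 0x01AA2233445566ULL;  // made-up 57-bit virtual address

    const unsigned offset = vaddr & 0xFFF;          // bits 0-11: offset within the 4 KiB page
    const unsigned pt     = (vaddr >> 12) & 0x1FF;  // bits 12-20: page table index
    const unsigned pd     = (vaddr >> 21) & 0x1FF;  // bits 21-29: page directory index
    const unsigned pdpt   = (vaddr >> 30) & 0x1FF;  // bits 30-38: page directory pointer index
    const unsigned pml4   = (vaddr >> 39) & 0x1FF;  // bits 39-47: PML4 index
    const unsigned pml5   = (vaddr >> 48) & 0x1FF;  // bits 48-56: the new fifth level

    std::printf("PML5=%u PML4=%u PDPT=%u PD=%u PT=%u offset=0x%X\n",
                pml5, pml4, pdpt, pd, pt, offset);
    return 0;
}
```

Every extra level is one more memory access on a TLB miss, which is exactly the kind of accumulating cruft being complained about here.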

Apple makes choices for their ARM platforms that get rid of legacy stuff. Current-generation iDevices no longer support 32-bit apps, which allows Apple to drop 32-bit support from their hardware entirely. I think we are at least a decade away from x86 processors that are 64-bit only.
 
x86 is dead; the time of its inevitable technological obsolescence is here. ARM has been on an easily identifiable disruption trajectory against x86 since smartphones began (actually even earlier than that). The ubiquity of the ARM ISA will explode in the next couple of years with the incoming IoT explosion, which will be dominated by ARM the same way mobile was.

Not only should the Apple Mac ARM switch be talked about, but Microsoft's Windows and Azure ARM transition occurring in parallel should be recognized. Just last week Microsoft expanded Windows on ARM to include 64-bit ARM app support for both UWP and Win32. This opens the floodgates to completely transition Windows off x86/Win32. Recall that Microsoft also announced a major reorganization of Windows the prior week that marks the beginning of the end of the old Win32/x86 paradigm of Windows.

Not only is Microsoft backing the ARM ISA, but it's getting in on the custom ARM chip game itself. It was announced last week that HoloLens 2 will switch from Intel to custom Microsoft ARM silicon with neural co-processors. You're going to see the same trend continue with Surfaces and even Xbox. The Surface Andromeda folding tablet concept may make its debut next month at Build, and it's an ARM device.

Microsoft's commitment to ARM servers is real and high-level and not some experimental niche exercise. Microsoft's endgame is to transition Azure to its own custom ARM silicon saving billions from Intel dependency and giving it a competitive advantage over AWS.

https://www.geekwire.com/2018/micro...nds-everyone-company-house-silicon-expertise/
 
I'd never buy a Mac, but I find this interesting. Yes, the later PowerPC chips were pretty awful. However, the earlier designs from Apple were extremely fast, well ahead of their time. Apple just fell behind badly as time went on.

ARM has proven itself to be very efficient. It's well established by now, so if they're using that I'm curious what they'll do with it.

It's funny, because I used to buy Apple, and this is always my lament. Even though the later PowerPC chips lagged, it always felt like Apple switching to Intel was when they lost their way. No, I'm not ignorant of the fact they also started making gobs of money around this same time. It just started to feel like your brilliant pothead best friend grew up to be a self-help guru.
 
Look at the instruction sets on a CPU. See what they can do. Those are what we would call task-specific instruction sets: pathways that let specific types of tasks be executed swiftly without having to use all of the silicon of a CPU. These are what you can call a RISC path. Today's Intel CPUs and many of the others in your phones and other devices follow a similar design process and actually license instruction sets from other manufacturers to ease development. (This is also the reason that vulnerabilities like Meltdown/Spectre afflict so many different CPU types/generations.)

A RISC processor is one that supports a reduced instruction set, meaning it doesn't have a ton of built-in general-use logic in the CPU itself. If it is going to do tasks that need general CPU work, those tasks must be coded in a way that runs in that sort of environment. The benefit of such a CPU is that you do not need a ton of task-specific silicon in your design (specifically targeted RISC units, if you will). You simply let developers code how they want their program to work, then compile it using a compiler and language that knows how to take advantage of the RISC unit to complete its needed compute.

The problem is that you then run into a different roadblock. Instead of having a lot of specific instruction sets you can call on to speed up things like your PCIe calls and such, you must now rely on your compiler to handle that properly for a RISC-based CPU. Yes, this lets your program run on a large variety of CPUs, presuming they all use the same sort of RISC pipeline. But what we are calling an x86 CPU (inaccurate as that is), with in essence multiple RISC paths built into it, is going to execute those paths more efficiently. So the trade-off is... do you want the faster instructions for a smaller subset of (very popular) instruction-set executions, or do you want the generalist that can handle anything at near the same speed, provided it was coded well and with a cutting-edge compiler?
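As a concrete (and hedged) illustration of that trade-off, here is a small C++ sketch of my own: the same sum written once as portable code whose instruction selection is left entirely to the compiler, and once calling an x86-specific AVX extension directly.

```cpp
// Sketch of the trade-off: portable code that leaves instruction selection
// to the compiler vs. code that explicitly calls an x86 instruction-set
// extension (AVX). The intrinsic version only builds/runs on CPUs with AVX
// (compile with -mavx); the portable version builds anywhere.
#include <immintrin.h>  // x86 AVX intrinsics
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <vector>

// Portable: the compiler decides whether/how to vectorize for the target ISA.
float sum_portable(const std::vector<float>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0f);
}

// x86-specific: explicitly uses 256-bit AVX vector adds.
float sum_avx(const float* data, std::size_t n) {
    __m256 acc = _mm256_setzero_ps();
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(data + i));

    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    float total = 0.0f;
    for (float x : lanes) total += x;     // combine the eight vector lanes
    for (; i < n; ++i) total += data[i];  // scalar tail
    return total;
}

int main() {
    std::vector<float> v(1000, 1.0f);
    std::printf("portable=%f avx=%f\n", sum_portable(v), sum_avx(v.data(), v.size()));
    return 0;
}
```

The portable version trusts the compiler to map the loop onto whatever the target ISA offers; the intrinsic version is the "task-specific pathway" being used by hand, and it only exists on one architecture.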

Remember when knowing whether a game was written in C or C++ actually had an impact on how it would run? If we go RISC across the board, that will be needed again.

RISC is AWESOME for grid-based compute, because you don't rely on specific instruction sets and can write code that executes with predictable timings across dozens, hundreds, or thousands of CPU units. You don't need to spend a ton of CPU cycles on assigning x to y for z. These are great things. For cutting-edge science and supercomputer needs, that is the kind of real performance you want/need. But for a more limited-use device (laptops and desktops and such, even many mid-to-large single servers), you want a good-performing CPU that can run at breakneck speed for each and every specialized compute request you throw at it.

Do I think RISC units will replace 'x86'? No.

My prediction is we will drop the term x86 as we move into 64-bit and 128-bit compute units in our devices. We will have very RISC-like instruction cycles, but it won't be a true RISC unit any longer because of the presence of so many other task-specific logic paths.

But that's just my take... I could be wrong.
 
No, the primary issue was that IBM/Motorola weren't making enough money off of the limited market available to sustain development of consumer level chips.
Bingo!

Due to a lack of native applications, the OSes that were ported to PPC didn't receive much attention from regular desktop users.

Apple was rumored or expected to release their OS on other compatible PPC systems, but that never happened, so in the end the only company buying PPC in any quantity was Apple, and that wasn't enough to justify the development.

A shame really; PPC was always faster than anything Intel had, until they started lagging in development.
 
Look at the instruction sets on a CPU. See what they can do. Those are what we would call task-specific instruction sets: pathways that let specific types of tasks be executed swiftly without having to use all of the silicon of a CPU. These are what you can call a RISC path. Today's Intel CPUs and many of the others in your phones and other devices follow a similar design process and actually license instruction sets from other manufacturers to ease development. (This is also the reason that vulnerabilities like Meltdown/Spectre afflict so many different CPU types/generations.)

No, this is loaded with BS. The reason Meltdown/Spectre affect so many CPUs is that they exploit holes in structures that are widely used in CPU design.

A RISC processor is one that supports a reduced instruction set, meaning it doesn't have a ton of built-in general-use logic in the CPU itself. If it is going to do tasks that need general CPU work, those tasks must be coded in a way that runs in that sort of environment. The benefit of such a CPU is that you do not need a ton of task-specific silicon in your design (specifically targeted RISC units, if you will). You simply let developers code how they want their program to work, then compile it using a compiler and language that knows how to take advantage of the RISC unit to complete its needed compute.

Oh BS. That's quite possibly the worst load of bollocks in this whole thread.

CISC isn't really a thing except to say "not RISC." RISC isn't really a thing either; it's more of an idea, if anything. There are very simple CISC and very complex RISC designs, and the idea of CISC didn't exist until after RISC was coined. There is no well-defined breaking point that makes something CISC or something RISC. You could reasonably make a RISC design that allows reg-mem ops.
 
No, this is loaded with BS. The reason Meltdown/Spectre affect so many CPUs is that they exploit holes in structures that are widely used in CPU design.



Oh BS. That's quite possibly the worst load of bollocks in this whole thread.

CISC isn't really a thing except to say "not RISC." RISC isn't really a thing either; it's more of an idea, if anything. There are very simple CISC and very complex RISC designs, and the idea of CISC didn't exist until after RISC was coined. There is no well-defined breaking point that makes something CISC or something RISC. You could reasonably make a RISC design that allows reg-mem ops.

Perhaps you should read up on the topic.

https://en.wikipedia.org/wiki/Reduced_instruction_set_computer

I hope that helps!
 
Heh! This will happen once Apple feels like they can make a lot of money off it. I mean, what's a better way to do planned obsolescence than releasing the new version of OSX (or OSXI) where folks can't just upgrade - they MUST purchase new hardware because of a new custom chip? And then after a while, they can revert back to a different type of CPU that forces this all over again?
 
Poor Mac users are going to be force-fed yet another architecture switch.

Other than that though, I'm curious to see what Apple brings here.

I'm no Apple fan. I loathe their designs, their operating systems, and their software with its walled-garden approach, but as unlikely as this would have sounded a few years back when they switched from Samsung's design to their in-house A4 series chips in ~2010, Apple makes the best, highest-performing mobile chips, hands down. Nothing from the competition comes even close.

If I could get an iPhone with an unlocked bootloader and available drivers so you could roll your own Android ROM, I'd do it in a second. The hardware is that good. Nothing comes even close to touching the A11 chip they are selling now.
 
I honestly believe the PPC Macs were the last real Macs.

Like the Motorola 68k, nothing lasts forever in the tech world - Not even Windows will last forever.


Speak for yourself.

I just took apart one of the instruments my company makes, pulled out the main board in it and looked at the chip. A Motorola 68030 SoC. It made me smile.

This shit lives on forever in embedded applications :p
 
Speak for yourself.

I just took apart one of the instruments my company makes, pulled out the main board in it and looked at the chip. A Motorola 68030 SoC. It made me smile.

This shit lives on forever in embedded applications :p

Don't worry, I've got a device running a 68030 here myself. Nothing like the good ol' days!

In fact, the 68000 series is still used in a number of automotive applications; chances are your car has one in its PCM.
 
If I could get an iPhone with an unlocked bootloader and available drivers so you could roll your own Android ROM, I'd do it in a second. The hardware is that good. Nothing comes even close to touching the A11 chip they are selling now.

And this is a valid point, there's ARM and then there's Apple's ARM based SoC - Apple's custom designs are far faster than the competition.
 
Glad you got a chuckle. Yes, if x86 is still around in 10-15 years I would be fairly shocked. As awesome as things like the Threadrippers of the world are, the market for that stuff is drying up.

x86 gone in 10-15 years? I really doubt it.
Many small companies will still be running the systems they are buying today :p
Office workers will still be running Windows 10 and Office 2016 on an i5 CPU.

More than likely the higher-end users will be switching to the 256-bit version of Windows 15 and complaining that our old 32-bit apps no longer work.
 
No, the primary issue was that IBM/Motorola weren't making enough money off of the limited market available to sustain development of consumer level chips.

Chicken/Egg.

But yeah, having Macintosh as your only consumer-level customer (legacy PPC still rules the embedded market; even ARM hasn't really broken in yet) certainly hurts. Even then the arch was attractive until it topped out, and the Mac went x86 as a result.

Such a shame too; like 68k before it, PPC is a MUCH cleaner arch than x86 is.

I honestly believe the PPC Macs were the last real Macs.

Like the Motorola 68k, nothing lasts forever in the tech world - Not even Windows will last forever.

68k-based chips are still used in the embedded market as cheap microcontrollers, where their price/performance/power ratio makes them attractive.

Then again, this is the market where you still see Z80s and 286s in everyday use.
 
Chicken/Egg.

But yeah, having Macintosh as your only consumer-level customer (legacy PPC still rules the embedded market; even ARM hasn't really broken in yet) certainly hurts. Even then the arch was attractive until it topped out, and the Mac went x86 as a result.

Eh what? ARM has literally owned the embedded markets since its first release. It is what ARM retreated to when it bombed out of the PC market with its first product. The only place in the embedded markets where PPC has ever had a toehold is at the very highest end. All the volume of the embedded market has been ARM for decades. Hell, more ARM cores ship in disk drives every year than PPC has sold in its entire history.

Such a shame too; like 68k before it, PPC is a MUCH cleaner arch then x86 is.

Architectural cleanness basically means F all. Always has, always will.
 
So I guess Apple really does want to try to convince me to spend $1000+ on a fancy Chromebook.
 
So I guess Apple really does want to try to convince me to spend $1000+ on a fancy Chromebook.

Nothing wrong with a Chromebook; they're undoubtedly growing in popularity. The Google Pixelbook is upwards of $1000.00.
 
Nothing wrong with a Chromebook; they're undoubtedly growing in popularity. The Google Pixelbook is upwards of $1000.00.

Pixelbook has an i5 processor and 8GB of RAM.

My point is that I can buy a cheap ARM-based Chromebook for $100-150 rather than a $1500 ARM-based MacBook that's good for surfing the internet.
 
Pixelbook has an i5 processor and 8GB of RAM.

My point is that I can buy a cheap ARM-based Chromebook for $100-150 rather than a $1500 ARM-based MacBook that's good for surfing the internet.

You have to consider the fact that if Apple goes down this path, the processor is going to be heavily modified and most likely quite a capable device. Besides, everything's going to the cloud; I wouldn't be surprised to see Microsoft push their OS towards the cloud, since they've announced that Windows is a low priority in favor of their cloud division.

Times are changing, and there was a time when no one thought the Motorola 68k would be toppled from its perch in personal computing.
 
I doubt it will happen, but Moore's law doesn't apply to Apple chips or what?
 
I doubt it will happen, but Moore's law doesn't apply to Apple chips or what?

It's happened time and time again in the past; there's absolutely nothing to say that history doesn't apply to x86/64.
 
Pixelbook has an i5 processor and 8GB of RAM.

My point is that I can buy a cheap ARM-based Chromebook for $100-150 rather than a $1500 ARM-based MacBook that's good for surfing the internet.

I think you mistake ARM for low-end chips. ARM is just an architecture. The Apple A11 is 5-10x faster than the chips you find in the cheapo $100 Chromebooks. The A11 in the current iPhones is actually faster than mid-range 4-core i5s. If Apple does switch their laptop/desktop machines, they won't be using A11s; they will be using an A12X or A13 with custom ASICs designed for those machines. Yes, Intel should be very worried, even if Apple doesn't switch to their own custom ARM chips in the next few years. ARM is without a doubt starting to gain serious traction in servers and even supercomputing.
 