Macs Could Jump to ARM in 2020

Personally, the biggest issue I see is Boot Camp/Parallels/VMware Fusion support for Windows apps. That is probably the biggest dealbreaker of all.
 
I do like virtualizing my Windows partition on macOS with Fusion. I expect ARM macOS support to happen eventually, but not right away.
 
It's only Intel's older offerings ;).

https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-review-unveiling-the-silicon-secrets/2

What is quite astonishing, is just how close Apple’s A11 and A12 are to current desktop CPUs. I haven’t had the opportunity to run things in a more comparable manner, but taking our server editor, Johan De Gelas’ recent figures from earlier this summer, we see that the A12 outperforms a moderately-clocked Skylake CPU in single-threaded performance. Of course there’s compiler considerations and various frequency concerns to take into account, but still we’re now talking about very small margins until Apple’s mobile SoCs outperform the fastest desktop CPUs in terms of ST performance. It will be interesting to get more accurate figures on this topic later on in the coming months.

Emphasis mine. But hey, whatever helps you sleep at night.

If you don't think Apple could make a run at things with a desktop chip where power and thermals were handled in a way they aren't for mobile, well, go back and read that Anandtech article again. Based on their testing, it's even better than I was thinking.

I think it's safe to say Apple has a whole series of chips baking in a lab, and I'm sure Intel is aware of at least some of that work. Apple has been screwed and hung out to dry by suppliers in the past. People thought they were nuts for building their own mobile CPUs, but it seems to be paying off. They have made huge investments in micro-LED displays. They just acquired/hired a power management firm to bring that work in-house.

Anyway, time will tell. If they ever do make the switch, even partially, things will certainly get even more interesting :)
 
You can laugh about an A12 doing 4K, but Apple has been running right around 1440p for a long time now... a bit more or less depending on the form factor.

I don't think it's quite Crysis-level graphics yet, but it's also not shabby, nor is it running on a 1000W-PSU SLI rig.

Also, folks may be assuming that an Apple ARM desktop would continue to use the Imagination-derived graphics. Most games are not CPU-bound, particularly at 4K. An integrated GPU wouldn't necessarily be the case; there is no reason they couldn't go to a much more powerful GPU, and they have had pretty strong relationships with both AMD and NVIDIA in the past.

Apple have a very bad relationship with NVIDIA, which is why NVIDIA have to make their own OS X drivers. NVIDIA keep pushing CUDA, which Apple have no interest in, as it's proprietary and not owned by them. AMD generally perform better than NVIDIA at OpenCL tasks.

Apple has no issues with 4K rendering on the Apple TV with a two-generation-old chip. Not to mention it has hardware HEVC (H.265) decoding with Atmos, which is still a pain in the ass for PCs.
 
Well, I highly doubt the existing A11 can just scale up and scale out to desktop-level power consumption and still manage good performance. It's not designed to do so, and it'll never be able to manage it as is.

Look at NVIDIA and their GPU design philosophies. When they stuck with monolithic big dies, throwing more transistors and power at the problem and trying to scale down, it failed spectacularly; Fermi comes to mind. Then they went back to the drawing board and designed specifically to scale across extreme power envelopes while maintaining performance scaling. It took them three tries to get it right (Kepler was a good start, Maxwell was a big improvement, and Pascal took it to another level).

Intel also had a stint with NetBurst where they tried to just throw more power at the problem. They, too, had to scrap the whole thing.

Now you tell me that Apple has had the secret sauce all along, while two semiconductor giants were clueless? I'm at least skeptical, and laughing a little at the notion.
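
To put rough numbers on the skepticism: dynamic power goes roughly as P ~ C * V^2 * f, and since voltage has to climb more or less with frequency, power grows close to the cube of the clock. A quick back-of-the-envelope sketch in Python (the 2.5 GHz / 5 W baseline is an illustrative guess, not measured A12 data):

# Dynamic power scaling: P ~ C * V^2 * f; with V rising roughly
# linearly with f, power grows ~f^3. Baseline figures are guesses.
base_freq_ghz = 2.5   # hypothetical mobile clock
base_power_w = 5.0    # hypothetical mobile TDP

for target_ghz in (3.0, 4.0, 5.0):
    scale = target_ghz / base_freq_ghz
    print(f"{target_ghz:.1f} GHz -> ~{base_power_w * scale ** 3:.0f} W")

# 5.0 GHz -> ~40 W: doubling the clock costs ~8x the power, which is why
# a chip tuned for 5 W can't simply be cranked up to desktop clocks.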

Apple have a long history of not rushing products and releasing them when they are good and ready. Just because they haven't done it yet doesn't mean they don't have an ace in the hole. Maybe they have spent all this time preparing the framework and ecosystem to support it rather than launching a half-assed product like most companies would.
 
Well, I don't doubt that they are very careful about it. However, I do doubt their CPU capabilities, due to the myriad unknowns still out there; the biggest two are the frequency/power curve and the absolute frequency ceiling of their A12 chips. Another thing I have doubts about is the tests themselves, since the results so far are all apples-to-oranges to me. Too many compromises to call it a fair comparison.

I can see a future, maybe as early as next year, where they drop the Air and promote the iPad Pro to be its replacement. All they need is a dock mechanism like the Surface's, and full-blown macOS with their proprietary software on it. That'll work for the Air folks, absolutely. People doing office work that can be done with Apple's office apps will absolutely love the 15+ hours of battery life it'll offer.

Or go the MS and Razer route, and have a docking "shell" for the iPhone that pretty much turns it into an Air. That'll work too, and the Apple fans will love it.

Though I still don't see Apple switching to ARM for folks who need performance, like me. Until they can hit enthusiast-level CPU performance at enthusiast-level power consumption, I'll remain skeptical.
 
I personally think they will; it all makes sense looking back on the last few years of Apple. They put the fastest Intel chips in their devices and still get torn to shreds by people. They need to differentiate themselves from PCs and move on, as Intel is holding them back.

You see it in the MacBooks throttling: Apple want to make their laptops sleek and fast, but Intel simply can't deliver a product that does both, yet Apple know they can do it because they already have a phone with similar burst performance. I bet it's frustrating for their engineers being hamstrung by market offerings.

We also need to keep in mind Apple haven't just been sitting on their hands. iOS was a brand-new OS to the market, and seeing it evolve over the years onto iPads and the Apple TV has shown how flexible it is. Likewise, it started off with just shitty games, and now we have full desktop-replacement software for it. Then look closely at how Apple have almost flown under the radar all this time as their silicon grows exponentially in compute power and capabilities every couple of years.

It has reached the point now where I think Android silicon is safely two generations behind, and that is before we even start talking about how Apple has baked an NVMe storage controller into the chip, allowing for desktop-level storage performance.
 
Well, we could be seeing the start of the ARM movement: the new iPad Pro launch claims it is faster than a PC, and this is the first time Apple have ever compared a mobile device to a PC on performance. Wonder if the A12 will be let off the leash with more thermal mass?
 
Have a look at the new iPad Pro and then we'll talk. I call bullshit on your post.

With the announcement of Adobe Photoshop CC for the iPad Pro, I think this will be the generation where other companies pay more attention to what Apple are doing, especially Intel.
 
Well, I just realised why Apple is so confident: the iPad will have the A12X, not the A12... two more CPU cores, three more GPU cores, and higher power dissipation. Can't wait to see this chip butt heads with an i7.

Early word is that its single-threaded performance outpaces the hexa-core i7, and multi-core performance is getting a massive 90% bump!
 
Apple-engineered, A-series, ARM-instruction-set Macs would be amazing. I would buy one in a heartbeat. I hope they do a whole ARM lineup: from the Mac Pro to the ultra-portable MacBook.
 
Early word is that its single-threaded performance outpaces the hexa-core i7, and multi-core performance is getting a massive 90% bump!

Let me guess: for 10 seconds before it starts to thermally throttle. So it has enough thermal dissipation to handle a benchmark, but good luck encoding a video on it. Add charging to the mix, and it will throttle even more due to the heat from charging.
 
The iPad Pro has a very large thermal mass, more than enough to hold up a chip that will probably be 10 W at most.

The iPhone has already proved it can encode video faster than PCs, and that is a sustained load.
 
Give that A11 SoC a 30-95 watt TDP - I would say that would make a pretty good contender against modern quad/six-core Core i5/i7 CPUs.
Because we all know an architecture's performance scales linearly with power.

There are some hilarious dreams people in this thread have. Intel and AMD must be incompetent, as the might of Apple ARM is laughing at them on all fronts. They are just stunned.
 
Because we all know an architecture's performance scales linearly with power.

There are some hilarious dreams people in this thread have.
Not sure why this is so "hilarious" to you, as we have all shared some very valuable and telling data from what we know so far - two years from now, we will know for sure either way.

Geekbench, "full blown desktop Intel Core i5"... :yawn:
Once again, that was an iPhone 8 A11 ARM64 CPU with a 5 watt TDP compared to an Intel i5 x86-64 notebook CPU with a 28 watt TDP.
Yes, the i5 was older (Ivy Bridge if I remember right), but still, when a 5 watt TDP CPU outperforms a 28 watt TDP CPU, that is very telling as to how far Apple ARM64 CPUs/SoCs have progressed.

Again, the A11 CPU in the iPhone 8 is also 40% faster, overall, than the 8-core AMD Jaguar in the PS4 Pro, which has around a 50-55 watt TDP (not counting the GPU).
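
Just to make that efficiency gap concrete, here is the rough performance-per-watt arithmetic those figures imply (a sketch only: scores are normalized placeholders with the i5 pegged at 1.0 and the A11 conservatively assumed to merely match it, and the TDPs are the ones quoted above):

# Rough perf-per-watt implied by the quoted figures; scores are
# normalized placeholders (i5 = 1.0), not real benchmark results.
chips = {
    "A11 (iPhone 8)": {"score": 1.0, "tdp_w": 5},           # assume ~i5-level score
    "i5 (28 W notebook)": {"score": 1.0, "tdp_w": 28},
    "Jaguar (PS4 Pro)": {"score": 1.0 / 1.4, "tdp_w": 52},  # A11 quoted as ~40% faster
}

for name, c in chips.items():
    print(f"{name}: {c['score'] / c['tdp_w']:.3f} points/W")

# Even if the A11's score were overstated by 2x, it would still lead the
# 28 W part on performance per watt by a wide margin.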

Intel and AMD must be incompetent, as the might of Apple ARM is laughing at them on all fronts. They are just stunned.
They will be when the physical limits of transistors have been reached, SMP scaling hits diminishing returns after so many cores, and x86-64 software efficiency has run its course.
 
Those are synthetic tests, and they aren't compared against desktop CPUs from Intel and AMD. There is no useful information to extrapolate to real desktop performance and wattages, and it is premature to proclaim (almost) parity, or even superior performance/watt, with current full desktop i5s.
 
I don't know what everyone is getting their panties in a twist over. The new Snapdragon 850 is about the same as a low- to mid-range i3 with much better power usage. I know the A11 is considerably faster than the CPU in the latest-gen PlayStation too. ARM is essentially the future of general computing as far as I can tell.
 
Moto 68000 -> PowerPC -> x86 -> ARM

Each time, it has been extremely painful for the Mac community. I expect this to be no different. I realize this is a step toward their universal operating system platform. While a single platform for every computing device makes sense in terms of maintenance and app crossover, it's a pipe dream. Even Microsoft is failing with its x86-to-ARM emulation and slow performance, or we would all be running Windows on ARM. It's obvious Steve Jobs has left the building.
 
Moto 68000 -> PowerPC -> x86 -> ARM

Each time, it has been extremely painful for the Mac community. I expect this to be no different. I realize this is a step toward their universal operating system platform. While a single platform for every computing device makes sense in terms of maintenance and app crossover, it's a pipe dream.
Yeah, no, that was all back in the 1980s to 2000s - things have changed dramatically since then.
Back then, Apple did not have any pre-existing software or OS foundations for the new architectures, unlike now.

Smartphones and tablets were a massive paradigm shift for the world and even more so for Apple.
I'm no Apple fanboy, but it is interesting to see people tout these same tired ideas about multi-architecture support that haven't held up for more than a decade now.

I know this thread is getting a bit long, but you might want to go back and read post 68:
Something to consider: we're seeing containerization and cross-platform frameworks grow to the point that the OS and hardware are becoming irrelevant. Code in languages like Java, Go, and Python - not to mention the web languages - can be compiled and run anywhere. More important is the consideration of platform: desktop, laptop with touch, tablet, phone, watch... fridge, car, IoT?

As the frameworks evolve, expect that to be tackled too.
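
As a trivial illustration of that point, the same Python script runs unchanged on x86-64 and ARM64; only the interpreter underneath differs (nothing here is an assumption beyond having Python installed on both machines):

# Nothing in this script is architecture-specific; run it on an x86-64
# box and an ARM64 box and only the reported machine string changes.
import platform
import sys

print(f"machine: {platform.machine()}")        # e.g. 'x86_64' or 'arm64'
print(f"python:  {platform.python_version()} on {sys.platform}")
print(f"2**64 =  {2**64}")                     # identical result everywhere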

Even Microsoft is failing with the x86 to ARM emulators and slow performance, or we all would be running Windows ARM.
Windows RT died because ARM (ARMv7) CPUs were not nearly as powerful or capable as they are now - at best, the fastest units were similar to low-end Intel Atom CPUs, and the platform was 32-bit only.
ARM, now ARM64 (ARMv8.x), has come a long way since 2012, when Windows RT was introduced.

Again, if this were earlier this decade, I would fully agree with you.
Those ideas and concepts no longer apply at all and certainly won't in 2020.

It's obvious Steve Jobs has left the building.
So because Steve Jobs isn't with Apple any more, they are now "dumb enough to move from x86-64 to ARM64" or something?
You realize it was under Steve Jobs that the company moved from PowerPC to x86, and later x86-64, and then ARM for iOS in the mid-2000s, right?
 
Those are synthetic tests, and they aren't compared against desktop CPUs from Intel and AMD. There is no useful information to extrapolate to real desktop performance and wattages, and it is premature to proclaim (almost) parity, or even superior performance/watt, with current full desktop i5s.
Go back and read post 93 - in fact, it might be a good idea for you to re-read the thread, as these comments are starting to repeat the same tired and obsolete ideas that no longer apply.
Those are hardly "synthetic" results, and they are comparing them to server x86-64 CPUs, not just desktop CPUs.
 
As optimistic as I am about the performance of modern ARM implementations, it's still fair to say that a 1:1 test has not been performed - and it will be very hard to do one without relying on Apple.
 
You are right, and that really is the bottom line.
Once we have that info, we will know for sure - this is assuming Apple doesn't obscure their results.

It will be extremely important that they don't do that, especially since their own products are currently x86-64, and I'm sure they want to avoid cannibalizing their own markets, as well as the Osborne effect.
 
Go back and read post 93 - in fact, it might be a good idea for you to re-read the thread, as these comments are starting to repeat the same tired and obsolete ideas that no longer apply.
Those are hardly "synthetic" results, and they are comparing them to server x86-64 CPUs, not just desktop CPUs.
Geekbench, SPEC, specific server workloads, Bulldozer, and low-clocked Skylake. Am I missing anything? Please point me to a real-world desktop comparison with current x86 desktop CPUs. Until then, a low-power ARM desktop beast is a pipe dream.
 
Yeah, no, that was all back in the 1980s to 2000s - things have changed dramatically since then.
Back then, Apple did not have any pre-existing software or OS foundations for the new architectures, unlike now.

Smartphones and tablets were a massive paradigm shift for the world and even more so for Apple.
I'm no Apple fanboy, but it is interesting to see people tout these same tired ideas about multi-architecture support that haven't held up for more than a decade now.

Windows RT died because ARM (ARMv7) CPUs were not nearly as powerful or capable as they are now - at best, the fastest units were similar to low-end Intel Atom CPUs, and the platform was 32-bit only.
ARM, now ARM64 (ARMv8.x), has come a long way since 2012, when Windows RT was introduced.

Again, if this were earlier this decade, I would fully agree with you.
Those ideas and concepts no longer apply at all and certainly won't in 2020.

So because Steve Jobs isn't with Apple any more, they are now "dumb enough to move from x86-64 to ARM64" or something?
You realize it was under Steve Jobs that the company moved from PowerPC to x86, and later x86-64, and then ARM for iOS in the mid-2000s, right?

Take it from an expert: hardware emulation from one platform to another is PAINFULLY slow.

Maybe you should read this:

https://www.tomshardware.com/news/windows-on-arm-too-expensive,37997.html

Now, if everything ran on UWP (Universal Windows Platform), it would be a much better situation, as intermediate-language code is compiled at load time and optimized for the particular platform. But Apple still runs mostly native code on the desktop. They really don't have an equivalent of UWP. And even if they did have something like a UAP (Universal Apple Platform), you would need to convince developers to flock to it. This is the problem Microsoft is facing and failing at, even though UWP runs on iOS, Android, Windows x86, and Windows ARM from one code source. It seems like a win-win, but developers don't want to learn another standard.
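
That load-time model is the same trick managed runtimes have used for years. As a loose analogy only (Python bytecode, not UWP's actual IL), the source compiles once to an architecture-neutral form, and only the runtime underneath is machine-specific:

# Loose analogy for the IL model: compile once to neutral bytecode and
# let the platform-specific runtime handle the architecture-dependent part.
import dis

def add(a, b):
    return a + b

dis.dis(add)   # dumps the bytecode for add(); the bytecode is the same on
               # x86-64 and ARM64 - only the interpreter's machine code differs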

As to the PowerPC -> x86 move, it was a necessity, as PowerPC was no longer keeping up speed-wise; Apple was getting hammered. And one of the lures of paying more for an Apple machine is the fact that you can boot Windows and run it. If they go ARM, the Windows side of things will suffer massively and turn off more users, on top of native apps having to run in emulation mode.
 
Geekbench, SPEC, specific server workloads, Bulldozer, and low-clocked Skylake. Am I missing anything? Please point me to a real-world desktop comparison with current x86 desktop CPUs. Until then, a low-power ARM desktop beast is a pipe dream.
So first you think it is "hilarious" when I said to give an Apple ARM64 CPU 30-95 watts, and now you are acting like we said a low-power ARM desktop processor is going to be a beast... OK then (trolling, or just bad reading comprehension?).
A true desktop ARM64 CPU has not been released yet, and until one is, just as IdiotInCharge stated, a true 1:1 test will not be possible.

With the information we have, though, we are starting to be able to paint a much clearer picture of what Apple ARM64 CPUs/SoCs are capable of compared to semi-modern x86-64 CPUs.
It seems people continue to think this is only in the realm of fantasy, which I would have agreed with 5-10 years ago, but not with the technology we have now, let alone two years from now.
 
Take it from an expert: hardware emulation from one platform to another is PAINFULLY slow.

Maybe you should read this:

https://www.tomshardware.com/news/windows-on-arm-too-expensive,37997.html

Now, if everything ran on UWP (Universal Windows Platform), it would be a much better situation, as intermediate-language code is compiled at load time and optimized for the particular platform. But Apple still runs mostly native code on the desktop. They really don't have an equivalent of UWP. And even if they did have something like a UAP (Universal Apple Platform), you would need to convince developers to flock to it. This is the problem Microsoft is facing and failing at, even though UWP runs on iOS, Android, Windows x86, and Windows ARM from one code source. It seems like a win-win, but developers don't want to learn another standard.

As to the PowerPC -> x86 move, it was a necessity, as PowerPC was no longer keeping up speed-wise; Apple was getting hammered. And one of the lures of paying more for an Apple machine is the fact that you can boot Windows and run it. If they go ARM, the Windows side of things will suffer massively and turn off more users, on top of native apps having to run in emulation mode.
I am aware of how difficult it is to emulate, and I doubt Apple will be emulating much of anything with their ARM64 CPUs in 2020.
As others have stated here, they will most likely either have an x86-64 core in their SoC for software binaries that require it to avoid emulation, or they will have an x86-64-like ASIC in their SoC to perform the same functions.

Apple also had dual binaries in 2006 for both PowerPC and x86/x86-64, along with Rosetta, which seemed to work well enough to fill the gap until x86-64 fully succeeded PowerPC in their code binaries and compilations.
Emulation will be the last thing Apple, or anyone else, does as that is the last resort for compatibility at the sacrifice of performance.
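
Conceptually, a dual (universal) binary is just one artifact carrying a code slice per architecture, with the loader picking the matching slice at launch. A toy sketch of that selection logic (the dict layout and byte strings are invented for illustration; real universal binaries are Mach-O files, not Python dicts):

# Toy model of universal-binary dispatch: one artifact, one code slice
# per architecture; the loader picks the native slice, else falls back.
import platform

slices = {
    "ppc": b"...PowerPC machine code...",
    "i386": b"...x86 machine code...",
    "x86_64": b"...x86-64 machine code...",
}

arch = platform.machine()            # e.g. 'x86_64'
blob = slices.get(arch)
if blob is not None:
    print(f"running the native {arch} slice ({len(blob)} bytes)")
else:
    print(f"no {arch} slice; a Rosetta-style translator would kick in here")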

The reason Microsoft went the emulation route with Windows on ARM is that they did not previously have an ARM hardware platform for their code base to work on, unlike Apple with iOS and over a decade of established software and in-house hardware.
 
Geekbench, "full blown desktop Intel Core i5"... :yawn:
You can't handle it. It's like your mind is blown right now. All those years on the Intel Kool-Aid. Not to worry, that will dissipate eventually. You will be fine in a few years. Probably running Apple hardware. But fine otherwise. :pigeon:
 
As others have stated here, they will most likely either have an x86-64 core in their SoC for software binaries that require it to avoid emulation, or they will have an x86-64-like ASIC in their SoC to perform the same functions.

They can't do that due to licensing agreements on the x86 instruction set. They would have to add an actual x86 chip from Intel or AMD. Yep, that will do wonders for the price and compatibility, won't it?

And my article was running full Windows 10, not RT. And the performance was horrible even on the latest Snapdragons. Getting software to switch over is a lot harder than you think, or the UWP/Windows Store would be a lot bigger.
 
So first you think it is "hilarious" when I said to give an Apple ARM64 CPU 30-95 watts, and now you are acting like we said a low power ARM desktop processor is going to be a beast... ok then. (trolling or just bad reading comprehension?)
A true desktop ARM64 CPU has not been released yet, and until it does and just as IdiotInCharge stated, a true 1:1 test will not be possible.
No, it was hilarious to equate its current performance to a "full desktop i5" and then to extrapolate its future performance by giving it more TDP and scaling the performance linearly.

Yes, until one does, you have no basis on which to claim it will be as powerful as x86 CPUs with a significantly lower power draw. So live by your own rules.
 
They can't do that due to licensing agreements on the x86 instruction set. They would have to add an actual x86 chip from Intel or AMD. Yep, that will do wonders for the price and compatibility, won't it?
I will agree with you on this point, and it will be interesting to see how Apple deals with this.
As for the compatibility, it will most likely be a stop-gap until a fully native ARM64 SoC, along with the following OS iteration, is released.

Obviously I don't know for sure, but based on how things have happened in the past, that will be the most likely scenario.

And my article was running full Windows 10, not RT. And the performance was horrible even on the latest Snapdragons. Getting software to switch over is a lot harder than you think, or the UWP/Windows Store would be a lot bigger.
Not that I want to turn this thread into an OS war or Windows-bashing session, but Windows 10 is hardly optimized for ARM64 and is extremely bloated overall compared to iOS and OS X, both of which are UNIX-based operating systems.
Another thing: the Snapdragon laptop they used in that article only had 4GB of RAM, which is barely sufficient for even a casual Windows-based desktop/laptop environment in this day and age.

I'm not saying the Snapdragon CPU wasn't the limitation (Windows ARM64 code optimization needed?), but 4GB of RAM is paltry, even for a modern laptop.
If it had 8GB of RAM like the x86-64 laptop the article used, it would have been much more apples-to-apples, especially when testing Chrome with multiple tabs, since Chrome can quickly burn through the remainder of that 4GB that Windows itself hasn't already used - around ~1.5GB. Seriously, four tabs of HTML5 can burn through 1.5GB of RAM like it was tissue paper.
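
For what it's worth, that RAM budget is easy to sanity-check (Windows' share and the per-tab cost below are the rough estimates from above, nothing measured):

# Rough RAM budget implied by the estimates above: Windows takes ~2.5 GB
# of the 4 GB machine, leaving ~1.5 GB, and four HTML5 tabs eat about that.
windows_share_gb = 2.5     # assumed: 4 GB total minus the ~1.5 GB left over
tab_gb = 1.5 / 4           # four tabs chewing ~1.5 GB total

for total_gb in (4, 8):
    free_gb = total_gb - windows_share_gb
    print(f"{total_gb} GB machine: ~{free_gb:.1f} GB free, ~{int(free_gb / tab_gb)} tabs")

# 4 GB machine: ~1.5 GB free, ~4 tabs
# 8 GB machine: ~5.5 GB free, ~14 tabs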

Again, Microsoft has an uphill battle to get code-optimization going for ARM64 - I never said this was or would be easy.
But I am saying that Apple has had an ARM/ARM64-optimized code base and in-house-developed hardware for over a decade now - that gives them one hell of an edge over Microsoft, so using Microsoft as the example of what Apple is going to have to go through isn't really accurate at all.

Even Linux and most BSD branches have far more optimized ARM/ARM64 code than Microsoft does at this point, since they have been working with that platform for ages, whereas Microsoft has only been attempting it since around 2012.
 
Apple also had dual binaries in 2006 for both PowerPC and x86/x86-64, along with Rosetta, which seemed to work well enough to fill the gap until x86-64 fully succeeded PowerPC in their code binaries and compilations.

Something I'm keeping in mind here is that when Apple went from PowerPC to x86, they went from a slower architecture to a faster one. Here they're largely doing the reverse; their saving grace, in my mind, will be that most of the applications that need a compatibility layer will likely not be resource-intensive, as they'll have most of the native first- and third-party stuff already running on the new architecture.
 
No, it was hilarious to equate its current performance to a "full desktop i5" and then to extrapolate its future performance by giving it more TDP and scaling the performance linearly.

Yes, until one does, you have no basis on which to claim it will be as powerful as x86 CPUs with a significantly lower power draw. So live by your own rules.
It's called speculation...
How dare we use critical thinking with thoughts and ideas based on history and modern technological capabilities to speculate on potential future technologies. :rolleyes:
 
Where is the plant? Is it in the path of the next typhoon, hurricane, or earthquake?

If I were an investor, I'd wanna know :)
 
I know there's a "when yo' moma" joke in here somewhere. But I'll resist.
 
It's called speculation...
How dare we use critical thinking with thoughts and ideas based on history and modern technological capabilities to speculate on potential future technologies. :rolleyes:
Oh, you may speculate all you want. Claiming is a different beast.
 