Macs Could Jump to ARM in 2020

x86-64 wasn't technically innovative: double the width and number of registers, done. What made x86-64 notable is that Intel refused to do it, and when AMD did it, Microsoft eventually supported them. Good gamble on AMD's part for sure, but not really technical innovation.
Don't forget about the Intel Itanium ISA (IA-64) - that one didn't work out too well, though.
AMD definitely had a win with the 64-bit extensions for x86, at least at the time, and extending the address space beyond 32-bit (and 36-bit PAE) was a necessity at the time that is being fully utilized today.
 
My point is that if you list AVX-512 as innovation, you can't bemoan x86-64 as... not.

One could just as easily dismiss AVX-512 as a wider version of 256-bit AVX (or FMA3, really) with a couple dozen new extensions.

Well, I do, and not because of what AMD did, but because it was so stupidly simple and yet Intel refused to do it.

AVX, in contrast, brings new types of computing to the table.

And upon closer inspection, I believe that it would be fair to say that x86-64 and AVX are more similar than different.
 
The Apple A11 CPU is as fast as a full-blown desktop Intel Core i5. And it's one generation old. If Apple ramps up the power envelope, they can smoke Intel. And I hope they do this.
Benchmarks (with credible links) showing that?

when you take into account what a PC does vs a mobile phone...

ARM is great for its intended use, but there is not a chance it is going to take over the x86 market... even with Windows 10, you can still run software from the '80s on the machine in most cases with no issues.

ARM is RISC and X86 is a General Purpose Computing Platform...

Ask that ARM to do some 4K gaming and see how fast that thing thermally implodes....
 
This is a bad idea; there is a reason that, outside of creative-type software, there wasn't a ton of programs available on Macs pre-Intel. Almost ALL software is written for an x86 environment since that computing format DOMINATES everything by a HUGE margin. If Apple moves to ARM, you are going to be back to the dark ages of Mac computers and they are going to lose what small market share they have. Outside of products like Photoshop and content-editing software, no one is going to develop for Mac anymore.
 
That is such a load... The A12 already runs circles around many Intel chips and does so with far less power consumption. At the rate Apple has been improving their processor designs, it was only a matter of time before this happened. Just imagine a few of these chips running in parallel. Just because Microsoft failed at ARM with their shitty development, doesn't mean Apple will.

You can believe whatever you like, but that doesn't make it true.... What I said is grounded in current, actual facts.

If you want to open the discussion to "imagining" what several A12s would be like in parallel, why stop there? Why not imagine a world where Intel CPUs reduced power draw to a level lower than an ARM processor? There is nothing wrong with ARM-based solutions. They have their place, but if you believe the current MacBook Pro could be re-released with A12 ARM processors and *NOT* suffer major performance disadvantages, then you either don't do much on your Mac or you are just living in denial.
 
A lot of people, including you, make a big deal out of the processor technology. ARM, RISC, CISC: all this shit doesn't really mean anything these days. Processors are so complex and so large that the idea of some simple notion of how to run something being inherently more efficient or better is garbage. All these guys implement and copy the features and advantages of the other guys, and if any instruction or architecture is shown to be better, the next guy will just copy it. An ARM processor in a modern smartphone is not some specialized calculator; it's an all-encompassing system on a chip that does tons of different things on par with a modern desktop. The only difference is that they are smaller and focused more on being cheap. Once companies start scaling them up, and they already are, they are going to get competitive with x86. It's only a matter of time when they have so much going for them in volume already.

Except now all the apps coded for phones will run natively on some new desktops and laptops, and for a time that will be enough to bridge the software gap until those companies can get their apps mouse- and keyboard-friendly. And let's not kid ourselves: a mouse and keyboard could work perfectly fine in a phone GUI with zero modifications. Literally click instead of touching with your finger.

Finally, even though x86 will have a raw performance lead for some time, there will be a massive market of people who don't care about that, which is the vast majority of consumers, to grow into until they get the power acceptable enough to replace their entire line. Even Microsoft is flirting with ARM again this year because they know damn well this is coming sooner or later.

The only play Intel really had was to make Atom work in phones. For whatever reason they did not focus enough money and time on that, and they have lost that battle.

To me the irony of all this is that Apple was the last company that needed to go ARM, but they will likely be the first to actually do it. The company that should have been pushing the shit out of ARM in laptops and desktops was Google; they should have gone all in, gotten rid of Chrome OS, and just made Android laptops.

I am not making a big deal out of it, I am just telling you the truth. The complexity of a chip says nothing about how fast it is or how efficient it is. And we are not discussing "features", we are talking about architecture and design. Can you make an architecture change to better support a feature? Yes... But you don't have to do it that way. Take ray tracing, for example. It's been around for a *long* time. You could render ray-traced scenes on your computer going back to ~1982 or so. The problem was that you could only render in software, as there was no consumer-grade hardware accelerator, so it simply wasn't a viable option to use. Ray tracing is a perfect example where RISC-based setups shine and will always outperform CISC solutions.

So, I will say again that there are no use cases where you *can't* use ARM. There are instances where they are much better than everything else out there, and there are instances where they are horrendous compared to everything out there, but if you *currently* think you are not going to compromise your computing experience using an ARM-based solution in a Mac by 2020 as compared to alternatives out at the same time, you are not using your computer for much of anything.
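For anyone who hasn't seen it, "rendering in software" here just means the CPU crunches the intersection math itself for every pixel. Below is a minimal sketch of that per-pixel work, a single ray-sphere intersection test; Vec3, hitSphere, and the scene values are made up purely for illustration and don't come from any particular renderer.

```swift
import Foundation

// Minimal software ray tracing step: intersect one ray with one sphere.
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
    func dot(_ o: Vec3) -> Double { x * o.x + y * o.y + z * o.z }
}

// Returns the distance along the ray to the first hit, or nil if it misses.
func hitSphere(center: Vec3, radius: Double, origin: Vec3, dir: Vec3) -> Double? {
    let oc = origin - center
    let a = dir.dot(dir)
    let b = 2.0 * oc.dot(dir)
    let c = oc.dot(oc) - radius * radius
    let disc = b * b - 4 * a * c
    guard disc >= 0 else { return nil }
    return (-b - disc.squareRoot()) / (2 * a)
}

// One ray, fired straight down the -z axis through a single "pixel".
let t = hitSphere(center: Vec3(x: 0, y: 0, z: -3), radius: 1,
                  origin: Vec3(x: 0, y: 0, z: 0), dir: Vec3(x: 0, y: 0, z: -1))
print(t.map { "hit at t = \($0)" } ?? "miss")
```

A full software renderer repeats that math (plus shading and bounces) millions of times per frame, which is why it was never viable on consumer hardware without acceleration.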
 
I am not making a big deal out of it, I am just telling you the truth. The complexity of a chip says nothing about how fast it is or how efficient it is. And we are not discussing "features", we are talking about architecture and design. Can you make an architecture change to better support a feature? Yes... But you don't have to do it that way. Take ray tracing, for example. It's been around for a *long* time. You could render ray-traced scenes on your computer going back to ~1982 or so. The problem was that you could only render in software, as there was no consumer-grade hardware accelerator, so it simply wasn't a viable option to use. Ray tracing is a perfect example where RISC-based setups shine and will always outperform CISC solutions.

So, I will say again that there are no use cases where you *can't* use ARM. There are instances where they are much better than everything else out there, and there are instances where they are horrendous compared to everything out there, but if you *currently* think you are not going to compromise your computing experience using an ARM-based solution in a Mac by 2020 as compared to alternatives out at the same time, you are not using your computer for much of anything.

And likewise, the specific architecture tells you nothing about how fast or efficient it is. Yes, there are specific scenarios where ARM is not going to work, say for hardcore PC gaming right now. But my point is that once ARM makes inroads into the laptop form factor at the ultrabook and light end, they will solve those problems and move through everything if they are motivated.

Now let's talk about ray tracing. What stops someone in a CISC environment from implementing a portion of their complex, massive SoC to run things in a RISC-based manner? And at the size of a chip, how much will it really matter when you have billions of transistors? An SoC can even place an entire coprocessor right on the die, just like they do with the GPU. See, this is what I mean: it doesn't matter, period. The old ideas, where you had a single-use CPU with inherent advantages or disadvantages, have long gone away. We live in a world where a CPU is way more than just a CPU; it's a collection of products aggregated onto a single chip that is so massive you can add more, shuffle things around, etc., and ultimately phones are now running pretty much all the same applications that desktops are for the average user. And the average user is driving the larger trends. And yes, in fact, the average user has ALWAYS compromised the user experience in some way or another; they just get used to it.
 
there are specific scenarios where ARM is not going to work, say for hardcore PC gaming right now

Should say 'by all indications, will work very poorly'; saying 'not going to work' implies a hard stop of sorts, and that is very far from the case. A great many games, perhaps the majority, are played on ARM right now. Some are even 'hardcore' in nature, e.g. Fortnite. But I do get where you're coming from here.
 
And likewise, the specific architecture tells you nothing about how fast or efficient it is. Yes, there are specific scenarios where ARM is not going to work, say for hardcore PC gaming right now. But my point is that once ARM makes inroads into the laptop form factor at the ultrabook and light end, they will solve those problems and move through everything if they are motivated.

Now let's talk about ray tracing. What stops someone in a CISC environment from implementing a portion of their complex, massive SoC to run things in a RISC-based manner? And at the size of a chip, how much will it really matter when you have billions of transistors? An SoC can even place an entire coprocessor right on the die, just like they do with the GPU. See, this is what I mean: it doesn't matter, period. The old ideas, where you had a single-use CPU with inherent advantages or disadvantages, have long gone away. We live in a world where a CPU is way more than just a CPU; it's a collection of products aggregated onto a single chip that is so massive you can add more, shuffle things around, etc., and ultimately phones are now running pretty much all the same applications that desktops are for the average user. And the average user is driving the larger trends. And yes, in fact, the average user has ALWAYS compromised the user experience in some way or another; they just get used to it.

You are correct. The specific architecture doesn't tell you how fast or efficient it is either. But the context we are discussing is Apple moving the MacBook Pro over to an ARM-based architecture, so take the architectures we are discussing and combine them with the 2020 MacBook Pro context. That is what we (or I, at least) are discussing, which is why I keep bringing up the *use* of the device in question. Can you re-invent the entire software and compiler base of desktop/laptop computers from the last 30+ years and design them in a way to take advantage of ARM processors? Yes. Could you stick with current designs and implement emulation (through SoC changes) that runs the code within 25% of the speed of native code? Yes. But *BOTH* those things have disadvantages any competitor that sticks with current design philosophy won't have to deal with, so anyone who uses them is compromising. If those compromises don't affect your usability (either through accepting lower performance or by changing the way or what you use the device for), that's your prerogative. Having said this, those that expect their computing device to deliver a performance-first experience will not find those compromises acceptable. Apple's MacBooks were a joke for performance users until they switched to Intel CPUs. Chromebooks are equally a joke today (for performance users). This is not going to change by 2020.
 
If Apple ramps up the power envelope they can smoke Intel. And I hope they do this.

It is when that iPhone 8 A11 ARM64 CPU has a 5 watt TDP - compared to that x86-64 i5 "notebook" CPU with a 28 watt TDP.
Yes, that is extremely telling.

The CPU in the iPhone 8 is also 40% faster, overall, than the 8-core AMD Jaguar in the PS4 Pro, which has around a 50-55 watt TDP (not counting the GPU).
I know the Jaguar is already over a half-decade old at this point, but again, that shows the strides that the ARM64 ISA has made in recent years.

Give that A11 SoC a 30-95 watt TDP and I would say that would make a pretty good contender against modern quad/six-core Core i5/i7 CPUs.
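To put rough numbers on why that TDP gap is so telling, here is a back-of-the-envelope perf-per-watt comparison using only the figures quoted in this thread; the wattages and the "as fast as an i5" / "40% faster than Jaguar" claims are taken at face value as assumptions, not measurements.

```swift
// Back-of-the-envelope perf/watt, using only the rough numbers quoted in this thread.
// Assumptions, not measurements: A11 ~5 W vs. a 28 W notebook i5 at roughly equal
// throughput, and A11 ~1.4x an ~50 W 8-core Jaguar.
let a11Watts = 5.0
let i5Watts = 28.0
let jaguarWatts = 50.0
let a11VsJaguarSpeedup = 1.4  // "40% faster, overall"

// If the A11 merely matched the i5's throughput, its perf/watt lead would be:
print("vs. i5:     \(i5Watts / a11Watts)x perf/watt")                            // ≈5.6x
// Against the Jaguar, fold in the claimed 40% performance edge as well:
print("vs. Jaguar: \(a11VsJaguarSpeedup * jaguarWatts / a11Watts)x perf/watt")   // ≈14x
```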

NetBurst says, solid plan gents......YOU CAN DO IT!
 
Of a desktop implementation? Outside of a new Mini, it's doubtful we'd see such a thing for even longer. You still need desktop-level hardware for many tasks.

The overall point though is that below those tasks, you don't need a desktop CPU ;).
True. Maybe a MacBook or Mac Mini - both are entry-level and held back spec-wise. I'm curious to see if the Mac Mini gets a refresh this year... after all, it's only been four years since the last one. It's still overpriced.
 
Benchmarks (with credible links) showing that?
when you take into account what a PC does vs a mobile phone...

Benchmarks? How about the real world. ARM only for mobile? Huh:

https://www.datacenterknowledge.com...rm-servers-it-expands-its-data-center-network
https://www.nextplatform.com/2017/02/01/arm-server-chips-challenge-x86-cloud/
https://www.networkworld.com/articl...processor-challenges-for-the-data-center.html

I remember when this came out and I scoffed - seems they were right (look at the date): https://www.wired.com/2013/05/hp-arm-memcached-chip-paper/

ARM is great for its intended use, but there is not a chance it is going to take over the x86 market... even with Windows 10, you can still run software from the '80s on the machine in most cases with no issues.

Intel stockholders sure hope you are right!

Here's where, if I were such a stockholder, I might be a little concerned. Which is a larger part of the x86 market - servers or desktops? Mobile isn't even on the table since ARM totally obliterated them, and mobile has been outpacing desktops in growth and volume for some time now, so the writing is on the wall for desktops too. I don't think they are ever going away (I hope not since I like mine too!), but to pretend things are going to stay as they always have is just silly.

ARM is RISC and X86 is a General Purpose Computing Platform...

What does that even mean? ARM is RISC and x86 is CISC - both are capable of supporting general-purpose computing platforms; just different philosophies. And really there aren't any pure RISC or CISC designs anymore - CPUs are far more complex these days.

Ask that ARM to do some 4K gaming and see how fast that thing thermally implodes....

Mobile ARM? Why would you ask mobile ARM chips to 4K game - that's ridiculous. ARM chips targeted at handling 4K? Different story. Just because they don't exist today doesn't mean they couldn't exist in the future.

There is zero reason an ARM chip couldn't take on Intel on the desktop. x86 has a lot of technical baggage that so far has been overlooked because "ooh, backwards compatibility", but software toolchains are maturing. Windows and Linux are starting to straddle ARM as well as x86: the Raspberry Pi alone is causing a HUGE shift in interest for ARM-based machines, and there are more and more general-purpose ARM boards geared toward performance, not cheapness, coming out daily.

You can poo-poo it all you want - the market won't care :)

You should though. Even if you never own an ARM-based box, the pressure should spur both Intel and AMD to keep the pedal to the metal, and that's good for everyone!
 
Benchmarks (with credible links) showing that?

when you take into account what a PC does vs a mobile phone...

ARM is great for its intended use, but there is not a chance it is going to take over the x86 market... even with Windows 10, you can still run software from the '80s on the machine in most cases with no issues.

ARM is RISC and X86 is a General Purpose Computing Platform...

Ask that ARM to do some 4K gaming and see how fast that thing thermally implodes....
Just compare a Geekbench score of an A11 to a quad-core desktop i5 of your choosing. I personally compared it to my old Haswell-based i5-4690K, and they were within 10% of one another. So I figure a non-K quad-core i5 at a lower clock speed would actually be slower than the A11.

It's kind of laughable when you compare the TDP of the two chips. The die size also. Apple could crush Intel if they wanted to. And from the sounds of it they will do just that soon enough. What Apple's CPU division has done is nothing short of remarkable. I am a PC hardware enthusiast and I'm not really a Mac user at all, but the inner geek in me is nothing short of extremely impressed at what Apple has done in terms of their CPU design and engineering.
 
Just compare a Geekbench score of an A11 to a quad-core desktop i5 of your choosing. I personally compared it to my old Haswell-based i5-4690K, and they were within 10% of one another. So I figure a non-K quad-core i5 at a lower clock speed would actually be slower than the A11.

It's kind of laughable when you compare the TDP of the two chips. The die size also. Apple could crush Intel if they wanted to. And from the sounds of it they will do just that soon enough. What Apple's CPU division has done is nothing short of remarkable. I am a PC hardware enthusiast and I'm not really a Mac user at all, but the inner geek in me is nothing short of extremely impressed at what Apple has done in terms of their CPU design and engineering.

When something like that is truly laughable, yet should be true and is not....what does that tell you?
 
Wow, how did this thread get to the point where people are arguing that an Apple SoC is near Haswell and whatever...? And Geekbench, really?
Anyway, just architecture-wise, most software won't even work; good luck.
 
I'm hearing lots of bluster and complaining about the fact that I'm saying that Apple's A11 is as fast as a quad-core i5. I have shown that much by linking to and discussing benchmarks. Instead of all the bluster, how about you guys actually prove me wrong? I would love to see it. Perhaps throw in the A12 while you're at it.
 
When something like that is truly laughable, yet should be true and is not....what does that tell you?
But why is this "laughable" - I'm legitimately asking.
IdiotInCharge brought up some good points which I am now taking into account, but if you could, please elaborate on your stance, or at least throw out some benchmarks or examples to back up your thoughts or opinions? :)


NetBurst says, solid plan gents......YOU CAN DO IT!
Also, what does Netburst have to do with any of this?
It isn't like the ARM64 CPUs/SoCs have super long pipelines or run overly hot at their respective clock frequencies and TDPs, so why say this?
 
But why is this "laughable" - I'm legitimately asking.
IdiotInCharge brought up some good points which I am now taking into account, but if you could, please elaborate on your stance, or at least throw out some benchmarks or examples to back up your thoughts or opinions? :)



Also, what does Netburst have to do with any of this?
It isn't like the ARM64 CPUs/SoCs have super long pipelines or run overly hot at their respective clock frequencies and TDPs, so why say this?

Ask Sickbeast, I quoted him..

NetBurst? If you can't figure out how throwing more power at it doesn't give you competitive performance, then... well.....
 
Well, I highly doubt the existing A11 can just scale up and scale out to desktop-level power consumption and still manage good performance. It's not designed to do so, and it'll never be able to manage it as is.

Look at Nvidia and their GPU design philosophies. When they stuck with monolithic big dies, throwing more transistors and power at the problem and trying to scale down, it failed fantastically. Fermi comes to mind. Then they went back to the drawing board and designed specifically to be able to scale across extreme power envelopes while maintaining performance scaling. It took them three tries to get it to work well (Kepler was a good start, Maxwell was a good improvement, but Pascal took it to another level).

Intel also had a stint with NetBurst where they tried to just throw more power at the problem. They also had to scrap the whole ordeal.

Now you tell me that Apple had the secret sauce to the problem all along, while 2 semiconductor giants were clueless? I'm at least skeptical, and laughing a little at the notion.
 
This has been posited for years now. It was originally dreamed up in 2011 by the master of bullshit, Charlie Demerjian.

Initial BS pitch to get paid readers:

https://semiaccurate.com/2011/05/05/apple-dumps-intel-from-laptop-lines/

BS pitch part two:

https://semiaccurate.com/2012/09/24/an-update-on-apple-moving-away-from-intel/

But seven years later, there's still no sign of this sea change... because Charlie dreamed it up to get paid subscriptions.


Anyone besides Charlie Demerjian can see what an acid trip that is. Apple was suffering from poor performance under the PowerPC and 68000 series because the rest of the market was using Intel. But now everyone else uses Intel, so the challenge then is to maintain market share (see shiny and pointless MacBooks with touchscreen function key rows). But since both Qualcomm and Intel suck, they can take their sweet time transitioning.

The reality is that Apple can slowly swallow their own x86 Mac market by making it redundant. They're doing this by making the iPad Pro and making iOS more functional. And they're convincing major developers to make the switch.

Apple is not going to kill x86 Macs tomorrow, it's going to be death by a thousand papercuts from iOS.
 
Ask Sickbeast, I quoted him..

NetBurst? If you can't figure out how throwing more power at it doesn't give you competitive performance, then... well.....
At the very least they could redesign the A11 so that instead of only two high-performance cores and four low-performance cores, they give it six high-performance cores. Plus they could add more cores if they have the die space available, which I suspect they would. They are going to have a ton of thermal headroom. And most certainly they will be able to clock the chips higher with a higher TDP. I'm not saying it's a magic bullet, because it's not. But most certainly they can make these chips *much* faster, and they won't have to worry at all about heat or throttling if they put them into a desktop or a laptop.
 
Or people could just use LibreOffice today, which is already ported to ARM.

They don't have to. ARM dominates the world of computing now; it's in every phone, every tablet, and a handful of laptops. All they have to do is extend the functions of Office made for iPhones, and this is surely going to be Apple's plan: don't make everyone redevelop x86 software for ARM, just spur them to continue development for iOS. Then they can release some Mac laptop that runs on ARM and talk about its magical ability to run iOS apps. Over a couple-year period they upgrade the whole Mac line.
 
This has been posited for years now. It was originally dreamed up in 2011 by the master of bullshit, Charlie Demerjian.
ARM (now ARM64) has changed radically since 2011 to what it is now in 2018.
Almost as much as, if not more than, x86 (now x86-64) changed from 2001 to 2008.

No, in 2011, Cortex-A7/A9/A15 CPUs were not going to be replacing modern desktop or laptop CPUs, even from 2011.
Those then-current CPUs did not yet have the IPC needed or very good thermals (that is where the "NetBurst" point that Paul_Johnson made would have actually made sense), and were still 32-bit.

I'm not saying that this year, ARM64 is going to replace x86-64 - obviously it is not ready for that just yet, but we are starting to see the signs that it is possible in a few scenarios outside of mobile and embedded controllers.
This isn't going to be like "The year of Linux", though, as unlike that, this is definitely going to happen, at least with Apple, within the next 2-5 years, and the rest of the industry most likely around 2025-2030.

Everyone thinks that because Apple won't be on x86-64 anymore, no one is going to want to develop software for them or OS X anymore - that is total bullshit.
IdiotInCharge even made an excellent point regarding that very concept:

Something to consider: we're seeing containerization and cross-platform frameworks grow to a point that the OS and hardware is becoming irrelevant. Code like Java, Go, and Python- not to mention the web languages- can be compiled and run anywhere. More important is the consideration of platform- desktop, laptop with touch, tablet, phone, watch... fridge, car, IoT?

As the frameworks evolve, expect that to be tackled too.
 
Or people could just use LibreOffice today, which is already ported to ARM.
For personal or home use, LibreOffice is great, and you are right, it has been ported to ARM/ARM64 and works great on those platforms.
For enterprise, it flat-out absolutely is not; Microsoft Office isn't just needed in the enterprise, it is required.

Again, though, Microsoft already has a legitimate mobile version of Microsoft Office for iOS (ARM64) - is there any reason they couldn't do this for OS X on ARM64?
It isn't like once Apple moves to ARM64, "that's it, game over man!" - that concept is just ridiculous, and back in the 1990s or 2000s it most definitely would have been an uphill battle for them to get that compatibility.

The reason this isn't such a problem anymore in the 2010s (and beyond) is that, unlike now, in the 1990s and 2000s ARM-based devices were hardly mainstream and there wasn't the vast industry support for them like there is today.
Smartphones and tablets were a massive paradigm shift, and the tools and software written for multi-platform and multi-processor architectures are far more vast now than they ever were back then.

Hell, I can compile modern programs from source to run on my Sharp X68000 with the m68k ISA from 1989... so how is it that Apple and all of these other major companies couldn't do the same thing for the very modern and very supported ARM64 ISA with their current SDKs???
Bottom line: the opportunity cost itself is what is going to make or break this decision for Apple, and the rest of the industry and market, and every year this becomes more and more of a viable and real-world option.
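As a small, hedged illustration of why the ISA itself is no longer the hurdle it was in the m68k or PowerPC days: a single Swift (or C) source file can keep its few architecture- and OS-specific differences behind conditional-compilation blocks, so the same code builds unchanged for x86-64 or ARM64. The describeBuildTarget function below is made up for this post, but the #if arch(...) / #if os(...) conditions are standard Swift.

```swift
// One source file, any build target: the compiler picks the branch for the ISA/OS
// it is building for, so nothing here has to be rewritten to "port" it.
func describeBuildTarget() -> String {
    #if arch(arm64)
    let isa = "ARM64"
    #elseif arch(x86_64)
    let isa = "x86-64"
    #else
    let isa = "some other ISA"
    #endif

    #if os(macOS)
    return "macOS on \(isa)"
    #elseif os(iOS)
    return "iOS on \(isa)"
    #else
    return "another OS on \(isa)"
    #endif
}

print(describeBuildTarget())
```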
 
This is a bad idea; there is a reason that, outside of creative-type software, there wasn't a ton of programs available on Macs pre-Intel. Almost ALL software is written for an x86 environment since that computing format DOMINATES everything by a HUGE margin. If Apple moves to ARM, you are going to be back to the dark ages of Mac computers and they are going to lose what small market share they have. Outside of products like Photoshop and content-editing software, no one is going to develop for Mac anymore.
I think we are at the point where Apple worship will be enough. Apple will internally develop what is needed, and the rest could be emulated; after all, the Apple legions couldn't care less. iPhone, iPad, MacBook, that is all that is needed. It is an elitist system and proud of it.
A permanent market share as long as times are good.
 
You are correct. The specific architecture doesn't tell you how fast or efficient it is either. But the context we are discussing is Apple moving the MacBook Pro over to an ARM-based architecture, so take the architectures we are discussing and combine them with the 2020 MacBook Pro context. That is what we (or I, at least) are discussing, which is why I keep bringing up the *use* of the device in question. Can you re-invent the entire software and compiler base of desktop/laptop computers from the last 30+ years and design them in a way to take advantage of ARM processors? Yes. Could you stick with current designs and implement emulation (through SoC changes) that runs the code within 25% of the speed of native code? Yes. But *BOTH* those things have disadvantages any competitor that sticks with current design philosophy won't have to deal with, so anyone who uses them is compromising. If those compromises don't affect your usability (either through accepting lower performance or by changing the way or what you use the device for), that's your prerogative. Having said this, those that expect their computing device to deliver a performance-first experience will not find those compromises acceptable. Apple's MacBooks were a joke for performance users until they switched to Intel CPUs. Chromebooks are equally a joke today (for performance users). This is not going to change by 2020.
Those who expect their computing device to deliver performance first weren't buying Macs. Their user base is well known for accepting compromises in performance and function if Apple tells them to do it. The main thing that holds any company back is software compatibility; the thing is that Apple doesn't have the greatest software library on Macs, so switching to ARM is not even a problem, because it gives them the massive software library of iOS. Changing the way you use your device is also not a problem for Mac users; when Apple says something changes, they deal with it.
 
Those who expect their computing device to deliver performance first weren't buying Macs. Their user base is well known for accepting compromises in performance and function if Apple tells them to do it. The main thing that holds any company back is software compatibility; the thing is that Apple doesn't have the greatest software library on Macs, so switching to ARM is not even a problem, because it gives them the massive software library of iOS. Changing the way you use your device is also not a problem for Mac users; when Apple says something changes, they deal with it.

Where do you get that from? Pretty much all high-end software runs on both OS X and Windows. OS X also has Final Cut Pro X and Logic Pro X, which are OS X-only. As for changing the way you use the device, what are you referring to exactly, lol. I am using OS X the same way I have been for 10 years; if anything it's gotten much better with time.
 
Those who expect their computing device to deliver performance first weren't buying Macs. Their user base is well known for accepting compromises in performance and function if Apple tells them to do it. The main thing that holds any company back is software compatibility; the thing is that Apple doesn't have the greatest software library on Macs, so switching to ARM is not even a problem, because it gives them the massive software library of iOS. Changing the way you use your device is also not a problem for Mac users; when Apple says something changes, they deal with it.

I would agree with all this as well... But they already have the iPad Pro, so again, it would make more sense to get rid of one of them entirely. Either make the iPad Pro a MacBook Pro or get rid of the MacBook Pro entirely.
 
Apple has always been in the best position to transition to ARM.

MS toying with ARM Windows is going to fail hard a second time.

Chromebooks can get by on ARM, but the hardware is still way too dispersed over multiple manufacturers to really make anything shine.

Apple's advantage is having complete control of the hardware and software stacks. IMO this is a move Apple should have pushed even harder toward a couple of years ago already. The advantage they have in controlling everything is being able to bake specific Apple-only bits into their hardware, then expose them through their own APIs. They have been doing that with their mobile stuff... and it enables them to do interesting, unique things that they can market as they do.

I know it sounds crazy... but when Apple does make this jump, it's not going to be about how much we lose for some battery life. It will be about how many software packages are making use of the ASIC units Apple decides to bake in, because they will likely destroy consumer x86 in performance. The A12 has their neural network bits that they claim up to 5 trillion operations per second on. I would expect chips they intend to aim at higher-end MacBooks could easily bake in GPU-like hardware (tensor cores, why not) to handle math for software like Photoshop (which is why I believe they should have been all over this two years back already, before companies like Adobe started really offloading work to GPUs). Still, if they can bake in ASICs that can do that same work at a fraction of the power and push them into MacBooks... they may be much more attractive than many of us will want to admit. lol
 
Where do you get that from? Pretty much all high-end software runs on both OS X and Windows. OS X also has Final Cut Pro X and Logic Pro X, which are OS X-only. As for changing the way you use the device, what are you referring to exactly, lol. I am using OS X the same way I have been for 10 years; if anything it's gotten much better with time.


That's just the high-end software from large companies, and there is some specialty software too, but no matter how you cut it, the library of applications available for OS X is much smaller than, say, Windows. Just go look at the Steam store to see. Here is the more important point: all the big companies can afford to port to ARM and many already have iOS options, so the point still stands that it's really not that big of a deal for Apple to switch. Now, Windows is a different ball game; for MS, getting people onto ARM is a massive job since they have the largest library of most-used applications running on Windows and almost no presence on ARM.
 
I would agree with all this as well... But they already have the iPad Pro, so again, it would make more sense to get rid of one of them entirely. Either make the iPad Pro a MacBook Pro or get rid of the MacBook Pro entirely.


Semantics; does it matter if they call it upgrading the iPad Pro to an attached keyboard, or changing the MacBook Pro to ARM, or calling it a MacBook sliver or whatever they want? And no matter what they do, they are going to keep selling the older models for a bit.
 
Semantics; does it matter if they call it upgrading the iPad Pro to an attached keyboard, or changing the MacBook Pro to ARM, or calling it a MacBook sliver or whatever they want? And no matter what they do, they are going to keep selling the older models for a bit.
Well, define "for a bit".
I remember once Apple moved from IBM/Motorola PowerPC to Intel x86, nearly all PowerPC-based products were dropped very quickly and software support only continued for about two years with universal (dual) software binaries.

So if they do make the move from x86-64 to ARM64, they will most likely do something very similar, assuming they don't have built-in ASICs like ChadD pointed out, and will end x86-64 software support not long after.
If they do have built-in ASICs for x86-64, then they will probably have continued software support for some time, but just no further development with the older x86-64 SDKs.
 
Software support is my main concern, not performance. My desktop is a Hackintosh and I own a MacBook, and being able to run older software is very important for me. However, I was very impressed with Rosetta performance back in Snow Leopard, so I am honestly not too worried, seeing how Microsoft + Qualcomm are doing a decent job at x86 emulation. But I do wonder how this transition will happen. I do like the idea of a Mac with an AXX chip + an x86 chip: being able to run the ARM chip the majority of the time to save battery, and switching to the Intel chip when using legacy software. This sounds messy, but without an Intel chip at all, I don't think I would do it. macOS is one of my favorite productivity OSes, and I no longer use iOS, with no money invested in their App Stores.

I could see the MacBook Pro being shipped with a hybrid solution, while the 'lower-end' models become ARM-only. But jeez, that will confuse some of their customers.... If I had thousands of $$$ invested into Apple's iTunes and App Store for iOS, moving to an ARM Mac could be relatively painless, I suppose. Maybe they will even include a touchscreen as an optional extra.... :p
 
You can laugh about an A12 doing 4K, but Apple has been running right around 1440p for a long time now... a bit more or less depending on the form factor.

I don't think it's quite Crysis-level graphics yet, but it's also not shabby or running on a 1000W PSU SLI rig either.

Also, folks may be making the assumption that an Apple ARM desktop would continue to use the Imagination-derived graphics. Most games are not CPU-bound, particularly at 4K. Using an integrated GPU wouldn't necessarily be the case; there is no reason they couldn't go to a much more powerful GPU, and they have had pretty strong relationships with both AMD and Nvidia in the past.
 
Software support is my main concern, not performance. My desktop is a Hackintosh and I own a MacBook, and being able to run older software is very important for me. However, I was very impressed with Rosetta performance back in Snow Leopard, so I am honestly not too worried, seeing how Microsoft + Qualcomm are doing a decent job at x86 emulation. But I do wonder how this transition will happen. I do like the idea of a Mac with an AXX chip + an x86 chip: being able to run the ARM chip the majority of the time to save battery, and switching to the Intel chip when using legacy software. This sounds messy, but without an Intel chip at all, I don't think I would do it. macOS is one of my favorite productivity OSes, and I no longer use iOS, with no money invested in their App Stores.

I could see the MacBook Pro being shipped with a hybrid solution, while the 'lower-end' models become ARM-only. But jeez, that will confuse some of their customers.... If I had thousands of $$$ invested into Apple's iTunes and App Store for iOS, moving to an ARM Mac could be relatively painless, I suppose. Maybe they will even include a touchscreen as an optional extra.... :p

This is Apple we are talking about... if this happens, they will line up the handful of software packages that have kept Apple fans loyal for years.... and they will introduce software upgrades from those vendors, exposing new API extensions (or completely new ones) that will leverage what makes the Apple chips special. With the A12 they baked in "neural net" cores. They could easily add Metal extensions... or make a new API intended to speed up things like Photoshop processing, Logic Pro audio processing, Final Cut performance, etc., etc.

To be honest it would be pretty easy to market. That is Apple's in. The biggest OS X users, IMO, have always been the creative types. It's why Apple went and bought software companies like Emagic, to ensure they always have top-tier software in specific creative segments.

IMO it's one reason why Apple refuses to accept Vulkan. If they do release an A14 chip or something aimed at MacBooks... it is going to have not just "neural net" stuff, it's going to have some form of tensor/shader cores which they will expose through Metal.

The basic software will run just fine on ARM... so much of the everyday basic software people run is designed to be cross-platform already anyway. Most projects these days can be compiled for x86/ARM and 2-3 different UI APIs without having to keep multiple code paths. What will sell the systems will be those core programs: Photoshop / Final Cut / Logic Pro X / Lightroom / Affinity Photo / Premiere / Audition / Ableton / Cubase and the like. If Apple gets all those major software developers (some of which they actually own) to hook as much as they can into Apple ARM-exclusive ASIC processing cores exposed by the Metal (or some new) API, it might in fact be an extremely compelling product. (I can't imagine a live electronic musician using Ableton would care if their machine was x86 or ARM... if Ableton flipped on the "newest greatest marketing term" cores in Apple's ARM chip and that allowed them to smoothly run 1.5-2.0x the audio effects live, they would hardly care if Safari ran 10% slower. :) )

A MacBook Pro that is capable of speeding up Photoshop transforms 50-60% without the use of a power-hungry GPU, instead doing the same type of math on the single Apple ARM chip... Considering Apple's clout with the fabs, I don't think it's crazy to expect they could push performance of that core creative, sells-Macs type of software well beyond anything x86 is capable of without leaning on a secondary GPU.
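To make the "expose the special silicon through an API, fall back everywhere else" idea concrete, here is a rough sketch in Swift. This is not Apple's actual plan or API surface: applyFilterGPU and applyFilterCPU are hypothetical stand-ins for a real compute pipeline and real filter math; the only genuine framework calls are the canImport check and the Metal device lookup.

```swift
import Foundation
#if canImport(Metal)
import Metal
#endif

// Portable CPU fallback: a trivial per-sample gain, standing in for real filter math.
func applyFilterCPU(_ samples: [Float]) -> [Float] {
    samples.map { $0 * 0.5 }
}

#if canImport(Metal)
// Hypothetical accelerated path: a real implementation would build an
// MTLComputePipelineState and encode the work; here we only show the dispatch choice.
func applyFilterGPU(_ samples: [Float], on device: MTLDevice) -> [Float] {
    print("Would encode compute work on \(device.name)")
    return applyFilterCPU(samples)  // placeholder result
}
#endif

// App-level dispatcher: use whatever on-die accelerator the platform exposes,
// otherwise run the same math on the CPU. The calling code never changes.
func applyFilter(_ samples: [Float]) -> [Float] {
    #if canImport(Metal)
    if let device = MTLCreateSystemDefaultDevice() {
        return applyFilterGPU(samples, on: device)
    }
    #endif
    return applyFilterCPU(samples)
}

print(applyFilter([0.2, 0.4, 0.8]))
```

The design point is only that the dispatch decision lives in one place, which is what would let the big creative apps light up whatever ASIC blocks ship in a given chip without forking their codebases.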
 
Semantics; does it matter if they call it upgrading the iPad Pro to an attached keyboard, or changing the MacBook Pro to ARM, or calling it a MacBook sliver or whatever they want? And no matter what they do, they are going to keep selling the older models for a bit.

Well, yeah, it does matter... As we have already discussed ad nauseam, there are advantages and disadvantages to both architectures. So we have come full circle, where the conclusion is they will exit the high-end PC performance segment entirely and will turn the MacBook Pro into a Chromebook-type device.
 
You can laugh about an A12 doing 4K, but Apple has been running right around 1440p for a long time now... a bit more or less depending on the form factor.

Resolution isn't really relevant until the detail settings are upped. Intel integrated GPUs can run basic games at 4K.

Also, folks may be making the assumption that an Apple ARM desktop would continue to use the Imagination-derived graphics.

They ditched that, are you assuming that most don't know?

Most games are not CPU-bound, particularly at 4K.

Most games are CPU-dependent, particularly single-core/thread-dependent. They're less dependent at 4K with today's GPUs, but that's a 'for now' thing, not a hard fact. Also note that maximum frametimes, the minimum FPS that you feel, are still CPU-bound. You can get choppy performance at 120 FPS, and you can certainly still get it at 60 FPS at 4K. Trying to get i7-grade games running on ARM is certainly going to be a challenge for Apple!

Using an integrated GPU wouldn't necessarily be the case; there is no reason they couldn't go to a much more powerful GPU, and they have had pretty strong relationships with both AMD and Nvidia in the past.

When they scale beyond the ultrabook-grade TDP level, integrating other GPUs shouldn't be too difficult, especially if they do an interposer setup like AMD did with Intel. Obviously larger than that they can go full discrete.
 