Apple ARM Based MacBooks and iMacs to come in 2021

An unknown framerate and unknown settings are far more relevant than anything Futuremark is going to show me.

Not in this reality. You are leaping to massive conclusions based on nothing.

I am showing an actual benchmark of actual GPU performance. This gives a rather decent sanity check on your outlandish flight of fancy (RTX 2060-2070 performance).

The iPad Pro GPU is more likely around the GTX 1050 power level, which again is a sane result given the amount of silicon involved. Actually, even that is an impressive result.

Certainly Apple can devote more silicon, and greater cooling to achieve better results in the future, but today this is not something comparable to an RTX 2060.
 
The A12Z dev kit is capable of rendering 3x 4K streams in ProRes in real time. As demoed here:

That means adding color grades and giving playback in real time (as in, not rendering the timeline) while using the neural engine to do cropping and tracking.
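
For a rough sense of how you'd sanity-check that kind of claim yourself, here's a minimal Swift sketch (my own, not Apple's demo code) that just times how fast AVAssetReader can pull ProRes frames off disk; the clip names are placeholder paths for whatever 4K ProRes material you have:

Code:
import AVFoundation

// Minimal sketch: time how fast ProRes frames can be decoded from local clips.
// The file names below are placeholders, not anything from Apple's demo.
let clips = ["clip1.mov", "clip2.mov", "clip3.mov"]

for path in clips {
    let asset = AVAsset(url: URL(fileURLWithPath: path))
    guard let track = asset.tracks(withMediaType: .video).first,
          let reader = try? AVAssetReader(asset: asset) else { continue }

    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_422YpCbCr8
    ])
    reader.add(output)
    guard reader.startReading() else { continue }

    var frames = 0
    let start = Date()
    while output.copyNextSampleBuffer() != nil { frames += 1 }
    let seconds = Date().timeIntervalSince(start)
    print("\(path): decoded \(frames) frames in \(String(format: "%.2f", seconds)) s")
}

Decode throughput alone isn't the whole demo (the grades and tracking run on the GPU and Neural Engine), but it's an easy first number to look at.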

So if you want relevant, there is relevant. Show me your graphics hardware that is capable of doing the same thing. You're telling me this is flights of fancy. No, this is real work getting done.

EDIT: It's all going to depend on the type of workload getting done. And I wouldn't say it's as simple as saying it's directly equivalent to other graphics cards. In that aspect of previously comparing it to specific cards I'm probably also wrong. But I'll stand by my statement that it's a powerful chip, that it's capable of doing more than a 1050 in certain types of use cases, and that we're not even seeing final hardware. We'll know more as dev kits get out in the wild. Apple apparently is shipping them to devs (as they stated at WWDC) "today". So it won't be long before people are running all sorts of benchmarks on them and we have a better understanding of the hardware itself and what sort of clock speeds it's running.
 
It is going to be a powerful chip, but I think the software side is what is going to set it apart. Apple has made it a priority to include fast GPUs in all of their products, much more than anyone else. I firmly believe they are going to be pushing the envelope of what can be done on a GPU workload, and because they have a direct tie to the fabrication of the processor, control of the software stack, and only a few different SKUs in their entire lineup, we really need to think of Apple as more of a console than a computer. I definitely agree that specific workloads outside of gaming will see much more refinement, so it's not going to be a simple comparison. Sure, Apple will be able to game, but I think they will be more focused on GPGPU than gaming, where the focus tends to be the other way around on the typical desktop.

I don't even think I could speculate on what they want to do power-wise yet, but we also don't know what type of system they are going to release first. The Mac mini is an obvious choice, but whether we see a MacBook Air or a MacBook Pro will be the more interesting one. An MBA might actually perform slower than an iPad Pro simply because of its design, where a MacBook Pro would no doubt have to be much faster. But there are also things like an iMac that could go in several directions, mainly revolving around the price point. No matter what, it's safe to assume performance will have to be above whatever we have now, otherwise they will have a hard time selling computers. The question will be what kind of balance they want: go all out and increase performance while pushing power, keep the same power level but increase performance, or dial back power by making the chips only slightly faster. They could end up doing all three depending upon the product segment and what the typical workload is. No matter what, we definitely will need to break out the popcorn, as it should be interesting to see what they can do with full control.
 
The way it was worded very carefully in the keynote is a “transition” to ARM. This implies that they are moving completely to it over 2 years. Time will tell how it plays out but there was no mention of maintaining the Intel product lines.
 
Agreed, it will be significantly faster from the form factor alone and thermal constraints being removed.

The form factor really isn't a lot different between those two (12 x 8 vs 11 x 8.5). Most of the internal space is just dedicated to a battery, so the components aren't going to see massive differences in size. The Air is thinner on one side but a decent amount thicker overall. The main difference is I didn't think the MBA had fans in it, but apparently it does. So yes, despite it having really low clock speed cores, and only a "10W TDP" processor, the MBA should perform better simply because of the fans. There are no TDP numbers for the iPad Pro, but there is speculation it's in the 8-10W range. But that's an actual 10W, not what Intel calls 10W. Despite the low clocks, Intel's CPU is probably more like 30W, and they are trying to use fans to make up the difference.

So you are correct, I was thinking the Air was thinner than the Pro, but overall it's not, and because they crammed in some fans on the fat end they can dissipate more heat. Price-wise they both compete in the same segment, with the iPad Pro actually costing more. (Maybe not once they release an i7 MBA.)
 
The way it was worded very carefully in the keynote is a “transition” to ARM. This implies that they are moving completely to it over 2 years. Time will tell how it plays out but there was no mention of maintaining the Intel product lines.
There was, in the sense that they stated there are both Intel and ARM Macs coming. You can watch near the end of the video I linked.
However, I do agree with you that it's clear their plan is to move everything to ARM over the course of two years, even though they will continue to release Intel parts here and there for the time being.

It is going to be a powerful chip, but I think the software side is what is going to set it apart. Apple has made it a priority to include fast GPUs in all of their products, much more than anyone else. I firmly believe they are going to be pushing the envelope of what can be done on a GPU workload, and because they have a direct tie to the fabrication of the processor, control of the software stack, and only a few different SKUs in their entire lineup, we really need to think of Apple as more of a console than a computer. I definitely agree that specific workloads outside of gaming will see much more refinement, so it's not going to be a simple comparison. Sure, Apple will be able to game, but I think they will be more focused on GPGPU than gaming, where the focus tends to be the other way around on the typical desktop.

I don't even think I could speculate on what they want to do power-wise yet, but we also don't know what type of system they are going to release first. The Mac mini is an obvious choice, but whether we see a MacBook Air or a MacBook Pro will be the more interesting one. An MBA might actually perform slower than an iPad Pro simply because of its design, where a MacBook Pro would no doubt have to be much faster. But there are also things like an iMac that could go in several directions, mainly revolving around the price point. No matter what, it's safe to assume performance will have to be above whatever we have now, otherwise they will have a hard time selling computers. The question will be what kind of balance they want: go all out and increase performance while pushing power, keep the same power level but increase performance, or dial back power by making the chips only slightly faster. They could end up doing all three depending upon the product segment and what the typical workload is. No matter what, we definitely will need to break out the popcorn, as it should be interesting to see what they can do with full control.
Everything we've seen so far is just a placeholder. I'm willing to bet that release hardware will not be A12Z. They hint at that when they state that they are making custom ARM chips for the Mac, but the real power is that there will be "a common architecture" across all of their product lines.
You can hear Johnny talk about those things here:

So, the A12Z then won't be in a final Mac. But if it was it would be faster in an Air just by virtue of having any amount of active cooling, whereas in the iPad Pro it has none.
I'd recommend watching the ~11-minute Macworld recap I just linked, because it touches on all of the key points about the move to ARM without having to watch entire keynotes.

Anyway, as has been surmised by others, whatever is released later in the year will have its own custom chip, which isn't getting talked about now so that people can be wowed by its much better performance in a desktop/laptop system as compared to an iOS device. Certainly the A14 is being released this year; obviously they want their macOS hardware to be much faster than even that chip will be, as it will have access to more power and active cooling. And certainly faster than a two-year-old iPad Pro chip.

The dev units are just that: for devs to get working and testing on. Previously, when Apple did the same thing with the move from PPC to x86, said dev units had to be returned and were not considered final hardware. Most believe something similar will happen here. This is just a stopgap to get people programming now, while hiding a future, more exciting product announcement. All things considered, it's still quite fast and powerful, but like I say, I would expect far more rather than far less.
 
For sure, the A12Z is just a taste of the platform and reflects what can be delivered as is with no customisation. I think Apple has something big coming that no one is expecting and doesn't want to reveal it to competitors. I personally think it will be at least double the performance of an iPad Pro in a performance variant.
 
I'm curious how good this will be for music production, and also how well the emulation will work with x86-coded VSTs, since many will probably never be rewritten for ARM.
 
The A12Z dev kit is capable of rendering 3x 4K streams in ProRes in real time. As demoed here:

That means adding color grades and giving playback in real time (as in, not rendering the timeline) while using the neural engine to do cropping and tracking.

So if you want relevant, there is relevant. Show me your graphics hardware that is capable of doing the same thing. You're telling me this is flights of fancy. No, this is real work getting done.

EDIT: It's all going to depend on the type of workload getting done. And I wouldn't say it's as simple as saying it's directly equivalent to other graphics cards. In that aspect of previously comparing it to specific cards I'm probably also wrong. But I'll stand by my statement that it's a powerful chip, that it's capable of doing more than a 1050 in certain types of use cases, and that we're not even seeing final hardware. We'll know more as dev kits get out in the wild. Apple apparently is shipping them to devs (as they stated at WWDC) "today". So it won't be long before people are running all sorts of benchmarks on them and we have a better understanding of the hardware itself and what sort of clock speeds it's running.


Nice goalpost shifting.

We were discussing 3D throughput, as it's applied to gaming. Video encoding/processing while useful is an entirely different function.
 
Time will tell how it plays out but there was no mention of maintaining the Intel product lines.

Yes there was. Full software support and updated processors are incoming. This is factually incorrect in every way.
 
Apple ARM will be 60% cheaper as they claim. This sounds like pure profit for Apple. Probably worth dumping Intel even though performance may not be as good.

By the time Apple SoCs are out, Intel will have 7nm on the table, and we'll see how that turns out.
 
Apple ARM will be 60% cheaper as they claim. This sounds like pure profit for Apple. Probably worth dumping Intel even though performance may not be as good.

Did you miss the part of the keynote where Apple said that the ARM Macs will be faster than the x86 ones?
 
Did you miss the part of the keynote where Apple said that the ARM Macs will be faster than the x86 ones?

I missed that part. Unfortunately I doubt their chips will be faster, but we shall see.
 
Apple ARM will be 60% cheaper as they claim. This sounds like pure profit for Apple. Probably worth dumping Intel even though performance may not be as good.

By the time Apple SoCs are out, Intel will have 7nm on the table, and we'll see how that turns out.

I don't think anyone knows what Apple pays Intel, nor the true cost of building their own SoC.

We also don't know what the final performance of the Apple Mac SoC will be, but even the current iPad SoC is very competitive with actively cooled laptops, and the higher end Mac ones will likely have active cooling for higher clocks and more cores, so I don't see any issue with mainstream Macs being competitive with desktop Intel parts.
 
I missed that part. Unfortunately I doubt their chips will be faster, but we shall see.
One nice thing about Apple controlling the entire ecosystem end-to-end is that they can optimize the heck out of the OS for that chip. The chip doesn't have to be faster on paper. The OS just has to exploit everything the chip has. Apple's more than capable of doing that.
 
Nice goalpost shifting.

We were discussing 3D throughput, as it's applied to gaming. Video encoding/processing while useful is an entirely different function.
Whatever. We're talking about what this SoC is capable of doing. And the answer is: we don't know yet. And the other answer is: it's not even what will appear in commercial hardware, as Macs are going to have their own specialized SoC (and not a 2-year-old tablet chip).
Video rendering is still more than relevant. Maybe the A12Z is not as capable as a 2060 in some areas but definitely more powerful than a 2060 in others. Which is essentially what my "edit" and addendum were talking about, if you read them. If you want the meaningless victory, as you are often wont to do, then take it in lieu of another thread derailment like basically all of pages 2 and 3.
 
you guys pre-orderin an ARM-based Mac?

I already have an iPad Pro so I'm not planning on jumping to a desk/laptop with ARM for home use. For work, since we're discussing 2021 budgets right now, I've made my recommendation that we get the last available Intel-based MacBook Pro before phasing out using Macs entirely.
 
Whatever. We're talking about what this SoC is capable of doing. And the answer is: we don't know yet. And the other answer is: it's not even what will appear in commercial hardware, as Macs are going to have their own specialized SoC (and not a 2-year-old tablet chip).
Video rendering is still more than relevant. Maybe the A12Z is not as capable as a 2060 in some areas but definitely more powerful than a 2060 in others. Which is essentially what my "edit" and addendum were talking about, if you read them. If you want the meaningless victory, as you are often wont to do, then take it in lieu of another thread derailment like basically all of pages 2 and 3.

Hand wave away the actual argument with "whatever" and change direction. Got it.

The reality was you looked at a few seconds of a video game, and made a ridiculous claim that the SoC was as powerful as an RTX 2070.

Get called on that, and "hey look at this video editing."

Goalpost shifting and arguing in bad faith is what you want to do, I guess.
 
Okay, so you do want to derail the thread for another 2 pages.
I stand by my statements. Even if you want to manipulate them into saying things they don't say.
 
Video editing really isn't a big deal if you build hardware for it. Almost no one actually does, or cares to, because almost no one actually needs something faster than can be had now.

And that's mostly because Apple is doing it and people get warm and fuzzy over Apple.

But the reality is that they're throwing some fixed-function hardware at the problem and showing that off like it's a revolution, when it's really just math, from physics to economics.

The other reality is that Apple is actually targeting end-user use cases with hardware, and while I'm usually dismissive of much of their efforts (they build appliances), their custom hardware is quite impressive, not just in terms of performance, but in terms of how well it is tuned to meet users' actual needs.

One example is this: if you strip a desktop OS down, and you make it do just what most people do with a computer, you can run the whole damn thing off of a potato. And I don't mean a small computer -- I mean a small computer running from the voltage generated by an actual godsdamned potato.

Video editing? So long as the 'sandbox' of what types of processing users can do is well defined and implemented in hardware, not a big deal. It's more of a throughput question (storage and RAM) than processing power if you've had as much time as Apple has to bake the logic in. Same for the rest of the 'heavier' tasks users might need to do.

I'm curious how good this will be for music production, and also how well the emulation will work with x86-coded VSTs, since many will probably never be rewritten for ARM.
Given that this is a task that Apple has already been optimizing for, it should be a breeze. As for the older stuff... assume Apple will have something new to sell you alongside.
 
Video editing really isn't a big deal if you build hardware for it. Almost no one actually does, or cares to, because almost no one actually needs something faster than can be had now.

And that's mostly because Apple is doing it and people get warm and fuzzy over Apple.
I just generally disagree with the notion that people are okay with what they have now, or that what they have now is somehow good enough.

If I buy any laptop off the shelf and try to edit multiple 4K clips, even if I transcode them into a fatter, friendlier codec that is easy to decode, I can easily slow them to a crawl with just a color grade and a look stack.

If you look at the requirements for DaVinci Resolve (written by Blackmagic's technical team), it highly recommends, basically as a synopsis: the most cores, the most powerful GPUs, the most RAM, and the fastest/largest SSDs you can afford.

So if you mean that people are satisfied with just good-enough being $10k worth of Mac Pro, or $7500 worth of iMac Pro, or a $10k Wintel HEDT machine, then sure, I guess people are satisfied. Or consider it to be good enough. But not just on any general system.

And that's just dealing with compressed video. Not 8K RAW video exceeding 1,600 Mb/s with multiple streams, grades, etc. in real time.
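
To put rough numbers on that, here's a back-of-the-envelope sketch (my own arithmetic, assuming roughly 20 bits per pixel for 10-bit 4:2:2, not a vendor spec) of what uncompressed UHD streams demand before any codec gets involved:

Code:
// Rough uncompressed bandwidth: width * height * bits-per-pixel * fps.
// 20 bits/pixel approximates 10-bit 4:2:2; this is an estimate, not a spec.
func uncompressedMbps(width: Int, height: Int, fps: Double, bitsPerPixel: Double = 20) -> Double {
    Double(width * height) * bitsPerPixel * fps / 1_000_000
}

let oneStream = uncompressedMbps(width: 3840, height: 2160, fps: 24)   // ~3,981 Mb/s
let threeStreams = 3 * oneStream                                       // ~11,944 Mb/s
print("One UHD 24p stream: \(Int(oneStream)) Mb/s, three streams: \(Int(threeStreams)) Mb/s")

Even heavily compressed intermediates are a fraction of that, but stack a few streams plus grades and the storage, memory, and decode budgets add up quickly.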

In terms of actual specialized hardware, I only know of two items: RED's Rocket-X card and Apple's Afterburner card. And yeah, Apple's FPGA in that case is pretty incredible. But you can also see that RED's card is decidedly not. From a "general computing" standpoint, there is no way you can argue to me that doing video requires basically nothing, because all you will see is how it more or less crushes machines.

I have 8 cores, 32GB of RAM, and a Radeon VII. I still can slow editing to a crawl. And it's not even hard.

But the reality is that they're throwing some fixed-function hardware at the problem and showing that off like it's a revolution, when it's really just math, from physics to economics.

The other reality is that Apple is actually targeting end-user use cases with hardware, and while I'm usually dismissive of much of their efforts (they build appliances), their custom hardware is quite impressive, not just in terms of performance, but in terms of how well it is tuned to meet users' actual needs.

One example is this: if you strip a desktop OS down, and you make it do just what most people do with a computer, you can run the whole damn thing off of a potato. And I don't mean a small computer -- I mean a small computer running from the voltage generated by an actual godsdamned potato.
I'd like to see that. Even a potato capable of just doing one thing: browsing the internet. I'd say you're using hyperbole to say the least.

Video editing? So long as the 'sandbox' of what types of processing users can do is well defined and implemented in hardware, not a big deal. It's more of a throughput question (storage and RAM) than processing power if you've had as much time as Apple has to bake the logic in. Same for the rest of the 'heavier' tasks users might need to do.
Perhaps, but then you could argue that that is what happens also from a graphics and gaming perspective. Which was the point of what I was saying.

Given that this is a task that Apple has already been optimizing for, it should be a breeze. As for the older stuff... assume Apple will have something new to sell you alongside.
All ProApps have already been ported. But his question about third-party VSTs is incredibly relevant. Apple isn't Native Instruments. They will never offer a huge library of virtual instruments. And I would say the fear is at least somewhat founded that more esoteric ones may never get ported over.
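
For anyone wondering whether a given plugin will even need Rosetta, a quick hedged sketch: you can ask lipo which architectures are baked into an installed Audio Unit's binary (the plugin path below is a made-up example, not a real product):

Code:
import Foundation

// Sketch: list the CPU architectures inside an installed Audio Unit binary.
// The path is a placeholder; no "arm64" in the output means Rosetta 2 (or a port) is required.
let pluginBinary = "/Library/Audio/Plug-Ins/Components/SomePlugin.component/Contents/MacOS/SomePlugin"

let lipo = Process()
lipo.executableURL = URL(fileURLWithPath: "/usr/bin/lipo")
lipo.arguments = ["-archs", pluginBinary]

let pipe = Pipe()
lipo.standardOutput = pipe

do {
    try lipo.run()
    lipo.waitUntilExit()
    let archs = String(data: pipe.fileHandleForReading.readDataToEndOfFile(), encoding: .utf8) ?? ""
    print("Architectures: \(archs)")   // e.g. "x86_64" or "x86_64 arm64"
} catch {
    print("Could not run lipo: \(error)")
}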
 
I just generally disagree with the notion that people are okay with what they have now, or that what they have now is somehow good enough.
Most people are using phones, so...

If I buy any laptop off the shelf and try to edit multiple 4K clips, even if I transcode them into a fatter, friendlier codec that is easy to decode, I can easily slow them to a crawl with just a color grade and a look stack.
First you'd have to have the hardware worth recording 4k video worth editing. Then, you'd have to actually be editing, which is another level altogether. Relative to the general consumer market, that number is basically a rounding error, and it's already well-served by Apple.
If you look at the requirements for DaVinci Resolve (written by Blackmagic's technical team), it highly recommends, basically as a synopsis: the most cores, the most powerful GPUs, the most RAM, and the fastest/largest SSDs you can afford.
Sure!

Mostly because you're using a collection of components that are not actually designed for the purpose and you're chasing bottlenecks. Apple is designing hardware for that purpose. That doesn't mean that no one else can, just that Apple is putting their engineering effort in that sector.

So if you mean that people are satisfied with just good-enough being $10k worth of Mac Pro, or $7500 worth of iMac Pro, or a $10k Wintel HEDT machine, then sure, I guess people are satisfied. Or consider it to be good enough. But not just on any general system.
People are satisfied with their phones :)

And that's just dealing with compressed video. Not 8K RAW video exceeding 1,600 Mb/s with multiple streams, grades, etc. in real time.
If you can afford to rent the camera...

In terms of actual specialized hardware, I only know of two items: RED's Rocket-X card and Apple's Afterburner card. And yeah, Apple's FPGA in that case is pretty incredible. But you can also see that RED's card is decidedly not. From a "general computing" standpoint, there is no way you can argue to me that doing video requires basically nothing, because all you will see is how it more or less crushes machines.
Take the FPGA, which is probably the most inefficient way to do things aside from actually etching logic, then just effing etch the logic, and you have Apple's solution.

I have 8 cores, 32GB of RAM, and a Radeon VII. I still can slow editing to a crawl. And it's not even hard.
Same!

I'd like to see that. Even a potato capable of just doing one thing: browsing the internet. I'd say you're using hyperbole to say the least.
That's all I'm suggesting that a potato would be capable of. That's all the 'general compute' power Apple needs -- the rest can be dedicated logic for stuff like this.

Perhaps, but then you could argue that that is what happens also from a graphics and gaming perspective. Which was the point of what I was saying.
Graphics perhaps; gaming is a workload that is constantly redefined. For Apple to be good at the level of gaming that desktop users expect with an ARM product, they're going to have to restrict the definitions of gaming. Of course, that's something that Apple users are already conditioned for, so it's probably not a big deal.

All ProApps have already been ported. But his question about third-party VSTs is incredibly relevant. Apple isn't Native Instruments. They will never offer a huge library of virtual instruments. And I would say the fear is at least somewhat founded that more esoteric ones may never get ported over.
For this one, who knows.

Emulating x86 on ARM is a dead-end; you're making something that's already slow (by design) significantly slower, on purpose. And then you want to run latency-sensitive stuff in emulation?

I'd recommend just buying the proper hardware in the first place. Perhaps in a decade or so ARM will be able to emulate x86 as it runs today.
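
For what it's worth, there is at least a documented way for code (a DAW, a plugin host, whatever) to tell whether it's running natively or under Rosetta 2 translation; a minimal sketch using the "sysctl.proc_translated" sysctl:

Code:
import Darwin

// Returns true when the process is being translated by Rosetta 2.
// The sysctl reports 1 under translation, 0 when native on Apple silicon,
// and doesn't exist at all on Intel Macs (sysctlbyname returns -1).
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == 0 else {
        return false
    }
    return translated == 1
}

print(isRunningUnderRosetta() ? "Running under Rosetta 2" : "Running natively")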
 
I'm surprised Apple took this long to move chips to their laptops. They have been running in house chips in their phones for ages and they aren't any slouch.
 
I'm surprised Apple took this long to move chips to their laptops. They have been running in house chips in their phones for ages and they aren't any slouch.
I'd bet the timing is tied to two basic things:
  • Third-party software houses porting to their APIs
  • Fab capacity to produce the desired part with the appropriate performance level at volume
They could have bullied their way to more fab capacity, but getting software ported is something that they simply couldn't force to happen on their terms.
 

But there is already SO MUCH software written for their phones that should be easily portable to the laptop, no?
 
But there is already SO MUCH software written for their phones that should be easily portable to the laptop, no?
From an API standpoint, sure. From an architecture standpoint? The software should run, but it won't be optimized to take advantage of the additional resources offered, and thus would be less performant than if run on a traditional desktop CPU. That second part is the hard part for Apple, as they have to have an extremely compelling launch to get consumer buy-in and build an install base that's actually attractive to holdouts. Otherwise they'll be pulling a Windows RT.
 
Most people are using phones, so...
Sure, but I was talking about gaming and video editing. If all we're talking about is general purpose computing for things like web browsing and YouTube, then yes, I agree. But that wasn't the sort of workload I was referring to.

First you'd have to have the hardware worth recording 4k video worth editing. Then, you'd have to actually be editing, which is another level altogether. Relative to the general consumer market, that number is basically a rounding error, and it's already well-served by Apple.
I definitely chuckled at this. But I would also say it's not quite true. Not anymore. Filmmaking as an art has been democratized. It's now a hobby that a lot of people are involved in. And frankly it's the face of a lot of businesses now (even more so with COVID-19). There are many different levels of production. But at this point an iPhone with $50 in software (Filmic Pro) can shoot 4K RAW. A used 4K consumer camera can be had for around $500 (Sony A6300). Maybe less, as I'm not necessarily paying attention to every used 4K camera option.

Compared to all the people in all the world using computers? Sure it's only a small subset, but that small subset is enough to fully fund dozens of sites on the subject and plenty of YouTube channels, and don't even get me started on hardware vendors. There is plenty of space for all of those because there is such a large and diverse demand.

Everyone wants to be a YouTube and Instagram star these days. It's surprising how much cheap hardware has democratized film and made it easy and accessible to do so at any level.

Mostly because you're using a collection of components that are not actually designed for the purpose and you're chasing bottlenecks. Apple is designing hardware for that purpose. That doesn't mean that no one else can, just that Apple is putting their engineering effort in that sector.
NLEs aren't new, and there have been various industry attempts at making hardware-accelerated anything for film. If RED's Rocket-X is a pile of garbage, and most add-in cards are as well (Apple's Afterburner notwithstanding), it at least tells me something about the complexity and difficulty of the task.

If you can afford to rent the camera...
I think anyone could put their pennies together to do so. It's probably not worth it for most, though. Here in LA, I can literally go on Craigslist and see tons of ads for people renting basically every camera at some pretty affordable rates, even for hobbyists. I would argue of course that most shouldn't bother, as it will be too much camera in the hands of people that don't know what they're doing. That's another problem altogether though.

Take the FPGA, which is probably the most inefficient way to do things aside from actually etching logic, then just effing etch the logic, and you have Apple's solution.
The Afterburner card is an FPGA. It is reprogrammable. And I'm sure it was thought through and built that way so that as Apple changes the way it does things in software, it is capable of continuing to give class-leading performance (say they want to offer a new flavor of ProRes or whatever, they can reprogram the FPGA to handle those changes).
https://appleinsider.com/articles/1...t-the-afterburner-accelerator-for-the-mac-pro
The major curiosity for most editors of course was whether or not Apple would ever support other codecs with the Afterburner card, because they could. But as I'm sure you share the same skepticism as me, you and I know that will never happen, even though it is "possible". Despite this, the Afterburner card is so absurdly fast that it makes more sense to take basically any codec (like RED RAW or ARRI RAW) and transcode it to ProRes RAW in order to be able to edit full-sized uncompressed streams in real time (if for some reason you don't want to use proxies or some other method, e.g. you're the colorist or a special effects artist, or whatever). No matter how you slice it, that's impressive despite Apple's intentional limitation. So make fun of it if you'd like. But it gets the job done and makes very short work of it at an incredibly reasonable price, especially in comparison to other hardware solutions.
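
If you're curious what that transcode step looks like in code, here's a minimal sketch using AVAssetExportSession with the built-in ProRes 422 preset; "input.mov" and "prores.mov" are placeholder paths, and note there's no public export preset for ProRes RAW, so this targets plain ProRes 422:

Code:
import AVFoundation

// Sketch: rewrap a camera original into ProRes 422 with the stock export preset.
// Paths are placeholders; real pipelines would also decide proxy vs. full-res here.
let source = AVAsset(url: URL(fileURLWithPath: "input.mov"))
guard let session = AVAssetExportSession(asset: source,
                                          presetName: AVAssetExportPresetAppleProRes422LPCM) else {
    fatalError("ProRes preset not available for this asset")
}
session.outputURL = URL(fileURLWithPath: "prores.mov")
session.outputFileType = .mov

let done = DispatchSemaphore(value: 0)
session.exportAsynchronously { done.signal() }
done.wait()
print(session.status == .completed ? "Transcode finished" : "Transcode failed: \(String(describing: session.error))")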

That's all I'm suggesting that a potato would be capable of. That's all the 'general compute' power Apple needs -- the rest can be dedicated logic for stuff like this.
Ehhhhhh, I still think that's stretching it too far. But I won't be pedantic or obtuse and simply take your point.

Graphics perhaps; gaming is a workload that is constantly redefined. For Apple to be good at the level of gaming that desktop users expect with an ARM product, they're going to have to restrict the definitions of gaming. Of course, that's something that Apple users are already conditioned for, so it's probably not a big deal.
I disagree. There is a wide range of Apple users. Plenty of them are interested in graphics performance, even in things like ray tracing and VR. It's hard to generalize people from consumer status all the way up to professionals using the hardware.
I more or less could describe PC users in the same way you describe Apple users. Because most computer users in general, whether on PC or Mac, are as you say: satisfied with their phones. And most probably don't use their computers for anything more than surfing and YouTube.
So to that point, PCMR is also only a tiny percentage as is any consumer doing any real heavy lifting. Like as an example video editing or any form of 3d rendering.
In that case you're really just the pot calling the kettle black. So what's the point in doing that? That obviously can't be the point you're trying to make.

Anyway, that point aside, there are plenty of people still doing tons of graphical work in macOS. They didn't put 4x Vega II MPX cards in a Mac Pro because "no one cares" (a configuration which, until Ampere and Navi 2 release, is basically the fastest money can buy on any system). Nor have they consistently placed the highest AMD options available, within the TDP and/or power consumption constraints, into their machines because no one cares: the iMac Pro containing a Vega 64 or 56 (last updated in 2017, sadly), the regular iMac with a Vega 48 option, and the 15"/16" MacBook Pro for the most part running mid to mid-high level graphics parts for a long time (once again with TDP and power usage generally being the thing that Apple uses as a limitation in mobile devices).
Their phones and tablets have been class-leading. I remember even the iPad 2, as in the one made in 2011, melted people's brains with games running the Unreal Engine (Infinity Blade was a thing). And there are more than a few casuals playing Fortnite, Call of Duty, and various car racing games that all look and play incredibly well on said phones/tablets. If anything, I think that Apple has made casuals expect great looking and fluid games far more than any other company catering to casuals. And I would place people playing Nintendo Switch games (NVIDIA Shield) directly and squarely in that category as a competitor (in terms of graphics, obviously not gameplay).

In order for Apple to compete in the space, I fully expect that they will continue to use discrete graphics on their higher end hardware (MacBook Pro 16, iMac, iMac Pro, and Mac Pro lines in particular). I also think they will continue to support eGPUs. Apple will continue to be a first tier AMD partner. I wouldn't be surprised at all if, coinciding with the Navi 2 launch (professional cards specifically), there is a corresponding new MPX card launch for the Mac Pro, if not immediately, then down the line. The 5700 XT was added as an option and I would consider it likely that that trend will continue across product lines.
 
I'm surprised Apple took this long to move chips to their laptops. They have been running in house chips in their phones for ages and they aren't any slouch.

They didn't want to pull a "Windows RT"-like fiasco where they would have both ARM and x86 machines coexisting indefinitely.

So they needed to be ready to transition the whole lineup quickly, just like they did for PPC->Intel. That means they needed to be confident their in-house CPUs were ready to take on even the 28-core Xeon in the Mac Pro, plus they needed time to get all the software ready behind the scenes and lay as much groundwork as possible.

If you watch the WWDC 2005 keynote, you can see they are running exactly the same playbook this time.
 
I'm surprised Apple took this long to move chips to their laptops. They have been running in house chips in their phones for ages and they aren't any slouch.

They have been waiting for the software and frameworks to be ready to support the change. They have been planning this for many years.
 
You know, I used to be against this idea of moving away from x86, but that was based upon the happy days of my Mac Pro 1,1. I'm actually very intrigued now and definitely want to see the hardware later this year. If the new iMac refresh has a new chip (hopefully more powerful than the A12Z by that time) and performs well I'm probably going to get one instead of the Mini I've been debating.
 
There are a few specialised workloads that make sense for very high end processors/machines:
Simulations
Video editing/rendering
Video compression
Compiling (arguable)
Compression
3D rendering
Special effects creation/rendering
2d photo editing (high end filters)

Of the above, the average user does little.

I do a fair bit of compiling, and some gaming, but the majority of what I do could adequately be done on a Raspberry Pi with 8 gigs of RAM (which you can get these days). The rest of the above you can build specialised hardware for if you want to.
 
I missed that part. Unfortunately I doubt their chips will be faster, but we shall see.

First, Apple already did beat Intel. Second, Apple demoed the performance of the developer kit running full desktop applications (and the developer kit is using an iPad chip on steroids, not the real desktop chip).
 
They have been waiting for the software and frameworks to be ready to support the change. They have been planning this for many years.

They have been waiting for foundry tech as well. Now TSMC has nodes like 7HPC that can be pushed above 4GHz.
 
There are a few specialised workloads that make sense for very high end processors/machines:
Simulations
Video editing/rendering
Video compression
Compiling (arguable)
Compression
3D rendering
Special effects creation/rendering
2d photo editing (high end filters)

Of the above, the average user does little.

I do a fair bit of compiling, and some gaming, but the majority of what I do could adequately be done on a Raspberry Pi with 8 gigs of RAM (which you can get these days). The rest of the above you can build specialised hardware for if you want to.
Going by what you said, a 10-year-old laptop will do what most people need. A 5-year-old cell phone will do what most people need. But that's not what people want or expect when buying an Apple laptop. They expect the best and they expect their applications to run, what few there will be. The transition from PowerPC to x86 was an easy and smooth one, but I don't believe the same can be said about x86 to ARM for Apple. As it is, a number of companies are pulling away from Apple thanks to their Metal API and lack of OpenGL support, so this will just further push developers away. I expect Adobe products and most open source projects to make their way onto these new ARM-based Macs, but gaming will be officially dead on them. So anyone who plans to own these future Macs won't be seeing World of Warcraft or other game ports on these ARM Macs. Rosetta 2 might help, but no developer will make games for these ARM-based Macs. Without gaming the Mac platform might as well be dead, and gaming on the Mac was already struggling lately.
 
Going by what you said, a 10-year-old laptop will do what most people need. A 5-year-old cell phone will do what most people need. But that's not what people want or expect when buying an Apple laptop. They expect the best and they expect their applications to run, what few there will be. The transition from PowerPC to x86 was an easy and smooth one, but I don't believe the same can be said about x86 to ARM for Apple. As it is, a number of companies are pulling away from Apple thanks to their Metal API and lack of OpenGL support, so this will just further push developers away. I expect Adobe products and most open source projects to make their way onto these new ARM-based Macs, but gaming will be officially dead on them. So anyone who plans to own these future Macs won't be seeing World of Warcraft or other game ports on these ARM Macs.
Who in their right mind buys a MacBook, iMac, or Mac Pro specifically for gaming?
These computers, and their user-base, are almost exclusively developers or video/audio/photo production and media content production professionals.

Rosetta 2 might help, but no developer will make games for these ARM-based Macs. Without gaming the Mac platform might as well be dead, and gaming on the Mac was already struggling lately.
It amazes me that you apparently can't wrap your head around anyone else not doing what you do with a computer, aka, gaming.
News flash, these aren't gaming computers, and just because they are shifting away from x86-64 to ARM doesn't mean gaming-in-general is dead on this platform - that is pure speculation on your part, not proven fact.

What is your source for saying "no developer will make games on these ARM based Macs"?
From what I have seen thus far, especially in the 2020 keynote presentation, it is quite the opposite.
 
Going by what you said, a 10-year-old laptop will do what most people need. A 5-year-old cell phone will do what most people need. But that's not what people want or expect when buying an Apple laptop. They expect the best and they expect their applications to run, what few there will be. The transition from PowerPC to x86 was an easy and smooth one, but I don't believe the same can be said about x86 to ARM for Apple. As it is, a number of companies are pulling away from Apple thanks to their Metal API and lack of OpenGL support, so this will just further push developers away. I expect Adobe products and most open source projects to make their way onto these new ARM-based Macs, but gaming will be officially dead on them. So anyone who plans to own these future Macs won't be seeing World of Warcraft or other game ports on these ARM Macs. Rosetta 2 might help, but no developer will make games for these ARM-based Macs. Without gaming the Mac platform might as well be dead, and gaming on the Mac was already struggling lately.

PCMR gaming (AKA AAA) has been effectively dead on Macs for practically the whole history of Macs. Most Macs just have an Intel IGP for a GPU. You pretty much have to go to a Mac Pro to get a half-decent GPU; as a result, there was no real AAA presence.

Casual gamers (IOW, most people) may actually find an improvement, as I expect the Apple SoC GPU is likely better than the Intel IGP, and there is even more ease and a bigger push to get iPad games onto ARM Macs with Catalyst.
 