Apple M1

I don't know, man. I dual boot all the time. Having lived through Windows 95 through ME, I'm obsessively hitting Ctrl-S every few seconds and never keep more than one project open at a time, so it is a piece of cake to shut everything down and reboot, especially with how fast NVMe-based systems boot these days.
Yeah, but at work that isn't what happens. Things get closed, then rebooted. Then coffee, then programs launched and services signed into, then a break. And next thing you know, changing between OSes is a 30-minute job. And it's never 1 or 2 windows: Mac users never close anything. They are always running something like 30 browser tabs spread between Safari and Chrome, with dozens of PDFs, and the list goes on. And since they mostly use search to find their stuff, when you take them out of macOS they can never find anything. I know it's probably only true for my old users, but it's still a common trait they all share.
 
To be fair, you just described 2/3 of my clients, and they're all on Windows.
 
Point is, if they have to shut something down, it's an all-day affair for them to open it all back up. And god forbid you are the one who has to do it, because they will remind you about it all week. Every other day it's "sorry for bothering you, but..." because they can't find a website and you need to find it in their history, or they don't know where they saved that PDF to, and that open email with the attachments they needed magically disappeared... It's why Boot Camp, for all the flexibility it can offer, just didn't pan out.
 
Curious. What do you do at work?

I've been working from home since March. Ever since my work machine started refusing to connect to the work VPN, I've been working on my old 2012-era Dell Latitude E6430s. It has a dual-core (with HT) Ivy Bridge i5-3320M and 8GB of DDR3 RAM.

I have never once noticed it slow down from lack of RAM.

As I am sitting here editing a document change, Task Manager is telling me I am using 39% of the RAM (about 3.1GB of the 8GB).

Granted, my professional work is not very taxing on a computer. It consists mainly of Office 365, a web browser for some web apps, Minitab and an occasional light jaunt in SolidWorks, but that's about it.

I see the reasoning for 16GB if buying a non-upgradeable machine today; you have to leave yourself some headroom for the future. 32GB, however, seems a bit superfluous.

(I mean, I have 64GB in my Threadripper, but that's not for work :p )
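
For the curious, that Task Manager number is roughly the same "memory load" percentage the Win32 API hands out, so it's easy to grab programmatically. A minimal sketch in C, assuming a Windows toolchain (MSVC or MinGW):

[CODE]
/* Minimal sketch: query the "memory load" percentage that Task Manager
   shows, via the Win32 API. Assumes a Windows toolchain (MSVC/MinGW). */
#include <windows.h>
#include <stdio.h>

int main(void) {
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);  /* must be set before the call */
    if (!GlobalMemoryStatusEx(&ms)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Memory load: %lu%%\n", ms.dwMemoryLoad);
    printf("Physical RAM: %.1f GB total, %.1f GB available\n",
           ms.ullTotalPhys / (1024.0 * 1024.0 * 1024.0),
           ms.ullAvailPhys / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
[/CODE]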
I work in product management with 8 development teams (4 teams for backend, 2 for the Windows client, 1 for the Linux client, and 1 for the Mac client). Mostly I can get by with 16GB, but I have a lot of things going on at the same time and prefer not to keep closing/opening applications to free up memory. While I'm rarely writing any code myself (at least stuff that uses memory), I do need to run VMs to test functionality.

Obviously a lot of this is laziness. With my previous laptop, which had 16GB of RAM, I would end up running out of memory and swapping like crazy at least once a month. That's not often, and closing half of the open apps would be enough to get things under control, but with 32GB I never need to give it a second thought.
 
The performance of this chip cannot be easily quantified, but this is absolutely a beneficial direction for Apple to pursue. Being able to craft custom silicon and software, and to force devs to embrace it, opens up the opportunity for a bit of "magic." I'm interested to see how this develops in future generations.

However, I believe a lot of you take this a bit too far. This will probably not affect ARM chip design significantly, due to Apple's closed ecosystem, and a move like this is pulling Apple farther away from some tasks. Gaming, CAD, virtualization, and other intensive, commonly x86 tasks come to mind.
 
Being able to craft custom silicone and software, and to force devs to embrace it, opens up the opportunity for a bit of "magic."

I'd argue that this helps Apple's gaming position in some ways. True, it limits the performance of some x86-only games, but it also brings a whole bunch of iPhone and iPad games to the Mac. You now have a version of Among Us you can play on a Mac, for example! I wish Apple hadn't deprecated OpenGL, but it's not as dire as you think.

Also: silicon, not silicone... the former is what you find in chips, the latter you find in phone cases and implants. 😛
 
Yeah, while iOS might not be thought of as a big gaming platform in the traditional "console or gaming PC" sense, there's a lot of stuff that runs on it and a lot of people play iOS games.

So macOS really just got access to a whole lot more games than it has ever had before.
 
I have a friend who thinks that even the Mac Pro will be an ARM-based machine in the next couple of years. I, however, very much doubt it. Besides, he has an old boss who uses Apple exclusively and claims that it will be 4x faster than the Intel-based model, straight up. Ummm, I think not.

Edit: Not the Mac Pro being 4x faster, but ARM-based Macs being 4x faster than Intel ones.
 
I personally see the future of gaming continuing to expand on high-performance computers. That is exclusively x86 right now. However, I will admit there is a mobile gaming ecosystem that will work great with these Macs. As far as what devs do in the future? Apple needs to SIGNIFICANTLY step up their GPU game to pursue many desktop titles, and unfortunately I don't see them doing that with the power envelope they are working with.
 
I have a friend who thinks that even the Mac Pro will be an ARM-based machine in the next couple of years. I, however, very much doubt it. Besides, he has an old boss who uses Apple exclusively and claims that it will be 4x faster than the Intel-based model, straight up. Ummm, I think not.

Edit: Not the Mac Pro being 4x faster, but ARM-based Macs being 4x faster than Intel ones.
He's right about the ARM-based Mac Pro timing. Apple said in June that it intended to transition the whole lineup to ARM in two years, and that includes pro machines.

Now, four times faster? Probably not. I would, however, expect Apple to include considerably more cores and compete more with AMD's Epyc and Threadripper.
 
The performance of this chip cannot be easily quantified, but this is absolutely a beneficial direction for Apple to pursue.

The details of the microarchitecture show this is a very advanced chip.

However, I believe a lot of you take this a bit too far. This will probably not affect ARM chip design significantly, due to Apple's closed ecosystem, and a move like this is pulling Apple farther away from some tasks. Gaming, CAD, virtualization, and other intensive, commonly x86 tasks come to mind.

Why does choosing ARM pull Apple away from intensive, commonly x86 tasks?
 
He's right about the ARM-based Mac Pro timing. Apple said in June that it intended to transition the whole lineup to ARM in two years, and that includes pro machines.

Now, four times faster? Probably not. I would, however, expect Apple to include considerably more cores and compete more with AMD's Epyc and Threadripper.

I will believe it when I see it; that is all I can say to that. Besides, in 2 years AMD's processors will be a big jump faster than even their Zen 3 stuff, so Apple cannot just sit around and hope. :)
 
Because Apple just sits around and doesn’t do anything with their chips...? Of course AMD’s chips will be faster in 2 years, but so will Apple’s chips.
 
Not just faster but, most likely, significantly faster. This is not Intel, after all.
 
A lot of people here seem to think that Apple can make a faster chip than AMD and Intel without active cooling. Yea... that's not how it works. I would take Apple's M1 benchmarks with a whole lot of salt until reviewers get their hands on it and actually run benchmarks. There's a reason why a lot of top reviewers have stopped using synthetic benchmarks.
 
The problem is that an Apples-to-everybody-else comparison is going to be hard even in the real world, because you will be using different APIs. The best we can get is "for this specific title, the M1 does X FPS on Metal while the PC does Y FPS on DX12" sort of stuff. So canned benchmarks are going to be needed to some degree.
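
GPU comparisons will stay API-bound, but for CPU throughput you can still get an apples-to-apples number by compiling the exact same workload natively on each side and timing it. A toy sketch of the "canned benchmark" idea in C (POSIX timing; the kernel is an arbitrary stand-in, not any real benchmark suite):

[CODE]
/* Toy "canned benchmark": the same CPU-bound workload, timed with a
   monotonic clock. Compile natively on each platform and compare the
   elapsed time. POSIX-only timing; the kernel is an arbitrary stand-in. */
#include <stdio.h>
#include <time.h>

static double work(long n) {
    double acc = 0.0;
    for (long i = 1; i <= n; i++)
        acc += 1.0 / ((double)i * (double)i);  /* converges toward pi^2/6 */
    return acc;
}

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    double r = work(200000000L);               /* fixed, deterministic load */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("result=%.9f elapsed=%.3f s\n", r, secs);
    return 0;
}
[/CODE]

Same source, same input, wall-clock time out; that's about as fair as cross-ISA comparisons get, though it still tells you nothing about Metal versus DX12.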
 
A lot of people here seem to think that Apple can make a faster chip than AMD and Intel without active cooling. Yea... that's not how it works.
That is exactly how it works if you use an efficient architecture, control the software ecosystem, and go ultra-wide at the core level while your competitors make inefficient 5GHz designs.
 
A lot of people here seem to think that Apple can make a faster chip than AMD and Intel without active cooling. Yea... that's not how it works. I would take Apple's M1 benchmarks with a whole lot of salt until reviewers get their hands on it and actually run benchmarks. There's a reason why a lot of top reviewers have stopped using synthetic benchmarks.
My personal theory is that these new systems will offer meaningful performance improvements over the chips in their Intel predecessors, but they won't completely upend the PC market in terms of raw speed. The biggest value will be the base Air, since even the low-power cores in the M1 are supposed to be as fast as the Core i3 from the previous model. You don't have to pony up for the higher-end Air just to get solid multitasking performance.

Now, battery life, on the other hand... that may be Apple's ace in the hole. The stereotype for Windows PC makers is to claim 15 hours of battery life, but only if you do absolutely nothing with the system; Apple's 15 hours of browsing is more likely to be... well, 15 hours (at least using Safari; Chrome is usually a battery hog on any platform).
 
The details of the microarchitecture show this is a very advanced chip.



Why does choosing ARM pull Apple away from intensive, commonly x86 tasks?

Well, obviously it's an advanced chip; there is zero point in fabbing any chip if it's not. Every modern CPU is advanced. The difference is that Apple controls the software stack, so they can optimize their silicon a bit for the OS. That's not to say x86 chips don't do the same; Apple can just make different choices with their silicon now that they don't have to support as much.

It's pulling away from intensive tasks because all of those tasks are developed for complex CISC chips. This ARM chip lacks the extensions, optimizations, and frankly the raw compute power that these applications currently make heavy use of. Developers could optimize their applications for ARM if they feel the need, but please understand how monumental that task is for some projects and software suites. Maybe some devs will have ARM in mind for future projects, but don't expect every dev to invest the resources to adapt current projects for a tiny portion of the market.

Some software will never be practical on these chips (heavy virtualization), as the ARM chip lacks the extensions and power to make that practically happen.

These are not monumentally better processors in any way. They are different and heavily optimized for some tasks. They are advanced designs on an advanced node, so I would expect practical advantages, but don't discredit what x86 currently is just because Apple threw out some numbers.
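
To make the porting cost concrete, here is a sketch (a hypothetical kernel, not from any real codebase) of the same hand-vectorized loop written once against x86 AVX intrinsics and once against ARM NEON. Every loop like this in a large suite has to be rewritten this way, or routed through a portability layer:

[CODE]
/* Why hand-vectorized x86 code doesn't "just run" on ARM: the same
   element-wise add written twice. Hypothetical kernel; n is assumed
   to be a multiple of the vector width. */
#ifdef __AVX__
#include <immintrin.h>
void add_f32(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 8) {        /* 8 floats per 256-bit AVX op */
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
    }
}
#elif defined(__ARM_NEON)
#include <arm_neon.h>
void add_f32(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 4) {        /* 4 floats per 128-bit NEON op */
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(dst + i, vaddq_f32(va, vb));
    }
}
#endif
[/CODE]

Multiply that by thousands of kernels, plus the re-testing, and "just recompile for ARM" stops sounding trivial.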
 
My personal theory is that these new systems will offer meaningful performance improvements over the chips in their Intel predecessors, but they won't completely upend the PC market in terms of raw speed. The biggest value will be the base Air, since even the low-power cores in the M1 are supposed to be as fast as the Core i3 from the previous model. You don't have to pony up for the higher-end Air just to get solid multitasking performance.

Now, battery life, on the other hand... that may be Apple's ace in the hole. The stereotype for Windows PC makers is to claim 15 hours of battery life, but only if you do absolutely nothing with the system; Apple's 15 hours of browsing is more likely to be... well, 15 hours (at least using Safari; Chrome is usually a battery hog on any platform).

I suspect they are focusing on performance per watt (this is what I might do in Apple's place), kind of like how NVIDIA is fond of saying "50% faster" with the "per watt" in a really small font. I don't agree that all M1 systems will be passively cooled. Once Apple starts to build workstation/HEDT systems (with more cores/sockets?) they will have to use active cooling. Likely an AIO.

With the M1 being custom silicon, I wonder if there is an ASIC built into the SoC just for x86-64 emulation. It would certainly speed up the emulation.
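
From what has been reported, it's not a full x86 ASIC, but Apple silicon does have a switchable total-store-ordering (TSO) mode that Rosetta 2 uses. The win is memory ordering: x86 keeps stores visible in program order, ordinary ARM does not, so a translator without that mode has to strengthen every store. A rough C11 sketch of the software-only cost:

[CODE]
/* Rough illustration: x86 guarantees stores become visible in program
   order (TSO); ordinary ARM stores do not. Without hardware TSO, a
   faithful translator must emit ordered (release) stores or barriers,
   which costs real time on every store. */
#include <stdatomic.h>

atomic_int data, flag;

void publish_x86_semantics(void) {
    /* On x86, two plain stores already suffice:
         mov [data], 42
         mov [flag], 1      ; guaranteed to be observed in this order */
    /* On weakly ordered ARM, translated code must force the order: */
    atomic_store_explicit(&data, 42, memory_order_release);
    atomic_store_explicit(&flag, 1, memory_order_release);
    /* A hardware TSO mode makes plain stores behave this way for free. */
}
[/CODE]

Hardware TSO lets translated code use ordinary stores and still match x86 semantics, which is presumably part of why Rosetta 2 performs as well as it reportedly does.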
 
I feel we may have to wait for software to adapt a bit before seeing workstation offerings.

It won't be an AIO; that's not the Apple way, and air coolers are sufficient.

If they do get around to making a higher-TDP chip, it will be interesting to see what they prioritize. If they go with larger compute cores and take steps toward hardware virtualization, that would be a very unique chip. Given that they control the OS, it would be feasible to run x86 apps directly with good performance. That product would probably be a market win if it were created, but I'm not sure I see Apple embracing that direction.
 
I don't know, man. I dual boot all the time. Having lived through Windows 95 through ME, I'm obsessively hitting Ctrl-S every few seconds and never keep more than one project open at a time, so it is a piece of cake to shut everything down and reboot, especially with how fast NVMe-based systems boot these days.
I'm usually working on four to eight projects at the same time; ten to sixteen applications open; web browser with quite a few windows and tabs (30+) open...

One project at a time? That's me on vacation. Dual boot? Hell, no. That's too much time wasted. That's what a second or third computer is for, or, if you don't like wasted hardware, what a virtual machine, or two, or three, are for.

With that said, I'm not interested in an ARM-based Mac for work. I'd rather Apple had contracted out to AMD for the CPU. Apple moving to ARM has nothing to do with their belief that ARM is a better chip. It's because their phones and tablets are ARM and they can 1) significantly reduce development costs by unifying the OS and CPU stack, and 2) gracefully exit the workstation business, which Apple stopped caring about over a decade ago. Phones, tablets, laptops, watches, and subscription services are where their money is now.

Anti-Apple bias? No; I have an iPhone, an iPad Mini, and an iPad Pro (times two, as my wife has the same); for work, two Apple MacBook Pros, a 12 Core Mac Pro 2010, and a 12 Core Mac Pro 2013. You might even call me an Apple fanboy, but you'd be wrong, almost... ;)
 
I learned a long time ago that the human brain is absolutely awful at multitasking, and that if you have 8 projects, they get done faster if you do them sequentially than if you try to do them all at the same time.

The only time I have lots of stuff open at the same time is if I am working on something that has many source documents I need to switch back and forth between repeatedly.
 
It really depends on the nature of the project. If the workload has some long compile, render, or calculation time, it can make sense; otherwise, yes, I am not sure it makes any sense. I would assume the poster above has such a workload (the number of cores the machine has being a clue).
 
I learned a long time ago that the human brain is absolutely awful at multitasking, and that if you have 8 projects, they get done faster if you do them sequentially than if you try to do them all at the same time.

The only time I have lots of stuff open at the same time is if I am working on something that has many source documents I need to switch back and forth between repeatedly.
Your inability to work on multiple projects simultaneously doesn't affect my ability to do so.
 
Humans can't multitask well. None of them. The science is conclusive on this. It's a fundamental truth about how the human brain works.

In fact, one study I read several years ago found that the more confident a person was in their ability to multitask, the worse they actually were at it. Probably some weird side effect of the Dunning-Kruger effect.

Either way, this is getting ridiculously off topic, so I am dropping it.
 
Your inability to work on multiple projects simultaneously doesn't affect my ability to do so.
I think many readers would be curious which workload makes it possible to have 8 different projects going at the same time. Very long render times? Compile times?
 
I ordered a MacBook Pro (13") today for work, which is something I thought I'd never do or own. The ability to sync it up with my phone, plus everything I would use it for at work, just made sense to me. It's replacing an older i7 Windows laptop I currently have.
 
I think many readers would be curious which workload makes it possible to have 8 different projects going at the same time. Very long render times? Compile times?
It's no secret around here that I produce video. We have multiple instructors for multiple projects that take multiple weeks to get done, and multiple tasks required to get raw, unedited video to a final publishable product. I'd be absolutely useless to anyone if I had to wait for one of these projects to be complete before working on another.

Either way, this is getting ridiculously off topic, so I am dropping it.
On the contrary. It's very much on topic. Other than gaming, what consumes all the resources of fast, multi-core machines with gobs of memory if not multiple tasks running simultaneously, if not the desire or need to do more than one thing at a time? Why else would we need hardware that can accomplish so much if we could only do so little? The relentless march of technology is only possible because there is a need for it.
 
Renders, compiles, and other long-term number-crunching activities are the main purpose.

I only have very limited experience with video editing, from a few novice productions I threw together, but I can certainly see finishing the editing of one project and letting it render in the background while working on another.

That is letting the PC multitask (which it is comparatively good at) while not trying to ask the human brain to do the same. So in that case it can work!

Research is pretty clear that the human brain cannot keep track of multiple things at the same time very well, and efficiency losses result from context switching. In the overwhelming majority of cases, humans wind up spending more time trying to "find where they were" when frequently switching between tasks than they save from any efficiencies gained.
 