Apple launched the M2 Pro and the M2 Max

It's not like Macs back in the PowerPC days, where gaming was pretty much nonexistent.
Going to stop you there, PowerPC from 1993 onward had quite a few games for it on Macintosh systems in the 1990s and 2000s, and especially on m68k in the 1980s and early 1990s.
There may not have been as many games on System 6 through OS X 10.5.8, but nearly all of the major ones were ported and done quite well, especially those that supported OpenGL like Doom 3 and Quake 4.

Also, PowerPC clock-for-clock curb-stomped x86 up until Intel released Conroe in 2006.


A SATA SSD at 500MB/s would still be plenty fast enough. The M2 Pro's base 2900MB/s is overkill for that application.
If you scrub through enough uncompressed video, even on an internal drive, SATA can quickly become the limiting factor.
NVMe, and previously large RAID 0 (or 10) HDD arrays and RAM disks, are nearly essential to have decent performance when doing so - assuming the CPU and rest of the system is sufficient leaving the disk as the bottleneck.
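As a rough illustration of where that bottleneck lands, here is a back-of-the-envelope sketch; the format and bits-per-pixel figure are assumed examples for illustration, not taken from the posts above:

```python
# Back-of-the-envelope data rate for uncompressed video (assumed example
# format: 4K 4:2:2 10-bit at 24 fps, i.e. roughly 20 bits per pixel).

def video_rate_mb_s(width, height, bits_per_pixel, fps):
    """Raw data rate of uncompressed video in MB/s."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1e6

rate = video_rate_mb_s(3840, 2160, 20, 24)
print(f"{rate:.0f} MB/s")  # -> 498 MB/s: right at SATA's ~550 MB/s real-world ceiling
```

A single stream of this assumed format already saturates a SATA link, while it is a small fraction of even a mid-range NVMe drive's bandwidth.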

SATA reached its limit years ago, and even though many individuals still use it casually, which it is adequate for, it certainly isn't adequate for any performance application made within the last half-decade.


A 3.4GB/s drive will not make anything go 8 times faster than a regular SATA SSD (or even an HDD), or anything of the sort. What remotely common application would open 40% faster between those two speeds, even on a slower CPU?
That statement and article is total bullshit.
I've noticed a huge uptick in performance when moving workstations from SATA-III SSDs to NVMe 3.0/4.0 SSDs.
The difference is huge when performing OS/application updates, working with large data sets, and even when working with everyday real-world applications.

Opening Chrome with a single tab may not make a big difference between SATA and NVMe, but opening large GIS map data sets with thousands or millions of points and large project files is a world of difference, let alone with video editing.


Why would the same people notice the 20% CPU/GPU performance and not the 40% SSD read performance? Realistically applications will launch 40% faster on the M1 Pro, which is huge. If you're doing video editing then the faster write speeds are more important. When Linus Tech Tips reviewed the M1's video editing performance, they said the SSD was a limiting factor.

Most people, which is why Apple did this. It's meant to save money for Apple; yes, it'll only be around $3 of savings per device, but when you're pushing millions of units the savings add up quickly. Most people buy the base model, and for anyone who is somehow aware of the difference, it's also a tactic to push them toward higher configurations when purchasing these machines.
Fully agreed, even if it isn't a straight 40% improvement in all apps across the board, there will be a general improvement in disk read/write performance.
I remember seeing the SSD in the M1 being the limiting factor as well when they were first reviewed, it was definitely a bottleneck with what the CPU was capable of at the time.


It's the equivalent of Nvidia putting DDR4 in their GT 1030 instead of GDDR5, and not telling anyone about it. We enthusiasts knew, the reviewers knew, but the people who go buy the product won't know, because they Googled reviews of the GT 1030 and not the DDR4 version.
This is exactly the bait-and-switch that Apple is pulling.
If Apple explicitly states this in their specs then it is fine, but if not, it's definitely underhanded and certainly not ethical toward their customers, even if their shareholders are happy with the results.
 
If you scrub through enough uncompressed video, even on an internal drive, SATA can quickly become the limiting factor.
NVMe, and previously large RAID 0 (or 10) HDD arrays and RAM disks, are nearly essential to have decent performance when doing so - assuming the CPU and rest of the system is sufficient leaving the disk as the bottleneck.

SATA reached its limit years ago, and even though many individuals still use it casually, which it is adequate for, it certainly isn't adequate for any performance application made within the last half-decade.
You also didn't bother to read the full context of my comment.

But let's use your example. You have 1x internal drive that only goes at 500MB/s and is a total of 512GB in size. Your system files and applications take up 100GB. How much compressed RAW video can you put on there? I can tell you: approximately 30-35 minutes. If we did what you said, uncompressed RAW, it would be less than 10 minutes.
There is no editing project that any editor can get done with that drive size. Unless of course they only edit one project at a time, intentionally copy everything to their internal drive, and then move it to an external drive. Heaven forbid they have to go back to a previous project and copy it back and forth again. I don't know what projects they could be editing with only 10-35 minutes worth of footage, but who cares, we're making stuff up right?
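The arithmetic behind those figures can be sketched as follows; the data rates used are assumed ballpark values (real rates vary widely by format and resolution):

```python
# Capacity-to-runtime math behind the 30-35 minute figure. The data rates
# below are assumed ballpark values, not figures from the post itself.

def runtime_minutes(capacity_gb, used_gb, rate_mb_s):
    """Minutes of footage that fit in the free space at a given data rate."""
    free_bytes = (capacity_gb - used_gb) * 1e9
    return free_bytes / (rate_mb_s * 1e6) / 60

# 512 GB drive, 100 GB for OS/apps, ~200 MB/s compressed RAW (assumed):
print(f"{runtime_minutes(512, 100, 200):.0f} min")  # -> 34 min
# Uncompressed RAW at an assumed ~700 MB/s:
print(f"{runtime_minutes(512, 100, 700):.1f} min")  # -> 9.8 min
```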

I'll say it a third time, maybe those in the back can hear it this time: NO EDITOR USES INTERNAL DRIVES TO EDIT FROM. IT IS NOT PRACTICAL. Especially if you're talking about "uncompressed" video. In the case of editing RAW we're talking about projects that can easily be in excess of 4TB for videos that are short.

So, let's go back to my example: the only time you'd be able to do any appreciable editing from the internal 512GB drive (which still has the OS and apps on it) is if you're dealing with highly compressed small files. In other words, files that wouldn't benefit from increased internal SSD speed anyway.

Now let's go to the MBP M2 Pro base example: it's a drive with a 2900MB/s read speed. Even if we said it's "fine" to edit from this drive (which is wildly impractical) and you wanted to edit compressed RAW video, 3:1 8k REDcode RAW at 24fps still has a data rate far below the transfer rate of that drive. So EVEN THAT APPLICATION (even if improper) WOULD STILL BE OKAY WITH THE BASE DRIVE SPEED.
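To put numbers on that headroom, here is a minimal sketch; the REDCODE data rate is an assumed ballpark figure (it varies by sensor and settings), not a number from the post:

```python
# Headroom of the base M2 Pro SSD over an assumed ~600 MB/s data rate
# for 8k REDcode RAW 3:1 at 24 fps (ballpark assumption for illustration).
drive_mb_s = 2900   # base M2 Pro sequential read
stream_mb_s = 600   # assumed codec data rate
print(f"{drive_mb_s / stream_mb_s:.1f}x headroom")  # -> 4.8x headroom
```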

So, again for the 100th time, do you or DNX actually have a use case that actually would benefit from a drive that's faster than 2900MB/s? Because so far there has been ZERO benchmarks or even a shred of evidence that it matters. Just a lot of complaining about Apple being an evil company.

EDIT: And this is also not even discussing that it's far more likely that the throughput of the rest of the system being a greater limitation than the speed of the drive. Most editors transcode REDcode or other RAW formats into ProRes specifically because RAW is incredibly taxing on hardware to decode in real time, especially with a color grade and possibly AE/Fusion effects on top. In which case, those 8k REDcode files might be transcoded down to ProRes 422 1080p files. Which also significantly drops the need for faster storage and also then is benefited much more greatly from the faster M2 Pro vs the M1 Pro for all transcoding tasks.
 
Going to stop you there, PowerPC from 1993 onward had quite a few games for it on Macintosh systems in the 1990s and 2000s, and especially on m68k in the 1980s and early 1990s.
There may not have been as many games on System 6 through OS X 10.5.8, but nearly all of the major ones were ported and done quite well, especially those that supported OpenGL like Doom 3 and Quake 4.

Also, PowerPC clock-for-clock curb-stomped x86 up until Intel released Conroe in 2006.



If you scrub through enough uncompressed video, even on an internal drive, SATA can quickly become the limiting factor.
NVMe, and previously large RAID 0 (or 10) HDD arrays and RAM disks, are nearly essential to have decent performance when doing so - assuming the CPU and rest of the system is sufficient leaving the disk as the bottleneck.

SATA reached its limit years ago, and even though many individuals still use it casually, which it is adequate for, it certainly isn't adequate for any performance application made within the last half-decade.



That statement and article is total bullshit.
I've noticed a huge uptick in performance when moving workstations from SATA-III SSDs to NVMe 3.0/4.0 SSDs.
The difference is huge when performing OS/application updates, working with large data sets, and even when working with everyday real-world applications.

Opening Chrome with a single tab may not make a big difference between SATA and NVMe, but opening large GIS map data sets with thousands or millions of points and large project files is a world of difference, let alone with video editing.



Fully agreed, even if it isn't a straight 40% improvement in all apps across the board, there will be a general improvement in disk read/write performance.
I remember seeing the SSD in the M1 being the limiting factor as well when they were first reviewed, it was definitely a bottleneck with what the CPU was capable of at the time.



This is exactly the bait-and-switch that Apple is pulling.
If Apple explicitly states this in their specs then it is fine, but if not, it's definitely underhanded and certainly not ethical toward their customers, even if their shareholders are happy with the results.
68k -> PPC killed me back in the day, like some others. It was absolutely brutal, and I had to drop Apple. The move from x86 -> ARM now is entirely different, and the software landscape is way different.
 
That statement and article is total bullshit.
Yet nothing you say contradicts it. What application opens 8 times faster simply by going from a SATA SSD to a 4500MB/s NVMe drive?

For something like that, it is rarely a direct ratio of the change in raw sequential drive read speed.
 
Yet nothing you say contradicts it. What application opens 8 times faster simply by going from a SATA SSD to a 4500MB/s NVMe drive?
When opening, programs rarely rely on sequential transfers.
If you used real-world applications in a work place on a daily basis, you would know that.

I also never stated applications open 8 times faster, that was the article that stated that.
In real-world scenarios, large project files do open many times faster due to how NVMe functions compared to SATA's limitations.

For something like that, it is rarely a direct ratio of the change in raw sequential drive read speed.
I do agree with this to a point, but to say there isn't a difference is not true which is what that article stated.
For the average user though, it really doesn't matter.


You also didn't bother to read the full context of my comment.
I did, and do not agree with you.

I'll say it a third time, maybe those in the back can hear it this time: NO EDITOR USES INTERNAL DRIVES TO EDIT FROM. IT IS NOT PRACTICAL. Especially if you're talking about "uncompressed" video. In the case of editing RAW we're talking about projects that can easily be in excess of 4TB for videos that are short.
You would be surprised how many times I have seen individuals use the internal drive for editing, and not all uncompressed video is 8K and takes 100GB for 10 seconds of video.

But let's use your example. You have 1x internal drive that only goes at 500MB/s and is a total of 512GB in size. Your system files and applications take up 100GB. How much compressed RAW video can you put on there? I can tell you: approximately 30-35 minutes. If we did what you said, uncompressed RAW, it would be less than 10 minutes.
Yep, and I know many individuals who will do this off of the internal drive (normally bigger than a 512GB disk, though) and using 30-60 minutes of RAW video isn't outside the boundaries of their use-case scenario.
Some of them also use external storage arrays as well, it depends on the project and what their current needs are, and not everyone is editing hours of footage.

There is no editing project that any editor can get done with that drive size.
Yet somehow we made use of disks that were under 80GB in size back in the late 1990s and 2000s with then-large project files.
Perhaps you don't have much experience with video editing because what you are stating is simply not true in many scenarios.

Unless of course they only edit one project at a time, intentionally copy everything to their internal drive, and then move it to an external drive. Heaven forbid they have to go back to a previous project and copy it back and forth again.
Yep, they are normally editing only a single to a few projects at a time, and do tend to copy the finished products as backups.

I don't know what projects they could be editing with only 10-35 minutes worth of footage, but who cares, we're making stuff up right?
Not sure what world you are living in, but there are many projects that don't use more than 10-30 minutes of footage professionally.
Making stuff up???

So if anyone is editing under 10-35 minutes worth of footage, that apparently isn't video editing according to you?
Wow, you might want to get over yourself and be less insulting to those who took time out of their lives to respond to you to have a discussion.

So, again for the 100th time, do you or DNX actually have a use case that actually would benefit from a drive that's faster than 2900MB/s? Because so far there has been ZERO benchmarks or even a shred of evidence that it matters.
This is the first time I have even commented on any of the subjects being discussed here and I was just pointing out some real-world scenarios where I did see a difference between a SATA SSD and an NVMe SSD.
I have no dog in this fight, and quite frankly if this is how disrespectful your responses are to those who are sharing their real-world experiences then I have been wasting my time responding to you.

Pro tip: You aren't as knowledgeable as you think you are.

Just a lot of complaining about Apple being an evil company.
From other threads you tended to relish this subject, especially in regards to Samsung.
Yet here it apparently bothers you that Apple is being called out on its double-speak and marketing nonsense - very ironic.

Welcome to the ignore list.
 
Going to stop you there, PowerPC from 1993 onward had quite a few games for it on Macintosh systems in the 1990s and 2000s, and especially on m68k in the 1980s and early 1990s.
There may not have been as many games on System 6 through OS X 10.5.8, but nearly all of the major ones were ported and done quite well, especially those that supported OpenGL like Doom 3 and Quake 4.

Also, PowerPC clock-for-clock curb-stomped x86 up until Intel released Conroe in 2006.



If you scrub through enough uncompressed video, even on an internal drive, SATA can quickly become the limiting factor.
NVMe, and previously large RAID 0 (or 10) HDD arrays and RAM disks, are nearly essential to have decent performance when doing so - assuming the CPU and rest of the system is sufficient leaving the disk as the bottleneck.
Absolutely - but the chances of that being done, regularly, on a 512GB internal drive (especially on a laptop) are limited. If you're doing video editing on the laptop, in the field, with uncompressed video, you're almost certainly already buying the larger-capacity drives to begin with.
And given that even the base drive has nearly 3GB/s of performance for sequential reads/writes, you're still well over 4x the maximum potential speed of a SATA drive. This isn't 600MB/s - it's still NVMe with all the benefits inherent to that architecture. Just not the fastest NVMe drive out there.
SATA reached its limit years ago, and even though many individuals still use it casually, which it is adequate for, it certainly isn't adequate for any performance application made within the last half-decade.
Yup. But it's a laptop - and it's still NVMe.
That statement and article is total bullshit.
I've noticed a huge uptick in performance when moving workstations from SATA-III SSDs to NVMe 3.0/4.0 SSDs.
The difference is huge when performing OS/application updates, working with large data sets, and even when working with everyday real-world applications.
Same, which is why I put NVMe in all my workstations and tend to use SATA for either mass storage (even SATA SSDs) or boot drives on servers, where update time isn't that significant of a barrier (it's all automated anyway). But I'm also not editing large video files on a laptop internal drive - I'm in the lower use case of small files, already compressed (generally), or setting up with external storage if I have to do anything big (which is rare these days - not my job anymore).
Opening Chrome with a single tab may not make a big difference between SATA and NVMe, but opening large GIS map data sets with thousands or millions of points and large project files is a world of difference, let alone with video editing.
Absolutely. But are you doing that on a base model MacBook Pro? Chances are you're on an upgraded one if you're doing that - and that eliminates even the potential for the problem.
Fully agreed, even if it isn't a straight 40% improvement in all apps across the board, there will be a general improvement in disk read/write performance.
I remember seeing the SSD in the M1 being the limiting factor as well when they were first reviewed, it was definitely a bottleneck with what the CPU was capable of at the time.
M1 != M1 Pro/Max/etc. The base models had an even slower drive system, which yeah- could definitely be a limitation (1.something GB/s I believe - well under half the speed of the base MacBook Pros now).
This is exactly the bait-and-switch that Apple is pulling.
If Apple explicitly states this in their specs then it is fine, but if not, it's definitely underhanded and certainly not ethical toward their customers, even if their shareholders are happy with the results.
They don't ever post specs for anything really :-/
 
I love my Mn series Macs.
Never get anything with less than 1TB storage so that shouldn't be an issue.
With the M1 Max the 2TB is slightly faster than the 1TB and the 4TB is slightly faster than the 2TB. But nothing as significant as the 512GB version of the M2.
96GB RAM comes in handy too.

The biggest thing though is noise. Rarely do I EVER hear the fans in my 16" M1 Max. The 14" yes I can. Even still, compared to my 12th gen XPS 9720 it's no contest. And when running on battery power (the PC) performance drops off a cliff.

(Finally) WiFi 6E, so AirDrops will exceed 1Gbps! About time!

I did get a M2 Air (1TB / 16GB) and like it much better than the 32GB/1TB "maxed out" Surface Laptop 5 I've been using. Mainly because it is absolutely SILENT! The Intel machine's fan is constantly whirring in the background which is annoying AF.
 
I did, and do not agree with you.
Fundamentally you don't understand then, and that is evidenced by this reply.
You would be surprised how many times I have seen individuals use the internal drive for editing, and not all uncompressed video is 8K and takes 100GB for 10 seconds of video.
First here, I stated 10 minutes, but there is a difference not only in reading, but specifically in comprehension.

HOWEVER: RAW is the ONLY format that would actually need 2900MB/s of read speed. That's the point of EVERYTHING we're talking about.

So either 1.) You're editing RAW, which would benefit from having 2900MB/s, but you can't because 512GB is too limiting space-wise. But let's say it's magically 4TB in size: 2900MB/s is plenty for 8k REDcode RAW at any level of compression, or ARRIRAW (which is uncompressed), or any other flavor of RAW.
2.) You're working with a compressed format that you can work with several hours of and you don't need anywhere close to 2900MB/s because the bitrate is significantly lower.

Either way, 2900MB/s is more than enough speed, whether RAW or compressed. There is no video use case where the M2 Pro base's SSD isn't fast enough for video editing. Even in RAW you'd have a hard time saturating 10Gb/s, which is how much bandwidth pros in Hollywood have when editing off of a shared server.

EDIT: (I don't think you know what "uncompressed" video is. The only form of "uncompressed video" is something shot in a RAW format. If it's not RAW, it's not uncompressed. Period. To that point, RED shoots in compressed, visually lossless RAW. CDNG or ARRIRAW, which is uncompressed RAW, is up to 12x larger.
But for reference, you can see some examples of data rates of the only RAW formats that exist in the cinema space: https://ymcinema.com/2022/05/23/redcode-raw-evolution-dsmc2-to-dsmc3-and-future-dsmc4/

If you're shooting in ProRes 4444 XQ 12-bit 4.5k, the file type that comes out of the ARRI Alexa LF, 512GB is 38 minutes of recording time. And that is a compressed format. Uncompressed RAW is insanely huge: ARRIRAW is 2TB for a paltry 67 minutes of recording time.

Whether you're dealing with REDcode compressed RAW, ProRes RAW, or even RAW types that technically aren't RAW (like BRAW), 400GB of space isn't enough to edit off of. Period. Heck, even ProRes 12-bit in 4k is too large for 400GB. (Working with these file types is something I have done.)

So what projects have you shot in 4.5k ProRes 4444 XQ? Because that would be <30 minutes of footage, using a compressed format, in the 400GB of space you say is plenty. I was being "nice" to your use case by talking about compressed RAW. Uncompressed? There is no shot.)
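For what it's worth, the 38-minute figure above can be back-computed into the data rate it implies, using only the numbers already in the post:

```python
# Implied data rate from "512 GB = 38 minutes" of 4.5k ProRes 4444 XQ
# (both figures taken from the post above).
capacity_bytes = 512e9
seconds = 38 * 60
rate_mb_s = capacity_bytes / seconds / 1e6
print(f"{rate_mb_s:.0f} MB/s (~{rate_mb_s * 8 / 1000:.1f} Gb/s)")  # -> 225 MB/s (~1.8 Gb/s)
```

Even that compressed cinema format, at an implied ~225 MB/s, sits far below the base drive's 2900 MB/s.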

Yep, and I know many individuals who will do this off of the internal drive (normally bigger than a 512GB disk, though) and using 30-60 minutes of RAW video isn't outside the boundaries of their use-case scenario.
Some of them also use external storage arrays as well, it depends on the project and what their current needs are, and not everyone is editing hours of footage.
That would be in the best case scenario for editing RAW 6k 24fps 3:1 RAW files. That's an actual use case based on what I shoot with. If you're working with 8k 3:1 RAW that's less. And again any form of uncompressed RAW far less than that.
That means this person intentionally keeps 400 out of 512GB of their SSD space just for video files and never exports to the internal drive. It also means literally zero swap space (which would make for a nearly impossible and very unpleasant working experience), and somehow filling that space perfectly every time without going 1GB over. This isn't practical at all.
Yet somehow we made use of disks that were under 80GB in size back in the late 1990s and 2000s with then-large project files.
Exactly. And you know how that was done? With files that 1.) weren't RAW. And 2.) had significantly lower bit rate.
To recap: the only video file types necessary that would necessitate having 2900MB/s in the first place is RAW. And it's not practical to edit RAW because the volume of the drive isn't large enough.
If we're talking about compressed video which you could have several hours of footage inside of 400GB (as many as 4-5), then drive speed is no longer an issue. That's the point.

Having incredibly fast drive speeds wasn't necessary when using hyper-compressed video in the era when 80GB was considered a large drive. They were working with 100Mb/s XAVC 1080p files or similar. That could in fact be done on SATA SSDs, like I mentioned. Back in the days of 80GB drives, people edited from plain 7200RPM SATA drives, which are even slower, or alternatively off two drives in RAID 0 (but in that scenario the total would be >80GB). Ask me how I know. This proves my point, not yours: 80GB drives definitely didn't operate even at SATA SSD speeds, and people mostly didn't edit internally at that time either. They used RAID 0 external arrays from companies like G-Technology over FireWire.
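A quick unit conversion shows why those codecs were HDD-friendly; the sustained HDD rate below is an assumed typical figure, not from the post:

```python
# Converting the codec's megabits to the drive's megabytes shows the margin.
codec_mb_s = 100 / 8        # 100 Mb/s XAVC-style 1080p -> 12.5 MB/s
hdd_mb_s = 100              # assumed sustained rate of a 7200 RPM SATA HDD
print(codec_mb_s, hdd_mb_s / codec_mb_s)  # -> 12.5 8.0 (8x headroom)
```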

In the 80GB SATA days it wasn't 8k. It wasn't RAW. Either compressed or not. And whether back then or now, editors still didn't need anything >2900MB/s. That was the point of everything I enumerated before but you have critically misunderstood. Fundamentally, repeatedly, and literally, you haven't worked with any of this footage. This is just theory for you. It isn't for me.
Perhaps you don't have much experience with video editing because what you are stating is simply not true in many scenarios.
I ask this repeatedly down below: is this your profession? How many projects have you completed? And what cameras were used?

Which of them required <400GB of space and were RAW?
Not sure what world you are living in, but there are many projects that don't use more than 10-30 minutes of footage professionally.
Making stuff up???
Again, context. That means best case scenario, nothing else on the HDD other than apps, OS, and project files. No one operates like that. No one.
Even if you "theoretically think they do", that means zero swap space. And they export to an external drive.
So if anyone is editing under 10-35 minutes worth of footage, that apparently isn't video editing according to you?
Wow, you might want to get over yourself and be less insulting to those who took time out of their lives to respond to you to have a discussion.
Cool, you do this for a living? Brass tacks.
Pro tip: You aren't as knowledgeable as you think you are.
What professional cameras have you shot on and then later edited? I would love a list of any and all cameras that you've used that shoot 6k or above resolution and work with any RAW format. We can expand it to "just" 4k and RAW if you'd like. Because either way that list is incredibly short.
EDIT: Skip that, just what cameras have you edited from? We can skip your camera knowledge and just move straight to editing.
I have done this.
From other threads you tended to relish this subject, especially in regards to Samsung.
Yet here it apparently bothers you that Apple is being called out on its double-speak and marketing nonsense - very ironic.
Again, misrepresentation. But certainly, in the comparison game: knowingly exposing workers to toxic chemicals, letting them die of cancer, and then denying it vs. not soldering a second SSD chip onto a MacBook Pro mainboard is equal. /s
Welcome to the ignore list.
With open arms.


Bottom line, and it's too late to answer this since I'm on your ignore list: but you still haven't shown an application that needs >2900MB/s. Which is the actual discussion here. Everything else I stated was stating specifically why your and DNX's complaining specifically doesn't matter. That was the purpose in comparison of the base level M2 Pro MBP's SSD vs an SATA SSD. So you've missed the forest for the trees.
 
I love my Mn series Macs.
Never get anything with less than 1TB storage so that shouldn't be an issue.
With the M1 Max the 2TB is slightly faster than the 1TB and the 4TB is slightly faster than the 2TB. But nothing as significant as the 512GB version of the M2.
96GB RAM comes in handy too.

The biggest thing though is noise. Rarely do I EVER hear the fans in my 16" M1 Max. The 14" yes I can. Even still, compared to my 12th gen XPS 9720 it's no contest. And when running on battery power (the PC) performance drops off a cliff.

(Finally) WiFi 6E, so AirDrops will exceed 1Gbps! About time!

I did get a M2 Air (1TB / 16GB) and like it much better than the 32GB/1TB "maxed out" Surface Laptop 5 I've been using. Mainly because it is absolutely SILENT! The Intel machine's fan is constantly whirring in the background which is annoying AF.
As I have stated earlier, my Lenovo Threadripper Pro would scream like a banshee when I put it under heavy workloads, and in the summer it required me to crack a window because the room got too hot too fast.
My Mac Studio while being a minor upgrade at best and a side grade at worst to the TRPro for my workloads is a fraction of the size, whisper quiet, and fits on the desk so on big report days it doubles as a coffee warmer but is still far cooler than my Lenovo ever was.
That said I do not like MacOS, I don't like the window management, and I miss File Explorer, and Safari is a PITA because nothing works with it so I had to create a VM instance on one of the servers that basically functions as a legacy browser so I can access the stuff I need to that still requires Internet Explorer.
 
When opening, programs rarely rely on sequential transfers.
If you used real-world applications in a work place on a daily basis, you would know that.

I also never stated applications open 8 times faster, that was the article that stated that.
I thought you said my part of the message, and the article mentioned in it, were bullshit; sorry. I imagine you then fully agree with the part of my message you quoted? (Which makes the "If you used real-world applications in a work place on a daily basis, you would know that" incredibly strange. I make real-world applications in my workplace on a daily basis, not just use them.)
 
Well, I for one am excited to get my $4200 MacBook Pro. I wanna see what these M chips can do. I would totally love to drop my Windows system just for 3D rendering and games and do everything on this MacBook. I did not know that Safari sucked so bad, so I will have to look into that.
 
Well, I for one am excited to get my $4200 MacBook Pro. I wanna see what these M chips can do. I would totally love to drop my Windows system just for 3D rendering and games and do everything on this MacBook. I did not know that Safari sucked so bad, so I will have to look into that.
eh. It's ok. I use firefox instead.
 
Newer safari ain’t bad. I still had to install edge because it’s the only way to use stupid Teams in web browser mode.
 
As I have stated earlier, my Lenovo Threadripper Pro would scream like a banshee when I put it under heavy workloads, and in the summer it required me to crack a window because the room got too hot too fast.
My Mac Studio while being a minor upgrade at best and a side grade at worst to the TRPro for my workloads is a fraction of the size, whisper quiet, and fits on the desk so on big report days it doubles as a coffee warmer but is still far cooler than my Lenovo ever was.
That said I do not like MacOS, I don't like the window management, and I miss File Explorer, and Safari is a PITA because nothing works with it so I had to create a VM instance on one of the servers that basically functions as a legacy browser so I can access the stuff I need to that still requires Internet Explorer.
Yes, the way to browse photos on shared drives is totally nuts compared to Windows' File Explorer, for sure.
I have other things that only work in Windows and use Royal TSX (the Mac equivalent of mRemoteNG) to connect to VMs.
The miniLED display is amazing, cannot wait until they have microLED displays!
 
Newer safari ain’t bad. I still had to install edge because it’s the only way to use stupid Teams in web browser mode.
Well, I can't even log into the Zoom account management portal through Safari: the login button clicks but nothing happens. The Microsoft VLSC page will take your username and password but, instead of logging you in, just kicks you back to the main page. The Adobe Enterprise portal doesn't let you switch between your tenants, so to go from my Adobe Sign admin stuff to my Creative Cloud admin I need to log out and log back in again; in Chrome or Edge I can just click my user portrait and swap between them, but in Safari that does nothing. Stuff like that.
 
Newer safari ain’t bad. I still had to install edge because it’s the only way to use stupid Teams in web browser mode.
Safari will always suck until Apple implements forward and backward mouse button support like every other browser since like 1997.
 
68k -> PPC killed me back in the day, like some others. It was absolutely brutal, and I had to drop Apple. The move from x86 -> ARM now is entirely different, and the software landscape is way different.
The move from x86 to ARM is much worse, but the difference is that there's a lot of open-source software today that makes up for the lack of proprietary software. This is why I daily-drive Linux: I can get away with using open-source alternatives. It's not like Mac OS X didn't already have a lackluster selection of software, but now that it's ARM the selection is getting worse. I still have a laptop that dual-boots Windows and Linux because some applications were made for Windows only - for example, since I fix my own cars: BMW's ISTA/INPA, Toyota's Techstream, and a bunch of tools for GM vehicles. If being x86 for over 10 years didn't gain Mac OS X any traction in software, then good luck being on ARM. Mac on ARM is like Linux, but without Lord Gabe Newell and the x86 platform for maximum performance and compatibility.
Which list of application opened 40% slower according to anandtech article ? how many second up to how many ?
Just makes sense. That's not entirely going to be a 1:1 relation, as some applications are smaller than others and will load almost as quickly on a slower SSD compared to a faster SSD. I imagine Apple's Safari web browser will load faster than Firefox or Chrome, since Apple can cache the browser in RAM as a method to boost loading speed. I did ask Linus Tech Tips to benchmark the SSD on the M1 Pro and the M2 Pro, and they did a quick and dirty test where they claim the M1 Pro is nearly double the performance of the M2 Pro. Unsurprisingly, they did a terrible job testing the SSD, but at least we can confirm the M2 Pro's 512GB is terrible.
https://youtube.com/clip/Ugkxd1hWNyFM9eJMSsIOn5nUIX0x_eG436i9
You honestly think that people don't want 20% more GPU/CPU performance where it matters in things like Resolve or Octane?
Probably, but if that were the case you wouldn't be using an M2-based MacBook Pro. If you're doing that kind of work you'd be on a desktop PC, especially at those prices, since you could build a much faster PC that'll destroy Resolve and Octane compared to any M2 Pro or Max. Also, this is 2023, and codecs like H.264 and H.265 aren't hot anymore. AV1 is the codec of choice, and the M2s as well as the M1s don't support it through Apple's media engine. Which means you're back on the CPU/GPU, and right now Intel's laptop CPUs are double the performance of the M2, and if you consider that Intel's Arc does have AV1 encoding and decoding support, it's clear that any serious video editing work should be done on Intel. As far as I know, the reason Apple doesn't have AV1 hardware is a dispute with Google, so good luck with that.

You literally can't point to a single application use case that needs faster SSD speeds. Because it doesn't exist.
Needs? No, but benefits from? Yes.
If their goal was actually nefarious like you seem to think, then they WOULD advertise the speed difference to get people to upgrade.
They don't because they hope nobody cares enough to take them to court. The reason nobody has is that Apple doesn't mention read and write speeds; if they did, they'd either piss off their customers or end up in court. Better to say nothing than to say something, because otherwise it could be considered false advertising.
512GB is 512GB. The maximum wear level is exactly the same regardless of if it's two chips or one.
Just gonna address this because the rest is not worth my time. If you write 1GB of data to one 512GB chip, then you're wearing out that chip faster compared to two 256GB chips. You don't just gain double the performance, you also gain double the longevity of the SSD. The more NAND chips you have, the better the performance and the better the longevity. This is not hard to understand. Every NAND chip has a finite amount of reads and writes before it goes bad. The amount of data you use isn't going to change, so 1GB is now spread across two 256GB chips, meaning only 512MB is being written to each NAND chip, which is half the wear.
 
Last edited:
Your example of why current Macs are a no-go because you fix cars and need ancient software is just as applicable to a modern Windows 11 laptop. Yes, I also have to keep an ancient laptop around to use ancient BMW and Volvo software on my cars. That isn't just an Apple issue. This is such a tired argument, mostly because it's not even Apple/Mac specific. No one on the face of the Earth is buying a brand new laptop to use with various highly proprietary controllers. If they are, they're idiots.

Not sure what you mean by lacking 'ARM selection'. Every single x86-developed Mac app has compatibility thanks to Rosetta 2. Every app moving forward is ARM. Yes, virtualization no longer works, but the majority of the people buying a Mac don't care about that.

Finally, ARM is pretty obviously the future for consumer devices. Just like x86 was obviously the pathway back in the PPC days.

Anyways, this thread is basically OT dead at this point.
 
The move from x86 to ARM is much worse, but the difference is there's a lot of open source software today that will make up for the lack of proprietary software. This is why I daily drive Linux: I can get away with using open source alternatives. It's not like Mac OSX didn't already have a lackluster selection of software, but now that it's ARM the selection is getting worse. I still have a laptop that dual boots Windows and Linux because there are just some applications that were made for Windows only. For example, because I fix my own cars: BMW's ISTA and INPA, Toyota's Techstream, and a bunch of tools for GM vehicles. If being x86 on Mac OSX for over 10 years hasn't gained any traction in software, then good luck being on ARM. Mac OSX on ARM is like Linux but without Lord Gabe Newell and the x86 platform for maximum performance and compatibility.
The reason why this hasn't mattered for millions of Mac users is because quantity of software doesn't matter. Only the quality of available options does.

Does it matter that PC has 20+ different rendering platforms? Or does it matter that it runs 3DSMax? Does it matter if it has 25 video editing programs? Or does it matter if it has Premiere?

That's pretty much the exact same case on Mac. It doesn't matter if there is access to millions of the same type of software; only the software that actually gets used (i.e., the proverbial "killer apps") actually gets users. Having access to "hundreds of millions of apps" sounds great on paper, except that for work people use at most a half-dozen to a dozen pieces of software. If those ~12 pieces of software are the best on any platform, there is ZERO desire to have access to more. Because having "access to more" that never gets used is meaningless.

Said simply:
Quality > Quantity.
and,
Daily used apps > esoteric never used apps

Needs? No, but benefits from? Yes.
Still haven't produced a single app that shows anything but drastically reduced benefit from anything faster than 2900MB/s.
They don't because they hope nobody cares enough to take them to court. The reason nobody has is that Apple doesn't mention read and write speeds; if they did, they'd either piss off their customers or end up in court. Better to say nothing than to say something, because otherwise it could be considered false advertising.
That doesn't make sense.
They would say:
512GB configuration: write speed: xxx
1TB configuration: write speed: yyy

That isn't false advertising. That would be stating a benefit. They don't state even the benefit because it's moot. But I basically take this as your acknowledgement that even internally Apple won't make a statement, because "if tested it would show no benefit" and they'd "therefore have to go to court".
Just gonna address this because the rest is not worth my time. If you write 1GB of data to one 512GB chip, then you're wearing out that chip faster compared to two 256GB chips. You don't just gain double the performance, you also gain double the longevity of the SSD. The more NAND chips you have, the better the performance and the better the longevity. This is not hard to understand. Every NAND chip has a finite amount of reads and writes before it goes bad. The amount of data you use isn't going to change, so 1GB is now spread across two 256GB chips, meaning only 512MB is being written to each NAND chip, which is half the wear.
Again, it's not. You would still have the same maximum TBW. You only get increased endurance from having higher-density chips.

If I wrote 100TB to 1x 512 vs 2x 256, there would absolutely not be a difference in longevity between those two devices. A 1TB single chip would have a higher TBW than 2x 256GB chips.

Do you remember that really long, super boring video you posted about non-user-replaceable SSDs leading to dead machines? Watch that video again; he specifically talks about this exact concept. I assume you're too lazy, since you won't even discuss all the inaccuracies you post.

So, here you go, queued up for you. Watch his exact section explaining TBW:



It is very explicitly related to the size of the SSD, and not the number of chips.

Stated another way from: https://www.howtogeek.com/806926/what-does-tbw-mean-for-ssds/

TBW rating is higher for larger capacity drives as they have more flash memory cells to write. For example, a typical 500GB SSD has a TBW of around 300, whereas 1TB SSDs usually have 600 TBW.

The only way what you're saying could make sense is if the TBW per GB on the 256GB chips were somehow greater than on the 512GB chip. And that is incredibly unlikely; scaling is pretty uniform. That hasn't been how it has worked on any consumer or enterprise drive when talking about higher density or quantity of chips.
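The capacity-scaling argument can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is a toy model, and the TBW-per-GB figure is an illustrative assumption derived from the typical numbers quoted above (~300 TBW per 500 GB), not a real drive spec:

```python
# Back-of-the-envelope check: drive endurance (TBW) scales with total
# capacity (total cell count), not with how that capacity is split across
# NAND packages. TBW_PER_GB is an illustrative assumption (~300 TBW / 500 GB).
TBW_PER_GB = 0.6

def drive_tbw(chip_capacity_gb, chip_count):
    """Total terabytes-written endurance, assuming uniform wear leveling."""
    return chip_capacity_gb * chip_count * TBW_PER_GB

# One 512 GB chip vs two 256 GB chips: identical total capacity,
# so identical endurance under this model.
one_chip = drive_tbw(512, 1)
two_chips = drive_tbw(256, 2)
```

Under this model the 1x512 and 2x256 configurations come out identical, and doubling capacity doubles TBW, which matches the scaling the quoted article describes.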
 
Last edited:
The move from x86 to ARM is much worse, but the difference is there's a lot of open source software today that will make up for the lack of proprietary software. This is why I daily drive Linux: I can get away with using open source alternatives. It's not like Mac OSX didn't already have a lackluster selection of software, but now that it's ARM the selection is getting worse.
What does OSX lack? I have the entire BSD ports tree, a massive selection of open source software that can be compiled from source (it’s still BSD under the hood) if not natively available, a massive pile of closed source software across almost any genre I can think of, Windows compatibility layers….

Short of things like your car tools (which tend to be pretty unique and limited to begin with, like most industrial apps), and maybe an enterprise app or two, I’m drawing a blank?
I still have a laptop that dual boots Windows and Linux because there are just some applications that were made for Windows only. For example, because I fix my own cars: BMW's ISTA and INPA, Toyota's Techstream, and a bunch of tools for GM vehicles. If being x86 on Mac OSX for over 10 years hasn't gained any traction in software, then good luck being on ARM.
There’s tons of mac software.
Mac OSX on ARM is like Linux but without Lord Gabe Newell and the x86 platform for maximum performance and compatibility.
That’s only for gaming.
Just makes sense. That's not entirely going to be a 1:1 relation, as some applications are smaller than others and will load almost as quickly on a slower SSD compared to a faster SSD.
That’s honestly not quite how data transfers or file system operations work. You’re right that some are smaller and some are larger, but the access method for opening a file and transferring the data into RAM doesn’t scale directly with throughput unless you’re dealing with exceptionally large data sets. The bigger advantage is the latency reduction on each file IO, since with many small files that will be the limiter more than throughput (directly related to IOPS, but different, which is why load times for programs on a SATA SSD are so similar to NVMe). You don’t actually load the entire binary from one end to the other into RAM. Not anymore, at least.
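To make that concrete, here's a toy model of an app load dominated by many small file reads. Every number below (file counts, per-IO latencies, throughputs) is a rough assumption for illustration, not a measurement:

```python
# Toy model of app loading: total time = per-IO latency cost + raw transfer
# time. All figures are rough assumptions for illustration, not benchmarks.
def load_time_ms(file_count, avg_file_kb, latency_us, throughput_mb_s):
    transfer_ms = (file_count * avg_file_kb) / 1024 / throughput_mb_s * 1000
    latency_ms = file_count * latency_us / 1000
    return transfer_ms + latency_ms

# 2,000 files of 64 KB each (a plausible app bundle):
sata = load_time_ms(2000, 64, latency_us=80, throughput_mb_s=550)   # SATA SSD
nvme = load_time_ms(2000, 64, latency_us=30, throughput_mb_s=2900)  # NVMe SSD
# Both finish well under half a second, so the ~5x throughput gap barely
# registers as perceived load time; per-IO latency dominates, not bandwidth.
```

Under these assumptions both drives load the app in a fraction of a second, which is why SATA-vs-NVMe app launch times feel so similar even though sequential throughput differs by several times.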
I imagine Apple's Safari web browser will load faster than Firefox or Chrome, since Apple can cache the browser in RAM as a method to boost loading speed.
Filesystem caching will or can work on any binary. I don’t know off the top of my head how HFS does it, or what it does, but being a home-grown app doesn’t change caching capabilities. They may preload the binary into RAM like Windows does with Edge, but in theory you can do that with anything.
I did ask Linus Tech Tips to benchmark the SSD on the M1 Pro and the M2 Pro, and they did a quick and dirty test where they claim the M1 Pro is nearly double the performance of the M2 Pro. Unsurprisingly, they did a terrible job testing the SSD, but at least we can confirm the M2 Pro's 512GB is terrible.
https://youtube.com/clip/Ugkxd1hWNyFM9eJMSsIOn5nUIX0x_eG436i9
Sure. It’s slower in a benchmark. No one argues that.
Probably, but if that were the case you wouldn't be using an M2-based MacBook Pro. If you're doing that kind of work you'd be on a desktop PC, especially at those prices, since you could build a much faster PC that'll destroy Resolve and Octane compared to any M2 Pro or Max.
Absolutely. I wouldn’t do heavy editing work on a laptop of any kind, personally - I have mega workstations for my real work. But if you can only have one system, and it needs to be portable at least some of the time…

Well, no one has made a real Threadripper laptop yet.
Also, this is 2023, and codecs like H.264 and H.265 aren't hot anymore. AV1 is the codec of choice, and the M2s as well as the M1s don't support it through Apple's media engine. Which means you're back on the CPU/GPU, and if you consider that Intel's Arc does have AV1 encoding and decoding support, it's clear that any serious video editing work should be done on Intel.
Will watch the video shortly, but I will say that AMD and NVIDIA also have their own encoders and decoders. No one uses a fixed-pipeline encoder like that for final output, though. You always do it on the CPU because of the ability to adjust outside the fixed pipeline in silicon to generate the highest quality (or smallest file, depending on your goal). The encoders and decoders are for previews and scrubbing rapidly while editing.
As far as I know, the reason Apple doesn't have AV1 hardware is a dispute with Google, so good luck with that.


Needs? No, but benefits from? Yes.

Some. For some use cases. Ones that likely rarely apply to a base model MacBook Pro. That’s the real rub here - this is the base model. If you’re doing heavy work, you don’t get the base model. But the base model drives sales for people that want the pro name (or bigger screen/etc). They could have just not offered it, but there are folks where it fits a use case. It’ll sell fine.
They don't because they hope nobody cares enough to take them to court. The reason nobody has is that Apple doesn't mention read and write speeds; if they did, they'd either piss off their customers or end up in court. Better to say nothing than to say something, because otherwise it could be considered false advertising.
I’m confused by what you’re trying to say here?
Just gonna address this because the rest is not worth my time. If you write 1GB of data to one 512GB chip, then you're wearing out that chip faster compared to two 256GB chips. You don't just gain double the performance, you also gain double the longevity of the SSD. The more NAND chips you have, the better the performance and the better the longevity.
Correction: the more cells you have, the more longevity on a per-TB basis. 512x1 and 256x2 will have (approximately) the same number of cells. Overprovisioning rates may be different between them - without the exact model numbers and internal cell rates per chip we won’t know - but it will be effectively close. There are many ways to combine NAND cells into an actual IC. Thus there is no longevity difference - to your point below, 512MB on 2 chips vs 1GB on one hits the same number of cells.
This is not hard to understand. Every NAND chip has a finite amount of reads and writes before it goes bad. The amount of data you use isn't going to change, so 1GB is now spread across two 256GB chips, meaning only 512MB is being written to each NAND chip, which is half the wear.
see above.
 
Stated another way from: https://www.howtogeek.com/806926/what-does-tbw-mean-for-ssds/

TBW rating is higher for larger capacity drives as they have more flash memory cells to write. For example, a typical 500GB SSD has a TBW of around 300, whereas 1TB SSDs usually have 600 TBW.

The only way what you're saying could make sense is if the TBW per GB on the 256GB chips were somehow greater than on the 512GB chip. And that is incredibly unlikely; scaling is pretty uniform. That hasn't been how it has worked on any consumer or enterprise drive when talking about higher density or quantity of chips.
Bingo. All comes down to cell counts. Trying to think of the easiest metaphor here, but assuming cell type is the same (TLC or MLC, given what we’re looking at here), you’re scaling the number of cells by capacity. So 2x256 has the same as 1x512. 1x1024 (if possible, not sure we’re at that density yet honestly) would have (approximately) double the prior number. There’s overprovision ratios that get tweaked at larger sizes, but it’s going to be close.

Doesn’t hold for Optane, or across manufacturing processes (there are different ways of organizing them), so we also have to assume it’s similar (probably safe since it’s pretty standardized, the one branded exception aside).

We’re also assuming a multi use cell - there are specialized ones that change controller priorities, use SLC cache, etc etc etc. but your statement is a safe assumption and starting point for a single vendor and two different but closely related SKU. 😁
 
The reason why this hasn't mattered for millions of Mac users is because quantity of software doesn't matter. Only the quality of available options does.
That really matters when you don't have the software to do the job.
Does it matter that PC has 20+ different rendering platforms? Or does it matter that it runs 3DSMax? Does it matter if it has 25 video editing programs? Or does it matter if it has Premiere?
Wait, 3DS Max isn't on Mac? Holy crap it isn't.
Still haven't produced a single app that shows anything but drastically reduced benefit from anything faster than 2900MB/s.
Not my fault nobody benchmarked it.
If I wrote 100TB to 1x 512 vs 2x 256, there would absolutely not be a difference in longevity between those two devices. A 1TB single chip would have a higher TBW than 2x 256GB chips.
SSD wear leveling. If you thrash one chip instead of two, you're wearing that one chip out faster.
So, here you go, queued up for you. Watch his exact section explaining TBW:



It is very explicitly related to the size of the SSD, and not the number of chips.

What he said isn't wrong, but the number of chips to wear-level across is also a factor.
Your example of why current Mac's are a no-go because you fix cars and need ancient software is just as applicable to a modern Windows 11 laptop.
The software I listed is actually updated constantly. INPA is community built, because you're technically not allowed to have ISTA. PCM Hammer is open source. The same goes for doctors' offices that have specialized machines that also only work on Windows. These aren't on Mac because nobody uses Mac for professional work.
Not sure what you mean by lacking 'ARM selection'. Every single x86-developed Mac app has compatibility thanks to Rosetta 2.
Just not the 32-bit ones.
Finally, ARM is pretty obviously the future for consumer devices. Just like x86 was obviously the pathway back in the PPC days.
Don't tell that to Apple in the past where they had ads that told the world that their G3 was twice as fast.

Anyways, this thread is basically OT dead at this point.
Not yet, the M2 Pro and Max reviews are just trickling in, and it's getting good. One thing I said 3 years ago about Apple's move to ARM was that it was a mistake, and the M2s are proving me correct. Whatever advantage Apple had is over, with AMD and Intel surpassing them. Intel is probably the most surprising, since their 13980HX is 20% faster single-threaded and double the multi-threaded performance of the M2 Max. Remember, a 20% faster CPU matters, as has been said often in this thread. Apple's GPU performance is just too far behind to matter. Intel probably has terrible battery life with the 13980HX, but faster is faster. The 13980HX was also just released, so we'll eventually see benchmarks with those, likely timed to rain on Apple's M2 Pro and Max parade. AMD will have their mobile CPUs out around March, so things won't look better for Apple in the coming months.

Gotta understand that while the M1 was impressive for 2020, that was the result of Imagination's GPU tech being... used by Apple while ARM filed for bankruptcy and was almost bought by Nvidia. This resulted in ARM pushing for stronger licensing. Who is developing Apple's silicon, because it isn't ARM and Imagination? The M2 Pro and Max aren't even built on ARMv9.
 
That really matters when you don't have the software to do the job.
Literally millions of Mac users have figured it out.
Wait, 3DS Max isn't on Mac? Holy crap it isn't.
Blender and Octane both are. Other Autodesk programs such as Maya and AutoCAD also are. So is Nuke. So like all things you buy the hardware for the software you want to run.
Not my fault nobody benchmarked it.
Tons of people have. You haven't looked. Every controversy gets massive press when it's Apple. Here's one example:



In actual usage the only time it matters is for massive file transfers (if it's a small file transfer of less than 50GB, the M2 Pro base is actually faster) and massive imports (we're talking about importing 100+ RAW files). It also can affect export times while multi-tasking. For everything else the base M2 Pro is faster than the base M1 Pro. (Rendering, encoding, working in timelines, gaming, etc).

However, these tests matter very little: while we've fought over the 512GB model, you can simply get the 1TB model and have a more performant system in every aspect (SSD, CPU/GPU). Which we have stated basically every power user of these systems would do. I haven't bought a 512GB internal machine at this point in probably 10 years. And that's true of most other professional users that buy Macs. So regardless of whether we're talking M1 Pro or M2 Pro, I'd still be paying for that 1TB upgrade (at minimum). This whole discussion has basically been moot.

This is again the point we've stated from the beginning. Now 4 pages worth of:
1.) For a majority of applications base M2 Pro is faster than base M1 Pro.
2.) Base M2 Pro is 2900MB/s, that is not slow by any definition, it's just not as fast as it could be.
3.) Even in cases when it's not they can reach parity by moving to 1TB. (It's actually faster than parity, the M2 Pro's SSD is faster than M1 Pro when upgraded).
4.) Most power users will want 1TB+ to begin with.
5.) People that only want 512GB will likely never notice. And in a blind A/B test it would be impossible to perceive. Only in side by side testing would you notice. In all the testing there is no order of magnitude difference when the M2 Pro base loses.
6.) When looking at fully upgraded machines ($4000 M1 Max vs $4000 M2 Max, original MSRP vs original/current MSRP), there is no aspect that is faster on the old machine vs the new one. Both CPU and GPU core counts have gone up on the newer machines giving more performance virtually across the board. While actually improving battery life.

I've given you another list saying the same stuff because you won't absorb the info that at least 3 people have been telling you for 2 pages. This just continues to be a talking point for you without enough of a change to matter. It just "feels bad man".
You're complaining about a machine that you will never buy. "White knighting" on behalf of users that you have twice stated are liars, implied are idiots, and now called "not professionals". Just so you can complain about Apple.
It's no different than on a PC. If you need the performance: buy it. If you don't you don't. That's basically what all the nVidia 4080/4090 apologists tell me despite it being terrible price/performance and nVidia intentionally withholding RAM on their 4070Ti tier making the 3090 a better buy.
SSD wear leveling. If you thrash one chip instead of two, you're wearing that one chip out faster.
Again, that is dependent on the number of cells. The number of cells between the two are the same. More chips doesn't mean more cells. You get more cells through density.
What he said isn't wrong, but the number of chips to wear-level across is also a factor.
Only if the TBW rating per cell is different. If the cells are made on the same process, then it's highly unlikely for it to be any different other than scaling linearly with size.
The software I listed is actually updated constantly. INPA is community built, because you're technically not allowed to have ISTA. PCM Hammer is open source. The same goes for doctors' offices that have specialized machines that also only work on Windows.
I've said repeatedly that you buy the machine that runs the software you want to run.
If you want to game, buy a PC too. There has been no one who has said otherwise.
You know the reverse is also true right? If I want to run Logic or FCPX, Windows can't run it. What's your point?
These aren't on Mac because nobody uses Mac for professional work.
You're either a liar, oblivious, or trolling. There is no fourth option. For specifically car ECU modification? Yes. Professionals in general? Really?
The irony to me is that in multiple videos you've posted regarding the M2 Pro, the user base has purchased and prefers to use MacBook Pros. Such as multiple users at LTT, including Linus himself. Every time a video editor is brought up in any of your videos, they're all on Macs. This goes double if we're talking about editors using laptops, which is the context of everything we're talking about.
Red Falcon complains about his perception of my statements - but frankly the level of condescension from you about users on the Mac platform is far higher than anything I've ever said.
Just not the 32-bit ones.
Please, reply with all the macOS 32-bit productivity apps that users actually want right now. I'll wait. Meanwhile, the rest of us actually on Macs haven't said word one about needing 32-bit app support. This is just another talking point for you to complain about Apple, and it's completely in a void. It doesn't matter to anyone using Macs daily to do work.
Not yet, the M2 Pro and Max reviews are just trickling in, and it's getting good. One thing I said 3 years ago about Apple's move to ARM was that it was a mistake, and the M2s are proving me correct. Whatever advantage Apple had is over, with AMD and Intel surpassing them. Intel is probably the most surprising, since their 13980HX is 20% faster single-threaded and double the multi-threaded performance of the M2 Max. Remember, a 20% faster CPU matters, as has been said often in this thread. Apple's GPU performance is just too far behind to matter. Intel probably has terrible battery life with the 13980HX, but faster is faster. The 13980HX was also just released, so we'll eventually see benchmarks with those, likely timed to rain on Apple's M2 Pro and Max parade. AMD will have their mobile CPUs out around March, so things won't look better for Apple in the coming months.

Gotta understand that while the M1 was impressive for 2020, that was the result of Imagination's GPU tech being... used by Apple while ARM filed for bankruptcy and was almost bought by Nvidia. This resulted in ARM pushing for stronger licensing. Who is developing Apple's silicon, because it isn't ARM and Imagination? The M2 Pro and Max aren't even built on ARMv9.
That's a pretty funny assessment considering the video you linked is literally called: "Windows doesn't have an answer - M2 MacBook Pro 14" and 16"

I do agree with you that faster is faster. But it's also not in a void. That video, whose title is above notes that most users are going to care about having a laptop you can legitimately not have to charge for two weeks (this is his statement, not mine). There is performance and there is performance.

While undoubtedly Wintel is going to win this round on desktop, in the mobile space (which, by the way, is the type of machine a majority of users buy) there is little desire to have that trade-off. And that's coupled with the fact that on battery power the Mac will still perform exactly the same as it does on wall power, whereas the PC will undoubtedly choke and lose its entire performance advantage. (This is also ignoring Windows' sleep and power management functions, which LTT lambasts all PC laptops over.) Basically, if you intend to use a laptop as a laptop, Mac is still far ahead.

Bottom line: competition is good. And the target always moves. Declaring a performance winner for one round doesn't matter much. I've been watching AMD/Intel fight for almost 3 decades. Sometimes one is ahead, sometimes the other is ahead. It's no different than with Apple/ARM.
The reason to buy one platform over the other for power users is still mostly software. For non-power users that just do browsing/social/streaming/office: there is basically no PC equivalent that a 16GB RAM M1/M2 MBA isn't better at, owing mostly to its mix of keyboard, trackpad, battery life, display quality, webcam, microphones, size, weight, no fan noise, and speed.
 
Last edited:


Note - Skip through video because most Apple reviews are done by annoying people. However, the 3DMark test in particular shows how fast the GPU side is in the new M2 chips.

Granted, we're about to have new laptops with mobile 4090 which will likely prove to be faster while plugged in - But when you account for the performance on battery the M2 Max just blows away anything on the PC side.
 


Note - Skip through video because most Apple reviews are done by annoying people. However, the 3DMark test in particular shows how fast the GPU side is in the new M2 chips.

Granted, we're about to have new laptops with mobile 4090 which will likely prove to be faster while plugged in - But when you account for the performance on battery the M2 Max just blows away anything on the PC side.

Much of it isn’t technically the GPU but all those accelerators Apple crams in there; for specific workloads that use them, Apple is a straight-up boss. But if you are doing something where they aren’t used, then it’s hardly better than the Intel iGPUs.
Regardless, it’s a strong integration of different compute elements, and Apple nailed their implementation. It’s going to be a struggle for Microsoft to catch up on that unless both AMD and Intel add similar logic to their CPUs with a clean series of APIs.
Apple threw down a gauntlet, I’m still waiting to see if they can pick it up.
 
TSMC 5 beating a mix of Samsung 8 and Intel 10, which are starting to be a bit old by now, is not too surprising.

A Lovelace / Ryzen 7xxx mobile comparison (i.e. TSMC 5 vs TSMC 5) would be more interesting by the end of this month.
 
TSMC 5 beating a mix of Samsung 8 and Intel 10, which are starting to be a bit old by now, is not too surprising.

A Lovelace / Ryzen 7xxx mobile comparison (i.e. TSMC 5 vs TSMC 5) would be more interesting by the end of this month.
Not quite…
The 7000 series uses 6, 5, or 4nm depending on the model; it’s going to be interesting for sure, but performance is going to be all over the board.

It’s the AI functions of the 7040 I am interested in. I want to see how that logic can be put to use and how it’s going to compare to the accelerated logic in Apple silicon.
 
That performance drop on the MSI machine when on battery, though... ouch. I know that won't be true of every Windows laptop, and I want to see how new Ryzen systems fare (not to mention newer GPUs), but it makes you question why people would even consider some modern x86 laptops for creative workflows if they expect to step away from a wall outlet for any meaningful amount of time.
 
That performance drop on the MSI machine when on battery, though... ouch. I know that won't be true of every Windows laptop, and I want to see how new Ryzen systems fare (not to mention newer GPUs), but it makes you question why people would even consider some modern x86 laptops for creative workflows if they expect to step away from a wall outlet for any meaningful amount of time.
I just don't see the PC side of the house coming anywhere near matching what Apple is doing unless there are substantial changes to architecture. Apple can only achieve this because of everything being in that same package and using very little power.

Even more impressive is that Apple is offering nearly that same level of performance in a 14'' device - Not just the 16'' device.
 
That performance drop on the MSI machine when on battery, though... ouch. I know that won't be true of every Windows laptop, and I want to see how new Ryzen systems fare (not to mention newer GPUs), but it makes you question why people would even consider some modern x86 laptops for creative workflows if they expect to step away from a wall outlet for any meaningful amount of time.
Yeah - If I'm on my laptop, I'm mobile. I can't do 2.5 hours of battery, or major performance drops, or any of that - I grew out of that a long time ago. I've been on my MacBook Pro since 9:30 this morning (2.5 hours) and have used 17% of my battery so far, on a constantly running Zoom call, screen active the entire time, typing notes and editing stuff in Salesforce.
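A quick back-of-envelope projection from those numbers (a sketch only, assuming drain stays roughly linear with this workload):

```python
# Projected battery life from the figures above:
# 17% of the battery used over 2.5 hours of a Zoom call.
hours_elapsed = 2.5
fraction_used = 0.17

projected_runtime = hours_elapsed / fraction_used  # hours on a full charge
print(f"Projected runtime: {projected_runtime:.1f} h")  # ~14.7 h
```

Real drain won't be perfectly linear, but even halving that estimate comfortably clears a full workday.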
 
I just don't see the PC side of the house coming anywhere near matching what Apple is doing unless there are substantial changes to architecture. Apple can only achieve this because of everything being in that same package and using very little power.

Even more impressive is that Apple is offering nearly that same level of performance in a 14'' device - Not just the 16'' device.
It's not just the packaging; Apple has a very, very high degree of feature integration that Microsoft can't match because there are too many CPUs.
When Apple launched the M1, it came with a lot of incredibly specific accelerators for all sorts of tasks that macOS is designed to use.
Those very specific accelerators are incredibly efficient at what they do and go a long way toward decreasing power usage while increasing speed.
Neither Intel nor AMD has those sorts of features integrated to that degree, and even if they did, it would be on such a small product lineup that it would take Microsoft a long time to get those features integrated into Windows.
I sincerely hope this is changing. You can see AMD has included its "AI" hardware in the 7040 series, and it claims it is working with Microsoft to get those features baked into Windows 11. How well that pans out remains to be seen, but it is at least a start in the right direction.
 


Note - Skip through the video, because most Apple reviews are done by annoying people. However, the 3DMark test in particular shows how fast the GPU side is in the new M2 chips.

Granted, we're about to get new laptops with the mobile 4090, which will likely prove faster while plugged in - but when you account for performance on battery, the M2 Max just blows away anything on the PC side.

Max Tech is pretty much an Apple-only reviewer, and those types don't exactly benchmark well. First, the MSI Creator Z16P has twice as much RAM and SSD storage, so the price comparison here isn't apples to apples. His testing involved seeing how quickly he could zoom in and out of images. He then ran Speedometer to test web browser speed without telling us which browser he used. I also like how he spent minutes talking about power and heat but only briefly touched on the MSI winning the benchmark, and how in 3DMark the M2 Max wins but in Blender it gets destroyed. It's also odd that his 3DMark result is a frame rate and not the score you usually get from 3DMark; I can't even find frame-rate results on 3DMark's website, and it's not the only 3DMark test he could have run. He then tests video encoding, and yes, the M2 Max wins - but with H.264 or H.265? He never actually says; only a brief photo is shown. He should have included AV1.

This is what I mean by Apple-only reviewers benchmarking in ways that favor Apple and not actually explaining how they tested. Also, you can buy a 13980HX with an RTX 4080 to test against the M2 Max, so the Intel test would be current and not against old hardware.
https://www.bestbuy.com/site/asus-r...-1tb-ssd-eclipse-gray/6531333.p?skuId=6531333
 
Did it really lose in Blender when it's a laptop, and when used as a laptop on battery it's an order of magnitude faster?

You spend a lot of time nitpicking here when you aren't seeing the bigger picture. The new hardware just blows everything away when used as a laptop. That's doubly impressive given that it not only performs as well as it does on battery, but still maintains decent battery life even when running at full tilt.
 
Literally millions of Mac users have figured it out.
I'm a Linux user, and I can tell you that I haven't figured it out, and I'd like to be in a position where I don't have to. Also, as a Linux user I probably have an easier time running old/current Windows applications, compared to an ARM Mac.
Tons of people have. You haven't looked. Every controversy gets massive press when it's Apple. Here's one example:


Oh hey, you did. And he tested it on an external drive... Then tested the CPU... Ah, finally, real tests.

Lightroom M1 Pro = 8s
Lightroom M2 Pro = 18s

Lightroom Paste M1 Pro = 59s
Lightroom Paste M2 Pro = 1m 5s
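Working those timings out (a quick sketch using only the numbers quoted above):

```python
# Relative slowdown of the M2 Pro vs the M1 Pro in the quoted
# Lightroom tests (times in seconds; 1m 5s = 65s).
import_m1, import_m2 = 8, 18
paste_m1, paste_m2 = 59, 65

print(f"Import: {import_m2 / import_m1:.2f}x slower")  # 2.25x
print(f"Paste:  {paste_m2 / paste_m1:.2f}x slower")    # 1.10x
```

So the import case is where the slower SSD really shows; the paste case barely moves.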

He then tested exporting, which is faster on the M2 Pro but isn't limited by the SSD at all, because he's exporting. He then begins to test more CPU-bound stuff and not the SSD itself, including gaming. If you're testing the SSD for real-world stuff, you test how quickly an application launches, not CPU and GPU stuff.

But yeah, there is some outrage over Apple putting slower SSDs in its 512GB M2 models.

Starts at 4:00 mark.
You're either a liar, oblivious, or trolling. There is no fourth option. For specifically car ECU modification? Yes. Professionals in general? Really?
Anything a Mac can do, the PC can do better, and lots more of it. In any professional environment you don't see Macs, with the exception of video editing and DJs.
 
He then tested exporting which is faster on the M2 Pro, but isn't limited at all the SSD because he's exporting. He then begins to test more CPU bound stuff and not the SSD itself, including gaming. If you're testing the SSD for real world stuff you test how quickly an application launches, not CPU and GPU stuff.
Application launch is usually quite CPU-heavy; the Lightroom image import here is probably a good way to get close to real-world large sequential read speed testing.
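If you want to sanity-check large sequential reads directly, here's a minimal sketch in Python. The file path is a placeholder; for a meaningful number, use a file several GB in size (larger than the OS page cache) or drop caches first, since a cached read measures RAM, not the SSD.

```python
import time

def sequential_read_speed(path, chunk_size=8 * 1024 * 1024):
    """Read `path` sequentially in large chunks and return throughput in MB/s."""
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        # Walrus assignment loops until read() returns an empty bytes object (EOF).
        while chunk := f.read(chunk_size):
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

# Example usage (placeholder path):
# print(f"{sequential_read_speed('/path/to/large_file.bin'):.0f} MB/s")
```

On a SATA SSD this should top out near 500 MB/s, which is exactly the ceiling being argued about above.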
 