Apple's M2 looks like a beast.

It's not, though; the guy is simply walking around an already-made scene.

And as pointed out below:
Ray tracing, Lumen, and Nanite are not supported on Mac. The reflections in this scene are standard probes and planar reflections.

I think we can safely discard both, speaking not as UE devs but as people wanting to make content...
Raytracing and Lumen have been working since preview 2.

Nanite is going to be a while, as Metal currently has a limit of 500,000 resources per argument buffer and Nanite needs at least 1 million.

That resource buffer limitation is one of the reasons there are so few DX12 ports to Mac currently.

Apple is aware that that limit is a huge thorn in our collective sides and they say they are tackling the issue but don’t have a current ETA.
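If you just want to see which argument-buffer tier a given GPU reports, here is a minimal Swift sketch. To be clear, the 500,000 figure above comes from Apple's Metal feature-set tables for Tier 2 hardware; it isn't something you can query at runtime, so this only tells you the tier and the max buffer length.

```swift
import Metal

// Probe the default GPU's argument-buffer support.
// The per-tier resource ceilings themselves live in Apple's feature-set
// tables, not in any runtime API, so we can only report the tier here.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

switch device.argumentBuffersSupport {
case .tier1:
    print("Argument buffers: Tier 1 (tight per-stage resource limits)")
case .tier2:
    print("Argument buffers: Tier 2 (much larger, but still capped, resource counts)")
@unknown default:
    print("Argument buffers: unknown tier")
}

print("Max single buffer length: \(device.maxBufferLength / (1024 * 1024)) MB")
```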
 
In fairness, the RDNA2 graphics on Zen 4 are quoted as being “serviceable” for troubleshooting; we aren't talking games here. It's Intel UHD class at best.

They're calling their integrated graphics RX 660M and RX 680M for a reason.
 
Raytracing and Lumen have been working since preview 2.
Yes, but not in the version shown in the video we are looking at, from my understanding. For both videos we don't have an actual UE dev giving solid feedback; looking at my video that seems to be the case (and he is honest about it, being an artist working on the asset side).
 
The main performance increase in the M2 seems to be related to the GPU, since as Apple said they have "UP TO" twice as many GPU cores compared to the M1. Which is odd, since it only adds 35% more performance even though they added twice as many GPU cores. Again, Apple is comparing their GPU performance to integrated graphics from PCs, which according to Apple is a 2.3x increase. Kinda sad since they have 100GB/s of unified memory bandwidth, which is more than an RTX 3090.

No, not even close to a 3090. A 3090 has a memory bandwidth of 930GB/sec, with the Ti version just breaking 1TB/sec. 100GB/sec is faster than your normal desktop CPU, those usually being more in the 40-50GB/sec range, and is more in line with a server or workstation. For graphics, 100GB/sec is pretty low end. A 1050 Ti has about 112GB/sec of memory bandwidth.

So it is pretty reasonable for an integrated GPU, as I said, about double what you'd get from an Intel integrated GPU, but nothing to write home about compared to dedicated accelerators or the game consoles or the like.
 
Yes, but not in the version shown in the video we are looking at, from my understanding. For both videos we don't have an actual UE dev giving solid feedback; looking at my video that seems to be the case (and he is honest about it, being an artist working on the asset side).
Yeah, developing for Mac is weird; you have to make accommodations for Metal's lack of game-centric design. Things you can do in DX12 and Vulkan with asset management you just can't do on a Mac, so when configuring your dev environment you have to make tweaks.

This is further complicated by Apple's past approaches to graphics and displays. They could skimp on the actual rendering and make up for it with stunningly beautiful displays; the panels they use cover up a lot of weaknesses in the Metal API. The Metal API is now pretty damned old, and it needs a serious overhaul if Apple is going to try and win over game developers. I'm hoping that the new Metal 3 API introduces a lot of those fixes and features.
 
The OpenBSD 7.1 release notes claimed that "support for Apple Silicon Macs has improved and is ready for general use", though "ready for general use" means different things to different people; e.g., forget about accelerated graphics. I've not compared the supported features of OpenBSD with those of Asahi Linux. The latter is geared toward a different audience.

(People, please try to contain your excitement.)
 
No, not even close to a 3090. A 3090 has a memory bandwidth of 930GB/sec, with the Ti version just breaking 1TB/sec. 100GB/sec is faster than your normal desktop CPU, those usually being more in the 40-50GB/sec range, and is more in line with a server or workstation. For graphics, 100GB/sec is pretty low end. A 1050 Ti has about 112GB/sec of memory bandwidth.

So it is pretty reasonable for an integrated GPU, as I said, about double what you'd get from an Intel integrated GPU, but nothing to write home about compared to dedicated accelerators or the game consoles or the like.
With DDR5, 100GB/sec is getting easy to reach, so mainstream PCs are catching back up too.
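For a rough back-of-envelope, peak bandwidth is just transfer rate times bus width. The figures below are my own assumptions pulled from public spec sheets, not measurements:

```swift
// Peak theoretical bandwidth (GB/s) = transfer rate (MT/s) x bus width (bytes) / 1000.
// All of these numbers are spec-sheet assumptions, not benchmarks.
let m2Lpddr5    = 6_400.0  * 16 / 1_000   // ~102 GB/s: LPDDR5-6400 on a 128-bit bus
let ddr5Desktop = 6_000.0  * 16 / 1_000   // ~96 GB/s:  dual-channel DDR5-6000
let gtx1050Ti   = 7_000.0  * 16 / 1_000   // ~112 GB/s: 7 Gbps GDDR5, 128-bit bus
let rtx3090     = 19_500.0 * 48 / 1_000   // ~936 GB/s: 19.5 Gbps GDDR6X, 384-bit bus

for (name, gbps) in [("M2-class LPDDR5", m2Lpddr5),
                     ("Desktop DDR5-6000", ddr5Desktop),
                     ("GTX 1050 Ti", gtx1050Ti),
                     ("RTX 3090", rtx3090)] {
    print("\(name): \(gbps) GB/s peak")
}
```

So unified memory puts the M2 roughly where a fast dual-channel DDR5 desktop sits, an order of magnitude below a 3090.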
 
Nothing really to be locked into. I move from Windows to Mac and back all day. My gaming laptops are Windows. My productivity laptops are Mac. My gaming PCs are Windows. Everything works fine. It's not 1998, anymore.

Biggest issue is ARM and VMs - you can't (reasonably) emulate anything that isn't ARM, obviously. But with Docker and other options it is getting easier.
My point is that M1 and M2 are Apple-only things.

Just as I'm moderately interested in IBM Power or Oracle Sun Sparc or Intel Itanium.

Edit: Becomes a bit more interesting with the Linux port, which is pretty early on. But even still it's much like the following:

Do you have Linux ported to the M1?

Somewhat, but Apple won't help.

Apple: We're releasing a new firmware, Try it now.

Yep, it broke everything...

... repeat...
 
Here we go again with PC fanboys refusing to believe that Apple now makes the fastest laptops in the world in their respective form factors (Ultrabook, thin-and-light 13 and 14).
They make the most power-efficient laptops in those form factors. A Ryzen laptop with an RTX 3060 would easily beat any M1 in performance.

I have a loaded MBP 14. I'll paypal you $10 if anyone here has a 13 or 14 inch laptop in the same class (not a 4 inch thick desktop replacement) which outperforms it in any of these benchmarks or specs:

Cinebench
Storage Space (8TB)
Amount of RAM (64GB)
Storage Speed (read and write)
Geekbench
Premiere Pro video 4k video rendering
Amount of time the fan doesn't run
Handbrake encode
Are storage space and RAM on an M1 Mac really impressive to you when the price is that large? I could get 8TB of storage if I wanted to spend over $700. Also, how much does the laptop cost with those specs? The rest of the benchmarks were beaten by the Ryzen 5950X. The benchmarks that show this start at 3:23 if my link doesn't take you there.


Also, Geekbench, Cinebench, Premiere Pro, and Handbrake are all really good games, but you forgot to mention Photoshop. Still all faster on an RTX 3060.

When you factor in that most PC laptops can't even win at one of these things and Apple combines all of them - plus killer battery life, great thermals, an accurate screen, etc, there is nothing even close from an overall system performance standpoint in the PC world.
You can add bad thermals to the M1, since Apple updated them and they now easily reach 90C+ frequently. Battery performance is still better on M1, but as Apple releases new iterations of the M1 they have started to lose some of that battery life, while AMD's newer laptop chips have nearly doubled in battery performance. This is with the 6900HS a few months ago, and now that AMD is already moving onto the 7000 series with 5nm and RDNA2 graphics, we can expect this gap to close even further. I guarantee you that Apple's move to the M2, with more GPU cores and still using 5nm, will probably get slightly worse battery life compared to the M1. So who knows what an Apple M2 vs Ryzen 7000 will look like in battery life. Probably not enough to care.



Even little things like the headphone jack in the new Pros....it can actually drive my Sennheiser HD650s to deafening volumes whereas on the previous gen I needed a standalone amp. It's just better.
Is going deaf from loud headphones a special thing? I don't see this as a Pro.
 
The rest of the benchmarks were beaten by the Ryzen 5950X. The benchmarks that show this start at 3:23 if my link doesn't take you there.
That is a joke of a review.....
The M1 soundly beats the 5980HX.
You don't generally compare mobile parts to desktop parts and call it a fair comparison
 
You don't generally compare mobile parts to desktop parts and call it a fair comparison

Why not? The mobile part is only 100MHz slower. Also, it's a generation old; the current 6000 mobile series is already shipping.

I own Macs and Windows machines. And a Deck. I have no dog in the fight. Or rather, I have both dogs in this fight. Saying Apple charges more for hardware at parity with its competitors' systems is like saying the sky is blue. There's nothing at all controversial about it.
 
Why not? The mobile part is only 100MHz slower. Also, it's a generation old; the current 6000 mobile series is already shipping.

I own Macs and Windows machines. And a Deck. I have no dog in the fight. Or rather, I have both dogs in this fight. Saying Apple charges more for hardware at parity with its competitors' systems is like saying the sky is blue. There's nothing at all controversial about it.
Well, the M1 launched before either the 5900HX or the 5980HX, so those are AMD's latest and greatest mobile CPUs.
But why stop the comparison there at the 4x larger and 10x hungrier 5950X? Why not then compare the 5950X against the similarly larger Xeon and EPYC parts? I mean, yeah, the EPYCs are 2x the size and use 3x more power, but they will run circles around it while being 1200MHz slower.
 
Well, the M1 launched before either the 5900HX or the 5980HX, so those are AMD's latest and greatest mobile CPUs.

I was saying AMD's desktop and mobile performance are not really all that different.

And the 6000 series is currently available. 208mm² versus the M2's 145mm², so not that much larger, with 8 full cores, 16 threads, probably similar if not stronger integrated graphics, and a TDP that's configurable down into the 15-28W range (the M2 is 10-20W).

They're both doing cool things, and for the first time in my life, I can confidently say I'd rather have an Apple CPU than an Intel CPU, but that says more about Intel than Apple. And I still think AMD is sandbagging their APUs in terms of graphics; they could do way more and it's annoying that they aren't.

Hopefully Apple can get AMD to make a real gaming APU and get our waiting over with.
 
I was saying AMD's desktop and mobile performance are not really all that different.

And the 6000 series is currently available. 208mm² versus the M2's 145mm², so not that much larger, with 8 full cores, 16 threads, probably similar if not stronger integrated graphics, and a TDP that's configurable down into the 15-28W range (the M2 is 10-20W).

They're both doing cool things, and for the first time in my life, I can confidently say I'd rather have an Apple CPU than an Intel CPU, but that says more about Intel than Apple. And I still think AMD is sandbagging their APUs in terms of graphics; they could do way more and it's annoying that they aren't.

Hopefully Apple can get AMD to make a real gaming APU and get our waiting over with.
There are significant differences between AMD's desktop and mobile parts...
The AMD 6980HX manages to tie the M1 Max in most performance tests, while the M1 still uses 15W less. But now you are comparing AMD's brand-new top-of-the-line mobile processor against Apple's now 2-year-old part. Which is at least a fair comparison, as will be the equally fair comparisons of the M2 against the 6980HX, the similarly timed Intel 13th-gen mobile parts, and of course the older M1 parts.
 
There are significant differences between AMD's desktop and mobile parts...
The AMD 6980HX manages to tie the M1 Max in most performance tests, while the M1 still uses 15W less. But now you are comparing AMD's brand-new top-of-the-line mobile processor against Apple's now 2-year-old part. Which is at least a fair comparison, as will be the equally fair comparisons of the M2 against the 6980HX, the similarly timed Intel 13th-gen mobile parts, and of course the older M1 parts.
The 6900HS is an 8-core CPU while the M1 Pro is a 10-core. Even still, this video shows that it beats the M1 Pro. It also beats it in DaVinci Resolve. HandBrake was twice as fast on the 6900HS as well. 7-Zip, V-Ray, etc. are all faster on the 6900HS. The only notable benchmark where it lost was Adobe Premiere. He also shows that the M1 Pro used half as much energy, but we all knew that Apple's M1s were more power efficient.

Here's the thing though: the 6900HS is new but uses a lot of old AMD tech, relatively speaking. Finally a CPU from AMD that uses RDNA2 graphics... the same graphics technology that was introduced in the Xbox Series X and PS5 about 2 years ago. The Zen 3 cores were also introduced about 2 years ago. The 6nm process is still inferior to the M1's 5nm, which the M2 is going to continue to use. The Ryzen 7000 series will continue to use RDNA2 while also using the new Zen 4 cores and 5nm like the Apple M1.

I'm not saying that AMD's 7000 chips are going to be a massive upgrade from the 6000 chips, but like the M2 they will be faster, and unlike the M2 they will be more efficient than their predecessor. Probably not as efficient as the M1, but the gap is closing. Remember, Apple has only so much time to get people on board their ARM laptops before x86 catches up in efficiency, and trust me, it will.

 
The 6900HS is an 8-core CPU while the M1 Pro is a 10-core. Even still, this video shows that it beats the M1 Pro. It also beats it in DaVinci Resolve. HandBrake was twice as fast on the 6900HS as well. 7-Zip, V-Ray, etc. are all faster on the 6900HS. The only notable benchmark where it lost was Adobe Premiere. He also shows that the M1 Pro used half as much energy, but we all knew that Apple's M1s were more power efficient.

Here's the thing though: the 6900HS is new but uses a lot of old AMD tech, relatively speaking. Finally a CPU from AMD that uses RDNA2 graphics... the same graphics technology that was introduced in the Xbox Series X and PS5 about 2 years ago. The Zen 3 cores were also introduced about 2 years ago. The 6nm process is still inferior to the M1's 5nm, which the M2 is going to continue to use. The Ryzen 7000 series will continue to use RDNA2 while also using the new Zen 4 cores and 5nm like the Apple M1.

I'm not saying that AMD's 7000 chips are going to be a massive upgrade from the 6000 chips, but like the M2 they will be faster, and unlike the M2 they will be more efficient than their predecessor. Probably not as efficient as the M1, but the gap is closing. Remember, Apple has only so much time to get people on board their ARM laptops before x86 catches up in efficiency, and trust me, it will.


That laptop is doing a hell of a lot better than the one I found, that's for sure. That's good though; hopefully AMD can get enough time at TSMC to get them out there. I'm due for a new one in October and I want it in the running.
 
Interesting to me that the base M2 has ProRes encode/decode now, whereas with the M1 only the Pro/Max/Ultra variants had it. Also interesting that it will drive a 6K display, which Apple will happily sell you for stratospheric prices.
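If you want to sanity-check what the media engine on a given Mac reports, here is a rough Swift sketch. It only covers the hardware decode side via VideoToolbox; I'm assuming these codec constants as an illustration, this isn't something Apple's M2 marketing spells out.

```swift
import CoreMedia
import VideoToolbox

// Ask VideoToolbox whether hardware decode is available for a few codecs.
// This is a capability probe for decode only; encode support isn't covered here.
let codecs: [(String, CMVideoCodecType)] = [
    ("ProRes 422",  kCMVideoCodecType_AppleProRes422),
    ("ProRes 4444", kCMVideoCodecType_AppleProRes4444),
    ("HEVC",        kCMVideoCodecType_HEVC),
]

for (name, codec) in codecs {
    let hw = VTIsHardwareDecodeSupported(codec)
    print("\(name): hardware decode \(hw ? "available" : "not available")")
}
```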

I am still waiting for my 16" MacBook Pro with the M1 Pro in it. For my work as a video engineer, it will be adequately fast for its price.
 
The 6900HS is an 8-core CPU while the M1 Pro is a 10-core. Even still, this video shows that it beats the M1 Pro. It also beats it in DaVinci Resolve. HandBrake was twice as fast on the 6900HS as well. 7-Zip, V-Ray, etc. are all faster on the 6900HS. The only notable benchmark where it lost was Adobe Premiere. He also shows that the M1 Pro used half as much energy, but we all knew that Apple's M1s were more power efficient.

Here's the thing though: the 6900HS is new but uses a lot of old AMD tech, relatively speaking. Finally a CPU from AMD that uses RDNA2 graphics... the same graphics technology that was introduced in the Xbox Series X and PS5 about 2 years ago. The Zen 3 cores were also introduced about 2 years ago. The 6nm process is still inferior to the M1's 5nm, which the M2 is going to continue to use. The Ryzen 7000 series will continue to use RDNA2 while also using the new Zen 4 cores and 5nm like the Apple M1.

I'm not saying that AMD's 7000 chips are going to be a massive upgrade from the 6000 chips, but like the M2 they will be faster, and unlike the M2 they will be more efficient than their predecessor. Probably not as efficient as the M1, but the gap is closing. Remember, Apple has only so much time to get people on board their ARM laptops before x86 catches up in efficiency, and trust me, it will.



My laptop doesn't have the M1 Pro, it has the M1 Max. The GPU is twice as fast on the Max and the CPU has twice as much memory bandwidth, plus double the RAM capacity and double the video encode engines.

So once again, my point holds despite you trying to cherry-pick benchmarks. That laptop, with poor battery life and thermals, is still slower than Apple's architecture, which is older.
 
I will withhold judgement until there are some reliable 3rd party benchmarks. Apple's marketing has a long history of, shall we say, "exaggerating" the performance for their hardware. I never trust vendor claims, but I trust their claims even less than most.
I agree; however, seeing how impressive the M1 was certainly suggests the M2 will be something special too.
 
I wouldn't call the M2 a beast. Just looking at Apple's charts, it looks like a small performance boost. Looking at what Apple wrote, I can conclude some things about the M2. Firstly, the CPU performance increase seems to come from more cache, just like how AMD has been adding more cache to their CPUs lately. That's where most of the performance increase is coming from, and according to Apple it's mostly a multi-threaded performance increase. They like to compare the M2 to PC cores, which most likely means Intel, as AMD has done wonders lately with their Zen 4 Ryzen CPUs.

Well, there is a big difference between what Apple and AMD are doing here.

AMD expands the L3 cache. The M1 does not have an L3 cache; Apple increased the L2 cache instead. The latter should make for a bigger improvement for general-purpose tasks.
 
QEMU. I do it the other way round; I run ARM Linux VMs on an Intel Mac using UTM (a QEMU front end for macOS).
I haven't been able to get x86 VMs running in QEMU on M1. Seems the only way is to emulate ARM Windows and go from there with stuff.
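For what it's worth, full x86_64 emulation does run on an M1 through QEMU's TCG backend; it's just painfully slow because there's no hardware virtualization for a foreign ISA. A rough sketch of the invocation (disk and installer names are placeholders):

```
qemu-system-x86_64 \
  -machine q35 \
  -cpu qemu64 \
  -smp 4 -m 4096 \
  -accel tcg \
  -drive file=disk.qcow2,format=qcow2 \
  -cdrom installer.iso -boot d \
  -nic user
```

The ARM-Windows-plus-emulation route people mention is usually faster in practice, since the VM itself can then use hardware virtualization.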
 
Well, this shows the utter superiority of the Intel macs over that Apple Silicon nonsense :D
 
Well, this shows the utter superiority of the Intel macs over that Apple Silicon nonsense :D
Yeah it’s a huge problem where I work - so we’ve containerized our product. People insist on MBPs!
 
Yeah it’s a huge problem where I work - so we’ve containerized our product. People insist on MBPs!
Makes it far easier to manage. By this time next year nobody will be running anything on their laptops; everything will be centralized and their laptops will be glorified terminals. Everything will be nicely contained in the VPN and I'll be able to sleep easier knowing I have that many fewer security holes to keep plugged.
 
Company: You can make exceptions to our standard issue laptops on approval.

Employee: I require the Dragon's Breath 21" Mega Monster RGB Special Edition 8K with built-in smoke, 32GB, 2TB NVMe and 3090 (mobile) please. I need it for productivity work. The normal laptop can't handle the spreadsheets I use.
 
Company: You can make exceptions to our standard issue laptops on approval.

Employee: I require the Dragon's Breath 21" Mega Monster RGB Special Edition 8K with built-in smoke, 32GB, 2TB NVMe and 3090 (mobile) please. I need it for productivity work. The normal laptop can't handle the spreadsheets I use.
lol, honestly the only reason I got a MacBook for my work laptop is that they let you get ISO-layout keyboards. Lenovo is about the only other brand I know of that does that in the US, but we can't get those.
 
lol, honestly the only reason I got a MacBook for my work laptop is that they let you get ISO-layout keyboards. Lenovo is about the only other brand I know of that does that in the US, but we can't get those.

Lenovo Thinkpad keyboards will give you brain damage as it is, although I do wish Alt Graph was something more common. I have to work with special characters pretty often and I can't tell you how many times I just do a search for them and copy-paste.

Going from a Mac keyboard to a Thinkpad keyboard, either way, is probably physically nauseating.
 
Lenovo Thinkpad keyboards will give you brain damage as it is, although I do wish Alt Graph was something more common. I have to work with special characters pretty often and I can't tell you how many times I just do a search for them and copy-paste.

Going from a Mac keyboard to a Thinkpad keyboard, either way, is probably physically nauseating.

Believe it or not, I used to hate Apple with a passion (before they started making superior products on the laptop side). I was a die-hard Thinkpad fanboy. Had so many of them - X series, T series, P series, etc. I was one of the "nipple" die hards too that refused to use the track pad.

While I still love the feel of the Thinkpad keyboards, the modern ones don't compare to the older ones. And what I have found is that while the feel of the Thinkpad keyboards is superior to the Apple ones, I actually type slightly faster on the Apple ones overall (I am a super nerd and benchmark typing speed on everything, and enjoy doing the online competitive tests). But what really freaks me out is that the Apple keyboard I like the least from a feel perspective (the 2019 butterfly-switch keyboard) is the one I am fastest on. The latest Apple keyboard I am not quite as fast on as the butterfly one, and it doesn't feel quite as good as a Thinkpad, but it strikes a nice balance of both.
 
While I still love the feel of the Thinkpad keyboards, the modern ones don't compare to the older ones.

Yeah, they're mushier now. And the "lifestyle" keyboards are nice, but they are center-sprung, and get crud in them real easy. And standard as far as layouts are concerned.

But the layout for Thinkpads is still bomber. Basically encourages you to drink coffee and smoke cigarettes and just get work done. They are still cyberpunk.

I'm writing this on two Lenovos stacked using a Mac as a window blind.

And it's nighttime.
 
Believe it or not, I used to hate Apple with a passion (before they started making superior products on the laptop side). I was a die-hard Thinkpad fanboy. Had so many of them - X series, T series, P series, etc. I was one of the "nipple" die hards too that refused to use the track pad.

While I still love the feel of the Thinkpad keyboards, the modern ones don't compare to the older ones. And what I have found is that while the feel of the Thinkpad keyboards is superior to the Apple ones, I actually type slightly faster on the Apple ones overall (I am a super nerd and benchmark typing speed on everything, and enjoy doing the online competitive tests). But what really freaks me out is that the Apple keyboard I like the least from a feel perspective (the 2019 butterfly-switch keyboard) is the one I am fastest on. The latest Apple keyboard I am not quite as fast on as the butterfly one, and it doesn't feel quite as good as a Thinkpad, but it strikes a nice balance of both.
Seriously? I have a Thinkpad for work and the keyboard is absolute garbage. By far the worst I've ever used. It's a "full-sized" keyboard in that it has all the keys and a numpad, but the keys and the area it takes up are much smaller than a real full-sized keyboard. And the switches are just fucking terrible. Keypresses often don't register. And the Alt and Function key placement? WTF? I've never used a worse keyboard in my entire life.

edit: oh just realized you said the old ones were good and not the current models
 
Fn key location never bothered me; just map Control to Caps Lock like anyone that actually uses a computer does, lol.
 
My laptop doesn't have the M1 Pro, it has the M1 Max. The GPU is twice as fast on the Max and the CPU has twice as much memory bandwidth, plus double the RAM capacity and double the video encode engines.

So once again, my point holds despite you trying to cherry-pick benchmarks. That laptop, with poor battery life and thermals, is still slower than Apple's architecture, which is older.
Yea, or maybe it's that I haven't found anyone who has tested the new AMD 6900HS against the M1 Max. Feel free to find one though, because I can't. The M1 Max is a lot faster at video rendering, probably due to its media engine, but at 3D... not holding up against an RTX 3060. Laptop 3060, not desktop 3060. Doubling the GPU cores compared to the Pro is not doubling the GPU performance. This falls in line with the M2 doubling its GPU cores and yet only getting a 35% increase in GPU performance, at least according to Apple. I'm guessing the M1 Max is probably not faster than the 6900HS, due to how little of a difference there is in GPU performance. The M2 might be, but that really depends on how accurate Apple's statements are.

My opinion is that the M2 is not the M2 Apple wanted to release, given that it looks like little more than a beefed-up M1. Apple probably wanted to release it on 3nm and got screwed, so they went back to the M1 and just improved it. The 6900HS is not getting that much media coverage because nobody cares about its GPU performance when you could just stick in a discrete GPU and call it a day. That, and AMD already announced the 7000 series, which just seems strange. I'm betting that the 7000 series won't get much faster GPU performance either, since AMD doesn't seem to care. They'll make more money selling GPUs anyway, even for laptops. I'm hoping that Intel actually comes out punching with good built-in graphics performance. I expect more from Intel since, with Apple leaving them, Intel has more to prove than AMD.

 
For those who think Apple makes their hardware entirely from scratch, just keep in mind their M1 chips have the PACMAN flaw that affects ARM. So yea, Apple didn't make their ARM chips from scratch, unless Apple engineers were so good at making CPUs that they somehow arrived at the same flaw through convergence. To give you an idea, Spectre and Meltdown didn't affect both AMD and Intel equally. In fact Meltdown didn't affect AMD at all, because AMD has not been allowed to copy Intel's designs since 1984. Most likely the M2 has the same flaw, but it goes to show the differences in hardware design between ARM CPUs in general compared to x86. What Apple is doing isn't any different from the 90's and 2000's MIPS era where everyone made their own unique MIPS CPU, but they were all still MIPS. Sony at least made their own GPUs for those consoles, which was no small feat. Unlike Apple's GPUs.

https://appleinsider.com/articles/2...le-silicon-is-an-echo-of-spectre-and-meltdown

 
Thinkpad has been shit since IBM sold out to Chinese Lenovo.

Still better than the rest. The only thing that's really gotten worse is the bloatware. So do a clean install, which you should do with any laptop anyway.
 
For those who think Apple makes their hardware entirely from scratch, just keep in mind their M1 chips have the PACMAN flaw that affects ARM. So yea, Apple didn't make their ARM chips from scratch, unless Apple engineers were so good at making CPUs that they somehow arrived at the same flaw through convergence. To give you an idea, Spectre and Meltdown didn't affect both AMD and Intel equally. In fact Meltdown didn't affect AMD at all, because AMD has not been allowed to copy Intel's designs since 1984. Most likely the M2 has the same flaw, but it goes to show the differences in hardware design between ARM CPUs in general compared to x86. What Apple is doing isn't any different from the 90's and 2000's MIPS era where everyone made their own unique MIPS CPU, but they were all still MIPS. Sony at least made their own GPUs for those consoles, which was no small feat. Unlike Apple's GPUs.

https://appleinsider.com/articles/2...le-silicon-is-an-echo-of-spectre-and-meltdown

Apple has a perpetual architectural license, so they're allowed to both use and modify ARM technology to their specification. M(1,2) chips are simply an extension of their ARM strategy, which includes the chips designed for their phone and tablet lineups.

So, yes, Mx chips aren't designed-from-scratch technology. They are an evolution of the ARM architecture with Apple-specific extensions and enhancements that most, if not all, purveyors of ARM chips simply have not done yet. I don't think anyone who is seriously posting in this thread has either said or believed that Apple isn't using ARM technology. For that matter, anyone who has any cursory knowledge of the Mx architecture is well aware of the lineage.
 