Ex-Intel Engineer Slams Misguided And Flawed Apple M1 Benchmarking Practices

erek

"Exacerbating the problem with accurate IPC comparisons is that we don’t have a lot of apples-to-apples real-world apps that can easily be benchmarked across platforms, that will have identical workloads. And even then, if M1 has to run in Rosetta emulation, though it’s still real-world and what a user would experience, it’s not the same level playing field as an Apple M1-compiled app, obviously.

In short, not all canned benchmarks show the M1 in the proper light versus other x86 architectures, versus real application performance. To be clear, none of this means that M1 systems are being mis-represented in their actual performance capabilities. As Piednoël notes, "Apple has a very fast chip because it has a very large L1, but don't conclude the IPC is outstanding because of this, because the benchmarks are all getting tricked right now."

Have thoughts on what François had to say? Share them with all in the comments section below."


https://hothardware.com/news/ex-intel-engineer-slams-apple-m1-benchmarking-practices
 
It's an odd position to take. Considering that even at the end of the day he notes "winning is winning" - not in so many words, but it is. (No rational person cares 'how' or 'why' one setup is faster, they only care which one is faster. If it really is "just because of cache," then so what? The end result of getting more work done more quickly is still the same. The benefits of much lower temps, much lower wattage, and much longer battery life while doing what the M1 does are also not disputed.)
However, this is something that I've mentioned long before M1 Macs came out: so much of why one piece of software runs faster than another on different OSes is optimization, and 'certain people' on this forum asked me to prove what is already self-evident. Obviously each dev is going to spend different amounts of time and resources developing an app on each platform if it's cross-platform.

Having a piece of software running faster in Linux vs Windows doesn't necessarily show that Linux is a faster OS - merely that that piece of software is more optimized on that OS. One other kink in the chain of course is how easy/hard it is to optimize a piece of software for a given OS. At this point, I think Apple is class-leading. Because they more or less have everything done through universal binaries, coding in Swift and Metal, programmers really just have to code things once and it works across platforms. It's more restrictive in the sense that that is your only option to code with, but again, considering how easy it's all designed to be, it's a much nicer environment than trying to code on the quagmire that is Windows, as an example. The other final kink in this discussion, of course, is how do you conclude that it's actually the architecture that is faster when the software being run is so different?

Frankly there is no other real way other than what has already been done. Bench similar software or tasks - whether it's all about the hardware or it's all in the software is moot (and obviously it's some combination of both). At the end of the day one platform is obviously and clearly the winner. If the shoe was on the other foot, people would take a dump on Apple. I think it's only fair in this case to do the opposite.
 
Let's be honest here, canned benchmarks are garbage.

Real workload tests are about the only meaningful thing here: take a bunch of popular software that runs on the PC and on the Mac, do the same tasks in them, give a result, then let the users decide what is important to them.
I mean, for GPU and CPU reviews we don't accept Time Spy as the end-all-be-all; reviewers are kind enough to test the GPUs and CPUs in a whole gamut of actual games and applications to take averages, highs, and lows, and really that's what the M1 needs. Running native or emulated doesn't matter, because it will come out with the numbers it gets and those are the numbers presented, and if those numbers matter to somebody they can see them and choose accordingly.

But really, the Intel engineer can throw all the shade at Apple he wants; at the end of the day, if Intel had done a better job of predicting market trends and gotten its house in order to present a worthwhile product to meet them, Apple wouldn't have had to do this. Intel's stagnation forced their hand.
 
Considering that even at the end of the day he notes "winning is winning" - not in so many words, but it is. (No rational person cares 'how' or 'why' one setup is faster, they only care which one is faster. If it really is: "just because of cache" then so what? The end result of getting more work done more quickly is still the same
If you're talking about real, actual tasks, sure. It's different if you're talking about a synthetic benchmark that does a task very well when it fits in cache but not so well when it doesn't - and in the real world it usually doesn't (I think we saw that with RDNA2 and the rumors of ultra-fast mining performance, which used benchmark results that fit in the giant cache instead of actual mining). That can give a misleading impression.
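A quick way to see that effect for yourself - a rough sketch, not anything from the article, with sizes that are purely illustrative: time random accesses over working sets of increasing size and watch the cost per access jump once the data no longer fits in cache. (Python/NumPy adds interpreter overhead, so treat the numbers as qualitative.)

import time
import numpy as np

# Rough sketch: time random accesses over working sets of increasing size.
# Once the working set no longer fits in cache, the cost per access jumps --
# the same effect that can flatter a benchmark that happens to fit entirely
# in a big cache.
def ns_per_access(working_set_bytes, n_accesses=2_000_000):
    n = working_set_bytes // 8                       # 8-byte elements
    data = np.arange(n, dtype=np.int64)
    idx = np.random.randint(0, n, size=n_accesses)   # random indices defeat the prefetcher
    start = time.perf_counter()
    checksum = int(data[idx].sum())                  # memory-bound gather
    elapsed = time.perf_counter() - start
    return elapsed / n_accesses * 1e9, checksum      # ns per access

if __name__ == "__main__":
    for kib in (32, 256, 4096, 65536, 262144):       # roughly L1 / L2 / L3 / DRAM territory
        ns, _ = ns_per_access(kib * 1024)
        print(f"working set {kib:>7} KiB: {ns:6.2f} ns/access")

A benchmark whose whole dataset sits in the first row or two of that table will look a lot rosier than one that spills out to DRAM.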
 
Real workload tests are about the only meaningful thing here: take a bunch of popular software that runs on the PC and on the Mac, do the same tasks in them, give a result, then let the users decide what is important to them.
The thing is, you don't even have to do that. You can just test Mac against Mac: test Intel Macs vs ARM Macs and see if the workload and speed increases are worth it to you. In a lot of ways it helps to illustrate, on a 1-to-1 level (not even an OS difference), how the M1 architecture is faster than Intel at a lot of different tasks.

MaxTech, as an example, has more or less put M1 Macs up against every Intel Mac over the course of several months; you can dig through their channel here: https://www.youtube.com/channel/UCptwuAv0XQHo1OQUSaO6NHw
They even put up an M1 Mini vs a $15k Mac Pro, and without an Afterburner card it's surprising how many video tasks the M1 is actually ahead on or roughly equal in. And even when it loses, it's forgivable for a machine that costs 1/15th as much.
Of course they did the obvious testing of the M1 MacBook Pro vs a fully maxed-out 16" MacBook Pro, and the M1 is again ahead in a lot of workloads.

As for PC to Mac, that's an entirely different debate. If you have users willing to change their software and their workflow, switching from PC to Mac could be worth it. But I imagine it will be a slow process of erosion for anyone who is doing very particular tasks with very particular pieces of software on the PC side.

If you're talking about real, actual tasks, sure. It's different if you're talking about a synthetic benchmark that does a task very well when it fits in cache but not so well when it doesn't - and in the real world it usually doesn't (I think we saw that with RDNA2 and the rumors of ultra-fast mining performance, which used benchmark results that fit in the giant cache instead of actual mining). That can give a misleading impression.
Sure. But you can ask anyone who is in the photo/video world and see all the side-by-side testing in terms of rendering speed in 4K video workloads. You can create the argument or question: is this fair? Doesn't the M1 have ASICs and specialized decoders and encoders for video? Sure - does that mean it's any less fast at doing those workloads? No. So I'm more or less positing the same thing. It doesn't matter "why" something is faster, only that it is faster. The M1 is definitively faster in a number of workloads versus Intel using real-world software. Whether those are workloads you care about or not is another matter.
 
"Exacerbating the problem with accurate IPC comparisons is that we don’t have a lot of apples-to-apples real-world apps that can easily be benchmarked across platforms, that will have identical workloads. And even then, if M1 has to run in Rosetta emulation, though it’s still real-world and what a user would experience, it’s not the same level playing field as an Apple M1-compiled app, obviously.

In short, not all canned benchmarks show the M1 in the proper light versus other x86 architectures, versus real application performance. To be clear, none of this means that M1 systems are being mis-represented in their actual performance capabilities. As Piednoël notes, "Apple has a very fast chip because it has a very large L1, but don't conclude the IPC is outstanding because of this, because the benchmarks are all getting tricked right now."

Have thoughts on what François had to say? Share them with all in the comments section below."


https://hothardware.com/news/ex-intel-engineer-slams-apple-m1-benchmarking-practices

He is simply dismissing what Apple engineers have achieved. For a superscalar out-of-order microarchitecture, the IPC is given approximately by

IPC ~= (a / L) sqrt(W)

Here a is a parameter that depends on the workload, L is the average instruction latency, and W is the ROB size. The average instruction latency is set by non-unit instruction execution latencies and short L1 data-cache misses, but the Firestorm core has a massive ROB of about 630 entries, roughly twice that of the best Intel core.
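To put rough numbers on that sqrt(W) term - a back-of-the-envelope sketch, holding a and L equal for both cores and treating the ROB sizes (~630 entries for Firestorm, ~352 commonly reported for Sunny Cove) as assumptions:

import math

# First-order model: IPC ~= (a / L) * sqrt(W), with W the ROB/window size.
# ROB sizes below are commonly cited figures, used only to illustrate the
# sqrt(W) scaling; a and L are held equal for both cores.
def relative_ipc(rob_a, rob_b, a=1.0, latency=1.0):
    return ((a / latency) * math.sqrt(rob_a)) / ((a / latency) * math.sqrt(rob_b))

print(f"sqrt(W) IPC ratio, Firestorm (~630) vs Sunny Cove (~352): {relative_ipc(630, 352):.2f}x")

In this model the ROB alone is worth roughly a third more IPC; anything beyond that has to come from the a/L side (caches, latencies, workload).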
 
From the article (not going to watch a 5-minute video), the sticking point seems to be that the M1 has a large L1 cache. So what? If they throw 100MB of L1 cache on the die and all applications can fit into L1, then that is representative of what the processor can do. If I'm paying for a chip with a large cache, I don't care that "it's not fair". Stop whining about how the competitor offers more cache than your product does and follow suit. Till then they are mopping the floor with you, and your products are inferior because they have less cache.
 
This is a stupid argument from a sore loser.

macOS runs on both Intel and M1 chips. In real workloads, like Adobe Premiere Pro, the M1 straight up annihilates any Intel mobile processor. That's all that matters to the consumer. I have a loaded Intel MacBook Pro, and the second the higher-end 13-inch M1 version comes out (4 Thunderbolt ports, M1X chip, etc.) I'm switching. I can't wait for better performance, less power consumption, and quieter cooling - win, win, win.
 
It would be intriguing to benchmark CPUs without L1, L2, and L3, just for kicks.
 
Does it matter how the sausage is made?

At the end of the day I don't care if it's more cache or a built-in ASIC. If it does the job I need to do faster, that is the end of the argument. Both of those things are design choices. Intel has been making the wrong ones for a long time now.... Apple seems, at least so far, to be making the right ones lately.

The same argument could be made in GPUs with AMD vs Nvidia. Nvidia has faster memory, no doubt, they have way more memory bandwidth and any benchmark of that would prove it. However, AMD made a design choice and included a ton of cache in their die package instead of baking in Tensor cores. Do we say, yeah, AMD's got slower RAM so we're all being tricked? Or do we say, yeah, look, they made a design decision, came at the problem another way, and got the same or better performance for cheaper?

Apple made that same choice... yes, they bumped up the cache and reordered how their ARM chip deals with memory writes and reads to better emulation speeds. Those were good decisions, frankly. His words reinforce what I have felt about Intel for a long time... their leadership has no idea how the tech works, and their engineers don't understand the market in any way. So they just keep doing what they have done for 30 years, slightly improving things. When they do R&D into things that could be game changers, the Intel leaders can't recognize it and don't commit in any real way, and by the time Intel does anything useful with it someone else has gone all in and done it better. (Intel was talking about 3D stacking for years before AMD went chiplet, as one example.)

Perhaps a hard market dive... and a forced shakeup at the top at Intel is exactly what they need. Intel needs to find a CEO who has a technical background but somehow still understands the business end (or at least is humble enough to get the right people to help them). Those types of people are hard to find. If they keep putting accountants in charge, they're going to end up being a footnote in another 30 years.
 
It would be intriguing to benchmark CPUs without L1, L2, and L3, just for kicks.
Does it matter how the sausage is made?

At the end of the day I don't care if it's more cache or a built-in ASIC. If it does the job I need to do faster, that is the end of the argument. Both of those things are design choices. Intel has been making the wrong ones for a long time now.... Apple seems, at least so far, to be making the right ones lately.

The same argument could be made in GPUs with AMD vs Nvidia. Nvidia has faster memory, no doubt, they have way more memory bandwidth and any benchmark of that would prove it. However, AMD made a design choice and included a ton of cache in their die package instead of baking in Tensor cores. Do we say, yeah, AMD's got slower RAM so we're all being tricked? Or do we say, yeah, look, they made a design decision, came at the problem another way, and got the same or better performance for cheaper?

Apple made that same choice... yes, they bumped up the cache and reordered how their ARM chip deals with memory writes and reads to better emulation speeds. Those were good decisions, frankly. His words reinforce what I have felt about Intel for a long time... their leadership has no idea how the tech works, and their engineers don't understand the market in any way. So they just keep doing what they have done for 30 years, slightly improving things. When they do R&D into things that could be game changers, the Intel leaders can't recognize it and don't commit in any real way, and by the time Intel does anything useful with it someone else has gone all in and done it better. (Intel was talking about 3D stacking for years before AMD went chiplet, as one example.)

Perhaps a hard market dive... and a forced shakeup at the top at Intel is exactly what they need. Intel needs to find a CEO who has a technical background but somehow still understands the business end (or at least is humble enough to get the right people to help them). Those types of people are hard to find. If they keep putting accountants in charge, they're going to end up being a footnote in another 30 years.
At what point do we hit diminishing returns with cache?
 
That's application / data specific.
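For a bit of intuition on the shape of those returns, here's a rough sketch, assuming the old rule of thumb that miss rate scales roughly with 1/sqrt(capacity) and using made-up hit/miss latencies - every doubling still helps, just by less each time:

# Rough sketch of diminishing returns from growing a cache. Assumes the
# classic rule of thumb that miss rate ~ 1/sqrt(capacity); the base miss
# rate and the latencies are illustrative assumptions, not measurements.
def amat(cache_kib, base_miss_rate=0.10, base_kib=32, hit_ns=1.0, miss_penalty_ns=80.0):
    miss_rate = base_miss_rate * (base_kib / cache_kib) ** 0.5
    return hit_ns + miss_rate * miss_penalty_ns      # average memory access time

prev = None
for kib in (32, 64, 128, 256, 512, 1024):
    t = amat(kib)
    note = "" if prev is None else f"  (saves {prev - t:.2f} ns vs. the previous size)"
    print(f"{kib:>5} KiB -> AMAT {t:.2f} ns{note}")
    prev = t

Each doubling saves less than the one before it - and, as above, where the knee sits depends entirely on the application's working set.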
Scope out this line of reasoning and thought, buds:

"Can AI be used for branch prediction?​

It is quite easy to wonder if this kind of problem could be solved by artificial intelligence algorithms, and the answer is yes, they can. Since a long time ago, indeed. An example of a branch predictor that uses this kind of approach is the perceptron predictor, also addressed in the Sparsh Mittal survey. Due to recent advances in the field of artificial intelligence, the combination of these two areas is probably a hot trend inside the buildings of major tech companies such as Intel and AMD, and we can expect much more to come.

So if you enjoy computer architecture and artificial intelligence, this is the research area that you can use all your knowledge in order to improve even more the processors that we have today."


https://ieeexplore.ieee.org/abstract/document/9086500
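For anyone curious what the perceptron predictor mentioned in that quote actually looks like, here's a toy sketch of the idea (loosely following the classic Jiménez & Lin formulation, not any shipping design): one small weight vector per table entry, a dot product with the global history to predict, and training only on mispredictions or low-confidence predictions.

# Toy sketch of a perceptron branch predictor. One weight vector per table
# entry, indexed by a hash of the branch PC; the prediction is the sign of
# the dot product between the weights and the global history register.
HIST_LEN = 16
TABLE_SIZE = 1024
THRESHOLD = int(1.93 * HIST_LEN + 14)      # training threshold suggested in the literature

table = [[0] * (HIST_LEN + 1) for _ in range(TABLE_SIZE)]   # +1 for the bias weight
history = [1] * HIST_LEN                                    # +1 = taken, -1 = not taken

def predict(pc):
    w = table[pc % TABLE_SIZE]
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y, y >= 0                                        # (confidence, predicted taken?)

def update(pc, taken):
    y, pred_taken = predict(pc)
    t = 1 if taken else -1
    w = table[pc % TABLE_SIZE]
    if pred_taken != taken or abs(y) <= THRESHOLD:          # train on a miss or low confidence
        w[0] += t
        for i, hi in enumerate(history):
            w[i + 1] += t * hi
    history.pop(0)
    history.append(t)                                       # shift the new outcome into history

The appeal is that storage grows only linearly with history length, instead of the exponential blow-up of a table indexed directly by the history bits.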
 
So far there really isn't any CPU design that hasn't been positively affected by more cache. Modern software would probably see improvements all the way up to insane amounts at this point.

The problem has always been die size. Cache often seems to be the afterthought.... after we get all the logic in, we can squeeze this much cache in. Apple and AMD both lately have seemed to make cache a larger part of their design choices. AMD has been doing it with Ryzen and Radeon as an example. AMD with Ryzen needed cache to solve issues with CCXs and chiplets. They seem to really understand how that has bumped performance at this point. Chip designers in general have gotten better at balancing logic vs cache the last few years. As Apple is showing, sometimes it's better to reduce the logic a bit and crank that cache up... Apple could probably have thrown a couple extra ARM cores into the M1; instead they chose to give the cores extra cache. Seems that was the correct way to go.

One sort of unrelated prediction on the server side: Amazon later this year or early next year is going to invade the performance server market. Graviton2 is an IPC match for Intel... but its biggest fault is a fairly smallish cache system. Rumors are Amazon is working on a Graviton3 that will be little more than a die shrink with a massive cache bump. If that ends up being true, I am pretty sure Amazon will have the cheapest and fastest server offerings around.... Intel is in so much trouble in the next few years that I really don't see how they avoid contracting TSMC at this point. They are going to need server parts just as badly as they are going to need consumer parts.
 
Scope out this line of reasoning and thought, buds:

"Can AI be used for branch prediction?​

It is quite easy to wonder if this kind of problem could be solved by artificial intelligence algorithms, and the answer is yes, they can. Since a long time ago, indeed. An example of a branch predictor that uses this kind of approach is the perceptron predictor, also addressed in the Sparsh Mittal survey. Due to recent advances in the field of artificial intelligence, the combination of these two areas is probably a hot trend inside the buildings of major tech companies such as Intel and AMD, and we can expect much more to come.

So if you enjoy computer architecture and artificial intelligence, this is the research area that you can use all your knowledge in order to improve even more the processors that we have today."


https://ieeexplore.ieee.org/abstract/document/9086500


Get some AI Spectre flaws maybe? :( ChadD
 
Get some AI Spectre flaws maybe? :( ChadD
Perhaps all the cloud servers need to turn into Skynet is a little more cache to flex. :) lol

It's interesting to read all the branch prediction papers. It's always been an interesting and confusing subject.

I'm not convinced AI is going to do much for branch prediction.... but I could be wrong. With branch prediction it has always been about finding algorithms that first fit on the chip... simple chips like MIPS had one-level branch predictors. Anyway, the newer, more complicated predictors are the result of lots of trial and error... so perhaps some sort of AI-assisted design of new forms of prediction could help? I can't claim to know much about that. The design issue has always been accuracy vs general speed vs memory efficiency. You can design a predictor that is super accurate, but it will eat more RAM, or cycles. There is a sweet spot of the 3.... but that sweet spot also seems to fluctuate based on other factors like pipeline depth. I could see someone training an AI to run thousands of slight variations, testing to find the best trade-off of accuracy/speed/memory usage for a given design goal. (That makes a lot of sense.)

You're right though, it's not just more cache.... it's also using it properly. That was Intel's early claim to fame... way back they had more elegant cache systems, and they had less cache going back to the first Pentiums and Pentium Pros. Whereas AMD seemed to use a less refined branch predictor... which probably had something to do with the pipeline choices. Looking at the Bulldozer stuff, AMD had a longer pipe... and a branch predictor that had to be more accurate as a result, so it used more cache. (Guessing wrong when your pipeline is 20-40% longer means that many more cycles required for the CPU to do the right work.) So AMD had more cache... but it didn't translate into better performance.

These days AMD's branch prediction seems very similar to Intel's. Their pipelines are more similar these days... and both companies base their BP algorithms off the same academic research papers. :)

Comparing to ARM gets more complicated, obviously.... the simpler the chip, the less complicated the BP needs to be, or even can be. I'm no chip scientist... but some people think Apple has aligned their ARM's simpler BP cache system to operate a little more like x86's. So when it's emulating.... things aren't overflowing the generally smaller, simpler ARM cache spaces. So where x86 code on x86 is writing branch prediction in a cycle or two... it's also doing that on Apple's chip. (Whereas other ARM chips not doing that, like say Qualcomm's, are having that info take 2-3x as many cycles because the simpler BP cache is taking more logic cycles to write and perhaps even read.)

Apple's engineers aren't stupid... some people have suggested it's some infringement on Intel or x86 patents. That is silly. All they seem to have done is adjust their BP to detect the type of math it's being asked to predict and dynamically adjust the way it writes and reads from the cache to get the job done in the same (or perhaps even fewer) cycles than native x86 chips. It's smart if you know you're designing a chip that is going to be executing 2 different types of code. ARM code is simpler... and actually writes faster, but it requires larger cache space. So for ARM code the chip is writing a lot of small blocks... using more RAM in general. For x86 code you write fewer blocks overall, but those blocks have larger individual space requirements. Apple seems to shift its cache-writing scheme on the fly based on code type. (I get the feeling that their talk of code recompiling is only half true... there may be some bits that can't really be translated, and for those bits the hardware seems to be adjusting instead.)
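To make that accuracy vs. memory vs. cycles trade-off concrete, here's a toy comparison (nothing to do with Apple's or Intel's actual predictors; the sizes, PCs, and trace are arbitrary illustrative choices): a plain bimodal table of 2-bit counters against a gshare-style predictor that folds global history into the index. The synthetic trace just has a second branch that always goes the same way as the one right before it.

# Toy comparison: bimodal 2-bit counters vs. gshare (PC XOR global history).
# Synthetic trace: branch B always follows branch A's direction. Per-PC
# counters can't see that correlation; a history-based predictor can --
# at the cost of a history register and a bigger effective index space.
import random

HIST_BITS = 8
TABLE_SIZE = 1 << 16          # large enough that the two PCs don't alias

def run(trace, use_history):
    counters = [1] * TABLE_SIZE               # 2-bit counters, start weakly not-taken
    hist = 0
    correct = 0
    for pc, taken in trace:
        idx = (pc ^ hist if use_history else pc) % TABLE_SIZE
        if (counters[idx] >= 2) == taken:
            correct += 1
        counters[idx] = min(3, counters[idx] + 1) if taken else max(0, counters[idx] - 1)
        hist = ((hist << 1) | taken) & ((1 << HIST_BITS) - 1)
    return correct / len(trace)

random.seed(0)
trace = []
for _ in range(25_000):
    a = random.random() < 0.5                 # branch A: effectively unpredictable
    trace.append((0x1111, a))
    trace.append((0x2222, a))                 # branch B: correlated with A

print(f"bimodal: {run(trace, use_history=False) * 100:.1f}% correct")   # ~50%
print(f"gshare : {run(trace, use_history=True)  * 100:.1f}% correct")   # ~75%

The history-based predictor buys its extra accuracy with extra state and indexing work - the same balancing act described above.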
 
At best it is arguing over semantics, at worst it is sour grapes. I couldn't care less how 'canned' some benchmarks are. From my daily usage, comparing to a 3900X and a 4900HS, the M1 is very competitive (code compiles, video editing, etc.) and it runs silent. Super impressed with the chip.
 
At best it is arguing over semantics, at worst it is sour grapes. I couldn't care less how 'canned' some benchmarks are. From my daily usage, comparing to a 3900X and a 4900HS, the M1 is very competitive (code compiles, video editing, etc.) and it runs silent. Super impressed with the chip.
Are the benchmarks cherry-picked too, and not just canned? (Salt in the wound?)
 
Last time I checked, AMD's answer to latency issues between chiplets was a large L3 cache as well. No one seemed to complain.
 
It's marketing, don't read much more into it than that.

This is a very powerful chip (even if not dominant at all things), and I 100% assure you that internally at CPU houses (like Intel), this is not being dismissed, at all. This is an Iowa-class shot across the bow.
 
So far there really isn't any CPU design that hasn't been positively affected by more cache. Modern software would probably see improvements all the way up to insane amounts at this point.

The problem has always been die size. Cache often seems to be the afterthought.... after we get all the logic in, we can squeeze this much cache in. Apple and AMD both lately have seemed to make cache a larger part of their design choices. AMD has been doing it with Ryzen and Radeon as an example. AMD with Ryzen needed cache to solve issues with CCXs and chiplets. They seem to really understand how that has bumped performance at this point. Chip designers in general have gotten better at balancing logic vs cache the last few years. As Apple is showing, sometimes it's better to reduce the logic a bit and crank that cache up... Apple could probably have thrown a couple extra ARM cores into the M1; instead they chose to give the cores extra cache. Seems that was the correct way to go.

One sort of unrelated prediction on the server side: Amazon later this year or early next year is going to invade the performance server market. Graviton2 is an IPC match for Intel... but its biggest fault is a fairly smallish cache system. Rumors are Amazon is working on a Graviton3 that will be little more than a die shrink with a massive cache bump. If that ends up being true, I am pretty sure Amazon will have the cheapest and fastest server offerings around.... Intel is in so much trouble in the next few years that I really don't see how they avoid contracting TSMC at this point. They are going to need server parts just as badly as they are going to need consumer parts.
TSMC capacity is all tapped out through 3nm. I don't know how Intel will be able to squeeze in.
 
A cross-platform AAA game would make for a good comparison (although the GPU would come into play).
Another good one is x264/x265 encoding.

Stuff like Geekbench, etc. is garbage

Who buys an ARM MacBook with a shitty GPU to play games and encode HEVC videos???


I'll hold.
 
TSMC capacity is all tapped out through 3nm. I don't know how Intel will be able to squeeze in.
No doubt TSMC is in high demand. You're probably right too... I have a feeling, though, that if Intel was willing to make a large enough deal, TSMC would spend the money to tool up some more space.

If I was in charge of negotiations at TSMC... I would require Intel to order a sizable number of wafers; I wouldn't even offer them any boutique-style smaller run. If Intel's offer is "fab this stuff for 6 months till we can do it ourselves," I would say, yeah, sorry, no space. I would require they commit... and in return I would give them the type of pricing deal that they would be insane to refuse. (And if they refused, I would leak the details... so their shareholders would destroy their leadership for not taking the deal.)

'Cause if you're TSMC, the opportunity to really screw over one of your largest rivals is at hand. If Intel spins off their fab business, I have no doubt it would go the same way as GlobalFoundries... they would compete for the first few years a little bit, but in the end they would end up a boutique supplier of smaller-run chips for companies like Broadcom etc.

Then again, TSMC may well believe (I also tend to think this way) that the market is going to solve Intel for them anyway. Good chance a ton of the high-end chip business is coming their way no matter what Intel does at this point. Between Apple in the consumer space, Amazon and the other ARM server players, not to mention 100% of AMD's business... TSMC may feel Intel's future potential isn't what it was anyway.
 
No doubt TSMC is in high demand. You're probably right too... I have a feeling, though, that if Intel was willing to make a large enough deal, TSMC would spend the money to tool up some more space.

If I was in charge of negotiations at TSMC... I would require Intel to order a sizable number of wafers; I wouldn't even offer them any boutique-style smaller run. If Intel's offer is "fab this stuff for 6 months till we can do it ourselves," I would say, yeah, sorry, no space. I would require they commit... and in return I would give them the type of pricing deal that they would be insane to refuse. (And if they refused, I would leak the details... so their shareholders would destroy their leadership for not taking the deal.)

'Cause if you're TSMC, the opportunity to really screw over one of your largest rivals is at hand. If Intel spins off their fab business, I have no doubt it would go the same way as GlobalFoundries... they would compete for the first few years a little bit, but in the end they would end up a boutique supplier of smaller-run chips for companies like Broadcom etc.

Then again, TSMC may well believe (I also tend to think this way) that the market is going to solve Intel for them anyway. Good chance a ton of the high-end chip business is coming their way no matter what Intel does at this point. Between Apple in the consumer space, Amazon and the other ARM server players, not to mention 100% of AMD's business... TSMC may feel Intel's future potential isn't what it was anyway.
I'm sure, if the deal was good enough. It will just be a couple of years before we see any Intel chips made by TSMC.
 
I'm sure, if the deal was good enough. It will just be a couple of years before we see any Intel chips made by TSMC.
Probably. I know the rumors are they're talking about 4nm..... I don't know how aggressive TSMC is; I could see them trying to make a deal on a somewhat sooner process to get something out by the end of '21. Still, 4nm is slated for volume production early in '22.... and they just announced Apple will be getting early 3nm test silicon this year, with mass production mid to late '22. So I'm pretty sure if Intel can get a design together quickly they could probably have test silicon by the summer... and products to announce in the fall. (And depending on what Apple drops... who knows, perhaps they will talk up stuff that won't be out in volume till early '22.)

Intel chips made at TSMC might come sooner than any of us would guess. Obviously not in a few months.... but I could still see Intel having products to talk about before the end of this year.
 
If you want an illustration of how much the PC industry has changed, just look at this thread... folks are skeptical of that Intel engineer and defending Apple (within reason, of course). If Apple had tried this a few years ago, I suspect we'd have had people defending Intel to the hilt regardless of its actual performance.
 
I will never buy an ARM for a desktop box. There. I said it.

Before I would consider it, the ARM CPU would have to be so fast that it could emulate x86 software and maybe even an entire x86 OS faster than the fastest native x86 chip. I don't plan to give up x86 software or Windows any time soon - but if it got to the point where even x86 emulation was that good, then there would be no point fighting it anymore. I doubt that will happen, certainly not too soon, but we'll see.
 
AnandTech wrote up a long article on this, and how the Apple M1 core gets utilized 100%, while modern core/thread setups from Intel or AMD don't fully load a core, merely loading part of the CCX and not utilizing one core with both core + thread at capacity. It's something Apple was happy to take advantage of in their canned benchmarks. Regardless, the battery life on the M1 chip is fucking ridiculous. Good luck with that, ARM competitors.
 
Who gives a fk about battery life on a desktop? ARM already owns mobile.
 
I just have one question. When can I use my GPU instead of an onboard processor?
 
This is a stupid argument from a sore loser.

macOS runs on both Intel and M1 chips. In real workloads, like Adobe Premiere Pro, the M1 straight up annihilates any Intel mobile processor. That's all that matters to the consumer. I have a loaded Intel MacBook Pro, and the second the higher-end 13-inch M1 version comes out (4 Thunderbolt ports, M1X chip, etc.) I'm switching. I can't wait for better performance, less power consumption, and quieter cooling - win, win, win.


Sorry, but this needs a caveat. Namely, that's all that matters to the APPLE consumer. And at that, to the strictly Apple consumer. For the most part, Apple's Intel machines seem to launch about a generation behind the general Wintel laptop market, which is one of the things I have found odd about all the articles saying Intel was holding Apple back.

I really don't like a number of things about OS X, and the Apple design can be the pinnacle of human achievement, but tied to their OS and high-margin product lines, it won't dominate the industry. That is assuming it doesn't run into issues once you try to layer some real expandability on it.

I'm open to living on ARM, but not with the limited expansion that Apple has produced so far. If Nvidia and MS go off in a corner and sort their shit out, I could very much see an SoC from Nvidia, done well, being very attractive.
 
Not sure what you mean about limited expandability; the MacBook Pro has the highest-performance I/O of any laptop in the world. It has the most Thunderbolt ports (4), the fastest SSD in any laptop, etc. I routinely have mine hooked up to an eGPU, a 40Gbps Thunderbolt adapter, 10GbE, an external SSD array, docking stations, etc. - all at full speed. No PC laptop on earth can do that.
 
Who give a fk about battery life on a deskop? ARM already owns mobile.
ARM is the better performance ISA.

Considering what ARM has been used for over the last 20 years, people forget ARM was designed as a consumer desktop ISA. Of course, Apple leaned into the Acorn RISC Machine's unintended huge leap in low power usage when they decided they wanted to make the Newton (tech in general just wasn't ready for touchscreen computing yet).... and ARM from then on got locked in as a low-power, low-heat, good-performance option.

We know from the server world there is nothing slow about ARM... quite the opposite, it's easily the highest-performance ISA around right now. (Provable... by the lists of supercomputers in terms of performance and efficiency.)

I look forward to proper ARM desktop chips.... I wish it wasn't Apple first to do it, but I guess it's fitting in a way, as they were in on the initial ARM ownership when Acorn spun it off. It's also a little-known fact of history that Apple worked with Acorn way back in the '80s... and had an Apple II running an ARM CPU that they never released. (They worried it would confuse their Mac market.)

I do hope we get ARM consumer options outside of Apple at some point. As much as I cringe at the thought of wanting an Nvidia CPU.... seeing what they are planning is going to be interesting.
 