M1 successor should be coming this summer

Yes and no. A key problem in the States and other service-oriented economies is that it's difficult to find people who are just qualified enough. Trade education is in relatively short supply; you're more likely to find people with engineering degrees, and they're not going to work for typical factory pay.
That's a problem that China would like to have. There's no future in manufacturing as it'll be automated, so having engineers instead of simple laborers is kind of what you want. The only reason we still have people putting stuff together is that we don't want to pay engineers to make designs that are automation-friendly. A lot of what goes into designing products is making them easy to put together.
I also can't imagine Foxconn et al. having much luck convincing entitled Americans that their future will be bright sitting on an iPhone assembly line.
The problem isn't convincing entitled Americans but convincing entitled businesses to pay more. If Americans would work for 2 cents an hour then you'd build a factory here, but then you run into the problem of who would buy them. People making 2 cents an hour aren't going to be able to afford food, let alone an iPhone.
For that matter, let's not forget that any new or updated factories will likely include a lot of robots, wherever they're built — companies may get around labor conditions simply by having fewer people involved.
Now you know why China wants their own version of Silicon Valley. Factory work has no future, so what can China do to employ their citizens? The reason we still have people working in factories is that it's cheaper than paying engineers to design products that don't depend on human hands, and intricate work is just not easy to automate.

It pisses me off hearing people say 'this is no longer possible here'. No, it's possible, we just need the people with the money to start re-investing that money in our country. No, it won't instantly happen. I get that.
At least you get it. Apple has so much money lying around that it could afford to make its products here in the USA and pay people $20-$30 per hour without an issue. Just like Jeff Bezos could give every employee $100k and still be a billionaire. The problem is nobody likes losing money. Humans put more value on loss than on gain, so of course it's not possible.
And yes, I'll repeat it: I have no problem paying two to three times the cost of whatever product you can name if it were actually made in the USA. It's the same reason I have no problem buying expensive watches and certain cars. Where something is built matters to me. I don't give a flying fuck about people in China. People in the UK, Ireland, most of western Europe, the US & Canada? I care about those people because we generally share the same values.
You may not care about paying 3x more, but 99.99% of people certainly do. That doesn't mean that if Apple built their products in the USA or Europe, prices would suddenly triple. The price of an item is mostly based on what you're willing to pay, not what the product costs. The iPhone 12 costs $373 to make and 22% of its parts come from the USA; at $800 per iPhone 12, that's roughly $427 of gross margin on the hardware alone, so Apple is making a lot per sale. If you treat these devices like game consoles, Apple will continue to make money from the sale of digital goods, but unlike game consoles these devices aren't sold at cost. Even printers now sell cheap and depend on ink sales for revenue. Apple could easily make their products here and still charge you the same, but that would mean less money for them. They might not employ as many people as they do in China, but right now they aren't employing anyone in the USA to assemble any of their goods.


Honestly, I loved my Mac laptop from 2003, and as a media machine it was fantastic. That said, I always ended up using either Linux or Windows for work and never got into the ecosystem. I was issued an M1 MacBook Pro for work and GD does it blow away the 10750H CPU in my MSI laptop. Granted, that MSI laptop has a 2070 Super in it, and I think the 8-core GPU in the M1 is only somewhere around a 1050 Ti in performance, but when it comes to getting actual work done that MacBook just runs circles around it. I have legitimately pulled an 18-hour day completely on battery with it, and I don't believe that is possible on any laptop I have laid my hands on, certainly not one with desktop-CPU levels of performance. I look forward to seeing the performance of the 14- and 16-inch releases over the next couple of years.
I still have a PowerBook G4 that I tried putting Ubuntu 16.04 on. It isn't something you can use daily, and you can forget about watching YouTube videos. There were too many hoops to jump through just to get it to boot, let alone get all the features working. I still can't sleep the laptop without it crashing and staying on. This is what I mean when I say I hate dealing with architectures that aren't x86: the support for these devices on Linux is just not good. There's work on getting Linux onto the M1, but I suspect it'll never get good enough to be 100% usable. My Lenovo laptop runs Mint 20 perfectly, and I have zero issues getting it to do anything I want. When the M2 and M3 are released, you can bet that just makes porting Linux onto them more problematic. Who knows if Apple starts to block other OSes from being installed? The community can get very far without the support of the manufacturer, but you won't get everything working without issues without the manufacturer's help.
 
Again, I would be happy to pay upwards of $3k for whatever the latest iPhone was if that meant it was fully assembled in the US. Apple has the money to build the plants in the US and slowly build everything up here. I'm aware of the lack of people to do the job, etc. However, this is all just bullshit. Time and money fix that. If you were paying these people $20-$30 an hour with decent benefits, the problem of a lack of labor would solve itself fast. It pisses me off hearing people say 'this is no longer possible here'. No, it's possible, we just need the people with the money to start re-investing that money in our country. No, it won't instantly happen. I get that.

And yes, I'll repeat it: I have no problem paying two to three times the cost of whatever product you can name if it were actually made in the USA. It's the same reason I have no problem buying expensive watches and certain cars. Where something is built matters to me. I don't give a flying fuck about people in China. People in the UK, Ireland, most of western Europe, the US & Canada? I care about those people because we generally share the same values.
I'm not sure you could really do that even with labor solved and robotics out of the picture. Many of the raw resources (like rare earths) aren't in the US. And if they were, could you convince every single contractor to set up shop in the US, or find equivalents? I wouldn't rule out that last bit, but the most likely scenario would be having final assembly in the US while most of the pipeline remains in countries like China and Vietnam.
 
I'm not sure you could really do that even with labor solved and robotics out of the picture. Many of the raw resources (like rare earths) aren't in the US. And if they were, could you convince every single contractor to set up shop in the US, or find equivalents? I wouldn't rule out that last bit, but the most likely scenario would be having final assembly in the US while most of the pipeline remains in countries like China and Vietnam.
From what I'm aware of, most of those rare earth materials are present in the US/Canada. The issue is regulations that prevent anyone from mining them in those areas. I get the environmental impact concerns, but honestly, I'd rather we control and impact our own lands and provide the labor, along with the national security of having our own production, versus the money ultimately going to the PRC in some operation in Africa or Asia.
 
From what I'm aware of, most of those rare earth materials are present in the US/Canada. The issue is regulations that prevent anyone from mining them in those areas. I get the environmental impact concerns, but honestly, I'd rather we control and impact our own lands and provide the labor, along with the national security of having our own production, versus the money ultimately going to the PRC in some operation in Africa or Asia.
The problem is that a lot of the places where it is found in high concentration are really close to popular urban areas. So you have what were multi-million-dollar homes now worth less than a quarter of that because of a new mine 50km away.
North American cities are just bad: they popped up where they did because the area was rich in natural resources, so a city formed around it, and now we can't access those resources because a city has grown on top of them.
 
The problem is that a lot of the places where it is found in high concentration are really close to popular urban areas. So you have what were multi-million-dollar homes now worth less than a quarter of that because of a new mine 50km away.
North American cities are just bad: they popped up where they did because the area was rich in natural resources, so a city formed around it, and now we can't access those resources because a city has grown on top of them.
A lot of wealthy people use real estate as a way to maintain their wealth, and even grow it. I like the nuclear power debate, where people are for it, but no multi-million-dollar neighborhood wants a plant within a 50km radius. So they're hoping to stick it near the poor. If real estate weren't such an overvalued commodity then we could mine those resources.
 
I still have a PowerBook G4 that I tried putting Ubuntu 16.04 on. It isn't something you can use daily, and you can forget about watching YouTube videos. There were too many hoops to jump through just to get it to boot, let alone get all the features working. I still can't sleep the laptop without it crashing and staying on. This is what I mean when I say I hate dealing with architectures that aren't x86: the support for these devices on Linux is just not good. There's work on getting Linux onto the M1, but I suspect it'll never get good enough to be 100% usable. My Lenovo laptop runs Mint 20 perfectly, and I have zero issues getting it to do anything I want. When the M2 and M3 are released, you can bet that just makes porting Linux onto them more problematic. Who knows if Apple starts to block other OSes from being installed? The community can get very far without the support of the manufacturer, but you won't get everything working without issues without the manufacturer's help.

TBH trying to get any modern OS to run on a 30 year old system is problematic. But you are definitely [H] for trying.

That being said, I must disagree overall. We have all known that the shift to ARM was going to happen eventually as a byproduct of the market shift to mobile as the primary compute platform for the majority of users; Apple is just pushing it out there sooner than we expected. I honestly thought the market would shift earlier, when I started seeing Snapdragon CPUs outperforming some of Intel's i3 CPUs, which is all that 90% of the users of the world really need.

As for my setup, I have Parallels installed with Kali and Fedora up and running (basically one-click installs!) and they have no real noticeable performance hit. Mint does not have an ARM distro out, but since Ubuntu has had an ARM distro for years I can't believe it wouldn't be in the works as soon as there's enough of a user base that wants it. With all the Linux distros I have tried, if you give them 4 cores and 8GB of RAM you would be hard pressed to tell they are not installed on bare metal. On top of that, I can run them without the virtual machine drinking down my battery like a drunken sailor.

Ultimately, let's be honest here. This M1 chip is a first-generation CPU designed for ultra-mobile hardware, and you have to go to somewhere around an i7-10700K desktop CPU to outperform it, and even then only in multi-core, because the M1 is essentially a quad-core CPU. The M1 still eats that 10700K for breakfast in single-core, and the vast majority of applications are single-core or only take advantage of a small number of cores, which erodes any real-world benefit of that desktop CPU. But this is why I am excited for the M1X/Z and further iterations coming down the pipe. Not because I need more performance, but because with performance like this, the other manufacturers will have to get on board to compete; how could they not? When you start getting the 12-16 performance-core variants, they are going to vastly outperform desktop CPUs, and that is before you even take into account the GPU performance.

So let's get dirty and cheer Apple on, because what they are doing is great for all users. They are pushing and kicking the stagnant CPU and software markets in the direction they should have gone years ago, and it is going to benefit all of us in the end.
 
TBH trying to get any modern OS to run on a 30 year old system is problematic. But you are definitely [H] for trying.

That being said, I must disagree overall. We have all known that the shift to ARM was going to happen eventually as a byproduct of the market shift to mobile as the primary compute platform for the majority of users; Apple is just pushing it out there sooner than we expected. I honestly thought the market would shift earlier, when I started seeing Snapdragon CPUs outperforming some of Intel's i3 CPUs, which is all that 90% of the users of the world really need.

As for my setup, I have Parallels installed with Kali and Fedora up and running (basically one-click installs!) and they have no real noticeable performance hit. Mint does not have an ARM distro out, but since Ubuntu has had an ARM distro for years I can't believe it wouldn't be in the works as soon as there's enough of a user base that wants it. With all the Linux distros I have tried, if you give them 4 cores and 8GB of RAM you would be hard pressed to tell they are not installed on bare metal. On top of that, I can run them without the virtual machine drinking down my battery like a drunken sailor.

Ultimately, let's be honest here. This M1 chip is a first-generation CPU designed for ultra-mobile hardware, and you have to go to somewhere around an i7-10700K desktop CPU to outperform it, and even then only in multi-core, because the M1 is essentially a quad-core CPU. The M1 still eats that 10700K for breakfast in single-core, and the vast majority of applications are single-core or only take advantage of a small number of cores, which erodes any real-world benefit of that desktop CPU. But this is why I am excited for the M1X/Z and further iterations coming down the pipe. Not because I need more performance, but because with performance like this, the other manufacturers will have to get on board to compete; how could they not? When you start getting the 12-16 performance-core variants, they are going to vastly outperform desktop CPUs, and that is before you even take into account the GPU performance.

So let's get dirty and cheer Apple on, because what they are doing is great for all users. They are pushing and kicking the stagnant CPU and software markets in the direction they should have gone years ago, and it is going to benefit all of us in the end.
I am really hoping the Nvidia-MediaTek partnership goes somewhere. I think the two of them combined could put together some seriously interesting hardware options that Linux could really take advantage of for a good number of things.
 
That being said, I must disagree overall. We have all known that the shift to ARM was going to happen eventually as a byproduct of the market shift to mobile as the primary compute platform for the majority of users; Apple is just pushing it out there sooner than we expected. I honestly thought the market would shift earlier, when I started seeing Snapdragon CPUs outperforming some of Intel's i3 CPUs, which is all that 90% of the users of the world really need.
The problem for ARM is whether x86 can be as efficient, and it's just a matter of time before x86 is as efficient. Currently Intel is stuck on 14nm+++++ while AMD is actually ahead of Intel on 7nm. At that point, ARM will have to depend on being cheaper, and that doesn't always work out.
Mint does not have an ARM distro out, but since Ubuntu has had an ARM distro for years I can't believe it wouldn't be in the works as soon as there's enough of a user base that wants it.
There's MintPPC.
Ultimately, let's be honest here. This M1 chip is a first-generation CPU designed for ultra-mobile hardware, and you have to go to somewhere around an i7-10700K desktop CPU to outperform it, and even then only in multi-core, because the M1 is essentially a quad-core CPU. The M1 still eats that 10700K for breakfast in single-core, and the vast majority of applications are single-core or only take advantage of a small number of cores, which erodes any real-world benefit of that desktop CPU.
Most benchmarks show the 10700K to be faster in everything. What benchmarks have you seen where the M1 is faster?
But this is why I am excited for the M1X/Z and further iterations coming down the pipe. Not because I need more performance, but because with performance like this, the other manufacturers will have to get on board to compete; how could they not? When you start getting the 12-16 performance-core variants, they are going to vastly outperform desktop CPUs, and that is before you even take into account the GPU performance.
The faster a CPU performs, the more power-inefficient it becomes. The x86 CPUs we see are not concerned about power efficiency when you're talking about clock speeds of 5GHz. I suspect Apple's high IPC is due to them moving the RAM so close to the SoC, which lowers latency and increases performance. The downside of moving the RAM so close is that you can't upgrade it, since it's soldered onto the board. The M1 is fast, but with 8GB and 16GB of RAM it won't be as usable as x86 machines with 32GB+ of memory. If we're talking about GPU performance, then Apple will certainly need to use GDDR6 or even HBM2, which will hurt the IPC of the CPU.
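
To put a rough number on the latency thing, this is the kind of pointer-chasing microbenchmark people use to measure it. Just a sketch, the buffer size and hop count are arbitrary picks of mine, and it measures whatever machine you compile it on, nothing Apple-specific:

```c
/* Minimal pointer-chasing sketch: every load depends on the previous one,
   so the loop time is dominated by memory latency, not bandwidth.
   Buffer size and hop count are arbitrary; shrink the buffer until it fits
   in cache and the per-hop time drops sharply. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (64 * 1024 * 1024 / sizeof(size_t))   /* ~64MB working set */
#define HOPS 10000000UL

int main(void) {
    size_t *buf = malloc(N * sizeof(size_t));
    if (!buf) return 1;

    /* Sattolo's shuffle: one big cycle, so the prefetcher can't guess the next hop */
    for (size_t i = 0; i < N; i++) buf[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = buf[i]; buf[i] = buf[j]; buf[j] = tmp;
    }

    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < HOPS; i++)
        idx = buf[idx];                            /* serialized, dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (idx=%zu)\n", ns / HOPS, idx);
    free(buf);
    return 0;
}
```
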
So let's get dirty and cheer Apple on, because what they are doing is great for all users. They are pushing and kicking the stagnant CPU and software markets in the direction they should have gone years ago, and it is going to benefit all of us in the end.
That, Apple is doing, but it's not something Apple should welcome. Apple wasn't stupid for releasing the M1 back in late 2020 when they did. They knew the position Intel was in, first hand. They knew that AMD wouldn't have their Zen 3 mobile APUs for a long time. You know what they say about making a good first impression, and Apple certainly has done that.
 
The problem for ARM is whether x86 can be as efficient, and it's just a matter of time before x86 is as efficient. Currently Intel is stuck on 14nm+++++ while AMD is actually ahead of Intel on 7nm. At that point, ARM will have to depend on being cheaper, and that doesn't always work out.
Maybe, but I don't think so. I bet they could get x86 to be as efficient, but I simply don't believe they will be able to get it to be as efficient and as performant at the same time. The primary advantages x86 had were performance and compatibility, but ARM has been consistently gaining ground and closing that performance gap, and I think Apple making this switch will push the software compatibility over the edge with it. Ultimately, this M1 is really just a first-generation product that is just now starting to get scaled, and I think Intel is going to need more than a die shrink to not fall behind. When you consider that the vast majority of the market has moved to an almost entirely mobile environment, where the efficiency of these CPUs is a tremendous advantage, it seems pretty clear where the overall market will go. Even things like gaming have moved primarily to mobile, which has traditionally been a PC stronghold; I think mobile gaming is somewhere around 50% of the market. Virtually every software developer I know who works for a major company has told me their companies have switched to a mobile-first approach.

I didn't realize that existed. It is honestly pretty awesome that people make stuff like this. All the best geeks are Linux geeks.

Most benchmarks show the 10700K to be faster in everything. What benchmarks have you seen where the M1 is faster?
Nearly everything I have seen or tested myself, honestly. Just comparing the compile time of WebKit is jaw-dropping. When you look at browser tests, the performance gap is so large that it is not even comparable, and that is important because that is where the vast majority of real-world interaction and usage takes place. Although I prefer real-world performance over benchmark battles, Geekbench bears this gap out as well. My MBP lands a single-core score of ~1750 and a multi-core score of ~7650. The 10750H in my MSI has a single-core score of ~1130 and a multi-core score of ~5400 (plugged in, lower on battery, which is how I tested the MBP). The 10700K desktop CPU lands a single-core score of ~1340 and a multi-core score of ~8860. That is important because the M1 handily trounces the full six-core Intel and is just under 13.7% slower than the eight-core desktop CPU in multi-core, despite the M1 essentially being a quad-core CPU for the most part. It is faster than both CPUs in single-core by a very wide margin. Since most software applications run on a single core, or take advantage of a small number of cores (rarely more than four in my experience), the M1 is going to be faster than even the 10700K in virtually every instance.

The faster a CPU performs, the more power-inefficient it becomes. The x86 CPUs we see are not concerned about power efficiency when you're talking about clock speeds of 5GHz. I suspect Apple's high IPC is due to them moving the RAM so close to the SoC, which lowers latency and increases performance. The downside of moving the RAM so close is that you can't upgrade it, since it's soldered onto the board. The M1 is fast, but with 8GB and 16GB of RAM it won't be as usable as x86 machines with 32GB+ of memory. If we're talking about GPU performance, then Apple will certainly need to use GDDR6 or even HBM2, which will hurt the IPC of the CPU.
Ya, that is basically what they did. But as I said, it is just starting to be scaled up, and the new MacBook Pros, Minis, and Mac Pros are going to support 64GB of RAM and something ridiculous like 32 to 40 performance and GPU cores. Since the average user will be hard pressed to use even 8GB of RAM on a Mac (although they can easily surpass that on Windows, lol), I think it is more than enough. I am easily in the power-user category and have never used more than 27GB of RAM, even pushing VMs and SolidWorks/AutoCAD/CATIA.

That, Apple is doing, but it's not something Apple should welcome. Apple wasn't stupid for releasing the M1 back in late 2020 when they did. They knew the position Intel was in, first hand. They knew that AMD wouldn't have their Zen 3 mobile APUs for a long time. You know what they say about making a good first impression, and Apple certainly has done that.
Why should Apple not welcome it? All of us should welcome it, especially for the software industry. As mobile takes more and more market share these gains will be better for everyone.
 
Again, I would be happy to pay upwards of $3k for whatever the latest iPhone was if that meant it was fully assembled in the US. Apple has the money to build the plants in the US and slowly build everything up here. I'm aware of the lack of people to do the job, etc. However, this is all just bullshit. Time and money fix that. If you were paying these people $20-$30 an hour with decent benefits, the problem of a lack of labor would solve itself fast. It pisses me off hearing people say 'this is no longer possible here'. No, it's possible, we just need the people with the money to start re-investing that money in our country. No, it won't instantly happen. I get that.

And yes, I'll repeat it: I have no problem paying two to three times the cost of whatever product you can name if it were actually made in the USA. It's the same reason I have no problem buying expensive watches and certain cars. Where something is built matters to me. I don't give a flying fuck about people in China. People in the UK, Ireland, most of western Europe, the US & Canada? I care about those people because we generally share the same values.

I would pay more for it to be manufactured locally, but not 2-3x the amount. Unless it was fully repairable. $3K for a phone which may die after the 1-year warranty, and you can't fix it because the .02c sensor is dead. Well, you could fix it after you buy a non-working phone from eBay for $600.

Make it locally, get rid of the anti-repair bullshit, and require all parts in the devices to be sold to the public. Code included as well; fuck them for soft-locking features because it's not an OEM part (that you can't buy anyway).
 
Maybe, but I don't think so. I bet they could get x86 to be as efficient, but I simply don't believe they will be able to get it to be as efficient and as performant at the same time.
AMD and Intel have done it before. It's just a matter of time.
The primary advantages x86 had were performance and compatibility, but ARM has been consistently gaining ground and closing that performance gap.
Not entirely. There have been other CPU architectures that proved to be better but ultimately didn't win out. Think of x86 as the Chevy 350 of computers: it's old and others have made better designs, but it's extremely easy to work with and there are plenty of parts available for it. It's mostly x86's relationship with the IBM compatible, as in most PCs are IBM compatibles. You can build a faster PC with more RAM and drive storage. You can also do it for less than an Apple M1. You can install any OS you want onto x86, including Mac OS X. Since x86 has such high demand, AMD and Intel supply it, despite x86 being old and clunky. You aren't going to get that from ARM just because Apple caught up to x86 performance. PowerPC caught up to x86 and it's dead now.


When you consider that the vast majority of the market has moved to an almost entirely mobile environment, where the efficiency of these CPUs is a tremendous advantage, it seems pretty clear where the overall market will go.
The overwhelming majority of the market is also cell phones, which everyone needs today. That's like saying the overwhelming majority of people are buying electric cars and therefore gas is old hat, when the reality is gas prices are high and people need cars. The software on mobile is horrible, as you don't exactly get updates like on PC. High market share doesn't mean people are moving towards ARM intentionally.
Even things like gaming have moved primarily to mobile, which has traditionally been a PC stronghold; I think mobile gaming is somewhere around 50% of the market. Virtually every software developer I know who works for a major company has told me their companies have switched to a mobile-first approach.
Yes, and the majority of their games are Bejeweled clones. Most of the games on mobile are false advertising. I've seen what's on mobile, and I'm not worried.

Ya, that is basically what they did. But as I said, it is just starting to be scaled up, and the new MacBook Pros, Minis, and Mac Pros are going to support 64GB of RAM and something ridiculous like 32 to 40 performance and GPU cores. Since the average user will be hard pressed to use even 8GB of RAM on a Mac (although they can easily surpass that on Windows, lol), I think it is more than enough. I am easily in the power-user category and have never used more than 27GB of RAM, even pushing VMs and SolidWorks/AutoCAD/CATIA.
I imagine if Apple does push for 32+ cores then they'll have manufacturing issues. AMD solved this with the chiplet design, which they will hopefully be using for GPUs too. Though the chiplet design does increase latency, which will decrease performance. It will be interesting to see what rabbit Apple will pull out of their hat.
Why should Apple not welcome it? All of us should welcome it, especially for the software industry. As mobile takes more and more market share these gains will be better for everyone.
For Apple, I mean. For us it'll work out better because competition does bring better products, which we've needed badly for nearly 10 years now.

I would pay more for it to be manufactured locally, but not 2-3x the amount. Unless it was fully repairable. $3K for a phone which may die after the 1-year warranty, and you can't fix it because the .02c sensor is dead. Well, you could fix it after you buy a non-working phone from eBay for $600.

Make it locally, get rid of the anti-repair bullshit, and require all parts in the devices to be sold to the public. Code included as well; fuck them for soft-locking features because it's not an OEM part (that you can't buy anyway).
You guys need to watch this video and stop pretending that Apple isn't the worst company when it comes to slavery and how they treat their workers.
 
You guys need to watch this video and stop pretending that Apple isn't the worst company when it comes to slavery and how they treat their workers.


Can you please stop with the "watch this video and you will instantly think the way I do" crap? It doesn't help your argument, and makes you sound as bad as the people who fall down anti-vax and QAnon rabbit holes on YouTube. If you can't articulate your point with your own words, you probably don't understand things as well as you think you do.

And no, Apple is not "the worst company." It's just one of the most prominent examples of companies with labor problems it has to address. There's a good chance many of the devices you use are made in similar conditions; you just pretend they're made in great conditions because those companies are either smaller or aren't as closely scrutinized. If you promised to use tech only made by people who work reasonable hours in good conditions for fair pay... well, you probably wouldn't own much tech, unfortunately.
 
Can you please stop with the "watch this video and you will instantly think the way I do" crap? It doesn't help your argument, and makes you sound as bad as the people who fall down anti-vax and QAnon rabbit holes on YouTube. If you can't articulate your point with your own words, you probably don't understand things as well as you think you do.
I'm supporting my argument with facts. I get that you Apple people don't like facts, but they are, in fact... factual. If I link articles, will you believe them? If I tell you what's going on, will you believe me? You don't want to admit that you're supporting not only a bad company who makes bad products, but also unethical employment and even slavery.
And no, Apple is not "the worst company." It's just one of the most prominent examples of companies with labor problems it has to address. There's a good chance many of the devices you use are made in similar conditions; you just pretend they're made in great conditions because those companies are either smaller or aren't as closely scrutinized. If you promised to use tech only made by people who work reasonable hours in good conditions for fair pay... well, you probably wouldn't own much tech, unfortunately.
So instead of supporting your argument with facts, you're white knighting Apple. The video explains that Apple is by far the worst offender. Just because other companies do it doesn't excuse that Apple does it worse. The video even mentions the white-knighting arguments made for Apple. If Asus is caught using children to put together HDMI ports like Apple was, I can go with another company for my computer. If Dell doesn't pay their factory workers in India for a year like Apple did, then I can choose to go with another competitor. How many Apple products can you buy that aren't from Apple? You're an Apple user and therefore at the bottom of the morality ladder. You're not going to change that just because other companies do it too, because Apple does it by far the worst. Apple has the money to be the best when it comes to preventing child labor and employment abuse, but instead they are the worst. There's a reason why Apple is so financially valuable when they put so little value on human life.
 
I thought this thread was about M1's successor. This is like someone made a thread about which burger tastes the best and some PETA guy hijacking the thread and keeps on arguing about why consuming meat is evil. :rolleyes:
 
ARM is the superior ISA. That is just the plain and honest truth. x86 has a gaming industry built around it and an ecosystem of builder-friendly parts. At the end of the day, however, it's been true since the first Acorn ARM chips: pound for pound, ARM wins... no one has ever built a heavyweight ARM chip, is all. However, it seems like Apple is planning to change that.

Yes, Intel and AMD have slimmed x86 down... added RISC bits and pushed their branch prediction enough to make up for x86's inherent drawbacks.

However, at the end of the day, things are not going to change... x86 is still going to need a very good branch prediction engine and a ton of fast cache, and it's still going to throw out 20-30% of the work it does anyway. (Big caches matter more to x86 for a reason.) ARM's pipes are growing, as they have in the M1, but they're still not going to need overly complicated or perfectly refined branch prediction, and large caches are not required for high performance. (Most people probably don't realize that most ARM chips don't have branch prediction at all; it accounts for a massive chunk of x86 silicon.)
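
If anyone wants to actually feel how much mispredicted branches cost, the classic demo is summing the same random data before and after sorting it. Rough sketch only: the array size and threshold are arbitrary, and an aggressive compiler can turn the branch into a cmov and hide the effect, so build it at -O1 or so:

```c
/* Classic branch-prediction demo: the same loop over the same data runs far
   faster once the data is sorted, because the "v >= 128" branch becomes
   predictable instead of being mispredicted roughly half the time.
   Array size and threshold are arbitrary choices for the sketch. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

static int cmp(const void *a, const void *b) {
    return *(const unsigned char *)a - *(const unsigned char *)b;
}

static double time_sum(const unsigned char *data, long long *out) {
    struct timespec t0, t1;
    long long sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int pass = 0; pass < 10; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128) sum += data[i];   /* hard to predict on random data */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    *out = sum;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    unsigned char *data = malloc(N);
    if (!data) return 1;
    for (int i = 0; i < N; i++) data[i] = rand() & 0xFF;

    long long s1, s2;
    double unsorted = time_sum(data, &s1);
    qsort(data, N, 1, cmp);                       /* same values, now predictable */
    double sorted = time_sum(data, &s2);

    printf("unsorted: %.2fs  sorted: %.2fs  (sums %lld / %lld)\n",
           unsorted, sorted, s1, s2);
    free(data);
    return 0;
}
```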

Like it or not, it's simple physics. The Acorn guys set out to design an ISA they could build inexpensively, on old fab processes at the time, and still have something competitive. That has never changed... it's an elegant ISA that simply does more with less. And it turns out that not storing tons of useless data in a cache, or burning tons of cycles calculating predicted work that never gets used, draws a shit-ton less power and uses much less silicon space. (Modern ARM has also gotten really good at running the repeated stuff that would be stored or predicted on x86 through even more cut-down, efficient cores.)

Alder Lake is, IMO, the make-or-break point for x86. If Intel can PROVE that x86 can be as efficient with small cores with stripped-down branch prediction and cache, and intelligently give the big, inefficient cores the work that would benefit from them, then perhaps x86 has a road to relevance long term. My bet is it's going to be a mess... I can't imagine Intel's internal chip scheduling is going to manage that well enough to really take on chips like the M1. I would be quite happy to be wrong... I suspect I'm not. Alder Lake will probably suck, especially if Apple drops an M2 aimed at Mac Pros on the market around the same time. We'll find out soon enough.
 
Another point on Alder Lake... a lot of people think it's all about battery life, and for sure that should be one benefit. However, the real uplift in performance should come from the changes they make in branch prediction.

One of the biggest advantages of big.LITTLE on ARM isn't about running low-requirement stuff on smaller cores. It's about shuffling simple bits from larger jobs to very short-pipeline, efficient cores instead of running everything through a big-pipe core and storing the result in a cache. Cache burns power like a mother. x86 chips store tons of data in L1, L2 and L3 caches in case they need it again, so they don't have to run it back through a big pipeline; they just pull it from cache. ARM chips don't store much of anything, burning no power on things they won't need; if it turns out a simple bit of math is needed again, it just gets run through one of the very short-pipeline, efficient cores, and the performance difference versus an L3 cache is pretty much a wash. Yes, L1 caches are super fast, and high-performance ARM chips will keep some of that top-level cache. But L3 is a system designed to fix a flaw in x86 design.

Back to Alder Lake... the cut-down small cores on Alder Lake, if Intel really has something special, will be used more to replace a big chunk of the branch prediction system than to specifically run in battery-saving modes. Instead of having a super-aggressive branch predictor that is doing 30% extra work and saving results in caches, they may well reduce that to a minimum and run the type of things normally stored in an L3 cache through the smaller-pipeline cores.

As much as I say I think it will be a mess and a flop, IF Intel actually does pull off big.LITTLE in the way ARM chips actually work, and has some sort of hybrid that reduces BP, and the need for massive power-sucking caches, to a minimum, it may actually prove to be pretty cool. I mean, it sounds like the core counts will still be lowish, and it may not be super impressive at the top end. But I can see the potential.
 
I thought this thread was about M1's successor. This is like someone made a thread about which burger tastes the best and some PETA guy hijacking the thread and keeps on arguing about why consuming meat is evil. :rolleyes:
Well, it is somewhat relevant to point out Apple's moral problems. I mean, they do exist in the PC market... it's just like Duke says: if you have issues with Lenovo using Chinese slaves to build laptops, you can choose another PC maker who didn't use Hefei Bitland to build machines.

If you buy into Apple's ecosystem, you buy Apple, no other option... even if the M2 is the best thing ever. If the cameras on the machines or the touch screens continue to be made at, say, O-Film in China, ya, there is a good chance a slave built it for you. Also, if you own an iPhone built in the last few years, ya, good chance a Uyghur transferred to work there by the CCP helped build the camera, the touch screen, and potentially a few other bits.

To be fair though... ya, that problem affects so much of the PC market as well that it is almost impossible at this point to buy a phone, laptop, or even desktop that doesn't have the same supply chain issues. As bad as some of the sourcing on iPhone parts is, Samsung also bought all their touch screens from O-Film... the moral argument against basically all the big tech players at this point is pretty much the same.
 
I'm supporting my argument with facts. I get that you Apple people don't like facts, but they are, in fact... factual. If I link articles, will you believe them? If I tell you what's going on, will you believe me? You don't want to admit that you're supporting not only a bad company who makes bad products, but also unethical employment and even slavery.
No, you're not, and you're being childish.

The problem is ultimately that you're citing poor sources. A video like this is not an authoritative source; it's taking snippets from news pieces (I'm not disputing the reputable sources, like WaPo and AP) and putting a tremendous amount of spin on it while stripping out the context. Despite what you claim, it doesn't really touch on other tech companies and uses a sensationalist tone throughout. Hell, it even cites pieces where Apple is cutting off supplier ties and otherwise taking corrective action to improve labor conditions. It's not an in-depth investigation into labor conditions for the broader tech industry; it's a narrowly-focused piece built to rack up views from people who already agree with its "Apple is evil" premise.

Let me know when you find a video that also mentions Amazon's child labor issues, numerous violations in Samsung factories, and problems at laptop suppliers like Quanta (which serves Acer, HP and others in addition to Apple). You see the issue here? Facts absolutely matter, and Apple still has a lot of labor issues it needs to address. But you need a complete set of facts that acknowledge complexity and subtlety, and clickbait YouTube videos like this won't provide that. If you submitted this as your main evidence when I was marking research papers in university, I'd have given you a failing grade.

So instead of supporting your argument with facts, you're white knighting Apple. The video explains that Apple is by far the worst offender. Just because other companies do it doesn't excuse that Apple does it worse. The video even mentions the white-knighting arguments made for Apple. If Asus is caught using children to put together HDMI ports like Apple was, I can go with another company for my computer. If Dell doesn't pay their factory workers in India for a year like Apple did, then I can choose to go with another competitor. How many Apple products can you buy that aren't from Apple? You're an Apple user and therefore at the bottom of the morality ladder. You're not going to change that just because other companies do it too, because Apple does it by far the worst. Apple has the money to be the best when it comes to preventing child labor and employment abuse, but instead they are the worst. There's a reason why Apple is so financially valuable when they put so little value on human life.
White-knighting would be letting Apple off the hook; I'm not. It uses suppliers that commit serious labor violations, and its corrective measures sometimes aren't enough. But when you pretend that Apple is "by far" worse than others, even though real evidence shows otherwise, you're really just trying to excuse your own choices. You know damn well that your Android phone, your Windows PC and other tech were likely made in conditions that weren't much better (if at all) than for Apple gear. Have you looked into where and how your devices are made? Don't just assume; check.
 
I'm well aware. That's why they need to start switching now and slowly build it all back up here. We have all the materials here. The issue is the plants were all shut down decades ago. With the amount of money Apple has, there is no excuse for them not to be dumping it across the US to rebuild everything, from mines to assembly factories, along with paying decent wages.

If Apple did this, I would have to seriously re-evaluate my resolve to never buy an Apple product.
 
Not entirely. There have been other CPU architectures that proved to be better but ultimately didn't win out. Think of x86 as the Chevy 350 of computers: it's old and others have made better designs, but it's extremely easy to work with and there are plenty of parts available for it. It's mostly x86's relationship with the IBM compatible, as in most PCs are IBM compatibles. You can build a faster PC with more RAM and drive storage. You can also do it for less than an Apple M1. You can install any OS you want onto x86, including Mac OS X. Since x86 has such high demand, AMD and Intel supply it, despite x86 being old and clunky. You aren't going to get that from ARM just because Apple caught up to x86 performance. PowerPC caught up to x86 and it's dead now.
This is mostly nonsense. This is like saying that because we have been building houses out of pine 2x4s since the 1800s, no modern build style will ever be as strong or versatile as stick-frame houses. It is just not true to say that because x86 came decades earlier, no architecture will ever be as good or as fast as x86 can be. ARM is decades behind x86 in expansion and refinement, but it is progressing much faster because it is ultimately a superior architecture. It will eventually surpass x86, and in many cases already has, just like a great many of the people on this board have said it would. Like I said in my previous post, this 10W CPU designed for an ultra-portable device has relative compute power somewhere in the ballpark of a desktop 10700K while simultaneously having the GPU power of a ~1050 Ti graphics card. No x86 processor can even remotely touch that, and we are just now starting to scale these types of processors... I think next week-ish they are announcing the new variants with 12-20 compute cores and many more GPU cores. I guess we will see if I am right when reviewers start getting their hands on them.

ARM is the superior ISA. That is just the plain and honest truth. x86 has a gaming industry built around it and an ecosystem of builder-friendly parts. At the end of the day, however, it's been true since the first Acorn ARM chips: pound for pound, ARM wins... no one has ever built a heavyweight ARM chip, is all. However, it seems like Apple is planning to change that.
This is what so many of us have been saying for years. Apple is just forcing the change on a stagnant market that has been increasingly sliding towards mobile anyway.



The overwhelming majority of the market is also cell phones, which everyone needs today. That's like saying the overwhelming majority of people are buying electric cars and therefore gas is old hat, when the reality is gas prices are high and people need cars. The software on mobile is horrible, as you don't exactly get updates like on PC. High market share doesn't mean people are moving towards ARM intentionally.

That mobile game is no more representative of the mobile market than the 11 Leisure Suit Larry games are of the PC market. Call of Duty, Minecraft, PUBG, basically all of the Final Fantasy games, etc. are mobile games now. The exciting thing really is that this processor has more graphical and waaay more processing power than a PS4, in a form factor you can put in a tablet...
 
However, at the end of the day, things are not going to change... x86 is still going to need a very good branch prediction engine and a ton of fast cache, and it's still going to throw out 20-30% of the work it does anyway. (Big caches matter more to x86 for a reason.) ARM's pipes are growing, as they have in the M1, but they're still not going to need overly complicated or perfectly refined branch prediction, and large caches are not required for high performance. (Most people probably don't realize that most ARM chips don't have branch prediction at all; it accounts for a massive chunk of x86 silicon.)
ARM has branch prediction, otherwise ARM wouldn't be susceptible to Spectre. Old Intel Atoms don't, so they aren't affected by Spectre. Large caches are not unique to x86, or a fault of x86; the purpose of cache in a CPU is to hide the latency of going to RAM. Even the Apple M1 has 12MB of L2 cache. You just see it more with x86, because removable RAM increases latency, AMD's chiplet design increases latency, and faster RAM increases latency.
Alder Lake is, IMO, the make-or-break point for x86. If Intel can PROVE that x86 can be as efficient with small cores with stripped-down branch prediction and cache, and intelligently give the big, inefficient cores the work that would benefit from them, then perhaps x86 has a road to relevance long term. My bet is it's going to be a mess... I can't imagine Intel's internal chip scheduling is going to manage that well enough to really take on chips like the M1. I would be quite happy to be wrong... I suspect I'm not. Alder Lake will probably suck, especially if Apple drops an M2 aimed at Mac Pros on the market around the same time. We'll find out soon enough.
Firstly, I wouldn't put any value on anything Intel does for another year or two, just because the company is in a huge mess right now. I'd put more value on efficient x86 CPUs from AMD, since they already have a huge head start. Secondly, Intel already had an efficient mobile SoC, the dual-core Atom Z2520, and you can find it in Asus ZenFones. Intel gave up on it for the same reason Nvidia gave up on Tegra: Qualcomm is dominant in the market. Samsung pretty much gave up on Exynos because Qualcomm is better.
 
It is just not true to say that because x86 came decades earlier, no architecture will ever be as good or as fast as x86 can be.
I'm not saying that. Nobody is saying that. Going back to my example of the Chevy 350 with its pushrod 16-valve design, you would think that Toyota, Honda, etc. would be light years ahead of GM when it comes to making engines. Yet the constant evolution of the Chevy 350 became the LS1, which is one of the greatest engines ever made. Still 16 valves with pushrods, but you won't find many 32-valve overhead-cam engines able to produce more power and yet be as reliable. Not because the pushrod design is superior, but because the market demands it and GM invested in improving this outdated, ancient design.

Nobody has made an ARM ATX motherboard with a UEFI boot loader. Most IT and tech support people don't want to deal with the proprietary designs that are usually associated with ARM. ARM is a better design, but it isn't a better ecosystem. PowerPC was like ARM in that it was superior, but then Intel got their act together, stopped pretending the Pentium 4 was fast and efficient, and made the Core Duo, which was. AMD comes in with a 64-bit version of x86, and Intel takes it and creates the Core 2 Duo. This relationship that x86 has with AMD and Intel breeds innovation and keeps the old, clunky x86 architecture relevant today. ARM realistically doesn't have that kind of competition, especially Apple's M1. I say this after AMD fucked up with Bulldozer for years and Intel decided to stop innovating, which is why they're stuck on 14nm.
That mobile game is no more representative of the mobile market than the 11 Leisure Suit Larry games are of the PC market. Call of Duty, Minecraft, PUBG, basically all of the Final Fantasy games, etc. are mobile games now.
Call of Duty and PUBG aren't exactly the same games on mobile when compared to console and PC. We're comparing a 5.5GB Call of Duty install for Android to the 100GB install for Call of Duty: Warzone on PC. Minecraft will run on anything. Also, portable consoles like the PS Vita and 3DS were able to play games like these, but everyone knew they weren't 100% the same. Most phones don't have 100GB of free storage, let alone enough storage for multiple AAA games. The nearest thing to what's on PC and console was the Nvidia Shield TV, and that wasn't anywhere near what PC and console could do.
The exciting thing really is that this processor has more graphical and waaay more processing power than a PS4, in a form factor you can put in a tablet...
So when can I expect to play Resident Evil 8 on Android? Probably never, as most mobile games are too busy being free crap meant to hook whales into their micro-transaction stores. It may have the processing and graphics power of a PS4, but you won't get the games of a PS4, just because of the OS, RAM, and storage.
 
ARM has branch prediction, otherwise ARM wouldn't be susceptible to Spectre. Old Intel Atoms don't, so they aren't affected by Spectre. Large caches are not unique to x86, or a fault of x86; the purpose of cache in a CPU is to hide the latency of going to RAM. Even the Apple M1 has 12MB of L2 cache. You just see it more with x86, because removable RAM increases latency, AMD's chiplet design increases latency, and faster RAM increases latency.

Firstly, I wouldn't put any value on anything Intel does for another year or two, just because the company is in a huge mess right now. I'd put more value on efficient x86 CPUs from AMD, since they already have a huge head start. Secondly, Intel already had an efficient mobile SoC, the dual-core Atom Z2520, and you can find it in Asus ZenFones. Intel gave up on it for the same reason Nvidia gave up on Tegra: Qualcomm is dominant in the market. Samsung pretty much gave up on Exynos because Qualcomm is better.
Not that I disagree with anything you are saying. But for the record, the M1 is an oddity. The Qualcomm laptop chips, as an example, have 1MB of L2, as do other Cortex-A75-based ARM cores. This is probably a big part of the reason they suck at emulating x86 code and Apple's don't. I would suggest that even in Apple's case the M1 doesn't require anywhere close to the entirety of the L2 they have slapped on it while running ARM-compiled code. I believe it's probably there more for x86 emulation than anything.

You are right, of course, that ARM's performance cores have simplified branch prediction. ARM doesn't even call it BP, they call it program flow prediction... but anyway, yes, the longer-pipe cores have prediction of some sort; the smaller-pipe cores like the A55 and the like do not. So indeed they have cache and they do use some basic BP (Apple's may be beefed up, no idea), but they have had more than a decade of refining the internal schedulers that decide whether to use conditional branches or to simply feed the cut-down short-pipe cores. My point being that the "efficient" cores on ARM are not exactly what people assume; yes, they draw less power, but stuff the big cores are crunching still engages the smaller cores, using them in the same way an AMD chip might use a massive L3 cache. AMD and Intel crunch a ton more conditional branches and shunt the data to the L2 and L3. ARM does far less prediction, which means it runs into situations where it's missing something, but instead of running some basic calc back through the 13-stage A75 core it will run it through an 8-stage A55.

Apple doesn't detail specifics; I'm sure the M1 does a real good job of dividing that work up between its large and small cores. Apple does have to have some secret sauce at work there, especially when it's dealing with x86 code. I agree Intel's next chip may be a complete wash. Still, there is a very outside chance that they have figured out how to get little cores working for x86. I know everyone at first, like with ARM, just assumes those little cores are there to save a bit of battery when your device is doing next to nothing... but ARM little cores are so much more than that. I admit I'm intrigued by Alder Lake. Ya, it's probably going to be terrible and confused, and the marketing is going to be even worse. There's an outside chance, though, that it is the next step in solving some of the biggest x86 flaws. It may allow them to simplify the BP engine a bit, in favor of a smarter core scheduler that can fully utilize x86 cores with drastically smaller pipelines. If it can feed the simple stuff to full x86 cores with pipelines half the size, as ARM does, we might be surprised by the uplifts in real-world stuff. Intel has talked about a new hardware scheduler for Alder Lake... if they have pulled that off right, then instead of thinking of a 6/2 chip, for example, as 6 big and 2 little, it's more like 6 cores with 2 co-processors; there is potential that it could actually boost single-core performance. (And again, Apple doesn't talk about specifics, as we all know... I think it's a safe bet, based on what we know about ARM in general, to assume the M1's great single-thread performance probably owes a bit to great internal hardware scheduling that is allowing those small cores to work as essentially math coprocessors.)

Anyway, no matter what happens at Intel, it will be interesting. It's either going to be a big step up for x86 that AMD will be forced to copy, or it will be a massive, sad bust. Either way it will be entertaining. lol
 
Not that I disagree with anything you are saying. But for the record, the M1 is an oddity. The Qualcomm laptop chips, as an example, have 1MB of L2, as do other Cortex-A75-based ARM cores. This is probably a big part of the reason they suck at emulating x86 code and Apple's don't. I would suggest that even in Apple's case the M1 doesn't require anywhere close to the entirety of the L2 they have slapped on it while running ARM-compiled code. I believe it's probably there more for x86 emulation than anything.
I'm not going to pretend to know how Apple was able to get good performance with x86 emulation, but I did hear they implemented something around memory access (reportedly hardware support for x86's stricter memory-ordering model). Whatever Apple did, they put x86-like features into the M1 in order to perform well in x86 emulation. Even if they did put more cache in to emulate x86 better, it may be due to emulation in general. Most emulators on PC like to cache a lot of stuff in order to perform better, so it may just be a feature that helps with emulation performance. Emulating the Switch on PC can create gigs worth of cache, for example. Without knowing the details of how Apple did it, we can only speculate.
Apple does have to have some secret sauce at work there, especially when it's dealing with x86 code.
Not a big secret: whatever Apple did, they implemented some x86 features in the M1. The best way to emulate something is to have hardware that's similar to it. For example, graphics in emulators are technically cheating by not actually emulating the hardware. Since most GPUs are similar enough in functionality, the trick is to push the texture data straight to OpenGL or Vulkan and then give the GPU a similar instruction that performs the same function. Older consoles like the Sega Genesis and SNES do not have graphics chips that are anything like a PC's, so that hardware needs to be emulated the hard and slow way. It's the same problem with the Sega Saturn: nothing on PC uses quads, so emulation is harder and more CPU-intensive. So Apple could have implemented some x86 functions in the M1 in order to perform better in x86 emulation, because traditional emulation would be dog slow. Look at how poorly Microsoft's Qualcomm-based Surface tablets perform with x86 emulation.
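
If it helps picture what "the hard and slow way" means: a pure interpreter has to fetch, decode, and dispatch every single guest instruction in software. Here's a toy sketch with a made-up three-instruction guest ISA, which has nothing to do with how Apple or anyone else actually does it, just to show where the overhead comes from:

```c
/* Toy interpreter for an invented 3-instruction guest ISA, to show the
   per-instruction decode/dispatch overhead a pure software emulator pays.
   The opcodes and registers are made up for illustration; real emulators
   avoid this cost by translating blocks of guest code ahead of time or
   caching the translations. */
#include <stdio.h>
#include <stdint.h>

enum { OP_LOADI, OP_ADD, OP_HALT };            /* invented opcodes */

typedef struct { uint8_t op, dst, src; int32_t imm; } GuestInsn;

int main(void) {
    int32_t regs[4] = {0};
    const GuestInsn program[] = {
        { OP_LOADI, 0, 0, 5 },                 /* r0 = 5   */
        { OP_LOADI, 1, 0, 7 },                 /* r1 = 7   */
        { OP_ADD,   0, 1, 0 },                 /* r0 += r1 */
        { OP_HALT,  0, 0, 0 },
    };

    /* Fetch-decode-execute loop: every guest instruction costs a load,
       a branch on the opcode, and a store, even for trivial work. */
    for (size_t pc = 0; ; pc++) {
        const GuestInsn *in = &program[pc];
        switch (in->op) {
        case OP_LOADI: regs[in->dst] = in->imm;              break;
        case OP_ADD:   regs[in->dst] += regs[in->src];       break;
        case OP_HALT:  printf("r0 = %d\n", (int)regs[0]);    return 0;
        }
    }
}
```
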
Still, there is a very outside chance that they have figured out how to get little cores working for x86.
It's not hard, just look at the Intel Atom chips. In order to gain IPC you need to spend a lot of resources. The reason x86 chips have large caches is because x86 is all about IPC. Higher clock speeds waste electricity but buy you performance. Branch prediction is all about spending (and sometimes wasting) work up front in order to increase IPC. Intel will do what they did with the Larrabee project and use a bunch of Pentium-like cores to do x86, because they are far more efficient.
I think it's a safe bet, based on what we know about Arm in general, to assume the M1's great single-thread performance probably owes a bit to great internal hardware scheduling that allows those small cores to work as essentially math coprocessors.
That's... not how things work. I explained this to the Good Old Gamer when he thought you could combine cores to get a super mega core with more IPC, but that's not how things work. A big part of Apple's single-threaded performance is distance from RAM: Apple mounted the RAM right next to the SoC, which has a huge impact on latency. Lower memory latency means the core spends less time waiting to fetch data and more time executing it. Branch prediction also helps IPC: instead of stalling at every branch, the CPU guesses which way it will go and speculatively executes ahead, and the L1/L2/L3 cache hierarchy keeps the data that speculation needs close by, so most accesses never have to go all the way out to RAM. (For the history: the original Pentium already had simple branch prediction; the Pentium Pro was the first Intel chip to pair it with aggressive out-of-order, speculative execution.) The M1 clearly speculates like every other modern high-performance core, and it is vulnerable to Spectre. Ever wonder why so many chips are vulnerable to Spectre? Lots of CPU designers have been using the same basic branch prediction and speculation design for years.
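If you want to actually see branch prediction earn its keep, here's the classic generic demo (nothing M1-specific, and the exact numbers will vary by CPU): the same loop over the same data runs far faster once the data is sorted, because the branch becomes predictable.

```c
/* Classic branch-prediction demo: summing only the "large" elements of an
 * array is much faster once the array is sorted, because the x >= 128 branch
 * becomes predictable. Generic demo, not specific to any one CPU. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

static long long sum_large(const int *data, int n) {
    long long sum = 0;
    for (int i = 0; i < n; i++)
        if (data[i] >= 128)      /* this is the branch the predictor must guess */
            sum += data[i];
    return sum;
}

static double bench(const int *data) {
    clock_t t0 = clock();
    volatile long long s = 0;            /* volatile keeps the work from being elided */
    for (int r = 0; r < 100; r++)
        s += sum_large(data, N);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    double unsorted = bench(data);        /* branch outcome is effectively random */
    qsort(data, N, sizeof *data, cmp);
    double sorted = bench(data);          /* branch outcome is almost always the same */

    printf("unsorted: %.2fs  sorted: %.2fs\n", unsorted, sorted);
    free(data);
    return 0;
}
```

One caveat: at higher optimization levels some compilers turn that branch into a conditional move, which hides the effect, so build at -O1 or check the generated assembly.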

AMD did say they're working on a similar setup to the M1, where they put the RAM right next to the SoC. That's great for IPC, but it sucks in that you can't upgrade the RAM. I'm surprised Apple M1 owners aren't pissed about having an 8GB laptop made in 2020 that you can't possibly ever upgrade. That RAM is also shared with the GPU, so it's even more limiting.

Anyway, no matter what happens at Intel, it will be interesting. It's either going to be a big step up for x86 that AMD will be forced to copy, or it will be a massive sad bust. Either way it will be entertaining. lol
Intel losing Apple as a customer and also gaining them as a competitor is going to be more interesting. Intel was a sleeping giant for 10 years that didn't have any real competition. AMD with Ryzen and Apple with the M1 are going to push them to make better products from now on. Hopefully cheaper too.
 
AMD did say they're working on a similar setup to the M1, where they put the RAM right next to the SoC. That's great for IPC, but it sucks in that you can't upgrade the RAM. I'm surprised Apple M1 owners aren't pissed about having an 8GB laptop made in 2020 that you can't possibly ever upgrade. That RAM is also shared with the GPU, so it's even more limiting.
It's half-annoying. Yeah, it means having to choose your RAM config very carefully and pay a premium, but users have also noted that 8GB of RAM on an Apple Silicon Mac goes further than it does on an Intel Mac. That is, there's not as much pressure to upgrade in the first place. My fiancée juggles several apps on her MacBook Air and doesn't ever seem to run into hitches.
 
It's half-annoying. Yeah, it means having to choose your RAM config very carefully and pay a premium, but users have also noted that 8GB of RAM on an Apple Silicon Mac goes further than it does on an Intel Mac. That is, there's not as much pressure to upgrade in the first place. My fiancée juggles several apps on her MacBook Air and doesn't ever seem to run into hitches.
Apple's M1 doesn't use less RAM. There is no Apple magic. If anything it may be using the SSD more often than it should. Realistically 8GB is enough for browsing the internet and playing several-year-old games. If you're dealing with serious content creation then you'll need more. The problem with letting Apple price the RAM for your M1 is that the price of more RAM goes up by a lot when you can't choose where to get it. Apple has done this in the past, and many people bought Macs and then upgraded the RAM themselves at a much cheaper price.

There are other downsides to moving the RAM right next to the SoC. The amount of RAM, for one, or did you not notice that in 2021 the most RAM you can have is 16GB? There's only so much RAM you can sit right next to the SoC. Then there's the GPU, which is fast but nowhere near as fast as the discrete AMD GPUs Apple used in the past. Apple can't just scale up their GPU cores and expect things to scale linearly. They'll have to put more RAM around the SoC and further away, which will hurt IPC on their CPU, or move the GPU away from the SoC. BTW, Apple's M1 design should look familiar to Vega and Fury owners. It's the same idea, but on a GPU only. Remember when people were upset that AMD's R9 Fury had only 4GB of VRAM? Same problem with the Apple M1: it wasn't more efficient than the 8GB R9 390 when it came to memory usage.

[attached image: Vega 10 XT]
 
There are other downsides to moving the RAM right next to the SoC. The amount of RAM, for one, or did you not notice that in 2021 the most RAM you can have is 16GB?
True, but just looking at physical geometry, the M1 has 8GB per side of the CPU, so you could go up to 32GB by using all four sides, and ignoring any other potential issues.
 
I'm not going to pretend to know how Apple was able to get good performance with x86 emulation, but I did hear they implemented something about memory access. Whatever Apple did, they put x86-like features into the M1 in order to perform well in x86 emulation. Even if they did add more cache to emulate x86 better, it may be due to emulation in general. Most emulators on PC like to cache a lot of stuff in order to perform better, so it may just be a feature that helps with emulation performance. Emulating the Switch on PC can create gigs worth of cache, for example. Without knowing the details of how Apple did it, we can only speculate.

Intel losing Apple as a customer and also gaining them as a competitor is going to be more interesting. Intel was a sleeping giant for 10 years that didn't have any real competition. AMD with Ryzen and Apple with the M1 are going to push them to make better products from now on. Hopefully cheaper too.

Not to go on tangents about Arm... yes, since Arm moved to DynamIQ (their second-gen big.LITTLE) they have included L3. The L3 is there because they moved the big and little cores into the same CPU complex. Yes, Arm can treat one big and one or more little cores as one CPU internally. This was one of the big changes Arm came up with for DynamIQ. I mean, it was mostly about being able to build SoCs with, say, 2 big cores and 4 small and other odd configs. But also all the cores, regardless of being big or little, now share the L3, which allows the little cores to operate like coprocessors. In fact that is one of the other big advancements of DynamIQ: big, little, AND accelerators can all access L3 directly. Meaning if you feed a bunch of math to an AI accelerator, as an example, it can write directly to L3 for retrieval or further processing on the other cores. There are obvious advantages to that for video, photo, and audio editing, etc. But in general, yes, you can use a small core as a coprocessor.

It is also an idea that the server Arm players have played with as well... Fujitsu, as an example, has 13 cores per complex on their A64FX supercomputer chip. The 13th core in each complex isn't exposed in software, so the OS sees 12 logical cores; Fujitsu calls the 13th cores assistant cores. So A64FX chips have either 2 or 4 assistant cores depending on the layout. They can be tasked with I/O and the like, but they can also be used to speed up single-threaded computation on the standard cores.

Now obviously you can't just glue endless amounts of cores together to make one big super single-threaded chip. big.LITTLE started off as a big core complex and a small core complex that didn't really talk to each other... and it has evolved into one integrated complex where work is not only dynamically split, but in some cases single-threaded operations can be fed to both big and little cores, much like old Pentium-era chips used coprocessors for specific math types. The difference now, of course, is that if a developer is going to request such a thing they are mostly sending work to accelerators anyway... since they can all write back into the same cache pool.

Apple actually moved that way before Arm proper. I believe as early as the A6 Apple had added an L3 cache... being Apple, I am not sure whether their accelerators could write to L3 at that point. But I'm sure they went that way long before the M1. The M1 is probably really the third or fourth chip Apple has made that can share cache like that with accelerators. To be fair, of course, I can't say 100% for sure that Apple is using their small cores as extended coprocessors for single-threaded stuff. I do believe they are... Fujitsu is, and Arm proper's DynamIQ is certainly capable of doing it.
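None of that hardware-level co-processing is something you can poke at directly from software, but if you want to experiment with steering work onto particular cores on a Linux big.LITTLE box, CPU affinity is the blunt instrument. A small sketch, assuming the little cores are CPUs 0-3; that numbering varies by SoC, so check /sys/devices/system/cpu/cpuN/cpu_capacity on your board:

```c
/* Sketch: restrict the calling thread to the "little" cores on a Linux
 * big.LITTLE system. Assumes the little cores are CPUs 0-3, which differs
 * between SoCs. This is plain userspace affinity; it does not expose the
 * internal hardware scheduling discussed above. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 4; cpu++)   /* assumed little-core IDs */
        CPU_SET(cpu, &set);

    /* pid 0 means "the calling thread" */
    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("now restricted to CPUs 0-3 (currently on CPU %d)\n", sched_getcpu());
    return 0;
}
```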
 
Nobody has made an ARM ATX motherboard with UEFI boot loader.
This one, which is mini-ITX, has a UEFI boot loader:
https://shop.solid-run.com/product/SRLX216S00D00GE064H08CH/

I'm not going to pretend to know how Apple was able to get good performance with x86 emulation, but I did hear they implemented something about memory access. Whatever Apple did, they put x86-like features into the M1 in order to perform well in x86 emulation. Even if they did add more cache to emulate x86 better, it may be due to emulation in general. Most emulators on PC like to cache a lot of stuff in order to perform better, so it may just be a feature that helps with emulation performance. Emulating the Switch on PC can create gigs worth of cache, for example. Without knowing the details of how Apple did it, we can only speculate.
I know everyone keeps saying that Apple is using emulation, but it isn't: Rosetta 2 uses a translation layer, not an emulation layer.
This is how the M1 is getting such good performance, and the M1 does not have x86-like features in the architecture; unless there is a source that states that specifically, I have not seen any empirical evidence for it, at least thus far.

Translation ≠ Emulation

This is a great article showcasing the difference between translation and emulation.
From the article:
Now the question comes, how does this translation happen and how is Rosetta managing to run heavy x86 apps on ARM Macs seamlessly? You can attribute the main reason to the Ahead-of-time (AOT) compiler that Apple has deployed on Rosetta 2. Earlier with Rosetta in 2006, Apple was only using the Just-in-time (JIT) compiler for static binary translation. Now with the AOT compiler on Rosetta 2, Apple Silicon is able to translate and compile the code on the fly through dynamic binary translation.

Apple's M1 doesn't use less RAM. There is no Apple magic. If anything it may be using the SSD more often than it should.
Different CPU ISAs will use different size code-bases, so this isn't entirely true.
Firefox, for example, won't be the same install size, nor use the same RAM footprint across PPC, PPC64, x86, x86-64, ARM, and AArch64 - this is not specific to the M1.

For example, my old ODROID-U3 with an ARM Cortex-A9 32-bit CPU was paired with 2GB of RAM, and SWAP was disabled entirely, so everything I ran on it had to fit inside that 2GB RAM footprint; it was impossible for any memory-usage to hit the disk once it loaded into RAM.
The software I ran on it would generally run with less RAM-usage than on x86 and substantially less than on x86-64 for the exact same applications and all things equal (including disk-usage, file access, etc.), albeit with less processing power due to the then-slower Cortex-A9 CPU.

So I think it has less to do with "Apple magic" like everyone claims, and more to do with the AArch64 code-base simply being cleaner and having a smaller footprint than the x86-64 code-base.
Realistically 8GB is enough for browsing the internet and playing several-year-old games. If you're dealing with serious content creation then you'll need more. The problem with letting Apple price the RAM for your M1 is that the price of more RAM goes up by a lot when you can't choose where to get it. Apple has done this in the past, and many people bought Macs and then upgraded the RAM themselves at a much cheaper price.
The 8GB to 16GB limit definitely needs to be addressed, as 8GB is anemic for all but casual-usage, and 16GB is not enough for any content-creator, no matter how clean and efficient the AArch64 code-base is - it simply isn't enough.
Low-latency will help, but it is hardly a fix, and I agree with you that upgradeable RAM should be a mandatory option, at least on their Mac Mini.
 
True, but just looking at physical geometry, the M1 has 8GB per side of the CPU, so you could go up to 32GB by using all four sides, and ignoring any other potential issues.
Assuming you have memory interfaces able to reach all four sides; you would have to surround the entire CPU+GPU die with memory channels in order for this to work. This sounds like a job for a chiplet design.
This one, which is mini-ITX, has a UEFI boot loader:
https://shop.solid-run.com/product/SRLX216S00D00GE064H08CH/
I stand corrected, though it costs $750.

I know everyone keeps saying that Apple is using emulation, but it isn't: Rosetta 2 uses a translation layer, not an emulation layer.
This is how the M1 is getting such good performance, and the M1 does not have x86-like features in the architecture; unless there is a source that states that specifically, I have not seen any empirical evidence for it, at least thus far.

Translation ≠ Emulation
According to the wiki, Rosetta 2 uses "just-in-time (JIT) translation support and ahead-of-time compilation (AOT)". Those are emulator features, more specifically dynamic recompilation. Basically, Apple is recompiling the x86 code into ARM code. Seriously, turn on an emulator and look at the settings. RPCS3, for example, lets you choose ASMJIT for SPU emulation. None of these are new features in emulation. Maybe to Mac users, since they've never seen an emulator before.
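For anyone who hasn't looked at how these things work, here's a toy sketch of the translate-once-then-run idea behind ahead-of-time binary translation. The "instructions" are invented stand-ins, not real x86 or AArch64 encodings; this only illustrates the flow, not Rosetta 2's actual internals.

```c
/* Toy sketch of ahead-of-time binary translation: walk the guest "program"
 * once, emit an equivalent host "program", then execute only the host version.
 * The opcodes here are invented stand-ins, not real x86 or AArch64 encodings. */
#include <stdio.h>

enum guest_op { G_ADD, G_SUB };                 /* pretend x86 ops   */
typedef int (*host_fn)(int acc, int imm);       /* pretend ARM "ops" */

static int host_add(int acc, int imm) { return acc + imm; }
static int host_sub(int acc, int imm) { return acc - imm; }

struct guest_insn { enum guest_op op; int imm; };
struct host_insn  { host_fn fn;       int imm; };

/* The "AOT" pass: translate the whole guest program before running any of it. */
static void translate(const struct guest_insn *in, struct host_insn *out, int n) {
    for (int i = 0; i < n; i++) {
        out[i].fn  = (in[i].op == G_ADD) ? host_add : host_sub;
        out[i].imm = in[i].imm;
    }
}

int main(void) {
    struct guest_insn guest[] = { {G_ADD, 5}, {G_ADD, 7}, {G_SUB, 2} };
    struct host_insn  host[3];
    translate(guest, host, 3);                  /* translate once...      */

    int acc = 0;
    for (int i = 0; i < 3; i++)                 /* ...then run the result */
        acc = host[i].fn(acc, host[i].imm);
    printf("acc = %d\n", acc);                  /* prints 10 */
    return 0;
}
```

The point of doing this ahead of time rather than instruction-by-instruction at runtime is that the translation cost is paid once, which is the same reason emulators lean on dynamic recompilation.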
Different CPU ISAs will use different size code-bases, so this isn't entirely true.
Firefox, for example, won't be the same install size, nor use the same RAM footprint across PPC, PPC64, x86, x86-64, ARM, and AArch64 - this is not specific to the M1.
32-bit and 64-bit are a thing, and there's a difference in size in that regard.
For example, my old ODROID-U3 with an ARM Cortex-A9 32-bit CPU was paired with 2GB of RAM, and SWAP was disabled entirely, so everything I ran on it had to fit inside that 2GB RAM footprint; it was impossible for any memory-usage to hit the disk once it loaded into RAM.
The software I ran on it would generally run with less RAM-usage than on x86 and substantially less than on x86-64 for the exact same applications and all things equal (including disk-usage, file access, etc.), albeit with less processing power due to the then-slower Cortex-A9 CPU.
Also 32-bit vs 64-bit. I have an RPI3 and tried to install 64-bit Ubuntu to find that it doesn't have enough ram for it. At least not enough for me to wait patiently.
So I think it has less to do with "Apple magic" like everyone claims, and more to do with the AArch64 code-base simply being cleaner and having a smaller footprint than the x86-64 code-base.
The M1 version of VLC is 44.3MB download. The Intel version is 50MB. The 64-bit Windows version is 40.7MB. The Windows 32-bit version is 39.6MB. The Windows ARM64 version is 57.6MB. Nope, not seeing a benefit here.
Low-latency will help, but it is hardly a fix, and I agree with you that upgradeable RAM should be a mandatory option, at least on their Mac Mini.
The low latency is what gives the M1 higher IPC performance. The moment you introduce memory slots, you slow down the IPC. The solution is to add more cache.
 
I stand corrected, though it costs $750.
Heh, I said it had UEFI - I never said it was cheap! :D
According to the wiki, Rosetta 2 uses "just-in-time (JIT) translation support and ahead-of-time compilation (AOT)". Those are emulator features, more specifically dynamic recompilation. Basically, Apple is recompiling the x86 code into ARM code. Seriously, turn on an emulator and look at the settings. RPCS3, for example, lets you choose ASMJIT for SPU emulation. None of these are new features in emulation. Maybe to Mac users, since they've never seen an emulator before.
That is interesting, and I will look into that more.
32-bit and 64-bit are a thing, and there's a difference in size in that regard.

Also 32-bit vs 64-bit. I have an RPI3 and tried to install 64-bit Ubuntu to find that it doesn't have enough ram for it. At least not enough for me to wait patiently.
There is a difference, but there is also a difference between ISAs, on top of each ISA's 32-bit and 64-bit variants, just as you are seeing with your Raspberry Pi 3.
I tried that as well, and yes, it dug heavily into SWAP due to 1GB RAM not being nearly enough, even for a minimalist 64-bit installation with any GUI.
The M1 version of VLC is 44.3MB download. The Intel version is 50MB. The 64-bit Windows version is 40.7MB. The Windows 32-bit version is 39.6MB. The Windows ARM64 version is 57.6MB. Nope, not seeing a benefit here.
The installer, and installation size variations are also true, though this may or may not affect the RAM-usage footprint as it really depends on the application.
Disk-usage isn't as much of a concern (for smaller applications) as much as the RAM-usage is, and it would be great to get true comparisons across multiple programs, operating systems, and ISAs while leaving all else equal for the applications themselves.
The low latency is what gives the M1 higher IPC performance. The moment you introduce memory slots, you slow down the IPC. The solution is to add more cache.
Eh, lower latency isn't going to improve IPC, as the IPC is in the CPU itself.
Lower latency will decrease the CPU wait time, as it has to wait less on system RAM, which can improve performance and responsiveness, depending on the application.

High latency is one of the reasons that the infamous Pentium D had such terrible performance.
Two Pentium 4 cores glued onto the same substrate, communicating with one another across the slow FSB where the memory controller was located, resulted in horrendous CPU wait times on memory access for multi-threaded processes.

Getting the RAM (and memory controller) physically closer to the CPU itself will result in lower latency, and thus lower CPU wait times, but it won't necessarily improve IPC as that is a function of the CPU itself.
Fantasy scenario: If we put the M1 cores onto the Pentium D's design and substrate, it would still have the IPC of the M1 (way more powerful than the Netburst cores in the Pentium D) but would still have the extremely high latency of that terrible design, so single-threaded processing would kick ass and multi-threaded processing would be terrible due to the high CPU wait times caused by the high latency of that design.
 
Regardless of your take on it, using a full SoC solution has some strong performance benefits but some disastrous disadvantages for serviceability and upgradability. At this stage in the game, though, Apple seems to know their users pretty damned well and has a good amount of usage metrics to back their decisions up. I mean, they might not be sharing their data with third parties, and while it is a fraction of the data that FB or Google collects, Apple does get its fair share of numbers to back up its decisions. This is where the next revision of the M1 is going to come in. In its current state it is perfectly fine for casual or productivity tasks; it's going to crush web browsing and Excel worksheets like nobody's business. If you want to get into light content creation, yeah, it has enough for that too; for hobbyists or somebody just trying to get started, the current lineup is about right for what it delivers, and if you need something mobile for business then its performance/battery/weight category is basically unrivaled. But if you are an enthusiast or a serious content creator whose machine is actively paying your bills, then the current M1 is not the machine for you. Its next revision seems to be much more in line with those tasks, and by the time it is scheduled to launch, many of the major suites not yet available in an ARM version should have it sorted, and if they don't, their competitors probably will.

I am personally interested in seeing how Apple further evolves the Rosetta software. Back in 2006, when they launched it for their PowerPC-to-Intel conversion, it was pretty rough, but it did the trick; the Intel chips at the time were far more powerful and could get away with brute force more often than not. ARM doesn't have that luxury, and Rosetta 2 is doing a great job, but I really want to see what 2.1 and future iterations look like, because that is going to be a big one. I wonder if Apple is looking at the possibility of developing a Graviton-like CPU for their own data centers; while it's not something I think they would ever break even on, if anybody could afford that kind of loss leader it would be Apple. It would be one hell of a set of bragging rights and would be a pretty solid marketing setup.
 
Regardless of your take on it, using a full SoC solution has some strong performance benefits but some disastrous disadvantages for serviceability and upgradability. At this stage in the game, though, Apple seems to know their users pretty damned well and has a good amount of usage metrics to back their decisions up. I mean, they might not be sharing their data with third parties, and while it is a fraction of the data that FB or Google collects, Apple does get its fair share of numbers to back up its decisions. This is where the next revision of the M1 is going to come in. In its current state it is perfectly fine for casual or productivity tasks; it's going to crush web browsing and Excel worksheets like nobody's business. If you want to get into light content creation, yeah, it has enough for that too; for hobbyists or somebody just trying to get started, the current lineup is about right for what it delivers, and if you need something mobile for business then its performance/battery/weight category is basically unrivaled. But if you are an enthusiast or a serious content creator whose machine is actively paying your bills, then the current M1 is not the machine for you. Its next revision seems to be much more in line with those tasks, and by the time it is scheduled to launch, many of the major suites not yet available in an ARM version should have it sorted, and if they don't, their competitors probably will.

I am personally interested in seeing how Apple further evolves the Rosetta software. Back in 2006, when they launched it for their PowerPC-to-Intel conversion, it was pretty rough, but it did the trick; the Intel chips at the time were far more powerful and could get away with brute force more often than not. ARM doesn't have that luxury, and Rosetta 2 is doing a great job, but I really want to see what 2.1 and future iterations look like, because that is going to be a big one. I wonder if Apple is looking at the possibility of developing a Graviton-like CPU for their own data centers; while it's not something I think they would ever break even on, if anybody could afford that kind of loss leader it would be Apple. It would be one hell of a set of bragging rights and would be a pretty solid marketing setup.

I don't think Rosetta is actually important at all long term. I think 2.0 is about as important as 1.0 was back in the PPC-to-x86 days: very important for a very short time... then almost completely unimportant. Yes, x86 software is going to be around for a while... but eventually, when you put new things in front of people, they learn to use them the way they're intended to be used. Apple users will migrate to ARM software... Apple developers will move their stuff over to Arm (and most of the majors already have).

Microsoft is also going to put a big push on Windows on Arm again... and Windows on Arm already runs on something like Parallels just fine. For the few Mac users that actually still care to run Windows, I suspect it won't be long before they just run the Arm version of Windows 10 via Parallels if need be. As for the people that need Linux... Parallels does a very good job of running Arm distros, from what I understand.

Rosetta will move to rarely-used status fairly quickly, imo. I seriously doubt it's even worth Apple's time to bother with a ton of speed-improvement updates... it's probably actually counterproductive to what they want to do. Don't give the Adobes of the software world reasons to drag their feet on proper Arm builds for the rest of their software.

As for Apple server Arm chips... I think it would only make sense if Apple wants to sell cloud computing. Which, I don't know, careful what you wish for. Joking. Who knows, though, I could actually see Apple deciding at some point to compete with Amazon and MS. I mean, you would think they would have done it already if they wanted to. However, like you say, if they are already 90% of the way to a solution, that would slay... I have predicted crazier things. lol
 
I don't think Rosetta is actually important at all long term. I think 2.0 is about as important as 1.0 was back in the PPC-to-x86 days: very important for a very short time... then almost completely unimportant. Yes, x86 software is going to be around for a while... but eventually, when you put new things in front of people, they learn to use them the way they're intended to be used. Apple users will migrate to ARM software... Apple developers will move their stuff over to Arm (and most of the majors already have).

Microsoft is also going to put a big push on Windows on Arm again... and Windows on Arm already runs on something like Parallels just fine. For the few Mac users that actually still care to run Windows, I suspect it won't be long before they just run the Arm version of Windows 10 via Parallels if need be. As for the people that need Linux... Parallels does a very good job of running Arm distros, from what I understand.

Rosetta will move to rarely-used status fairly quickly, imo. I seriously doubt it's even worth Apple's time to bother with a ton of speed-improvement updates... it's probably actually counterproductive to what they want to do. Don't give the Adobes of the software world reasons to drag their feet on proper Arm builds for the rest of their software.

As for Apple server Arm chips... I think it would only make sense if Apple wants to sell cloud computing. Which, I don't know, careful what you wish for. Joking. Who knows, though, I could actually see Apple deciding at some point to compete with Amazon and MS. I mean, you would think they would have done it already if they wanted to. However, like you say, if they are already 90% of the way to a solution, that would slay... I have predicted crazier things. lol
I know long-term it won't really matter, I mean the original basically went un-updated from its launch in 2006 onward, but there are so many x86 libraries out there that I can reasonably see it taking decades to phase them all out. I don't think Apple would get into a consumer cloud business, but maybe for supporting enterprise apps on the OSX or iOS platforms, or for some more ambitious projects involving Apple Arcade. There are a good number of places where they could tie it into their existing developer tools; it wouldn't be a stretch to extend their iCloud services in that direction as a "value-added" service for their developers. I know Apple is getting a lot of flak for the protection of their ecosystem, and really that flak is deserved if they aren't doing things better or cheaper than their competitors. But I do know that a good number of their developers are hosting their various web components not with Apple but with AWS, so if they were to build something there that was more robust, cheaper, or easier to integrate securely, or whatever else enticed developers to use their tools, I could see that working for them, all while closing that velvet fist a little tighter.
 
The installer, and installation size variations are also true, though this may or may not affect the RAM-usage footprint as it really depends on the application.
Disk-usage isn't as much of a concern (for smaller applications) as much as the RAM-usage is, and it would be great to get true comparisons across multiple programs, operating systems, and ISAs while leaving all else equal for the applications themselves.
Someone would need to spend the time to compare memory usage, but I'm sure there's no real difference. If I had 8GB of RAM I wouldn't notice the difference until I did something heavy-duty, or had the hundreds of tabs open that I'm too lazy to close. Not joking, both Firefox and Chrome are open with something like a combined 100 tabs.
Eh, lower latency isn't going to improve IPC, as the IPC is in the CPU itself.
Lower latency will decrease the CPU wait time, as it has to wait less on system RAM, which can improve performance and responsiveness, depending on the application.
Memory latency is the biggest problem with IPC. You can't brute-force IPC; you need to lower the access time to data. It's the very nature of the code, as it's all serial, meaning you can only run each bit of code in order. You can't do the else part of the code until you evaluate the if statement. You can't run a for loop until you run this other function first. There are ways around this, like higher clock speed, but there's a limit to clock speed. CPUs have pipeline stages which try to process work ahead of time, but the problem is you can't accurately do this all the time. Branch prediction helps by guessing which path the code will take so the pipeline can keep working, and the cache hierarchy keeps frequently used data close: the hottest stuff sits in L1 and L2, the less frequently used stuff in L3, but even L3 is still much faster than going to RAM. Each iteration of DDR memory increases bandwidth but also increases latency. To combat this, modern CPUs have a lot of cache.

ARM still works on these same principles, and Apple's solution is to say fuck memory upgrades and fuck anything more than 16GB. Apple may increase the RAM to 32GB for the M2, but it'll likely lower IPC performance since you're introducing more latency. Then you have the GPU, which doesn't favor latency but instead memory bandwidth. GPUs are in favor of processing code that doesn't need to be done serially, like 2 + 2 x 6. GPUs deal in math, and that math mostly doesn't care what order things are done in, especially for graphics. For an APU this doesn't matter too much since you're not expecting RTX 3060 levels of performance out of an Apple M1, but then again I don't see Apple using discrete GPUs from AMD or Nvidia. This is why desktop PCs don't use system memory for high-performance GPUs: DDR memory favors latency over bandwidth, while GDDR favors bandwidth over latency, which is perfect for GPUs.
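One way to feel the latency-versus-bandwidth split on whatever machine you're sitting at (a generic demo, nothing Apple- or GDDR-specific): chase pointers through a shuffled permutation, where every load depends on the previous one, then stream through the same buffer sequentially. The first loop is bound by memory latency, the second by bandwidth and prefetching.

```c
/* Generic latency-vs-bandwidth demo: pointer chasing through a single-cycle
 * permutation is bound by memory latency (each load depends on the previous
 * one), while a sequential sum over the same buffer is bound by bandwidth. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 24)   /* 16M entries, ~128 MB on a 64-bit build */

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    for (size_t i = 0; i < N; i++) next[i] = i;

    /* Sattolo-style shuffle: always swap with a strictly lower index, which
     * guarantees one big cycle, so the chase really visits the whole buffer. */
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];    /* serial, latency-bound    */
    double chase = (double)(clock() - t0) / CLOCKS_PER_SEC;

    t0 = clock();
    size_t sum = 0;
    for (size_t i = 0; i < N; i++) sum += next[i]; /* streaming, bandwidth-bound */
    double stream = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("chase: %.2fs  stream: %.2fs  (p=%zu sum=%zu)\n", chase, stream, p, sum);
    free(next);
    return 0;
}
```

On most machines the chase is far slower per element than the stream, and that gap is exactly what on-package RAM and big caches are trying to shrink.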
Getting the RAM (and memory controller) physically closer to the CPU itself will result in lower latency, and thus lower CPU wait times, but it won't necessarily improve IPC as that is a function of the CPU itself.
You said this before and I'm not understanding you here. IPC is a function of the CPU, but so is everything. We have more than enough bandwidth for modern CPUs; what we don't have is the latency. The Fujitsu A64FX is really good, but not really good at IPC. Those CPUs use HBM2 memory, which has high latency and therefore isn't great for serial workloads, a.k.a. single-threaded IPC. As end users, we're not concerned with processing data that needs more cores and more threads.
Fantasy scenario: If we put the M1 cores onto the Pentium D's design and substrate, it would still have the IPC of the M1 (way more powerful than the Netburst cores in the Pentium D) but would still have the extremely high latency of that terrible design, so single-threaded processing would kick ass and multi-threaded processing would be terrible due to the high CPU wait times caused by the high latency of that design.
Netburst's mistake was using a 20-stage pipeline and depending on high bandwidth and clock speed. It was meant to use RDRAM, which has a lot of bandwidth but was also expensive. It had poor branch prediction that couldn't keep that long pipeline fed. I have to give Intel credit, because Intel didn't give up on it like AMD did with Bulldozer. Out of trying to fix Netburst, Intel pushed dual-channel memory, which fed the bandwidth the Pentium 4s needed, and Hyper-Threading, which let two threads share that long pipeline so its stages weren't sitting idle, increasing its efficiency in multi-threaded applications. These are features that are now standard in all modern CPUs.

The Athlons with their 10-stage pipelines were faster as well, but yeah, the M1 would also be faster in that position, and also because the M1 is just a better design. Most modern CPUs are.
 
Meh, M1A was never adopted by the US Army. It is a limited civilian reproduction of a military firearm that lacks the features and construction of the M14. (Signed, humped an M14)
 