Apple leaks M1 Max Duo, and M1 Ultra

Upgrade path is a joke across the industry if you buy high end. I bought a 3990X and AMD just tanked the whole fucking Threadripper line. Zero upgrade path. DDR4 to DDR5 will require new motherboards.

I don't think you know what upgradability is, because you literally just described it and said in the same talking point that it's not possible. Just because a socket doesn't have a faster CPU available, doesn't mean the platform is not upgradable. Motherboards and RAM in PCs are standardized components and can be swapped out for an upgrade. You can literally swap the CPU, Motherboard and RAM and reuse everything else, then recycle the old parts by selling them to someone else that can build another PC with them. This is sustainability and recycling.

Apple has NEVER had a machine that was designed with that much freedom to upgrade. Their laptops are disposable trash and are now purposefully designed to not be upgradeable or serviced by the user, which is a big problem given all of the hideous design faults in their laptops that result in high failure rates. Like putting a 52V backlight pin on the same connector adjacent to a sub-1V CPU pin. All it takes is a bit of atmospheric humidity to allow it to flash over and kill your laptop.

Storage changes connector type virtually every couple years at this point.

What on earth are you talking about? Storage connectors have been stable for literally decades. IDE has been around since the late 80s, SATA since 2003 and M.2 since 2012, and they've done a very good job at maintaining backwards compatibility when new speeds are introduced. Even if we move over to higher end workstation and server stuff like SCSI, sure there were a bunch of different standards in quick succession, but passive adapters solved those problems. Passive and active adapters exist for desktop PC storage also, you can go from IDE to SATA in either direction, and M.2 has lots of different options available.
 

This is exactly my point, users like you don't understand because you don't max out platforms immediately and are still giving a shit about IDE and SATA. Apple is not targeting those people. They are targeting those who will pay for performance otherwise not available, and if you're used to immediately maxing out whatever system you have there is generally not a useful upgrade path if you want the latest performance. The only exception to this would be GPUs where there is a long time between PCI Express generations, although realistically even this is becoming less useful as the time in between meaningful GPU performance increases continues to grow. If you gave a shit about storage performance, the second M.2 was out, SATA was totally irrelevant. And then again when NVMe came out, I had to get a whole new motherboard and all new drives to take advantage of it. Whole new platform.

I don't care about upgrading the PCs I own because I buy the best available, and by the time there is something significantly faster that's worth upgrading to, it usually requires a whole new platform anyway. But yeah, if you're cash strapped and using IDE drives to store some porn and do some word processing or whatever, I guess I could see why that is important.
 
This is exactly my point, users like you don't understand because you don't max out platforms immediately and are still giving a shit about IDE and SATA.

Let's not deflect from the fact that you have no idea what you're talking about. You're using straw man fallacies and personal attacks to cover up what you said earlier about storage connectors "changing every couple of years at this point", which is false. You don't get to move the goalposts after the fact by making up additional requirements to try and prop up your original, very incorrect statement.

You now want to toss NVME in there? That's been around for 11 years now. Also, NVME is a controller and communication protocol specification for flash memory, it is not a type of connector. It existed before M.2 was standardized, back when SSDs were in PCIe slots. It's like trying to say AHCI is a type of storage connector, which it isn't. You really don't seem to know anything about storage standards.

and if you're used to immediately maxing out whatever system you have there is generally not a useful upgrade path if you want the latest performance.

Except there is an upgrade path: changing out the motherboard, CPU and RAM if necessary. Since memory standards have been stable for long periods of time, you don't always have to buy all new memory. DDR4 has been around for the better part of a decade and is still valid today. DDR5 is insanely expensive and doesn't really provide tangible benefits yet, which has been true of every prior memory standard when it was introduced. The mature previous standard can often match or exceed the infant new standard for a time.

Just because it is beneath you to upgrade your PC, doesn't mean that it's not possible.
 
I don't think you know what upgradability is, because you literally just described it and said in the same talking point that it's not possible. Just because a socket doesn't have a faster CPU available, doesn't mean the platform is not upgradable. Motherboards and RAM in PCs are standardized components and can be swapped out for an upgrade. You can literally swap the CPU, Motherboard and RAM and reuse everything else, then recycle the old parts by selling them to someone else that can build another PC with them. This is sustainability and recycling.
Cases are generally recycled, or at least can be; it's no harder than recycling any other metal box.

As for motherboards/etc being standard - only on consumer kit, and only homebuilts or from the specialty vendors. Connectivity and layout of things changes enough on the enterprise and prosumer side that cases aren't often reused - too much proprietary form factor across all of that (see the Lenovo P620, for instance, for an enterprise workstation - that's a completely custom board design, case layout, PSU, etc). I can't pull a Dell or HP motherboard and use it elsewhere (except maybe strapped to a peg board) - go watch the Gamers Nexus review of either of the Dell/HP systems they reviewed - those suckers are 100% proprietary again. Now we can argue all day that they shouldn't be doing that - that's fair - but no business is buying home builts unless it's literally YOUR business.
Apple has NEVER had a machine that was designed with that much freedom to upgrade. Their laptops are disposable trash and are now purposefully designed to not be upgradeable or serviced by the user, which is a big problem given all of the hideous design faults in their laptops that result in high failure rates. Like putting a 52V backlight pin on the same connector adjacent to a sub-1V CPU pin. All it takes is a bit of atmospheric humidity to allow it to flash over and kill your laptop.
Because their users don't generally do that. Of course, Dell/HP/etc are doing the same on many of theirs too - this isn't just Apple. Try upgrading the RAM on an XPS 13, for instance...
What on earth are you talking about? Storage connectors have been stable for literally decades. IDE has been around since the late 80s, SATA since 2003 and M.2 since 2012, and they've done a very good job at maintaining backwards compatibility when new speeds are introduced. Even if we move over to higher end workstation and server stuff like SCSI, sure there were a bunch of different standards in quick succession, but passive adapters solved those problems. Passive and active adapters exist for desktop PC storage also, you can go from IDE to SATA in either direction, and M.2 has lots of different options available.
You forgot U.2. :p

Thunderbolt tends to eliminate the need for internal storage upgrades - especially for the users in question here (or RoCE NVMe, or FC, and so on - all of which are available as thunderbolt port connection options). You don't need to put it ~in~ the box - you can let people attach whatever they want, and the interconnect isn't generally the limiter anymore. (TB3 is 40Gbps, more than enough for RoCE NVMe if you wanted to do it).

Again, I build my own workstations, because it's fun and I want those options - but I also SELL the prebuilts to people and they're not even paying attention to those things anymore. Especially since they can pull along whatever connectivity they have to a new system in the future.
 
Except there is an upgrade path: changing out the motherboard, CPU and RAM if necessary. Since memory standards have been stable for long periods of time, you don't always have to buy all new memory. DDR4 has been around for the better part of a decade and is still valid today. DDR5 is insanely expensive and doesn't really provide tangible benefits yet, which has been true of every prior memory standard when it was introduced. The mature previous standard can often match or exceed the infant new standard for a time.

Just because it is beneath you to upgrade your PC, doesn't mean that it's not possible.
No one but us people on [H] do that. Businesses don't (unless it died, in which case the OEM is the one doing the swapping). Prosumers don't (I work with a bunch of them - just helped a guy pick out an Alienware 17 for doing audio mixing for his band - loaded up with every upgrade and the base video card, because he doesn't game). He's been running on a maxed out Lenovo that hit 6 years old - swap to the new one, copy stuff over, move on. Servers MIGHT get a RAM upgrade, and if doing HCI, might get more drives, but even most of THOSE don't get touched (the majority of my customers buy to a standard and just stick with it until processor generations change, minus things like SAP/Oracle/MSSQL that have specific licensing or other requirements).

We as enthusiasts do. But the Apple studio is as aimed at us as the Lenovo P620 is - not at all. I bitch that there's no Threadripper 5000, but I'm not bitching that Lenovo puts them in a custom motherboard/chassis/PSU system. I'm not their market.

You're arguing that you, and paradoxical, me, and everyone here on [H] shouldn't buy the Studio - you're probably right. We're not the market. But for those who are the market? I don't see a better platform at the moment. And the market they're selling to? Buys to a spec, sticks with it, and doesn't upgrade till the next generation. Even the guys buying the stack of 2080TIs don't touch 98% of the systems they buy - they just add more of the same box.

Now, if we want to say and argue that "they SHOULD, and stop throwing away all this stuff because the environment is gonna suck" - then I'm ALL for that. I love that some of the big OEMs are switching to open compute designs - makes that easier in the long run - but it's currently not a priority for business, and that's something we'll have to adjust at that level (or incentivize somehow) before they'll get there.
 

Not to mention "upgrading" something like a motherboard is a massive clusterfuck for those of us that actually use these systems to make money where the value of the hardware locked software licenses is worth many multiples of the $15k computer it runs on. FUCK FLEXLM, if you know you know. The only software worse than Microsoft software.

In fact, the whole reason I got a 3990x was to get around software per-socket license fees. I got one socket, bitch! Just so happens to have 64 cores. And then AMD killed my uPGRaDE PaTH!!!!!"
 
As for motherboards/etc being standard - only on consumer kit, and only homebuilts or from the specialty vendors. Connectivity and layout of things changes enough on the enterprise and prosumer side enough that cases aren't often reused - too much proprietary form factor across all of that (see the Lenovo P620, for instance, for an enterprise workstation - that's a completely custom board design, case layout, PSU, etc).

The P620 is probably not the best example for you to use because it's almost verbatim EATX territory. The rear I/O shield is even removable. The only thing that would require a bit of work is the power supply, and that's not a big deal. There are a huge number of OEM machines from Compaq, HP, Dell, etc. that are bog standard ATX machines, and are easily upgraded with newer motherboards, CPUs and RAM.

I can't pull a Dell or HP motherboard and use it elsewhere (except maybe strapped to a peg board) - go watch the Gamers Nexus review of either of the Dell/HP systems they reviewed - those suckers are 100% proprietary again. Now we can argue all day that they shouldn't be doing that - that's fair - but no business is buying home builts unless it's literally YOUR business.

Again, depends on the machine and era. I've board swapped HP/Compaq OEM machines with Dell motherboards and vice versa. Also put in boards from DIY OEMs like MSI, Asus, Foxconn, Biostar, etc. Yes, there are proprietary machines like the G5 5000, but there are also bog standard ATX boxes sold by big OEMs as well, or at least close enough to swap in a standard part with minimal fuss.

Try upgrading the RAM on an XPS 13, for instance...

XPS what? You're going to have to be exceedingly specific about what XPS machine you're talking about. Dell has used the XPS moniker since the late 90s and there are thousands of different XPS models of both desktop and laptop variety. I have an XPS 13 laptop that I've upgraded the CPU and memory in.
 
XPS what? You're going to have to be exceedingly specific about what XPS machine you're talking about. Dell has used the XPS moniker since the late 90s and there are thousands of different XPS models of both desktop and laptop variety. I have an XPS 13 laptop that I've upgraded the CPU and memory in.

He meant one made in the A.D. era
 
Not to mention "upgrading" something like a motherboard is a massive clusterfuck for those of us that actually use these systems to make money where the value of the hardware locked software licenses is worth many multiples of the $15k computer it runs on. FUCK FLEXLM, if you know you know. The only software worse than Microsoft software.

In fact, the whole reason I got a 3990x was to get around software per-socket license fees. I got one socket, bitch! Just so happens to have 64 cores. And then AMD killed my uPGRaDE PaTH!!!!!"
Build it once, let it work for 5+ years and replace it. At that stage the generational upgrades are far better than any potential upgrades you could throw into a system.
Sure, if you're building something on a budget and plan on upgrading and selling off parts within a year or two, then maybe. But outside that time frame parts get harder to find and the benefits shrink pretty fast.
 
Ehh, most ultralights have shit upgradability these days.

Take for example the XPS 13 9310


RAM and CPU are soldered to the board, on the bright side the SSD is replaceable.
 
Old Apple was fairly upgradeable, but we're talking 3 decades ago when they were actually making unique hardware that one could justify the insane costs of back in the day.
 
Not to mention "upgrading" something like a motherboard is a massive clusterfuck for those of us that actually use these systems to make money where the value of the hardware locked software licenses is worth many multiples of the $15k computer it runs on. FUCK FLEXLM, if you know you know. The only software worse than Microsoft software.

In fact, the whole reason I got a 3990x was to get around software per-socket license fees. I got one socket, bitch! Just so happens to have 64 cores. And then AMD killed my uPGRaDE PaTH!!!!!"
Jesus, I'm going to have nightmares tonight about FlexLM. Thanks. Dick. (also true statements).
The P620 is probably not the best example for you to use because it's almost verbatim EATX territory. The rear I/O shield is even removable. The only thing that would require a bit of work is the power supply, and that's not a big deal. There are a huge number of OEM machines from Compaq, HP, Dell, etc. that are bog standard ATX machines, and are easily upgraded with newer motherboards, CPUs and RAM.
The board is, sorta, in terms of ~size~, but it's also using a proprietary PSU with a slot connector, running power through a 16ish layer motherboard, power outputs on the board to all the accessories, etc. Could you shoehorn a normal ATX board in there? Sure. Would it work the same? Nope. Would you have to jerry-rig some kind of PSU pass-through for the cables? Yep. No one but us weirdos is doing that - no company ever will, that's for sure. When that thing wears out, you're tossing it and buying a new one. Generally on a 3-5 year depreciation cycle.

Compaq doesn't exist anymore. As for the rest - dare you to find a system from one of them that is using a standard motherboard. G5? Custom board. HP Omen? Custom board. Alienware? Custom board, although it's pretty darned close to an mATX at least, IIRC it has front panel crap integrated again too. Lenovo might - I haven't looked up a teardown on one of the Legions in forever, but almost all the rest of them have gone with custom designs to make integration easier and faster, and damn the folks that want to do something else with them. It sucks.
Again, depends on the machine and era. I've board swapped HP/Compaq OEM machines with Dell motherboards and vice versa. Also put in boards from DIY OEMs like MSI, Asus, Foxconn, Biostar, etc. Yes, there are proprietary machines like the G5 5000, but there are also bog standard ATX boxes sold by big OEMs as well, or at least close enough to swap in a standard part with minimal fuss.
When though? 2008-2015 or so this worked - but they went BACK. Now we're back to proprietary. Hell, Dell uses a proprietary version of ATX12VO on theirs now! It's not ATX, and it's not even the standard ATX12VO! It's something they created on their own! I've not seen one come through in years that doesn't have something proprietary about it - maybe some of the Optiplex stuff, but rarely even then.

But again - you're talking about you. Business wise, they don't care about this, and businesses are going to be the primary consumers of the Studio - or prosumers. And a pile of folks that want shiny, sure, but they don't care either. We care, but we're not the target market. If we were, they missed the mark - but we're not.
XPS what? You're going to have to be exceedingly specific about what XPS machine you're talking about. Dell has used the XPS moniker since the late 90s and there are thousands of different XPS models of both desktop and laptop variety. I have an XPS 13 laptop that I've upgraded the CPU and memory in.
The new SFF ultralight laptop that has been a top seller for the last what, 6 years?
Build it once, let it work for 5+ years and replace it. At that stage the generational upgrades are far better than any potential upgrades you could throw into a system.
Sure, if you're building something on a budget and plan on upgrading and selling off parts within a year or two, then maybe. But outside that time frame parts get harder to find and the benefits shrink pretty fast.
Yep. And no one on the business side (Where the studio will mostly sell) is going to worry about upgrading it except with external accessories. And there's no issue with external accessories with TB3.
Old Apple was fairly upgradeable, but we're talking 3 decades ago when they were actually making unique hardware that one could justify the insane costs of back in the day.
The hardware is still unique, and as I pointed out earlier, if you've got the right use case, it totally justifies the cost. Not for us, but we're not doing anything that would benefit from the oddball hardware.
 
Sure, but what are you paying for then? Why buy this GPU if you can't game? Why compare it against gaming GPU's?...
Because generally GPU's are built for gaming and Apple has a hard time accepting that nobody is going to make a native game for their M1 hardware. So of course Apple focuses on video editing and very specific workloads against a RTX 3090. If they compared it to nearly any other modern GPU on the market, the results wouldn't change.
You think SteamOS is better for someone making movies (on any perspective) ?
Since SteamOS is now based on Arch Linux, then yes, if the software were ported to it. From what I understand, rendering on Linux is just faster compared to Windows.
Upgrade path is a joke across the industry if you buy high end. I bought a 3990X and AMD just tanked the whole fucking Threadripper line. Zero upgrade path. DDR4 to DDR5 will require new motherboards. Storage changes connector type virtually every couple years at this point.
Go buy a new motherboard for any Apple product and tell me how that upgrade went. Apple is the king of ewaste.
I prefer to look at my 14 inch Macbook Pro as very good for the environment, since it's so much faster than any comparably sized PC laptop in existence (M1 Max, 64GB ram, 8TB SSD) that I would have to buy a brand new PC laptop every year for the next three years just to hit the performance that my current machine has. That'd be a lot of junked PCs.
Faster at what? That's a $6,000 machine, which is insanely priced. The ram isn't particularly amazing, and the storage is pretty good but not $6k good. If I spent half as much money I'd still get a better laptop with the ability to upgrade. Also, that SSD is soldered to the motherboard, so good luck replacing it when it goes bad. Pretty clear that Apple is aiming their products at video editing since even their website shows Final Cut Pro as an option for another $300. There's a reason why Apple settled on this niche.
 
nobody is going to make a native game for their M1 hardware
Not quite true, Apple Arcade grows every year, and Unreal and Unity also have native M1 and Metal support for their engines. Apple is steadily growing their gaming market on their laptops and desktops. Their mobile gaming revenue is disgusting and is the envy of every publisher there is. So yeah, not a lot of AAA titles, but lots and lots of everything else.

I’ve been playing BG3 on one of my office Macs and it’s doing a great job; M1 native looks beautiful and plays smooth. How much of that is the game and how much is that screen, though, I can’t tell - the screen on that M1 is just pure candy.

PS the Barbarian is OP…

Eve also has native M1 support with Metal; if I still played, the Ultra with that 5K screen would have been on my shopping list. But I don’t, and my health is better off for it.
 

I have a feeling that even 2 years later, Shadow of the Tomb Raider and Baldur's Gate 3 will be the only titles included on Apple Silicon Macs vs PCs gaming comparison benchmarks lol.
 
Faster at what? That's a $6,000 machine, which is insanely priced. The ram isn't particularly amazing, and the storage is pretty good but not $6k good. If I spent half as much money I'd still get a better laptop with the ability to upgrade. Also, that SSD is soldered to the motherboard, so good luck replacing it when it goes bad. Pretty clear that Apple is aiming their products at video editing since even their website shows Final Cut Pro as an option for another $300. There's a reason why Apple settled on this niche.

Literally every single thing except gaming, which I don't care about since I have a PC for that and this is a work machine. Price doesn't matter for me, work paid for it since it makes us $$.

If you think the ram "isn't particularly amazing" you really don't know anything about the benefits of a unified architecture. For example, you can run specific ML workloads on these machines that a RTX 3090 can't even fit into its memory. Not to mention the memory bandwidth is literally multiples of what a top of the line Intel desktop has.

ITT: gamers with slow and last-gen computers fail to understand there is a whole big world out there where people have money, will pay for performance, and do things with their computers other than game
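To make that concrete, here's a minimal sketch of the kind of thing I mean. This assumes PyTorch 1.12 or newer with its MPS (Metal) backend and a 64GB machine; the tensor sizes are illustrative placeholders, not our actual workload. A working set around 32GB just throws an out-of-memory error on a 24GB card, but sits in the same unified pool the CPU uses on an M1 Max/Ultra.

```python
# Rough sketch only: hold a working set bigger than a 24 GB card's VRAM.
# Assumes PyTorch >= 1.12 with the MPS backend on Apple silicon; the
# sizes below are placeholders, not a real model or benchmark.
import torch

def pick_device() -> torch.device:
    if torch.backends.mps.is_available():   # Apple unified memory
        return torch.device("mps")
    if torch.cuda.is_available():           # discrete GPU, fixed VRAM pool
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()

# 16 chunks x 2 GiB of float16 = ~32 GiB resident on the chosen device.
# On a 24 GB discrete card this is an out-of-memory error; on a 64 GB
# M1 Max/Ultra it simply lives in unified memory alongside the CPU's data.
chunks = [
    torch.randn(1024, 1024, 1024, dtype=torch.float16, device=device)
    for _ in range(16)
]

resident_gib = sum(c.numel() * c.element_size() for c in chunks) / 2**30
print(f"{device}: ~{resident_gib:.0f} GiB resident")
```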
 
there is a whole big world out there where people have money, will pay for performance, and do things with their computers other than game

Although I don't necessarily disagree, if there is a big market for this why don't unified memory accelerator cards like this exist? Or perhaps they do, not really my area of expertise?
 
Although I don't necessarily disagree, if there is a big market for this why don't unified memory accelerator cards like this exist? Or perhaps they do, not really my area of expertise?
They do. Nvidia makes several - all the way from the insane A100 (80G) or Bluefield setups (https://www.nvidia.com/en-us/data-center/a100/) to the simpler cards that could do video out (RTX 2000). Bluefield and DGX are nuts (https://www.nvidia.com/en-us/networking/products/data-processing-unit/).

But…. Those are insanely expensive. They require server grade hardware (average 15-20k per hosting system), software licenses (let’s call it about 500-1k per card per year at this level, far more than GRID licenses), support for said stuff, and a datacenter. Hell, the Lambda systems start at 80-90k last I checked- start. Granted it’s been a good bit, but…

Something like a 3090 or M1 can do most of that, no server hardware, no licenses, for a hell of a lot cheaper.

Intel is getting into this market. They tried with Phi in the past. AMD dabbled a bit. There are others. But the crossover between massive amounts of vector calculations for a game and vector calculations for a graph relation database or protein folding or crypto analysis? That's a Venn diagram that is almost a circle.
 
Intel is getting into this market. They tried with Phi in the past. AMD dabbled a bit. There are others. But the crossover between massive amounts of vector calculations for a game and vector calculations for a graph relation database or protein folding or crypto analysis? That's a Venn diagram that is almost a circle.

Very interesting. I'd be interested in further discussion on how/what this research is revealing and what this might look like in 10 or so years.
 
Although I don't necessarily disagree, if there is a big market for this why don't unified memory accelerator cards like this exist? Or perhaps they do, not really my area of expertise?
They do; NVidia introduced the concept with the Volta architecture, and when it came out it was a pretty big deal.
 
Very interesting. I'd be interested in further discussion on how/what this research is revealing and what this might look like in 10 or so years.
Happy to. I work with a LOT of customers in this space, and that data scientist friend (they’re doing language analysis and semantics, primarily ontology work).

As an additional comparison point - the 40G A100 is 11k before software. That much HBM is expensive. Never mind the 80G monster.
 
Literally every single thing except gaming, which I don't care about since I have a PC for that and this is a work machine. Price doesn't matter for me, work paid for it since it makes us $$.
The only good thing about the Apple M1 GPU's is video encoding and decoding, and that really isn't the GPU so much as a fixed function of the M1 hardware. If the compute was good then miners would be all over them. These are terrible GPU's, but great for video editing.
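To be clear about what I mean by fixed function: video apps are mostly hitting the dedicated media engine through VideoToolbox, not the GPU's compute cores. A rough sketch of that path, assuming an ffmpeg build with VideoToolbox support (the file names and bitrate here are just placeholders):

```python
# Sketch: hardware HEVC encode via Apple's media engine (VideoToolbox),
# driven through ffmpeg. Assumes ffmpeg is installed with videotoolbox
# support; input/output paths and bitrate are placeholders.
import subprocess

def hw_encode(src: str, dst: str, bitrate: str = "20M") -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,                    # source clip
            "-c:v", "hevc_videotoolbox",  # fixed-function HEVC encoder block
            "-b:v", bitrate,              # videotoolbox wants a target bitrate
            "-c:a", "copy",               # pass audio through untouched
            dst,
        ],
        check=True,
    )

hw_encode("input.mov", "output_hevc.mp4")
```

Point being, how fast that runs says very little about the GPU cores themselves.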
If you think the ram "isn't particularly amazing" you really don't know anything about the benefits of a unified architecture. For example, you can run specific ML workloads on these machines that a RTX 3090 can't even fit into its memory. Not to mention the memory bandwidth is literally multiples of what a top of the line Intel desktop has.
You seem to think that this hasn't been done before... to death. Unified memory is a cost cutting measure, not a performance one. If that were the case then the M1's would win at every aspect of computing but they clearly don't. GPU's and CPU's have very specific work loads. CPU's like lower latency and care less about bandwidth where GPU's are the opposite. While moving the memory closer to the CPU does do amazing things, it also prevents you from upgrading your system since that memory is now soldered. Apple is notorious for charging a fortune to buy more memory or storage. AMD fixed this problem with V-Cache.
ITT: gamers with slow and last-gen computers fail to understand there is a whole big world out there where people have money, will pay for performance, and do things with their computers other than game
Yes, and those same gamers are now using their graphics cards to literally print money. Those slow and last-gen computers will never keep up with the M1. :rolleyes: Which is why those terribly slow RTX cards are going for as much as an M1 laptop. There's a reason why a RTX 3090 costs so damn much right now. You think the miners just overlooked the M1's? A quick Google shows that an M1 with mining software will make... 42 cents per day. The M1's are not compute monsters.
 
The only good thing about the Apple M1 GPU's is video encoding and decoding, and that really isn't the GPU so much as a fixed function of the M1 hardware. If the compute was good then miners would be all over them. These are terrible GPU's, but great for video editing.

Should someone tell him there is also a CPU in the M1 Max/Ultra? Or should we keep letting him think it's just a GPU for fun?


You seem to think that this hasn't been done before... to death. Unified memory is a cost cutting measure, not a performance one. If that were the case then the M1's would win at every aspect of computing but they clearly don't. GPU's and CPU's have very specific work loads. CPU's like lower latency and care less about bandwidth where GPU's are the opposite. While moving the memory closer to the CPU does do amazing things, it also prevents you from upgrading your system since that memory is now soldered. Apple is notorious for charging a fortune to buy more memory or storage. AMD fixed this problem with V-Cache.

Unified memory a cost savings measure? LMFAO The more you talk the more technical users are shaking their head at you in disbelief. You're just digging your hole deeper and deeper.

Yes, and those same gamers are now using their graphics cards to literally print money. Those slow and last-gen computers will never keep up with the M1. :rolleyes: Which is why those terribly slow RTX cards are going for as much as an M1 laptop. There's a reason why a RTX 3090 costs so damn much right now. You think the miners just overlooked the M1's? A quick Google shows that an M1 with mining software will make... 42 cents per day. The M1's are not compute monsters.

I have four RTX 3090 and none of them are being used for gaming. We use them because we couldn't afford the unified memory compute platforms that Nvidia offers that you think are so cheap.

I did misunderstand you - initially I thought you believed computers were only for gaming. Now I understand that you think they are only for gaming and crypto, and if you believe cryptocurrency profitability is the only measure of hardware compute (and the only type of compute), your ignorance on this matter makes a LOT more sense.
 
Literally every single thing except gaming, which I don't care about since I have a PC for that and this is a work machine. Price doesn't matter for me, work paid for it since it makes us $$.
What can't Windows 11 do (for far cheaper and better with custom builds) that MacOS can exactly? ...

Couldn't you argue that you make $$ to work on a Windows machine, thus making your point moot?
 
Work with M1 chips?

Your second point is confusing. They pay him to accomplish a task, they provide a tool to do said task.
 
I think base.
I can believe that. Ars had the first crypto “algorithm” benches, and it’s not a 3090, but it’s also very early for the more advanced hardware. I’m watching out of real curiosity as there is a correlation there for other algorithms in the real world.
 
Should someone tell him there is also a CPU in the M1 Max/Ultra? Or should we keep letting him think it's just a GPU for fun?
CPU's don't matter anymore. They all roughly perform the same. So much so that Apple and Intel put in "performance cores" along with energy efficient cores. I'd even argue that older CPU's like the 6600K are still comparable to modern CPU's.
Unified memory a cost savings measure? LMFAO The more you talk the more technical users are shaking their head at you in disbelief. You're just digging your hole deeper and deeper.
Yea sure, go ahead and ad hominem me while providing nothing to the discussion. The Xbox 360 did it. The Xbox One did it. Most game consoles have done it. Cheap AMD APU's have been doing it. The only difference here is that Apple moved the memory closer to the chip to reduce latency and made the memory not upgradable. Ultimately it's a method to cut costs. Intel and AMD have tried to combat this with eDRAM, which was effective but also costly. Why you think Intel's Iris Pro graphics were so good? Do you honestly believe that if Apple segregated the memory it wouldn't perform better?

The problem with sharing a memory pool is that both the CPU and GPU will fight over the memory, which can slow things down. Hence why it's always better to have memory separate. Plus the CPU and GPU like their memory a little different. Why you think we still use DDR for CPU's and GDDR for GPU's? More memory bandwidth usually means more latency and lower latency usually means lower bandwidth. Apple tried to get the best of both worlds by again moving the ram closer to the SoC, but there's only so much that can solve.
I have four RTX 3090 and none of them are being used for gaming. We use them because we couldn't afford the unified memory compute platforms that Nvidia offers that you think are so cheap.
You use them because you couldn't afford to buy an Nvidia Tesla or AMD FirePro or Radeon Pro WX. You can't just use the phrase "unified memory" like it applies to everything.
I did misunderstand you - initially I thought you believed computers were only for gaming. Now I understand that you think they are only for gaming and crypto, and if you believe cryptocurrency profitability is the only measure of hardware compute (and the only type of compute), your ignorance on this matter makes a LOT more sense.
Says the guy who believes that buying an Apple product is saving money, or is the only computer worth buying.
 
What can't Windows 11 do (for far cheaper and better with custom builds) that MacOS can exactly? ...

Couldn't you argue that you make $$ to work on a Windows machine, thus making your point moot?

As Iopoetve said, work with Apple silicon. The performance this platform offers is the driver for using MacOS, not the other way around. We would actually prefer linux for the specific work we do, but Macports and homebrew support almost all of (at least the open source) tools that we use. Actually, a lot of tools just aren't available on windows that are available on MacOS via macports. A shocking amount of Linux software works on MacOS, way more than works on windows.

As one concrete example where the Macbook can do what no other laptop can is doing high sample rate data collection in the field using field tech's personal daily laptops. We previously were using linux desktop machines that we'd throw into a van along with giant battery powered inverters. We needed desktops because windows laptops were deficient in several ways: storage (speed and capacity, needed to be able to write at several gigabytes per second, consistently for bursts of up to about a minute), CPU performance, I/O (need 10gb and 40gb SFP), power consumption (they get horrible battery life running truly balls to the walls), etc, etc, etc. We eventually put everything into a rolling rack that made it easier but it still required a team of people to deal with (one dealing with the hardware, one with the antenna, one to drive, etc).

With the new Macbooks, the CPUs have enough compute to handle it, the SSDs are large and fast enough to keep up, we can get 40gb with a Thunderbolt adapter, and we can leave the rack and battery pack at home. Data collection can be done by one person in a car instead of a van: just throw the Macbook on the passenger seat and press a button while you hold the antenna. And it can be done by any tech with their standard Macbook, no need to grab the van and roll out the rack. Done and done.
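If anyone wants to sanity-check whether their own laptop's SSD could keep up with that kind of burst, a crude sequential-write test gets you most of the answer. This is just a rough sketch, not our capture tooling; the target path and sizes are placeholders, and the OS page cache can flatter the numbers a bit even with the fsync.

```python
# Crude sequential-write throughput check, not real capture software.
# Writes N_CHUNKS x CHUNK_SIZE of pre-generated data and reports GiB/s.
# Point TARGET at the SSD you actually want to test; sizes are placeholders.
import os
import time

CHUNK_SIZE = 256 * 1024 * 1024   # 256 MiB per write call
N_CHUNKS = 32                    # 8 GiB total, roughly one capture burst
TARGET = "throughput_test.bin"

buf = os.urandom(CHUNK_SIZE)     # generate once so the RNG isn't the bottleneck

start = time.perf_counter()
with open(TARGET, "wb") as f:
    for _ in range(N_CHUNKS):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())         # force it to the drive before stopping the clock
elapsed = time.perf_counter() - start

total_gib = N_CHUNKS * CHUNK_SIZE / 2**30
print(f"wrote {total_gib:.0f} GiB in {elapsed:.1f} s -> {total_gib / elapsed:.2f} GiB/s")

os.remove(TARGET)                # clean up the test file
```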
 
CPU's don't matter anymore. They all roughly perform the same. So much so that Apple and Intel put in "performance cores" along with energy efficient cores. I'd even argue that older CPU's like the 6600K are still comparable to modern CPU's.

Lol what? My 3990x disagrees. CPUs only perform roughly the same if all you do is game, like you. Quoted for hilarity.

Yea sure, go ahead and ad hominem me while providing nothing to the discussion.

I don't have to ad hominem you for people to laugh when you say stuff like "all cpus perform roughly the same."

The problem with sharing a memory pool is that both the CPU and GPU will fight over the memory, which can slow things down.

Dude, the M1 Ultra has 20x the memory bandwidth of a 10900k.


Hence why it's always better to have memory separate. Plus the CPU and GPU like their memory a little different. Why you think we still use DDR for CPU's and GDDR for GPU's? More memory bandwidth usually means more latency and lower latency usually means lower bandwidth. Apple tried to get the best of both worlds by again moving the ram closer to the SoC, but there's only so much that can solve.

Everything you are saying is incorrect and outdated. A lot of GPUs use GDDR because it is cheaper than HBM, not because GDDR is so great. Apple did get the best of both worlds, which is why their architecture annihilates anything in a similar power envelope from a total system standpoint (cpu plus gpu performance).
 
Ultimately it's a method to cut costs.
Sharing regular DDR ram like on some iGPU solutions or consoles, yes obviously, as it is cheaper ram than the usual vram and there's less of it.

That seems really different from putting a large amount of fast memory on the SoC; everything on the SoC will cost much more, no?

If it really saved money (even using a faster/lower-latency solution), at simply the cost of losing the ability to upgrade later on, wouldn't almost everyone choose the cheaper but faster ram at a lower price?
 
Lol what? My 3990x disagrees. CPUs only perform roughly the same if all you do is game, like you. Quoted for hilarity.
Obviously more cores gives better performance but in terms of IPC they are roughly the same. Also more cores only gives more performance for applications that can use it. Not many applications can make use of many cores. Linux kernel compiling is one of the best uses of that many cores.
I don't have to ad hominem you for people to laugh when you say stuff like "all cpus perform roughly the same."
Still not wrong, and you're still ad homineming.
Dude, the M1 Ultra has 20x the memory bandwidth of a 10900k.
Remember, CPU's care less about bandwidth and more about latency. The M1's memory latency is around 100ns while your typical Ryzen CPU sits around 60ns depending on memory timing settings. Anyone who's ever tuned the memory on their Ryzen CPU knows that the extra clock speed sometimes isn't worth the increased latency. This is also why AMD has created V-Cache, since cache has much lower latency than ram.
https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested
Everything you are saying is incorrect and outdated. A lot of GPUs use GDDR because it is cheaper than HBM, not because GDDR is so great.
From what I remember HBM is even worse when it comes to latency compared to GDDR. Modern GPU's are still using GDDR so nothing I said is outdated. Also yes, you won't find HBM in a modern GPU because of cost.
Apple did get the best of both worlds, which is why their architecture annihilates anything in a similar power envelope from a total system standpoint (cpu plus gpu performance).
Apple's power consumption can be attributed to many things, like them being on 5nm while AMD is on 7nm and Intel just reached 10nm. The unified memory will also consume less power because again it's closer to the SoC. Being ARM is another benefit as well but the decision to go unified memory is to cut costs. They could easily dedicate a section of memory for the GPU and GPU only while doing the same for the CPU.

Sharing regular DDR ram like on some iGPU solutions or consoles, yes obviously, as it is cheaper ram than the usual vram and there's less of it.

That seems really different from putting a large amount of fast memory on the SoC; everything on the SoC will cost much more, no?
The Xbox Series X GDDR6 is 560 GB/s, which is unified memory and also faster than the Apple M1 Max. It's cheaper to use one memory type than to mix and match different memory types to get optimal performance. Consoles don't do this for better performance but to cut costs. The CPU will hurt in performance, but generally CPU performance is less important in gaming than GPU. Mixing memory types is what the PS3 did, and it cost more than the Xbox 360's unified memory, which, thanks to its eDRAM, didn't take as bad a performance hit.

If the Apple M1 Ultra had 64GB of system memory and 32GB of VRAM it would be faster. Not only faster for the individual parts, but also because you avoid both components accessing memory at the same time. Having an insane 800GB/s helps avoid this, but it still isn't optimal.
If it really saved money (even using a faster/lower-latency solution), at simply the cost of losing the ability to upgrade later on, wouldn't almost everyone choose the cheaper but faster ram at a lower price?
Whether the ram is faster is debatable, but cheaper it is for certain. Has anyone here opened up a laptop in the past ten years? Lots of laptop manufacturers have taken away the option to upgrade memory. Some only have one memory slot if they do. I'm certain that other laptop manufacturers will start doing what Apple has done with the M1, but again it's to cut costs while getting better performance out of cheap APU's. Also, it forces customers to pay more for higher memory, like Apple does.

If you buy a laptop with a dedicated GPU then obviously segregated memory, because it's obviously better. What many people, and that includes paradoxical, don't understand about IPC is that there aren't many ways to improve that performance. Lower memory latency is entirely how you improve IPC, because the workload of a CPU is usually linear, meaning you can't process ahead of time as you need to wait for the results of the current task. GPU's though work with math, and math doesn't care what order the work is done in, so latency doesn't matter as much and sheer amounts of bandwidth do. The move from DDR to DDR2 to DDR3 to DDR4 and now DDR5 is all about exchanging latency for more bandwidth. As far as I'm aware, nobody has memory that is both low latency and high bandwidth. This is why AMD's V-Cache is a big deal, because it's basically an advanced version of eDRAM. This is also why we have L2 and L3 cache, which exist to combat memory latency.

If you bought an Apple product because you thought it was the fastest of the fast then you're wrong. Power efficient sure but nowhere near the fastest. Do I gotta link a Linus Tech Tips video to show this? Of course I do because paradoxical is gonna call me names while making fart noises from his mouth while calling himself smart.
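If you want to see the latency-versus-bandwidth point for yourself, here's a rough illustration (numpy; the array size is a placeholder and the exact ratio varies a lot between machines). Streaming through a big array in order is bandwidth-bound, while gathering the same elements in random order is dominated by cache and TLB misses, i.e. latency, and comes out several times slower.

```python
# Rough illustration of latency-bound vs bandwidth-bound access patterns.
# The array size is a placeholder; exact numbers vary between machines.
import time
import numpy as np

N = 1 << 26                          # 64M float64 values, ~512 MiB
data = np.random.rand(N)

seq_idx = np.arange(N)               # in-order walk: streaming, prefetch-friendly
rnd_idx = np.random.permutation(N)   # random walk: cache/TLB misses dominate

def timed_gather(idx: np.ndarray) -> float:
    t0 = time.perf_counter()
    data[idx].sum()                  # gather the elements, then reduce
    return time.perf_counter() - t0

t_seq = timed_gather(seq_idx)
t_rnd = timed_gather(rnd_idx)
print(f"sequential {t_seq:.3f}s  random {t_rnd:.3f}s  "
      f"-> {t_rnd / t_seq:.1f}x slower when latency is the limiter")
```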
 

I have to disagree with you on unified memory architecture being just a cost-cutting measure. There are going to be some huge advantages to having both the CPU and GPU able to access the same pool of memory without the CPU and GPU communicating through PCI-e between system memory and GPU memory.
Xbox Series X GDDR6 has huge bandwidth, but it also has high latency, at least 5 times more than typical DDR4/DDR5 memories. LPDDR5 on the M1 Macs satisfies both the ideal low-latency requirement for the CPU and the bandwidth requirement for the GPU.
 
Metal’s Direct Memory Access APIs allow the GPU to directly access its own private memory in addition to the CPU’s shared memory, giving it both the functionality of DX12’s DirectStorage and IBM/NVidia’s BaM.

This gives Apple a large advantage when it comes to crunching large datasets, dealing with large AV streams, and many of the tasks that Apple users tend to purchase apples for.
While the M1-Ultra is far from the most powerful chip or even the best chip, it’s very likely that Apple has arranged for it to be the most optimized and efficient.
 
Obviously more cores gives better performance but in terms of IPC they are roughly the same. Also more cores only gives more performance for applications that can use it. Not many applications can make use of many cores. Linux kernel compiling is one of the best uses of that many cores.

Still not wrong and you're still ad hominemin.

Remember, CPU's care less about bandwidth and more about latency. The M1 memory has 100ns while your typical Ryzen CPU has 60ns depending on memory timing settings. Anyone who's ever tuned the memory of their Ryzen CPU's know that the extra clock speed isn't worth it sometimes over the increased latency. This is also why AMD has created V-Cache since cache has much lower latency than ram.
https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested

From what I remember HBM is even worse when it comes to latency compared to GDDR. Modern GPU's are still using GDDR so nothing I said is outdated. Also yes, you won't find HBM in a modern GPU because of cost.

Apple's power consumption can be attributed to many things, like them being on 5nm while AMD is on 7nm and Intel just reached 10nm. The unified memory will also consume less power because again it's closer to the SoC. Being ARM is another benefit as well but the decision to go unified memory is to cut costs. They could easily dedicate a section of memory for the GPU and GPU only while doing the same for the CPU.


The Xbox Series X GDDR6 is 560 GB/s which is unified memory and also faster than the Apple M1 Max. It's cheaper to use one memory type than to mix and match different memory types to get optimal performance. Consoles don't do this for better performance but to cut costs. The CPU will hurt in performance but generally CPU performance is less important in gaming than GPU. This is what the PS3 did and it costed more than the unified memory of the Xbox 360 and thanks to the eDRAM the performance impact wasn't as bad.

If the Apple M1 Ultra had 64GB of system memory and 32GB of VRAM it would be faster. Not only faster for the individual parts but because you also avoid both components accessing memory at the same time. Having an insane 800GB/s helps avoid this but it still isn't optimal.

Faster ram is objective, but cheaper it is for certain. Has anyone here opened up a laptop for the past ten years? Lots of laptop manufacturers have taken away the option to upgrade memory. Some only have one memory slot if they do. I'm certain that other laptop manufactures will start doing what Apple has done with the M1, but again it's to cut costs while getting better performance out of cheap APU's. Also it forces customers to pay more for higher memory like Apple.

If you buy a laptop with a dedicated GPU then obviously segregated memory because it's obviously better. What many people don't understand about IPC and that includes paradoxical, is that there's not many ways to improve that performance. Lower memory latency is entirely how you improve IPC, because the work load of a CPU is usually linear, meaning you can't process ahead of time as you need to wait for the results of the current task. GPU's though work with math and math doesn't care in what order the work is done, so latency doesn't matter as much and sheer amounts of bandwidth do. The move from DDR to DDR2 to DDR3 to DDR4 and now DDR5 is all about exchanging latency for more bandwidth. As far as I'm aware of nobody has memory that is both low latency and high bandwidth. This is why AMD's V-Cache is a big deal because it's basically an advance version of eDRAM. This is why we have L2 and L3 cache which is to combat memory latency.

If you bought an Apple product because you thought it was the fastest of the fast, then you're wrong. Power efficient, sure, but nowhere near the fastest. Do I gotta link a Linus Tech Tips video to show this? Of course I do, because paradoxical is gonna call me names while making fart noises with his mouth and calling himself smart.


You realize you're LARPing, right? You don't own any of these systems, and are telling the world which ones should perform best based on...I guess your convoluted understanding of how technology is supposed to work?

I own all of them (3990X, M1 Max MBP, 9900K, 4x RTX 3090) and am telling you which ones actually perform best when doing specific workloads. Airsoft is cool, bro, but it's no substitute for combat.
 
The problem with sharing a memory pool is that both the CPU and GPU will fight over the memory, which can slow things down.
Depends on the workload, but true - also why they're not gaming machines. Different use cases.
Hence why it's always better to have memory separate.
Depends on the use case :)
Plus the CPU and GPU like their memory a little different. Why do you think we still use DDR for CPUs and GDDR for GPUs? More memory bandwidth usually means more latency, and lower latency usually means lower bandwidth. Apple tried to get the best of both worlds by, again, moving the RAM closer to the SoC, but there's only so much that can solve.

You use them because you couldn't afford to buy an Nvidia Tesla, AMD FirePro, or Radeon Pro WX. You can't just use the phrase "unified memory" like it applies to everything.
Yes and no. The Tesla cards with video out are close, sure, but things like the A100 are absurdly expensive (as you said, especially when combined with licensing) and require a datacenter-level system to cool and power them (they're passively cooled, designed for wind-tunnel-style fans in a server chassis). You're right that unified memory isn't always a panacea, but there are use cases where it makes sense (and many where it does not). Apple is aiming at a niche market here - but they also know their market, and so far that market loves the tool in question.
Says the guy who believes that buying an Apple product is saving money, or that it's the only computer worth buying.
Definitely not on the "only one worth buying," but saving money? Depends on what you're doing, again. I'd take a MacBook Pro laptop (hell, I have an Intel one now), but other than curiosity I have very little interest in the Studio. But if I needed to set up an AI/ML farm on-prem? Or do a bunch of professional video editing? I'd be testing one out to see if it would do what I want - hella cheaper than the alternatives, even if only for pre-prod workloads to stage to AWS or a set of actual A100 or DGX systems.
As Iopoetve said, we work with Apple silicon. The performance this platform offers is the driver for using macOS, not the other way around. We would actually prefer Linux for the specific work we do, but MacPorts and Homebrew support almost all of the tools we use (at least the open-source ones). Actually, a lot of tools that are available on macOS via MacPorts just aren't available on Windows at all. A shocking amount of Linux software works on macOS, way more than works on Windows.
BSD is BSD!
One concrete example of the MacBook doing what no other laptop can: high-sample-rate data collection in the field using the field techs' personal daily laptops. We were previously using Linux desktop machines that we'd throw into a van along with giant battery-powered inverters. We needed desktops because Windows laptops were deficient in several ways: storage (speed and capacity - we need to write several gigabytes per second, consistently, for bursts of up to about a minute), CPU performance, I/O (we need 10Gb and 40Gb SFP+), power consumption (they get horrible battery life running truly balls to the wall), etc. We eventually put everything into a rolling rack, which made things easier, but it still required a team of people to deal with (one on the hardware, one on the antenna, one to drive, etc.).
See.
With the new MacBooks, the CPUs have enough compute to handle it, the SSDs are large and fast enough to keep up, we can get 40Gb with a Thunderbolt adapter, and we can leave the rack and battery pack at home. Data collection can be done by one person in a car instead of a van: just throw the MacBook on the passenger seat and press a button while you hold the antenna. And it can be done by any tech with their standard MacBook, no need to grab the van and roll out the rack. Done and done.
Yep. Good use case.
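As a rough way to sanity-check the "several gigabytes per second for bursts of up to a minute" requirement described above, a dumb sequential-write loop like the hypothetical C sketch below is usually enough (the file name and sizes are made-up placeholders, not the team's actual capture tool). It writes a large file in big chunks and reports sustained throughput including the final flush; the page cache will flatter the early part of the run, so size the total well past RAM or past the burst length you care about.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define CHUNK_BYTES (64u << 20)     /* write in 64 MB chunks */
#define TOTAL_BYTES (32ull << 30)   /* 32 GiB total; size this to the burst you need to sustain */

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(int argc, char **argv) {
    const char *path = (argc > 1) ? argv[1] : "burst_test.bin";   /* placeholder output file */

    char *buf = malloc(CHUNK_BYTES);
    if (!buf) return 1;
    memset(buf, 0xA5, CHUNK_BYTES);          /* non-zero data so clever filesystems can't cheat */

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    double t0 = now_sec();
    unsigned long long written = 0;
    while (written < TOTAL_BYTES) {
        ssize_t n = write(fd, buf, CHUNK_BYTES);
        if (n < 0) { perror("write"); return 1; }
        written += (unsigned long long)n;
    }
    fsync(fd);                               /* include the time to actually flush to the SSD */
    double secs = now_sec() - t0;
    close(fd);

    printf("wrote %.1f GB in %.1f s -> %.2f GB/s sustained\n",
           written / 1e9, secs, written / secs / 1e9);
    free(buf);
    return 0;
}
```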
Obviously more cores give better performance, but in terms of IPC they're roughly the same. More cores also only give more performance for applications that can actually use them, and not many applications can make use of that many cores; compiling the Linux kernel is one of the best uses for a core count like that.

Still not wrong, and you're still resorting to ad hominem.

Remember, CPUs care less about bandwidth and more about latency. The M1's memory sits around 100ns while a typical Ryzen system is around 60ns, depending on memory timing settings. Anyone who's ever tuned the memory on a Ryzen CPU knows the extra clock speed sometimes isn't worth the increased latency. This is also why AMD created V-Cache, since cache has much lower latency than RAM.
https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested

From what I remember, HBM is even worse than GDDR when it comes to latency. Modern GPUs are still using GDDR, so nothing I said is outdated. Also yes, you won't find HBM in a modern consumer GPU because of cost.
Until you get up to the high-end compute cards - then they're stuffing 80GB of the stuff into them, with a price tag to match. You're accurate - it also didn't matter much for gaming workloads and was mostly ditched for that reason too - but it CAN matter for compute workloads, which is why the A100 and the like still use it. In massive quantities. Different use case, different needs.

If anything, the market for "add-on compute" is getting more and more segmented. We ([H]) mostly use it for crypto or playing games. Others use the same(ish) hardware for other complex calculations. Some use it for real-time video encoding (Nvidia GRID, Google Stadia, etc.). Some of those use cases work best with GDDR. Some work best with HBM. Others... well, don't actually care that much (Quick Sync is pretty damned powerful for a single stream, and works fine off of DDR4). Different uses, different needs. The M1 Ultra/Max I ~suspect~ (haven't seen outputs from my customers yet) will fit some of those niches. It's not a gaming box - but it is a video editing box, and it may be a compute box... we'll see what the next few months bring.
If the Apple M1 Ultra had 64GB of system memory and 32GB of dedicated VRAM it would be faster, not only because each part would get memory suited to it, but also because you avoid both components hitting the same memory at the same time. Having an insane 800GB/s helps mask that, but it still isn't optimal.
Many use cases would be harder to do then - you can't load the same thing into GPU-accessible memory without extra calls into different memory segments. This is where unified memory MAY be an advantage, since your memory mapping is much easier to handle (and any abstraction layer will cost performance). Workload dependent, again, of course.
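A CPU-only analogy of that point (a hypothetical C sketch, not Apple's API or any real GPU runtime; the memcpy stands in for an explicit cudaMemcpy/DMA-style transfer): with split pools, every handoff between the "host" and "device" copy of a buffer is a full transfer, while with a unified pool the handoff is just passing a pointer, at the cost of both sides now contending for the same memory.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FRAME_FLOATS (16u << 20)   /* pretend this is one 64 MB frame or tensor */

/* Split pools: the "device" keeps its own copy, so every update is a full transfer. */
static void handoff_split(float *device_buf, const float *host_buf, size_t n) {
    memcpy(device_buf, host_buf, n * sizeof *host_buf);   /* stand-in for an explicit copy/DMA */
}

/* Unified pool: both sides use the same allocation; the "handoff" is just the pointer
   (plus whatever synchronization the real system needs). */
static const float *handoff_unified(const float *shared_buf) {
    return shared_buf;
}

int main(void) {
    float *host_buf   = calloc(FRAME_FLOATS, sizeof *host_buf);   /* CPU-side pool */
    float *device_buf = calloc(FRAME_FLOATS, sizeof *device_buf); /* pretend GPU-side pool */
    if (!host_buf || !device_buf) return 1;

    host_buf[0] = 42.0f;                                   /* the producer writes a frame */

    handoff_split(device_buf, host_buf, FRAME_FLOATS);     /* split: pay for a copy every frame */
    const float *view = handoff_unified(host_buf);         /* unified: reader sees it immediately */

    printf("split copy sees %.1f, unified view sees %.1f\n", device_buf[0], view[0]);
    free(host_buf);
    free(device_buf);
    return 0;
}
```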
Whether the RAM is actually faster is debatable, but cheaper it certainly is. Has anyone here opened up a laptop in the past ten years? Lots of laptop manufacturers have taken away the option to upgrade memory, and those that haven't often only have one memory slot. I'm certain other laptop manufacturers will start doing what Apple has done with the M1, but again, it's to cut costs while getting better performance out of cheap APUs. It also forces customers to pay more for higher memory configurations, just like Apple does.
Yep. On all parts.
If you bought an Apple product because you thought it was the fastest of the fast, then you're wrong. Power efficient, sure, but nowhere near the fastest. Do I gotta link a Linus Tech Tips video to show this? Of course I do, because paradoxical is gonna call me names while making fart noises with his mouth and calling himself smart.

It's fast at some things, not fast at others, and not capable of still others. If the tool is what you need, snag it. If it isn't, buy something else. I'd take an M1 Max laptop for the battery life and fast CPU, and I don't really game on the road (got a tablet for that). I used to tote around an Alienware 13 when I was travelling 50% of the time, and in two years I got about 20 hours of actual gaming time on it. The rest of the time it played Netflix or dialed into my lab so I could work. I swapped back to a 13" ultraportable, and then my most recent place gave me a 16" Intel MBP (with a 5300M!). I technically installed Steam on it, and then Subnautica, but... I think I launched the game once to see if it would run? In five months? I don't see any personal need for a mobile gaming system, but it's nice to be able to compile code, do some video editing, or the like while I'm on the road - things I actually have to do or want to do.
 
Something you all have to remember about Apple: they collect metric shit-tons of usage metrics. It's all anonymous, none of it tagged to a user, but they know how their devices are used, how long they're used, what's installed, what's uninstalled, what's repaired and what's replaced. On top of those metrics, Apple takes a very active role in consumer feedback. Apple has created a very comfortable ecosystem for their customers, and they make very enticing products for those who are on the fence about becoming customers. They also make it difficult to leave, not through lockouts or hurdles alone, but by optimizing workflows. Nobody likes to change up their workflows; it's uncomfortable. Linux, Windows, Apple, the maintenance guys, the secretarial staff - nobody. Try, and they will fight you tooth and nail.

With all this data, Apple doesn't release products into a vacuum. They know who their market is, they know how many they are going to sell, and they know how they are going to be used. And most of all, they know that for those people and their workflows, the charts, graphs, and claims will ring true.

Right now I'm on the fence: I can go with the M1 Ultra, or a similarly loaded-out Lenovo with one of the new Threadrippers. Pricing for the two won't be too far apart, and based on leaked benchmarks, performance for my needs will be similarly close.

So I need to look over my current workflow and make some guesses about what I see changing in the next 3-5 years. Apples don't tolerate change nearly as well as their PC counterparts, but holy hell are the Lenovos loud. I know for certain I can't have that in my office and maintain a degree of sanity; what little I have remaining is precious.
 