Apple leaks M1 Max Duo, and M1 Ultra

Same reason I haven’t touched a P620. That thing is gonna suck to be around, and by the time TR Pro hits general availability it’ll be exceptionally outdated. I’m hoping for a return of the non-Pro versions with Zen 4.

I don’t have a dedicated need for an Ultra. But there are folks that do. If you want one “do everything” box it ain’t the right choice - but if you do dedicated hardware for something, it might be.
 
I have to disagree with you on unified memory architecture being just a cost-cutting measure. There are going to be some huge advantages to having both the CPU and GPU able to access the same pool of memory, rather than the CPU and GPU communicating over PCIe to shuttle data from system memory to GPU memory.
This isn't a performance issue on today's GPUs. Pretty sure it hasn't been since AGP.
The Xbox Series X's GDDR6 has huge bandwidth, but it also has high latency, at least five times more than typical DDR4/DDR5 memory. The LPDDR5 in the M1 Macs satisfies both the low-latency requirement of the CPU and the bandwidth requirement of the GPU.
There's nothing wrong with unified memory, because it's efficient and cheap, but it certainly isn't faster than segregated memory, for the reasons I've already explained.
Metal’s direct memory access APIs allow the GPU to directly access its own private memory in addition to the CPU’s shared memory, giving it both the functionality of DX12’s DirectStorage and IBM/Nvidia’s BaM.
Too bad nobody is gonna use it, due to Metal being specific to Apple hardware. Didn't you watch the Linus Tech Tips video? Most developers use MoltenVK because nobody wants to learn the Metal API. Apple M1s will always have a handicap because of the Metal API.
You realize you're LARPing, right? You don't own any of these systems, and are telling the world which ones should perform best based on...I guess your convoluted understanding of how technology is supposed to work?
You have literally done nothing but bad mouth me, and have provided nothing technical to support whatever it is you're claiming. I'm not sure if you work for Apple and are paid to troll forums or are just a huge Apple fanboy.
I own all of them (3990x, M1 Max MBP, 9900k, 4 RTX 3090) and am telling you which ones actually perform best when doing specific workloads. Airsoft is cool bro, but it's no substitute for combat.
Do you go around telling mechanics they have no idea how your car works because they don't own it? Owning something doesn't mean you know how it works either. Lots of people use things they're functionally illiterate about.
 
Too bad nobody is gonna use it, due to Metal being specific to Apple hardware. Didn't you watch the Linus Tech Tips video? Most developers use MoltenVK because nobody wants to learn the Metal API. Apple M1s will always have a handicap because of the Metal API.
It’s already in use in dozens of scientific, marketing, video editing, and music editing packages, as well as a number of network security toolsets. It’s already being used and has been for a long time. It’s one of the features that made that AMD accelerator card so good.

That feature and those cards made it possible for a number of companies I serviced to easily take on an additional $10k a month in contract work per box. It was something that gave a $50k piece of hardware a 6-month ROI, and the M1s are looking to do almost double that workload at a quarter of the price.
 
Depends on the workload, but true - also why they're not gaming machines. Different use cases.
This isn't a use case. Shared memory is slower than dedicated memory. This isn't hard to understand.
Depends on the use case :)
No it freakin' doesn't. It only doesn't if you don't care that one part of your system is slowing down. This is why the Xbox One went with DDR3: they cared more about the CPU than the GPU. The PS4 went with GDDR5 because GPU > CPU. They do have a use case, but their use case is gaming. Apple products are general-use machines, so you ideally want everything as fast as it can be.
Yes and no. The Tesla cards with video out are close, sure, but things like that A100/etc are both absurdly expensive (as you said, especially when combined with licensing) but require a datacenter level system to cool/power/etc (they are only passively cooled - designed for wind-tunnel style fans in a case).
I'm never going to defend server-grade graphics cards, because they're stupidly overpriced for what is essentially the same silicon as retail cards. The reason companies buy them is the certification that they won't suddenly crash and waste hours of downtime. I'm on Linus Torvalds' side in that everything should be as stable as it can be, which means running ECC memory. Guess what the M1 doesn't have?
You're right that unified memory isn't always a panacea, but there are use cases where it makes sense (and many where it does not). Apple is aiming at a niche market here - but they also know their market, and so far, that market loves the tool in question.
Apple is also the same company that removed the headphone jack and has also never discovered the SD card slot. You're giving Apple way too much credit here.
Definitely not on the worth buying, but saving money? Depends on what you're doing again. I'd take a Macbook Pro laptop (hell, have an intel one now), but other than curiosity, I have very little interest in the Studio. But if I needed to set up an AI/ML farm on-prem? Or do a bunch of professional video editing? I'd be testing one out to see if it would do what I want - hella cheaper than the alternatives, even if only for pre-prod workloads to stage to AWS or a set of actual A100 or DGX systems.
As Mystique has already said, what can't Windows 11 do that Mac OS X can? I'm not going to knock you for using Final Cut Pro and tell you that you should go Windows and learn some other video editing program. Especially if video editing is your bread and butter, I wouldn't expect someone to sit down and learn other video editing software for what could be weeks or months of their time. From what I've seen, nothing even compares to the Apple M1's video encoding and decoding speed. I don't do video editing, so my opinion is lacking on that one. The benefit here comes from Apple silicon's media engine specifically, an ASIC that Apple built that is unique to their hardware. Whether you're saving money is gonna depend on things: Adobe Premiere and DaVinci Resolve are available on Windows, and lots of laptops are faster than the M1s, just not as good on power. With AMD's new Ryzen mobile APUs the gap has closed by a lot, to the point where I believe the power advantage Apple has with the M1s is smaller, and Apple likely has one more AMD generation before that power advantage is completely gone. AMD has really closed the gap and they aren't even on 5nm like the M1s have been.

 
With AMD's new Ryzen mobile APUs the gap has closed by a lot, to the point where I believe the power advantage Apple has with the M1s is smaller, and Apple likely has one more AMD generation before that power advantage is completely gone. AMD has really closed the gap and they aren't even on 5nm like the M1s have been.
I hope to god that AMD partners with somebody who puts those chips in something that isn’t slathered in RGB and branded to the 9’s with Gamer XTreme FAP SNUF edition logos all over it. I literally can’t buy those; auditing shits the bed. For me, right now the perfect laptop is the AMD Alienware with the 3080 and 64GB of RAM, where I can swap out the cheap M.2s they use and drop in a pair of fast 2TB drives in RAID 0. It would save me probably 3 hours out of my Monday reports from the firewall logs alone. But I can’t because of all the gaming branding. I’m told there is going to be an M1-native version of my reporting software coming in September of this year. My existing i9 with a 2060 takes about 3 hours to run the report; the M1 Max, they say, does that same report in about 45 minutes. That’s a big deal for me, and unlike the Alienware, auditing and accounting won’t bat an eye at an Apple purchase.

I really want to see the good stuff in non gaming builds but it almost never shows up.
 
It’s already in use in dozens of scientific, marketing, video editing, and music editing packages, as well as a number of network security toolsets. It’s already being used and has been for a long time. It’s one of the features that made that AMD accelerator card so good.

That feature and those cards made it possible for a number of companies I serviced to easily take on an additional $10k a month in contract work per box. It was something that gave a $50k piece of hardware a 6-month ROI, and the M1s are looking to do almost double that workload at a quarter of the price.
We are talking about Apple's Metal API right?
 
This isn't a performance issue on today's GPUs. Pretty sure it hasn't been since AGP.
It's not about today's GPUs having "performance issues"; it's about improving performance by eliminating the bottlenecks between the CPU and GPU as much as possible. Reducing the time data has to travel between the CPU and the GPU, by letting both access the same memory pool, will improve performance over a separate CPU+GPU solution, especially when working with large data assets.
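For anyone who wants to see what that travel time looks like in code, here's a rough Swift/Metal sketch (illustrative only; the sizes and names are made up). On unified memory a single .storageModeShared buffer is visible to both the CPU and the GPU, while the discrete-GPU-style path below it has to stage the data and blit it across the bus, and that copy is exactly the trip being talked about:

```swift
import Metal

// Illustration only: one buffer visible to both CPU and GPU on unified memory.
let device = MTLCreateSystemDefaultDevice()!
let count = 1 << 20
let length = count * MemoryLayout<Float>.stride

// Unified-memory path: the CPU writes straight into the buffer the GPU will read.
let shared = device.makeBuffer(length: length, options: .storageModeShared)!
let ptr = shared.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }          // no upload step needed

// Discrete-GPU style path: stage the data, then blit it into GPU-only memory.
let staging = device.makeBuffer(bytes: ptr, length: length, options: .storageModeShared)!
let gpuOnly = device.makeBuffer(length: length, options: .storageModePrivate)!
let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let blit = cmd.makeBlitCommandEncoder()!
blit.copy(from: staging, sourceOffset: 0, to: gpuOnly, destinationOffset: 0, size: length)
blit.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()   // this copy (and the one coming back) is the cost unified memory removes
```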
 
We are talking about Apple's Metal API right?
Yes. A lot of scientific, data analysis, and artistic applications use the various Metal APIs to get access to the various accelerators. Most of them are Apple Silicon native now as well.

It’s a Metal API that lets the GPU pull data directly from RAM, and another that lets it pull from storage directly, without getting the CPU to authorize access or facilitate the transfer like a standard function call would. Metal is also responsible for the GPU encoders and such.

Nobody is really developing much in the way of desktop games directly in Metal (outside of mobile); MoltenVK is much better there as it translates to it well. Metal just isn’t supported well in the popular development toolkits, and why should it be? Nobody games on a Mac, and it’s only recently that they’ve even had the hardware that makes it feasible. But even with the hardware there, most Mac users are not “gamers” in the sense that it’s needed. I mean, hell, my mom gets more game time than I do, but it’s all Solitaire, Minesweeper, and those spot-the-object, FarmVille Facebook games. Intel IGP has been good enough for those for the last decade.

Even if the Unreal and Unity engines support Metal, their dev kits don’t really; you can do it, but it’s a crapload of work with no payoff that would turn a profit. That is supposedly changing this year with updated versions from the major players, but that’s new, and software upgrades are expensive with, again, very little payoff. Apple gaming outside of simple mobile and Apple Arcade titles is stuck with a chicken-and-egg problem: the hardware you could game on is only now emerging, but because there was no hardware there was no software, because there was no software there were no games, and because there were no games there was no call for the hardware.
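To make the scientific/data-analysis side of that concrete: those apps mostly hit the compute stack rather than the 3D pipeline, e.g. something like Metal Performance Shaders for a big matrix multiply. A rough sketch of that kind of call, with arbitrary sizes and no error handling (not anyone's actual production code):

```swift
import Metal
import MetalPerformanceShaders

// Sketch: a GPU matrix multiply via Metal Performance Shaders, the sort of call
// scientific and data-analysis apps make instead of drawing triangles.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let n = 1024
let rowBytes = n * MemoryLayout<Float>.stride

func makeMatrix() -> MPSMatrix {
    let buffer = device.makeBuffer(length: n * rowBytes, options: .storageModeShared)!
    let descriptor = MPSMatrixDescriptor(rows: n, columns: n, rowBytes: rowBytes, dataType: .float32)
    return MPSMatrix(buffer: buffer, descriptor: descriptor)
}

let a = makeMatrix(), b = makeMatrix(), c = makeMatrix()
let multiply = MPSMatrixMultiplication(device: device,
                                       transposeLeft: false, transposeRight: false,
                                       resultRows: n, resultColumns: n, interiorColumns: n,
                                       alpha: 1.0, beta: 0.0)
let cmd = queue.makeCommandBuffer()!
multiply.encode(commandBuffer: cmd, leftMatrix: a, rightMatrix: b, resultMatrix: c)
cmd.commit()
cmd.waitUntilCompleted()
// On unified memory the CPU can read the result in place via c.data.contents().
```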
 
This isn't a performance issue on today's GPUs. Pretty sure it hasn't been since AGP.
Eh. It is and it isn't. Games don't swap stuff between system RAM and GPU RAM all that often in the normal sense; that's loading for a level. One pool vs. two: data moves between the two, but full data migration isn't a thing for games. Scientific workloads, though? That does happen, regularly: load a dataset, process it, move it back to CPU/system RAM to package, move it back, and so on, so copying back and forth may not be the most optimal path there.
There's nothing wrong with unified memory, because it's efficient and cheap, but it certainly isn't faster than segregated memory, for the reasons I've already explained.
No one said it was faster, we said it might be better. Speed isn't everything.
Too bad nobody is gonna use it, due to Metal being specific to Apple hardware. Didn't you watch the Linus Tech Tips video? Most developers use MoltenVK because nobody wants to learn the Metal API. Apple M1s will always have a handicap because of the Metal API.
For games. No one is talking about games for this, or really shouldn't be. If you're buying a Studio as a general-purpose system, you've either got more money than sense or are a moronic Apple bigot. It's not a general-purpose system (although they'll gladly take purchases from those who fall into that bucket - money is money, after all).

This isn't a use case. Shared memory is slower than dedicated memory. This isn't hard to understand.
Again, speed isn't everything.

I can name a handful of GPUs or compute cards with >64GB of memory to work with. They're all well in excess of $10k for the card, plus software licenses and support, and all take (as I've said before) $20k+ in supporting hardware just to get in the door. This gets you 64-128GB of memory accessible directly by dedicated compute hardware for $2.6k-8k. No supporting hardware needed, no licenses needed, support included. That has value for certain use cases. It also includes dedicated hardware encoders for certain video types, etc.

You're ABSOLUTELY right when it comes to general use cases. But for others - a single pool of 64G usable (chop off a gig or two for the OS) by GPU hardware with a direct-access API? Folks can use that. And it's HELLA cheaper than the alternative.

Is it SLOWER than the alternative? OH YEAH. A Studio doesn't even compare to an A100. But... 40% of an A100 40GB, with more memory, for 15-20% of the price? Now the math is suddenly working out REALLY differently. I can buy 5 Studio systems for the cost of ONE A100-equipped box. They will pull less power, generate less heat, give me WAY more video RAM across those 5 boxes than the single A100 box would, and would be a hellaciously compelling experiment - and if it worked, I'd be buying those instead of Nvidia and Dell all day long. For THAT workload.
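Just to put rough numbers on that, using only the figures already in this thread (a back-of-envelope sketch; the $6k Studio config is my assumption from the $2.6k-8k range above):

```swift
// Back-of-envelope with the thread's own rough numbers.
let a100Box   = 10_000.0 + 20_000.0     // >$10k card plus ~$20k of supporting server hardware
let studio    = 6_000.0                 // assumed mid-spec Studio in the $2.6k-8k range
let studios   = (a100Box / studio).rounded(.down)   // about 5 boxes for the same spend
let a100Mem   = 40.0                    // GB on an A100 40
let studioMem = 64.0 * studios          // GPU-addressable GB across those Studios
print(studios, studioMem, a100Mem)      // prints 5.0 320.0 40.0
```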
No it freakin' doesn't. It only doesn't if you don't care that one part of your system is slowing down. This is why the Xbox One went with DDR3: they cared more about the CPU than the GPU. The PS4 went with GDDR5 because GPU > CPU. They do have a use case, but their use case is gaming. Apple products are general-use machines, so you ideally want everything as fast as it can be.
The Apple Studio is not designed to be a general use system. :) Or it shouldn't be - if you're buying it for that, it's like buying a Chevrolet 4500 chassis cab for a daily driver. Will it do it? Sure. Is it expensive, complex, and weird for it? Absolutely.

I'm never going to defend server-grade graphics cards, because they're stupidly overpriced for what is essentially the same silicon as retail cards. The reason companies buy them is the certification that they won't suddenly crash and waste hours of downtime. I'm on Linus Torvalds' side in that everything should be as stable as it can be, which means running ECC memory. Guess what the M1 doesn't have?
You're thinking traditional workstation cards. Those are for FP32 and signed drivers. Server cards do things that a consumer card can't even dream of.

E.g.: the M10 GPU (Maxwell generation) had 4 GPUs and 32GB of GDDR-whatever. It had 30 (IIRC, might have been 25) dedicated H.264 encoders and worked with Nvidia GRID software to carve those 4 GPUs and encoders up into graphical profiles, accelerating real-time workstations and sending the output as a video stream to a device (VMware Blast or Citrix HDX 3D). No consumer card can do that. I normally slapped 3 of them into a server to do 96 VDI desktops with hardware-accelerated encoding, since the hardware can switch compression profiles 10x as fast as CPU compression can, which matters for remote users. Alternatively, we carved P4 GPUs into 2 segments for 2 AutoCAD workstations, or you could do 8 of those on an M10. The profiles were all in software via a version of SR-IOV and would switch on the fly as needed. Heck, I can even move those VDI desktops to other machines now and it will pass the hardware config to the next machine!

You're really thinking of comparing to home workstations and maybe big business workstations. I'm thinking dedicated video editing boxes and datacenters. Two different worlds with only a little overlap.
Apple is also the same company that removed the headphone jack and has also never discovered the SD card slot. You're giving Apple way too much credit here.
I haven't had a need for an SD Card in years, but my 2015 Macbook has one, I think, and the Studio has one up front too. You're dead accurate on the headphone jack. That was stupid as hell - and annoying as fuck.
As Mystique has already said, what can't Windows 11 do that Mac OS X can? I'm not going to knock you for using Final Cut Pro and tell you that you should go Windows and learn some other video editing program. Especially if video editing is your bread and butter, I wouldn't expect someone to sit down and learn other video editing software for what could be weeks or months of their time. From what I've seen, nothing even compares to the Apple M1's video encoding and decoding speed. I don't do video editing, so my opinion is lacking on that one. The benefit here comes from Apple silicon's media engine specifically, an ASIC that Apple built that is unique to their hardware. Whether you're saving money is gonna depend on things: Adobe Premiere and DaVinci Resolve are available on Windows, and lots of laptops are faster than the M1s, just not as good on power. With AMD's new Ryzen mobile APUs the gap has closed by a lot, to the point where I believe the power advantage Apple has with the M1s is smaller, and Apple likely has one more AMD generation before that power advantage is completely gone. AMD has really closed the gap and they aren't even on 5nm like the M1s have been.


Access a dedicated pool of 64-128GB of unified memory tied to a GPU and CPU simultaneously. Even Windows 11 can't do that with an A100 - we use Linux or (occasionally) Server 2019 for that. Mostly Linux. Once in a blue moon someone will do it on BSD - I ~think~ there's a compatible driver out there. Ish. :p Remember, those cards don't have any kind of video out either.

BINGO on the ASIC though - and there's ABSOLUTELY an argument to be made here again on hardware vs. software. Back in the day, we could only do DVD decoding using a dedicated ASIC too (I've still got a copy of one of the Zork games that was designed for an offload card). Eventually we rewrote those to use mixed precision on CPUs (which is why CPU encoding is now higher quality than using Nvidia or AMD to accelerate it, as those used fixed precision for it). Apple swung that back around by using an ASIC with (what looks like) mixed precision... but if a new algorithm is written, you're forced to fall back on the CPU alone instead of the ASIC. That locks you into a hole. Does that matter today? Nope. Does it matter tomorrow? Maybe. Does it matter in 4 years? I'd be wondering! But if I'm a business, I care about today, next quarter, and maybe a 3-year depreciation cycle. 5 years if I'm fiddly. I can squeeze that in easily enough. :) Hence why I wonder about the Studio's applicability for prosumers, but don't wonder for enterprise business... or at least those reporting to the SEC (because of accounting size/rules, they won't care).

As for the YouTube video - that was decoding. I'd love to see a comparison of AMD's H.265 encoders (IIRC, the APUs still have hardware ones like the GPUs do) against the M1 encoders, on battery and on external power. That'd be a sweet comparison. No doubt the M1 encoders are higher quality (fixed precision vs. mixed again), but if you're doing it on battery, speed is the concern over quality... external power is different.
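For reference, this is roughly how software reaches those encoders on macOS: you don't talk to the media engine directly, you ask VideoToolbox for a hardware-accelerated session and it lands on whatever encoder ASIC the machine has. A minimal sketch, with a made-up resolution, no error handling, and the frame loop omitted:

```swift
import VideoToolbox
import CoreMedia

// Sketch: request a hardware-accelerated HEVC encoder session from VideoToolbox.
// On Apple silicon this should land on the media-engine ASIC discussed above.
var session: VTCompressionSession?
let spec = [kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder as String: true] as CFDictionary

let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 3840, height: 2160,
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: spec,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session)

if status == noErr, let s = session {
    // Real-time mode is what a streaming or battery-constrained workflow would want.
    VTSessionSetProperty(s, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    // Frames would then be submitted with VTCompressionSessionEncodeFrame(...).
}
```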
I hope to god that AMD partners with somebody who puts those chips in something that isn’t slathered in RGB and branded to the 9’s with Gamer XTreme FAP SNUF edition logos all over it. I literally can’t buy those; auditing shits the bed. For me, right now the perfect laptop is the AMD Alienware with the 3080 and 64GB of RAM, where I can swap out the cheap M.2s they use and drop in a pair of fast 2TB drives in RAID 0. It would save me probably 3 hours out of my Monday reports from the firewall logs alone. But I can’t because of all the gaming branding. I’m told there is going to be an M1-native version of my reporting software coming in September of this year. My existing i9 with a 2060 takes about 3 hours to run the report; the M1 Max, they say, does that same report in about 45 minutes. That’s a big deal for me, and unlike the Alienware, auditing and accounting won’t bat an eye at an Apple purchase.
Another use case - although I'm curious, what's the report running on software wise? DB queries or some graph / doc database?
I really want to see the good stuff in non gaming builds but it almost never shows up.
Sad but ayup. I miss my old Inspiron 9300 that had a 6800GT and the Core2 in it. Looked clean and professional, but had horsepower under the hood. The glowing alien head got annoying on my A13.
It's not about today's GPUs having "performance issues"; it's about improving performance by eliminating the bottlenecks between the CPU and GPU as much as possible. Reducing the time data has to travel between the CPU and the GPU, by letting both access the same memory pool, will improve performance over a separate CPU+GPU solution, especially when working with large data assets.
For those data sets, yep. For games? Nope. All use case driven.
Yes. A lot of scientific, data analysis, and artistic applications use the various Metal APIs to get access to the various accelerators. Most of them are Apple Silicon native now as well.

It’s a Metal API that lets the GPU pull data directly from RAM, and another that lets it pull from storage directly, without getting the CPU to authorize access or facilitate the transfer like a standard function call would. Metal is also responsible for the GPU encoders and such.
Another thing here - Metal is designed to use the memory as a unified pool. Most other setups have two different APIs - there's not a unified API for access to the CPU and to the GPU. Or wasn't - I haven't directly touched that code in a while or had to get to that level. ~shrug~.
Nobody is really developing much in the way of desktop games directly in Metal (outside of mobile); MoltenVK is much better there as it translates to it well. Metal just isn’t supported well in the popular development toolkits, and why should it be? Nobody games on a Mac, and it’s only recently that they’ve even had the hardware that makes it feasible. But even with the hardware there, most Mac users are not “gamers” in the sense that it’s needed. I mean, hell, my mom gets more game time than I do, but it’s all Solitaire, Minesweeper, and those spot-the-object, FarmVille Facebook games. Intel IGP has been good enough for those for the last decade.

Even if the Unreal and Unity engines support Metal, their dev kits don’t really; you can do it, but it’s a crapload of work with no payoff that would turn a profit. That is supposedly changing this year with updated versions from the major players, but that’s new, and software upgrades are expensive with, again, very little payoff. Apple gaming outside of simple mobile and Apple Arcade titles is stuck with a chicken-and-egg problem: the hardware you could game on is only now emerging, but because there was no hardware there was no software, because there was no software there were no games, and because there were no games there was no call for the hardware.
I just don't see gaming ever being a big use case for Apple hardware. IMHO, it falls into 3 categories:

1. Mini/iMac/MacBook Air for low power, dead simple, basic "it browses the web and does stuff". I've seen clusters of Mac Minis, but that's a bit weird. The iMac is a solid all-in-one for basic use cases.
2. MacBook Pro - high battery life, decently powerful, can do a bit of everything except play games. We're not really the market anymore (which does suck, I guess - you used to be able to use an MBP as a gaming-on-the-go box a bit), but plenty of horsepower for scientific work or other use. It's a premium laptop or cheap-ish ultraportable for non-gaming cases. YMMV, buy it if it fits, don't if it doesn't.
3. Studio/Mac Pro (soon). High power for video editing or scientific work. Not a general-purpose system.

Were I buying a laptop right now, I have no idea what I'd get. Probably a G14, but I'm not sure. If I HAD to build a workstation right now I'd probably swear at myself and get a P620... MAYBE Alder Lake, but I see Alder Lake as a transitional architecture before Raptor Lake comes out. Gaming box? Go snag a 5900X or 12900K and have fun.
 
Another use case - although I'm curious, what's the report running on software wise? DB queries or some graph / doc database?
DB queries. Every week my firewall generates some 40GB of logs that are required to be analyzed, and those reports are submitted weekly for review. It’s some sort of SQL for sure, and it eats my machine while it’s being processed. The new requirements for v11, which I will be upgrading to in the fall, are 32 cores, 128GB of RAM, and 2TB of storage.
I’m currently on v9 because my firewalls weren’t compatible with v10 but we’re replacing the lot of them and skipping 10 for 11.

The software doesn’t have an installer; it’s a closed-box VM with options for Hyper-V, VMware, and, with v11, Parallels.

With v11’s requirements and the huge creep from v8’s it may just go on a dedicated server instead. Not sure how we want to play it.

Edit: v8’s requirements were 8 cores, 16GB of RAM, and the image for it was an expanding VHD configured for 128GB. This was the version I was on when I got my desktop, and it took about 45 minutes to run, but the logs were a lot smaller with v8, as it tracked much less. v9 kept the same requirements, but our logs have ballooned as v9 added additional features, we’ve enabled more options, and the rules have gotten far more complex and specific.
 
Yeah. That’s to the point I’d dump it to a VM on a server. Ick.
This brings me to my dilemma over a new Lenovo P620 or the M1 Ultra. My racks are full, so this lives in my office, and the existing P620 lineup is loud; supposedly the next generation with the 5000-series TR Pros has a different mounting configuration to tackle the noise problem. So I'm waiting for actual reviews, because how loud the systems are is going to be a big deciding factor.

Edit:
Yeah, so I talked with Accounting. I was already talking with Microsoft about moving my domain controller to Azure and then centralizing my DHCP servers off that, and then they asked, well, what if we put that system up there too? The Microsoft guys said yeah, we support that software, and they're getting me numbers. So I may not need either box, because Azure only charges for data that comes down: uploading the 40GB doesn't cost us anything, and I would only be paying for the 40MB or so file it generates on the way back down, so it could potentially be way cheaper than any of the local options. So maybe no new toys... :'(
 
I guess it's a shame that the VFX industry mostly uses Linux desktops, servers, and clusters for its rendering. It's obvious Apple wants to be the next Sun Microsystems, especially with their insistence on using their own proprietary API that absolutely no one outside of Apple is interested in, while deliberately shunning Vulkan and freezing all development on OpenGL.

Essentially, based on Apple's business practices, which seem to have had a knock-on effect with every other tech company in the USA following suit, you couldn't pay me to own a Mac Studio (or any Apple product for that matter). If I'm going to muck around with ARM processors on the desktop, I'll use the open source products - they may not be anywhere near as fast, but they're more fun to tinker with.
 
Pretty ridiculous that this device still only has HDMI 2.0. Apple is smoking something potent.
 
Pretty ridiculous that this device still only has HDMI 2.0. Apple is smoking something potent.
Almost everyone uses TB displays with them. Heck, most of the enterprise folks I know use TB ~everything~ now... Daisy chain the crap forever it seems. Baffles me, but I'm still using DP for everything on a computer, except the one HTPC. I'm not even sure why the cards have HDMI except for audio out :p
 
As someone who exclusively uses TB I think it's also reasonable to be annoyed out of OCD principle that Apple only included HDMI 2.0. Would I ever use HDMI 2.1, etc? No, it's totally pointless for my use case. But in such an expensive machine it's a little bit like, "really?"

That being said, the fact that PCs often include zero or one Thunderbolt ports is absolutely unforgivable. Once you have adapted your workflow to Thunderbolt, nothing else is acceptable from a performance and convenience standpoint. Everything else is just obsolete. I am even "that guy" who was pissed that Apple removed a Thunderbolt port in favor of MagSafe and HDMI. I would much rather have 4 Thunderbolt ports instead of 3 plus MagSafe/HDMI. I have not used MagSafe a single time since I got my laptop, but I would have used all four TB ports a few times (and used to use all 4 on my previous M1 laptop).
 
Eh, I like that it has magsafe. When you're using the thing plugged in but not at your work desk it's nice to have a connector that just pulls off.
 
Eh, I like that it has magsafe. When you're using the thing plugged in but not at your work desk it's nice to have a connector that just pulls off.

The battery life is so good on the M1 Macbook Pros (even my M1 Max) I don't think I have ever had it plugged in except for at my work desk or during a trans-atlantic flight when I was hammering the CPU the whole time, and when it's at my work desk it's in a full fledged thunderbolt dock setup.
 
As someone who exclusively uses TB I think it's also reasonable to be annoyed out of OCD principle that Apple only included HDMI 2.0. Would I ever use HDMI 2.1, etc? No, it's totally pointless for my use case. But in such an expensive machine it's a little bit like, "really?"

That being said, the fact that PCs often include zero or one Thunderbolt ports is absolutely unforgivable. Once you have adapted your workflow to Thunderbolt, nothing else is acceptable from a performance and convenience standpoint. Everything else is just obsolete. I am even "that guy" who was pissed that Apple removed a Thunderbolt port in favor of MagSafe and HDMI. I would much rather have 4 Thunderbolt ports instead of 3 plus MagSafe/HDMI. I have not used MagSafe a single time since I got my laptop, but I would have used all four TB ports a few times (and used to use all 4 on my previous M1 laptop).
I’m going to have to disagree with you on the MagSafe thing. I’ve got a lot of Macs in a lot of board rooms, and inevitably somebody has forgotten to charge theirs and it’s near dead, so it needs to be plugged in, and inevitably somebody trips on that power cord. MagSafe has probably saved half the MacBooks in my fleet, and I’ve lost half the non-MagSafe models to that very accident; they don’t bounce well.
 
I’m going to have to disagree with you on the MagSafe thing. I’ve got a lot of Macs in a lot of board rooms, and inevitably somebody has forgotten to charge theirs and it’s near dead, so it needs to be plugged in, and inevitably somebody trips on that power cord. MagSafe has probably saved half the MacBooks in my fleet, and I’ve lost half the non-MagSafe models to that very accident; they don’t bounce well.

I totally understand how it is useful for most people, and agree that most people don't need 4 thunderbolt ports. I recognize I am in the minority there. But I do think with the M1 laptops in general you will barely ever see them plugged in. My Intel MBP 13 was constantly tethered to a cord, M1 not so much.
 
I totally understand how it is useful for most people, and agree that most people don't need 4 thunderbolt ports. I recognize I am in the minority there. But I do think with the M1 laptops in general you will barely ever see them plugged in. My Intel MBP 13 was constantly tethered to a cord, M1 not so much.
Mine are almost always plugged in: if it’s not at a desk to their monitors, it’s to a dongle doing USB and HDMI for a presentation setup or a remote conference meeting. The only time they’re not plugged in is when they’re working from home or off site, and only because they’ve all lost their second chargers and are too scared to tell me so I can order more.

How do I know they’ve lost them? Because whenever they show up at 8:45 for that 9am meeting it’s always the same question: “Do you have a spare charger? I don’t have time to disassemble my desk for that one and my battery’s about to die.”
 
Eh, I like that it has magsafe. When you're using the thing plugged in but not at your work desk it's nice to have a connector that just pulls off.
I've also seen some third-party USB-C to magnetic, MagSafe-like connectors. Not sure if they charge as efficiently, but I'm glad the concept didn't die off when Apple originally stopped including it, before this generation brought it back.
 
It's not about today's GPUs having "performance issues"; it's about improving performance by eliminating the bottlenecks between the CPU and GPU as much as possible. Reducing the time data has to travel between the CPU and the GPU, by letting both access the same memory pool, will improve performance over a separate CPU+GPU solution, especially when working with large data assets.
If the CPU and GPU talk to each other often then an SoC will have better performance in general, but sharing the same memory pool will hurt performance. The reason is that there are lanes that allow communication with the memory, and when both are active this can slow things down because the memory controller has to manage it. Whereas separate memory that specializes for each component would greatly increase speed.

What's funny is that the only API I can think of that makes use of both the CPU and GPU is OpenCL, and didn't Apple dump OpenCL? If I remember correctly, Apple helped create OpenCL.
 
If the CPU and GPU talk to each other often then an SoC will have better performance in general, but sharing the same memory pool will hurt performance. The reason is that there are lanes that allow communication with the memory, and when both are active this can slow things down because the memory controller has to manage it. Whereas separate memory that specializes for each component would greatly increase speed.

What's funny is that the only API I can think of that makes use of both the CPU and GPU is OpenCL, and didn't Apple dump OpenCL? If I remember correctly, Apple helped create OpenCL.
Yes and no. The M1 Ultra is running 32 independent memory channels for 800 GB/s, with an internal transfer speed between the SoCs of 2.5 TB/s. Those numbers are unheard of in the consumer space, and they provide more than enough flexibility to avoid any significant bottlenecks. Every system has a bottleneck, and it’s the designer’s job to minimize its impact; Apple has done a good job of that here.

There are lots of scenarios where a shared pool greatly speeds up specific processes. I’m not sure what you mean by an API using both the CPU and GPU, though, as they all do. Unless you mean the data access pipeline, in which case I think you have things backwards. The DX12 DirectStorage API is one of the first mainstream attempts to minimize the CPU’s participation in the graphics pipeline, and you can clearly see the improvements there. Metal did that back in 2014, but as you know, “nobody uses Metal” outside of iOS and macOS development.
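On the 800 GB/s figure, the arithmetic checks out if you assume the commonly reported 1024-bit LPDDR5-6400 interface on the Ultra (double the Max's 512-bit bus); quick sanity check:

```swift
// Rough sanity check on the advertised memory bandwidth (assumed bus width and speed).
let busWidthBits    = 1024.0               // two M1 Max dies at 512 bits each
let transfersPerSec = 6_400_000_000.0      // LPDDR5-6400 = 6.4 GT/s per pin
let bytesPerSec     = busWidthBits / 8.0 * transfersPerSec
print(bytesPerSec / 1e9)                   // ≈ 819 GB/s, rounded to "800 GB/s" in the marketing
```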
 
Yes and no. The M1 Ultra is running 32 independent memory channels for 800 GB/s, with an internal transfer speed between the SoCs of 2.5 TB/s. Those numbers are unheard of in the consumer space, and they provide more than enough flexibility to avoid any significant bottlenecks. Every system has a bottleneck, and it’s the designer’s job to minimize its impact; Apple has done a good job of that here.
I agree, and Apple clearly did a great job. Something that AMD and Intel need to take notes on. I've wanted an APU war for nearly a decade now and I'm still not seeing my APU war. Intel may just copy Apple because I'm sure they're still butthurt over losing Apple as a customer, but AMD's solution makes the most sense, and that's chiplets. There is another disadvantage in making a big SoC: bigger chips make for more defects. This can limit clock speeds, because no matter how good you are at binning you will have to contend with defects, especially since Apple is using 5nm, which is a very new manufacturing node that I hardly see anyone else using. So far nobody has seen an AMD chiplet solution with the GPU and CPU separated. I was shocked that the PS5 and Xbox Series are just one big giant chip, but I think that's because nobody wanted to pay AMD the license fee to use that tech.
There are lots of scenarios where a shared pool greatly speeds up specific processes. I’m not sure what you mean by an API using both the CPU and GPU, though, as they all do. Unless you mean the data access pipeline, in which case I think you have things backwards.
In 99.99% of end-user workloads the data is separated between GPU and CPU. I can't think of many situations where you need the CPU and GPU to cross-talk. DaVinci Resolve, for example, can make use of both CPU and GPU through OpenCL, which is a bitch to set up in Linux as an AMD user. But even on the M1, DaVinci is nearly as fast as or faster than Final Cut Pro. Not sure what DaVinci uses on the Apple M1 to encode videos.
The DX12 DirectStorage API is one of the first mainstream attempts to minimize the CPU’s participation in the graphics pipeline, and you can clearly see the improvements there. Metal did that back in 2014, but as you know, “nobody uses Metal” outside of iOS and macOS development.
This is gonna be a theme with Apple, as they seem to be stepping away from their own open source projects. CUPS and OpenCL were both adopted and supported by Apple but have since been left behind, though I think with printers Apple is counting on IPP to take over from CUPS. Vulkan is open source and there's really no reason for Apple not to support it on their hardware. Apple has taken the approach that they're too big for open source projects, and they're so big that they're willing to bet you'll support their standards even if that means abandoning standards like Vulkan. Anything Apple wanted that wasn't in Vulkan could have been added to Vulkan by Apple themselves, just like everybody else has been doing. Surprisingly, OpenCL continues to evolve, but that may be due to miners making good use of it.
 
In 99.99% of end-user workloads the data is separated between GPU and CPU. I can't think of many situations where you need the CPU and GPU to cross-talk. DaVinci Resolve, for example, can make use of both CPU and GPU through OpenCL, which is a bitch to set up in Linux as an AMD user. But even on the M1, DaVinci is nearly as fast as or faster than Final Cut Pro. Not sure what DaVinci uses on the Apple M1 to encode videos.
See, this here is backwards. In a normal process the CPU and GPU are always cross-talking, and it's generally inefficient, hence the work on DirectStorage and BaM: the GPU under normal circumstances can't directly access anything in storage or system RAM, and has to use the CPU to fetch data from either of those sources. This just adds latency and intermediary steps. For CPU-intensive tasks independent of the GPU it has no impact, and for smaller GPU tasks where the entire job can be loaded into GPU memory in a single go it also doesn't have much of an impact, but for larger GPU jobs where it can't all be loaded at once, or jobs with lots of coordinated effort between the CPU and GPU, the minor delays caused by latency can very quickly add up to some big numbers.

Part of the fast encoding time is the unified memory, as the system doesn't have to spend time and resources moving data back and forth between system and GPU memory, but Blackmagic also partnered with Apple and MainConcept for the development of their codec plugin (https://www.mainconcept.com/), so it is very optimized for the M1 ecosystem. Final Cut Pro uses Compressor, and I don't know how it does its job, but it's a known fact that it's not the greatest, which is why many FCP users actually use Adobe Media Encoder instead of Compressor.

Yeah, Apple's current business goals and the general open source community's goals don't line up at all right now, but most of the features Vulkan and DX12 are only now implementing, Apple has had in Metal since the 20-teens, and it lets them add the features users request to their APIs without having to share any of it with the competition. Apple right now is very much settled on making others play by their rules, not the other way around, and they aren't afraid to swing their massive wallet around to make it happen.
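To picture the "can't all be loaded at once" case from the post above: on a discrete card you end up with a loop shaped roughly like this hypothetical Swift/Metal sketch, paying a CPU-side copy plus a bus transfer per chunk before any compute starts. On unified memory the kernel would simply be dispatched against the full dataset in place.

```swift
import Metal

// Hypothetical chunked pipeline for a dataset bigger than a discrete card's memory.
// Each iteration pays a CPU copy plus a transfer before any work happens; real code
// double-buffers to hide some of it, but the per-chunk latency is what adds up.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let dataset = [Float](repeating: 1.0, count: 64 * 1_048_576)   // stand-in for a big asset
let chunkElems = 8 * 1_048_576
let chunkBytes = chunkElems * MemoryLayout<Float>.stride

let staging = device.makeBuffer(length: chunkBytes, options: .storageModeShared)!
let gpuOnly = device.makeBuffer(length: chunkBytes, options: .storageModePrivate)!

for chunkStart in stride(from: 0, to: dataset.count, by: chunkElems) {
    // 1. CPU copies the next slice into the staging buffer.
    dataset.withUnsafeBytes { raw in
        staging.contents().copyMemory(
            from: raw.baseAddress!.advanced(by: chunkStart * MemoryLayout<Float>.stride),
            byteCount: chunkBytes)
    }
    // 2. Blit it across the bus into GPU-only memory.
    let cmd = queue.makeCommandBuffer()!
    let blit = cmd.makeBlitCommandEncoder()!
    blit.copy(from: staging, sourceOffset: 0, to: gpuOnly, destinationOffset: 0, size: chunkBytes)
    blit.endEncoding()
    // 3. (The compute pass over gpuOnly would be encoded here.)
    cmd.commit()
    cmd.waitUntilCompleted()   // serialized for clarity
}
```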
 
See, this here is backwards. In a normal process the CPU and GPU are always cross-talking, and it's generally inefficient, hence the work on DirectStorage and BaM: the GPU under normal circumstances can't directly access anything in storage or system RAM, and has to use the CPU to fetch data from either of those sources. This just adds latency and intermediary steps. For CPU-intensive tasks independent of the GPU it has no impact, and for smaller GPU tasks where the entire job can be loaded into GPU memory in a single go it also doesn't have much of an impact, but for larger GPU jobs where it can't all be loaded at once, or jobs with lots of coordinated effort between the CPU and GPU, the minor delays caused by latency can very quickly add up to some big numbers.
The difference here is that you're talking about the CPU being involved in just handing over data, whereas I'm talking about both crunching numbers at the same time, maybe together on the same data. What you're describing does slow things down, but it's only an issue if the CPU is not able to keep up. Modern-day CPUs are so fast that it hardly matters, though I'd like to see the CPU less involved as well.
Part of the fast encoding time is the unified memory, as the system doesn't have to spend time and resources moving data back and forth between system and GPU memory, but Blackmagic also partnered with Apple and MainConcept for the development of their codec plugin (https://www.mainconcept.com/), so it is very optimized for the M1 ecosystem. Final Cut Pro uses Compressor, and I don't know how it does its job, but it's a known fact that it's not the greatest, which is why many FCP users actually use Adobe Media Encoder instead of Compressor.
The Blackmagic plugin is a very specialized video encoder. It isn't using the CPU or GPU but the media engine that Apple built into the M1. That's an ASIC. ASICs are very good at doing a specific task. They're also terrible years later when Apple abandons them and nobody supports them.
Yeah, Apple's current business goals and the general open source community's goals don't line up at all right now, but most of the features Vulkan and DX12 are only now implementing, Apple has had in Metal since the 20-teens,
Metal was made in 2014, and like I said, anything they wanted could have been implemented into Vulkan by Apple themselves. So what if they had a feature nobody else had, when nobody besides Apple uses Metal in 2022? This is the problem with making your own API and going ARM: you really can't expect everyone to jump on board. ARM is the easier sell since it's been around forever, but not Metal. Probably never Metal, as developers will always take the lazy way out of working with it, and that's MoltenVK.
and it lets them add the features users request to their APIs without having to share any of it with the competition.
How is this ever a good thing? When has this ever worked? Point me to a standard that has worked when it excluded the competition. Remember X2 and K56flex? Remember Creative's EAX standard, which they sued John Carmack over to get it included in Doom 3? Remember 3dfx's Glide? Remember Nvidia's PhysX? These are all dead today. Even Microsoft, which has something like 80% of the desktop OS market, still allows OpenGL and Vulkan on their OS.
Apple right now is very much settled on making others play by their rules, not the other way around, and they aren't afraid to swing their massive wallet around to make it happen.
As did every big company before them. You know what that results in? Problems for anyone using legacy software that needs a standard that is no longer supported. The good thing about CUPS and OpenCL is that since they were open standards we don't need Apple to support them. Metal though...
[image: the xkcd "Standards" comic]
 
The difference here is that you're talking about the CPU being involved in just handing over data, whereas I'm talking about both crunching numbers at the same time, maybe together on the same data. What you're describing does slow things down, but it's only an issue if the CPU is not able to keep up. Modern-day CPUs are so fast that it hardly matters, though I'd like to see the CPU less involved as well.
They don’t though; if they did, there wouldn’t be such a coordinated effort to unify the memory. With large datasets the transfer speeds just don’t make it feasible, especially if you have data that the CPU and GPU need to coordinate on; that leads to numerous deadlock scenarios that take time and resources to avoid.
The Blackmagic plugin is a very specialized video encoder. It isn't using the CPU or GPU but the media engine that Apple built into the M1. That's an ASIC. ASICs are very good at doing a specific task. They're also terrible years later when Apple abandons them and nobody supports them.
Everything works until somebody drops support for it, so if 10 years from now they drop support for it in version umpteen, it still works in umpteen-1 and you’ve gotten 10 years out of it. Either upgrade and move on or stay where you are and continue being happy with it. This is not an Apple problem, it’s a universal problem with any manufactured good.
Metal was made in 2014, and like I said, anything they wanted could have been implemented into Vulkan by Apple themselves. So what if they had a feature nobody else had, when nobody besides Apple uses Metal in 2022? This is the problem with making your own API and going ARM: you really can't expect everyone to jump on board. ARM is the easier sell since it's been around forever, but not Metal. Probably never Metal, as developers will always take the lazy way out of working with it, and that's MoltenVK.
In big development you aren’t programming in DirectX, Vulkan, Metal, or OpenGL; you’re using the markup language for your platform, checking some boxes, and it translates the work back down. This was one of the big things that the switch back to low-level APIs let development suites do: multi-platform is now a couple of checkboxes at compile time. MoltenVK is a mid-term step; most development suites don’t yet support Metal outside of Xcode because there was no reason to, but that’s changed, and the suites are all giving notice of impending Metal support. The M1 platform is Apple’s first real step towards offering decent GPU capabilities, but again, Apple users aren’t traditional gamers. This is changing, but mostly because Apple is managing to grow their market.
How is this ever a good thing? When has this ever worked? Point me to a standard that has worked when it excluded the competition. Remember X2 and K56flex? Remember Creative's EAX standard, which they sued John Carmack over to get it included in Doom 3? Remember 3dfx's Glide? Remember Nvidia's PhysX? These are all dead today. Even Microsoft, which has something like 80% of the desktop OS market, still allows OpenGL and Vulkan on their OS.
I’m not going to argue against open standards, they are preferable 100% of the time. But they aren’t always best, there are very real and tangible benefits to their closed source competition be it Metal, CUDA, DirectX.

As did every big company before them. You know what that results in? Problems for anyone using legacy software that needs a standard that is no longer supported. The good thing about CUPS and OpenCL is that since they were open standards we don't need Apple to support them. Metal though...
CUPS is born of necessity; even Apple doesn’t have the pull to force another proprietary standard on HP, Lexmark, Ricoh, and all the other major printer people. And when you’re dropping $15k on a copier you expect a decade out of it at least; proprietary formats here are a liability, not a benefit, and that’s a known thing.
That said, I am super happy they are giving alternatives to Bonjour; that stupid TTL of 1 is a nightmare to deal with when your printers don’t live on the same subnet as your wireless equipment.

OpenCL has been in a tough spot for many years, and Intel is likely to be its saviour. If you’re running Nvidia you use CUDA; it’s better, faster, and cheaper to implement, with a larger community and better libraries. Apple killed OpenCL on their platforms, so there you use Metal. So the only place you use OpenCL now is when you have AMD GPUs. Intel has thrown in with OpenCL, and if Intel’s cards are as competitive in the datacenter as they appear to be, and Intel keeps up with them, then that’s a big win there.
 
Everything works until somebody drops support for it, so if 10 years from now they drop support for it in version umpteen, it still works in umpteen-1 and you’ve gotten 10 years out of it.
If these were open standards then they would work well beyond support from the manufacturer. To make matters worse, if the world decides to use another video encoding standard, a fixed-function video encoder won't be able to handle it.
Either upgrade and move on or stay where you are and continue being happy with it. This is not an Apple problem, it’s a universal problem with any manufactured good.
It is very much an Apple problem, and it's e-waste for a brief moment of vendor lock-in. It's also an Apple-only problem at the moment, since Apple is the only one doing this.
In big development you aren’t programming in DirectX, Vulkan, Metal, or OpenGL; you’re using the markup language for your platform, checking some boxes, and it translates the work back down. This was one of the big things that the switch back to low-level APIs let development suites do: multi-platform is now a couple of checkboxes at compile time. MoltenVK is a mid-term step; most development suites don’t yet support Metal outside of Xcode because there was no reason to, but that’s changed, and the suites are all giving notice of impending Metal support. The M1 platform is Apple’s first real step towards offering decent GPU capabilities, but again, Apple users aren’t traditional gamers. This is changing, but mostly because Apple is managing to grow their market.
Metal is going to fail. Especially outside of gaming, it's going to fail. Game developers have more of an incentive to use Metal since it brings lower system requirements, but for web browsers, CAD, etc. you don't really have that incentive. It's a matter of: should I port my applications to a platform that has only 10% market share? It makes more sense on iOS, since that has a lot more than 10%, but not on their laptops. Eventually Apple will merge iOS with macOS, but I feel at that point you don't have a personal computer.
I’m not going to argue against open standards, they are preferable 100% of the time. But they aren’t always best, there are very real and tangible benefits to their closed source competition be it Metal, CUDA, DirectX.
CUDA has been slowly dying and is holding on because it's easier to work with than OpenCL. DirectX is actually being ported to Linux, not that anyone on Linux will use it. DirectX's strength is that Microsoft owns 80% of the desktop OS market and has the Xbox using it as well.
 
The difference here is that you're talking about the CPU being involved in just handing over data, whereas I'm talking about both crunching numbers at the same time, maybe together on the same data. What you're describing does slow things down, but it's only an issue if the CPU is not able to keep up. Modern-day CPUs are so fast that it hardly matters, though I'd like to see the CPU less involved as well.
Yes and no - if you have to move data between the two (GPU and main RAM), you have a bottleneck - which doesn't matter for 95% of workloads out there, and probably 99% of consumer ones. But it sucks for some HPC workloads and such :) This is the point I keep making - it doesn't matter even for US, because it's a WEIRD use case. But that's what Apple is targeting, and apparently believes it is a worthwhile market. Cool beans for them. Not a system for me, but I'll probably sell a boatload of them if they're right.
The Blackmagic plugin is a very specialized video encoder. It isn't using the CPU or GPU but the media engine that Apple built into the M1. That's an ASIC. ASICs are very good at doing a specific task. They're also terrible years later when Apple abandons them and nobody supports them.
Yep. This scares me, but enterprise cares about a 3-5 year cycle. It'll be safe for that long. ~shrug~
Metal was made in 2014, and like I said, anything they wanted could have been implemented into Vulkan by Apple themselves. So what if they had a feature nobody else had, when nobody besides Apple uses Metal in 2022? This is the problem with making your own API and going ARM: you really can't expect everyone to jump on board. ARM is the easier sell since it's been around forever, but not Metal. Probably never Metal, as developers will always take the lazy way out of working with it, and that's MoltenVK.

How is this ever a good thing? When has this ever worked? Point me to a standard that has worked when it excluded the competition. Remember X2 and K56flex? Remember Creative's EAX standard, which they sued John Carmack over to get it included in Doom 3? Remember 3dfx's Glide? Remember Nvidia's PhysX? These are all dead today. Even Microsoft, which has something like 80% of the desktop OS market, still allows OpenGL and Vulkan on their OS.

As did every big company before them. You know what that results in? Problems for anyone using legacy software that needs a standard that is no longer supported. The good thing about CUPS and OpenCL is that since they were open standards we don't need Apple to support them. Metal though...
[image: the xkcd "Standards" comic]
100% totally agree. Personally. And even professionally - but I'm not the one buying the thing :D
They don’t though; if they did, there wouldn’t be such a coordinated effort to unify the memory. With large datasets the transfer speeds just don’t make it feasible, especially if you have data that the CPU and GPU need to coordinate on; that leads to numerous deadlock scenarios that take time and resources to avoid.

Everything works until somebody drops support for it, so if 10 years from now they drop support for it in version umpteen, it still works in umpteen-1 and you’ve gotten 10 years out of it. Either upgrade and move on or stay where you are and continue being happy with it. This is not an Apple problem, it’s a universal problem with any manufactured good.
Yep. But the customers they're really selling this to (the ones that buy up front right now) won't care. The end users will care when they're 8 years in, but that's a small part of their targeted market (from what I can tell and am hearing).
In big development you aren’t programming in DirectX, Vulkan, Metal, or OpenGL; you’re using the markup language for your platform, checking some boxes, and it translates the work back down. This was one of the big things that the switch back to low-level APIs let development suites do: multi-platform is now a couple of checkboxes at compile time. MoltenVK is a mid-term step; most development suites don’t yet support Metal outside of Xcode because there was no reason to, but that’s changed, and the suites are all giving notice of impending Metal support. The M1 platform is Apple’s first real step towards offering decent GPU capabilities, but again, Apple users aren’t traditional gamers. This is changing, but mostly because Apple is managing to grow their market.
Yep. And the GPU on M1 is really for other things than games - if games come along, sweet.

In some ways this is the opposite of Nvidia/AMD. For them, games were first - then the other use cases came later (Crypto/image analysis/AI/ML/etc). For Apple, they're aiming at that secondary market first, and if games come? Cool beans.
I’m not going to argue against open standards, they are preferable 100% of the time. But they aren’t always best, there are very real and tangible benefits to their closed source competition be it Metal, CUDA, DirectX.


CUPS is born of necessity; even Apple doesn’t have the pull to force HP, Lexmark, Ricoh, and all the other major printer players onto another proprietary standard. And when you’re dropping 15k on a copier you expect a decade out of it at least; proprietary formats here are a liability, not a benefit, and that’s a known thing.
That said, I am super happy they are giving alternatives to Bonjour; that stupid TTL of 1 is a nightmare to deal with when your printers don’t live on the same subnet as your wireless equipment.

OpenCL has been in a tough spot for many years, and Intel is likely to be its saviour. If you’re running NVidia you use CUDA: it’s better, faster, and cheaper to implement, with a larger community and better libraries. Apple killed OpenCL on their platforms, so there you use Metal, which means the only place you use OpenCL now is when you have AMD GPUs. Intel has thrown in with OpenCL, and if Intel’s cards are as competitive in the datacenter as they appear to be, and Intel keeps up with them, then this is a big win there.
Yep. Fingers crossed for Intel on this one.

And @#%@ bonjour. Period.
 
Yep. But the customers they're really selling this to (the ones that buy up front right now) won't care. The end users will care when they're 8 years in, but that's a small part of their targeted market (from what I can tell and am hearing).
But 8 years down the road, people who were using it to make money will have moved on, so it's just the hobbyists left and those using it as their Facebook, email, and banking machine. And it will do just fine there until it stops working; not many people, casual consumers included, get overly perturbed when an 8+-year-old electronic device dies.
Yes and no - if you have to move data between the two (GPU and main RAM), you have a bottleneck - which doesn't matter for 95% of workloads out there, and probably 99% of consumer ones. But it sucks for some HPC workloads and stuff :) This is the point I keep making - it doesn't matter even for US, because it's a WEIRD use case. But that's what Apple is targeting, and apparently believes is a worthwhile market. Cool beans for them. Not a system for me, but I'll probably sell a boatload of them if they're right.
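For a sense of what that copy actually looks like when the memory isn't unified, here's a hedged Swift/Metal sketch (sizes arbitrary) of the staging-to-private blit a discrete-GPU design needs before the GPU can touch the data - the transfer that disappears when both sides share one pool.

[CODE]
import Metal

// Sketch of the explicit transfer a discrete-GPU design implies: data staged
// in a CPU-visible buffer gets blitted into GPU-private memory before the GPU
// can use it. On a unified-memory part this whole pass simply isn't needed.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal unavailable")
}

let length = 256 * 1024 * 1024   // 256 MB payload, purely illustrative

guard let staging = device.makeBuffer(length: length, options: .storageModeShared),
      let gpuOnly = device.makeBuffer(length: length, options: .storageModePrivate),
      let cmd = queue.makeCommandBuffer(),
      let blit = cmd.makeBlitCommandEncoder() else {
    fatalError("Setup failed")
}

// The copy across the bus - the traffic being called a bottleneck above.
blit.copy(from: staging, sourceOffset: 0, to: gpuOnly, destinationOffset: 0, size: length)
blit.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

let seconds = cmd.gpuEndTime - cmd.gpuStartTime
print("Copied \(length / (1024 * 1024)) MB in \(seconds * 1000) ms")
[/CODE]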
AI workloads make really good use of unified memory, and just about everything is getting the AI touch right now - it's the hot trend, and Apple is one of the major players in it. You can expect some really big things in the upcoming year with image sharpening and audio enhancement using AI, and they deliver, but sweet Jesus are they resource intensive. It's also very useful for anything using an AR interface, and Apple is expected to be making some pretty big announcements in that field shortly.
Yep. This scares me, but enterprise cares about a 3-5 year cycle. It'll be safe for that long. ~shrug~
At least that long, and it's not like the old software goes away; you can keep using it. I am expecting this to be treated much like the NVEncoder: they just iterate it and eventually move the original over to legacy, but Apple has a good track record with their legacy products.
And @#%@ bonjour. Period.
Twice for good measure
 
Apple has had good high-speed peripheral support for many years. I would like Thunderbolt to get more love on the PC side. My past few motherboards had Thunderbolt built in, and I had a card for my X99 system. USB on crack.
 
Something you all have to remember about Apple: they collect metric shit-tonnes of usage metrics. All anonymous, none of it tagged to a user...

You sure about that... With the amount of data points collected, and considering they know your Apple ID and everything else you do, it is not as anonymous as you may think if they wanted to narrow it down to a particular person. Don't be fooled that Apple cares about your privacy any more than any other company; what Apple does well is keep your data under their control, so they can choose how to sell it / provide it to others as anonymously as possible.
 
You sure about that... With the amount of data points collected, and considering they know your Apple ID and everything else you do, it is not as anonymous as you may think if they wanted to narrow it down to a particular person. Don't be fooled that Apple cares about your privacy any more than any other company; what Apple does well is keep your data under their control, so they can choose how to sell it / provide it to others as anonymously as possible.
Apple separates its collection between Apple ID data and Apple hardware data. I’ve got thousands of Apple products, and to comply with FOIPA regulations they have to disclose to me what they are collecting as well as provide tools showing what is contained in their data collections. So unless they are lying and omitting data here, the hardware-related metrics are anonymous. Apple ID metrics can’t be, because those contain your data, account info, and shiploads of personal data, and god knows they run craploads of metrics on that; but if you are using any of their MDM solutions then there are no personal Apple IDs to correlate and no data to transmit, just the corporate stuff. Application statistics go with the hardware stuff, not the user ID stuff, so there’s a logical separation.

But Apple collects craploads of usage metrics from all angles; the only difference from the others is that they blatantly flip off anybody who asks for access to their collected records. By law, though, they are required to provide individuals with any and all data they have that can be traced back to them, and the last audit we did had craploads of it - more than we were equipped to actually audit - but initial reviews of it showed no violations we needed to be concerned with.

So I guess I should have clarified that it’s not that they don’t collect data, it’s how they use it. But the ridiculous amounts of hardware/software metrics are how they know their audience’s usage patterns so well, which is how they manage to optimize their stuff so well.
 
This is interesting.

NUMA will be interesting depending on the capabilities of the interconnect. We generally got to the point of ignoring it for most enterprise workloads by the time Haswell became common - it was fast enough and the schedulers smart enough to just make it work. But this will be a new world. Could prove interesting. Will watch tomorrow.
 
NUMA will be interesting depending on the capabilities of the interconnect. We generally got to the point of ignoring it for most enterprise workloads by the time Haswell became common - it was fast enough and the schedulers smart enough to just make it work. But this will be a new world. Could prove interesting. Will watch tomorrow.
Apple claims the interconnect can move upwards of 2.5 TB/s. Those are some beefy numbers. For CPU workloads you can mostly ignore the differences even when dealing with multiple CPU sockets, but on GPUs it’s a very different story, and with those sorts of numbers it may very well overcome the issues generally associated with multi-GPU setups, which this technically is.
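Once these are actually in hand, a quick capability dump like this (just a sketch against the public Metal API, nothing Ultra-specific assumed) would show whether macOS presents the two dies as one unified-memory device or as a peer group of GPUs:

[CODE]
import Metal

// Quick capability dump: if the two dies really surface as a single GPU, this
// should print one device with hasUnifiedMemory == true and no peers; a
// classic multi-GPU setup would instead show several devices in a peer group.
for device in MTLCopyAllDevices() {
    print(device.name)
    print("  unified memory:          \(device.hasUnifiedMemory)")
    print("  recommended working set: \(device.recommendedMaxWorkingSetSize / (1024 * 1024)) MB")
    print("  peer group \(device.peerGroupID), index \(device.peerIndex) of \(device.peerCount)")
}
[/CODE]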

I really want benchmarks. I mean, regardless of what they are, I know I am ordering at least 10 to replace what are now 7-year-old Mac Pros, because it’s guaranteed to be an improvement in every metric we care about. But it would be nice to know how much of an improvement we can look forward to.

Admin generally loves it when I phase out labs of old tech, because it often leads to noticeable changes on the electrical bill but also changes on the gas bill, and it’s usually an overall cost decrease. In the summer the AC gets used less, but in the winter the gas gets used more; normally that’s fine, but not if gas prices stay where they are or, god forbid, get worse.

I still remember their reactions when I got rid of all the old P4s and CRTs: the electrical bill decreased so much they thought they had lost invoices, and the gas bill increased so much they thought they had sprung a leak. The poor maintenance manager worked on those for a week before he mentioned any of it to me and I educated him on the situation.
 