AMD X Apple unveil the Radeon W6800X Duo

I'm genuinely surprised that Apple has stuck with Intel and not switched over to AMD Threadripper, Threadripper Pro or even Epyc chips. The improvement to the Mac Pro 2019's IO would be massive and significantly cheaper, as the Threadripper Pro and Epyc chips sport a full 128 PCIe lanes, which would remove the need for the massive PCIe bridge chip on the current Mac Pro motherboard. In addition, AMD and Apple could have enabled Infinity Fabric links between the CPU and these GPUs. That would permit a flat and fully coherent memory space spanning the CPU and GPU memory pools. That hits the trifecta of more bandwidth, lower latency and improved efficiency. Bonus points if AMD/Xilinx also released an FPGA accelerator card that connected via Infinity Fabric.

While professional applications on the Mac side of things are generally written nowadays to expect multiple GPUs (the trash can model standardized that, even though it was, well... trash), I am always curious how these cards are exposed in Windows. CrossFire is long dead, but this might be the weird exception due to the Infinity Fabric links: I'm curious how four Navi 21 chips working together handle gaming.
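On the macOS side, for what it's worth, here's a rough Swift sketch of how I'd expect the duo to be surfaced, assuming the existing Metal peer-group API (MTLCopyAllDevices plus the peerGroupID/peerIndex/peerCount device properties) is what exposes the Infinity Fabric Link; treat it as an illustration, not something verified against this card:

```swift
import Metal

// Enumerate every GPU macOS exposes. A W6800X Duo should show up as two
// separate MTLDevice instances; GPUs joined by an Infinity Fabric Link are
// supposed to report a shared, non-zero peerGroupID.
let devices = MTLCopyAllDevices()

var peerGroups: [UInt64: [MTLDevice]] = [:]
for device in devices {
    print("\(device.name): peerGroupID=\(device.peerGroupID), " +
          "peer \(device.peerIndex) of \(device.peerCount)")
    if device.peerGroupID != 0 {
        peerGroups[device.peerGroupID, default: []].append(device)
    }
}

// Each group is a set of GPUs that can move data over the fabric link
// instead of bouncing it through host memory across PCIe.
for (groupID, linked) in peerGroups {
    print("Peer group \(groupID): \(linked.map { $0.name })")
}
```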
They show up, and for the right professional apps you can use them combined - but differently than CrossFire. It doesn't boost gaming much, if at all, especially for the cost. As for Threadripper - can AMD make enough chips for them? And would they have needed a more complex design for cooling a chiplet-based package than a monolithic die? I suspect that was the reason - that, and the existing agreement, which might have had penalties for cancelling, etc. Known solution vs. something new...?
 
I suspect it may have to do with history and wanting to get off the bandwagon.

People point to AMD's recent successes as examples of what Apple should have done, and they're not entirely unwarranted... but they also forget that AMD's success is not guaranteed. Remember how AMD had a huge moment in the late 1990s/early 2000s... and then spent several years in the wilderness as Intel reclaimed its lead? Apple probably doesn't want to leap to AMD only to watch Intel return to the lead and prompt another change.

More importantly, Intel's struggles were a cue to Apple that it was time to move on. One of Apple's golden rules is that it doesn't let dependence on a third party hold it back... and Intel was clearly holding it back. The permanent solution is to design in-house chips so the fate of the Mac is no longer chained to Intel, AMD, or anyone else. Apple now gets to advance technology on its own terms, prioritize features as it likes and dictate its own release schedule. Think of how the iPhone's hardware stands out versus the many Android OEMs forced to use the same Qualcomm and MediaTek chips as almost everyone else... why would Apple want to go back to that?
 

It is a given that multi-GPU scaling is going to be an app-by-app improvement, but at least on the Mac side, Apple's failed trash can endeavor did get app developers to embrace the idea of multi-GPU where they can. This time around there are coherent links between the GPUs, and four RX 6800s (albeit the workstation version) are gonna be fast for those applications that can use them.
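To make the app-by-app point concrete, here's a minimal Swift sketch of why the scaling is explicit in Metal; the item counts are made up and the actual encoding of compute or render work is elided:

```swift
import Metal

// Metal does not distribute work across GPUs automatically: the app creates
// one command queue per device and decides how to split the job itself.
let gpus = MTLCopyAllDevices()
let queues = gpus.compactMap { $0.makeCommandQueue() }

// Hypothetical splitter: hand each GPU an equal slice of N work items.
let totalItems = 1_000
let slice = totalItems / max(gpus.count, 1)

for (index, (gpu, queue)) in zip(gpus, queues).enumerated() {
    let lower = index * slice
    let upper = (index == gpus.count - 1) ? totalItems : lower + slice
    print("\(gpu.name) takes items \(lower)..<\(upper)")

    let commandBuffer = queue.makeCommandBuffer()!
    // ... encode this GPU's share of the compute/render work here ...
    commandBuffer.commit()
}

// Gathering the partial results back together is also the app's job, which
// is why the benefit ends up being workload-by-workload rather than "free".
```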

Threadripper (Pro)/Epyc for the Mac Pro wouldn't actually change much in terms of cooling. They do consume a bit more power than the Cascade Lake Xeons in the new cheese grater tower, but not radically more. However, there are power savings in simplifying the motherboard components, as there wouldn't be a need for the massive PCIe bridge used in the current system. Platform improvements could easily lead to lower overall power consumption even if the CPU's peak is higher.

You do have a point that supporting AMD would involve some additional software development, as Apple's platform drivers are generally all Intel. Apple has flirted with this idea before, as drivers have been found in previous iterations of macOS that point toward them having considered an AMD SoC many years back.


AMD's success is not guaranteed, but with Intel's disastrous 10 nm and 7 nm - I mean, now 7 nm and 5 nm - process nodes, Intel has had their fair share of issues. Ice Lake is well over a year late to market, and by some counts two years. Sapphire Rapids felt the knock-on effect of the Ice Lake push-back, and it too is being delayed into 2022.

While Apple wants to migrate to their own CPU designs, they have yet to demonstrate a high-end, professional-grade platform. On the CPU performance side they have the IPC down, but they need higher-clocked designs as well as the ability to scale up to more cores to compete with AMD and Intel on the high end. Much of that scaling has to be done at the platform level. This is clearly within Apple's reach given their talent, but I would not expect them to roll it out aggressively; it will come on their own timeline. This is the really hard stuff to design and will likely be the last thing Apple migrates over to their ARM designs. Given the timetables, there is room for one more high-end x86 platform, two at most. Beyond that, there are still Apple customers who need x86 interoperability for programs that haven't been migrated yet, as well as x86 virtualization.
 
Wonder what its ethereum hashrate is! <ducks>
 