AMD Stacked V-Cache

This is why I said that Apple is not a hardware manufacturer. These are the sorta things we'll see from AMD and Intel that'll keep them on top. Though I wonder if this means that AMD beat Intel to 3D stacking technology?

 
They both put it on their roadmaps around the same time, IIRC. It's just that Intel wanted to do stacked chips first, and AMD wanted to do chiplets first.

Intel has hit multiple roadblocks and had to take a few detours, so it's no surprise AMD is starting on this while Intel is only just getting it working, sorta... maybe.
 
This is why I said that Apple is not a hardware manufacturer. These are the sorta things we'll see from AMD and Intel that'll keep them on top. Though I wonder if this means that AMD beat Intel to 3D stacking technology?



3D stacking tech is pretty much a TSMC thing. There is nothing stopping Apple from utilizing it on their SoCs if they choose to implement it.
 
Lisa Su, in her presentation at Computex, indicated end of this year for the high-end desktop CPUs, with a 15% increase in performance for games (when CPU-limited, that is). lol, hope that pans out.
 
3D stacking tech is pretty much a TSMC thing. There is nothing stopping Apple from utilizing it on their SoCs if they choose to implement it.
Apple silicon uses a different node than what AMD is currently offering. Whether this same solution is possible on the node Apple is utilizing is a TSMC thing.
 
Apple silicon uses a different node than what AMD is currently offering. Whether this same solution is possible on the node Apple is utilizing is a TSMC thing.
It can be utilized on the N7, N5, and N3 nodes in the future as well. If Apple uses multiple chiplets on higher-end SoCs, I wouldn't be surprised if they have 2.5D/3D stacking tech as well.
 
Wouldn't it be something if I could drop one of these new chips in my board? 192 MB of L3 cache sounds pretty good :) I thought we were done with new stuff for AM4.
I wonder what a TR chip would have for L3 cache? Holy smokes, that would be tempting. I was surprised there was no announcement dealing with TR. Anyway, this tech could also be used in APUs for a rather huge boost, allowing more CUs to be incorporated with spectacular amounts of cache. RDNA2+ for APUs?
 
This is why I said that Apple is not a hardware manufacturer. These are the sorta things we'll see from AMD and Intel that'll keep them on top. Though I wonder if this means that AMD beat Intel to 3D stacking technology?


AMD, and Dr. Su, are the only reasons I haven't totally given up on x86-64, and why I am now saying AMD's x86-64 definitely has a future into the 2030s.
Not to mention, it is giving ARM/AArch64 a real run for its money; 2 years ago, I swore x86-64 was all but dead, but I was definitely proven wrong about how much life it has left.

As for Intel, a slow sinking ship comes to mind...
 
3D stacking tech is pretty much a TSMC thing. There is nothing stopping Apple from utilizing it on their SoCs if they choose to implement it.
I'm not saying they won't, but I am saying that it won't be cheap. Apple will have to spend even more money on engineers to get that sort of technology. TSMC can make it for them, but they'll still need to design a chip for them to make. AMD has like 90% of the market to play with, plus consoles, plus servers, etc. It's going to be hard for Apple to justify the R&D cost for what is 10% of the market.
AMD, and Dr. Su, are the only reasons I haven't totally given up on x86-64, and why I am now saying AMD's x86-64 definitely has a future into the 2030s.
Not to mention, it is giving ARM/AArch64 a real run for its money; 2 years ago, I swore x86-64 was all but dead, but I was definitely proven wrong about how much life it has left.

As for Intel, a slow sinking ship comes to mind...
Intel is going nowhere. They are not losing money during a time period where people can't buy enough silicon. Pretty sure Intel can't make enough chips. Of course when the pandemic is over then that's a different situation.
 
I'm not saying they won't, but I am saying that it won't be cheap. Apple will have to spend even more money on engineers to get that sort of technology. TSMC can make it for them, but they'll still need to design a chip for them to make. AMD has like 90% of the market to play with, plus consoles, plus servers, etc. It's going to be hard for Apple to justify the R&D cost for what is 10% of the market.

Intel is going nowhere. They are not losing money during a time period where people can't buy enough silicon. Pretty sure Intel can't make enough chips. Of course when the pandemic is over then that's a different situation.
What's stopping Intel from using TSMC like everyone else, I wonder?
 
I'm not saying they won't, but I am saying that it won't be cheap. Apple will have to spend even more money on engineers to get that sort of technology. TSMC can make it for them, but they'll still need to design a chip for them to make.
Apple can and they will. The cost will be justified since Apple's TAM is much higher than Intel's or AMD's. Heck, Apple makes more money than Intel, AMD, Qualcomm, and Nvidia. Combined. R&D cost is the least of Apple's problems lol.

AMD has like 90% of the market to play with, plus consoles, plus servers, etc. It's going to be hard for Apple to justify the R&D cost for what is 10% of the market.

Intel, which at one point owned more than 90% of those markets you mentioned, only has 1/3 of the revenue of Apple.
 
Capacity at TSMC is fully tapped out down to 3nm, last I checked.

I know Apple sold a bunch of wafers to AMD after they decided that they had over-bought, but I don't remember the details.
 
What's stopping Intel from using TSMC like everyone else, I wonder?
Their hubris.
Apple can and they will. The cost will be justified since Apple's TAM is much higher than Intel's or AMD's. Heck, Apple makes more money than Intel, AMD, Qualcomm, and Nvidia. Combined. R&D cost is the least of Apple's problems lol.
They make more money, but Apple would have to make less to pay for R&D. Companies aren't like you and me, where with that kind of money we would have retired and started funding anti-aging technology. It's all about making MOAR money. You start making less and now your shareholders are upset.
Intel, which at one point owned more than 90% of those markets you mentioned, only has 1/3 of the revenue of Apple.
You know who else makes more money than Intel? Google, Microsoft, etc. You know who's not making as much as Intel? AMD isn't, and they certainly seem to be pushing for newer technologies more than anyone else. It's not how much you make that matters, but how much MOAR.
 
It can be utilized on the N7, N5, and N3 nodes in the future as well. If Apple uses multiple chiplets on higher-end SoCs, I wouldn't be surprised if they have 2.5D/3D stacking tech as well.
I don't think Apple would find any real advantage in using it. Arm is much simpler and the cores much less complicated... they are not starving for silicon space. Arm also really doesn't require massive caches to see a performance uplift. In another thread we were going over Arm cache systems. Apple and most modern Arm chips do have L3 cache... but I'm not sure expanding it over the cores to double or triple the amount would be of any major practical use.

However, having said all that... the M1 can be passively cooled. So I imagine there is a very outside chance that Apple could actually stack a couple of cores on a pro-type large-core chip. Again, though, I'm not sure that is all that necessary... even a 16-core Arm chip isn't going to be much larger than an 8-core x86. Or perhaps accelerators of some type. Accelerators and big and little Apple cores all share the same L3 pool... perhaps there would be some advantage latency-wise in putting accelerators up top along with L3, basically stacked sideways around an edge. Hmmm, who knows.
 
Apple can and they will. The cost will be justified since Apple's TAM is much higher than Intel's or AMD's. Heck, Apple makes more money than Intel, AMD, Qualcomm, and Nvidia. Combined. R&D cost is the least of Apple's problems lol.



Intel, which at one point owned more than 90% of those markets you mentioned, only has 1/3 of the revenue of Apple.
Every time I read one of Duke's posts, I am reminded that he doesn't realize just how profitable, large, and powerful Apple is.

Look, I love Dr. Su and AMD. I bought like 300 something Threadripper cores at work. But Apple could purchase AMD several times over just with its cash balance. This is astonishing for those of us in the corporate world. Apple could buy Intel AND AMD and it would barely make a dent in the balance sheet.

For reference, AMD's EBITDA was about 2.2B in 2021. Apple has 195B cash on hand. One hundred and ninety-five billion dollars in cash on hand. Even if AMD got an absolutely outrageous valuation of 20x EBITDA, that would only put their value around 44B - meaning Apple could purchase them four times over in fucking cash. And businesses at this scale are never purchased in cash.

Let's take things one step further and assign a more realistic 10x EBITDA multiple to AMD, Qualcomm, and Nvidia. A 10x multiple would give AMD a valuation of about 22B, Qualcomm about 90B, and Nvidia about 60B. Apple could literally purchase AMD, Nvidia, and Qualcomm in motherfucking cash. If you wanted to add Intel to this, Apple would merely need to take on some low-interest financing.
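Just to sanity-check the multiples above, here's the arithmetic spelled out (a sketch using the ballpark figures quoted in this post, not audited financials):

```python
# Rough check of the valuation math in the post above.
# All inputs are the ballpark figures quoted in the thread, in $B.
amd_ebitda = 2.2     # AMD EBITDA, ~2021
apple_cash = 195     # Apple cash on hand

# Outrageous 20x multiple on AMD:
amd_at_20x = amd_ebitda * 20
print(amd_at_20x)                # 44.0 -> Apple's cash covers it ~4x over

# More realistic 10x multiples (implied EBITDAs of ~2.2B, ~9B, ~6B):
amd, qcom, nvda = 22, 90, 60
combo = amd + qcom + nvda
print(combo)                     # 172 -> still under the 195B cash pile
print(combo < apple_cash)        # True
```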

The guy is just delusional when it comes to understanding how large businesses actually work, and how much of a lead Apple has over literally every other silicon company in the world. Because x86. Or something.
 
They make more money, but Apple would have to make less to pay for R&D. Companies aren't like you and me, where with that kind of money we would have retired and started funding anti-aging technology. It's all about making MOAR money. You start making less and now your shareholders are upset.

You know who else makes more money than Intel? Google, Microsoft, etc. You know who's not making as much as Intel? AMD isn't, and they certainly seem to be pushing for newer technologies more than anyone else. It's not how much you make that matters, but how much MOAR.
Pretty much every public company is interested in making more money. I have no idea why you think Apple somehow wouldn't be able to afford to keep up with AMD/Intel, when Apple only needs a tiny fraction of their overall revenue for R&D to keep up lol.
I don't think Apple would find any real advantage in using it. Arm is much simpler and the cores much less complicated... they are not starving for silicon space. Arm also really doesn't require massive caches to see a performance uplift. In another thread we were going over Arm cache systems. Apple and most modern Arm chips do have L3 cache... but I'm not sure expanding it over the cores to double or triple the amount would be of any major practical use.

However, having said all that... the M1 can be passively cooled. So I imagine there is a very outside chance that Apple could actually stack a couple of cores on a pro-type large-core chip. Again, though, I'm not sure that is all that necessary... even a 16-core Arm chip isn't going to be much larger than an 8-core x86. Or perhaps accelerators of some type. Accelerators and big and little Apple cores all share the same L3 pool... perhaps there would be some advantage latency-wise in putting accelerators up top along with L3, basically stacked sideways around an edge. Hmmm, who knows.

I am assuming that if they go with multiple chiplets and a unified memory architecture for higher-end Macs, they would need a larger amount of cache for the CPU/GPU to compensate for inter-chiplet latency and overall bandwidth.
 
3D stacking tech is pretty much a TSMC thing. There is nothing stopping Apple from utilizing it on their SoCs if they choose to implement it.
Well, no. TSMC can do whatever a customer's design requires. The customer, however, has to do the design work. Apple will need to invest in that work. AMD has, apparently, already done so, and that is accounted for in their books. Apple has not, as far as we know, so it will mean further R&D costs. The difference is that the x86-64 ISA AMD has on tap is a more robust architecture by nature than the ARM architecture Apple has on the RISC front. So, to catch up on certain aspects, Apple needs to continue to evolve the ARM cores with more and more instructions, while the x86-64 architecture AMD has already supports those things, so all of the performance penalties are already baked in. This might be why AMD eventually dropped the K12: they figured out that getting ARM to x86-64 general-compute usage would mean many more instruction set additions, which would complicate the core and slow its performance relative to continuing with Zen.
 
Every time I read one of Duke's posts, I am reminded that he doesn't realize just how profitable, large, and powerful Apple is.

Look, I love Dr. Su and AMD. I bought like 300 something Threadripper cores at work. But Apple could purchase AMD several times over just with its cash balance. This is astonishing for those of us in the corporate world. Apple could buy Intel AND AMD and it would barely make a dent in the balance sheet.

For reference, AMD's EBITDA was about 2.2B in 2021. Apple has 195B cash on hand. One hundred and ninety-five billion dollars in cash on hand. Even if AMD got an absolutely outrageous valuation of 20x EBITDA, that would only put their value around 44B - meaning Apple could purchase them four times over in fucking cash. And businesses at this scale are never purchased in cash.

The guy is just delusional when it comes to understanding how large businesses actually work, and how much of a lead Apple has over literally every other silicon company in the world. Because x86. Or something.

Yet, Apple doesn't. Why? Also, how much of that cash balance would be subject to regulatory scrutiny once they brought it in from offshore accounts?
 
Yet, Apple doesn't. Why? Also, how much of that cash balance would be subject to regulatory scrutiny once they brought it in from offshore accounts?

Why would they purchase companies that have outdated technology compared to what they already do better via vertical integration? To your point, if they did that they wouldn't just get IP that they wouldn't use - they would also have to deal with regulatory issues.

In Apple's first-ever try at a low-power, desktop-worthy ARM processor, they already have better single-core performance than everything except Intel's and AMD's absolute x86 flagships - with 10x less TDP. WTF do they need to buy Intel for?

Apple has already developed their own 5G modem - why do they need Qualcomm?

There are only a very few instances where Intel/AMD/Qualcomm do anything better than Apple. Right now, AMD is probably the clearest example since Apple has nothing to compare with top of the line Threadripper performance.

For now.
 
Why would they purchase companies that have outdated technology compared to what they already do better via vertical integration? To your point, if they did that they wouldn't just get IP that they wouldn't use - they would also have to deal with regulatory issues.

In Apple's first-ever try at a low-power, desktop-worthy ARM processor, they already have better single-core performance than everything except Intel's and AMD's absolute x86 flagships - with 10x less TDP. WTF do they need to buy Intel for?

Apple has already developed their own 5G modem - why do they need Qualcomm?

There are only a very few instances where Intel/AMD/Qualcomm do anything better than Apple. Right now, AMD is probably the clearest example since Apple has nothing to compare with top of the line Threadripper performance.

For now.
Well, it's not actually Apple's first attempt at a low-power, desktop-worthy processor.

Why would they need a manufacturer with an x86 license? Because they don't have anything that actually competes with them.

Apple's attempts on the topics you're talking about are all "and then" moves, made after someone else has already done it. Getting someone who does it before them would be worth more than just the dollar amount.
 
Well, no. TSMC can do whatever a customer's design requires. The customer, however, has to do the design work. Apple will need to invest in that work. AMD has, apparently, already done so, and that is accounted for in their books. Apple has not, as far as we know, so it will mean further R&D costs.

Well, of course Apple has to design it first, but it is currently unknown whether Apple has already done the design or has plans to do it in the future. Like I said, there is nothing stopping Apple if they decide to implement it on their future SoCs.


The difference is that the x86-64 ISA AMD has on tap is a more robust architecture by nature than the ARM architecture Apple has on the RISC front. So, to catch up on certain aspects, Apple needs to continue to evolve the ARM cores with more and more instructions, while the x86-64 architecture AMD has already supports those things, so all of the performance penalties are already baked in.

With ARMv9 and SVE2, I would say x86-64 no longer has an ISA advantage on vector extensions. Other than the obvious core count, can you elaborate on how Zen 3 is a more "robust architecture by nature" than Apple's M1 in terms of ISA?
 
Well, of course Apple has to design it first, but it is currently unknown whether Apple has already done the design or has plans to do it in the future. Like I said, there is nothing stopping Apple if they decide to implement it on their future SoCs.




With ARMv9 and SVE2, I would say x86-64 no longer has an ISA advantage on vector extensions. Other than the obvious core count, can you elaborate on how Zen 3 is a more "robust architecture by nature" than Apple's M1 in terms of ISA?

Sure. One question answers it. Is ARM now a CISC architecture?
 
Sure. One question answers it. Is ARM now a CISC architecture?

No, it never will be. ARM just happens to have the most balanced instruction set of all its RISC competition. No baggage like register windows, or ISAs that need twice as many instructions to do what x86 does!

Vectors are special cases for general-purpose processors, so making them wider and smarter is just a matter of course; SVE2 doesn't suddenly make ARM CISC.

ARM eventually added variable-length instructions, all without the massive microcode decoder overhead you get with x86 (each processor only supports its instruction subset).
 
I wonder what a TR chip would have for L3 cache? Holy smokes, that would be tempting. I was surprised there was no announcement dealing with TR. Anyway, this tech could also be used in APUs for a rather huge boost, allowing more CUs to be incorporated with spectacular amounts of cache. RDNA2+ for APUs?
A 64-core, 8-chiplet part would have 768 MB of L3 cache. Fuck that. That's the same as 533 floppies, in 2 TB/s cache.
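The math behind that number, spelled out (assuming each Zen 3 chiplet carries its standard 32 MB of base L3 plus the 64 MB stacked V-Cache die shown at Computex):

```python
# Back-of-envelope check of the hypothetical 64-core V-Cache Threadripper.
BASE_L3_MB = 32     # base L3 per Zen 3 chiplet (CCD)
VCACHE_MB = 64      # stacked V-Cache die per chiplet
CHIPLETS = 8        # 64 cores = 8 CCDs of 8 cores each

total_l3_mb = CHIPLETS * (BASE_L3_MB + VCACHE_MB)
print(total_l3_mb)                  # 768

# And the floppy-disk equivalence:
floppies = total_l3_mb / 1.44       # 1.44 MB per 3.5" floppy
print(round(floppies))              # 533
```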
 
Well, no. TSMC can do whatever a customer's design requires. The customer, however, has to do the design work. Apple will need to invest in that work. AMD has, apparently, already done so, and that is accounted for in their books. Apple has not, as far as we know, so it will mean further R&D costs.

Not to mention, patents.
 
Why would they purchase companies that have outdated technology compared to what they already do better via vertical integration? To your point, if they did that they wouldn't just get IP that they wouldn't use - they would also have to deal with regulatory issues.

In Apple's first-ever try at a low-power, desktop-worthy ARM processor, they already have better single-core performance than everything except Intel's and AMD's absolute x86 flagships - with 10x less TDP. WTF do they need to buy Intel for?

Apple has already developed their own 5G modem - why do they need Qualcomm?

There are only a very few instances where Intel/AMD/Qualcomm do anything better than Apple. Right now, AMD is probably the clearest example since Apple has nothing to compare with top of the line Threadripper performance.

For now.
OK, we all know they couldn't buy Intel because the US gov would never allow it. However, pretending that wasn't a thing... they could buy Intel and AMD, transfer the 50 employees at AMD worth keeping... and the 2 at Intel. Then close the doors on the x86 PC business. Then they could hire Justin Long to do a Mac vs PC commercial where the PC guy's desk is just empty. Buy a Mac cause it's faster than nothing. lol
Joking.
I agree, though: even if it were possible in any way, Apple would have zero reason to buy an x86 chip manufacturer. What for? ARM is superior in every way... and it's only a matter of a cycle or two before Apple crushes all doubt of exactly that. If Apple just wants the brain power, they have already proven they are more than capable of headhunting the best from both companies.
 
Remember, x86 can't be transferred, ya? Intel, sure, but that's their control. Intel could go the IBM route and leave the consumer space, just serving legacy servers for decades and making bank.
 
Wonder if they will be able to do the same thing for their GPUs' Infinity Cache.

I also wonder what this does to operating temperatures.
 
Well, it's not actually Apple's first attempt at a low-power, desktop-worthy processor.
Are you talking about the AIM alliance's PowerPC 601 CPU?
If so, that wasn't any more "low power" in terms of TDP than other CPUs in the early 1990s.

So if not the M1, what CPU are you talking about?
 
Apple and most modern Arm chips do have L3 cache... but I'm not sure expanding it over the cores to double or triple the amount would be of any major practical use.
They don't have an L3 cache because most ARM devices don't have RAM slots. The RAM usually sits right next to the SoC, and putting RAM closer to the SoC decreases the need for an L3, though not entirely. ARM is not magically efficient; it still works on the same basic principles as x86. AMD's V-Cache suggests that AMD isn't about to put RAM next to their CPUs like Apple did. 192MB is a hella lot of cache, to the point where it's practically RAM. That is going to give AMD a huge performance boost while also keeping removable RAM. Especially for APUs, if AMD finally decides to put RDNA2 onto their APUs.

This is similar to Intel's eDRAM cache from Broadwell, where it was fantastic at boosting performance, especially GPU performance, but then Intel took it away. It was 128MB worth of the stuff, which was really good for something released in 2015. Cache is king, and you won't get far in CPU and APU performance without a good lot of it.
 