Zen 5 Ryzen 8000 Strix Point APUs to sport hybrid, big.LITTLE-esque CPU architecture on TSMC 3 nm process

Monkey See, Monkey Do.
Seriously, Intel and AMD are now copying ARM/Apple?
big.LITTLE works great for ARM - for x86-64, even AMD's variant, I doubt it - it might help with laptop battery life, but certainly not with performance.
 
big.LITTLE makes sense, at least conceptually, for ARM and Intel because they have both big core designs and small core designs. But AMD only has one (large) core design per generation (APU cores are mildly different from CPU cores, but they're pretty darn close).

Who knows what's in Zen 4 or Zen 5, but you wouldn't stick 2 Zen 3 cores and 2 Zen 2 cores together and get value out of running a low-intensity load on only the Zen 2 cores. Maybe there's an Atom competitor in design, but I don't think that's been shown yet.
 
Seriously, Intel and AMD are now copying ARM/Apple?
You mean ARM/Nvidia right? :)
big.LITTLE works great for ARM - for x86-64, even AMD's variant, I doubt it - it might help with laptop battery life, but certainly not with performance.
I'm not a fan of this design because it feels like a waste using some cores for low power and some cores for high performance, when you could just design better cores.
 
You mean ARM/Nvidia right? :)
Oh dang, you're right, and they were there with that design nearly a decade ago - good memory, I forgot about it!
I'm not a fan of this design because it feels like a waste using some cores for low power and some cores for high performance, when you could just design better cores.
Exactly, and that's precisely what x86-64 should be sticking to at this point.
 
This is for mobile devices. There is no place for this on a desktop.
Tell that to Intel...
Yet now Intel looks to bring a hybrid architecture, in Intel Alder Lake, to our desktop PCs by 2021, and that raises all sorts of questions as to the potential use case for such a power-savvy design.
 
Is it just me, or do projections that far out seem nonsensical given the worldwide situation?
 
Strix, huh? I wonder what Asus has to say about that name...

Though I guess it's just a codename so, whatever.
 
I'm not a fan of this design because it feels like a waste using some cores for low power and some cores for high performance, when you could just design better cores.

What if they're better cores, just tasked differently? We know pretty well that RDNA 4 looks like it will incorporate MLA chiplets; maybe there's room for them in other applications.

8c/16t with MLA and RDNAX all in one package could be one way x86 makes gains against ARM in the future. It might also be a way to cram console-equivalent performance into a laptop or mainstream desktop.
 
Seriously, Intel and AMD are now copying ARM/Apple?
big.LITTLE works great for ARM - for x86-64, even AMD's variant, I doubt it - it might help with laptop battery life, but certainly not with performance.
Performance isn't really the biggest problem in CPU development anymore.

I'm not saying they shouldn't, or will, stop improving raw performance. It's just that, more than ever, the focus is on efficiency and better integration of many core designs. So yes, if you can squeeze 4 or 8 mini cores into a power envelope instead of half, or even 3/4, the number of full-performance cores, it's possible you could improve overall performance. Assuming you can feed them data properly, that is; schedulers and branch prediction working well will be the key.

I'm not convinced Intel is going to introduce caching and prediction that is up to the task... or confident it will be secure. lol. As for AMD, I don't know; they are likely to brute-force it with a ton of cache... which I guess works most of the time, but an x86 big.LITTLE setup is probably going to require a bit more finesse.
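A quick back-of-envelope sketch of that power-envelope argument (every number below is invented for illustration; real per-core power and throughput vary wildly between designs):

```python
# Toy model: fill a fixed package power budget with big cores vs little cores.
# All figures are assumptions for illustration, not real measurements.
BIG_POWER_W = 10.0    # hypothetical per-core power of a full-performance core
BIG_PERF = 1.0        # normalized per-core throughput
LITTLE_POWER_W = 2.5  # little core draws 1/4 the power...
LITTLE_PERF = 0.4     # ...but delivers only 40% of the throughput

BUDGET_W = 40.0  # fixed package power envelope

big_cores = int(BUDGET_W // BIG_POWER_W)        # 4 big cores fit
little_cores = int(BUDGET_W // LITTLE_POWER_W)  # 16 little cores fit

# Assuming an embarrassingly parallel workload and a scheduler that can
# actually keep every core fed (the hard part, as noted above):
big_throughput = big_cores * BIG_PERF          # 4.0
little_throughput = little_cores * LITTLE_PERF # 6.4
print(big_throughput, little_throughput)
```

The catch is exactly the "feed them data properly" clause: the little cluster only wins if the work parallelizes and the scheduler places threads well; a single latency-sensitive thread still wants the big core.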

All the new stuff in two years is going to be exciting even if it sucks, after two years of nothing exciting to talk about... or nothing current you want to get excited about, since you can't actually get any of it.
 
You mean ARM/Nvidia right? :)
Apple helped to co-develop ARM, and their ARM work literally dates back to the 1980s with Acorn processors. nVidia is late to the party, at best.

Wikipedia: In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the ARM core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd.,[43][44][45] which became ARM Ltd. when its parent company, Arm Holdings plc, floated on the London Stock Exchange and NASDAQ in 1998.[46] The new Apple-ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for their Apple Newton PDA.
 
There are a bunch of regulatory bodies with low-power and idle-state regulations in the works, mostly coming out of Europe. ARM is the only CPU family that currently meets them; Intel, with its big.LITTLE and 12 V PSUs, does with some tweaks to the RAM, and DDR5 will bring it fully in line. But AMD has nothing that complies at this time.
 
It's a joke about Nvidia being about to own ARM.
Eh, we'll still see about that. The amount of multi-country regulation that this deal will have to pass through, plus lawsuits etc., makes this a multi-year process at best. It took well over two years from having the money to clearing all the regulations for T-Mobile to acquire Sprint, and that's nowhere near the scale of this deal.
Either way, though, it's a misconception that they had anything to do with this development. I don't think nVidia even has a big.LITTLE product, period, even though they've developed stuff on ARM since 'Denver' in 2014 (which means the actual development cycle likely started 2 years or so before that).
 
I don't think nVidia even has a big.LITTLE product, period, even though they've developed stuff on ARM since 'Denver' in 2014 (which means the actual development cycle likely started 2 years or so before that).
NVIDIA's Tegra 3 from 2011 was a quad-core Cortex-A9 CPU with a fifth low-power companion core, which effectively acts the same as the 'clustered switching' version of big.LITTLE.
For the time, it was actually extremely innovative, and I don't remember anyone else doing something like this back then.

In fact, big.LITTLE wasn't even announced until October 2011, which was months after Tegra 3 was announced in February 2011 and released in the second-half of 2011.
 
In five years there is a really good chance most laptops and desktops are just glorified phone docks, especially for Apple. An iPad Pro is basically the same chip and OS as their desktops and laptops. Once that's in the phone....
 
In five years there is a really good chance most laptops and desktops are just glorified phone docks, especially for Apple. An iPad Pro is basically the same chip and OS as their desktops and laptops. Once that's in the phone....
Why would that matter? At this point the M1 smokes all competing hardware. So you're mad that computing devices would become smaller, more portable, more powerful, and more convenient? And you can choose how big you want the device to be as you carry it around? I see zero downsides.

This is of course also assuming a world in which Apple is the dominant force in the marketplace. And as much as I prefer them to PCs, Macs taking over all of the PC space ain't happening in 5 years. That ain't happening in 20 years.
We could have a side discussion about ARM and how that affects AMD/Intel; but Microsoft, Qualcomm, nVidia, Google, and quite a few others will definitely be fighting for market share. It's not as if Apple will be able to get everyone to switch uncontested. Hell, there are quite a few on these boards alone who would make plenty of hyperbolic statements about never owning an Apple product, and obviously they're not alone.
 
I don't see the issue with this approach. For the vast majority of people they don't just work OR play, they work AND play. Having little cores for work and big cores for play is sound logic in my mind.

With Zen, why can't you have one 8-core chiplet of big cores and one 8-core APU chiplet of little cores, and get the best of both worlds?
 
The way I see it, it all depends on what the cores exactly are and how they're implemented. If the little cores are mostly ASIC-like cores to speed up specific workloads and make them run more efficiently, I'm fine with that. If they're cut-down, lower-power cores, I don't particularly see the use for them outside of the mobile space, and even then I don't know that they would be all that useful.
 
I'm not a fan of this design because it feels like a waste using some cores for low power and some cores for high performance, when you could just design better cores.

The reason engineers are moving to heterogeneous designs is that a single kind of core is not optimal for all workloads and situations. No single core design can be optimal because the microarchitecture requirements for each situation are antagonistic: high frequency vs low, deep pipeline vs short, large caches vs small, high IPC vs low, sophisticated branch prediction vs simple, and so on. The design point that covers most workloads is a large high-performance core paired with a tiny efficient core. Compare the sizes of the A53 vs the A72, or Zephyr vs Hurricane.
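To put rough numbers on that tradeoff: energy is power times time, so a little core that is slower but much cheaper per unit of work can finish a background task on far less total energy, even though the big core finishes sooner. A minimal sketch, with all figures assumed purely for illustration:

```python
def task_energy_j(power_w: float, work: float, throughput: float) -> float:
    """Energy = power x time, where time = work / throughput."""
    return power_w * (work / throughput)

WORK = 100.0  # arbitrary units of work for a background task

# Hypothetical figures: the big core is ~2.7x faster but ~6.7x hungrier.
big_j = task_energy_j(power_w=10.0, work=WORK, throughput=4.0)   # 250 J
little_j = task_energy_j(power_w=1.5, work=WORK, throughput=1.5) # 100 J
print(big_j, little_j)
```

That gap is why schedulers steer background work to the little cores, while latency-critical foreground work still goes to the big core, where the shorter runtime is worth the extra energy.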

[Image: Apple-A10-Fusion.jpg]
 
I'm excited to see AMD's solutions to an adapting silicon landscape.

I'm more excited to see what AMD ends up doing with their Xilinx acquisition in regards to silicon solutions. A decent on-chip FPGA could emulate an ARM core in a decent manner, and if software gets to a certain point an FPGA could efficiently emulate the algorithms needed for that software.
 
Why would that matter? At this point the M1 smokes all competing hardware. So you're mad that computing devices would become smaller, more portable, more powerful, and more convenient? And you can choose how big you want the device to be as you carry it around? I see zero downsides.

This is of course also assuming a world in which Apple is the dominant force in the marketplace. And as much as I prefer them to PC's, Macs taking over all of the PC space ain't happening in 5 years. That ain't happening in 20 years.
We could have a side discussion about ARM and how that affects AMD/Intel; but Microsoft, Qualcomm, nVidia, Google, and quite a few others will definitely be fighting for market-share. It's not as if Apple will be able to get everyone to switch uncontested. Hell, there are quite a few on these boards alone that would make plenty of hyperbolic statements regarding never owning an Apple product and obviously they're not alone.
I'm not mad. It's going to happen precisely because it doesn't matter. Once every phone is running something better than an M1, it's just easier to move your phone from dock to dock, with the OS re-skinning itself.
 
I see a bit of convergence happening here. The latest Neoverse and its MPAM (memory partitioning and monitoring) show that with large-scale processors, housekeeping is more than necessary. The Ampere did a great job of prioritizing memory access for non-streaming loads, which seemed to allow it to keep pace while having a much smaller cache system.

Mobile devices and to some extent the M1 have accelerators to handle lesser and repetitive tasks.

To me it seems this move to having little cores in some ratio to big ones is less about power saving and more about provisioning and specialization.
 
NVIDIA's Tegra 3 from 2011 was a quad-core Cortex-A9 CPU with a fifth low-power companion core, which effectively acts the same as the 'clustered switching' version of big.LITTLE.
For the time, it was actually extremely innovative, and I don't remember anyone else doing something like this back then.

In fact, big.LITTLE wasn't even announced until October 2011, which was months after Tegra 3 was announced in February 2011 and released in the second-half of 2011.
The Nintendo Switch, I believe, has 8 cores: 4 low-power and 4 high-performance, but you can't use all 8 cores at the same time. At least that's how I think the Switch SoC works.
 
The Nintendo Switch, I believe, has 8 cores: 4 low-power and 4 high-performance, but you can't use all 8 cores at the same time. At least that's how I think the Switch SoC works.
You are right, it technically does have eight cores, and while that is exactly how the vanilla Tegra X1 operates, the Tegra X1 in the Switch only utilizes the four A57 cores and leaves the four A53 cores disabled.
The A53 cores must just not have been needed for the game logic, and they may have been disabled for additional power savings or to extend the console's battery life.

https://en.wikipedia.org/wiki/Nintendo_Switch#cite_note-4
While the Tegra X1 SoC features 4 Cortex-A57 plus 4 Cortex-A53 CPU cores, the Nintendo Switch only uses the former, of which 1 is reserved to the operating system.
 