AMD Wakes Up to Intel's Multi-Threaded Advantage, Ryzen 7 7800X a 10-core/20-thread Processor, Also Readies Ryzen 3 7300X

1_rick

https://www.techpowerup.com/300326/...0-thread-processor-also-readies-ryzen-3-7300x

Intel's Hybrid architecture has a key payoff, and that's multi-threaded application performance. The E-cores may be tiny, but they offer impressive performance, and when deployed in large enough numbers they have an enormous impact on multi-threaded performance. The 8P+16E Core i9-13900K beating the Ryzen 9 7950X, and more importantly the 6P+8E Core i5-13600K beating the Ryzen 7 7700X, is forcing AMD to reconsider CPU core counts across its product stack. The first sign of this is the discovery of a Geekbench submission in which the popular benchmark detects the unreleased Ryzen 7 7800X as a 10-core/20-thread processor.

There's another equally interesting processor that surfaced on Geekbench—the Ryzen 3 7300X. [...] The Ryzen 3 7300X is a 4-core/8-thread processor with "Zen 4" CPU cores (confirmed as "Zen 4"-based by the 1 MB/core L2 cache size). The processor has a single "Zen 4" CCD with four cores disabled, but the L3 cache left untouched at 32 MB. The chip has an impressive 4.50 GHz base frequency and 5.00 GHz boost.

Like other Zen 4 chips, they're DDR5 only.
 
A 7300X with the iGPU could be quite interesting; the only issue is how nicely priced the 5600X is right now. If it's priced to compete advantageously with that 6-core $160 option, it could be incredible value, provided DDR5 keeps getting interesting price-wise and the AM5 board options keep moving down the stack.

Unlike a 3D cache option, it's a bit harder to get that enthusiastic about the 10-core, outside the obvious opportunity to push the 7700X down the price list and keep the pressure on Intel in that price range.

You'd get the 79xx CPUs' 32 MB of L3 cache at a lower price, and maybe the enabled cores could clock really well, being the 10 best of 16... but it's still aimed at the "I need more than 8 full cores, but 12 is a bit expensive, let alone 16" crowd.
 
Thumbs up for competition.

DDR5 aside, I believe AMD has the better product. Intel will probably still own the "but I can buy DDR4" crowd... but if AMD replaces their 8-core with a 10-core, I think the value proposition swings their way at that price point. (With 2 chiplets and the added cache it might game sort of like a 3D cache lite.)
If AMD can get a 10-core 7800X out... and manages a 3D cache 12-16 core part... I think they would put Intel on the back foot again. Kudos to Intel though; I really didn't think they could offer something all that attractive vs Zen 4. Hopefully they keep pushing each other.
 
A 10-core 7800 so soon seems kind of desperate. AMD's best bet would be a 7900 non-X with a lower top clock while maintaining a high single-core frequency to compete against the 13700K.

Running a quad-core on Zen 4 seems like a waste of silicon at this point. A 5600X3D would have helped AMD hold on to midrange gamers a bit longer.
 
Thumbs up for competition.

DDR5 aside, I believe AMD has the better product. Intel will probably still own the "but I can buy DDR4" crowd... but if AMD replaces their 8-core with a 10-core, I think the value proposition swings their way at that price point. (With 2 chiplets and the added cache it might game sort of like a 3D cache lite.)
If AMD can get a 10-core 7800X out... and manages a 3D cache 12-16 core part... I think they would put Intel on the back foot again. Kudos to Intel though; I really didn't think they could offer something all that attractive vs Zen 4. Hopefully they keep pushing each other.
Yes, I love it. The Z690 crowd is eating up the 13900K (with very good reason - it's awesome!) just like the X570 crowd was eating up the 5800X3D earlier. And so on.

It's the same reason I invested in the AM5 platform versus waiting for Intel's latest. I so enjoyed the 3-4 generations of CPU supported by AM4. With no HEDT out there it's the closest thing, IMO.
 
A 10-core 7800 so soon seems kind of desperate. AMD's best bet would be a 7900 non-X with a lower top clock while maintaining a high single-core frequency to compete against the 13700K.

Running a quad-core on Zen 4 seems like a waste of silicon at this point. A 5600X3D would have helped AMD hold on to midrange gamers a bit longer.
I don't think it's desperate to change things up when faced with reality. Pretending the competition doesn't exist is an Intel thing to do.
A 4-core Zen 4, I'd have to think, would be more for OEM markets. There are still a lot of OEM machines getting pushed out the door with the lowest-end Intel chips and their iGPUs. Seems like AMD has a better way to go after that business now, instead of trying to sell a slightly higher-end product with a "better" iGPU like the last few gens.
 
The E-core setup is stupid. The E-Cores serve only two purposes, and that's to make the Intel chips more energy efficient and to win benchmarks like Geekbench. Why Intel put twice as many E-Cores as P-Cores on the Core i9-13900K is beyond stupid. Considering how terrible Intel's chips are at using power, it doesn't seem the E-Cores are serving their purpose too well. AMD releasing a Ryzen 7 7800X as a 10-core is just inevitable considering the products they've released in the past. Honestly AMD is their own worst enemy, with motherboard and DDR5 prices deterring anyone from buying them. AMD doesn't need Intel's help to make them lose sales. They're more than capable of doing that on their own.
 
Performance profile of the 7800X is going to be interesting to see... On one hand, a much higher L3-cache-to-core ratio than the 7700X and more cores; on the other hand, the cross-CCD perf hit.

Does anyone know if V-Cache can mitigate cross-CCD issues in gaming? If so, 7800X3D with 10C and 128MB Infinity Cache could be the new gaming champ.
 
The E-Cores serve only two purposes, and that's to make the Intel chips more energy efficient and to win benchmarks like Geekbench.
Or actual workloads like Cinebench, encoding, compilation: most real MT tasks.

Despite the name, I'm not sure how much more efficient they are in actual prolonged tasks:
https://www.techpowerup.com/review/intel-core-i9-12900k-e-cores-only-performance/7.html

A Cinebench run draws more power using only the E-cores on a 12900K than with the 8 P-cores (HT off) or the whole normal CPU.

But you can fit 4 of them on a chip in the area of one P-core, and it seems that 4 E-cores give you significantly more performance than one P-core in heavy multithreaded workloads (about 50% more in Cinebench R23 on Alder Lake):
https://www.pugetsystems.com/labs/articles/Intel-12th-Gen---How-do-P-Cores-and-E-Cores-Compare-2289/

I imagine that's why the 13600K is so good in heavily multithreaded tasks, and it makes sense: maybe you can assume that if your 6 HT cores are all filled with processes to run, you're in such a scenario.
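
To make that ratio concrete, here's a rough back-of-the-envelope sketch using the two figures above; treat the ~4-E-cores-per-P-core-area and ~1.5x numbers as assumptions pulled from those links, not measurements:

```python
# Back-of-the-envelope multi-threaded throughput per unit of die area.
# Assumptions (rough figures from the links above, not exact measurements):
#   - 4 E-cores fit in roughly the die area of 1 P-core
#   - 4 E-cores together score ~1.5x one P-core (with HT) in Cinebench-style MT work

P_CORE_AREA = 1.0          # normalize one P-core's area to 1
E_CORE_AREA = 0.25         # ~4 E-cores per P-core's worth of area
P_CORE_MT = 1.0            # normalize one P-core's MT contribution to 1
E_CORE_MT = 1.5 / 4        # 4 E-cores ~= 1.5 P-cores, so ~0.375 each

def mt_score_and_area(p_cores: int, e_cores: int) -> tuple[float, float]:
    """Relative MT score and relative die area for a given core mix."""
    score = p_cores * P_CORE_MT + e_cores * E_CORE_MT
    area = p_cores * P_CORE_AREA + e_cores * E_CORE_AREA
    return score, area

for label, p, e in [("6P+8E  (13600K-like)", 6, 8),
                    ("8P+0E  (same area as 6P+8E)", 8, 0),
                    ("8P+16E (13900K-like)", 8, 16)]:
    score, area = mt_score_and_area(p, e)
    print(f"{label:30s} score={score:5.2f}  area={area:5.2f}  score/area={score/area:.2f}")
```

Under those assumptions the hybrid mixes come out ahead per unit of area in embarrassingly parallel work, which lines up with why the 13600K punches above its weight in MT benchmarks.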
 
The E-cores are just super cut-down x86 cores: get rid of all the silicon for AVX-512, encode, decode, encryption, and multi-threading, plus all the stuff needed to secure it, and you're left with about as pure an x86 processor as you can get. They're actually pretty nifty. I have some old Ryzen embedded servers that run my basic network stacks at the individual sites; I'd love to replace them with a full E-core setup with like 16 threads and a solid 64 GB of RAM. It would make a dope little server.
 
The E-core setup is stupid. The E-Cores serve only two purposes, and that's to make the Intel chips more energy efficient and to win benchmarks like Geekbench. Why Intel put twice as many E-Cores as P-Cores on the Core i9-13900K is beyond stupid. Considering how terrible Intel's chips are at using power, it doesn't seem the E-Cores are serving their purpose too well. AMD releasing a Ryzen 7 7800X as a 10-core is just inevitable considering the products they've released in the past. Honestly AMD is their own worst enemy, with motherboard and DDR5 prices deterring anyone from buying them. AMD doesn't need Intel's help to make them lose sales. They're more than capable of doing that on their own.
This. However, DDR5 is the main culprit, and going the Intel route would have doubled the cost of testing, increased the die size, and most likely produced a lower-performing part. Not so sure it would have been a win there, especially considering that few people are being honest about Intel's strategy here. Zen 4 is smaller across the board. If the big/little configuration really worked, their parts would win across the board. They don't. The 13600K is probably the sweet spot for the configuration, but it doesn't scale. Ever wonder why they've been so quiet on Sapphire Rapids and it's taken forever to come out? Their top SKU can barely keep itself from throttling at load for extended periods, because with all those cores it's damn near a server part already. This is something reviewers really should have gotten into (I think GN did), because it means the Intel part at load over an extended period could have vastly different performance characteristics. Hell, most coolers people have today won't be enough to keep it from throttling. That's a big damn deal.
 
The E-core setup is stupid. The E-Cores serve only two purposes, and that's to make the Intel chips more energy efficient and to win benchmarks like Geekbench. Why Intel put twice as many E-Cores as P-Cores on the Core i9-13900K is beyond stupid. Considering how terrible Intel's chips are at using power, it doesn't seem the E-Cores are serving their purpose too well. AMD releasing a Ryzen 7 7800X as a 10-core is just inevitable considering the products they've released in the past. Honestly AMD is their own worst enemy, with motherboard and DDR5 prices deterring anyone from buying them. AMD doesn't need Intel's help to make them lose sales. They're more than capable of doing that on their own.
The 13700K is 7% faster than the 7700X in games, but nearly 30% faster in apps... Because of E-Cores.
https://www.3dcenter.org/artikel/launch-analyse-intel-raptor-lake/

E-Cores handle multi-threaded workloads better; that's why there are more of them. Lightly threaded apps run better on P-cores; that's why there are fewer of them. Intel has taken the available die space and divided it up based on workload priority. They could give you a full P-core die (10+0), but it would have the same performance in lightly threaded apps with worse performance in highly threaded apps... Hello...

It's genius and that's why AMD is getting smoked across the board.
 
Performance profile of the 7800X is going to be interesting to see
So it turns out this was a hoax.

https://www.techspot.com/news/96489-geekbench-tricked-ryzen-7-7800x-doesnt-exist.html

"Chips and Cheese has come clean. They fooled Geekbench with a phony name by spoofing the CPUID on what was actually a Ryzen 9 7950X system. They also disabled six cores and reduced the precision boost overdrive clock by 350 MHz to make it look (and perform!) like a middle ground between the very real 7700X and 7900X."

Kinda clever. Odd that the processor name isn't baked into the silicon.
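
For anyone wondering why the name isn't harder to fake: the string a benchmark sees is just whatever the CPU reports through CPUID (and the OS caches), not anything fused into the die. Here's a minimal sketch of where it surfaces on Linux; the note about AMD's writable name-string registers is my reading of how the spoof worked, beyond the article's "spoofing the CPUID".

```python
# The "processor name" is just the CPUID brand string (leaves 0x80000002-0x80000004),
# which the Linux kernel reads at boot and exposes as "model name" in /proc/cpuinfo.
# On AMD parts the brand string is, as I understand it, loaded from writable
# name-string MSRs, which is presumably how a 7950X was made to report "7800X".

def cpu_model_names(path: str = "/proc/cpuinfo") -> set[str]:
    """Collect the distinct 'model name' strings the kernel reports."""
    names = set()
    with open(path) as f:
        for line in f:
            if line.startswith("model name"):
                names.add(line.split(":", 1)[1].strip())
    return names

if __name__ == "__main__":
    for name in cpu_model_names():
        print(name)
```

Nothing in that chain validates the string against the actual silicon, so a renamed part sails straight through to the benchmark database.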
 
The 13700K is 7% faster than the 7700X in games, but nearly 30% faster in apps... Because of E-Cores.
https://www.3dcenter.org/artikel/launch-analyse-intel-raptor-lake/

E-Cores handle multi-threaded workloads better; that's why there are more of them. Lightly threaded apps run better on P-cores; that's why there are fewer of them. Intel has taken the available die space and divided it up based on workload priority. They could give you a full P-core die (10+0), but it would have the same performance in lightly threaded apps with worse performance in highly threaded apps... Hello...

It's genius and that's why AMD is getting smoked across the board.

I will take all performance cores over a pile of E-cores no matter how many synthetic benchmarks Intel can win.
 
Zen 4 is smaller across the board. If the big/little configuration really worked, their parts would win across the board. They don't.
That seems like faulty logic to me; wouldn't it be possible for something to work and still not win across the board against the really good work of AMD and TSMC? I'm not convinced Intel 10 is as good as TSMC 5, so not all of the difference between the products has to be explained by design/architecture differences.

By the same logic one could say: if the all-identical-core configuration really worked, it would beat Intel and Apple's Arm chips across the board; they don't, no?

I will take all performance cores over a pile of E-cores no matter how many synthetic benchmarks Intel can win.
I'm not sure where the talk that this shows up particularly in synthetics comes from. Cinebench is not really a synthetic benchmark; it's doing an actual rendering task, which is why it correlates almost perfectly with rendering scenes in Blender. A 13600K in an actual Blender render often beats a 7700X.

Actual code compiling, actual H.265 encoding, actual rendering: is there a special gap between synthetic and non-synthetic benchmarks involving the E-cores?
 
I will take all performance cores over a pile of E-cores no matter how many synthetic benchmarks Intel can win.
I'm not sure where the talk that this shows up particularly in synthetics comes from. Cinebench is not really a synthetic benchmark; it's doing an actual rendering task, which is why it correlates almost perfectly with rendering scenes in Blender. A 13600K in an actual Blender render often beats a 7700X.

Actual code compiling, actual H.265 encoding, actual rendering: is there a special gap between synthetic and non-synthetic benchmarks involving the E-cores?

Here's the issue: no one shopping for a rendering machine is going to be looking at core counts that low anyway, so while it can be used to judge performance, no serious user is looking at that CPU for that task. I don't like relying on Windows to manage the cores correctly; as shown with Ryzen, it can struggle to manage cores perfectly.

Also, synthetic benchmarks can be tuned to avoid certain uses, which can showcase a product in a better light than reality. Similarly, built-in game benchmarking tools rarely reflect reality.
 
Here's the issue: no one shopping for a rendering machine is going to be looking at core counts that low anyway, so while it can be used to judge performance, no serious user is looking at that CPU for that task.
And no one shopping for a gaming machine will have much interest in high core counts either, so who are those machines for?

Compiling still tends to be done on quite normal machines by a giant share of people, as is rendering for YouTube stuff, and the same goes for baking indie or personal small games. I'm sure you're right about serious studios (though I'd imagine quick previews before sending something big to the bake/render queue could still happen).

I love my 3900X for compiling and a 3070 for rendering at work, but I would not have paid for a Threadripper Pro and Hopper-class hardware for it. There is a lot of "big, but not big enough for the best pro tech" workload out there, and personal-computer-class machines handle it quite well.

Even not-that-small YouTube channels seem to use these kinds of CPUs for their stuff, not a server room.

Also, synthetic benchmarks can be tuned to avoid certain uses, which can showcase a product in a better light than reality. Similarly, built-in game benchmarking tools rarely reflect reality.
Yes, that's why I asked the question (because it is certainly possible), but I don't see much disconnect between synthetic benchmarks and actual performance, and a lot of benchmarks used in reviews are not that synthetic but a mix of actual tasks running on day-to-day software. Why talk about synthetic benchmarks exactly?

I don't like relying on Windows to manage the cores correctly; as shown with Ryzen, it can struggle to manage cores perfectly.
And? What would be the issue with looking at Linux benchmarks:

Non-synthetic MT stuff, like actual code compilation:
https://www.phoronix.com/review/intel-core-i9-13900k/2
Rendering:
https://www.phoronix.com/review/intel-core-i9-13900k/5
Encoding:
https://www.phoronix.com/review/intel-core-i9-13900k/11

It doesn't look like those CPUs do particularly better in synthetics than in the real world, or maybe they do, in which case it should be easy to show.
 
The E-cores are just super cut-down x86 cores: get rid of all the silicon for AVX-512, encode, decode, encryption, and multi-threading, plus all the stuff needed to secure it, and you're left with about as pure an x86 processor as you can get. They're actually pretty nifty. I have some old Ryzen embedded servers that run my basic network stacks at the individual sites; I'd love to replace them with a full E-core setup with like 16 threads and a solid 64 GB of RAM. It would make a dope little server.
You know... Atom CPUs are not a new thing :)
 
And no one shopping for a gaming machine will have much interest in high core counts either, so who are those machines for?

Compiling still tends to be done on quite normal machines by a giant share of people, as is rendering for YouTube stuff, and the same goes for baking indie or personal small games. I'm sure you're right about serious studios (though I'd imagine quick previews before sending something big to the bake/render queue could still happen).

I love my 3900X for compiling and a 3070 for rendering at work, but I would not have paid for a Threadripper Pro and Hopper-class hardware for it. There is a lot of "big, but not big enough for the best pro tech" workload out there, and personal-computer-class machines handle it quite well.

Even not-that-small YouTube channels seem to use these kinds of CPUs for their stuff, not a server room.


Yes, that's why I asked the question (because it is certainly possible), but I don't see much disconnect between synthetic benchmarks and actual performance, and a lot of benchmarks used in reviews are not that synthetic but a mix of actual tasks running on day-to-day software. Why talk about synthetic benchmarks exactly?


And? What would be the issue with looking at Linux benchmarks:

Non-synthetic MT stuff, like actual code compilation:
https://www.phoronix.com/review/intel-core-i9-13900k/2
Rendering:
https://www.phoronix.com/review/intel-core-i9-13900k/5
Encoding:
https://www.phoronix.com/review/intel-core-i9-13900k/11

It doesn't look like those CPUs do particularly better in synthetics than in the real world, or maybe they do, in which case it should be easy to show.
I mean, if you're shopping for a work computer you should absolutely look at real-world benchmarks for your use cases and environment, and Intel this year has actually come to play. It has been a solid 3 generations since Intel really brought out something competitive, so it's about damned time.
I mean, the 13900K idles with a draw of 10 W, which is a big deal in my world. Realistically I only need my machines to run for 10 hours or so a day while somebody is sitting in front of them; outside of that they're basically hibernating. The Ryzen 7950X idles at a normal ~25 watts. That difference, spread over a year, easily covers the spread from the 13900K's increased usage under heavy load.
If I look at my management platform under CPU utilization, I can see that my 8th-gen CPUs average around 48% usage over a year, trending down to 32% when I pull up the 12th-gen machines. 12th and 13th gen seem to pull pretty closely together, so if I assume any 13th-gen purchases hang out in that same ballpark of CPU utilization, the power usage difference between the Intel and AMD offerings this time around comes out about even.
Servers I build and plan differently; they're really there to work, and that's on a whole different measurement scale than desktops, which is why they get classified differently by accounting and I have to track them separately as infrastructure, and blah blah blah.
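
To put rough numbers on the idle-draw point above, here's a quick sketch; the 10 W / 25 W figures are the ones quoted in this post, while the hours and the $0.15/kWh rate are placeholder assumptions:

```python
# Rough annual idle-energy comparison using the idle wattages quoted above.
# Assumptions: ~14 hours/day of idle-at-the-desktop time and an illustrative
# electricity rate of $0.15/kWh -- swap in your own numbers.

IDLE_HOURS_PER_DAY = 14
DAYS_PER_YEAR = 365
PRICE_PER_KWH = 0.15  # USD, placeholder

def annual_idle_cost(idle_watts: float) -> float:
    """Yearly cost of a machine sitting at the quoted idle draw."""
    kwh = idle_watts * IDLE_HOURS_PER_DAY * DAYS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

intel = annual_idle_cost(10)  # ~10 W idle claimed for the 13900K build
amd = annual_idle_cost(25)    # ~25 W idle claimed for the 7950X build

print(f"Intel: ${intel:.2f}/yr  AMD: ${amd:.2f}/yr  delta: ${amd - intel:.2f}/yr per machine")
```

Per machine the delta is small, but across a fleet of office desktops it adds up, which is the point being made.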
 
They should have never released the SKUs as they were.

Ryzen became such a mindshare powerhouse because it gave people more decent cores for the same price while their competitor stagnated and kept the same core counts for generations in a row...

Sound familiar? Now Intel is the one offering more for less and AMD is the one stagnating and being lazy.

Ryzen 7 should be 12 cores, Ryzen 5: 8 cores and Ryzen 3: 6 cores.

Literally the same chips as we have now, just named and priced differently: that would have made this launch go from "eh" to "must buy".
 
They should have never released the SKUs as they were.

Ryzen became such a mindshare powerhouse because it gave people more decent cores for the same price while their competitor stagnated and kept the same core counts for generations in a row...

Sound familiar? Now Intel is the one offering more for less and AMD is the one stagnating and being lazy.

Ryzen 7 should be 12 cores, Ryzen 5: 8 cores and Ryzen 3: 6 cores.

Literally the same chips as we have now, just named and priced differently: that would have made this launch go from "eh" to "must buy".
Toss in a Ryzen 9 that's basically as close to a Threadripper as the platform can support, and hell yeah!
But the reality for AMD is a little different right now. AMD's China sales are hurting between US-based embargoes and the Made in China 2025 policy, and paired with investor promises of upwards of 50% margins for the coming year while looking at a significant decrease in sales and increases in production costs, AMD is playing things close. The Zen 5 architecture is going to be a whole new thing, and they're also going the big.LITTLE route for their next releases, just as Intel will likely be using a form of chiplet for gen 14. Competition is making things fun in the CPU market for the first time in a long while, and it's a crapshoot.
I'm still eagerly awaiting AMD announcing their G-series APUs. The only thing I really need from this gen is a 7700G; I have a MAME cabinet I need to replace, and the only thing the 5700G was missing was AVX-512 support (I want to add PS2 & PS3 emulation to the box).
 
Toss in a Ryzen 9 that's basically as close to a Threadripper as the platform can support, and hell yeah!
But the reality for AMD is a little different right now. AMD's China sales are hurting between US-based embargoes and the Made in China 2025 policy, and paired with investor promises of upwards of 50% margins for the coming year while looking at a significant decrease in sales and increases in production costs, AMD is playing things close. The Zen 5 architecture is going to be a whole new thing, and they're also going the big.LITTLE route for their next releases, just as Intel will likely be using a form of chiplet for gen 14. Competition is making things fun in the CPU market for the first time in a long while, and it's a crapshoot.
I'm still eagerly awaiting AMD announcing their G-series APUs. The only thing I really need from this gen is a 7700G; I have a MAME cabinet I need to replace, and the only thing the 5700G was missing was AVX-512 support (I want to add PS2 & PS3 emulation to the box).

Just thinking about a tiny ITX emulation box that can run everything up to PS2 without a dedicated GPU sounds amazing...
 
Just thinking about a tiny ITX emulation box that can run everything up to PS2 without a dedicated GPU sounds amazing...
The existing cabinet runs on an old 6700K and a GTX 780, but the hardware is starting to go, and while it does fine for MAME and the classic consoles, if I'm having to build a new cabinet then I want new guts, and I want to add more functionality to it while I'm in there.
I would want to put something like this in there:

but preferably using the new 7000 stuff because... reasons?
Note:
I'd use a PS5 controller though, I find it more comfortable.
 
The E-core setup is stupid. The E-Cores serve only two purposes, and that's to make the Intel chips more energy efficient and to win benchmarks like Geekbench. Why Intel put twice as many E-Cores as P-Cores on the Core i9-13900K is beyond stupid. Considering how terrible Intel's chips are at using power, it doesn't seem the E-Cores are serving their purpose too well. AMD releasing a Ryzen 7 7800X as a 10-core is just inevitable considering the products they've released in the past. Honestly AMD is their own worst enemy, with motherboard and DDR5 prices deterring anyone from buying them. AMD doesn't need Intel's help to make them lose sales. They're more than capable of doing that on their own.
Honestly, I was anti-E-core earlier, but the truth is that they make a HUGE difference for the die area they take up. Each E-core is about Skylake IPC, which means anything with 8 E-cores essentially has an i7 9700S-class CPU bolted on in addition to the "real cores".

My analogy is this: If you have 16 real strong fellows and they can build a house in 24 hours, each one of them is able to do every job well, and they all exceed expectations in their work ethic.

But this other group of 8 strong fellows and 16 average Joes can build the same house in 20 hours. The 8 strong fellows are just like the previous group, but they're accompanied by the 16 average joes, who aren't as fast, can't lift as much, and take longer. But in the areas where strength and speed are needed, the Strong Fellows can keep the process moving, and with 16 average joes, more smaller tasks are able to get done at once.

You could say "hey, that second group is cheating," or "that second group isn't as good at their job on average," or even "I prefer 16 elites to 8 elites and 16 average dudes," but the truth is that the second group built the house faster.
 
So it turns out this was a hoax.

https://www.techspot.com/news/96489-geekbench-tricked-ryzen-7-7800x-doesnt-exist.html

"Chips and Cheese has come clean. They fooled Geekbench with a phony name by spoofing the CPUID on what was actually a Ryzen 9 7950X system. They also disabled six cores and reduced the precision boost overdrive clock by 350 MHz to make it look (and perform!) like a middle ground between the very real 7700X and 7900X."

Kinda clever. Odd that the processor name isn't baked into the silicon.
Oh, that is hilarious :ROFLMAO: I had a feeling something was off; 5+5 would be an odd core config. I don't expect real 10C Zen until a 12C CCD is a thing.
 
Oh, that is hilarious :ROFLMAO: I had a feeling something was off; 5+5 would be an odd core config. I don't expect real 10C Zen until a 12C CCD is a thing.
I suspect we'll see AMD doing their own big.LITTLE CPUs with 6+4 configurations, 6 P-cores and 4 E-cores per CCD. That would fit reasonably and scale pretty nicely.
I would be pleasantly surprised by a 12C CCD, but I would expect it in a Threadripper or an EPYC long before I see it in a consumer CPU.
 
I suspect we'll see AMD doing their own big.LITTLE CPUs with 6+4 configurations, 6 P-cores and 4 E-cores per CCD. That would fit reasonably and scale pretty nicely.
I would be pleasantly surprised by a 12C CCD, but I would expect it in a Threadripper or an EPYC long before I see it in a consumer CPU.
They have patents about it, and it would be just irresponsible not to at least do R&D on it, especially for the mobile/phone/console space:

https://hothardware.com/news/amd-patents-biglittle-core-task-transition-ryzen-8000-zen-5
https://www.freepatentsonline.com/y2021/0173715.html

But as long as their regular cores deliver per mm² used and they keep the ability to put a lot of them on a chip versus the competition, it might never be used, or at least not in the near future:
https://www.fudzilla.com/news/54633-amd-not-interested-in-big-little
 
Honestly, I was anti-E-core earlier, but the truth is that they make a HUGE difference for the die area they take up. Each E-core is about Skylake IPC, which means anything with 8 E-cores essentially has an i7 9700S-class CPU bolted on in addition to the "real cores".

AMD will have disparate cores in the future, and not just big and little; they also want to have MCMs with accelerators. Kind of like Apple but with physically separate chips, made on different nodes as needed.

I didn't realize this until I watched the RDNA3 Adored video today, but newer nodes don't necessarily offer advantages for some kinds of transistors. That's why the GPU die is 5 nm while all the MC/cache dies are 6 nm. Cache (SRAM) in particular scales poorly.
 
AMD will have disparate cores in the future, and not just big and little; they also want to have MCMs with accelerators. Kind of like Apple but with physically separate chips, made on different nodes as needed.

I didn't realize this until I watched the RDNA3 Adored video today, but newer nodes don't necessarily offer advantages for some kinds of transistors. That's why the GPU die is 5 nm while all the MC/cache dies are 6 nm. Cache (SRAM) in particular scales poorly.

I know AMD announced Zen4C, which is a density-optimised form of Zen4 with less cache, and all the rumours point to Zen4C being standalone, not a "little" core next to a big one. I've not heard of AMD announcing anything about asymmetric multi-core designs like Intel's P/E-core layout, or separate accelerator cores.
 
I've not heard of AMD announcing anything about asymmetric multi-core designs like Intel's P/E-core layout, or separate accelerator cores.

It was in an earnings call or something like that. AMD said they were developing tech to implement it, in particular for custom silicon, but also for the regular market. I mean, the average Joe might not ever use some kind of AI upscaling accelerator, but a render farm might want chips packed full of them.
 
That seems like faulty logic to me; wouldn't it be possible for something to work and still not win across the board against the really good work of AMD and TSMC? I'm not convinced Intel 10 is as good as TSMC 5, so not all of the difference between the products has to be explained by design/architecture differences.

By the same logic one could say: if the all-identical-core configuration really worked, it would beat Intel and Apple's Arm chips across the board; they don't, no?
Across the board meaning across the product stack.
 
It was in an earnings call or something like that. AMD said they were developing tech to implement it, in particular for custom silicon, but also for the regular market. I mean, the average Joe might not ever use some kind of AI upscaling accelerator, but a render farm might want chips packed full of them.
It was Zen 5 that will have "P" and "E" Cores.
 
It was in an earnings call or something like that. AMD said they were developing tech to implement it, in particular for custom silicon, but also for the regular market. I mean, the average Joe might not ever use some kind of AI upscaling accelerator, but a render farm might want chips packed full of them.
This is correct. AMD is already testing accelerators in their CPUs. They started this process years ago.
 