Will Alder Lake be good enough to compete with Zen 3 XT and its 3D cache?

sblantipodi

2[H]4U
Joined
Aug 29, 2010
Messages
3,759
As per the title:
what is your feeling about Alder Lake? Will it be good enough to compete with AMD?
 
Alder Lake will destroy Zen 3 in both performance and efficiency, according to rumors and leaks.
 
It depends on what Zen "3D" actually brings to the table. If AMD is able to get higher core clocks and better DRAM performance than what is currently possible, then I think Zen "3D" will be very competitive. OTOH, Alder Lake might start off slow and get "greater later" as DDR5 improves and DDR4 starts going EOL.
 
We have no way of knowing till it arrives. I've got high hopes, but there have been plenty of "high hopes" releases that didn't turn out all that well (Bulldozer, Prescott, etc) on both sides.
 
I have seen recent rumors of X699; it could be very interesting if core counts were higher on X699...
 
Alder Lake will destroy Zen 3 in both performance and efficiency, according to rumors and leaks.


The efficiency will only come when the system is lightly loaded. The new big cores are just playing the same games as Tiger Lake: suck down even more power to get 20% faster than the previous gen, so any increase in power consumption in Zen 3+ will be more than matched on the Intel side!

https://www.hardwaretimes.com/intel...on-to-468w-15-more-than-11th-gen-rocket-lake/

And that performance/watt under low loads is entirely dependent on how much better Alder Lake's hardware controller is than Lakefield's was.
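To put rough numbers on that tradeoff (purely illustrative figures, not benchmarks): a part that is 20% faster while pulling, say, 50% more power under load is actually less efficient. A quick Python sketch:

```python
# Illustrative perf-per-watt arithmetic. The 20%/50% figures are made up
# for the example; they are not measured Alder Lake or Zen 3 numbers.

def perf_per_watt_ratio(speedup: float, power_increase: float) -> float:
    """Return new perf/watt relative to the old part (1.0 = unchanged)."""
    return (1.0 + speedup) / (1.0 + power_increase)

ratio = perf_per_watt_ratio(speedup=0.20, power_increase=0.50)
print(f"relative perf/watt: {ratio:.2f}")  # 1.2 / 1.5 = 0.80, i.e. 20% worse
```

So "faster" and "more efficient" only line up if the performance gain outpaces the power increase.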
 
The efficiency will only come when the system is lightly loaded. The new big cores are just playing the same games as Tiger Lake: suck down even more power to get 20% faster than the previous gen, so any increase in power consumption in Zen 3+ will be more than matched on the Intel side!

https://www.hardwaretimes.com/intel...on-to-468w-15-more-than-11th-gen-rocket-lake/

And that performance/watt under low loads is entirely dependent on how much better Alder Lake's hardware controller is than Lakefield's was.

This is obviously not true. Intel clearly claimed that the efficiency cores are more powerful than Skylake, so it will not work that way:
efficiency cores will be used even in high-load situations.

They are meant to improve performance by increasing efficiency, not just to save some power.
 
This is obviously not true. Intel clearly claimed that the efficiency cores are more powerful than Skylake, so it will not work that way:
efficiency cores will be used even in high-load situations.

They are meant to improve performance by increasing efficiency, not just to save some power.
And AMD claimed that Bulldozer would outperform Intel. Never believe claims until proof is delivered.
 
And AMD claimed that Bulldozer would outperform Intel. Never believe claims until proof is delivered.

That's for sure, but I think it is reasonable to assume the efficiency cores are not there for power saving only; that would not make much sense on desktops.
 
Don't forget the new legal efficiency requirements. Much like EPA tests, if this can make a chip look lower-power in certain tests, things may pass that wouldn't before. Or that may be a concern coming up.
 
As per the title:
what is your feeling about Alder Lake? Will it be good enough to compete with AMD?
Assuming Intel's and AMD's claims are both correct:

I expect Zen 3 with V-cache to perform better in games. I expect Alder Lake will excel in general tasks, lightly threaded tasks, and in memory-bandwidth-bound situations.

For multi-threading: I think AMD will still win out in situations which use a lot of threads (like more than 10). But that's a big guess.
 
That's for sure, but I think it is reasonable to assume the efficiency cores are not there for power saving only; that would not make much sense on desktops.
Efficiency is alive and real, even in the desktop world. I think as enthusiasts we tend to think about our own uses. We are literally the 1% of the CPU world. For us it may not matter; for the other 99% it will. Many more CPUs are sold for offices and prebuilts than are ever placed into an enthusiast DIY PC. Intel is not catering to us. They are looking at the bigger picture.
 
And AMD claimed that Bulldozer would outperform Intel. Never believe claims until proof is delivered.
Absolutely this. The leaked Alder Lake stuff looks promising, but I don't plan on buying a new build until both Intel and AMD release their 2022 products for testing.

(And in some ways I'd really like to wait until 2023 both to avoid any potential issues with Intel's first big.LITTLE implementation, and to give DDR5 a bit longer to shift away from the generally underwhelming early product cycle price/perf status. OTOH my 4790K is finally starting to feel meaningfully outdated...)
 
I've also been resisting any upgrades until 10 nm, and I'm stuck on a 4670K. I sincerely hope Intel's implementation of efficiency and performance cores turns out well.
 
I've also been resisting any upgrades until 10 nm, and I'm stuck on a 4670K. I sincerely hope Intel's implementation of efficiency and performance cores turns out well.
My gut says dodge first gen. The big+little setup works well for phones, but diving in and buying first gen desktop big+little hardware sounds like unnecessary pain. It's a good idea but I'm expecting a good amount of "ouch" with the first gen desktop implementation.
 
My gut says dodge first gen. The big+little setup works well for phones, but diving in and buying first gen desktop big+little hardware sounds like unnecessary pain. It's a good idea but I'm expecting a good amount of "ouch" with the first gen desktop implementation.
I’m betting we’ll end up with game modes that disable the small cores for a bit.
 
I’m betting we’ll end up with game modes that disable the small cores for a bit.
Very likely, but I bet we'll need a third party utility for some older games at first. New stuff will get a patch, older games... nah, no $ in it for most of them. Some will of course. After a year or two Microsoft will sort it out and you won't have to think about it anymore most of the time. Of course exceptions will still apply.
 
My gut says dodge first gen. The big+little setup works well for phones, but diving in and buying first gen desktop big+little hardware sounds like unnecessary pain. It's a good idea but I'm expecting a good amount of "ouch" with the first gen desktop implementation.
Smart idea. I've been an early adopter once: 16 GB of DDR3 Corsair Dominator Platinums for $800 CAD, lol. Never again.

Intel's adding too many firsts for me to step away from my AMD build:

1. New big.LITTLE
2. DDR5
3. PCIe Gen 5
4. New Windows 11 scheduler

When these have had some time to mature I'll consider it, but by then AMD will bring out their next gen. Who knows, maybe by 2022-2023 AMD will bring out PCIe Gen 6 and DDR5 in a meaningful way.
 
I saw a recent post about 12900k beating out the 5950x in Geekbench on both ST and MT, but at 250W.

I'm guessing that top end performance will be pretty close, but performance per watt will still be to AMD's advantage.
 
I’m betting we’ll end up with game modes that disable the small cores for a bit.
I really hope not. If MS does, they'll almost certainly have to remove it in a future update, because the developers of all the random craplets you have running, which have no business on anything other than a little core, will abuse the flag to put themselves on a main core, where their wasting of big-CPU cycles will slow down anything compute-intensive that you care about, and on laptops will drain the battery faster. The latter in particular makes me suspect that if MS does offer any way for apps to request a core type, it'll be an opt-in one to request running on little cores, intended for the developers of well-behaved background utilities.
 
Would this mean MS is going to basically look to Android/iOS schedulers for inspiration, since ARM has been like this forever? How long before we start seeing people rooting Windows 11 to run their own governors/schedulers to tweak how the little cores are assigned?
 
I saw a recent post about 12900k beating out the 5950x in Geekbench on both ST and MT, but at 250W.

I'm guessing that top end performance will be pretty close, but performance per watt will still be to AMD's advantage.
That’s almost what my 3960x pulls at stock.
 
I really hope not. If MS does, they'll almost certainly have to remove it in a future update, because the developers of all the random craplets you have running, which have no business on anything other than a little core, will abuse the flag to put themselves on a main core, where their wasting of big-CPU cycles will slow down anything compute-intensive that you care about, and on laptops will drain the battery faster. The latter in particular makes me suspect that if MS does offer any way for apps to request a core type, it'll be an opt-in one to request running on little cores, intended for the developers of well-behaved background utilities.
They'll be able to do that anyway unless MS figures out a way to force them onto a little core. Set affinity has been in Windows as long as I've had a machine with more than one CPU core. That started with a dual Pentium 3 box running Windows 2000. It is not and has never been a privileged operation. Any user can set affinity, and it can be done with system calls in an application. Same with Linux. Despite being a bit of a Linux geek and knowing more about fun with CPU scheduling on Linux, I never actually messed around with setting affinity on Linux until 2012 when I started a job that involved high frequency trading. Before that I just used it to make games meant for a one core machine run on a dual socket. Most were fine out of the box, but a few flipped out if you had more than one CPU. Like Unreal Tournament (original 1999 edition) was a slideshow on a dual Opteron but ran stupid fast if you forced it onto one CPU.
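For reference, setting affinity really is a one-line unprivileged call from user space. A minimal Linux sketch in Python (`os.sched_getaffinity`/`os.sched_setaffinity` wrap the `sched_setaffinity(2)` syscall; on Windows the equivalent is `SetProcessAffinityMask`):

```python
import os

# Pin the current process to a single CPU, as you might for an old game
# that misbehaves on multi-core machines, then restore the original mask.
# Linux-only: these os functions are not available on Windows or macOS.

original = os.sched_getaffinity(0)    # pid 0 = the calling process
first_cpu = min(original)

os.sched_setaffinity(0, {first_cpu})  # restrict to one core
assert os.sched_getaffinity(0) == {first_cpu}

os.sched_setaffinity(0, original)     # undo the restriction
print(f"restored affinity to {len(original)} CPU(s)")
```

Tools like Task Manager's "Set affinity" menu or `taskset` on Linux are just front ends for the same call.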

Personally I think Alder Lake will be nice for laptops running business & "the usual" home computing stuff like web browsers, office, etc. but pretty meh for desktops and gaming for a while.
 
It's funny how Intel comes out with an architecture and Microsoft is ready to rock with scheduler updates. My Ryzen 5800X STILL doesn't report accurate frequencies through Task Manager. I have to go to Ryzen Master to know what's going on.

Must be nice to be on top.
 
And AMD claimed that Bulldozer would outperform Intel. Believe claims never until proof is delivered.
Technically it did, eventually. Eventually the extra "cores" mattered enough per unit $. No different than CrossFire 480s besting 980s, only to have CrossFire retired a year later (massive egg on face, IMO).
You speak truth, however: prove it, or else it's BS.
 
I think everyone believes Intel will come out with a "stomp" like they did in the AMD dominated dual core days. I just don't see that coming yet. Maybe AMD did something right this time?
 
Technically it did, eventually. Eventually the extra "cores" mattered enough per unit $. No different than CrossFire 480s besting 980s, only to have CrossFire retired a year later (massive egg on face, IMO).
You speak truth, however: prove it, or else it's BS.
Super briefly. That was pretty much eliminated by the early Haswell days, if not sooner. They weren’t bad, mind you- but they were only competitive on price, not on performance. And they weren’t a clear win over Intel like AMD64 was or like Zen3 has been.
 
Super briefly. That was pretty much eliminated by the early Haswell days, if not sooner. They weren’t bad, mind you- but they were only competitive on price, not on performance. And they weren’t a clear win over Intel like AMD64 was or like Zen3 has been.
Still running a Dozer box, keeps up with first gen ryzen in gaming loads. Can't say that about Haswell i5's. It's true they had no hope competing with Intel in the high end. With 4c8t it was a different story. The 4c4t processors were almost immediately obsolete, yet synthetic benchmarks looked great.
 
Still running a Dozer box, keeps up with first gen ryzen in gaming loads. Can't say that about Haswell i5's. It's true they had no hope competing with Intel in the high end. With 4c8t it was a different story. The 4c4t processors were almost immediately obsolete, yet synthetic benchmarks looked great.
Oh gaming tends to be mostly fine. Started hitting issues with VR stuff on it- my wife was using mine so I built a new one and gave her my 6700K. But I used it as a workstation and it was trounced for that by Intel, never mind zen 1 (which I also had). And true, was only thinking high end- I don’t really do anything smaller than the top-end consumer parts since mine tend to run forever.
 
Alder Lake is an interesting architecture. If the Intel Architecture Day slides are to be believed, Golden Cove is somewhere between 10 and 20% faster than Rocket Lake in practice (Intel quoted 19% geomean, but there were a bunch of outliers and synthetics aren't a great way to measure performance). That's really freaking good for x86 at this point, but it was achieved by increasing the sizes of all of the buffers and execution windows by 50% again (SKL->TGL was already a 50% increase in a number of internal data structures). That kind of increase in PPA for a 15%-ish increase in performance is bad engineering, but that 15% is what you need to get users to upgrade at this point, now that we are so close to silicon's material limits in terms of leakage, clock speed, etc.

The really interesting thing is the E-cores (Gracemont). One characteristic of current desktop CPUs is that they ship clocked at the lunatic fringe of what the process is capable of (e.g. 11700K vs 11900K separated by just a couple hundred MHz, the 10850K as a response to insufficient dies clocking to 10900K speeds, Zen 3000 boost inconsistency, zero overclocking headroom from either vendor). Golden Cove is going to have to make PPA concessions to have such a fat core and still be able to reach last gen's peak clock speeds; +15% IPC is a lot less 'wow' when you lose 500 MHz. Gracemont seems really fast - Intel is saying it's as fast as Skylake - but it's going to make tradeoffs that will likely limit clock speeds to 3.x GHz instead of 5 GHz.

Annoyingly, the mainstream desktop platform is going to feel the drawbacks of such a design the most. On the mobile side you can have some nice 2P/6E or 2P/8E designs which will outperform last-gen designs everywhere (laptops don't do all-core turbo anywhere close to 5 GHz). On the HEDT side you can have something crazy like an 8P/64E which puts all of your single-threaded tasks (linking, CAD, etc.) on the P-cores and puts the multithreaded stuff (compiling, rendering, encoding) on the E-cores, which is pretty good. But an 8P/8E is just going to add confusion on mainstream desktops; it will end up performing like a 5900X but with a ton of scheduler-induced drama.

I'm really hoping to see a 12P/0E conventional design with competitive power consumption. It won't thrash AMD, but at this point everyone is so close to the limit of what x86 and silicon can do that it's going to be nigh impossible to get a huge leap in real-world workloads.
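The P/E split described above can be written down as a toy placement rule (my guess at the policy, purely illustrative; the real Thread Director plus OS scheduler is dynamic and far more involved than a static thread-count cutoff):

```python
# Toy sketch of the placement policy described above: lightly threaded,
# latency-sensitive work goes to P-cores, wide throughput work to E-cores.
# The cutoff (thread count vs P-core count) is an assumption for the example.

def place_task(threads: int, p_cores: int) -> str:
    """Assign a task to 'P' or 'E' cores based on its thread count."""
    return "P" if threads <= p_cores else "E"

# Hypothetical 8P/64E HEDT part with the workloads named in the post.
tasks = {"linking": 1, "CAD": 2, "compiling": 64, "rendering": 32}
placement = {name: place_task(t, p_cores=8) for name, t in tasks.items()}
print(placement)  # {'linking': 'P', 'CAD': 'P', 'compiling': 'E', 'rendering': 'E'}
```

On an 8P/8E desktop the same rule has far less room to work with, which is exactly the confusion being predicted.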
 
Annoyingly, the mainstream desktop platform is going to feel the drawbacks of such a design the most. On the mobile side you can have some nice 2P/6E or 2P/8E designs which will outperform last-gen designs everywhere (laptops don't do all-core turbo anywhere close to 5 GHz). On the HEDT side you can have something crazy like an 8P/64E which puts all of your single-threaded tasks (linking, CAD, etc.) on the P-cores and puts the multithreaded stuff (compiling, rendering, encoding) on the E-cores, which is pretty good. But an 8P/8E is just going to add confusion on mainstream desktops; it will end up performing like a 5900X but with a ton of scheduler-induced drama.

I'm really hoping to see a 12P/0E conventional design with competitive power consumption. It won't thrash AMD, but at this point everyone is so close to the limit of what x86 and silicon can do that it's going to be nigh impossible to get a huge leap in real-world workloads.

I don't think Intel is going to release an ADL without e-cores for the simple fact that 4 e-cores only take up as much space as 1 p-core. Intel can call a 6P/4E chip "10-core" and it would fit on a smaller die than an 8P/0E "8-core." Even though I would rather have the latter, the former sounds better on paper, would probably score higher on a lot of benchmarks, and probably costs less to make.
 
I don't think Intel is going to release an ADL without e-cores for the simple fact that 4 e-cores only take up as much space as 1 p-core. Intel can call a 6P/4E chip "10-core" and it would fit on a smaller die than an 8P/0E "8-core." Even though I would rather have the latter, the former sounds better on paper, would probably score higher on a lot of benchmarks, and probably costs less to make.
Maybe Sapphire Rapids will be our savior, it's not showing up for a while but 18+ unlocked Golden Cove cores should be quite competitive on desktops. For the crazy HFT guys that need nanosecond latencies it's no good (you still get some extra latency going through the EMIB) but for the rest of us AMD has shown the chiplet/tile based design doesn't hurt real-world performance.
The real question is, will it show up on time (SPR will probably wind up in servers first) and will it be priced correctly?
 
Depends on how arrogant Intel is. If they believe they are still the market leader, they won't lower prices, only increase them. We will see how they price the server CPUs; if those are still $10,000+, then you know their heads are still stuck up their asses.
 
The 4c4t processors were almost immediately obsolete, yet synthetic benchmarks looked great.
I could be misreading everything, but weren't some of those among the longest-used, longest-relevant CPUs in history? I have a colleague who replaced his Core i5 4c/4t 2xxx this year, at home, doing CAD development on it. Not sure about the split on logical CPUs, but 4 CPUs is still the most common on the Steam hardware survey.
 
I could be misreading everything, but weren't some of those among the longest-used, longest-relevant CPUs in history? I have a colleague who replaced his Core i5 4c/4t 2xxx this year, at home, doing CAD development on it. Not sure about the split on logical CPUs, but 4 CPUs is still the most common on the Steam hardware survey.
Sure, if you're using single-threaded apps like CAD software (I'm a CAD designer) and are a cheap-assed colleague, you could use those forever. Along with 90% of the CPUs made at that time. 4c4t CPUs choke on AAA games, and did almost at launch. Also multitasking and multi-threaded apps suffer (something professional CAD designers deal with when they get out of school).
Hell, I do some CAD work on a Chromebook running Linux. 4c8t CPUs, even going way back, are still viable for gaming. 4c4t CPUs are not really.

The point was that Haswell i5s, while priced similarly to AMD AM3 FX chips, were obsolete sooner. That they might still be in use somewhere for something is more or less irrelevant. I have a Phenom X3 in a box in my basement because it still turns on.
 
for something is more or less irrelevant.
In general, yes, but not when talking about whether something became obsolete fast. I took obsolete as meaning:
: no longer in use or no longer useful
https://www.merriam-webster.com/dictionary/obsolete

Which sounds really strange to say about something like a 2500K, that it became obsolete almost at launch (I am still using my 4c/4t from that era in a server right now).

Looking at benchmarks between a 2500K and a 2600K for games of the time, I am still not convinced.

P.S. Not using a CAD, making a CAD: multithreaded performance changed things by a giant, ridiculous factor (I would go as far as to say 6 to 11 times faster for some things) when he moved to a 3700X. It was just to point out that this was a CPU that lived for about 10 years even for people who make a living on their PC, which is an all-time high historically speaking, not short-lived.

No one was programming in 2000 on even a high-end 1990 CPU; it was a particularly long-lived generation of processors.
 