Arrow Lake 2024 (and beyond)

Is there any scenario where this new CPU would not be good for like 5 years of gaming? I can’t imagine next gen consoles sporting something faster.
 
Depends what one means by "good". Someone who bought a CPU in July 2019, say a 3700X on day one, still has a good CPU - just not "pair it with a 7900 XTX" good anymore. A 3600 had issues in the city part of Baldur's Gate 3, at least in the launch version, and Starfield would run at half the FPS on a 3700X compared to a 13900K.

And the consoles do not have a faster CPU; they just don't pair it with as fast a GPU, don't try to do as much as very-high-detail PC gamers do, and their titles tend to run a bit more optimized for them.

Maybe the 2028 PS6 will have some new stacked X3D AMD CPU with a 4,000-TOPS 2-bit-inference NPU that can use that giant new stack of really fast memory, and a 2024 Arrow Lake will leave a lot of performance on the table versus a DDR6, PCI Express 6 GeForce 7800... and a 24 GB/s SSD.

A 3700X should be able to do 30-40 fps in console games at console levels of detail, because a weaker console CPU with a smaller power budget does it; but PC gamers tend to want to do more, and CPU development has been relatively fast over the previous 5 years and could be in the next 5.

A 9900X could be quite a bit faster than a 3900X.
 
After being bitten by a defective i226-V on Z790 and the 14900K degradation, I'm avoiding them for a good several generations. :p
 
So the rumor today is apparently that performance cores will top out at 5.7GHz, but efficiency core clocks are up 300 MHz over last gen. Moreover TJMax is 105 degrees.

I would have thought we’d be past 6.0GHz by now but I guess that is an arbitrary number and clock speed is not everything for gaming. Or is it?
 
So the rumor today is apparently that performance cores will top out at 5.7GHz, but efficiency core clocks are up 300 MHz over last gen. Moreover TJMax is 105 degrees.

I would have thought we’d be past 6.0GHz by now but I guess that is an arbitrary number and clock speed is not everything for gaming. Or is it?

It's not everything, but it is a big factor. The 14900KS topped out at 6.0GHz, right? So going from 6.0GHz to 5.7 is a clock speed reduction of 5%, which is going to cancel out some of the IPC gains.
 
It's not everything, but it is a big factor. The 14900KS topped out at 6.0GHz, right? So going from 6.0GHz to 5.7 is a clock speed reduction of 5%, which is going to cancel out some of the IPC gains.
Yes, but ditching hyperthreading could be huge. HT adds a lot of complexity, so a significant IPC uplift from ditching it seems quite plausible.

i9-14900KS tops out at 6.2GHz, so it's more like a 9% difference vs. 5.7GHz.
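To put rough numbers on that clock gap, here's a quick sketch (the 5.7GHz figure is still just a rumor, and this says nothing about IPC):

```python
# Rumored Arrow Lake top boost vs. the i9-14900KS, per the posts above.
ks_boost = 6.2   # GHz, i9-14900KS max turbo
arl_boost = 5.7  # GHz, rumored Arrow Lake top clock

# Fraction of the 14900KS clock that is lost:
print(f"{(ks_boost - arl_boost) / ks_boost:.1%}")  # 8.1%

# Or, how much higher the 14900KS clocks than the rumored part --
# this is where the "more like a 9%" figure comes from:
print(f"{ks_boost / arl_boost - 1:.1%}")  # 8.8%
```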
 
Yes, but ditching hyperthreading could be huge. HT adds a lot of complexity, so a significant IPC uplift from ditching it seems quite plausible.

i9-14900KS tops out at 6.2GHz, so it's more like a 9% difference vs. 5.7GHz.

Yeah the removal of hyperthreading is a pretty big change so I'm excited to see what this brings to the table, but only if it doesn't take until Dec to come out.
 
Personally, after some investigation and Intel's recent blunders, I am considering dropping my plans to get Arrow Lake and moving to AMD instead, ASAP.

My testing shows that the Windows scheduler, and especially the Linux one, doesn't work correctly with E-cores.
Running Prime95 in particular leads to lag on both systems, with Linux running so terribly it becomes unusable.
Disable E-cores and suddenly it all works as expected.

I was under the assumption that having more real cores would improve the multi-tasking experience, not cause the system to stutter severely at full load. It doesn't happen visibly under all types of full system load, but it happens often enough to make the overall experience rather underwhelming. We are way beyond the early Alder Lake days and there should be no scheduler issues at this point, let alone ones that cause the system to stutter.

So why even get Arrow Lake?
Just disabling HT doesn't fix this issue. It fixes the problem of background threads stealing performance from the P-cores, but if the whole system stutters then it is still an unworkable product.

Not to mention I expect Intel to go: "we will release Arrow Lake, then an even more overclocked and somehow less power-efficient CPU based on the same exact core design, and then we will add or remove a few pins and claim we needed to change the socket because we had to improve power delivery - to get more profit from selling chipsets" ~Intel

Maybe if Intel didn't waste time changing sockets and "improving power delivery", they would focus more on improving power efficiency.
Imho Intel is toast.
All they had was users' assumption that, despite all the BS Intel does, you at least get a consistent, stable experience. That was true for many, many years, especially next to some of AMD's blunders, like the Athlon 64 X2 timer bugs that to me made those CPUs worse than a Pentium D, or all the issues that came with new socket launches, including AM5. But all of that pales in comparison to Intel's recent blunder, and at least by getting AMD I don't waste money on new motherboards as often as I do with Intel.

Either Intel fixes the P/E-core scheduler and also commits to LGA1851 actually having a long life, or I am not touching their sheet.
 
I think one of the issues is that after Intel had 14nm++++++++++, they wanted to come out with various new tech very fast. That can open them up to more blunders.
By the way, I was surprised by the price of the AMD Ryzen 5 5500GT. So maybe Intel forgot how to really compete.
 
I wonder how the Ultra 9 will actually perform. The leaked data so far is underwhelming at best. I don’t like the idea of a new architecture/process that does not significantly outperform the old one. What that suggests to me is a CPU that will age quickly and poorly, because you don’t have much additional headroom over 14th gen. Might as well buy a two-year-old CPU. A new chip like this should trounce 14th gen.

Ultimately, 14th gen is no slouch when the chips actually work, so a chip that only nominally beats it but perhaps is more efficient is ok. Not every chip needs to justify an upgrade from the last gen. There are customers like me with much older chips and any gains over the 14th gen are welcome. But I tend to think if they are not substantially improving upon 14th gen with these chips, that a chip that does amount to a significant improvement can’t be too far behind. Perhaps I will skip this gen after all.
 
Lunar Lake and Panther Lake are supposed to be powerful. Over time though I get confused about which processors are going to be laptop and which ones are going to be desktop.
 
I wonder how the Ultra 9 will actually perform. The leaked data so far is underwhelming at best. I don’t like the idea of a new architecture/process that does not significantly outperform the old one. What that suggests to me is a CPU that will age quickly and poorly, because you don’t have much additional headroom over 14th gen. Might as well buy a two-year-old CPU. A new chip like this should trounce 14th gen.

Ultimately, 14th gen is no slouch when the chips actually work, so a chip that only nominally beats it but perhaps is more efficient is ok. Not every chip needs to justify an upgrade from the last gen. There are customers like me with much older chips and any gains over the 14th gen are welcome. But I tend to think if they are not substantially improving upon 14th gen with these chips, that a chip that does amount to a significant improvement can’t be too far behind. Perhaps I will skip this gen after all.

That 4% improvement was just a single benchmark I believe. Even for AMD there are some synthetic tests where going from Zen 3 to Zen 4 only shows like a 5% performance improvement, but we all know how much faster it actually is across the board. Just wait for some proper reviews, everything else until then just seems like clickbait. Lol even AMD themselves showed a 1% improvement going from Zen 3 to 4 in CPU-Z:

[attached: AMD CPU-Z comparison screenshot]
 
That 4% improvement was just a single benchmark I believe. Even for AMD there are some synthetic tests where going from Zen 3 to Zen 4 only shows like a 5% performance improvement, but we all know how much faster it actually is across the board. Just wait for some proper reviews, everything else until then just seems like clickbait. Lol even AMD themselves showed a 1% improvement going from Zen 3 to 4 in CPU-Z:

[attached: AMD CPU-Z comparison screenshot]
Edit: this is apparently fake. Here's a +22% in CPU-Z vs. i9-14900K Arrow Lake leak, and it's likely a Core Ultra 5: https://wccftech.com/alleged-intel-...k-20-percent-faster-single-thread-vs-14900ks/

A full system benchmark, 2 web benchmarks, a multi-core benchmark that's 14.5% faster, and a lone single-thread benchmark don't convince me that Arrow Lake is going to be 4% faster in single thread. Of course it's entirely possible it's barely faster in GeekBench and much faster in CPU-Z, so the total opposite of Zen 3 to Zen 4 getting +1% in CPU-Z and +14% in GeekBench. Personally I'm expecting a pretty good increase in IPC from ditching hyperthreading since it'll save a bunch of complexity. At any rate I'm sure we'll see the usual situation where some apps, games and benchmarks see more improvement than others.

We'll have a better idea once reviews and benchmarks on Lunar Lake are out. Both are supposed to use Lion Cove P-cores and Skymont E-cores. Intel says they're going to launch Lunar Lake on September 3 at an event in Berlin.
 
Here's a +22% in CPU-Z vs. i9-14900K Arrow Lake leak, and it's likely a Core Ultra 5: https://wccftech.com/alleged-intel-...k-20-percent-faster-single-thread-vs-14900ks/

A full system benchmark, 2 web benchmarks, a multi-core benchmark that's 14.5% faster, and a lone single-thread benchmark don't convince me that Arrow Lake is going to be 4% faster in single thread. Of course it's entirely possible it's barely faster in GeekBench and much faster in CPU-Z, so the total opposite of Zen 3 to Zen 4 getting +1% in CPU-Z and +14% in GeekBench. Personally I'm expecting a pretty good increase in IPC from ditching hyperthreading since it'll save a bunch of complexity. At any rate I'm sure we'll see the usual situation where some apps, games and benchmarks see more improvement than others.

We'll have a better idea once reviews and benchmarks on Lunar Lake are out. Both are supposed to use Lion Cove P-cores and Skymont E-cores. Intel says they're going to launch Lunar Lake on September 3 at an event in Berlin.
If you look at that link you posted again, the article was updated to say that those numbers appear to have been faked.
 
Personally I'm expecting a pretty good increase in IPC from ditching hyperthreading since it'll save a bunch of complexity.
I don't think it works like that.
To get better single-core performance you need to widen the CPU design and optimize the decoders, dispatchers, caches, etc. HT/SMT itself doesn't affect IPC so much as it uses transistors which could be put to different uses.

What Intel imho needs to do is improve the core design so much that they won't need as much turbo boost.
Also, commit to LGA1851 longevity - Intel literally cannot afford to change sockets all the time, especially after their RL blunder.

Otherwise Intel needs to fix the scheduler in Linux, because it currently works pretty badly with P/E cores.

All things considered, to me personally Arrow Lake doesn't look so attractive.
At the beginning of this year I was using Windows, and there my RL systems worked pretty great. I didn't have any issues, and if anyone did, it was AMD users. Now I use Linux, and for the scheduler alone I am considering ditching my current platform for a 16-core Zen 5, especially since AMD doesn't plan on changing sockets anytime soon - perhaps they will still be on AM5 when Intel makes LGA1850 or LGA1852...

So for me Arrow Lake doesn't look that interesting.
And Intel is apparently ditching the 8/32 SKUs... I wonder what their plan is. Either they will put in more P-cores instead, which would be the best move, or they will try to overclock the heck out of Arrow Lake for the next generation - it surely worked out last time they did it...
 
So do the efficiency cores actually hinder performance in gaming? Or do the games benefit from the extra E cores?
 
Hi. Which model number will be the fastest Intel Core Ultra, and will it be faster than the 14900K?
 
Hi. Which model number will be the fastest Intel Core Ultra, and will it be faster than the 14900K?
1. Intel Core Ultra Buckets to 11 Over 9000™
2. Due to the effects of quantum tunneling resulting from such small lithography, the chip will only be faster than the 14900k when you aren't thinking about it and also on February 29.
 
1. Intel Core Ultra Buckets to 11 Over 9000™
2. Due to the effects of quantum tunneling resulting from such small lithography, the chip will only be faster than the 14900k when you aren't thinking about it and also on February 29.
So it will be released on February 29, not this October?
 
Hi. Which model number will be the fastest Intel Core Ultra, and will it be faster than the 14900K?
Rumour has it, the Core Ultra 9 285K

https://videocardz.com/newz/intel-c...t-gen-cpus-leaked-alongside-z890-motherboards

As for being faster than a 14900K, that will depend on the workload and the power envelope; I would not expect it to be faster at everything, or at least not by much.

This should be a
-much better iGPU
-much better efficiency
generation. Considering the growing pains that come with so much change, and the potential to grow into that tile system (Meteor Lake was not an easy or impressive launch), if they achieve that against AMD's latest launch it could be good enough. If both the no-HT and the AVX-512 rumours are true, maybe there is some keep-it-simple-to-make-it-work approach that could pay off....

If we are lucky, Intel's rough start was a Meteor Lake desktop CPU that they never launched, and we are getting a revised one.
 
Rumour has it, the Core Ultra 9 285K

https://videocardz.com/newz/intel-c...t-gen-cpus-leaked-alongside-z890-motherboards

As for being faster than a 14900K, that will depend on the workload and the power envelope; I would not expect it to be faster at everything, or at least not by much.

This should be a
-much better iGPU
-much better efficiency
generation. Considering the growing pains that come with so much change, and the potential to grow into that tile system (Meteor Lake was not an easy or impressive launch), if they achieve that against AMD's latest launch it could be good enough. If both the no-HT and the AVX-512 rumours are true, maybe there is some keep-it-simple-to-make-it-work approach that could pay off....

If we are lucky, Intel's rough start was a Meteor Lake desktop CPU that they never launched, and we are getting a revised one.
oki thx
 
I’m on 8700K. Think it is time to upgrade?
It's a decent CPU that's more than capable for everything you need. I recently picked up a 9700 (non-K) second hand for 123 bucks and I'm impressed by how well it holds up in today's applications and games.

Maybe take the minor step to a 9700K or 9900K and kick everything up to 5GHz. The TIM is metal on the 9000 series; it was still paste on the 8000 series. They're nearly the same CPUs though.

The 9700K is just 8 cores with no HT, and the 9900K is 8 cores / 16 threads.

If you can't sideload into a 9000 series cheap, a super cheap upgrade using your existing RAM is the 12700K or 12900K. You can get one and a motherboard for really cheap. It's good enough to do everything and doesn't have the problems the 13000 and 14000 series did / do.
 
It's not everything, but it is a big factor. The 14900KS topped out at 6.0GHz, right? So going from 6.0GHz to 5.7 is a clock speed reduction of 5%, which is going to cancel out some of the IPC gains.
You are using the term IPC incorrectly.
IPC is Instructions Per Clock.
Ignoring interactions with the rest of the system (things like memory bandwidth and latency), the IPC at different multiplier settings on a CPU is the same. Not ignoring those things, the real-world IPC of a 6GHz CPU should actually be a bit worse than that of a 5.7GHz CPU, because memory doesn't speed up along with the core.

To describe performance where improving IPC but also increasing clock frequency improves performance, we should use something like IPS - Instructions Per Second - or MIPS.
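As a toy model of that distinction (the IPC numbers below are invented purely for illustration, not real measurements): throughput in instructions per second is IPC times clock frequency, so a wider core at a lower clock can come out ahead:

```python
def ips(ipc: float, freq_ghz: float) -> float:
    """Throughput in billions of instructions per second (GIPS),
    using the simple model IPS = IPC x clock; ignores memory effects."""
    return ipc * freq_ghz

# Hypothetical chips -- both IPC values are made up for the example.
narrow_fast = ips(ipc=4.5, freq_ghz=6.2)  # narrower core, higher clock
wide_slow = ips(ipc=5.0, freq_ghz=5.7)    # wider core, lower clock

print(round(narrow_fast, 1), round(wide_slow, 1))  # 27.9 28.5
```

Under this model the 5.7GHz chip wins by about 2% despite the ~9% clock deficit, because its (hypothetical) ~11% IPC advantage more than compensates.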
 
You are using the term IPC incorrectly.
IPC is Instructions Per Clock.
Ignoring interactions with the rest of the system (things like memory bandwidth and latency), the IPC at different multiplier settings on a CPU is the same. Not ignoring those things, the real-world IPC of a 6GHz CPU should actually be a bit worse than that of a 5.7GHz CPU, because memory doesn't speed up along with the core.

To describe performance where improving IPC but also increasing clock frequency improves performance, we should use something like IPS - Instructions Per Second - or MIPS.

What? If two CPUs are running at the same frequency but one has higher IPC, then when you lower the clock speed of the CPU that has the higher IPC, the reduced clock speed will in turn reduce the performance that was gained from the IPC increase. Say both CPUs run at 6GHz, and one outputs 5 more fps due to higher IPC. Now reduce the clock speed of the higher-IPC chip down to 5.5GHz and it's only getting 3 more fps. The reduced clock speed offset some of the performance gains from the increased IPC. Not sure how I'm using the term wrong here.
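Those example numbers are internally consistent if fps scales linearly with clock; here's a quick sanity check (the 19fps baseline is back-solved from the example, not a real benchmark):

```python
# Two hypothetical CPUs at 6.0GHz; the second has higher IPC (+5 fps).
base_fps = 19.0          # lower-IPC chip at 6.0GHz (illustrative number)
fast_fps = base_fps + 5  # higher-IPC chip at 6.0GHz -> 24 fps

# Downclock the higher-IPC chip to 5.5GHz, assuming fps scales with clock.
fast_downclocked = fast_fps * 5.5 / 6.0  # 22.0 fps

# The IPC advantage shrinks from +5 fps to +3 fps, as described above.
print(fast_downclocked - base_fps)  # 3.0
```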
 
What? If two CPUs are running at the same frequency but one has higher IPC, then when you lower the clock speed of the CPU that has the higher IPC, the reduced clock speed will in turn reduce the performance that was gained from the IPC increase. Say both CPUs run at 6GHz, and one outputs 5 more fps due to higher IPC. Now reduce the clock speed of the higher-IPC chip down to 5.5GHz and it's only getting 3 more fps. The reduced clock speed offset some of the performance gains from the increased IPC. Not sure how I'm using the term wrong here.
I reread your post and yeah, I jumped to a conclusion just based on how often the term IPC is incorrectly used these days. Sorry.
You said it will cancel out some of the IPC gains, so you used the term correctly.
 