Arrow Lake 2024 (and beyond)

Zen 5 is slower in some apps and games though, and sometimes less efficient, so there are some slight regressions. But that's the desktop chips; for all I know it's a flawless arch for datacenters.
 
Everything I'm seeing says the DC kit is amazing - especially if you need the core counts. And yeah, sometimes it's slightly slower - but sometimes it's also slightly faster. It's mostly a wash, at least for now. I'm hearing rumors that X3D is going to be impressive though.
 
Something I haven't been able to figure out: Does Arrow Lake support DP-Alt mode for the dedicated GPU, over USB-C/Thunderbolt?
 
11th gen was worse. 14th gen was not great, either, depending on whether you do or do not care about efficiency (and if so, how consistently).
11th gen wasn't necessarily worse than 10th gen. It added PCI-E 4.0 and brought a needed IPC improvement over Skylake.
The issue was that 11th gen dropped two cores, was as power hungry or more, and competed on IPC with Zen 3 - which was very power efficient, and you could get a 16-core variant which wiped the floor with Rocket Lake. Not to mention the X3D variant for gaming, and cheap platform prices.

This CPU was too little, too late. If anything, Rocket Lake should have been LGA1200's debut CPU, with at least 10 cores - not 10-core Skylake first, only to then get 8-core parts.
Intel apparently bet too much on 10nm being ready 'any moment now', lost a few years and all their advantage in core design, and 11th gen was the result of a pointless effort to backport what was meant to be a 10nm part to 14nm++++++++++++++

14th gen, on the other hand, was totally pointless. It didn't even get a new stepping over 13th gen - it just reuses the same cores, which were already overclocked too hard from the factory, and that bit Intel in the end.
 
Well, I could update the gaming box with a 285K and pray they fix it (or it won't matter at 4K), and stick with AMD on the workstation and drop in a 9950X...
Or put a 285K in the workstation, since it's solid on multicore and I don't need a full 24 cores right now - but worry about RAM - and drop a 9800X3D into the gaming box.
Or buy the best: do a 9950X in the workstation and the 9800X3D in the gaming box, and give up on having an Intel system for the moment - which gets fun when engine bugs hit and you want a different arch to test on.
No matter how you look at it, the raw performance difference between the 7950X, 9950X, 13900K, 14900K and 285K is actually minuscule.
[chart: multi-core performance comparison of the 7950X, 9950X, 13900K, 14900K and 285K]


It of course depends on what you do and how quickly you consume the results of the tasks you run.
And for efficiency, it matters how much you actually run such tasks for it to make any difference - especially given the cost of buying a whole new motherboard...
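To put rough numbers on that last point - a back-of-the-envelope sketch where every figure is an assumption, not a measurement:

watts_saved = 100      # assumed full-load power difference between CPUs, W
price_per_kwh = 0.30   # assumed electricity price, $/kWh
board_cost = 300       # assumed cost of the new motherboard, $

cost_per_hour = watts_saved / 1000 * price_per_kwh   # $ saved per hour of load
print(f"Break-even after {board_cost / cost_per_hour:,.0f} hours of full load")
# -> Break-even after 10,000 hours of full load

With those (made-up) inputs, the efficiency edge only pays for the board after years of heavy use.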
 
Intel apparently bet too much on 10nm being ready 'any moment now' and lost few years and all their advantage in core design and 11th gen was result of pointless effort to backport what was to be 10nm part to 14nm++
This is true. But I think the reality is actually just that single-threaded performance is plateauing, and that that delay gave AMD and Apple a chance to catch up. (I also said so here.) Basically just as confirmed by the chart in your next post. The exception to this seems to be Apple's SoCs, because they're SoCs. If you pack all compute and memory physically together and give 0 fucks about expandability, you can eke out a lot more single-threaded performance. If you then add a bunch of accelerators for various tasks and deftly hide them behind a software stack you completely control, you get real measurable workload speedups. (And you also render a lot of benchmarking pointless.)
 
After digesting a lot of reviews I've concluded that the 285k is actually better than the 9950x for my use case. (I've also concluded that the quality of reporting on these products has materially dropped since 2010.) Hear me out:

With the exception of AVX-512 workloads, the 285k and 9950x are basically on par for productivity, trading wins. I daily drive Debian. Maybe the latter is 1% faster. For gaming neither is a winner, but since I don't care about Cyberpunk 2077 and I don't care about FPS over 120 (my GPU is the bottleneck), who cares.

Unfortunately the X870/X870E platform sucks. All the boards I can find from brands I'd actually buy seem to give you too many M.2 storage slots and awful PCIe options. If I want a full x16 slot for a GPU and an x4 slot for a cheap Intel 10gbe NIC, I can either get the cheap $250 Asus PRIME Z890-P WIFI on the Intel platform, or I have to spring for at least the $310 TUF GAMING [blah blah] X870 board. But the X870 board is materially worse since it has 2 fewer PCIe slots and two of the M.2 slots compete with the PCIe slots for bandwidth. WTF?

The X870E platform at least bumps the number of the PCIe lanes, but costs at least $400 in a board. (The ProArt X870E-CREATOR WIFI has almost comparable PCIe and M.2 options, except for a bunch of PCIe slot/M.2 port bandwidth sharing.) Outrageous!

(The more expensive boards have more USB-4 connectivity, but the Intel board gives me a single TB-4 and that's enough. More expensive ones come with TB-5.)

I could get a PCIe 4.0 x1 10gbe NIC with the same Marvell chip the ProArt comes with, but it's $50-$100.

Even with the current price cuts (9950X for $584) the Intel platform makes more sense. If only the 285k was real and I could buy one.
 
X870 is basically where the B series was before; the only ones justified at the high end are X870E/X670E. I looked hard at the MSI MEG X670E Ace, as it would probably do what you want, or the ProArt as mentioned. I also looked at the higher-end X870 Aorus Master; that had two x4 slots, I believe.
 
I bought a 265k 2 days ago, myself.

I'm about to start a podcast, love ITX, and needed a system which would be good for productivity; I don't need the very best gaming performance.

ASRock's Z890 ITX board supports 3 NVMe drives, has dual Thunderbolt 4, etc.
https://www.newegg.com/p/N82E16813162182?Item=N82E16813162182

Microsoft Cashback also had 15% sitewide at Newegg, two days ago. So I got $100 off my order of 265k and the mobo.
The mobo also has a $20 rebate and the CPU comes with Assassin's Creed Shadows.

Solid deal!
 
Cross posting, as the Arrow Lake review thread is flooded with chatter about the history of Intel's CPUs >_>

Starting at around 13 minutes, Robert Hallock of Intel says they know what's wrong. It's a combo of things (BIOS, firmware, OS, etc.), and they are on red alert to fix it. Full details will be made public soon.

Timestamped later in the video, he gives a loose ETA on the fixes of maybe end of November. He then says they intend to publicly explain each part of the problem "line-by-line". He also goes on to insist that Arrow Lake will deliver Raptor Lake gaming performance parity, with less power use.

View: https://www.youtube.com/live/P2OHRH7221w?si=NbQ3ey4ynqwWHOzS&t=1563
 
Interesting. But even if it manages RL parity, the 9800X3D has crushed Intel for gaming. There isn't a single game anymore that Intel can dominate.

And I'm sure 16c X3D parts are coming too, pushing AMD's lead even further in the heavily multi-threaded games.
 

I'm probably going to end up buying one of these, or rather buying a Z890 board and getting the CPU to go with it. Last time around I built an X299 system because none of the desktop boards had enough PCI-e lanes. Z890 is finally pretty OK. If X870E had been an actual improvement in that regard, I probably would have bought a 9950X by now after the Arrow Lake launch, but instead I've just been waiting.
 
They also ask him if Intel has any plans to do something like what AMD is doing with X3D. And he smiled and said no comment.

Intel has been working on a sort of L4 cache tile, called Adamantine cache. But it got squashed and R&D has effectively stopped.

However, now that Intel CPUs are being made at TSMC, they can do the exact same vcache stacking as AMD.

I won't be surprised if there is an Arrow Lake refresh which includes some V-Cache versions.
 
I thought I was going to be building an Arrow Lake ITX last night. But the contact frame I ordered, which specifically said it was compatible with LGA 1851 CPUs-------was not compatible.

So I had to eat shit and order Thermal Grizzly's new contact frame, which is $35 goddamn dollars. And it won't be here until Monday. And I think it JUST became available, because it wasn't on Amazon a few days ago when I ordered the incompatible one.
 
I'm having strange performance issues with my 285k setup. I'm either in full performance mode or I get stuttery reduced performance.
If I restart my machine it fixes it every time, but I can't figure out what's triggering it.

On "GameTurbo" power plan, XMP set, AI overclocking.
I'll have nothing open aside from Discord and Steam, but Rocket League, Valorant, or anything I play will either be in limp mode or normal...

I did the BIOS update last week, and Armoury Crate tells me I'm 100% up to date on drivers. Same for Windows updates.
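One way to narrow it down - a minimal sketch, assuming Python with the third-party psutil package (pip install psutil): log the reported CPU clock and load once a second, then compare a normal session against a stuttery one to see whether "limp mode" is the chip being held at reduced clocks or something else.

import time
import psutil  # third-party: pip install psutil

while True:
    freq = psutil.cpu_freq()                  # average reported clock, MHz
    load = psutil.cpu_percent(interval=None)  # CPU load since the previous call, %
    mhz = freq.current if freq else float("nan")  # some Windows boxes report a static value
    print(f"{time.strftime('%H:%M:%S')}  {mhz:7.0f} MHz  {load:5.1f}% load")
    time.sleep(1)

If the clocks look normal in both states, a per-core monitor like HWiNFO would be the next step.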
 
I don't doubt a lot of fixes can happen, and those +20% figures as well. But are we talking about just fixing gaming performance in the sense of the cases where the 285K is slower than the 13600K or worse, or in the general sense that even when it does well, it's still usually slower than the 14700K?

There's enough room for something to be fixed and still be unimpressive here.
 
I got my Arrow Lake ITX up and running. It definitely has quirks. Using an Asrock Z890i Nova Wifi ITX board with a 265K.

Updated the BIOS with Flashback before the first boot.

First boot was fine. XMP for DDR5 7200 worked perfectly. (will eventually tweak timings, etc).

Very snappy in Windows. Maybe even snappier than 7800X3D.

It certainly runs quite a bit cooler in games.

The E-cores actually add a lot of heat density. 8 P-cores + 12 E-cores at 150 watts is......I don't even dare run a full Cinebench pass on this little copper AXP90-X47 cooler. 8P + 8E at 150 watts sits comfortably in the 80s through a complete Cinebench pass.
Similarly, I like to use OCCT's stability test with SSE instructions. 8+12 jumps to 95C in like 4 seconds. At 8+8 I can run it for a couple of minutes and it will hang in the mid 80s.

And the way it expresses heat is very different. I tried out a 14700 non-K with this same cooler. 100 watts from that, in any reasonable core config, would hit the 90s. In games, the 14700 was basically mid 80s, and would sometimes push 100 watts and hit the 90s. The 265K doesn't often get past 71C with this cooler. The 7800X3D was the same.



Performance in the FF14 Dawntrail benchmark is really bad, just like Gamers Nexus' review data.

Riven runs normally. It actually runs 3-5 frames better at 8+8 than at 8+12. The heat density probably keeps individual cores from boosting as high.

Elden Ring runs normally------except when I'm staring close up at a wall, I lost 70fps compared to a 7800X3D LOL. ~148fps compared to ~224fps.

Afterburner can't detect the CPU's power usage yet. CoreTemp or OCCT work for power use in Cinebench or OCCT itself.

Deactivating cores causes the boot failsafe in my mobo's logic to trip. I have it set to try twice on a failed boot before it warns me to go back into the BIOS.
When I deactivate cores, it fails twice, then boots and tells me it failed and I should go into the BIOS. But if I just let it resume to Windows-----the cores are deactivated like I wanted and it works normally. I dunno if that's a CPU problem or a BIOS problem.

I dunno if I will keep it with this small cooler. It's totally fine and quiet for gaming. But I bought this to do work for a podcast. So I may not put it into the cool new super slim pizza box case I wanted to, and instead stick with my Sliger S610, in which I can use my Noctua NH-C14S or do an AIO if I really want.

It will be interesting to see what Intel cooks up on improvements.

The iGPU crashing was fixed in the early November BIOS update. So I haven't had any of the "instability" which reviewers were upset about.
I do also have my Windows install limited to 23H2 only, with local group policy.
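For anyone who wants to replicate that 23H2 pin without digging through the Group Policy editor: a minimal sketch, assuming Python run as administrator on Windows, of setting the documented Windows Update policy values that the "Select the target Feature Update version" policy controls.

import winreg  # stdlib, Windows-only; must run elevated

# Pin feature updates via the documented TargetReleaseVersion policy
# values (the same thing the group policy writes to the registry).
key_path = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    winreg.SetValueEx(key, "TargetReleaseVersion", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "ProductVersion", 0, winreg.REG_SZ, "Windows 11")
    winreg.SetValueEx(key, "TargetReleaseVersionInfo", 0, winreg.REG_SZ, "23H2")
print("Windows Update pinned to 23H2")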
 
If the fixes work (and that's a big if), this could be a solid workstation part if you don't need homogeneous cores (lighter VM loads) and gaming is a lower priority. The Z890 platform is decent looking, with lots of PCIe in comparison to even X670E, and some flexibility in how it's used.

IF.
 
Seems fine as is, for workstation and normal software. The new E-cores don't suck. They aren't much worse on IPC than the P-cores.

I'm not expecting a magical rebound in gaming performance. However, we just had a similar show with AMD and Zen 5, where they acted surprised at the reviews and insisted that their internal testing had been more performant. And then, after chipset drivers, Windows updates, and BIOS updates, there's been marked improvement.

We'll see with Intel. And it is strange that both brands released side-grade CPU generations and also said similar things in response to reviews. Very weird.
 
The E-cores seem to be good enough that it gives credence to the notion that in a generation or two they could stop having separate P- and E-cores - make a single core that's small enough and good enough, and rack the core counts up high enough on a chip, like AMD does.
 
I guess we'll find out. I ordered a 285k this morning. Newegg sent me a back-in-stock email, but when I looked they were bundling them, and none of the bundles included any of the boards on my list, so I checked around. Amazon was taking orders for delivery 11/29-12/9 for $599 (so MSRP), so I ordered from them (Amazon, not some marketplace seller). I figure I'll get it around the time the first batch of fixes are supposed to be ready, so I'll just leave it in the shrink wrap until then. If it can roughly keep up with a 14900k after the fixes, I'll buy a board and start building. If not, I'll send it back and scratch my head a while longer. The Z890 PCI-e lanes are why I like this platform.
 
I'm having strange performance issues with my 285k setup.

Nobody ever got fired for buying Epyc...
 
I'm having strange performance issues with my 285k setup.
On "GameTurbo" power plan, XMP set, AI overclocking.
I dunno what the "GameTurbo" power plan is. Never seen it. Sounds like it may be trying to do some extra sauce with the scheduling----which may actually be bad for Arrow Lake. I would stick to "Balanced" or "High Performance" for now.

I think the forthcoming improvements will probably be tweaks to Windows scheduling, the CPU's internal Thread Director, and other microcode tweaks. Seems to me like the P-cores aren't being utilized correctly. I've seen some speculate it's due to mistakes in how the cache is being handled.
 
There are two points in the FF14 Dawntrail Benchmark where my 265K consistently dips pretty heavily. Preliminary testing-----turning off "multi-threaded optimization" in the Nvidia driver control panel delivers notably better frames at those two points.

Remember, Dawntrail is a particularly poor performing game for Arrow Lake.

I need to do it a few more times to make certain it's truly happening. But I'm pretty dang sure. I'll use a capture card and record it once I'm certain it's real.
 

Dawntrail has made the game surprisingly GPU heavy. My 4090 is actually pegged at 99% usage most of the time now at 4K resolution with max settings. The only time the usage will dip below that is in areas with tons of players, such as the Limsa Docks or New Gridania. The 9800X3D really holds up in those areas, getting 100+ fps.
 
Well they updated the graphics for Dawntrail. And I think they just did it again in a patch like 2 months ago.

Dawntrail is still also very CPU dependent, with a wide range of performance depending on the CPU.

I did a couple more benchmark passes last night, and turning off Multithreaded Optimization in the Nvidia control panel definitely makes a difference for my 265K in the most demanding points of the benchmark. I will capture it in the next couple of days.
It will likely become moot after Intel's updates. But it's interesting nonetheless.
 

For sure. I remember GamersNexus' first review of the 7800X3D showed the 13900K beating it by a significant margin in FF14:

[chart: GamersNexus 7800X3D review, FF14 Endwalker benchmark results]


I actually called him out for this because when I benchmarked my 7800X3D against a friend's 13900K, I got the higher benchmark score in the end, so clearly something wasn't adding up.

252fps vs 294fps is far outside the margin of error, and it turns out the reason for this is that even though his title is "Endwalker Benchmark", he actually only tested SOME SCENES from the entire benchmark, and those scenes just so happened to heavily favor Intel.

[screenshot: the subset of benchmark scenes GamersNexus actually tested]


Fast forward to the 9800X3D launch: it looks like he's changed his testing methodology to include either the entire benchmark run or at least more scenes, rather than some cherry-picked, Intel-favored scene. Now all the scores fall where you'd expect them to, with the 7800X3D gaining a ridiculous 100fps:

[chart: GamersNexus 9800X3D review, FF14 Dawntrail benchmark results]


Honestly, I'm not sure why he even used "only some of the benchmark" in the first place? That seems borderline cherry-picking to me vs using the entire benchmark run. Like, how do you know the scenes that you hand-picked for the data aren't favoring one CPU over another?
 
Saving time on testing is likely the reason. The first two scenes of the Dawntrail benchmark, for example, are useless for CPU testing. Then the 3rd scene is heavily CPU dependent. The 4th scene has a couple of CPU-heavy moments. And the 5th and final scene is somewhat balanced----probably favors the GPU.
 

IMO, there's no point in saving time if you just end up producing incorrect data as a result of it. Anyone who watched the 7800X3D review and mainly plays Dawntrail would be led to believe that the 13900K absolutely stomps it on that game, only to be misled. The 7800X3D going from 252fps to 353fps while the 13900k only goes from 295fps to 300fps isn't the result of some crazy Windows optimization or BIOS update on the 7800X3D side, it's just that proper testing methodology was now being done.
 
It's tough to say what actually happened. But in the 7800X3D review's numbers for FF14, the 5800X3D is also performing much worse, relative to the numbers they posted for the 5800X3D in the 9800X3D review's numbers for FF14 Dawntrail.

There have been lots of changes to FF14 since Endwalker. I wouldn't be surprised if it "hits" the X3D cache a lot better now.
 

That's the thing though, X3D has always slapped in FF14. When I benchmarked Endwalker against a friend's 13900K, it was already winning even back then, which is the opposite of GamersNexus' results. I really think if he had just included the entire benchmark instead of hand-picking certain scenes, he would have gotten the same results as his Dawntrail numbers. The changes made to Dawntrail affect the GPU but not really the CPU; if I were to run a CPU test on Endwalker and Dawntrail using 720p lowest settings, I would get the same score on both benchmarks. Anyways, once Intel releases some fixes for your 265k, I'm sure it will perform at a high enough level that you shouldn't see those crazy dips anymore.
 
Seems fine as is, for workstation and normal software.
Oh it's "fine" - but I, and most [H] folks, aren't buying for "fine."
 
Oh it's "fine" - but I, and most [H] folks, aren't buying for "fine."
I'd argue there isn't anything other than "fine" available unless you have a specific use case in mind. 9800X3D is great for a straight up gaming rig, but it's not enough cores for my other uses and Intel has better chipsets (more PCI-e lanes) if you want to build more of a workstation with a card or two and more M.2 storage. Then dual CCD and e-core setups are both annoying for gaming. 9800X3D is great if all you want to do is game and you're building a rig that just has a board, proc, ram, vid card and an SSD or two in it, but it's no all-rounder and 9950X3D won't be either thanks to AMD's inferior chipsets. For some workloads 8 cores is shit compared to 16 or 8+16. No, the rule of today is "you must sacrifice" unless you're building a single use machine. Plenty of "fine" options, but for some of us there are no good ones.
 
Can't wait for Zen 6 and 12+ core CCDs. Hopefully latency is not overwhelming.
 
This is why I'm waiting for the moment. I want to see 9950X3D, and then I'll decide how I'm building. Just hoping no massive tariffs before then. I have a workstation and a gaming / hobby box, so I get to be a little more flexible.

Right now, if I HAD to build, I'd do a 9800X3D in the gaming box and Arrow Lake in the workstation, or Storm Peak in the workstation and eat the cost, as that's gonna SUCK.
 
Clock speeds on these are weird.

During games or multicore loads, my 265K does 5.38 on two P-cores. Then the rest are a mix of 5.2 and 5.1.

I did a per-core overclock. The two fastest cores which do 5.5 on single/dual thread loads, are now 5.5 always. And the other P cores are 5.4 always. E-cores bumped up to 4.8, from 4.6 stock.

I made no voltage adjustment and it was nearly stable. Though FF14 benchmark did have a couple of hitches. So I increased LLC to level 3 for now. CPU is now totally stable.

Temps are certainly higher. But, not bad. I'll see if I can manually tune voltage to keep stability, but bring temps down. Pretty good FPS increases in games.

There is a ton of complexity for overclocking Arrow Lake. I haven't touched cache speed or ring speed, etc. For now I just wanted to see what would happen if I turned a couple of dials...
 
Intel should have named this Arrow Lake air refresher "Waterfalls down by the Sand and Rocks".

It would be in their best interest to skip a refresh. I'm an Intel-only person.
 
It will be interesting if you share what happens with the watts after the overclock :)
Did they fall or rise?
 