Zen 5 is slower in some apps and games though, and sometimes less efficient, so there are some slight regressions. But that's on the desktop chips; for all I know it's a flawless arch for datacenters.

Everything I'm seeing says the DC kit is amazing - especially if you need the core counts. And yeah, sometimes it's slightly slower - but also sometimes it's slightly faster. It's mostly a wash, at least for now. I'm hearing rumors that X3D is going to be impressive, though.
11th gen was worse. 14th gen was not great, either, depending on whether you do or do not care about efficiency (and if so, how consistently).

11th gen wasn't necessarily worse than 10th gen. It added PCIe 4.0 and brought a needed IPC improvement over Skylake.
No matter how you look at it, the raw performance difference between the 7950X, 9950X, 13900K, 14900K and 285K is actually minuscule.

Well, I could update the gaming box with a 285 and pray they fix it (or it won't matter at 4K), stick with AMD on the workstation, and drop in a 9950X...
Or put a 285 in the workstation, since it's solid on multicore and I don't need a full 24-core stack right now - but worry about RAM - and drop a 9800X3D into the gaming box.

Or buy the best: a 9950X in the workstation and the 9800X3D in the gaming box, and give up on having an Intel system for the moment, which gets fun when engine bugs hit and you want a different arch to test on.
Intel apparently bet too much on 10nm being ready "any moment now", lost a few years and all their advantage in core design, and 11th gen was the result of a pointless effort to backport what was meant to be a 10nm part to 14nm++.

This is true. But I think the reality is actually just that single-threaded performance is plateauing, and that that delay gave AMD and Apple a chance to catch up. (I also said so here.) Basically just as confirmed by the chart in your next post. The exception to this seems to be Apple's SoCs, because they're SoCs. If you pack all compute and memory physically together and give 0 fucks about expandability, you can eke out a lot more single-threaded performance. If you then add a bunch of accelerators for various tasks and deftly hide them behind a software stack you completely control, you get real measurable workload speedups. (And you also render a lot of benchmarking pointless.)
After digesting a lot of reviews I've concluded that the 285k is actually better than the 9950x for my use case. (I've also concluded that the quality of reporting on these products has materially dropped since 2010.) Hear me out:
With the exception of AVX-512 workloads, the 285k and 9950x are basically on par for productivity, trading wins. I daily drive Debian. Maybe the latter is 1% faster. For gaming neither is a winner, but since I don't care about Cyberpunk 2077 and I don't care about FPS over 120 (my GPU is the bottleneck), who cares.
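Since AVX-512 is the one real capability gap here (Zen 5 exposes it, Arrow Lake does not), it's easy to check what a chip actually reports under Debian. A minimal sketch reading /proc/cpuinfo:

```python
# Minimal AVX-512 check on Linux: scan the CPU flags in /proc/cpuinfo.
# A Zen 5 part should list avx512f and friends; Arrow Lake should list none.
def avx512_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return {fl for fl in line.split() if fl.startswith("avx512")}
    return set()

flags = avx512_flags()
print("AVX-512 present:", bool(flags))
print("variants:", ", ".join(sorted(flags)) or "none")
```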
Unfortunately the X870/X870E platform sucks. All the boards I can find from brands I'd actually buy seem to give you too many M.2 storage slots and awful PCIe options. If I want a full x16 slot for a GPU and an x4 slot for a cheap Intel 10GbE NIC, I can either get the cheap $250 Asus PRIME Z890-P WIFI on the Intel platform, or I have to spring for at least the $310 TUF GAMING [blah blah] X870 board. But the X870 board is materially worse, since it has 2 fewer PCIe slots and two of the M.2 slots compete with the PCIe slots for bandwidth. WTF?
The X870E platform at least bumps the number of PCIe lanes, but costs at least $400 for a board. (The ProArt X870E-CREATOR WIFI has almost comparable PCIe and M.2 options, except for a bunch of PCIe slot/M.2 port bandwidth sharing.) Outrageous!
(The more expensive boards have more USB-4 connectivity, but the Intel board gives me a single TB-4 and that's enough. More expensive ones come with TB-5.)
I could get a PCIe 4.0 x1 10GbE NIC with the same Marvell chip the ProArt comes with, but it's $50-$100.
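For what it's worth, the lane math works out for an x1 card: a single PCIe 4.0 lane runs at 16 GT/s, and after 128b/130b encoding that's still well above what 10GbE needs (ignoring protocol overhead, which eats a bit more in practice). A quick back-of-envelope sketch:

```python
# Back-of-envelope check: can a PCIe 4.0 x1 link feed a 10GbE NIC?
GT_PER_S = 16.0        # PCIe 4.0 raw rate per lane (gigatransfers/s)
ENCODING = 128 / 130   # 128b/130b line encoding
LANES = 1

usable_gbit = GT_PER_S * ENCODING * LANES  # per direction
print(f"PCIe 4.0 x{LANES}: ~{usable_gbit:.2f} Gbit/s usable")  # ~15.75
print("covers 10GbE line rate:", usable_gbit > 10.0)           # True
```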
Even with the current price cuts (9950X for $584) the Intel platform makes more sense. If only the 285k was real and I could buy one.
X870 is basically where the B series was before; the only ones justified at the high end are the X870E/X670E. I looked hard at the 670 Meg Ace, as it would probably do what you want, or the ProArt as mentioned. I also looked at the higher-end X870 Aorus Master; that had two x4 slots, I believe.

I bought a 265k 2 days ago, myself.
Cross-posting, as the Arrow Lake review thread is flooded with chatter about the history of Intel's CPUs >_>
Starting at around 13 minutes in, Robert Hallock of Intel says they know what's wrong. It's a combo of things (BIOS, firmware, OS, etc.), and they are on red alert to fix it. Full details will be made public soon.

Timestamped later in the video, he gives a loose ETA on the fixes of maybe the end of November, and then says they intend to publicly explain each part of the problem "line-by-line". He also goes on to insist that Arrow Lake will deliver Raptor Lake gaming performance parity, with less power use.
https://www.youtube.com/live/P2OHRH7221w?si=NbQ3ey4ynqwWHOzS&t=1563
They also ask him if Intel has any plans to do something like AMD is doing with X3D. And he smiled and said no comment.

Interesting. But even if it manages RL parity, the 9800X3D has crushed Intel for gaming. There isn't a single game anymore that Intel can dominate.
And I'm sure 16c X3D parts are coming too, pushing AMD's lead even further in heavily multi-threaded games.
If the fixes work (and that's a big if), this could be a solid workstation part if you don't need homogeneous cores (lighter VM loads) and gaming is a lower priority. The Z890 platform is decent looking, lots of PCIe in comparison to even X670E, and some flexibility in how it's used.

IF.

Seems fine as is, for workstation and normal software. The new e-cores don't suck. They aren't much worse on IPC than the P cores.
I guess we'll find out. I ordered a 285k this morning. Newegg sent me a back-in-stock email, but when I looked they were bundling them, and none of the bundles included any of the boards on my list, so I checked around. Amazon was taking orders for delivery 11/29-12/9 at $599 (so MSRP), so I ordered from them (Amazon, not some marketplace seller). I figure I'll get it around the time the first batch of fixes is supposed to be ready, so I'll just leave it in the shrink wrap until then. If it can roughly keep up with a 14900k after the fixes, I'll buy a board and start building. If not, I'll send it back and scratch my head a while longer. Z890 PCI-e lanes are why I like this platform.
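On the lane point: as I read the platform diagrams, the structural difference is the chipset uplink. Z890 hangs off a DMI 4.0 x8 link, while the AM5 chipsets (X670E/X870E included) share a PCIe 4.0 x4 link back to the CPU. A rough sketch of what that means, assuming ~2 GB/s per Gen4 lane per direction:

```python
# Rough chipset-uplink comparison (per direction). Assumes ~2 GB/s per
# PCIe 4.0 lane after encoding; real throughput is lower with protocol
# overhead. Everything hung off the chipset (extra M.2, SATA, USB, an
# x4 NIC slot) shares this one link.
GBPS_PER_GEN4_LANE = 2.0

uplinks = {
    "Z890 (DMI 4.0 x8)": 8,
    "X670E/X870E (PCIe 4.0 x4)": 4,
}

for name, lanes in uplinks.items():
    print(f"{name}: ~{lanes * GBPS_PER_GEN4_LANE:.0f} GB/s to the CPU")
```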
I'm having strange performance issues with my 285k setup. I'm either in full performance mode or I get stuttery reduced performance.
If I restart my machine it fixes it every time, but I can't figure out what's triggering it.
On "GameTurbo" power plan, XMP set, AI overclocking.
I'll have nothing open aside from Discord and Steam, but Rocket League, Valorant, or anything I play will either be in limp mode or normal...
I did the BIOS update last week; Armoury Crate tells me I'm 100% up to date on drivers. Same for Windows updates.
I dunno what the "GameTurbo" power plan is. Never seen it. Sounds like it may be trying to do some extra sauce with the scheduling, which may actually be bad for Arrow Lake. I would stick to "Balanced" or "High performance" for now.
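For A/B-testing power plans without digging through vendor software, Windows' built-in powercfg works from a script too. A small sketch; the GUIDs below are the stock Balanced and High performance schemes, while a vendor plan like "GameTurbo" would show up under its own GUID in the list:

```python
# Sketch: list and switch Windows power schemes via the built-in powercfg
# CLI. Run from an elevated prompt if the switch is refused.
import subprocess

BALANCED = "381b4222-f694-41f0-9685-ff5bb260df2e"          # stock Balanced
HIGH_PERFORMANCE = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"  # stock High performance

subprocess.run(["powercfg", "/list"], check=True)  # show all installed schemes
subprocess.run(["powercfg", "/setactive", HIGH_PERFORMANCE], check=True)
```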
There are two points in the FF14 Dawntrail benchmark where my 265K consistently dips pretty heavily. Preliminary testing suggests that turning off "multi-threaded optimization" in the Nvidia driver control panel delivers notably better frames at those two points.

Remember, Dawntrail is a particularly poor-performing game for Arrow Lake.

I need to run it a few more times to make certain it's truly happening. But I'm pretty dang sure. I'll use a capture card and record it once I'm certain it's real.
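A frame-time log would put numbers on those dips too. A sketch assuming a PresentMon-style CSV with an MsBetweenPresents column (the file name is hypothetical; adjust the column to whatever your capture tool writes):

```python
# Sketch: average FPS and 1% lows from a frame-time log, to quantify dips
# instead of eyeballing a capture.
import csv
import statistics

def summarize(path: str) -> None:
    with open(path, newline="") as f:
        frametimes_ms = [float(r["MsBetweenPresents"]) for r in csv.DictReader(f)]
    fps = [1000.0 / ft for ft in frametimes_ms]
    slowest = sorted(fps)[: max(1, len(fps) // 100)]  # worst 1% of frames
    print(f"avg FPS:     {statistics.mean(fps):.1f}")
    print(f"1% low FPS:  {statistics.mean(slowest):.1f}")
    print(f"worst frame: {max(frametimes_ms):.1f} ms")

summarize("dawntrail_run.csv")  # hypothetical log from a benchmark pass
```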
Dawntrail has made the game surprisingly GPU-heavy. My 4090 is actually pegged at 99% usage most of the time now at 4K resolution with max settings. The only time the usage will dip below that is in areas with tons of players, such as the Limsa docks or New Gridania. The 9800X3D really holds up in those areas, getting 100+ fps.
Well they updated the graphics for Dawntrail. And I think they just did it again in a patch like 2 months ago.
Dawntrail is still very CPU-dependent, though, with a wide range of performance depending upon the CPU.
I did a couple more benchmark passes last night, and turning off Multithreaded Optimization in the Nvidia control panel definitely makes a difference for my 265K, in the most demanding points of the benchmark. I will capture it in the next couple of days.
It will likely become moot after Intel's updates. But it's interesting nonetheless.
For sure. I remember GamersNexus' first review of the 7800X3D showed the 13900K beating it by a significant margin in FF14:
[attachment 691476: FF14 chart from the 7800X3D review, 13900K well ahead]
I actually called him out for this because when I benchmarked my 7800X3D against a friend's 13900K, I got the higher benchmark score in the end, so clearly something wasn't adding up.
252 fps vs 294 fps is far outside the margin of error, and it turns out the reason is that even though his title says "Endwalker Benchmark", he actually only tested SOME SCENES from the entire benchmark, and those scenes just so happened to heavily favor Intel.
[attachment 691482]
Fast forward to the 9800X3D launch, and it looks like he's changed his testing methodology to include either the entire benchmark run or at least more scenes, not just some cherry-picked, Intel-favored scene. Now all the scores fall where you'd expect them to, with the 7800X3D gaining a ridiculous 100 fps:
[attachment 691477: FF14 chart from the 9800X3D review, 7800X3D roughly 100 fps higher]
Honestly, I'm not sure why he even used only some of the benchmark in the first place. That seems borderline cherry-picking to me vs using the entire benchmark run. Like, how do you know the scenes you hand-picked aren't favoring one CPU over another?
Saving time on testing is likely the reason. The first two scenes of the Dawntrail benchmark, for example, are useless for CPU testing. Then the 3rd scene is heavily CPU-dependent. The 4th scene has a couple of CPU-heavy moments. And the 5th and final scene is somewhat balanced; it probably favors the GPU.
IMO, there's no point in saving time if you just end up producing incorrect data as a result. Anyone who watched the 7800X3D review and mainly plays Dawntrail would be led to believe that the 13900K absolutely stomps it in that game, only to be misled. The 7800X3D going from 252 fps to 353 fps while the 13900K only goes from 295 fps to 300 fps isn't the result of some crazy Windows optimization or BIOS update on the 7800X3D side; it's just that proper testing methodology was now being done.
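Put in percentage terms, those two uplifts aren't even the same kind of thing; a quick check on the numbers above:

```python
# Relative gains between the two reviews' FF14 numbers. A ~40% jump on one
# CPU next to ~2% on the other points at methodology, not optimization.
def gain_pct(before: float, after: float) -> float:
    return (after - before) / before * 100

print(f"7800X3D: +{gain_pct(252, 353):.1f}%")  # ~ +40.1%
print(f"13900K:  +{gain_pct(295, 300):.1f}%")  # ~ +1.7%
```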
It's tough to say what actually happened. But in the 7800X3D review's numbers for FF14, the 5800X3D is also performing much worse relative to the numbers they posted for it in the 9800X3D review.

There have been lots of changes to FF14 since Endwalker. I wouldn't be surprised if it "hits" the X3D cache a lot better now.
Oh it's "fine" - but I, and most [H] folks, aren't buying for "fine."Seems fine as is, for workstation and normal software. The new e-cores don't suck. They aren't much worse on IPC than the P Cores.
I'm not expecting a magical rebound in gaming performance. However, we just had a similar show with AMD and Zen 5, where they acted surprised at the reviews and insisted that their internal testing had been more performant. And then, after chipset drivers, Windows updates, and BIOS updates, there's been marked improvement.
We'll see with Intel. And it is strange that both brands released side-grade CPU generations and also said similar things in response to reviews. Very weird.
I'd argue there isn't anything other than "fine" available unless you have a specific use case in mind. The 9800X3D is great for a straight-up gaming rig, but it's not enough cores for my other uses, and Intel has better chipsets (more PCI-e lanes) if you want to build more of a workstation with a card or two and more M.2 storage. Then dual-CCD and e-core setups are both annoying for gaming. The 9800X3D is great if all you want to do is game and you're building a rig that just has a board, proc, RAM, vid card and an SSD or two in it, but it's no all-rounder, and the 9950X3D won't be either thanks to AMD's inferior chipsets. For some workloads 8 cores is shit compared to 16 or 8+16. No, the rule of today is "you must sacrifice" unless you're building a single-use machine. Plenty of "fine" options, but for some of us there are no good ones.
This is why I'm waiting, for the moment. I want to see the 9950X3D, and then I'll decide how I'm building. Just hoping no massive tariffs before then. I have a workstation and a gaming / hobby box, so I get to be a little more flexible.
It will be interesting if you share what happens with the watts after the clock.

Clock speeds on these are weird.
During games or multicore, my 265K does 5.38*** on two P cores. Then the rest are a mix of 5.2 and 5.1
I did a per-core overclock. The two fastest cores, which do 5.5 on single/dual-thread loads, are now at 5.5 always, and the other P cores are at 5.4 always. E-cores bumped up to 4.8, from 4.6 stock.

I made no voltage adjustment and it was nearly stable, though the FF14 benchmark did have a couple of hitches. So I increased LLC to level 3 for now; the CPU is now totally stable.

Temps are certainly higher, but not bad. I'll see if I can manually tune voltage to keep stability while bringing temps down. Pretty good FPS increases in games.
There is a ton of complexity for overclocking Arrow Lake. I haven't touched cache speed or ring speed, etc. For now I just wanted to see what would happen if I turned a couple of dials...
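For reference, the dials turned so far, using the clocks quoted in this post (GHz; "stock load" is what the chip did during games/multicore before the tune):

```python
# Per-core tune on the 265K as described above. No manual voltage changes;
# LLC raised to level 3 for stability.
tune = [
    # (core group, stock load clocks, tuned all-core)
    ("best two P cores (5.5 boost)", "5.38",      "5.5"),
    ("other P cores",                "5.1 / 5.2", "5.4"),
    ("E-cores",                      "4.6",       "4.8"),
]
for group, stock, tuned in tune:
    print(f"{group}: {stock} -> {tuned}")
```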