Intel Core i9-14900K Review - Reaching for the Performance Crown

was this already posted?

Intel's New 14th Gen CPUs Get a Boost To Gaming Performance With APO Feature (theverge.com)

Posted by msmash on Wednesday October 25, 2023 @02:00PM from the moving-forward dept.
Intel's latest 14th Gen chips aren't a huge improvement over the 13th Gen in gaming performance, but a new Intel Application Optimization (APO) feature might just change that. From a report: Intel's new APO app simply runs in the background, improving performance in games. It offers impressive boosts to frame rates in games that support it, like Tom Clancy's Rainbow Six Siege and Metro Exodus. Intel Application Optimization essentially directs application resources in real time through a scheduling policy that fine-tunes performance for games and potentially even other applications in the future.

It operates alongside Intel's Thread Director, a technology that's designed to improve how apps and games are assigned to performance or efficiency cores depending on the performance needs. The result is some solid gains to performance in certain games, with one Reddit poster seeing a 200fps boost in Rainbow Six Siege at 1080p. "Not all games benefit from APO," explained Intel VP Roger Chandler in a press briefing ahead of the 14th Gen launch. "As we test and verify games we will add those that benefit the most, so gamers can get the best performance from their systems."
 
No, not random. If processor load goes over a certain amount, it triggers the turbo.
Alright, cool. Because Nexus was talking about the 1% lows being an issue with Thread Director: once in a while it'll throw an E-core into the game when a P-core was supposed to handle it, and the 1% lows drop. At least that was my understanding.
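For what it's worth, here's roughly how that 1% low number gets computed from a frame-time log - just a sketch in Python, assuming a one-column CSV of per-frame milliseconds like PresentMon/CapFrameX-style captures produce (the file name is made up):

[CODE]
# Sketch: one common way to compute average FPS and "1% low" FPS from frame times.
# Assumes frametimes.csv holds one frame time in milliseconds per line.
import csv

with open("frametimes.csv", newline="") as f:
    frame_ms = [float(row[0]) for row in csv.reader(f) if row]

frame_ms.sort()                                   # slowest frames end up at the back
worst = frame_ms[-max(1, len(frame_ms) // 100):]  # the slowest 1% of frames

avg_fps = 1000 / (sum(frame_ms) / len(frame_ms))
low_1pct_fps = 1000 / (sum(worst) / len(worst))

print(f"Average FPS: {avg_fps:.1f}")
print(f"1% low FPS:  {low_1pct_fps:.1f}")  # the number that tanks when a thread lands on an E-core
[/CODE]

So a misplaced thread that only hiccups a handful of frames barely moves the average, but it drags that 1% low figure right down.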
 
was this already posted?

Intel's New 14th Gen CPUs Get a Boost To Gaming Performance With APO Feature (theverge.com)

Posted by msmash on Wednesday October 25, 2023 @02:00PM from the moving-forward dept.
Intel's latest 14th Gen chips aren't a huge improvement over the 13th Gen in gaming performance, but a new Intel Application Optimization (APO) feature might just change that. From a report: Intel's new APO app simply runs in the background, improving performance in games. It offers impressive boosts to frame rates in games that support it, like Tom Clancy's Rainbow Six Siege and Metro Exodus. Intel Application Optimization essentially directs application resources in real time through a scheduling policy that fine-tunes performance for games and potentially even other applications in the future.

It operates alongside Intel's Thread Director, a technology that's designed to improve how apps and games are assigned to performance or efficiency cores depending on the performance needs. The result is some solid gains to performance in certain games, with one Reddit poster seeing a 200fps boost in Rainbow Six Siege at 1080p. "Not all games benefit from APO," explained Intel VP Roger Chandler in a press briefing ahead of the 14th Gen launch. "As we test and verify games we will add those that benefit the most, so gamers can get the best performance from their systems."
I could have sworn it was mentioned, but I don't see it. I saw a post of somebody testing Metro with APO turned on, and it seems to be sacrificing frame time stability for maximum FPS.

APO on:
[attachment: frame-time graph, APO on]


APO off:
[attachment: frame-time graph, APO off]


The more frequent and dramatic dips in the first image are concerning.
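If anyone wants to put numbers on "more frequent and dramatic dips" instead of eyeballing the graphs, here's a quick-and-dirty sketch - the file names and the 1.5x-median threshold are arbitrary picks of mine, assuming one-column frame-time CSVs for each run:

[CODE]
# Sketch: count how many frames in each run blow past 1.5x that run's median frame time.
import csv
import statistics

def dip_stats(path, threshold=1.5):
    with open(path, newline="") as f:
        frame_ms = [float(row[0]) for row in csv.reader(f) if row]
    median = statistics.median(frame_ms)
    dips = sum(1 for t in frame_ms if t > threshold * median)
    return dips, max(frame_ms), median

for label, path in [("APO on", "apo_on.csv"), ("APO off", "apo_off.csv")]:
    dips, worst, median = dip_stats(path)
    print(f"{label}: {dips} dips beyond 1.5x median, worst {worst:.1f} ms, median {median:.1f} ms")
[/CODE]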
 
I could have sworn it was mentioned, but I don't see it. I saw a post of somebody testing Metro with APO turned on, and it seems to be sacrificing frame time stability for maximum FPS.

APO on:
[attachment: frame-time graph, APO on]

APO off:
[attachment: frame-time graph, APO off]

The more frequent and dramatic dips in the first image are concerning.
Looks like multi-GPU back in the early 2000s. Frames through the roof, consistency worse than a drug-resistant STI…
 
I could have sworn it was mentioned, but I don't see it. I saw a post of somebody testing Metro with APO turned on, and it seems to be sacrificing frame time stability for maximum FPS.

APO on:
[attachment: frame-time graph, APO on]

APO off:
[attachment: frame-time graph, APO off]

The more frequent and dramatic dips in the first image are concerning.
Not terribly. The dips happen in roughly the same places, and while they look bigger, they bottom out at roughly the same level, so whatever is happening in those frames makes the GPU cry regardless of how the threads are optimized.
 
The 4770K to 4790K Haswell refresh was more interesting than this. It's basically a 13900KS rebrand.

I'm going to watch my local Micro Center for when they drop prices on 13th gen parts and try to snag a 13900k or ks.

Just got a Z790 Apex Encore mb in today; been running a delidded, mid-binned 13900K on a Z690-E Strix mb for the past 7 months. I bought the 12700K first, got the 13900K a week later, and took it into surgery within the first month (delid/etc.). It's actually the most amazing upgrade (CPU gen) I've encountered since the Core 2 Duo / 2700K / 8086K / 10700K voyage over the past 16+ years. I do lots of video work (multi-day renders/upscales/etc.) and the 13900K still blows my mind every time I do real work with the thing.

The Strix Z690-E has been flawless, frankly, and the rig still runs the same as with the semi-rigorous tuning profiles discussed in that link above ... but the Strix mb has always bugged me with that ridiculous heatpipe in that completely empty single Gen5 NVMe slot (I taped a sticker on it that says "dumb") ... and being a dual-DIMM kind of guy, never foreseeing the need for more than 64GB or 96GB of RAM and also expecting/hoping for faster dual-DIMM higher-capacity 64/96GB kits in the coming months ... I figured why not use that $160 gift card at AMZ and just get the Apex mb for the shelf, as I don't intend to touch the system till I see that the delid/LM needs a redux. Currently happy with a CL32 64GB DDR5 kit, but if 7000+ became available in that same higher-capacity kit, I figured the Apex board might handle such DIMM kits a bit more readily than the Z690 Strix. And the Encore was so reminiscent of my old and dear Max X CODE board that my inner slut couldn't resist.

Then saw this today at my local Micro Center and thought the same as you ... the only bothersome nag is what a 14900KS might be, if it happens ... may just go get one in the morning. Not a bad price, and an easy local swap option if it's an absolute turd...

[attachment: Micro Center price listing]
 
Just got a Z790 Apex Encore mb in today; been running a delidded, mid-binned 13900K on a Z690-E Strix mb for the past 7 months. I bought the 12700K first, got the 13900K a week later, and took it into surgery within the first month (delid/etc.). It's actually the most amazing upgrade (CPU gen) I've encountered since the Core 2 Duo / 2700K / 8086K / 10700K voyage over the past 16+ years. I do lots of video work (multi-day renders/upscales/etc.) and the 13900K still blows my mind every time I do real work with the thing.
I also had a similar experience - going from a 9900K to a 13900K was night and day. I have not delidded it or overclocked it to any major extent, just set the 4-core turbo to 5.8 GHz.
 
I also had a similar experience - going from a 9900K to a 13900K was night and day. I have not delidded it or overclocked it to any major extent, just set the 4-core turbo to 5.8 GHz.
I went from a 6900K to a 13900KS. That's a 7 generation leap. My mind was 🤯 to say the least lol
 
Just got a Z790 Apex Encore mb in today; been running a delidded, mid-binned 13900K on a Z690-E Strix mb for the past 7 months. I bought the 12700K first, got the 13900K a week later, and took it into surgery within the first month (delid/etc.). It's actually the most amazing upgrade (CPU gen) I've encountered since the Core 2 Duo / 2700K / 8086K / 10700K voyage over the past 16+ years. I do lots of video work (multi-day renders/upscales/etc.) and the 13900K still blows my mind every time I do real work with the thing.

The Strix Z690-E has been flawless, frankly, and the rig still runs the same as with the semi-rigorous tuning profiles discussed in that link above ... but the Strix mb has always bugged me with that ridiculous heatpipe in that completely empty single Gen5 NVMe slot (I taped a sticker on it that says "dumb") ... and being a dual-DIMM kind of guy, never foreseeing the need for more than 64GB or 96GB of RAM and also expecting/hoping for faster dual-DIMM higher-capacity 64/96GB kits in the coming months ... I figured why not use that $160 gift card at AMZ and just get the Apex mb for the shelf, as I don't intend to touch the system till I see that the delid/LM needs a redux. Currently happy with a CL32 64GB DDR5 kit, but if 7000+ became available in that same higher-capacity kit, I figured the Apex board might handle such DIMM kits a bit more readily than the Z690 Strix. And the Encore was so reminiscent of my old and dear Max X CODE board that my inner slut couldn't resist.

Then saw this today at my local Micro Center and thought the same as you ... the only bothersome nag is what a 14900KS might be, if it happens ... may just go get one in the morning. Not a bad price, and an easy local swap option if it's an absolute turd...

[attachment: Micro Center price listing]
Well my last daily Intel system was a 4790k, so I reckon either choice will be awesome for me.

In my case, it's going on the test bench rig to have fun with HWBot and stuff.

My daily driver is Zen 3. Really, I am not impressed with the power draw on these Intels for daily use, but these AMDs are quite boring to play with when I get that itch. There's not much to do past PBO + CO, and RAM OC can't do much due to FCLK.

I think it's going to be that way for a while for me. The fun bench rigs will be on Intel while my dailies stay on Zen.
 
The 14900 is STILL using the Intel 7 process, i.e. 10nm. When are they going to make a CPU on a better node, one that's less power-hungry and hopefully faster? I'll pass again on this gen.
 
The 14900 is STILL using the Intel 7 process, i.e. 10nm. When are they going to make a CPU on a better node, one that's less power-hungry and hopefully faster? I'll pass again on this gen.
Meteor Lake's CPU tile is on Intel 4.
 
Sounds like it tweaks the scheduler to make sure stuff like "the main thread running on an E-core" doesn't happen. Which I thought was supposed to be built into Windows 11.
I tell you what, I can't find a download that works for the DTT (Dynamic Tuning Technology) driver that it says it needs.
 
Sounds like it tweaks the scheduler to make sure stuff like "the main thread running on an E-core" doesn't happen. Which I thought was supposed to be built into Windows 11.
Well, this one tweaks in real time, so it's not so much about keeping things off the E-cores, but maybe ensuring 3 things run on 3 E-cores while a 4th E-core brings the iGPU into play to deal with some BS texture-compression thing, so the 6 P-cores you have can do what they do best.

Intel has a lot of silicon that goes unused when gaming or working, and they are trying to clean things up. This is maybe step 3 in a 12 step process.

It can’t fix actual bottlenecks but it can try to mitigate them with some procedural AI voodoo mumbo jumbo.
 
Intel's New 14th Gen CPUs Get a Boost To Gaming Performance With APO Feature

Alright, cool. Because Nexus was talking about the 1% lows being an issue with Thread Director: once in a while it'll throw an E-core into the game when a P-core was supposed to handle it, and the 1% lows drop. At least that was my understanding.

Yeah, I was just helping someone out on the Cyberpunk 2077 forums who was having a problem where the game would load up, everything was OK for a few minutes, and then it would drop to like 14-20 fps. After a few days of back and forth we figured out the solution was to disable E-cores. He had a 13600 (6P+8E).

Heck, I thought the whole reason for making Windows 11 was that it was the only way to take advantage of Intel's big/little architecture? Isn't the Windows scheduler supposed to know which cores to run games on? And now there's another layer, and that doesn't even work? Or does that replace the Windows scheduler? If that's the case they could've just done that with Windows 10 and saved everyone the headache.
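If disabling E-cores in the BIOS feels too heavy-handed, another thing worth trying is just pinning the game to the P-cores from user space. Rough sketch with Python/psutil - the process name and the assumption that logical CPUs 0-11 are the P-core threads on a 6P+8E chip are mine, so double-check the layout in Task Manager first:

[CODE]
# Sketch: restrict an already-running game to the P-cores only (Windows, needs psutil).
# Assumes a 6P+8E part where logical CPUs 0-11 are the hyperthreaded P-cores.
import psutil

GAME_EXE = "Cyberpunk2077.exe"   # hypothetical process name - check Task Manager
P_CORE_CPUS = list(range(12))    # assumed P-core logical CPU indices

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(P_CORE_CPUS)  # scheduler can now only place its threads on P-cores
        print(f"Pinned PID {proc.pid} to logical CPUs {P_CORE_CPUS}")
[/CODE]

Same end result as the BIOS toggle for that one game, but the E-cores stay available for everything else.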
 
Yeah, I was just helping someone out on the Cyberpunk 2077 forums who was having a problem where the game would load up, everything was OK for a few minutes, and then it would drop to like 14-20 fps. After a few days of back and forth we figured out the solution was to disable E-cores. He had a 13600 (6P+8E).

Heck, I thought the whole reason for making Windows 11 was that it was the only way to take advantage of Intel's big/little architecture? Isn't the Windows scheduler supposed to know which cores to run games on? And now there's another layer, and that doesn't even work? Or does that replace the Windows scheduler? If that's the case they could've just done that with Windows 10 and saved everyone the headache.
Now that the issue is exposed, let's see if Intel/Microsoft can come up with a solution that always uses P-cores first for games, no matter what.
 
Now that the issue is exposed, let's see if Intel/Microsoft can come up with a solution that always uses P-cores first for games, no matter what.
Really, they already knew it was going to be a problem, which is why they were telling people, before the processors even came out, that they probably weren't going to be compatible with a lot of older games and would no longer even natively support DirectX 9. As far as this case goes, I doubt they'll see that thread among the millions of threads on the internet, but who knows, stranger things have happened. But I guess the bottom line is that they already know they have problems, hence why APO is a thing.

It just kind of sucks for gamers who go for Intel builds, because we were just living in the golden era of PC gaming where, most of the time, as long as you had a decent processor and graphics card, things for the most part just worked. Intel's kinda taking us a step back to the '90s era of hardware and game (in)compatibility. If they would just work on their efficiency, they wouldn't need E-cores to begin with. Why not do what they've always done and sell low-power chips to normies and power hogs to power users? Doesn't really make a lot of sense to me. I guess it all comes down to performance & battery life on laptops vs AMD. Just seems like there could've been a better way.
 
Just did 2 system upgrades:

9900K -> 14900KF + 7200 MT/s + P5800X (came from 905p) + 4090 (came from 3090)

7700K -> 14600KF + 6800 MT/s + 905p

Performance jump is pretty massive. The 14900KF runs on a 420mm AIO, and so far so good. The P5800X is stupid fast. The OS is crazy snappy with that insane 4K QD1 random read performance.
For the first time in decades of building and probably 100 builds, I literally got a half-bad Intel CPU (14600KF). Tore my hair out with Z790 swaps/returns, memory swaps, PSU swaps, reseats galore. No joy.
A straight-up bad Intel CPU out of the box: it could not stay stable at stock speeds with any memory timing, yet it booted fine and acted mostly fine. It would freeze immediately during MemTest86 and Prime. Finally got the replacement swapped in 2 days ago and it's perfectly stable, and a beastly little i5 chip at that. That was weird - I've never seen a CPU defect like that. Generally they're either broken or they're not; quality differences among the working ones, sure. I don't think Amazon, Newegg or ASUS are real happy with me right now, with 5+ expensive, used product returns in about a week and a half. So yes, it's possible to have a CPU so bad out of the box that it still boots fine but cannot handle the slightest of tests. I perhaps hold the record for the worst 14600KF that'll ever leave the Intel factory.

So now I actually get to start using and enjoying these two. The 3090 -> 4090 jump was massive too.
 
Damn though, some of these deals on 13th gen now (well, I say "now"; I don't actually know how new this is):
[attachment: Micro Center 13700K bundle listing]


That is a lot of power for $450. Even adding in the price of a better cooler, it's still a great deal all things considered. As usual, though, MC sort of selects the shittiest motherboard and RAM kits that they need to get rid of, so no telling if it'll work. According to Buildzoid, supported RAM settings on Intel have a lot to do with the silicon quality of the CPU, so even getting XMP to work can be a chore if you get screwed. But still, even with the 7800X3D existing, if someone just needed a CPU for cheaper, that's a heck of a bundle. I don't think the 7700X quite measures up to the 13700K for raw gaming performance, and that's the only other bundle they have that's reasonable. Any 7800X3D will set you back ~$150-200 or so additional. Maybe this is the real win of 14th gen lmao.

I wonder how well it would run off of a Thermalright Peerless Assassin. If it's even remotely well, it's a great buy.

Edit: Actually... hmm..

[attachment: 13700K vs 7700X gaming benchmark chart]


Never mind; for some reason I expected a 13700K to measure up to the 7700X better than this, but it doesn't. It's only better in some titles. The 7700X bundle is $50 cheaper. https://www.microcenter.com/product...ries-32gb-ddr5-6000-kit,-computer-build-combo . Oh well. I think it does do better in application loads, though.
 
Damn though, some of these deals on 13th gen now (well, I say "now"; I don't actually know how new this is):
[attachment: Micro Center 13700K bundle listing]

That is a lot of power for $450. Even adding in the price of a better cooler, it's still a great deal all things considered. As usual, though, MC sort of selects the shittiest motherboard and RAM kits that they need to get rid of, so no telling if it'll work. According to Buildzoid, supported RAM settings on Intel have a lot to do with the silicon quality of the CPU, so even getting XMP to work can be a chore if you get screwed. But still, even with the 7800X3D existing, if someone just needed a CPU for cheaper, that's a heck of a bundle. I don't think the 7700X quite measures up to the 13700K for raw gaming performance, and that's the only other bundle they have that's reasonable. Any 7800X3D will set you back ~$150-200 or so additional. Maybe this is the real win of 14th gen lmao.

I wonder how well it would run off of a Thermalright Peerless Assassin. If it's even remotely well, it's a great buy.

Edit: Actually... hmm..

[attachment: 13700K vs 7700X gaming benchmark chart]

Never mind; for some reason I expected a 13700K to measure up to the 7700X better than this, but it doesn't. It's only better in some titles. The 7700X bundle is $50 cheaper. https://www.microcenter.com/product...ries-32gb-ddr5-6000-kit,-computer-build-combo . Oh well. I think it does do better in application loads, though.
That is a killer deal. Anyone looking for a new build should entertain this combo. It's almost like getting the board for free.
 
Yeah, I was just helping someone out on the Cyberpunk 2077 forums who was having a problem where the game would load up, everything was OK for a few minutes, and then it would drop to like 14-20 fps. After a few days of back and forth we figured out the solution was to disable E-cores. He had a 13600 (6P+8E).

Heck, I thought the whole reason for making Windows 11 was that it was the only way to take advantage of Intel's big/little architecture? Isn't the Windows scheduler supposed to know which cores to run games on? And now there's another layer, and that doesn't even work? Or does that replace the Windows scheduler? If that's the case they could've just done that with Windows 10 and saved everyone the headache.
The Windows 11 scheduler takes in data from the Intel Management Engine, which is controlled via the chipset drivers, so when you're seeing weird scheduling issues, start with the chipset drivers. I like grabbing those from the Intel site directly rather than Windows Update, since they're often tucked into the optional updates instead.

Also gotta keep an eye out for the core-optimization mods in Cyberpunk too. Most are designed for AMD CPUs and assume all your cores are P-cores; they override the Windows scheduler and try to fill out a full CCD, expecting 8 performance cores and 16 corresponding threads, not 12 performance threads plus 4 decidedly-not-performance threads.
 
Damn though, some of these deals on 13th gen now (well, I say "now"; I don't actually know how new this is):
[attachment: Micro Center 13700K bundle listing]

That is a lot of power for $450. Even adding in the price of a better cooler, it's still a great deal all things considered. As usual, though, MC sort of selects the shittiest motherboard and RAM kits that they need to get rid of, so no telling if it'll work. According to Buildzoid, supported RAM settings on Intel have a lot to do with the silicon quality of the CPU, so even getting XMP to work can be a chore if you get screwed. But still, even with the 7800X3D existing, if someone just needed a CPU for cheaper, that's a heck of a bundle. I don't think the 7700X quite measures up to the 13700K for raw gaming performance, and that's the only other bundle they have that's reasonable. Any 7800X3D will set you back ~$150-200 or so additional. Maybe this is the real win of 14th gen lmao.

I wonder how well it would run off of a Thermalright Peerless Assassin. If it's even remotely well, it's a great buy.

Edit: Actually... hmm..

[attachment: 13700K vs 7700X gaming benchmark chart]

Never mind; for some reason I expected a 13700K to measure up to the 7700X better than this, but it doesn't. It's only better in some titles. The 7700X bundle is $50 cheaper. https://www.microcenter.com/product...ries-32gb-ddr5-6000-kit,-computer-build-combo . Oh well. I think it does do better in application loads, though.
Unless they are pairing that CPU with a 4090, it would be worth looking at other charts, because those two CPUs really are pretty much within the margin of error. I mean, a 4 fps average lead on the top end for the AMD versus a 2 fps advantage to the Intel is basically nothing.
You are talking about a difference that could be completely erased by having Discord or a few browser tabs open.
Most reviewers run these tests on as barebones and clean an install as they can, in order to showcase the hardware, but that is not a realistic scenario for most users; you are going to have a lot more going on personally than the reviewers do.

I want to see reviewers doing a suite with Discord, Steam, Epic, GeForce Experience, and Chrome with at least a half dozen tabs going, maybe the control software for the RGB suite that controls the keyboard and mouse, maybe a few active BT connections as well.
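Short of scripting actual Discord/Chrome installs, reviewers could at least fake the "lived-in desktop" with a few light busy-loop workers running alongside the benchmark pass. A minimal sketch - the worker count and the ~10% duty cycle are numbers I pulled out of the air, not a calibrated model of those apps:

[CODE]
# Sketch: spawn a handful of low-duty background workers to compete for the scheduler
# while a benchmark pass runs. Stand-ins for launchers/chat/browser tabs, nothing more.
import multiprocessing as mp
import time

def light_worker(duty_cycle=0.10):
    # Burn CPU for ~10% of each 10 ms slice, sleep the rest, forever.
    while True:
        busy_until = time.perf_counter() + 0.01 * duty_cycle
        while time.perf_counter() < busy_until:
            pass
        time.sleep(0.01 * (1 - duty_cycle))

if __name__ == "__main__":
    workers = [mp.Process(target=light_worker, daemon=True) for _ in range(6)]
    for w in workers:
        w.start()
    input("Background load running - start the benchmark pass, press Enter here to stop.")
[/CODE]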
 
Unless they are pairing that CPU with a 4090, it would be worth looking at other charts, because those two CPUs really are pretty much within the margin of error. I mean, a 4 fps average lead on the top end for the AMD versus a 2 fps advantage to the Intel is basically nothing.
You are talking about a difference that could be completely erased by having Discord or a few browser tabs open.
Most reviewers run these tests on as barebones and clean an install as they can, in order to showcase the hardware, but that is not a realistic scenario for most users; you are going to have a lot more going on personally than the reviewers do.

I want to see reviewers doing a suite with Discord, Steam, Epic, GeForce Experience, and Chrome with at least a half dozen tabs going, maybe the control software for the RGB suite that controls the keyboard and mouse, maybe a few active BT connections as well.

The 7700X bundle is also $50 cheaper, though. So it depends on how tight the buyer's budget is and what their goals are... supposing they can both be properly cooled by a $35 cooler such as the Thermalright Peerless Assassin, anyway. If the Intel throttles more under that cooler, then the comparison keeps getting worse, because you might end up at an $80-100 premium to bring the Intel build up to parity once the cooler is factored in. Supposing the PSU requirement is equal, anyway.

I might be using an old benchmark for it, though. I thought I remembered the 13700K very occasionally trading blows with the 7800X3D in a couple of games depending on the review, so I figured it was definitely going to be better than the 7700X; I was honestly surprised to see this graph. It's definitely at least the better buy if the user needs to do CPU encoding work or something in addition to gaming. Then again, the AMD chip is also on a platform that isn't dead. Shrug... whatever; if you're playing something like FF14, it's definitely a good buy.
 
Unless they are pairing that CPU with a 4090, it would be worth looking at other charts, because those two CPUs really are pretty much within the margin of error. I mean, a 4 fps average lead on the top end for the AMD versus a 2 fps advantage to the Intel is basically nothing.
You are talking about a difference that could be completely erased by having Discord or a few browser tabs open.
Most reviewers run these tests on as barebones and clean an install as they can, in order to showcase the hardware, but that is not a realistic scenario for most users; you are going to have a lot more going on personally than the reviewers do.

I want to see reviewers doing a suite with Discord, Steam, Epic, GeForce Experience, and Chrome with at least a half dozen tabs going, maybe the control software for the RGB suite that controls the keyboard and mouse, maybe a few active BT connections as well.
Most of these programs will sit in the background and mostly just use more RAM. Yes, the FPS will drop by a few frames, but only a little, because they mainly just add a bit of latency and nothing more.
Of course, if Chrome eats most of your RAM then you can get a stutter :)

Reviewers normally shut all of this down because it can randomly drop your FPS, and they're looking for clean, repeatable results.
 
Most of these programs will sit in the background and mostly just use more RAM. Yes, the FPS will drop by a few frames, but only a little, because they mainly just add a bit of latency and nothing more.
Of course, if Chrome eats most of your RAM then you can get a stutter :)

Reviewers normally shut all of this down because it can randomly drop your FPS, and they're looking for clean, repeatable results.
I know that; I'd still like to see the best-case scenario next to the average-case one too.

Background programs still use CPU cycles, and how well their respective Thread Directors manage that is an important benchmark as well, in my opinion.
 
Yes, what if you built the thing yourself?

[EDIT] NVM found them.. but it does mean that a bunch of people don't have this enabled...[/EDIT]

So I have used/enabled this and found that Windows lets the machine idle lower, and frequencies generally are much more granular - no performance figures at this stage. Why is this not common knowledge?
 
I know that; I'd still like to see the best-case scenario next to the average-case one too.

Background programs still use CPU cycles, and how well their respective Thread Directors manage that is an important benchmark as well, in my opinion.
Thread Director isn't something you can just download; most of it is implemented in Win 11.
From what I know, Windows 11 manages threads well, but Windows 10 produces more FPS. That's because Win 11 has more built-in garbage and there's a change in memory <-> page file management (and it hits Ryzen more than Intel).

So if we stick strictly to that background noise from idle programs: you can't get very accurate results, because they just produce random activity. But they are well managed and get moved to the E-cores, so the P-cores stay "free" - with somewhat higher latency, of course, but this works fine. So FPS will be a few points lower, but only because of the slightly higher latency and nothing more.

I looked at a benchmark with background programs a year or two ago and there wasn't much difference, but I don't remember where.
 