i9 9900K / i7 8700K?

I guess I don't get the logic here. Going from 4/4 to 4/8 does way more for gaming than going from 4/8 to 6/6. The only reason we don't see a boost going from 6/6 to 6/12 is GPU limitations, or possibly thread limitations in games.


9700k will be slightly faster than 8700k in games with 8 major threads.

It will also come with a soldered heat spreader. No expense or risk of delidding needed.

I can see the "don't buy the 9900k" logic in this thread, but anyone thinking you shouldn't buy the 9700k instead is just smoking crack.

Adding hyperthreading gives a theoretical 30% performance increase over 6 cores/6 threads. But you need software that scales to TWELVE threads to realize that 30% increase.

Bumping the core count from 6 cores/6 threads to 8 cores/8 threads gives a theoretical 33% increase. But you only need software that scales to EIGHT threads to realize it.

We are very unlikely to see new consoles with 16 threads until the 2020s. For now, 8 threads is the maximum you can use on the PS4/Xbox One.


8700k = 9700k, if you have 12-thread-aware software.
9700k > 8700k, if you have 8-thread-aware software. This is the more likely situation we're going to see in games in the next five years.
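To put that scaling argument in back-of-envelope form, here's a toy throughput model (every number in it is an assumption, not a benchmark: 1.0 unit of throughput per core, ~30% extra from an HT sibling, and perfect scaling up to the software's thread count):

```python
# Toy throughput model. Assumptions (not measured): each core = 1.0 unit,
# an HT sibling adds ~0.3, and software scales perfectly to its thread count.

def throughput(cores, has_ht, sw_threads):
    if sw_threads <= cores:
        return float(sw_threads)                # one thread per physical core
    if has_ht:
        extra = min(sw_threads - cores, cores)  # at most one sibling per core
        return cores + 0.3 * extra
    return float(cores)                         # no HT: extra threads just queue

for sw in (8, 12, 16):
    print(f"{sw:2d}-thread software: "
          f"8700K (6c/12t) ~{throughput(6, True, sw):.1f}, "
          f"9700K (8c/8t) ~{throughput(8, False, sw):.1f}")
```

Under those assumptions the 9700k never trails, and the 8700k only pulls roughly even once software scales to 12 threads and beyond - which is the comparison above in miniature.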

In my experience, having hyperthreading off gives you exactly the same minimum framerates as HT on, as long as the number of major threads used by the software isn't DOUBLE or more the number of cores on the processor. This is why the Core i5s were matching the minimum frame rates of 8-thread processors back when 4-thread engines were big (though now they're having trouble keeping up in games like Battlefield 1 MP).
 
I guess I don't get the logic here. Going from 4/4 to 4/8 does way more for gaming than going from 4/8 to 6/6. The only reason we don't see a boost going from 6/6 to 6/12 is GPU limitations, or possibly thread limitations in games.
In my experience, when running a CPU-bound game that needs more threads than you have cores, HT can make a huge improvement to average FPS. For example, on a dual-core i3, in many current games that need 4 cores the gain can reach 50%, which is well over what we typically see from HT - or it can make a game that just won't run on a dual-core CPU work at all.
But for games that are optimized for both dual core and quad core, it's closer to a 10-20% improvement to average FPS.

The trouble is it's not a smooth experience, typically resulting in some stutters when CPU bound, while a true quad core would have no such problems.
Although, if it's fast enough with HT to become GPU bound or hit an FPS cap, then the stutters may go away.

Maybe these issues have been improved with newer game engines and revisions of HT, I don't know, but a six-core CPU still outperforms a 4-core/8-thread CPU in game benchmarks, particularly in min FPS.

By the time we see any games that really need more than 8 cores, anyone who had the cash to spend on a 9900k will probably have long since upgraded.
 
9700k will be slightly faster than 8700k in games with 8 major threads. [...]

I will say I know VERY little about game development, but if a game does use exactly 8 cores, I don't believe all of the cores will be maxed out, since they are performing completely different tasks.

No matter how many cores a game uses, a CPU with 25% fewer cores but 50% more threads (the 8700k vs. the 9700k) will stay MUCH closer in performance than a CPU with 33% fewer cores and only 33% more threads (the 7700k vs. the 8600k).
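Spelling out those ratios (quick sanity-check arithmetic, nothing more):

```python
# Percent differences behind the two comparisons above: (cores, threads).
pairs = {
    "8700K vs 9700K": ((6, 12), (8, 8)),
    "7700K vs 8600K": ((4, 8), (6, 6)),
}
for name, ((c1, t1), (c2, t2)) in pairs.items():
    print(f"{name}: {1 - c1 / c2:.0%} fewer cores, {t1 / t2 - 1:.0%} more threads")
```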

Already, the latter comparison is VERY close:
https://www.techpowerup.com/reviews/AMD/Ryzen_7_2700X/12.html

Even still, it's important to remember that people often have background loads on their PC. Even something as simple as a file transfer or CCTV monitoring could really sway things in favor of the 8700k, even if the game is programmed for "exactly" 8 cores. Also, it would be interesting to see which would livestream better.

 
I will say I know VERY little about game development, but if a game does use exactly 8 cores, I don't believe all of the cores will be maxed out, since they are performing completely different tasks. [...]

My media server is running 50 background processes right now, but it works smoothly for everything I use it for (games, DVR, Universal Media Server), even though it's only running a Core i3.

You have to understand the difference between major threads and minor threads. Dozens of minor threads running in the background will not max-out the processor, as they do not interrupt the processor very often.

[Image: processes.png]


Each processing interrupt (to run a different process on one of my four hyperthreads) has to flush the current state of the system to the stack, and it must be done BECAUSE I only have four hardware threads to choose from. There is a lot of overhead involved in this operation, but you don't notice any slowdown, because you have a ton of unused processor time to absorb it.

My 2-core/4-thread HTPC i3 continues to game *smoothly* in games with 4 major threads (like Borderlands 2 or The Witcher 3), because those BACKGROUND PROCESSES DON'T INTERRUPT OFTEN, and Windows gives preference to the highest-load threads, letting them run uninterrupted for long stretches.

It's only when you get up to 6-8 MAJOR threads that you start to see some processor hitching on a 2c/4t Core i3: not-so-smooth performance. This is because the demand from those threads significantly exceeds the available hardware threads, so you actually feel the effects of the task-switch overhead.

But LUCKILY FOR ME, the processing performance of my old Core i3 is already giving out - so you don't have to worry, you're going to upgrade your processor anyway.

So yes, you will get the same smooth minimum frame rates on the Core i7 9700k as you would on the 9900k, but because consoles probably won't see an upgrade for several years (plus three more years for game engines to make use of those 16 threads), your system will probably break or get replaced before 16-thread games are in wide availability.

RIGHT NOW we have 6 threads in wide use, with a few performance-munching games using 8 (like BF1). People didn't complain about smoothness in games with 6 major threads running on 4-thread Core i5 processors (Fallout 4, GTA5), because you have some headroom for all those task switches (most MAJOR threads run at 50-75% of a hardware thread's available load each). So, until you double the major thread count and overwhelm the processor, it can handle things fine. The Core i7 9700k should be smooth up to games using 12 threads, and overwhelmed at 16.
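To put rough numbers on that headroom point (the 60% figure below is just an assumption inside the 50-75% range above):

```python
# Rough headroom check: when does major-thread demand overwhelm the hardware?
# Assumption (not measured): each major thread wants ~60% of one hardware thread.

hardware_threads = 4                  # e.g. an old 4-thread Core i5
for major_threads in (4, 6, 8, 12):
    demand = major_threads * 0.6
    verdict = "fine" if demand <= hardware_threads else "overwhelmed"
    print(f"{major_threads:2d} major threads -> {demand:.1f} hardware threads "
          f"of demand on {hardware_threads}: {verdict}")
```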
 
After I watched some videos of the i9 9900k, I think it's not worth it.
Right now, IMO, the best CPU for gaming is the i7 8700k.
What do you think, guys?
 
Yeah, higher turbos guarantee the 9700k a longer useful lifetime than the 8700k (for the high-clock, low-thread-count games that continue to stick around).

And for everything else, having 8 cores is slightly faster than having 6 cores with 12 threads (even before you take the higher clocks into account).
 
I wonder how the i7 9700k (turbo) compares to my i7 4790k OC @ 4.5 GHz.
15% better? I mean only for gaming, mostly X-Plane 11, which is CPU heavy.
What about the temps? Does the i7 9700k get hot like the i9 9900k?
 
^^ The 9900K's main heat problem is driving its hyperthreading. The 9700K will run much cooler, clock for clock.
 
I wonder how a 9900k would clock with hyperthreading disabled? Hmmm... if you have the cooling, I think the 9900k will be fun. The only 9900k I know of in the wild is under custom water, running Cinebench at 5 GHz @ 1.22 V in the low 60s.
 
I wonder how the i7 9700k (turbo) compares to my i7 4790k OC @ 4.5 GHz. [...]


Their engine became more multi-threaded with 11.02:

https://developer.x-plane.com/2017/05/three-performance-optimizations-for-x-plane-11-02/

From here you can see the spikes in CPU usage on a six-core:

https://www.phoronix.com/scan.php?page=article&item=xplane11-amd-nvidia&num=3

Skylake with DDR4 is 10% faster than Haswell at the same clock, so if you get this up to 5.1 GHz all-core, you'll get about 25% faster performance (5.1 / 4.5 × 1.1 ≈ 1.25). Those numbers could go up in the future, but it's unlikely.

It's hard to sift through the Phoronix GPU performance results (page 2) because each setting will have an effect on CPU use. But the fact that all the NVIDIA cards above the 1060 get the same 37 fps at the highest settings says they're CPU limited.
 
I wonder how a 9900k would clock with Hyperthreading disabled?

Hey Kyle, I know you're a busy man - any way you could give the 9900K a go with HT disabled? I'd be curious about the OC comparison when you get a 9700K in to review. Could be valuable info concerning the power/heat load of HT. Something I'll surely be experimenting with whenever my 9900K arrives.
 
Hey Kyle, I know you're a busy man - any way you could give the 9900K a go with HT disabled? [...]
Kind of like putting a Ryzen chip in game mode, haha. Disable it for higher OCs. I could see that being a thing in situations that needed it.
 
After this Tech Report article, I think the verdict is still out on which i7 is the better gaming CPU:

https://techreport.com/review/34192/intel-core-i9-9900k-cpu-reviewed/11

The 9700k looks to get more fps than the 8700k, but the frame times tell a different story.

In some games such as Hitman and FC5, the time spent beyond 16.7 ms (60 fps) is quite long for the 9700k. This can be FAR more noticeable than getting 180 fps vs. 170 fps. I think more testing needs to be done, but buyers should not be so quick to choose a CPU based on fps alone.
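For anyone unfamiliar with the metric: "time spent beyond 16.7 ms" sums how far every slow frame overshoots the 60 fps frame budget, so a handful of big spikes can dominate it even when the average fps is high. A minimal sketch with invented frame times (not Tech Report's data):

```python
# Total time spent past the 16.7 ms (60 fps) budget, summed per slow frame.
# The frame times below are invented for illustration.

frame_times_ms = [14.2, 15.0, 38.5, 14.8, 16.9, 22.3, 15.1]
budget = 16.7

time_beyond = sum(t - budget for t in frame_times_ms if t > budget)
print(f"time spent beyond {budget} ms: {time_beyond:.1f} ms")  # 27.6 ms
```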

The 9700k does seem to clock better, but that seems to be factored in already, as the 9700k was running at a higher frequency in this test.
 
After this Tech Report article, I think the verdict is still out on which i7 is the better gaming CPU. [...]

For the Hitman results, we're talking fractions of a millisecond here. I don't think you'll notice the difference between 103 minimum fps and 107 minimum fps.

And I wouldn't put too much credence in the "time spent beyond" graphs - the data is obviously wrong.

[Images: Hitman_time_spent_7.png, Hitman_time_spent_8.png, Hitman_time_spent_11.png, Hitman_time_spent_16.png - Tech Report's "time spent beyond" graphs at four thresholds]

How did we go from the 9700k AND 9900k owning the 2600x AND i5 8400 at the 120 fps threshold to trailing the 2600x and i5 8400 at the 60 fps threshold? Those results sure look fucking consistent to me :rolleyes:

These results should be extracted from the SAME TEST data (so the graphs should follow the same basic shape, just with different threshold cutoffs), but they are not. So the top few WINNERS should stay at the top of the graph (with some small ranking swaps, depending on your threshold).

The Far Cry 5 test results have the same problem.

See here, in a previous review, where all the test results in the "time spent beyond" graphs are from the same data set; you can tell because the graph positions don't change MASSIVELY from one threshold to the next.

Instead, you have at most a few rankings swap as you apply different thresholds.

https://techreport.com/review/33568/gaming-and-streaming-with-amd-second-gen-ryzen-cpus/3

[Images: Far_Cry_5_time_spent_5.png, Far_Cry_5_time_spent_7.png, Far_Cry_5_time_spent_8.png - the same "time spent beyond" graphs from the earlier Ryzen review]
 
Yeah, some of the data sure does look inconsistent. Still, it would be nice to see more comparisons of which CPUs have a more consistent frame rate in very physics-intensive scenes, which is usually when you are fighting for your life.
 
The total number of slow frames in the >16.7 ms (60 fps) graph is maybe 5-10 (66 ms worth of frames slower than 60 fps) out of what looks like ~9500 for Hitman. What those graphs say is "the 9900K stuttered a couple times worse than the Ryzen." If this were consistent across many titles it would indicate an issue, yeah, but it clearly isn't. They only benchmarked 6 games, and out of those 6, which CPU has the worst frame time spikes looks like completely random noise to me.

If you want to get that deep into the edge cases of microstutter benchmarking, you really need to benchmark a lot more titles, in a lot more contexts.
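As a fraction of the whole run, that's basically nothing (taking the 66 ms figure at face value, and assuming the ~60-second run length mentioned later in the thread):

```python
# Share of a one-minute run spent past the 16.7 ms budget.
run_ms = 60_000    # assumed 60-second benchmark run
beyond_ms = 66     # time spent beyond 16.7 ms, from the post above
print(f"{beyond_ms / run_ms:.2%} of the run spent beyond 16.7 ms")  # 0.11%
```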
 
I wonder how a 9900k would clock with hyperthreading disabled?
Every little bit helps, but as far as games go, tweaking a little more out of the RAM will typically yield more than an extra 100 MHz from the CPU.
Going from 3200C14 to 4000C17 brings more performance for me than taking my 6700k from its stock 4-4.2 GHz to 4.7 GHz in most games; Crysis 3 is the only one I have found that prefers the CPU OC.

If you compared the 8700k, 9700k, and 9900k at different RAM speeds, provided the top frequency still had decent timings, you would probably find that the fastest CPU is the one with the highest-speed RAM kit.
At least in any games that don't really benefit from all the extra threads the 9900k provides.
 
Every little bit helps, but as far as games go, tweaking a little more out of the RAM will typically yield more than an extra 100 MHz from the CPU. [...]

Did you have a 3200 C14 set that you OC'd to 4000 C17?
 
How did we go from the 9700k AND 9900k owning the 2600x AND i5 8400 at the 120 fps threshold to trailing the 2600x and i5 8400 at the 60 fps threshold? [...]

The highs were higher and the lows were lower. There's nothing inconsistent about that. Whether that impacts gameplay is up to you, but I think most folks prefer consistent frame pacing to a roller coaster. That's why the [H] reviews, which show the frame rate over time, are so nice. If you see big-ass dips, you can see how the experience may be subpar even if the rest of the graph is higher than the competition.
 
The highs were higher and the lows were lower. There's nothing inconsistent about that. [...]

How does a processor with hyperthreading suffer the same fate as a processor without hyperthreading? Doesn't that point to the test being worthless in the first place?

Be careful what conclusions you draw here - if the test is valid, then it's just as valid for the 9900k as it is for the 9700k. So the point of the test is utterly empty, because you're just hoping for random chance to produce better test results.
 
How does a processor with hyperthreading suffer the same fate as a processor without hyperthreading? Doesn't that point to the test being worthless in the first place?

Dunno, the 97 and 99 seem to be grouped as a pair in those graphs, so whatever is affecting the underlying architecture seems to be affecting both of them equally. If I saw them with a large disparity in that metric, I'd be concerned about its validity, but there's nothing obviously wrong with their graphs, or with the conclusion that those two processors spend more time at lower fps than the competition. For smooth gameplay in Hitman, the 2600X looks to be a better processor than the 9900k and 9700k.
 
Dunno, the 97 and 99 seem to be grouped as a pair in those graphs, so whatever is affecting the underlying architecture seems to be affecting both of them equally. [...]

The 99th-percentile frame time is a more useful metric, as it discards crap like this and tells you what the worst-case frame times are (but NOT the absolute minimum, which is just random chance). Because games have random stutters, no matter how well programmed they are, and no matter what CPU you use.

It's a 60-second test run; it's not like they go hardcore into the game to get a real feel for stuttering. You'd need at least five minutes to collect enough data to make the frame time thresholds worth anything.

A set of high stutters can be completely random, since no two runs are exactly the same, and they're only running for a minute.

How does the frame time threshold add anything to this review, when the graphs clearly show the same number of massive stutters (with lower minimum frame times from the 2600x)?

[Images: Far_Cry_5_frametime_plot_2600X.png, Far_Cry_5_frametime_plot_9700K.png - frame time plots for the 2600X and 9700K]

But here it shows 1/3 the time spent beyond 16.7ms:

[Image: Far_Cry_5_time_spent_16.png]


Seems to add a ton of insight to the review, would you say?

A millisecond more time spent waiting AT EACH SPIKE for the frame time to recover is not going to make you notice the stutter *less*. Especially when the drop is even larger.

This doesn't quantify HOW LOW the spikes go, only that the spikes happened, and for how long. The difference between 70 and 200 ms is lost in the noise when you consider the entire test run is 60,000 ms.

There are 14 peaks above 16.7 ms for both processors. That means about 5 ms per peak for the 2600x, and about 15 ms per peak for the 9700k. Both processors are inserting the same one-frame delay (the target in this graph is 16.7 ms) every time a spike happens. I don't think you'll notice the difference between the two.
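Reading approximate numbers off those graphs, the per-peak arithmetic is just this (inputs are eyeballed, so treat the outputs as ballpark):

```python
# Approximate inputs read off the Far Cry 5 graphs above.
peaks = 14                                 # spikes crossing 16.7 ms, both CPUs
beyond_ms = {"2600X": 70, "9700K": 200}    # total time spent beyond 16.7 ms

for cpu, ms in beyond_ms.items():
    # ~5 ms and ~14 ms per peak (the post above rounds the latter to 15)
    print(f"{cpu}: {ms} ms over {peaks} peaks = ~{ms / peaks:.0f} ms per peak")
```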
 
The 99th-percentile frame time is a more useful metric, as it discards crap like this and tells you what the worst-case frame times are (but NOT the absolute minimum, which is just random chance). [...]

My post was hyperbole. For 5 or 6 times the cost of the 2600x, I expect the 10% improvement in frame rates the 9700k is showing. The 9700k is clearly the better gaming chip.
 
After I've read/watched a lot of benchmarks, reviews, and tests, I'm still undecided between the i9 9900K and the i7 9700K.
As I wrote in the first post, "The reason for the upgrade is 95% gaming (mostly flight sims; X-Plane 11 is CPU heavy)."
Money isn't an issue. I could get the i9 9900K (some Z390 mainboard and 32 GB DDR4).
It's difficult to make the right choice.
The main concern about the i9 9900K is the high temps.
 
After I've read/watched a lot of benchmarks, reviews, and tests, I'm still undecided between the i9 9900K and the i7 9700K. [...]

Then buy the 9700k... live on the edge! Test it out. If you're not happy in the end, flip it.
 
Thanks.
How do you find those temps?

What are the disadvantages of the i7 9700K with no HT (for gaming)?
 
Because of AMD opening up the Core Wars with Precision Boost 2, and also to justify this chip just a year after the 8700k, Intel had to go nuts with the turbo boost settings.

The 2700x will turbo to 4.1 GHz on ALL CORES if you give it excellent cooling, even though the base clock is rated at 3.7 GHz.

[Image: 15248904718h4e44psb5_2_2.png - 2700X boost clock chart]


As a result, the 9900k's turbo boost speeds with excellent cooling go WAY above base clock:

[Image: clock-analysis.jpg - 9900k clock speed analysis]


9900k 1-thread load turbo = 5.0 GHz
9900k 8-thread load turbo = 4.7 GHz

Now, for comparison, Haswell has fixed turbo boost speeds (it can't go higher than these numbers):

4790k: 4.0 GHz for full load (8 threads).
4790k: 4.4 GHz for single thread.

In addition, Skylake has about 10% faster IPC than Haswell, so you multiply the clock speed ratio by 1.1.

Single thread = 5.0 / 4.4 × 1.1 ≈ 25% faster.
8 threads = 4.7 / 4.0 × 1.1 ≈ 30% faster under the same load.
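The same estimate as a tiny helper, if you want to plug in other clocks (the 1.1 IPC factor is the rough Skylake-over-Haswell estimate above, not a measured value):

```python
# Estimated speedup = (new clock / old clock) * IPC ratio.
def est_speedup(new_ghz, old_ghz, ipc_ratio=1.1):
    return new_ghz / old_ghz * ipc_ratio

print(f"single thread: {est_speedup(5.0, 4.4):.2f}x")  # 1.25x -> ~25% faster
print(f"8 threads:     {est_speedup(4.7, 4.0):.2f}x")  # 1.29x -> ~30% faster
```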

Since Intel introduced Coffee Lake, these processors have been turbo boosting a lot more aggressively than their predecessors. The base clock only shows up if you have the worst motherboard VRMs combined with the stock Intel cooler.

https://www.tomshardware.com/reviews/intel-coffee-lake-core-i5-8400-cpu,5281.html

The lowly i5 8400 stays at 3.8 GHz or higher at full load, even though its base clock is only 2.8 GHz.
 
Okay, thank you very much.
One last question: if I run the i7 9700k with turbo boost, does that mean that when I play X-Plane 11 (CPU bound) the processor will run at 4.9 GHz / 4.8 / 4.7?
 
One last question: if I run the i7 9700k with turbo boost, does that mean that when I play X-Plane 11 (CPU bound) the processor will run at 4.9 GHz / 4.8 / 4.7? [...]


Yes, for different levels of load those should be your turbo speeds, but only if you have excellent cooling.

https://www.phoronix.com/scan.php?page=article&item=xplane11-amd-nvidia&num=3

It just gets a bit confusing because X-Plane now has some limited threading, so I can't say for sure exactly what clock speeds you will be running at. But rest assured: if the game is loading your processor enough to drop it to 4.5 GHz, you will see a massive speedup over your old 4790k.

So, the 9700k should have you covered either way :D
 
Thank you, defaultluser.
I'm not a hardware expert, but I think the Noctua NH-D15 could handle the i7 9700k with turbo boost.
Am I wrong?
With my OC @ 4.5 GHz, the temps under heavy load (X-Plane 11) average 62 °C.
 
Should be fine. Just be sure that you have a Skylake socket adapter (I seem to recall it being slightly different).
 
Skylake socket adapter??
What for?
I think the Noctua NH-D15 should fit the new LGA 1151 mainboard.
 