New i7-8086K CPU Anniversary Edition

That's more perception than reality, though, especially with a 980 Ti at 1440p. There are tons of benchmarks showing no substantial difference between the 6700K/7700K and the 8700K (which the 8086K essentially is) running a 1080 Ti at 1080p, and everything evening out completely at 1440p. You hit a GPU bottleneck far earlier with a 980 Ti.

Yeah, but saying "it doesn't matter if my CPU is better because of a GPU bottleneck" doesn't magically make the CPUs the same. The 8700K consistently outperforms the 7700K in 720p and 1080p benchmarks, which is what you use to determine WHICH CPU IS BETTER. The reason 720p benchmarks exist is that "being GPU bottlenecked" is a lot easier in single-player AAA benchmarks than it is in real-time multiplayer games. Hell, there are even CPU-bottlenecked single-player games at 1440p, like the Total War series. If you play PUBG or SC2 or BF1, or probably any battle royale or FPS game current or upcoming, and you want to hit 144+ FPS on a high-refresh-rate display, you're going to be CPU bottlenecked.

For AAA games at max settings where you're looking at <120 FPS, then yeah, you're GPU bottlenecked, but that's not all of gaming; it's just all that high-resolution benchmarks represent, and they only cover a small portion of real-world gaming. IMO publications do a poor job of explaining this, and it leads to people dismissing the 720p benchmarks because "no one plays at 720p." Well, of course they don't; that's not why 720p benchmarks are important. They're important because they stand in for all the games that are CPU intensive but too hard to benchmark, which, it turns out, is many, many games.
 

You're wasting your time writing stuff I've known for years, but never mind.

Here's the thing: even at 1080p it outperforms by very small margins, or none at all. Nice processor, but it definitely isn't giving you much over your older 6700K even with something more powerful than a 980 Ti, and definitely nothing while paired with one. So yeah, it does make the CPUs the same in terms of delivered gaming performance, because you hit the GPU bottleneck well before the CPU one, while the performance difference in gaming (at least currently) is minimal to begin with. Hence your improved gaming experience is a matter of subjective perception rather than substantially improved framerate.
 
No. Definitely reality. Average FPS isn't much higher, but the in-the-moment drops are better. Unless my FPS counter, frame time counter, and perception are all lying to me. Don't take my word for it though.



DF had really weird results compared to every other site and reviewer with this launch comparison.
 

I don't get why you're repeating things that have been proven wrong. Once again, I literally just posted a single-player benchmark that shows a 15% improvement in minimums going from the 7700K to the 8700K, and 7% in averages. And I also told you that most multiplayer games are CPU bottlenecked. That's all there is to it. There is a significant improvement from a better CPU in many important classes of games.

Repeating that they're GPU bottlenecked over and over again is not going to make it correct.
 


Not really. DF's results were consistent and expected given their methodology. Another example would be Gamers Nexus: the 8700K showed strong gains in the 1% and 0.1% lows in several games.

Many other websites simply reported average FPS, which is an outdated methodology. My own benchmarking found that the 8700K spent far less time below 70 FPS in BF1 on 64-player servers than the 6700K did. Clearly, even with an older 980 Ti, there were areas where the CPU was the bottleneck. When you're doing high-refresh-rate gaming, every frame helps.
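If you're not familiar with how 1% / 0.1% lows are computed, here's a rough Python sketch (made-up stand-in numbers and a hypothetical frame-time log, and not necessarily GN's or DF's exact method), just to show the idea:

import statistics

# Stand-in per-frame render times in milliseconds (a real log has thousands of frames).
frame_times_ms = [8.3, 8.4, 8.2, 16.9, 8.5, 8.3, 33.1, 8.4, 8.3, 8.5]

def percentile_low(times_ms, pct):
    """Average FPS over the slowest pct% of frames (at least one frame)."""
    worst = sorted(times_ms, reverse=True)       # longest frame times first
    n = max(1, int(len(times_ms) * pct / 100))   # how many of the worst frames to keep
    return 1000 / statistics.mean(worst[:n])

avg_fps = 1000 / statistics.mean(frame_times_ms)
print(f"avg: {avg_fps:.0f} fps")
print(f"1% low: {percentile_low(frame_times_ms, 1):.0f} fps")
print(f"0.1% low: {percentile_low(frame_times_ms, 0.1):.0f} fps")

Two runs can have nearly identical averages but very different 1% / 0.1% lows, which is exactly the gap being argued about here.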
 

So 15% and 7% with a beefier card are worth an upgrade? That uplift is too small to produce a perceptible improvement in the gaming experience.
 
If one thinks they need the best, then they will find it, pay for it, and justify and/or create its importance.

If one wishes for the best value, then they will find it, happily pay for it, and gladly tell the other person they threw away their money.

One understands diminishing returns; to the other, 5% means fifty.

Neither is bad; just don't be deluded about it. There would be no need to lecture someone about value when, obviously, value has nothing to do with it for them.
 
What mobo did you use? I'm thinking of going with the ASRock Taichi... any thoughts?

I got a Taichi, no complaints, and it has a cool RGB heatsink on the front that you can adjust in the BIOS.

This binned chip gets 10-12 FPS increases over the 8700K in some games, as seen in one video with a 1080 Ti.
 


The value really depends on the person. I'm extremely sensitive to frame time stutter. Microstutter is why I stopped SLI gaming and bought a G-Sync high-refresh-rate monitor. To me, anything I can do to increase the smoothness of gameplay is worth it. While my average FPS may not change much, the frame time drops have been reduced. It's also why I'm waiting to replace my HDTV with one that supports 120Hz and VRR. I avoid console gaming not because I don't like consoles, but because they stutter so much.

To be fair to your points, I am planning to upgrade from the 980 Ti to the 1180 Ti (or whatever they call it) when it arrives. Also, my wife is less sensitive to the stutter.

It's like when DLP TVs were a thing. Some people were very sensitive to the rainbow effect; I never saw it due to a slight difference in how my eyes perceive color. Another example is how some people can use a cheap headset and feel the audio is great, while others need $1,000 studio monitors to be satisfied.

Ultimately, it is, as you said, based on personal perception. You value average frame rates more (at least it seems that way); I value those in-the-moment drops like DF showed in their benchmarks. The 8700K/8086K may not have the value to you that it does to me. That doesn't mean there isn't value there, or measurable differences in performance.
 

The difference in minimums is minimal (lol) too between the 6700K/7700K and the 8700K. It's definitely not as dramatic as your posts portray it, especially if you use a G-Sync monitor, which negates the difference even further, toward zero. I value the overall performance, which for these processors is, bottom line, about the same in any gaming scenario, with a negligible lead for the 8700K.
 


I don't think you understand the difference between frame time and frame rate. While the overall minimums and averages may be similar, the time spent at or near the minimums is reduced pretty substantially by the 8700K. That reduces the amount of time each frame takes to draw (i.e., the frame time). When you have a sudden drop from 120 FPS to 60 FPS, you double the frame time, and that will be noticeable. The more you can reduce those drops, which the 8700K does, the less stutter you'll perceive.

Again, I refer you to the Digital Foundry video, which shows some of those instances in real time.

If you solely count average frame rates, we should all still be using Sandy Bridge 2600K CPUs.

Also, having G-Sync in and of itself does not, in any way, stop frame time dips. Those dips are there regardless. What G-Sync does, in the case of high-refresh-rate monitors, is let you avoid VSYNC stutter by manipulating the refresh rate in real time.
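To put that arithmetic in concrete terms, a quick Python sketch (nothing assumed beyond frame time = 1000 / FPS):

# Frame time is just the inverse of frame rate: 1000 ms divided by FPS.
for fps in (144, 120, 100, 60):
    print(f"{fps:>3} fps -> {1000 / fps:5.1f} ms per frame")

# 120 fps is ~8.3 ms per frame; a dip to 60 fps is ~16.7 ms, i.e. one frame
# suddenly takes twice as long to show up. G-Sync matches the refresh to that
# slow frame (no tearing or VSYNC judder), but it cannot make it arrive sooner.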
 

I understand it all very well. Again, the difference is so minuscule that you won't really see it, especially with G-Sync, which completely eliminates any stuttering within its range. But you can keep convincing yourself that your minimums and frame times got all that much better (paired with a 980 Ti, lol?). But hey, not my problem really.
 


Now you're just being rude or trolling. Either way, I'm done with you. You clearly do not understand the difference between frame times and frame rate, because you keep talking about G-Sync doing something IT CAN'T DO. If your frame time doubles from 6ms to 12ms, G-Sync cannot magically create a new frame to avoid that doubling. Now, as the DF video showed, there are times when Crysis 3, A GAME I PLAY, drops by 50% (NOT 15%) on the 7700K to about 100 FPS, whereas the 8700K maintains 150 FPS. That doesn't matter when you're VSYNC limited to 60 FPS, but as I said, I AM using a high-refresh-rate monitor; I cap it at 144Hz. In that scene, the 8700K maintains a constant, smooth frame time with no spikes, whereas the 7700K drops. Even with G-Sync, you lose a frame because the CPU can't feed the video card enough info to render it. G-Sync then adjusts the refresh rate to compensate, but your frame time is still slower. Considering the human eye can perceive up to about 1000 FPS, you will start to notice drops. And yes, even with G-Sync, you can still get stutters. They're far fewer and farther between, but they happen.

Also, if you want to get all "ePeen" about my 980 Ti, I also own another system with two 1080s in SLI and a 7820X at 4.5GHz, all cores, 24/7, with 64GB of DDR4-3200. So don't think your 1080 is anything special, or that I don't know what faster video cards perform like. And I promise you someone on this forum has twin 1080 Tis with even faster CPUs.

Bottom line: The 8700K is faster than your 6700K. Deal with it. Stop giving out bad info.

EVIDENCE

 



Oh my, your impenetrable ignorance is astounding. Keep staring at precise numbers and measuring little FPS leads while I'm talking about the overall gaming experience and smoothness at 60+ FPS (especially in the 100+ range you're playing at) with G-Sync. And there's no way you're dropping from 150 to 100 with a 7700K but miraculously not with an 8700K on a 980 Ti, unless you're playing at 480p. Enjoy the 8700K, but for gaming alone it's a meaningless upgrade over the 6700K, and with a bottlenecking GPU in your case. Bottomest line: you seem to be too touchy. Happy to end this useless discussion.
 
Well.... that was an interesting 4 pages of Sunday morning coffee read. :ROFLMAO:

Just ordered one yesterday, even after reading the reviews. If it's placebo... so be it. I'm not planning on selling the CPU when I'm done using it; at the end of the day it will have a home on a shelf right next to my 486 DX4-120. Picked up the 8086K, an ASUS ROG Maximus Hero, and 16GB of G.Skill 4133. I'll probably swap out the components in my main rig next weekend. If I get all cores at 5-5.1GHz (it will be under water), I'll be happy with the investment.
 

Great choice. Loving my two!

 
My first one was quite the dud: it would always throw MCEs (machine check errors in Linux) after 20-30 minutes and needed +0.1V compared to typical, and I wasted a lot of time trying to get a stable overclock on air. RMA'd it, and the second one works as expected. Didn't hit the lottery, but it's at least average: it can bench at 5.2 with voltage, and it boots at 5.3 but crashes in Cinebench. Running at 5.0 because it's easy.

I don't think people are running 5.0 on air and passing Prime95 with AVX; that sets the CPU on fire and errors out quickly. This is the first PC I've ever built where I said screw it, I don't run 100% AVX/Prime95/IBT/LinPack loads. My 4790K is set to 4.5 because 4.6/4.7 was hard on Prime95 thermals.
 
Got mine installed this afternoon. Under water at 5GHz, all cores linked at 1.225V without issue. It will not boot into Windows under 1.2V. Disabled SVID while testing; I might try adaptive to get the vcore down at idle. Then again, at 1.225V, who cares? lol. Prime95 shows temps in the mid to low 60s.
 

I just picked up a Kraken X52 to replace my Cryorig C1. You're right that I couldn't do 5.0 x 6 on air with AVX; it ended up at 100C. Backing off to 4.7GHz x 6 with a -0.090 offset on VCORE keeps the CPU in the 80s. My case is an NCASE M1, for reference.

If I had enough room for a big Noctua air cooler, I think I could hit 5.0GHz x 6 with it.
 

Yeah, if I drop to 4.8 with lower volts, a full AVX load still hits 100C but only needs a little throttling. At 5.0 with more voltage, AVX drops hard to 3600-3800MHz. I think my replacement chip is bottom 40% (but not bottom 1% like the first, lol); it turns out it can't hold 5.2 while compiling things in Linux (though it runs Cinebench at that speed). That, or Silicon Lottery is overstating how many 8086Ks will reliably do 5.2; they say 60%.
 
^^ Are you setting the vcore manually or letting the mobo sort it out? Before I set the vcore manually, the ASUS board had the CPU at 1.48V under load. It runs quite a bit cooler at 1.2V and change.
 
Well, I got the NZXT X52 installed and all is well. I've been able to do 5GHz on all cores at 1.325V. It's AIDA stable, though it gets hot; synthetics can push it into the low 90s. In typical BF1 gaming I'm seeing 70s to 80s.

Just a reminder I did not delid.
 
I watched a bunch of delid videos yesterday; I think I'm going to try it this weekend. Not for higher MHz, just to possibly knock the 5.0 temps down under load.
 


OK. I had an unstable BIOS issue. Once I updated, I was able to drop the voltage substantially.

XMP mode, turbo on, all cores locked to 50x, max wattage at 4096, LLC at 5, VCORE at 1.28.

AIDA now sits in the low to mid 80s with the Kraken at 100% in a pull config. All other tests ran cooler, and gaming was in the 70s. The system passes RealBench, CPU-Z, Cinebench, the 3DMark stress test, AIDA, and 1 hour of continuous gaming in BF1.

5.1GHz is not stable at this vcore.

I'm seeing about 150 watts of power usage per HWMonitor.

With the CAM software set to performance mode, the Kraken X52 is very quiet when not at full load. At full load the sound is a deep whoosh that is noticeable but not bad. I prefer the Kraken software to the Corsair software.
 
So when do we get 32-core Intel desktop chips? I'm still on a 3930K that has run at 4.6 on all cores 24/7 (with CnQ) since it came out; I've booted Windows to bench it at 5GHz before, and ran it at 4.8 24/7 for the first few years I had it.... It looks like there isn't much of a reason to stay with Intel anymore with Threadripper 2 coming out next month, if I'm using it mostly for video editing / non-gaming multithreaded work. Also running 32GB of RAM; it seems like RAM prices haven't dropped much over the last few years either....
 

I noticed a bit of degradation in my 2600K after 5 years of 24/7 at 4.7GHz; I had to up the voltage a tad.

Reasons to stay with Intel depend heavily on your use case. If you edit in Adobe Premiere and encode to H.264/H.265, it can now take advantage of Quick Sync, which boosts encoding speed substantially. Not sure about other editing apps. Photoshop still enjoys high frequency as well. Threadripper is awesome, but I run ITX systems, so it's a no-go for me. I'm very interested in Zen 2 though.
 
Hunting Prime 95 AVX stability...

So far my 8086K can run games all day long at 5GHz at 1.28 vcore. However, now I'm hunting Prime95 stability. After about an hour of tuning, I got Prime95 v29 stable(ish). I still have more tuning to do.

Vcore at 1.325
AVX offset of 2
System power limit at 150 W, LLC 6 (ends up at about 165 W)

This drops Prime95 back to 4.3GHz when running. However, it's stable, and it isn't thermally throttling on my Kraken X52. BF1 still runs at 5GHz.

I really wish Intel hadn't used their thermal grease under the heat spreader. It would be so much better if it were soldered.
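For anyone wondering why an AVX offset of 2 means 4.8GHz under AVX, here's a rough sketch of the multiplier math in Python (assuming the stock 100MHz base clock; the 4.3GHz I'm actually seeing comes from the 150 W power limit throttling further, not from the offset alone):

bclk_mhz = 100      # stock Coffee Lake base clock
multiplier = 50     # all-core 5.0 GHz target
avx_offset = 2      # drop two bins whenever AVX code is running

normal_ghz = bclk_mhz * multiplier / 1000
avx_ghz = bclk_mhz * (multiplier - avx_offset) / 1000
print(f"non-AVX: {normal_ghz:.1f} GHz, AVX: {avx_ghz:.1f} GHz")  # 5.0 / 4.8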
 

See my thread on my complaints re: 8086K temps. I got around to delidding and got a 20+C temperature difference. It's crazy that the stock TIM was that bad; it's almost like Intel wants you to delid and void the warranty. I'm at 5GHz with 1.295V set in the BIOS and an Arctic Freezer 240mm AIO cooler. I can pass 2 hours of Prime95 v29.x with AVX at 5GHz, with no AVX offset.

Consider a delid if you can! Also, make sure your temps don't get worse over time. Mine did.
 


Yeah, it's a rough one. I think I should be able to reduce the vcore a bit more, but this 8086K just doesn't want to be AVX stable at less. It seems a lot of people are getting AVX stability at much lower voltage, though if I can match what Silicon Lottery hits, I'm happy.

I'm on the edge of delidding right now. I don't have a big AVX workload, so 5GHz in games is solid, and 4.3-4.5 in AVX should be OK. I think I might practice on my non-K 8700 system first before I risk this chip.

Good news is that when gaming in BF1, combined with the 980 Ti at 1.45GHz, I'm averaging about 95-105 FPS at 1440p Ultra. The 6700K at 4.6GHz was getting about 85-90 average. This chip has definitely smoothed things out a bit. Can't wait for the 1180 Ti/2080 Ti to come out.

I think it was the GN forums where some users reported degradation of the 8700K/8086K after long-term stability testing in AVX. I don't think you're alone on that.

EDIT: I got it Prime95 AVX stable without any AVX offset at 1.3250V. However, I need a better cooler than an X52 to keep the heat down. As I can't do that in an NCASE M1 case, I opted to limit the power to 150 watts, so I get a turbo of about 4.4GHz on all cores under Prime. Literally everything else runs at 5GHz.

I've hit a plateau now: without a delid, I can't really go any further. Maybe I can eke out 5.1GHz (if not on all cores, then maybe on 2 or 4), but it's really diminishing returns.
 