Intel's 8th Generation Core Family - Coffee Lake (LGA 1151, 6C/12T)

Where do you expect Core i7-8700K's Turbo to land?

  • 3.8/3.9 GHz: 0 votes (0.0%)
  • 4.0/4.1 GHz: 3 votes (23.1%)
  • 4.2/4.3 GHz: 6 votes (46.2%)
  • 4.4/4.5 GHz: 3 votes (23.1%)
  • 4.6/4.7 GHz: 1 vote (7.7%)

  Total voters: 13 (poll closed)
2017-2019 INTEL CORE X-SERIES SKYLAKE X CANONLAKE X ICELAKE X TIGERLAKE X



Source - http://tieba.baidu.com/p/5353174380

WOWZER! thanks
 
8C/16T i7-9750K @ 4.5, turbo 5.0GHz o_O pigs be flyin'

^ If that were true I'd probably have to upgrade soon-ish again. :p
 
Here's the issue there as I see it.

If AMD has a core and thread count advantage at the consumer level with Ryzen, and Intel has the IPC and frequency advantage at the consumer level with Kaby Lake..

Whatever advantage AMD can claim in games that benefit from higher core/thread counts slips with Intel releasing 6C/12T Coffee Lake, and where will it be if the rumored 8C/16T part next year is true? I'm a little worried about what happens if, through IPC and frequency, lower-core-count i7-8700s end up nipping at the heels of Ryzen R7s in more heavily threaded tasks... I enjoy AMD lighting a fire under Intel, and would like to see it keep going. But when it comes down to it, can AMD improve their IPC and frequencies going forward as easily as Intel can (finally) throw more cores and threads into their product stack?

Thread counts have their specific advantages, and bringing more threads to more consumers is never a bad thing, so I don't consider that a fault. As for IPC gains: given AMD improved roughly 50% from the Bulldozer uARCH to Zen, they obviously can improve, and I think PR (Pinnacle Ridge) will show a small but noticeable gain over SR (Summit Ridge). However, I think clocks are the limiting factor here. Intel at stock pushes high single-thread frequencies, and their multi-core boost is higher than what AMD is capable of; perhaps the LP node is not capable or mature enough to squeeze high frequencies out of the chips, though that is an issue that may be addressed.
 
Welp, per that chart the 6C 8700K can only upgrade to a 10nm 8750K... no 8-core.

The 9700K is a new LGA 1161, LOL, and it's 8-core on 10nm+, which can then upgrade to a 9750K on 10nm++. I think this will be great in a laptop.
 
I almost hope that chart is wrong because it makes it hard to justify going for 8700K when Ice Lake is not that far away and with a plausible (well in that document anyway) upgrade path into Tigerlake.
 
I almost hope that chart is wrong because it makes it hard to justify going for 8700K when Ice Lake is not that far away and with a plausible (well in that document anyway) upgrade path into Tigerlake.

I think it's a believable chart. Coming from Intel, going 6C to 8C, it makes sense for them to force us to get another mobo.
 
2017-2019 INTEL CORE X-SERIES SKYLAKE X CANONLAKE X ICELAKE X TIGERLAKE X



Source - http://tieba.baidu.com/p/5353174380

That looks incredibly fake for several reasons. The DDR4 speeds: those speeds require validation, and many of them aren't what chip makers produce, or are able to produce in time for validation. The Cannon Lake parts are also 100% fake. The successor to SKL-X/SP is called Cascade Lake, and Cannon Lake is only a 2+2 part. There are other errors as well, but in short, forget that chart.
 
Pisses me off!

Here I am, getting a little stiffy with the release of the 8700k in a couple of days, and now I feel like I should wait.

AGAIN!

It is pretty hard to buy and expect stagnancy. All you have to ask is whether the 8700K can give you longevity; if the answer is no, then wait.
 
That looks incredibly fake for several reasons. The DDR4 speeds: those speeds require validation, and many of them aren't what chip makers produce, or are able to produce in time for validation. The Cannon Lake parts are also 100% fake. The successor to SKL-X/SP is called Cascade Lake, and Cannon Lake is only a 2+2 part. There are other errors as well, but in short, forget that chart.

Well, time will tell, but I do get you. It's kinda weird how Intel would have this leaked while planning so far ahead; of course, take it with a grain of salt.

But the 9700K showing Intel bringing 8C mainstream, and needing at least another socket/mobo purchase, that part seems realistic.
 
I'm not too familiar with RAM validation, IMC speeds, etc. Please share some more about how it works.

Cascade Lake alone invalidates the chart.

For CPUs to be validated against memory speeds, the chips need to exist. Same reason both Ryzen and CFL got max memory support of 2666MHz. You are not getting 3200MHz a year from now; it's hard enough to even reach 2933MHz. The 2800MHz speed is completely bogus from a validation standpoint, since such chips don't exist as stock parts and won't ever do so.
 
Cascade Lake alone invalidates the chart.

For CPUs to be validated against memory speeds, the chips need to exist. Same reason both Ryzen and CFL got max memory support of 2666MHz. You are not getting 3200MHz a year from now; it's hard enough to even reach 2933MHz. The 2800MHz speed is completely bogus from a validation standpoint, since such chips don't exist as stock parts and won't ever do so.

So Cascade is a refresh of CFL? If so, then that 8750K seems like that's what it is.

As for memory: even though the IMC is, say, rated for 2666, how come we are able to use overclocked memory? Or is that unrelated?
 
So Cascade is a refresh of CFL? If so, then that 8750K seems like that's what it is.

As for memory: even though the IMC is, say, rated for 2666, how come we are able to use overclocked memory? Or is that unrelated?

Cascade Lake is a refresh of SKL-X/SKL-SP. And Cannon Lake is an ultra-mobile dual-core with GT2 graphics.

Overclocked memory is completely unrelated.

A lot of effort may have gone into that chart. Just not a lot of brains.

Also, SKU speeds for something that isn't even taped out, like Tiger Lake for example, are completely fictional.
 
Cascade Lake is a refresh of SKL-X/SKL-SP. And Cannon Lake is an ultra-mobile dual-core with GT2 graphics.

Overclocked memory is completely unrelated.

A lot of effort may have gone into that chart. Just not a lot of brains.

Also, SKU speeds for something that isn't even taped out, like Tiger Lake for example, are completely fictional.

I do find the 10nm+ and 10nm++ unbelievable because it's just way too far out. Cascade Lake could be any of the LGA 2066 10nm (non-plus) parts; the chart doesn't have any name, TDP, or date.
 
Wondering if it's worth it to snatch the 8700k and pawn off my 6700k to my brother or if it would be a better idea to wait for Ice Lake.
The upgrade itch is a terrible thing.
 
I do find the 10nm+ and 10nm++ unbelievable because it's just way too far out. Cascade Lake could be any of the LGA 2066 10nm (non-plus) parts; the chart doesn't have any name, TDP, or date.

Cascade Lake is a 14nm++ part. SKL-X/SP is 14nm+.

Ice Lake on 10nm+ is already taped out, but I doubt the SKUs there are even close to finalized.
 
Well first off, CFL is limited to Z370 until Q1, which makes the 8400 an odd proposition and also inflates the price quite a bit. R5 also has SMT and can be OC'd enough out of the box to compete.

At $200 the 1600 is priced well enough for now.
The SMT and the lack of H370 and B370 chipsets /slightly/ validate your argument. Nonetheless, the GPU, the IPC, and not buying a potential segfault special tell me the 8400 is the wiser choice.
 
That looks incredibly fake for several reasons. The DDR4 speeds: those speeds require validation, and many of them aren't what chip makers produce, or are able to produce in time for validation. The Cannon Lake parts are also 100% fake. The successor to SKL-X/SP is called Cascade Lake, and Cannon Lake is only a 2+2 part. There are other errors as well, but in short, forget that chart.

The second I saw the DDR4 speeds, I thought rubbish.
 
Cascade Lake is a 14nm++ part. SKL-X/SP is 14nm+.

Ice Lake on 10nm+ is already taped out, but I doubt the SKUs there are even close to finalized.

In this case, what are the chances of that 10nm 8750K being real? Since it'll likely stay on 2666MHz stock, it would make sense because it's Intel: giving two more cores yet again will need a new socket, so is there a high chance that Z370 will be stuck on 6C max?
 
In this case, what are the chances of that 10nm 8750K being real? Since it'll likely stay on 2666MHz stock, it would make sense because it's Intel: giving two more cores yet again will need a new socket, so is there a high chance that Z370 will be stuck on 6C max?

0? The chart is fake.

Z370 may be compatible with ICL-S.
 
A core is two threads with AMD; saying AMD has slow cores is not the case when their SMT is better than Intel's; it is very competitive.

When six-core CoffeeLake is giving 5-10% less performance than eight-core Zen, even in workloads with high SMT yields such as CineBench, the conclusion is that Zen cores are slower:

6 CFL ~ 8 Zen ===> 1 CFL core ~ 1.33 Zen core
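
A minimal sketch of the arithmetic behind that ratio; the 6-vs-8 core counts and the 5-10% deficit are from the post above, while the linear-scaling assumption and the code itself are only illustrative:

```python
# Rough per-core arithmetic behind the "1 CFL core ~ 1.33 Zen core" figure,
# assuming an nT workload (CineBench-style) that scales ~linearly with cores.
cfl_cores, zen_cores = 6, 8

# deficit = how far the 6-core CFL total lands behind the 8-core Zen total
for deficit in (0.00, 0.05, 0.10):      # 0.00 is the "6 CFL ~ 8 Zen" case
    cfl_total = 1.0 - deficit           # Zen total normalized to 1.0
    per_core = (cfl_total / cfl_cores) / (1.0 / zen_cores)
    print(f"{deficit:>4.0%} behind -> 1 CFL core ~ {per_core:.2f} Zen cores")
```

At exactly equal totals the per-core ratio is 8/6 ≈ 1.33; with the quoted 5-10% deficit it works out closer to 1.20-1.27, still clearly in Coffee Lake's favor.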

However, suddenly, due to AMD, Intel had no choice but to bring 6 and 8 cores mainstream all within 1.5 years; it is now worth the wait to skip 6 cores and wait for 8 cores on 10nm+.

It was demonstrated before that Intel roadmaps were planned before Zen tapeout. It was also mentioned that future AMD Raven Ridge and successors are still four-core. So, not only is what you say incorrect, but the facts point in just the opposite direction, with AMD stuck on four-core 'forever', and Intel providing the first six-core and eight-core 'APU' for mainstream users.

A whole 6 games. Yeah, Tomb Raider, like nobody even plays that anymore, and yet PlayerUnknown's Battlegrounds, the record-setting soon-to-be-full-release title, genuinely the best and most alpha game on the market, has Ryzen and, strangely enough, Vega doing exceptionally well; Battlefield 1 has Ryzen doing well (not sure about Vega, but that's irrelevant here).

So yeah, if you cherry-pick those 6 games and base it all off that, you distort your argument with a warped sense of mental gymnastics.

Intel games better, but the aggregate is more like 10%.

About six months ago you told us that the bad gaming performance was explained by R7 RyZen chips being "relabeled engineering samples". Not only was your claim pure nonsense, but you basically accused AMD of lying to customers.

Later you changed your story and told us that the bad performance was due to mobos and AGESA not being optimized, it being a new platform. You promised us a magic BIOS patch/fix that would free Zen's real gaming potential. It never happened; it couldn't, because the magic BIOS fix was nonsense again.

Then the excuse changed to the games. The problem was the games. Games had been optimized for decades on Intel hardware, and soon optimized games would run faster on Zen, showing its true potential. Again this was all nonsense, and also funny, as suddenly certain people forgot that consoles use AMD hardware. A couple of games, the ones that were broken and playing worst, were patched, and after being patched they continue to run slower on RyZen. The patches reduced the gap from a huge 50% to something closer to the 30% that is usually measured on average.

Now you change once again. Now you pretend RyZen gaming performance isn't real but a result of cherry-picking benchmarks. Ironic that you mention Battlefield 1, which is an AMD-sponsored game, and amazing how you ignore that the aggregate gap is more at the 30% level.

4C/4T Zen ~ i5 Sandy Bridge

i5 Kabylake ~ 1.33 * (4C/4T Zen)

[attached benchmark graph]
 
When six-core CoffeeLake is giving 5-10% less performance than eight-core Zen, even in workloads with high SMT yields such as CineBench, the conclusion is that Zen cores are slower:

6 CFL ~ 8 Zen ===> 1 CFL core ~ 1.33 Zen core



It was demonstrated before that Intel roadmaps were planned before Zen tapeout. It was also mentioned that future AMD Raven Ridge and successors are still four-core.

So, not only is what you say false, but the facts point in just the opposite direction, with AMD stuck on four-core 'forever', and Intel providing the first six-core and eight-core 'APU' for mainstream users.



Your first excuse, months ago, was that the first retail RyZen chips were "relabeled engineering samples" and that true RyZen would launch later. Not only was your claim pure nonsense, but you were even accusing AMD of lying to customers by selling engineering samples as if they were retail chips. LOL

Your second excuse was mobos and AGESA. You promised us a magic BIOS patch/fix that would free Zen's real potential. It never happened, because it was all nonsense again.

Your excuse now is cherry-picked benchmarks and that the aggregate gap is 10%. But you mention Battlefield 1, which is an AMD-sponsored game, and ignore that the aggregate gap is more at the 30% level.

4C/4T Zen ~ i5 Sandy Bridge

i5 Kabylake ~ 1.33 * (4C/4T Zen)

The AGESA updates fixed RAM issues people had at initial release, where boards would not POST with anything higher than the IMC-rated speeds; that worked, so it was not really an excuse.

You have been the one cherry-picking benches, and no, DICE are not sponsored by AMD, whereas PUBG has been sponsored by Intel and Nvidia. Again, the event hosted for DICE was covered by AMD, but there is no bias towards AMD hardware in BF1, nor is there a disadvantage to Intel parts. And I don't see the 30% unless you are taking the 5GHz 7700K vs the stock 1700 running its 3GHz base; on the whole the 1700X was around 140FPS with the 5.1GHz 7700K around 147FPS, and factoring in variable clocks it looks more like a ~10% range of performance.

I haven't seen a single Sandy Bridge CPU score 162 in a single-thread Cinebench R15 run @ 4GHz, or 140+ at 3GHz, so the 'performs like' claim depends on the title; over a greater spectrum of games AMD's performance relative to Intel is acceptable.
 
The AGESA updates fixed RAM issues people had at initial release, where boards would not POST with anything higher than the IMC-rated speeds; that worked, so it was not really an excuse.

You have been the one cherry-picking benches, and no, DICE are not sponsored by AMD, whereas PUBG has been sponsored by Intel and Nvidia. Again, the event hosted for DICE was covered by AMD, but there is no bias towards AMD hardware in BF1, nor is there a disadvantage to Intel parts. And I don't see the 30% unless you are taking the 5GHz 7700K vs the stock 1700 running its 3GHz base; on the whole the 1700X was around 140FPS with the 5.1GHz 7700K around 147FPS, and factoring in variable clocks it looks more like a ~10% range of performance.

I haven't seen a single Sandy Bridge CPU score 162 in a single-thread Cinebench R15 run @ 4GHz, or 140+ at 3GHz, so the 'performs like' claim depends on the title; over a greater spectrum of games AMD's performance relative to Intel is acceptable.

The newest AGESAs fixed overclocking issues and allowed people to overclock more and better. Those AGESAs didn't fix the microarchitecture, contrary to your claims. Chips at stock settings perform the same today as at launch.

Battlefield 1 is sponsored by AMD. We already demonstrated this to you before, including a public remark from AMD stating that they are proud sponsors of BF1.

Don't you see the 30% gap on the graph given?

i5-7600K: 133%
R3-1300X: 100.8%
i5-2500K: 100%

The fastest 4C/4T Zen is at Sandy Bridge i5 level and about 30% behind a Kabylake i5 on average. The gap is higher in some specific games and smaller in others.
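
As a quick sanity check of that figure, here is the arithmetic implied by the two percentages quoted above; this is my own back-of-the-envelope calculation, and the exact number depends on which chip you take as the baseline:

```python
# Gap implied by the quoted percentages (i5-2500K = 100% baseline).
i5_7600k, r3_1300x = 133.0, 100.8

print(f"7600K ahead of 1300X: {(i5_7600k / r3_1300x - 1) * 100:.0f}%")  # ~32%
print(f"1300X behind 7600K:   {(1 - r3_1300x / i5_7600k) * 100:.0f}%")  # ~24%
```

So the gap reads as roughly 32% or 24% depending on the direction of the comparison; the "about 30%" above sits between those two.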

Of course the performance gap is not 30% in CineBench R15. No one said it is. In CB15 the gap is 'only' 22%,

[attached chart: cb15-1.png]


but in other benchmarks Zen is 47% behind Kabylake

[attached chart: audacity.png]


Intel has an evident core advantage (both IPC and clocks), and the only reason RyZen got some momentum was because AMD provided up to twice the cores and won in rendering/encoding benchmarks when compared with quad-core Kabylake.

This is no longer the case. CoffeeLake is neck and neck with R7 RyZen even in AMD-favorable benches such as CB15, and runs circles around the R7 in everything else. Moreover, despite your 'predictions' about CoffeeLake costing $500, the top model is only $360, beating RyZen even at its own game: a cheap price.

So CFL is faster, more efficient, overclocks better, comes with an iGPU, and is a lot cheaper. What is the problem?
 
Zen has worse IPC than IvyB and struggles to break 4 GHz. I don't think anyone is arguing that Zen cores are superior/faster, they're a joke. They survive on low price and high core count only.
Anyone see the Reddit post claiming shortages until the end of the year?

Intel could be amputating your legs with a skill saw and ramming red-hot nails through your feet while beating your mother with a tire iron, and you guys would still defend Intel to the death. I have no idea where and what drives your loyalty to this extent. Oh, I kind of do, but that would be getting political and cause triggering and I would get banned. It's like the same 10 dudes on this website that are 1000% Intel no matter what reality is. Just non-stop AMD bashing while holding more reverence for Intel than entire religions do for their gods.

It makes you so apparently biased in the eyes of EVERYONE on the site that everything you guys say is just chalked up to irrelevant biased psychobabble.
 
Intel could be amputating your legs with a skill saw and ramming red-hot nails through your feet while beating your mother with a tire iron, and you guys would still defend Intel to the death. I have no idea where and what drives your loyalty to this extent. Oh, I kind of do, but that would be getting political and cause triggering and I would get banned. It's like the same 10 dudes on this website that are 1000% Intel no matter what reality is. Just non-stop AMD bashing while holding more reverence for Intel than entire religions do for their gods.

It makes you so apparently biased in the eyes of EVERYONE on the site that everything you guys say is just chalked up to irrelevant biased psychobabble.

You're delusional. I don't even know why you quoted my post; surely it must be an accident?
 
A whole 6 games. Yeah, Tomb Raider, like nobody even plays that anymore, and yet PlayerUnknown's Battlegrounds, the record-setting soon-to-be-full-release title, genuinely the best and most alpha game on the market, has Ryzen and, strangely enough, Vega doing exceptionally well; Battlefield 1 has Ryzen doing well (not sure about Vega, but that's irrelevant here).

So yeah, if you cherry-pick those 6 games and base it all off that, you distort your argument with a warped sense of mental gymnastics.

Intel games better, but the aggregate is more like 10%.
That is patently false. There are tens of thousands of games and the vast majority are single-threaded. I would be shocked to see 1,000 properly threaded games. So your aggregate 10% is total bullshit. If you play a diverse range of games you buy an Intel rig, plain and simple. If you only play BF, COD, and shit like that, sure... you can get away with it. Also, GTA5 was in that list, and those 6 games are AAA games and AMD has shit FPS in most of them. 10%... okay. :rolleyes:

I own about 1300 games and I don't think even 100 are properly threaded, or that AMD could be competitive with Intel in even 100 of my games.

Also, as I stated, Intel, which is 25-50% faster in minimum frame rate, doesn't even allow an ideal experience in all games... it's just better/closer than AMD by far.

But do whatever you like. I'll give unbiased, factual advice all day long. For a desktop rig you buy Intel ATM because AMD offers a shoddy experience. If you want a greatly priced threaded server/render/encoding rig... buy Threadripper. If you want the best, most powerful server/render/encoder that money can buy... you get Intel's 18-core. I would never buy Intel for my server because of cost, but to each their own.

If I had the money I would get a binned 8700K for desktop and a retail Threadripper for my server. If I was filthy rich I would get Intel's $2K 18-core CPU... but that would be silly for my needs, lol :ROFLMAO:

My system struggles to sustain 100FPS in NewZ, a DX9 game, so what's the issue? Given that there is a lot of rendering in open-world games, sustaining high FPS at low resolution is going to be problematic, which is probably why more people are moving up to the 1440p middle-ground resolution. I have seen prices on 1440p monitors drop very close to 1080p ones, as 1080p looks like it is slowly being phased out.

Record your GPU load, CPU load, and frame rate into a graph, or have two screens and pay attention while you game (a rough logging sketch follows below).

1) If FPS drops and GPU utilization drops = most likely CPU limited
2) If FPS drops and GPU is maxed = GPU limited
3) If FPS drops and GPU drops and CPU drops = something else and much harder to figure out (could be code, VRAM, system memory, SSD, or a lot of other things)

1 and 2 are the most common and easy to pinpoint; 3 is rarer and harder to trace to the limiting factor.
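
A minimal logging sketch of that approach, assuming an NVIDIA GPU so that `nvidia-smi` is available for GPU utilization; the log file name, the 90% "busy" threshold, the FPS target, and the placeholder FPS value are arbitrary illustration choices, not anything from the post above:

```python
import csv, subprocess, time
import psutil  # pip install psutil

def gpu_util_percent():
    """GPU utilization via nvidia-smi (assumes an NVIDIA card and driver)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip().splitlines()[0])

def classify(fps, fps_target, gpu, cpu, busy=90.0):
    """Rough mapping of cases 1-3 above onto utilization readings."""
    if fps >= fps_target:
        return "ok"
    if gpu >= busy:
        return "GPU limited"            # case 2: FPS drops, GPU maxed
    if cpu >= busy:
        return "likely CPU limited"     # case 1: FPS drops, GPU not maxed
    return "something else"             # case 3: neither maxed, dig deeper

# Log one sample per second to CSV. FPS has to come from your capture/overlay
# tool (RTSS, PresentMon, etc.); it is stubbed with a placeholder here.
with open("bottleneck_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "fps", "gpu_%", "cpu_%", "verdict"])
    for _ in range(60):
        fps = 60.0                                   # placeholder value
        gpu = gpu_util_percent()
        cpu = max(psutil.cpu_percent(percpu=True))   # one pegged core counts
        writer.writerow([round(time.time(), 1), fps, gpu, cpu,
                         classify(fps, fps_target=100, gpu=gpu, cpu=cpu)])
        time.sleep(1)
```

CPU load is taken as the busiest single core rather than the overall average, since a game pinned on one thread can be CPU limited while total utilization still looks low.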
 
Intel could be amputating your legs with a skill saw and ramming red-hot nails through your feet while beating your mother with a tire iron, and you guys would still defend Intel to the death. I have no idea where and what drives your loyalty to this extent. Oh, I kind of do, but that would be getting political and cause triggering and I would get banned. It's like the same 10 dudes on this website that are 1000% Intel no matter what reality is. Just non-stop AMD bashing while holding more reverence for Intel than entire religions do for their gods.

It makes you so apparently biased in the eyes of EVERYONE on the site that everything you guys say is just chalked up to irrelevant biased psychobabble.
Some of us expect CPUs released in 2017 to be better than the competition's 5 year old CPUs. And pointing that out doesn't make us biased. Ryzen is not some flawless chip and AMD doesn't get a pass because they're the underdog, as much as many people act that way.

I've already had plenty of negative things to say about Intel in this thread, too.
 
The AGESA updates fixed RAM issues people had at initial release, where boards would not POST with anything higher than the IMC-rated speeds; that worked, so it was not really an excuse.

You have been the one cherry-picking benches, and no, DICE are not sponsored by AMD, whereas PUBG has been sponsored by Intel and Nvidia. Again, the event hosted for DICE was covered by AMD, but there is no bias towards AMD hardware in BF1, nor is there a disadvantage to Intel parts. And I don't see the 30% unless you are taking the 5GHz 7700K vs the stock 1700 running its 3GHz base; on the whole the 1700X was around 140FPS with the 5.1GHz 7700K around 147FPS, and factoring in variable clocks it looks more like a ~10% range of performance.

I haven't seen a single Sandy Bridge CPU score 162 in a single-thread Cinebench R15 run @ 4GHz, or 140+ at 3GHz, so the 'performs like' claim depends on the title; over a greater spectrum of games AMD's performance relative to Intel is acceptable.

We are referring to minimum frame rate here... or at least I am. The 99th/99.9th percentile, minimum frame rate, and frame times show the smoothness and quality of the experience. AMD is consistently 20-50% worse in many games because this is where the single-thread issues come into play. IIRC, even in DOOM Ryzen ate shit compared to the 7700K and 8700K.

Far Cry Primal: 100 vs 68
GTA5: 52 vs 31, lawls
DOOM: 137 vs 83
Sleeping Dogs: 113 vs 75
 