Intel Claims i7-8700K to Be 11% Faster Than 7700K

Let's keep in mind that, with the exception of Skylake and Kaby Lake, we've seen around a 100MHz-200MHz loss in the overclocking ceiling on K-series parts compared to the previous generation. Kaby Lake-X is the only one that really gets us back to the 5.0GHz mark where Sandy Bridge was on the high side. I don't think I've ever seen a Sandy Bridge chip that couldn't do 4.8GHz, and I saw plenty that did 5.0GHz or better. I'm now seeing that with Kaby Lake-X, but not even standard Kaby Lake could achieve that reliably for me. Now that Kaby Lake-X hits 5.0GHz, hardly anyone gives a shit about it because it's still a fucking quad core and it's on a platform it has no business using in its current form.

Yep...in regards to OC potential, I kick myself for skipping the i7-2600K/2700K, because the OC headroom since SB has been, well, rather pathetic and boring. Hindsight and all...

Agree 100% about Kaby Lake-X and its current platform. Nothing, from either Intel or AMD, has impressed me enough to move me off my Ivy Bridge. I keep my fingers crossed with each and every new gen getting released, but it's a parade of fucking letdowns.
 
Most of us feel that Intel changes chipsets just a little too often, and for very little reason.

I said as much in my editorial on X299 and the HEDT market.

Intel has often changed chipsets on us for little to no reason that people can see, but that doesn't necessarily mean there weren't good reasons to change. You have to keep in mind that motherboard manufacturers are customers of Intel, and Intel treats them as such. It's a customer they've got by the short and curly hairs, but a customer just the same. In this case, there were many good reasons for switching from X99 to X299, even though few of its benefits will be apparent to end users.

HSIO actually helps the manufacturers because it allows them to avoid using external PCIe switches, and more PCIe lanes negate the need for overly expensive PLX chips that cut into a motherboard maker's profit margins. DMI 3.0 and HSIO were two good reasons to make the switch. DMI 3.0 is a change that benefits us more than the motherboard manufacturers, but it's a win-win for them, as DMI 3.0 allows us to better leverage M.2 devices, making the product more compelling. Power reductions and lower TDPs from moving off the 32nm node, plus a smaller package size, are more good reasons. PCB real estate is costly on high-end boards, and power reductions are good for everyone.

Lastly, X299 / LGA 2066 was necessary for one more important reason, and that's the move to Skylake-X and Kaby Lake-X. Haswell-E and Broadwell-E had Intel's FIVR (fully integrated voltage regulator), which motherboard manufacturers and enthusiasts didn't care for. Skylake and Kaby Lake did away with FIVR, and so do Skylake-X and Kaby Lake-X. The design of the X299 motherboards had to be different from X99's for a lot of reasons. The X99 chipset is also three years old. It's just unfortunate that the improvements are on the back end, and DMI 3.0 and extra PCIe lanes are really the only changes we see directly.
 
Yep...in regards to OC potential, I kick myself for skipping the i7-2600K/2700K, because the OC headroom since SB has been, well, rather pathetic and boring. Hindsight and all...

Agree 100% about Kaby Lake-X and its current platform. Nothing, from either Intel or AMD, has impressed me enough to move me off my Ivy Bridge. I keep my fingers crossed with each and every new gen getting released, but it's a parade of fucking letdowns.

I rather like X299 and X399 from Intel and AMD respectively. X299's motherboard designs aren't compelling because of the inadequate VRM cooling I've seen on all of them so far, but feature-wise I like it. As a multi-GPU user and someone who has a lot of storage in their system, the PCIe lanes and extra bandwidth for M.2 are appealing. I also like the improved DDR4 memory clock speeds we can potentially see. I've rarely seen X99 break DDR4 3200MHz, and no CPU architecture older than Skylake seems to benefit from that in games, making it a moot point on X99 anyway. X299 and Skylake-X, on the other hand, are somewhat compelling as a gaming upgrade, but not a cost-effective one. Only getting 2 more cores and less than 6% IPC improvement over what I have now is a hard pill to swallow for a $1,000+ investment.

On the other hand, $1,000 gets me a TR4-based Threadripper CPU with 16c/32t. As someone who's always had an SMT fetish dating back to the Pentium Pro, I approve of the core count increase and being able to benefit from faster memory speeds. X399 itself somewhat disappoints me because it only supports 6 SATA devices, which is insufficient for me at present. PCIe-based storage on X399 is something I'm looking forward to testing as well; that might get me to overlook X399's faults. I like the PCIe lane count, but hate the fact that X399 uses PCIe 2.0 slots for anything. That's horseshit.

Things are getting more interesting on the CPU side, but the platforms are still somewhat stale in my opinion.
 
I don't follow you. At 95W you get 6 cores at 4.3GHz.

It's simple

95 / 4 = 23.75W/core thermal headroom
95 / 6 = 15.83W/core thermal headroom

Unless Intel has vastly improved energy efficiency by 33% per core, you are getting less out of a fully maxed-out core. So you might have gotten 5GHz before with 4 cores, but find yourself limited to 4.2-4.4GHz max with 6 cores. Since the majority of games use 4 cores or fewer, that means your games run slower.

Hence it's better to stick with a lower core count when running today's games.
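The per-core headroom arithmetic above can be sketched as a quick back-of-the-envelope calculation. Note this is only illustrative: real CPUs share the package power budget dynamically rather than splitting the TDP evenly per core.

```python
# Naive per-core thermal headroom: split the TDP evenly across cores.
# Real chips share the power budget dynamically, so this is just the
# back-of-the-envelope math from the post above.

def headroom_per_core(tdp_watts: float, cores: int) -> float:
    """Watts available per core if the TDP were split evenly."""
    return tdp_watts / cores

quad = headroom_per_core(95, 4)   # 23.75 W/core
hexa = headroom_per_core(95, 6)   # ~15.83 W/core

# Per-core efficiency gain needed for 6 cores to match a quad's budget:
reduction_needed = 1 - hexa / quad  # 1 - 4/6, i.e. about 33%
print(f"{quad:.2f} W/core vs {hexa:.2f} W/core "
      f"({reduction_needed:.0%} less power per core needed to match)")
```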
 
It's simple

95 / 4 = 23.75W/core thermal headroom
95 / 6 = 15.83W/core thermal headroom

Unless Intel has vastly improved energy efficiency by 33% per core, you are getting less out of a fully maxed-out core. So you might have gotten 5GHz before with 4 cores, but find yourself limited to 4.2-4.4GHz max with 6 cores. Since the majority of games use 4 cores or fewer, that means your games run slower.

Hence it's better to stick with a lower core count when running today's games.

But they did, didn't they?
 
You tell me, Shintai. Could they have improved energy usage 33% between 14nm+ and 14nm++? How would you know?

They did between 14nm and 14nm+ too, if you look at mobile. And the result is here with the 8700K vs the 7700K.

7700 vs 7700K: 65W vs 95W. But the last 300-600MHz cost 50% more power. It's all just a matter of the efficiency curve.
 
They did between 14nm and 14nm+ too, if you look at mobile. And the result is here with the 8700K vs the 7700K.

7700 vs 7700K: 65W vs 95W. But the last 300-600MHz cost 50% more power. It's all just a matter of the efficiency curve.

7700 turbos at 4.2 GHz
7700K turbos at 4.5 GHz

Shouldn't the K sustain a higher-than-base clock for longer because of its higher TDP?
 
TDP is just a sort of made-up figure anyway; once you start overclocking, you can go past TDP no problem with good motherboards. It would be impossible for people to get 7GHz on a 7700K on LGA 1151 or even 2066 if TDP were some sort of magical hard limit, which it is not. It's simply the CPU's design spec for max power usage.

What matters is whether the CPU will do it at safe voltages or not; that is ALL that matters. Considering the pretty solid clock increase we saw between Skylake (14nm) and Kaby Lake (14nm+), combined with very respectable stock clocks for a 6-core CPU, along with it being made on 14nm++, I and other overclockers think these chips will probably do at least 4.5-4.6GHz on all cores, and if they can get near 5GHz at 1.4V they will be really solid performers. If they hit all those marks, I'll be getting one for my daily rig and relegating my 7700K to benching only, or probably even selling it eventually.
 
Since it can turbo to 4.7GHz, all cores are factory-guaranteed for 4.7GHz. So that part, assuming adequate cooling, etc., should be a cakewalk. Just as all cores are tested at 4.5GHz on the 7700K.
 
7700 turbos at 4.2 GHz
7700K turbos at 4.5 GHz

Shouldn't the K sustain a higher-than-base clock for longer because of its higher TDP?

Does it? And by "longer", do you mean all the time, in reality?

What is the efficiency curve for KBL in terms of clocks, power consumption, and voltage required? 4.5GHz requires a higher VID, for example, doesn't it?
 
Since it can turbo to 4.7GHz, all cores are factory-guaranteed for 4.7GHz. So that part, assuming adequate cooling, etc., should be a cakewalk. Just as all cores are tested at 4.5GHz on the 7700K.

I wouldn't be surprised if that is the case. Delidding is going to be a must, but anyone who has been overclocking LGA 1151 like I have already has the tools for the job, so it's no big deal to me; my last two CPUs I didn't even install without delidding first. The big hangup I'll probably have with the upgrade is if Z370 is forced on us; then I'll have to wait for ASUS to make a Maximus X Apex. There's no way I'm buying another 4-slot mobo and taking the performance hit on RAM for two slots I'll never need to populate.
 
I'd be happy if Intel could just do two things.

Get their s##t together and use a TIM that doesn't require a delid.

Then disable the IGP on the high-end K models (and cut the price a couple bucks, the way they increase the price for "features"). I mean, unless my graphics card melts, why would I use it?
 
Does it? And by "longer", do you mean all the time, in reality?

What is the efficiency curve for KBL in terms of clocks, power consumption, and voltage required? 4.5GHz requires a higher VID, for example, doesn't it?

Both 7700s are rated for a 1.52V max default Vcore, so how do you think that would impact the turbo frequencies? Would going from the 7700's 4.2GHz max to the 7700K's 4.5GHz max really eat up an additional 30W sustained at default Vcores, boost times being equal? I would guess not, and also guess that the K would be able to sustain a higher boost for longer. I'm really just guessing at this point... can you shed some light on how that would all work in real-world scenarios?
 
I'd be happy if Intel could just do two things.

Get their s##t together and use a TIM that doesn't require a delid.

Then disable the IGP on the high-end K models (and cut the price a couple bucks, the way they increase the price for "features"). I mean, unless my graphics card melts, why would I use it?

TBH, it's not the TIM; it's the fact that they are not soldered. When I first got my 7600K a few months back, I misplaced my Conductonaut, so I used the EK-TIM that I'd been using for everything other than the die-to-IHS interface, and needless to say it didn't work any better than the stock TIM. I re-delidded it as soon as I found my Conductonaut.

As for the iGPU, it doesn't affect performance, and it probably helps by expanding the terribly small contact area between these dies and the IHS, which only helps with the thermal problem rather than hurting when the iGPU is disabled, so I'm going to call that a pretty bad complaint, IMHO. The big issue with the temps is transferring them effectively from a tiny surface area to a larger one for good cooling. This is why going from normal TIM to liquid metal on the die-to-IHS interface yields massive temperature drops, while the difference between the IHS and the waterblock is very small, only about 3C from what I've seen doing it on my rig. It's so little I don't even bother with the risk and waste of the expensive stuff there anymore.
 
ASRock could make it work.

They put an LGA 1156 socket on a P67 board.

The limitation is ARTIFICIAL.

FIVR excluded.

P67 was a special case, since it used the same DMI as P55, so it was effectively a P55 motherboard with the P67 PCH. It was impossible to get a Lynnfield to work on the other Cougar Point chipsets.
 
Both 7700s are rated for a 1.52V max default Vcore, so how do you think that would impact the turbo frequencies? Would going from the 7700's 4.2GHz max to the 7700K's 4.5GHz max really eat up an additional 30W sustained at default Vcores, boost times being equal? I would guess not, and also guess that the K would be able to sustain a higher boost for longer. I'm really just guessing at this point... can you shed some light on how that would all work in real-world scenarios?

1.52V isn't what the VID is at different clocks. The VID at 4.2GHz is LOWER than at 4.5GHz; hence power consumption is lower. And yes, when you go off the efficiency curve, power goes nuts. Look at OCed chips vs. stock: power consumption shoots up real fast past a certain point. Ryzen at 4GHz is a ~200W chip, for example.
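The "power goes nuts" point falls out of dynamic power scaling roughly with frequency times voltage squared. A minimal sketch, with the caveat that the voltages below are illustrative guesses, not real VID tables:

```python
# Dynamic CPU power scales roughly as P ~ C * V^2 * f. The voltage
# values below are made-up illustrations, not measured VIDs: the point
# is that the last few hundred MHz need disproportionately more V,
# so power climbs much faster than clock speed.

def relative_power(freq_ghz: float, vcore: float,
                   base_freq: float = 4.2, base_v: float = 1.20) -> float:
    """Power relative to a baseline operating point (P proportional to V^2 * f)."""
    return (vcore ** 2 * freq_ghz) / (base_v ** 2 * base_freq)

for f, v in [(4.2, 1.20), (4.5, 1.28), (5.0, 1.40)]:
    print(f"{f} GHz @ {v} V -> {relative_power(f, v):.2f}x baseline power")
```

With these example numbers, a ~7% clock bump costs roughly 22% more power, and a ~19% bump costs over 60% more, which is the shape of the curve being described.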
 
ASRock could make it work.

They put an LGA 1156 socket on a P67 board.

The limitation is ARTIFICIAL.

FIVR excluded.

Yet you had to mention the VRM. Remember how some AM3 CPUs didn't work on AM3 boards for the same reason?

If you can't use a CPU on a board anyway due to VRM changes or other electrical changes, then why bother?

P67 was a special case, since it used the same DMI as P55, so it was effectively a P55 motherboard with the P67 PCH. It was impossible to get a Lynnfield to work on the other Cougar Point chipsets.

DMI is just a PCIe interface. Same with AMD's versions.
 
ASRock could make it work.

They put an LGA 1156 socket on a P67 board.

The limitation is ARTIFICIAL.

FIVR excluded.

Sometimes the limitations are artificial, and sometimes they aren't. Without being an electrical engineer it's hard to tell the difference.
 
It's water under the bridge now, Kyle. I served my time, and I try not to debate too much with him... I like this place very much and don't need a perma-ban for calling a spade a spade.
Then I suggest you not bring it up here on our forums. If you have an issue with certain posts, use the report post button. If you have issues with specific members, the IGNORE feature might come in handy. :)
 
Yet you had to mention the VRM. Remember how some AM3 CPUs didn't work on AM3 boards for the same reason?

If you can't use a CPU on a board anyway due to VRM changes or other electrical changes, then why bother?



DMI is just a PCIe interface. Same with AMD's versions.

125W AM3 CPUs literally wouldn't work in a 95W board's socket. That limitation is real.
 
I said as much in my editorial on X299 and the HEDT market.

Intel has often changed chipsets on us for little to no reason that people can see, but that doesn't necessarily mean there weren't good reasons to change. You have to keep in mind that motherboard manufacturers are customers of Intel, and Intel treats them as such. It's a customer they've got by the short and curly hairs, but a customer just the same. In this case, there were many good reasons for switching from X99 to X299, even though few of its benefits will be apparent to end users.

HSIO actually helps the manufacturers because it allows them to avoid using external PCIe switches, and more PCIe lanes negate the need for overly expensive PLX chips that cut into a motherboard maker's profit margins. DMI 3.0 and HSIO were two good reasons to make the switch. DMI 3.0 is a change that benefits us more than the motherboard manufacturers, but it's a win-win for them, as DMI 3.0 allows us to better leverage M.2 devices, making the product more compelling. Power reductions and lower TDPs from moving off the 32nm node, plus a smaller package size, are more good reasons. PCB real estate is costly on high-end boards, and power reductions are good for everyone.

Lastly, X299 / LGA 2066 was necessary for one more important reason, and that's the move to Skylake-X and Kaby Lake-X. Haswell-E and Broadwell-E had Intel's FIVR (fully integrated voltage regulator), which motherboard manufacturers and enthusiasts didn't care for. Skylake and Kaby Lake did away with FIVR, and so do Skylake-X and Kaby Lake-X. The design of the X299 motherboards had to be different from X99's for a lot of reasons. The X99 chipset is also three years old. It's just unfortunate that the improvements are on the back end, and DMI 3.0 and extra PCIe lanes are really the only changes we see directly.

DMI 3.0 would have been great in the days when SATA III SSDs were new. A single M.2 drive running through the DMI link rather than through the main PCIe lanes can saturate it, especially with some SATA traffic, the Ethernet connection, and externally connected USB devices on top... and then there's RAID-configured M.2.
 
DMI 3.0 would have been great in the days when SATA III SSDs were new. A single M.2 drive running through the DMI link rather than through the main PCIe lanes can saturate it, especially with some SATA traffic, the Ethernet connection, and externally connected USB devices on top... and then there's RAID-configured M.2.

Yeah, for some reason Intel hates giving bandwidth away. These old DMI links have been a game of chase-the-leader for years: just when stuff starts saturating the link, Intel ups it by another small notch. I just don't understand why Intel likes having this huge bottleneck in their chipsets.
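The saturation complaint is easy to put in numbers. DMI 3.0 is effectively a PCIe 3.0 x4 link, roughly 3.94 GB/s of usable bandwidth; summing ballpark device throughputs behind it shows how quickly it fills. The device figures below are rough sequential peaks for illustration, not benchmarks:

```python
# DMI 3.0 is roughly a PCIe 3.0 x4 link (~3.94 GB/s usable) shared by
# everything hanging off the chipset. Device figures are ballpark
# sequential throughputs for illustration, not measurements.

DMI3_GBPS = 3.94

devices = {
    "NVMe M.2 SSD (PCIe 3.0 x4)": 3.5,
    "SATA III SSD": 0.55,
    "10 GbE NIC": 1.25,
    "USB 3.1 Gen 2 device": 1.2,
}

total = sum(devices.values())
print(f"Aggregate peak demand: {total:.2f} GB/s vs DMI 3.0 {DMI3_GBPS} GB/s")
print(f"Oversubscribed by {total / DMI3_GBPS:.1f}x if everything peaks at once")
```

A single fast NVMe drive alone gets close to the limit, which is why RAID-configured M.2 behind the chipset mostly benchmarks the DMI link instead of the drives.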

I have heard many times over the years that Intel only designs new motherboard chipsets every 3 or 4 years or so, and that most of the time a "new" chipset is identical to the last, just with some fuses not blown. If true, this could explain why they don't adopt new technologies very quickly and why bandwidth lags: the chipset is designed with longevity in mind, not rapid advances in technology, so when new tech hits the market things stay that way for several years, until Intel is forced to react and actually designs a new chipset.
 
Yeah, for some reason Intel hates giving bandwidth away. These old DMI links have been a game of chase-the-leader for years: just when stuff starts saturating the link, Intel ups it by another small notch. I just don't understand why Intel likes having this huge bottleneck in their chipsets.

I have heard many times over the years that Intel only designs new motherboard chipsets every 3 or 4 years or so, and that most of the time a "new" chipset is identical to the last, just with some fuses not blown. If true, this could explain why they don't adopt new technologies very quickly and why bandwidth lags: the chipset is designed with longevity in mind, not rapid advances in technology, so when new tech hits the market things stay that way for several years, until Intel is forced to react and actually designs a new chipset.

You do know AMD uses the exact same design, not to mention others as well? And it's done for a reason, not because of "evil greedy companies".
 
I'd be a lot more excited for CFL if Intel would stop the bullshit and allow it on Z270 boards. It sure seems like a greedy move on Intel's part.
 
DMI 3.0 would have been great in the days when SATA III SSDs were new. A single M.2 drive running through the DMI link rather than through the main PCIe lanes can saturate it, especially with some SATA traffic, the Ethernet connection, and externally connected USB devices on top... and then there's RAID-configured M.2.

I had an Intel SSD 750 and a Samsung 960 EVO, then later two Samsung 960 EVOs, and I could never achieve full bandwidth due to the DMI 3.0 link on Z170. Moving to the AMD X370 chipset, where they allocate 4 PCIe lanes directly from the CPU for an NVMe drive, made a world of difference. Now the X399 platform has 12 lanes for NVMe drives connected to the CPU. I don't know if Coffee Lake is actually 10% faster than Kaby Lake, but AMD definitely has a better overall platform, even if it's slightly behind on single-threaded performance and clock speeds. By the way, Intel, Kaby Lake was barely 1% faster than Skylake (https://www.hardocp.com/article/2016/12/09/intel_kaby_lake_core_i77700k_ipc_review/3), so we'll need more PCIe lanes and more than a 10% IPC increase.
 
I had an Intel SSD 750 and a Samsung 960 EVO, then later two Samsung 960 EVOs, and I could never achieve full bandwidth due to the DMI 3.0 link on Z170. Moving to the AMD X370 chipset, where they allocate 4 PCIe lanes directly from the CPU for an NVMe drive, made a world of difference. Now the X399 platform has 12 lanes for NVMe drives connected to the CPU. I don't know if Coffee Lake is actually 10% faster than Kaby Lake, but AMD definitely has a better overall platform, even if it's slightly behind on single-threaded performance and clock speeds. By the way, Intel, Kaby Lake was barely 1% faster than Skylake (https://www.hardocp.com/article/2016/12/09/intel_kaby_lake_core_i77700k_ipc_review/3), so we'll need more PCIe lanes and more than a 10% IPC increase.

X399 definitely seems to have a leg up on the M.2 / PCIe storage front. It's lagging behind on SATA, though, and while it does have more PCIe lanes, several of the slots are Gen 2.0, not 3.0, which is bullshit. Also, keep in mind that AMD's SATA RAID implementation offers far more limited stripe sizes, and they don't do RAID 5 at all. USB performance is something I'm going to test, but AMD typically lags slightly behind Intel on that front.
 
X399 definitely seems to have a leg up on the M.2 / PCIe storage front. It's lagging behind on SATA, though, and while it does have more PCIe lanes, several of the slots are Gen 2.0, not 3.0, which is bullshit. Also, keep in mind that AMD's SATA RAID implementation offers far more limited stripe sizes, and they don't do RAID 5 at all. USB performance is something I'm going to test, but AMD typically lags slightly behind Intel on that front.

I do hate how the x1 and x4 slots that hang off the chipset are PCIe 2.0; that's another gripe I have with the MSI X370 Gaming Pro Carbon. I can't speak too much about SATA, as I'm moving toward using a NAS and trying to leave those ports behind, and I don't use USB as much now that I'm unemployed. Still, Intel could've brought a lot more innovation than the usual 3.5GHz, 8MB-L3 quad-core processor with the usual dual-channel memory, connected to the chipset via a 3.96GB/s pipe.
 
Moving to the AMD X370 chipset, where they allocated 4 pcie lanes directly to the cpu for a nvme drive made a world of difference.

They only allocate 4 lanes if you cut down the SATA ports. Otherwise you only get x2 from the CPU. Compromises everywhere.

By the way, Intel, Kabylake was barely 1% faster than Skylake (https://www.hardocp.com/article/2016/12/09/intel_kaby_lake_core_i77700k_ipc_review/3), so we'll need more pcie lanes and more than a 10% ipc increase.

That 1% in the review should be 0%, because they overclocked the SKL, at least for the single-threaded tests. KBL is SKL with higher clocks. And if you ignore the 6-core CFL parts and look at the 4-core ones, it's exactly the same too.

The 7700K is 5-7.5% faster than the 6700K at stock, assuming we overlook DDR4-2133 for the 6700K vs DDR4-2400 for the 7700K.

SKL 14nm
KBL 14nm+
CFL 14nm++

Exact same core. With CFL it's slightly different due to the 6 cores and increased cache.
 
Got to wonder who is worried about M.2 RAID speeds outside of benchmarking, given the lack of discernible improvement over even a SATA SSD for desktop usage; spending to move to an HEDT platform just for that seems pretty absurd...
 
Got to wonder who is worried about M.2 RAID speeds outside of benchmarking, given the lack of discernible improvement over even a SATA SSD for desktop usage; spending to move to an HEDT platform just for that seems pretty absurd...

My mobo (MSI X370 Gaming Pro Carbon) is trash and can only run 2 sticks of DDR4-3200 RAM. If I buy an X399 board, not only can I use 4 sticks of DDR4-3200, it'll be at quad-channel speed, and both of my Samsung 960 EVOs will run at PCIe 3.0 x4, as opposed to one running at 3.0 speed and the second at 2.0 speed.
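The dual- vs quad-channel difference above is straightforward to quantify as theoretical peak bandwidth; whether it shows up in practice depends on how bandwidth-bound the workload is:

```python
# Theoretical peak DDR4 bandwidth: transfer rate (MT/s) times 8 bytes
# per 64-bit channel times the number of channels. Real-world gains
# depend heavily on whether the workload is actually bandwidth-bound.

def ddr4_peak_gbps(mt_per_s: int, channels: int) -> float:
    """Peak bandwidth in GB/s for DDR4 at a given transfer rate."""
    return mt_per_s * 8 * channels / 1000

dual = ddr4_peak_gbps(3200, 2)   # X370 dual channel
quad = ddr4_peak_gbps(3200, 4)   # X399 quad channel
print(f"DDR4-3200 dual channel: {dual:.1f} GB/s, quad: {quad:.1f} GB/s")
```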
 
Sure, but will you notice the difference?

I understand the theory quite clearly; I was clear on the limitations of Z170 when I picked up my current board. But I'm also clear on what will and won't make a difference, and what the cost deltas are.
 