Intel Kaby Lake Core i7-7700K IPC Review @ [H]

You might want to look into POWER by IBM. It's pretty EPIC and designed completely around parallelization. It supports up to 8-way SMT and is user-configurable for different thread counts based on workload.

I see what you did there ;)

The same could be said for the Sun UltraSPARC T1 and, to a lesser extent, the DEC Alpha EV8. Technically, the STI Cell used in the Sony PS3 could issue up to 1+8 threads simultaneously, although one could argue that those 8 SPE threads aren't true workhorse threads.

Microprocessors designed from the ground up with threading in mind aren't a new concept; there's over 20 years of relevant history there. Even Intel tried to bring something to the table with Itanium.

I, for one, do not want to see x86 go. You've got 30+ years of compiler technology that shouldn't be carelessly tossed just because someone comes up with a more clever ISA.
 
You mean like ARM?

You might want to look into POWER by IBM. It's pretty EPIC and designed completely around parallelization. It supports up to 8-way SMT and is user-configurable for different thread counts based on workload.

Both are old at this point, and neither solves the big issues. It's a shame we're stuck with all the legacy fallback.
 
I'd honestly like to see a completely NEW CPU, utilizing a better "infrastructure" than x86. There are so many transistors "wasted" getting around the limitations of the x86 architecture it isn't even funny.

If a company sat down and actually devised a new instruction set with multi-core/multi-thread execution in mind, especially with an eye towards massive multitasking, it could do far better. None of these things were on the minds of CPU developers when the x86 architecture was designed. Companies have done amazing work adding instructions, register renaming, and task switching, but it's patch upon patch upon patch, which can in no way be as efficient as designing something from the ground up to have these abilities.

I don't think it's that simple. There are a number of processors on the market that are not x86 compatible: Intel's Itanium, IBM's POWER series, MIPS64 Release 6, and a host of ARM designs including AMD's upcoming K12. In addition to the numerous software problems that would arise from trying to use these CPUs with x86 emulation, the fact remains that these processors do not have motherboards which are compatible with typical gaming hardware. They do not have the overclocking capability, layouts, or the device support for things like GeForce series graphics cards. As an example, no company has ever, to my knowledge, done any development work geared towards building a gaming IBM POWER9-based system. If K12 is a solid enough performer, it could be in a good position to make inroads for us, given the announcement that Windows 10 can now run on ARM-based systems. Unfortunately, AMD may not want to make that gamble given their cash flow situation, even if K12 has the muscle to do it.

Honestly, I am thinking that with a truly new design, the machine would be fast enough to "decode on the fly" old x86 code, or even just run a VM with an x86 emulator.

I believe this is a misconception. Building a better architecture shouldn't be too much of a problem; in a lot of ways we already have better architectures that could be made to work in a desktop form, and for certain workloads some of these alternatives are vastly superior performers. Unfortunately, to my knowledge there isn't a single processor out there which can run legacy x86 code without a massive performance hit. Sure, we can run the code of a few years ago without too much trouble via emulation, but therein lies the issue: development continues with legacy x86 code. If x86 development stopped tomorrow, then in a few years we would be in a place where we could emulate x86 code without a performance hit and maintain reliability.

Also, people seem to keep coming back to the idea that we can simply run more threads and that we should see linear gains in all applications with additional threading. Some workloads simply don't benefit much, if at all, from multi-threading. I don't know enough about non-x86 instruction sets to say, but that might be the case on other platforms as well.
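Amdahl's law is the usual way to put numbers on that. A minimal sketch in Python (the parallel fractions here are illustrative assumptions, not measurements of any real workload):

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of the work that can run in parallel and n is the thread count.
def amdahl_speedup(parallel_fraction, threads):
    return 1 / ((1 - parallel_fraction) + parallel_fraction / threads)

for p in (0.50, 0.90, 0.99):  # assumed parallel fractions
    print(f"p = {p:.0%}: " + ", ".join(
        f"{n} threads -> {amdahl_speedup(p, n):.2f}x" for n in (2, 4, 8, 16)))

Even with 90% of the work parallel, 8 threads only gets you about 4.7x, and a half-serial workload tops out below 2x no matter how many cores you throw at it.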

I mean, we are still stuck with clocks, when the articles I read are all about clockless designs being the future.

Maybe they will be, but a lot of technologies have been predicted to be the future. Everything from quantum computing to holographic storage has been predicted. Maybe in a decade or two some of that will start to become a reality. Right now clocks are a simple fact of silicon-based semiconductor designs.

I know what I am talking about has been attempted, but it wasn't by a company with the brain power and deep pockets of Intel.

I don't think the problem is that companies simply haven't thought of creating architectures that can run legacy x86 code as well as or faster than Intel CPUs do today. If someone could do it, they would. If someone had a processor that could be made affordable, that could run native code with far more performance than anything today and run legacy code without a huge performance hit, they would have an instant hit on their hands. We haven't seen that, not because it can't be done or because no one has deep enough pockets, but because at this moment in history such a processor is beyond our technical capabilities as a civilization.
 
I would never have thought I'd still be on my Ivy Bridge i5 running at 4.4 GHz. Absolutely no reason to upgrade after all these years. I'm looking forward with cautious optimism to AMD giving me a reason to throw my money at them. And I will... gladly.
 
Why do people hold onto this hope when AMD's own statements about Zen's performance tell you that such hope is misplaced? IPC should land roughly between Ivy Bridge and Haswell based on the statement: "Zen offers 40% more IPC than Excavator." So far the clocks on Zen look way too low to effectively challenge Skylake or Kaby Lake on equal footing, and low clocks most likely indicate relatively unimpressive overclocking.

I admit AMD has gotten better at keeping secrets. They could have higher-clocked parts on their hands. Even if they do, we aren't likely to see Vishera-type clock speeds with Zen. Intel is gaining clock speed again. Unless Zen clocks a lot better than we think it does and the platform is killer, you are in for disappointment.
 
I'll be the first one to admit I'm wrong and the first one in line to buy Zen if it turns out to be the Core i7 killer some of you are hoping for. However, I wouldn't bet on it. Nothing that has leaked so far seems remotely credible as an indication that Zen is the "Intel Killer" we are hoping for.
 
No one expects Zen to be an "i7" (whatever that means these days, with dual cores being sold as such) killer.

So what if IPC and clocks are not up to Skylake levels? I don't care. I jumped to the 2500K in March 2011. Even if AMD gives me the same IPC with twice as many cores, that's a win. No, games won't run faster; it's been like that for a decade now. I am fine with that, and I am not alone.
 
No one expects Zen to be an "i7" (whatever that means these days, with dual cores being sold as such) killer.

So what if IPC and clocks are not up to Skylake levels? I don't care. I jumped to the 2500K in March 2011. Even if AMD gives me the same IPC with twice as many cores, that's a win. No, games won't run faster; it's been like that for a decade now. I am fine with that, and I am not alone.
After seeing the news yesterday I'm pretty sure everyone expects Zen to be an i7 killer.
 
After seeing the news yesterday I'm pretty sure everyone expects Zen to be an i7 killer.

What news? I only saw something about re-purposed Fiji chips.

Edit: never mind, checked Anandtech. Surprising.
 
...be an "i7" (whatever that means these days, with dual cores being sold as such)

Don't get me started on that. Found out @ work that our development 'workstations' (a joke - they are 14" i5 quad-core HP crap laptops) are going to be replaced with Core i7 for 'power users'. i7-6600U. Ultra-low-power dual core.

And you expect me to dev on a $60B/year application with that????

The past 5 years have been really good to me, but it seems the bigger/better the job, the worse the damn equipment gets.
 
I'll be the first one to admit I'm wrong and the first one in line to buy Zen if it turns out to be the Core i7 killer some of you are hoping for. However, I wouldn't bet on it. Nothing that has leaked so far seems remotely credible as an indication that Zen is the "Intel Killer" we are hoping for.

I don't think many people really believe it will be an "Intel Killer"; the best-case scenario we're hoping for is i5 performance per thread and 8 cores for i7 quad-core prices. Will we get that? Who knows. I don't have great hopes.
 
Don't get me started on that. Found out @ work that our development 'workstations' (a joke - they are 14" i5 quad-core HP crap laptops) are going to be replaced with Core i7 for 'power users'. i7-6600U. Ultra-low-power dual core.

And you expect me to dev on a $60B/year application with that????

The past 5 years have been really good to me, but it seems the bigger/better the job, the worse the damn equipment gets.

I've got a strategy for this: explain to them that the entire -U series is the new Celeron. Even people who don't know anything about computers hate Celerons at this point, after Intel ran the brand into the ground.
 
I've got a strategy for this: explain to them that the entire -U series is the new Celeron. Even people who don't know anything about computers hate Celerons at this point, after Intel ran the brand into the ground.
Lol, that might just work. To be entirely fair, a well-designed laptop with a -U processor can be okay, since it can maintain boost at all times. But if it was well designed from a thermal standpoint, it would be an odd choice to put a low-power SKU into the laptop (unless the battery life meaningfully improves from the lower base clocks - I know this was the case with an older Alienware laptop I once had).
 
From the article: AMD, do we matter? Lisa Su? AMD has a hugely influential and substantial fanbase waiting to wave your flag again. We all still have that Blue Core Thunderbird and 9700 Pro love in our hearts. We are older now and have lots of money to spend on tech and its toys. We are established, influential, and well informed, and all our family members and all their friends ask our advice on computer purchasing and then it trickles down. That is the HardOCP reader profile. Wouldn't you love to have us once again direct all those purchasing dollars with a comment like, "Just look for the AMD Zen (and beyond) badge and you will be getting a quality product."

100% TRUE. I really wish for those days again.
 
Well, if the Blender results hold up for Zen across the board, then AMD has in fact caught up to Intel. Looks like the clocks are even going to be decent as well.
 
Well, if the Blender results hold up for Zen across the board, then AMD has in fact caught up to Intel. Looks like the clocks are even going to be decent as well.

It didn't hold up in the BF1 run, though. The 6900K was up to ~20% faster there and was hitting GPU limits; Zen couldn't even reach that steadily.
 
It didn't hold up in the BF1 run, though. The 6900K was up to ~20% faster there and was hitting GPU limits; Zen couldn't even reach that steadily.

Your hate is funny. Exactly where did you see that? I know one guy said BF1 dropped to 57 fps on the Zen system, but he said most of the time it was 60+. He did say it was running a bit faster on the 6900K, and that could be due to many reasons. You like to take things and blow them out of proportion; also, don't forget the AMD chip was locked at one speed while the 6900K was not. I think most of us here are excited to see what the new chip can do. We know it could spit out gold nuggets and you would still find a reason to hate it. It's been far too long since people have had a real choice in which CPU they want to use.
 
Your hate is funny. Exactly where did you see that? I know one guy said BF1 dropped to 57 fps on the Zen system, but he said most of the time it was 60+. He did say it was running a bit faster on the 6900K, and that could be due to many reasons. You like to take things and blow them out of proportion; also, don't forget the AMD chip was locked at one speed while the 6900K was not. I think most of us here are excited to see what the new chip can do. We know it could spit out gold nuggets and you would still find a reason to hate it. It's been far too long since people have had a real choice in which CPU they want to use.

You should ask yourself how current chips perform in BF1 instead.
 
Intel's Itanium, IBM's POWER series, MIPS64 Release 6, and a host of ARM designs including AMD's upcoming K12. In addition to the numerous software problems that would arise from trying to use these CPUs with x86 emulation, the fact remains that these processors do not have motherboards which are compatible with typical gaming hardware. They do not have the overclocking capability, layouts, or the device support for things like GeForce series graphics cards.

Not because they can't, but because they don't want to. It's just a market they're not interested in. Wintel kind of got in there and cornered the market and so now no other company cares anymore.

NVIDIA does have ARM drivers for their discrete GeForce cards, for example.
 
Well-written review. Disappointing results, though. I was hoping for a reason to upgrade my good old Intel 2500K, but I guess this baby will live on for another year. Long live Sandy Bridge!!!
 
Wow, disappointing isn't even the word. Looks like they changed lids on the processor and that's it. Awful. Keeping Sandy alive in a couple of machines that much longer! Good, I guess.
 
Not because they can't, but because they don't want to. It's just a market they're not interested in. Wintel kind of got in there and cornered the market and so now no other company cares anymore.

NVIDIA does have ARM drivers for their discrete GeForce cards, for example.

The market has always been held back by software compatibility. That's the reason why the market isn't interested in non-x86-based gaming motherboards and DIY PCs.

Wow, disappointing isn't even the word. Looks like they changed lids on the processor and that's it. Awful. Keeping Sandy alive in a couple of machines that much longer! Good, I guess.

The iGPU changed quite a bit, and it overclocks better. This is the first CPU Intel has released since Sandy Bridge that has any real chance of reaching 4.7GHz+ with any reliability. People are whining about the TIM and heat spreader, but the fact remains that the CPU is an improvement in the one area the enthusiast cares about: overclocking.
 
I don't think many people really believe it will be an "Intel Killer"; the best-case scenario we're hoping for is i5 performance per thread and 8 cores for i7 quad-core prices. Will we get that? Who knows. I don't have great hopes.
^This.

Give me Haswell-level IPC and 8 cores that hit 4.2 GHz or better at current i7-6700K prices and I'll be all over it.
 
Yep, both for the competition factor, and the excuse to build a new rig!
 
Uh, shit... Kaby Lake is out and I didn't even know. I remember counting the days until the SB release, which was six years ago. Good times. Now I spend more time on a stupid smartphone than on anything else. Maybe I should still upgrade my rig after all, because it's fun.
 
Zero IPC gains at similar clocks to Skylake, and per Tom's Hardware's review, stress-loading the 7700K yields alarming numbers: thermals hit 104 degrees and load power hit 140W. For a company rolling out its third iteration of 14nm, this is quite embarrassing. Let's just blame it on the thermal paste.
 
Zero IPC gains at similar clocks to Skylake, and per Tom's Hardware's review, stress-loading the 7700K yields alarming numbers: thermals hit 104 degrees and load power hit 140W. For a company rolling out its third iteration of 14nm, this is quite embarrassing. Let's just blame it on the thermal paste.
Damn. They must have been pouring the voltage to it.
 
Zero IPC gains at similar clocks to Skylake, and per Tom's Hardware's review, stress-loading the 7700K yields alarming numbers: thermals hit 104 degrees and load power hit 140W. For a company rolling out its third iteration of 14nm, this is quite embarrassing. Let's just blame it on the thermal paste.

Maybe. I've come down from nearly 90 to 67/68 degrees, though that was after having done the delid.

I wonder if the distance between the die and IHS might be a bit too much. I was always "taught" that the paste is for filling up the imperfections where the two surfaces don't make contact. I wasn't very happy about how much of the TIM was sticking to the die; usually that amount would've been squeezed out between the IHS and cooler if there's good contact. Or so I always believed, anyway.

Edit: to my untrained eye it looks like the two surfaces (die and IHS) didn't make contact at all, and heat transfer was provided through the TIM only.
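To put rough numbers on that gap, here's a back-of-the-envelope sketch of one-dimensional conduction through the TIM layer; the die area, load power, and paste conductivity below are assumptions for illustration, not measurements:

# Temperature drop across the TIM alone: dT = P * t / (k * A)
DIE_AREA_M2 = 126e-6  # ~126 mm^2, roughly a Kaby Lake quad-core die (assumed)
POWER_W = 90          # assumed package power under load
TIM_K = 5.0           # W/(m*K), a decent paste; stock TIM may be worse (assumed)

for thickness_mm in (0.03, 0.05, 0.10):
    t_m = thickness_mm / 1000
    dt = POWER_W * t_m / (TIM_K * DIE_AREA_M2)
    print(f"{thickness_mm:.2f} mm gap -> ~{dt:.1f} C across the paste alone")

Doubling the gap doubles the temperature drop across the paste, which lines up with a delid (thinner gap, better TIM) shaving 20-odd degrees off load temps.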
 

Attachments: 20170107_191004.jpg (107.4 KB)
Maybe Intel should just do what AMD does: solder it.

World's most expensive fab.

World's most expensive desktop CPU.

World's cheapest thermal paste.
 
Some comments on the "Looking Forward" section:

Is this where the lack of competition in the desktop CPU market has gotten us or where the market is not any more? No, I am not blaming AMD...
I agree. Lack of competition is possibly inflating the price point, but doesn't affect the performance.

Has CPU performance scaled to the point that being "faster" does not matter, except to a few?
Yes, though it depends somewhat on how many the "few" are. I even think that point was reached more than five years ago.
Who needs lots of CPU power (at home)?
* Video makers. (For video encoding.)
* Photo enthusiasts. (Image editing.)

For most gaming you don't need a top-end CPU. Mid-range is sufficient.

Intel, with all its resources, cannot even find a reason for you to replace your 1 or 2 year old laptop.
... which is a good thing! Replacing such a new piece of hardware would be a huge waste of resources.

With that we have to ask, do we want more power on the desktop? You know we do, and I know we do, ... Is Intel throwing in the towel? Is it going to abandon the immensely huge PC gaming market, forcing its game developers to finally get off their asses and utilize all those "extra" cores...
I'm very confident, from experience, that "gaming experience" is not correlated with "CPU usage". (On the contrary, high CPU usage has more often been a sign of bad/lazy coding.)
That said, I'm sure there's a market for higher-end consumer CPUs with more computing power (though I don't see "gamers" as the primary target user group). The reason we don't see faster CPUs is that the previous recipe of die shrinking can no longer be applied so easily, due to physical limitations.
So more cores are needed. Double the number of cores and you'll roughly double the die size and power consumption, and you'll also roughly double the production cost. By relying on existing production technology and circuit design, the development cost can be kept to a minimum, preventing the consumer price from doubling outright.
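Doubling die area is actually a bit worse than doubling the cost per chip, because yield falls as dies get bigger. A rough sketch using the classic dies-per-wafer approximation and a negative-binomial yield model; the wafer size, defect density, and clustering parameter are assumed round numbers, not any fab's real figures:

import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.1  # assumed defect density
ALPHA = 3              # assumed defect clustering parameter

def dies_per_wafer(die_area_mm2):
    # Usable wafer area divided by die area, minus an edge-loss term.
    r = WAFER_DIAMETER_MM / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2):
    # Negative-binomial yield model.
    area_cm2 = die_area_mm2 / 100
    return (1 + area_cm2 * DEFECTS_PER_CM2 / ALPHA) ** -ALPHA

good = {}
for area in (125, 250):  # e.g. a quad-core die vs. a doubled die (assumed sizes)
    n, y = dies_per_wafer(area), yield_rate(area)
    good[area] = n * y
    print(f"{area} mm^2: {n} dies/wafer, {y:.1%} yield, ~{n * y:.0f} good dies")
print(f"Cost per good die goes up about {good[125] / good[250]:.2f}x, not just 2x")

With these assumptions, doubling the die costs roughly 2.4x per good die, which is why more cores don't come for free even on a mature process.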
 
Maybe
I've come down from nearly 90 to mid 67/68 degrees

Though after having done the delid
I wonder if the distance between the die and IHS might be a bit too much


I was always "taught" the paste is for filling up the imperfections where the 2 surfaces don't make contact

I wasn't very happy about how much of the TIM was sticking to the die
Usually that amount would've been squeezed out between IHS and cooler if there's good contact

Or so I always believed anyway

Edit
to my untrained eye it looks like the 2 surfaces (DIE and IHS) didn't make contact at all, and heat transfer was provided through the TIM only

But that is a wonderful strategy from Intel to sell more chips: the chip dies from overheating just after the warranty runs out, so they don't need to replace it under warranty.

In manufacturing, a product that barely lasts the warranty period is "best", and it certainly looks like they are "optimizing" the manufacturing process.
 
I am just concerned about one factor: people are assuming IPC from parts that are not entirely equal, and the result is comparing apples to pears.

I used the exact same Anandtech chart for single-threaded Cinebench R15 as used in their 6700K review. The issue I raised is that clock speed radically pushes single-thread scores by a larger margin than actual IPC differences, so turbo-affected scores exaggerate architectural gains.

I am open to explanation, though I'm happy with the final outcome.


Test:

6700K (4/4.2) Skylake
4790K (4/4.4) Devil's Canyon
4770K (3.5/3.9) Haswell
5775C (3.3/3.7) Broadwell
3770K (3.5/3.9) Ivy Bridge
2600K (3.4/3.8) Sandy Bridge

5820K (3.3/3.6) Haswell-E
4820K (3.7/3.9) Ivy Bridge-E
6800K (3.4/3.6) Broadwell-E

Methodology (3.3 GHz is the lowest base clock among these SKUs):

Normalized score = single-thread score x 3.3 / max single-core turbo frequency (GHz)

The Anandtech database scores are turbo-affected; I want to remove the turbo and give an unadjusted baseline. Since single-core turbo is the highest listed frequency, it was easy to calculate.

6700K - (182) 143
5775C - (157) 140
4790K - (181) 135
4770K - (156) 132
3770K - (143) 121
2600K - (135) 117

6800K - (150) 137
5820K - (140) 128
4820K - (140) 118

* Bracketed figures are Anandtech's stock (turbo-affected) scores
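For anyone who wants to check the arithmetic, a small Python sketch that reproduces the normalized column from the stock scores and single-core turbo clocks listed above (truncating to whole numbers, which matches the figures):

# normalized = stock_score * 3.3 / single_core_turbo_ghz
chips = [
    ("6700K", 182, 4.2), ("5775C", 157, 3.7), ("4790K", 181, 4.4),
    ("4770K", 156, 3.9), ("3770K", 143, 3.9), ("2600K", 135, 3.8),
    ("6800K", 150, 3.6), ("5820K", 140, 3.6), ("4820K", 140, 3.9),
]
for name, stock_score, turbo_ghz in chips:
    normalized = int(stock_score * 3.3 / turbo_ghz)  # truncate like the table
    print(f"{name} - ({stock_score}) {normalized}")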

It shows that the higher the base and turbo clocks, the larger the gap between the stock score and the clock-adjusted score.

It works out well: it shows modest architectural single-thread gains in generational order, with the increases in base and turbo clocks accounting for the bulk of the real-world performance gains. I don't think the numbers would differ if one scaled upwards on clock speed instead. Sandy Bridge trails Skylake by right on 18% in single-threaded performance at equal clocks and no turbo.

I was just trying to make all things equal and find a common baseline. I feel that in doing so it shows the actual position of Intel's Core architecture, suggests the IPC brick wall is closer than it looks, and possibly explains why Intel wants to reinvent its architecture after its current roadmap ends in 2020.
 
"Also, we now have an on-die USB 3.1 controller and on-die support for Thunderbolt 3, which can support two 4K displays, but I am not sure what value that has to the PC crowd."

Kyle,

Can you share where you found this information?

From what I understand, Kaby Lake is much like Skylake in its lack of native USB 3.1 or Thunderbolt 3 support. All of that is still being done:
a) at the chipset level not the CPU level
b) by add-in controllers, either Alpine Ridge (Intel's Thunderbolt3/USB3.1 controller), or ASMedia solutions.

I would love for Kaby Lake to have native Thunderbolt 3 and USB 3.1, but all of the Z270 chipset coverage so far has shown that motherboard makers are still relying on 3rd-party solutions for these connections.

So any information to the contrary would be greatly appreciated.
 
"Also, we now have an on-die USB 3.1 controller and on-die support for Thunderbolt 3, which can support two 4K displays, but I am not sure what value that has to the PC crowd."

Kyle,

Can you share where you found this information?

From what I understand, Kaby Lake is much like Skylake in its lack of native USB 3.1 or Thunderbolt 3 support. All of that is still being done:
a) at the chipset level not the CPU level
b) by add-in controllers, either Alpine Ridge (Intel's Thunderbolt3/USB3.1 controller), or ASMedia solutions.

I would love for Kaby Lake to have native Thunderbolt 3 and USB 3.1, but all of the Z270 chipset coverage so far has shown that motherboard makers are still relying on 3rd-party solutions for these connections.

So any information to the contrary would be greatly appreciated.

Native USB 3.1 and Thunderbolt 3 support are not present on Z270. The only changes are that it offers 4 more PCI-Express lanes than Z170 (30 vs. 26) and adds Optane support. For the most part you will only see Alpine Ridge on GIGABYTE motherboards; most other vendors generally use the ASMedia ASM1142.
 
"Also, we now have an on-die USB 3.1 controller and on-die support for Thunderbolt 3, which can support two 4K displays, but I am not sure what value that has to the PC crowd."

Kyle,

Can you share where you found this information?

From what I understand, Kaby Lake is much like Skylake in its lack of native USB 3.1 or Thunderbolt 3 support. All of that is still being done:
a) at the chipset level not the CPU level
b) by add-in controllers, either Alpine Ridge (Intel's Thunderbolt3/USB3.1 controller), or ASMedia solutions.

I would love for Kaby Lake to have native Thunderbolt 3 and USB 3.1, but all of the Z270 chipset coverage so far has shown that motherboard makers are still relying on 3rd-party solutions for these connections.

So any information to the contrary would be greatly appreciated.
As already discussed in this thread, I was incorrect. I did not go back and edit that post, because then people would scream about me being dishonest, etc., so I left it there.

That said, I was told this by a motherboard MFG that was apparently not well informed either. We did not participate with Intel on our Kaby Lake or Z270 articles.
 
I am just concerned about one factor: people are assuming IPC from parts that are not entirely equal, and the result is comparing apples to pears.

I used the exact same Anandtech chart for single-threaded Cinebench R15 as used in their 6700K review. The issue I raised is that clock speed radically pushes single-thread scores by a larger margin than actual IPC differences, so turbo-affected scores exaggerate architectural gains.

I am open to explanation, though I'm happy with the final outcome.


Test:

6700K (4/4.2) Skylake
4790K (4/4.4) Devil's Canyon
4770K (3.5/3.9) Haswell
5775C (3.3/3.7) Broadwell
3770K (3.5/3.9) Ivy Bridge
2600K (3.4/3.8) Sandy Bridge

5820K (3.3/3.6) Haswell-E
4820K (3.7/3.9) Ivy Bridge-E
6800K (3.4/3.6) Broadwell-E

Methodology (3.3 GHz is the lowest base clock among these SKUs):

Normalized score = single-thread score x 3.3 / max single-core turbo frequency (GHz)

The Anandtech database scores are turbo-affected; I want to remove the turbo and give an unadjusted baseline. Since single-core turbo is the highest listed frequency, it was easy to calculate.

6700K - (182) 143
5775C - (157) 140
4790K - (181) 135
4770K - (156) 132
3770K - (143) 121
2600K - (135) 117

6800K - (150) 137
5820K - (140) 128
4820K - (140) 118

* Bracketed figures are Anandtech's stock (turbo-affected) scores

It shows that the higher the base and turbo clocks, the larger the gap between the stock score and the clock-adjusted score.

It works out well: it shows modest architectural single-thread gains in generational order, with the increases in base and turbo clocks accounting for the bulk of the real-world performance gains. I don't think the numbers would differ if one scaled upwards on clock speed instead. Sandy Bridge trails Skylake by right on 18% in single-threaded performance at equal clocks and no turbo.

I was just trying to make all things equal and find a common baseline. I feel that in doing so it shows the actual position of Intel's Core architecture, suggests the IPC brick wall is closer than it looks, and possibly explains why Intel wants to reinvent its architecture after its current roadmap ends in 2020.

This is amazing info - I was always curious what the difference in IPC would be if we compared apples to apples (different generations at a fixed frequency). Out of curiosity, do you know why there are differences (albeit minor) when comparing the same generation? (i.e. a 5820K, 4770K, and 4790K should all score roughly the same if they are all based on the Haswell architecture)

Also, a random anecdote: the Westmere Xeon in my sig scores 140 in the Cinebench R15 single-threaded test (and 1051 in multi) when clocked at 4.6 GHz. Goes to show that 8-year-old tech can still be very relevant when overclocked.
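Running that through the same 3.3 GHz normalization, for what it's worth: 140 x 3.3 / 4.6 works out to about 100, which would put Westmere roughly 15% behind Sandy Bridge's 117 per clock.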
 