Intel Core i7-7700K CPU Synthetic Benchmark Sneak Peek @ [H]

I read about a year ago that a lot of the talent Intel had for designing chips went to AMD.

Zen might be a very good chip. I wonder what the socket design is going to look like.
 
Mobile is more and more important, so the architectures are increasingly optimized for low power use, not for higher performance.

You see it every day in the advertising: trade in this old, slow, bulky PC for this thin and light Surface Pro.
 
They've got nothing left in the tank, apparently. The conditions are prime for AMD to catch up; the timing could not be better for the shrimp to make its move. Then maybe we'd get some real progress from Intel.

This was always going to happen if AMD survived long enough.

Since each die shrink is harder to pull off than the last, and thus takes more money, time, and people to complete successfully, it becomes easier and easier for a trailing company to close the gap with its competitor.

It's a rather unusual dynamic, as these things go.
 
32nm was the last process where shrinking the node allowed for increased clocks. It has been more difficult (and extremely costly - Intel has spent billions of dollars trying to solve this) ever since.

In fact, top overclocks seem to have gone down since 32nm.

I wonder how much of this has to do with the process nodes, though, and how much with the shifting focus toward optimizing the architectures for low power use.
 

The things that become harder for Intel are mountains for AMD to climb, so I don't agree. It hasn't been getting easier and easier, but harder and harder. The difference with AMD is that they think differently; they have to, because they don't have the armies of engineers or budgets the size of small nations to compete with. They have to think outside the box, like the guerrilla filmmaker vs. the studios. It's catch-up time for AMD because, as Intel has gotten closer and closer to lithography limits, they have, luckily for AMD, not been progressing for a long time. They might have slowed enough for AMD's outside-the-box thinking to catch up. It might... I dunno if they have the luck to pull off another great one.
 

Well, there is that.

But let's say it's 10 times more difficult to get 10nm working right than it is to get 14nm working right, while 14nm is only 2-3% behind 10nm from a performance standpoint.

It then becomes more and more feasible for a follower to get within range of the market leader while spending much less than them. They'll never overtake them using this strategy, but they will eventually get close enough that the performance differences are pretty much irrelevant.
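
To put rough numbers on that, here's a throwaway sketch in C using exactly those hypothetical figures (roughly 10x cost per node, ~2.5% performance per node, follower one node behind); the numbers are purely illustrative, not real foundry data:

```c
#include <stdio.h>

/* Illustrative only: assumes each new node costs ~10x more to bring up but
 * yields only ~2.5% more performance, and the follower always sits one node
 * behind the leader. */
int main(void)
{
    double leader_cost = 1.0, leader_perf = 1.0;

    for (int node = 1; node <= 3; node++) {
        leader_cost *= 10.0;     /* each node ~10x harder/costlier */
        leader_perf *= 1.025;    /* ...for only ~2.5% more performance */

        double follower_cost = leader_cost / 10.0;  /* one node behind */
        double follower_perf = leader_perf / 1.025;

        printf("node %d: leader %.3f (cost %6.0f) | follower %.3f (cost %6.0f) | gap %.1f%%\n",
               node, leader_perf, leader_cost, follower_perf, follower_cost,
               (leader_perf / follower_perf - 1.0) * 100.0);
    }
    return 0;
}
```

With those assumptions the gap never opens past a couple of percent while the follower's spend stays an order of magnitude lower, which is the whole argument in a nutshell.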
 

Again, I would not say easier at all. This is not like making a car; there are limits to what AMD can do because they don't have the GNP-sized budget of Intel. If there are any gains they can take advantage of, it's by building a smarter mousetrap, not the same mousetrap with the same specs. And I don't like this "follower" business, as if it hasn't been shown that along the way Intel has destroyed all comers. It has not gotten easier to compete, only harder. There are NO other competitors left; that's how hard it is, not EASIER.
 
Most likely motherboard differences; we may see BIOS updates that improve things or bring it up to par with Skylake. But the data looks about like what everyone else was saying: these are going to be the same, just with factory clock differences. And the iGPU difference, but that isn't as interesting to most of us here.
If it's UEFI, then expect driver updates rather than BIOS updates. EFI hands most things over to the OS to handle.
 
Yeah I know, but it's really not that bad. Even today, almost 5 years later, there is very little difference in GPU performance between x16 Gen2 and x16 Gen3, especially at my 4K resolution.

(Using Gen3 x8 as a stand-in for Gen2 x16 in the test above, as they are similar in performance, though Gen2 x16 is actually ever so slightly faster. The test was performed on a 1080, not a Titan, but I can't imagine the results would be hugely different.)

I kind of doubt the lack of Gen4 will become a deal-breaker any faster than the lack of Gen3 has, so even if I buy a CPU/motherboard right before Gen4 launches, I'll probably be OK for the next 5 years :p

I feel like the latest and greatest PCIe spec is really only useful if you want to go SLI but don't have enough lanes, so you have to drop down to x8, and I will NEVER go SLI again.
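
For anyone wondering why Gen3 x8 is a reasonable stand-in for Gen2 x16, the theoretical link numbers work out like this (quick back-of-the-envelope sketch; figures are raw link bandwidth after encoding overhead, ignoring protocol overhead, with GB meaning 10^9 bytes):

```c
#include <stdio.h>

int main(void)
{
    /* per-lane signalling rate (GT/s) and encoding efficiency, Gen1..Gen4 */
    const double rate[] = {2.5, 5.0, 8.0, 16.0};
    const double eff[]  = {8.0 / 10.0, 8.0 / 10.0, 128.0 / 130.0, 128.0 / 130.0};

    for (int gen = 0; gen < 4; gen++) {
        double per_lane = rate[gen] * eff[gen] / 8.0;  /* GB/s per lane */
        printf("PCIe Gen%d: %.2f GB/s/lane, x8 = %4.1f GB/s, x16 = %4.1f GB/s\n",
               gen + 1, per_lane, per_lane * 8.0, per_lane * 16.0);
    }
    return 0;
}
```

Gen2 x16 comes out at 8.0 GB/s and Gen3 x8 at roughly 7.9 GB/s, which is why the two benchmark so close together.
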
Just keep in mind that the last time I checked, PCI-E 4.0 video cards will not work in a PCI-E 3.0 or older slot.
 

I'm not saying it is easy, I'm just saying that comparatively it is less work.

With Intel laying off 12,000 people this year, it doesn't exactly look like they are gearing up to take on the increasing challenge of smaller die sizes. Even if AMD doesn't increase its engineering staff, continuing at the same pace always fighting smaller technical challenges (albeit still difficult ones) than the market leader will likely over time shrink the gap, especially once we hit the wall on die shrinks.
 
Just keep in mind that the last time I checked, PCI-E 4.0 video cards will not work in a PCI-E 3.0 or older slot.


Interesting. I have not read up much on PCIe 4.0 yet. What's your source regarding this info?

The PCI-SIG website states that PCIe 4.0 maintains backwards and forwards compatibility in both software and the mechanical interface.
 

The fuss came from the confusion at some sites claiming that you could draw 300 W from the slot.
http://www.tomshardware.com/news/pcie-4.0-power-speed-express,32525.html
http://www.techspot.com/news/66048-pcie-40-make-auxiliary-power-cables-gpus-obsolete.html
http://www.techspot.com/news/66108-pcie-4-wont-support-300-watts-slot-power.html
 
Yeah, Piledriver proved AMD can make a competitive chip when they don't get distracted with other projects, like integrating GPU with CPU. Llano was a stupid management decision to fast-track, and shelving it would have likely delayed Trinity only six more months, but not distracted engineers from Bulldozer.

Yeah, it's half the IPC of Intel today. But if it had launched in place of Bulldozer in 2011, it wouldn't have been the train-wreck Bulldozer was. It would have matched the IPC of K10 and been faster-clocked, and also added two more cores.

When it comes down to it, architectural choices can make a massive difference, depending on the approach and the time available for optimization. Just look at what Maxwell brought to the table. Since Intel has decided to tread water on the desktop until 2018 with Coffee Lake, the door is wide open. We'll have to see what AMD brings to the table.
 
I think you are giving Piledriver more credit than it deserves.

Firstly, it launched three quarters after Bulldozer. You can't just snap your fingers and have three quarters of development time completed.

Secondly, even if it had launched when Bulldozer did, it would still have been competing with Sandy Bridge, and the comparison is not favorable, even in well-threaded benchmarks where one would expect the 8-core FX-8350 to dominate the 4-core i7-2600K. And that's not to mention that Sandy Bridge-E launched only a month later and ran away with it, while Ivy Bridge wasn't far off and would have grown the gap even further.

I don't think there is any way to sugarcoat it. Everything AMD has done on the CPU front after Athlon 64 has been a fail.

Phenom was a minor fail.
Phenom II fixed that somewhat, but it was too little, too late, and still disappointing.
And everything from Bulldozer on has been a complete and total fail, including Piledriver and Excavator.

I'm hoping they can redeem themselves with Zen, but I'm not betting any money on it.
 
If that turns out to be a typical overclock, and not some cherry picked wunderchip, they might just redeem themselves slightly.
That was with a Corsair AIO water cooling system. Also, both articles say the CPUs are slightly unstable at those clocks if I recall correctly.

Edit: on rereading, the bit-tech guys don't say the OC is unstable, but they are running a very high 1.44 V and have the processor pulling 56 W more than the non-OC processor at load, and even an extra 12 W at idle. I'd be worried about long-term longevity pushing that hard for a 10% OC.
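
For a rough sense of why the power jumps so much for a ~10% clock bump: dynamic power scales roughly with frequency times voltage squared. A minimal sketch, assuming a stock voltage of around 1.25 V (that baseline is my assumption; the 1.44 V and ~10% figures are from the articles above):

```c
#include <stdio.h>

int main(void)
{
    const double v_stock = 1.25;   /* assumed stock voltage, for illustration */
    const double v_oc    = 1.44;   /* reported overclock voltage */
    const double f_gain  = 1.10;   /* ~10% higher clock */

    /* dynamic power ~ C * f * V^2; the capacitance term cancels in the ratio */
    double ratio = f_gain * (v_oc / v_stock) * (v_oc / v_stock);
    printf("~%.0f%% more dynamic power for a ~%.0f%% clock gain\n",
           (ratio - 1.0) * 100.0, (f_gain - 1.0) * 100.0);
    return 0;
}
```

That back-of-the-envelope ~45% increase lines up loosely with the extra 56 W reported at load.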
 
Interesting.

The point I took home is that if I was going to upgrade my platform at this point, I may see a better IMC, which is good.

IDK, I still have 4700K CPUs running, but if year end brings a bonus in salary, I may redo my main computer all the way around... so this is of interest to me, since the whole thing will be all next-gen.
 
The process isn't everything. Yes, shrinking the die can allow for higher clocks at lower power, and can even reduce cost per chip if the development costs don't go nuts.

Having said all that, there are still 1001 different factors that can and do affect performance.

To give you an example of what AMD is trying to do with Zen, here are some of the differences (we'll have to wait for benchmarks to see if any of it really matters). Just to note, AMD hasn't released detailed papers on Zen quite yet, so some of this could be partly marketing; if things are shared between cores or something, we can't really say yet.

- Out-of-order loads... Zen can handle 72 out-of-order loads, the i7 36.
- The L1 cache is now write-back vs. write-through on Excavator (the i7 is write-back as well).
- A larger store queue... I believe AMD is attempting to remedy something called 4K aliasing, which affects Intel (and, I believe, current AMD) chips (more on this below).
- They have also addressed an issue with earlier AMD CPUs: each core can now access every cache with equal latency.
- Zen is going to be a two-thread-per-core SMT part. SMT should lead to power savings and possible performance bumps; it's also tech that could possibly slow some software down. The P4 used SMT... but then so do the POWER chips, and it helps them punch well above their weight. So we'll see if AMD got it right. It also has a new micro-op cache system that works with the SMT stuff; it sounds to me like software optimization will make a big difference with the micro-op stuff... hopefully it doesn't hinder software that hasn't been optimized.
- 5x the L3 cache bandwidth vs. Excavator, and 2x the bandwidth on the L1 and L2 caches.
- The branch prediction system has also been decoupled from the fetch stage.

There is likely a ton more... those are some of the things we sort of know now. The benchmarks should be fun. Man, I really do wish they would learn how to market their chips, though. I mean, this Zen slide says everything I just said in a few lines... but man, I hope they are planning to sell these to the non-geek crowd somehow. lol :)
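
On the 4K-aliasing point in the list above: the short version is that the core only compares the low 12 address bits when checking whether a load conflicts with an in-flight store, so a load whose address differs from a recent store's by a multiple of 4 KiB gets falsely flagged and replayed. A minimal, hypothetical demo (my own sketch, not anything from AMD's or Intel's material; how big the slowdown is, if any, depends entirely on the microarchitecture and compiler settings):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Copy-with-add loop: each iteration loads src[i] and stores dst[i], so the
 * load in iteration i+1 follows the store from iteration i. If those two
 * addresses differ by a multiple of 4096 bytes, the partial (low-12-bit)
 * address check flags a false conflict and the load gets replayed. */
static double run(float *dst, const float *src, size_t n)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int rep = 0; rep < 200000; rep++)
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] + 1.0f;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int main(void)
{
    const size_t n = 1024;                    /* 4 KiB worth of floats */
    float *buf = aligned_alloc(4096, 16384);  /* page-aligned scratch buffer */
    if (!buf) return 1;
    memset(buf, 0, 16384);

    /* aliased: dst sits 4096 + 4 bytes past src, so store(i) and load(i+1)
     * share their low 12 address bits */
    double aliased = run(buf + 1025, buf, n);
    /* clean: nudge dst by one more cache line so the low bits no longer match */
    double clean   = run(buf + 1025 + 16, buf, n);

    printf("4K-aliased: %.3f s   offset by a cache line: %.3f s\n",
           aliased, clean);
    free(buf);
    return 0;
}
```

On cores affected by the issue, the first timing typically comes out noticeably worse than the second even though both loops do identical work.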
 
Interesting. I have not read up much on PCIe 4.0 yet. What's your source regarding this info?

The PCI-SIG website states that PCIe 4.0 maintains backwards and forwards compatibility in both software and the mechanical interface.
Like I said, last time I checked. The confusion in the way it was being reported initially probably came from the double-length slots used in HPC and server hardware.

26/06/2015 (http://www.kitguru.net/components/g...es-and-new-connector-to-be-finalized-by-2017/):
PCI Express 4.0 will utilize a new connector, but the specification will be backward compatible mechanically and electrically with PCI Express 3.0, which means that it will be possible to use today’s add-in-cards in PCIe 4.0-based systems, but future AICs will not work with PCIe 3.0.

But of course I would believe the source over a third-party (unlike Wikipedia...).

Will PCIe 4.0 products be compatible with existing PCIe 1.x, PCIe 2.x and PCIe 3.x products?

PCI-SIG is proud of its long heritage of developing compatible architectures and its members have consistently produced compatible and interoperable products. In keeping with this tradition, the PCIe 4.0 architecture is compatible with prior generations of this technology, from software to clocking architecture to mechanical interfaces. That is to say PCIe 1.x, 2.x and 3.x cards will seamlessly plug into PCIe 4.0-capable slots and operate at the highest performance levels possible. Similarly, all PCIe 4.0 cards will plug into PCIe 1.x-, PCIe 2.x- and PCIe 3.x-capable slots and operate at the highest performance levels supported by those configurations.
Nope. As pointed out above, the information I was going on was 2.5 months older than this. I was always skeptical of how sites were reporting the power to begin with. The only way to eliminate the need for auxiliary cables in this way would be to also update ATX, or create a new sub-spec, in which the extra power is provided through the 20+4 cable to the motherboard.
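
So in practice, mixing generations just means the link trains down to whatever both ends support. A trivial sketch of that arithmetic (the per-lane figures are the same theoretical ones as in the bandwidth sketch earlier in the thread; the Gen4-card-in-a-Gen3-slot pairing is just an example):

```c
#include <stdio.h>

/* usable GB/s per lane after encoding overhead, Gen1..Gen4 */
static double per_lane(int gen)
{
    const double tbl[] = {0.25, 0.50, 0.985, 1.969};
    return tbl[gen - 1];
}

int main(void)
{
    int card_gen = 4, card_lanes = 16;   /* e.g. a future Gen4 card...  */
    int slot_gen = 3, slot_lanes = 16;   /* ...dropped into a Gen3 slot */

    /* the link trains at the highest generation and width both ends support */
    int gen   = card_gen  < slot_gen  ? card_gen  : slot_gen;
    int lanes = card_lanes < slot_lanes ? card_lanes : slot_lanes;

    printf("link runs at Gen%d x%d, roughly %.1f GB/s\n",
           gen, lanes, per_lane(gen) * lanes);
    return 0;
}
```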
 
I'll be sticking with my E5 [email protected] (GPU = 980ti) - I'll go back to my original plan and buy one of those EK360 AIO coolers & GPU block for my 980ti and have a cooler 2017..
 
Glad I decided to go X99 this summer.

Yeah, unless KBL goes over 5 GHz fairly easily, I might do the same myself. My HTPC packed it in a few weeks ago. I might just slide the 3770K out to the living room and build myself a hex-core main desktop. Either that, or switch my server over to a Xeon and free up the other 3770K.
 
I'm not saying it is easy, I'm just saying that comparatively it is less work.

With Intel laying off 12,000 people this year, it doesn't exactly look like they are gearing up to take on the increasing challenge of smaller die sizes. Even if AMD doesn't increase its engineering staff, continuing at the same pace always fighting smaller technical challenges (albeit still difficult ones) than the market leader will likely over time shrink the gap, especially once we hit the wall on die shrinks.

You realize those 12K layoffs are more than AMD's entire workforce by a few thousand, as of when it was last counted in 2015? :meh:
 
- Zen is going to be a two-thread-per-core SMT part. SMT should lead to power savings and possible performance bumps; it's also tech that could possibly slow some software down. The P4 used SMT... but then so do the POWER chips, and it helps them punch well above their weight.

And the i3 and i7 and laptop i5s. It just seems weird that you completely negated that all of Intel's Hyper-Threading is SMT,
and all of those CPUs still have issues where it slows down some software (even after disabling core parking).
 

Putting two and two together:
Intel licenses graphics IP from AMD
Intel cutting 12,000 people
Intel wanting to keep this quiet...

Would it be possible Intel is laying off large portions of their graphics team? Is AMD doing the heavy lifting for Intel iGPU design?

I mean once employees find out you are targeting their division for a reduction, things get ugly fast. The best talent is usually the first to leave because it's easiest for them to get jobs elsewhere. This leads to large product delays from old architectures still in play.
 
Nah, that was in April; this just happened. I wouldn't think they are connected.
 
It seems Intel CPU tech advances involving huge gains in speed/performance have really slowed to a crawl in recent years. Kaby Lake looks to be shaping up as a very good reason not to bother plunking down any hard-earned cash on a new CPU/MB/RAM anytime soon, especially for those already sailing on Skylake. I'm still plenty satisfied with the performance of my OC'ed Haswell.
 
The 7700K seems to be a whole lot of meh, but I wonder if there are benefits for downstream parts? For instance, I'm running a 3770S, the 65W part, so I would be interested in the Kaby Lake equivalent. I expect those with T-version CPUs will be similarly interested in low-voltage Kaby Lake CPUs.
 
That is true, but we are talking about an IP contract that was only signed this week or last week, lol; they would not have had access to AMD IP prior to that contract being signed.
 
And the i3 and i7 and laptop i5s. It just seems weird that you completely negated that all of Intel's Hyper-Threading is SMT,
and all of those CPUs still have issues where it slows down some software (even after disabling core parking).

My point was only that SMT can be implemented well or terribly... I pointed out the P4 because Intel has not always done it right in the past. Yes, Hyper-Threading is in the i-series chips as well; you're right, of course. I wasn't trying to mislead there. :) Having said that, "Hyper-Threading" is an Intel marketing name... and although it's a form of SMT, most programmers would call it a partial SMT implementation based on how it handles threads. Honestly, that may be the way to go anyway, as full SMT isn't the sort of thing that can really be disabled, as I understand it. AMD had to go with a full SMT implementation, which is a Sun patent if I'm not mistaken. We'll have to see how well AMD has done with the implementation, I guess. What they have described in loose terms sounds a lot like the SMT implementation on the current high-end IBM POWER chips; of course, those chips have the advantage of always being used in very customized systems.

I think we all know AMD is playing catch-up... so yes, lots of the things you read about Zen so far are exactly that: catch-up features. They have added a few new ones and increased some cache sizes, the speed of prefetch, cache state stores, etc. Will it allow them to catch up? We'll see. I'm not expecting them to blow out the i7... just get a lot closer than they have been for a very long time. :)
 
For those who need it: better video decoding, and native Thunderbolt support?

Still not worth it unless you're coming from a system from before Sandy/Ivy, or if you're not the kind to overclock your Sandy/Ivy setup. There is no shortage of CPU power for decoding video on Sandy Bridge. If you go for the efficiency argument, you'd need years upon years of continuous use on Sandy before the power savings of Sky/Kaby Lake make up for the cost of the new motherboard and CPU.

Like with Haswell over Sandy/Ivy... not really a justifiable upgrade if you can overclock. The only people who are going to see significant gains are folks still on Core 2 and AM3 setups. I went from an AM3/Phenom II X6 to Haswell right when Skylake was announced, and I think I picked the right time to do it. If someone had a budget i3 system on Sandy, Ivy, or Haswell and was considering Sky or Kaby Lake... nope. I'd still point to maxing out their motherboard's socket capabilities first. So, realistically? Not many of us have a reason to jump unless we've got money to burn or burning parts that need to be replaced.
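
To put very rough numbers on the efficiency argument, here's a quick payback sketch (every figure is an assumption for illustration: roughly $450 for a new board + CPU, ~30 W saved on average, $0.12 per kWh):

```c
#include <stdio.h>

int main(void)
{
    const double upgrade_cost  = 450.0;   /* USD, assumed */
    const double watts_saved   = 30.0;    /* average power saved, assumed */
    const double price_per_kwh = 0.12;    /* USD per kWh, assumed */

    double dollars_per_hour = (watts_saved / 1000.0) * price_per_kwh;
    double hours = upgrade_cost / dollars_per_hour;

    printf("Payback after %.0f hours of continuous use (~%.1f years)\n",
           hours, hours / (24.0 * 365.0));
    return 0;
}
```

With those assumptions it's north of a decade of 24/7 use before the electricity savings cover the upgrade, which is the point: efficiency alone doesn't justify it.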
 
Kaby Lake's main marketing push for the enthusiast DIY crowd will likely be based around the idea of "5 GHz" OCs, even though in practice that number is basically symbolic compared to what is already achievable from Intel's parts on the market. A 5.1 GHz Kaby Lake OC over, say, a 4.6-4.8 GHz Skylake OC isn't really meaningful in the grand scheme of things.

Otherwise, Kaby Lake is not going to be a draw for that specific market segment, nor do I think it was a priority.

What Intel can do is be aggressive with pricing in the back-end channels, even if they want to maintain an official MSRP of $350, so that street prices fall at a faster rate than they did for the equivalent Skylake parts.

With that said, I'd be cautious about early Kaby Lake OC results. Those 5 GHz results may or may not be as common and stable as first impressions suggest.
 
Damn it, I'm in need of more CPU power, as I'm encoding videos for YouTube on a daily basis and would gladly reduce my encoding times even if it's just a minute or two. I currently have an i5-4670K and was going to go either Kaby Lake or Zen; seeing this makes me pissed off. I might as well pick up a Skylake now. I wasted my time thinking Skylake might be too little of an upgrade and hoping Kaby Lake would bring IPC gains plus maybe an extra 5-10% from overclocking, but this is frustrating. From the few overclock results I've seen, it doesn't seem like KL will improve overclocking much either, maybe 200-300 MHz or so, and you also pay a heftier price for it, so it's a pretty moot point.

I don't think AMD Zen is capable of blowing Intel out of the water; even matching would probably be a stretch, but these are the times you wish AMD could step up big time.

Right now a 3-5 minute video takes 20-30 minutes to encode with a few effects applied in After Effects, and I've done nearly 2000 of these videos. It sucks that in 2016 I cannot shave that down by a considerable amount; you'd expect computer hardware to be at a point where it would only take 5-10 minutes, but CPU development has been stagnant for years now...
 

Are you encoding purely via CPU or using hardware acceleration?

If it's in software on the CPU, I would suggest waiting for Zen, as encoding is one area that scales well with more cores. Zen is likely to offer more in that area for the money, since it is highly unlikely to compete on single-threaded performance (or else, I guess, it won't sell).

Unless you can still snag a Skylake build at a large discount, but at least in NA it seems to me like prices have gone up somewhat since Black Friday.
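
A quick Amdahl's-law sketch of why more cores beat a small clock/IPC bump for encoding (the 95% parallel fraction is my assumption, though video encoding is usually well up in that range):

```c
#include <stdio.h>

/* Amdahl's law: speedup for a workload whose parallel fraction is p */
static double amdahl(double p, int cores)
{
    return 1.0 / ((1.0 - p) + p / cores);
}

int main(void)
{
    const double p = 0.95;   /* assumed parallel fraction of the encode */
    printf("4 cores: %.2fx   8 cores: %.2fx   vs. a ~10%% IPC/clock bump: 1.10x\n",
           amdahl(p, 4), amdahl(p, 8));
    return 0;
}
```

With those assumptions, eight cores are worth roughly a 5.9x speedup versus about 3.5x for four, while a 10% clock/IPC bump is, well, 1.1x.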
 

After Effects benefits most from parallelism, not newer CPU generations. A 30-second raytrace render at quality level 3 that would take 2-3 hours on the CPU renders in about 10 minutes on a GTX 470 when CUDA is enabled in After Effects, and in about 7 minutes on a GTX 580. If rendering is important to you, then having a dedicated box (something you upgraded from) with the right GPU makes all the difference in the world.
 