Intel Skylake Core i7-6700K IPC & Overclocking Review @ [H]

If Intel could push out a 6GHz CPU tomorrow that would get all you 2500K/2600K holdouts to finally buy something from them again, don't you think they would?

They already did, it's called Xeon E5-2699 v3
But they will never release that for enthusiasts because it would impact their (WAY more profitable) professional market.
 
They already did, it's called Xeon E5-2699 v3
But they will never release that for enthusiasts because it would impact their (WAY more profitable) professional market.

Try again. That CPU isn't 6GHz. It doesn't run remotely close to that frequency.
 
Intel wants to change that and get more of your money. Obviously they haven't made a compelling enough product for many of you to do that.

So they've decided to go the way of paying the reviewers. I'm disgusted by both PCPer and (from what I've heard, since I haven't yet read the review myself) AnandTech, who've stated that it's time for SB owners to upgrade, despite their own benchmarks clearly demonstrating that the performance increase is not worth the huge cash investment.
 
Try again. That CPU isn't 6GHz. It doesn't run remotely close to that frequency.
6GHz doesn't matter; the point is the performance increase. That Xeon is a MONSTER. If they released it (unlocked) for enthusiasts, they'd have people lining up to buy them (or even two of them; the SR-2 platform was praised in its time), but if they did that, many professional customers (especially small companies) would buy them too, which would damage their Xeon lineup (which costs 4-10 times as much).
Also, we have no idea how much it can overclock, but I'm fairly sure it could at least reach 4GHz on water.
 
This review is ALL OVER THE PLACE.

1) Why use Windows 7 for the tests? Why not the more stable, efficient, and better-optimized Windows 8.1? Better yet, why not Windows 10? What, does something not work properly yet?

2) Why use a Titan? It's a generation old and three flagships behind. The 780 Ti, 980, or 980 Ti would've been better choices. Also, why use one-year-old drivers? 320.xx drivers? Seriously?

3) Why are you using OLD software to benchmark? Cinebench 11.5? WinRAR 4? What in the...?

4) In gaming benchmarks, DON'T TEST at LOW settings and 480p. WTH. Do 1080p/1440p tests with max in-game settings. If you want to test the CPU, just use FXAA for AA and voila!

5) You're seriously testing a game from 2007. Why, again?


It's all nice and dandy that the 6700K is 30% faster overall than a 2600K at 4.5GHz, but you DID NOT use realistic settings, tests, or ANYTHING. This whole review is a myth in my eyes.

It lines up well against the X99 benchmarks they did a year ago. People can compare X99 vs. Z170 using the exact same everything. Using different software, etc., renders the comparisons moot.
 
6GHz doesn't matter; the point is the performance increase. That Xeon is a MONSTER. If they released it (unlocked) for enthusiasts, they'd have people lining up to buy them (or even two of them; the SR-2 platform was praised in its time), but if they did that, many professional customers (especially small companies) would buy them too, which would damage their Xeon lineup (which costs 4-10 times as much).
Also, we have no idea how much it can overclock, but I'm fairly sure it could at least reach 4GHz on water.

There would be a few people who would buy it. The problem is that actual tests and consumer use would reveal that 36 threads go totally unused in the hands of general and even enthusiast-level consumers. I mention raw clock speed because breaking 5.0GHz seems to be the only way many of you Sandy Bridge guys will upgrade if they can't shatter performance barriers through IPC alone. If it was the Pentium compared to the i486 at half the clocks, you guys would jump. But since the IPC increase has been minimal, you guys are asking for clock speed. I think Intel would give you more of that if they could. That Xeon being a monster doesn't matter when it wouldn't be any faster than a 5960X in the hands of consumers.
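As a rough back-of-the-envelope illustration of that clock-vs-IPC point (a sketch only; the IPC gain and clock figures below are made up for the example, not measured):

```python
# Back-of-the-envelope: throughput roughly scales with IPC x clock
# (a simplification; real workloads also hinge on memory, caches, etc.).

def relative_perf(ipc_gain, clock_ghz, base_clock_ghz=4.5):
    """Performance relative to a baseline chip at base_clock_ghz with no IPC gain."""
    return (1.0 + ipc_gain) * (clock_ghz / base_clock_ghz)

# Hypothetical numbers, picked only to illustrate the trade-off:
older = relative_perf(ipc_gain=0.00, clock_ghz=4.8)   # older chip, higher overclock
newer = relative_perf(ipc_gain=0.25, clock_ghz=4.6)   # newer chip, ~25% IPC, lower clock
print(f"relative: {older:.2f} vs {newer:.2f} -> {newer / older - 1:.0%} faster")
```

In that toy comparison the newer chip still ends up roughly 20% ahead despite the lower clock; performance is effectively IPC times clock, so when clocks stall, IPC is the only lever left.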
 
Same here. My i7 2600 does everything I need it to and more, especially considering I have a "lowly" 2GB GTX 960 paired with it. Zen will either be a winner and a return to greatness (or at least competitiveness), or a flop. The last nine years of history doesn't give me much hope, considering the closest AMD has ever been to competing was Deneb and Thuban versus Yorkfield and Clarkdale - six years ago. If it's a flop, Skylake/Kaby Lake it is. Until then, I'll bide my time.

Great article as always guys.

That's great,
let me tell you my experience.
I upgraded from a 2600K to a 3770K to a 4790K, all at 4.7-4.8, and the improvement was outstanding for every upgrade.
I for one plan on upgrading to an Asus Maximus VIII with a Skylake i7 at 4.7 with DDR4 at 3000+MHz.

Also, when the IPC is stronger it makes emulators run a lot faster. HardOCP should put up some Dolphin (Wii) / ePSXe (PS1) benchmarks at 2560x1440.
Retro games look incredible.
 
This review is ALL OVER THE PLACE.

1) Why use Windows 7 for the tests? Why not the more stable, efficient, and better-optimized Windows 8.1? Better yet, why not Windows 10? What, does something not work properly yet?

2) Why use a Titan? It's a generation old and three flagships behind. The 780 Ti, 980, or 980 Ti would've been better choices. Also, why use one-year-old drivers? 320.xx drivers? Seriously?

3) Why are you using OLD software to benchmark? Cinebench 11.5? WinRAR 4? What in the...?

4) In gaming benchmarks, DON'T TEST at LOW settings and 480p. WTH. Do 1080p/1440p tests with max in-game settings. If you want to test the CPU, just use FXAA for AA and voila!

5) You're seriously testing a game from 2007. Why, again?


It's all nice and dandy that the 6700K is 30% faster overall than a 2600K at 4.5GHz, but you DID NOT use realistic settings, tests, or ANYTHING. This whole review is a myth in my eyes.

A lot of enthusiasts didn't switch to Windows 8 or were very slow to embrace it. Most didn't until 8.1. Many enthusiasts still think Windows 8/8.1 sucks dysentery infected horse anus. That's why we haven't bothered with it. And if you had actually read the article, Kyle pointed out that we intend to move to Windows 10 testing and revisit our benchmarks. The thing is, these are what we tested those other platforms with, and as a result we have comparable results across four generations of CPUs.

As for gaming at low resolution, you clearly miss the point spelled out in every motherboard review we do here. It is done to isolate the CPU and motherboard from the graphics cards in the test. If we go to high-resolution game testing, you'll see pretty much the same results for everything because you're GPU bound at that point. That's what GPU reviews are for, BTW. The GPU used doesn't matter since we aren't stressing it in these tests.

Games from 2007? Well, Lost Planet is one of the few multithreaded games out there that indeed scales well with increased core/thread counts.
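To put that isolation argument in toy-model terms (the millisecond figures below are invented purely for illustration, not from our testing): frame time is roughly the longer of the CPU's work and the GPU's work per frame, so shrinking the GPU's share at low resolution is what lets CPU differences show at all.

```python
# Toy model of CPU vs. GPU limits: frame time ~ max(cpu_ms, gpu_ms).
# All millisecond figures here are invented purely to show the shape of the argument.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = {"older CPU": 6.0, "newer CPU": 4.5}            # hypothetical CPU work per frame
gpu_ms = {"480p low": 2.0, "1440p max settings": 12.0}   # hypothetical GPU work per frame

for res, g in gpu_ms.items():
    for name, c in cpu_ms.items():
        print(f"{res:18s} {name}: {fps(c, g):6.1f} fps")
# At 480p the two CPUs separate cleanly (~167 vs ~222 fps);
# at 1440p both land on the same GPU-bound ~83 fps.
```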
 
A lot of enthusiasts didn't switch to Windows 8 or were very slow to embrace it. Most didn't until 8.1. Many enthusiasts still think Windows 8/8.1 sucks dysentery infected horse anus.

hahaha
 
I think Intel would give you more of that if they could. That Xeon being a monster doesn't matter when it wouldn't be any faster than a 5960X in the hands of consumers.

Intel is capable of giving us a 6-core CPU without the IGP. Seriously, 99% of people that have a "K" chip don't use the IGP; they have dedicated graphics.
 
I personally liked Windows 8.1 with Classic Start 8 much better than Windows 7.

Also, now I am using Windows 10 and it blows the doors off every prior OS.

I have no doubt it performs better. But Win7 is already very fast on my rig and it runs all my apps without problems, which Win8 couldn't do. I'm not even stepping into that Win10 compatibility mess for a while.
 
Intel is capable of giving us a 6-core CPU without the IGP. Seriously, 99% of people that have a "K" chip don't use the IGP; they have dedicated graphics.

You're missing the point. I never said they couldn't; Intel has given us the entire Haswell-E line of CPUs, none of which have an IGP. There was an assertion made that Intel could give us higher clocks on Skylake if they wanted to, and I don't think they can. Not while concentrating on performance per watt, which is what the industry demands in most market segments. HEDT CPUs are derived from server CPU silicon. In huge data centers with a lot of CPU density, power and thermals matter a lot. In the mobile market, CPU thermals and performance per watt matter a lot. Intel's running a business, and they are going with what they believe the primary market demand is.

That's not enthusiasts. Again, I think if they could have made Skylake CPUs hit 5.0GHz+ on air, they would have done it.
 
I have no doubt it performs better. But Win7 is already very fast on my rig and it runs all my apps without problems, which Win8 couldn't do. I'm not even stepping into that Win10 compatibility mess for a while.

lol ok, whatever
I have had no issues whatsoever
 
I don't. I think if they could create 6GHz chips with reasonable thermals they would be upping the turbo clocks and overclocking headroom at least gradually each generation. They haven't been doing that.

I've spoken to people from Intel. They are indeed aware of the fact that tons of 2500K and 2600K users have had zero reason to upgrade and flat out haven't. They know there is a current 3-5 year life cycle on PC gaming CPUs and motherboards. Intel wants to change that and get more of your money. Obviously they haven't made a compelling enough product for many of you to do that. I think if they could have given us 5GHz overclocks on air they would have. Intel has promised us this before and always fallen short.

Being aware of it and doing something about it are two very different things. And Intel's roadmap speaks volumes about their priorities, IMO. They're just too concerned with getting a foothold in cell phones to care about advancing the desktop.
 
I am wondering how Skylake compares to older CPUs, especially Sandy Bridge, in real-world gaming situations where users will run games at 1080p at least.

AnandTech's benches suggest negligible improvements...

http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/16

which is hard to reconcile with their final conclusion that there is enough of an incentive for Sandy Bridge owners to upgrade. Really? There is a 37% improvement in synthetic CPU benchmarks, but who cares in the real world if it makes no difference to current gaming performance? Thoughts?

My thoughts: I agree. I see no point in upgrading my Haswell to this when it won't translate into real world benefits for gaming.
 
Being aware of it and doing something about it are two very different things. And Intel's roadmap speaks volumes about their priorities, IMO. They're just too concerned with getting a foothold in cell phones to care about advancing the desktop.

Fair enough.
 
My i7-950 is rocking just fine with my GTX 980.

Sad really, 14nm, and not a big jump yet.
 
I love how everyone needs to justify their purchase to make themselves feel good.
After reselling my i7 and mobo (see sig), it might only cost me $200 or less to upgrade once a Micro Center deal is out for Skylake. I will take a 10%+ boost for that kind of money. Better than upgrading my 980 to a Ti and taking a $500-600 hit.
 
Try again. That CPU isn't 6GHz. It doesn't remotely run close to that frequency.

Yea, base clock is 2.3GHz. Turbo maxes out at 3.6.

Shit ton of cores though, but cores don't count.

Can't make a baby in 1 month with 9 women.
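That's Brooks' law in one line, and Amdahl's law says the same thing with numbers: the serial part of the job caps the speedup no matter how many cores you throw at it. A quick sketch, using hypothetical parallel fractions just to show the shape of it:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), p = parallel fraction, n = threads.
# The parallel fractions below are hypothetical, chosen only to illustrate the point.

def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(f"parallel fraction {p:.0%}: "
          f"4 threads -> {amdahl(p, 4):5.1f}x, 36 threads -> {amdahl(p, 36):5.1f}x")
# If half the work is serial, 36 threads barely beat 4; only near-perfectly parallel
# jobs (rendering, encoding) turn all those Xeon cores into real speed.
```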
 
Zarathustra[H] said:
Yea, base clock is 2.3Ghz. Turbo maxes out at 3.6

Shit ton of cores though, but cores don't count.

Can't make a baby in 1 month with 9 women.

Ah, good ol' Brooks. I seriously need to read The Mythical Man-Month. He's got some gems in there :D
 
I would love to go for one of the non-E CPUs; I really don't need 6 cores, I'd be happy with 4.

The problem is the stupid paste under the lid that cripples them when overclocking and shortens their lifespan, and they have way too few PCIe lanes.

Skylake still has only 16 PCIe lanes coming off the CPU. I know there are many more coming off the chipset, but I still haven't seen anyone test SLI with one GPU on the CPU lanes and one on the chipset lanes.

In the past this was considered a bad idea, and a contributor to stutter. I wonder how well it works...

16 + 20 = 36 lanes is less than the 40 lanes coming off my 3930K, but a hell of a lot better than what socket 1150 chips had to offer.
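For what it's worth, the stutter worry with a card on chipset lanes mostly comes down to the chipset uplink itself. A rough bandwidth budget (a sketch, assuming PCIe 3.0 at roughly 0.985 GB/s per lane each way and treating Z170's DMI 3.0 link as roughly equivalent to a PCIe 3.0 x4 connection):

```python
# Rough bandwidth budget. Assumptions: PCIe 3.0 ~ 0.985 GB/s per lane per direction,
# and Z170's DMI 3.0 uplink is roughly equivalent to a PCIe 3.0 x4 link.

PCIE3_GBS_PER_LANE = 0.985

def link_gbs(lanes):
    return lanes * PCIE3_GBS_PER_LANE

print(f"CPU x16 slot:         {link_gbs(16):5.1f} GB/s")
print(f"CPU x8/x8 SLI (each): {link_gbs(8):5.1f} GB/s")
print(f"Chipset-fed slot:     {link_gbs(4):5.1f} GB/s effective cap "
      f"(the whole DMI link, shared with SATA, USB, NICs, etc.)")
# A GPU hung off the chipset contends for ~3.9 GB/s with everything else behind DMI,
# which is the usual explanation for the stutter concern with mixed-lane SLI.
```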
 
Realistically neither could Sandy Bridge. Most chips were clocking around 4.6-4.8GHz on air or water. While we certainly saw chips that were indeed that capable I'll wager most didn't get pushed that far and are 24/7 stable. 28-40% IPC improvement at nearly the same clocks as Sandy Bridge is nothing to scoff at. Whether or not that's worth the cost of upgrading is entirely up to you but that hardly constitutes an epic fail on Intel's part.

I beg to differ. At first they wouldn't go past 4.8 even on water, but later it was pretty common to get 5.0+ on water. There's a reason the 2500K has remained a favorite for so long.

For gaming (and I mean real gaming, not 640x480) the difference is still negligible.

And it seems to me the 28-40% IPC improvement is more like 10-25% in real-world apps.

Plus the upgrade doesn't come cheap. Prepare to spend big bucks on a CPU, cooler, mobo, and RAM only to get a few more FPS on your favorite shooter.

But hey, you can save a minute when encoding your videos. Seems that it really pays off. :rolleyes::rolleyes:
 
Being aware of it and doing something about it are two very different things. And Intel's roadmap speaks volumes about their priorities, IMO. They're just too concerned with getting a foothold in cell phones to care about advancing the desktop.

What apps are you regularly running that are CPU limited? Seems like encoding and content creation are about all that needs more power.
 
I've done a little research on SSD RAID 0 setups. It seems mostly hit or miss. I don't plan to keep any sensitive data on there anyway, just the OS and some games. So most of it will be recoverable or backed up to the cloud (like Steam, etc.). I'm not too worried. Besides, I already bought the drives. $120 for two 240GB SSDs; I couldn't resist. :D

What's a good OS migration tool?


Oh hell if I know a good tool.

I use a 1TB SSD for OS/workspace and another 1TB SSD for apps/games (you can set Steam to automatically use a different drive for all games, same with Origin and such).

I think once you go from an HDD to an SSD for games, the loading time is more or less waiting on your processor to decompress files.

Now, I've done what you have and used RAID 0 with SSDs. When I started getting SMART errors I just disabled SMART. :) The drives worked for a few more years and never died; I just upgraded.

I might be somewhat of a hypocrite. Haha. I also went from a 3770K to a 5960X, and with the same SSD my load times got shorter. I made the move mainly for video encoding, though. If I didn't do that I would have kept the 3770K for a few more years.
 
A lot of enthusiasts didn't switch to Windows 8 or were very slow to embrace it. Most didn't until 8.1. Many enthusiasts still think Windows 8/8.1 sucks dysentery infected horse anus. That's why we haven't bothered with it. And if you had actually read the article, Kyle pointed out that we intend to move to Windows 10 testing and revisit our benchmarks. The thing is, these are what we tested those other platforms with, and as a result we have comparable results across four generations of CPUs.

As for gaming at low resolution, you clearly miss the point spelled out in every motherboard review we do here. It is done to isolate the CPU and motherboard from the graphics cards in the test. If we go to high-resolution game testing, you'll see pretty much the same results for everything because you're GPU bound at that point. That's what GPU reviews are for, BTW. The GPU used doesn't matter since we aren't stressing it in these tests.

Games from 2007? Well, Lost Planet is one of the few multithreaded games out there that indeed scales well with increased core/thread counts.

You are an enthusiast.

You are supposed to know why Windows 8.1 was better than Windows 7, and because you know better, you should've switched to Windows 8.1. And *let's say* you really don't personally like Windows 8.1. OK then. *Your choice*. But be a professional and show us, in an up-to-date, modern environment, how these pieces of hardware compare. I don't care that you did these tests a year ago. Windows 8.1 launched two years ago, almost to the day.

As for the games at low settings and resolutions... meh? People buy better CPUs to allow their GPUs to push better framerates at max settings and high resolutions. Doing tests at low/480p is the same as the synthetic tests you already did. At least do some proper game tests. Yes, I know games are GPU limited most of the time. Which is AWESOME, since then people would know they don't need to upgrade their CPU to get better framerates, since their GPUs are probably already maxed out whether they're using an AMD FX 6300 or an overclocked i7 6700K.

I hope in the future you'll switch to a newer GPU, newer drivers, newer versions of everything, and DROP low/480p game tests for CPUs because they're irrelevant (PS: we have 3DMark 2006/Vantage/11/13 for testing CPUs in low-res, physics-intensive, non-gaming scenarios). Also, if you think Windows 10 is not ready for prime time, for whatever reason, at least be a decent person and switch to the current fastest, most stable, most efficient Windows available, Windows 8.1.
 
Intel is capable of giving us a 6-core CPU without the IGP. Seriously, 99% of people that have a "K" chip don't use the IGP; they have dedicated graphics.

Yeah, I don't have an IGP with my 3930K.

If I had one, it might be useful for PhysX or something, so that one of my GPUs in SLI isn't loaded more than the other.

Oh wait, it's not an Nvidia chip :p
 
Realistically neither could Sandy Bridge. Most chips were clocking around 4.6-4.8GHz on air or water. While we certainly saw chips that were indeed that capable I'll wager most didn't get pushed that far and are 24/7 stable. 28-40% IPC improvement at nearly the same clocks as Sandy Bridge is nothing to scoff at. Whether or not that's worth the cost of upgrading is entirely up to you but that hardly constitutes an epic fail on Intel's part.

Yeah, I won the silicon lottery on mine.

That being said, I wonder how much more these Skylake chips could do if they didn't have paste under the heat spreader.

It really seems like the thermals are holding them back big time.
 
If you are complaining that Intel is being complacent towards improving game performance, you need to redirect your anger towards Nvidia and AMD for getting complacent and not rising up to the challenge of handling resolutions higher than 2560x with a single GPU. Single-GPU performance needs to be at least triple what it is now to do 4K decently and catch up to a point where upgrading from Sandy Bridge or later would make a difference. This 20 to 30 percent improvement per GPU generation is not ever going to catch up to be able to do 4K, 5K, 8K gaming; something very different needs to happen.

Some of you might not be aware that Intel has gotten so far ahead that they have had the luxury of improving performance of the Iris Pro IGP up to the level of a GTX 560 or AMD 250x while using about 12W.
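Taking that 20 to 30 percent per generation figure at face value, the compounding math shows how long a roughly 3x single-GPU jump would take (a sketch using the post's own numbers; the generation cadence is my assumption):

```python
# Compounding 20-30% per-generation GPU gains toward a ~3x single-GPU target.
import math

target = 3.0  # "at least triple" current single-GPU performance, per the post above

for per_gen in (0.20, 0.30):
    gens = math.log(target) / math.log(1.0 + per_gen)
    print(f"{per_gen:.0%} per generation -> ~{gens:.1f} generations to reach {target:.0f}x")
# Roughly 6 generations at 20% or ~4 at 30% -- the better part of a decade at the
# usual cadence, which is why "something very different needs to happen."
```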
 
Yep.
As res increases, CPU matters less.
... unless you increase GPU performance proportionally.
 
If you are complaining that Intel is being complacent towards improving game performance, you need to redirect your anger towards Nvidia and AMD for getting complacent and not rising up to the challenge of handling resolutions higher than 2560x with a single GPU. Single-GPU performance needs to be at least triple what it is now to do 4K decently and catch up to a point where upgrading from Sandy Bridge or later would make a difference. This 20 to 30 percent improvement per GPU generation is not ever going to catch up to be able to do 4K, 5K, 8K gaming; something very different needs to happen.

Some of you might not be aware that Intel has gotten so far ahead that they have had the luxury of improving performance of the Iris Pro IGP up to the level of a GTX 560 or AMD 250x while using about 12W.

Yeah, AMD and Nvidia have had some issues when it comes to production though.

They are fabless, and other companies (*cough* Apple *cough* Qualcomm *cough*) keep buying up all the latest small-process manufacturing capacity, because there is more money in mobile chips than in desktop GPUs.

Because of this, they are both still stuck at 28nm until the mobile folks move on to the next process and they can get some 14nm or maybe even 10nm action going...
 
Well at least the process is mature by the time we see it :p
 
Intel is focused on performance per watt because energy savings is what the industry demands in both the server and mobile markets.

Is there really a decent improvement of performance-per-watt between Haswell and Skylake? From some of the reviews I've seen, it's just not there.

This is a really bizarre launch. Intel doesn't want to talk about any specifics regarding the chip but is overflowing with a marketing blitz. The architecture shows a completely underwhelming improvement (in any category) compared to Haswell (despite the node shrink). And they even tacked a few more bucks on the price. Looking beyond all the marketing out there and viewing this with a skeptical eye, it doesn't look good.

And with all due respect to Intel, the GPU and ARM segments are showing far better gains generation to generation (ok maybe not AMD GPUs). What the hell is going on over there?
 
Yep.
As res increases, CPU matters less.
... unless you increase GPU performance proportionally.

Honestly, CPU doesn't matter much at low resolutions either, as all you are doing is removing the GPU bottleneck, and showing theoretical render speeds, but usually they just go so fast as to be pointless anyway.

Anything above 60fps = I don't care.

Anything below 60fps = I care a lot.
 
This 20 to 30 percent improvement per GPU generation is not ever going to catch up to be able to do 4K, 5K, 8K gaming; something very different needs to happen.

waaat? Look at the difference between Fermi and Kepler, and between Kepler and Maxwell (which didn't even have the benefit of a node shrink), and compare it to what we get from Intel.

And practically nobody buying an i7-6700K is going to use that IGP. It's ridiculous to force us to pay for something we're never going to use.
 
Zarathustra[H] said:
Honestly, CPU doesn't matter much at low resolutions either, as all you are doing is removing the GPU bottleneck, and showing theoretical render speeds, but usually they just go so fast as to be pointless anyway.

Anything above 60fps = I don't care.

Anything below 60fps = I care a lot.

I wasn't being that specific 'cos there are gamers who use 120Hz+ monitors.
For me, yeah, fairy snuff (fair enough :p).
 
I must be going soft. I don't really feel the need to upgrade.

That said, it means I get to hold on to my older hardware much longer.
 