Intel Skylake Core i7-6700K IPC & Overclocking Review @ [H]

Yeah. I just can't overclock my cards, which is kind of a bummer. Also, since all my other PCIe slots are dead, I can't upgrade. I game @ 4K. I was thinking I might add a 3rd card eventually. Oh well. I guess I'll stick with it for a while.

I needed to upgrade from my 90GB SSD (I think it's broken too, could be the motherboard) to a 480GB SSD anyway, so now I can do that. I guess I'll find out if my SATA controller is shot.


I think you might have missed the point.

You said your cards run at 90F/70F; that is about 32C for the hottest card. I don't think even water-cooled cards can run that cold when they are IDLE, let alone under load.

Perhaps you meant 90C/70C? That would be a far more reasonable temperature.

On topic:

The 6600K and 6700K are both on sale in Taiwan, along with Z170 mobos from all of the big 3. (Currently only Z170 boards are available, but the CPUs are listed as compatible with H170 and H110 as well.)

We have a number of Z170 motherboards available, but only 1 is specifically labeled as DDR3 compatible (Gigabyte Z170-HD3 DDR3); all of the others are labeled as DDR4. The 6600K is just a tad more expensive than the 4690K, and the 6700K is just a tad more expensive than the 4790K.
 
I beg to differ. At first they wouldn't go past 4.8 even on water. But later it was pretty common to get 5.0+ on water. There's a reason the 2500K has remained a favorite for so long.

For gaming (and I mean real gaming, not 640x480) the difference is still negligible.

And it seems to me the 28-40% IPC improvement is more like 10-25% in real-world apps.

Plus the upgrade doesn't come cheap. Prepare to spend big bucks on a CPU, cooler, mobo, and RAM to only get a few more FPS in your favorite shooter.

But hey, you can save a minute when encoding your videos. Seems that it really pays off. :rolleyes::rolleyes:

One thing I've noticed is that many of the people saying they are going to stick it out with their 2500K and 2600K chips for a little longer are rocking less than 4.7GHz overclocks on their Sandy Bridge chips. That's why I mentioned what Sandy Bridge could realistically do. It wasn't just about what we could get on water in open-air test benches, but in actual cases, under desks, or whatever conditions actual users were running them in. There were more CPUs that couldn't hit 5.0GHz on water than there were chips that could. Plain and simple.

What I'm saying is that Skylake isn't all that bad when you consider the clocks people got in the real world with their Sandy Bridge chips and what we are seeing from Skylake now. The two align fairly well. And don't forget, Skylake just came out. People may find settings that help out, and CPU silicon may improve with additional steppings and/or in specific batches and production runs. You even pointed out that Sandy Bridge got better over time. The very things that improved Sandy Bridge might happen to Skylake.

We just don't know. But realistically, for the average overclocker I think Skylake isn't that far off of Sandy Bridge's realistic every day overclocks. I'm more confident about users being able to get 4.7GHz on average with Skylake vs. 4.8GHz+ on Sandy Bridge.

Now, I'm not saying the cost is necessarily worth it. That's a personal choice.
 
Well, I'm now in a quandary. I have taken over our church media center and need to build a new PC; the old one is an i5 (not sure which) with 6GB of RAM, and it struggles and mostly fails. The tasks required are video production and editing, 32-channel digital sound board mixing and processing, light animation, and other work along those lines. I also pump video to 3 projectors and need to add 3 more monitors via a junction box. I use Blender, Adobe Master Suite, HandBrake, AutoCAD/Vectorworks, and a few other programs in this line.

I have been waiting for Skylake to come out but am not overly impressed, so my question is: would a 5820K be just about as effective? Should I consider a Xeon processor? I never considered a Xeon until just now and have never built a system with one. Since gaming is not a concern, input would be appreciated. If I played games on this system across 3 giant projectors, I would never go back home.

Thanks.
 
Well, I'm now in a quandary. I have taken over our church media center and need to build a new PC; the old one is an i5 (not sure which) with 6GB of RAM, and it struggles and mostly fails. The tasks required are video production and editing, 32-channel digital sound board mixing and processing, light animation, and other work along those lines. I also pump video to 3 projectors and need to add 3 more monitors via a junction box. I use Blender, Adobe Master Suite, HandBrake, AutoCAD/Vectorworks, and a few other programs in this line.

I have been waiting for Skylake to come out but am not overly impressed, so my question is: would a 5820K be just about as effective? Should I consider a Xeon processor? I never considered a Xeon until just now and have never built a system with one. Since gaming is not a concern, input would be appreciated. If I played games on this system across 3 giant projectors, I would never go back home.

Thanks.

Honestly, a 5820K or 5930K sounds perfect for you. Those applications can actually leverage the additional cores.
 
The only i7-920 users this would be an upgrade path for would be the owners of boards that don't support Westmere.

The other group that would see benefits would be Socket 775 owners.
 
So Skylake is due to release tomorrow from what I found out at Fry's. My local Fry's is also not getting any Skylakes in for release (shipments delayed), further confirming the shortage theory.
 
Weeeeellllllll gents, 2500k it remains.

Might as well wait till some DX12 games come out and see how dem cores affect dem frames at 4K.

Hoping a beeftastic CPU will be out in time to complement Pascal.
 
Coming from an i7-920, the gains will be big for me. I get those of you only a generation or two out not wanting to upgrade, but this was right about what I expected, and I have already bought half the parts needed for my Skylake build.

Now I just need to find the 6700K for preorder in the US for $350 or less.
 
Weeeeellllllll gents, 2500k it remains.

Might as well wait till some DX12 games come out and see how dem cores affect dem frames at 4K.

Hoping a beeftastic CPU will be out in time to complement Pascal.

What are you talking about? A 2500K is already going to bottleneck you at 4K with today's games.
 
On games that aren't bottlenecked by the GPU, the Skylake 6600K and 6700K show pretty big FPS increases.

Here is a video of the 6600K vs. your 2500K. The 4690K runs at the same clocks, so it's roughly a clock-for-clock comparison. Still a decent FPS increase. But let's keep in mind: CPUs aren't only for gaming.

https://www.youtube.com/watch?v=WZ_5p9wd2dk&feature=youtu.be

The [H] review shows a 15-20% increase in games when settings are turned way down to be CPU limited.

So 20% is good, but we get much greater bumps from GPU upgrades.


And what game at 4K isn't GPU limited?

Also I don't trust that video as far as I can throw it. It would be the only example of people getting those results.
I trust techreport
http://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/7
They show almost no difference.

I also trust anandtech.
http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/16

So maybe you get 10%, but maybe not; both the TechReport and AnandTech reviews use non-overclocked CPUs.
For instance, the 6700K @ default clock running GTA V @ 1080p does 74fps,
the 6700K @ 3GHz running GTA V @ 1080p does 70.65fps,
the 2600K @ 3GHz only does 63fps,
and the 2600K @ default gets back up to 69fps.

So from that we can see that 74fps is pretty much GPU limited and even at stock the 2600K is very close to that. Jump to 4K and the CPU matters even less.
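For what it's worth, here's the arithmetic behind that conclusion made explicit, as a quick Python sketch using the GTA V numbers quoted above (treating the stock 6700K's 74fps as the GPU-limited ceiling, as argued above):

[code]
# Rough arithmetic on the GTA V figures quoted above.
results = {
    "6700K @ stock": 74.0,
    "6700K @ 3GHz": 70.65,
    "2600K @ 3GHz": 63.0,
    "2600K @ stock": 69.0,
}

ceiling = results["6700K @ stock"]  # assume this is the GPU-limited ceiling
for name, fps in results.items():
    gap = (ceiling - fps) / ceiling * 100  # percent below the ceiling
    print(f"{name}: {fps:.2f} fps ({gap:.1f}% below the ceiling)")

# Clock-for-clock (both at 3GHz) the gap is ~12%, but at stock the 2600K
# sits only ~7% under the ceiling, i.e. already close to GPU limited.
ipc_gap = (70.65 - 63.0) / 63.0 * 100
print(f"Clock-for-clock gap at 3GHz: {ipc_gap:.1f}%")
[/code]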
 
The [H] review shows a 15-20% increase in games when settings are turned way down to be CPU limited.

So 20% is good, but we get much greater bumps from GPU upgrades.


And what game at 4K isn't GPU limited?

Oh, I agree. I'd love to get more out of CPUs in gaming. But as I said, CPUs aren't only for gaming.

Skylake also brings DDR4 and SATA Express. Things people should be excited about too.
 
I also trust anandtech.

I also trust AnandTech to screw up a CPU/RAM review with crappy GPU-bottlenecked tests that are all just margin of error.

Shadow of Mordor on ASUS GTX 980 Strix 4GB
1080p Ultra, Average Frames Per Second

Intel Pentium G3258 3.2 GHz 99.92 FPS
Intel Core i7 6700K 4.0 GHz 99.69 FPS
Intel Core i7 4770K 3.5 GHz 99.62 FPS
Intel Core i7 4790K 4.0 GHz 98.7 FPS
http://www.anandtech.com/bench/CPU/1321
 
I'm saying straight up, that video is a lie.

When every review site's benches tell one story, and some random youtube video shows something else I tend not to believe it.

Especially when the review sites would love to show a 15-20% bump in Skylake. It would be super exciting and get lots of clicks.
Yep, I call BS on that video for many reasons. NO other review shows anything like that, and the 4790K is flat out tanking in spots where it should not. I have a 4770K and I don't drop down like that in those same spots, so to say it's a CPU limit causing those drops is nonsense.
 
One thing I've noticed is that many of the people saying they are going to stick it out with their 2500K and 2600K chips for a little longer are rocking less than 4.7GHz overclocks on their Sandy Bridge chips. That's why I mentioned what Sandy Bridge could realistically do. It wasn't just about what we could get on water in open-air test benches, but in actual cases, under desks, or whatever conditions actual users were running them in. There were more CPUs that couldn't hit 5.0GHz on water than there were chips that could. Plain and simple.

What I'm saying is that Skylake isn't all that bad when you consider the clocks people got in the real world with their Sandy Bridge chips and what we are seeing from Skylake now. The two align fairly well. And don't forget, Skylake just came out. People may find settings that help out, and CPU silicon may improve with additional steppings and/or in specific batches and production runs. You even pointed out that Sandy Bridge got better over time. The very things that improved Sandy Bridge might happen to Skylake.

We just don't know. But realistically, for the average overclocker I think Skylake isn't that far off of Sandy Bridge's realistic every day overclocks. I'm more confident about users being able to get 4.7GHz on average with Skylake vs. 4.8GHz+ on Sandy Bridge.

Now, I'm not saying the cost is necessarily worth it. That's a personal choice.

The PC Perspective live podcast is on YouTube right now. He mentions that in his Skylake review, at 4.7GHz with GTX 980s in SLI, there is a huge 20fps gap in Grand Theft Auto V at 1080p compared to Sandy Bridge at 4.7GHz. Just FYI... I guess the frame rate is also more consistent while gaming, and at 2560x1440 as well, though the gap there isn't as large as the 20fps at 1080p.
 
I would be interested to know some details on how they did that video benchmark.
 
I also trust AnandTech to screw up a CPU/RAM review with crappy GPU-bottlenecked tests that are all just margin of error.

Shadow of Mordor on ASUS GTX 980 Strix 4GB
1080p Ultra, Average Frames Per Second

Intel Pentium G3258 3.2 GHz 99.92 FPS
Intel Core i7 6700K 4.0 GHz 99.69 FPS
Intel Core i7 4770K 3.5 GHz 99.62 FPS
Intel Core i7 4790K 4.0 GHz 98.7 FPS
http://www.anandtech.com/bench/CPU/1321

Here's another bench by somebody else
http://www.pcper.com/reviews/Graphics-Cards/Skylake-vs-Sandy-Bridge-Discrete-GPU-Showdown

Once again benching the 2600K @ 3.6GHz and not the 4.6GHz most people run at, the difference is maybe 10%. And then when we jump to 1440p the margin evaporates.

Then if you go SLI and double GPU throughput we start to see the gap widen, but again the 2600K is running at a 10% slower clock speed too.
 
These are the opposite end of the spectrum from AnandTech's GPU-heavy tests; these are very CPU-heavy.

http://translate.googleusercontent....6.html&usg=ALkJrhj0alS52oor9_1qlHHzwwoFrn2_1g


http://www.hardware.fr/articles/940-5/cpu-ddr4-vs-ddr3-pratique.html


I'm running my 2600K at 4.4GHz 1.3V most of the time. Although it can run Prime stable at 4.9GHz 1.4V, I have to enable PLL overvoltage to get there, which breaks standby, which I happen to like.
 
Wow, those benchmarks are actually useful: a real game with real performance, and multiple generations of CPUs as a reference point. I don't get why reviewers benchmark against only the previous generation and perhaps some selected top end from 2 generations ago. No one who owns those products is considering an upgrade so soon.

Now I can see exactly how my 3570K stacks up against a Skylake CPU in an actual real world scenario, instead of having to guesstimate based on a 4670K's numbers in some synthetic benchmark.
 
Finally, Intel has an answer to Bulldozer. J/k
I am surprised that this chip didn't get the graphics of Broadwell - it seems like a step back.
Perhaps Kaby Lake will combine the best of both. Kaby Lake may also see some performance improvements without changing the die or architecture. Anyone remember the Conroe E6750? It was a nice boost from the E6600 and an outstanding overclocker.
 
Finally, Intel has an answer to Bulldozer. J/k
I am surprised that this chip didn't get the graphics of Broadwell - it seems like a step back.

It probably will. The Iris Pro stuff will probably appear when they start rolling out their full range of chips.

Perhaps Kaby Lake will combine the best of both.
I have no idea what Kaby Lake will be, but at this point I think we should all just expect another 10%.
 
Thanks for the review. Seems pretty OK. Definitely fine with my 4670. I hope the mobile parts come out soon, though; I need to buy a laptop or maybe a Surface Pro 4.
 
I'd like to see Windows 10 Skylake at 4.7 running Dolphin and ePSXe vs. a Haswell i7 at 4.7.
 
I'm saying straight up, that video is a lie.

When every review site's benches tell one story, and some random youtube video shows something else I tend not to believe it.

Especially when the review sites would love to show a 15-20% bump in Skylake. It would be super exciting and get lots of clicks.



Find me a site review that does 980 Ti SLI 4K comparisons. You can't automatically say no other site shows a difference when no other site is using actual enthusiast setups. Granted, it's probably not just the CPU, and partially the x16 lanes on the 5930K, but still. You shouldn't automatically dismiss it.
 
Oh well. It looks like my i7-3770K's Z77 motherboard is really borked. Not only are 2 slots dead, but when I tried to install my 2 new SSDs in RAID 0 it refused to boot to any other drive. It just said "Install a valid boot disk." (It worked fine in AHCI.) This is on both the Intel and Marvell controllers. I even configured the Windows boot drive to the top of the priority.

I guess I'm going to have to upgrade. At least I can recoup the cost from the CPU and SDRAM.
 
Find me a site review that does 980 Ti SLI 4K comparisons. You can't automatically say no other site shows a difference when no other site is using actual enthusiast setups. Granted, it's probably not just the CPU, and partially the x16 lanes on the 5930K, but still. You shouldn't automatically dismiss it.

4k w/ SLI is roughly the same as 1440p for a single card.

And spending $2000 on video cards doesn't make you an enthusiast; that's some kind of crazy uber ELITE setup. I'd say an "enthusiast" is a single GTX 970 kind of card.
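If anyone wants the back-of-the-envelope math behind the "4K with SLI is roughly the same as 1440p on a single card" comparison, here's a tiny Python sketch. It assumes near-ideal SLI scaling, which real games rarely hit exactly:

[code]
# Pixels-per-GPU comparison: 4K split across two cards vs 1440p on one card.
pixels_4k = 3840 * 2160      # ~8.29 million pixels per frame
pixels_1440p = 2560 * 1440   # ~3.69 million pixels per frame

per_card_4k_sli = pixels_4k / 2  # assume the load splits roughly evenly in SLI
print(f"4K per card (SLI): {per_card_4k_sli / 1e6:.2f} MP")
print(f"1440p single card: {pixels_1440p / 1e6:.2f} MP")
# ~4.15 MP vs ~3.69 MP per GPU, so the per-card load is in the same ballpark.
[/code]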
 
Finally, a CPU and a platform that is a good upgrade to the venerable 2600K. DDR4 memory overclocking shows a very good performance improvement in gaming workloads. This chip is built for running DDR4 memory overclocked to 3.2GHz. :)
 
Two 980 Tis are $1200, not $2000. I have a 3770K, 980 Ti SLI/4K setup and would like to see such Skylake benches. (Not that it matters; you'll probably see them from me pretty soon anyway ;))
 
Damn. Every time Intel comes out with a new chip, it's another 10% gain in performance.

On the other hand, AMD is still stuck with the same crappy IPC as 6 years ago.
 
Maybe by 2025, software developers will routinely use multithreading and make it worth having more than 4 cores.

Software developers typically produce software for the mainstream market, and if the mainstream market has access to no more than 4 cores, developers won't create mainstream software that can take advantage of more than 4 cores. If more cores were available, mainstream software would probably be better coded to take advantage of them.

But since Intel has no interest in expanding the core count on the mainstream platform, we are unlikely to see much advancement in this regard from developers, except perhaps in prosumer apps that target socket 2011 users.
 
Gents, gents, let's be civil. Firstly, DDR4 and SATA Express can kiss my black ass unless they affect FPS. I've heard DDR4 can help with minimum FPS, which would be spectacular, but I've yet to see proper benchies.

1080p is old hat, and that shit ain't [H]. It's true: generally, as resolution increases, the CPU doesn't matter as much. It's why CPU benchies play games at 640x480... that resolution much better illustrates differences in CPU performance than 2160p does. Of course, some games are more CPU-bound than others (recurring examples are Crysis 3 and friends), but the great majority of games are GPU-bound. Then throw 4K into the mix, and your performance greatly, greatly (greatly!) depends on the GPU. There's no real reason to do SLI unless you're pushing 4K and/or you're an AA fiend (love you guys, though I'm not a fan of SLI).

Anyhow, we're on the cusp of a new, potentially badass API (DX12), but we've yet to see what hardware it prefers. You can guess at that (for example, individual core/thread performance vs. the number of cores/threads), but it's just a guess at this point.

An i5 2500K, while not the best CPU, is still very competent given today's gaming landscape, especially when it's aggressively OC'ed to ~5GHz, and especially when you're pushing 4K. You don't have to believe me, and I'm too lazy right now to give links to supporting data, but yeah. When in doubt, OC. Run a game at 4K (or downsample from 4K), then OC your CPU. How does it affect performance? Now set your CPU back to where it was pre-OC, and OC your GPU. How does it affect performance? Ye shall find yer GPU doth affect performance greatly.
 
Hopefully my little video proves that there is no way a 4790K at 4.7 would be a CPU limitation causing Crysis 3 to sit in the 40s and 50s like most of that video, in the same spot. Sorry for the quality, but this was at 1080p on Very High settings with a single 980 Ti and a 4770K at 4.3. And yes, I know it is not 4K, but the point is that people keep claiming it is a CPU limitation in that video, yet I am getting twice the framerate, with a minimum of 83fps, at the same spot they show 40fps.

https://www.youtube.com/watch?v=6OZR_W627-w
 
Software developers typically produce software for the mainstream market, and if the mainstream market has access to no more than 4 cores, developers won't create mainstream software that can take advantage of more than 4 cores. If more cores were available, mainstream software would probably be better coded to take advantage of them.

But since Intel has no interest in expanding the core count on the mainstream platform, we are unlikely to see much advancement in this regard from developers, except perhaps in prosumer apps that target socket 2011 users.

When I write multithreaded code (which is all the time), I usually query the runtime to see how many CPUs are available and divide and conquer based on that number. If the algorithm parallelizes well, scaling will be obvious from 4 -> 6 -> 8.
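In case it helps anyone picture it, here's a minimal Python sketch of that pattern; the helper names (crunch, chunked) are just made up for illustration:

[code]
# Query the runtime for the CPU count, split the work into that many chunks,
# and run the chunks in parallel (divide and conquer).
import os
from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    # Stand-in for the real per-chunk work.
    return sum(x * x for x in chunk)

def chunked(data, n):
    # Split data into n roughly equal slices.
    size = (len(data) + n - 1) // n
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    workers = os.cpu_count() or 1  # ask the runtime how many CPUs are available
    data = list(range(1_000_000))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(crunch, chunked(data, workers)))
    print(workers, "workers ->", sum(partials))
[/code]

With work that parallelizes well, the scaling from 4 to 6 to 8 workers shows up clearly.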
 
Hopefully my little video proves that there is no way a 4790K at 4.7 would be a CPU limitation causing Crysis 3 to sit in the 40s and 50s like most of that video, in the same spot. Sorry for the quality, but this was at 1080p on Very High settings with a single 980 Ti and a 4770K at 4.3. And yes, I know it is not 4K, but the point is that people keep claiming it is a CPU limitation in that video, yet I am getting twice the framerate, with a minimum of 83fps, at the same spot they show 40fps.

https://www.youtube.com/watch?v=6OZR_W627-w

Unless it's 4K I don't care. You guys can hang onto the past; I really don't care. The tired memes of "PCIe doesn't really matter, DDR4 doesn't matter, extra cores are useless" are starting to crack. I'll catch a bench here and there showing a few frames of increase. Maybe average frames are the same but minimums are way up. You can start to see it and piece it together that these old platforms are aging.

Be defensive all you want about the platform you're on and justify why you're not spending money on the newest stuff, but I thought this was [H]; shouldn't we all be on 4K by now, trying to scrape together every few frames we can get?
 
Unless it's 4K I don't care. You guys can hang onto the past; I really don't care. The tired memes of "PCIe doesn't really matter, DDR4 doesn't matter, extra cores are useless" are starting to crack. I'll catch a bench here and there showing a few frames of increase. Maybe average frames are the same but minimums are way up. You can start to see it and piece it together that these old platforms are aging.

Be defensive all you want about the platform you're on and justify why you're not spending money on the newest stuff, but I thought this was [H]; shouldn't we all be on 4K by now, trying to scrape together every few frames we can get?
You are not comprehending the simple fact that 4K does not make a game more CPU demanding. If I go to 4K, all that will happen is that my single GPU will drop me into the 30s and 40s, NOT the CPU. It is beyond ignorant to claim a CPU limitation if I can run at a lower res and DOUBLE the framerate. :rolleyes:
 