Intel Skylake Core i7-6700K IPC & Overclocking Review @ [H]

After reading this review I was left wondering when EVGA's SR2 owners will have an upgrade path. :)
 
I have a 5930K, Rampage Extreme, and SLI Titan X. Should I sell the mobo/CPU setup and go Skylake?
 
I have a 5930K, Rampage Extreme, and SLI Titan X. Should I sell the mobo/CPU setup and go Skylake?

I wouldn't.

In fact, if I were shopping today, I'd buy the 5930k before I went with the 6700k, if you could even find a 6700k for sale.

The 6700k has a very marginal IPC improvement over the 5930k, and the 5930k overclocks higher. At max overclock I'd expect the single threaded performance to be pretty similar, without any major wins for either chip. And the 5930k has more cores...

If you were to switch to a 6700k (if you could find one), you'd be going from 40 PCIe lanes directly to the CPU to 16 PCIe lanes directly to the CPU, which, with your SLI setup, means 8x-8x instead of the 16x-16x you have now (if properly configured). This probably doesn't have a huge impact on raw framerates, but you might find your frame times to be slightly more erratic in SLI at 8x-8x, especially at higher (4K) resolutions.

You would also (obviously) lose two cores going to the 6700k, not to mention going from quad-channel RAM to dual-channel RAM.

IMHO, coming from an i7-5930k, the i7-6700k (despite being newer) would likely be a downgrade.


The only reasons to go with the 6700k would be:


- Cost

If buying new, the 6700k would be cheaper, but if you are selling stuff used, it might be a wash.


- Power / heat / noise

The 6700k uses less power, and thus produces less heat, and thus requires less cooling, which CAN be quieter. Some of this is negated by the paste under the heat spreader though, as opposed to the solder in the 5930k.


- Not overclocking

If you don't believe in overclocking (some people don't), the single-threaded performance of the 6700k will be faster out of the box due to its higher stock clocks. If you just want performance without worrying about tweaking and overclocking, the 6700k will be significantly faster single-threaded (4.2GHz max turbo vs 3.7GHz, plus slightly higher IPC), but then you'd also have to weigh that against the benefits of having more cores and more PCIe lanes with the i7-5930k.


That's all I can think of.
 
I have a 5930K, Rampage Extreme, and SLI Titan X. Should I sell the mobo/CPU setup and go Skylake?

Hell no.

Zarathustra[H];1041785709 said:
I wouldn't.

In fact, if I were shopping today, I'd buy the 5930k before I went with the 6700k, if you could even find a 6700k for sale.

The 6700k has a very marginal IPC improvement over the 5930k, and the 5930k overclocks higher. At max overclock I'd expect the single threaded performance to be pretty similar, without any major wins for either chip. And the 5930k has more cores...

If you were to switch to a 6700k (if you could find one), you'd be going from 40 PCIe lanes directly to the CPU to 16 PCIe lanes directly to the CPU, which, with your SLI setup, means 8x-8x instead of the 16x-16x you have now (if properly configured). This probably doesn't have a huge impact on raw framerates, but you might find your frame times to be slightly more erratic in SLI at 8x-8x, especially at higher (4K) resolutions.

You would also (obviously) lose two cores going to the 6700k, not to mention going from quad-channel RAM to dual-channel RAM.

IMHO, coming from an i7-5930k, the i7-6700k (despite being newer) would likely be a downgrade.


The only reasons to go with the 6700k would be:


- Cost

If buying new, the 6700k would be cheaper, but if you are selling stuff used, it might be a wash.


- Power / heat / noise

The 6700k uses less power, and thus produces less heat, and thus requires less cooling, which CAN be quieter. Some of this is negated by the paste under the heat spreader though, as opposed to the solder in the 5930k.


- Not overclocking

If you don't believe in overclocking (some people don't), the single-threaded performance of the 6700k will be faster out of the box due to its higher stock clocks. If you just want performance without worrying about tweaking and overclocking, the 6700k will be significantly faster single-threaded (4.2GHz max turbo vs 3.7GHz, plus slightly higher IPC), but then you'd also have to weigh that against the benefits of having more cores and more PCIe lanes with the i7-5930k.


That's all I can think of.

The 5930K usually won't outclock the 6700K. The former usually can't break 4.5GHz, and many won't clock beyond 4.3GHz. 5960X chips don't typically do any better. So far the 6700K seems to hit 4.7GHz consistently. I would agree that the clock difference is minimal in the grand scheme of things considering what you get from the 5930K feature-wise.
 
You couldn't be more wrong. =D Going from your 870 (even highly OC'd) to even a stock SB 2600K/2700K would be a big and very noticeable upgrade, especially if you compare it directly to a 4.5GHz+ SB 2600K, and your chip isn't even Socket 1366, so you don't even get the advantage of triple-channel memory.

If you say you wouldn't notice a difference, it's because you have never used anything newer than Nehalem. That chip is old and is actually slow to the point that it will be a bottleneck in a lot of games. It's even worse if you compare it to high-end X79 chips, so let's not even talk about Haswell X99 here.

For the things I do right now I wouldn't notice a difference. If I was still doing video encoding or whatever there would definitely be a difference, but even then that's a task that I just set up in a script and walk away from since I have another computer that I use for mundane tasks.

Even going from SATA2 to SATA3 would probably be hardly perceptible.

I'm only going to spend so much on tech, and so instead of putting that money into a platform upgrade I'd rather put it into something more valuable like displays, a new GPU, or a smartphone or tablet. I'm not saying I'll never upgrade again; just that it would be a waste of money. If my use requirements change then maybe it would make more sense for me. Until then I will sit where I am and let those 5% improvements generation over generation accumulate.
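
For what it's worth, here's a quick Python sketch of how those small per-generation gains compound; the flat 5% per generation is just the figure above, and the generation count is arbitrary:

Code:
# Rough compounding sketch, not a benchmark: a flat per-generation gain
# (assumed 5%) applied repeatedly.
per_gen_gain = 0.05
for generations in range(1, 6):
    cumulative = (1 + per_gen_gain) ** generations - 1
    print(f"{generations} generation(s) skipped: ~{cumulative * 100:.0f}% total uplift")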
 
Hell no.



The 5930K usually won't outclock the 6700K. The former usually can't break 4.5GHz, and many won't clock beyond 4.3GHz. 5960X chips don't typically do any better. So far the 6700K seems to hit 4.7GHz consistently. I would agree that the clock difference is minimal in the grand scheme of things considering what you get from the 5930K feature-wise.

Oh, interesting.

I googled it before responding and found a 5930K overclocked to 4.8GHz, and assumed that was typical, not extraordinary. Must have been from someone who won the silicon lottery, then.

My bad on that mistake.

What really stands out to me here is that my 4-year-old Sandy-E i7-3930k, which easily hits 4.8GHz and with a little coaxing will hit 5.0 (it flat out refuses 5.1 though), is, when overclocked, faster than its equivalent from two generations later in Haswell-E.

I did not expect that.

I wonder why the newer models clock so much lower...
 
Not really sure why this is such a good upgrade path for 2500K/2600K users. The improvements seem very minor.
Not going to upgrade because of the new chipset either. We all know how Intel can be with chipsets.

A 4.7GHz Skylake is equivalent to a 5.875GHz Sandy Bridge if you just apply the average 25% IPC improvement shown in AnandTech's CPU tests. That's a huge difference, and I've never seen anyone clock their 2600K or 2700K beyond 5.1-5.2GHz 24/7 stable.
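
For anyone who wants to check the arithmetic, a minimal Python sketch assuming the 25% average IPC uplift quoted above:

Code:
# IPC-adjusted "equivalent clock": Skylake clock times the assumed average
# IPC advantage over Sandy Bridge (the 25% figure quoted above, not measured here).
skylake_clock_ghz = 4.7
ipc_uplift_vs_sandy = 1.25
equivalent_sandy_clock = skylake_clock_ghz * ipc_uplift_vs_sandy
print(f"~{equivalent_sandy_clock:.3f} GHz Sandy Bridge equivalent")  # ~5.875 GHz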
 
Zarathustra[H];1041785768 said:
Oh, interesting.

I googled it before responding and found a 5930K overclocked to 4.8GHz, and assumed that was typical, not extraordinary. Must have been from someone who won the silicon lottery, then.

My bad on that mistake.

What really stands out to me here is that my 4-year-old Sandy-E i7-3930k, which easily hits 4.8GHz and with a little coaxing will hit 5.0 (it flat out refuses 5.1 though), is, when overclocked, faster than its equivalent from two generations later in Haswell-E.

I did not expect that.

I wonder why the newer models clock so much lower...

Binning on Haswell-E is pretty inconsistent to say the least. But 4.8GHz is definitely not normal. Clocks for all the more recent architectures have dropped 100-200MHz on average since Sandy Bridge. Some of the refreshes have improved clocks to some extent. Devil's Canyon specifically addressed this compared to Haswell. Those chips picked up some clock speed over standard Haswell chips by replacing the TIM under the heat spreader. So far Skylake clocks as well as Devil's Canyon, albeit more consistently.

Gulftown aside, as more cores get added, the binning gets more difficult and the TDP goes up. So the hex and octo cores haven't done as well as their mainstream counterparts in that department. Haswell and Haswell-E suffered primarily because of the TIM used, but also because the FIVR and the added complexity of the CPU made the chip lottery more inconsistent. The IMC of Haswell is also inconsistent. Some can handle DDR4 2400 speeds or more with 4x DIMMs while others can't reliably handle more than DDR3 1600MHz with 4x DIMMs.

I think CPU complexity, combined with aggressive tick/tock product release strategies and ever more difficult process node shrinks, is the cause of this. Intel's shift in focus from raw performance to performance per watt also changes things. Intel isn't refining its designs over 4-5 year periods anymore. They are releasing a new architecture every two years with an annual refresh.

Think about it. The original Pentium design lasted a long time. So did Netburst. Their core architectures saw minor and even somewhat major changes, but their longer life cycles allowed for wide reaching manufacturing process improvements which greatly improved clock speeds over that life cycle.
 
Zarathustra[H];1041785582 said:
I actually disabled hyperthreading on mine recently.

I was having some frame time spikes in SLI and I wanted to check if maybe it had anything to do with processes accidentally winding up on the same physical CPU.

It didn't seem to make much of a difference :p

If you look at something like Cinebench, you typically see HT CPUs scaling at roughly 4.8x their single-threaded score. So in a best-case scenario it can be a 20% boost. But that's a best-case scenario. Until you've got a process that loads 8 threads evenly, you won't see much of a boost.
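
The math behind that, as a small Python sketch using the rough Cinebench-style ratio mentioned above (illustrative numbers, not a measurement):

Code:
# Best-case Hyper-Threading benefit on a 4-core/8-thread part: ~4.8x
# multi/single-thread scaling vs the perfect 4x you'd get from the cores alone.
physical_cores = 4
observed_mt_scaling = 4.8  # rough best-case multi-thread / single-thread ratio
ht_benefit = observed_mt_scaling / physical_cores - 1
print(f"Best-case HT uplift: ~{ht_benefit * 100:.0f}%")  # ~20%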
 
Skylake TIM is a major fail, worse than DC, and delidding improves temps by 20C! That and the paper launch make it a fail in my book. TechSpot and some other sites are on the same page. Skylake-E may hold potential, but Skylake is a fail. Haswell-E FTW.
 
Think about it. The original Pentium design lasted a long time. So did Netburst. Their core architectures saw minor and even somewhat major changes, but their longer life cycles allowed for wide reaching manufacturing process improvements which greatly improved clock speeds over that life cycle.

I'd always assumed that we'd been refining the Core i7 with each architecture update.

Similar to how the P6 architecture gave us the Pentium Pro, Pentium II and Pentium III.

Even then the consumer got:
Klamath - 350nm - tick
Deschutes - 250nm - tock
Katmai - 250nm - tick (P III)
Coppermine - 180nm - tock

Then the Tualatin (130nm) PIII came out, but was overshadowed by the Netburst P4, even though in many respects the 1.4GHz PIII was better than any of the P4 chips.
 
Skylake TIM is a major fail, worse than DC, and delidding improves temps by 20C! That and the paper launch make it a fail in my book. TechSpot and some other sites are on the same page. Skylake-E may hold potential, but Skylake is a fail. Haswell-E FTW.

Fail is a pretty strong word.
Disappointment would be better. Failure would be if its shipment was delayed, if it didn't ship at all, or if the clock speed headroom regressed from Haswell.

The TIM situation is a failure, but that's a single component of the chip. If delidded Skylakes start turning in 5.2GHz OCs on air, is that a fail?

I think everyone was hoping for more from Skylake, but if you want fail, go look at some of the AMD releases in the last 5 years. A 5-15% IPC bump plus another 2-3% OC headroom bump isn't a failure, but maybe not a success either.

How about this: Skylake is a missed opportunity.
 
Fail is a pretty strong word.
Disappointment would be better. Failure would be if its shipment was delayed, if it didn't ship at all, or if the clock speed headroom regressed from Haswell.

The TIM situation is a failure, but that's a single component of the chip. If delidded Skylakes start turning in 5.2GHz OCs on air, is that a fail?

I think everyone was hoping for more from Skylake, but if you want fail, go look at some of the AMD releases in the last 5 years. A 5-15% IPC bump plus another 2-3% OC headroom bump isn't a failure, but maybe not a success either.

How about this: Skylake is a missed opportunity.

Well, it is to me a clear retrogression from Devil's Canyon, and retrogression = failure. BTW, did you order that 5820k from Staples you were asking about earlier?
 
It still seems as though the i7 6700K is the best thing to buy if you plan to build a new PC.

Unless you are running multiple video cards, the i7 5820K is not the better choice.
I think either of them makes for a great PC, but the 5820K will cost more money and I do not feel as if it provides any real-world advantage.
 
Well, it is to me a clear retrogression from Devil's Canyon, and retrogression = failure. BTW, did you order that 5820k from Staples you were asking about earlier?

I haven't pulled the trigger yet on a proc.

I don't see how it could be a regression. It's got better IPC, it's got a better IMC for more memory bandwidth, and it's got an upgraded DMI connection to support faster SSDs. It clocks the same or better.

It's the same or better at stock clocks in every benchmark.

[H] got Devil's Canyon to 4.7GHz at launch
http://www.hardocp.com/article/2014/06/09/intel_devils_canyon_good_bad_ugly/#.VcjdJiZVhBc

[H] got Skylake to a 4.7GHz overclock at launch
http://www.hardocp.com/article/2015...76700k_ipc_overclocking_review/7#.VcjdXyZVhBc

It's not moving forward very fast, but that's very different from saying it's moving backwards.
 
Maybe they're planning for something else real soon (< 6 mos).

Doubt it. Sounds like we get Kaby Lake next summer.

Maybe we get an -E processor
Gulftown - 3/2010
Sandy - 1/2011
Sandy-E - 11/2011
Ivy - 4/2012
Haswell - 6/2013
Ivy-E - 9/2013
Haswell-DC - 6/2014
Haswell-E - 9/2014
Skylake - 8/2015

There certainly are architecture and -E launches closely bunched.
But if we see something this year, I'd guess it'll be an X99-compatible Broadwell-E.
 
It still seems as though the i7 6700K is the best thing to buy if you plan to build a new PC.

Unless you are running multiple video cards, the i7 5820K is not the better choice.
I think either of them makes for a great PC, but the 5820K will cost more money and I do not feel as if it provides any real-world advantage.

I think the 2 x 16x PCIe slots are a bit of a red herring.

Having the extra cores is the real benefit.
Having the extra PCIe lanes for your NVMe drive straight to the CPU is good.
 
If you look at something like Cinebench, you typically see HT CPUs scaling at roughly 4.8x their single-threaded score. So in a best-case scenario it can be a 20% boost. But that's a best-case scenario. Until you've got a process that loads 8 threads evenly, you won't see much of a boost.

Yep,

I just wasn't aware of how well the game engines kept track of logical vs physical cores, and whether they would inadvertently schedule two intensive tasks on two logical cores belonging to the same physical core while other physical cores sat idle.

This was the situation I was testing for. If true, I would have expected the game to run more smoothly with HT disabled, as then you never inadvertently schedule tasks on the same core while others are idle.

I will probably re-enable it soon, but - honestly - I have found little to no reduction in performance from disabling it either.
 
The TIM situation is a failure, but that's a single component of the chip.

I've always thought of the TIM as less of a failure and more of an intentional move, and I don't even think it's about cost savings.


It has also become apparent that CPU progress is slower these days.

This means that people are holding on to their older CPUs longer, as they have less reason to upgrade.

When people upgrade less often, Intel sells fewer CPUs and makes less money.

Solution 1.) Make people want to upgrade by introducing new features which could easily have been made backwards compatible, but were not (bootable PCIe SSDs, for instance).

Solution 2.) Use TIM under a semi-permanently attached heat spreader, knowing full well the TIM will degrade over time, making the chip run hotter and eventually throttle at stock speeds, and making laypersons think their computer is "old and slow" and they need a new one, despite the new ones not performing much faster.

Essentially, I see the move to TIM as planned obsolescence by Intel, to try to keep people upgrading in a market that is slowing down due to a combination of the lack of competition now that AMD is no longer a viable competitor, and the increased difficulty of die shrinks due to the limitations of silicon computing.
 
Zarathustra[H];1041786195 said:
Yep,

I just wasn't aware of how well the game engines kept track of logical vs physical cores, and whether they would inadvertently schedule two intensive tasks on two logical cores belonging to the same physical core while other physical cores sat idle.

It's typically considered bad form to pin threads to specific CPUs. Usually you just spool up a thread and let the OS decide where to run it.
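
A minimal sketch of that approach in Python (plain threads just for illustration; the point is that no affinity is set anywhere):

Code:
# Spawn worker threads without pinning; the OS scheduler decides which
# logical core runs each one.
import threading

def worker(n: int) -> None:
    total = sum(i * i for i in range(200_000))  # stand-in workload
    print(f"worker {n} finished: {total}")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# On Linux you *could* pin a process with os.sched_setaffinity(), but as noted
# above that's usually considered bad form outside of special cases.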
 
I'd always assumed that we'd been refining the Core i7 with each architecture update.

Similar to how the P6 architecture gave us the Pentium Pro, Pentium II and Pentium III.

Even then the consumer got:
Klamath - 350nm - tick
Deschutes - 250nm - tock
Katmai - 250nm - tick (P III)
Coppermine - 180nm - tock

Then the Tualatin (130nm) PIII came out, but was overshadowed by the Netburst P4, even though in many respects the 1.4GHz PIII was better than any of the P4 chips.

Well, all P6-based processors had a clear design lineage and were all Pentium Pro variants or successors. Intel uses the name Core i7 for many CPUs, but these architectures can be radically different from their predecessors. They incorporate concepts, ideas and design elements from different processors in the past, or use entirely new design elements. In other words, these architectures borrow from so many places that they cannot be traced to a single processor ancestry. For example, Netburst chips came from a single design, just as all P6 CPUs came from the Pentium Pro. You can't trace a Core i7's roots that way.
 
For the last 6 years you'd have to be a fool to think upgrading every launch is a good idea.

Hell, I don't think most launches in the last 20 years warranted upgrading every time. Things are slowing down. Instead of every 3 years, we upgrade every 5 or so. But in all honesty, do we need an upgrade? In the past, I upgraded because my system was getting slow. 5 years on, only some Adobe software is a bit slow, but my main reason for upgrading is that the platform is getting a bit long in the tooth. This will probably be the biggest single upgrade I've done in at least 10 years.

I'll end up with a new GPU, RAM, MB, CPU and monitor(s). In the past, I reused a lot of parts, but this time most will be replaced. Only some SSDs will make the move.
 
Well, all P6-based processors had a clear design lineage and were all Pentium Pro variants or successors. Intel uses the name Core i7 for many CPUs, but these architectures can be radically different from their predecessors. They incorporate concepts, ideas and design elements from different processors in the past, or use entirely new design elements. In other words, these architectures borrow from so many places that they cannot be traced to a single processor ancestry. For example, Netburst chips came from a single design, just as all P6 CPUs came from the Pentium Pro. You can't trace a Core i7's roots that way.

So what features/changes are they making that are being so disruptive to the development process?

I'm really curious what your thoughts are about this, since I'd previously thought our "architecture" ticks were just an added decode block here, more AVX tacked on there kind of changes.
 
Hell, I don't think most launches in the last 20 years warranted upgrading every time. Things are slowing down. Instead of every 3 years, we upgrade every 5 or so. But in all honesty, do we need an upgrade? In the past, I upgraded because my system was getting slow. 5 years on, only some Adobe software is a bit slow, but my main reason for upgrading is that the platform is getting a bit long in the tooth. This will probably be the biggest single upgrade I've done in at least 10 years.

Soon it'll be like automobiles. At one point in their development, top speed increased with every generation. Not all cars could do 80mph much less 100mph.

But now you get a new car because it has Bluetooth, or nav, or gets 2 more MPG, or because your car is just starting to break down.

I've never replaced my desktop because I wore it out, but with performance gains shrinking with every launch, that day may be coming sooner than later.
 
I haven't pulled the trigger yet on a proc.

I don't see how it could be a regression. It's got better IPC, it's got a better IMC for more memory bandwidth, and it's got an upgraded DMI connection to support faster SSDs. It clocks the same or better.

It's the same or better at stock clocks in every benchmark.

[H] got Devil's Canyon to 4.7GHz at launch
http://www.hardocp.com/article/2014/06/09/intel_devils_canyon_good_bad_ugly/#.VcjdJiZVhBc

[H] got Skylake to a 4.7GHz overclock at launch
http://www.hardocp.com/article/2015...76700k_ipc_overclocking_review/7#.VcjdXyZVhBc

It's not moving forward very fast, but that's very different from saying it's moving backwards.

We will get hold of more Skylake CPUs in the future. For now we just have the one. You can make this one run at 4.8GHz more easily than you can our Devil's Canyon chips. Both make you pay in heat and power requirements to do it. Skylake can handle it longer before overheating. It makes me think it's possible if I can get the right board to go with it. With Devil's Canyon it's pretty much up to the IVR.

Doubt it. Sounds like we get Kaby Lake next summer.

Maybe we get an -E processor
Gulftown - 3/2010
Sandy - 1/2011
Sandy-E - 11/2011
Ivy - 4/2012
Haswell - 6/2013
Ivy-E - 9/2013
Haswell-DC - 6/2014
Haswell-E - 9/2014
Skylake - 8/2015

There certainly are architecture and -E launches closely bunched.
But if we see something this year, I'd guess it'll be an X99-compatible Broadwell-E.

As I said, there are rumors on some sites about Skylake-E, but nothing official. It seems unlikely. Broadwell-EX and -EP are pushed back into 2016. Enthusiast CPUs for the HEDT platform are based on Xeon E5 silicon. It seems unlikely that Intel would alter the road map of its server products significantly, given the validation testing for those types of systems. I don't think Intel wants to develop Skylake-E for just the HEDT market. It won't sell enough units to make it worthwhile. This is assuming it's compatible with the X99 PCH.

Zarathustra[H];1041786195 said:
Yep,

I just wasn't aware of how well the game engines kept track of logical vs physical cores, and whether they would inadvertently schedule two intensive tasks on two logical cores belonging to the same physical core while other physical cores sat idle.

This was the situation I was testing for. If true, I would have expected the game to run more smoothly with HT disabled, as then you never inadvertently schedule tasks on the same core while others are idle.

I will probably re-enable it soon, but - honestly - I have found little to no reduction in performance from disabling it either.

Nope, the OS scheduler does that. Physical vs logical CPUs aren't generally known at the application level. The only application I know of that makes that distinction is VMware.
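
As a small illustration: the usual application-level call in Python just reports logical CPUs, with no physical/logical distinction (a third-party library like psutil can dig deeper, but most software never bothers):

Code:
import os

# os.cpu_count() reports logical CPUs (hardware threads); a stock application
# has no idea which pairs of them share a physical core.
print(f"Logical CPUs visible to this process: {os.cpu_count()}")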
 
I don't think Intel could win here. Even if it was a 35%-50% increase over the prior release, games are still going to be GPU limited. That isn't Intel's fault.
 
If they released a chip with 35-50% IPC increase I'd call that an awesome win. :D
 
Dan, has there been talk about an [H] de-lidding testing/article? Looking through the last few pages and skimming over the PCPER relinked article, there isn't much substance available yet. I imagine a de-lidding would be sometime after more CPUs are acquired.

If this talk of an "apparent" 20C drop has any truth to it, that would be huge.
 
Dan, has there been talk about an [H] de-lidding testing/article? Looking through the last few pages and skimming over the PCPER relinked article, there isn't much substance available yet. I imagine a de-lidding would be sometime after more CPUs are acquired.

If this talk of an "apparent" 20C drop has any truth to it, that would be huge.

Well that would be entirely up to Kyle. He's the one who ultimately decides if an article would be of value. I most likely will not be using a delidded CPU for motherboard articles as that just increases the chances of breakage during setup and it doesn't isolate the motherboard specifically.

I don't think Intel could win here. Even if it was a 35%-50% increase over the prior release, games are still going to be GPU limited. That isn't Intel's fault.

It isn't, but if there were that kind of improvement in applications and synthetic benchmarks, I think people would be all over it. We didn't even have that kind of jump going to Nehalem. I think the Core 2 Duo was the last CPU to offer increases like that, and even then it wasn't like that across the board.

So what features/changes are they making that are being disruptive to the development process?

I'm really curious to what your thoughts are about this, since I'd previously thought our "Architecture" ticks where just more of a added decode block here, tacked on more AVX there kind of changes.

I think the biggest change is the shift in focus from sheer CPU performance to performance per watt. Simply increasing performance is relatively easy. If Intel wanted to, they could easily increase clock speeds, increase core counts and increase things like cache sizes. Ideally, one could ditch the iGPU as well and use that space for something more performance-oriented. All that shit takes up space in the physical package.

Back in the "glory days," Intel increased CPU clocks, chip sizes, cache sizes or whatever else they needed to in order to improve performance. This was the most important metric by which products were judged at the time. So long as we could keep things reasonably cool, no one cared how much power anything used. Laptops used desktop processors and so did servers. It wasn't until the Pentium Pro came out that things changed on that front. Today the mobile and high-density server markets are where all the money is. Intel needs CPUs that have higher IPC and use less power for its two target markets. Mobile devices need to be more powerful while using less power to prolong battery life. Server CPUs need to be efficient but also powerful, enabling more and more processor cores and more performance in a given area of rack space. Heat generation and energy bills from any desktop won't impact the home user's bottom line too much. In a large data center filled with thousands of servers, heat generation and power consumption at the CPU level matter.

Desktops are of little concern to Intel at this point. We may not like it, but that's just good business. So instead of developing desktop CPUs and using them for laptops and servers, Intel develops mobile CPUs and uses them for servers and desktops, scaling them up with more CPU cores, etc., in order to do so.

As for the specific architectures, lessons learned from a previous generation, or aspects of a previous architecture, may carry over, but when you get right down to it, we've been getting mostly new architectures every two years or so for a while now. Nehalem is a great example. It used the cache design and Hyper-Threading features of Netburst (the Pentium 4 family) and had a deeper pipeline than Sandy Bridge, Ivy Bridge or Haswell do. The pipeline was closer to the number of stages seen in Northwood cores, as opposed to Prescott, which was a 30-stage pipeline pig. But cache and HT were carried over because they were good features.

Sandy Bridge was radically different from Nehalem. Ivy Bridge carried several things over from Sandy, but it was still a new architecture overall. Haswell was even more different. A few things carried over, but it's still a very different CPU, the FIVR being the most obvious change.

We know next to nothing about Skylake's architecture other than what Intel has told us. They are saving that information for IDF, but what we were told is that its development started six years ago and it's almost completely new from the ground up. The PCB is much thinner, the FIVR is gone, the memory controller is completely different from Haswell's, the heat spreader is thicker and, most likely, the TIM is different from what Devil's Canyon used. But again, Intel is chasing performance per watt first. It's about economies of scale. A design built only for the desktop will be useless in servers and mobile devices. The reverse, on the other hand, isn't true.

Process node shrinks are expensive as hell from a tooling and R&D standpoint. It costs something like 4 billion dollars to retool a fab for a new process. As the processes get smaller and smaller they get harder to implement. We've already seen that with various processes from IBM, TSMC, and Intel. 14nm was a bitch for Intel based on what I've heard. Furthermore, we are approaching the end of the road for what may be possible using silicon-based CPUs. Intel believes 10nm to be the virtual end of the road, but believes new materials might get the ball rolling to at least 7nm. Only time will tell.
 
Dan, has there been talk about an [H] de-lidding testing/article? Looking through the last few pages and skimming over the PCPER relinked article, there isn't much substance available yet. I imagine a de-lidding would be sometime after more CPUs are acquired.

If this talk of an "apparent" 20C drop has any truth to it, that would be huge.

I've been reading the overclock.net threads about delidding. If I can afford the upgrade, I'll try it. Seems fairly reasonable when you reuse the heat spreader. It's pretty nice that you don't have to lap the CPU, or deal with mounting issues, to see those results.
 
Man, this slump is really starting to suck. I've been on the 2500K since its release and it's been running strong at 4.5GHz with no hiccups. Honestly there hasn't been anything worth spending the several hundred bones on. Are we plateauing? Is the lack of real competition causing this? What the hell is going on? You would think that 5 years later we'd see like 30-50% improvements. Seems like GPUs are the driving force behind upgrades nowadays.
 
Man, this slump is really starting to suck. I've been on the 2500K since its release and it's been running strong at 4.5GHz with no hiccups. Honestly there hasn't been anything worth spending the several hundred bones on. Are we plateauing? Is the lack of real competition causing this? What the hell is going on? You would think that 5 years later we'd see like 30-50% improvements. Seems like GPUs are the driving force behind upgrades nowadays.

Let's take a reference point.

Start in March 2000. We got our first GHz processors.

Go back 5 years to 1995. In March of that year the fastest chip was the 120MHz Pentium, which had just launched.

The 1GHz Athlon was likely more than 10 times faster than the 120MHz Pentium.

So, 5 years? 30-50% is peanuts. To keep up, historically speaking, we should have seen 1000% increases in that amount of time.

Is it because of lack of competition? Is it because priorities in the market are changing? Is it because we are plateauing due to the limits of silicon?

The answer is yes to all of the above. Each successive die shrink is much more difficult than it used to be, and pretty soon we will reach the point where further die shrinks will not be possible due to fundamental physics, and we will need something to replace silicon if computing capabilities are to continue to grow.

Also, Intel (and what's left of AMD) have shifted their priorities. Everything is optimized for mobile, with low power being a priority these days. This further impacts the slow growth of performance of top performing parts.

And AMD is no longer putting up a fight in this market, so Intel has no incentive to one-up the performance of their own previous performance king, even if they still cared about performance.

Triple whammy, and it's only going to get worse from here on out...
 
Zarathustra[H];1041787328 said:
Let's take a reference point.

Start in March 2000. We got our first GHz processors.

5 years later we saw dual-core 3GHz, or ~7-8x the performance with IPC taken into account.
5 years after that, quad-core 5GHz Sandy Bridge: another 7-8x total performance bump.

So anyone who is like "OMG Sandy Bridge is soooooo slow" obviously never tried playing TIE Fighter on a 4-year-old 386. 30-35% faster just doesn't do it when you were used to doubling clock speeds every 30 months, or doubling cores.

The business realities are what they are; it still bums us old folks out.
 
Intel may have optimised the design for lower voltage and lower MHz. Voltage increases above 1.4V seem to be wasted. Above 3.6GHz the voltage and power needed climb disproportionately, and over 4.6GHz it gets crazy. Heat can increase resistance and leakage, like on several AMD designs. It would be good to try chilled water and a modest voltage increase (over what was used for 4.7GHz) and try again. Maybe we wait 3-6 months for the fab/stepping to improve the potential.
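
A rough Python illustration of why that last stretch costs so much, using the usual dynamic-power relationship (P roughly proportional to V^2 * f); the voltage points are made up for illustration, not measured from any real chip:

Code:
# Relative dynamic power at a few clock/voltage points, using P ~ C * V^2 * f.
points_ghz_volts = [(3.6, 1.10), (4.2, 1.25), (4.6, 1.35), (4.8, 1.45)]
base_f, base_v = points_ghz_volts[0]
for f, v in points_ghz_volts:
    rel_power = (v / base_v) ** 2 * (f / base_f)
    print(f"{f} GHz @ {v:.2f} V -> ~{rel_power:.2f}x the power of {base_f} GHz")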
 
i7 920 people are probably better off with an eBay Westmere 6-core Xeon for $100.

That could be worth it if you do video encoding; otherwise I don't think you'd notice a difference. I'd have to find the benchmarks, but I'm guessing going from a 3.6GHz i7 920 to a 4.4GHz i7 980X Gulftown would be about a 30% improvement in the multimedia benchmarks before accounting for the extra cores, so around a 90% improvement in encoding over the i7 920 with the extra cores. Comparing a 3.6GHz i7 920 to a 4.8GHz Skylake should be about a 70% improvement for encoding on 4 cores, before including the extra instructions per second and memory bandwidth for other applications, and a slight incremental improvement in games. Primarily, you'd be missing features like a UEFI BIOS, SATA 3, USB 3.0/3.1, and NVMe M.2 support. (Rough math for those estimates is sketched after this post.)

Now that I think about it, I'm surprised USB 3.1 wasn't mandatory with the Z170 chipset. Thankfully the majority of 3rd party motherboards already support it.
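
For what it's worth, a back-of-envelope Python version of the estimates above; the IPC ratios and the 0.9 core-scaling efficiency are my assumptions, picked only to show the shape of the math:

Code:
# Rough model: single-thread gain ~= clock ratio * IPC ratio; encoding gain adds
# (imperfect) scaling with extra cores. All ratios are illustrative assumptions.
def rough_gain(old_ghz, new_ghz, ipc_ratio, old_cores=4, new_cores=4, core_eff=0.9):
    single_thread = (new_ghz / old_ghz) * ipc_ratio
    multi_thread = single_thread * (1 + (new_cores / old_cores - 1) * core_eff)
    return single_thread, multi_thread

# i7 920 @ 3.6GHz -> i7 980X @ 4.4GHz (similar IPC, 6 cores vs 4)
print(rough_gain(3.6, 4.4, ipc_ratio=1.05, new_cores=6))  # ~1.28x ST, ~1.86x encoding
# i7 920 @ 3.6GHz -> Skylake @ 4.8GHz (bigger IPC jump, still 4 cores)
print(rough_gain(3.6, 4.8, ipc_ratio=1.27))               # ~1.69x ST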
 