Will Skylake be a big leap forward?

AMD needs to get off their asses and start bringing the heat to Intel.
This comes up every time and I don't think people understand how hard it is to extract more performance from legacy code. The "easy" targets have already been solved: cache/memory subsystem performance (esp. with IMC), pre-fetch, branch prediction, out of order execution, shortened penalties on misprediction, etc. Incremental improvements to these things happen each generation, but there's no magic bullets to greatly increase performance for these problems anymore. AMD is trying to improve performance and so is Intel. It's ridiculous to assume that either company isn't working hard at these very hard problems.

Performance improves as new compilers are released and programs recompiled, and as new x86 extensions are used. TSX returns in Skylake, which should help improve the performance of some types of multithreading. This doesn't really help with immediate gratification, but that's how this stuff works. :p
 
First purported benchies are out.. kinda hard to decipher what's what, but here's the linkage:

http://wccftech.com/wipintel-skylake-core-i7-6700k-core-i7-4790k-devils-canyon-performance-benchmarks-leaked-tested-ecs-z170-claymore-motherboard/

The IPC gains are negligible. In fact I'm not even sure you could call them "gains".

This is disappointing if true, but Skylake for me isn't really about the CPU; it's about the platform.

If true, desktop Skylake is at Ivy Bridge levels of meh.

Notice how the stock clocks are significantly cut this time round? Coupled with preliminary reports that the average Skylake can barely OC past 4.2GHz, this really calls into question the value of OCing, as if the 4790K hadn't done enough to harm that already.
 
If true, desktop Skylake is at Ivy Bridge levels of meh.

Notice how the stock clocks are significantly cut this time round? Coupled with preliminary reports that the average Skylake can barely OC past 4.2GHz, this really calls into question the value of OCing, as if the 4790K hadn't done enough to harm that already.

Not sure how true this is. Stock clocks on the 6700K are 4.0/4.2, so I have to think it'll hit 4.5 without much effort; we don't know what kind of TIM is on there yet, but the removal of the FIVR alone should allow for better OC than Haswell.
 
Not sure how true this is. Stock clocks on the 6700K are 4.0/4.2, so I have to think it'll hit 4.5 without much effort; we don't know what kind of TIM is on there yet, but the removal of the FIVR alone should allow for better OC than Haswell.

4.5 really isn't enough to excite me. Unless it's at least 5GHz or the IPC gain is impressive, I'm going to stick with my 2500K...
 
4.5 really isn't enough to excite me. Unless it's at least 5GHz or the IPC gain is impressive, I'm going to stick with my 2500K...

I already have 4.5 GHz on my 980X. So gimme some more cores at least.
 
4.5 really isn't enough to excite me. Unless it's at least 5GHz or the IPC gain is impressive, I'm going to stick with my 2500K...

If the IPC of Skylake ends up being around 10% above Haswell clock for clock, then a Skylake at 4.5 GHz would still be roughly 25-30% faster than a Sandy Bridge at a whopping 5 GHz. 35-45% faster when both are at 4.5 GHz.
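That arithmetic is easy to check: perceived speed scales roughly as IPC times clock. A quick sketch using the post's own assumption (Skylake ~40% faster per clock than Sandy Bridge, the midpoint of the 35-45% figure):

```python
# Relative performance ~ IPC x clock speed.
# Assumption from the post above: Skylake ~1.40x Sandy Bridge per clock.
skylake_ipc = 1.40   # relative to Sandy Bridge = 1.00

skylake_at_45 = skylake_ipc * 4.5   # Skylake @ 4.5 GHz
sandy_at_50   = 1.00 * 5.0          # Sandy Bridge @ 5.0 GHz

ratio = skylake_at_45 / sandy_at_50
print(f"Skylake 4.5 vs Sandy 5.0: {ratio:.2f}x")   # ~1.26x, i.e. ~26% faster
```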
 
IMO, any type of paste is completely inadequate and Intel should quit fucking us, the paying customers, over and switch back to solder.
 
50% extra processing lag? Where do you get that from?

Right there in the chart in the first link.

[attached image: frame processing time chart]


Look at how much longer it takes for Frame 1 to completely finish processing.

This is a pretty well known tradeoff of pipelining your hardware; throughput improves, but latency gets worse. In this example it doesn't look like a very good tradeoff; I wouldn't accept 20 ms of additional input lag just to get 4 fps higher.
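To make the tradeoff concrete, here's a toy model (the numbers are made up for illustration, not taken from the linked chart): deeper pipelining lets a new frame complete every stage-time, so throughput improves, but each frame is in flight for all stages, so latency grows.

```python
# Toy pipelining model; numbers are illustrative, not from the linked chart.
def pipeline(stages, stage_ms):
    """One frame completes per stage time; a frame is in flight for all stages."""
    throughput_fps = 1000.0 / stage_ms
    latency_ms = stages * stage_ms
    return throughput_fps, latency_ms

fps1, lat1 = pipeline(1, 16.7)   # unpipelined: ~60 fps, 16.7 ms to finish a frame
fps3, lat3 = pipeline(3, 15.6)   # 3 stages (work rarely splits evenly, so the
                                 # per-stage time barely drops): ~64 fps, 46.8 ms
print(f"{fps1:.0f} fps / {lat1:.1f} ms  ->  {fps3:.0f} fps / {lat3:.1f} ms")
```

Throughput went up by a few fps, but the time for any one frame to fully finish nearly tripled, which is exactly the kind of input-lag hit described above.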
 
Intel has more of an incentive to improve PCIe lanes/performance than it does improving Clock rates (which could really derail R&D) or even IPC. I'm surprised they are still seeing increases in IPC honestly.

Most people just want more lanes, Intel already offers many products with lots of cores. I like their direction so far with Skylake.
 
Not sure how true this is. Stock clocks on the 6700k are 4.0/4.2, so I have to think it'll hit 4.5 without much effort; We don't know what kind of TIM is on there yet, but the removal of the FIVR alone should allow for better OC than Haswell.

It's a fact that OC headroom has been shrinking since SB, and I have already seen several 4690Ks that couldn't even hit stock 4790K clocks. The 2500K era of free lunch is over; besides, Intel isn't stupid enough to let us have that anymore, as seen by how intelligently they priced the 4790K ($100 more over the 4690K for much higher clocks and HT) and the 5820K ($50 more over the 4790K for an extra 2C/4T and more PCIe lanes) to court the "save money on mobos to spend more on the CPU" crowd this time round.
 
The same is true for the average consumer. The real sloths in the industry are software developers. How many programs that businesses and consumers use every day are still single-threaded only, manage memory poorly, have bloated superfluous code from ten years ago, and would be bogging PCs down even worse if it weren't for SSDs masking the complete lack of performance improvement?

I wouldn't necessarily agree with that as a general rule.

All too often, new program / architectures are saddled with crippling (backwards) compatibility / interoperability requirements in order to maintain particular business-side objectives (such as keeping newer versions compatible with older, or business services continuity throughout an enterprise stack).

These types of requirements almost always cause the usage of old code frameworks, single-threaded modules, or slower languages (looking at you, Java ;-))

Edit - Way OT post there -----^

Back on track:
I'm not too interested in upgrading to Skylake coming from a Z77 / 3770k @ 4.4 setup. Skylake's chipset looks pretty much like a rehash of the 8/9 series. It doesn't even include native SATA-E x4 / USB 3.1. :rolleyes:
 
The same is true for the average consumer. The real sloths in the industry are software developers.

Average consumers don't like paying for software. And if they do pay it can't be any more than 99 cents and it better cook them breakfast too or it's getting a one-star.

What software that you use do you think should be multi-threaded but isn't?
 
I know I don't have much backing to his claim but he has told me straight up.

"when Skylake comes out.. upgrade."
 
Skylake will be a good entry point into the realm of DDR4 and the new socket. It'll have life, you'll have memory that can last for 5+ years and a chipset that will probably only see marginal upgrades with the z180.
 
If the IPC of Skylake ends up being around 10% above Haswell clock for clock, then a Skylake at 4.5 GHz would still be roughly 25-30% faster than a Sandy Bridge at a whopping 5 GHz. 35-45% faster when both are at 4.5 GHz.

This is what annoys me whenever people talk about how new chips aren't better than Sandy... People still don't seem to have learned that clock speeds are incomparable between generations as a metric. It's like we've all forgotten when a 1.8GHz Athlon was faster than a 3GHz Pentium.

By all metrics, each generation has been faster, cooler and less power demanding than the last by a real material margin. You can say that it isn't enough to warrant an upgrade expense but not that the performance isn't improving.

The gains in CPU tech in the last five years are outstanding.
 
This is what annoys me whenever people talk about how new chips aren't better than Sandy... People still don't seem to have learned that clock speeds are incomparable between generations as a metric. It's like we've all forgotten when a 1.8GHz Athlon was faster than a 3GHz Pentium.

By all metrics, each generation has been faster, cooler and less power demanding than the last by a real material margin. You can say that it isn't enough to warrant an upgrade expense but not that the performance isn't improving.

The gains in CPU tech in the last five years are outstanding.

Exactly. Skylake may not be worth the price of admission for many that are using Haswell or Haswell refresh DC, but for most on IB and back or any AMD, it's probably going to be a great choice.
 
I'm on Sandy Bridge. I will upgrade for Skylake. I may not 100% need to, but it's time.
 
The IPC bump will be nice, but I don't expect anything earth-shattering...

The real story will be the Iris Pro graphics and gaming... For low end systems, iGPUs should be able to cover all the basics and then some, running low end 'esports' games fine... For high end gaming systems, with DX12 it should have a HUGE impact on future games being able to use it for post-processing...

Most people aren't anticipating Iris Pro's impact on DX12. Big :)

Does this mean that DX12 will use both the iGPU and your gaming GPU?

If so, I wonder if Adobe will be able to make use of both of them for PS/LR.
 
It's not only the CPU that matters now, but the whole platform. Yes, there might not be a big difference in speed between the 2500K and 6700K, but there is a large difference between Z170 and Z68. PCIe 3.0, USB 3.1, M.2 x4 - if you want to use devices that would benefit from those features, then Z68 will hold you back.

Though, I'm staying on my 4790K. Before Iris Pro has any influence on DX12 games, or before DDR4 gets cheaper or more features get added to motherboards, we will probably be in Cannonlake territory. But if I still had my 2500K, I'd upgrade not for the CPU but for the chipset.
 
I don't believe Skylake is currently slated to implement USB 3.1 natively on the platform.

I was originally anticipating Skylake but some issues now are -
- PCIe 3.0 is likely in its last generation, with PCIe 4.0 likely supplanting it next year.
- DDR4 availability is still in the relatively early phase for cost and performance.
- Gaming performance requirements may regress due to DX12.
- Still 4c/4t and 4c/8t at the same cost levels for the mainstream platform. Only 6c/12t costs have gone down but on HEDT which is delayed behind mainstream.
- CAD (and most currencies) have weakened considerably relative to the USD, so costs are effectively much higher for everyone outside the US
- Some questions now on the actual progress for the GPU side.

These factors are now making me want to just hold on and wait for at least Kaby Lake (or even later) and Skylake-E. Or at least giving Zen a chance to show.
 
Iris Pro was exciting when it was first announced, but it's been on only a tiny percentage of the total number of CPUs shipped. Iris (non-pro) is not quite as impressive, and that's still not the low end of iGPU. Initially you couldn't even buy a socketed CPU with Iris Pro (I believe they're now available with some Broadwell Ks?)

I hope that Intel ships more Iris Pro GPUs, but I don't think that's a foregone conclusion given how expensive it is for Intel to provide in terms of die space, relative to the price increase it commands. Looking at SKUs and available configurations, Skylake looks like it too will have lots of options, and only a few have GT4e and EDRAM.
 
10% is not much to someone that upgrades their processor every generation, but for most of Intel's customers (enterprise and non-enthusiast consumers) that typically wait 3-5 years between systems, they are getting quite an increase in performance.

Current system: 100 (baseline)
Next gen: 110 (10% increase each gen)
Next gen: 121
Next gen: 133
Next gen: 146
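That table is just compound growth; the same math also makes it easy to see what a smaller per-generation gain would look like instead:

```python
# Compound per-generation gains from a fixed baseline of 100.
def compound(baseline, gain_per_gen, generations):
    return [round(baseline * (1 + gain_per_gen) ** g) for g in range(generations + 1)]

print(compound(100, 0.10, 4))   # [100, 110, 121, 133, 146] -- the table above
print(compound(100, 0.05, 4))   # at half the gain: [100, 105, 110, 116, 122]
```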

Except that in many cases the increases have been half that. Skylake's main potential, aside from the token IPC gains, will be a lot of PCIe lanes for all that SSD storage.
 
Except that in many cases the increases have been half that. Skylake's main potential, aside from the token IPC gains, will be a lot of PCIe lanes for all that SSD storage.

7-10% with each gen after SB is the norm.
 
I tend to agree with the above in that it's the chipset that is getting the most improvements. More PCIe lanes for everything we want to do with them these days (USB 3.1 cards, SSDs and GPUs). As external devices get more bandwidth hungry and latency sensitive, the "access to the CPU" quotient is getting raised via additional channels and attempts to make those channels as low latency and immune to bottlenecking as possible. In fact, that is the main point of HEDT. Skylake simply takes that same focus and applies it to the consumer side.
 
I don't think that native USB 3.1 support is a big deal. I also don't expect to have anything that will require 3.1 for many years. Eventually I will get a new Kindle with 3.1, and many, many years later all ports can be 3.1, including the mouse and keyboard, and we will be able to fit more ports, but that time is not upon us. An external SSD or NAS is the only practical application. Also, I heard that phones coming out now with Type-C are USB 2.0.

I think that the improvements to the CPU and platform together are just enough to be worth upgrading, but it is always more worthwhile to wait.
 
I don't think that native USB 3.1 support is a big deal. I also don't expect to have anything that will require 3.1 for many years. Eventually I will get a new Kindle with 3.1, and many, many years later all ports can be 3.1, including the mouse and keyboard, and we will be able to fit more ports, but that time is not upon us. An external SSD or NAS is the only practical application. Also, I heard that phones coming out now with Type-C are USB 2.0.

I think that the improvements to the CPU and platform together are just enough to be worth upgrading, but it is always more worthwhile to wait.

Depends how often you update your MB. My system is 5 years old. USB 3 wasn't part of intel's chipset at that time, but I got it on my MB. USB3 thumb drives have been around for years. If I bought a new CPU/MB today, I'd want 3.1 built in. And if I was buying your used Skylake board a year from now, I'd want 3.1 even more.
 
I WANT MOAR CORES, MOAR CORES DAMNIT :D unlocked ones.

Although for now I'll take a 10GHz 5960X vs 2x 5960X

You and I both. Intel doesn't care, however, and refuses to offer unlocked HCC (high core count) chips regardless of what people are willing to pay for them. They're offering what they feel like offering and you and I have no choice other than to buy what they're offering at the price they feel like offering it at (or not to buy anything at all).


I think Intel still has an incentive to make faster chips. People won't replace unless there is a reason to. If there's nothing worth upgrading from my 2500k to then I'll just keep running it. This = no $$$ for Intel.

Intel couldn't give a rat's posterior about you not buying their chips. They care about their high-margin datacenter clients. The only incentive Intel has to release faster chips is when sales to said clients slow down.


AMD needs to get off their asses and start bringing the heat to Intel. I'm still on the 2500K but I really want to upgrade to something worthwhile; with no real competition, Intel has no reason to make any astounding performance jumps. Mid-range Intel CPUs should have been at least 6 cores standard already, and enthusiast-grade CPUs should have been OCing to 5GHz with no effort by now. Seems like power consumption has been the only thing Intel has really been chasing.

Seems like only GPUs and SSDs have brought any significant performance gains in the last 6 years.

(This was not meant to be some childish AMD bash; Intel has gotten lazy as shit as well)

/rant :)

Agreed. Intel only releases their best when AMD pushes them to. They could easily bring their best to market with the snap of a finger, but refuse to. Back when the AMD FX first came out, Intel immediately responded by taking their Gallatin-core Xeon MP, putting it into the Socket 478 format and launching it, just like that. They'd have never released such a chip in a million years without AMD pushing them to. A similar instance occurred with the AMD 4x4 (Quad FX) system: Intel immediately responded with Skulltrail, an Extreme Edition chip in the Socket 771 format....BOOM.....just like that.

With AMD so far behind Intel performance-wise, it's highly unlikely we'll ever see Intel's best ever again.:(


IMO, any type of paste is completely inadequate and Intel should quit fucking us, the paying customers, over and switch back to solder.

Why should Intel care about you or me? Sure, we're customers, but we don't have any other viable choice when it comes to processors; Intel knows this and really doesn't care whether we're satisfied or not.
 
http://vr-zone.com/articles/intel-skylake-flagship-core-i7-6700k-cpu-gets-reviewed-launch/95653.html

Both CPUs were compared using a number of popular benchmarks: PCMark 8, 3DMark Fire Strike Extreme, Cinebench R15, 3DMark Sky Diver, 3DMark Cloud Gate, and Sandra 2015. As expected, the Skylake CPU isn't significantly faster than the Haswell i7-4790K, but it does show good gains in quite a few benchmarks. The maximum increase was seen in the Sandra 2015 Multimedia 1 test, where it managed to score 29.1% higher. In most other tests, the gap wasn't all that high.
 
http://vr-zone.com/articles/intel-skylake-flagship-core-i7-6700k-cpu-gets-reviewed-launch/95653.html

Both CPUs were compared using a number of popular benchmarks: PCMark 8, 3DMark Fire Strike Extreme, Cinebench R15, 3DMark Sky Diver, 3DMark Cloud Gate, and Sandra 2015. As expected, the Skylake CPU isn't significantly faster than the Haswell i7-4790K, but it does show good gains in quite a few benchmarks. The maximum increase was seen in the Sandra 2015 Multimedia 1 test, where it managed to score 29.1% higher. In most other tests, the gap wasn't all that high.

The same FUD from some Chinese website fishing for hits. Why do people still believe this nonsense? It may or may not be true, but putting full faith in this sort of "review" is honestly quite bizarre to me. The answer to "Will Skylake be a big leap forward?" is: wait and see.
 
Doubt it. It'll be another disappointing marginal 10-15% performance increase over the previous stuff most likely.
 
If I had a prototype laptop with an Intel Core i7 0000 ES sample chip, I would benchmark it.

I would clock my i7 920 to 2.8GHz to compare clock for clock with the ES.

In Cinebench R15 I'd expect the old Nehalem to put up 390 and the Skylake to put up 620, or about 60% faster clock for clock. Compared to Haswell, it's about 10% faster clock for clock.

But you know who's lucky enough to have prototype hardware.
 
I'm still sitting on my 3+ year old 3770 Ivy Bridge. It easily handles anything and everything I throw at it. Barring a catastrophic failure, I have little incentive to drop coin on a new CPU/mobo for at least another couple of years. The move to SSD is by far the most noticeable performance improvement I've seen in years.
 
I'm still sitting on my 3+ year old 3770 Ivy Bridge. It easily handles anything and everything I throw at it. Barring a catastrophic failure, I have little incentive to drop coin on a new CPU/mobo for at least another couple of years. The move to SSD is by far the most noticeable performance improvement I've seen in years.

Likewise, still sitting with my X58 Xeon X5650 chip and it runs like a champ. And agreed with the SSD.

Everyone was excited about DDR4 but it seems like that was a letdown; at least I haven't read of any people raving about a performance increase.
 
Likewise, still sitting with my X58 Xeon X5650 chip and it runs like a champ. And agreed with the SSD.

Everyone was excited about DDR4 but it seems like that was a letdown; at least I haven't read of any people raving about a performance increase.

If you're happy, you're happy.

But Skylake is roughly 60% faster clock for clock. So once the Skylake Xeons come out, you might have something to jump on.
 
Likewise, still sitting with my X58 Xeon X5650 chip and it runs like a champ. And agreed with the SSD.

Everyone was excited about DDR4 but it seems like that was a letdown; at least I haven't read of any people raving about a performance increase.

has there ever been a huge jump in performance early on? I can't remember DDR2 vs DDR, but I distinctly remember people saying that DDR3 wasn't any faster than DDR2 when it came out.
 
has there ever been a huge jump in performance early on? I can't remember DDR2 vs DDR, but I distinctly remember people saying that DDR3 wasn't any faster than DDR2 when it came out.

With Intel platforms, faster RAM really doesn't help all that much in most real world apps and games. This has been more or less true since the Core 2 Duo days. So upgrading a platform just based on RAM is foolhardy. With that said, DDR4 does allow for greater single stick RAM density (think single stick 16GB DDR4 RAM) and potentially cheaper RAM upgrades down the line.
 
has there ever been a huge jump in performance early on? I can't remember DDR2 vs DDR, but I distinctly remember people saying that DDR3 wasn't any faster than DDR2 when it came out.

One of the points of memory is to keep the CPU fed. We sat on DDR1 for so long that it seemed like we were actually boosting performance when we upgraded speeds when in fact we were lessening the choke point on the CPU.

DDR1 to DDR2 initially was a big step, but since then we haven't sat on slower memory for too long. Moving to faster memory will benefit in certain situations, but these days mostly for those who are using the iGPU (or synthetic benches). Remember the triple-channel DDR3 of X58 vs the dual-channel DDR3 of Z68? Yea, those 1156 CPUs were sure hurting... :D

So moving to faster memory won't improve performance much; even if we moved to 5GHz DDR4, it would only show marginal increases, as the CPU is already pulling data as fast as it can.
 