Intel Core i7-7700K CPU Synthetic Benchmark Sneak Peek @ [H]

Expected. If you already have a Skylake you're gaining nothing, really; if you're building new, go with the latest unless cost is a factor.
On cost, Kaby should push down Skylake prices, making them even more practical for a new build.

I see the 6600K staying relevant, and reasonably priced, for another year or two.

My 2500k lasted for 5 years, and truth be told I didn't really *need* to upgrade to my current 6600k.

The way things are going, I'll be rocking my current CPU for another 4 years, simply because nothing will offer that sort of price/performance for a long, long time.
 
I would honestly be shocked if AMD can beat the most advanced semiconductor company there is. No one has the resources, physical or financial, that Intel does.

I know they did it once before, but Intel got themselves into a very unfortunate position at that time and paid dearly for it. They are not in a bad place right now.

Well, to play some devil's advocate: they beat Intel more than once, just never in terms of marketing.

It also seems to me like right now is very much like Intel pre-Athlon: sitting on their backsides, slowly upgrading their Pentium line, not taking their competition very seriously, and assuming the only thing AMD could do was chase their bottom end (which is why back then they went with the obvious anti-AMD ploy of those stupid sockets for the P II, thinking AMD would just try to slot their chips into Intel boards again). The joke was on them that time, and sadly AMD didn't have the bank to market the heck out of it and make Intel really pay long term.

It would be a huge leap to say AMD can pull off the impossible this time. However, I don't doubt they could make Intel hurt with customers like us, because it's very possible Zen will perform... it still won't matter long term anyway. If Zen is indeed the i7's equal, or possibly better, Intel, like last time, will just drop half a billion in ad cash. lol
 
We all knew this; this is just an iGPU increase. It seems like the main focus for Kaby Lake was the portable side, where the iGPU needed help, not much on the desktop side when it comes to real IPC gains. This is actually good for AMD: even if Zen is 20% slower, it just gives them more time to release higher-clocked versions later next year. Unless AMD fails hard, I mean Intel has literally given them a free pass for a year. So we will see. I guess if there is one company that can fail even at this, it's AMD, lol. Keeping my fingers crossed.
 
Thing is, the best rumors (I know, right?) have Zen competing with Haswell at best.

Kaby Lake still looks like a worthwhile upgrade for someone like me on a 3570k. Higher clock speed, IPC, DDR4, and new features from a newer motherboard (more USB 3 ports, PCIe bandwidth, etc.) might make this worthwhile, unless we are looking at another $500 CDN CPU.

Would've been better to just have upgraded to Skylake. On Black Friday in Canada you could've gotten a 6700k for $370, a good Z170 motherboard for <$140 after rebate (Asus Z170 Pro Gaming Aura; Gigabyte Z170MX-Gaming 5 or Z170 Gaming 5 or equivalents; MSI Z170 SLI Plus), DDR4-3000 for $100, or DDR4-3200 for $115. So $610 altogether.

At MSRP availability and a direct exchange rate, the 7700k would be $465 already (+$95). Memory and mobo prices have also gone up.
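As a rough sketch of where those numbers come from: the ~$350 USD MSRP and ~1.33 USD→CAD exchange rate below are my assumptions, chosen to line up with the ~$465 CAD figure quoted above, not figures from the post itself.

```python
# Hypothetical figures: ~$350 USD MSRP for the 7700K and a ~1.33 USD->CAD
# exchange rate are assumptions chosen to match the ~$465 CAD quoted above.
usd_msrp = 350
usd_to_cad = 1.33

cad_7700k = usd_msrp * usd_to_cad  # ~465 CAD at direct conversion
skylake_build = 370 + 140 + 100    # 6700K + Z170 board + DDR4-3000, in CAD
cpu_premium = cad_7700k - 370      # extra cost for the CPU alone vs the BF deal

print(f"7700K ~ ${cad_7700k:.0f} CAD, premium over the 6700K deal ~ ${cpu_premium:.0f}")
```

Even before the motherboard and memory price increases, the CPU alone eats roughly $95 of the budget compared with the Black Friday Skylake route.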

If it overclocks well, I'll be upgrading. If not, then I'll sit back. Again.

Dammit, Intel. I WANT to give you my money. I want to upgrade to a much faster CPU. Give me a reason to spend my money! I just can't do it right now. I'm not hurting for performance, but I'd love an upgrade. I just want to make sure it's reasonable for the price I'm going to pay.

If it can hit 5Ghz or higher, I'll probably jump on it. It won't give me a huge jump in performance, but it'll be 5+Ghz. :D

They'll probably try to push the 5Ghz OC angle marketing wise, even though in practice that is basically a useless advantage over Skylake.

My plan to wait for 10nm just became more solidified.

You'll be looking at 2019 if not 2020 for desktop 10nm in the form of Ice Lake.

Makes you wonder if multi-CPU systems are in everyone's future.

No need. They can always design and introduce CPUs with more cores. There are no real technical challenges or limitations in going beyond the current 4-core mainstream standard.

Wait so IPC is basically the same (margin of error stuff) and power is the same... KL is just a bigger iGPU?

The video encode/decode block is updated. The actual execution units are the same, so the iGPU isn't really "bigger" on the performance side. If you mean the higher performance in mobile tests, that's because of higher effective clock speeds in those same lower TDP envelopes.

(*x79 and Sandy-Bridge-E got unofficial PCIe 3.0 support, but there was no PCIe 3.0 hardware to validate it with at the time, so it was never validated, and to this day can suffer minor timing issues in PCIe 3.0)

If you upgrade now, though, you'll be in the same spot as your previous build in terms of PCIe, which is to say you're upgrading at the EOL stage of the current PCIe version.

Personally, I think the upgrade point on the Intel side was Skylake. Otherwise you're better off waiting for Skylake HEDT or Ice Lake (yes, not even Coffee Lake). The other option is to wait for Zen.

Unless the market reacts properly to Kaby Lake and retail prices come in well below MSRP, it is not going to be anywhere near worth the money.

Wasn't there an update the next day that showed this to be related to 1 specific motherboard?

No, that was just some people misinterpreting (or intentionally misleading). The 6700k sample also used less power on the changed motherboard; that motherboard used less power for both the 7700k and the 6700k. The 7700k still used more power relative to the performance gain stock to stock (+200MHz).

I get the feeling that Intel may have held a bit back with this one as a just in case.

Zen perhaps gives them a run in the midrange, and perhaps even at the top tiers. Then again, perhaps not. Still, I feel Intel may have held just a bit back so that, if needed, they can drop another chip in the spring to claim the benchmark crown.

Intel's crown is HEDT, not the mainstream platform. They already have higher-core-count Skylake-based CPUs to be released, and likely higher-clocked Kaby Lake CPUs as well. These are already out in the roadmaps.

This is actually good for AMD: even if Zen is 20% slower, it just gives them more time to release higher-clocked versions later next year. Unless AMD fails hard, I mean Intel has literally given them a free pass for a year. So we will see. I guess if there is one company that can fail even at this, it's AMD, lol. Keeping my fingers crossed.

Any success AMD will have will be from basically maneuvering into market segments less focused on by Intel. In the short term this is very hard for Intel to react to beyond fighting via cash (basically lowering margins).

On a longer scale there really isn't anything impressive in Zen that Intel cannot react to design wise. The draw for Zen at the moment even with optimistic projections is that it will be more cores at given segments while undercutting Intel's high end. But adding more cores isn't very difficult, hence the rumors about Coffee Lake. They don't even need a completely new uarch or have one waiting in the wings as with Netburst->Core.

We need to keep in mind that we are still discussing, even at very optimistic Zen projections, individual cores which are slower than Intel's latest. I haven't seen anything to suggest those cores are somehow much more economical to produce either; it's just the die-size trade-off of having no GPU. If AMD's target market does prove very important, then Intel will just configure future designs with that in mind.
 
Intel's crown is HEDT, not the mainstream platform. They already have higher-core-count Skylake-based CPUs to be released, and likely higher-clocked Kaby Lake CPUs as well. These are already out in the roadmaps.

That's the main issue right now with chips: Intel doesn't have to do anything but release roadmaps. They were doing the same thing when the first Pentium dropped; they already had the Pentium II running in-house before the Pentium was even mid-life. There is no need to rush products out the door when you're not worried about your competition. If Zen is a bust, I think it's a safe bet you can just add 1-3 quarters to the release time of everything on their roadmap. If it's a rockstar, you can expect them to subtract the same amount of time. It's so obvious and annoying that unless Zen is 50% slower, or has major issues multiplying or something, I'll be building AMD next. Even if it's 5-10% slower, as I fully expect it to be, the cash I save will make it easy to go RAID 0 SSDs or bump a GPU upgrade up a bit, so overall I won't be any worse off anyway. Unless it really is 50% slower... then man help us, we're all screwed, and we'll all have to just suck up 2-5% performance bumps every year from now until ARM-based devices replace Intel in 90% of our devices anyway. :)
 
I don't know how you can say that when I see my 4770k getting pegged at times in Watch Dogs 2, and there are plenty of other games using 60 to 70% of my CPU or more at times, so I can't even imagine trying to game on an i5 over the next year or two if you want the best experience in every title. My God, even the first Watch Dogs will stutter if I disable hyper-threading on my CPU, and that game is a couple of years old now.
I've got Watch Dogs 2 installed but have had limited time to test it, let alone play it. I agree that it's pretty demanding on both CPU and GPU. I may have to settle with high/medium settings until the next PC build.
 
Good to see proper testing so the silly IPC rumours are gone :)
 
Nice try, Intel. What HAVE you been doing for the past year??

Anyway, I'm happy with my 6700K, which is hardly breaking a sweat 99% of the time anyway.
 
I never dreamed of the day when I would be able to game on the same CPU for 5 years at max visual fidelity without needing an upgrade. Intel must hate that it didn't build some sort of obsolescence into Sandy Bridge, because right now I see no compelling reason to upgrade from a 2700K clocked at 4.7GHz apart from major chipset improvements (like NVMe support). AMD needs to pull their collective thumbs out of their asses and start making Intel compete again.
 
Well, to play some devil's advocate: they beat Intel more than once, just never in terms of marketing.

Well I always felt that AMD's Marketing budget only ever stretched to a new pack of Post-Its each year. I have never known a company that is so inept/pathetic at marketing to the public.
 
5+ years for the mainstream desktop to get only maybe 40% faster? Intel has nose-dived and planted its face on the concrete like a bad blooper reel.

I'll stick with my 2600k, as I've said for the past 3 generations!
 
Damn, I'm getting old. I'm more interested in the power savings for the ultra-mobile version. My Surface 3 i5 is just not cutting it for me, and I'm interested in the features of the 270 chipset.
 
Damn, I'm getting old. I'm more interested in the power savings for the ultra-mobile version. My Surface 3 i5 is just not cutting it for me, and I'm interested in the features of the 270 chipset.

I am eyeballing a Kaby Lake based Surface 5.

For the desktop there is a long wait for Icelake or whatever makes sense at the time.
 
I'm running a 2600k (gen 2, I believe) and I still can't see a reason to upgrade. I think the last couple of chipsets/cycles were better for laptops than desktops overall.

My game machine has a 4700k (I think). It's a gen 3.

The Gen 1 920 I sold still has a very happy owner that has not had any desires to upgrade.
 
Tell me about it. I'm still on the i7-3930k I bought at the end of 2011 on launch, and while I am considering upgrading in the next 6 months, it has absolutely nothing to do with performance.

It hits 4.8GHz and gives me a Cinebench 11.5 single-threaded score of 1.92, which places me in pretty good company even today, 5 years later.

Nice, I'm still on a [email protected] on a $30 Zalman HSF. I leave the whole "drop back to 1.6GHz" mode on and it's never given me an issue; it has survived 2 motherboards, and the damn thing has been bulletproof. The only thing that doesn't like it is the Oculus software, which wants a newer processor and throws up a nagware-style alert in its UI telling me my processor isn't ideal. At 4.7GHz I have no complaints; I figure I might be losing, at best, 10fps by not moving to a newer, more modern CPU (other considerations aside, like power, noise, etc.). In the end, this is by far my longest-lived build, and like you state, I'll upgrade because of aging components, not for additional performance. Hmmm, I wonder if I still have my old Celeron 450MHz around here... in that cartridge the size of a cell phone.
 
Just my personal testing with a [email protected]. Memory test DDR3@2400Mhz.

(attached screenshots: 4790k test.PNG, 4790k memory-test.PNG)



With these results, I would say that Haswell is still in the game as well. I don't really know what Intel is thinking: AMD is coming out with a CPU designed by probably the best CPU designer AMD has ever had, and Intel is coming up with new processors that give basically no better performance than the last gen.
 
I have never known a company that is so inept/pathetic at marketing to the public.

It has been a difficult job for the last 10 years, especially since 2011.
 
Nice try, Intel. What HAVE you been doing for the past year??

Anyway, I'm happy with my 6700K, which is hardly breaking a sweat 99% of the time anyway.

I can only hope they've been making super awesome breakthroughs in solid-state storage.

But I do appreciate better integrated graphics. Intel has much more improvement to make in that realm, and while ent[H]usiasts may not care so much, it greatly benefits the general public, like being able to run 4K monitor(s) with good desktop performance without additional hardware.
 
Isn't this a 95W part instead of 90W? How the hell is it basically getting the same scores, or less than 1% faster? Totally not worth it unless it overclocks like a beast due to increased power availability.
 
Holy crap, I'll be keeping my 2500K, no friggen way... What's the next one, Cannondale or something? When's the next one due... seriously.
Don't hold your breath for Cannonlake. There are reports Intel is having difficulty getting down to 10nm, and that Cannonlake is going to be a repeat of Broadwell (U- and Y- low power / SoC SKUs). Word is it won't come out until late 2018. Due to the troubles with Cannonlake Intel is going to have "Coffee Lake," which is a second optimization to Skylake on 14nm+ and due for release in early 2018. If you want a desktop replacement we're going to have to wait for Icelake, which may not be ready until 2019 at this pace...
Honestly, we should all know by now when Intel abandoned the Tick-Tock model (hinted July 2015, official March 2016) that we would be receiving an optimization of the same architecture. Search your feelings; you KNOW it to be true! In terms of respect for AMD, I lost that 8-10 years ago. Here's to building with 4790K or 6700K. Cheers.
Ja, they're on a "Process-Architecture-Optimization" cycle now. Broadwell was the "Process," being the first 14nm chip.
  • Process = Broadwell (14nm)
  • Architecture = Skylake
  • Optimization = Kaby Lake
  • Optimization 2 = Coffee Lake
  • Process = Cannonlake (10nm)
  • Architecture = Icelake
  • Optimization = Tigerlake
 
One of the other sites claims to
There are a lot of everyday tasks and work programs that choke on lower-end dual-core CPUs nowadays. SSDs have extended the life of dual cores despite their inefficiencies, but many employees can handle multitasking even better with a quad-core CPU. We are going quad core for every employee from now on.
It's not that two cores are too slow; it's that the Core 2 Duo processors have been severely outperformed by the i3 through i7 processors. Even a quad-core Q6600 will get slaughtered in just about every task by the lowest-end i3 sold today.
 
Don't hold your breath for Cannonlake. There are reports Intel is having difficulty getting down to 10nm, and that Cannonlake is going to be a repeat of Broadwell (U- and Y- low power / SoC SKUs). Word is it won't come out until late 2018. Due to the troubles with Cannonlake Intel is going to have "Coffee Lake," which is a second optimization to Skylake on 14nm+ and due for release in early 2018. If you want a desktop replacement we're going to have to wait for Icelake, which may not be ready until 2019 at this pace...

Ja, they're on a "Process-Architecture-Optimization" cycle now. Broadwell was the "Process," being the first 14nm chip.
  • Process = Broadwell (14nm)
  • Architecture = Skylake
  • Optimization = Kaby Lake
  • Optimization 2 = Coffee Lake
  • Process = Cannonlake (10nm)
  • Architecture = Icelake
  • Optimization = Tigerlake

Coffee Lake includes 6 cores for the desktop tho.
 
With all the platform improvements Skylake brought, at least there was a reason to get on board if you were coming from a 2500k or such: DDR4, USB 3.1 Gen 2 Type-C ports, Thunderbolt 3, M.2 using full PCIe 3.0 x4. But with KL, there's... well, nothing really. An additional 4 PCIe lanes on X270, I guess.

It's been what, 15 months since Skylake launched? I'm really quite disappointed here.
 
I'm really quite disappointed here.

I expect people to be more disappointed with Coffee Lake when the stock clocks are reduced to allow for 2 more cores and stay in the same TDP on the same 14nm+ node.
 
Same as everyone else here: still using an i7 3770, and it looks like another year will go by with no need to upgrade.
 
Can you add some newer games to the CPU benchmark suite that are supposed to have more than four threads, like Watch Dogs 2? Maybe test with and without HT on?
 
6600k and 6700k release pricing vs 7600k and 7700k? And what about the price of the 6-series after the 7-series releases? It looks like that'll be the only metric that'll really matter much. Performance sure doesn't.
 
One of the other sites claims to

It's not that two cores are too slow; it's that the Core 2 Duo processors have been severely outperformed by the i3 through i7 processors. Even a quad-core Q6600 will get slaughtered in just about every task by the lowest-end i3 sold today.

I was not talking about older Core 2 Duos or a newer i3 that has hyper-threading. The post I quoted specifically mentioned a Pentium, which is a true dual core. Hyper-threading adds a lot to a dual-core processor, but it still can't keep up with 4 true cores when multitasking at work.
 
If you upgrade now, though, you'll be in the same spot as your previous build in terms of PCIe, which is to say you're upgrading at the EOL stage of the current PCIe version.

Yeah, I know, but it's really not that bad. Even today, almost 5 years later, there is very little difference in GPU performance between x16 Gen2 and x16 Gen3, especially at my 4K resolution.

(Using Gen3 x8 as a stand-in for Gen2 x16 in the test above, as they are similar in performance, though Gen2 x16 is actually ever so slightly faster. The test was performed on a 1080, not a Titan, but I can't imagine the results would be hugely different.)
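For reference, the rough Gen2 x16 ≈ Gen3 x8 equivalence falls out of per-lane bandwidth arithmetic. A quick sketch using the published theoretical link rates and encoding overheads (packet/protocol overhead ignored, so real throughput is somewhat lower):

```python
# Theoretical PCIe per-lane bandwidth (MB/s), counting only encoding overhead.
# PCIe 2.0: 5 GT/s, 8b/10b encoding    -> 80% of raw bits carry data
# PCIe 3.0: 8 GT/s, 128b/130b encoding -> ~98.5% of raw bits carry data
gen2_lane = 5.0e9 * 8 / 10 / 8 / 1e6     # 500 MB/s per lane
gen3_lane = 8.0e9 * 128 / 130 / 8 / 1e6  # ~985 MB/s per lane

gen2_x16 = gen2_lane * 16  # 8000 MB/s
gen3_x8 = gen3_lane * 8    # ~7877 MB/s, within ~1.5% of Gen2 x16

print(f"Gen2 x16: {gen2_x16:.0f} MB/s, Gen3 x8: {gen3_x8:.0f} MB/s")
```

Gen2 x16 comes out very slightly ahead, which matches the "ever so slightly faster" note above.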

I kind of doubt lack of Gen4 will become a deal breaker any faster than lack of Gen3 has, so even if I buy a CPU/motherboard right before Gen4 launches, I'll probably be OK for the next 5 years :p

I feel like the latest and greatest PCIe spec is really only useful if you want to go SLI but don't have enough lanes, so you have to drop down to x8, and I will NEVER go SLI again.
 
To those that know: is this just a problem with a lack of competition from AMD? Does Intel not have the incentive to go the extra mile and really push it? Or is it because the i-series was such a badass when launched but just doesn't scale as well?
 
You know, I would love to see a 2700K and a 3770K @4.2GHz in the charts, to see if upgrading is worth it for us Sandy Bridge/Ivy Bridge people.

It's been 4 years since Ivy Bridge; we should honestly see at least a 50% improvement in general processing scores. Absolutely no one gives a crap about the iGPU. Think about it: spend $300 on a mediocre 7700K iGPU + new motherboard + new memory, or spend $300 on a new video card. What's really going to give you the best boost for GPU tasks?

(And Intel wonders why their numbers are in the toilet)

If you are doing heavy, CPU-intensive video editing/rendering, it might be worth upgrading from Sandy/Ivy to Skylake/Kaby Lake. From a gaming standpoint, though... not so much.
 
To those that know: is this just a problem with a lack of competition from AMD? Does Intel not have the incentive to go the extra mile and really push it? Or is it because the i-series was such a badass when launched but just doesn't scale as well?

Don't believe anyone who tries to pin it on any single reason; there is a multitude of them working in conjunction.

The other question is whether Intel is truly stagnant as a whole or just in certain segments; this is a very important distinction. The retail 6700k/7700k market is a very minor part of the entire picture.
 
Sure looks underwhelming, to say the least. Once again, I see no reason to upgrade my 4690k @ 4.5 (especially since I got an AIO and can go higher with the OC). I mostly game and do a bit of web/graphic design here and there.

Maybe in a few years the bang for the buck will be there... maybe...
 
To those that know: is this just a problem with a lack of competition from AMD? Does Intel not have the incentive to go the extra mile and really push it? Or is it because the i-series was such a badass when launched but just doesn't scale as well?


It's a combination of issues, in no particular order:
  • As die sizes shrink it becomes more and more difficult to make them work right, meaning it is harder and harder to make performance improvements. At some point we will reach the point where further die shrinks are no longer feasible, and the only improvements can come from architecture optimizations, and the Core arch is already pretty damned optimized.
  • Mobile is more and more important, so more and more the architectures are optimized for low power use, not for higher performance. Since desktop and mobile parts use the same architecture, when you optimize for low power in mobile, you limit what you can do with performance in the desktop segment.
  • Because most people either use laptops or never buy a discrete GPU, more and more emphasis is on improving the iGPU, at the expense (in time and money spent, chip real estate, and power envelope) of the traditional CPU cores.
  • Because Intel hasn't had a serious competitor in the x86 PC market for some time now. Why spend lots of time and money to develop a next gen, when your current gen already dominates the market?
 
WOW, I wonder if Intel hit a roadblock with this one. I will still get one if the Retail Edge program offers it.
People will still get it for the higher SKU number :ROFLMAO::ROFLMAO::ROFLMAO: even the 6700K wasn't a jump over the 4790K,
except for DDR4 support and the iGPU.
 
WOW, I wonder if Intel hit a roadblock with this one. I will still get one if the Retail Edge program offers it.
People will still get it for the higher SKU number :ROFLMAO::ROFLMAO::ROFLMAO: even the 6700K wasn't a jump over the 4790K,
except for DDR4 support and the iGPU.

They've got nothing left in the tank, apparently. The conditions are prime for AMD to catch up; the timing could not be better for the shrimp to play catch-up. Then maybe we'd get some real progress from Intel.
 
I'm probably going to build a new system anyway, because my wife's 2600k is BSODing multiple times per day (watchdog timeout), and I can give her my 3770K. It's also a good excuse to go to an NVMe SSD upgrade. Now all I'm waiting for is to see the 1080 Ti pricing.
 
As die sizes shrink it becomes more and more difficult to make them work right, meaning it is harder and harder to make performance improvements. At some point we will reach the point where further die shrinks are no longer feasible, and the only improvements can come from architecture optimizations, and the Core arch is already pretty damned optimized.

32nm was the last process where shrinking the node allowed for increased clocks. It has been more difficult (and extremely costly - Intel has spent billions of dollars trying to solve this) ever since.
 