Leaked Intel Core i7-6700K Skylake Benchmarks?

4 cores, 8 threads. Meh. We need 6/8-core chips coming down in price. I don't want to go Xeon in my workstation.
 
4 cores, 8 threads. Meh. We need 6/8-core chips coming down in price. I don't want to go Xeon in my workstation.

WORD.

The integrated IGFX that does not work at all can suck it too. I want more L1/L2/L3 cache, not a half-assed iGPU.
 
WORD.

The integrated IGFX that does not work at all can suck it too. I want more L1/L2/L3 cache, not a half-assed iGPU.
I want an iGPU if it's intelligent enough to shut off my dedicated GPU entirely to save power/heat when not running 3D apps.
 
I want an iGPU if it's intelligent enough to shut off my dedicated GPU entirely to save power/heat when not running 3D apps.

Modern GPUs are pretty good at throttling down when not needed, though. I'd rather they put that silicon to better CPU use...or just eliminate it entirely.
 
So those charts that showed core counts increasing massively into the future (which is now the present) only applied to server CPUs? Well, that sucks.

Upgraded to Sandy and Haswell, skipping two generations in a row, since Broadwell doesn't even count, really. See you at *****lake (I hope it's called Spacelake).
 
Seems to be very little incentive to upgrade to it, if these benchmarks are true.
 
So people with Sandy Bridge CPUs still have very little reason to upgrade. :rolleyes::rolleyes:
 
This seems similar to what I've seen on other sites. Maybe I'll wait and see if AMD's next architecture is worth getting.
 
I am itching for an upgrade. Even though my i7 930 is still going strong, it will be relegated to NVR duty.
 
this is still going to be a huge upgrade for me, but I realize not everyone is running on 5-year-old dual-core AMD parts :p

There's a great discussion in this thread about the iGPU usage in conjunction with a discrete card under DX12: http://hardforum.com/showthread.php?t=1867747&page=2
Essentially, post-processing and other specific tasks can be offloaded to the iGPU if it can handle it (which Skylake's can) and under DX12 this could give a 10-20% bump in framerates.
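For the curious, here's a rough idea of what that looks like on the engine side: with explicit multi-adapter, the game enumerates both GPUs and creates a D3D12 device on each, then it can record the post-process pass on a queue belonging to the iGPU. This is just my own hypothetical sketch, not code from the linked thread, with error handling omitted:

[code]
// Minimal DXGI/D3D12 multi-adapter enumeration sketch (hypothetical example).
// Link with d3d12.lib and dxgi.lib; requires Windows 10.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<ID3D12Device> discreteDev, integratedDev;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip WARP

        // VendorId 0x8086 == Intel; treat that adapter as the iGPU here.
        ComPtr<ID3D12Device>& target =
            (desc.VendorId == 0x8086) ? integratedDev : discreteDev;
        if (!target)
            D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                              IID_PPV_ARGS(&target));
    }

    // With both devices created, the engine can submit its post-process pass
    // to a command queue on integratedDev and share the frame via a
    // cross-adapter heap.
    return 0;
}
[/code]

Whether it's actually a win presumably depends on the copy cost between the two adapters, since the shared frame has to go through a cross-adapter resource.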

While IPC gains are negligible coming from SB or newer, the I/O on Z170 boards will include lots of goodies like USB 3.1 Type-C, PCIe 3.0, M.2, maybe U.2, and of course lots of extra PCIe lanes. All good reasons to upgrade.
 
I am itching for an upgrade. Even though my i7 930 is still going strong, it will be relegated to NVR duty.

I know how you feel. Although I will be building a new rig at the end of the summer, I'm just waiting for Skylake to drop so it can get tested and reviewed to show what it can do, before deciding whether I'm going to go with an X99 platform (which I don't really need, but boy do I want one) or with a Skylake chip and the new Z170 chipset.
 
I finally upgraded my system after 5 years, from an i7 930 to an i7 5820K, and couldn't be happier with the improvements in performance and heat dissipation. I'll gladly use this for another 5 years.

These days, the performance improvements may not be as huge as what we saw 10 years ago, but then again the performance requirements aren't growing any faster either (unless you are doing media editing or anything else that is always performance hungry). So to me, there's no reason to complain. It's good to be able to spend our spare money on other stuff rather than having to upgrade our systems every couple of years.
 
So people with Sandy Bridge CPUs still have very little reason to upgrade. :rolleyes::rolleyes:

If anything, it's newer motherboards that are more interesting with USB 3.1 support, M.2 SSD connectors, and more PCI-E lanes for I/O.
 
4 cores, 8 threads. Meh. We need 6/8-core chips coming down in price. I don't want to go Xeon in my workstation.

Meh,

I've had a 6-core, 12-thread i7-3930K in my desktop since late 2011.

Only on a handful of occasions have I actually loaded all cores at the same time (if we exclude Prime95 for stability testing, or Cinebench-type testing for benchmarks).

There are only two reasons I don't buy the more mainstream LGA 1155, LGA 1150, and soon LGA 1151 parts, and it's not because of the cores.

1.) Thermal paste under the heat spreader. I don't want a CPU that's held back like this or that degrades over time as the paste dries up, and I certainly don't want to have to do any delidding.

2.) PCIe lanes. This, IMHO, is the real reason to buy the -E parts: being able to add lots of cards and not have to worry about your video card dropping out of 16x mode.

If I were doing it all over again (and it were 2011 over again) I'd go with the i7-3820, and its 4 cores, 8 threads, and 40 PCIe lanes.

Intel has figured out that people like me exist, though, and wants to milk us dry, so it has artificially limited the number of PCIe lanes on the current low-end Haswell-E parts; you have to buy at least the i7-5930K to get those precious 40 lanes...

Bastards.
 
My ideal desktop CPU?

4 cores is enough; I don't need any more than that, at least not today. I'll sacrifice the extra 2-4 cores in exchange for higher clocks on those 4. Also give me an aggressive downclock and power-reduction scheme, so that when not in use they stay very cool.

Ideally, I'd have a motherboard with 8 PCIe 16x slots, all fully electrically available at all times, so 128 PCIe lanes. I may not use them all, but at least I'd have the option.

If that is infeasible (it would probably result in a HUGE socket), give me 76 PCIe lanes: 3 slots that are permanently 16x, another 3 slots that are 8x (one of them able to go 16x if one of the three is unused), and a 4x slot. And maybe an 8th slot connected to the chipset using PCIe 2.0.

Make this chip, and I'll buy it.
 
I'm using my i5 2500K @ 4.0GHz, and while it does get loaded heavily when I play BF4 and GTA5, for all other tasks it's smooth and fast.

Hell, I'm using hardware from 2011 and I'm still able to game perfectly at 1440p (CrossFire 290s).

I'd like to upgrade... out of principle, but going from a 2500K @ 4.0GHz to a 6600K @ 4.0GHz... is it really worth blowing 700 or 800 bucks just for that?

Time will tell I suppose.
 
4 cores, 8 threads. Meh. We need 6/8-core chips coming down in price. I don't want to go Xeon in my workstation.

[Image: http://i.imgur.com/a0WObP5.png]


sad reality
 
[Image: http://i.imgur.com/a0WObP5.png]


sad reality

I don't think it's sad at all.

Why would they manufacture a product that would mostly go unused?

Except for a few corner cases of people who encode stuff, or render stuff a lot, consumers are barely taking advantage of the cores they have today. Why add more? I mean, it makes sense in enterprise (where many-core chips exist!) but for end users? Pointless.

AMD tried the whole "let's just throw more cores on it" approach, and we see how well it went for them :p
 
If the software dev side could efficiently scale performance with the number of cores, we would see 16-core CPUs. Unfortunately, that hasn't happened yet.
 
If the software dev side could efficiently scale performance with the number of cores, we would see 16-core CPUs. Unfortunately, that hasn't happened yet.
Chicken and egg.

If 99% of your user base only has four-core processors, why would you spend the time making your software run well on 16 cores?

"If you build it, they will come." - Abraham Lincoln, Declaration of Independence, 1942

Intel just needs to make them, then we'll see the software.
 
Zarathustra[H];1041728241 said:
Why would they manufacture a product that would mostly go unused?

Then why would they promise exponential growth in core counts during the multi-core hype? :rolleyes: Oh, marketing...
 
[Image: http://i.imgur.com/a0WObP5.png]

sad reality

That's as useful as the 10GHz graph for projected clock speeds, which is to say that it's not useful without context and without considering whether or not it's an abandoned strategy.

What Intel showed in the "tera-scale" multi-core strategy 9 or 10 years ago wasn't for desktop systems. It was for scalable applications. The hype around "Intel is going to make 60 core processors by X date" was created from something other than what Intel presented. It represents more of the very poor quality of pseudo-academic speculation and click bait posting by "news" web sites than anything Intel promised for the desktop.

From the same slide deck where all the incorrect speculation is based on, this is illustrative why we're not going to see >10 core processors for desktops in the near future:

[Image: http://i.imgur.com/IQtmFci.png]

Even the shallowest curve at the bottom is more optimistic about the parallelization (outside of SIMD) of general desktop code than what we actually have in 2015.

Tera scale computing led to Larrabee/Xeon Phi, and nothing suggests it was ever made to replace standard desktop processors. The problem with multi-core processors still exists as it has for years: outside of a few classes of problems, it does no good for the vast majority of desktop software. What's more important for most desktop software is making sure 1 to a few threads run as fast as possible. That's the opposite of just add cores and all those problems disappear.
 
Zarathustra[H];1041728241 said:
AMD tried the whole "let's just throw more cores on it" approach, and we see how well it went for them :p
And it would have worked too if it wasn't for you meddling kids and your Intel too! :p But yeah, it did work for AMD, just not in gaming. For productivity applications the 8350 does pretty well compared to Intel chips. But then again, who owns an 8350 for productivity? It's games we buy these for. But who knows, maybe DX12/Vulkan might change this?
 
Modern GPUs are pretty good at throttling down when not needed, though. I'd rather they put that silicon to better CPU use...or just eliminate it entirely.

I don't recall if it will do that (though I'd like it to shut off the internal GPU when it's not needed), but my understanding is that the iGPU will be used for some post-processing with DX12 games (apps?).

I wouldn't object to more cores, but honestly, it's pretty rare that an app uses them all. Probably the only app I have that might use the extra cores is Photoshop.
 
Zarathustra[H];1041728241 said:
Except for a few corner cases of people who encode stuff, or render stuff a lot, lazy cheap incompetent development companies are barely taking advantage of the cores they have today.

Fixed.
 
Zarathustra[H];1041728241 said:
Why would they manufacture a product that would mostly go unused?

Except for a few corner cases of people who encode stuff, or render stuff a lot, consumers are barely taking advantage of the cores they have today. Why add more? I mean, it makes sense in enterprise (where many-core chips exist!) but for end users? Pointless.

On my servers at the office, I can use all the cores (and memory) I can afford, especially with all the virtual servers I'm running.

When it comes to desktops, it's i3s and i5s. Most users never need more than 2 cores, so a 3.6GHz i3 works fine (especially since I'm upgrading them from 2.4GHz P4s). :)
 
And it would have worked too if it wasn't for you meddling kids and your Intel too! :p But yeah, it did work for AMD, just not in gaming. For productivity applications the 8350 does pretty well compared to Intel chips. But then again, who owns an 8350 for productivity? It's games we buy these for. But who knows, maybe DX12/Vulkan might change this?

If you mean almost on par with a low-end i5 from 5 years ago, then yes, the 8350 does pretty well vs Intel chips in multi-threaded situations.
 
That's as useful as the 10GHz graph for projected clock speeds, which is to say that it's not useful without context and without considering whether or not it's an abandoned strategy.

What Intel showed in the "tera-scale" multi-core strategy 9 or 10 years ago wasn't for desktop systems. It was for scalable applications. The hype around "Intel is going to make 60 core processors by X date" was created from something other than what Intel presented. It represents more of the very poor quality of pseudo-academic speculation and click bait posting by "news" web sites than anything Intel promised for the desktop.

From the same slide deck where all the incorrect speculation is based on, this is illustrative why we're not going to see >10 core processors for desktops in the near future:

[Image: http://i.imgur.com/IQtmFci.png]


Even the shallowest curve at the bottom is more optimistic about the parallelization (outside of SIMD) of general desktop code than what we actually have in 2015.

Tera scale computing led to Larrabee/Xeon Phi, and nothing suggests it was ever made to replace standard desktop processors. The problem with multi-core processors still exists as it has for years: outside of a few classes of problems, it does no good for the vast majority of desktop software. What's more important for most desktop software is making sure 1 to a few threads run as fast as possible. That's the opposite of just add cores and all those problems disappear.

You got it exactly right.

Chicken and egg.

If 99% of your user base only has four-core processors, why would you spend the time making your software run well on 16 cores?

"If you build it, they will come." - Abraham Lincoln, Declaration of Independence, 1942

Intel just needs to make them, then we'll see the software.


I don't think you understand. It's not just that developers have no reason to develop for multi-core and we simply need to give them one.

There are only a limited number of applications that can benefit from multi-core scaling, no matter how hard we try, without running into problems like thread-locking dependencies, etc.

Multi-core processing simply isn't as useful as many thought. Even with perfect programmers and infinite funding, the majority of tasks will simply never make good use of many-core platforms.

That being said, let's give many-core platforms credit where credit is due.

1.) Highly parallelized and repetitive operations benefit a lot from many cores, such as scientific modeling/simulation, rendering, encoding, etc.

2.) Running lots of different programs at the same time benefits greatly from many cores. Unfortunately, on the desktop we don't do this that much. True, we'll have many programs running at the same time, but mostly our focus is on one at a time, and the ones in the background aren't doing that much. You might have a render/encode job running in the background while doing something else, but then, see #1.

The whole "AMD is ahead of the curve with many-core CPUs for the desktop, and just wait until the software catches up!" line we heard so much in the Phenom II X6 and later Bulldozer days really stems from a lack of understanding of how multiprocessing works.

For the vast majority of tasks there is nothing you can do to improve how they work with many cores. For game-type stuff you can spread different tasks across different cores (one core gets the DirectX call CPU load, another gets physics, a third gets sound processing, a fourth gets game-engine stuff) to try to improve things, but even then you start running into diminishing returns rather quickly.

Which leads us back to many cores vs fewer strong cores.

I would pick fewer strong cores 100% of the time. Why? Because fewer strong cores will work to their potential in just about all applications, whereas many weaker cores need special conditions to shine.

I'd rather have my extra performance all the time, rather than only those rare moments when many core multithreading works.

So, from that perspective, assuming we can't overclock and have to stick with stock clocks, I'd argue that a $339 i7-4790K with 4 cores, 8 threads, a 4GHz base clock, and a 4.4GHz turbo will be a better CPU for most people (even high-end enthusiasts) than a $999 i7-5960X with 8 cores, 16 threads, a 3.0GHz base clock, and a 3.5GHz turbo.
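To put rough numbers on that trade-off, here's a quick Amdahl's-law sketch. It's my own back-of-the-envelope math, not anything from the leaked benchmarks: it ignores Hyper-Threading, turbo bins, and memory bandwidth, and just scales relative throughput as clock x speedup for an assumed parallel fraction p.

[code]
#include <cstdio>

// Amdahl's law: speedup(p, n) = 1 / ((1 - p) + p / n),
// where p is the fraction of the workload that can run in parallel
// and n is the number of cores.
double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Hypothetical comparison using the stock turbo clocks quoted above:
    // i7-4790K (4 cores @ 4.4GHz) vs i7-5960X (8 cores @ 3.5GHz).
    const double parallel_fractions[] = {0.25, 0.50, 0.75, 0.95};
    for (double p : parallel_fractions) {
        double quad = 4.4 * amdahl(p, 4);
        double octo = 3.5 * amdahl(p, 8);
        std::printf("p = %.2f  ->  4c@4.4: %5.2f   8c@3.5: %5.2f   winner: %s\n",
                    p, quad, octo, quad > octo ? "4790K" : "5960X");
    }
    return 0;
}
[/code]

Even in that toy model, the higher-clocked quad stays ahead until roughly three-quarters of the work parallelizes, which is basically the "fewer strong cores" argument in numbers.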

If only the LGA 1150 chips came with more PCIe lanes, and didn't have paste under the lid...
 
Zarathustra[H];1041728410 said:
Which leads us back to many cores vs fewer strong cores.

I would pick fewer strong cores 100% of the time. Why? Because fewer strong cores will work to their potential in just about all applications, whereas many weaker cores need special conditions to shine.

Definitely. It's just irritating that Intel has all these huge core count chips with crappy clock speeds and they are 'mostly' all locked. A lot of us can benefit from these extra cores but aren't willing to sacrifice clock speed.
 
Let's say I had access to a laptop version of Skylake.

If I did have said access and ran some benchmarks, I think I would've found similar 5-10% IPC improvements over Haswell.

Also, if I compared it to my now-ancient i7-920, clock for clock at 2.8GHz, it may have shown a 60% improvement.

But who knows if Skylake engineering samples are out there.
 
Also, if I compared it to my now-ancient i7-920, clock for clock at 2.8GHz, it may have shown a 60% improvement.

But who knows if Skylake engineering samples are out there.

I'd say yes. If it's real, it looks like a ~10% improvement over the 4790K, and based on my 4790K upgrade from an i7 860, 60% sounds right on the money.
 
I'd say yes. If it's real, it looks like a ~10% improvement over the 4790K, and based on my 4790K upgrade from an i7 860, 60% sounds right on the money.

The question is what exactly the tested "Intel Core i7-0000" ES chip really is.
 
My observation is that non-gaming compute in 2015 is done on GPUs. So yes, highly parallel multi-core computing is a thing; it just isn't a CPU thing.

If you follow machine learning at all, you'll notice the best way to do deep learning is with a GPU. We leave the more sophisticated instructions to CPUs.
 
Zarathustra[H];1041728648 said:
Why isn't this in your sig? Made me curious.

Makes my 24x screenshot from my server look a little anemic :p

You can only have so many lines and characters in your sig, and this didn't fit. ;)
 