Intel Coffee Lake Slides Leaked

FrgMstr

There has been no shortage of leaks when it comes to Intel's upcoming Coffee Lake CPUs. (Drip, drip, drip.) And today we have another leak, courtesy of PCEVA.com.cn.

We see below that the Coffee Lake enthusiast Z370 platform should be upon us soon, with a total of 24 PCIe 3.0 lanes from the chipset plus another 16 from the CPU dedicated to the GPU. Certainly this will open up some lanes for storage and other devices that could benefit from more PCIe bandwidth. As we have seen before, we can expect 6-core and 4-core Coffee Lake variants.

Check out the slides.
 
All these features look fully brewed, but heat issues with Kaby Lake followed by Intel's great advice not to overclock make me want to wait for test results.
 
Well, yeah, the same PCIe lanes as Z270, but a major bottleneck in the CPU/chipset interconnect. Intel's DMI is no better than four lanes of PCIe 3.0, so NVMe RAID is meaningless, and so are all those PCIe lanes. The only ones that count are the ones from the CPU, and Intel is very lacking there.
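For anyone who wants the arithmetic behind that claim, here is a rough back-of-envelope sketch, assuming DMI 3.0 is electrically equivalent to a x4 PCIe 3.0 link (8 GT/s per lane with 128b/130b encoding); real-world throughput will be a bit lower:

```python
# Rough ceiling of the DMI 3.0 link between the CPU and the chipset.
# Assumption: DMI 3.0 behaves like a x4 PCIe 3.0 link (8 GT/s, 128b/130b encoding).

GT_PER_S = 8.0           # gigatransfers per second, per lane
ENCODING = 128 / 130     # 128b/130b line-coding efficiency
LANES = 4                # DMI 3.0 is four lanes wide

gbit_per_lane = GT_PER_S * ENCODING           # ~7.88 Gbit/s usable per lane
dmi_ceiling_gb_s = gbit_per_lane * LANES / 8  # ~3.94 GB/s total

print(f"Usable per lane: {gbit_per_lane:.2f} Gbit/s")
print(f"DMI 3.0 ceiling: {dmi_ceiling_gb_s:.2f} GB/s, shared by everything behind the PCH")
```

Roughly 3.9 GB/s, shared by all 24 chipset lanes plus the SATA ports, USB, and the NIC.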
 
Pass, Ryzen 2, here I come!

Not feeding them anymore.
 
To expand on what two other commenters above me have indicated, the user is still bottlenecked by the DMI 3.0 interface, which in addition to somehow managing 24 PCIe lanes is also managing six SATA ports (worth three lanes), all the USB ports, AND the LAN PHY ethernet connection.

No amount of scheduling and switching between drives and devices vying for bandwidth will accommodate all of it. It would be like taking the regular 16 lanes from the CPU, putting them on a motherboard with a pinout of well over 100 lanes, and expecting PLX chips to automagically make it work as if ALL the lanes led directly to the central processor.
 

During benchmarks and stress tests, I'd agree. But honestly 99% of users will not notice a difference, since you're very, very rarely going to stress all of those devices at the same time.
 
Through the power of Google-fu:

Yes, there is a lake named "Hard Lake."
And ironically, there is a MacIntosh Lake in Canada too. I wonder if that or this would raise more eyebrows... :)

But more on topic: I saw some slides on the Anandtech forums which showed that the i7-8700K will have the capabilities of 'IA Overclock' and 'DDR Overclock', while the i7-8700 will not.
So, will the non-K versions only run at stock clocks? Without 'DDR Overclock', can users set the RAM to an XMP profile, or only to a JEDEC standard profile?

A large change in performance is not really obvious to me from these slides. I really want to move on from Sandy Bridge, but I don't want a pricey stopgap.
 
A bigger issue is that Intel and AMD have overlapping chipset names... both are using 370 at the same time.
 
And ironically, there is a MacIntosh Lake
They would use that one if the rest of us did the right thing and never bought anything from Intel, as payback for overpriced CPUs that don't give us more cores.

Since only Apple would be using their CPUs at that point.
 
I wanna see this lake name on Intel product slides.....

https://en.wikipedia.org/wiki/Lake_Chaubunagungamaug

Also known as Lake Chargoggagoggmanchauggagoggchaubunagungamaugg


 
They love to make code names based off bodies of water.

Anyone know if there's a Hard Lake somewhere?

Not sure, but I've been waiting for them to use Crater Lake on a really hot running chip. :D
 
During benchmarks and stress tests, I'd agree. But honestly 99% of users will not notice a difference, since you're very, very rarely going to stress all of those devices at the same time.

I think playing a high-intensity twitch game, plus streaming it, while archiving to some RAID 0 solid-state M.2 drives running solely through the PCH would fill a significant portion of the DMI bandwidth, especially if the computer was performing other background tasks at the time that actively involved a few USB ports or other drives.
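As a rough sketch of how that scenario stacks up against the ~3.9 GB/s DMI 3.0 ceiling (all of the per-device figures below are illustrative assumptions, not measurements):

```python
# Hypothetical simultaneous demand behind the PCH vs. the DMI 3.0 ceiling.
# Every figure here is an illustrative assumption for the scenario described above.

DMI_CEILING_GB_S = 3.94  # ~x4 PCIe 3.0 worth of bandwidth

demand_gb_s = {
    "RAID 0 M.2 archive writes":           2.5,   # assumed sustained sequential write
    "stream upload + game traffic (NIC)":  0.01,
    "USB capture device / external drive": 0.4,
    "misc. background disk and USB":       0.2,
}

total = sum(demand_gb_s.values())
print(f"Total demand: {total:.2f} GB/s of a {DMI_CEILING_GB_S:.2f} GB/s DMI budget")
print(f"Utilisation: {total / DMI_CEILING_GB_S:.0%}")
```

Under these assumptions it lands around 80% of the link, so the archive writes would be the first thing to slow down if anything else spiked.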
 

Yep...it would. And how often will those events happen all at once on a given computer?

And would you notice if your archiving activity took a 50% performance hit while you're busy playing and streaming a game?
 
Pretty sure the Wi-Fi is included to reduce the number of people sidestepping AMT by using external Wi-Fi NICs. But it's still pretty sweet otherwise! Hope the AC units are 3x3 MIMO.
 
Ask all the Twitch streamers out there?

The point is, yes, it MIGHT happen, no, the user probably won't notice. It's also unlikely to ever be an issue in 99% of the CPUs they sell.

If it DOES bother you, and your application can't handle this restriction, then you need to go with X299/i9 or a TR-platform with more lanes.
 

I have in the past (X58 / X79) and I will in the future with TR4 or its 2018 successor. I get annoyed with the marketing around how there is some arbitrarily large number of PCIe lanes tied to, and limited by, the chipset, when it's obviously not feasible to use anything close to all of them at the same time. It was bad enough with some of Intel's marketing over how many people thought the earlier Z series actually had better bandwidth than the X series because of all the "extra" chipset lanes.
 
The point is, yes, it MIGHT happen, no, the user probably won't notice. It's also unlikely to ever be an issue in 99% of the CPUs they sell.

If it DOES bother you, and your application can't handle this restriction, then you need to go with X299/i9 or a TR-platform with more lanes.

If it does bother you... after you've spent how much to figure this out? o_O
 
I have in the past (X58 / X79) and I will in the future with TR4 or its 2018 successor. I get annoyed with the marketing around how there is some arbitrarily large number of PCIe lanes tied to, and limited by, the chipset, when it's obviously not feasible to use anything close to all of them at the same time. It was bad enough with some of Intel's marketing over how many people thought the earlier Z series actually had better bandwidth than the X series because of all the "extra" chipset lanes.

In MOST use-cases, those extra lanes can be multiplexed with almost zero effect on "desktop" operation. If your 2 PCIe SSDs were sharing lanes, you would likely NEVER notice unless you were benchmarking. Same if the SSD was being shared with a 10Gb NIC, or a USB 3.1 port.

Intel is offering up more lanes than before, even if they are a little "gimped". This is a good thing really, since all those low-usage devices can be supported, while still giving full speed to the GFX card. Now we'd all PREFER the TR option, where you just get 64 full lanes for everything. But that solution is going to cost a heck of a lot of money, and 99% of people would never use the feature, so why pay for it?
 
If it does bother you... after you've spent how much to figure this out? o_O

If it does bother you, you should have done your homework before trying to do that many high-I/O tasks simultaneously on a standard desktop platform!
 
In MOST use-cases, those extra lanes can be multiplexed with almost zero effect on "desktop" operation. If your 2 PCIe SSDs were sharing lanes, you would likely NEVER notice unless you were benchmarking. Same if the SSD was being shared with a 10Gb NIC, or a USB 3.1 port.

Intel is offering up more lanes than before, even if they are a little "gimped". This is a good thing really, since all those low-usage devices can be supported, while still giving full speed to the GFX card. Now we'd all PREFER the TR option, where you just get 64 full lanes for everything. But that solution is going to cost a heck of a lot of money, and 99% of people would never use the feature, so why pay for it?

This solution is a BS one. It's meant to inflate the PCIe lane number to compete. Disingenuous.
 
If it does bother you, you should have done your homework before trying to do that many high-I/O tasks simultaneously on a standard desktop platform!

Which begs the question of how the F do you know it won't affect anyone?
 
This solution is a BS one. It's meant to inflate the PCIe lane number to compete. Disingenuous.

I disagree, it's meant to let people put more M.2 drives and USB 3.1 ports on a MB without adding a huge amount of cost. It's been done for years on other platforms. It's not as high-powered as a FULL set of PCIe lanes, but it also doesn't mean a $200-300 jump in price.

And WHO are they competing with exactly? Ryzen? That's even fewer lanes than Skylake. TR? Why would they compete with a HEDT platform when they have their own HEDT platform?

Which begs the question of how the F do you know it won't affect anyone?

Because I know how often a typical USB 3.1 port is actually used at max bandwidth. Or how often that PCIe SSD in your desktop is running at 100% of the x4 lanes it owns. Or the average activity on a desktop's 10Gb LAN connection. Over a typical 8-hour day those items are all pretty much IDLE 98% of the time. Which means there's a lot of idle bandwidth available for multiplexing.
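As a toy illustration of why those low duty cycles make multiplexing workable (the 2% busy figure comes from the post above; treating the devices as independent is an assumption):

```python
# Toy model: if each of N devices behind the PCH is busy only a fraction p of the time,
# and they act independently, how often are k or more of them busy at once?
from math import comb

def prob_at_least(n: int, p: float, k: int) -> float:
    """P(at least k of n independent devices are active at the same instant)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_devices = 6     # e.g. two M.2 drives, a couple of USB devices, SATA, the NIC
busy_frac = 0.02  # "idle 98% of the time", per the post above

for k in (2, 3):
    print(f"P({k}+ devices busy at once): {prob_at_least(n_devices, busy_frac, k):.3%}")
```

Under those assumptions, two or more devices contend for the link well under 1% of the time, which is the argument for sharing one DMI link rather than wiring every device straight to the CPU.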
 

However, it would not cost an additional $200 to $300 if Intel didn't inflate the costs so much. AMD is proving that, since every single Ryzen die has 32 PCIe lanes, as shown by the Threadripper and Epyc models. It's not even 1 extra mm^2 of die size for 4 PCIe lanes. On AMD's die, it is about 7mm^2 for 32 lanes. On Intel's Skylake die, it is about 3mm^2 for 16 lanes. It is not that hard to put more PCIe lanes on a CPU. Intel has chosen to castrate their HEDT CPUs just to force people to pay a lot more for a little more PCIe connectivity. If they weren't bending us over so badly, we wouldn't have this misconception that it costs so much more.
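Taking the die-area figures quoted above at face value (they are rough estimates from this post, not official Intel or AMD numbers), the per-lane cost works out like this:

```python
# Per-lane silicon cost, using the rough die-area estimates quoted above.
# These areas are the poster's ballpark figures, not official Intel/AMD data.

amd_area_mm2, amd_lanes = 7.0, 32      # Ryzen die: ~7 mm^2 for 32 PCIe lanes
intel_area_mm2, intel_lanes = 3.0, 16  # Skylake die: ~3 mm^2 for 16 PCIe lanes

print(f"AMD:   {amd_area_mm2 / amd_lanes:.2f} mm^2 per lane")
print(f"Intel: {intel_area_mm2 / intel_lanes:.2f} mm^2 per lane")
print(f"Four more lanes at Intel's density: ~{4 * intel_area_mm2 / intel_lanes:.2f} mm^2")
```

About 0.2 mm^2 per lane either way, which is where the "not even 1 extra mm^2 for 4 more lanes" figure comes from.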
 
Then by all means buy the Ryzen chip. Had AMD actually had a competitive product for the last 6 years or so, this wouldn't be an issue now, would it? Of course one company has been profitable, owns fabs, and has been making strides in process (and processor) technology this entire time while another company has been...not.

Intel is a business, not a charity. Why would they sell a product with any more features, or for any less, than they need to? I don't see Chevy selling Corvettes for $40k either, or Ford selling Raptors for $30k. Why should Intel sell CPUs based on your pricing model when they can sell for what the market will bear? The bigger question you should be asking yourself is this: why are Ryzen/Threadripper chips selling for so cheap? Answer: because they appear to be great deals to people who only look at the initial cost of something and not the long-term ROI. So far, the Ryzen chips have been 'competitive' with 3-year-old technology by throwing more cores at the problem. As for the Threadripper and Epyc CPUs, time will tell how well they actually perform, but from an ROI standpoint, if I have a job measured in hundreds or thousands of dollars per hour, the cost difference between the AMD and Intel CPUs is meaningless. For a fanboi, a $999 Threadripper is clearly a better deal than a $1999 i9, but for someone doing renders professionally, if the i9 were 10% faster, that $1000 difference is a rounding error. By the same token, if the AMD chip were faster, then the $1000 savings is STILL a rounding error.

Besides, Intel doesn't NEED to sell workstation class performance to a non-workstation crowd. AMD simply doesn't have the funds or the staff to segment their own products accordingly. (Seriously, how many people are 'actually' twitch streamers that are followed and MAKE A LIVING with their streaming? Probably about the same percentage that actually take their "trail rated" SUV onto a dirt road, let alone off road.)

Hey, it's great that AMD finally has a competitive product again, and it's put the spurs to Intel, but unless you're in the chip fab business, I'd suggest you stay out of speculating about just how hard or expensive it would be to add PCIe lanes to a CPU for edge use cases that product isn't being sold for anyway.
 
However, it would not cost an additional $200 to $300 if Intel didn't inflate the costs so much. AMD is proving that, since every single Ryzen die has 32 PCIe lanes, as shown by the Threadripper and Epyc models. It's not even 1 extra mm^2 of die size for 4 PCIe lanes. On AMD's die, it is about 7mm^2 for 32 lanes. On Intel's Skylake die, it is about 3mm^2 for 16 lanes. It is not that hard to put more PCIe lanes on a CPU. Intel has chosen to castrate their HEDT CPUs just to force people to pay a lot more for a little more PCIe connectivity. If they weren't bending us over so badly, we wouldn't have this misconception that it costs so much more.

If Ryzen HAD 32 full PCIe lanes per die, why doesn't the Ryzen 1600/1700/1800 series have a full 32 lanes available to the user?

It's not a question of die size; it's more lands, more pins, more motherboard traces, more control logic, more testing and validation that needs to be done. All for what? To meet a need that doesn't ACTUALLY exist. A very small number of customers NEED that many lanes. A slightly larger group just WANTS them, regardless of actual need (i.e., they want the epeen of having more lanes but will never use them). The VAST majority of customers don't even really understand PCIe lanes and just want to play games and use their computer. So why should they do something that will raise costs for EVERYONE, just to satisfy the smallest subset of their customers?

Intel isn't "bending you over"...they're basically selling a luxury product with HEDT. Same with TR, it's a WANT, not a NEED. So they charge what the market will bear, which has proven to be a LOT.
 