Intel DMI 3.0 Bottlenecks?

How is Intel putting an extra 24 PCIe 3.0 lanes into X299's PCH without massively bottlenecking nearly every lane, not to mention jamming up every other function of the PCH? The PCH can only talk to the CPU across the DMI 3.0 x4 interface, which runs at 8 GT/s per lane for roughly 3.93 GB/s total, while the 24 PCIe 3.0 lanes would by themselves need roughly 24 GB/s, never mind the other functions. The last-gen CPUs with 24 lanes had 16 to graphics, 4 to the DMI, and 4 for everything else.
Are they really selling us 24 PCIe lanes shoved down a 4-lane pipe, and then even counting those 4 as lanes (which gave us 40 total last gen)? Isn't that an extreme bandwidth shortfall?
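The arithmetic behind that mismatch is easy to sanity-check. A minimal sketch, using the standard PCIe 3.0 figures (8 GT/s per lane, 128b/130b encoding):

```python
# Quick check of the bandwidth numbers in the post.
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding,
# so usable bandwidth is about 0.985 GB/s per lane, per direction.
GT_PER_LANE = 8.0        # GT/s, PCIe 3.0 line rate
ENCODING = 128 / 130     # 128b/130b encoding efficiency

def pcie3_bandwidth_gbps(lanes):
    """Usable one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * GT_PER_LANE * ENCODING / 8  # /8 converts bits to bytes

dmi = pcie3_bandwidth_gbps(4)         # DMI 3.0 is electrically a PCIe 3.0 x4 link
pch_peak = pcie3_bandwidth_gbps(24)   # 24 PCH downstream lanes, all saturated

print(f"DMI 3.0 uplink:      {dmi:.2f} GB/s")       # 3.94 GB/s
print(f"24 PCH lanes (peak): {pch_peak:.2f} GB/s")  # 23.63 GB/s
print(f"Oversubscription:    {pch_peak / dmi:.0f}x")  # 6x
```

So the worst-case oversubscription is 6:1, which is exactly the concern raised here.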
 
It's the same reason you have certain slots for multi-GPU setups - they bypass the PCH.

Here is the 6700K:

[Image: Proc_Schematic - 6700K processor block diagram]




Here is X99 as used in the HEDT for now:

[Image: X99 chipset block diagram]



Here is the upcoming Skylake-X HEDT and Kaby Lake-X, which improves even further on X99:
[Image: Intel X299 chipset block diagram]


Cheers
 
44 lanes on the CPU, 24 on the PCH, all PCIe 3.0? Now we're talking!

That should be plenty to allow for a PCIe x4 capture card, a quad-channel USB 3.0 controller card (or maybe a Thunderbolt 3 one?), and a few NVMe SSDs alongside the sound card on the PCH alone, which would make 16 CPU lanes manageable enough.

However, I don't see the point of the Kaby Lake-X offerings on HEDT over Skylake-X if they're just going to be limited to 4 cores and 16 lanes like their LGA1151 mainstream counterparts. Why not just make 'em all Skylake-X?
 
You misunderstood. The top image says it has 16 lanes for the GPU that bypass the PCH. The CPU has 20 lanes total; the other 4 carry the DMI 3.0. And look off to the left of the PCH: it adds 20 more lanes from the PCH itself, giving 20 from the CPU plus 20 from the PCH. Those 20 PCH lanes cannot interface with the CPU directly and must go through the DMI 3.0, which is just a gimmick name for the CPU's other 4 PCIe lanes. As far as I can tell, those 20 PCH lanes have to share the bandwidth of the 4 PCIe lanes that actually reach the CPU (the DMI 3.0). This is a shameful ploy where Intel takes advantage of consumers who aren't well read or well versed. Damn, I used to be an Intel fanboy until now. That top pic clearly tells you that you only have 20 full-bandwidth lanes masquerading as 40, if you do some research.
 
Huh, that is peculiar. Why aren't there more DMI lanes for the PCH? I can't see Intel trying to pull a fast one with something akin to PLX chips in the PCH.

That said, I don't see where you're only seeing 20 lanes for the CPU. Skylake-X clearly says up to 44, plus another 4 for DMI 3.0. 48 total. You might have to pay out the nose for it if Haswell-E and Broadwell-E are any indication, but the option is there.

Kaby Lake-X just looks plain gimped, as if they took the LGA1151 chips and dropped them onto a platform they can't even fully utilize - not enough PCIe lanes, no quad-channel memory interface, etc. Intel might as well not have even bothered.
 
Just to add.
The X99 HEDT is 40 lanes, giving, as they say, a 2x16 and 1x8 configuration.
X299 Skylake-X HEDT increases that to 44 lanes (still not enough for 3x16, though), but this is a nice amount for a high-performance GPU or two along with multiple NVMe devices, possibly including Intel's Optane.
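Those lane budgets can be sketched as a quick tally. The totals below are the ones quoted in this thread, not an official Intel table:

```python
# Rough CPU lane-budget sketch for the platforms discussed above.
# Totals are the figures quoted in this thread.
CPU_LANES = {
    "6700K (mainstream)": 16,
    "X99 Broadwell-E":    40,
    "X299 Skylake-X":     44,
    "X299 Kaby Lake-X":   16,
}

def fits(platform, *lane_requests):
    """True if the requested link widths fit in the platform's CPU lane budget."""
    return sum(lane_requests) <= CPU_LANES[platform]

print(fits("X299 Skylake-X", 16, 16, 16))    # 3x16 GPUs: False (48 > 44)
print(fits("X299 Skylake-X", 16, 16, 8, 4))  # 2x16 + x8 + NVMe x4: True (exactly 44)
```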

The yellow pertains specifically to the Kaby Lake-X model, and you are right: it really is far from being a true HEDT CPU. Its architecture is in reality the consumer mainstream one, as seen with the 6700K, and it is structured much more weakly for PCIe.

Cheers
 
TBH Ryzen is not too different with their consumer product, and in some ways is even more restrictive.
For example, with Ryzen you are limited to just one x4 PCIe NVMe SSD.
On consumer Kaby Lake, while it is more shared/contended, you can still install a few more at x4.

Here is the Kaby Lake consumer structure. And yeah, nothing trumps server/HEDT, whether Intel's or, I bet, AMD's prosumer offering that is coming out in the future.

[Image: Kaby Lake consumer platform block diagram]



And the detailed breakdown of those PCH lanes (it is actually up to 30, but some are reserved and so not normally quoted):

[Image: PCH HSIO lane breakdown table]


Cheers
 

Because the expectation is that the PCIe lanes on the PCH will not all be in use at the same time. The PCH doesn't act much differently from a switching hub. They're called PCIe lanes to give motherboard manufacturers the flexibility to choose what they want to have on the motherboard.

The only people being "tricked" are the ones who expect to get more for less, and this information is posted on Intel's website (and review sites) purely for reference. Everyone else will go by the motherboard specs and not be tricked in the slightest.
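That "switching hub" point can be illustrated with a toy model: PCH devices rarely saturate their links simultaneously, so the x4 uplink is usually enough. The duty cycles below are made-up illustrative numbers, not measurements:

```python
# Toy contention model for the PCH's shared DMI uplink.
# Peak rates are rough device figures; duty cycles are invented for illustration.
DMI_GBPS = 3.94  # usable PCIe 3.0 x4 bandwidth

devices = {                 # name: (peak GB/s, assumed fraction of time active)
    "NVMe SSD":    (3.2,   0.05),
    "SATA SSD":    (0.55,  0.10),
    "Gigabit NIC": (0.125, 0.30),
    "USB 3.0":     (0.5,   0.10),
}

peak = sum(bw for bw, _ in devices.values())
typical = sum(bw * duty for bw, duty in devices.values())

print(f"Worst-case concurrent demand: {peak:.2f} GB/s")  # exceeds the DMI uplink
print(f"Typical average demand:       {typical:.2f} GB/s")  # well under the uplink
```

Under these assumptions the worst case does blow past the uplink, but the typical average demand is a small fraction of it, which is the bet Intel is making.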
 