Question About PCIe Lanes On CPUs

DarkStar_WNY

2[H]4U
Joined
Dec 27, 2006
Messages
2,363
With PCIe lanes being at a premium in current systems, especially with the advent of M.2 NVMe drives, which Intel was among the first to develop, produce and promote, why hasn't Intel increased the number of PCIe lanes in their mainstream processors? Hell, for that matter, a few more on the i9s couldn't hurt, could they?

I mean, is it mostly that doing so would increase the complexity and cost of their CPUs, or are they simply refusing to add them to their mainstream CPUs so as to allow their Extreme Edition CPUs to maintain their numerical advantage in PCIe lanes?
 
They have CPUs with lots of lanes.

As many as you want, really...

at various upsell pricing!

:)
 
On the mainstream platform it's not really needed, unless you are building some kind of storage-benchmarking setup. The current layout of 16 direct lanes plus 24 chipset lanes shared behind an x4 link is plenty, and very flexible.

You could even argue the current setup is overkill on the flexible part, and that something like 24-28 direct lanes with no chipset extension would be enough. But the OEMs etc. would have to accept that.

On the HEDT side it's due to both upselling and binning. The reduced chips have an entire x16 controller disabled, mostly as an artificial limitation, though in a few cases because that part of the silicon is actually broken.
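If you want to see what a given card actually negotiated, the Linux kernel exposes it in sysfs. Here's a rough Python sketch (it just reads the standard current_link_width / max_link_width attributes; note that devices hanging off the chipset still report their own link width, not the shared x4 uplink):

import glob
import os

# Rough sketch: print each PCI device's negotiated vs. maximum PCIe link width.
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "current_link_width")) as f:
            cur = f.read().strip()
        with open(os.path.join(dev, "max_link_width")) as f:
            cap = f.read().strip()
    except OSError:
        continue  # attribute not exposed: device has no PCIe link of its own
    print(f"{os.path.basename(dev)}: running x{cur}, capable of x{cap}")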
 
It just seems that the issue of shared lanes is being raised more and more in discussions of builds, be it in forums or videos. It used to be limited to SLI builds and/or builds using a lot of add-in cards, but it's coming up more frequently now because more and more people want to include M.2 SSDs in their rigs. For example, for the build I am starting to spec out right now, I am set on a 1TB Samsung 960 Pro (I might step that up to 2TB, not sure) for the OS and programs, and a 1TB 960 EVO for video projects, with a plan to add 850 Pro/EVO(s) as more storage is needed (I will be setting up and using a NAS for long-term bulk data storage).

I don't know how much of a difference, if any, the shared lanes make. For example, I've seen testing showing that running x8/x8 on a two-card SLI setup doesn't make any difference in performance, but other tests show that it does, so I'm not sure which to believe. I would think, though, that if it could be done fairly easily and relatively cheaply, Intel would add lanes so they could say "you can now run our enthusiast CPUs in dual-GPU setups at x16 & x16 and add a couple of M.2 SSDs without needing to share PCIe lanes."
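The raw numbers are at least easy to work out. Assuming PCIe 3.0 moves roughly 985 MB/s per lane per direction (the usual figure after 128b/130b encoding overhead), the on-paper gap looks like this:

# Back-of-the-envelope PCIe 3.0 bandwidth per link width.
PER_LANE_MBS = 985  # assumed usable rate per lane, per direction

for width in (8, 16):
    print(f"x{width}: ~{width * PER_LANE_MBS / 1000:.1f} GB/s per direction")
# x8:  ~7.9 GB/s
# x16: ~15.8 GB/s

Whether a single GPU actually saturates that in games is exactly the part the reviews disagree on.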

Archaea, I understand the i9s have more lanes; I was just curious why more have not been added to the i7s as well, even if only the top ones, as that alone could be a selling point for some people.
 
The "why," in short, comes down to cost and flexibility. And we are not just talking about Intel here, but also the OEMs.
 
If you want more PCIe lanes from the CPU, you also need a new socket with more connections available. That is a limiting factor today, and kind of the point of the HEDT platform.
 
If you want more PCIe lanes from the CPU, you also need a new socket with more connections available. That is a limiting factor today, and kind of the point of the HEDT platform.
You don't.

On Intel the CPU controls the PCIe lanes.

E.g., the 6800K has 28 lanes,
the 6850K has 40 lanes.
Same motherboard.
Sixth-gen Intel.
(Comes to mind because I owned both.)


With AMD the socket and chipset control the number of lanes.
 
The OP was talking about mainstream processors; in the Intel world that's LGA 1151 CPUs, which would require a socket change to LGA 2066.
 
You don't.

On Intel the CPU controls the PCIe lanes.

E.g., the 6800K has 28 lanes,
the 6850K has 40 lanes.
Same motherboard.
Sixth-gen Intel.
(Comes to mind because I owned both.)


With AMD the socket and chipset control the number of lanes.

It's the same concept on AMD: CPU lanes and chipset lanes.

On laptops and AMD APUs the number is even lower, usually 8 + something, unless it's an H- or R-based Intel.
 
You don't.

On Intel the CPU controls the PCIe lanes.

E.g., the 6800K has 28 lanes,
the 6850K has 40 lanes.
Same motherboard.
Sixth-gen Intel.
(Comes to mind because I owned both.)


With AMD the socket and chipset control the number of lanes.
You're wrong!

We're talking about the mainstream platform, which today means socket 1151. And that does not have any spare pins in the socket for more PCIe lanes.

Your HEDT platform already has enough spare pins in the socket to support more PCIe lanes, and that was my point in the first place.

So, big trophy to you for missing the whole point of the thread.
 
You're wrong!

We're talking about the mainstream platform, which today means socket 1151. And that does not have any spare pins in the socket for more PCIe lanes.

Your HEDT platform already has enough spare pins in the socket to support more PCIe lanes, and that was my point in the first place.

So, big trophy to you for missing the whole point of the thread.
So you are saying the mainstream platform doesn't include X58 and X79 and X99? And X299? Because I disagree.
 
With PCIe lanes being at a premium in current systems, especially with the advent of M.2 NVMe drives, which Intel was among the first to develop, produce and promote, why hasn't Intel increased the number of PCIe lanes in their mainstream processors? Hell, for that matter, a few more on the i9s couldn't hurt, could they?

I mean, is it mostly that doing so would increase the complexity and cost of their CPUs, or are they simply refusing to add them to their mainstream CPUs so as to allow their Extreme Edition CPUs to maintain their numerical advantage in PCIe lanes?

PCIe lanes at a premium? On the desktop this is hardly the case, especially with multi-GPU systems falling out of favor among enthusiasts. If your needs grow beyond what the mainstream platform offers, then you can step up to X299 and a 44-lane processor.

You don't.

On Intel the CPU controls the PCIe lanes.

E.g., the 6800K has 28 lanes,
the 6850K has 40 lanes.
Same motherboard.
Sixth-gen Intel.
(Comes to mind because I owned both.)


With AMD the socket and chipset control the number of lanes.

This is true on Intel systems as well. You get up to 16 CPU lanes on the mainstream platform, with Z270 and Z370 supplying 24 additional PCIe lanes. On the HEDT segment you can get up to 44 PCIe lanes from the CPU and another 24 via the chipset. The breakdown between Intel and AMD is different, as the latter puts more on the CPU and less on the chipset, but both offer upwards of 68 PCIe lanes.
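Just adding up the figures above (actual counts vary a bit by CPU and board, so treat this as a rough tally):

# Rough tally of CPU + chipset lanes from the figures above.
platforms = {
    "Intel mainstream (Z270/Z370)": (16, 24),
    "Intel HEDT (X299, 44-lane CPU)": (44, 24),
}
for name, (cpu_lanes, pch_lanes) in platforms.items():
    print(f"{name}: {cpu_lanes} + {pch_lanes} = {cpu_lanes + pch_lanes} lanes")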
 
PCIe lanes at a premium? On the desktop this is hardly the case, especially with multi-GPU systems falling out of favor among enthusiasts. If your needs grow beyond what the mainstream platform offers, then you can step up to X299 and a 44 lane processor.



This is true on Intel systems as well. You get up to 16 CPU lanes on the mainstream platform, with Z270 and Z370 supplying 24 additional PCIe lanes. On the HEDT segment you can get up to 44 PCIe lanes from the CPU and another 24 via the chipset. The breakdown between Intel and AMD is different, as the latter puts more on the CPU and less on the chipset, but both offer upwards of 68 PCIe lanes.

So why is it that such pains seem to be taken to avoid sharing PCIe lanes, and why is the need to share lanes on various sockets and configurations always stated as a negative? That's what made me curious.
 
So why is it that such pains seem to be taken to avoid sharing PCIe lanes, and why is the need to share lanes on various sockets and configurations always stated as a negative? That's what made me curious.
It has more to do with the maximum total bandwidth of the chipset (PCH). On newer Intel platforms, the PCH has a maximum total throughput of only about 7.9 GB/s, so no more than roughly eight PCIe 3.0 lanes' worth of traffic can be in flight simultaneously (not counting any devices that use the CPU's PCIe hub). This includes onboard devices as well as expansion cards.
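As a quick sanity check on that figure, assuming ~0.985 GB/s of usable bandwidth per PCIe 3.0 lane:

# Why ~7.9 GB/s works out to roughly eight PCIe 3.0 lanes' worth of traffic.
PCH_TOTAL_GBS = 7.9    # total PCH throughput cited above
PER_LANE_GBS = 0.985   # assumed usable rate per PCIe 3.0 lane
print(round(PCH_TOTAL_GBS / PER_LANE_GBS, 1))  # ~8.0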
 