How many PCI-E lanes do you need?

So with the talk of 28/44 lanes on the new i9s, and 64 lanes on Threadripper, I was wondering how many lanes people really need, especially considering tri- and quad-SLI/CF are a thing of the past.

Figure a "high end" gaming PC, fully loaded:

16 + 16 for SLI/CF
4 + 4 for M.2 (either an overkill RAID 0, or just one OS drive and one game/storage drive)

So far that's only 40 lanes. Or only 24 if you don't like SLI/CF.
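To make the arithmetic concrete, here's a minimal tally sketch in Python; the device names and lane counts are just the hypothetical loadout above, nothing measured:

```python
# Minimal lane tally for the hypothetical "fully loaded" build above.
# Lane counts are nominal slot/device widths, not measured utilization.
build = {
    "GPU 1 (SLI/CF)": 16,
    "GPU 2 (SLI/CF)": 16,
    "M.2 SSD 1": 4,
    "M.2 SSD 2": 4,
}

for device, lanes in build.items():
    print(f"{device:<16} x{lanes}")
print(f"Total CPU lanes: {sum(build.values())}")  # 40 with SLI/CF, 24 with one GPU
```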

Now you could add in a few misc devices like video capture cards, a 10Gb NIC, or a TB adapter, but it's pretty rare to have all of those in one machine. Even if you're mining, I don't imagine you'd need x16 of PCI-E to each of your cards.

I've personally got a 28-lane CPU, and I'm only using 16 for one GPU and 4 for one M.2 drive.

So how many lanes do YOU need in your rig and why?
 
I need 24; my board has 32.

16 for two GPUs and 8 for an InfiniBand 10Gb card.

And soon, 8 for a PCIe SSD.
 
I guess we're talking about CPU lanes, not the extras from the chipset.

I could use somewhere between 12 and 20 for GPU + M.2. I wouldn't die having x8 graphics, especially not with PCIe 4.0 coming. And Thunderbolt/USB-C has moved to the CPU.

3.0 x1 = ~1GB/sec.
4.0 x1 = ~2GB/sec.
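Those per-lane figures fall straight out of the published transfer rates and line encodings; here's a quick sketch of the math, assuming the standard PCIe numbers:

```python
# Per-lane throughput = transfer rate * encoding efficiency / 8 bits per byte.
# PCIe 2.0 uses 8b/10b encoding; 3.0 and 4.0 use 128b/130b.
GENERATIONS = {
    "2.0": (5e9, 8 / 10),
    "3.0": (8e9, 128 / 130),
    "4.0": (16e9, 128 / 130),
}

for gen, (transfers_per_sec, efficiency) in GENERATIONS.items():
    gb_per_sec = transfers_per_sec * efficiency / 8 / 1e9
    print(f"PCIe {gen} x1 ~ {gb_per_sec:.2f} GB/s")  # ~0.50, ~0.98, ~1.97
```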
 
Yeah, the USB moved, taking up some lanes, and I'm currently running x8 for my video card because I'm maxing out the 40 lanes. And yes, I have some extra hardware besides that using more lanes (a video capture card and a PCI Express SSD).
 
I guess we're talking about CPU lanes, not the extras from the chipset.

I could use somewhere between 12 and 20 for GPU + M.2. I wouldn't die having x8 graphics, especially not with PCIe 4.0 coming. And Thunderbolt/USB-C has moved to the CPU.

3.0 x1 = ~1GB/sec.
4.0 x1 = ~2GB/sec.

I think the lack of need for faster lanes is what's slowed down the rollout of 4.0. When we're barely using 3.0 right now, why spend R&D dollars to push 4.0 out the door?
 
PCIe 4.0 isn't finished spec-wise. It's the usual too-many-chefs issue. I am sure both AMD and Intel want PCIe 4.0 for their chipset links, for example. The datacenter wants it for HPC cards. And then there is 3D XPoint.
 
Just because a device supports up to X lanes doesn't mean you "need" that many, considering you may not be saturating the bandwidth you do have.

The need for more is going to be based on performance bottlenecks.
 
I would be a real edge case, but here's the build I want (a 2-user gaming setup plus storage/Plex server; it needs 12-16 cores at >3.5GHz and good IOMMU grouping support; I'm looking at you, AMD...):
16x GTX 1060 (mine)
16x GTX 1060 (wife)
8x H310 SAS card (storage)
8x H310 SAS card (storage)
4x 512GB M.2 NVMe (mine)
4x 256GB M.2 NVMe (wife)
And if the USB controllers can't be split and passed to separate VMs:
1x PCIe USB 3.1 card.

That is 57 lanes. I have done this without the NVMe drives and with one SAS card on an E5 v3 chip. If I am stuck with 44 lanes, I guess the wife would get a SATA SSD and I would have to use an expander on the SAS card.
 
2x GPUs = 32
3x M.2 NVMe drives = 12 (two in RAID, one non-RAID)
Chipset features, etc. = 4-8
Total for a sound system with future upgradability and no compromise: 48-52

If you go with a 3-4-way GPU combination for rendering, real-time ray tracing, etc., add another x16, but in reality x8 per card should be fine for that, so 4x8 = 32, the same in the end as before.

An AMD platform with CPU SKUs that support the platform's full PCIe lane count is awesome!
 
I've got 1 GPU, 2 SSDs, and 3 HDDs. It sucks moving stuff from one drive to another.
 
I think the lack of need for faster lanes is what's slowed down the rollout of 4.0. When we're barely using 3.0 right now, why spend R&D dollars to push 4.0 out the door?
There is never a lack of need for faster. The problem is real estate. With faster lanes, you don't need as large a package on the CPU. One of the issues I believe is holding up the 4.0 spec is the amount of power.
 
There is never a lack of need for faster. The problem is real estate. With faster lanes, you don't need as large a package on the CPU. One of the issues I believe is holding up the 4.0 spec is the amount of power.

I'm not sure I agree. The size on die of the PCIe interface is tiny compared to all the other things taking up space on a typical CPU. And on the motherboard, PCIe 4.0 doesn't require any more space than 3.0. Power is also plentiful.

But no one is clamoring for faster lanes right now that I've seen. An x16 3.0 link can move data faster than almost any hardware can handle.
 
But no one is clamoring for faster lanes right now that I've seen. An x16 3.0 link can move data faster than almost any hardware can handle.

That's not true. The entire datacenter segment wants faster. And something like 3D XPoint-based SSDs is directly limited by PCIe speeds.

But the specification isn't even finished yet. There is some prototype hardware around for testing purposes that has prompted changes to the specification during development. It's always easier to develop something nobody else will use than something everyone and everything will use.
 
That's not true. The entire datacenter segment wants faster. And something like 3D XPoint-based SSDs is directly limited by PCIe speeds.

But the specification isn't even finished yet. There is some prototype hardware around for testing purposes that has prompted changes to the specification during development. It's always easier to develop something nobody else will use than something everyone and everything will use.

If a large segment of people were really pushing hard for faster links, then the members of the PCIe spec board would feel that pressure and work on the final standard faster. As it is, they're working on an improvement, but they're not under much pressure to get it done now, now, now.
 
If a large segment of people were really pushing hard for faster links, then the members of the PCIe spec board would feel that pressure and work on the final standard faster. As it is, they're working on an improvement, but they're not under much pressure to get it done now, now, now.

If you see speed as the only metric, then I can fully understand your view. But there is much more to PCIe than that.
 
If you see speed as the only metric, then I can fully understand your view. But there is much more to PCIe than that.

From what I've read, the only difference between 3.0 and 4.0 is the speed. I've heard rumors of a new Thunderbolt-style connector as well, but I expect that may be DOA based on the changes Thunderbolt itself has undergone in the past couple of years.
 
I'm not sure I agree. The size on die of the PCIe interface is tiny compared to all the other things taking up space on a typical CPU. And on the motherboard, PCIe 4.0 doesn't require any more space than 3.0. Power is also plentiful.

But no one is clamoring for faster lanes right now that I've seen. An x16 3.0 link can move data faster than almost any hardware can handle.
Entire package, not die size. PCIe needs a lot of pins.
 
You have to look outside the realm of gaming for this. Rendering towers can easily have 4x cards; 4x16 = 64 alone. Factor in potential M.2 drives taking 4 lanes each and you can see the problem.
 
You have to look outside the realm of gaming for this. Rendering towers can easily have 4x cards; 4x16 = 64 alone. Factor in potential M.2 drives taking 4 lanes each and you can see the problem.

Rendering doesn't generally require 16 lanes at full speed; the cards can run just as well with 8 lanes each with no hit to performance.
 
For a normal desktop... not so many, really. 24 is reasonable, if a bit tight-feeling for expansion's sake.

If you are doing something requiring a LOT of I/O...

Video capture and rendering of incoming feeds to storage subsystems, while hosting a few VMs that are gaming or streaming content themselves...

Doing large renders or working with "big data" in any real capacity of 1TB+, with high throughput needed to an AFA solution (an all-flash array connected by dedicated Fibre Channel controllers using PCIe x8 or x16 lanes), sure.

So for a single workstation that's gaming, and streaming, and using a pair of high-performance video cards, with a pair of NVMe drives in RAID 0 because you're silly... let's see:

SLI at x16 speeds = 32
Pair of NVMe drives at x4 = 8
Sound card = 1
10Gb NIC = 8

So that's 49 lanes for I/O.

Get into a server where you will want 3 or more Fibre Channel cards and multiple 4-port or 2-port 10Gb Ethernet cards, and it gets higher fast. Of course, you also have motherboards that support multiple CPUs and dedicated high-speed I/O controllers.
 
PCIe 4.0 has gotten one step closer, and 5.0 has been announced. For comparison, 4.0 was announced in 2011.
http://www.businesswire.com/news/ho...-SIG®-Publishes-PCI-Express®-4.0-Revision-0.9
https://techreport.com/news/32064/pcie-4-0-specification-finally-out-with-16-gt-s-on-tap

[Images: PCIe 4.0 announcement slide and PCIe per-generation bandwidth table]
 
So with the talk of 28/44 lanes on the new i9s, and 64 lanes on Threadripper, I was wondering how many lanes people really need, especially considering tri- and quad-SLI/CF are a thing of the past.

Figure a "high end" gaming PC, fully loaded:

16 + 16 for SLI/CF
4 + 4 for M.2 (either an overkill RAID 0, or just one OS drive and one game/storage drive)

So far that's only 40 lanes. Or only 24 if you don't like SLI/CF.

Now you could add in a few misc devices like video capture cards, a 10Gb NIC, or a TB adapter, but it's pretty rare to have all of those in one machine. Even if you're mining, I don't imagine you'd need x16 of PCI-E to each of your cards.

I've personally got a 28-lane CPU, and I'm only using 16 for one GPU and 4 for one M.2 drive.

So how many lanes do YOU need in your rig and why?

I would need/want 28 lanes.
 
I have my Titan XPs in SLI and I use up to about 50-60% of the bandwidth on my current 16 PCIe lanes, so there's plenty of bandwidth left. However, I'm planning on upgrading to X299 soon, and I only upgrade motherboards every 5 years or so, so I'd definitely like to have 36+ (32 for video cards and 4 or more for M.2) so I have some room for growth. I upgrade video cards frequently, so I don't want to be bottlenecked in 2 years.
 
I have a Ryzen system and wish I had more.

16 -- 980 Ti
4 -- M.2 onboard
4 -- M.2 PCIe add-in card (or a board with a second M.2)
----
24

No room for a sound card or anything else. Using any of the PCIe 2.0 x1 slots will remove 1 lane each from the second M.2 drive.

If I use the second PCIe 3.0 x8 slot, it drops the GPU from x16 to x8. That's not a big performance hit, but I do push the system to the limit, so I would see the loss of FPS.

If I want to add any other fast (NVMe) storage, I will need to use that second PCIe 3.0 slot.

More lanes would be nice.
 
GPU (x16)
2x M.2 (x8)
2x x1 cards (one Wi-Fi, one sound card), rounded up to x8 (lanes are switched on and off in sets of 4)

So, 32.

However, in my case, how the lanes are arranged matters more than how many there are. My Z97X-UD5H is a prime example of a poorly thought-out lane setup.
 
16 -- Graphics card
4+4 -- (2) M.2 SSDs
8 -- 10Gb Ethernet or SFP+ for NAS storage transfers/backups/recording; 10 times faster than the 125MB/s limit of 1GbE (quick math in the sketch after this list).
4 -- USB 3.0 card for Oculus room VR sensors, PCIe 2.0.
4 -- Board I/O such as USB 3.0, Thunderbolt, and shared PCIe lanes on the chipset.
4 -- Optional video capture card (x4 PCIe 2.0) or sound card (x1).
----
44 without SLI
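Quick math on the 10Gb line item, as a minimal sketch; the per-lane figures are approximate usable rates and any protocol overhead beyond encoding is ignored:

```python
# How wide a link a NIC needs: raw line rate vs. approximate per-lane PCIe bandwidth.
import math

nic_mb_s = {"1GbE": 1_000 / 8, "10GbE": 10_000 / 8}   # Mbit/s -> MB/s
lane_mb_s = {"PCIe 2.0": 500, "PCIe 3.0": 985}

def min_slot_width(required: float, per_lane: float) -> int:
    """Smallest standard slot width (x1/x2/x4/x8/x16) covering the required rate."""
    lanes = max(1, math.ceil(required / per_lane))
    return 1 << (lanes - 1).bit_length()               # round up to a power of two

for nic, rate in nic_mb_s.items():
    for gen, lane in lane_mb_s.items():
        print(f"{nic} ({rate:.0f} MB/s) needs at least x{min_slot_width(rate, lane)} on {gen}")
```

Real cards usually come wider than that minimum; the dual-port Intel X520 mentioned later in the thread, for example, is an x8 card.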
 
16 -- Graphics card
4+4 -- (2) M.2 SSDs
8 -- 10Gb Ethernet or SFP+ for NAS storage transfers/backups/recording; 10 times faster than the 125MB/s limit of 1GbE.
4 -- USB 3.0 card for Oculus room VR sensors, PCIe 2.0.
4 -- Board I/O such as USB 3.0, Thunderbolt, and shared PCIe lanes on the chipset.
4 -- Optional video capture card (x4 PCIe 2.0) or sound card (x1).
----
44 without SLI

Not disagreeing, but as a general comment, it seems insane that Oculus requires all owners to also purchase a PCIe USB card to use the room VR sensors.
More out of curiosity: does it fully fail if you try it via the motherboard's USB over DMI 3.0 (I appreciate this comes down to motherboard manufacturers and what they design), or via a USB hub on its own header (yeah, laughing at myself while mentioning that)?
Cheers
 
Not disagreeing, but as a general comment, it seems insane that Oculus requires all owners to also purchase a PCIe USB card to use the room VR sensors.
More out of curiosity: does it fully fail if you try it via the motherboard's USB over DMI 3.0 (I appreciate this comes down to motherboard manufacturers and what they design), or via a USB hub on its own header (yeah, laughing at myself while mentioning that)?
Cheers
Yeah, for full roomscale VR with the Oculus Rift, you need a 3rd or 4th camera sensor depending on the size of the room. Oculus camera sensors run at 1080p when connected to USB 3.0, and you may run into errors if you have a 3rd sensor on the same USB controller (they recommend plugging the 3rd camera into a USB 2.0 port for 720p-resolution tracking).

By comparison, the HTC Vive only requires two lighthouses for full roomscale VR because it uses a sweeping laser scan of the room, with IR tracking sensors located on the headset.
 
I need more, more, and more PCI-E lanes... can never have enough lanes... ;)

It's always a compromise in determining what I have to do without in the system at the expense of something else. My PCI-E SSDs are sucking back the lion's share of lanes right now, so I have to live with only one video card... :(

So much for 40 lanes being "lots"... :rolleyes:
 
The unimaginable and unconscionable horror of using Enterprise-grade (gasp) hardware in a... gaming machine! (Double gasp.)

It's an egregious and blatant violation of the Intel computing morality statute, which clearly states that "Enterprise hardware shall not be used in a Consumer system." ...;)
 
Depends. Currently, PCIe SSD controllers max out at four lanes, so you'd need a lot of slots to build a really speedy array. Yes, there are x16 solutions which include a PCIe switch, but then the switch introduces a slowdown. There was a motherboard which used a vertical M.2 socket, so basically put 8 of those in a row, much like a normal card, but instead of a single slot, have 8 slots. Cooling needs thought: if each drive eats 15W, that's 120W, basically a not-so-small video card. Put a small heatsink on each disk and then use one of those PCI-slot 120mm-fan brackets to cool them with two 120mm fans. So that's 32 lanes on a purpose-built motherboard.

Then, to future-proof, I would want four 5GBASE-T connectors on the motherboard; Aquantia has an x1 5Gbps card, so that's 4 lanes.

One Thunderbolt port is plenty; that's another 4. These can be reused for USB-C 3.1 Gen 2 when that's connected.

x8 is more than enough for a GPU. Frankly, x4 is enough, but let's not go there.

So probably 48-52 is enough.
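Summing that hypothetical build, as a minimal sketch (the 15W-per-drive figure is the worst case quoted above, not a measured number):

```python
# Lane and power budget for the hypothetical 8-way vertical-M.2 board described above.
drives, lanes_per_drive, watts_per_drive = 8, 4, 15

array_lanes = drives * lanes_per_drive          # 32 lanes
array_watts = drives * watts_per_drive          # 120 W of drives to keep cool

other_lanes = {"4x 5GBASE-T (x1 each)": 4, "Thunderbolt": 4, "GPU at x8": 8}
total_lanes = array_lanes + sum(other_lanes.values())

print(f"M.2 array: x{array_lanes}, ~{array_watts} W")
print(f"Total CPU lanes: {total_lanes}")        # 48, before any chipset extras
```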
 
See, that's not cool.

Anything over 48 lanes suggests dual-processor/commercial gear.
I do 4K video stuff, and I assure you x8 isn't good enough for video cards that drive monitors (I use two 4K monitors). I have folding cards that run off x4, but that's different.

This is an issue for me; I'm building a new rig in the next 6 months... I want RAID SSDs / 10GbE / all things PCIe-lanes-ish!
 
I have upgraded my computer on the release of every HEDT chipset update Intel has done since X58. With the PCIe lanes being cut down to 28 for every CPU under $1000, I will, for the first time in 8 years, be sitting on my X99 setup to see what follows. I have the money to spend $1000 on a CPU if I want, but I refuse to do so.

We will see what the future brings.
 
Intel E10G42BT X520-T2 10 Gigabit Ethernet Card, 10Gbps, PCI Express x8, 2x RJ45.

That name means it takes 8 PCIe lanes.
 