GPU not using all pcie lanes...

hitched

Limp Gawd
Joined
Jan 12, 2011
Messages
220
So I noticed last night while browsing through the BIOS on my Crosshair VI Hero with a 1700X that, in my SLI setup, the first GPU slot is connected at x4 and the second at x8. When I remove the second GPU, the first slot goes to x8.

I have run benchmarks that say my 960 Evo is underperforming as well, but I don't know how many lanes it is actually using. If it isn't connected at x4, that's probably the cause of that too.

The only setting I found in the BIOS about PCIe lanes is a switch with Auto and Enable options: in Auto, if it detects two cards it sets the link to x8/x8, and if it is set to Enable the link goes to x4/x4. I have it on Auto.

The only other relevant settings on the motherboard are for the gen speed of the PCIe slots. The only ones I have changed from Auto are the first and second PCIe slots, which I set to Gen 3, and the M.2 slot, also set to Gen 3. The motherboard is on the latest BIOS as well.

I never thought to look at the GPU POST info to see how many lanes the GPUs are actually connected with. Is this just a limitation of first-gen Ryzen? Possibly a motherboard issue? I was thinking of just getting rid of the CPU and getting a 3600 since all I do is game anyway, but now I think I need a new motherboard as well...
 
It's the limitation of the on-CPU PCIe controller on all mainstream CPU platforms. You will need an HEDT rig to handle all of those devices at anywhere near full bandwidth. You see, there are only 16 available PCIe lanes total in that R7-1700X (20 total PCI-E lanes, of which 4 are used for the connection to the motherboard chipset). What's more, on your motherboard the M.2 slot that your SSD is currently connected to shares bandwidth with the first PCIe x16 slot, and the SSD then defaults to a PCI-E x2 configuration in that board! That is exactly why I would not recommend an m.2 PCI-E SSD in these mainstream PCs as the OS/boot drive. The boot drive in these mainstream systems should ALWAYS be a SATA 2.5" SSD. The exception is the Zen2 platform, which provides four dedicated PCIe lanes just for the M.2 PCIe SSD.
 
You see, there are only 16 available PCIe lanes total in that R7-1700X (20 total PCI-E lanes, of which 4 are used for the connection to the motherboard chipset)
This is incorrect. The Ryzen 1xxx includes 24 lanes, but 4 are reserved for connecting to the chipset. That leaves 16 lanes for the PCIe slots and 4 lanes dedicated to NVMe storage. The other PCIe slots are driven by the chipset and not directly connected to the CPU, so they don't factor into the conversation.
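To put numbers on why a downgraded link would make the 960 Evo underperform, here's a quick back-of-the-envelope calculation in Python. The ~3,200 MB/s sequential-read figure is Samsung's rated spec for the 960 Evo, assumed here for illustration:

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding,
# so usable bandwidth per lane is 8e9 * (128/130) bits/s.
per_lane_mbs = 8e9 * (128 / 130) / 8 / 1e6  # ~984.6 MB/s per lane

x2_mbs = 2 * per_lane_mbs  # ~1,969 MB/s
x4_mbs = 4 * per_lane_mbs  # ~3,938 MB/s

evo_960_read_mbs = 3200  # rated sequential read (assumed from Samsung's spec sheet)

print(f"x2 link: {x2_mbs:.0f} MB/s -> caps the drive: {x2_mbs < evo_960_read_mbs}")
print(f"x4 link: {x4_mbs:.0f} MB/s -> caps the drive: {x4_mbs < evo_960_read_mbs}")
```

In other words, an x2 link (~1.97 GB/s) would visibly throttle sequential reads, while the full x4 (~3.94 GB/s) leaves headroom above the drive's rated speed.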

That is exactly why I would not recommend an m.2 PCI-E SSD in these mainstream PCs as the OS/boot drive. The boot drive in these mainstream systems should ALWAYS be a SATA 2.5" SSD.

In that context, this advice is ridiculous. The reason to recommend against an M.2 PCIe SSD for the OS/boot drive is that they provide no practical performance benefit to users without specialized storage performance needs, and they are generally more expensive than a SATA SSD of the same capacity. It's not that they somehow hamper the rest of your PCIe subsystem performance, which they don't.

So you should be able to achieve x8/x8 on the GPUs and the full x4 on the NVMe drive.

With all of that said, for almost all practical applications your performance will not be affected by running at a lower link speed, so just not worrying about it is totally an option.
 
The AM4 socket provides 20 lanes to the user, plus 4 more for the chipset, 24 total. Of the 20 user lanes, 16 are allocated for general purpose (expected to be a GPU, or split between two GPUs), and 4 are dedicated to an NVMe drive. How strict AMD is about this, I don't know. I do know there is at least one X570 board that only exposes 16 lanes to the user and connects the M.2 drive through the southbridge (the CPU's 4 dedicated NVMe lanes are not connected to anything): the ASRock X570 Phantom Gaming mITX. However, that board is an extreme outlier, and not relevant to the ASUS Crosshair VI Hero.
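As a sanity check, the AM4 lane budget described above tallies up like this (a trivial sketch, labels are mine):

```python
# AM4 (Ryzen 1000/2000) CPU PCIe lane budget as described above
lanes = {
    "general purpose (GPU slots)": 16,
    "dedicated NVMe": 4,
    "chipset link": 4,
}
user_lanes = lanes["general purpose (GPU slots)"] + lanes["dedicated NVMe"]
print(f"user-visible lanes: {user_lanes}")         # 20
print(f"total CPU lanes:    {sum(lanes.values())}")  # 24
```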

The ASUS spec page for the Crosshair VI Hero claims it can do x8/x8 for the two GPUs, and the NVMe drive is connected to the CPU via AM4's dedicated allocation for the drive, not through the southbridge, and not by taking lanes from the main two slots. There is one more M.2 slot for the WiFi card, providing one lane of PCIe 2.0 from the southbridge.

I am struggling to see why any GPU is working at x4 and able to switch to x8. None of the lane topology/allocations for the C6H indicate that's even possible. The user manual

[screenshot: PCIe-related BIOS options from the C6H user manual]

lists a few options, but none would result in the strange behavior you have described.

PCIEX4_3 is the full-size slot at the bottom/edge of the board, near the "Start" button. That one is connected to the southbridge via PCIe 2.0 x4, and appears to take its lanes from the three PCIe x1 slots on the board. Unless you are using any of those four slots, this setting is irrelevant.

PCIEX16_1 Mode, PCIEX8_2 Mode, and M2 Link Mode set the PCIe link speed (and possibly protocol) to PCIe 1 (1.1a?), 2, 3, or automatic. Setting all three to Gen3 should be fine.

So with all of that out of the way, my diagnostic approach from here is:

PCIe slot, mobo, GPU.

GPU slot - check whether the slot has anything blocking the pins. Check the GPU board itself too, for the same thing: a sticker, tape, or whatever. PCIe slots scale automatically between 1, 4, 8, and 16 lanes (2-lane links are also possible for PCIe connections, but not defined for slots), and one of the ways to trigger that automatic scaling is blocked pins on the connector or slot.

Mobo - use GPU-Z to see if the GPU is actually using 16/8 lanes. Use HWiNFO to look at the PCIe topology and see if there is anything amiss in the layout.
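GPU-Z and HWiNFO are the easy route on Windows; if you happen to have a Linux live USB handy, `sudo lspci -vv` on the GPU's address shows the same thing in its `LnkCap` (what the link can do) and `LnkSta` (what it negotiated) lines. A minimal Python sketch for pulling the widths out of that output, using a made-up sample of a card trained down to x4:

```python
import re

# Hypothetical excerpt of `sudo lspci -vv -s <gpu_address>` output on Linux,
# showing a GPU capable of x16 but currently linked at x4.
sample = """
LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1
LnkSta: Speed 8GT/s (ok), Width x4 (downgraded)
"""

def link_widths(lspci_text):
    """Return (capable_width, current_width) parsed from lspci -vv output."""
    cap = re.search(r"LnkCap:.*?Width x(\d+)", lspci_text)
    sta = re.search(r"LnkSta:.*?Width x(\d+)", lspci_text)
    return int(cap.group(1)), int(sta.group(1))

cap, sta = link_widths(sample)
print(f"capable: x{cap}, running at: x{sta}")  # capable: x16, running at: x4
if sta < cap:
    print("link has trained down - check slot pins / BIOS bifurcation settings")
```

Whichever tool you use, the key comparison is the same: negotiated width versus capable width.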

GPU - test each GPU, one at a time, to see if the GPU itself is the source of the problem. E.g., use your secondary GPU as the only GPU (install it into the main GPU's slot) and see if that clears up the problem for the main slot.

If all of that fails to turn up anything, that leaves the CPU or the motherboard (or both) at fault. Without access to a second board or a second CPU, that's harder to test.
 