ASUS - use M.2 PCIe and disable x8 slot... why?

cyclone3d

[H]F Junkie
Joined
Aug 16, 2004
Messages
16,244
I just swapped out my x79 setup for a x99 setup.

Why the crap do ASUS and other mfgs, for that matter, think it is a good idea to disable an x8 slot when using an x4 PCIe M.2 drive?

Why not just drop the PCIe slot to x4 instead?

Am I missing some rational reason here, or are the motherboard mfgs just clueless?

And to top it off, another x4 slot has to run at x1, and a couple of USB ports plus the Wi-Fi have to be disabled, in order for another x1 slot to be enabled... GRRRRR. This is with the ASUS X99-PRO.

I really need:
x16 (video)
x4 (video capture card)
x4 (10Gb fiber NIC)
x1 (Sound Blaster ZxR)

As it is, I have:
x16 (video)
x16 (for whatever)
x1 (sound)
x1 (useless)

At this point I really need to repair my ASUS X99-E WS/USB3.1 motherboard (got it really cheap with mangled CPU socket pins). I need to MacGyver a couple of broken ones back into the socket to get quad-channel RAM working, as only 3 channels currently work.
 
My only guess is you have to choose between the m.2 link direct to the cpu or the chipset which controls the secondary PCIe slots. Just a guess.
 
To answer this question, I need to know what board and CPU we are talking about. In general, some slots get their lanes from the chipset (PCH) and others from the CPU's PCIe controller, and how the slots behave is largely down to how the lane switching is configured. PCIe switches are expensive. The alternative is a board with a PLX PEX 8747 chip on it, which multiplexes the lanes, allowing even allocation to upwards of 4x PCIe x16 slots. You are still bandwidth constrained to the CPU, but a board with one of those would alleviate your issue. If you had an X79 setup that behaved better than X99, that is likely the reason why: you had one with a PLX chip on it.
 
Why the crap do ASUS and other mfgs, for that matter, think it is a good idea to disable an x8 slot when using an x4 PCIe M.2 drive?
Sometimes they reach bifurcation limits of the CPU.
Sometimes it is easier to route PCIe slot lanes 0-3 to the M.2 slot rather than lanes 4-7.
Probably they didn't think that anyone could ever need to use that much PCIe connectivity, and thus didn't care.
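To make the lane-routing point above concrete, here is a toy Python sketch of a single shared 8-lane group. The "slot wired from lane 0" assumption and the function itself are hypothetical, purely for illustration; the real routing tables are ASUS's and are not reproduced here.

```python
# Toy model of why an x4 M.2 socket can kill a whole x8 slot.
# Hypothetical board: the slot and the M.2 socket share one 8-lane
# group, and the slot's traces start at lane 0 of that group.

def slot_width(m2_populated, slot_wired_from_lane=0, group_lanes=8, m2_lanes=4):
    """Usable width of the PCIe slot that shares a lane group with the M.2."""
    if not m2_populated:
        return group_lanes          # M.2 empty: slot gets the whole group (x8)
    if slot_wired_from_lane < m2_lanes:
        return 0                    # M.2 takes lanes 0-3; overlapping slot is disabled
    return group_lanes - m2_lanes   # slot wired from lane 4 would survive at x4

print(slot_width(m2_populated=False))                         # 8 -> full x8 slot
print(slot_width(m2_populated=True))                          # 0 -> slot disabled
print(slot_width(m2_populated=True, slot_wired_from_lane=4))  # 4 -> slot drops to x4
```

The point: if the board had physically routed the slot from lane 4 of the group instead of lane 0, populating the M.2 would only drop the slot to x4 rather than disabling it outright.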
 
To answer this question, I need to know what board and CPU we are talking about. In general, some slots get their lanes from the chipset (PCH) and others from the CPU's PCIe controller, and how the slots behave is largely down to how the lane switching is configured. PCIe switches are expensive. The alternative is a board with a PLX PEX 8747 chip on it, which multiplexes the lanes, allowing even allocation to upwards of 4x PCIe x16 slots. You are still bandwidth constrained to the CPU, but a board with one of those would alleviate your issue. If you had an X79 setup that behaved better than X99, that is likely the reason why: you had one with a PLX chip on it.

The board is the ASUS X99-Pro. The CPU is a Xeon E5-1660 v3.

My old X79 setup (the newer ASUS X79-Deluxe with a Xeon E5-1680 v2) didn't have an M.2 slot, so I had enough PCIe lanes for everything. Also, I'm pretty sure the 2nd and 4th x16 slots on that board share bandwidth when both are populated.

And yeah, I really just need to fix my ASUS X99-E WS/USB 3.1 board which has PCIe switches. Video card would be in the first slot and covering the second slot. That would leave me with 5 slots, 3 of them being x16 and 2 being x8 from what the manual says.
The block diagram only shows the CPU using 28 PCIe lanes total. Do you know if that would be any different with a 40 lane CPU?
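On the 28- vs 40-lane question: behind a PLX switch, the slots keep their electrical width either way; what a bigger CPU changes is the upstream link feeding each switch, and therefore how oversubscribed the downstream slots are when several cards are busy at once. A rough sketch, assuming the common PEX 8747 arrangement of an x16 upstream port with 32 downstream lanes (the X99-E WS manual has the board's actual wiring, which may differ):

```python
# Back-of-envelope oversubscription math for a PLX-switched board.
# Assumption (typical, not board-specific): one PEX 8747 exposes 32
# downstream lanes and is fed by an upstream link from the CPU.

def switch_fanout(upstream_lanes, downstream_lanes=32):
    """Downstream electrical lanes per upstream CPU lane (oversubscription)."""
    return downstream_lanes / upstream_lanes

print(switch_fanout(16))  # 2.0 -> x16 uplink: 2:1 when everything is busy
print(switch_fanout(8))   # 4.0 -> only x8 uplink left: 4:1
```

So a 40-lane CPU would not change what the slots report; it would just leave more lanes for the uplinks (and direct-attached devices), so the switched slots contend less for CPU bandwidth.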
 
Caveat emptor

I notice many, many people here do not understand how the PCIe root complex is routed. The first thing I do if I am interested in a motherboard is look up the manual and read it. In your case it comes down to which processor you have, since that determines how many allocable lanes come off the processor and get bridged through to the physical/electrical slots. Generation support is also a factor, as later processors and core logics supported a mix of version 2 and 3; recent processors and chipsets are basically unified on generation 3 or 4. Secondary PCIe bridging comes from the south logic (PCH), and its HSIO lanes can reduce the M.2, SATA, and PCIe allocation available depending on which add-in cards are installed or which onboard logic is enabled.

That does not include switches with multiple endpoints. A good example is a x4 PCIe add-in card with dual x4 M.2 ports. I avoid those because they do not make anything better: dividing fewer lanes among more devices means too much contention on the bus, similar to old shared PCI even if it is "better and faster." I prefer everything routed directly to the root complex with no multi-endpoint switch, so no device competes with another. I'll post a Gigabyte-provided routing chart from their forums, which is also noted in the manual.

I took a quick look in your manual, and these pages mention the configurations:
Page vii, ix, 1-1, 1-13, 1-14, 1-29
 

Attachments

  • PCIeR.png
Just wanted to leave an update.

I was able to replace the broken pin in the CPU socket on my ASUS X99-E WS/USB3.1 motherboard. Quad-channel RAM is working just fine now, so I finally have a board that will work with the cards I want to use.

I used the same method the guy on YouTube used on an LGA775 socket, except I took a pin from a replacement socket I ordered from China instead of pulling one from a dead donor board.

Those pins are soooo crazy small. I lost one to who knows where, so I had to use a second one. It took a bit to get the pin into the spot where the broken one was, but once I got it there it fell right into place.
As long as the board isn't turned upside down with no CPU in it, it should be just fine.
 
Yeah, you can bend those pins just by sneezing on them. They are very delicate.
 