Are there any X570 mobos that support 16x PCIe 4.0 (GPU) + 4x PCIe 4.0 (NVMe) + 8x PCIe 3.0 (50 Gbps NIC)?

pclausen

Gawd
Joined
Jan 30, 2008
Messages
530
My understanding is that the AM4 socket is limited to the following:

1 16x PCIe 4.0 direct-wired to AM4 (typically the main PCIe 16x slot for the GPU, or split 8x + 8x or 8x + 4x + 4x across 2 or 3 PCIe slots)
1 4x PCIe 4.0 direct-wired to AM4 (typically an M.2 slot for an NVMe storage device)
1 4x PCIe 4.0 to the chipset/PCH (LAN, USB, SATA, etc.)

I have the following that I want to plug into an X570 mobo:

RTX 3080 / 6900 XT (depending on which I can get my hands on first)
Mellanox MCX4121A dual 25 Gbps NIC (8x PCIe 3.0)
Sabrent Rocket Q4 NVMe 4.0

I'm thinking that the best compromise would be to run those 3 devices as either:

GPU 8x PCIe 4.0 in PCIe 16x slot 1
NIC 8x PCIe 3.0 in PCIe 16x slot 2 (or 3)
NVMe 4x PCIe 4.0 in M.2 slot

Or: (not sure this will work since that effectively leaves no PCIe lanes for the PCH chipset)

GPU 16x PCIe 4.0 in PCIe 16x slot 1
NIC 4x PCIe 3.0 in PCIe 16x slot 2 (or 3)
NVMe 4x PCIe 4.0 in M.2 slot

Granted 4.0 is effectively twice as fast as 3.0, so maybe I won't take a performance hit by running the GPU @ 8x (assuming a PCIe 4.0 capable GPU of course).

PCIe 3.0 x4 ≈ 31.5 Gbps, so as long as I run just one port on the NIC I'll be fine, but if I want to LAG both ports together for 50 Gbps, it becomes the limiting factor.
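For reference, the link-rate math behind those numbers can be sketched like this (assuming 128b/130b encoding for Gen 3/4 and ignoring protocol overhead, so real throughput is a bit lower):

```python
# Quick sanity check of usable PCIe link bandwidth per generation.
# Gen 3 signals at 8 GT/s per lane, Gen 4 at 16 GT/s; both use 128b/130b encoding.
def pcie_gbps(gt_per_s, lanes, encoding=128 / 130):
    """Usable bandwidth in Gbps, before protocol (TLP/DLLP) overhead."""
    return gt_per_s * encoding * lanes

print(f"Gen3 x4: {pcie_gbps(8, 4):.1f} Gbps")    # under a 2x 25 GbE LAG
print(f"Gen3 x8: {pcie_gbps(8, 8):.1f} Gbps")    # covers 50 Gbps with headroom
print(f"Gen4 x4: {pcie_gbps(16, 4):.1f} Gbps")   # same as Gen3 x8
```

So a Gen 3 x4 link (~31.5 Gbps) can't feed both 25 GbE ports at line rate, while Gen 3 x8 or Gen 4 x4 (~63 Gbps) can.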

Do most X570 mobos support either of these configs? (I'm looking mainly at the Asus ROG/TUF offerings without WiFi.)

I'm coming from Intel where I have been spoiled with 40 PCIe 3.0 lanes for about as far back as I can remember, so being limited to 24 lanes will take some getting used to. :)

Thanks!
 

Jamie Marsala

Limp Gawd
Joined
Mar 9, 2016
Messages
290
Pretty much every X570 board is set up that way. If you populate the first 2 PCIe slots, they both drop to 8x PCIe 4.0. The CPU feeds those two slots plus the first M.2 slot at PCIe 4.0; of the CPU's 24 lanes, the remaining 4 go to the chipset, also at PCIe 4.0 speed. Most boards are set up so the other slots (everything besides the first 2 CPU-connected ones) hang off the chipset at PCIe 4.0. So you can pop that NIC in a 4x slot and get proper PCIe 4.0 speeds through the chipset.
 

jeremyshaw

[H]F Junkie
Joined
Aug 26, 2009
Messages
12,481
All X570 boards are:
24 lanes from the CPU: 4 for NVMe (AMD requirement) and 4 for the southbridge (platform requirement). That leaves 16 from the CPU, plus a bunch more breakouts from the chipset. The cheaper boards, like my X570 TUF, put all 16 of the remaining CPU lanes into the main PCIe slot. The other big PCIe slot gets 4 lanes from the chipset (meaning the CPU has 4 lanes to the chipset, and the chipset has a hub/switch with ~20 lanes, of which 4 are allocated to that other PCIe slot).

Some of the pricier boards will instead split that main 16-lane allocation from the CPU into two x8 slots (using switch chips, so the main slot runs x16 alone or x8 when the second slot is populated).

IIRC, from my original research around the X570 launch, anything priced higher than an ASUS X570 TUF or ASRock Steel Legend/Extreme4 should have the latter layout. I'm somewhat interested too, so I'll be back later with more findings.
 

jeremyshaw

[H]F Junkie
Joined
Aug 26, 2009
Messages
12,481
Alright, I found the board I was thinking of. The ASUS Pro WS X570 ACE. ~$350+, so not worth it, outside of the configuration curiosity:

3 x16 slots. The first two share the x16 from the CPU (if only one is used, the first slot gets x16; if both are used, they each get x8). The 3rd slot gets x8 from the chipset (obviously the chipset only has x4 to the CPU, but at PCIe 4.0 speeds that's still more than enough to cover the 50 Gbps NIC, with headroom to spare for the onboard LAN, SATA, USB [though some USB ports are driven directly from the CPU and don't use the southbridge's bandwidth], etc.).

Of course, for your utilization, the best board would be one that only has x16 from the CPU to the main slot, and x8 from the chipset to the second slot, but such a board doesn't exist, AFAIK.
 

pclausen

Gawd
Joined
Jan 30, 2008
Messages
530
Yep, I did not find an X570 with x8 to the chipset either.

I was looking at the ROG Crosshair VIII Formula and found this chart:

FormularX570PCIe.PNG


Unfortunately the ASUS manual does not include a block diagram, but from the above table it would appear that one can run dual x8 cards plus dual NVMes, where the 2nd NVMe has to pass through the PCH.

I found this, which I assume is what all X570 motherboards adhere to.

FormularX570PCIe2.PNG


For my use case, I think the optimum is to run the GPU in slot 1 and the NIC in slot 2, giving each 8 lanes directly to the CPU. I could then run dual NVMes, where one goes directly to the CPU and the other goes through the chipset.

My NIC supports RDMA, which I have already deployed between a couple of servers on my network; it provides amazingly fast access to data without the CPU having to do any heavy lifting. I have large spinner arrays (48-drive RAID 60) as well as NVMe striped volumes using 4 of those Sabrent Rocket Q4 NVMe 4.0 drives. I want to extend access to that data from my new X570 build, hopefully using RDMA. I'm not sure if that is supported on a non-server OS like Win10, but it will be fun to see how fast I can go.
 

Ready4Dis

2[H]4U
Joined
Nov 4, 2015
Messages
2,426
pclausen said:
Yep, I did not find a X570 with x8 to the chipset either. …
Ever figure this out? Even though the chipset uplink is only x4, it's PCIe 4.0, so it's the same speed as PCIe 3.0 x8. If you have a PCIe 3.0 x8 connection to the chipset you won't be losing speed (unless it's overloaded with other things). That said, your plan to just use the x8/x8 split is probably the safest and most easily supported. If you get a PCIe 4.0 GPU you won't really notice the difference between x8 and x16.
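To put rough numbers on the chipset-uplink point (a sketch only; the burst figures for other devices are illustrative, not measured):

```python
# Rough model: does the X570 chipset's Gen4 x4 uplink cover a 50 Gbps NIC
# hanging behind it? 128b/130b encoding; protocol overhead ignored.
UPLINK_GBPS = 16 * (128 / 130) * 4   # PCIe 4.0 x4 uplink to the CPU, ~63 Gbps

NIC_LAG_GBPS = 50.0                  # 2x 25 GbE LAG at full line rate

headroom = UPLINK_GBPS - NIC_LAG_GBPS
print(f"uplink:   {UPLINK_GBPS:.1f} Gbps")
print(f"NIC peak: {NIC_LAG_GBPS:.1f} Gbps")
print(f"headroom: {headroom:.1f} Gbps left for SATA/USB/LAN sharing the uplink")
```

So even with both NIC ports saturated, the Gen4 x4 uplink has roughly 13 Gbps to spare; it only becomes a bottleneck if heavy SATA/USB traffic runs through the chipset at the same time.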
 