With four m.2 slots and two drives, does it matter which m.2 slots I install my drives into?

Delicieuxz

I have an Asus ROG Strix X670E-A motherboard (AM5), which has four M.2 slots. Two are PCIe 5.0 x4, and I assume they run off the CPU. The other two are PCIe 4.0 x4 and run off the X670 chipset. None of the slots share bandwidth with anything else.

I have two Hynix P41 m.2 drives, which are PCIe 4.0. So, I was thinking I could install them in the PCIe 4.0 slots, and save the PCIe 5.0 slots for potential future PCIe 5.0 m.2 drives.

On the other hand, the first PCIe 5.0 slot will get covered by the CPU cooler. So, if I use it now, I can leave one PCIe 5.0 and one PCIe 4.0 m.2 slot unused and easily accessible for the future.

How should I decide which slots to use up first, if there is any rule to it? And should I remove the thermal sticker on the Hynix P41 drives when installing them under the mobo's thermal shroud?
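For what it's worth, the bandwidth side of this mostly answers itself: a PCIe link trains to the lower generation of the two endpoints, so a Gen4 drive performs identically in a Gen4 or Gen5 slot. A quick sketch of the arithmetic (back-of-envelope usable GB/s after encoding overhead, not figures from the board's manual):

```python
# Approximate usable throughput per PCIe lane in GB/s (128b/130b encoding):
# Gen3 = 8 GT/s, Gen4 = 16 GT/s, Gen5 = 32 GT/s.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

def negotiated(drive_gen: int, slot_gen: int, lanes: int = 4) -> float:
    """A link trains to the lower generation of its two endpoints."""
    return link_bandwidth(min(drive_gen, slot_gen), lanes)

# A Gen4 drive gets the same ~7.9 GB/s whether the slot is Gen4 or Gen5:
print(round(negotiated(4, 4), 1))  # 7.9
print(round(negotiated(4, 5), 1))  # 7.9
```

So the only real reasons to prefer one slot over another are accessibility, cooling, and whether populating it steals lanes from something else.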


Also, is there any good reason I should do RAID 0 with the Hynix P41 drives? I'll have additional SATA SSDs for file backups.
 
Are you absolutely sure the PCIe 5.0 M.2 slot doesn't split lanes with the PCIe graphics card slot? Every board I've seen with a PCIe 5.0 M.2 slot drops the graphics card slot to x8 when it's populated.
Now, a graphics card slot at PCIe 5.0 x8 still has the same bandwidth as PCIe 4.0 x16, so it doesn't matter for this generation of cards, but what about the future?
You need to be 100% sure, because sometimes the documentation isn't clear or isn't provided at all.
For this reason I installed my two drives into the second (CPU-connected) slot for my main drive and the third (chipset-connected) slot for games, both PCIe 4.0, so that my main x16 slot doesn't split. I'm never going to get a PCIe 5.0 M.2 drive; I'll be happy with my 6 TB of PCIe 4.0 drives for ten years. So for me it wasn't even a question: I slotted them into the PCIe 4.0 M.2 slots and was done.
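One way to be sure without trusting the manual: on Linux, `sudo lspci -vv` prints a `LnkSta` line for each device showing the negotiated speed and width, so you can check whether the GPU actually dropped to x8 with the M.2 slot populated. A small parser sketch; the sample line below is invented, but it follows the real `lspci` output format:

```python
import re

def link_status(lnksta_line: str) -> tuple[float, int]:
    """Parse negotiated speed (GT/s) and lane width from an `lspci -vv` LnkSta line."""
    m = re.search(r"Speed ([\d.]+)GT/s.*Width x(\d+)", lnksta_line)
    if not m:
        raise ValueError("no LnkSta fields found")
    return float(m.group(1)), int(m.group(2))

# Hypothetical LnkSta line for a GPU whose x16 slot has split to x8:
sample = "LnkSta: Speed 16GT/s (ok), Width x8 (downgraded)"
speed, width = link_status(sample)
print(speed, width)  # 16.0 8
```

On Windows, GPU-Z reports the same thing in its Bus Interface field.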
 
Are you absolutely sure the PCIe 5.0 M.2 slot doesn't split lanes with the PCIe graphics card slot? ...
Pretty much all the other mobos I looked at share bandwidth between some of the M.2 slots and expansion slots and/or SATA ports, but the X670E-A doesn't seem to share bandwidth between any of the M.2 slots and expansion slots.

https://www.memoryexpress.com/Products/MX00122629
https://rog.asus.com/us/motherboards/rog-strix/rog-strix-x670e-a-gaming-wifi-model/
 
I don't see anything indicating sharing, and AM5 does have x16 'for GPU', x4 'for M.2 NVMe', x4 'general purpose', and x4 'for the chipset', so it's not unreasonable for two M.2 NVMe slots and your first x16 slot to all be tied directly to the CPU. It does seem unusual, though; I think a lot of vendors use the general-purpose x4 for something else but still want two CPU-attached x4 M.2 slots. The M.2 slots tied to the chipset may have dedicated connections to the chipset, but everything gets squeezed into the chipset's PCIe 4.0 x4 uplink. That's a bottleneck, but it's unlikely to be noticeable in normal use unless you're running a CDN node or something.

I would fill the CPU slots first, and if you get a PCIe 5.0 SSD later, swap it into the accessible slot.
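To put a rough number on that squeeze: assume (hypothetical round figures) a ~7.9 GB/s PCIe 4.0 x4 chipset uplink and Gen4 drives that top out around 7 GB/s sequential. One drive behind the chipset never notices; two drives hammered at the same time have to split the uplink:

```python
# Hypothetical figures: X670 chipset uplink = PCIe 4.0 x4, ~7.9 GB/s usable;
# a fast Gen4 NVMe drive tops out around 7 GB/s sequential.
UPLINK_GBPS = 7.9

def per_drive_throughput(n_active: int, drive_gbps: float = 7.0) -> float:
    """Each chipset-attached drive gets at most its share of the uplink."""
    return min(drive_gbps, UPLINK_GBPS / n_active)

print(per_drive_throughput(1))  # 7.0 - a single drive sees no bottleneck
print(round(per_drive_throughput(2), 2))  # 3.95 - two drives flat out share it
```

In practice you'd only hit this when copying between two chipset-attached drives at full sequential speed; games and normal desktop use never get close.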
 
Ya, luckily the Aorus Master Z790, for example, has two dedicated CPU M.2 slots, and only the first one splits bandwidth with the x16 GPU slot, because only the first slot runs at CPU PCIe 5.0 speed; the second CPU M.2 slot is PCIe 4.0, so that explains it.
As long as you're sure it doesn't split, then yes, go for the CPU slot if there is only one. In my case there were two CPU M.2 slots, so I went for the one that doesn't split the bandwidth, since it's still a CPU-connected slot.
 
The M.2 slots tied to the chipset may have dedicated connections to the chipset, but everything gets squeezed into the chipset's PCIe 4.0 x4 uplink. That's a bottleneck, but it's unlikely to be noticeable in normal use unless you're running a CDN node or something.

I would fill the CPU slots first, and if you get a PCIe 5.0 SSD later, swap it into the accessible slot.
If I install one m.2 drive in a CPU m.2 slot, and another in a chipset slot, will there then be no bottleneck on the m.2 drive in the chipset slot? Or will there always be a bottleneck on anything going through the chipset compared to going directly through the CPU?
 
If I install one m.2 drive in a CPU m.2 slot, and another in a chipset slot, will there then be no bottleneck on the m.2 drive in the chipset slot? Or will there always be a bottleneck on anything going through the chipset compared to going directly through the CPU?
I have benchmarked the M.2 drive in the chipset slot, and it's actually even a bit faster than the one in the CPU-connected slot. Lol, I wouldn't worry about it at all.
 
If I install one m.2 drive in a CPU m.2 slot, and another in a chipset slot, will there then be no bottleneck on the m.2 drive in the chipset slot? Or will there always be a bottleneck on anything going through the chipset compared to going directly through the CPU?
Depends on what else is connected to your chipset, but I wouldn't worry too much either way.
 