NVMe PCI-E Adapter Expansion card questions

Just have a few questions for someone....

I see they make PCIe slot expansion cards to hold NVMe drives. I'm just curious whether there are any drawbacks to using these, i.e. speed/performance or anything like that.

Any reason to use the slot on the mobo vs. one of the PCIe adapters? It doesn't hinder using one as a main OS drive, does it?

Why, you may ask? I just upgraded to a Ryzen build. First off, on my new mobo the M.2 slot is literally right above the graphics card (probably not the best design plan), so I was partly worried about longevity with heat, etc., since this is my data.

Second, I know this is new technology, but it seems like this design makes the drives significantly more difficult to remove when you need to. Not that you're swapping out NVMe drives all the time, but I found it pretty annoying versus just unplugging the cables on my SATA drives. Within the last few weeks I had a few NVMe SSDs I was benchmarking, and I had to remove my graphics card each time to install and remove the damn things. Life would be so much easier with the PCIe adapter card. I would actually prefer if these things sat in drive bays like normal hard drives/SATA drives.

The other part of this, of course, is that I am considering having two NVMe drives in the system and wondering if that would work as well, since the mobo only has ONE M.2 slot.

They only cost like $10-15.
 
TL;DR: yes, it should work without any major problems.



Depends on the Ryzen mobo in question: the M.2 slot on the board may be directly connected to the CPU (part of the 20 lanes available to the user, plus 4 more for the chipset). Does that mean much for performance? That depends on a lot of factors and may ultimately not matter to you at all! What model is your motherboard?

Unless you are taking lanes from the GPU, any of the other slots are likely using lanes from the chipset. With a 300- or 400-series motherboard, that means PCIe 2.0 speeds. With X570, that's PCIe 4.0 speeds. However, anything using the chipset has to share bandwidth with other chipset devices (some, not all, of the USB ports; LAN; WiFi; etc.).
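
To put rough numbers on those generations, here's a back-of-the-envelope sketch in Python (the helper function and constants are just mine, for illustration):

```python
# Back-of-the-envelope PCIe bandwidth math (hypothetical helper, not from
# any library). Per-lane rates after encoding overhead:
#   PCIe 2.0: 5 GT/s,  8b/10b encoding    -> ~0.50 GB/s per lane
#   PCIe 3.0: 8 GT/s,  128b/130b encoding -> ~0.98 GB/s per lane
#   PCIe 4.0: 16 GT/s, 128b/130b encoding -> ~1.97 GB/s per lane
GBPS_PER_LANE = {
    "2.0": 5 * (8 / 10) / 8,
    "3.0": 8 * (128 / 130) / 8,
    "4.0": 16 * (128 / 130) / 8,
}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link, in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

for gen in ("2.0", "3.0", "4.0"):
    print(f"PCIe {gen} x4: ~{link_bandwidth(gen, 4):.1f} GB/s")
# PCIe 2.0 x4: ~2.0 GB/s  (chipset slots on 300/400-series boards)
# PCIe 3.0 x4: ~3.9 GB/s  (CPU-attached M.2 slot)
# PCIe 4.0 x4: ~7.9 GB/s  (X570 chipset or CPU lanes)
```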

Outside of that, there are questions like: what kind of M.2 "riser" board are you using?
If that board can host 2-4 M.2 drives, does your motherboard support splitting a single slot into x4/x4/x4/x4?
If it's a simple single-drive board, does your motherboard actually have a spare x4 or x16 slot for the card?
Will the slot you install the card into take lanes from the GPU? (One way to verify the negotiated link after installation is sketched below.)
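
On that last point, once a card is installed you can check what link it (or the GPU) actually negotiated. A minimal Linux sketch, assuming pciutils is installed and root privileges; the 01:00.0 address is just a placeholder:

```python
# Print the capable vs. negotiated PCIe link for one device (Linux sketch;
# assumes pciutils is installed and the script runs as root, since lspci
# hides capability info from unprivileged users).
import subprocess

BDF = "01:00.0"  # placeholder -- find yours with `lspci | grep -i non-volatile`

out = subprocess.run(
    ["lspci", "-vv", "-s", BDF],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    stripped = line.strip()
    if stripped.startswith(("LnkCap:", "LnkSta:")):
        print(stripped)  # e.g. "LnkSta: Speed 8GT/s (ok), Width x4 (ok)"
```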


That being said, it's 2020; most motherboards know how to enumerate NVMe devices attached to any slot and know how to boot from them. So that's not an issue.
 
Thanks for the info. I had this question partly answered on another forum already. Basically, they said I just need to compare the M.2 slot speed and see if it matches the PCIe slot of the mobo; if it's the same, then it should be fine. It sounds like that's the answer.

MSI B450 Tomahawk Max.

And no, part of my question is because I am also considering having two NVMe drives in the system, as the board only has ONE M.2 slot. So I am trying to figure out if that $15 PCIe adapter card is worth it. If I can have a storage drive on a second NVMe drive, I would have some killer transfer speeds from the backup drive.

And even if I wanted to have both drives connected to an expansion card (is that even possible?), it would allow the main OS boot NVMe drive to sit away from the M.2 slot that sits right above the heat of the graphics card.
 
Any reason to use the slot on the mobo vs. one of the PCIe adapters? It doesn't hinder using one as a main OS drive, does it?

There should be no difference in performance, but it really comes down to the PCIe lanes.

My previous computer was an i7-5820K on an EVGA X99 motherboard. Back in 2015 when I built it, I picked a mobo that didn't have M.2 slots because, at the time, I didn't think I would go that route. I eventually got a PCIe M.2 riser/adapter card and it worked fine; I used a Samsung 960 Pro NVMe drive as my boot/OS/games drive.

Not all M.2 slots are created equal. My current X570 motherboard has three M.2 slots. All of them are PCIe 4.0, but only two run at x4, while one runs at only x2 (still 4.0, just two lanes, so roughly half the bandwidth: ~3.9 GB/s vs. ~7.9 GB/s). If you had to choose between an M.2 slot at x2 and a PCIe riser/adapter connected to an x4 slot, the riser/adapter would be faster.

On my new mobo the M.2 slot is literally right above the graphics card (probably not the best design plan), so I was partly worried about longevity with heat, etc., since this is my data.

Depends on the motherboard, obviously, but all three M.2 slots on my X570 motherboard have heatsinks. With a riser card you'd have to install your own heatsink somehow. Also, keep in mind that many programs (CrystalDiskInfo, Samsung Magician, etc.) will let you monitor the temperature of your SSD.
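
For example, on Linux you can pull the same sensor reading those tools use straight from SMART data; a rough sketch, assuming smartmontools is installed and root privileges (/dev/nvme0 is a placeholder):

```python
# Read an NVMe drive's temperature from its SMART health log (sketch;
# assumes smartmontools is installed and the script runs as root).
import subprocess

out = subprocess.run(
    ["smartctl", "-A", "/dev/nvme0"],  # placeholder device path
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if line.startswith("Temperature:"):
        print(line)  # e.g. "Temperature: 42 Celsius"
```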
 
Based on the B450 platform design, the M.2 slot will be faster than the second PCIe x16 slot. The second x16 slot appears to be connected to the B450 chipset, while the M.2 slot is directly connected to the CPU. In addition, the M.2 slot runs at PCIe 3.0, while the B450 chipset's lanes run at PCIe 2.0.

Going through the chipset may slow down random reads/writes. PCIe 2.0 will definitely cap sequential speeds at around 2 GB/s, assuming your NVMe drive of choice can max out a 2.0 x4 interface.
 
So what you're saying is I probably should have thought of this before getting a B450 mobo and gone with an X570?
 
I have read somewhere that actively cooling the NAND chips actually reduces their life. The only thing heat appears to do is throttle the SSD controller, which would reduce your sequential speeds over a prolonged period. I am not sure if heat has a meaningful impact on the SSD controller's life.
 
I have read somewhere that actively cooling the NAND chips actually reduces their life. The only thing heat appears to do is throttle the SSD controller, which would reduce your sequential speeds over a prolonged period. I am not sure if heat has a meaningful impact on the SSD controller's life.

A heatsink is passive cooling; active cooling would require a fan. I can believe that extra cooling might not help or be required, but how could it possibly reduce the life of the NAND?
 
A heatsink is passive cooling; active cooling would require a fan. I can believe that extra cooling might not help or be required, but how could it possibly reduce the life of the NAND?

https://www.ekwb.com/blog/ssd-cooling/

I hate to link EK, but it was one of the first official-sounding articles I could find. Basically, if NAND cells are too cool when writing, the increased resistance (higher temperature reduces electrical resistance) causes the cells to have a shorter lifespan. This really only seems to come into play if the NVMe drive is cooled to near-ambient temps; I imagine it will do just fine with a passive heatsink sitting underneath a GPU.
 
I have read somewhere that actively cooling the NAND chips actually reduces their life. The only thing heat appears to do is throttle the SSD controller, which would reduce your sequential speeds over a prolonged period. I am not sure if heat has a meaningful impact on the SSD controller's life.

It's true: it is less damaging to the cells during programming (writing) if the temperature is higher. However, data is retained longer (for reading back later) if the flash is cooler when idle. While this difference can be huge at times, for consumer use it's usually a drop in the bucket.

The controllers tend to be ARM-based and can, in theory, handle up to 125°C. However, they are designed to throttle in the 70-80°C range (as reported via the sensor/SMART), although they can actually run hotter than that internally or under FLIR. They are pieces of silicon just like your CPU (in fact, they are microprocessors), so in general they will be robust.
 
Going through the chipset may slow down random reads/writes. PCIe 2.0 will definitely cap sequential speeds at around 2 GB/s, assuming your NVMe drive of choice can max out a 2.0 x4 interface.

Absolutely. Older AMD chipsets (pre-X570) are PCIe 2.0 downstream, so adapters will get at most PCIe 2.0 x4. You have to take lanes from the GPU (if the board has two GPU slots) to get 3.0.

Going over the chipset also adds some latency to drive operations because the chipset is effectively a PCIe switch (it's x4 PCIe 3.0 upstream on pre-X570).
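
If you want to see whether that latency actually shows up, the usual probe is a 4K random-read test at queue depth 1. A rough sketch driving fio from Python, assuming fio is installed and run with privileges; the device path is a placeholder, and the --readonly flag keeps the test from writing anything:

```python
# Run a short read-only 4K QD1 random-read test against a drive and dump
# fio's report (sketch; assumes fio is installed and the script runs as root).
import subprocess

def qd1_randread(dev: str) -> str:
    """fio 4K random reads at queue depth 1; compare the latency
    percentiles of a CPU-attached vs. a chipset-attached drive."""
    return subprocess.run(
        ["fio", "--name=qd1", f"--filename={dev}", "--rw=randread",
         "--bs=4k", "--iodepth=1", "--direct=1",
         "--runtime=30", "--time_based", "--readonly"],
        capture_output=True, text=True, check=True,
    ).stdout

print(qd1_randread("/dev/nvme0n1"))  # placeholder device path
```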
 
Absolutely. Older AMD chipsets (pre-X570) are PCIe 2.0 downstream, so adapters will get at most PCIe 2.0 x4. You have to take lanes from the GPU (if the board has two GPU slots) to get 3.0.

Going over the chipset also adds some latency to drive operations because the chipset is effectively a PCIe switch (it's x4 PCIe 3.0 upstream on pre-X570).

Normally when doing that, the x16 slot for one's video card drops to x8/x8 mode if there is ANYTHING plugged into the second x16 slot. It shouldn't be an issue unless you're running uber-high-end cards.
 
The B450 platform (and really, all Ryzen AM4 processors) has 24 PCIe lanes: 16 for the GPU(s), 4 for NVMe, and 4 to the chipset. Athlon AM4 processors have only 2 lanes for NVMe. B450 boards are not allowed to have x8/x8 PCIe configurations, only a single x16.
 
Normally when doing that, the x16 slot for one's video card drops to x8/x8 mode if there is ANYTHING plugged into the second x16 slot. It shouldn't be an issue unless you're running uber-high-end cards.

Right, just about any current GPU is fine with x8 PCIe 3.0, on motherboards that are capable of bifurcating in that manner.
 
The B450 platform (and really, all Ryzen AM4 processors) has 24 PCIe lanes: 16 for the GPU(s), 4 for NVMe, and 4 to the chipset. Athlon AM4 processors have only 2 lanes for NVMe. B450 boards are not allowed to have x8/x8 PCIe configurations, only a single x16.

You're right. I was thinking of my old X470 CH7.
 
I've bought a few of these, and IMO the ASUS cards are very reliable; I have a couple of those. The cheaper ones can drop out occasionally, which is not good.
 