Dell M.2 NVMe PCIe adapters

Bense

Limp Gawd
Joined
Aug 7, 2013
Messages
130
Does anyone know anything about either of these two Dell M.2 PCIe adapters? I'm considering picking one up if I can confirm that I can stripe the M.2 NVMe drives I install in it in RAID0.

The dual m.2 PCIe 3.0 x8 adapter appears to be part number(s):
NTRCY
23PX6
JV70F



The quad m.2 PCIe 3.0 x16 adapter appears to be part number(s):
80G5N
JV6C8
PHR9G
06N9RH


 
https://www.servethehome.com/the-dell-4x-m-2-pcie-x16-version-of-the-hp-z-turbo-quad-pro/

Requires PCIe bifurcation support in the mainboard chipset and enabled in BIOS/UEFI, and probably vendor-locked.

Wondering why one would even be considering such a thing. Unless you have a ludicrously fast workstation and have some kind of workload that can actually utilize a stupid-fast scratch disk, such a setup is a huge waste of money. Seeing as how under almost all scenarios one would be hard-pressed to realize any real-world difference between SATA and NVMe SSDs, putting multiple SSDs into RAID0 for a boot/data drive is, frankly, dumb.
 
ludicrously

You answered your question as well as I could have.


Dark Helmet: What happened? Where are they?!?
Colonel Sandurz: I don't know sir! They must have hyperjets on that thing!
Dark Helmet: And what have we got on this thing? A Cuisinart?!?


Come on, man. Don't give me that same ole BS rhetoric about how it's overkill, too powerful for anyone to use at home, etc., etc., etc.
Here, I'll translate it back to Spaceballs speak.

Dark Helmet: [pause] Yes! I always build my systems far too powerful for me to even utilize its full potential. You know that!
Colonel Sandurz: Of course, I do, sir.
Dark Helmet: Everybody knows that!
Crewmen: [covering their groins] Of course, we do, sir!
Dark Helmet: Now that I have a ludicrously fast workstation, I'm ready to browse Facebook. Where is it?

---------------------------------------
Seriously though..... let's compare it to the Samsung 970 Pro 1TB.


Samsung 970 Pro - 1TB NVMe M.2 is $450 on Amazon, and yields:
3500 MB/s read
2700 MB/s write

It's $165 shipped for the quad port adapter.
OEM Toshiba M.2 NVMe 256GB SSDs are $60 shipped on ebay. Each of them yields:
2400 MB/s read
1100 MB/s write

Even if there were 25% overhead, that's over 200% of the read speed of the 970 Pro and over 20% more write speed, and it's cheaper.


Seems like a no-brainer to me. Heck, even if you were to use four of the 128GB drives at $36 each:
8400 MB/s read
2400 MB/s write
You could do the whole setup for less than $300 if you were patient enough to make a few offers on eBay.
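To sanity-check the arithmetic, here's a quick sketch (Python; the prices and rated speeds are the ones quoted above, and the 25% striping overhead is just an assumption, not a measured figure):

```python
# Estimate aggregate RAID0 throughput and cost for the quad-adapter setup,
# using the rated speeds and eBay prices quoted above. The 25% striping
# overhead is a guess, not a measured number.

def raid0_estimate(n_drives, read_mbs, write_mbs, overhead=0.25):
    """Naive RAID0 scaling: n drives in parallel, minus a flat overhead."""
    scale = n_drives * (1 - overhead)
    return read_mbs * scale, write_mbs * scale

# Four OEM Toshiba 256GB drives ($60 each) on the $165 quad adapter
read, write = raid0_estimate(4, 2400, 1100)
total_cost = 4 * 60 + 165

print(f"~{read:.0f} MB/s read, ~{write:.0f} MB/s write for ${total_cost}")
# vs. the 970 Pro 1TB: 3500 MB/s read, 2700 MB/s write for $450
```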
 
I'm going to assume we're talking about system/boot drives.

To start, a good illustration of the real-life (non)differences between SATA and NVMe SSDs:
https://techreport.com/review/33545/samsung-970-evo-1-tb-ssd-reviewed/5

Even with ~3-6x+ the rated throughput, NVMe units show zero to minuscule real-life gains over SATA. Given that, how/why would RAIDing a few together produce any kind of substantial difference?

The reason SSDs feel so much faster relative to HDDs, and their real-life performance is so much better, is not raw throughput but access time. There's little difference in access times between NVMe and SATA SSDs, which is why their real-life performance is roughly equivalent. Current HDDs actually have pretty decent throughput (>200 MB/s in many cases), but their access time absolutely blows (>100x that of an SSD). That's why they suck as system drives. Firing up an OS, most apps, loading game levels, etc. consists of loading lots of small files into RAM, and HDDs take forever to find and read all those files.

So if access time is key, where does that leave us with these multi-m.2 adapters and RAID0? At best, nowhere. Running multiple SSDs in parallel is going to do nothing for access time. If anything, having to run the drives through the RAID layer might actually increase access time.
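That argument can be put in rough numbers with a toy model: loading N small files costs roughly N × (access latency + size ÷ throughput), so once latency dominates, even doubling throughput barely moves the total. All the figures below are illustrative assumptions, not measurements:

```python
# Toy model: time to load many small files = n * (access_latency + size/throughput).
# Latency and throughput figures below are ballpark assumptions for illustration.

def load_time_s(n_files, file_kb, latency_us, throughput_mbs):
    per_file = latency_us / 1e6 + (file_kb / 1024) / throughput_mbs
    return n_files * per_file

# Loading 10,000 files of 64 KB each:
ssd_latency_us = 100       # typical flash SSD access time (assumed)
hdd_latency_us = 10_000    # ~10 ms HDD seek (assumed)

single_nvme = load_time_s(10_000, 64, ssd_latency_us, 3400)
raid0_nvme  = load_time_s(10_000, 64, ssd_latency_us, 6800)  # 2x the throughput
hdd         = load_time_s(10_000, 64, hdd_latency_us, 200)

print(f"single NVMe: {single_nvme:.2f}s, RAID0 NVMe: {raid0_nvme:.2f}s, HDD: {hdd:.1f}s")
```

Under these assumptions, doubling throughput via RAID0 shaves under 10% off the total, while the HDD is roughly two orders of magnitude slower, which is the point about access time dominating.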


Seriously though..... let's compare it to the Samsung 970 Pro 1TB.


Samsung 970 Pro - 1TB NVMe M.2 is $450 on Amazon, and yields:
3500 MB/s read
2700 MB/s write


Yeah, good job on not trying to bias the comparison by picking the most expensive consumer SSD.

Seriously, rated only a tad slower in throughput than the 970 Pro:
Samsung 970 Evo 1TB: ~$340
WD Black 1TB: ~$330


It's $165 shipped for the quad port adapter.
OEM Toshiba M.2 NVMe 256GB SSDs are $60 shipped on ebay. Each of them yields:
2400 MB/s read
1100 MB/s write

Even if there were 25% overhead, that's over 200% of the read speed of the 970 Pro and over 20% more write speed, and it's cheaper.


Or $405 total for 1TB, ~10% less than the Pro, ~20% more than the Evo. For that you get increased complexity, somewhat higher system load (CPU and heat), a 5x (one per component) chance of a failure that takes out your system drive, and no warranties. All for benchmark numbers that have no bearing on real life.


Seems like a no-brainer to me. Heck, even if you were to use four of the 128GB drives at $36 each:
8400 MB/s read
2400 MB/s write
You could do the whole setup for less than $300 if you were patient enough to make a few offers on eBay.


$309 by the prices you give, with all the same inherent shortcomings as above.

Samsung 970 Pro 512GB: $230
Samsung 970 Evo 512GB: $170
WD Black 512GB: $160

The Pro is ~25% cheaper than the 4x 128GB SSD + adapter setup. The Evo is ~45% cheaper.


If it is such overkill, then why does every vendor seem to be releasing one of these in the next several months, etc.


There are legitimate uses for such adapters. Off the top of my head: a scratch disk for high-end A/V editing, or maybe (assuming proper backups to cover for RAID0) holding some kind of high-bandwidth database application or a number of high-usage virtual machines. Also, because they're cheap to design and produce with a good profit margin, and/or can be thrown in with high-end mainboards to sweeten the deal. And they know some people will buy them just to chase silly, useless benchmarks for e-peen.

But, in the end, it's your cash. Burn it however you want. Were it me, I'd rather pick this up.
 
As an Amazon Associate, HardForum may earn from qualifying purchases.
Yep, never have I experienced such disappointment with such a huge leap in performance.

"Is that it?"
 
I didn't post this thread for subjective opinions about what others think I might need, or what they think I might be able to notice in 'real world' testing. I am not concerned with what anyone else thinks about the performance metrics or my ability to utilize them efficiently and cost-effectively.
I've been doing this for a long time, and I feel I've gathered sufficient experience. Be that as it may, that's not why I created this thread. If you're interested in my two main builds, see the URLs in my signature.

I posted this thread to inquire about the feasibility of using a Dell M.2 dual or quad PCIe adapter. Previously, I've used the Dell PERC H310 in a plethora of systems. On some of them I had to block off certain PCIe pins to get the motherboard to even POST (a Gigabyte X58 board threw a fit over it). A few years ago I spent many hours on the ServeTheHome site working on the LSI -> PERC H310 / PERC H700 crossflash, but I had forgotten the site.

M.2 devices appear to be entering the market faster than any other interface I've seen in recent years. If you take a look at my MicroATX workstation thread, you'll see where I began picking apart the M.2 standard, researching it and figuring out all the specs/characteristics. This was before the Wikipedia page was populated.

I cannot keep up with the M.2 NVMe devices entering the market; I hadn't even heard of the WD Black. Go back to mid/late 2015 and you'll see there were only two M.2 NVMe devices that were worth a crap: the Samsung SM951 (OEM version) and the 950 Pro (consumer version). Here, lemme try to find the article...
https://arstechnica.com/gadgets/201...rst-pcie-m-2-nvme-ssd-is-an-absolute-monster/

Regardless of whatever the latest and greatest M.2 NVMe devices on the market are, if I can pick up a sub-$200 PCIe card that lets me stripe up to four of them in RAID0 -- that's worth it to me. I've been doing this stuff long enough to merit the consideration. I'm not asking for opinions. I'm asking for experience.
 
I can't even remember who owns the 3ware MEGAMAID company anymore.

Who is it now?
Broadcom?
LSI?
Avogadro?
 
MSI now includes a 4-drive adapter with its top-of-the-line board!
You can RAID 0 with any of the adapters that have been listed, but on Intel-based boards they require Intel SSDs to be bootable. Most new AMD mobos can boot off a RAID 0 with the correct BIOS updates and drivers provided by AMD!
 
Some of us just try to help others avoid wasting their money. But go right ahead.;)
 
I guess the next best thing to knowing anything about the hardware characteristics, or having experience with the card, is to respond by criticizing another person's consideration of the card.

On a lighter note, I appreciate BlueLineSwinger's reply in this thread. I was completely oblivious to the existence of this WD Black NVMe SSD.

WD Black NVMe M.2 SSDs
1000 GB - $330 - 3400 MB/s - 2800 MB/s --- $0.33 / GB
500 GB - $160 - 3400 MB/s - 2500 MB/s --- $0.32 / GB
250 GB - $100 - 3000 MB/s - 1600 MB/s --- $0.40 / GB

The specs on these are superior to the OEM Toshiba XG3 that I'd been looking at. I just looked at the specs of a few other M.2 NVMe SSDs. For comparison's sake, I'm looking at the ~500GB models (480GB, 500GB, 512GB):
* XPG GAMMIX S10 - $140 - 1800 / 850 - Underwhelming performance.
* Corsair Force MP500 - $280 - 3000 / 2400 - Slower speeds, 75% more than the WD Black, 3-year warranty as opposed to WD's 5-year
* Kingston Digital SA1000M8/480G A1000 480GB PCIe NVMe M.2 - $113 - 1500 / 900 - Underwhelming performance.
* ADATA XPG SX6000 PCIe 512GB 3D NAND PCIe Gen3x2 M.2 2280 NVMe - $105 - 1000 / 800 - Underwhelming

* Samsung 970 Evo 500GB - $170 - 3400 / 2300 - This appears to be on par with the WD Black, also has 5-year warranty.
* Samsung 970 Pro 512GB - $230 - 3500 / 2300 - Marginally faster than the Evo...

The pricing of the OEM Toshiba XG3 drives I've found..
1024 GB - $210 - 2400 / 1500
512 GB - $120 - 2400 / 1500
256 GB - $60 - 2400 / 1100
128 GB - $35 - 2100 / 600


It appears that this newer wave of M.2 NVMe SSDs is here (I'm sure most of you already know and realize this). The WD Black appears to be the best value, and is likely what I'll select. Now that I know this, and since my MSI X299M Gaming Pro motherboard has two M.2 slots, I can conclude that a pair of the WD Black 500GB striped in RAID0 is the best option for me right now. For rough comparison's sake, and assuming zero RAID overhead, this yields..
1000 GB - $320 - 6800 / 5000

However, four of the Toshiba OEM 256GB ($60 each) striped on a quad M.2 NVMe card -- even if the card were $160 -- would yield...
1024 GB - $400 - 9600 / 4400
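For what it's worth, the two idealized configurations above can be tabulated with a few lines of Python (same zero-overhead striping assumption as in the post):

```python
# Idealized RAID0 numbers for the two configurations weighed above,
# assuming perfect (zero-overhead) striping as in the post.

configs = {
    "2x WD Black 500GB (on-board M.2)": dict(n=2, price=160, read=3400, write=2500),
    "4x Toshiba OEM 256GB + quad card": dict(n=4, price=60,  read=2400, write=1100, card=160),
}

for name, c in configs.items():
    cost = c["n"] * c["price"] + c.get("card", 0)
    print(f"{name}: ${cost}, ~{c['n']*c['read']} MB/s read, ~{c['n']*c['write']} MB/s write")
```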



Now... in a year or so if I start running low on space, I'll then reconsider a quad m.2 PCIe card.
 
As long as you don't want to boot from the M.2s, your set-up will work fine. If you want to boot from them you will have to either use Intel drives or switch to an AMD mobo!
 
I have the quad version. I installed a 1TB 960 Pro in it and it boots fine. The Dell T7810 BIOS needed to be in legacy mode.

The fan on it gets stupid loud when my GPU heats up. I will be removing it in two weeks when my case mod parts arrive, adding two 80mm PWM fans to the case door, speed-controlled by this device.

There's no RAID, but if you want something faster, I suggest you check this out:
https://m.cdw.com/product/Samsung-P...id-state-drive-1.6-TB-PCI-Express-3.0/4839784
 
Bense is discussing a Raid 0 configuration, which would require using VROC on his MSI X299M mobo. On the X299 chipset, Intel restricts bootable Raid arrays to Intel drives only!
You can Raid drives of any manufacturer as non-boot drives!
 
Bense is discussing a Raid 0 configuration, which would require using VROC on his MSI X299M mobo. On the X299 chipset, Intel restricts bootable Raid arrays to Intel drives only!
You can Raid drives of any manufacturer as non-boot drives!

Guess I'll just install my bootloader onto a USB drive.
 
Bense is discussing a Raid 0 configuration, which would require using VROC on his MSI X299M mobo. On the X299 chipset, Intel restricts bootable Raid arrays to Intel drives only!
You can Raid drives of any manufacturer as non-boot drives!


Just looked into that. Annoying. F#%!ing Intel. As silly as I believe doing RAID0 for the system drive is (but hey, you do you I guess), there's no reason for such a feature to be locked to Intel's own SSDs.

It's almost as dumb as how they've segmented SSD caching (i.e., their version of Apple's Fusion drives) between Optane, and standard SSDs via SRT.
 
Bense is discussing a Raid 0 configuration, which would require using VROC on his MSI X299M mobo. On the X299 chipset, Intel restricts bootable Raid arrays to Intel drives only!
You can Raid drives of any manufacturer as non-boot drives!

Would me having the VROC key enable me to do this?
 
One further suggestion:

Optane.

Even more expensive for less space, but, it actually yields improved user experience. It's the new benchmark for solid-state performance.

And if you're after higher sustained speeds, just grab a few of the WD Blacks as storage drives.
 
X399 supports passive 4x4 cards in x16 slots, at least the ones that bother to put in the proper bios hooks. The Zen PCIe controller/fabric can subdivide every available lane to at least x4, not sure what the root cap is if you keep going down to x2 and x1. Some Epyc server boards use this to wire up a boatload of U.2 connectors for a big fat nvme flash storage platform. (24 drives and a pair of x16 fastest available NICs? Yes pls.)

Last I checked it is still cheaper and better to get the asus/asrock cards new than hunt down a used dell server pull. Kinda rare that the fleabay route is more expensive, it usually wins in similar battles.

X99, X299 and the big xeon chipsets are also capable of similar bifurcation but vendor support and artificial intel lock in bullshit may vary.
 
Would me having the VROC key enable me to do this?
No. Even with the VROC key you can only use Intel drives if you wish them to be bootable.
The only thing the VROC key adds is Raid 1, Raid 5 & Raid 10; with no key you're limited to Raid 0!
 
One further suggestion:

Optane.

Even more expensive for less space, but, it actually yields improved user experience. It's the new benchmark for solid-state performance.

And if you're after higher sustained speeds, just grab a few of the WD Blacks as storage drives.

Two or more optane drives in Raid 0 would give you crazy speed but would definitely set you back a few bucks.
Could use the M.2 slots in the MSI board with a pair of these: https://www.newegg.com/Product/Prod...F-000F3&cm_re=905p-_-1Z4-009F-000F3-_-Product
 
Two or more optane drives

Not sure why you'd use more than one; it's the improved latency the OS needs, and the transfer rates are already high enough for that purpose.

Use cheaper, larger NVMe drives for larger fast storage.
 
I didn't post this thread for subjective opinions about what others think I might need, or what they think I might be able to notice in 'real world' testing. I am not concerned with what anyone else thinks about the performance metrics or my ability to utilize them efficiently and cost-effectively.




This is why!
 
Marvel at the four digit transfer speeds then brace for impact at the disappointing realisation that it still grinds to a kbps crawl when it hits millions of microfiles.
 
I just got a 970 Evo 500GB yesterday for $140 on Newegg.com. After installing Windows and stuff I ran a benchmark, and I get really close to 2500 MB/s (2490-something) on writes and over 3400 MB/s on reads. It's a slightly noticeable upgrade from a 250GB 960 Evo. I believe the 500GB 970 is slightly faster than the 960 Pro 512GB. They improved the random performance, and that's the most perceivable difference when opening a bunch of random programs at once; the system seems more responsive overall.
 
Just for the sake of it... I was searching and hit this post.

So, OP, what did you go with? Did you get one?

I have a Dell T5810 with one of these, and it has a Samsung 512GB PM960 and a Samsung 512GB SM951.
I just did a copy between them as I'm testing my 10Gb network (ruling them out as an issue),
and I get 1.35 GB/s copying a 7GB ISO.

UEFI is enabled in the BIOS and they are the only two drives in the machine.
 
The new AData XPG SX8200 Pro 1TB M.2 NVMe SSD has my attention. Here is a review of it.
Below are the updated prices of the WD Black as well.

AData SX8200 Pro M.2 NVMe SSDs
1024 GB - $210 - 3350 MB/s - 2800 MB/s --- $0.21 / GB
512 GB - $108 - 3350 MB/s - 2350 MB/s --- $0.21 / GB
256 GB - $70 - 3350 MB/s - 1150 MB/s --- $0.27 / GB


WD Black NVMe M.2 SSDs
1000 GB - $250 - 3400 MB/s - 2800 MB/s --- $0.25 / GB
500 GB - $120 - 3400 MB/s - 2500 MB/s --- $0.24 / GB
250 GB - $80 - 3000 MB/s - 1600 MB/s --- $0.32 / GB

That $250 for the 1TB WD black is on the higher point of what I've been seeing on Amazon.
 
Yup, the SX8200 Pro is what I'd get right now. Pretty cheap in the UK at the mo for the 500GB.
 
I'm trying to read up on this Intel bootable-RAID SSD limitation. So if I understand correctly: if I were to get two of these AData SX8200 NVMe M.2 SSDs and install them in both of my on-board M.2 slots, I wouldn't be able to stripe them in RAID0 and boot from it? And the only way to stripe a pair of M.2 NVMe SSDs in my on-board M.2 slots and create a bootable volume would be to use Intel-brand NVMe SSDs?

Is there really not a workaround for this?
 
I have been told by a reliable source that on some Asus boards, if you use a Hyper M.2 x16 AIC to mount the drives, you can get Samsung 960 & 970 Pro drives to boot! I haven't actually done it myself, but I believe he did make it work.
Best of luck!
 
I did put a pair of 380GB M.2 Intel Optane 905P drives in a RAID 0 boot drive set-up for a customer just recently. I didn't do any benchmarks, but this thing is noticeably faster than even the single-drive 905P U.2 set-up.
The customer was using massive SolidWorks files that used to be slow to open, but now open damn near instantly.
Engineers' time is expensive, so any time saved is money in the bank!!
 
I did put a pair of 380GB M.2 Intel Optane 905P drives in a RAID 0 boot drive set-up for a customer just recently. I didn't do any benchmarks, but this thing is noticeably faster than even the single-drive 905P U.2 set-up.
The customer was using massive SolidWorks files that used to be slow to open, but now open damn near instantly.
Engineers' time is expensive, so any time saved is money in the bank!!

Exactly. I guess I should mention that I am a mechanical engineer and I use SolidWorks, AutoCAD, and Inventor to design transmission parts. That's one of the compelling reasons I'd be interested in something like RAID0 on two M.2 NVMe SSDs.

I can buy two 512GB WD Black or SX8200 drives for the price of one 1024GB drive. If I can stripe them in my on-board slots, that's twice the throughput for the same price. Why wouldn't I want to do that?

Any risk of data loss is negligible to me, as I use Google Drive, Amazon Drive for photos, iTunes Match, etc.
 
The set-up I used was an Asus WS C422 PRO/SE, a W-2145 CPU (SolidWorks doesn't yet make use of more than 8 cores), 128GB of Kingston ECC RAM, two 905P M.2 Optane drives in Raid 0 installed in an Asus M.2 x16 PCIe AIC, and a PNY RTX 5000 graphics card.
Raid set up was really simple in the UEFI Bios.
Even though the Optane drives are quite expensive, the really low latency makes them quite a bit faster than even the Samsung 970 Pros!
I feel your pain that Intel will only let you run their drives using VROC, but it is quite a marketing idea!
 
Exactly. I guess I should mention that I am a mechanical engineer and I use SolidWorks, AutoCAD, and Inventor to design transmission parts. That's one of the compelling reasons I'd be interested in something like RAID0 on two M.2 NVMe SSDs.

I can buy two 512GB WD Black or SX8200 drives for the price of one 1024GB drive. If I can stripe them in my on-board slots, that's twice the throughput for the same price. Why wouldn't I want to do that?

Any risk of data loss is negligible to me, as I use Google Drive, Amazon Drive for photos, iTunes Match, etc.
You will not get twice the speed using the on-board slots, as both Blacks will be going through the chipset...
[attached CrystalDiskMark screenshot: crystalCapture.PNG]
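For context on why the numbers cap out: both on-board M.2 slots typically hang off the chipset, which shares a single DMI 3.0 uplink to the CPU (roughly equivalent to PCIe 3.0 x4). A sketch of that ceiling follows; the ~3500 MB/s usable figure is an estimate after protocol overhead, not a measured spec:

```python
# Two drives behind the chipset share one DMI 3.0 uplink (~PCIe 3.0 x4).
# The ~3500 MB/s usable figure is an estimate after protocol overhead.

DMI3_USABLE_MBS = 3500  # assumed usable bandwidth of the DMI 3.0 link

def chipset_raid0_read(per_drive_mbs, n_drives, link_mbs=DMI3_USABLE_MBS):
    """Aggregate read speed is capped by the shared chipset uplink."""
    return min(per_drive_mbs * n_drives, link_mbs)

print(chipset_raid0_read(3400, 2))  # two WD Blacks: capped at the DMI limit
```

A CPU-attached x8/x16 adapter card avoids this cap, which is why bifurcation cards scale where on-board chipset slots do not.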
 
You will not get twice the speed using the on-board slots, as both Blacks will be going through the chipset...

Thank you for posting this benchmark. Mind sharing some more specs about the system, motherboard, etc?

I actually thought about this a good bit yesterday. I updated my build thread with some of my thoughts; BenseBuilt Liquid-cooled MicroATX Mini Workstation / Riser Card Research / X99 / X299. Your post confirms some of the speculations/considerations that I had yesterday, and has me leaning towards a PCIe adapter. I'll admit that I hadn't previously taken the time to read up on PCIe Bifurcation until now. I had just assumed that these adapters had some sort of processor on them that let me do whatever RAID configuration I wanted on them. I now realize that this is not the case.

Ideally, I would like to find one that does not require PCIe Bifurcation. One that is agnostic of my motherboard and has an onboard PLX switch / processor / logic / etc that I can use on 'any' motherboard, that lets me RAID the SSDs. If I am unable to find one that meets my criteria, perhaps this MSI M.2 Xpander Aero (see also this link to a review of it) would work in my MSI X299M Gaming Pro Carbon AC.

If I am able to do so in a painless manner, and in the circumstance where I might find myself needing to add another PCIe device, perhaps I could use my on-board m.2 slot that runs through DMI to add a 10Gbe adapter. I discuss this in further detail in my BenseBuilt workstation thread though.
 
Updating prices, and a link to WD's revamped Black SN750.

AData SX8200 Pro M.2 NVMe SSDs
1024 GB - $183 - 3350 MB/s - 2800 MB/s --- $0.178 / GB
512 GB - $100 - 3350 MB/s - 2350 MB/s --- $0.195 / GB
256 GB - $60 - 3350 MB/s - 1150 MB/s --- $0.234 / GB


New Version WD Black SN750 NVMe M.2 SSDs
2000 GB - $??? - 3400 MB/s - 2900 MB/s --- $0.?? / GB
1000 GB - $230 - 3470 MB/s - 3000 MB/s --- $0.23 / GB
500 GB - $120 - 3470 MB/s - 2600 MB/s --- $0.24 / GB
250 GB - $83 - 3100 MB/s - 1600 MB/s --- $0.33 / GB
 
Updating prices...

AData SX8200 Pro M.2 NVMe SSDs
1024 GB - $160 - 3350 MB/s - 2800 MB/s --- $0.156 / GB
512 GB - $78 - 3350 MB/s - 2350 MB/s --- $0.152 / GB
256 GB - $57 - 3350 MB/s - 1150 MB/s --- $0.223 / GB


New Version WD Black SN750 NVMe M.2 SSDs
2000 GB - $??? - 3400 MB/s - 2900 MB/s --- $0.?? / GB
1000 GB - $228 - 3470 MB/s - 3000 MB/s --- $0.228 / GB
500 GB - $105 - 3470 MB/s - 2600 MB/s --- $0.210 / GB
250 GB - $69 - 3100 MB/s - 1600 MB/s --- $0.276 / GB


I purchased an Asus Hyper M.2 x16 v2 for $54 shipped, along with two of the AData SX8200 Pro 512GB. On my MSI X299M Gaming Pro Carbon AC, the board sees it after I enable PCIe bifurcation on the slot it's installed in. As others have suggested, the BIOS does not let me create a RAID0 volume. However, I've read articles suggesting there might be a chance of creating a volume from the Intel VROC software application in Windows. I recognize that this will not be bootable; however, I am going to see if I can find a workaround.

I might be able to install Windows (I use Windows 8.1 x64 Enterprise, in case that matters) onto my SATA SSD, create the RAID0 volume, then boot from the Windows 8.1 installation media, load the VROC driver, and install Windows onto the RAID0 volume. Then boot from the SATA SSD and, using the Windows bootloader, select the RAID0 NVMe volume.


I should have never gone with X299. My case is mATX. Previously, I had X99 with a crazy, over-engineered watercooling loop that I didn't feel like changing at the time. In October 2017, I built a machine for a client and used my X99 / i7-5820K setup for their computer. At the time, the most lateral step was X299 with this mATX board by MSI. The ASRock X399M was not available then. I would consider selling my X299 motherboard and CPU and getting that X399M board, but with the new AMD stuff coming out in a few months, it seems best to just wait. In other words, I'm 'stuck' with X299 for now.
 