8x PCIe to two 4x m key NVME m.2 adapter?

Zarathustra[H]

Has anyone seen one of these?

Does it exist?

In my googling I've had a difficult time finding one. There are a few dual adapters out there, but they all seem to have only one M-key slot for NVMe, with the other slot being SATA only.

Appreciate any suggestions.
 
Zara-
Aplicata makes x8 and x16 multi-drive cards. They seem to work and can take up to 4 NVMe drives per PCIe slot.


Holy shit. $450? Not what I was expecting for an adapter device.

I'm looking for a cheap way to hook up 2x small Optane m2 sticks to a single 8x PCIe slot to replace my aging mirrored Intel S3700 SSD's on ZFS ZIL duty.
 
Holy shit. $450? Not what I was expecting for an adapter device.

I'm looking for a cheap way to hook up 2x small Optane m2 sticks to a single 8x PCIe slot to replace my aging mirrored Intel S3700 SSD's on ZFS ZIL duty.
Well, these aren't just simple pass-throughs. They (the 400, for example) have a PCIe switch onboard, with the requisite software to support the card even in machines that don't support bifurcation, unlike the Dell 4-port.
 
Well, these aren't just simple pass-throughs. They (the 400, for example) have a PCIe switch onboard, with the requisite software to support the card even in machines that don't support bifurcation, unlike the Dell 4-port.

Hmm.

I think a simple passthrough device is more what I am looking for. That Dell 4-port looks fine, but it is a 16x device. I wonder how it behaves if placed in an 8x slot: 4x each to two NVMe devices, two lanes each to all four slots, or does it stop working altogether?

Also, even that Dell adapter is kind of ridiculously priced at $239. I'd expect it to be priced just like a passive PCIe riser (I mean, that's essentially what it is). It should be <$50.
 
The ones without a PCIe switch don't work with all motherboards, so the seller has to be willing to deal with the inevitable returns.
 
The ones without a PCIe switch don't work with all motherboards, so the seller has to be willing to deal with the inevitable returns.


The question is, how common is bifurcation support in server motherboards?

For consumer stuff probably pretty rare, but this is going in my Supermicro X8DTE server board. No mention of bifurcation in the manual though.
 
You're unlikely to get bifurcation working on an ancient motherboard like that. It didn't really become common until the X10 generation boards.
 
You're unlikely to get bifurcation working on an ancient motherboard like that. It didn't really become common until the X10 generation boards.

That stinks, but I appreciate the info.

My dual-L5640 setup (6c/12T each) has been working so well that I have no real reason to replace it and spend a bloody fortune on 256GB of DDR4 RAM.

I wanted to speed up my ZFS pool performance by replacing my two mirrored SATA Intel S3700 SSD's as SLOG/ZIL devices with a pair of NVMe Optane devices, as well as pop in a 1TB NVMe drive to replace my two SATA Samsung 850 Pros that are currently serving in that role.

Bifurcation seemed like the most obvious way to do this, due to available PCIe slot count, but I think I can accomplish the same if I free up a slot by pulling out one of my two LSI SAS HBA's and replace it with one of those HP SAS expanders. They have gotten so cheap now! I saw one for $18 on eBay.

If I do that, I can fit three single-drive m.2 adapters, each in its own slot....

...and it opens up 6 more slots for more hard drives so I can add another 6 drive VDEV in the future...
 
There are several options out there for switched PCI-E M key cards. I have been doing research for my x58 xeon rig and have come across the following:

Addonics x8 to 2 x4 https://www.addonics.com/products/ad2m2nvmpx8.php

Highpoint line http://www.highpoint-tech.com/USA_new/nvme-storage-solution.htm

QNAP makes some cards for their enclosures; no reason they wouldn't work on a Windows PC. Make sure you get a PCIe version https://www.qnap.com/en-us/product/qm2-m.2ssd

SYBA/IOcrest https://www.sybausa.com/index.php?route=product/product&product_id=992

There may be more options coming out due to new switch chips coming online. However, I must ask what sort of networking you have that a SATA 3 SSD would be insufficient for ZIL?
 
There may be more options coming out due to new switch chips coming online. However, I must ask what sort of networking you have that a SATA 3 SSD would be insufficient for ZIL?

Pretty much every network.

With my mirrored 100GB Intel S3700 SATA drives, write performance is only about 100MB/s, so that right there is a bottleneck even for gigabit ethernet. I also have 10gig going straight to my workstation and a lot of local VMs pounding the pool on the local machine without any network being involved.
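
Quick back-of-the-envelope sketch of why ~100MB/s doesn't cut it (rough line-rate numbers of my own, ignoring protocol overhead):

Code:
# Rough line-rate figures, ignoring TCP/NFS/iSCSI overhead.
GIGABIT_MB_S = 1e9 / 8 / 1e6    # 1 Gbit/s  -> 125 MB/s
TEN_GIG_MB_S = 10e9 / 8 / 1e6   # 10 Gbit/s -> 1250 MB/s

slog_mb_s = 100                 # measured mirrored S3700 write throughput

for name, link in [("1GbE", GIGABIT_MB_S), ("10GbE", TEN_GIG_MB_S)]:
    verdict = "SLOG is the bottleneck" if slog_mb_s < link else "link is the bottleneck"
    print(f"{name}: link ~{link:.0f} MB/s vs SLOG ~{slog_mb_s} MB/s -> {verdict}")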

The sequential performance of your ZIL drives is almost completely irrelevant. What matters for ZIL performance is write latency, and NVMe drives are by far the best in this regard.

What you want in a ZIL drive is as low latency as possible, and capacitor-backed writes, so that if the power suddenly goes out the last writes are still committed to the drive. Other, more traditional performance metrics and drive size really do not matter at all.
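
To put some rough numbers on the latency point (ballpark latencies I'm assuming for illustration, not measured values): a single sync-write stream can't go any faster than one write per round trip to the log device, so throughput scales directly with latency.

Code:
# Single-stream sync-write ceiling: each sync write waits for the log device
# to acknowledge, so throughput ~ write_size / write_latency.
# Latencies below are ballpark assumptions, for illustration only.
write_kib = 4  # small sync writes, typical of ZIL traffic

devices = {
    "SATA SSD   (~70 us)": 70e-6,
    "NVMe flash (~25 us)": 25e-6,
    "Optane     (~10 us)": 10e-6,
}

for name, latency_s in devices.items():
    iops = 1.0 / latency_s
    mb_s = iops * write_kib / 1024
    print(f"{name}: ~{iops:,.0f} sync IOPS, ~{mb_s:,.0f} MB/s per stream")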

You also want to mirror two of them because you really don't want to lose your ZIL if something goes wrong.

If not for the fact that Intel's small 16GB and 32GB Optane drives are limited to 2x lanes, they would probably be the perfect ZIL drives, because they have excellent write latency and you don't waste space.

Short of that, you'd probably need 280GB Intel Optane drives, which is kind of a shame, as you'll never use more than ~1-2GB of capacity on them even with 10gig or local writes, and you can't share them with any other data, as then you kill the latency performance.
 
There are several options out there for switched PCI-E M key cards. I have been doing research for my x58 xeon rig and have come across the following:

Addonics x8 to 2 x4 https://www.addonics.com/products/ad2m2nvmpx8.php

Highpoint line http://www.highpoint-tech.com/USA_new/nvme-storage-solution.htm

QNAP makes some cards for their enclosures; no reason they wouldn't work on a Windows PC. Make sure you get a PCIe version https://www.qnap.com/en-us/product/qm2-m.2ssd

SYBA/IOcrest https://www.sybausa.com/index.php?route=product/product&product_id=992

There may be more options coming out due to new switch chips coming online. However, I must ask what sort of networking you have that a SATA 3 SSD would be insufficient for ZIL?


I somehow missed this post first time around. Good info. Thank you.

Now the question is, are any of them affordable :p
 
Does anyone know how these multiplexers work?

So, let's say I stick one of these multiplexers in an 8x Gen2 port on my server. (It's old, but still works.)

Let's then say I stick two 32GB Optane NVMe drives, which are 2x Gen3 each, in the multiplexer. Will they get the full Gen3 bandwidth, or will they be limited to the upstream Gen2?

The reason I ask is because I'd be using 4x Gen3 lanes behind the multiplexer, and that's about the same amount of bandwidth as 8x Gen2.
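
Here is the lane math as I understand it (usable per-lane rates after encoding overhead; whether the drives actually negotiate Gen3 behind the switch depends on the card):

Code:
# Usable bandwidth per lane after encoding overhead (approximate).
GEN2_LANE_MB_S = 5e9 * (8 / 10) / 8 / 1e6     # 5 GT/s, 8b/10b    -> ~500 MB/s
GEN3_LANE_MB_S = 8e9 * (128 / 130) / 8 / 1e6  # 8 GT/s, 128b/130b -> ~985 MB/s

uplink = 8 * GEN2_LANE_MB_S         # x8 Gen2 slot feeding the switch
drives = 2 * (2 * GEN3_LANE_MB_S)   # two x2 Gen3 Optane sticks behind it

print(f"x8 Gen2 uplink      : ~{uplink:.0f} MB/s")
print(f"2x (x2 Gen3) drives : ~{drives:.0f} MB/s")
# ~4000 vs ~3940 MB/s, so the Gen2 uplink shouldn't be the limiting factor,
# assuming the switch links each drive at Gen3 x2.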

This would make for a great bang for the buck SLOG/ZIL drive for ZFS where latency is king, but storage size and sequential transfer speed are more or less meaningless.

This is what I am talking about. The larger Optane AIC cards are obviously faster, but they also cost MUCH more, and mirroring two of them would take two PCIe slots instead of just one...

This of course depends on the multiplexer being low latency. If it adds a lot of latency it will kill the ZIL performance.
 
There are no after-the-fact multiplexers for PCIe. There are muxes, commonly implemented on boards, that let you run one GPU at x16 or two at x8/x8 in two slots, but that will not help split a slot already on a board. tl;dr version: support for this mux is baked in when they design and ship the board; you can't add it later.

Then there is bifurcation, supported mostly by recent server chipsets; your BIOS must specifically support it on the specific slots you want to use. These cards are cheap because they are really only electrical adapters with no logic, which is why you can find 4x4-in-x16 cards with fancy heatsinks and fans for a little more than $50.

Then there are PCIe switch controllers. The company that made them has been gobbled up into the mothership by Broadcom, who summarily raised prices, so any motherboard or add-in card using these today is pricey as hell. Like $400 for a pretty basic 2x4-in-x8 card, and it only goes up from there.
Broadcom is trying to leverage SAS cards that also support NVMe, switches built into PCIe cabling, etc. While I like the technical side of that stuff, from a business side they can go fuck themselves.

AMD's bajillion lanes are kind of making that moot anyway: when one socket in 1U can have a pair of 100Gbit (or better) network ports and 24 directly connected x4 NVMe drives, who needs all that overpriced adapting claptrap? (Already available with 3.0, now double the bandwidth with 4.0.)

PS: The x2 m.2 Optanes you can get cheap at Microcenter (do it right: buy two and mirror) are more than fast enough to be a ZIL for home NAS builds; even 10GbE is no sweat. You only need low latency, which these are great at, and enough buffer for 5-10 seconds of the max bandwidth of the rest of your array.
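
If anyone wants to sanity-check that sizing rule of thumb, here is a minimal sketch (assuming sync writes arrive at full link rate for the whole flush window):

Code:
# The SLOG only has to hold sync writes that haven't been flushed to the
# pool yet, i.e. a few seconds' worth of ingest at worst.
def slog_gib_needed(ingest_mb_s: float, window_s: float = 10.0) -> float:
    """Worst-case SLOG space if the pool ingests at ingest_mb_s for window_s."""
    return ingest_mb_s * window_s / 1024

for name, mb_s in [("1GbE  (~125 MB/s)", 125), ("10GbE (~1250 MB/s)", 1250)]:
    print(f"{name}: ~{slog_gib_needed(mb_s):.1f} GiB")
# Even saturated 10GbE for 10 seconds is only ~12 GiB, so a mirrored pair of
# 16GB Optane m.2 sticks has room to spare.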
 
There are several options out there for switched PCI-E M key cards. I have been doing research for my x58 xeon rig and have come across the following:

Addonics x8 to 2 x4 https://www.addonics.com/products/ad2m2nvmpx8.php


If anyone is curious, I contacted Addonics and asked them to confirm that it has a PCIe switching chip, and what chip it uses.

The answer I got was:

"This controller does have a PCIe switching chip on board. We do not know what chip it uses."

A little odd that they don't know what chip their own product uses, but at $150 it is so much cheaper than the PLX boards that I'd give it a chance.
 