Introducing a 21 SSD Slot / 336 TB Total PCI-E Add-on Card

For all you storage fanatics out there. This may be something for you, all for the low price of $2,800.

https://arstechnica.com/gadgets/202...-that-holds-up-to-21-pcie-4-0-ssds-168tb/amp/

The X21's product page says the AIC works in a standard PCIe 4.0 x16 slot. Images show 10 slots and a heatsink inside a pair of printed circuit boards (PCBs).

inside-640x427.jpg

There are also 11 slots on the PCB's exterior. The card is reportedly full-height and full-length and supports QLC, TLC, MLC, and Intel Optane drives. In terms of operating systems, there's support for Windows 10, 11, and Server, plus Linux.

exterior-2-640x427.jpg

"The card is PCIe fanout, so RAID support would be provided via software or third-party hardware solution, like Graid," Hill told Ars Technica.

Apex Storage is also looking forward to the AIC being able to support up to 336TB should 16TB M.2 2280 SSDs come to market.

Speaking of forward-looking, this component isn't technically future-proofed, since it doesn't support PCIe 5.0 (it's backward-compatible with PCIe 3.0). But while the X21 doesn't target budget-prioritizing users who would opt for HDDs, using PCIe 5.0 SSDs would be even more prohibitively expensive. The AIC's cooling requirements would also increase. As it stands, the X21 requires 400 LFM (linear feet per minute) airflow.

In terms of performance, Apex Storage is claiming sequential read and write speeds of up to 30.5 and 26.5GBps, respectively, while a multi-card setup claims 107GBps and 70GBps, respectively. Apex Storage's website also points to 7.5 million IOPS random reads and 6.2 million IOPS random writes with one card, with those figures expanding to 20 million and 10 million, respectively, in a multi-card configuration.
 
So, Gen 4 x16 has a bandwidth limit of 31.5 GB/s. So that's 252Gbit/s

21 x4 slots would have a combined bandwidth of 1324 Gbit/s, so the x16 interface is definitely going to be the limiting factor here.

I presume that this thing just uses some sort of PCIe switch, like a PLX chip to accomplish this. Keep in mind that PCIe switching adds latency, which can impact performance.

If you use all the drives raided, you are only getting ~1.5GB/s per drive. That's equivalent to about 3x gen2 lanes. So in a RAID configuration where you are reading all of the drives at the same time, you could probably use Gen 2 drives (if you can find such a thing) and still max out the x16 slot.
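To put numbers on that, here's a quick back-of-the-envelope sketch in Python. The 31.5 GB/s figure is the usable Gen4 x16 bandwidth from above; the rest is just division, ignoring switch overhead:

```python
# Back-of-the-envelope: how much uplink bandwidth each drive gets when all
# 21 drives stream through one PCIe 4.0 x16 slot at once.
# Assumes ~31.5 GB/s usable for Gen4 x16 and ignores switch overhead.

UPLINK_GBPS = 31.5        # usable PCIe 4.0 x16 bandwidth, GB/s
GEN2_LANE_GBPS = 0.5      # usable PCIe 2.0 bandwidth per lane, GB/s
DRIVES = 21

per_drive = UPLINK_GBPS / DRIVES
print(f"Per-drive share: {per_drive:.2f} GB/s")                    # ~1.5 GB/s
print(f"Equivalent Gen2 lanes: {per_drive / GEN2_LANE_GBPS:.1f}")  # ~3 lanes
```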

I guess it could be a good way to buy a bunch of cheaper Gen3 drives and max out an x16 slot. Would seem like a waste for Gen4 980 Pro drives like they have in the sample pics.

I have three four-way x16 AICs in my server, so I could add 12 NVMe drives, but those use PCIe bifurcation and connect to the host natively, so they are non-blocking and can thus make full use of the installed drives' bandwidth. That, and no added latency.

494154_PXL_20210903_001320912.jpg


These AICs/risers/whatever you want to call them only cost $75 apiece too, far shy of the $2,800 of this thing...
 
Old news, saw this somewhere else last week, and was ROTFLMAO when I saw the price :)

Surely they don't really think anyone outside of pro/commercial users will be buyin this any time soon..or just those folks with moar $$ than brains....

I've used those Asus cards in the past nottaproblemo !
 
Old news, saw this somewhere else last week, and was ROTFLMAO when I saw the price :)

Surely they don't really think anyone outside of pro/commercial users will be buyin this any time soon..or just those folks with moar $$ than brains....
Never understood posts like this. It shouldn't have to be explained that a component that costs twice as much as the average user's entire PC is not for general PC users. A graphics card is more important to most users, and even the 4090 tops out at $1,800 or so and is found in less than .01% of systems on the Steam Survey. Even in the subset of hardcore Steam gamers, it's <1 in 10,000. Anyone who thinks a storage device that costs more than the best graphics card is going to be found in more consumer-level PCs than a 4090 has an incredibly odd way of looking at the world.

With $100 1TB NVME drives, that would add another $2100 to populate this. Most professional users buying such a card will likely only populate it with 4TB or 8TB NVME drives, making this possibly cost as much as a used car. Again, extrapolating all of this out shouldn't be necessary with all of these cost barriers to entry.
I've used those Asus cards in the past nottaproblemo !
Those and the cards from Highpoint may continue to be the sweet spot as they can dedicate 4 lanes to each NVME drive to give maximum performance.

So, Gen 4 x16 has a bandwidth limit of 31.5 GB/s. So that's 252Gbit/s

21 x4 slots would have a combined bandwidth of 1324 Gbit/s, so the x16 interface is definitely going to be the limiting factor here.

I presume that this thing just uses some sort of PCIe switch, like a PLX chip to accomplish this. Keep in mind that PCIe switching adds latency, which can impact performance.

If you use all the drives raided, you are only getting ~1.5GB/s per drive. That's equivalent to about 3x gen2 lanes. So in a RAID configuration where you are reading all of the drives at the same time, you could probably use Gen 2 drives (if you can find such a thing) and still max out the x16 slot.
It can likely also dedicate a single lane to each drive to access more drives at once; each individual drive takes a hit to speed, but you'd still be using the maximum bandwidth of the x16 slot.

I also assume that anyone who wants to RAID 20 or 21 drives will likely want some level of redundancy (likely RAID 10), at least halving the number of drives needing to be accessed at once. RAID 0 would be nearly pointless, other than simply wanting a drive pool that's big and NVMe.
 
Anyone who thinks a storage device that costs more than the best graphics card is going to be found in more consumer level PC's than a 4090 has an incredibly odd way of looking at the world.
Well excuse the hell out of me for expressing my opinion (which I thought was ok here), sorry if it offended you (not really), but my "incredibly odd ways", whether YOU agree with them or not, have always served me well, so I'll just stick with them for the time being, thank you !

Note that I could have replied in a less-polite, moar "odd" way, but I chose to remain civil about it, hehehe :D
 
I also assume that anyone who wants to RAID 20 or 21 drives will likely want some level of redundancy (likely RAID 10), at least halving the number of drives needing to be accessed at once. RAID 0 would be nearly pointless, other than simply wanting a drive pool that's big and NVMe.

That's not exactly how RAID works though. Let's say you mirror two drives. You still need to read the data from both of them. Then the software (or firmware if hardware RAID) compares the data from both, checks it for mismatches, and once everything is good, presents the data to the OS.

So even if you use the equivalent of RAID5 (one redundant drive worth of parity), or RAID6 (two redundant drives worth of parity) or Mirrors, you are still reading from all drives at the same time, just like you would if it were RAID0.

Where this could come in handy is where you make different storage pools for different purposes, that may or may not be in use at the same time. Then each drive in those pools can have access to the full bandwidth, as long as you aren't using too many of them at the same time.

Also, just because a NVMe drive has 4x lanes, doesn't mean it will max out those 4x lanes. So this also results in some spare bandwidth.

It's very similar to a blocking Ethernet configuration in that sense.
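For a rough sense of the blocking ratio, assuming the card really does give each of the 21 slots a full x4 link behind the switch:

```python
# Oversubscription of the X21's switch fabric: 21 drives x 4 lanes of
# downstream capacity funneled into a single x16 uplink (ignoring the
# switch's own overhead).

UPLINK_LANES = 16
LANES_PER_DRIVE = 4
DRIVES = 21

downstream = DRIVES * LANES_PER_DRIVE
print(f"Oversubscription: {downstream / UPLINK_LANES:.2f}:1")                  # 5.25:1
print(f"Drives at full x4 speed at once: {UPLINK_LANES // LANES_PER_DRIVE}")   # 4
```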
 
Well excuse the hell out of me for expressing my opinion (which I thought was ok here), sorry if it offended you (not really), but my "incredibly odd ways", whether YOU agree with them or not, have always served me well, so I'll just stick with them for the time being, thank you !

Note that I could have replied in a less-polite, moar "odd" way, but I chose to remain civil about it, hehehe :D

I think there are some very specialized applications for these, but they are not going to be for most people.

Heck, they aren't even going to be for most enterprise/professional users. But there is someone out there who has a corner case he or she needs this for, and they are likely thrilled.

As for the cost? Probably in part due to needing a freaking 16 -> 84 lane PCIe switch. So, that's a 100 lane PCIe switch. Those ain't cheap. I bet the bulk of the cost of this board is in that PCIe switch alone.

It might also be complicated to run all of those 100 PCIe lane traces, so the board layout and design is probably the rest of it.

Definitely not worth it for most people, but again, there is likely someone out there who will be thrilled.

Some "more money than brains" gaming enthusiast may do it just for bragging rights, but that doesn't mean it is a good idea. Just ask Linus :p



Sometimes I feel he makes more questionable choices than he does good ones, but that said, he makes them in the name of driving views and revenue, so who knows if he really is as much of a moron as he comes across sometimes, or if he just does stupid shit because he knows it will get views...
 
I think there are some very specialized applications for these, but they are not going to be for most people.

Heck, they aren't even going to be for most enterprise/professional users. But there is someone out there who has a corner case he or she needs this for, and they are likely thrilled.

As for the cost? Probably in part due to needing a freaking 16 -> 84 lane PCIe switch. So, that's a 100 lane PCIe switch. Those ain't cheap. I bet the bulk of the cost of this board is in that PCIe switch alone.

It might also be complicated to run all of those 100 PCIe lane traces, so the board layout and design is probably the rest of it.

Definitely not worth it for most people, but again, there is likely someone out there who will be thrilled.

And considering they launched from a Kickstarter, I don't suppose they have the cash to buy those switches in any kind of reasonable quantity to negotiate prices, at least for launch. Maybe as they get orders they can push it down a little.

And since their engineer is also their sales guy, I have my doubts about their market penetration.
 
Not many 100-lane chips out there either; Microchip is the only one I am finding, sound about right? PM40100B1-FEI. About $550 unnegotiated at 100pcs-1k.
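Assuming that list price is in the right ballpark, a trivial sanity check on how much of the $2,800 the switch alone could eat:

```python
# Rough share of the X21's $2,800 price that a ~$550 (100-piece,
# unnegotiated) 100-lane switch would represent.
CARD_PRICE = 2800
SWITCH_PRICE = 550
print(f"Switch share of card price: {SWITCH_PRICE / CARD_PRICE:.0%}")  # ~20%
```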

That's the only one I am familiar with, but I am about as far from an expert when it comes to these things as you can get.

That's actually less bad pricing-wise than I thought!
 
That's not exactly how RAID works though. Let's say you mirror two drives. You still need to read the data from both of them. Then the software (or firmware if hardware RAID) compares the data from both, checks it for mismatches, and once everything is good, presents the data to the OS.

So even if you use the equivalent of RAID5 (one redundant drive worth of parity), or RAID6 (two redundant drives worth of parity) or Mirrors, you are still reading from all drives at the same time, just like you would if it were RAID0.
I'll assume you know more about storage solutions than I do, but isn't the point of hardware RAID to do all the verification on board rather than on the CPU? Why wouldn't the host computer do its interactions on half, and let the hardware onboard the card handle the other half, especially considering it would be faster at verifying bits? In other words, there isn't a need for the computer to "see" the other half of the RAID array other than for the purposes of redundancy, which in my head could also be flipped over to in the event of a failure.
Where this could come in handy is where you make different storage pools for different purposes, that may or may not be in use at the same time. Then each drive in those pools can have access to the full bandwidth, as long as you aren't using too many of them at the same time.

Also, just because a NVMe drive has 4x lanes, doesn't mean it will max out those 4x lanes. So this also results in some spare bandwidth.

It's very similar to a blocking Ethernet configuration in that sense.
Right, all that makes sense. Though having multiple pools on one card only gives the benefit of requiring a single x16 slot, whereas using multiple x16 slots for multiple pools is better in virtually every other way, including if for whatever reason it's necessary to copy data from pool to pool.

Well excuse the hell out of me for expressing my opinion (which I thought was ok here), sorry if it offended you (not really), but my "incredibly odd ways", whether YOU agree with them or not, have always served me well, so I'll just stick with them for the time being, thank you !

Note that I could have replied in a less-polite, moar "odd" way, but I chose to remain civil about it, hehehe :D
Not to reverse UNO you, but I was being relatively polite when pointing out that it shouldn't be necessary to spell out something as obvious as a premium device not being for general consumers. Price alone dictates the market. It may shock you to learn that Ferraris are also not for general consumers; they're also priced out of their reach.
 
That's the only one I am familiar with, but I am about as far from an expert when it comes to these things as you can get.

That's actually less bad pricing-wise than I thought!
I can't even get a datasheet on it without an NDA from them. 1,467 pin count on that chip. That just stressed me out thinking about routing that.
 
I'll assume you know more about storage solutions than I do, but isn't the point of hardware RAID to do all the verification on board rather than on the CPU? Why wouldn't the host computer do its interactions on half, and let the hardware onboard the card handle the other half, especially considering it would be faster at verifying bits? In other words, there isn't a need for the computer to "see" the other half of the RAID array other than for the purposes of redundancy, which in my head could also be flipped over to in the event of a failure.

If you have a hardware RAID card, then yes, you are correct. This would be done on the board before data gets sent to the CPU across the PCIe lanes.

This, however, is not a hardware RAID card. It just allows each individual drive to be seen by the system.

"The card is PCIe fanout, so RAID support would be provided via software or third-party hardware solution, like Graid," Hill told Ars Technica.

If you want RAID then you have to figure out how to accomplish that using some sort of software solution. I'm partial to ZFS, but it doesn't really work well on Windows (at least not yet), so it's not going to be an option for some.
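Just to illustrate what "software solution" means here, a toy Python sketch of the core idea a software RAID layer implements on the host CPU (the drive count and the bare RAID 0 mapping are only for illustration; real implementations like md or ZFS are far more involved):

```python
# Toy illustration of what a software RAID layer does on the host CPU:
# map a logical block number to (drive index, block on that drive).
# This is only the bare RAID 0 striping idea; parity/mirroring layers
# add the redundancy math on top of it.

def stripe_map(logical_block: int, num_drives: int = 21) -> tuple[int, int]:
    """Return (drive_index, block_on_that_drive) for a simple stripe set."""
    return logical_block % num_drives, logical_block // num_drives

# Logical block 100 across 21 drives lands on drive 16, block 4.
print(stripe_map(100))  # (16, 4)
```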
 
If you have a hardware RAID card, then yes, you are correct. This would be done on the board before data gets sent to the CPU across the PCIe lanes.

This, however, is not a hardware RAID card. It just allows each individual drive to be seen by the system. If you want RAID then you have to figure out how to accomplish that using some sort of software solution. I'm partial to ZFS, but it doesn't really work well on Windows (at least not yet), so it's not going to be an option for some.
Ah, okay, so I'm not crazy, but therein lies the rub that this doesn't have hardware RAID onboard. I feel like that's a massive oversight that severely limits who would want to buy this card then as it won't offer increased performance for those that are using it with redundancy.

Obviously that would come at a commensurate increase in price. When I heard it was only "just" under $3,000 I thought that "seemed low". I imagine a hardware RAID implementation that could handle all of these lanes would increase the cost of the card by at least $4,000, if not more.
 
Ah, okay, so I'm not crazy, but therein lies the rub that this doesn't have hardware RAID onboard. I feel like that's a massive oversight that severely limits who would want to buy this card then as it won't offer increased performance for those that are using it with redundancy.

Obviously that would come at a commensurate increase in price. When I heard it was only "just" above $2,000 I thought that "seemed low". I imagine a hardware RAID implementation that could handle all of these lanes would increase the cost of the card by at least $4,000, if not more.

Totally with you on that. I wonder what kind of chip you'd have to put on this card for it to be able to handle various different parity calculations in order to support a 31.5GB/s RAID output speed.

No idea what that would take. The intricacies of dedicated RAID parity calculation chips are beyond my understanding of the field, but I'm pretty sure it would be both expensive and hot. Especially considering how expensive and hot my little LSI RAID cards have been, and they only handle a max output of 8x SATA3 (so 6GB/s internally over a 7.88GB/s interface).
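For the curious, this is the kind of math such a chip would be grinding through; here is a minimal RAID 5-style XOR parity sketch in Python (RAID 6 adds a second, Reed-Solomon-style parity on top and is costlier still):

```python
# Minimal sketch of the parity work a RAID 5 write incurs; a hardware RAID
# ASIC would offload this from the CPU. Pure Python, so it only shows the
# operation, not the speed.

def raid5_parity(chunks: list[bytes]) -> bytes:
    """XOR the data chunks of a stripe to produce its parity chunk."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)

data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xff\x00\xff\x00"]
print(raid5_parity(data).hex())  # ee22cc44
```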
 
For all you storage fanatics out there. This may be something for you, all for the low price of $2,800.

https://arstechnica.com/gadgets/202...-that-holds-up-to-21-pcie-4-0-ssds-168tb/amp/

The X21's product page says the AIC works in a standard PCIe 4.0 x16 slot. Images show 10 slots and a heatsink inside a pair of printed circuit boards (PCBs).

View attachment 556744

There are also 11 slots on the PCB's exterior. The card is reportedly full-height and full-length and supports QLC, TLC, MLC, and Intel Optane drives. In terms of operating systems, there's support for Windows 10, 11, and Server, plus Linux.

View attachment 556745

"The card is PCIe fanout, so RAID support would be provided via software or third-party hardware solution, like Graid," Hill told Ars Technica.

Apex Storage is also looking forward to the AIC being able to support up to 336TB should 16TB M.2 2280 SSDs come to market.

Speaking of forward-looking, this component isn't technically future-proofed, since it doesn't support PCIe 5.0 (it's backward-compatible with PCIe 3.0). But while the X21 doesn't target budget-prioritizing users who would opt for HDDs, using PCIe 5.0 SSDs would be even more prohibitively expensive. The AIC's cooling requirements would also increase. As it stands, the X21 requires 400 LFM (linear feet per minute) airflow.

In terms of performance, Apex Storage is claiming sequential read and write speeds of up to 30.5 and 26.5GBps, respectively, while a multi-card setup claims 107GBps and 70GBps, respectively. Apex Storage's website also points to 7.5 million IOPS random reads and 6.2 million IOPS random writes with one card, with those figures expanding to 20 million and 10 million, respectively, in a multi-card configuration.
Next time, take snips of the pictures instead of doing a copy-paste hotlink.

1679002251233.png
 
I think there are some very specialized applications for these, but they are not going to be for most people.

Heck, they aren't even going to be for most enterprise/professional users. But there is someone out there who has a corner case he or she needs this for, and they are likely thrilled.

As for the cost? Probably in part due to needing a freaking 16 -> 84 lane PCIe switch. So, that's a 100 lane PCIe switch. Those ain't cheap. I bet the bulk of the cost of this board is in that PCIe switch alone.

It might also be complicated to run all of those 100 PCIe lane traces, so the board layout and design is probably the rest of it.

Definitely not worth it for most people, but again, there is likely someone out there who will be thrilled.

Some "more money than brains" gaming enthusiast may do it just for bragging rights, but that doesn't mean it is a good idea. Just ask Linus :p



Sometimes I feel he makes more questionable choices than he does good ones, but that said, he makes them in the name of driving views and revenue, so who knows if he really is as much of a moron as he comes across sometimes, or if he just does stupid shit because he knows it will get views...

Ooh! Me me me! I'm one of those corner cases!
https://hardforum.com/threads/pci-e...se-wifi-how-crazy-can-i-actually-get.2019992/

If I had it to do again, I'd at least consider these for that machine. The price tag is a bit steep, but for our oddball edge case, these would be perfect, assuming they work.
 
That's not exactly how RAID works though. Let's say you mirror two drives. You still need to read the data from both of them. Then the software (or firmware if hardware RAID) compares the data from both, checks it for mismatches, and once everything is good, presents the data to the OS.

So even if you use the equivalent of RAID5 (one redundant drive worth of parity), or RAID6 (two redundant drives worth of parity) or Mirrors, you are still reading from all drives at the same time, just like you would if it were RAID0.

That is not necessarily true. In RAID1 you only have to read one of the drives; reading all of them and comparing is optional for reads. This also speeds up random, concurrent seeks by a factor of how many drives you have in your RAID1. Likewise, RAID5 data can be read without one of the drives if you so wish. Whether that is done is a matter of the RAID system/software and configuration.

I have to admit I don't know what ZFS's default way of doing it is. They do checksums in addition to RAID anyway. Anybody know?
 
That is not necessarily true. In RAID1 you only have to read one of the drives; reading all of them and comparing is optional for reads. This also speeds up random, concurrent seeks by a factor of how many drives you have in your RAID1. Likewise, RAID5 data can be read without one of the drives if you so wish. Whether that is done is a matter of the RAID system/software and configuration.

I have to admit I don't know what ZFS's default way of doing it is. They do checksums in addition to RAID anyway. Anybody know?

That doesn't make sense.

If it doesn't read parity data with every read, how does it catch silent corruption?
 
That doesn't make sense.

If it doesn't read parity data with every read, how does it catch silent corruption?

The same way as it does with a single disk: if the drive doesn't report it, the corrupt data goes through to the user.

As I said, it is a choice.
 
The same way as it does with a single disk: if the drive doesn't report it, the corrupt data goes through to the user.

As I said, it is a choice.

Yikes. I guess that's what ZFS fans are talking about when they talk about better resiliency against silent corruption.

You can have a bit flip without it resulting in a drive read error. This is why ZFS checksums every block and verifies that checksum on every read.
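Not how ZFS is actually implemented internally, but the idea is easy to sketch: store a checksum for each block and verify it on read, so a flipped bit gets caught even when the drive happily returns "success":

```python
# Illustration of end-to-end checksumming catching silent corruption that a
# plain mirror read would miss. The hash used here is arbitrary; the point is
# the compare-on-read step.
import hashlib

block = bytearray(b"some block of user data")
stored_checksum = hashlib.sha256(block).digest()  # computed when the block was written

block[3] ^= 0x01  # a silent single-bit flip; the drive reports no read error

if hashlib.sha256(block).digest() != stored_checksum:
    print("checksum mismatch: repair from the other mirror / rebuild from parity")
```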
 

Reading only one mirror in a RAID1 allows the drives to operate in parallel on concurrent reads. That is a speedup of 2 for 2 disks, 3 for 3 disks etc on random block reads.

As discussed in this thread, it also saves bus bandwidth in software RAID.
 
Will it need auxiliary power connectors like a gpu?

I am reading a high end nvme (and well if you have the $$ to get this you would use the best, right) can pull 10 watts.
21*10+overhead>75 watts for the pcie bus (well 75 less what the GPU is sucking away)
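Working that out with some assumed numbers (the 10 watts per loaded drive from above, plus a guess at switch and VRM overhead):

```python
# Rough power budget, assuming ~10 W per high-end Gen4 M.2 drive under load
# and a guessed 25 W for the PCIe switch plus voltage regulation. Far past
# the 75 W a slot can supply, so auxiliary power would be needed.

DRIVES = 21
WATTS_PER_DRIVE = 10           # assumption for a loaded high-end Gen4 M.2 SSD
SWITCH_AND_VRM_OVERHEAD = 25   # rough guess

total = DRIVES * WATTS_PER_DRIVE + SWITCH_AND_VRM_OVERHEAD
print(f"Estimated draw: {total} W vs. 75 W from the slot")  # 235 W vs. 75 W
```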
 
I think there are very valid use cases for this. I mean, the enterprise handles this different, but on the cheap, stuff like this makes sense (not talking the overpriced version of these cards). For a homelab, could be just the thing.
 
Will it need auxiliary power connectors like a gpu?

I am reading a high end nvme (and well if you have the $$ to get this you would use the best, right) can pull 10 watts.
21*10+overhead>75 watts for the pcie bus (well 75 less what the GPU is sucking away)
1679081417469.png

I guess it regulates the +12V down to the +3.3V that M.2 uses (in contrast to U.2, U.3, and EDSFF, which all mainly use +12V).
 
Ah, okay, so I'm not crazy, but therein lies the rub that this doesn't have hardware RAID onboard. I feel like that's a massive oversight that severely limits who would want to buy this card then as it won't offer increased performance for those that are using it with redundancy.

Obviously that would come at a commensurate incerase in price. When I heard it was only "just" under $3000 I thought that "seemed low". I imagine for a hardware RAID implementation that could handle all of these lanes that would increase the cost of the card by at least $4000 if not higher.

Hardware ASIC RAID is legacy/archaic and incompatible with scaling many Gen4 NVMe SSDs due to the severe reduction in throughput and increase in latency. There's a reason there aren't really M.2-based HW RAID cards on the market other than a few small-run experimental models (Areca, Highpoint) for certain use cases.

The de facto practice in this space is Gen4 NVMe SSDs direct-attached to the CPU electrically, with enough PCIe lanes that you don't need a PLX chip to mux them. Any needed redundancy strategy will be purely software based. In slot-constrained or PCIe-lane-constrained scenarios, a PLX chip sitting between the SSDs and CPU can alleviate needing a 128-256 lane CPU/MB, but again at the cost of diminished throughput and added latency.
 
Hardware ASIC RAID is legacy/archaic and incompatible with scaling many Gen4 NVMe SSDs due to the severe reduction in throughput and increase in latency. There's a reason there aren't really M.2-based HW RAID cards on the market other than a few small-run experimental models (Areca, Highpoint) for certain use cases.

The name of the game in this space is Gen4 NVMe SSDs direct-attached to the CPU, with as close to zero translation layers as possible (and don't bother on consumer-class chipsets/motherboards; anything more than 3 Gen4 SSDs requires a 256 PCIe lane CPU/chipset/MB a la TR Pro, Epyc, Xeon, etc.). Any redundancy strategy will be purely software based.
Again, I acknowledge not being a storage expert. But it seems to me then that both RAID 10 and RAID 0 will run significantly worse when fully populating this card. This card effectively has to turn 16 PCIe lanes into 84 PCIe lanes.

While there is already a limitation of throughput on the slot, it also means that it's not even possible to have one PCIe lane address every individual drive on the card.
 
While there is already a limitation of throughput on the slot, it also means that it's not even possible to have one PCIe lane address every individual drive on the card.
Correct, throughput is limited to 16 lanes, but this card is targeted at usage scenarios where storage density per slot is more important than throughput. It's a compromise.

Scaling lots of Gen4 NVMe SSDs *and* retaining native throughput and latency otherwise gets very expensive very fast once you exceed around 8 drives, partly because of all the other enterprise gear (100-400Gb Mellanox cards, etc.) required to get data in and out of the system without also significantly bottlenecking.
 
Correct, throughput is limited to 16 lanes, but this card is targeted at usage scenarios where storage density per slot is more important than throughput. It's a compromise.

Scaling lots of Gen4 NVMe SSDs *and* retaining native throughput and latency otherwise gets very expensive very fast once you exceed around 8 drives, partly because of all the other enterprise gear (100-400Gb Mellanox cards, etc.) required to get data in and out of the system without also significantly bottlenecking.
I have a suspicion then that a drive pool that contains 16 drives will perform much better than any pool containing more than that.
 