M.2 in RAID 1 or 10?

Weeth (Gawd) · Joined: Sep 7, 2011 · Messages: 662
I asked this question over on Intel Processors, as I'm starting to figure out the configuration for my Skylake-X system (yeah, I know I'm way early, but I have to start saving my pennies now, so I need to know the approximate cost of all this stuff).

So the question is:

I'd also like to get the fastest SSD setup that is affordably achievable. Is it possible to set up M.2 in RAID 1 or 10? Is there a faster way to get a fairly limited volume as my C: drive? (I can even work with 256GB and keep the rest of the data on other, slower drives.)
 
RAID 10 requires 4 or more drives. You can set up M.2 in RAID 0/1. RAID 10 could work if you had multiple PCIe adapters, but motherboards only come with two M.2 slots as far as I know.

By the time Skylake-X comes out, XPoint will be out, and you will have everything you need in a single SSD. RAID will be beyond pointless for those unless you want redundancy. Almost nothing, and I mean nothing, will see any difference with RAIDed XPoint. No one on this forum would see any usefulness in a RAID 0 XPoint; I would love to hear a use case where that would be beneficial.
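The drive-count and capacity rules mentioned above (RAID 0/1 from two drives, RAID 10 needing four or more) can be sketched in a toy calculator. This is purely illustrative; real arrays lose a little capacity to metadata, and the function names here are made up for the example:

```python
def raid_usable(level, drives, size_gb):
    """Toy usable-capacity calculator for equal-size drives.

    Illustrative only: real arrays reserve some space for metadata.
    """
    if level == 0:
        if drives < 2:
            raise ValueError("RAID 0 needs at least 2 drives")
        return drives * size_gb          # striping: capacities add up
    if level == 1:
        if drives < 2:
            raise ValueError("RAID 1 needs at least 2 drives")
        return size_gb                   # mirroring: one drive's worth
    if level == 10:
        if drives < 4 or drives % 2:
            raise ValueError("RAID 10 needs an even count of 4 or more")
        return drives // 2 * size_gb     # striped mirrors: half the total
    raise ValueError("unsupported RAID level")

print(raid_usable(1, 2, 256))    # 256
print(raid_usable(10, 4, 256))   # 512
```

So a RAID 1 of two 256GB M.2 drives gives exactly the limited 256GB C: volume asked about, while RAID 10 needs twice the hardware for the same redundancy plus striping.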
 
It is possible to set up M.2 drives in RAID. However, due to the additional latency overhead, it isn't always faster. To be honest, I'd recommend using a single M.2 drive for your OS.

If you're doing 4K video editing or some other edge use case, by all means RAID 'em up. But even a normal M.2 drive is crazy fast.

Your best option is to run a single M.2 drive and maintain backups. (But hey, we all run backups for everything, right?) ;)
 
By the time Skylake-X comes out, XPoint will be out, and you will have everything you need in a single SSD. RAID will be beyond pointless for those unless you want redundancy. Almost nothing, and I mean nothing, will see any difference with RAIDed XPoint. No one on this forum would see any usefulness in a RAID 0 XPoint; I would love to hear a use case where that would be beneficial.
Why is that?
 
Why does RAID provide no benefit to XPoint?

It would seem that the Optane-type drives are just a few months away, so that would definitely fit into my time frame. As for why RAID wouldn't speed up that drive, is it because it would hit the PCIe data transfer limit?
 
There isn't a lot of reason to RAID an M.2 drive if it's an NVMe one. The 950 Pro I put in my NUC reads at over 2.4 GB per second; it's ludicrously fast.
 
There isn't a lot of reason to RAID an M.2 drive if it's an NVMe one. The 950 Pro I put in my NUC reads at over 2.4 GB per second; it's ludicrously fast.

I'm sure you'll agree that there are always users who want to get the absolute fastest no matter what, so they (albeit not me) might think that if 2.4GB/s is good, 4.8GB/s is better. :)
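The "2.4 doubled is 4.8" intuition runs into the shared upstream link: chipset-attached M.2 slots typically funnel through one interface (for example, DMI 3.0, which tops out around 3.9 GB/s usable; that figure is an assumption for illustration, not something from this thread). A minimal sketch of the cap:

```python
def raid0_seq_throughput(n_drives, per_drive_gbps, link_limit_gbps):
    """Ideal RAID 0 sequential throughput model.

    Striping scales linearly with drive count, but the array can
    never exceed the shared upstream link it sits behind. Numbers
    are illustrative; real arrays also lose a bit to overhead.
    """
    return min(n_drives * per_drive_gbps, link_limit_gbps)

# One 2.4 GB/s drive vs two striped behind a ~3.9 GB/s link:
print(raid0_seq_throughput(1, 2.4, 3.9))   # 2.4
print(raid0_seq_throughput(2, 2.4, 3.9))   # 3.9, not 4.8
```

So even in the best case, two striped 2.4 GB/s drives behind a shared chipset link deliver well short of the hoped-for 4.8 GB/s.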
 
I'm sure you'll agree that there are always users who want to get the absolute fastest no matter what, so they (albeit not me) might think that if 2.4GB/s is good, 4.8GB/s is better. :)
The issue is overhead, and that overhead may make things worse.
 
The issue is overhead, and that overhead may make things worse.

Yes, fair enough, as there are limits to any technology (and common sense). I did read that this Q4's expected Optane launch would be for Xeons alone, so has there been any information released suggesting it for Skylake-X (which is, for all intents and purposes, a Xeon anyway)?
 
RAID 10 requires 4 or more drives. You can set up M.2 in RAID 0/1. RAID 10 could work if you had multiple PCIe adapters, but motherboards only come with two M.2 slots as far as I know.

By the time Skylake-X comes out, XPoint will be out, and you will have everything you need in a single SSD. RAID will be beyond pointless for those unless you want redundancy. Almost nothing, and I mean nothing, will see any difference with RAIDed XPoint. No one on this forum would see any usefulness in a RAID 0 XPoint; I would love to hear a use case where that would be beneficial.

I think RAID will still be valuable for those looking to have the combined IOPS of multiple drives. Even if the sequential reads of one drive max out the PCIe interface, you will still get better IOPS with two drives (non-sequential random reads and writes).
 
Kaby Lake and Skylake-X as well, or?
Both are the same chipset, IIRC.
I think RAID will still be valuable for those looking to have the combined IOPS of multiple drives. Even if the sequential reads of one drive max out the PCIe interface, you will still get better IOPS with two drives (non-sequential random reads and writes).

Again, that's a small set of users, and anyone who falls into this group knows what they are doing and knows they need it.
 
I think RAID will still be valuable for those looking to have the combined IOPS of multiple drives. Even if the sequential reads of one drive max out the PCIe interface, you will still get better IOPS with two drives (non-sequential random reads and writes).

Greatly depends on the work type. Low-QD stuff won't benefit at all, and may actually suffer in RAID, depending on the controllers involved. Higher-QD stuff can benefit, but you will still eventually run into the interface maximum. As SomeGuy133 said, it's a very limited scenario where benefits would be seen, and those who would benefit from it already know about it.
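The queue-depth argument above can be sketched with a toy model: at QD 1 only one request is in flight, so striping across drives adds nothing, while at high QD requests spread across the array and IOPS aggregate. The 14,000 IOPS-per-drive figure and the 5% RAID-layer overhead are illustrative assumptions, not measurements from this thread:

```python
def striped_iops(qd, per_drive_iops, n_drives, overhead=0.95):
    """Toy model of random-I/O scaling in a striped array.

    At low queue depth, at most `qd` requests run in parallel, so
    extra drives sit idle; at high QD the drives all contribute.
    The overhead factor stands in for the RAID layer's latency cost.
    """
    parallel = min(qd, n_drives)           # drives actually kept busy
    return round(per_drive_iops * parallel * overhead)

print(striped_iops(1, 14000, 2))    # 13300 -- QD1: no better than one drive
print(striped_iops(32, 14000, 2))   # 26600 -- high QD: near-2x aggregate
```

This matches the point being made: desktop workloads live mostly at low QD, where the model shows no gain (and the overhead factor can even push results below a single drive), while high-QD server workloads are where striping pays off.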
 
Both are the same chipset, IIRC.

I believe you on that one; I'm just surprised that they would essentially be merging the Z87-series and X99 chipsets. Given that the two platforms have been around for so many years, it's somewhat unexpected.


Greatly depends on the work type. Low-QD stuff won't benefit at all, and may actually suffer in RAID, depending on the controllers involved. Higher-QD stuff can benefit, but you will still eventually run into the interface maximum. As SomeGuy133 said, it's a very limited scenario where benefits would be seen, and those who would benefit from it already know about it.

Well, let me count myself among the ones who will be perfectly happy with "conventional" Optane speeds.
 
I believe you on that one; I'm just surprised that they would essentially be merging the Z87-series and X99 chipsets. Given that the two platforms have been around for so many years, it's somewhat unexpected.




Well, let me count myself among the ones who will be perfectly happy with "conventional" Optane speeds.
I want Optane in RAM or on the SoC :D

The day CPUs have HBM and RAM is just Optane, I will be happy! Move RAM onto the CPU and move the SSD into the RAM socket, and you have basically made the fastest, snappiest PC ever, as far as data storage goes. The only way to make it faster would be to put everything on the SoC, but that's not practical.
 
I want Optane in RAM or on the SoC :D

The day CPUs have HBM and RAM is just Optane, I will be happy! Move RAM onto the CPU and move the SSD into the RAM socket, and you have basically made the fastest, snappiest PC ever, as far as data storage goes. The only way to make it faster would be to put everything on the SoC, but that's not practical.

Count me in on that rig too! Now that would definitely satisfy the speed freak in me!
 