The way they make it sound, you start to avoid RAID0 on SSDs like the plague.....
Is it true that SSDs in RAID0 fail a lot quicker than HDDs, or is it all just BS?
I run two 256 GB Samsung 830 SSDs in RAID1 on my fileserver as OS drive and VM backstorage to increase read IOPS and add some redundancy.
My desktop and laptop run both on a 256 GB Samsung 840 and that is all they ever need.
RAID 1 gives absolutely no performance gain in reads or writes, period. You're not reading from two drives or two stripes ... you are reading from one drive, one stripe, one stream of data. By the design of RAID, R1 cannot boost read or write performance, unless you are using ZFS zpools etc. ... not Windows, by any means or stretch of the imagination.
In fact they are ZFS mirrors. I know for certain that mdadm also distributes reads across multiple RAID1 disks, as long as the reads come from different operations or threads.
I have been using mdadm RAID1 on hard drives for years and consistently achieve higher concurrent IOPS with multiple simultaneous accesses than with a single disk.
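The effect described above can be sketched with a toy queueing model: independent reads get dispatched to whichever mirror member is least busy, so a two-way mirror drains a concurrent read queue in roughly half the time of a single disk. The service time and request count below are hypothetical illustration values, not measurements.

```python
# Toy model: a RAID1 array dispatches each independent read to the
# least-busy mirror member. A single disk must serve every read itself.
SERVICE_MS = 8.0    # hypothetical per-read service time (HDD seek + read)
N_REQUESTS = 100    # independent, concurrent read requests

def total_time(n_members: int) -> float:
    """Time to drain the read queue when requests are spread across members."""
    busy = [0.0] * n_members
    for _ in range(N_REQUESTS):
        # dispatch to the member with the least accumulated work
        m = min(range(n_members), key=lambda j: busy[j])
        busy[m] += SERVICE_MS
    return max(busy)

single = total_time(1)   # one disk serves all 100 reads
mirror = total_time(2)   # two mirror members split them evenly
print(f"single disk: {single} ms, RAID1 pair: {mirror} ms")
```

The model only holds for *concurrent, independent* reads; a single sequential stream sees no benefit, which is exactly the distinction being argued in this thread.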
Someone in this forum said that Areca controllers also do this.
By the way, what you say about the "scientific and mathematic design" of RAID is just blabla. First of all, random read accesses can very well be distributed across several disks. And that is a much more important metric for me (and, I think, for others too) as far as my VM backstorage is concerned.
And even for sequential transfers, that has only ever been true for hard-drive RAID1, because on hard drives seeks are more expensive than linear reads.
For SSDs this is not true at all. A sufficiently modern SSD can sustain very high random-read throughput at high queue depth: some models do more than 500 MB/s sequentially and still manage 300-400 MB/s in random 4K QD32 reads.
So a linear read from a two-drive RAID1 of such SSDs could do 600-800 MB/s even with the worst possible access pattern, reading every other 4K block from each disk.
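The arithmetic behind that worst-case claim is simple enough to write out. Using the 300-400 MB/s per-drive random 4K figures quoted above, both members reading alternate blocks in parallel gives twice the per-drive rate:

```python
# Back-of-envelope for a two-SSD RAID1 where a sequential read is split
# into alternating 4K blocks, so each member sees a random-ish 4K stream
# at its random-read rate. Figures are the per-drive numbers from the post.
RANDOM_4K_MBPS = (300, 400)   # per-drive random 4K QD32 throughput range

low = 2 * RANDOM_4K_MBPS[0]   # both members streaming in parallel, low end
high = 2 * RANDOM_4K_MBPS[1]  # high end
print(f"aggregate: {low}-{high} MB/s")
```

This is an upper bound for the ideal case; a real array also pays controller and interface overhead, so measured numbers would land somewhat lower.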