How often do 2x 500 GB SSDs in RAID 0 fail?

AndreRio

Is it true that SSDs in RAID 0 fail a lot quicker than HDDs, or is it all just BS?
 
I've never seen any data to indicate that. If the data is important to you, back it up. You never know when a drive is going to fail.
 
Thanks. The way they make it sound, you start to avoid RAID 0 on SSDs like the plague...
 
The way they make it sound, you start to avoid RAID 0 on SSDs like the plague...

They say that mainly because SSD RAID 0 does not help much (and can actually hurt performance, though you will not notice) for gaming, the OS, or most desktop applications, to the point that for most users it is not worth the small added risk of failure. Also, some RAID implementations add time to initialize the array on boot, making a RAID 0 setup take longer to boot than a single SSD.
 
The MTBF of an SSD is dramatically better than an HDD's. I have two 520s that I've been running in RAID 0 for a couple of years now and another set of two 730s in RAID 0. The last time I saw "RAID Array - Failed" was because of cheap eBay SATA cables.

My server, OTOH, with four Seagate drives in it, has already had two drives replaced in 12 months.
 
The more devices you have involved, the greater the likelihood of overall failure when it comes to striping; each added device increases the odds. Perhaps that's what you've heard? It's been said for as long as I've been aware of RAID, and long before SSDs were on the market. Always back up the data that is important to you, regardless of how it's stored. Anything else is foolish.
 
I have not had any quality SSD fail. The only drive that died was a 32 GB Crucial v4, which I should not have bought in the first place.

I tend to use RAID 1 more often now, as it helps read IOPS more than any other configuration, with the added benefit of the protection.
Sequential speed beyond what a single modern SSD can do (~500 MB/s) is irrelevant anyway unless you are copying data to something equally fast.
 
I run two 256 GB Samsung 830 SSDs in RAID 1 on my fileserver as the OS drive and VM backing storage, to increase read IOPS and add some redundancy.
My desktop and laptop both run on a 256 GB Samsung 840, and that is all they ever need.
 
Is it true that SSDs in RAID 0 fail a lot quicker than HDDs, or is it all just BS?

One SSD is faster than five hard drives in RAID 0.

Two SSDs in RAID 0 is stupid in the sense that you are penalizing yourself with NO data integrity for numbers that make absolutely no perceivable difference. You are going to get over it fast once doing internal file copies just to see such a grandiose number gets old.
 
I run two 256 GB Samsung 830 SSDs in RAID 1 on my fileserver as the OS drive and VM backing storage, to increase read IOPS and add some redundancy.
My desktop and laptop both run on a 256 GB Samsung 840, and that is all they ever need.

RAID 1 gives absolutely no performance gain in reads or writes, period. You're not reading from two drives or stripes; you are reading from one drive, one stripe, one stream of data. By the mathematics and scientific design of RAID, it is impossible for RAID 1 to boost read or write performance, by any means or stretch of the imagination, unless you are using ZFS zpools etc., not Windows.
 
Is it true that SSDs in RAID 0 fail a lot quicker than HDDs, or is it all just BS?

BS.
An SSD is normally the most reliable, secure storage you can have today.
Any hardware can fail: controller, wiring, etc., but normally an SSD will run for years without hiccups.
If the SSD is a cheap consumer model and you use it for databases, then things are different due to the heavy write load.
 
RAID 1 gives absolutely no performance gain in reads or writes, period. You're not reading from two drives or stripes; you are reading from one drive, one stripe, one stream of data. By the mathematics and scientific design of RAID, it is impossible for RAID 1 to boost read or write performance, by any means or stretch of the imagination, unless you are using ZFS zpools etc., not Windows.

In fact, they are ZFS mirrors. I know for certain that mdadm also distributes reads across multiple RAID 1 disks, as long as they come from different operations or threads.
I have been using mdadm RAID 1 on hard drives for years and consistently achieve higher concurrent IOPS under multiple accesses than with single disks.
Someone in this forum said that Areca controllers also do this.

By the way, what you go on about scientific and mathematical design is just blah blah. First of all, random read accesses can very well be distributed across several disks, and that is a much more important metric for me (and for others too, I think) as far as my VM backing storage is concerned.
And even if you are talking about sequential transfers, that has only been true for hard-drive RAID 1, because seeks are more expensive than linear reads.
For SSDs this is not true at all. A sufficiently modern SSD can do random reads at very high throughput at a high queue depth. While some models do more than 500 MB/s sequentially, they still do 300-400 MB/s in random 4K QD32 reads.
So a linear read from a two-drive RAID 1 of these SSDs could do 600-800 MB/s even with the worst possible access pattern, reading every other 4K block from each disk.
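To put numbers on that last point, here is a back-of-the-envelope sketch in Python. It uses the 300-400 MB/s random 4K QD32 figure above as an assumed per-drive rate and assumes the RAID layer splits reads evenly between the two mirrors; it is an illustration, not a measurement.

```python
# Rough aggregate read throughput for a 2-way SSD mirror, assuming each
# drive sustains the random 4K QD32 rates quoted above (300-400 MB/s)
# and reads are split evenly between the two copies.

BLOCK_BYTES = 4096
MIRRORS = 2

for per_drive_mb_s in (300, 400):                      # assumed per-drive rate
    iops = per_drive_mb_s * 1_000_000 // BLOCK_BYTES   # 4K operations/s per drive
    aggregate = per_drive_mb_s * MIRRORS               # each mirror serves half the blocks
    print(f"{per_drive_mb_s} MB/s per drive ≈ {iops:,} IOPS -> ~{aggregate} MB/s from the pair")
```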
 
I suppose with large stripe sizes and small files, you might be causing some additional write amplification. Doubt that's going to be a very significant contributing factor, though, versus plain old usage.

At home, I have a RAID 0 pair of SSDs for VMs. I haven't actually bothered to benchmark anything to see if this actually gets me increased IOPS. It might if I crank up the queue depth. Might.

I had my OS on BTRFS RAID 0 for a while, and decided I didn't really need the extra space and converted back to RAID 1. Not noticing any kind of difference, but BTRFS does read from multiple devices.

As others have stated, the main reason to avoid SSD RAID 0 is that it really doesn't speed up typical desktop usage (small files, low queue depths), with the added risk of data loss inherent in RAID 0.
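For anyone curious whether cranking up the queue depth actually helps, here is a minimal threaded random-read micro-benchmark sketch in Python (my own illustration, not anything from this thread). The file path and thread counts are hypothetical, and the OS page cache will inflate the numbers unless the test file is much larger than RAM or the cache is dropped first.

```python
import os
import random
import threading
import time

# Hypothetical test file; point it at a large file that lives on the array.
PATH = "/path/to/large_testfile.bin"
BLOCK = 4096        # 4 KiB reads, typical of desktop/VM workloads
DURATION = 5        # seconds per run

def worker(fd, size, results, stop):
    # Issue random 4 KiB reads until told to stop; count completed reads.
    done = 0
    while not stop.is_set():
        offset = random.randrange(0, size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)
        done += 1
    results.append(done)

def run(threads):
    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    results, stop = [], threading.Event()
    ts = [threading.Thread(target=worker, args=(fd, size, results, stop))
          for _ in range(threads)]
    for t in ts:
        t.start()
    time.sleep(DURATION)
    stop.set()
    for t in ts:
        t.join()
    os.close(fd)
    return sum(results) / DURATION

# More threads roughly approximates a higher effective queue depth.
for n in (1, 4, 16):
    print(f"{n:>2} threads: {run(n):,.0f} reads/s")
```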
 
RAID does not affect drive reliability at all. It does affect overall reliability.

Say a drive has a 1% chance per year to die and you use 2 in RAID 0. A 1% chance to die means a 99% chance to survive, or 0.99. For 2 drives you square 0.99, which is 0.9801, so the array has a 98.01% chance of surviving the year. Your RAID 0 therefore has a 1 - 0.9801 = 1.99% chance of dying per year.
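The same arithmetic, generalized to any number of striped drives, as a small Python sketch (the 1% annual failure rate is the illustrative figure from the paragraph above, not a real-world spec):

```python
# RAID 0 loses data if ANY member drive dies, so the array's survival
# probability is the per-drive survival probability raised to the number
# of drives.

def raid0_annual_failure(p_drive: float, n_drives: int) -> float:
    return 1 - (1 - p_drive) ** n_drives

for n in (1, 2, 3, 4):
    print(f"{n} drive(s): {raid0_annual_failure(0.01, n):.2%} chance of array loss per year")
```

With the assumed 1% per-drive rate this prints 1.00%, 1.99%, 2.97%, and 3.94%, matching the 1.99% figure above for two drives.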

SSDs are more reliable than HDDs, RAID or not.
 
RAID can affect SSD life due to the lack of TRIM support.

But more and more RAID implementations are starting to support TRIM passthrough, so this isn't as much of an issue now.

But overall, that is just a small benefit, as SSDs are good anyway :)

I have had SSDs in RAID 1 configs for 4 years now, and none of them have failed yet.
 
RAID 1 gives absolutely no performance gain in reads or writes, period. You're not reading from two drives or stripes; you are reading from one drive, one stripe, one stream of data. By the mathematics and scientific design of RAID, it is impossible for RAID 1 to boost read or write performance, by any means or stretch of the imagination, unless you are using ZFS zpools etc., not Windows.

What you say is true only if queue depth is always less than two.
 
In fact, they are ZFS mirrors. I know for certain that mdadm also distributes reads across multiple RAID 1 disks, as long as they come from different operations or threads.
I have been using mdadm RAID 1 on hard drives for years and consistently achieve higher concurrent IOPS under multiple accesses than with single disks.
Someone in this forum said that Areca controllers also do this.

By the way, what you go on about scientific and mathematical design is just blah blah. First of all, random read accesses can very well be distributed across several disks, and that is a much more important metric for me (and for others too, I think) as far as my VM backing storage is concerned.
And even if you are talking about sequential transfers, that has only been true for hard-drive RAID 1, because seeks are more expensive than linear reads.
For SSDs this is not true at all. A sufficiently modern SSD can do random reads at very high throughput at a high queue depth. While some models do more than 500 MB/s sequentially, they still do 300-400 MB/s in random 4K QD32 reads.
So a linear read from a two-drive RAID 1 of these SSDs could do 600-800 MB/s even with the worst possible access pattern, reading every other 4K block from each disk.

LSI controllers can also increase IOPS and throughput when using RAID 1.
 
I am buying a second 500 GB EVO and putting them in RAID! And a 1 TB HDD with all my data. Thanks, all!
 
Good choice.

Be sure to do the performance restoration upgrade before putting them in RAID.
It will not be possible with the current tool once they are part of an array.
Also make sure the RAID setup supports TRIM; the EVO's performance relies on it.
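If the array ends up on a Linux box, one rough way to sanity-check whether the block layer will pass discards (TRIM) through is to look at sysfs, as in the sketch below. The device names are hypothetical, and on a Windows/Intel RST setup you would rely on the vendor's tooling instead.

```python
# Check whether a Linux block device (or md RAID device) advertises
# TRIM/discard support; a discard_max_bytes of 0 means discards are
# not passed down to that device.

from pathlib import Path

def supports_discard(dev: str) -> bool:
    path = Path(f"/sys/block/{dev}/queue/discard_max_bytes")
    return path.exists() and int(path.read_text()) > 0

for dev in ("sda", "sdb", "md0"):   # hypothetical device names
    status = "supported" if supports_discard(dev) else "not supported / not present"
    print(f"{dev}: discard {status}")
```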
 