4x 6 TB WD Red - RAID 5 or 6? And which controller?

Quartz-1
Supreme [H]ardness - Joined May 20, 2011 - Messages: 4,257
I'm currently falling in love with the idea of the HP Microserver. I was thinking of putting 4x 6 TB WD Reds in one. But I'd also want data resilience, so I'm wondering whether RAID 5 will be sufficient to cope with uncorrectable bit error rates and drive failures, or whether I should go for RAID 6. And in each case, which would be the most appropriate controller card?

The box would be running WSE 2012 under ESXi or Microsoft's Hyper-V. Other stuff may follow.
 
I have done some pretty extensive experimentation on my NAS with both 3 and 4 drive RAID arrays.

First off, I've been very happy with my Dell PERC 5i RAID controller. It's fast, powerful, and commonly available on eBay for cheap. This card does not support RAID 6, but RAID 6 is inadvisable with a 4-drive array anyway. The slightly newer PERC 6i does support RAID 6 if you want the additional capability.

If you are going to be running ESXi, I would suggest that you don't want RAID5 or 6 based on my experience. I've tested RAID5 arrays on ESXi and XenServer, and the way that these hypervisors handle drive volumes results in a LOT of parity recalculations, meaning VERY slow write speed in RAID5, even with a good dedicated controller.

RAID6, as I mentioned, doesn't make much sense with 4 drives. With this setup, you'd be dedicating two drives to parity, meaning that your array size would be equal to the size of two of your HDDs - the same capacity as the same drives in a RAID10 array, without any of the speed benefits of striping.

In my experience, RAID10 is the best option for a 4-drive array. The only thing RAID5 would get you is a 50% larger array at a not insignificant speed cost, and with the possibility of less redundancy.
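As a rough sketch of the capacity math (idealised numbers for 4x 6 TB; real arrays lose a little to metadata and formatting):

Code:
n_drives = 4
drive_tb = 6

capacity_tb = {
    "RAID 5":  (n_drives - 1) * drive_tb,   # one drive of parity -> 18 TB
    "RAID 6":  (n_drives - 2) * drive_tb,   # two drives of parity -> 12 TB
    "RAID 10": (n_drives // 2) * drive_tb,  # mirrored pairs -> 12 TB
}

for level, tb in capacity_tb.items():
    print(f"{level}: {tb} TB usable")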
 
Do not do raid 5 = guaranteed data loss on a rebuild.

4 drives - raid 10.

raid 6 if you can take the double parity hit on performance when doing a lot of writes.
 
Do not do raid 5 = guaranteed data loss on a rebuild.

4 drives - raid 10.

raid 6 if you can take the double parity hit on performance when doing a lot of writes.

Good advice. RAID 10 is a good way to go. On platters above 1 TB, some OEMs (Dell, for example) are flat out saying don't use RAID 5 due to rebuild times and the increased possibility of data loss.

I run RAID 6 on larger (10+ drive) arrays where the write penalty won't be greatly felt.
 
RAID 10 with 4 drives is always more risky since you will lose the volume if the wrong two drives are lost. Since OP mentions only a data resilience requirement, RAID 6 is safest since any two drives can be lost without losing any data.
 
RAID6, as I mentioned, doesn't make much sense with 4 drives. With this setup, you'd be dedicating two drives to parity, meaning that your array size would be equal to the size of two of your HDDs - the same capacity as the same drives in a RAID10 array, without any of the speed benefits of striping.

That is not true (except for the part about capacity being equal to two drives which is true).

There is a benefit from striping with RAID 6. Your sequential reads will be about double that of a single drive.

And it does make sense if you care about fault tolerance, since a RAID 6 of four drives is more fault tolerant than a RAID 10 of four drives.

So, the capacity is the same with 6 or 10, the sequential I/O is about the same, and the main difference is that RAID 6 has better fault-tolerance and RAID 10 has better partial-stripe write performance.
 
RAID 10 with 4 drives is always more risky since you will lose the volume if the wrong two drives are lost. Since OP mentions only a data resilience requirement, RAID 6 is safest since any two drives can be lost without losing any data.

This was my thought. I'd like to learn more about the performance loss through virtualisation, please. Would a controller with onboard cache RAM and a battery backup unit resolve that?
 
Do not do raid 5 = guaranteed data loss on a rebuild.

From experience (with 70TB+ of weekly scans for well over a decade) I'd say that's very unlikely with tested, working drives. URE rates are much lower (by at least an order of magnitude) than the manufacturer's spec for tested, working drives, and much higher than the spec for drives that are about to die. In either case I recommend weekly scans so you know if more than one drive is about to fail.
 
I have some good links for you:

"It's completely inconceivable to use Red drives in a standard parity array. The URE rate is so bad and the storage so large that if a RAID 6 array loses one drive you drop to RAID 5. And RAID 5 on that is no better than RAID 0. So why would you do that?

RAID 6 is safe-ish, but why run with all the overhead of RAID 6 and be "safe-ish" when you can run RAID 10, be much faster, and be extremely safe?

RAID 6 is a pure loss here. Saves no money, isn't as safe as equal priced RAID 10 and is quite a bit slower and has far, far worse recovery modes. One break even and three negatives. No upsides to RAID 6.

RAID 6 is never a consideration until you have five drives."

http://community.spiceworks.com/top...-not-an-issue-only-reliability-is-raid6-or-10

"This is very bad advice. R10 for speed AND reliability and redundancy. RAID 6 has literally zero advantages here, just more risk and less performance. A lot less performance. Write performance with RAID 6 is slower than if you had only a single drive!!

RAID 6 cannot be an option here at all. It has no upsides, only downsides."

http://community.spiceworks.com/topic/572891-raid-level-for-small-company-using-4-disks-advice

I did quite a lot of research when I was building my new array at the end of 2014, and I found that forums like these weren't of any help; in fact, they were actually a hindrance, because members would frequently (almost always) recommend the wrong option (as seen in this thread). Your question is really better suited to a discussion board for IT professionals than to a hardware enthusiast forum.

Options:
Mechanical drives with a URE/UBER rating of 1 per 10^14 bits = RAID 10 only
Mechanical drives with a URE/UBER rating of 1 per 10^15 bits or better = RAID 10 or RAID 6 (RAID 6 only up to a certain array size)
 
RAID 6 is a pure loss here. Saves no money, isn't as safe as equal priced RAID 10 and is quite a bit slower and has far, far worse recovery modes. One break even and three negatives. No upsides to RAID 6.

That is absolutely false on all counts except for IOPS. And I am an IT professional.
 
That is absolutely false on all counts except for IOPS. And I am an IT professional.

The user who posted that is regarded as one of the most knowledgeable experts on RAID.

I will gladly create a Spiceworks account and repost this question there.
 
I would question anything he says making statements like this. Although I did not read the article, so he could be talking about a different setup rather than a 4-drive RAID. I mean, if we are talking about a 10+ drive RAID, of course RAID 10 would be more reliable.
 
That post title is "4-Drive RAID: **Performance is not an issue, only reliability is**: RAID6 or 10?"

Users who recommended RAID 6: 0
Users who recommended RAID 10: 7

There are many other posts on the subject. This one is closest to what OP was asking.

Edit:

The OP of that thread even writes:

"I have a 4-bay enclosure (QNAP TS-412) with 4x 3TB SATA drives for online backup of archival data. Performance (read/write speed) is NOT a significant consideration for me, since the backups are automated and happen overnight when I'm not around. Data reliability/fault tolerance is the ONLY consideration for me (yes, I do have offline backups in addition to the RAID, but they're offsite and intended only as deep, cold storage and are only updated every couple of months, so there is always data on the RAID that is NOT backed up elsewhere... which is why reliability is so important)."
 
As we know, RAID is not a backup anyway, so those arrays should be backed up; thus go for RAID 10. And another guy, Scott Alan Miller (http://community.spiceworks.com/people/scottalanmiller?source=homepage-feed) on Spiceworks, is also an "IT professional" with far more than 70TB of arrays over his career.

http://www.smbitjournal.com/

Each RAID level has its place, and RAID 6 and 10 are the two choices these days. The issue with RAID 6 and non-enterprise drives is the rebuild: all that parity writing kills drives, which is why RAID 5 is useless and you are guaranteed to lose data. One single bad block in a RAID 5 and there goes your entire array, which with non-enterprise drives is likely going to happen; and unless you bought your drives from different vendors, there's a good chance that if you have one bad drive you could have more.

Choosing between RAID 6 and RAID 10 should not be incredibly difficult. RAID 10 is ideal for situations where performance and safety are the priorities. RAID 10 has much faster write performance and is safe regardless of disk type used (low cost consumer disks can still be extremely safe, even in large arrays.) RAID 10 scales well to extremely large sizes, much larger than should be implemented using rules of thumb! RAID 10 is the safest of all choices, it is fast and safe. The obvious downsides are that RAID 10 has less storage capacity from the same disks and is more costly on the basis of capacity. It must be mentioned that RAID 10 can only utilize an even number of disks, disks are added in pairs.

RAID 6 is generally safe and fast but never as safe or as fast as RAID 10. RAID 6 specifically suffers from write performance so is very poorly suited for workloads such as databases and heavily mixed loads like in large virtualization systems. RAID 6 is cost effective and provides a heavy focus on available capacity compared to RAID 10. When budgets are tight or capacity needs dominate over performance RAID 6 is an ideal choice. Rarely is the difference in safety between RAID 10 and RAID 6 a concern except in very large systems with consumer class drives. RAID 6 is subject to additional risk with consumer class drives that RAID 10 is not affected by which could warrant some concern around reliability in larger RAID 6 systems such as those above roughly 40TB when consumer drives are used.

In the small business space especially, the majority of systems will use RAID 10 simply because arrays rarely need to be larger than four drives. When arrays are larger RAID 6 is the more common choice due to somewhat tight budgets and generally low concern around performance. Both RAID 6 and RAID 10 are safe and effective solutions for nearly all usage scenarios with RAID 10 dominating when performance or extreme reliability are key and RAID 6 dominating when cost and capacity are key. And, of course, when storage needs are highly unique or very large, such as larger than twenty five spindles in an array, remember to leverage a storage consultant as the scenario can easily become very complex. Storage is one place where it pays to be extra diligent as so many things depend upon it, mistakes are so easy to make and the flexibility to change it after the fact is so low.
 
The issue with RAID 6 and non-enterprise drives is the rebuild: all that parity writing kills drives, which is why RAID 5 is useless and you are guaranteed to lose data. One single bad block in a RAID 5 and there goes your entire array, which with non-enterprise drives is likely going to happen; and unless you bought your drives from different vendors, there's a good chance that if you have one bad drive you could have more.

QFT. And to follow up, the URE numbers are fairly accurate. When you have consumer drives experiencing an error every 10^14 bits read, or every ~12 TB, RAID 5 does not make much sense with such large drives (http://www.raid-failure.com/raid5-failure.aspx). So when you have over ~6 TB to rebuild on consumer drives, rebuilds are expected (with significant probability) to fail.
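A rough sketch of the math behind that calculator (a big simplification: it assumes independent errors at exactly the rated URE rate):

Code:
def rebuild_success_probability(data_to_read_tb, ure_rate_bits=1e14):
    # Probability of reading the given amount of data without a single URE,
    # assuming independent errors at exactly the rated rate.
    bits_to_read = data_to_read_tb * 1e12 * 8   # TB -> bits
    p_error_per_bit = 1.0 / ure_rate_bits
    return (1.0 - p_error_per_bit) ** bits_to_read

# A 4x 6 TB RAID 5 rebuild has to read the 3 surviving drives = 18 TB
print(rebuild_success_probability(18, 1e14))   # ~0.24 with 10^14 consumer drives
print(rebuild_success_probability(18, 1e15))   # ~0.87 with 10^15 drives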

Edit: Add to that the fact that rebuilds are also more stressful with parity RAID and take more time to complete.

I did similar research to OP when I was looking to expand my file server, and on Spiceworks the consensus was WD Red = go RAID 10 and avoid parity. When I ran the numbers for myself, a large RAID 10 array of 6 TB Reds was not worth it for me, because it would be more costly than going with 4 TB RE drives (at about the same per-drive price) in RAID 6.
 
What about using the Reds for just storage, and another array for parity if you have the RAID controller and drives, would that work? Or, simply RE drives for parity, and Reds for storage? Do the controllers let you decide which drive is which?
I have 5x 5 TB coming for RAID 6, but now I'm not so sure....
 
What do you mean by "just storage"? Bit confused by that first sentence. Parity RAID is suited to enterprise drives with a URE rating better than 1 per 10^14 bits; it shouldn't be used on consumer drives, which are rated at 1 per 10^14.
 
ToddW2 said:
What about using the RED for just storage, and another array for parity if you have the RAID controller and drives, would that work?

RAID 3 separates the data drives from the parity drive. And only one parity drive IIRC. I haven't seen that supported in a long time.

I'm off to do some lurking on other sites, but which controller would people here recommend for RAID 10? Or does ESX support the fakeRAID that the motherboard undoubtedly implements?
 
If you go with RAID 10 and lose the "wrong" pair of drives you will sorely regret not using RAID 6. On the flip side, if you go with hardware RAID 6 and use a controller that drops drives when it encounters UREs, you could end up with a lot of work to recover the data if you're unlucky.

I will agree with drescherjm that in my experience UREs are much less common than the specifications state on relatively new and healthy drives.
 
I would question anything he says making statements like this.

As you should. Those statements are complete nonsense. They are stating the opposite of the truth.

Here are some facts.

1) RAID 6 of four drives is more fault-tolerant than RAID 10 of four drives.

a) If one drive fails, both arrays will be able to recover as long as no more drives fail, but if another drive fails during recovery, RAID 6 will still be able to recover, but RAID 10 has at least a 33% chance of losing the array when the second drive fails (greater than 33% because the mirror drive that is being read during recovery is more likely to fail than the other two)

b) If two drives fail, then RAID 6 will still be able to recover as long as no more drives fail. Of the six ways that two drives out of four can fail, RAID 10 will only be able to recover with four of them. In other words, there is a 33% chance that RAID 10 of four drives will NOT be able to recover from two drive failures
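To make 1b concrete, here is a minimal enumeration sketch (the mirror-pair names A1/A2 and B1/B2 are just illustrative):

Code:
from itertools import combinations

# A 4-drive RAID 10 built from mirror pairs (A1, A2) and (B1, B2).
# The array is lost only when both members of the same pair fail.
pairs = [("A1", "A2"), ("B1", "B2")]
drives = [d for pair in pairs for d in pair]

fatal = sum(
    1
    for failed in combinations(drives, 2)
    if any(set(pair) <= set(failed) for pair in pairs)
)
total = len(list(combinations(drives, 2)))   # 6 possible two-drive failures
print(f"{fatal} of {total} two-drive failures lose the array")   # 2 of 6 = 33%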

2) As long as the RAID 6 implementation is not terrible, the large sequential I/O speeds will be about the same for a four-drive RAID 6 or RAID 10. The RAID 10 will be faster than the RAID 6 for small (partial-stripe) writes.

a) With large sequential reads, the RAID 6 will be reading from 4 drives in parallel, but 50% of the data on each drive is parity, so the net result is 50% of the speed of 4 drives, which is 2x the speed of a single drive. RAID 10 will only be reading from 2 drives in parallel and there is no parity, so the speed is 2x the speed of a single drive.

b) The reason the RAID 6 is slower for small, partial-stripe writes is that RAID 6 has to read part of the stripe before it can rewrite the parity, but RAID 10 never has to do a read in order to do a write.
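As a rough illustration of 2b, here is the textbook disk-operation count for one small (sub-stripe) write; real controllers vary (write-back caching, full-stripe writes), so treat these as illustrative minimums rather than guarantees:

Code:
# RAID 6 read-modify-write: read old data + old P + old Q,
# then write new data + new P + new Q.
raid6_ops = 3 + 3    # 6 disk operations per small write

# RAID 10: write the block to both members of the mirror pair; no reads needed.
raid10_ops = 2

print(f"RAID 6: {raid6_ops} ops per small write, RAID 10: {raid10_ops} ops")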

3) Encountering a URE in a sector during a rebuild does not automatically kill the array for any decent RAID 6, RAID 10, or RAID 5 implementation. There is no reason why it should. If there is no other source for the data in the bad sector, then the rebuild operation can simply continue to the next sector. You will have a hole in the filesystem of the recovered data, but unless you are very unlucky with the location of the hole, the vast majority of your data should still be intact.

a) For any decent implementation of RAID 10 or RAID 6, there is no reason why a RAID 10 recovery should be able to skip over a bad sector while a RAID 6 recovery cannot. With either type of array, the recovery should simply skip over the bad sector and continue recovering the rest of the array. Of course, with RAID 10 this will ALWAYS create a hole in the recovered data since it is copying over from a single drive. With RAID 6, if you have one drive failure and a URE on one of the remaining drives during recovery, RAID 6 will still be able to recover all the data since the parity is distributed.

b) If you have a RAID 6 implementation that marks a drive as a complete failure when there is a single URE, but not for RAID 10, then that could be a reason to choose RAID 10 over RAID 6. Alternatively, you could make sure your HDDs are set to return a read error in a reasonably short period of time so that the RAID array does not drop the drive on a URE. Or better yet, you could choose a good RAID 6 implementation that does not mark drives as failed just because of a URE (linux md-RAID6 or ZFS RAIDZ2, for example)
 
As you should. Those statements are complete nonsense. They are stating the opposite of the truth.

Here are some facts.

1) RAID 6 of four drives is more fault-tolerant than RAID 10 of four drives.

a) If one drive fails, both arrays will be able to recover as long as no more drives fail, but if another drive fails during recovery, RAID 6 will still be able to recover, but RAID 10 has at least a 33% chance of losing the array when the second drive fails (greater than 33% because the mirror drive that is being read during recovery is more likely to fail than the other two)

b) If two drives fail, then RAID 6 will still be able to recover as long as no more drives fail. Of the six ways that two drives out of four can fail, RAID 10 will only be able to recover with four of them. In other words, there is a 33% chance that RAID 10 of four drives will NOT be able to recover from two drive failures

2) As long as the RAID 6 implementation is not terrible, the large sequential I/O speeds will be about the same for a four-drive RAID 6 or RAID 10. The RAID 10 will be faster than the RAID 6 for small (partial-stripe) writes.

a) With large sequential reads, the RAID 6 will be reading from 4 drives in parallel, but 50% of the data on each drive is parity, so the net result is 50% of the speed of 4 drives, which is 2x the speed of a single drive. RAID 10 will only be reading from 2 drives in parallel and there is no parity, so the speed is 2x the speed of a single drive.

b) The reason the RAID 6 is slower for small, partial-stripe writes is that RAID 6 has to read part of the stripe before it can rewrite the parity, but RAID 10 never has to do a read in order to do a write.

3) Encountering a URE in a sector during a rebuild does not automatically kill the array for any decent RAID 6, RAID 10, or RAID 5 implementation. There is no reason why it should. If there is no other source for the data in the bad sector, then the rebuild operation can simply continue to the next sector. You will have a hole in the filesystem of the recovered data, but unless you are very unlucky with the location of the hole, the vast majority of your data should still be intact.

a) For any decent implementation of RAID 10 or RAID 6, there is no reason why a RAID 10 recovery should be able to skip over a bad sector while a RAID 6 recovery cannot. With either type of array, the recovery should simply skip over the bad sector and continue recovering the rest of the array. Of course, with RAID 10 this will ALWAYS create a hole in the recovered data since it is copying over from a single drive. With RAID 6, if you have one drive failure and a URE on one of the remaining drives during recovery, RAID 6 will still be able to recover all the data since the parity is distributed.

b) If you have a RAID 6 implementation that marks a drive as a complete failure when there is a single URE, but not for RAID 10, then that could be a reason to choose RAID 10 over RAID 6. Alternatively, you could make sure your HDDs are set to return a read error in a reasonably short period of time so that the RAID array does not drop the drive on a URE. Or better yet, you could choose a good RAID 6 implementation that does not mark drives as failed just because of a URE (linux md-RAID6 or ZFS RAIDZ2, for example)

1a Except if the added stress causes another failure.

1b Have fun when you hit a URE with 2 drives remaining and have to restore the array from a backup.

3 Of course it does. Tell me how to XOR 0s and 1s with an unknown variable? Next you will say it's possible to divide by zero.

3a Because parity relies on XORing all disks to zero. In the RAID 1 spindle it's just a direct copy. UREs don't matter in RAID 10, just parity RAID.

3b Again you don't understand that UREs only affect parity RAID.

It's funny how many of you are stating how IT professionals with decades more experience are incorrect. As I said before, I've done the research on spiceworks and every post I've seen regarding wd reds or 4 disk arrays, it's always been: avoid parity RAID and stick with 10.
 
It's funny how many of you are stating how IT professionals with decades more experience are incorrect. As I said before, I've done the research on spiceworks and every post I've seen regarding wd reds or 4 disk arrays, it's always been: avoid parity RAID and stick with 10.

The fact is that you are posting nonsense. Your "research" is worthless. And you are making bad assumptions about the relative knowledge of people here versus "spiceworks".

But it is not necessary to make a decision based on who claims to have more experience (or knowledge). Computers are consistent and logical in their behavior, and most questions about computers can be settled with facts and understanding. I explained the statements I made. If you think any of the statements are incorrect, then CLEARLY and IN DETAIL explain what you believe is incorrect and then I will respond. What you have written so far lacks both clarity and detail, so please do not say that you have already done as I asked.
 
The fact is that you are posting nonsense. Your "research" is worthless. And you are making bad assumptions about the relative knowledge of people here versus "spiceworks".

But it is not necessary to make a decision based on who claims to have more experience (or knowledge). Computers are consistent and logical in their behavior, and most questions about computers can be settled with facts and understanding. I explained the statements I made. If you think any of the statements are incorrect, then CLEARLY and IN DETAIL explain what you believe is incorrect and then I will respond. What you have written so far lacks both clarity and detail, so please do not say that you have already done as I asked.

It isn't nonsense. Look at the people like yourself that post opinions and don't even know how RAID 10 works with regards to UREs or how parity RAID functions.
 
I posted facts and explanations. You posted nonsense. Another difference is that I explained clearly and in detail how these things work and why. You make references to dubious authorities who make vague statements without supporting explanations.

Your research is worthless and has led you down a path of confusion. As I already said, I am willing to help you, but you need to post CLEARLY and IN DETAIL about what part of what I wrote you disagree with. Then I will explain further.
 
Another difference is that I explained clearly and in detail how these things work and why.

3) Encountering a URE in a sector during a rebuild does not automatically kill the array for any decent RAID 6, RAID 10, or RAID 5 implementation. There is no reason why it should. If there is no other source for the data in the bad sector, then the rebuild operation can simply continue to the next sector. You will have a hole in the filesystem of the recovered data, but unless you are very unlucky with the location of the hole, the vast majority of your data should still be intact.

a) For any decent implementation of RAID 10 or RAID 6, there is no reason why a RAID 10 recovery should be able to skip over a bad sector while a RAID 6 recovery cannot. With either type of array, the recovery should simply skip over the bad sector and continue recovering the rest of the array. Of course, with RAID 10 this will ALWAYS create a hole in the recovered data since it is copying over from a single drive. With RAID 6, if you have one drive failure and a URE on one of the remaining drives during recovery, RAID 6 will still be able to recover all the data since the parity is distributed.

----------------------------------------------------------------------------------

Proof is right there. As I stated previously, how do you expect the controller to XOR data with an unknown variable? Please explain.
 
No. I did not ask you for links to nonsense from dubious sources. If you want to dispute what I have written here, then respond yourself, CLEARLY and IN DETAIL to what you believe is wrong. I have repeatedly explained this. Why do you fail to respond CLEARLY and IN DETAIL to the points you disagree with?

As I already explained, if there is a bad sector and no other way to recover that sector, then the RAID recovery algorithm can simply skip that sector and go on to the next. There is no theoretical difference between RAID 6 and RAID 10 on this issue. There could be differences based on RAID implementation, but that needs to be discussed on a specific case-by-case basis.
 
I'll choose an example and describe a recovery process for RAID 6 and RAID 10.

So, a four drive RAID 6 or RAID 10 has one drive completely fail (not just a bad sector -- nothing can be read). The failed drive has been replaced and the RAID is being rebuilt.

1) RAID 10

Of the three drives remaining from the original array, only one of them has the data needed to recover the data on the failed drive. So the recovery algorithm simply has to copy that one drive to the replacement drive. If there are any sectors with UREs, then obviously those sectors cannot be copied to the replacement drive. Whether the specific RAID implementation stops the copy upon encountering a URE, or whether it simply skips the bad sector and keeps copying, obviously can only be discussed when a specific RAID implementation is named. If the drive that is being copied from fails during the recovery, then the rest of the data is lost.

2) RAID 6

All three of the drives remaining from the original array contain information that can be used to recover the data on the failed drive. However, for each stripe, only two of the three drives are absolutely necessary to recover the data on the failed drive (i.e., there is an extra drive's worth of redundancy in the RAID 6 array after a single drive has failed). So the recovery algorithm will normally read the components of each stripe from three drives and use the information to recover the data to the replacement drive. If a URE is encountered on any of the drives during the recovery, then the stripe can still be recovered from only two drives. If a drive fails during the recovery, then the rest of the data can still be recovered from the remaining two drives. Only if there is a second drive failure and then a URE on one of the two remaining drives is there any data loss (or a third drive failure, obviously). Again, whether a specific RAID implementation fails the entire drive upon a URE, or whether it simply skips the bad sector and goes on to the next, depends upon the specific RAID implementation.
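To make the "skip the bad stripe and keep going" behaviour concrete, here is a minimal single-parity (RAID 5-style XOR) rebuild sketch. Real RAID 6 adds a second, Reed-Solomon-style syndrome, and none of this reflects any particular controller's firmware; the point is only that a URE can produce a hole rather than aborting the rebuild:

Code:
from functools import reduce

URE = None   # marker for an unreadable sector

def rebuild_stripe(surviving_chunks):
    """XOR the surviving chunks to reconstruct the missing one.
    Returns None (a 'hole') if any surviving chunk is unreadable."""
    if any(chunk is URE for chunk in surviving_chunks):
        return None        # log it, zero-fill the target, continue rebuilding
    return reduce(lambda a, b: a ^ b, surviving_chunks)

# Three surviving drives, one failed; each list holds one chunk per stripe.
drive_a = [0x11, 0x22, 0x33]
drive_b = [0x44, URE, 0x66]    # URE in stripe 1
drive_c = [0x77, 0x88, 0x99]

rebuilt = [rebuild_stripe(stripe) for stripe in zip(drive_a, drive_b, drive_c)]
print(rebuilt)   # stripe 1 is a hole; stripes 0 and 2 are recovered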

GENERAL COMMENTS

Note that some hardware RAID implementations have a timeout value (some configurable, some not), and if they try to read from a drive without a response for longer than the timeout value, they drop the drive from the array as completely failed. There is no theoretical reason why this timeout value should be different for RAID 6 or RAID 10, but obviously it could vary from one RAID implementation to another. Also, some HDDs allow setting a TLER, ERC, or CCTL parameter which controls how long the drive will spend trying to read a marginal or bad sector. By default, a lot of desktop drives will try for as long as a few minutes. But with TLER, ERC, or CCTL (more common with enterprise drives or RAID-marketed drives), this value can be reduced to 7 seconds or less, which can be desirable for RAID implementations that default to (or are fixed at) a 7-second timeout before failing a drive.

Note that software RAID implementations, such as linux md-RAID or ZFS, will not normally timeout at all. They will wait for the read from the drive to return with either the data or an error. If a read-error is returned for a sector, then they simply continue with the next sector. If there is not enough redundancy left in the stripe to recover the data for that stripe, then the replacement disk will have a hole (essentially a zeroed-sector) written for the corresponding location to the read-error sector.
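For the TLER/ERC/CCTL point above, here is a hedged example of capping a drive's error-recovery time with smartmontools' SCT Error Recovery Control. The values are in tenths of a second (70 = 7 seconds), not every drive supports the command, and /dev/sdb is only a placeholder device name:

Code:
import subprocess

def set_erc(device, read_ds=70, write_ds=70):
    # Ask the drive to give up on a bad sector after read_ds/write_ds tenths
    # of a second instead of retrying for minutes. Must be run as root.
    subprocess.run(
        ["smartctl", "-l", f"scterc,{read_ds},{write_ds}", device],
        check=True,
    )

set_erc("/dev/sdb")   # check afterwards with: smartctl -l scterc /dev/sdb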
 
You fail to account for the fact that RAID 6 will be stressing more than a single drive during a rebuild, and it's also much slower, which gives more time for an additional drive to fail. You know, as I stated before, "1a Except if the added stress causes another failure," which you can't seem to grasp as being a clear response to your points that I (and pretty much all the IT professionals on Spiceworks) disagree with.
 
You fail to account for the fact that RAID 6 will be stressing more than a single drive during a rebuild, and it's also much slower, which gives more time for an additional drive to fail. You know, as I stated before, "1a Except if the added stress causes another failure," which you can't seem to grasp as being a clear response to your points that I (and pretty much all the IT professionals on Spiceworks) disagree with.

Nonsense. Reading from a single drive with complete array loss if the drive fails is more dangerous than reading from multiple drives and being able to tolerate another drive failure. And the recovery speed is limited only by the write speed of the drive being recovered for a decent RAID implementation. So the recovery time is the same for RAID 10 and RAID 6 for any decent RAID implementation.

You keep making comments like "can't seem to grasp" but you have no evidence to show that I have failed to grasp anything. You have just vague claims that some dubious sources disagree with the facts that I have clearly stated.

Why are you unable to CLEARLY and IN DETAIL explain why you disagree with the facts that I have stated?
 
Now I know you're 100% trolling. I admit it, you got me.

When you repeatedly fail to produce any specific information to support your ridiculous claims, and only make wild accusations against people who are clearly explaining things to you, then it is obvious who the real troll is.
 
You can't support your statements. Show me one of your posts where you provided a reference. Show me an article which says RAID 6 rebuilds are the same speed as RAID 10 or one where reading from a single drive for RAID 10 is worse than reading from every drive in RAID 6. How does that even make sense to you logically?
 
http://searchstorage.techtarget.com/tip/RAID-6-vs-RAID-10

"RAID 10 is faster to rebuild.

The major weakness of RAID 6 is that it takes a long time to rebuild the array after a disk failure. With even a moderate-sized array, rebuild times can stretch to 24 hours, depending on how many disks are in the array and the capacity of the disks. Since RAID 6 users tend to use the biggest disks they can afford, this is an increasingly serious limitation for RAID 6."

http://www.computerweekly.com/news/...ing-the-best-RAID-level-for-your-organisation

"RAID 10 rebuild times are faster.RAID 10 has among the fastest rebuild times possible because it only has to copy from the surviving mirror to rebuild a drive, which can take as little as 30 minutes for drives of approximately 1 TB. The key drawback of RAID 6 (vs RAID 10) is that the time it takes to rebuild the array after a disk failure is lengthy because of the parity calculations required, often up to 24 hours with even a medium-sized array."

http://www.techrepublic.com/blog/th...d-6-or-raid-1-plus-0-which-should-you-choose/

"Faster rebuild speed. Rebuilding a failed disk that takes part in a mirror is a much faster process than rebuilding a failed disk from a RAID 6 array. If you implement a hot spare, the rebuild process can go quite quickly, making it less likely that you'll suffer the simultaneous loss of a second disk."

http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/

"Long rebuild times. As disk capacity grows, so do rebuild times. 7200 RPM full drive writes average about 115 MB/sec - they slow down as they fill up - which means about 5 hours minimum to rebuild a failed drive. But most arrays can't afford the overhead of a top speed rebuild, so rebuild times are usually 2-5x that."

http://www.prepressure.com/library/technology/raid

"This is complex technology. Rebuilding an array in which one drive failed can take a long time."

Edit:

http://www.storagecraft.com/blog/practical-raid-decision-making/ (5 days ago)

"RAID 6 is generally safe and fast but never as safe or as fast as RAID 10. RAID 6 specifically suffers from write performance so is very poorly suited for workloads such as databases and heavily mixed loads like in large virtualization systems. RAID 6 is cost effective and provides a heavy focus on available capacity compared to RAID 10. When budgets are tight or capacity needs dominate over performance, RAID 6 is an ideal choice. Rarely is the difference in safety between RAID 10 and RAID 6 a concern except in very large systems with consumer-class drives. RAID 6 is subject to additional risk with consumer class drives that RAID 10 is not affected by, which could warrant some concern around reliability in larger RAID 6 systems such as those above roughly 40TB when consumer drives are used."
 
You can't support your statements. Show me one of your posts where you provided a reference. Show me an article which says RAID 6 rebuilds are the same speed as RAID 10 or one where reading from a single drive for RAID 10 is worse than reading from every drive in RAID 6. How does that even make sense to you logically?

Don't be ridiculous. Providing a reference is only useful if the reference is both specific and verifiable. Such references are almost non-existent for the subject under discussion. You have certainly not provided any specific and verifiable references, only vague appeals to dubious sources. Quoting web pages for vague situations or obviously different situations than the one under discussion is not helpful.

What is useful is to clearly explain the details and facts of the issue. Which I have done, and you have repeatedly failed to do.

What we are discussing is not terribly difficult to understand. I already explained how a rebuild works for RAID 6 and RAID 10. There is nothing that forces the rebuild speed of RAID 6 to be lower than RAID 10. Yes, there are parity computations to make, but most RAID implementations can easily make these computations at many hundreds of megabytes per second. There are plenty of benchmarks that can measure the read speed of a degraded RAID 6 array. I have run many over the years, and virtually all of them show that RAID 6 implementations can read a degraded array at far more than 100 MB/sec. So you are only limited by the write speed of the drive being recovered in the situation we are discussing. Of course there may be some poorly designed or underpowered RAID 6 implementations that cannot read from a four-drive RAID 6 array degraded to three drives fast enough to saturate the write speed of a single drive, but they are few and far between.

As for the difference in array-loss probability between a rebuild that reads a single drive (where one more drive failure loses the array) and one that reads three drives (where it takes two more drive failures to lose the array), the math is not difficult. The hardest part is agreeing on single-drive failure probabilities when the entire drive is read. The annual failure rate (AFR) for most HDDs is below 5% while they are still under warranty, and most HDDs are rated for hundreds or thousands of complete drive writes and reads over their rated lifetime. So we can estimate that the probability of failure when a single drive is fully read is well under 5%. If we assume that a typical year might have the drive read 5 times with an AFR of 5%, then we can estimate a 1% chance of failure when a drive is fully read.

For a RAID 10 recovery, we have one drive being fully read, and if that drive fails, the RAID is lost. So a 1% chance of array loss.

For a RAID 6 recovery, we have three (or two) drives being fully read, so the chance of no drives failing is 97.03%, of one drive failing is 2.94%, and of two or more drives failing is 0.0298%. Only if two or more drives fail is the array lost. So that is only a 0.0298% chance of array loss.
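A quick check of those numbers as a minimal sketch (same simplifying assumptions: independent failures at 1% per full-drive read):

Code:
from math import comb

p = 0.01   # assumed chance a drive fails during one full read
n = 3      # surviving drives read during a 4-drive RAID 6 rebuild

def binom(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(f"no failures:          {binom(0):.4%}")                  # ~97.03%
print(f"one failure:          {binom(1):.4%}")                  # ~2.94%
print(f"two or more failures: {1 - binom(0) - binom(1):.4%}")   # ~0.03%

# A RAID 10 rebuild reads exactly one drive, so array loss is simply p = 1%.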
 
None of your references are both verifiably correct and applicable to the specific case being discussed. You are simply quoting random statements from web pages with no understanding of the details of the situation under discussion, which only shows your confusion on the subject.

I have not wasted my time searching for web pages that are verifiably correct and cover the specific case we are discussing because:

1) It would take a lot of time to find such references, if they even exist

2) It is far better to clearly explain the facts of the situation, as I have done, rather than rely on vague appeals to dubious sources

3) I have seen the benchmarks on many systems I have used many times over the years

Again, I challenge you to CLEARLY and IN DETAIL refute any of the facts I have explained in this thread. Vague references to dubious and unverifiable sources that are not covering the specific case under discussion are not helpful.

Why do you continue to resort to personal attacks rather than providing a clear and detailed response to the points that you disagree with?
 
Joe Comp and DarkReaper... congrats to both of you, you have managed to turn this thread into a pissing match... Since neither of you is interested in helping the OP, do not post anything further in this thread.
 