RAID 5 nearly useless for 1.5 TB drives?

I only asked since I'm seriously considering going with WHS, having hit a brick wall for video storage. All my other data would fit just fine, with room for 100% growth, on a 1 TB volume. I do remember seeing the SAT2 in his worklog. I should probably make my own thread for this, but since the question was on topic I figured I'd go for it.

The main reason I see WHS as better than RAID 5 is that you get one giant pool of storage, but you only lose the data that was on the drives that failed. Say you have ten drives with data duplicated across them: if two drives that held the same copies fail, you've only lost that data. With a RAID 5 array, those same two failures lose everything. And if you lose just one drive and put in a replacement, the rebuild isn't stressing every other drive either.
 
I'm having a problem with this URE scare. If it's such an issue, tell me how to reproduce it. Any suggestions are fine.

The claim is one unrecoverable read error per ~12 TB read?

You can call it a claim, but this is what the drive manufacturers themselves are stating. It isn't some magic number made up in a back room. And given that we've seen multiple RAID 5 arrays fail in this forum alone over the past couple of weeks, are we really that far off?

It is also a probability. The URE rate of 10^14 is the minimum it should be; your drive could end up being 10^16, or even better than that. Only once we have a large sample of drives across many lots could we validate or throw out the number.

BTW, as for your test... I have done it in Windows, just in another form: moving multiple large torrent downloads from one directory to another and then rechecking them. More often than not, at least one of the files has to re-download a failed piece.
 
Well, for all we know the manufacturers' specs are made up in some back room. How often do you see how the numbers are arrived at? Real-world studies never achieve the low AFR/MTBF numbers manufacturers state.

Any links to these failed RAID 5 arrays where one drive failed, a rebuild began, and a second drive got kicked out during the rebuild?
 

Check out the HP support links I posted earlier in the thread. A URE won't cause a second drive to drop out of the array; it will only cause the rebuild to fail. I'm not saying a rebuild can't cause a second drive to fail, or that it doesn't increase the chances of that happening.

It will, however, leave you having to copy all your data off the degraded array to another storage area, recreate the array manually, and then copy the data back.
 
Notwithstanding that the manufacturer's spec may or may not be accurate, the 1 TB WD Caviar Black drives list the NRE as 1 per 10^15 bits read. That equates to about 125 TB read before encountering such an error, which brings the failure rate down to only about 1 out of every 10 rebuilds. I suspect disks will soon be at 1 per 10^16.
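Both figures in that post can be checked with a quick calculation. This sketch (mine, under the same independent-bit-error assumption as above) derives the 125 TB mean and the roughly 1-in-10 rebuild failure rate from the 10^15 spec:

```python
bits_per_tb = 8e12   # 1 TB = 10^12 bytes = 8e12 bits
nre_bits = 1e15      # WD Black spec: 1 error per 10^15 bits read

# Mean data read between errors:
mean_tb_between_errors = nre_bits / bits_per_tb
print(mean_tb_between_errors)  # 125.0 TB, matching the post

# Probability that a rebuild reading ~12 TB hits at least one error:
p_fail = 1 - (1 - 1 / nre_bits) ** (12 * bits_per_tb)
print(round(p_fail, 3))  # roughly 0.09, i.e. ~1 rebuild in 10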
 

That actually puts the WD Black at a great price point: at $150, it has the same NRE rating as Seagate's 1 TB ES.2, which runs for a bit over $200.

Ninja:

Looking at more spec sheets, I'm finding that other drives are already at 1 per 10^15 as well. I can't find any proof that Seagate's desktop drives are, but the WD10EACS is.

http://www.westerndigital.com/en/library/sata/2879-701229.pdf
 

An error every 125 TB seems a lot more realistic than one every 12 TB.
 