Please help me choose the best hard drives for an ARECA RAID setup

TinoD (n00b, joined May 31, 2013, 23 messages)
I'm trying to decide on which drives are best for my raid card and build.

I have an ARECA 1882ix-12 card and I'd like to set up a new 8-12 disk array in RAID 3 for video editing.

The few drives I'm looking at are the HGST Deskstar 4TB drives: http://www.newegg.ca/Product/Product.aspx?Item=N82E16822145912

the HGST Ultrastar 4TB drives: http://www.newegg.ca/Product/Product.aspx?Item=N82E16822145622

and the WD SE drives: http://www.newegg.ca/Product/Produc...m_re=ppssEnterpriseHDD-_-22-236-521-_-Product

or the WD RE drives: http://www.newegg.ca/Product/Produc...m_re=ppssEnterpriseHDD-_-22-236-353-_-Product


I'm finding it really confusing how the manufacturers have divided up their product lines, with the NAS-specific labeling, SE vs. RE, etc.

Both the WD SE and RE show a 5 year warranty but other than price I can't see a difference between these.

The Ultrastar and Deskstar differ by $100 per drive and carry a 3-year vs. 5-year warranty, along with a 1 million vs. 2 million hour MTBF rating (I honestly have no idea what that means).

So if anyone can please shed some light on this and recommend which one I should go with, it would be appreciated. It's a pricey purchase for me, so a difference of $1000 or more on the drives is a big deal. But if the 5-year warranty is worth spending extra for, I'll go for it.

Thanks in advance for your help!
 
One question.

Why have you chosen to overlook the Western Digital Red series?

4TB: http://www.newegg.com/Product/Produ...re=Western_Digital_Red-_-22-236-599-_-Product

5TB: http://www.newegg.com/Product/Produ...re=Western_Digital_Red-_-22-236-738-_-Product

6TB: http://www.newegg.com/Product/Produ...re=Western_Digital_Red-_-22-236-737-_-Product

They're roughly half the price, spec'ed for a 24x7 duty cycle, and meant SPECIFICALLY to go in a NAS/raid box.




If you're REALLY concerned about having the drives vetted, go for the Red Pro series.

4TB: http://www.newegg.com/Product/Produ...re=Western_Digital_Red-_-22-236-739-_-Product

The standard Reds are 3 years. The Red Pros are 5 years.



MTBF: Mean Time Between Failures
https://en.wikipedia.org/wiki/Mean_time_between_failures

Basically, a million hours is ~114 years when not accounting for any other factors. Essentially, a perfectly engineered specimen in an ideal environment would have a mechanical lifetime equivalent to this. Realistically, the drive will develop non-mechanical problems well before this.
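To put the spec-sheet numbers in more intuitive terms, here's a rough sketch. It assumes the standard constant-failure-rate (exponential) model that MTBF figures are usually based on; the annualized failure rate (AFR) is closer to what you'd actually see across a fleet of drives:

```python
import math

HOURS_PER_YEAR = 24 * 365  # 8,760 hours of 24x7 operation

def mtbf_summary(mtbf_hours: int) -> tuple[float, float]:
    """Return (equivalent years of continuous operation, annualized failure rate)."""
    years = mtbf_hours / HOURS_PER_YEAR
    # AFR under a constant failure rate: the chance a given drive
    # fails within one year of 24x7 duty.
    afr = 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)
    return years, afr

for mtbf in (1_000_000, 2_000_000):
    years, afr = mtbf_summary(mtbf)
    print(f"MTBF {mtbf:,} h ~= {years:.0f} years, AFR ~= {afr:.2%}")
    # -> MTBF 1,000,000 h ~= 114 years, AFR ~= 0.87%
    # -> MTBF 2,000,000 h ~= 228 years, AFR ~= 0.44%
```

So the 1M vs. 2M MTBF difference works out to roughly 0.87% vs. 0.44% expected annual failures per drive, all else being equal.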
 
I'm shopping around carefully for new drives as well. I had a series of failures with Seagate Barracuda drives a few years ago (rated at 0.75M hours MTBF, I believe), then upgraded to Seagate Constellation (ES? 1.4M hours MTBF) and there have been no hitches running 24x7 for 3 years, with light-to-moderate loads, good power and cooling, sleep disabled, etc.

There have been a couple of long-term reliability studies. The trouble is they may not test the exact drives you're interested in, or match your usage characteristics. This one suggests that HGST as a brand is much more reliable than Seagate, although they're not testing the enterprise drives from either company, which you might prefer if reliability is a criterion.

https://www.backblaze.com/blog/best-hard-drive/

I think google did another study a few years ago.
 
Why have you chosen to overlook the Western Digital Red series? [...]

I haven't looked at the Reds. I actually have 8 of them in my Synology 1813 (which I'm already not that thrilled about; one failed recently after a few months and was replaced, which cost me another shipping fee).

Again, the various colors, categories, etc. that these drive manufacturers have split their lines into only makes it more confusing. I have no idea whether drives marketed for NAS use behave the same on an ARECA RAID card.


I'm shopping around carefully for new drives as well. [...]

Thanks, I'll look into these some more.

btw, why RAID 3? that's ... unusual. Why not RAID 50?

I've tried searching and asking online in the past but never got a clear answer. I found sites claiming that RAID 3, which is apparently an ARECA thing, is a good choice for video editing, so I went with that option. I actually tried asking this again recently in the ARECA thread but no one's replied.

Right now I'm back to RAID 3, if 50 is a better choice for performance I'd be up for trying it.
 
I would think that if there's a problem with your array, then RAID 3 will be much harder to recover from because it stripes data at the byte level with a dedicated parity disk (AFAIK), rather than distributing whole blocks between the drives. That could also interact badly with Advanced Format drives.

Do you know what performance you're trying to achieve, say, in terms of bandwidth? And... what failure scenarios you're actually protecting against?
 