RAID HDD Choice

Mastaba
What HDD would best suit a 24*4TB RAID 6 array?

-Hitachi 7K4000 Ultrastar.
-Hitachi 7K4000 Deskstar.
-Hitachi 5K4000 Deskstar.

-WD Red.
-WD Green/Black, or whatever else.

-Another HDD brand, but I don't think there are any other 4TB HDDs (I know Toshiba announced some).

The Hitachi Ultrastar looks to be certified by Areca, which is a good point as I plan to use an Areca controller.


I have three main questions:

1/ What kind of drive? Are the Ultrastars worth their price vs the cheaper Deskstars?
Does the WD Red series bring the same advantages for RAID as the enterprise/nearline series from the other HDD makers?
What are these advantages? I know they have different TLER settings, but what else? And do these differences really change anything?


2/ What RPM? Does 7200RPM improve things that much?
Does 5400RPM reduce noise/power consumption that much?

The problem is that since I can't test two 24-drive arrays, one of 5400 and one of 7200RPM drives, I can't know whether the performance increase is worth the added price/noise/consumption, or whether using 5400RPM drives will really reduce the noise/heat/consumption without hurting performance too much.


3/ How do I buy 24 drives at once without risking a cascade failure if all the drives come from the same (faulty) production batch?
How do you usually proceed when building your arrays?

Buying each drive separately from a different reseller each time would be annoying and more expensive (shipping times the number of drives, and some resellers' prices are higher, so I would have to find 24 different shops selling the drive), and in the end it still wouldn't guarantee that the drives come from really different production batches.

Is there some way to buy a lot of drives for the purpose of building a large array?
 
100TB of storage and you don't know how to buy hard drives? 100TB of storage and you don't know the performance difference between 5400 and 7200RPM drives?

---

I would buy all the drives from the same place in one order.

I would look at the performance numbers my application needed and buy based on those.

---

Where do you get 100TB of data?
 
I'd recommend the Hitachi ultrastar drives for your areca card.

The Ultrastar drives are designed with more vibration tolerance, so they can fit in a standard server rackmount chassis (with the drive spindles very close together). Ultrastar drives are designed for 24/7 operation, and have better error rates and a longer warranty. Hitachi has a comparison PDF here.

If you're using a rackmount server I think it's a good idea to do this. If you're using a desktop case with good vibration dampening you might get by without spending the extra bucks.

The difference between 7200 and 5400RPM drives is the IOPS number, which is the number of operations the drive can do per second. If you're going to be using the storage pool for virtual machines or databases, the faster drives can get you more performance.
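For a rough sense of where that IOPS gap comes from, here's a back-of-the-envelope sketch; the 8.5ms average seek time is an assumed typical value, not a spec for any particular drive:

```python
# Rough random-IOPS estimate for a single HDD: each random I/O costs an
# average seek plus half a rotation (the average rotational latency).
def random_iops(rpm, avg_seek_ms=8.5):
    half_rotation_ms = 60_000 / rpm / 2  # ms per half revolution
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(round(random_iops(7200)))  # ~79 IOPS
print(round(random_iops(5400)))  # ~71 IOPS
```

Per drive the random-I/O gap is only around 10%; sequential throughput scales more directly with RPM and areal density.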

To answer your third question (interrogation? English must not be your native language)

Don't feel bad about buying drives in batches. Just be sure to test each drive and then stress test your array for a significant period of time.

When vendors order drives they are usually packaged in big things that are similar to egg cartons, and if you order enough drives they will just repackage them in the same cartons and ship them out to you.
 
If you are looking at Ultrastars, there are also Enterprise Western Digital and Seagate drives.

ST4000NM0033 - Seagate ES3 4TB SATA
ST4000NM0023 - Seagate ES3 4TB SAS
WD4000FYYZ - Western Digital RE 4TB SATA
WD4001FYYG - Western Digital RE 4TB SAS
 
@hotcrandel
Thanks for the advice!
Indeed, English is not my native language; sorry for the errors.

Yes, I'm planning to use them inside a rackmount case; I thought of the RM424pro.
Do you think a desktop case would be safer with regard to vibrations?


@cactus
Thanks for the references!
Do you have any advice?
Between the Seagate/WD/Hitachi, do you know of any RAID benchmarks?

@GeorgeHR
Thanks for the advice!
I heard it's not recommended to build an array with HDDs that all come from the same batch, because if one fails, there's an increased chance that more fail as well before the RAID is rebuilt.
So if I buy all the HDDs at once from the same vendor, they will quite probably all come from the same production batch.
That was my concern.

Two arrays are different from two single HDDs.
The noise, heat and power consumption levels become completely different with that many drives packed together, and the performance is obviously very different from single drives.
It's obvious that 5400RPM drives will be quieter, cooler, less power hungry, and slower; the question was how much more/less in the case of a large array of them.

As for the use of 74 or 81TB: I already have a lot of single HDDs filled with stuff I can't access easily, because that means knowing exactly what's on which drive and connecting it, disconnecting another HDD because I don't have enough SATA ports.
Also, high-definition video and RAW photo files take a lot of space, and I would like to add more safety.
 
Sure you might reduce vibrations by going with something rubber mounted like the lian li cube with their cages, but that is a bad idea. For a 24 drive setup you want hot swap caddies with backplanes and front access.

For a home server those WD Reds theoretically look pretty nice right now, at least with the prices I see over here. My main concern is that it's too early to really tell what their reliability is going to be like. I bought 16 so far, none DOA and all running fine, but YMMV.

If this is a home media server you don't really care about any speed difference between 5400 and 7200RPM drives. I've got a 10GbE network at home and I'm still happy using 5400RPM drives in my servers; actually I prefer them because of the reduced power and heat. The non-enterprise Hitachis are also nice; I have had about 100 of them here that have for the most part worked well in a wide variety of systems, with a pretty low number of RMAs.

For an enterprise server get enterprise drives. Well, if you can easily afford it buy enterprise drives for your home server too, just be aware of the price difference.

Edit: of course you can't get 4 TB wd red yet...
 
What is the primary use of the array(s) going to be? A Home server with large sequentially accessed files or a business database server with small, random accesses? Something completely different? That will give us more info to make a suggestion.
 
The primary use is going to be data storage and a home server, so no real need for astounding performance, but I still want some.

My concerns about using 5400RPM are:
-slower access times (probably not too much of a problem, but I don't want to have regrets after spending that much money because of some performance bottleneck).
-non-enterprise HDDs: less vibration tolerance as said (for a rackmount chassis), no RAID TLER or optimized firmware, non-certified HDDs...

I can afford enterprise drives if there's any benefit, but if there's no difference I'd prefer cheaper drives of course.

The noise/heat/power consumption of 5400RPM drives is also appealing, as I don't want a (too) noisy server.

On the contrary, if 7200RPM drives give a lot more performance for only slightly more noise and watts, I'd prefer 7200RPM.

Since it appears there are no 5400RPM 4TB enterprise drives available, whether enterprise drives are really needed would be a decisive factor.
 
Sounds like the 4TB Hitachi Deskstar drives, at either speed, would work fine for what you want to do.
 
But what about the enterprise-specific features like vibration tolerance, error rate, TLER, etc.?

And what about the other brands like the Seagate ES3 & WD RE?
Are there any benchmark comparisons in a RAID environment?
 
I usually tout RE4s...usually...

Go with the ultrastars

In my recent post on SATA port multipliers, I used 2x 500GB HGST Ultrastars and 2x 400GB WD RE4s. There were two reasons why I put up the Hitachis.
1. I collected separate drive data.
2. The RE4s were lacking... I'll post up a picture. Please note that this is a specialized case using RAID1 on a CBS-based SATA port multiplier, and there may have been a cache difference, but still, the difference can't be ignored, as it was a similar environment. To do a benchmark in Ubuntu the drive can't even have a file system, so it wasn't a dirty volume. :) Also, one of my 400s failed after not being touched for 9 months (I had 3).

[benchmark screenshot: 400GB WD RE4]

[benchmark screenshot: 500GB HGST Ultrastar]


This is just one man's benchmark. Once again, I used to run pure REs over 5 drives, so I was not too happy.

Look at Supermicro for their drive array chassis. I love their stuff.

As for getting nearly 100TB: what RAID are you planning on building? You might want to do a RAID6 of 4 drives at a time off your controller, then make them all look like a single volume (LVM). There is a substantial write penalty the more drives there are in the RAID. I'm not sure how a failure will affect the LVM while you rebuild, so hopefully no more than 1 drive fails in an array at a time.

Don't buy 24, buy 27-28 so that you have backups in case of a failure, as you never know when a revision change will be made and it's tough finding your same exact type after the fact. A hot spare per array is definitely recommended if you want the least amount of downtime, but if you'll be monitoring the situation, especially with RAID6, it won't be that big of a deal.

Finally, if you're doing videography, you SHOULD only be using this array as a storage device. If you will be editing, you MAY want an SSD transaction array in RAID0 (it's just for caching and random access, like when you're retrieving or editing multiple clips across many drives). This will allow you to use 5400RPM drives without a substantial performance loss when working on larger projects that require you to read once. You basically copy the entire project to the SSDs and use them as a large cache. You probably want a good indexer of your stuff so it knows which projects are across which volumes. The SSDs can handle copying from multiple volumes at once while feeding your workstation raw data. It also lowers the amount of time your drives remain active (lower power requirements).
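To see what splitting into small RAID6 groups costs in capacity, here's a quick sketch (RAID6 always gives up two drives' worth of space to parity, so small groups pay proportionally more):

```python
def raid6_usable_tb(n_drives, drive_tb=4):
    # RAID6 stores two parity blocks per stripe, so it loses
    # two drives' worth of capacity regardless of array width.
    assert n_drives >= 4
    return (n_drives - 2) * drive_tb

print(raid6_usable_tb(24))     # one 24-drive RAID6: 88 TB usable
print(6 * raid6_usable_tb(4))  # six 4-drive RAID6s under LVM: 48 TB usable
```

A 4-drive RAID6 has the same 50% overhead as a mirror, so the extra safety of small groups comes at nearly half the capacity.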

Just remember, your warranty is what you're really buying. Any reputable reseller is where you go. There are reasons why most of us like Newegg and some of us like Tigerdirect.

I have to agree with George- If you REALLY don't know what you're doing, it's great that you asked, it's wonderful that you're willing to learn, but be sure to have a professional on call because stuff goes wrong ALL THE TIME in the WEIRDEST WAYS.
 
haha i've been updating the post, take a read. Take that bench with a grain of salt. Newer stuff has come out since i purchased these.
 
About the chassis, I was thinking of the (much cheaper) RM424pro from X-Case, which has thermoregulated 120mm fans instead of the 80mm fans of the Supermicro & Chenbro cases.

About the RAID setup, I was thinking of building one RAID6 array of 24 drives, or two RAID6 arrays of 12 drives each.
Do you think putting all the drives into only one RAID6 array will significantly increase the risk of failure?
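One way to compare the two layouts is the chance of hitting an unrecoverable read error (URE) while rebuilding one failed drive, using the 1E14-bits error rate quoted for consumer drives. This is a crude Poisson sketch, not a real reliability model (real failures are correlated, and RAID6 can still correct a single URE while only one drive is down):

```python
import math

def p_ure_during_rebuild(n_drives, drive_tb=4.0, ber=1e-14):
    # Rebuilding one failed drive means reading every surviving member
    # in full; ber is the quoted nonrecoverable-read-error rate per bit.
    bits_read = (n_drives - 1) * drive_tb * 1e12 * 8
    return 1 - math.exp(-ber * bits_read)  # Poisson approximation

print(f"{p_ure_during_rebuild(24):.3f}")  # 24-drive array: ~0.999
print(f"{p_ure_during_rebuild(12):.3f}")  # 12-drive array: ~0.970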

I don't plan to edit video, i was more thinking about some home video server; for storing, sharing and playing hd content through the house network.

I don't start from scratch.
In fact i already built a similar setup in the past, with 16*400GB 7200.8 seagate in a RAID6, Areca ARC1160 PCI-X card, Tyan mobo and dual opteron 250, all installed into a LianLi V2100 chassis.

This first experiment learned me things:

-the importance of the case and cooling, because the V2100 despite his drive capacity was not providing enough cooling to the drives stored in his 12 bottom bays (the hottest drive located on the hotspot, far from the unique 120mm dedicated to them, reached 60°C before i open the side panel and install a desktop fan in front of them, this drive is also the only one that dropped out of the array).

-the importance of the PSU, because alot of drives powered on at the same time need a lot of current on the 12V rail, current my PCP850W couln't provide because only one of it's four 12V rails was powering the molex, so i installed a second PSU for the drives.

Desptite these conception errors the 5TB array worked fine without any data loss, and still store some of my data today.

Now i need more space, and want to build another rig, this time avoiding any mistakes.
 
Is the Ultrastar 5K4000 scheduled ?
http://www.storagereview.com/hitachi_deskstar_5k4000_review
It's not just consumers who are going to benefit either; even the enterprise may let the low cost 5K4000 sneak into a datacenter where big cheap storage is more important than anything else, especially as they wait for the enterprise-grade Ultrastar 5K4000.
Power consumption seems quite lower.
hitachi_deskstar_7k4000_4tb_power_values.png

http://www.storagereview.com/hitachi_deskstar_7k4000_review

Do you think the difference will be significant in a 24*drive on a rackmount case configuration?
 
Biggest power differential is 5.16W at start up. Lets use that value for worst case.
5.16W * 24 drives = 123.84W - Not a small amount, maybe could go with a smaller PSU

How about running cost. I pay ~$0.13 to ~$0.45 US per KWH, so worst case again, $0.45. Also, do running 24x7 with a 94% efficient PSU.
123.84W / .94 (PSU efficiency) * 24 hours * 365.25 days / 1000 Watt/KWatt * $0.45/KWh = $488.51 per year - So that is a good amount, to me, for per year cost difference.

A more realistic usage is 8 hours of Read/Write/RandRead and the rest of the day idle. For simplicity, I will only go off 12V. Use the average of the Read/Write/RandRead difference:
2.85W Read 2.66W Write 3.73W RandRead
Avg: (2.85W + 2.66W + 3.73W) / 3 = 3.08W Avg for equal Read/Write/RandRead

((8 hours * 3.08W) + 16 hours * 2.84W) / .94 * 24 hours * 365.25 days / 1000 W/KW * $0.45 = $294.09 - Still a good amount for just power.

If you pay less for power, the differential cost is going to be less.

As for is the drive will be released, my guess is no after WD's acquisition HGST.
 
WD Reds are a fraction of the price of "enterprise" drives and are designed for 24/7 RAID/NAS storage. They also have true TLDR built into their firmware unlike normal consumer drives and vibration reduction spindle mechanisms. And as is very useful if you are paying for your own power usage, they spin slower than 7200RPM like the Hitachi Deskstar mentioned above (which also adds to cool temperatures in a NAS box). 3 year as opposed to 5 year warranty, though. (No, I don't work for or sell WD ;)).

As to the need for super expensive enterprise drives if you are not buying for and/or have the money of an enterprise, I don't believe in them.
 
Reds also are only 1E14 error rate, equal to consumer Toshiba or Seagate drives, but worse than any Ent. drive.

For OP, if this is a home server, not going to have data you will lose money when you cant access, and you don't have a client/boss to answer to when the array is down, consumer drives are a better choice in my mind. Also, idle wattage between Seagate 7200.14, Toshiba DT01ACA, and WD Reds is <1.3W from spec sheets
 
Last edited:
Thanks!

Anyway i didn't found any 4TB Red, only RE.
Also does SAS drives bring interesting benefits over SATA?
SAS seems to have better error recovery and don't cost much more than SATA.
 
Reds also are only 1E14 error rate, worse than consumer Toshiba or Seagate drives...

Wrong.

Consumer Seagate drives (Barracuda) are also 1E14 "non-recoverable read errors per bits read". Check here. Toshiba's hard drives are mostly of the enterprise type, but their single high capacity "consumer" line (above 1TB) here is also 1E14. Same as virtually all other consumer drives, including WD Reds (which are also "consumer" drives).

Yes, enterprise drives (by all vendors) generally have a better rate of "non-recoverable read errors per bits read" due to tighter tolerances, more complex electronics and so on. Usually 1E15 to 16. Which you pay for.

Thanks!

Anyway i didn't found any 4TB Red, only RE.
Also does SAS drives bring interesting benefits over SATA?
SAS seems to have better error recovery and don't cost much more than SATA.

Yah, Reds are up to 3TB max currently. Blacks from them were recently released at 4TB, though (with less RAID/NAS-specific features than Reds like most consumer drives).

SAS in home use compared with SATA? Not really any major (or minor) advantages. As per Wikipedia: "SAS error-recovery and error-reporting use SCSI commands which have more functionality than the ATA SMART commands used by SATA drives." Not something to be concerned about at home. Unless running a data center with 10000 drives where running SMART checkers would be inconvenient ;).

Unless integrated on your motherboard, will have to spend extra for an SAS card. Which introduces an additional point of failure. As SAS is really designed for enterprises (i.e. 10000 drive data centers), reliable cards with reliable drivers can get expensive, despite the small price differential you've seen on SATA vs SAS hard drives. So even with dealing with large amounts of home data, I wouldn't buy SAS.

Good luck!
 
Last edited:
About SAS, they say here that the error rate is 100 times lower, don't know what it's worth.

I already planned to buy a ARC-1882ix SAS controller, and a RM424pro which has SAS backplanes, so not much added costs if i choose SAS drives.
 
True, transcription error on my part. Was working from a spread sheet I had made a couple weeks ago.

Heck, np np. A reminder of spreadsheets here where me manually copying from them normally results in an initial error rate similar to that of monkey with typewriters :D.

About SAS, they say here that the error rate is 100 times lower, don't know what it's worth.
Not much ;). That article from 6 years ago is quoting SAS/SCSI/"enterprise" hard drives with that error rate, not the connection. You won't find much/any consumer drives currently with SAS connections; certainly none were available in 2006.

Which ties in with what I mentioned above about enterprise drives having a quoted (as opposed to operationally tested) error rate of 1E15 to 1E16 vs. 1E14 for most consumer drives.

Its a concern you can get carried away with (heck, computing reliability is a primary concern for me too), but its just not something I would waste time on. If consumer hard drives were good enough for Google in 2007 (check section 2.2 Deployment Details in the downloadable PDF for info), they are good enough for most/all home usage today.

Since you are intending on RAID 6, the parity calculations inherent with that RAID type should alleviate any data reliability concerns about a single drive. And make sure to back up :). If you are dead set on 4TB drives as opposed to 3TB (the largest the Reds come in), Deskstars would work, as would WD Blacks. But you won't get TLDR with either of those and being 7200rpm, they will eat more power and be hotter in temperature than slower rotating ones.
 
Last edited:
I read some threads about ZFS and wondering what were the benefits of hardware raid?
Does it still make sens?

If i'm correct ZFS would allow me to use cheaper 5K4000 without any need for TLER.
Safety seems also better with the integrated checksums and auto-healing capabilities.
Does hardware cards has become useless or there is still some kind of advantage to use them?
 
Back
Top