How are today's 2 TB drives?

Unabomber

I've been using my 1 TB Hitachi 7200 RPM drive for a couple years now, and am about to build me a new system.

My question is this:

How are today's 2 TB drives holding up? Are they as plagued by errors, firmware mishaps, and so on as the earlier-generation drives were?

I'm looking at this 2 TB, 7200 RPM, 64 MB cache Hitachi drive.
 
Even though I have 30+ 2 TB drives, I can't really tell whether they fall outside the 1% to 7% annual failure rate that desktop SATA drives in general have.
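Rough back-of-the-envelope math: at a 1% to 7% annual failure rate, 30 drives would be expected to see somewhere between about 0.3 and 2 failures per year, so a single good or bad year doesn't say much either way.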

With that said, I have had one DOA in the last year and no other failures, which is better than my luck with the 1 TB, 750 GB, and 500 GB drives I bought in about the same quantity back when those were the drives to buy.
 
I've been running 8x2TB Hitachi 7k2000s in Raid6 on an Areca controller for over a year. No failures, no problems. I've recently started running 10x2TB Hitachi 5k3000s in a ZFS RaidZ2 array. Also no problems, though only a month or so of experience. I'm planning on adding 10 more 5k3000s soon. Many others here and on other forums have reported excellent performance and reliability. This drive is one of the very few consumer class drives that handles the rigors of hardware raid without complications.

Before you buy, you should at least take a look at the 5k3000s. While 'coolspin' drives are generally slower, in this case the 5900 RPM version benches as fast as - and often faster than - its 7200 RPM brother. Seek time is obviously slower, but if your application does not require high IOPS it should not make much difference. The 5k3000 runs cooler, is quieter, should last longer (speculative), and best of all is about 30-40% less expensive per drive.
 
Is ZFS better than RAID5/6?

That is not a question that has a black & white answer. It depends on your application, your OS constraints, your comfort with Linux/Solaris vs Windows and dozens of other factors.

Here are the easy answers:
- If you are running Windows and need direct attached storage (a local array), then hardware RAID is normally your best answer. ZFS is not an option because there is no ZFS for Windows.
- If you are running Solaris, ZFS is always your best answer.
- If you are building a NAS or SAN to support other machines, running ZFS on Solaris or an open-source Solaris derivative (OpenIndiana) is often a good solution.

You could go on and on...it gets religious at times (ok, I admit it, it starts out religious and you have to dig to get facts...)
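
If you do go the ZFS route, pool creation is refreshingly simple. A minimal sketch, assuming Solaris/OpenIndiana device names like c0t0d0 through c0t5d0 and a pool name of 'tank' (both are just placeholders for your own disks and naming):

zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool status tank

That gives you a six-disk RAID-Z2 pool (two-disk redundancy, roughly comparable to RAID 6) mounted at /tank, and 'zpool status' shows the health of every member disk.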
 
Related question: what is a good test to run on a new HDD before putting data on it? (I just got a new Hitachi 5K3000.) Is there something that will work on a drive attached via USB or eSATA?
 
badblocks (available on any Linux system or live CD/DVD/USB stick). Depending on what I am going to put on the drive, I will perform up to a 4-pass badblocks read/write test. Note that for a 3 TB drive this will take around 30 hours to complete all 4 passes.
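
For what it's worth, the full destructive write test looks something like this (a sketch - /dev/sdX is a placeholder for your drive, and -w erases everything on it, so only run this on an empty disk):

badblocks -b 4096 -wsv /dev/sdX

-w writes and then verifies four patterns (0xaa, 0x55, 0xff, 0x00), which is where the 4 passes come from; -s shows progress and -v reports each bad block it finds.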
 
I do 'badblocks /dev/sdX' ... also look at SMART with 'smartctl -a /dev/sdX' ..
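If you want the drive to check itself as well, smartctl can kick off the built-in self-tests (again a sketch with /dev/sdX as a placeholder; the long test can take several hours on a big drive):

smartctl -t short /dev/sdX
smartctl -t long /dev/sdX
smartctl -a /dev/sdX

-t starts the test in the background; run 'smartctl -a' afterwards to see the result along with the rest of the SMART attributes. SMART over a USB bridge is hit or miss - smartctl may need a '-d' option for the bridge chipset, or it may not work at all - while eSATA generally behaves like an internal drive.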
Does 'badblocks' do read/write by default? Is there something more rigorous?
 