Hitachi Deskstar 3TB 7K3000 Compatibility Thread

alamone

Gawd
Joined
Sep 30, 2008
Messages
589
I'm surprised I haven't seen a thread about this already on [H], but since I had a coupon I decided to grab one of the 3TB 7K3000 (7200RPM) drives for compatibility testing. I would assume all of this data is equally applicable to the 3TB 5K3000 (5400RPM) drive.

According to my Areca card, the vital stats of the drive:
Model Name Hitachi HDS723030ALA640
Firmware Rev. MKAOA3B0
Disk Capacity 3000.6GB

Test #1: Intel ICH10R on ASUS P6T Deluxe
Driver: Intel RST 9.6.0.1014 (apparently the latest version BSODs the P6T Deluxe)
Results: Semi-compatible. Detected incorrectly as 801GB in some software, but shows the full 3TB in Disk Management. The disk also appears to have problems with power management: it randomly spins down and up, causing the system to freeze and lock up temporarily.
hitachi3tb-2-intel.png

As you can see, it erroneously shows up as 801GB
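For what it's worth, that 801GB figure matches a 32-bit LBA wraparound exactly: 3000.6GB works out to 5,860,533,168 512-byte sectors, and software that stores the sector count in 32 bits truncates it modulo 2^32. A quick sanity check (shell arithmetic; assumes a bash with 64-bit integers):

```shell
# 3000.6 GB = 5,860,533,168 sectors of 512 bytes each
sectors=5860533168
# a driver that keeps the LBA count in 32 bits wraps it mod 2^32
wrapped=$(( sectors % (1 << 32) ))
echo "$(( wrapped * 512 / 1000000000 )) GB"   # prints "801 GB"
```

So the software showing 801GB is most likely reporting only the low 32 bits of the sector count, not some random value.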

Test #2: Marvell 91xx SATA 6G Controller on ASUS Combo SATA6/USB3 PCIe 4x card
Driver: Marvell 1.0.0.1045
Results: Fully compatible. However, throughput appears capped at about 107MB/s, possibly by the bridge chip on the ASUS card.
hitachi3tb-1-marvell.png


Test #3: Highpoint Rocketraid 4320 PCIe 8x raid card
Driver: Highpoint 1.2.28.28
Firmware: v1.2.26.5
Results: Incompatible. Capacity is erroneously listed as 21334.37 TB, shows up as negative gigabytes in Disk Management, and the drive is unusable.

Test #4: Areca 1231ML PCIe 8x raid card
Driver: 6.20.0.21
Firmware: V1.49 2010-12-02 (not available on website, but available by FTP)
Results: Fully compatible.
hitachi3tb-3-areca.png


As for the drive itself, I've no complaints about the speed. As for acoustics, it does make a constant, noticeable spinning noise; I suspect putting it inside a case or enclosure would help with that. The 5400RPM version is probably a lot quieter.

If anyone wants to add additional compatibility datapoints or different experiences please feel free to add to this thread. Thanks.
 
To update: I've ordered 6 more of these drives and am planning to test a 7-drive RAID5 on my Areca card once they come in. I'll provide details and benchmarks later.
 
To add another datapoint, it seems the 3ware 9750-24i4e raid card is compatible with this drive, as tested by another [H] member. See: http://hardforum.com/showthread.php?t=1570271

However, I generally advise people against 3ware since their cards are so dog slow. For example, the person in the thread was only getting around 100MB/sec on a 3 drive RAID5. That's slower than the performance of a single drive.
 
Also, toward the end of that thread (still slow, but better) I was getting 190MiB/s or so write for the 3TB x 3 RAID-5.
 
Hey, nice results with more HDDs. I suppose 3ware really brought it up a notch with their newer cards; historically they've always lagged behind the competition on performance.
 
Here are results of 7x3TB in Raid5 on Areca ARC-1880i.
1880.png


Note that this array was directly transferred from my old 1231ML, hence the array naming.
Also, since I'm using the old free version of HDTune, the array size is incorrect.
I may backup, wipe, and re-create the array to see if there are any performance benefits.
 
@alamone: you can rename raid arrays with the latest 1.49 firmware from Areca.

Also, when you bench with HDTune, make sure you use 2MB for the block size under options; the default 64k isn't really testing the usage pattern that a big array of large disks excels at (mostly sequential transfer activity: storing lots of large files, media files, etc.)
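For a second opinion on sequential throughput, dd works too. This is just a sketch: /dev/sdX and /mnt/array are placeholders for your own device and mount point, and the direct-I/O flags assume GNU dd:

```shell
# Sequential read: 8 GiB in 2 MiB blocks, bypassing the page cache
dd if=/dev/sdX of=/dev/null bs=2M count=4096 iflag=direct

# Sequential write: 8 GiB to a scratch file on the mounted array,
# flushed to disk before dd reports its rate
dd if=/dev/zero of=/mnt/array/ddtest bs=2M count=4096 oflag=direct conv=fsync
rm -f /mnt/array/ddtest
```

The 2M block size mirrors the HDTune setting above, so the numbers should be roughly comparable.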
 
I've been running 5 of these on my Areca 1280ML in RAID5 for the last couple of days. No problems so far, and I've transferred almost 9TB of data to it with no issues.

One VERY important thing I learned: turning on the write cache on the disks (disabled by default on the Areca) sped up the RAID build about 10x. Originally the progress was moving at 0.1% every 3 minutes (which would have taken over 2 DAYS!). After turning on the write cache on the disks it jumped to 1% every 3 minutes, and the build took around 6 hours (I did some reboots during it, so it had to resume a couple of times).
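Those build rates check out on the back of an envelope (awk used here just for the floating-point math):

```shell
# 0.1% per 3 min vs 1% per 3 min -> total hours to reach 100%
awk 'BEGIN {
  printf "write cache off: %.0f hours\n", 100 / (0.1 / 3) / 60
  printf "write cache on:  %.0f hours\n", 100 / (1.0 / 3) / 60
}'
# prints:
# write cache off: 50 hours
# write cache on:  5 hours
```

50 hours is indeed "over 2 days", and 5 hours plus a few resumes matches the ~6-hour build reported above.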

I will probably do a full byte compare of all the files during the weekend to my old array (still connected but with no drive letter) to see if there are any issues.
 
Hi All,

I have 6 of them on my Adaptec 52445. The only thing to do is download and apply the latest firmware and embedded OS so these big fat drives are detected as 3TB; if the upgrade is not applied, they are detected as 2TB :confused:

My ESXi setup works great on this configuration ;-). I used RAID 6, so I have to sacrifice 2 drives, but the data is safer than with a basic RAID5.
We all know that drives from the same production line tend to fail at almost the same time ;-) That gives me a few weeks to RMA the first failing one and spares me burning a candle at church every day while waiting for the new drive :)

Enjoy !
 
We all know that drives from the same production line tend to fail at almost the same time ;-)

But do we all know that? Outside of internet FUD I'm not aware of this happening. Unless a shipment of multiple drives was mishandled in transit, then at least for current modern drives I'm putting it in the myth category. Besides, "it's called backup." The idea that people would go through great trouble to source a collection of the same model drive from different retailers in the hope that they weren't part of the same batch that might be prone to failure is just silly, these days anyway.
 
Thanks for this info! I was waiting for someone to try it.
 
mmmmh ;-) You haven't tried the WD Green yet :) A friend who works in a datacenter had three of them fail in one week ;-) When you order a few drives, most of them come from the same shipping box, even if you buy from the little shop close to your house. A production line works like this: each disk coming off the line goes into a shipping package, all together. That's why you see wide recalls on some series of hardware (motherboards, processors, memory, HDDs, etc.): it takes the vendor a long time and several RMAs to understand that something went wrong on the production line. Not rare enough for me to take the risk ;-):p
 
Lucky you if you've never encountered such "serial failures":

http://www.theregister.co.uk/2010/12/10/flash_fails_more_than_hdd/ (BOTH SDD AND HDD)

http://www.theaustralian.com.au/aus...rates-up-to-13pc/story-e6frganf-1111113339646 (HDD)
http://forums.storagereview.com/index.php/topic/29329-ssd-failure-rates-compared-to-hard-drives (SSD)

BTW, ~4 and close to 10 percent, buddy ;-) If I can divide the risk by two with RAID 6, it's worth the risk and ... the price (~$180 each)
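As a rough illustration of what RAID6 buys over RAID5, here's a crude model, entirely my own assumptions: 6 drives, each independently failing with probability 5% over the exposure window, rebuild windows ignored. RAID5 loses data at 2+ failures, RAID6 at 3+:

```shell
awk 'BEGIN {
  n = 6; p = 0.05                     # assumed: 6 drives, 5% per-drive failure prob
  c = 1                               # binomial coefficient C(n,k), built iteratively
  for (k = 0; k <= n; k++) {
    prob[k] = c * p^k * (1 - p)^(n - k)
    c = c * (n - k) / (k + 1)
  }
  raid5 = 1 - prob[0] - prob[1]       # lost with >= 2 failures
  raid6 = raid5 - prob[2]             # lost with >= 3 failures
  printf "RAID5 loss: %.4f   RAID6 loss: %.4f\n", raid5, raid6
}'
```

Under these toy numbers the extra parity drive cuts the loss probability by far more than half; the real-world gap is smaller once correlated failures during rebuilds are accounted for.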


And to conclude, a little forum extract I like, as the guy has better English than mine:

While it is possible to monitor some aspects and make predictions based on statistics, there are risk factors that can lead to instant failure without any warning. As the worst case scenario can strike at any time, I'd just plan around it, and not differentiate between failure modes.

So to guard against hardware failures, set up a RAID6 and swap harddisks if the controller tells you they are no longer usable; this protects reasonably well against typical failure modes (total unannounced loss of an entire disk and individual unreadable sectors on single disks), and for everything else (lightning strike, ...) there is your off-site backup.

source: http://superuser.com/questions/243250/is-it-possible-to-estimate-the-death-time-of-a-hdd

cheers :)
 
I have six of the Hitachi 7k3000 drives attached to my Intel SRCSASPH16I card...
They only show up as 2TB each, so I guess I'm screwed until Intel comes up with some new firmware.
Until I found this thread, I thought maybe I was meant to jumper the drives or something, but now I'm guessing it's just that they are not (yet) compatible.
 
I have 12 of the 7K3000s running RAID6 on an LSI 9260-4i with an Intel RES2SV240 expander (20 ports total available). I didn't benchmark, but this thing is flying for sure. :p
 
How hot do these run? I'm considering 10 of them for my server (it already has 8 2TB drives), but I've read on Newegg that the 7200RPM version runs very hot and the 5400RPM version is noticeably slower than my current 2TB Samsung drives.
 
I'm having odd issues with 2x7K3000 3TB drives in RAID1 on ICH10R, lots of random system freezes and some full lockups. Upon doing a hard reboot I often get "SMART command failed" from the Intel AHCI BIOS screens. I had to use a modified BIOS for my board (Gigabyte X58A-UD9) to even be able to use the drives in RAID as it needed the latest Intel Option ROM which Gigabyte has yet to integrate into an official BIOS, not sure if that is the issue or not. The strangest thing is the array is my secondary Data drive, not the OS/boot array.
 
I have a single 3TB 7k3000 that has problems with everything I've attached it to: ICH10, sil3114, sil3132, and H61. It works fine for a while, but eventually it runs into major SATA link errors. The link goes up and down... errors out... is renegotiated to a slower speed... errors out some more... Sometimes the controller on the disk goes completely dead and the SATA link can't be brought back up. When that happens the machine won't even POST after the power is turned off. I have to detach the Hitachi to boot the machine.

All the Silicon Image firmwares and motherboard BIOSes are up to date. The motherboard with the ICH10 is even on Hitachi's supported hardware list for the drive. Yeah, I tried a bunch of different SATA cables too.

I'm RMA'ing the thing but I'm nervous about buying another. Do I have a dud on my hands or are these disks compatibility nightmares? What about the 5k3000? Is it any better?
 
What does CrystalDiskInfo say about the drive? Have you tried a different SATA cable?

Also the Silicon Image Card could likely be a problem as well as causing a performance dip. If you install Intel RST 10.6 you can run the drive right off the intel SATA port.
 
Are you replying to me or Shiranui Gen-An?
 
Unless Shiranui has a Silicon Image card, I was talking to you. :cool: The note could apply to anyone that relates to it though.

I wasn't sure because you said to try it on motherboard ICH10 and with different cables, and I had already said that I did both of those things.

My disk is already packaged up for RMA but in the interest of helping others who may have issues...

All SMART attributes are good and have checked "good" throughout my testing.

3k7000.png


Testing consisted of 4 to 6 instances of this running at once:

Code:
find . -type f -exec md5sum {} \;

And 1-2 of this:
Code:
cat /dev/zero > fooX

For the non-Linux people, that's 4-6 processes hashing all files on the disk as well as 1-2 processes writing large contiguous files.

I let each test run go for over 4 hours, except those where the disk went completely dead and the OS was not able to bring the SATA link back up.

Testing was done attached to motherboard SATA connectors on ICH10 and H61, as well as sil3114 and sil3132 add-in cards. I updated the Silicon Image cards' firmware to the latest. I tried many different brands, lengths, and styles of SATA cable on each one. I also attached the disk to a motherboard SATA port on an old ICH7 just for kicks.

In all, the disk was run in three different machines, so the PSU should not be an issue. Also, the machines have lots of other disks that work just fine.

Everything except the H61 is listed on Hitachi's hardware compatibility document:

http://www.hitachigst.com/tech/techlib.nsf/techdocs/EA3C2532A751C279882577DF0059E290/$file/Deskstar_7K3000_CompatGuide_final.pdf

When the SATA link was not erroring out, read and write speeds were where they should be. It doesn't look like the disk was having any platter or mechanical issues; the problem seems to be in the disk's onboard controller. It can't keep a SATA link up to save its life.

I'm hoping this was just a bum disk. I really would like to buy another 3TB Hitachi.
 