Seagate 4TB Vs WD 3TB Red (can't decide)

It's going into a 4-drive RAID 5 with 1 hot spare, so I should be safe even if one suffers infant mortality. If not, everyone here will know!

Test your drives before you put valuable data on them.
 
I do a 4-pass badblocks test on every single drive I get, whether new or back from an RMA. Before, during, and after the test I check the SMART raw data to make sure the drive is not reallocating sectors. For 2TB drives this takes 30 or so hours; for 4TB drives it is taking double that. For Windows users who are not comfortable with Linux utilities, a few full formats (or an HD Tune surface test) while watching CrystalDiskInfo will have a similar effect.
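For anyone who wants the actual commands, here's a rough sketch of that workflow. /dev/sdX is a placeholder for your drive (double-check it with lsblk), and badblocks -w is destructive, so only run it on a drive with nothing on it:

Code:
# SMART raw values before the test: attribute 5 (Reallocated_Sector_Ct)
# and 197 (Current_Pending_Sector) should both read 0
smartctl -A /dev/sdX

# Destructive write test; -w makes four passes by default, writing and
# verifying the patterns 0xaa, 0x55, 0xff, 0x00. -b 4096 speeds things
# up on large drives, -s shows progress, -v is verbose.
badblocks -wsv -b 4096 /dev/sdX

# Check SMART again (and periodically during the run); any growth in
# those raw values means the drive is quietly remapping sectors.
smartctl -A /dev/sdX

The four write/verify patterns are where the "4 pass" comes from.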
 
By ".11 saga" I'm going to guess he's referring to the firmware issues with the 7200.11 that Seagate had. There was a good number of drive issues that were caused by it and it's one of the more prominent issues that Seagate has had in recent history.

Interestingly enough, while there is a review on Tom's Hardware that lists the power-on hours as 8760, and the WD spec sheet blurb on the first page states it's designed for 24/7 use, the actual spec table within the PDF itself does not list a power-on rating:
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771442.pdf

IIRC, the power-on rating is the expected usage for which the stated specifications apply; if you exceed it, the given specifications are no longer guaranteed. That said, I have not heard of anyone getting an RMA refused over an excessive power-on stat.

I have to say I found the statement "Same with the Deathstars from days gone by. Hitachi seemed to of cleaned up their act" a bit funny. It's 12 years after the fact without any serious issues and all Hitachi gets is "well, it seems like they cleaned up their act".

I had two 7200.11 drives; they are still running fine today.

I also had a 40GB Deathstar back in the day that did over 8 years of service before it finally karked it.

Just goes to show.
 
Just bought 20 of the new 4TB Seagate NAS drives. I did a four-pass badblocks run on them before installing (60 hours) and did not find a single bad block, and SMART shows 0 reallocated sectors.
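If anyone is wondering how to burn in a batch that size, the runs can go in parallel since each drive is independent. A minimal sketch, assuming hypothetical device names sdb through sde (verify with lsblk first; -w wipes the drives):

Code:
for dev in sdb sdc sdd sde; do
    # -o logs any bad blocks found; each drive tests in the background
    badblocks -wsv -o /tmp/badblocks-$dev.log /dev/$dev &
done
wait  # block until every drive finishes

# Every log should be empty and no drive should have grown its
# reallocated or pending sector counts.
for dev in sdb sdc sdd sde; do
    echo "== /dev/$dev =="
    wc -l < /tmp/badblocks-$dev.log
    smartctl -A /dev/$dev | grep -E 'Reallocated_Sector|Current_Pending'
done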

No brand is perfect, I had a few of the infamous bad firmware Seagates, and had no problems. Moved on to Samsung, then HGST, now I seem to be back to Seagate.

I have found that how the drives are packaged during shipping determines their reliability more than anything else. I vowed a few years ago to no longer buy drives from Newegg due to their terrible packaging and thus high failure rates. I bought all 20 4TB drives from Newegg and have to say their packaging has improved tremendously.
 
I concur on the Newegg shipments. I received 6 of the Seagate NAS drives and they were packaged in a foam container that segregated each drive in its own slot (the best analogy I can make is an egg carton for hard drives), wrapped in bubble wrap, inside a box (with other parts) filled with peanuts.

Plenty of reviews out there on these new Seagate NAS drives and the WD offering. I chose the Seagates because of what I've seen in the reviews and because I have had nothing but bad luck with WD (and there aren't many options anymore).

I know everyone has their own stories, but I honestly have not had a single good WD drive. The best WD drive I've owned was a 1TB Black that makes excessive clicking sounds and had a SMART value right at the edge of being bad. I won't put anything of importance on it, but at least it still functions. I've had the Greens, which also went bad. Personally, I've heard more bad about the Reds than about any drive to date (not just online, but from acquaintances who work in B&M shops like TigerDirect and Micro Center). I don't know what it is, but I just don't have a single good WD drive around.

On the other hand, I have stacks of old Seagate PATA drives from the '90s that still work. I have some Samsung SpinPoints still chugging along as well, which I miss.
 
Your grammar is horrible, so you must be some sort of idiot. I don't even know why I'm asking, but I have to wonder: how big are your sample sizes?
 
Using a brand you trust can often be irrational but is understandable. It really doesn't make anyone an idiot.

I think it's just a matter of trying to beat the odds: using raidz2, getting drives that "support" NAS usage, etc. But what is really crucial is the ability to detect and act on failures.

The last two drives that failed on me (in the past year) failed slowly: lots of "hidden" hardware errors (SMART) and slow access to shares, but no sign of fault from ZFS (no bad checksums). That should have made me nervous about the rest of my servers (26 such drives in use), but I'm not; it's inevitable that a drive will fail. What really matters is the ability to detect the fault and identify the drive responsible.
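Something along these lines, run from cron, would catch that kind of slow failure early. A minimal sketch, assuming a pool named "tank" (placeholder) and SATA drives at /dev/sd?:

Code:
#!/bin/sh
# With -x, zpool only prints detail when the pool is degraded or
# faulted; a healthy pool just reports "pool 'tank' is healthy".
zpool status -x tank

for dev in /dev/sd?; do
    echo "== $dev =="
    # Raw reallocated/pending sector counts: nonzero or growing values
    # are the early warning that the checksums never gave here.
    smartctl -A $dev | grep -E 'Reallocated_Sector|Current_Pending'
done

Running smartd from smartmontools with mail alerts gets you the same thing without hand-rolling a script.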
 