WD Red drives?

Rather than rehash the early-death discussion, I'll just quote odditory, who summarized everything fairly well:

I'm saying I'm giving WD the benefit of the doubt that it's not manufacturing errors (remember, every drive is fully tested when it's born) -- it's statistically unlikely based on the percentage of drives these guys received that were bad. Newegg is the common denominator here, and they have a longtime tradition of not being able to break up the 20-packs from the factory, repackage them into individual shipments, and actually get them into customers' hands trouble-free with any consistency. More perplexing is that enough people receive bare drives that *do* look adequately packed, yet are knocking/clicking or grinding with crashed heads as soon as power is applied, to make you really wonder: WTF? Broom hockey in the warehouse? Robotic pickers not calibrated properly for delicate items? FWIW, this seems to happen much, much less with the retail boxed drives suspended by clamshell brackets.

And you're absolutely right that drives dying a delayed death is often rooted in earlier stresses. That's why you do several weeks of burn-in and disk thrashing within the 30-day return window: it tends to reveal the weaker drives prone to early failure. It's been extremely rare for me to have a drive survive that burn-in and then fail, say, 6 months down the line.

The first thing that comes to mind when I see a DOA report is to ask how the drive was packaged, whether there was any evidence that UPS dropkicked the package, etc. Even if there's no visible packaging damage, if the drive isn't packaged in retail form (or as recommended by the manufacturers themselves), then that's where I assume the issue originated.
 
I will wait a bit before even thinking about going to Reds.

They're new, I don't know if all the bugs are worked out, and I'd rather wait for other manufacturers to release their NAS drives to drive down the price.
 
I will wait a bit before even thinking about going to Reds.

They're new, I don't know if all the bugs are worked out, and I'd rather wait for other manufacturers to release their NAS drives to drive down the price.

Heh -- the "drive down price" thing used to work when there were 4 major manufacturers competing; now it's just WD and Seagate, and they can collude to some extent to keep prices higher. If Toshiba makes good on their commitment to pump out desktop drives by the end of the year, we'll have some more movement in the marketplace, but OTOH I keep reading report after report from analysts expecting prices to stay high for a long time. Who knows if they really know anything, and there's always the X factor - the wildcard - and maybe that'll be Toshiba.
 
I lowered my "drive down price" expectations the second I read the news about Hitachi. Hopefully that wildcard of yours will come true, odditory :)
 
Well, not to derail this thread further, but Toshiba competing with WD & Seagate will theoretically affect WD Red prices (in a good way), so this is worth a read: an interview with Joel Hagberg, Toshiba VP of Product Marketing, about this very issue.

http://www.xbitlabs.com/articles/storage/display/toshiba-storage-interview-2012_6.html

The takeaway is: "We expect to begin production in our facility in 3CQ12 and ramp to full capacity of the existing lines by the end of 2012."
 
FWIW, I just received two 2TB Reds from Newegg. Both were in ESD bags, wrapped in bubble wrap, and in separate individual cardboard boxes inside a larger Newegg box. Both drives passed WD extended diagnostics with zero errors and are running in my server in RAID 1 right now.
 
I don't care if they brand them "Self Destructs In 60 Seconds" as long as the drive is decent.
 
I've always been a WD man, but it was just confusing trying to figure out which drive to use with my Synology box, and there seemed to be a bunch of catches, so that's why I ended up going with some Samsungs initially. Well, I still had to make sure they had the right firmware update on them, so there was still some work, but I've never been a fan of Seagate, so I tried the Sammys.

I hope these Reds turn out to be as good as they seem on paper.
 
FWIW, I just received two 2TB Reds from Newegg. Both were in ESD bags, wrapped in bubble wrap, and in separate individual cardboard boxes inside a larger Newegg box. Both drives passed WD extended diagnostics with zero errors and are running in my server in RAID 1 right now.

Slightly off-topic, but with WD Lifeguard SMART extended diagnostics, are you guys getting a detailed report of any errors occurring? When I run it on my drives, I just get a generic "drive passed" type message and a green check mark. I'm assuming this means no errors were detected, but I just want to make sure I'm not missing a detailed report somewhere.

Thanks
 
I am pretty sure it will tell you if it found something during the test; however, you can just look at the SMART raw data afterwards (with a program like CrystalDiskInfo) to be sure.
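If the machine is headless, the same check can be scripted. A minimal sketch, assuming smartmontools (smartctl) is installed and you substitute your own device path:

[code]
import subprocess
import sys

DEVICE = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"  # example path, pass yours as an argument
WATCH = {
    "5":   "Reallocated_Sector_Ct",
    "197": "Current_Pending_Sector",
    "198": "Offline_Uncorrectable",
}

# `smartctl -A` prints the attribute table; ID# is the first column, RAW_VALUE the last
out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True, check=False).stdout

for line in out.splitlines():
    fields = line.split()
    if fields and fields[0] in WATCH:
        print(f"{WATCH[fields[0]]:<24} raw = {fields[-1]}")
[/code]

Non-zero raw values on those three attributes after a long surface test are the thing to worry about.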
 
Grambo, I'll let you know in 3.5 hours. :)

Got my 3 drives today, each in the individual HDD boxes I usually get, and then packed in a bigger box with a bunch of paper. Running the extended test on the first one now. I don't want to muck with things until it's done, but I'll see if I can run more than one instance of Lifeguard and check the other 2 disks at the same time; otherwise this is going to take all weekend. It's taking about 7 hours for a 3TB drive.
 
Nope, doesn't give you any details.
[screenshot: WD Lifeguard extended test result -- just a green check mark and a "passed" message, no detailed report]


Looks like Lifeguard will do multiple instances, so I'll try doing the other 2 drives at the same time tomorrow.
 
I did a long initialize in the new Synology; no issues on the two 2TB drives I got. They were typical Newegg packing: inside some sort of white styrofoam frame that covered about 3/4 of the drive, then wrapped & taped with thick bubble wrap, then in a box with packing peanuts. Not the worst I've seen from them, but certainly not the best.
 
How do these compare to the Western Digital RE4 drives?
Have a look:

So, from the results of Storage Review's tests,
=> http://www.storagereview.com/western_digital_caviar_black_review_2tb
=> http://www.storagereview.com/western_digital_red_nas_hard_drive_review_wd30efrx

... I made these charts:

[charts comparing RED and RE4 access latency, sequential R/W, and random R/W, built from the Storage Review results above]

.. RED has higher access latency than RE4
... RED is quite similar to RE4 on sequential R/W
.... RE4 is better than RED on random R/W
..... RED has a 3-year warranty; RE4 has a 5-year warranty.

+
This is a BIG HUGE review of the WD RED!! ♥♥♥

Western Digital Red 3TB SATA SOHO NAS Drive - Full Review
=> http://www.pcper.com/reviews/Storage/Western-Digital-Red-3TB-SATA-SOHO-NAS-Drive-Full-Review


Cheers.

St3F
 
^ The RE4s are also nearline-class, whereas the Red drives are desktop-class.
 
One thing bugs me about the stats on these drives. If these have an increased MTBF and are designed for 24x7 operation, then what method was used to calculate the MTBF for the desktop drives? So:

1. Did their testing for "desktop" drives replicate being active for only 9 hours per day (or whatever the calculated average is), or did they keep the drives on 24x7?

2. Is the MTBF total elapsed (end-to-end) time, or only the time that the drive was active?

I'll look into it myself in a while, but if anyone knows, please share!
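For reference, the usual back-of-the-envelope relationship (just the standard constant-failure-rate assumption, not anything WD publishes) is that the annualized failure rate is roughly powered-on hours per year divided by MTBF, which is why the duty-cycle question matters:

[code]
# Back-of-the-envelope only: assumes the usual constant-failure-rate model,
# AFR ~= powered-on hours per year / MTBF. The MTBF figure is an example.
MTBF_HOURS = 1_000_000

for label, hours_per_day in [("24x7 (NAS duty cycle)", 24),
                             ("9 h/day (desktop duty cycle)", 9)]:
    powered_hours = hours_per_day * 365
    afr = powered_hours / MTBF_HOURS
    print(f"{label:<30} AFR ~= {afr:.2%}")

# -> roughly 0.9%/year at 24x7 vs ~0.3%/year at 9 h/day for the same MTBF,
#    which is why "active hours or elapsed hours?" changes the meaning a lot
[/code]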
 
I do not put any faith in MTBF. Why, you ask? Deskstars (and other drives with serious problems) had the same MTBF as every other desktop drive at the time of release.
 
2TB Reds are sold out at the Egg. Customer reviews are coming in slowly, and the results are almost as mixed as the Greens'. Really though, I'm debating grabbing Reds over Greens for the additional 1-year warranty, since the price is about $10 more right now.
 
To be fair, I don't either (http://storagemojo.com/2007/02/19/googles-disk-failure-experience/ etc.), but I just wondered what the testing methodology was. Most of my storage is off during the day, so I wondered if the testing of desktop drives "emulated" that.

From Wikipedia (http://en.wikipedia.org/wiki/Hard-disk_failure):

Since hard drives are mechanical devices, they will all eventually fail. While some may not fail prematurely, many hard drives simply fail because of worn out parts. Many hard-drive manufacturers include a Mean Time Between Failures figure on product packaging or in promotional literature. These are calculated by constantly running samples of the drive for a short amount of time, analyzing the resultant wear and tear upon the physical components of the drive, and extrapolating to provide a reasonable estimate of its lifespan. Since this fails to account for phenomena such as the aforementioned head crash, external trauma (dropping or collision), power surges, and so forth, the Mean Time Between Failures number is not generally regarded as an accurate estimate of a drive's lifespan. Hard-drive failures tend to follow the concept of the bathtub curve. Hard drives typically fail within a short time if there is a defect present from manufacturing. If a hard drive proves reliable for a period of a few months after installation, the hard drive has a significantly greater chance of remaining reliable. Therefore, even if a hard drive is subjected to several years of heavy daily use, it may not show any notable signs of wear unless closely inspected. On the other hand, a hard drive can fail at any time in many different situations.

I've never put much faith in those figures, because until the drives have actually been tested and used in the real world for X years, it's impossible to know for sure (and by then, we've moved on to much higher density, faster drives).
 
Yeah, MTBF just means they estimated what the reliability will be with some tests, known experience, simulations...
 
Yeah, MTBF just means they estimated what the reliability will be with some tests, known experience, simulations...

And marketing agenda. There is no standardized method of calculating MTBF, so no two vendors do it the same way; some factor a percentage of returns into it, and no two vendors even validate an RMA as faulty (or not) the same way. MTBF might as well be last night's lottery numbers for how abstract it is, because there's no way to prove it or hold a company to it. I also think there's a particular company that understates the figure to push the kind of people who bother to look at it toward enterprise drives. Most consumers don't know what MTBF is, don't care, and an understated figure doesn't matter to them.

In the end it comes down to (Price / GB) × Warranty. The exception is that if you're buying a large enough quantity, a drive with a shorter warranty is significantly cheaper, and all other things are equal, then there's an argument for buying the cheaper drives, putting the difference into cold spares just prior to the end of the warranty period, and doing your own RMAs beyond that -- the idea being to treat them more like disposable phones.

In other words, if company A is selling a drive for 120 and company B is selling it for 170, and you're buying 8, 16, 24, 1000 or more drives, why pay a premium for that many individual insurance policies when, with the cheaper drives, a handful of cold spares purchased outright at full price in lieu of warranty replacements can cover all of them? One man's opinion.
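To put toy numbers on that (every figure below is made up for illustration, not a quote):

[code]
# Toy comparison: cheaper drives + self-funded cold spares vs pricier drives
# with a longer warranty. Every number here is an illustrative assumption.
ARRAY_SIZE = 24
CHEAP, PRICEY = 120, 170              # per-drive street price (hypothetical)
EXPECTED_POST_WARRANTY_FAILURES = 3   # your guess at failures the shorter warranty won't cover

cheap_total  = ARRAY_SIZE * CHEAP + EXPECTED_POST_WARRANTY_FAILURES * CHEAP
pricey_total = ARRAY_SIZE * PRICEY

print(f"cheap drives + cold spares: ${cheap_total}")   # 24*120 + 3*120 = 3240
print(f"longer-warranty drives:     ${pricey_total}")  # 24*170         = 4080

# At these made-up prices the spares approach only loses if you expect more
# than (170-120)*24/120 = 10 post-warranty failures out of the 24 drives.
[/code]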
 
Interesting that there is no "common" method of calculating MTBF - that was kind of what I was getting at. Personally, I don't tend to RMA drives (assuming they don't fail an initial test), so I tend to subscribe to the "cheaper" argument. But forgetting that for a second, within a single manufacturer (WD) I would hope there *is* some standardisation of what MTBF means, so I still wonder how they come up with these figures, since after all it's one of the selling points pushed to the forefront of (yes) their marketing blurb.
 
FINALLY!!! Took them long enough! What's most amazing... I'm about to buy a bunch of low-power disks for a RAID array. I nearly pulled the trigger last week. I can't believe these disks were released BEFORE I bought my disks rather than AFTER. A first in the history of the universe! :D

I was planning on taking a gamble on the GP-AV drives. They're what WD currently ships in their "Live" consumer NAS units. I guess I'll be giving these a shot.

The GP-AV drives have, at times, found their way into retail (my Mom's current boot drive, a 500GB GP-AV, was my own boot drive prior to the passdown, and I upgraded for capacity reasons, not performance reasons) - and they aren't designed for desktop usage at all. The 1TB boot drive I replaced the GP-AV with is, literally, an ex-NAS (MyBook Essentials, in fact) drive, and rings up a 5.9 WEI disk subscore (Eco features disabled, which is my default). The big Eco-Greens (and the GP-AV drives) all have great price-for-capacity numbers (and that was before WD acquired Hitachi Global Storage) - the issues had to do with performance in *green mode*.
 
Happy with my Reds so far. They're quiet -- quieter than my Samsungs, though the only real noise from those was their goofy spin-up noise.
 
Once I get my M1015s flashed, I'm going to pick up a few Reds for a Linux software RAID. I was planning on waiting for Amazon to pick them up, but I'm getting impatient. Looking through the list of shops that sell Reds, I recognize Provantage, SuperBiiz, and Directron. Anyone know how these shops package OEM disks? I know SuperBiiz used to do a good job, but I haven't ordered from them in ages.
 
Is it somehow possible to disable TLER on the Reds?
I'm planning to use them with ZFS, and TLER is not recommended for ZFS.
 
Thanks St3F. I haven't ordered my Reds yet; I want to clarify the TLER question first.
Maybe someone else has tried this and can report back on whether the tool still works with the new Red disks?
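In case anyone wants to test it: the standardized SCT Error Recovery Control setting controls the same read/write recovery timeouts that TLER/wdtler dealt with (though unlike the old firmware flag it generally doesn't persist across a power cycle), and smartmontools can poke it. A sketch of what I'd try -- untested on Reds, and the device path is just an example:

[code]
# Sketch only: query (and optionally set) SCT Error Recovery Control via
# smartmontools. Assumes smartctl is installed and run with enough privileges;
# whether a given Red firmware accepts the set command is exactly what's untested here.
import subprocess

DEV = "/dev/sdb"  # hypothetical device path

def scterc(*args):
    """args == ()           -> show current ERC timeouts
       args == ("0", "0")   -> disable ERC (drive retries as long as it wants)
       args == ("70", "70") -> 7.0 s read/write timeouts, the usual TLER value"""
    cmd = ["smartctl", "-l", ",".join(("scterc",) + args), DEV]
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

scterc()              # equivalent to: smartctl -l scterc /dev/sdb
# scterc("0", "0")    # equivalent to: smartctl -l scterc,0,0 /dev/sdb
# scterc("70", "70")  # equivalent to: smartctl -l scterc,70,70 /dev/sdb
[/code]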

Have you looked at the AV-GP line? They're rated for 24/7 use. Don't believe all of the FUD about them lacking error correction. The lax error correction only comes into play when special ATA commands are used to access the disk. No OS will use those special ATA streaming commands.

EDIT: The special ATA streaming command set is completely different from TLER.
 
The GP-AV drives have, at times, found their way into retail (my Mom's current boot drive, a 500GB GP-AV, was my own boot drive prior to the passdown, and I upgraded for capacity reasons, not performance reasons...

[snip]

You used an AV drive as a boot drive? :eek:
 
Have you looked at the AV-GP line? They're rated for 24/7 use. Don't believe all of the FUD about them lacking error correction. The lax error correction only comes into play when special ATA commands are used to access the disk. No OS will use those special ATA streaming commands.

EDIT: The special ATA streaming command set is completely different from TLER.
Yes, the AV-GP would be my second choice, I guess. But I think the Red line runs cooler and has lower power consumption and a higher MTBF than the AV-GP. That's why I would like to use them.


Lucien said:
Why is it not recommended?

I've read about it in a few threads. E.g.:
Do I need to use TLER or RAID-edition hard drives?
No, and if you use TLER you should disable it when using ZFS. TLER is only useful for mission-critical servers that cannot afford to be frozen for 10-60 seconds, and to cope with bad-quality RAID controllers that panic when a drive is not responding for multiple seconds because it's performing recovery on some sector. Do not use TLER with ZFS!

Instead, allow the drive to recover its errors. ZFS will wait, and the wait time can be configured. You won't have broken RAID arrays, which are common with Windows-based FakeRAID arrays.
Source: http://hardforum.com/showthread.php?t=1500505

But in other forums I found:
Re TLER and ZFS: that statement made by the hardforum poster is incorrect. It should say it really does not matter with ZFS, i.e. TLER on or off makes no real difference. ZFS will wait, or it will correct the error itself.

This has been covered in the OpenSolaris ZFS forums.
Source: http://forums.overclockers.com.au/showpost.php?p=11821399&postcount=153

I'm not sure what to believe... but the explanation in the first quote somehow makes sense. Still, I would be glad to hear that it makes no difference whether you use TLER with ZFS or not :)
Unfortunately, I haven't found the mentioned OpenSolaris forum thread.
 