WD Red drives?

When were your WD20EADS HDDs manufactured?
Also, look at the newegg reviews: http://www.newegg.com/Product/Product.aspx?Item=N82E16822136344

Almost all of them are saying "highly unreliable", which just echoes what I was saying earlier.
Also, those drives are desktop-class and do not feature TLER, so really, you are running a huge risk using them in a hardware RAID array.

I'm glad it's worked this long for you and hope it continues to do so.
But damn, you are running a huge risk.


Go with the Constellation drives; they are true nearline-class drives that are designed for, and far more reliable in, hardware RAID.
I just ordered two Constellation ES HDDs for myself about 30 minutes ago for my server.

The Red drives, while they do have RAFF-like functionality and TLER, are still just desktop-class drives and lack the robustness and full feature set of nearline-class drives.
Remember, high performance ≠ reliability.

My WD20EADS do have TLER enabled; some of the later builds could not enable this. The failure rate of these drives does worry me, though.

It's a cost vs. reliability/warranty issue for me, choosing between Reds and proper enterprise drives (SAS ones aren't much more, so if I did go proper nearline enterprise I would go for SAS). I've only got 4 bays left, and although I have a spare chassis, I would need to buy a SAS expander and a few other bits to use it as an expander chassis. Maybe I can hang on for 4TB SAS; even at £400 each they might be worth it.
 
If we're talking about a business setting and you're spending someone else's money, sure, get enterprise class. But a lot of people on this forum are just storing media at home, and I'd say you're better off buying two desktop-class drives for the same price as what the ES drives are going for. Having two physically separate copies of the data on two cheaper disks > 1 enterprise drive with no backup at the same price point. And again, this is for home storage, where uptime is less of a factor.

I bought mine for $89.99 each.
Please tell me where I can find two HDDs that are half that price. :rolleyes:

Even the absolute cheapest, non-refurbished, desktop-class drives are around $70 in the US.
Also, I'm assuming you are talking about RAID 1 when you say
data on two cheaper disks > 1 enterprise drive
Yeah, no.
I'd take one solid and reliable enterprise-grade drive over two cheap desktop-class drives any day.

But then again, I wouldn't be running just a single one; I'd at least run two in RAID 1.
In the end, I will have to agree with you: all of this means nothing without a backup.
 
My WD20EADS do have TLER enabled; some of the later builds could not enable this. The failure rate of these drives does worry me, though.

As I said before, pre-2010 WD made great HDDs, and you obviously are using drives from that era, as am I.
WD's post-2010 HDDs are where the high failure rates started to emerge.

FYI, all of the later desktop-class drives from WD, save for these Red drives, do not allow TLER and haven't for many years.
Sounds like you got one of the last batches that actually supported it.

Since they have TLER, you aren't at nearly the risk I was pointing out earlier.
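For what it's worth, on drives that support SCT Error Recovery Control (the standardized form of TLER), you can check and set the recovery time limits from Linux with smartmontools; a minimal sketch, assuming the drive exposes SCT ERC and /dev/sdX stands in for your device:

# Query the current SCT ERC read/write limits (reported in tenths of a second)
smartctl -l scterc /dev/sdX
# Set both limits to 7 seconds (70 deciseconds), the usual RAID-friendly value
smartctl -l scterc,70,70 /dev/sdX

Note the setting doesn't survive a power cycle on many drives, so it would need to be reapplied at boot.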
 
In fairness, we don't know how much mishandling actually happens at Newegg, or really what mishandling may have happened at any point between when the drive passed testing at the end of the assembly line and when it finally got to your door.

http://hardforum.com/showpost.php?p=1037246329&postcount=602

I received 2 drives from Newegg similar to this a few months after this reported incident.
Drives were packaged in traditional Newegg fashion: OEM 20-pack Styrofoam with bubble wrap. I honestly have a hard time believing that drives like this would pass any QA inspection at the factory, and am much more likely to believe that damage like this happens from poor handling at Newegg's warehouse.

I do know that every drive I ever received that was defective on arrival just happened to be from Newegg, and they were always drives out of the OEM 20-packs they get from the factory, just like they're doing with the Reds here. It happened enough times, even with Hitachis, that I quickly standardized on only buying retail-boxed ones, since they're a lot more tolerant of handling abuse along the way. And after 200 purchases of retail-boxed Hitachis, not a single bad one. Hardly a scientific study, but it has made me wonder about Newegg, especially since it mirrors the accounts of many others.

I've observed minimal issues when ordering retail-boxed drive products. I can say the same for Amazon's "frustration-free packaging" for OEM Hitachi drives. Like others, my experience with drives ordered from Newegg has been fairly problematic.
 
You're spot on about bare drives from Amazon. Fewer problems sourcing Hitachi from there than from Newegg, but still problematic enough times to put me off. At least with Amazon, returns and customer service are superior. Regardless, Amazon's bare-drive packaging scheme is mostly inadequate.
 
Well, I'm looking for some 3TB non-7200RPM drives, and these seem to fit the bill. I'll probably pick some up if I can; the only other option, it seems to me, is getting some 5K3000 3TB drives - they still have retail boxes of those at Fry's. Are those a good choice too?
 
Well, I'm looking for some 3TB non-7200RPM drives, and these seem to fit the bill. I'll probably pick some up if I can; the only other option, it seems to me, is getting some 5K3000 3TB drives - they still have retail boxes of those at Fry's. Are those a good choice too?

If you're running JBOD then maybe going with Hitachi will fit your needs despite low availability and a higher price point.

If you are intending to run drives with parity in RAID, you need to consider not just the drives you can obtain now, but also replacements in the future. Hitachi put out great products, but the WDC merger ended with parts of Hitachi being sold off as conditions of the buyout. Obtaining new product now is difficult and will be impossible in the near future.

Any Hitachi drive you RMA today will most likely be replaced with a WDC GP drive. Green Power drives made after 2009-2010 can no longer have TLER toggled via the DOS tool.

You don't want to be stuck in that scenario if you can avoid it entirely. Two weeks ago I probably would have said take the risk and go with Hitachi if you can find sufficient quantity and spares. The alternative was Seagate's barracuda line and its known problems.

Today, I'm slightly more optimistic about WDC with their decision to create an affordable low-power product line compatible with RAID controllers. If money is tight or you can hold off, I'd suggest letting early adopters do the diligent beta testing to uncover problems. But if money is no concern, by all means, feel free to report your findings.
 
Well, I'm looking for some 3TB non-7200RPM drives, and these seem to fit the bill. I'll probably pick some up if I can; the only other option, it seems to me, is getting some 5K3000 3TB drives - they still have retail boxes of those at Fry's. Are those a good choice too?

They are the perfect choice if you don't factor in the inflated price points; in my many sourcing experiences, it doesn't get any better than retail-boxed 5KRPM Hitachis from Fry's in terms of the likelihood of ending up with a drive you won't have to return. However, as 1010 mentioned, there's the WDC warranty-replacement issue and getting a non-Hitachi back if you did need to RMA one. But in my experience Hitachis tend to outlive their warranties anyway, which is why I continue buying Hitachis where I can.

I'll give these Reds I just purchased 6 more months before seriously considering additional quantities, both from a pricing standpoint and from a new-model manufacturing-kinks standpoint.
 
Any Hitachi drive you RMA today will most likely be replaced with a WDC GP drive. Green Power drives made after 2009-2010 can no longer have TLER toggled via the DOS tool.

You don't want to be stuck in that scenario if you can avoid it entirely. Two weeks ago I probably would have said take the risk and go with Hitachi if you can find sufficient quantity and spares. The alternative was Seagate's barracuda line and its known problems.

^^ This. This was the reason I didn't buy the Hitachis. I'm going with the Reds or possibly Seagates. You must consider whether any of the drives you buy will still be available later. Now, if you have the dough to buy everything you need + some spares, then Hitachi would be good.
 
With all these reports of DOA drives and burn-in, I'm curious what sort of burn-in you guys do on your new drives, and what sort of testing (other than SMART tests) you do as part of ongoing maintenance.
 
With all these reports of DOA drives and burn-in, I'm curious what sort of burn-in you guys do on your new drives, and what sort of testing (other than SMART tests) you do as part of ongoing maintenance.

Not buy WD. :p
 
Your crusade against WD is annoying; it's not like Seagate is perfect. In fact, after the 7200.11 fiasco, they still haven't learned and screwed up the 7200.14 too!
 
I guess I didn't understand these drives at first. Lower power (like Green) but RAID-friendly (like RE)?
 
Your crusade against WD is annoying; it's not like Seagate is perfect. In fact, after the 7200.11 fiasco, they still haven't learned and screwed up the 7200.14 too!

It was just a joke, get over it.
 
I guess I didn't understand these drives at first. Lower power (like Green) but RAID-friendly (like RE)?

Exactly. They also include a RAFF-like function, which helps them remain stable in multi-bay setups such as NAS devices and light-use SOHO servers.

They aren't as robust as nearline-class drives, as they are still desktop-class, but they definitely fill a niche that direly needed to be filled.
Nearline-class drives cost far too much for most consumer use, especially when most individuals just want low-power 5400RPM drives with TLER functionality.
 
I was also interested in this info.

So using badblocks is a good way of stress-testing new drives.
I tend to kill far too many hard drives, so anything that helps weed out the ones likely to fail is good in my book.
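For anyone wondering, the burn-in is nothing fancy; a minimal sketch of a destructive badblocks pass (this overwrites the entire drive, so only run it on new or empty disks, and /dev/sdX is a placeholder):

# Write-mode test: writes and verifies four patterns across every sector, with progress shown
badblocks -wsv -b 4096 /dev/sdX

A full run on a multi-TB drive can take a day or more.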
 
From the article:
"The obvious question may be then, what's wrong with the WD Greens and other low power drives that have been performing NAS duty to this point? The answer is really about projected use. The WD Green for instance, while the leading low power drive on the market, wasn't designed for the 24x7 access requirements that NAS systems require. The WD Red was engineered specifically for this duty, complete with customized NASware firmware which includes critical features like intelligent error recovery controls that prevent drives from dropping off the RAID due to long recovery cycles. The drives also are engineered with "3D Active Balance technology" which tunes the drive to eliminate vibration leading to improved reliability and overall performance.

WD has also gone to great lengths to ensure a great user experience. They've worked with Synology, QNAP and other NAS providers to make sure the WD Red was qualified as compatible with these popular systems and host chipsets. The drives also offer a good blend of performance and power consumption, which is key given the always-on nature of NAS drives. For that little extra push on the performance side, the drives feature a 64MB cache that's been migrated from DDR to DDR2, which should be twice as fast."

The article does a good job of covering the comparison.


I've been running WD Greens exclusively in my NAS since I set it up in 2010. (Well, when I first set it up, one of the drives was an old WD Black, but that wasn't in there long.) I have 5 of them in there right now. They have never given me a single issue: never dropped out of RAID, never had any problem at all.

That being said, my NAS spends most of its days idling, and when it idles, the drives spin down, so I wouldn't call it a heavy use environment.

I've been considering popping out two of the 2TB WD Greens and using one in a computer build and one for extra storage for my DVR, and replacing them with a new, larger drive (3TB or 4TB). Maybe I'll give one of these WD Reds a try. The price is so close to the Greens, why not?
 
I tend to kill far too many hard drives, so anything that helps weed out the ones likely to fail is good in my book.

How? :eek:

In my 21 years of building my own computers, I have had more drives than I can recollect. Never has any one of them died. They have always served out their usefulness (because they were too small or too slow), and then become fully functional paperweights when I no longer needed them.

Even my old IBM Deathstar never failed on me.

What do you do to your poor drives? :confused:
 
Zarathustra[H];1038942119 said:
I've been running WD Greens exclusively in my NAS since I set it up in 2010. (Well, when I first set it up, one of the drives was an old WD Black, but that wasn't in there long.) I have 5 of them in there right now. They have never given me a single issue: never dropped out of RAID, never had any problem at all.

That being said, my NAS spends most of its days idling, and when it idles, the drives spin down, so I wouldn't call it a heavy use environment.

I've been considering popping out two of the 2TB WD Greens and using one in a computer build and one for extra storage for my DVR, and replacing them with a new, larger drive (3TB or 4TB). Maybe I'll give one of these WD Reds a try. The price is so close to the Greens, why not?

How do you prevent the NAS from thinking the drive failed when they spin down? I tried Green drives in a RAID once and it was a nightmare because of that. I had to set a cron job that runs every minute to write data to the array so they don't go to sleep.
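For reference, the workaround was just a crontab entry along these lines (the mount point is a placeholder for wherever the array is mounted):

# Touch a file on the array every minute so the drives never idle long enough to spin down
* * * * * touch /mnt/array/.keepalive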
 
3TB Reds back in stock on Newegg, ordered 3, we'll see what happens.

Drives in general just aren't as good as they used to be... but isn't that true of everything? :) I had 1 of my initial 5 Samsung 2TBs fail after 3 months last year.
 
Drives in general just aren't as good as they used to be... but isn't that true of everything? I had 1 of my initial 5 Samsung 2TBs fail after 3 months last year.

Infant mortality is not always caused by poor component quality or poor manufacturing...

Your example could have been caused by the delivery driver kicking the box after it fell from the truck.
 
How do you prevent the NAS from thinking the drive failed when they spin down? I tried Green drives in a RAID once and it was a nightmare because of that. I had to set a cron job that runs every minute to write data to the array so they don't go to sleep.

Sounds like a RAID controller issue to me...

I've used them in two ways: first with FlexRAID, which is software, runs in user space, and sits on top of any mountable file system (so, not block-level), and now with a Drobo S, because it is just so damned easy and it manages itself without problems.
 
Drive #1 = S/N WMC300005266 --> Will not power on, DOA
Drive #2 = S/N WMC3000510169 --> Fails SMART Drive Tests (Tested using Western Digital's Lifeguard Diagnostics)

Is Lifeguard a free download? I have a WD30EZRX that I bought last week that I would like to run SMART tests on.

Will it test other manufacturers' drives, or is there a generic free program to do so? I'm curious about SMART on my Hitachi and Seagate drives as well.

(Sorry for off topic)
 
Zarathustra[H];1038942504 said:
Sounds like a RAID controller issue to me...

I've used them in two ways: first with FlexRAID, which is software, runs in user space, and sits on top of any mountable file system (so, not block-level), and now with a Drobo S, because it is just so damned easy and it manages itself without problems.

It was software RAID (mdadm) that I was using. Basically, a drive spins down, the software thinks it failed, and it gets dropped. It's best for RAID drives to be constantly spinning. Do you do any kind of mod to the drives, like a firmware upgrade of some sort? I think I recall hearing something about a hack to make them not go to sleep.
 
It was software RAID (mdadm) that I was using. Basically, a drive spins down, the software thinks it failed, and it gets dropped. It's best for RAID drives to be constantly spinning. Do you do any kind of mod to the drives, like a firmware upgrade of some sort? I think I recall hearing something about a hack to make them not go to sleep.

No, they go to sleep, and that's the way I like it.

With FlexRaid though, since it sits on top of the file system, it wakes the drive before continuing.

I wonder if mdadm has a setting for how long it waits before a drive times out? If you lengthen that setting, maybe the drive has time to spin up...

How the Drobo does this I don't know. Closed source wizardry.
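If mdadm itself doesn't expose such a timeout, one knob that does exist on Linux is the kernel's per-device command timeout, which defaults to 30 seconds; raising it should give a sleeping drive time to spin up before the kernel gives up on it. A sketch, assuming the drive shows up as sdb (the value resets at reboot):

# Raise the kernel's command timeout for this drive from the default 30s to 120s
echo 120 > /sys/block/sdb/device/timeout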
 
It was software RAID (mdadm) that I was using. Basically, a drive spins down, the software thinks it failed, and it gets dropped. It's best for RAID drives to be constantly spinning. Do you do any kind of mod to the drives, like a firmware upgrade of some sort? I think I recall hearing something about a hack to make them not go to sleep.

I put my WD Green drives to sleep under mdadm all of the time, and have only ever had one failure, due to a bad SATA cable connector.

Not once was a drive lost from sleep; it's unfortunate that happened to you.
What HDDs were you using, and what RAID level?
 
I put my WD Green drives to sleep under mdadm all of the time, and have only ever had one failure, due to a bad SATA cable connector.

Not once was a drive lost from sleep; it's unfortunate that happened to you.
What HDDs were you using, and what RAID level?

I was using 4 WD Green 2TB drives in RAID 5. Two of them would always drop out at the same time, but it was random which ones. Even with my script it was still happening, just not as often. I since switched to RAID 10 and it's been OK, though the machine has been off for a while. I'm tempted to look into whether there is indeed a timeout setting, since I'd rather have RAID 5 on there than RAID 10; performance is not really needed.
 
Zarathustra[H];1038942137 said:
How? :eek:

In my 21 years of building my own computers, I have had more drives than I can recollect. Never has any one of them died. They have always served out their usefulness (because they were too small or too slow), and then become fully functional paperweights when I no longer needed them.

Even my old IBM Deathstar never failed on me.

What do you do to your poor drives? :confused:
I believe you have been very lucky then (and I've been unlucky) :D

Drives just tend to die on me.
I've lost drives in RAID 5 and 6 arrays - a few I think were down to inadequate cooling in a 5-in-3 unit.
A 4TB Hitachi died after about a day, so I put that down to shipping rather than anything I did.
Over the years I've had quite a few "click of death" drives as well.

A friend's QNAP also lost 2 drives at the same time, so he lost all his data on that - maybe it's contagious.

I have yet to have an SSD die on me (I've had 8 of them across various systems for the last few years), but I imagine it's only a matter of time. I did have one drop out of a RAID 0 array after a firmware update, which was a pain.
 
Hm, I need new drives, and I'm liking the WD Red 3TB ones; there are no problems with using 3TB versions on ZFS, right?

I could start replacing my 2TB Samsungs slowly; hopefully these drives will be reliable.

Thanks
 
I just ordered a couple of 2TB models to go in a Synology; we'll see how that works out.
 
Yeesh, NE raised the price to $199, which is $10 more than the MSRP in the news stories announcing the drives, though NE always claimed $199. I hope some other vendors get some, though I expect it's just supply & demand at this point.
 
Yeesh, NE raised the price to $199, which is $10 more than the MSRP in the news stories announcing the drives, though NE always claimed $199. I hope some other vendors get some, though I expect it's just supply & demand at this point.

Yeah, at that price they are not worth it to me.

When they were within $10 of the Greens I was considering it, but at this price I'd just get a Green, as they have worked OK in my NAS.
 
I believe you have been very lucky then (and I've been unlucky) :D

Drives just tend to die on me.
If you don't already, test your drives prior to storing data on them. It might cost you 2-3 days to run a couple of passes scanning all sectors of an HDD, but on the plus side you'll identify most drives with mechanical defects. You can return or RMA those drives.

If you've owned enough drives of a particular manufacturer/model, sometimes you can also identify acoustic cues or audible irregularities from a drive that shows passable SMART logs and sector scans but may in reality be prone to failure. Testing this way is an inexact science, as hardware/firmware changes may alter the observable characteristics.
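As a concrete example of the kind of pre-use testing I mean, smartmontools can kick off the drive's own full-surface self-test; a minimal sketch, with /dev/sdX as a placeholder:

# Start the drive's built-in long (full-surface) self-test; it runs in the background
smartctl -t long /dev/sdX
# Hours later: review the self-test result plus attributes like reallocated/pending sectors
smartctl -a /dev/sdX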

I've lost drives in RAID 5 and 6 arrays - a few I think were down to inadequate cooling in a 5-in-3 unit.
It wouldn't happen to be an Icy Dock MB455SPF-B, would it?


http://www.icydock.com/goods.php?id=48

That 5-in-3 hot-swap unit was poorly designed: the 80mm fan only circulated air to the right 3 bays. If you pull up one of the images with the fan removed, you can see that the PCB hole allowing air to pass through is extremely small. Many years ago I bought one of those units, and in thermal testing, HDDs in the left 2 bays registered 10°C hotter.

A 4TB Hitachi died after about a day, so I put that down to shipping rather than anything I did.
Over the years I've had quite a few "click of death" drives as well.
It could have been a lemon that barely passed QA inspection at the factory, or the drive could have been poorly packaged or mishandled before you received it. This is why it's good practice to test storage devices rigorously before putting them into use (or a production environment).

A friend's QNAP also lost 2 drives at the same time, so he lost all his data on that - maybe it's contagious.
lol, yeah, I'm going to say no. Mechanical platter-based storage devices can die for any number of reasons.

I have yet to have an SSD die on me (I've had 8 of them across various systems for the last few years), but I imagine it's only a matter of time. I did have one drop out of a RAID 0 array after a firmware update, which was a pain.
SSDs are subject to their own unique set of design tolerances.
 
Got my two RMA replacement drives in today from Newegg.

They were packaged really well, and both tested fine with SMART and extended tests.

Will start to build my array and post my results.
 
These drives are finally available here in the UK :) I was thinking of upgrading my old 1TB WD Black drives (2009-2010 range) to the new 3TB WD Reds, but I'm a bit hesitant and will wait for some more feedback from people.
 
These drives are great. I have been looking for some new drives to replace the 4 2TB WD Green drives I have in my Synology box. Ever since WD came out and said they don't support WD Greens in RAID environments, I've been nervous but have been rolling the dice, waiting for a drive like the Red to come out. I didn't want to pay up the butt for an enterprise RE drive, so these new Red drives were a perfect option. I decided to bite the bullet and buy 4 2TB Red drives to ensure reliability. WD seems to have thought about this and created a product that fits a particular niche in the market.

They've also come out with a pretty fun video that describes what Red is, in case anybody is still unfamiliar with the drives and wants to know more: http://www.youtube.com/watch?v=4LnXJLMSMNo . Seems like they've got a pretty diverse group of engineers over there at WD.

So far so good with the Red: easy integration, and I'm sleeping a little better at night. We'll see what happens...
 
Hey, I'm new here. I've been watching this thread closely because I'm looking for 4 drives for my ProLiant N40L. I actually plan to run them in a non-RAID environment (though possibly software RAID). I almost bit on some Seagate drives, then most recently ALMOST on the Green drives. I just can't seem to make up my mind.

What's most mind-boggling to me is the number of reviews stating that the drive(s) are just DOA. What is the deal with that? Is it really a packaging issue? I can't think of any other product with DOA rates like that. I'm not referring to WD alone; all drives seem to have these reviews. I can understand failures after a month or 6 months, but DOA out of the box?
 