Western Garbage Green

Does anyone else think the MTBF is complete & utter BS?

I have very, VERY few drives that outlast 35k power-on hours.

Yeah, I usually retire any drive that's over 3 years old (~26K hours) - I did have some in the 30K range, but again, with my low power-cycle counts (maybe 75), my drives are usually still in good condition.
 
I thought I would share my experience, as I just had another WD Green drive failure. I have 10 x 2TB WD EARS Green drives. The average drive temperature is 33°C in a FreeNAS server with head parking disabled.

Death count with hours:
1st drive 720h
2nd drive 2400h
3rd drive 3600h
4th drive 19400h
5th drive 19500h
6th drive 19939h <- today

With 4 drives left, who wants to bet when they will die?
7th 19927 no errors
8th 19928 (Multi zone error rate raw value = 2)
9th 19960 no errors
10th 19960 no errors

I think I have learned a lot from this:
1. Don't buy all your drives from the same vendor
2. Burn them in a bit to stagger the hour count, because drives 4, 5, and 6 all died within less than a month of each other.

On the flip side, I have a couple of WD 1TB Black drives: one is at 27,000h and the other at 29,000h. Moving forward I have switched over to Seagate Constellation ES.3 enterprise-grade drives; I just hope they last. I am also thinking of mixing it up with some RE4 drives.

Right now I am using raidz2 (8 + 2), but seeing how fast they fail, I am contemplating going raidz3 (8 + 3).
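
If I do switch, the rebuild is just a wider parity vdev; a rough sketch (pool and disk names are placeholders):

Code:
# Sketch only - pool and disk names are placeholders.
# Current layout: one 10-disk raidz2 vdev (8 data + 2 parity):
#   zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
# Possible replacement: one 11-disk raidz3 vdev (8 data + 3 parity):
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10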
 
WD Greens are the worst drives for sale right now; as long as you stay away from them, almost everything else is safe.
 
Would actually love to see some hard evidence of what people are claiming. I have 2 green drives, both working.

I guess that means I should say that these drives are the best thing since sliced bread, and provide no evidence of their condition when they arrived, how warm they run, under what power conditions, how many hours they have run, how many times they have been started, or their inside leg measurements.

Without seeing how many have been sold and how many have been RMA'd because of failure, posting things like "these are the worst drives ever" is pretty useless; proper figures would help the matter.

I guess enterprise drives do run longer than consumer ones, although it remains to be seen how much of that comes from the fact that they have proper power supplies (behind large UPS units), generally run 24/7 in properly managed arrays, and are usually looked after better than something stuck into a 3.5" bay and used for downloading lots of pr0n, rather than from being built better.

Always have a backup, or two; then it doesn't really matter how reliable or unreliable a drive is. As long as you put the safeguards in place, you shouldn't lose everything :)
 
Would actually love to see some hard evidence of what people are claiming. I have 2 green drives, both working.

I guess that means I should say that these drives are the best thing since sliced bread, and provide no evidence of their condition when they arrived, how warm they run, under what power conditions, how many hours they have run, how many times they have been started, or their inside leg measurements.

I don't know how to give you hard evidence; no one is going to come in here claiming that the drives are horrible just for the fun of it. So this is the best I can give you: thermal imaging of the drives, a snapshot of the WD RMA page, and SMART logs. I would give you more if I still had the drives.

Code:
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green (Adv. Format)
Device Model:     WDC WD20EARS-00MVWB0
Serial Number:    ------------
LU WWN Device Id: --------
Firmware Version: 51.0AB51
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Mon Sep  2 19:00:54 2013 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

.....
.....

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   179   179   051    Pre-fail  Always       -       7416
  3 Spin_Up_Time            0x0027   244   172   021    Pre-fail  Always       -       2775
  4 Start_Stop_Count        0x0032   099   099   000    Old_age   Always       -       1197
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   073   073   000    Old_age   Always       -       19854
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       79
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       59
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       1144
194 Temperature_Celsius     0x0022   123   111   000    Old_age   Always       -       27
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   001   000    Old_age   Always       -       15
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       19
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   175   175   000    Old_age   Offline      -       6742


Code:
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green (Adv. Format)
Device Model:     WDC WD20EARS-00MVWB0
Serial Number:    -------------
LU WWN Device Id: 5 ------------
Firmware Version: 51.0AB51
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Mon Sep  2 19:01:12 2013 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

....
....

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   243   170   021    Pre-fail  Always       -       2833
  4 Start_Stop_Count        0x0032   099   099   000    Old_age   Always       -       1205
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       1
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   073   073   000    Old_age   Always       -       19853
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       80
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       58
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       1155
194 Temperature_Celsius     0x0022   123   111   000    Old_age   Always       -       27
196 Reallocated_Event_Count 0x0032   199   199   000    Old_age   Always       -       1
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       156
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   199   000    Old_age   Offline      -       6



Without seeing how many have been sold and how many have been RMA'd because of failure, posting things like "these are the worst drives ever" is pretty useless; proper figures would help the matter.

Western Digital is not going to give you numbers of sold and RMA'd drives; that would be stupid of them. Posts like these are not useless; they just have to be considered in context. Look at the Newegg reviews (http://www.newegg.com/Product/Product.aspx?Item=N82E16822136891): if this drive were so great, why are there so many negative comments?

Reviews are hard to sort through because these issues do not become apparent until 2+ years in. This probably explains why WD lowered their warranty on Green drives to 2 years.


Your drives haven't failed, right? How many hours are on them? Post the SMART logs; let's start keeping records of this.
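
If anyone wants to post theirs, something like this should dump the same info as the logs above (the device name is just an example - on FreeNAS it's usually /dev/adaN, on Linux /dev/sdX):

Code:
# Device name is only an example - adjust for your system.
smartctl -a /dev/ada0     # full report: info section, attributes, error and self-test logs
smartctl -A /dev/ada0     # just the vendor attribute table shown above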
 
Sorry for the misunderstanding.

I don't know how to give you hard evidence...

My comment was mainly referring to nekrosoft13, who wrote:

WD Greens are the worst drives for sale right now; as long as you stay away from them, almost everything else is safe.

Which is kind of like saying that because a train somewhere in the world has crashed, trains are the worst mode of transport and you should stay away from them.

Posting your user experience with screen dumps and SMART data shows that you have indeed had a short life from your drives, but with only a 2-year warranty on them, I guess that is what you would expect from this throwaway society, combined with profit-maximising on the manufacturer's side.

I really hope my 2 reds hold up longer than your greens, but I have backups just in case anyway.
 
Ah, you already corrected it, lol - I was looking at an old version of the webpage.
 
Yeah, I usually retire any drive that's over 3 years old (~26K hours) - I did have some in the 30K range, but again, with my low power-cycle counts (maybe 75), my drives are usually still in good condition.

Huh, obvious question, but you're saying switching your computer off and then turning it back on is significantly worse for your hard drive than just leaving it on?
 
Huh, obvious question, but you're saying switching your computer off and then turning it back on is significantly worse for your hard drive than just leaving it on?

That's what I've generally heard. Spinning up the disk is harder on the hardware than just leaving it spinning 24/7.

When I retired my last set of drives they were at around 4 years old with a spin-up count around 100 or so.
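
If anyone wants a quick number to compare, the cycles-per-hour ratio can be pulled straight from the SMART table; a rough sketch (the device name is just an example):

Code:
# Rough sketch: power cycles per 1,000 power-on hours, from the SMART attribute table.
# /dev/ada0 is only an example device name - adjust for your system.
smartctl -A /dev/ada0 | awk '
    $2 == "Power_On_Hours"    { hours  = $10 }
    $2 == "Power_Cycle_Count" { cycles = $10 }
    END { if (hours > 0) printf "%d cycles in %d hours = %.1f per 1000 h\n", cycles, hours, cycles * 1000 / hours }'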
 
Maybe I should go buy a lottery ticket. I've never had a Green die on me, or any WD drive for that matter.

I have two Greens above 30000 hours, a dozen or so between 15000-20000, and a handful of new ones. The only drive I have that's failed in the last few years was a Seagate 3320620AS that kicked it at just under 3000 hours.

Most of them have a start/stop count of less than 100, so that may explain their good health.
 
That's what I've generally heard. Spinning up the disk is harder on the hardware than just leaving it spinning 24/7.

When I retired my last set of drives they were at around 4 years old with a spin-up count around 100 or so.

Yup, I have a 25K-hour (~2.9 years) drive with a power-on count of 106. I probably added a dozen or so when I was initially testing my Haswell overclocks, or trying to find non-failing NVIDIA drivers; I finally settled on 326.80.

Otherwise it'd be on 24/7, barring significant driver updates - I may not power off for months at a time.
 
Then I must be incredibly unlucky, with 3 failing under 2000 hours and a ~100 cycle count. I wonder how much starting and stopping affects the drive; I know the consensus is to leave the drives running. When my server was first set up there was no spin-down timer on any of the drives for that reason. To put it in perspective, I have two Black drives: one with almost 30k hours and 1334 start/stops, the other with 27.6k hours and 340 start/stops, so I don't know if you can really say how much that is a contributing factor. Again, the sample size is way too small to draw a conclusion about the contributing factors. It would be great if someone from a large IT department or data center would weigh in with their thoughts on drive replacement.

Are you running these drives in a server or desktop environment?
 
I've had 1 WD drive fail in the last 5-10 years, and it was a 2TB Green drive. On the plus side, they cross-shipped me a new one and I had it in less than 5 days. The second one has been fine for the past year, though, and has never been shut down.

I still have 2 of the first-generation 30GB WD Raptors they released running in one of my folding machines. I think they have been running nearly continuously since I got them; I have no idea how many hours are on them, but it has to be pretty impressive by now. I think Western Digital makes good shit for the most part.
 
I don't know how to give you hard evidence; no one is going to come in here claiming that the drives are horrible just for the fun of it. So this is the best I can give you: thermal imaging of the drives, a snapshot of the WD RMA page, and SMART logs. I would give you more if I still had the drives.

Your drives haven't failed, right? How many hours are on them? Post the SMART logs; let's start keeping records of this.

A lot of people make disparaging claims to hurt a manufacturer or retailer. Some think that is fun.

We upgraded our server early this year. For the previous 6 years (early 2006 to 2013) it ran with no disk errors. It rebooted every night. The computer had 3 WD Green drives when we shut it down.

Our new server has 2 SSDs and 2 WD Greens. Our HTPC has 8 WD Greens. Since 2006 we have had no disk failures.

I will admit that our hard drives do not have very long operational lives. We outgrew 120GB, 250GB, 500GB, 1TB, and 1.5TB hard drives. It takes about 2 years either to fill a backup hard drive or for a production hard drive to be considered too slow.

I don't know if WD drives are good or not. I do know they serve my purposes.
 
Yeah, I usually retire any drive that's over 3 years old (~26K hours) - I did have some in the 30K range, but again, with my low power-cycle counts (maybe 75), my drives are usually still in good condition.

Shouldn't retire a disk with such low hours. Should go hard on it!

 
Probably yes :)

Case in point: Americans have a life expectancy of 78.62 years, or 689,183 hours. Their MTBF is 119.19 years, or 1,044,815 hours, estimated from their annualized failure rate as published in the CIA World Factbook.

MTBF does not mean "life expectancy," so the verdict "complete & utter BS" may be rooted in a misunderstanding.
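
The relationship is simply MTBF = operating hours per year / AFR. A quick sanity check of the figures above, assuming 24/7 operation and the ~0.84% annual death rate they imply:

Code:
# MTBF from an annualized failure rate, assuming 8,766 powered-on hours per year.
# The 0.839% AFR is the annual death rate implied by the figures above.
awk 'BEGIN { afr = 0.00839; hpy = 8766; printf "MTBF = %.0f hours (%.1f years)\n", hpy / afr, 1 / afr }'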

I bought a bunch of 7200.11 drives. The MTBF is based on 2400 hours/YEAR of use. That is it.

So their "MTBF" is bs. In no place I've ever worked, nor at home is the pc ever turned off. Their MTBF is based on the drive not being powered on 75% of the time...

Also: in my 16-drive array, I very rarely have any that make it over 30k hours, and they are in a basement (cooler), in an enclosed case separate from the main PC, with a fan. I drop the drives when smartmontools tests tell me a drive is failing or it gets an increase in uncorrectable errors.
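
For what it's worth, that test-and-alert routine can be automated with smartd; a minimal smartd.conf sketch (the device name, schedule, and mail address are only placeholders):

Code:
# Minimal smartd.conf sketch - device name, schedule and address are placeholders.
# -a      monitor all SMART attributes and overall health
# -o on   enable automatic offline data collection
# -S on   enable attribute autosave
# -s ...  short self-test daily at 02:00, long self-test Saturdays at 03:00
# -m ...  mail a warning on failed tests or failing attributes
/dev/ada0 -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com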
 
Where there's smoke there's fire. No sense arguing against the lot of us who have bad WD Greens. I never owned a Deathstar, and these are the worst hard drives I've ever owned. Since WD Reds are similar, I skip those now. I'd only trust their enterprise drives until they clean up their act.
 
I bought two WD 1.5TB Green drives a few years ago. The first thing I did was disable head parking. Both drives are still running strong.
 
When I buy an HDD, I don't care if it's WD, Seagate, Samsung, Hitachi, Toshiba, Maxtor, ... All I care about is that my HDDs stay away from heat, power fluctuations, and vibration.

That probably is a big reason why drives are failing. I need to RMA my 3TB Seagate Expansion, so I need a new drive now and can't tell which one I should get. When you consider that people:

1. Put the drives in all sorts of different situations.
2. Write reviews too soon, before really testing the drive - you don't know if they bought it three days ago or three years ago.
3. Are more likely to complain when a drive fails, which also skews the results.

I have NO IDEA what drive to get.
 
Why is everyone disabling head-parking?

Because it makes a clicking noise. HDD geeks associate any sort of click coming from an HDD with the "click of death" of a dying drive. The WD Green is set to park its heads after 8 seconds of idle, so every 8 seconds they hear the drive dying, and nothing anyone says will ever convince them that head parking isn't killing the drive.

Interesting reading, particularly the post by sub.mesa that busts a bunch of myths.
 
Because it makes a clicking noise. HDD geeks associate any sort of click coming from an HDD with the "click of death" of a dying drive. The WD Green is set to park its heads after 8 seconds of idle, so every 8 seconds they hear the drive dying, and nothing anyone says will ever convince them that head parking isn't killing the drive.

Interesting reading, particularly the post by sub.mesa that busts a bunch of myths.

Thanks. sub.mesa also mentions that 8 seconds might be too often to park and suggests it should have been every 2 minutes. Apparently most drives are rated for around 300,000 parks, and you can easily go over that with one park every 8 seconds.
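
Back-of-the-envelope for the worst case (a workload that wakes the drive again right after every park):

Code:
# 300,000-park budget vs. an 8-second idle timer, worst case
# (the drive is woken again immediately after every park):
awk 'BEGIN { budget = 300000; per_hour = 3600 / 8; hours = budget / per_hour; printf "%.0f hours (~%.0f days) of that pattern uses up the budget\n", hours, hours / 24 }'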

Why are people disabling parking instead of switching it to 2 minutes? I've heard one guy say people are having problems changing it, so they just disable it.
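
For what it's worth, on the Greens the timer can be read, changed, or turned off with idle3ctl from idle3-tools; this is only a sketch (the device name is a placeholder, the raw-value-to-seconds mapping is described in the idle3-tools docs, and the drive usually needs a full power cycle before a new setting takes effect):

Code:
# Sketch only - /dev/sdb is a placeholder; check the idle3-tools docs before changing anything.
idle3ctl -g /dev/sdb       # read the current idle3 (head-park) timer
idle3ctl -s 138 /dev/sdb   # set a longer timer (raw value; see the docs for the seconds mapping)
idle3ctl -d /dev/sdb       # or disable the timer entirely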
 
What about the WD Green 2.5" drives? Are they just as bad? I was thinking of getting a 2TB drive. (I know that these don't fit in notebooks.) But there are no WD Blues or Blacks in the 2TB size, and I haven't found other manufacturers yet who make 2TB 2.5" drives.
 