WD Red drives?

Hi there,

I picked up a couple of these drives last week and have been stress testing them before I stick them in a NAS. I've run DBAN and am currently running a 4-pass badblocks. However, from the threads I've read so far, I wasn't expecting it to take this long. Current elapsed time on two 2TB drives is 5 days 20 hours, with the last leg of pass number 4 to go.

I'm using an X9SCL with PartedMagic. I fired off two instances of badblocks in parallel using the following command:

Code:
badblocks -wv -p 4 -b 4096 -c 131072 /dev/sdx

So far no errors have been reported but I wanted to check that this runtime was expected.
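Once it finishes, I'm planning to sanity-check the SMART counters with something like this (same /dev/sdx placeholder as in the command above):

Code:
smartctl -H /dev/sdx  # overall SMART health self-assessment
smartctl -A /dev/sdx  # attributes; watch Reallocated_Sector_Ct and Current_Pending_Sector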

Thanks.
 
I got a Malaysia (24 Oct 2012) batch myself; 3 of the 4 drives died within 24 hours and the remaining one died under bcwipe stress testing. Average MTBF: 37 hours.

Further, I've been on hold with Western Digital's special support line for 202 minutes as of this post.

That's less than encouraging.
 
2 TB, with 4 tests, run 4 times over the drive, for 5 days, gives you an avg speed of 146 MB/sec.

I would consider that an excellent speed for that model of disk.
 
2 TB, with 4 tests, run 4 times over the drive, for 5 days, gives you an avg speed of 146 MB/sec.

To expand on that, each pass writes and reads the whole drive 4 times (for patterns 0xaa, 0x55, 0xff, 0x00), so that is 8 times the capacity of the drive read or written for each pass. You had 4 passes, so that is 4 × 8 = 32 times the capacity of the drive. At a rough estimate of 100 MB/sec average speed (faster at the outside, slower at the inside), that comes to 2e12 × 32 / 100e6 = 640,000 sec ≈ 177.8 hours ≈ 7.4 days.

Since it looks like your drives will complete in less than 7 days, the average speed must be higher than 100 MB/sec. That's pretty good.
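Or, spelled out with shell arithmetic (sticking with the rough 100 MB/sec estimate):

Code:
echo $(( 4 * 2 * 4 ))                  # 32 whole-drive reads/writes: 4 patterns x (write + read) x 4 passes
echo $(( 2000000 * 32 / 100 ))         # seconds at 100 MB/sec for a 2 TB (2,000,000 MB) drive: 640000
echo $(( 2000000 * 32 / 100 / 3600 ))  # ~177 hours, i.e. roughly 7.4 days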

By the way, why do you use "-c 131072"? The default of 64 blocks (4KiB) at a time seems to give me plenty of speed. Perhaps increasing it to 128 or 256 would improve the speed a few percent. I don't see why you need 131072.
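In other words, something along these lines should already run at full speed (the added -s flag just shows progress; /dev/sdx is a placeholder as in your original command):

Code:
badblocks -wsv -p 4 -b 4096 -c 256 /dev/sdx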
 
By the way, why do you use "-c 131072"? The default of 64 blocks (4KiB) at a time seems to give me plenty of speed. Perhaps increasing it to 128 or 256 would improve the speed a few percent. I don't see why you need 131072.

Thanks for the simple maths lesson - don't know why I didn't do it myself now. :)

As for the -c switch, I found a post somewhere on the web that said the larger the number of blocks you test at once, the faster it will run. I initially tried to use 4 GB per instance, but that segfaulted, so I backed off to 131072, which uses 1 GB of RAM.
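For what it's worth, the buffer arithmetic works out like this (the doubling to roughly 1 GB is my guess that badblocks keeps a separate compare buffer of the same size; I haven't checked its source):

Code:
echo $(( 131072 * 4096 / 1024 / 1024 ))      # 512 MiB of test data per chunk at -c 131072 with 4 KiB blocks
echo $(( 2 * 131072 * 4096 / 1024 / 1024 ))  # ~1024 MiB if a same-sized compare buffer is also held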
 
JoeComp, I would say 64 blocks would be too small. Not sure what these drives currently have, but the new ones are all going to have 64 MB of onboard RAM. So I would try to do at least 128 MB to make sure the drive's onboard RAM is flushed fully before it does the reads. This is even more true if you have a RAID card with onboard RAM too. We don't want to be testing the RAM chips, but the drives themselves.
 
I initially tried to use 4 GB per instance, but that segfaulted, so I backed off to 131072, which uses 1 GB of RAM.

I really don't think that is necessary. The Linux kernel already defaults to 128 KiB read-ahead. You can increase the read-ahead to 256 KiB or 512 KiB if you like, but I have found that it does not make much difference in speed.

Similarly, 64 x 4 KiB blocks = 256 KiB transfers. A 256 KiB sequential read or write should already be close to maximum speed on an HDD. If not, increasing -c to 128 or 256 (512 KiB or 1 MiB sequential transfers) should certainly reach full speed. Using -c 131072 (128Ki blocks, 512 MiB per transfer) is way overkill.
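If you do want to check or bump the kernel read-ahead, blockdev will do it (the value is in 512-byte sectors, so 256 = 128 KiB; /dev/sdx is a placeholder):

Code:
blockdev --getra /dev/sdx      # current read-ahead in 512-byte sectors (256 = 128 KiB)
blockdev --setra 512 /dev/sdx  # raise it to 256 KiB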
 
So I would try to do at least 128 MB to make sure the drive's onboard RAM is flushed fully before it does the reads. This is even more true if you have a RAID card with onboard RAM too. We don't want to be testing the RAM chips, but the drives themselves.

Have you used badblocks?
 
Edit: Ignore, hardware problem. See post #501.
When using badblocks on 3x 3TB Seagates I noticed an increase in instantaneous speed going from -c 128 to -c 2048; past that there was little further gain. Disk throughput was viewed using iotop. I did not look at average speed because I ended up killing badblocks after 2 patterns.
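For reference, this is roughly what I was watching (iotop's -o flag only shows processes actually doing I/O; iostat from the sysstat package is a reasonable alternative):

Code:
iotop -o      # live per-process I/O, only processes currently reading or writing
iostat -mx 5  # per-device MB/s and utilisation, refreshed every 5 seconds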
 
When using badblocks on 3x 3TB Seagates I noticed an increase in instantaneous speed going from -c 128 to -c 2048; past that there was little further gain. Disk throughput was viewed using iotop. I did not look at average speed because I ended up killing badblocks after 2 patterns.

What was your -b block size?
 
I wanted to give a status update: when I received the original 8 in July 2012, 1 was DOA and 1 had bad sectors. Newegg got those replaced very quickly.

Since I got the drives up and running in a RAID 5 array, I have not had any issues.

Zero dropouts, and power consumption is down compared to the WD AAKS drives they replaced in the array.


(attached screenshot: storaged.png)
 
I've been running 5 x 3TB REDS for a month now. Zero issues, good performance and very low noise levels. Very happy :)

But they came properly packed:
(attached photos: emMLA.jpg, g1wFf.jpg)
 
Hi there,

Code:
badblocks -wv -p 4 -b 4096 -c 131072 /dev/sdx

So far no errors have been reported but I wanted to check that this runtime was expected.

Thanks.

Hi, just a reminder to other readers. I thought it would be useful to try this on my home computer. After stopping the mdadm array, without thinking much, I issued the command against the two existing data disks (still in their RAID configuration) and promptly erased all the data on my home computer.

Again, it is entirely my own fault for not checking the command before running it. I think next time I need to purchase and prepare actual external backup storage first.
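A sanity check worth running before pointing badblocks -w at anything; these are standard tools, and /dev/sdx is a placeholder for the drive you actually intend to wipe:

Code:
cat /proc/mdstat                            # make sure the disk is not part of an active md array
lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT  # confirm which device node belongs to which physical drive
smartctl -i /dev/sdx                        # double-check the model and serial before wiping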
 
Edit: Ignore, hardware problem. See post #501.
Adding to the badblocks discussion, I got 3 more Seagate 3TB drives. Going from -b 4096 -c 256 to -c 2048 roughly doubled the instantaneous throughput. Past -c 4096, only tens of MB/s were added with each further -c doubling.
 
Adding to the badblocks discussion, I got 3 more Seagate 3TB drives. Going from -b 4096 -c 256 to -c 2048 roughly doubled the instantaneous throughput. Past -c 4096, only tens of MB/s were added with each further -c doubling.

Your last sentence seems confused. Why did you jump to 4096 instead of 2048? And it is hard to believe that going from -c 2048 to -c 4096 (or from -c 4096 to -c 8192) would increase the speed by >10 MB/s, which is actually a sizable amount. Also, I suspect your speed measurements are misleading (or something is poorly configured on your system) because I can get average speed higher than 100 MB/s on my system with -b 4096 and -c 64 (default). There should be no need to go to -c 2048 to achieve optimal average speed.

Why not post the actual speeds you measured for each value of -c?
 
Hi there,

I picked up a couple of these drives last week and have been stress testing them before I stick them in a NAS. I've run DBAN and am currently running a 4-pass badblocks. However, from the threads I've read so far, I wasn't expecting it to take this long. Current elapsed time on two 2TB drives is 5 days 20 hours, with the last leg of pass number 4 to go.

I'm using an X9SCL with PartedMagic. I fired off two instances of badblocks in parallel using the following command:

Code:
badblocks -wv -p 4 -b 4096 -c 131072 /dev/sdx

So far no errors have been reported but I wanted to check that this runtime was expected.

Thanks.

Interesting that you should be using that board, as I got one in with 6 drives and an Intel RAID controller (can't remember the model), and speed testing on that board is atrocious: I'm talking less than 30 MB/s write speeds, and reads slower than a single drive, when in RAID 5.
 
That is definitely not the fault of the board. If you are using a RAID controller without a nonvolatile cache (or any cache at all), low write speeds are expected unless you force-enable write caching, at the risk of data loss in case of a power failure.
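As a quick check at the drive level, hdparm can query each disk's own write cache (the controller's cache policy has to be set in the controller's own utility, which is vendor-specific; /dev/sdx is a placeholder):

Code:
hdparm -W /dev/sdx  # query the drive's write cache setting (0 = off, 1 = on)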
 
Origin_Unknown, I have an Intel SAS8UC8i that was flashed with the LSI IT firmware. I was having no speed issues with a 6-drive RAID-Z2 during my initial testing. Can't remember the numbers, but well over 100 MB/s.

As for my WD Red drives, they checked out fine during my stress testing and are now in use.
 
Your last sentence seems confused. Why did you jump to 4096 instead of 2048? And it is hard to believe that going from -c 2048 to -c 4096 (or from -c 4096 to -c 8192) would increase the speed by >10 MB/s, which is actually a sizable amount. Also, I suspect your speed measurements are misleading (or something is poorly configured on your system) because I can get average speed higher than 100 MB/s on my system with -b 4096 and -c 64 (default). There should be no need to go to -c 2048 to achieve optimal average speed.

Why not post the actual speeds you measured for each value of -c?

I didn't see good speeds until -b 4096 -c 4096. It is a MicroServer-related problem, not the badblocks settings. I reran the testing on host1 with an LSI2008, and stock settings gave me better performance.
Results on MicroServer:
-c value | Avg. Read (MB/s) | Avg. Write (MB/s) | Wall time (MM:SS.SS)
128 | 174.20 | 44.98 | 7:43.77
256 | 178.03 | 71.95 | 5:19.96
512 | 179.08 | 102.72 | 4:11.29
1024 | 177.53 | 130.48 | 3:37.72
2048 | 152.62 | 151.14 | 3:36.10
4096 | 157.48 | 164.17 | 3:23.97
8192 | 159.10 | 171.00 | 3:19.82
16384 | 159.57 | 174.64 | 3:16.81
32768 | 160.98 | 176.41 | 3:15.88

On host1, the default -b and -c gave a 3:11.77 wall time for the first 4 GiB. So with good hardware, stock settings are fine. Insert foot in mouth.
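For anyone who wants to reproduce the comparison, a loop along these lines works. Restricting badblocks to the first 4 GiB is done with its last-block argument (1048575 blocks at -b 4096), and -t 0xaa keeps it to a single pattern; it is still destructive, so only run it on empty drives (/dev/sdx is a placeholder):

Code:
for c in 128 256 512 1024 2048 4096 8192; do
    echo "-c $c"
    time badblocks -w -t 0xaa -b 4096 -c $c /dev/sdx 1048575
done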
 
I put a WD20EFRX into a Sandy Bridge PC (native 6G port) to test it out, and unlike every HDTune of this drive that I've seen run, mine doesn't hit 150 at the beginning but rather 125. Would anyone know why that might be? CrystalDiskMark basically agrees. It's Win8, if that matters, and the MS vs Intel controller driver doesn't make a difference. An SSD, also in the system, has no issue approaching 500 on a neighboring port.

Update: running DBAN on it now: 128000 KB/s, which is, you guessed it, 125 MB/s. Hmmm.

Another puzzler: I used Smartmontools to disable TLER. Unlike other posts that I've seen on this, mine does NOT survive a power recycle. The first pic here is right after I disabled it; the second one after I power cycled the system and checked it:
http://imgur.com/a/ifTHE
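The smartctl commands I mean are along these lines (timeouts are given in tenths of a second; /dev/sdx is a placeholder). As far as I know, the SCT ERC setting is volatile on many drives, which would explain it not surviving a power cycle:

Code:
smartctl -l scterc /dev/sdx        # show the current read/write error-recovery timeouts
smartctl -l scterc,0,0 /dev/sdx    # disable ERC (i.e. turn TLER-style behaviour off)
smartctl -l scterc,70,70 /dev/sdx  # set 7.0 second timeouts (70 tenths of a second)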
 
I put a WD20EFRX into a Sandy Bridge PC (native 6G port) to test it out, and unlike every HDTune of this drive that I've seen run, mine doesn't hit 150 at the beginning but rather 125. Would anyone know why that might be? CrystalDiskMark basically agrees. It's Win8, if that matters, and the MS vs Intel controller driver doesn't make a difference. An SSD, also in the system, has no issue approaching 500 on a neighboring port.
Mystery solved, and it's a shocker: my 2TB drive has three platters. So much for 1TB platters for all Reds, which I understand wasn't guaranteed anywhere. Question now: is it worth trying to play the lottery to get one with 1TB platters considering that this one arrived packed well and has checked out?
http://www.overclock.net/t/1340807/baffled-by-a-relatively-low-max-throughput-in-tests/0_40
 
Is there any way to determine if it is the two platter version or the three platter version?
 
Is there any way to determine if it is the two platter version or the three platter version?

Yes, by inference: if your HD Tune is maxing at 125, when you know you're on a port that allows more, it's not 1TB platters.

Direct evidence (these are the bare-drive weights for drives with 1TB platters):

1TB: 0.99 lb
2TB: 1.32 lb
3TB: 1.40 lb

If your drive weighs any more than that (e.g. if a 2TB is 1.4 lb), you don't have 1TB platters.
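You can also infer it without a scale: a sequential read right at the start of the disk tells the same story (going by the numbers earlier in the thread, roughly 150 MB/s points to 1TB platters and roughly 125 MB/s to the three-platter version). Both commands are read-only; /dev/sdx is a placeholder:

Code:
hdparm -t /dev/sdx                                         # buffered sequential read timing near the start of the disk
dd if=/dev/sdx of=/dev/null bs=1M count=1024 iflag=direct  # read the first 1 GiB, bypassing the page cache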
 
So who packages these the best? Amazon or nicx? Or is there anybody else I should buy these from?
 
So who packages these the best? Amazon or nicx? Or is there anybody else I should buy these from?

I have bought 3TB WD Red and 3TB Seagate drives from both Amazon and Newegg recently (last 2 months), and they were all well packaged in individual boxes with the plastic hard drive spacers/inserts that you see with retail boxed drives. I've run surface scans on 14 of the drives so far with no reported problems.
 
I have two 3TB Reds running on a 4-bay ReadyNAS and they work great. If I were to use this drive for a large array, say 15-20 spindles, in RAID 60, would it be a problem?
 
I have two 3TB Reds running on a 4-bay ReadyNAS and they work great. If I were to use this drive for a large array, say 15-20 spindles, in RAID 60, would it be a problem?

For a non-enterprise environment, it would work just fine.
 
For a non-enterprise environment, it would work just fine.


What do you define as enterprise?
I'm contemplating using Reds for "enterprise" mirrored file servers. However, I know my IOPS needs are low (< 50 IOPS for the entire array); I mostly need storage capacity.

My alternatives are WD RE SATA/SAS drives, but they're about double the cost of the Reds.
 
Aside from the added warranty, I doubt the RE4 drives will have more to offer.
 
What do you define as enterprise?
I'm contemplating using Reds for "enterprise" mirrored file servers. However, I know my IOPS needs are low (< 50 IOPS for the entire array); I mostly need storage capacity.

My alternatives are WD RE SATA/SAS drives, but they're about double the cost of the Reds.

Many-user environments.
These aren't true nearline-class drives, and thus, without those added features, they are not defined as enterprise drives.

These were meant for small NAS units, not SANs and large storage arrays.
I'm not saying they won't work in those environments, obviously they would, but you would be running a greater risk by using them.

If they won't be in a high-usage scenario though, and are just used for data storage, I'm sure they would be more than fine.
 
Aside from the added warranty, I doubt the RE4 drives will have more to offer.

Actually they do.
The RE4 drives are nearline-class, while these are desktop-class drives with some added nearline features.
 
Is this drive a good choice as a secondary drive for game installs (Steam, Origin, etc) or would a Blue/Black be better? Main OS drive is an SSD.
 
I just got my 4 drives from Newegg. Two of the drives' static bags were ripped, and it looks like one of the drives with a torn bag has a metal chip on the bottom. Doesn't look like they've improved their shipping.
 
I've had issues with WD's quality control recently. After the Fukushima event and the flood, their quality checks suffered. Almost 50% of the Red-label drives I bought have failed or were DOA.
 