Best Way to Buy and Test New Hard Drives?

Zarathustra[H]

Hey all,

As I am about to embark on the Great NAS Capacity Upgrade of 2017, I have noted the following typical advice:

1.) Don't buy all your drives from the same vendor at the same time. You'll likely get most drives from the same lot, meaning if there is any kind of defect you are more likely to have multiple drive failures at once, and data loss.

2.) Don't buy drives from Newegg. They don't package them well, and they are often DOA or die prematurely after a few months. Amazon does a better job.

3.) Buy more than one drive at a time, as the packaging for a multi-drive order is usually better than for a lone drive in a box.

4.) Test your drives before installation just in case.


So. Is all of the above still accurate in 2017?

If so, does the following strategy make sense for drives I can only find on Newegg and Amazon and need 12x of?

1.) Buy two at a time, once every other week for 12 weeks.

2.) Alternate each two-drive order between Amazon and Newegg.


Also, what is the best way to test them after receipt? A SMART conveyance test? Long or short?

Much obliged.
 
Can't really speak to anything there except finding a good/great price on the drives you're interested in. The only software I ever recommend for testing hard drives is the manufacturer's diagnostic tool (which pretty much every current hard drive maker still produces), and only that software. I just don't trust anything else, nothing third party, for testing the functionality and reliability of any hard drive more than the tool created by the maker of the drive itself, but that's just my own opinion.

Having said that, I will backtrack a bit and say I still trust SpinRite in some instances, but with the shift towards SSDs and NVMe hardware (yes, I know NVMe is considered SSD hardware even so), SpinRite will finally fall into the obsolete category soon enough. I still prefer physical hard drives for raw storage and SSDs for pure performance nowadays, so SpinRite can still prove useful in some situations.

As for doing the actual diagnostics, even with the manufacturer's tools: if you're looking at getting some very large capacity drives (like 8TB or larger), a full test is going to be a long, long process per drive. I would use the short test (not S.M.A.R.T.-related testing, ever, but whatever actual short diagnostic test the tool provides), and if the results are clear, then use the drive. The long diagnostic (which typically does a surface scan) on 8TB+ sized drives could take 8 hours or more to complete, so unless you're really, really paranoid, the short diagnostic - and again I say don't use any S.M.A.R.T.-based testing methods - should give the drive a clean bill of health if it's good to go.
 
4.) Test your drives before installation just in case.

I do a 4-pass badblocks run on every single hard disk drive I get (new / used / refurb), whether for work, home, or anything else. For 8TB drives the test takes several days, but still less than a week.
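For anyone who wants to replicate it, the stock destructive write-mode run is what gives you the four passes (badblocks writes and verifies the patterns 0xaa, 0x55, 0xff and 0x00 in turn). Roughly, on Linux, with /dev/sdX standing in for whatever the drive actually enumerates as - double-check the device first, since -w wipes it:

badblocks -b 4096 -wsv /dev/sdX (4 write+verify passes over the whole disk; -s shows progress, -v logs errors, -b 4096 keeps the block count sane on big drives)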
 
The only software I ever recommend for testing hard drives is the manufacturer's diagnostic tool

At work (where I have had 100s of drives and have done 75+ RMAs) I have found the manufacturer's tools suspect. Several times I have had manufacturer tools tell me a drive that was very bad was fine. As a result I don't trust them at all.
 
At work (where I have had 100s of drives and have done 75+ RMAs) I have found the manufacturer's tools suspect. Several times I have had manufacturer tools tell me a drive that was very bad was fine. As a result I don't trust them at all.

Interesting.


What do you trust?
 
... looking at the SMART raw data.

Just as a counterpoint, until I recently moved from Las Vegas I had a shoebox full of Western Digital hard drives that all had 100% clean S.M.A.R.T. status, not one issue, but not one of them could pass the manufacturer's diagnostic and not one of them actually worked in regular use. ;)

Personally I don't trust S.M.A.R.T. data, ever, so where does it go from here? :sneaky:
 
2.) Don't buy drives from Newegg. They don't package them well, and they are often DOA or die prematurely after a few months. Amazon does a better job.
FWIW, I recently bought some refurb HGST 3TBs when Newegg had them for $42 each. They were packed very nicely.

And I test with badblocks + SMART before use.
 
FWIW, I recently bought some refurb HGST 3TBs when Newegg had them for $42 each. They were packed very nicely.

And I test with badblocks + SMART before use.


I thought this might have been a historic concern, but I wasn't sure. Good to know they have improved.
 
Just as a counterpoint, until I recently moved from Las Vegas I had a shoebox full of Western Digital hard drives that all had 100% clean S.M.A.R.T. status, not one issue, but not one of them could pass the manufacturer's diagnostic and not one of them actually worked in regular use. ;)

Personally I don't trust S.M.A.R.T. data, ever, so where does it go from here? :sneaky:

Did you actually run the conveyance test, or were you just looking at existing flagged SMART data?

If you don't run a test, SMART will only report failures as they happen.

Based on my reading, the best approach is to run badblocks first with the -w flag (this destroys data, but that's not a problem on a new drive). For an 8-10TB drive this may take a few days.

It is unlikely badblocks will find errors directly, as modern drives automatically flag and remap bad blocks, but the act of running it should cause errors to appear in the SMART report if they are present.

After that, as an additional step a SMART conveyance test may be run as well.

This is quite a large time commitment for testing, but since I am buying these two drives at a time with 14 days in between, and the tests can run in parallel, it's not a huge deal.
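Roughly what I have in mind, in case it is useful to anyone else (smartmontools + badblocks on Linux, with /dev/sdX as a stand-in for the real device):

badblocks -b 4096 -wsv /dev/sdX (destructive write + read-back of the whole drive; takes days on an 8-10TB disk)
smartctl -t conveyance /dev/sdX (kicks off the conveyance self-test in the drive's own firmware)
smartctl -l selftest /dev/sdX (read the self-test log once it finishes)
smartctl -A /dev/sdX (the attributes I'll be watching: Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable, UDMA_CRC_Error_Count)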
 
I ran several tests on each using various utilities outside of an operating system environment (MHDD, Victoria, Hitachi DFT because it runs on any brand of drive, SpinRite, and of course WD's own tool, all booted from optical media or USB sticks). The drives consistently failed and would never finish any diagnostics, but the S.M.A.R.T. status for all 11 of the drives was clear. They were a mixture of drives used in a variety of machines, so it's not like it was 11 of the same drive from the same batch or anything.

Sometimes weird shit happens and that was one of the weirdest in my long experience of working with computer hardware.
 
I ran several tests on each using various utilities outside of an operating system environment (MHDD, Victoria, Hitachi DFT because it runs on any brand of drive, SpinRite, and of course WD's own tool, all booted from optical media or USB sticks). The drives consistently failed and would never finish any diagnostics, but the S.M.A.R.T. status for all 11 of the drives was clear. They were a mixture of drives used in a variety of machines, so it's not like it was 11 of the same drive from the same batch or anything.

Sometimes weird shit happens and that was one of the weirdest in my long experience of working with computer hardware.

That IS odd.
 
Well, I've started ordering Seagate Enterprise 10TB Helium drives (ST10000NM0016).

I'm definitely going to look at Badblocks and SMART data on them when I get them. Anyone know what the manufacturer tool for these is?

I presume it is one of these, but there is very little description.
 
I recommend not getting all your drives from the same batch. I bought 5 2TB drives for a 5-bay NAS in 2012, all from Seagate. This was back when hard drive prices were crazy high due to the flooding in... Thailand? Anyway, 4 out of the 5 have died. Fortunately, no two failed at exactly the same time, but they were not being used in a high-stress way at all.
 
I recommend not getting all your drives from the same batch. I bought 5 2TB drives for a 5-bay NAS in 2012, all from Seagate. This was back when hard drive prices were crazy high due to the flooding in... Thailand? Anyway, 4 out of the 5 have died. Fortunately, no two failed at exactly the same time, but they were not being used in a high-stress way at all.


Yeah, this is the goal.

It's not always easy to know exactly which serial numbers/batches you are going to get though.

For this reason (and for the reason of incurring less instantaneous financial pain) I am buying two at a time once every two weeks on payday.

Getting all 12 will take 12 weeks this way. Hopefully other customers will buy enough in between to spread them out over many lots.

I'm also alternating back and forth between Newegg and Amazon for my buys on every other paycheck, which hopefully will further diversify them.

It's too bad you can't call someone up and just order 12 drives, all from different lots :p
 
SeaTools is Seagate's hard drive diagnostic software:

https://www.seagate.com/support/downloads/seatools/


Thanks,

How unfortunate. No native Linux version.

edit:

I take that back, there is an Enterprise Linux CLI version under "Legacy Tools", but it says it does not test ATA or SATA drives, so presumably only SCSI and SAS?

Either way, not going to work for me. Guess it's going to have to be a combination of Badblocks and SMART.
 
Newegg did used to package HDDs poorly, and I personally had a few returns because of that (which, to their credit, both Newegg and WD handled very well).

That has changed in the past couple of years - now Newegg is at least comparable to, and arguably better than, Amazon at packaging OEM HDDs. That being said, prices are typically comparable; I'd go either way if I could save a few bucks on identical SKUs.

There is a slight risk of a class/lot-wide defect, but HDD tech is mature enough now that if you are testing the drives after arrival, and you have warranty coverage for a few years following, then even if there were a class-wide problem you wouldn't lose all the drives at the same time and could handle a failure gracefully. The most recent occasion I can think of where even basic RAID redundancy would be insufficient to protect against systemic drive failure was the IBM DeskStar 75GXP, and that was nearly 20 years ago now.

If you're buying 12 drives total, two drives every couple of weeks, then by the time you actually get your NAS up and running, your first couple of lots of drives are going to be outside of retail return and into manufacturer's RMA (which is typically a much longer return process). You did test each drive independently, but until you have all your drives in and spinning, you aren't testing the entire system. You're also making the assumption that 2 drives is enough to get the better packaging, which in my experience isn't true until you are getting a "case" of drives, which can be anywhere from 4 to 20.

Now, you could cut that down significantly: 3 from Amazon, 3 from Newegg, within the same week, spread out over 2 weeks total. I still think that may be an unnecessary precaution, but it at least gets you all the hardware in hand for testing while it's all within its 30-day easy return window.
 
If you're buying 12 drives total, two drives every couple of weeks, then by the time you actually get your NAS up and running, your first couple of lots of drives are going to be outside of retail return and into manufacturer's RMA (which is typically a much longer return process). You did test each drive independently, but until you have all your drives in and spinning, you aren't testing the entire system. You're also making the assumption that 2 drives is enough to get the better packaging, which in my experience isn't true until you are getting a "case" of drives, which can be anywhere from 4 to 20.

Now, you could cut that down significantly: 3 from Amazon, 3 from Newegg, within the same week, spread out over 2 weeks total. I still think that may be an unnecessary precaution, but it at least gets you all the hardware in hand for testing while it's all within its 30-day easy return window.


This is good advice I hadn't thought of.

It shouldn't make a difference for me though, as my plan is to perform a ZFS "grow in place" upgrade.

As each new 10TB Seagate Enterprise drive finishes badblocks and SMART conveyance testing, it will immediately replace a 4TB WD Red currently in the running NAS server.

The vdev will resilver and continue running with mismatched drives until the last 10TB Seagate gets swapped in, at which point the entire storage pool grows to its new size.

So, upon completion of testing, each new drive will see immediate use regardless of the fact that my purchasing is spread out over 12 weeks.
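For anyone curious, the swap itself is just one replace per disk, something like this (pool name and disk names are made up for illustration, adjust for your own system):

zpool set autoexpand=on tank (so the pool can grow once the last small disk is gone)
zpool replace tank ada3 ada9 (old 4TB out, new 10TB in; this kicks off the resilver)
zpool status tank (watch the resilver; only swap the next disk after it completes)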
 
For my HDD burn-ins, I run these (my NAS box is FreeNAS, so I actually grabbed this info from their forums; I'm only shamelessly pasting the commands here):

smartctl -t short /dev/adaX
smartctl -t conveyance /dev/adaX (didn't work for me, YMMV)
smartctl -t long /dev/adaX (to catch any errors out of the gate)

then
sysctl kern.geom.debugflags=0x10 (to perform the raw IO)
badblocks -b 4096 -ws /dev/adaX (specifying block size for drive-greater-than-2tb issues)
badblocks -b 4096 -ns /dev/adaX

then
smartctl -t long /dev/adaX (to find any errors that popped up with the previous testing)
smartctl -A /dev/adaX (results!)

I used tmux to run the tests on all 6 drives concurrently, and it took several days to finish. When I was done, however, I was fairly confident that the drives wouldn't give me an issue within the first month or so. ;)
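One gotcha if you script this: smartctl -t just kicks the test off inside the drive and returns right away, so you have to wait for each self-test to finish before starting the next one. A crude poll that should work (untested beyond my own boxes):

while smartctl -a /dev/adaX | grep -q "in progress"; do sleep 600; done (loop until the drive reports the self-test is no longer running)
smartctl -l selftest /dev/adaX (then check the self-test log for the verdict)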
 
Failure is more statistical than testable. So an initial test-OK may be statistically insignificant.

I'd do dd & cmp a 500GB image and listen for excessive vibration and weird "clonk" & parking sounds.
 
Failure is more statistical than testable. So an initial test-OK may be statistically insignificant.

I'd do dd & cmp a 500GB image and listen for excessive vibration and weird "clonk" & parking sounds.

Yes and no.

The initial testing has a lot of value in determining if there are any special causes for failure, most of which stem from shipping conditions, as the drives undergo a good amount of testing before they leave the factory.

Once we have determined that there are no special causes for failure, then we return to the more statistical bathtub curve model of reliability.
 
So I received the first two drives yesterday.

At first glance some of the RAW SMART values for the drives looked horrendous, but then I found out this is just how Seagate does things, and the RAW values aren't necessarily indicative of anything. After completing all the tests I am going to have to read the disk status using SeaTools to make sure I get the right info. It's a shame that they have strayed away from SMART as the industry standard, instead requiring their own tools. :(

I've run short and conveyance SMART tests. The drive goes to sleep before a long test completes, aborting it, though. I'm going to have to write some sort of script to ping the drive every few minutes to make sure that doesn't happen.
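Something like this dumb loop ought to do it - read one random-ish sector every couple of minutes so the drive never sits idle long enough to spin down (read-only, so it won't disturb the self-test; /dev/sdX is a placeholder, and it assumes bash for $RANDOM):

while true; do dd if=/dev/sdX of=/dev/null bs=512 count=1 skip=$((RANDOM * 32)) 2>/dev/null; sleep 120; done

Alternatively, if the USB bridge passes the command through, hdparm -S 0 /dev/sdX should disable the standby timer entirely.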

Running through badblocks right now. Total runtime for a 10TB drive appears to be about 11 hours.

One thing that stood out to me about these 7200rpm helium drives is how amazingly quiet they are. They are much quieter than my 5400rpm WD Reds and old WD Greens.
 
Running through badblocks right now. Total runtime for a 10TB drive appears to be about 11 hours.

I'll have to take this back.

Looks like 11 hours was just for the writes, then it starts a separate read and compare action.

The read and compare is 73.5% done with 23h 45min on the clock.
 
Ignore all this bad advice. The Best way to Buy and Test new Hard Drives is through Best Buy and Geek Squad. Duh!

As Tiberian said, use SeaTools DOS.

https://www.seagate.com/files/www-c...tools/_shared/downloads/SeaToolsDOS223ALL.ISO

http://tech.chandrahasa.com/2013/12/22/how-to-create-a-bootable-seatools-usb-drive/

With DOS method you will need to set BIOS storage controller mode from AHCI to ATA (IDE/SATA).

The alternative is use SeaTools for Windows.

https://www.sevenforums.com/tutorials/313457-seatools-dos-windows-how-use.html
 
Ignore all this bad advice. The Best way to Buy and Test new Hard Drives is through Best Buy and Geek Squad. Duh!

As Tiberian said, use SeaTools DOS.

https://www.seagate.com/files/www-c...tools/_shared/downloads/SeaToolsDOS223ALL.ISO

http://tech.chandrahasa.com/2013/12/22/how-to-create-a-bootable-seatools-usb-drive/

With DOS method you will need to set BIOS storage controller mode from AHCI to ATA (IDE/SATA).

The alternative is use SeaTools for Windows.

https://www.sevenforums.com/tutorials/313457-seatools-dos-windows-how-use.html


Yeah, I'm having some SAS controller compatibility issues with my spare server I was planning on doing these tests in, so right now I'm using an Asus Chrome Box, hacked to run Linux, and an external USB enclosure for my testing.

Not sure if the DOS version would actually recognize the USB enclosure, so I may have to run it over Windows on my gaming desktop.

Rather than run the actual tests in SeaTools, I think I am going to run these Linux-based tests (badblocks, and SMART tests via smartmontools) and then just read the drive health statistics after they conclude using SeaTools.

That ought to do the trick, I think.
 
Yeah, I'm having some SAS controller compatibility issues with my spare server I was planning on doing these tests in, so right now I'm using an Asus Chrome Box, hacked to run Linux, and an external USB enclosure for my testing.

Not sure if the DOS version would actually recognize the USB enclosure, so I may have to run it over Windows on my gaming desktop.

Rather than run the actual tests in SeaTools, I think I am going to run these Linux-based tests (badblocks, and SMART tests via smartmontools) and then just read the drive health statistics after they conclude using SeaTools.

That ought to do the trick, I think.
You really should have the drive connected directly to SATA. I doubt the DOS tool will see a USB-connected drive, but more importantly you are testing 10TB drives. Even if the USB enclosure is detected, imagine how much longer it will take to complete each test! There has to be some sort of conversion overhead slowing the process down, at least for the writes.
 
You really should have the drive connected directly to SATA. I doubt the DOS tool will see a USB-connected drive, but more importantly you are testing 10TB drives. Even if the USB enclosure is detected, imagine how much longer it will take to complete each test! There has to be some sort of conversion overhead slowing the process down, at least for the writes.

Believe me, I am in agreement (though it is a USB 3.0-compatible dock, so it's not as bad as it could be). Doing a full write cycle covering the entire 10TB drive in about 12 hours corresponds to an average write speed of 231MB/s, which doesn't seem that terrible.

I wonder if since it is just rewriting the same 4k block pattern across the entire disk, that pattern just sits in the drive cache, and there isn't much traffic across the interface at all.

I'm sure the performance would be better over native SATA, but that's kind of moot. I don't have any extra SATA or SAS connectors in my server, and it is "home production" so I'm not going to take it down for the test. My spare server is having the SAS controller issue so I can't use it, and I don't want to tie up my desktop with this testing, so it's either the USB dock or nothing until I get the SAS issue solved.

I'm working on it though. Trying to re-flash firmware.
 
Finally got all of my new drives tested and installed.

I love having plenty of storage. Hopefully this will last me another 5 years!

[attached screenshot: upload_2017-12-29_16-22-19.png]
 
How are your sync writes? I'm looking to do something similar as an NFS share for ESXI.

Sync writes could be better, but that's in part because I probably bought the wrong log devices (ZIL/SLOG) back in 2014 when I first set this up. My SLOG devices are probably the next thing I am going to try to upgrade.

I'm not sure how familiar you are with the workings of ZFS, so first let's just briefly go into what a SLOG device does.

The ZIL (ZFS Intent Log) exists in all ZFS pools even if you don't have a SLOG (Separate Log Device); if no SLOG is present, it is just mixed into the pool.

The ZIL is where sync writes are written so that the pool can report to the host that all writes have been committed to a non-volatile device. This is only temporary though; the data written to the ZIL also stays in RAM and is committed to the regular pool in the next write cycle. During regular operation, the ZIL is never read from, only written to, and once the data is committed to the pool it is purged. The only time the ZIL is read from is during reboot after a failure has occurred, before the data in the ZIL has been committed to the pool. At that point the ZIL data is replayed and committed to the pool.

A SLOG is a separate device for the ZIL used to speed up sync writes over what the pool can normally accomplish. Because it is purged after every write cycle (usually one second) only a very small amount of data is ever used. It does - however - need to have very low latency writes. Due to the drive being key in saving data if disaster strikes, you also want it to be either battery or capacitor backed so that any data in its cache can be committed to the drive in case of power failure.

At the time it was recommended to mirror SLOG devices, as a SLOG failure could result in a corrupt pool. In more recent pool revisions this is no longer the case, and a failed or corrupt SLOG device will only result in the loss of the last second of writes, but if that last second of writes has the potential to be very important to you (like with VMs or databases) it is still a good idea to do so.
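For reference, adding a mirrored SLOG to an existing pool is a one-liner, and log vdevs can be removed again later, so it's easy to experiment. Pool and device names below are made up:

zpool add tank log mirror ada8 ada9 (attaches the pair as a mirrored log vdev)
zpool remove tank mirror-1 (removes it again; use whatever name zpool status shows for the log vdev)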

Even very fast consumer SSD's with high write speeds usually perform VERY poorly as log devices, as it is the latency that matters more than the sequential write speeds.

At the time I set up my pool in 2014, everyone recommended the Intel S3700 as the best affordable log device. It is capacitor backed, has fairly low latency writes, and does pretty well in this function. I made a mistake and got the 100GB models though. I assumed that since I'd only need a very small amount of disk space, getting a larger drive would be a waste, failing to take into account that larger SSDs have more parallelism and thus tend to be faster.

Because of this, I only get ~105-110 MB/s sync writes. Not bad at all compared to a consumer SSD or no SLOG device at all, but still not a high-performance solution. If I had gotten the larger S3700 drives, I could probably have gotten 200-250MB/s sync writes.
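If you want to put a number on your own pool, one quick-and-dirty way to ballpark sync writes is an fio run with sync enabled against a dataset on the pool (path and sizes here are just examples, and compression can skew the result):

fio --name=synctest --filename=/tank/vms/fio.tmp --rw=write --bs=128k --size=4g --sync=1 (sequential 128k sync writes; delete fio.tmp afterwards)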

At the time in 2014, the absolute best SLOG device money could buy was a ZeusRAM 8GB battery-backed RAM device. Those could get you 400-500MB/s sync writes (or more with MPIO SAS). You can still find them on eBay for ~$300. Since NVMe has come around, though, there may be faster solutions for potentially less money, but I have always struggled with these philosophically. I don't want to buy two 400, 800 or 1200GB PCIe SSDs I'm only ever going to use less than 10GB of. It seems like such a waste.

A problem with SLOG devices is that those of us who use them are a relatively small bunch, so there isn't a whole lot in the way of good reviews of them out there. Often all you get are small blog posts like this with old, out-of-date information, or pages like this with low-detail, insufficient information. So you kind of have to buy these expensive drives and experiment for yourself, which kind of stinks.

Intel's new Optane PCIe drives ought to do excellently in this regard, but they are VERY expensive :(

Anyway, I hope this helps.
 
How are your sync writes? I'm looking to do something similar as an NFS share for ESXI.

Sync writes could be better, but that's in part because I probably bought the wrong log devices (ZIL/SLOG) back in 2014 when I first set this up. My SLOG devices are probably the next thing I am going to try to upgrade.

I'm not sure how familiar you are with the workings of ZFS, so first let's just briefly go into what a SLOG device does.

The ZIL (ZFS Intent Log) exists in all ZFS pools even if you don't have a SLOG (Separate Log Device); if no SLOG is present, it is just mixed into the pool.

The ZIL is where sync writes are written so that the pool can report to the host that all writes have been committed to a non-volatile device. This is only temporary though; the data written to the ZIL also stays in RAM and is committed to the regular pool in the next write cycle. During regular operation, the ZIL is never read from, only written to, and once the data is committed to the pool it is purged. The only time the ZIL is read from is during reboot after a failure has occurred, before the data in the ZIL has been committed to the pool. At that point the ZIL data is replayed and committed to the pool.

A SLOG is a separate device for the ZIL used to speed up sync writes over what the pool can normally accomplish. Because it is purged after every write cycle (usually one second) only a very small amount of data is ever used. It does - however - need to have very low latency writes. Due to the drive being key in saving data if disaster strikes, you also want it to be either battery or capacitor backed so that any data in its cache can be committed to the drive in case of power failure.

At the time it was recommended to mirror SLOG devices, as a SLOG failure could result in a corrupt pool. In more recent pool revisions this is no longer the case, and a failed or corrupt SLOG device will only result in the loss of the last second of writes, but if that last second of writes has the potential to be very important to you (like with VMs or databases) it is still a good idea to do so.

Even very fast consumer SSD's with high write speeds usually perform VERY poorly as log devices, as it is the latency that matters more than the sequential write speeds.

At the time I set up my pool in 2014, everyone recommended the Intel S3700 as the best affordable log device. It is capacitor backed, has fairly low latency writes, and does pretty well in this function. I made a mistake and got the 100GB models though. I assumed that since I'd only need a very small amount of disk space, getting a larger drive would be a waste, failing to take into account that larger SSDs have more parallelism and thus tend to be faster.

Because of this, I only get ~105-110 MB/s sync writes. Not bad at all compared to a consumer SSD or no SLOG device at all, but still not a high-performance solution. If I had gotten the larger S3700 drives, I could probably have gotten 200-250MB/s sync writes.

At the time in 2014, the absolute best SLOG device money could buy was a ZeusRAM 8GB battery-backed RAM device. Those could get you 400-500MB/s sync writes (or more with MPIO SAS). You can still find them on eBay for ~$300. Since NVMe has come around, though, there may be faster solutions for potentially less money, but I have always struggled with these philosophically. I don't want to buy two 400, 800 or 1200GB PCIe SSDs I'm only ever going to use less than 10GB of. It seems like such a waste.

A problem with SLOG devices is that those of us who use them are a relatively small bunch, so there isn't a whole lot in the way of good reviews of them out there. Often all you get are small blog posts like this with old, out-of-date information, or pages like this with low-detail, insufficient information. So you kind of have to buy these expensive drives and experiment for yourself, which kind of stinks.

Intel's new Optane PCIe drives ought to do excellently in this regard, but they are VERY expensive :(

Anyway, I hope this helps.


According to this, Intel's 900P Optane drives seem like they outperform anything pre-Optane (though not as well as the DC P4800X) for ZIL purposes, yet at a price point that is conceivably affordable for home use.

I just spent over 4 grand on hard drives, so I am going to hold off for a bit, but when I work up an appetite for my next storage upgrade, maybe a pair of 900P's for the SLOG will be in the cards.

Thing is, I have an older server with Gen2 PCIe slots, so I wonder how that will impact the performance...
 
For what it is worth async writes are absolutely astonishing.

I just dropped a 50GB VirtualBox image from my desktop to one of my sync=disabled folders.

Via 10G Ethernet I averaged write speeds of ~900MB/s.

This isn't just some super-compressible new image full of zeroes either. It's one set to dynamically allocate more storage as needed, so the entire contents are actual data.

I would have taken a screenshot, but I was too busy scraping my jaw up off of the floor.

Now I can't repeat the test, as that file is the only really large one I have handy on my desktop, and now it is already in cache, which would artificially inflate the results.
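For anyone who wants to replicate the sync=disabled setup, it's just a per-dataset property (dataset name made up):

zfs set sync=disabled tank/scratch (treats every write as async for that dataset; you accept losing the last few seconds of writes on a crash or power loss)
zfs get sync tank/scratch (verify the setting)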
 