Assembling a high-capacity NAS

One important thing that is not too obvious about those external HDDs is how you open them.

I have a small WD My Passport, and I can't figure out how to open it.
There ya go:

I've shucked 12 of the 8TB drives and they've been running in my QNAP NAS for over a year with no issues; I saved myself around $1,200 doing it.
I waited till the sales and got all of them for $130-$140/ea.

The only thing to watch for is that most of the WD Elements/Easystores now contain white-label drives that can have the 3.3V pin issue, depending on your power supply.
That can be fixed with Kapton tape (withstands high heat, electrically non-conductive, leaves no residue) over the 3.3V pins, or with a Molex-to-SATA adapter:
 
How do I find out which external WD HDDs have a Red-label drive inside?

It's not mentioned on Amazon, probably on purpose.
 
How do I find out which external WD HDDs have a Red-label drive inside?

It's not mentioned on Amazon, probably on purpose.

You can't tell without opening them up and plugging them in (you can use CrystalDiskInfo over USB); 99% of them have the white labels nowadays.

If you need guaranteed Red-label drives, you have to pay the premium or buy used.
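
For what it's worth, on Linux the same check can be done with smartmontools instead of CrystalDiskInfo. A minimal sketch, assuming smartctl is installed and the enclosure shows up at a hypothetical /dev/sdb (the model strings in the comments are illustrative examples, not a complete list):

```python
import subprocess

# Hypothetical device path; a USB enclosure usually appears as /dev/sdX on Linux.
DEVICE = "/dev/sdb"

# "-d sat" talks through the USB-to-SATA bridge; "-i" prints the identity
# block (model, serial, capacity) without a full SMART dump.
result = subprocess.run(
    ["smartctl", "-d", "sat", "-i", DEVICE],
    capture_output=True, text=True, check=False,
)

# White labels report models like "WDC WD80EMAZ", while retail Reds
# report models like "WDC WD80EFAX" (examples only).
for line in result.stdout.splitlines():
    if line.startswith(("Device Model", "Model Family")):
        print(line)
```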
 
And if you're assembling a home NAS, you don't need Red labels. The white labels apparently have the same 'tuning', which is something you can check and potentially even set yourself. Realistically, you mostly just want to avoid a Green or Purple drive whose tuning suits a different purpose, and even those still work, just suboptimally, when Reds/whites shucked from enclosures are cheaper.

If you want performance drives, get Seagate IronWolfs.
 
Why do the IronWolfs still show fewer "stars" than the WD Reds, apparently from people unsatisfied with them?
 
And if you're assembling a home NAS, you don't need Red labels. The white labels apparently have the same 'tuning', which is something you can check and potentially even set yourself. Realistically, you mostly just want to avoid a Green or Purple drive whose tuning suits a different purpose, and even those still work, just suboptimally, when Reds/whites shucked from enclosures are cheaper.

If you want performance drives, get Seagate IronWolfs.

The thing you want is the TLER that Red drives have, which is suited for RAID-style redundant arrays. Essentially, when a standalone drive encounters a read error, it will try and try again for an extended period to read the bad sector and save the data.

Not only is this not necessary in a redundant array, but it can also cause problems, like the drive being dropped from the array.

TLER solves this by limiting the time spent trying to read a bad sector to a few seconds, then giving up and letting the redundancy supply the data instead.

This is why TLER-type drives (like the Reds) are very well suited to redundant configurations, but actually worse when used as standalone drives.
 
The thing you want is the TLER that Red drives have, which is suited for RAID-style redundant arrays. Essentially, when a standalone drive encounters a read error, it will try and try again for an extended period to read the bad sector and save the data.

Not only is this not necessary in a redundant array, but it can also cause problems, like the drive being dropped from the array.

TLER solves this by limiting the time spent trying to read a bad sector to a few seconds, then giving up and letting the redundancy supply the data instead.

This is why TLER-type drives (like the Reds) are very well suited to redundant configurations, but actually worse when used as standalone drives.

As a side note, the white-label drives have the same TLER as the Reds (hence their value for shucking).
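
For reference, TLER is what smartmontools calls SCT Error Recovery Control, so you can check (and on drives that allow it, set) the timeout yourself. A rough sketch, assuming a Linux box with smartmontools and a hypothetical /dev/sdb:

```python
import subprocess

DEVICE = "/dev/sdb"  # hypothetical device path

# Query SCT Error Recovery Control (smartmontools' name for TLER).
# Reds and the shucked white labels should report a short fixed timeout;
# desktop drives often report it as disabled or unsupported.
subprocess.run(["smartctl", "-l", "scterc", DEVICE], check=False)

# On drives that permit it, set the timeout explicitly. Values are in
# tenths of a second, so 70,70 = 7s read / 7s write. On many drives this
# does not survive a power cycle, so it has to be reapplied at boot.
subprocess.run(["smartctl", "-l", "scterc,70,70", DEVICE], check=False)
```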
 
The only reason I decided against shucking when I last upgraded all my drives was Warranty.

I'm not sure how they would handle the warranty of drives that have been taken out of the case, and even if they do honor the warranty, it is much shorter than for a drive sold as an enterprise or prosumer NAS drive.

The 5 year warranty on my 10TB Seagate Enterprise drives is pretty nice. All 12 of them have been perfect since December 2017 (knock on wood)

Before them I had 12 WD Red 4TB drives. They were perfect for the first couple of years too. Then I lost a handful and replaced them under warranty. Time will tell how the Seagates perform, but thus far it feels like these are not the same as the bad old Seagates, which were highly failure prone.

WD's RMA process was fairly straightforward. I hope Seagate's is as good.
 
The only reason I decided against shucking when I last upgraded all my drives was Warranty.

I'm not sure how they would handle the warranty of drives that have been taken out of the case, and even if they do honor the warranty, it is much shorter than for a drive sold as an enterprise or prosumer NAS drive.

The 5 year warranty on my 10TB Seagate Enterprise drives is pretty nice. All 12 of them have been perfect since December 2017 (knock on wood)

Before them I had 12 WD Red 4TB drives. They were perfect for the first couple of years too. Then I lost a handful and replaced them under warranty. Time will tell how the Seagates perform, but thus far it feels like these are not the same as the bad old Seagates, which were highly failure prone.

WD's RMA process was fairly straightforward. I hope Seagate's is as good.

The warranty difference vs the comparable WD Reds is only 1 year (3 years for the bare drives, 2 for the USB drives).
None of the pieces of the enclosure show signs of tampering when shucking (if done correctly), so if you keep the shells you can just put one back together and return the drive in its enclosure; it only costs the space to store the plastic.

Shucking costs almost half what the bare drives do, which is well worth 1 year less of warranty to me.
 
Unfortunately, the price difference between a boxed WD 10TB external and a bare 10TB Red is now quite small, probably on purpose.

Also, very few externals now carry Red-label drives, only white-label.

BTW: I won't assemble a RAID NAS. Doubling the quantity of HDDs is too expensive.
 
BTW: I won't assemble a RAID NAS. Doubling the quantity of HDDs is too expensive.
That's only true at small scale and/or with RAID 1, and you don't need Red/white-label quality drives for RAID 1.
Additionally, if you choose RAID 6, which is the current standard (or RAID 5 if the data isn't super important), typical arrays are 8-12 disks, and at that size you lose less than 1/4 of your drives to redundancy (see the sketch below).
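
To put numbers on that: RAID 6 always spends two disks' worth of space on parity, so the fraction lost shrinks as the array grows. A quick sketch:

```python
# Fraction of raw capacity lost to parity in a RAID 6 array, which
# always dedicates two disks' worth of space to redundancy.
def raid6_overhead(num_disks: int) -> float:
    assert num_disks >= 4, "RAID 6 needs at least 4 disks"
    return 2 / num_disks

for n in (4, 8, 12):
    print(f"{n} disks: {raid6_overhead(n):.0%} lost to parity")
# 4 disks: 50% lost to parity
# 8 disks: 25% lost to parity
# 12 disks: 17% lost to parity
```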
 
Unfortunately, the price difference between a boxed WD 10TB external and a bare 10TB Red is now quite small, probably on purpose.

Also, very few externals now carry Red-label drives, only white-label.

BTW: I won't assemble a RAID NAS. Doubling the quantity of HDDs is too expensive.

Well, best practice is to both have redundancy locally AND back up offsite, if you care about your data at all.

Just a bunch of independent drives is generally a bad idea. When you are dealing with that much data and that many drives, a failure at some point is pretty much a guarantee.

With my 12 WD Reds, none of them died outright, but over 4 years, 4 of them started developing slight read errors, and I consider that a pretty good result. If I had not had some sort of redundancy built in, this silent corruption would have gone unnoticed until it was time to use the affected file, and it would not have worked.

If you are doing things like backing up Blu-rays, you could, 2 years from now, go to watch a movie or restore a drive image, get 2/3 of the way through it, and then have it fail.

It's just not worth it to skip redundancy, even if you don't care about the data. It's too much of an inconvenience WHEN things get corrupted. (Not if; it WILL happen when you are dealing with this much storage capacity.)
 
Please forgive my total ignorance in all these RAID matters.

From what I knew, RAID was important when you were editing video, to be safe and not lose anything. That is not the case here.

My idea is to have a 3 x 10TB NAS, which may eventually grow, and I will use a dedicated PC as my server, holding the three HDDs.

Which brand and model would be better for that, then: WD Red, WD white-label, or the Seagates (can't remember the model)? All 10TB, of course.
 
But I do think having an extra 10TB HDD for emergency backup might be a good thing.

Now I check my HDDs periodically, and the times I had problems there was a warning beforehand, so I could back things up.

But there was one HDD where I didn't know there was a problem, and when it crashed I learned, through several YouTube videos, that that 3TB Seagate model had a heads problem.

Now I would have to replace the heads, and for that you need another identical HDD as a donor.
 
RAID or some sort of drive pool software with redundancy is needed so you don't lose data when a drive goes bad.

I would argue that if you care so little about the data that you require no redundancy, then just delete it now and save yourself the trouble of doing the project lol.
 
As far as the best drive goes, it all comes down to what you are willing to spend. Reds are a good middle ground, shucked whites are cheap, Golds (now HGST Ultrastar DC) are better but much more expensive. Spend more $, get more warranty and in some cases performance.
 
But I do think having an extra 10TB HDD for emergency backup might be a good thing.

Now I check my HDDs periodically, and the times I had problems there was a warning beforehand, so I could back things up.

But there was one HDD where I didn't know there was a problem, and when it crashed I learned, through several YouTube videos, that that 3TB Seagate model had a heads problem.

Now I would have to replace the heads, and for that you need another identical HDD as a donor.

Random read and write errors happen all the time. No data is safe without redundancy, especially when the drives get large like this.

It's not just for video editing. It would be a very bad idea to build a storage server without some sort of redundancy.

It doesn't need to be a hardware RAID card; there are many good software solutions (in fact I'd argue that software solutions like ZFS are vastly superior to hardware RAID), but you do need something, and yes, this will result in you spending more on drives for redundancy. There is no way around that.


Even for trivial entertainment purposes like a movie collection, do you really want to get 3/4 of the way through a movie just to hit a read error?

Bare drives without redundancy are a really bad idea. Even single-drive redundancy (RAID 5, or RAIDz on ZFS) is considered a bad idea these days, as with the size of modern drives you are almost guaranteed a read error during a rebuild.

The starting point today is two redundant drives (RAID 6, or RAIDz2 on ZFS).

Different systems vary, and usually you cannot add additional drives to a group once it's built without destroying data, so you'll want to plan ahead.

For instance, with ZFS, let's say you start with one RAIDz2 vdev of six disks. You can't expand that to 8 disks later. You can, however, add a second 6-disk RAIDz2 vdev and pool the two.
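
As a concrete illustration of that rule, here is roughly what the pool commands look like, wrapped in Python; the pool name and device paths are made up, so substitute your own:

```python
import subprocess

# Hypothetical FreeBSD-style device names; substitute your own disks.
FIRST_SIX = [f"/dev/ada{i}" for i in range(6)]
SECOND_SIX = [f"/dev/ada{i}" for i in range(6, 12)]

# Create the pool with one 6-disk RAIDz2 vdev...
subprocess.run(["zpool", "create", "tank", "raidz2", *FIRST_SIX], check=True)

# ...and later grow it by striping a SECOND 6-disk RAIDz2 vdev alongside.
# You cannot widen the existing vdev from 6 to 8 disks.
subprocess.run(["zpool", "add", "tank", "raidz2", *SECOND_SIX], check=True)
```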
 
As far as the best drive goes, it all comes down to what you are willing to spend. Reds are a good middle ground, shucked whites are cheap, Golds (now HGST Ultrastar DC) are better but much more expensive. Spend more $, get more warranty and in some cases performance.


Something to keep in mind: as much as not going with redundancy is ill advised, if you are stubborn and decide to do so anyway against recommendations, DO NOT use Reds or any other NAS, server, or enterprise drive, as these use TLER, which is good for redundant setups but can increase your risk of data loss on a standalone drive.

Use desktop drives. WD Blacks maybe? Do they still sell those?

But I cannot repeat this enough, please use redundancy or abandon the project. A storage server with non-redundant drives is a terrible idea.
 
For low drive counts, I'd use ZFS and simply start with mirrored pairs. The bonus is that this produces the best random read performance.

The downside is that you lose 50% of your capacity to redundancy. It is more flexible though, as you can just keep adding to your pool two mirrored disks at a time when you need it.

Otherwise I'd go with a RAIDz2 (or RAID 6) setup with six disks. You'll lose a third of the capacity to redundancy and still have good redundancy in place for when (not if) something goes wrong. To get that same 30TB capacity you want, you'd need 6x 8TB drives. This is going to cost you more, no doubt, but at least each individual drive will be cheaper (see the sketch below).
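
A quick back-of-the-envelope check of those drive counts, ignoring filesystem overhead and TB/TiB differences:

```python
import math

TARGET_TB = 30  # desired usable capacity
DRIVE_TB = 8    # drive size

# Mirrored pairs: only one drive per pair is usable.
pairs = math.ceil(TARGET_TB / DRIVE_TB)
print(f"mirrors: {2 * pairs} drives for {pairs * DRIVE_TB} TB usable")
# mirrors: 8 drives for 32 TB usable

# 6-disk RAIDz2: two drives' worth goes to parity, four are usable.
print(f"raidz2:  6 drives for {4 * DRIVE_TB} TB usable")
# raidz2:  6 drives for 32 TB usable
```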

Of course, I have no idea what OS you had in mind. ZFS precludes Windows. Don't let that scare you, though. The FreeNAS appliance OS is a good place to start. It has an easy-to-understand web management interface. Just install the OS, use the on-screen interface to configure your IP address, and then use the web GUI to set up your storage pools and manage shares. Easy peasy.

Other things to keep in mind hardware-wise: you definitely want Intel NICs. Don't use anything Realtek for a server. Just don't. It's not worth it.
 
What are Intel NICs? What is ZFS?

FreeNAS is the appliance OS I was going to use.

My main desktop computer runs on Windows 7-64, and I would prefer to stay with it instead of W10.

Until now my videotheque was on DVD-DL, using 8GB MKV files down-converted from BD and other sources. Even my DVDs were converted to MKV when not available on BD.

But I thought an HDD server would be a step forward in quality, as I would be able to use larger files.

During the past years I have lost more files to defective HDDs than to defective DVD media. And I got interested in media servers with the arrival of larger-size SSDs.

So I see the HDDs more as an in-between stage until SSD prices come down. That's why redundancy is not something I'm very interested in.
 
What are Intel NICs? What is ZFS?

FreeNAS is the appliance OS I was going to use.

My main desktop computer runs on Windows 7-64, and I would prefer to stay with it instead of W10.

Until now my videotheque was on DVD-DL, using 8GB MKV files down-converted from BD and other sources. Even my DVDs were converted to MKV when not available on BD.

But I thought an HDD server would be a step forward in quality, as I would be able to use larger files.

During the past years I have lost more files to defective HDDs than to defective DVD media. And I got interested in media servers with the arrival of larger-size SSDs.

So I see the HDDs more as an in-between stage until SSD prices come down. That's why redundancy is not something I'm very interested in.

ZFS is a file system and software RAID system all in one, originally developed by Sun Microsystems for Solaris back in the day, but since open sourced and ported to other *nix operating systems (Linux, BSD, etc.). FreeNAS is an embedded version of FreeBSD with an easy-to-use interface specifically for NAS purposes.

If you are using FreeNAS then you are automatically using ZFS, because that is what it is based on.

A NIC is a Network Interface Card; essentially, your network card (or, often, onboard Ethernet these days). You want to make sure you use Intel Ethernet chips, because the Realtek ones, while OK for desktop purposes, really fall on their ass in server applications. They just aren't reliable enough.

Regardless of what type of drive you use (HDD, SSD, etc.) you want redundancy. You see, you are not only concerned with total disk failures where the drive goes unresponsive. You are concerned with so-called UREs, or Unrecoverable Read Errors, where a single bit can't be read. Even the top-end drives on the market (as mentioned above, the HGST Ultrastar DC series) specify 1 URE per 10^15 bits read. These drives now come in 14TB sizes. That's 1.12*10^14 bits, so every time you read through the drive from end to end you are about 11.2% likely to have a URE. A URE will ruin any file in which it occurs, and if that happens to be a large video file, well, it's now damaged. And this is if the drive performs as rated; if you get a bad one, it might perform way worse. Other (non-enterprise) drives on the market have worse URE specs as well, typically 1 per 10^14 bits.
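
Strictly speaking, ~11.2% is the expected number of UREs per full read; the probability of hitting at least one is slightly lower. A small worked check:

```python
import math

# Chance of at least one URE while reading a 14 TB drive end to end,
# assuming the rated 1-per-1e15-bits error rate holds exactly.
URE_RATE = 1e-15          # probability of a URE per bit read
bits = 14e12 * 8          # 14 TB = 1.12e14 bits

expected = bits * URE_RATE                            # expected UREs per full read
p_any = -math.expm1(bits * math.log1p(-URE_RATE))     # 1 - (1 - rate)**bits

print(f"expected UREs per full read: {expected:.3f}")  # 0.112
print(f"P(at least one URE): {p_any:.1%}")             # 10.6%
```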

It's been a while since I set up a FreeNAS box (I manage my storage manually from the command line with ZFS on Linux (ZoL) these days), but since it is ZFS based, I don't even know if FreeNAS will let you set up non-redundant storage pools.
 
You mean the NIC on the server computer or on my desktop?

The network adapter on my desktop is a Killer e2200. The one on my laptop is Realtek.
 
You mean the NIC on the server computer or on my desktop?

The network adapter on my desktop is a Killer e2200. The one on my laptop is Realtek.

The network card you're using on the server computer is what matters most, and it's recommended to be an Intel and/or enterprise-grade NIC.
 
You mean the NIC on the server computer or on my desktop?

The network adapter on my desktop is a Killer e2200. The one on my laptop is Realtek.

I'm talking about the NIC on the system that will contain the drives and be running FreeNAS. You want that to be an Intel NIC. The clients don't matter as much, but the longer I do this stuff, the more I try to put Intel NICs in as many of my systems as possible, because I just have fewer problems when I do.

If you have a system that doesn't have an Intel NIC, you can always install one; there are a ton of used ones on eBay I have had very good luck with. Just search for "Intel Pro/1000 PT". They come in both dual and quad configurations. I have bought more dual versions than I care to count. They were not cheap when they were new, but these days they go for under $15 on eBay. As old as they are, they are still leaps and bounds better than anything Realtek has pumped out, and they only take an x4 PCIe Gen 1 slot.
 
Pretty much every HDD these days is around the same reliability. I have used a combination of WD, HGST, Toshiba, and Seagate. I have used the most Seagates, since they have higher RPMs and thus more speed than WD NAS drives. Reliability has been excellent. 6TB is my smallest drive; most are 8TB. I also have a pair of 10TB and 12TB Seagates. Though I have had no problems with my HGSTs or Toshibas, they both run louder and hotter than my WDs and Seagates do. Could be a coincidence of the single model series they are, but I just wanted to point it out.

Currently 8TB is the sweet spot in price to capacity.
Seagate is releasing new 16 and 18TB drives this year too, which will hopefully bring the 8-12TB capacities down $30-40.

This is far from the truth.

https://www.backblaze.com/blog/hard-drive-stats-for-2018/

OP, stay far away from Seagate; when you see one, run, don't walk. These stats are proof they are unreliable, and I have seen for myself in our own datacenter and enterprise environments that Seagate has been and still is unreliable compared to HGST, WD, and other brands. I've learned after 25 years in IT to never buy a Seagate again, and I won't.
 
NIZMOZ You can't go by Backblaze stats for home-user use. They take consumer external drives, shuck them, set 'em up in big RAID arrays, and then hammer them like crazy 24/7 in an enterprise environment they were not designed for. That is nowhere even close to a home-user scenario.
Now, when a home user does use a drive they buy, the quantity of drives is so small that they really have an equal chance of a random failure from WD to Seagate to HGST, and those failure rates will be very different from what Backblaze gets.


And just FYI, it's kinda dumb for you to say "stay away from Seagate" when you are advocating BB stats and yet BB says Seagate is better than WD.
 
NIZMOZ You can't go by Backblaze stats for home-user use. They take consumer external drives, shuck them, set 'em up in big RAID arrays, and then hammer them like crazy 24/7 in an enterprise environment they were not designed for. That is nowhere even close to a home-user scenario.
Now, when a home user does use a drive they buy, the quantity of drives is so small that they really have an equal chance of a random failure from WD to Seagate to HGST, and those failure rates will be very different from what Backblaze gets.


And just FYI, it's kinda dumb for you to say "stay away from Seagate" when you are advocating BB stats and yet BB says Seagate is better than WD.

Umm, if WD home drives can handle it, I'd rather put my money on them than on any Seagate. I've had many home Seagate drives fail on me personally in my own home computers.

Like I said, all were compared under the same stress, and WD survived the best. I'd rather own the one that did, even though it's an overkill test. Our NAS at home runs 24x7 as well, so it never rests and still works the drives pretty hard. How does BB say Seagate is better when the stats say otherwise? They didn't say what you made up anywhere in that article. lol
 
Also, you can't just say that Seagate is better than WD. These are the facts.

Seagate 4TB: 23,236 drives, 581 failures
HGST 4TB: 14,550 drives, 54 failures

Seagate 4TB: 3,441 failures
HGST: 188 + 152 + 95 failures

Enough said.

[Charts: Backblaze 2018 hard drive stats and failure rates]
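
For what it's worth, the naive failure ratio from the counts above works out as follows; note that Backblaze's published annualized failure rates are computed from drive-days, not raw drive counts, so this is only a rough comparison:

```python
# Naive failed-drives-over-total ratio from the quoted 2018 counts.
drives = {
    "Seagate 4TB": (23_236, 581),  # (drive count, failures)
    "HGST 4TB": (14_550, 54),
}

for name, (count, failures) in drives.items():
    print(f"{name}: {failures / count:.2%} of drives failed")
# Seagate 4TB: 2.50% of drives failed
# HGST 4TB: 0.37% of drives failed
```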
 
Also, you can't just say that Seagate is better than WD. These are the facts.

Seagate 4TB: 23,236 drives, 581 failures
HGST 4TB: 14,550 drives, 54 failures

Seagate 4TB: 3,441 failures
HGST: 188 + 152 + 95 failures

Enough said.



Your reading and selective highlighting of numbers from the Backblaze reports is about as worthwhile as Barr's summary of the Mueller Report. Look at the percentages. Seagate isn't that far off from WD, particularly for more recent models.

Also, IIRC a lot of those 4-6TB HDDs were purchased during the Thailand floods and the recovery thereafter. Quality across all brands dipped (probably from glossing over QA in order to get units out the door), and it was exceedingly difficult to get drives in bulk. I seem to remember Backblaze and other smaller companies actively shucking externals and offering bounties to users who were able to locate certain HDD models.
 
Also, you can't just say that Seagate is better than WD. These are the facts.

Seagate 4TB: 23,236 drives, 581 failures
HGST 4TB: 14,550 drives, 54 failures

Seagate 4TB: 3,441 failures
HGST: 188 + 152 + 95 failures

Enough said.

 
I was talking to a friend of mine who was the technical engineer at a big TV production studio, and his opinion is that I don't need RAID for my application.

What he does advise is to program the server to turn off the HDDs when I'm not using them, which would be during the day.

Then I would continue to use my desktop HDD to download and edit things, and when that's done, move the final files to the server.
 
I was talking to a friend of mine who was the technical engineer at a big TV production studio, and his opinion is that I don't need RAID for my application.

What he does advise is to program the server to turn off the HDDs when I'm not using them, which would be during the day.

Then I would continue to use my desktop HDD to download and edit things, and when that's done, move the final files to the server.

As long as you don't mind losing the final files, that's a perfectly acceptable risk to take if you want.
If that data is important, either your friend is an idiot or you didn't emphasize the importance of the data.
 
I was talking to a friend of mine who was the technical engineer at a big TV production studio, and his opinion is that I don't need RAID for my application.

What he does advise is to program the server to turn off the HDDs when I'm not using them, which would be during the day.

Then I would continue to use my desktop HDD to download and edit things, and when that's done, move the final files to the server.

If you don't care about those final files, you can do anything.

I can't believe a production studio would operate that way, though. Highly irresponsible. There is a huge risk of losing lots of work.
 