Assembling a high-capacity NAS

More details about the mobo I plan to use.

Asus H61M-A/BR, with two DDR3 memory slots, one PCI Express 3.0 and two PCI Express 2.0 slots, four SATA 3.0 sockets, Realtek Gigabit LAN, and HDMI video output.

Intel i7-3770 CPU

16GB of DDR3 memory. These modules came from my desktop, where they were replaced with a 32GB kit.

As mentioned above, I will follow the advice to buy an Intel network card to replace the Realtek.

From what I understand, I do not need a monitor connected to that computer continuously, so I would use an LG LCD monitor I keep as a backup.

Besides the Intel LAN board, I also plan to buy a small desktop case, as the mobo is micro-ATX.
 


Yeah, if you go with FreeNAS, the only time you need a monitor is during the initial OS install and first-time network configuration. After that, everything is managed by pointing your web browser at its IP address.
 
If you use an early warning system like CrystalDiskInfo and check your disks every week, you can back up the one showing any signs of potential problems in advance.

1.) Drives often fail with no warning. Monitoring SMART data can help prevent some cases, but it is not a guarantee by any stretch of the imagination.

2.) As I have mentioned before, the bigger worry is silent corruption. The drive continues to report as healthy, but the data on it has randomly flipped bits. This will not show up in any kind of diagnostics tool until it is too late.
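
For what it's worth, the weekly check mentioned above can also be scripted rather than done by hand, though, as noted, it only catches what SMART can see. A minimal sketch, assuming a box with smartmontools installed and placeholder device names:

    #!/usr/bin/env python3
    # Rough sketch: poll each drive's SMART overall-health result with smartctl
    # (from smartmontools) and flag anything that no longer reports PASSED.
    # The device list is a placeholder -- adjust it to your system.
    import subprocess

    DRIVES = ["/dev/ada0", "/dev/ada1", "/dev/ada2", "/dev/ada3"]

    for drive in DRIVES:
        result = subprocess.run(["smartctl", "-H", drive],  # -H = overall health check
                                capture_output=True, text=True)
        if "PASSED" in result.stdout:
            print(f"{drive}: SMART overall health PASSED")
        else:
            print(f"{drive}: WARNING - investigate this drive:\n{result.stdout}")

And remember that, as pointed out above, a passing result is not a guarantee; it says nothing about silent corruption.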
 
My budget is the minimum possible, so it only includes the HDDs.

As I intend my NAS to have a 30TB capacity, my only safety precaution would be to have an additional 10TB HDD. Having more than that is not an option I'm interested in. Neither is assembling a 20TB server. It's 3 + 1.

All HDDs should be brand new.

My server would be a computer with an Asus motherboard and an Intel i7 CPU, all of which I already have, running FreeNAS or a similar OS.

Please do suggest your options, taking all those decisions into account.


So, why even ask for advice if you are not open to hearing the pretty much unanimous advice we have to offer?

I mean, you do you. No skin off our backs, but I guarantee you if you don't follow our advice, sooner or later you will regret it, and you'll either abandon running a NAS altogether, or you'll come around and do it right the second time.

Why not do it right the first time?

And if you ask the same question in the FreeNAS forums, I guarantee you'll get the same answer as here.
 
Sorry, but it wasn't totally unanimous.

It was suggested that I use a RAID system with three 10TB HDDs plus another 10TB HDD, and I agreed to that.

What I do not want is a fully redundant system with double the drives I need for my data.

Why not help me work from that 3 + 1 system?
 

With single drive redundancy your data is still vulnerable when it is at its highest risk.

Scenario: 3+1 RAID5 setup. You have single drive redundancy. Your system has run for a few years and your drives are getting older.

1.) One drive fails. No problem you say, I'm redundant. I just have to replace the drive and rebuild.

2.) You do this, but your drives are almost full, and it takes 20+ hours for the rebuild process to complete. During this time your aging drives are at full load for an extended period of time.

One of two things can happen:

3a.) Best case: Your aging drives hit a few UREs (unrecoverable read errors) during the rebuild. Since there is no redundancy left during the rebuild, this causes some files to be corrupted. This is silent, and you'll never know until you try to use those files. (Rough numbers on how likely this is are sketched below the scenario.)

3b.) Worst Case: One of your remaining aging drives is marginal. The added stress of running at full load for a rebuild for 20 hours causes it to fail. All of your data is now gone.
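
To put rough numbers on 3a: a back-of-the-envelope sketch, assuming the common consumer-drive spec of one unrecoverable read error per 10^14 bits and treating errors as independent (reality is messier, and many drives do better than the datasheet):

    # Chance of hitting at least one URE while reading the three surviving
    # 10 TB drives during a RAID5 rebuild, assuming 1 URE per 1e14 bits read.
    URE_PER_BIT = 1 / 1e14
    drive_bytes = 10e12          # 10 TB, decimal terabytes
    surviving_drives = 3

    bits_read = drive_bytes * 8 * surviving_drives
    p_clean_rebuild = (1 - URE_PER_BIT) ** bits_read
    print(f"P(at least one URE during rebuild) ~= {1 - p_clean_rebuild:.0%}")
    # ~91% with these assumptions; the real figure depends on how full the
    # drives are and on the actual (usually better) error rate.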

RAID5 has been known to be on its last legs since 2007, when ZDNet did a piece on it. The opinion that RAID5 was dead was controversial back then, but over the years since it has become accepted as truth. It is wholly inadequate to protect your data in 2019.

I mean, yeah, sure, it is better than nothing at all, but I am not going to sugarcoat it to give you a false impression that it is OK.
 
Your best bet if you are going to insist on doing it this way is to install all four drives into the system and let FreeNAS and its ZFS file system take care of it. There really is only one way to run such a configuration, but it does leave you open to a fair amount of risk. Much better than 3 or 4 drives just striped together running NTFS, but a fair amount of risk nonetheless.
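
For a rough feel of the trade-off, here is a small sketch (my own illustration, not FreeNAS output) of what four 10 TB drives give you under the layouts discussed in this thread; usable figures are approximate and ignore filesystem overhead:

    # Approximate usable capacity and failure tolerance of four 10 TB drives
    # under different layouts. Illustration only; real usable space is lower.
    DRIVES, SIZE_TB = 4, 10

    layouts = {
        "Separate disks / plain stripe": (DRIVES * SIZE_TB, 0),
        "RAIDZ1 / RAID5 (3 data + 1 parity)": ((DRIVES - 1) * SIZE_TB, 1),
        "RAIDZ2 / RAID6 (2 data + 2 parity)": ((DRIVES - 2) * SIZE_TB, 2),
        "Two 2-way mirrors": (DRIVES * SIZE_TB // 2, 1),  # 1 guaranteed, 2 if in different mirrors
    }

    for name, (usable_tb, failures_survived) in layouts.items():
        print(f"{name:36s} ~{usable_tb} TB usable, survives {failures_survived} failure(s)")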
 
Either your data is important or it isn't; it's that simple. If it's important, then you really need two copies of it, with one being off-site.

Relying on SMART data to give you a warning is in no way a backup solution. RAID is not a backup solution either; it just helps with uptime.

As mentioned above, RAID5 is not a great idea either. I wouldn't even bother trying to rebuild a RAID5 array if a drive failed; I would get any not-yet-backed-up data off it and then start over from backups, because it's unlikely to survive a rebuild without some corruption. If you go RAID and attempt to use it as a backup (which is a bad idea), go RAID6 or mirrors at a minimum, IMO.

I just built a budget NAS as well. I used a lot of old parts I had around and only bought the drives, an HBA card, and the case new. I reused an old Ivy Bridge mobo/CPU/RAM and an old PSU.

I built a 64TB array consisting of 11 8TB WD white-label drives that I shucked from external drives to save money. They are in a RAIDZ2 (ZFS's RAID6 equivalent) array, and it is backed up to a group of six 10TB external drives that I keep at my brother's house and update monthly. To ensure no data is lost during the month if my array fails, I sync any changed files to an 8TB drive in my main PC that stores only data changed since my last backup, and I wipe it monthly. So even if the array fails mid-month, I should still be good between the backups and what is saved on my main PC. This should cover me for any failures as long as I don't change more than 8TB of data on the array per month, which I don't; it's mostly media storage.

Over the years I have lost a lot of data from many failures: HDDs themselves, a PSU that fried a whole PC once, and a fire that once took out most of my possessions. Bottom line: if your data matters, then you need an off-site backup.
 
Yup, all those stories are much too horrific and do not seem to have anything in common with the problems I have ever had. Only once, and because I didn't back up immediately, did I lose data.

In a way it makes me wonder again whether my present arrangement, with all my video data recorded on 8GB DL DVDs, isn't much safer (and cheaper) than any HDD arrangement you have suggested.

Many people have suggested to me, over the years, that I move my videotheque onto a server. And I always said that the losses or problems I have had using DVDs were limited to the rare disc failure. On an HDD I would lose, say, 200 video files with a single 4TB unit failure.

But a server would make my data handling simpler than loading and unloading DVDs.

The picture you have painted seems to confirm my view that HDDs are unreliable.

So what I think is this: let's completely forget about RAID arrangements. Can I use FreeNAS to assemble a server of separate HDDs, all NTFS formatted?
 
Sorry if I went along with suggestions you made that were not part of my original question. I didn't intend to make you waste time.

Your comments were absolutely helpful, at least for confirming the reliability of the system I use now, and for making me think carefully about what I move to.

My original question remains, though: which HDD brands and models are better for a non-RAID server?
 
I have seen 3TB disks from Seagate that die like flies after 3 years, but with enterprise disks from HGST, Seagate, or WD you can expect a failure rate of, say, 3-5% per year when they are new. The rate increases over time. The intended service life for these disks is 5 years. They may live longer, but at some point they will die.

Non-RAID systems are an absolute no-go for me, as the data loss probability is 100% if you wait long enough. If you want to avoid real-time RAID (all disks online), use at least a RAID-on-demand system like unRAID that can survive a disk failure without data loss (at least up to the state of the last sync).

Otherwise: no backup, no mercy.
And a backup is like old bread, always from yesterday/last week/last month or even older.
 
If you want to avoid real-time RAID (all disks online), use at least a RAID-on-demand system like unRAID that can survive a disk failure without data loss (at least up to the state of the last sync).

As long as there's absolutely no risk of trouble on one HDD affecting data stored on the other HDDs, I would like to know more about what you're talking about.
 
You want to store 30 TB of data. Even if the data is not so important, you will lose the contents of a disk, say 8-12 TB, for sure when the affected disk fails. The time (and cost) to recover may be huge.

A full backup of your data (given you have another 30TB of storage) may take 3 weeks, and even incremental backups can take a lot of time. You will not want to run them as often as you should.

You can instead use tools like unRAID or SnapRAID. They use an additional parity disk (at least as large as the largest of your disks) to create redundancy on demand. Such a sync run can take many hours, but in the end you are able to replace a dead disk, run a restore (which also takes many hours), and your data is back again (hopefully).
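
If the parity-disk idea is unfamiliar, the core principle is just XOR: the parity disk stores the XOR of the corresponding blocks of the data disks, so any single missing block can be rebuilt from the rest. A toy sketch (a few bytes standing in for whole disks; a gross simplification of what SnapRAID/unRAID actually do):

    # Toy single-parity demonstration: parity = XOR of the data blocks, and any
    # one lost block is recoverable by XOR-ing the survivors with the parity.
    from functools import reduce

    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    disk1 = b"hello world "
    disk2 = b"more data..."
    disk3 = b"even more..."
    parity = xor_blocks([disk1, disk2, disk3])

    # Pretend disk2 died: rebuild it from the surviving disks plus parity.
    rebuilt = xor_blocks([disk1, disk3, parity])
    assert rebuilt == disk2
    print("disk2 recovered:", rebuilt)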

Real-time RAID does not need this sync run. You always have redundancy to survive a single disk failure (RAID5 or ZFS Z1), a dual disk failure (RAID6 or ZFS Z2), or a triple failure (ZFS Z3).

ZFS is then the gold standard of data security with real-time RAID, as it avoids a corrupt filesystem on a crash during write, gives you ransomware-safe read-only snapshots, checksums to protect against silent data errors, and many more features.
 
I don't get it. I never had one disk failure after another in my 25 years of using computers.

Where does all this doom idea come from? Why such a certainty that I will lose such a large amount of data if the disks are new and reliable?

This IS NOT Murphy's law, as you make it seem. At least not in the terms you are presenting it.
 

It is a function of size. The more capacity you have, the more likely you are to run into an unrecoverable error. Most hard drives are rated at 1 unrecoverable error per 10^14 bits read. This means that, no matter what you do, you are likely to run into at least one unrecoverable error for roughly every 12 TB of data, and this is cumulative across drives in a pool. Which means you are almost guaranteed 3 unrecoverable errors within a 30 TB storage pool (whether you pool those drives together or use them individually makes no difference).
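
The arithmetic behind that figure, as a quick sketch (same 1-error-per-10^14-bits spec assumption; real drives often beat it):

    # Expected unrecoverable read errors for one full read of a 30 TB pool,
    # assuming the common 1-URE-per-1e14-bits drive spec. Expected errors scale
    # with the amount of data read, however the drives are arranged.
    BITS_PER_URE = 1e14
    pool_tb = 30

    bits_read = pool_tb * 1e12 * 8
    expected_ures = bits_read / BITS_PER_URE
    print(f"Expected UREs per full read of {pool_tb} TB: {expected_ures:.1f}")
    # -> 2.4, i.e. roughly two or three errors somewhere in the pool per pass.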

The point of redundancy (RAID) and fancy archival file systems such as ZFS and ReFS is to mitigate these errors in these enormous data pools. NTFS does not do this.
 
This IS NOT Murphy's law, as you make it seem. At least not in the terms you are presenting it.
Actually, this thread is exactly Murphy's Law, just not in the terms you currently understand it.

Most people seem to believe that ML is a cute name for a fancy form of "shit happens." But Murphy was a real guy who learned expensive lessons in USAF research. Rather than "shit happens," ML really means that you need to consider all possible shits, then intentionally design systems so that avoidable, catastrophic failures simply cannot occur.

"Anything that can go wrong will go wrong. (Make sure nothing can go wrong!)" Get it now?

If you're designing a sensor that needs 3 screws to attach to a rocket, you do NOT make an equilateral triangle. Using an isosceles or irregular triangle will force the sensor to be installed in the correct orientation - there's simply no way to install it incorrectly because the holes won't line up. The Russians failed to learn this lesson & paid the (100% avoidable) price just a few years ago.

The advice in this thread hasn't addressed every potential doomsday scenario. People haven't recommended servers with redundant PSUs, etc. But they have considered the most likely failures, the results of such failures, and the time & effort required for recovery. Follow this advice, including making regular backups, and the odds you lose data will be very low.

Heck, go with your gut & build a simpler configuration. You still have pretty good odds of success, at least for a while, maybe years. But you'll have both ignored other people's hard-won experience & failed to engineer a system for its application, and you will have a truly miserable time when whatever can go wrong does.
 
Actually, used old servers were recommended already; they were shot down because they were 'expensive'.
My view so far is that he had something in mind and wanted folks to validate it; it backfired, and the consensus/recommendations here didn't match up with what he wanted to do.
 
What I meant when I said that Murphy's Law doesn't apply here is that it doesn't apply in the doom-laden ways that were described. My humble experience with HDDs shows me a different picture. I'm quite aware of what is involved in the ML concept.

HDDs being mechanical units, with a rotating part and a fixed part touching each other, it's obvious that errors will happen.

Hopefully, storage systems where such wear does not happen will develop with fewer and fewer errors. For now SSDs seem to be one way, and hopefully they fulfill their promise better than CDs did for music playback, when they were claimed to be "indestructible".

Since I started paying attention to HDD temperatures and leaving the drives on all the time, 24/7, HDD problems have practically disappeared and become more predictable. I could track that with SpeedFan and SMART quite well.

Access to used servers in my country is a LOT more limited than in the USA, because they are scarce and expensive.

A good used server is sold here for the price of a new one in the US. So, yes, that is an "expensive" option, unusual as it may seem to you. That was the route I was originally planning to take: buying a server.

What I had in mind was suggested to me by a professional with many years of experience as technical director in international video production companies. So it's not something I imagined and am insisting on like a child.

Of course, since error rates are defined as a percentage, the larger the HDD, the more data is in danger of being corrupted. Perhaps using smaller HDDs is a better way to go, to reduce the quantity of data that can be affected.

That would just increase the physical space required, and I would have to see how to solve the problem of the available SATA sockets.
 
The best hard drives fail at a rate of about 1%/year. SSDs are about the same, maybe a little better. There are other risks to consider - accidental deletion, file system corruption, bad power supply, fire/flood, malware, etc. - which individually aren't very risky, but add up.
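
As a rough sketch of how those individually small risks compound over time (the probabilities below are made up purely for illustration, not measured figures):

    # Illustration of how several small, roughly independent annual risks add up.
    # All probabilities here are invented for the example.
    annual_risks = {
        "drive failure":         0.01,
        "accidental deletion":   0.02,
        "filesystem corruption": 0.005,
        "bad PSU / malware":     0.01,
        "fire / flood":          0.002,
    }

    p_clean_year = 1.0
    for p in annual_risks.values():
        p_clean_year *= (1 - p)

    years = 5
    p_any_loss = 1 - p_clean_year ** years
    print(f"P(at least one data-loss event within {years} years) ~= {p_any_loss:.0%}")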

RAID(-1,-5,-6) is a convenient way to deal with drive failure, which is the single biggest risk. It still leaves you vulnerable to other causes of data loss.
 
Nice to see this thread, as I'm slowly looking into building something like this from scratch or going the QNAP way (you know, backup/Plex, all that good stuff).

Several years ago I got a used server for free, but after trying it for a weekend it went back. I knew it was loud but didn't realize how loud until it was in my quiet house and not in a server room with all the HVAC. Oh, and I didn't even look at the power consumption lol (that was to be my next thing to check heh). Currently looking at a small rackable server (that you build) with ECC RAM - (interesting about shucking external HDDs) - but ...still weighing the difference between doing that and just getting a prebuilt system lol (both cost and time involved). Lots of good considerations here :)
 
I went the QNAP route but am regretting it now; unless you buy one of their more powerful CPU versions, the ARM CPUs just don't get the throughput/performance I was expecting.
While it works fine for several Plex streams, it's less than ideal, so keep that in mind if you select a prebuilt box (I'm just using it as storage, not even running VMs on the NAS).
Though it is extremely quiet and power efficient, if I could do it again I'd likely go unRAID (or ZFS).
 
This is still available. An absolute steal for what you get. I have one with Opteron 4174HE CPUs and it is fairly quiet after it finishes powering up. Definitely not "living room appliance" quiet, but I'm sure you could stash it somewhere. :)
 

That's pretty nice. Somewhat limited number of drive slots for a storage server though.

I like how HP used to cram 12x 3.5" drive slots in the front of their DL180 2U servers. As mentioned before though, the fans were loud as hell.
 
I have an NZXT mid-tower (now discontinued) that can hold 13 3.5" drives. It's either that or one of those Rosewill 4U EATX units that holds 12 drives these days.
 
That's pretty nice. Somewhat limited number of drive slots for a storage server though.

I like how HP used to cram 12x 3.5" drive slots in the front of their DL180 2U servers. As mentioned before though, the fans were loud as hell.

If I had the money sitting around and a reasonable chance of getting the drives I'd need in it in anything close to a timely fashion, I'd buy it for "Friendlocation" myself :)

As it is, it is going to take ages to get the 8TB drives I want for the one I have :( Budget is super tight and I will only be able to afford one at a time (sometimes) when they go on sale. I plan to shuck. Until then, the 5x4TB eSATA RAID5 (with backup to a lot of smaller drives) carries on...
 
I went the QNAP route but am regretting it now; unless you buy one of their more powerful CPU versions, the ARM CPUs just don't get the throughput/performance I was expecting.
While it works fine for several Plex streams, it's less than ideal, so keep that in mind if you select a prebuilt box (I'm just using it as storage, not even running VMs on the NAS).
Though it is extremely quiet and power efficient, if I could do it again I'd likely go unRAID (or ZFS).

Have a co-worker with qnap, but not sure which one. Though I'm thinking he has one of the more... robust versions lol.
 
There are some impressive Norco 4U 24-bay chassis as well.
Do they mount drives horizontally or vertically? Many rackmount chassis I see do it vertically, especially when they want it really dense, and I had a TON of drive failures when doing that - something like 8 failures in the 2 years I used a chassis like that. Since I switched back to horizontal mounting I have only had 1 failure in the past 7-8 years.
Just a month ago I switched to a Corsair 760D with 24 drive bays, 17 of which are filled right now.
 
If I had the money sitting around and a reasonable chance of getting the drives I'd need in it in anything close to a timely fashion, I'd buy it for "Friendlocation" myself :)

As it is, it is going to take ages to get the 8TB drives I want for the one I have :( Budget is super tight and I will only be able to afford one at a time (sometimes) when they go on sale. I plan to shuck. Until then, the 5x4TB eSATA RAID5 (with backup to a lot of smaller drives) carries on...

Yeah, I was only able to do my backup "friendlocation" server because I had enough spare parts to pull it off while only spending tens of dollars on eBay.

Otherwise it wouldn't have happened :p
 
Have a co-worker with qnap, but not sure which one. Though I'm thinking he has one of the more... robust versions lol.
Likely. I got the 1635AX, which is a nice unit, but I was hitting 250MB/s transfers with 6 disks. I expanded it to 8 and the speed didn't increase, which prompted me to look into it.
The ARM CPU immediately hits 100% when I kick off a transfer. With my current 10-disk RAID6 over the 10Gb connection (direct attach from storage to server) I'm still getting 250-275MB/s; it should be at least double that speed with 10 disks.
(I'm not using it for anything but storage - no apps, Docker instances, etc.; I have my compute hardware separate.)

Again, not a huge deal, since only the storage<->server link can take advantage of it; the rest of my network is 1Gb and more than enough to run several Plex streams, but it's still a bit annoying.

Do they mount drives horizontally or vertically? Many rackmount chassis I see do it vertically, especially when they want it really dense, and I had a TON of drive failures when doing that - something like 8 failures in the 2 years I used a chassis like that. Since I switched back to horizontal mounting I have only had 1 failure in the past 7-8 years.
Just a month ago I switched to a Corsair 760D with 24 drive bays, 17 of which are filled right now.
http://www.norcotek.com/product/rpc-4224/

Horizontal in the front with hot-swap bays, and the backplanes use 6x SFF-8087 mini-SAS connectors, but it's about $350; there are likely cheaper options unless you already have most of the hardware.
 

Yep. The Pro/1000 PT NICs are pretty old at this point, but they are still very solid gigabit Ethernet cards for servers.

I have several of these - a single port, a few dual ports, and a quad port - across my devices. Very good experience. Perfect for FreeNAS, BSD, or Linux servers.

They work well in Windows too, but Intel has discontinued them from its premium driver offerings, so now you just get basic features under Windows. (You can't set up link aggregation and stuff like that, but it is not a big deal.)

Since drivers in Linux and BSD (which FreeNAS is based on) are open source, they will never become obsolete.

The price on Amazon seems a bit on the high side, though. Here is one for $10.99 on eBay.

Also, depending on what case you are using, make sure they come with the correct bracket. For a normal PC case you'll need the full height bracket. If you are using a more compact case, you may want the half height.
 
The one from eBay is second hand. I'd prefer to buy it new.

Is there any newer Intel NIC?

Are you sure the one on Amazon is new? These haven't been made in many years.

You are missing great opportunities with your aversion to used server pulls. That's how most of us afford our enterprise hardware.

It's not like buying used consumer parts. These have been installed in an enterprise server somewhere, spent a few years in use, and then been pulled and sold. They haven't been messed with like consumer stuff has.

I have been buying server pulls for my servers for YEARS and haven't had a single problem with them.
 
The one from eBay is second hand. I'd prefer to buy it new.

Is there any newer Intel NIC?

The Intel X540s are the latest cards, but they're also 10GbE, since they're built to the latest standards, and you pay for it.
 
Are you sure the one on Amazon is new? These haven't been made in many years.

Yes, I'm sure. Look at this list


https://www.amazon.com/gp/offer-listing/B000BMZHX2/ref=dp_olp_all_mbc?ie=UTF8&condition=all

You are missing great opportunities with your aversion to used server pulls. That's how most of us afford our enterprise hardware.

It's not like buying used consumer parts. These have been installed in an enterprise server somewhere, spent a few years in use, and then been pulled and sold. They haven't been messed with like consumer stuff has.

I have been buying server pulls for my servers for YEARS and haven't had a single problem with them.

That may work if you live and buy in the USA. Not when you live in Brazil, like I do. I could not return it, for instance.

In any case, I only buy second hand if I can actually have it in my hands, and see its condition.
 
They're cheap.

If it fails, buy another, but that's also highly unlikely. Hell, buy two from two different vendors if you're worried.

The only things I won't buy used are hard drives and power supplies: hard drives because I don't run enough units for the savings to be worth the potential failures, and power supplies because bad ones are insidious and new ones are cheap.
 
Yes, I'm sure. Look at this list


https://www.amazon.com/gp/offer-listing/B000BMZHX2/ref=dp_olp_all_mbc?ie=UTF8&condition=all



That may work if you live and buy in the USA. Not when you live in Brazil, like I do. I could not return it, for instance.

In any case, I only buy second hand if I can actually have it in my hands, and see its condition.

That's fair. I wouldn't have guessed. Your English is very good.

My fiancée is a paulistana, and I've spent some time down there, so I know all about the differences in trust levels when dealing with people and the greater difficulty and expense of finding certain things.

Still, are there no used parts like this available locally there? Mercado Livre?
 
Thanks for your comments about my English. I was born in Argentina, but I live in Rio de Janeiro.

Unfortunately not; we can't find much that is really useful and specific on Mercado Livre.

And I'm not having these Amazon purchases shipped directly to Brazil, or I would pay a lot in courier shipping, and also taxes when it gets here.

I have them sent to a service that then air-mails the stuff to me, which is much cheaper, and then I may or may not pay import taxes, since air-mail packages are sampled at customs and only about 10% end up paying taxes.

So buying two of these things would be cheap, but not other stuff.
 