How do you build a fast PC-based NAS?

mnewxcv

Basically I want to replace my WD 2-bay NAS with something faster. I've been happy with my 2x3TB MyCloud mirror NAS (like the EX2), but I want to move to something that feels indistinguishable from a local disk. I definitely want redundancy (so RAID 0 is out), but I want it to be faster. It doesn't have to be SSD fast, but that would be nice if possible. What do I need in terms of hardware to make a PC-based NAS? Cores? Memory? RAID 5/6/10? More drives? Someone please help me understand this. If I could get 250MB/s over a NAS I would be ECSTATIC. I know I will need to upgrade to 2.5+ Gbit - no problem. I have an older 1155 motherboard I could get a CPU for, but I really want to know what kind of hardware to get first, whether it makes more sense to use my AM4 system for this instead, etc. Thank you all.
 
If we are talking NAS and not DAS, then on a lot of setups one of the limiting factors is the network itself, even for a spinning-disk NAS.

Even 2.5Gbit can be easily saturated by a small set of disks.

Just something to consider. Things like flash can greatly speed up IOPS, but transfer rates may still be very constrained by the connection (that is, the network for a NAS).

With that said, 250MB/s is a pretty low bar. I usually call anything less than 300MB/s single-disk speed.
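Rough numbers, if it helps put 250MB/s in perspective. The ~90% usable-line-rate figure is just my rule of thumb for protocol overhead, not a measurement:

```python
# Back-of-the-envelope only: assumes ~90% of line rate survives
# Ethernet/TCP/SMB overhead (my rough rule of thumb, not a benchmark).

def usable_mb_per_s(link_gbit, efficiency=0.9):
    """Convert a link speed in Gbit/s to an approximate usable MB/s."""
    return link_gbit * 1000 / 8 * efficiency

for link in (1, 2.5, 10):
    print(f"{link} GbE -> ~{usable_mb_per_s(link):.0f} MB/s usable")

# 1 GbE   -> ~112 MB/s   (why gigabit tops out around 110-120 MB/s)
# 2.5 GbE -> ~281 MB/s   (already above the 250 MB/s target)
# 10 GbE  -> ~1125 MB/s
```

So 2.5GbE leaves you a little headroom over the 250MB/s goal, but not much.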

Memory can help since filesystems cache. A lot of "big boy" NAS subsystems rely heavily on this, but of course they also rely on being able to defer writes aggressively, which implies ensuring those writes can still complete even if the power gets disrupted somehow.

Personally, I wouldn't attempt RAID unless you're at the 8-drive level. And today, if you're not using something like ZFS RAIDZ and are going more traditional or hardware RAID, I'd go RAID 6. But if you're using spinning disk and want better IOPS and a bit more reliability at the expense of space, go RAID 10 (not 0+1).

ZFS likes memory (more than the caching example mentioned above), and CPU starts becoming more important as well (potentially even if you're not using ZFS, though arguably plain software RAID doesn't take much).

Now, the "future" (actually already here)... that is, all flash (SSD), and potentially all flash distributed/augmentable object storage. Again, this is more typical of enterprises, but doable at home as well. The idea is there is no "RAID", but rather distributed object storage and more efficient (than RAID5/6) "erasure codings". These "erasure coding" algorithms (leverages the fact you have contemporary CPUs and flash) give you RAID5/6 reliability with less storage tradeoffs (more disks/bricks provide better space to reliability ratios). Using "erasure coding" means more I/Os and computations, but the idea is you CPU can perform operations fast and you've got high IOPS storage (all flash). I mention this for those that really want be on the bleeding edge. Obviously, there are reasons why you would not do this, but might be fun. (Ceph? https://docs.ceph.com/docs/master/rados/operations/erasure-code/)
 
Need more information about your workload and what that 250MB/s entails. How many clients? Read, write, sustained, burst, what kind of data is it? Willing to purchase a <$1,000 server OS license or need to stay free?

I'm not sure if ZIL and L2ARC work with mirrored VDEVs in FreeNAS, but that might be a nice setup. Small SSDs as ZIL and L2ARC, 16GB of RAM, and two big disks in a mirror. I haven't messed around with ZIL or L2ARC in years and haven't kept up on it.
 
I would take the RAM as high as the motherboard would allow for ARC - 64GB should be doable without spending insane amounts on RAM - and I think I would just do, say, a 1TB Samsung Pro NVMe as L2ARC to keep it simple. Depending on total storage needs, you could either do high-capacity spinning disk, or smaller SSDs, or even a combo of the two if you wanted an archive area and a speedy storage area.

Network-wise I would want the NAS on 10gbit so it could feed multiple 2.5gbit clients. If you have one client where you want no compromise in speed, you could put that client on 10gbit too, giving it around 1000MB/sec transfer speeds. I don't have a good way to get around network latency though.
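As a rough sanity check on the 10gbit-host idea (same back-of-the-envelope efficiency assumption as earlier in the thread, not measured):

```python
# Illustrative only: how many full-speed 2.5GbE clients one 10GbE NAS
# uplink can feed at once, ignoring disk limits and assuming ~90% of
# line rate is usable after protocol overhead.

def usable_mb_s(gbit, eff=0.9):
    return gbit * 1000 / 8 * eff

uplink = usable_mb_s(10)      # ~1125 MB/s
client = usable_mb_s(2.5)     # ~281 MB/s

print(f"10GbE uplink ~{uplink:.0f} MB/s, feeds {int(uplink // client)} clients at full 2.5GbE")
# -> about 4 clients at ~280 MB/s each before the uplink becomes the bottleneck
```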

Edit:
I just reread the initial post - the OP is currently using a 2-drive MyCloud for what appears to be home use. The use case is likely just a single client connecting to the NAS at a time, with a storage requirement probably double what the OP has now.

FreeNAS would be my choice for software. I would recommend Intel for a hassle-free setup (I had a nightmare of a time trying to set up an old AMD APU as a FreeNAS server). Since the new 10-series chips are right around the corner, I would probably check out something in the i3 range and fill it up with either 32GB or 64GB of RAM - raw capacity is all we're worried about here without spending tons of money. Disk-wise, 8TB spinning drives in a mirror would probably be sufficient. If a really fast network drive is desired, a pair of 2TB SSDs in a mirror would probably be sufficient and would be 2/3 the capacity of the existing solution. I think I would still try to get 10gbit on the NAS host if a network gear upgrade isn't an issue.
 
Yeah, if you have 1gb ethernet... you buy a cheap box, throw a few cheap drives in, and set it up to share files. Bam, it's as fast as you're going to get. If you have 10gb ethernet, a bit more work and planning is required. I am running RAID 10 (hardware with battery backup) with 6 drives and I can sustain 1GB/s (gigabyte, not bit) of sequential read speed. I have a 1gb ethernet connection and my transfer rates are capped at around 120MB/s... which is about the speed of a single 3.5" HDD, so transferring to my RAID is no different than transferring to the external USB hard drive I have connected to my server. If I had 10gb ethernet, I'd pretty much saturate the network with my current setup (they would both be borderline maxed out). Now, I can plug multiple 1gb connections into my server and it can transfer to each client @ 1gb/s, but even that's limited and I rarely need to transfer to multiple clients simultaneously like that. My SSD isn't any faster for transfers (maybe a bit faster for many small files, but not generally noticeable).
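To put numbers on that mismatch (the per-drive figure is an assumption for a typical 7200rpm disk, not my actual benchmark):

```python
# Toy model of the setup described above; numbers are assumptions.

drives = 6
per_drive_mb_s = 170                     # assumed sequential read per disk
array_read = drives * per_drive_mb_s     # RAID 10 can read from all spindles

gige_cap = 1 * 1000 / 8 * 0.9            # ~112 MB/s usable over gigabit

print(f"Array sequential read: ~{array_read} MB/s")        # ~1020 MB/s, i.e. ~1 GB/s
print(f"Single-client ceiling on 1GbE: ~{gige_cap:.0f} MB/s")
# The array is roughly 9x faster than the network path to one client,
# which is why it feels no quicker than a single USB disk over gigabit.
```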
 
Like others have said, unless you have a dog-slow NAS, your network will be the limiting factor. I have gigabit Ethernet to both my NAS and my server. If I copy a single large file from my PC that has a Samsung 860 Evo to my NAS, I get around a 115MB/s transfer rate to my old Samsung 5,400rpm drives (may be 5,900rpm, I forget) with an old dual-core AMD CPU. If I transfer the same file to my server that has a much faster NVMe drive and a new 6-core, 12-thread Ryzen, I get 115MB/s as well. I'm pretty sure that speed is somewhere around the cap for gigabit. Now, smaller files will go much slower from all the reads and writes. What I'm trying to say is - don't blow your wad on top-of-the-line hardware and expect miracles if your network stays the same.
 
Thanks for the feedback. I'm looking to upgrade to 2.5gbit on the LAN. I'm thinking 4 drives in raid 10 might do the trick...
 
What size drives and how critical is the data? Keep in mind that with RAID 10 you lose half of your raw capacity to redundancy. If it's just semi-important that it doesn't all get destroyed, RAID 5 (with 4 disks) would give you 50% more usable space than RAID 10 with the same drives - still redundant, but with a higher chance of data loss during a rebuild. This is what backups are good for though. If you have a lot of data that doesn't change much and losing a tiny bit is ok, do nightly or weekly backups, or w/e. Even mirroring an important directory to a backup works. And this is from the guy running RAID 10 ;). I am starting to look into 2.5G Ethernet, but really it's going to cost me more than it's worth (to me; I currently have 2 switches and 7 desktops + a server). Any new MBs I buy going forward will have 2.5GbE or 10GbE built in though, for some future-proofing.
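For reference, the raw capacity math on four equal drives looks like this (4TB drives are just an example size):

```python
# Straightforward capacity arithmetic for four equal drives.

size_tb, n = 4, 4                 # e.g. four 4TB drives

raid10 = size_tb * n / 2          # mirror pairs: half the raw space
raid5  = size_tb * (n - 1)        # one drive's worth of parity
raid6  = size_tb * (n - 2)        # two drives' worth of parity

print(f"RAID 10: {raid10:.0f} TB usable")   # 8 TB, survives 1 failure per mirror pair
print(f"RAID 5 : {raid5} TB usable")        # 12 TB, survives any 1 failure
print(f"RAID 6 : {raid6} TB usable")        # 8 TB, survives any 2 failures
# RAID 5 gives 50% more usable space than RAID 10 on 4 disks, and RAID 6
# costs the same space as RAID 10 here, which is why it's a tough sell at 4 drives.
```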
 
You shouldn't be using RAID 5 nowadays unless you've got another array backing up that array (which you should have anyway if the data is critical).

RAID 5 can blow up in your face when replacing a disk (you don't have any redundancy while a disk is being replaced, so if you get a data error it can crash the array), whereas with RAID 6, if you get a data error while a disk is being replaced it will just correct it from parity and continue rebuilding.

Disks over 2TB should be RAID 6 or 10 (if it's 4 disks it doesn't really matter much which one you go for; RAID 10 is certainly faster, but has fewer data consistency checks in place compared to RAID 6, which uses parity, so any data inconsistencies are corrected more reliably).

Even using RAID 6 you should be able to saturate a 2.5GbE link.
 
Depends on the size of the array, which is why I asked. RAID 6 is useless for 4 disks... better to stick with RAID 10 if you are going to use 2 disks' worth of redundancy (on a 4-disk array) and at least get the performance benefits. Also, it depends on what the required level of data integrity is. If it's "I just have a few things I'm actually worried about," then RAID 5 is fine - if a drive craps out, just copy the important stuff off before rebuilding. If it's all super important, RAID 10 would be good. Just about any RAID can saturate 2.5GbE, so honestly not much thought needs to go toward speed unless you have some VMs or something running that need bandwidth. Of course most companies worry about IOPS more than sequential read/write speeds.
 