Prebuilt NAS or Home Build

Violator

Not quite sure where to post this exactly, so I'll try here. I've been thinking about building a NAS for a while, and bouncing back and forth between just getting a prebuilt unit and having a bit more fun building my own. Would appreciate some feedback.

The NAS would primarily be a media server, possibly doing some light transcoding (the heavy stuff is done on my main PC and then copied over), and it would also be used for photography storage. I want a small, quiet solution with good performance (hence the CPU choice). The Reds will be in a FreeNAS RAID5 equivalent, and I've read about ZFS loving a ton of RAM (hence 16GB). However, if RAID is not the best option (I want data security) please let me know.

If I were going with the home build, I was looking at:

Corsair Builder Series CXM 430W Modular 80 PLUS Bronze Certified ATX/EPS PS
Corsair CMV16GX3M2A1333C9 Value Select 16GB (2x8GB) DDR3 1333
Fractal Design Node 304 Case
Western Digital Red 3TB Desktop SATA Hard Drive for NAS (OEM) x 4
Intel Core i3 3220 Dual Core CPU (3.30GHz, Socket 1155, 3MB Cache, Ivy Bridge, 55W)
Intel PRO/1000 GT Desktop Adapter - Network adapter - PCI - EN
Gigabyte SKT-1155 H61N-D2V Mini-ITX Motherboard

I have read about some issues/poor performance with Realtek NICs and FreeNAS, so I want an Intel NIC, but they are very difficult to find on mini-ITX boards (hence the adapter card). This was also the only reasonably priced board I could find that had a PCI slot (I'd prefer PCI-E, but hey-ho). Other options are welcome!

So is this horribly unbalanced/expensive/overkill? I am budgeting between £800-900 (pounds not dollars!)
 
Home build every time.

Also, the new Intel Celerons are actually quite good; you could use one of them. They are the same chips as the i3s, just with some features disabled.

You could also use Debian or Ubuntu Server for the OS; that way you can use any NIC.

Also, just so you know, RAID IS NOT BACKUP. If you want to protect your data you will need a separate drive to back up to. RAID is simply redundancy, so if one drive fails the others still have the data.
 
Anything beyond a Celeron/Pentium for a basic NAS build is overkill. Also, why Ivy Bridge when you can go Haswell? I have a similar setup to yours with an i3 4340 + ASRock Z87X ITX (Intel NIC) + 6x WD Red 3TB, serving exactly what you mentioned. It was originally meant to be my primary system, but I had a change of plans since I needed more expansion slots. The system idles at 18W and under full load it draws about 85W. If I were doing a dedicated NAS build now, I would definitely get a passive low-power PSU, 4GB of RAM instead of 16GB, and probably a Celeron/Pentium, since they offer 90% of the same performance for $50 to $100 less.
 
So there are a few things you need to decide on before you can really choose one way or the other, as far as I can see.

Transcoding: if you want to do any, then you'll need a CPU that can handle it, prebuilt or DIY. That puts you in a particular category of prebuilt, which is mid-range expensive and up.

Flexibility: how important is it to you? Are you going to want to swap out parts down the road, such as the CPU, motherboard, or RAID controller? Obviously you can't do most of these things with a prebuilt, so DIY is the way to go if this is what you want.

I have almost half a dozen prebuilt NASes from different manufacturers, and they all serve different purposes. Once upon a time I liked mucking about with hardware, but at this stage in my life I have too many other things to do and simply don't have the time or the inclination. However, I do see the value in it if it's your thing, and I am not against it. For me, a NAS sits in the corner connected to the network, serving up media whenever I want; it's a set-and-forget type of thing and it takes little care and feeding. There are some downsides to prebuilt NASes: I had a power supply die on the first NAS I ever bought about six years ago and had to send it off to the manufacturer for repair, which was a pain, but since then it's never happened to any of my other NASes. I guess they are better built and last longer now.
 
You said ZFS, so I just thought I would bring up that ZFS really needs a UPS (as do all "servers") and ECC support on the CPU, motherboard, and RAM.

ECC means that ZFS scrubbing and data rot detection and repair can work as they should. If you do go ZFS, ECC is recommended.
 
Thanks for the advice so far :) I must say I'm more confused than ever now. Let me address some individual points.

Spazturtle said:
Also, the new Intel Celerons are actually quite good; you could use one of them. They are the same chips as the i3s, just with some features disabled.

Any specifically?

You could also use Debian or Ubuntu Server for the OS; that way you can use any NIC.

FreeNAS seemed best suited to my needs and fairly simple to use.

Also, just so you know, RAID IS NOT BACKUP. If you want to protect your data you will need a separate drive to back up to. RAID is simply redundancy, so if one drive fails the others still have the data.

I'm aware RAID isn't 'backup' as such, but it does offer redundancy and usually some speed improvement, depending on the RAID type.

Agnesis said:
Anything beyond a Celeron/Pentium for a basic NAS build is overkill. Also, why Ivy Bridge when you can go Haswell? I have a similar setup to yours with an i3 4340 + ASRock Z87X ITX (Intel NIC) + 6x WD Red 3TB, serving exactly what you mentioned. It was originally meant to be my primary system, but I had a change of plans since I needed more expansion slots. The system idles at 18W and under full load it draws about 85W. If I were doing a dedicated NAS build now, I would definitely get a passive low-power PSU, 4GB of RAM instead of 16GB, and probably a Celeron/Pentium, since they offer 90% of the same performance for $50 to $100 less.

I was trying to keep costs down a 'little' by not going with a Haswell CPU/mobo. I also went with more RAM because what I've read about FreeNAS and its RAID equivalent is that it's very RAM hungry. Maybe I've misread that :)
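(The rule of thumb I keep seeing quoted is roughly 1GB of RAM per TB of raw storage, with about 8GB as a practical floor for FreeNAS - treating that as a guideline rather than a hard requirement, a quick sanity check looks like this:)

```python
# Commonly quoted ZFS/FreeNAS sizing rule of thumb (a guideline, not a hard requirement):
# roughly 1GB of RAM per TB of raw storage, with ~8GB as a practical minimum.
raw_tb = 4 * 3                    # four 3TB WD Reds
suggested_gb = max(8, raw_tb)     # -> 12GB, so 16GB leaves some headroom for the ARC cache
print(f"~{suggested_gb}GB suggested for {raw_tb}TB of raw storage")
```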

ashman said:
Transcoding: if you want to do any, then you'll need a CPU that can handle it, prebuilt or DIY. That puts you in a particular category of prebuilt, which is mid-range expensive and up.

Flexibility: how important is it to you? Are you going to want to swap out parts down the road, such as the CPU, motherboard, or RAID controller? Obviously you can't do most of these things with a prebuilt, so DIY is the way to go if this is what you want.

I can't decide whether I want it to be able to transcode or not. If I don't, then it's much more straightforward and I'd probably look straight at a Synology 413 (or similar). If I do, then I'm leaning more towards the home build, as transcoding anything over 720p takes some grunt.

Liggywuh said:
You said ZFS, so I just thought I would bring up that ZFS really needs a UPS (as do all "servers") and ECC support on the CPU, motherboard, and RAM.

ECC means that ZFS scrubbing and data rot detection and repair can work as they should. If you do go ZFS, ECC is recommended.

If I 'have' to go ECC for that, then that really rules out a custom build; ECC is not cheap.
 
ECC doesn't cost as much extra as everyone seems to expect. Maybe $30 more all up, assuming you weren't going for a budget motherboard originally.

It is hard to find mini-ITX boards that support ECC, though; there are only a few of them.

If you want data integrity you really should go with ECC, and if you decide not to, you probably shouldn't use ZFS.

You should also use RAID6; RAID5 is dead with the high-capacity drives we have today. If you have to replace a drive, there's a very high probability of encountering another read error during the rebuild - which means with RAID5 you have no parity left to correct that error.
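As a back-of-envelope illustration (a sketch assuming the commonly quoted 1-in-10^14 unrecoverable read error rate for consumer drives; real-world rates vary):

```python
# Rough chance of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID5 array of 4x 3TB drives.
# Assumption: consumer-drive URE rate of 1 error per 1e14 bits read.
ure_per_bit = 1e-14
bits_read = 3 * 3e12 * 8                     # three surviving 3TB drives, in bits
p_hit = 1 - (1 - ure_per_bit) ** bits_read
print(f"Chance of a URE during the rebuild: {p_hit:.0%}")   # roughly 50% under these assumptions
# With RAID5 that error is unrecoverable; RAID6 still has a second parity block to fall back on.
```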

Also, why PCI?
 
ECC doesn't cost as much extra as everyone seems to expect. Maybe $30 more all up, assuming you weren't going for a budget motherboard originally.

It is hard to find mini-ITX boards that support ECC, though; there are only a few of them.

If you want data integrity you really should go with ECC, and if you decide not to, you probably shouldn't use ZFS.

You should also use RAID6; RAID5 is dead with the high-capacity drives we have today. If you have to replace a drive, there's a very high probability of encountering another read error during the rebuild - which means with RAID5 you have no parity left to correct that error.

Also, why PCI?

ECC isn't just the memory though, is it? You've got to make sure the mobo/CPU support it too, and that limits options (and it's definitely more expensive) :)

I must admit I haven't kept up with RAID the last few years; RAID 5 always used to be the best option when you had 3 or more drives. *edit* Had a quick look at RAID 6 - with 4 drives you appear to lose half your capacity??

PCI? It was the only reasonably priced mini-ITX mobo I could find that had a PCI expansion slot; I'd prefer PCI-E but I couldn't find one.
 
IMO look into unRAID. http://lime-technology.com/technology/

I will say I am a noob about RAID, as this is my first time building a NAS. I was deciding between building a traditional RAID5 setup and ZFS RAIDZ1. Then I stumbled upon unRAID. But before all this, I had decided to go the custom home-built route due to flexibility and cost. You will find that a home-built NAS is much cheaper than a ready-to-go unit, as a home-built NAS WITH HARD DRIVES generally runs about the same as just the pre-built unit without drives.

But regarding the RAID I chose, I went with unRAID because in the end I was debating primarily against a ZFS system like FreeNAS. The redundancy/parity side of unRAID and RAIDZ is essentially the same for my application: I wanted a one-drive failure tolerance. unRAID won for me because in the case where one drive fails, unRAID and RAIDZ are equal. The advantage is that the data is not striped in unRAID, meaning if one drive fails you can still access the files on the remaining drives as they are. Also, if a second drive fails, you are screwed in RAIDZ and your whole array is toast. In unRAID, you would only have lost the data on those two drives, and everything else is still intact and usable.

unRAID is also very light on system requirements. My build is a bit of an overkill, going with a 4130T and 8GB of RAM, but that's way more than enough for unRAID. ZFS wants more RAM per TB of storage you use.

Performance-wise, I'm not really sure which is better, but I've not really had any issues with unRAID. It works great and isn't hard to set up.

The downside is that unRAID has licensing, so generally speaking one would want to at least get the Plus version to raise the limit on how many drives it can handle and get better user share security.

The hardware you chose is pretty decent, although I would recommend an ASRock board because there are a few that have 8 SATA ports; most other boards only have 6.
 
Home build, unRAID.

Case closed.


Let us know if you need help.


Use the Assassin HTPC blog to set up unRAID. If you need any add-ons like Plex, SAB, CP, or SB, I can give you the links to those guides on Lime Technology's forums.
 
unRAID? Sorry, but for the money that license costs, I can buy another 3TB drive to expand my storage and go with a free but still robust OS.
 
IMO look into unRAID. http://lime-technology.com/technology/

I will say I am a noob about RAID, as this is my first time building a NAS. I was deciding between building a traditional RAID5 setup and ZFS RAIDZ1. Then I stumbled upon unRAID. But before all this, I had decided to go the custom home-built route due to flexibility and cost. You will find that a home-built NAS is much cheaper than a ready-to-go unit, as a home-built NAS WITH HARD DRIVES generally runs about the same as just the pre-built unit without drives.

But regarding the RAID I chose, I went with unRAID because in the end I was debating primarily against a ZFS system like FreeNAS. The redundancy/parity side of unRAID and RAIDZ is essentially the same for my application: I wanted a one-drive failure tolerance. unRAID won for me because in the case where one drive fails, unRAID and RAIDZ are equal. The advantage is that the data is not striped in unRAID, meaning if one drive fails you can still access the files on the remaining drives as they are. Also, if a second drive fails, you are screwed in RAIDZ and your whole array is toast. In unRAID, you would only have lost the data on those two drives, and everything else is still intact and usable.

unRAID is also very light on system requirements. My build is a bit of an overkill, going with a 4130T and 8GB of RAM, but that's way more than enough for unRAID. ZFS wants more RAM per TB of storage you use.

Performance-wise, I'm not really sure which is better, but I've not really had any issues with unRAID. It works great and isn't hard to set up.

The downside is that unRAID has licensing, so generally speaking one would want to at least get the Plus version to raise the limit on how many drives it can handle and get better user share security.

The hardware you chose is pretty decent, although I would recommend an ASRock board because there are a few that have 8 SATA ports; most other boards only have 6.

This is a very naive post that does not properly compare what Lime calls 'unRAID' with ZFS's parity options.

Period, end of story, can brook no argument: ZFS is better at data integrity and keeping your data safe, including in the case of drive failures (with the one caveat being that you must set it up to be safe -- for instance, you mention RAIDZ, which can only handle one disk death; true, but that's why there's RAIDZ2 and RAIDZ3, as well as the ability, and good practice, of using multiple vdevs, further spreading the failure domain). A properly configured ZFS system on, just as a random example, 21 disks could survive the death of anywhere from 3 to 9 drives, depending on which ones die, without losing a /single block of data/. unRAID would lose tons of files in this situation. Other files would be intact, sure, but ZFS lost nothing and unRAID lost TBs of data.
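(One hypothetical way to lay out those 21 disks, purely for illustration - the exact vdev widths and parity levels here are an assumption:)

```python
# Hypothetical layout for the 21-disk example: 3 RAIDZ3 vdevs of 7 disks each.
vdevs, parity = 3, 3
always_survivable = parity            # ANY 3 failures are survivable, wherever they land
best_case = vdevs * parity            # up to 9 failures if they spread out 3 per vdev
smallest_fatal = parity + 1           # but 4 dead disks in the SAME vdev loses the pool
print(always_survivable, best_case, smallest_fatal)   # 3 9 4
```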

unRAID also does not have on-the-fly, always-on checksumming, so it is susceptible to bit rot in a way ZFS is not.

I'm not suggesting you shouldn't use unRAID - especially in a home environment, especially if you have a hodge-podge of disks to throw at it, and especially if the data in question is of minimal value to you, unRAID may be easier to use and a very decent solution to your problem. Just don't compare its data integrity to ZFS. It will lose. Every. Single. Time.

Lime has made a good go on their site of convincing you that unRAID's method of data protection is "better" than RAID, but it's just marketing bullshit. unRAID is probably the least safe solution from a 'keep your data' perspective, short of literally just saving files to individual hard disks with no form of RAID or protection whatsoever. It is one step above that, which is still many steps below ZFS. But it is 'good enough' for some home users, and if that includes you, that's fine. Just don't compare it to ZFS as if it were remotely as safe. :)
 
http://www.newegg.com/Product/Product.aspx?Item=N82E16813135342
This is a board with the Celeron 1037U soldered onto it (basically an Ivy Bridge dual-core at 1.8GHz) in a mini-ITX form factor. It has a PCI Express x16 slot and 4 SATA ports, and would be a good starting point at $80 if you wanted to stay on the low-cost side.

I would say that if you want to do transcoding you might want to stay away from the prebuilt NASes, as they just do not have the horsepower to run Plex full bore. I personally have a DS1512+ (the one with the x86 dual-core Atom), and while it is great and can do Plex, when you start streaming to more than two devices with transcoding the performance really suffers. The device itself is awesome as a NAS and works flawlessly in all other respects. One solution, if you wanted to go this kind of route, would be to have another computer be the Plex server and link back to the storage on the Synology, and you should be golden.
 
Thanks again for all the advice, everyone :) I'm now leaning towards a prebuilt; the home build seems far more complicated than I'd imagined.

rhansen5_99 said:
I would say that if you want to do transcoding you might want to stay away from the prebuilt NASes, as they just do not have the horsepower to run Plex full bore. I personally have a DS1512+ (the one with the x86 dual-core Atom), and while it is great and can do Plex, when you start streaming to more than two devices with transcoding the performance really suffers. The device itself is awesome as a NAS and works flawlessly in all other respects. One solution, if you wanted to go this kind of route, would be to have another computer be the Plex server and link back to the storage on the Synology, and you should be golden.

I already have a good i5-based 'main' PC that currently runs Mezzmo and does all the transcoding; the idea was that I'd offload all of this onto a NAS and give myself far more storage (and a RAID solution at the same time). I'm very tempted now to just get myself a Synology DS414, chuck 4 WD 3TB Reds in, and be done with it.
 
I have read about some issues/poor performance with Realtek NICs and FreeNAS, so I want an Intel NIC, but they are very difficult to find on mini-ITX boards (hence the adapter card). This was also the only reasonably priced board I could find that had a PCI slot (I'd prefer PCI-E, but hey-ho). Other options are welcome!

I have a quad-port Intel Pro NIC (a long PCI-E x4 card) in both my mini-ITX builds, using the PCI-E x16 slot. Works great.

Personally I would get a Synology Atom-based NAS, maybe $1k total with 8TB of disks, as it's less of a pain, super compact, quiet, and does a host of other stuff extremely well. There's a large community of people specifically around their devices as well.
 
I now think I'm leaning towards a Synology DS414. It's reasonably priced and will do what I'm after (keeping transcoding on my main PC and just copying over).
 
So I got the DS414; beautiful little box. Plugged in 4x 3TB Reds and off we go. 15 hours into RAID initialisation (33%!).

Am I correct in assuming it's OK to write to the disks during this phase, just that it'll be slower? It doesn't affect the parity, does it?
 
That's a good question. I didn't touch the volume until it was done; I think mine took 15 hours. Didn't want to mess with it.
 
That's a good question. I didn't touch the volume until it was done; I think mine took 15 hours. Didn't want to mess with it.

I can't find a definitive answer. I have already written some data to the array; it went in at about 60-70MB/s. I would have *thought* it wouldn't allow you to write to it if it wasn't 'ready' or if doing so would mess up the parity info.
 
How's it working out for you, Violator? I am looking into getting the same setup.

Is there a reason why you went with 3TB drives instead of 4TB? Cost?

 
Is there a reason why you went with 3TB drives instead of 4TB? Cost?

While 4TB drives give you the maximum data density, the downside is that when you have to rebuild an array after a drive failure, it takes much longer to rebuild a 4TB drive than a 3TB one. And if you're buying hard drives from the same lot and there is a manufacturing defect, the drives are more likely to fail within hours or days of each other.

Thus going with a 4TB setup over a 3TB setup increases the probability that you will lose data during a drive-failure event. That is one of the reasons most enterprise users lean towards smaller-capacity drives and simply increase the number of them.
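As a rough illustration of the rebuild window (a sketch assuming a sustained rebuild rate of around 100MB/s, which varies with the controller, load, and the drives themselves):

```python
# Rough rebuild-window comparison for a single failed drive.
# Assumption: ~100MB/s sustained rebuild rate.
rebuild_rate_mb_s = 100
for capacity_tb in (3, 4):
    hours = capacity_tb * 1e6 / rebuild_rate_mb_s / 3600
    print(f"{capacity_tb}TB drive: ~{hours:.1f}h running degraded")
# ~8.3h vs ~11.1h - a longer window in which a second failure or read error can strike.
```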
 