Help on pre-built NAS question

sparks

Guys, are all the plug-in apps on these appliances just a waste?
I was looking at a Synology and didn't have a clue if I would ever use any of them.
A buddy said no, get a Buffalo or something without all that junk, way cheaper.
Input on this, please.
 
What exactly are you looking for in a NAS? There are a lot of threads around here with pros and cons of pre-built NASes versus building your own.
 
A lot of them offer plug-in packages that add anything from eMule to antivirus.
I was wondering if anyone actually uses these things, or if they're just something to put in the ads.
 
It depends on what you are looking for in a NAS. I wouldn't put Buffalo in the same category as Synology or QNAP either; Buffalo is a very cheap, bottom-end, no-frills storage solution, while the others come with a full OS and a whole host of features, and their build quality and support are better.
 
How about the Thecus N5550? Atom processor, 5 bays,
priced around $400.
 
Pre-built vs DIY NAS:

Pre-built is faster to deploy.

Pre-built is mainly maintenance-free (automatic updates of services and firmware).

Pre-built sometimes has fewer features than DIY: Debian or Red Hat has a lot more packages you can install.

Pre-built has lower-end hardware (mainly Atom), while you can get better hardware when building a custom NAS (but the extra power can be useless unless you go with ZFS).

Pre-built uses less power (because it has less powerful hardware).

Performance is about the same, since you are limited by Gigabit Ethernet (quick math below).

The BIG difference: pre-built uses the EXT4 file system, while you can use ZFS (OpenIndiana, Solaris, FreeNAS, etc.) on a custom one. But ZFS is mainly used in larger-scale storage solutions.
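
Quick math on that Gigabit ceiling (rough numbers; the overhead figure is an assumption, and real throughput varies with protocol and tuning):

```python
# Back-of-the-envelope math for the "limited by Gigabit Ethernet" point.
# The 10% overhead figure is a rough assumption, not a measured value.

link_bits_per_s = 1_000_000_000               # GbE line rate
raw_mb_per_s = link_bits_per_s / 8 / 1e6      # 125 MB/s theoretical ceiling
usable_mb_per_s = raw_mb_per_s * 0.90         # ~112 MB/s after framing/TCP/SMB

print(f"theoretical {raw_mb_per_s:.0f} MB/s, usable ~{usable_mb_per_s:.0f} MB/s")
# A single modern HDD already does 100+ MB/s sequential, so the wire,
# not the array, is the bottleneck for most single-client transfers.
```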
 
Well, after posting some questions on the company sites I am a bit put off with them.
I was wanting to use individual drives, mainly for storing backups and movies.
But all of them roll JBOD into a single volume. I asked why even have it then; I would just go RAID 0.
I got some pretty sorry-ass answers to that one.
 
Pre-built vs DIY NAS:

Pre-built is faster to deploy.
Can you explain what aspect of deployment you are referring to? Is it sticking the disks in and making sure the disks you chose work, or adding users, creating shares, adding it to your domain controller, etc.? This would be more accurate with the word "usually".

Pre-built is mainly maintenance-free (automatic updates of services and firmware).
The same can be applied to a custom-built system. The OS can be set up to update automatically if you so choose.

Pre-built sometimes has fewer features than DIY: Debian or Red Hat has a lot more packages you can install.
They always have fewer features.

Pre-built has lower-end hardware (mainly Atom), while you can get better hardware when building a custom NAS (but the extra power can be useless unless you go with ZFS).
The one you almost got right. Why can only ZFS make use of good hardware?

Pre-built uses less power (because it has less powerful hardware).
Again, better with the word "usually". One could also build a low-power system.

Performance is about the same, since you are limited by Gigabit Ethernet.
Can you explain how a pre-built Atom system (as per item 4 above) will have the same performance calculating RAID 6 parity as a current Xeon or i7 processor? Have you even heard of dm-crypt or TrueCrypt?

The BIG difference: pre-built uses the EXT4 file system, while you can use ZFS (OpenIndiana, Solaris, FreeNAS, etc.) on a custom one. But ZFS is mainly used in larger-scale storage solutions.
Not all pre-builts use EXT4. Another place for the word "usually". And they usually run Linux md RAID underneath. And why do you think all custom-built NASes use ZFS? It is only one of the many, many filesystems available. You should skim through the 10+TB thread for the various setups and some of the filesystems in use.
 
Well, after posting some questions on the company sites I am a bit put off with them.
I was wanting to use individual drives, mainly for storing backups and movies.
But all of them roll JBOD into a single volume. I asked why even have it then; I would just go RAID 0.
I got some pretty sorry-ass answers to that one.

Huh? I don't think this is true. What is the question exactly?
 
Can you explain what aspect of deployment you are referring to? Is it sticking the disks in and making sure the disks you chose work, or adding users, creating shares, adding it to your domain controller, etc.? This would be more accurate with the word "usually".

On a pre-built file server like a Synology, you don't have to assemble the hardware, install the OS, and install all the services you want. It's mainly plug and play. Sure, you have to add the users, join the DC, create shares, etc., just like on any OS.



The same can be applied to a custom-built system. The OS can be set up to update automatically if you so choose.

Sure, for the OS, but I doubt Apache, MySQL, and other web services update themselves automatically without any user input and/or trouble. I've always updated my web services manually on Linux.

The one you almost got right. Why can only ZFS make use of good hardware?

From what I've studied of ZFS, this file system requires more power than any other. It must have a lot of ECC RAM, which implies a Xeon CPU, and an SSD for read/write caching. Conventional file systems such as NTFS and EXT4 don't calculate end-to-end checksums and don't do the on-the-fly compression you can enable on ZFS. Those features cost CPU time that an Atom cannot afford while keeping good performance (or maybe it can, but that's not what the CPU is intended for).
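
To make that concrete, here is a minimal sketch of those two features, end-to-end checksums and on-the-fly compression. This is only the concept, not ZFS's implementation (ZFS uses fletcher4/SHA-256 checksums and LZJB/gzip compression per block internally; the dict format and function names here are invented for illustration):

```python
import hashlib
import zlib

# Concept sketch only: every "block" is compressed on write and stored
# with a checksum of the original data; every read decompresses and
# re-hashes, so silent corruption is caught before data is returned.

def write_block(data: bytes) -> dict:
    return {
        "payload": zlib.compress(data),                # on-the-fly compression
        "checksum": hashlib.sha256(data).hexdigest(),  # end-to-end checksum
    }

def read_block(block: dict) -> bytes:
    data = zlib.decompress(block["payload"])
    if hashlib.sha256(data).hexdigest() != block["checksum"]:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

blk = write_block(b"movie bytes " * 4096)
assert read_block(blk) == b"movie bytes " * 4096
# Both the hash and the compression burn CPU cycles on every I/O,
# which is the cost being discussed above.
```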

Conventional file systems can make use of good hardware; that's not the point. As you said, calculating RAID 6 parity can be a CPU-demanding task, and md RAID can benefit from better hardware for such an array. But once again, RAID 6 under EXT4 doesn't require a minimum of 8 GB of ECC RAM and a Xeon CPU.

Again, better with the word "usually". One could also build a low-power system.

It's hard to consume less than an Atom ;) But yes, anybody can use a small low-power CPU for their file server. It's not exclusive to pre-built systems.

Can you explain how a pre-built Atom system (as per item 4 above) will have the same performance calculating RAID 6 parity as a current Xeon or i7 processor?

For relatively small arrays, you really don't need a Xeon or a Core i7 to calculate RAID 6 parity with mdadm, and you know it perfectly well. Increase the size of the array by a factor of 4-5 and yes, maybe. But I don't think Sparks wants a large-scale file server of 30 TB or more. The 2.13 GHz Atom is plenty to calculate parity on a RAID 6 array in a DS1513+; a faster CPU would not bring any benefit. You would have to increase the number of bays and add more drives to see the limits of an Atom.
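
For the curious, the per-stripe math being argued about looks like this: P parity is a plain XOR across the data chunks, and Q is a Reed-Solomon syndrome over GF(2^8). A pure-Python sketch of the operation (mdadm's raid6 engine computes the same algebra in hand-tuned SIMD assembly, so this toy says nothing about real throughput):

```python
# Toy RAID 6 per-stripe parity: P = XOR of the data chunks,
# Q = sum of g^i * D_i over GF(2^8) with generator 2, polynomial 0x11d.

def gf_mul2(x: int) -> int:
    """Multiply a GF(2^8) element by the generator 2."""
    x <<= 1
    return (x ^ 0x11D) & 0xFF if x & 0x100 else x

def raid6_parity(chunks):
    """Compute (P, Q) for one stripe of equal-sized data chunks."""
    p = bytearray(len(chunks[0]))
    q = bytearray(len(chunks[0]))
    for chunk in reversed(chunks):      # Horner's rule builds Q = sum g^i * D_i
        for i, byte in enumerate(chunk):
            p[i] ^= byte
            q[i] = gf_mul2(q[i]) ^ byte
    return bytes(p), bytes(q)

# One stripe of a 5-bay RAID 6: three 4 KiB data chunks -> P and Q chunks.
data = [bytes([d]) * 4096 for d in (1, 2, 3)]
p, q = raid6_parity(data)
```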


And why do you think all custom-built NASes use ZFS? It is only one of the many, many filesystems available. You should skim through the 10+TB thread for the various setups and some of the filesystems in use.

I said "you can use ZFS". ZFS is, I think, the best file system available right now and certainly the more secure. I said ZFS because it's the first one that came in mind at the time of writing, but I could have said others. And ZFS should be envisaged when building a custom file server if you have very valuable data.
 
Ghost26,
Ok. Now I get it. You clearly have been brainwashed by the ZFS camp. You need to lift your head up from your dogma and look around for a minute.
 
Lol :p

Nah I haven't been brainwashed.

I've done my own research on ZFS, and that's the conclusion I've come to.

ZFS is a very nice file system with great features, but it generally requires more power, especially more RAM. But some have successfully installed ZFS on the little HP N5xL and got good performance out of it after upgrading the RAM.

Finally, all I can say is that both pre-built and custom-built NASes have their advantages and disadvantages.

But if a pre-built NAS is being considered, Synology and QNAP are my favorites. They have a lot of features and are very nice to use. Still pricey, because they are the high end of pre-built NASes, but you always get what you pay for. The OSes on these NASes are really nice and pack a lot of services, such as FTP/HTTP servers, a cloud application (a kind of Dropbox), VPN, virtualization support, many multimedia features, backup applications, and other network services like DHCP and DNS.
 
Lol :p

Nah I haven't been brainwashed.

I've done my own research on ZFS, and that's the conclusion I've come to.

ZFS is a very nice file system with great features, but it generally requires more power, especially more RAM. But some have successfully installed ZFS on the little HP N5xL and got good performance out of it after upgrading the RAM.
ZFS is a capable filesystem, but saying it is the best is like saying one screwdriver is the best screwdriver. I did my research as well and decided against ZFS because of some of its restrictions that would not fit my requirements.

Finally, all I can say is that both pre-built and custom-built NASes have their advantages and disadvantages.

And with that, we are both in agreement.
 
I'm interested in the reasons that pushed you to go against ZFS :)

I'm currently writing an essay (I'm a student) on network storage, and this could be interesting. I have to write down all the pros and cons of the systems, including ZFS.
 
When I started building my NAS, ZFS was not supported under Linux. However, as my NAS grew, I have been looking at other filesystem and RAID options, and mdadm with EXT4 is still the best fit.

I will start a new thread with my requirements so as not to hijack this one.
 
I guess I am going to go RAID 5.
I asked Thecus and Synology, and both went bonkers when I said I want to use individual drives.
Then when I said a single lumped JBOD does not make sense, I would just go RAID 0, they talked down to me, saying they didn't understand: IF one drive goes down in RAID 0 you lose all of it, BUT with a combined JBOD the only time you will have NO data to restore is if the OS drive crashes. WHAT?
They liked to say concatenation is the best way. Wow, a big word that makes more sense now :)
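
For what it's worth, the distinction the reps were fumbling: RAID 0 stripes every file across all drives, so one dead drive takes everything, while concatenation mostly fills drives in sequence, so files sitting entirely on surviving drives remain readable. A toy model of that difference (placement rules invented purely for illustration):

```python
# Toy model of a one-drive failure under RAID 0 striping vs JBOD
# concatenation. Real allocators are messier; this shows the principle.

DRIVES, FILES = 4, 1000

def files_surviving(mode: str, dead_drive: int) -> int:
    alive = 0
    for f in range(FILES):
        if mode == "raid0":
            touched = set(range(DRIVES))     # striped: every file on every drive
        else:
            touched = {f * DRIVES // FILES}  # concatenated: ~one drive per file
        if dead_drive not in touched:
            alive += 1
    return alive

print("raid0:", files_surviving("raid0", dead_drive=2))  # 0 of 1000 recoverable
print("jbod :", files_surviving("jbod", dead_drive=2))   # 750 of 1000 recoverable
```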

They could not understand why anyone would want independent drives. I really pissed off the Synology people when I said how about two RAID 1s.
LOL

I asked some more, and it turns out that none of the modern NASes can support more than one virtual drive. So unlike the old NASes, where you could have individual drives at something like 192.168.50.1, 192.168.50.2, etc.,
the new ones can only support one address.
 
Greetings

Well, after posting some questions on the company sites I am a bit put off with them.
I was wanting to use individual drives, mainly for storing backups and movies.
But all of them roll JBOD into a single volume. I asked why even have it then; I would just go RAID 0.
I got some pretty sorry-ass answers to that one.

Why not just upgrade your PC or motherboard/CPU and use the old gear for a NAS? If you already have Windows running on it, then all you have to do is add the drives you want, give them each a drive letter, and make them shareable on the network. This would be really easy to administer, as it's just another PC, and you now have the JBODs available just how you wanted them. That's what I do hardware-wise with my old gear, with the exception that I take out the drives, add an old crappy disk for the OS, install Solaris, and put in 10 new drives for a ZFS RAID-Z2 array.

I asked some more, and it turns out that none of the modern NASes can support more than one virtual drive.

I'm guessing that they could if they wanted to, but they're probably trying to make it super simple for the great unwashed masses out there, so the poor things don't get confused.

From what I've studied of ZFS, this file system requires more power than any other. It must have a lot of ECC RAM, which implies a Xeon CPU, and an SSD for read/write caching.

I have not used any ECC RAM in any of my ZFS systems, so it's not an absolute necessity. I do agree, however, that it is highly desirable over ordinary RAM.

I've run Solaris 10 ZFS on an Opteron 146 with 8 GB of RAM and a 10-drive RAID-Z2 array of HD154UIs; performance, as was to be expected, was slow. I had default compression turned on, and uploading to the NAS ran at 8 MB/s and downloading at 12 MB/s. Upgrading the board to a Q6600 system increased upload speed to about 30-50 MB/s and download speed to about 60-70 MB/s. Using a ten-drive RAID-Z2 array of Toshiba DT01ACA300s under Solaris 11 with an i7-3820, I could upload to that setup at full gigabit speed (100-105 MB/s), but downloading, for some strange reason, only went at about 50 MB/s.

Ghost26,
Ok. Now I get it. You clearly have been brainwashed by the ZFS camp. You need to lift your head up from your dogma and look around for a minute.

It's not dogma, it's pretty much proven fact, and I suggest you read Vijayan Prabhakaran's thesis on IRON File Systems, in which he discusses Linux ext3, ReiserFS, JFS, XFS, and Windows NTFS and these file systems' inability to cope with faults such as latent sector errors and block corruption. Although ext4 is not mentioned, I presume it would be in the same boat as ext3.

It's well known that ZFS's first priority is data integrity, and the speed of the filesystem is further down the list, but speed isn't everything. For example, when I first installed XP I had the option of using FAT32 or NTFS, and although FAT32 was known to be faster, I used NTFS because it was a better FS; I use NTFS on my Win7 box only because I can't have ZFS on it. I use ZFS for my NAS because I want the most robust FS around, and since ZFS is the only one that fits that bill, that's what I use. Btrfs is not quite production quality yet, and NTFS/ext4/etc. are inferior from the data integrity standpoint.

I did my research as well and decided against ZFS because of some of its restrictions that would not fit my requirements.

That's unfortunate, but I'm guessing that if this restriction did not exist, you would most likely be using it.

I'm interested in the reasons that pushed you to go against ZFS :)
and
When I started building my NAS, ZFS was not supported under Linux.

This may change at some point in the future, and a robust ZFS filesystem on Linux would be a good thing to have, though not as good as having ZFS under Windows.

I'm currently writing an essay (I'm a student) on network storage, and this could be interesting. I have to write down all the pros and cons of the systems, including ZFS.

As far as ZFS goes, I suggest you read about its basic features. One of the "cons" is its inability to extend a RAID-Z/Z2/Z3 stripe by adding a hard drive. I get around this by saving up and buying a batch of 10 drives at a time for a RAID-Z2 array, as they sit comfortably in a tower case. I consider this just a minor nuisance because I am aware of the limitation beforehand, whereas people who get caught out by it afterwards, because they were not aware of it, probably consider it a major PITA.

A more serious problem: the "pro" of ZFS successfully avoiding the RAID-5 write hole is due to the fact that it is a COW (copy-on-write) filesystem, as is NetApp's WAFL. Unfortunately, the "con" this creates is that the filesystem gets very heavily fragmented, and how do you think a 95% slowdown (refer to the graph in figure 3) would grab your attention, especially in a business environment?
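
A toy model of the fragmentation mechanism: an in-place filesystem overwrites a block where it lives, while a COW filesystem writes each new version to fresh space and retires the old copy, so a randomly updated file ends up scattered across the disk. Illustration only; real ZFS allocation is far more sophisticated:

```python
import random

# Toy allocator: a 100-block file receives 500 random block updates.
# COW relocates each updated block to the next free address instead of
# overwriting it, so the file's block addresses scatter over time.

random.seed(1)
BLOCKS = 100
addr = list(range(BLOCKS))   # addr[i] = disk address of file block i
next_free = BLOCKS

for _ in range(500):
    i = random.randrange(BLOCKS)
    addr[i] = next_free      # COW: relocate instead of overwrite in place
    next_free += 1

# Number of contiguous runs: 1 means perfectly sequential.
runs = 1 + sum(1 for a, b in zip(addr, addr[1:]) if b != a + 1)
print(f"file is now in {runs} fragments (started as 1)")
```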

If you want to familiarise yourself with NTFS, I can highly recommend "Inside the Windows NT File System" by Helen Custer (ISBN 1-55615-660-X). In it you'll discover, for example, that even the much-maligned Windows software RAID will, upon detecting a bad block on a fault-tolerant volume such as a mirror or RAID-5 array, immediately mark that block as bad and re-create and re-allocate that cluster to another part of the disk, much like ZFS already does. In addition, you'll be surprised to find out why it doesn't do any of this on a hardware RAID array.

File systems are complex beasts, and unfortunately you have to educate yourself about their capabilities and limitations. More importantly, you have to figure out how they deal with different failure modes, and information about that aspect is usually the most difficult to obtain, or poorly documented. ZFS is still the most capable filesystem in this regard.

Cheers
 
Thank you very, very much, HobarTas, for this!!! I'll take a look at it for my essay :)

Thanks a lot :p
 
I guess I am going to go RAID 5.
I asked Thecus and Synology, and both went bonkers when I said I want to use individual drives.
Then when I said a single lumped JBOD does not make sense, I would just go RAID 0, they talked down to me, saying they didn't understand: IF one drive goes down in RAID 0 you lose all of it, BUT with a combined JBOD the only time you will have NO data to restore is if the OS drive crashes. WHAT?
They liked to say concatenation is the best way. Wow, a big word that makes more sense now :)

They could not understand why anyone would want independent drives. I really pissed off the Synology people when I said how about two RAID 1s.
LOL

I asked some more, and it turns out that none of the modern NASes can support more than one virtual drive. So unlike the old NASes, where you could have individual drives at something like 192.168.50.1, 192.168.50.2, etc.,
the new ones can only support one address.

The NAS appliances definitely have their place. They are easy to set up and manage, and the vendors provide support. Good ones are going to cost you more than the cheap ones, but they give a much better experience. The Synology units can do multiple RAID volumes no problem, so I do not know why you would piss them off by asking about it.

Using independent drives in a NAS appliance doesn't make sense. Why would you even do that? Same with JBOD. Synology lets you do it anyway:
http://www.synology.com/en-us/support/tutorials/512
Multiple raid volumes: http://www.synology.com/en-us/support/tutorials/558#t4_2

The DIY route is cheaper, and you have much more freedom to overbuild it. The downside is the learning curve. If you are not familiar with Linux/Solaris/whatever system you choose, you can get yourself into trouble pretty easily. Thankfully, there are great boards like this one that you can get help through.

I had a Synology DS1812+ with 8 3 TB drives. I loved how easy the Synology was to work with; I had SABnzbd, Sick Beard, and CouchPotato on it, along with their Audio Station (and used the iPhone and Android apps to stream).

I also have a DIY all-in-one VM lab with a 5-drive 3 TB OmniOS/napp-it VM (ZFS). I ended up selling the 1812+ and its drives because I really needed to scale down the hardware I had.

Both would saturate a 1-gig network connection. The Synology was 100x easier to set up and manage. You can also do expansion on the Synology (add drives to an existing array one at a time), and you can't do that on a ZFS system.
 
Is that the Synology adaptive RAID you are talking about?
They say it will use 100% of mismatched drive sizes as well.

How good is this?

I looked up the 412+ and it's one of the fastest NASes.
Funny that they released a 413 and are now coming out with a 414, and both are slower... maybe I don't understand the numbering lol
 
412+: the "+" means small business. It is on an Intel x86 platform.

The 413 and 414 are targeted more at home use and are on an ARM platform.

So for more advanced use, you want the "+" :)
 
Is that the Synology adaptive RAID you are talking about?
They say it will use 100% of mismatched drive sizes as well.

How good is this?

It's good. I have one. ;) SHR works well:

http://forum.synology.com/wiki/index.php/What_is_Synology_Hybrid_RAID?

I looked up the 412+ and it's one of the fastest NASes.
Funny that they released a 413 and are now coming out with a 414, and both are slower... maybe I don't understand the numbering lol

I agree. I don't understand why the ones coming out now are slower, but the 412+ still fetches a higher price and is x86. You can also bump the RAM on your own quite easily if you want. I get about 150 MB/s writes to it in LAG mode when I tested a bunch of server NFS mounts and iSCSI targets.
 
I got offered a new 413 for $450. Not sure if the extra $150 for the 412+ is worth it for home use.
 