Server vs. NAS

yashi

A few weeks ago I wanted to buy a prebuilt NAS (Synology, QNAP, etc.). After learning about most of the products in the 4+ bay NAS market, I started reading about building my own via FreeNAS and the like.

My question at this point: why would I even buy a pure NAS product? The good ones need around 20W on standby and cost around €600+ for the rack cases, e.g. the RS812+.

For this price I can easily build a PC that is probably faster than the one I'm currently using as my main PC :p. Energy-wise it would be very comparable, but without any restrictions. Why on earth would I buy a prebuilt system?

Case: Inter-Tech IPC-1HU-1304L — €126
PSU: Seasonic SS-250SU — €50
Mainboard: ASRock B75M (dual PC3-12800U DDR3) — €54
CPU: Intel Celeron Dual-Core G1610T (35W) — €43.57
Cooler: EKL S85 Slim — €25
RAM: 4GB DDR3 — €25
= €299
 
There are many reasons that brand-name NASes make sense for some people. They may not make sense to you, but that does not mean they are not valid. I'm technically proficient; I could probably build my own box as you want to, but I wouldn't want to spend the time on one, building it, supporting it, caring for it or feeding it, when I can spend money on a box that has a smaller footprint, comes with support and just works. Now I know many of you will scream that a home-made box is just as good as brand name, and who needs support when you can fix it yourself? My time is money; I'd rather be spending it with my family or friends than toiling away fixing a home-made NAS. That's my choice though, if you are single and have all the time in the world, have at it.
 
@ashman

Don't get me wrong: in a business IT situation, I wouldn't even think about building one. Always buy supported, industry-grade stuff; this way you can blame others for failures. But the private situation is different.

> Why on earth would I buy a prebuilt system?

I guess I should have written "should" instead of "would".

> but I wouldn't want to spend the time on one, building it, supporting it, caring for it or feeding it

I see that more as a bonus. I want to learn as much as possible. Of course I don't want to care for it every day, but I have no issue with an installation process that takes a few days at the start.

> on a box that has a smaller footprint

The difference is negligible. Both will be 1U 19" rack cases. The length is a bit different, but as long as it fits in the rack I don't mind.
 
The only thing I can think of that you might want is an ECC-supporting CPU and motherboard with ECC RAM, and to use ZFS.
 
Yes, ECC RAM would be nice. The problem is that it increases the cost by a huge amount. The question is: is it really necessary? :/
 
1. Plug and play
2. Saves time
3. Doesn't require a tech-savvy person

HUGE negatives for me are the speed limitations, and the limited ability to fix it yourself if there are issues.

I had a Netgear NAS in use, and just finished putting together my own, which is actually in my 2nd desktop PC. It's WAY overkill for a NAS but doubles as an HTPC for streaming & ripping, desktop, MySQL RAM drive, etc.: i7-930, 24GB, Areca RAID card + 4GB cache, WD RE4 array, SSD array, and so on. I wanted the ability to have fast storage and archive storage all in one, but also to be able to tweak it and make it how I want. In the future I can add external storage to this and add many, many more drives to house my movies, pictures, and more.

If I were buying parts I'd have gone with a Xeon + ECC RAM, though.

A lot of people just want plug and play.
 
I do agree with ashman on this; ironically, I'm a storage specialist as my day job, managing multiple petabytes on many platforms.
But at home I use Netgear NASes. They sit there and serve data fast enough to stream to a couple of units at once. I am currently hitting some bottlenecks when copying data while streaming, though it's rare.
It all comes down to your key drivers, and for me it's minimum hassle with maximum frugality.
 
FreeNAS can use RAID or ZFS.

ZFS is not RAID. It is ZFS.

You need ECC RAM, no matter what, if you value your shit.

Pre-built NASes typically don't come with ECC, but they do have a plethora of features.

FreeNAS etc. don't have a plethora of features, but they are rock solid if you use rock-solid hardware. And hey... I can throw a 10Gb Ethernet card in my NAS4Free box if I want, which I did, and you can't do that with almost all pre-built boxes.

Plus, I won't lie: RAID sucks ass. Once you go ZFS you will kick yourself in the nutsack for wondering why you never did it earlier.
 
I would not fault a person for choosing either option; they both have their strong points.

Personally I went with two HP MicroServers and couldn't be happier. I use one for a home server and HTPC duties, and the other functions as my VM host for work-related stuff.
 
I have a question, and I'm not trying to flame anyone: is ZFS used in the enterprise, and if not, why? Is it ignorance, or admins being too lazy to learn it, or what?
 
> I have a question, and I'm not trying to flame anyone: is ZFS used in the enterprise, and if not, why? Is it ignorance, or admins being too lazy to learn it, or what?
ZFS is certainly used in enterprise environments, though I think there's still a fair number of old-school IT directors and sysadmins who scoff at the idea of a software RAID-like mass storage solution.
http://www.oracle.com/us/products/servers-storage/storage/nas/overview/index.htm
http://www.aberdeeninc.com/abcatg/enterprise-san.htm
http://www.pogostorage.com/products/nexenta/overview/index.php
 
> FreeNAS can use RAID or ZFS.
> Plus, I won't lie: RAID sucks ass. Once you go ZFS you will kick yourself in the nutsack for wondering why you never did it earlier.

> ZFS is not RAID. It is ZFS.

Isn't ZFS just a filesystem like FAT, NTFS, ext*, ...? It just has advanced capabilities for mass-storage configurations. But how is it not RAID? RAID just means "redundant array of independent disks", for which a RAIDZx configuration, as the name already states, surely qualifies.
 
The reason I went with a pre-built NAS instead of continuing to use the home-built storage server I made was that I tend to overbuild things, as I'm sure many here do as well. Once I overbuilt my storage server, I started wanting to use some of that extra power for other things, which inevitably leads to the occasional "oops", and suddenly the server is down for a bit while I try to fix whatever I "oopsed". And while I do that, the family doesn't have access to the file server.

So I bought a NAS so that even if I "oops" a server, the storage is still up.
 
@-Dragon-
> I tend to overbuild things, as I'm sure many here do as well. Once I overbuilt my storage server, I started wanting to use some of that extra power for other things, which inevitably leads to the occasional "oops", and suddenly the server is down for a bit while I try to fix whatever I "oopsed"

Oh boy, do I know this problem :D

My NAS already evolved into:

1 x Intel Xeon E3-1220LV2, 2x 2.30GHz, Socket 1155, tray (CM8063701099001)
1 x Kingston ValueRAM Hynix DIMM 4GB PC3L-10667R reg ECC CL9 (DDR3L-1333) (KVR13LR9S4/4HE)
1 x ASUS P8B-C/4L, C202 (Socket 1155, dual PC3-10667E DDR3) (90-MSVDM0-G0UAY00Z)
1 x EKL S85 Slim (87000000003)
1 x Supermicro 813MT-350CB, black, 1U, 350W

and it probably has twice the performance of my main PC, lol.
 
You'd do well to get yourself a Synology. Granted, due to the Linux base there's more you could do with them if you're willing to hop on the CLI and install a bunch of packages, but I find, for me at least, that as long as I have another VM host to do that on, the temptation to do it on the NAS is less, leading to better storage stability.
 
If you build a server it's usually cheaper per drive bay. Most NASes only have 2 or 4 drive bays, which is kind of pointless these days, especially the 2-bay ones. They're OK for someone who does not have that much data or a need to expand over time, I guess, but a server is always more flexible. A server is much more expensive up front, though.
 
If I decide to build my own server/NAS, I'm not sure whether to use ECC or standard RAM. I know most recommend ECC RAM and I understand why. The arguments are usually: it doesn't cost much more, it lets you sleep better, etc.

It's true the RAM itself doesn't cost much more, but the mainboards do cost A LOT more. Plus, they are not widely used, i.e. Linux support might not be that good. I found one post about the ASUS P8C WS (€170), which looked promising; the person was unable to install Ubuntu without disabling ACPI.

In short: I do not care if a tiny number of files gets corrupted. But this might be an issue if I decide to encrypt the hard drives. As far as I understand it, as long as the volume header stays fine, I will be able to decrypt the volume even if it is corrupted (of course, the corrupted parts will result in corrupted files). If that is true, I guess I don't need ECC RAM after all.
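
To sanity-check that reasoning, here's a toy Python sketch of sector-based encryption. This is a made-up per-sector stream cipher for illustration only, not real disk crypto like AES-XTS, and all names and sizes are invented; real schemes share the relevant property, though: sectors decrypt independently, so as long as the key material (the "header") survives, corrupting one ciphertext sector only garbles that one sector.

```python
import hashlib
import os

SECTOR = 16  # toy sector size in bytes (real disks use 512/4096)

def keystream(key: bytes, index: int) -> bytes:
    # Toy per-sector keystream: hash of key + sector index.
    # NOT secure crypto; only illustrates per-sector independence.
    return hashlib.sha256(key + index.to_bytes(8, "big")).digest()[:SECTOR]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(32)  # stands in for the key material kept in the volume header
sectors = [f"sector {i} data".encode().ljust(SECTOR) for i in range(4)]
cipher = [xor(s, keystream(key, i)) for i, s in enumerate(sectors)]

# Corrupt one ciphertext sector "on disk".
cipher[2] = os.urandom(SECTOR)

# Decrypt everything: only the corrupted sector comes back garbled.
for i, c in enumerate(cipher):
    plain = xor(c, keystream(key, i))
    print(i, plain == sectors[i])
# Output: sectors 0, 1, 3 print True; sector 2 prints False.
```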
 
> Isn't ZFS just a filesystem like FAT, NTFS, ext*, ...? It just has advanced capabilities for mass-storage configurations. But how is it not RAID?

ZFS is software RAID + LVM + a filesystem in one. Many of the cool things ZFS does are only possible through this vertical integration.
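
To illustrate what that integration buys, here's a toy Python sketch (none of this is actual ZFS code; the block contents and names are invented). Because the checksum is stored with the block pointer rather than next to the data, a read can tell which mirror copy is good and repair the bad one. A plain RAID-1 layer only sees two disagreeing copies and has no way to know which is correct.

```python
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

block = b"important data"
expected = checksum(block)        # kept in the block pointer, away from the data
mirror = [block, block]           # two copies, as on a 2-disk mirror

mirror[0] = b"important dat\x00"  # silent corruption on disk 0

def read_with_self_heal(mirror, expected):
    for copy in mirror:
        if checksum(copy) == expected:
            # Found a good copy: rewrite the bad one(s) from it ("self-heal").
            for j in range(len(mirror)):
                if checksum(mirror[j]) != expected:
                    mirror[j] = copy
            return copy
    raise IOError("no copy matches the checksum")

print(read_with_self_heal(mirror, expected))  # b'important data'
print(checksum(mirror[0]) == expected)        # True: disk 0 was repaired
```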
 
> If I decide to build my own server/NAS, I'm not sure whether to use ECC or standard RAM. I know most recommend ECC RAM and I understand why. The arguments are usually: it doesn't cost much more, it lets you sleep better, etc.
>
> It's true the RAM itself doesn't cost much more, but the mainboards do cost A LOT more. Plus, they are not widely used, i.e. Linux support might not be that good. I found one post about the ASUS P8C WS (€170), which looked promising; the person was unable to install Ubuntu without disabling ACPI.
>
> In short: I do not care if a tiny number of files gets corrupted. But this might be an issue if I decide to encrypt the hard drives. As far as I understand it, as long as the volume header stays fine, I will be able to decrypt the volume even if it is corrupted (of course, the corrupted parts will result in corrupted files). If that is true, I guess I don't need ECC RAM after all.

Yeah, the RAM is not all that much more expensive, but the server motherboards and server processors are what tend to set you back. TBH, my main server is running on a Core 2 Quad with 8GB of standard RAM on a desktop Intel board. It ran fine until a few years back, when it had to be shut down due to a long power outage. I've since added 200Ah worth of batteries to the UPS to avoid this. The issue is that, at random, the server will completely lock up during high IO and VMs will start to crash.

Whether or not ECC RAM would save me from that problem, I don't know. The issue happens maybe once a month, and I recently built a new file server which I'll be moving the IO over to, so I'm not overly concerned about it at this point. Any server I build from now on is ECC though, mostly for the "sleep better" factor.

It would actually be interesting to see a study on whether or not it's really better to go ECC. Sounds like a ++i vs i++ battle. :D
 
Most people recommend ECC RAM because it offers an extra level of protection (not saying it's foolproof; nothing is).

And that protection is from silent memory errors. If you theoretically had RAM that was producing errors, then ZFS's built-in checksums could suddenly say that good data on the disks is actually full of errors. ZFS would then try to self-heal, and with this potentially bad RAM you could essentially wipe all or some of the data on your arrays.

It is for this reason that people say to use ECC RAM with ZFS. It's a small and essentially unquantifiable chance, but the possibility does exist and it has happened to a few people on this board.
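
To make that failure mode concrete, here's a toy Python sketch. It's a deliberately simplified model, not ZFS code: real ZFS would reconstruct from redundancy rather than rewrite from the failing in-memory copy, but with bad RAM that reconstruction passes through the same flaky memory, so the toy collapses those steps into one.

```python
import hashlib
import random

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def flaky_ram(block: bytes) -> bytes:
    """Simulate a single bit flip, as a weak or stuck memory cell might cause."""
    b = bytearray(block)
    i = random.randrange(len(b))
    b[i] ^= 1 << random.randrange(8)
    return bytes(b)

disk = b"perfectly good data"
stored_checksum = checksum(disk)   # written when the block was created

in_memory = flaky_ram(disk)        # the scrub reads the block through bad RAM
if checksum(in_memory) != stored_checksum:
    # The scrub concludes the *disk* copy is bad and "repairs" it,
    # writing the corrupted in-memory data over good on-disk data.
    disk = in_memory

print(disk == b"perfectly good data")  # False: the repair destroyed good data
```

With ECC RAM the bit flip would have been corrected (or at least detected) before the checksum comparison ever ran, which is the whole argument for pairing ECC with ZFS.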
 
I've spent the last few days configuring my old PC into a file server (le old Celeron with PATA drives), and from that perspective it's really not worth the hassle.
 