NAS or new server

Joined: Oct 13, 2015 | Messages: 8
Hello everyone, I noticed a few posts on here that came up while doing Google searches and I was hoping someone could help me.


I currently have a home server running on a laptop; the screen was cracked so I removed it. The laptop has plenty of power, an i5 with 4GB of RAM, but it only has an 80GB SSD. I'm planning to set up some local storage in RAID 5 for backups and any large files I need to share or access over the internet. I'm planning to set up a VPN as well; I haven't tested this on my router yet, but it has an option to do so. I have two desktops, a Surface, a laptop, and my home server that will be accessing and backing up to it.

So the options I'm looking at right now are a standalone NAS that can be accessed by my current home server, or building out a new home server that can hold more drives. My debate is: will I notice substantially better read/write performance by using a home server for my storage versus accessing it on a NAS? The NAS wouldn't do anything except be storage; I'll do any work on the server and have it access the NAS.

My familiarity with technology is primarily in Linux, specifically the LAMP stack, although recently I have been working in a Windows environment due to a new job at a company that uses Microsoft everything. If I go with a home server I'm probably going to use ownCloud; I have set up ownCloud on a server a few times and love it.

My budget is undecided; money isn't a big deal. Pretty much keep it under $1k, but the lower the better. Anyway, thanks for any input. I have never set up storage in the "home" environment; I've always done it in an enterprise-like environment, and I think that would be a bit overkill for my home use.
 
A NAS generally has lower performance than local storage. This is because gigabit networking is ubiquitous and puts a bottleneck on sequential I/O performance (~110MB/s max). But in terms of IOps you can actually gain something, because a NAS often uses advanced technology such as ZFS, which thanks to ARC caching can deliver substantial gains in random I/O performance.

If about 100MB/s is enough for you, then the reliability advantages of a ZFS NAS are unparalleled. But if you demand higher performance, local storage or above-gigabit networking are options to consider. 10G copper ethernet (10GBaseT) is pretty expensive, though. Xeon-D boards that include dual 10G ethernet sell for just under $1000, such as this board:
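The ~110MB/s figure is just arithmetic on the link speed. A back-of-the-envelope sketch, where the ~10% protocol overhead is an assumption (actual overhead varies with TCP/IP tuning and whether you use SMB, NFS or iSCSI):

```shell
# Rough sequential-throughput ceiling for a network link.
# The 10% overhead figure is an assumed typical value, not a spec.
for link_mbps in 1000 10000; do
    raw=$((link_mbps / 8))          # bits -> bytes: 125 / 1250 MB/s raw
    usable=$((raw * 90 / 100))      # minus ~10% protocol overhead
    echo "${link_mbps} Mbit/s link: ~${usable} MB/s usable"
done
```

Which lands right at the ~110MB/s ceiling for gigabit, and roughly 1.1GB/s for 10GBaseT.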



http://www.servethehome.com/supermicro-x10sdv-tln4f-review-platform/
 
If I go with a home server I won't be looking at anything that extreme. I guess the real question I'm trying to answer is: will a NAS bottleneck when my home server is accessing it for important data?

Let's say I have something like a game server running on it and want to place the large map files on the NAS; will the NAS be able to access and provide these files fast enough? The only "home" NAS I have used was a piece of junk LG NAS back in 2005, so my past experience with a NAS is pretty poor. If I were only backing up my computers I would have pulled the trigger a long time ago, but I don't want to drop hundreds of dollars on something that is limited to being a backup device.
 
A game server generally does not access the texture information and such, but only the essential game data which can be very limited in size. That is, it might still require big files because they contain everything, but it might only access small portions within those huge files.

If this is indeed the case, you do not need high sequential performance at all.

You can set up a NAS very easily if you have a spare system with gigabit and a single hard drive; you can begin using it with a dedicated NAS product like FreeNAS, NAS4Free or ZFSguru. Those three are capable of using ZFS, which I strongly recommend. But there are also options without ZFS, such as OpenMediaVault, which is based on Linux instead of BSD.
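For a rough idea of what those appliances do under the hood, a minimal single-disk ZFS setup comes down to a handful of commands. This is a sketch only: the device name (ada1), the pool name (tank) and the NFS export are all placeholders, and the FreeNAS/NAS4Free web UIs wrap essentially these same steps.

```shell
# Single-disk ZFS pool (no redundancy yet - add a mirror/raidz later).
zpool create tank /dev/ada1
zfs set compression=lz4 tank     # cheap compression, usually a net win
zfs create tank/share            # one dataset to export over the network
zfs set sharenfs=on tank/share   # publish it via NFS
zpool scrub tank                 # periodic scrub verifies every checksum
```

These commands need root and a real spare disk, so treat them as an illustration of the workflow rather than something to paste in blindly.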
 
Well, the game server is just an example off the top of my head; I mean any situation where the data is needed for the server to function. Will the NAS slouch in performance and fail to deliver the file in time for the server to operate?

I'm trying to think of a better example, but basically I want to know if I can take EVERYTHING that would operate on my server and dump it onto the NAS. Since I'm working with a relatively small SSD right now, I'd like to keep only my essentials on it and put everything else on the NAS.

I'm referring to a purchased NAS box, like QNAP. If I set up my own NAS I'm back at square one and will just set up a new home server and virtualize the storage and server functions.
 
"I want to know if I can take EVERYTHING that would operate on my server"

Do you plan to list "everything"? So we know what it is you are thinking?
 
The VMs that use my FreeNAS box as an iSCSI target are quite a bit faster than the ones that use a local mirrored Windows Storage Space. Sequential performance might be a bit lower but IOPS are way higher due to the ARC.
 
I think what I've decided to do is go with a lower-end NAS that will be storage/backup only, and in a few months to a year build out a decent new server. That way I can use the NAS as a backup for the server, because if I serve my data directly off the NAS I run the risk of having no backup unless I back up the NAS as well. My data isn't important enough to be backed up to a NAS and then have a backup of that NAS. LOL!

Anyone have personal experience with those pre-built NAS boxes like QNAP? I'm looking to put in four 2 or 3 TB drives (I don't have a ton of data) and run it in RAID 5.
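For sizing, RAID 5 gives you (N-1) disks' worth of usable space, since one disk's worth of capacity goes to parity. A quick sketch for the two drive sizes mentioned:

```shell
# Usable RAID5 capacity = (number of disks - 1) * disk size.
disks=4
for size_tb in 2 3; do
    echo "4x ${size_tb}TB in RAID5: $(( (disks - 1) * size_tb )) TB usable"
done
```

So four 2TB drives give 6 TB usable and four 3TB drives give 9 TB, before filesystem overhead.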
 
Sorry about the double post, but I was looking at these two:

QNAP TS-431
QNAP TS-453 Pro

Any suggestions/comments?

*Looks like QNAP supports iSCSI; I think this is the route I'll go unless anyone has a reason I should look at a different NAS?
 
QNAP is legacy storage - you will not receive the benefits of advanced filesystems like ZFS.

Perhaps you can consider a low-end ZFS build instead? They are cheap - cheaper than prebuilt NAS systems - and very low power too. It only costs you a few hours to assemble and set up the thing.

Try this board:



http://www.asrock.com/mb/Intel/N3150DC-ITX/index.nl.asp


Benefits:
- onboard quad-core N3150 processor
- passive cooling; no CPU fan
- comes with a built-in DC-DC power supply, and the motherboard includes a 65W power brick
- up to 4x SATA/600
- hardware-accelerated encryption (AES-NI)
- cheap considering you get motherboard, CPU, heatsink and power supply all in one product
- only needs an 8GiB DDR3 memory module and a case, plus disks
 
Hmm, if I go toward a ZFS filesystem I'd want to be on ECC RAM. I have never had to recover from it before, but I have heard of nightmares with RAM failures writing the wrong data to a zpool.

I guess I'll have to sleep on it.
 
For ZFS the importance of ECC memory is actually lower, because ZFS can at least eventually detect all on-disk corruption caused by RAM bitflips (invalid checksum or invalid data) and can correct much of it.

Your client, however, cannot do this and thus might require ECC memory much more than the ZFS server. If ZFS receives the wrong data from the client, all the protections you give to the server are not going to matter. The weakest link in the chain is the client. So just ECC on the ZFS server is not enough - your clients need ECC too.

Can you tell me about your experiences with 'RAM failure' and your ZFS pool? I am very much interested. In my tests with faulty RAM, I managed to cause massive on-disk corruption from the RAM bitflips; but after rebooting with a good RAM module, all corruption was corrected again with a scrub. Many people discover there is a problem with their RAM because of random checksum errors on their pool, and these errors are corrected just as well once the server has good RAM again.

So you might lose files, but at least you can see which files. And those files you can restore from backup.

Also, does the QNAP have ECC? If not, isn't it strange to only consider ZFS if you go the expensive route and get ECC memory? It's not 'all or nothing': going from the legacy storage that QNAP/Synology offer to a 3rd-generation filesystem like ZFS is a huge step forward. Going for a more expensive ECC build only marginally adds reliability on top of that, in my view. Especially when considering a cheap build, ECC is going to ruin your party. If you have backups of your most important files, many would be able to live with the potential of individual files being corrupted.

Aside from a defective RAM module, I have never heard about this happening in real life on a ZFS server; that is, permanent on-disk corruption. Temporary corruption can be a problem depending on the usage scenario. Basically, non-ECC breaks End-to-End Data Security, but that feature may not be very important to home users; it is crucial to enterprise/SOHO users, which is why ECC is mandatory on all computer systems they employ.
 
There have been many discussions about this; mostly the following is common:

- A RAM error can cause a corrupted write.
This can lead to a corrupted file or file structure, up to a lost pool/partition.
Such errors cannot be detected by ZFS, as the checksum covers the bad data.

- The probability of RAM problems scales with the amount of RAM and RAM usage.
As large amounts of RAM with heavy use as cache are common today, this is not
a ZFS problem but a general one.

- I have not heard of a ZFS pool failure that was definitely caused by RAM in years,
while all the forums are full of reports of problems on old filesystems.
ZFS is definitely more robust and does not suffer from disk corruption or
problems initiated by the driver, controller, cabling, backplane or disk.

- Even a ZFS pool can be lost in a disaster. I have not had such a disaster since I moved to ZFS
with two or three disks of redundancy, but a disaster backup is always needed.

If you buy new, buy ECC with any OS/filesystem.
If you care about data security, use btrfs, ReFS or ZFS; the last is the current champion.
 
If you are looking for a cheap solution coupled with RAID5, I'd recommend an Avoton solution if you are buying right now. You'd either need to get a RAID5-capable card (I'm unaware of your target number of HDDs) or get something like:

http://www.newegg.com/Product/Product.aspx?Item=N82E16813132230

I'm waiting to see that product get a refresh sometime this year; it will replace my legacy hardware (pulling a combined 255 watts) when it releases. I almost pulled the trigger on that product but decided to wait and see what Intel does next with their server Atom series.
 
_CiPHER_ said:
Can you tell me about your experiences with 'RAM failure' and your ZFS pool?

I said I read terrible things about it; I have never had to recover a zpool before.

_Gea said:
If you buy new, buy ECC with any OS/filesystem.
If you care about data security, use btrfs, ReFS or ZFS; the last is the current champion.

Yeah that's what I'm thinking, if I go new I might as well invest in ECC RAM just to cover myself. Last time I used ZFS was back in the OpenSolaris days and I didn't care for it too much, but from what I have been reading it's become very stable now.

Trimlock said:
If you are looking for a cheap solution coupled with RAID5, I'd recommend an Avoton solution if you are buying right now. You'd either need to get a RAID5-capable card (I'm unaware of your target number of HDDs) or get something like:

http://www.newegg.com/Product/Produc...82E16813132230

I'm waiting to see that product get a refresh sometime this year; it will replace my legacy hardware (pulling a combined 255 watts) when it releases. I almost pulled the trigger on that product but decided to wait and see what Intel does next with their server Atom series.

I think if I go DIY I'll probably run RAID in the Linux kernel instead of buying a separate RAID controller. That board looks interesting, but its reviews are pretty low.
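For reference, kernel-level RAID5 with Linux md really is only a few commands. This is a sketch: the device names /dev/sd[b-e] are placeholders, and the array rebuilds in the background after creation, so performance is reduced until the initial sync finishes.

```shell
# Create a 4-disk RAID5 array with Linux md instead of a hardware RAID card.
# /dev/sd[b-e] are placeholder device names; double-check yours before running.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
cat /proc/mdstat                                  # watch the initial resync
mkfs.ext4 /dev/md0                                # put a filesystem on the array
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist assembly across reboots
```

Like any array-creation command, this destroys whatever is on those disks, so it's illustration only.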


Anyone here ever make a DIY NAS that competes with the power consumption of the pre-built NAS boxes? I'm starting to lean toward building out a new home server with ECC; besides, I have always been a fan of having DAS when dealing with home servers. I guess I just got excited about all the hype around home NAS setups and wanted to try setting one up even though I don't necessarily need one. :D
 
That Asrock board looks pretty good.

HP Microservers are similar to the QNAPs & easily capable of running ZFS. If you can find one at a good price, the older N54L model would be more than fast enough for a backup server.
 
That Asrock board looks pretty good.

HP Microservers are similar to the QNAPs & easily capable of running ZFS. If you can find one at a good price, the older N54L model would be more than fast enough for a backup server.

Any experience with the Lenovo ThinkStations? I often see deals for these and with ECC support too.
 
Any experience with the Lenovo ThinkStations? I often see deals for these and with ECC support too.

I have a 5 year old Lenovo ThinkStation at my desk at work, and the thing is a champ. 6 memory slots for DDR3 ECC (the 4GB sticks are cheap on ebay, if you don't have work to pay for 'em)
I forget the model, but I've been impressed by the build quality of my Lenovo ThinkStation. Hell, it can handle *me* without too many problems, and that's asking a lot of it.
 
Any experience with the Lenovo ThinkStations? I often see deals for these and with ECC support too.
Not any of the last couple generations.

The TS140 & Dell T20 frequently go on sale around $250 with Xeon E3v3, and they're both great value.
 
$285 is quite common, but you will need to replace the PSU, buy an adapter, and house the drives externally, because the case and PSU are worthless if you use it beyond its default specs.
 