Home server vs packaged NAS

Logan321

[H]ard|Gawd
Joined
Oct 9, 2003
Messages
1,900
I'm currently using an Omninas KD20 as a NAS, but I'm annoyed with some of its limitations (I can't disable the iTunes service, which I don't use, and some services are either/or when I want both enabled).

I would like to change to a 4bay hotswap NAS or home server. I like the idea of Drobo except for the proprietary format that makes all the drives bricks without a replacement if the Drobo fails.

I was looking at the Acer AC100 as a possibility, seems a decent price for a home server, to run a linux based NAS. It also seems to have a standard motherboard layout, so should be upgradable in the future: http://www.ncix.com/detail/acer-ac100-micro-server-intel-55-92824-1089.htm

My other options are to build a server from scratch (similar case, mITX board, RAM, etc.), which I think would end up costing as much or more than the server above but with newer components, or to buy a prebuilt NAS for about the same price with a lot less flexibility, but also less screwing around to get it set up how I like.

My intention is to run RAID 5 eventually. At the moment I just have a single 3TB WD Red and a 2TB WD Green. Suggestions or other options similar to the above would be great.
 
That acer ac100 looks pretty decent to me. I just bought a synology and this makes me wonder if I did the right thing.
 
I've been a big fan of Synology NAS's for home use. I've recommended them to quite a few friends and family members and nobody has had any problems.
 
I've been a big fan of Synology NAS's for home use. I've recommended them to quite a few friends and family members and nobody has had any problems.

Do they use standard or proprietary raid setup? If the NAS dies, can you install the drives in a PC to access the files, or do you have to use only the Synology?
 
The cheapest 4-bay Synology I could find was about $50 more than the server I linked, and the server comes with an Intel i3-2120 @ 3.30GHz, 4GB of RAM, and 2x 1TB HDDs.

I think I'll go the server route: I can use a 1TB drive as a boot/download/music drive and then move the downloads to the RAID 5 array, so it can stay spun down unless I'm watching shows or movies.
 
Once you go over 2TB drives, you'll want to move to RAID 6 or 10. If you want a server, just build your own. Lian Li PC-26 or a Silverstone DS380 for a case and then fill it with whatever hardware you want.
 
One additional consideration is power use - do you really need a full blown Sandy Bridge running your file server 24/7?

For some people, the answer is yes. For most, it's probably no.

Footprint is also a consideration.

I happen to like NASes: they are pretty energy efficient, most of them have pretty nice UIs/front ends, they're versatile enough for what you'd want to run on them, and they have a pretty small footprint compared to most PC servers.

But they aren't as versatile as a full PC server. If you wanted to do something like MythBox, then a PC would be the way to go.
 
The Acer AC100's footprint is approximately 8x8x8" with an internal power supply (power bricks are annoying), so size-wise it's no bigger than most commercial NAS products. The appeal to me is that it seems to house a standard mITX motherboard, so it could be replaced with an embedded-CPU board in the future. I've been looking for a case that size to custom-build a server, but they don't seem to be available. I don't have the room for a tower case in my little network closet.

Power consumption was measured at 40W with 2 7200rpm drives installed for the Xeon CPU version; not sure how that'd compare to the i3, but I'm hoping I can set the RAID set to spin down and the whole box to sleep most of the time anyway, with wake-on-LAN, so power consumption shouldn't be that bad.
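Even always-on, 40W is cheap to run; a quick sketch of the yearly cost (the $0.12/kWh rate is an assumption, substitute your own utility rate):

```python
# Rough yearly running cost of an always-on 40W server.
# The $0.12/kWh electricity rate is an assumed example value.
watts = 40
hours_per_year = 24 * 365
kwh_per_year = watts * hours_per_year / 1000   # ~350 kWh
cost = kwh_per_year * 0.12                     # ~$42/year at the assumed rate
print(f"{kwh_per_year:.0f} kWh/year, about ${cost:.2f}/year")
```

Spinning drives down and sleeping the box mostly shaves from an already small number.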
 
I don't know if you know this so I'll say it: RAID is not backup.

While that Acer seems decent, I would rather get an HP MicroServer instead.

But then I don't think I could live with 4/5 bays anyway.
 
Yep, I know RAID != backup. My personal files are backed up to the cloud; this is just for media files, which can be replaced. Is there an HP Micro in the $500 range?
 
My intention is to run RAID 5 eventually. At the moment I just have a single 3TB WD Red and a 2TB WD Green. Suggestions or other options similar to the above would be great.
Just to repeat what ND40oz said earlier, RAID 5 is no longer recommended for large arrays using drives greater than 2TB. The stress of rebuilding a RAID array can cause additional hard drives to drop out of the array. In addition, due to the greater density of the data, you'll lose significantly more data when a second drive dies in a RAID 5 array. Hence why RAID 6 or RAIDZ2 is generally recommended.
 
Just to repeat what ND40oz said earlier, RAID 5 is no longer recommended for large arrays using drives greater than 2TB. The stress of rebuilding a RAID array can cause additional hard drives to drop out of the array. In addition, due to the greater density of the data, you'll lose significantly more data when a second drive dies in a RAID 5 array. Hence why RAID 6 or RAIDZ2 is generally recommended.

I personally use RAID 5 and have never had a double failure (8x4TB arrays), and an 8-12 hour rebuild is not a huge issue. I also have all of the data backed up, so worst case I just do a restore. Losing 6 drives' worth of data across 3 arrays is not really an option.

That might not be the case with the Seagate 8TBs I have ordered, as they seem to have a terrible write speed (35MB/s). Time will tell, I guess.
 
What defines a "large size array", then? I'm only looking at running 3x 3TB drives for my RAID 5, so rebuild time shouldn't be that bad. And there's no point in RAID 6 on a 4-bay server unless you can't do RAID 10 for some reason.

Also, the drives will probably only be spun up for 10-20 hours per week.
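For a 4-bay box the capacity/redundancy trade-off behind that is easy to work out; a quick sketch (sizes in TB, assuming 4 equal 3TB drives):

```python
# Usable capacity and drive-failure tolerance for a full 4-bay array.
drives, size_tb = 4, 3

raid5_usable  = (drives - 1) * size_tb   # 9 TB, survives any 1 failure
raid6_usable  = (drives - 2) * size_tb   # 6 TB, survives any 2 failures
raid10_usable = (drives // 2) * size_tb  # 6 TB, survives 1 per mirror pair

print(raid5_usable, raid6_usable, raid10_usable)  # 9 6 6
```

RAID 6 and RAID 10 give the same usable space on 4 bays; the difference is that RAID 6 survives any two failures while RAID 10 dies if both drives in one mirror pair go.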
 
DIY = More time, more hassle, more power, more options

NAS = Plug and play, ready to go. Synology mostly 0 headache, install and go.

DIY is going to be much more work no matter how you slice it, and for only 3x 3TB I'd go Synology.

I have a 5-bay Synology I'm retiring for my DIY solution, and I've spent a LOT of time on it to make it how I want: fast, safe, secure, etc...
 
I personally use RAID 5 and have never had a double failure (8x4TB arrays), and an 8-12 hour rebuild is not a huge issue. I also have all of the data backed up, so worst case I just do a restore. Losing 6 drives' worth of data across 3 arrays is not really an option.
In your case, that's ok considering that you have an actual backup for the data. But if your RAID is your only backup, I'd still recommend RAID 6. There's a whole thread about RAID 5 VS RAID 6 et al:
http://hardforum.com/showthread.php?t=1855083
What defines a "large size array", then? I'm only looking at running 3x 3TB drives for my RAID 5, so rebuild time shouldn't be that bad. And there's no point in RAID 6 on a 4-bay server unless you can't do RAID 10 for some reason.
Over 3TB or so. I wasn't talking about rebuild time but rebuild stress.

I highly recommend reading through the thread I linked above.
 
In your case, that's ok considering that you have an actual backup for the data. But if your RAID is your only backup, I'd still recommend RAID 6. There's a whole thread about RAID 5 VS RAID 6 et al:
http://hardforum.com/showthread.php?t=1855083

Well you know what they say about Raid and backup :) .

What defines a "large size array", then? I'm only looking at running 3x 3TB drives for my RAID 5, so rebuild time shouldn't be that bad. And there's no point in RAID 6 on a 4-bay server unless you can't do RAID 10 for some reason.

I'm not sure if this is always the case, but generally the rebuild time is more tied to drive size than array size, assuming you have a fast controller. As long as your controller can handle the higher parity calculations, it comes down to how fast the missing drive can write data. E.g. a rebuild of my 4x4TB array only takes about an hour or 2 less than my 8x4TB array, and I'm sure that's largely due to the controller bottleneck.
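That squares with back-of-the-envelope math: a rebuild has to write the replacement drive end to end, so time scales with drive capacity over sustained write speed (the 100MB/s speed here is an illustrative assumption):

```python
# Back-of-the-envelope rebuild time: the replacement drive must be
# written end to end, so time ~ capacity / sustained write speed.
def rebuild_hours(capacity_tb, write_mb_s):
    bytes_total = capacity_tb * 1e12
    return bytes_total / (write_mb_s * 1e6) / 3600

print(round(rebuild_hours(4, 100), 1))  # ~11.1 hours for a 4TB drive at 100MB/s
```

By the same math, a drive that can only sustain 35MB/s (like the 8TB Seagates mentioned above) would take far longer per terabyte.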
 
Well you know what they say about Raid and backup :) .



I'm not sure if this is always the case, but generally the rebuild time is more tied to drive size than array size, assuming you have a fast controller. As long as your controller can handle the higher parity calculations, it comes down to how fast the missing drive can write data. E.g. a rebuild of my 4x4TB array only takes about an hour or 2 less than my 8x4TB array, and I'm sure that's largely due to the controller bottleneck.


Nice rebuild speed. Are those 7200rpm drives or lower-power 5xxx rpm drives?
 
Any drawbacks to putting a 6TB HDD in a computer and leaving it running all the time as a server? For backup I'm going to get an external HDD to mirror that drive, and only turn it on to back up. Is there any software that will detect changes and only back up the new data? I'm using this server for Plex movies and don't want to run the risks of RAID. The HDD is the HGST 6TB, which is pretty damn fast. It's in my main gaming computer: an overclocked 2500K, 250GB SSD, 8GB RAM, a 780 Ti, and a 1000W PSU.
 
For a single drive, I might consider one of those network-ready "cloud" drives, like what WD has. You can't really plug in a drive to mirror, AFAIK, but you can probably make a copy over the network. Easy, and done.

The main downside for a single drive connected to a computer is probably failure. If you're only going with a single drive, I'd either consider a single bay NAS or cloud drive, or just put the drive in an existing machine and share it out. Not really any need to add extra hardware for that, though a two-bay QNAP isn't very expensive.

The copy software you should take a look at is rsync, depending on your solution. It is mostly for *nix-based systems, but I think there are Windows versions as well.
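For the "only back up what changed" part, here's a minimal Python sketch of the kind of size/mtime comparison rsync performs (flat directory only, illustrative paths; rsync itself handles directory trees, deletions, and partial transfers, and is the better tool for real use):

```python
# Sketch of rsync-style change detection: copy a file to the mirror
# only if it is missing there or its size/mtime differ.
import os, shutil

def mirror(src_dir, dst_dir):
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        src, dst = os.path.join(src_dir, name), os.path.join(dst_dir, name)
        if not os.path.isfile(src):
            continue  # this sketch only handles flat directories
        s = os.stat(src)
        if (not os.path.exists(dst)
                or os.path.getsize(dst) != s.st_size
                or os.path.getmtime(dst) < s.st_mtime):
            shutil.copy2(src, dst)  # copy2 preserves the mtime
```

Unchanged files are skipped entirely, so re-running the backup only costs you the changed data.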
 
I believe my Synology is around 5 years old at this point. I started with two 2TB drives in a mirrored array and a few years back added two 3TB drives to it. I still have about 2TB of free space after spending months cleaning it up and optimizing files.

Overall, this far into it, I still really like this thing. Constant updates, a great application pool, easy as sin to use, and virtually no stress.

I spent a long time going back and forth on the server / NAS argument and determined that for my needs the NAS's storage / streaming features were enough. I have not once been sorry about the choice. A few times I have wished for small things, but found ways the NAS can do them for me.

My only gripe now is the 1.2GHz Atom CPU is just not cutting it anymore for multitasking, lol. I have found what I want to upgrade to, an 8-bay Synology, but the $1k price point is too steep, and I can make this thing truck along for a few more years while I save up.
 
FreeNAS user here: it took about 4 hours to get everything set up and running, but after that it's been smooth sailing. You definitely have to spend a day reading the manual and watching YouTube tutorials. My setup is just for home use, and honestly, if I didn't have a spare computer sitting unused I'd just get a packaged solution.
 
Well you know what they say about Raid and backup :) .

I'm not sure if this is always the case, but generally the rebuild time is more tied to drive size than array size, assuming you have a fast controller. As long as your controller can handle the higher parity calculations, it comes down to how fast the missing drive can write data. E.g. a rebuild of my 4x4TB array only takes about an hour or 2 less than my 8x4TB array, and I'm sure that's largely due to the controller bottleneck.

The problem is URE which has ZERO to do with the controller. A URE during a rebuild = bye bye. The notion of "time" is a red-herring that people get stuck on.

But...if your personal experience believes that tons of measured data and analysis is all mumbo-jumbo...then go for it.
 
The problem is URE which has ZERO to do with the controller. A URE during a rebuild = bye bye. The notion of "time" is a red-herring that people get stuck on.

But...if your personal experience believes that tons of measured data and analysis is all mumbo-jumbo...then go for it.

Very few people who rant about UREs mention anything more than the manufacturer's URE rating, which might not be accurate (look at MTTF; like any drive will actually last 100 years). All of my arrays have a capacity greater than 12TB and I have never had a double failure (and I have done loads of rebuilds due to dodgy cables etc.), yet articles like this one http://www.zdnet.com/article/has-raid5-stopped-working/ seem to think you should have a failure every 12TB read. By that logic my 32TB array should never make it through a rebuild, and yet it has, maybe about 7 times, let alone my other arrays (24TB and 16TB). Either I am the luckiest person alive (what is the probability of reading 224TB and not getting a failure if the average is one every 12TB?) or the numbers mentioned are bogus.

I'm not saying a failure is impossible, just that it's not inevitable, and assuming you have backups (which I do) it's almost a non-issue, given that in my case it's personal data and not part of an enterprise storage array.
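For what it's worth, the "failure every 12TB" figure comes straight from the commonly quoted 1-per-1e14-bits spec-sheet URE rate. Taken at face value, it implies rebuild odds like this (a rough sketch that ignores correlated failures):

```python
# Chance of reading N terabytes with no unrecoverable read error (URE),
# taking the commonly quoted 1-per-1e14-bits spec at face value.
# Real consumer drives often beat this spec by a wide margin, which is
# consistent with repeatedly rebuilding 30TB+ arrays without trouble.
ure_per_bit = 1e-14

def p_no_ure(tb_read):
    bits = tb_read * 1e12 * 8
    return (1 - ure_per_bit) ** bits

print(f"32TB rebuild: {p_no_ure(32):.3f}")   # ~0.08 if the spec were literal
print(f"224TB total:  {p_no_ure(224):.6f}")  # essentially zero
```

Surviving 7+ rebuilds of a 32TB array at those odds really would be lottery-level luck, which suggests the spec is a conservative floor rather than a measured average.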
 
The cheapest 4-bay Synology I could find was about $50 more than the server I linked, and the server comes with an Intel i3-2120 @ 3.30GHz, 4GB of RAM, and 2x 1TB HDDs.

I think I'll go the server route: I can use a 1TB drive as a boot/download/music drive and then move the downloads to the RAID 5 array, so it can stay spun down unless I'm watching shows or movies.

Why wouldn't you use both of those 1TB drives as storage and drop $50 into a 128GB SSD for the boot drive of a server?
 
I believe my Synology is around 5 years old at this point. I started with two 2TB drives in a mirrored array and a few years back added two 3TB drives to it. I still have about 2TB of free space after spending months cleaning it up and optimizing files.

Overall, this far into it, I still really like this thing. Constant updates, a great application pool, easy as sin to use, and virtually no stress.

I spent a long time going back and forth on the server / NAS argument and determined that for my needs the NAS's storage / streaming features were enough. I have not once been sorry about the choice. A few times I have wished for small things, but found ways the NAS can do them for me.

My only gripe now is the 1.2GHz Atom CPU is just not cutting it anymore for multitasking, lol. I have found what I want to upgrade to, an 8-bay Synology, but the $1k price point is too steep, and I can make this thing truck along for a few more years while I save up.

I was going back and forth also...until I got a smoking deal on a dual L5639 + motherboard setup.

If I just needed/wanted the storage, I would probably agree: prepackaged. I want to run a Plex server though, and maybe a couple other services like SFTP. Honest question: can you do all of that on a Synology?
 
IMHO, I would do a home server 100 out of 100 times.

I had a 5 bay Drobo once years ago, and building my own NAS server to replace it was the best thing I've ever done - storage wise.
 
Well, I think I'm going to hold off on the whole thing for a few months. For now, I've got a 2TB WD Green in an external USB3 case for backup, and I'll just put another 3TB WD Red in my Omninas in RAID 1 for redundancy... then there's no rebuilding to mess with. I only have about 1TB of videos on it; I thought it was higher.
 
I had a cost-effective build with ThinkTank (my Windows Home Server 2011 build):
an Intel Core 2 Duo E8500 and an Intel DQ45EK mITX motherboard from eBay (mobo box still in plastic). The other parts I already had: 4x 3TB Seagate Barracudas (+ a 500GB OS drive) and 4GB of DDR2 RAM. It was in my 19" rackmount case to start, but I found an Antec Sonata II case used at a local shop. It sits under my desk now.
It was at the right price point for me to build, and for what it does, a Core 2 Duo is plenty of processor. 8GB would be nice to have, but that's another expense in my budget, since I had the RAM on hand. The case cost me $30 and came with a P4 HT and mobo (can't complain).

ThinkTank is a budget build, but a good one for what it does.
 
This looks like a good option for an SFF server case: a Chenbro 4-bay hot-swap cage secured with 4 thumbscrews. http://www.newegg.com/Product/Produ...13&cm_re=84H220910-079-_-16-212-036-_-Product Just cut a square hole in the side or front of a case, drill holes for the thumbscrews, and voila.

I'll have to look for a suitable case. Moddin time!

Looks like a waste of money, considering something like the DS380 has 8 hotswap bays for just over the cost of two of them.
 
Read the OP please. I don't have the space for either a full tower case or a rack mount case. Small form factor suggestions are helpful. :)

There are 0 mitx cases with 3x 5.25" bays, so that's out too.
 
Read the OP please. I don't have the space for either a full tower case or a rack mount case. Small form factor suggestions are helpful. :)

There are 0 mitx cases with 3x 5.25" bays, so that's out too.

DS380 (mITX 8 hotswap), PC-Q25 (mITX 5 hotswap), PC-Q26 (mITX 10 hotswap)
 
Read the OP please. I don't have the space for either a full tower case or a rack mount case. Small form factor suggestions are helpful. :)

There are 0 mitx cases with 3x 5.25" bays, so that's out too.

Ah, Fair enough. Read it originally, but forgot about it. To me, shit like this is what basements and closets are for :p

That being said, I have seen some cool livingroom "Lack Rack" implementations :p

It's the ultimate in IKEA hacking :p
 