Help me with my NAS, please

berky

So, I'm trying to create a NAS for my home storage. It will mostly contain media (video), but also other files.

My requirements are the following:

  1. power conscious - should use as little power as possible, while still having the ability to encode video on the fly to my Roku (currently use Plex)
  2. wake on LAN
  3. put unused disks to sleep
  4. scalable - must be able to expand storage as needed
  5. not locked into specific drives - must not require the exact same disk drives as storage is expanded
  6. Rack mountable - I'm going to be getting a small cabinet to put all my server stuff in to tuck it away under the steps
  7. Easily upgradeable - Should have a separate disk for the OS that can be upgraded through the GUI with a single file. If I had to maintain a standard Linux OS, I would, but would prefer to have a standalone 'embedded' type of system
  8. easily back up OS drive - I want to make a mirror image of the OS drive, but without using RAID. I would prefer to have the backup set to auto-boot only in the event that the primary can't boot

I think that's about it for now. I was looking at getting the Norco RPC-4020 case. I would like to re-use some of my existing hardware for the time being: MSI K9A2 Platinum mobo, PCP&C 700+W PSU, AMD Phenom quad core, 4GB RAM. I would replace it at a later date if necessary to reduce power consumption, but if I can use this hardware for now, I doubt new hardware would save enough power to justify the added cost.

Then I was gonna get some WD Green HDDs and an LSISAS9201-16i card.

My current plan is to set up the drives using 2 separate JBOD instances so I can have backup capability, but I'm open to suggestions.

I'm not sure if I want to use FreeNAS, NAS4Free, Open Media Vault, etc. This will depend mainly on what features each can accommodate.


Any and all help is appreciated. Thanks!
 
Just going to chip in and say that you may well be better suited with WD Red drives; they don't have the head parking feature of the Greens, and they have a better warranty.

I will be watching this thread with interest!
 
For the processor you're gonna want to go with an Ivy Bridge i3; my complete i3 build uses about 40W at idle.
 
I wouldn't recommend getting the RPC-4020 case as cable management would be a PITA with 20 individual SATA cables. I would look at the RPC-4220 or RPC-4224 cases, which use SFF-8087 ports on the backplane, and use an SFF-8087 to multi-lane SATA reverse breakout cable to connect the SATA ports on your motherboard to the backplane.

That way, you would only have to deal with 5-6 cables rather than 20.

I also concur with Liggywuh about the drives: Try to avoid the WD Green drives as they don't always work too well.

As for your requirements:
1) Then AMD is out if this is a serious goal. Considering your later statement, it doesn't appear that being power conscious is a big issue for you if you're still willing to use that Phenom. And yes, that Phenom is still usable.
2) WOL is pretty much motherboard/NIC dependent. If your motherboard or NIC doesn't have WOL support, then you won't get it.
3) FlexRAID and unRAID both do that IIRC.
4 and 5) Drive Bender, StableBit Drive Pool, SnapRAID, and FlexRAID all pretty much do exactly what you want in regards to drive expansion. Oh, and unRAID too.
6) Norco of course.
7) N/A.
8) This will require manual work on your part.

Then I was gonna get some WD Green HDDs and an LSISAS9201-16i card.

My current plan is to set up the drives using 2 separate JBOD instances so I can have backup capability, but I'm open to suggestions.
That's not backup; that's redundancy at best. It's not backup because the data isn't located on a different medium or device from the main source. In other words, if the data were copied to another NAS, another file server, online, an external drive, etc., that would be a backup. But since both copies of the data reside on the same system, it's not backup, just redundancy at best.

Nor would I recommend such a route in the first place, as some JBOD arrays are set up in a way that if one drive dies, all the data is practically lost.
 
I wouldn't recommend getting the RPC-4020 case as cable management would be a PITA with 20 individual SATA cables. I would look at the RPC-4220 or RPC-4224 cases, which use SFF-8087 ports on the backplane, and use an SFF-8087 to multi-lane SATA reverse breakout cable to connect the SATA ports on your motherboard to the backplane.

That way, you would only have to deal with 5-6 cables rather than 20.

Ok, I'm not too familiar with SAS vs SATA, so I wasn't sure if I could still use SATA drives with SAS connectors. I'll definitely consider these cases over the 4020.


I also concur with Liggywuh about the drives: Try to avoid the WD Green drives as they don't always work too well.

I was only looking at them for power reduction. As long as I can still spin down the Reds while not in use, I'm fine with that.

As for your requirements:
1) Then AMD is out if this is a serious goal. Considering your later statement, it doesn't appear that being power conscious is a big issue for you if you're still willing to use that Phenom. And yes, that Phenom is still usable.

Yes, it is a serious goal, but cost is as well. The idea for re-using existing equipment was only a temporary measure. I'm starting to think for this particular build I might just get new stuff right off the bat and re-use existing stuff for something else.

2) WOL is pretty much motherboard/NIC dependent. If your motherboard or NIC doesn't have WOL support, then you won't get it.

Right, but I thought the OS had to support it as well, no?

3) FlexRAID and unRAID both do that IIRC.

Do you have any experience with OCE (Online Capacity Expansion)?

4 and 5) Drive Bender, StableBit Drive Pool, SnapRAID, and FlexRAID all pretty much do exactly what you want in regards to drive expansion. Oh, and unRAID too.

I should have mentioned I won't use Windows. The only reason I was leaning away from unRAID was due to the licensing, but I'm potentially reconsidering that based on what you've brought to light already.

6) Norco of course.
7) N/A.
8) This will require manual work on your part.

That's not backup; that's redundancy at best. It's not backup because the data isn't located on a different medium or device from the main source. In other words, if the data were copied to another NAS, another file server, online, an external drive, etc., that would be a backup. But since both copies of the data reside on the same system, it's not backup, just redundancy at best.

Nor would I recommend such a route in the first place, as some JBOD arrays are set up in a way that if one drive dies, all the data is practically lost.

backup? redundancy? meh, it's just semantics to me. As long as it's a physically separate set of disks that are not in a RAID 1 configuration, it's a backup to me.

I would consider a true backup box if, and only if, I could script it so I wake up the backup system (WoL) and then shut it down when the backup is done. I would also have to be able to reuse existing hardware, which I have plenty of. However, I'm trying to keep cost low, and I'd need a case that is rack mountable, and I only have a bunch of standard cases. Seems a bit much to get a separate Norco case just for a backup system when I should be able to break up the disks in the primary system as 2 separate volumes with manual or automated backup from one to the other (not RAID 1).
 
I'm using WD Reds on all my storage servers. No problems at all.
 
Yes, it is a serious goal, but cost is as well. The idea for re-using existing equipment was only a temporary measure. I'm starting to think for this particular build I might just get new stuff right off the bat and re-use existing stuff for something else.
Up to you whether or not you want to spend the money. Like I said, your current parts as-is should be OK for a NAS as long as the power usage and heat generation don't bother you.
Right, but I thought the OS had to support it as well, no?
Nope. WoL is pretty much all hardware. Think of WoL as a remote power button. Does the OS need to support the power button on your case? No, it does not. The only time the OS is involved with WoL is whether or not there's an application that can send the wake command over the network. There's a WoL-related application for virtually every OS out there. I used to trigger WoL for my Linux file server from a Windows laptop using the PuTTY application.
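
If you'd rather not rely on a third-party tool, the magic packet is simple enough to send yourself. Here's a minimal Python sketch, nothing definitive; the MAC and broadcast address below are placeholders for your own network:

```python
import socket

def wake_on_lan(mac, broadcast="192.168.1.255", port=9):
    # The magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Example: wake the file server (substitute your NIC's MAC and your subnet's broadcast address).
wake_on_lan("00:11:22:33:44:55")
```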
Do you have any experience with OCE (Online Capacity Expansion)?
With Linux's mdadm RAID, yes.

I should have mentioned I won't use Windows. The only reason I was leaning away from unRAID was due to the licensing, but I'm potentially reconsidering that based on what you've brought to light already.
SnapRAID and FlexRAID still work on Linux.
The only reason I was leaning away from unRAID was due to the licensing, but I'm potentially reconsidering that based on what you've brought to light already.
Yeah, the main reason I'm not a fan of unRAID is its relatively slow file transfer performance. It's not a bad solution, mind you, but not the one I would use myself.

backup? redundancy? meh, it's just semantics to me. As long as it's a physically separate set of disks that are not in a RAID 1 configuration, it's a backup to me.
Unfortunately, there will be people who read this thread for help, so to help them, at least use the terms correctly. Again, what you described here:
I should be able to break up the disks in the primary system as 2 separate volumes with manual or automated backup from one to the other (not RAID 1).
is not backup; it's redundancy. If you want redundancy, that's fine, but don't call it backup. A backup would be this NAS holding a second copy of the data from the hard drive in your primary PC; that's when the NAS is considered a backup.

Yes, it's semantics, but we might as well do things right.
I would consider a true backup box if, and only if, I could script it so I wake up the backup system (WoL) and then shut it down when the backup is done.
That is indeed possible with Linux:
http://www.linuxplanet.com/linuxplanet/tutorials/7308/1

As well as FreeNAS in a way:
http://blog.graceabundant.com/2012/08/24/automatic-shutdownwake-up-on-freenas/
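
The gist of those links, as a rough Python sketch rather than anything definitive: the MAC, hostname, and paths are placeholders, and it assumes key-based SSH to the backup box so it can be told to power off. The magic-packet function is the same trick from my earlier post.

```python
import socket, subprocess, time

BACKUP_MAC = "00:11:22:33:44:55"      # placeholder: MAC of the backup box's NIC
BACKUP_HOST = "backup.local"          # placeholder: hostname or IP of the backup box
SOURCE_DIR = "/mnt/pool/important/"   # placeholder: data on the main NAS
DEST_DIR = "/mnt/backup/important/"   # placeholder: destination on the backup box

def send_magic_packet(mac, broadcast="192.168.1.255", port=9):
    payload = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

def wait_for_host(host, port=22, timeout=300):
    # Poll until SSH answers (i.e. the box has finished booting) or give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(10)
    return False

send_magic_packet(BACKUP_MAC)
if wait_for_host(BACKUP_HOST):
    # Push the data, then tell the backup box to power itself off.
    subprocess.run(["rsync", "-a", "--delete", SOURCE_DIR,
                    f"root@{BACKUP_HOST}:{DEST_DIR}"], check=True)
    subprocess.run(["ssh", f"root@{BACKUP_HOST}", "poweroff"], check=True)
```

Run something like that from cron on the main box and the backup system only stays powered on for as long as the rsync takes.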


Seems a bit much to get a separate Norco case just for a backup system
Just to be clear: I'm not saying you should get another Norco case. Any case (rackmount or not) or system will do as long as it has enough HDD space to hold your most critical and valuable data.

Anyway, back to the hardware, I recommend hitting up this link for info about the controller you plan on using:
http://www.servethehome.com/lsi-sas-2008-raid-controller-hba-information/

Also, what video card are you planning on using in that setup? That mobo doesn't have onboard video.
 
Thanks for all of the info thus far.

I'm not too concerned with file transfer speeds, mainly just that it can keep up with the transcoding. I'm more concerned with setup and maintenance of the software and the ability to monitor the health of the drives.

Concerning the case for a potential backup system, the only reason I mentioned having a duplicate Norco is that whatever I use for the backup would have to handle the same or larger storage capacity. I'm starting to think that's not such a good idea, as it would likely cost way too much, when "redundancy" would be good enough for my purposes.

Regarding the LSI controller, I was only looking at it for the ability to handle all of the drives. I would not be using it as a hardware RAID controller, only for its SATA expansion ports. However, with the Norco 4220, how do the SAS connectors hook up to the mobo and drives? The description on Newegg states:

Five internal SFF-8087 Mini SAS connectors support up to twenty 3.5" or 2.5" SATA (I or II) or SAS hard drives

I don't really know anything about SAS, so I'm not sure how that would work (ie. what does the case have to do with it?)



for the mobo, are you referring to the MSI I listed? if so, I have plenty of old vid cards laying around. I'm not too concerned with that.
 
Concerning the case for a potential backup system, the only reason I mentioned having a duplicate Norco is that whatever I use for the backup would have to handle the same or larger storage capacity. I'm starting to think that's not such a good idea, as it would likely cost way too much, when "redundancy" would be good enough for my purposes.
That's why it's generally recommended to back up only the stuff that is extremely valuable to you (i.e. family photos, videos, etc.). That cuts down the cost of a backup setup dramatically.
Regarding the LSI controller, I was only looking at it for the ability to handle all of the drives. I would not be using it as a hardware RAID controller, only for its SATA expansion ports. However, with the Norco 4220, how do the SAS connectors hook up to the mobo and drives? The description on Newegg states:

I don't really know anything about SAS, so I'm not sure how that would work (ie. what does the case have to do with it?)
OK, the Norco 4220 and 4224 have backplanes with five and six SFF-8087 connectors respectively. All of the SATA drives connect to the backplane via the standard SATA interface and don't require any cables for that part. A single SFF-8087 port/connector provides data connectivity for four drives. In order to connect the LSI to the backplane of the 4220, you simply need four SFF-8087 to SFF-8087 cables like these:
$20 - Norco C-SFF8087-D SFF-8087 to SFF-8087 Multilane SAS Cable

That would connect 16 of the drives to the LSI. In order to connect the last four SATA ports on the backplane, you need the reverse breakout cable I listed earlier:
$15 - Norco C-SFF8087-4S SFF-8087 to Multi-lane SATA Reverse Break-out Cable

Connect the above cable to the four SATA ports on your motherboard, then connect the SFF-8087 end to the remaining SFF-8087 port on the backplane. And that's how you connect 20 drives to the system.

for the mobo, are you referring to the MSI I listed? if so, I have plenty of old vid cards laying around. I'm not too concerned with that.
Yes that is what I am referring to.
 
Thanks!

That makes sense about the backup. I may consider something along those lines. Maybe I can make a Raspberry Pi system with a couple of drives for my backup. I don't have tons of that kind of stuff, so I wouldn't really need a large system. But that's for another thread.

For the connectors, thanks, that makes sense.

So to summarize what I think I've determined so far:

case: Norco 4220 (I really don't need the 24, so I'll save on the extra cost)
mobo/cpu: Intel Core i3 with compatible mobo
ram: I guess about 4GB should do? Or should I go for 8GB?
power supply: I was looking at the Zippy PSUs, as that's what the Backblaze pods use. Seems to have something to do with the 5V rail for lots of HDDs. Gonna research this a little more though.
sound dampeners: rubber/silicone grommets for HDDs
OS: still not sure what I want to do here yet. I like the sound of FlexRAID, although I'd prefer a fully embedded system as opposed to an application running on a standard OS. But we'll see. I don't really need this figured out before I get the hardware.
drives: WD Reds

I still have to look into what the best price-per-gigabyte is for drives. I'm assuming it's the 2TB drives, but I haven't done the math yet.
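
For the per-gigabyte math, something like this quick Python sketch should do; the prices below are made-up placeholders, not actual quotes:

```python
# Quick price-per-gigabyte comparison. Plug in the real capacities and prices.
drives = {
    "2TB WD Red": (2000, 109.99),   # (capacity in GB, price in USD) -- placeholder numbers
    "3TB WD Red": (3000, 144.99),
    "4TB WD Red": (4000, 189.99),
}

# Sort by cost per GB, cheapest first.
for name, (gb, price) in sorted(drives.items(), key=lambda kv: kv[1][1] / kv[1][0]):
    print(f"{name}: ${price / gb:.4f}/GB")
```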

Is there anything else major I'm missing?
 
Go for 8GB of RAM simply because 8GB RAM kits are the better bang-for-the-buck value. As for which mobo and CPU you should be getting, that's largely dependent on your budget and whether or not uptime is a big deal for you. OS choice matters a bit too.

As for the PSU, you don't need a Zippy. You'll be fine with a solid 650W PSU from Corsair, Seasonic, XFX, etc. In other words, consumer grade PSUs will be fine for that Norco.

You really should decide on the OS first as that may influence what hardware you should get.
 
OK, I'll start researching the OSes a little more and post back once I've figured that out.
 
So I think I am going to go with Open Media Vault. This gives me the ability to run Plex (which is what I've been using for streaming to my Roku).

Also, I could then use either FlexRAID or ZFS. I can decide how I want to do that later, but at least now I can start figuring out the specifics of the hardware.
 