Home-Built NAS Questions

iRpilot

I wouldn't consider myself a noob by any means, but I definitely have some questions about a home-built NAS rig I plan on throwing together. Typically when I have questions I dig through the interwebs for hours until I've either found an answer, my eyes hurt, or the boss (wife) expresses her displeasure with me being on the computer for too long. So I'm just going to take the short route, throw it all out there, and see what happens. I'd appreciate any insight, guys. :)

I have 3 HP Compaq dc5000 MT desktops that my friend in the IT dept at a school gave me, and I plan on using them to build a NAS rig.

Specs for each:

Pentium 4 3.00 GHz CPU
1x Seagate Barracuda 7200RPM 40GB IDE HDD
2x 512MB RAM
3 empty PCI slots
4x HDD slots in the cage total

I plan on swapping the RAM out for 2x 1GB sticks and adding a 4-port PCI SATA controller card for two or more 1TB SATA HDDs.

I'm not particularly concerned about the HDD types and specs, but rather whether I should run RAID, what RAID config I should run, what type of NAS OS I should use, and how to boot the OS. I have no experience with any type of server OS at all.

I've thought about booting NASLite or a similar OS (open to suggestions on OS) from a 1GB CF card in an IDE-to-CF adapter plugged into the IDE channel I'd free up by using the SATA controller card, but it's not a necessity.

Disk organization is what concerns me most. I want a setup that allows for easy expansion, but I don't want everything spanned across several drive letters. I'm not particularly concerned about redundancy, but I also don't want a drive to crap out and lose everything. If a drive craps out, I'd like to only have to replace what was on the drive I lost, not everything, like I understand I'd have to do with some RAID configs. If I'm accessing files on the NAS from another computer, will the OS on the NAS rig group everything together, or will I need to go to C:\Movies or D:\Music to find what I'm looking for?

Thanks for any help.

Tyler
 
A couple of questions:
1) How much storage do you intend to provide?
2) What kind of performance are you looking for?
3) What's your budget to get this all done?

About your question on how the drives work: it all depends on how you configure it. The point of running a NAS OS is that it lets you expand and segment your storage space in a flexible manner to suit your specific needs. It sounds like what you want is to set up multiple drives in a single array with parity. That allows a drive to fail without data loss - you simply replace the drive and rebuild the array. Additionally, you can provision the array to appear as one big network drive if that's what floats your boat.
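
If the parity idea is unfamiliar, here's a toy Python sketch of the principle (not any real RAID implementation, and the data blocks are made up): XOR the data drives together to get parity, and XOR the survivors with the parity to rebuild a lost drive.

Code:
# Toy illustration of parity protection, not a real RAID driver.
# Parity is the XOR of the data blocks; XOR-ing the surviving blocks
# with the parity reconstructs any single lost block.

drive_a = bytes([0x10, 0x20, 0x30])  # hypothetical data blocks
drive_b = bytes([0x0A, 0x0B, 0x0C])

# Compute the parity block across the data drives.
parity = bytes(a ^ b for a, b in zip(drive_a, drive_b))

# Simulate losing drive_b, then rebuild it from drive_a and parity.
rebuilt_b = bytes(a ^ p for a, p in zip(drive_a, parity))
assert rebuilt_b == drive_b  # the replacement holds the same data

The same XOR trick scales to any number of drives, which is why an array of n drives only needs one drive's worth of parity to survive a single failure.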
 
The problem is that since you will only be using a PCI SATA controller, you're going to see a huge performance hit - slow file transfers and the like. Not to mention that the P4 isn't exactly energy efficient.

As far as OS goes, are you familiar with BSD or Linux? If not, are you willing to learn?

In regards to disk organization, you have a few options:
1) Traditional RAID 5. You need a minimum of three drives for this. About a drive's worth of space goes to parity while the rest is for storage, so if you have 3 x 1TB drives, you'll only be able to use 2TB of space (there's a quick capacity sketch after this list). However, this lets you lose one drive without losing any data, barring extreme circumstances, and then rebuild the array with a replacement drive. I'd recommend any Linux OS with mdadm software RAID. I think FreeNAS has built-in RAID 5 software RAID as well (that's not ZFS-related).

2) ZFS RAID-Z. I can't sum this up well enough here; I recommend reading this link (ignore the references to ZFSGuru):
http://hardforum.com/showpost.php?p=1036644362&postcount=2

3) unRAID:
Links:
http://www.lime-technology.com/
http://lime-technology.com/wiki/index.php?title=UnRAID_Wiki

Pros:
+ Allows use of different sized drives.
+ Should one or multiple drives die, only the data on those dead drives is lost.
+ If the data on the hard drive isn’t being accessed, the hard drive is spun down until needed.

Cons:
- unRAID Free has a limit of 3 drives (2 data, 1 parity).
- unRAID Plus has a limit of 7 drives (5 data, 1 parity, 1 cache)
- unRAID Pro has a limit of 21 drives (19 data, 1 parity, 1 cache)
- Costs $120 for unRAID Pro (21 drives) and $69 for unRAID Plus (7 drives)
- Relatively low write speeds without a cache drive.

Notes:
* “unRAID™ is similar to RAID-4 in that for every n hard drives, there are n-1 data drives, and a single fixed parity drive” - From unRAID website

4) Flexraid Basic:
Links:
http://en.wikipedia.org/wiki/FlexRAID
http://www.openegg.org/

Pros:
+ Uses data-based parity, in which parity is only kept for the files actually stored - unlike traditional RAID, where parity covers the entire drive regardless of whether it holds any data.
+ Allows use of different-sized drives as a result of the data-based parity setup
+ Free
+ Can be used with either Linux or Windows.

Cons:
- Parity isn't realtime, so if the data needs to be protected at all times, FlexRAID is not a good idea
- FlexRAID needs periods of no usage at all in order to re-sync the parity
- If multiple users are editing stored files, FlexRAID is not recommended
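
Since the capacity math differs between these options, here's a rough Python sketch comparing usable space for RAID 5 and unRAID - the drive sizes are hypothetical and filesystem overhead is ignored:

Code:
# Rough usable-capacity math for the options above, sizes in TB.
drives = [2.0, 1.0, 0.5]   # hypothetical mixed-size drives

# RAID 5 / RAID-Z: every member is treated as the smallest drive,
# and one drive's worth of that goes to parity.
raid5_usable = (len(drives) - 1) * min(drives)   # 1.0 TB

# unRAID (RAID-4 style): the largest drive becomes the dedicated
# parity drive (parity must be at least as big as the largest data
# drive), and the remaining drives keep their full capacity.
unraid_usable = sum(drives) - max(drives)        # 1.5 TB

print(f"RAID 5 usable: {raid5_usable:.1f} TB")
print(f"unRAID usable: {unraid_usable:.1f} TB")

With identical drives the two come out the same; the gap only opens up once the drive sizes differ, which is unRAID's (and FlexRAID's) main selling point.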
 
A couple of questions:
1) How much storage do you intend to provide?
2) What kind of performance are you looking for?
3) What's your budget to get this all done?

1) I intend to start with 1 TB of storage, maybe 2 TB, and add another hard drive of the same size when I need to expand.

2) As far as performance, I'm obviously limited by processing power, which was another reason to use a NAS OS. I'm not concerned about how long it takes to transfer or write files. My main performance concern is that I'm going to be streaming media from it to an HTPC I'm building, which leads to my next question: the RAID config and controller card I choose will affect processor load, right?

3) Since I'm building an HTPC, I want to keep costs as low as possible. Obviously I'm going to have to drop some cash on some hard drives, and the RAID config I use will determine how many drives I need, and also what controller card to use, right?
 
Your worst choke point won't be the CPU but the PCI interface - you'll never get anything over 30 MB/s reads or writes regardless of your hard drives.
Keep in mind that a single modern 7200rpm drive can easily read at 80-90 MB/s...
So filling even a 2TB volume would take you at least 18 hours...
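
That figure is just the volume size divided by the PCI ceiling - a quick back-of-the-envelope sketch in Python, using the approximate numbers above:

Code:
# Back-of-the-envelope fill time for a PCI-limited array.
volume_bytes = 2e12     # a 2TB volume
pci_ceiling = 30e6      # ~30 MB/s practical limit over 32-bit PCI

hours = volume_bytes / pci_ceiling / 3600
print(f"~{hours:.1f} hours to fill")    # ~18.5 hours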

Not to discourage you from doing it for the learning experience, but if your IT friend could give you HP dc5700 machines instead, you'd be in a better state, as they at least have SATA 1 interfaces.
 
The problem is that since you will only be using a PCI SATA controller, you're going to see a huge performance hit - slow file transfers and the like. Not to mention that the P4 isn't exactly energy efficient.

As far as OS goes, are you familiar with BSD or Linux? If not, are you willing to learn?

In regards to disk organization, you have a few options:

Yeah I really wish I had a PCI-E slot somewhere on this mobo.

As far as Linux, the only experience I really have is with the Android SDK. I was willing to pay for NASLite M2 based on all its features, but I wanted to see what other people thought about it and what other OSes people recommend.

I was thinking RAID 5. Would that eat up a lot of CPU power? I guess I could buy a controller card with an onboard processor for RAID. I'll check out some of those other options you linked me.

Thanks
 
Not to discourage you from doing it for the learning experience, but if your IT friend could give you HP dc5700 machines instead, you'd be in a better state, as they at least have SATA 1 interfaces.

Yeah, I have 0 SATA interfaces on the mobo lol.

Would the slow read speeds affect streaming significantly?
 
One point of note: once most people get a server, they tend to fill it faster than they would a local drive, since it's usually bigger than their local drive. The mental process of "hmm, I'm downloading 100GB onto a 640GB drive" is much different from "I'm downloading 100GB onto my 2TB system".

In my case, my data downloading/consumption tripled in a very short period of time, and it's now 10x what it was.
 
I was willing to pay for NASLite M2 based on all its features, but I wanted to see what other people thought about it and what other OSes people recommend.
The problem with NASLite M2 is that I see nothing that makes it worth buying. I mean, what specific features does it have that other storage OSes don't? In addition to the solutions I mentioned earlier, here are some other OSes to check out:
1) FreeNAS
2) WHS 2011
3) OpenIndiana with Napp-It appliance installed
4) OpenFiler
5) Nexenta Community Edition
6) Amahi Home Server

I was thinking RAID 5. Would that eat up a lot of CPU power? I guess I could buy a controller card with an onboard processor for RAID. I'll check out some of those other options you linked me.

If you're using non-Windows software RAID, it will eat up some CPU power, but not as much as Windows-based software RAID would. In addition, some of those old PCI controller cards with an actual XOR engine/processor may not support 1TB, let alone 2TB, drives. Plus they might be a bit expensive for what they are. Your best bet for a cost-effective solution would be to find the cheapest 4-port PCI SATA controller card that supports 1TB to 2TB drives and use some form of software RAID. As a price reference, you can get significantly faster 8-port SATA controllers for around $70 to $100 these days, so a slower 4-port SATA controller should cost no more than half that.


Would the slow read speeds affect streaming significantly?
Possibly, depending on the disk organization solution you use, the type of media you're streaming, how many PCs are accessing the server, etc.
 
Would the slow read speeds affect streaming significantly?

Streaming, probably not: 30 MB/s is still 240 Mb/s, which is more than enough to stream HD.
Also keep in mind these systems are not power efficient, so do expect a noticeable increase in your electric bill if you plan to run it 24/7.
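
For a sanity check on the streaming headroom, here's a quick Python sketch - the bitrates are rough, assumed figures, not measurements:

Code:
# Streaming headroom behind a ~30 MB/s PCI ceiling.
ceiling_mbit = 30 * 8   # ~240 Mb/s of read bandwidth

# Approximate video bitrates in Mb/s (assumed ballpark values).
streams = {"full-bitrate 1080p rip": 40, "1080p re-encode": 10, "DVD rip": 8}

for name, bitrate in streams.items():
    print(f"{name}: up to ~{ceiling_mbit // bitrate} simultaneous streams")

Even a full-bitrate 1080p stream uses only a fraction of that 240 Mb/s, which is why streaming is fine even while bulk copies crawl.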
 
The problem with NASLite M2 is that I see nothing that makes it worth buying. I mean, what specific features does it have that other storage OSes don't?

LOL, alright. I guess I should say hardware requirements rather than features. Initially I looked at FreeNAS but read mixed reviews from users, and I might be pushing it with OpenFiler's minimum hardware requirements. I'll look into the OS factor a bit more now that I know what I need. Thanks for the suggestions.

Possibly, depending on the disk organization solution you use, the type of media you're streaming, how many PCs are accessing the server, etc.

I'll probably end up using an OS that has software RAID. My highest performance demand will be streaming full 1080p on my HTPC. Only one computer, maybe two max, will be accessing the server at a time.

Streaming, probably not: 30 MB/s is still 240 Mb/s, which is more than enough to stream HD.

So in my situation, being limited by PCI isn't going to affect my ultimate goal, which is streaming HD content, which is great. It'll just take a long-ass time to copy large amounts of data, right?
 
With unRAID, does my parity drive have to be as big as my biggest drive?
 
Also, with the PCI interface maxing out at 30MB/s, what's the point of something like this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816124034

It's advertised as SATA II, up to 3 Gb/s - but with PCI realistically only good for ~0.25 Gb/s, it seems pointless. Why not just go with something like:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816132013

or

http://www.newegg.com/Product/Product.aspx?Item=N82E16816124024

I'll never get anywhere near even 1.5 Gb/s with PCI, right? Is my train of thought going in the correct direction on this?
 
Don't go down the SiL3114 route unless you like trouble...
As for the posts above, I can tell you that ZFS is not going to work on your hardware; a 32-bit CPU and 1GB of RAM won't be able to handle it. You might instead want to look at plain RAID 5 on either Linux or FreeNAS/BSD, which your hardware should handle much better than ZFS. As others have stated, your PCI card is going to be a severe bottleneck - I can tell you from first-hand experience that anything above 25 MB/s is impossible to achieve if the whole array is attached to the PCI controller (tested with a Promise SATA2 PCI controller card). Streaming shouldn't be an issue, but you may (most likely will) have trouble doing two things at once due to the very limited bandwidth, so don't be too surprised if you get stuttering while simultaneously streaming and transferring data to the array.
//Danne
 
I'll never get anywhere near even 1.5 Gb/s with PCI, right? Is my train of thought going in the correct direction on this?

Yes.

diizzy does have a point: ZFS isn't the right choice for your setup.
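
To put numbers on it, here's a rough Python sketch using the approximate figures from this thread: every port on the card shares the one PCI bus, so the bus sets the ceiling no matter what the SATA spec says.

Code:
# Why a faster SATA spec doesn't help behind plain PCI: all the
# ports share one bus, so total throughput is capped by the bus.
pci_bus_practical = 30   # MB/s, real-world 32-bit/33MHz PCI (see above)
ports = 4

for name, per_port_mb_s in [("SATA I card", 150), ("SATA II card", 300)]:
    effective = min(pci_bus_practical, ports * per_port_mb_s)
    print(f"{name}: ~{effective} MB/s total")   # both land at ~30 MB/s

The per-port numbers are the theoretical SATA link rates (1.5 Gb/s and 3.0 Gb/s, roughly 150 and 300 MB/s); the 30 MB/s bus figure is the practical ceiling mentioned earlier, not the 133 MB/s theoretical PCI maximum.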
 
Alright... well, based on all the responses, I'm thinking salvaging the old dc5000s might not fit my needs. I was thinking I might be able to put them to use, but oh well.

I have some mismatched parts from an old gaming rig and a more efficient processor I could definitely use:

Intel DQ965GF mobo
Intel 6420 dual core 2.13GHz CPU
EVGA 8800GT GPU
Cooler Master 600W power supply

I might just sell the GPU since I don't need it for a server rig... or I might just use the rig to stream directly to my TV until I finish my HTPC.

What do you guys think?
 
Just wondering if your statement applies to something like the SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X 133MHz SATA II (3.0Gb/s) controller card? Does anyone have experience with it only giving 30 MB/s?

This card is a different animal, provided that you use it in a motherboard with proper PCI-X slots (64-bit, either 133MHz or 100MHz).

The problem is that PCI-X has been more or less obsoleted - you can still find motherboards with 64-bit 133MHz or 100MHz slots, but they will be very expensive brand new.

And do not pay that sticker price - these are available on eBay for $30-40.
 
Alright... well, based on all the responses, I'm thinking salvaging the old dc5000s might not fit my needs. I was thinking I might be able to put them to use, but oh well.

I have some mismatched parts from an old gaming rig and a more efficient processor I could definitely use:

Intel DQ965GF mobo
Intel 6420 dual core 2.13GHz CPU
EVGA 8800GT GPU
Cooler Master 600W power supply

I might just sell the GPU since I don't need it for a server rig... or I might just use the rig to stream directly to my TV until I finish my HTPC.

What do you guys think?

The Intel DQ965GF is a much better choice - at least it has a PCIe x1 slot with 250MB/s of theoretical capacity (the PCIe x16 slot can only be used for a video card).
It already includes a 1Gb NIC, so that's good news.

I think most of the popular LSI 1068-based SAS controllers would work in a PCIe x1 slot, though limited in bandwidth. An array of 4-5 drives would be ideal; more drives could add extra storage or redundancy, but will not add higher performance.

The video card is of course redundant, and the PSU is way too powerful - it would be a big waste of electricity. 300-350 real watts should be enough.
 
This card is a different animal, provided that you use it in a motherboard with proper PCI-X slots (64-bit, either 133MHz or 100MHz).

The problem is that PCI-X has been more or less obsoleted - you can still find motherboards with 64-bit 133MHz or 100MHz slots, but they will be very expensive brand new.

And do not pay that sticker price - these are available on eBay for $30-40.

Thanks for the explanation. I just contacted Amazon... it's going back. I'm just going to go with dual Intel SASUC8I cards flashed to LSI IT firmware, and hope the system can see two of the same card.
 
The Intel DQ965GF is a much better choice - at least it has a PCIe x1 slot with 250MB/s of theoretical capacity (the PCIe x16 slot can only be used for a video card).
It already includes a 1Gb NIC, so that's good news.

I think most of the popular LSI 1068-based SAS controllers would work in a PCIe x1 slot, though limited in bandwidth. An array of 4-5 drives would be ideal; more drives could add extra storage or redundancy, but will not add higher performance.

The video card is of course redundant, and the PSU is way too powerful - it would be a big waste of electricity. 300-350 real watts should be enough.


PCIe x16 can be used for anything.

My motherboard has two of them, and I have PCIe x4 cards in both: an IBM br10i and an HP NC380T, neither of which is a video card.
 