building a nas, input...

Hey all,
So I think I finally have the time and capacity to build my own NAS. Didn't really want to buy one, because I reckon I can build one cheaper, better, etc.
And with the good reviews FreeNAS gets, why shouldn't I?

Anyway, I was hoping you all could chip in with some info; there's still a bit I don't understand.
Here's my basic plan:

I'd like this NAS to be flexible in terms of drive additions; if I need to add two at a time, so be it.

Case: custom, will plan this around the final parts I settle on.
Backplanes: 2x 6-bay SATA backplanes (SATA disks, of course)
OS: FreeNAS
Mainboard and RAM: (based off your input)
RAID config/controller: (as above; obviously has to support Linux)

What I'd like help with is
RAID card selection
and RAID config.
I was thinking RAID 5, but I dunno; I wanna keep adding drives over time, and have them hot-swappable if possible. I'd like to manage one array, but if need be, two are fine.

I'd like to start off with 2x 2TB drives and go from there.
If possible, could someone point me to a good RAID FAQ? I still don't know how the backplane integrates with the RAID controller.

Let loose and don't hold back, thanks all.
Oh, and pics too if I get this thing off the ground. I can't wait to kick this project off!
 
12 hot-swappable bays? For that much money, you're better off with the Norco RPC-4020 or 4220 cases. Main reason being that 4-in-3 hot-swap bays cost roughly $95 each. Since you want 12 hot-swap bays, that's about $300 for just the 4-in-3 hot-swap bays alone. At that point, you're way better off just paying the $300 for a Norco RPC-4020 or 4220, which gets you 20 hot-swap bays plus the case itself.

Nitpicking here, but FreeNAS isn't Linux: it's BSD. With that said, if there is Linux support, there generally is BSD support. But just nitpicking there.

Also, RAID 5 requires a minimum of 3 drives, so if RAID 5 is the way you want to go, you're gonna need three drives.
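To put rough numbers on the trade-offs between levels, here's a quick back-of-the-envelope usable-capacity calc. The drive count and size are example figures only (four 2 TB drives), not a recommendation:

```shell
#!/bin/sh
# Usable capacity per RAID level for n identical drives of s TB each.
# n=4 and s=2 (four 2 TB drives) are example numbers, not a recommendation.
n=4
s=2
raid5=$(( (n - 1) * s ))   # one drive's worth of parity
raid6=$(( (n - 2) * s ))   # two drives' worth of parity
raid10=$(( n * s / 2 ))    # everything mirrored
echo "RAID5: ${raid5}TB  RAID6: ${raid6}TB  RAID10: ${raid10}TB"
```

With only the two drives you're starting with, RAID 1 (a mirror) is the realistic option; the parity levels only start paying off at three-plus disks.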

Also what's your max budget for this server, drives not included?
 
Linux (or BSD) software RAID and an LSI 1068E or other well-supported SAS chip for your controller. Lets you use expanders if needed. Also gives you 8 ports, can be moved to another machine if needed, etc. Hardware RAID 5 will cost quite a bit more to build and won't really offer much in the way of practical benefits if you're limited by network speed anyway.

For a chassis, the Norco stuff isn't terrible, and it's a hell of a lot cheaper than a real server-grade Supermicro or Chenbro chassis. On the other hand, you can often find used Supermicro stuff on eBay fairly cheap. Look for the 3U chassis with 15 SATA hot-swaps across the front; I think it's the SC935 or SC936 model number. See them fairly regularly with redundant PSUs for about $225; sometimes they even come with a couple-year-old Opteron board with CPUs and RAM, even, heh.

Buying new, though, the Supermicro stuff has a tendency to break the bank a bit, since it's not exactly consumer-grade cheap stuff. On the cheaper end might be a new SC743TQ-865B: 4U pedestal, 8 hot-swap bays and 3 5.25" bays. Even comes with a badass 865W PSU. Cheaper yet would be a Chenbro SR106/107. The one I'm thinking of is a 5U pedestal with two drive-bay cages, 3 5.25" bays and a floppy/boot drive bay. The two drive cages are removable, and on the cheap model they hold 4 fixed drives each. You can get 4-drive hot-swap cages, or 6x 2.5" hot-swap cages, for a bit more money. About $130 with two fixed cages and no PSU.
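For anyone wondering what the software RAID side of this actually looks like day to day, here's a rough mdadm sketch. The /dev/sdX names are placeholders for whatever your controller exposes, and this is a sketch, not a tested recipe:

```shell
#!/bin/sh
# Sketch only: Linux software RAID with mdadm. /dev/sdb../sde are
# placeholder device names; run as root on real hardware.

# Create a 3-drive RAID 5 array:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Later, grow the array when you add a fourth drive -- the "keep adding
# drives over time" part that hardware cards often charge extra for:
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4

# Watch the initial sync / reshape progress (expect hours on 2 TB drives):
cat /proc/mdstat
```

The portability point above is the real win: that md array assembles on any Linux box with enough ports, no matching controller required.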

My old Q6600 build was in the Chenbro; my new dual Xeon E5520 build is in the Supermicro SC743. Both are nice cases, but frankly the Supermicro is a LOT nicer. You can also order the super-quiet version, or just replace the stock hot-swap fans with the SQ fans for not a lot of money.
 
Gosh, you guys have just put the bullet through the heart of this one.
Did a bit of googling and I can get the
Norco RPC-4020 Rackmount Server Chassis, No PSU - 4U
for AU$550.
Backplanes included, and it supports 20 disks.
All I need is the mobo plus extras, disks, RAID controller and PSU.
Seems like a good start.

Also, to answer: max budget is sliding; that's what I'm willing to spend on the case so far, keeping AUD in mind.
Also, this is a NAS, not a server...
 
Ok, I wish I could help you more, but I'm kind of new to FreeNAS and what hardware it supports. With that in mind, you'll have to put in the legwork to make sure the parts are supported, but I would go with the following:

Drives - Any 1-2 TB drives of your choice. Personally I think 1.5 TB is the sweet spot for price versus capacity, and I have always been a Western Digital fan.

Mobo and CPU - You don't need quad cores running at 3 GHz here. FreeNAS is light. I have it running a JBOD array on a 250 MHz box (MHz, NOT GHz), and I also have it running on an older 1600 MHz system. Now, I will comment that on the 250 MHz system my CPU usage hovers around 80%, but on the 1600 MHz it never breaks about 7%.

RAM - Same thing. I'd try one stick of 1 GB. Buy brand-name RAM, though, for sure.

NIC - If your mobo doesn't have onboard gigabit, or it's not supported, definitely invest in an Intel NIC.

SATA card - I believe the 8-port Supermicro non-RAID card is compatible, and it's about $100 new.


Note that this is NOT a hardware RAID setup, but it's a lot more wallet friendly. I wish I could tell you more; in a few days I'm going to be converting the large FreeNAS box to software RAID, and if you're interested I'll let you know how responsiveness etc. turn out. I hope this helps you.
 
Also, to answer: max budget is sliding; that's what I'm willing to spend on the case so far, keeping AUD in mind.
Also, this is a NAS, not a server...
Sorry I don't understand: What exactly do you mean that your "max budget is sliding"?

Don't see why you reminded us that this is a NAS. It's gonna have similar hardware to a file server anyway just a little lower-end.

Now do you want hardware RAID or software RAID?
 
In the interest of also wanting to build my own NAS/SAN, I have wondered how this works with appliances like NetApp. I've never had a chance to look inside a NetApp appliance, but from what I can see, the disk shelves would only need a chassis for all the disks, a sufficient backplane (with or without RAID, depending on your needs/wants) and a good power supply to power all the disks. The backplane then has an external interface that allows for attachment to a server or a NAS/SAN switch, usually FC. Am I right or wrong in this assumption? Just wondering, as I would like to build my own SAN/NAS in this fashion so that multiple servers could use the same disks within the FC network.
 
Just be sure to get a beefy single-rail 12V PSU like a Corsair or similar, and lots of RAM if you intend on software RAID.
 
and lots of RAM if you intend on software RAID.

4GB should be sufficient. Of the 12 to 15 Linux software RAID arrays (RAID 5 or 6) I have here at work, most systems have 4 GB, with the exception of some newer systems that have 8 GB. Most of these systems read at over 300 MB/s and write at over 200 MB/s, not that that helps much, as the all-gigabit network can only transfer around 80 MB/s per NIC.
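That ~80 MB/s figure lines up with what gigabit can physically deliver; the 80% efficiency factor below is just a rough illustrative assumption for protocol overhead:

```shell
#!/bin/sh
# Back-of-the-envelope gigabit NIC math: 1 Gbit/s over 8 bits/byte gives
# the raw ceiling; TCP/SMB overhead trims it further (the 80% factor is
# a rough assumption, not a measured number).
raw=$(( 1000 / 8 ))          # 125 MB/s theoretical ceiling
real=$(( raw * 80 / 100 ))   # ~100 MB/s on a good day; 80 MB/s is common
echo "raw: ${raw} MB/s, realistic: ~${real} MB/s"
```

Which is why a second NIC, or a better switch, matters more than array speed once the disks outrun the wire.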
 
In the interest of also wanting to build my own NAS/SAN, I have wondered how this works with appliances like NetApp. I've never had a chance to look inside a NetApp appliance, but from what I can see, the disk shelves would only need a chassis for all the disks, a sufficient backplane (with or without RAID, depending on your needs/wants) and a good power supply to power all the disks. The backplane then has an external interface that allows for attachment to a server or a NAS/SAN switch, usually FC. Am I right or wrong in this assumption? Just wondering, as I would like to build my own SAN/NAS in this fashion so that multiple servers could use the same disks within the FC network.

Guess where I am right now :)

Their filer systems have one or two controllers, CPUs, ECC RAM, and add-on cards such as NICs. If you want, I can find a rack and take pics.
 
4GB should be sufficient. Of the 12 to 15 Linux software RAID arrays (RAID 5 or 6) I have here at work, most systems have 4 GB, with the exception of some newer systems that have 8 GB. Most of these systems read at over 300 MB/s and write at over 200 MB/s, not that that helps much, as the all-gigabit network can only transfer around 80 MB/s per NIC.

Needs a better switch :)
 
Guess where I am right now :)

Their filer systems have one or two controllers, CPUs, ECC RAM, and add-on cards such as NICs. If you want, I can find a rack and take pics.

That would be sweet. I'd love to see the inside of the disk array and the inside of the controller system. I've always wondered what was inside a NetApp.
 
I just thought of something. Am I confusing what I described above with a port multiplier? Essentially, NetApp disk shelves connect to the filer controller, which connects to the network, so it sounds like I am thinking of multipliers. Also, with every NetApp disk shelf that is added and daisy-chained to the one before it, doesn't that take away from the overall bandwidth available, or is NetApp different?

If only I could afford a NetApp for my home network. But then, half the fun is in building your own storage system, so I probably wouldn't enjoy it as much as if I had built it myself.
 
I'll take some pictures when I get a chance this week.

NetApp's basic setup is made up of filers and disk shelves (although the FAS2000 line, for example, has up to two controllers plus room for disks). The filer contains the "brains" while the disk shelves contain... well, disks.

I was thinking about getting a FAS2020 for home, but decided to just build my own Norco unit. Totally different, since I'm also hosting multiple VMs on the box, and therefore am using much more CPU power than a NetApp filer needs (in general).
 
Mobo and CPU - You don't need quad cores running at 3 GHz here. FreeNAS is light. I have it running a JBOD array on a 250 MHz box (MHz, NOT GHz), and I also have it running on an older 1600 MHz system. Now, I will comment that on the 250 MHz system my CPU usage hovers around 80%, but on the 1600 MHz it never breaks about 7%.

RAM - Same thing. I'd try one stick of 1 GB. Buy brand-name RAM, though, for sure.

NIC - If your mobo doesn't have onboard gigabit, or it's not supported, definitely invest in an Intel NIC.

SATA card - I believe the 8-port Supermicro non-RAID card is compatible, and it's about $100 new.

Cool, cool. Thanks for the starter. I might try and get two Intel NICs in, to up read/write throughput. Will see.

Danny Bui said:
Sorry I don't understand: What exactly do you mean that your "max budget is sliding"?

Don't see why you reminded us that this is a NAS. It's gonna have similar hardware to a file server anyway just a little lower-end.

Now do you want hardware RAID or software RAID?

Err, max budget is flexible; it's not really set at the moment, but so far it is sliding... up -.-
I needed to remind ME that it is a NAS; otherwise I'd try to build a file server just 'cause someone mentioned it. Yeah, I know, I have ADD.
As for software/hardware RAID, I'm not sure. I think I will go with hardware RAID, since I know a little (and I stress, a little) more about it. Which is nothing, really.

staticlag said:
Just be sure to get a beefy single-rail 12V PSU like a Corsair or similar, and lots of RAM if you intend on software RAID.

I'd say my mobo choice may limit s/w RAID, going on the above quote.
I'd like to keep it as power efficient as possible.
It'll be for home, on all the time.

So far I am leaning toward the Norco RPC-4020.
Will look around for alternative cases tonight.
 
Oh, can someone recommend a RAID FAQ that isn't Wikipedia?
Also, I will look to replace the stock fans on the case with quiet ones; anyone have any suggestions?
 
Anything above 5 spindles and I'd do RAID 6. Or raidz2. The cost of a hardware RAID controller would likely be more than just switching to RAID 10, even with the extra drives that requires. Really, for 95% of solutions I think software RAID is the way to go. Realistically, even many of the "hardware" RAID options for SAN stuff are actually software RAID running on Linux, with an embedded OS and a config GUI/program to set it all up. Add battery backup for the RAM modules, and who's to say it's not as nice or as full-featured as true hardware RAID.

I would also take a hard look at chassis options, and I would look at motherboards that let you use ECC RAM. Those are far higher on my list than hardware RAID. Doubly so if you aren't going to NEED 500+ MB/s speeds out of it.
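For the raidz2 route mentioned above, a minimal ZFS sketch looks like this. The pool name "tank" and the ada* device names are made up, and this is a sketch of the idea, not a tested build:

```shell
#!/bin/sh
# Sketch: a raidz2 pool under FreeBSD/ZFS (what FreeNAS builds on).
# "tank" and the ada* device names are placeholders; run as root.

# Six disks, any two of which can fail without data loss:
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

# Health and capacity at a glance:
zpool status tank
zpool list tank

# Caveat for the "add drives over time" plan: you can't add single disks
# to an existing raidz2 vdev; you grow the pool by adding another whole
# vdev, e.g. a second six-disk raidz2:
#   zpool add tank raidz2 ada6 ada7 ada8 ada9 ada10 ada11
```

That expansion caveat is worth weighing against mdadm, which can reshape an existing array one disk at a time.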
 
Could someone link me a good FAQ (including recommended requirements etc.) for software vs hardware RAID arrays? I know it's pretty lazy; I'm hoping someone out there has seen a good, peer-reviewed one.
 
I am quite confused here. Do you want to build a NAS or a server? (No, I'm not gonna explain the difference; Google is your friend here.) How much do you really know about RAID, hardware and software? Be honest with yourself; you'll save money and headaches later. How comfortable are you with FreeNAS? Do you need it fast or reliable? Or both? Sit down, seriously reconsider your needs and post back. I will comment, though, that if you're not willing to do some research yourself, your best bet may be to buy a premade.
 
Why would you want hardware RAID if you opt for FreeNAS? The idea is building a NAS cheaply, like 150 dollars for a complete NAS system excluding disks. So cheaper, more extensible and faster are the advantages over premade NAS systems. If you go hardware RAID, you lose the "being cheap" argument.
 
Why would you want hardware RAID if you opt for FreeNAS? The idea is building a NAS cheaply, like 150 dollars for a complete NAS system excluding disks. So cheaper, more extensible and faster are the advantages over premade NAS systems. If you go hardware RAID, you lose the "being cheap" argument.

^^ I'd second this. HW RAID shines on OSes like Windows, where there isn't a great software RAID. Stick to one or the other.
 
I'm pretty sure a NAS is what I need to suit my archiving needs (large storage area with some redundancy).
I'd like to share to Linux and Windows machines (Windows takes priority, though).

This is for home;
it doesn't need to be particularly speedy read/write-wise, but I'd like increased performance where I can get it. Would like some input on this too, e.g. whether I add an extra NIC; not sure.
If possible, energy efficient. Needs to be relatively low maintenance; I'll be checking in on any drive-failure alerts etc.
As quiet as possible, but not essential.
Modular, as in new disks can be added to the array(s) over time with ease.
I'd like to be able to get it up and running sweetly, then check in on it daily.

Now, whether or not FreeNAS can deliver all this, I don't know.
Hardware or software RAID, I don't know.

Premade isn't really an option due to pricing, lack of drive bays and lack of customizability.
$1,775.40 for a Thecus i5500 vs a Norco 4020 for $550; I think we know who wins on price per bay.

I still have some unanswered questions and haven't really solidified anything yet.

I am leaning toward a Norco case with an OCZ 850W and Hitachi 2 TB disks.
Still unsure on:
RAID config, likely 5 or 6
OS (if someone wants to recommend an alternative, please do)
controllers (be it hardware or software)
mainboard/CPU/RAM combination

Not particularly familiar with BSD/FreeNAS, but from what I saw on the documentation page it looked pretty straightforward.

Anyway, the point of all this is that I could plod ahead with the little I know, but I am sure that would turn into a failed experiment: a JBOD with zero redundancy.

I am really looking for input here. Maybe share your own setup at home, or modify it a bit, and point out pros and cons.
If you were going to build a NAS (for home), how would you do it?

Just keep in mind that I am limited as to what I can get my hands on, being in the South Pacific and all. I was surprised that I could find Norco down here; not all of us have Newegg at our disposal. Thanks all, hope this clarifies what I am trying to do.
 
Ok, so it seems price and availability are pretty high on the concerns list.

The premade Thecus is $1700 (yeah, no way for me either), BUT some of the things you have mentioned aren't cheap either (NIC teaming, hardware RAID cards, etc.), so...

Norco = $500, 2 TB Hitachi drives x 4 = ~$600, hardware RAID card (and this is a guesstimate) about $300 to $600; so far you have ~$1400-$1700 into it. Now granted, you're going to get a lot more bang for your buck with the DIY route, but also know that when you start discussing things like hardware RAID, it gets expensive quick.

My setup (about 4 TB duplicated, so I am running 8 TB actual):

Server 1 - An older PC running WHS; I let WHS handle the JBOD, etc.

Server 2 - Backs up server 1. Was using WHS, now going to a Linux solution; Mint with a share is all now. Using a very cheap RAID card, lol, wouldn't recommend it to anyone: SiI3114 chipset, but hey, for $20 I took a chance.

Reasoning and logic for this setup: 1) WHS, even with duplication, has screwed up and lost data, so I just disabled it and back up to a whole new machine. 2) All servers and main PCs use gigabit Ethernet. 3) I didn't need hot-swap. Yeah, it's a pain to pull the server off the rack to upgrade or add an HD, but honestly, with drives in the 1-2 TB range, how often am I gonna have to do that?

What I would do: find a decent case (I have seen ATX mid-towers for as little as $50 with 8 HD bays). Get a decent NIC (Intel) for $35, a decent SATA card (I have heard good things about, and am using, the Supermicro 8-port non-RAID card, $100), a decent CPU and mobo, some good RAM (apparently the more the better for software RAID), and play with FreeNAS. I use it and it's really nice, but it's not for everyone. I also use WHS and Linux; each has its own pros and cons. Then, if you find FreeNAS is all you need, sweet, you're golden. If not, then maybe research WHS before you spend any money.



EDIT: FreeNAS is pretty simple to set up and use. The problem is, once you have a problem, there's virtually no one to help you. The user community in general are a bunch of smug assholes. If you're lucky, a search will turn up the exact same problem you had, with a solution.
 
Appreciate your thoughts there, Jay. I know what you mean about some in the open-source community.
Looking into s/w vs h/w RAID, people seem really divided on the subject:
some in the s/w camp stating the hardware RAID controller is the biggest single point of failure;
some in the h/w camp stating the OS does not provide the comprehensive capability that a RAID controller does.

If anything, this process has highlighted the glaring omissions in my knowledge of RAID arrays, which was limited at best.

If someone could answer this final question, that'd be swell:

If I choose the software path, which is what I am leaning toward, how do I interface all the drives with the mainboard, or the backplane to the mainboard?
Do I need an expander, daisy chain or some connector?
Obviously there will be SATA ports on the board, but I doubt there will be enough for the number of drives I will array, so I will need a solution to this.

I was always under the impression that all the drives hung off the RAID controller. Like I said, glaring omissions, right? Anyway, I thought the RAID controller regulated the number of drives I could (physically) maintain in a RAID array. If anyone has any suggestions on the above question, it'd really help.

Finally, my setup will be based around these thoughts:

RAID 5
FreeNAS
s/w controlled
Norco case? (still not sure)
Still need a mobo/CPU/RAM combo; a little easier to figure running with s/w RAID.
I'm thinking a Q6600 board and 4 GB RAM should handle the job.
Maybe an extra NIC to run with the onboard, but not essential; still need to figure out if I can make it work.

Anywho, thanks for the input, all.
 
Unless you can find the Q6600 and 4 GB of DDR2 RAM for cheap, don't bother with the Q6600. The AMD Athlon II X4 620 offers the same performance but on a newer socket.

Ok, gonna try to answer your questions:
If you go with the Norco 4020 case and the software RAID, WHS, FreeNAS, or FreeBSD ZFS route, then this classic setup is all that you need:
$100 - SuperMicro AOC-SAT2-MV8 PCI-X (PCI-compatible) 8-port SATA controller card

You just connect a whole bunch of SATA cables from the controller card to the backplane. Pretty simple.

If you go the WHS, Red Hat, or SUSE Linux route, then I'd recommend this card instead:
$100 - SuperMicro AOC-SASLP-MV8 PCIe x4 8-port SATA controller card
$36 - 2 x 3ware SFF-8087 to multi-lane SATA forward breakout cables
---
Total: $136

In general, yes, the number of ports on the RAID controller determines the number of HDDs you can place in a true hardware RAID array. For software RAID, depending on which one, the number of HDDs in the array is pretty much determined by the total number of SATA ports in the system. So theoretically you can create a RAID array from drives on the mobo's SATA ports plus drives on another SATA controller card.
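A quick sketch of that last point, with hypothetical device names (software RAID doesn't care which controller a disk hangs off):

```shell
#!/bin/sh
# Sketch only: one md array spanning two controllers. Here sda/sdb sit on
# the motherboard SATA ports and sdc/sdd on an add-in HBA; mdadm treats
# them all identically. Device names are placeholders; run as root.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Record the layout so the array assembles at boot (config file path
# varies by distro, e.g. /etc/mdadm/mdadm.conf on Debian):
mdadm --detail --scan >> /etc/mdadm.conf
```

The flip side: mixing controllers means the array is only as fast and as reliable as the worst port it touches, so keep the cheap chipsets out of it if you can.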

You might want to hit up these links for a bit more info on FreeNAS + ZFS:
http://hardforum.com/showthread.php?t=1505320
http://hardforum.com/showthread.php?t=1500505
 
Just swap the I/O plate and use the UIO cards. PCIe x8 with the components on the "wrong" side: the AOC-USAS-L8i. It's an LSI 1068E-based card with 2 mini-SAS ports; it lets you run 8 drives without expanders, or I think 144 drives with expanders. The card is about $130. The newer AOC-USAS2-L8i is based on the LSI SAS2008 SAS2 controller, for about the same price. It doesn't have driver support for quite everything yet, though, so check whether your OS supports it first. As for NICs, you can pick up nice x4 dual-port gigabit cards for under $100 as well; that's Intel-based. If you're willing to use dual-port Marvell NICs, they can be found for about $50.
 
Ok, some more thoughts for you. I currently have a JBOD FreeNAS setup. It's on an OLD PC. Here are the specs: 250 MHz CPU (notice, MHz, not GHz), 64 MB of memory, 3 IDE HDs (160 + 80 + 40, for about 253 GB total). It runs from CD-ROM and saves the config to an old thumbdrive. It transfers at about 3-4 MB/s. The PC pretty much stays maxed at around 90% usage, but that's to be expected. I had all the parts and was curious about FreeNAS, so I made this one. It's old and slow, but it was a blast to tinker with, and hey, I got 253 GB of network storage from parts that otherwise would have been wasted. So that's cool.

Now the uncool. When I tried to make a newer, bigger, faster version of a FreeNAS box, I had problems with any type of array. Drives wouldn't get mounted on boot, etc. The secondary partition was always reported corrupted (after reading more than a few threads about this and not finding an answer, I gave up), so my new 4 TB worth of SATA drives wouldn't work. Weird, but I finally just installed Mint (a lot like Ubuntu) and, after tinkering, got my array working fine; I use it as a backup. The specs for this one are a 1600 MHz single-core processor and 768 MB of RAM, running a software RAID 0 stripe. I haven't quite got all the networking bugs worked out, but it is gigabit, and I am getting around 15 MB/s transfers to the box using just TeraCopy. The processor stays right at 60% usage when just transferring files to it, and occasionally jumps to 99% for a very short period.

Also, just to throw more wrenches in the works: in the good old days (before WHS, before I learned of FreeNAS, and when 1 GB was a lot of stuff) I actually had an XP Pro machine set up with a shared folder running fakeraid.

So as you can see, even on a low budget there are tons of ways to do it. Good luck.
 
Just swap the I/O plate and use the UIO cards. PCIe x8 with the components on the "wrong" side: the AOC-USAS-L8i. It's an LSI 1068E-based card with 2 mini-SAS ports; it lets you run 8 drives without expanders, or I think 144 drives with expanders. The card is about $130. The newer AOC-USAS2-L8i is based on the LSI SAS2008 SAS2 controller, for about the same price. It doesn't have driver support for quite everything yet, though, so check whether your OS supports it first. As for NICs, you can pick up nice x4 dual-port gigabit cards for under $100 as well; that's Intel-based. If you're willing to use dual-port Marvell NICs, they can be found for about $50.


The AOC-USAS-L8i is an AMAZING HBA for the price, IMO. Instead of swapping the I/O plate on it, you can just use longer screws and a spacer (I think it's just about a 1/4" spacer or so) and the I/O shield will line up with the slot BEHIND the card. Works great for my cards :)

I actually just got one of the HP expanders the other day, so I will be able to let you know how it works with an expander as well, as soon as my cables get here tomorrow.

21 drives should be fun to play with :)
 