Recommendations: Expandable NAS, RAID 6, w/ best throughput

jaw4322 (n00b · Joined May 26, 2010 · Messages: 58)
Hello All

Starting a new NAS project, and what better place to get advice than HardForum. I'm looking to build a NAS with a 10-16 drive hot-swap chassis and RAID 6 that is easily expandable and gives the best throughput on our network. Expandability and throughput will be the main focus. If I fill up a chassis, I want to be able to connect another chassis and continue on. I will also need fast file transfers in and out, mainly moving large video files by network share and FTP.

As far as hardware goes, I would like your input. Recommendations on RAID controllers, SAS expanders, motherboards, and an OS (FreeNAS? Server 2008?) are really what I'm looking for. If you know of other builds on this forum, let me know. I appreciate any help.

Thanks
 
I would think most recent and decent NAS devices will give you pretty fast throughput, but you should really define:

- What you mean by fast throughput
- Will there be several simultaneous users
- What else will there be apart from large video files

Streaming large files is pretty easy on a NAS, but if you have another user doing operations with lots of small files at the same time (say, storing several large email folders), it might kill the streams. Usually IO is more of a killer than throughput.
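To put rough numbers on that point, here is a back-of-envelope sketch (not a benchmark; the ~100 MB/s sequential rate and ~10 ms per-file overhead are assumed figures for a single 7200 RPM disk, and a real array will differ):

```python
# Back-of-envelope: why many small files hurt more than one big stream.
# Assumed figures for a single 7200 RPM disk (not measured on any array):
SEQ_MBPS = 100.0             # sustained sequential throughput, MB/s
PER_FILE_OVERHEAD_S = 0.010  # seek/metadata cost per small file, seconds

def transfer_time(total_mb, file_count):
    """Estimate seconds to move total_mb spread across file_count files."""
    return total_mb / SEQ_MBPS + file_count * PER_FILE_OVERHEAD_S

# 1 GB as one video file vs. the same 1 GB as 10,000 small emails:
print(f"{transfer_time(1024, 1):.0f} s")       # ~10 s, bandwidth-bound
print(f"{transfer_time(1024, 10_000):.0f} s")  # ~110 s, overhead-bound
```

Same bytes, roughly 10x the wall-clock time once per-file IO dominates, and that IO competes directly with the streams.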
 
pj - thanks for the info, I'll take a look.
drop - I'm looking to part this together, so no all-in-one units. It's way more fun. ;)
- What you mean by fast throughput: fast file transfers (FTP/share)
- Will there be several simultaneous users: only 2-10 connections at a time (rough math below)
- What else will there be apart from large video files: only large video file transfers, no streaming or anything like that
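To put numbers on those 2-10 connections, here is a quick sketch of how a single gigabit link divides among simultaneous large transfers (the ~70% protocol efficiency for SMB/FTP is an assumption, not a measurement):

```python
# How one GbE link divides across simultaneous large transfers.
# Assumptions: 1 Gbit/s link, ~70% efficiency after SMB/FTP overhead.
LINK_MBPS = 1000 / 8 * 0.70  # ~87 MB/s usable

for users in (1, 2, 5, 10):
    print(f"{users:2d} transfers -> ~{LINK_MBPS / users:.0f} MB/s each")
```

At 10 concurrent transfers each one gets under 10 MB/s, so the network link, not the array, would likely be the first bottleneck; link aggregation or 10GbE would be the usual fixes if per-user speed matters.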

Thanks, I look forward to seeing all the different ideas.
 
I'm trying to figure this out and I'm a bit confused about which setup is best for us. Please tell me if this is the correct way to do this, and if there is a better way, let me know. I just want to be able to expand as much as possible and still get good r/w speeds. I guess I'm looking for the max-expandability scenario in a 12-16 bay multi-chassis setup.

Setup:
Areca ARC-1680IX-12 PCIe x8 SAS RAID card in a 12-bay hot-swap chassis, RAID 6
LSI SAS 9200-16e 4-port HBA in the same chassis, which connects to the Areca external port
The other 3 ports can then expand to Habey DS-1280 3U 12-bay or other SAS expander chassis
Which would give me a total of 48 drives and 4 chassis.
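Checking that drive math, and what RAID 6 would leave usable (a sketch only; the 2 TB drive size is just an example, and treating all 48 drives as one RAID 6 set is the simplest case, not necessarily how the Areca would carve it up):

```python
# Drive count and usable space for the proposed 4-chassis layout.
BAYS_PER_CHASSIS = 12
CHASSIS = 4
DRIVE_TB = 2.0  # example drive size (assumption)

total_drives = BAYS_PER_CHASSIS * CHASSIS  # 48
# RAID 6 gives up two drives' worth of capacity per array for parity:
usable_tb = (total_drives - 2) * DRIVE_TB  # 92 TB if built as one array
print(total_drives, "drives,", usable_tb, "TB usable")
```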

I'm guessing there's a way to do this that allows more chassis/drives? Thanks guys.
 
What are your requirements for availability? Is this a work thing or a private hobby? Is it a problem if the system is down for a day? How about a week?
 
This is both a hobby and a work project. This one will probably be for work; I will most likely build something similar for home. I would like it to be up all the time. If a drive goes, it's just hot-swapped. If something did happen, we could get by for a day or two, but that's worst case. We would most likely have spares for the critical parts.
 
Here is my prelim setup. Looking for some feedback. Does anyone know the performance I might get on this setup?

Host Server:
24-drive hot-swap chassis
ARC-1680IX-24-4G: 1 out to expansion 2, 6x Mini-SAS to Mini-SAS to 24 drives (backplane), RAID 6 host

2nd Expansion Chassis:
24-drive hot-swap chassis
Chenbro CK13601: 1 in, 1 out to expansion 3, 6x Mini-SAS to Mini-SAS to 24 drives (backplane)

3rd Expansion Chassis:
24-drive hot-swap chassis
Chenbro CK13601: 1 in, 1 out to expansion 4, 6x Mini-SAS to Mini-SAS to 24 drives (backplane)

4th Expansion Chassis:
24-drive hot-swap chassis
Chenbro CK13601: 1 in, 1 out to expansion 5, 6x Mini-SAS to Mini-SAS to 24 drives (backplane)

5th Expansion Chassis:
16-drive hot-swap chassis
Chenbro CK12803: 1 in, 1 out to expansion 6, 4x Mini-SAS to Mini-SAS to 16 drives (backplane)

6th Expansion Chassis:
16-drive hot-swap chassis
Chenbro CK12803: 1 in, 4x Mini-SAS to Mini-SAS to 16 drives (backplane)

Totaling a max of 128 drives off the ARC-1680IX-24-4G.
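A quick check of that chain total, plus the rebuild-time question that comes with arrays this wide (the 2 TB drive size and ~50 MB/s rebuild rate are assumptions; real rates depend on the controller, load, and drives):

```python
# Verify the daisy-chain drive total and estimate a RAID 6 rebuild window.
chain = [24, 24, 24, 24, 16, 16]  # bays per chassis, host first
print(sum(chain))                 # 128, matching the max above

DRIVE_TB = 2.0       # example drive size (assumption)
REBUILD_MBPS = 50.0  # assumed sustained rebuild rate under light load

hours = DRIVE_TB * 1_000_000 / REBUILD_MBPS / 3600
print(f"~{hours:.0f} h to rebuild one {DRIVE_TB:.0f} TB drive")
```

During that window RAID 6 survives a second drive failure but not a third, which is one reason to carve this many drives into several smaller RAID 6 sets rather than one giant array.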
 
4 months here and you are spending $10k+ on a real enterprise storage solution, but building it yourself and posting on here about RAID 6 and a NAS?

Something doesn't add up...

Why not buy some Dell or HP servers that are designed for uptime, with warranties on parts and next-day part service?

You are spending some serious $$$$.

Are you sure you want to build this as a DIY project and not buy from an OEM?
 
I can't give much advice since I have never attempted what you are attempting, but one thing to look out for is the performance of the ARC-1680IX-24-4G with multiple RAID sets. I have read that the Areca cards do not do well with many RAID arrays at the same time. It would be worth reading the Areca owners' thread:

http://hardforum.com/showthread.php?t=1483771
 
Coolrunner - thanks for the link, good stuff in there.

Adidas4275 - thanks for not helping and questioning someone who is obviously trying to learn more about this type of mass storage. The setup I suggested is not going to be bought all at once; we will probably only buy one chassis and 12 or so drives. The setup is just me wanting to know if I would be able to expand to that level if needed. If you have suggestions on that, I am open and would be thankful. Obviously OEM is great for people who have that kind of money and want to pay 3-4 times as much for the same amount of storage. I prefer getting my hands dirty, and I find it fun researching (I know, nerd talk) and seeing the different ways people are doing it these days. Now if you convince me with facts that OEM is the way to go, then I'll tip my hat to you and take your advice.
 
I guess it just depends on what is on the disks...

If it is a production server in a business environment, then the extra $$ is worth it (IMO) because you get a level of service that matches the importance of the data...

If your controller dies 8 months after you build it, and for whatever reason it is hard to find the same controller (maybe a revision has happened), then it is possible that the new card will not rebuild the arrays and the data will need to be restored from a full backup,

which would mean lost revenue, if the data is mission critical, and a PITA.

If it is in your basement for movies and personal documents, then the OEM $$ might not be worth the cost.

I have my own Norco 4020 WHS that I built and never would have paid OEM $$, but that is not the same scope or environment as your proposal.

Also, assuming you are not the owner of the business, I would never want to be on the hook for building a $10K+ DIY server (or servers) and then have hardware failures... it would be really hard to justify DIY over OEM after a big failure.

I'm not trying to piss you off or troll, but most of the time when "new" people come in here and post about building servers for work, there is a whole mess of people who recommend exactly what I have...

So I was just trying to raise the issue and make that point, so you don't get into a situation that is regrettable in the future.

It looks like a cool project, and figuring it out as you go is fun, but figuring out a production-level storage server array (or arrays) for mission-critical data in a business by asking on an enthusiast forum you've belonged to for 4 months sounds like a terrible idea to me.

JMO, so take it with a grain of salt... or a truckload of salt ;)


(Oh, and you can get a Dell PowerVault for around $4k-$10k depending on setup, including the cost of Dell disks:)
http://configure.us.dell.com/dellstore/config.aspx?oc=bvcwuk2&c=us&l=en&s=bsd&cs=04&kc=pvaul_md3000i

Or a PowerEdge that will hold 16 drives for $1K, minus disks, with a PERC 6/i controller.
 