Drive RAID setup question

DataMine
Hello. I have had a NAS for about 2.5 years now and it has grown out of control: multiple RAIDs using different file systems, plus standalone drives without any protection. I want to redo my drive setup, remove all the drives smaller than 1 TB, and set up only 1 or 2 arrays. But I am a little worried about the setup, and I hope the good people here have a solution for me.
I have an ASUS motherboard with 8 SATA II ports (1 eSATA), RAID 0/1/10 capable but it caps drives at 2.2 TB; two 8-port 3Ware 9500S PCI-X cards (16 ports total); and two PCIe SATA cards with 2 ports each (4 ports).
The drives I want to use are as follows:

5x2 TB
3 WD Green 5400 RPM
1 Seagate LP 5400 RPM
1 Samsung 5400 RPM

4x1.5 TB
2 Seagate LP 5400 RPM
2 Seagate 7200 RPM

7x1 TB
7 WD Green 5400 RPM

750 GB Seagate 7200 RPM for OS and torrents

Originally I had various RAIDs and drives set up and expanded over the last 2.5 years, using ext3 and ext4, and even an NTFS drive I had added. Some were RAID (software or hardware, depending on when they were added), some were standalone, with drives ranging from 200 GB to 2 TB, both IDE and SATA. I have used software RAID and find it easy to set up, but I have never had to replace a disk, so I count myself lucky.

What I am thinking of doing is making the 5x2 TB drives into a 10 TB (8 TB usable) array. But new and better tech has come out since: should I use ZFS, or Btrfs, or stick with ext4? (Note I only have 2 GB of RAM installed; I can go up to 4 GB if needed. ZFS uses a lot of RAM, so will it run on only 2 GB?) I also plan on continuing to use the system as a media center with XBMC on it.

As it takes a long time to build arrays, here is the layout I plan on using. Please make any suggestions you have.

Array 1
5x2 TB (8 TB usable) RAID 5 (mdadm) with Btrfs or ZFS for the file-checking capabilities (motherboard SATA II ports used; see the mdadm sketch after this list)
-Media Files

Array 2
4x1.5 TB (mix of 7200 RPM and 5400 RPM) (4.5 TB usable) RAID 5 (mdadm) on the PCIe SATA II ports, also with Btrfs, ZFS, or ext4. I read that mixing spindle speeds is possible, but is it a good idea?
-Other Files

Array 3
7x1 TB WD Green (5400 RPM) (6 TB usable) on one of the 3Ware 9500S PCI-X cards (mdadm array, or should I use the hardware array?). This is for backing up the 5 PCs/laptops in the house, so I'm fine being limited to the PCI bus speed (around 100 MB/s). Also Btrfs, ZFS, or ext4?
-CrashPlan Backup Files

This should give me about 18.5 TB of usable space (OK, less, but I'm rounding) and leaves 2 open SATA ports on the motherboard to add 2 more 2 TB drives later.
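
For reference, here is roughly how I expect to create Array 1 with mdadm. The /dev/sdb through /dev/sdf device names are just placeholders for whatever my drives enumerate as, so I would verify with lsblk before running anything destructive.

    # Create the five-disk RAID 5 (device names are placeholders; verify with lsblk)
    mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

    # Watch the initial build; with 5x2 TB drives this can take many hours
    cat /proc/mdstat

    # Once built, format it and save the array definition so it assembles at boot
    mkfs.btrfs /dev/md0
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf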


The OS I'm currently using is Mint 11 with XBMC installed, so if needed I can use the unit as a desktop computer (which I have needed in the past). I am using the Ubuntu version, not the Debian version. It's going to take me a few days to move all the data around (moving it all to the 7x1 TB drives that I plan on building Array 3 out of).
 
Your suggested array layout seems fine. It is similar to what I am running. Plan your RAID with future growth in mind: when you get larger drives, will you swap out one of the arrays, or will you start a new one? With that many drives, I would recommend adding a bit more redundancy. Either go RAID 6 or add a hot spare shared across all three arrays. The latter will require a disk from the array with the largest drives. I am not a fan of hardware RAID, so I cannot recommend it. You will be running software RAID on 2 of the 3 arrays, so stick with it on the 3rd as well; plus, that gives you the hot spare option I mentioned earlier. Choose the FS you are most comfortable with. Each has its pros and cons.
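
If you go the shared hot spare route, mdadm supports it through spare groups in mdadm.conf. A rough sketch follows; the UUIDs are placeholders for your real array UUIDs, and the spare only migrates between arrays while mdadm --monitor is running.

    # /etc/mdadm/mdadm.conf -- arrays tagged with the same spare-group share spares
    ARRAY /dev/md0 UUID=<uuid-of-md0> spare-group=nas
    ARRAY /dev/md1 UUID=<uuid-of-md1> spare-group=nas
    ARRAY /dev/md2 UUID=<uuid-of-md2> spare-group=nas

    # The monitor moves the spare to whichever array in the group loses a disk;
    # the spare must be at least as large as the biggest member drive
    mdadm --monitor --scan --daemonise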

As for building out your arrays, you don't have to move all the data around to start. As long as you have two drives free for any array, you can start the build and migration. Start with a RAID 1 array. As you copy data over to the array and free up other disks, you can add them to it, and you can then migrate to a different RAID level as you add disk(s). This can all be done while your server is online with access to the data; there will just be a bit of a performance hit while you move data around.
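
In mdadm terms, the migration path looks roughly like this. The device names are placeholders, and I would keep a backup regardless, since reshapes run for hours.

    # Start with a two-disk mirror on the free drives
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdg /dev/sdh

    # ...copy data onto it to free up another disk, then convert and grow:
    mdadm --grow /dev/md1 --level=5            # two-disk mirror -> two-disk RAID 5
    mdadm --add /dev/md1 /dev/sdi              # add the freed disk as a spare
    mdadm --grow /dev/md1 --raid-devices=3     # reshape across all three disks

    # Afterwards grow the filesystem (resize2fs for ext4, or
    # "btrfs filesystem resize max <mountpoint>" for Btrfs)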
 
Well, I have built the following array:
4x2 TB RAID 5 (6 TB usable)
Read: 220 MB/s, Write: 40 MB/s
File system: Btrfs

There must be some kind of caching going on, because network read/write is:
Write: 75 MB/s, Read: 30 MB/s
I'm OK with the write, but the read is too low; it jams up as soon as I try to stream more than 1 HD file. Anyone know how to fix this?
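
To narrow it down I am going to test the disk and network legs separately; something like this should rule the page cache in or out (nas-box is a placeholder hostname, and iperf has to be installed on both ends):

    # Read straight off the array with O_DIRECT so the page cache can't inflate the number
    dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct

    # Then test the raw network path (run "iperf -s" on the server first)
    iperf -c nas-box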
 