RAID Upgrade

OldSchool · Limp Gawd · Joined: Jul 6, 2010 · Messages: 477
So I have been planning on upgrading my current storage RAID setup and I am a bit unfamiliar with all of the different options that are available. As it sits right now, the array consists of 4 2TB Hitachi 7200rpm Deskstars and an Adaptec 2260200 in RAID 0.

I want to expand it to 8 drives with some sort of redundancy, the main problem I am having is deciding on a controller. Correct me if I am wrong but there appears to be 3 different types of RAID setups? "Hardware" raid where the controller has on board cache and handles everything on the card. "Software" raid where some of the operations for calculating parity are offloaded to the main system CPU at a driver level. And then some kind of "OS" raid at an operating system or file system level where the OS handles the redundancy entirely starting from JBOD.

I understand that the last option is completely dependent on that particular OS installation, and is not portable? Is the software raid portable, or is it also tied to the OS that it was originally configured on?

I was also wondering about the processor load of the software raid option, what sort of load would an 8 drive raid 5 array put on the current processor? (An AMD X2 6400+) After it is set up, all of the data on the array will be encrypted with TrueCrypt which also adds substantially to the CPU load. The current array is encrypted as well, and going full bore it is capable of maxing the CPU just doing encryption/decryption.

Yet another consideration is the size of the array and data loss. I have been thinking of going RAID 6 due to what I have been reading about the possibility of unrecoverable read errors during a rebuild after losing a drive. In that case I would pretty much have to get a hardware raid controller as I haven't seen any software controllers that do RAID 6.

Basically, I am wondering if money was a big consideration would you splurge on the hardware controller? Or would the software controller be good enough?

The hardware controller I have been looking at is:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151039

Or a software controller like this one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816117171

Is it worth the extra $160?

Thanks for any advice ;)
 
I want to expand it to 8 drives with some sort of redundancy, the main problem I am having is deciding on a controller. Correct me if I am wrong but there appears to be 3 different types of RAID setups? "Hardware" raid where the controller has on board cache and handles everything on the card. "Software" raid where some of the operations for calculating parity are offloaded to the main system CPU at a driver level. And then some kind of "OS" raid at an operating system or file system level where the OS handles the redundancy entirely starting from JBOD.
Yeah, that's a close enough approximation of RAID setups. However, the "software RAID" you describe is generally called fakeRAID, and what you call OS RAID is also known as software RAID. Then again, some people call fakeRAID software RAID as well, so the terminology is muddy.

I understand that the last option is completely dependent on that particular OS installation, and is not portable? Is the software raid portable, or is it also tied to the OS that it was originally configured on?

OS/software RAID is portable, but only to OSes of the same platform as the one it was originally configured on, i.e. a software/OS RAID array created in Ubuntu Linux can be imported by Fedora Linux and vice versa. fakeRAID (i.e. the kind created by a controller card's BIOS) can be portable depending on the implementation of the RAID itself, like whether or not there's driver support for the OS in question, etc.

I was also wondering about the processor load of the software raid option, what sort of load would an 8 drive raid 5 array put on the current processor? (An AMD X2 6400+) After it is set up, all of the data on the array will be encrypted with TrueCrypt which also adds substantially to the CPU load. The current array is encrypted as well, and going full bore it is capable of maxing the CPU just doing encryption/decryption.
Pretty high. RAID 0 itself doesn't use that much CPU in the first place. The fact that your RAID 0 array is maxing out your CPU as a result of encryption/decryption doesn't bode well for a RAID 5 array which generally uses significant amounts of CPU power.

With that said, Linux mdadm software RAID has a major advantage over fakeRAID or software arrays made in Windows, in that CPU usage is rather low while providing superior performance.
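Worth noting why mdadm gets away with low CPU use: single-parity RAID is just an XOR across the stripe, which any modern CPU chews through at GB/s. A toy Python sketch of the principle (illustrative only, not how mdadm actually lays data out):

```python
# Toy illustration of single-parity (RAID 5 style) recovery: the parity
# chunk is the XOR of the data chunks, so losing any one chunk is
# recoverable by XORing the survivors back together.

def xor_parity(chunks):
    """XOR equal-length byte chunks together."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

# One stripe spread over three "data drives", plus its parity chunk.
stripe = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(stripe)

# Simulate losing drive 1: XOR the surviving chunks with the parity.
survivors = [stripe[0], stripe[2], parity]
rebuilt = xor_parity(survivors)
assert rebuilt == b"BBBB"
```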

Yet another consideration is the size of the array and data loss. I have been thinking of going RAID 6 due to what I have been reading about the possibility of unrecoverable read errors during a rebuild after losing a drive. In that case I would pretty much have to get a hardware raid controller as I haven't seen any software controllers that do RAID 6.
If you're using Windows, then yeah I think that's the case.
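To put a rough number on the URE worry from the first post: assuming the usual consumer-drive spec of one unrecoverable read error per 10^14 bits, and treating errors as independent (both debatable assumptions), a quick back-of-envelope:

```python
# Chance of hitting at least one URE while rebuilding an 8-drive RAID 5
# after a single drive failure. Assumes 1 error per 1e14 bits (typical
# consumer-drive spec sheet number) and independent errors.
URE_RATE = 1e-14
DRIVE_TB = 2
SURVIVORS = 7            # drives that must be read in full during rebuild

bits_read = SURVIVORS * DRIVE_TB * 1e12 * 8
p_failure = 1 - (1 - URE_RATE) ** bits_read
print(f"{p_failure:.0%}")   # ~67% with these assumptions
```

Spec-sheet URE rates are pessimistic in practice, but it does show why dual parity starts to look attractive at this array size.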


Basically, I am wondering if money was a big consideration would you splurge on the hardware controller? Or would the software controller be good enough?

The hardware controller I have been looking at is:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151039

Or a software controller like this one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816117171

Is it worth the extra $160?

Thanks for any advice ;)
It really depends on which OS you're planning to use. If Windows, definitely the hardware controller hands down. If Linux or FreeBSD, you can go as low as this:
$120 - Supermicro AOC-USAS-L8i PCI-Ex8 8 Port SATA Controller Card + 2 x $15 - 3ware SFF-8087 to Multi-lane SATA Forward Break-out Cable
 
After reading a bit more, it seems these "fakeRAID" solutions are essentially useless beyond offering a boot BIOS, as the CPU load is comparable to just having the OS do it.

I am almost starting to lean towards ZFS RAID-Z2. I mean theoretically I could use the money I would spend on a hardware RAID controller to upgrade the storage server's core system (motherboard/memory/proc), which would leave ample overhead for the encryption. Is ZFS fairly efficient? Can it hold a candle against a hardware RAID solution as far as read/write performance?

For the upgrade I was planning on getting 5 more 2TB drives so that I had the spare sitting around as a replacement, and had an extra 2TB to temporarily copy data from my current array while the new one is being built (there's about 3.4TB in use atm). If I go the ZFS route, would I be able to add the 9th drive to the RAID-Z2 array after the data has been copied off of it?

I would have to sort some other things out as the storage server runs Windows Server 2008 and is the domain controller, etc, but that is a minor issue.
 
Is ZFS fairly efficient? Can it hold a candle against a hardware RAID solution as far as read/write performance?
It's fairly decent in terms of read/write performance. But not as high as some high-end hardware RAID controllers IIRC.

For the upgrade I was planning on getting 5 more 2TB drives so that I had the spare sitting around as a replacement, and had an extra 2TB to temporarily copy data from my current array while the new one is being built (there's about 3.4TB in use atm). If I go the ZFS route, would I be able to add the 9th drive to the RAID-Z2 array after the data has been copied off of it?
Short answer: no.

You should hit up these links if you haven't already:
Building your own ZFS fileserver
FreeBSD ZFS NAS Web-GUI
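For the record, the usable space works out like this (illustrative numbers; real raidz2 loses a bit more to metadata and padding). The practical expansion path with ZFS is adding a whole second vdev to the pool, not widening the existing raidz2 vdev:

```python
def raidz2_usable(drives, size_tb=2):
    # raidz2 reserves two drives' worth of parity per vdev
    # (ignoring metadata/padding overhead, which shaves off a bit more).
    return (drives - 2) * size_tb

print(raidz2_usable(8))   # 8-drive raidz2 vdev -> 12 TB usable
print(raidz2_usable(9))   # what a 9-drive vdev WOULD give (14 TB), but
                          # you cannot widen an existing vdev to get there
```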
 
I am almost starting to lean towards ZFS RAID-Z2. I mean theoretically I could use the money I would spend on a hardware RAID controller to upgrade the storage server's core system (motherboard/memory/proc), which would leave ample overhead for the encryption. Is ZFS fairly efficient? Can it hold a candle against a hardware RAID solution as far as read/write performance?

As Danny said, ZFS will never win benchmarks against a good hardware RAID. ZFS is designed for efficiency and write safety without BBUs. And with ZFS you cannot choose the RAID chunk size in accordance with your data access patterns. With hardware RAID, you can get a BBU and benefit from a large write cache. And when you create your RAID set, you can set the RAID chunk size larger or smaller depending on your expected data access patterns.

But I am wondering why you seem especially concerned about performance. When someone talks about the amount of data storage that you are mentioning, and they do not say they are setting up a corporate system, I assume that most of their data is video files. Usually video files do not need high performance.
 
Of course performance is of importance...

Since he is running "4 2TB Hitachi 7200rpm Deskstars and an Adaptec 2260200 in RAID 0" I suspect he is targeting similar performance but with redundancy, correct?
If this is correct, is the write-performance of the above setup just as important or is it the read-performance that is of importance?
If the setup is only working as a network-storage then ~100MB/s should do in normal cases...

Please give more input as to what the storage is supposed to do/handle...
 
Yes, it is primarily used to store media files. And after everything is said and done, my requirement would be that it can at least saturate one or two GbE ports. (down the road I plan on teaming 2 ports to the switch) The reason I started with RAID 0 is because I wanted the most space possible, I have just recently been more concerned about losing all of the data now that it is starting to build up. On that note, am I being crazy with wanting to go with RAID 6/RAID-Z2? Are the odds of 2 failures that high that it is imperative to use it on an array of this size given that I'm not going to die if the data is lost?

My concern with performance has a lot to do with the encryption. I really have no idea what kind of load an 8 drive ZFS array encrypted with 512bit AES is going to put on a system, I have no practical experience with that sort of thing. That and I am generally impatient and demand a lot out of my hardware ;) But I would be satisfied with ~200MB/s read (my current RAID can do ~350MB/s with encryption)
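Quick sanity check on the network ceiling for those targets, assuming TCP/SMB overhead eats roughly 10% of wire speed (a rough guess):

```python
# What a GbE link (and a teamed pair) can realistically deliver.
GBE_WIRE_MBPS = 1000 / 8    # 1 GbE wire speed: 125 MB/s
EFFICIENCY = 0.9            # assumed: protocol overhead costs ~10%

one_port = GBE_WIRE_MBPS * EFFICIENCY
two_ports = 2 * one_port
print(f"1 port: ~{one_port:.0f} MB/s, teamed: ~{two_ports:.0f} MB/s")
```

So ~200 MB/s sustained reads is roughly the most a teamed pair of GbE ports could deliver over the network anyway; anything beyond that only matters for local access on the server itself.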
 
For media files, I think unRAID or flexRAID make more sense than ZFS. You get the advantage of ease of expansion and flexibility with HDD sizes, and if you lose more disks than you have parity, you do not lose the entire RAID set, but only the data on the failed disks. Also, power use is a lot lower since typically only 1 HDD will be accessed at a time.

Why do you want 100 - 200 MB/s throughput? The maximum bit rate specified for the Blu-ray format is 54 Mbps, which is less than 7 MB/s. Typical bit rate is lower -- many that I have seen are around 24 Mbps = 3 MB/s.

If you really need over 100 MB/s throughput, then UR/FR are not the right choice, since their throughput is limited to that of the HDD that is being accessed. But for a video server, even streaming three max bit rate Blu-rays at once you only need about 20 MB/s.
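The arithmetic behind those numbers, for reference:

```python
# Convert megabits per second (how video bit rates are quoted) to
# megabytes per second (how disk/network throughput is quoted).
def mbps_to_MBps(mbps):
    return mbps / 8

max_bluray = mbps_to_MBps(54)    # Blu-ray spec maximum: 6.75 MB/s
typical = mbps_to_MBps(24)       # a common real-world rate: 3 MB/s
three_streams = 3 * max_bluray   # three max-rate streams: ~20 MB/s
```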
 
 
I wasn't aware that they had added that feature to the i5/i7s. Does that mean that they can do it in real time with no "load" on the CPU?

And as for the performance specs I am trying to achieve, I am aware of the various bitrates of media formats in a streaming capacity. This storage also acts as a general space where I tend to dump everything, and I have drives on my main PC that are capable of reading/writing at gbit speeds (SSD) so I expect the raid to be able to accommodate that. For example with my current raid I regularly sustain 80MB/s+ while copying files from my PC to the array.
 
This storage also acts as a general space where I tend to dump everything,

You may want to consider separating out your large video files from the rest of your files, so you would have two RAID sets. ZFS would be a good choice for data files that are frequently changed or deleted, require high throughput, and need the most protection (raidz2 or raidz3). FlexRAID or UnRAID are good choices for a large and expanding collection of video files.
 
Okay, so I have come to the conclusion that ZFS isn't going to work for me. My requirement is that I have a dual redundant single array that is expandable. The inability to expand a RAID-Z2 is a deal breaker. So I have come back around to 8 drives and a hardware RAID controller.

My primary question is, are all hardware controllers that can do RAID 6 capable of expanding the array by adding new drives and re-distributing the data? For example if I get 4 additional drives, copy the data from the current 4 drive array over to one of them, build a raid 6 array with 7 of the drives, copy the data back onto it from the single drive, can I then expand the array by adding the 8th drive?

Also, I have been looking at the HP SAS expander for future growth. If I were to get a SAS expander and connect the 8 drive raid 6 array to it then connect the hardware RAID controller to the SAS expander would it just appear as though nothing had changed? And I can just expand further from there?

Can anyone recommend some controllers that are capable of doing this?

Much thanks for all the advice so far! ;)
 
I recommend using Linux software RAID (mdadm) for this. And yes, it supports adding drives to RAID 6 arrays without needing to copy the data off first. For hardware controllers you will probably have to look at the specifications of each individual card.


Edit:
BTW, the hardware card you posted in your first link says it supports online expansion.

RAID level 0, 1, 3, 5, 6, 30, 50, 60 or JBOD
Multiple RAID selection
Online array roaming
Online RAID level/stripe size migration
Online capacity expansion and RAID level migration simultaneously
Online volume set growth

http://www.areca.com.tw/products/pcietosas01.htm
 
My primary question is, are all hardware controllers that can do RAID 6 capable of expanding the array by adding new drives and re-distributing the data? For example if I get 4 additional drives, copy the data from the current 4 drive array over to one of them, build a raid 6 array with 7 of the drives, copy the data back onto it from the single drive, can I then expand the array by adding the 8th drive?


Much thanks for all the advice so far! ;)

Yes, this is called online capacity expansion. It works great, and the array stays usable even while new drives are being added. Some controllers also support RAID level migration, which allows you to migrate a RAID 0 array to RAID 5 without losing any data.
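For what it's worth, the usable capacity at each step of the plan quoted above checks out (assuming 2TB drives and ignoring formatting overhead):

```python
# RAID 6 reserves two drives' worth of capacity for dual parity.
def raid6_usable(drives, size_tb=2):
    return (drives - 2) * size_tb

data_tb = 3.4                        # data currently in use
assert raid6_usable(7) >= data_tb    # 7-drive RAID 6: 10 TB, fits easily
print(raid6_usable(7), raid6_usable(8))  # 10 TB -> 12 TB after expansion
```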
 
Thank you all for the help, I think I have nailed down my final upgrade path. I am going to get this controller:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816151023

Because it is a 1680 series which I read supports the HP SAS Expander. As well as 4 more 2TB drives. So in theory this setup should allow me to freely expand my array size as I have money for more drives and the expander in the future, be fast as hell, and fully portable to any system I want to put it in! :)
 
Expensive. I paid less than that for my entire i7 system (minus the inherited drives that I bought over the years and the 3 F4s that I have not allocated yet).. But I guess you buy the controller only once and keep it for longer than you would a pc..


Edit: As always with HW RAID, make sure you purchase the BBU if one is available; otherwise the write cache either has to be disabled (bad write performance) or left on without battery protection (risk of data loss on a power failure).
 
Yeah, it is certainly no small amount to spend on a controller for a personal storage array. But basically the other option is a full upgrade of the core system of the server (i5/i7, mobo, memory, etc), which still wouldn't allow me the versatility and massive expansion capability that the HW controller and SAS expander would down the road. I see it as a long term investment, and expect to have the card in use for many many years to come.
 