Software RAID will always be beaten by modern RAID controllers...
If you wish to transfer the RAID array to a new OS install, you will need to start from scratch.
Is there any "limit" you should stick to with RAID6, for software or hardware, as far as capacity goes?
What about mixing drive sizes or manufacturers? Is that a bad idea with RAID6?
If you use a motherboard's RAID controller, then when you upgrade the motherboard you will most likely need to start a new RAID array.
No. There is no limit that says you have to go to software or hardware RAID after a certain number of disks or a certain disk size.
However, with RAID6 I would keep a single array to an absolute maximum of 12 to 14 drives, and an absolute maximum of 5 or 6 disks with RAID5.
You can mix drives, including sizes; the array will only use the smallest drive's capacity on each disk.
BTW, if you do go for hardware RAID, make sure you get a RAID controller that has cache RAM and a BBU (battery backup unit).
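If you go the software route instead, a minimal mdadm sketch of a 6-disk RAID6 looks something like this (the device names /dev/sdb through /dev/sdg are placeholders; with mixed sizes, each member only contributes up to the capacity of the smallest disk):

# create a 6-disk RAID6 (4 disks' worth of capacity, 2 of parity)
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

# watch the initial resync and check the array state
cat /proc/mdstat
mdadm --detail /dev/md0

# put a filesystem on it once the array is up
mkfs.ext4 /dev/md0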
What about mixing drive sizes or manufacturers? Is that a bad idea with RAID6?
With many of the software RAID options you can combine drives of different sizes in the array and use all the available space, with the caveat that your parity drive(s) have to be equal to or greater in size than your largest non-parity disk.
This type of RAID is usually called snapshot RAID.
The OP did not specify what type of data is going on his array. If it is media files, or other files that are primarily read-only (i.e., the files are not edited in place, and there is not a lot of file deletion going on), then snapshot RAID is probably the best choice, either SnapRAID or FlexRAID.
FlexRAID also offers real time software RAID with this functionality.
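For the curious, a rough sketch of what the SnapRAID flavour of snapshot RAID looks like; all paths here are placeholders, and the parity disk must be at least as large as the largest data disk:

# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/

# then, from the shell
snapraid sync     # compute/refresh parity after files have changed
snapraid scrub    # periodically verify the data against the parity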
The sky is the limit for drive count if you use RAID10. Just FYI.
If I'm understanding this right:
With RAID10, if you lose the wrong 2 drives (both halves of the same mirror pair) you lose it all.
With RAID6 you CAN lose any 2 drives and rebuild.
And RAID6 has a bit more capacity, it's just a bit slower...
Correct?
Often but definitely not always.
This entirely depends on what you use. If you use MDADM or ZFS that is simply not true.
In fact, one of the biggest reasons NOT to use a hardware RAID controller is that you need the EXACT same card if it fails. With a good software RAID (again, mdadm/ZFS/etc.) you can move the array to a new/different/spare system with relative ease if you have a hardware failure.
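A quick hedged illustration of that portability (array/pool names are just examples):

# mdadm: plug the member disks into the new box and rescan
mdadm --assemble --scan

# ZFS: cleanly release the pool on the old box, pick it up on the new one
zpool export tank
zpool import tank        # plain "zpool import" lists pools available for import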
If we're talking about fake-RAID (intel onboard, etc) - I can never faithfully suggest using it.
Any of the parity RAIDs (R5/R6) you need to go with hardware. Been there done that.
Performance on software RAID was laughable.
I have an 11x2TB mdadm RAID6 and it runs great, very fast and stable. If the OS dies, I just reinstall Linux and am good to go again.
I would recommend it over hardware RAID any day.
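A sketch of the "reinstall and go" step on a Debian/Ubuntu-style system; the config paths are assumptions and other distros differ:

# see what arrays the disks advertise, then record them so they assemble at boot
mdadm --examine --scan
mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf
update-initramfs -u      # Debian/Ubuntu: bake the updated config into the initramfs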
Yeah, me too, although I am interested in the progress Btrfs is making with RAID5/6 as well.
Any of the parity RAIDs (R5/R6) you need to go with hardware. Been there done that.
Performance on software RAID was laughable.
RAID0/1? Totally different. Software (onboard intel preferably) all day long.
Software RAID will always be beaten by modern RAID controllers...
How can you say so? This is utterly wrong.
Software RAID, for instance ZFS, uses the server's CPU and RAM. Which has the most resources? A hardware RAID card might have an 800 MHz PowerPC to do parity calculations and 512 MB RAM as disk cache. Compare that to a medium server: a dual 6-core server CPU at 2.7 GHz and 64-128 GB RAM as disk cache. With that much RAM you can even cache your entire workload. So software RAID uses the server's resources, and a server will always have much, much more resources than a tiny hardware RAID card. It is like comparing a thin client with a 1 GHz CPU and 1 GB RAM to a medium-sized server: which is more powerful? A medium/huge server, or a card with a tiny CPU and tiny RAM?
Also, software RAID scales much, much better than hardware RAID. ZFS can use all the HBAs in the server and all its RAM, so there are many petabyte-scale ZFS servers out there. Some ZFS servers do millions of IOPS or more. One supercomputer, Sequoia, uses ZFS + Lustre for 55 petabytes at 1 TB/sec of bandwidth. Now compare all this to a single hardware RAID card with an 800 MHz CPU and 1 GB RAM: which one scales best? Which one gives the most performance? Do you connect hundreds of disks to a single hardware RAID card? No, but you can do that with a ZFS server.
And in the case of ZFS, ZFS has superior protection against data corruption. Hardware RAID has notoriously bad data corruption protection and several known issues.
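The protection being referred to is end-to-end checksumming plus periodic scrubs; for example (pool name assumed):

zpool scrub tank        # re-read every block and verify it against its checksum
zpool status -v tank    # shows scrub progress and lists any files with unrecoverable errors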
Also, you are locked in with hardware raid. If your card crashes, you need to buy an identical card. With ZFS, you can migrate the disks to another computer, even running another OS - you are not locked in. For instance, move a ZFS raid from a SPARC server running Solaris, to a Linux x86 PC or FreeBSD. Or Mac OS X. You are free.
I don't know of any advantage that hardware RAID has over software RAID. What would that be? Also, most (all?) hardware RAID lacks snapshots, which can save you hours of work. When I mess up my system disk, accidentally delete an important system file, and the PC does not work anymore, then instead of googling for hours trying to fix my Solaris installation or reinstalling everything, I just reboot into a snapshot and I am back to a time before I made the error. ZFS snapshots have saved me hours of work many times. I am not afraid to experiment anymore, so I accelerate my learning of Unix. If I am going to try out a weird command/configuration, I just take a snapshot before I start banging on the system. Godsend.
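The snapshot workflow described above boils down to a couple of commands (the dataset name is just an example; rebooting into a snapshot of the root pool additionally involves the OS's boot environments):

zfs snapshot tank/home@before-tinkering    # cheap, instant point-in-time copy
zfs list -t snapshot                       # see what you can roll back to
zfs rollback tank/home@before-tinkering    # undo everything since the snapshot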
Any of the parity RAIDs (R5/R6) you need to go with hardware. Been there done that.
Performance on software RAID was laughable.
Output from a dtrace script on one of my production ZFS systems:
2013 Aug 13 14:22:18 tank 577 ms, 759 wMB 30 rMB 48582 wIops 2071 rIops 0+0 dly+thr; dp_wrl 111197 MB .. 133820 MB; res_max: 15956 MB; dp_thr: 26753
That is a 15-vdev (8 x raidz2) array; that's pretty close to the limit for random writes. Sequential throughput for these volumes is over 2 GB/s.
You're right though, software RAID is slow.
Output from a dtrace script on one of my production ZFS systems:
I don't understand the numbers, can you explain them to me?
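For context, a pool laid out like that is built by stacking raidz2 vdevs; a hedged sketch with Solaris-style placeholder device names:

# first 8-disk raidz2 vdev
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
# add further 8-disk raidz2 vdevs the same way, up to the 15 in the array above
zpool add tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

# watch per-vdev throughput while the pool is under load
zpool iostat -v tank 5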
My point is, hardware RAID is just a tiny computer on a card, running software. It has a CPU, RAM, it runs a BIOS, etc. - it is a tiny computer (that is why they cost so much). What is the difference if you run the software on the card's CPU or on the server's CPU?
For almost any home usage, software RAID will be more than enough in terms of performance. As has been said here, by far the biggest benefit of hardware RAID is when drives break, you need to add more drives, or you need to upgrade hardware, e.g. the controller. This is typically just made simple on these controllers, especially if you have a hotswap enclosure. Another benefit is that Areca and Adaptec controllers offer online migration if you want to switch between RAID levels (although it can take forever to complete...).
On systems with very high load, one benefit is that writing to a drive requires just one write command to the controller. With software RAID, the CPU typically has to send multiple writes, one for each drive the data touches, in addition to possible reads for parity calculations. This increases bus traffic and latency.
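That extra per-member traffic is easy to observe on a software array; a rough example using iostat from sysstat (md0 and its member names are placeholders):

# extended device stats every second: compare w/s on the md device with the
# sum of w/s on its members; parity updates show up as extra member I/O
iostat -x 1 md0 sdb sdc sdd sde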
Hardware RAID was once useful and had its place, a long time ago when CPUs were weak.
OK, but in response to that I'd say that my Intel software RAID5 on my i7 920 server gets a staggering 35 MB/s and takes days to rebuild. Whereas my (old) PERC5 gets 135-150 MB/s, which saturates the GigE, which is the bottleneck for me.