Software RAID6 vs Hardware

ToddW2

What are the differences?

Performance differences?

I have an Areca card but I'm debating going software...

(I have 5x WD RE 2TB drives.)
 
Software RAID will always be beaten by modern RAID controllers. Does that mean you will notice a difference? That depends on your workload.
 
Software RAID requires the CPU to do the processing; the load increases with the number of drives in the array and the type of RAID.
This might not matter with modern CPUs, but it can if there is a problem and a drive has to be rebuilt while the array is in use (assuming your method of software RAID lets you do this).
If you wish to transfer the RAID array to a new OS install, you will need to start from scratch.
If the OS becomes corrupted and you don't have a backup, you will likely lose the array.
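For what it's worth, Linux mdadm does let you rebuild while the array stays mounted, and you can watch the progress. A rough sketch (device name is just an example):

Code:
cat /proc/mdstat              # rebuild/resync progress and speed
mdadm --detail /dev/md0       # array state, failed/spare members, rebuild percentage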

Hardware RAID comes in a few guises that can affect whether the array can be moved.
If you use a motherboard's RAID controller, when you upgrade the motherboard, you will most likely need to start a new RAID array.
If you use an add in RAID controller, this is generally the most flexible as you can transfer the array to new hardware.
If you upgrade to a new OS, it must have a usable driver for the RAID controller, whether on the motherboard or on a card; this is generally not a big issue unless you are using a very old motherboard/RAID controller.

Hardware RAID can offer features not available in software.
I'm not up to speed on most of them, as I haven't done this in years.
The ability to hot-swap drives and rebuild the array while it is in use are two of them.

As Phog said, you may not notice the difference in performance.

I would always use hardware RAID to separate the fundamental RAID functions from OS issues.
Which type of hardware depends on whether I care about uptime. If it cannot be allowed to fail, a decent dedicated RAID controller is a must.
As you have a RAID card, use it if it provides the features you need.
 
Software RAID will always be beaten by modern RAID controllers...

Often but definitely not always.

If you wish to transfer the RAID array to a new OS install, you will need to start from scratch.

This entirely depends on what you use. If you use MDADM or ZFS, that is simply not true.

In fact, one of the biggest reasons NOT to use a hardware RAID controller is that you need the EXACT same card if it fails. With a good software RAID (again, MDADM/ZFS/etc.) you can move the array to a new/different/spare system with relative ease if you have a hardware failure.
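As a rough illustration with ZFS (pool name is just an example), moving a pool between boxes is basically:

Code:
zpool export tank        # on the old box, cleanly releases the pool
# move the disks to the new box, then:
zpool import             # scans attached disks and lists importable pools
zpool import tank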

If we're talking about fake-RAID (intel onboard, etc) - I can never faithfully suggest using it.
 
Is it a network device? mdadm can easily saturate gigE, even on older hardware with low spindle counts.

As stated, you can then move the array to just about any Linux box and mount the volume.
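Roughly like this (array name and mount point are just examples; the mdadm.conf path varies by distro):

Code:
mdadm --assemble --scan                          # finds the md superblocks on the attached disks
mount /dev/md0 /mnt/storage
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # optional: persist it for the next boot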
 
Thanks guys.

Is there any "limit" you should stick to with raid6 for software or hardware as far as capacity goes?

I was thinking 5x2TB for now, but would probably add another couple of disks in less than a year.

What about mixing drive sizes or manufacturers? Bad idea with RAID6?
 
Is there any "limit" you should stick to with raid6 for software or hardware as far as capacity goes?

No. There is no limit that says you have to go to software or hardware RAID after a certain # of disks or a certain size of disk.

However, with RAID6 I would keep a single array to an absolute maximum of 12 to 14 drives, and an absolute maximum of 5 or 6 disks with RAID5.

What about mixing drive sizes or manufacturers? Bad idea with RAID6?

You can mix drives, including different sizes. The array will only use the smallest drive's capacity on each disk.

BTW, if you do go for hardware RAID, make sure you get a controller that has cache RAM and a BBU.
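For the OP's case, the usual RAID6 capacity math looks like this:

Code:
# RAID6 usable space = (N - 2) x size of the smallest member
# 5 x 2TB -> (5 - 2) x 2TB = 6TB usable
# 7 x 2TB -> (7 - 2) x 2TB = 10TB usable
# adding a single 3TB drive to a 2TB array still only contributes 2TB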
 
If you use a motherboard's RAID controller, when you upgrade the motherboard, you will most likely need to start a new RAID array.

Just FYI, that's not hardware RAID, that's FakeRAID, aka motherboard RAID.
Essentially, it uses drivers to "trick" the OS into thinking the drives are a single array, via the HDD controller.

It has none of the strengths of software RAID, yet all of the weaknesses of hardware RAID.
It's the cheapest and worst solution IMO, unless of course one is just doing RAID 0 or 1.

For RAID 5/6, if it is even supported, rebuild times will be much longer than with either hardware or true software solutions.
 
No. There is no limit that says you have to go to software or hardware RAID after a certain # of disks or a certain size of disk.

However, with RAID6 I would keep a single array to an absolute maximum of 12 to 14 drives, and an absolute maximum of 5 or 6 disks with RAID5.



You can mix drives, including different sizes. The array will only use the smallest drive's capacity on each disk.

BTW, if you do go for hardware RAID, make sure you get a controller that has cache RAM and a BBU.

Thanks, I have an Areca 1882ix-24-4G + BBU that I'll probably end up using instead of software RAID, and I guess it makes sense to use the same size drives :D
 
What about mixing drive sizes or manufacturers? Bad idea with RAID6?

With many of the software RAID options you can combine drives of different sizes in the array and use all available space, with the caveat that your parity drive(s) has to be equal to or greater in size than your largest non-parity disk.
 
On Linux software RAID or with ZFS you can partition each member of a RAID array to the same size and use the rest of the space on the larger disks for some other purpose (including additional RAID arrays). This works, but it becomes somewhat harder to deal with when drives die or need to be swapped out.
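A rough sketch of that layout with mdadm (device names and sizes are just examples):

Code:
# give every disk an identical 2TB partition for the array, keep the leftover space separate
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart raid 1MiB 2000GiB
parted -s /dev/sdb mkpart spare 2000GiB 100%
# repeat for the other members, then build the array on the equal-sized partitions
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[bcdef]1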
 
With many of the software RAID options you can combine drives of different sizes in the array and use all available space, with the caveat that your parity drive(s) has to be equal to or greater in size than your largest non-parity disk.

This type of RAID is usually called snapshot RAID.

The OP did not specify what type of data is going on his array. If it is media files, or other files that are primarily read-only (i.e., the files are not edited in place, and there is not a lot of file deletion going on), then snapshot RAID is probably the best choice, either SnapRAID or FlexRAID.
 
This type of RAID is usually called snapshot RAID.

The OP did not specify what type of data is going on his array. If it is media files, or other files that are primarily read-only (i.e., the files are not edited in place, and there is not a lot of file deletion going on), then snapshot RAID is probably the best choice, either SnapRAID or FlexRAID.

FlexRAID also offers real time software RAID with this functionality. It doesn't matter which method of RAID you choose; there are options to do either with different-sized disks.
 
Sky is the limit for drive count if you use RAID10. Just FYI.
 
FlexRAID also offers real time software RAID with this functionality.

It is still best called snapshot RAID, since it is not doing any of the standard RAID levels that the term "software RAID" usually refers to.
 
Sky is the limit for drive count if you use RAID10. Just FYI.

If I'm understanding this right:

With RAID10, if you lose the wrong 2 drives you lose it all.

With RAID6 you CAN lose any 2 drives and rebuild.

And RAID6 has a bit more capacity, just a bit slower...



Correct?
 
If I'm understanding this right:
With RAID10, if you lose the wrong 2 drives you lose it all.
With RAID6 you CAN lose any 2 drives and rebuild.
And RAID6 has a bit more capacity, just a bit slower...
Correct?

Correct. I'm not saying RAID10 is the right option here; I was just saying it's the best if you want tons and tons of drives in one array.

Also, RAID6 isn't a "bit" slower, it's a lot slower. RAID10 is for IOPS (not media streaming).
 
Ok great.

Yeah, this is just my backup array for media and web files/DBs (pics, movies, PHP, HTML, MySQL), and thus I want the most redundancy for the $.

The 'live' versions will be on my desktop, and I have an SSD on this server for faster access to the DBs, etc...

Thanks again for the help/info.
 
Often but definitely not always.



This entirely depends on what you use. If you use MDADM or ZFS, that is simply not true.

In fact, one of the biggest reasons NOT to use a hardware RAID controller is that you need the EXACT same card if it fails. With a good software RAID (again, MDADM/ZFS/etc.) you can move the array to a new/different/spare system with relative ease if you have a hardware failure.

If we're talking about fake-RAID (intel onboard, etc) - I can never faithfully suggest using it.

Actually, most arrays will work with any similar RAID card from the same manufacturer. I know Areca and Adaptec allow you to move arrays even between cards that are generations apart. The "exact card" requirement is quite outdated (10+ years).
 
Any of the parity RAIDs (R5/R6) you need to go with hardware. Been there done that.
Performance on software RAID was laughable.

RAID0/1? Totally different. Software (onboard intel preferably) all day long.
 
Any of the parity RAIDs (R5/R6) you need to go with hardware. Been there done that.
Performance on software RAID was laughable.

It depends on the software and hardware. On Linux with mdadm I have software RAID6 arrays that net 900MB/s sequential reads and writes (with 10 x 7200RPM 2TB drives) and rebuild times of less than 9 hours. On Windows I agree with you; you generally get abysmal performance with software or fake RAID5/6 there.
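For reference, the usual knobs I'd tune for md RAID6 throughput (values are starting points, not gospel; md0 is just an example):

Code:
# bigger stripe cache helps raid5/6 writes (costs roughly size x 4KB x number of disks in RAM)
echo 8192 > /sys/block/md0/md/stripe_cache_size
# larger read-ahead for big sequential reads
blockdev --setra 65536 /dev/md0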
 
I have an 11x2TB mdadm RAID6 and it runs great, very fast and stable. If the OS dies I just reinstall Linux and am good to go again.

I would recommend it over hardware raid any day.
 
I have an 11x2TB mdadm RAID6 and it runs great, very fast and stable. If the OS dies I just reinstall Linux and am good to go again.

I would recommend it over hardware raid any day.

Fully agreed. I am looking forward to the advancements in recent kernels (3.10.x) that allow you to use an SSD as a cache, although I always wait a year or so and test thoroughly before implementing this at work.
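If I remember the bcache tooling from that kernel era right (a hedged sketch, device names are just examples), the setup is roughly:

Code:
# format the SSD as the cache device and the md array as the backing device
make-bcache -C /dev/sdg -B /dev/md0
# udev registers them and /dev/bcache0 appears; put the filesystem on that
mkfs.ext4 /dev/bcache0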
 
Yeah, me too, although I am interested in the progress BTRFS is making with RAID 5/6 as well.

I have started to test that on a server with throwaway data, although to be fair I can't really measure the performance with 6-year-old 500GB drives.
 
Any of the parity RAIDs (R5/R6) you need to go with hardware. Been there done that.
Performance on software RAID was laughable.

RAID0/1? Totally different. Software (onboard intel preferably) all day long.

That's not software RAID.
What you are talking about is FakeRAID, aka motherboard RAID.

Yes, any parity RAID on FakeRAID is definitely abysmal, but not on software RAID (assuming MDADM).
 
Software RAID will always be beaten by modern RAID controllers,
How can you say so? This is utterly wrong.

Software RAID, for instance ZFS, uses the server's CPU and RAM. Which has the most resources? A hardware RAID card might have an 800MHz PowerPC to do parity calculations and 512MB RAM as disk cache. Compare that to a medium server with dual 6-core server CPUs at 2.7GHz and 64-128GB RAM as disk cache. With that much RAM you can even cache your entire workload. So software RAID uses the server's resources, and a server will always have much, much more resources than a tiny hardware RAID card. It is like comparing a thin client with a 1GHz CPU and 1GB RAM to a medium-sized server - which is more powerful? A medium or huge server, or a card with a tiny CPU and tiny RAM?

Also, software RAID scales much, much better than hardware RAID. ZFS can use all the HBAs in the server and all its RAM, so there are many petabyte ZFS servers out there. Some ZFS servers do a million IOPS or even more. The Sequoia supercomputer uses ZFS + Lustre for 55 petabytes and 1TB/sec of bandwidth. Now compare all this to a single hardware RAID card with an 800MHz CPU and 1GB RAM: which one scales better? Which one gives the most performance? Do you connect hundreds of disks to a single hardware RAID card? No, but you can do that with a ZFS server.

And in the case of ZFS, it has superior protection against data corruption. Hardware RAID has notoriously bad data corruption protection and several other issues.

Also, you are locked in with hardware RAID. If your card crashes, you need to buy an identical card. With ZFS, you can migrate the disks to another computer, even one running another OS - you are not locked in. For instance, you can move a ZFS RAID from a SPARC server running Solaris to a Linux x86 PC, or FreeBSD, or Mac OS X. You are free.

I don't know of any advantage that hardware RAID has over software RAID. What would that be? Also, most (all?) hardware RAID lacks snapshots, which can save you hours of work. When I mess up my system disk, accidentally delete an important system file, and the PC does not work anymore, then instead of googling for hours trying to fix my Solaris installation or reinstalling everything, I just reboot into a snapshot and I am back to a point before I made the error. ZFS snapshots have saved me hours of work many times. I am not afraid to experiment anymore, which accelerates my learning of Unix. If I am going to try out a weird command/configuration, I just take a snapshot before I start banging on the system. Godsend.
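For anyone who hasn't used them, the snapshot workflow is literally just this (dataset name is only an example):

Code:
zfs snapshot rpool/ROOT/solaris@before-experiment    # instant, takes no space up front
# ...try the weird command, break things...
zfs rollback rpool/ROOT/solaris@before-experiment    # back to how it was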
 
How can you say so? This is utterly wrong.

Software RAID, for instance ZFS, uses the server's CPU and RAM. Which has the most resources? A hardware RAID card might have an 800MHz PowerPC to do parity calculations and 512MB RAM as disk cache. Compare that to a medium server with dual 6-core server CPUs at 2.7GHz and 64-128GB RAM as disk cache. With that much RAM you can even cache your entire workload. So software RAID uses the server's resources, and a server will always have much, much more resources than a tiny hardware RAID card. It is like comparing a thin client with a 1GHz CPU and 1GB RAM to a medium-sized server - which is more powerful? A medium or huge server, or a card with a tiny CPU and tiny RAM?

Also, software RAID scales much, much better than hardware RAID. ZFS can use all the HBAs in the server and all its RAM, so there are many petabyte ZFS servers out there. Some ZFS servers do a million IOPS or even more. The Sequoia supercomputer uses ZFS + Lustre for 55 petabytes and 1TB/sec of bandwidth. Now compare all this to a single hardware RAID card with an 800MHz CPU and 1GB RAM: which one scales better? Which one gives the most performance? Do you connect hundreds of disks to a single hardware RAID card? No, but you can do that with a ZFS server.

And in the case of ZFS, it has superior protection against data corruption. Hardware RAID has notoriously bad data corruption protection and several other issues.

Also, you are locked in with hardware RAID. If your card crashes, you need to buy an identical card. With ZFS, you can migrate the disks to another computer, even one running another OS - you are not locked in. For instance, you can move a ZFS RAID from a SPARC server running Solaris to a Linux x86 PC, or FreeBSD, or Mac OS X. You are free.

I don't know of any advantage that hardware RAID has over software RAID. What would that be? Also, most (all?) hardware RAID lacks snapshots, which can save you hours of work. When I mess up my system disk, accidentally delete an important system file, and the PC does not work anymore, then instead of googling for hours trying to fix my Solaris installation or reinstalling everything, I just reboot into a snapshot and I am back to a point before I made the error. ZFS snapshots have saved me hours of work many times. I am not afraid to experiment anymore, which accelerates my learning of Unix. If I am going to try out a weird command/configuration, I just take a snapshot before I start banging on the system. Godsend.

+1. Couldn't have said it better
 
Now you guys have me thinking about going the software route :p

My system already runs Win7; could I run software RAID inside a VM on Win7, or would I be better off using an older Core 2 Duo for the software RAID / NAS setup instead?

-Todd
 
Any of the parity RAIDs (R5/R6) you need to go with hardware. Been there done that.
Performance on software RAID was laughable.
Output from a dtrace script on one of my production ZFS systems:

2013 Aug 13 14:22:18 tank 577 ms, 759 wMB 30 rMB 48582 wIops 2071 rIops 0+0 dly+thr; dp_wrl 111197 MB .. 133820 MB; res_max: 15956 MB; dp_thr: 26753

That is a 15-vdev (8 x raidz2) array. That's pretty close to the limit for random writes. Sequential throughput for these volumes is over 2GB/s.

You're right though, software RAID is slow.
 
Output from a dtrace script on one of my production ZFS systems:

2013 Aug 13 14:22:18 tank 577 ms, 759 wMB 30 rMB 48582 wIops 2071 rIops 0+0 dly+thr; dp_wrl 111197 MB .. 133820 MB; res_max: 15956 MB; dp_thr: 26753

That is a 15-vdev (8 x raidz2) array. That's pretty close to the limit for random writes. Sequential throughput for these volumes is over 2GB/s.

You're right though, software RAID is slow.
I don't understand the numbers; can you explain them to me?

But of course, software RAID can be slow sometimes. The point is, software RAID uses the server's resources, and if the server is weak or doing a lot of other work, then software RAID performance can suffer. But this is only a problem with a very weak server or a very heavily loaded server, i.e. in situations where a tiny hardware RAID card with an 800MHz CPU actually has more free resources.

The main point is that software RAID scales with the server. If you lack performance, just upgrade the server: add more disks, more RAM, etc., and at some point the server will have more resources than a hardware RAID card. Past that break point, the software RAID will always be faster than the hardware RAID.

I imagine there are situations where a hardware RAID can outperform a weak, small server, but a hardware RAID should never outperform a medium-sized or large server. So if the server is weak or ancient, hardware RAID can be faster; if the server is decent, software RAID will always be faster.

Back in the old days, a server was weak and did not have many CPU resources, so a hardware RAID card was needed to offload the CPU and handle all the I/O traffic, which was heavy work in those days. But today's multicore CPUs are very strong and can easily handle all the necessary I/O without breaking a sweat.

As time passes, more and more functionality can be handled in software, because CPUs get stronger. Back in the day you needed a separate modem with its own CPU to offload the server, but today any server can run a software modem; no dedicated hardware modem is needed. I suspect you could run graphics on the CPU soon, so you won't need special hardware for graphics, and sound in software too. In 20 years, when CPUs are really strong, they will handle all the functionality you could ask for in pure software, with no extra hardware needed to offload the server. Back in the day you probably would have needed extra hardware for voice chat, but today it can be done in software. Hardware RAID is not needed anymore; normal PCs are strong enough to handle all the I/O. Hardware RAID is going obsolete.
 
For almost any home usage, software RAID will be more than enough in terms of performance. As has been said here, by far the biggest benefit of hardware RAID is when drives break, when you need to add more drives, or when you need to upgrade hardware, e.g. the controller. These tasks are typically made simple on these controllers, especially if you have a hot-swap enclosure. Another benefit is that Areca and Adaptec controllers offer online migration if you want to switch between RAID levels (although it can take forever to complete...).

On systems with very high load, one benefit is that writing to the array requires just one write command to the controller. With software RAID the CPU typically has to send multiple writes, one for each drive that the data touches, in addition to possible reads for parity calculations. This increases bus traffic and latency.
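To put rough numbers on that, a small random write on RAID6 via the read-modify-write path looks something like:

Code:
# software raid6, small write touching one data chunk:
#   read old data + read old P + read old Q     = 3 reads
#   write new data + write new P + write new Q  = 3 writes
#   -> 6 host-side I/Os for 1 logical write
# hardware raid with write-back cache: 1 write to the controller;
# the parity work and extra disk I/O happen behind its cache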
 
For almost any home usage, software RAID will be more than enough in terms of performance. As has been said here, by far the biggest benefit of hardware RAID is when drives break, when you need to add more drives, or when you need to upgrade hardware, e.g. the controller. These tasks are typically made simple on these controllers, especially if you have a hot-swap enclosure. Another benefit is that Areca and Adaptec controllers offer online migration if you want to switch between RAID levels (although it can take forever to complete...).

On systems with very high load, one benefit is that writing to the array requires just one write command to the controller. With software RAID the CPU typically has to send multiple writes, one for each drive that the data touches, in addition to possible reads for parity calculations. This increases bus traffic and latency.
My point is, hardware RAID is just a tiny computer on a card, running software. It has a CPU, RAM, it runs a BIOS, etc. - it is a tiny computer (that is why these cards cost so much). What is the difference whether you run the software on the card's CPU or on the server's CPU?

Sure, hardware RAID will offload some bus traffic and latency, but is that a problem today? ZFS does checksum calculations on every block it reads (similar to doing an MD5 checksum on every data block, so ZFS can detect data corruption), and this takes CPU power, yes. But it is on the order of 3-5% of one core on a quad-core CPU. Surely you can trade 3-5% of your CPU power in exchange for not needing to buy another piece of hardware; I doubt there are many users who cannot afford 3-5% of one core. Servers have 6-12 core CPUs, and some have two or more CPUs.
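If you want to see that checksumming at work (pool name is just an example):

Code:
zpool scrub tank         # reads every block and verifies its checksum
zpool status -v tank     # per-device READ/WRITE/CKSUM error counters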

Hardware RAID was once useful and had its place a long time ago when CPUs were weak, but today you can do it all in software running on the CPU's cores. I would sell my hardware RAID cards today because there is still a market for them, but I predict the market will diminish.
 
Hardware RAID was once useful and had its place a long time ago when CPUs were weak.

I totally agree. Since the late 1990s (possibly the early 2000s at worst) the CPU usage due to parity RAID has been a non-issue on Linux and other non-Windows software RAID.
 
OK, but in response to that I'll say that my Intel software RAID5 on my i7 920 server gets a staggering 35MB/s and takes days to rebuild. Whereas my (old) PERC 5 gets 135-150MB/s, which saturates the GigE, and that is the bottleneck for me.

With good software RAID5/6 (ZFS or mdadm) I would expect >500MB/s reads and writes on the outer tracks with your computer and hard drives, using the Intel ports set to SATA mode, and a 10 to 12 hour rebuild (if tuned).
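The rebuild time is mostly governed by two md sysctls (values below are just examples):

Code:
sysctl -w dev.raid.speed_limit_min=100000    # KB/s floor for resync
sysctl -w dev.raid.speed_limit_max=500000    # KB/s ceiling
cat /proc/mdstat                             # watch the resync ETA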
 
I have a RAID6 with 5x enterprise-grade 2TB HDDs (6TB usable) and I get performance much greater than a single HDD. I am getting read and write speeds of up to 350MB/s, but usually it hangs around 150MB/s. I am very happy; I would have been happy even if I had just gotten the performance of a single drive, but I got more. I am using an LSI MegaRAID 9260-4L.

So don't be afraid to use RAID6 if you're looking for something with 99.999% to 100% chance of recovery from drive failure (between 5 and 7 physical drives in the array) with the ability to lose 2 drives and still be fine. Just keep in mind that this still isn't a backup solution, especially if your RAID controller decides to take a dump. :D

>.< I still need to invest in a few Enterprise-grade 4TB HDDs for non-RAID backup storage.
 