So I have a Raid 6 array, but it's slow. Is Raid 10 any better?

jordan12

So I have been trying to read up on the real differences between these two Raid options. With my 10 x 8TB drives, I get 64 TB of usable space. And I can withstand 2 drive failures and still have access to my data.

But what advantages are there for Raid 10? Can I lose more than 2 drives and still have access to the data? What do you guys suggest?
 
An 8-disk Raid 10 will create four 2-disk mirror sets (RAID 1) and then stripe them together (RAID 0). In theory, you can withstand up to four disk failures, provided each is in a different mirror set. Worst case though, you can only withstand one failure -- if the second disk in that mirror set also fails, your array is toast.
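If you want to see the failure-tolerance math concretely, here's a quick sketch (plain Python; the disk numbering and pairing are hypothetical, just the usual four-mirror layout for 8 disks) that enumerates which failure combinations an 8-disk RAID 10 survives:

```python
from itertools import combinations

# 8-disk RAID 10: four 2-disk mirror sets, striped together.
# The array survives as long as no mirror set loses BOTH of its disks.
mirror_sets = [(0, 1), (2, 3), (4, 5), (6, 7)]

def survives(failed_disks):
    failed = set(failed_disks)
    return all(not set(pair) <= failed for pair in mirror_sets)

for k in range(1, 5):
    combos = list(combinations(range(8), k))
    ok = sum(survives(c) for c in combos)
    print(f"{k} failed drive(s): array survives {ok} of {len(combos)} combinations")

# 1 failure: 8/8, 2 failures: 24/28, 3 failures: 32/35, 4 failures: 16/70.
# RAID 6 on the same 8 disks survives any 1- or 2-drive failure, but never 3.
```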

Fast RAID-6 typically requires a dedicated hardware RAID controller. I think your NAS doesn't do that, instead relying on its Celeron CPU for RAID-6 computations? I would open a support ticket with Asustor and see what they have to say about the slow RAID-6.
 
Well, let's start off easy.

1) What's "slow"?

2) What drives are you using? Are they Seagate archive drives (ST8000AS0002)?
 
Agreed. If these are Seagate Archive drives then slow writes are going to happen regardless of whether you use a single drive or use them in RAID. This is a function of the SMR drive technology. Reads should not be slow with this technology.
 
they are archive drives and they are slow....really slow.
Okay, with a little Google-fu I see he bought 12 of the 8TB Archive drives and sold 4 of them. Reads should be fine. But they're not going to perform well for writes in any RAID array. Heck, they're not going to perform well for writes outside of a RAID array, but the RAID rebuild case is the worst case because it's non-stop writes across the entire drive.

Which leads us back to the first question. What's "slow"?
 
Generally speaking, yes, you'll have better performance with 10 or 01 than 6 because no parity is involved during writes. That doesn't mean that RAID 6 performance can't be good. What hardware are you running on? I have a 16x 2 TB RAID6 with WD Green drives and it's stupid fast. Running linux software raid on an i5-760 and 16 GB ram.
 
These drives are much slower than WD Green drives for writes, because shingled recording requires writing a whole band of tracks at a time (which ends up requiring reading the band first, then rewriting it).
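To put rough numbers on that read-modify-write penalty, here's a toy calculation; the band size is an assumed figure for illustration, not the Archive drive's actual geometry:

```python
# Toy model of the SMR penalty: a small write landing inside a shingled band
# forces the drive to read and rewrite the whole band.
# The band size below is an assumption for illustration only.
BAND_MB = 256       # assumed shingled band size
WRITE_MB = 4        # size of an incoming random write

print(f"~{BAND_MB / WRITE_MB:.0f}x write amplification "
      f"for a {WRITE_MB} MB write into a {BAND_MB} MB band")

# Large sequential writes that fill whole bands (or fit in the drive's small
# PMR cache region) dodge this, which is why reads look fine while sustained
# writes and RAID rebuilds crawl.
```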
 
I hit around 50 MB per second during writes, and when adding drives to the array, it seems to take a very long time. As in 3-4 days for one drive.
 
Can I lose more than 2 drives and still have access to the data?

No. Everything is gone in that case.

But what advantages are there for Raid 10?

Faster random IO / Faster rebuilds (at the cost of a higher chance of total loss). It will still take days to rebuild with these drives.
 
Generally speaking, yes, you'll have better performance with 10 or 01 than 6 because no parity is involved during writes. That doesn't mean that RAID 6 performance can't be good. What hardware are you running on? I have a 16x 2 TB RAID6 with WD Green drives and it's stupid fast. Running linux software raid on an i5-760 and 16 GB ram.
Except you're comparing PMR drives to SMR drives. It's very much not apples to apples. BTW, the parity isn't what is making it slow. The parity information is only a 25% overhead with 8 drives (2 of the 8 drives' worth of writes are parity). So RAID-10 could pick up that 25%, maybe.

I hit around 50 MB per second during writes, and when adding drives to the array, it seems to take a very long time. As in 3-4 days for one drive.
You have the wrong tool for the job. You should have steered clear of SMR drives if you want fast writes.

No. Everything is gone in that case.
With RAID 10 it depends on which drives fail. He could lose all his data with as few as 2 drives failing, or keep it with as many as 4 drives failing.
 
Except you're comparing PMR drives to SMR drives. It's very much not apples to apples. BTW, the parity isn't what is making it slow. The parity information is only a 25% overhead with 8 drives (2 of the 8 drives' worth of writes are parity). So RAID-10 could pick up that 25%, maybe.

Good catch on the Archive drives, I'm not as familiar with those. I wasn't trying to compare my setup to his, just saying the original question "Is raid 10 faster than raid 6" has many caveats and there are times when Raid6 can indeed be quite fast. Parity accounts for 25% overhead on the drive writes, but if it's software raid it gobbles up CPU which will definitely take a severe hit on an underpowered system. I'd argue that if he was trying to do Raid6 on a Pentium 4, parity vs no parity would be much more noticeable.
 
Shoot, I thought we warned the OP in his NAS selection thread to NOT get Archive drives?

Archive drives are not recommended for hardware parity RAID, period. Rebuilds or similar operations (like expanding the array) will take ages. If you can tolerate the write speeds, use RAID 1 or RAID 10. If you want parity, the best bet is to use these drives with dedicated (regular) drives for parity via Snapraid. (Edit: snapraid will happily work with 2x4TB drives, you don't need a super expensive PMR 8TB)

Not gonna get sustained fast writes ever though.

I'd recommend getting rid of the Archive drives and switching to normal 6TB drives, if feasible.
 
RAID 6, if not CPU bound, will always be equal to or faster than RAID-10 for the same number of drives.

Suppose you have 2n drives. RAID-6 will effectively operate on 2n-2 drives in parallel, while RAID 10 will effectively operate upon n drives (mirrored sets) in parallel.

So when is 2n-2 > n? Solving the constraint, when n > 2, i.e. for 6 or more drives. When you have 4 drives, they are equally fast in theory.
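For anyone who wants to sanity-check that, a quick sketch under the same "effective data spindles" model (purely the arithmetic above, nothing drive-specific):

```python
# The "effective data spindles" arithmetic from the post above:
# RAID-6 stripes data across all but two drives' worth of capacity,
# RAID-10 stripes across one drive per mirror pair.
for drives in (4, 6, 8, 10):
    raid6_spindles = drives - 2       # 2n - 2
    raid10_spindles = drives // 2     # n
    print(f"{drives:>2} drives: RAID-6 ~{raid6_spindles} spindles, "
          f"RAID-10 ~{raid10_spindles} spindles")

# 4 drives: 2 vs 2 (tie); 6: 4 vs 3; 8: 6 vs 4; 10: 8 vs 5.
# RAID-6 pulls ahead past 4 drives, as long as the CPU keeps up with parity.
```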
 
RAID 6, if not CPU bound, will always be equal to or faster than RAID-10 for the same number of drives.

Suppose you have 2n drives. RAID-6 will effectively operate on 2n-2 drives in parallel, while RAID 10 will effectively operate upon n drives (mirrored sets) in parallel.

So when is 2n-2 > n? Solving the constraint, when n > 2, i.e. for 6 or more drives. When you have 4 drives, they are equally fast in theory.
Good catch. I overlooked the impact on the write speed of the mirroring portion.
 
RAID 6, if not CPU bound, will always be equal to or faster than RAID-10 for the same number of drives.

When you have 4 drives, they are equally fast in theory.

I'm not sure I agree with your logic, but I could be wrong. Striping != mirroring. Reading from a mirrored set gets faster almost linearly as you add more drives. Raid6 read speed doesn't increase linearly because it is striped. Instead of two parallel paths you have a serial queue. Now raid6 is always faster than reading one drive because you basically eliminate seek times and you often read directly from cache, but I'm not sure you can make the conclusion that the same number of drives is faster in raid6. I'd love to see some data proving the opposite, because it would make me feel better about my massive raid6 array.
 
I think I am going to just stick to the Raid 6 and call it a day. I figure once all 10 drives have been added to the array, I will see better performance.
 
I'm not sure I agree with your logic, but I could be wrong. Striping != mirroring. Reading from a mirrored set gets faster almost linearly as you add more drives. Raid6 read speed doesn't increase linearly because it is striped. Instead of two parallel paths you have a serial queue. Now raid6 is always faster than reading one drive because you basically eliminate seek times and you often read directly from cache, but I'm not sure you can make the conclusion that the same number of drives is faster in raid6. I'd love to see some data proving the opposite, because it would make me feel better about my massive raid6 array.
You only get a speed increase on a RAID-1_ array with reads because the controller effectively treats the drives (or sets of drives) like a striped raid-0 array reading half of the data from each drive (or set of drives) which close to doubles the effective throughput. If your controller can't do that you will get no increase in read speed with RAID-1_. There is no increase in write speed with RAID-1_ because all the data has to be written to both drives (or sets of drives).

RAID-6 is like RAID-0, but with 2 drives worth of parity data being written to the array in addition to the data. So, if your CPU is fast enough to not be the limiting factor an 8 drive RAID-6 array should have the same effective read / write performance of a 6 drive RAID-0 array. An 8 drive RAID-10 array will have the effective write speed of a 4 drive RAID-0 array. For reads (depending on the block size and the controller) it could have the read speed of an 8 drive RAID-0 array.
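Here's that reasoning as a rough throughput estimate; the 150 MB/s per-drive figure is just an assumed round number, and real controllers and caches will move these around:

```python
# Rough effective sequential throughput per the explanation above, assuming
# ~150 MB/s per drive (an assumed round number) and no CPU/controller bottleneck.
PER_DRIVE_MBPS = 150

def raid6_estimate(n):
    # reads and writes both behave like an (n-2)-drive stripe
    eff = (n - 2) * PER_DRIVE_MBPS
    return {"read": eff, "write": eff}

def raid10_estimate(n, striped_mirror_reads=True):
    # every write goes to both halves of a mirror, so only n/2 drives' worth
    # of unique data per pass; reads can approach an n-drive stripe ONLY if
    # the controller splits reads across both copies.
    reads = (n if striped_mirror_reads else n // 2) * PER_DRIVE_MBPS
    return {"read": reads, "write": (n // 2) * PER_DRIVE_MBPS}

print("8-drive RAID-6 :", raid6_estimate(8))    # ~900 MB/s read and write
print("8-drive RAID-10:", raid10_estimate(8))   # ~1200 MB/s read, ~600 MB/s write
```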
 
I figure once all 10 drives have been added to the array, I will see better performance.

I am confused why you did not create the array with all the drives you want to use from the start. Expanding a raid6 array is an expensive operation.
 
I think I am going to just stick to the Raid 6 and call it a day. I figure once all 10 drives have been added to the array, I will see better performance.
If you're going from 8 to 10 drives you'll see about a 33% improvement in write speed (8 data spindles instead of 6, assuming the CPU isn't the bottleneck).
 
I am confused why you did not create the array with all the drives you want to use from the start. Expanding a raid6 array is an expensive operation.


I needed 2 drives in an external to store all data until the other drives were added, and then I would be able to copy the data over to the array.
 
Yeah, there's not too much difference between the two besides the rated MTBF. I was just suggesting the cheaper variant, but the NAS ones work as well.
 
I decided to run some benchmarks comparing RAID-10 to RAID-6 to demonstrate that RAID-6 is faster. I used six 6TB Toshiba 7200RPM enterprise SATA drives connected to an IBM ServeRAID M5016 (LSI 9265-8i w/ 1GB CacheVault & supercap).

RAID-10:

My own ATTO style benchmark using IOmeter. Same idea except it's actually accurate and not affected by caching:
[screenshot]


CDM:
[screenshot]


RAID-6:

My own ATTO style benchmark using IOmeter:
[screenshot]


CDM:
[screenshot]


As you can see, RAID-6 is faster than RAID-10 with the same number of drives.
 
I find it strange that no one asks which RAID engine is being used - that would make all the difference, even despite the use of SMR drives. If there are only contiguous writes, his performance should not be bad at all. But assuming he uses some Windows-based FakeRAID/onboard RAID, performance will be much worse.

if it's software raid it gobbles up CPU which will definitely take a severe hit on an underpowered system.
It is a myth that parity RAID requires a dedicated processor to accelerate XOR writes, if that was what you were referring to. XOR is ultra simple and even a simple Celeron can calculate multiple gigabytes per second. Basically it is bottlenecked by RAM throughput.
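If anyone wants to convince themselves how cheap plain XOR parity is, a quick numpy sanity check along these lines (block count and size are arbitrary; the result depends entirely on your machine's memory bandwidth, not on any particular NAS):

```python
import time
import numpy as np

# XOR a parity block out of 7 data blocks of 64 MiB each (RAID-5/P-parity style).
# On any recent CPU the result should be multiple GB/s, limited mostly by
# memory bandwidth rather than the XOR itself.
BLOCK = 64 * 1024 * 1024
data = [np.random.randint(0, 256, BLOCK, dtype=np.uint8) for _ in range(7)]

start = time.perf_counter()
parity = np.bitwise_xor.reduce(data)    # element-wise XOR of all 7 blocks
elapsed = time.perf_counter() - start

gb = len(data) * BLOCK / 1e9
print(f"XORed {gb:.1f} GB in {elapsed:.3f} s -> {gb / elapsed:.1f} GB/s")
```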

The reason hardware RAID controllers are faster is the firmware they employ, which is designed much better than any Windows-based software RAID. If you compare against the software RAID available on Linux and BSD, it becomes a different story: the much more powerful hardware of the host system theoretically allows much higher performance. An Areca ARC-12xx, for example, can be bottlenecked by a single SSD in terms of IOps (70,000 IOps). No such cap exists for the host system.

You only get a speed increase on a RAID-1_ array with reads because the controller effectively treats the drives (or sets of drives) like a striped raid-0 array reading half of the data from each drive (or set of drives) which close to doubles the effective throughput. If your controller can't do that you will get no increase in read speed with RAID-1_.
Even if your controller can do that, there probably will be no speed benefit for reading. This is because with mirroring, all disks store the same data. If the controller lets disk A read block 1 and disk B read block 2, alternating that way, each disk has to skip 50% of the data passing under its head, limiting it to 50% of its maximum sequential throughput. To actually achieve a performance benefit you need to let disk A read a long run of blocks (say 1-64) and let disk B read the next run (65-128). This avoids the seeking penalty that prevents RAID1 from achieving any benefit.
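A toy sketch of that alternating-vs-chunked split (block counts are arbitrary; it only illustrates how much LBA span each head has to sweep for the data it actually returns):

```python
# Split a sequential read of `total` blocks across a 2-way mirror, either one
# block at a time or in one big chunk per disk. Block counts are arbitrary.
def read_plan(total, n_mirrors, chunk):
    plan = {d: [] for d in range(n_mirrors)}
    for start in range(0, total, chunk):
        disk = (start // chunk) % n_mirrors
        plan[disk].append(range(start, min(start + chunk, total)))
    return plan

for chunk in (1, 128):
    for disk, runs in read_plan(256, 2, chunk).items():
        swept = runs[-1][-1] - runs[0][0] + 1   # LBA span the head passes over
        useful = sum(len(r) for r in runs)      # blocks actually returned
        print(f"chunk={chunk:>3}  disk {disk}: returns {useful} of {swept} blocks swept")

# chunk=1  : each disk sweeps ~255 blocks to return 128 -> ~50% efficiency
# chunk=128: each disk sweeps 128 blocks to return 128 -> full sequential speed
```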

BSD geom_mirror with the round-robin algorithm is very good in this respect. But the best performance I have seen is with ZFS, which can read almost as fast as a RAID0 pool.
 
As you can see, RAID-6 is faster than RAID-10 with the same number of drives.

Great! Now we have experimental evidence as well to back up the theory. As you noted earlier, it is possible for RAID-10 reads to be as fast as RAID-0 provided the controller will stripe-read each mirrored set, but I have not noticed this to be the case in my experience with LSI, 3ware or Areca hardware controllers, although mdadm software RAID-10 with a custom layout will do it.

...despite the use of SMR drives. If there are only contiguous writes, his performance should not be bad at all.

If you were acquainted with OP's previous threads -- which most of the current posters here have been involved in -- you'd know he's using an Asustor NAS with proprietary software RAID on a Celeron J1900 CPU.

SMR drives are known to be poor sequential write performers once the write size exceeds their "cache" -- which is approx. 20GB on the Seagate 8TBs.


It is a myth that parity RAID requires a dedicated processor to accelerate XOR writes, if that was what you were referring to.

Agreed. However, dual-parity RAID requires a more complex GF(2^8) operation, which a custom ASIC (or FPGA) may perform faster than a general-purpose CPU.
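For the curious, here's a minimal sketch of that GF(2^8) arithmetic as it's used for the RAID-6 Q syndrome (byte-at-a-time for clarity; real implementations use lookup tables or SIMD):

```python
# RAID-6's second parity (Q) is computed in GF(2^8) with the polynomial
# x^8 + x^4 + x^3 + x^2 + 1 (0x11d): Q = g^0*D0 ^ g^1*D1 ^ ... with generator g = 2.

def gf_mul(a, b, poly=0x11d):
    """Multiply two bytes in GF(2^8) by shift-and-add with modular reduction."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return result

def pq_parity(data_bytes):
    """P (plain XOR) and Q (GF(2^8)-weighted XOR) for one byte column of a stripe."""
    p = q = 0
    coeff = 1                       # g^0
    for d in data_bytes:
        p ^= d
        q ^= gf_mul(coeff, d)
        coeff = gf_mul(coeff, 2)    # advance to the next power of g
    return p, q

# one byte from each of six data disks
print(pq_parity([0x11, 0x22, 0x33, 0x44, 0x55, 0x66]))
```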

If you compare against the software RAID available on Linux and BSD, it becomes a different story: the much more powerful hardware of the host system theoretically allows much higher performance. An Areca ARC-12xx, for example, can be bottlenecked by a single SSD in terms of IOps (70,000 IOps). No such cap exists for the host system.

OP's NAS OS is most probably a customized embedded Linux distro. He's using spinners, not SSDs (no one disputes that even today's fastest hardware RAID controllers -- based on the LSI SAS3108 chip -- will bottleneck on large RAID-6 SSD arrays). The OP's bottleneck, if any, is probably the NAS CPU.

Even if your controller can do that, there probably will be no speed benefit for reading. This is because with mirroring, all disks store the same data. If the controller lets disk A read block 1 and disk B read block 2, alternating that way, each disk has to skip 50% of the data passing under its head, limiting it to 50% of its maximum sequential throughput. To actually achieve a performance benefit you need to let disk A read a long run of blocks (say 1-64) and let disk B read the next run (65-128). This avoids the seeking penalty that prevents RAID1 from achieving any benefit.

mdadm will let you do a custom RAID-10 stripe layout (the 'far' layout, e.g. --layout=f2) that offers RAID-0 read speeds at the expense of slower-than-usual writes.
 
Even if your controller can do that, there probably will be no speed benefit for reading. This is because with mirroring, all disks store the same data. If the controller lets disk A read block 1 and disk B read block 2, alternating that way, each disk has to skip 50% of the data passing under its head, limiting it to 50% of its maximum sequential throughput. To actually achieve a performance benefit you need to let disk A read a long run of blocks (say 1-64) and let disk B read the next run (65-128). This avoids the seeking penalty that prevents RAID1 from achieving any benefit.

BSD geom_mirror with the round-robin algorithm is very good in this respect. But the best performance I have seen is with ZFS, which can read almost as fast as a RAID0 pool.
You were saying?

[screenshot]


RAID-1 with 2 of the 6TB drives on the same controller.
 
One more comparison, this time with 8 drives instead of 6. I used eight 6TB Toshiba 7200RPM enterprise SATA drives connected to an IBM ServeRAID M5016 (LSI 9265-8i w/ 1GB CacheVault & supercap).

RAID-10:

My own ATTO style benchmark using IOmeter.
[screenshot]


CDM:
[screenshot]


RAID-6:

My own ATTO style benchmark using IOmeter:
[screenshot]


CDM:
[screenshot]


As you can see, the performance gap grows (in terms of sequential throughput) as the number of drives goes up.
 
The thing is, the only real-world performance numbers that matter are the 4K/4K Q32 results. You can see that RAID 10 beats RAID 6 there.

Also, you kinda stacked the deck with your IOmeter settings. Set it to 100% random and 50% read and then post the graph.
 
The thing is, the only real-world performance numbers that matter are the 4K/4K Q32 results. You can see that RAID 10 beats RAID 6 there.

Also, you kinda stacked the deck with your IOmeter settings. Set it to 100% random and 50% read and then post the graph.
Real world performance for what? It all depends on what you're using the NAS for. If you're worried about the speed of copying large amounts of data to and from it (as the OP mentioned), then sequential performance is the key trait. For a home user, streaming media files to clients throughout the house isn't going to stress either one.

Sure, if you're running Fibre Channel and using the large array as the disks for remote systems, or have large databases in an enterprise environment, then the workload is totally different, but I don't expect many home users are doing that.
 
Real world performance for what? It all depends on what you're using the NAS for. If you're worried about the speed of copying large amounts of data to and from it (as the OP mentioned), then sequential performance is the key trait. For a home user, streaming media files to clients throughout the house isn't going to stress either one.

Sure, if you're running Fibre Channel and using the large array as the disks for remote systems, or have large databases in an enterprise environment, then the workload is totally different, but I don't expect many home users are doing that.

I agree usage scenario dictates your setup. My comment was based on the fact that in this thread you were pushing RAID 6 as a do-all magic bullet
 
I agree usage scenario dictates your setup. My comment was based on the fact that in this thread you were pushing RAID 6 as a do-all magic bullet
I did no such thing. I simply pointed out that RAID-6 was not responsible for the slowness he was seeing while copying data. I also pointed out that RAID-10 would not be faster for copying data.
 