RAID10 slower than RAID5 on SAS system?

We just got an MD1000 SAS enclosure (from Dell) on a PERC 5/E adapter... it seems to be running great and is out-benchmarking the U320 enclosure we have by a wide margin... but internally, its RAID 5 set of drives outperforms its RAID 10 set of drives. That shouldn't be the case, right? Or does SAS somehow change the performance expectations between RAID 5 and RAID 10?

Here's the setup

Each drive is:
15K RPM SAS, 67.75 GB

RAID 10 set: 3 spans of 2 disks (6 drives total) - 203.25 GB
RAID 5 set: 6 disks (6 drives total) - 338.75 GB

Both have a 128 KB stripe size and the same read/write cache policy (No Read Ahead / Write Back). Both have also been given an NTFS full format with a 64 KB allocation unit size for the maximum size of their partition (although I tried 50 GB partitions on both and the RAID 5 set still outperformed the RAID 10 set).
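
For reference, here's a rough back-of-envelope of what I'd expect each layout to do on paper (just a sketch; the ~90 MB/s per-drive streaming figure for a 15K SAS drive is an assumption, not something I measured):

Code:
# Rough theoretical ceilings for 6 x 15K SAS drives.
# PER_DRIVE_MBPS is an assumed sustained streaming rate, not a measured one.
PER_DRIVE_MBPS = 90
DRIVE_GB = 67.75
N = 6

# RAID 10: 3 mirrored pairs striped together.
raid10_capacity      = (N // 2) * DRIVE_GB          # data fits on 3 drives' worth of space
raid10_seq_read_low  = (N // 2) * PER_DRIVE_MBPS    # if the controller reads from one half of each mirror
raid10_seq_read_high = N * PER_DRIVE_MBPS           # ...or from both halves, if it balances reads
raid10_seq_write     = (N // 2) * PER_DRIVE_MBPS    # every block is written twice

# RAID 5: 6 drives, one drive's worth of capacity lost to rotating parity.
raid5_capacity  = (N - 1) * DRIVE_GB
raid5_seq_read  = (N - 1) * PER_DRIVE_MBPS          # parity chunks are skipped, ~5 drives of data per stripe
raid5_seq_write = (N - 1) * PER_DRIVE_MBPS          # assumes cached full-stripe writes (parity computed, not re-read)

print(f"RAID 10: {raid10_capacity:.2f} GB, "
      f"~{raid10_seq_read_low}-{raid10_seq_read_high} MB/s seq read, ~{raid10_seq_write} MB/s seq write")
print(f"RAID 5 : {raid5_capacity:.2f} GB, "
      f"~{raid5_seq_read} MB/s seq read, ~{raid5_seq_write} MB/s seq write")

The capacities come out to the 203.25 GB and 338.75 GB above, and if those throughput assumptions are anywhere near right, RAID 5 would actually have the edge on large sequential writes; where I'd expect RAID 10 to pull ahead is small random writes.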

I'm using SiSoft Sandra 2007 to do the benchmarking and here are the results:

RAID 5, 128K, 6 Drives

Code:
Benchmark Results
Drive Index : 319 MB/s
Results Interpretation : Higher index values are better.
Random Access Time : 4 ms
Results Interpretation : Lower index values are better.

Benchmark Breakdown
Buffered Read : 577 MB/s
Sequential Read : 456 MB/s
Random Read : 172 MB/s
Buffered Write : 533 MB/s
Sequential Write : 294 MB/s
Random Write : 120 MB/s
Random Access Time : 4 ms (estimated)

Drive
Drive Type : Hard Disk
Total Size : 339GB
Free Space : 339GB, 100%
Cluster Size : 64kB

RAID 10, 128K, 6 Drives

Code:
Benchmark Results
Drive Index : 292 MB/s
Results Interpretation : Higher index values are better.
Random Access Time : 2 ms
Results Interpretation : Lower index values are better.

Benchmark Breakdown
Buffered Read : 583 MB/s
Sequential Read : 394 MB/s
Random Read : 230 MB/s
Buffered Write : 536 MB/s
Sequential Write : 156 MB/s
Random Write : 128 MB/s
Random Access Time : 2 ms (estimated)

Drive
Drive Type : Hard Disk
Total Size : 203GB
Free Space : 203GB, 100%
Cluster Size : 64kB

Also, here are comparisons using BM_DISK:

RAID 5

Code:
Now Calculating disk throughput on disk 2
Rate for start of disk area  : 356962043 bytes/sec
Rate for middle of disk area : 356962043 bytes/sec
Rate for end of disk area    : 215092513 bytes/sec

RAID 10

Code:
Now Calculating disk throughput on disk 1
Rate for start of disk area  : 266305016 bytes/sec
Rate for middle of disk area : 270600258 bytes/sec
Rate for end of disk area    : 44739243 bytes/sec

I don't trust the BM_DISK results as much as the Sandra results, but they still seem to indicate the RAID 5 is faster for the most part... (are there any other benchmarking utilities I should use if Sandra isn't considered reliable enough?)

So to sum it up: the RAID 5 set got an index of 319 and the RAID 10 set got 292. Any ideas why the RAID 5 set outperformed the RAID 10 set?
 
Could the PERC's cache be messing up the reporting of the RAID 5 numbers, aka pulling results from it instead of the drives themselves?
 
I suppose it's possible... I'm not sure how I would check for that. The current cache policy is set to:

No Read Ahead (Other options are Read Ahead, Adaptive Read ahead)

and

Write Back (Other options are Write Through, Force Write Back)

I tried changing these two settings with no noticeable difference in speed for one of the benchmarks...
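
One thing I might try to take the controller cache out of the picture is hammering a file that's far bigger than the PERC's onboard cache, so the numbers have to come from the spindles. A quick-and-dirty sketch (the path and sizes are just placeholders, and the OS file cache can still flatter the read pass unless the file is big enough or you clear it between passes):

Code:
import os, time

TEST_FILE = r"E:\cache_test.bin"   # placeholder path on the array under test
FILE_SIZE = 4 * 1024**3            # 4 GB -- far larger than the controller cache
BLOCK     = 1024 * 1024            # 1 MB per I/O

# Write a file much larger than the controller cache, then read it back.
buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(TEST_FILE, "wb", buffering=0) as f:
    for _ in range(FILE_SIZE // BLOCK):
        f.write(buf)
    os.fsync(f.fileno())
write_secs = time.perf_counter() - start

start = time.perf_counter()
with open(TEST_FILE, "rb", buffering=0) as f:
    while f.read(BLOCK):
        pass
read_secs = time.perf_counter() - start

mb = FILE_SIZE / 1024**2
print(f"write: {mb / write_secs:.0f} MB/s, read: {mb / read_secs:.0f} MB/s")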
 
Try HDTach. I don't trust Sandra for hard drives.

Also, IOMeter would be a good test. Either of those is likely to give you more accurate and useful results than Sandra.
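
If you want something scriptable in the meantime, a rough sketch like this (random 64 KB reads against any large file already sitting on the array; the path and read count are placeholders) gives you IOPS and average latency numbers you can compare between the two sets:

Code:
import os, random, time

TEST_FILE = r"E:\cache_test.bin"   # placeholder: any large file on the array under test
BLOCK     = 64 * 1024              # match the 64 KB NTFS cluster size
NUM_READS = 2000

size = os.path.getsize(TEST_FILE)
durations = []
with open(TEST_FILE, "rb", buffering=0) as f:
    for _ in range(NUM_READS):
        # pick a random block-aligned offset and time a single read
        offset = random.randrange(0, size - BLOCK)
        offset -= offset % BLOCK
        t0 = time.perf_counter()
        f.seek(offset)
        f.read(BLOCK)
        durations.append(time.perf_counter() - t0)

total = sum(durations)
print(f"random 64K reads: {NUM_READS / total:.0f} IOPS, "
      f"average latency {1000 * total / NUM_READS:.2f} ms")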

 