switching from raid 5 to raid 10

piako

Backed up the array and I'm initializing the RAID 10 array right now. It seems my controller does two drives per mirror and then stripes them all together. Any way to get, say, 2 mirrors of three disks each rather than 3 mirrors of two disks each? There are six 250 GB HDDs and the resultant array is ~750 GB (698 GiB). The controller is a RocketRAID 2220.

now it's

1-|
2-|- raid1 -|
3-|         |
4-|- raid1 -|- raid0
5-|         |
6-|- raid1 -|

I would like

1-|
2-|
3-|- raid1 -|
4-|         |- raid0
5-|         |
6-|- raid1 -|

I know it will eat up more space, but I figured it'd be more secure. It seems the RR2220 only does the top scheme.
 
RAID controllers should allow this. But no matter how many drives you stick in those RAID 1 arrays, it is not a substitute for a good backup plan. Why are you ditching RAID 5?
 

Looking for more security. The array has sensitive data. With 2 mirrors of 3, each RAID 1 can lose up to two disks and the array still works; with 3 mirrors of 2, each RAID 1 can only lose one, so the wrong pair of failures takes out the whole array.
 
If your controller won't do what you want, a RAID 5+1 or similar should allow the same level of redundancy. All these schemes can tolerate up to 4 drive failures and still run (depending on which drives fail). In the end, you will only have 500 GB with any of them.

Also, I'd hate to see what kind of event would take out 4 drives simultaneously but not the remaining two. Wow. It's your data though, so do whatever the data warrants.
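
For anyone who wants to sanity-check the failure-tolerance talk above, here's a quick brute-force sketch in Python. The disk numbering and the way the two layouts are encoded are my own assumptions (taken from the diagrams in the first post); it just counts which failure combinations still leave every mirror with a working disk.

Code:
# Brute-force check of how many simultaneous failures each 6-disk layout survives.
# Disk numbers 1..6 and the groupings below are my own encoding of the diagrams
# in the first post, not output from the controller.
from itertools import combinations

DISKS = range(1, 7)

# Each layout is a list of mirror groups; the stripe on top survives a failure
# set as long as every mirror still has at least one working member.
LAYOUTS = {
    "3 mirrors of 2 (what the RR2220 does)": [{1, 2}, {3, 4}, {5, 6}],
    "2 mirrors of 3 (what the OP wants)":    [{1, 2, 3}, {4, 5, 6}],
}

def survives(mirrors, failed):
    return all(group - failed for group in mirrors)

for name, mirrors in LAYOUTS.items():
    print(name)
    for k in range(1, 7):
        combos = list(combinations(DISKS, k))
        ok = sum(survives(mirrors, set(c)) for c in combos)
        print(f"  {k} failed drive(s): array survives {ok} of {len(combos)} combinations")

With six disks it bears out the point above: the 2x3 layout survives any two failures (and some three- and four-drive failures), while the 3x2 layout already dies if both halves of one mirror go.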
 
Can you create a RAID-1 array with three drives? You could do the striping in the OS and the RAID-1 through the card.
 

Sure. You could have a RAID 1 array with n member disks, where n is <= the number of ports available. But it's not good advice to do RAID 1 at the controller level and RAID 0 at the OS level. In that kind of RAID 1/0, if the OS failed you'd be really f******.

I e-mailed HighPoint to ask them about this, so I'll get a response some time later and post back.
 
Why would I be fscked? Unless the OS's R-0 implementation is inferior in reliability to whatever HPT has whipped up, there is almost no reason to prefer one over the other, unless you want to boot from this array.

Yes, I do not think it's an optimal solution, but it is one that will work now. Alternatively, the OP could do a four-disk R-10 array with two hot-spare drives.
 

I believe that if you did, say, a RAID 10 with the controller presenting 6 x JBOD and then OS-level RAID doing 2 x RAID 1 (with 3 member disks per array) combined into one giant RAID 0, then if the OS ever needed to be re-installed the array would disappear, because the logic of the array is stored at the OS level and not in the RAID card's memory. I'm pretty sure this is how OS X works, if I remember correctly. If you set up the array and then re-installed the OS after wiping the boot OS (where the RAID logic is stored), you would only see the RAID slices and not the RAID volume. That is the advantage of keeping the array logic in the controller's memory.

Good point with the 4-disk RAID 10 w/ two hot spares; I'll check that out right now.

Unfortunately the RocketRAID setup only allows TWO members PER RAID 1:

[attached screenshot: picture1cm3.png]


OS X allows n members per RAID, as seen here.

4x JBOD at the controller level:
[attached screenshot: picture2kt8.png]


RAID 1 in OS X with 4 member disks:
[attached screenshot: picture3yt0.png]
 
I believe that if you did, say, a RAID 10 with the controller presenting 6 x JBOD and then OS-level RAID doing 2 x RAID 1 (with 3 member disks per array) combined into one giant RAID 0,...
Since the controller does not allow you to run 3 drives in RAID-1, my suggestion is rather pointless anyway.

if the OS ever needed to be re-installed the array would disappear, because the logic of the array is stored at the OS level and not in the RAID card's memory.
This is not true for Windows or recent Linux-based RAID arrays. From what I know about Windows dynamic disk RAID, and have been told about Linux software RAID, they store the array information on the disk(s) themselves, just like most hardware and software RAID controllers do.

Good point with the 4-disk RAID 10 w/ two hot spares; I'll check that out right now.
While it may be a good idea, it is inferior to your three-drive-mirror solution, since you have a period of vulnerability after a single drive fails, while the spare rebuilds.
 
This is not true for Windows or recent Linux-based RAID arrays. From what I know about Windows dynamic disk RAID, and have been told about Linux software RAID, they store the array information on the disk(s) themselves, just like most hardware and software RAID controllers do.
That's interesting. I've been thinking about moving the server box to another OS, but OS X seems to be working well for now.
While it may be a good idea, it is inferior to your three-drive-mirror solution, since you have a period of vulnerability after a single drive fails, while the spare rebuilds.
Well, it would work OK, but not at the level of three co-existing member disks per RAID 1. If I understand the RR2220 manual correctly, if I did a 4x250 RAID 10 and added two spares, the HighPoint would keep them in reserve and auto-rebuild the array onto a spare if a member disk fails.

Interestingly, HighPoint offers another option where you pick disks from the existing array and assign them the job of a spare. In that option, if a member disk fails AND there is enough available space, it converts the array down to a smaller logical size and removes the failed member from the array. This only works in RAID 5 setups, though. For RAID 10 you can only have unused spares (they just sit spinning until a disk fails and then kick in), which really doesn't make much sense because they're doing nothing until then. In the RAID 5 setup, the disk that replaces the dead one CAN be an active member of the RAID 5 array, but if a disk fails the RAID software simply reduces the available space and consolidates the parity into a smaller RAID 5 volume with one less disk. Interesting options.
 
It seems the only real difference between RAID 5 and 10 is how they perform when degraded. The speeds are similar. If 1 disk in RAID 5 fails and then another fails, you're SOL. If 1 disk in RAID 10 fails and then another fails, it depends on where the other disk is located (rough odds are sketched below the screenshots).

It appears that RAID 10 certainly has an advantage over RAID 5, but since I can't do RAID 10 with more than two members per RAID 1, it seems that for this controller this is as good as it gets. :rolleyes:

[attached benchmark screenshot: testfg1.jpg]

[attached benchmark screenshot: 27346229vv4.jpg]
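
To put a rough number on the "depends where the other disk is located" point, here is a tiny sketch. It assumes the second failure hits a random one of the remaining five drives and ignores rebuild time and correlated failures, so it's an illustration, not a reliability model.

Code:
# Chance that a *second* random failure is fatal, for a 6-disk array that has
# already lost one drive. Assumes the second failure is equally likely to hit
# any of the remaining disks.
remaining = 5

# RAID 5: any second failure loses the array.
raid5_fatal = 1.0

# RAID 10 (3 mirrors of 2): only the partner of the already-dead disk is fatal.
raid10_fatal = 1 / remaining

print(f"RAID 5  degraded: P(next failure kills array) = {raid5_fatal:.0%}")   # 100%
print(f"RAID 10 degraded: P(next failure kills array) = {raid10_fatal:.0%}")  # 20%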
 
For dynamic disks the array information is stored on the drives themselves. I've used a couple of dynamic RAID-0s, and moving the array back and forth between machines was relatively easy.
 
I am just curious: what does this server do that requires this amount of uptime guarantee?

As an aside, I'd be rather skeptical about those write performance numbers. For random writes, RAID-10 should perform better than RAID-5, since no reads and parity calculations need to be performed. Apart from that, I am surprised that you see only 64 MiB/s from what ought to be a 3-disk RAID-0 stripe. Even if you are running 'older' 250 GB disks, you should get at least 100 MiB/s out of that array.
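
Rough numbers behind that expectation. The per-drive figure below is an assumption for a circa-2006 250 GB SATA drive, not something measured in this thread, so treat it as a back-of-the-envelope sketch only.

Code:
# Back-of-the-envelope sequential estimate for a 3x2 RAID 10: data is striped
# across 3 mirrors, so the ideal rate is roughly 3x a single drive.
# The per-drive figure below is an assumed value, not a measurement.
per_drive_mb_s = 60      # assumed sequential throughput of one ~250 GB SATA drive
data_stripes = 3         # a 6-disk RAID 10 built as 3 mirrors of 2

ideal_mb_s = per_drive_mb_s * data_stripes
print(f"ideal sequential rate: ~{ideal_mb_s} MB/s")   # ~180 MB/s
# Seeing only ~64 MiB/s suggests the limit is the controller, driver, or bus
# rather than the drives themselves.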
 
So you didn't say anything about 5+1 or 1+5. Is this possible with your controller?

1 -|
2 -|- Raid 1 -|
3 -|          |
4 -|- Raid 1 -|- Raid 5
5 -|          |
6 -|- Raid 1 -|

Up to 4 drives can fail in this config, but some 4-drive failure combinations are fatal, such as 1-2-3-4. And up to 3 drives can fail without any effect on performance.


1 -|
2 -|- Raid 5 -|
3 -|          |
              |- Raid 1
4 -|          |
5 -|- Raid 5 -|
6 -|

Just another possibility. (A quick brute-force check of both layouts is sketched below.)
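
Same kind of brute-force check as earlier in the thread, applied to these two nested layouts. The disk grouping below is my own reading of the diagrams, not anything the controller reports.

Code:
# Which failure combinations each nested 6-disk layout survives, disks numbered 1..6.
from itertools import combinations

DISKS = range(1, 7)

def raid15_survives(failed):
    # Three 2-disk RAID 1 mirrors combined by RAID 5: a mirror is usable while
    # at least one member survives, and the RAID 5 tolerates one dead mirror.
    mirrors = [{1, 2}, {3, 4}, {5, 6}]
    dead = sum(1 for m in mirrors if not (m - failed))
    return dead <= 1

def raid51_survives(failed):
    # Two 3-disk RAID 5 legs combined by RAID 1: a leg is usable while at most
    # one of its members is lost, and the mirror needs at least one usable leg.
    legs = [{1, 2, 3}, {4, 5, 6}]
    return any(len(leg - failed) >= 2 for leg in legs)

for name, fn in [("RAID 1+5", raid15_survives), ("RAID 5+1", raid51_survives)]:
    print(name)
    for k in range(1, 7):
        combos = list(combinations(DISKS, k))
        ok = sum(fn(set(c)) for c in combos)
        print(f"  {k} failed drive(s): survives {ok} of {len(combos)}")

Both bear out the caveat above: some four-drive combinations survive, but a bad four (like 1-2-3-4 in the first layout) does not, and either way you end up with 2 x 250 GB = 500 GB usable.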
 
The controller can only do 0, 1, 5, 1/0, and JBOD.
I am just curious: what does this server do that requires this amount of uptime guarantee?

As an aside, I'd be rather skeptical about those write performance numbers. For random writes, RAID-10 should perform better than RAID-5, since no reads and parity calculations need to be performed. Apart from that, I am surprised that you see only 64 MiB/s from what ought to be a 3-disk RAID-0 stripe. Even if you are running 'older' 250 GB disks, you should get at least 100 MiB/s out of that array.
Hmmmm. Don't know, really. The boot drive controller and the RAID controller are the only devices on the PCI bus. It's a 64-bit, 33 MHz PCI bus with 3 slots, which I think maxes out at a theoretical 266 MB/s. Why the writes aren't faster I couldn't tell you. The disks are WD2500KS with 16 MB cache each. Any thoughts?
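
For what it's worth, a quick check of that bus math (theoretical peak before any protocol overhead):

Code:
# Theoretical peak of a 64-bit / 33 MHz PCI slot, ignoring protocol overhead.
bus_width_bytes = 64 / 8     # 8 bytes per transfer
clock_mhz = 33.33

peak_mb_s = bus_width_bytes * clock_mhz
print(f"PCI 64-bit/33 MHz peak: ~{peak_mb_s:.0f} MB/s")   # ~267 MB/s
# At ~64 MiB/s the bus has plenty of headroom, so the bottleneck is more likely
# the RR2220's RAID engine or its driver than the shared PCI bus.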
 