Online Capacity Expansion

aye29

Weaksauce
Joined
Jun 20, 2006
Messages
76
What does the online capacity expansion feature of some RAID cards allow you to do? Does it allow you to add a drive to a RAID 5 array without rebuilding?
 
So without online capacity expansion I would still be able to add a drive to an existing array, but I would have to rebuild each time? In either case, would I have to wipe the data in the existing array to add a drive?
 
Online as in the RAID array is still ONLINE and it can be EXPANDED while it's ONLINE.
Hence, online expansion.

No extra work required.

I could be a bit mistaken though :eek:

And while I'm at it, does anyone know if the RAID 5 arrays on the 965 and 975 chipsets have online expansion?
 
aye29 said:
So without online capacity expansion I would still be able to add a drive to an existing array, but I would have to rebuild each time? In either case, would I have to wipe the data in the existing array to add a drive?
Without OCE, you'd need to nuke the array, recreate it with the added disk and restore data from backups.
 
How reliable is online capacity expansion? Is it likely that using it would destroy that data on my existing array? I got a LSI Megaraid 150-6 off ebay and I want to create a 3 drive raid5 array and add more disks later using OCE. If OCE is dangerous then I'll probably just buy 6 drives and create a 6 drive array and not deal with OCE in the future.
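For planning purposes, RAID 5 gives you (n − 1) drives' worth of usable space, so the two options being weighed here work out to (a quick sketch of the arithmetic):

```shell
# RAID 5 usable capacity = (number of drives - 1) * drive size
drive_gb=500
for n in 3 6; do
  echo "$n drives: $(( (n - 1) * drive_gb )) GB usable"
done
# 3 drives: 1000 GB usable
# 6 drives: 2500 GB usable
```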
 
Just a note: OCE does require a "rebuild", however the array will not be offline and likely not lose data during the process.

aye29 said:
How reliable is online capacity expansion? Is it likely that using it would destroy that data on my existing array? I got a LSI Megaraid 150-6 off ebay and I want to create a 3 drive raid5 array and add more disks later using OCE. If OCE is dangerous then I'll probably just buy 6 drives and create a 6 drive array and not deal with OCE in the future.

reliable enough that companies that have backups of their data will use it.
 
I have used OCE on my RR2320 twice now with no ill effects. (4 Drive to 5 drive, 5 drive to 8 drive).

Be warned, expansion can take a VERY long time. It took 30+ hours to expand from 5 drives to 8 drives.
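That 30-hour figure is plausible as a back-of-envelope check, since an OCE pass has to read and rewrite essentially the whole array. The throughput number below is an assumption for illustration, not a controller spec:

```shell
# Rough estimate: expanding rewrites roughly all the raw data.
# Assumed: 5 x 500 GB of raw capacity, background rate ~25 MB/s.
raw_mb=$(( 5 * 500 * 1000 ))   # 2,500,000 MB
rate=25                        # MB/s (assumed)
secs=$(( raw_mb / rate ))
echo "$(( secs / 3600 )) hours"
# 27 hours
```

In the same ballpark as the 30+ hours reported above.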
 
It's another uptime preservation tool. Usually it won't save you any time overall -- the re-build plus partition expansion time can take more absolute time than a full restoration from backup. There are multiple potential points of failure, so it isn't a good idea to use OCE to avoid making a backup. Usually you'll get lucky, but the one time you get unlucky, it could cost you all your data.
 
Madwand said:
It's another uptime preservation tool. Usually it won't save you any time overall -- the re-build plus partition expansion time can take more absolute time than a full restoration from backup. There are multiple potential points of failure, so it isn't a good idea to use OCE to avoid making a backup. Usually you'll get lucky, but the one time you get unlucky, it could cost you all your data.


An advantage of XFS here is that expanding to fill the new space takes around two seconds.
 
Dew said:
An advantage of XFS here is that expanding to fill the new space takes around two seconds.

Had to register just to ask you about this :p

I recently acquired my own RR2320 and I've been racking my brains on how to set up an initial 4x500GB RAID5 array on a Fedora Core 5 system so that I can easily expand it in the future with additional drives. It would be for storage only, with one partition and filesystem.

I have no personal experience of how exactly the OCE works, but I've come to understand that it only expands the physical volume or "free space" of what the OS sees and leaves any partitions and filesystems untouched. It seems easy to enlarge an XFS filesystem with xfs_growfs, but don't you have to expand the underlying partition first? And how do you do that, just delete+recreate a new, larger one with fdisk? Is this really reliable? Or could you use something like LVM or EVMS to do the job?

Is it possible when OCE'ing the array that the free space would appear "in front" of the existing partition and not "behind" it? Stupid question, maybe, but just wondering...
 
Welcome to [H] :D
crewd said:
I have no personal experience of how exactly the OCE works, but I've come to understand that it only expands the physical volume or "free space" of what the OS sees and leaves any partitions and filesystems untouched. It seems easy to enlarge an XFS filesystem with xfs_growfs, but don't you have to expand the underlying partition first? And how do you do that, just delete+recreate a new, larger one with fdisk? Is this really reliable? Or could you use something like LVM or EVMS to do the job?
Even if you use a tool like EVMS, delete+recreate is probably what'll happen underneath. As long as it starts at the same cylinder, there should be no problems. My suggestion is try making a 3-disk array, expand it to 4, and write down what steps you have to take to get it all expanded.
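One way to rehearse the whole grow-and-expand procedure without tying up real disks for 10+ hours is loopback files with Linux md. A sketch only (needs root; /dev/loopN, /mnt/test, and the sizes are example names, and mdadm's reshape is not the same code path as a hardware card's OCE):

```shell
# Make four small "disks" backed by files and attach them to loop devices
for i in 0 1 2 3; do
  truncate -s 256M disk$i.img
  losetup /dev/loop$i disk$i.img
done
# Build a 3-disk RAID 5, put XFS straight on it, and mount it
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/loop0 /dev/loop1 /dev/loop2
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/test
# The "OCE" part: add the fourth disk and reshape while mounted
mdadm --add /dev/md0 /dev/loop3
mdadm --grow /dev/md0 --raid-devices=4
# Fill the new space; on XFS this takes seconds
xfs_growfs /mnt/test
```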
crewd said:
Is it possible when OCE'ing the array that the free space would appear "in front" of the existing partition and not "behind" it? Stupid question, maybe, but just wondering...
Possible? Sure! But I don't think the manufacturers would get very far with this approach... try it first, but my guess is it's actually a useful feature ;)

 
Oh, I'm definitely going to try and test different scenarios. It's just a tad tedious when merely initializing an empty 3x500 R5 array takes 10+ hours... I'm afraid to find out just how long it takes to add another drive :p. Probably better timewise to do the testing using JBOD or maybe RAID0; results should essentially be the same, just a lot faster to set up.

unhappy_mage said:
Possible? Sure! But I don't think the manufacturers would get very far with this approach... try it first, but my guess is it's actually a useful feature ;)
Heh, yeah.
 
If you use XFS and have not already put data on the drives, follow the steps to set up XFS but SKIP the partitioning. XFS can run directly on the drive without a partition.

so instead of accessing /dev/sdb1, you would use /dev/sdb

This means that when you expand, all you have to do is xfs_growfs. No mucking with partitions. Besides, fdisk caps out at 2TB.

I didn't find out about XFS being able to run directly on the drive till it was too late; now it would be a royal pain to redo my stuff, so I settled for the 2TB limit (and lost 40 gigs off my array). When I build my second array, I'll do things properly on that array, move all my data over, and set up my current array the right way.

It looks like I'll be building that second array in a few months, at the rate I'm going.

Here are the steps if you do use a partition:
d to delete the existing partition
n for new partition
p for primary
1 for partition number one
enter twice to make it use all space
w to write changes
reboot your system
xfs_growfs /your/mount/point
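Those keystrokes can also be scripted with a here-document. A sketch only, and dangerous to get wrong: it assumes the array is /dev/sdb with the filesystem on /dev/sdb1 mounted at /storage (example names), and it only preserves data because the recreated partition starts at the same sector:

```shell
# WARNING: sketch -- triple-check the device name before running.
# Delete and recreate partition 1 to span the newly expanded array.
fdisk /dev/sdb <<'EOF'
d
n
p
1


w
EOF
# Re-read the partition table (or reboot, as above)
partprobe /dev/sdb
# Grow the mounted XFS filesystem to fill the bigger partition
xfs_growfs /storage
```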

Here is the process if you put XFS directly on the drive:
xfs_growfs /your/mount/point
 
Wow, this was news. Of all the googling that I've done about XFS (pre-registering here) not once did I find anyone mentioning this. And the fdisk capping out at 2TB :(. Very good info, since adding more 500 gig drives I can get up to 4TB with one card.

No, I haven't moved any data to my array yet, just done some partitioning/xfs exercises, so it's not too late for me. I'm really glad now that I registered :D. Thanks guys.
 
Glad to help; it would have sucked to find that out after adding enough drives to break 2TB. fdisk will roll over and tell you that you only have as much disk space as you have OVER 2TB. So, if you end up with a formatted capacity of 2300GB, it would tell you that your array is 100 gigs!
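The wrap-around is just 32-bit sector arithmetic: an MBR (DOS) partition table stores sector counts in 32-bit fields, so with 512-byte sectors everything above 2 TiB gets chopped off. A quick sketch, using the 2300GB array from above:

```shell
# MBR limit: 2^32 sectors of 512 bytes
limit=$(( 2**32 * 512 ))        # 2199023255552 bytes (2 TiB)
array=2300000000000             # a 2300 GB array
echo "limit: $limit bytes"
echo "fdisk would show: $(( (array - limit) / 1000000000 )) GB"
# fdisk would show: 100 GB
```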
 
A word of warning: you may have to specify the filesystem as xfs in the fstab, as it may not realize that there is a filesystem on /dev/sdX (where X is the letter assigned to your array)
 
Mm, I'm not sure what you mean. Wouldn't it be defined there in any case?

Code:
/dev/sda               /mntpoint                       xfs    defaults 1 1
 
Yep, what you have should work fine. My point is that you can't use auto (I like to cheat and do that sometimes).
 