Areca users: capacity expansion question

Allynm

Fellow gurus,

I've just recently made the switch over to an Areca card (specifically the ARC-1230). I have an expansion question specific to how Areca works their volume structuring.

Example (over-simplifying for clarity - this is not my setup):
- 'Raid set' of 3x 500GB drives, 1500GB raw capacity.
- 'Volume set' - 1TB raid-5 created within the above raid set.

With the above example, what precisely happens if the user wants to expand capacity, not by adding drives to the array, but by replacing drives with those of a larger capacity? Is it possible to intentionally fail the array by replacing a 500GB drive with a 1TB drive, allowing a rebuild, then repeating for the other 2 drives? If this is done (and is allowed / works), does the raid set raw capacity simply jump to 3000GB allowing later expansion of the volume set to 2TB, or is there some hidden step I am missing?
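
(For anyone checking my math, here's the arithmetic I'm assuming, as a quick Python sketch - it's just the generic raid-5 formula, nothing Areca-specific, and whether the raid set actually grows after the swaps is exactly what I'm asking:)

# Generic raid-5 capacity math for the example above (illustrative only).
def raid5_usable_gb(drives_gb):
    # usable space = (number of drives - 1) * smallest member
    return (len(drives_gb) - 1) * min(drives_gb)

before = [500, 500, 500]
after = [1000, 1000, 1000]
print(sum(before), raid5_usable_gb(before))  # 1500 raw, 1000 usable (the 1TB volume set)
print(sum(after), raid5_usable_gb(after))    # 3000 raw, 2000 usable (the hoped-for 2TB)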

As a side question, I know that with other controllers' interfaces you could have two 1TB drives and one 500GB drive, and make a raid-5 using the first 500GB of all 3 and a raid-1 using the second half of the two 1TB drives. Is this possible with the way Areca does their raid sets?

Thanks in advance for your assistance
Al
 
I have used OCE on the Arecas to expand an array, but it was with similar-sized disks, and I needed to delete and make a new volume set. I don't believe OCE will expand the RAIDset to fill up the new drive; it will use up 500GB of the new disk and (I believe) show the rest of it as free. Also, the volumeset (which is formatted) will not expand to take up the new space.

The easiest way to go about it is to backup your data and then make a new raidset and volume set. If you don't have enough ports on the raid controller (or another machine with plenty of space) it can become a bit complicated. If you have a friend with a few TB free, it can be helpful! If not, read on:

With the array still on the Areca, install a few of the new drives onto a standard SATA controller. Copy the data off of the RAID array to the single drives, and when you have sufficiently backed that up, pull the old array. Then, with a verified backup of all of the important parts of the RAID array, install the old RAID disks as plain SATA drives in the machine in question. Format those disks as single disks and back up all the data to them as well, verifying again that you copied everything. Next, take all of the new disks, install them into the Areca, make a raidset and then a volumeset, format, and get the new array up. Finally, copy the data off of the old disks onto the new array. In the end you should have a new RAID array with the newer, larger disks.
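
When I say 'verify', I mean something stronger than eyeballing file counts. Here's a throwaway Python sketch of the kind of hash comparison I have in mind - the paths are made-up placeholders, and this is just one way to do it:

import hashlib, os

def sha1_of(path, bufsize=1 << 20):
    # Hash one file in chunks so huge files don't eat RAM.
    h = hashlib.sha1()
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

def compare_trees(src_root, dst_root):
    # Walk the source tree and confirm every file exists in the copy with the same hash.
    bad = []
    for dirpath, _, files in os.walk(src_root):
        for name in files:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_root, os.path.relpath(src, src_root))
            if not os.path.exists(dst) or sha1_of(src) != sha1_of(dst):
                bad.append(src)
    return bad

# e.g. compare_trees('/mnt/old_array', '/mnt/backup_disk1') -> [] means the copy checks out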
 
It may be possible to achieve such an expansion, but you are basically putting your data at great risk by intentionally pulling live drives. By doing this, you risk losing your entire array to corruption, or simply having another drive fail during the process.

Your best bet is to look at adding another array (run in parallel) and then expanding to the new-sized array, or look at a solution such as WHS.
 
Allynm said:
With the above example, what precisely happens if the user wants to expand capacity, not by adding drives to the array, but by replacing drives with those of a larger capacity? Is it possible to intentionally fail the array by replacing a 500GB drive with a 1TB drive, allowing a rebuild, then repeating for the other 2 drives? If this is done (and is allowed / works), does the raid set raw capacity simply jump to 3000GB allowing later expansion of the volume set to 2TB, or is there some hidden step I am missing?

Yes, you can replace one drive at a time, let it rebuild after each swap, and after you have replaced them all you can expand the volume to use all of the space on the new drives.

Allynm said:
As a side question, I know that with other controllers' interfaces you could have two 1TB drives and one 500GB drive, and make a raid-5 using the first 500GB of all 3 and a raid-1 using the second half of the two 1TB drives. Is this possible with the way Areca does their raid sets?

Yes, you can mix drives of different sizes in raid-5. It will simply limit your space to (n-1) * (size of the smallest drive), where n is the number of drives in the array.
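
For your mixed-size example that works out to (rough Python, just the math, nothing controller-specific):

n_drives = 3
sizes_gb = [1000, 1000, 500]            # two 1TB drives plus one 500GB drive
usable_gb = (n_drives - 1) * min(sizes_gb)
print(usable_gb)                        # 1000 - the upper half of each 1TB drive is simply unused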
 
It may be possible to achieve such an expansion, but you are basically putting your data at great risk by intentionally pulling live drives. By doing this, you risk losing your entire array to corruption, or simply having another drive fail during the process.

Your best bet is to look at adding another array (run in parallel) and then expanding to the new-sized array, or look at a solution such as WHS.

I cited an example in my original post. In reality I am building a 10x1TB array, and running another array in parallel is not exactly 'practical' in that case.

Since I am using raid-6, I can pull one drive at a time all day long. There is still redundancy, so the data is not at risk.
 
Yes, you can replace one drive at a time, let it rebuild after each swap, and after you have replaced them all you can expand the volume to use all of the space on the new drives.
Yes, you can mix drives of different sizes in raid-5. It will simply limit your space to (n-1) * (size of the smallest drive), where n is the number of drives in the array.

OK, to be clear, so based on what you're saying, the raw space available for the raid set will always be (smallest drive)*n? Does this number automatically change?

Remember, I'm asking this not for raid cards in general, but for Areca's specific implementation of it. I'm hoping someone can pipe in that has actually done it on an Areca card.
 
1) Why not go whole hog and do 12x1TB? :)

2) You can get more help in the storage forum at 2cpu.com; there are some there who may have done exactly that. I could try it, but I'm not about to risk data on either array.
 
Also, the volumeset (which is formatted) will not expand to take up the new space.

Current Areca firmware has a specific option to expand the (last) volume set within a raid set. The big question is whether the raid set auto-expands once the last drive has been swapped to the larger size.

With the array still on the Areca, install a few of the new drives onto a standard SATA controller. Copy the data off of the RAID array to the single drives [...] In the end you should have a new RAID array with the newer, larger disks.

This is what I just did to switch from Promise to Areca. I swear I'm never doing that again. Not only does it take 2x as long, it is crazy risky, and sort of defeats the purpose of having multiple terabytes in raid-6 to begin with...
 
1) Why not go whole hog and do 12x1TB? :)

I can't fit that many drives in the case :). I'll try and post up some pics.

Thanks for the tip on 2cpu, I'll check there. If I see anything I'll dupe it in this thread for the others that are wondering the same as I am...
 
OK, to be clear, so based on what you're saying, the raw space available for the raid set will always be (smallest drive)*n? Does this number automatically change?

Remember, I'm asking this not for raid cards in general, but for Areca's specific implementation of it. I'm hoping someone can pipe in that has actually done it on an Areca card.

OK, I never tried that on an Areca, but Adaptec, LSI, and a few others act that way, so I have no reason to believe Areca is any different.

I actually need to torture my new batch of SAS drives, so I'll run a test for ya. It will take a bit, so I'll probably post the results tomorrow.
 
OK, with 15K SAS drives and small capacities the test went pretty fast; it was, however, a complete bust. After swapping all the drives for bigger-capacity ones there is no way to take advantage of the extra space. The raid set space remains the same and there is no way to modify it.

Here's the brand new raid 5 with 3 x 36GB drives:
[screenshots: 1.jpg, 2.jpg]

Volume on that raid set:
[screenshot: 3.jpg]

Swapped 1 drive:
[screenshot: 4.jpg]

Swapped 2 drives:
[screenshot: 5.jpg]

Swapped the 3rd and final drive:
[screenshot: 6.jpg]

Raid set after replacing all 36GB drives with 73GB drives:
[screenshot: 7.jpg]

As you can see, still the same capacity (110GB) and no way to expand it.
 
Yup. That was exactly my fear. Thanks for testing it out for me. I guess I'll either try to point Areca towards this thread and hope for an update, or move to the 1260 so that I have enough extra ports to build a new set if I ever shift the array to larger drives.

It's a shame, though; that whole 'raid set' part of the Areca configuration seems to be the limiting factor in this case.

I wonder - did you try the 'expand' option to see what it does for you in that current state?
 
Expand is not available due to the lack of free drives. I can try adding a 4th 73GB drive and see what that does.
 
Adding the 4th 73GB drive didn't fix things. As far as the controller was concerned, it was just like I'd added a 36GB drive: the capacity went to 146GB and there is no way to modify that.
 
Axan,

Thanks for the additional attempt. Keep that set handy, I've contacted Areca on the issue and pointed them towards this thread, as well as another I found on storagereview:
http://forums.storagereview.net/index.php?showtopic=26790

It just seems 'odd' that arguably the best raid card line is unable to do what nearly every other line of cards can. It all seems to stem from the whole 'raid set' vs. 'volume set' thing. Having that additional (and seemingly unnecessary) layer is killing the overall flexibility of set creation (like in the second example I cited in my first post).

Al
 
It just seems 'odd' that arguably the best raid card line is unable to do what nearly every other line of cards can. It all seems to stem from the whole 'raid set' vs. 'volume set' thing. Having that additional (and seemingly unnecessary) layer is killing the overall flexibility of set creation (like in the second example I cited in my first post).

Al

They also have the split array issue, which seems to stem from cable and/or drive issues. I lost a 5TB array on my 1231ML due to a faulty ML cable. In the end, instead of moving to another brand, I built a second server, initially to back up the first one, and then that second "backup" machine got out of control and it has a 12x 1TB R5 array on it. :)

Lessons learned: Always have fresh ML cables on hand. RAID is not a backup. Always have a backup (I mostly did).
 
Raid set after replacing all 36GB drives with 73GB drives:
[screenshot: 7.jpg]

As you can see, still the same capacity (110GB) and no way to expand it.

This would actually be what I would expect because the raid set would still be the same size and all of its space is taken up. If you go to RaidSet Functions -> Expand Raid Set does it not give you the option to expand it at all? Then from there edit the volume set and expand that?
 
We've been over that; read the whole thread before posting. When you go to expand the raid set, it tells you there are no free drives to do so.
 
We've been over that; read the whole thread before posting. When you go to expand the raid set, it tells you there are no free drives to do so.

I actually did read the whole thread, but I did not see where you said that exactly. Is this what you were talking about?

Expand is not available due to the lack of free drives.

I didn't know that was specifically the 'expand raid set' option just from that. Also, you didn't have any screenshots actually showing it (even though you were posting screenshots of a bunch of other stuff), which is why I wasn't sure what you were referring to, given the lack of detail.
 
Actually, he did try to expand the set: first with all 3 drives swapped (no go), and then by adding a drive (which worked, but kept the space the same as if they were 36GB drives all along).
 
I got a first round reply from Areca tech support.

Dear Sir,

Expanding a raidset by replacing drives with bigger ones is possible; we have supported this feature for a few years.

But such a solution is not recommended. If your volume is not raid-6, volumes will stay in degraded mode for a long time, and if any member drive has a problem, the data will be corrupted.

We suggest customers clone drives instead of rebuilding; although cloning increases service downtime, there is no risk of data loss.

I've replied, pointing him to the pics in this thread. The raid-5 case is a moot point, since I would never do this with a raid-5 set. The point here is knowing whether we can do this successfully, and, if it is supposed to work, why it did not work when tested.
 
hmmm

Dear Sir,

It is because that user didn't reset the raidset capacity.
If they contact us, we will be able to tell them what the problem could be.

...?
 
Just got this back:

Dear Sir,

please mail us two screen shots
1. the raidset information page.
2. the entire raidset hierarchy page.

axan - I think he asked for the *entire* hierarchy page for a reason. I sent him the last 2 pics from your pic post, but can you check the hierarchy page again and see if there is any additional info at the bottom? If so, please post it.

Thanks
Al
 
Here you go

[screenshots: 8.jpg, 9.jpg]

BTW, the hierarchy page screenshot I posted above shows the entire page as well; you can see all the channels listed, as well as a piece of the background.
 
I cited an example in my original post. In reality I am building a 10x1TB array, and running another array in parallel is not exactly 'practical' in that case.

Since I am using raid-6, I can pull one drive at a time all day long. There is still redundancy, so the data is not at risk.

Practicality isn't always the point, and neither is rebuilding an array by pulling out LIVE drives.

And your raid-6 comment is not entirely true. While in theory the redundancy holds, pulling live drives can corrupt your entire array. I have done it before; hot swapping or pulling live drives should be a last-resort effort to protect your data, not a convenient way to "upgrade" your system. You will see that no business will replace live drives on a system without proper backups and outside a service window, for this exact reason. They will not upgrade in this fashion either.

axan said:
OK, with 15K SAS drives and small capacities the test went pretty fast; it was, however, a complete bust. After swapping all the drives for bigger-capacity ones there is no way to take advantage of the extra space. The raid set space remains the same and there is no way to modify it. [...] As you can see, still the same capacity (110GB) and no way to expand it.

I didn't think it would be possible as I had a similar issue with my Areca before.
 
And your raid-6 comment is not entirely true. While in theory the redundancy holds, pulling live drives can corrupt your entire array.

See:
http://www.barrys-rigs-n-reviews.com/reviews/2007/hardware/nvplus/nvplus1.htm

Here is a raid-5 NAS that has a specific upgrade feature involving swapping drives. Sure, this is risky since it's only a raid-5 set, but at least it is doable.

If pulling a single disk from an otherwise healthy raid corrupts the entire array, there is something wrong with your raid hardware or your implementation. Think about it - the raid controller would be failing in its primary purpose. With a properly functioning and maintained raid, the only risk to the data should be if the partition / file system is corrupted by something outside of the raid system entirely (i.e. the OS). The only exception is a parity+1 drive failure before a rebuild can complete.

Even in cases where failure occurs, data recovery is possible. I used to run a 2TB raid-5 and had a second drive fail during the attempted rebuild of a first failure. Recovery was still possible by imaging the failed drives and de-striping the array in software. Knowing this is possible means I, a home user, don't have to *always* have a 1:1 backup - I only have to buy 1:1 if an oddball failure occurs in order to have room for the recovery. Given the above, there might be cases where I'd be willing to do the 1 for 1 swap of drives for an upgrade.
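
(For anyone curious, de-striping in software is conceptually simple once you have full images of every member: you just re-interleave the data chunks and skip the parity. A rough Python sketch, assuming a Linux-md-style left-symmetric raid-5 layout and a 64KB chunk size - Areca's actual on-disk format may well differ, so treat this purely as an illustration:)

import os

CHUNK = 64 * 1024  # assumed stripe-unit size

def destripe_raid5(image_paths, out_path, chunk=CHUNK):
    # Rebuild the logical volume from full images of every member disk.
    n = len(image_paths)
    disks = [open(p, 'rb') for p in image_paths]
    rows = min(os.path.getsize(p) for p in image_paths) // chunk
    with open(out_path, 'wb') as out:
        for row in range(rows):
            parity_disk = (n - 1) - (row % n)      # left-symmetric parity rotation
            for j in range(n - 1):                 # n-1 data chunks per stripe row
                disk = (parity_disk + 1 + j) % n   # data starts after the parity disk and wraps
                disks[disk].seek(row * chunk)
                out.write(disks[disk].read(chunk))
    for d in disks:
        d.close()
    # A single missing member could instead be XOR-reconstructed from the
    # surviving images; omitted here to keep the sketch short.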

Your average home user does not have an equal number of drives available to do backups, and with current drive sizes, backup to tape/dvd is just not practical. So, this type of upgrading should be possible.

Aside from all of this - we like to tinker and push our hardware within its limits. If we find a limit that should not be there, we like to try and fix it. Hence this thread :)
 
Allyn, I think what you're not understanding about this situation is that a few of us - Ockie, myself, as well as a bunch of people on the 2cpu forums - have had issues with the Areca controllers. There's no reason to introduce any more complications by swapping out live drives. Sure, it should work, but that doesn't mean it will.
 
I'm with you, but 'pushing the tech' purely as an exercise and feeding what we find back to Areca couldn't hurt, could it? Judging by their frequent updates to these cards, I think they share our passion and would be receptive to input from the community.
 
I have done it a few times on SCSI arrays: after getting the last drive in, I just went into the controller's configuration and expanded the volume, and after that I expanded the partition.

I find it hard to believe that a 10-year-old SCSI raid card can do it, but a new Areca can't. It is just a matter of figuring out how to tell it to expand the volume (array) to use the available space after the last new drive is in.
 
after getting the last drive in, I just went into the controller's configuration and expanded the volume, and after that I expanded the partition.

The trick here is the extra layer Areca adds:
* Raid set
  - Volume set(s)
  - Partition(s)

The raid set idea is what limits the overall functionality of Areca cards. It is limited by the smallest drive across the board, meaning you can't have overlapping volume sets sharing drives. It also (as discovered earlier in this thread) adds another thing that must be expanded, and in this case it did not work as expected, even by Areca support.
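
(A rough way to picture that layering in Python terms - this is just my mental model of it, not Areca's actual firmware logic, and the names are made up:)

# Toy model of the Areca layering as I understand it - illustration only.
class RaidSet:
    def __init__(self, member_sizes_gb):
        self.members = member_sizes_gb
        # the whole set is sized off the smallest member, so mixed sizes waste space
        self.raw_gb = len(member_sizes_gb) * min(member_sizes_gb)
        self.volume_sets = []

    def free_gb(self):
        return self.raw_gb - sum(size for size, _level in self.volume_sets)

    def create_volume_set(self, size_gb, level):
        # every volume set must fit inside this one raid set; volumes can't
        # span raid sets or use the leftover space on larger members
        if size_gb > self.free_gb():
            raise ValueError("not enough free raid set capacity")
        self.volume_sets.append((size_gb, level))

rs = RaidSet([1000, 1000, 500])   # two 1TB drives plus one 500GB drive
print(rs.raw_gb)                  # 1500 - the upper halves of the 1TB drives can't be used
rs.create_volume_set(1000, "raid-5")
print(rs.free_gb())               # 500 left, usable only by more volume sets inside this raid set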
 
Dear Sir,

please follow the procedure below:
1. Log in to the browser management console: Raidset Functions > Rescue RaidSet
2. Enter the keyword "RESETCAPACITY Raid Set # 000", confirm and submit.
After that, the controller will reconfigure the raidset capacity.

I just *love* undocumented features. axan, you're up :). I'm curious as to exactly what happens on this one - i.e. does it do a full rebuild or not.

Thanks!
Al
 
Dear Sir,

please follow the procedure below:
1. Log in to the browser management console: Raidset Functions > Rescue RaidSet
2. Enter the keyword "RESETCAPACITY Raid Set # 000", confirm and submit.
After that, the controller will reconfigure the raidset capacity.

You've got to be kidding me - how the f is someone supposed to know to do that?

Anyway, it worked perfectly!

[screenshots: 10.jpg, 11.jpg, 12.jpg]

After that I modified the volume to use the entire raw space of the raid set:
[screenshot: 13.jpg]

And here's the raid set after the modification completed:
[screenshot: 14.jpg]
 
axan,

I found another crazy undocumented feature. If you're game for all-out tinkering, try this:

Power down and move the array to 4 different ports. Even try randomizing the order a bit. See if the card recognizes the raid or not. If not, here is another gem I found:

If the raid is not found, perform these commands:
"RESCUE"
reboot
"SIGNAT"
"LeVeL2ReScUe" (case sensitive) (!!!)
reboot (array should be back)
"SIGNAT"

I am about to move my raid from the 1230 to the 1260, so I may end up doing this one if the regular RESCUE/SIGNAT combination doesn't work.
 
axan,

I found another crazy undocumented feature. If you're game for all-out tinkering, try this:

Power down and move the array to 4 different ports. Even try randomizing the order a bit. See if the card recognizes the raid or not. If not, here is another gem I found:

[...]

I am about to move my raid from the 1230 to the 1260, so I may end up doing this one if the regular RESCUE/SIGNAT combination doesn't work.

To be honest, half the time you don't even need to move the drives around to get a split array with the Arecas.

"RESETCAPACITY Raid Set # 000" is good to know though, I hadn't found that until this thread, and I've been using the cards for years (1230, 1231MLx2, 1261ML).
 
Hello all,

I have an ARC-1220 with 3x 1.5TB Seagates in RAID 5 (I was copying from a 5-drive array and only have 8 ports). Everything works great. I've since added my 2 remaining Seagates for a total of 5, expanded the array, and modified the volume to use all 6000GB of usable space. Under Linux, however, parted only sees the original 3000GB. I need to expand this ext3 partition across the rest of the usable space, but trying to resize does not work because the other 3000GB aren't visible to parted. Thoughts?
 