SLOW RAID5 Rebuild

Syntax Error

So I have an Adaptec 31605 and I've been doing experiments on how OCE and other nifty features work out, as it's a 16-port and expandability is an important feature for me in terms of uptime and general convenience (not having to find space to dump all my data and rebuild an array is a BIG plus).

However, when I do add on another drive (I use Adaptec's software, Adaptec Storage Manager), it's REALLY slow in rebuilding an array.

How slow? 11 days so far reshaping a 6-disk RAID5 into an 8-disk RAID6, and it's only at 58% :eek:. Windows' disk diagnostics show the disks being accessed at only around 680KB/s - 1MB/s, so something must be wrong; it can't be that slow to rebuild an array, can it?

Specs of my FS are in my sig. What could be the cause of this problem? :confused:
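For anyone curious, the arithmetic behind a finish estimate is just linear extrapolation (a rough sketch; it assumes the reconfiguration keeps a constant rate, which real controllers don't guarantee):

```python
# Rough ETA extrapolation for a reconfiguration that reports percent complete.
# Assumes the expansion proceeds at a roughly constant rate, which is optimistic.
def eta_days(elapsed_days: float, percent_done: float) -> float:
    """Return estimated total duration in days from progress so far."""
    return elapsed_days / (percent_done / 100.0)

total = eta_days(11, 58)      # ~19 days total
remaining = total - 11        # ~8 days still to go
print(f"total ≈ {total:.1f} days, remaining ≈ {remaining:.1f} days")
```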
 
Not necessarily wrong.
Adding a drive to an array is painfully slow and a bit too risky for my blood (which is why I bit my tongue and bought all my drives at once).

I have a 3ware 9650SE 8-port card and it does this procedure at about 4MiB/s, and that is DEAD slow.
A member of this board did the same on his new Adaptec 5xxx card, and we calculated it was exactly 4x faster, making it 16MiB/s.
That card uses the newest 1200MHz dual-core XOR processor, and although it runs way hot IMO, it's the only choice if you intend to expand a BIG array and spare your drives weeks of heavy usage doing recalculations.
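To put those rates in perspective, here's a back-of-envelope estimate (the 3TiB figure is illustrative, not measured from any particular array) of how long one pass over the data takes at each speed:

```python
# How long does one full pass over the existing data take at a given rate?
# Figures are illustrative, not measured from any specific controller.
def pass_hours(data_tib: float, rate_mib_s: float) -> float:
    """Hours to stream data_tib TiB of data at rate_mib_s MiB/s."""
    mib = data_tib * 1024 * 1024          # TiB -> MiB
    return mib / rate_mib_s / 3600

print(f"3 TiB at  4 MiB/s: {pass_hours(3, 4):6.1f} h")   # ~218 h, ~9 days
print(f"3 TiB at 16 MiB/s: {pass_hours(3, 16):6.1f} h")  # ~55 h, ~2.3 days
```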
 
That seems really slow... isn't there a setting for rebuild speed on the 9650s? I haven't paid much attention to Adaptec's controllers. The calculations shouldn't be much more complicated than a regular rebuild; expanding a software array doesn't seem any more CPU-intensive than rebuilding a degraded one.
 
20 days for OCE?! That's really long. It only took me about 30 hours to go from an 8 drive RAID 5 array to a 10 drive RAID 5 array with my Areca, and I even sustained a drive loss half-way through.
 
Which is why something might be wrong with the system, and I hope it isn't hardware related. The performance of the RAID array in normal use (not expanding) was pretty good; it's just that OCE takes a LONG time. :eek:

I suppose it's not that big of a deal, because I can't see myself adding too many drives at this point, and OCE is only done every now and then. It's just that if a rebuild after a drive failure took this long, the extended rebuild window would raise the odds of a second drive failing before it finishes, in which case I'd be screwed. :(
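That risk can be put in rough numbers. Under an exponential lifetime model with independent failures (a big simplification; real drive failures often correlate, and the 500,000-hour MTBF below is a typical vendor spec, not a measured figure), the chance of a second drive dying inside the rebuild window looks like this:

```python
import math

# Probability that at least one of the remaining drives fails during the
# rebuild window, under an exponential lifetime model with a given MTBF.
# The default MTBF is a typical vendor spec, not a measured value.
def second_failure_prob(drives_left: int, rebuild_hours: float,
                        mtbf_hours: float = 500_000) -> float:
    p_one = 1 - math.exp(-rebuild_hours / mtbf_hours)   # one drive fails
    return 1 - (1 - p_one) ** drives_left               # at least one fails

print(f"30 h rebuild: {second_failure_prob(5, 30):.4%}")
print(f"11 d rebuild: {second_failure_prob(5, 11 * 24):.4%}")
```

The absolute numbers stay small either way, but an 11-day window carries roughly 9x the risk of a 30-hour one, which is the real point.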
 
Can you change the priority for the expansion? My card has a few settings and having it at the highest priority helps a lot.
 
According to Adaptec Storage Manager, it's been on "High" priority for this whole time now:

OCE.jpg


Could disk stripe size be an issue here? I have my RAID5 array set up with a 64KB stripe size.
 
How slow? 11 days so far reshaping a 6-disk RAID5 into an 8-disk RAID6, and it's only at 58% :eek:. Windows' disk diagnostics show the disks being accessed at only around 680KB/s - 1MB/s, so something must be wrong; it can't be that slow to rebuild an array, can it?

Specs of my FS are in my sig. What could be the cause of this problem? :confused:

Did you fat-finger this? Is it from a 6-disk RAID-5 to an 8-disk RAID-5? Because you initially said it was from RAID-5 to RAID-6, in which case the time issue could be moot: the controller would not only be expanding the logical volume but also adding a second set of parity across the entire logical drive. That would make sense of both the time it's taking and the amount of data being transferred on/off disk (which you report as being very low), since as the data is moved to expand a RAID-6 set, the parity for the second parity stripe also has to be recalculated.

If this is from a 6-disk RAID-5 to an 8-disk RAID-5... then this is indeed very slow, and something doesn't sound right.
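The extra parity work is easy to see in miniature. RAID-5 keeps a single XOR parity block per stripe; RAID-6 adds a second, independently computed syndrome (real controllers compute Q with Galois-field math, which this toy XOR sketch doesn't attempt):

```python
from functools import reduce

# Toy RAID-5 parity for one stripe: P is the byte-wise XOR of all data blocks.
# A migration to RAID-6 must re-read every stripe and compute a second
# syndrome (Q) as well -- roughly doubling the parity work per stripe.
def raid5_parity(blocks: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = raid5_parity(data)
# Recover a lost block by XOR-ing the parity with the survivors:
recovered = raid5_parity([p, data[1], data[2]])
assert recovered == data[0]
print("P =", p.hex(), "| recovered block 0 =", recovered.hex())
```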
 
20 days for OCE?! That's really long. It only took me about 30 hours to go from an 8 drive RAID 5 array to a 10 drive RAID 5 array with my Areca, and I even sustained a drive loss half-way through.

Isn't the IOP on the latest generation of Areca cards like a zillion times faster with rebuilds and OCE than the previous generation? That could explain your case.
 
Did you fat-finger this? Is it from a 6-disk RAID-5 to an 8-disk RAID-5? Because you initially said it was from RAID-5 to RAID-6, in which case the time issue could be moot: the controller would not only be expanding the logical volume but also adding a second set of parity across the entire logical drive. That would make sense of both the time it's taking and the amount of data being transferred on/off disk (which you report as being very low), since as the data is moved to expand a RAID-6 set, the parity for the second parity stripe also has to be recalculated.

If this is from a 6-disk RAID-5 to an 8-disk RAID-5... then this is indeed very slow, and something doesn't sound right.

It does seem slow; the extra parity set could be the reason.
 
Isn't the IOP on the latest generation of Areca cards like a zillion times faster with rebuilds and OCE than the previous generation? That could explain your case.
I've used the older ones too, and they weren't nearly as slow.
 
Did you fat-finger this? Is it from a 6-disk RAID-5 to an 8-disk RAID-5? Because you initially said it was from RAID-5 to RAID-6, in which case the time issue could be moot: the controller would not only be expanding the logical volume but also adding a second set of parity across the entire logical drive. That would make sense of both the time it's taking and the amount of data being transferred on/off disk (which you report as being very low), since as the data is moved to expand a RAID-6 set, the parity for the second parity stripe also has to be recalculated.

If this is from a 6-disk RAID-5 to an 8-disk RAID-5... then this is indeed very slow, and something doesn't sound right.

Good catch. I thought he was going from raid 5 to raid 5 with two additional drives.
 
I am going from a 6-disk RAID5 to an 8-disk RAID6 as an experiment, to see whether I can change RAID levels, expand the array, and grow the partition without having to back up the data during the process (finding a place to dump 3TB of data is no fun :()

I suspect the RAID level change from RAID5 to RAID6 could explain why the process is taking so long. However, in an earlier experiment before this OCE, I expanded a 4-disk RAID5 to a 6-disk RAID5 and it took less time, though it still seemed a bit slow (1-2MB/s IIRC). What could be the cause of this?

Also, what I/O read and write speeds could we expect from an array like this? I know that's a somewhat subjective question, but I was getting around 200-220MB/s read and slightly less on write, and I was expecting my reads to be a bit faster than that. This was on an 8-disk RAID5 array that was built and verified (not quick-initialized) before I started all these RAID expansion experiments. :/
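As a very rough yardstick (the per-drive figure below is an assumption for late-2000s 500GB SATA drives, not a spec for any particular model), large sequential reads on RAID-5 can approach the combined rate of the data disks:

```python
# Back-of-envelope sequential read ceiling for RAID-5.
# per_disk_mb_s is an assumed figure for late-2000s 500GB SATA drives;
# real numbers depend on the drives, stripe size, and controller.
def raid5_seq_read(n_disks: int, per_disk_mb_s: float = 75.0) -> float:
    # Parity blocks rotate, so reads stripe across all disks, but a
    # conservative estimate counts only the n-1 data disks' worth.
    return (n_disks - 1) * per_disk_mb_s

print(f"8-disk RAID-5 read ceiling ≈ {raid5_seq_read(8):.0f} MB/s")
```

By that estimate an 8-disk array has considerably more raw disk bandwidth than 200-220MB/s, which would hint at the controller or bus being the limit rather than the spindles; treat it strictly as an upper bound, not a prediction.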
 
I'd say doing both OCE and a RAID migration at the same time is not the best way to do it. It'd probably be faster if you did one at a time.
 
I am going from a 6-disk RAID5 to an 8-disk RAID6 as an experiment, to see whether I can change RAID levels, expand the array, and grow the partition without having to back up the data during the process (finding a place to dump 3TB of data is no fun :()

I suspect the RAID level change from RAID5 to RAID6 could explain why the process is taking so long. However, in an earlier experiment before this OCE, I expanded a 4-disk RAID5 to a 6-disk RAID5 and it took less time, though it still seemed a bit slow (1-2MB/s IIRC). What could be the cause of this?

Also, what I/O read and write speeds could we expect from an array like this? I know that's a somewhat subjective question, but I was getting around 200-220MB/s read and slightly less on write, and I was expecting my reads to be a bit faster than that. This was on an 8-disk RAID5 array that was built and verified (not quick-initialized) before I started all these RAID expansion experiments. :/

As for the read/write and I/O performance, I can't really say; that's more a function of the array makeup than something I can guesstimate accurately, and I don't have these drives or this controller, so I'll leave that part out of my response. As for what you're seeing now versus what will be sustainable after the migration, the current speed is very likely so slow because the controller is moving small blocks of data from their previous location to a new location on the expanded logical array and recalculating at least one set of parity, perhaps both (depending on how your controller handles the RAID level migration and expansion).

I do think Lazn_Work is likely correct: you probably would have been better off expanding the array first and then migrating the RAID level. Doing it the way you did forces everything to be moved and all the parity calculations to be redone.

Ockie - see? sometimes I can read!
 
Well if you are doing both of those functions, I would say you are going to wait a long, long, long time.
 
Sounds like a reasonable explanation to my slow OCE/RAID migration. Thanks, guys.

Next time I'll do 'em separately depending on my needs. :)
 
I'm having a similar issue with my 51645. I've added one 500GB disk to my RAID 5 array (six disks to seven, still RAID 5), and it's been running for 6 hours and is only 2 percent reconfigured. Is this too slow, or to be expected?
 
Just thought I would bump this thread... I previously had an Adaptec 4805SAS, bought a 51245, and the array was instantly recognized. I threw some empty drives into my hot-swap bays and tried an OCE from a 4-drive RAID5 to a 6-drive RAID5 (thinking the process would be quick given the faster processor), and here I am, 6 hours later, only 3% into the process (priority is also "high" for me). If I'm calculating correctly, I have ~8 days remaining! :(
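Sanity-checking that remaining-time figure with straight-line extrapolation (assumes the rate stays constant):

```python
# "3% after 6 hours", extrapolated linearly to a total and a remainder.
elapsed_h, done = 6.0, 0.03
total_h = elapsed_h / done                        # 200 h total
remaining_days = (total_h - elapsed_h) / 24
print(f"remaining ≈ {remaining_days:.1f} days")   # ≈ 8.1 days
```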

Blue Fox makes me feel bad about my purchase. Maybe it's a bug?
 
Well, I decided to ask them, and it would appear it is just that slow :(

I currently have a RAID 5 setup with six 500GB Western Digital RE2 drives. I've added a 7th to the array and have begun the reshaping/reconfiguring process. It seems to be going rather slowly, as it is only 39 percent complete after running for 48 hours. Is this the standard speed for a reshape? Is there a way to run the reconfiguration in offline mode, so that it completes more quickly?
----------------------
Greetings from Adaptec,

A reconfiguration of an array is a slow process, as the controller not only has to perform its normal work, but also has to move all the data around and recalculate parity so it can add the new drive to the array. This operation can only be performed while the system is up in the O/S, so if the system is busy with user or operating system requests, the process can be considerably slower.

We do not recommend stopping a reconfiguration operation. In your case, it appears to be moving at a pretty good pace. We have seen systems take in excess of several weeks to complete a reconfiguration, so if yours is going to take about 50-60 hrs, this would be considered normal.

Thank you for contacting Adaptec Technical Support
 
I sure didn't. My software RAID took maybe a day or two to do a complete resizing (yes, it had to be offline, but there wasn't this long period of "is it going to work?"). Thankfully, I have a UPS backup; otherwise I would be very worried about losing data during the reshape. It ended up taking about a week for the reshape to finish :( .
 
In my case, it took about 10-12 days to OCE from a 6-disk RAID5 array to an 8-disk RAID6 array. I'm aware of the load that a RAID level change entails, as well as OCE'ing two disks at once, but 10-12 days to complete such an operation is kind of ridiculous. Thank God the array was still accessible (in an impaired state, however), as this length of time to complete an OCE would otherwise be problematic.

Fortunately, OCE is fairly rare, only needed when we get a new hard disk to add to an array, but I can foresee the very long OCE becoming an annoyance when I decide to expand my array in the future. :mad:
 
If only I could just shut my computer down for two days to do this (instead of having it online for 8)...

The crappy thing is the fact that I have two more drives I want to add AFTER this expansion. I figured it would be quick so I only did two drives initially.

They should put a time warning on the website!

edit - I wanted to point out something about the worry over losing your data if something goes wrong during the migration... Adaptec seems to have this covered: I can actually shut my computer down and do whatever during the process, and it will resume where it left off on the next boot (I actually did this when I didn't believe it was going to take 8 days :D). Cool stuff.
 