RAID 6 disk swap to increase capacity

ZoSo

Limp Gawd
Joined
Jun 5, 2004
Messages
236
I'm currently running 8 500GB drives in RAID 6 and would like to swap all the disks for 1TB or larger ones.
My question is: can I just clone/ghost each 500GB drive to a larger disk, or should I replace each one in the RAID and let it rebuild? What would be the proper procedure, without having to back everything up and create a new RAID?
I'm running an Adaptec RAID 6805.

TIA
 
Your best bet is to just create a new array with the 1TB disks (you need the requisite additional ports, of course). You could instead pull one drive at a time (8 times), let the array rebuild (8 times), and then expand the raidset and volume. I would sooner buy a new card, an additional card, or an expander for the extra ports than do it that way, if I had the opportunity.
 
RAID 6 can survive swapping 2 disks at a time, so it's only 4 reps if the OP has to swap out.
 
RAID 6 can survive swapping 2 disks at a time, so it's only 4 reps if the OP has to swap out.

I would avoid deliberately degrading a raid 6 array by 2 disks unless you have a backup.
 
What would be the proper procedure, without having to back everything up and create a new RAID?

Umm... surely the best way is to back up, remove the 500 GB HDDs, insert the new HDDs, create a new RAID, then restore from your backup.

Indeed, since you've a max of 3TB of data, you could back it all up onto one drive.
 
Your best bet is to just create a new array with the 1TB disks (you need the requisite additional ports, of course). You could instead pull one drive at a time (8 times), let the array rebuild (8 times), and then expand the raidset and volume. I would sooner buy a new card, an additional card, or an expander for the extra ports than do it that way, if I had the opportunity.

I have successfully done this procedure at least half a dozen times at work (with arrays of up to 12 disks) on Linux software RAID, although I do have backups on my tape archive. Since mdadm now has the ability to replace a drive without rebuilding the whole array, it will be faster next time.
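
A rough sketch of that replace operation (the array and device names here are hypothetical placeholders, so adjust for your setup):

    # Add the new disk to the array as a spare first
    mdadm /dev/md0 --add /dev/sdh1
    # Copy the old member's data straight onto the spare, keeping full
    # redundancy the whole time, then drop the old member when done
    mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdh1
    cat /proc/mdstat    # watch the copy progress

Unlike fail-then-add, --replace never degrades the array, which is why it is both faster and safer.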
 
I'm currently running 8 500GB drives in RAID 6 and would like to swap all the disks for 1TB or larger ones.
My question is: can I just clone/ghost each 500GB drive to a larger disk, or should I replace each one in the RAID and let it rebuild? What would be the proper procedure, without having to back everything up and create a new RAID?
I'm running an Adaptec RAID 6805.

TIA

I would use Linux dd to make a bit-for-bit copy of each disk, one by one, on some other SATA ports. Hopefully the Adaptec RAID card does not record the serial number in the RAID metadata.
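
Per disk it would be something like this (sdX/sdY are placeholders; double-check them with lsblk first, since dd will happily overwrite the wrong disk):

    # Clone the old 500GB member onto the new, larger disk, bit for bit
    dd if=/dev/sdX of=/dev/sdY bs=1M status=progress conv=fsync

Note the clone only covers the first 500GB of each new disk; the extra capacity sits unused until the array itself is expanded.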
 
But if something goes wrong, can't you just put the drives you swapped out back in, since they are still perfectly working?
Not easily, unless the RAID array was read-only.
 
But if something goes wrong, can't you just put the drives you swapped out back in, since they are still perfectly working?

Also keep in mind the Adaptec firmware is nowhere near as forgiving of problems as an Areca or even an LSI, nor does it have the recovery options (other than a straight rebuild) of the Areca. I personally got rid of all our Adaptec cards a few years ago because of issues.
 
Also keep in mind the Adaptec firmware is nowhere near as forgiving of problems as an Areca or even an LSI, nor does it have the recovery options (other than a straight rebuild) of the Areca. I personally got rid of all our Adaptec cards a few years ago because of issues.

My experience with actual RAID is limited to mdadm. I know that what I suggested would be easy to do in mdadm as I've done it before.

I always just assumed that if a basic free software RAID system could do it, it would be trivial for an expensive hardware RAID controller to do as well.

Is performance really so much better with a hardware RAID controller that it's worth losing what seems like simple, useful functionality, like being able to add drives back in?

Sorry to change the topic a bit.

My last array consisted of 5x2TB drives in RAID 5 and took only 12 hours to create or rebuild, even on a measly 1.6GHz Intel Atom with the CPU at <50% usage the whole time. It could also sustain about 180MB/s read/write.

It hardly seems worth shelling out for a hardware controller, especially if you also lose some functionality. I know that when I was upgrading my array, being able to put a drive back into the array saved my data.
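
For anyone curious, an array like that is a one-liner under mdadm (device names hypothetical):

    # 5-drive RAID 5; the initial sync then runs in the background
    mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    cat /proc/mdstat    # check build progress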
 
I always just assumed that if a basic free software RAID system could do it, it would be trivial for an expensive hardware RAID controller to do as well.

HW RAID does not always allow its users to perform potentially dangerous operations, even when those operations could have saved an array. This will vary from vendor to vendor.

It hardly seems worth shelling out for a hardware controller, especially if you also lose some functionality. I know that when I was upgrading my array, being able to put a drive back into the array saved my data.

On Linux or most non-Microsoft operating systems I have to agree with you, but Windows users do not have any good software RAID 5/6 implementations.
 
HW RAID does not always allow its users to perform potentially dangerous operations, even when those operations could have saved an array. This will vary from vendor to vendor.



On Linux or most non-Microsoft operating systems I have to agree with you, but Windows users do not have any good software RAID implementations.

Ah, good information. Yeah, I do realize Windows does not have very good software RAID (performance especially). So a business would probably still want a hardware RAID solution if it ran Windows servers.
 
Not easily, unless the RAID array was read-only.

This is what I did.

I had most of my important data in a huge 18 TB partition. I upgraded from 1TB to 2TB drives. The OS stuff I didn't care so much about, so I remounted my huge filesystem as read-only (so I could still access all my data) and then replaced each drive one by one. After a week I finally had all 20 disks replaced (the array rebuilt 20 times). I did md5sums on a bunch of files too, just to be careful, but it was worth keeping it read-only to ensure I could go back to the original drives without corruption if I needed to (for my large filesystem).
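
The read-only part is just a remount plus some spot checks, roughly like this (the mount point and file path are placeholders):

    # Freeze the filesystem so the old members stay consistent
    mount -o remount,ro /mnt/bigarray
    # Spot-check a few files before and after each swap
    md5sum /mnt/bigarray/some/important/file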

Since I didn't have another 24-port Areca controller around, I then used the old disks with an IT-mode controller on OpenSolaris and backed up my entire big array to it so I could re-partition/format. I did not trust that filesystem expansion would work correctly @ 18 TB, and I am glad I did not: I later tried going from 36 -> 44 TB and got corruption (I had the data backed up first).
 
Ah, good information. Yeah, I do realize Windows does not have very good software RAID (performance especially). So a business would probably still want a hardware RAID solution if it ran Windows servers.

Hardware RAID will almost always be faster and allow for higher IOPS than software RAID can, even mdadm.

mdadm is absolutely great and is one of, if not the, best software RAID implementations available, and when compatibility across controllers is necessary, it really comes in handy.
However, for anything beyond single-user or small multi-user scenarios, i.e. many-user enterprise environments, hardware RAID is far preferred and in many cases necessary, even when Linux is used, not just Windows.
 
You only have 3TB of data; that's so easy to back up. It would be faster and safer to back up the data and build a new array.
 
Thanks everyone for the helpful information. This time I can do a backup and rebuild. I can also replace one drive at a time; Adaptec Storage Manager can handle that and the expansions. I'll probably do both just to familiarize myself with ASM.
 
Thanks everyone for the helpful information. This time I can do a backup and rebuild. I can also replace one drive at a time; Adaptec Storage Manager can handle that and the expansions. I'll probably do both just to familiarize myself with ASM.

You're seriously going to suffer through 8 rebuilds to migrate 3TB of data? And by the way, each time you add a larger drive to an array of smaller drives, it doesn't automatically increase usable space: in this instance it will only use the first 500GB of each larger drive. That means you won't be able to convert the extra space on the larger drives into usable space until all the smaller drives have been replaced, at the end of 8 rebuilds, and then you'll likely need a 9th rebuild process to expand the volume into the free space of the raidset, assuming the Adaptec supports it. If it doesn't, and continues to insist on treating the 1TB+ drives as 500GB array members, you'll be back to square one.

Just not a smart way to go, like taking 9 trips to the store to buy one gallon of milk. There are better ways to familiarize yourself with ASM.
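
For comparison, that final expansion under Linux mdadm is just two steps (md0 and the ext4 filesystem are assumptions for illustration); on hardware RAID the same two logical steps exist, just hidden behind the management GUI:

    # Once every member is a bigger drive, grow the array to the new size
    mdadm --grow /dev/md0 --size=max
    # Then grow the filesystem to fill the enlarged array (ext4 shown)
    resize2fs /dev/md0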
 
Since I didn't have another 24-port Areca controller around, I then used the old disks with an IT-mode controller on OpenSolaris and backed up my entire big array to it so I could re-partition/format. I did not trust that filesystem expansion would work correctly @ 18 TB, and I am glad I did not: I later tried going from 36 -> 44 TB and got corruption (I had the data backed up first).

Which filesystem corrupted going from 36 to 44TB?
 
This is what I did.

I had most of my important data in a huge 18 TB partition. I upgraded from 1TB to 2TB drives. The OS stuff I didn't care so much about, so I remounted my huge filesystem as read-only (so I could still access all my data) and then replaced each drive one by one. After a week I finally had all 20 disks replaced (the array rebuilt 20 times). I did md5sums on a bunch of files too, just to be careful, but it was worth keeping it read-only to ensure I could go back to the original drives without corruption if I needed to (for my large filesystem).

Since I didn't have another 24-port Areca controller around, I then used the old disks with an IT-mode controller on OpenSolaris and backed up my entire big array to it so I could re-partition/format. I did not trust that filesystem expansion would work correctly @ 18 TB, and I am glad I did not: I later tried going from 36 -> 44 TB and got corruption (I had the data backed up first).

So the only reason you did all that was because of the Areca controller, right? If the drives hadn't been "marked", you could have just put the 1TB drives on the other controller, created the new array with the 2TB drives, and copied the data?
 