mdadm software RAIDs showing faulty spare

FLATcura (Weaksauce) | Joined: Oct 20, 2006 | Messages: 96
So I have used 5 disks to create 3 arrays:

RAID 1 for /boot
RAID 1 for swap
RAID 5 for /

Now, a while back I tested the fault tolerance by shutting down the system and unplugging /dev/sdc, which gave me what I expected: a degraded but stable array. When I plugged it back in, I was surprised to find:

md0 : active raid1 sde1[4] sdd1[3] sdc1[5](F) sdb1[1] sda1[0]
200704 blocks [5/4] [UU_UU]

md1 : active raid1 sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
1052160 blocks [5/5] [UUUUU]

md2 : active raid5 sde3[4] sdd3[3] sdc3[5](F) sdb3[1] sda3[0]
3902026752 blocks level 5, 256k chunk, algorithm 2 [5/4] [UU_UU]
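A quick way to spot the bad members: the kernel tags a failed device with "(F)" in /proc/mdstat, so you can grep for it. A minimal sketch, run here against a saved sample line from the output above (on the live box you'd just do `grep '(F)' /proc/mdstat`):

```shell
# Save one line of the /proc/mdstat output above as a sample to grep against.
cat > mdstat.sample <<'EOF'
md2 : active raid5 sde3[4] sdd3[3] sdc3[5](F) sdb3[1] sda3[0]
EOF

# Pull out only the members flagged faulty, e.g. "sdc3[5](F)".
grep -oE '[a-z]+[0-9]+\[[0-9]+\]\(F\)' mdstat.sample
```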

The swap array rebuilt itself, but / and /boot did not, so I re-added them to the array; it rebuilt and all was happy.

I checked in on it yesterday and noticed the same two (/dev/md0 and /dev/md2) are in a degraded state, with the explanation being:
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 0 0 2 removed
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3

5 8 35 - faulty spare /dev/sdc3


Faulty spare? How is that even possible if the /dev/sdc disk is working just fine on its partition for the /dev/md1 array?

I tried to re-add the disk and it says it's busy. I figured I need to delete the partition and rebuild it to re-add it, but it still tells me it's busy, and I can't find the mdadm command to remove partitions.

The only thing I can think of is to drive out to the site, physically unplug the disk, format sdc, and recreate all 3 partitions, which suuucks.

Does anyone have any ideas, questions, or want to call me a noob? :p

thanks :)
 
You just need to remove the disk from the array, and you should be able to re-add it after.

And don't set up RAID like this...
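Something like the following should do it. This is a dry-run sketch using the device names from the thread: it echoes the mdadm commands instead of executing them, since the real thing needs root and the live arrays (drop the "echo" to run it for real). Zeroing the stale md superblock is what usually clears the "busy" error so the member can be re-added:

```shell
# Dry-run: print the remove / wipe / add sequence for one array member.
fix_member() {
  md="$1"; part="$2"
  # 1. Drop the faulty member from the array.
  echo mdadm "$md" --remove "$part"
  # 2. Wipe the stale md superblock so nothing claims the partition.
  echo mdadm --zero-superblock "$part"
  # 3. Add it back; the array resyncs onto it.
  echo mdadm "$md" --add "$part"
}

fix_member /dev/md0 /dev/sdc1
fix_member /dev/md2 /dev/sdc3
```

If the kernel still refuses to release the partition, `mdadm /dev/mdX --fail /dev/sdcN` before the `--remove` step marks it failed first.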
 