RAID 1 quick questions.

nightfly

Can I take one of the disks out and put it in another computer, then add a blank disk to each machine so that each array rebuilds onto its new (initially blank) disk, leaving me with basically the same system on both? Also, can I take one disk out of the RAID 1 setup and use it by itself as a non-RAID system disk, or does it have to stay in a RAID configuration?
 
If the OS is Linux, yes. If not, maybe. It will depend on the RAID implementation.
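
For reference, with Linux software RAID (mdadm) that whole dance looks roughly like the sketch below. The device names (/dev/md0, /dev/sdb1, /dev/sdc1) are only placeholders for whatever your setup actually uses.

Code:
# On the second computer, assemble a degraded array from the single
# mirror member pulled out of the first box:
mdadm --assemble --run /dev/md0 /dev/sdb1

# Add a blank partition of at least the same size; mdadm copies the
# data over and the mirror is whole again:
mdadm --add /dev/md0 /dev/sdc1

# Watch the rebuild progress:
cat /proc/mdstat

Whether a single member of a motherboard "fake RAID" will boot on its own depends on the controller and its metadata format, which is why the answer above is only a "maybe" outside of Linux.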
 
I'm using Windows 7 on an ASUS Maximus IV Gene-Z motherboard, with Crucial m4 128 GB SSDs as the system drives. Should have stated that, sorry. Hope that changes the 'maybe' to a better answer.
 
So can one of the RAID 1 drives be pulled and used as a single drive if the other one fails and you don't have a replacement on hand (I'm assuming the BIOS will have to stay set to RAID 1 until another drive is obtained)? I know this probably sounds pretty basic to someone who already knows, but this is my first go-round with RAID setups other than my Netgear NAS, which basically takes care of itself.
 
O.K., trial and error. I'll test everything out and post the results. Good reason to stay home and watch basketball, drink, and waste an afternoon away.
 
To be honest, even if you had gotten an answer, in your shoes I would still rather test it myself, at least if you expect a RAID 1 to do what it should: keep the computer running when one drive fails.

I'm currently experimenting with ZFS and intentionally disconnecting drives and such to see what happens.
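
The drill looks something like this with a two-way mirror; the pool and device names (tank, /dev/sdb, /dev/sdc, /dev/sdd) are made up for the example.

Code:
zpool create tank mirror /dev/sdb /dev/sdc   # two-way mirror
zpool offline tank /dev/sdc                  # simulate a dead drive
zpool status tank                            # pool keeps running, but DEGRADED
zpool replace tank /dev/sdc /dev/sdd         # swap in a new disk
zpool status tank                            # watch the resilver finish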
 
To be honest, even if you had gotten an answer, in your shoes I would still rather test it myself, at least if you expect a RAID 1 to do what it should: keep the computer running when one drive fails.

Agreed. At work I have test boxes that I use to exercise my RAIDs several times a year, to verify that under current conditions I would actually be able to recover if a disaster happened. I trust that kind of testing a lot more than documentation, whether from a company or an open-source project. BTW, during last year's round of testing I found a Linux kernel bug (a deadlock after hot removal of devices), bisected it down to the exact commit that caused it, and got my name mentioned in the kernel changelog for it.
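
One way to exercise that kind of hot removal on a Linux test box is via sysfs, something like the lines below. The device and host names are placeholders, and this should obviously never be pointed at a production array.

Code:
echo 1 > /sys/block/sdb/device/delete            # drop the disk at the SCSI layer
cat /proc/mdstat                                 # md array should now show as degraded
echo "- - -" > /sys/class/scsi_host/host0/scan   # rescan the bus to bring the disk back
mdadm --re-add /dev/md0 /dev/sdb1                # or --add if --re-add is refused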

I'm currently experimenting with ZFS and intentionally disconnecting drives and such to see what happens.

In a brief test of zfsonlinux I accidentally tested something like that, and I can say ZFS performed wonderfully. What happened is that I originally had an mdadm array on 8 disks, which I reused to create a ZFS raidz2. The problem was that I forgot to zero some of the md superblocks (stored at the end of each disk), so after a reboot mdadm tried to assemble the old array. The assembly failed, but a few days later I noticed the "missing" disks in the mdadm array and, forgetting that they now belonged to the ZFS pool, re-added them and got mdadm to start rebuilding. A few minutes later, when there were thousands of mismatches, I figured out something was very wrong. I stopped the mdadm array, zeroed the superblocks, rebooted, and had ZFS fix the mess. For the most part the ZFS array came back fine, and after the resilver was done ZFS listed the 10 or so files that were corrupted (no loss here; this was a test box). I was happy with how well it handled the recovery.
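
In command form the cleanup was roughly the following; the pool name and the /dev/sd[b-i] wildcard (8 disks) are just examples, not the real names.

Code:
mdadm --stop /dev/md0                 # stop the accidentally assembled array
for d in /dev/sd[b-i]; do
    mdadm --zero-superblock "$d"      # wipe the stale md superblocks
done
zpool scrub tank                      # let ZFS verify and repair the pool
zpool status -v tank                  # lists any files it could not repair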
 