ZFSonLinux 0.6.1 Released

The detach started a resilver again. I did not notice that until I tried a replace, which failed again with busy devices.

Code:
datastore4 log # zpool status
  pool: zfs_test
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Apr 16 15:16:36 2013
    609M scanned out of 68.8G at 87.0M/s, 0h13m to go
    75.4M resilvered, 0.86% done
config:

        NAME                                                   STATE     READ WRITE CKSUM
        zfs_test                                               DEGRADED     0     0     0
          raidz2-0                                             DEGRADED     0     0     0
            ata-ST3500413AS_5VMR9FEJ-part5                     ONLINE       0     0     0
            ata-ST3500320AS_5QM2XCX9-part5                     ONLINE       0     0     0
            ata-ST3500320AS_9QM1X8B3-part5                     ONLINE       0     0     0
            ata-ST3500320AS_9QM23DTQ-part5                     ONLINE       0     0     0
            ata-ST3500413AS_9VMCXG9C-part5                     ONLINE       0     0     0
            spare-5                                            UNAVAIL      0     0     0
              ata-WDC_WD1001FALS-00J7B0_WD-WMATV1120865-part5  UNAVAIL      0     0     0
              ata-ST3500413AS_6VMPF5ZH-part5                   ONLINE       0     0     0  (resilvering)
            ata-ST3500320AS_5QM0LW3A-part5                     ONLINE       0     0     0
            ata-ST3500413AS_Z2AKZ3NX-part5                     ONLINE       0     0     0
        spares
          ata-ST3500413AS_6VMPF5ZH-part5                       INUSE     currently in use

errors: No known data errors
 
Everything is as it should be with ZFS:

- You cannot remove a disk from a RAID-Z
- You cannot replace active hot spares

What you can do:
Remove the faulted disk and plug in a new one (depending on OS, controller and settings,
it either replaces the faulted disk automatically or you must do a disk replace faulted -> new).

After that the hot spare is available again (a hot spare stays a hot spare, even when in use).
If you just want to replace the faulted disk with the hot spare: remove the hot spare designation (see the sketch below).
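
A rough sketch of both paths (the <faulted-disk>, <new-disk> and <spare-disk> names are placeholders, not devices from this pool; substitute the /dev/disk/by-id names shown in zpool status):

Code:
# Path 1: replace the faulted disk with a fresh one; once the resilver
# finishes, the hot spare detaches itself and returns to the spares list
zpool replace zfs_test <faulted-disk> <new-disk>

# Path 2: keep the hot spare in place permanently by detaching the
# faulted disk, which promotes the spare into the raidz2 vdev
zpool detach zfs_test <faulted-disk>

# if the promoted disk still shows up under "spares" afterwards,
# drop its spare designation
zpool remove zfs_test <spare-disk>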
 
I got the array back online. The solution was to detach the missing drive.

Code:
datastore4 log # zpool detach zfs_test ata-WDC_WD1001FALS-00J7B0_WD-WMATV1120865-part5

datastore4 log # zpool status
  pool: zfs_test
 state: ONLINE
  scan: resilvered 8.60G in 0h11m with 0 errors on Tue Apr 16 15:27:45 2013
config:

        NAME                                STATE     READ WRITE CKSUM
        zfs_test                            ONLINE       0     0     0
          raidz2-0                          ONLINE       0     0     0
            ata-ST3500413AS_5VMR9FEJ-part5  ONLINE       0     0     0
            ata-ST3500320AS_5QM2XCX9-part5  ONLINE       0     0     0
            ata-ST3500320AS_9QM1X8B3-part5  ONLINE       0     0     0
            ata-ST3500320AS_9QM23DTQ-part5  ONLINE       0     0     0
            ata-ST3500413AS_9VMCXG9C-part5  ONLINE       0     0     0
            ata-ST3500413AS_6VMPF5ZH-part5  ONLINE       0     0     0
            ata-ST3500320AS_5QM0LW3A-part5  ONLINE       0     0     0
            ata-ST3500413AS_Z2AKZ3NX-part5  ONLINE       0     0     0

errors: No known data errors
datastore4 log #
 