I seem to have made a mess of my ZFS pool
I noticed I had a failed disk with the message "too many errors", so I tried to get the pool to rebuild, without much luck. I ordered a new disk and inserted it into the same slot, but now another disk appears to be reporting errors (see below).
admin@SAN:~$ zpool status
  pool: data
 state: DEGRADED
  scan: scrub repaired 0 in 1h58m with 0 errors on Wed Jan 30 02:43:31 2013
config:

        NAME                             STATE     READ WRITE CKSUM
        data                             DEGRADED     0     0     0
          raidz2-0                       DEGRADED     0     0     0
            c3t5000C5004E334744d0        ONLINE       0     0     0
            c3t5000C5004E44CC0Ed0        ONLINE       0     0     0
            c3t5000C5004E48FC47d0        ONLINE       0     0     0
            c3t5000C5004E55FDCEd0        ONLINE       0     0     0
            replacing-4                  UNAVAIL      0     0     0  insufficient replicas
              c3t5000C5004E5606B4d0/old  OFFLINE      0     0     0
              c3t5000C5004E5606B4d0      FAULTED      0     0     0  too many errors
            c3t5000C5004E7664F5d0        ONLINE       0     0     0
            c3t5000C5004E771B9Dd0        ONLINE       0     0     0
            c3t5000C500537895ABd0        ONLINE       0     0     0
        cache
          c3t5001517387E8CC5Ed0          ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0

errors: No known data errors
So before I go running more commands, I figured I had better get some advice from more experienced ZFS users. Would anyone care to chime in on what I need to do to bring my pool back to good health?
The new drive is inserted into the slot that the failed drive occupied.
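From what I've read so far, my rough plan would look something like the commands below, but I haven't run any of it and would rather someone confirm first, since the raidz2 already has a stuck replace in it. Note the device names are taken from my zpool status output above, except the new drive's device name, which is a placeholder because I don't actually know yet what the system enumerated the new disk as:

```shell
# My untested plan -- please correct me if this is wrong.

# 1. Detach the FAULTED target of the stuck replace from the
#    replacing-4 vdev, hopefully collapsing it back to the old device:
zpool detach data c3t5000C5004E5606B4d0

# 2. Check what device name the newly inserted drive got
#    (placeholder step; I don't know the name yet):
format

# 3. Start the replace again onto the new drive. <NEW_DISK> is a
#    placeholder for whatever step 2 reports:
zpool replace data c3t5000C5004E5606B4d0 <NEW_DISK>

# 4. Watch the resilver progress:
zpool status -v data
```

Does that sequence make sense, or is there a safer order of operations given that a second disk is now showing errors?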