Code:
nas4free: ~ # zpool status -v fastpool1
  pool: fastpool1
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub canceled on Fri Feb 20 10:17:17 2015
config:

        NAME        STATE     READ WRITE CKSUM
        fastpool1   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     9
            da0     ONLINE       0     0    80
            da1     ONLINE       0     0    72
          mirror-1  ONLINE       0     0     5
            da2     ONLINE       0     0   105
            da3     ONLINE       0     0    73
          mirror-2  ONLINE       0     0     7
            da4     ONLINE       0     0    68
            da5     ONLINE       0     0    67
          mirror-3  ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da7     ONLINE       0     0     0

errors: No known data errors
I have 4 mirrored pairs of 256GB SSDs in a pool I created. A bad power outage (a tripped breaker) hit while data was being written to and read from the pool. Scrubbing the pool takes about two hours, and these errors are returned each time. Do I need to destroy the pool and re-create it? Do these errors indicate integrity problems with the disks themselves?
There is a 600GB ZFS volume on the pool, exported as an iSCSI LUN to an ESXi box.
vSphere refused to see the datastores on the LUN, so I mounted them manually via shell access on the ESXi host. I was able to power on two virtual machines and they appear to be working fine, but I don't want to proceed with my project if I should just rebuild the array. The only important thing on the volume is the domain controller VM, but I can recreate it easily in an hour or so since I never promoted it to be an FSMO role holder...
How can I further check the integrity of the data? If I clear the errors and run a new scrub, the errors come back.
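For reference, the clear-and-scrub cycle I keep repeating looks like this; the last command is a SMART check on one of the mirror members, assuming smartmontools is available on the box (da0 as shown in the status output above):

Code:
nas4free: ~ # zpool clear fastpool1
nas4free: ~ # zpool scrub fastpool1
nas4free: ~ # zpool status -v fastpool1
nas4free: ~ # smartctl -a /dev/da0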