I had to destroy the ZFS file system "backup_appliance"; only then could all the file systems be remounted ;-)
-> Shares only on sub-filesystems
Thanks for the help with my problem. The tip about the cables was also worth its weight in gold. I hadn't thought about that at all.
Unfortunately the shares don't work :-/
When I cleanly export the pool and import it again, I get errors with the shares. At file level all the data is there, and the directory smallpool is empty during the export.
How can I fix this error?
root@aio-pod:~# zpool import smallpool
cannot mount...
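What I would try next (just a sketch, assuming the "cannot mount" comes from leftover files blocking the mountpoints after the export, which is a frequent cause of this message):

```shell
# Sketch, assuming the mount fails because the mountpoint directories
# are not empty after the export.
zpool export smallpool
ls -A /smallpool                 # anything left here blocks the remount
# move any stray files out of the way before importing again
zpool import smallpool
zfs mount -a                     # mount all file systems of the pool
zfs share -a                     # re-publish the NFS/SMB shares (illumos)
```

If the shares still don't come up, checking the `sharenfs`/`sharesmb` properties on the sub-filesystems with `zfs get -r sharenfs,sharesmb smallpool` would be my next step.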
It seems the cables were to blame. In my very compact NAS the cables were under a lot of tension. I have now rerouted them and let the zpool resilver. The metadata errors are gone.
Thanks for the tip about the cables!
root@aio-pod:~# zpool status -v smallpool
pool: smallpool
state: ONLINE...
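To be really sure the pool is clean, I'll also run a full scrub (a sketch; the pool name is from this thread):

```shell
# A scrub re-reads and verifies every block, so it should surface any
# remaining damage from the bad cabling.
zpool scrub smallpool
zpool status -v smallpool   # goal: "scrub repaired 0B ... with 0 errors"
```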
Good morning. I would now personally rule out the RAM: after more than 9 hours of testing, not a single error. That leaves the wiring (I think), the power supply, and the LSI.
The RAM is already ECC RAM on a Supermicro board (A1SAi-2750F). The power supply, yes, that could be it; I'll have to debug that separately and will report my findings. I wish you a quiet evening and a pleasant rest of the weekend.
Not to be misunderstood: the disks are all attached to the LSI SAS HBA. My idea was to complete the resilver by moving the disks into USB enclosures and passing them through to the VM. I'm aware the performance will be low; the idea was to save the pool. Unfortunately, I don't have a second...
It's driving me crazy. After spreading a whole 18 TB across x individual hard drives and USB sticks and copying almost everything back, it hangs again. I have never seen 4 resilvers on 5 disks before....
root@aio-pod:~# zpool status -v smallpool
pool: smallpool
state: DEGRADED
status...
I had already tried a disk replace once. A replacement disk is available and is already online in another slot. But how does that work if I can only mount the pool read-only? If I mount the pool normally, I immediately get the I/O error, and a zpool clear then "hangs".
-> "backup all important data from the...
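A way to at least get the data off first might be a read-only import (a sketch; the target path /rescue is made up and would have to be some other healthy storage):

```shell
# Read-only import avoids the writes that make the pool hang,
# so the data can be copied off before attempting any repair.
zpool export smallpool
zpool import -o readonly=on smallpool
rsync -a /smallpool/ /rescue/    # copy everything to fresh storage
```

A `zpool replace` needs a writable pool, so rescuing the data read-only and rebuilding afterwards may be the more realistic path here.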
root@aio-pod:~# zpool status -v smallpool
pool: smallpool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from...
I'm sorry to hijack the thread, but I think my topic fits here. After a power failure on my OmniOS NAS (151044), my pool only shows:
zfs: [ID 961531 kern.warning] WARNING: Pool 'smallpool' has encountered an incorrectable I/O failure and has been suspended; `zpool clear` will be required...
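If I read the message correctly, the next step would be something like this (a sketch, untested on my side):

```shell
# Lift the suspension and let ZFS retry the failed I/O.
zpool clear smallpool
zpool status -v smallpool    # check whether the pool resumes or shows errors
# If the pool will not come back at all, a rewind import can be attempted;
# it discards the most recent transactions:
zpool export smallpool
zpool import -F smallpool
```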