Recent content by Photographix

  1. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I had to delete the ZFS file system "backup_appliance"; only then could all file systems be remounted ;-) -> shares only on sub-filesystems. Thanks for the help with my problem. The tip about the cables was also worth its weight in gold; I hadn't thought of that at all.
  2. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Unfortunately the shares do not work :-/ When I export the pool cleanly and import it again, I get errors with the shares. At file level, all the data is there. The directory smallpool is also empty during the export. How can I fix this error? root@aio-pod:~# zpool import smallpool cannot mount...
  3. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    It seems the cables were to blame. In my very compact NAS the cables were under a lot of tension; I have now rerouted them and had the zpool resilver. The metadata errors are gone. Thanks for the tip about the cables! root@aio-pod:~# zpool status -v smallpool pool: smallpool state: ONLINE...
  4. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Good morning. I would now personally rule out the RAM: after more than 9 hours of testing, no errors. That leaves the cabling (I think), the power supply, and the LSI HBA.
  5. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Yes, I have. The event log does not contain anything relevant, except that I mistype my login from time to time. No ECC errors.
  6. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    The RAM is already ECC RAM on a Supermicro board (A1SAi-2750F). The power supply, yes, that could be it; I will have to rule that out separately and will report my findings. I wish you a quiet evening and a pleasant rest of the weekend.
  7. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Not to be misunderstood: the disks are all attached to the LSI SAS HBA. My idea was to finish the resilvering by moving the disks into USB enclosures and passing them through to the VM. I am aware the performance would then be low; the idea was to save the pool. Unfortunately, I don't have a second...
  8. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    It's driving me crazy. After spreading a whole 18 TB across x individual hard drives and USB sticks and uploading it again almost completely, it hangs again. I have never seen 4 resilvers on 5 disks before... root@aio-pod:~# zpool status -v smallpool pool: smallpool state: DEGRADED status...
  9. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I had already tried a disk replace once. A replacement disk is available and is already online in another slot. But how does this work if I can only mount the pool read-only? If I mount the pool normally, I immediately get the I/O error, and a zpool clear then "hangs". -> "backup all important data from the...
  10. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    root@aio-pod:~# zpool status -v smallpool
      pool: smallpool
     state: ONLINE
    status: One or more devices has experienced an error resulting in data
            corruption. Applications may be affected.
    action: Restore the file in question if possible. Otherwise restore the
            entire pool from...
  11. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I'm sorry to hijack the thread, but I think my topic fits here. After a power failure on my OmniOS NAS (151044), my pool only shows: zfs: [ID 961531 kern.warning] WARNING: Pool 'smallpool' has encountered an uncorrectable I/O failure and has been suspended; `zpool clear` will be required...
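
For item 2 above (shares failing after an export/import cycle), a rough command sketch, assuming the pool is named smallpool and the shares are SMB shares driven by the ZFS sharesmb property; "smallpool/data" is a placeholder file system name, not taken from the posts:

```shell
# Export the pool cleanly, then re-import it.
zpool export smallpool
zpool import smallpool

# Check which file systems carry an SMB share property; shares
# defined on sub-filesystems only take effect if those file
# systems actually remount.
zfs get -r sharesmb smallpool

# If a file system did not remount, mounting it by hand shows
# the underlying error (e.g. "directory is not empty").
zfs mount smallpool/data
```

These commands need a live pool, so they are only a sketch of the procedure the thread describes, not something that can be replayed elsewhere as-is.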
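
For item 9 (pool only mountable read-only), one common rescue path is to import read-only, copy the data off, and only then attempt repairs; a sketch, with /backup as a placeholder target path:

```shell
# Import the pool read-only; no resilver or clear is possible
# in this state, but the data can be read.
zpool import -o readonly=on smallpool

# Copy everything important off the pool first.
rsync -a /smallpool/ /backup/smallpool/

# Only afterwards export, re-import read-write, and attempt
# "zpool clear" or the disk replace.
zpool export smallpool
```

This matches the "backup all important data" advice quoted in that post: a read-only import sidesteps the I/O error that a normal import triggers.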
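
For item 11 (pool suspended after a power failure), the kernel warning itself names the first step; a sketch of the usual sequence, assuming the pool survives a clear:

```shell
# Resume I/O on the suspended pool (this is what the kernel
# warning asks for).
zpool clear smallpool

# If the pool cannot be cleared, a recovery-mode import may
# help; -F rolls back to the last consistent transaction
# group, discarding the last few seconds of writes.
zpool export smallpool
zpool import -F smallpool

# Check the result.
zpool status -v smallpool
```

The -F import is a last resort and loses the most recent writes, which is why the thread keeps stressing backups first.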