
OpenZFS NAS (BSD, Illumos, Linux, OSX, Solaris, Windows + Storage Spaces) with napp-it web-gui

I'm assuming it's something in zpool or zfs list that triggers it, since the hard error counter has climbed to 6184 since I last posted. The last reboot was 312 days ago.
Code:
Apr 14 2024 23:43:18.860040354 ereport.io.scsi.cmd.disk.dev.rqs.derr
nvlist version: 0
    class = ereport.io.scsi.cmd.disk.dev.rqs.derr
    ena = 0x6558c120600001
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = dev
        device-path = /pci@0,0/pci15ad,1976@10/sd@0,0
        devid = id1,sd@n6000c297cd07f630fd9b45917dcb6a04
    (end detector)

    devid = id1,sd@n6000c297cd07f630fd9b45917dcb6a04
    driver-assessment = fail
    op-code = 0x1a
    cdb = 0x1a 0x0 0x48 0x0 0x20 0x0
    pkt-reason = 0x0
    pkt-state = 0x37
    pkt-stats = 0x0
    stat-code = 0x2
    key = 0x5
    asc = 0x24
    ascq = 0x0
    sense-data = 0x70 0x0 0x5 0x0 0x0 0x0 0x0 0xa 0x0 0x0 0x0 0x0 0x24 0x0 0x0 0xc0 0x0 0x2 0x0 0x0
    __ttl = 0x1
    __tod = 0x661cb066 0x33432ca2

Apr 15 2024 22:02:18.021441500 ereport.io.scsi.cmd.disk.dev.rqs.derr
nvlist version: 0
    class = ereport.io.scsi.cmd.disk.dev.rqs.derr
    ena = 0x65508d38800001
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = dev
        device-path = /pci@0,0/pci15ad,1976@10/sd@0,0
        devid = id1,sd@n6000c297cd07f630fd9b45917dcb6a04
    (end detector)

    devid = id1,sd@n6000c297cd07f630fd9b45917dcb6a04
    driver-assessment = fail
    op-code = 0x1a
    cdb = 0x1a 0x0 0x48 0x0 0x20 0x0
    pkt-reason = 0x0
    pkt-state = 0x37
    pkt-stats = 0x0
    stat-code = 0x2
    key = 0x5
    asc = 0x24
    ascq = 0x0
    sense-data = 0x70 0x0 0x5 0x0 0x0 0x0 0x0 0xa 0x0 0x0 0x0 0x0 0x24 0x0 0x0 0xc0 0x0 0x2 0x0 0x0
    __ttl = 0x1
    __tod = 0x661dea3a 0x1472bdc

Apr 15 2024 22:02:40.591475959 ereport.fs.zfs.checksum
nvlist version: 0
    class = ereport.fs.zfs.checksum
    ena = 0xb965037ac00401
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x437636acfddf72db
        vdev = 0xd4cf77f4226405bf
    (end detector)

    pool = pool01-hdd
    pool_guid = 0x437636acfddf72db
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0xd4cf77f4226405bf
    vdev_type = disk
    vdev_path = /dev/dsk/c3t5000CCA253C7D32Ad0s0
    vdev_devid = id1,sd@n5000cca253c7d32a/a
    vdev_ashift = 0xc
    parent_guid = 0x45072fdeb1df4429
    parent_type = replacing
    zio_err = 50
    zio_offset = 0x3400042b000
    zio_size = 0x1000
    zio_objset = 0x11632
    zio_object = 0x0
    zio_level = -1
    zio_blkid = 0x0
    cksum_expected = 0x116e952ba4 0x13c89ef46cd0 0xddc68b7a59181 0x748e36368b79e26
    cksum_actual = 0x0 0x0 0x0 0x0
    cksum_algorithm = fletcher4
    bad_ranges = 0x0 0x20 0x40 0x60 0x70 0x78 0x90 0xc0 0x210 0x220 0x240 0x248 0x260 0x268 0x270 0x290 0x2c0 0x2d0 0x400 0x410 0x440 0x488 0x490 0x498 0x600 0x610 0x640 0x698
    bad_ranges_min_gap = 0x8
    bad_range_sets = 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
    bad_range_clears = 0x19 0x1c 0x17 0x79 0xb 0xc 0xf 0x4a 0x2 0xb 0xc0 0xb 0xb 0xcb
    bad_set_histogram = 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
    bad_cleared_histogram = 0xb 0x12 0xe 0x14 0x11 0x12 0x14 0x16 0xb 0xc 0x9 0x13 0xd 0xd 0xc 0x10 0xc 0xa 0xd 0x10 0xf 0x10 0x15 0x14 0xb 0x9 0xc 0xc 0xc 0xe 0x10 0x12 0x8 0x6 0x6 0x9 0xa 0x10 0x12 0x12 0x4 0x3 0x7 0x5 0x7 0xc 0xf 0xc 0x5 0x2 0x6 0x5 0x5 0x6 0xa 0xb 0x7 0x6 0x6 0x7 0x5 0x7 0x9 0xe
    __ttl = 0x1
    __tod = 0x661dea50 0x234134f7

Jan 20 2025 02:04:34.937866382 ereport.io.scsi.cmd.disk.dev.serr
nvlist version: 0
    class = ereport.io.scsi.cmd.disk.dev.serr
    ena = 0x580aa87ce3800001
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = dev
        device-path = /pci@0,0/pci15ad,1976@10/sd@0,0
        devid = id1,sd@n6000c297cd07f630fd9b45917dcb6a04
    (end detector)

    devid = id1,sd@n6000c297cd07f630fd9b45917dcb6a04
    driver-assessment = retry
    op-code = 0x2a
    cdb = 0x2a 0x0 0x1 0xb8 0xa6 0xdc 0x0 0x0 0x1f 0x0
    pkt-reason = 0x0
    pkt-state = 0x0
    pkt-stats = 0x0
    stat-code = 0x22
    __ttl = 0x1
    __tod = 0x678e0392 0x37e6b48e

Jan 20 2025 02:04:34.937866477 ereport.io.scsi.cmd.disk.tran
nvlist version: 0
    class = ereport.io.scsi.cmd.disk.tran
    ena = 0x580aa87ce3800001
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = dev
        device-path = /pci@0,0/pci15ad,1976@10/sd@0,0
    (end detector)

    devid = id1,sd@n6000c297cd07f630fd9b45917dcb6a04
    driver-assessment = retry
    op-code = 0x2a
    cdb = 0x2a 0x0 0x1 0xb8 0xa6 0xdc 0x0 0x0 0x1f 0x0
    pkt-reason = 0x4
    pkt-state = 0x0
    pkt-stats = 0x10
    __ttl = 0x1
    __tod = 0x678e0392 0x37e6b4ed

Jan 20 2025 02:04:34.937865823 ereport.io.scsi.cmd.disk.recovered
nvlist version: 0
    class = ereport.io.scsi.cmd.disk.recovered
    ena = 0x580aa87ce3800001
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = dev
        device-path = /pci@0,0/pci15ad,1976@10/sd@0,0
        devid = id1,sd@n6000c297cd07f630fd9b45917dcb6a04
    (end detector)

    devid = id1,sd@n6000c297cd07f630fd9b45917dcb6a04
    driver-assessment = recovered
    op-code = 0x2a
    cdb = 0x2a 0x0 0x1 0xb8 0xa6 0xdc 0x0 0x0 0x1f 0x0
    pkt-reason = 0x0
    pkt-state = 0x1f
    pkt-stats = 0x0
    __ttl = 0x1
    __tod = 0x678e0392 0x37e6b25f

Code:
Jan 26 2023 10:06:42 ereport.io.scsi.cmd.disk.tran   
Jan 26 2023 10:06:39 ereport.io.scsi.cmd.disk.dev.rqs.merr
Jan 26 2023 10:06:39 ereport.io.scsi.cmd.disk.dev.rqs.merr
Jan 26 2023 10:06:39 ereport.io.scsi.cmd.disk.recovered
Jan 26 2023 10:06:39 ereport.io.scsi.cmd.disk.recovered
Jan 26 2023 10:06:39 ereport.io.scsi.cmd.disk.recovered
Jan 26 2023 10:06:51 ereport.fs.zfs.io               
Jan 26 2023 10:06:51 ereport.fs.zfs.io               
Jan 26 2023 10:06:51 ereport.fs.zfs.io               
Jan 26 2023 10:06:51 ereport.fs.zfs.io               
Jan 26 2023 10:06:51 ereport.fs.zfs.io               
Jan 26 2023 10:06:51 ereport.fs.zfs.checksum         
Oct 06 2023 11:22:51 ereport.io.scsi.cmd.disk.dev.rqs.derr
Oct 06 2023 11:51:31 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.tran   
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.tran   
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.tran   
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.tran   
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:21 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:21 ereport.io.scsi.cmd.disk.tran   
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.tran   
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:24 ereport.fs.zfs.io               
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:24 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:21 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:25 ereport.fs.zfs.io               
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:24 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:21 ereport.io.scsi.cmd.disk.tran   
Apr 07 2024 22:18:21 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:24 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:28 ereport.fs.zfs.io               
Apr 07 2024 22:18:21 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:24 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:28 ereport.fs.zfs.probe_failure   
Apr 07 2024 22:18:37 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:43 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.tran   
Apr 07 2024 22:18:52 ereport.fs.zfs.io               
Apr 07 2024 22:18:52 ereport.fs.zfs.io               
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:52 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:49 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:52 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:52 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:52 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:55 ereport.fs.zfs.io               
Apr 07 2024 22:18:55 ereport.fs.zfs.probe_failure   
Apr 07 2024 22:19:01 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:19:07 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:19:13 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:19:19 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:46:51 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:47:06 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:47:15 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:47:21 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:47:27 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:47:42 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:47:51 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:47:57 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:48:09 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:48:21 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:48:24 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:48:33 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:48:39 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:48:46 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:48:46 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:49:07 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:49:13 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:49:25 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 20:49:30 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 14 2024 23:43:18 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 15 2024 22:02:18 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 15 2024 22:02:40 ereport.fs.zfs.checksum         
Jan 20 02:04:34.9378 ereport.io.scsi.cmd.disk.dev.serr
Jan 20 02:04:34.9378 ereport.io.scsi.cmd.disk.tran   
Jan 20 02:04:34.9378 ereport.io.scsi.cmd.disk.recovered
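For a quick overview, the ereport classes in a dump like this can be tallied with a small shell pipeline. This is just a sketch: the inlined sample lines and the /tmp path are for illustration only; on OmniOS/Illumos you would pipe the output of `fmdump -e` instead.

```shell
# Sample of fmdump -e style lines (on a real system: fmdump -e > /tmp/ereports.txt)
cat <<'EOF' > /tmp/ereports.txt
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:20 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 07 2024 22:18:14 ereport.io.scsi.cmd.disk.tran
Apr 07 2024 22:18:24 ereport.fs.zfs.io
Apr 15 2024 22:02:40 ereport.fs.zfs.checksum
EOF

# The ereport class is the last whitespace-separated field on each line;
# count occurrences per class and list the most frequent first.
awk '{count[$NF]++} END {for (c in count) print count[c], c}' /tmp/ereports.txt | sort -rn
```

This makes it easy to see at a glance whether the noise is dominated by harmless retried SCSI commands or by the more worrying ereport.fs.zfs.* classes.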
 
Hard to say. With ~6k errors in ~300 days you see around one error per hour. Unless there are real errors, e.g. checksum problems, I would say the problem is not serious, as the main effect is a re-read that succeeds, with a small performance impact.
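The rate estimate is easy to verify with the figures from the post above (6184 hard errors over 312 days of uptime):

```shell
# Error-rate sanity check: 6184 hard errors accumulated over 312 days.
errors=6184
days=312
awk -v e="$errors" -v d="$days" 'BEGIN { printf "%.2f errors/hour\n", e / (d * 24) }'
# prints: 0.83 errors/hour
```

So "around one error per hour" holds, which for retried-and-recovered commands is noise rather than an emergency.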
 
OpenZFS for Windows 2.3 rc6f

https://github.com/openzfsonwindows/openzfs/releases/tag/zfswin-2.3.0rc6

The release seems not too far away, as we see a new build every few days fixing the remaining problems that come up now that more users are testing OpenZFS on Windows across different software and hardware environments. So test it and report remaining problems at https://github.com/openzfsonwindows/openzfs/issues

In my case, today's rc6f fixed a remaining BSOD problem around unmount and zvol destroy. It is quite safe to try OpenZFS on Windows as long as your boot drive is not encrypted, so that on a driver boot loop you can boot directly into CLI mode and delete the filesystem driver /windows/system32/drivers/openzfs.sys. (I have not seen a boot loop problem for quite a long time; last time it was due to an incompatibility with the Aomei driver.)

I missed OpenZFS on Windows. While Storage Spaces is a superior method to pool disks of any type or size with automatic hot/cold data tiering, ZFS is far better for large arrays, with many storage features not available on Windows with NTFS or ReFS. Windows ACL handling was always a reason for me to avoid Linux/SAMBA; only Illumos comes close, with worldwide-unique Windows AD SIDs and SMB groups that can contain groups.

Windows with SMB Direct/RDMA (requires Windows Server) and Hyper-V is on its way to becoming a premium storage platform.
 

OpenZFS on Windows 2.3.1 rc1


https://github.com/openzfsonwindows/openzfs/releases/tag/zfswin-2.3.1rc1
  • Separate OpenZFS.sys and OpenZVOL.sys drivers
  • Cleanup mount code
  • Cleanup unmount code
  • Fix hostid
  • Set VolumeSerial per mount
  • Check Disk / Partitions before wiping them
  • Fix Vpb ReferenceCounts
  • Have zfsinstaller cleanup ghost installs.
  • Supplied rocket launch code to Norway
Report and discuss issues:
https://github.com/openzfsonwindows/openzfs/issues
https://github.com/openzfsonwindows/openzfs/discussions
 
Windows is already superior to Linux/SAMBA regarding ACL handling, and it can pool disks of different type or size with SSD/HDD auto-tiering of hot/cold files. A Windows Server (Essentials) adds Active Directory and working SMB Direct (RDMA) with NICs > 10G, which allows SMB with the lowest CPU load and latency, at up to 10 GByte/s.

Once released, OpenZFS on Windows is a game changer for (Windows) storage.
 
New day, new release: rc3
  • Fix BSOD running CrystalDiskMark
  • Fix userland hostid mismatching
  • Change mount code to only open mountmgr once

    Jorgen Lundman, maintainer of OpenZFS on Windows, is working hard toward a release edition.
    Test it and report remaining problems!
 
I have already started to evaluate Proxmox as a replacement for ESXi. For VMs it is a fine replacement, though maybe not as resource-efficient and fast for Windows VMs. For NAS use, Proxmox with Linux/SAMBA is OK, but when it comes to ACLs and permissions it is a pain compared to Solaris/OmniOS or Windows. At the moment I still prefer ESXi + OmniOS as NFS and SMB server.
 