I have an all-in-one setup using napp-it/OmniOS (configuration below). I just had a single disk fail in an 8-disk RAIDZ2 pool, but the file system is down/not mounted despite only the one disk failing. I'm behind on backups and hoping to recover the system if at all possible.
If I run zpool status it shows the pool as degraded but not mounted, with a single failed disk. I keep getting messages about the failed disk not responding and suspect the consumer-grade SATA disk is to blame. I know I should not be using these disks (I have some 4TB nearline SAS drives I need to upgrade to), but I would be very grateful if somebody could help me get the data off the degraded pool.
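For reference, this is roughly what I have been running to check things (the pool name "tank" below is just a placeholder, mine is named differently):

  # overall pool health and which disk is faulted
  zpool status -v tank
  # check whether the filesystems are actually mounted
  zfs list -o name,mountpoint,mounted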
I have to admit I'm not that knowledgeable about managing a RAIDZ pool, but from searching it seems I might need to somehow remove or offline the bad disk and then get the degraded pool mounted.
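From what I've found so far, something like this looks like the way to take the bad disk out of the picture without physically pulling it, but please correct me if this is wrong (pool and device names below are made up, not my actual ones):

  # take the unresponsive disk offline so the pool stops retrying it
  zpool offline tank c1t5d0
  # from what I read, zpool detach only works on mirrors, not RAIDZ, so I assume detach is not an option here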
I had thought about shutting down and disconnecting the faulty disk, but I don't know how to map the device names in Solaris to the serial numbers on the physical disks, and I'm afraid that if I power up the system with the wrong disk disconnected the pool might go from degraded to failed. I'm also thinking I might be able to take the bad disk offline instead (though if it is not responding I might not even be able to do that).
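These are the commands I've seen suggested for matching OmniOS device names to physical disk serial numbers, though I haven't tried them yet, so I'm not sure this is the right approach:

  # list the disks and their c#t#d# device names without entering the interactive menu
  format </dev/null
  # per-device error counters plus vendor, product and serial number
  iostat -En
  # any faults the fault manager has already logged
  fmadm faulty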
Would greatly appreciate any tips on how to get started.
Kuma
Configuration:
Supermicro X10SL7-F with onboard LSI SAS flashed to non-RAID (IT) mode
Xeon E3-1232v3
32GB ECC RAM
ESXi 6.0
2x Intel GbE
8x Seagate 2TB drives (on the LSI SAS controller, passed through to OmniOS)
250GB Samsung 850 SSD (ESXi local datastore)
OmniOS/ZFS VM
The primary OmniOS VM has 6GB memory and 4 cores, with the onboard LSI SAS controller passed through.
VMware tools installed
LSI SAS, E1000, 30GB vdisk on the local SSD datastore
I created a ZFS pool with RAIDZ2 and shared it via SMB and NFS.
Windows 10 VM
2 cores, 6GB memory, 120GB thin-provisioned disk on the OmniOS NFS share
E1000, LSI SAS, Paravirtual controller