Let me start off by saying I know RAID 0 is frowned upon, I shouldn't have done this in the first place, etc etc... Data loss wasn't a real concern and I needed the speed bump. Please spare me the posts about what a dumb idea this was.
Okay, I had four Samsung 860 EVOs in a RAID 0 config on Ubuntu 16.04. All was well for a few weeks, but after a reboot I started having issues with my XFS partition.
'dmesg | less' showed the following:
[ 23.197822] XFS (md2p1): Metadata CRC error detected at xfs_inobt_read_verify+0x6c/0xd0 [xfs], xfs_inobt block 0x101ba9200
[ 23.197826] XFS (md2p1): Unmount and run xfs_repair
[ 23.197827] XFS (md2p1): First 64 bytes of corrupted metadata buffer:
[ 23.197829] ffff881007dd6000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 23.197831] ffff881007dd6010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 23.197832] ffff881007dd6020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 23.197833] ffff881007dd6030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 23.197840] XFS (md2p1): metadata I/O error: block 0x101ba9200 ("xfs_trans_read_buf_map") error 74 numblks 8
[ 23.197844] XFS (md2p1): xfs_do_force_shutdown(0x1) called from line 315 of file /build/linux-PrHwV2/linux-4.4.0/fs/xfs/xfs_trans_buf.c. Return address = 0xffffffffcb60e8e8
[ 23.198811] XFS (md2p1): I/O Error Detected. Shutting down filesystem
[ 23.198813] XFS (md2p1): Please umount the filesystem and rectify the problem(s)
[ 41.158820] XFS (md2p1): xfs_log_force: error -5 returned.
Running xfs_repair didn't help much, and it was clear something was very wrong.
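For reference, the repair attempt went roughly like this (a sketch, not my exact shell history; md2p1 is the partition from the dmesg output, and the DRYRUN guard just prints each step instead of running it):

```shell
# Sketch of the repair sequence against the XFS partition (md2p1).
# DRYRUN=1 prints each command instead of executing it on the array.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run umount /dev/md2p1          # xfs_repair won't run on a mounted filesystem
run xfs_repair -n /dev/md2p1   # -n: scan and report problems, change nothing
run xfs_repair /dev/md2p1      # actual repair pass
```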
A total rebuild of the RAID 0 array and a fresh install ran into similar problems.
I'm left to assume one of the drives has an issue, but I don't know which one. I moved them all over to my desktop system and checked each one with Samsung's Magician tool; they all show as "Good"...
What can I do to test the drives individually for problems?
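For context, the per-drive checks I know of so far would look something like this (a sketch; sda..sdd are placeholder device names, and the DRYRUN guard prints the commands so nothing runs by accident):

```shell
# Per-drive checks I was considering (sda..sdd are hypothetical names).
# DRYRUN=1 prints the commands instead of executing them.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "$*"; else "$@"; fi; }

for dev in sda sdb sdc sdd; do
    run smartctl -t long /dev/$dev   # start a SMART extended self-test
    run badblocks -sv /dev/$dev      # read-only surface scan of the whole drive
done
# Afterwards, smartctl -a /dev/sdX shows the self-test result and error log.
```

Is that enough to catch a drive that SMART rates as "Good", or is there a better way?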