Disk errors during ZFS receive

Hi,

I have put together a backup NAS using old parts. It contains a "backups" ZFS pool, and I am using ZFS send/receive to back up my main data pool (on my primary NAS) to this backup pool. However, every now and then the ZFS send/receive operation hangs, and if I run "zpool status" on the backup NAS, it shows one disk as "REMOVED" instead of "ONLINE". Looking through the kernel logs shows several disk- and ZFS-related errors, for example:

Code:
ata3.00: exception Emask 0x0 SAct 0x18000 SErr 0x0 action 0x6 frozen
ata3.00: failed command: WRITE FPDMA QUEUED
sd 2:0:0:0: [sdc] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
WARNING: Pool 'backups' has encountered an uncorrectable I/O failure and has been suspended.
sd 1:0:0:0: [sdb] tag#5 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK

If I reboot the server, let it resilver, then run a scrub, the pool is back online and everything looks fine. But I know the next time I do a ZFS send/receive it will likely fall over again. I've even tried moving the OS and data disks to another (similar) machine but have experienced exactly the same issue, so it doesn't seem to be the underlying hardware (e.g. SATA ports failing). Specs are here:

Primary NAS
Asus X370-PRIME PRO
AMD Ryzen 1700
16 GiB RAM (DDR4, 3000 MHz, ECC, dual channel)
6x 10 TB WD WD100EFAX (ZFS RAID-Z2)

Backup NAS - Config 1
MSI H67MA-E35
Intel Core i3-3220
4 GiB RAM (DDR3, 1600 MHz, dual channel)
2x 5 TB Seagate ST5000DM000 (ZFS Mirror)

Backup NAS - Config 2
MSI H77MA-G43
Intel Celeron G1610
8 GiB RAM (DDR3, 1333 MHz, dual channel)
2x 5 TB Seagate ST5000DM000 (ZFS Mirror)

All servers are running Ubuntu 18.04 LTS Server (kernel 4.15.0-47-generic) with ZFS-on-Linux 0.7.13. The disks were both working fine before I set up this server, and the primary NAS has been running for 7+ months with no issues and shows no kernel errors. Here are the SMART printouts of the disks:

Code:
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   117   100   006    Pre-fail  Always       -       124416336
  3 Spin_Up_Time            0x0003   091   091   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   086   086   020    Old_age   Always       -       14543
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   074   060   030    Pre-fail  Always       -       12978842592
  9 Power_On_Hours          0x0032   078   078   000    Old_age   Always       -       20022
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   093   093   020    Old_age   Always       -       8113
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   001   000    Old_age   Always       -       0 0 9324
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   067   041   045    Old_age   Always   In_the_past 33 (0 80 34 19 0)
194 Temperature_Celsius     0x0022   033   059   000    Old_age   Always       -       33 (0 18 0 0 0)
195 Hardware_ECC_Recovered  0x001a   117   100   000    Old_age   Always       -       124416336
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       2518h+29m+47.641s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       76549057144
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       131607260394

SMART Error Log Version: 1
No Errors Logged

Code:
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   115   100   006    Pre-fail  Always       -       88217888
  3 Spin_Up_Time            0x0003   092   091   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   086   086   020    Old_age   Always       -       14679
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   071   060   030    Pre-fail  Always       -       34473540100
  9 Power_On_Hours          0x0032   061   061   000    Old_age   Always       -       34661
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       119
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   099   000    Old_age   Always       -       3 3 3
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   068   037   045    Old_age   Always   In_the_past 32 (23 207 34 19 0)
194 Temperature_Celsius     0x0022   032   063   000    Old_age   Always       -       32 (0 17 0 0 0)
195 Hardware_ECC_Recovered  0x001a   115   100   000    Old_age   Always       -       88217888
197 Current_Pending_Sector  0x0012   100   094   000    Old_age   Always       -       32
198 Offline_Uncorrectable   0x0010   100   094   000    Old_age   Offline      -       32
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       2878h+21m+05.725s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       22373668156
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       221455983814

SMART Error Log Version: 1
No Errors Logged
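
(For reference, both printouts are from smartctl, trimmed to the attributes table and error log; the device names below are just how the disks enumerated on my system:)

Code:
smartctl -a /dev/sdb
smartctl -a /dev/sdc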

Notes:
  • I am not sure where to begin trying to debug this. I notice the Raw_Read_Error_Rate and Hardware_ECC_Recovered SMART attributes don't look good, but I am not sure if this is a cause or a symptom.
  • During 2 of the 3 failures, the log shows errors for both disks only a few minutes apart, which suggests to me that it's not a disk issue (or it's a big coincidence).
  • Could it be a driver issue? I have not installed any custom drivers; I am just using the ones that come with the standard server install of Ubuntu 18.04 LTS.
  • I have seen reports of Marvell controllers being dodgy, but both the MSI H67MA-E35 and MSI H77MA-G43 only have Intel SATA ports.
  • Could it be a ZFS receive bug? I am currently running "badblocks -b 4096 -nsv <disk>" on both disks and so far no issues (about 4 hours in).
I have attached the full kernel logs from the backup NAS in case that's useful.
 

Attachments

  • Kernel Logs.7z (25.3 KB)
Running "badblocks -b 4096 -nsv <disk>" on each disk at the same time went over a day with no errors. As soon as I started a ZFS send/receive, one of the disks got removed again in a similar fashion to before. None of the several hour-long scrubs I've performed have ever thrown up an issue.

Really not sure what's going on here...
 
I don't know enough about ZFS to answer your question, but I'm wondering why you didn't try rsync or some other solution?

I understand ZFS send/receive for, say, VM migrations, but as a pure backup tool, something like Veeam might make more sense to me.
 
I would probably remove the disk and replace it, or optionally test it with a low-level tool like WD Data Lifeguard or similar. If ZFS throws the disk out, it is not good; believe it, the disk is bad or has bad sectors. If the disk is really good, check the cabling or backplane.

btw.
ZFS replication can keep two filesystems in sync based on ZFS snaps, locally or over a network, even with multiple petabytes on a high-load server with open files, with a delay of down to a minute or less and with ZFS properties intact. rsync or Veeam cannot offer similar features.
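
A minimal sketch of such a replication job, with illustrative names (-R sends the filesystem together with its properties and snapshots, -i makes the send incremental):

Code:
# incremental replication of a snapshot pair; -R carries properties and child snaps along
zfs send -R -i tank/data@sync-prev tank/data@sync-now | \
    ssh backup-host zfs receive -Fdu backups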
 
Well shit, I guess that's worth fixing!

[though to be noted, while rsync can periodically do the same basic thing at the file level, Veeam will do versioning etc., which is one of the reasons I like it]
 
ZFS can do versioning via snaps. No problem with tens of thousands of snaps. A snap can be created without delay and without initial space consumption; it will only consume the space of datablocks modified after the snap was created.
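
For example (dataset and snap names are made up), the USED column shows how much space each snap actually ties up, which is zero at creation and grows only as blocks are later modified or deleted:

Code:
# create a snap (instant, no initial space cost), then check per-snap space usage
zfs snapshot backups/data@daily-2019-04-20
zfs list -r -t snapshot -o name,used,refer backups/data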
 
I ran all the SeaTools tests on the disks. The Fix all (long) test found some errors on the disk that kept going to a "REMOVED" state in ZFS but fixed them, thus it passed. The same test found errors on the other disk too but could not fix them, so it failed. So it looks like I have one damaged disk and one dodgy disk, unless these kinds of issues can be caused by the SATA controller/cables/etc. rather than the disks themselves.
 