zfs and bad sectors...

ghat

n00b
Joined
Feb 8, 2012
Messages
39
so...

just after the Thai floods, there were 2TB WD EARS drives on sale at Best Buy for $80, so I bought a bunch. As soon as I got them I tested for bad sectors, and if I found a bad drive I exchanged it, to the point that all three drives have very few bad sectors. But I still ended up with drives which DO have bad sectors, at the very tail end of the drive...

I managed to partition each drive such that the "good section" is partition 1 (about 99% of the 2TB) and the bad section sits in a second, unused partition, and I run Linux software RAID on /dev/sdb1 etc...
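For reference, a sketch of that kind of layout (device names and the 20 GB figure for the bad tail are illustrative assumptions, not from the post):

```shell
# Carve each 2 TB drive so the suspect tail end is never touched.
# Assumes the bad sectors are confined to roughly the last 20 GB.
parted --script /dev/sdb mklabel gpt
parted --script /dev/sdb mkpart good 1MiB 1980GB    # partition 1: the "good" region
parted --script /dev/sdb mkpart bad 1980GB 100%     # partition 2: left unused

# Build the Linux software RAID on the good partitions only.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1
```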

Now I plan to upgrade my server, and I am planning to buy more drives and also install ZFS on the new system.

I wanted to know: if I use these drives under ZFS, how does it handle the bad sectors? Any read/write access to those sectors increases the drive's access time,
so ZFS somehow needs to know which blocks to ignore completely and never access them.

G
 
Greetings

It is somewhat unusual to have errors clustered at the end of the drive, but there could have been a manufacturing problem with that batch of drives.

If the errors are, as you describe, at the actual end of the drive, you have two options. Firstly, use the hard drive manufacturer's supplied tools to reduce the effective size of the drive by creating a host protected area (HPA), which you will never use, encompassing the damaged area. I believe it blocks off the tail end of the drive, but it's conceivable that it could be the start, so double-check this after you create it. This is the simplest method: say you have a 2000 GB drive and reduce it by 10 GB, it will then appear to the computer it is attached to as a native 1990 GB hard drive, and you don't have to concern yourself with complications such as partitions that you would otherwise have.
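As a concrete sketch of the HPA route from Linux if you don't have the vendor tool handy (the sector counts are illustrative; a nominal 2 TB drive is typically 3,907,029,168 512-byte sectors):

```shell
# Hide the last 10 GiB of a 2 TB drive behind a host protected area.
TOTAL_SECTORS=3907029168                           # full 2 TB drive, 512-byte sectors
HIDE_SECTORS=$(( 10 * 1024 * 1024 * 1024 / 512 ))  # 10 GiB = 20971520 sectors
NEW_MAX=$(( TOTAL_SECTORS - HIDE_SECTORS ))
echo "$NEW_MAX"

# Apply it (the leading "p" makes the new max sector count permanent
# across power cycles), then re-read it to verify:
#   hdparm -N p$NEW_MAX /dev/sdX
#   hdparm -N /dev/sdX
```

Note the HPA hides the highest LBAs, so as the poster above says, verify that matches where your bad sectors actually are.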

Secondly, the easiest thing to do would be to create either a mirrored or RAID-Z/Z2/Z3 redundant volume and let ZFS itself take care of the bad sectors, relocating data as they are detected. When writing, if a drive returns an I/O error, ZFS should retry at a different location and mark the originally attempted blocks as bad. You should also do the recommended monthly scrubs: ZFS will read all the blocks, and upon detecting bad blocks it will reconstruct the missing data from redundancy, relocate it to a new location, and again mark the original location as bad so it's not reused.
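The redundant-pool route might look like this (the pool name "tank" and the device names are placeholders):

```shell
# Double-parity pool: one parity's worth of headroom for bad sectors,
# a second for a whole-drive failure, as suggested above.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Periodic scrub: ZFS reads every block, reconstructs anything
# unreadable from parity, and rewrites it elsewhere.
zpool scrub tank
zpool status -v tank    # shows scrub progress and any errors found
```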

You should in the first instance try to repair the damaged blocks by writing to them, as the hard drive itself should be able to reallocate sectors. There is a limit to the number of sectors it can do this for, however; I believe it's about a thousand in total. Once this grown defect list is used up it can't reallocate any more sectors, and I presume the manufacturer-supplied tools will probably fail the drive at that point, so you can get it replaced if it's still under warranty. I would not use such a drive in a non-redundant fashion, such as a solitary Windows NTFS volume, but as part of a ZFS RAID-Z/Z2/Z3 config it should be perfectly fine. I suggest a minimum of a RAID-Z2 set, as you would need one parity drive to cope with bad sectors and a second parity drive to cope with a failed drive if that ever happens; personally I would set up a RAID-Z3 volume for extra protection in the form of three parity drives.
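To watch the drive's reallocation budget, you can check the SMART counters before and after forcing writes over the bad spots (the LBA below is an example, not a real defect address):

```shell
# Reallocated_Sector_Ct (attribute 5) counts grown defects already remapped;
# Current_Pending_Sector (197) counts sectors waiting for a write to trigger remapping.
smartctl -A /dev/sdX | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector'

# Overwrite one known-bad LBA so the drive can remap it
# (this DESTROYS whatever data was in that sector):
#   hdparm --write-sector 3900000000 --yes-i-know-what-i-am-doing /dev/sdX
```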

If you do want to use any of them as solitary ZFS volumes you could set the copies=2 property, but since it writes two copies of all data it reduces the effective size by half; copies=3 would triplicate the data and reduce usable space by two-thirds, so this is not recommended due to the space wastage involved.
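The space cost of the copies property is easy to see (ignoring metadata overhead; the 2000 GB volume size is just the example from above):

```shell
# Each logical block is stored N times, so usable space is roughly size/N.
SIZE_GB=2000
for N in 1 2 3; do
  echo "copies=$N -> $(( SIZE_GB / N )) GB usable"
done

# Setting it on an existing dataset (affects newly written data only):
#   zfs set copies=2 tank/data
```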

Cheers
 
A monthly scrub is recommended with enterprise disks. Weekly scrubs are recommended with commodity disks.
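A cron entry is one way to automate the weekly scrub (the pool name "tank" is a placeholder):

```shell
# /etc/crontab -- scrub the pool every Sunday at 03:00
0 3 * * 0  root  /sbin/zpool scrub tank
```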

Be careful with those disks. ZFS will repair your errors, but there will be some point where your disks fail beyond repair, and that is a fact. Most people would buy new disks.
 