You might be best off recreating the array and restoring from a backup. Since the controller doesn't know about your file system, I can't see how it would know which parts of the array are in use and which are not.
I don't see why the file system would matter; RAID sits below the file system.
14 hours is not horrible, depending on whether you are using the system live or not. However, if the array were larger, the rebuild would have to read more data. Not sure how much slower this makes things, as it should be able...
Yeah, I'm trying to be backup-first before worrying about the RAID. It's not like the RAID will protect me from theft or other damage to the system (including from software).
But I don't want to restore from a backup every six months when a drive fails, as that can be a bit of a pain. Otherwise I...
Over at Spiceworks it's the only thing people seem to recommend for a large number of drives, since adding more drives doesn't compromise safety.
No, I am talking about RAID 1+0. As the number of drives increases, there is a greater chance that some drive will fail. However, there are also more places for the failure to land where it does not take down the array. These probabilities cancel each other out.
Anyway my original...
With RAID 10, the odds of a second disk failure taking out the array are the same as with RAID 1; they do not increase as the number of disks increases. With RAID 6 that is not true, and that should be considered as well.
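As a rough illustration of why (a back-of-the-envelope sketch under my own assumptions: each surviving drive independently has the same small probability p of dying during the rebuild window), in RAID 10 only the failed drive's mirror partner matters, so the risk stays flat as you add pairs, whereas with a single-redundancy rebuild the risk grows with drive count (RAID 6's second parity only softens that growth):

```python
# Back-of-the-envelope sketch (my own assumptions, not measured data): one drive
# in an n-drive RAID 10 has already failed, and each surviving drive independently
# fails during the rebuild window with probability p.

def raid10_loss(p: float) -> float:
    # The array only dies if the dead drive's single mirror partner also fails,
    # so the risk stays at p no matter how many mirrored pairs there are.
    return p

def single_parity_loss(p: float, n: int) -> float:
    # With single redundancy remaining, losing ANY of the other n-1 drives kills
    # the rebuild, so the risk grows with drive count.
    return 1 - (1 - p) ** (n - 1)

p = 0.01  # assumed per-drive failure probability during one rebuild window
for n in (4, 8, 12, 24):
    print(f"{n:2d} drives: RAID 10 ~ {raid10_loss(p):.4f}, "
          f"single redundancy ~ {single_parity_loss(p, n):.4f}")
```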
Not to mention how URE affects a RAID 6 rebuild.
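To put a rough number on the URE risk (my own sketch, taking the quoted spec of one unrecoverable read error per 10^14 or 10^15 bits as a literal, independent per-bit probability, which real drives only loosely follow):

```python
# Probability of getting through a rebuild without hitting a single URE, treating
# the quoted URE spec as an independent per-bit error probability (an assumption).

def p_no_ure(bytes_read: float, bits_per_ure: float) -> float:
    bits = bytes_read * 8
    return (1 - 1 / bits_per_ure) ** bits

# Example: a 12 x 6 TB RAID 6 rebuild has to read the 11 surviving drives.
bytes_read = 11 * 6e12
for spec in (1e14, 1e15):  # typical NAS vs typical enterprise URE spec
    print(f"1 URE per {spec:.0e} bits: P(no URE) ~ {p_no_ure(bytes_read, spec):.3f}")
```

Note that during a single-drive RAID 6 rebuild a URE can still be corrected from the remaining parity; the number above is just how likely you are to hit at least one while reading everything.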
ZFS is not really an option for me...
Seems like it is about the array size rather than the number of drives? During a RAID 6 rebuild, all of the data must be read in order to reconstruct the failed drive, whereas with RAID 10 only the data from one drive needs to be read, which is a big win.
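Roughly quantifying that (my own arithmetic, assuming a completely full 12 x 6 TB array and ignoring controller behaviour and live workload):

```python
# Quick comparison of how much data a rebuild has to touch (my own arithmetic,
# assuming a full 12 x 6 TB array; real rebuilds also depend on the controller
# and any live workload, which this ignores).

DRIVE_TB = 6
N_DRIVES = 12

raid6_read_tb  = (N_DRIVES - 1) * DRIVE_TB  # parity rebuild reads all 11 survivors
raid10_read_tb = DRIVE_TB                   # mirror rebuild copies the one partner

print(f"RAID 6 rebuild reads ~{raid6_read_tb} TB across the whole array")
print(f"RAID 10 rebuild reads ~{raid10_read_tb} TB from a single drive")
```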
I make three copies to LTO-6 and store them offsite.
But like others have said, I'm leaning toward RAID 10 for safety reasons. The data is video footage that is written once and read many times. I'm also considering the new SMR drives.
I'm thinking about a 12x6 TB RAID 6, giving 60 TB usable, using HGST NAS drives. However, due to the higher URE rate on these drives compared to enterprise drives, I'm concerned about a failure during rebuild.
Has anybody tried such a configuration and done some rebuilds?
Thanks!