Is there a big difference between RAID60 and RAIDZ3?
Or are you accounting for spares?
It seems irresponsible to give any recommendations without more information
Reliability: ZFS is best.
Pool layout: 2 × 10-disk RAIDZ2 vdevs plus spare disks, or 2 × 11-disk RAIDZ3 vdevs without spares.
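For 22 × 2TB disks (the sizes given later in the thread), the two suggested layouts come out to the same usable capacity. A quick back-of-the-envelope sketch (ignoring ZFS metadata and slop-space overhead):

```python
def raidz_usable(vdevs, disks_per_vdev, parity, disk_tb):
    """Rough usable capacity in TB, before ZFS metadata/slop overhead."""
    return vdevs * (disks_per_vdev - parity) * disk_tb

# 2 x 10-disk RAIDZ2 vdevs + 2 hot spares (22 disks total)
z2 = raidz_usable(vdevs=2, disks_per_vdev=10, parity=2, disk_tb=2)

# 2 x 11-disk RAIDZ3 vdevs, no spares (22 disks total)
z3 = raidz_usable(vdevs=2, disks_per_vdev=11, parity=3, disk_tb=2)

print(z2, z3)  # 32 32 -- identical usable space
```

So the trade-off is not capacity but behavior: Z3 tolerates three failures per vdev immediately, while Z2 + spare must first resilver onto the spare before regaining that margin.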
Although I gave advice, I did think about that. If this were an HTPC I would have a totally different answer for 22 disks: I would recommend SnapRAID (17 data + 5 parity, or 18 data + 4 parity) with whatever filesystem the OP is most comfortable with.
> It seems irresponsible to give any recommendations without more information.

Fair enough. A reasonable response.
> What is the purpose of the array?

It is a backup of the backup box. (Yes, 2 other boxes configured similarly.)

> What size(s) of HDDs are you talking about?

2TB.

> What OS are you expecting to use?

Ubuntu Linux.

> Do you have a backup strategy yet?

In discussion, but unlikely given its purpose.
Interesting. I have a completely separate home server that I am setting up that has 9x1.5TB disks. Primary usage will be Media Storage/Home Network backup. Some VM sandboxes will be there too.
Would you recommend snapraid for that? I have been tossing around ZFS, BTRFS, or plain old MDADM RAID6
I always heard RAIDZ3 was plain better than RAIDZ2+spare if performance is not a factor.
As far as I understand snapraid, it will not be able to recover all data from a failed drive if any file on any drive in the array changes after the last sync. Which means the moment there is a change in your VM image, you lose the protection no matter how many parity drives you have. So snapraid is only useful for completely static data which never changes.
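A toy single-parity sketch illustrates the point (illustrative XOR arithmetic only, not SnapRAID's actual code): once any other drive changes after the last sync, the stale parity no longer reconstructs the failed drive correctly.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks column-wise, like snapshot parity."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data "drives"; parity is computed at sync time
d = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(d)

# Case 1: drive 0 fails, nothing else changed -> perfect recovery
recovered = xor_blocks([d[1], d[2], parity])
assert recovered == b"AAAA"

# Case 2: drive 1 is modified AFTER the sync, then drive 0 fails
d[1] = b"BXBB"
recovered = xor_blocks([d[1], d[2], parity])
assert recovered != b"AAAA"  # stale parity -> corrupted recovery
```

With more parity drives the changed drive can itself be treated as an erasure, which is the nuance debated below.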
You are basically right.
This is correct.
I believe parity still helps with changes: SnapRAID can ignore the changed data (treating the changed blocks like a removed disk) and use the remaining parity to recover a second bad block. However, this increases the need for extra parity disks.
> No, he and you are incorrect.

Well, my comment was directed at the inability to restore the changes to VM images. Rereading his comment, I think that was probably not what he meant.
> Would it even make sense to add the VM images to the SnapRAID volume? The frequently altered data cannot be restored anyway. And unless you shut down the VMs during the syncs you may end up with inconsistent filesystems. Additionally, each sync would have to read a lot of data from all disks because of the VM images. The additional parity disk could be used as a mirror for a dedicated VM storage disk instead.

Alternatively, if all the VM sandboxes will fit on one drive, you could have a single SnapRAID volume using all 9 drives, but you would want at least 3-parity, since one parity drive would effectively be consumed protecting against the frequent changes in the VM files. However, this puts the recent changes to your VM files at risk if the VM drive fails, so I would not recommend this as my first choice.
> I can't stand VMs on anything but SSDs at this point, but I also can't really justify spending tons of mirrors and stuff... so VMs go on individual SSD zpools, but I use zfs send/receive to send incremental snapshots to the previously mentioned 2-4 disk zpool. Worst case, some SSD with VMs on it fails and I have a snapshot that is at most 10 minutes old or so on a spinning disk.

Completely agreed, that is what I do as well. Although in a production environment you may not be able to live with a VM state that is 10 minutes old; database servers, for example. And a block-level snapshot of a VM image without a saved VM state is as if you pulled the plug on that VM, as far as a restore is concerned.
Would it even make sense to add the VM images to the SnapRAID volume? The frequently altered data cannot be restored anyway. And unless you shut down the VMs during the syncs you may end up with inconsistent filesystems. Additionally, each sync would have to read a lot of data from all disks because of the VM images. The additional parity disk could be used as a mirror for a dedicated VM storage disk.
If files are deleted or changed on one or more drives, and then a different drive fails (with no changes since the last sync on the failed drive), then you will be able to recover all your data as long as you have sufficient parity. The surviving drives with changes or deletions can be lumped together with the failed drive(s) in your analysis.
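A toy sketch of that analysis (illustrative only; real multi-parity schemes such as RAID6 use Galois-field arithmetic, here plain integer linear algebra stands in): with two independent parity equations, one failed drive plus one drive changed after the last sync can both be treated as erasures and solved for, recovering the as-synced contents of both.

```python
# Each "drive" holds one integer block. Two Vandermonde-style parities:
#   P = sum(d_i),  Q = sum((i + 1) * d_i)
data = [7, 13, 42, 5]
P = sum(data)
Q = sum((i + 1) * v for i, v in enumerate(data))

# Drive 2 fails; drive 0 was modified after the sync. Treat BOTH as
# erasures (j = 0, k = 2) and solve the resulting 2x2 linear system:
#   d_j + d_k             = P - (known drives)
#   (j+1)*d_j + (k+1)*d_k = Q - (weighted known drives)
j, k = 0, 2
known = [(i, v) for i, v in enumerate(data) if i not in (j, k)]
r1 = P - sum(v for _, v in known)
r2 = Q - sum((i + 1) * v for i, v in known)

d_k = (r2 - (j + 1) * r1) // ((k + 1) - (j + 1))
d_j = r1 - d_k
print(d_j, d_k)  # 7 42 -- both the stale and the failed block, as synced
```

Note this recovers the last-synced values; any writes made after the sync are still lost, which is consistent with the caveat above.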
"Reduce this problem" and "improves the chances" is not the same as "With two parity drives you will be able to recover the data if files on only one drive were modified". They really need to rephrase this.In the worst case, any file deleted or modified in not broken disks may prevent to recover the same amount of data in the broken disk. For example, if you deleted 10GB of data, you may not be able to recover 10GB of data from the broken disk. The exact amount of data lost depends on how much the deleted and broken data overlaps in the parity.
To reduce this problem you can use two parity drives. This improves a lot the chances to recover the data.
"Reduce this problem" and "improves the chances" is not the same as "With two parity drives you will be able to recover the data if files on only one drive were modified". They really need to rephrase this.