Hi all,
Your wisdom and experience would be highly appreciated with the following query.
I am setting up a 24-drive NAS (QNAP TS-h1283XU-RP). The hardware seems decent (Xeon E-2236 6-core 3.4 GHz, 128GB ECC RAM). The system volume will reside on an M.2 drive. For cold storage, there are 12 SATA HDDs of 16TB each and 12 SATA HDDs of 18TB each. The NAS will be used by a small office of 10 people, mainly for storage, moderate back-up operations, and media editing/streaming. The data will be backed up externally.
The goal is to strike a good balance within the impossible trinity of Storage - Performance - Security.
We're looking to set up the NAS with ZFS, which is quite new to me. I have read through countless forums, but a lot of the posts are from several years ago and it's not clear whether the information is still relevant, especially when it comes to performance.
If I understand correctly, a zpool with more vdevs (each with fewer drives) will run faster than a zpool with fewer vdevs (each with more drives). Are there any general rules of thumb as to the appropriate number of drives in a vdev?
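To make that rule of thumb concrete, here is a rough back-of-the-envelope sketch (my own simplification, not a benchmark): for random I/O, each raidz vdev is often said to deliver roughly the IOPS of a single member drive, so pool random IOPS scale with the number of vdevs rather than the number of drives. The per-drive IOPS figure below is an assumed, illustrative value for 7200 rpm SATA disks.

```python
# Rule-of-thumb model (a simplification, not a benchmark):
# - random IOPS of one raidz vdev ~ one member drive
# - pool random IOPS ~ per-vdev IOPS * number of vdevs

DRIVE_IOPS = 200  # assumed value for a 7200 rpm SATA HDD, illustrative only

def pool_random_iops(num_vdevs, drive_iops=DRIVE_IOPS):
    """Estimate pool random IOPS: each raidz vdev behaves like one drive."""
    return num_vdevs * drive_iops

print(pool_random_iops(2))  # two 12-drive vdevs  -> 400
print(pool_random_iops(4))  # four  6-drive vdevs -> 800
```

By this crude estimate, doubling the vdev count doubles random IOPS; sequential streaming throughput is much less sensitive to vdev width.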
Our cost/storage calculations limit us to sacrificing no more than four drives from the overall array.
Specifically in our context, would we be better off creating:
Option 1: One zpool containing two 12-drive vdevs, each running in raidz2
Option 2: One zpool containing four 6-drive vdevs, each running in raidz1
Option 3: Two zpools, each containing two 6-drive vdevs, each running in raidz1?
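For what it's worth, here are my rough usable-capacity numbers for the three options (ignoring ZFS metadata/slop overhead, and assuming the 16TB and 18TB drives are kept in separate vdevs so raidz doesn't truncate the larger drives to the smaller size):

```python
# Raw usable capacity per option, ignoring ZFS metadata/slop overhead.
# Assumes 16TB and 18TB drives live in separate vdevs, so raidz does not
# truncate the 18TB drives down to 16TB.

def raidz_usable(drives, size_tb, parity):
    """Usable TB of one raidz vdev: (drives - parity) * drive size."""
    return (drives - parity) * size_tb

# Option 1: two 12-drive raidz2 vdevs (one of 16TB drives, one of 18TB)
opt1 = raidz_usable(12, 16, 2) + raidz_usable(12, 18, 2)

# Option 2: four 6-drive raidz1 vdevs (two of 16TB drives, two of 18TB)
opt2 = 2 * raidz_usable(6, 16, 1) + 2 * raidz_usable(6, 18, 1)

# Option 3: same vdev layout as option 2, just split across two pools
opt3 = opt2

print(opt1, opt2, opt3)  # 340 340 340
```

So all three options cost exactly four drives of capacity; the trade-off between them is purely redundancy versus performance.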
My instinct would be to avoid raidz1 (RAID 5), given the risk of having only single-drive redundancy. However, I am concerned about the performance of a 12-drive vdev running in raidz2 (RAID 6). The few tests I have managed to find online suggest a heavy performance penalty under this set-up versus plain EXT4.
Does anyone have experience with 24-drive ZFS configurations? Would the NAS be able to handle the aforementioned tasks with two 12-drive vdevs in raidz2? Would adding another M.2 drive as an L2ARC read cache (or as a SLOG device for the ZIL) mitigate most of the performance penalty? What sort of scrubbing or resilvering times would we be looking at for a vdev with 12 drives of 18TB each?
Many thanks in advance for your help!