I'll third the suggestion to add the second parity disk, both for greater data integrity and for slightly better performance.
Optimal array sizes for raidz2 are 4, 6, and 10 drives.
Optimal array sizes for raidz1 are 3, 5, and 9 drives.
Basically: a power of 2 plus 1 for raidz1, a power of 2 plus 2 for raidz2, and a power of 2 plus 3 for raidz3.
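The rule of thumb above (a power-of-two number of data disks, plus the parity disks) can be sketched as a quick calculation:

```shell
# Optimal raidz vdev widths: a power-of-two number of data disks
# plus the parity disks (1 for raidz1, 2 for raidz2, 3 for raidz3)
for parity in 1 2 3; do
  for data in 2 4 8; do
    printf 'raidz%d: %d drives\n' "$parity" $((data + parity))
  done
done
```

This reproduces the lists above: 3/5/9 for raidz1, 4/6/10 for raidz2, and 5/7/11 for raidz3.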
If the drive has 4K sectors, incorrect alignment might result in degraded performance. I remember alignment being a larger issue when you aren't using an optimal array size. I've not used Solaris 11. On FreeBSD, I used the gnop command to make the OS see the drives as having 4K sectors before creating the pool.
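For reference, the FreeBSD gnop trick mentioned above looks roughly like this. Pool name and device names here are hypothetical; adjust for your system:

```shell
# Create a transparent gnop provider that reports 4K sectors for one disk
gnop create -S 4096 /dev/ada0

# Build the pool using the .nop device; ZFS picks up the 4K sector size
# (ashift=12) from it and applies it to the whole vdev
zpool create tank raidz2 /dev/ada0.nop /dev/ada1 /dev/ada2 /dev/ada3

# Export the pool, remove the gnop layer, and re-import on the raw device;
# the ashift is baked in at pool creation time and survives this
zpool export tank
gnop destroy /dev/ada0.nop
zpool import tank

# Verify (expect 12)
zdb | grep ashift
```

This is environment-specific and destructive to the target disks, so treat it as a sketch of the procedure rather than something to paste in verbatim.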
After creating the pool, you can check that the alignment is correct for 4K-sector drives using the following command:
zdb | grep ashift
If you see ashift=12, you're good.
If you see ashift=9, there might be some performance loss.
TLER on or off is not nearly as important for ZFS or software RAID as it is for hardware RAID, though in most multi-disk use cases turning it on is better.
WD20EARXs do in fact have 4K sectors (Advanced Format) with 512-byte emulation. There is not much I can do about it, though. FreeBSD offers somewhat of a solution by allowing you to create a zpool with ashift=12, which I could then re-attach to my Solaris host, but FreeBSD won't work with my disk controller. So I'm stuck for now.
I concur; I see no use at all for single-parity RAID levels with modern 1+ TB HDDs. Rebuilds on such drives take hours, and the array is entirely too fragile during a rebuild: you've just lost one disk, and now you're putting a heavy workload on the entire array while it's one drive away from total data loss.
If you need reliability and capacity, run double- or triple-parity RAID (raidz2, raidz3, RAID 6). If you need reliability and performance, use RAID 1 or RAID 10/0+1/etc., or the ZFS mirror equivalents.
In this particular scenario, the possible performance improvement is just icing on the cake.
Solarismen hosts binary-patched versions of Solaris 'zpool' where ashift is hardcoded to 12. Download the binary and use it to create your pool. The Solaris 11 EA (snv_173) version should work.
Damn it. I can't believe you guys are trying to convince me to give up TWO disks to parity. A whole extra 2TB disk. You guys are nuts, no way I am doing that. That would leave me only 6TB left out of 10TB. I'll take my chances. This isn't production, it's a home storage server.
Just don't get mad when we quote this post a year from now when you are on here complaining about wasting a weekend restoring all that from a backup.
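For perspective on the capacity trade-off being argued here, a quick back-of-the-envelope sketch, assuming the five 2 TB drives implied by the "6TB of 10TB" figure above and ignoring ZFS metadata/padding overhead:

```shell
# Usable vs raw capacity for a 5-drive vdev of 2 TB disks at each raidz level
drives=5
disk_tb=2
for parity in 1 2 3; do
  printf 'raidz%d: %d TB usable of %d TB raw\n' \
    "$parity" $(( (drives - parity) * disk_tb )) $(( drives * disk_tb ))
done
```

So raidz1 keeps 8 TB usable, raidz2 keeps 6 TB, and raidz3 keeps 4 TB; the argument is whether 2 TB is a fair price for surviving a second failure mid-rebuild.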
# zpool replace home6 /dev/ada1 /dev/ada3
# zpool status
  pool: home6
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h2m, 1.21% done, 4h0m to go
config:

        NAME           STATE     READ WRITE CKSUM
        home6          ONLINE       0     0     0
          raidz2       ONLINE       0     0     0
            ada0       ONLINE       0     0     0  15.5M resilvered
            replacing  ONLINE       0     0     0
              ada1     ONLINE       0     0     0
              ada3     ONLINE       0     0     0  6.82G resilvered
            da3        ONLINE       0     0     0  15.3M resilvered
            da2        ONLINE       0     0     0  15.1M resilvered
            da1        ONLINE       0     0     0  15.4M resilvered
            da0        ONLINE       0     0     0  15.2M resilvered
            da7        ONLINE       0     0     0  15.4M resilvered
            da6        ONLINE       0     0     0  15.2M resilvered
            da5        ONLINE       0     0     0  15.4M resilvered
            da4        ONLINE       0     0     0  15.2M resilvered
# zpool status
  pool: home6
 state: ONLINE
 scrub: resilver completed after 8h14m with 0 errors on Thu Jan 6 09:07:23 2011
config:

        NAME         STATE     READ WRITE CKSUM
        home6        ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            ada0     ONLINE       0     0     0  1.58G resilvered
            ada2     ONLINE       0     0     0  393G resilvered
            da3      ONLINE       0     0     0  1.57G resilvered
            da2      ONLINE       0     0     0  1.56G resilvered
            da1      ONLINE       0     0     0  1.57G resilvered
            da0      ONLINE       0     0     0  1.56G resilvered
            da7      ONLINE       0     0     0  1.57G resilvered
            da6      ONLINE       0     0     0  1.56G resilvered
            da5      ONLINE       0     0     0  1.58G resilvered
            da4      ONLINE       0     0     0  1.57G resilvered