3TB and ZFS reliability question

DataMine

Hello, I'm looking to add more space to my current ZFS build and have 5x 3TB drives (WD Green and Seagate Green). Can anyone quantify the risk of raidz vs raidz2 vs raidz3? I know raidz survives 1 lost disk, raidz2 survives 2, and raidz3 survives 3, but how likely am I to hit an error during a rebuild with 3TB drives? This will be a separate array; I'm not adding it to my current zpool (15x 2TB in raidz2).

If raidz2 or raidz3 is better for 3TB disks, would I be better off doing mirror vdevs with the 4 matching disks I have now, adding more mirrors until I get to 8-10 disks, then rebuilding into a raidz2 or raidz3 zpool? (I'd move all data to external drives and then back; a new 10-disk raidz3 would give me 21TB versus 15TB from mirrors.)


vdev1
WDG 3TB
WDG 3TB mirror
vdev2
SGG 3TB
SGG 3TB mirror
----------
total 6TB usable; the fifth drive (WDG 3TB) unused until another drive is purchased

vs raidz (5 drives)
WDG 3TB
WDG 3TB
WDG 3TB
SGG 3TB
SGG 3TB
----------
total 12TB usable

vs raidz2 (5 drives)
WDG 3TB
WDG 3TB
WDG 3TB
SGG 3TB
SGG 3TB
----------
total 9TB usable. Two mirrors (4 drives) should be faster than this 5-drive raidz2, right?


Also, if I use mirror vdevs, I can mix drive sizes across the pool as long as the two drives within each mirror are the same size, right? (See the capacity sketch after this layout.)

vdev1
WDG 3TB
WDG 3TB mirror
vdev2
SGG 3TB
SGG 3TB mirror
vdev3
WDG 2TB
WDG 2TB mirror
vdev4
Samsung 1TB
Samsung 1TB mirror
----------
total 9TB usable
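
To double-check my usable-capacity math above, here's a quick Python sketch. It's plain arithmetic, not a ZFS calculation; a real pool loses a bit more to metadata, slop space, and padding, so treat these as upper bounds:

```python
# Sanity check of the usable-capacity totals above. Plain arithmetic
# only -- a real pool loses a bit more to metadata and padding.

def mirror_capacity(vdev_sizes_tb):
    """Each 2-way mirror contributes the size of one member drive."""
    return sum(vdev_sizes_tb)

def raidz_capacity(drives, size_tb, parity):
    """raidz{parity}: capacity of (drives - parity) data drives."""
    return (drives - parity) * size_tb

print(mirror_capacity([3, 3]))          # 2x 3TB mirrors          -> 6
print(raidz_capacity(5, 3, parity=1))   # 5-drive raidz           -> 12
print(raidz_capacity(5, 3, parity=2))   # 5-drive raidz2          -> 9
print(mirror_capacity([3, 3, 2, 1]))    # mixed mirrors above     -> 9
print(raidz_capacity(10, 3, parity=3))  # future 10-drive raidz3  -> 21
```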
 
Yes, but I would avoid mixing vdevs of greatly different sizes; performance can suffer once one or more vdevs are almost full.
 
Well, thanks, but that doesn't really answer my question about the rebuild error rate on 3TB drives.
 
That I can't say; I commented on the concern I thought was a show-stopper and stopped there.
 
So, there are a lot of questions in here. The array configuration is a trade-off between performance, redundancy, and capacity, and you'll need to decide which matters most to you. Your best bet is to head over to a ZFS forum for this sort of question, but in general you want to do this right the first time, so avoid building a temporary pool only to rebuild it later. Unless you are doing link aggregation, performance shouldn't be a big concern, because your array should easily saturate a gigabit link. If you are doing something other than home file storage, your performance requirements may vary.
I have heard there are limits to how many drives you should put in a vdev, but I don't have the technical reasons at my fingertips. Do some searching, but I believe your current array of 15 drives in one vdev is probably not a recommended configuration. I have 6x 3TB drives in raidz2 and it works well. I think that for raidz1 you should have an odd number of drives and for raidz2 an even number. Drive failure is always an issue with consumer drives, so the recommended minimum is raidz2.
I'm not sure what OS you are using, but if you head over to the FreeNAS forums, all of this info is easily searchable and applies to any ZFS installation.
Hope this helps.
 
Greetings


More on this particular topic here and here.


Can anyone quantify the risk of raidz vs raidz2 vs raidz3? I know raidz survives 1 lost disk, raidz2 survives 2, and raidz3 survives 3, but how likely am I to hit an error during a rebuild with 3TB drives?

I can only offer an educated guess, but I'd say that short of further drive failures, the chance of the rebuild itself failing is effectively ZERO.

Let's look at your statement first: "raidz = 1 lost disk, raidz2 = 2 lost disks, raidz3 = 3 lost disks." For a start, you haven't considered the possibility of UREs (unrecoverable read errors) on the other disks. If you lose one disk in raidz1 and hit a URE on another disk, you drop below minimum redundancy and that stripe cannot be reconstructed. However, ZFS is smart enough to know which files are affected, will notify you of them, AND WILL CONTINUE WITH THE REBUILD until it is finished. The same applies to a 2-disk loss plus URE with raidz2, and to a 3-disk loss plus URE with raidz3.
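
To put a rough number on the URE risk, here is a back-of-the-envelope sketch in Python. The 1 error per 1e14 bits figure is the typical consumer-drive spec-sheet rate, and the uniform-and-independent-errors model is a pessimistic simplification, not a prediction for any real drive:

```python
# Rough estimate of hitting at least one URE while reading the surviving
# disks during a rebuild, using the spec-sheet rate of 1 URE per 1e14
# bits (typical for consumer drives).

URE_RATE = 1e-14          # expected errors per bit read
DRIVE_TB = 3              # drive size in TB
BITS_PER_TB = 8e12        # 10^12 bytes * 8 bits

def p_at_least_one_ure(surviving_disks, fill_fraction=1.0):
    """Probability of >=1 URE while reading every surviving disk in full."""
    bits_read = surviving_disks * DRIVE_TB * BITS_PER_TB * fill_fraction
    return 1 - (1 - URE_RATE) ** bits_read

# Rebuilding a 5-disk raidz1 after one failure: read the 4 survivors.
print(f"raidz1 rebuild, 4x 3TB read: {p_at_least_one_ure(4):.0%}")  # ~62%
# A URE here only damages the affected file(s); ZFS finishes the rebuild.
```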

If you are rebuilding an otherwise idle array, then assuming 1TB drives rebuild in 2 hours, 2TB drives in 4 hours, and 3TB drives in 6 hours, it essentially boils down to this question: what is the chance of another drive dying in the extra two hours the rebuild takes (6 hours vs 4 hours)? I don't think an extra 2-hour window is really going to make much of a difference. The chance is obviously non-zero, but small enough not to be worth worrying about.
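
You can put a rough figure on that window too. A minimal sketch, assuming a constant failure rate derived from an annualized failure rate (AFR); the 5% AFR is an assumption for aging consumer drives, not a measured number:

```python
# Back-of-the-envelope odds of one of the surviving drives dying inside
# the extra rebuild window, under a constant-failure-rate model.
import math

AFR = 0.05                         # assumed annualized failure rate
HOURS_PER_YEAR = 8766
failure_rate = -math.log(1 - AFR) / HOURS_PER_YEAR  # per drive-hour

def p_any_drive_fails(drives, hours):
    return 1 - math.exp(-failure_rate * drives * hours)

# 4 surviving drives, 2 extra hours of rebuild (6h vs 4h):
print(f"{p_any_drive_fails(4, 2):.6%}")   # on the order of 0.005%
```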

If you had a hardware RAID array in this situation, you would be pretty stuffed: you would have a broken stripe with little chance of fixing it, let alone of knowing where the affected data was. That's assuming the rebuild process didn't simply abort the entire rebuild in the first instance, which is what usually happens. How much fun would you have locating the damaged files in NTFS, starting from blocks X to X+n (where n is the number of disks less parity drives)?

This is why ZFS is far superior to hardware RAID + NTFS in this respect.

I have heard there are limits to how many drives you should put in a vdev, but I don't have the technical reasons at my fingertips.

It's to do with IOPS: a raidz vdev has roughly the random IOPS of a single member disk. So with, say, 20 disks (see the sketch after this list):

(a) for a business that needs high performance, you would have, say, four 5-disk raid-Z1s;

(b) for a home NAS used primarily to store and accumulate data in a WORM fashion, a 20-disk raid-Z2 or raid-Z3 would be used.
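
A minimal sketch of that trade-off; the 100 random IOPS per spinning disk is an assumed ballpark for 7200rpm SATA drives, not a measurement:

```python
# Random IOPS scale with the number of vdevs, not the number of disks,
# since each raidz vdev delivers roughly the IOPS of one member disk.

DISK_IOPS = 100  # assumed random IOPS of a single spinning disk

def pool_random_iops(vdev_count):
    return vdev_count * DISK_IOPS

print("4x 5-disk raidz1: ", pool_random_iops(4), "IOPS")  # ~400
print("1x 20-disk raidz2:", pool_random_iops(1), "IOPS")  # ~100
```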

On a Sun Fire X4500 server, do not create a single vdev with 48 devices. Consider creating 24 2-device mirrors. This configuration reduces the disk capacity by 1/2, but up to 24 disks or 1 disk in each mirror could be lost without a failure

I don't know what the upper limit is (if any), but if they're saying 48 is undesirable and 24 is OK, then with a 24-drive Norco I would probably run either two 12-drive raid-Z2s or one 24-drive raid-Z3.

I have 6x 3TB drives in raidz2 and it works well. I think that for raidz1 you should have an odd number of drives and for raidz2 an even number. Drive failure is always an issue with consumer drives, so the recommended minimum is raidz2.

The ideal number of drives in a vdev is (see the sketch after this list):

(a) a power-of-2 number of data drives, so the recordsize (default 128KB) divides cleanly across them, e.g. 128KB is 16KB on each of 8 drives, or 32KB on each of 4 drives;

(b) plus the parity drives: 1 for Z1, 2 for Z2, 3 for Z3.
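
A quick sketch of why the power-of-2 rule matters, checking how the default 128KB record divides across a few vdev widths:

```python
# How a 128KB record splits across the data drives for a few vdev
# widths. Even division is the ideal case described above.

RECORDSIZE_KB = 128

for data_drives, parity in [(4, 2), (8, 2), (5, 2), (8, 3)]:
    per_drive = RECORDSIZE_KB / data_drives
    total = data_drives + parity
    note = "clean" if per_drive == int(per_drive) else "uneven"
    print(f"{total}-drive raidz{parity}: {RECORDSIZE_KB}KB / "
          f"{data_drives} data drives = {per_drive:g}KB each ({note})")
```

A 10-drive raid-Z2 (8 data + 2 parity) and an 11-drive raid-Z3 (8 data + 3 parity) both divide cleanly, which is why those widths come up next.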

My preference is a 10-drive raid-Z2. I was considering an 11-drive raid-Z3 for my latest build, and would have done so had the quality of the post-flood drives been considerably lower; since it wasn't, I decided Z2 was more than enough. If you're still less than convinced by what I've said here, then by all means go Z3. The reasons I picked 10 drives are primarily that I want to:

(a) maximize the data drives as much as reasonably possible.
(b) maximize the protection (Raid-z2)
(c) minimize the cost

Mirrors would mean usable space from only N/2 drives, as opposed to 8 data drives out of 10 with raid-Z2. Once I start filling up the 10-drive array, I start setting aside money to buy the next lot of 10 drives for when it's full.

If you're still not sure what config to set up, I have rambled on in more detail here, if you're interested.

Cheers
 
Thanks for all the help. I set up a 6x 3TB raidz2 array. As a test I pulled one of the disks with the array loaded with 5TB of data, and got a decent rebuild speed of 215 MB/s.
SATA 2 (3.0 Gbps ports):
Rebuild: 200-225 MB/s
Read: 400-430 MB/s
Write: 300 MB/s
Not an issue even with the bonded adapter (2000 Mbps):
Network read: 95 MB/s
Network write: 92-93 MB/s
 