15 drive zfs pool, how would you split it?

Guldan
Hey Guys,

Thanks to you guys I'm nearly done with my setup; best tech forum on the internet. How would you split 15 x 750GB drives in ZFS? I'm thinking just a Z2, but would you go as far as a Z3? Or maybe a Z2 with a hot spare? This is for media storage.

They are enterprise drives with low usage, but they will run 24/7. Even though they are enterprise drives, I'm still nervous because there are 15 of them.

Let me know your thoughts, thanks!
 
I'm no ZFS expert, but I'd just go with 5 sets of 3-drive raid-z's since they're enterprise drives, and while raidz2 would give you fault tolerance for eons... I think you waste too much space going that route.
 
When determining how many disks to use in a RAIDZ, the following configurations provide optimal performance. Array sizes beyond 12 disks are not recommended.
Start a RAIDZ1 at 3, 5, or 9 disks.
Start a RAIDZ2 at 4, 6, or 10 disks.
Start a RAIDZ3 at 5, 7, or 11 disks.
The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups; see the sketch below.
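For example, a minimal sketch of one way to split 15 drives: two 7-disk raidz2 vdevs plus a hot spare, all in one pool. The pool name and the da0-da14 device names are placeholders; substitute your own.

    # one pool built from two 7-disk raidz2 vdevs, plus a hot spare
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 da6 \
        raidz2 da7 da8 da9 da10 da11 da12 da13 \
        spare da14

    # confirm the layout
    zpool status tank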
 
I was unaware 12+ disks are not recommended; perhaps I will do 11 disks + 1 hot spare and keep the others for backup/spares.

Gigatexal, what do you mean by 5 sets of raid-z?

Can I do two separate groups of 7 and stripe them, like a 1+0? I'm using FreeNAS btw.
 
I would do raidz3, with or without a spare.

The problem with using many disks in a vdev is that it takes a long time to repair the zpool. But if you are using it as a media server, there will hardly be any load, and you can shut out other users while repairing. So you lose performance with many disks, but that is no issue for a media server.

If you were to use the system in production, two raidz2 vdevs and one spare would be better; that would give much better performance.

Maybe you should benchmark each configuration with bonnie++? If you do, please report the numbers back here for discussion.
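Something like this, assuming the pool is mounted at /mnt/tank (path is a placeholder) and you size the test files at roughly twice your RAM so caching doesn't mask the disks:

    # -d test directory, -s file size in MB, -u user to run as
    bonnie++ -d /mnt/tank/bench -s 16384 -u root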
 
I think the config determines how the drives are arranged.

I do mine as follows:

4 enclosures, 4 drives each.

one raidz with the TOP drive from each enclosure,
a 2nd raidz with the SECOND FROM TOP drive of each enclosure,
etc.

If one ENTIRE enclosure goes offline, I still have all my data (but all raidzs are degraded).

If you have multiple controllers in multiple slots you can do this as well.
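A sketch of that layout, assuming 16 hypothetical devices where da0-da3 sit in enclosure 1, da4-da7 in enclosure 2, and so on; each raidz vdev takes exactly one drive from each enclosure:

    # four raidz1 vdevs, one drive per enclosure in each
    zpool create tank \
        raidz1 da0 da4 da8 da12 \
        raidz1 da1 da5 da9 da13 \
        raidz1 da2 da6 da10 da14 \
        raidz1 da3 da7 da11 da15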
 
I'd do a raidz with a hot spare. 4 years to data failure, 169 years to data loss.

 
Array sizes beyond 12 disks are not recommended.
Does anyone have a Sun/Oracle link explaining why you shouldn't build arrays with more than 12 disks? I'm curious about the "why" behind it. I can find links to folks recommending no more than 12 drives, no more than 15 drives, etc., but nobody has any reasoning beyond "it's a bad idea". Is it a performance thing? The only bit I can see is from Wikipedia, which states: "It is not recommended to create a zpool with a single large vdev, say 20 disks, because write IOPS performance will be that of a single disk, which also means that resilver time will be very long (possibly weeks with future large drives)."
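Rough back-of-envelope on the IOPS point, assuming ~150 random IOPS for a typical 7200 RPM disk (my number, not from any official doc). Random write IOPS scale with the number of vdevs, not the number of disks:

    # 15 disks, one raidz2 vdev:   1 vdev  x ~150 = ~150 random write IOPS
    # 15 disks, two raidz2 vdevs:  2 vdevs x ~150 = ~300 random write IOPS
    # 14 disks, 7 mirror pairs:    7 vdevs x ~150 = ~1050 random write IOPS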
 
I'm no ZFS expert but I'd just go with 5 sets of 3-drive raid-z's as they're enterprise drives and while raidz2 would give you fault tolerance for eons...I think you waste too much space going that route.

Clearly you're no expert, because with a RAIDZ2 he "loses" 2 drives while with 5 RAIDZs he loses 5! And that's with less redundancy, since if the wrong 2 drives fail he's screwed.
 
Can I do two separate groups of 7 and stripe them, like a 1+0? I'm using FreeNAS btw.

A ZFS pool is made up of one or more vdevs. If there are several vdevs, they're striped. Each vdev can be made up however you wish: RAIDZ2, mirror, whatever.
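So yes, your two striped groups of 7 is just a pool with two raidz vdevs. You can even build it in stages; device names below are placeholders:

    # create the pool with the first 7-disk raidz2 vdev...
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6
    # ...then stripe in a second vdev (writes spread across both)
    zpool add tank raidz2 da7 da8 da9 da10 da11 da12 da13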
 
Thanks guys, lots of good info, and it's not surprising at all that everyone has a different opinion. I should also note my hardware, which I have in another thread. ESXi will just be used for a testbed, so the data isn't critical and I don't require blazing speeds.

Dell 1950 w/ FreeNAS DAS to MD1000 (media storage)
Dell 1950 w/ ESXi (using FreeNAS iSCSI target)

Right now I'm running a RAIDZ2 with all 15 drives, so it would be a slow and painful rebuild, but not nearly as painful as a RAIDZ3, which I think is overkill for media storage; plus I will have a secondary backup for worst-case scenarios.

I also don't get why I'm not supposed to go over 12 disks; I'll research it. As it stands, the only change I might make is adding a hot spare. That would give me 8.2 TB, decent performance, and peace of mind.

The good thing is I have lots of time to play around and test different configs, as my data is on another array for now.
 
I was unaware 12+ disks are not recommended; perhaps I will do 11 disks + 1 hot spare and keep the others for backup/spares.

Gigatexal, what do you mean by 5 sets of raid-z?

Can I do two separate groups of 7 and stripe them, like a 1+0? I'm using FreeNAS btw.

5 sets of 3 drives in raid-z making one big pool. My thinking is that up to 5 drives could fail; granted, you're screwed if more than 1 drive in any one set fails.

The best way to mirror and stripe is to mirror two drives at a time and then raid-0 the mirrors as a whole.

So 2x 750GB drives in RAID1, x7: you'd have 14 drives as 7 mirrored pairs. You get half the storage space, but crazy amounts of redundancy and increased I/O speed.
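In ZFS terms that's simply a pool of mirror vdevs, which ZFS stripes automatically. A sketch with placeholder device names, leaving the 15th drive as a spare:

    # 7 two-way mirrors striped into one pool (ZFS's take on RAID10)
    zpool create tank \
        mirror da0 da1 mirror da2 da3 mirror da4 da5 \
        mirror da6 da7 mirror da8 da9 mirror da10 da11 \
        mirror da12 da13 \
        spare da14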
 
Resilvering should not take long on drives that size, even if they're filled with data.
 
5 sets of 3 drives in raid-z making one big pool. My thinking is that up to 5 drives could fail; granted, you're screwed if more than 1 drive in any one set fails.

The best way to mirror and stripe is to mirror two drives at a time and then raid-0 the mirrors as a whole.

So 2x 750GB drives in RAID1, x7: you'd have 14 drives as 7 mirrored pairs. You get half the storage space, but crazy amounts of redundancy and increased I/O speed.

Yeah, that would be brilliant for a database server or something that needs I/O, but it seems like a lot of complexity for a media server, no? I'm not sure you can stripe mirrors from the FreeNAS GUI.

Resilvering should not take long on drives that size, even if they're filled with data.

Good point. I was thinking of upgrading to 2TB drives, but that would cost a lot of $$$, and I inherited these drives for almost nothing. The bonus is they are enterprise drives.
 
Hey Guys,

What do you think of this?

CIFS main storage = 13 x 750GB in a raidz2
iSCSI VM storage = 2 x 1000GB in a mirror

The array happens to have two 1TB drives (I assume two 750s died and were replaced with 1TB drives), so I might as well not waste the extra space; plus I can have a dedicated 1TB mirror for my VMs over iSCSI.
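As a sketch, that layout would be two independent pools; names and devices are placeholders, with the two 1TB disks assumed to be da13 and da14:

    # 13-disk raidz2 pool for the CIFS media share
    zpool create media raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12

    # separate mirrored pool to back the iSCSI VM storage
    zpool create vmstore mirror da13 da14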
 
The most important question here is what are you storing on it?

What is the 'main storage' going to hold?
What are the VMs going to be doing? e.g. how busy will they be, disk-wise?

Raidz = great for throughput (e.g. streaming/reading large files) but terrible for random reads (e.g. database info).
 
What kind of data will you store? If it's a database or other I/O-heavy things, then you should use only mirrors. If it's media, one big raidz2/3 might suffice if there are few users. For production work, several raidz2 vdevs of 8-12 disks each are a good choice.

On the FreeBSD mailing list, someone did a big raidz2 with 20 or 24 disks. The results were very bad: the resilver just would not finish, taking weeks, and performance was very bad. Then they redid it with several raidz2 vdevs, and performance skyrocketed and the resilver was quick.
 
Main storage is just media: videos, music, pictures, small documents.

The VMs will probably be Server 2012 and Linux; just testing OSes and whatnot. Nothing I/O intensive.

Sounds like raidz2 with 13 disks is not a good idea; perhaps I will split it up then.
 
13 disks with raidz3 might work; I am thinking 11 disks with raidz3. The best thing would be to test each config and benchmark, e.g. something like the loop below.
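A rough sketch of such a test loop; pool name and devices are placeholders, and a simple dd write is only a crude stand-in for the bonnie++ run suggested earlier:

    # build a candidate layout, time a big sequential write, tear it down, repeat
    zpool create -f testpool raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10
    dd if=/dev/zero of=/testpool/bigfile bs=1m count=20000
    zpool destroy testpool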
 