24 Disk ZFS Pool Config Recommendations Needed

iolaus · n00b · Joined: Jul 12, 2009 · Messages: 21
I'm in the process of building my new NAS and have 24 hot-swap bays to utilize for my ZFS pool(s) as well as two additional bays on the chassis rear for OS drives.

The NAS will serve large file content (movies and music) as well as host around 15-20 VMs via iSCSI. A couple of the VMs will likely perform frequent small-write operations (such as db writes).

The VMs will be running on a pair of Dell R905 servers so the NAS will only be responsible for storage.

I'm trying to determine how to best utilize the 24 bays available for my particular needs.

A few options I've been considering:

  • 20-Disk SATA RAIDZ-3, 1 SSD L2ARC, 2-Disk SSD ZIL Mirror, 1 SATA Hot-Spare
  • 21-Disk SATA RAIDZ-3, 1 SSD L2ARC, 2-Disk SSD ZIL Mirror
  • 20-Disk Mirrored SATA RAIDZ-2 (10 disks each), 1 SSD L2ARC, 2-Disk SSD ZIL Mirror, 1 SATA Hot-Spare
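As a syntax sanity check, the first option would map to zpool commands roughly like the following (the pool name `tank` and device names `da0`-`da23` are placeholders, not actual devices):

```shell
# 20-disk RAIDZ-3 data vdev (hypothetical device names):
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
    da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
# Mirrored SLOG (the "ZIL mirror") on two SSDs:
zpool add tank log mirror da20 da21
# Single SSD as L2ARC cache:
zpool add tank cache da22
# SATA hot spare:
zpool add tank spare da23
```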

UPDATE

My latest thinking for configuration based on feedback:

System Disks:
  • 2-Disk SSD Mirror - Model ? / Capacity ?

VM Pool [3TB Usable]:

VM Backup Pool [4TB Usable]:

General Storage Pool [32TB Usable]:


Any feedback or additional suggestions would be very much appreciated.
 
I'm not the expert, but I'd chunk my disks up into different zpools so that your VMs and your storage don't have to share IOPS.

I'd also try to isolate the traffic for each, if that's applicable (i.e., you're not doing a VMware all-in-one).

Your media storage probably won't need L2ARC or ZIL. But for your VM pool they seem like a good idea.
I'd size them according to your needs with thoughts for expansion.

I'd recommend using high-quality disks with good reputations (I like just about any SAS disk, or Hitachi SATA disks) and trying to avoid disks like WD Greens for this kind of setup.

I'd also suggest using common, proven HBAs like the IBM M1015 or various models from LSI.
Also a quality motherboard that supports ECC and has Intel NICs, or the ability to add them.
 
Hi,

I would use two pools:
one for the VMs and one for the music/videos.
The VM pool with 8 or 10 HDDs in RAID 1 (mirrors) plus a spare, and the other pool with the rest in RAID-Z2 plus a spare, or RAID-Z3 without a spare.

A combination of both could be your option 3, with two RAID-Z2 vdevs in one pool. The IOPS are not as good as with my advice, but you won't lose as much space!
 
I'm not the expert, but I'd chunk my disks up into different zpools so that your VMs and your storage don't have to share IOPS.

I'd also try to isolate the traffic for each, if that's applicable (i.e., you're not doing a VMware all-in-one).

Your media storage probably won't need L2ARC or ZIL. But for your VM pool they seem like a good idea.

I should have mentioned that this is not an all-in-one setup, the VMs will be running on separate servers. (I'll update the original post to reflect this)

I had thought about separating the pools for file share and VMs but was shying away because of the extra ZIL/L2ARC disk requirements. Not utilizing those features on the file share pool is an interesting idea. I guess in that scenario the question becomes what kind of configuration I would use for the VM pool and if that configuration would offer significantly better IOP performance than one large pool.

I'd also suggest using common, proven HBAs like the IBM M1015 or various models from LSI.
Also a quality motherboard that supports ECC and has Intel NICs, or the ability to add them.

I've already acquired two M1015s which should meet my needs as the chassis has a built-in SAS backplane. As for the motherboard I've been looking at some Supermicro options and do intend to utilize ECC memory.
 
Thinking along the line of separate pools I came up with the following:

VM Pool:
6 Disk SAS Striped Triple Mirror, 1 Disk SSD L2ARC, 2 Disk SSD ZIL Mirror, 1 Disk SAS hot-spare

Media Pool:
14 Disk SATA RAIDZ-3

Am I correct in thinking that if I later wanted more space for the VM Pool, I could remove one disk from each of the triple mirrors and create a new standard mirror to add to the stripe? (i.e. striped 2x triple mirror -> striped 3x standard mirror)
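If detaching does work that way, the reshaping would look roughly like this (hypothetical pool and device names; `zpool detach` removes one side of a mirror, and `zpool add` appends a new top-level vdev):

```shell
# Assumed starting layout: vmpool = mirror(da0 da1 da2) + mirror(da3 da4 da5)
# 1. Detach the third disk from each 3-way mirror:
zpool detach vmpool da2
zpool detach vmpool da5
# 2. Re-add the two freed disks as a third 2-way mirror vdev:
zpool add vmpool mirror da2 da5
# Result: three striped 2-way mirrors instead of two striped 3-way mirrors
```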

Also, any problem with a 14 Disk RAIDZ-3 vdev? Is there a better way to utilize 14 disks for file/media storage?
 
You need two pools because the media and the VM storage have very different performance requirements.

The media pool does not need to be performance optimised to the same degree, because the files are large/sequential and I'm guessing there won't be a huge number of small files, writes, or users (correct me if I'm wrong?). You only need a mediocre sequential read speed for sharing a video or audio file. If you think about it, the performance of a single drive would be more than sufficient.
To maximise the storage capacity you could go for a large vdev of something like 18 drives in RAID-Z2, but that number of drives in a single vdev is not usually recommended. One reason is that such a configuration offers poor IOPS performance compared to, say, breaking the 18 drives into three 6-drive RAID-Z2s. But then you are losing an additional 4 drives' worth of space, and you don't need good IOPS if you're just storing media. As long as you have the data backed up elsewhere, a larger vdev is what I would be tempted to do for the media pool.
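The space trade-off above is quick to check with parity-only arithmetic (this ignores ZFS metadata and padding overhead, so real usable space will be somewhat lower):

```shell
# Data drives remaining after parity, per layout:
WIDE_DATA=$(( 18 - 2 ))          # one 18-drive RAID-Z2  -> 16 data drives
SPLIT_DATA=$(( 3 * (6 - 2) ))    # three 6-drive RAID-Z2s -> 12 data drives
echo "wide: $WIDE_DATA, split: $SPLIT_DATA, difference: $(( WIDE_DATA - SPLIT_DATA ))"
```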

If you were to go for 18 drives in the media pool that leaves you with 6 for the VMs.
You could go for three 2-drive mirrors (equivalent to a 3-way RAID 10), which I think would probably give you good enough performance. If you use SSDs for these VM pool drives then there's little need to bother using additional ZIL or cache drives.
 
The media pool does not need to be performance optimised to the same degree, because the files are large/sequential and I'm guessing there won't be a huge number of small files, writes, or users (correct me if I'm wrong?)

Those assumptions are correct. The media pool would not have many simultaneous users and the number of smaller files would be relatively few (photos, documents, etc). The media pool may, however, occasionally be asked to do large sequential writes while doing large sequential reads (remuxing one movie while watching another for instance).

You only need a mediocre sequential read speed for sharing a video or audio file. If you think about it, the performance of a single drive would be more than sufficient.

I have experienced issues with my existing RAID-6 storage system in situations like those mentioned above, where a large sequential read/write creates enough disk latency to cause stuttering during video streaming. I want to be sure to avoid that if at all possible. Perhaps this could be mitigated with some sort of ionice mechanism, though (which I don't have available on my current Windows-based storage system).

To maximise the storage capacity you could go for a large vdev of something like 18 drives in RAID-Z2, but that number of drives in a single vdev is not usually recommended. One reason is that such a configuration offers poor IOPS performance compared to, say, breaking the 18 drives into three 6-drive RAID-Z2s. But then you are losing an additional 4 drives' worth of space, and you don't need good IOPS if you're just storing media. As long as you have the data backed up elsewhere, a larger vdev is what I would be tempted to do for the media pool.

If you were to go for 18 drives in the media pool that leaves you with 6 for the VMs.
You could go for three 2-drive mirrors (equivalent to a 3-way RAID 10), which I think would probably give you good enough performance. If you use SSDs for these VM pool drives then there's little need to bother using additional ZIL or cache drives.

I think SSD drives for my VM pool will be cost prohibitive so I'm still leaning toward the VM pool configuration (10 drives total) outlined in my last post. I could, however, use the remaining 14 drives to make two striped 7-disk RAID-Z2s. That would only cost me one additional drive worth of capacity over the 14-disk RAID-Z3 I proposed in my previous post. It would also provide an upgrade path allowing me to start with just one RAID-Z2 vdev and create another if/when needed.
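The one-drive capacity difference claimed above checks out with parity-only arithmetic (overheads aside):

```shell
Z3_DATA=$(( 14 - 3 ))           # 14-disk RAID-Z3       -> 11 data drives
Z2X2_DATA=$(( 2 * (7 - 2) ))    # two 7-disk RAID-Z2s   -> 10 data drives
echo "RAID-Z3 layout stores $(( Z3_DATA - Z2X2_DATA )) more drive(s) worth of data"
```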

Does anyone know if there would be much of a performance difference between a 14-disk RAID-Z3 vdev and two striped 7-disk RAID-Z2 vdevs?
 
I think a .XML would be great for these sorts of capacity <-> availability ZFS & RAID-n arithmetic.

Someone should write one for us!! :D
 
I think SSD drives for my VM pool will be cost prohibitive so I'm still leaning toward the VM pool configuration (10 drives total) outlined in my last post. I could, however, use the remaining 14 drives to make two striped 7-disk RAID-Z2s. ...

My devil's-advocate argument would be that you are considering dedicating 10 hard drives to the VM pool, whereas you would need only a few SSDs to match that for performance. So although the individual SSDs might be more expensive, would they really be more expensive than 10 hard drives? I suppose it depends on exactly which models you go for.
I just remembered you mentioned there will be 15-20 VMs - how much space are we talking for those?
Plus, the fewer drives you use for the VM pool the more free slots you have for expanding the storage.
 
My devil's-advocate argument would be that you are considering dedicating 10 hard drives to the VM pool, whereas you would need only a few SSDs to match that for performance. So although the individual SSDs might be more expensive, would they really be more expensive than 10 hard drives? I suppose it depends on exactly which models you go for.
I just remembered you mentioned there will be 15-20 VMs - how much space are we talking for those?
Plus, the fewer drives you use for the VM pool the more free slots you have for expanding the storage.

A very fair point.

To the question of how much space we're talking about I guess I'd be most comfortable with somewhere in the neighborhood of 2TB minimum usable for the VM Pool. That would give me plenty of room to grow up to 30 VMs (if the need should arise) @ ~66GB/VM average.

To achieve 2TB of usable storage with SSDs I figure I'd need nine 500GB drives (four mirrored pairs plus one hot-spare), which at current prices would run ~$2700.
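The sizing and cost estimate above works out as follows (the ~$300-per-drive unit price is implied by the $2700 total, not a quoted price):

```shell
NEED_GB=$(( 30 * 66 ))          # ~1980 GB target: up to 30 VMs at ~66 GB each
PAIRS=4                         # four 2-way mirrors of 500 GB SSDs
USABLE_GB=$(( PAIRS * 500 ))    # 2000 GB usable, just over the target
DRIVES=$(( PAIRS * 2 + 1 ))     # plus one hot-spare -> 9 drives
COST=$(( DRIVES * 300 ))        # assuming ~$300 per 500 GB SSD
echo "need ${NEED_GB} GB, get ${USABLE_GB} GB from ${DRIVES} drives, ~\$${COST}"
```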

Alternatively, I could achieve 4TB of usable storage in the VM Pool utilizing four 2TB SAS drives in RAID10, one 2TB SAS hot-spare, one L2ARC SSD, and two ZIL SSDs. This would likely be enough for my purposes and would get me down to 8 drives total in the VM Pool (as opposed to 9 in the all-SSD approach).

With current pricing I figure this config would run ~$1335.

Thanks so much for the feedback! Challenging my assumptions and getting new ideas is exactly what I'm looking for. I've updated the parent post with some changes based on this discussion.
 
Regarding my previous post: am I correct in thinking I'd still want to utilize RAID10 and a hot-spare with an SSD-based VM Pool, or could I get performance comparable to my SAS + L2ARC + ZIL config with an SSD RAIDZ-1 and a hot-spare?
 
This is very close to what I am doing right now, just with a 15-disk array. I ended up doing a 13-disk RAID-Z2 as my main media pool and a 2-disk mirror as an iSCSI target in FreeNAS for my VMs (I'll probably have 4-5 max).

Last night I direct-connected my FreeNAS array to my VM server (on separate, individual NICs) and gave iSCSI its own network. I'm pretty shocked at how responsive it is, but I haven't done extensive testing yet.
 