ZFS Storage Layout QUESTIONS

Vengance_01
Supreme [H]ardness · Joined Dec 23, 2001 · Messages: 7,215
I currently have a single vdev in RAIDZ2 with five 4TB SATA drives. I am building a new virtual FreeNAS server on ESXi 6.7 and will be doing direct pass-through (LSI H200 card flashed to IT mode). I have an 8-slot 3.5" SAS/SATA backplane. FreeNAS and the VMs will run on a dedicated SSD datastore attached to an internal SATA port on the Supermicro board; the pool will be used for an NFS datastore and mostly Plex media server data.

Option 1
Two vdevs of 3x4TB disks each, striped or mirrored (this is what I need help with), plus a separate vdev for backups, snapshots, and an NFS datastore to migrate VMs between my two ESXi hosts.

Option 2
Use all 8 slots: two vdevs of 3x4TB each, plus a single SSD in each vdev for caching/extra performance, striped or mirrored (ideas?). Then get a USB 3.0 external drive, pass it directly to FreeNAS, and use it for backups.

Leaning towards option 2, but I am looking for long-term sustainability with flexibility.
 
I don't immediately have an answer as there are too many unknowns in the scenario, but just some points to think about:
- Backups should not be stored on the same physical system, unless you don't care about recovering from fire/flood/etc. It depends on how critical your data is and what it's for, but consider removable media that you can take off-site.
- How much do you have to spend, and which parts do you already have available vs. need to buy?
- Realistically, how much performance do you need for each requirement? You already have an SSD for the VM datastore, so it sounds like most of the data on the zpool is going to be static media. I would consider one large RAIDZ vdev as it's more cost-effective. You're probably not going to notice the difference between that and striped mirrors if it's only one person watching a movie on Plex.
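To put rough numbers on the cost-effectiveness point: here is a back-of-envelope sketch comparing usable capacity for six 4TB disks under the two layouts. This is illustrative arithmetic only; real ZFS usable space comes out lower due to metadata, allocation padding, and TB-vs-TiB marketing sizes.

```shell
#!/bin/sh
# Rough usable-capacity comparison for six 4TB disks (illustrative only).
DISKS=6
SIZE_TB=4

# RAIDZ2: two disks' worth of parity, the rest is usable.
raidz2=$(( (DISKS - 2) * SIZE_TB ))

# Striped mirrors: half the raw capacity is usable.
mirrors=$(( DISKS / 2 * SIZE_TB ))

echo "RAIDZ2 usable:          ${raidz2} TB"   # 16 TB
echo "Striped mirrors usable: ${mirrors} TB"  # 12 TB
```

So RAIDZ2 buys roughly a third more space from the same disks, at the cost of mirror-level random I/O, which is the trade Concentric is pointing at for a mostly-Plex pool.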
 
Concentric makes some good points, and I would go with neither of those options myself. If the VM storage is going to run off a separate SSD, then that's covered and we're just considering data storage.

For six 4TB disks, I'd run either a set of mirrors (three 2-disk mirrors) or RAIDZ2. Don't worry about SSD caching for the data store unless you're running much faster than GigE network speeds and primarily NFS or something. Caching has some specific conditions to consider (not just any SSD will do), and for the most part it won't improve performance for data storage over the network; it only helps in pretty specific circumstances.
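For reference, the two layouts suggested above look something like this at the command line. This is a sketch: the pool name `tank` and device names `da0`-`da5` are placeholders for whatever FreeNAS actually assigns, and in practice you would build the pool through the FreeNAS GUI rather than raw `zpool` commands.

```shell
# Three striped 2-disk mirrors: better random I/O, survives one failure
# per mirror pair, resilvers quickly.
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# ...or a single RAIDZ2 vdev: more usable space, survives any two
# simultaneous disk failures.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
```

Either way the vdev type is fixed at creation, which is worth deciding up front since you can't convert a RAIDZ2 vdev into mirrors later without destroying and rebuilding the pool from backup.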
 
I am running a single vdev of 6x4TB disks as RAIDZ1 with a spare added, and then I am going to have a single 8-12TB disk in its own vdev for backup. Once I get a full clean copy of the data backed up, I will try the two vdevs of three disks. I am seeing about 250-300 MB/s on reads and writes from a Windows 10 VM over CIFS. FreeNAS and the Win10 VM run on the same ESXi 6.7 box using the VMXNET3 driver, with 21 GB of RAM on the FreeNAS VM. Things are running pretty well considering these are 5700 RPM drives.
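If you want to sanity-check numbers like that locally on the FreeNAS box, a crude `dd` pass gives a ballpark. Caveats: this is not a rigorous benchmark, `/dev/zero` data compresses to almost nothing if the dataset has compression enabled (hence `urandom` below), the read-back may be served from ARC rather than disk, and the `TESTFILE` path is a placeholder you should point at the pool under test.

```shell
#!/bin/sh
# Crude sequential write/read test on a dataset (not a rigorous benchmark).
# TESTFILE is a placeholder; point it at the pool you want to measure.
TESTFILE=/tmp/zfs_speedtest.bin

# Write 256 MB of incompressible data (urandom defeats ZFS compression).
dd if=/dev/urandom of="$TESTFILE" bs=1M count=256 2>&1 | tail -n 1

# Read it back, discarding the output (may hit ARC, not disk).
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

The last line of each `dd` run reports bytes transferred and throughput, which you can compare against what the Windows VM sees over CIFS to tell pool speed apart from network/SMB overhead.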
 