Setting up ZFS question - Solaris 11 Express or OpenIndiana?

Mysteriouskk

Weaksauce
Joined
Oct 21, 2006
Messages
122
Which OS should I use for my file server? Which is better for the long run?

Also, how should I set up my ZFS shares? I mostly have movies that I want to store. I know that I have to create ZFS pools, so does that mean I'll have multiple shares?

example:

zfspool_1/Movies
zfspool_2/Movies
zfspool_3/Movies

where each zfspool contains 5 HDDs, for a total of 15 HDDs in the system. Is that how it would work if I wanted to have 3 different ZFS pools?

Is there a way to combine all the zfspools into one to have a single Movies folder?
 
You can have multiple shares per pool.

zfspool/Movies
zfspool/Pron
zfspool/Nudies
zfspool/Cartoons

etc

If you want separate pools, just keep each vdev between 3 and 9 drives and you should be OK.

You could do one large pool striped across vdevs of 7 disks each plus a global hot spare; just keep the vdevs even in size (you can't do 7 and then 8, for instance). You could also do 3 groups of 5 if you don't want to bother with a hot spare.
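For illustration, the "3 groups of 5" layout (and a hot spare, if you go that route) might be created something like this; the pool name and device names here are just placeholders:
Code:
# one pool made of three 5-disk raidz1 vdevs
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
                  raidz1 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
                  raidz1 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

# optional global hot spare
zpool add tank spare c2t7d0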
 
Is there a way to combine shares on different pools into a single view?

Can I see one Movies folder that spans 3 different pools?
 

Not that I am aware of.

Any particular reason you'd want to do it that way if it were possible?
 

I've been using unRAID and have a single Movies folder where I store all my movies.
unRAID knows when a disk is getting full, so it automatically uses the next drive.

So with ZFS, will I have to put my movies on different zpools if one gets filled up?
 
You can mount pool B inside pool A, like:

/tank
/tank/newpool

This command would do that:
zfs set mountpoint=/tank/newpool <newpoolname>

But generally you expand your pool several disks at a time. If you pick mirroring you can add 2 disks at a time; with RAID-Z you can add 3, 4, or 5, which appears to be the sweet spot. For RAID-Z2 I recommend either 6 or 10. Once you expand your pool, you instantly have free space in all folders that are part of the pool, so you won't have to move anything elsewhere.
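A rough sketch of both ideas; the pool names and device names here are hypothetical:
Code:
# mount a second pool inside the first pool's tree
zfs set mountpoint=/tank/newpool newpool

# or simply grow the existing pool by adding another vdev
zpool add tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0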
 
No. You can have (and I'm betting most people do have) one "storage" zpool, as opposed to the zpool for your boot environment (e.g. rpool).

Each pool can have multiple file systems (ZFS file systems or block devices).

Each file system can be shared out. Solaris and OI have a native SMB/CIFS server, NFS, and iSCSI via COMSTAR... I think on some builds you can get AFP via netatalk; there's probably more that I'm missing.
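For example, creating a couple of file systems on a pool and sharing them could look roughly like this (the pool and file system names are made up):
Code:
zfs create tank/Movies
zfs create tank/Music
zfs set sharesmb=on tank/Movies    # kernel CIFS/SMB server on Solaris/OI
zfs set sharenfs=on tank/Music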

You create, and thus expand, zpools by adding vdevs.

A vdev is how you arrange your physical storage devices, whether it be a single disk, multiple disks, mirrored disks, or a raidz vdev (raidz1, 2, or 3; think of the number as the maximum number of disks that can fail in the vdev before the vdev fails). I'd say the rule of thumb for a vdev is: decide on the redundancy level you're comfortable with (i.e., how many disks can fail, balanced against performance requirements) and don't add vdevs that fall below that. If any vdev in a zpool fails (i.e., is faulted beyond repair), the entire zpool fails.

You can look at my past posts to see how I arranged mine, but to visualize it now, my zpool looks like this. This is the physical layout, not the file system layout... I have something like 10 file systems on top of this pool, and I don't care to post that info. Most of them are SMB shared.
Code:
someone@nunaybiz:~$ zpool status svpool2
 pool: svpool2
 state: ONLINE
 scrub: scrub completed after 14h30m with 0 errors on Tue Feb  1 14:30:46 2011
config:

        NAME        STATE     READ WRITE CKSUM
        svpool2     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c5t6d0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
          raidz2-1  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c8t6d0  ONLINE       0     0     0
            c8t7d0  ONLINE       0     0     0
        cache
          c3t4d0    ONLINE       0     0     0
        spares
          c3t6d0    AVAIL
          c3t5d0    AVAIL
          c3t7d0    AVAIL

errors: No known data errors
 
Where are you getting 6/10 for Z2? The ZFS tuning guide just specifies between 3 and 9 disks per vdev.
 
I believe this is from his own testing/experience.

I've read through a lot of sub.mesa's stuff, I'd personally trust his advice.

Now mind you, this is all from a FreeBSD perspective, right sub?
 
Thanks for all the great info.

So let's say I have 12 drives and want to use raidz1. How would I set up my drives with a Movies folder?
 
Personally, basing it just on the level of redundancy... I would not go more than 4 drives per raidz1, so assuming you want all 12 drives in the zpool as available drives, that would be three raidz1 vdevs (4 drives per vdev).

Lol... I've had quite a few drives crap out or time out on me using hardware RAID and with my ZFS server (for ZFS, those were failures or buggy, twitchy drives... thanks WD).

Having my server stay up through drive failures is a priority for me. Of course, you have to balance that with cost (i.e. available space vs. $) and performance.
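If it helps to see it spelled out, that layout plus a Movies file system would look roughly like this (the device names are hypothetical):
Code:
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                  raidz1 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
                  raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
zfs create tank/Movies
zfs set sharesmb=on tank/Movies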
 

So does that mean 4 usable drives and 1 drive acting like a parity drive?
 
No, it would be equivalent to 3 usable drives and 1 drive as parity.

That's just my advice. 25% is a mental barrier for me. I don't want my hamburger below 1/4 of a pound, and I don't want to be able to lose less than 25% of my drives before an array goes buh-bye :)
 
Ok, should I go with Solaris 11 Express or OpenIndiana,
or should I just go with sub.mesa's ZFSguru?

Which is the easiest to deal with when one drive fails? Is it easy to replace failed drives?

@hinnyjl - What do you use as your OS?
 
Seriously, just try one of them and go from there. If you use Solaris or OpenIndiana, you'll want to download napp-it, which gives you uber-easy menus to figure everything out through. You'll learn a lot very quickly just by playing around.
 
One thing that might steer you away from Solaris 11 Express is that, as I understand it, it is completely unsupported unless you pay for it. By unsupported I don't just mean tech support, but no patches, security fixes, etc. That in itself would steer me towards OpenIndiana or one of the others.

I personally found sub.mesa's ZFSGuru to have a lower learning curve than OpenSolaris/SE11/OI. Mainly because it's really a full distro so you boot from the CD and you get a fully functional system complete with menus (though it's still a work in progress).

Plus, even without ZFSGuru, BSD just seems simpler and more straightforward to me (at least until you delve into the deeper mysteries, like recompiling kernels with root-on-ZFS enabled and stuff like that). Then again I'm somewhat biased as I learned a lot of my UNIX basics on SunOS and Linux/Slackware 1.0 which were more BSDish. Solaris has a much different (granted, probably more powerful) set of administrative tools and conventions. Setting up a simple Samba server on FreeBSD, for example, is a snap, whereas the CIFS system on Solaris requires username mappings and possibly other messy ACL-related stuff. I never could get it to work quite right, even with napp-it's excellent UI. I'll probably give it another go at some point as I really do want to see if the kernel-based CIFS server does better than Samba on some directories I'm sharing that have huge numbers of files.
 
I think the biggest 3 features I need are the ones missing from ZFSguru:

# Missing feature: Error recovery; replacing failed devices.
# Missing feature: Hot spares.
# Missing feature: Email notifications upon degraded/failed pool.

Does OpenIndiana or napp-it have these features in a UI environment?
 
OpenIndiana doesn't have a UI without napp-it (napp-it is solely a UI).

You might check out NexentaStor; it's good for up to 18TB now in "free" mode.
 

Yea, I was thinking about it, but I don't want to be limited to 18TB right now.
 

Do you have more than 18TB right now?

If no: when that time comes, export the pool, load up OpenIndiana/Solaris 11 Express, and re-import it.
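The migration itself is basically two commands (pool name hypothetical):
Code:
zpool export tank    # on the old OS, before reinstalling
zpool import tank    # on the new OS, after install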
 
napp-it has those 3 features you listed.

Besides, I don't see why people are getting so hung up on the GUI. It's ridiculously simple to add hot spares or replace drives in a vdev using the CLI.
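For example, with made-up pool and device names, adding a spare and swapping out a failed disk is just:
Code:
zpool add tank spare c3t6d0        # attach a hot spare to the pool
zpool replace tank c5t2d0 c5t8d0   # replace a failed disk with a new one
zpool status tank                  # watch the resilver progress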
 

Right now I have like 4TB, but I'd rather stick with one OS and not have to worry about transferring, etc. later on.
 

It's not really transferring the data, it's just swapping the OS. But I get ya. I run NexentaStor CE on a 5.5TB array. :) Works pretty well. I'm using iSCSI for my OS X machines at home. :) Time Machine works really well like this.
 

On other systems like Windows or Linux, you store your files on a raidset. With ZFS, you
store your files on pools built from raidsets (the correct name is vdev).

If you want to optimize a new pool, think of a triangle between the optimization options:
performance, capacity, and data security.

best data security: multiple n-way mirrors or raid-z3
best performance: multiple striped single disks or striped mirrors
best capacity: one pool built from one raid-z vdev

In your case with media files, I would build one pool from one raid-z2 or z3 vdev.
The only problem with large vdevs is the rebuild time in case of failures.
(One pool built from one raid-z2 vdev is similar to a RAID-6, but without the write hole problem.)
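A sketch of that layout for, say, 12 disks (device names are placeholders):
Code:
# one pool, one wide raid-z2 vdev: best capacity, but longer rebuilds
zpool create media raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
                          c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0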

Gea
 
Where are you getting 6/10 for Z2? ZFS tuning guide just specifies between 3-9 per vdev.

I think it had something to do with the dynamic block size of ZFS. When you stripe the blocks across the "golden" number of drives, you get stripes that line up cleanly with 4K sector boundaries. With other combinations you get messy block alignments. Or something like that.
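If that is the reasoning, the arithmetic (assuming the default 128 KiB recordsize and 4 KiB sectors) would go roughly like this:
Code:
RAID-Z2, 6 disks  -> 4 data disks: 128 KiB / 4 = 32 KiB per disk   (8 whole 4K sectors)
RAID-Z2, 10 disks -> 8 data disks: 128 KiB / 8 = 16 KiB per disk   (4 whole 4K sectors)
RAID-Z2, 7 disks  -> 5 data disks: 128 KiB / 5 = 25.6 KiB per disk (not 4K-aligned)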
 
Ok, thanks for all the info.

I think I'll build my server and try OpenIndiana first and see how that goes.

btw.... will it have any issues with the Samsung F4 drives or WD EARS?
 
Which OS you use seems to be mostly irrelevant, since you can just import your pool on a new OS install. I've already done that 3 times :eek:
I went with Solaris 11 since development of the OpenSolaris derivatives seems kind of slow at the moment.

I'm using Samsung F4s and haven't had any problems yet.

As far as the folder hierarchy goes, on my previous system I had different shares for Music, Movies, etc. This time I created a single "Media" share with subfolders for Movies, Music... just simpler than mounting 5 different shares.
Like previously mentioned, you can add multiple vdevs to your pool so it all appears as one big pool.
 
Is it a lot safer to use raidz2 than raidz1?

Let's say I use 3x (6 drives in raidz2) vdevs for a total of 18 drives. Is that a lot safer than this

5x (4 drives in raidz1) vdevs?
 
besides I don't see why people are getting so hung up on the gui. It's ridiculously simple to add hot spares or replace drives in vdev using cli
No kidding, it's not hard to read the man page or just check the help output to see the syntax. I barely had any experience with any *nix variant before I did my build. Now mind you... I'd still totally install napp-it to make it easier.

I personally use OpenSolaris (the last release), but I'm thinking of going to Solaris 11 Express soon; I just haven't had the time or inclination to bring the box down to do the upgrade. I'm still fuzzy on whether OpenIndiana (or really anyone) is going to be getting the new ZFS binaries from Oracle.
 
As far as I know, adding more vdevs at a given raid level doesn't really give you that much more protection, so I'd say you're almost always better off using raidz2, unless you mirror the raidz1 vdevs, I suppose.
 
At THIS point in time I have gone with Solaris 11 based upon maturity and advanced features. FreeBSD with the ZFSguru GUI is still a work in progress. Same problem with OpenIndiana. Solaris provides a graphical interface and the built-in "Time Slider". Gea's napp-it interface makes you an expert in no time.

It is not recommended to have many zpools, but rather just one with multiple vdevs. My magic number is 5 drives per raidz1 vdev; my storage grows 5 disks at a time. 5-drive vdevs give me the most bang for the buck... speed, parity, space. I'm now exploring virtualizing Vail via VirtualBox on Solaris as a backup system, using the underlying ZFS structure for its storage.
Can't type on this phone any longer... enjoy
 