Linux+RAID card vs Solaris+ZFS?

halcyon · Gawd · Joined: Mar 20, 2003 · Messages: 717
Hi all-
I am looking into building a home NAS, the main requirements are:

-Cost effective
-Able to saturate a 1Gbit link over SMB/CIFS to Windows, at least for reads and ideally for writes as well
-As low power as possible
-Reasonably easy to administer, with the ability to assign ACLs to folders (so family members don't accidentally nuke data)

I'm considering a Mini-ITX board + low-power Sandy Bridge as the base; no decisions on case/drives yet.

I've been doing Linux sysadmin for years, but never tried any of the Solaris-based ZFS distributions. It seems to me it would be more cost effective to skip a RAID card and use 'green' or other consumer drives than to buy the more expensive enterprise SATA drives, which limit how long they spend retrying bad sectors (TLER/ERC) so RAID cards don't drop them from the array. I'd love to hear any experience people have with these two options, any pitfalls or other general suggestions.

Thanks!
 
I have years of experience with Linux md, LVM, etc. ZFS is better in every way possible.
 
My opinion: if it's for home and you don't want to tinker with it, just set it and forget it. Use what you're comfortable with, then get comfortable with something else and migrate to it later down the road.
 
No offense, but this has really been discussed in detail a lot on this forum, and the search function should be able to provide results.

Regarding your specific requirements:
Cost effective: Large "Green" drives, mainstream hardware (or whatever you might have on hand)
Able to saturate a 1Gbit link: Obviously depends on how many drives, what type of drives, redundancy level and so forth
Easy to administer: Depends on your skills. If you're a Linux guru, that platform will probably be easier for you, but as you'd know if you'd searched and read a bit, lots of Windows people find a Solaris-based ZFS solution relatively easy to set up, administer and use.

If you're not doing any virtualization, most OSes should be compatible with most hardware, but go check out the hardware whitelists and stick to them if you want a 100% flawless installation.

As you point out yourself, not buying a raid card saves you a bunch, which is one of the reasons for the huge, and still increasing, number of ZFS threads on this board.

Happy reading, and HF with your build :)
 
You don't need to buy a RAID card for straight Linux either. mdadm and LVM work fine.

My home file server uses six 2TB green drives, partitioned and grouped into mdadm arrays (100GB of sd[abc] in RAID 5 mounted at /, 90GB of sd[def] as /usr, 10GB of sd[def] as swap, and the remainder in RAID 5 as /media), with no RAID card. It saturates my gigabit link with no problem on an old LGA 775 E6750 CPU with 8GB RAM.
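
For the curious, building one of those arrays takes just a couple of commands. A rough sketch (device names follow my layout above; adjust to taste):

  # create a 3-disk RAID 5 from one partition on each drive
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
  mkfs.ext4 /dev/md0
  # record the array so it assembles at boot
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf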

Samba lets you set shares as read-only, so ACLs aren't a problem.
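
Something like this in smb.conf, for example (the share name is made up; the path matches my mount above):

  [media]
      path = /media
      read only = yes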
 
I have an Ubuntu 11.10 workstation with an M1015 card flashed to IT firmware, attached to five 2TB green drives using zfsonlinux. I think it meets all your requirements, except I have a larger case. I'm very happy.
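
Setting up the pool is roughly a one-liner (the pool name "tank" and the device names are just examples; /dev/disk/by-id paths are safer in practice):

  zpool create tank raidz sdb sdc sdd sde sdf   # 5-drive raidz (single parity)
  zpool status tank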
 
One big advantage, as far as I can tell (md experts please correct me if I am wrong): with md/lvm/fs, you need to reserve the space a volume/snapshot will need ahead of time (when partitioning). With ZFS, by default, a new dataset (think volume/filesystem) shares the free storage of the containing pool.
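
To make that concrete, a quick sketch (pool and dataset names are hypothetical):

  zfs create tank/photos
  zfs create tank/music
  zfs list    # both datasets report the same AVAIL, drawn from the pool's free space
  # you only size things if you choose to, e.g. a cap:
  zfs set quota=500G tank/photos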
 
You don't need to buy a RAID card for straight Linux either. mdadm and LVM work fine.

I agree. There is little need for a RAID card on Linux if you have hardware made within the last 5 or so years. At work, on a 3+ year old Core 2 Quad, I get 800+ MB/s reads (and 500+ MB/s writes) from my 9-drive Linux software RAID 6 array, and it rebuilds in around 8.5 hours (not days, like cheap hardware RAID) for 14TB of space. During normal reads or writes (no rebuild running) the RAID uses 7% to 12% of a single core. That does increase during a rebuild, but nowhere near 100% of all 4 cores.
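
If you want to watch a rebuild or tune its speed, the standard knobs are (md0 is an example device name):

  cat /proc/mdstat                  # per-array sync progress and ETA
  mdadm --detail /dev/md0           # detailed state, including rebuild percentage
  sysctl dev.raid.speed_limit_min   # floor and ceiling for rebuild speed, KB/s per device
  sysctl dev.raid.speed_limit_max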
 
I have a few followup questions:

-I took a look at the ZFS ports to Linux, but they seemed a little dodgy. Are they well supported? Is there a decent GUI to deal with them?

-Are ZFS drives migratable between systems? Imagine I have 6 drives running ZFS, and a separate OS drive. If the OS drive dies, can I hook up the ZFS drives to a new OS install and have them correctly recognized and work as expected?

-Does Samba allow user-based per-directory ACLs? I'd like full read/write to everything myself, but read-only access for other users on certain directories.

Thanks!
 
One big advantage, as far as I can tell (md experts please correct me if I am wrong): with md/lvm/fs, you need to reserve the space a volume/snapshot will need ahead of time (when partitioning). With ZFS, by default, a new dataset (think volume/filesystem) shares the free storage of the containing pool.

With LVM you can dynamically expand or contract a filesystem, provided the filesystem allows for that. XFS does not currently allow shrinking, while ext3, ext4 and ReiserFS (among others) will both expand and shrink.
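A sketch of both directions, assuming a volume group vg0 with an ext4 LV called data (growing can be done online; shrinking ext4 cannot):

  # grow: extend the LV, then the filesystem
  lvextend -L +50G /dev/vg0/data
  resize2fs /dev/vg0/data
  # shrink: filesystem first, then the LV
  umount /mnt/data
  e2fsck -f /dev/vg0/data
  resize2fs /dev/vg0/data 100G
  lvreduce -L 100G /dev/vg0/data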

-Are ZFS drives migratable between systems? Imagine I have 6 drives running ZFS, and a separate OS drive. If the OS drive dies, can I hook up the ZFS drives to a new OS install and have them correctly recognized and work as expected?

Yes, this will work, meaning you can easily transfer your arrays/pools. So will an mdadm array and LVM.
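The move looks roughly like this (the pool name is an example):

  zpool export tank        # on the old box, if it still boots
  zpool import             # on the new install: scans disks, lists importable pools
  zpool import tank
  zpool import -f tank     # force it if the old OS died without exporting
  mdadm --assemble --scan  # the mdadm equivalent: find and assemble known arrays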

-I took a look at the ZFS ports to Linux, but they seemed a little dodgy. Are they well supported?

There is a userspace FUSE implementation (zfs-fuse) that is kernel-independent and has been around for a long time. It used to be very slow, but that is mostly fixed by now. There is also a kernel-based solution (zfsonlinux); however, that may restrict which kernel versions you can use and hold your kernel updates back to the schedule of the team who did the port.

Is there a decent GUI to deal with them?

I have not seen any on Linux. However, I do all of my management from the CLI, usually via SSH, so it's not like I have looked hard.
 
Sorry, I was not clear. Dynamic resizing of the FS wasn't my point; it was that with LVM you can't arbitrarily create volumes/snapshots without deciding their sizes up front, no? Not having to size things out is a huge win for me...
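
To make it concrete (names are hypothetical):

  # LVM: the snapshot's copy-on-write area is a fixed size picked up front,
  # and the snapshot is invalidated if changes outgrow it
  lvcreate --snapshot -L 10G -n data_snap /dev/vg0/data
  # ZFS: no size to pick; the snapshot consumes pool free space as data diverges
  zfs snapshot tank/data@before_upgrade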
 
One caveat: if your ZFS pool was created under FreeBSD and you used a GPT partitioning scheme instead of GEOM, it won't be importable into Solaris.
 
-Does Samba allow user-based per-directory ACLs? I'd like full read/write to everything myself, but read-only access for other users on certain directories.

Yes, Samba can be set up for that, using either groups or just per-user permissions.
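
A minimal smb.conf sketch of that (share, path and user names are made up):

  [family]
      path = /tank/family
      read only = yes       # everyone defaults to read-only
      write list = halcyon  # listed users (or @groups) can write anyway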
 