ZFSguru NAS fileserver project

I have twenty 2TB Samsung F4s that I want to set up as two 10-disk raidz2 vdevs in the same storage pool. I created the pool with the first 10 disks, then went to the expansion tab under Pools, selected the next 10 drives, and set the redundancy to RAIDZ2 (same as the original pool). The Pools page reports 36.2T, but shouldn't this be more like 32T due to the space lost to redundancy? The Filesystems page shows a total capacity of only 27.8T. Did I expand the pool incorrectly, or is this a bug?
 
The Pools page lists ALL capacities as RAW capacity, so a 100MB file with 25% redundancy would show up as 125MB; the same goes for total/free capacity.

On the Files page, the capacities are for file storage, so 100MB will be 100MB regardless of whether it's RAID0, mirror, or RAIDZ2; the redundancy is excluded from the calculations.

In the next release, I added some text explaining this to avoid confusion (you're not the only one! :D )
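As a quick sanity check on the numbers above, here's a sketch of the arithmetic for twenty 2TB drives in two 10-disk RAIDZ2 vdevs (2 TB decimal is about 1.82 TiB binary, and four drives in total go to parity):

```shell
# Raw vs. usable capacity for 20 x 2 TB drives in 2 x 10-disk RAIDZ2
awk 'BEGIN {
  tib = 2000000000000 / (1024^4)     # one 2 TB drive in TiB, ~1.82
  printf "raw:    %.1f TiB\n", 20 * tib          # all disks, as the Pools page counts
  printf "usable: %.1f TiB\n", (20 - 4) * tib    # minus 2 parity disks per vdev
}'
```

That lands at roughly 36.4 TiB raw and 29.1 TiB usable; the slightly lower 36.2T/27.8T the GUI reports is partition, metadata, and allocation overhead, not a sign of a mis-expanded pool.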
 
Regarding Root-On-Zfs Install, what are the implications of the following options?

Code:
Memory tuning - Automatic memory tuning
Filesystem structure - Full ZFS filesystem structure
Preserve system image - Copy system image to target pool
 
Memory tuning chooses optimal settings for /boot/loader.conf based on your memory size. It is the same as pressing the 'Reset to recommended' button on the System->Tuning page; this option does that automatically for the new Root-on-ZFS installation, so you only have to reboot once, not twice, for the tuning to take effect.
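As an illustration only (the exact variables and values depend on your RAM size and the ZFSguru release; these numbers are made up for an 8 GiB box), automatic memory tuning writes ZFS tunables to /boot/loader.conf along these lines:

```
# /boot/loader.conf -- hypothetical example values for an 8 GiB system
vm.kmem_size="6g"             # kernel memory ceiling
vfs.zfs.arc_max="4g"          # upper bound for the ZFS ARC cache
vfs.zfs.arc_min="1g"          # lower bound for the ARC
vfs.zfs.prefetch_disable="0"  # leave prefetch enabled with plenty of RAM
```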

Full ZFS filesystem structure will create some additional system filesystems for the portstree. This is useful if you want to manually extend FreeBSD with software in the portstree, recommended for advanced users comfortable with SSH login and tinkering with FreeBSD themselves.

Preserve system image will copy the system image file (250MB) on your livecd to your new Root-on-ZFS installation. This is only useful if you want to install the same image again to another device without having to insert the livecd again. Not that useful for most people, and you can always re-download or insert the livecd with that system image when you need to install again.
 
Great, thanks a bunch.
Btw, are there any plans to integrate GELI support in ZFSguru anytime soon?

I hope you don't mind if I come back to you with GELI-related questions.
 
Another question regarding the Disks tab. Using the shell script, you install a bootable partition on every disk and then create a 2048-aligned data partition. Just out of curiosity, why do you make every disk bootable with real boot code?
 
You can expect GELI full-disk encryption in the next release. Two other encryption methods (zvol- and file-based) will be introduced in a future release.

Just out of curiosity, why do you make every disk bootable with real boot code?
Because you can't change it later, and it doesn't hurt either. Suppose you have a pool X that you boot from, and the disks of pool Y are also bootable: if you boot from those, you would still boot into pool X! ZFS allows one bootable pool per system, and it doesn't matter which disk you used to start the boot process; the boot loader scans all disks and looks for the bootable pool property (bootfs).

So GPT-formatted disks are bootable; GEOM-formatted disks are not. You can't change this without losing the data on the disk, so if you ever want boot support you need to pick GPT.

Technically, your bootable pool could consist of GEOM-formatted (non-bootable) disks while you boot from a different GPT-formatted disk. So you can see this behavior has advantages, but it also has limitations.
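For the curious, the layout the formatting script produces can be sketched with standard FreeBSD gpart commands like these (ada1 and the label are placeholders, and the commands are destructive, so treat this as a sketch rather than something to paste at a disk holding data):

```
# Destroys existing data on ada1 -- example only!
gpart create -s gpt ada1                          # new GPT partition table
gpart add -t freebsd-boot -s 512k ada1            # tiny boot partition
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
gpart add -t freebsd-zfs -b 2048 -l disk1 ada1    # data partition, starting at sector 2048
```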
 
Hi :)
After I changed the defective drive, the server has been good for the last 18 days now :D

One thing I'd like to see next is antivirus: an interface to run it and schedule it :)
 
Nice to hear it's working well for you now!

Antivirus is indeed something I've not heard requested before, but yes, ClamAV can be configured to scan your files regularly. I'll probably integrate something like this in a Maintenance tab, which would also be the place for regular scrubs and other 'scheduled' tasks.

Don't expect this in the upcoming release though; scheduled tasks may need to wait for at least 0.1.9. But once the extensions are in, this would be on the list.

Hope I can release a first 0.1.8 preview next week!
 
Not yet, but when I do you can see it on the System->Help page where the changelog gets displayed. Soon I'll update that for the first 0.1.8 preview release. Speaking of which, I'm getting rather close to that point. :)
 
I think it might be in a few days already. I still have to do disk formatting, installation and services.

I just completed error handling, sanity checking and friendly redirects. This should help catch illegal characters used in certain places (such as spaces) which can lead to problems; better to catch these beforehand, for a good 'newbie experience'.

Once the basic stuff is there I'll build my first .iso, and the first public preview release shouldn't take long. I'm about 80% towards the release now.
 
Update: I'm very close to an early preview of 0.1.8. This preview will lack some functionality, like the Services page, but most other functionality is present. I'm looking for people who have a test system (i.e. not one with real data) who could test this preview release thoroughly without being afraid that it breaks something or is even harmful to your files. If you want to test the 0.1.8 preview, please send me a PM! It should be ready in a day or two.

@nl-darklord: after the 0.1.8 preview, I will continue adding the long-awaited and overdue features, including a 'Welcome wizard'. This wizard will display the first time the GUI is accessed and will let you set a password and, optionally, IP-bound authentication.

Working on new features is the fun stuff; so far I've been doing the 'boring' hard stuff, but that is very important! Once I get to the 0.1.8 preview, adding all the fun features won't take that long and I can 'ramp up' releases. 0.1.8 is a major rewrite and a lot of work, but only after the rewrite will the benefits of all this work become visible.

You guys' patience shall be rewarded. :)
 
You guys' patience shall be rewarded. :)

If anyone needs to be rewarded, it's you. ZFSguru is working great for me and I'm sure it's working great for others as well. I will soon be adding an extra 2 drives to my existing server, scrapping the existing config and re-doing it in preparation for more drives. I'm currently running 8 drives in a RAIDZ2. I'm going to make it a single 10-drive RAIDZ2 for now, and eventually two 10-drive RAIDZ2s when money permits.

Thanks again for your hard work.
 
Finally, I released the ZFSguru 0.1.8 preview release!
This is a testing-only release; URL on request (private message / email).

How to update:
1) Download the .tgz file to your computer
2) Go to System->Update page on your ZFSguru web-interface using a Firefox browser
3) Use the HTTP upload form to select your downloaded .tgz file and click the button next to it.
4) You should now be running 0.1.8-preview, as shown on the Status page

A little issue in 0.1.7 prevents you from importing the web-interface this way if you are using the Chrome browser, and possibly other browsers that send no or an incorrect MIME header. So use Firefox at least for the import/update. This is fixed in 0.1.8.

Limitations
Most functionality should work. The Disks->Benchmark page could still use some work, and the Services page is missing altogether. Sharing Samba through the Files page should work, though. Another limitation is that the Network page only has display functionality and does not yet allow changing DHCP or IP settings.

Please test as many functions as you can, and report any issues here!
 
As several people reported, the updating also does not work with Firefox 3.6; it probably only works with Firefox 4.

There is an alternative! You can use the root command line to execute these commands:

First execute this command in root command line:
fetch -o /tmp/guru018.tgz http://zfsguru.com/files/ZFSguru-0.1.8-preview.tgz

Once the previous command is done, execute the second command - again as root!
tar xvfz /tmp/guru018.tgz -C /usr/local/www/zfsguru/

WARNING! ONLY do this on a ZFSguru *TEST* installation, not on an installation with real data. Once 0.1.8 is more matured it will be ready, just not yet! Only for testing!
 
What version of FreeBSD is the 0.1.8 test release based on? I believe 8.2 just went final a day or two ago.
 
The 0.1.8 preview release is Web-interface only; the system images (FreeBSD versions) are separate releases.

I'm already building the recently released FreeBSD 8.2-RELEASE amd64 dist, and will soon release a new system image called 8.2-002 based on this final version (up from the RC1 that the current 8.2-001 is based on).

The new 8.2-002 system image will probably have some core changes, though, such as rtorrent integrated for the ZFSguru web-interface only (extensions will control torrents for the user instead). More info on that when I'm further along with the system image release.
 
:D

By the way, you can see all available system images by going to the System->Install page, then clicking Root-on-ZFS to get to step 2, where all available versions are listed.

Once I release the new 8.2-002, it will also be displayed in that list and available for download. I would like to use torrents instead of direct HTTP downloads, though, to relieve strain on the web server.
 
So I just checked the SMART info on my ZFSGuru system and one of my drives has a Current Pending Sector Count of 18 and a reallocated event count of 0. Is there any way to get the drive to reassign those pending sectors with spares? Should I just RMA the drive now?
 
@Sub.Mesa

I have a question regarding ZFS and using it with Hyper-V/ESX. If I use iSCSI, how does that work with VMs? iSCSI basically presents a block device to the VM, so can I format it with NTFS?

Also, it's generally recommended to use NFS with ESX, so can I create an NFS volume and format it with VMFS for an ESX data store?

Are there any drawbacks? Can I still use the features of ZFS such as deduplication/compression?
 
So I just checked the SMART info on my ZFSGuru system and one of my drives has a Current Pending Sector Count of 18 and a reallocated event count of 0. Is there any way to get the drive to reassign those pending sectors with spares? Should I just RMA the drive now?
A zero write will fix those pending sectors. If there was physical damage, you would then see Reallocated Sector Count = 18; if there was no physical damage, you would see 0 pending / 0 reallocated instead.

If you still need the data on that disk, recover it first. If it is part of a redundant ZFS configuration, running a scrub should fix those errors as well, unless a bad sector is in a location not in use by ZFS. In that case it won't really hurt, since when ZFS *does* write to it, the HDD can swap it for a reserve sector if needed. Only reads from a bad sector are a problem; writes to a bad sector are the solution and will fix the problem.

If it's a Windows drive, you might have more trouble with this. Running SpinRite might recover the sectors without forfeiting their data, thus without data loss.
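If you want to force the remap by hand, here is a hedged sketch (device ada2 and LBA 1234567 are placeholders; the dd line destroys the contents of that sector, so be certain of the device and offset):

```
# Check the SMART attributes first
smartctl -A /dev/ada2 | egrep "Pending|Reallocated"

# Overwrite a single suspect sector with zeroes; the drive remaps it if the
# physical location is bad (Reallocated Sector Count then increases)
dd if=/dev/zero of=/dev/ada2 bs=512 count=1 seek=1234567

# On a redundant pool, a scrub repairs affected data from parity afterwards
zpool scrub tank
```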
 
I have a question regarding ZFS and using it with Hyper-V/ESX.
I'm not really familiar with those technologies, but I'll try!

If I use iSCSI, how does that work with VMs? iSCSI basically presents a block device to the VM, so can I format it with NTFS?
iSCSI is a SAN protocol, meaning that the server that holds the actual data does not know what is stored on it; all it sees are binary blocks. Basically it behaves like a hard drive: it knows about blocks, but not about files.

So on the client side (your guest VM, I presume?) you see a virtual "SCSI" hard drive, and you can format it with NTFS and boot from it like a normal hard drive, though booting requires some special tricks to get working. If your VM solution has integrated iSCSI support, it likely facilitates booting from the iSCSI volume and presents it directly as a SCSI disk, without the guest OS requiring specific drivers for it.

Also, it's generally recommended to use NFS with ESX, so can I create an NFS volume and format it with VMFS for an ESX data store?
NFS is a NAS protocol, meaning that data is stored on the server, and the server allocates files, not blocks. This means that multiple clients, and the server itself, can access that data. The server controls the filesystem, so Windows 7 could be storing its files on ZFS without knowing it. NFS and Samba are the two most used options here; Samba implements Windows file sharing (SMB/CIFS).

NFS is generally the preferred choice over Samba for Linux and other UNIX platforms. For Windows I would just keep Samba.

So with Samba/NFS you can share data across multiple computers. iSCSI is only for the guest system; nobody else accesses that data, at least not simultaneously.

Are there any drawbacks? Can I still use the features of ZFS such as deduplication/compression?
Dedup could work well with iSCSI volumes, since they can share a lot of identical data. As long as you keep the data set small, dedup shouldn't hammer your RAM requirements too much.
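On the ZFS side, this could look like the sketch below (pool name tank, dataset name, and size are all made up; note that dedup needs a newer pool version than the v15 shipped in these system images):

```
# Create a 100 GiB zvol to export as an iSCSI LUN; the client formats it
# with whatever it likes (NTFS, VMFS, ...)
zfs create -V 100G tank/vmstore

# Compression and (on pools that support it) dedup are just properties
zfs set compression=on tank/vmstore
zfs set dedup=on tank/vmstore
```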
 
I just got a message from mailing list.
FreeBSD 8.2 was released 33 minutes ago, with ZFS v15.
 
The .iso files were available for some days already. I've just built from source and am producing my first livecd based on the new system image. I still want to make some adjustments, like installing rtorrent in a jail, before I release the system image as 8.2-002 (FreeBSD 8.2-RELEASE). Note that I used the same patch as for FreeBSD 8.1 to get ZFS version 15; the current 8.2-001, based on FreeBSD 8.2-RC1, also has ZFS v15.

For the first time I will be building i386 images as well, but they will be experimental. I need to do tuning quite differently for i386, especially since automatic memory tuning is enabled for Root-on-ZFS installations, which could cause i386 systems not to boot. So changes are required in my web-interface as well; 0.1.8 final should be good in this regard and ready for i386 (normal 32-bit) systems.
 
Been following your threads with great interest. Given that the industry is moving away from 32-bit, why spend time retrofitting?
 
I gotta agree... 64-bit chips were introduced in April 2003. IIRC the last 32-bit chip (excluding the Atom/Nano) to be manufactured was circa 2005 (Intel kept making outdated chips for OEMs; they love selling obsolete hardware to customers, it makes them upgrade more often).
With Windows, 32-bit is theoretically needed for backwards compatibility with drivers that will not be recompiled (it was a terribly stupid idea to have 32-bit versions of Vista and Win7; WOW64 works fine for almost all programs except those with built-in drivers), but this is a non-issue today and has never been an issue in the open source community.

I also seriously doubt that an Athlon XP is fast enough (or can address enough RAM) to run ZFS. At least, the original designers of ZFS have explicitly said that it isn't.
 
My AMD Geode LX800 at 500MHz still is 32-bit and runs ZFS just fine. :D

Seriously, I also plan on ZFSguru having wide usage, not just as a NAS product. It would be nice if more platforms than just i386/amd64 were supported; I can do that relatively easily, and it demonstrates independent system releases.

It's also not a lot of work, just one extra compile run. Of course I need to test and make changes, but I would love to provide both 32-bit and 64-bit capability, especially considering quite a few people requested this, for example to test on an older laptop.

I do agree a serious ZFS server implies 64-bit, but a 32-bit livecd would surely please some people! There will probably be fewer i386/32-bit releases than 64-bit releases, so amd64/64-bit is definitely the focus.
 
Well, I should have said Atom and the like. So Atom, Nano, and Geode. But the full-sized processors from AMD are all 64-bit.

But fair enough; it actually makes sense when I think about it. Even though it wouldn't make a good server, a netbook would still benefit from using ZFS for storing its OS. Of course it's your time and your decision how to spend it, but now that you explain it, it makes more sense.
 
@Sub.Mesa

Also, it's generally recommended to use NFS with ESX, so can I create an NFS volume and format it with VMFS for an ESX data store?

In what universe? iSCSI and FC are preferred over any other. NFS is considered better than a Samba share, but that's it. Saying that NFS is plainly recommended with ESX is just silly. Any VCP would tell you the same.
 
In what universe? iSCSI and FC are preferred over any other. NFS is considered better than a Samba share, but that's it. Saying that NFS is plainly recommended with ESX is just silly. Any VCP would tell you the same.

I'm a VCP4, and iSCSI doesn't have the performance advantage you'd think it does. VMware itself essentially found no performance difference on 1Gbit.

http://www.vmware.com/files/pdf/storage_protocol_perf.pdf

On 10Gbit? It also looks like there's not much of a difference:

http://communities.vmware.com/thread/253535

I believe that's non-hardware iSCSI, so an iSCSI offload card may help.
 
In what universe? iSCSI and FC are preferred over any other. NFS is considered better than a Samba share, but that's it. Saying that NFS is plainly recommended with ESX is just silly. Any VCP would tell you the same.

Well, no one mentioned FC, and yes it would be best, but we weren't talking about that, or about enterprise SANs, were we? Didn't think so.

I still maintain that NFS > iSCSI.

With iSCSI you have some limitations: a single-disk I/O queue, VMFS vs. RDM decisions, zones, identical LUN IDs required across ESX servers, and you can't resize LUNs on the fly.

With NFS all of this goes away. VMDK thin provisioning by default; you can expand or shrink the NFS volume on the fly and see the effect on the ESX server with a click of the refresh button; no VMFS or RDM decisions, no zones, HBAs, or LUN IDs. No single-disk I/O queue, so your performance depends strictly on the size of the pipe and the disk array.
You can have a single mount point across multiple IP addresses and use IEEE 802.3ad link aggregation to increase the size of your pipe, whereas with iSCSI you are restricted to 1Gbps unless you have a 10Gbps network (which most people don't).
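On the ZFS side, exporting a dataset over NFS for an ESX datastore can be sketched like this (pool, dataset, and network names are illustrative; on FreeBSD the sharenfs value holds exports(5)-style options):

```
# Dataset for the ESX datastore, shared over NFS to one subnet
zfs create tank/esxstore
zfs set sharenfs="-maproot=root -network 192.168.1.0 -mask 255.255.255.0" tank/esxstore

# Resizing is just a quota change; ESX picks it up after a datastore refresh
zfs set quota=500G tank/esxstore
```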
 

I agree. Unless you're using a hardware iSCSI card there's no real performance benefit over NFS (and even with a hardware card, not much), and NFS is way more flexible, especially if you're not using FC.
 
Hey,
I can't update to the 0.1.8 preview :(

ERROR: Uploaded file must be of type "application/x-tar"

I used Firefox 3.6.13 and also tried Chrome. Do you have a solution?
ZFSguru is running in the latest VirtualBox version on Win7.
 
On the previous page there is the answer. It is post #179.
 