FreeBSD ZFS NAS Web-GUI

Aha! I may have a clue what the problem is; it could be the new memory tuning while installing Root-on-ZFS. I did not test this functionality because my VM has only 2GB RAM and the memory tuning doesn't activate then. Could you try a Root-on-ZFS installation and de-select the Automatic memory tuning checkbox at Step 4?

If that solves it, then I can fix the issue. ;-)

Yes! Unchecking the memory tuning fixed the problem (this system has 8GB, by the way). I left everything else the same, and it came right up. Also, /boot/loader.conf is now populated with lots of options. Before, it was completely empty, and the System->Tuning form wasn't working right either (i.e. you could enter a change and click "Save Changes", and the form would just come back blank again).
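
(For reference, the kind of entries the automatic tuning writes to /boot/loader.conf would look roughly like the following. The names are standard FreeBSD loader tunables, but the values are only illustrative for an 8GB machine, assuming the kmem = 1.5x RAM rule mentioned later in this thread:)

  vm.kmem_size="12G"        # kernel memory: 1.5x physical RAM
  vm.kmem_size_max="12G"
  vfs.zfs.arc_max="6G"      # upper bound on the ZFS ARC cache
  vfs.zfs.arc_min="1G"      # lower bound on the ZFS ARC cache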
 
Thanks for the feedback on the direction of my project. I'm still focusing on the core features to deliver a stable 0.2.0 release, usable as a NAS with basic functionality. The question is what direction to take after that. Torrents and DLNA/UPnP media streaming come to mind, as does the file manager I wanted to implement. But these big features take time, so prioritizing might be worthwhile. For example, the torrent feature would also allow me to distribute my software and system image downloads via torrents instead, saving me bandwidth when this project 'takes off'.

As I mentioned a while back, I would like to see backing up and restoring ZFS snapshots using something like rsync. If I may reference this gentleman's work, http://www.thelazysysadmin.net/2010/...zfs-snapshots/, you can get a better idea of what I mean.

Just in case you run out of other ideas :)

WF
 
sub.mesa, you mentioned in another thread the issue of BER rearing its ugly head when doing a reconstruction with a degraded array.
A degraded raidz array has no parity, so a single bit error will result in corruption.
Only if the affected block is not a ditto block, i.e. if copies=2 is not in use. Even with a degraded RAID-Z, files with copies=2 have additional protection against BER. Metadata is always replicated, so only unprotected data on a degraded volume would be at risk.

Just to be clear:
1. that corruption is going to be limited to one file (the one containing the error), correct?
Yes; unless you use compression, in which case it could cover multiple smaller files.

2. file tables and the like are stored multiple times to protect against such issues.
3. a traditional RAID5 array will error out of the rebuild process, but ZFS will finish rebuilding (with that one corrupt file),
Correct. As for 2): metadata is given ditto blocks, i.e. replication, even on a single-disk pool.

4. Doesn't that also mean that a 2-drive mirror is not sufficient either?
Indeed; a RAID6 (RAID-Z2) should be superior to a mirror, since it guarantees surviving 2 full disk failures with 0% data loss; a mirror cannot guarantee this.

Some people like to think you need one level of redundancy for BER alone. So RAID5 would be RAID0 plus a fix for the BER issue, and RAID6 would be RAID5 plus a fix for the BER issue. I think this is too strict for ZFS; it also depends on your data. With ZFS you can give your more important files (tax/administration/work/letters/personal photos) extra protection over bulk files which are less important or can be regenerated/redownloaded.

5. What could I expect from a performance and reliability standpoint if I use (with 4K-sector drives):
a. a 6-drive raidz2
b. a 5-drive raidz2
c. a pool of 3 vdevs, each a 3-drive mirror
d. 2 separate pools, each of a single vdev, each a 3-drive mirror
A 6-drive RAID-Z2 works well; with anything under 6 drives, RAID-Z performs poorly. ZFS version 29 may fix this; for now you should consider 6 disks the minimum for RAID-Z, especially on 4KiB-sector disks. Mirror or nested mirror configurations have no issues with 4KiB sectors; only RAID-Z in particular is affected.

I would prefer the 6-drive RAID-Z2. Do note that a real backup beats any redundancy; for that reason I use two ZFS boxes: one primary that is always on, and one backup that is powered off most of the time and only syncs with the main server.

6. Wouldn't frequent scrubbing, such that there are no unreadable sectors at the time of drive failure, prevent this from becoming a problem?
It can help, but it cannot completely prevent the issue. Reading a hard drive at max speed takes about 2-5 hours; in an array configuration a single scrub could take longer than 10 hours, depending on how busy the system is. Perhaps if you could refresh all sectors every 10 minutes the BER issue would be much less severe, but that's not possible, and it would require a continuous 100% duty cycle on your HDDs, decreasing performance. Scrubbing every night might be an option, but I would think you would want a real backup in addition to redundancy if you simply cannot afford to lose data.
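
(For reference, a nightly scrub can be scheduled with a standard /etc/crontab entry like the one below; the pool name 'tank' is hypothetical, and zpool status will show scrub progress:)

  # scrub the pool 'tank' every night at 02:00
  0 2 * * * root /sbin/zpool scrub tank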

ZFS is a great first step towards that goal, but a TRUE independent backup with snapshots increases your protection considerably, to the point of never losing any precious data in your lifetime.
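
(As an illustration of such syncing between two boxes, and not necessarily how sub.mesa's setup works: ZFS snapshots can be replicated incrementally with zfs send/receive. Pool, dataset and host names here are hypothetical:)

  zfs snapshot tank/data@today
  zfs send -i tank/data@yesterday tank/data@today | \
      ssh backupbox zfs receive tank/data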
 
Thank you very much for all the answers, sub.mesa. Gives me a lot of info to work with.
 
Is this not an issue with standard non-4K drives? I can buy more 4K 2TBs (yeaaaah!) but I don't relish having to flesh out all my planned pools (2x 3-drive pools).
 
Sub, I was recommended this file manager by an old-school *nix coder that I work with:
http://www.ajaxplorer.info/wordpress/
I was wondering if you or wingfat or someone could tell me whether it is installable on the server alongside ZFSguru. I'm not asking for it to be integrated into your interface, but it would give me a good way of getting into my files and tidying them up. I have about 5TB of movies, TV shows etc. to rename, organise and so on; I basically copied torrent folders from 5 different HDDs onto the server.
 
@kxy: looks very nice, and it's LGPL-licensed, so it might integrate with my project; I have to check whether there's anything Linux-specific in there, but I don't think there should be. I'll put it on my wishlist. :)
Is this not an issue with standard non-4K drives? I can buy more 4K 2TBs (yeaaaah!) but I don't relish having to flesh out all my planned pools (2x 3-drive pools).
It was an issue on my 512-byte sector SATA drives; same poor performance for RAID-Z2 with less than 6 disks.

For 4K disks the ideal vdev configuration is:
- mirrors (no issue with 4K sectors)
- 5-disk RAID-Z (or 9-disk)
- 6-disk RAID-Z2 (or 10-disk)

The sectorsize override may also help, but at the moment it renders your pool unbootable. You should get very good performance with enough RAM and 4K disks in a configuration like the above. ZFSguru also creates properly aligned partitions for you.
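
(A minimal sketch of creating such a pool by hand, with hypothetical device names; ZFSguru does this for you, and the gnop trick shown here is the sectorsize override / '.nop' mechanism referred to elsewhere in the thread:)

  gnop create -S 4096 /dev/ada0       # present ada0 as a 4K-sector device
  zpool create tank raidz2 ada0.nop ada1 ada2 ada3 ada4 ada5
  # one .nop device per vdev is enough; ZFS records ashift=12 permanently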
 
When you say poor performance, is that performance still better than single-drive performance and just not up to what you COULD expect, or is it worse than a regular lone drive?

Less-than-optimal speeds don't bother me too much, so long as it will be an overall gain over my current WHS performance. As for my 4K pool plans, looks like I get to go shopping :cool:

Thanks so much for the quick responses; you don't know just how handy this is (especially for a neophyte like myself).

 
When you say poor performance, is that performance still better than single-drive performance and just not up to what you COULD expect, or is it worse than a regular lone drive?
Better than a single disk, with one exception: 4- or 5-disk RAID-Z2 performance, as stated, is quite poor and can be less than a single disk; at 6 disks it behaves normally.

You may also want to check the 4K testing thread here on OCP; it contains a lot of ZFS performance data.

Less-than-optimal speeds don't bother me too much, so long as it will be an overall gain over my current WHS performance. As for my 4K pool plans, looks like I get to go shopping :cool:
What hardware did you have in mind? Do you have a motherboard, RAM, CPU and controller already? Consider that ZFS likes a lot of RAM.
 
Are pools of 4 OK? I have 2 sets of 4 drives that I intend to use. I read through some of the 4K test thread; most of it is a touch over my head, and it still requires a bit of thinking to figure out what's going on :confused:

I'm treating myself to a whole new rig for this: a Supermicro X8SIL-F with an i3 of some flavour, and probably 8GB of RAM (maybe more). For the controller, I'm tossing around the idea of the Intel SASUC8i.
 
I can't figure out how to add a Samba share.
[screenshot attached: capturegu.png]

This is the screen I'm working on.
 
First, in the "Filesystem" list you click on the filesystem you want to share, such as "zfs_root_terabyte/zfsguru/usr". Then just click the "Not shared with Samba" checkbox, fill in a name for the share, and click the Share button. The text will change to "Shared with Samba". It's a little confusing, because the text next to the checkbox tells you what state the system is currently in, *not* what action will be performed if you check the box. If you check the box, you actually cause the system to do the opposite of what the text says.
 
That's what I did, but the page refreshes and the box with the typed-in text just goes blank, like in my screenshot.
 
This is highly weird.
Although nothing is showing up on the screenshot I linked, or on the page under Services->Samba, when I go to Windows Explorer and browse to //ZFSGURU, the appropriate share does show up.

I'm going to test out copying to and from the ghost share; we'll see what happens.
 
Finally got time for another round of testing.

Using the lagg driver to team 2 NICs, I had a little trouble getting Samba to work; I could not connect to the server from Windows, except when I set "interfaces = lagg0" in smb.conf.

Then it worked when I browsed to //10.10.10.10, but not when I browsed to //ZFSGURU.

When I browse to //ZFSGURU, I just get an error saying the network path was not found. Any tips on what to do?

When I browse to //10.10.10.10 I can use the full network speed without any problems, so I don't think it's related to the lagg configuration; maybe Samba is to blame?
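
(A hedged guess at the relevant smb.conf section; these are standard Samba options, but whether they solve the name-resolution half of the problem here is untested. Browsing by //ZFSGURU relies on NetBIOS name resolution, which Samba only answers on interfaces it binds to:)

  [global]
      interfaces = lagg0          # the teamed interface from the post above
      bind interfaces only = yes  # answer NetBIOS/browse traffic on lagg0 only
      netbios name = ZFSGURU      # the name Windows resolves via broadcast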
 
Hi all, I have a question. Right now I'm running ZFSguru 0.1.7-preview2e, and I noticed that more than half of my 8-disk WD20EARS raidz2 lost their labels and that the web-GUI is not reporting the 4K sector override anymore. I'm still very happy with performance: transfer from my temporary storage (Win7) to the raidz2 gives 60-65MB/s, and transfer from the raidz2 to the temp storage gives around 90-95MB/s.
Does anyone have any idea why this is happening?
 
Hi all, I have a question. Right now I'm running ZFSguru 0.1.7-preview2e, and I noticed that more than half of my 8-disk WD20EARS raidz2 lost their labels
How are the disks recognized? If they are recognized as gptid/<blabla>, then that's an issue I fixed in the upcoming system image. If you mean you lost the .nop suffix on your disks, that is normal!

and that the web-GUI is not reporting the 4K sector override anymore.
It won't after a reboot, but it will still have changed essential metadata on the ZFS pool: it now uses ashift=12 instead. You can check this with:
zdb -e <poolname> | grep ashift

Pools created without the sectorsize override would show ashift=9 instead. So you only have to do the sectorsize trick once; it permanently changes how ZFS allocates data, benefiting 4K disks.
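
(An illustrative session; the pool name 'tank' is hypothetical and the exact formatting of zdb output may differ:)

  $ zdb -e tank | grep ashift
              ashift: 12      # 2^12 = 4096 bytes: the 4K override took effect
  # without the override this would read 'ashift: 9' (2^9 = 512 bytes)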
 
First, in the "Filesystem" list you click on the filesystem you want to share, such as "zfs_root_terabyte/zfsguru/usr". Then just click the "Not shared with Samba" checkbox, fill in a name for the share, and click the Share button. The text will change to "Shared with Samba". It's a little confusing, because the text next to the checkbox tells you what state the system is currently in, *not* what action will be performed if you check the box. If you check the box, you actually cause the system to do the opposite of what the text says.
That shouldn't be the case; please test again!

First of all, the checkbox has no function. It comes checked if the filesystem is shared, and unchecked if it's not. Whether you leave it (un)checked or toggle it has no influence at all. The only thing that matters is which button you click.

It shouldn't be misleading:

If the filesystem is not shared
The text says: Not shared with Samba
The button says: Share

If the filesystem is shared
The text says: Shared with Samba
The button says: Unshare

But if there's a problem with Samba, clicking the buttons will appear to have no effect, and the page will simply revert to the original state every time. That would indicate a problem with either Samba or the smb.conf configuration file.

That's what I did, but the page refreshes and the box with the typed-in text just goes blank, like in my screenshot.
What you can do: enter the name for the share in the yellow box and then click the Share button; that should work. If it does not, it could be a problem with Samba or the Samba configuration file. Check the Services->Samba page, Reset the configuration, then restart Samba. See if that helps.
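
(The command-line equivalent of those checks would be something like the following; the config path matches the usual FreeBSD Samba port layout, which is an assumption here:)

  testparm /usr/local/etc/smb.conf     # validate the smb.conf syntax
  /usr/local/etc/rc.d/samba restart    # restart the Samba daemons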
 
Some big news!

I have produced a ZFS v28 system image and made my web-interface compatible with it.

System version 9.0-001 amd64
- based on FreeBSD 9.0-CURRENT 20101221 code + ZFS v28 patch + head patches
- highly experimental 'bleeding edge' release; DO NOT USE ON REAL DATA!
- it actually does work, however! It imports your existing pools fine.
- supports ZFS v28 including de-duplication and RAID-Z3
- booting support in VirtualBox proved difficult; VirtualBox does not yield access to devices other than the boot device. Please test more thoroughly in the real world! Create a RAID-Z2 and boot from it, have multiple pools of different ZFS versions, etcetera.
- has the new MPT Fusion 2 'mps' driver supporting LSI SAS2008 and other 6Gbps-family chips. This adds support for SuperMicro USAS2 6Gbps HBAs.
- has a few fixes for disk labeling (no more gptid labels)
- has a new menu instead of login prompt, with very basic functionality; but it does display the IP address the web-interface is running on!
- updated all packages, including Samba to 3.5.6 (up from 3.4.x); this may have performance implications.

ZFSguru web-interface version 0.1.7-preview2f
- fixed issues with ZFSv28 new data format
- adds De-duplication support to Files page, on pools and systems that support it.

I will be providing .iso and system image downloads soon; just finishing my own tests. Please consider this stuff extremely experimental and don't put your real data at risk by trying it. Only when you can afford to lose the contents of your pools, should you try the new ZFS v28 system image.

How to upgrade?
- first update your Web-interface on the System->Update page, to preview2f version (not yet available)
- next, System->Install page will allow you to perform Root-on-ZFS installation and download the newly available 9.0-001 system image to your system and use it for installation.
- after installation, reboot, and you should be running the new experimental ZFS v28 system!

Note that in order to use de-duplication, you must either create a pool of at least pool version 21, or upgrade an existing pool with the zpool upgrade command. Note that if you do this, you cannot go back to stable ZFS versions like the ZFS v15 that the stable ZFSguru releases use.
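
(For illustration, with a hypothetical pool name; zpool get/upgrade are standard commands, and the upgrade is one-way:)

  zpool get version tank    # show the current pool version
  zpool upgrade tank        # upgrade to the highest version this system supports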

So please test, but be careful not to burn your fingers! Bleeding edge can bite you. :)
This is an important step for my project, though. More will follow next week. ;-)

Cheers!
 
Hi, and thanks for the quick reply. The disks were recognized as GEOM: disk3, 5 and 8, and that had survived several reboots. Now, just before writing this, I had a power failure, and after coming up again all the labels are gone. I don't know if it's of any importance that the disks don't have labels; I was thinking it would be useful if/when a disk moves on to the other side, so that I know which physical disk to replace.

zdb -e tank | grep ashift showed ashift=12, so I suppose everything is OK.
 
@olavgg
Well, I did not test that yet, since my primary test box is VirtualBox, and as such I can't really test performance. But it should work on small data sets. For really large data, like multi-TB, you would need an awesome amount of RAM and/or an SSD as L2ARC to store the dedup tables, which grow in size relative to the dataset.
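
(A back-of-the-envelope illustration; the figure of roughly 320 bytes per dedup-table entry is a commonly quoted estimate, not a number from this thread:)

  1 TiB of data at 128 KiB recordsize    -> ~8.4 million unique blocks
  8.4M blocks x ~320 bytes per DDT entry -> ~2.7 GB of dedup tables,
  which must fit in ARC (RAM) or L2ARC to avoid crippling performance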

I don't think de-duplication is a particularly strong feature for home users. It makes sense in some situations; in others it just overcomplicates your setup and puts a heavy tax on performance, which may not be what you want. Buying an extra HDD for extra performance + storage would be more logical IMO.

Still, dedup can be useful on smaller datasets which have similar or mostly identical data; then you wouldn't need such high hardware requirements either. So I guess it's a nice feature, but a little overhyped. Other ZFS features are far more interesting. For example, you can now use a ZIL device safely, without fearing that losing the SSD functioning as ZIL would destroy all data on the pool. This makes the ZIL feature actually usable, and together with L2ARC it can significantly improve performance.
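
(Adding such devices uses standard zpool syntax; the device names are hypothetical:)

  zpool add tank log ada2      # dedicated ZIL (slog) device
  zpool add tank cache ada3    # L2ARC cache device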

@pool
Yeah, the GEOM labels can lose their label easily; with GPT that's a lot harder. A GEOM label means no partitions, and the ZFS filesystem starts at sector 1; ZFS can always 'see' such a disk by its raw device name. If you use GPT partitions, you force ZFS to look at the GPT partition instead.

So using GPT partitions would generally be the preferred choice, though it may not work on OpenSolaris or FreeNAS, which is something to consider. The GEOM label is cross-platform compatible; it might lose the label name, but ZFS will still identify the disk because it reads the raw device instead.
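
(For reference, the two labeling styles discussed above look like this on the command line; 'mydisk1' and 'ada1' are hypothetical names:)

  # GEOM label on the whole disk (cross-platform):
  glabel label mydisk1 ada1                  # creates /dev/label/mydisk1
  # GPT partition with a label (the FreeBSD-preferred route):
  gpart create -s gpt ada1
  gpart add -t freebsd-zfs -l mydisk1 ada1   # creates /dev/gpt/mydisk1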

And yeah, ashift=12 means that the sectorsize override worked on your pool; so aside from disk configuration, this should yield the best performance for your 4K disks.
 
Hi sub, thank you for your explanation. For now I'm just trying this ZFS thing out. I'm waiting for another 2 disks to make a 10-disk 4K raidz2 array; then I should not need to worry about the sector override, and the less special configuration, the better.
I have been running FreeBSD as a file server (NFS, CIFS, JBOD) for a long time, but after a major hiccup with several broken disks I decided to get some redundancy, and RAID doesn't feel right; ZFS seems to be the way. I tried FreeNAS, but after changing NICs on both ends several times and still not getting anything above 15-20MB/s on a gigabit network, I was somewhat disappointed; ZFSguru brought this up to 60-65MB/s writing and 90-95MB/s reading from the server, and that I can live with.
Thank you for your effort; I hope your release gains in popularity so that you feel confident continuing to develop/expand it.
Merry Christmas,
/Martin S.
 
Yeah, the GEOM labels can lose their label easily.

It would be nice to have a script that grabbed GEOM label info using glabel dump and put it into a log file. This could run maybe once a day with cron, to avoid problems during disk and label failures.
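
(A minimal sketch of such a script, assuming the disks show up as ada0, ada1, ...; the log path and schedule are arbitrary choices:)

  #!/bin/sh
  # append the current GEOM label state to a log file
  LOG=/var/log/glabel-history.log
  echo "=== $(date) ===" >> $LOG
  glabel status >> $LOG                 # label-to-device mapping
  for disk in /dev/ada[0-9]; do
      glabel dump $disk >> $LOG 2>&1    # raw label metadata per disk
  done

  # /etc/crontab entry to run it daily at 03:00:
  # 0 3 * * * root /usr/local/sbin/glabel-history.sh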
 
Has anybody tested ZFSguru on a motherboard with the SB850 southbridge? I was planning to buy one, with some Samsung F4s.
 
@El_Kurgan: SB850 6Gbps SATA should work fine in AHCI mode; set your BIOS to that mode and FreeBSD will use its native ahci driver, which works very well (NCQ + hot-swap + port multipliers). Any AHCI-compliant hardware should 'just work' on FreeBSD; that includes many modern non-RAID controllers.

Be sure to flash your Samsung F4s with the new firmware, to correct a corruption bug that may occur. Otherwise you should be fine! Also consider ECC memory on the AMD platform, if you haven't yet bought the RAM; AMD platforms have cheap access to ECC, and any non-Sempron dual-core AMD CPU should support it.
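
(Once booted, two standard FreeBSD commands confirm the native driver is in use:)

  dmesg | grep -i ahci    # should show the ahci driver attaching to the SB850
  camcontrol devlist      # lists the disks it found (ada0, ada1, ...)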
 
New Release: 0.1.7-preview3

My Christmas present to you guys: preview3 release!
Warning: the ZFS v28 system image, including the one included on the LiveCD, is considered very experimental, and you should not use this on your existing valuable data unless you got a very good backup! This release is for testing only!


System version 9.0-001 amd64 (bleeding edge)
  • based on FreeBSD 9.0-CURRENT 20101221 code + ZFS v28 patch + head patches
  • highly experimental 'bleeding edge' release; DO NOT USE ON REAL DATA!
  • it actually does work, however! It imports your existing pools fine.
  • supports ZFS v28 including de-duplication and RAID-Z3
  • booting support in VirtualBox proved difficult; VirtualBox does not yield access to devices other than the boot device. Please test more thoroughly in the real world! Create a RAID-Z2 and boot from it, have multiple pools of different ZFS versions, etcetera.
  • has the new MPT Fusion 2 'mps' driver supporting LSI SAS2008 and other 6Gbps-family chips. This adds support for SuperMicro USAS2 6Gbps HBAs.
  • has a few fixes for disk labeling (no more gptid labels)
  • has a new menu instead of login prompt, with very basic functionality; but it does display the IP address the web-interface is running on!
  • updated all packages, including Samba to 3.5.6 (up from 3.4.x); this may have performance implications.

ZFSguru web-interface version 0.1.7-preview3
  • fixed issues with ZFSv28 new data format
  • adds De-duplication support to Files page, on pools and systems that support it.
  • changed automatic memory tuning to new defaults (kmem=1.5x the RAM)
  • fixed automatic memory tuning on Root-on-ZFS installations
  • some important under-the-hood changes

How to upgrade?
  • first update your Web-interface on the System->Update page, to preview3 version
  • next, System->Install page will allow you to perform Root-on-ZFS installation and download the newly available 9.0-001 system image to your system and use it for installation.
  • after installation, reboot, and you should be running the new experimental ZFS v28 system!

Note that in order to use de-duplication, you must either create a pool of at least pool version 21, or upgrade an existing pool with the zpool upgrade command. Note that if you do this, you cannot go back to stable ZFS versions like the ZFS v15 that the stable ZFSguru releases use.

So please test, but be careful not to burn your fingers! Bleeding edge can bite you. :)

If there are no critical issues with this release, it will be branched as 0.1.7 (final), and the next release will be 0.1.8 in the new year. The 0.1.7 final will contain the 'stable' ZFS v15 system image based on FreeBSD 8.2-RC, so this really is a testing-only release.

Cheers!
 
sub.mesa - do any of your builds support ZFS encryption (not using GELI, I mean natively)? I am not sure if that was ported to FreeBSD or not. Thanks!
 
That code is still closed source as far as I know, and is not present in ZFS v28. Oracle said they would release the new ZFS code under the CDDL license (open source) as well, but at a later date; not sure how long that will be.

The OpenSolaris zfs-crypto work went into ZFS v30. FreeBSD's GELI implementation is fully threaded, though, while the last time I asked, zfs-crypto was a single-threaded implementation. Native ZFS encryption does mean very easy management! But for serious encryption I would still prefer GELI or GBDE.
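
(A minimal GELI-under-ZFS sketch using standard FreeBSD commands; the device name is hypothetical, and key/passphrase handling is reduced to the bare minimum:)

  geli init -s 4096 /dev/ada1    # one-time init; prompts for a passphrase
  geli attach /dev/ada1          # creates the encrypted provider /dev/ada1.eli
  zpool create tank ada1.eli     # build the pool on top of the .eli device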
 
Hey it works. :D

I can see some guys downloading. Haha, this is great; alright, torrents work. Thanks for testing this! I will now provide torrent links for each download URL as well.

How's the speed people are getting when downloading the torrents?

Edit: I was just thinking... if I turned torrent seeding of the ZFSguru data files on by default on every new installation, each installation would automatically seed the data files. That would mean a lot of people stream the data to you when you need it, and downloads would complete very quickly. I would still want an opt-out box for those who do not want to activate this torrent seeding. What do you guys think? Activate this automatically?
 
That's really great! Thanks to all who will be seeding this stuff! Currently the 'master seed' shows Uploaded: 1716.8 MB.

I really like the simplicity of torrents, and the fact that they can quickly build a collection of seeders around the globe.

I guess I should make the BitTorrent links more prominent on the new webpage, steering users towards downloading the torrent instead of the direct HTTP download.
 
Wow, this is a great thread. I know a person who loves FreeBSD, I'll have to show this to him. Once I get better hardware I'll start implementing this, so thanks for posting it!
 
Just a reminder: you can install VirtualBox (a free alternative to VMware) and get this thing running in no time! Just create a few virtual disks of 1GB size, say 4 of them, and experiment with Root-on-ZFS installation in the Web-GUI!

You would need to reserve at least 1GB of memory for the virtual machine running ZFSguru; 1.5GB+ is recommended.
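
(The virtual disks can also be created from the command line; VBoxManage createhd is standard VirtualBox syntax with sizes in MB, and the filenames are arbitrary:)

  VBoxManage createhd --filename zfs-disk1.vdi --size 1024
  VBoxManage createhd --filename zfs-disk2.vdi --size 1024
  # ...repeat for disks 3 and 4, then attach them to the VM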
 
I got to the point of upgrading to FreeBSD 9.

How do I upgrade the root ZFS pool and my main data ZFS pool to the latest ZFS version without the command line?

This is FANTASTIC!!!!!!!!!!!!!!!
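
(For reference, the command-line route mentioned earlier in the thread would be roughly the following; whether the web-GUI exposes this is not answered here:)

  zpool upgrade poolname    # upgrade the pool format (one-way!)
  zfs upgrade -r poolname   # also upgrade the filesystems on the pool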
 