FreeBSD ZFS NAS Web-GUI

Status
Not open for further replies.
No upgrade in the web-GUI yet; but the command is easy:

zpool upgrade <poolname>

If you also want to upgrade your filesystems:

zfs upgrade -r <poolname>

Keep in mind that you can't downgrade ZFS versions: once you upgrade, you can never go back to the stable v15 ZFS versions. You can also decide to upgrade to a specific version, like 22, the same as the stable Solaris 10 release. For that, use the commands:

zpool upgrade -V 22 <poolname>
zfs upgrade -r -V 4 <poolname>
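The version-pinned upgrade above can be sketched as a dry run (`tank` is a placeholder pool name, and `run` only echoes each command, so no real pool is touched):

```shell
# Dry-run sketch of the version-pinned upgrade; "tank" is a placeholder
# pool name and run() only echoes, so nothing is modified.
POOL=tank
run() { echo "+ $*"; }          # replace the echo with "$@" to execute for real

run zpool get version "$POOL"   # check the pool's current version first
run zpool upgrade -V 22 "$POOL" # upgrade the pool, pinned at version 22
run zfs upgrade -r -V 4 "$POOL" # upgrade all filesystems, pinned at version 4
```

Checking the current version first matters because, as noted above, there is no way back down once the upgrade has run.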
 
I've run into several issues.

1. Virtual Machine:
0.1.7-preview2 worked fine inside VMware Workstation. It boots and creates a zfs-on-root on a USB stick, which I use to install ZFSGuru on the server machine (it doesn't have a DVD drive).
But 0.1.7-preview3 didn't boot until memory was increased from 1GB to 1.5GB.

2. Moving on to real hardware:
Shortly after starting the benchmark, it panics.
Code:
Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 02
fault virtual address	= 0x0
fault code		= supervisor read data, page not present
instruction pointer	= 0x20:0xffffffff80581f06
stack pointer	        = 0x28:0xffffff8000071aa0
frame pointer	        = 0x28:0xffffff8000071b30
code segment		= base 0x0, limit 0xfffff, type 0x1b
			= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags	= interrupt enabled, resume, IOPL = 0
current process		= 2 (g_event)
trap number		= 12
panic: page fault
cpuid = 0
Uptime: 6m4s
Cannot dump. Device not defined or unavailable.
Automatic reboot in 15 seconds - press a key on the console to abort
A similar thing occurred after manually creating a pool and copying around 5 GB; the system died.

last benchmark attempt had a more obscure result:
Code:
login: Dec 30 22:05:47 zfsguru syslogd: exiting on signal 15
pid 1403 (php), uid 0, was killed: out of swap space
pid 18198 (dd), uid 0, was killed: out of swap space
pid 1019 (lighttpd), uid 80, was killed: out of swap space
Swap? The boot partition is 10 GB. Well, the system didn't boot after that.

3. ZFSGuru script.
When booting from the 0.1.7-preview2 livecd, updating the web script to the latest version and then trying to install a zfs-on-root, it attempts to download the 8.1 distro from the internet instead of using the CD. So I had to perform the update after installation.

Never had an issue with 8.1 :confused:
 
- 1GB may indeed not be enough anymore, since the new preview3 FreeBSD9 image grew in size
- how much memory does your real system have?
- the "out of swap space" means you ran out of memory, and it got so severe that the kernel had to kill off running applications to free up memory! You would need swap to keep things from crashing; generally this means you're running with too little RAM.
- the g_event crash could be a real issue with 9-CURRENT; this may just fix itself. It would be nice if you could test with more RAM to see if that has any relation to it. But it's entirely possible this is a 'real bug' in FreeBSD. After all, we're working on the development 9-CURRENT branch now!
- your "booting from 0.1.7-preview2 livecd, updating web script to latest version and then trying to install a zfs-on-root" should have worked. It should not download anything without you clicking the download button. Can you describe exactly what happened? Does it not find your cdrom?
- downloading a new system image may fail if you're on the LiveCD with too little RAM
- would using Virtualbox instead of VMware change anything?
- note that benchmarking while running the livecd distribution may not be recommended; after installing root-on-zfs you save about 0.6GB of RAM, and that may help.
 
The system has 8GB of memory detected by the system. Other specs are in my signature. For the record, all benchmarks are performed using a root-on-zfs installation.

I've tried downgrading to 0.1.7-preview2 with the latest update. Again, the benchmark was halted by out-of-swap-space kills:
Code:
pid 1435 (php), uid 0, was killed: out of swap space
pid 830 (mountd), uid 0, was killed: out of swap space
pid 1005 (lighttpd), uid 80, was killed: out of swap space

Code:
ZFSGURU-benchmark, version 1
Test size: 8.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 5 disks
disk 1: gpt/d2_1
disk 2: gpt/d2_2
disk 3: gpt/d2_3
disk 4: gpt/d2_4
disk 5: gpt/d2_5

* Test Settings: TS8; 
* Tuning: KMEM=12g; AMIN=4g; AMAX=6g; 
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	234 MiB/sec	235 MiB/sec	255 MiB/sec	= 242 MiB/sec avg
WRITE:	471 MiB/sec	485 MiB/sec	476 MiB/sec	= 477 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	269 MiB/sec	277 MiB/sec	273 MiB/sec	= 273 MiB/sec avg
WRITE:	571 MiB/sec	572 MiB/sec	573 MiB/sec	= 572 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	191 MiB/sec	185 MiB/sec	183 MiB/sec	= 186 MiB/sec avg
WRITE:	228 MiB/sec	207 MiB/sec	225 MiB/sec	= 220 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	152 MiB/sec	156 MiB/sec	154 MiB/sec	= 154 MiB/sec avg
WRITE:	344 MiB/sec	336 MiB/sec	348 MiB/sec	= 343 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	39 MiB/sec	39 MiB/sec	40 MiB/sec	= 40 MiB/sec avg
WRITE:	61 MiB/sec	69 MiB/sec	59 MiB/sec	= 63 MiB/sec avg

Now testing RAIDZ2 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	192 MiB/sec	214 MiB/sec	188 MiB/sec	= 198 MiB/sec avg
WRITE:	76 MiB/sec	72 MiB/sec	77 MiB/sec	= 75 MiB/sec avg

Now testing RAID1 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	168 MiB/sec	155 MiB/sec	155 MiB/sec	= 159 MiB/sec avg
WRITE:	118 MiB/sec	104 MiB/sec	118 MiB/sec	= 114 MiB/sec avg

Now testing RAID1 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	306 MiB/sec	301 MiB/sec	312 MiB/sec	= 307 MiB/sec avg
WRITE:	127 MiB/sec	143 MiB/sec	128 MiB/sec	= 133 MiB/sec avg

Now testing RAID1+0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	200 MiB/sec	192 MiB/sec	208 MiB/sec	= 200 MiB/sec avg
WRITE:	257 MiB/sec	261 MiB/sec	253 MiB/sec	= 257 MiB/sec avg

Now testing RAID0 configuration with 1 disks: cWmRd@cWmRd@cWmRd@
READ:	105 MiB/sec	106 MiB/sec	114 MiB/sec	= 109 MiB/sec avg
WRITE:	130 MiB/sec	118 MiB/sec	141 MiB/sec	= 130 MiB/sec avg

Now testing RAID0 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	173 MiB/sec	172 MiB/sec	161 MiB/sec	= 169 MiB/sec avg
WRITE:	276 MiB/sec	265 MiB/sec	299 MiB/sec	= 280 MiB/sec avg

Now testing RAID0 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	229 MiB/sec	203 MiB/sec	222 MiB/sec	= 218 MiB/sec avg
WRITE:	387 MiB/sec	388 MiB/sec	388 MiB/sec	= 388 MiB/sec avg

Now testing RAIDZ configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	102 MiB/sec	113 MiB/sec	114 MiB/sec	= 110 MiB/sec avg
WRITE:	98 MiB/sec	108 MiB/sec	120 MiB/sec	= 109 MiB/sec avg

Now testing RAIDZ configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	128 MiB/sec	123 MiB/sec	128 MiB/sec	= 126 MiB/sec avg
WRITE:	207 MiB/sec	201 MiB/sec	206 MiB/sec	= 205 MiB/sec avg

Now testing RAIDZ2 configuration with 3 disks: cW

This might be worth mentioning: I had replaced my SSD boot drive with a 2.5" Western Digital. Too slow to swap? Funny. Next I will try using a bootable flash drive. Or I could spare one of my drives as a boot disk for test purposes.

- your "booting from 0.1.7-preview2 livecd, updating web script to latest version and then trying to install a zfs-on-root" should have worked. It should not download anything without you clicking the download button. Can you describe exactly what happened? Does not it not find your cdrom?
It only has a download button, no link on the version number, even though the hash is the same. Here is a screen shot. Without the updated script, it automatically detects and mounts the cdrom. It doesn't sound like a big deal, does it?

Thanks for your work mesa o/
 
Regarding the cdrom detection: I should have realised I changed the cd label that the Web-GUI looks for. There is a work-around:
- login via SSH or use root command line to execute:
# mount -t cd9660 -r /dev/iso9660/ZFSGURU /cdrom
- copy /cdrom/system.ufs.uzip and the .md5/.sha1 files to /tmp/ using the command:
# cp -p /cdrom/system.ufs.uzip* /tmp/
- now refresh the web-interface and it will behave the same as if you had downloaded that system image.
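The workaround above, as a self-contained script. The mount step needs a real disc, so `/cdrom` is simulated with a temporary directory here; the copy step itself is exactly the one from the post.

```shell
# The cdrom workaround as a script. On the real system, first run:
#   mount -t cd9660 -r /dev/iso9660/ZFSGURU /cdrom
# Here /cdrom and /tmp are simulated with temp dirs so the copy can run anywhere.
CDROM=$(mktemp -d)   # stands in for /cdrom after the mount step
TMP=$(mktemp -d)     # stands in for /tmp

# simulate the files the LiveCD would provide
touch "$CDROM/system.ufs.uzip" "$CDROM/system.ufs.uzip.md5" "$CDROM/system.ufs.uzip.sha1"

# copy the system image plus its checksum files, preserving timestamps (-p)
cp -p "$CDROM"/system.ufs.uzip* "$TMP"/

ls "$TMP"
```

The glob catches the image and both checksum files in one go, which is why the web-interface then treats them as a completed download.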

But this would only be an issue with older liveCD; i will produce newer stable LiveCD images as well soon that wouldn't have this issue.

During my testing this actually worked, but I guess I've now broken this compatibility by changing the cd label; yet I think it's better to do this now, before the final 0.1.7, and work to provide a really good upgrade path from there. Upgrade and downgrade are the things I want to concentrate on, since that gives you a lot of flexibility. If something doesn't work with web-gui version X or system version Y, then you can try different versions by just clicking some buttons; at least that's the idea. Downgrading the web-gui is also not implemented.

As for your swap issues: if you have 8GB RAM this is strange. But you could add some file swap, quick and dirty:
- edit the /etc/rc.conf file:
# ee /etc/rc.conf
- scroll to the bottom and add the following, substituting poolname for your pool name
swapfile="/poolname/swap.000"
- exit and save (esc-enter-enter)

Now create a 4GiB swap file:
dd if=/dev/zero of=/poolname/swap.000 bs=1m oseek=4095 count=1

Now reboot and check the 'top' output to verify you indeed have 4GB of swap; then re-test whether the issue still occurs.
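The dd trick above creates a sparse file: it seeks almost to the end and writes a single block, so the file gets its full logical size at almost no disk cost. A scaled-down, runnable demonstration (4 MiB in a temp file instead of 4 GiB on the pool):

```shell
# Scaled-down demo of the sparse swap-file trick. On the real system:
# of=/poolname/swap.000, bs=1m, oseek=4095 (oseek is the FreeBSD spelling;
# GNU dd calls the same operand seek).
SWAPFILE=$(mktemp)

# seek 3 MiB in, then write 1 MiB: logical size becomes 4 MiB,
# but almost no blocks are actually allocated
dd if=/dev/zero of="$SWAPFILE" bs=1048576 seek=3 count=1 2>/dev/null

ls -l "$SWAPFILE"
```

With bs=1m, oseek=4095 and count=1 the same arithmetic gives the 4 GiB (4096 MiB) file the post asks for.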
 
Quick update:
I should have realised I changed the cd label that the Web-GUI looks for. There is a work-around:
Yup, that did the trick.

As for your swap issues: if you have 8GB RAM this is strange. But you could add some file swap, quick and dirty:
The server seems to be happy with its new swap file. So far I've run a small benchmark, then created a version 28 pool and copied around 25 GB to it without a problem. I will do a full benchmark later.

thanks for your continuous support!
 
You're welcome! I'm launching my new ZFSguru website in a few hours, feel free to have a peek when it's open!

I'm also readying a stable 8.2-RC1 system version image, but it probably won't be ready until after New Year. But the general idea is that ZFSguru should be ready to rock in 2011!

Happy New Year everyone! :)
 
Got around to playing with this again.

Did Root-on-ZFS, no problems there.

Something of note: previously, as a complete noob, I found the whole filesystem/sharing setup a bit unintuitive and confusing. I didn't immediately know how to create a share, nor is there any explanation of filesystems. I think that's something Napp-It did pretty well, and it's the reason I understand what's going on in ZFSguru now.

1. Had to use cmdline to replace disks. Planned functionality?
2. Had to reboot to get the pool to expand to the new capacity. I set the autoexpand property on the pool via cmdline and figured it would expand. Nope. Did export/import, nothing. Perhaps I'm just doing it wrong. :)
3. Regarding snapshots: I was able to restore snapshots as expected (via cmdline). One difference I noted between this and Napp-It is the volume shadow copy action you get with the SolarisExpress+Napp-It combo.
4. Sharing. Some sort of user account creation/assignment possible via GUI?
 
A bug I've found...
when enabling the ability to 'destroy' a pool, you cannot disable it afterwards.
 
Update: I'm readying the final 0.1.7 web-interface release and a new stable 8.2-RC1 system image and livecd for download; just finishing up and I'll do the announcement. :)

Once the stable version is up I'll launch the new website. :cool:

@Ruroni:

Thanks a lot for your feedback; this is exactly what I'm looking for. Please explain more about how you first interpreted things in my web-gui that you found awkward or not easy to immediately understand.

However, do understand that I'm concentrating on features right now. The really user-friendly GUI will be ready when I've implemented some sort of user guide: the ZFS guru, like a personified wizard that answers your questions and gives you advice or issues cautions. That's the idea behind the name ZFSguru: guiding the user through the world of ZFS.

Regarding your inquiries:
1. Planned core feature; you might see it in 0.1.8.

2. Normally exporting + importing would give you the capacity immediately, but this might have to do with Zpool version 16 (stmf). I remember some talk that this would increase capacity on the fly, rather than requiring an export+import. But you say you had to reboot? Were you running Root-on-ZFS on that pool?
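For reference, the usual command-line sequence for growing a pool onto replaced, larger disks might look like this (a sketch only: `tank` and `gpt/d2_1` are placeholder names, `zpool online -e` needs a pool version that supports expansion, and `run` just echoes each command):

```shell
# Echo-only sketch: growing a pool after replacing its disks with larger
# ones. "tank" and "gpt/d2_1" are placeholders.
POOL=tank
DISK=gpt/d2_1
run() { echo "+ $*"; }               # replace the echo with "$@" to execute

run zpool set autoexpand=on "$POOL"  # let the pool grow onto new space
run zpool online -e "$POOL" "$DISK"  # explicitly expand a replaced device
run zpool list "$POOL"               # SIZE should now show the new capacity
```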

3. I still have to figure out managing snapshots. The samba passthrough is an interesting feature, as is Apple Time Machine support for OSX users. For the short term, you will see snapshot management like rolling back, cloning and so forth in a separate tab on the Files page. Of course this is very important functionality!

4. No; permissions can be tricky. I use file permissions with an 'nfs' user account, which has uid=1000, ideal for default NFS access on systems like Ubuntu Linux. Files you write via NFS should be readable+writable via Samba access as well. To make that work, I use only the nfs user for both, to prevent any file permission issues.

However, you can do permissions several ways: NFS can limit per IP or pf firewall can do the same job. Samba can have virtual users which can have access to different shares with different read/write rights. Depending on your solution, you could rig your configuration using either NFS, Samba or both.

Permissions are something I don't fully master yet, in the sense that what users want and how the system manages permissions can be two incompatible things. ACLs add more complexity but also opportunity, though I've not played with them myself yet. I would have to, in order to provide decent permissions support in ZFSguru.

What you propose would be simple user/group creation via the Web-GUI, but what next? A simple "pw useradd john" would do the same thing; that would be the command I execute via the GUI. So instead, I think the GUI should be more powerful: not just doing system-related tasks, but providing solutions that directly address the needs of the user.

Example: user John has a boring sister whom he doesn't like very much, but his parents told him to configure things so that she can play her Toy Story movies or whatever. So John wants exclusive access to his own stuff, without either his parents or his sister having access to it, or even knowledge of it. Second, there would be general storage for both the parents and his sister, but his sister may not write to that general share, while the parents can.

Translating that into something that is intuitive and WORKS is the key I've not yet found. ;)
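One way John's scenario might translate to plain Unix users and modes (a hypothetical sketch: the user/group names and the /tank paths are made up, and `run` only echoes the commands):

```shell
# Echo-only sketch of John's scenario with plain Unix users and modes.
# All names and paths here are invented for illustration.
run() { echo "+ $*"; }            # replace the echo with "$@" to execute

# accounts: parents own the shared area; sister is in group 'family'
run pw groupadd family
run pw useradd parents -m -G family
run pw useradd sister -m -G family
run pw useradd john -m

# John's private area: mode 700 means nobody else can even list it
run chown -R john:john /tank/john
run chmod -R 700 /tank/john

# shared area: owner (parents) may write; group (incl. sister) is read-only
run chown -R parents:family /tank/shared
run chmod -R 750 /tank/shared
```

This only covers local file permissions; Samba or NFS access still has to map its users onto these accounts, which is exactly the translation problem described above.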
 
I'm not sure if this or the thread about 4k disks in ZFS is best, but let's try here!

I'm trying to create the pool in ZFSGuru to make use of the gnop 4k alignment and then import it into Solaris Express 11. I boot with the ZFSGuru LiveCD, create the pool, and then export it with "zpool export tank" from the shell.

Booting back into Solaris... If I had formatted the disks with GPT, Solaris' zpool straight up segfaults when it tries to query the pool. If I format with GEOM, when I try to import in Solaris, it complains that the pool's metadata is corrupted.

Anyone been able to successfully do this?
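For reference, the gnop 4K-alignment procedure being attempted, as an echo-only sketch (device names ada0..ada2 are placeholders; `run` just echoes, so nothing is executed):

```shell
# Echo-only sketch of the gnop 4K-sector procedure; ada0..ada2 are
# placeholder device names.
run() { echo "+ $*"; }                 # replace the echo with "$@" to execute

# overlay each disk with a fake 4K-sector device
for d in ada0 ada1 ada2; do run gnop create -S 4096 /dev/$d; done

# create the pool on the .nop devices so ZFS records the 4K sector size
run zpool create tank raidz ada0.nop ada1.nop ada2.nop

# export, remove the overlays, re-import on the bare devices
run zpool export tank
for d in ada0 ada1 ada2; do run gnop destroy /dev/$d.nop; done
run zpool import tank
```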
 
New Release: 0.1.7 (final)

Happy New Year everyone!

With the fresh number of 01-01-2011, the new 0.1.7 final release sees the light of day. This release comes with some fixes to the web-interface, as well as a new stable system image based on FreeBSD 8.2-RC1, also featuring the latest Samba 3.5.6 package, a login menu and the 'mps' driver for 6Gbps LSI controllers; just like the preview3 ZFS v28 experimental livecd I posted earlier. So use this package if you want to track the stable product line instead! It features ZFS v15.

Download (via Bit-Torrent)

How to upgrade?
  • first update your Web-interface on the System->Update page, to 0.1.7 version
  • next, System->Install page will allow you to perform Root-on-ZFS installation and download the newly available 8.2-001 system image to your system and use it for installation.
  • after installation reboot and you should run the new stable release!

@Loto_Bak: this should also include a fix for the issue you reported, please test!

Cheers!

Update: the ZFSguru website is open!
 
If nothing else, I got an improvement in file transfer with Samba.
On preview3 I was never able to reach more than 65-70MB/s writing to the server on a gigabit network; now I get approx. 90-95MB/s on 0.1.7 Final.

Playing around some more, and I will of course seed your torrent until you make a new one :)
 
Website looks great!

I'm sure this has been covered, but I lack the motivation to read 33 pages of this thread :D Am I able to run this as a VM?
 
Glad to hear the core features coming. :)

As far as the export/import/expand, I'll need to do some thorough testing and get back to you to verify my results.

Intuitively, I felt I should be sharing under "Services - Samba", probably because two shares are listed there by default. Then I see no option to create a new share and have to sit back and wonder where it is. Filesystems was the next stop, as it was the only thing "file related" among my options. I was not sure what a "filesystem" was, because I'm new to ZFS.

Then I learned that filesystems are (folders on a drive // filesystems on a pool) with unique properties; at least that's what made sense to me (via Napp-It). I didn't understand the point of creating a new filesystem, when I thought the pool was just the repository of data, kind of like formatting a disk in Windows and assigning a drive letter. Then I learned to share the filesystem/folder directly from the filesystem page after clicking on a filesystem. At this point it all makes sense, but it was completely confusing to me at first.

On the Snapshots tab, there is no option to create snapshots. That is of course under Filesystems, discovered after clicking on one. I simply made a snapshot from the cmdline because I "figured" the snapshot functionality wasn't fully implemented, as I was only presented with a list of created snapshots (or lack thereof) rather than anything interactive.

The Pools section makes sense to me completely. Feels like a create-raid page. It's the natural course of action once you have disks available. Which leads me to...

Disks page: so I need to format disks before I can actually use them in a pool. I went to the pool page expecting to just select my disks to add to a pool. So the flow is: format disks on the disk page, then select them on the pool page to create the pool. Perhaps there's a way to make that flow without leaving the tab. GPT labels are good though, preferable to some hardware string.

All I got at the moment. Good lookin' site by the way. :)
 
Yes, I recommend Virtualbox. If you want to use this for real then also look at this thread:
http://forums.servethehome.com/showthread.php?11-Is-zfsguru-stable-yet

If you just want to test then just grab the latest .iso and set it up in Virtualbox; give at least 1GB (1.5GB+ recommended) of memory to the VM, and some virtual disks like 3 disks so you can test RAID-Z. :)

Awesome, thanks! I've been wanting to test this out for quite some time, but I haven't had a spare minute in months :D Finally have some free time, so I'm excited to try it out!
 
sub.mesa:
Thanks for all your work on this. It is the answer to a lot of people's prayers. I've been using FreeNAS for a little while now. As soon as your final stable release comes out, I'll be switching over!

Thanks again.
 
Thanks for all your positive feedback!

Since this thread has grown rather long, and the project has changed a lot since I started work on it, I think we should start with a clean thread.

So i made a new one, please don't post in this thread anymore but head over to:

ZFSguru NAS fileserver project

Cheers!
 