ZFSguru NAS fileserver project

I started a project some weeks ago, called SolarStatus (http://github.com/hotzen/SolarStatus)

It is aimed at Solaris servers and offers a simple PHP5/HTML5/jQuery UI to check the server's status. It is in no way intended to administer the server or run any admin commands.
Instead, it probes for system information via shell scripts or custom commands, and the output is displayed and refreshed (if enabled).
Furthermore, some output is parsed to generate overview meters/graphs (e.g. CPU, memory, ZFS I/O, NIC I/O); this part is still a work in progress.

I still have to push a lot of pending changes in the next few hours, so the current GitHub version is missing some of the better configuration options.

Management summary: simple probes, lightweight UI, aimed at server status, and by design *not* running commands that require admin privileges (as all of you ZFS users prefer to administer at the shell anyway...).

If there is any interest, I could add some BSD-specific scripts/commands. Currently, it is tailored towards SE11.

Cheers
 
I really don't like that the project requires PHP and Apache.

FYI, ZFSGuru doesn't use Apache; it uses lighttpd. You can probably switch out lighttpd for any web server since it doesn't look like Jason used any sort of special configuration.
 
Don't want to take over this thread, and hopefully Jason will continue working on the really awesome ZFSguru.
Just wanted to mention that I just pushed to GitHub; a distribution .zip file is available, as well as a screenshot for a first impression.

https://github.com/hotzen/SolarStatus

That's it, cheers.
 
Hi all :)
I'm in a kind of position now where I don't know if I want to continue with OI or something else after ZFSguru "disappeared" :(
I'm looking at the Synology DS1511+ and it has everything I need and more.
Well, it's a different platform, but right now I need a storage device that just works with less fuss.

Anyone here have any experience with it?
 
ZFSGuru was just a basic web page ...

Also, if you really want a ZFS GUI, just use Solaris as they have a full ZFS GUI; the only reason it has not been ported is because it's in Java... and I'm sure there are some licensing issues.

Though ZFS is pretty easy to use just via the terminal. I also noticed Webmin has a bit of ZFS support as well.

Really loving ZFS on FreeBSD, it's proving hard to kill :p

Been adding to the pools and ripping HDDs out everywhere :p As long as each group has over 3 HDDs you have redundancy.

Expanding the pool is really easy, you just add on more drives.

Each set has its own RAID redundancy.

i.e. Group 1 has 4 HDDs, Group 2 has 3 HDDs.
As long as only 1 drive in Group 1 fails and only 1 drive in Group 2 fails, the array is all sweet.

Also, if say Group 1 + 2 are combined and you have 20% remaining space and you throw on another 3~8 HDDs, it will still spread the data out across all the drives until the remaining 20% is used, so you get speed, but you still keep redundancy in each group.
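To make that concrete, here's a rough sketch of what a pool built from two independent raidz1 groups looks like at the command line ( the daN device names are just placeholders, adjust for your own disks ):

Code:
# a pool made of two top-level raidz1 vdevs ("groups");
# each group can lose one disk without taking the pool down
zpool create tank raidz da0 da1 da2 da3    # Group 1: 4 disks
zpool add tank raidz da4 da5 da6           # Group 2: 3 disks
zpool status tank                          # shows raidz1-0 and raidz1-1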


zpool add -f tank raidz label/Group5disk1 label/Group5disk2 label/Group5disk3 label/Group5disk4 label/Group5disk5

is my fav command :p Just keep growing the array, and it's INSTANT, unlike my hardware RAID.
And the best part is you do not need the same size drives.
When I run out of room I just have to swap out all the small drives and repeat :p

When I run out of space I just add the next 5 ( my HDD rack is in groups of 5 ).


The only concern I have so far is whether, if an entire group fails, I can get the other groups back online.

I have tried this a few times without success ( but my data is all still there, as I never WIPED the drives I yanked ). It seems you can only remove drives/groups when the pool is 100% :S

Going to try again tomorrow to kill it by yanking out the middle group and wiping it ( I only want this as an expandable data archiving system, so data is mostly only ever going to be spanned across 1 group ).

Though one thing that is getting to me is how hard it is to kill.

I ripped out HDDs, made multiple Windows partitions, copied data onto them, and ZFS still accepted them back into the pool ( the only drive it did not accept was a new drive I added in ).
I think it's going to resilver soon... it's doing repair on the wiped drives while the fresh ones are still sitting UNAVAIL.

Though when I skipped through videos that I knew were spanned across the overwritten drives I saw data corruption, but ZFS never reported it.

Running a scrub now to see what it thinks :p

Was very disappointed that it did not catch it on the fly!!!

Looks like I will have to zero all my drives tomorrow for further kill testing. I just want to know if I can get my data back after a 100% group failure.
 
Jason was a huge help when I started making the switch to ZFS from the setup I was using before. It's great to see him come back and find out that he is in fact alright. Unfortunately for me I have put in a huge amount of effort into moving away from ZFSguru and will probably not be moving back to it until things become a bit more mature.
 
I have to admit, most of my irritation is because he couldn't be bothered to post a 3-line message saying he had to go on hiatus. I know it was an option for him, since he said so on his site (but, paraphrasing, 'i did not feel like it.')
 
Going to have to agree with danswartz and a few others I've seen post elsewhere. I can understand taking a break but to just vanish like a failed array is concerning. I've moved on to another system too.
 
Going to have to agree with danswartz and a few others I've seen post elsewhere. I can understand taking a break but to just vanish like a failed array is concerning. I've moved on to another system too.

I had someone giving me a hard time about my attitude. "people have things happen in real life, etc..." Yeah, I get that. But by his own admission, he COULD have clued us in, but didn't feel like it (paraphrasing there...) The main thing is: it's still a single developer distro, and 'once burned, twice shy'...
 
Going to have to agree with danswartz and a few others I've seen post elsewhere. I can understand taking a break but to just vanish like a failed array is concerning. I've moved on to another system too.

I gotta disagree with this attitude. Personal stuff happens and you just need time to yourself sometimes. Besides this isn't a 9-5 job but an open source project.

You moving to another system is understandable in such a situation.
 
I gotta disagree with this attitude. Personal stuff happens and you just need time to yourself sometimes. Besides this isn't a 9-5 job but an open source project.

I understand that. I tried to be as clear as I could be that my complaint was his just leaving everyone in the lurch without even a 1-line update. And that, yes, he could have told us, but wasn't in the mood.
 
Whatever the issue was, that's his business and I don't fault him or even suggest that he's any lesser of a person. But the landscape is already littered with abandoned projects by devs who no longer have a personal itch to scratch.

I'm not trying to come across with attitude. Just worth noting that responsible citizens usually keep the lines of communication open even in dire situations. Dropping off the grid is usually reserved for special behaviors... like Unabombers.
 
I'm not sure why you two can't seem to understand his situation. Have you ever had a life changing event happen? Did you ever once think "Oh shit, I should be worrying about hardforum people, not dealing with this ungodly burden that's consuming me!".

It really is that simple. People have things, usually called priorities, that take precedence during times of hardship. If Jason "wasn't in the mood" that means he was in a very bad place - and I'm sure hardforum was the last thing on his mind. If you don't like the project anymore because of his actions, fine, be that way; but don't try to convince other people that Jason's a bad guy or that his project is crap.

I swear, some people are just happy-go-lucky idiots who are physically incapable of empathy for fellow human beings.
 
Yes, I've had quite a few life changing situations. In none of them did I think "hey, you know, I really should take 30 seconds to tell folks I've been working with that I am out of action for the foreseeable future. nah..." Nowhere did I say he's a bad guy or his project is crap. As far as the latter part, as someone else posted: the internet is littered with the debris of single-dev projects that have been abandoned, for one reason or another. These events just reinforce that failure mode.
 
Calm yourself xDezor. Nobody said the guy must come update Hardforum "idiots". He's got his own project page and forums where he could have posted that. Nobody is saying he owes anybody anything at all.

Some of us have been in a bad way too, laid up in hospitals, massive life changing surgeries, deaths, you name it, so don't imply Internet forum stalkers are incapable of empathy. And some of us have even taken the moment out of our lives that were consuming us to update our digital lives because really, digital life is not much different than picking up a phone and calling any one relative and saying "hey, I'm not dead" so people who care can reach out to others. It's just part of being human.
 
I'm really happy to see that you're okay Jason!

As for all the thread crapping going on here, WTF?! Seriously?! If you've moved on to other projects, why keep posting here? Please go away.
 
Maybe someone here will have some ideas because I haven't come up with the answer on google. I'm new to Solaris and ZFS, so maybe I'm overlooking something. I'm having trouble importing a pool I made in ZFSguru 1.8 (Latest release, not preview) into Solaris 11 Express. I am able to import the pool back into ZFSguru with no problems. The pool is a 6 drive raidz1 pool with the 4k override and I did format the drives with the GEOM option. My process was to format each drive with GEOM giving them disk labels of disk0, disk1, disk2, etc. I then created a raidz1 pool with all 6 drives and the 4k override. I then rebooted ZFSguru. After reboot, I verified the 4k override and exported the pool. I then booted into Solaris 11 and attempted to import the pool which resulted in the following:

Code:
Kevin@solaris:~# zpool import
  pool: tank
    id: 14170363512088274941
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        tank                         UNAVAIL  insufficient replicas
          raidz1-0                   UNAVAIL  insufficient replicas
            c0t5000CCA228C0B5D8d0p0  UNAVAIL  cannot open
            c0t5000CCA228C0A3F5d0p0  UNAVAIL  cannot open
            c0t5000CCA228C08B7Dd0p0  UNAVAIL  cannot open
            c0t5000CCA228C093D0d0p0  UNAVAIL  cannot open
            c0t5000CCA228C0A3A3d0p0  UNAVAIL  cannot open
            c0t5000CCA228C097EAd0p0  UNAVAIL  cannot open

I tried to import the pool anyway and whether or not I use the -f option, I get the following:
Code:
Kevin@solaris:~# zpool import tank
cannot import 'tank': invalid vdev configuration

I even tried importing by the pool id, but got the same response. I then tried to import back into ZFSguru, which was successful. I exported from ZFSguru once again and repeated the import process in Solaris with the same results as before.

From the napp-it interface, it shows all 6 drives though they are listed as not in use.
Code:
Disks not in use:
id 	                 cap 	        pool    vdev    identify state 	if 	busy 	        error 	                vendor 	product 	        sn
c0t5000CCA228C08B7Dd0 	3.00 TB 	- 	- 	- 	- 	c11 	configured 	Error: S:0 H:0 T:0 	ATA 	Hitachi HDS5C303 	-
c0t5000CCA228C093D0d0 	3.00 TB 	- 	- 	- 	- 	c12 	configured 	Error: S:0 H:0 T:0 	ATA 	Hitachi HDS5C303 	-
c0t5000CCA228C097EAd0 	3.00 TB 	- 	- 	- 	- 	c14 	configured 	Error: S:0 H:0 T:0 	ATA 	Hitachi HDS5C303 	-
c0t5000CCA228C0A3A3d0 	3.00 TB 	- 	- 	- 	- 	c13 	configured 	Error: S:0 H:0 T:0 	ATA 	Hitachi HDS5C303 	-
c0t5000CCA228C0A3F5d0 	3.00 TB 	- 	- 	- 	- 	c10 	configured 	Error: S:0 H:0 T:0 	ATA 	Hitachi HDS5C303 	-
c0t5000CCA228C0B5D8d0 	3.00 TB 	- 	- 	- 	- 	c9 	configured 	Error: S:0 H:0 T:0 	ATA 	Hitachi HDS5C303 	-

One thing I did notice is that the disk id listed in the zpool import output and the disk id listed by napp-it differ slightly. The disk id listed by zpool has a "p0" at the end, whereas the disk id listed by napp-it does not. Is this normal or could this be my problem?

On a side note, these are the Hitachi 5K3000 3TB drives, which I know are not 4K. I'm only planning ahead because it's my understanding that if you ever anticipate having 4K drives in your pool, you should start out with a 4K pool to begin with. As more and more drives move to 4K, I'm sure I'll eventually add some to the pool. Since these are replacing Samsung F4 drives, I may be tempted to add the Samsung drives to the pool later.

Another side note: does anyone have any idea why, when I ran ZFSguru from an ESXi 4.1 VM, it hung during boot when my SAS2008-based card was added via passthrough? ZFSguru works fine with the card when used outside a VM. Also, ZFSguru boots fine if the card isn't added via passthrough. I have VT-d enabled in the BIOS and my other VMs haven't had any issues with passthrough thus far.

To be clear though, on my above pool importing problems: once I started having passthrough problems with ZFSguru, I started using both ZFSguru and Solaris directly on hardware and am not using any VMs right now. So my pool import problems are not VM related. Also, there is no data on the pool to worry about. Once I have the pool importing issues ironed out, I hope to move Solaris to a VM.
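A couple of checks that might help narrow this down ( just a sketch, so adjust the pool and device names; zdb output layout also varies between versions ):

Code:
# on the ZFSguru/FreeBSD side, with the pool imported, confirm the 4K override took:
zdb tank | grep ashift          # ashift: 12 = 4K sectors, ashift: 9 = 512-byte

# on the Solaris side, point the import scan at a specific device directory:
zpool import -d /dev/dsk
zpool import -d /dev/dsk tank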
 
Solaris-based OSes do not support GEOM partitions. It has to be GPT.


EDIT: Sorry, I got it mixed up. GEOM partitions are fine. GPT is what causes an issue.
 
Maybe someone here will have some ideas because I haven't come up with the answer on google. I'm new to Solaris and ZFS, so maybe I'm overlooking something. I'm having trouble importing a pool I made in ZFSguru 1.8 (Latest release, not preview) into Solaris 11 Express. I am able to import the pool back into ZFSguru with no problems. The pool is a 6 drive raidz1 pool with the 4k override and I did format the drives with the GEOM option. My process was to format each drive with GEOM giving them disk labels of disk0, disk1, disk2, etc. I then created a raidz1 pool with all 6 drives and the 4k override. I then rebooted ZFSguru. After reboot, I verified the 4k override and exported the pool. I then booted into Solaris 11 and attempted to import the pool which resulted in the following:


It sounds like you did everything right. I've done this several times myself with no problems. And GEOM w/4k override is the right way to format it, NOT GPT. In my case, though, I used an LSI 1068e controller, so maybe there's some weird incompatibility between the SAS2008, ESXi, and your FreeBSD version that is corrupting the drives.

EDIT: Also, in my case I used the ZFSguru 1.7 distro on top of, I think, FreeBSD 8.1, which uses a much older zpool version. I think the newer BSD's use a newer zpool, so maybe they introduced some incompatibility in the on-disk formatting?

Here's what I suggest: Verify that you can see the drives correctly in napp-it, and can create a pool natively in napp-it (don't worry about the 4k alignment for now).

Then, destroy the pool and use a hacked zpool binary to create native 4k pool in SE11 directly. You can download the binaries for several Solaris versions, including SE11 here:

http://www.solarismen.de/archives/12-Modified-zpool-program-for-newer-Solaris-versions.html

Here is a direct link to the SE11 download, since it's kind of buried at the end of the blog entry:
http://www.kuehnke.de/christian/solaris/zpool-s11exp


Whether using the ZFSguru approach or the hacked zpool, one thing to be aware of with SE11 is that by default it will create/upgrade you to zpool and ZFS versions that are not supported by any other version of Solaris/BSD. So if you think you may want to run, say, OpenIndiana in the future, make sure to create the pools and filesystems with lower version numbers (I'm not sure when or if OI will ever update to SE11's current ZFS level).
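For example, something along these lines should pin the versions at creation time ( a rough sketch with placeholder device names; I haven't verified the exact property syntax on every release, so double-check the zpool/zfs man pages ):

Code:
# keep the pool and root filesystem at versions older systems can still import
zpool create -o version=28 -O version=5 tank raidz c0t0d0 c0t1d0 c0t2d0
zpool get version tank     # confirm the pool version
zfs get version tank       # confirm the filesystem version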
 
It sounds like you did everything right. I've done this several times myself with no problems. And GEOM w/4k override is the right way to format it, NOT GPT. In my case, though, I used an LSI 1068e controller, so maybe there's some weird incompatibility between the SAS2008, ESXi, and your FreeBSD version that is corrupting the drives.

EDIT: Also, in my case I used the ZFSguru 1.7 distro on top of, I think, FreeBSD 8.1, which uses a much older zpool version. I think the newer BSD's use a newer zpool, so maybe they introduced some incompatibility in the on-disk formatting?

Here's what I suggest: Verify that you can see the drives correctly in napp-it, and can create a pool natively in napp-it (don't worry about the 4k alignment for now).

Then, destroy the pool and use a hacked zpool binary to create native 4k pool in SE11 directly. You can download the binaries for several Solaris versions, including SE11 here:

http://www.solarismen.de/archives/12-Modified-zpool-program-for-newer-Solaris-versions.html

Here is a direct link to the SE11 download, since it's kind of buried at the end of the blog entry:
http://www.kuehnke.de/christian/solaris/zpool-s11exp


Whether using the ZFSguru approach or the hacked zpool, one thing to be aware of with SE11 is that by default it will create/upgrade you to zpool and ZFS versions that are not supported by any other version of Solaris/BSD. So if you think you may want to run, say, OpenIndiana in the future, make sure to create the pools and filesystems with lower version numbers (I'm not sure when or if OI will ever update to SE11's current ZFS level).

Thanks for the insight. I was trying to avoid the hacked zpool binary. FYI, I went back and tried it with ZFSguru 1.7 and used the default filesystem and pool versions. Same result. I even tried to import it into OpenIndiana with the same result (tried this just for the heck of it). I can only assume this problem has something to do with the way the SAS2008 controller is handled by the two OSes. I know there has been some trouble with BSD and SAS2008 controllers. I may go back and try this with the onboard ports, though I thought that's how I started out with all this. Last night I actually just went ahead and made a new pool in Solaris. I've copied a bit of data to it, but not enough that I couldn't start over. I guess it really isn't the end of the world to not have the 4k override, but I hate letting problems beat me. However, this has now eaten up 4 days of my vacation (obviously not entire days, but enough of each day) and I just want to be done.
 
Did some more testing today and have determined that it has something to do with the SAS2008 controller. As I mentioned, I went ahead and made a new pool directly from within Solaris and started migrating data to it. Not a lot of data, but enough that I didn't want to delete it for no reason. Therefore today's testing was done with some 1TB drives that I haven't repurposed yet. The first interesting note is that with these drives I got a different error during import. This time when I searched for pools to import from within Solaris, I got the following error:

Code:
Assertion failed: rn->rn_nozpool == B_FALSE, file ../common/libzfs_import.c, line 1077, function zpool_open_func Abort (core dumped)

Trying to import the pool yielded the same error. Next I tried creating the pool with the drives on the motherboard ports and the import into Solaris was successful. With this encouragement, I tried switching the drives over to the SAS2008 controller after creating the pool with the drives on the motherboard ports, but the above error popped up again. So it definitely has something to do with the SAS2008 controller and the switchover between BSD and Solaris, just not sure what exactly is the problem. My guess is that it's something in the way BSD handles the controller vs. the way Solaris does. I am curious as to why the error was different with these drives though.
 
WOW ... this thread has certainly gone quiet all of a sudden!

I'm just wondering if anybody else has started using 3TB drives in their ZFSGuru systems yet?

I've just added a few extra drives this afternoon, and I was a little bit surprised to see that they were only showing up on the disks page as 2.2TB.

Disks 1-10 are actually 2TB, so that's fine, but the new disks [11-14] are 3TB drives.

http://img269.imageshack.us/img269/8294/testbm.png

Any thoughts?
 
You're using LSISAS3081E-R's? They use the 1068 chip which doesn't support 3TB afaik.
 
This is correct, only the newer 6Gb/s LSI controllers support 3TB+ - the older 3Gb/s adapters are limited to 2TB.
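If you want to double-check what FreeBSD is actually seeing behind a given controller before trusting it with 3TB drives, something like this should do it ( da11 is just an example device name ):

Code:
camcontrol devlist          # list the disks as seen through the HBA
diskinfo -v /dev/da11       # mediasize should be roughly 3,000,000,000,000 bytes for a 3TB drive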
 
Interesting.

I have 4x IBM ServeRAID M1015 / LSI SAS9220-8i PCI Express cards
that I was just about to throw 3TB drives onto.

I hope I am safe... as they are the 6Gb/s ones. Glad I spent that $10 extra now.
 
The IBM ServeRAID M1015 works fine in FreeBSD 8-STABLE and 9 with the IT firmware. It doesn't work with the IR firmware.
 
Has anyone seen this before?
Code:
Starting istgt.
istgt version 0.4 (20111008)
normal mode
LU1 HDD UNIT
LU1: LUN0 file=scsir50/test, size=214748364800
LU1: LUN0 419430400 blocks, 512 bytes/block
istgt_lu_disk.c: 648:istgt_lu_disk_init: ***ERROR*** LU1: LUN0: open error
istgt_lu.c:2019:istgt_lu_init: ***ERROR*** LU1: lu_disk_init() failed
istgt.c:1667:main: ***ERROR*** istgt_lu_init() failed
/usr/local/etc/rc.d/istgt: WARNING: failed to start istgt

my istgt.conf
Code:
[LogicalUnit1]
  Comment "Hard Disk Sample"
  TargetName "test"
  TargetAlias "Data Disk1"
  Mapping PortalGroup1 InitiatorGroup1
  AuthMethod Auto
  AuthGroup AuthGroup1
  UseDigest Auto
  UnitType Disk
  QueueDepth 64
  LUN0 Storage /scsir50/test 200G

I've tried:
/dev/zvol/zpool/zfs
/dev/zvol/zpool
zpool/zfs
zpool

Basically every possible combination, and istgt refuses to start as it fails to open the device. I have nothing in /dev or /dev/zvol, yet zpool status reports back fine and the FS is mounted. I am fairly stumped here.

Also, in ZFSguru 1.8 and 1.9 I can't get to the iSCSI page either. Not really sure what to do.

Oh I am running 8.2-STABLE.
 
Ah, found the problem.

zfs create -V

The documentation is fairly poor around that sub-option of the zfs command, and/or my Google skills were temporarily high as a kite.
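In case it helps anyone else hitting the same error: the fix was creating a zvol (a block device) so the /dev/zvol node actually exists for istgt to open. A rough sketch, assuming the pool is called scsir50 as in the config above:

Code:
# create a 200 GiB zvol; this is what makes the /dev/zvol/... node appear
zfs create -V 200G scsir50/test

# the device node istgt can actually open:
ls -l /dev/zvol/scsir50/test

Then point LUN0 Storage in istgt.conf at /dev/zvol/scsir50/test instead of a plain filesystem path.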
 
Still can't work out why the live disk / Root-on-ZFS install saturates my gigabit LAN whilst my FreeBSD 9 install only gets 1/4 of the bandwidth.

What possible Samba / networking settings could you be using differently?

Even Samba 3.6 with AIO was slow :S

Unless I am not using the correct LAN drivers.
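A few things I still need to diff between the two installs ( just a sketch; the parameter names are for Samba 3.x ):

Code:
# dump the effective Samba settings on both systems and compare them
testparm -sv > /tmp/smb-settings.txt   # look at 'use sendfile', 'aio read size', 'socket options'

# check which NIC driver is actually in use
pciconf -lv | grep -B 3 network
ifconfig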


Also been having issues with 1.9 and write access ( it was hell to get past that new start-up wizard ).

Warning: file_put_contents(/usr/local/www/apache22/data/ZFSguru//config/cache.bin) [function.file-put-contents]: failed to open stream: Permission denied in /usr/local/www/apache22/data/ZFSguru/includes/common.php on line 211

The main issue was ZFSguru would not work unless I was using user www, so I had to change back to user www / group www instead of the username I was using with group wheel.
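If anyone else hits that permission error, giving the web server user ownership of the ZFSguru data directory again should probably sort it ( path taken from the warning above ):

Code:
chown -R www:www /usr/local/www/apache22/data/ZFSguru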

The other thing that caused a bit of a hiccup was that even after destroying the previous ZFS pools tank and tank2... they still show up and occasionally re-mount / repair themselves, causing data corruption with the new ZFS setup.

I was using GEOM but changed the drives over to GPT for Root-on-ZFS. Thankfully I still had the 88 files of 350-odd gig of data needed for repair backed up.
The partition tables are a bit of a mess now though; during boot most of the drives have to fall back to the secondary table. Still yet to get around to rewriting the primary table correctly ( see the sketch below the boot output ).


My main 9-CURRENT boot says this:


GEOM: da3: the primary GPT table is corrupt or invalid.
GEOM: da3: using the secondary instead -- recovery strongly advised.
GEOM: da2: the primary GPT table is corrupt or invalid.
GEOM: da2: using the secondary instead -- recovery strongly advised.
GEOM: da4: the primary GPT table is corrupt or invalid.
GEOM: da4: using the secondary instead -- recovery strongly advised.
GEOM: da5: the primary GPT table is corrupt or invalid.
GEOM: da5: using the secondary instead -- recovery strongly advised.
GEOM_LABEL: Cannot create slice SAS1P0C3D8.
GEOM_LABEL: Cannot create slice 1b2426b0-f30f-11e0-96f0-0010181a4d85.
GEOM_LABEL: Cannot create slice SAS1P0C3D8.
GEOM_LABEL: Cannot create slice 1b2426b0-f30f-11e0-96f0-0010181a4d85.


ZFS on root

GEOM: da3: the primary GPT table is corrupt or invalid.
GEOM: da3: using the secondary instead -- recovery strongly advised.
GEOM: da4: the primary GPT table is corrupt or invalid.
GEOM: da4: using the secondary instead -- recovery strongly advised.
GEOM: da5: the primary GPT table is corrupt or invalid.
GEOM: da5: using the secondary instead -- recovery strongly advised.
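For the "primary GPT table is corrupt" messages, FreeBSD can rewrite the primary table from the secondary copy; a rough sketch for one of the affected disks ( repeat per disk, and back things up first ):

Code:
gpart recover da3      # rebuild the primary GPT from the backup table
gpart show da3         # confirm the partition table is no longer flagged CORRUPT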

Anyhow, now that I have discovered this HDD price hike, it looks like I have to reuse some of my old drives till HDD prices drop again :S


The other thing I cannot work out is why Root-on-ZFS is using 100 watts more power...

Guessing some CPU power-save feature is turned off, or the HDDs are not dropping into a low-power state.

I can boot off my 9-CURRENT install and thrash the daylights out of it ( and scrub ) and it still uses less power than Root-on-ZFS sitting idle. Unsure what is going on there.
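A few quick things to compare between the two installs ( a sketch; sysctl names as on FreeBSD 8/9 ):

Code:
sysctl dev.cpu.0.freq dev.cpu.0.freq_levels   # is CPU frequency scaling active?
sysctl hw.acpi.cpu.cx_lowest                  # which C-states are allowed?
pgrep powerd || echo "powerd not running"     # powerd_enable="YES" in /etc/rc.conf enables it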

Will check again before I nuke my now-stuffed 9-CURRENT install ( the PC hung during building a new world ); now KDE will not start up etc...
 
Sorry for bringing up this old thread, but just a heads-up for everyone that ZFSguru/Jason appears to be coming back to life, with 0.2beta8 being released a couple of days ago. http://zfsguru.com/forum. I much prefer his interface/layout over FreeNAS and NAS4Free...
 