ZFSguru NAS fileserver project

And now, over to the enemy:
Here are some pictures from FreeNAS 8.0 RC2:
They are very nice and the GUI looks very polished.

The tab bar in the middle works somewhat strangely though. Sometimes tabs appear somewhere in between, and it seems it could still use some work. The level of functionality is good, but it's not always clear where to go.

The biggest problem, I think, is that FreeNAS is stripped down. This means lower RAM usage, but it does make it more difficult to add stuff manually. I plan on basing ZFSguru on normal FreeBSD, so people can choose which version of FreeBSD to run via separate system image releases. This also allows me to create experimental images with new functionality ready for testing or preview.

FreeNAS has always lagged considerably behind FreeBSD releases, but I think they should have that under control now that it has been rewritten and is maintained by new developers.

Seeing these screenshots, my interface looks old-fashioned and dull; I should fix that in 0.1.8 final, when themes become available to radically change the visual appearance of all pages. That should make it prettier. :)
 
I'd rather have your incredibly easy-to-use and extensible offering than something attractive any day. Aside from occasional maintenance, I intend to never actually SEE the webGUI. If the system is doing its job there is no reason for me to be in there. :D
 
Yes, although that would work better once email notification is enabled, so it emails you when, for example, your array becomes degraded and you need to take action (unless you configured a hot spare). Email notification will be for 0.1.9 or 0.2.0 though; 0.1.8 will get extensions integrated properly, now known as ZFSguru services.

The extensible character of ZFSguru would indeed be one of the primary reasons to prefer ZFSguru over FreeNAS. But it's perfectly alright to have two projects with overlapping interests, because each has different goals and interpretations, and it gives people an alternative to one or the other. If you run FreeNAS and are unhappy, for example due to a problem or issue, you can switch to ZFSguru, and vice versa. You can do this without data loss, though I don't know about going from ZFSguru to FreeNAS; the other way around should work.

I do prefer the logical layout of my interface to that of FreeNAS, though. But that could just be because I'm so accustomed to it. :p
 
Currently, what are the best 2TB drives to use with ZFS? The cheaper the better, and I don't think there is a need for 7200RPM drives.
 
I prefer 5400rpm too, especially since you can use SSDs to shorten read access times (L2ARC) and accelerate sync writes (SLOG).
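
Adding such SSDs to an existing pool is a one-liner each, by the way. A rough sketch, with the pool and device names just placeholders:
Code:
# add an SSD (or SSD partition) as L2ARC read cache
zpool add poolname cache ada4
# add another SSD as SLOG (separate intent log) for sync writes
zpool add poolname log ada5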

The Samsung F4 generally showed the best performance in my testing, with the newer 3-platter EARS trailing the F4. The Samsung F4 may need an important firmware fix, though.

Small but relevant update: I'm currently porting the 4K-sector bootcode to the new release, so when using the 4K sectorsize override you should be able to boot from those pools. For this to work you need to have formatted at least one disk with the new version, so that disk gets the new bootcode installed. The cool thing is, that disk does not have to be a member of the pool you're booting into. It could be a simple USB stick that only serves to boot via GPT but boots into your pool with the sectorsize override. Why the hassle? Well, otherwise you would have to reformat the disks with GPT, destroying data. This procedure allows you to use an existing pool with data and the 4K sectorsize override, and still boot from it, which previously wasn't possible.
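
For the curious, the bootcode part of such a boot disk corresponds roughly to the following commands (a sketch, not the exact ZFSguru procedure; da0 as the USB stick is an assumption):
Code:
# create a GPT scheme with a small boot partition on the USB stick
gpart create -s gpt da0
gpart add -t freebsd-boot -s 128k da0
# write the protective MBR and the ZFS-aware GPT bootcode
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0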
 
New test 0.1.8-preview2-vbox system here... four 2 TB Seagate disks on Intel mobo SATA ports, 8GB RAM, no cache disk. And no previous experience with ZFS.

I created my pool in the web GUI, then clicked benchmark. I left the defaults and started the benchmark. After a bit it seemed like nothing was happening, and I didn't see a way to stop the benchmark, so I clicked start again. I had the console up, which displayed:
Code:
root@zfsguru /]# Mar   5 09:20:03 zfsguru kernel: pid 1167 (VBoxSVC), uid 0, was killed: out of swap space
and a couple others.

I tried to "start benchmark" again and it said the file exists. (Might want to put an option to delete that test file on that screen?)

So... thinking the "out of swap space" was caused by the benchmark, I re-opened the shell. Waited a bit without doing anything, then the out of swap space errors popped up again and the shell dropped back to the menu.

I'm rebooting now and will re-try the shell in case that benchmark had just run merrily away with itself.
 
The system is booted. My volume didn't auto-mount, which is probably by design?

So after I imported it, the pool status page says "The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. "

I formatted these disks from blank using ZFSguru's GPT page. Should I have done them a different way?

Going to click upgrade pool now...
(the console hasn't shown the out-of-swap weirdness since the reboot, so it must have been the benchmark)

EDIT: the upgrade seems to have worked, but the app does not prevent me from executing it multiple times.
 
Keep in mind that the preview LiveCD is extra experimental and should NOT be used for anything real; throw it away when the next release comes out, since preview2-vbox will be incompatible with the new extension architecture. It's a showcase for the upcoming VirtualBox extension only.

Also, there's an issue with the current automatic memory tuning and heavy ZFS I/O without a swap partition. In the upcoming release, swap will be created automatically when installing.

At this point, if you're new to ZFSguru, you should have a much better experience with 0.1.7 for the time being. You can update anything later. The only thing 0.1.7 doesn't have is automatic swap configuration.

Creating swap manually is relatively easy (a pure command-line variant is sketched below):
1) create a zvol of, say, 2GB on the Files->Volumes page
2) execute this command on the root command line on the System page:
zfs set org.freebsd:swap=on poolname/zvolname
Substitute poolname and zvolname for the names you chose; this example assumes the zvol is created directly on the top-level pool filesystem (poolname).
3) activate it as swap:
swapon /dev/zvol/poolname/zvolname
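
Or, as a pure command-line sketch (poolname/swap0 and the 2GB size are just examples):
Code:
# 1) create a 2GB zvol to use as swap space
zfs create -V 2G poolname/swap0
# 2) mark it so FreeBSD treats it as swap
zfs set org.freebsd:swap=on poolname/swap0
# 3) activate it right away
swapon /dev/zvol/poolname/swap0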

Then you can perform benchmarks with the current aggressive tuning.
 
Ah, OK. Yes, I've read all the warnings about this cut being a sneak peek. I'm just toying around on a spare PC that I don't intend to keep for real... and no data. Everything up to this point was so I could try out the VirtualBox extension.
 
Well, the real extension will be coming soon, so watch this thread in a few days for updates. Then you can try the new release with the optional VirtualBox extension, which you can download using the new integrated BitTorrent client. Still working on that, though. :)

If you want to give your VMs a decent amount of memory, you might need to tune down ZFS memory usage, since the default memory tuning is fairly aggressive and claims quite a bit of memory for ZFS. I want to rewrite the tuning page to be more user friendly in that regard, so the user knows which options to control rather than being presented with a long list of options, some of them potentially dangerous.
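
The main knob here is the ARC size limit, set for example in /boot/loader.conf (a sketch; the 4G figure is just an example, size it to whatever you want to leave free for your VMs):
Code:
# /boot/loader.conf -- cap the ZFS ARC so the VMs keep enough free RAM
vfs.zfs.arc_max="4G"
# loader tunables only take effect after a reboot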
 
I've been working on it every day for the last week. :)

It took me quite a bit more time than I anticipated to finish the rtorrent integration. But after quite a few hours, it is working now and has its own 'service panel'. I'm now working on actually using the rtorrent interface, by making system images and extensions use torrent downloads. That will still take a couple of days, I guess.

After that I have to finish the Service installation page, after which I can release preview3 together with system image 8.2-002. After this release I will add more extensions to the database.

I've put a lot of work into this, and it will be barely visible once complete. But it's a major milestone that I can now distribute content via BitTorrent instead of direct HTTP downloads from a single server.

It does appear that the release of 0.1.8 final has slipped quite a bit, towards the end of March. I hope I can release more often after this release, especially because 0.1.8 is a major rewrite compared to 0.1.7 even though, from a user's point of view, not that much has changed. After 0.1.8 I hope to add more exciting features to ZFSguru itself, once the highly anticipated extensions are working.
 
Hey sub, do you have a paypal account? Would not mind throwing a few bucks your way once I get my NAS up and running.
 
Good to hear you're making progress! The end of March happens to be right around when I should be able to get my server finished up, so that works out perfectly :D

Hey sub, do you have a paypal account? Would not mind throwing a few bucks your way once I get my NAS up and running.

Likewise!
 
Having a little issue with my ZFSguru install: there's an interrupt storm happening on IRQ 19, which is shared with my NIC. Is it possible in FreeBSD to forcibly change the assignment? I've looked online and people seem to say it is, but I can't find an example of how to do this.

Anyone know how?
 
Can't you reassign that PCI slot in the BIOS?

My motherboard doesn't seem to have that ability; however, I was able to create a secondary virtual NIC for the VM and it was assigned IRQ 17, so all is now happiness and joy. And also speed. :D
 
Glad you could solve your issue; it could have been a bug in ESXi, or FreeBSD handling the virtualized hardware improperly or naively. Either way, you would want to perform a thorough benchmark to ensure everything is stable enough to use as a real storage platform.

@Vengance_01: thanks a lot for the offer! I don't have an account yet, and I'm considering a donation option on my website. Right now I want to get 0.1.8 released, so I'll focus my efforts there. But after that I might indeed look into PayPal/Google/Moneybookers etc. There's a thread on my forum discussing it.
 
Yeah, I'm barraging the system with a heavy load today, moving 6TB of material over, and then I'm going to run checks on it over the next few days before completing the migration, to ensure that data integrity is A-OK. I rather think the issue is with ESXi than with FreeBSD, as running ZFSguru natively yielded no such problems.

Regardless of the cause, it seems that performance in a VM is pretty much great! So far in my tests, CIFS performance seems to be on par with what the system was capable of when run natively. If I can figure out how to upgrade pcre from 8.11 to 8.12, I can install and use open-vm-tools, which may yield even better performance. There is also the possibility of passing a physical NIC through to the guest.
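
I guess the usual ports route would be something like this, though I haven't tried it yet (devel/pcre and emulators/open-vm-tools are where those ports normally live):
Code:
# refresh the ports tree so it contains pcre 8.12
portsnap fetch update
# rebuild pcre over the installed 8.11
cd /usr/ports/devel/pcre && make deinstall reinstall clean
# then build open-vm-tools against the new pcre
cd /usr/ports/emulators/open-vm-tools && make install clean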
 
What is the best method for moving data between disks, or between two servers?

cp -r?

Or zfs send/receive? Does someone have experience with this command? Any examples?
 
I am a noob (coming from a Windows background); I have used "mv" to move and "cp -R" to copy directories/files from one location/pool to another within the same BSD build. To initially copy/move files from my Windows server to FreeBSD, I set up Samba on FreeBSD and then accessed the share from my retired Windows server.
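
On the zfs send/receive question: from what I've read, a minimal example would look roughly like this (untested by me; poolname, otherpool and otherserver are placeholders):
Code:
# snapshot the dataset you want to move
zfs snapshot poolname/data@migrate
# local copy to another pool on the same machine
zfs send poolname/data@migrate | zfs receive otherpool/data
# or copy to a second server over SSH
zfs send poolname/data@migrate | ssh otherserver zfs receive otherpool/data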
 
Question about ZFS:

If I run Windows on a machine that had been running FreeBSD and has a lot of hard drives in ZFS, I can't really read them, since I'm in Windows. Can I set up a VM on the Windows machine, install FreeBSD, access the physical hard drives through the FreeBSD VM, and then share them over the network so my Windows machine can access them?
 
Yes, you could, but be very careful connecting ZFS disks to Windows; if Windows writes to them, your pool is gone! This happens when Windows asks you to 'initialize' the drive - DO NOT DO THIS! You will kill your data with it.

Otherwise, you could boot Windows with, say, 4 ZFS disks present, connected to a NON-RAID controller, then install VirtualBox:
1) make a raw disk image of each disk using the 'createrawvmdk' command (see the sketch below)
2) pass those physical disks to a new VM
3) install ZFSguru or another distribution in that VM and import your ZFS pool
4) set up sharing (simple in ZFSguru: click 'share' on the Files page)
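
Step 1 would look roughly like this on the Windows host (a sketch; the drive number, file paths, VM name and controller name are assumptions, so check them against Disk Management and your VM settings):
Code:
rem create a raw-disk VMDK pointing at physical disk 1 (repeat for each ZFS disk)
VBoxManage internalcommands createrawvmdk -filename C:\VMs\zfsdisk1.vmdk -rawdisk \\.\PhysicalDrive1

rem then attach the VMDK to the VM's storage controller
VBoxManage storageattach "ZFSguru" --storagectl "SATA" --port 0 --type hdd --medium C:\VMs\zfsdisk1.vmdk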
 
What? If I share a ZFS pool over the net and write to it from a Windows machine, will it die?
 
No, that is the normal usage of the pool. Windows doesn't know it's ZFS if you're writing to it over the network; it uses network protocols like CIFS, NFS or iSCSI to know about the files.

I thought you wanted a way to recover the data on a Windows computer with the ZFS disks connected to that computer, using virtualization. If you meant that, then I can say it is possible, though a bit cumbersome, since you have to create a raw disk file for each physical disk that is part of the ZFS pool. This can be done with the 'createrawvmdk' command, which is executed on the command line.

Connecting ZFS disks directly to Windows is potentially dangerous, since Windows asks you to initialize the drive; in doing so you destroy/overwrite data on that drive, possibly crippling critical metadata and thus failing the pool. So do NOT INITIALIZE the drives when asked. This is not relevant for a normal ZFS NAS you access over the network, only if you want to connect the ZFS disks directly to Windows and access their contents.
 
Using a Windows host OS running VirtualBox to run FreeBSD/FreeNAS/ZFSguru accessing ZFS disks? It could work, but it would not be an ideal configuration, and it also taxes your performance. Why not build a dedicated box? That's why it's called a NAS: storage that you add to your network.
 
Well it is a NAS, but it would just be cool to use all that CPU power for some cool stuff from time to time.
 
What is the something-cool that you actually want to do? What is keeping you from running FreeBSD as the core OS with a virtualized Windows for that other thing... or doing it natively?
Want to host an FTP server? BSD can do that.
Want to re-encode video? Ditto.
So what is it specifically that you actually need Windows for (on the NAS box)?

I really wouldn't risk my data by having the NAS machine load ZFS in a VirtualBox VM on top of Windows.
 
Video encoding in FreeBSD is hard; there's no GUI for it.

I would also miss being able to play video games on the same machine I share files from (at LANs), and I would miss depending on just one computer (yes, I know how silly it sounds, but I really like all-in-one). I NEED to file-share and play games at LAN parties, because those are the two main reasons I go to LAN events, and yes, they count for exactly 40% each (the rest is to meet new people).

The other thing I would like to do is benchmark and overclock; all the cool tools that really matter are for Windows.

Does FreeBSD even have a DC++ client and a decent torrent client?
 
Wait, you actually cart your RAID server around to LAN parties? Doesn't it weigh a ton? Aren't you afraid that the bumping around will damage the drives and cause you to lose your precious data?

Video encoding: HandBrake: http://www.freebsdsoftware.org/multimedia/handbrake.html
For torrents you can just use Azureus: http://www.freebsdsoftware.org/net/azureus.html
And if you are a fan of uTorrent, it can be run under Wine (and with some optimizations, faster than on Windows): http://forum.utorrent.com/viewtopic.php?id=15888
DC++: https://launchpad.net/linuxdcpp

For LAN parties, bring a svelte-ish (depending on your video cards) Windows PC without any ZFS, just a single drive with NTFS. I wouldn't risk damaging my file storage by carting around my NAS. There is no real benefit to having it be all-in-one and a whole lot of drawbacks. A dedicated server plus a dedicated gaming PC makes much more sense.

Everything I know says to overclock from the BIOS... unless you mean stability-testing tools; those are indeed Windows focused. You can just dual-boot for those. Still, I wouldn't even let Windows come near my server lest it do something stupid and break it. (I know it would... you actually have to disconnect all drives except the one you install Windows on, or it WILL install itself wrong, putting the bootloader on the wrong drive no matter what you specify as first priority in the BIOS.)
 
What? Transportation has been going so well all these years, and I think the Supermicro shipping box is very shock resistant. Yes, it is heavy.

I don't like running uTorrent under Wine; last time I did it I got a 10 KB/s upload limit...

Well, since this Linux DC++ works on FreeBSD, will Deluge or rtorrent work as well?

I need to share files :F That's also a part of it: seeing how much data you manage to upload from your PC in one week or a weekend, or however long the party is. The NAS will be more necessary at some LAN parties than at others. So maybe I can make a compromise with myself: at some parties, the desktop PC with some extra hard drives; at others, the NAS plus a laptop; and at others I may need both o_O Which will be a pain in the ass!

Anyway, I don't encode DVDs.

I do BD to x264, or well, yes, some DVD to x264 (if I can't find it on Blu-ray, of course). And I always need the newest version of x264; I can't run an old one.
 
If you're intent on using Windows, I really recommend just setting up something like FlexRAID and calling it a day. You'll have drive pooling and parity, and performance should be good. Granted, not as good as ZFS, but you would no doubt lose a lot of performance anyway through that layer of abstraction, not to mention the RAM limitations.
 
I agree with No1451. I never imagined there would actually be a situation where I recommend someone use NTFS over ZFS, but you are it.

Anyway, I don't encode DVDs.

I do BD to x264, or well, yes, some DVD to x264 (if I can't find it on Blu-ray, of course). And I always need the newest version of x264; I can't run an old one.
Handbrake is a GUI for x264.

I don't like running uTorrent under Wine; last time I did it I got a 10 KB/s upload limit...
Fair enough; Azureus or one of the other FOSS torrent clients it is, then.
 
Well, if I run FreeBSD + VMware and run a Windows machine, how much CPU power will I be able to use, compared with running it directly on the machine? Since you're not supposed to encode with too many cores, can I dedicate one of the CPUs to the VM?
 
It's not really so much a question of CPU power; unless you turn on some of the more advanced features, you could likely run it fine. Where you are going to run into issues is RAM: while playing with my FreeBSD VM I routinely see it eating 7-8GB of RAM for simple I/O, nothing really intense even. How much RAM do you have installed on your main rig? I dedicated 10GB to my ZFSguru VM in ESXi and I still see a significant increase in performance when running it natively with the full 16GB of RAM. Unless you absolutely need copy-on-write and the performance of ZFS (which you will likely hamstring anyway), there are better options for pooling and sharing out protected storage on your main rig.

In the end it is your decision, and if you are really intent on it, try it! That's the beauty of VMs: you can do it tonight if you want to. Take 15 minutes, install ZFSguru in a VM and try it out; if you have some spare disks you can use, connect them and give it a go. I still, of course, cannot see any reason why FlexRAID would not be a much better fit in this instance.

I'm sure you've heard the old maxim "Aces in their places"...
 
I'm ordering the rig with one 6-core Intel CPU and 12 GB of RAM. I will upgrade to 24 GB of RAM when I upgrade to another CPU, which will happen after I get the first 10x2TB hard drives in. Then I still have 6 open memory slots; I was planning to keep it at 24 GB of RAM and still have the upgrade opportunity.
 
wouldn't gaming also be an issue because of the GPU not being properly made available for the client machines?
 
Well, if I'm reading him right, he seems to want Windows to be the bottom level with FreeBSD running on top of that. No idea why. But yeah, if FreeBSD is on the bottom with Windows virtualized... good luck gaming. Modern games will just crap themselves.

Personally: on a Windows machine, get a Windows solution! ZFS is cool and very powerful, but is it worth so much fuss and trouble? Hell, even in a LAN-party setting you won't be able to use the speed that ZFS can provide; almost all LAN parties I have attended only run 10/100 to the participants and reserve GbE for the core.
 