ZFSguru NAS fileserver project

jtg1993: Good, it's not only me then.
Did it just happen, or did it happen after a restart?
 
I had the same problem with it going back to 512-byte sectors. I wouldn't write to the array until this is fixed; my array is now corrupt because I tried the non-destructive benchmark.
The 'going back to 512-byte sectors' is completely normal; nobody has ever seen any different behavior!

If your array is corrupt because of the non-destructive benchmark, then obviously something happened unrelated to the benchmark itself; it simply writes a file to your ZFS filesystem which can't cause it to become corrupt.

Please tell me as much as you can about each problem you encounter. Losing GPT labels is unrelated to corruption on your disks or pool. I would need the output of zpool status and information on what exactly you did to figure out what might have happened.

I can just assure you: the non-destructive benchmark writes and then deletes one big file on your pool; that should never allow ZFS to become corrupt. Those issues must be separate from each other.

@svar: if you see the GPT labels on the disks page, then this is not really a bug but rather a known issue. It shouldn't be harmful either. Your GPT labels are still there. Once you reboot, ZFS will detect your pool and find it by its /dev/adaXp2 entry first. Try issuing "glabel status" on the command line and see if it still shows all the labels for your disks. If so, this is probably unrelated to your damaged disks.
 
Sub-mesa: How can it be normal that the 4K sectors go back to 512 after a reboot?
If I run "glabel status", only the boot drive shows up, not the data (WD) drives.
The "Disk bandwidth monitor" page doesn't show the data drives either.
Maybe I should reinstall ZFSguru.
 
The disks only need to appear as 4KiB-sector devices while ZFS creates the pool, so that ZFS picks different internal values which change how it allocates data on the disk. On normal ZFS pools this setting is 'ashift=9', while on 4KiB-sector pools it is 'ashift=12'. You can check it with this command:

zdb -e <poolname> | grep ashift

The idea is that once your pool shows ashift=12, ZFS will continue to use that value even after a reboot, when the .nop entries are gone and your disks show up as 512-byte sectors again. So the sectorsize override only exists to force ZFS to use ashift=12 at pool creation time.
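For reference, the override boils down to roughly this on the command line (the pool and device names are just examples taken from this thread):

gnop create -S 4096 /dev/gpt/Raid5_1      # .nop device that reports 4KiB sectors
zpool create Storage_1 raidz gpt/Raid5_1.nop gpt/Raid5_2 gpt/Raid5_3 gpt/Raid5_4 gpt/Raid5_5 gpt/Raid5_6   # pool is created with ashift=12
zpool export Storage_1
gnop destroy /dev/gpt/Raid5_1.nop         # the .nop entry would vanish after a reboot anyway
zpool import Storage_1                    # ashift=12 is kept, even though the disks report 512 bytes
zdb -e Storage_1 | grep ashift            # verify: should print ashift=12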

Your issue is a different one: your label probably is still there, but ZFS chose to use the raw device entry instead. You could try this: export the pool, then reboot, then import the pool. Before you import, check that the GPT names in /dev/gpt/ are there with a simple "ls /dev/gpt" command. I assume you formatted with GPT rather than GEOM, right?
 
They are formatted with GPT, yes :)
If I run the command you said, I get:
Code:
[root@zfsguru /]# zdb -e Storage_1 | grep ashift
zdb: can't open Storage_1: File exists
                ashift=12
[root@zfsguru /]#

I tried the export/reboot/import stuff, but it still likes to use the device entry.
 
Alright, so can you check that before you import the pool, there is a device entry at /dev/gpt/<LABELNAME> ? It's possible that this entry disappears as soon as ZFS starts using the adaXp2 raw device, so you would need to export the pool first to figure this out.

So again try:
zpool export Storage_1
(wait a few secs)
ls /dev/gpt

Now check whether your label is listed there. If it is, then this is a separate issue that I have to figure out. That particular issue (ZFS choosing a raw device entry instead of the preferred label entry in /dev/label or /dev/gpt) is not dangerous or harmful, just inconvenient.
 
Yupp, everything good there:
Code:
[root@zfsguru /]# ls /dev/gpt/
Raid5_1	Raid5_2	Raid5_3	Raid5_4	Raid5_5	Raid5_6	start
[root@zfsguru /]#

:)

But according to the last command, the pool still runs at 4K, while the GUI shows 512, right?
That's kinda confusing for the "non-terminal-guru" user.
 
The GUI displays the sector size reported by the HDD itself. I second the request for the pool's sector size value to be displayed on the pool page.
I had one of those "ahaaah" moments when I read mesa's explanation on the 2nd page; until then I was confused about why the HDDs would lose the sector override. It should be in the first post or something :)

Also, cross-referencing the gpt/label issue from another thread:
The only caveat after using the 4K override is that HDDs are referenced in the pool by partition instead of by GPT label. Not a real issue; just wondering what's causing it.
 
@RedHams: I see :) So, as long as ashift=12, everything is good, right?
As of now, the Pools page doesn't show the sector size at all.

But the fact that the GPT labels are gone from the pool page and the 4K disks are totally gone from the I/O Monitor is kinda confusing.
 
The I/O monitor is configured to hide all but 'known labeled' disks using /dev/gpt or /dev/label; this prevents all sorts of redundant output and displays only one line per device.

The issues with labels do need to be addressed, but I want to stress that there are several separate issues, and the issue of the GPT label being intact but not used by ZFS is not dangerous in any way, just inconvenient. I'm still investigating these issues and will report my findings to the FreeBSD developers.

On another note, my progress on rewriting the essential structure: I'm about 30% done, I think. Multiple pages now work, but it will still require a lot of work to restore all the original functionality. Perhaps in two weeks I will be done, after which it's time to implement more fun features. :)
 
sub.mesa: Nice to know it's not bad for the system, but it sure is annoying, hehe.
I guess I can continue to move data to the server then :)

PS: Sorry for not sending you anything yet, but I'm busy with some work stuff.
Will get to the ZFSguru manual I'm working on as soon as I can :)
 
Great work so far... the interface looks quite polished already.

I am running version 0.1.7 under ESXi 4.1 with a SAS2008 and a SAS1068E in pass-through mode to the virtual machine. (8 GB vRAM, 4 vCPU)

The SAS2008 controller seems to hang the startup. When removed, the startup is fine.

When booting, the console outputs:

run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config mps_startup
run_interrupt_driven_hooks: still waiting after 120 seconds for xpt_config mps_startup
run_interrupt_driven_hooks: still waiting after 180 seconds for xpt_config mps_startup

etc...after 240 seconds...after 300 seconds...etc

I can use the SAS1068E without problems, but I would prefer to use the SAS2008 if possible.

Secondly, is there a password protect mechanism for the web-GUI in the works?

Finally...I noticed that the status page says:
Memory size 8 GiB, of which 12 GiB kernel memory (307.2 GiB max.)

How do I interpret this? There is 8 GB of RAM assigned to the virtual machine; what is meant by 12 GB of kernel memory? Will ZFSguru use the remaining RAM (minus system and service resources) as ZFS cache?

Thanks!
 
Hey justin2net,

It appears you've encountered one of the known bugs in the 'unfinished' mps driver:

> - Sometimes you'll run into a device that fails part of the probe on boot,
> and you'll end up running into the run_interrupt_driven_config_hooks
> timeout. You see some aborts during probe, and then the 5 minute probe
> timeout kicks in and panics the kernel. For instance:
>
> (probe4:mps0:0:20:0): SCSI command timeout on device handle 0x0012 SMID 81
> mps0: mpssas_abort_complete: abort request on handle 0x12 SMID 81 complete
> run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config
http://lists.freebsd.org/pipermail/freebsd-scsi/2010-September/004520.html

Can you check whether this always occurs? Also, does it work if you connect/passthrough only the new SAS2008 controller, thus removing the LSI 1068E?

If you still can't get it working could you try booting the system directly with ZFSguru livecd, circumventing ESXi? Just to rule out any issue there.

I could file a PR and let the BSD people look at it. The LSI people are working together with the FreeBSD developers on this driver, so I would expect this issue to be solved at some point, though I can't give you any ETA on that.

Please help diagnose this issue so I can report it. At least one other person has reported success with ZFSguru and the SAS2008 controller, but he had no other controllers and did not use ESXi.

I would really like the SAS2008 to work very well with ZFSguru; I think it's the best controller chip for ZFS available right now.

Secondly, is there a password protect mechanism for the web-GUI in the works?
Yes, planned for the next release. Also IP-based authentication and optional firewall protection; triple protection against unauthorized access.

For now, I recommend you use other means to protect against unauthorized access. Anyone with access to the ZFSguru interface basically has full root access to your system! Note that by default it does block non-local IP addresses (those not starting with 10.x.x.x or 192.168.x.x), but that's the only protection right now, so beware.

Finally...I noticed that the status page says:
Memory size 8 GiB, of which 12 GiB kernel memory (307.2 GiB max.)
Yes, your physical memory is 8GiB. Because you used memory tuning during installation, the kernel memory limit is set to 1.5 times your RAM size, thus exceeding your RAM.

The trick here is to limit the ARC but give KMEM more headroom, even more than your actual RAM. This makes sure that memory fragmentation does not cause memory panics while you still have enough memory free.

You can see how much ZFS is using at the moment in the 'top' output (log in over SSH or run it locally on the shell). Wired memory is kernel memory, Inactive is UFS caching (should be low since you're running a ZFS-only system!), Active means running programs, and Free is totally unused/wasted memory. Ideally you want Wired to take up almost all of your RAM, with little "free" memory.

The maximum kmem is determined by whether you run a 32-bit system (1GiB max kmem) or a 64-bit one. I'm not sure how that value is calculated, but it is how big kernel memory could theoretically be scaled. Higher settings would prevent booting, so the 1GiB kmem limit on 32-bit systems is quite a limitation.
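The tuning itself ends up as loader tunables; a rough sketch of what that looks like for an 8GiB machine (the ARC cap value here is just an illustration, not necessarily what ZFSguru writes):

# /boot/loader.conf
vm.kmem_size="12G"        # kernel memory headroom: 1.5x RAM, as described above
vm.kmem_size_max="12G"
vfs.zfs.arc_max="6G"      # cap the ARC below physical RAM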
 
I remember something now: you get that error message when, during boot, one of the devices connected to that controller is not behaving well. Try disconnecting some devices and see if it boots then. This is a driver limitation.
 
Update number? The 0.1.8 release should be about a month away. It will be worth the wait, though! The biggest new feature is extensions that add functionality; this is needed to give everyone the functionality he/she wants without overburdening the main system with things you don't need.

I'll keep you guys updated on my progress. :)


Oh yes! More than 4000 PENDING sectors; these are VERY dangerous, and just one of them can cause most Hardware RAID / Onboard RAID / Windows-based software RAID setups to kick the disk out of the array, with a lot of headaches as a result!

Pending sectors are bad sectors in ACTIVE data; data that the HDD *SHOULD* be able to read but CANNOT, because the bit errors exceed the internal 40-byte ECC correction capability.

Since your disks are EADS models, they have normal 512-byte sectors. Due to their high data density, the 40 bytes of ECC per 512-byte sector simply aren't enough to prevent occurrences like yours. 4KiB-sector drives with 100 bytes of ECC per sector can correct more damage, which becomes necessary with ever increasing data densities.

In other words, 2TB 512-byte sector disks can suffer from serious amnesia. Your two disks appear to be suffering from just that.


First power down and install the new hard drives, leaving the existing ones untouched; it shouldn't hurt if you change cables or something, since ZFS 'smells' the disks to see who they really are.

Now power up with your two new disks attached and format those disks (not your existing ones!) with GPT. You may need to set reserved space to 0, depending on whether the new disks are smaller than the existing ones or not.
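If you prefer the command line over the Disks page, the GPT formatting comes down to something like this (ada6 and the label are placeholders):

gpart create -s gpt ada6
gpart add -t freebsd-zfs -l FIRSTNEWDISK ada6    # creates /dev/gpt/FIRSTNEWDISK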

Once you have your disks formatted, you can execute this on the root command line:

zpool replace Raid1z-8TB gpt/1 gpt/FIRSTNEWDISK

and for the second disk:

zpool replace Raid1z-8TB gpt/2 gpt/SECONDNEWDISK

Of course, replace the names in capital letters with your chosen GPT names for those new disks. You MUST use different label names for your new disks!

You may also want to consider using RAID-Z2 (like RAID6) with two parity drives, though that would require copying all data to some temporary location and then creating a new RAID-Z2 pool from your 7 (?) disks.
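For reference, creating such a pool would look roughly like this once the data is parked elsewhere (pool and label names are placeholders):

zpool create NewPool raidz2 gpt/1 gpt/2 gpt/3 gpt/4 gpt/5 gpt/6 gpt/7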

[ssh@zfsguru /]$ zpool status -v
pool: Raid1z-8TB
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: scrub in progress since Tue Jan 25 20:34:12 2011
4.37T scanned out of 5.12T at 8.58M/s, 25h15m to go
858K repaired, 85.46% done
config:

NAME            STATE     READ WRITE CKSUM
Raid1z-8TB      ONLINE       0     0     0
  raidz1-0      ONLINE       0     0     0
    gpt/1       ONLINE       0     0     0  (repairing)
    gpt/2       ONLINE       0     0     0
    gpt/3       ONLINE       0     0     0
    gpt/4       ONLINE       0     0     0
    gpt/5       ONLINE       0     0     0

errors: No known data errors

pool: zfs_root
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
zfs_root    ONLINE       0     0     0
  gpt/0     ONLINE       0     0     0

errors: No known data errors
[ssh@zfsguru /]$
What does it mean when it is done repairing? Do I still need to buy 2 new drives and get them replaced?
 
I would RMA the two disks which showed more than a thousand pending sectors, for sure. You should definitely get them refunded/replaced under warranty.

It appears ZFS fixed all the damage, though. Now look at the SMART information and check whether all the active bad sectors are gone and new passive bad sectors are shown instead. You can see this on the ZFSguru Disks->SMART page.
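The same attributes can also be read from the shell with smartmontools, if you prefer (the device name is an example):

smartctl -A /dev/ada1 | egrep 'Current_Pending_Sector|Offline_Uncorrectable|Reallocated_Sector'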
 
Can anyone explain how I can set up rsync between ZFSGuru and OSX?
:)
 
rsync is already installed on ZFSguru. You need to modify /usr/local/etc/rsyncd.conf (copy it from /usr/local/etc/rsyncd.conf.sample) and set rsyncd_enable="YES" in /etc/rc.conf, then reboot. This configures rsync as a server, so OSX can act as the client.
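As a minimal sketch (the module name, path and hostname are placeholders; adjust them to your own share):

# /usr/local/etc/rsyncd.conf
uid = nobody
gid = nobody
use chroot = yes

[share]
    path = /tank/share
    comment = ZFS share
    read only = no

# /etc/rc.conf
rsyncd_enable="YES"

# then from the OSX client, something like:
rsync -av ~/Documents/ rsync://zfsguru/share/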
 
How's development going? Do you have an idea when 0.1.8 will be out yet?
 
sub.mesa: It appears you've encountered one of the known bugs in the 'unfinished' mps driver [...] Can you check whether this always occurs? Also, does it work if you connect/passthrough only the new SAS2008 controller, thus removing the LSI 1068E?

Yes this panic always occurs:

ZFSguru liveCD emulation with SAS2008 and SAS1068E passthrough: panic
ZFSguru liveCD emulation with SAS2008 passthrough only: panic
ZFSguru liveCD emulation with SAS1068E passthrough only: NO panic

ZFSguru installed (installed without any controllers attached), then with SAS2008 passthrough: panic
ZFSguru installed (installed without any controllers attached), then with SAS1068E passthrough: NO panic

ZFSguru liveCD with no ESX hypervisor, with SAS2008 and SAS1068E installed: panic
ZFSguru liveCD with no ESX hypervisor, with SAS2008 only installed: panic
ZFSguru liveCD with no ESX hypervisor, with SAS1068E installed: NO panic

I would appreciate any advice you can give and a bug report. Again, great work so far.

Just a minor thing about your website -- there are some dead links and mislinked images (I think).

On the page: http://zfsguru.com/doc/quick#manual
The page http://zfsguru.com/doc/bsd/install has incorrect images?
http://zfsguru.com/doc/bsd/install dead image links?
http://zfsguru.com/doc/zfsguru 404?

:)
 
Hey Justin,

Yes, those links for the 'manual FreeBSD installation' are outdated; I added a notice to indicate so. Until I rewrite them, the ZFSguru livecd is a much better choice than a manual installation, because ZFSguru gives you a 100% ZFS-only system, while manual methods often employ a UFS boot partition, which is easier but also loses some RAM to UFS while you want all the memory available to ZFS instead.

Regarding your SAS2008 boot panic: did you try booting without any disks connected to the controller? Are there any other controllers that might interfere? Can you disable as much other hardware as possible?

You could try disabling MSI interrupts, but I guess that is unrelated. I could try writing to the mps driver developers; for that I would need all the information you can give me.
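If you do want to test the MSI angle, the usual knobs are loader tunables, set in /boot/loader.conf or at the loader prompt (again, no guarantee it matters here):

hw.pci.enable_msi="0"
hw.pci.enable_msix="0"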

If possible, it would be great if you could try booting the ZFSguru livecd on another system with that SAS2008 controller, to see whether this is system specific. If you didn't test booting without disks attached to the controller, I would try that first.

If it boots without disks, try booting with just one disk attached, and so on. If one of your disks has issues during boot, that would explain this problem well according to the documentation. If it happens without any disks attached, there must be another cause.
 
sub.mesa: I would RMA the two disks which showed more than a thousand pending sectors, for sure. [...] It appears ZFS fixed all the damage, though.

I finally got 11 new Hitachi 2TB drives
I have 2 of them for this box -- how do I replace them using ZFS Guru?

It also seems as if one of the drives got dropped; the scrub is still going on. Lots of files showed unrecoverable errors, but I just deleted them (no big deal).

[ssh@zfsguru /]$ su
[root@zfsguru /]# zpool status -v
pool: Raid1z-8TB
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scan: scrub in progress since Thu Feb 3 23:56:09 2011
784G scanned out of 5.12T at 9.74M/s, 130h2m to go
64.5K repaired, 14.97% done
config:

NAME            STATE     READ WRITE CKSUM
Raid1z-8TB      DEGRADED    69     0     0
  raidz1-0      DEGRADED    69     0     1
    gpt/1       REMOVED      0     0     0
    gpt/2       ONLINE      69     0     0  (repairing)
    gpt/3       ONLINE       0     0     0  (repairing)
    gpt/4       ONLINE       0     0     0  (repairing)
    gpt/5       ONLINE       0     0     0  (repairing)

errors: Permanent errors have been detected in the following files:

Raid1z-8TB:<0x2ca08>
Raid1z-8TB:<0x2ca0a>
Raid1z-8TB:<0x2ca0b>
Raid1z-8TB:<0x2ca0c>
Raid1z-8TB:<0x2ca0f>
Raid1z-8TB:<0x2ca12>
Raid1z-8TB:<0x2ca14>
Raid1z-8TB:<0x2ca15>
Raid1z-8TB:<0x2ca18>
Raid1z-8TB:<0x2ca1b>
Raid1z-8TB:<0x2ca1d>
Raid1z-8TB:<0x2ca1f>
Raid1z-8TB:<0x2ca40>
Raid1z-8TB:<0x2ca4d>
Raid1z-8TB:<0x2ca50>
Raid1z-8TB:<0x2ca52>
Raid1z-8TB:<0x2ca53>
Raid1z-8TB:<0x2ca57>
Raid1z-8TB:<0x2ca61>
Raid1z-8TB:<0x2ca6d>
Raid1z-8TB:<0x2ca75>
Raid1z-8TB:<0x2ca77>
Raid1z-8TB:<0x2ca79>
Raid1z-8TB:<0x2c97d>
Raid1z-8TB:<0x2c97e>
Raid1z-8TB:<0x2c97f>
Raid1z-8TB:<0x2c980>
Raid1z-8TB:<0x2c983>
Raid1z-8TB:<0x2c98b>
Raid1z-8TB:<0x2c98c>
Raid1z-8TB:<0x2ca9d>
Raid1z-8TB:<0x2c99f>
Raid1z-8TB:<0x2c9a3>
Raid1z-8TB:<0x2c9aa>
Raid1z-8TB:<0x2c9ac>
Raid1z-8TB:<0x2c9b3>
Raid1z-8TB:<0x2c9b4>
Raid1z-8TB:<0x2c9bd>
Raid1z-8TB:<0x2c9c2>
Raid1z-8TB:<0x2cac3>
Raid1z-8TB:<0x2c9c8>
Raid1z-8TB:<0x2c9d2>
Raid1z-8TB:<0x2c9d3>
Raid1z-8TB:<0x2c9d6>
Raid1z-8TB:<0x2c9d8>
Raid1z-8TB:<0x2c9db>
Raid1z-8TB:<0x2c9dc>
Raid1z-8TB:<0x2c9de>
Raid1z-8TB:<0x2c9e0>
Raid1z-8TB:<0x2c9e1>
Raid1z-8TB:<0x2c9e2>
Raid1z-8TB:<0x2c9e5>
Raid1z-8TB:<0x2c9e7>
Raid1z-8TB:<0x2c9ee>
Raid1z-8TB:<0x2c9ef>
Raid1z-8TB:<0x2c9f3>
Raid1z-8TB:<0x2c9f4>
Raid1z-8TB:<0x2c9f7>
Raid1z-8TB:<0x2c9fc>
Raid1z-8TB:<0x2c9fe>
Raid1z-8TB:<0x2c9ff>

pool: zfs_root
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
zfs_root    ONLINE       0     0     0
  gpt/0     ONLINE       0     0     0

errors: No known data errors

SMART Status: smartstatus.png (screenshot attachment)
Drive Status: drivestatus.png (screenshot attachment)
 
The disk dropout is not good; if a disk is dropped, ZFS can't fix its damage and you have no redundancy left to correct errors on the remaining disks. You need to have the original device back in the pool and then run the scrub.

Before running the scrub, you might want to clear all current errors by issuing:
zpool clear <poolname>
 
sub.mesa: The disk dropout is not good; you need to have the original device back in there and then run the scrub. [...]

I did not drop the disk; it might've happened during a reboot. I'll reboot again.
When do I plug in the 2 new disks?

Can I stop the scrub, clear errors, and try starting scrub again?
What is a way to rescan for disks to have the dropped disk come back?
 
I rebooted and the drive came back

When do I put in the 2 new disks? I only have 1 extra SATA port? Do I just do them 1 at a time?
Right now it is resilvering... lol
 
How do I change the name of the samba host (especially if I have multiple ZFS Guru machines on the same network)?
 
Connect with SSH,
type: ee /etc/rc.conf
then search for and update the "hostname=..." line.
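For example, the line would end up looking something like this (the name is just an example); the new name takes effect after a reboot:

# /etc/rc.conf
hostname="zfsguru2"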
 
Is it possible to run a VM on top of ZFSguru? I would like to virtualize Windows Home Server mostly for the client backups and have it run on top of ZFSguru...

cwagz
 
I rebooted and the drive came back

When do I put in the 2 new disks? I only have 1 extra SATA port? Do I just do them 1 at a time? Right now it is resilvering... lol
If you perform a scrub on a DEGRADED pool, then ZFS cannot fix any errors it comes across. This may also cause disks to drop on RAID controllers, because ZFS does not fix the damage by writing to the device. ZFS can only do that if a redundant source exists. Your files may not have copies=2, which stores all files twice to protect against bad sectors/BER, but metadata always does. So it shouldn't kill ZFS metadata, but it may harm your data.

Instead, you should perform the scrub with all disks that have problems present in the pool. Then the damage will be fixed (the pending sector count goes away). You may still decide not to use those disks anymore, but starting the scrub with all drives present gives a high chance of full recovery, unless related blocks on several disks are corrupted/unreadable at the same time; then the affected file basically 'dies'.

You can stop the scrub on the Pools page, or with the command: zpool scrub -s <poolname>
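Putting that together, once the dropped disk is physically back, the sequence would look roughly like this (using the pool/label names from your output):

zpool scrub -s Raid1z-8TB       # stop the scrub running on the degraded pool
zpool clear Raid1z-8TB          # reset the accumulated error counters
zpool online Raid1z-8TB gpt/1   # only needed if the returned disk still shows as REMOVED
zpool scrub Raid1z-8TB          # scrub again with full redundancy available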
Is it possible to run a VM on top of ZFSguru? I would like to virtualize Windows Home Server mostly for the client backups and have it run on top of ZFSguru...
Actually, the VirtualBox VM solution would probably be the first available extension for the new 0.1.8 release. I can't guarantee it will be part of the release, but if not, I still expect it to be ready in February. This would allow VirtualBox with the phpvirtualbox frontend to be installed, letting you manage VirtualBox and the guest OSes through a flash application, so you can interact with the guest OS through the web-interface. Sleek, huh?

Other extensions will follow, but this can only be done after I implement an extension framework. So it may still take until late February, as a reasonable estimate. But it means more extensions can be made, either by me or by you guys, and shared and improved, greatly extending the functionality of ZFSguru systems. This is quite exciting for me, and I'm sure you guys will be too once it is working properly. :)
 
That is EXACTLY what I wanted to do as well... and it's something I intend to play with.

Apparently VirtualBox supports a raw zvol for storage, and there's a PHP web interface that could integrate nicely.

 
You can also just use files as VirtualBox storage. This would be faster, especially for writes, since they happen asynchronously. But it may be less safe, meaning that if the host crashes, the guest may end up with a corrupt filesystem. With journaling filesystems (NTFS, Ext3) this generally is not such a big deal, so depending on how important the VM is, consider using file-backed storage instead. Do not run the 'zero write' option though (not sure how it's labeled; something like claiming all the space in advance). On NTFS on a hard drive that would be good, but on ZFS, a copy-on-write filesystem, it is unnecessary and may actually harm performance.
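To illustrate the two options on the ZFS side (pool and dataset names are made up):

zfs create tank/vm                   # parent filesystem for VM storage
zfs create -V 40G tank/vm/whs-zvol   # zvol; shows up under /dev/zvol/tank/vm/whs-zvol for raw guest storage
zfs create tank/vm/images            # plain filesystem to hold file-backed .vdi/.vmdk images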
 
sub.mesa: Actually, the VirtualBox VM solution would probably be the first available extension for the new 0.1.8 release. [...]

Wow, this will be awesome. For now I blew out my dedicated pfSense box and am putting WHS on it. I will feed it storage via iSCSI. Once the vbox thing is working, I will most likely virtualize WHS on my ZFSguru box. I am sure I am going to miss my pfSense router in the meantime.
 
On the contrary:
A transfer of similar stuff to a Win7 single-disk 160GB box; notice how the strong dips are not there.

Unless I'm missing something... seems like you only solidified the statement about wifi being the culprit.
 
Try testing the standard 'tmpfs' share when booting from the livecd (it currently does not work for Root-on-ZFS, sorry). That share is memory-backed, so you would be writing to RAM instead. If you still see the dips over wifi, then it probably is a network issue rather than a ZFS performance issue.

In the upcoming release I will add a network benchmark (iperf) which lets you do bandwidth testing completely independently of ZFS.
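Until that is in the web-interface, the test itself is easy to run by hand if iperf is installed (the hostname is a placeholder):

iperf -s                  # on the ZFSguru box (server side)
iperf -c zfsguru -t 30    # on the client; reports raw TCP throughput over 30 seconds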
 