ZFSguru NAS fileserver project

Hi sub.mesa,

After playing with ZFSguru for several hours, I managed to install netatalk, and I love this system. Too bad ZFSguru does not have AFP settings built in (yet?) :) Maybe a feature request.

A question: I got an Intel SASUC8i (LSI 1068E) today - what do you mean by IT firmware? Can you give me some more information?

Thanks!
 
That Intel card is merely a rebadged LSI 3081E-R with the RAID (aka "IR") firmware loaded by default. For a number of reasons, you don't really want the RAID firmware on the card for ZFS use. So, it is recommended to reflash it with what LSI calls the "IT" (Initiator-Target, I think) firmware, available for download from the LSI Logic website.

This article explains it better and gives instructions on how to use LSI's firmware flash utility with the right switches to override the Intel firmware version check:

http://www.servethehome.com/flashing-intel-sasuc8i-lsi-firmware-guide/
 
What does this mean?

I have set up rsync in cron, and that seems to keep my data wherever I want it.
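
As a sketch, a cron entry for that kind of rsync job might look like this (schedule, host, and paths below are hypothetical):

```shell
# Hypothetical crontab entry: mirror a remote folder locally every night at 03:00.
# -a preserves permissions/timestamps; --partial keeps partially transferred
# files so interrupted copies can resume. Host and paths are examples only.
0 3 * * * rsync -a --partial backup@nas.example.com:/tank/data/ /backup/data/
```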

FTP, but with scheduled download jobs (I'm not terribly interested in rsync, as I've always had issues with it not picking back up on incomplete files), so that every day at 3pm, for example, it would synchronize a remote folder with a local one. I need some method of oversight for it, since I will be moving anywhere from 30-100GB over a slow home connection.
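
For what it's worth, a scheduled FTP mirror of that sort could be sketched with lftp in cron; lftp's "mirror -c" re-fetches incomplete files, which is the resume behavior being asked for (host, credentials, and paths here are made up):

```shell
# Hypothetical cron entry: at 15:00 daily, mirror a remote FTP folder into a
# local one; "mirror -c" continues files that were only partially downloaded.
0 15 * * * lftp -e "mirror -c /remote/folder /tank/local; quit" ftp://user:pass@ftp.example.com
```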

Sadly, I think your answer is why I won't be able to use this awesome-looking filesystem; I simply am not willing to learn yet another OS just to manage tasks that are trivial on Windows/OS X.
 
Great link - just did the flashing, successfully. Luckily I had a USB floppy drive with a floppy already inserted, so no need to install Windows for that. But I admit that I will use Nexenta rather than FreeBSD/ZFSguru, because it's easier for me to expand.
 
sub_mesa - It's really cool that you're actively developing this. I've been trying to spread the ZFS gospel since I discovered it.

Currently using a modified Solaris distribution called Eon... fairly limited, no GUI. Does ZFSGuru have the mpt_sas driver that FreeNAS lacks? I'm using a Supermicro AOC-USAS2-L8E and the driver support was the largest stumbling block for me before I got it working.
 
Both the stable and experimental LiveCD should work with your controller, as they include the MPT Fusion 2.0 'mps' driver, which FreeNAS most likely doesn't have. Note that the driver is not complete yet and is still being developed by both LSI and FreeBSD people. There may be some issues requiring you to reboot, as the error recovery control isn't finished yet. That shouldn't be a real issue, especially if you just want to test.

For real storage, you may want to wait a bit for a more mature driver; when it's available i can upload a new system version and you can upgrade your system to it. But these issues aside, your controller should work out-of-the-box with a recent ZFSguru LiveCD.
 
@sub.mesa: looks like kfoda wrote his own script for ZFS pool spindown. It's pretty impressive, to me at least, since spinning down an idle ZFS pool is the last thing keeping me from really embracing ZFS and moving off of hardware RAID. I asked gea the same question for Solaris, but do you think you could mod this to implement it in ZFSguru?

He posted his script in the "other" forum. :)

http://forums.servethehome.com/showthread.php?36-FreeBSD-vault&p=163&viewfull=1#post163
 
Sure i could rig something like that, but could you explain to me again (you did earlier i recall) what the issues are?

Right now you should have two methods:
  • spindown using camcontrol/atacontrol spindown command, with an inactivity timer
  • spindown using APM; where the drive itself spins down after being inactive for some time (no precision setting possible; but rough setting is possible)
If you do that on all disks in a pool, they should all go down at about the same moment, and up at about the same moment when the pool is accessed again. Is that the behavior you want?
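
As a rough sketch of those two methods on FreeBSD of that era (device names are examples; which tool applies depends on whether the disk attaches via ata(4) or CAM):

```shell
# Method 1: driver-managed inactivity timer; spin ad4 down after 600 seconds
# of no I/O (ata(4) disks).
atacontrol spindown ad4 600

# Method 2-ish: tell a CAM-attached disk to enter standby right now; with APM
# enabled, the drive's own internal timer handles subsequent spindowns.
camcontrol standby da0
```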

Personally, i was thinking of implementing spindown in a new 'Power Management' tab on the Disks page, where you can:
  • spindown a disk right now
  • set inactivity timer to x seconds/minutes of inactivity
  • enable APM if the drive supports it
  • see the current status of the drive (spinning or spun down)

Then on the Pools page, i was thinking of making a pool wide 'sleep' button, which puts the pool to sleep; by exporting it so writes don't happen again, and then spinning the disks down. Then you can wake the array up again by importing it (would need something easy for this).

The FreeBSD devs may think using such a script is kind of messy. For example, it does not address I/O done on the drives outside of ZFS, and it doesn't really have an inactivity timer either. If you are writing and there happens to be a few seconds of rest when the script checks the zpool iostat output, it would assume the pool is inactive and spin down the disks; so this could occur at random.
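
To illustrate that race, a zpool-iostat-polling script is essentially a guess; something like this hypothetical sketch (pool and device names invented):

```shell
# Naive spindown poller: every 60s, sample pool I/O and spin disks down when
# a sample shows zero operations. A few quiet seconds during active use is
# indistinguishable from real inactivity, so this can trigger at random.
while sleep 60; do
    # last line of "zpool iostat tank 1 2" = current read+write operations
    ops=$(zpool iostat tank 1 2 | tail -1 | awk '{print $4 + $5}')
    if [ "$ops" = "0" ]; then
        camcontrol standby da0
        camcontrol standby da1
    fi
done
```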

FreeBSD has a kernel-level implementation for this, so why not use that? It would internally keep track of inactivity by monitoring the device I/O directly; a much cleaner solution i think. But please share your comments/concerns! I'll be writing this functionality soon and it will appear in 0.1.8.
 
Sure i could rig something like that, but could you explain to me again (you did earlier i recall) what the issues are?

It's simple. Right now I use hardware RAID adapters which let me set an inactivity timeout value (60 minutes), after which the array(s) will spin down. Most of my arrays are archival usage pattern, so spindown is valuable because: A) sometimes weeks or even months can go by between accesses to most arrays, B) when I do access an array it is for a short period of time - minutes to a few hours, C) I live in an earthquake zone.

The question is how to duplicate that functionality for ZFS, with zfsGURU, or whatever. I don't think APM is the route to take with this. Not all drives support it and I don't see how having the drives independently managing their own APM timers is a good thing - isn't the whole point of a striped array that drives remain dumb?

FreeBSD has a kernel-level implementation for this, why not use that?

I'm not asking because I need to know; I'm suggesting it as a valuable new feature for your project. Like many people coming from the Windows world, I know relatively little about FreeBSD, and I'm not compelled to learn it when everything I need to do already works in Windows, albeit at a higher price tag.

Did you look at the script at the link I posted?
 
Not all your drives support APM; alright!

But that still means you can use the normal atacontrol/camcontrol spindown commands, which have an inactivity timer unrelated to APM.

With APM, the disk spins itself down using an internal inactivity timer.
With FreeBSD spindown, the FreeBSD device driver issues the spindown after inactivity.
With that script, it would be more like a 'guess' whether the pool is in use or not. It may work 98% of the time, but the 2% spindown while you're using it may not be that sexy. Not the cleanest solution, anyway.

So back to your situation: if you set all your disks to, say, 600 seconds (10 minutes) spindown, wouldn't that do what you want? That works on your non-APM-capable drives and works independently of APM; this way the device driver manages the spindown, which is sexier i think.

Anyway, the spindown is not the problem i think. If disk4 spins down a few seconds or even minutes later, that's not really bad, is it? Not that that would be the case, though; perhaps with APM it would. But my point is that only the spin-up is critical: unless you have power concerns, you probably don't want the disks to spin up one by one, but rather all at once. ZFS should take care of that.

The remaining issue is whether ZFS does some light I/O on your sleeping pool, waking it and idling again. That would be bad. My 'export and go to sleep' button for pools would cope with that, and make the pool temporarily unavailable until you wake it manually on the Pools page.
 
Hi all,

I'm trying to create an iSCSI disk that I can mount from a Server 2008R2 machine, but I'm really not having much luck with it at the moment.

Here's what I've done so far:

1) Created a new ZVOL called 'ISCSITEST'
2) Configured Services > iSCSI > Quick Configuration as below:
[screenshot: iscsi1.png]

3) Started iSCSI on the Services page.

So far I haven't touched anything in the 'Main configuration' or 'Authentication' sections yet.

At this point I've opened up the iSCSI Initiator on 2008 and tried to run a target discovery, but it's just not finding anything.

When I look at the logs page on the ZFSguru box, though, I see this:
Code:
Jan  9 09:17:12 zfsguru istgt[1410]: Login(discovery) from iqn.1991-05.com.microsoft:vm-srv2k8-dc01 (192.168.1.5) on (192.168.1.8:3260,1), ISID=400001370000, TSIH=1, CID=1, HeaderDigest=off, DataDigest=off

At this point I'm pretty much lost. I've only configured iSCSI disks a few times before, and always on systems that are much simpler to configure [e.g. a QNAP NAS], so I'm sure I'm probably doing something simple wrong ... but I just have no idea what it is!

Anybody able to steer me onto the right path with what I might be doing wrong?
 
Did you restart the iSCSI service after creating that iSCSI disk? If not, could you try that?
 
The iSCSI service wasn't even enabled at all until after I created the disk.

Just to be sure I've tried restarting it again anyway, and it's still the same result.
 
Okay ... so more weirdness ...

I just restarted both servers [ZFS & 2008], and now the iSCSI drive is being detected on the 2008 machine, but it's only showing up with a 1GB capacity [the ZVOL is set to 250 GB].

It works fine ... I've just formatted it and tried copying a bunch of files to/from it several times without any problems, but it's only showing 1GB capacity.

I've just deleted the ZVOL and created another one [50GB this time] and the same thing has happened again.

Any ideas why that might be?
 
I had the exact same problem. I am a complete noob with FreeBSD, but I was able to figure out that the size of the target is set in the istgt.conf file at /usr/local/etc/istgt. The same parameters also appear under "Main configuration" of the iSCSI tab. Towards the bottom of that page, the target I created was listed with a size of 1GB at the end of the LUN0 entry. I edited the size in the LUN0 entry (in my case to 500GB), saved changes, and then restarted the iSCSI service. After doing this, the target appeared as 500GB (I am using the Microsoft initiator in Server 2008 R2 and also Win7), and the new size is listed in the istgt.conf file.
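
For reference, the relevant part of istgt.conf looks something like the fragment below (the pool, ZVOL, and target names here are invented; only the trailing size on the LUN0 line matters for this fix):

```
[LogicalUnit1]
  TargetName disk1
  Mapping PortalGroup1 InitiatorGroup1
  UnitType Disk
  LUN0 Storage /dev/zvol/tank/ISCSITEST 500GB
```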

I'm not sure why ZFSGuru defaults to a size of 1GB when other sizes are used. So, this would appear to be a bug.
 
So I finally put my system together and was running some benchmarks but ran across the following errors.

On boot I get this message, indicating an ACPI checksum error:
[screenshot: booterror.jpg]


While running my first benchmark I became aware, by looking at the SMART data for my drives, that I had a bad cable. I tried stopping the benchmark but the system froze and I had to restart. Upon rebooting I could see the zfsgurubenchmark pool in my pools list, faulted because disconnecting one of my backplanes had left drives missing. I deleted that pool, tried running the benchmark again, and received this message:

Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: 4096 bytes
Number of disks: 10 disks
disk 1: label/R2C2.nop
disk 2: label/R1C3.nop
disk 3: label/R4C2.nop
disk 4: label/R4C1.nop
disk 5: label/R2C1.nop
disk 6: label/R2C4.nop
disk 7: label/R1C4.nop
disk 8: label/R1C1.nop
disk 9: label/R1C2.nop
disk 10: label/R2C3.nop


* Test Settings: TS32; SECT4096;
* Tuning: KMEM=12g; AMIN=4g; AMAX=6g;
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 8 disks: cinvalid vdev specification
use '-f' to override the following errors:
/dev/label/R4C1.nop is part of potentially active pool 'gurubenchmarkpool'

* ERROR during "zpool create"; got return value 1
cannot open 'gurubenchmarkpool': no such pool

I deleted the pool so I am not sure how to correct this issue.

I was able to get the benchmark running again by selecting 10 other disks, and everything was going fine until I went to sleep while the benchmark was running; I woke up to this message and a frozen WebUI and console:
[screenshot: crasherror.jpg]


I am now trying to run more benchmarks and cannot, because I am getting the same error message: "/dev/label/(insert drive name here) is part of potentially active pool 'gurubenchmarkpool'".

I see the following on the import pools page:
Code:
  pool: gurubenchmarkpool
    id: 10152584200383203197
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

	gurubenchmarkpool  ONLINE
	  mirror        ONLINE
	    label/R5C1  ONLINE
	    label/R5C2  ONLINE
	    label/R5C3  ONLINE
	    label/R5C4  ONLINE

  pool: gurubenchmarkpool
    id: 9310017500265073382
 state: UNAVAIL (DESTROYED)
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
	devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

	gurubenchmarkpool  UNAVAIL  insufficient replicas
	  raidz1        UNAVAIL  insufficient replicas
	    label/R4C3  UNAVAIL  cannot open
	    label/R4C4  UNAVAIL  cannot open
	    label/R2C2  UNAVAIL  cannot open
	    label/R1C3  ONLINE

  pool: gurubenchmarkpool
    id: 10817255840134594273
 state: UNAVAIL (DESTROYED)
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

	gurubenchmarkpool  UNAVAIL  insufficient replicas
	  label/R1C4  ONLINE
	  label/R1C1  ONLINE
	  label/R1C2  ONLINE
	  label/R2C3  ONLINE
	  label/R4C1  ONLINE

  pool: gurubenchmarkpool
    id: 2062787335801055440
 state: UNAVAIL (DESTROYED)
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

	gurubenchmarkpool  UNAVAIL  insufficient replicas
	  mirror        ONLINE
	    label/R4C2  ONLINE
	    label/R2C4  ONLINE

  pool: gurubenchmarkpool
    id: 16679198228447500156
 state: UNAVAIL (DESTROYED)
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
	devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

	gurubenchmarkpool  UNAVAIL  insufficient replicas
	  raidz2        UNAVAIL  insufficient replicas
	    label/R5C1  UNAVAIL  cannot open
	    label/R5C2  UNAVAIL  cannot open
	    label/R5C3  UNAVAIL  cannot open
	    label/R5C4  UNAVAIL  cannot open
	    label/R4C3  ONLINE
	    label/R4C4  ONLINE
	    label/R2C2  ONLINE

I was able to import and destroy one of the pools in an effort to "reset" it, but on the other pools I get a message similar to this:
ERROR: On command [ zpool import -D 2062787335801055440 ] i got return value 1 with output text:

cannot import 'gurubenchmarkpool': one or more devices is currently unavailable

Any ideas on what I can do to fix this?
 
Hi all :)

When I try to start the iSCSI service I get this error:

"ERROR: Got return value 1 when trying to onestart service "iSCSI" with output:
/usr/local/etc/rc.d/istgt: WARNING: /usr/local/etc/istgt/istgt.conf is not readable. /usr/local/etc/rc.d/istgt: WARNING: failed precmd routine for istgt"
-------------

Sub.Mesa: Will you make a GUI for adding Samba users?

Can anyone tell me how to connect to an OS X 10.6 computer over NFS properly?
I tried following some guides, but when I try to open the share I get "you don't have the rights to read/write" (or something like that).
 
I ran into that problem a few times while I was testing things too.

All I needed to do was go to Services > iSCSI and create the actual disks first. Once I'd done that I was able to start the iSCSI process without any problems.

Try that and let us know if it solves things for you ...
 
Sorry, i was kind of busy the past days, but let's try to catch up:

@YoungEinstein:
I also replied to your query on the ZFSguru forums. The 1GB ZVOL size is an issue i still need to resolve; it also does not handle fractions properly, though that may be a separate bug related to ZVOLs and iSCSI-target configuration. I'll resolve this in 0.1.8; until then, please set the size manually through the iSCSI configuration on the Services page; simply change the 1GB string to, say, 20GB. Note that "GB" here actually means GiB, but this is the syntax that the istgt.conf configuration uses so i have to obey it. It does not handle fractions in the configuration file.

@MilesTeg
You do not have to destroy the 'destroyed' zpools; only the one which is not destroyed yet. The easiest way to resolve this is probably to create a new zpool covering all the disks you used for benchmarking, just in a RAID0 configuration. Once you get to the dangerous command execution page, add the -f parameter to force overwriting the disks; in essence overruling a safety mechanism designed to prevent you from accidentally selecting the wrong disks while an active pool is on them.

example command:
zpool create blablablabla

change to:
zpool create -f blablablabla

So just add a -f directly after 'create', making sure there are spaces before and after the -f. After you have created this pool, destroy it again and your issue should be gone.

@svar
You may need to reset your iSCSI configuration; you can do so on the Services->iSCSI page. This will overwrite the current (non-existent/erroneous) config and replace it with a default skeleton. Then add the disk you want and start the service.
 
@YoungEinstein: thanks, that worked :)
But I got the same problem as everyone else: if I make an iSCSI drive of 100GiB, OS X or Linux sees it as 1.1GiB.
If only I could work out what Sub.Mesa is saying in this post, maybe I can edit it manually :)
 
tcs2tx has actually figured out the answer to that one:

http://hardforum.com/showpost.php?p=1036699034&postcount=56

From the ZFSguru interface, click 'Services' > 'iSCSI' > 'Main Configuration'.

Right down the bottom of the list you'll see a section labelled 'LUN0', which will say something along the lines of: Storage /dev/zvol/GPTPOOL/ESXITEST 1GB

[Obviously your pool names etc. will be different]

The important part is just to change the size at the end of that field to match the size of the iSCSI disk you've created.

Hit save, restart the iSCSI service, and you should be all set to go!
 
@YoungEinstein: working well now :)
I'm missing a GUI tab for adding Samba users, but I guess it's coming later :)


Can someone tell me how to connect successfully to OS X 10.6 over NFS?
I get connected, but can't read or write for some reason (or sometimes can't even open the share).... :confused:
------------

EDIT:
I'm doing some speed tests now, comparing to FreeNAS.
The hardware is the same.
On FreeNAS I was lucky if I got over 20-25MB/sec over Samba.
On ZFSguru, I get 50-80MB/sec over Samba.
(Of course the speed depends on what I copy.)
(Client: OS X 10.6.6.)

:D
 
Can anyone seed the 1.7 torrent file? No one online with it yesterday or today.

Cheers
Paul
 
I am seeding it now. I was seeding it all last week but had to reinstall WHS. I can't wait until I can run ZFSGuru!
 
I will be leaving it up on my EU seedbox indefinitely.


I tossed an install of this on my spare storage dev box that I've been playing with... getting some odd benchmark results... I am rerunning the tests now and will post the results when they finish.

Configuration is a dual-core Conroe Xeon 3075, 8GB RAM, a 1068E controller connected to six 1TB 7200.12 Seagates, and a 60GB Callisto SSD on the onboard controller for cache testing.

I wasn't able to get iSCSI working right with it last night though... I set it all up on the ZFS end properly (as far as I can tell), but my Windows 7 test environment would not see it. Some kind of trick I'm missing? I haven't set up iSCSI on anything past Server 2003; maybe there is something finicky with Windows 7?
 
There should always be seeds for the torrents, at least the main server is sharing non-stop since early this month.

If you continue to have problems with the torrents and/or downloading, please tell me, since that is something i would have to address. But my general thought is that torrents give you multiple sources to download from, so you don't have to rely on a central server being online.

I still use HTTP downloads for the system images when installing on the System->Install page; i would very much like to make the transition from direct HTTP downloads to fully torrent-based downloads.

Thanks for everybody seeding!

@Tau: iSCSI configuration could still use some polishing. But i heard about having to reboot Windows before you could actually 'discover' the iSCSI target; did you try that already? It should work with Windows 7; i know people who configured it that way.
 
I reread the thread and saw that a couple of reboots solved the issue; I will try that here shortly after the benchmarks are done... you're going to be scratching your head when you see these...
 
@sub.mesa: Any news on when we can expect the next, more full-featured release? I'm sitting on a WHS box right now, but I'm getting fed up with the performance (especially the once-per-hour balancing). I'm ready to try something new, but am waiting until things are a little more settled with the ZFSguru project.
 
Hi again :)

I finally found out how to connect to ZFSguru from OS X over NFS, but I can only read from the shares.
Am I doing something wrong?
 
@svar
To facilitate both NFS and Samba access to the same share/data, i use the 'nfs' user at uid 1000 and gid 1000. When using NFS on Linux like Ubuntu, this would be the preferred option.

However, judging from your problems, i assume your user id (uid) is not 1000; please check this and see if you can mount with a different user id. If you only want to use NFS and not Samba for that data, you can use a command like this to grant unlimited access to that data:

chmod -R 777 /path/to/directory/for/nfs

Samba can also be configured in this way, using write mask and directory mask configuration options.
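
In smb.conf that would look roughly like the fragment below (the share name and path are examples; note Samba calls these options create mask and directory mask):

```
[storage]
   path = /tank/storage
   read only = no
   create mask = 0777
   directory mask = 0777
```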

But permissions management is still lacking in my interface, so you would have to find some way to make this work for you until i have something smoother for managing permissions and such. That stuff is tricky to say the least, especially cross-platform!


@Khadgar
I perfectly understand your question: when is the right time to join the ZFSguru bandwagon?

I'm not sure i have a clear answer to that. I did try hard to make 0.1.7 future-proof, meaning that you could keep upgrading an existing 0.1.7 install to much later versions. It has most basic features covered and uses stable FreeBSD and ZFS code. So it may be a viable option already, depending on your needs and your expertise in tweaking stuff to your liking.

The usefulness of ZFSguru will improve with each version, and it can be improved a lot beyond what is provided right now; i have a lot of ideas! But i have to take on some hard things first and create a foundation of extensions that supplement functionality in the interface and system. Without that, i can't continue with things like adding a torrent client without manageability becoming messy.

That does mean, though, that the next 0.1.8 release might take a while; probably late February. It will be a huge improvement in some areas, though, and starts the availability of software extensions. It's still going to be a lot of work, i know that. And i have to catch up with (real) work, too. Anyway, i love this project and will continue to devote my tender care to it. :)
 
@sub.mesa
You phrased my question better than I did :) Thanks for that info. With that said, I think I'll likely wait until the 0.1.8 release to jump in.
I am a tinkerer, but not confident enough in my FreeBSD skills to do all the things necessary for security, user access, etc.

Needless to say, I'm anxiously awaiting your future releases and am following this thread and your site closely.
 
Thanks Sub.

I think it's an issue at my end or something; I can't connect to or see anyone seeding it. I've tried from a few different locations. Not sure if it's to do with the tracker address or what.

Cheers
Paul
 
6x 1TB 7200.12 Seagates on a 1068E, dual-core Xeon 3075, 8GB RAM..

Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 6 disks
disk 1: gpt/da0
disk 2: gpt/da1
disk 3: gpt/da2
disk 4: gpt/da3
disk 5: gpt/da4
disk 6: gpt/da5


* Test Settings: TS32; AL512; 
* Tuning: KMEM=11.9g; AMIN=4g; AMAX=5.9g; 
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	191 MiB/sec	191 MiB/sec	193 MiB/sec	= 192 MiB/sec avg
WRITE:	158 MiB/sec	170 MiB/sec	170 MiB/sec	= 166 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	195 MiB/sec	195 MiB/sec	195 MiB/sec	= 195 MiB/sec avg
WRITE:	170 MiB/sec	168 MiB/sec	168 MiB/sec	= 169 MiB/sec avg

Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	195 MiB/sec	194 MiB/sec	194 MiB/sec	= 195 MiB/sec avg
WRITE:	164 MiB/sec	171 MiB/sec	168 MiB/sec	= 168 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	191 MiB/sec	190 MiB/sec	191 MiB/sec	= 191 MiB/sec avg
WRITE:	123 MiB/sec	121 MiB/sec	122 MiB/sec	= 122 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	194 MiB/sec	195 MiB/sec	192 MiB/sec	= 194 MiB/sec avg
WRITE:	131 MiB/sec	133 MiB/sec	128 MiB/sec	= 131 MiB/sec avg

Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	175 MiB/sec	175 MiB/sec	176 MiB/sec	= 175 MiB/sec avg
WRITE:	137 MiB/sec	136 MiB/sec	136 MiB/sec	= 136 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	192 MiB/sec	192 MiB/sec	191 MiB/sec	= 192 MiB/sec avg
WRITE:	58 MiB/sec	56 MiB/sec	53 MiB/sec	= 56 MiB/sec avg

Now testing RAIDZ2 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	193 MiB/sec	192 MiB/sec	191 MiB/sec	= 192 MiB/sec avg
WRITE:	89 MiB/sec	92 MiB/sec	94 MiB/sec	= 92 MiB/sec avg

Now testing RAIDZ2 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	191 MiB/sec	192 MiB/sec	192 MiB/sec	= 192 MiB/sec avg
WRITE:	110 MiB/sec	106 MiB/sec	107 MiB/sec	= 107 MiB/sec avg

Now testing RAID1 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	192 MiB/sec	191 MiB/sec	191 MiB/sec	= 191 MiB/sec avg
WRITE:	45 MiB/sec	45 MiB/sec	45 MiB/sec	= 45 MiB/sec avg

Now testing RAID1 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	196 MiB/sec	196 MiB/sec	195 MiB/sec	= 196 MiB/sec avg
WRITE:	34 MiB/sec	36 MiB/sec	36 MiB/sec	= 35 MiB/sec avg

Now testing RAID1 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	197 MiB/sec	198 MiB/sec	197 MiB/sec	= 197 MiB/sec avg
WRITE:	28 MiB/sec	30 MiB/sec	31 MiB/sec	= 30 MiB/sec avg

Now testing RAID1+0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	192 MiB/sec	192 MiB/sec	192 MiB/sec	= 192 MiB/sec avg
WRITE:	89 MiB/sec	89 MiB/sec	87 MiB/sec	= 88 MiB/sec avg

Now testing RAID1+0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	197 MiB/sec	197 MiB/sec	197 MiB/sec	= 197 MiB/sec avg
WRITE:	86 MiB/sec	88 MiB/sec	87 MiB/sec	= 87 MiB/sec avg

Now testing RAID0 configuration with 1 disks: cWmRd@cWmRd@cWmRd@
READ:	117 MiB/sec	118 MiB/sec	118 MiB/sec	= 117 MiB/sec avg
WRITE:	120 MiB/sec	112 MiB/sec	120 MiB/sec	= 117 MiB/sec avg

Now testing RAID0 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	188 MiB/sec	186 MiB/sec	183 MiB/sec	= 186 MiB/sec avg
WRITE:	166 MiB/sec	169 MiB/sec	165 MiB/sec	= 167 MiB/sec avg

Now testing RAID0 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	195 MiB/sec	193 MiB/sec	191 MiB/sec	= 193 MiB/sec avg
WRITE:	169 MiB/sec	166 MiB/sec	171 MiB/sec	= 169 MiB/sec avg

Now testing RAIDZ configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	120 MiB/sec	120 MiB/sec	120 MiB/sec	= 120 MiB/sec avg
WRITE:	80 MiB/sec	79 MiB/sec	80 MiB/sec	= 80 MiB/sec avg

Now testing RAIDZ configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	177 MiB/sec	177 MiB/sec	177 MiB/sec	= 177 MiB/sec avg
WRITE:	109 MiB/sec	111 MiB/sec	106 MiB/sec	= 109 MiB/sec avg

Now testing RAIDZ2 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	120 MiB/sec	119 MiB/sec	118 MiB/sec	= 119 MiB/sec avg
WRITE:	56 MiB/sec	55 MiB/sec	55 MiB/sec	= 55 MiB/sec avg

Now testing RAID1 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	160 MiB/sec	159 MiB/sec	159 MiB/sec	= 159 MiB/sec avg
WRITE:	86 MiB/sec	88 MiB/sec	86 MiB/sec	= 87 MiB/sec avg

Now testing RAID1 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	180 MiB/sec	181 MiB/sec	180 MiB/sec	= 180 MiB/sec avg
WRITE:	58 MiB/sec	60 MiB/sec	59 MiB/sec	= 59 MiB/sec avg

Done

A hell of a lot slower than I was expecting... Something weird going on for sure.
 
@Tau

What motherboard are you using?
Those numbers are all about the same. Shot in the dark, but could it be that you have your controller card in a non-8x PCI-E slot?
Did you flash the IT firmware onto your card?
 
It's in an HP ML110 G5, 3x PCI-E slots, 8x mechanical... not sure what the electrical width is... but that shouldn't be the cap, as even a single PCI-E 2.0 lane is capable of 500MB/s; so even assuming it's a 4x slot, that's 2GB/s I can push through it. But I will check when I get the chance.

It's the controller from one of the old servers, so it SHOULD be in IT mode, though the firmware on it is no doubt old... I will reflash with the latest over the next day or so to confirm.
 
I'm using 0.1.7 Root-on-ZFS (specs in sig). I shut my server off today, and when I booted it back up I found something odd.
[screenshot: 48jpo.png]

All but my OS disk (80GB) and one of my 2TB drives (2TB-3) lost their labels; I had them labelled 2TB-1 through 2TB-4 using GEOM. They also lost the 4K-sector .nop fix and now say 512 B. I attached another image of my pools page.
[screenshot: Sh8Gu.png]

If you need anymore info or pics just say so.

@sub.mesa: things I would like to see added are a backup page, and the ability to make Samba shares without using the Filesystems page. I wanted to share an already-existing folder on one of my pools but couldn't figure out how through the web GUI, so I used SSH and echo to add a new share to smb.conf one line at a time.
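
The kind of share block I mean looks roughly like this (the share name and path are examples):

```
[media]
   path = /tank/media
   read only = no
   browseable = yes
```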
 
A backup page is on my wish list too.
I wish to use my Drobo as a backup unit; still trying to figure out the best way to do that.
 
Count me in for a backup page. I was hoping to use my WHS as a backup of my ZFSguru server, at least for a little while. I would also welcome any suggestions on how to manually sync the data in the meantime.
 