Testing ZFS RAID-Z performance with 4K sector drives

I've got four of these Samsung F4s that I was hoping to use in a RAID-Z, and then eventually get two more and migrate to a RAID-Z2.

So is the general consensus that these drives should be avoided for ZFS?

Is speed really the only problem?
 
I can't comment on the F4 drives, but you do know ZFS doesn't support RAID-level migration, right? You won't be able to migrate a 4-drive RAID-Z into a 6-drive RAID-Z2 without offloading the data and recreating the ZFS pool.
 
If you already have them, I'd say use them. As you can tell from the sequential benches, they really aren't that bad performance-wise. They do need some tweaks, like disabling NCQ via /boot/loader.conf.
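I'm not sure of the exact loader.conf line myself; if your disks sit behind the newer AHCI/CAM stack (ada devices), a runtime sketch of the same idea is to cap the tagged-queue depth per disk with camcontrol (the device name ada0 is just an example):
Code:
camcontrol tags ada0 -N 1    # cap outstanding tags at 1, effectively disabling NCQ for this disk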

Maybe over time there will be a performance hit due to the variable stripe size, but I'm not sure; these are fairly new drives.

How exactly do you plan to migrate using only two more disks?
 
Remember too that there is a performance hit when running an even number of drives in RAID-Z, as seen in numerous users' benchmarks. You're better off with 3, 5, 7...
killa, am I missing something? I don't see significant increases with the new Hitachis. I've never been able to trust IBM/Hitachi after the "Deathstars": I had 4 GXPs, a 100% failure rate, returned them, and the replacements failed after months as well. Good paperweights, if you ask me.
 
black6spz, you are right; there really isn't a significant performance increase. I was expecting a bit more after all the bad-mouthing these drives have taken. They actually seem to do a lot better than the WD Green 4K-sector drives.

I've never been able to trust IBM/Hitachi after the "Deathstars": I had 4 GXPs, a 100% failure rate, returned them, and the replacements failed after months as well. Good paperweights, if you ask me.

Talking to a psychiatrist may alleviate your trust issues over time. :p

The Hitachis have proven to be reliable. The Deathstar era is over...
 
I can't comment on the F4 drives, but you do know ZFS doesn't support RAID-level migration, right? You won't be able to migrate a 4-drive RAID-Z into a 6-drive RAID-Z2 without offloading the data and recreating the ZFS pool.

Yes, I know it would involve dumping the data and recreating the pool.

I've never been able to trust IBM/Hitachi after the "Deathstars": I had 4 GXPs, a 100% failure rate, returned them, and the replacements failed after months as well. Good paperweights, if you ask me.

When you find a good shrink, let me know. I need to talk to them as well.

I know the "Deathstar" days are behind them; I just haven't been able to let it go.

Back in '03, I worked on a Dell server that had a 5-disk RAID 5 array consisting of Hitachi SCSI drives. One of the drives failed, and another failed while the array was rebuilding.
 
Quick question - If I already have data on my RAID, can I enable this "GNOP" to start doing the 4K sector emulation?

How do I tell if my drives already have the 4K sector size? I have 2TB WD Green EADS drives.

I would like to help do some testing if I can ....

I have 2 x 2.2GHz quad cores with 8GB of RAM in this box.

Thanks.
 
Ok, so if I do the gnop ... will I lose all of my data? How does that work?

Thanks...
 
Hey xp0!

Yes, you can use GNOP (geom_nop) on existing disks with existing data that are part of a ZFS pool. This will not destroy any existing data on the disk; it will only make the sector size 4K instead of the 512 bytes normally used. However, the command may fail if the disk is in "active use", meaning ZFS is loaded and has 'locked' write access to the physical disks. So you can only use GNOP on an exported array, on disks which have no ZFS pool yet, or when ZFS is not running at all.

I recommend exporting your pool:
zpool export mypool

Now the disks won't be in use, and you can apply gnop to your disks:
gnop create -S 4096 /dev/label/disk1

After this command you will have a /dev/label/disk1.nop device as well as the normal /dev/label/disk1 device. Both point to the same data, but the .nop device will present a 4K sector size to the filesystem. When you have done this for all your disks, you can import the array again:
zpool import mypool
zpool status mypool
(replace mypool with your pool name). Now it should find the disks with your .nop additions attached to them. Try it. It could be that it refuses to accept the new .nop devices and just uses the normal non-GNOP devices with the normal sector size (512 bytes).
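As a minimal sketch of the whole sequence for a pool with several labelled disks (the pool and label names are just examples):
Code:
zpool export mypool
for d in disk1 disk2 disk3 disk4; do
    gnop create -S 4096 /dev/label/$d
done
zpool import mypool
zpool status mypool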

I must warn you that I'm not sure whether applying GNOP to disks with existing ZFS data will provide the same performance effect as doing this when creating the pool. It is possible that, when creating the pool, ZFS stores some variables which depend on the sector size. If this is true, the performance effect may only show its maximum potential when you apply GNOP to the disks before you create a pool, and thus before you have any data on the disks that you would like to save. So I can't guarantee it will have the same effect if you want to keep the existing pool with all its data; but hey, you can try. It should be safe to do, though I must warn that it's easy to make mistakes when messing with disks like this, so be careful if that data is important and you don't have a backup!
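One way to see whether the pool really uses 4K sectors is to check the ashift value in the pool configuration (ashift=12 means 4K, ashift=9 means 512 bytes). With the pool imported, zdb should be able to show it, something like:
Code:
zdb mypool | grep ashift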

If you want to do benchmarking, you may want to use the latest ZFSguru LiveCD and its automated Benchmark feature. For the URL and more info, check the FreeBSD ZFS Web-GUI thread (last page). With 8GB of RAM I'm very interested in the results!
 
Hi sub.mesa ...

Are you still looking for any more systems to test?

I'm just putting the finishing touches on a new ZFS build with 10 x F4EG 2TB drives in it if you'd like to have a play with it.

Specs are:

Motherboard: Intel Server Board S5520HCR
CPU: Intel Xeon L5630
PSU: Seasonic X-750
RAM: KVR1066D3D4R7S/4GI [x 2]
SAS Controller: Intel SASUC8I
 
If only we could get manufacturers to release firmware that makes drives report the correct physical sector size... It sucks that a POS OS (XP) is causing headaches for everyone.
 
For anyone interested...

I used OpenIndiana (zpool v28) with some new Samsung HD203UIs (4k) and applied a patch to the zpool command to allow ashift = 12. I also used the magical number of drives in RAIDZ and tested performance with bonnie++.

You can check out the results here if you're interested :)
http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/
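For anyone reading this later: newer OpenZFS releases expose this as a property at pool-creation time, so no patched binary is needed; it looks something like the following (the pool and disk names are just examples):
Code:
zpool create -o ashift=12 tank raidz da0 da1 da2 da3 da4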
Hey Digital, do you think this fix/patch would also help WD20EARS owners?

Thanks in advance
 
Yes it does, but be careful: it can corrupt your entire pool. If you do this, do it on a fresh pool. Some people who tried this on their full pool lost their x TB of data and were not very amused.

ZFS v28 is also not a stable ZFS version, but it can't hurt to test what the performance is like in this configuration! Not that shocking, to be honest.

On the FreeBSD platform, using automated GNOPs might be a safer route if you want some additional performance, though the difference is not shocking.

@young_einstein: you can test this yourself with the ZFSguru distribution; it now has an automated benchmark feature giving you nice graphs. It takes a long time to run, though. You may want to wait until the newer preview2.iso, which I'm still working on; it has ZFS v15 and some other improvements.
 
For anyone interested...

I used OpenIndiana (zpool v28) with some new Samsung HD203UIs (4k) and applied a patch to the zpool command to allow ashift = 12. I also used the magical number of drives in RAIDZ and tested performance with bonnie++.

You can check out the results here if you're interested :)
http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/

Isn't it the general understanding that the issue with 4K drives isn't with large sequential I/O, but rather with smaller random I/O?
 
Just to make sure I have it right: you need the right number of drives if using 4K drives in RAID-Z and RAID-Z2... but there is no such issue with RAID 1?
 
Those are all very decent scores, killagorilla187, thanks for testing it!

If your sequential scores don't degrade too much, you can use the recommended values:
min=4
max=32
(32 because SATA NCQ has a maximum of 32 outstanding requests)
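If these min/max values correspond to the vfs.zfs.vdev.min_pending and vfs.zfs.vdev.max_pending loader tunables of FreeBSD 8.x ZFS (my assumption), setting them would look like this in /boot/loader.conf:
Code:
# cap the outstanding I/Os per vdev to the recommended values above
vfs.zfs.vdev.min_pending="4"
vfs.zfs.vdev.max_pending="32"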
 
Sub.mesa, thanks for your suggestions!
Disclaimer: to avoid any confusion, these results have NOTHING to do with 4K drives:
With min=4 and max=32 I get around 457MB/s read and write (these were almost exactly the same)
With min=1 and max=32 I get about 450MB/s write and 465MB/s read
With min=1 and max=1 I get 450MB/s write and 562MB/s read
I used the metaslab patch and upgraded ZFS to v15, and am getting a nice boost in read performance, roughly 65-75MB/s.
I'm thinking I need to do some random I/O testing to see if NCQ will benefit my setup, although random I/O would need to be considerably better to justify taking a 100MB/s hit on sequential read performance.
 
My specs:

Supermicro AOC-SAT2-MV8 x2
Supermicro C2SBX
Hitachi 2 TB x16
E8400
8 GB ram
ZFSguru-0.1.7-preview2.iso


The results:

(Benchmark graphs were attached here; the raw output follows below.)
Code:
ZFSGURU-benchmark, version 1
Test size: 64.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 5 seconds
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03

* Test Settings: TS64; 
* Tuning: none
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Secure Erase. Now testing RAID0 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	605 MiB/sec	630 MiB/sec	604 MiB/sec	= 613 MiB/sec avg
WRITE:	416 MiB/sec	416 MiB/sec	414 MiB/sec	= 415 MiB/sec avg
raidtest.read:	12626	12825	12460	= 12637 IOps ( ~814 MiB/sec )
raidtest.write:	10636	10678	10546	= 10620 IOps ( ~684 MiB/sec )
raidtest.mixed:	11044	10918	11197	= 11053 IOps ( ~712 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 9 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	612 MiB/sec	606 MiB/sec	611 MiB/sec	= 609 MiB/sec avg
WRITE:	419 MiB/sec	419 MiB/sec	420 MiB/sec	= 419 MiB/sec avg
raidtest.read:	12774	12869	12449	= 12697 IOps ( ~818 MiB/sec )
raidtest.write:	10660	10575	10632	= 10622 IOps ( ~684 MiB/sec )
raidtest.mixed:	11269	10975	11113	= 11119 IOps ( ~716 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 10 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	638 MiB/sec	643 MiB/sec	638 MiB/sec	= 640 MiB/sec avg
WRITE:	429 MiB/sec	425 MiB/sec	430 MiB/sec	= 428 MiB/sec avg
raidtest.read:	12499	12890	12629	= 12672 IOps ( ~816 MiB/sec )
raidtest.write:	10656	10611	10629	= 10632 IOps ( ~685 MiB/sec )
raidtest.mixed:	11233	11009	11126	= 11122 IOps ( ~716 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 11 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	658 MiB/sec	659 MiB/sec	660 MiB/sec	= 659 MiB/sec avg
WRITE:	423 MiB/sec	423 MiB/sec	424 MiB/sec	= 423 MiB/sec avg
raidtest.read:	12397	12385	12338	= 12373 IOps ( ~797 MiB/sec )
raidtest.write:	10703	10820	10641	= 10721 IOps ( ~691 MiB/sec )
raidtest.mixed:	11082	11106	10851	= 11013 IOps ( ~709 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	419 MiB/sec	409 MiB/sec	412 MiB/sec	= 413 MiB/sec avg
WRITE:	305 MiB/sec	305 MiB/sec	304 MiB/sec	= 305 MiB/sec avg
raidtest.read:	12550	12823	12252	= 12541 IOps ( ~808 MiB/sec )
raidtest.write:	10555	10550	10543	= 10549 IOps ( ~679 MiB/sec )
raidtest.mixed:	10998	10928	10981	= 10969 IOps ( ~706 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 9 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	450 MiB/sec	449 MiB/sec	446 MiB/sec	= 448 MiB/sec avg
WRITE:	325 MiB/sec	323 MiB/sec	325 MiB/sec	= 325 MiB/sec avg
raidtest.read:	12870	11975	12623	= 12489 IOps ( ~804 MiB/sec )
raidtest.write:	10573	10566	10595	= 10578 IOps ( ~681 MiB/sec )
raidtest.mixed:	10996	10958	10934	= 10962 IOps ( ~706 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 10 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	427 MiB/sec	422 MiB/sec	424 MiB/sec	= 424 MiB/sec avg
WRITE:	321 MiB/sec	322 MiB/sec	319 MiB/sec	= 320 MiB/sec avg
raidtest.read:	12307	12448	12670	= 12475 IOps ( ~804 MiB/sec )
raidtest.write:	10504	10586	10544	= 10544 IOps ( ~679 MiB/sec )
raidtest.mixed:	10946	11076	10906	= 10976 IOps ( ~707 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 11 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	440 MiB/sec	436 MiB/sec	430 MiB/sec	= 435 MiB/sec avg
WRITE:	337 MiB/sec	337 MiB/sec	327 MiB/sec	= 334 MiB/sec avg
raidtest.read:	12876	12271	12653	= 12600 IOps ( ~812 MiB/sec )
raidtest.write:	10637	10494	10497	= 10542 IOps ( ~679 MiB/sec )
raidtest.mixed:	10805	10906	10979	= 10896 IOps ( ~702 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	376 MiB/sec	368 MiB/sec	379 MiB/sec	= 374 MiB/sec avg
WRITE:	254 MiB/sec	262 MiB/sec	261 MiB/sec	= 259 MiB/sec avg
raidtest.read:	12760	12306	12710	= 12592 IOps ( ~811 MiB/sec )
raidtest.write:	10496	10407	10518	= 10473 IOps ( ~675 MiB/sec )
raidtest.mixed:	10882	10936	11110	= 10976 IOps ( ~707 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 9 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	415 MiB/sec	415 MiB/sec	419 MiB/sec	= 416 MiB/sec avg
WRITE:	272 MiB/sec	269 MiB/sec	274 MiB/sec	= 272 MiB/sec avg
raidtest.read:	12988	12228	12564	= 12593 IOps ( ~811 MiB/sec )
raidtest.write:	10497	10515	10520	= 10510 IOps ( ~677 MiB/sec )
raidtest.mixed:	10953	11005	10995	= 10984 IOps ( ~707 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 10 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	455 MiB/sec	452 MiB/sec	450 MiB/sec	= 452 MiB/sec avg
WRITE:	272 MiB/sec	273 MiB/sec	272 MiB/sec	= 272 MiB/sec avg
raidtest.read:	12353	12531	12855	= 12579 IOps ( ~810 MiB/sec )
raidtest.write:	10614	10461	10596	= 10557 IOps ( ~680 MiB/sec )
raidtest.mixed:	11076	10993	10960	= 11009 IOps ( ~709 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 11 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	401 MiB/sec	401 MiB/sec	406 MiB/sec	= 403 MiB/sec avg
WRITE:	283 MiB/sec	285 MiB/sec	281 MiB/sec	= 283 MiB/sec avg
raidtest.read:	12655	12333	12674	= 12554 IOps ( ~809 MiB/sec )
raidtest.write:	10411	10538	10488	= 10479 IOps ( ~675 MiB/sec )
raidtest.mixed:	10975	10919	10893	= 10929 IOps ( ~704 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	577 MiB/sec	575 MiB/sec	580 MiB/sec	= 577 MiB/sec avg
WRITE:	63 MiB/sec	64 MiB/sec	63 MiB/sec	= 64 MiB/sec avg
raidtest.read:	12775	12757	11612	= 12381 IOps ( ~797 MiB/sec )
raidtest.write:	10425	10583	10462	= 10490 IOps ( ~676 MiB/sec )
raidtest.mixed:	10893	11061	10819	= 10924 IOps ( ~704 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 9 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	601 MiB/sec	602 MiB/sec	610 MiB/sec	= 604 MiB/sec avg
WRITE:	58 MiB/sec	57 MiB/sec	57 MiB/sec	= 58 MiB/sec avg
raidtest.read:	12779	12841	12704	= 12774 IOps ( ~823 MiB/sec )
raidtest.write:	10490	10495	10493	= 10492 IOps ( ~676 MiB/sec )
raidtest.mixed:	10914	10900	10844	= 10886 IOps ( ~701 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 10 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	626 MiB/sec	618 MiB/sec	631 MiB/sec	= 625 MiB/sec avg
WRITE:	53 MiB/sec	53 MiB/sec	53 MiB/sec	= 53 MiB/sec avg
raidtest.read:	12819	12777	12806	= 12800 IOps ( ~825 MiB/sec )
raidtest.write:	10601	10483	10450	= 10511 IOps ( ~677 MiB/sec )
raidtest.mixed:	11009	10996	10883	= 10962 IOps ( ~706 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 11 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	641 MiB/sec	629 MiB/sec	642 MiB/sec	= 637 MiB/sec avg
WRITE:	48 MiB/sec	48 MiB/sec	48 MiB/sec	= 48 MiB/sec avg
raidtest.read:	12692	12803	12620	= 12705 IOps ( ~818 MiB/sec )
raidtest.write:	10490	10420	10455	= 10455 IOps ( ~673 MiB/sec )
raidtest.mixed:	10890	11043	10920	= 10951 IOps ( ~705 MiB/sec )

Secure Erase. Now testing RAID1+0 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	444 MiB/sec	455 MiB/sec	443 MiB/sec	= 447 MiB/sec avg
WRITE:	234 MiB/sec	234 MiB/sec	236 MiB/sec	= 235 MiB/sec avg
raidtest.read:	12040	12250	12302	= 12197 IOps ( ~786 MiB/sec )
raidtest.write:	10437	10572	10585	= 10531 IOps ( ~678 MiB/sec )
raidtest.mixed:	10813	10856	10943	= 10870 IOps ( ~700 MiB/sec )

Secure Erase. Now testing RAID1+0 configuration with 10 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	551 MiB/sec	562 MiB/sec	567 MiB/sec	= 560 MiB/sec avg
WRITE:	244 MiB/sec	243 MiB/sec	242 MiB/sec	= 243 MiB/sec avg
raidtest.read:	12469	12589	12745	= 12601 IOps ( ~812 MiB/sec )
raidtest.write:	10411	10464	10543	= 10472 IOps ( ~674 MiB/sec )
raidtest.mixed:	10980	10959	10936	= 10958 IOps ( ~706 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	424 MiB/sec	422 MiB/sec	420 MiB/sec	= 422 MiB/sec avg
WRITE:	327 MiB/sec	328 MiB/sec	329 MiB/sec	= 328 MiB/sec avg
raidtest.read:	12315	12403	12276	= 12331 IOps ( ~794 MiB/sec )
raidtest.write:	10529	10644	10399	= 10524 IOps ( ~678 MiB/sec )
raidtest.mixed:	11029	10967	10594	= 10863 IOps ( ~700 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	491 MiB/sec	482 MiB/sec	476 MiB/sec	= 483 MiB/sec avg
WRITE:	375 MiB/sec	378 MiB/sec	375 MiB/sec	= 376 MiB/sec avg
raidtest.read:	12636	12293	12515	= 12481 IOps ( ~804 MiB/sec )
raidtest.write:	10564	10565	10597	= 10575 IOps ( ~681 MiB/sec )
raidtest.mixed:	11059	10890	10983	= 10977 IOps ( ~707 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	542 MiB/sec	542 MiB/sec	532 MiB/sec	= 539 MiB/sec avg
WRITE:	399 MiB/sec	403 MiB/sec	400 MiB/sec	= 401 MiB/sec avg
raidtest.read:	12208	12490	12435	= 12377 IOps ( ~797 MiB/sec )
raidtest.write:	10519	10514	10562	= 10531 IOps ( ~678 MiB/sec )
raidtest.mixed:	11028	10725	10863	= 10872 IOps ( ~700 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 7 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	559 MiB/sec	565 MiB/sec	561 MiB/sec	= 562 MiB/sec avg
WRITE:	408 MiB/sec	413 MiB/sec	408 MiB/sec	= 410 MiB/sec avg
raidtest.read:	12363	12290	12259	= 12304 IOps ( ~793 MiB/sec )
raidtest.write:	10577	10487	10616	= 10560 IOps ( ~680 MiB/sec )
raidtest.mixed:	11006	11094	11050	= 11050 IOps ( ~712 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	339 MiB/sec	347 MiB/sec	341 MiB/sec	= 342 MiB/sec avg
WRITE:	221 MiB/sec	220 MiB/sec	221 MiB/sec	= 221 MiB/sec avg
raidtest.read:	12408	12761	12430	= 12533 IOps ( ~807 MiB/sec )
raidtest.write:	10558	10420	10382	= 10453 IOps ( ~673 MiB/sec )
raidtest.mixed:	10918	10859	10842	= 10873 IOps ( ~700 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	437 MiB/sec	421 MiB/sec	430 MiB/sec	= 429 MiB/sec avg
WRITE:	262 MiB/sec	270 MiB/sec	261 MiB/sec	= 264 MiB/sec avg
raidtest.read:	12761	12799	12628	= 12729 IOps ( ~820 MiB/sec )
raidtest.write:	10443	10424	10533	= 10466 IOps ( ~674 MiB/sec )
raidtest.mixed:	10983	10931	10913	= 10942 IOps ( ~705 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	409 MiB/sec	401 MiB/sec	421 MiB/sec	= 410 MiB/sec avg
WRITE:	287 MiB/sec	280 MiB/sec	283 MiB/sec	= 283 MiB/sec avg
raidtest.read:	12590	12639	12741	= 12656 IOps ( ~815 MiB/sec )
raidtest.write:	10392	10484	10391	= 10422 IOps ( ~671 MiB/sec )
raidtest.mixed:	10802	10966	10853	= 10873 IOps ( ~700 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 7 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	397 MiB/sec	391 MiB/sec	396 MiB/sec	= 395 MiB/sec avg
WRITE:	293 MiB/sec	285 MiB/sec	292 MiB/sec	= 290 MiB/sec avg
raidtest.read:	12772	12497	12165	= 12478 IOps ( ~804 MiB/sec )
raidtest.write:	10390	10427	10510	= 10442 IOps ( ~673 MiB/sec )
raidtest.mixed:	10682	10955	10979	= 10872 IOps ( ~700 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	235 MiB/sec	236 MiB/sec	236 MiB/sec	= 236 MiB/sec avg
WRITE:	153 MiB/sec	154 MiB/sec	156 MiB/sec	= 154 MiB/sec avg
raidtest.read:	12845	12743	12733	= 12773 IOps ( ~823 MiB/sec )
raidtest.write:	10650	10618	10567	= 10611 IOps ( ~683 MiB/sec )
raidtest.mixed:	11004	11069	10989	= 11020 IOps ( ~710 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	343 MiB/sec	344 MiB/sec	346 MiB/sec	= 345 MiB/sec avg
WRITE:	188 MiB/sec	193 MiB/sec	191 MiB/sec	= 190 MiB/sec avg
raidtest.read:	12542	12807	12739	= 12696 IOps ( ~818 MiB/sec )
raidtest.write:	10498	10550	10498	= 10515 IOps ( ~677 MiB/sec )
raidtest.mixed:	11086	10945	10949	= 10993 IOps ( ~708 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	417 MiB/sec	421 MiB/sec	407 MiB/sec	= 415 MiB/sec avg
WRITE:	230 MiB/sec	233 MiB/sec	233 MiB/sec	= 232 MiB/sec avg
raidtest.read:	12609	12697	12901	= 12735 IOps ( ~820 MiB/sec )
raidtest.write:	10528	10462	10181	= 10390 IOps ( ~669 MiB/sec )
raidtest.mixed:	11051	10920	10878	= 10949 IOps ( ~705 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 7 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	383 MiB/sec	383 MiB/sec	386 MiB/sec	= 384 MiB/sec avg
WRITE:	243 MiB/sec	244 MiB/sec	244 MiB/sec	= 244 MiB/sec avg
raidtest.read:	12784	12743	12675	= 12734 IOps ( ~820 MiB/sec )
raidtest.write:	10465	10333	10398	= 10398 IOps ( ~670 MiB/sec )
raidtest.mixed:	10978	10719	10755	= 10817 IOps ( ~697 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	335 MiB/sec	332 MiB/sec	335 MiB/sec	= 334 MiB/sec avg
WRITE:	92 MiB/sec	93 MiB/sec	93 MiB/sec	= 93 MiB/sec avg
raidtest.read:	12448	12872	12443	= 12587 IOps ( ~811 MiB/sec )
raidtest.write:	10369	10528	10452	= 10449 IOps ( ~673 MiB/sec )
raidtest.mixed:	10785	10958	10777	= 10840 IOps ( ~698 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	404 MiB/sec	395 MiB/sec	408 MiB/sec	= 402 MiB/sec avg
WRITE:	88 MiB/sec	88 MiB/sec	88 MiB/sec	= 88 MiB/sec avg
raidtest.read:	12880	12717	12579	= 12725 IOps ( ~820 MiB/sec )
raidtest.write:	10420	10579	10414	= 10471 IOps ( ~674 MiB/sec )
raidtest.mixed:	10873	11093	10879	= 10948 IOps ( ~705 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	495 MiB/sec	493 MiB/sec	490 MiB/sec	= 493 MiB/sec avg
WRITE:	80 MiB/sec	80 MiB/sec	79 MiB/sec	= 79 MiB/sec avg
raidtest.read:	12610	12596	12269	= 12491 IOps ( ~805 MiB/sec )
raidtest.write:	10481	10433	10474	= 10462 IOps ( ~674 MiB/sec )
raidtest.mixed:	10796	10933	10926	= 10885 IOps ( ~701 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 7 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	563 MiB/sec	562 MiB/sec	561 MiB/sec	= 562 MiB/sec avg
WRITE:	71 MiB/sec	71 MiB/sec	71 MiB/sec	= 71 MiB/sec avg
raidtest.read:	12878	12642	12653	= 12724 IOps ( ~820 MiB/sec )
raidtest.write:	10454	10349	10407	= 10403 IOps ( ~670 MiB/sec )
raidtest.mixed:	10908	10981	10872	= 10920 IOps ( ~703 MiB/sec )

Secure Erase. Now testing RAID1+0 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	238 MiB/sec	237 MiB/sec	236 MiB/sec	= 237 MiB/sec avg
WRITE:	177 MiB/sec	179 MiB/sec	178 MiB/sec	= 178 MiB/sec avg
raidtest.read:	12239	12615	12352	= 12402 IOps ( ~799 MiB/sec )
raidtest.write:	10515	10498	10431	= 10481 IOps ( ~675 MiB/sec )
raidtest.mixed:	10906	10962	10945	= 10937 IOps ( ~704 MiB/sec )

Secure Erase. Now testing RAID1+0 configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	355 MiB/sec	355 MiB/sec	355 MiB/sec	= 355 MiB/sec avg
WRITE:	226 MiB/sec	226 MiB/sec	226 MiB/sec	= 226 MiB/sec avg
raidtest.read:	12216	12467	12794	= 12492 IOps ( ~805 MiB/sec )
raidtest.write:	10464	10443	10377	= 10428 IOps ( ~672 MiB/sec )
raidtest.mixed:	10932	10857	10767	= 10852 IOps ( ~699 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 1 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	119 MiB/sec	119 MiB/sec	119 MiB/sec	= 119 MiB/sec avg
WRITE:	97 MiB/sec	97 MiB/sec	98 MiB/sec	= 97 MiB/sec avg
raidtest.read:	12482	12892	12219	= 12531 IOps ( ~807 MiB/sec )
raidtest.write:	10623	10548	10629	= 10600 IOps ( ~683 MiB/sec )
raidtest.mixed:	11038	11061	11050	= 11049 IOps ( ~712 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 2 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	234 MiB/sec	233 MiB/sec	235 MiB/sec	= 234 MiB/sec avg
WRITE:	184 MiB/sec	183 MiB/sec	185 MiB/sec	= 184 MiB/sec avg
raidtest.read:	12275	12455	12816	= 12515 IOps ( ~806 MiB/sec )
raidtest.write:	10540	10719	10504	= 10587 IOps ( ~682 MiB/sec )
raidtest.mixed:	11047	10910	10992	= 10983 IOps ( ~707 MiB/sec )

Secure Erase. Now testing RAID0 configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	334 MiB/sec	342 MiB/sec	333 MiB/sec	= 337 MiB/sec avg
WRITE:	267 MiB/sec	267 MiB/sec	267 MiB/sec	= 267 MiB/sec avg
raidtest.read:	12689	12236	12218	= 12381 IOps ( ~797 MiB/sec )
raidtest.write:	10472	10602	10495	= 10523 IOps ( ~678 MiB/sec )
raidtest.mixed:	10925	10984	11124	= 11011 IOps ( ~709 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 2 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	119 MiB/sec	120 MiB/sec	120 MiB/sec	= 120 MiB/sec avg
WRITE:	94 MiB/sec	94 MiB/sec	95 MiB/sec	= 94 MiB/sec avg
raidtest.read:	12592	12913	12835	= 12780 IOps ( ~823 MiB/sec )
raidtest.write:	10414	10562	10436	= 10470 IOps ( ~674 MiB/sec )
raidtest.mixed:	11029	11109	10935	= 11024 IOps ( ~710 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	233 MiB/sec	231 MiB/sec	230 MiB/sec	= 231 MiB/sec avg
WRITE:	168 MiB/sec	169 MiB/sec	168 MiB/sec	= 168 MiB/sec avg
raidtest.read:	11867	12294	12333	= 12164 IOps ( ~784 MiB/sec )
raidtest.write:	10450	10563	10601	= 10538 IOps ( ~679 MiB/sec )
raidtest.mixed:	11026	11003	10927	= 10985 IOps ( ~708 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	120 MiB/sec	120 MiB/sec	120 MiB/sec	= 120 MiB/sec avg
WRITE:	90 MiB/sec	90 MiB/sec	91 MiB/sec	= 90 MiB/sec avg
raidtest.read:	12378	12445	12332	= 12385 IOps ( ~798 MiB/sec )
raidtest.write:	10466	10352	10229	= 10349 IOps ( ~667 MiB/sec )
raidtest.mixed:	10936	10844	10513	= 10764 IOps ( ~693 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 2 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	120 MiB/sec	119 MiB/sec	119 MiB/sec	= 119 MiB/sec avg
WRITE:	96 MiB/sec	96 MiB/sec	96 MiB/sec	= 96 MiB/sec avg
raidtest.read:	12502	12401	12216	= 12373 IOps ( ~797 MiB/sec )
raidtest.write:	10551	10483	10598	= 10544 IOps ( ~679 MiB/sec )
raidtest.mixed:	11003	11011	10915	= 10976 IOps ( ~707 MiB/sec )

Secure Erase. Now testing RAID1 configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	246 MiB/sec	248 MiB/sec	247 MiB/sec	= 247 MiB/sec avg
WRITE:	95 MiB/sec	94 MiB/sec	95 MiB/sec	= 94 MiB/sec avg
raidtest.read:	12496	12608	12805	= 12636 IOps ( ~814 MiB/sec )
raidtest.write:	10506	10315	10396	= 10405 IOps ( ~670 MiB/sec )
raidtest.mixed:	10914	10853	10925	= 10897 IOps ( ~702 MiB/sec )

Done

I do wonder why the test stopped at 11 disks, when I told it to use all 16.
 
submesa:

What record size are you using when measuring the sequential read/writes? I was wondering if it is too small, since the scaling in RAID 0 looks poor above 5 or 6 disks. I guess the record size should be at least 2MB, and preferably something like 16MB.
 
Recordsize is a very special thing; it's not analogous to stripe size. The maximum recordsize is fixed at 128KiB; I would have loved to test higher values, though. It also doesn't allow values like 96KiB, which could otherwise fix some odd data-disk combinations having poor performance in 4K emulation mode.

In the default 128KiB config, when doing sequential I/O you get:

pool: test; recordsize=128KiB
vdev1: (128KiB) gets spread over all members (RAID-Z/RAID-Z2)
vdev2: (128KiB) gets spread over all members (RAID-Z/RAID-Z2)
vdev3: (128KiB) gets spread over all members (RAID-Z/RAID-Z2)
.. etc

So for the pool this is the 'stripe size', treating each RAID-Z vdev as if it were a single disk. The main pool is essentially a hardcoded RAID 0; each vdev you add will be striped.

The problem I see is when the 128KiB gets spread over too many disks:
128KiB on a 10-disk RAID-Z2 = 128KiB / 8 data disks = 16KiB per disk. Ideally you want drives handling chunks of 32KiB - 128KiB for optimal performance.
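A quick way to eyeball the per-disk chunk for a full 128KiB record with the pool widths tested above (data disks = total disks minus parity):
Code:
echo $(( 128 / (5 - 1) ))     # 5-disk RAID-Z   -> 32 (KiB per disk, inside the 32-128KiB sweet spot)
echo $(( 128 / (10 - 2) ))    # 10-disk RAID-Z2 -> 16 (KiB per disk, below it)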

ZFS scaling is not linear. There are some patches that address performance issues but are not included in the ZFS code currently shipped with FreeBSD 8.1. I did add one of those patches, the matured metaslab patch, to the latest 0.1.7-preview2 ISO, though.

Once I have enough test results, I hope to submit them for analysis by some ZFS people and look at things that can be improved. It would probably be the FreeBSD people who can tweak ZFS without changing any on-disk format.

Some other ZFS news: PJD is working hard on ZFS v28; in the next week or so there should be a new ZFS v28 patch that addresses some issues and completes v28 boot support, which is crucial for my project. At that stage, I could build a 9-CURRENT ISO (very experimental) and see how performance scales there.

While we want to optimize performance, I'm perfectly okay with trading in some performance for data security instead. ZFS adds some overhead, such as the ZIL, which limits write performance on mechanical HDDs, etc.

When I extend the benchmark used in my distribution, I will also include the ability to configure a SLOG and/or L2ARC, so you can test the influence this has on performance.

But the biggest 'issue' with ZFS performance, I would say, is the transaction groups. This is being worked on, and in half a year FreeBSD 9.0-RELEASE should have an excellent-performing ZFS considering the features it brings, including all the features that Sun/Oracle have released thus far.
 
I think you misunderstood my question. Whenever you read from or write to a drive, you transfer some amount at once. If you are doing 4KB random I/O, then your record size is 4KB. If you are doing 512KB random I/O, then your record size is 512KB.

If you are doing sequential I/O, then you usually choose some large record size, like 16MB. I was asking how much you are transferring in your sequential I/O tests.
 
You mean the transfer block size, or the bs= parameter of dd? That's 1 megabyte (1MiB). You can set this too in the web-interface Benchmark form, but there's no reason to increase it beyond 1MiB due to read-ahead and write buffering. Just do not go below 128KiB chunks or it may add some CPU overhead; still, the effect is minimal unless you choose really small chunks.
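For reference, the kind of sequential test being discussed boils down to something like the dd pair below; the mount point and the 32GiB test size are assumptions on my part, not the exact ZFSguru script:
Code:
dd if=/dev/zero of=/mypool/zero.000 bs=1m count=32768    # sequential write, 1MiB blocks, 32GiB total
dd if=/mypool/zero.000 of=/dev/null bs=1m                # sequential read-back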
 
You have not explained why the RAID-0 scaled so poorly for MrLie above about 5 or 6 disks. Can you tell what he used for the transfer size from the log he posted? I think it would be worth trying it with a large transfer size, like 16MiB, to see if the RAID-0 scales any better.
 
My specs:
The results:
I do wonder why the test stopped at 11 disks, when I told it to use all 16.
Two bugs I see in your output that I have fixed:
bug 1) disk counts of 12 and beyond were not run
bug 2) raidtest currently gives meaningless cached scores because the zvol is not populated; I restored the original behavior, but now write zeroes instead. That should restore the intended test results.

I will be making this available as an upgrade soon; it's a small upgrade, just one mouse click, and you'll have at least these two issues covered. I also want to hide the sharing options for ZVOLs that became apparent when using iSCSI.

Your test results are somewhat low. Try the tuning option on the System->Tuning page, click the 'Reset to recommended' button, and reboot. Then re-run the tests. It's best to wait until my new version is ready, though, so you can test with 12+ disks as well.
 
You have not explained why the RAID-0 scaled so poorly for MrLie above about 5 or 6 disks. Can you tell what he used for the transfer size from the log he posted? I think it would be worth trying it with a large transfer size, like 16MiB, to see if the RAID-0 scales any better.
The Tuning line in the output shows any deviations from the default settings. If he had changed to a 16MiB blocksize, it would show BS=16m or something similar, so he's using the 1MiB input+output blocksize. Note that smaller blocksizes are aggregated, so it's just a matter of CPU utilization here, I believe.
 
In reply to some of the above posts about my benchmark: I used the default settings, which had a 1 MiB blocksize.

I have to say that I expected somewhat higher reads with 16 drives in RAID 0 than what I got. Or am I wrong to believe that each PCI-X slot has 1064 MB/sec of bandwidth; do all the PCI-X slots in fact share that 1064 MB/sec?
 
PCI and PCI-X may have higher latencies with parallel I/O, when multiple requests are sent at the same time. PCI-X is also still half-duplex, so it can only send or receive at any one moment, not both at the same time like PCIe.

But while you have 8GiB of RAM, your kernel memory is by default set to 2.4GiB, and the ZFS ARC to 0.5GiB; that's pretty low. It should be stable as hell, but give low performance. Only after you do tuning will you see better performance scaling.
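As a rough sketch of the kind of /boot/loader.conf tuning being referred to (illustrative values for an 8GiB machine; the exact values the 'Reset to recommended' button writes may differ):
Code:
vm.kmem_size="7G"
vm.kmem_size_max="7G"
vfs.zfs.arc_max="5G"       # leave headroom for the kernel and other processes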

Do you see any error messages for your controller in the dmesg output after or while doing heavy benchmarking? dmesg is what appears on the console (direct-attached monitor). You can also type "dmesg" on the command line to get the latest dmesg output.
 
Here is the output from my dmesg while running your benchmark.

Code:
zfsguru.bsd$ dmesg
Copyright (c) 1992-2010 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
	The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 8.1-RELEASE-p1 #2: Sat Nov  6 02:55:42 CET 2010
    root@slash:/usr/obj/usr/src/sys/GENERIC amd64
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel(R) Core(TM)2 Duo CPU     E8400  @ 3.00GHz (3000.02-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x1067a  Family = 6  Model = 17  Stepping = 10
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x408e3fd<SSE3,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,XSAVE>
  AMD Features=0x20100800<SYSCALL,NX,LM>
  AMD Features2=0x1<LAHF>
  TSC: P-state invariant
real memory  = 8589934592 (8192 MB)
avail memory = 8168710144 (7790 MB)
ACPI APIC Table: <PTLTD  	 APIC  >
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 1 package(s) x 2 core(s)
 cpu0 (BSP): APIC ID:  0
 cpu1 (AP): APIC ID:  1
ioapic0 <Version 2.0> irqs 0-23 on motherboard
ioapic1 <Version 2.0> irqs 24-47 on motherboard
kbd1 at kbdmux0
acpi0: <PTLTD 	 XSDT> on motherboard
acpi0: [ITHREAD]
acpi0: Power Button (fixed)
Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x1008-0x100b on acpi0
cpu0: <ACPI CPU> on acpi0
cpu1: <ACPI CPU> on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
em0: <Intel(R) PRO/1000 Network Connection 7.0.5> port 0x1820-0x183f mem 0xd2e00000-0xd2e1ffff,0xd2e24000-0xd2e24fff irq 16 at device 25.0 on pci0
em0: Using MSI interrupt
em0: [FILTER]
em0: Ethernet address: 00:30:48:dd:b7:6c
uhci0: <Intel 82801I (ICH9) USB controller> port 0x1840-0x185f irq 16 at device 26.0 on pci0
uhci0: [ITHREAD]
usbus0: <Intel 82801I (ICH9) USB controller> on uhci0
uhci1: <Intel 82801I (ICH9) USB controller> port 0x1860-0x187f irq 17 at device 26.1 on pci0
uhci1: [ITHREAD]
usbus1: <Intel 82801I (ICH9) USB controller> on uhci1
uhci2: <Intel 82801I (ICH9) USB controller> port 0x1880-0x189f irq 18 at device 26.2 on pci0
uhci2: [ITHREAD]
usbus2: <Intel 82801I (ICH9) USB controller> on uhci2
ehci0: <Intel 82801I (ICH9) USB 2.0 controller> mem 0xd2e26000-0xd2e263ff irq 18 at device 26.7 on pci0
ehci0: [ITHREAD]
usbus3: EHCI version 1.0
usbus3: <Intel 82801I (ICH9) USB 2.0 controller> on ehci0
pci0: <multimedia, HDA> at device 27.0 (no driver attached)
pcib1: <ACPI PCI-PCI bridge> irq 16 at device 28.0 on pci0
pci5: <ACPI PCI bus> on pcib1
pcib2: <ACPI PCI-PCI bridge> at device 0.0 on pci5
pci6: <ACPI PCI bus> on pcib2
atapci0: <Marvell 88SX6081 SATA300 controller> port 0x2000-0x20ff mem 0xd2200000-0xd22fffff irq 24 at device 1.0 on pci6
atapci0: [ITHREAD]
ata2: <ATA channel 0> on atapci0
ata2: [ITHREAD]
ata3: <ATA channel 1> on atapci0
ata3: [ITHREAD]
ata4: <ATA channel 2> on atapci0
ata4: [ITHREAD]
ata5: <ATA channel 3> on atapci0
ata5: [ITHREAD]
ata6: <ATA channel 4> on atapci0
ata6: [ITHREAD]
ata7: <ATA channel 5> on atapci0
ata7: [ITHREAD]
ata8: <ATA channel 6> on atapci0
ata8: [ITHREAD]
ata9: <ATA channel 7> on atapci0
ata9: [ITHREAD]
atapci1: <Marvell 88SX6081 SATA300 controller> port 0x2400-0x24ff mem 0xd2300000-0xd23fffff irq 28 at device 7.0 on pci6
atapci1: [ITHREAD]
ata10: <ATA channel 0> on atapci1
ata10: [ITHREAD]
ata11: <ATA channel 1> on atapci1
ata11: [ITHREAD]
ata12: <ATA channel 2> on atapci1
ata12: [ITHREAD]
ata13: <ATA channel 3> on atapci1
ata13: [ITHREAD]
ata14: <ATA channel 4> on atapci1
ata14: [ITHREAD]
ata15: <ATA channel 5> on atapci1
ata15: [ITHREAD]
ata16: <ATA channel 6> on atapci1
ata16: [ITHREAD]
ata17: <ATA channel 7> on atapci1
ata17: [ITHREAD]
uhci3: <Intel 82801I (ICH9) USB controller> port 0x18a0-0x18bf irq 23 at device 29.0 on pci0
uhci3: [ITHREAD]
usbus4: <Intel 82801I (ICH9) USB controller> on uhci3
uhci4: <Intel 82801I (ICH9) USB controller> port 0x18c0-0x18df irq 22 at device 29.1 on pci0
uhci4: [ITHREAD]
usbus5: <Intel 82801I (ICH9) USB controller> on uhci4
uhci5: <Intel 82801I (ICH9) USB controller> port 0x18e0-0x18ff irq 18 at device 29.2 on pci0
uhci5: [ITHREAD]
usbus6: <Intel 82801I (ICH9) USB controller> on uhci5
ehci1: <Intel 82801I (ICH9) USB 2.0 controller> mem 0xd2e26400-0xd2e267ff irq 23 at device 29.7 on pci0
ehci1: [ITHREAD]
usbus7: EHCI version 1.0
usbus7: <Intel 82801I (ICH9) USB 2.0 controller> on ehci1
pcib3: <ACPI PCI-PCI bridge> at device 30.0 on pci0
pci17: <ACPI PCI bus> on pcib3
vgapci0: <VGA-compatible display> mem 0xd1000000-0xd1ffffff,0xc0000000-0xcfffffff,0xd0000000-0xd0ffffff irq 21 at device 1.0 on pci17
fwohci0: <Texas Instruments TSB43AB22/A> mem 0xd2004000-0xd20047ff,0xd2000000-0xd2003fff irq 22 at device 3.0 on pci17
fwohci0: [ITHREAD]
fwohci0: OHCI version 1.10 (ROM=1)
fwohci0: No. of Isochronous channels is 4.
fwohci0: EUI64 00:30:48:00:00:20:3d:32
fwohci0: Phy 1394a available S400, 2 ports.
fwohci0: Link S400, max_rec 2048 bytes.
firewire0: <IEEE1394(FireWire) bus> on fwohci0
dcons_crom0: <dcons configuration ROM> on firewire0
dcons_crom0: bus_addr 0x291c000
fwe0: <Ethernet over FireWire> on firewire0
if_fwe0: Fake Ethernet address: 02:30:48:20:3d:32
fwe0: Ethernet address: 02:30:48:20:3d:32
fwip0: <IP over FireWire> on firewire0
fwip0: Firewire address: 00:30:48:00:00:20:3d:32 @ 0xfffe00000000, S400, maxrec 2048
fwohci0: Initiate bus reset
fwohci0: fwohci_intr_core: BUS reset
fwohci0: fwohci_intr_core: node_id=0x00000000, SelfID Count=1, CYCLEMASTER mode
atapci2: <ITE IT8213F UDMA133 controller> port 0x3020-0x3027,0x3014-0x3017,0x3018-0x301f,0x3010-0x3013,0x3000-0x300f irq 23 at device 4.0 on pci17
atapci2: [ITHREAD]
ata18: <ATA channel 0> on atapci2
ata18: [ITHREAD]
isab0: <PCI-ISA bridge> at device 31.0 on pci0
isa0: <ISA bus> on isab0
atapci3: <Intel ICH9 SATA300 controller> port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x1c30-0x1c3f,0x1c20-0x1c2f at device 31.2 on pci0
ata0: <ATA channel 0> on atapci3
ata0: [ITHREAD]
ata1: <ATA channel 1> on atapci3
ata1: [ITHREAD]
pci0: <serial bus, SMBus> at device 31.3 (no driver attached)
atapci4: <Intel ICH9 SATA300 controller> port 0x1c88-0x1c8f,0x1c7c-0x1c7f,0x1c80-0x1c87,0x1c78-0x1c7b,0x1c50-0x1c5f,0x1c40-0x1c4f irq 18 at device 31.5 on pci0
atapci4: [ITHREAD]
ata19: <ATA channel 0> on atapci4
ata19: [ITHREAD]
ata20: <ATA channel 1> on atapci4
ata20: [ITHREAD]
pci0: <dasp> at device 31.6 (no driver attached)
acpi_button0: <Power Button> on acpi0
atrtc0: <AT realtime clock> port 0x70-0x71 irq 8 on acpi0
atkbdc0: <Keyboard controller (i8042)> port 0x60,0x64 irq 1 on acpi0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
atkbd0: [ITHREAD]
psm0: <PS/2 Mouse> irq 12 on atkbdc0
psm0: [GIANT-LOCKED]
psm0: [ITHREAD]
psm0: model IntelliMouse Explorer, device ID 4
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
uart0: [FILTER]
uart1: <16550 or compatible> port 0x2f8-0x2ff irq 3 on acpi0
uart1: [FILTER]
fdc0: <floppy drive controller> port 0x3f0-0x3f5,0x3f7 irq 6 drq 2 on acpi0
fdc0: [FILTER]
ppc0: <Parallel port> port 0x378-0x37f,0x778-0x77f irq 7 drq 3 on acpi0
ppc0: SMC-like chipset (ECP/EPP/PS2/NIBBLE) in COMPATIBLE mode
ppc0: FIFO with 16/16/9 bytes threshold
ppc0: [ITHREAD]
ppbus0: <Parallel port bus> on ppc0
plip0: <PLIP network interface> on ppbus0
plip0: [ITHREAD]
lpt0: <Printer> on ppbus0
lpt0: [ITHREAD]
lpt0: Interrupt-driven port
ppi0: <Parallel I/O> on ppbus0
orm0: <ISA Option ROMs> at iomem 0xcf000-0xd57ff,0xd5800-0xdbfff on isa0
sc0: <System console> at flags 0x100 on isa0
sc0: VGA <16 virtual consoles, flags=0x300>
vga0: <Generic ISA VGA> at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
est0: <Enhanced SpeedStep Frequency Control> on cpu0
p4tcc0: <CPU Frequency Thermal Control> on cpu0
est1: <Enhanced SpeedStep Frequency Control> on cpu1
p4tcc1: <CPU Frequency Thermal Control> on cpu1
firewire0: 1 nodes, maxhop <= 0 cable IRM irm(0)  (me)
firewire0: bus manager 0
ZFS filesystem version 4
ZFS storage pool version 15
Timecounters tick every 1.000 msec
md0: Preloaded image </boot/preloaded.ufs> 7438336 bytes at 0xffffffff8102bc40
usbus0: 12Mbps Full Speed USB v1.0
usbus1: 12Mbps Full Speed USB v1.0
usbus2: 12Mbps Full Speed USB v1.0
usbus3: 480Mbps High Speed USB v2.0
usbus4: 12Mbps Full Speed USB v1.0
usbus5: 12Mbps Full Speed USB v1.0
usbus6: 12Mbps Full Speed USB v1.0
usbus7: 480Mbps High Speed USB v2.0

ugen0.1: <Intel> at usbus0
uhub0: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus0
ugen1.1: <Intel> at usbus1
uhub1: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus1
ugen2.1: <Intel> at usbus2
uhub2: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus2
ugen3.1: <Intel> at usbus3
uhub3: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus3
ugen4.1: <Intel> at usbus4
uhub4: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus4
ugen5.1: <Intel> at usbus5
uhub5: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus5
ugen6.1: <Intel> at usbus6
uhub6: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus6
ugen7.1: <Intel> at usbus7
uhub7: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus7
ad0: 953869MB <Seagate ST31000333AS CC3H> at ata0-master UDMA100 SATA 3Gb/s
ad2: 953869MB <Seagate ST31000528AS CC38> at ata1-master UDMA100 SATA 3Gb/s
ad4: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata2-master UDMA100 SATA 3Gb/s
ad6: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata3-master UDMA100 SATA 3Gb/s
ad8: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata4-master UDMA100 SATA 3Gb/s
ad10: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata5-master UDMA100 SATA 3Gb/s
ad12: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata6-master UDMA100 SATA 3Gb/s
ad14: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata7-master UDMA100 SATA 3Gb/s
ad16: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata8-master UDMA100 SATA 3Gb/s
ad18: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata9-master UDMA100 SATA 3Gb/s
ad20: 1907729MB <Hitachi HDS722020ALA330 JKAOA3MA> at ata10-master UDMA100 SATA 3Gb/s
uhub0: 2 ports with 2 removable, self powered
uhub1: 2 ports with 2 removable, self powered
uhub2: 2 ports with 2 removable, self powered
ad22: 1907729MB <Hitachi HDS722020ALA330 JKAOA3MA> at ata11-master UDMA100 SATA 3Gb/s
uhub4: 2 ports with 2 removable, self powered
uhub5: 2 ports with 2 removable, self powered
uhub6: 2 ports with 2 removable, self powered
ad24: 1907729MB <Hitachi HDS722020ALA330 JKAOA3MA> at ata12-master UDMA100 SATA 3Gb/s
ad26: 1907729MB <Hitachi HDS722020ALA330 JKAOA3MA> at ata13-master UDMA100 SATA 3Gb/s
ad28: 1907729MB <Hitachi HDS722020ALA330 JKAOA3MA> at ata14-master UDMA100 SATA 3Gb/s
ad30: 1907729MB <Hitachi HDS722020ALA330 JKAOA3MA> at ata15-master UDMA100 SATA 3Gb/s
ad32: 1907729MB <Hitachi HDS722020ALA330 JKAOA3MA> at ata16-master UDMA100 SATA 3Gb/s
ad34: 1907729MB <Hitachi HDS722020ALA330 JKAOA3MA> at ata17-master UDMA100 SATA 3Gb/s
uhub3: 6 ports with 6 removable, self powered
uhub7: 6 ports with 6 removable, self powered
acd0: DVDR <TSSTcorp CDDVDW SN-S083C/SB01> at ata19-master UDMA100 SATA 1.5Gb/s
SMP: AP CPU #1 Launched!
Trying to mount root from ufs:/dev/md0
WARNING: TMPFS is considered to be a highly experimental feature in FreeBSD.
md1.uzip: 34163 x 16384 blocks
md1.uzip: 26184 requests, 8443 cached

Unfortunately I'm more "at home" with Windows and z/OS (3270 emulator) than with Linux and UNIX at the moment, but by watching people like you I learn more day by day. And I don't have an incentive from work to really dig into how to be a sysadmin, since I work in daily operations with a number of the 2-300 various applications we have, running on various platforms (z/Series, Windows, HP-UX, Linux, Solaris).

Could the issue with my memory settings being so low have anything to do with the fact that I'm still using your LiveCD instead of a full install of FreeBSD? I'm guessing I won't be able to do much tuning as long as I use the LiveCD?

Though I don't have any need for super-duper performance, this system being for my personal use only, it is gonna annoy the hell out of me if the PCI-X slots are a limiting factor keeping me from the performance I could have had :) Having said that, I don't have a desire to throw unlimited (though it feels like it sometimes) financial resources at this goal. But I will swap out my motherboard in a heartbeat, if it comes to it.
I just have to persuade myself to do it - new motherboard, CPU, RAM and a new HBA (and use my SAS expander).

Order it and put a "From Santa"-sticker on it...? ;)
 
Oh, you did not do a ZFS-on-root install yet. Yes indeed, if you are running from the LiveCD you cannot do tuning. Do you have a single disk that you don't want to include in the benchmarking? Then you could use the LiveCD to install "ZFS-on-root" to that disk/pool. This could also be a USB stick of just 512MB.

How to do this:
1) format disk (USB stick or small HDD) with GPT on the Disks page
2) create a system pool on the Pools->Create page, just select your disk and give the pool a name like systempool.
3) now go to Pools->Booting page and follow instructions
4) Reboot when done; make sure your BIOS is set to boot from the GPT-formatted disk; any GPT-formatted disk will do, in fact. They will all boot into the same ZFS pool that is marked bootable.

Then, once you have rebooted from your system pool, you don't need the LiveCD anymore. Go to the Tuning page, press the Recommended button, and reboot again; then you can do benchmarking! It also saves you 0.5GB of RAM, and now you can explore FreeBSD on your own if you wish.
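For those curious, what steps 1 and 3 roughly do under the hood looks something like the gpart commands below (da0 being the target USB stick is an assumption):
Code:
gpart create -s gpt da0                                      # create a GPT partition table
gpart add -t freebsd-boot -s 128 da0                         # small boot partition for the loader
gpart add -t freebsd-zfs -l systemdisk da0                   # rest of the disk as a labelled ZFS partition
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0    # write the ZFS-aware boot code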

I did create this project for the many 'Windows users' out there (those who may know a great deal about computers in general, but don't like the textual interface common to UNIX-like operating systems). They would like to use ZFS, but don't want to have to learn FreeBSD. I can perfectly understand that. They want something like FreeNAS: an easy web interface that everyone should be able to use, simple enough for even your mom to set up storage on ZFS and share it across the network. I'm planning a 'one-click setup' as well, for quick configuration and to demonstrate the user-friendly character of my project.

But many 'usability features' are not yet implemented; they are on my wishlist, though, like lots of friendly notices that guide you on what to do, tell you whenever there's a problem, and explain how to recover from it.

Don't think too badly of your PCI-X controllers just yet; wait for the tuning to kick in and see if that makes things better.

Oh, and building a new NAS is great fun! Just look at this board, it's porn to ZFS guys like me. :D
Once you get a taste for it, you'll want to build your own advanced ZFS NAS with good hardware. But that also means no expanders, only HBAs. That board has 14 SATA connectors already; add four LSI 1068E HBAs and you have 32 + 14 = 46 SATA connectors at full bandwidth. Still not extremely expensive: the controllers should be about $100-120 each, and the mobo isn't that expensive either considering its features. Ohhh, I can't look at it too long. :D
 
I'll do an installation sometime this week, got both USB-sticks and spare disks lying around that I can use.

I've had various servers over the years, but it wasn't until a few years ago that I started to buy hardware with the intention of using it as a server, rather than just re-using my old computer parts once I'd performed an upgrade on my main workstation/gaming rig/etc.

Currently my main server is a WHS with approximately 30TB of disk, which I also use for VMware, FTP, torrents, etc. About two years ago I started to find out about ZFS, and used the OpenSolaris HCL to find which hardware I could use; I bought my current motherboard/HBA based on what I found in the HCL. As I've said before, I would like to have a solid storage system as the base, and then have other systems/services running in VMs or on external clients. For me, ZFS is that solid storage system I'm looking for.

Once you get a taste for it, you'll want to build your own advanced ZFS NAS with good hardware. But that also means no expanders, only HBAs.

Why no expanders? Quality, or because you'll get more bandwidth with only HBAs?

I do like that motherboard - I have been looking at it myself for an upgrade. But I'd use an LSI 9200-8e to expand to another chassis (currently housing my WHS), and another HBA, like the LSI 9212-4i4e, to replace my two SAT2-MV8s. If you can use a reverse breakout to SFF-8087 to an HP expander, then I could kill two birds with one stone. Cable management would be a bitch, though...
 
New round of benchmarks, this time with all 16 drives:

(Benchmark graphs were attached here; the raw output follows below.)
Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03

* Test Settings: TS32; 
* Tuning: none
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	636 MiB/sec	643 MiB/sec	668 MiB/sec	= 649 MiB/sec avg
WRITE:	431 MiB/sec	431 MiB/sec	431 MiB/sec	= 431 MiB/sec avg

Now testing RAIDZ configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	426 MiB/sec	432 MiB/sec	430 MiB/sec	= 429 MiB/sec avg
WRITE:	345 MiB/sec	351 MiB/sec	352 MiB/sec	= 349 MiB/sec avg

Now testing RAIDZ2 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	413 MiB/sec	416 MiB/sec	417 MiB/sec	= 415 MiB/sec avg
WRITE:	310 MiB/sec	310 MiB/sec	314 MiB/sec	= 311 MiB/sec avg

Now testing RAID1 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	680 MiB/sec	675 MiB/sec	680 MiB/sec	= 678 MiB/sec avg
WRITE:	34 MiB/sec	34 MiB/sec	34 MiB/sec	= 34 MiB/sec avg

Now testing RAID1+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	600 MiB/sec	621 MiB/sec	622 MiB/sec	= 614 MiB/sec avg
WRITE:	244 MiB/sec	247 MiB/sec	245 MiB/sec	= 245 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	425 MiB/sec	424 MiB/sec	413 MiB/sec	= 421 MiB/sec avg
WRITE:	295 MiB/sec	290 MiB/sec	287 MiB/sec	= 291 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	468 MiB/sec	463 MiB/sec	460 MiB/sec	= 464 MiB/sec avg
WRITE:	309 MiB/sec	316 MiB/sec	314 MiB/sec	= 313 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	488 MiB/sec	482 MiB/sec	490 MiB/sec	= 487 MiB/sec avg
WRITE:	311 MiB/sec	312 MiB/sec	312 MiB/sec	= 312 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	451 MiB/sec	438 MiB/sec	436 MiB/sec	= 442 MiB/sec avg
WRITE:	262 MiB/sec	261 MiB/sec	261 MiB/sec	= 262 MiB/sec avg

Now testing RAID0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	652 MiB/sec	659 MiB/sec	653 MiB/sec	= 655 MiB/sec avg
WRITE:	436 MiB/sec	428 MiB/sec	431 MiB/sec	= 431 MiB/sec avg

Now testing RAID0 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	665 MiB/sec	665 MiB/sec	663 MiB/sec	= 664 MiB/sec avg
WRITE:	429 MiB/sec	430 MiB/sec	430 MiB/sec	= 430 MiB/sec avg

Now testing RAID0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	674 MiB/sec	675 MiB/sec	669 MiB/sec	= 673 MiB/sec avg
WRITE:	430 MiB/sec	430 MiB/sec	432 MiB/sec	= 431 MiB/sec avg

Now testing RAID0 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	668 MiB/sec	672 MiB/sec	668 MiB/sec	= 670 MiB/sec avg
WRITE:	431 MiB/sec	431 MiB/sec	431 MiB/sec	= 431 MiB/sec avg

Now testing RAIDZ configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	436 MiB/sec	433 MiB/sec	435 MiB/sec	= 435 MiB/sec avg
WRITE:	351 MiB/sec	342 MiB/sec	338 MiB/sec	= 344 MiB/sec avg

Now testing RAIDZ configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	421 MiB/sec	428 MiB/sec	428 MiB/sec	= 426 MiB/sec avg
WRITE:	348 MiB/sec	354 MiB/sec	348 MiB/sec	= 350 MiB/sec avg

Now testing RAIDZ configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	435 MiB/sec	435 MiB/sec	434 MiB/sec	= 435 MiB/sec avg
WRITE:	348 MiB/sec	347 MiB/sec	350 MiB/sec	= 348 MiB/sec avg

Now testing RAIDZ configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	420 MiB/sec	424 MiB/sec	423 MiB/sec	= 422 MiB/sec avg
WRITE:	338 MiB/sec	342 MiB/sec	339 MiB/sec	= 340 MiB/sec avg

Now testing RAIDZ2 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	389 MiB/sec	392 MiB/sec	388 MiB/sec	= 390 MiB/sec avg
WRITE:	295 MiB/sec	296 MiB/sec	287 MiB/sec	= 293 MiB/sec avg

Now testing RAIDZ2 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	402 MiB/sec	406 MiB/sec	404 MiB/sec	= 404 MiB/sec avg
WRITE:	292 MiB/sec	265 MiB/sec	292 MiB/sec	= 283 MiB/sec avg

Now testing RAIDZ2 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	401 MiB/sec	402 MiB/sec	403 MiB/sec	= 402 MiB/sec avg
WRITE:	305 MiB/sec	308 MiB/sec	312 MiB/sec	= 308 MiB/sec avg

Now testing RAIDZ2 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	409 MiB/sec	404 MiB/sec	409 MiB/sec	= 407 MiB/sec avg
WRITE:	298 MiB/sec	301 MiB/sec	303 MiB/sec	= 300 MiB/sec avg

Now testing RAID1 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	633 MiB/sec	641 MiB/sec	640 MiB/sec	= 638 MiB/sec avg
WRITE:	45 MiB/sec	45 MiB/sec	44 MiB/sec	= 45 MiB/sec avg

Now testing RAID1 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	649 MiB/sec	636 MiB/sec	651 MiB/sec	= 645 MiB/sec avg
WRITE:	41 MiB/sec	42 MiB/sec	42 MiB/sec	= 42 MiB/sec avg

Now testing RAID1 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	659 MiB/sec	658 MiB/sec	654 MiB/sec	= 657 MiB/sec avg
WRITE:	39 MiB/sec	38 MiB/sec	39 MiB/sec	= 39 MiB/sec avg

Now testing RAID1 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	658 MiB/sec	667 MiB/sec	652 MiB/sec	= 659 MiB/sec avg
WRITE:	36 MiB/sec	36 MiB/sec	36 MiB/sec	= 36 MiB/sec avg

Now testing RAID1+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	551 MiB/sec	554 MiB/sec	543 MiB/sec	= 549 MiB/sec avg
WRITE:	246 MiB/sec	243 MiB/sec	246 MiB/sec	= 245 MiB/sec avg

Now testing RAID1+0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	634 MiB/sec	625 MiB/sec	637 MiB/sec	= 632 MiB/sec avg
WRITE:	245 MiB/sec	241 MiB/sec	242 MiB/sec	= 243 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	403 MiB/sec	412 MiB/sec	396 MiB/sec	= 487 MiB/sec avg
WRITE:	289 MiB/sec	293 MiB/sec	294 MiB/sec	= 312 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	472 MiB/sec	467 MiB/sec	464 MiB/sec	= 487 MiB/sec avg
WRITE:	312 MiB/sec	315 MiB/sec	311 MiB/sec	= 312 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	492 MiB/sec	496 MiB/sec	486 MiB/sec	= 491 MiB/sec avg
WRITE:	312 MiB/sec	310 MiB/sec	312 MiB/sec	= 311 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	435 MiB/sec	440 MiB/sec	448 MiB/sec	= 441 MiB/sec avg
WRITE:	256 MiB/sec	259 MiB/sec	257 MiB/sec	= 257 MiB/sec avg

Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	615 MiB/sec	605 MiB/sec	598 MiB/sec	= 606 MiB/sec avg
WRITE:	416 MiB/sec	415 MiB/sec	416 MiB/sec	= 416 MiB/sec avg

Now testing RAID0 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	600 MiB/sec	604 MiB/sec	593 MiB/sec	= 599 MiB/sec avg
WRITE:	419 MiB/sec	426 MiB/sec	419 MiB/sec	= 421 MiB/sec avg

Now testing RAID0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	650 MiB/sec	640 MiB/sec	638 MiB/sec	= 643 MiB/sec avg
WRITE:	434 MiB/sec	434 MiB/sec	431 MiB/sec	= 433 MiB/sec avg

Now testing RAID0 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	654 MiB/sec	652 MiB/sec	658 MiB/sec	= 655 MiB/sec avg
WRITE:	431 MiB/sec	433 MiB/sec	424 MiB/sec	= 429 MiB/sec avg

Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	406 MiB/sec	396 MiB/sec	398 MiB/sec	= 400 MiB/sec avg
WRITE:	305 MiB/sec	305 MiB/sec	304 MiB/sec	= 305 MiB/sec avg

Now testing RAIDZ configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	442 MiB/sec	453 MiB/sec	428 MiB/sec	= 441 MiB/sec avg
WRITE:	329 MiB/sec	332 MiB/sec	329 MiB/sec	= 330 MiB/sec avg

Now testing RAIDZ configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	425 MiB/sec	422 MiB/sec	425 MiB/sec	= 424 MiB/sec avg
WRITE:	319 MiB/sec	321 MiB/sec	322 MiB/sec	= 321 MiB/sec avg

Now testing RAIDZ configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	432 MiB/sec	431 MiB/sec	= 431 MiB/sec avg
WRITE:	343 MiB/sec	338 MiB/sec	338 MiB/sec	= 340 MiB/sec avg

Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	365 MiB/sec	376 MiB/sec	362 MiB/sec	= 368 MiB/sec avg
WRITE:	266 MiB/sec	265 MiB/sec	263 MiB/sec	= 265 MiB/sec avg

Now testing RAIDZ2 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	419 MiB/sec	412 MiB/sec	416 MiB/sec	= 416 MiB/sec avg
WRITE:	274 MiB/sec	279 MiB/sec	274 MiB/sec	= 276 MiB/sec avg

Now testing RAIDZ2 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	440 MiB/sec	448 MiB/sec	432 MiB/sec	= 440 MiB/sec avg
WRITE:	280 MiB/sec	270 MiB/sec	274 MiB/sec	= 274 MiB/sec avg

Now testing RAIDZ2 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	401 MiB/sec	400 MiB/sec	410 MiB/sec	= 403 MiB/sec avg
WRITE:	289 MiB/sec	284 MiB/sec	284 MiB/sec	= 286 MiB/sec avg

Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	574 MiB/sec	590 MiB/sec	589 MiB/sec	= 584 MiB/sec avg
WRITE:	63 MiB/sec	64 MiB/sec	63 MiB/sec	= 63 MiB/sec avg

Now testing RAID1 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	592 MiB/sec	592 MiB/sec	587 MiB/sec	= 590 MiB/sec avg
WRITE:	58 MiB/sec	57 MiB/sec	57 MiB/sec	= 58 MiB/sec avg

Now testing RAID1 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	603 MiB/sec	599 MiB/sec	609 MiB/sec	= 604 MiB/sec avg
WRITE:	53 MiB/sec	54 MiB/sec	54 MiB/sec	= 53 MiB/sec avg

Now testing RAID1 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	629 MiB/sec	623 MiB/sec	621 MiB/sec	= 624 MiB/sec avg
WRITE:	48 MiB/sec	48 MiB/sec	48 MiB/sec	= 48 MiB/sec avg

Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	424 MiB/sec	429 MiB/sec	427 MiB/sec	= 427 MiB/sec avg
WRITE:	235 MiB/sec	235 MiB/sec	236 MiB/sec	= 235 MiB/sec avg

Now testing RAID1+0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	530 MiB/sec	529 MiB/sec	557 MiB/sec	= 539 MiB/sec avg
WRITE:	247 MiB/sec	242 MiB/sec	242 MiB/sec	= 244 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	426 MiB/sec	415 MiB/sec	431 MiB/sec	= 424 MiB/sec avg
WRITE:	289 MiB/sec	293 MiB/sec	290 MiB/sec	= 291 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	466 MiB/sec	464 MiB/sec	465 MiB/sec	= 465 MiB/sec avg
WRITE:	315 MiB/sec	313 MiB/sec	314 MiB/sec	= 314 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	488 MiB/sec	485 MiB/sec	491 MiB/sec	= 488 MiB/sec avg
WRITE:	312 MiB/sec	312 MiB/sec	310 MiB/sec	= 311 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	435 MiB/sec	449 MiB/sec	451 MiB/sec	= 445 MiB/sec avg
WRITE:	262 MiB/sec	257 MiB/sec	257 MiB/sec	= 258 MiB/sec avg

Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	406 MiB/sec	405 MiB/sec	411 MiB/sec	= 407 MiB/sec avg
WRITE:	328 MiB/sec	328 MiB/sec	331 MiB/sec	= 329 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	480 MiB/sec	473 MiB/sec	459 MiB/sec	= 471 MiB/sec avg
WRITE:	373 MiB/sec	375 MiB/sec	373 MiB/sec	= 374 MiB/sec avg

Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	529 MiB/sec	514 MiB/sec	520 MiB/sec	= 521 MiB/sec avg
WRITE:	398 MiB/sec	403 MiB/sec	397 MiB/sec	= 399 MiB/sec avg

Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	536 MiB/sec	556 MiB/sec	557 MiB/sec	= 550 MiB/sec avg
WRITE:	409 MiB/sec	408 MiB/sec	408 MiB/sec	= 408 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	329 MiB/sec	323 MiB/sec	332 MiB/sec	= 328 MiB/sec avg
WRITE:	221 MiB/sec	220 MiB/sec	213 MiB/sec	= 218 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	425 MiB/sec	408 MiB/sec	406 MiB/sec	= 413 MiB/sec avg
WRITE:	270 MiB/sec	273 MiB/sec	266 MiB/sec	= 270 MiB/sec avg

Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	422 MiB/sec	405 MiB/sec	408 MiB/sec	= 412 MiB/sec avg
WRITE:	281 MiB/sec	284 MiB/sec	284 MiB/sec	= 283 MiB/sec avg

Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	388 MiB/sec	392 MiB/sec	383 MiB/sec	= 387 MiB/sec avg
WRITE:	293 MiB/sec	288 MiB/sec	297 MiB/sec	= 293 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	232 MiB/sec	235 MiB/sec	233 MiB/sec	= 233 MiB/sec avg
WRITE:	150 MiB/sec	151 MiB/sec	153 MiB/sec	= 151 MiB/sec avg

Now testing RAIDZ2 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	343 MiB/sec	339 MiB/sec	342 MiB/sec	= 341 MiB/sec avg
WRITE:	189 MiB/sec	188 MiB/sec	192 MiB/sec	= 190 MiB/sec avg

Now testing RAIDZ2 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	410 MiB/sec	392 MiB/sec	395 MiB/sec	= 399 MiB/sec avg
WRITE:	231 MiB/sec	236 MiB/sec	229 MiB/sec	= 232 MiB/sec avg

Now testing RAIDZ2 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	380 MiB/sec	374 MiB/sec	377 MiB/sec	= 377 MiB/sec avg
WRITE:	240 MiB/sec	241 MiB/sec	242 MiB/sec	= 241 MiB/sec avg

Now testing RAID1 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	335 MiB/sec	334 MiB/sec	332 MiB/sec	= 334 MiB/sec avg
WRITE:	92 MiB/sec	93 MiB/sec	91 MiB/sec	= 92 MiB/sec avg

Now testing RAID1 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	389 MiB/sec	386 MiB/sec	399 MiB/sec	= 391 MiB/sec avg
WRITE:	85 MiB/sec	87 MiB/sec	88 MiB/sec	= 87 MiB/sec avg

Now testing RAID1 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	489 MiB/sec	491 MiB/sec	489 MiB/sec	= 490 MiB/sec avg
WRITE:	79 MiB/sec	79 MiB/sec	78 MiB/sec	= 79 MiB/sec avg

Now testing RAID1 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	557 MiB/sec	558 MiB/sec	560 MiB/sec	= 558 MiB/sec avg
WRITE:	72 MiB/sec	71 MiB/sec	71 MiB/sec	= 71 MiB/sec avg

Now testing RAID1+0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	235 MiB/sec	237 MiB/sec	237 MiB/sec	= 237 MiB/sec avg
WRITE:	177 MiB/sec	176 MiB/sec	174 MiB/sec	= 176 MiB/sec avg

Now testing RAID1+0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	354 MiB/sec	352 MiB/sec	354 MiB/sec	= 353 MiB/sec avg
WRITE:	224 MiB/sec	223 MiB/sec	223 MiB/sec	= 223 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	421 MiB/sec	420 MiB/sec	414 MiB/sec	= 418 MiB/sec avg
WRITE:	289 MiB/sec	286 MiB/sec	288 MiB/sec	= 288 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	466 MiB/sec	468 MiB/sec	466 MiB/sec	= 467 MiB/sec avg
WRITE:	306 MiB/sec	304 MiB/sec	305 MiB/sec	= 305 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	482 MiB/sec	487 MiB/sec	487 MiB/sec	= 485 MiB/sec avg
WRITE:	307 MiB/sec	306 MiB/sec	307 MiB/sec	= 307 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	433 MiB/sec	442 MiB/sec	444 MiB/sec	= 440 MiB/sec avg
WRITE:	253 MiB/sec	256 MiB/sec	253 MiB/sec	= 254 MiB/sec avg

Now testing RAID0 configuration with 1 disks: cWmRd@cWmRd@cWmRd@
READ:	119 MiB/sec	118 MiB/sec	118 MiB/sec	= 118 MiB/sec avg
WRITE:	97 MiB/sec	95 MiB/sec	96 MiB/sec	= 96 MiB/sec avg

Now testing RAID0 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	231 MiB/sec	229 MiB/sec	231 MiB/sec	= 231 MiB/sec avg
WRITE:	182 MiB/sec	179 MiB/sec	187 MiB/sec	= 183 MiB/sec avg

Now testing RAID0 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	339 MiB/sec	342 MiB/sec	339 MiB/sec	= 340 MiB/sec avg
WRITE:	260 MiB/sec	260 MiB/sec	260 MiB/sec	= 260 MiB/sec avg

Now testing RAIDZ configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	119 MiB/sec	120 MiB/sec	120 MiB/sec	= 119 MiB/sec avg
WRITE:	93 MiB/sec	94 MiB/sec	93 MiB/sec	= 93 MiB/sec avg

Now testing RAIDZ configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	225 MiB/sec	224 MiB/sec	231 MiB/sec	= 227 MiB/sec avg
WRITE:	163 MiB/sec	164 MiB/sec	166 MiB/sec	= 164 MiB/sec avg

Now testing RAIDZ2 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	120 MiB/sec	120 MiB/sec	120 MiB/sec	= 120 MiB/sec avg
WRITE:	89 MiB/sec	89 MiB/sec	92 MiB/sec	= 90 MiB/sec avg

Now testing RAID1 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	119 MiB/sec	119 MiB/sec	119 MiB/sec	= 119 MiB/sec avg
WRITE:	95 MiB/sec	96 MiB/sec	95 MiB/sec	= 95 MiB/sec avg

Now testing RAID1 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	245 MiB/sec	244 MiB/sec	245 MiB/sec	= 244 MiB/sec avg
WRITE:	95 MiB/sec	94 MiB/sec	94 MiB/sec	= 94 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	415 MiB/sec	405 MiB/sec	417 MiB/sec	= 412 MiB/sec avg
WRITE:	295 MiB/sec	293 MiB/sec	294 MiB/sec	= 294 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	465 MiB/sec	469 MiB/sec	479 MiB/sec	= 471 MiB/sec avg
WRITE:	313 MiB/sec	312 MiB/sec	311 MiB/sec	= 312 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	490 MiB/sec	488 MiB/sec	484 MiB/sec	= 487 MiB/sec avg
WRITE:	311 MiB/sec	311 MiB/sec	310 MiB/sec	= 311 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	454 MiB/sec	437 MiB/sec	435 MiB/sec	= 442 MiB/sec avg
WRITE:	254 MiB/sec	258 MiB/sec	254 MiB/sec	= 255 MiB/sec avg

Done
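
For anyone who wants to reproduce numbers in this format by hand, the idea boils down to a simple loop. The sketch below is only an assumption of the general approach, not the actual benchmark script that produced the listing above; pool name, device names and test size are placeholders.

#!/bin/sh
# Rough sketch only -- not the script that generated the listing above.
# Three sequential write passes and three read passes per configuration,
# measured with dd; the per-pass rates are then averaged by hand.
# SIZE_GB should comfortably exceed RAM so reads cannot be served from the ARC,
# and compression must stay off or the /dev/zero data compresses away.

POOL=benchpool
DISKS="da0 da1 da2 da3"      # placeholder device names
SIZE_GB=32

zpool create -f $POOL raidz $DISKS

for run in 1 2 3; do
    # WRITE pass (dd reports its transfer rate when it finishes)
    dd if=/dev/zero of=/$POOL/test.bin bs=1m count=$((SIZE_GB * 1024))
    # export/import the pool so the read pass hits the disks, not the ARC
    zpool export $POOL
    zpool import $POOL
    # READ pass
    dd if=/$POOL/test.bin of=/dev/null bs=1m
    rm /$POOL/test.bin
done

zpool destroy $POOL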


One request: could you change the graph colors so that Raidz2, Raidz+0 and Raidz2+0 each get their own color, or is the shared color intentional?
 
The last test was run from the livecd, and it only used about 2.5 GB of RAM.
Now I've installed to disk and started a new benchmark; I guess it will be another 24 hours until it's done. So far, though, it doesn't look like a big improvement, if any.
 
Did you also perform the tuning on the System->Tuning page? Otherwise your kmem and ARC limits will still be low. Also, don't forget to actually reboot after doing the tuning.
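
For reference, that tuning usually boils down to a handful of loader tunables. The values below are only an illustration for a box with roughly 8GiB of RAM, not necessarily what the System->Tuning page writes for you:

# /boot/loader.conf -- example values for ~8GiB RAM, adjust to your system
vm.kmem_size="7G"            # kernel memory ceiling (needed on 8.x-era FreeBSD/ZFS)
vm.kmem_size_max="7G"
vfs.zfs.arc_max="6G"         # keep the ARC safely below kmem_size
vfs.zfs.arc_min="2G"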

Hitting a performance ceiling under memory constraints is only logical; I'm more interested in how ZFS scales on memory-rich systems, preferably 12GiB+, though 8GiB should be decent as well.

And yes, the colours are a bug; I'll address that in the next update, coming in an hour or so. :)
 
Yes, I did perform the tuning and also rebooted before I started my current benchmark.

Below is a screenshot of my System -> Tuning page.
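
The same limits can also be read back from the command line after the reboot; for example (standard FreeBSD sysctls, actual values will differ per system):

sysctl vm.kmem_size vfs.zfs.arc_max vfs.zfs.arc_min
sysctl kstat.zfs.misc.arcstats.size    # current ARC usage in bytes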
 
He also has a dual-core E8400. That's a lot of parity and checksum computation for a dual core to handle.
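
One quick way to see whether the CPU really is the limiting factor is to watch it during a write pass; these are standard FreeBSD tools, nothing specific to this benchmark:

top -SHz        # shows kernel ZFS threads and how busy the cores are
gstat           # per-disk busy %; disks pinned near 100% point at the disks, not the CPU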
 