FreeBSD ZFS NAS Web-GUI

The VMware NIC is virtual; it has no real link speed and is CPU (emulation) bound. So it can report 100Mbps yet be limited to 1MB/s, or reach 1GB/s; as I understand it, that depends on your CPU.
 
Added more drives to the server and benchmarked with preview2.

CPU: Intel Core 2 Duo CPU E6750 @ 2.66GHz
RAM: 8GB DDR2
SATA: Adaptec 3805
HDD: 8x 2TB Seagate Barracuda LP (ST32000542AS)
OS: ZFSguru 0.1.7 preview2

Results:
Code:
ZFSGURU-benchmark, version 1
Test size: 64.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 5 seconds
Number of disks: 8 disks
disk 1: gpt/disk0
disk 2: gpt/disk1
disk 3: gpt/disk2
disk 4: gpt/disk3
disk 5: gpt/disk4
disk 6: gpt/disk5
disk 7: gpt/disk6
disk 8: gpt/disk7

* Test Settings: TS64; 
* Tuning: none
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Secure Erase. Now testing RAID0 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	602 MiB/sec	604 MiB/sec	601 MiB/sec	= 602 MiB/sec avg
WRITE:	459 MiB/sec	459 MiB/sec	362 MiB/sec	= 427 MiB/sec avg
raidtest.read:	7321	7321	7401	= 7347 IOps ( ~473 MiB/sec )
raidtest.write:	5865	6322	5421	= 5869 IOps ( ~378 MiB/sec )
raidtest.mixed:	5708	6553	5932	= 6064 IOps ( ~390 MiB/sec )

Secure Erase. Now testing RAIDZ configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	354 MiB/sec	363 MiB/sec	400 MiB/sec	= 373 MiB/sec avg
WRITE:	151 MiB/sec	154 MiB/sec	154 MiB/sec	= 153 MiB/sec avg
raidtest.read:	7129	7071	7330	= 7176 IOps ( ~462 MiB/sec )
raidtest.write:	5679	5597	5716	= 5664 IOps ( ~365 MiB/sec )
raidtest.mixed:	5764	5929	6522	= 6071 IOps ( ~391 MiB/sec )

Secure Erase. Now testing RAIDZ2 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	378 MiB/sec	367 MiB/sec	329 MiB/sec	= 358 MiB/sec avg
WRITE:	228 MiB/sec	224 MiB/sec	237 MiB/sec	= 229 MiB/sec avg
raidtest.read:	7258	7171	7188	= 7205 IOps ( ~464 MiB/sec )
raidtest.write:	5908	5866	5791	= 5855 IOps ( ~377 MiB/sec )
raidtest.mixed:	6066	6511	6451	= 6342 IOps ( ~408 MiB/sec )
 
Nice. Remember that you still want to do tuning. You can now use the new System->Tuning page: click the Reset to recommended button to do auto tuning, and reboot. The Raidtest (random) scores are meaningless for now; I'll release an update soon to fix this, and you can then upgrade via the System->Update page.
 
I released preview2a for web-upgrade; it addresses a few issues:

  • OpenSSH page did not properly set password
  • Files page displayed ZVOLs with erroneous properties
  • Services->iSCSI quick configuration did not properly add Netmask value
  • Services->iSCSI quick configuration now also adds initiatorname to AuthGroup9999 (Discovery)
  • Disks->Benchmark now has different default form values
  • Disks->Benchmark now tests 12 disks or more as well
  • Disks->Benchmark now has restored raidtest functionality (preview2 has broken raidtest performance scores)
  • Disks->Benchmark should be faster now, by using sync calls instead of a high cooldown; the default cooldown is now reduced to just 2 seconds.
  • Disks->Benchmark now adds nested configurations for RAID-Z (4 disk chunks) and RAID-Z2 (6 disk chunks)

Still largely untested; please let me know of any residual issues.
 
Sorry, I'm not familiar with this tuning stuff. Is there any guide on tuning ZFS?

Also, is it normal for the SMART data query to not work if I'm running on an Adaptec RAID controller?
 
I'm having trouble booting from the new image.

I'm trying to write the image using dd on OS X.

I have tried:
- 2 different USB drives, one of which ran build 1.5 successfully. Both register and can be formatted in OS X fine.
- Redownloading the image twice.
- Building with command: dd if=ZFSguru-0.1.7-preview2.2.iso of=/dev/disk1 bs=1m. Blocksize is in Bytes apparently:
$ man dd | grep bs
bs=n Set both input and output block size to n bytes, superseding the
ibs and obs operands. If no conversion values other than

- Different USB ports

When I boot from the USB I get "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER". Any ideas what I'm doing wrong?

I'll try building it on Windows tomorrow; I think that's how I did it last time.
 
You are using the .iso; this is an image meant for CD media only, not for HDDs!

The dd method you used is for the Binary image (ending with .img). I'm considering not offering this for download anymore, but instead integrate a function in web-gui that can create a USB stick. This should solve some issues with unbootable USB sticks and means i only need to release two things:
1) LiveCD .iso with system image
2) web interface tarball for web-update
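For reference, writing the .img to a USB stick with dd on OS X would look roughly like the sketch below; the exact .img filename and the disk number are assumptions, so check diskutil list first.

Code:
# minimal sketch for OS X; filename and disk number are assumptions
diskutil list                                 # identify the USB stick (assumed to be disk1 here)
diskutil unmountDisk /dev/disk1               # unmount any mounted volumes on the stick
dd if=ZFSguru-0.1.7-preview2.img of=/dev/rdisk1 bs=1m   # raw device (rdisk) is much faster than disk
diskutil eject /dev/disk1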
 
Upgraded to 0.1.7-preview2a, and get this error when I try to run another benchmark with 16 drives.

PHP Parse error: syntax error, unexpected T_VARIABLE in /usr/local/www/zfsguru/benchmark.php on line 602
 
Yes, last night I made some mistakes. :(

But I uploaded 0.1.7-preview2b now, so try updating via System->Update and it should finally work!
 
Updated and started the benchmark now.
I see that you set it to the default of 32GB, but I'm guessing at least 24 hours to finish the benchmark :)
 
In my tests the 32GiB scores were not contaminated by RAM. But if you have 24GiB RAM then 32GiB will probably be buffered in RAM too much and you may need 128GiB even. :D

So how large a test size you need depends on your RAM and the speed of the disks. I have also thought about rewriting the test so that it runs for a fixed number of seconds rather than to a target size, so it just reads from the pool for 60 seconds per test; that way the lower disk-count tests won't take so long. I have already sidestepped that problem a bit by starting the tests at the higher disk counts: it should now start with 16 disks, if you have that many, then 12+, then 8+, then 4+, then 1+.
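For those wondering what the sequential part does, it boils down to something like the sketch below; this is only a rough illustration of the idea, not the actual benchmark script, and the pool name is just an example.

Code:
# rough idea of the sequential test (illustration only; 'testpool' is a hypothetical pool name)
dd if=/dev/zero of=/testpool/zfsguru.tst bs=1m count=32768   # sequential write of 32 GiB
dd if=/testpool/zfsguru.tst of=/dev/null bs=1m               # sequential read of the same file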

It takes a lot of time to benchmark, yes, but you are stress testing your future setup; that is good! If there are any stability problems, you will find out about them. This is exactly the kind of testing you need to do, I think, before putting it to real use. Any weak HDDs will fail quickly if they are stressed for some time; better now than later, when you have committed real data to it. If you can finish a benchmark, that also tells you your tuning settings are stable for the workloads the test produces.

Changes to default values:

Testsize -> 32GiB (may be too low for high RAM sizes; but 8GiB should be ok)
Cooldown -> 2 seconds (now using sync calls instead so less cooldown is needed)
De-selected Random I/O test -> better to focus on sequential I/O first; most important for most NAS storage anyway
Default raidtest queue depth -> increased from 1 to 32; now allows scaling
Secure Erase -> de-selected, saves a little bit of time; not very useful on HDDs anyway

All of these should make the benchmark a bit quicker, and you get the most interesting results early on; before, it simply started at disk 1 and worked up to disk 20 if you had that many. So I hope this makes benchmarking less of a chore, together with the auto-tuning (Reset to recommended) button on the System->Tuning page.
 
You've made some great improvements to the benchmark scripts in 0.1.7.

I hacked the scripts from 0.1.6 to only test raidz/raidz2 in disk configs of 4-8, as I did not care about any other configurations.

This sped up the run and gave me data on only the configs I was interested in.

Which leads me to another point: do you have any interest in pushing the scripts to a source repo (github/mercurial/googlecode) so that some forking may be done by those interested? Alternatively, do you have any objections to someone else starting a repo from the existing code?
 
Sorry, I'm not familiar with this tuning stuff. Is there any guide on tuning ZFS?

Also, is it normal for the SMART data query to not work if I'm running on an Adaptec RAID controller?
I think it's called passthrough or something? You don't want to use the hardware RAID features when you use ZFS; let ZFS handle it all.

You are using the .iso; this is an image meant for CD media only, not for HDDs!

The dd method you used is for the Binary image (ending with .img). I'm considering not offering this for download anymore, but instead integrate a function in web-gui that can create a USB stick. This should solve some issues with unbootable USB sticks and means i only need to release two things:
1) LiveCD .iso with system image
2) web interface tarball for web-update
I'm definitely interested in the ability to make a USB stick version, or some other way to get the OS onto USB so the drives can be used for data.

You've made some great improvements to the benchmark scripts in 0.1.7.

I hacked the scripts from 0.1.6 to only test raidz/raidz2 in disk configs of 4-8, as I did not care about any other configurations.

This sped up the run and gave me data on only the configs I was interested in.

Which leads me to another point: do you have any interest in pushing the scripts to a source repo (github/mercurial/googlecode) so that some forking may be done by those interested? Alternatively, do you have any objections to someone else starting a repo from the existing code?
Yeah, plus pull requests.
 
Also

Code:
  pool: RaidZ-8TB
    id: 2973819487359521379
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

	RaidZ-8TB   UNAVAIL  newer version
	  ad4       ONLINE
	  ad6       ONLINE
	  ad8       ONLINE
	  ad10      ONLINE
	  ad12      ONLINE
But it still says it can be imported?
[screenshot: capturews.png]
 
Since I didn't really care about the data on that pool, I did a few secure wipes, then created a new pool and destroyed it.

Afterwards I ran the benchmark. My numbers are going to be low, I think, because I have a dual-core AMD Black Edition at 2.6GHz and only 4GB RAM in the current box (I have 12GB on the way and a quad-core replacement somewhere).

I have 5 of these
1907729MB <WDC WD20EADS-00S2B0 01.00A01> at ata5-master UDMA100 SATA 3Gb/s
Or more info
Code:
ad10
	512         	# sectorsize
	2000398934016	# mediasize in bytes (1.8T)
	3907029168  	# mediasize in sectors
	0           	# stripesize
	0           	# stripeoffset
	3876021     	# Cylinders according to firmware.
	16          	# Heads according to firmware.
	63          	# Sectors according to firmware.
	WD-WCAVY5176674	# Disk ident.

Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Number of disks: 5 disks
disk 1: gpt/1
disk 2: gpt/2
disk 3: gpt/3
disk 4: gpt/4
disk 5: gpt/5

* Test Settings: TS32; 
* Tuning: none
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	230 MiB/sec	241 MiB/sec	250 MiB/sec	= 241 MiB/sec avg
WRITE:	248 MiB/sec	247 MiB/sec	247 MiB/sec	= 248 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	224 MiB/sec	215 MiB/sec	214 MiB/sec	= 218 MiB/sec avg
WRITE:	277 MiB/sec	280 MiB/sec	278 MiB/sec	= 278 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	255 MiB/sec	247 MiB/sec	247 MiB/sec	= 250 MiB/sec avg
WRITE:	187 MiB/sec	187 MiB/sec	185 MiB/sec	= 186 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	232 MiB/sec	221 MiB/sec	227 MiB/sec	= 227 MiB/sec avg
WRITE:	218 MiB/sec	220 MiB/sec	221 MiB/sec	= 220 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	188 MiB/sec	190 MiB/sec	189 MiB/sec	= 189 MiB/sec avg
WRITE:	119 MiB/sec	121 MiB/sec	120 MiB/sec	= 120 MiB/sec avg

Now testing RAIDZ2 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	232 MiB/sec	235 MiB/sec	242 MiB/sec	= 236 MiB/sec avg
WRITE:	153 MiB/sec	152 MiB/sec	150 MiB/sec	= 152 MiB/sec avg

Now testing RAID1 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	91 MiB/sec	90 MiB/sec	91 MiB/sec	= 91 MiB/sec avg
WRITE:	87 MiB/sec	87 MiB/sec	87 MiB/sec	= 87 MiB/sec avg

Now testing RAID1 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	89 MiB/sec	89 MiB/sec	84 MiB/sec	= 87 MiB/sec avg
WRITE:	86 MiB/sec	86 MiB/sec	86 MiB/sec	= 86 MiB/sec avg

Now testing RAID1+0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	116 MiB/sec	123 MiB/sec	119 MiB/sec	= 119 MiB/sec avg
WRITE:	152 MiB/sec	152 MiB/sec	152 MiB/sec	= 152 MiB/sec avg

Now testing RAID0 configuration with 1 disks: cWmRd@cWmRd@cWmRd@
READ:	105 MiB/sec	104 MiB/sec	104 MiB/sec	= 104 MiB/sec avg
WRITE:	91 MiB/sec	91 MiB/sec	91 MiB/sec	= 91 MiB/sec avg

Now testing RAID0 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	188 MiB/sec	188 MiB/sec	187 MiB/sec	= 188 MiB/sec avg
WRITE:	158 MiB/sec	157 MiB/sec	156 MiB/sec	= 157 MiB/sec avg

Now testing RAID0 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	235 MiB/sec	227 MiB/sec	234 MiB/sec	= 232 MiB/sec avg
WRITE:	204 MiB/sec	204 MiB/sec	204 MiB/sec	= 204 MiB/sec avg

Now testing RAIDZ configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	105 MiB/sec	105 MiB/sec	105 MiB/sec	= 105 MiB/sec avg
WRITE:	89 MiB/sec	89 MiB/sec	89 MiB/sec	= 89 MiB/sec avg

Now testing RAIDZ configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	183 MiB/sec	178 MiB/sec	175 MiB/sec	= 179 MiB/sec avg
WRITE:	146 MiB/sec	147 MiB/sec	147 MiB/sec	= 147 MiB/sec avg

Now testing RAIDZ2 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	102 MiB/sec	102 MiB/sec	102 MiB/sec	= 102 MiB/sec avg
WRITE:	85 MiB/sec	85 MiB/sec	84 MiB/sec	= 85 MiB/sec avg

Now testing RAID1 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ:	96 MiB/sec	96 MiB/sec	96 MiB/sec	= 96 MiB/sec avg
WRITE:	90 MiB/sec	90 MiB/sec	90 MiB/sec	= 90 MiB/sec avg

Now testing RAID1 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ:	92 MiB/sec	91 MiB/sec	89 MiB/sec	= 90 MiB/sec avg
WRITE:	87 MiB/sec	87 MiB/sec	87 MiB/sec	= 87 MiB/sec avg

Done
 
Hey Sub, you need a donate button on your site.. you deserve a beer for all your hard work.

I'm guessing this is a slippery slope as far as the forum rules go though..
 
@vraa: you could not import that pool because it is of a newer ZFS version than this system supports, i.e. the pool version is above 15. You can see what the system supports on the Status page: "This system supports ZFS pool version 15".

You can upgrade pools to higher versions, but you can't downgrade to a lower version. So ZFS pool version 19 for example will be unusable by any ZFS system that is not at least version 19.
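For reference, the same information and the upgrade itself are available from the shell with the standard zpool commands; the pool name 'tank' below is just an example.

Code:
zpool upgrade -v       # list the pool versions this system supports
zpool get version tank # show the version of an imported pool
zpool upgrade tank     # one-way upgrade to the highest supported version (cannot be undone)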

Your benchmarks look decent for only 4GiB memory; once you have 8/12GiB you want to do the tuning and then you should have higher speeds!

I'm definitely interested in the ability to make a USB stick version, or some other way to get the OS onto USB so the drives can be used for data.
You can do that already by installing ZFS-on-root to your USB stick.
1) format USB stick with GPT
2) create a pool
3) make it bootable by installing ZFS-on-root to the newly created pool on Pools->Booting
4) reboot and now boot from USB directly into ZFS

This saves 0.5GB memory usage and allows memory tuning, which is not possible with LiveCD.

Sorry, I'm not familiar with this tuning stuff. Is there any guide on tuning ZFS?
Right now it should be as simple as clicking the "Reset to recommended" button on the System->Tuning page. This does not work with the LiveCD; only after you have installed to a pool can you perform tuning, because the LiveCD forgets everything after a reboot.

Also, is it normal for the SMART data query to not work if I'm running on an Adaptec RAID controller?
Yes, not all controllers pass SMART requests through to the disks. Some controllers (like Areca) do offer proprietary utilities to check SMART, but do not actually pass direct SMART queries from your OS through to the disk; such a controller essentially lacks true SMART (passthrough) support. The SuperMicro USAS-L8i controller and other HBAs handle SMART just fine in my experience.

So the bottom line: whether SMART works depends on your controller.
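If you want to test it from the shell, something like the commands below should quickly tell you whether SMART passes through; the device name is only an example, and the -d sat variant may or may not help, depending on the controller.

Code:
smartctl -a /dev/da0          # direct SMART query; works on HBAs with real passthrough
smartctl -a -d sat /dev/da0   # try SAT translation if the direct query fails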
 
Pleased to inform you guys that ZFSguru version 0.1.7-preview2c is available for web-update. It adds:
- sectorsize override when creating a new pool
- sectorsize override when using the Disks->Benchmark feature
- twed disks are now recognised
- allows creating mirrors with odd disk counts as well, plus 2-disk RAID-Z, 3-disk RAID-Z2 and 4-disk RAID-Z3, though those are not very useful. :)

Just surf to the System->Update page to update. :)
 
I think OpenSSH is still failing to set a password.

I understand that it's too new a zpool version, but why am I still allowed to click import?
 
It allows you to try importing because the 'zpool import' command yielded a potentially importable pool with a unique identifier; that identifier is used to create those buttons, basically. It would just give you an error when you actually try it, so no harm done. What should happen is an additional message saying that the target pool version is too high and cannot be imported.

The OpenSSH page indeed doesn't work yet; it needs a script in order to work, which I'll address later. For now, just set the password on the directly attached keyboard/monitor: log in as root and enter: passwd ssh
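As a sketch, the console workaround looks like the lines below. The sshd_config part is optional and an assumption on my side (only needed if you also want direct root logins over SSH); the web-GUI does not require it.

Code:
passwd ssh                                      # set a password for the 'ssh' user at the local console
# optional, only if you also want root logins over SSH:
# echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
# /etc/rc.d/sshd restart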
 
I was really looking forward to trying this out, but for some reason it just kept hanging on me. I'd boot from the LiveCD, log in and get my IP info, but that was about as much as I could ever manage before it simply hung. Quite disappointing. I ended up using Nexenta Core and napp-it instead, which obviously now means my pools are too new to import into FreeBSD if I wanted to go back over :(

I'll be keeping an eye on this all the same because I think it's a very worthwhile project and should be supported.
 
Could you list your full system specs?
Could you also try disabling ACPI in the boot menu (where it counts down from 10 to 0)?
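If the ACPI entry is not obvious in the menu, a rough alternative is to escape to the loader prompt and set the hint by hand; the exact menu entries differ per FreeBSD version.

Code:
# at the loader prompt (pick "Escape to loader prompt" in the boot menu):
set hint.acpi.0.disabled=1
boot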
 
Updated to 0.1.7-preview2c and started to run more benchmarks, but when I got home from work the server had crashed with this message on the console:

panic: kmem_malloc(65536): kmem_map too small: 7512248320 total allocated
cpuid = 0
uptime: 14h34m38s
Cannot dump. Device not defined or unavailable.
Automatic reboot in 15 seconds - press a key on the console to abort

Output from the benchmark:
Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03

* Test Settings: TS32; 
* Tuning: KMEM=7g; AMIN=5g; AMAX=6g; 
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	660 MiB/sec	664 MiB/sec	665 MiB/sec	= 663 MiB/sec avg
WRITE:	471 MiB/sec	461 MiB/sec	466 MiB/sec	= 466 MiB/sec avg

Now testing RAIDZ configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	432 MiB/sec	432 MiB/sec	430 MiB/sec	= 431 MiB/sec avg
WRITE:	380 MiB/sec	382 MiB/sec	382 MiB/sec	= 381 MiB/sec avg

Now testing RAIDZ2 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	410 MiB/sec	411 MiB/sec	413 MiB/sec	= 411 MiB/sec avg
WRITE:	338 MiB/sec	329 MiB/sec	338 MiB/sec	= 335 MiB/sec avg

Now testing RAID1 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	689 MiB/sec	684 MiB/sec	687 MiB/sec	= 687 MiB/sec avg
WRITE:	36 MiB/sec	36 MiB/sec	36 MiB/sec	= 36 MiB/sec avg

Now testing RAID1+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	639 MiB/sec	645 MiB/sec	634 MiB/sec	= 639 MiB/sec avg
WRITE:	266 MiB/sec	258 MiB/sec	260 MiB/sec	= 261 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	444 MiB/sec	422 MiB/sec	453 MiB/sec	= 440 MiB/sec avg
WRITE:	313 MiB/sec	306 MiB/sec	305 MiB/sec	= 308 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	480 MiB/sec	473 MiB/sec	469 MiB/sec	= 474 MiB/sec avg
WRITE:	329 MiB/sec	327 MiB/sec	326 MiB/sec	= 327 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	488 MiB/sec	494 MiB/sec	488 MiB/sec	= 490 MiB/sec avg
WRITE:	335 MiB/sec	336 MiB/sec	331 MiB/sec	= 334 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	446 MiB/sec	459 MiB/sec	450 MiB/sec	= 452 MiB/sec avg
WRITE:	271 MiB/sec	276 MiB/sec	276 MiB/sec	= 274 MiB/sec avg

Now testing RAID0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	657 MiB/sec	650 MiB/sec	657 MiB/sec	= 655 MiB/sec avg
WRITE:	463 MiB/sec	465 MiB/sec	473 MiB/sec	= 467 MiB/sec avg

Now testing RAID0 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	670 MiB/sec	669 MiB/sec	660 MiB/sec	= 666 MiB/sec avg
WRITE:	446 MiB/sec	462 MiB/sec	461 MiB/sec	= 456 MiB/sec avg

Now testing RAID0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	679 MiB/sec	677 MiB/sec	677 MiB/sec	= 678 MiB/sec avg
WRITE:	466 MiB/sec	471 MiB/sec	467 MiB/sec	= 468 MiB/sec avg

Now testing RAID0 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	672 MiB/sec	678 MiB/sec	675 MiB/sec	= 675 MiB/sec avg
WRITE:	469 MiB/sec	467 MiB/sec	466 MiB/sec	= 467 MiB/sec avg

Now testing RAIDZ configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	438 MiB/sec	437 MiB/sec	445 MiB/sec	= 440 MiB/sec avg
WRITE:	371 MiB/sec	354 MiB/sec	373 MiB/sec	= 366 MiB/sec avg

Now testing RAIDZ configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	429 MiB/sec	427 MiB/sec	432 MiB/sec	= 429 MiB/sec avg
WRITE:	378 MiB/sec	374 MiB/sec	379 MiB/sec	= 377 MiB/sec avg

Now testing RAIDZ configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	438 MiB/sec	438 MiB/sec	433 MiB/sec	= 436 MiB/sec avg
WRITE:	384 MiB/sec	367 MiB/sec	375 MiB/sec	= 375 MiB/sec avg

Now testing RAIDZ configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	434 MiB/sec	425 MiB/sec	= 430 MiB/sec avg
WRITE:	373 MiB/sec	366 MiB/sec	374 MiB/sec	= 371 MiB/sec avg

Now testing RAIDZ2 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	400 MiB/sec	401 MiB/sec	399 MiB/sec	= 400 MiB/sec avg
WRITE:	324 MiB/sec	324 MiB/sec	317 MiB/sec	= 322 MiB/sec avg

Now testing RAIDZ2 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	407 MiB/sec	408 MiB/sec	407 MiB/sec	= 407 MiB/sec avg
WRITE:	314 MiB/sec	316 MiB/sec	312 MiB/sec	= 314 MiB/sec avg

Now testing RAIDZ2 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	410 MiB/sec	410 MiB/sec	405 MiB/sec	= 408 MiB/sec avg
WRITE:	321 MiB/sec	329 MiB/sec	333 MiB/sec	= 328 MiB/sec avg

Now testing RAIDZ2 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	405 MiB/sec	413 MiB/sec	412 MiB/sec	= 410 MiB/sec avg
WRITE:	327 MiB/sec	324 MiB/sec	326 MiB/sec	= 326 MiB/sec avg

Now testing RAID1 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	646 MiB/sec	654 MiB/sec	652 MiB/sec	= 651 MiB/sec avg
WRITE:	45 MiB/sec	46 MiB/sec	46 MiB/sec	= 45 MiB/sec avg

Now testing RAID1 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	658 MiB/sec	659 MiB/sec	651 MiB/sec	= 656 MiB/sec avg
WRITE:	42 MiB/sec	42 MiB/sec	43 MiB/sec	= 42 MiB/sec avg

Now testing RAID1 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	669 MiB/sec	667 MiB/sec	656 MiB/sec	= 664 MiB/sec avg
WRITE:	40 MiB/sec	40 MiB/sec	40 MiB/sec	= 40 MiB/sec avg

Now testing RAID1 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	666 MiB/sec	670 MiB/sec	661 MiB/sec	= 666 MiB/sec avg
WRITE:	38 MiB/sec	38 MiB/sec	38 MiB/sec	= 38 MiB/sec avg

Now testing RAID1+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	584 MiB/sec	563 MiB/sec	581 MiB/sec	= 576 MiB/sec avg
WRITE:	250 MiB/sec	257 MiB/sec	251 MiB/sec	= 253 MiB/sec avg

Now testing RAID1+0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	644 MiB/sec	654 MiB/sec	653 MiB/sec	= 650 MiB/sec avg
WRITE:	256 MiB/sec	262 MiB/sec	247 MiB/sec	= 255 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	435 MiB/sec	428 MiB/sec	451 MiB/sec	= 490 MiB/sec avg
WRITE:	311 MiB/sec	302 MiB/sec	311 MiB/sec	= 334 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	480 MiB/sec	475 MiB/sec	475 MiB/sec	= 490 MiB/sec avg
WRITE:	331 MiB/sec	327 MiB/sec	326 MiB/sec	= 334 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	492 MiB/sec	488 MiB/sec	489 MiB/sec	= 490 MiB/sec avg
WRITE:	330 MiB/sec	323 MiB/sec	318 MiB/sec	= 324 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	452 MiB/sec	453 MiB/sec	447 MiB/sec	= 451 MiB/sec avg
WRITE:	273 MiB/sec	277 MiB/sec	280 MiB/sec	= 276 MiB/sec avg

Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	602 MiB/sec	604 MiB/sec	607 MiB/sec	= 604 MiB/sec avg
WRITE:	458 MiB/sec	458 MiB/sec	458 MiB/sec	= 458 MiB/sec avg

Now testing RAID0 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	600 MiB/sec	585 MiB/sec	582 MiB/sec	= 589 MiB/sec avg
WRITE:	462 MiB/sec	459 MiB/sec	463 MiB/sec	= 461 MiB/sec avg

Now testing RAID0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	653 MiB/sec	655 MiB/sec	651 MiB/sec	= 653 MiB/sec avg
WRITE:	465 MiB/sec	463 MiB/sec	468 MiB/sec	= 465 MiB/sec avg

Now testing RAID0 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	659 MiB/sec	658 MiB/sec	654 MiB/sec	= 657 MiB/sec avg
WRITE:	464 MiB/sec	460 MiB/sec	468 MiB/sec	= 464 MiB/sec avg

Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	403 MiB/sec	408 MiB/sec	398 MiB/sec	= 403 MiB/sec avg
WRITE:	330 MiB/sec	329 MiB/sec	326 MiB/sec	= 328 MiB/sec avg

Now testing RAIDZ configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	440 MiB/sec	436 MiB/sec	453 MiB/sec	= 443 MiB/sec avg
WRITE:	353 MiB/sec	359 MiB/sec	354 MiB/sec	= 356 MiB/sec avg

Now testing RAIDZ configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	429 MiB/sec	425 MiB/sec	= 428 MiB/sec avg
WRITE:	351 MiB/sec	341 MiB/sec	351 MiB/sec	= 348 MiB/sec avg

Now testing RAIDZ configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	447 MiB/sec	442 MiB/sec	438 MiB/sec	= 442 MiB/sec avg
WRITE:	367 MiB/sec	367 MiB/sec	368 MiB/sec	= 367 MiB/sec avg

Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	380 MiB/sec	372 MiB/sec	374 MiB/sec	= 376 MiB/sec avg
WRITE:	286 MiB/sec	288 MiB/sec	284 MiB/sec	= 286 MiB/sec avg

Now testing RAIDZ2 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	418 MiB/sec	422 MiB/sec	419 MiB/sec	= 420 MiB/sec avg
WRITE:	291 MiB/sec	296 MiB/sec	295 MiB/sec	= 294 MiB/sec avg

Now testing RAIDZ2 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	454 MiB/sec	443 MiB/sec	434 MiB/sec	= 444 MiB/sec avg
WRITE:	305 MiB/sec	303 MiB/sec	294 MiB/sec	= 301 MiB/sec avg

Now testing RAIDZ2 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	410 MiB/sec	407 MiB/sec	410 MiB/sec	= 409 MiB/sec avg
WRITE:	314 MiB/sec	312 MiB/sec	311 MiB/sec	= 312 MiB/sec avg

Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	577 MiB/sec	589 MiB/sec	586 MiB/sec	= 584 MiB/sec avg
WRITE:	69 MiB/sec	65 MiB/sec	64 MiB/sec	= 66 MiB/sec avg

Now testing RAID1 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	609 MiB/sec	607 MiB/sec	604 MiB/sec	= 607 MiB/sec avg
WRITE:	58 MiB/sec	58 MiB/sec	62 MiB/sec	= 59 MiB/sec avg

Now testing RAID1 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	625 MiB/sec	618 MiB/sec	631 MiB/sec	= 625 MiB/sec avg
WRITE:	57 MiB/sec	57 MiB/sec	54 MiB/sec	= 56 MiB/sec avg

Now testing RAID1 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	636 MiB/sec	625 MiB/sec	635 MiB/sec	= 632 MiB/sec avg
WRITE:	48 MiB/sec	50 MiB/sec	48 MiB/sec	= 49 MiB/sec avg

Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	449 MiB/sec	441 MiB/sec	447 MiB/sec	= 446 MiB/sec avg
WRITE:	256 MiB/sec	246 MiB/sec	250 MiB/sec	= 251 MiB/sec avg

Now testing RAID1+0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	529 MiB/sec	557 MiB/sec	549 MiB/sec	= 545 MiB/sec avg
WRITE:	258 MiB/sec	261 MiB/sec	263 MiB/sec	= 261 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	435 MiB/sec	435 MiB/sec	439 MiB/sec	= 490 MiB/sec avg
WRITE:	312 MiB/sec	305 MiB/sec	305 MiB/sec	= 324 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	471 MiB/sec	473 MiB/sec	477 MiB/sec	= 490 MiB/sec avg
WRITE:	326 MiB/sec	322 MiB/sec	326 MiB/sec	= 324 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	489 MiB/sec	488 MiB/sec	489 MiB/sec	= 489 MiB/sec avg
WRITE:	335 MiB/sec	335 MiB/sec	330 MiB/sec	= 333 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	443 MiB/sec	457 MiB/sec	456 MiB/sec	= 452 MiB/sec avg
WRITE:	270 MiB/sec	273 MiB/sec	277 MiB/sec	= 274 MiB/sec avg

Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	421 MiB/sec	417 MiB/sec	437 MiB/sec	= 425 MiB/sec avg
WRITE:	358 MiB/sec	358 MiB/sec	354 MiB/sec	= 357 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	495 MiB/sec	484 MiB/sec	479 MiB/sec	= 486 MiB/sec avg
WRITE:	395 MiB/sec	403 MiB/sec	405 MiB/sec	= 401 MiB/sec avg

Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	536 MiB/sec	538 MiB/sec	523 MiB/sec	= 532 MiB/sec avg
WRITE:	438 MiB/sec	432 MiB/sec	434 MiB/sec	= 435 MiB/sec avg

Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	552 MiB/sec	547 MiB/sec	557 MiB/sec	= 552 MiB/sec avg
WRITE:	443 MiB/sec	448 MiB/sec	446 MiB/sec	= 446 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	325 MiB/sec	337 MiB/sec	328 MiB/sec	= 330 MiB/sec avg
WRITE:	237 MiB/sec	238 MiB/sec	232 MiB/sec	= 236 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	419 MiB/sec	415 MiB/sec	436 MiB/sec	= 424 MiB/sec avg
WRITE:	291 MiB/sec	291 MiB/sec	272 MiB/sec	= 285 MiB/sec avg

Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	416 MiB/sec	414 MiB/sec	416 MiB/sec	= 415 MiB/sec avg
WRITE:	308 MiB/sec	308 MiB/sec	307 MiB/sec	= 308 MiB/sec avg

Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	387 MiB/sec	389 MiB/sec	384 MiB/sec	= 387 MiB/sec avg
WRITE:	318 MiB/sec	318 MiB/sec	309 MiB/sec	= 315 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cWmRd@cW
 
(had to split the post in two because of forum limit)

I rebooted the server and started a new benchmark, and when I got home from work today I saw the server had crashed again, with a slightly different message on the console:

pid 2326 (php), uid 0, was killed: out of swap space
pid 975 (nfsd), uid 0, was killed: out of swap space
pid 966 (mountd), uid 0, was killed: out of swap space
panic: kmem_malloc(65536): kmem_map too small: 7512592384 total allocated
cpuid = 0
Uptime: 16h10m36s
Cannot dump. Device not defined or unavailable.
Automatic reboot in 15 seconds - press a key on the console to abort

Output from the benchmark:
Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03

* Test Settings: TS32; 
* Tuning: KMEM=7g; AMIN=5g; AMAX=6g; 
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	662 MiB/sec	663 MiB/sec	661 MiB/sec	= 662 MiB/sec avg
WRITE:	474 MiB/sec	475 MiB/sec	465 MiB/sec	= 471 MiB/sec avg

Now testing RAIDZ configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	429 MiB/sec	435 MiB/sec	= 432 MiB/sec avg
WRITE:	380 MiB/sec	381 MiB/sec	376 MiB/sec	= 379 MiB/sec avg

Now testing RAIDZ2 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	409 MiB/sec	411 MiB/sec	413 MiB/sec	= 411 MiB/sec avg
WRITE:	336 MiB/sec	336 MiB/sec	334 MiB/sec	= 335 MiB/sec avg

Now testing RAID1 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	685 MiB/sec	688 MiB/sec	682 MiB/sec	= 685 MiB/sec avg
WRITE:	36 MiB/sec	36 MiB/sec	36 MiB/sec	= 36 MiB/sec avg

Now testing RAID1+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	631 MiB/sec	638 MiB/sec	647 MiB/sec	= 639 MiB/sec avg
WRITE:	267 MiB/sec	252 MiB/sec	260 MiB/sec	= 260 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	435 MiB/sec	431 MiB/sec	447 MiB/sec	= 438 MiB/sec avg
WRITE:	312 MiB/sec	294 MiB/sec	307 MiB/sec	= 304 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	477 MiB/sec	472 MiB/sec	474 MiB/sec	= 474 MiB/sec avg
WRITE:	327 MiB/sec	328 MiB/sec	327 MiB/sec	= 327 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	493 MiB/sec	491 MiB/sec	491 MiB/sec	= 492 MiB/sec avg
WRITE:	336 MiB/sec	334 MiB/sec	336 MiB/sec	= 335 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	458 MiB/sec	450 MiB/sec	447 MiB/sec	= 452 MiB/sec avg
WRITE:	278 MiB/sec	275 MiB/sec	278 MiB/sec	= 277 MiB/sec avg

Now testing RAID0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	652 MiB/sec	660 MiB/sec	661 MiB/sec	= 657 MiB/sec avg
WRITE:	470 MiB/sec	469 MiB/sec	473 MiB/sec	= 471 MiB/sec avg

Now testing RAID0 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	671 MiB/sec	664 MiB/sec	675 MiB/sec	= 670 MiB/sec avg
WRITE:	471 MiB/sec	465 MiB/sec	470 MiB/sec	= 469 MiB/sec avg

Now testing RAID0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	678 MiB/sec	677 MiB/sec	673 MiB/sec	= 676 MiB/sec avg
WRITE:	465 MiB/sec	465 MiB/sec	472 MiB/sec	= 468 MiB/sec avg

Now testing RAID0 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	675 MiB/sec	678 MiB/sec	674 MiB/sec	= 676 MiB/sec avg
WRITE:	472 MiB/sec	473 MiB/sec	467 MiB/sec	= 471 MiB/sec avg

Now testing RAIDZ configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	437 MiB/sec	439 MiB/sec	442 MiB/sec	= 439 MiB/sec avg
WRITE:	365 MiB/sec	356 MiB/sec	373 MiB/sec	= 365 MiB/sec avg

Now testing RAIDZ configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	432 MiB/sec	437 MiB/sec	= 433 MiB/sec avg
WRITE:	372 MiB/sec	378 MiB/sec	375 MiB/sec	= 375 MiB/sec avg

Now testing RAIDZ configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	436 MiB/sec	436 MiB/sec	433 MiB/sec	= 435 MiB/sec avg
WRITE:	378 MiB/sec	377 MiB/sec	383 MiB/sec	= 379 MiB/sec avg

Now testing RAIDZ configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	433 MiB/sec	426 MiB/sec	= 430 MiB/sec avg
WRITE:	371 MiB/sec	355 MiB/sec	374 MiB/sec	= 367 MiB/sec avg

Now testing RAIDZ2 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	400 MiB/sec	404 MiB/sec	400 MiB/sec	= 401 MiB/sec avg
WRITE:	322 MiB/sec	313 MiB/sec	322 MiB/sec	= 319 MiB/sec avg

Now testing RAIDZ2 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	410 MiB/sec	405 MiB/sec	408 MiB/sec	= 408 MiB/sec avg
WRITE:	311 MiB/sec	315 MiB/sec	311 MiB/sec	= 312 MiB/sec avg

Now testing RAIDZ2 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	408 MiB/sec	407 MiB/sec	409 MiB/sec	= 408 MiB/sec avg
WRITE:	320 MiB/sec	315 MiB/sec	327 MiB/sec	= 320 MiB/sec avg

Now testing RAIDZ2 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	409 MiB/sec	412 MiB/sec	411 MiB/sec	= 411 MiB/sec avg
WRITE:	317 MiB/sec	333 MiB/sec	324 MiB/sec	= 325 MiB/sec avg

Now testing RAID1 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	655 MiB/sec	653 MiB/sec	632 MiB/sec	= 647 MiB/sec avg
WRITE:	46 MiB/sec	46 MiB/sec	45 MiB/sec	= 46 MiB/sec avg

Now testing RAID1 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	658 MiB/sec	654 MiB/sec	663 MiB/sec	= 658 MiB/sec avg
WRITE:	42 MiB/sec	44 MiB/sec	43 MiB/sec	= 43 MiB/sec avg

Now testing RAID1 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	669 MiB/sec	653 MiB/sec	668 MiB/sec	= 663 MiB/sec avg
WRITE:	40 MiB/sec	40 MiB/sec	40 MiB/sec	= 40 MiB/sec avg

Now testing RAID1 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	663 MiB/sec	662 MiB/sec	674 MiB/sec	= 666 MiB/sec avg
WRITE:	38 MiB/sec	38 MiB/sec	37 MiB/sec	= 38 MiB/sec avg

Now testing RAID1+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	591 MiB/sec	601 MiB/sec	573 MiB/sec	= 588 MiB/sec avg
WRITE:	254 MiB/sec	254 MiB/sec	261 MiB/sec	= 256 MiB/sec avg

Now testing RAID1+0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	639 MiB/sec	656 MiB/sec	651 MiB/sec	= 648 MiB/sec avg
WRITE:	262 MiB/sec	249 MiB/sec	259 MiB/sec	= 257 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	426 MiB/sec	428 MiB/sec	443 MiB/sec	= 492 MiB/sec avg
WRITE:	314 MiB/sec	303 MiB/sec	310 MiB/sec	= 335 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	477 MiB/sec	474 MiB/sec	472 MiB/sec	= 492 MiB/sec avg
WRITE:	331 MiB/sec	333 MiB/sec	328 MiB/sec	= 335 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	489 MiB/sec	489 MiB/sec	492 MiB/sec	= 490 MiB/sec avg
WRITE:	332 MiB/sec	330 MiB/sec	316 MiB/sec	= 326 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	445 MiB/sec	443 MiB/sec	457 MiB/sec	= 448 MiB/sec avg
WRITE:	276 MiB/sec	278 MiB/sec	276 MiB/sec	= 277 MiB/sec avg

Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	603 MiB/sec	608 MiB/sec	600 MiB/sec	= 604 MiB/sec avg
WRITE:	457 MiB/sec	459 MiB/sec	457 MiB/sec	= 458 MiB/sec avg

Now testing RAID0 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	596 MiB/sec	580 MiB/sec	623 MiB/sec	= 600 MiB/sec avg
WRITE:	456 MiB/sec	465 MiB/sec	437 MiB/sec	= 453 MiB/sec avg

Now testing RAID0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	652 MiB/sec	650 MiB/sec	649 MiB/sec	= 651 MiB/sec avg
WRITE:	470 MiB/sec	473 MiB/sec	473 MiB/sec	= 472 MiB/sec avg

Now testing RAID0 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	657 MiB/sec	663 MiB/sec	655 MiB/sec	= 658 MiB/sec avg
WRITE:	465 MiB/sec	470 MiB/sec	462 MiB/sec	= 465 MiB/sec avg

Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	397 MiB/sec	410 MiB/sec	413 MiB/sec	= 406 MiB/sec avg
WRITE:	334 MiB/sec	330 MiB/sec	325 MiB/sec	= 330 MiB/sec avg

Now testing RAIDZ configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	439 MiB/sec	446 MiB/sec	452 MiB/sec	= 446 MiB/sec avg
WRITE:	350 MiB/sec	356 MiB/sec	353 MiB/sec	= 353 MiB/sec avg

Now testing RAIDZ configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	426 MiB/sec	427 MiB/sec	430 MiB/sec	= 428 MiB/sec avg
WRITE:	346 MiB/sec	352 MiB/sec	347 MiB/sec	= 348 MiB/sec avg

Now testing RAIDZ configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	440 MiB/sec	438 MiB/sec	445 MiB/sec	= 441 MiB/sec avg
WRITE:	368 MiB/sec	365 MiB/sec	366 MiB/sec	= 366 MiB/sec avg

Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	372 MiB/sec	372 MiB/sec	367 MiB/sec	= 370 MiB/sec avg
WRITE:	288 MiB/sec	282 MiB/sec	287 MiB/sec	= 286 MiB/sec avg

Now testing RAIDZ2 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	426 MiB/sec	418 MiB/sec	420 MiB/sec	= 421 MiB/sec avg
WRITE:	295 MiB/sec	296 MiB/sec	294 MiB/sec	= 295 MiB/sec avg

Now testing RAIDZ2 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	448 MiB/sec	445 MiB/sec	452 MiB/sec	= 448 MiB/sec avg
WRITE:	303 MiB/sec	304 MiB/sec	303 MiB/sec	= 303 MiB/sec avg

Now testing RAIDZ2 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	399 MiB/sec	411 MiB/sec	408 MiB/sec	= 406 MiB/sec avg
WRITE:	304 MiB/sec	308 MiB/sec	301 MiB/sec	= 304 MiB/sec avg

Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	586 MiB/sec	591 MiB/sec	585 MiB/sec	= 587 MiB/sec avg
WRITE:	66 MiB/sec	65 MiB/sec	64 MiB/sec	= 65 MiB/sec avg

Now testing RAID1 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	609 MiB/sec	605 MiB/sec	602 MiB/sec	= 605 MiB/sec avg
WRITE:	58 MiB/sec	62 MiB/sec	62 MiB/sec	= 61 MiB/sec avg

Now testing RAID1 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	625 MiB/sec	618 MiB/sec	639 MiB/sec	= 627 MiB/sec avg
WRITE:	57 MiB/sec	57 MiB/sec	55 MiB/sec	= 56 MiB/sec avg

Now testing RAID1 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	640 MiB/sec	638 MiB/sec	628 MiB/sec	= 636 MiB/sec avg
WRITE:	49 MiB/sec	49 MiB/sec	52 MiB/sec	= 50 MiB/sec avg

Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	446 MiB/sec	441 MiB/sec	439 MiB/sec	= 442 MiB/sec avg
WRITE:	249 MiB/sec	237 MiB/sec	253 MiB/sec	= 247 MiB/sec avg

Now testing RAID1+0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	531 MiB/sec	553 MiB/sec	550 MiB/sec	= 545 MiB/sec avg
WRITE:	256 MiB/sec	262 MiB/sec	255 MiB/sec	= 258 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	440 MiB/sec	436 MiB/sec	431 MiB/sec	= 490 MiB/sec avg
WRITE:	309 MiB/sec	303 MiB/sec	310 MiB/sec	= 326 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	474 MiB/sec	479 MiB/sec	471 MiB/sec	= 490 MiB/sec avg
WRITE:	320 MiB/sec	310 MiB/sec	325 MiB/sec	= 326 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	488 MiB/sec	487 MiB/sec	487 MiB/sec	= 487 MiB/sec avg
WRITE:	324 MiB/sec	325 MiB/sec	312 MiB/sec	= 320 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	458 MiB/sec	448 MiB/sec	449 MiB/sec	= 452 MiB/sec avg
WRITE:	256 MiB/sec	268 MiB/sec	270 MiB/sec	= 265 MiB/sec avg

Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	418 MiB/sec	440 MiB/sec	430 MiB/sec	= 430 MiB/sec avg
WRITE:	354 MiB/sec	357 MiB/sec	354 MiB/sec	= 355 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	485 MiB/sec	483 MiB/sec	478 MiB/sec	= 482 MiB/sec avg
WRITE:	388 MiB/sec	407 MiB/sec	408 MiB/sec	= 401 MiB/sec avg

Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	533 MiB/sec	532 MiB/sec	514 MiB/sec	= 526 MiB/sec avg
WRITE:	435 MiB/sec	426 MiB/sec	433 MiB/sec	= 431 MiB/sec avg

Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	580 MiB/sec	572 MiB/sec	545 MiB/sec	= 566 MiB/sec avg
WRITE:	442 MiB/sec	426 MiB/sec	448 MiB/sec	= 439 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	321 MiB/sec	337 MiB/sec	332 MiB/sec	= 330 MiB/sec avg
WRITE:	235 MiB/sec	231 MiB/sec	227 MiB/sec	= 231 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	417 MiB/sec	417 MiB/sec	435 MiB/sec	= 423 MiB/sec avg
WRITE:	291 MiB/sec	284 MiB/sec	291 MiB/sec	= 288 MiB/sec avg

Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	415 MiB/sec	421 MiB/sec	411 MiB/sec	= 415 MiB/sec avg
WRITE:	305 MiB/sec	306 MiB/sec	306 MiB/sec	= 306 MiB/sec avg

Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	387 MiB/sec	385 MiB/sec	383 MiB/sec	= 385 MiB/sec avg
WRITE:	313 MiB/sec	319 MiB/sec	313 MiB/sec	= 315 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cW

The common factor between the crashes (as far as I can see) is that they both occurred during the 4-disk RAIDZ2 benchmark, though in different runs.
 
RAID-Z2 is where the most memory is consumed (RAID-Z3 even more, but that's not supported yet). So your system is crashing due to memory exhaustion, which is partly caused by the tuning. I thought increasing kmem to RAM minus 1GiB would be enough, but that may have been optimistic. I only have 8 disks to test with, though; 16 disks might need a bigger margin.

So try the following tuning parameters:
kmem=7g
kmem_max=7g
ARC_min=4g
ARC_max=5g

Note the inclusion of kmem_max; by default this is not tuned. You can do all of this on the tuning page: just change the values manually (making sure that line is selected), press the Save button, then reboot.

If those values run stable for you, I'll consider changing the default tuning variables. Do note, however, that this is an excellent stability test: if your NAS survives the benchmark, the ZFS memory tuning should be stable! It is possible that RAID-Z configurations would be perfectly stable with the current settings, but I would still add a bigger margin; you don't want your new NAS crashing. The patches in FreeBSD 9-CURRENT that rein in ZFS's hunger for RAM are interesting in this regard.
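Assuming the Tuning page maps onto the standard FreeBSD loader tunables, the values above would end up in /boot/loader.conf roughly as follows; this is a sketch, not necessarily the exact lines the web-GUI writes.

Code:
# /boot/loader.conf -- sketch, assuming the usual FreeBSD/ZFS tunables
vm.kmem_size="7G"
vm.kmem_size_max="7G"
vfs.zfs.arc_min="4G"
vfs.zfs.arc_max="5G"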
 
If you have 4GB RAM, you want to set vm.kmem_size to 50% more than that.

For example, I have 6GB RAM and use vm.kmem_size="9G".

I haven't tuned the ARC, because in my experience FreeBSD is smarter than me at adjusting it dynamically.
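In /boot/loader.conf that single setting would look like this; the 9G value is this poster's own choice for a 6GB RAM box, not a general recommendation.

Code:
# /boot/loader.conf -- kmem only, ARC left to adjust itself
vm.kmem_size="9G"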
 
So try the following tuning parameters:
kmem=7g
kmem_max=7g
ARC_min=4g
ARC_max=5g

I'm using your values now, and will start a new benchmark after the reboot.
 
A very interesting thread. I'm using unRAID at the moment for storage, but I'm always searching for better solutions. First off, I have no experience with ZFS or FreeBSD; I've been using Linux for two years and have played with the shell.

I have two questions:

- Can I use desktop drives? Are there no TLER problems?

You can do that already by installing ZFS-on-root to your USB stick.
1) format USB stick with GPT
2) create a pool
3) make it bootable by installing ZFS-on-root to the newly created pool on Pools->Booting
4) reboot and now boot from USB directly into ZFS

- Is there somewhere a more detailed how-to on how to install to a USB stick?

Thank you.
 
Found a bug:

Created a pool with raidz2 (8 drives) and when I then try to add a 2nd raidz2 vdev (8 drives) I get this error message:

ERROR: You selected RAID5 (single parity) but have selected less than three disks. Please go back to select at least 3 disks.
 
A very interesting thread. I'm using unRAID at the moment for storage, but I'm always searching for better solutions. First off, I have no experience with ZFS or FreeBSD; I've been using Linux for two years and have played with the shell.

I have two questions:

- Can I use desktop drives? Are there no TLER problems?



- Is there somewhere a more detailed how-to on how to install to a USB stick?

Thank you.
No need for TLER with ZFS, if you're not using HW RAID cards.
You can find a full how-to over at http://submesa.com/mesa but you will have to update via the web interface after you're finished installing.
 
No need for TLER with ZFS, if you're not using HW RAID cards.
You can find a full how-to over at http://submesa.com/mesa but you will have to update via the web interface after you're finished installing.

Thanks for answering.

So if you use software RAID you can use ordinary desktop HDDs without TLER, am I correct? You only need TLER with hardware RAID?
 
Whenever I run the random benchmarks, I keep getting:
Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 4 disks
disk 1: gpt/1
disk 2: gpt/2
disk 3: gpt/3
disk 4: gpt/4

* Test Settings: TS32; 
* Tuning: none
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 4 disks: czmIrm: benchmarks/raidtest.read: No such file or directory
rm: benchmarks/raidtest.write: No such file or directory
rm: benchmarks/raidtest.mixed: No such file or directory
d@czmIrm: benchmarks/raidtest.read: No such file or directory
rm: benchmarks/raidtest.write: No such file or directory
rm: benchmarks/raidtest.mixed: No such file or directory
d@czmIrm: benchmarks/raidtest.read: No such file or directory
rm: benchmarks/raidtest.write: No such file or directory
rm: benchmarks/raidtest.mixed: No such file or directory

Sequential seems to work fine though. This is running off the cd, as I don't know how to do ZFS-on-root
 
Have you tried to port Napp-it?
http://www.napp-it.org/index_en.html

How does it compare to your app?

Mesa/ZFSguru is at the "preview" stage and not recommended for prime time. I'm sure sub.mesa would tell you not to use it in production. Sub has been kind enough to let us test these initial builds, but there is no stable build as of yet. His system is also based on FreeBSD, while napp-it integrates with NexentaCore but also runs on OpenIndiana, *Solaris* and EON.

Philosophically I believe the goal with Mesa is to allow a total novice to set up FreeBSD/ZFS without too much homework. It has also become apparent that performance is a top priority.

If you need something today... napp-it is fine and I have used it on OpenIndiana. I also look forward to every new preview here as ultimately I will be running Mesa/guru or whatever the name becomes.
WF
 
Sub... it may be time to give this thing an official name? It deserves one and would make it easier for people to search for.
 
- Can I use desktop drives? Are there no TLER problems?
You don't need TLER on non-Windows software RAID, but you do need TLER on all/most (*) Hardware RAID and all Windows onboard/driver RAID.

If you have RAID edition drives, you may want to turn TLER off, since it is enabled by default. TLER can be dangerous once you have lost your redundancy, because then there is no alternative copy when a disk encounters a bad sector, and you would want the disk to spend as long as it needs to recover that data. TLER can effectively 'cause' data corruption if you have already lost a full disk and are rebuilding onto a new one, and one of the remaining disks of your degraded RAID-Z encounters a bad/weak sector: TLER makes the drive give up recovery after 7 seconds, which is not what you want in that situation!

But the worst that may happen is corruption of a single file. Metadata is always replicated, even on single disk pools.
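On drives that expose SCT ERC, and with a reasonably recent smartmontools, you can check or change this behaviour from the shell as sketched below; many desktop drives do not support it (older WD desktop drives needed a vendor DOS tool instead), and the device name is just an example.

Code:
smartctl -l scterc /dev/ada0         # show the current error-recovery (TLER/ERC) setting, if supported
smartctl -l scterc,70,70 /dev/ada0   # limit recovery to 7.0 seconds (RAID-style behaviour)
smartctl -l scterc,0,0 /dev/ada0     # disable the limit, letting the drive retry as long as it needs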

Is there somewhere a more detailed how-to on how to install to a USB stick?
Right now I would advise against it; I'm revising the USB/binary image.

Instead, you may want to install ZFS-on-root to your USB stick, treating it as a normal small HDD that is well suited to hosting your OS. It is not as durable as the binary distribution, since it causes more writes to the NAND flash media. But even if your OS fails, you shouldn't lose anything but config files describing which filesystems you shared and so on; all your data should still be there, and simply reinstalling would get things up and running again in a few minutes. ZFS is much less write-heavy on unmanaged flash than, say, NTFS, Ext3 or UFS, because it does not overwrite existing blocks that often but writes changes to free space instead. Later I'll add a feature to the web-GUI to install an embedded version to USB/CompactFlash, aimed at reducing writes to the target media. It would come with restrictions, however; ZFS-on-root installs do not have these restrictions.

ZFS-on-root basics:
1) boot livecd and format target disk as GPT
2) create a pool
3) install ZFS-on-root, making it bootable, on the Pools->Booting page
4) reboot and boot from target disk
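Under the hood, steps 1 and 2 come down to something like the sketch below, assuming a standard GPT layout; the device name da0 and pool name 'usbboot' are examples only, and step 3 (the actual ZFS-on-root install and boot code) is still done on the Pools->Booting page.

Code:
# sketch only; da0 and 'usbboot' are example names
gpart create -s gpt da0                   # step 1: GPT format the stick
gpart add -t freebsd-boot -s 512k da0     # small partition reserved for the gptzfsboot loader
gpart add -t freebsd-zfs -l usbdisk da0   # rest of the stick for ZFS
zpool create usbboot gpt/usbdisk          # step 2: create the pool on the labeled partition
# step 3: Pools->Booting installs ZFS-on-root and the boot code onto 'usbboot'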
 
Whenever I run the random benchmarks, I keep getting:
Code:
rm: benchmarks/raidtest.write: No such file or directory
rm: benchmarks/raidtest.mixed: No such file or directory

Sequential seems to work fine though. This is running off the cd, as I don't know how to do ZFS-on-root
This is still a minor bug from a modification I made to the benchmark script, which deletes the raidtest profiles before creating them; otherwise people who had crashes would not be able to run the benchmark again. The bug is that when there is no such file, no error message should be displayed. Consider it fixed in the next update. It shouldn't affect any features of the benchmark script; you still get the nice graphs and everything.
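For the curious, the gist of the fix is probably nothing more than forcing the delete, along the lines below; this is an assumption about the script, not the actual code.

Code:
# -f suppresses the "No such file or directory" complaint when the profiles do not exist yet
rm -f benchmarks/raidtest.read benchmarks/raidtest.write benchmarks/raidtest.mixed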

[quote="MrLie, post: 1036422114"]Found a bug:

Created a pool with raidz2 (8 drives) and when I then try to add a 2nd raidz2 vdev (8 drives) I get this error message:
Thanks for reporting it! Fixed in next update.
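Until the fixed build is out, the same thing can be done from the shell with zpool add; the pool name and GPT labels below are examples only.

Code:
zpool add tank raidz2 gpt/disk8 gpt/disk9 gpt/disk10 gpt/disk11 \
                      gpt/disk12 gpt/disk13 gpt/disk14 gpt/disk15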
 
Mesa/ZFSguru is at the "preview" stage and not recommended for prime time. I'm sure sub.mesa would tell you not to use it in production.
True. 0.2.0 would be my first 'semi-stable' release, where I split stable and experimental builds so that only those interested in testing would run the experimental ones.

The underlying OS and ZFS code is labeled stable, in the sense that a corrupted ZFS pool due to a bug is a very remote possibility, because it is broadly used and deployed, and has been tested intensively by a smaller group of people.

So the biggest risks of my project are things not working, error messages, inconvenient or cumbersome configuration, and missing core features. That makes it a poor recommendation for someone who needs a working solution. Besides the core features, it needs a lot of testing, bugfixing and more user-friendly usability.

Things will change when I start releasing experimental builds based on ZFS v28. These should, for the moment, NOT be used to store real data that you haven't backed up very well; the risk of a bug corrupting the pool is not that remote there. The patches have been tested, but much less thoroughly than the published stable ZFS code. Solaris currently is at ZFS v22, and it disabled the v21 dedupe because it is not considered stable enough.

So whatever you do, do not run unstable ZFS. If you're bitten by a bug, you will blame ZFS, but frankly it doesn't make much sense to choose ZFS for its reliability and then throw that out of the window by using an alpha/beta-quality version of it. For now, ZFS v28 is to be considered unstable and only suitable for testing purposes; it is nice to know what you'll be getting in the near future once it matures. FreeBSD 9.0 would probably release around summer 2011, at which point ZFS v28 should be stable and production-ready, assuming all goes well.

ZFS v28 experimental builds could be released in early december; but no promises.

Philosophically I believe the goal with Mesa is to allow a total novice to set up FreeBSD/ZFS without too much homework. It has also become apparent that performance is a top priority.
Well said. In essence, I want to make ZFS more accessible to home users who want data integrity/reliability and like ZFS's features, but are afraid of running something 'foreign' they don't know. A web interface with a managed system can help lower the threshold for ZFS, giving more people access to its features.

I cannot compete with other projects in terms of features, but I can try to be the 'easiest' ZFS product available, with a LiveCD, easy installation methods (which I still have to work on) and a friendly, uncluttered web interface.

Sub... it may be time to give this thing an official name? It deserves one and would make it easier for people to search for.
ZFSguru is going to be the new name, but I still haven't launched the website. :mad:

Why ZFSguru? Well, the name doesn't refer to me, but rather to the personified web interface, where the 'guru' does things for you, warns you about issues, gives recommendations and provides information on request. Something like that silly paperclip in Microsoft Office, actually. :D

But i hope that it would be a little more helpful than helping you with suicide notes. :D

The idea is that any 'smart' script that detects a problem or has a suggestion would express it via the guru, like: "I do not think it's wise to mix RAID-Z and single disks" when you try to do so. It would still let you do everything, and it can be turned off if desired. But this isn't just for novices; I want it to be tuned to your level of understanding: people new to ZFS, experienced ZFS users and system administrators. The verbosity will decrease as you move the slider up, giving more technical information instead of information aimed at helping a novice user.

So I think this can actually be useful. It also personifies the product, which I think is fun. And of course it integrates well with a new logo, which I still don't have. :p

The new website is going to be nice, though. I'll probably release it together with the 0.1.7 final release, featuring a new System->Install page where you install either ZFS-on-root or USB embedded, explained much better than it is now. Together with some bugfixes and minor new features, that should be ready for 0.1.7 final.
 
This is still a minor bug from a modification I made to the benchmark script, which deletes the raidtest profiles before creating them; otherwise people who had crashes would not be able to run the benchmark again. The bug is that when there is no such file, no error message should be displayed. Consider it fixed in the next update. It shouldn't affect any features of the benchmark script; you still get the nice graphs and everything.

Turns out I was impatient; I did start seeing performance numbers, right up until the system determined I was a failure for only having 4GB of RAM without tuning.

I understand 8GB is recommended, but do you have tuning parameters for other amounts, such as 2, 4 or 6GB?
 