Testing ZFS RAID-Z performance with 4K sector drives

Shall I tell you a little secret? :)

It is a common belief that RAID5 is heavy on your CPU 'because it needs to calculate the parity'. The truth is quite different: even an Atom can do more than 2GB/s of XOR parity calculations. XOR is about the simplest instruction you can give a CPU; it is limited mainly by memory bandwidth, and some high-end systems can do more than 100GB/s of XOR calculations.
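The parity math itself really is that trivial. A minimal Python sketch of RAID5-style parity (illustrative only, not actual driver code) shows it is nothing but a byte-wise XOR across the data blocks:

```python
# Illustrative sketch of RAID5 parity: a byte-wise XOR across data blocks.
# The arithmetic is cheap; the work is moving the bytes through memory.

def xor_parity(blocks: list[bytes]) -> bytes:
    """Compute the parity block as the XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover one lost data block: XOR the parity with the survivors."""
    return xor_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three tiny 4-byte "disks"
p = xor_parity(data)
# Lose "disk 2", then recover it from the other disks plus parity:
assert rebuild([data[0], data[2]], p) == data[1]
```

Real RAID5 drivers do the same XOR over large aligned buffers (and often with SIMD), which is why the parity step is memory-bound rather than CPU-bound.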

So when doing RAID5 or RAID6, the XOR is only a few percent of the RAID driver's total CPU usage. What is CPU-intensive, then? Answer: the splitting and combining of requests, to make the requests the right size. This is especially true for traditional RAID5 with 'write-back': the driver holds a buffer of incoming write requests and tries to glue them together into a 'full stripe block' of about a megabyte. Only this magical size can be written to disk without making the disks seek, so RAID5 writing is ALWAYS slow, except when writing exactly that magical value.

Some RAID5 engines have no such intelligence and are therefore extremely slow when writing; they can still be fast if you issue write requests of exactly the right size. Intelligent RAID5 drivers do this themselves: they maintain a write-back buffer so they can scan incoming write requests, glue them together, and write in the optimal size. There is only one size that is fast.

RAID5 can only write fast when:
disk1: stripe 128K
disk2: stripe 128K
disk3: stripe 128K
disk4: stripe 128K

In this case the full stripe block is (4 - 1) * 128K = 384KiB. Only writes of exactly this size will be fast, and most of the driver's CPU usage goes into making this 'write request split & combine' possible.
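The arithmetic generalizes to any disk count and stripe size. A quick sketch (the `parity_disks=2` case is my own RAID6 analogue, an assumption rather than something measured in this thread):

```python
# Full-stripe write size: (data disks) * (stripe size). Only writes of
# exactly this size touch every disk without a read-modify-write cycle.

def full_stripe_size(n_disks: int, stripe_kib: int, parity_disks: int = 1) -> int:
    """Return the 'magical' full-stripe write size in KiB."""
    return (n_disks - parity_disks) * stripe_kib

assert full_stripe_size(4, 128) == 384              # the 4-disk example above
assert full_stripe_size(16, 128) == 1920            # 16-disk RAID5
assert full_stripe_size(9, 128, parity_disks=2) == 896   # hypothetical RAID6 case
```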

If you are going to use compression, encryption, or de-duplication you would want more cores, but a fast dual-core like the E8400 should really be enough for an 8/16GiB ZFS system; ZFS is much more memory-hungry than CPU-hungry. These CPUs are also 45nm and thus very power efficient.
 
But the RAID0 write results have a built-in bottleneck. No need to even worry about RAID5 until the RAID0 problem is solved.
 
Well, I'm waiting for the test results after tuning; the earlier tests were clearly limited by the 0.5GiB of ARC memory for ZFS. With 8GiB RAM the automatic ARC tuning should give about 6GiB, so I'm interested to see those results.
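For reference, the KMEM/AMIN/AMAX values shown in the benchmark headers below correspond to standard FreeBSD loader tunables. A hypothetical /boot/loader.conf sketch matching this thread's settings (the tunable names are the real FreeBSD ones; the values are this system's, not general recommendations):

```
# Sketch of /boot/loader.conf tuning on an 8 GiB FreeBSD/ZFS box
# (matches the "KMEM=7g; AMIN=5g; AMAX=6g" line in the benchmark output)
vm.kmem_size="7g"
vm.kmem_size_max="7g"
vfs.zfs.arc_min="5g"
vfs.zfs.arc_max="6g"
```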
 
I did two new benchmark runs, but the server crashed on both.

Got some preliminary results though:

Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03

* Test Settings: TS32; 
* Tuning: KMEM=7g; AMIN=5g; AMAX=6g; 
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	660 MiB/sec	664 MiB/sec	665 MiB/sec	= 663 MiB/sec avg
WRITE:	471 MiB/sec	461 MiB/sec	466 MiB/sec	= 466 MiB/sec avg

Now testing RAIDZ configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	432 MiB/sec	432 MiB/sec	430 MiB/sec	= 431 MiB/sec avg
WRITE:	380 MiB/sec	382 MiB/sec	382 MiB/sec	= 381 MiB/sec avg

Now testing RAIDZ2 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	410 MiB/sec	411 MiB/sec	413 MiB/sec	= 411 MiB/sec avg
WRITE:	338 MiB/sec	329 MiB/sec	338 MiB/sec	= 335 MiB/sec avg

Now testing RAID1 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	689 MiB/sec	684 MiB/sec	687 MiB/sec	= 687 MiB/sec avg
WRITE:	36 MiB/sec	36 MiB/sec	36 MiB/sec	= 36 MiB/sec avg

Now testing RAID1+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	639 MiB/sec	645 MiB/sec	634 MiB/sec	= 639 MiB/sec avg
WRITE:	266 MiB/sec	258 MiB/sec	260 MiB/sec	= 261 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	444 MiB/sec	422 MiB/sec	453 MiB/sec	= 440 MiB/sec avg
WRITE:	313 MiB/sec	306 MiB/sec	305 MiB/sec	= 308 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	480 MiB/sec	473 MiB/sec	469 MiB/sec	= 474 MiB/sec avg
WRITE:	329 MiB/sec	327 MiB/sec	326 MiB/sec	= 327 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	488 MiB/sec	494 MiB/sec	488 MiB/sec	= 490 MiB/sec avg
WRITE:	335 MiB/sec	336 MiB/sec	331 MiB/sec	= 334 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	446 MiB/sec	459 MiB/sec	450 MiB/sec	= 452 MiB/sec avg
WRITE:	271 MiB/sec	276 MiB/sec	276 MiB/sec	= 274 MiB/sec avg

Now testing RAID0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	657 MiB/sec	650 MiB/sec	657 MiB/sec	= 655 MiB/sec avg
WRITE:	463 MiB/sec	465 MiB/sec	473 MiB/sec	= 467 MiB/sec avg

Now testing RAID0 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	670 MiB/sec	669 MiB/sec	660 MiB/sec	= 666 MiB/sec avg
WRITE:	446 MiB/sec	462 MiB/sec	461 MiB/sec	= 456 MiB/sec avg

Now testing RAID0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	679 MiB/sec	677 MiB/sec	677 MiB/sec	= 678 MiB/sec avg
WRITE:	466 MiB/sec	471 MiB/sec	467 MiB/sec	= 468 MiB/sec avg

Now testing RAID0 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	672 MiB/sec	678 MiB/sec	675 MiB/sec	= 675 MiB/sec avg
WRITE:	469 MiB/sec	467 MiB/sec	466 MiB/sec	= 467 MiB/sec avg

Now testing RAIDZ configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	438 MiB/sec	437 MiB/sec	445 MiB/sec	= 440 MiB/sec avg
WRITE:	371 MiB/sec	354 MiB/sec	373 MiB/sec	= 366 MiB/sec avg

Now testing RAIDZ configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	429 MiB/sec	427 MiB/sec	432 MiB/sec	= 429 MiB/sec avg
WRITE:	378 MiB/sec	374 MiB/sec	379 MiB/sec	= 377 MiB/sec avg

Now testing RAIDZ configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	438 MiB/sec	438 MiB/sec	433 MiB/sec	= 436 MiB/sec avg
WRITE:	384 MiB/sec	367 MiB/sec	375 MiB/sec	= 375 MiB/sec avg

Now testing RAIDZ configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	434 MiB/sec	425 MiB/sec	= 430 MiB/sec avg
WRITE:	373 MiB/sec	366 MiB/sec	374 MiB/sec	= 371 MiB/sec avg

Now testing RAIDZ2 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	400 MiB/sec	401 MiB/sec	399 MiB/sec	= 400 MiB/sec avg
WRITE:	324 MiB/sec	324 MiB/sec	317 MiB/sec	= 322 MiB/sec avg

Now testing RAIDZ2 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	407 MiB/sec	408 MiB/sec	407 MiB/sec	= 407 MiB/sec avg
WRITE:	314 MiB/sec	316 MiB/sec	312 MiB/sec	= 314 MiB/sec avg

Now testing RAIDZ2 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	410 MiB/sec	410 MiB/sec	405 MiB/sec	= 408 MiB/sec avg
WRITE:	321 MiB/sec	329 MiB/sec	333 MiB/sec	= 328 MiB/sec avg

Now testing RAIDZ2 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	405 MiB/sec	413 MiB/sec	412 MiB/sec	= 410 MiB/sec avg
WRITE:	327 MiB/sec	324 MiB/sec	326 MiB/sec	= 326 MiB/sec avg

Now testing RAID1 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	646 MiB/sec	654 MiB/sec	652 MiB/sec	= 651 MiB/sec avg
WRITE:	45 MiB/sec	46 MiB/sec	46 MiB/sec	= 45 MiB/sec avg

Now testing RAID1 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	658 MiB/sec	659 MiB/sec	651 MiB/sec	= 656 MiB/sec avg
WRITE:	42 MiB/sec	42 MiB/sec	43 MiB/sec	= 42 MiB/sec avg

Now testing RAID1 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	669 MiB/sec	667 MiB/sec	656 MiB/sec	= 664 MiB/sec avg
WRITE:	40 MiB/sec	40 MiB/sec	40 MiB/sec	= 40 MiB/sec avg

Now testing RAID1 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	666 MiB/sec	670 MiB/sec	661 MiB/sec	= 666 MiB/sec avg
WRITE:	38 MiB/sec	38 MiB/sec	38 MiB/sec	= 38 MiB/sec avg

Now testing RAID1+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	584 MiB/sec	563 MiB/sec	581 MiB/sec	= 576 MiB/sec avg
WRITE:	250 MiB/sec	257 MiB/sec	251 MiB/sec	= 253 MiB/sec avg

Now testing RAID1+0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	644 MiB/sec	654 MiB/sec	653 MiB/sec	= 650 MiB/sec avg
WRITE:	256 MiB/sec	262 MiB/sec	247 MiB/sec	= 255 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	435 MiB/sec	428 MiB/sec	451 MiB/sec	= 438 MiB/sec avg
WRITE:	311 MiB/sec	302 MiB/sec	311 MiB/sec	= 308 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	480 MiB/sec	475 MiB/sec	475 MiB/sec	= 477 MiB/sec avg
WRITE:	331 MiB/sec	327 MiB/sec	326 MiB/sec	= 328 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	492 MiB/sec	488 MiB/sec	489 MiB/sec	= 490 MiB/sec avg
WRITE:	330 MiB/sec	323 MiB/sec	318 MiB/sec	= 324 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	452 MiB/sec	453 MiB/sec	447 MiB/sec	= 451 MiB/sec avg
WRITE:	273 MiB/sec	277 MiB/sec	280 MiB/sec	= 276 MiB/sec avg

Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	602 MiB/sec	604 MiB/sec	607 MiB/sec	= 604 MiB/sec avg
WRITE:	458 MiB/sec	458 MiB/sec	458 MiB/sec	= 458 MiB/sec avg

Now testing RAID0 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	600 MiB/sec	585 MiB/sec	582 MiB/sec	= 589 MiB/sec avg
WRITE:	462 MiB/sec	459 MiB/sec	463 MiB/sec	= 461 MiB/sec avg

Now testing RAID0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	653 MiB/sec	655 MiB/sec	651 MiB/sec	= 653 MiB/sec avg
WRITE:	465 MiB/sec	463 MiB/sec	468 MiB/sec	= 465 MiB/sec avg

Now testing RAID0 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	659 MiB/sec	658 MiB/sec	654 MiB/sec	= 657 MiB/sec avg
WRITE:	464 MiB/sec	460 MiB/sec	468 MiB/sec	= 464 MiB/sec avg

Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	403 MiB/sec	408 MiB/sec	398 MiB/sec	= 403 MiB/sec avg
WRITE:	330 MiB/sec	329 MiB/sec	326 MiB/sec	= 328 MiB/sec avg

Now testing RAIDZ configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	440 MiB/sec	436 MiB/sec	453 MiB/sec	= 443 MiB/sec avg
WRITE:	353 MiB/sec	359 MiB/sec	354 MiB/sec	= 356 MiB/sec avg

Now testing RAIDZ configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	429 MiB/sec	425 MiB/sec	= 428 MiB/sec avg
WRITE:	351 MiB/sec	341 MiB/sec	351 MiB/sec	= 348 MiB/sec avg

Now testing RAIDZ configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	447 MiB/sec	442 MiB/sec	438 MiB/sec	= 442 MiB/sec avg
WRITE:	367 MiB/sec	367 MiB/sec	368 MiB/sec	= 367 MiB/sec avg

Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	380 MiB/sec	372 MiB/sec	374 MiB/sec	= 376 MiB/sec avg
WRITE:	286 MiB/sec	288 MiB/sec	284 MiB/sec	= 286 MiB/sec avg

Now testing RAIDZ2 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	418 MiB/sec	422 MiB/sec	419 MiB/sec	= 420 MiB/sec avg
WRITE:	291 MiB/sec	296 MiB/sec	295 MiB/sec	= 294 MiB/sec avg

Now testing RAIDZ2 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	454 MiB/sec	443 MiB/sec	434 MiB/sec	= 444 MiB/sec avg
WRITE:	305 MiB/sec	303 MiB/sec	294 MiB/sec	= 301 MiB/sec avg

Now testing RAIDZ2 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	410 MiB/sec	407 MiB/sec	410 MiB/sec	= 409 MiB/sec avg
WRITE:	314 MiB/sec	312 MiB/sec	311 MiB/sec	= 312 MiB/sec avg

Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	577 MiB/sec	589 MiB/sec	586 MiB/sec	= 584 MiB/sec avg
WRITE:	69 MiB/sec	65 MiB/sec	64 MiB/sec	= 66 MiB/sec avg

Now testing RAID1 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	609 MiB/sec	607 MiB/sec	604 MiB/sec	= 607 MiB/sec avg
WRITE:	58 MiB/sec	58 MiB/sec	62 MiB/sec	= 59 MiB/sec avg

Now testing RAID1 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	625 MiB/sec	618 MiB/sec	631 MiB/sec	= 625 MiB/sec avg
WRITE:	57 MiB/sec	57 MiB/sec	54 MiB/sec	= 56 MiB/sec avg

Now testing RAID1 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	636 MiB/sec	625 MiB/sec	635 MiB/sec	= 632 MiB/sec avg
WRITE:	48 MiB/sec	50 MiB/sec	48 MiB/sec	= 49 MiB/sec avg

Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	449 MiB/sec	441 MiB/sec	447 MiB/sec	= 446 MiB/sec avg
WRITE:	256 MiB/sec	246 MiB/sec	250 MiB/sec	= 251 MiB/sec avg

Now testing RAID1+0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	529 MiB/sec	557 MiB/sec	549 MiB/sec	= 545 MiB/sec avg
WRITE:	258 MiB/sec	261 MiB/sec	263 MiB/sec	= 261 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	435 MiB/sec	435 MiB/sec	439 MiB/sec	= 436 MiB/sec avg
WRITE:	312 MiB/sec	305 MiB/sec	305 MiB/sec	= 307 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	471 MiB/sec	473 MiB/sec	477 MiB/sec	= 474 MiB/sec avg
WRITE:	326 MiB/sec	322 MiB/sec	326 MiB/sec	= 325 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	489 MiB/sec	488 MiB/sec	489 MiB/sec	= 489 MiB/sec avg
WRITE:	335 MiB/sec	335 MiB/sec	330 MiB/sec	= 333 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	443 MiB/sec	457 MiB/sec	456 MiB/sec	= 452 MiB/sec avg
WRITE:	270 MiB/sec	273 MiB/sec	277 MiB/sec	= 274 MiB/sec avg

Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	421 MiB/sec	417 MiB/sec	437 MiB/sec	= 425 MiB/sec avg
WRITE:	358 MiB/sec	358 MiB/sec	354 MiB/sec	= 357 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	495 MiB/sec	484 MiB/sec	479 MiB/sec	= 486 MiB/sec avg
WRITE:	395 MiB/sec	403 MiB/sec	405 MiB/sec	= 401 MiB/sec avg

Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	536 MiB/sec	538 MiB/sec	523 MiB/sec	= 532 MiB/sec avg
WRITE:	438 MiB/sec	432 MiB/sec	434 MiB/sec	= 435 MiB/sec avg

Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	552 MiB/sec	547 MiB/sec	557 MiB/sec	= 552 MiB/sec avg
WRITE:	443 MiB/sec	448 MiB/sec	446 MiB/sec	= 446 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	325 MiB/sec	337 MiB/sec	328 MiB/sec	= 330 MiB/sec avg
WRITE:	237 MiB/sec	238 MiB/sec	232 MiB/sec	= 236 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	419 MiB/sec	415 MiB/sec	436 MiB/sec	= 424 MiB/sec avg
WRITE:	291 MiB/sec	291 MiB/sec	272 MiB/sec	= 285 MiB/sec avg

Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	416 MiB/sec	414 MiB/sec	416 MiB/sec	= 415 MiB/sec avg
WRITE:	308 MiB/sec	308 MiB/sec	307 MiB/sec	= 308 MiB/sec avg

Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	387 MiB/sec	389 MiB/sec	384 MiB/sec	= 387 MiB/sec avg
WRITE:	318 MiB/sec	318 MiB/sec	309 MiB/sec	= 315 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cWmRd@cW
 
(split post due to forum length limitations)

2nd benchmark:

Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03

* Test Settings: TS32; 
* Tuning: KMEM=7g; AMIN=5g; AMAX=6g; 
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	662 MiB/sec	663 MiB/sec	661 MiB/sec	= 662 MiB/sec avg
WRITE:	474 MiB/sec	475 MiB/sec	465 MiB/sec	= 471 MiB/sec avg

Now testing RAIDZ configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	429 MiB/sec	435 MiB/sec	= 432 MiB/sec avg
WRITE:	380 MiB/sec	381 MiB/sec	376 MiB/sec	= 379 MiB/sec avg

Now testing RAIDZ2 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	409 MiB/sec	411 MiB/sec	413 MiB/sec	= 411 MiB/sec avg
WRITE:	336 MiB/sec	336 MiB/sec	334 MiB/sec	= 335 MiB/sec avg

Now testing RAID1 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	685 MiB/sec	688 MiB/sec	682 MiB/sec	= 685 MiB/sec avg
WRITE:	36 MiB/sec	36 MiB/sec	36 MiB/sec	= 36 MiB/sec avg

Now testing RAID1+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	631 MiB/sec	638 MiB/sec	647 MiB/sec	= 639 MiB/sec avg
WRITE:	267 MiB/sec	252 MiB/sec	260 MiB/sec	= 260 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	435 MiB/sec	431 MiB/sec	447 MiB/sec	= 438 MiB/sec avg
WRITE:	312 MiB/sec	294 MiB/sec	307 MiB/sec	= 304 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	477 MiB/sec	472 MiB/sec	474 MiB/sec	= 474 MiB/sec avg
WRITE:	327 MiB/sec	328 MiB/sec	327 MiB/sec	= 327 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	493 MiB/sec	491 MiB/sec	491 MiB/sec	= 492 MiB/sec avg
WRITE:	336 MiB/sec	334 MiB/sec	336 MiB/sec	= 335 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	458 MiB/sec	450 MiB/sec	447 MiB/sec	= 452 MiB/sec avg
WRITE:	278 MiB/sec	275 MiB/sec	278 MiB/sec	= 277 MiB/sec avg

Now testing RAID0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	652 MiB/sec	660 MiB/sec	661 MiB/sec	= 657 MiB/sec avg
WRITE:	470 MiB/sec	469 MiB/sec	473 MiB/sec	= 471 MiB/sec avg

Now testing RAID0 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	671 MiB/sec	664 MiB/sec	675 MiB/sec	= 670 MiB/sec avg
WRITE:	471 MiB/sec	465 MiB/sec	470 MiB/sec	= 469 MiB/sec avg

Now testing RAID0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	678 MiB/sec	677 MiB/sec	673 MiB/sec	= 676 MiB/sec avg
WRITE:	465 MiB/sec	465 MiB/sec	472 MiB/sec	= 468 MiB/sec avg

Now testing RAID0 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	675 MiB/sec	678 MiB/sec	674 MiB/sec	= 676 MiB/sec avg
WRITE:	472 MiB/sec	473 MiB/sec	467 MiB/sec	= 471 MiB/sec avg

Now testing RAIDZ configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	437 MiB/sec	439 MiB/sec	442 MiB/sec	= 439 MiB/sec avg
WRITE:	365 MiB/sec	356 MiB/sec	373 MiB/sec	= 365 MiB/sec avg

Now testing RAIDZ configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	432 MiB/sec	437 MiB/sec	= 433 MiB/sec avg
WRITE:	372 MiB/sec	378 MiB/sec	375 MiB/sec	= 375 MiB/sec avg

Now testing RAIDZ configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	436 MiB/sec	436 MiB/sec	433 MiB/sec	= 435 MiB/sec avg
WRITE:	378 MiB/sec	377 MiB/sec	383 MiB/sec	= 379 MiB/sec avg

Now testing RAIDZ configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	430 MiB/sec	433 MiB/sec	426 MiB/sec	= 430 MiB/sec avg
WRITE:	371 MiB/sec	355 MiB/sec	374 MiB/sec	= 367 MiB/sec avg

Now testing RAIDZ2 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	400 MiB/sec	404 MiB/sec	400 MiB/sec	= 401 MiB/sec avg
WRITE:	322 MiB/sec	313 MiB/sec	322 MiB/sec	= 319 MiB/sec avg

Now testing RAIDZ2 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	410 MiB/sec	405 MiB/sec	408 MiB/sec	= 408 MiB/sec avg
WRITE:	311 MiB/sec	315 MiB/sec	311 MiB/sec	= 312 MiB/sec avg

Now testing RAIDZ2 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	408 MiB/sec	407 MiB/sec	409 MiB/sec	= 408 MiB/sec avg
WRITE:	320 MiB/sec	315 MiB/sec	327 MiB/sec	= 320 MiB/sec avg

Now testing RAIDZ2 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	409 MiB/sec	412 MiB/sec	411 MiB/sec	= 411 MiB/sec avg
WRITE:	317 MiB/sec	333 MiB/sec	324 MiB/sec	= 325 MiB/sec avg

Now testing RAID1 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	655 MiB/sec	653 MiB/sec	632 MiB/sec	= 647 MiB/sec avg
WRITE:	46 MiB/sec	46 MiB/sec	45 MiB/sec	= 46 MiB/sec avg

Now testing RAID1 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ:	658 MiB/sec	654 MiB/sec	663 MiB/sec	= 658 MiB/sec avg
WRITE:	42 MiB/sec	44 MiB/sec	43 MiB/sec	= 43 MiB/sec avg

Now testing RAID1 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	669 MiB/sec	653 MiB/sec	668 MiB/sec	= 663 MiB/sec avg
WRITE:	40 MiB/sec	40 MiB/sec	40 MiB/sec	= 40 MiB/sec avg

Now testing RAID1 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ:	663 MiB/sec	662 MiB/sec	674 MiB/sec	= 666 MiB/sec avg
WRITE:	38 MiB/sec	38 MiB/sec	37 MiB/sec	= 38 MiB/sec avg

Now testing RAID1+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	591 MiB/sec	601 MiB/sec	573 MiB/sec	= 588 MiB/sec avg
WRITE:	254 MiB/sec	254 MiB/sec	261 MiB/sec	= 256 MiB/sec avg

Now testing RAID1+0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ:	639 MiB/sec	656 MiB/sec	651 MiB/sec	= 648 MiB/sec avg
WRITE:	262 MiB/sec	249 MiB/sec	259 MiB/sec	= 257 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	426 MiB/sec	428 MiB/sec	443 MiB/sec	= 432 MiB/sec avg
WRITE:	314 MiB/sec	303 MiB/sec	310 MiB/sec	= 309 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	477 MiB/sec	474 MiB/sec	472 MiB/sec	= 474 MiB/sec avg
WRITE:	331 MiB/sec	333 MiB/sec	328 MiB/sec	= 331 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	489 MiB/sec	489 MiB/sec	492 MiB/sec	= 490 MiB/sec avg
WRITE:	332 MiB/sec	330 MiB/sec	316 MiB/sec	= 326 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	445 MiB/sec	443 MiB/sec	457 MiB/sec	= 448 MiB/sec avg
WRITE:	276 MiB/sec	278 MiB/sec	276 MiB/sec	= 277 MiB/sec avg

Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	603 MiB/sec	608 MiB/sec	600 MiB/sec	= 604 MiB/sec avg
WRITE:	457 MiB/sec	459 MiB/sec	457 MiB/sec	= 458 MiB/sec avg

Now testing RAID0 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	596 MiB/sec	580 MiB/sec	623 MiB/sec	= 600 MiB/sec avg
WRITE:	456 MiB/sec	465 MiB/sec	437 MiB/sec	= 453 MiB/sec avg

Now testing RAID0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	652 MiB/sec	650 MiB/sec	649 MiB/sec	= 651 MiB/sec avg
WRITE:	470 MiB/sec	473 MiB/sec	473 MiB/sec	= 472 MiB/sec avg

Now testing RAID0 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	657 MiB/sec	663 MiB/sec	655 MiB/sec	= 658 MiB/sec avg
WRITE:	465 MiB/sec	470 MiB/sec	462 MiB/sec	= 465 MiB/sec avg

Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	397 MiB/sec	410 MiB/sec	413 MiB/sec	= 406 MiB/sec avg
WRITE:	334 MiB/sec	330 MiB/sec	325 MiB/sec	= 330 MiB/sec avg

Now testing RAIDZ configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	439 MiB/sec	446 MiB/sec	452 MiB/sec	= 446 MiB/sec avg
WRITE:	350 MiB/sec	356 MiB/sec	353 MiB/sec	= 353 MiB/sec avg

Now testing RAIDZ configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	426 MiB/sec	427 MiB/sec	430 MiB/sec	= 428 MiB/sec avg
WRITE:	346 MiB/sec	352 MiB/sec	347 MiB/sec	= 348 MiB/sec avg

Now testing RAIDZ configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	440 MiB/sec	438 MiB/sec	445 MiB/sec	= 441 MiB/sec avg
WRITE:	368 MiB/sec	365 MiB/sec	366 MiB/sec	= 366 MiB/sec avg

Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	372 MiB/sec	372 MiB/sec	367 MiB/sec	= 370 MiB/sec avg
WRITE:	288 MiB/sec	282 MiB/sec	287 MiB/sec	= 286 MiB/sec avg

Now testing RAIDZ2 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	426 MiB/sec	418 MiB/sec	420 MiB/sec	= 421 MiB/sec avg
WRITE:	295 MiB/sec	296 MiB/sec	294 MiB/sec	= 295 MiB/sec avg

Now testing RAIDZ2 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	448 MiB/sec	445 MiB/sec	452 MiB/sec	= 448 MiB/sec avg
WRITE:	303 MiB/sec	304 MiB/sec	303 MiB/sec	= 303 MiB/sec avg

Now testing RAIDZ2 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	399 MiB/sec	411 MiB/sec	408 MiB/sec	= 406 MiB/sec avg
WRITE:	304 MiB/sec	308 MiB/sec	301 MiB/sec	= 304 MiB/sec avg

Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	586 MiB/sec	591 MiB/sec	585 MiB/sec	= 587 MiB/sec avg
WRITE:	66 MiB/sec	65 MiB/sec	64 MiB/sec	= 65 MiB/sec avg

Now testing RAID1 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ:	609 MiB/sec	605 MiB/sec	602 MiB/sec	= 605 MiB/sec avg
WRITE:	58 MiB/sec	62 MiB/sec	62 MiB/sec	= 61 MiB/sec avg

Now testing RAID1 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	625 MiB/sec	618 MiB/sec	639 MiB/sec	= 627 MiB/sec avg
WRITE:	57 MiB/sec	57 MiB/sec	55 MiB/sec	= 56 MiB/sec avg

Now testing RAID1 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ:	640 MiB/sec	638 MiB/sec	628 MiB/sec	= 636 MiB/sec avg
WRITE:	49 MiB/sec	49 MiB/sec	52 MiB/sec	= 50 MiB/sec avg

Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	446 MiB/sec	441 MiB/sec	439 MiB/sec	= 442 MiB/sec avg
WRITE:	249 MiB/sec	237 MiB/sec	253 MiB/sec	= 247 MiB/sec avg

Now testing RAID1+0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ:	531 MiB/sec	553 MiB/sec	550 MiB/sec	= 545 MiB/sec avg
WRITE:	256 MiB/sec	262 MiB/sec	255 MiB/sec	= 258 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ:	440 MiB/sec	436 MiB/sec	431 MiB/sec	= 436 MiB/sec avg
WRITE:	309 MiB/sec	303 MiB/sec	310 MiB/sec	= 307 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	474 MiB/sec	479 MiB/sec	471 MiB/sec	= 475 MiB/sec avg
WRITE:	320 MiB/sec	310 MiB/sec	325 MiB/sec	= 318 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ:	488 MiB/sec	487 MiB/sec	487 MiB/sec	= 487 MiB/sec avg
WRITE:	324 MiB/sec	325 MiB/sec	312 MiB/sec	= 320 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ:	458 MiB/sec	448 MiB/sec	449 MiB/sec	= 452 MiB/sec avg
WRITE:	256 MiB/sec	268 MiB/sec	270 MiB/sec	= 265 MiB/sec avg

Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	418 MiB/sec	440 MiB/sec	430 MiB/sec	= 430 MiB/sec avg
WRITE:	354 MiB/sec	357 MiB/sec	354 MiB/sec	= 355 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	485 MiB/sec	483 MiB/sec	478 MiB/sec	= 482 MiB/sec avg
WRITE:	388 MiB/sec	407 MiB/sec	408 MiB/sec	= 401 MiB/sec avg

Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	533 MiB/sec	532 MiB/sec	514 MiB/sec	= 526 MiB/sec avg
WRITE:	435 MiB/sec	426 MiB/sec	433 MiB/sec	= 431 MiB/sec avg

Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	580 MiB/sec	572 MiB/sec	545 MiB/sec	= 566 MiB/sec avg
WRITE:	442 MiB/sec	426 MiB/sec	448 MiB/sec	= 439 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ:	321 MiB/sec	337 MiB/sec	332 MiB/sec	= 330 MiB/sec avg
WRITE:	235 MiB/sec	231 MiB/sec	227 MiB/sec	= 231 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ:	417 MiB/sec	417 MiB/sec	435 MiB/sec	= 423 MiB/sec avg
WRITE:	291 MiB/sec	284 MiB/sec	291 MiB/sec	= 288 MiB/sec avg

Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ:	415 MiB/sec	421 MiB/sec	411 MiB/sec	= 415 MiB/sec avg
WRITE:	305 MiB/sec	306 MiB/sec	306 MiB/sec	= 306 MiB/sec avg

Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ:	387 MiB/sec	385 MiB/sec	383 MiB/sec	= 385 MiB/sec avg
WRITE:	313 MiB/sec	319 MiB/sec	313 MiB/sec	= 315 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cW
 
Did it crash with a 'kmem_map too small' panic under the default tuning settings, or after you performed the tuning? The benchmarks still show untuned performance.

You can see clearly from the RAID1 write scores that performance degrades as disks are added; if ZFS has plenty of RAM it should show a steady horizontal line, assuming all disks have the same throughput.

You may also want to test each disk individually, to see if there are any 'duds' among them with lower performance dragging the array down. But I still think the tuning is the biggest bottleneck here.
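A minimal sketch of such a per-disk check (a hypothetical helper, not part of ZFSguru; on the real box you would run it as root against each /dev/gpt/DiskNN device):

```python
# Time large sequential reads from a path and report throughput, to spot
# 'dud' disks. Works on any readable path; point it at raw devices on the
# actual system.
import time

def read_speed(path: str, total_bytes: int = 64 * 1024 * 1024,
               block_size: int = 1024 * 1024) -> float:
    """Return sequential read throughput in MiB/s for the given path."""
    remaining = total_bytes
    start = time.perf_counter()
    with open(path, "rb") as f:
        while remaining > 0:
            chunk = f.read(min(block_size, remaining))
            if not chunk:          # end of file/device reached early
                break
            remaining -= len(chunk)
    elapsed = time.perf_counter() - start
    read = total_bytes - remaining
    return (read / (1024 * 1024)) / elapsed if elapsed > 0 else 0.0
```

Usage would be a simple loop over the device names, e.g. printing `read_speed(f"/dev/gpt/Disk{n:02d}")` for each disk and eyeballing the outliers.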

Try the suggestions in the other thread; I would really love to see how that many disks scale with more memory available to ZFS.
 
Both benchmarks that were interrupted were run after I had performed the tuning.
I did a benchmark on all disks, and they all performed more or less equally.
 
sub.mesa, what about MrLie running the benchmark with only RAID0 until that bottleneck is solved? That should save MrLie some time.
 
I'm already running the benchmark, but with only one run on each config, so I'm saving some time there.
 
Since it starts with RAID0 at a high disk count, you can stop the benchmark after the first test if it gives you a low score. Or you can use the mini-benchmark and manually create a pool; the mini-benchmark is on the Pools page, where you click a pool.

Did you manage to do the tuning, MrLie? Does the output say 'Tuning: KMEM' etc.?
 
Preliminary results, while the benchmark is still running:

Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 1
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03

* Test Settings: TS32; TR1; 
* Tuning: KMEM=7g; KMAX=7g; AMIN=4g; AMAX=5g; 
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 16 disks: cWmRd@
READ:	664 MiB/sec	= 664 MiB/sec avg
WRITE:	475 MiB/sec	= 475 MiB/sec avg

Now testing RAIDZ configuration with 16 disks: cWmRd@
READ:	426 MiB/sec	= 426 MiB/sec avg
WRITE:	383 MiB/sec	= 383 MiB/sec avg

Now testing RAIDZ2 configuration with 16 disks: cWmRd@
READ:	410 MiB/sec	= 410 MiB/sec avg
WRITE:	337 MiB/sec	= 337 MiB/sec avg

Now testing RAID1 configuration with 16 disks: cWmRd@
READ:	687 MiB/sec	= 687 MiB/sec avg
WRITE:	34 MiB/sec	= 34 MiB/sec avg

Now testing RAID1+0 configuration with 16 disks: cWmRd@
READ:	637 MiB/sec	= 637 MiB/sec avg
WRITE:	249 MiB/sec	= 249 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	435 MiB/sec	= 435 MiB/sec avg
WRITE:	311 MiB/sec	= 311 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	476 MiB/sec	= 476 MiB/sec avg
WRITE:	327 MiB/sec	= 327 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	489 MiB/sec	= 489 MiB/sec avg
WRITE:	324 MiB/sec	= 324 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	446 MiB/sec	= 446 MiB/sec avg
Screenshot of my tuning-page:
 
Done with the latest benchmark; no errors occurred. I only did one run of each test, though, to speed things up.





Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 1
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03

* Test Settings: TS32; TR1; 
* Tuning: KMEM=7g; KMAX=7g; AMIN=4g; AMAX=5g; 
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 16 disks: cWmRd@
READ:	664 MiB/sec	= 664 MiB/sec avg
WRITE:	475 MiB/sec	= 475 MiB/sec avg

Now testing RAIDZ configuration with 16 disks: cWmRd@
READ:	426 MiB/sec	= 426 MiB/sec avg
WRITE:	383 MiB/sec	= 383 MiB/sec avg

Now testing RAIDZ2 configuration with 16 disks: cWmRd@
READ:	410 MiB/sec	= 410 MiB/sec avg
WRITE:	337 MiB/sec	= 337 MiB/sec avg

Now testing RAID1 configuration with 16 disks: cWmRd@
READ:	687 MiB/sec	= 687 MiB/sec avg
WRITE:	34 MiB/sec	= 34 MiB/sec avg

Now testing RAID1+0 configuration with 16 disks: cWmRd@
READ:	637 MiB/sec	= 637 MiB/sec avg
WRITE:	249 MiB/sec	= 249 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	435 MiB/sec	= 435 MiB/sec avg
WRITE:	311 MiB/sec	= 311 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	476 MiB/sec	= 476 MiB/sec avg
WRITE:	327 MiB/sec	= 327 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	489 MiB/sec	= 489 MiB/sec avg
WRITE:	324 MiB/sec	= 324 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	446 MiB/sec	= 446 MiB/sec avg
WRITE:	272 MiB/sec	= 272 MiB/sec avg

Now testing RAID0 configuration with 12 disks: cWmRd@
READ:	657 MiB/sec	= 657 MiB/sec avg
WRITE:	469 MiB/sec	= 469 MiB/sec avg

Now testing RAID0 configuration with 13 disks: cWmRd@
READ:	669 MiB/sec	= 669 MiB/sec avg
WRITE:	472 MiB/sec	= 472 MiB/sec avg

Now testing RAID0 configuration with 14 disks: cWmRd@
READ:	677 MiB/sec	= 677 MiB/sec avg
WRITE:	466 MiB/sec	= 466 MiB/sec avg

Now testing RAID0 configuration with 15 disks: cWmRd@
READ:	670 MiB/sec	= 670 MiB/sec avg
WRITE:	474 MiB/sec	= 474 MiB/sec avg

Now testing RAIDZ configuration with 12 disks: cWmRd@
READ:	437 MiB/sec	= 437 MiB/sec avg
WRITE:	371 MiB/sec	= 371 MiB/sec avg

Now testing RAIDZ configuration with 13 disks: cWmRd@
READ:	432 MiB/sec	= 432 MiB/sec avg
WRITE:	376 MiB/sec	= 376 MiB/sec avg

Now testing RAIDZ configuration with 14 disks: cWmRd@
READ:	427 MiB/sec	= 427 MiB/sec avg
WRITE:	378 MiB/sec	= 378 MiB/sec avg

Now testing RAIDZ configuration with 15 disks: cWmRd@
READ:	425 MiB/sec	= 425 MiB/sec avg
WRITE:	370 MiB/sec	= 370 MiB/sec avg

Now testing RAIDZ2 configuration with 12 disks: cWmRd@
READ:	405 MiB/sec	= 405 MiB/sec avg
WRITE:	321 MiB/sec	= 321 MiB/sec avg

Now testing RAIDZ2 configuration with 13 disks: cWmRd@
READ:	404 MiB/sec	= 404 MiB/sec avg
WRITE:	313 MiB/sec	= 313 MiB/sec avg

Now testing RAIDZ2 configuration with 14 disks: cWmRd@
READ:	407 MiB/sec	= 407 MiB/sec avg
WRITE:	330 MiB/sec	= 330 MiB/sec avg

Now testing RAIDZ2 configuration with 15 disks: cWmRd@
READ:	406 MiB/sec	= 406 MiB/sec avg
WRITE:	328 MiB/sec	= 328 MiB/sec avg

Now testing RAID1 configuration with 12 disks: cWmRd@
READ:	646 MiB/sec	= 646 MiB/sec avg
WRITE:	47 MiB/sec	= 47 MiB/sec avg

Now testing RAID1 configuration with 13 disks: cWmRd@
READ:	658 MiB/sec	= 658 MiB/sec avg
WRITE:	44 MiB/sec	= 44 MiB/sec avg

Now testing RAID1 configuration with 14 disks: cWmRd@
READ:	654 MiB/sec	= 654 MiB/sec avg
WRITE:	41 MiB/sec	= 41 MiB/sec avg

Now testing RAID1 configuration with 15 disks: cWmRd@
READ:	658 MiB/sec	= 658 MiB/sec avg
WRITE:	38 MiB/sec	= 38 MiB/sec avg

Now testing RAID1+0 configuration with 12 disks: cWmRd@
READ:	592 MiB/sec	= 592 MiB/sec avg
WRITE:	251 MiB/sec	= 251 MiB/sec avg

Now testing RAID1+0 configuration with 14 disks: cWmRd@
READ:	659 MiB/sec	= 659 MiB/sec avg
WRITE:	258 MiB/sec	= 258 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	429 MiB/sec	= 489 MiB/sec avg
WRITE:	311 MiB/sec	= 324 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	467 MiB/sec	= 489 MiB/sec avg
WRITE:	330 MiB/sec	= 324 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	486 MiB/sec	= 486 MiB/sec avg
WRITE:	334 MiB/sec	= 334 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	453 MiB/sec	= 453 MiB/sec avg
WRITE:	272 MiB/sec	= 272 MiB/sec avg

Now testing RAID0 configuration with 8 disks: cWmRd@
READ:	609 MiB/sec	= 609 MiB/sec avg
WRITE:	462 MiB/sec	= 462 MiB/sec avg

Now testing RAID0 configuration with 9 disks: cWmRd@
READ:	587 MiB/sec	= 587 MiB/sec avg
WRITE:	467 MiB/sec	= 467 MiB/sec avg

Now testing RAID0 configuration with 10 disks: cWmRd@
READ:	657 MiB/sec	= 657 MiB/sec avg
WRITE:	468 MiB/sec	= 468 MiB/sec avg

Now testing RAID0 configuration with 11 disks: cWmRd@
READ:	660 MiB/sec	= 660 MiB/sec avg
WRITE:	471 MiB/sec	= 471 MiB/sec avg

Now testing RAIDZ configuration with 8 disks: cWmRd@
READ:	407 MiB/sec	= 407 MiB/sec avg
WRITE:	330 MiB/sec	= 330 MiB/sec avg

Now testing RAIDZ configuration with 9 disks: cWmRd@
READ:	449 MiB/sec	= 449 MiB/sec avg
WRITE:	351 MiB/sec	= 351 MiB/sec avg

Now testing RAIDZ configuration with 10 disks: cWmRd@
READ:	431 MiB/sec	= 431 MiB/sec avg
WRITE:	343 MiB/sec	= 343 MiB/sec avg

Now testing RAIDZ configuration with 11 disks: cWmRd@
READ:	433 MiB/sec	= 433 MiB/sec avg
WRITE:	367 MiB/sec	= 367 MiB/sec avg

Now testing RAIDZ2 configuration with 8 disks: cWmRd@
READ:	368 MiB/sec	= 368 MiB/sec avg
WRITE:	288 MiB/sec	= 288 MiB/sec avg

Now testing RAIDZ2 configuration with 9 disks: cWmRd@
READ:	416 MiB/sec	= 416 MiB/sec avg
WRITE:	295 MiB/sec	= 295 MiB/sec avg

Now testing RAIDZ2 configuration with 10 disks: cWmRd@
READ:	443 MiB/sec	= 443 MiB/sec avg
WRITE:	304 MiB/sec	= 304 MiB/sec avg

Now testing RAIDZ2 configuration with 11 disks: cWmRd@
READ:	404 MiB/sec	= 404 MiB/sec avg
WRITE:	305 MiB/sec	= 305 MiB/sec avg

Now testing RAID1 configuration with 8 disks: cWmRd@
READ:	577 MiB/sec	= 577 MiB/sec avg
WRITE:	64 MiB/sec	= 64 MiB/sec avg

Now testing RAID1 configuration with 9 disks: cWmRd@
READ:	612 MiB/sec	= 612 MiB/sec avg
WRITE:	61 MiB/sec	= 61 MiB/sec avg

Now testing RAID1 configuration with 10 disks: cWmRd@
READ:	633 MiB/sec	= 633 MiB/sec avg
WRITE:	55 MiB/sec	= 55 MiB/sec avg

Now testing RAID1 configuration with 11 disks: cWmRd@
READ:	620 MiB/sec	= 620 MiB/sec avg
WRITE:	50 MiB/sec	= 50 MiB/sec avg

Now testing RAID1+0 configuration with 8 disks: cWmRd@
READ:	433 MiB/sec	= 433 MiB/sec avg
WRITE:	252 MiB/sec	= 252 MiB/sec avg

Now testing RAID1+0 configuration with 10 disks: cWmRd@
READ:	553 MiB/sec	= 553 MiB/sec avg
WRITE:	258 MiB/sec	= 258 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	448 MiB/sec	= 486 MiB/sec avg
WRITE:	310 MiB/sec	= 334 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	479 MiB/sec	= 486 MiB/sec avg
WRITE:	327 MiB/sec	= 334 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	490 MiB/sec	= 490 MiB/sec avg
WRITE:	338 MiB/sec	= 338 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	453 MiB/sec	= 453 MiB/sec avg
WRITE:	272 MiB/sec	= 272 MiB/sec avg

Now testing RAID0 configuration with 4 disks: cWmRd@
READ:	422 MiB/sec	= 422 MiB/sec avg
WRITE:	357 MiB/sec	= 357 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@
READ:	469 MiB/sec	= 469 MiB/sec avg
WRITE:	411 MiB/sec	= 411 MiB/sec avg

Now testing RAID0 configuration with 6 disks: cWmRd@
READ:	531 MiB/sec	= 531 MiB/sec avg
WRITE:	437 MiB/sec	= 437 MiB/sec avg

Now testing RAID0 configuration with 7 disks: cWmRd@
READ:	565 MiB/sec	= 565 MiB/sec avg
WRITE:	454 MiB/sec	= 454 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@
READ:	336 MiB/sec	= 336 MiB/sec avg
WRITE:	237 MiB/sec	= 237 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@
READ:	420 MiB/sec	= 420 MiB/sec avg
WRITE:	287 MiB/sec	= 287 MiB/sec avg

Now testing RAIDZ configuration with 6 disks: cWmRd@
READ:	409 MiB/sec	= 409 MiB/sec avg
WRITE:	292 MiB/sec	= 292 MiB/sec avg

Now testing RAIDZ configuration with 7 disks: cWmRd@
READ:	395 MiB/sec	= 395 MiB/sec avg
WRITE:	311 MiB/sec	= 311 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cWmRd@
READ:	233 MiB/sec	= 233 MiB/sec avg
WRITE:	166 MiB/sec	= 166 MiB/sec avg

Now testing RAIDZ2 configuration with 5 disks: cWmRd@
READ:	337 MiB/sec	= 337 MiB/sec avg
WRITE:	206 MiB/sec	= 206 MiB/sec avg

Now testing RAIDZ2 configuration with 6 disks: cWmRd@
READ:	397 MiB/sec	= 397 MiB/sec avg
WRITE:	249 MiB/sec	= 249 MiB/sec avg

Now testing RAIDZ2 configuration with 7 disks: cWmRd@
READ:	381 MiB/sec	= 381 MiB/sec avg
WRITE:	257 MiB/sec	= 257 MiB/sec avg

Now testing RAID1 configuration with 4 disks: cWmRd@
READ:	331 MiB/sec	= 331 MiB/sec avg
WRITE:	99 MiB/sec	= 99 MiB/sec avg

Now testing RAID1 configuration with 5 disks: cWmRd@
READ:	391 MiB/sec	= 391 MiB/sec avg
WRITE:	96 MiB/sec	= 96 MiB/sec avg

Now testing RAID1 configuration with 6 disks: cWmRd@
READ:	492 MiB/sec	= 492 MiB/sec avg
WRITE:	82 MiB/sec	= 82 MiB/sec avg

Now testing RAID1 configuration with 7 disks: cWmRd@
READ:	555 MiB/sec	= 555 MiB/sec avg
WRITE:	76 MiB/sec	= 76 MiB/sec avg

Now testing RAID1+0 configuration with 4 disks: cWmRd@
READ:	235 MiB/sec	= 235 MiB/sec avg
WRITE:	193 MiB/sec	= 193 MiB/sec avg

Now testing RAID1+0 configuration with 6 disks: cWmRd@
READ:	352 MiB/sec	= 352 MiB/sec avg
WRITE:	244 MiB/sec	= 244 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	436 MiB/sec	= 336 MiB/sec avg
WRITE:	309 MiB/sec	= 237 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	471 MiB/sec	= 336 MiB/sec avg
WRITE:	324 MiB/sec	= 237 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	492 MiB/sec	= 336 MiB/sec avg
WRITE:	332 MiB/sec	= 237 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	450 MiB/sec	= 397 MiB/sec avg
WRITE:	273 MiB/sec	= 249 MiB/sec avg

Now testing RAID0 configuration with 1 disks: cWmRd@
READ:	118 MiB/sec	= 118 MiB/sec avg
WRITE:	103 MiB/sec	= 103 MiB/sec avg

Now testing RAID0 configuration with 2 disks: cWmRd@
READ:	233 MiB/sec	= 233 MiB/sec avg
WRITE:	198 MiB/sec	= 198 MiB/sec avg

Now testing RAID0 configuration with 3 disks: cWmRd@
READ:	341 MiB/sec	= 341 MiB/sec avg
WRITE:	287 MiB/sec	= 287 MiB/sec avg

Now testing RAIDZ configuration with 2 disks: cWmRd@
READ:	119 MiB/sec	= 119 MiB/sec avg
WRITE:	101 MiB/sec	= 101 MiB/sec avg

Now testing RAIDZ configuration with 3 disks: cWmRd@
READ:	230 MiB/sec	= 230 MiB/sec avg
WRITE:	178 MiB/sec	= 178 MiB/sec avg

Now testing RAIDZ2 configuration with 3 disks: cWmRd@
READ:	120 MiB/sec	= 120 MiB/sec avg
WRITE:	97 MiB/sec	= 97 MiB/sec avg

Now testing RAID1 configuration with 2 disks: cWmRd@
READ:	119 MiB/sec	= 119 MiB/sec avg
WRITE:	103 MiB/sec	= 103 MiB/sec avg

Now testing RAID1 configuration with 3 disks: cWmRd@
READ:	242 MiB/sec	= 242 MiB/sec avg
WRITE:	103 MiB/sec	= 103 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	438 MiB/sec	= 336 MiB/sec avg
WRITE:	310 MiB/sec	= 237 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	472 MiB/sec	= 336 MiB/sec avg
WRITE:	326 MiB/sec	= 237 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	487 MiB/sec	= 336 MiB/sec avg
WRITE:	337 MiB/sec	= 237 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	446 MiB/sec	= 397 MiB/sec avg
WRITE:	272 MiB/sec	= 249 MiB/sec avg

Done
 
Odd. The write ceiling is still there for RAID 0, but it increased slightly from 431 to 475 MB/s.

I still think it would be interesting to try a run with a larger record size (block transfer size), say 16 MB.
 
Using OpenSolaris and Bonnie++, I also got 475 MB/s write, as can be seen in this post.

I'll start a new benchmark using a 16 MB blocksize. It should be done when I get home from work, in about 8-10 hours' time.

Edit: The drop-down menu only offers 10 MB and 100 MB above the default 1 MB choice, so I'm using 10 MB for this new run.
 
Benchmark using 10 MB blocksize is done:





Code:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 1
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03

* Test Settings: TS32; TR1; BS10485760; 
* Tuning: KMEM=7g; KMAX=7g; AMIN=4g; AMAX=5g; 
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 16 disks: cWmRd@
READ:	584 MiB/sec	= 584 MiB/sec avg
WRITE:	414 MiB/sec	= 414 MiB/sec avg

Now testing RAIDZ configuration with 16 disks: cWmRd@
READ:	403 MiB/sec	= 403 MiB/sec avg
WRITE:	343 MiB/sec	= 343 MiB/sec avg

Now testing RAIDZ2 configuration with 16 disks: cWmRd@
READ:	373 MiB/sec	= 373 MiB/sec avg
WRITE:	301 MiB/sec	= 301 MiB/sec avg

Now testing RAID1 configuration with 16 disks: cWmRd@
READ:	640 MiB/sec	= 640 MiB/sec avg
WRITE:	35 MiB/sec	= 35 MiB/sec avg

Now testing RAID1+0 configuration with 16 disks: cWmRd@
READ:	597 MiB/sec	= 597 MiB/sec avg
WRITE:	254 MiB/sec	= 254 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	397 MiB/sec	= 397 MiB/sec avg
WRITE:	282 MiB/sec	= 282 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	432 MiB/sec	= 432 MiB/sec avg
WRITE:	301 MiB/sec	= 301 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	444 MiB/sec	= 444 MiB/sec avg
WRITE:	300 MiB/sec	= 300 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	417 MiB/sec	= 417 MiB/sec avg
WRITE:	251 MiB/sec	= 251 MiB/sec avg

Now testing RAID0 configuration with 12 disks: cWmRd@
READ:	583 MiB/sec	= 583 MiB/sec avg
WRITE:	409 MiB/sec	= 409 MiB/sec avg

Now testing RAID0 configuration with 13 disks: cWmRd@
READ:	597 MiB/sec	= 597 MiB/sec avg
WRITE:	431 MiB/sec	= 431 MiB/sec avg

Now testing RAID0 configuration with 14 disks: cWmRd@
READ:	596 MiB/sec	= 596 MiB/sec avg
WRITE:	431 MiB/sec	= 431 MiB/sec avg

Now testing RAID0 configuration with 15 disks: cWmRd@
READ:	597 MiB/sec	= 597 MiB/sec avg
WRITE:	429 MiB/sec	= 429 MiB/sec avg

Now testing RAIDZ configuration with 12 disks: cWmRd@
READ:	398 MiB/sec	= 398 MiB/sec avg
WRITE:	330 MiB/sec	= 330 MiB/sec avg

Now testing RAIDZ configuration with 13 disks: cWmRd@
READ:	389 MiB/sec	= 389 MiB/sec avg
WRITE:	342 MiB/sec	= 342 MiB/sec avg

Now testing RAIDZ configuration with 14 disks: cWmRd@
READ:	399 MiB/sec	= 399 MiB/sec avg
WRITE:	341 MiB/sec	= 341 MiB/sec avg

Now testing RAIDZ configuration with 15 disks: cWmRd@
READ:	391 MiB/sec	= 391 MiB/sec avg
WRITE:	331 MiB/sec	= 331 MiB/sec avg

Now testing RAIDZ2 configuration with 12 disks: cWmRd@
READ:	373 MiB/sec	= 373 MiB/sec avg
WRITE:	286 MiB/sec	= 286 MiB/sec avg

Now testing RAIDZ2 configuration with 13 disks: cWmRd@
READ:	375 MiB/sec	= 375 MiB/sec avg
WRITE:	289 MiB/sec	= 289 MiB/sec avg

Now testing RAIDZ2 configuration with 14 disks: cWmRd@
READ:	380 MiB/sec	= 380 MiB/sec avg
WRITE:	303 MiB/sec	= 303 MiB/sec avg

Now testing RAIDZ2 configuration with 15 disks: cWmRd@
READ:	380 MiB/sec	= 380 MiB/sec avg
WRITE:	298 MiB/sec	= 298 MiB/sec avg

Now testing RAID1 configuration with 12 disks: cWmRd@
READ:	594 MiB/sec	= 594 MiB/sec avg
WRITE:	46 MiB/sec	= 46 MiB/sec avg

Now testing RAID1 configuration with 13 disks: cWmRd@
READ:	619 MiB/sec	= 619 MiB/sec avg
WRITE:	42 MiB/sec	= 42 MiB/sec avg

Now testing RAID1 configuration with 14 disks: cWmRd@
READ:	627 MiB/sec	= 627 MiB/sec avg
WRITE:	41 MiB/sec	= 41 MiB/sec avg

Now testing RAID1 configuration with 15 disks: cWmRd@
READ:	628 MiB/sec	= 628 MiB/sec avg
WRITE:	37 MiB/sec	= 37 MiB/sec avg

Now testing RAID1+0 configuration with 12 disks: cWmRd@
READ:	568 MiB/sec	= 568 MiB/sec avg
WRITE:	250 MiB/sec	= 250 MiB/sec avg

Now testing RAID1+0 configuration with 14 disks: cWmRd@
READ:	601 MiB/sec	= 601 MiB/sec avg
WRITE:	249 MiB/sec	= 249 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	400 MiB/sec	= 444 MiB/sec avg
WRITE:	282 MiB/sec	= 300 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	427 MiB/sec	= 444 MiB/sec avg
WRITE:	303 MiB/sec	= 300 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	443 MiB/sec	= 443 MiB/sec avg
WRITE:	306 MiB/sec	= 306 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	412 MiB/sec	= 412 MiB/sec avg
WRITE:	253 MiB/sec	= 253 MiB/sec avg

Now testing RAID0 configuration with 8 disks: cWmRd@
READ:	552 MiB/sec	= 552 MiB/sec avg
WRITE:	412 MiB/sec	= 412 MiB/sec avg

Now testing RAID0 configuration with 9 disks: cWmRd@
READ:	580 MiB/sec	= 580 MiB/sec avg
WRITE:	422 MiB/sec	= 422 MiB/sec avg

Now testing RAID0 configuration with 10 disks: cWmRd@
READ:	587 MiB/sec	= 587 MiB/sec avg
WRITE:	421 MiB/sec	= 421 MiB/sec avg

Now testing RAID0 configuration with 11 disks: cWmRd@
READ:	575 MiB/sec	= 575 MiB/sec avg
WRITE:	421 MiB/sec	= 421 MiB/sec avg

Now testing RAIDZ configuration with 8 disks: cWmRd@
READ:	383 MiB/sec	= 383 MiB/sec avg
WRITE:	300 MiB/sec	= 300 MiB/sec avg

Now testing RAIDZ configuration with 9 disks: cWmRd@
READ:	413 MiB/sec	= 413 MiB/sec avg
WRITE:	317 MiB/sec	= 317 MiB/sec avg

Now testing RAIDZ configuration with 10 disks: cWmRd@
READ:	399 MiB/sec	= 399 MiB/sec avg
WRITE:	317 MiB/sec	= 317 MiB/sec avg

Now testing RAIDZ configuration with 11 disks: cWmRd@
READ:	396 MiB/sec	= 396 MiB/sec avg
WRITE:	335 MiB/sec	= 335 MiB/sec avg

Now testing RAIDZ2 configuration with 8 disks: cWmRd@
READ:	350 MiB/sec	= 350 MiB/sec avg
WRITE:	255 MiB/sec	= 255 MiB/sec avg

Now testing RAIDZ2 configuration with 9 disks: cWmRd@
READ:	388 MiB/sec	= 388 MiB/sec avg
WRITE:	263 MiB/sec	= 263 MiB/sec avg

Now testing RAIDZ2 configuration with 10 disks: cWmRd@
READ:	417 MiB/sec	= 417 MiB/sec avg
WRITE:	273 MiB/sec	= 273 MiB/sec avg

Now testing RAIDZ2 configuration with 11 disks: cWmRd@
READ:	377 MiB/sec	= 377 MiB/sec avg
WRITE:	278 MiB/sec	= 278 MiB/sec avg

Now testing RAID1 configuration with 8 disks: cWmRd@
READ:	570 MiB/sec	= 570 MiB/sec avg
WRITE:	63 MiB/sec	= 63 MiB/sec avg

Now testing RAID1 configuration with 9 disks: cWmRd@
READ:	569 MiB/sec	= 569 MiB/sec avg
WRITE:	62 MiB/sec	= 62 MiB/sec avg

Now testing RAID1 configuration with 10 disks: cWmRd@
READ:	590 MiB/sec	= 590 MiB/sec avg
WRITE:	53 MiB/sec	= 53 MiB/sec avg

Now testing RAID1 configuration with 11 disks: cWmRd@
READ:	584 MiB/sec	= 584 MiB/sec avg
WRITE:	50 MiB/sec	= 50 MiB/sec avg

Now testing RAID1+0 configuration with 8 disks: cWmRd@
READ:	435 MiB/sec	= 435 MiB/sec avg
WRITE:	246 MiB/sec	= 246 MiB/sec avg

Now testing RAID1+0 configuration with 10 disks: cWmRd@
READ:	525 MiB/sec	= 525 MiB/sec avg
WRITE:	248 MiB/sec	= 248 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	391 MiB/sec	= 443 MiB/sec avg
WRITE:	283 MiB/sec	= 306 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	436 MiB/sec	= 443 MiB/sec avg
WRITE:	302 MiB/sec	= 306 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	442 MiB/sec	= 442 MiB/sec avg
WRITE:	304 MiB/sec	= 304 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	414 MiB/sec	= 414 MiB/sec avg
WRITE:	247 MiB/sec	= 247 MiB/sec avg

Now testing RAID0 configuration with 4 disks: cWmRd@
READ:	418 MiB/sec	= 418 MiB/sec avg
WRITE:	334 MiB/sec	= 334 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@
READ:	504 MiB/sec	= 504 MiB/sec avg
WRITE:	376 MiB/sec	= 376 MiB/sec avg

Now testing RAID0 configuration with 6 disks: cWmRd@
READ:	519 MiB/sec	= 519 MiB/sec avg
WRITE:	399 MiB/sec	= 399 MiB/sec avg

Now testing RAID0 configuration with 7 disks: cWmRd@
READ:	530 MiB/sec	= 530 MiB/sec avg
WRITE:	394 MiB/sec	= 394 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@
READ:	329 MiB/sec	= 329 MiB/sec avg
WRITE:	218 MiB/sec	= 218 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@
READ:	401 MiB/sec	= 401 MiB/sec avg
WRITE:	265 MiB/sec	= 265 MiB/sec avg

Now testing RAIDZ configuration with 6 disks: cWmRd@
READ:	379 MiB/sec	= 379 MiB/sec avg
WRITE:	281 MiB/sec	= 281 MiB/sec avg

Now testing RAIDZ configuration with 7 disks: cWmRd@
READ:	354 MiB/sec	= 354 MiB/sec avg
WRITE:	288 MiB/sec	= 288 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cWmRd@
READ:	231 MiB/sec	= 231 MiB/sec avg
WRITE:	154 MiB/sec	= 154 MiB/sec avg

Now testing RAIDZ2 configuration with 5 disks: cWmRd@
READ:	342 MiB/sec	= 342 MiB/sec avg
WRITE:	194 MiB/sec	= 194 MiB/sec avg

Now testing RAIDZ2 configuration with 6 disks: cWmRd@
READ:	385 MiB/sec	= 385 MiB/sec avg
WRITE:	230 MiB/sec	= 230 MiB/sec avg

Now testing RAIDZ2 configuration with 7 disks: cWmRd@
READ:	357 MiB/sec	= 357 MiB/sec avg
WRITE:	228 MiB/sec	= 228 MiB/sec avg

Now testing RAID1 configuration with 4 disks: cWmRd@
READ:	332 MiB/sec	= 332 MiB/sec avg
WRITE:	93 MiB/sec	= 93 MiB/sec avg

Now testing RAID1 configuration with 5 disks: cWmRd@
READ:	397 MiB/sec	= 397 MiB/sec avg
WRITE:	92 MiB/sec	= 92 MiB/sec avg

Now testing RAID1 configuration with 6 disks: cWmRd@
READ:	482 MiB/sec	= 482 MiB/sec avg
WRITE:	85 MiB/sec	= 85 MiB/sec avg

Now testing RAID1 configuration with 7 disks: cWmRd@
READ:	548 MiB/sec	= 548 MiB/sec avg
WRITE:	76 MiB/sec	= 76 MiB/sec avg

Now testing RAID1+0 configuration with 4 disks: cWmRd@
READ:	233 MiB/sec	= 233 MiB/sec avg
WRITE:	185 MiB/sec	= 185 MiB/sec avg

Now testing RAID1+0 configuration with 6 disks: cWmRd@
READ:	354 MiB/sec	= 354 MiB/sec avg
WRITE:	230 MiB/sec	= 230 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	407 MiB/sec	= 329 MiB/sec avg
WRITE:	275 MiB/sec	= 218 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	436 MiB/sec	= 329 MiB/sec avg
WRITE:	300 MiB/sec	= 218 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	442 MiB/sec	= 329 MiB/sec avg
WRITE:	305 MiB/sec	= 218 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	418 MiB/sec	= 385 MiB/sec avg
WRITE:	253 MiB/sec	= 230 MiB/sec avg

Now testing RAID0 configuration with 1 disks: cWmRd@
READ:	118 MiB/sec	= 118 MiB/sec avg
WRITE:	98 MiB/sec	= 98 MiB/sec avg

Now testing RAID0 configuration with 2 disks: cWmRd@
READ:	231 MiB/sec	= 231 MiB/sec avg
WRITE:	191 MiB/sec	= 191 MiB/sec avg

Now testing RAID0 configuration with 3 disks: cWmRd@
READ:	344 MiB/sec	= 344 MiB/sec avg
WRITE:	272 MiB/sec	= 272 MiB/sec avg

Now testing RAIDZ configuration with 2 disks: cWmRd@
READ:	119 MiB/sec	= 119 MiB/sec avg
WRITE:	94 MiB/sec	= 94 MiB/sec avg

Now testing RAIDZ configuration with 3 disks: cWmRd@
READ:	230 MiB/sec	= 230 MiB/sec avg
WRITE:	172 MiB/sec	= 172 MiB/sec avg

Now testing RAIDZ2 configuration with 3 disks: cWmRd@
READ:	119 MiB/sec	= 119 MiB/sec avg
WRITE:	93 MiB/sec	= 93 MiB/sec avg

Now testing RAID1 configuration with 2 disks: cWmRd@
READ:	119 MiB/sec	= 119 MiB/sec avg
WRITE:	103 MiB/sec	= 103 MiB/sec avg

Now testing RAID1 configuration with 3 disks: cWmRd@
READ:	241 MiB/sec	= 241 MiB/sec avg
WRITE:	99 MiB/sec	= 99 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@
READ:	398 MiB/sec	= 329 MiB/sec avg
WRITE:	284 MiB/sec	= 218 MiB/sec avg

Now testing RAIDZ+0 configuration with 12 disks: cWmRd@
READ:	428 MiB/sec	= 329 MiB/sec avg
WRITE:	302 MiB/sec	= 218 MiB/sec avg

Now testing RAIDZ+0 configuration with 16 disks: cWmRd@
READ:	444 MiB/sec	= 329 MiB/sec avg
WRITE:	296 MiB/sec	= 218 MiB/sec avg

Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@
READ:	417 MiB/sec	= 385 MiB/sec avg
WRITE:	248 MiB/sec	= 230 MiB/sec avg

Done
 
Those results look wacky; you should be getting a lot higher speeds with 16 drives.

Here's my 2x 8-drive raidz2 setup using green 5400 rpm drives:
Code:
root@tera2>zpool status tank
  pool: tank
 state: ONLINE
 scan: scrub repaired 0 in 7h51m with 0 errors on Fri Nov  5 02:09:52 2010
config:

        NAME                       STATE     READ WRITE CKSUM
        tank                       ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c0t50014EE204FC04D6d0  ONLINE       0     0     0
            c0t50014EE204FC0557d0  ONLINE       0     0     0
            c0t50014EE204FC6217d0  ONLINE       0     0     0
            c0t50014EE204FC8170d0  ONLINE       0     0     0
            c0t50014EE204FC94CCd0  ONLINE       0     0     0
            c0t50014EE204FC9C47d0  ONLINE       0     0     0
            c0t50014EE204FCB585d0  ONLINE       0     0     0
            c0t50014EE204FCB79Dd0  ONLINE       0     0     0
          raidz2-1                 ONLINE       0     0     0
            c0t50014EE25A2A81F1d0  ONLINE       0     0     0
            c0t50014EE25A519721d0  ONLINE       0     0     0
            c0t50014EE25A51CF8Ed0  ONLINE       0     0     0
            c0t50014EE2AFA6EBD0d0  ONLINE       0     0     0
            c0t50014EE2AFA6F10Ad0  ONLINE       0     0     0
            c0t50014EE2AFA709C8d0  ONLINE       0     0     0
            c0t50014EE2AFA74042d0  ONLINE       0     0     0
            c0t50014EE2AFA74448d0  ONLINE       0     0     0

errors: No known data errors
Code:
root@tera2>dd if=/dev/zero of=/tank/storage/zerofile.000 bs=10M count=3200
3200+0 records in
3200+0 records out
33554432000 bytes (34 GB) copied, 44.9532 seconds, 746 MB/s

Fri Nov 12 12:30:35 EST 2010
~
root@tera2>dd if=/tank/storage/zerofile.000 of=/dev/zero bs=10M
3200+0 records in
3200+0 records out
33554432000 bytes (34 GB) copied, 38.5238 seconds, 871 MB/s

Fri Nov 12 12:31:59 EST 2010
~
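The dd figures above are easy to sanity-check: dd reports a byte count and elapsed seconds, and the MB/s it prints uses decimal megabytes (10^6 bytes). A quick recomputation from the numbers in that output:

```python
# Recompute dd's reported throughput from its own byte count and runtime.
# dd prints decimal megabytes (1 MB = 1,000,000 bytes), not MiB.
nbytes = 33_554_432_000            # 3200 records of 10 MiB each
write_secs, read_secs = 44.9532, 38.5238

write_mb_s = nbytes / write_secs / 1_000_000
read_mb_s = nbytes / read_secs / 1_000_000
print(round(write_mb_s), round(read_mb_s))  # 746 871, matching dd's output
```

Keep in mind that /dev/zero input is trivially compressible, so with compression enabled on the dataset a dd run like this would overstate real throughput.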
 
I have to agree with that statement, Axan. I'm starting to wonder if there are some limitations in my hardware (motherboard and HBAs), and I have to stave off the hunger to buy new hardware. I also read in another post that FreeBSD doesn't support the LSI 2008 chipset yet, so doing an upgrade would mean going back to OpenSolaris/OpenIndiana.

Choices, choices... :)

At least I'm not in any rush to get this thing working.
 
I have to agree with that statement, Axan. I'm starting to wonder if there are some limitations in my hardware (motherboard and HBAs), and I have to stave off the hunger to buy new hardware. I also read in another post that FreeBSD doesn't support the LSI 2008 chipset yet, so doing an upgrade would mean going back to OpenSolaris/OpenIndiana.

Choices, choices... :)

At least I'm not in any rush to get this thing working.

I think FreeBSD 9-CURRENT has a driver for the LSI 2008, but no idea how stable it is yet. I use LSI 2008 HBAs, but I'm running Nexenta.
 
After reading through a lot of FUD (and a lot of good knowledge in between) about this topic, I decided today to get 6 F4EG drives.

Running Nexenta, I'm using a modified zpool binary that creates pools with ashift=12.

This is my raidz2 (4+2):

root@future:/slap# dd if=/dev/zero of=zerofile.000 bs=10M count=3200
3200+0 records in
3200+0 records out
33554432000 bytes (34 GB) copied, 95.2533 seconds, 352 MB/s

root@future:/slap# dd if=zerofile.000 of=/dev/zero bs=10M
3200+0 records in
3200+0 records out
33554432000 bytes (34 GB) copied, 79.967 seconds, 420 MB/s

I know this isn't scientific, but it doesn't look that bad, does it?
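The ashift=12 pool matters for these 4K-sector F4EG drives because each data disk's share of a ZFS record should cover whole physical sectors; otherwise the drive has to emulate 512-byte writes with read-modify-write cycles. A rough alignment check for this 4+2 layout (128 KiB is ZFS's default recordsize; the layout numbers come from the post above):

```python
# Does each data disk's slice of a record align to 4K physical sectors?
recordsize = 128 * 1024   # bytes; ZFS default recordsize
data_disks = 4            # raidz2 "4+2": 4 data disks + 2 parity disks
sector = 4096             # physical sector size implied by ashift=12

per_disk = recordsize // data_disks
print(per_disk, per_disk % sector == 0)  # 32768 True -> no read-modify-write
```

With an odd number of data disks the per-disk slice would not divide evenly into 4K sectors, which is one reason power-of-two data-disk counts are often recommended for RAID-Z on 4K drives.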
 
those results look whacky, you should be getting a lot higher speeds with 16 drives.

here my 2x 8 drive raiz2 setup using green 5400rpm drives

I've now installed Nexenta (default settings, no tuning at all) and did the same test as you. I think the performance issue lies with the hardware I've got.

Code:
root@nexenta:/# zpool status
  pool: pool
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
          raidz2-1  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0

errors: No known data errors


Code:
root@nexenta:/pool# dd if=/dev/zero of=/pool/temp.txt bs=1M count=64K
65536+0 records in
65536+0 records out
68719476736 bytes (69 GB) copied, 204.315 seconds, 336 MB/s
root@nexenta:/pool# dd if=/pool/temp.txt of=/dev/zero bs=1M
65536+0 records in
65536+0 records out
68719476736 bytes (69 GB) copied, 110.252 seconds, 623 MB/s
root@nexenta:/pool#


Edit:
Made a RAID0 with all 16 drives and did a new dd run - somewhat of an improvement over the FreeBSD / Sub.Mesa GUI:

Code:
root@nexenta:/# dd if=/dev/zero of=/pool/temp.txt bs=1M count=64K
65536+0 records in
65536+0 records out
68719476736 bytes (69 GB) copied, 118.898 seconds, 578 MB/s
root@nexenta:/# dd if=/pool/temp.txt of=/dev/zero bs=1M
65536+0 records in
65536+0 records out
68719476736 bytes (69 GB) copied, 87.1462 seconds, 789 MB/s
root@nexenta:/#
 
Definitely a bottleneck in the hardware somewhere. What mobo are you using?
 
OK, that makes no sense: a 64-bit 133 MHz PCI-X slot gives you 1064 MB/s of bandwidth, which is plenty for 8 HDDs even if you assume about 80% efficiency.
I wonder if it's an issue with the driver for the Marvell controller. Can you try a Linux mdadm array just to see if the numbers change?
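The 1064 MB/s figure is just bus width times clock rate; the usable estimate applies the rough 80% efficiency assumption from this post (not a spec value):

```python
# Theoretical PCI-X bandwidth for a 64-bit slot at 133 MHz.
width_bytes = 64 // 8           # 8 bytes transferred per clock
clock_mhz = 133
peak_mb_s = width_bytes * clock_mhz
usable_mb_s = peak_mb_s * 0.8   # assumed bus/protocol efficiency
print(peak_mb_s, round(usable_mb_s))  # 1064 851
```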
 
nm, I'm an idiot - I forgot that PCI-X, unlike PCIe, shares bandwidth among slots, so your 2x PCI-X slots get 1064 MB/s total; with bus/protocol overhead, about 800 MB/s is the max your array config can do.

You could ditch one of those PCI-X HBAs for a PCIe one based on something like the LSI 1068E.
 
Ya, you're right, there is something wrong. Either it's a problem with the HBA driver, or one of those HDDs is defective and is dragging down the array.

Testing the setup on another OS (Linux/Windows) should help confirm or eliminate a driver problem.

MrLie, did you test/bench each HDD individually?
 
NM, I'm an idiot - I forgot that PCI-X, unlike PCIe, shares bandwidth among slots. So your 2x PCI-X slots get 1064 MB/s total; with bus/protocol overhead, ~800 MB/s is about the max your array config can do.

You could ditch one of those PCI-X HBAs for a PCIe one based on something like the LSI 1068E.

That's what I was afraid of - that PCI-X shares the total bandwidth like regular old PCI - but I couldn't find a definitive answer. Thanks for the "bad" news :p
I have to say I'm tempted to ditch the motherboard for something better and use an LSI 2008-based HBA instead of something based on the LSI 1068E, even if it means I'd need to rethink which software to use (OpenSolaris/OpenIndiana/FreeBSD/Nexenta).

Christmas is coming up soon, and I think I've been a good boy this year...:cool:
 
Ya, you're right, there is something wrong. Either it's a problem with the HBA driver, or one of those HDDs is defective and is affecting the array.

Testing the setup on another OS (Linux/Windows) should help confirm/eliminate a driver problem.

MrLie, did you test/bench each HDD individually?

Yes, I tested each Hitachi by itself, and all performed exactly the same (120-125 MB/s max), so I really doubt that any of the drives are part of the issue.
Also, so far I've been using OpenSolaris 2009.06, Nexenta and FreeBSD - pretty much all with the same result. I think it's a hardware issue.
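Those per-drive numbers make the bus the prime suspect. A rough tally, using only the figures already posted in this thread:

```shell
drives=16; per_drive=120            # MB/s each drive managed on its own
raw=$(( drives * per_drive ))       # 1920 MB/s the disks could deliver
observed=789                        # MB/s measured on the 16-drive raid0 read
echo "disks can do ${raw} MB/s, pool saw ${observed} MB/s"
# 789 is far below 1920 but right around the shared PCI-X ceiling,
# which points at the bus rather than the drives.
```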
 
OpenSolaris, Nexenta and FreeBSD are based on a similar architecture, so the drivers would be very much alike.
I would try it on Linux or Windows just to check.

If you do decide to change the mobo/HBAs to LSI 2008, consider the Supermicro X8DTH-6F - it has a built-in LSI 2008 SAS controller and plenty of room to grow. I'm using it in my build here.

As for an OS for the LSI 2008, I would go with NexentaCore for the following reasons:
OpenSolaris = dead
OpenIndiana = not mature enough yet
FreeBSD = only a beta driver for the LSI 2008 in 9-CURRENT
 
I basically have two options:

1: Stay with my current setup
2: Figure out which and what to buy to get it done properly (without spending money like a drunk sailor on shore leave :))

Swapping between OSes won't be that big of an issue, thanks to zpool import/export. Choosing the "right" hardware is tougher. I'll go sleep on it.
I have been looking at the motherboard you got, Axan, but my first thought was "it's overkill". Then again, it does come with pretty much everything, so it looks to be good value for money despite the price.

Thanks for the help though - I really appreciate it.
 
If you do decide to change the mobo/HBAs to LSI 2008, consider the Supermicro X8DTH-6F - it has a built-in LSI 2008 SAS controller and plenty of room to grow. I'm using it in my build here.

As for an OS for the LSI 2008, I would go with NexentaCore for the following reasons:
OpenSolaris = dead
OpenIndiana = not mature enough yet
FreeBSD = only a beta driver for the LSI 2008 in 9-CURRENT

From a cost standpoint, the Supermicro X8SI6 also integrates the SAS2008 and doesn't require stepping up to that expensive dual-socket architecture. Drop in an i3-530 and 4-8GB of RAM, done. Or spend $100 more for a Xeon X3440 (if I recall, my X3440 was pegged at 50% in iostat during a zpool resilver, so ZFS likes CPU apparently). I was running NexentaCore + Napp-It GUI flawlessly on an X8SI6 as a test system for a while before breaking it down. It uses the mpt2sas driver if I recall. Everything "just worked"; if I remember right, I was pushing something like 1400 MB/s sequential reads for raid0 in the bonnie bench in Napp-It, with 20 x Hitachi 2TB drives connected to an HP Expander dual-linked to the SAS2008.
 
The X8SI6 is an option, but with only one x8 PCIe slot the expansion options are limited. It would work pretty well for a single 20-24 drive Norco, but if you wanted to daisy-chain Norco cases you would be out of PCIe slots for HBAs.
 
I have some serious problems with my setup. Copying to the drives works great (did 6 TB - no errors in zpool status), but whenever I scrub I get errors - on ALL drives - and lots of them. Tried changing the link speed to 150 and still the same problem. Never seen this before... an AOC-USAS-L8I and F4EG compatibility problem?

This is scary...
 
The X8SI6 is an option, but with only one x8 PCIe slot the expansion options are limited. It would work pretty well for a single 20-24 drive Norco, but if you wanted to daisy-chain Norco cases you would be out of PCIe slots for HBAs.

What does daisy-chaining have to do with PCIe slots? There are any number of ways to daisy-chain multiple additional chassis, even if there were no PCIe slots on that board - an SFF-8087 -> SFF-8088 PCI bracket, for one. The whole reason you have only one PCIe x4 and one PCIe x8 is that you've already got the host controller onboard. Granted, that board isn't for everyone. Consider that the SAS2008 controller costs $250 when you buy it standalone on a 9211-8i card, so for essentially $40 extra you're getting $200 worth of motherboard (at least based on what the featureset goes for on other Supermicro boards without the SAS2008).

Or you can drop a grand on an LGA1366 solution. That works too, I guess.
 
Or you can drop a grand on an LGA1366 solution. That works too, I guess.

Would the Supermicro X8ST3-F work as a compromise, assuming the onboard 1068E controller works with the HP Expander? It's not SAS2, but would you actually get to use that extra bandwidth as long as you only use normal SATA2 drives?
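Rough math on that question (all numbers are assumptions: 300 MB/s per SAS1 lane, ~120 MB/s per mechanical drive):

```shell
lanes=4; per_lane=300; per_drive=120   # SAS1 wide link, SATA2-era disks
link=$(( lanes * per_lane ))           # 1200 MB/s across the 4-lane link
echo "wide link: ${link} MB/s, drives to saturate it: $(( link / per_drive ))"
# Around ten spinning drives fill a SAS1 wide link, so SAS2's doubled
# per-lane rate mostly starts to matter as the drive count behind one
# expander grows past that.
```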
 
MrLie, those are indeed quite disappointing results. Not sure if you already gave your system specs, but... are you using that PCI-X controller with the expander? This could explain some of the disappointing results. It could also be the immature Marvell driver giving you lower numbers. If all OSes give you low performance, then it's not unreasonable to blame the hardware.

In the future I will add an aggregated I/O test, which sends I/O to all disks at the same time, outside the control of any filesystem; this should fill the bandwidth to the max and make it easy to spot any bandwidth issues or hardware bottlenecks.
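A minimal sketch of such an aggregated test, assuming raw device access (here /dev/zero stands in for the real /dev/rdsk/... paths so the sketch runs anywhere):

```shell
# Stream from every raw device at once, outside any filesystem; watch
# iostat while this runs - the combined MB/s exposes the bus ceiling.
disks="/dev/zero /dev/zero /dev/zero"   # substitute real raw devices
for d in $disks; do
    dd if="$d" of=/dev/null bs=1M count=64 2>/dev/null &
done
wait    # wait for all the parallel readers to finish
echo "aggregate read test done"
```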

I generally advise against expanders/port multipliers and would suggest multiple PCI-express HBAs instead, which may end up being as cheap as using expanders.

If you get the SAS2008 (USAS2) then you can't use it on FreeBSD for a while, though it's possible the 9-CURRENT driver gets perfected and then downstreamed (MFCed) to FreeBSD 8.x in a couple of months; but that's not something you can really count on. It works excellently in OpenSolaris and should be supported by FreeBSD at some point, but it would mean running an OpenSolaris-derivative OS for a while. Nothing wrong with that choice, but being able to switch between all the different ZFS-capable OSes is also worth something. The USAS1 (USAS-L8i, with the LSI 1068E chip) has that benefit.

And yes, that ST3-F board looks great. The only 'but' would be no ECC support. It's loaded with features and it's not that expensive a board at all! The onboard controller should work great; not sure about expanders, but wouldn't you want to do it 'right' this time? Perhaps you can sell them (expanders + PCI-X controller) on eBay or something, and make the purchase of one additional 1068E HBA (on top of the onboard one) less expensive? They should sell for like 100-120 dollars; the Intel SASUC8i is a bit more pricey.
 
Would the Supermicro X8ST3-F work as a compromise, assuming the onboard 1068E controller works with the HP Expander? It's not SAS2, but would you actually get to use that extra bandwidth as long as you only use normal SATA2 drives?

There are also the X8DT3-F and X8DT6-F, depending on whether you want the 1068E or the 2008 SAS2 controller. They are around $450, so not the cheapest option, but they have the very nice Intel 5520 chipset and plenty of room for expansion (if you later want to add a 2nd CPU or lots of RAM).

The X8DT6 is the single-IOH version of the X8DTH.
 