ZFSGURU-benchmark, version 1
Test size: 64.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 5 seconds
Number of disks: 8 disks
disk 1: gpt/disk0
disk 2: gpt/disk1
disk 3: gpt/disk2
disk 4: gpt/disk3
disk 5: gpt/disk4
disk 6: gpt/disk5
disk 7: gpt/disk6
disk 8: gpt/disk7
* Test Settings: TS64;
* Tuning: none
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service
Secure Erase. Now testing RAID0 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ: 602 MiB/sec 604 MiB/sec 601 MiB/sec = 602 MiB/sec avg
WRITE: 459 MiB/sec 459 MiB/sec 362 MiB/sec = 427 MiB/sec avg
raidtest.read: 7321 7321 7401 = 7347 IOps ( ~473 MiB/sec )
raidtest.write: 5865 6322 5421 = 5869 IOps ( ~378 MiB/sec )
raidtest.mixed: 5708 6553 5932 = 6064 IOps ( ~390 MiB/sec )
Secure Erase. Now testing RAIDZ configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ: 354 MiB/sec 363 MiB/sec 400 MiB/sec = 373 MiB/sec avg
WRITE: 151 MiB/sec 154 MiB/sec 154 MiB/sec = 153 MiB/sec avg
raidtest.read: 7129 7071 7330 = 7176 IOps ( ~462 MiB/sec )
raidtest.write: 5679 5597 5716 = 5664 IOps ( ~365 MiB/sec )
raidtest.mixed: 5764 5929 6522 = 6071 IOps ( ~391 MiB/sec )
Secure Erase. Now testing RAIDZ2 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ: 378 MiB/sec 367 MiB/sec 329 MiB/sec = 358 MiB/sec avg
WRITE: 228 MiB/sec 224 MiB/sec 237 MiB/sec = 229 MiB/sec avg
raidtest.read: 7258 7171 7188 = 7205 IOps ( ~464 MiB/sec )
raidtest.write: 5908 5866 5791 = 5855 IOps ( ~377 MiB/sec )
raidtest.mixed: 6066 6511 6451 = 6342 IOps ( ~408 MiB/sec )
PHP Parse error: syntax error, unexpected T_VARIABLE in /usr/local/www/zfsguru/benchmark.php on line 602
I think it's called passthrough or something? You don't want to use the hardware RAID features when you use ZFS; let ZFS handle it all.
Sorry, I'm not familiar with this tuning stuff. Any guide on tuning ZFS?
Also, is it normal for the SMART data query to not work if I'm running on an Adaptec RAID controller?
> I definitely am interested in the ability to make a USB stick version, or some way to get the OS onto the USB stick so the drives can be used for data.
You are using the .iso; that image is meant for CD media only, not for HDDs!
The dd method you used is for the binary image (ending in .img). I'm considering no longer offering that download and instead integrating a function in the web-GUI that creates a USB stick. That should solve some issues with unbootable USB sticks, and it means I only need to release two things:
1) LiveCD .iso with system image
2) web interface tarball for web-update
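For readers who still want the dd method, the copy works as sketched below. The real target would be the raw USB device node (something like /dev/da0 on FreeBSD; check dmesg after plugging the stick in — the name here is an assumption). To keep the sketch safe to run, it copies between scratch files instead of touching a device:

```shell
# Real-world invocation (DESTRUCTIVE, device node assumed):
#   dd if=ZFSguru.img of=/dev/da0 bs=1m
# Harmless demonstration of the same copy using scratch files:
dd if=/dev/zero of=demo.img bs=1024 count=64 2>/dev/null   # stand-in for the downloaded .img
dd if=demo.img of=demo-stick.img bs=1024 2>/dev/null       # stand-in for the USB device
cmp demo.img demo-stick.img && echo "image copied verbatim"
```

A byte-for-byte copy like this is exactly why an .iso fails here: the CD image lacks the partition/boot layout a raw disk image carries.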
Yeah, plus pull requests.
You've made some great improvements to the benchmark scripts in 0.17.
I hacked the 0.16 scripts to only test RAID-Z/RAID-Z2 in disk configurations of 4-8, as I didn't care about any other configurations.
This sped up the run and gave me data on only the configs I was interested in.
Which leads me to another point: do you have any interest in pushing the scripts to a source repo (GitHub, or Mercurial on Google Code) so that some forking can be done by those interested? Alternatively, do you have any objection to someone else starting a repo from the existing code?
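The restriction described above amounts to shrinking the test matrix. The actual benchmark scripts are PHP, so this is only an illustrative sketch of the matrix (names invented here, not from the real scripts):

```shell
# Enumerate only the configurations of interest:
# RAID-Z and RAID-Z2 across 4-8 disks (10 runs instead of the full sweep).
for level in raidz raidz2; do
    for ndisks in 4 5 6 7 8; do
        echo "would benchmark: $level with $ndisks disks"
    done
done
```

Cutting the matrix this way trades completeness for runtime, which is exactly the point when you already know your target pool layout.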
pool: RaidZ-8TB
id: 2973819487359521379
state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
see: http://www.sun.com/msg/ZFS-8000-EY
config:
RaidZ-8TB UNAVAIL newer version
ad4 ONLINE
ad6 ONLINE
ad8 ONLINE
ad10 ONLINE
ad12 ONLINE
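Two things are going on in that status output: the "last accessed by another system" guard (cleared with a forced import) and the "newer version" state, which means the running ZFS is older than the pool's on-disk version and cannot import it at all. A hedged sketch, using the pool name from the listing above; the guard keeps it harmless on hosts without ZFS:

```shell
# Force-import a pool last accessed by another system.
# This only helps with the access guard -- a "newer version" pool still
# requires booting an environment with an equal or newer ZFS pool version.
if command -v zpool >/dev/null 2>&1; then
    zpool import -f RaidZ-8TB || echo "import failed: pool not present on this host"
else
    echo "zpool not available on this host"
fi
```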
ad10
512 # sectorsize
2000398934016 # mediasize in bytes (1.8T)
3907029168 # mediasize in sectors
0 # stripesize
0 # stripeoffset
3876021 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
WD-WCAVY5176674 # Disk ident.
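That listing is typical `diskinfo -v` output on FreeBSD. A quick sanity check on its internal consistency: sector count times sector size should equal the media size in bytes, and it does here:

```shell
# Values copied from the diskinfo output above.
sectorsize=512
sectors=3907029168
bytes=2000398934016
computed=$((sectors * sectorsize))
[ "$computed" -eq "$bytes" ] && echo "consistent: $computed bytes (~1.8 TiB)"
```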
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Number of disks: 5 disks
disk 1: gpt/1
disk 2: gpt/2
disk 3: gpt/3
disk 4: gpt/4
disk 5: gpt/5
* Test Settings: TS32;
* Tuning: none
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service
Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 230 MiB/sec 241 MiB/sec 250 MiB/sec = 241 MiB/sec avg
WRITE: 248 MiB/sec 247 MiB/sec 247 MiB/sec = 248 MiB/sec avg
Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 224 MiB/sec 215 MiB/sec 214 MiB/sec = 218 MiB/sec avg
WRITE: 277 MiB/sec 280 MiB/sec 278 MiB/sec = 278 MiB/sec avg
Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 255 MiB/sec 247 MiB/sec 247 MiB/sec = 250 MiB/sec avg
WRITE: 187 MiB/sec 187 MiB/sec 185 MiB/sec = 186 MiB/sec avg
Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 232 MiB/sec 221 MiB/sec 227 MiB/sec = 227 MiB/sec avg
WRITE: 218 MiB/sec 220 MiB/sec 221 MiB/sec = 220 MiB/sec avg
Now testing RAIDZ2 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 188 MiB/sec 190 MiB/sec 189 MiB/sec = 189 MiB/sec avg
WRITE: 119 MiB/sec 121 MiB/sec 120 MiB/sec = 120 MiB/sec avg
Now testing RAIDZ2 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 232 MiB/sec 235 MiB/sec 242 MiB/sec = 236 MiB/sec avg
WRITE: 153 MiB/sec 152 MiB/sec 150 MiB/sec = 152 MiB/sec avg
Now testing RAID1 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 91 MiB/sec 90 MiB/sec 91 MiB/sec = 91 MiB/sec avg
WRITE: 87 MiB/sec 87 MiB/sec 87 MiB/sec = 87 MiB/sec avg
Now testing RAID1 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 89 MiB/sec 89 MiB/sec 84 MiB/sec = 87 MiB/sec avg
WRITE: 86 MiB/sec 86 MiB/sec 86 MiB/sec = 86 MiB/sec avg
Now testing RAID1+0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 116 MiB/sec 123 MiB/sec 119 MiB/sec = 119 MiB/sec avg
WRITE: 152 MiB/sec 152 MiB/sec 152 MiB/sec = 152 MiB/sec avg
Now testing RAID0 configuration with 1 disks: cWmRd@cWmRd@cWmRd@
READ: 105 MiB/sec 104 MiB/sec 104 MiB/sec = 104 MiB/sec avg
WRITE: 91 MiB/sec 91 MiB/sec 91 MiB/sec = 91 MiB/sec avg
Now testing RAID0 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ: 188 MiB/sec 188 MiB/sec 187 MiB/sec = 188 MiB/sec avg
WRITE: 158 MiB/sec 157 MiB/sec 156 MiB/sec = 157 MiB/sec avg
Now testing RAID0 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ: 235 MiB/sec 227 MiB/sec 234 MiB/sec = 232 MiB/sec avg
WRITE: 204 MiB/sec 204 MiB/sec 204 MiB/sec = 204 MiB/sec avg
Now testing RAIDZ configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ: 105 MiB/sec 105 MiB/sec 105 MiB/sec = 105 MiB/sec avg
WRITE: 89 MiB/sec 89 MiB/sec 89 MiB/sec = 89 MiB/sec avg
Now testing RAIDZ configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ: 183 MiB/sec 178 MiB/sec 175 MiB/sec = 179 MiB/sec avg
WRITE: 146 MiB/sec 147 MiB/sec 147 MiB/sec = 147 MiB/sec avg
Now testing RAIDZ2 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ: 102 MiB/sec 102 MiB/sec 102 MiB/sec = 102 MiB/sec avg
WRITE: 85 MiB/sec 85 MiB/sec 84 MiB/sec = 85 MiB/sec avg
Now testing RAID1 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ: 96 MiB/sec 96 MiB/sec 96 MiB/sec = 96 MiB/sec avg
WRITE: 90 MiB/sec 90 MiB/sec 90 MiB/sec = 90 MiB/sec avg
Now testing RAID1 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ: 92 MiB/sec 91 MiB/sec 89 MiB/sec = 90 MiB/sec avg
WRITE: 87 MiB/sec 87 MiB/sec 87 MiB/sec = 87 MiB/sec avg
Done
> I definitely am interested in the ability to make a USB stick version or some way to get the OS onto the USB so the drives can be used for data.
You can do that already by installing ZFS-on-root to your USB stick.
> Sorry not familiar with this tuning stuff. Any guide on tuning zfs?
Right now it should be as simple as clicking the "Reset to recommended" button on the System->Tuning page. This does not work from the LiveCD; only after you have installed to a pool can you perform tuning, since the LiveCD forgets everything after a reboot.
> Also, is it normal for the smart data query to not work if I'm running on a Adaptec raid controller?
Yes, not all controllers pass SMART requests through to the disks. Some controllers have a utility to check SMART with proprietary tools (like Areca) but do not actually pass direct SMART queries from your OS through to the disk; such a controller essentially lacks true SMART passthrough support. The SuperMicro USAS-L8i controller and other HBAs do SMART just fine in my experience.
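The passthrough point can be tested from the shell with smartmontools. The device nodes below are illustrative, not taken from this thread; some RAID controllers need an explicit controller type via smartctl's `-d` option:

```shell
# Direct query through a plain HBA (hypothetical device node):
#   smartctl -a /dev/da0
# Controllers with vendor-specific passthrough need a -d type, e.g.:
#   smartctl -a -d areca,1 /dev/arcmsr0
#   smartctl -a -d 3ware,0 /dev/twe0
# Probe for the tool before relying on it, so this sketch runs anywhere:
if command -v smartctl >/dev/null 2>&1; then
    smartctl --version | head -n 1
else
    echo "smartctl not installed; SMART queries unavailable"
fi
```

If even the `-d` variants return nothing useful, the controller simply does not forward SMART commands and only its proprietary tooling can read drive health.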
panic: kmem_malloc(65536): kmem_map too small: 7512248320 total allocated
cpuid = 0
uptime: 14h34m38s
Cannot dump. Device not defined or unavailable.
Automatic reboot in 15 seconds - press a key on the console to abort
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03
* Test Settings: TS32;
* Tuning: KMEM=7g; AMIN=5g; AMAX=6g;
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service
Now testing RAID0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 660 MiB/sec 664 MiB/sec 665 MiB/sec = 663 MiB/sec avg
WRITE: 471 MiB/sec 461 MiB/sec 466 MiB/sec = 466 MiB/sec avg
Now testing RAIDZ configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 432 MiB/sec 432 MiB/sec 430 MiB/sec = 431 MiB/sec avg
WRITE: 380 MiB/sec 382 MiB/sec 382 MiB/sec = 381 MiB/sec avg
Now testing RAIDZ2 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 410 MiB/sec 411 MiB/sec 413 MiB/sec = 411 MiB/sec avg
WRITE: 338 MiB/sec 329 MiB/sec 338 MiB/sec = 335 MiB/sec avg
Now testing RAID1 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 689 MiB/sec 684 MiB/sec 687 MiB/sec = 687 MiB/sec avg
WRITE: 36 MiB/sec 36 MiB/sec 36 MiB/sec = 36 MiB/sec avg
Now testing RAID1+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 639 MiB/sec 645 MiB/sec 634 MiB/sec = 639 MiB/sec avg
WRITE: 266 MiB/sec 258 MiB/sec 260 MiB/sec = 261 MiB/sec avg
Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 444 MiB/sec 422 MiB/sec 453 MiB/sec = 440 MiB/sec avg
WRITE: 313 MiB/sec 306 MiB/sec 305 MiB/sec = 308 MiB/sec avg
Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 480 MiB/sec 473 MiB/sec 469 MiB/sec = 474 MiB/sec avg
WRITE: 329 MiB/sec 327 MiB/sec 326 MiB/sec = 327 MiB/sec avg
Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 488 MiB/sec 494 MiB/sec 488 MiB/sec = 490 MiB/sec avg
WRITE: 335 MiB/sec 336 MiB/sec 331 MiB/sec = 334 MiB/sec avg
Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 446 MiB/sec 459 MiB/sec 450 MiB/sec = 452 MiB/sec avg
WRITE: 271 MiB/sec 276 MiB/sec 276 MiB/sec = 274 MiB/sec avg
Now testing RAID0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 657 MiB/sec 650 MiB/sec 657 MiB/sec = 655 MiB/sec avg
WRITE: 463 MiB/sec 465 MiB/sec 473 MiB/sec = 467 MiB/sec avg
Now testing RAID0 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ: 670 MiB/sec 669 MiB/sec 660 MiB/sec = 666 MiB/sec avg
WRITE: 446 MiB/sec 462 MiB/sec 461 MiB/sec = 456 MiB/sec avg
Now testing RAID0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 679 MiB/sec 677 MiB/sec 677 MiB/sec = 678 MiB/sec avg
WRITE: 466 MiB/sec 471 MiB/sec 467 MiB/sec = 468 MiB/sec avg
Now testing RAID0 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ: 672 MiB/sec 678 MiB/sec 675 MiB/sec = 675 MiB/sec avg
WRITE: 469 MiB/sec 467 MiB/sec 466 MiB/sec = 467 MiB/sec avg
Now testing RAIDZ configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 438 MiB/sec 437 MiB/sec 445 MiB/sec = 440 MiB/sec avg
WRITE: 371 MiB/sec 354 MiB/sec 373 MiB/sec = 366 MiB/sec avg
Now testing RAIDZ configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ: 429 MiB/sec 427 MiB/sec 432 MiB/sec = 429 MiB/sec avg
WRITE: 378 MiB/sec 374 MiB/sec 379 MiB/sec = 377 MiB/sec avg
Now testing RAIDZ configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 438 MiB/sec 438 MiB/sec 433 MiB/sec = 436 MiB/sec avg
WRITE: 384 MiB/sec 367 MiB/sec 375 MiB/sec = 375 MiB/sec avg
Now testing RAIDZ configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ: 430 MiB/sec 434 MiB/sec 425 MiB/sec = 430 MiB/sec avg
WRITE: 373 MiB/sec 366 MiB/sec 374 MiB/sec = 371 MiB/sec avg
Now testing RAIDZ2 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 400 MiB/sec 401 MiB/sec 399 MiB/sec = 400 MiB/sec avg
WRITE: 324 MiB/sec 324 MiB/sec 317 MiB/sec = 322 MiB/sec avg
Now testing RAIDZ2 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ: 407 MiB/sec 408 MiB/sec 407 MiB/sec = 407 MiB/sec avg
WRITE: 314 MiB/sec 316 MiB/sec 312 MiB/sec = 314 MiB/sec avg
Now testing RAIDZ2 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 410 MiB/sec 410 MiB/sec 405 MiB/sec = 408 MiB/sec avg
WRITE: 321 MiB/sec 329 MiB/sec 333 MiB/sec = 328 MiB/sec avg
Now testing RAIDZ2 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ: 405 MiB/sec 413 MiB/sec 412 MiB/sec = 410 MiB/sec avg
WRITE: 327 MiB/sec 324 MiB/sec 326 MiB/sec = 326 MiB/sec avg
Now testing RAID1 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 646 MiB/sec 654 MiB/sec 652 MiB/sec = 651 MiB/sec avg
WRITE: 45 MiB/sec 46 MiB/sec 46 MiB/sec = 45 MiB/sec avg
Now testing RAID1 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ: 658 MiB/sec 659 MiB/sec 651 MiB/sec = 656 MiB/sec avg
WRITE: 42 MiB/sec 42 MiB/sec 43 MiB/sec = 42 MiB/sec avg
Now testing RAID1 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 669 MiB/sec 667 MiB/sec 656 MiB/sec = 664 MiB/sec avg
WRITE: 40 MiB/sec 40 MiB/sec 40 MiB/sec = 40 MiB/sec avg
Now testing RAID1 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ: 666 MiB/sec 670 MiB/sec 661 MiB/sec = 666 MiB/sec avg
WRITE: 38 MiB/sec 38 MiB/sec 38 MiB/sec = 38 MiB/sec avg
Now testing RAID1+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 584 MiB/sec 563 MiB/sec 581 MiB/sec = 576 MiB/sec avg
WRITE: 250 MiB/sec 257 MiB/sec 251 MiB/sec = 253 MiB/sec avg
Now testing RAID1+0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 644 MiB/sec 654 MiB/sec 653 MiB/sec = 650 MiB/sec avg
WRITE: 256 MiB/sec 262 MiB/sec 247 MiB/sec = 255 MiB/sec avg
Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 435 MiB/sec 428 MiB/sec 451 MiB/sec = 438 MiB/sec avg
WRITE: 311 MiB/sec 302 MiB/sec 311 MiB/sec = 308 MiB/sec avg
Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 480 MiB/sec 475 MiB/sec 475 MiB/sec = 477 MiB/sec avg
WRITE: 331 MiB/sec 327 MiB/sec 326 MiB/sec = 328 MiB/sec avg
Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 492 MiB/sec 488 MiB/sec 489 MiB/sec = 490 MiB/sec avg
WRITE: 330 MiB/sec 323 MiB/sec 318 MiB/sec = 324 MiB/sec avg
Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 452 MiB/sec 453 MiB/sec 447 MiB/sec = 451 MiB/sec avg
WRITE: 273 MiB/sec 277 MiB/sec 280 MiB/sec = 276 MiB/sec avg
Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 602 MiB/sec 604 MiB/sec 607 MiB/sec = 604 MiB/sec avg
WRITE: 458 MiB/sec 458 MiB/sec 458 MiB/sec = 458 MiB/sec avg
Now testing RAID0 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ: 600 MiB/sec 585 MiB/sec 582 MiB/sec = 589 MiB/sec avg
WRITE: 462 MiB/sec 459 MiB/sec 463 MiB/sec = 461 MiB/sec avg
Now testing RAID0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 653 MiB/sec 655 MiB/sec 651 MiB/sec = 653 MiB/sec avg
WRITE: 465 MiB/sec 463 MiB/sec 468 MiB/sec = 465 MiB/sec avg
Now testing RAID0 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ: 659 MiB/sec 658 MiB/sec 654 MiB/sec = 657 MiB/sec avg
WRITE: 464 MiB/sec 460 MiB/sec 468 MiB/sec = 464 MiB/sec avg
Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 403 MiB/sec 408 MiB/sec 398 MiB/sec = 403 MiB/sec avg
WRITE: 330 MiB/sec 329 MiB/sec 326 MiB/sec = 328 MiB/sec avg
Now testing RAIDZ configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ: 440 MiB/sec 436 MiB/sec 453 MiB/sec = 443 MiB/sec avg
WRITE: 353 MiB/sec 359 MiB/sec 354 MiB/sec = 356 MiB/sec avg
Now testing RAIDZ configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 430 MiB/sec 429 MiB/sec 425 MiB/sec = 428 MiB/sec avg
WRITE: 351 MiB/sec 341 MiB/sec 351 MiB/sec = 348 MiB/sec avg
Now testing RAIDZ configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ: 447 MiB/sec 442 MiB/sec 438 MiB/sec = 442 MiB/sec avg
WRITE: 367 MiB/sec 367 MiB/sec 368 MiB/sec = 367 MiB/sec avg
Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 380 MiB/sec 372 MiB/sec 374 MiB/sec = 376 MiB/sec avg
WRITE: 286 MiB/sec 288 MiB/sec 284 MiB/sec = 286 MiB/sec avg
Now testing RAIDZ2 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ: 418 MiB/sec 422 MiB/sec 419 MiB/sec = 420 MiB/sec avg
WRITE: 291 MiB/sec 296 MiB/sec 295 MiB/sec = 294 MiB/sec avg
Now testing RAIDZ2 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 454 MiB/sec 443 MiB/sec 434 MiB/sec = 444 MiB/sec avg
WRITE: 305 MiB/sec 303 MiB/sec 294 MiB/sec = 301 MiB/sec avg
Now testing RAIDZ2 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ: 410 MiB/sec 407 MiB/sec 410 MiB/sec = 409 MiB/sec avg
WRITE: 314 MiB/sec 312 MiB/sec 311 MiB/sec = 312 MiB/sec avg
Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 577 MiB/sec 589 MiB/sec 586 MiB/sec = 584 MiB/sec avg
WRITE: 69 MiB/sec 65 MiB/sec 64 MiB/sec = 66 MiB/sec avg
Now testing RAID1 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ: 609 MiB/sec 607 MiB/sec 604 MiB/sec = 607 MiB/sec avg
WRITE: 58 MiB/sec 58 MiB/sec 62 MiB/sec = 59 MiB/sec avg
Now testing RAID1 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 625 MiB/sec 618 MiB/sec 631 MiB/sec = 625 MiB/sec avg
WRITE: 57 MiB/sec 57 MiB/sec 54 MiB/sec = 56 MiB/sec avg
Now testing RAID1 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ: 636 MiB/sec 625 MiB/sec 635 MiB/sec = 632 MiB/sec avg
WRITE: 48 MiB/sec 50 MiB/sec 48 MiB/sec = 49 MiB/sec avg
Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 449 MiB/sec 441 MiB/sec 447 MiB/sec = 446 MiB/sec avg
WRITE: 256 MiB/sec 246 MiB/sec 250 MiB/sec = 251 MiB/sec avg
Now testing RAID1+0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 529 MiB/sec 557 MiB/sec 549 MiB/sec = 545 MiB/sec avg
WRITE: 258 MiB/sec 261 MiB/sec 263 MiB/sec = 261 MiB/sec avg
Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 435 MiB/sec 435 MiB/sec 439 MiB/sec = 436 MiB/sec avg
WRITE: 312 MiB/sec 305 MiB/sec 305 MiB/sec = 307 MiB/sec avg
Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 471 MiB/sec 473 MiB/sec 477 MiB/sec = 474 MiB/sec avg
WRITE: 326 MiB/sec 322 MiB/sec 326 MiB/sec = 325 MiB/sec avg
Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 489 MiB/sec 488 MiB/sec 489 MiB/sec = 489 MiB/sec avg
WRITE: 335 MiB/sec 335 MiB/sec 330 MiB/sec = 333 MiB/sec avg
Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 443 MiB/sec 457 MiB/sec 456 MiB/sec = 452 MiB/sec avg
WRITE: 270 MiB/sec 273 MiB/sec 277 MiB/sec = 274 MiB/sec avg
Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 421 MiB/sec 417 MiB/sec 437 MiB/sec = 425 MiB/sec avg
WRITE: 358 MiB/sec 358 MiB/sec 354 MiB/sec = 357 MiB/sec avg
Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 495 MiB/sec 484 MiB/sec 479 MiB/sec = 486 MiB/sec avg
WRITE: 395 MiB/sec 403 MiB/sec 405 MiB/sec = 401 MiB/sec avg
Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ: 536 MiB/sec 538 MiB/sec 523 MiB/sec = 532 MiB/sec avg
WRITE: 438 MiB/sec 432 MiB/sec 434 MiB/sec = 435 MiB/sec avg
Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ: 552 MiB/sec 547 MiB/sec 557 MiB/sec = 552 MiB/sec avg
WRITE: 443 MiB/sec 448 MiB/sec 446 MiB/sec = 446 MiB/sec avg
Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 325 MiB/sec 337 MiB/sec 328 MiB/sec = 330 MiB/sec avg
WRITE: 237 MiB/sec 238 MiB/sec 232 MiB/sec = 236 MiB/sec avg
Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 419 MiB/sec 415 MiB/sec 436 MiB/sec = 424 MiB/sec avg
WRITE: 291 MiB/sec 291 MiB/sec 272 MiB/sec = 285 MiB/sec avg
Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ: 416 MiB/sec 414 MiB/sec 416 MiB/sec = 415 MiB/sec avg
WRITE: 308 MiB/sec 308 MiB/sec 307 MiB/sec = 308 MiB/sec avg
Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ: 387 MiB/sec 389 MiB/sec 384 MiB/sec = 387 MiB/sec avg
WRITE: 318 MiB/sec 318 MiB/sec 309 MiB/sec = 315 MiB/sec avg
Now testing RAIDZ2 configuration with 4 disks: cWmRd@cW
pid 2326 (php), uid 0, was killed: out of swap space
pid 975 (nfsd), uid 0, was killed: out of swap space
pid 966 (mountd), uid 0, was killed: out of swap space
panic: kmem_malloc(65536): kmem_map too small: 7512592384 total allocated
cpuid = 0
Uptime: 16h10m36s
Cannot dump. Device not defined or unavailable.
Automatic reboot in 15 seconds - press a key on the console to abort
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 16 disks
disk 1: gpt/Disk04
disk 2: gpt/Disk05
disk 3: gpt/Disk06
disk 4: gpt/Disk07
disk 5: gpt/Disk08
disk 6: gpt/Disk09
disk 7: gpt/Disk10
disk 8: gpt/Disk11
disk 9: gpt/Disk12
disk 10: gpt/Disk13
disk 11: gpt/Disk14
disk 12: gpt/Disk15
disk 13: gpt/Disk16
disk 14: gpt/Disk01
disk 15: gpt/Disk02
disk 16: gpt/Disk03
* Test Settings: TS32;
* Tuning: KMEM=7g; AMIN=5g; AMAX=6g;
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service
Now testing RAID0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 662 MiB/sec 663 MiB/sec 661 MiB/sec = 662 MiB/sec avg
WRITE: 474 MiB/sec 475 MiB/sec 465 MiB/sec = 471 MiB/sec avg
Now testing RAIDZ configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 430 MiB/sec 429 MiB/sec 435 MiB/sec = 432 MiB/sec avg
WRITE: 380 MiB/sec 381 MiB/sec 376 MiB/sec = 379 MiB/sec avg
Now testing RAIDZ2 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 409 MiB/sec 411 MiB/sec 413 MiB/sec = 411 MiB/sec avg
WRITE: 336 MiB/sec 336 MiB/sec 334 MiB/sec = 335 MiB/sec avg
Now testing RAID1 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 685 MiB/sec 688 MiB/sec 682 MiB/sec = 685 MiB/sec avg
WRITE: 36 MiB/sec 36 MiB/sec 36 MiB/sec = 36 MiB/sec avg
Now testing RAID1+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 631 MiB/sec 638 MiB/sec 647 MiB/sec = 639 MiB/sec avg
WRITE: 267 MiB/sec 252 MiB/sec 260 MiB/sec = 260 MiB/sec avg
Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 435 MiB/sec 431 MiB/sec 447 MiB/sec = 438 MiB/sec avg
WRITE: 312 MiB/sec 294 MiB/sec 307 MiB/sec = 304 MiB/sec avg
Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 477 MiB/sec 472 MiB/sec 474 MiB/sec = 474 MiB/sec avg
WRITE: 327 MiB/sec 328 MiB/sec 327 MiB/sec = 327 MiB/sec avg
Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 493 MiB/sec 491 MiB/sec 491 MiB/sec = 492 MiB/sec avg
WRITE: 336 MiB/sec 334 MiB/sec 336 MiB/sec = 335 MiB/sec avg
Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 458 MiB/sec 450 MiB/sec 447 MiB/sec = 452 MiB/sec avg
WRITE: 278 MiB/sec 275 MiB/sec 278 MiB/sec = 277 MiB/sec avg
Now testing RAID0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 652 MiB/sec 660 MiB/sec 661 MiB/sec = 657 MiB/sec avg
WRITE: 470 MiB/sec 469 MiB/sec 473 MiB/sec = 471 MiB/sec avg
Now testing RAID0 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ: 671 MiB/sec 664 MiB/sec 675 MiB/sec = 670 MiB/sec avg
WRITE: 471 MiB/sec 465 MiB/sec 470 MiB/sec = 469 MiB/sec avg
Now testing RAID0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 678 MiB/sec 677 MiB/sec 673 MiB/sec = 676 MiB/sec avg
WRITE: 465 MiB/sec 465 MiB/sec 472 MiB/sec = 468 MiB/sec avg
Now testing RAID0 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ: 675 MiB/sec 678 MiB/sec 674 MiB/sec = 676 MiB/sec avg
WRITE: 472 MiB/sec 473 MiB/sec 467 MiB/sec = 471 MiB/sec avg
Now testing RAIDZ configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 437 MiB/sec 439 MiB/sec 442 MiB/sec = 439 MiB/sec avg
WRITE: 365 MiB/sec 356 MiB/sec 373 MiB/sec = 365 MiB/sec avg
Now testing RAIDZ configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ: 430 MiB/sec 432 MiB/sec 437 MiB/sec = 433 MiB/sec avg
WRITE: 372 MiB/sec 378 MiB/sec 375 MiB/sec = 375 MiB/sec avg
Now testing RAIDZ configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 436 MiB/sec 436 MiB/sec 433 MiB/sec = 435 MiB/sec avg
WRITE: 378 MiB/sec 377 MiB/sec 383 MiB/sec = 379 MiB/sec avg
Now testing RAIDZ configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ: 430 MiB/sec 433 MiB/sec 426 MiB/sec = 430 MiB/sec avg
WRITE: 371 MiB/sec 355 MiB/sec 374 MiB/sec = 367 MiB/sec avg
Now testing RAIDZ2 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 400 MiB/sec 404 MiB/sec 400 MiB/sec = 401 MiB/sec avg
WRITE: 322 MiB/sec 313 MiB/sec 322 MiB/sec = 319 MiB/sec avg
Now testing RAIDZ2 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ: 410 MiB/sec 405 MiB/sec 408 MiB/sec = 408 MiB/sec avg
WRITE: 311 MiB/sec 315 MiB/sec 311 MiB/sec = 312 MiB/sec avg
Now testing RAIDZ2 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 408 MiB/sec 407 MiB/sec 409 MiB/sec = 408 MiB/sec avg
WRITE: 320 MiB/sec 315 MiB/sec 327 MiB/sec = 320 MiB/sec avg
Now testing RAIDZ2 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ: 409 MiB/sec 412 MiB/sec 411 MiB/sec = 411 MiB/sec avg
WRITE: 317 MiB/sec 333 MiB/sec 324 MiB/sec = 325 MiB/sec avg
Now testing RAID1 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 655 MiB/sec 653 MiB/sec 632 MiB/sec = 647 MiB/sec avg
WRITE: 46 MiB/sec 46 MiB/sec 45 MiB/sec = 46 MiB/sec avg
Now testing RAID1 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
READ: 658 MiB/sec 654 MiB/sec 663 MiB/sec = 658 MiB/sec avg
WRITE: 42 MiB/sec 44 MiB/sec 43 MiB/sec = 43 MiB/sec avg
Now testing RAID1 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 669 MiB/sec 653 MiB/sec 668 MiB/sec = 663 MiB/sec avg
WRITE: 40 MiB/sec 40 MiB/sec 40 MiB/sec = 40 MiB/sec avg
Now testing RAID1 configuration with 15 disks: cWmRd@cWmRd@cWmRd@
READ: 663 MiB/sec 662 MiB/sec 674 MiB/sec = 666 MiB/sec avg
WRITE: 38 MiB/sec 38 MiB/sec 37 MiB/sec = 38 MiB/sec avg
Now testing RAID1+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 591 MiB/sec 601 MiB/sec 573 MiB/sec = 588 MiB/sec avg
WRITE: 254 MiB/sec 254 MiB/sec 261 MiB/sec = 256 MiB/sec avg
Now testing RAID1+0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
READ: 639 MiB/sec 656 MiB/sec 651 MiB/sec = 648 MiB/sec avg
WRITE: 262 MiB/sec 249 MiB/sec 259 MiB/sec = 257 MiB/sec avg
Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 426 MiB/sec 428 MiB/sec 443 MiB/sec = 432 MiB/sec avg
WRITE: 314 MiB/sec 303 MiB/sec 310 MiB/sec = 309 MiB/sec avg
Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 477 MiB/sec 474 MiB/sec 472 MiB/sec = 474 MiB/sec avg
WRITE: 331 MiB/sec 333 MiB/sec 328 MiB/sec = 331 MiB/sec avg
Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 489 MiB/sec 489 MiB/sec 492 MiB/sec = 490 MiB/sec avg
WRITE: 332 MiB/sec 330 MiB/sec 316 MiB/sec = 326 MiB/sec avg
Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 445 MiB/sec 443 MiB/sec 457 MiB/sec = 448 MiB/sec avg
WRITE: 276 MiB/sec 278 MiB/sec 276 MiB/sec = 277 MiB/sec avg
Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 603 MiB/sec 608 MiB/sec 600 MiB/sec = 604 MiB/sec avg
WRITE: 457 MiB/sec 459 MiB/sec 457 MiB/sec = 458 MiB/sec avg
Now testing RAID0 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ: 596 MiB/sec 580 MiB/sec 623 MiB/sec = 600 MiB/sec avg
WRITE: 456 MiB/sec 465 MiB/sec 437 MiB/sec = 453 MiB/sec avg
Now testing RAID0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 652 MiB/sec 650 MiB/sec 649 MiB/sec = 651 MiB/sec avg
WRITE: 470 MiB/sec 473 MiB/sec 473 MiB/sec = 472 MiB/sec avg
Now testing RAID0 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ: 657 MiB/sec 663 MiB/sec 655 MiB/sec = 658 MiB/sec avg
WRITE: 465 MiB/sec 470 MiB/sec 462 MiB/sec = 465 MiB/sec avg
Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 397 MiB/sec 410 MiB/sec 413 MiB/sec = 406 MiB/sec avg
WRITE: 334 MiB/sec 330 MiB/sec 325 MiB/sec = 330 MiB/sec avg
Now testing RAIDZ configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ: 439 MiB/sec 446 MiB/sec 452 MiB/sec = 446 MiB/sec avg
WRITE: 350 MiB/sec 356 MiB/sec 353 MiB/sec = 353 MiB/sec avg
Now testing RAIDZ configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 426 MiB/sec 427 MiB/sec 430 MiB/sec = 428 MiB/sec avg
WRITE: 346 MiB/sec 352 MiB/sec 347 MiB/sec = 348 MiB/sec avg
Now testing RAIDZ configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ: 440 MiB/sec 438 MiB/sec 445 MiB/sec = 441 MiB/sec avg
WRITE: 368 MiB/sec 365 MiB/sec 366 MiB/sec = 366 MiB/sec avg
Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 372 MiB/sec 372 MiB/sec 367 MiB/sec = 370 MiB/sec avg
WRITE: 288 MiB/sec 282 MiB/sec 287 MiB/sec = 286 MiB/sec avg
Now testing RAIDZ2 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ: 426 MiB/sec 418 MiB/sec 420 MiB/sec = 421 MiB/sec avg
WRITE: 295 MiB/sec 296 MiB/sec 294 MiB/sec = 295 MiB/sec avg
Now testing RAIDZ2 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 448 MiB/sec 445 MiB/sec 452 MiB/sec = 448 MiB/sec avg
WRITE: 303 MiB/sec 304 MiB/sec 303 MiB/sec = 303 MiB/sec avg
Now testing RAIDZ2 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ: 399 MiB/sec 411 MiB/sec 408 MiB/sec = 406 MiB/sec avg
WRITE: 304 MiB/sec 308 MiB/sec 301 MiB/sec = 304 MiB/sec avg
Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 586 MiB/sec 591 MiB/sec 585 MiB/sec = 587 MiB/sec avg
WRITE: 66 MiB/sec 65 MiB/sec 64 MiB/sec = 65 MiB/sec avg
Now testing RAID1 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
READ: 609 MiB/sec 605 MiB/sec 602 MiB/sec = 605 MiB/sec avg
WRITE: 58 MiB/sec 62 MiB/sec 62 MiB/sec = 61 MiB/sec avg
Now testing RAID1 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 625 MiB/sec 618 MiB/sec 639 MiB/sec = 627 MiB/sec avg
WRITE: 57 MiB/sec 57 MiB/sec 55 MiB/sec = 56 MiB/sec avg
Now testing RAID1 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
READ: 640 MiB/sec 638 MiB/sec 628 MiB/sec = 636 MiB/sec avg
WRITE: 49 MiB/sec 49 MiB/sec 52 MiB/sec = 50 MiB/sec avg
Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 446 MiB/sec 441 MiB/sec 439 MiB/sec = 442 MiB/sec avg
WRITE: 249 MiB/sec 237 MiB/sec 253 MiB/sec = 247 MiB/sec avg
Now testing RAID1+0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
READ: 531 MiB/sec 553 MiB/sec 550 MiB/sec = 545 MiB/sec avg
WRITE: 256 MiB/sec 262 MiB/sec 255 MiB/sec = 258 MiB/sec avg
Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 440 MiB/sec 436 MiB/sec 431 MiB/sec = 436 MiB/sec avg
WRITE: 309 MiB/sec 303 MiB/sec 310 MiB/sec = 307 MiB/sec avg
Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 474 MiB/sec 479 MiB/sec 471 MiB/sec = 475 MiB/sec avg
WRITE: 320 MiB/sec 310 MiB/sec 325 MiB/sec = 318 MiB/sec avg
Now testing RAIDZ+0 configuration with 16 disks: cWmRd@cWmRd@cWmRd@
READ: 488 MiB/sec 487 MiB/sec 487 MiB/sec = 487 MiB/sec avg
WRITE: 324 MiB/sec 325 MiB/sec 312 MiB/sec = 320 MiB/sec avg
Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
READ: 458 MiB/sec 448 MiB/sec 449 MiB/sec = 452 MiB/sec avg
WRITE: 256 MiB/sec 268 MiB/sec 270 MiB/sec = 265 MiB/sec avg
Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 418 MiB/sec 440 MiB/sec 430 MiB/sec = 430 MiB/sec avg
WRITE: 354 MiB/sec 357 MiB/sec 354 MiB/sec = 355 MiB/sec avg
Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 485 MiB/sec 483 MiB/sec 478 MiB/sec = 482 MiB/sec avg
WRITE: 388 MiB/sec 407 MiB/sec 408 MiB/sec = 401 MiB/sec avg
Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ: 533 MiB/sec 532 MiB/sec 514 MiB/sec = 526 MiB/sec avg
WRITE: 435 MiB/sec 426 MiB/sec 433 MiB/sec = 431 MiB/sec avg
Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ: 580 MiB/sec 572 MiB/sec 545 MiB/sec = 566 MiB/sec avg
WRITE: 442 MiB/sec 426 MiB/sec 448 MiB/sec = 439 MiB/sec avg
Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 321 MiB/sec 337 MiB/sec 332 MiB/sec = 330 MiB/sec avg
WRITE: 235 MiB/sec 231 MiB/sec 227 MiB/sec = 231 MiB/sec avg
Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 417 MiB/sec 417 MiB/sec 435 MiB/sec = 423 MiB/sec avg
WRITE: 291 MiB/sec 284 MiB/sec 291 MiB/sec = 288 MiB/sec avg
Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ: 415 MiB/sec 421 MiB/sec 411 MiB/sec = 415 MiB/sec avg
WRITE: 305 MiB/sec 306 MiB/sec 306 MiB/sec = 306 MiB/sec avg
Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ: 387 MiB/sec 385 MiB/sec 383 MiB/sec = 385 MiB/sec avg
WRITE: 313 MiB/sec 319 MiB/sec 313 MiB/sec = 315 MiB/sec avg
Now testing RAIDZ2 configuration with 4 disks: cW
RAID-Z2 is where most memory is consumed (actually RAID-Z3, but that's not supported yet), so your system is crashing due to memory exhaustion. This is partly caused by the tuning. I thought increasing kmem to RAM minus 1GiB would be enough, but that may have been premature. I only have 8 disks to test with, though; 16 disks might need a bigger margin.
So try following tuning parameters:
kmem=7g
kmem_max=7g
ARC_min=4g
ARC_max=5g
Note the inclusion of kmem_max; by default this is not tuned. You can do all tuning on the Tuning page: just change the values manually (and make sure that line is selected), press the Save button, then reboot.
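For reference, a sketch of what those shorthand names likely correspond to in /boot/loader.conf — this mapping to the standard FreeBSD loader tunables is my assumption, not something ZFSguru's tuning page is confirmed to write verbatim:

```shell
# /boot/loader.conf — assumed expansion of the shorthand tuning values above
vm.kmem_size="7g"        # kmem=7g
vm.kmem_size_max="7g"    # kmem_max=7g (not tuned by default)
vfs.zfs.arc_min="4g"     # ARC_min=4g
vfs.zfs.arc_max="5g"     # ARC_max=5g
```

These take effect at boot, which is why a reboot is needed after saving.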
If those values run stable for you, I'll consider changing the default tuning variables. Do note, however, that this benchmark is an excellent stability test: if your NAS survives it, the ZFS memory tuning should be stable! It is possible that RAID-Z configurations would be perfectly stable with the current settings, but I would still add a bigger margin; you don't want your new NAS to be crashing. The patches in FreeBSD 9-CURRENT are interesting in this regard, as they streamline ZFS's hunger for RAM.
You can do that already by installing ZFS-on-root to your USB stick.
1) format USB stick with GPT
2) create a pool
3) make it bootable by installing ZFS-on-root to the newly created pool on Pools->Booting
4) reboot and now boot from USB directly into ZFS
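Steps 1 and 2 could be done by hand roughly like this — a sketch only, with da0 as an example device name and gpt/usbzfs as an example label; these commands wipe the stick, so double-check the device before running anything:

```shell
# Step 1: put a GPT scheme on the USB stick (destroys existing data!)
gpart destroy -F da0              # clear any old partitioning, if present
gpart create -s gpt da0           # new GPT partition table
gpart add -t freebsd-zfs -l usbzfs da0   # one ZFS partition, labeled "usbzfs"

# Step 2: create a pool on the labeled partition
zpool create usbpool gpt/usbzfs
```

Steps 3 and 4 (installing ZFS-on-root to the new pool and booting from it) are then done from the web interface under Pools->Booting, as described above.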
ERROR: You selected RAID5 (single parity) but have selected less than three disks. Please go back to select at least 3 disks.
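The three-disk minimum that this error enforces follows directly from the parity count: single parity consumes one disk's worth of capacity, and you need at least two data disks for striping to make sense. A small sketch of the check (the function name and messages are mine, not ZFSguru's):

```shell
#!/bin/sh
# check_raidz PARITY NDISKS
# A vdev with P parity disks needs at least P + 2 disks total
# (P parity + 2 data), which is the rule the error above enforces.
check_raidz() {
    parity=$1
    ndisks=$2
    min=$((parity + 2))
    if [ "$ndisks" -lt "$min" ]; then
        echo "raidz$parity with $ndisks disks: too few (need >= $min)"
    else
        echo "raidz$parity with $ndisks disks: ok, $((ndisks - parity)) data disks"
    fi
}

check_raidz 1 2   # single parity, 2 disks: rejected, as in the error above
check_raidz 1 4   # single parity, 4 disks: fine
check_raidz 2 8   # double parity (RAID-Z2), 8 disks: fine
```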
A very interesting thread. I'm using unRAID at the moment for storage, but I'm always searching for better solutions. First off, I have no experience with ZFS or FreeBSD; I've been using Linux for two years and have played with the shell.
I have two questions:
- Can I use desktop drives? Are there no TLER problems?
- Is there somewhere a more detailed how-to on how to install to a USB stick?
Thank you.
No need for TLER with ZFS, if you're not using HW RAID cards.
You can find a full how-to over at http://submesa.com/mesa but you will have to update via the web interface after you're finished installing.
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 4 disks
disk 1: gpt/1
disk 2: gpt/2
disk 3: gpt/3
disk 4: gpt/4
* Test Settings: TS32;
* Tuning: none
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service
Now testing RAID0 configuration with 4 disks: czmIrm: benchmarks/raidtest.read: No such file or directory
rm: benchmarks/raidtest.write: No such file or directory
rm: benchmarks/raidtest.mixed: No such file or directory
d@czmIrm: benchmarks/raidtest.read: No such file or directory
rm: benchmarks/raidtest.write: No such file or directory
rm: benchmarks/raidtest.mixed: No such file or directory
d@czmIrm: benchmarks/raidtest.read: No such file or directory
rm: benchmarks/raidtest.write: No such file or directory
rm: benchmarks/raidtest.mixed: No such file or directory
Have you tried to port Napp-it?
http://www.napp-it.org/index_en.html
How does it compare to your app?
- Can I use desktop drives? Are there no TLER problems?
You don't need TLER on non-Windows software RAID, but you do need TLER on all/most (*) hardware RAID and all Windows onboard/driver RAID.
- Is there somewhere a more detailed how-to on how to install to a USB stick?
Right now I would advise against it; I'm revising the USB/binary image.
Whenever I run the random benchmarks, I keep getting:
Code:
rm: benchmarks/raidtest.write: No such file or directory
rm: benchmarks/raidtest.mixed: No such file or directory
Sequential seems to work fine though. This is running off the CD, as I don't know how to do ZFS-on-root.
Still a minor bug from a modification I made to the benchmark script, which deletes the raidtest profiles before creating them; otherwise, people who had crashes would not be allowed to run the benchmark. The bug is that if there is no such file, no error message should be displayed. Consider it fixed in the next update. It shouldn't affect any features of the benchmark script, though; you should still get the nice graphs and everything.
MrLie wrote: Found a bug: Created a pool with raidz2 (8 drives), and when I then try to add a 2nd raidz2 vdev (8 drives) I get this error message:
Thanks for reporting it! Fixed in next update.
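The likely one-line fix for that bug (an assumption on my part, not confirmed from the script itself) is to use `rm -f`, which stays silent and exits successfully when the target files don't exist:

```shell
# Demonstration: rm -f suppresses the "No such file or directory"
# error and exits 0 even when none of the files exist.
mkdir -p benchmarks
rm -f benchmarks/raidtest.read benchmarks/raidtest.write benchmarks/raidtest.mixed
echo "rm -f exit status: $?"
```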
Mesa/ZFSguru is at "preview" stage, not recommended for prime time. I'm sure sub.mesa would tell you not to use it in production.
True. 0.2.0 would be my first 'semi-stable' release, where I split stable and experimental builds so only those interested in testing would run the experimental builds.
Philosophically, I believe the goal with Mesa is to allow a total novice to set up FreeBSD/ZFS without too much homework. It has also become apparent that performance is a top priority.
Well said. In essence, I want to make ZFS more accessible to the many home users who want data integrity/reliability and like ZFS's features but are afraid of running something 'foreign' they don't know. A web interface with a managed system can help lower the threshold of ZFS, giving more people access to its features.
Sub... it may be time to give this thing an official name? It deserves one and would make it easier for people to search for.
ZFSguru is going to be the new name, but I still haven't launched the web site.