I'm having issues with ZFS performance on a proof-of-concept ZFS server I set up. I have 14 OCZ Deneva 2 SSDs connected to a Dell H200 HBA (a rebranded LSI SAS2008 card). I ran a benchmark against a 14-disk RAID 10 SSD pool and was disappointed by the results: only about 650 MB/s write and 900 MB/s read. I created a bunch of different pools (roughly as shown below) and the numbers don't add up.
With a single drive configured I got about 350 MB/s write and 600 MB/s read.
With a 14-disk RAID 0 I got the same 650 MB/s write and 900 MB/s read.
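In case it matters, the pools were created along these lines (device names are placeholders, not my actual targets; the RAID 0 pool was the same command with all 14 devices listed flat, without the mirror keyword):

    # 14-disk "RAID 10": seven 2-way mirrors striped together
    zpool create tank \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0 \
      mirror c1t6d0 c1t7d0 \
      mirror c1t8d0 c1t9d0 \
      mirror c1t10d0 c1t11d0 \
      mirror c1t12d0 c1t13d0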
About the system:
Dell R910 with four 8-core Xeons
128 GB of ECC RAM
Running Solaris 11 Express with Napp-it 0.8H
Using the Napp-it dd bench tool with default values. (I would rather use Bonnie++, but apparently it doesn't work with Solaris 11.)
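As far as I understand it, the dd bench just does a large sequential write and then reads the file back; by hand it would be roughly the following (pool name, file size, and block size here are examples, not necessarily the tool's exact defaults):

    # sequential write test; ideally the file should exceed RAM so the ARC
    # can't serve the subsequent read entirely from memory
    dd if=/dev/zero of=/tank/ddtest bs=1024k count=32000

    # sequential read test of the same file
    dd if=/tank/ddtest of=/dev/null bs=1024k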
Other troubleshooting I've done:
updated to the latest Dell BIOS
updated the H200 to the latest Dell firmware
checked that the OCZ drives had the latest firmware
tried different pool versions (v28 vs. v31)
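Next I was planning to watch per-device throughput while the benchmark runs, to see whether individual disks or the controller are the bottleneck; something like this (pool name is a placeholder):

    # per-vdev throughput every 5 seconds while the benchmark runs
    zpool iostat -v tank 5

    # per-device service times and %busy from the OS side
    iostat -xn 5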