I'm having an issue where, as I add more mirror vdevs to my zpool, the per-disk performance drops steadily — roughly 10% for each vdev added. That is a HUGE hit: after 4 vdevs, the disks are only running at about 50%.
Here is the setup: I am running Solaris 11 virtualized under ESXi. I have 8 of these disks connected to an LSI 1068e-based HBA flashed with IT firmware: Fujitsu MAY2073RC 73GB 10,000 RPM 16MB cache 2.5" SAS drives.
Here is my testing:
Code:
# zpool create testpool mirror c9t7d0 c9t8d0
# mkfile 2g /testpool/testfile = 48MB/s and disks are 90% busy, not TOO bad. Wonder where the other 10% is going?
Code:
# zpool create testpool mirror c9t7d0 c9t8d0 mirror c9t9d0 c9t14d0
# mkfile 2g /testpool/testfile = 75MB/s and disks are 80% busy, WTF? I should be getting close to 96MB/s (2 x 48) after that first test...
Code:
# zpool create testpool mirror c9t7d0 c9t8d0 mirror c9t9d0 c9t14d0 mirror c9t15d0 c9t16d0
# mkfile 2g /testpool/testfile = 106MB/s and disks are 65% busy, OK what the hell is going on here
Code:
# zpool create testpool mirror c9t7d0 c9t8d0 mirror c9t9d0 c9t14d0 mirror c9t15d0 c9t16d0 mirror c9t17d0 c9t18d0
# mkfile 2g /testpool/testfile = 116MB/s and disks are 50% busy, F($*@ DAMN IT
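To make the scaling loss concrete, here's the arithmetic on the four runs above (a quick sketch; it just takes the 48MB/s single-mirror run as the baseline for what each vdev "should" deliver):

```python
# Measured mkfile throughput (MB/s) vs. number of mirror vdevs, from the tests above.
results = {1: 48, 2: 75, 3: 106, 4: 116}

BASELINE = 48  # MB/s per vdev, from the single-mirror run

for vdevs, mbps in results.items():
    per_vdev = mbps / vdevs
    # Efficiency of each vdev relative to the single-mirror baseline.
    efficiency = per_vdev / BASELINE * 100
    print(f"{vdevs} vdev(s): {per_vdev:.1f} MB/s per vdev ({efficiency:.0f}% of baseline)")
```

So per-vdev throughput falls from 48 to 29 MB/s (~60% of baseline) by the fourth vdev — the same downward trend the %b numbers show.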
More testing I have done:
- I hooked up 6x 15K SAS 3.5" disks and everything looks normal: 300MB/s writes and 600MB/s reads. Crazy fast. To me, this means it's not the HBA or any other hardware. It's the disks.
- I also created a single RAID-Z vdev out of these 8 disks. Same result: roughly 50% of the expected performance.
- I have NOT yet ruled out the backplane that these disks are connected to. But I doubt that is the cause.
- Just to clarify, when I say the disks are only 50% busy, I get that information from the %b column in iostat's extended device statistics.
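For anyone reproducing this, a small sketch of how those %b values can be pulled out per device. The sample text below is a hypothetical fragment of Solaris extended iostat output (format assumed, not captured from my box); %b is the second-to-last column and the device name is the last:

```python
# Hypothetical sample of Solaris extended iostat device lines (format assumed).
sample = """\
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  118.0    0.0 47965.4  0.0  0.9    0.0    7.6   0  90 c9t7d0
    0.0  117.2    0.0 47601.8  0.0  0.9    0.0    7.7   0  89 c9t8d0
"""

def busy_by_device(text):
    """Map device name -> %b, skipping the header line."""
    result = {}
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[-1] == "device":
            continue  # blank line or column header
        result[fields[-1]] = int(fields[-2])
    return result

print(busy_by_device(sample))
```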