G'day Guys,
I'm having some strange issues with Solaris Express 11 and ZFS. I'm getting a lot of iostat errors on ONE of my zpools when I put it under load:
Code:
root@arkf-san1:/dev/rdsk# iostat -exmn
                extended device statistics               ---- errors ---
  r/s   w/s   kr/s    kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device
  0.1   0.4    2.9    31.6  0.0  0.0   17.4    2.1  0  0   0   0   0   0 c7t0d0
  0.1   0.4    3.0    31.6  0.0  0.0   17.4    2.2  0  0   0   0   0   0 c7t1d0
 23.6  20.9  454.9   213.1  0.0  0.1    0.0    2.8  0  3   0   0   0   0 c0t50024E900430BBEBd0 \
 23.5  20.9  455.0   213.1  0.0  0.1    0.0    2.9  0  3   0   0   0   0 c0t50024E900430BBF2d0 |
 23.6  21.0  455.0   213.1  0.0  0.1    0.0    2.7  0  3   0   0   0   0 c0t50024E900430BBEEd0 | Zpool1
 23.5  21.0  455.1   213.1  0.0  0.1    0.0    2.8  0  3   0   0   0   0 c0t50024E900430BBDCd0 | Samsung F2
 23.5  20.9  455.1   213.1  0.0  0.1    0.0    2.9  0  3   0   0   0   0 c0t50024E900430BC2Ad0 | 1.5TB disks
 23.5  20.9  454.8   213.1  0.0  0.1    0.0    2.8  0  3   0   0   0   0 c0t50024E900431D9B0d0 |
 23.4  20.9  454.8   213.1  0.0  0.1    0.0    2.8  0  3   0   0   0   0 c0t50024E900431D9ACd0 |
 23.4  20.9  454.8   213.1  0.0  0.1    0.0    2.8  0  3   0   0   0   0 c0t50024E900431D9E9d0 /
  0.0  15.9    0.1 11470.3  0.0  0.1    0.0    4.8  0  7   0   8  17  25 c0t50024E90047BBF9Ad0 \
  0.0  15.8    0.1 11470.2  0.0  0.1    0.0    4.8  0  7   0   8  17  25 c0t50024E90047BBF98d0 |
  0.0  15.9    0.1 11470.4  0.0  0.1    0.0    4.8  0  7   0   4   9  13 c0t50024E90047C51A2d0 | Zpool2
  0.0  15.8    0.1 11470.2  0.0  0.1    0.0    4.8  0  7   0   8  17  25 c0t50024E90047BBC90d0 | Samsung F4
  0.0   2.9    0.2  1847.4  0.0  0.0    0.0    4.9  0  1   0  14  35  49 c0t50024E90047C5450d0 | 2.0TB disks
  0.0   2.9    0.2  1847.3  0.0  0.0    0.0    4.6  0  1   0   2   6   8 c0t50024E90047C5512d0 |
  0.0   2.9    0.2  1847.4  0.0  0.0    0.0    4.6  0  1   0   6  10  16 c0t50024E90047C51B0d0 |
  0.0   2.9    0.2  1847.3  0.0  0.0    0.0    4.6  0  1   0   2   4   6 c0t50024E90047C55C4d0 /
The above iostat printout was taken halfway through a "zfs send | zfs recv" between the two zpools. The source pool (zpool1) is the top 8 disks, Samsung 1.5TB F2s, and the destination pool (zpool2) is the bottom 8 disks, Samsung 2TB F4s. Note that the h/w (hard) and trn (transport) error counters are climbing on the F4 disks only.
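For the record, the transfer itself was just a plain snapshot stream piped between the pools, along these lines (the dataset and snapshot names here are placeholders, not my exact ones):
Code:
# snapshot the source dataset, then stream it into the destination pool
zfs snapshot zpool1/data@migrate
zfs send zpool1/data@migrate | zfs recv -F zpool2/data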
Before anyone asks, YES I have updated the 2TB disks to the latest firmware!
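If anyone wants to double-check what revision a drive is actually reporting, "iostat -En" prints the vendor/product/revision strings per device, e.g.:
Code:
# show vendor, product and firmware revision for one of the F4 disks
iostat -En c0t50024E90047BBF9Ad0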
My rig comprises the following significant items:
- 8 x 1.5TB SAMSUNG EcoGreen F2 HD154UI hard disks
- 8 x 2TB SAMSUNG Spinpoint F4 HD204UI hard disks
- LSI Internal SATA/SAS 9211-8i 6Gb/s PCI-Express 2.0 card
- HP SAS Expander Card
- NORCO RPC-4224 4U Rackmount Server Case
I have tried different disks (Seagate, WD, Hitachi, Samsung) and have not had any issues; it's only the 2TB Samsung F4 disks that have I/O issues. I have also tried connecting the 2TB disks via a straight SATA card (non-SAS), and I still have the same issue.
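For anyone digging into the same thing, the per-disk error counters also show up in FMA's error telemetry, which gives a lot more detail than iostat about what the transport errors actually are:
Code:
# list the raw FMA error telemetry (hard/transport errors land here)
fmdump -e
# verbose dump of the events, including the device path involved
fmdump -eV
# check whether FMA has diagnosed an actual fault from them yet
fmadm faulty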
Anyone else run into issues like this?