FreeNAS misalignment with 4K drives

Discussion in 'SSDs & Data Storage' started by Apollo686, Nov 13, 2010.

  1. Apollo686

    Apollo686 Limp Gawd

    Messages:
    227
    Joined:
    Nov 15, 2004
    I have FreeNAS 0.7.3 (5543) running on a Q9550 mini-ITX system with 4 GB of RAM. I previously had 3x WD20EARS drives attached and had good performance over gigabit ethernet. I just bought 3 more WD20EARS drives and destroyed the array to create a new one using graid5. All the drives are the version 3 (667 GB platters).

    The new configuration is 6x WD20EARS, with 4 drives off the motherboard and 2 drives off an Areca ARC-1200 running as passthrough. Copying data to the NAS over gigabit with the new array, I'm averaging 10-15 MB/s. One transfer started around 50 MB/s but dropped after a bit and has stayed around 9-12 MB/s.

    In FreeNAS I checked Status | Disks and got:

    Disk    Capacity      Device model            I/O statistics                      Temperature   Status
    ad8     1907730MB     WDC WD20EARS-00MVWB0    111.25 KiB/t, 84 tps, 9.13 MiB/s    41 °C         ONLINE
    ad10    1907730MB     WDC WD20EARS-00MVWB0    118.35 KiB/t, 79 tps, 9.12 MiB/s    40 °C         ONLINE
    ad12    1907730MB     WDC WD20EARS-00MVWB0    121.10 KiB/t, 77 tps, 9.12 MiB/s    38 °C         ONLINE
    ad14    1907730MB     WDC WD20EARS-00MVWB0    121.30 KiB/t, 77 tps, 9.13 MiB/s    40 °C         ONLINE
    da1     1907730MB     n/a                     62.28 KiB/t, 150 tps, 9.14 MiB/s    30 °C         ONLINE
    da2     1907730MB     n/a                     62.28 KiB/t, 150 tps, 9.13 MiB/s    30 °C         ONLINE


    9.13 MiB/s seems very low; these drives should be closer to 90 MiB/s. Under Diagnostics I noticed that the drives are set up with 63 sectors per track. Could this be causing the slowdown by misaligning the 4K sectors? I was under the impression that without partitions this should not be an issue.
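
    For reference, FreeBSD's diskinfo will show what the drives report (device name below is from my setup; note the EARS drives still claim 512-byte sectors, so the 4K physical size won't show up here):

    diskinfo -v ad8
    # look at "sectorsize" (these drives report 512 even though the physical
    # sectors are 4 KiB) and "Sectors according to firmware." (the 63 spt
    # figure FreeNAS is showing)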

    If anyone has thoughts I would greatly appreciate it, and I can provide more information if needed.
     
    Last edited: Nov 13, 2010
  2. Apollo686

    Apollo686 Limp Gawd

    Messages:
    227
    Joined:
    Nov 15, 2004
    Also, checking performance with dd gives:

    dd if=/dev/zero of=/raid5/zerofile.000 bs=1m count=10000
    10485760000 bytes transferred in 155.132150 secs (67,592,436 bytes/sec)

    dd if=/raid5/zerofile.000 of=/dev/null bs=1m
    10485760000 bytes transferred in 127.880278 secs (81,996,694 bytes/sec)
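
    To rule out a single slow drive dragging the whole array down, the raw sequential read speed of each member can be checked the same way (read-only, so it's safe; device names as in my setup):

    dd if=/dev/ad8 of=/dev/null bs=1m count=1000    # repeat for ad10, ad12, ad14, da1, da2
    # a healthy WD20EARS should manage roughly 80-100 MB/s on the outer tracks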

    sysctl -a | grep kmem
    vm.kmem_size_scale: 2
    vm.kmem_size_max: 1342177280
    vm.kmem_size_min: 0
    vm.kmem_size: 1342177280

    sysctl -a | grep raid5
    kern.geom.raid5.wqf: 95
    kern.geom.raid5.wqp: 104
    kern.geom.raid5.blked2: 3317729
    kern.geom.raid5.blked1: 39410
    kern.geom.raid5.dsk_ok: 50
    kern.geom.raid5.wreq2_cnt: 19894149
    kern.geom.raid5.wreq1_cnt: 3034064
    kern.geom.raid5.wreq_cnt: 33885804
    kern.geom.raid5.rreq_cnt: 263748
    kern.geom.raid5.mhm: 32956347
    kern.geom.raid5.mhh: 174868708
    kern.geom.raid5.veri_w: 756
    kern.geom.raid5.veri: 15261832
    kern.geom.raid5.veri_nice: 100
    kern.geom.raid5.veri_fac: 25
    kern.geom.raid5.maxmem: 8000000
    kern.geom.raid5.maxwql: 50
    kern.geom.raid5.wdt: 3
    kern.geom.raid5.tooc: 5
    kern.geom.raid5.debug: 0

    graid5 status
           Name         Status  Components
    raid5/store  COMPLETE CALM  ad8
                                ad10
                                ad12
                                ad14
                                da1
                                da2
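
    For reference, the shell equivalent for building this array would be roughly the following (going from memory on the graid5 syntax, so double-check the man page):

    graid5 label store ad8 ad10 ad12 ad14 da1 da2   # creates the /dev/raid5/store device
    graid5 status                                   # confirm all six members are attached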
     
  3. bAMtan2

    bAMtan2 [H]ard|Gawd

    Messages:
    1,469
    Joined:
    Sep 16, 2008
    Heaven help me, I don't know. Just please don't use 6 of those drives in RAID 5.
     
  4. RabidSmurf

    RabidSmurf n00bie

    Messages:
    56
    Joined:
    Jun 30, 2009
    Not sure if it applies with FreeNAS (never used it).

    But I just resolved a very similar problem with my 2x EADS + 2x EARS RAID 5 (1.5 TB drives).

    I had to jumper pins 7-8 on the EARS drives, which is what WD recommends for Windows XP. I'm running Win 7, but I suspect the controller I'm using is primitive.

    Not really sure why that worked, but it seemed to help a fair bit. That, combined with using writeback mode, has gotten the write performance into the realm of usability.
     
    Last edited: Nov 14, 2010
  5. Apollo686

    Apollo686 Limp Gawd

    Messages:
    227
    Joined:
    Nov 15, 2004
    The jumper moves the start sector from 63 to 64, which aligns the sectors on the 4K drives and fixes the speed issues.
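
    The arithmetic behind it is simple enough to check in a shell:

    echo $((63 * 512 % 4096))   # 3584 -> a partition at sector 63 straddles the 4K physical sectors
    echo $((64 * 512 % 4096))   # 0    -> sector 64 lands exactly on a 4K boundary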

    The difference in my situation is that I am not using partitions on the individual drives, so theoretically I should not have this issue. It still seems to me like this might be the problem, but I'd rather realign the disks than use the jumper.

    I was hoping sub.mesa would have some ideas as he has been very helpful in the past. :)
     
  6. Apollo686

    Apollo686 Limp Gawd

    Messages:
    227
    Joined:
    Nov 15, 2004
    The Green drives map eight 512-byte logical sectors to each 4K physical sector, so anything that starts at sector 63 ends up offset from the physical sectors, causing the slow drive performance. Starting at sector 64 avoids this, but I'm not sure how to do that with FreeBSD RAID-formatted drives.

    With Linux I would (the full session is sketched after the list below):

    fdisk - "sudo fdisk -u /dev/sda"
    Delete existing partitions - "d"
    Create a new partition - "n"
    Tell it primary partition - "p"
    Tell it partition 1 - "1"
    Tell it to start the partition at sector 64 - "64"
    Change the partition type (if you want the drive as part of your mdadm raid array) - "t"
    Select Linux raid autodetect fs - "fd"
    Write & quit - "w"
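
    Put together, the session looks roughly like this (device name is just an example; adjust for your drive):

    sudo fdisk -u /dev/sda
    # at the fdisk prompt:
    #   d    delete the existing partition
    #   n    new partition
    #   p    primary
    #   1    partition number 1
    #   64   first sector 64 (4K-aligned); accept the default for the last sector
    #   t    change the partition type
    #   fd   Linux raid autodetect
    #   w    write the table and quit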


    Does anyone know if there is a comparable way to do this with FreeNAS/FreeBSD? I am not partitioning my drives, but it still lists the RAID volumes as starting at sector 63.
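
    My best guess so far would be to give each drive a single partition starting at sector 64 with gpart and build the array on those partitions, something like the lines below, but I haven't tested it and the flags may differ on this FreeBSD version:

    gpart create -s gpt ad8              # put a GPT on the drive
    gpart add -b 64 -t freebsd-ufs ad8   # partition starting at sector 64 (older gpart may also require -s <size>)
    # repeat for the other drives, then build the graid5 array on ad8p1, ad10p1, ...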
     
  7. Apollo686

    Apollo686 Limp Gawd

    Messages:
    227
    Joined:
    Nov 15, 2004
    OK, so I destroyed the partition and recreated it with the "use 4k blocks" option checked. It's still listing the partition as starting at sector 63 with 512-byte blocks, but I'm getting 2-3x the speed:


    freenas:~# dd if=/dev/zero of=/mnt/nas/zerofile.000 bs=1m count=10000
    10000+0 records in
    10000+0 records out
    10485760000 bytes transferred in 62.202299 secs (168,575,119 bytes/sec)

    freenas:~# dd if=/mnt/nas/zerofile.000 of=/dev/null bs=1m
    10000+0 records in
    10000+0 records out
    10485760000 bytes transferred in 57.392362 secs (182,703,058 bytes/sec)


    Wondering if I can get more out of it. What are good speeds for a 6-disk array with single-drive parity? I'm thinking somewhere in the 500-600 MB/s range.
     
  8. Apollo686

    Apollo686 Limp Gawd

    Messages:
    227
    Joined:
    Nov 15, 2004
    Converted the array over to RAID-Z1 and got the following numbers. Similar write but much faster read:


    freenas:~# dd if=/dev/zero of=/mnt/nas/zerofile.000 bs=1m count=10000
    10000+0 records in
    10000+0 records out
    10485760000 bytes transferred in 58.469940 secs (179,335,912 bytes/sec)

    freenas:~# dd if=/mnt/nas/zerofile.000 of=/dev/null bs=1m
    10000+0 records in
    10000+0 records out
    10485760000 bytes transferred in 27.471856 secs (381,690,992 bytes/sec)


    Not sure if this is a speed difference inherent in ZFS, or if it's treating the data differently in a way I could also apply to graid5.
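
    For reference, the pool was set up through the FreeNAS GUI; the command-line equivalent should be something like this (pool name here just matches my mount point):

    zpool create nas raidz ad8 ad10 ad12 ad14 da1 da2   # single-parity RAID-Z across all six drives
    zpool status nas                                    # verify the vdev layout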