WD Green Drive Weird Question (FreeBSD/ZFS system)

xp0

Has anyone noticed with their Green drives that they will write for a while, slow down to almost a crawl, and then start reading/writing again?

I have 4x2TB drives set up in a RAIDZ on ZFS, and when I write large files to them the transfer runs at 60-100MB/sec for a little while, then slows to almost 1MB/sec ... then speeds back up. Sometimes it seems like it drops even lower than that.

Question is, is there a way to prevent this? This doesn't seem normal ... I am running FreeBSD with the HDDs in a ZFS RAIDZ. If anyone knows any parameters or config settings I could try, that would be great.

Thanks...
 
Could you show me your zpool status output? What version of FreeBSD are you running? How much memory do you have, and did you do any memory tuning?
 
I just did the memory tweaks because my box was crashing with kmem_map errors ... so far, so good on that problem.

Beast# uname -a
FreeBSD Beast 8.0-RELEASE FreeBSD 8.0-RELEASE #0: Sat Nov 21 15:02:08 UTC 2009 root@almeida.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64

4GB RAM per CPU (2 CPUs total, 8GB RAM total)

Beast# cat /boot/loader.conf
accf_http_load="YES"   # HTTP accept filter kernel module
ahci_load="YES"        # AHCI driver (disks show up as ada*)
vboxdrv_load="YES"     # VirtualBox kernel module
vm.kmem_size="12G"     # enlarge the kernel memory map for ZFS (for the kmem_map panics)
vfs.zfs.arc_max="4G"   # cap the ZFS ARC at 4GB
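
Side note: to double-check those tunables actually took effect after reboot, the sysctls below should show them (names as of FreeBSD 8's ZFS, so treat this as a sketch):

Beast# sysctl vm.kmem_size vfs.zfs.arc_max
# current ARC size in bytes, to see how full the ARC actually gets
Beast# sysctl kstat.zfs.misc.arcstats.size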

Beast# zpool status
  pool: library
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        library          ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            label/disk1  ONLINE       0     0     0
            label/disk2  ONLINE       0     0     0
            label/disk3  ONLINE       0     0     0
            label/disk4  ONLINE       0     0     0

errors: No known data errors

  pool: school
 state: ONLINE
 scrub: none requested
config:

        NAME                   STATE     READ WRITE CKSUM
        school                 ONLINE       0     0     0
          mirror               ONLINE       0     0     0
            label/500gdisk1wd  ONLINE       0     0     0
            label/500gdisk2sm  ONLINE       0     0     0

errors: No known data errors
 
Can you do local disk tests and a network test with iperf?

The local test could be:

# write 20GB file
dd if=/dev/zero of=/library/zero.000 bs=1m count=20000
# read 20GB file
dd if=/library/zero.000 of=/dev/null bs=1m

Carefully review the dd commands before hitting enter!
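
For the network side, a plain iperf run should do; something like this (the address here is just a placeholder for your server's IP):

# on the FreeBSD box, start an iperf server
iperf -s
# on the client machine, run a 30-second test against it
iperf -c 192.168.1.10 -t 30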
 
I just ran into the same problem and managed to trace it to one of the disks. Once I offlined that drive, the performance issues went away. It's showing 136 reallocated sectors at this point, so I just started an offline test with smartctl. I guess I'll probably RMA the drive and hope that solves the problem.
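
For reference, the commands I mean would look roughly like this (pool, label, and device names here are just examples, not my actual ones):

# drop the suspect disk out of the pool
zpool offline library label/disk2
# dump its SMART data, then kick off an extended (offline) self-test
smartctl -a /dev/ada2
smartctl -t long /dev/ada2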
 
Yes, this is a fairly common issue: with many disks in a RAID config, one 'dud' (poorly performing, possibly failing) disk can drag your whole pool's performance down.
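
One way to spot such a dud is to watch per-disk latency while the pool is under load; on FreeBSD, gstat is handy for that:

# ms/r and ms/w are per-op latencies; the dud usually shows far
# higher latency (and a constantly full L(q) queue) than its siblings
gstat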
 
Interesting ... I have thought about that because this didn't seem normal.
Steve01S4 - can I ask how you managed to narrow it down to a specific disk? What tests can I do to see if I can find the same issue?

I will try and do some iperf tests in a few

Thanks a lot
 
xp0-

I use a Norco case that has activity LEDs for each hard drive. I noticed that whenever the system froze up, the light for one particular drive was always on. It wasn't even blinking; it was solidly on. The system wouldn't become responsive again until that light went out. If you haven't done it yet, check the SMART data for your drives too. That might give you some clues.
 
Gotcha. I am not having a problem with my system freezing/crashing anymore (I think I have fixed that), but I am still having that weird issue with my drives. I will check the SMART data to see if I can figure something out.
 
Check for:

- Current Pending Sector (non-zero is very bad)
- UDMA CRC Error Count (cabling errors)
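
If you don't have it yet, smartctl comes from the sysutils/smartmontools port. Something like this prints the attribute table (device names are examples; with ahci loaded your disks will likely be ada0 through ada3):

# print the SMART attribute table for one drive
smartctl -A /dev/ada0
# or dump everything: identity, health, attributes, self-test logs
smartctl -a /dev/ada0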
 
And the SMART data will tell me that?

I should run the advanced test, or whatever option gives the most info on the drive, on all four of them, correct?
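
For my own notes, I am guessing the long self-test on all four would go something like this (assuming they show up as ada0 through ada3):

# start an extended self-test on each of the four drives
for d in ada0 ada1 ada2 ada3; do smartctl -t long /dev/$d; done
# the results show up later in the self-test log
smartctl -l selftest /dev/ada0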
 