More RAID problems...

RabidSmurf

OK, so if you read my other thread you'll know I've been having trouble getting a RAID card to work with onboard video. Suffice to say, I finally got it working with a new board.

Now my problem is absolutely atrocious write performance; we're talking barely 100Mb/s (yes, megabits, not bytes). This is drive to drive, not over the network.

The setup:

Zotac IONITX L-E Atom 330 Board
4GB DDR3 RAM
Dell PERC 5/i RAID controller (LSI-based) w/ 256MB RAM
Windows 7 Ultimate installed on a single 7200RPM 74GB HDD
4x 1.5TB Western Digital "Green" drives in RAID 5 (two with 32MB cache and two with 64MB cache).

RAID Config:

RAID Level: 5
Stripe Size: 128k
Disk Cache Policy: Enable
Current Write Policy: Write Through (I can't use Write Back yet since my battery backup unit isn't here yet; could this be the problem? See the sketch after this list for the difference.)
Read Policy: Adaptive Read Ahead
IO Policy: Direct IO
Access Policy: Read Write
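For anyone who hasn't dealt with these cards: write through means the controller won't acknowledge a write until it's actually on the disks, while write back acknowledges as soon as the data is in the controller's cache. A loose host-side analogy in Python below; the filenames are made up, and the OS file cache is obviously not the controller cache, so treat it as illustration only.

```python
import os
import time

BLOCK = b"\0" * (1024 * 1024)   # 1 MiB per write
COUNT = 256                     # 256 MiB total

def timed_write(path, sync_every_block):
    """Write COUNT blocks, optionally forcing each one to disk first."""
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(COUNT):
            f.write(BLOCK)
            if sync_every_block:     # "write through"-ish: wait for the disk
                f.flush()
                os.fsync(f.fileno())
        f.flush()
        os.fsync(f.fileno())         # sync at the end either way, for fairness
    return COUNT / (time.time() - start)

print("sync every block: %.1f MB/s" % timed_write("wt_test.bin", True))
print("sync at end only: %.1f MB/s" % timed_write("wb_test.bin", False))
```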

Now, I am aware that green drives aren't the best for RAID; I went with them because they're cheap. I also know someone else who has a virtually identical setup and can manage around 80MB/s writing to his RAID 5. I am getting about 15. So I am pretty sure it's not the drives...

I have a single 1.5TB drive with all my stuff on it that I am trying to transfer onto the RAID. I've tested writing from it to other single drives and get around 70MB/s, which is acceptable; to the RAID I get 15.

I know RAID 5 has parity calculations and won't be as fast as something like RAID 0, but there is no way it should be this bad, even with those green drives. I must've done something wrong.
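From what I've read, the parity itself is just XOR across the blocks in a stripe; the pain is that updating one block means reading the old data and old parity before writing the new data and new parity, four I/Os where RAID 0 needs one. A toy Python illustration, purely conceptual, not what the card actually does:

```python
# RAID 5 parity is just XOR across the data blocks in a stripe.
def parity(blocks):
    p = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            p[i] ^= b
    return bytes(p)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on 3 of the 4 disks
p = parity(stripe)                     # parity block on the 4th disk

# any lost block can be rebuilt from the survivors plus parity:
assert parity([stripe[0], stripe[2], p]) == stripe[1]

# the write penalty: changing one data block forces a read-modify-write
# of the parity block too, which is why small writes hurt so much.
```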

Is it simply the fact that I am using Write Through right now, or do you guys have any other suggestions for improving this?

EDIT: What about PCI-E Settings in the BIOS?

Thanks in advance,

Kevin
 
Hey guys,

My BIOS has an option to "retrain" PCI-E to Gen2, which is currently enabled.

I don't know the difference between Gen1 and Gen2. Anybody know how this might affect my RAID controller?

I am leaning more and more towards the PCI-E bus config in the BIOS, but I am a newb in this regard, so please help.


EDIT: Oops, "Retrain PCI-E to Gen2" was actually disabled. I enabled it; it made no difference.

HDTune doesn't seem to have a problem; it's showing transfer rates of 94MB/s min and 230MB/s max, although it shows the drive as (Perc 5/i 2119GB), which is the wrong size, so I dunno what it's actually testing. I think HDTune's standard benchmark only tests reads anyway, which would explain why it looks fine.

Ugh, this is aggravating.

I tested reading from the RAID and got about 60MB/s over 15GB, so that seems acceptable; it's just the damn write performance that is so bad.
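In case it helps anyone reproduce this, a quick-and-dirty timed write along these lines would take the Explorer dialog out of the equation (rough Python sketch; the drive letter is hypothetical, assuming R: is the array):

```python
import os
import time

TEST_FILE = r"R:\write_test.bin"    # hypothetical: R: is the RAID volume
CHUNK = b"\0" * (4 * 1024 * 1024)   # 4 MiB per write
TOTAL_MB = 2048                     # 2 GiB test file

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB // 4):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # make sure it's actually on disk
elapsed = time.time() - start

print("wrote %d MB in %.1f s = %.1f MB/s" % (TOTAL_MB, elapsed, TOTAL_MB / elapsed))
os.remove(TEST_FILE)
```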

Kevin
 
Now my problem is absolutely atrocious write performance; we're talking barely 100Mb/s (yes, megabits, not bytes). This is drive to drive, not over the network.

Was this a large file transfer or tons of small files? How was it measured?
 
I have tried with a bunch of small MP3s as well as full Blu-ray movies (10+ GB), and it varies only slightly.

Win 7 tells you the transfer rate in the copy window; not sure how accurate that is, but it's how I am measuring currently.

I would like to note that initially the transfer rate is pretty good (70ish MB/s), then it slowly goes down and down and finally stabilizes between 15 and 20MB/s.
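That fast-start-then-decay pattern makes me think a cache is filling up somewhere. A rough way to watch it happen would be to do the copy in chunks and print the running rate (sketchy Python, made-up paths):

```python
import time

SRC = r"D:\movie.mkv"     # made up: source file on the single drive
DST = r"R:\movie.mkv"     # made up: destination on the array
CHUNK = 8 * 1024 * 1024   # copy 8 MiB at a time

copied = 0
start = time.time()
with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)
        copied += len(buf)
        elapsed = max(time.time() - start, 1e-9)
        # running average, so the fast start and the slow tail both show up
        print("%6d MB copied, %.1f MB/s average" % (copied // 2**20, copied / 2**20 / elapsed))
```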

I only have this problem when copying to the RAID array; if I copy to my single 500GB drive it works fine.

Also, reading from the RAID seems to be fine too.

The WD drives in the RAID are 1.5TB drives: 2 EADS and 2 EARS. Not sure if that might be playing a role; they are identical size- and spindle-speed-wise, but the cache sizes are different, and I don't think they have TLER.

Note: I've also updated the firmware on the PERC 5/i from the stock Dell firmware to LSI's 7.0.1.
 
That's a good question, I hadn't thought of that.

It's currently in the middle of copying and my CPU utilization is hovering between 1% and 15%.

I believe the RAID controller should be handling most of the processing, if I am not mistaken.
 
IIRC, yes the BBU does affect write performance.
 
IIRC, yes the BBU does affect write performance.

Do you think it would affect it this much? I have almost no experience with a hardware RAID controller honestly, so any advice is appreciated.

My BBU is shipping currently, so maybe I'll just wait and see what kind of difference it makes once it gets here.

Thanks for all the replies guys.
 
I believe the RAID controller should be handling most of the processing, if I am not mistaken.

Yup, the PERC 5s are hardware cards; just thought I'd rule that out! Write back will help, but there is something else wrong. HDTune kinda puzzles me. How is the array partitioned, what file system, etc.?

Haven't owned a Windoze PC in 5+ years, so I'm not sure how to pull that info; maybe you already know how?
 
The array is a simple volume.

Primary partition: 4190.23 GB (there is no secondary or unallocated space).
File System: NTFS

Standard for Windows; all my drives are set up like this.
 
I am kind of starting to wonder if it is the drives.
My current thoughts:

Not the PCI-E bus config, since reading is working fine.
Not the RAID Configuration, although possibly a slight performance hit without BBU
Not the CPU, since it's barely being utilized

Really, the only thing left I can think of is the drives.

Has anyone mixed the WD15EARS with WD15EADS in RAID 5 before?

Technically they are the same drive, except the EARS has a 64MB cache instead of 32MB; I also think it may have a newer firmware version.

I believe neither the EARS nor the EADS has TLER.

The friend I mentioned with a similar setup is using ALL EADS; could the difference in cache size be causing this?
 
kill the array, hook up only one drive as a JBOD passthrough, and see how the performance is then.

also, slap yourself for such a painful, dangerous configuration. my above suggestion is real, but heed my warning: your trouble with that system will not end here.

you probably don't even have a jumper on those EARS, so they're stuck running in their crazy 512-byte sector emulation mode with misaligned offsets. if I had to guess, people have run into this before, because those drives are cheap and tempting to use like this even though nobody at WD would ever have tested them that way.
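quick way to check for the alignment problem: pull each partition's starting offset with `wmic partition get Name, StartingOffset` and see whether it divides evenly by 4096. tiny python check; the offsets below are made-up examples, not your numbers:

```python
# starting offsets as reported by: wmic partition get Name, StartingOffset
offsets = {
    "Disk #1, Partition #0": 32256,    # made up: old 63-sector style offset, NOT aligned
    "Disk #2, Partition #0": 1048576,  # made up: 1 MiB offset, aligned
}
for name, off in offsets.items():
    ok = off % 4096 == 0
    print("%s: offset %d -> %s" % (name, off, "4K-aligned" if ok else "MISALIGNED"))
```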
 
kill the array, hook up only one drive as a JBOD passthrough, and see how the performance is then.

also, slap yourself for such a painful, dangerous configuration. my above suggestion is real, but heed my warning: your trouble with that system will not end here.

you probably don't even have a jumper on those EARS, so they're stuck running in their crazy 512-byte sector emulation mode with misaligned offsets. if I had to guess, people have run into this before, because those drives are cheap and tempting to use like this even though nobody at WD would ever have tested them that way.

I am not sure I understand what your suggestion is. The non-RAID write performance is fine.

Can you elaborate on this "jumper" you speak of?
 
Well, for starters, the WD Green drives ARE slow... and the read-modify-write that RAID 5 requires is exactly where the Green drives are particularly slow.

It's very possible that's the best you're going to get without changing hardware, or changing your RAID level to 10.
 
Why not just enable write back temporarily to see if having it enabled affects performance? The battery doesn't add speed; it only matters if you lose power while there's data waiting in the cache to be written.
 
Why not just enable write back temporarily to see if having it enabled affects performance? The battery doesn't add speed; it only matters if you lose power while there's data waiting in the cache to be written.

Thanks, I did not know that; will try it.

Also trying the jumper on pins 7-8 of the EARS drives; I did a bit of googling and it seems like it might fix the alignment issue.
 
OK, with pins 7-8 jumpered on the EARS drives and write back forced on, I am now getting acceptable performance of 62MB/s when transferring large files.

With smaller files I am getting around 70-80MB/s, which is excellent for this setup. I mean, it's not amazing, but for green drives it's acceptable, and all this setup needs to do is store files and stream media to another PC across the network, nothing too heavy duty.

I think I am actually maxing out the read on the single drive I am copying from, because if I copy directly to my OS drive instead I get the exact same speed.

All in all I think the problem is resolved.

Thanks to everyone for their suggestions.
 
happy to help. thank god you have a real raid card. this problem ticket could have gone much worse.
 