Extremely low write speed on an onboard Intel controller

jadams

http://www.newegg.com/Product/Product.aspx?Item=N82E16813153212

That's the board. I have two of these. The first one is running two drives in a RAID1 mirror. Good speed, approximately 150MB/sec read and write.

The second one is a RAID5 array. Read speeds are also around 150MB/sec, but the write speed is deplorable. I'm getting MAYBE 17MB/sec if I'm lucky. Could the parity really be causing an almost 90% reduction in speed? This doesn't sound right at all. I know software RAID normally delivers less-than-desirable performance, but this is BAD. None of the comments on the Newegg page mention RAID5. People have used 1 and 0 without issue, and even I'm using RAID1 without issue.

Intel RAID controller says the array is in good condition and the drives are healthy.

Ideas?
 
The link says the board uses the ICH9R, which is becoming somewhat dated at this point. The P35 board I bought back in ~2007 for my Q6600 rig also used the ICH9R.

What version of the Intel software are you using?

Depending on what OS you are using, you might try setting the drives up as individual drives and using software RAID via Disk Management instead. Obviously software RAID isn't ideal, but it would be hard to do worse than 17MB/sec.
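If you go that route, you can build the array in Disk Management, or script it with diskpart from an elevated prompt, roughly like this (the disk numbers are just examples, check yours with "list disk" first; also note that creating RAID-5 dynamic volumes is only supported on Server editions of Windows, client versions only offer mirrors and stripes):

    diskpart
    list disk
    select disk 1
    convert dynamic
    select disk 2
    convert dynamic
    select disk 3
    convert dynamic
    create volume raid disk=1,2,3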
 
Intel Rapid Storage Technology 12.5.0.1066. I believe that's the newest they have.

Some Google searches lead me to believe I should disable caching to improve performance. Not too sure I want to do that.
 
Setting cache mode to WRITE BACK nets me 40MB/sec. It also bumped read speeds up to 180MB/sec. Still pretty low.
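(If anyone wants to sanity-check these numbers without a benchmark tool, a crude Python timing loop along these lines does the job; the path and sizes are placeholders, point it at the RAID volume:)

    import os, time

    PATH = "D:/bench.tmp"      # placeholder: a file on the RAID volume
    CHUNK = 1024 * 1024        # write in 1 MiB chunks
    TOTAL = 512 * CHUNK        # 512 MiB total

    buf = os.urandom(CHUNK)
    start = time.time()
    with open(PATH, "wb", buffering=0) as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        os.fsync(f.fileno())   # force it to the array, not just the OS cache
    elapsed = time.time() - start
    print(f"{TOTAL / elapsed / 1e6:.1f} MB/s")
    os.remove(PATH)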
 
I've played around with RST RAID5 before and drawn the conclusion that it's just terrible. I saw exactly the same results as you, and nothing I could do would improve it to an acceptable level. God knows how they've managed to make such a mess of it; mdadm is night and day by comparison.

How many disks are you using, and what stripe size? It made a big difference in some test configs I tried; six disks was a no-no, that's for sure. Dropping back to four disks made a decent difference, but it still wasn't right.
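For reference, the mdadm equivalent is a one-liner, something like this (the device names and chunk size are just examples):

    mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=128 /dev/sdb /dev/sdc /dev/sdd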
 
You may have better luck using Windows software RAID 5. You can also enable the system to use memory as a cache, which will give you a boost in perceived performance. This allows writes to complete faster, since they are buffered into memory and then slowly written to disk. But yes, the speed is going to be bad writing to RAID 5 without a write cache. You can enable caching through the Policies tab on the storage controller. Keep in mind that if the system loses power you will lose the data you are writing, unless you have a UPS (good RAID cards have a battery-backed write cache).
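The reason small writes are so expensive on RAID 5 is the read-modify-write cycle: to update one block, the controller reads the old data and the old parity, XORs the old data out and the new data in, then writes both back, so one logical write costs four I/Os. A toy Python sketch of the parity update (illustration only, obviously not what the driver actually runs):

    # RAID 5 partial-stripe update: new_parity = old_parity XOR old_data XOR new_data
    def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
        return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

    # Three-"disk" demo: two data blocks plus their parity
    d0 = b"\x01\x02\x03\x04"
    d1 = b"\x10\x20\x30\x40"
    parity = bytes(a ^ b for a, b in zip(d0, d1))

    new_d0 = b"\xaa\xbb\xcc\xdd"
    parity = update_parity(parity, d0, new_d0)  # 2 reads + 2 writes on a real array
    assert parity == bytes(a ^ b for a, b in zip(new_d0, d1))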
 
I've played around with RST RAID5 before and drawn the conclusion that it's just terrible. I saw exactly the same results as you, and nothing I could do would improve it to an acceptable level. God knows how they've managed to make such a mess of it; mdadm is night and day by comparison.

How many disks are you using, and what stripe size? It made a big difference in some test configs I tried; six disks was a no-no, that's for sure. Dropping back to four disks made a decent difference, but it still wasn't right.

Three disks. Data stripe size is 128K. Should I lower this?
 
You may have better luck using Windows software RAID 5. You can also enable the system to use memory as a cache, which will give you a boost in perceived performance. This allows writes to complete faster, since they are buffered into memory and then slowly written to disk. But yes, the speed is going to be bad writing to RAID 5 without a write cache. You can enable caching through the Policies tab on the storage controller. Keep in mind that if the system loses power you will lose the data you are writing, unless you have a UPS (good RAID cards have a battery-backed write cache).

Turning off the write cache actually doubled my speed from 18MB/s to ~40MB/sec.

This is what I currently have in the RST utility:

http://i.imgur.com/Ypp46O1.png
 
Turning off the write cache actually doubled my speed from 18MB/s to ~40MB/sec.

I thought you enabled the write cache. That is what write-back means: writes to the array go to the cache before being flushed to disk later.
 
I thought you enabled the write cache. That is what write-back means: writes to the array go to the cache before being flushed to disk later.

Sorry, you're right. Going to edit that post.

Turning on WRITE BACK actually nets me 45-50MB/s write and 201MB/s read. Getting there...

Two questions I guess...

1. Would decreasing the stripe size speed things up? I assume this requires a rebuild.
2. What is it using as cache? System memory? It's currently only using 50% of the 2GB of DDR2 RAM in there. I don't see RAM utilization going up when I run the disk benchmarks.

The utility talks about using an SSD for caching, which I'm going to look into. If that works, I'd like to use this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820148610

The mobo has a mini PCIe connector, and using this would keep me from tying up a SATA port, leaving room to expand later.

EDIT:

I don't think that's going to work. The documentation states that I don't meet the hardware requirements for accelerating an array with an SSD. This sucks...
 
1. Would decreasing the stripe size speed things up?

This should help with random writes.
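Rough math for a three-disk array at 128K: a full stripe is 2 x 128K = 256K of data plus one 128K parity block. A write that covers the whole 256K lets the controller compute parity from data it already has (three writes, no reads). Anything smaller degenerates into read-modify-write: read the old data, read the old parity, write the new data, write the new parity, so four I/Os to land one write. With a smaller stripe, more of your everyday writes span a full stripe and dodge that penalty.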

2. What is it using as cache? System memory?

Yes, this will be system memory. You probably have no control over how big the cache is.

I don't see RAM utilization going up when I run the disk benchmarks.

It may be preallocated, something like 64MB or 128MB.
 