Slow Write Performance of Two SSDs in RAID 0

raminux

I have two Crucial M4 128GB SSDs in RAID 0 on an IBM M5015 RAID controller with 512MB of onboard RAM for cache.

I did a benchmark and here is the result (screenshots from CrystalDiskMark):

http://www.flickr.com/photos/raminolta/8411969685/in/photostream/

I am disappointed with the write results (this is a RAID 0 of two SSDs) and I am wondering why they are so low.


In comparison, this is the result from a 256GB Samsung 840 Pro Series installed in an Asus Zenbook Prime:

http://www.flickr.com/photos/raminolta/8411969613/in/photostream

A single drive is getting better write performance than two in RAID 0! I know the Samsung is a faster SSD than the Crucial but, again, in RAID 0 I expect them to perform at least as well as one Samsung alone.

Why do you think this is the case? I have enabled caching in the RAID controller firmware. Could that be the reason for the slowness?

Thanks.
 
I don't know how flexible your installation is right now, but it seems like that motherboard has two native SATA 6Gb/s ports from Intel, right? Could you move the drives over to those and recheck the numbers? The problem could be the controller board, as it appears to be SATA 2 and may be holding the benchmark numbers down.
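One thing to keep in mind when checking that: a SATA link's usable bandwidth is its line rate minus the 8b/10b encoding overhead, and each drive sits on its own port, so the per-port limit is what matters. A quick sketch of the arithmetic (Python):

Code:
# Usable SATA bandwidth per port, after 8b/10b encoding (Python 3).
for name, gbps in [("SATA II", 3.0), ("SATA III", 6.0)]:
    usable = gbps * 1e9 * 8 / 10 / 8 / 1e6  # 8 data bits per 10 line bits
    print(f"{name}: ~{usable:.0f} MB/s usable per port")
# Even SATA II's ~300 MB/s per port wouldn't cap a single m4's ~175 MB/s
# writes, so a SATA II link alone can't explain slow per-drive write numbers.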
 
Enabling write cache from your controller would increase performance, not decrease it.

If it's filled with many small files, that may be why, considering those Micron drives don't write all that fast.
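If you want to see that effect for yourself, here's a rough sketch (Python; the test directory path is just a placeholder, point it at the array) that times many small synced writes against one big sequential write:

Code:
# Rough small-file vs. sequential write comparison (Python 3).
# TESTDIR is a placeholder -- point it at a directory on the array.
import os, time

TESTDIR = r"D:\ssd_test"   # hypothetical mount point on the array
SMALL = 4 * 1024           # 4 KiB per small file
COUNT = 2000               # ~8 MiB total across small files
BIG = 256 * 1024 * 1024    # one 256 MiB sequential file

os.makedirs(TESTDIR, exist_ok=True)
buf = os.urandom(SMALL)

t0 = time.perf_counter()
for i in range(COUNT):
    with open(os.path.join(TESTDIR, f"small_{i}.bin"), "wb") as f:
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force it to the device, not just the OS cache
small_mbs = SMALL * COUNT / (time.perf_counter() - t0) / 1e6

t0 = time.perf_counter()
with open(os.path.join(TESTDIR, "big.bin"), "wb") as f:
    f.write(os.urandom(BIG))
    f.flush()
    os.fsync(f.fileno())
big_mbs = BIG / (time.perf_counter() - t0) / 1e6

print(f"small files: {small_mbs:.1f} MB/s   sequential: {big_mbs:.1f} MB/s")

The gap between those two numbers is large on pretty much any drive; the point is just to see how your particular array behaves on each pattern.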
 
I don't know how flexible your installation is right now, but it seems like that motherboard has two native SATA 6Gb/s ports from Intel, right? Could you move the drives over to those and recheck the numbers? The problem could be the controller board, as it appears to be SATA 2 and may be holding the benchmark numbers down.

From my experience, if I move the drives to another controller, I may lose the RAID configuration (the RAID metadata may depend on the controller). I can still try it.

The M5015 is based on the LSI 9260 and has SATA 6Gb/s ports, from what I have learned.
 
Enabling write cache from your controller would increase performance, not decrease it.

If it's filled with many small files, that may be why, considering those Micron drives don't write all that fast.

This is only for the system drive. My data is on an HDD RAID array connected to the same controller.
 
One question: does garbage collection run automatically, or should I run it? If it has to be done by me, how can I do it?

Thanks.
 
I have looked at a few sites and they all show the IBM card as SATA300 or SATA 2 only, even though the LSI chipset is supposedly SATA 3 or 6Gbps capable. It could be a problem with the controller, or something to do with the write caching skewing the benchmark numbers. Is this your card:

http://www.amazon.com/dp/B003GDU7Y2/ref=asc_df_B003GDU7Y22365872?smid=A10XTVRAJRWPGO&tag=cnet-pc-20&linkCode=asn&creative=395105&creativeASIN=B003GDU7Y2

I also looked around and found a few sources that mention SATA 2, but the majority state it is SATA 3. The IBM webpage states SATA 3:

https://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0738.html?OpenDocument#contents

Also at Lenovo:

http://support.lenovo.com/en_CA/product-and-parts/detail.page?&LegacyDocID=MIGR-73908

and

http://www.scsi4me.com/ibm-serveraid-m5015-pci-express-8-port-2x-sff-8087-sas-raid-controller-with-512mb-cache-and-battery.html

And since it has the same hardware as the SAS9260-8i, it is possible to cross-flash the firmware and convert it into a SAS9260-8i, which is what I did last night. So the card is now effectively a SAS9260-8i. This cut the card's initialization time in half, which is great, though I don't think it had any impact on disk performance.
 
You just have to make sure TRIM is enabled in the firmware.

And looking here, this is why your Crucials write slowly. Those drives have very slow writes:

http://www.crucial.com/store/partspecs.aspx?IMODULE=CT128M4SSD2

Sustained Sequential Write Up to 175 MB/s (SATA 6Gb/s)

From what I have learned, TRIM doesn't work in a RAID configuration. But it must be the case that these disks have slow writes. I did a set of tests which I will write up in detail soon.
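For what it's worth, you can at least check whether Windows itself is issuing TRIM. A small sketch (Python, Windows only) that queries the real fsutil setting; note it says nothing about whether the RAID controller actually passes TRIM through to the member disks:

Code:
# Query Windows' TRIM (delete notification) setting (Python 3, Windows only).
# DisableDeleteNotify = 0 means the OS issues TRIM commands; whether the
# M5015/9260 forwards them to the member disks is a separate question.
import subprocess

out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())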
 
As I guessed, changing the RAID settings makes the disks inaccessible. Fortunately, I had a backup of my personal data; what remained was the pain of having to reinstall the OS and all applications. However, I took the opportunity to do some tests in the meantime. As we know, TRIM doesn't work in RAID setups, and M4s are not exactly famous for their garbage collection performance (according to AnandTech). So their write performance degrades after a while, particularly if they are in a RAID array.

I did a secure erase before each test, ensuring the disks were clean. Secure erasing and tweaking the RAID controller settings boosted the performance quite a lot.

RAID controller is now a SAS9260-8i (after cross-flashing the firmware).
2 x Crucial M4 128GB in RAID 0 (secure erased, clean install)
Stripe size: 32k
Read: Normal
Disk Cache: Enabled
I/O: Cached
Default Write: Always Write Back

Here is the result:

http://www.flickr.com/photos/raminolta/8420101973/in/set-72157630070096636

It looks like this is the best I can achieve. I also updated the M4s' firmware to the latest version, but that didn't make a difference.

I also tested whether turning off the Windows 8 write-cache buffer flushing would make a difference, but it did not appear to have any significant impact.

In another test, I removed the disks from the LSI 9260-8i and connected them to the onboard SATA 3 ports. This is a Marvell SATA/RAID controller that is somewhat infamous for weak performance, and my test confirmed this. I did a secure erase and clean install once again; moving the disks between the two controllers makes the data on them inaccessible, so I had to do a reinstallation anyway. Here is the result:

http://www.flickr.com/photos/raminolta/8420101645/in/set-72157630070096636/

This is quite a bit weaker except for the cached 4K result. It should confirm that the limitation is not in the SATA 2 or SATA 3 ports.
 
Enabling write cache from your controller would increase performance, not decrease it.
This isn't always true on LSI cards with SSDs; it needs testing with your specific workloads. Some LSI-based controllers actually perform better in Write Through mode than in Write Back, and also with Read Ahead set to off.
 
I think the latest Intel RST drivers under Windows 8 might support TRIM in RAID configurations now. I don't have a setup like that, so I'm not really sure to be honest, but that's what I have heard and understand.
 
A RAID controller (not an HBA) with onboard cache should always achieve higher transfer rates with write-back caching than without (somewhat dangerous without a BBU). The onboard cache will, however, become less and less effective as the overall data size grows.

Intel RAID 0 is the only hardware-based (it's not real hardware) RAID 0 solution with TRIM. RAID 0 will not increase random-access transfer rates. The sequential transfer rates you achieved are fine for M4s. After all, the 840 Pro is the fastest consumer drive you can currently get.

Do not use the Marvell ports for SSDs if it is a PCIe x1 controller. It will be worse than the Intel 3Gb/s ports.
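To illustrate why RAID 0 helps sequential transfers but not random access, here is a toy sketch of the address mapping (the stripe size and drive count just match the array discussed above):

Code:
# Toy RAID 0 address mapping (Python 3): which member disk serves a given offset.
STRIPE = 32 * 1024  # 32 KiB stripe, as in the array above
DRIVES = 2

def member(offset):
    """Drive index holding the byte at `offset`."""
    return (offset // STRIPE) % DRIVES

# A large sequential transfer touches both drives about equally...
print([member(o) for o in range(0, 256 * 1024, STRIPE)])  # 0,1,0,1,...

# ...but a single 4 KiB random request fits inside one stripe, so only one
# drive services it: per-request latency and low-queue-depth IOPS don't improve.
print(member(1024 * 1024))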
 
This isn't always true on LSI cards with SSDs; it needs testing with your specific workloads. Some LSI-based controllers actually perform better in Write Through mode than in Write Back, and also with Read Ahead set to off.

That's either because of the LSI cards' quirkiness or because they're old cards. The old LSI cards aren't built to handle SSD workloads.

An SSD puts out way, way more IOPS than an HDD; there's no comparison. As such, SSDs require capable controllers to get proper performance out of them. That old controller you bought for $50 off eBay may not cut it...

A RAID controller (not an HBA) with onboard cache should always achieve higher transfer rates with write-back caching than without (somewhat dangerous without a BBU). The onboard cache will, however, become less and less effective as the overall data size grows.

This. The onboard cache is much faster than the flash chips used in the SSDs.

It's the same reason SandForce-based drives don't perform as well on incompressible data as they do on compressible data: they have no cache, and they rely heavily on SandForce's proprietary compression for their speed.
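You can see the difference with a couple of lines of Python: repetitive data compresses to almost nothing, while random data (what CrystalDiskMark writes by default) doesn't compress at all, so a compressing controller gets no help from it:

Code:
# Compressible vs. incompressible data (Python 3) -- the effect a
# SandForce-style compressing controller depends on.
import os, zlib

block = 1024 * 1024
zeros = bytes(block)             # highly compressible
random_data = os.urandom(block)  # incompressible, like a benchmark's random fill

print("zeros  ->", len(zlib.compress(zeros)), "bytes")        # ~1 KB
print("random ->", len(zlib.compress(random_data)), "bytes")  # ~1 MiB, no gain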

And I'd also like to add that I would never turn on write caching without a BBU, considering what it may cause.

Say your machine locks up while you're installing Windows updates; you wait and wait, but it doesn't move. So you go ahead, hit the switch, and restart it. What may happen in such a case? A corrupt Windows installation.
 
I wanted to add that you cannot compare the write rates of 128 GB SSDs with those of 256 GB SSDs. Write rates suffer the most at smaller drive sizes.

If you plan to keep the controller's write-back cache on, you should really get a BBU. While modern (journaling) filesystems can protect their metadata against power failures, a volatile write cache will practically circumvent any journaling and may result in corrupt filesystems, not just corrupt data.
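The journaling point is really about write ordering: the filesystem assumes that once a flush returns, the journal record is on stable media before the data it describes. A minimal sketch of that ordering (Python, hypothetical file names) shows the guarantee a volatile write-back cache can silently break by acknowledging flushes while the blocks are still in RAM:

Code:
# Minimal journaled-write ordering sketch (Python 3, hypothetical file names).
import os

def journaled_write(journal_path, data_path, payload):
    # 1. Record the intent and flush it to stable media FIRST.
    with open(journal_path, "ab") as j:
        j.write(b"BEGIN " + data_path.encode() + b"\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. Only then write the data itself.
    with open(data_path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())
    # 3. Mark the transaction complete.
    with open(journal_path, "ab") as j:
        j.write(b"COMMIT\n")
        j.flush()
        os.fsync(j.fileno())

# A write-back cache without a BBU can report those fsyncs done while the
# blocks are still in volatile cache RAM; lose power at the wrong moment and
# the journal no longer matches what actually reached the disks.
journaled_write("journal.log", "data.bin", b"hello")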
 
128, 256, 512: it doesn't matter in this case.

All Micron-made drives write slowly. It's normal for these drives; even the P300 SLC writes slowly, and the P400 SLC is also going to write slower than the competition, at around 350 MB/s.
 
Thanks for the info. I switched back to 'write through', though I may buy a BBU.
 
128, 256, 512: it doesn't matter in this case.

My statement was directed more at the OP. If you compare the manufacturers' specs (write rates):

840 Pro 256 GB: 520 MB/s
840 Pro 128 GB: 390 MB/s

m4 256 GB: 260 MB/s
m4 128 GB: 175 MB/s

You would need 3-4 of the m4 128 GB drives to come close to the 840 Pro 256 GB drive. And that does not account for any diminishing returns the RAID may have.
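Putting rough numbers on that, using the spec-sheet write rates above and assuming ideal RAID 0 scaling (which real arrays never quite reach):

Code:
# Back-of-the-envelope RAID 0 write scaling from the spec-sheet numbers above.
# Assumes perfect scaling, which real arrays never quite reach.
import math

m4_128_write = 175  # MB/s, Crucial m4 128 GB spec
target = 520        # MB/s, Samsung 840 Pro 256 GB spec

print(f"ideal 2x m4 128GB RAID 0: {2 * m4_128_write} MB/s")
print(f"m4 128GB drives to match one 840 Pro 256GB: {math.ceil(target / m4_128_write)}")

Two m4s top out around 350 MB/s even in theory, which is why even a perfect two-drive array can't catch a single 840 Pro.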
 
From what I have learned, TRIM doesn't work in a RAID configuration. But it must be the case that these disks have slow writes. I did a set of tests which I will write up in detail soon.

Actually, that depends on which controller you use. If you use Intel RST for your array, then TRIM does work with RAID beginning with the 11.3 OROM and 11.5 drivers, I believe. I had to patch the BIOS on my motherboard to upgrade its OROM. TRIM seems to be working well with my array! The M4s were pretty slow in an array for me as well; not much of an improvement over my old C300s. That's one of the reasons I upgraded to the Samsung Pros.

Here are the Samsungs in RAID 0. I can probably get better scores with a clean install.

[screenshot: crystaldisksamsung940pr.jpg]
 
That's either because of the LSI cards' quirkiness or because they're old cards. The old LSI cards aren't built to handle SSD workloads.

An SSD puts out way, way more IOPS than an HDD; there's no comparison. As such, SSDs require capable controllers to get proper performance out of them. That old controller you bought for $50 off eBay may not cut it...

Well, for whatever it's worth, the Dell R610 is a current shipping product and what I stated is in Dell's documentation. But feel free to take your worldly position up with Dell's engineering group if you like. :D
 