LSI RAID Controller help, 2308 chip on-board ASRock Extreme 11 motherboard is slow!

Vega

Requesting some help from those who know about LSI controllers. I'm using the 2308 controller chip found on the ASRock Extreme 11 motherboard, and I have eight 128 GB OCZ Vertex 4 SSDs (firmware 1.5) set up in RAID 0.

This is the very poor speed I am getting:

[Screenshot: AS SSD benchmark of the 8x Vertex 4 array on the LSI logical volume]



This is the speed I got with a mere two 128 GB Vertex 4s on Z77 Intel RAID 0:

[Screenshot: AS SSD benchmark of 2x Vertex 4 in RAID 0 on Z77]




Both use a 64 KB stripe.

Pretty much the only numbers that look correct on the LSI setup are the 4K read and the access times. Everything else looks very slow. The 4K write is abysmal, and the sequential speed is far below what eight M4s got here using the same controller chip (4,180 MB/s seq read):

http://.com/our-reviews/sata-3/lsi-sas-9207-8i-pcie-3-0-host-bus-adapter-quick-preview/5


These are the settings:

[Screenshots: LSI BIOS configuration screens, plus the Eval Board Properties and Virtual Drive Properties pages from the MegaRAID utility]



Now, in that last screenshot above, it shows disk cache enabled (which doesn't affect the speed in before-and-after runs), read policy set to none, and write policy set to write-through.

But when I go to change any of those items by right-clicking the virtual drive properties, the only option in the window is disk cache policy.

According to this site: http://kb.lsi.com/KnowledgebaseArticle16553.aspx

There are a lot more options there that I don't have. What gives?

Anyone have a clue why this performance is so poor?

Am I supposed to buy some of this software, for hundreds and hundreds of dollars, located here: http://store.lsi.com/store.cfm/Advanced_Software_Options/

Just to get my LSI controller speed working properly? :eek:
 
Firmware looks a little old on that controller, I thought 14 was out? Asrock have any updates?

Edit: just looked on Supermicro's FTP, they're using the same firmware, released on May 22nd.

I'm also seeing stuff on the OCZ forums about the Vertex 4's not playing nicely with LSI cards, did you post over there yet?
 
Your write policy has to be write-back if you want write caching. You can force write-back on PCI Express RAID controllers without a battery; otherwise it requires a battery. Obviously you cannot have a battery on your motherboard controller, so if there's no option to force write-back, you're going to have to use write-through.

Write caching is a bad idea either way. Unless you have a UPS (and even then it's a bad idea because you can crash), you're just going to lose massive amounts of data one day using write caching. Write-through is the correct policy for SSDs.

If you really insist on write-back though, you can force it through software. Go download FancyCache for Volumes, http://www.romexsoftware.com/en-us/fancy-cache/ and enable write caching. That will defer writes to make your benchmarks look great while your data sits in RAM waiting to be written.

As for the listings in Advanced Software Options, I'm pretty sure none of those are offered for your onboard LSI chip. You won't have to spend any money at the LSI store.

As for the sequential speed topping out at 2 GB/s, that comes down to the number of lanes dedicated to the LSI controller. My guess is it's getting x4 from your motherboard. The PCI Express cards get x8, which is why they got exactly double what you did. You won't be able to increase that one.
 
That card is connected using 8 PCIe 3.0 lanes, should be plenty of bandwidth. I'm a little suspect of the Vertex 4s.
 
That card is connected using 8 PCIe 3.0 lanes, should be plenty of bandwidth. I'm a little suspect of the Vertex 4s.

I guess that could be the case. I really don't think firmware 1.5 drives would cause half-speed problems, but I guess we'll see.
 
FWIW, Asrock marketing for the board makes this claim:

"the X79 Extreme 11 also comes with ten SATA3 ports, eight of which are SAS ports sourced from LSI SAS 2308 controller. With the use of LSI MegaRAID utility, users will be able to get high transfer speeds of up to 3.8GB/s through the use of eight SSDs in RAID 0 modes"

No information I could find on exactly which SSDs they were using. It would not surprise me one bit to find out that OCZ is doing some trickery on the Vertex 4 to make benchmarks look great on Intel/AMD controllers, and that trickery is causing the drive to run horribly when connected to a real controller.
 
That card is connected using 8 PCIe 3.0 lanes, should be plenty of bandwidth. I'm a little suspect of the Vertex 4s.

Why would you suspect the V4s? Look how fast they are in the two-drive RAID 0 config on Z77. The overall AS SSD score is lower with the LSI with eight drives!
 
Why would you suspect the V4s? Look how fast they are in the two-drive RAID 0 config on Z77. The overall AS SSD score is lower with the LSI with eight drives!

Because the Vertex 4s have had issues with LSI controllers in the past. LSI owns SandForce; that's where I would be looking, or at the M4s, which have already been tested.
 
An easy way to prove the theory would be to grab four M4s, create two 4-drive volumes, and compare benchmarks.
 
Firmware looks a little old on that controller, I thought 14 was out? Asrock have any updates?

Edit: just looked on Supermicro's FTP, they're using the same firmware, released on May 22nd.

I'm also seeing stuff on the OCZ forums about the Vertex 4's not playing nicely with LSI cards, did you post over there yet?

I will try and hunt some of those down. Got any quick links by chance? :D

Your write policy has to be write-back if you want write caching. You can force write-back on PCI Express RAID controllers without a battery; otherwise it requires a battery. Obviously you cannot have a battery on your motherboard controller, so if there's no option to force write-back, you're going to have to use write-through.

Write caching is a bad idea either way. Unless you have a UPS (and even then it's a bad idea because you can crash), you're just going to lose massive amounts of data one day using write caching. Write-through is the correct policy for SSDs.

If you really insist on write-back though, you can force it through software. Go download FancyCache for Volumes, http://www.romexsoftware.com/en-us/fancy-cache/ and enable write caching. That will defer writes to make your benchmarks look great while your data sits in RAM waiting to be written.

As for the listings in Advanced Software Options, I'm pretty sure none of those are offered for your onboard LSI chip. You won't have to spend any money at the LSI store.

As for the sequential speed topping out at 2 GB/s, that comes down to the number of lanes dedicated to the LSI controller. My guess is it's getting x4 from your motherboard. The PCI Express cards get x8, which is why they got exactly double what you did. You won't be able to increase that one.

I have a UPS and have run with write-back caching on all the time and never had much issue, even when the PC would crash due to a high GPU overclock, etc. But that still wouldn't explain the very poor sequential read speed. I will try that software you posted and see what I get. Are you familiar with any of that software on the LSI store? I've read some people say it makes a huge difference, especially fastpath. Seems silly though that you would have to spend that much money on software just to get a RAID controller working "properly". I verified in the BIOS that the LSI RAID chip is running at PCIe 3.0 x8, so I should be good to go for 4000+ MB/sec.

Here's someone else who doesn't seem to be getting very good speeds, and that's with a 9265: http://www.ocztechnologyforum.com/f...03448-8-x-Vertex-4-128-GB-on-LSI-9265-i8-raid

That guy's speed doesn't seem to be the best but it is still way faster than mine! :mad:

An easy way to prove the theory would be to grab four M4s, create two 4-drive volumes, and compare benchmarks.

And an expensive and time-consuming one, lol. I will try the RAID array with 2 drives, then 4, then 6, etc., to see how they scale.
 
It's strange, the config BIOS for the 2308 on the Extreme 11 is very basic and there are zero options to set a stripe size. It defaults to 64 KB only, according to the documentation I found online.

You think it's safe to flash to that firmware with it being integrated? I know they both use the 2308, just curious.
 
Are you familiar with any of that software on the LSI store? I've read some people say it makes a huge difference, especially fastpath.

Yes, I am familiar with the software. I don't know if they actually sell keys for the integrated HBA controllers. Yours isn't listed, only various PCI Express cards in the MegaRAID line. If they did though, the only software that would apply to you would be FastPath.

I wouldn't worry too much about FastPath; just try to get that max sequential issue fixed for now.
 
It's really weird that you can't adjust the stripe size. Are you booting off of it yet? If not, can you adjust it with the MegaRAID software?

If it lets you flash it with the LSI software, it should be OK to do; it's basically only the controller you're worried about, and people crossflash the LSI and OEM firmware all the time.

Also, that's why that 9265's results are higher in the 4K: it's because he has FastPath.
 
No, MegaRAID will not allow me to adjust the stripe size. I am booting off the array. As far as I know, the only time you can adjust stripe size is at volume creation; otherwise the data would get destroyed. But like I said, there are zero options to adjust stripe size. I'll post a few pics here shortly.

As for fastpath, it basically sounds like I need to purchase that in order to get the most out of the LSI controller eh? Not sure why it costs so much and/or why ASRock doesn't include it.
 
Fastpath isn't compatible with it, I don't believe. You need to step up to the higher end cards to get it.

Install windows on another drive connected to an Intel port. That way you can test the array's settings and how they affect performance using the MegaRAID utility. Once you have it the way you want it, then you can go back and install windows on the array.
 
Fastpath isn't compatible with it, I don't believe. You need to step up to the higher end cards to get it.

Install windows on another drive connected to an Intel port. That way you can test the array's settings and how they affect performance using the MegaRAID utility. Once you have it the way you want it, then you can go back and install windows on the array.


The only thing is, the MegaRAID utility only has one setting I can change.

Below are screenshots of how ASRock implemented this chip and how I configured it in RAID.

[Photos of the LSI 2308 configuration BIOS screens showing the array setup]


The BIOS of the ASRock motherboard is P1.10.

I take it the configuration utility in my screen shots is HBA and not WebBIOS?

I noticed the MegaRAID utility doesn't allow me to change the write policy and keeps it at write-through. Is there a way to force that to write-back? (I have a UPS for the computer, but the motherboard LSI chip does not have a battery backup.) I only have one option (Disk Cache Policy) and not all of the options found here:

http://kb.lsi.com/KnowledgebaseArticle16553.aspx

Under MegaRAID my settings are: Access Policy: Read/Write, Disk Cache Policy: Enabled (the only setting I can change), Read Policy: No Read Ahead, IO Policy: Direct IO, Write Policy: Write Through.

Also, in the BIOS configuration utility there is absolutely nowhere to change the stripe size during volume creation. Is this on purpose? It only uses the default 64 KB. Some say my low performance may be because I am not using a 128 KB stripe, but I highly doubt that, as 64 KB has done really well in mixed-use scenarios in recent RAID tests versus other stripe sizes.
 
I don't like the fact it's labeled as Eval Board, that just seems off, same with not having a firmware build time listed.

It would be nice to see what options you have for creating the array from the MegaRAID utility instead of through its BIOS. Do you have a spare drive you can throw Windows on, load up the MegaRAID utility, and then mess around with this array using it?
 
All LSI HBA cards have a fixed 64 KB stripe size.
And this card has no onboard cache, so only the write-through policy is possible.
 
I don't like the fact it's labeled as Eval Board, that just seems off, same with not having a firmware build time listed.

It would be nice to see what options you have for creating the array from the MegaRAID utility instead of through its BIOS. Do you have a spare drive you can throw Windows on, load up the MegaRAID utility, and then mess around with this array using it?

I just may have to try that. I am in convo with an LSI tech too to see what he says. He agreed though upon initial inspection that the speeds are too low.

All LSI HBA cards have a fixed 64 KB stripe size.
And this card has no onboard cache, so only the write-through policy is possible.

So are you saying this LSI implementation kinda sucks and this is the best speed I will be getting?
 
I just may have to try that. I am in convo with an LSI tech too to see what he says. He agreed though upon initial inspection that the speeds are too low.

I'd ask the tech why it's labeled as an eval board, why it doesn't have any NVRAM size listed, why the firmware package version is all 0s, and why there isn't a firmware build time. I wonder if you don't have half-baked firmware on that thing, or if the MegaRAID software isn't recognizing it correctly.
 
The 9207 review the OP linked to was done with 8 M4s in IT mode with JBOD, i.e., not with RAID 0.

If you run the Vertex 4s in RAID 0 with soft RAID through Windows, you'd see similar results; that you can try easily. Just ditch the volume (and pass through each drive individually), then go to Windows' Disk Management and configure the eight V4 128s into one striped volume. When you run the benchmarks, it should be similar. You're not going to get 4.2 GB/s combined reads in R0 with the HBA with either its RAID management or soft RAID. However, if you run IOmeter on all 8 volumes separately, you'll see higher numbers; you'll see how many IOPS and how much bandwidth you can get combined. HBAs aren't RAID cards, but the SAS2308 can do R0, R1, R10, etc. Secondly, before you do anything, run IOmeter on the volume you have now. Run sequential reads and writes, 4K aligned, at queue depths greater than 1 and see what happens.
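If you go the Windows soft-RAID route, the striped volume can also be built from the command line instead of clicking through Disk Management. A rough sketch using a diskpart script, assuming the eight Vertex 4s show up as disks 1 through 8 and E: is free (the disk numbers and drive letter are assumptions; check with "list disk" first, and note this wipes those disks):

Code:
rem stripe.txt -- run with: diskpart /s stripe.txt
rem WARNING: destroys all data on the selected disks
select disk 1
clean
convert dynamic
select disk 2
clean
convert dynamic
rem ...repeat the clean / convert dynamic block for disks 3 through 8...
create volume stripe disk=1,2,3,4,5,6,7,8
format fs=ntfs label=V4STRIPE quick
assign letter=E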
 
Vega, just got my board today and spent a half hour just staring at it and reading the manual that comes with it. I can't do any testing on mine as I don't have all the parts yet for my build.

This RAID issue you're having is making me nauseous and giving me ASUS Striker MB flashbacks.
I hope you can resolve the issue. I didn't buy my SSDs yet, and now I don't know what the hell to buy, Vertex 4s or M4s. I'm all new to RAID configs, but I wanted to try RAID 0 after seeing those crazy speeds these reviewers were getting. Vega, wish you luck getting up and running.

Ckryan, what is IT mode with JBOD, and how do I set this up with 8 SSDs on those LSI ports?
 
OK, so I re-flashed the firmware of all the V4s to 1.5 and secure erased them. I set two of them up on Intel X79 RAID 0 with a 64 KB stripe, and tested 2x, 4x, and 6x V4s on the LSI 2308 in RAID 0 with a 64 KB stripe (the only one possible, apparently).

For a refresher, this is 2x V4's on Z77 with write-back on:

[Screenshot: AS SSD benchmark of 2x Vertex 4 in RAID 0 on Z77]



This is 2x V4 on Intel X79 RAID 0 with write-back on:

[Screenshot: AS SSD benchmark of 2x Vertex 4 on X79 RAID 0, write-back on]


As you can see, Z77 always comes in quite a bit faster than X79 in RAID 0 for some reason. I've seen the same thing across many different motherboards. Anyone know why?


This is 2x V4's on X79 with write-back off:

[Screenshot: AS SSD benchmark of 2x Vertex 4 on X79 RAID 0, write-back off]


For these results, it seems like the only major difference between having write-back on and off is the 4k write speed.


Now all LSI tests below have write-back off seeing as you cannot enable it. 2x V4's on LSI 2308:

[Screenshot: AS SSD benchmark of 2x Vertex 4 on the LSI 2308, write-back off]


Pretty much a drop across the board versus X79 besides 4k read/write.


4x V4 on LSI:

[Screenshot: AS SSD benchmark of 4x Vertex 4 on the LSI 2308, write-back off]


Here seq write doubles, but seq read only increases by about 80% for some reason. 4K write actually decreases, and I don't know how, but 4K-64Thrd stays the same.


6x LSI:

[Screenshot: AS SSD benchmark of 6x Vertex 4 on the LSI 2308, write-back off]


Here six drives come in at around 280% the speed of two drives in seq write, but drop to only around 220% of two drives in seq read (300% being perfect, of course). No real change in 4K-64Thrd versus 2x and 4x. The only difference with 6x is a somewhat lackluster seq speed increase.


And the original 8x LSI test that got me questioning all this:

[Screenshot: AS SSD benchmark of the 8x Vertex 4 array on the LSI logical volume]


Seq write is doing pretty well, 367% out of a perfect 400% versus two drives. Seq read is much worse, only 290% versus two drives. Not sure why it's so low when running PCIe 3.0 x8.

Once again, 4K-64Thrd is pretty much the same as with two drives, measurably lower than X79 and Z77 with only two drives. I knew the 4K read/write speeds would not increase with RAID 0, but I surely thought 4K-64Thrd would.

So basically, there is no need for this LSI controller to be on PCIe 3.0 x8, as it doesn't come close to saturating PCIe 2.0 x8. Not sure how ASRock claims 3.8 GB/s using this controller.

All I can think of is maybe crappy firmware on the 2308 chip, some sort of bad drivers, or that this LSI chip is just not that good: too stripped down (no memory cache) and too integrated to really shine.

Here is a user with the exact same 8x Vertex 4 128 GB drives but using an LSI 9265-8i RAID card:

[Screenshot: AS SSD benchmark of 8x Vertex 4 128 GB on an LSI 9265-8i with FastPath]


Seq speeds are about identical, but his 4K write and 4K-64Thrd speeds destroy mine.

Any thoughts? Anything I can test while I have the OS on 2x V4s on X79 and 6x V4s on the LSI? If this is as fast as it's going to get, then for $1000 in drives and $250+ for the LSI controller I may think about reverting to something else. And apparently FastPath won't work on this chip, so that rules out any help there.

Hopefully the LSI tech can shed some light on the topic.
 
Remember, the SAS2308 is not like the full-on RAID processors. The big boy RAID cards have cache and more RoC horsepower under the hood. The SAS2008 can't really do the parity calculation needed by parity RAID levels, and the cache really helps too. Running an HBA in integrated RAID is more akin to using soft RAID through Windows. Also, when you use Intel RAID, it uses some system RAM as caching, which helps quite a bit as I understand it. If you want to see how much data can go through the LSI HBA chip at one time, set up each drive by itself, then run one thread on each drive in IOmeter. You'll get way more than 3.3 GB/s aggregate bandwidth.
 
Ya, basically using the LSI chip on this board is RAID "light". Do you guys think it's worth it with those speeds I am getting? It's like I am wasting the speed of all those V4s. Or should I just go back to a 2-drive Intel array? A real RAID card isn't an option as I use 4 GPUs.
 
Man, that sucks. Had high hopes for your RAID setup. How the hell is ASRock claiming 3.8 GB/s? Aggregate speed?
 
If you want to see if it's the RAID card, I highly suggest the following:

Configure the drives as JBOD on the RAID controller.
Boot an Ubuntu 12.04 live CD/USB stick.
apt-get install mdadm
Set up Linux software RAID 0 from your JBOD disks (rough sketch below).

This takes the RAID processor out of the picture. I'm going to bet you will see much higher speeds.

Obviously the benchmark will have to be something that is cross-platform....
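Roughly something like this from the live session; a minimal sketch, assuming the eight SSDs show up as /dev/sdb through /dev/sdi (the device names and chunk size are assumptions, so confirm with lsblk or fdisk -l first):

Code:
# create a striped (RAID 0) md device across the eight drives, 64K chunk to match the LSI default
sudo mdadm --create /dev/md0 --level=0 --raid-devices=8 --chunk=64 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
cat /proc/mdstat          # confirm the array is assembled
# quick-and-dirty sequential read test straight off the md device, bypassing the page cache
sudo dd if=/dev/md0 of=/dev/null bs=1M count=32768 iflag=direct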
 
One thing I forgot to ask: does anyone know if updating the LSI firmware destroys the array/data? If so, I won't waste my time installing everything before a firmware update attempt.
 
If you want to see if it's the RAID card, I highly suggest the following:

Configure the drives as JBOD on the RAID controller.
Boot an Ubuntu 12.04 live CD/USB stick.
apt-get install mdadm
Set up Linux software RAID 0 from your JBOD disks.

This takes the RAID processor out of the picture. I'm going to bet you will see much higher speeds.

Obviously the benchmark will have to be something that is cross-platform....

Why use JBOD when they advertise RAID 0?

Found this on JBOD at http://www.msexchange.org/articles_...lly-is-jbod-how-might-used-exchange-2010.html
Disadvantages of JBOD

No hardware increase in drive performance
There is an argument that JBOD can actually affect overall performance where multiple drives are in play, as it is more difficult for the drives to be used sequentially.

No redundancy
This is a major limitation of JBOD – if you lose the disk (in a single spindle JBOD) or one of the disks (using multiple drives) – you are heading back to your backups. If you have no backups then the data is gone!

Just on the basis of those two disadvantages, you might be asking the question – why consider a JBOD implementation in an Enterprise Exchange environment at all?

Well prior to Exchange 2010 many would agree with you wholeheartedly - however in Exchange 2010 the product team have managed (in certain configurations) to make JBOD a cost effective storage option.
 
One thing I forgot to ask: does anyone know if updating the LSI firmware destroys the array/data? If so, I won't waste my time installing everything before a firmware update attempt.

It shouldn't, just make sure the array is configured the same after the flash.
 
Why use JBOD when they advertise RAID 0?

Found this on JBOD at http://www.msexchange.org/articles_...lly-is-jbod-how-might-used-exchange-2010.html
Disadvantages of JBOD

No hardware increase in drive performance
There is an argument that JBOD can actually affect overall performance where multiple drives are in play, as it is more difficult for the drives to be used sequentially.

No redundancy
This is a major limitation of JBOD – if you lose the disk (in a single spindle JBOD) or one of the disks (using multiple drives) – you are heading back to your backups. If you have no backups then the data is gone!

Just on the basis of those two disadvantages, you might be asking the question – why consider a JBOD implementation in an Enterprise Exchange environment at all?

Well prior to Exchange 2010 many would agree with you wholeheartedly - however in Exchange 2010 the product team have managed (in certain configurations) to make JBOD a cost effective storage option.

Because you do the RAID inside of Windows instead of letting the card do it. You're still doing RAID, just inside of Windows.
 
Well, I flashed the firmware and BIOS to the latest found for the LSI 9217_8i. Now it calls it an LSI 9217_8i instead of "Eval board". Unfortunately there is no speed increase.

Trying to figure out how that guy in the Hexus Extreme 11 video got 3,500+ MB/s seq read...
 
This is pretty much as I suspected.

You get what you pay for strikes again. :)
 
Well, I flashed the firmware and BIOS to the latest found for the LSI 9217_8i. Now it calls it an LSI 9217_8i instead of "Eval board". Unfortunately there is no speed increase.

Trying to figure out how that guy in the Hexus Extreme 11 video got 3,500+ MB/s seq read...

Try the IT firmware and a soft RAID in Windows; that'll match the test done with the M4s, since that's how it seems they did their testing.
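For reference, the IT firmware is normally flashed with LSI's sas2flash utility from a DOS or UEFI shell. A rough sketch, assuming you've downloaded the 9207/9217 IT firmware and BIOS images from LSI (the file names below are placeholders, and crossflashing the onboard chip is at your own risk):

Code:
rem list the detected SAS2308 controllers and their current firmware
sas2flash -listall
rem advanced mode: write the IT firmware and boot BIOS to controller 0 (file names assumed)
sas2flash -o -c 0 -f 9207-8i-IT.bin -b mptsas2.rom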
 