LSI Owner's Thread (SAS/SATA HBA and RAID cards)

I have run into a pretty serious problem with the LSI 9261 card, and I would like others to test it and see if the same problem occurs.

I have created a RAID-5 array from 4x 2TB drives. Everything works fine, but if I pull a drive out of that array and then reinsert that same drive, the LSI card's BIOS will not recognize it and will claim the drive is bad.

Even if I format that HDD and then reinsert it, I won't be able to use it. If I insert a totally different drive, one that was never part of the array, it is detected fine.

Simply doing a quick format in Windows Disk Management, for example, doesn't wipe the very last 512-byte sector of the drive, where most RAID cards store their metadata. You'd need to zero-fill the drive to wipe out that metadata. Tools like HDD Wipe Tool or the HD Tune Pro trial can zero-fill, but unfortunately you'd have to wait for them to zero-fill the entire drive just to hit the last sector; alternatively, a tool like WinHex can jump straight to the last sector and zero it out.
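
For what it's worth, here's a minimal sketch of the "jump straight to the last sector and zero it" approach, assuming a Linux box or live CD, and assuming the controller's metadata really does live in that final sector (some controllers reserve a larger region at the end of the disk, so you may need to zero more than one sector). The device path is a placeholder - triple-check it, since this is destructive.

Code:
# Hedged sketch: zero the final 512-byte sector of a drive from Linux.
# /dev/sdX is a placeholder - point it at the pulled drive, not your OS disk.
import os

DEVICE = "/dev/sdX"
SECTOR_SIZE = 512

fd = os.open(DEVICE, os.O_RDWR)
try:
    disk_size = os.lseek(fd, 0, os.SEEK_END)        # total capacity in bytes
    os.lseek(fd, disk_size - SECTOR_SIZE, os.SEEK_SET)   # jump to the final sector
    os.write(fd, b"\x00" * SECTOR_SIZE)             # overwrite any metadata there
    os.fsync(fd)
finally:
    os.close(fd)

print(f"Zeroed the last {SECTOR_SIZE} bytes of {DEVICE}")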

I don't have a 9261 yet but will soon, and my first hunch is the behavior is by design rather than a bug. Perhaps there's a preference setting somewhere that controls that behavior. I can think of scenarios where the card not simply beginning a rebuild (write operation) on a previously "failed" drive is the preferred behavior.

Here's the question - when you power down the system and power back up with the "failed" drive inserted, does it still show the drive as failed/bad?

In any case it's good that you're doing synthetic failure tests ahead of a real failure so you know what to expect - most people don't bother and then panic when a real failure does happen, in some cases making a bad situation worse.
 
Did this work? Supermicro were quite emphatic that it wouldn't, but there have been a few firmware updates since I asked. How many drives have you got hanging off the X8DTH-6F + expander?

Haven't gotten my system up and running yet. Still waiting on the CPU to arrive on Monday, and then I'm ordering the last pieces (RAM and expander) on Friday.

I'll have 16 drives right off the bat, and I'll slowly be adding more.

Does anyone wanna let me borrow their HP Expander to test compatibility before I purchase my own? :D
 
^ The SAS2008 chip on that mobo should work fine with the HP expander, based on the fact that it works flawlessly with the LSI 9211-8i card, which uses the same chip. You can't go wrong with an SM mobo; they're beasts, especially that one. It's the only brand I use in my servers.
 
Simply doing a quick format in Windows Disk Management, for example, doesn't wipe the very last 512-byte sector of the drive, where most RAID cards store their metadata. You'd need to zero-fill the drive to wipe out that metadata. Tools like HDD Wipe Tool or the HD Tune Pro trial can zero-fill, but unfortunately you'd have to wait for them to zero-fill the entire drive just to hit the last sector; alternatively, a tool like WinHex can jump straight to the last sector and zero it out.

I don't have a 9261 yet but will soon, and my first hunch is the behavior is by design rather than a bug. Perhaps there's a preference setting somewhere that controls that behavior. I can think of scenarios where the card not simply beginning a rebuild (write operation) on a previously "failed" drive is the preferred behavior.

Here's the question - when you power down the system and power back up with the "failed" drive inserted, does it still show the drive as failed/bad?

In any case it's good that you're doing synthetic failure tests ahead of a real failure so you know what to expect - most people don't bother and then panic when a real failure does happen, in some cases making a bad situation worse.


Good point on the zero filling..

I did do a quick format the first time, and that didn't work. Then I tried the full format, waited hours for it to complete, and still no luck.

And yeah, I did the whole reboot/shutdown-the-system scenario and the drive still came up as "BAD". As for BIOS options on how it should behave, there aren't any, unfortunately.

I will try something other than the Windows format, as you suggested, and see if that's a go.
 
OK.. the zero filling has worked.

I reinserted the drive in the array and I was able to mark it as a "Good" un-configured drive.

Though I must say, the LSI WebBIOS is less than intuitive. It's very hard to navigate around it and figure out the rebuild process.

Also, rebuilding the RAID-5 array is taking way too long. There is absolutely no data on this array, and yet it's still taking hours to finish rebuilding... (is this even normal?)
What exactly is there to rebuild if no data is on the drives?
 
Also, rebuilding the RAID-5 array is taking way too long. There is absolutely no data on this array, and yet it's still taking hours to finish rebuilding... (is this even normal?)
What exactly is there to rebuild if no data is on the drives?

The RAID card isn't aware of what data you are storing in the array, so in a rebuild it has to reconstruct every sector of the new drive from the other drives, even if that data is mostly zeroes. AFAIK, assuming the RAID-5 parity calculations are not a bottleneck, the rebuild should take about the same time as it would to do a full format of the disk if it were outside the array.
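
To put a rough number on "hours": a back-of-the-envelope estimate, assuming the rebuild is limited by the sustained write speed of the drive being rebuilt (the 100MB/s figure is an assumption for a 2TB 7200rpm drive of that era, not a measured value).

Code:
# Rough rebuild-time estimate: every sector of the replacement drive gets
# written regardless of how much real data the array holds.
drive_size_gb = 2000            # 2TB drive
sustained_write_mb_s = 100      # assumed average sustained write rate

seconds = drive_size_gb * 1000 / sustained_write_mb_s
print(f"~{seconds / 3600:.1f} hours, before controller overhead")   # ~5.6 hours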
 
^ The SAS2008 chip on that mobo should work fine with the HP expander, based on the fact that it works flawlessly with the LSI 9211-8i card, which uses the same chip. You can't go wrong with an SM mobo; they're beasts, especially that one. It's the only brand I use in my servers.

The X8DTH-6F is a great board, but the combination of it and an Areca 1680ix-24 has proved problematic for me, at least on one of my servers. Maybe a bad Areca card, or some weird incompatibility between the two. LSI are better supported by Supermicro (not surprising as SM OEM all their SAS stuff from LSI) so I'm probably going to switch to them.
 
The X8DTH-6F is a great board, but the combination of it and an Areca 1680ix-24 has proved problematic for me, at least on one of my servers. Maybe a bad Areca card, or some weird incompatibility between the two. LSI are better supported by Supermicro (not surprising as SM OEM all their SAS stuff from LSI) so I'm probably going to switch to them.

I have the X8DTH-IF and it's flawless with the LSI 9261-8i
 
^ good to know. I will probably go with 9280 4i4e though.

A question to anyone using CacheCade: I've looked through the docs on this, and one thing isn't clear - does the card try to cache to the SSDs synchronously (in which case, if you're doing a large sequential write, your SSDs might actually be slower than a large HD array, so write speeds may be slowed down) or asynchronously (i.e. does it wait for an I/O "breathing space" before writing to the cache)? Their example benches with SQL Server use X25-Es, so clearly the write performance of the SSDs you use is a factor, and I guess endurance too (that example, Adaptec MaxIQ, and even Seagate's Momentus XT all use SLC). I'm trying to weigh up using a few X25-Ms or C300s for CacheCade vs much more expensive X25-Es or Pliants (OK, dream on for the last one).
 
^ good to know. I will probably go with 9280 4i4e though.

A question to anyone using CacheCade: I've looked through the docs on this, and one thing isn't clear - does the card try to cache to the SSDs synchronously (in which case, if you're doing a large sequential write, your SSDs might actually be slower than a large HD array, so write speeds may be slowed down) or asynchronously (i.e. does it wait for an I/O "breathing space" before writing to the cache)? Their example benches with SQL Server use X25-Es, so clearly the write performance of the SSDs you use is a factor, and I guess endurance too (that example, Adaptec MaxIQ, and even Seagate's Momentus XT all use SLC). I'm trying to weigh up using a few X25-Ms or C300s for CacheCade vs much more expensive X25-Es or Pliants (OK, dream on for the last one).

If you are going to use -Ms then you are going to want to over-provision them. Around 20-25% is really the knee of the curve on over-provisioning, so you are looking at 110-120GB of capacity with a 160GB -M and 55-60GB with an 80GB -M. There were very good presentations on the write endurance issues/tradeoffs at both this year's IDF and last year's.
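
Just to spell out the arithmetic behind those capacities (drive sizes from the post above; the exact usable figure depends on where you put the knee):

Code:
# Usable capacity left after reserving a fraction of the drive as spare area.
for raw_gb in (80, 160):
    for spare in (0.20, 0.25, 0.30):
        print(f"{raw_gb}GB drive, {spare:.0%} spare area -> "
              f"{raw_gb * (1 - spare):.0f}GB usable")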

And honestly, I'm not so sure how much I would really trust Pliant Technology considering it looks like they basically lie on their specifications. If you have the money, I'd go STEC.

Hopefully we'll start to see more of the SSD vendors give real endurance specifications in the future now that JEDEC has standardized the requirements. So far, the only company that gives very detailed breakdowns of endurance is Intel and by extension Hitachi when they release their enterprise SAS SSDs.
 
If I were to use X25-Ms, I might over-provision by as much as 50% if it bought me more IOPS for CacheCade. The R0 used for CacheCade can be up to 512GB, so using 32GB per X25-M over 16 disks should yield the best performance possible with X25-Ms. Write performance still wouldn't be as good as the X25-E, but it would be significantly cheaper. If the MLC endurance lessons that Intel have publicised are equally applicable to C300s then that might be an option too.

Regardless of which drive I use, I will probably get to the 512GB in stages. I was thinking of maintaining a ratio of about 1 (RAM) : 64 (SSD) : 4096 (HD), where the RAM is system RAM that StarWind is using as a write-back cache and the HD is RAID-10 (of either 10K 2.5" SAS @ 600GB or 7.2K 3.5" SAS @ 2TB). So for example I could gradually build up to 8GB RAM, 512GB SSD (over 16x SSDs), and 32TB (usable) RAID-10 of 32x 2TB.
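
As a quick sanity check on that ratio, here's the rule of thumb spelled out (the little helper is just illustrative):

Code:
# 1 (RAM) : 64 (SSD) : 4096 (HD) sizing rule of thumb from the post above.
def tiers_for_hd(hd_usable_gb):
    return hd_usable_gb / 4096, hd_usable_gb / 64   # (RAM GB, SSD GB)

ram_gb, ssd_gb = tiers_for_hd(32 * 1024)            # 32TB usable RAID-10
print(f"RAM ~{ram_gb:.0f}GB, CacheCade SSD ~{ssd_gb:.0f}GB")   # ~8GB, ~512GB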

I guess at some point before 16 SSDs I might hit the IOPS limit of the RAID controller, so it might make sense to use fewer, larger SSDs instead.

As for Pliant, I won't believe anything either until I've seen them properly benchmarked. I think the specs thing must be either a mistake (and they are actually SAS-2), or they are exceeding SAS-1 speeds because they are making full use of being dual-ported as well as full duplex. IMHO SAS has much more potential as an interface for SSDs than SATA, but there are still very few SAS SSDs, let alone ones which actually take advantage of the benefits of SAS (e.g. the SandForce ones seem simply to be using a SAS <-> SATA bridge on top of their normal SATA interface). Pliants are so expensive that in many apps it might make more sense to buy RAM instead, and I expect that STEC costs even more!
 
^ It sounds like I'm in the same boat as you in terms of planning an ideal scenario with CC fronting a big array.

It's hard to find any good info on CacheCade/FastPath. For example, can someone confirm whether it's one purchase and one key to unlock both FP and CC, or are they separate purchases? I'm also wondering whether I'd be better off with Crucial C300s or Intels for the SSDs for this specific purpose. Or even OCZs.
 
You can buy FastPath or CacheCade separately, or together; you get a better price if you buy them together.

Get the fastest SSD at random access you can find, with the best latency possible, for CacheCade use. I would suggest the C300 or X25-E.
 
It's hard to find any good info on CacheCade/FastPath.

Get the fastest SSD at random access you can find, with the best latency possible, for CacheCade use. I would suggest the C300 or X25-E.

Yes, especially when just about the only place I've seen CacheCade mentioned (apart from LSI and news sites regurgitating LSI press releases) is this site and other forums where Computurd posts! I really would like to see some real-world notes from the field.

For me, apart from the choice of SSD and the number of SSDs, the biggest problem is coming up with a sensible ratio of cache to hard disk. What I really need is a way of measuring, over time, how much of my data is hot and how hot it is. Then I'd know what % of it I should aim to have cached. My application is storing VHDs for use by Hyper-V VMs. Each VM could have a different usage pattern, but in all cases, I would expect that the OS files are fairly hot, then there'd be a varying amount of data depending on what the VM was actually doing, and a large chunk of empty space that's never read, but is there for expansion. If I had a way of sampling I/O within a few sample VMs, and could plot a nice chart that would tell me that for x% cached, y% of my reads are likely to come from cache, then settling on a sensible ratio wouldn't feel so much like making an uneducated guess.
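
In case it helps anyone thinking along the same lines, here's a minimal sketch of the kind of analysis I mean, assuming you can export a read trace (one block address per line) from whatever I/O tracing tool you use - the file name and format are hypothetical:

Code:
# Estimate what fraction of reads would hit cache if the hottest X% of blocks
# were cached, from a simple trace of block addresses that were read.
from collections import Counter

reads = Counter()
with open("read_trace.csv") as f:       # hypothetical trace: one LBA per line
    for line in f:
        reads[int(line.strip())] += 1

total_reads = sum(reads.values())
blocks_by_heat = sorted(reads.values(), reverse=True)   # hottest blocks first

covered = 0
step = max(1, len(blocks_by_heat) // 10)
for i, hits in enumerate(blocks_by_heat, start=1):
    covered += hits
    if i % step == 0 or i == len(blocks_by_heat):
        print(f"hottest {100 * i / len(blocks_by_heat):5.1f}% of blocks "
              f"-> {100 * covered / total_reads:5.1f}% of reads")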
 
From the Intel RAID Product Matrix (PDF):

A couple of new ones that I haven't seen announced by LSI (most of the cards look identical to LSI cards; they even have CacheCade / FastPath options):

Page 4 - RS2VB080 - due in December
- looks like the RS2BL080 / LSI 9260-8i, except...
- flash cache instead of BBU; presumably an ultracapacitor keeps things going until the cache is copied from RAM to flash. Benefit: you don't have to worry about replacing the battery, and you don't have to worry about the (at best) 72-hour deadline to restore power that you have with a battery.

Page 5 - RT3WB080 - due in November
- photo looks like the RS2BL080 / LSI 9260-8i
- LSI SAS 2108-based, so it should support FastPath
- SATA only, limited to 16 drives max, supports SAS expanders (which it would need for 16 drives)
- 256MB cache
I would expect this to be cheaper than the LSI 9260-8i, and *maybe* a firmware flash could persuade it to forget some of its limitations...

BTW, Intel call CacheCade "SSD Cache", and FastPath is "Fast-Path".
 
LSI 9211-8i (SAS2008) produces some decent results with link aggregation to an HP SAS Expander, in terms of scaling efficiency in sequential transfers.

Test setup: LSI 9211-8i (IT firmware) + HP SAS Expander 2.02 + 16x Hitachi 2TB, Windows RAID-0, NTFS 64k clusters

[benchmark chart: o624NS.gif]
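
For anyone wondering what I mean by scaling efficiency, it's just measured array throughput over the sum of the single-drive rates. The numbers in this snippet are illustrative assumptions, not figures from the chart above:

Code:
# Scaling efficiency = measured array throughput / (drives x single-drive rate).
# Both figures below are assumptions for illustration, not measured results.
def scaling_efficiency(array_mb_s, n_drives, single_drive_mb_s=130):
    return array_mb_s / (n_drives * single_drive_mb_s)

print(f"{scaling_efficiency(1800, 16):.0%}")    # e.g. 1.8GB/s over 16 drives ~ 87%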
 
flash cache instead of BBU; presumably an ultracapacitor keeps things going until the cache is copied from RAM to flash. Benefit: you don't have to worry about replacing the battery, and you don't have to worry about the (at best) 72-hour deadline to restore power that you have with a battery.

Nice find, barra! Very compelling, that. I was commenting the other day that they should put a supercap on mobos to help save write cache when using onboard RAID. Seems they are doing it for RAID cards, eh? So when for mobos? :)
It would be a natural progression. Yeah, hard data on CacheCade is fleeting at best. There is even some material coming from LSI about using SATA drives as a low tier, SAS drives as a mid tier, and SSDs as a top tier in future implementations. Here is a link to their recent presentations:

http://lsi.com/AI-Conference/

http://www.lsi.com/AI-Conference/SAS_Past_Present_and_Future.pdf

I am waiting right now... the 2208 is my next buy. I wish there was a timetable for it...
If the supercap is implemented well with these cards, I am sure it could become a big thing in the future.
 
Hello all:

I'm trying to track down an issue with my brand new LSI 9211-8i. I'm running the card with 8x Micron C300 64GB drives on an EVGA P55 FTW motherboard. I have the card's BIOS and firmware up to date, and up-to-date drivers for Windows 7. My problem is that the performance I am witnessing is way, way below what it should be (2.8GB/s sequential read and 600MB/s sequential write).

I've tried the card in all my PCI-Express slots -- in the top slot and the third slot I seem to be capped around 500MB/s, and in the middle slot I am capped at 250MB/s. It feels like my card is being given only one lane by the motherboard. Has anybody else run into this problem? Is there a way to determine how many lanes my card is using? CDM benchmark below.

Thanks,
Ben

-----------------------------------------------------------------------
CrystalDiskMark 3.0 x64 (C) 2007-2010 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

Sequential Read : 514.008 MB/s
Sequential Write : 529.183 MB/s
Random Read 512KB : 708.025 MB/s
Random Write 512KB : 545.441 MB/s
Random Read 4KB (QD=1) : 32.184 MB/s [ 7857.5 IOPS]
Random Write 4KB (QD=1) : 73.934 MB/s [ 18050.3 IOPS]
Random Read 4KB (QD=32) : 330.067 MB/s [ 80582.8 IOPS]
Random Write 4KB (QD=32) : 299.993 MB/s [ 73240.5 IOPS]

Test : 1000 MB [C: 6.2% (29.2/469.3 GB)] (x5)
Date : 2010/10/21 0:00:25
OS : Windows 7 [6.1 Build 7600] (x64)
 
If I understand correctly, the P55 FTW should be running at a minimum of x8 in the first and second slots and at x4 in the third slot...
Not sure how big the influence of having the OS on the array you are benchmarking is...
How is the RAID configured?
 
If I understand correctly, the P55 FTW should be running at a minimum of x8 in the first and second slots and at x4 in the third slot...
Not sure how big the influence of having the OS on the array you are benchmarking is...
How is the RAID configured?

The labeling on the board says that the second slot is x4 while the third slot is x8. If I place the card in the second slot (labeled X4), it tops out around 250MB/s. The raid itself is a stripe. It's operating vastly slower than it should, which is alarming. It really does seem like the card is starved for lanes.
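
If you can boot a Linux live environment on that box, `lspci -vv` will show the negotiated link width under LnkSta. Here's a hedged little script that pulls the same information out of sysfs for any LSI device (0x1000 is LSI's PCI vendor ID; the current_link_* attributes are only present on reasonably recent kernels):

Code:
# List negotiated PCIe link width/speed for LSI (vendor 0x1000) devices.
import glob, os

for dev in glob.glob("/sys/bus/pci/devices/*"):
    try:
        with open(os.path.join(dev, "vendor")) as f:
            vendor = f.read().strip()
        if vendor != "0x1000":          # LSI Logic / Broadcom PCI vendor ID
            continue
        with open(os.path.join(dev, "current_link_width")) as f:
            width = f.read().strip()
        with open(os.path.join(dev, "current_link_speed")) as f:
            speed = f.read().strip()
    except OSError:
        continue
    print(f"{os.path.basename(dev)}: x{width} @ {speed}")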
 
My problem is that the performance I am witnessing is way, way below what it should be (2.8GB/s sequential read and 600MB/s sequential write).

How did you come to the conclusion that 2.8GB/s reads and 600MB/s writes are "what it should be"?

It really does seem like the card is starved for lanes.

I doubt that's the problem.

The raid itself is a stripe

What does that mean? A RAID-0 configured on the LSI card? RAID-0 configured in Windows Disk Management with the drives as JBOD? You're asking for help but giving half the information. You also haven't stated what your ultimate goal is - what was the purpose of buying that many SSDs?
 
My server is finally back after switching my Areca 1680ix-24 for an LSI 9280-4i4e (the Areca card stopped working with my Supermicro X8DTH-6F mobo, but still works fine in other servers). I've actually got two of the LSI cards; if the Intel expander that I've ordered works OK, I will be moving one to my other server that still has an Areca...

Two annoyances I've found with LSI/MSM

1) It seems impossible to migrate a 2-drive RAID-1 to a 4-drive RAID-10. That really sucks. Anyone know a workaround, other than backing up the data, destroying the RAID-1, creating the RAID-10, and restoring?

2) Why can't I offline an entire volume? I have to power down the entire server to pull the disks out?!

On the plus side, at least the RAID cards can still see the drives after a reboot, which is an improvement over the Areca...
 
When you test the expander with the LSI card and MSM, pay attention to the first and last drive in the list...
 
How did you come to the conclusion that 2.8GB/s reads and 600MB/s writes are "what it should be"?

I doubt that's the problem.

What does that mean? A RAID-0 configured on the LSI card? RAID-0 configured in Windows Disk Management with the drives as JBOD? You're asking for help but giving half the information. You also haven't stated what your ultimate goal is - what was the purpose of buying that many SSDs?

Odditory:

Thanks for the response. My conclusion regarding expected speed is based on the experience of tomshardware, who reached 1.8GB/s with 8x X25-Es on the same card, plus the rated speed of the drives, plus the experience of an xtremesystems user who posted results similar to what I was expecting with C300s and the same card.

I'm using RAID-0 on the card, as opposed to JBOD/Windows software RAID, since making a software RAID bootable is a challenge. As for my ultimate goal, I don't suppose there really is one other than fun... I'm interested in my load times being minimal.

I would also note that I dumped the P55 motherboard for a Rampage III Formula and am experiencing the same issue.

I came across another post on the xtremesystems forums (http://www.xtremesystems.org/forums/showpost.php?p=4469397&postcount=379) where a user with 4 C300s was seeing similar performance to what I was seeing with 8. It looks like the card is just capped. And it doesn't seem realistic that the drives would only manage 64MB/s each in sequential reads. I'm not opposed to going to a 9260-8i, but it seems as if there is something I am doing wrong that is preventing me from getting adequate performance. Any suggestions?
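
The arithmetic behind that 64MB/s figure, for clarity (the array number is from the CDM run I posted earlier):

Code:
# Per-drive throughput implied by the capped array result.
observed_array_mb_s = 514       # sequential read from the CDM run above
n_drives = 8
print(f"~{observed_array_mb_s / n_drives:.0f} MB/s per drive")   # ~64 MB/s
# A single C300 should sustain several times that in sequential reads,
# so the bottleneck is clearly upstream of the drives.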
 
Well, the NF200 question never got answered... but trust me, from experience, any mobo with an NF200 is hopelessly strangled by it on the PCIe bus. I cannot believe how badly it borks RAID results. It kills latency, and with latency goes everything else, of course :(
I had to sell my E759 so I could buy an E760 for this reason.
 
When you test the expander with the LSI card and MSM, pay attention to the first and last drive in the list...

I've posted my experiences in a RES2SV240 thread, but yes, going through the expander seems to reverse the slot numbers (within each group of four slots per SFF-8087).
 
Just saw this: http://www.demartek.com/Reports_Free/Demartek_LSI_CacheCade_Performance_Evaluation_2010-11.pdf

They used a web server workload to test CacheCade performance - not directly applicable to my needs but still interesting. They used 2x 32GB X25-Es for caching a 6x 500GB RAID-10 - although it looks like there was only 40GB of data, so they actually had more cache than data. They also had 23GB of system stuff, so effectively it's a 1:1 ratio of data to cache. They say that with 2 SSDs they saw at least a 5x improvement in all metrics; with 1, that dropped to 3x.
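
Spelling out that ratio (numbers as reported in the paper):

Code:
# Cache capacity vs. data actually resident on the array in the Demartek test.
cache_gb = 2 * 32        # two 32GB X25-Es
data_gb = 40 + 23        # test data plus OS/system files
print(f"cache:data ~ {cache_gb / data_gb:.2f}:1")   # ~1.02:1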
 
Does the MegaRAID BIOS Config Utility mouse feature work for other owners? Or is the cursor really going to be permanently lodged in the upper left corner?
 
I have a ways to go before posting for-sale items, but if anyone is interested in one of my LSI / 3ware PCI-X cards, just send me a PM.

MegaRAID SATA 300-8X w/ BBU

3ware 9550SXU-4LP
 
Are these numbers about what can be expected or do I have something configured incorrectly?

[ATTO benchmark screenshot: attobench32v234lsi2108r.jpg]
 
Figured I'd try my luck in here: is anyone able to comment on having an LSI2008-based HBA (or non-HBA) card successfully pass the 'sleep' command through to their motherboard, in other words, being able to power down/spin down their hard drives after a certain amount of inactivity?

I'm currently in discussions with engineers from Supermicro over compatibility issues with the AOC-USAS2-L8e (which uses the LSI2008 chipset) + HP SAS Expander, and they have asked for any known cards using the same chipset that work.

Cheers!
 
Hi everyone... I'm rather new to SAS and am trying to configure an LSI SAS 3801E PCIe card connected to a SAS expander with 24 SAS disks attached. This is running Red Hat Enterprise Linux and I have all the latest drivers for the LSI card. The SAS 3801E supports 8 connections (4 connections each on 2 cables). The 24 SAS disks are set up as a RAID-0. When I write a large amount of data to the drives (about 100GB), I only see one set of 4 ports from the LSI SAS 3801E card being used. Does anyone here know if I need to set up some sort of special configuration in Linux for it to use all 8 ports? I'm pretty sure it is not the expander that is the problem, because I can switch the port connectors and the behavior follows the port on the LSI SAS 3801E card. Thanks for any insight.
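
One thing worth checking on the Linux side: the SAS transport class should expose every phy the HBA knows about under /sys/class/sas_phy, so you can at least see whether the second 4-lane port ever negotiates a link. A hedged sketch:

Code:
# List each SAS phy known to the kernel and its negotiated link rate.
import glob

for phy in sorted(glob.glob("/sys/class/sas_phy/phy-*")):
    try:
        with open(phy + "/negotiated_linkrate") as f:
            rate = f.read().strip()
    except OSError:
        continue
    print(phy.rsplit("/", 1)[-1], rate)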
 
The LSI 3801E doesn't support MPIO/link aggregation (or "2 cables" as you said). You'd need to move up to a SAS2008-based HBA and a SAS-2 expander that supports dual linking if that's important to you. And unless you're getting data in and out of that array via either 10GbE or another local array, I'm not sure you'll realize a benefit unless you move to another RAID level, in which case rebuild times can take advantage of the additional bandwidth.
 
Hey peeps, nice appropriate thread for my question:
[image: sas3080x.jpg - LSI SAS 3080X-R]

I'm looking at an LSI SAS 3080X-R to use in dumb/HBA mode for ZFS, probably running on FreeBSD. If anyone has any experience using one of these controllers, particularly with FreeBSD and/or ZFS, I would love to hear from you. It looks like the only place to pick these up now is eBay, so time is of the essence!

Thanks.

EDIT: I think this mail archive thread is worth a mention in case it can help anyone in a similar situation: http://www.mail-archive.com/[email protected]/msg106289.html
It appears from this that the card I'm looking at might work but I'd still love to hear from any [H] owners.
 
What is the latest firmware for the HD103SJ drives?

I have trouble running the drives in SATAII mode on my RAID card (3ware 9950-16port), in that the drive keeps disconnecting. Knock the drive down to SATAI mode, and no issues at all.

Current firmware is 1AJ100E4

I have some HD103UJ drives on the same controller, and they run in SATAII mode ok.

I can't seem to find any firmware at all on the web.
 
Still no LSI SAS 9211-8i or LSI SAS 9211-4i users here who care to comment on their luck with hard drive sleeping/spin-down?
 