LSI new MegaRAID cards

Computurd

Hey guys, just reading about these new uber RAID cards from LSI. They are already available at Mwave, but there is no info out there about the differences in the cards. Any input on these?
MegaRAID SAS 9260-8i 8-port 6Gb/s SATA+SAS RAID
 
I doubt anyone has these yet, but I'm eager to see some benchmarks too. :)

Just give it some time, someone will post something. ;)
 
There is a listing on Mwave for sale... need to figure out if they work with Vertex drives... they are compatible with Intel SSDs, that much I know.
 
I doubt anyone has these yet, but I'm eager to see some benchmarks too. :)

Just give it some time, someone will post something. ;)

They're faster than anything you've ever seen :) But I'm just a tease, apparently, and can't release any of my numbers due to confidentiality reasons; I have been playing with these for about a year now. I think it won't be long until some numbers are actually out there, though.
 
I am going to give this one a go with my eight 30 GB Vertex drives and see if it can't beat the 1.1 GB/s throughput I'm getting with my Areca 1680ix. Card is inbound.
 
With five Vertex drives on the card... I have it now LOL
Putting the other three on shortly for more benchmarks... this is the same speed my Areca was getting when it topped out... we will see if the LSI will beat it ;)
[attached benchmark screenshot: RAIDSUBMIT.png]
 
I think I have read that the real-world limit for PCI-E 1.1 x8 slots is 1 GB/s to 1.4 GB/s, usually around 1.2 GB/s.

Myricom has tested their PCI-E cards on several systems and posted their results on PCI-E 1.1 vs PCI-E 2.0.
 
I have eight 30 GB Vertex drives in RAID 0... 230 MB/s each... x8 = you do the math... something is holding me back here, can't figure it out.
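As a rough sketch of the math being hinted at here (the per-drive rate is the figure quoted in this post; the observed number is the ~1.1 GB/s mentioned earlier in the thread):

```python
# Back-of-the-envelope aggregate throughput for the array described above.
# Assumes the quoted ~230 MB/s sequential rate per Vertex; real RAID 0
# scaling is never perfectly linear.
drives = 8
per_drive_mb_s = 230                      # quoted per-drive sequential rate

ideal_total = drives * per_drive_mb_s
print(f"Ideal aggregate: {ideal_total} MB/s (~{ideal_total / 1000:.2f} GB/s)")

observed = 1100                           # ~1.1 GB/s reported earlier in the thread
print(f"Observed: {observed} MB/s -> shortfall of {ideal_total - observed} MB/s")
```

So the drives alone should be good for roughly 1.8 GB/s sequential, which points the finger at the bus or controller rather than the SSDs.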
 
I think I have read that the real-world limit for PCI-E 1.1 x8 slots is 1 GB/s to 1.4 GB/s, usually around 1.2 GB/s.

Myricom has tested their PCI-E cards on several systems and posted their results on PCI-E 1.1 vs PCI-E 2.0.

Those benchmarks are also limited by the fact that the Myricom interconnect is 10 gigabits per second, or about 1.25 GB/s. I'd be surprised if PCI Express were limited to half its theoretical capacity by another factor.
 
Stripe is 1 MB...
Array is eight 30 GB Vertex drives in RAID 0.

... yeah, start over again.
First, stripe should never be 1 MB on a Windows host. Your real-world performance blows.
Second, the host filesystem should match the segment size. 64K stripe to 64K NTFS blocks, etc.
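One way to see the stripe-size argument concretely: for a given host I/O size, the stripe width decides how many RAID 0 members a single request actually touches. A minimal sketch, with hypothetical I/O sizes and the eight-drive array from this thread:

```python
# How many RAID 0 members a single host I/O touches, as a function of
# stripe (segment) size. I/O sizes here are hypothetical examples.
def members_touched(io_size_kb, stripe_kb, drives=8, offset_kb=0):
    """Count distinct stripe segments (and thus drives) one I/O spans."""
    first = offset_kb // stripe_kb
    last = (offset_kb + io_size_kb - 1) // stripe_kb
    return min(last - first + 1, drives)

for stripe in (64, 256, 1024):
    for io in (64, 256, 1024):
        print(f"stripe {stripe:4d}K, I/O {io:4d}K -> {members_touched(io, stripe)} drive(s)")
```

A 1 MB stripe keeps small random I/Os on a single drive, but a large sequential request only spreads across all eight members when the request itself is several megabytes, which is part of why the two camps in this thread disagree.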

Myricom is only just now starting to learn the basics of extreme high performance interconnect. If you want real numbers, you call one of two people - Voltaire, who makes the best InfiniBand switches on the market for edge-core architecture, or Qlogic, who's forgotten more about disk interfaces and high performance interconnect than any other company has ever known.
 
I have eight 30 GB Vertex drives in RAID 0... 230 MB/s each... x8 = you do the math... something is holding me back here, can't figure it out.

1. The RAID card is a brand-new product. All of the features listed on the box may not be in the firmware yet.
2. In the motherboard BIOS, try setting the "maximum payload size" to 256 or 512. See this for reference.
3. The manual for your board says "These four PCI Express x16/x8 slots are reserved for video cards, and x1/x4 devices". So maybe if the slot doesn't have a graphics card in it... it is knocked down to x4 PCI-E 2.0? (One way to check what the slot actually negotiated is sketched below.)
4. If all your other slots are filled with graphics cards, you may be hitting an overall limit on the board. Here is a reference, but note that when it says pci-e2, they mean PCI-E slot #2.
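If you can boot the box into a Linux live environment, a quick way to confirm what the slot actually negotiated (x8 vs x4) is to read the link status out of lspci. A rough sketch, assuming the card shows up as an LSI MegaRAID device; on Windows the same information is usually visible in the MegaRAID management software:

```python
# Check the negotiated PCIe link width/speed for the RAID card.
# Assumes a Linux live environment; run as root so lspci -vv can read
# the PCI Express capability registers.
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

# lspci separates devices with blank lines; look for the LSI controller.
for block in out.split("\n\n"):
    if "MegaRAID" in block or "LSI" in block:
        for line in block.splitlines():
            # LnkCap = what the card supports, LnkSta = what was negotiated
            if "LnkCap:" in line or "LnkSta:" in line:
                print(line.strip())
```

If LnkSta shows "Width x4" while LnkCap shows "Width x8", the slot itself is the bottleneck.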

Those benchmarks are also limited by the fact that the Myricom interconnect is 10 gigabits per second, or about 1.25 GB/s. I'd be surprised if PCI Express were limited to half its theoretical capacity by another factor.

Once overhead is taken into account, most of the things I have read say 1.6 GB/s max for PCI-E 1.1 x8.

The point of the Myricom PCI-E 1.1 link is the variation from one chipset to another. When they tested their card in various motherboards, they saw a significant variation in the hardware read/write speed (the first two results for each test). The motherboards they tested gave results from ~0.8 GB/s to ~1.4 GB/s.

Even if there is inefficiency somewhere in their hardware read/write test, they still got different results for different chipsets.
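For context, here is the arithmetic behind those theoretical and real-world figures (a sketch; the protocol-overhead percentages are rough estimates that vary by chipset and maximum payload size):

```python
# PCIe 1.x (gen1) x8 bandwidth, before and after overhead.
lanes = 8
gen1_gt_per_s = 2.5           # 2.5 GT/s signaling per lane
encoding = 8 / 10             # gen1/gen2 use 8b/10b encoding

raw_gb_s = lanes * gen1_gt_per_s * encoding / 8   # bits -> bytes
print(f"Theoretical x8 gen1 data rate: {raw_gb_s:.1f} GB/s")   # 2.0 GB/s

# TLP headers, flow control, and small max-payload settings typically
# cost another ~15-25%, which lines up with the ~1.2-1.6 GB/s figures
# quoted above (rough estimate, chipset dependent).
for overhead in (0.15, 0.20, 0.25):
    print(f"  with {overhead:.0%} protocol overhead: {raw_gb_s * (1 - overhead):.2f} GB/s")
```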
 
Wow, thanks guys, that is very, very helpful! If anything else hits you guys, please speak up, I'm soaking it up here!
 
I am starting to believe that this issue is with the x1/x4 functionality of this board, and now I am wondering what can be done to resolve it.
 
3. The manual for your board says "These four PCI Express x16/x8 slots are reserved for video cards, and x1/x4 devices". So maybe if the slot doesn't have a graphics card in it... it is knocked down to x4 PCI-E 2.0?

x4 PCI Express 2.0 is 500 MB/s per lane, for a total of 2 GB/s. The card is also listed as 2.0 compliant, so it should manage at least that speed. Perhaps the partitions are unaligned to the SSDs at some layer?
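On the alignment question, a quick sanity check is whether the partition's starting offset is an even multiple of the SSD page/erase-block size and the array stripe. A minimal sketch; the offset and erase-block values are just examples, and on Windows the real starting offset can be read with `wmic partition get Name, StartingOffset`:

```python
# Partition alignment sanity check. The starting offset and erase-block
# size below are example values, not read from the actual system.
def is_aligned(offset_bytes, boundary_bytes):
    return offset_bytes % boundary_bytes == 0

start_offset = 1_048_576          # example: 1 MiB starting offset
for name, boundary in [("4K page", 4_096),
                       ("128K erase block (typical)", 131_072),
                       ("1M stripe", 1_048_576)]:
    state = "aligned" if is_aligned(start_offset, boundary) else "MISALIGNED"
    print(f"{name}: {state}")
```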
 
Here is a new bench I just ran with Iometer. It seems to be blasting away right now; gonna try upping the PCI-E clock a tad, see what that gives me.
[attached benchmark screenshot: 1300.png]
 
No, none yet, that I am about to do. I am going to up the NF200 voltage a tad and raise the PCI-E clock.
 
*sigh* Which is what I already said.
You people need to start listening. Like it or not, I do know this better than any of you. Go see here. sam is leading you off on a bloody snipe hunt.
 
Very impressive. Persistent storage with 2GB/sec capability for about $3 a gigabyte.

The last time I had to build a system with that kind of throughput, it cost about $600 a gigabyte.
 
First, stripe should never be 1 MB on a Windows host. Your real-world performance blows.
Second, the host filesystem should match the segment size. 64K stripe to 64K NTFS blocks, etc.

First - depends on workload. For anything but single threaded video serving though, I can agree. Remember that multiple sequential streams are essentially a random workload.

Second - this is old, outdated advice. Starting with Windows 2003, and continuing with 2008 and R2, Windows does excellent scatter/gather I/O and extensive caching. It is not uncommon at all to see Windows writing 1 MB stripes; in fact, apps like SQL 2005+ and Exchange are optimized to do just that. Wonder where your RAM went on your 2008 or Vista box? It's just a giant disk cache... For almost everyone, just leave the NTFS block size alone and you'll be much happier.

Myricom is only just now starting to learn the basics of extreme high performance interconnect. If you want real numbers, you call one of two people - Voltaire, who makes the best InfiniBand switches on the market for edge-core architecture, or Qlogic, who's forgotten more about disk interfaces and high performance interconnect than any other company has ever known.

AreEss - I am going to have to seriously disagree with you here too. Myricom practically INVENTED high performance interconnect technology. Voltaire assembles nice switches, using Mellanox chipsets. Qlogic actually has a better IB switch technology IMO, and makes their own chipsets (since they acquired Silverstorm).

Anyway, the reason I bring this up is that InfiniBand, while having excellent latency, is horribly inefficient for throughput. Myri-10G runs at 99.95% of line rate. With their MX layer, you can bond multiple 10GbE adapters, so that 20 or even 40 Gb/s is possible using PCI-E 2.0 x8 cards.

With IB, even with QDR, you have serious overhead. First, IB uses 8b/10b encoding, so your 40 Gb/s QDR link is really only giving you a data throughput of 32 Gb/s. It gets better, though, since very few apps talk native verbs. So if you actually want TCP/IP over IB (IPoIB), knock another 40-50% off your throughput. The end result is that your fancy 40 Gb/s connection is lucky to hit 16 Gb/s of throughput. By comparison, a bonded Myri-10G adapter gets you the full 20 Gb/s.
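A quick sketch of that arithmetic (the 40-50% IPoIB penalty is simply the range quoted above, not a measured figure):

```python
# InfiniBand QDR effective throughput, using the overheads described above.
qdr_signaling_gb_s = 40          # 4 lanes x 10 Gb/s signaling
encoding = 8 / 10                # 8b/10b encoding on QDR links

data_rate = qdr_signaling_gb_s * encoding
print(f"QDR data rate after 8b/10b: {data_rate:.0f} Gb/s")     # 32 Gb/s

# IPoIB penalty range quoted above (assumption, not a measurement)
for penalty in (0.40, 0.50):
    print(f"  IPoIB at {penalty:.0%} overhead: {data_rate * (1 - penalty):.0f} Gb/s")

# Compare: two bonded Myri-10G ports at ~99.95% of line rate
print(f"Bonded 2 x 10GbE: {2 * 10 * 0.9995:.1f} Gb/s")
```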

Back to the OP: realize that most current SAS controllers simply can't keep up with the massive IOPS generated by a good SSD. RAIDing X25s on most controllers will actually give you worse results, at least on real-world workloads.

Before you get too excited with Iometer, try to run a real-world 50/50 or even 70/30 R/W mix. Watch that performance plunge...
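To see why a mixed workload drops so sharply, here is a time-weighted sketch; the read and write rates below are purely hypothetical placeholders, not measurements from this card:

```python
# Effective throughput of a mixed read/write workload (time-weighted blend).
# The read/write rates are hypothetical placeholders, not measured values.
def mixed_throughput(read_mb_s, write_mb_s, read_fraction):
    """Harmonic (time-weighted) blend of separate read and write rates."""
    write_fraction = 1 - read_fraction
    time_per_mb = read_fraction / read_mb_s + write_fraction / write_mb_s
    return 1 / time_per_mb

read_rate, write_rate = 1300, 400     # hypothetical MB/s figures
for mix in (0.5, 0.7):
    eff = mixed_throughput(read_rate, write_rate, mix)
    print(f"{int(mix * 100)}/{int((1 - mix) * 100)} R/W mix: ~{eff:.0f} MB/s effective")
```

Even before controller and SSD write penalties, the blended rate is dragged toward the slower of the two numbers.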
 
Wow.
I'm very impressed that you got a 30% improvement by simply changing the stripe size to 64K.
 
Well, that is not what I did, actually. I did not change the stripe size; I left it at 1 MB. It was just a number of various tweaks to the PCI-E bus, NF200 voltage, and read-ahead settings. There were a few bumps along the way as I was learning this new card! Very, very pleased with it now though!
 