8 to 16 SSDs RAID 0

stenrulz

n00b
Joined
Aug 20, 2012
Messages
33
Hello,

I am currently looking into setting up a decent-sized RAID 0 array with 8 to 16 SSDs, most likely Samsung 840 Pro or OCZ Vector, with an Adaptec RAID 71605E or MegaRAID SAS 9286CV-8eCC. I have looked at the HBA cards as well, but I would like the option to boot Windows off the array at some stage. From my understanding, no software RAID allows booting; LSI FastPath is similar to software RAID in that it uses your CPU for extra performance, but without that limitation. What RAID card and SSDs do you recommend? From my understanding, most cards will be able to provide the MB/s but not the IOPS.

Thank you.
 
Are you SURE you want to put your data in the hands of 8 or 16 disks NOT FAILING?

Is this for such a highly specialized environment that you would even need to consider whatever performance you might gain from this (needless to say, there will be bottlenecks elsewhere)? Also, if it is such a high-profile environment, can you afford for one of those disks to fail?
 
I'm pretty sure you can boot from and install an OS on Intel RAID.

I've done 2x OCZ Vertex RAID 0 on an X58 chipset.
 
westrock2000, I am not worried about any redundancy and do expect them to fail from time to time.
 
I would look into one of the SSD-optimized solutions like FastPath so that you will have good 4K QD1 performance. I know that SSD RAID 0 (without optimizations like FastPath) slightly reduces your 4K low-QD performance with 2 drives; however, I am not sure how much that effect continues with this many drives.
 
I would look into one of the SSD-optimized solutions like FastPath so that you will have good 4K QD1 performance. I know that SSD RAID 0 (without optimizations like FastPath) slightly reduces your 4K low-QD performance with 2 drives; however, I am not sure how much that effect continues with this many drives.

From my understanding it depends on the RAID card; looking at some info, Adaptec supports around 500K IOPS in RAID 0 on its latest series. I cannot find any hard figures for LSI's latest series.
 
I built a couple of "raid-packs".
Here with 16 Samsung 840 Pro and an Adaptec 71605E.



Depending on your workload:
predominantly sequential read/write: stability OK, performance up to 6.8 GB/s at the OS level
predominantly random I/O: I'd rather go the LSI route. Driver issues, lower performance (IOPS) and stability issues with the Adaptec controllers.

Andy
 
AndyE, that image is for an HBA build. It does not support any boot option as it is only software RAID. Also, does LSI RAID have any issues with Samsung drives?
 
LSI can boot, and I doubt they have any compatibility issues with Samsung. They can definitely handle 8-16 drive RAID arrays. Get FastPath as well.
 
LSI can boot, and I doubt they have any compatibility issues with Samsung. They can definitely handle 8-16 drive RAID arrays. Get FastPath as well.

They are not on the compatibility list as they are still fairly new drives; does anyone have these running with an LSI RAID controller? I am not sure if LSI can provide more IOPS than Adaptec, which can support 500K; I have not found any LSI spec sheets stating their limitations.
 
AndyE, that image is for an HBA build. It does not support any boot option as it is only software RAID. Also, does LSI RAID have any issues with Samsung drives?
I used the Adaptec HBA with both options.
1) RAID 0 in the OS
2) RAID 0 in the HBA

and yes, you can boot

They are not on the compatibility list as they are still fairly new drives; does anyone have these running with an LSI RAID controller? I am not sure if LSI can provide more IOPS than Adaptec, which can support 500K; I have not found any LSI spec sheets stating their limitations.
I did not check any official compatibility list, but the LSI 9207-8i adapters work fine with the Samsung 830 Pro and 840 Pro series.
I configured my systems with up to 6x LSI adapters, achieving either 2.2 million IOPS with 4 KB blocks (= 8.6 GB/s random I/O) or 20 GB/s sequential transfer rates. Some of my apps had runtimes of up to 14 days, reading and writing in the petabyte range without issues.
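As a quick sanity check on those numbers, here is the back-of-the-envelope arithmetic (nothing measured here beyond the figures quoted above; the GB vs. GiB convention shifts the result a little):

Code:
# Convert the quoted IOPS figure into bandwidth at 4 KiB per I/O.
iops = 2200000              # aggregate across the 6 LSI HBAs
block_bytes = 4 * 1024      # 4 KiB random I/O

bytes_per_sec = iops * block_bytes
print(bytes_per_sec / 1e9)      # ~9.0 GB/s in decimal units
print(bytes_per_sec / 2**30)    # ~8.4 GiB/s in binary units, i.e. roughly the 8.6 GB/s quoted above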


Here are the IOPS rates with the LSI 2308 RAID controllers in an X79 system. 445k IOPS per 8-port controller are possible; I never managed to achieve the advertised 700k.


The Adaptec drivers are not at the same "stability" level as the LSI ones. While the new 7-series is rock solid with hard disk drives (I use a 24-port Adaptec 7-series for my home server with 3 TB WD Red drives) and these controllers achieve good performance in sequential SSD workloads, the driver bluescreens the OS regularly when the workload is heavy random I/O with fast SSDs.

The LSI driver is rock solid; whatever the workload, I never had any driver-related issues, independent of the number of controllers in the system (I used 1 to 8).
 
...I have looked at the HBA cards as well, but I would like the option to boot Windows off the array at some stage....
Independent of the technical feasibility, booting from a massive RAID 0 is a proven bad practice.
 
Thank you so much for the information.

I used the Adaptec HBA with both options.
1) RAID 0 in the OS
2) RAID 0 in the HBA

and yes, you can boot

How did you get it to boot from software RAID? My understanding is that an HBA is just a SAS controller without a RAID feature set.

Also, regarding the RAID 0 on the HBA with the RAID firmware: is this hardware-based RAID, software-based, or firmware-based?

I did not check any official compatibility list, but the LSI 9207-8i adapters work fine with the Samsung 830 Pro and 840 Pro series.
I configured my systems with up to 6x LSI adapters, achieving either 2.2 million IOPS with 4 KB blocks (= 8.6 GB/s random I/O) or 20 GB/s sequential transfer rates. Some of my apps had runtimes of up to 14 days, reading and writing in the petabyte range without issues.
Nice. Do the LSI cards support RAID 0 across controllers?

Here are the IOPS rates with the LSI 2308 RAID controllers in an X79 system. 445k IOPS per 8-port controller are possible; I never managed to achieve the advertised 700k.
The IOPS for RAID seem a little bit on the low side; is this with FastPath enabled?

The Adaptec drivers are not at the same "stability" level as the LSI ones. While the new 7-series is rock solid with hard disk drives (I use a 24-port Adaptec 7-series for my home server with 3 TB WD Red drives) and these controllers achieve good performance in sequential SSD workloads, the driver bluescreens the OS regularly when the workload is heavy random I/O with fast SSDs.
Any reason behind the bluescreens? Besides the BSODs, how do 8 SSDs in an Adaptec Series 7 RAID compare to the LSI 92xx, mainly in IOPS? The Adaptec documentation claims 500K+ IOPS.

The LSI driver is rock solid; whatever the workload, I never had any driver-related issues, independent of the number of controllers in the system (I used 1 to 8).
Great. For an 8-SSD RAID 0 build you would go for LSI? Which card? The MegaRAID SAS 9271-8iCC?

Just out of interest, have you found any PCIe SSD card that can provide the same performance? Also, would you upgrade or change any part of that LSI and Samsung 840 Pro setup?
 
Just out of interest, have you found any PCIe SSD card that can provide the same performance?

I would examine the FusionIO cards (if you have the budget).
 
Thank you so much for the information.
You are welcome.
How did you get it to boot from software RAID? My understanding is that an HBA is just a SAS controller without a RAID feature set.
HBAs normally support RAID 0 and RAID 1 via HW/FW. RAID 5, RAID 6, et al. are computationally more expensive and need additional hardware and usually more buffer memory.
Just for clarification, I booted via HW/FW RAID 0 setups. I didn't spend time trying via software RAID.

As written in a separate post above, I would clearly separate the boot drive from the data RAID, whatever the RAID level is.

Also, regarding the RAID 0 on the HBA with the RAID firmware: is this hardware-based RAID, software-based, or firmware-based?
I'd think it is a combination of hardware and firmware.


Nice. Do the LSI cards support RAID 0 across controllers?
No.
BTW, this is often not needed, as it limits flexibility and performance.

The IOPS for RAID seem a little bit on the low side; is this with FastPath enabled?
445k IOPS with 4 KB blocks is low? You should check all the RAID controllers released before summer 2012 (independent of price range) to get an understanding of low performance.
Only a few RAID controllers are fast enough to cope with the speed attained by contemporary SSDs. All others are the first bottleneck in a system with many SSDs.
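To put that in perspective, here is a rough sketch; the per-SSD figure is an assumption taken from typical spec sheets of that generation, not a measurement:

Code:
# Why the controller, not the SSDs, is usually the first bottleneck.
ssd_4k_iops = 90000        # assumed per-SSD 4K random IOPS (spec-sheet ballpark)
drives = 8
controller_iops = 445000   # what I measured per 8-port LSI 2308 controller

offered = ssd_4k_iops * drives
print(offered)                        # 720000 -> the drives could deliver more
print(min(offered, controller_iops))  # 445000 -> the controller caps the array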

Any reason behind the bluescreens? Besides the BSODs, how do 8 SSDs in an Adaptec Series 7 RAID compare to the LSI 92xx, mainly in IOPS? The Adaptec documentation claims 500K+ IOPS.
The more complex the workload (many writes, big and small, paired with sequential transfers, etc.), the faster the Adaptec 7-series ran into stability issues. If data is only streamed, it is somewhat OK; raise the demand and reliability goes down. Use multiple HBAs in the same system and the problems start even sooner.

I don't know how the Adaptec engineers managed to achieve 500k IOPS; I never got even close to that. Depending on the workload, the LSI controllers with 8 SSDs delivered more than twice the IOPS of an Adaptec 7-series with 16 SSDs. With hard disks and their slower performance (only a few hundred IOPS, i.e. interrupts/sec), this usually didn't cause problems. Raise that to hundreds of thousands of interrupts/sec, and driver stability becomes critical.

As with 10 Gbit network cards, the Intel drivers are a standout from a quality perspective.

Great. For an 8-SSD RAID 0 build you would go for LSI? Which card? The MegaRAID SAS 9271-8iCC?
It depends on the workload and the system configuration:
For RAID 0 or RAID 1, I went with the recent LSI 2308-based cards; I used the 9207-8i cards for $250. For RAID 5/50/6/60, a more sophisticated card might be justified.

Just out of interest, have you found any PCIe SSD card that can provide the same performance?
FusionIO is often recommended. I did not use those cards for 2 reasons:
1) too expensive; for that amount I could build much higher-performing SSD/HBA subsystems
2) too slow (for sequential data transfers); these cards were on PCI Express v2 (not sure if they now have PCIe v3 cards)

Also, would you upgrade or change any part of that LSI and Samsung 840 Pro setup?
I broke a few records in my field with Samsung 840 Pro SSDs (256GB versions). Reasons:
  1. stable and predictable write performance over the full capacity
  2. NO dependency on the TRIM command for fast/sustainable write performance. No RAID cards (except Intel motherboard-based chipsets) offer TRIM support
  3. proactive and real-time garbage collection
  4. Points 1-3 are a prerequisite for maintaining high write performance in a RAID. If only one in 8 or one in 16 SSDs is in slow-write mode, the whole RAID has low performance. That's why OCZ Vertex drives, for instance, were unusable for my systems, despite nice performance numbers in little benchmark utilities.
As an example (a rough script sketch of this test follows below):
  1. Write your SSD over the full capacity, to the last byte. Check the write performance from beginning to end.
  2. Delete the files on the SSD on a controller without the TRIM command.
  3. Immediately write the full capacity to the SSD again. Check and compare performance against the previous run.
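If you want to script that test, here is a rough Python sketch. The target path is a placeholder, the chunk/file sizes are arbitrary, and it really will fill the drive; run it once on the fresh drive, delete the files (plain delete, no TRIM issued by the controller), then run it again and compare the MB/s lines.

Code:
# Crude sustained-write test: fill the drive with large files and log the
# throughput of each file, so a fresh run can be compared against a re-run
# done right after deleting everything.
import os, time

TARGET = "/mnt/ssd_under_test"      # assumption: the SSD under test is mounted here
CHUNK = 64 * 1024 * 1024            # write in 64 MiB chunks
FILE_SIZE = 4 * 1024 * 1024 * 1024  # 4 GiB per file

buf = os.urandom(CHUNK)             # incompressible data
i = 0
while True:
    path = os.path.join(TARGET, "fill_%05d.bin" % i)
    try:
        with open(path, "wb") as f:
            start = time.time()
            written = 0
            while written < FILE_SIZE:
                f.write(buf)
                written += CHUNK
            f.flush()
            os.fsync(f.fileno())
        print("%s: %.0f MB/s" % (path, written / (time.time() - start) / 1e6))
    except OSError:                 # disk full -> stop
        break
    i += 1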
Finally, the most important point: which application software should be accelerated? Is it able to leverage the new I/O capabilities? Does it allow overlapped and asynchronous I/O? Is your memory system fast enough for the transfer rate you intend to achieve? Do you need fast burst I/O or high sustainable rates? What is the compute/IO ratio in your apps? (Will the cache of your i7 system be able to feed your CPU while the SSD array is hammering data into your system?) Etc., etc.
Andy
 
Last edited:
What are you using these drives for? Just curious. I'm having a hard time imagining a situation where a bootable RAID 0 setup of that size makes sense. The first thing I thought of was switching rims on my car and only using one hand-tightened lug nut on each, then going out for a cruise just to see how lucky I am.
 
Thank you so much for clearing some things up, you have been very helpful.

2 more questions, if you do not mind.

Have you compared the IOPS between HBA RAID and the MegaRAID series?

Have you looked at the LSI SAS 9300-8i? I know it is SAS 3.0, but it should be backwards compatible. As well, it has Fusion-MPT 2.5, which supports double the IOPS.
 
2 more questions, if you do not mind.
Have you compared the IOPS between HBA RAID and the MegaRAID series?
No, not really.
What I would expect is that the RAID 0/1 performance is probably at the same level. RAID 5/6 takes a huge hit on write performance anyway, independent of the controller.
Depending on your application, you might go the RAID 10 route for top write performance (which can be done with an HBA adapter as well). The disadvantage of RAID 10 is that you need twice the SSDs for your array.

Have you looked at the LSI SAS 9300-8i? I know it is SAS 3.0, but it should be backwards compatible. As well, it has Fusion-MPT 2.5, which supports double the IOPS.
They had been announced, but I haven't seen them available in my region (Europe).

The original LSI 2308 cards had one "problem":
8x 6 Gbit/s SAS/SATA ports could not saturate the PCI Express v3 x8 interface, so the max sustainable bandwidth was in the 4-4.2 GByte/s range, while the PCI Express v3 x8 interface would accommodate up to 7 GB/s. This is one reason why I chose the Adaptec 7-series for a few workloads: with 12-16 SSDs (each ~540 MB/s), the PCI Express interface could be saturated.
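The arithmetic behind that, as a small sketch; the 540 MB/s per SSD is the usual spec-sheet figure and the 7 GB/s slot ceiling is a rough practical limit, not an exact number:

Code:
# Aggregate SSD bandwidth vs. what a PCIe 3.0 x8 slot can move.
ssd_gb_s = 0.54          # assumed sustained sequential GB/s per SATA 6Gb/s SSD
pcie3_x8_gb_s = 7.0      # rough usable limit of a PCIe 3.0 x8 slot

for drives in (8, 12, 16):
    offered = drives * ssd_gb_s
    usable = min(offered, pcie3_x8_gb_s)
    print("%2d SSDs offer ~%.1f GB/s, usable ~%.1f GB/s" % (drives, offered, usable))
# 8 SSDs  ~4.3 GB/s -> roughly the 4-4.2 GB/s ceiling of one 8-port HBA
# 16 SSDs ~8.6 GB/s -> more than the x8 slot can take, i.e. the PCIe interface is saturated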

The 12 Gbit/s interface allows you to better "balance" the aggregated SSD bandwidth against the capability of the HBA on the PCI Express bus. Currently, Toshiba is one of the few companies offering 12 Gb/s SSDs, where the $/GB ratio is about 5 times higher than for a good-performing 6 Gb/s SSD like the Samsung 840 Pro. So yes, it would be technically feasible, but the cost efficiency is much worse (for now).

It is like with FusionIO cards: you have to check where the bottleneck is. For instance, most PCIe SSD cards have PCIe v2.0 x8 interfaces limiting the transfer rate to 2-3 GB/s, which is easy to achieve with an LSI 9207-8i and 8 Samsung SSDs for a fraction of the cost.

A single Fusion ioDrive with 2,400 GB capacity and 3 GB/s read speed sets you back about $33,000. This performance can be achieved with 2 LSI 9207 HBAs and 16 Samsung 256GB SSDs for about $3,800 (actually the performance will be better, at 8 GB/s). A cost ratio of roughly 9:1.
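Written out (using the rough street prices above, so treat it as ballpark only):

Code:
# Rough cost comparison based on the ballpark prices quoted above.
fusion_cost, fusion_gb_s = 33000.0, 3.0   # single ioDrive, 2.4 TB
diy_cost, diy_gb_s = 3800.0, 8.0          # 2x LSI 9207-8i + 16x Samsung 840 Pro 256GB

print(round(fusion_cost / diy_cost, 1))                 # ~8.7 -> the ~9:1 cost ratio
print(fusion_cost / fusion_gb_s, diy_cost / diy_gb_s)   # 11000.0 vs 475.0 dollars per GB/s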

Lastly: don't be disappointed that your shiny new 16-SSD array doesn't show incredible performance with the well-known benchmark utilities. Your array is probably OK and the fault is with the benchmark utilities; they are just not made for such setups. Use Iometer instead.
 
Last edited:
No, not really.
What I would expect is that the RAID 0/1 performance is probably at the same level. RAID 5/6 takes a huge hit on write performance anyway, independent of the controller.
Depending on your application, you might go the RAID 10 route for top write performance (which can be done with an HBA adapter as well). The disadvantage of RAID 10 is that you need twice the SSDs for your array.


They had been announced, but I haven't seen them available in my region (Europe).

The original LSI 2308 cards had one "problem":
8x 6 Gbit/s SAS/SATA ports could not saturate the PCI Express v3 x8 interface, so the max sustainable bandwidth was in the 4-4.2 GByte/s range, while the PCI Express v3 x8 interface would accommodate up to 7 GB/s. This is one reason why I chose the Adaptec 7-series for a few workloads: with 12-16 SSDs (each ~540 MB/s), the PCI Express interface could be saturated.

The 12 Gbit/s interface allows you to better "balance" the aggregated SSD bandwidth against the capability of the HBA on the PCI Express bus. Currently, Toshiba is one of the few companies offering 12 Gb/s SSDs, where the $/GB ratio is about 5 times higher than for a good-performing 6 Gb/s SSD like the Samsung 840 Pro. So yes, it would be technically feasible, but the cost efficiency is much worse (for now).

It is like with FusionIO cards: you have to check where the bottleneck is. For instance, most PCIe SSD cards have PCIe v2.0 x8 interfaces limiting the transfer rate to 2-3 GB/s, which is easy to achieve with an LSI 9207-8i and 8 Samsung SSDs for a fraction of the cost.

A single Fusion ioDrive with 2,400 GB capacity and 3 GB/s read speed sets you back about $33,000. This performance can be achieved with 2 LSI 9207 HBAs and 16 Samsung 256GB SSDs for about $3,800 (actually the performance will be better, at 8 GB/s). A cost ratio of roughly 9:1.

Lastly: don't be disappointed that your shiny new 16-SSD array doesn't show incredible performance with the well-known benchmark utilities. Your array is probably OK and the fault is with the benchmark utilities; they are just not made for such setups. Use Iometer instead.

For the 12 Gbit/s card, I did not mean to get SAS 3.0 SSDs but to still use the Samsung 840 Pros. The reason for the card was Fusion-MPT 2.5 over 2.0. What do you think about the difference in MPT versions?
 
For the 12 Gbit/s card, I did not mean to get SAS 3.0 SSDs but to still use the Samsung 840 Pros. The reason for the card was Fusion-MPT 2.5 over 2.0. What do you think about the difference in MPT versions?
I haven't had a chance to use a card with the newer MPT.
I don't know if the advertised performance improvement of 1 million IOPS vs. the predecessor's 700k IOPS will be visible with normal drives under normal working conditions.

Usually, the benefit you will gain depends more on the architecture of your application software than on those spec-sheet differences.
 
So you expect all 8-16 drives to fail at once?

With RAID 0, if one drive fails, the whole array fails. So the more drives you add to the array, the greater the chance you'll have some sort of problem.
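To put a rough number on it (assuming independent failures and an illustrative 2% annual failure rate per drive; the exact rate is made up, only the trend matters):

Code:
# Chance that at least one drive in a RAID 0 set fails within a year.
afr = 0.02                      # assumed annual failure rate per SSD (illustrative only)
for n in (1, 8, 16):
    p_loss = 1 - (1 - afr) ** n
    print("%2d drives: %.1f%% chance of losing the whole array" % (n, 100 * p_loss))
# 1 drive: 2.0%, 8 drives: ~14.9%, 16 drives: ~27.6%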

Also, is this setup going to be used for anything, or is it more of a "Ha, look at what I did"?
 
I haven't had a chance to use a card with the newer MPT.
I don't know if the advertised performance improvement of 1 million IOPS vs. the predecessor's 700k IOPS will be visible with normal drives under normal working conditions.

Usually, the benefit you will gain depends more on the architecture of your application software than on those spec-sheet differences.

LSI DataBolt also looks good if I get the 12G model, as I can dump 16 SSDs on the one card. :)
 
... as I can dump 16 SSDs on the one card. :)
16 SSDs can't be connected directly to the LSI 9300 cards.
There are 2 connectors for 4-way fan-out cables (total: 8 SATA/SAS connectors). Alternatively, if you have external/internal enclosures, you can use the 4-lane cables.
 
Yes, it would involve a SAS expander.

Also, the LSI SAS 9207-8i and LSI SAS 9300-8i do not advertise RAID support, while the LSI SAS 9311-8i does. Do they all support RAID with the correct firmware loaded?
 
It's probably more cost effective to use 2x LSI 9300 cards to get 16 SSDs connected. (With the 9207-8i it was.)

Yeah, true.

Also, the LSI SAS 9207-8i and LSI SAS 9300-8i do not advertise RAID support, while the LSI SAS 9311-8i does. Do they all support RAID with the correct firmware loaded?
The firmware download includes both versions, but it is still a bit unclear.
 
I read up more on FastPath and Fusion-MPT: MPT uses an ARM processor to do the RAID 0, while FastPath offloads high-IOPS work to your CPU, much like software RAID does, providing higher performance than MPT, if I am reading it right. Have you tested any FastPath-enabled cards?

The firmware question is still odd, as the 9300_8i firmware download includes SA9300_8i_IT and SA9311_8i_IR. I would think LSI would only include firmware that works on that model of controller, as the ZIP name is specific to that model.

As well, using Fusion-MPT RAID 0, what is the maximum IOPS you have ever gotten out of one card? Not in HBA software RAID 0, but the firmware RAID.
 
Last edited:
Andy, have you had any luck ordering the new 12G HBAs? They are coming into stock within 3 weeks here. :)

Also, I spoke with some LSI partners; they say that the IR firmware is very limited in performance, as it is trying to run on top of a slow processor. Great for direct I/O and software RAID, but not much else. FastPath is good for high IOPS, as it is basically software RAID/drivers giving you the extra performance, but they could not provide any figures/data to back it up. Then again, they might just be trying to sell the higher-priced model. Thoughts on the above and my last post?
 
stenrulz,
a few comments on your questions. My experience is based on the PCIe 3.0 generation of LSI HBAs, the Adaptec 72405 (24-port RAID adapter) and the Adaptec 71605E HBA (both are PCIe 3.0), plus some older RAID controllers. For SSDs, my personal "arsenal" contains about 15 different SSD models for performance evaluation. Last year, I decided to settle on 48x Samsung 830 Pro 128 GB SSDs, which were the most cost-effective solution for my particular workload. This year I added 24x Samsung 840 Pro 256GB for the better write performance.

The number 1 thing you need to consider is: what workload will you be running against the array? File server? Fast game loading? Sorting? Sequential transfers with high or low CPU load? Relational databases? A web or mail server? Etc., etc.

There is no one-size-fits-all configuration. If you have your CPUs maxed out, any offloading of RAID-related work helps overall system performance (sometimes). If the CPUs are almost idling, then the software approach is often faster than the HW solution (CPUs tend to grow in performance faster than special-purpose RAID controllers).

What is the time to recover your data in case problems arise? Minutes, hours, weeks, don't care?
The answer to this question profoundly impacts your configuration.

Do you need screaming performance for a certain application? IOPS or bandwidth?
At what data availability level?

I hope you get the point I am trying to make here ....

To your questions:
1) I don't have an LSI MegaRAID controller, but comparing the Adaptec RAID adapter vs. the HBA adapter, the HBA was much faster from an IOPS perspective (thanks to the Intel CPU)
2) High IOPS is an interrupt-intensive process. For benchmarks you can crank out super-high rates; unfortunately, with CPUs reaching 80/90/100% utilization, no capacity is left to do useful work.
3) The LSI 2308-based controllers have good performance; all previous RAID controllers pale in comparison (for IOPS)
4) More important for me than any individual peak number is the "performance envelope": that the combined solution (controller/SSD) has good performance across many workloads.
5) The best performance is most often obtained by running the individual drives without any RAID functionality. If your app allows that, go for this approach.
6) What is the jitter of your solution? How much of your best performance is available at any point in time, or at any fill factor?
7) I've upgraded all my LSI cards to IR FW. No problems to report.

Sorry, have to run.
Andreas
 
I am unable to run the individual drives fully direct without any RAID; the closest would be FastPath. I would be looking at around 4 GB/s and 500,000 IOPS. Basically, I see my options as any of the following:
1) MegaRAID with FastPath
2) LSI HBA IR with hardware RAID
3) Adaptec hardware RAID

Thoughts?
 
I am unable to run the individual drives fully direct without any RAID; the closest would be FastPath. I would be looking at around 4 GB/s and 500,000 IOPS. Basically, I see my options as any of the following:
1) MegaRAID with FastPath
2) LSI HBA IR with hardware RAID
3) Adaptec hardware RAID

Thoughts?

It depends on your workload type. Adaptec's nature is for sure sequential workloads.
If you want pure random 4K QD1, go for Areca cards; they have even better performance than LSI at QD1. For higher QDs, Adaptec and LSI shine, depending on what you want.
If you want some help from the write cache, forget FastPath, because that key simply disables all the cache.
I would like to see the new Series 8; theoretically they will be 60% faster than Series 7, reaching 700k IOPS (of course at high QD).
 
stenrulz,
a few comments on your questions. My experience is based on the PCIe 3.0 generation of LSI HBAs, the Adaptec 72405 (24-port RAID adapter) and the Adaptec 71605E HBA (both are PCIe 3.0), plus some older RAID controllers. For SSDs, my personal "arsenal" contains about 15 different SSD models for performance evaluation. Last year, I decided to settle on 48x Samsung 830 Pro 128 GB SSDs, which were the most cost-effective solution for my particular workload. This year I added 24x Samsung 840 Pro 256GB for the better write performance.

The number 1 thing you need to consider is: what workload will you be running against the array? File server? Fast game loading? Sorting? Sequential transfers with high or low CPU load? Relational databases? A web or mail server? Etc., etc.

There is no one-size-fits-all configuration. If you have your CPUs maxed out, any offloading of RAID-related work helps overall system performance (sometimes). If the CPUs are almost idling, then the software approach is often faster than the HW solution (CPUs tend to grow in performance faster than special-purpose RAID controllers).

What is the time to recover your data in case problems arise? Minutes, hours, weeks, don't care?
The answer to this question profoundly impacts your configuration.

Do you need screaming performance for a certain application? IOPS or bandwidth?
At what data availability level?

I hope you get the point I am trying to make here ....

To your questions:
1) I don't have an LSI MegaRAID controller, but comparing the Adaptec RAID adapter vs. the HBA adapter, the HBA was much faster from an IOPS perspective (thanks to the Intel CPU)
2) High IOPS is an interrupt-intensive process. For benchmarks you can crank out super-high rates; unfortunately, with CPUs reaching 80/90/100% utilization, no capacity is left to do useful work.
3) The LSI 2308-based controllers have good performance; all previous RAID controllers pale in comparison (for IOPS)
4) More important for me than any individual peak number is the "performance envelope": that the combined solution (controller/SSD) has good performance across many workloads.
5) The best performance is most often obtained by running the individual drives without any RAID functionality. If your app allows that, go for this approach.
6) What is the jitter of your solution? How much of your best performance is available at any point in time, or at any fill factor?
7) I've upgraded all my LSI cards to IR FW. No problems to report.

Sorry, have to run.
Andreas
AndyE

I've been working with Adaptec controllers (still Series 6); yes, I know they are slow at 4K QD1, but I like their Storage Manager and stability. However, I need higher IOPS at QD1 (OS level). I sometimes saw my Adaptec hitting 40 MB/s at 4K QD1 with the buffer, while the controller itself simply doesn't reach that speed (it tops out at 15-20 MB/s).
With Series 8 on the horizon, and according to your experience with Series 7, what do you expect from it? The same story? I've been undecided between Adaptec and Areca. I know the Arecas are super fast. Any experience with that? Thank you.
 