I don't remember from the earlier discussion whether those IBM cards have a BIOS utility where you can set the timeouts, like my SuperMicro USAS-L8i cards allow. If you can set it to 120 seconds, you shouldn't have any problems if one of your disks later encounters an unreadable sector that prompts a recovery longer than 10 seconds. Ten seconds is usually the point at which strict RAID firmware disconnects the disk and considers it failed, which is rather ridiculous to be honest, especially considering that higher-capacity drives (2TB and up) encounter uncorrectable bit errors (uBER) more frequently.
sub.mesa is correct. I've got two of these cards running in my main Solaris 11 Express NAS with napp-it, and another in a test Nexenta Community Edition box that I'm considering for a disaster recovery server to be housed in my detached garage.
All three cards are working great on my systems. If any of you have read my thread in this forum about my server build-up, I was having a hell of a time with bad hard drives and bad M1015's (the eBay seller sent me replacements, and those are working great). With the bad drives attached and the dropout timeout set to 120 seconds in the BIOS, I haven't had a single problem with drives dropping off unexpectedly. Then again, my testing has been very limited, so take my experience with a grain of salt.
My SuperMicro controller displays text during boot (technically: during POST) telling you a key combination (Alt+D or something) to enter the controller setup, i.e. the controller's own BIOS, much like RAID adapters have. When I enter that environment I get a blue background, and it's not called WebBIOS. So you are probably running different firmware, and the problem is that, as I understand it, the IBM cannot be flashed with IT firmware.

Not the news you've been hoping for, I guess.

If these cards cannot run IT-mode firmware, you run the risk of disks detaching (disappearing) when they encounter active bad sectors. You should be fine, though, as long as your disks do not have bad sectors and they never fart.

In this configuration, I would pay special attention to your backup. Do you have a 1:1 backup of your data? Is the backup of high quality? Can you rely on your backup if your primary server fails completely?
Did Nexenta or Solaris recognize the card without any additional driver work?
Some stuff I've read suggests that these drivers are flaky in Solaris/Nexenta right now. :/
The MegaRAID cards like the 9240/M1015 even use a different command-line utility from the lower-end cards, called MegaCLI or something like that, as opposed to sasflash/sas2flash. It could be that the timeout setting is done through that.
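If anyone wants to poke around themselves, a minimal sketch (assuming MegaCLI is installed and the card is adapter 0; whether the drive-dropout timeout is exposed here is exactly what I don't know):

    # dump everything the firmware reports about adapter 0,
    # including its configurable properties
    MegaCli -AdpAllInfo -a0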
I found someone on here who said the M1015 in OpenSolaris/OpenIndiana was flaky at best with the imr_sas driver. I never said that myself; I've got no experience with it.

No, I had to download the imr_sas driver directly from LSI's website to get them both up and running. After unzipping and doing a simple "pkgadd -d ." command, I was up and running after a restart.
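For anyone following along, the install boiled down to something like this (a rough sketch; the archive and directory names here are hypothetical, the real ones depend on the driver version on LSI's site):

    # on the Solaris/Nexenta box, from wherever you saved the download
    unzip imr_sas_driver.zip      # hypothetical archive name
    cd imr_sas                    # hypothetical unpacked directory
    pkgadd -d .                   # install the driver package from the current directory
    init 6                        # restart so the new driver attaches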
You used the term "flakey" with regard to these drivers in another thread, if I remember correctly, but I don't recall you ever posting anything concrete or consistent. acesea in this thread said the exact same thing about OpenIndiana, which I have no experience with. There are two other web links in that thread on page three that detail a kernel panic (in OpenSolaris), a ZIL failure (OpenSolaris and Nexenta), and two cards that dropped dead with no explanation of how they failed. Mind you, all of these accounts of failure date from around May 2010, with an older set of drivers. The latest drivers were released at the turn of the year, nearly seven months later.
I'm pretty sure you can look up any computer product on the market on Newegg, read the reviews, and discover that every product has its outliers when it comes to reliability.
Not that my experience is the authority on the matter, but I've passed nearly 10TB of data through my systems so far without a single hiccup. Again, I've only done minimal testing with failing hard drives to see if I could get the system to drop a drive. Everything has been running like a top.
I believe you can make the change through MegaCLI, but I did it through the add-on utility called MegaRAID Storage Manager. Go to the following link:
http://www.lsi.com/storage_home/pro...as/entry_line/megaraid_sas_9240-8i/index.html
Click on Support and Downloads and you'll be presented with all of the drivers for the card, the latest firmware, and the MegaRAID Storage Manager (near the bottom). Installing it is a matter of running an install.sh script and rebooting. It can't get much easier than that. The cool thing about the storage manager is that once you install the software on your server, you can take the client portion of the software and install it on your end-user machine to gain access to the HBA's settings from there. Very convenient.
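Roughly, the server-side install looks like this (a sketch; the archive name and unpack directory are hypothetical and depend on the MSM version you grab):

    # unpack the MegaRAID Storage Manager download and run its installer
    tar -xzf MSM_linux_installer.tar.gz   # hypothetical archive name
    cd disk                               # hypothetical unpack directory
    ./install.sh                          # the install script mentioned above
    reboot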
I'd appreciate it if somebody else started testing these cards as well, to begin ruling out the "flakey" stuff that was happening with previous driver revisions... it helps me in the long run too. Give me one solid speck of evidence showing these cards are bunk and I'll run to eBay and post these guys up ASAP :-D
This is why I've developed the habit of always doing my own testing. There's too much FUD on the internet, often spread by people who don't necessarily know what they're doing and who make false assumptions without bothering to do further testing; Newegg reviews are a great example of that, as was mentioned. Before I'm ready to declare something flakey, I'm going to do a lot of process-of-elimination and isolation testing, to make sure I'm not making assumptions about what might have been a bad piece of B-stock/refurb hardware to begin with, or bad cabling, or bad drives, or a bad motherboard/CPU/memory, etc.

Exactly for that reason, I've been running two new retail-boxed 9240-8i's side-by-side with my M1015's as a behavior model. They've thus far been identical and trouble-free on Solaris 11 Express, so I take any vague reports of flakiness with a big grain of salt.
I can confirm that my ASUS P8H67-M EVO is not enamored with the M1015 (in the x4 slot, to clarify) at the moment.
I hear you about budget/means. In my case, I'm trying to put together a more comprehensive review/guide for it with pjkenned. I'm testing 25 M1015's along with 9240's and 9211's, plus HP and Intel expanders and about 50 drives; that should be done in a couple of weeks.
Easiest for you might just be to snag a 9211-8i off eBay. I thought sub.mesa said it's supported in the latest FreeBSD, or at least in his latest ZFSguru. Or if you're buying a new mobo, a Supermicro X8SI6 has one built in.
Yet another example of FUD (great word, odditory!). No specifics whatsoever, and no process of elimination to determine whether something else in the mix is at fault. This is a pretty easy one, though. Your problem is that the card is a PCIe x8 device that you are trying to use on an x4 interface (in an x16 physical slot). The solution is to put the M1015 in the true x16 slot, and you should be fine.
Speaking of FUD... you do know that you can run a PCIe x8 electrical card in a PCIe x16 physical slot that is only x4 electrical, right? You do know that your effective bandwidth is limited, but it works?
In this case, I believe the answer to that question is no, especially when the LSI manual says it's a true x8 card. There's no mention in the manual that you can use a lower-bandwidth slot. HBAs aren't like video cards, where you can plug them into a half-bandwidth slot and expect them to work just fine (only slower).
Specifically, what are the problems you are seeing? Are you seeing slow transfer rates? Is it POSTing? I don't want to sound like a dick, but details are needed to even attempt to assist you.
It works in the x4 slot off the Intel 3420 (a similar DMI bus configuration to Sandy Bridge), so it is not an x8 vs. x4 issue. Sorry. Feel free to read the PCIe spec for an explanation of why this works.
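For what it's worth, you can check the negotiated link width yourself from Linux (a sketch; 01:00.0 is just an example PCI address, use whatever lspci reports for the LSI controller on your system):

    # find the controller's PCI address
    lspci | grep -i lsi

    # compare what the card can do (LnkCap) with what was negotiated (LnkSta)
    lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'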
Don't worry, I'm not asking for help; I am more than capable of handling this one. Just watch what you post, since someone finding this via Google may not know better. FYI, motherboard/RAID-card compatibility is not a given, especially with consumer boards.
LOL, you've got the wrong guy, putting pjkenned and FUD in the same sentence. Not only is he the force behind ServeTheHome.com and the how-to guides there, but he enjoys painting, mystery novels and illegal dogfighting. He's also who I'm collaborating with on the M1015 writeup that will appear on his site.
I just now saw the link in your sig. Are you Patrick? Great web site with a buttload of information. Keep it up!
I stand corrected, but you still didn't define "enamored". Given your findings, Odditory's, _gea's, and sub.mesa's on this forum, we all know how touchy ZFS performance is relative to hardware choices. In answering you, I assumed "enamored" meant you've got it running but it's not optimal. What exactly is the M1015 doing with that particular motherboard? I'm curious, for the sake of learning more about the M1015.
Correction. WRT sub.mesa's comments on the M1015 in this thread, he is writing speculation about what he thinks might be an issue; there is no evidence in his posts that he has ever tried this card for anything.
So basically you're saying that even in RAID mode, disks not configured as part of an array (thus lacking metadata) are served directly to the OS without interference from the RAID firmware?

You're making the assumption that a 9240/M1015 treats every disk as a RAID disk; it doesn't. Gone are the days of "simulated" JBOD on a RAID controller, where you had to configure individual disks as separate RAID-0's. The ERC timeout issue is therefore irrelevant for JBOD disks on the 9240/M1015, because the default behavior of the iMR light RAID stack (in the last several firmware revisions) is to treat unconfigured disks as JBOD/dumb mode and not apply its RAID-oriented ruleset to them.
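Anyone who wants to verify that on their own card can do so in about one line (a sketch; assumes MegaCLI is installed and the card is adapter 0):

    # unconfigured JBOD-style disks should report a firmware state along the
    # lines of "Unconfigured(good)" rather than Online/part of an array
    MegaCli -PDList -a0 | grep -i 'firmware state'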
This should be tested with a disk showing Current Pending Sector in its SMART output: read the entire drive, which will hit the unreadable sector at some point. If the disk suddenly disappears, that's the BAD outcome; if the I/O just times out/resets/fails but the disk stays attached and other sectors can still be read afterwards, that's the GOOD result we all want.
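A minimal sketch of that test (assumes smartmontools is installed and /dev/sdX is the suspect disk behind the controller; substitute your own device node):

    # confirm the disk actually has pending (currently unreadable) sectors
    smartctl -A /dev/sdX | grep -i current_pending

    # read the whole drive sequentially; watch whether the device node survives
    # the read error (GOOD) or disappears from the system entirely (BAD)
    dd if=/dev/sdX of=/dev/null bs=1M conv=noerror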
Curiously, as I continue to read sub.mesa's posts on ZFS, he is becoming more and more convincing: convincing me that outside of Sun/Oracle-supported configurations, ZFS is bug-filled, fragile, and not yet ready. I'm amused every time he writes things like "ZFS exercises the weaknesses of SAS expanders", etc., when what he really should be writing is "SAS expanders expose the fragility of ZFS".