IBM ServeRAID M1015 Solaris JBOD?

Mindflux

Anyone using these for ZFS? I've been reading about some driver stability issues. Any personal experience here? Odditory?
 
I have been running 3 of them in Linux for a while now, and they are stable with the latest megaraid_sas drivers (the ones that came with the 2.6.37 kernel). I have not had any issues since upgrading the firmware on the M1015s to the latest (Dec 2010) LSI 9240-8i firmware.
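If anyone wants to compare notes, a couple of quick checks on Linux will show which driver and firmware you're actually running (a rough sketch; the exact output varies by distro and kernel):

# driver version the kernel module reports
modinfo megaraid_sas | grep -i version

# controller/firmware details logged at boot
dmesg | grep -i megaraid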
 
If these cannot work in IT-mode firmware, you would run the risk of disks detaching (disappearing) when they encounter active bad sectors. You should be fine though, as long as your disks do not have bad sectors and they never fart.

In this configuration, I would pay special attention to your backups. Do you have a 1:1 backup of your data, and is the quality of that backup high? Can you rely on your backup if your primary server fails completely?
 
Even in JBOD they're no good, huh? What about the SAS 2008 chips on some Supermicro boards? They can be flashed to IT mode, can't they?

http://www.servethehome.com/howto-flash-supermicro-x8si6f-lsi-sas-2008-controller-lsi-firmware/

Is the M1015 just vendor locked or something?
 
Yes, the M1015 is vendor locked and can't be flashed to IT firmware. It's more of a rebadged 9240 than a 9211, so technically there really *isn't* an IT firmware to flash it to. The best you can do is flash the LSI 9240-8i firmware onto it (which doesn't accomplish much, since the firmware from IBM is apparently the same revision), but even then it will still identify itself as an M1015. I believe Odditory has been running them on Solaris with the LSI 9240 driver, so maybe he can comment.

I would hope that even RAID firmware would be smart enough to realize that a drive marked as being in JBOD mode shouldn't be dropped due to lack of TLER. TLER only makes sense for RAID arrays, but it's probably easier for them to just have one timeout value for everything rather than a per-disk setting.
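For reference, on drives that support it you can check (and on some models set) the error-recovery timeout from the OS with a reasonably recent smartmontools. A rough sketch, with /dev/sdX as a placeholder and no guarantee your particular drives accept it:

# show the drive's current SCT ERC (TLER) read/write timeouts, if supported
smartctl -l scterc /dev/sdX

# example: cap recovery at 7.0 seconds (values are in tenths of a second)
smartctl -l scterc,70,70 /dev/sdX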
 
I don't remember from previous discussions whether those IBM cards have a BIOS utility where you can set the timeouts, like I can on my SuperMicro USAS-L8i cards. If you can set it to 120 seconds, you shouldn't have any problems when one of your disks eventually encounters an unreadable sector that needs more than 10 seconds of recovery time; 10 seconds is usually the point at which strict RAID firmware disconnects the disk and considers it failed, which is rather ridiculous to be honest, especially since higher-capacity drives (2TB) encounter uncorrectable bit errors (uBER) more frequently.
 
sub.mesa is correct. I've got two of these cards running in my main Solaris 11 Express NAS with napp-it and another one in a test Nexenta Community Edition that I'm considering for a disaster recovery server that will be housed in my detached garage.

All three cards are working great on my systems. If any of you have read my thread in this forum about my server build-up, I was having a hell of a time with bad hard drives and bad M1015s (the eBay seller sent me replacements, and those are working great). With the bad drives attached and the dropout timeout set to 120 seconds in the BIOS, I haven't had a single problem with drives dropping off unexpectedly. Then again, my testing has been very limited, so take my experience with a grain of salt.
 
Did Nexenta or Solaris recognize the card without any additional driver work?

Some stuff I've read seems to reveal that these drivers are flaky in Solaris/Nexenta right now. :/
 
Is the timeout setting done through the BIOS settings screen, or do you have to run some sort of special utility (e.g. sas2flash, megacli or similar) to set it? I have an M1015 flashed with the latest LSI firmware and I couldn't find anything in the BIOS setup (I think it's called WebBIOS) that looked like a timeout value.
 
My SuperMicro controller displays text during boot (technically, POST) telling you the key combination (Alt+D or something) to enter the controller setup, like the BIOS that RAID adapters have. When I enter that environment I get a blue background, and it's not called WebBIOS, so you're probably running different firmware. The problem is that the IBM can't be flashed with IT firmware, as I understand it.

Not the news you've been hoping for, I guess. :(
 
That's correct. The M1015 uses the 9240 RAID firmware and there currently does not appear to be a way to flash it with IT firmware (the sas2flash utility doesn't even see the card).

The BIOS/POST setup screen (I think you hit CTRL-C or something to get into it) is much more elaborate than the IT firmware setup on my 1068e-based HBAs, but I didn't see any timeout settings last time I looked.

The MegaRAID cards like the 9240/M1015 even have a different command-line utility than the lower-end cards, called megacli or something like that as opposed to sasflash/sas2flash. It could be that the timeout setting is done through that.
 
If these cannot work in IT-mode firmware, you would run the risk of disks detaching (disappearing) when they encounter active bad sectors. You should be fine though, as long as your disks do not have bad sectors and they never fart.

In this configuration, I would pay special attention to your backups. Do you have a 1:1 backup of your data, and is the quality of that backup high? Can you rely on your backup if your primary server fails completely?

You're making the assumption that a 9240/M1015 treats any disk as a RAID disk; it doesn't. Gone are the days of "simulated" JBOD on a RAID controller by having to configure individual disks as separate RAID-0s. The ERC timeout issue is therefore irrelevant for JBOD disks on the 9240/M1015, because the default behavior of the iMR light RAID stack *in the last several firmware revs* is to treat unconfigured disks as JBOD/dumb mode and not apply its RAID-oriented ruleset to them, including the ERC timeout. They are true passthrough and the card does not mark them with any metadata. When disks aren't part of a raidset the card simply doesn't interfere or intervene; there's nothing *for* it to error correct on a pass-through disk because there's no other duplicate or parity data to correct it *with*.
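If you want to sanity-check that on your own card, something along these lines with MegaCLI should show the disks as unconfigured/JBOD rather than as array members (a sketch only; the install path and the exact field names vary by OS and firmware rev):

# list physical disks and their state; passthrough disks should not show any array membership
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | grep -Ei "slot number|firmware state"

# dump adapter properties; look for the JBOD-related settings
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | grep -i jbod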

Is target-mode (IT) firmware technically cleaner for JBOD-based striped arrays like ZFS, given that it removes some complexity and perceived overhead? Perhaps, probably, but it's still arguable, as performance is pretty much identical between a 9211 with IT firmware and an M1015 in my testing with Solaris 11 Express and raidz2. Is the M1015 for everyone? No, only if you want to save a few bucks over a 9211-8i and ARE FEELING ADVENTURESOME, because while results in my own testing have been positive, it needs more testing by more people before anyone stamps a guarantee on it for ZFS use.

But you're right about keeping good backups.
 
Did Nexenta or Solaris recognize the card without any additional driver work?

Some stuff I've read seems to reveal that these drivers are flaky in Solaris/Nexenta right now. :/

No, I had to download the imr_sas driver directly from LSI's website to get them both up and running. After unzipping and doing a simple "pkgadd -d ." command, I was up and running after a restart.
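For anyone else doing this, the sequence was roughly as follows (archive and directory names are placeholders; use whatever the LSI download actually extracts to):

# unpack the Solaris driver package from LSI
unzip imr_sas_solaris.zip
cd imr_sas_solaris

# install the driver package from the current directory, then reboot
pkgadd -d .
init 6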

You've used the term "flakey" with regard to these drivers in another thread, if I remember correctly, but I don't recall you ever posting anything concrete or consistent. acesea in this thread said the exact same thing about OpenIndiana, which I have no experience with. There are two other web links in that thread on page three that detail a kernel panic (in OpenSolaris), a ZIL failure (OpenSolaris and Nexenta), and two cards that dropped dead with no explanation of how they failed. Mind you, all of these accounts of failure were also from around May 2010 with an older set of drivers. The latest drivers were released at the turn of the year, nearly seven months later.

I'm pretty sure you can pick any computer product on the market via Newegg, read the reviews, and discover that every product has its outliers when it comes to reliability.

Not that my experience is the authority on the matter, but I've passed nearly 10TB of data through my systems so far and not a single hiccup. Again, I've also done minimal testing with failing hard drives to see if I could get the system to do a drive dropout. Everything has been running like a top.

The MegaRAID cards like the 9240/M1015 even have a different command-line utility than the lower-end cards, called megacli or something like that as opposed to sasflash/sas2flash. It could be that the timeout setting is done through that.

I believe you can make the change through MegaCLI, but I did it through the add-on utility called MegaRAID Storage Manager. Go to the following link:

http://www.lsi.com/storage_home/pro...as/entry_line/megaraid_sas_9240-8i/index.html

Click on Support and Downloads and you'll be presented with all of the drivers for the card, the latest firmware, and the MegaRAID Storage Manager (near the bottom). Installing it is a matter of running an install.sh script and rebooting. It can't get much easier than that. The cool thing about the storage manager is that once you install the software on your server, you can take the client portion of the software and install it on your end-user machine to gain access to the HBA's settings from there. Very convenient.
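In case it helps, the install boils down to something like this (file and directory names are placeholders from memory; go by whatever the MSM download actually contains):

# unpack the MegaRAID Storage Manager package and run the installer mentioned above
tar xzf msm_linux.tar.gz
cd msm_linux
./install.sh

# reboot, then point the MSM client on your desktop at the server's IP
init 6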


I'd appreciate it if somebody else started testing these cards as well to begin ruling out the "flakey" stuff that was happening with previous driver revisions.... as it helps me in the long run too. Give me one solid speck of information that shows these cards are bunk and I'll run to eBay and post these guys up ASAP :-D
 
I found someone on here that said the M1015 in OpenSolaris/OpenIndiana was flaky at best with the imr_sas driver. I never said that myself. I've got no experience with it.

I bought an M1015 off a guy here on [H], but I had him hold on to it since I started reading some "not so good" info about it. Now I'm kind of torn between taking it or just making do with a BR10i or something.
 
This is why I've developed the habit of always doing my own testing; there's too much FUD on the internet, often from people who don't necessarily know what they're doing and who make a lot of false assumptions without bothering to do further testing. Newegg reviews are a great example of that, as was mentioned. Before I'm ready to declare something flakey, I'm going to do a lot of process-of-elimination and isolation testing to make sure I'm not making assumptions about what might have been a bad piece of b-stock/refurb hardware to begin with, or bad cabling, or bad drives, or a bad motherboard/CPU/memory, etc.

And exactly for that reason I've been running two new retail-boxed 9240-8i's side by side with my M1015's as a behavior model. They've thus far been identical and trouble-free on Solaris 11 Express, so I take any vague reports of flakiness with a big grain of salt.
 
I'd appreciate it if somebody else started testing these cards as well to begin ruling out the "flakey" stuff that was happening with previous driver revisions.... as it helps me in the long run too. Give me one solid speck of information that shows these cards are bunk and I'll run to eBay and post these guys up ASAP :-D

Well, I'm waiting for a few more pieces to arrive, but later this week I plan on reconfiguring my backup server with a new motherboard, and I can give my M1015 another try. Right now I just use this box to back up my main server, which is running FreeBSD 8.2/ZFSguru, so it's not the end of the world if the array gets trashed. I will have to shift from FreeBSD to SE11/OI though, as I can't confirm whether this card is known to work in FreeBSD or not (it definitely doesn't work for me, but I don't know if it's the driver or a bad card).
 
I can confirm that my ASUS P8H67-M EVO is not enamored with the M1015 (in the x4 slot, to clarify) at the moment.
 
This is why I've developed the habit of always doing my own testing; there's too much FUD on the internet, often from people who don't necessarily know what they're doing and who make a lot of false assumptions without bothering to do further testing. Newegg reviews are a great example of that, as was mentioned. Before I'm ready to declare something flakey, I'm going to do a lot of process-of-elimination and isolation testing to make sure I'm not making assumptions about what might have been a bad piece of b-stock/refurb hardware to begin with, or bad cabling, or bad drives, or a bad motherboard/CPU/memory, etc.

And exactly for that reason I've been running two new retail-boxed 9240-8i's side by side with my M1015's as a behavior model. They've thus far been identical and trouble-free on Solaris 11 Express, so I take any vague reports of flakiness with a big grain of salt.

That's fine when you have the budget/means for it. But I need to pick a card that's going to be reliable and keep my home NAS/SAN running with hopefully no interruptions outside of some random part failure. So when I read that some folks are having a problem with it that I may not want to debug myself, I'm more likely to shy away from it. Of course, if I read every review about every part I'd never get a chance to order anything. I guess the law of averages comes into play there.
 
I hear you about budget/means. In my case I'm trying to put a more comprehensive review/guide together for it with pjkenned. I'm testing 25 x M1015's along with 9240's and 9211's plus HP and Intel expanders and about 50 drives, which should be done in a couple weeks.

Easiest for you might just be to try snagging a 9211-8i off eBay. I thought sub.mesa said it's supported in the latest FreeBSD, or at least in his latest ZFSguru. Or if you're buying a new mobo, then a Supermicro X8SI6 has one built in.
 
I can confirm that my ASUS P8H67-M EVO is not enamored with the M1015 (in the x4 slot, to clarify) at the moment.

Yet another example of FUD (great word, odditory!). No specifics whatsoever and no reflection on a process of elimination to determine whether or not something else within the mix is at fault. This is a pretty easy one though. Your problem is the fact that the card is a PCIe x8 device that you are trying to use on an x4 interface (that's in an x16 physical slot). The solution to the problem is to put the M1015 in the true x16 slot and you should be fine.
 
I'm testing 25 x M1015's along with 9240's and 9211's plus HP and Intel expanders and about 50 drives, which should be done in a couple weeks.

Damn! I didn't realize you were doing this. Definitely looking forward to reading about your findings.
 
I hear you about budget/means. In my case I'm trying to put a more comprehensive review/guide together for it with pjkenned. I'm testing 25 x M1015's along with 9240's and 9211's plus HP and Intel expanders and about 50 drives, which should be done in a couple weeks.

Easiest for you might just be to try snagging a 9211-8i off eBay. I thought sub.mesa said it's supported in the latest FreeBSD, or at least in his latest ZFSguru. Or if you're buying a new mobo, then a Supermicro X8SI6 has one built in.


Yeah but for that money I can buy a whole system board with a SAS 2008 on it. ;)
 
Yet another example of FUD (great word, odditory!). No specifics whatsoever and no reflection on a process of elimination to determine whether or not something else within the mix is at fault. This is a pretty easy one though. Your problem is the fact that the card is a PCIe x8 device that you are trying to use on an x4 interface (that's in an x16 physical slot). The solution to the problem is to put the M1015 in the true x16 slot and you should be fine.

Holy... actually I am starting to think it is the SMBus issue, since the ASUS board is a consumer motherboard. This is nothing new, especially on non-server ASUS boards. BTW, I have tried this card in two LGA 1366 platforms, four LGA 1156 platforms, and now three LGA 1155 platforms. Still working on a solution, but I figure it is worthwhile to post since this card is a cheap alternative to using the onboard Cougar Point ports.

Speaking of FUD... you do know that you can run a PCIe x8 electrical card in a PCIe x16 physical slot that is PCIe x4 electrical... right? You do know that your effective bandwidth is limited but it works?
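For what it's worth, on Linux you can see both what the card is capable of and what the slot actually negotiated. A rough sketch, assuming the controller shows up under the usual LSI vendor ID of 1000 (run it as root):

# LnkCap = link width the card supports, LnkSta = width the slot actually trained at
lspci -vv -d 1000: | grep -E "LnkCap:|LnkSta:"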
 
My head hurts.
 
Speaking of FUD... you do know that you can run a PCIe x8 electrical card in a PCIe x16 physical slot that is PCIe x4 electrical... right? You do know that your effective bandwidth is limited but it works?

In this case, I believe the answer to this question is no, especially when the LSI manual says that it's a true x8 card. There's no mention in the manual that you can use a lower bandwidth slot. HBAs aren't like video cards where you can plug them into a half bandwidth slot and expect them to work just fine (but at a slower rate).

Specifically, what are the problems you are seeing? Are you seeing slow transfer rates? Is it POSTing? I don't want to sound like a dick, but details are needed to even attempt to assist you.
 
It works in the x4 slot off the Intel 3420 (similar DMI bus config to Sandy Bridge), so it is not an x8 vs. x4 issue. Sorry. Feel free to read the PCIe spec for an explanation of why this works.

Don't worry, I'm not asking for help; I am more than capable of handling this one. Just watch what you post, since someone using Google will see what you write and may not know better. Motherboard/RAID-card compatibility is not a given, especially with consumer boards, as an FYI.
 
Yet another example of FUD (great word, odditory!). No specifics whatsoever and no reflection on a process of elimination to determine whether or not something else within the mix is at fault. This is a pretty easy one though. Your problem is the fact that the card is a PCIe x8 device that you are trying to use on an x4 interface (that's in an x16 physical slot). The solution to the problem is to put the M1015 in the true x16 slot and you should be fine.

LOL, you got the wrong guy putting pjkenned and FUD in the same sentence. Not only is he the force behind ServeTheHome.com and the how-to guides there, but he enjoys painting, mystery novels and illegal dogfighting. He's also who I'm collaborating with for the M1015 writeup that will appear on his site.
 
It works in the x4 slot off the Intel 3420 (similar DMI bus config to Sandy Bridge), so it is not an x8 vs. x4 issue. Sorry. Feel free to read the PCIe spec for an explanation of why this works.

Don't worry, I'm not asking for help; I am more than capable of handling this one. Just watch what you post, since someone using Google will see what you write and may not know better. Motherboard/RAID-card compatibility is not a given, especially with consumer boards, as an FYI.

I just now saw the link in your sig. Are you Patrick? Great web site with a buttload of information. Keep it up!

I stand corrected, but you still didn't define "enamored". Given your findings, Odditory's, _gea's, and sub.mesa's on this forum, we all know how touchy ZFS performance is relative to hardware choices. In giving my answer to you, I automatically assumed that enamored meant you've got it running, but it's not optimal. What exactly is the M1015 doing with that particular motherboard? I'm curious for the sake of learning more about the M1015.
 
LOL, you got the wrong guy putting pjkenned and FUD in the same sentence. Not only is he the force behind ServeTheHome.com and the how-to guides there, but he enjoys painting, mystery novels and illegal dogfighting. He's also who I'm collaborating with for the M1015 writeup that will appear on his site.

Yeah yeah, I just figured that out. :p

Patrick is the Chuck Norris of home server storage IMO. Both of you let me know if you need anything from my end with regard to testing the M1015 with Solaris 11 Express in an ESXi 4.1 environment and in a bare-metal Nexenta environment.
 
I just now saw the link in your sig. Are you Patrick? Great web site with a buttload of information. Keep it up!

I stand corrected, but you still didn't define "enamored". Given your findings, Odditory's, _gea's, and sub.mesa's on this forum, we all know how touchy ZFS performance is relative to hardware choices. In giving my answer to you, I automatically assumed that enamored meant you've got it running, but it's not optimal. What exactly is the M1015 doing with that particular motherboard? I'm curious for the sake of learning more about the M1015.

Correction. WRT sub.mesa's comments in this thread on the M1015, he is writing his speculation about what he thinks might be an issue... there is no evidence in his posts that he has ever tried this card for anything.

Curiously, as I continue to read sub.mesa's posts on ZFS, he is becoming more and more convincing: convincing me that outside of Sun/Oracle-supported configurations, ZFS is bug-filled, fragile, and not yet ready. I'm amused every time he writes things like "ZFS exercises the weaknesses of SAS expanders", etc., when what he really should be writing is "SAS expanders expose the fragility of ZFS".
 
My comment was with respect to ZFS experience in general, not just the M1015.

This is definitely digressing from M1015 discussion though.
 
You're making the assumption that a 9240/M1015 treats any disk as a RAID disk; it doesn't. Gone are the days of "simulated" JBOD on a RAID controller by having to configure individual disks as separate RAID-0s. The ERC timeout issue is therefore irrelevant for JBOD disks on the 9240/M1015, because the default behavior of the iMR light RAID stack *in the last several firmware revs* is to treat unconfigured disks as JBOD/dumb mode and not apply its RAID-oriented ruleset to them
So basically you're saying that even in RAID mode, the disks not configured as being part of an array (thus lacking metadata) would be served directly to the OS without interference from the RAID firmware?

If that's true, there would be no issue using these for ZFS. I can only share my own experience with an Areca ARC-1230 hardware RAID with 1.43 firmware, where both "passthrough" disks and "JBOD" mode would disconnect disks on bad sectors, as I personally experienced multiple times. I haven't had such issues for at least two years now with my USAS-L8i controllers on IT-mode firmware, even though the disks did develop new bad sectors.

So I guess the real problem is that controller behavior differs and we don't have a clear picture of how controllers vary in timeout behavior. Either you run in non-RAID mode without any problems, or you run in RAID mode, which is problem-free only on specific controllers and/or specific firmware.

To be honest... this is a nightmare! People just want to buy a controller and make it work. A lot of controllers have some RAID functionality, and thus you have to ask the question of how they will deal with timeouts. Personally I recommend using only well-tested controllers, just to avoid any potential headaches. Buying the wrong controller can be quite a bummer.

But if you're right about the M1015 having no problems with long timeouts even on RAID-mode firmware, then that is great news! Just an idea: perhaps you could create a list of controllers and how they deal with timeouts, i.e. whether or not it is safe to use non-TLER disks on them. This should be tested with disks that have Current Pending Sectors in their SMART output, by reading the entire drive. That will hit the unreadable sector at some point; if the disk suddenly disappears, that's the BAD result, and if the I/O just times out/resets/fails but the disk stays present and other sectors can still be read afterwards, that's the GOOD result we all want.
 
I've still got a couple crap drives laying around with CPSs. Give me a set of commands to run and I will test this. One of the drives had a head strike (external drive that took a dive off my desk while hooked up), so it dies nicely after hitting the bad spots.
 
The easiest would just be a read of the entire surface:

dd if=/dev/ad6 of=/dev/null bs=1m conv=noerror

Replace ad6 with the raw device name. Check the SMART data first to ensure the drive has Current Pending Sectors. Then run the command; if it completes with the device still present, your controller rocks. If the device is gone, the controller sucks (or the FreeBSD driver does, theoretically).
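Putting the whole test together, something like this would do it on FreeBSD (smartctl is from the smartmontools port; if the disk is on the legacy ata driver, use atacontrol list instead of camcontrol devlist):

# confirm the drive really has pending (unreadable) sectors before the test
smartctl -A /dev/ad6 | grep -i current_pending

# read the entire surface, ignoring read errors
dd if=/dev/ad6 of=/dev/null bs=1m conv=noerror

# the disk should still be attached afterwards, and SMART should still respond
camcontrol devlist
smartctl -A /dev/ad6 | grep -i current_pending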
 
Great thread, as usual. I've been waiting for a discussion on this card.

As for consumer motherboard incompatibility with the 9240/M1015, there is an old thread here that I came across after experiencing the same issue myself.

In a couple of days I will be testing an AMD 785G-based motherboard next, and eventually the SuperMicro X9SCM-F when I can get my hands on one.
 
Curiously, as I continue to read sub.mesa's posts on ZFS, he is becoming more and more convincing: convincing me that outside of Sun/Oracle-supported configurations, ZFS is bug-filled, fragile, and not yet ready. I'm amused every time he writes things like "ZFS exercises the weaknesses of SAS expanders", etc., when what he really should be writing is "SAS expanders expose the fragility of ZFS".

I'd change that to: ZFS is capable of exposing weaknesses in any attached storage system. I can hit peaks of over 1 GByte/sec of local disk I/O with an SSD ZIL.

I have finally achieved a stable design of IT-mode controllers, SMC expanders, and carefully selected disks (ST32000542 or ST32000444SS).

I avoid all non-Enterprise WDs. ST32000641s don't work with SMC 3Gb/sec expanders.

My experiences with 6Gb/sec controllers (mixed 3/6 expanders etc.) have all been bad.
A single 3Gb/sec device drops the speed of all devices with most expanders.
 
Emulsifide: That is me (I made my user name on STH before anyone else registered!). Thanks for the kind words and the offer of help.

sub.mesa: Pass-through is handled differently from controller (manufacturer) to controller (manufacturer). For example, the Adaptec 3 and 5 series both write data to JBOD/pass-through disks, last time I checked, which is why one could not move a disk from Adaptec to LSI. Going from LSI IT mode (as an example) to Areca and back, the drives ended up being plug-setup-and-play since no metadata was written.

On the idea of easy controllers, that is a big push for my site/forums. odditory and others are helping out, but since driver support outside of the Windows world is not universal, and because people use consumer boards (SMBus issues with a "dumb" HP SAS Expander on an ASUS P6T7 WS SuperComputer are a good example), saying one controller just works, is inexpensive, and performs well is no easy task. The goal of the STH forums is to build a community that is somewhat platform-agnostic in the advice given. A user can then ask about ZFS somewhere other than the Oracle/Nexenta/OpenIndiana/FreeNAS/FreeBSD forums, where there would be an obvious bias, or another platform's forums (e.g. unRAID), where the thought of ZFS would raise ire with its use of RAID-Z2 over unRAID's RAID 4. The goal is to provide a place where one can answer the two questions, which OS and what hardware, in one spot.

Still a lot of work left on the forums, but thanks to the awesomeness of people like odditory and others, it is getting there.

BTW if anyone has anything they think would be a useful guide either for the main site, a forum post, or both, feel free to post in the Article Suggestion forums. I will probably start opening up to some user submissions for main page posts (odditory is one example of someone who has talked to me about this) in the near future.

[H] is awesome, but also having a place that is less focused on "what one hard drive or solid state drive should I use in my PC" and more focused on the dual questions of which OS and what hardware for external storage subsystems is useful from an organizational perspective, at minimum to augment what we have here.
 