Do I really need a SAS expander card?

alphabanks

Guys, I would like to talk about SAS expanders. Right now I have an IBM M1015; the controller can support up to 32 drives. However, can you connect more drives without getting into a SAS expander? I ask because my board only has one PCI Express slot, which my RAID controller already occupies. I was hoping they simply made some type of cable that would allow this.
 
Why don't you limit your questions to one thread? That is much easier.

I'm not a fan of expanders or port multipliers. If you have a motherboard with only one PCI-express slot, you bought the wrong hardware for the task. If you want to power up to 40 disks, you can use four simple IBM M1015 controllers in four PCI-express slots, combined with the onboard ports provided by the chipset.

Expanders are relatively expensive, generally more expensive than buying multiple REAL controllers. They generate more heat, consume more power, lower performance and may introduce several compatibility issues.

If at all possible, do not use SAS expanders. Perhaps buying a new motherboard with more than one PCI-express slot would suit your build?
 
A SAS expander normally costs about the same as an 8-port HBA, but comes with 20-24 ports.

Even compared to a used M1015 from eBay, it is cheaper.

Lots of desktop drive makers have started crippling their firmware so their drives don't work properly with SAS expanders, so to make sure yours would, you would have to buy enterprise-grade disks.
 
Four IBM M1015 cards would allow a 38-disk setup for that kind of money, without resorting to RAID controllers, SAS expanders or port multipliers. The best setup is an 'honest' setup with multiple REAL controllers. It is cheaper, consumes less power, is faster, gives less chance of problems and doesn't really have a disadvantage.
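
A quick sketch of the port math behind the 38-disk figure (the six chipset ports are an assumption, implied by 38 alongside four 8-port cards):

```python
# Port math behind the 38-disk claim (assumed: six chipset SATA ports).
hba_ports = 4 * 8                  # four IBM M1015s, eight ports each
chipset_ports = 6                  # assumed onboard SATA count
print(hba_ports + chipset_ports)   # -> 38
```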

The only advantage SAS expanders bring is that you can connect many disks to a system with too few PCI-express slots. So if you insist on connecting 32 disks to a single PCI-express slot, you would have no other option than to use expanders. Note that many controllers above 16 ports already include an internal expander; for example the Areca family with 'ix' in the product name. Just to be clear: I do not recommend buying this kind of hardware.

If you have older hardware like DDR2 memory, it may even be cheaper to build a new system with a new motherboard, processor and memory. This gives you new hardware for almost the same cost as an HP SAS Expander setup, because those expanders are VERY expensive: instead of costing you 40 dollars, it is more like 380 dollars. I would much rather buy four IBM M1015s for that price. The added cost of a new motherboard/CPU/RAM may be well worth it.
 
@OP: no such cable exists, but as I linked, you can get expanders that don't require a PCIe slot and don't require a system overhaul in order to get the same number of drives you would otherwise spread across multiple RAID/HBA cards (especially if you're doing RAID, since you can't expand an array across multiple cards). I can't find any power stats on the expander, but I have a hard time believing it uses more power than four of those RAID cards would...
 
SAS expanders, unlike SAS controllers, generally do not play well with SATA drives, which is usually the biggest issue people run into when trying to use a SAS expander.
 
Four IBM M1015 cards would allow a 38-disk setup for that kind of money...

Perhaps true, but only if you are comparing used/fleabay pricing for the HBAs against new, full-warranty pricing for the SAS expander. Apples to apples (new vs. new, or eBay vs. eBay), the price of one HBA plus the expander is always less than four HBAs (unless you are lazy or just a bad shopper).

Also, note that you have to be on an E5-class MB to get the 32 lanes of PCIe required to run all four HBAs you propose. Yes, you can do it if you accept a board with a PCIe switch on it, or plug some of them into x4 slots, but one of your own arguments against the expander is that it is lower performance... well, so is your 4-HBA option if you accept a PCIe switch or start using fewer lanes than the HBA requires. And using that LGA2011-based MB and CPU will add at least $200 to your build cost compared to E3/i7 options, which don't provide enough PCIe to support four or more HBAs.
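
A rough lane-budget sketch (the 16- and 40-lane figures are the usual CPU-provided PCIe counts for those platforms; each HBA is assumed to want a full x8):

```python
# Lane budget (assumption: each HBA wants a PCIe 2.0 x8 connection).
hbas = 4
lanes_needed = hbas * 8               # 32 lanes total
cpu_lanes = {"LGA1155 (E3/i7)": 16,   # typical CPU-provided PCIe lanes
             "LGA2011 (E5)": 40}
for platform, lanes in cpu_lanes.items():
    print(platform, "fits" if lanes >= lanes_needed else "falls short")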

Finally, please note that there are currently no HBAs on the market that actually deliver more than 8 ports of native SAS. The current-generation cards with more than 8 ports - LSI, Adaptec and Areca - all have an 8-port SAS RoC chip and a built-in SAS expander. Except, of course, for the funky LSI 16-port card that is really two HBAs velcro'd together on one card using a PCIe x16 connector.

Don't misunderstand. Up to about 24 drives (30 if you include the MB-based ports), a multiple-HBA approach can make sense. That is exactly what I use for my 20-drive server setup. But for 31+ drives, doing silly things to avoid using expanders is just silly...
 
To be fair, the Adaptec 7 and 7Q series use PCIe 3.0, have up to 24 native ports without an expander, and are available now (and can be used as HBAs; drives not specifically reserved for RAID pass through natively). But those are both RAID cards; the HBA-only 7H series is the one that doesn't come out until next month.
 
Perhaps true, but only if you are comparing used/fleabay pricing for the HBAs against new, full-warranty pricing for the SAS expander. Apples to apples (new vs. new, or eBay vs. eBay), the price of one HBA plus the expander is always less than four HBAs (unless you are lazy or just a bad shopper).
I just checked Newegg for the price; I'm from Europe, so I'm not that aware of US pricing. But I bought my IBM M1015 for around $60, with the normal price being $90-$100. Assuming the HP SAS Expander is about $380, that would buy you about four of them?

I know many people shop on eBay for about half the price, specifically for the IBM M1015, since it ships in many server systems but gets replaced by a 'real' hardware RAID controller, and thus the IBM M1015 gets dumped on eBay. But generally I avoid buying second-hand; maybe I'm just conservative.

Also, note that you have to be on an E5-class MB to get the 32 lanes of PCIe required to run all four HBAs you propose.
Well this SuperMicro X9SCM-F board is suitable:
http://www.supermicro.com/products/motherboard/Xeon/C202_C204/X9SCM-F.cfm

Two of the PCI-express slots will indeed run at only 4 lanes. But that is not a problem; consider what four times 2 GB/s can do in terms of bandwidth. The problem with SAS expanders is worse, since they limit concurrent access to all disk devices, something RAID-Z in ZFS is particularly sensitive to.
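
A back-of-the-envelope comparison (assumed nominal figures: roughly 500 MB/s usable per PCIe 2.0 lane, roughly 600 MB/s per 6 Gb/s SAS2 link, and a single x4 wide-port uplink to the expander):

```python
# Back-of-the-envelope bandwidth comparison (assumed figures, not benchmarks).
PCIE2_PER_LANE_MB = 500   # approx. usable bandwidth per PCIe 2.0 lane
SAS2_PER_LINK_MB = 600    # approx. usable bandwidth per 6 Gb/s SAS link

# Four HBAs in x4 slots: each HBA gets its own host link.
hba_total = 4 * 4 * PCIE2_PER_LANE_MB
print(f"4x HBA in x4 slots: {hba_total} MB/s aggregate")        # 8000 MB/s

# One HBA plus an expander: every disk shares the single x4 SAS uplink.
expander_uplink = 4 * SAS2_PER_LINK_MB
print(f"1x HBA + expander:  {expander_uplink} MB/s shared")     # 2400 MB/s
```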

So when building a NAS under 40 ports, I would still opt for a build without expanders. It can be done with an affordable setup that has higher performance, particularly with ZFS, and is generally less prone to errors or compatibility issues. Up to 70 ports would be possible with four LSI SAS 9202-16e cards.

Please understand: I do respect your interest in SAS expanders, specifically for large builds where it is hard to do without them, and generally they can work extremely well. For example, in cases where you want to connect a large number of disks to a hardware RAID controller: with a SAS expander that is relatively easy, and the hardware RAID can provide very good performance in this combination. This would make sense for large builds based on Windows.
 
You are mix-mastering aggressive used prices for the M1015 with lazy, brand-new manufacturer list pricing for the expander. M1015s have occasionally been available below $70, but nothing recent at that price. The current market price on eBay for a server-pull M1015 is +/- $100. The lower prices you quote should be considered a thing of history, since IBM is no longer shipping that part in any significant quantity.

As for the MB, you don't get to argue both sides of the coin. If you are adamant that the SAS expander sucks for performance, then you don't get to argue for running the HBAs in the x4 slots. Note, please, that your own math applied to the throughput of the expander gives the same result as you just came up with for the x4 slots. With spinning drives, at least. Different story for SSDs.

Please, at least be self-consistent when defending your position!
 
I'm not defending anything. I just speak my opinions aloud. :)

But I'm not saying I'm right about anything. You can convince me just as easily as you can convince others with good arguments. I'm an honest listener, or at least I try to be.

You seem to imply that my price comparison is not correct. Could you estimate what the regular US price level would be for:

HP SAS Expander brand new:
HP SAS Expander cheap on eBay:
IBM M1015 brand new:
IBM M1015 cheap on eBay:

In my own country, this would be about (320-???-95-65). I got mine brand new at $65, but that was a lucky deal. :)

But of course, pricing can vary per country and especially per continent. Nevertheless, going for expanders doesn't necessarily mean you pay less for the number of ports you need; only if you compare against the 16-port HBAs, which are more expensive per port than their regular 8-port cousins.

As for the performance issue: as I tried to explain, the concurrent-access issue with expanders can hurt ZFS performance more than just a bandwidth limit from only half the number of PCI-express lanes being utilised. That only caps the maximum throughput; the latency is what matters to RAID-Z. In RAID5 this issue is far less bothersome, since it does not affect the latency of I/O sent to other disks; each disk processes its own I/O in RAID5. But in RAID-Z all disks are involved in every I/O at exactly the same time, and if one disk takes longer, it counts as if all disks took that long.

This is not entirely true, but to avoid making things excessively complicated, I would argue that the principle of SAS expanders hurting ZFS performance more than a plain bandwidth cutoff is sound. At least in theory, RAID5 would have far fewer issues with varying latencies between I/Os than RAID-Z would.
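
A toy simulation of that principle, with made-up numbers (5 ms nominal latency, 5% chance of a 50 ms stall per disk):

```python
import random

# Toy model (made-up numbers): each disk normally completes an I/O in ~5 ms,
# but occasionally stalls. In RAID-Z every disk participates in every stripe,
# so the stripe completes at the latency of the SLOWEST disk.
random.seed(42)

def disk_latency_ms():
    return 50.0 if random.random() < 0.05 else 5.0  # 5% chance of a 50 ms stall

disks, ios = 10, 10000

# Independent I/O (RAID5-like read pattern): average per-disk latency.
independent = sum(disk_latency_ms() for _ in range(ios)) / ios

# RAID-Z-like: each stripe waits for the slowest of all member disks.
raidz = sum(max(disk_latency_ms() for _ in range(disks)) for _ in range(ios)) / ios

print(f"independent per-disk latency: {independent:.1f} ms")  # roughly 7 ms
print(f"RAID-Z stripe latency:        {raidz:.1f} ms")        # roughly 23 ms
```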

That said, if you build a big box with a large number of disks, you often have an excess of disk bandwidth at your disposal, so it doesn't need to be very efficient at getting the most potential out of your disks. But the IOps will hurt; you really need to invest heavily in L2ARC if using expanders with ZFS in a large build of mechanical disks.
 
Well, the nice thing about RAID-Z (or really its crippling flaw) is that the more disks you put per vdev, the smaller the per-disk I/O is. A 6-disk RAID-Z2 vdev will be 16K per disk, and a 10-disk RAID-Z2 will be 8K.

My calcs say that if you dedicate 200 MB/sec per disk, a single 4-lane SAS2 link will give you 12 disks of full throughput. And with the block size per disk decreasing the more disks you access at once, the better the interleaving of the data, the less (or the same) latency will be observed.
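
A sketch of the arithmetic behind both figures (the 16K/8K numbers above imply something like a 64 KiB record; the actual per-disk chunk depends on the recordsize in use):

```python
# Per-disk chunk in RAID-Z2: the record is striped across the data disks.
def raidz2_chunk_kib(vdev_disks, recordsize_kib=64):
    data_disks = vdev_disks - 2          # RAID-Z2 uses 2 parity disks
    return recordsize_kib / data_disks

print(raidz2_chunk_kib(6))    # 16.0 KiB per disk
print(raidz2_chunk_kib(10))   # 8.0 KiB per disk

# Expander uplink headroom: one x4 wide port of 6 Gb/s SAS2 links,
# assuming ~600 MB/s usable per link.
uplink_mb = 4 * 600
print(uplink_mb // 200)       # 12 disks at 200 MB/s each
```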

I will say, though, that I have never run a RAID-Z system on expanders; I just don't use RAID-Z and use mirrors instead.

I do know many people who run four SuperMicro 45-disk expander-based chassis attached to a single system doing RAID-Z, and they don't have any noticeable issues.

Sure, things are not 100% optimal, but it's about cost and what is needed.

HP expander, new (Amazon): $266
Intel 36-port expander, new (Amazon): $365
IBM M1015, new (Amazon): $217
LSI 9211-8i, new (Amazon): $243

IBM M1115, pull (Amazon): $113
IBM M1015, pull (Amazon): $120
 
Purchasing four RAID controllers for 32 disks is the most ridiculous suggestion I have ever seen posted here.

Some higher-end controllers can support something like 128 disks on their own. Get an expander; nobody needs four PCIe slots for only 32 drives.

A motherboard that can run x4/x8 in four slots is going to be expensive and pointless. Don't buy four RAID controllers for one combined RAID group.
 
SAS expanders also allow for the creation of a DAS.

I'm getting good speeds with an IBM M5015 and an Intel expander in a Norco 4020.
 
SAS expanders, unlike SAS controllers, generally do not play well with SATA drives, which is usually the biggest issue people run into when trying to use a SAS expander.

Calling bullshit on this one too.
 
Finally, please note that there are currently no HBAs on the market that actually deliver more than 8 ports of native SAS. The current-generation cards with more than 8 ports - LSI, Adaptec and Areca - all have an 8-port SAS RoC chip and a built-in SAS expander. Except, of course, for the funky LSI 16-port card that is really two HBAs velcro'd together on one card using a PCIe x16 connector.

I don't believe this is the case.

The LSI 9202-16e is the card that requires PCIe x16 and has two chips on the same board:
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9202-16e.aspx

But both the LSI 9201-16i and the 9201-16e have only one chip and take up only a PCIe x8 slot:
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9201-16e.aspx
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9201-16i.aspx

I don't see any mention of a built-in expander on these cards.

Also, I don't think this thread ever got to a satisfactory conclusion:
http://hardforum.com/showthread.php?t=1548145

I read through it several times and no one seemed to agree.

Some felt it was a software bug that may or may not have been fixed.

Whilst others felt that, although you can run SATA drives behind SAS HBAs, RAID cards and expanders, they are entirely different technologies, and the problem was related to encapsulating the SATA traffic within the SAS framework (the SATA Tunneling Protocol, STP).

Some folks have had issues, whereas others have run large SATA-over-SAS-expander systems for years with no problems.

I myself was going to get a Norco 4244 or XCase RM424, stick an HP SAS expander in there and run it as a JBOD.

But when the opportunity came to get some 16-port controllers cheap, I decided to play it safe and use purely HBAs.
 
If I were going for a 4x PCIe motherboard (which I am: the Supermicro X9SAE-V, actually 6x PCIe in an 8/8/4/4/1/1 layout), it wouldn't be to put four HBAs in, but two or three (linked to three to six expanders) plus an InfiniBand card, because there is no point in getting 1 GB/s of performance from the drives if you can only access the server through a gigabit connection!
 
I agree with sub.mesa: unless absolutely necessary, using a SAS expander is a ridiculous idea to me. It lowers performance, opens the potential for compatibility issues, adds complexity and additional failure points, isn't necessarily cost-effective, etc. And then what if you need to update the firmware on one or some of your drives? I'm not sure I could trust a SAS expander there. No thanks.
 
Purchasing four RAID controllers for 32 disks is the most ridiculous suggestion I have ever seen posted here.
Probably because you are not thinking in terms of a ZFS box, but rather conventional RAID done by the controller's firmware. In that case a single RAID controller with a SAS expander would be more logical, as I already stated in my earlier message.

For ZFS systems you do not want RAID controllers, but HBAs. The IBM M1015 is a RAID controller shipped with IR-mode firmware, but ZFS users would want to turn it into a regular non-RAID controller with IT-mode firmware. They don't even need to flash an option ROM if they don't need to boot from the controller.

Even better, if you use ZFS mirroring you can put one side of every mirror on two of the controllers and the other side on the other two. You would keep running with one failed controller, and even with two if they are in the same group; pretty slick.
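
A quick sketch of that layout (the device names are hypothetical) to make the failure domains explicit:

```python
# Hypothetical device names: 16 mirror pairs, each pair spanning the two
# controller groups, so losing both controllers in one group still leaves
# every mirror with one healthy side.
group_a = [f"c{c}d{d}" for c in (0, 1) for d in range(8)]  # controllers 0-1
group_b = [f"c{c}d{d}" for c in (2, 3) for d in range(8)]  # controllers 2-3

for a, b in zip(group_a, group_b):
    print("mirror", a, b)   # one side per controller group
```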

Besides, normal controllers fail much less often than RAID controllers in my experience. I've had a few failed Arecas; those have to be cooled with a fan instead of passive dissipation, and those shitty little fans always fail, especially when combined with a bit of dust. In general, something that consumes less power and needs only passive cooling is more reliable, in my own experience anyway. For example, a hot GPU failing after a few years is not uncommon.

I prefer to limit complexity and think that you should avoid expanders whenever possible. It is one less thing to suspect when you have disk problems, and if you suspect the controller, you can simply switch some cables around to verify it. In other words, going for multiple controllers should give you a lower chance of problems in general.

A motherboard that can run x4/x8 in four slots is going to be expensive...
The board I mentioned is not very expensive; the X9SCM-F should sell for under $250 (EUR 165 is what I found). But it puzzles me that the IBM M1015 appears to be much more expensive in the US than in Europe.
 
I just checked eBay's European listings. The prices you quote don't exist there either. A year ago, when supply was rich, maybe. But there are no current listings or recent completed sales anywhere near that low.
 
The shops I order from are behind a login, but prices have indeed risen since I last checked. I searched some generic European price-comparison sites and found this:

HP SAS Expander
http://www.heise.de/preisvergleich/hp-smart-array-sas-expander-card-468406-b21-a514339.html
200 EUR ~= $260

IBM M1015
http://www.heise.de/preisvergleich/ibm-serveraid-m1015-90y4556-a815389.html
120 EUR ~= $156

I just quoted the lowest price listed. But even this comparison is far removed from the nearly four-times-cheaper price I mentioned in my earlier posts. So I stand corrected.
 
Two problems. First, the OP said he only has one PCIe slot available and that it is taken by his RAID card. Your solution was to buy a new MB, when for the same money he could put an expander behind his existing card without buying a new MB at all; additionally, buying a MB with more PCIe slots may entail buying a new case as well, and the OP never stated what he is working with. Second, the OP never even said he is working with ZFS; you're just assuming he is. If he's working with card-based RAID, then using multiple controllers is the worst idea.
 