Looking for affordable HBA Raid Card choices

af22 (Gawd · Joined Jun 21, 2010 · Messages: 610)
From reading many posts, it seems like the IBM M1015 is a very popular choice.

I was wondering if there are any cheaper alternatives?

I'm looking for a card that works with ESXi pass through, and supports 8 sata hard drives. I would most likely use it in IT mode and eventually want to use it for ZFS.

Would this still be the IBM M1015? I see a lot of other ebay generated "recommended" listings by adaptec, dell, which are all half the price...

Any advice would be great. Thanks.
 
If you want to use ZFS, you'd better get a straight SAS HBA, not a RAID controller.

The IBM M1015 is probably the best choice unless you can find an LSI model for cheaper.
 
Anything 6Gbps LSI. Otherwise you could do a cheaper 3Gbps LSI card, e.g. a Dell PERC 5 or IBM BR10i, but you won't have support for >2TB SATA disks. The 6Gbps cards do work with larger drives; it's not related to the SATA spec, but to the controller changes they made between generations.
 
In HBA mode, do the cards utilize the on board RAM + Battery?

I'm a little confused on this subject. The reason I ask is that when I converted my 2008 R2 box into a VM on ESXi, my write speed dropped significantly: according to VMware support, ESXi does not support the OS's disk write cache, so writes are no longer cached in memory.

The VMware support page says that disk write caching should instead be handled by the SAS card's onboard memory, protected by a battery.
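For what it's worth, with an HBA in IT mode there is no controller cache in play at all; the only volatile cache left is on the drives themselves, and you can inspect or toggle that from the guest OS. A rough sketch on Linux (device names are examples, not from this thread):

```shell
# Check whether a SATA drive's volatile write cache is enabled
hdparm -W /dev/sda

# Disable the drive's write cache if you don't trust the stack to
# flush it properly (costs write performance)
hdparm -W0 /dev/sda

# For SAS drives, query the Write Cache Enable (WCE) bit instead
sdparm --get=WCE /dev/sda
```

ZFS issues cache-flush commands itself, which is one reason it wants direct access to the disks rather than a cached RAID layer in between.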
 
I have the M1015 converted to the LSI SAS 9211-8i HBA, though I'm unsure if it utilizes the RAM & battery at all.
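For anyone following along, the usual M1015-to-9211-8i crossflash is done with LSI's megarec and sas2flash utilities. A rough outline (exact file names and steps vary by firmware package and motherboard; some boards need the DOS megarec step, others can do everything from the UEFI shell):

```shell
# Record the controller's SAS address first -- you will need it later
sas2flsh -o -listall

# Wipe the stock IBM MegaRAID firmware (DOS tools)
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

# Reboot, then flash the IT-mode firmware (and optionally the boot ROM)
sas2flsh -o -f 2118it.bin -b mptsas2.rom

# Restore the SAS address you recorded (placeholder shown)
sas2flsh -o -sasadd 500605bxxxxxxxxx
```

Skipping the mptsas2.rom boot ROM is common when the card only hosts ZFS data disks, since it shortens POST time.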
 
The M1015 does not have a BBU (onboard battery). For that you'd need to step up to an IBM M5014 or an equivalent LSI card.

I run two M1015s in a ZFS server. Rock solid, fast and most importantly they just work.

I played with one in ESXi and it detects the card without issues. Keep in mind everything I did was running in IT mode.
 
Isn't ZFS basically offloading the "controller" function to the system CPU and memory, and of course an SSD cache if you have one? Isn't that the point, whether it's NexentaStor, Napp-It, etc.: just software leveraging a passthrough HBA, with the processing handled by the system CPU/memory?
 
The main point of using passthrough with ZFS is so it can manage the drives itself. AFAIK there is no way (glad to be proven wrong) to hot-plug, add, or remove virtual drives provided by ESXi.
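That's the practical payoff of passthrough: the ZFS VM sees the raw disks, so it can address them by stable IDs, read SMART data, and build the pool directly on whole drives. A hedged sketch (pool and device names below are hypothetical, not from this thread):

```shell
# List the stable by-id names the passed-through HBA exposes
ls /dev/disk/by-id/

# Build a raidz2 pool directly on whole disks, addressed by ID so the
# pool survives device renumbering (names are placeholders)
zpool create tank raidz2 \
  ata-WDC_WD20EARS_WD-AAAA ata-WDC_WD20EARS_WD-BBBB \
  ata-WDC_WD20EARS_WD-CCCC ata-WDC_WD20EARS_WD-DDDD

# Verify the layout
zpool status tank
```

None of this works through ESXi virtual disks, which hide the physical drives (and their SMART/flush behavior) from ZFS.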
 
Isn't ZFS basically offloading the "controller" function to system CPU and Memory, and of course SSD Cache if you have it? Isn't that the point, whether Nexentastor..Napp-IT etc...just software that's leveraging a "passthrough" HBA and processing is handled by the system CPU/Memory?

Exactly! The most common reason for a configuration of this type is an "All-In-One" scenario, while still maintaining ESXi availability. A by-product is reduced transfer-medium overhead and limitations between the ZFS NAS and the ESXi VMs as well.
 
The main point of using passthrough with ZFS is so it can manage the drives itself. AFAIK there is no way (glad to be proven wrong) to hot-plug, add, or remove virtual drives provided by ESXi.

Hot-plugging a virtual HD does work now, but it's useless. Or are you talking about passing a disk through to the ZFS VM? Wow, that'd make me nervous...
 