SAS cluster as shared storage for vSphere

rogue_jedi

We're looking at moving to a vSphere-based infrastructure at work. Currently we have a few servers (3 Sun x4150s and a 2200m2) running things. We'd be planning to put the x4150s into a vSphere cluster. (Cloud?) We have a Supermicro 846e1 (24-bay case with an LSI SAS expander) connected to one of the x4150s for storage purposes. Would we be able to connect the 3 x4150s to the Supermicro chassis using another SAS switch/expander, then use that as the shared storage for vSphere? Also, if anyone has done something similar to this, can you recommend a SAS expander/switch to use?
 
You will not be able to use an expander. In SAS, an expander essentially just provides port multiplication; the basic principle is that it allows a single port to be fanned out to multiple other ports.

You will need a SAS switch in order to accomplish what you are trying to do. LSI makes a SAS switch that I believe is called the Lynx3090; it's a 9-port SAS switch. FYI, they also OEM it out to Sun, IBM, NetApp, and several others. SAS devices (just like FC devices) all get a WWN, and the SAS switches get zoned, just like an FC switch, so that the initiators can see the targets (servers can see storage).

Also, to accomplish this, you will need SAS HBAs (not RAID controllers) in each of the initiators (servers), and in the storage cabinet as well. For high availability, you will want two physical SAS adapters per initiator, two on the target, and two switches. The SAS switches look like they are about $3k each, though, and the HBAs are likely going to run $300-500 each. The config looks like this:

[attached diagram: untitled-21.jpg]
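
With two HBAs and two switches cabled up that way, every LUN should show at least two paths on each host. A rough way to sanity-check it from the classic ESX service console (the vmhba numbers are just examples; adjust to whatever your HBAs enumerate as):

# rescan both SAS HBAs, then list the paths ESX sees per LUN
esxcfg-rescan vmhba1
esxcfg-rescan vmhba2
esxcfg-mpath -l

If a LUN only shows one path, one of the HBA/switch legs probably isn't zoned or cabled right.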
 
Alternatively, you could use the 2200m2 as an NFS server, exporting the SAS over 1GbE to a storage network, or you could export the SAS as iSCSI. While this will slow your disk access, you can save yourself a lot of $. The only problem is that you will not be able to boot any of the hosts from the SAS drives in the storage cabinet; all of the hosts will have to have internal disk, at least to boot from.
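
If you do end up going the NFS route off the 2200m2, the export side is only a couple of lines. This is just a sketch assuming the box runs Linux; the path and subnet are made up, and ESX wants root access over NFS, hence no_root_squash:

# /etc/exports on the 2200m2 - export the array to the ESX storage network
/export/vmstore  192.168.50.0/24(rw,no_root_squash,sync)

# apply the export and verify it
exportfs -ra
showmount -e localhost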
 
We were trying to avoid going with an NFS/iSCSI setup, mostly because it introduces a lot of potential failure points, but we are considering it.

Thanks for confirming the switch vs. expander thing - that's pretty much what we thought, but the material we'd seen was a little vague. Most of the SAS switches appear to use the LSI SASx36 expander chip, or something similar, so we weren't sure whether something like the Chenbro CK13601 would work. It sounds like the LSI expander can be configured to work as a switch or an expander, but I guess expanders generally aren't configured to work as switches?

These are the HBAs we'd likely use; we have one of these already in one of the x4150s, and we'd probably just get more, as we like it.

Would doing it using a SAS switch require an HBA in the Supermicro chassis or would we be able to just plug its expander into the switch?

Also, do you know if vSphere supports this kind of configuration, or would we have to do something with iSCSI/NFS for that setup?
 
Expanders are not generally configured as switches, correct. LSI is a very smart company, and is used to designing one set of hardware/chipset and just enabling/disabling features in software (firmware). This decreases their production and R&D costs, so it's not unusual...it's just that a lot of other companies don't do this.

The Chenbro expander that you linked is designed to go from one system to another, as follows:

[attached diagram: untitled-22.jpg]


Servers are on the left, with SAS HBAs in the initiators. The Norco cabinet is on the right, with 3x expanders installed. The expanders fan out to drives, and there's an expander per server (initiator). That way, you don't need the SAS switches; you just need the SAS HBAs in each server and an expander per set of drives on the target side. As long as the storage controller chip in each of the devices is supported under the ESX 3/4 I/O HCL, the disks will be seen and are supported in production.
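
If you want to confirm what ESX actually sees once the controllers are in, the service console will list it. ESX 4.x commands shown below; on ESX 3.5 the equivalent device listing is esxcfg-vmhbadevs:

# storage adapters ESX detected (your SAS HBAs should show up here)
esxcfg-scsidevs -a
# compact list of the disks/LUNs behind those adapters
esxcfg-scsidevs -c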

Looking at the physical connections on the SAS HBAs that you listed, compared to the expander that you linked, they will not be physically compatible. LSI makes SAS HBAs and expander cards that are physically compatible. The expander that you linked requires a SAS cable connection, while the HBAs use an InfiniBand-style connector. Other than that, getting the physical connections squared away is the only holdup in this config. It would be a cheap solution, but it will not give the servers a shared pool; each server has its own pool of storage. Cheap, but not what you want.
 
It only works if it's a supported SAS device on supported SAS connectors.

Please, for the love of God people, STAY ON SUPPORTED HARDWARE.
 
I vote that you stray from the HCL, and then call this guy ^^^^^ and complain about it. :D
 
FYI, he personally writes the HCL for storage devices. Anything that's not on there, just ask him, and he'll write the driver for the device for ESX, test it, and then get it approved. He's the man. :cool:
 
Actually, you can stray from the HCL on the SAS connector side, depending. Iopoetve doesn't have the time to write up all the supported combinations, and any RAID controller that fully obfuscates an unsupported SAS expander will work. Iopoetve will complain, but VMware only sees the RAID controller and not the expander, so... :D
 
I was thinking more of the HBA card than the connector - my bad on the wording :p
 
So, I guess my question is this:

If we set it up like this:
[attached diagram: willitblend.png]


Will it provide the shared storage vSphere needs for its failover/availability features? If not, what else would be required?
 
So, apart from lacking redundancy in the physical SAS connectivity, this setup should work with vSphere?
 
Should, but as noted already, stick with devices on the HCL if you want support.
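
For what it's worth, once all three hosts see the same LUN, HA/vMotion just treat it like any other shared VMFS datastore: format it once on one host, then rescan the others. Rough sketch for ESX 4.x - the NAA device name below is made up, and it assumes you've already created a partition on the LUN:

# on the first host: create the VMFS3 volume on partition 1 of the shared LUN
vmkfstools -C vmfs3 -S shared-sas-ds /vmfs/devices/disks/naa.600605b00000000000000000000000ff:1

# on the other hosts: rescan the HBA and refresh VMFS volumes
esxcfg-rescan vmhba1
vmkfstools -V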
 