SAS Multipath - Supermicro Dual Expander Backplane - BPN-SAS2-826EL2

Hi there,

Let's take the BPN-SAS2-826EL2 as an example. It has two expanders, each with two ports:
(also see: http://www.supermicro.com/manuals/other/BPN-SAS2-826EL_1.0.pdf or http://www.supermicro.com/manuals/chassis/2U/SC826.pdf Appendix F)

PRI_J0
PRI_J1
SEC_J0
SEC_J1

According to Supermicro's definitions, the J0 ports are "From HBA or higher backplane" and the J1 ports are "To lower backplane in cascaded system".

Now my question is: how do I implement a simple n-way cascaded multipath setup (consisting of, for example, 2 servers with 2 SAS HBAs each and 2 JBODs, each with the backplane mentioned above)?
I can only imagine implementing this if I can plug an HBA into the J1 ports at the end of the multipath chain, since the J0 ports are used for cascading on at least the last (n-1) JBODs in the chain.
The goal is basically to have two servers seeing ALL disks in the whole SAS domain (I'd solve the rest within software). I could of course use a SAS switch, but do I really have to?

A hopefully simple example of what I am trying to achieve:

Server1-HBA1 => JBOD1-PRI_J0
Server1-HBA2 => JBOD1-SEC_J0

JBOD1->PRI_J1 => JBOD2->PRI_J0
JBOD1->SEC_J1 => JBOD2->SEC_J0

JBOD2->PRI_J1 => Server2-HBA1
JBOD2->SEC_J1 => Server2-HBA2


Is this possible? If not, why not (I'd love to hear some technical detail), and if yes, is there any other/better way? (Bear in mind that the 2 JBODs are just an example; it could easily be 4 or more, therefore a "star connection scheme" is not feasible.)
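
Just to make my own thinking explicit, here is a throwaway sketch (nothing SAS-aware, purely my own toy model) that treats each proposed cable as an edge and checks which expanders each HBA can reach. It says nothing about whether the expander firmware actually accepts an initiator on a J1 port, which is exactly the part I'm unsure about:

Code:
#!/usr/bin/env python3
"""Toy reachability check over the proposed daisy-chained cabling."""
from collections import defaultdict, deque

# Each tuple is one cable from the diagram above.
cables = [
    ("Server1-HBA1", "JBOD1-PRI"),
    ("Server1-HBA2", "JBOD1-SEC"),
    ("JBOD1-PRI", "JBOD2-PRI"),
    ("JBOD1-SEC", "JBOD2-SEC"),
    ("JBOD2-PRI", "Server2-HBA1"),
    ("JBOD2-SEC", "Server2-HBA2"),
]

graph = defaultdict(set)
for a, b in cables:
    graph[a].add(b)
    graph[b].add(a)

def reachable(start):
    """Breadth-first search over the cable graph."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

expanders = {n for n in graph if "JBOD" in n}
for hba in sorted(n for n in graph if "HBA" in n):
    hit = sorted(expanders & reachable(hba))
    print(f"{hba} reaches: {', '.join(hit)}")

Since every drive sits behind both the PRI and the SEC expander of its own JBOD, each server reaching all PRI expanders with one HBA and all SEC expanders with the other would mean it sees every disk over two independent fabrics.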


Are there any other caveats/pitfalls I should be aware of? :)


regards,
po
 
Yes, that setup works.

The question is, do you really want that setup? It's mainly a failover question: since both servers are on both buses, both of them could get screwed.

But then, I personally have been configuring mine like this, except I flip/flop mine from your diagram and swap the Server1-HBA2 and Server2-HBA2 connection points.
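
Either way, once it's cabled, a quick way to confirm each host really sees two paths to every drive (before turning on dm-multipath) is to group the sd devices by their WWID. A minimal sketch, assuming a Linux host where sysfs exposes a wwid attribute per SCSI disk; adjust to taste:

Code:
#!/usr/bin/env python3
"""Group SCSI disks by WWID to spot drives that are missing a second path."""
import glob
from collections import defaultdict

paths_per_drive = defaultdict(list)

for wwid_file in glob.glob("/sys/block/sd*/device/wwid"):
    dev = wwid_file.split("/")[3]          # e.g. "sdb"
    with open(wwid_file) as f:
        wwid = f.read().strip()
    paths_per_drive[wwid].append(dev)

for wwid, devs in sorted(paths_per_drive.items()):
    status = "OK" if len(devs) >= 2 else "ONLY ONE PATH"
    print(f"{wwid}: {', '.join(devs)}  [{status}]")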
 
Hi there,

first of all: thanks for your confirmation. Your 'flip flop' suggestion is what I normally use when it comes to Fibre Channel multipathing, but I wasn't sure whether it could also be applied to SAS ;-)


The question is, do you really want that setup? It's mainly a failover question: since both servers are on both buses, both of them could get screwed.

Could you elaborate on 'being screwed'? :) ... The big picture behind my idea is the following:
* each of the two servers only actively maintains half of the storage, but can potentially reach/see the whole SAS domain
* in case of a complete server going down, the remaining server will actively present its formerly 'inactive' storage half to the outside world

Currently the only way of getting rid of the 'same bus' argument (while keeping the same amount/quality of redundancy) is to have (n*2) HBAs in each server, where n is the number of JBODs that are going to be connected in a multipathed fashion, and that again is not feasible IMHO.
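
To make the 'half of the storage each' part concrete, the rough idea I have in mind is to hash each drive's WWID so both heads compute the same split without coordination. This is only a sketch under my assumptions; claim() is a placeholder for whatever the real take-ownership step ends up being (importing a pool, starting an export, etc.):

Code:
#!/usr/bin/env python3
"""Deterministically split shared drives between two heads by WWID hash (sketch only)."""
import glob
import hashlib

HEAD_ID = 0           # 0 on server1, 1 on server2
PEER_IS_DOWN = False  # would come from a real health check / fencing decision

def drive_wwids():
    """Return the WWID of every disk this host can currently see."""
    wwids = set()
    for wwid_file in glob.glob("/sys/block/sd*/device/wwid"):
        with open(wwid_file) as f:
            wwids.add(f.read().strip())
    return wwids

def owner(wwid):
    """Map a WWID to head 0 or head 1; both servers compute the same answer."""
    return hashlib.sha1(wwid.encode()).digest()[0] % 2

def claim(wwid):
    print(f"head {HEAD_ID} claiming {wwid}")  # placeholder for the real takeover step

for wwid in sorted(drive_wwids()):
    if owner(wwid) == HEAD_ID or PEER_IS_DOWN:
        claim(wwid)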



regards,
po
 
Most of the big guys put one controller per path, not on both paths.

If the HBA causes an issue, it could cause the whole path to go down.

This is a small failure case, though. But throwing more stuff onto a shared bus means more things that can screw it up for everyone.
 
So wait, you have single-port HBAs? Not dual-port HBAs? If you had dual-port HBAs you could provide HBA & JBOD & cable redundancy w/o any daisy chaining.

In general I tend to recommend the most diverse possible setup that maintains the flattest topology possible, e.g. avoid daisy chaining as long as possible and try to have each JBOD connected directly to an HBA wherever possible.

I'd also suggest you avoid SAS switches at the moment if this is enterprise/production. They are super useful for labs where you need to move storage between systems on a daily basis and 'storage' and 'systems' are both > 1 or 2, but in terms of their use in production environments, we tend to steer clear if we can; we've had some problems with them flaking out and going weird on us. And the single external AC power supply is just cheesy and not confidence inspiring. :D
 
Wait, are we saying we are not using dedicated JBOD chassis, but rather the expanders built into those cases, with motherboards?

That idea I wouldn't do. You have no way to STONITH the head system without also taking down the disks along with it. At least none that I can think of right now.
 
I think you might be a little confused.... if I am understanding what you want properly.

Multipathing SAS is not for connecting 2 x servers to a bunch of drives.

It's for having redundant paths for one server to a bunch of drives.

E.g. each dual-ported SAS drive has two WWNs and can be addressed (communicated with) on either WWN... from the same server, which usually has two HBAs.

Not for two different servers to talk to the same drives within a JBOD (each talking to a drive's different WWN).

UNLESS

You are thinking of, say, using SAS switches... and each server is ONLY talking to, say, half the drives within a JBOD (e.g. 12 drives each within a 24-drive JBOD chassis).

 
stanza33, that doesn't make much sense.

The normal SAS config is to set up one server on SAS channel A and another server on SAS channel B.

If it is supported by your disks, you can have 32 different servers talking to the same disk, at the same time, OVER the same SAS channel.

Using multiple servers on the same SAS channel to share a disk is not common large-scale SAS practice currently, but it used to be with FC disks. It has been becoming much more common to do this with SAS, as it is much cheaper to share a SAS array between multiple ESXi servers, and it gives you faster speeds than having a larger system connected via FC/iSCSI/FCoE.
 
stanza33, that doesn't make much sense.

The normal SAS config is to set up one server on SAS channel A and another server on SAS channel B.

If it is supported by your disks, you can have 32 different servers talking to the same disk, at the same time, OVER the same SAS channel.

Using multiple servers on the same SAS channel to share a disk is not common large-scale SAS practice currently, but it used to be with FC disks. It has been becoming much more common to do this with SAS, as it is much cheaper to share a SAS array between multiple ESXi servers, and it gives you faster speeds than having a larger system connected via FC/iSCSI/FCoE.

Normal maybe for a cluster or HA setup, where the servers are set up in a failover situation...

But I believe not where both servers can read/write to the same drives "at the same time".

But the OP does say he will solve the rest with "software", so maybe it's a cluster config he is thinking of setting up?

E.g., just as with iSCSI or FC... yes, the WWNs can be presented as such to anything that can connect and see them (say, with no zoning involved), but unless the head units are AWARE, things can get real messy real quick. (Wink)

 
That makes no sense.

I read and write to the same disks at the same time using many servers without issue.

Why would SAS magically cause this issue to appear? It is not SAS's fault; SAS was made to handle this. SATA was not made to do this, though.
 
That makes no sense.

I read and write to the same disks at the same time using many servers without issue.

Why would SAS magically cause this issue to appear? It is not SAS's fault; SAS was made to handle this. SATA was not made to do this, though.

Hi Patrick,

Could you please explain how you 'read and write to the same disks at the same time using many servers without issue'?

Are you just talking about accessing the same drives that are shared via NFS or something similar?

If that is not what you mean, then how are you doing it? I have never heard of multiple servers being able to access actual raw drives before.

Thanks,
Mark
 
No one on this forum has heard of GFS? OCFS? VMFS?

You use a filesystem that doesn't limit access to a single system.
 
The idea behind my setup was only to use redundant paths. My definition of 'solving things in software' was not related to a cluster FS, but rather that each server only uses half of the disks at a time (not the same half, obviously), therefore providing two 'logical heads'. In case of one server failing, the other will take over.


regards,
po
 
No one on this forum has heard of GFS? OCFS? VMFS?

You use a filesystem that doesn't limit access to a single system.

Wow.. very cool.

I knew about cluster file systems, but TBH I had only thought of them being used by systems mounting the same remote block devices shared out by one system that was actually directly attached (e.g. multiple systems mounting the same iSCSI target).

For some reason I never realized you could do it directly as well... although, thinking about it now, of course you can...

Thanks for the info! Guess I have a bunch of reading to do.

EDIT: Patrick - mind if I ask what OS and FS you are using specifically?


Cheers,
Mark
 