Switches For SAN

Jay_2 · 2[H]4U · Joined Mar 20, 2006 · Messages: 3,583
I am setting up a new SAN and I am wondering which switches are best for it. I know they need a lot of cache per port. What switches do you guys use?

Thanks.
 
I have used everything from HP Procurve 2910al-24Gs to Dell 5448s to Cisco Catalyst 3750Gs to Procurve 1810G-24s. Pretty much any layer 2 or 3 managed switch with jumbo frames that supports LACP (if you decide to do that) would be my recommendation.
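If you do go with jumbo frames, it is worth confirming they actually make it end to end once the switch is in place. A rough sketch, assuming a Linux initiator with iputils ping; the portal address is just an example:

```python
# Confirm that 9000-byte frames reach the iSCSI portal without fragmenting:
# 8972 bytes of ICMP payload + 28 bytes of headers = a 9000-byte packet.
# The address is an example; substitute your own portal/target IP.
import subprocess

SAN_PORTAL = "192.168.50.10"   # hypothetical iSCSI portal

result = subprocess.run(
    ["ping", "-c", "3", "-M", "do", "-s", "8972", SAN_PORTAL],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("Jumbo frames pass end to end")
else:
    print("Jumbo frames are being dropped or fragmented somewhere:")
    print(result.stdout or result.stderr)
```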
 
Nice, but this is in the gallery... Might want to have marley move this.

We use Dell PowerConnect 5424s for our SAN switches. I honestly would rather we had used stacked 6224s instead, but that will be next year's project.
 
Nice, but this is in the gallery... Might want to have marley move this.

We use Dell PowerConnect 5424s for our SAN switches. I honestly would rather we had used stacked 6224s instead, but that will be next year's project.

I know... don't know how I managed to put it here!
 
We use stacked 6224's for our SAN backend and 5448's for our frontend server connectivity.
 
Helpful if it supports jumbo frames AND flow control at the same time (the HP 2824s don't).
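A quick way to sanity-check that a NIC actually ended up with both a jumbo MTU and pause frames active is to scrape the ip and ethtool output. A rough sketch, assuming a Linux host with iproute2 and ethtool installed; the interface name is just an example:

```python
# Sanity-check that a NIC has both a jumbo MTU and flow control (pause frames)
# active at the same time. Assumes Linux with iproute2 and ethtool; the
# interface name is hypothetical.
import re
import subprocess

IFACE = "eth1"   # hypothetical iSCSI-facing interface

link = subprocess.run(["ip", "-o", "link", "show", IFACE],
                      capture_output=True, text=True, check=True).stdout
mtu = int(re.search(r"mtu (\d+)", link).group(1))

pause = subprocess.run(["ethtool", "-a", IFACE],
                       capture_output=True, text=True, check=True).stdout
rx_on = re.search(r"RX:\s+(\w+)", pause).group(1) == "on"
tx_on = re.search(r"TX:\s+(\w+)", pause).group(1) == "on"

print(f"{IFACE}: mtu={mtu}, rx pause={rx_on}, tx pause={tx_on}")
if mtu < 9000 or not (rx_on and tx_on):
    print("Warning: jumbo frames and flow control are not both active on this port")
```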
 
We use stacked 6224's for our SAN backend and 5448's for our frontend server connectivity.

I know you had said why a while ago, but again, why? Seems like the 6200s for everything would be a good call. That's what we'll wind up doing: a stack of switches as our core switch and then two more stacked together just for iSCSI/SAN switching.
 
I know you had said why a while ago, but again, why? Seems like the 6200s for everything would be a good call. That's what we'll wind up doing: a stack of switches as our core switch and then two more stacked together just for iSCSI/SAN switching.

Cost, maybe? Even then, looking at the Procurve 2510/2810 over the 62xx may be a better call, unless you can get some good pricing from Dell.
 
The HP E5500-24G is what I have planned. I think it may be a little overkill, but it's our standard gigabit-capable core switch and I have deemed SAN functionality a core system.
 
I have been looking at the 2510. I have been using a Dell PowerConnect 5224.
 
The HP E5500-24G is what I have planned. I think it may be a little overkill, but it's our standard gigabit-capable core switch and I have deemed SAN functionality a core system.

Waaaay overkill, but damn, if you have the budget for it, overkill is underrated.
 
I know you had said why a while ago, but again, why? Seems like the 6200s for everything would be a good call. That's what we'll wind up doing: a stack of switches as our core switch and then two more stacked together just for iSCSI/SAN switching.

Yeah, it was a cost decision. Having to manage both the 54xx and the 62xx, I wish they were all 62xx switches.
 
One issue I have at the moment is really slow reads from the iSCSI SAN (I assumed it was the 5224 switch), so I pulled out an old unmanaged gigabit switch and I am getting the exact same speeds. Multipath is working, but I get about 21% on one link and about 10% on the other link, giving me about 40MB/s read. Writes are about 160MB/s.




RAID 10 6 x 372GB SAS + 2 Hot Spare.
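For what it's worth, those utilisation numbers line up with the read speed if you do the arithmetic on usable gigabit payload. A rough back-of-the-envelope check (the ~118 MB/s per-link figure is an approximation for GbE after protocol overhead):

```python
# Rough check: do the reported per-link utilisations explain the ~40 MB/s reads?
GBE_PAYLOAD_MBS = 118   # approx. usable MB/s on one GbE link after protocol overhead

reads = 0.21 * GBE_PAYLOAD_MBS + 0.10 * GBE_PAYLOAD_MBS
print(f"implied read throughput: ~{reads:.0f} MB/s")   # ~37 MB/s, close to the ~40 MB/s observed

two_link_ceiling = 2 * GBE_PAYLOAD_MBS                  # ~236 MB/s
print(f"writes at 160 MB/s use {160 / two_link_ceiling:.0%} of the two-link ceiling")
# Writes exceed what one link can carry (~118 MB/s), so multipath is clearly
# aggregating on writes; the read bottleneck is not raw link capacity.
```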
 
I'm using 1810G-24's here, no issues, just two dedicated to SAN only. I'm doing MPIO, Flow Control and Jumbo Frames.

From the two 1810G's on the MD3000i



The 1810G up top is a single switch for the AX150i w/MPIO. The two 1810G's at the bottom are for the MD3000i w/MPIO.


 
How many NICs do your servers have?

Also, what are the specs of your SAN? RAID level, number of disks, etc.?
 
The MD3000i has dual controllers, 4 x GbE, 14 x 15K SAS, RAID 10 + hot spare. The HVs have dual dual-port Intel NICs, 4 x GbE.
 
I assumed the MD3000i had one controller for failover only? I didn't know they could both be used at the same time.

I assume you have them on different subnets. Does each server have 2 x GbE to each switch?
 
That's how I understand it as well: it's connected to all of them, but only pushing data across two at the same time, which explains the lower numbers on the benchmarks. I just put in two NICs in case one fails.
 
Yes, that's what I thought.

I plan on running two switches, one to each controller, putting them on different subnets, and connecting each server to each switch; in the event of a failure the failover should just work. We will not be using VLANs (for security reasons), so the only way to access the SAN is to be connected to the switch or via a server connected to the switch.

I will probably run 2 x 1Gb links from each server to each switch (4 in total).
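To make the two-subnet plan concrete, here is a small sketch using Python's ipaddress module; the ranges and host names are purely illustrative, not anything from the actual setup:

```python
# Illustrative addressing plan: one subnet per SAN switch/controller,
# no routing between them, and each server gets an address on both.
# All ranges and names below are examples.
import ipaddress

switch_a = ipaddress.ip_network("192.168.10.0/24")   # switch A / controller 0
switch_b = ipaddress.ip_network("192.168.20.0/24")   # switch B / controller 1

hosts_a = list(switch_a.hosts())
hosts_b = list(switch_b.hosts())

print(f"controller 0 portal: {hosts_a[0]}, controller 1 portal: {hosts_b[0]}")

servers = ["host1", "host2", "host3"]                 # hypothetical servers
for i, name in enumerate(servers, start=10):
    # each server gets 2 x 1Gb into each switch, one address per subnet
    print(f"{name}: switch A -> {hosts_a[i]}, switch B -> {hosts_b[i]}")
```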
 
Yeah, that's exactly what we did. If I need to adjust config on the switch, I need a direct connection; no VLANs. No real point unless you absolutely need to get in remotely. But the damn things are 15 feet away in the server room. :)
 
For our XenServer cluster and HP Lefthand SAN, we are using two HP Procurve 2910al-48 port gigabit switches.

They rock out.
 