iSCSI Users. How Did You Set Up Your Switches?

calvinj

I'm looking to see how people who are using iSCSI set up their switches. 2012 is fast approaching, I need to turn in what I need for the budget, and I'm torn between a couple of options.

Add 2 more switches to my core switch stack for iSCSI (VLAN the networks off from the everyday VLANs).

or

Stack 2 switches by themselves just for iSCSI, with one port from that stack back to the core for management.

Our bandwidth for iSCSI is pretty small at this point. We're in good shape for IOPS and don't see a lot of rapid growth in the coming years.

Which way I go determines which switches I order, so I'm looking for some ideas. I'm putting this in here because we only use our SAN for virtualization.
 
We often see customers do dedicated switches for iSCSI. We do a lot of Cisco 2960S switches for that.

Up to you; either way is good. Others just do an iSCSI VLAN, as long as your switches are good and capable.
 
They are good and capable; I'm just torn either way.

If I segment it out, I can try to get the new PowerConnect 7000s. If I don't segment it out, I'll wind up having 2x 6248 switches and 2x 6224s all in one stack.
 
Looks like we have 1 vote for each. Anybody else want to put in their two cents?
 
There is no right or wrong answer. Some people just like to have separate switches, and others just do VLANs. I'm all for doing a VLAN as long as the switches have no performance issues, but sometimes customers request dedicated switches for iSCSI.
 
One vote in favor of VLANs.

There should be no discernible difference (from a functional point of view) between VLANing and a dedicated switch. You may see a performance impact if your switch is being hammered close to its fabric capacity, but that is few and far between.

I suppose the more important question is what type of switches you were looking at for each solution. Personally, I'd rather get a higher-end switch than a couple of lower-end dedicated switches.
 
iSCSI switches would end up being PowerConnect 6224s regardless. Just a matter of stacking them in with other switches or keeping separate.

Since we already have servers in with our core switches, I'll wind up stacking everything together. One management location.
 
Higher end or lower end doesn't matter that much. I've sent people to Best Buy to buy cheap Linksys boxes before :)
 
If you can stack them, then stack them.

The fabric will be the same regardless.
 
:confused:

Depends on what you're planning on doing with those cheap boxes. You cannot tell me with a straight face that there is no perceivable difference in performance and functionality between a "dumb" switch and a managed one (e.g., a Cisco 49xx).
 
I always advise VLANs; broadcast traffic from an iSCSI SAN can be huge at times.

I would also say make sure you have a decent router if you're routing between VLANs, as you are limited by the interface speed on that device.
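
Just to put rough numbers on that (assuming a single 1 GbE routed interface, which is my assumption and not necessarily anyone's actual setup), a quick back-of-the-envelope in Python:

Code:
# Back-of-the-envelope ceiling for iSCSI routed over one 1 GbE interface.
# The 1 Gb/s link speed and the ~10% protocol overhead are assumptions; adjust for your gear.
link_bps = 1_000_000_000              # 1 Gb/s routed uplink
raw_mb_per_s = link_bps / 8 / 1e6     # ~125 MB/s theoretical
usable_mb_per_s = raw_mb_per_s * 0.9  # rough allowance for Ethernet/IP/TCP overhead
print("~%.0f MB/s shared by ALL inter-VLAN iSCSI traffic" % usable_mb_per_s)

That one routed interface becomes the ceiling for every initiator that has to cross VLANs to reach its target.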
 
There is no perceivable difference in performance and functionality between a dumb switch and a managed one (assuming isolated, dedicated switches).
 
Routing iSCSI traffic is explicitly not supported and will work very, very poorly (at least for ESX).
 
If all you are doing is iSCSI from a few hosts to a target, any reasonable switch should work fine. If you pound every port or use other advanced features... well, that's what separates the men from the boys.
 
I use an SG-200-20; however, all my servers' iSCSI traffic is on a VLAN by itself.
 
If you simply want to isolate traffic, use VLANs.

If you need to isolate switch processing, use separate switches.

Comes down to network traffic load and available switch processing power.

In my lab I use two $25 TRENDnet gigabit switches for storage traffic.
 
We use two ProCurve 1800-8G switches, with one set of interfaces on each switch. They connect three iSCSI links to an IBM BladeCenter running the VMs.
 
Something that can give you a lot of the benefits of separate switches without actually going to separate switches is port-gapping: put storage traffic on different physical ports than your other traffic, even splitting it across two switches. You can build A/B SANs that way without doubling your switch count. If you do QoS for things like voice, this keeps your iSCSI packets from competing with voice traffic, which may take precedence.

If you have a powerful enough router (or, more likely, a switch that also routes), you can route iSCSI without any problem.

From end to end, iSCSI performance on a network is affected by two things: latency and segment loss. Switches are getting lower and lower in latency, and if you keep other traffic off the links you can get good, consistently low latency on your switch ports, so long as you don't oversaturate them.
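
If you want to sanity-check latency from a host's point of view, here is a small Python sketch that just times TCP connection setup to the target portal. The address is a placeholder (3260 is the usual iSCSI port), and it measures only the handshake, not an iSCSI login, so treat it as a rough indicator rather than a benchmark:

Code:
import socket
import time

PORTAL = ("192.168.50.10", 3260)  # hypothetical iSCSI target portal; use your SAN's address
SAMPLES = 20

times_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection(PORTAL, timeout=2):
        pass  # only measuring TCP connection setup, no iSCSI login
    times_ms.append((time.perf_counter() - start) * 1000.0)
    time.sleep(0.1)

print("min %.2f ms / avg %.2f ms / max %.2f ms" %
      (min(times_ms), sum(times_ms) / len(times_ms), max(times_ms)))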

If you've got good-sized buffers (but not too big), you can avoid losing TCP segments, which keeps retransmits from happening. Flow control keeps frames from overloading the buffers, again preventing retransmissions.
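
One cheap way to watch for segment loss on a Linux initiator is the kernel's own TCP counters. This is a rough sketch, and the numbers are host-wide (not per-VLAN or per-target), so a climbing ratio is a hint to look at buffers and flow control rather than proof of where the loss is:

Code:
# Linux-only: read host-wide TCP retransmit counters from /proc/net/snmp.
def tcp_retransmit_ratio(path="/proc/net/snmp"):
    with open(path) as f:
        tcp_lines = [line.split() for line in f if line.startswith("Tcp:")]
    header, values = tcp_lines[0], tcp_lines[1]  # first Tcp: line is field names, second is values
    stats = dict(zip(header[1:], (int(v) for v in values[1:])))
    return stats["RetransSegs"] / max(stats["OutSegs"], 1)

if __name__ == "__main__":
    print("TCP retransmit ratio: %.4f%%" % (tcp_retransmit_ratio() * 100))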

Of course there are other iSCSI performance factors, but those are two that are affected by the network itself.

Also, jumbo frames are often recommended, but they may not make much of a difference.

http://www.boche.net/blog/index.php...mparison-testing-with-ip-storage-and-vmotion/
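
If you do enable jumbo frames, it's worth confirming that the iSCSI-facing NICs actually picked up the larger MTU. A small Linux-only sketch; the interface names are made up, and this only checks the local NIC, so the rest of the path (switch ports, SAN controllers) still needs to be verified separately, e.g. with a do-not-fragment ping:

Code:
# Linux-only: check that the (hypothetical) iSCSI-facing NICs report a jumbo MTU.
ISCSI_NICS = ["eth2", "eth3"]  # substitute your actual iSCSI uplinks

def get_mtu(ifname):
    with open("/sys/class/net/%s/mtu" % ifname) as f:
        return int(f.read().strip())

for nic in ISCSI_NICS:
    mtu = get_mtu(nic)
    print("%s: MTU %d (%s)" % (nic, mtu, "jumbo OK" if mtu >= 9000 else "not jumbo"))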
 