Any thoughts on current best practice surrounding iSCSI segmentation when moving to 40G or 100G?

cyr0n_k0r
Conventional wisdom for years and years has been to segment your iSCSI traffic onto separate physical interfaces/hardware: never combine iSCSI and your data traffic onto the same NIC.
This worked well in the 1G days, and the thinking carried over as the industry moved to 10G. I remember hot debates over whether or not jumbo frames were still relevant once you moved servers and storage to 10G.
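(For what it's worth, the jumbo frame question is easy enough to settle empirically. Here's a minimal sketch, assuming Linux hosts and a 9000-byte MTU end to end; the portal IP is a placeholder, and it just wraps a DF-bit ping rather than anything clever:)

```python
#!/usr/bin/env python3
"""Quick check that jumbo frames actually pass end to end to an iSCSI portal.

Sends a don't-fragment ICMP echo sized for a 9000-byte MTU
(9000 - 20 IP - 8 ICMP = 8972 bytes of payload). If any hop in the
storage path is still at 1500, the ping fails instead of silently fragmenting.
"""
import subprocess
import sys

PORTAL = "10.10.50.10"      # placeholder: iSCSI portal / SAN target IP
PAYLOAD = 9000 - 20 - 8     # 8972 bytes: jumbo MTU minus IP and ICMP headers

def jumbo_ok(host: str) -> bool:
    # -M do -> set DF and fail rather than fragment; -s -> payload size; -c 3 -> three probes
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", host],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    if jumbo_ok(PORTAL):
        print(f"Jumbo frames pass cleanly to {PORTAL}")
        sys.exit(0)
    print(f"DF-bit ping at {PAYLOAD} bytes failed; something in the path is not jumbo-clean")
    sys.exit(1)
```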

Now that 40G/100G is getting cheaper and cheaper, our organization is moving our SANs over to 40G. Our VM host servers are currently running 4x10G: active/passive for DATA, and active/passive for iSCSI.
I'd like to hear from others who have moved their servers to 40G or beyond, and what you're doing as it relates to iSCSI and DATA traffic sharing the same physical 40G interface. Are you continuing to segment on separate physical interfaces? If not, have you noticed any performance issues when DATA and iSCSI share a single 40G link? Or have you found that once you move to 40G your bandwidth exceeds what your servers can push, and it's therefore safe to bring iSCSI and DATA back onto the same physical interface?

(All of the above assumes enterprise datacenter-level hardware, i.e. Cisco Nexus switches, Intel/Broadcom/Mellanox NICs, vSphere/ESXi clusters, etc.)
 
Very interesting question, and while I can't afford to touch anything at this level, I think it comes down to the ability to max out the bandwidth: that was easy on 1Gb, less easy on 10Gb, and is currently difficult with 40Gb and 100Gb, but it will change as arrays move to all-flash. So while running both on 40Gb may not be an issue today, that's only because the storage can't use all the bandwidth right now, imo.
 
Bumping this. It's somewhat about 'maxing out the bandwidth', but equally about mixing block-level iSCSI traffic with file-level traffic like NFS, CIFS, etc. on the same network interface and the same CPUs servicing it.
Has anyone who has moved their servers to 40GbE mixed iSCSI and file traffic on the same interface?
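(Not an answer to the bump, but on the 'same CPU' angle: before mixing block and file traffic on one port, it can help to see how the NIC's queues are actually spread across cores. A rough sketch, assuming a Linux host with per-queue MSI-X interrupts; the interface name is a placeholder:)

```python
#!/usr/bin/env python3
"""Rough look at how a NIC's queues are spread across CPUs (Linux).

Parses /proc/interrupts for rows belonging to the given interface and
prints per-queue interrupt totals, which shows whether storage and file
traffic sharing one 40G port would also be contending for the same cores.
"""
import sys

IFACE = sys.argv[1] if len(sys.argv) > 1 else "eth0"   # placeholder NIC name

with open("/proc/interrupts") as f:
    f.readline()                        # skip the "CPU0 CPU1 ..." header
    for line in f:
        if IFACE not in line:
            continue
        fields = line.split()
        irq = fields[0].rstrip(":")
        counts = []
        for tok in fields[1:]:          # per-CPU counts run until the first non-numeric field
            if tok.isdigit():
                counts.append(int(tok))
            else:
                break
        label = fields[-1]              # e.g. eth0-TxRx-3
        busiest = counts.index(max(counts))
        print(f"IRQ {irq:>4} {label:<20} total={sum(counts):>12,} busiest CPU={busiest}")
```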
 
I don't think you're going to get a very straightforward answer to this. There are many factors behind doing it one way or another, and it's going to depend on the architecture you're using.

Why are you even purchasing 40G links? Because you can? Is it because your 10G links are being saturated? Is your SAN even fast enough for it to matter? The answers to those alone will tell you whether this is a simple "no, we can't do it" or an "it's not going to be an issue any time soon" question.
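(On the "are the 10G links being saturated" point: that's measurable before any purchase. A minimal sketch that polls interface byte counters on a Linux host; the interface name and link speed are placeholders:)

```python
#!/usr/bin/env python3
"""Poll /proc/net/dev and report link utilization, to answer
"are the 10G links actually saturated?" with data instead of a hunch."""
import time

IFACE = "ens1f0"          # placeholder: the uplink to measure
LINK_GBPS = 10            # nominal link speed
INTERVAL = 5              # seconds between samples

def read_bytes(iface: str):
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":")[1].split()
                return int(fields[0]), int(fields[8])   # rx_bytes, tx_bytes
    raise ValueError(f"interface {iface} not found")

prev = read_bytes(IFACE)
while True:
    time.sleep(INTERVAL)
    cur = read_bytes(IFACE)
    rx_bps = (cur[0] - prev[0]) * 8 / INTERVAL
    tx_bps = (cur[1] - prev[1]) * 8 / INTERVAL
    cap = LINK_GBPS * 1e9
    print(f"{IFACE}: rx {rx_bps/1e9:5.2f} Gb/s ({rx_bps/cap:5.1%})  "
          f"tx {tx_bps/1e9:5.2f} Gb/s ({tx_bps/cap:5.1%})")
    prev = cur
```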

The real reason people separated them in the first place is likely less about bandwidth and more about security. Most iSCSI traffic I've seen is not encrypted, so it would be possible to sniff that block-level traffic. In most cases that means you set up an isolated network where those connections have no way to leak anywhere else. Your front-end traffic should be encrypted, and it has to be routable in order for devices to connect to the services you provide. The only way you'd be able to do that on the same interface is by dumping iSCSI onto a dedicated VLAN. You could get away with not allowing that VLAN to be trunked over certain paths, but if someone misconfigured it, it's very possible someone could flip the VLAN on an access switch somewhere in your network and try to connect to your SAN directly. For the cost of a pair of extra ports per server and another switch or two, you can avoid the complexity and security issues by physically isolating the connections.
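(If you do end up carrying iSCSI as a VLAN on shared ports, it's worth verifying the isolation from the routable side rather than trusting the trunk config alone. A minimal sketch, meant to be run from a front-end host that should have no path to the storage VLAN; the portal IPs are placeholders, and 3260 is just the standard iSCSI target port:)

```python
#!/usr/bin/env python3
"""Sanity check that iSCSI portals are NOT reachable from the routable,
front-end network. Run from a host that should have no path to the
storage VLAN; any successful TCP connect to port 3260 is a red flag."""
import socket

PORTALS = ["10.10.50.10", "10.10.50.11"]   # placeholder portal addresses on the storage VLAN
ISCSI_PORT = 3260                          # standard iSCSI target port
TIMEOUT = 3                                # seconds per probe

leaks = []
for ip in PORTALS:
    try:
        with socket.create_connection((ip, ISCSI_PORT), timeout=TIMEOUT):
            leaks.append(ip)
    except OSError:
        pass    # timeout / refused / unreachable is the desired outcome

if leaks:
    print("WARNING: iSCSI portals reachable from the front-end network:", leaks)
else:
    print("No iSCSI portals reachable from here; isolation looks intact.")
```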
 
At my old job we moved to FCoE and had dedicated switches, but these were 10Gb links (several) with a 40Gb uplink back to the core.
 
Segregate always when possible.

It's really no different than somebody telling you to just run the public guest network on the same network as the nuclear launch systems (exaggeration, but think about it for a bit).

Otherwise you'll find yourself telling the world that 640K is enough memory for anyone.

With that said, stupidity reigns. The VxRail system we use puts everything in one basket. Architecture and design are passé.

I'm concerned that NetBEUI and IPX will make a comeback, now that we don't care what we slather together. If you run Apple devices, the pillaging has probably already begun!

I realize that many will disagree. But they could be Dell/EMC folks.
 