iSCSI over dvSwitch, or hybrid?

Curious what most people are doing in this scenario. I've got a few hosts with four 1Gb NICs. Currently I have a dvSwitch with ports 0 and 1 for VM traffic, management, and vMotion, and a standard vSwitch with ports 2 and 3 for my iSCSI, with the virtual initiators bound to physical ports per the relevant KBs.

Is anyone running a 4-port dvSwitch with no separate vSwitch for iSCSI? Any downside other than not being able to manage the switch without vCenter? If I go the pure dvSwitch route, do I set up four paths per host, binding to each of the NICs in the dvSwitch?

Sorry if these questions seem silly, I've read a few conflicting articles.
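
For reference, here's a rough pyVmomi sketch that lists each host's software iSCSI adapter and which vmkernel ports are bound to it - a quick way to sanity-check the current binding before changing anything. The vCenter name and credentials are placeholders, and the QueryBoundVnics call is worth verifying against your vSphere version; treat this as an outline, not a tool.

```python
# Sketch: list each host's software iSCSI adapter and its bound vmkernel ports.
# VCENTER / USER / PASSWORD are placeholders; verify QueryBoundVnics on your build.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER, USER, PASSWORD = "vcenter.example.local", "administrator@vsphere.local", "secret"

ctx = ssl._create_unverified_context()            # lab only; use real certs in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        print(host.name)
        for hba in host.config.storageDevice.hostBusAdapter:
            if not isinstance(hba, vim.host.InternetScsiHba):
                continue                          # only care about the iSCSI adapters
            bound = host.configManager.iscsiManager.QueryBoundVnics(
                iScsiHbaName=hba.device)
            print("  %s bound vmk ports: %s"
                  % (hba.device, [b.vnicDevice for b in bound] or "NONE"))
    hosts.Destroy()
finally:
    Disconnect(si)
```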
 
Just a note: I always prefer to have my management and vMotion on a standard vSwitch; that way, if your distributed switch goes down, you can still manage the hosts.

And trust me, if you've ever had a distributed switch go down, you know the pain I speak of.
 
There are several ways to tackle this:

1. Run all four through the same dvSwitch. Create a port group for each iSCSI connection and bind each NIC accordingly. Just make sure your NICs are ordered the same on each host, document which ones will carry storage traffic, and replicate that layout everywhere (there's a quick NIC-ordering sketch after this post). Use host profiles if possible to avoid screw-ups. Risk: if you lose vCenter the dvSwitch still functions, but you can't manage it. I run two 10GbE links through one dvSwitch and I haven't once had a vCenter issue. While it can happen, I think it's operationally more complex to split it out. If you have the license and several hosts, I like this method - easy to configure and operationally less complex.

2. Two dvSwitches - split storage out to one and everything else to the other. Eh, operationally complex and doesn't really buy you much unless you just like having two dvSwitches to say you're running two.

3. Two standard vSwitches - separate storage from the rest. I do this for my remote offices, because I have Standard licensing and no vCenter on site. Operationally less complex and basically bulletproof, outside of incorrectly configuring the failover orders.

I like options 1 and 3. Don't make it any more complex than it needs to be. And while you have to acknowledge the vCenter management issue with the dvSwitch should you have a problem, I think the benefits far outweigh the risks.
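
If it helps, here's a small pyVmomi sketch that dumps each host's vmnic-to-PCI ordering and dvSwitch uplink membership, so you can eyeball whether the hosts really do line up before you bind the storage port groups. The vCenter name and credentials are placeholders - a sketch under those assumptions, not a finished tool.

```python
# Sketch: dump vmnic -> PCI/driver ordering and dvSwitch uplink membership per host,
# so you can confirm the hosts line up before binding iSCSI port groups.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        print(host.name)
        for pnic in host.config.network.pnic:
            print("  %-8s pci=%-14s driver=%s" % (pnic.device, pnic.pci, pnic.driver))
        for psw in host.config.network.proxySwitch:
            uplinks = [key.split("-")[-1] for key in psw.pnic]   # keys end in the vmnic name
            print("  dvSwitch %s uplinks: %s" % (psw.dvsName, uplinks))
    hosts.Destroy()
finally:
    Disconnect(si)
```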
 

Unless there's a need for the advanced features of a vDS, I'd tend to go with option 3. Distributed switches are cool and bring great features like LBT, but unless there's a requirement or a definitive operational advantage to using one, it's often best to just follow the KISS methodology.
 
Totally agree. I manage this stuff daily and I'm the only one doing it, so I keep it as simple as I possibly can. The lab is where you play with stuff you don't 'need'. OP, make sure you take those things into consideration.
 
One more post for keeping it simple. The lowest complexity is the way to go unless there's an operational requirement for higher complexity.
 
I'm a fan of management and storage on a standard switch - no need to tie something to its own dependency. All VM traffic goes through dvSwitches if possible (except vCenter - management clusters for the win, especially when you add NSX to the mix).
 
I do hybrid when I can: management on a standard vSwitch, everything else on a dvSwitch. Of course, there are times when I only have 2 x 10Gb ports and need some sort of QoS, so I go all dvSwitch with NIOC.
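
For the all-dvSwitch-with-NIOC case, here's a read-only pyVmomi sketch that prints the NIOC traffic-class shares on a named distributed switch. DVS_NAME and the connection details are placeholders, and the infrastructureTrafficResourceConfig attribute is how I understand the vSphere 6.x+ API exposes NIOC - double-check it on your build.

```python
# Sketch: print the NIOC traffic-class shares configured on one distributed switch.
# DVS_NAME and credentials are placeholders; verify the NIOC attributes on your build.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

DVS_NAME = "dvSwitch01"                           # hypothetical switch name

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    switches = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in switches.view if d.name == DVS_NAME)
    for tc in dvs.config.infrastructureTrafficResourceConfig:   # NIOC traffic classes
        alloc = tc.allocationInfo
        print("%-16s shares=%s(%s)  limit=%s  reservation=%s"
              % (tc.key, alloc.shares.level, alloc.shares.shares,
                 alloc.limit, alloc.reservation))
    switches.Destroy()
finally:
    Disconnect(si)
```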

Another reason why I love UCS.

Anyway, recovering a VMware distributed switch is nothing once you've had to recover a Cisco Nexus 1000v... not fun.
 
Full Distributed.

Most environments are running blades with two 10GbE NICs these days, though, and I'm a big fan of converged networking with NIOC.
 
/me waits for the day you have to restore VC on a DVS. :p

Me keeps backups, separate VC and Database servers.

Very worst case, I build out a quick standard switch, attach vCenter to it, power it on, then move it back afterward. Better than wasting 10GbE NIC ports.

Additionally, I'm completely dependent on vCenter anyway because of vCloud Director - all the resource pools are tied into the vCD database.
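
Roughly what that worst-case recovery looks like in pyVmomi, connecting straight to the ESXi host that holds the vCenter VM (since vCenter itself is down): create a throwaway standard vSwitch on a spare uplink, add a port group, re-home the vCenter VM's vNIC, and power it on. Every name here (host, credentials, VM name, vmnic3, the port group) is a placeholder - this is an outline of the steps, not a runbook.

```python
# Sketch of the worst-case recovery: connect straight to the ESXi host that runs the
# vCenter VM, build a temporary standard vSwitch + port group on a spare uplink,
# re-home the vCenter VM's vNIC onto it, and power it on. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

HOST, USER, PASSWORD = "esxi01.example.local", "root", "secret"
VM_NAME, UPLINK, PG = "vcenter", "vmnic3", "recovery-pg"

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host=HOST, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    netsys = host.configManager.networkSystem

    # 1. Temporary standard vSwitch bound to the spare uplink.
    netsys.AddVirtualSwitch(
        vswitchName="vSwitch-recovery",
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=[UPLINK])))

    # 2. Port group on it (set vlanId to whatever your management VLAN is).
    netsys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=PG, vlanId=0, vswitchName="vSwitch-recovery",
        policy=vim.host.NetworkPolicy()))

    # 3. Point the vCenter VM's vNIC at the new port group and power it on.
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in vms.view if v.name == VM_NAME)
    nic = next(d for d in vm.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualEthernetCard))
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(deviceName=PG)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
    vm.PowerOnVM_Task()
finally:
    Disconnect(si)
```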
 
I keep the management port (or an extra vmk) on the standard switch when I can, everything else distributed.
 

Many restores will end up being that worst case (the temporary standard switch), though - that's what I hate about it. You can't attach the VM to the new dvSwitch if doing so requires re-registering the VM. :(

That's also the reason I'm less fond of blades and still fond of management clusters - a management cluster avoids all of that, and I use NSX a lot, so you have to have one anyway.
 