#1
New Virtualization servers and SAN setup help
The previous network admin has everything set up on VLAN 2, which is also the default VLAN on the network. There are currently only two VLANs (2 and 15), where 15 is the public side.
I recently purchased four servers with two dual-port NICs in them and an HP SAN. I also have two 48-port gigabit HP switches that these four servers and the SAN will connect to. I want to break the network up into multiple VLANs because the business is growing, and I think it would also just be easier to separate everything. All of the servers, PCs, switches, printers, etc. reside on VLAN 2 at the moment. In breaking these out, I need some help with the logical configuration of the switches and the network topology.

I want to use VLAN 60 for all iSCSI traffic between the new VM servers and the SAN, VLAN 160 for the actual VMs, and VLAN 180 for management of switches (SNMP, SSH to devices, etc.). Currently, our network looks like: VLAN 180 - 192.168.180.0, VLAN 60 - 192.168.60.0, etc.

So my questions are as follows:

1) On the two new switches for the new VM servers and the SAN, do I need to set up the management VLAN (180) as the primary VLAN and set the default gateway to the next hop? Our core switch is L3 and has an interface on it for VLAN 180 set to 192.168.180.1.

2) If I set the IP address of SANSWITCH1 to, say, 192.168.180.10, would the default gateway on the switch need to be 192.168.180.1, or would it need to be the next hop of the primary VLAN?

3) Will the traffic flowing through the switch connect through that default gateway, or just through the trunk ports from the two SAN switches back to the core switch?

4) Right now, each VM server has four physical network ports in use. Ports 1 and 2 go to SANSWITCH1 and ports 3 and 4 go to SANSWITCH2. I have those trunked and untagged on VLAN 60 (iSCSI traffic). They are tagged on the other VLANs that would need to access it, such as VLAN 160 for the servers. Is there a better way of doing this?

5) I want both SANSWITCH1 and 2 to have a management IP address if I can, and then two ports on each switch will be bonded back to the core switch for a 2 Gb link from switch to switch.

6) For the default gateway mentioned above: is that just for the IP address that I assign to the switch, for example 192.168.180.10 on VLAN 180? So the management port's next hop would then need to be 192.168.180.1?

Thanks in advance for any insight into my questions.
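For reference, the core-side layout described above could be sketched in Cisco IOS terms like this. This is purely illustrative: the /24 masks and interface descriptions are assumptions, and whether the iSCSI VLAN should have a routed interface at all is a design choice (some admins leave storage VLANs unrouted).

```
! Sketch of the core L3 switch's VLAN interfaces (masks assumed /24)
ip routing
!
interface Vlan60
 description iSCSI (servers <-> SAN)
 ip address 192.168.60.1 255.255.255.0
!
interface Vlan160
 description Virtual machines
 ip address 192.168.160.1 255.255.255.0
!
interface Vlan180
 description Management (SNMP/SSH)
 ip address 192.168.180.1 255.255.255.0
```

Each of these SVIs would then be the default gateway for hosts on that VLAN.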
Last edited by corge; 01-31-2011 at 08:34 AM. Reason: pictures of servers for reference
#2
First, draw me a diagram. This won't be too difficult.
__________________
Virtualization and Storage blog: http://www.jasonnash.com
#3
This is easy. Your core switch should be your default gateway. He knows about all the VLANs: he has an IP as the gateway on each VLAN and will do the routing between them. You can put management IPs on the other switches on the management VLAN; the default gateway on everything should point to the appropriate IP on the core switch.
You COULD have all 3 switches do Layer 3 and routing if you wanted. That can get messy, and I'm not sure your two new switches can even do that. The benefit there is that you can route between VLANs on the same switch, but it really depends on how often you do inter-VLAN routing. If a frame is sent to a host on the same VLAN, the switches will send it out the most direct path; meaning, server-to-server traffic should never go through your core, since your two server access switches are directly connected, UNLESS those ports are blocked by spanning-tree.

Speaking of which, make your core switch your spanning-tree root. If the HP supports it, I suggest doing Per-VLAN Spanning-Tree (PVST). That way you run a different instance of spanning-tree for each VLAN, and therefore a port that is blocked for one VLAN may be used by another VLAN for better utilization.

Tag everything. Set the native (or default) VLAN to something you won't use. I hate native VLANs. Tag everything. If all 4 NICs on a server will carry iSCSI traffic (or server traffic, management, VM, whatever), then tag them for all 4. My suggestion is to not do that, and instead run iSCSI over NICs dedicated to that purpose.
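On the Cisco side, the spanning-tree-root and tag-everything advice above could look roughly like this. The port number, VLAN list, and the 999 choice for the unused native VLAN are illustrative assumptions, not taken from the thread:

```
! Sketch: make the core the root for each per-VLAN spanning-tree instance
spanning-tree mode rapid-pvst
spanning-tree vlan 2,60,160,180 root primary
!
! Trunk down to a SAN switch: tag everything, park the native VLAN
! on an ID that carries no real traffic
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 999
 switchport trunk allowed vlan 60,160,180
 switchport mode trunk
```

On the HP side the same intent is expressed per VLAN with `tagged`/`untagged` port lists rather than per-interface trunk commands.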
#4
Which IP address on my core switch should be my default gateway? The one it has on VLAN 2, the main IP address for the switch? Or 192.168.180.1 (management VLAN 180), or 192.168.160.1 (server VLAN 160)?

Each server has an IPMI port on it for dedicated management, and the SANs each have a management port onboard. What about untagging the main VLAN the traffic is supposed to be on? Right now I have the servers all untagged on 60 (iSCSI traffic) and tagged on everything else. By the way, the servers each have two onboard NICs and a four-port NIC card installed separately in each one, so 6 NIC ports total. Right now I was just going to use the two onboard ports and two of the four on the expansion cards to connect. Would you suggest something different?
#5
Every VLAN has its own default gateway. So VLAN 2 will have an IP, and VLAN 180 will have an IP. Things on each VLAN will use the gateway IP for the VLAN to which they belong, and the Layer 3 switch will route between them for you.

I just like to tag everything. It's up to you, but that's my thought; it makes it easy to change things. If you have the ports, use all 6: 2 on each server for iSCSI, and then the rest for management and/or virtual machine traffic.
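One way that port split could look on the HP side, in ProCurve-style syntax. Everything here is a sketch: the port numbers (1-2 for a server's iSCSI NICs, 3-4 for its shared NICs on this switch) and the VLAN names are assumptions for illustration.

```
vlan 60
   name "iSCSI"
   untagged 1-2
vlan 160
   name "VM-traffic"
   tagged 3-4
vlan 180
   name "Management"
   tagged 3-4
```

The dedicated iSCSI ports carry VLAN 60 untagged and nothing else; the shared ports carry the VM and management VLANs tagged, so the server's bonding/VLAN config decides what rides on them.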
#6
What brands of NICs are the onboard/expansion?
I mainly use Dell servers that tend to have Broadcom NICs onboard. I use those mainly for management/vMotion traffic. On the 4-port card I'd do a 2-port bond for VM traffic and use 2 ports for iSCSI traffic, sending 1 port to each switch. I wouldn't configure the storage VLAN on the core switch. It really depends on whether the initiators are on the same subnet as the storage management network.
#7
Each server has an IPMI port (a dedicated Ethernet management port, IP set in the BIOS), two onboard Intel GigE NICs, and a four-port Intel GigE expansion card. So: one management port, and 6 ports available for everything else, all Intel.

At the moment, and from the start, I was planning on using the two onboard ports as a bonded 2 Gb link to one switch, and 2 of the ports on the expansion card as a bonded 2 Gb link to the second switch. This would leave two ports unused. Is this not how this needs to be done? Does each port need to be on its own VLAN? The reason I was going to bond them is to get a 2 Gb link to each switch for both throughput and redundancy.

OK, on the HP switches: they are L3 switches, but I wasn't going to do IP routing on them. I have a menu system whereby I can go in and configure each VLAN, and I can give the switch an IP address on a VLAN. There is only one default gateway I can set on the switch. Do I need to set an IP address for the switch on each VLAN ID in the menu? For the default gateway, do I need to set the gateway for a specific VLAN, or what? The two switches I'm using for this VM setup are HP 2910al-48G: http://h10010.www1.hp.com/wwpc/us/en...8-3901671.html

The core switch is a Cisco 3560G. It has a main IP address of 192.168.1.72 on VLAN 2 (the main, and really only, VLAN there was before I started breaking things out). I have added several more VLANs, which are all routed on this L3 core switch and not on the router. Anyway, the Cisco L3 has VLAN interfaces set up for all the VLANs we're going to be using now. So VLAN 180 on the switch is 192.168.180.1, VLAN 60 is 192.168.60.1, etc.
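On a ProCurve-class switch like the 2910al, the menu options described above map onto CLI commands roughly like this. This is a sketch assuming the switch stays pure Layer 2 and is managed only on VLAN 180, reusing the 192.168.180.10 example address from earlier in the thread:

```
; Management address on VLAN 180 only; no routing on this switch
no ip routing
vlan 180
   ip address 192.168.180.10 255.255.255.0
   exit
ip default-gateway 192.168.180.1
```

With routing disabled, the switch does not need an IP on every VLAN: the single default gateway applies only to the switch's own management traffic (SSH, SNMP, syslog), while transit traffic simply follows the tagged trunks back to the core, which does all the routing.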
#8
Anyone else have any input on this?