L2 between datacenters

Berg0
Hi guys,

Looking for some feedback on various ways of stretching an L2 subnet across two datacenters. The use case is VMware SRM: we don't want to re-address our VMs on a failover like we do now. I'm hoping to get away with not having to buy an MPLS VPN due to the expense. I have Juniper SRX clusters at each site, each site is on a 100Mbit EDI circuit, and we currently have a route-based L3 IPsec VPN between the sites.

Opinions on a few options?

1. pay for an MPLS VPN
2. try to configure L2 over GRE over the route-based IPsec tunnel (somewhat fuzzy on how well it's supported; rough sketch below the list) > http://www.juniper.net/us/en/local/pdf/implementation-guides/8010086-en.pdf
3. just re-address all the VMs
4. dual NAT > http://www.juniper.net/us/en/local/pdf/app-notes/3500151-en.pdf (page 14)
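
For option 2, the underlay plumbing on the SRX would look something like this. Just a sketch with placeholder names and addresses, assuming the existing route-based VPN binds to st0.0; the actual L2 bridging onto the GRE interface is the part the Juniper guide above covers, and the part I'd verify for SRX support on your code version.

Code:
# existing route-based IPsec tunnel (already in place)
set interfaces st0 unit 0 family inet address 10.255.0.1/30
set security ipsec vpn to-dc2 bind-interface st0.0

# GRE tunnel sourced from loopback; the remote GRE endpoint is
# routed via st0.0 so the GRE traffic rides inside the encrypted tunnel
set interfaces lo0 unit 0 family inet address 10.255.255.1/32
set interfaces gr-0/0/0 unit 0 tunnel source 10.255.255.1
set interfaces gr-0/0/0 unit 0 tunnel destination 10.255.255.2
set routing-options static route 10.255.255.2/32 next-hop st0.0

The other SRX would mirror this with the addresses swapped.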

What is everyone else running SRM doing?
 
Contact your datacenter rep and ask them what solutions they have?
 
I work for Comcast as a Metro-E engineer, installing MPLS circuits for our customers. We have a ton of customers who use MPLS between data centers. You can front the connection with a router and go layer 3, or simply go layer 2. Most do layer 3 for the benefit of dynamic routing protocols. Keep in mind that MPLS may be more expensive, but you are also paying for support via SLAs. I know for our customers, we have a 4-hour MTTR (mean time to repair), or we owe money back to the customer. We also guarantee five nines of availability (99.999%, i.e. roughly 5.3 minutes of downtime per year).

I guess it really depends on how reliable you need your connection to be.
What is your business willing to spend?
What are your support needs?

VPNs over a basic L3 internet connection will work, but it's the internet...
A layer 2 MPLS VPN through your provider usually never leaves the provider's network, and bandwidth is guaranteed via a CIR. Food for thought.
 
Can't speak to any Juniper solutions, but I'm finishing up a customer's DC move using OTV on some ASR 1Ks to stretch L2. IMO, this is the best method (or FabricPath, if you had dark fiber) as it limits your failure domain. We have 20 or so VLANs bridged and it's been rock-solid so far.
 
I would have suggested the tunnel if you don't want to pay for an MPLS circuit (I'm another enterprise ISP WAN tech, and it's pretty much our go-to solution for scenarios like these), but reading about OTV as Vito suggested above, it sounds pretty awesome.
 
Should also mention the sites are geographically distant from each other, about 1,800 miles apart and in different countries, with 100Mb WAN connections at each site.

OTV looks awesome, was actually talking about it last night with a friend, but we're locked into Juniper for the next while.

Budget is basically zero at this point, which is why VPLS over GRE over IPsec is appealing.
 
Have you looked at VXLAN?

VXLAN is not a DC interconnect solution. It's for bridging L2 domains across L3 networks in the same datacenter...think DC pods. Not for something 1800 miles apart.

Solutions are simple. Stretch VLANs, use something like OTV, or, if you plan to fail ALL the VMs at once, just have two L2 domains with the same address space.

I do a lot of SRM design and those are the answers. Else, re-IP the VMs.
 
If this is truly a DR site, why not use BGP for failover? Point the IP block to both data centers, one with a higher cost. Your primary goes down, routes converge, and you're back up and running in a few seconds. That's how we do it, and it works surprisingly well.
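
On Junos, the "higher cost" at the DR site could be a simple AS-path prepend on the export policy. Just a sketch, with a placeholder prefix and ASN:

Code:
# DR site advertises the same block with a longer AS path,
# so the primary DC's announcement wins until it withdraws
set policy-options policy-statement EXPORT-DR term vm-block from route-filter 203.0.113.0/24 exact
set policy-options policy-statement EXPORT-DR term vm-block then as-path-prepend "65001 65001 65001"
set policy-options policy-statement EXPORT-DR term vm-block then accept
set protocols bgp group upstream export EXPORT-DR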
 
That's for external routing. This is more about internal server addressing. And also why I asked if they needed to only fail parts of the environment (just some servers) or all of it.
 
I was going to say the same thing, but I think the dude is talking about the same subnet in two DCs doing an active/standby. It essentially blackholes one DC at a time, but I suppose it could work.
 
To avoid blackholing a DC, generally speaking, you dual-home the servers, each with unique addresses outside the redirected block, so that you can still manage them. We do this for a customer who has a DC in the Pacific NW. North Korea could nuke the primary DC and the secondary DC in the southwestern US would pick up the traffic in under 30 seconds.

Yes, my example is for external traffic, but dynamic routing protocols work the same way externally or internally. OSPF or iBGP would both do the job for you.
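
Internally it's the same trick. For example, in Junos you could inject the server subnet into OSPF from both DCs with different metrics; sketch only, placeholder subnet and metric:

Code:
# primary DC exports the subnet with a low metric; the DR DC
# uses the same policy with a higher metric (say 500) so its
# route only takes over when the primary's advertisement disappears
set policy-options policy-statement OSPF-EXPORT term vm-subnet from route-filter 10.10.20.0/24 exact
set policy-options policy-statement OSPF-EXPORT term vm-subnet then metric 100
set policy-options policy-statement OSPF-EXPORT term vm-subnet then external type 2
set policy-options policy-statement OSPF-EXPORT term vm-subnet then accept
set protocols ospf export OSPF-EXPORT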
 