Cisco Support Community


Layer 2/3 Spanned VLANs & Asymmetric Routing

I have a requirement to configure multiple spanned layer 2 networks between data centres for clustering services, and I will be using MPLS xconnects.

One of the spanned networks requires layer 3 connectivity, so the subnet needs to be configured across four switches (two in each data centre) along with HSRP.  Each switch will advertise the network into BGP (not MP-BGP).  My concern is asymmetric routing: I want traffic to follow the same path on and off the LAN.
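For illustration, the basic setup on each of the four switches would be along these lines (VLAN number, addresses and AS number are placeholders, with HSRP priorities adjusted per switch):

```
interface Vlan212
 description Spanned server VLAN (extended between DCs over MPLS xconnect)
 ip address 10.1.212.2 255.255.255.0
 standby 1 ip 10.1.212.1
 standby 1 priority 110
 standby 1 preempt
!
router bgp 65001
 network 10.1.212.0 mask 255.255.255.0
```

All four switches advertise the same /24, so inbound traffic can land in either data centre while the HSRP primary pulls the return traffic to one site.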

The issue is that traffic will always exit via the first switch it traverses, as all four have connected interfaces on the spanned VLAN.  Some of the traffic is SQL transactions, and (for example) I don't want traffic to reach a server in Data Centre B (DC-B) via DC-B, only for the server to respond via the HSRP primary in DC-A.

As there is no way to change the administrative distance of connected routes, I had considered placing the network in a VRF in each data centre.  I'm not sure whether it is possible to redistribute a connected route from a VRF into the global BGP RIB to achieve this?
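Roughly, the idea would be something like this (VRF name, RD and addresses are placeholders), with the open question being how to get the connected route from the VRF into the global table:

```
ip vrf SERVERS
 rd 65001:212
!
interface Vlan212
 ip vrf forwarding SERVERS
 ip address 10.1.212.2 255.255.255.0
!
router bgp 65001
 address-family ipv4 vrf SERVERS
  redistribute connected
```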

Has anyone come across a similar problem, and does anyone have a more elegant solution?

Thanks for any ideas...



Re: Layer 2/3 Spanned VLANs & Asymmetric Routing

I thought I’d give an update on this:

- I tried the VRF option based on the 'Internet Access from an MPLS VPN Using a Global Routing Table' feature.  The problem was that the next hop in the VRF static route with the global keyword could not be local to the router, i.e. a loopback in the GRT.  I may investigate this option further.
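The kind of route involved is shown below (prefix and next hop are placeholders); the restriction is that the global next hop must be reachable via another device in the GRT, not a local interface such as a loopback:

```
! Static route inside the VRF whose next hop is resolved
! in the global routing table rather than in the VRF
ip route vrf SERVERS 10.1.212.0 255.255.255.0 192.168.0.1 global
```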

- One valid option is to use MHSRP, i.e. different HSRP groups, with servers in DC-A pointing to group 1 (active in DC-A) and servers in DC-B pointing to group 2 (active in DC-B).  This works as long as the servers do not move between sites.  For example, if vMotion were used with VMware, the gateway would need to change from group 1 to group 2 when a server moved, to avoid the asymmetric routing problem.
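On a DC-A switch that might look roughly like this (addresses and priorities are placeholders); the DC-B switches would mirror it with the priorities reversed so that group 2 is active there:

```
interface Vlan212
 ip address 10.1.212.2 255.255.255.0
 ! Group 1 - active in DC-A, used as the gateway by DC-A servers
 standby 1 ip 10.1.212.1
 standby 1 priority 110
 standby 1 preempt
 ! Group 2 - active in DC-B, used as the gateway by DC-B servers
 standby 2 ip 10.1.212.5
 standby 2 priority 90
```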

- My favourite option, which unfortunately didn't work, was to inject two /25 routes from DC-A and advertise the /24 from DC-B.  In theory the longest match should win, but I found the CEF entry for the connected interface was preferred (I am using BVIs in Dynamips for testing at the moment):

Before the BVI was shut down (with a connected interface on the spanned VLAN):

sh ip cef, version 95, epoch 0, cached adjacency
0 packets, 0 bytes
  via, BVI212, 0 dependencies
    next hop, BVI212
    valid cached adjacency

After the BVI was shut down (the two /25 routes from the other DC are used):

sh ip cef, version 70, epoch 0, per-destination sharing
0 packets, 0 bytes
  via, 0 dependencies, recursive
    traffic share 1
    next hop, Ethernet1/1 via
    valid adjacency
  via, 0 dependencies, recursive
    traffic share 1
    next hop, Ethernet1/0 via
    valid adjacency
  0 packets, 0 bytes switched through the prefix
  tmstats: external 0 packets, 0 bytes
           internal 0 packets, 0 bytes

I would imagine CEF behaves the same on Nexus 7000s and Catalyst 6500s, as it is evidently using the ARP/glean adjacency from the connected interface rather than the longer-match routes.
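For reference, the /25 injection I tested was along these lines in DC-A (prefix and AS number are placeholders), while DC-B simply advertised the connected /24:

```
! DC-A: originate two more-specific halves of the spanned /24
ip route 10.1.212.0 255.255.255.128 Null0
ip route 10.1.212.128 255.255.255.128 Null0
!
router bgp 65001
 network 10.1.212.0 mask 255.255.255.128
 network 10.1.212.128 mask 255.255.255.128
```

Remote routers do prefer the /25s, but as the CEF output above shows, the switches with the connected /24 keep forwarding via their own interface adjacency.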

Does anyone use vMotion / geo-clustering / etc. between data centres, and if so, how do you work around this layer 3 problem?
