Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about how to plan, design, and implement Cisco Overlay Transport Virtualization (OTV) in your Data Center Network with Cisco experts Anees Mohamed Abdulla and Pranav Doshi.
Anees Mohamed Abdulla is a network consulting engineer for Cisco Advanced Services, where he has been delivering plan, design, and implementation services for enterprise-class data center networks with leading technologies such as vPC, FabricPath, and OTV. He has 10 years of experience in the enterprise data center networking area and has held various roles within Cisco, such as LAN switching content engineer and LAN switching TAC engineer. He holds a bachelor's degree in electronics and communications and CCIE certification No. 18764 in Routing and Switching.
Pranav Doshi is a network consulting engineer for Cisco Advanced Services, where he has been delivering plan, design, and implementation services for enterprise-class data center networks with leading technologies such as vPC, FabricPath, and OTV. Pranav has experience in the enterprise data center networking area and has held various roles within Cisco, such as LAN switching TAC engineer and now network consulting engineer. He holds a bachelor's degree in electronics and communications and a master's degree in electrical engineering from the University of Southern California.
Remember to use the rating system to let Anees and Pranav know if you have received an adequate response.
Because of the volume expected during this event, Anees and Pranav might not be able to answer each question. Remember that you can continue the conversation on the Data Center sub-community forum shortly after the event. This event lasts through August 23, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.
With technologies like L2TP available in Cisco routers, what are the advantages of using OTV, considering that we have to pay for the OTV licenses? Thanks!
What are the reasons for not using OTV for L2 extension? If we have the right platform, Nexus 7000 or ASR 1000, is there any reason today to use an L2 extension technology other than OTV?
I don't see any reason why OTV would not be the Layer 2 extension solution if you have the required hardware. The critical fault-isolation capability that OTV provides is a key element in its success in the DCI domain.
OTV does require the MTU of the Layer 3 interfaces between data centers to be increased by 42 bytes to accommodate the OTV encapsulation overhead. A similar MTU increase is needed for EoMPLS and VPLS as well. I don't see this being a major issue for most customers.
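As a hedged illustration of the MTU adjustment (the interface name and addressing are assumptions), the join interface and the transport-facing Layer 3 links are typically raised to carry a full 1500-byte payload plus the 42-byte OTV overhead:

```
! Hypothetical sketch (NX-OS): raise MTU on the OTV join interface
interface Ethernet1/1
  description OTV join interface toward DCI transport
  mtu 1542            ! 1500-byte frames + 42 bytes of OTV encapsulation
  ip address 10.1.1.1/30
  no shutdown
```

The same MTU value must be supported end to end across the transport, since the Nexus 7000 sets the DF bit on OTV packets.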
Thanks, Anees! So OTV is the preferred option if the hardware meets the requirements.
About MTU: other Layer 2 extension technologies support fragmentation, but OTV doesn't. What is the solution if the transport doesn't support jumbo MTU?
There are two options.
Transport supports jumbo MTU : Deploy OTV with Nexus 7000 or ASR 1K
Transport does not support jumbo MTU : Deploy OTV with ASR 1K
Can you elaborate a bit more on that? Why should I use the ASR 1000 if the transport does not support jumbo MTU? Does the ASR 1000 support OTV fragmentation?
It is nothing specific to OTV. The ASR 1000 can do packet fragmentation and reassembly. On the Nexus 7000, however, fragmentation occurs in software, and we don't want that to happen because of the performance impact. Hence, when you run OTV on the Nexus 7000, the OTV packets are marked with the DF bit set.
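For illustration, on IOS XE the ASR 1000 can be told to allow fragmentation of OTV-encapsulated packets on the join interface (the interface name here is an assumption, a sketch only):

```
! Hypothetical sketch (IOS XE / ASR 1000): permit OTV packet fragmentation
! when the transport cannot carry jumbo frames
otv fragmentation join-interface GigabitEthernet0/0/0
!
interface Overlay1
  otv join-interface GigabitEthernet0/0/0
```

With this enabled, the DF-bit behavior described above for the Nexus 7000 does not constrain the ASR 1000 deployment.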
Thanks, Anees! I thought the DF bit was set on OTV packets? Can you share the document that shows fragmentation is supported on the ASR 1000?
All those Layer 2 extension technologies require STP to be extended between data centers if you need multiple paths between data centers. OTV does not extend STP; rather, it has its own mechanism (AED election) to avoid loops when multiple paths are enabled. This means an STP control-plane issue in one data center is not carried to the other data center.
OTV natively suppresses unknown unicast flooding across the OTV overlay. Unknown unicast flooding is a painful problem in a Layer 2 network, and its root cause is difficult to identify if you don't have a proper network monitoring tool.
It has ARP optimization, which eliminates the flooding of ARP packets across the data center interconnect by responding locally from cached ARP entries. One of the common issues I have seen in data centers is a server or device sending continuous ARP packets that hit the control plane at the aggregation layer, which in turn causes network connectivity issues.
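As a sketch for reference (behavior and defaults can vary by NX-OS release), ARP/ND suppression is controlled on the overlay interface, and the locally cached entries can be inspected:

```
! Hypothetical sketch (NX-OS): ARP optimization on the OTV overlay
interface Overlay1
  otv suppress-arp-nd      ! answer ARP requests locally from the cache
!
show otv arp-nd-cache      ! view the locally cached ARP entries
```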
The three points above demonstrate the Layer 2 failure-domain isolation between data centers. If you have redundant data centers with Layer 2 extended without OTV, a Layer 2 issue of the kind described above that happens in one data center carries the same failure into the second data center, which raises the question: what is the point of having two data centers if we cannot isolate the failure domain?
OTV natively supports HSRP localization with a few lines of configuration. This is a very important requirement in building active/active data centers.
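As a hedged illustration (VLAN ranges and ACL names are assumptions), the commonly documented approach filters HSRP hellos at the OTV edge with a VACL so that each site keeps its own local active gateway:

```
! Hypothetical sketch (NX-OS, OTV VDC): stop HSRP hellos crossing the overlay
ip access-list HSRP_IP
  10 permit udp any 224.0.0.2/32 eq 1985      ! HSRPv1 hellos
  20 permit udp any 224.0.0.102/32 eq 1985    ! HSRPv2 hellos
ip access-list ALL_IPs
  10 permit ip any any
vlan access-map HSRP_Localization 10
  match ip address HSRP_IP
  action drop
vlan access-map HSRP_Localization 20
  match ip address ALL_IPs
  action forward
vlan filter HSRP_Localization vlan-list 100-110
```

With the hellos dropped at each site, the HSRP groups in the two data centers operate independently, and hosts always use the local gateway.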
Even though your question is related to L2TP, OTV deserves the comparison with VPLS, and that comparison is also applicable to L2TP. The link below explains it in detail...
Hi Anees and Pranav,
I have some questions about OTV,
Q.1 Is this a protocol?
Q.2 Is it only used in data centers?
OTV, which stands for Overlay Transport Virtualization, is a LAN extension technology. OTV is IP-based functionality that has been designed to provide Layer 2 extension capabilities over any transport infrastructure between data centers.
OTV provides an overlay that enables Layer 2 connectivity between separate Layer 2 domains while keeping these domains independent and preserving the fault-isolation, resiliency, and load-balancing benefits of an IP-based interconnection.
Please find a short video introducing OTV as a technology; it's a 4-minute video that will give you a good introduction to the OTV feature.
Also here is a more detailed white paper which goes through the introduction of OTV as a technology and its deployment scenarios:
With OTV hardening in 5.2 and above, edge devices in the same site appear to be able to form overlay adjacencies with each other. This appears to be beneficial if the site adjacency fails for some reason. Typically edge devices are connected in a point to point fashion with another data center for DCI, however to form an overlay adjacency to an edge device within the same site seems to require a vpls or switched layer 2 transport between all devices in all data centers, otherwise how would an overlay adjacency form between two devices in the same site? Is OTV supported over VPLS? If so, are there any additional spanning tree concerns when doing so? Are there docs that cover this scenario?
Can I Mix a Cisco Nexus 7000 and a Cisco ASR 1000 for OTV?
Mixing the two types of devices is not supported at this time when the devices will be placed within the same site. However, using Cisco Nexus 7000s in one site and Cisco ASR 1000s at another site for OTV is fully supported. For this scenario, please keep the separate scalability numbers for the two different devices in mind, because you will have to account for the lowest common denominator.
Please check the link below; I believe it will answer your question.
I took a diagram and paragraph from the above URL and pasted below:
The use of VDCs must be introduced in this case, even if the SVIs are not defined on the DCI layer devices. The basic requirement for OTV to be functional is in fact that every OTV control packet originated by the Join interface of an OTV edge device must be received on the Join interface of all the other edge devices being part of the same OTV overlay.
Since OTV will be on a dedicated VDC and the DCI circuits will be on another router or VDC, the OTV adjacency will be formed between all OTV edge devices.
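For illustration, a minimal OTV edge configuration in the dedicated OTV VDC might look like the following, assuming a multicast-enabled transport (interface names, site identifier, and multicast groups are assumptions, not values from the document):

```
! Hypothetical sketch (NX-OS, OTV VDC): minimal OTV edge configuration
feature otv
otv site-identifier 0x1
otv site-vlan 99
!
interface Ethernet1/1
  description OTV join interface
  mtu 1542
  ip address 10.1.1.1/30
  ip igmp version 3      ! required on the join interface for multicast mode
  no shutdown
!
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown
```

Each edge device in the overlay would use the same control and data groups but its own site identifier per data center.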
Please let me know if this answers your question..
Thank you for the reply, Mohamed.
In the document you mention there is a point-to-point design with a collapsed aggregation core (Figure 1-46). Is there a configuration example for that design anywhere?
I am unable to find any public-facing document with the specific configuration you are looking for. But please give me a day; I will publish the config from one of my deployments.
I have published this document.
Please let me know if you have any questions.
Hi Expert Team,
I have the following query. We have two data centers, each with two Nexus 7K switches. A production VDC and an OTV VDC are configured on both switches in both data centers. Multiple VRFs are configured under the production VDC in both data centers. There are two point-to-point fiber links between the two data centers that are part of VRF-A under the production VDC; one link has 10G bandwidth and the second has 1G. We have an L3 join interface from the OTV VDC to the production VDC in an internal VRF. We need to run a dynamic routing protocol between the data centers for link failover. Now the question is: do we need multiple join interfaces per VRF at both data centers, or just a single join interface, and in which VRF should it ideally be configured? Also, do we need port channeling between the OTV VDCs in the same data center? Please provide answers to these queries; that would be a great help. Thanks.
Answer to the first part of your question :
No, you do not need multiple join interfaces. There are two ways you can achieve what you are trying to :
1. Extend the internal VRF between the two data centers and run a separate instance of the dynamic routing protocol for that VRF as well.
2. Have the join interface in the same VRF as the VRF-A.
Answer to the second part of your question :
No, you do not need port-channel between the two OTV-VDCs.
Hope this helps.
Thanks for the prompt reply. Can you please share a config example of a multi-VRF and dual-homed OTV configuration?
I have a doubt about how the dynamic routing protocol adjacency would form on the join interface in a multi-VRF config.
Please review this document and let me know if that answers your question.
With the advent of VXLAN, and considering that it can be configured entirely at the virtual layer (no reconfiguration required on the physical networking equipment as long as the destinations are pingable), do you see a real requirement for OTV in a virtual DC1 and DC2 environment?
You are correct. If you use virtual data centers with VXLAN and you don't have VLANs extended to the physical network, then you don't have a requirement for OTV.
We are implementing OTV between two DCs, but the OTV adjacency doesn't come up.
In one DC there are Nexus 7Ks with SUP1, and in the second DC there are Nexus 7Ks with SUP2.
Is it possible that the problem is caused by OTV running between devices with different supervisors?
OTV should come up even if the Nexus at one end is running SUP1 and the other is running SUP2. If you are using OTV in multicast mode, here is something that could help in troubleshooting:
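As a starting point, a few NX-OS show commands commonly used to verify OTV adjacency in multicast mode (the overlay number and interface are assumptions for the sketch):

```
! Hypothetical first-pass verification on each OTV edge device
show otv overlay 1         ! overlay state, join interface, control/data groups
show otv adjacency         ! IS-IS adjacencies formed across the overlay
show otv site              ! site VLAN and local site adjacency status
show ip igmp interface ethernet 1/1   ! join interface should run IGMPv3
```

Mismatched control groups, a blocked IGMP join on the transport, or an MTU problem on the path are typical causes of an adjacency that never comes up.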
I would suggest revisiting the configs or reaching out to Cisco TAC to troubleshoot this in detail.