Cisco continues to recommend back-to-back cabled nV Edge solutions, as this is the most reliable configuration. However, for cases where nV Edge systems must be geographically separated, this document provides recommendations for such deployments. Note that this document does not cover nV Edge deployment itself; for that, please see the ASR9000 XR nV Edge Deployment Guide.
ASR9000 nV Edge L2 EOBC
nV – Network Virtualization
nV Edge – Network Virtualization on Edge routers
IRL – Inter Rack Links (or Data Links) are used for data forwarding between chassis
Control Plane – the hardware and software infrastructure that deals with messaging / message passing across processes on the same or different nodes (RSPs or LCs)
EOBC – Ethernet Out of Band Channel (a.k.a. ICL or Control Links) – the ports used to establish the control plane extension between chassis
Data Plane – the hardware and software infrastructure that deals with forwarding, generating and terminating data packets.
UDLD – Unidirectional Link Detection protocol. An industry-standard protocol used in Ethernet networks for monitoring link forwarding health.
FPD – Field Programmable Device (FPGAs, etc., which can be upgraded)
Split Brain – A condition where loss of (or degraded) control plane traffic leads to Rack 1 rebooting and coming up as a separate system.
ASR9K nV Edge (cluster) systems are formed by connecting two chassis back-to-back using both control links and data links. However, when the chassis must be geographically separated, an L2 or L3 "cloud" can be inserted between the systems and used to extend the EOBC functionality.
NOTE: While there are many ways to "extend" the EOBC via L2 or L3 clouds, this document only discusses an L2 EoMPLS PW solution (as this is the officially recommended method).
Traditional vs. Distance-Separated nV Edge Solutions
Figure 1 - Traditional back-to-back fiber-optic cabled nV Edge solution
Cisco recommends an EoMPLS PW (Ethernet over MPLS Pseudowire) cloud network for distance-separated nV Edge systems.
Figure 3 - EoMPLS PW configuration
Why an EoMPLS PW network?
Before answering this question, it is important that the reader understand that both the Control Links and Data Links have minimum operational requirements. This means the L2 cloud between Rack 0 and Rack 1 must be stable enough to (a) meet packet delay requirements, (b) guarantee packet delivery, and (c) not introduce stray traffic, packets, or noise that could disrupt control plane communication.
It is for these reasons that Cisco recommends that the Control Link and Data Link traffic be tunneled from Rack 0 to Rack 1.
Advantages of an EoMPLS PW Network
Using Figure 3 above, consider the following:
Control Link packets are already L2 packets.
RSP SFP+ ports are not capable of initiating a Pseudowire, and they currently have no QoS support
Important! Control Packets should not be switched in the network because of EOBC timing sensitivities.
EoMPLS PW Network Solutions:
With L2VPN Pseudowire switching:
By extending Layer 2 virtual private network (L2VPN) pseudowires across a Multiprotocol Label Switching (MPLS) network, we prevent Control Link and Data Link packets from being manipulated (e.g., re-encapsulated or processed by intermediate devices) and from being slowed down or interrupted.
More importantly, an L2VPN Pseudowire build means there will be nothing to configure on the nV Edge (cluster) side.
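To make this concrete, the following is a minimal IOS XR sketch of what the EoMPLS PW cross-connect could look like on the PE router facing Rack 0 (mirrored on the PE facing Rack 1). All names and values here (GigabitEthernet0/0/0/0, TenGigE0/0/0/1, the xconnect group and p2p names, neighbor 10.255.0.2, and pw-id 100) are hypothetical placeholders, not values from this document, and the sketch assumes an MPLS core with LDP reachability already in place between the two PEs:

! Attachment circuit: the port cabled to the RSP EOBC (control link) SFP+ port.
! Port-mode l2transport keeps the circuit fully transparent.
interface GigabitEthernet0/0/0/0
 description AC toward Rack 0 EOBC port
 l2transport
!
! LDP on the core-facing interface to signal the pseudowire.
mpls ldp
 interface TenGigE0/0/0/1
!
! Point-to-point cross-connect: stitch the AC to a PW toward the remote PE.
l2vpn
 xconnect group NV_EOBC
  p2p EOBC_PW
   interface GigabitEthernet0/0/0/0
   neighbor 10.255.0.2 pw-id 100

Because the pseudowire is port-based and transparent, the cluster racks see what looks like a direct cable, which is why nothing needs to be configured on the nV Edge side.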
nV Edge Link Requirements
This section details the network requirements of both Control Links and Data Links. In other words, these are the conditions that the network (between cluster systems) must meet in order to make the cluster systems "believe" they are connected back-to-back.
These are the message intervals and timeout values that need to be preserved/supported in the L2 cloud:
Data Link (IRL)
Message Interval: 20 msec
Time Out Interval: 100 msec
Control Link (EOBC)
Message Interval: 50 msec
Time Out Interval: 250 msec
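One way to sanity-check these values (this is an inference from the numbers above, not a separately documented constant) is to note that each timeout equals five message intervals:

Data Link (IRL):  100 msec / 20 msec = 5 message intervals
Control Link:     250 msec / 50 msec = 5 message intervals

So, roughly, delaying or dropping more than four consecutive keepalive messages on either link type will exceed the timeout and cause the link to be declared down.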
Control Link Only Requirements
The following requirements apply only to Control Link traffic:
Control Plane Traffic Requirements
The EOBC connections are the heart of the nV Edge cluster system because they create a "unified" control plane between both chassis.
Degradation in control plane traffic beyond the values specified above can lead to the destabilization of the nV system and result in a "split-brain" scenario.
A minimum MTU of 1600 bytes is required end-to-end from Rack 0 to Rack 1. (This is the bare minimum; an MTU of 2000 or higher is recommended.)
While most control plane traffic will never require a 1600-byte MTU, during a system upgrade the combination of control plane traffic and TFTP boot packets pushes the required MTU up to 1600.
Failure to meet this requirement will result in Rack 1 not booting after an image upgrade, because the large TFTP boot packets (around 1400 bytes) will be dropped and never retransmitted (TFTP runs over UDP).
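As an illustrative sketch only (the interface names are placeholders, and any MTU value beyond the 1600-byte floor is an assumption; exact values depend on your core encapsulation overhead), the MTU could be raised along the path in IOS XR as follows. Note that on IOS XR a physical Ethernet interface MTU includes the 14-byte L2 header:

! Attachment circuit toward the RSP EOBC port
interface GigabitEthernet0/0/0/0
 mtu 2014
!
! Core-facing link: allow extra headroom for MPLS labels (4 bytes each)
interface TenGigE0/0/0/1
 mtu 2030

Also verify that the signaled pseudowire MTU matches on both PE endpoints; an MTU mismatch will keep an LDP-signaled pseudowire from coming up.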
MAC Address Restriction
The following MAC address should not exist or be configured in the L2 EOBC network: 01-00-cc-cc-cd-dd
UDLD runs on the control plane links to ensure bidirectional forwarding health.
In a back-to-back connection, there is no issue with using the standard (well-known) UDLD MAC address because the traffic traverses a single cable.
In an L2 EOBC network, however, the well-known UDLD MAC can get processed (punted) by intermediate systems and cause the ICLs to flap (which could lead to a split-brain condition).
To ensure this does not happen, the nV system now uses an alternate MAC address (01-00-cc-cc-cd-dd) for its UDLD BPDUs, which is why this MAC address must not be configured or exist anywhere else in the L2 EOBC network.
Before employing Pseudowire connections, note that very specific wiring patterns are required for the EOBC (control plane) connections; see Figure 4 below: