Cisco Support Community

ASR9000 nV Edge: L2 EOBC - Geographically Separated Cluster Systems

Disclaimer

Cisco continues to recommend back-to-back cabled nV Edge solutions, as this is the most reliable configuration. However, where nV Edge systems must be geographically separated, this document provides recommendations for operating under those conditions. This document does not cover nV Edge deployment itself; for that, please see the ASR9000 XR nV Edge Deployment Guide.

 

 

ASR9000 nV EDGE L2 EOBC


Glossary

  • nV – Network Virtualization
  • nV Edge – Network Virtualization on Edge routers
  • IRL – Inter Rack Links (or Data Links) are used for data forwarding between chassis
  • Control Plane – the hardware and software infrastructure that deals with messaging / message passing across processes on the same or different nodes (RSPs or LCs)
  • EOBC – Ethernet Out-of-Band Channel (a.k.a. ICL or Control Links) – the ports used to establish the control plane extension between chassis
  • Data Plane – the hardware and software infrastructure that deals with forwarding, generating and terminating data packets.
  • UDLD – Uni-Directional Link Detection. An industry-standard protocol used in Ethernet networks for monitoring link forwarding health.
  • FPD – Field Programmable Device (FPGAs, etc., which can be upgraded)
  • Split Brain - A condition where loss of (or degraded) control plane traffic leads to Rack 1 rebooting and coming up as a separate system.

 

Problem Definition

ASR9K nV Edge (cluster) systems are formed by connecting two chassis back-to-back using both control links and data links. However, if necessity requires the chassis to be geographically separated, an L2 or L3 "cloud" can be inserted between the systems and used to extend the EOBC functionality.

NOTE: While there are many ways to "extend" the EOBC via L2 or L3 clouds, this document will only discuss an L2 EoMPLS PW solution (as this is the officially recommended method).

 

Traditional vs. Distance-Separated nV Edge Solutions

  • Figure 1 - Traditional back-to-back fiber-optic cabled nV Edge solution
  • Figure 2 - Distance-separated nV Edge solution

 

 

Cisco's Recommended Distance-Separated nV Edge Solution

Cisco recommends an EoMPLS PW (Ethernet over MPLS Pseudowire) cloud network for distance-separated nV Edge systems.

 

   Figure 3 - EoMPLS PW configuration

 

Why an EoMPLS PW network?

Before answering this question, it is important that the reader understand that both the Control Links and Data Links have minimum operational requirements. That is, the L2 cloud between Rack 0 and Rack 1 must be stable enough to (a) meet packet delay requirements, (b) guarantee packet delivery, and (c) not introduce stray traffic, packets, or noise that could disrupt control plane communication.

It is for these reasons that Cisco recommends the Control Link and Data Link traffic be tunneled from Rack0 to Rack1.

 

Advantages of an EoMPLS PW Network

Using Figure 3 above, consider the following:

  • ASR9K Restrictions:
    • Control Link packets are already L2 packets.
    • RSP SFP+ ports are not currently capable of initiating a Pseudowire, nor do they currently have any QoS support
    • Important! Control Packets should not be switched in the network because of EOBC timing sensitivities.
  • EoMPLS PW Network Solutions:
    • With L2VPN Pseudowire switching:
      • We prevent Control Link and Data Link packets from being manipulated (e.g., encapsulation, processing, etc.) and slowed down or interrupted by extending the Layer 2 virtual private network (L2VPN) pseudowires across a Multiprotocol Label Switching (MPLS) network.
      • More importantly, an L2VPN Pseudowire build means there will be nothing to configure on the nV Edge (cluster) side.

 

nV Edge Link Requirements

This section details the network requirements of both Control Links and Data Links. In other words, these are the conditions that the network (between cluster systems) must meet in order to make the cluster systems "believe" they are connected back-to-back.

 

Message Intervals

These are the message intervals and timeout values that need to be preserved/supported in the L2 cloud:

  • Data Link (IRL)
    • Message Interval: 20 msec
    • Time Out Interval: 100 msec
  • Control Link
    • Message Interval: 50 msec
    • Time Out Interval: 250 msec
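As a rough illustration (not Cisco code), the relationship between the message interval and the timeout can be modeled as a simple keepalive monitor: a link is declared down once no message has arrived within the timeout window, which for both link types equals five message intervals.

```python
# Hypothetical sketch of a keepalive monitor; it illustrates only the
# interval/timeout relationship above -- it is not the nV Edge implementation.

LINK_TIMERS_MS = {
    "data_link_irl": {"interval": 20, "timeout": 100},
    "control_link": {"interval": 50, "timeout": 250},
}

def is_link_down(last_rx_ms: float, now_ms: float, link: str) -> bool:
    """Declare the link down once the timeout window elapses with no message."""
    return (now_ms - last_rx_ms) > LINK_TIMERS_MS[link]["timeout"]

# Both link types tolerate at most 5 consecutive lost messages:
for timers in LINK_TIMERS_MS.values():
    assert timers["timeout"] // timers["interval"] == 5
```

In other words, any L2 cloud that can delay or drop more than five consecutive keepalives on either link type risks destabilizing the cluster.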

 

Control Link Only Requirements

The following requirements apply only to Control Link traffic:

  • Control Plane Traffic Requirements
    • The EOBC connections are the heart of the nV Edge cluster system because they create a "unified" control plane between both chassis.
    • Degradation in control plane traffic beyond the values specified above can lead to the destabilization of the nV system and result in a "split-brain" scenario.

 

  • MTU Requirements
    • A minimum MTU of 1600 is required end-to-end from Rack0 to Rack1. (This is the bare minimum; a value of 2K or higher is recommended.)
    • While most control plane traffic will never require a 1600 MTU, during a system upgrade the control plane traffic plus TFTP boot packets will push the required MTU to 1600.
    • Failure to meet this requirement will result in Rack1 not booting after an image upgrade, because the large TFTP boot packets (with a 1400 MTU) will be dropped and not re-transmitted (TFTP runs over UDP).
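Before bringing the cluster up (and especially before an image upgrade), the end-to-end MTU can be sanity-checked with a do-not-fragment ping across the EOBC path. The example below is a sketch using an address from the sample configuration later in this document; exact ping options vary by IOS XR release. Note that the IOS XR interface "mtu" value includes the L2 header, so size the ping below the interface MTU and allow for MPLS label overhead on the pseudowire path.

```
! Hypothetical spot check from pe0 (17.1.1.2 is taken from the sample
! configuration below); adjust sizes and addresses to your topology.
RP/0/RSP0/CPU0:pe0# ping 17.1.1.2 size 1500 donotfragment count 5
```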

 

  • MAC Address Restriction
    • The following MAC address should not exist or be configured in the L2 EOBC network: 01-00-cc-cc-cd-dd
    • UDLD runs on control plane links to ensure bi-directional forwarding health.
    • Original Problem:
      • In a back-to-back connection, there is no issue using the standard (well-known) UDLD MAC because traffic traverses a single cable.
      • In an L2 EOBC network, the well-known UDLD MAC can get processed (punted) by the intermediate systems and cause the ICLs to flap (which could lead to a split-brain condition)
    • Solution:
      • To ensure this does not happen, the nV system now uses an alternate MAC address (01-00-cc-cc-cd-dd) to forward BPDUs, which is why this MAC address should not be configured or exist anywhere else in the L2 EOBC network
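As an illustrative (non-Cisco) sketch, an operator could screen the MAC addresses planned for the L2 EOBC network against the reserved UDLD address before deployment, regardless of which notation the addresses are written in:

```python
# Hypothetical pre-deployment check; normalizes MAC notation and flags any
# collision with the alternate UDLD MAC reserved by the nV system.

RESERVED_UDLD_MAC = "01-00-cc-cc-cd-dd"

def normalize_mac(mac: str) -> str:
    """Reduce a MAC in any common notation to lowercase hex digits only."""
    return "".join(ch for ch in mac.lower() if ch in "0123456789abcdef")

def find_conflicts(macs: list[str]) -> list[str]:
    """Return the MACs that collide with the reserved UDLD address."""
    reserved = normalize_mac(RESERVED_UDLD_MAC)
    return [m for m in macs if normalize_mac(m) == reserved]
```

For example, `find_conflicts(["0100.cccc.cddd"])` flags the dotted Cisco notation of the same reserved address.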

 

Configuration Considerations

Before deploying Pseudowire connections, note that very specific wiring patterns are required for EOBC (control plane) connections; see Figure 4 below:

 

Figure 4 - Cluster EOBC Physical Wiring Requirements

 

When creating Pseudowire mappings, always ensure that the mappings support the physical wiring requirements (as depicted in Figure 5 below):

 

Figure 5 - Pseudowire Mapping Supporting Physical Wiring Requirements
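One way to double-check a planned mapping against the wiring requirement is to verify that each pw-id is used exactly once per PE, so every local port lands on exactly one remote port. The sketch below is hypothetical; the interface names and pw-ids mirror the sample configuration later in this document, but the actual pairing must come from your wiring plan (Figure 4).

```python
# Hypothetical sanity check of a planned pseudowire mapping: each pw-id must
# appear exactly once on each PE so every port pairs with exactly one peer port.

def paired_ports(pe0_map: dict[str, int], pe1_map: dict[str, int]) -> dict[str, str]:
    """Return pe0-port -> pe1-port pairs; raises if a pw-id is reused."""
    for m in (pe0_map, pe1_map):
        if len(set(m.values())) != len(m):
            raise ValueError("a pw-id is reused on one PE")
    by_pwid = {pwid: port for port, pwid in pe1_map.items()}
    return {port: by_pwid[pwid] for port, pwid in pe0_map.items()}
```

Applied to the sample configuration below, this would show, for example, that pe0's Gi0/0/0/17 (pw-id 2) terminates on pe1's Gi0/0/0/19, reflecting the crossed wiring pattern.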

 

 

 

Sample Configuration

The following serves as a simple configuration example of the recommended EoMPLS PW EOBC L2 network.

 

Sample Topology

The following is a physical representation. For a logical representation, please refer to Figure 3 above.

 

Sample Configuration

 

PE 0

hostname pe0

!

interface Loopback0

 ipv4 address 1.1.1.1 255.255.255.255

!

interface GigabitEthernet0/0/0/16

 l2transport

 !

!

interface GigabitEthernet0/0/0/17

 l2transport

 !

!

interface GigabitEthernet0/0/0/18

 l2transport

 !

!

interface GigabitEthernet0/0/0/19

 l2transport

 !

!

interface TenGigE0/0/1/0

 mtu 1600

 ipv4 address 17.1.1.1 255.255.255.0

!

interface TenGigE0/0/1/1

 mtu 1600

 ipv4 address 21.1.1.1 255.255.255.0

!

interface TenGigE0/0/1/2

 l2transport

 !

!

interface TenGigE0/0/1/3

 l2transport

 !

!

interface TenGigE0/0/2/0

 mtu 1600

 ipv4 address 31.1.1.1 255.255.255.0

!

interface TenGigE0/0/2/1

 mtu 1600

 ipv4 address 41.1.1.1 255.255.255.0

!

router ospf 100

 router-id 1.1.1.1

 area 0

  interface Loopback0

  !

  interface TenGigE0/0/1/0

  !

  interface TenGigE0/0/1/1

  !

  interface TenGigE0/0/2/0

  !

  interface TenGigE0/0/2/1

  !

 !

!

l2vpn

 xconnect group x1

  p2p 1

   interface GigabitEthernet0/0/0/16

   neighbor ipv4 2.2.2.2 pw-id 1

   !

  !

 !

 xconnect group x2

  p2p 1

   interface GigabitEthernet0/0/0/17

   neighbor ipv4 2.2.2.2 pw-id 2

   !

  !

 !

 xconnect group x3

  p2p 1

   interface GigabitEthernet0/0/0/18

   neighbor ipv4 2.2.2.2 pw-id 3

   !

  !

 !

 xconnect group x4

  p2p 1

   interface GigabitEthernet0/0/0/19

   neighbor ipv4 2.2.2.2 pw-id 4

   !

  !

 !

 xconnect group y5

  p2p 1

   interface TenGigE0/0/1/2

   neighbor ipv4 2.2.2.2 pw-id 5

   !

  !

 !

 xconnect group y6

  p2p 1

   interface TenGigE0/0/1/3

   neighbor ipv4 2.2.2.2 pw-id 6

   !

  !

 !

!

mpls ldp

 router-id 1.1.1.1

 interface TenGigE0/0/1/0

 !

 interface TenGigE0/0/1/1

 !

 interface TenGigE0/0/2/0

 !

 interface TenGigE0/0/2/1

 !

!

end

PE 1

hostname pe1

!

interface Loopback0

 ipv4 address 2.2.2.2 255.255.255.255

!

interface GigabitEthernet0/0/0/16

 l2transport

 !

!

interface GigabitEthernet0/0/0/17

 l2transport

 !

!

interface GigabitEthernet0/0/0/18

 l2transport

 !

!

interface GigabitEthernet0/0/0/19

 l2transport

 !

!

interface TenGigE0/0/1/0

 mtu 1600

 ipv4 address 17.1.1.2 255.255.255.0

!

interface TenGigE0/0/1/1

 mtu 1600

 ipv4 address 21.1.1.2 255.255.255.0

!

interface TenGigE0/0/1/2

 l2transport

 !

!

interface TenGigE0/0/1/3

 l2transport

 !

!

interface TenGigE0/0/2/0

 mtu 1600

 ipv4 address 31.1.1.2 255.255.255.0

!

interface TenGigE0/0/2/1

 mtu 1600

 ipv4 address 41.1.1.2 255.255.255.0

!

router ospf 100

 router-id 2.2.2.2

 area 0

  interface Loopback0

  !

  interface TenGigE0/0/1/0

  !

  interface TenGigE0/0/1/1

  !

  interface TenGigE0/0/2/0

  !

  interface TenGigE0/0/2/1

  !

 !

!

l2vpn

 xconnect group x1

  p2p 1

   interface GigabitEthernet0/0/0/16

   neighbor ipv4 1.1.1.1 pw-id 1

   !

  !

 !

 xconnect group x2

  p2p 1

   interface GigabitEthernet0/0/0/19

   neighbor ipv4 1.1.1.1 pw-id 2

   !

  !

 !

 xconnect group x3

  p2p 1

   interface GigabitEthernet0/0/0/18

   neighbor ipv4 1.1.1.1 pw-id 3

   !

  !

 !

 xconnect group x4

  p2p 1

   interface GigabitEthernet0/0/0/17

   neighbor ipv4 1.1.1.1 pw-id 4

   !

  !

 !

 xconnect group y5

  p2p 1

   interface TenGigE0/0/1/2

   neighbor ipv4 1.1.1.1 pw-id 5

   !

  !

 !

 xconnect group y6

  p2p 1

   interface TenGigE0/0/1/3

   neighbor ipv4 1.1.1.1 pw-id 6

   !

  !

 !

!

mpls ldp

 router-id 2.2.2.2

 interface TenGigE0/0/1/0

 !

 interface TenGigE0/0/1/1

 !

 interface TenGigE0/0/2/0

 !

 interface TenGigE0/0/2/1

 !

!

end
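Once both PEs are configured, pseudowire and LDP state can be checked with standard IOS XR show commands (output formats vary by release). Each xconnect should report an up state on both segments before the cluster control links are brought into service:

```
RP/0/RSP0/CPU0:pe0# show l2vpn xconnect
RP/0/RSP0/CPU0:pe0# show l2vpn xconnect detail
RP/0/RSP0/CPU0:pe0# show mpls ldp neighbor brief
```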

 

 

 

Troubleshooting section coming soon ...
