This document describes the "BFD over Logical Bundle" (BLB) implementation on NCS5500 platforms. BLB has been supported on these platforms since IOS XR 6.3.2. On older platforms (like ASR9K or CRS) running IOS XR, BLB has been supported since 4.3.0. Unless noted otherwise, the details in this document apply across all platforms running IOS XR.
There is already plenty of documentation about the XR implementation of BLB, and of BFD in general; this document draws on that material while adding new information specific to the NCS5500 implementation.
This is also a living document; it will be updated as new information becomes available.
In the context of routing, the purpose of BFD is to detect communication failure between two routers faster than routing protocol detection timers allow. BFD detects the failure by monitoring incoming BFD control packets from the neighbor router. If a number of packets are lost in transit for whatever reason and are therefore not received by the monitoring router, the monitoring router brings down the routing session to that neighbor.
Keep in mind that BFD will bring down the routing session, but it is up to the routing protocols to bring the routing session back up (i.e. BFD is only responsible for bringing routing sessions down, not for bringing them back up). So the following scenario is possible:
BFD misses BFD control packets from a specific neighbor.
The BFD session goes down; the down BFD session brings down the routing session (say, OSPF) to that neighbor.
After a while, OSPF for some reason manages to recover on its own (maybe the problem affects only BFD packets and not OSPF packets).
The OSPF session to the neighbor recovers.
But the BFD session will still be down since it still experiences missing BFD packets. So now we have OSPF up and BFD down. This outcome is counter-intuitive but expected.
BLB is a BFD implementation on bundle interfaces. Since a bundle interface has multiple member links, a somewhat more complex BFD implementation is needed when BFD runs on a bundle interface compared with when it runs on a regular physical interface.
BFD implementations on bundle interfaces: BVLAN vs. BoB vs. BLB
There are three different BFD over bundle interface implementations on the IOS XR platform:
BVLAN ("BFD over VLAN over bundle")
Supported since IOS-XR 3.3, withdrawn, and then supported again in 3.8.2. It has now reached end of development and is no longer advised for new deployments. The BFD session can run only on VLAN subinterfaces (e.g. be1.1), not on the main bundle interface (e.g. be1).
BoB ("BFD over Bundle")
Also known as:
Supported since IOS-XR 4.0.1 for ASR9K.
Each bundle member link runs its own BFD session.
In the figure above, there will be 4 BFD sessions running in total (1 session per member link).
The BFD client here is bundlemgr, i.e. a down BFD session on a specific member link can potentially bring down the whole bundle interface (say, when the down member link causes the number of available links to fall below the required minimum). A down bundle interface will in turn bring down the routing session.
BLB ("BFD over Logical Bundle")
Also known as:
Supported since IOS-XR 4.3.0 for ASR9K and since 6.3.2 for NCS5500. Replaces the BVLAN implementation. Does not support echo mode (since BLB relies on the BFD multipath implementation). The BFD session can run on the main bundle interface (e.g. be1) as well as on a VLAN subinterface (e.g. be1.1). On ASR9K, BoB and BLB coexistence can be configured via the "bfd bundle coexistence bob-blb <>" config, but this is not yet supported on NCS5500.
On NCS5500 platforms, BFD is hardware offloaded, meaning processing of the BFD packets is mostly done in the LC NPU. The LC CPU processes BFD packets only during the BFD session initialization. No BFD packets are ever sent to the RP. This differs from the default operation on the ASR9K platform, where the RP and LC work together to support BFD sessions.
The user configures a specific LC to host the BFD sessions, and it does not need to be the same LC on which the bundle member links reside. For example, the bundle member links can be on LC slots 2 and 3 while the BFD sessions are actually hosted by the LC in slot 5; see the sketch below.
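The host LC is designated with the BFD multipath configuration. A minimal sketch is shown below; the location value (0/5/CPU0) is only an illustration and should be replaced by an LC actually present in the chassis:

bfd
 multipath include location 0/5/CPU0
!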
This is different from the way BFD on non-bundle interfaces works. For non-bundle BFD, the BFD sessions are always hosted on the LC where the port resides. For example, if we configure OSPF BFD on interface Hu0/6/0/32.147, that BFD session will always be hosted by the LC in slot 6.
In the case of BLB, the host LC will:
- Send BFD packets into the bundle by querying the FIB for the list of next hops for the BFD session destination address. This is done according to the load-balance algorithm, so different sessions may use different member links of the bundle.
- Receive BFD packets from the bundle via an internal path.
At any time, Tx packets for a particular BFD session are sent on a single bundle member link. Rx packets, on the other hand, can be received on any bundle member link.
When the LC NPU does not receive BFD packets in a timely manner, it generates a protection packet towards the LC CPU. The LC CPU then brings the BFD session down and notifies the routing protocol clients.
As on other XR platforms, BFD packets use UDP with a destination port of 3784. The source port may differ, though.
Here's an example of BLB operation:
In the figure above, be1 has 2 member links: link1 on LC1 and link2 on LC2. LC1 is configured as the BFD host LC. We configure 4 VLAN subinterfaces: be1.1, be1.2, be1.3, and be1.4.
BFD session X is running for be1.1 (say, to serve OSPF on be1.1). BFD session Y is running for be1.2 (say, to serve ISIS on be1.2).
Based on the load-balance algorithm for the next hops of the BFD session destination address:
- BFD packets for BFD session X will be transmitted on link1
- BFD packets for BFD session Y will be transmitted on link2
Incoming BFD packets for any session can be received on any link (link1 or link2).
The BFD clients are the routing protocols (single-hop BGP, ISIS, OSPF, and static routes as of IOS-XR 6.3.2), i.e. a down BFD session brings down the routing session. Detection of a physical bundle member link failure is done via ifmgr and/or LACP informing bundlemgr, which has nothing to do with BFD.
In case of a failure of the member link that happens to carry a specific BLB session, bundlemgr updates the load-balance tables and the BFD packets are transmitted on a different member link, which means a member link failure will NOT bring down the BLB session.
In the BLB implementation, the only time the bundle brings down BLB sessions is when the whole bundle goes down, because BLB then cannot transmit BFD packets on any bundle member link.
For BoB, there is no need for BFD configuration under the routing protocol, since the routing protocol is NOT a client of BoB. BoB is configured on the main bundle interface (i.e. be1), NOT on a VLAN subinterface (i.e. be1.1), as sketched below. Only ietf mode is supported on the NCS5500 platform.
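For reference, a minimal BoB configuration sketch is shown below; the bundle interface number, destination address, and timer values are placeholders, not taken from the test setup later in this document:

interface Bundle-Ether1
 bfd mode ietf
 bfd address-family ipv4 fast-detect
 bfd address-family ipv4 destination 10.1.1.2
 bfd address-family ipv4 minimum-interval 300
 bfd address-family ipv4 multiplier 3
!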
For BLB, configure BFD under each desired routing protocol (exactly the same as with BVLAN). Also configure multipath capability under BFD, since BFD needs to be able to use multiple paths (i.e. bundle member links) to reach the BFD neighbor; see the sketch below.
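As an illustration, a BLB setup with OSPF as the client could look like the sketch below. The process name, subinterface, timers, and host LC location are assumptions for the example only:

router ospf 1
 area 0
  interface Bundle-Ether1.1
   bfd fast-detect
   bfd minimum-interval 300
   bfd multiplier 3
  !
 !
!
bfd
 multipath include location 0/5/CPU0
!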
There is no specific algorithm to pick the host LC for a particular BLB session. At any point in time, any configured LC that has sufficient resources (in terms of PPS, etc.) can host a new BLB session. Whenever a host LC can no longer support the session type and PPS, new BLB sessions are created on the next LC in the list. Whenever a host LC restarts, its hosted BLB sessions are brought down and recreated on the next host LC in the list.
Supported Scale and Timers
How many BFD sessions can run at one time depends on multiple factors, as follows:
How many clients are served by the BFD sessions. The fewer BFD clients to support, the more BFD sessions can be run. For example, we can run more BFD sessions if they serve only OSPF as a client, instead of OSPF, ISIS, BGP, and static routes all together.
How aggressive the BFD timers are. The less aggressive the timers are, the more BFD sessions can be run. For example, we can run more BFD sessions if they are configured with 300ms*3 timers instead of 150ms*3.
On NCS5500 platforms, the supported scale is as follows:
Quick note about LC:
Modular NCS-5500 platforms like the NCS-5508 support multiple LCs, while "pizza box" platforms like the NCS-5501 are considered a single LC as a whole (LC 0/0/CPU0).
During testing, each BLB session had OSPF, ISIS, and BGP as clients.
On NCS5500 platforms, the recommended timer is as follows:
A minimum interval of 300 ms with multiplier 3. Values more aggressive (i.e. lower) than 300 ms can be configured but are not advised.
"show bfd summary" command will give you the info about supported BFD PPS and sessions per chassis.
BLB and NSR
An RP switchover will not tear down existing BLB sessions when NSR is configured under each desired routing protocol.
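For example, NSR for OSPF and ISIS can be enabled with a configuration along these lines; the process and instance names are placeholders:

router ospf 1
 nsr
!
router isis lab
 nsr
!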
As on ASR9K, only BFD async mode is supported for BLB; echo mode is not supported. In fact, echo mode is not supported at all on NCS5500 as of the 6.3.2 release.
Specific to NCS5500, the following clients are not supported:
IPv6 (e.g. OSPFv3).
MPLS protocols like RSVP-TE and LDP.
Since BFD processing is hardware offloaded, BFD packet counters will not increment when we issue certain show bfd commands like "show bfd session detail". This is expected, since the regular BFD CLI commands derive the counters from the LC CPU, not from the LC NPU.
NCS-5508 "potat" | | |Bundle-Ether2.1 : 1 BLB session with OSPF and static route as client |Bundle-Ether2.2 : 1 BLB session with ISIS and BGP as client | | NCS-5501 "birin"
Router configurations (only relevant config is shown)
RP/0/RP0/CPU0:potat#sh bfd session
Interface           Dest Addr       Local det time(int*mult)         State
                                    Echo             Async                      H/W      NPU
------------------- --------------- ---------------- ---------------- ----------
BE2.1               126.96.36.199   0s(0s*0)         900ms(300ms*3)   UP         Yes      0/6/CPU0
BE2.2               188.8.131.52    0s(0s*0)         900ms(300ms*3)   UP         Yes      0/6/CPU0
RP/0/RP0/CPU0:potat#sh bfd session detail interface be2.1
I/f: Bundle-Ether2.1, Location: 0/6/CPU0
Dest: 184.108.40.206
Src: 220.127.116.11
 State: UP for 0d:21h:35m:54s, number of times UP: 1
 Session type: SW/V4/SH/BL
Received parameters:
 Version: 1, desired tx interval: 300 ms, required rx interval: 300 ms
 Required echo rx interval: 0 ms, multiplier: 3, diag: None
 My discr: 12584150, your discr: 845, state UP, D/F/P/C/A: 0/0/0/1/0
Transmitted parameters:
 Version: 1, desired tx interval: 300 ms, required rx interval: 300 ms
 Required echo rx interval: 0 ms, multiplier: 3, diag: None
 My discr: 845, your discr: 12584150, state UP, D/F/P/C/A: 0/1/0/1/0
Timer Values:
 Local negotiated async tx interval: 300 ms
 Remote negotiated async tx interval: 300 ms
 Desired echo tx interval: 0 s, local negotiated echo tx interval: 0 ms
 Echo detection time: 0 ms(0 ms*3), async detection time: 900 ms(300 ms*3)
Label:
 Internal label: 64119/0xfa77
Local Stats:
 Intervals between async packets:
   Tx: Number of intervals=3, min=160 ms, max=726 ms, avg=385 ms
       Last packet transmitted 77754 s ago
   Rx: Number of intervals=4, min=100 ms, max=270 ms, avg=183 ms
       Last packet received 77753 s ago
 Intervals between echo packets:
   Tx: Number of intervals=0, min=0 s, max=0 s, avg=0 s
       Last packet transmitted 0 s ago
   Rx: Number of intervals=0, min=0 s, max=0 s, avg=0 s
       Last packet received 0 s ago
 Latency of echo packets (time between tx and rx):
   Number of packets: 0, min=0 ms, max=0 ms, avg=0 ms
MP download state: BFD_MP_DOWNLOAD_ACK
State change time: Dec 14 18:38:06.721
Session owner information:
                            Desired               Adjusted
  Client               Interval   Multiplier Interval   Multiplier
  -------------------- --------------------- ---------------------
  ospf-cybi            300 ms     3          300 ms     3
  ipv4_static          300 ms     3          300 ms     3
H/W Offload Info:
 H/W Offload capability : Y, Hosted NPU : 0/6/CPU0
 Async Offloaded : Y, Echo Offloaded : N
 Async rx/tx : 5/4
RP/0/RP0/CPU0:potat#sh bfd session detail destination 18.104.22.168
I/f: Bundle-Ether2.2, Location: 0/6/CPU0
Dest: 22.214.171.124
Src: 126.96.36.199
 State: UP for 0d:21h:39m:36s, number of times UP: 1
 Session type: SW/V4/SH/BL
Received parameters:
 Version: 1, desired tx interval: 300 ms, required rx interval: 300 ms
 Required echo rx interval: 0 ms, multiplier: 3, diag: None
 My discr: 12584129, your discr: 824, state UP, D/F/P/C/A: 0/0/0/1/0
Transmitted parameters:
 Version: 1, desired tx interval: 300 ms, required rx interval: 300 ms
 Required echo rx interval: 0 ms, multiplier: 3, diag: None
 My discr: 824, your discr: 12584129, state UP, D/F/P/C/A: 0/1/0/1/0
Timer Values:
 Local negotiated async tx interval: 300 ms
 Remote negotiated async tx interval: 300 ms
 Desired echo tx interval: 0 s, local negotiated echo tx interval: 0 ms
 Echo detection time: 0 ms(0 ms*3), async detection time: 900 ms(300 ms*3)
Label:
 Internal label: 64098/0xfa62
Local Stats:
 Intervals between async packets:
   Tx: Number of intervals=3, min=160 ms, max=616 ms, avg=383 ms
       Last packet transmitted 77975 s ago
   Rx: Number of intervals=4, min=100 ms, max=374 ms, avg=209 ms
       Last packet received 77975 s ago
 Intervals between echo packets:
   Tx: Number of intervals=0, min=0 s, max=0 s, avg=0 s
       Last packet transmitted 0 s ago
   Rx: Number of intervals=0, min=0 s, max=0 s, avg=0 s
       Last packet received 0 s ago
 Latency of echo packets (time between tx and rx):
   Number of packets: 0, min=0 ms, max=0 ms, avg=0 ms
MP download state: BFD_MP_DOWNLOAD_ACK
State change time: Dec 14 18:38:06.721
Session owner information:
                            Desired               Adjusted
  Client               Interval   Multiplier Interval   Multiplier
  -------------------- --------------------- ---------------------
  isis-cybi            300 ms     3          300 ms     3
  bgp-default          300 ms     3          300 ms     3
H/W Offload Info:
 H/W Offload capability : Y, Hosted NPU : 0/6/CPU0
 Async Offloaded : Y, Echo Offloaded : N
 Async rx/tx : 5/4
On Windows: The csm.ini file resides in the c:\Users\<username> or c:\Documents and Settings\<username> directory
On UNIX/MAC: The .csmrc file resides in the user home directory (i.e. cd ~)
NOTE: CSM v2.0 (Java client application) is an End-of-Life product. There is no planned development. Customers are advised to migrate to CSM Server v3.4. CSM Server is a web application with centralized user authentication and database which provides end-to-end software management for IOS-XR devices. CSM v3.4 can be downloaded at
Up until 6.1.2, IOS-XR SSHv2 supports only CBC ciphers (aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc). That is, if a client requests a CTR cipher (e.g.: ssh -c aes128-ctr -l dpullat 188.8.131.52), IOS-XR will close the connection with:
RP/0/RSP0/CPU0:Feb 21 14:37:24.551 : SSHD_: %SECURITY-SSHD-6-INFO_GENERAL : Enc name is NULL: client aes128-ctr server aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc
CBC ciphers are well known for their security vulnerabilities:
As part of the effort to disable CBC ciphers and enable only CTR ciphers for SSHv2 on IOS-XR, all CBC ciphers are disabled (no longer supported) from release 6.1.2 onwards; only CTR ciphers are supported from 6.1.2 and up. This change was brought in by CSCvb53125.
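If an older SSH client defaults to a cipher the router no longer accepts, the cipher can be pinned per host on the client side. The sketch below uses standard OpenSSH client configuration; the hostname is a placeholder:

# ~/.ssh/config (OpenSSH client): force a CTR cipher towards 6.1.2+ routers
Host my-xr-router
    Ciphers aes128-ctr,aes192-ctr,aes256-ctr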
Next, IOS-XR will have the capability to configure a specific CTR cipher to use, for customers who wish to strictly enforce a particular one. This is targeted for an upcoming release.
With release 6.1.3, IOS-XR 64-bit (eXR) brings in the feature of Flexible Packaging (aka Golden ISO or GISO).
What is it:
Cisco releases IOS-XR 64-bit software as a mini ISO, which contains the mandatory IOS-XR packages for a given platform, plus a set of optional packages as RPMs and software patches (SMUs).
In response to customer demand for more flexible ways to manage software on the router, Golden ISO was developed as a customised ISO which customers and field teams can build offline from the mini ISO using the Cisco-released Golden ISO build script (written in Python). The IOS-XR flexible packaging install infrastructure supports this feature from release 6.1.3 onwards on 64-bit versions.
When the system is booted with a Golden ISO, additional SMUs and optional packages present in the Golden ISO are auto-installed. IOS-XR configuration, if present in the Golden ISO, is auto-applied. The router, once booted with a GISO, is ready to run traffic.
Golden ISO is not an image released by Cisco. Customers create their own Golden ISO using the Cisco-released Golden ISO creation tool and the Cisco-released mini ISO. One can also move from one GISO to another within the same release. GISO is also being integrated with CSM.
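As a rough illustration of the workflow, building a GISO offline typically looks like the sketch below. The exact script options vary by release, so treat the flag names, file names, and label as assumptions and check the tool's built-in help:

# Build a Golden ISO from the mini ISO, a directory of RPMs/SMUs, and an XR config
python gisobuild.py --iso ncs5500-mini-x-6.1.3.iso \
                    --repo ./rpm-repo \
                    --xrconfig router-base.cfg \
                    --label customer-giso-v1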
With release 6.1.2 of IOS-XR, 64-bit eXR is now available on CCO. With a ton of drivers and forward-looking features and functionality, eXR is the next generation of IOS-XR that runs in a virtualized environment on top of a 64-bit Linux kernel.
The key capabilities of the IOS XR 64-bit OS include:
• Telemetry: a push towards smarter visibility of the network by streaming data to a configured receiver for analysis and troubleshooting purposes (see the sketch after this list)
• Application Hosting: leverage hosting of third-party applications in a container environment
• Data Models: automate configurations that belong to multiple routers across the network
• Flexible Packaging: easy routine upgrades and maintenance with modularized RPM (Red Hat Package Manager) packages
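To make the telemetry bullet concrete, a minimal model-driven telemetry sketch is shown below; the destination address, port, sensor path, and group names are placeholders, not a recommendation:

telemetry model-driven
 destination-group DG1
  address-family ipv4 192.0.2.10 port 57500
   encoding self-describing-gpb
   protocol tcp
  !
 !
 sensor-group SG1
  sensor-path Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters
 !
 subscription SUB1
  sensor-group-id SG1 sample-interval 30000
  destination-id DG1
 !
!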
Migration to eXR is straightforward, and the CCO documentation for migration from 32-bit QNX-kernel XR to 64-bit Linux-kernel XR is now available.