During the event, Cisco expert Xander Thuijs provides an in-depth overview of the Cisco ASR 9000 Series Aggregation Services Routers. He also presents a packet walkthrough, explains troubleshooting best practices and tips, and discusses quality of service (QoS) implementation and the forwarding architecture.
Xander Thuijs is a principal engineer for the Cisco ASR 9000 Series and Cisco IOS-XR product family at Cisco. He is an expert and advisor in many technology areas, including IP routing, WAN, WAN switching, MPLS, multicast, BNG, ISDN, VoIP, Carrier Ethernet, system architecture, network design, and many others. He has more than 20 years of industry experience in carrier Ethernet, carrier routing, and network access technologies. Xander holds a dual CCIE certification (number 6775) in service provider and voice technologies. He has a master of science degree in electrical engineering from the Hogeschool van Amsterdam.
The following experts helped Xander answer some of the questions asked during the session: Aleksandar Vidako, Sadananda Phadke, and Krishna Eranti. Aleksandar, Sadananda, and Krishna are members of the ASR9000 Escalation team and have vast knowledge of the platform.
Q. Is there an impact on any features, such as access control lists (ACL) or security, when using super-frames? If so, can that grouping be disabled?
A. No. Super-framing is implemented in the hardware of the Fabric Interface ASIC (FIA). Also, there is no show command that reports the number of packets aggregated into super-frames. The question has been raised before whether it makes sense from a troubleshooting point of view to create counters; Cisco determined that this was not value-added, so that ability is not available. Super-framing by itself improves the efficiency of fabric forwarding and does not have an impact on performance.
Q. What is the default queue buffer size?
A. That depends on the QoS configuration. Each network processing unit (NPU) has frame memory attached to it, and this frame memory is where packet buffering occurs. The Trident -L and -B cards can buffer a 50ms burst of traffic; the Trident -E card can buffer a maximum 150ms burst. The Typhoon base line card has about three times as much as the Trident, so it buffers 300ms per NPU, served on a first-come, first-served basis. So, if a Trident card has one interface on an NPU, that interface can use either 50ms or 150ms of buffering. If you create two sub-interfaces, then the 150ms is shared by those two sub-interfaces, and the queue-limit configuration in the QoS policy determines how much of that buffer is assigned to each interface. You can allow oversubscription, but then you run into packet anarchy.
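As a rough illustration, the millisecond burst figures above translate into buffer bytes as a function of port speed; this is only a sketch of the arithmetic (rate × time), not a platform-accurate memory model:

```python
def buffer_bytes(port_gbps, burst_ms):
    """Convert a burst duration at a given port rate into bytes of buffering."""
    bits = port_gbps * 1e9 * (burst_ms / 1000.0)
    return int(bits / 8)

# Trident -L/-B: 50 ms of buffering for a 10G port
print(buffer_bytes(10, 50))    # 62500000 bytes (~62.5 MB)
# Trident -E: 150 ms for a 10G port
print(buffer_bytes(10, 150))   # 187500000 bytes (~187.5 MB)
# Typhoon: 300 ms per NPU for a 10G port
print(buffer_bytes(10, 300))   # 375000000 bytes (~375 MB)
```

The same conversion shows why two sub-interfaces sharing the NPU buffer each receive a smaller effective burst allowance unless queue-limits say otherwise.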
Q. Is there a shortcut for a Bundle-EthernetX interface, such as port-channel interface (poX), in Cisco IOS® ?
Q. If Network Based Application Recognition (NBAR) will be supported, will it run on the Route Switch Processor (RSP) or the line card (LC) CPUs?
A. NBAR is not supported.
Q. Are both Fabric connections from LC to RSP used to send data at the same time?
A. Yes, both Fabric connections from the LC to the RSPs are used to send data at the same time in a load-balancing fashion.
Q. Is it safe to have only feed A to shelf 0 and feed B to shelf 1 and have high availability (HA)?
A. This gives 1:1 redundancy on feed failure (like the AC modules). In this mode, you need to ensure that if you lose one feed, the remaining power bricks can still provide enough power for the cards.
Q. What is the revolutions per minute (RPM) on these hard disk drives (HDDs) compared to the solid state drives (SSDs)? Will the spinning drives be slow?
Q. Are there any plans to end-of-life (EOL) the RSP2 in the near future? Is the extra RAM, after an upgrade, used solely for routing purposes?
A. There are no plans for the EOL of the RSP2 as of now. The extra RAM is for control plane processing, the routing table, and so on.
Q. As shown in the picture during the webcast, if RSP0 is active, can the Fabric connection to RSP1 be used to switch data?
A. Yes. The Fabric of both RSPs can be used simultaneously to forward traffic. It is active/active Fabric.
Q. Where will multicast be replicated on the MOD160?
A. Multicast is replicated on the MOD160 the same as on other types of cards. Modular port adapters (MPAs) do not make L2/L3 forwarding decisions. In general, multicast is replicated by the Fabric (to egress LCs) and, within the egress LC, by the Fabric Interface ASIC (FIA), the bridge (in the case of Trident cards), and the network processor for the egress ports.
Q. Does super-framing take into account the queue priority? Are all the queues treated equally when their frames are being put into super-frames?
A. This is not required. If a packet needs to be dropped due to the virtual queue index (VQI) overflowing (or flow-control from the egress LC), then it is dropped in the ingress FIA itself. High-priority traffic is always preserved as marked on the ingress interface. VQI flow-off happens on a per-priority basis.
Q. Does the ASR 9000 Series only support Ethernet?
A. The SIP-700 type of line card supports shared port adapters (SPAs).
Q. How is the packet forwarding decision made in the case of EtherChannel?
A. The decision is made on the ingress LC based on a hash computation derived from packet header contents.
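The exact fields and algorithm of the EtherChannel hash are platform-internal; the sketch below only illustrates the general idea of deriving a member link from packet header contents (the field choice and CRC32 hash here are illustrative assumptions, not the ASR 9000's actual hash):

```python
import zlib

def select_member(src_ip, dst_ip, src_port, dst_port, num_links):
    """Illustrative LAG hash: derive a member link index from header fields.

    Because the hash is deterministic, every packet of the same flow
    maps to the same member link, which preserves packet ordering.
    """
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return zlib.crc32(key) % num_links

# Same flow -> same link; different flows spread across the bundle.
link_a = select_member("10.0.0.1", "10.0.0.2", 12345, 80, 4)
link_b = select_member("10.0.0.1", "10.0.0.2", 12345, 80, 4)
assert link_a == link_b  # per-flow consistency
```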
Q. Why are ASR 9000 platforms unable to forward Multiprotocol Label Switching (MPLS) traffic when using a Bridge Group Virtual Interface (BVI), and why must per-vrf label allocation mode be configured in order to make that work?
A. In per-prefix allocation, the label is directly associated with a forwarding adjacency, which cannot be on a BVI. Therefore, a lookup must be enforced after the MPLS label is popped; for that reason, per-ce or per-vrf labels are needed to force that extra lookup.
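A hedged sketch of the per-vrf label allocation configuration on IOS XR follows (the AS number and VRF name are hypothetical; verify the exact syntax for your release):

```
router bgp 65000
 vrf CUSTOMER-A
  address-family ipv4 unicast
   label mode per-vrf
  !
 !
!
```

With per-vrf (or per-ce) mode, the popped label points at the VRF table rather than a single adjacency, which triggers the extra IP lookup needed for BVI forwarding.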
Q. If fat-pw is configured on an ingress port, does it have any effect on egress equal-cost multi-path (ECMP)/link aggregation (LAG) ports on the same chassis?
A. Fat-pw is more useful on core (P) routers, since without fat-pw, even if there are multiple ECMP links, the L2 traffic uses only one ECMP link.
Note that on a PE router, even if you use fat-pw, the egress path is still selected based on the inner label before the fat label is inserted, so the decision is made on the PW label.
Q. Are there going to be any changes to the dropped-packet counters on interfaces for packets such as Cisco Discovery Protocol (CDP) or Spanning Tree Protocol (STP)? Can these drop counters be disabled?
A. Cisco does not plan to allow a CLI command to disable any drop counter.
Q. Does the ASR 9000 Series support full In-Service Software Upgrade (ISSU) from Release 4.2.3 to Release 4.3.x?
A. No. ISSU is supported starting with Cisco IOS XR Release 4.3.0.
Q. What is the biggest size of a super-frame?
A. The biggest super-frame size is 9K.
Q. In regard to two-stage forwarding, does the ingress NPU need to learn all destination MAC addresses, or how is the egress LC decided?
A. No. Address Resolution Protocol (ARP) tables, and hence the adjacency tables, are local to a line card. The ingress card needs to know only that the prefix is associated with a certain output interface. Adjacency lookup and L2 rewrites are performed on the egress card. MAC addresses are associated with the egress LC/ports on which they are learned.
For L2VPN, all learned MAC addresses in every bridge domain are sent to the L2 tables of all NPUs.
Q. Are the ARP and adjacency tables the same in all LCs?
A. No, each LC keeps only the ARP and adjacency entries of addresses attached to that line card. These entries are not exchanged between line cards.
ASR 9000 Series Modular Line Cards and Modular Port Adapters
Q. When is the 8x10G modular port adapter (MPA) expected? Will it be line-rate?
A. It will be line-rate if it is put into the MOD160, because the MOD160 provides two NPUs; one NPU then serves 4x10G out of that MPA. This MPA is scheduled to be released along with XR 4.3.1 in May 2013.
Q. As ASR 9001 supports 80Gbps (2 x FIA), if I use all four on-board 10G ports can I really only use 2 x 20x1GE MPAs to keep from oversubscribing?
A. In the ASR 9001, there are two NPUs, and each of them serves two of the four on-board 10G ports. Since a Typhoon NPU can do about 60G and 44 million packets per second, the two fixed 10G ports leave about 40G of bandwidth out of that NPU still available for the bay. You can use 1x40G, 4x10G, 2x10G, or 20x1G, and you would not oversubscribe the NPUs.
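As a back-of-the-envelope check (assuming roughly 60 Gbps of throughput per Typhoon NPU; the exact usable figure is release- and feature-dependent), the remaining bay budget per NPU works out as follows:

```python
NPU_CAPACITY_GBPS = 60        # assumed Typhoon NPU throughput
FIXED_PORTS_GBPS = 2 * 10     # two on-board 10G ports per NPU

# Bandwidth left for the MPA bay served by the same NPU
bay_budget = NPU_CAPACITY_GBPS - FIXED_PORTS_GBPS  # 40 Gbps

# None of the supported MPA options exceed the remaining budget:
for mpa_gbps in (1 * 40, 4 * 10, 2 * 10, 20 * 1):
    status = "fits" if mpa_gbps <= bay_budget else "oversubscribed"
    print(mpa_gbps, "Gbps ->", status)
```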
Q. Is the ASR9001 oversubscribed if I use 2 x 4port 10G MPA and the 4 x on-board 10G ports (12 x 10G)?
A. It is not oversubscribed, but just like the Mod80 LC, it depends on the features you enable; you may not get the same performance.
With a 4-port 10G MPA, you have 6x10G per NPU, the same as on the 36x10G LC.
Q. Can I mix different generations of line cards in the same chassis?
A. Yes, you can use different generations of line cards in the same chassis.
Q. Can you cluster virtual routing and forwarding (VRF) instances on two line cards?
A. Cluster is a system-wide functionality. From a logical perspective it is still a single router. VRFs are configured in the same way as on a single chassis configuration.
Q. I have an 8T/L line card and only 1 RSP (4G). Will it be line rate?
A. The 8T/L card is an oversubscribed card and will be line-rate for larger packet sizes and limited features with dual RSPs.
The second RSP can be used to minimize the oversubscription level to 1:2. With a single RSP, the bandwidth is limited to 46G; with dual RSPs, to 92G.
The NPU is limited to about 15G, so with a single RSP the bottleneck is the fabric links; with dual RSPs, the limit is caused by the NPU.
Q. Is there a reason why optic licenses for G709 are per line card type instead of just simply per slot?
Q. Are there any modules/cards that perform Coarse Wave Division Multiplexing (CWDM)/Dense Wave Division Multiplexing (DWDM) functions?
A. Yes, all the line cards support IP over DWDM. CWDM can also be accomplished since different optics are available. You can use either colored optics with a fixed wavelength or tunable optics for which you can configure the wavelength. There is also the ability to do G709 FEC; however, that requires a software license, and it is supported on all Typhoon line cards, but only on the Trident line cards A9K-2T20G and 8x10.
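A hedged sketch of a tunable-optics and G.709 FEC configuration on an IOS XR DWDM controller follows (the controller location and wavelength channel are hypothetical, and exact keywords vary by release and optic; verify against your software version):

```
controller dwdm 0/1/0/0
 g709 fec enhanced
 wavelength 20
!
```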
Q. Is there a limitation on the number of maximum DWDM-XFP-C that can be used in Typhoon cards?
Q. Is there a rule of thumb to decide the bandwidth for Data Plane in a Network Virtualization (nV) Cluster?
A. Generally, the two nodes of the cluster do what is called rack locality: the local member of the bundle is preferred, or the local ECMP path down to the CE. In such cases, the inter-chassis bandwidth requirements are very minimal. Only traffic to single-homed devices uses the Inter Rack Link (IRL), when the packet is received on the peer node. This depends on the number of CEs and their bandwidth requirements, and on how many are single-homed, which basically constitutes your IRL requirement. It is advisable to have a minimum of two 10G links for redundancy, because the IRL is also used for keepalives to detect node liveness.
Q. How many of your customers use nV at the Internet edge and not the BNG edge? We need to understand how much this is embraced by the community.
A. There is a tremendous amount of interest in nV edge clustering, as it is a very popular concept. The clustering is rather new, but a lot of customers are deploying it, especially in the US region, where many providers leverage this capability. This is definitely something worth considering and is embraced by the community.
Q. In nV cluster, if ingress traffic comes on chassis-1, will it use the same chassis-1 for egress if ingress and egress are in two different line cards?
A. No. Downstream traffic is not sent inter-chassis; it egresses on the same chassis on which it entered, even if ingress and egress are on different line cards.
Q. Is the nV technology specific to certain series or does it apply to all series?
A. All ASR 9000 Series chassis, ASR 9922, ASR 9010, ASR 9006, and ASR 9001, can work as an nV host.