
Ask the Experts: Understanding Cisco ASR 9000 Series Aggregation Services Routers Platform Architecture and Packet Forwarding Troubleshooting

May 13th, 2013

Understanding Cisco ASR 9000 Series Aggregation Services Routers Platform Architecture and Packet Forwarding Troubleshooting with Xander Thuijs

Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn about the Cisco ASR 9000 Series Aggregation Services Routers with Cisco expert Xander Thuijs. The Cisco ASR 9000 Series Aggregation Services Routers product family offers significant added value compared to prior generations of carrier Ethernet routing offerings. The Cisco ASR 9000 Series is an operationally simple, future-optimized platform using next-generation hardware and software. The ASR 9000 platform family is composed of the Cisco ASR 9010 Router, the Cisco ASR 9006 Router, the Cisco ASR 9922 Router, the Cisco ASR 9001 Router, and the Cisco ASR 9000v Router.

This is a continuation of the live Webcast.

Xander Thuijs is a principal engineer for the Cisco ASR 9000 Series and Cisco IOS-XR product family at Cisco. He is an expert and advisor in many technology areas, including IP routing, WAN, WAN switching, MPLS, multicast, BNG, ISDN, VoIP, carrier Ethernet, system architecture, network design, and many others. He has more than 20 years of industry experience in carrier Ethernet, carrier routing, and network access technologies. Xander holds a dual CCIE certification (number 6775) in service provider and voice technologies. He has a master of science degree in electrical engineering from the Hogeschool van Amsterdam.

Remember to use the rating system to let Xander know if you have received an adequate response.

Xander might not be able to answer each question because of the volume expected during this event. Remember that you can continue the conversation in the Service Providers community, XR OS and Platforms, shortly after the event. This event lasts through Friday, May 24, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.


Gregory Snipes Mon, 05/13/2013 - 12:18

I have noticed a distinct lack of data from Cisco about the performance of the ASR 9000 routers. This has posed a challenge for me, as I do not really know at what kinds of loads I should be looking to step up from something like an ASR 1000 or 7600 to the ASR 9000.

Could you give us some hard numbers regarding how much data the router can really push?

xthuijs Tue, 05/14/2013 - 10:03

Gregory, I assume you mean pps performance, right? OK, that number is hard to give as it is very much feature (set) dependent. I have a hopefully helpful write-up here:

The pps performance per NPU also depends on the software release in question, because we are constantly fixing up software paths for certain switching scenarios where we can.

In any case, to give you some ballpark numbers:

Trident NPU: ~17Mpps per direction, ~15G bw limitation.

Typhoon NPU: ~44Mpps per direction, 60G bw limitation.

For instance, in terms of pps hits:

An ingress ACL on Typhoon gives about a 28% performance hit.

ABF (access-list based forwarding) gives you about a 32% performance hit.

The capacity for ASR9000 with the RSP440 is 440G per slot.

Next-generation NPUs and fabric will go to the extent of providing 6x100G cards at line rate with ease.
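If it helps for capacity planning, one way to see where an NPU sits relative to those numbers on a live box is the NP load and counter commands (a sketch; the location and NP number below are placeholders for the example, and the output format varies per release):

```
RP/0/RSP0/CPU0:router# show controllers np load all location 0/0/CPU0
RP/0/RSP0/CPU0:router# show controllers np counters np0 location 0/0/CPU0
```

The first shows per-NPU utilization; the second dumps the per-feature packet counters for NP0 on the linecard in slot 0.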

Choosing between the 7600 and the A9K? You'll definitely want to pick the ASR 9000 when you have plans for high-density 10G aggregation, or for 40GE or 100G needs. Also, IOS-XR provides a lot of robustness and improvements over classic IOS.

If Watts per Gig is a concern, then A9K is also the right choice.

Did I address the main concerns/questions in terms of performance or did I leave something open?



e.nieuwstad Wed, 05/15/2013 - 06:29

Crosspost from a different forum here, but I'll give it a try.

We are building a DCI with VPLS. I was wondering how MAC address learning works after a vMotion. The minimum timeout that can be configured on the VPLS devices (ASR 9000) is 120 seconds, so only after 120 seconds does the original MAC entry time out and the MAC address get switched to the correct site. I know VMware does a gratuitous ARP after a vMotion, but I can't find confirmation that the VPLS routers will update their MAC entries based on this gratuitous ARP. Can someone confirm? I can't be the first to do a vMotion on a VPLS network.

xthuijs Wed, 05/15/2013 - 06:33

When we see a packet, whatever it is, with a source MAC that we already know on a different port, we instantiate a MAC move.

This operation can be policed and prevented if necessary.

The gratuitous ARP, if it carries the right source MAC, will do that for us.

If there is an event that causes a MAC flush, e.g. an STP convergence, we withdraw all the MACs from the bridge domain and send a VPLS MAC withdrawal out the PWs. This results in (temporary) flooding until the MACs are learned again.



e.nieuwstad Wed, 05/15/2013 - 06:51

Xander, thanks for the quick response. The following was confusing me:

MAC Address Aging

A MAC address in the MAC table is considered valid only for the duration of the MAC address aging time. When the time expires, the relevant MAC entries are repopulated. When the MAC aging time is configured only under a bridge domain, all the pseudowires and attachment circuits in the bridge domain use that configured MAC aging time.

A bridge forwards, floods, or drops packets based on the bridge table. The bridge table maintains both static entries and dynamic entries. Static entries are entered by the network manager or by the bridge itself. Dynamic entries are entered by the bridge learning process. A dynamic entry is automatically removed after a specified length of time, known as aging time, from the time the entry was created or last updated.

If hosts on a bridged network are likely to move, decrease the aging-time to enable the bridge to adapt to the change quickly. If hosts do not transmit continuously, increase the aging time to record the dynamic entries for a longer time, thus reducing the possibility of flooding when the hosts transmit again.

the sentences about dynamic entries being removed after the aging time made me think the MACs first had to time out on the old PW/AC before they could be learned via a different PW/AC

xthuijs Wed, 05/15/2013 - 07:35

Ah, I see where the confusion comes from! Yeah, that is correct, but not accurate, because what this document is not talking about is the "MAC move" concept. This can be controlled by "MAC security". If we see a known MAC in a BD on a different EFP (l2transport interface in the same BD), then we can either relearn the MAC to the new port and flush the old "binding", shut down the EFP, or drop the packet. That is configured with MAC security under the bridge-domain config placed in the l2vpn config mode. By default we will allow the MAC move, and that could be triggered by that grat ARP, so you should be fine.

You will want to control the MAC moving, however, because every time we learn a MAC, we send a "copy" to ALL NPUs in the system to inform them of the new MAC (as seen by the MAC_NOTIFY NP counter). These packets are processed and dropped, but obviously consume pps. Since all NPUs keep the same FIB and MAC table, regardless of whether they need it or not, such updates can affect performance unnecessarily, hence the need to control MAC moves.

(This concept is known as hardware MAC learning, which is awesome and fast, but has a gotcha too, as described.)
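For reference, a minimal sketch of where these knobs live under l2vpn config mode (the bridge group/domain names are made up, the aging value is only an example, and the available sub-options vary by XR release):

```
l2vpn
 bridge group DCI
  bridge-domain CUST1
   mac
    aging
     time 300
    !
    secure
     action none
     logging
    !
   !
  !
 !
```

Without `secure`, the MAC simply moves (the default behavior). With `secure` enabled, the violating frame is dropped; `action shutdown` additionally shuts the EFP, while `action none` with `logging` just drops and logs the event.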



e.nieuwstad Wed, 05/15/2013 - 23:46

Thanks for clarifying this. One last question: we use ingress HSRP filtering on the N7K devices at the different end sites. This results in the same HSRP MACs being learned by the ASR devices on both ends, which is, based on your post, an unwanted situation. Would an ACL filtering the HSRP messages on the incoming EFP prevent the ASR from learning the MAC and copying it to all other ASR devices?

xthuijs Thu, 05/16/2013 - 07:13

The virtual MAC will be programmed as active only in the active HSRP router's MAC filtering table. That is from an L3 perspective, so I am assuming you have a Bridge Domain with a BVI on which you run HSRP.

The EFPs that are in that BD, part of that HSRP-enabled BVI, will have that (L3) MAC filter for the vMAC of HSRP.

(This is, by the way, why you can't reuse HSRP group IDs on the same NPU: overlapping groups use the same vMAC, and if they fail over independently of each other, the inconsistent MAC filtering will cause the HSRP group that was still active on the node that had the failover to break, since the shared vMAC was removed toward the peer node.)

I think I am getting carried away here with my answer, but the short answer to your question directly is that the ACL comes BEFORE MAC learning. So if we deny a packet with the ACL, then even though that MAC is new to us, we will not learn it in the BD. The ACL is applied to the EFP in the bridge domain. This is irrespective of using an L2 or L3 ACL.



e.nieuwstad Thu, 05/16/2013 - 07:59

You are indeed getting a bit carried away. The ASR is only responsible for L2 transport with VPLS; the HSRP/L3 function is on the Nexus 7000 devices. So I will simply add an ACL on the incoming EFP to filter out the HSRP messages so they will not be flooded over the PWs.

Thanks for the extensive replies and the enthusiasm.

xthuijs Thu, 05/16/2013 - 08:04

Haha, sorry about that. Yeah, you can definitely add an ACL to filter the HSRP messages, and that will prevent the MAC from being learned from those filtered packets. All good!
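A sketch of such an ACL, assuming HSRPv1 (hellos go to 224.0.0.2, which maps to destination MAC 0100.5e00.0002; the interface and ACL names are made up, and note the match also catches any other traffic sent to 224.0.0.2):

```
ethernet-services access-list DENY-HSRP
 10 deny any host 0100.5e00.0002
 20 permit any any
!
interface TenGigE0/0/0/1.100 l2transport
 ethernet-services access-group DENY-HSRP ingress
!
```

HSRPv2 uses 224.0.0.102 (MAC 0100.5e00.0066) instead, so adjust the match for that version.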


xander Thu, 05/16/2013 - 13:56

Hi Xander,

When a packet goes from one port to another port on the same LC, even on the same NPU, why does the packet have to travel all the way to the fabric on the RSP? Can't the NPU do this task itself?

xthuijs Thu, 05/16/2013 - 13:58

Theoretically, yes, that is possible; however, we want to use the central fabric because then the arbiters know the load per entity. If traffic were locally switched, the fabric would have no notion of it and might send more traffic down to an interface or NPU than it can handle.


ivan.cavar Mon, 05/20/2013 - 05:36


I already posted the question in Other Service Provider Subjects, but now I see that this is the topic for ASR questions, so I will copy/paste my questions here:

I have two questions regarding ASR 9010 in dual chassis, with dual RSP per chassis, with full redundant connections between.

The first question: when I upgraded the ASR from 4.3.0 to 4.3.1 and activated the installation packages:

RP/0/RSP0/CPU0:router (admin)#install activate disk0:*4.3.1* sync

the router went down for about ten minutes. The upgrade procedure states:

Note: The router will reload at the end of activation to start using the new packages. This operation will impact traffic. Typically this operation may take at least 20 minutes to complete.

Is there any way to upgrade the router in a dual-chassis configuration without any impact on traffic forwarding?

The second question: when can we expect the upgrade that will enable EtherChannel on 9000v satellites, for when we are using EtherChannel from the ASR to the 9000v for redundancy?


xthuijs Mon, 05/20/2013 - 06:25

Hi Ivan,

Cluster ISSU (in-service upgrade) is being worked on. One of the interim steps will be to separate/isolate one of the nodes of the cluster, upgrade it, bring it back into service, and have the other one upgraded by the now-new active.

This still results in downtime, but not as heavy as it is right now.

More is to be worked on here, by the way.

As for the bundle-over-bundle on the satellite, that is at this point an IOS-XR 5.1 deliverable. It's definitely on the roadmap.


ivan.cavar Wed, 05/22/2013 - 04:02

OK, thank you. When can we expect the 5.1 version?

And one more question: input shaping on 9000v satellite ports isn't implemented yet... when can we expect that feature?



xthuijs Wed, 05/22/2013 - 05:51

Hi Ivan: XR 5.1 is scheduled for the second half of this year, say fall (September/October time frame).

Input shaping, or any feature for that matter, is done on satellite ports by the 9K host, so you're bound by the capabilities of the LC that your SAT is connecting to. Since we only have one TM (traffic manager) for ingress, which is limited to 30G, cards that have more than 30G of interfaces per Typhoon have their ingress TM disabled for speed purposes.

So the 24x10 will have ingress shaping capabilities, but the 36x10 has it disabled.



ciscomoderator Tue, 05/21/2013 - 13:10

Hello Xander,

Excellent work in the live event. Here are some questions that were not answered during the webcast:

  1. Is there a Cisco lab available for the ASR 9000?
  2. How will the MOD160 perform with multiple 9000v satellites?
  3. Is there a shortcut for a Bundle-EthernetX interface, such as the port-channel interface (poX) in Cisco IOS®?
  4. What is the revolutions per minute (RPM) on these hard disk drives (HDDs) compared to the solid-state drives (SSDs)? Will the spinning drives be slow?

Thank you for answering these questions.

- Cisco Support Community Moderator

xthuijs Tue, 05/21/2013 - 13:47
  1. Is there a Cisco lab available for the ASR 9000?

We have "XR4U" stations coming available soon when XR 5.1.1 comes alive. The plan is for a downloadable play image like that. In the interim we have two demo systems available, and they can be booked via your account manager representative.

  2. How will the MOD160 perform with multiple 9000v satellites?

Very well. The MOD160 has 4 NPUs, 2 per bay. So if you have a 4x10 MPA to serve a satellite, you effectively have a single NPU per 20 1G ports from the satellite. The pps performance will be stellar. However, it might be more cost-effective to connect the satellite to a 36x10, since the MOD-x also takes native MPAs with 1G ports.

     3. Is there a shortcut for a Bundle-EthernetX interface, such as the port-channel interface (poX) in Cisco IOS®?

The usability enhancement request is there; we are trying to push this into a reasonable upcoming release. Follow CSCuh04526.

     4. What is the revolutions per minute (RPM) on these hard disk drives (HDDs) compared to the solid-state drives (SSDs)? Will the spinning drives be slow?

It depends on the type we had available at time of production; you will see different sizes and disks on the RSP2. The RPM of the HDD is not so much an issue as the buffered writing we used to do in XR. This is fixed up in XR 4.3, where the disk-writing performance is much better. The HDD/SSD is used for logging storage only (and maybe your pictures), but other than that we're not that concerned with the write performance of the drive.



ciscomoderator Thu, 05/23/2013 - 06:52


Thanks a lot for the responses. Here are a few more questions from the live event:

  1. Will 100G ITU Grid support for the ASR 9000 Series platform be available soon?
  2. Is there a reason why optic licenses for G709 are per line card type instead of just simply per slot?

Thanks a lot,

Cisco Moderator

xthuijs Thu, 05/23/2013 - 08:27

Answer 1: it depends on the availability of the optics. 100G is a fairly new standard and there are not that many (cost-effective) options out there, so as they become available, we will qualify them and get them supported.

Answer 2: I am not sure what the difference is between slot and linecard, but we decided to make this optics license per linecard/slot as that fits the "pay for what you use" model the licensing tries to provide. If you have a set of interfaces that require (E)FEC, then you only need a license for the cards on which those interfaces reside.



ciscomoderator Thu, 05/23/2013 - 06:53

And a few others:

  1. Is there a limitation on the number of maximum DWDM-XFP-C that can be used in Typhoon cards?
  2. Will there be any tunable DWDM SFP+ for the ASR 9000 Series?
  3. Will 100G ITU Grid support for the ASR 9000 Series platform be available soon?

Thank you for your quick response.

Cisco Moderator

xthuijs Thu, 05/23/2013 - 08:30

Some answers:

1) No, there is no limit, but when we are using the 36x10 or the 24x10, there are some restrictions on the positions in which you can put higher-powered optics that consume more than 1W (e.g. the same deal with ER optics). This is because of the cooling characteristics of the chassis.

We have a new fan tray out now, called the -v2, that eliminates this restriction on slot positions.

What it comes down to is that on the 24x10 you can only put high-powered optics in the lower 12 positions, and the same for the lower 12 on the 36x10.

2) Yes, they are available today.

3) See the earlier question above.



aweintraub Thu, 05/23/2013 - 09:53

On the question above about licenses:

  1. Is there a reason why optic licenses for G709 are per line card type instead of just simply per slot?

I think the question is 'why are there 24x, 36x, MOD80, ADV, all different kinds of optics licenses' instead of just 'enable G709 features on slot 1'.

xthuijs Thu, 05/23/2013 - 11:14

Ah, right, thanks for that clarification, Aaron!

I just got word from our marketing team on this: the rationale behind the different licenses per card was to maintain fair pricing per port.

That is why there are different licenses per linecard (density).



