ASR9000/XR Understanding Route scale


Mon, 04/20/2015 - 07:23
Jun 28th, 2012

 

 

Introduction

In this document we'll discuss how the route scale of the ASR9000 linecards works. There is a significant difference between the first generation linecards, which are Trident based, and the next generation linecards, which are Typhoon based.

 

In this article you'll find guidance on how to identify whether you have a Trident or Typhoon linecard, what the scale type really means and what it affects, how the route scale parameters work on Trident, and how the route scale is different on Typhoon.

 

Do I have a Trident or a Typhoon linecard?

The following linecards are Trident based:

 

40G series:

A9K-4T

A9K-8T/4 (8 port 10GE oversubscribed linecard)

A9K-2T20G

A9K-40GE

 

80G series:

A9K-8T

A9K-16T/8 (16 port 10GE oversubscribed linecard)

 

Regardless of the scale version denoted by the suffix -L, -B, -E

 

The following linecards are Typhoon based:

A9K-24x10GE

A9K-100G

A9K-MOD80

A9K-MOD160

and the ASR9001

 

Regardless of the scale version denoted by the suffix -TR, -SE

 

SIP-700 is CPP based

 

What is the difference between the L/B/E type of the cards?

All ASR9000 linecards come in different scale versions with different price points. The scale version does NOT affect the route scale.

Also the architectural layout of the linecard is the same, and all features supported on one scale version of a linecard are supported on the other scale versions as well.

 

So what precisely is different between these line card scale types?

The following picture gives a layout of the Network Processor (NP) and the memory that is attached to it.

 

[Image: layout of the Network Processor (NP) and its attached Search, Stats, Frame and TCAM memories]

The lookup or Search memory is used for the L2 MAC table (in IOS terms "CAM" table) and by the FIB (Forwarding Information Base, eg where the CEF table puts the forwarding info).

 

The L/B/E cards differ in the size of the Stats, Frame and TCAM memories, and therefore derive a different scale based on:

 

Stats memory:

     Interface counters, QOS counters, EFP counters.

     The more stats memory I have, the more QOS Policies I can have, the more interfaces/EFP's/Xconnects etc I can support on this card.

 

Frame memory:

     Used for packet buffering by QOS

     The more frame memory I have, the more packets I can buffer in the queues

 

TCAM:

     Used for VLAN-to-interface matching (eg an incoming VLAN/combo is matched to the closest subinterface), ACL scale and QOS matching scale.

 

The SEARCH memory does not change between L/B/E, hence the Route and MAC scale remains the same between these cards.

 

To Sum up: The difference between L/B/E (for Trident) or TR/SE (for Typhoon) mainly affects the:

  • QOS scale (Queues and Policers)
  • EFP scale (L2 transport (sub)interfaces and cross connects)

 

What does not change between the types is:

MPLS label scale, Routes, MAC, ARP, Bridgedomain scale

 

What about these hw-module scale profile commands on Trident ?

The Trident linecards provide a great amount of flexibility based on the deployment scenario you have.

As you can see from the above description, the search memory is not affected by the scale type of the linecard.

Considering that the ASR9000 was originally developed as an L2 device, that search memory, shared between MAC and Route scale, was divided

in "favor" of the MAC scale, leaving a limited route capability.

 

With the ASR9000 moving into the L3 space, we provided scale profiles to adjust the sharing of the Search memory between L2 and L3 in a more user-defined manner. Using the command

 

RP/0/RSP1/CPU0:A9K-BOT(admin-config)#hw-module profile scale ?
  default  Default scale profile
  l3       L3 scale profile
  l3xl     L3 XL scale profile

 

You can move that search memory in favor of L2 or L3:

 

[Images: search memory carving in "default"/L2 mode (left) vs l3xl mode (right)]

This inherently means that the increased FIB scale comes at the cost of the MAC scale, in the following manner:

 

[Image: Trident scale table per hw-module scale profile]

Notes:

1) This scale table is Trident specific.

2) Some values are testing limits (eg IGP routes), some are hardware bound

3) The EFP number is dependent on the scale type of the linecard (E/B/L); what this tries to show is that the EFP scale is not affected by the hw-module profile scale command.

 

Typhoon Specific

Typhoon has a FIB capability of 4M routes. Typhoon uses separate memory for L2 and L3 and therefore the profile command discussed above is not applicable to the Typhoon based linecards.

 

 

Understanding IPv4 and IPv6 route scale

As you can see in the scale table above, the number of IPv6 routes is half the number of IPv4 routes. v6 routes consume more space in the FIB structures; the system accounts for this by having a v6 route consume twice as much as a v4 route.

 

Now when we state that we have 1M FIB scale in the L3 mode, we should read it as 1M credits.

Knowing that a v4 route consumes 1 credit and a v6 route consumes 2 credits, we can compile the following formula:

 

Number of IPv4 routes + 2 * the number of IPv6 routes <= Number of credits as per scale profile
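As an illustration, the formula can be checked with a small helper (a hypothetical sketch, not a Cisco tool; the credit values per profile are as quoted in this article and its comments):

```python
# Hypothetical helper illustrating the FIB credit formula:
# an IPv4 route costs 1 credit, an IPv6 route costs 2.
PROFILE_CREDITS = {
    "trident-default": 512_000,   # default profile
    "trident-l3": 1_000_000,      # hw-module profile scale l3
    "trident-l3xl": 1_300_000,    # hw-module profile scale l3xl
    "typhoon": 4_000_000,         # fixed; no profile needed on Typhoon
}

def fib_fits(v4_routes: int, v6_routes: int, profile: str) -> bool:
    """True if v4 + 2*v6 fits within the profile's credit budget."""
    return v4_routes + 2 * v6_routes <= PROFILE_CREDITS[profile]
```

For example, 500k IPv4 plus 10k IPv6 routes (520k credits) no longer fits the 512k default Trident profile.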

 

Typhoon Specific

This logic of v4/v6 scale is the same for Typhoon, with the note that Typhoon has 4M credits.

 

Understanding Subtrees

One concept that was referenced in the scale table is the SUBTREE. Subtree is a method of implementing a Forwarding Information Base. Trident uses this implementation methodology.

 

While the route scale in, say, the L3 profile is 1M ipv4 routes, it depends on which VRF the routes are in, based on their tableID, and on the subtree size.

 

Table ID's 0 to 15 have a subtree assigned per /8. That means that they can individually reach 1M route scale as long as you don't exceed the number of routes per subtree. In L3 mode the subtree size is 128k; that means that in order to reach the 1M route scale I need to assign 8 /8's filled with 128K routes each.
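The /8 subtree arithmetic can be sketched as follows (illustrative only; min_slash8_subtrees is a hypothetical helper, assuming the 128k L3-profile subtree size):

```python
import math

def min_slash8_subtrees(total_routes: int, subtree_size: int = 128_000) -> int:
    """Minimum number of /8 subtrees, each capped at subtree_size routes,
    needed to hold total_routes in a low table-ID (0-15) VRF."""
    return math.ceil(total_routes / subtree_size)
```

With the 128k subtree size of the L3 profile, 1M routes indeed requires 8 /8 subtrees.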

 

Note that the route scale mentioned is the sum of all routes across all vrf's combined.

 

VRF table ID's higher than 15, up to the max vrf scale, have only one subtree in total, which means that the route scale for those VRF's tops out at 128k in L3.

 

IPv6 routes have one subtree, period, meaning that v6 cannot have more than the subtree size as dictated by the configured scale profile.

 

The following picture visualizes the subtree.

Each subtree can point to either a non recursive leaf (NRLDI) or a recursive leaf (RLDI)

 

  1. You can have 4 (or 8, requires admin config profile and has some pps implications) recursive ECMP (eg BGP paths)
  2. Each of those recursive can point to 32 non recursive paths (eg IGP loadbalancing)
  3. Which in turn can be a bundle path with 64 members max.
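Multiplying these three levels gives the theoretical maximum load-balancing fan-out per prefix (a back-of-envelope sketch, assuming the 8-path recursive ECMP profile is enabled):

```python
# Back-of-envelope fan-out: recursive ECMP x non-recursive paths x bundle members
recursive_paths = 8                 # 8 requires the admin-config profile (default is 4)
non_recursive_per_recursive = 32    # IGP load-balancing paths per recursive path
bundle_members_per_path = 64        # max members per bundle

max_fanout = recursive_paths * non_recursive_per_recursive * bundle_members_per_path
# 8 * 32 * 64 = 16384 possible egress member paths per prefix
```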

[Image: subtree pointing to recursive (RLDI) and non-recursive (NRLDI) leaves]

 

How to find the tableID of a vrf

 

Note that tableID's are assigned only when you enable an IP address on one interface that is member of the vrf.

 

RP/0/RSP0/CPU0:Viking-Top#show uid data location 0/0/CPU0 gigabitEthernet 0/0/0/2 ingress | i Table
Fri Apr  9 16:24:32.878 EDT
  Table-ID:           256

 

RP/0/RSP0/CPU0:Viking-Top(config)#vrf GREEN

 

RP/0/RSP0/CPU0:Viking-Top(config-vrf)#address-family ipv4 unicast
RP/0/RSP0/CPU0:Viking-Top(config-vrf-af)#commit
RP/0/RSP0/CPU0:Viking-Top(config-vrf-af)#int g0/0/0/12
RP/0/RSP0/CPU0:Viking-Top(config-if)#vrf GREEN

RP/0/RSP0/CPU0:Viking-Top#show uid data location 0/0/CPU0 gigabitEthernet 0/0/0/12 ingress | i Table
Fri Apr  9 16:22:40.263 EDT
  Table-ID:           512
RP/0/RSP0/CPU0:Viking-Top#

 

Table ID assignment:

 

1) when an RSP boots, tableID assignments start at 0. (verified in labs)

2) ID zero is reserved for the global routing table. (given)

3) The first 15 tableID's can carry > 128K* routes, provided there are no more than 128k* routes per subtree (/8). (given limitation)

4) Reconfiguring a VRF increments the tableID value (verified in lab), and might eventually push it out of the preferred table space.

5) No more than a total of 1M routes per system (or as defined by the scale profile), regardless of in which tableID these routes are.

6) In order to reach that 1M route scale, the command hw-module profile scale l3 or l3xl needs to be configured.

 

If you have fewer than 15 VRF's configured, reloading will still land these tableID's in the larger table space (tableID < 15).

A vrf may not get assigned the same tableID value after reload, but this is not interesting from a user perspective.

 

NOTE1: the table ID is a 16 bit value that is byte swapped. So tableID 1 has value 256, tableID 2 has value 512, etc.
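The byte swap can be expressed as a small function (a hypothetical helper mirroring the values shown by "show uid data"; the swap is its own inverse):

```python
def swapped_table_id(table_id: int) -> int:
    """Swap the two bytes of a 16-bit table ID: 1 -> 256, 2 -> 512."""
    return ((table_id & 0xFF) << 8) | ((table_id >> 8) & 0xFF)
```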

 

NOTE2: We're working on an enhancement to make a vrf "sticky" to a particular tableID so you can make sure that this vrf will always get the higher route scale. Track CSCtg2546 for that.

 

*NOTE3: 128k or 256k depending on the scale profile used. Some older pre-4.0.1 releases had a smaller subtree size of 64k.

Typhoon Specific

Typhoon uses the MTRIE implementation for the FIB and therefore the above Subtree explanation and its associated restrictions do NOT apply to linecards using the Typhoon forwarder.

 

 

Monitoring L3 Scale

You can use SNMP for this by pulling the route summaries for EACH vrf or using the CLI command as follows:

 

RP/0/RSP0/CPU0:A9K-TOP#show route vrf all sum

 

VRF: RED

 

Route Source    Routes    Backup    Deleted    Memory (bytes)
connected       1         1         0          272
local           2         0         0          272
bgp 100         0         0         0          0
Total           3         1         0          544

 

VRF: test

 

Route Source    Routes    Backup    Deleted    Memory (bytes)
connected       0         1         0          136
local           1         0         0          136
bgp 100         1         0         0          136
Total           2         1         0          408

 

VRF: private

 

Route Source    Routes    Backup    Deleted    Memory (bytes)
static          0         0         0          0
connected       1         0         0          136
local           1         0         0          136
bgp 100         0         0         0          0
dagr            0         0         0          0
Total           2         0         0          272

 

The number of routes is provided per source (IGP/BGP etc); for the FIB the source doesn't matter.

Also the memory that is presented is XR CPU memory and is not the memory that is used by the hardware.

 

Because of the Trident subtree implementation, if you want to be accurate you need to count the number of routes in VRF table ID's 0-15 (with tableID 0 being the global routing table) on a per /8 basis.

 

Show commands

 

This is the global view as to how things are implemented.

 

  • The RIB, routing information base solely resides on the RSP and is fed by all the routing protocols you have running.
  • The size of the RIB can grow as far as memory scales.
  • The RIB compiles a CEF table which we also call the FIB, forwarding information base, which is distributed to the linecards
  • The linecards complete the FIB entries with the L2 Adjacencies (eg ARP entries, which are isolated to the linecards only, UNLESS you have BVI's, in which case those L2 ADJ's are shared on all LC's)
  • The complete entry is then programmed into the NPU.

 

 

[Images: show command outputs illustrating each step of the RIB-to-hardware programming path]

 

Summary view

The easiest way to verify and validate resources for the linecard is via this command:

 

RP/0/RSP1/CPU0:A9K-BOT#show cef resource loc 0/1/CPU0
Thu Jun 28 11:02:41.855 EDT
CEF resource availability summary state: GREEN
CEF will work normally
  ipv4 shared memory resource: GREEN
  ipv6 shared memory resource: GREEN
  mpls shared memory resource: GREEN
...

 

 

 

 

Using the ASR9000 as a Route Reflector

This can be done, no problem. A route reflector is generally never in the forwarding path of the traffic. This means that we can put all the routes in the RIB and not install them in the FIB, based on a policy.

We can use the table-policy under the BGP config to pull in an RPL that denies the installation of routes into the FIB.

Then we can use the RP CPU memory for reflecting routes as far as memory scales.
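A minimal configuration sketch of this approach (the policy name and AS number are illustrative, not from the original article):

```
route-policy NO-FIB-INSTALL
  drop
end-policy
!
router bgp 65000
 address-family ipv4 unicast
  table-policy NO-FIB-INSTALL
 !
!
```

With this table-policy attached, the reflector keeps the prefixes in the BGP table for reflection but does not install them for forwarding.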

How far can we go?

Depends on the Paths, attributes, size of the attributes and whether you have the high or low scale RSP memory version.

Numbers can be anywhere from 7M to 20M, depending.


 

 

mlinder01 Mon, 07/02/2012 - 09:42

What about a mixed environment, meaning a Trident and a Typhoon line card in the same chassis? How does this affect the route scale? Does the route scale remain at 1M?

Alexander Thuijs Fri, 04/05/2013 - 07:56

If you mix both trident and typhoon and knowing that the FIB is loaded to all cards, then you run into the lowest common denominator scenario whereby effectively you're limited by the trident route scale. When you exceed trident fib scale, typhoon will hold the routing info just fine, but trident will have an incomplete FIB.

You could leverage selective-vrf-download to limit the routes on edge facing trident cards that only hold those routes for those vrf's it serves.

xander

lovecg1121 Thu, 04/04/2013 - 15:45

Hi Alexander,

Can you please tell me the difference between A9K-MOD80-SE and A9K-MOD80-TR in details?

Thanks

Alexander Thuijs Fri, 04/05/2013 - 08:01

The major difference between TR and SE cards is the size of the memory attached to the NPU.

SE cards have more STATS memory, TCAM and FRAME memory. So on SE cards you can support more (sub)interfaces/EFP's, QOS policies, buffering, larger ACL's and more QOS classes.

The TR card is the equivalent to the Trident -L card, and the SE version is equivalent to the -E card. (there is no -B type for typhoon).

TR cards have also a limit of 8 queues per interface.

Note that the FIB and MAC scale is the same for both TR and SE cards.

regards

xander

Pedro Morais Wed, 05/22/2013 - 04:58

Hi Alexander,

What about the XR version (P and PX), does it influence route scale? Or the RSP? I'm asking this because, while doing some tests with a Trident based scenario (and an "old" RSP) and a Typhoon + Trident scenario (with RSP440), I found different limits for the Trident LCs.

In the Trident only scenario I was able to reach 256K IPv6 routes in CEF. In the Trident + Typhoon scenario only 128K IPv6 routes were installed in CEF.

Thanks!

Cheers,

Pedro

Alexander Thuijs Wed, 05/22/2013 - 05:47

hi pedro,

the RSP version (which determines p vs px) doesn't affect the route scale (other than the RIB scale size, which is always higher than what the hardware can support anyway).

Trident is bound by subtree's and the subtree size is defined by the hw-module scale command. You will need the L3XL

to get the 256k per subtree.

Typhoon doesn't have subtree's so you are not limited by that, nor does the hw scale profile apply to these cards.

Depending on your version you may be running into a lowest common denominator limit here, which is a sw limit whereby typhoon is limited to 128k because that is the trident default and l3 scale size. So you may need to override that.

regards

xander

Pedro Morais Wed, 05/22/2013 - 06:39

Hi Xander,

Thanks for your prompt reply.

So the rule "Number of IPv4 routes + 2 * the number of IPv6 routes <= Number of credits as per scale profile" doesn't apply to Trident LCs in default profile?

Cheers,

Pedro

Alexander Thuijs Wed, 05/22/2013 - 08:06

Hi Pedro, that formula applies to both trident (regardless of scale profile) and typhoon.

the subtree limitation is trident specific, and the subtree size is dependent on the scale profile run on it.

That subtree situation does not apply to typhoon (as it uses mtrie)

xander

Jean-Marie NGOK GWEM Mon, 08/19/2013 - 15:54

Hi alexander,

Regarding the use of the ASR9001 as a route-reflector, isn't it much easier to overload the router so it advertises only its self-generated routes? That way this RR will never be in the transit path. An RR only reflects the routes without influencing the BGP path selection process. And this scenario holds if we're using a dedicated RR to reflect only VPNv4 routes.

Alexander Thuijs Mon, 08/19/2013 - 15:57

Several options for RR;

using XRVR (the XR virtual router) running on a blade: no forwarding, just control plane, with dedicated RR functionality mainly as of now.

For a RR not in the forwarding path there is no need for the FIB routes, just how to get to the BGP speakers (so IGP only I would say).

You can use the table policy in RPL to prevent route installation in the RIB and just contain it in the BGP table for reflecting.

I have an article on route policy language also discussing that, let me know if you can't find it.

thanks

xander

Carlos A. Silva Tue, 12/03/2013 - 10:47

hi Xander,

quick question regarding resource allocation. say i'm using the default profile and basically the number of MACs learned is close to 0. does that mean that the box will use the L2 memory to learn L3 routes as long as that memory is free and claim it later if needed?

another question regarding L3: what if I'm not running ipv6, do I get to learn more than 512k ipv4 routes in default mode?

i'd appreciate it if you'd elaborate further regarding this use case.

in 7600 there is a command 'show platform hardware capacity' that allows you to see more specific memory use statistics. is there such a command in asr9k?

thanks in advance!

c,

Alexander Thuijs Tue, 12/03/2013 - 13:02

hey carlos, the memory for L2 and L3 is shared, and is precarved at system boot time.

so if you are in default profile mode, then both L2 and L3 have 512k mac and 512k routes.

the routes cant take the memory from L2 in this case as it is not its region.

if you put the profile mode into L3 or L3XL, in that case the precarved mem allows for more routes at the expense

of L2 mac table size.

I know the command you ref, but we don't have that. I think there is a show l2vpn capability command, but I'm not sure how well it updates in the profile modes; also it doesn't show you how far along you are in the usage of the actual mem tables.

it is a "manual" effort by counting (or using the utility wc -l to count the lines = macs etc, or using show route summary to get a route impression)

regards

xander

Carlos A. Silva Tue, 12/03/2013 - 13:52

Thank you very much, Xander.

Just to be clear, behavior would be the same regarding v4/v6 routes, correct? (Like you explained L2/L3 entries)?

Alexander Thuijs Tue, 12/03/2013 - 14:18

Correct when we state L3 scale it is v4 and v6 combined as per formula mentioned above.

so L3 profile with 1M credits, v4 taking one credit, v6 taking 2. Typhoon having 4M credits.

Note that typhoon has dedicated L2 and L3 memory and no carving is necessary, hence the profile config doesn't apply there.

cheers!

xander

wxue2 Wed, 12/11/2013 - 07:17

Hi, Xander :

     Can I ask a short question: assume I choose the default profile for Trident, while Internet routes increase fast and nearly approach 512K. If they burst beyond 512K, what will happen? Could some kind of outage happen, and can you share some comments on how to avoid this?

     Thanks .

Alexander Thuijs Wed, 12/11/2013 - 07:28

If you happen to run into that scenario, the prefixes that can't be added to the table will have forwarding issues: either they follow a less specific route, the default route, or get dropped.

You could use bgp maximum prefix for instance, but that is merely a mitigation obviously, but it provides some extra level

of alerting in case you are reaching that scale.

the only true resolution to this is to move to a L3 or L3XL profile mode to have the increased L3 scale.

regards

xander

mlinder01 Wed, 12/11/2013 - 07:35

Does the L3XL scale profile still require the Scale-Profile license? This is no longer necessary, right?

yansenyansen Tue, 01/14/2014 - 00:29

Hi Xander,

for memory (bytes) shown on "show route vrf all sum"

Is that memory on the RSP?

so in this case, is the RIB maximum memory 6GB for the RSP-440-TR?

or

will the 6GB be shared with other processes? If shared with other processes, what ratio is allocated to the RIB?

correct me if i am wrong

BR.

Alexander Thuijs Tue, 01/14/2014 - 04:11

That is correct, that is memory consumption on the RSP.

the TR version of the RSP440 has indeed 6G (12G on the SE version), this is shared between all processes.

XR being a 32 bit operating system cannot allocate more than 4G per process. The memory mapper is 64 bit, which means

that different processes can allocate 4G each, max.

Deduct some heap size, and you're left with about 3G of usable memory per process.

For BGP this means you can pull in about 10-20M paths (depending on path attributes, multipath and overlapping attributes).

Each process allocates memory on a first come first serve or as needed basis.

Typhoon cards, being able to serve 4M routes in the FIB (which is derived from the RIB), can easily be fit within the memory of the RIB. Fortunately a RIB route is not 1000 times the size of a FIB route.

regards

xander

bill.xing Fri, 05/09/2014 - 03:21

Hi experts.

My question is about the technical background of the following line cards. The line cards' part numbers are as follows:

    A9K-MOD80-SE
    A9K-MOD160-SE

I have read their corresponding data sheets. However, I wasn't able to draw a distinction between these line cards.
In other words, what is the difference between MOD80 and MOD160?

Alexander Thuijs Fri, 05/09/2014 - 04:44

Hi Bill,

Both the modular cards have 2 bays. The difference is the number of NPU's supporting the bays. The mod80 has 2 NPU's, 1 per bay. The Mod160 has a total of 4, giving 2 NPU's per bay.

What does that mean:

The Mod160 gives support for the 8x10G MPA and 2x40G MPA, extra over the mod80, which cannot carry these 2 (because of bw limits; an NPU can go up to 60Gbps).

When you take eg a 4x10G MPA, in the MOD80 they are all served by a single NPU. On the MOD160 it is 2x10G per NPU.

So pps performance wise the mod160 is more powerful, but more expensive, and the mod160 gives access to two new higher density MPA's (2x40G and 8x10G) that the mod80 can't serve.

 

regards

bill.xing Fri, 05/09/2014 - 12:16

thanks for the reply

I would appreciate your idea about my assumption about the ASR 9000 series line card and RSP modes, which are as follows:
A) The Transport optimized version
B) The Service Edge optimized version

Comparing the Service Edge with the Transport one: Transport has fewer queues and more basic QoS features than the Service Edge one. Even so, their FIB and MAC address table characteristics are the same as one another. Another parameter is the price: the Transport one is cheaper than the Service Edge one.

In your opinion, is it ok to use the transport version in the core of a carrier where the only critical parameters are forwarding speed and capacity, and not QoS features or policers?

 

Thank you so much for your help.

 

Alexander Thuijs Sat, 05/10/2014 - 05:15

Hi Bill!

Both LC types TR vs SE are the same in hw forwarding capacity.

What the SE gives you more is:

l2transport interfaces

l3 interfaces

queues for shaping

policer scale

regards!

xander

bill.xing Tue, 05/13/2014 - 05:11

Hi Xthuijs.

Thank you for your reply && the clarification.

Speaking of "l2transport interfaces," does TR support xconnect?

And can you clarify the "l3 interfaces" part? Does that mean the TR ones don't support them?

 

Again thank you very much.

Alexander Thuijs Tue, 05/13/2014 - 05:16

Hi Bill!

TR cards also support xconns, just at a lower scale.

Oh sorry, I meant to convey that the overall L3 interface (total count) scale of TR cards is lower than on SE cards.

xander

Jean-Marie NGOK GWEM Mon, 07/28/2014 - 22:27

Hi Alex,

We have installed an ASR9010 as an edge router for our internal core network and the hardware configuration is pretty straightforward linecard wise: 6 cards of 24x10G, 2 cards of 36x10G with dual RSP-440.

The design recommends shutting down a few ports or costing the router out of the network in case we lose one RSP-440, to prevent oversubscription of the 36x10G linecards. But I am advising QoS to give different traffic different treatment in that case. What would you advise us to do?

Alexander Thuijs Mon, 09/22/2014 - 05:10

configure or load a bunch of routes, take a traffic generator and set up various streams to the various destinations served. keep it cranking up in terms of packet rate.

If you are interested in the route learning capability, then keep the stream going, send a BGP withdraw or disconnect the peer, we'll be dropping, then start the bgp session and see the prefixes being loaded and starting to forward. measure the time from bgp peer up to prefix loaded to hw installed (forwarding)

cheers

xander

steven.pkwong Mon, 09/22/2014 - 02:47

hi Xander,

 

I have a question on version 5.1.3 on the ASR9000 about the number of VPLS max MAC addresses. The network is using typhoon line cards.

I found that the VPLS max MAC addresses is 128000 on version 5.1.3. I tried to issue hw-module profile scale default under admin configuration mode, but it has no effect.

How can I change the VPLS max MAC addresses to 512000? Is it a matter of changing the scale profile?

 

Alexander Thuijs Mon, 09/22/2014 - 05:08

the hw module scale command pertains to trident, to divvy up the mem for L3 and L2.

typhoon has 2 unique chips for that so the command doesn't apply there.

the default mac limit per BD in 5.1.3 was set to 128k; you can change the mac limit in the bd or ac config with the command "mac limit" under that config section.

though if you need 512k mac in a single BD, that seems a bit unusual :)

xander

Emperor2000 Sat, 10/11/2014 - 09:10

Hello

It seems that we are running into the subtree limitation and I would like to know if there is some way of getting around it?

We aim to terminate 2 full tables in a VRF but we are running into the max 250k prefix issue that is mentioned above.

We are running Trident cards and RSP-4G in the system.

The box isn't running that many VRFs atm, so according to my understanding of the above guide I should not run into the limitation? When I'm looking for the table id I get 0x1500 as output.

We have put the system into L3XL mode, so as I understand it we should be able to get more prefixes into one single VRF?

 

Alexander Thuijs Mon, 10/13/2014 - 06:01

hi!

if you have XR 4.1 or later, you can use the "mode big" config command under the vrf definition to have it stick to a tableID of 1-15 to keep it in the large space. Without that, and given the current tableID, it is a small vrf, limited to 256k routes in the l3xl profile. TableIDs larger than 15 have only one subtree, which in l3xl is limited to 256k.

regards

xander

Emperor2000 Wed, 10/22/2014 - 00:59

Hello

Thank you for the reply. If I understand correctly, then if you use the "mode big" command under a vrf, you should be able to have more than 256k prefixes, since you are then effectively using a table id between 1-15 (unless, I presume, you are doing this on more VRFs than can fit inside the first 15 addressable blocks).

Alexander Thuijs Wed, 10/22/2014 - 07:42

that is correct! every time you configure a vrf the table ID bumps; even if you remove a vrf with ID 7 while you're currently at 10, the next one will be 11, not reusing 7.

So in order to make the table ID sticky when you have more than 15 vrf's defined, you can ensure that this vrf in mode big gets the table ID you need to have that larger route scale.

One note, just to make sure, this is trident specific (subtree), doesn't apply to typhoon (mtrie)

thanks

xander

Roberto Lopez Fri, 10/17/2014 - 16:28

Xander,

A couple of questions I really would appreciate you answer...

1) Is the maximum number of MAC addresses for a Typhoon card also 512,000?

2) What is the maximum number of ARP entries in an ASR 9006? Is this measured per linecard?

Best Regards,

Robert.

 

 

 

 

Alexander Thuijs Sat, 10/18/2014 - 06:17

hi robert,

typhoon has 2M mac.

arp entries; they are per LC (there is no need for an ingress LC to know the arp adj for an egress interface on a different LC right), this is tested to 128k per LC, but this technically can go as far as memory allows.

note that if you have bvi or bundle's those arp's have to be replicated to those LC's that have members in that BD where the bvi is or to those LC's that have members in that bundle.

Note that ARP is not part of the MAC scale, that would be 2 different things.

cheers

xander

Roberto Lopez Mon, 10/20/2014 - 13:20

Hi Xander,

Thank you very much for your answer and the article in general because it contains  a  lot of useful information.

I have a terrible confusion regarding the combination of an RSP-440SE or RSP-440TR with TR or SE linecards. For instance, if I use an RSP-440SE with MOD160-TR linecards, is there still the restriction of 8 queues per port on the MPA that I insert in the MOD? Vice versa, if I use an RSP-440TR with linecards like the MOD160-SE, can I use the 256K per NPU available for SE linecards?

I still can not understand whether the RSP dictates the number of queues supported by the linecards, or whether this is independent per linecard and the RSP has nothing to do with it. I believe the RSP has no influence on the number of queues of the linecards, but I would really appreciate your guidance on this matter.

 

Best regards,

Robert.

Alexander Thuijs Mon, 10/20/2014 - 14:46

hi robert! thanks! :)

yeah, the LC version of TR/SE determines the queues available in the NPU's, regardless of the RSP version.

the RSP SE/TR determines the control plane scale: if you have large qos scale, large routing tables and so on, then you want the SE version of the RSP.

Also when running BNG you need the SE RSP version.

otherwise if you just need queues, you can use the TR RSP and the SE linecards.

xander

Roberto Lopez Tue, 10/21/2014 - 21:47

Xander,

Thanks again for helping me clarify the concepts. Just one more question... TR LCs only support 8 queues per port, but I also see that TR LCs support 8K policers per NPU; if this is true, it must mean that policers do not consume queues, am I right? So I can have multiple subinterfaces with QoS policy-maps for policing without worrying about using any queues at all.

 

Best Regards,

Robert.
 

Alexander Thuijs Tue, 10/21/2014 - 23:21

that is correct robert,

a policer doesn't require a queue.

if you have 3 classes with a policer only, they all use the default class-default.

on egress only priority, bandwidth or shape at parent instantiates a queue.

if you use the cmd show qos int <interface> <direction> you can see the QID. if the QID is unique, it means you burn one of those 8 on the TR card.

check also the CL presos from 2013/2014 (Orlando/San Francisco), session ID 2904, for more interesting detail on QOS that expands on the above and the concept of QIDs etc.

cheers!

xander

Deniz AYDIN Tue, 11/04/2014 - 06:46

Hi Xander,

Can you give information about hw-module profile package? I can not find any information about this in the docs.

Thanks.

Alexander Thuijs Tue, 11/04/2014 - 09:23

Hi Deniz,

because the ucode space in the Trident NPU's is limited and people requested too many features :) we had rip something out in favor of another feature.

So the hw module profile feature l2 (as opposed to default) gives you PBB (legacy PBB) at the cost of netflow, v6 urpf and ipv6.

There are also other feature or profiles that provide for some code optimization for instance as an LSR router, but coming at the cost of something then also.

In general, except for hw-module profile scale l3xl (to make sure trident has enough route scale when running as an inet edge), you generally don't need to tune these.

cheers!

xander

Deniz AYDIN Thu, 11/20/2014 - 04:35

Hi Xander,

We are using IOS XR version 5.1.3, enhanced line cards, and the l3 scale profile. We need full route table support (internet in a vrf). But it says "depricated" for l3 scaling! I can't find any information in the docs, neither in the configuration nor the command reference guide. I guess default scaling will not be enough, as the current internet table is more than 500K.

hw-module profile scale ?
  bng-max  BNG max scale profile
  default  Default scale profile
  l2       L2 scale profile
  l3       L3 scale profile (depricated)
  l3xl     L3 XL scale profile
  lsr      LSR scale profile
  sat      nV Satellite scale profile

Best Regards,

Deniz.

Alexander Thuijs Thu, 11/20/2014 - 04:43

Hi Deniz, the profile scale l3/l3xl commands are for Trident linecards only. The enhanced ethernet cards are typhoon, which has 4M fib routes and doesn't have the subtree restriction discussed above.

The only thing to note is that the l3xl changes the memory heap size a bit for Intel Processors (eg RSP440), so BGP has a bit more scale in terms of its table (eg when you have millions of paths).

But the l3/l3xl to reach 1M or 1.3M FIB is for Trident only, Typhoon has 4M regardless of that command.

xander

Deniz AYDIN Thu, 11/20/2014 - 11:12

Ahh, I missed that, sorry. You have already explained this in the doc :(

Thanks a lot.

Won Lee Mon, 12/15/2014 - 12:46

Hi Xander,

How does PIC (Prefix Independent Convergence) Edge affect FIB scaling? (ASR9k RSP2/Trident LC/L3 profile in an SP network.) Would each PIC backup path be counted as a separate FIB entry? If so, that means the FIB entry max will be lower than the listed max.

 

Thanks in advance!

-Won 
