Thank you for all your help and support.
I have a big question about the ASR 9000 family routers that I can't find an answer to.
I have an ASR 9006 router on which I want to terminate 512K subscribers, but unfortunately I can't find any license for 512K subscribers.
The only license that Cisco offers is A9K-BNG-LIC-8K, per slot or line card.
If I have two A9K-MOD80-SE line cards, should I pre-order only two licenses, or more?
I also want to know the difference between asr9k-bng-px.pie-5.1.1 and A9K-BNG-LIC-8K.
And if I don't buy a license, does my router still work as a BNG, and how many subscribers does it support without any license?
Thank you again
Hi Foad, in that hardware configuration you won't be able to reach 512k subs. Here is why:
On each NPU we have a hard limit of 32k subs, because the uIDB table (the interface descriptor table) is 32k entries max. The MOD80 has 2 NPUs, so on the MOD80 you can never go beyond 64k subs.
The 9006 has 4 LC slots: 4 x 64k = 256k. And that leaves no room for uplink interfaces (and no redundancy on the access side either).
For this scale you probably want to consider the 9010 or 9922, which provide more slots.
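The arithmetic above can be sketched as a quick back-of-the-envelope check. This is a hypothetical illustration (the function name is made up); the per-NPU limit and NPU count per MOD80 are taken from this thread:

```python
SUBS_PER_NPU = 32_000   # hard uIDB (interface descriptor table) limit per NPU
NPUS_PER_MOD80 = 2      # the MOD80 has one NPU per MPA bay

def chassis_max_subs(lc_slots, npus_per_lc=NPUS_PER_MOD80):
    """Upper bound on subscribers for a chassis full of identical LCs."""
    return lc_slots * npus_per_lc * SUBS_PER_NPU

print(chassis_max_subs(1))  # one MOD80: 64000
print(chassis_max_subs(4))  # fully loaded ASR 9006: 256000, with no ports left for uplinks
```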
"so on the MOD80 you can never go beyond 64k subs"
Xander, are you talking about 64K single stack or 64K dual stack sessions per MOD80?
64k dual stack, Dimitris, on the MOD80. That is the max that card can ever carry (because it only has 2 NPUs).
Here is a good overview I think
Thanks for the quick answer!
Based on your answer, I assume that the term "sessions" in the scaling matrix refers to dual-stack sessions.
Sorry for coming back to this, but we need some additional clarification regarding ASR9K BNG scalability.
Our main question is whether there are any session limits per 10GE port or per MOD80 bay.
To fully understand the scalability options, I would like to ask you the following questions, if possible:
1. I have one MOD80 with one MPA-4x10GE. Can I reach 64K sessions, or is each NPU assigned to one bay, so 32K sessions per MOD80 bay?
2. I have one MOD80 with one MPA-4x10GE. Can I distribute the supported sessions unequally between the 10GE ports and still reach the maximum supported sessions (e.g. if the max is 64K, can I distribute them as 40K on the first port, 24K on the fourth and none on the second/third)?
3. I have one MOD80 with one MPA-4x10GE. Assuming I can reach 64K sessions (question 1), can I use 2 x 10GE as access/customer-facing interfaces (terminating the PPPoE sessions) and the other 2 x 10GE as uplink interfaces?
4. I have one MOD80 with two MPA-4x10GEs. Can I distribute the 64K sessions unequally between them and still reach the maximum supported sessions (e.g. 20K on the first MPA and 44K on the second)?
5. I have a full 9006 (4 x MOD80, 8 x MPA-4x10GE) and I want to reach 256K sessions. Why do you say that I don't have room for uplink interfaces? Can't I use some of the 10GE interfaces for uplinks?
When I say "sessions" I always mean dual-stack PPPoE sessions :)
Thank you very much in advance,
1) Each NPU has a limit of 32k. A MOD80 has 2 NPUs, one per bay.
So the total number of sessions on that MPA can NOT exceed 32k in a MOD80. There is no per-port limit. However, if you have shaping QoS, you need to allocate the QoS resources to the ports where you need them, because the parent shapers come in 8k chunks: if one interface has 10k sessions, you need to assign 2 chunks to that interface. This chunking only applies to sessions that need shaping (policing is not affected by it).
2) Yes you can (still subject to the 32k-per-NPU limit). When there is no QoS need, no additional config is required; if shaped QoS is needed, the chunking from answer 1 applies.
3) Yes you can! Say you have 16k sessions EACH on ports 0 and 1; you can then use ports 2 and 3 for uplinks, no problemo!
4) Yes you can! :) The LC can hold 64k max, which means 2 fully loaded NPUs, but if you only need, say, 40k, then the first NPU can hold 32k and the other one 8k. Not a problem.
5) 4 MOD80s, each with 2 NPUs accounting for 32k each, is 256k total. If you have the 4x10 MPAs and you're not using ALL your interfaces for BNG access, then yeah, you have some to spare for uplinks, sure thing. To reach this scale you need LC-based subs, so you can't use bundle access (as that would pull them to the RSP).
Having 16k subs over a 10G is about 80k each; doesn't seem like a lot of bandwidth, but hey, that is a different story.
Dual stack: see the chart above. Yes, we can do 128k today, and the target is to go higher and higher! (But that likely means you need more LCs, because we need to keep distributing the sessions across the LC processing, as you can understand.)
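Since shaped-session resources come in fixed 8k chunks, the allocation per interface is a simple ceiling division. A hypothetical sketch (the 8k chunk size comes from answer 1 above; the function name is made up):

```python
import math

CHUNK_SIZE = 8_000  # parent shapers are allocated in 8k chunks (per answer 1 above)

def shaper_chunks_needed(shaped_sessions):
    """How many 8k QoS chunks an interface needs for its shaped sessions."""
    return math.ceil(shaped_sessions / CHUNK_SIZE)

print(shaper_chunks_needed(10_000))  # 2, matching the 10k-session example above
print(shaper_chunks_needed(8_000))   # 1 -- a second chunk is only needed from 8001 up
```

Remember this only matters for sessions with shaping; policed sessions don't consume these chunks.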
Thank you very much Xander. You were very enlightening once again :)
To sum up, if I have a 9006 with 2 x MOD80 + 2 x MPA (one MPA in each MOD80), I can reach up to 64K dual-stack PPPoE sessions (RP- or LC-based).
In order to reach 128K dual stack PPPoE sessions, I need 2 additional MPAs and the corresponding licenses, but the sessions must be LC based.
haha thanks Dimitris :) and you are correct! that summary is accurate!
These are some of the benefits of LC subscribers:
Does this mean that I can have 64K sessions on the RP and an additional 64K on 1 x MOD80 + 2 x MPA, so 128K in total???!!!
haha, no you can't do that, Dimitris :) You still have the limit of 32k uIDBs per NPU.
So the MOD80 with 2 NPUs is always limited to 64k max, total, ever.
We currently support 64k subs per LC; that is a testing (not technical) limit. On the 36x10, although you can theoretically support more than 64k subs, we haven't validated that.
This is regardless of whether the subscriber is on the RP or LC.
In order to go beyond 64k subs, you need 2 LCs at minimum.
If the LC used is a MOD80, it is capped at 64k due to the hardware limit explained above; however, an LC with more than 2 NPUs can go beyond 64k (e.g. 36x10, 24x10, MOD160).
Your comments were really helpful.
Looking into the hardware architectures of the 36x10G, 24x10G and MOD160, I realized they have 6, 8 and 4 NPUs respectively. However, it's a bit strange that the 36x10G has fewer NPUs than the 24x10G. (Source: Cisco Live! BRKARC-2003, 2013.)
1- Can I simply conclude that they support 192k, 256k and 128k subs? Do the tests confirm that?
2- I want to put two ASR 9006s as BNGs in two different sites and connect them via nV, each supporting 512K subs, fully redundant. Having obtained the cluster licenses, do I need to worry about anything else?
The 24x10 has 3x10G per NPU and the 36x10 has 6x10G per NPU, so it is effectively double loaded. The choice here is between price and performance.
The 24x10 has more horsepower but is more expensive per port. The 36x10 is denser, with a lower price per port, but also has less horsepower, as the full 45 Mpps per direction is now shared over 6 interfaces instead of 3.
A line card supports 64k subs total, regardless of the number of NPUs it has. The limitation is not the NPU (an NPU can hold 32k subs max) but the LC CPU power, which has been tested up to 64k but can go to 128k. So to reach, say, 512k subs, you need the subs spread over 4 LCs.
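Both trade-offs mentioned here boil down to simple division. A hypothetical sketch using only the figures quoted in this thread (variable names are made up):

```python
import math

NPU_MPPS = 45              # per-direction forwarding capacity quoted per NPU
MAX_SUBS_PER_LC = 128_000  # LC CPU ceiling mentioned above (64k is the validated figure)

# Price/performance trade-off: how many 10GE ports share one NPU.
for card, ports_per_npu in (("24x10GE", 3), ("36x10GE", 6)):
    print(f"{card}: {NPU_MPPS / ports_per_npu} Mpps per port")
# 24x10GE gets 15.0 Mpps per port, 36x10GE only 7.5 Mpps per port

# Line cards needed to spread a target subscriber count.
print(math.ceil(512_000 / MAX_SUBS_PER_LC))  # 4 LCs for 512k subs, as stated above
```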
Cluster wouldn't increase the scale; it only provides stateful redundancy. Note that the 512k subs is an RP scale limitation in that case, and since the control plane is fully shared between the 2 nodes in a cluster, you don't get more control-plane power.
Note also that with 512k subs you will want LC-based subscribers (i.e. terminated on gig or 10gig subinterfaces, not bundles). Bundle interfaces pull control of the session to the RP; physical (sub)interfaces leave the control on the LC CPU.
Cluster is particularly useful when you have bundles whereby one member is on one node and the other member on the other node.
So what I am trying to say is that at 512k scale, you don't want to use cluster technology anymore, and you should resort to stateless redundancy.
Thank you, Xander, for the elaborate answer.
1- How many PPPoE subs can I terminate on the RSP itself, I mean without the sessions being handled on the LCs? Though I don't think it's recommended, yes?
2- Is there any difference between PPPoE and IPoE sessions on the ASR9K regarding the load on the LC/RSP, features, restrictions, etc.?
3- Attached is a scheme of what I have considered for a BNG of 256K subs. Is it correct? If I add two MOD160-SEs to the remaining slots, can I count on 512K subscribers?
RP-based (or bundle-based) subscribers can go up to 128k, terminated on the RSP.
IPoE and PPPoE have no difference in terms of scale etc.; only note that PPPoE adds 8 bytes, so take fragmentation, or accommodating it, into consideration.
In your current configuration, unless I miscalculated, I come to 4 x 32k subs = 128k, but yes, you can add more MOD cards to get more sessions. These then need to be terminated on the gig/10gig interfaces to keep the control on the LC and not on the RP.
1- From your first sentence, I gather that the ASR 9001 could support up to 128k since it is RP-based, but because it has just 2 NPUs, it's limited to 64k subs. Right? Is there any way to put an extra 64k subs on the RP?
2- I corrected the diagram and re-attached it. Now it supports 256k subs and can get up to 512k ;)
PPPoE sessions come in on port bundles over the first two interfaces of the MPA-4x10Gs, and the uplink traffic goes out the next two ports. I believe that since the port bundles are built from two interfaces on the same NPU, the load isn't sent to the RP. Correct?
The limitation for the ASR 9001 is the number of NPUs it has: 32k per NPU max.
If you have a bundle over the two NPUs, it would bring the scale down to 32k (since each sub is programmed on each of the bundle-member NPUs).
When you have bundles, the control is delegated to the RSP, regardless of where the members actually are (same/different NPU or LC doesn't matter).
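The effect of bundling on scale can be sketched like this. A hypothetical illustration of the replication just described (the function and its names are made up; only the 32k-per-NPU figure comes from the thread):

```python
SUBS_PER_NPU = 32_000  # per-NPU uIDB limit quoted earlier in this thread

def max_subs(npus, bundled):
    """With distinct subs per NPU, capacity adds up. In a bundle, every
    sub is programmed on every member NPU, so the whole bundle caps at
    the single-NPU limit regardless of the member count."""
    return SUBS_PER_NPU if bundled else npus * SUBS_PER_NPU

print(max_subs(2, bundled=False))  # 64000: the ASR 9001's two NPUs used separately
print(max_subs(2, bundled=True))   # 32000: a bundle spanning both NPUs
```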
I'm new to ISG/BRAS and its issues. I've studied the Cisco ISG documents, but I can't quite grasp the topics I'm most interested in for working with Cisco ISG features. For example: we want destination-based accounting for our PPPoE users, or to show an advertisement at the start of a user's PPPoE session, or to put some users in a different VRF, or to apply different QoS for some special destination addresses. How do we do these with a BRAS? I have some knowledge of basic PPPoE scenarios and their implementation on the Cisco 7206VXR.
All possible! You're looking at an HTTP redirect scenario; the users don't even have to be in a different VRF for that. The advantage of using HTTP redirect is that we can take the user's request and send it somewhere different based on the redirection target; this can also be based on the destinations they want to reach.
Once you decide that the redirection should stop, we can remove that service from the session for full access.
Same with QoS: it can be applied at will and modified at will too.
Thank you for your answer, but how can we start working with ISG or IWAG? We need to terminate about 170K concurrent PPPoE sessions in one ISG box through a single 10-gig interface; which box and AAA server are appropriate for this? Could you please give a simple scenario with the related commands showing how to use multiple HTTP redirects? What are the traffic and control types in a class map, and how and when can we use them? One more thing: can we use hotspot scenarios at a 170k-user scale, and is that appropriate with respect to the PPPoE scenario? If possible, could you please point me to some good documents about ISG and ISG scenarios?
I am totally confused, because it clearly says that dual-stack sessions take more memory. Dimitris said that he has one MPA on each MOD80. This would mean that only one NPU per MOD80 is in use, right?
If this is correct, then he can have 64K IPv4-only sessions and 32K dual-stack sessions?
I know about the 32K limit per NPU, but I think that because of
Please check the BNG scale roadmap above. Maybe I am missing something.
This is what I believe, after trying to figure it out and of course after Xander's assistance :)
Although the table above is quite old and the limits have probably changed by now, the limits in the scale roadmap (e.g. 128K IPv4 sessions, 64K IPv6 sessions, etc.) apply to the whole chassis.
The scalability of the modules/NPUs stays constant (32K sessions per NPU), but the whole-chassis scalability has been changing over the years.
So, assuming the table is up to date, you can go up to 64K IPv6 sessions in an ASR 9006 with the following configuration:
The above table is still accurate for the moment. With LC-based subs we can get 128k dual stack or IPv6-only, and IPv4-only on LC subs is 256k. These numbers are system limits.
Where the table says "with new hw", it means the Tomahawk (TH) line cards. This is what we are currently working on (BNG on the Tomahawk LC).
With the current Typhoon LCs we are pretty much capped by the LC CPU performance and memory. Tomahawk, with a faster, more capable processor (hex-core Intel) and more memory, opens the door to more session scale, along with the RSP880's larger memory capacity.
The dual-stack case (or v6-only, for that matter) merely sees a lower cap due to the extra memory each session consumes compared to v4-only. For the hardware forwarding/NPU, it is still a single session.
I have to be sure about the max number of PPPoE sessions. You have explained it well, but I'm still not sure.
This is the setup:
Slot 0: 1xMOD80 with 1xA9K-MPA-2X10GE
Slot 1: 1xMOD80 with 1xA9K-MPA-2X10GE
1. Every MOD80 has two NPU's, one per bay. In my
2. In the BNG Scale
Hey Smail, correct: in your config, even though you have 2 NPUs per slot, you are only using one, since you have only one MPA installed.
Each LC can support 64k sessions, with 32k per NPU. So per LC you minimally need 2 active NPUs to reach the LC scale of 64k.
Note that if you have RP-based sessions, meaning access via a bundle, and you have 2 members in your bundle, the sessions will be programmed on each NPU that serves a member.
In order to reach 128k, you'd need 2 LCs, each with 2 NPUs carrying sessions.
But that means, for your hardware config here, you can't have multiple members in your bundle.
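The sizing rules in this last post reduce to two ceiling divisions. A hypothetical sketch using the figures quoted in this thread (the function name is made up):

```python
import math

SUBS_PER_NPU = 32_000  # per-NPU uIDB limit
SUBS_PER_LC = 64_000   # validated per-LC limit quoted in this thread

def min_hardware(target_subs):
    """Minimum LCs and session-carrying NPUs for a target session count."""
    lcs = math.ceil(target_subs / SUBS_PER_LC)
    npus = math.ceil(target_subs / SUBS_PER_NPU)
    return lcs, npus

print(min_hardware(128_000))  # (2, 4): two LCs, each with both NPUs carrying sessions
```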