maximum number of T1 serial interfaces via MLPPP?

Answered Question
Feb 24th, 2009


If I have a 2800 series or a 3800 series router, what is the maximum number of T1 serial interfaces I can bundle together to form one logical multilink port?

I usually run 4xT1 (6 Mbps), done via two VWIC2-2MFT-T1/E1 interface cards. But I wonder if it is okay to do 8xT1, or even more, to form a single logical port?
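For reference, this is roughly how the current bundle is built; the controller/interface numbers and addressing below are made up for illustration:

! One full-rate channel group per T1 (repeated for each VWIC2-2MFT-T1/E1 port)
controller T1 0/0/0
 channel-group 0 timeslots 1-24
!
! The logical multilink port that the question is about
interface Multilink1
 ip address 192.168.100.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
! Each member T1 joins the same multilink group
interface Serial0/0/0:0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1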


Giuseppe Larosa Wed, 02/25/2009 - 00:05

Hello Joyce,

It should be possible, but a single DS3 or E3 link can probably be cheaper than 8 T1s (this depends on the offers).

I would move to a T3 or E3 link if possible; both the C2800 and the C3800 support a network module for a single T3 or E3.

It should be easier for the router to handle a single T3.

Hope to help


Paolo Bevilacqua Wed, 02/25/2009 - 01:57

8 T1s is OK, probably even 12.

As Giuseppe recommends, beyond a few circuits look for a DS3; it's much less trouble.

blackladyJR Wed, 02/25/2009 - 07:38


Yes, I know DS3 is better, but I have an issue with DS3 delivery at this location, and the price is significantly higher than 8xT1. So I need to know whether Cisco has any official documentation on the maximum number of links in an MLPPP bundle the router can handle via multi-port serial cards on either the 2800 or the 3800.

In the old days when we just did load sharing, there were limitations in EIGRP of 4 or 6 paths maximum, etc. That's why I wonder whether there is any document for MLPPP showing the maximum number of T1 interfaces. I can't find any reference on the Cisco website, so I wonder if anyone has the information. For example, does the maximum number vary with the type of interface card used on a given router platform? Say I am using a 3800 with 4-port serial HWICs: how many of those 4-port cards can I bundle together? Or if I have just the VWIC-2MFT cards instead, at 2 ports per card, how many of those can I bundle together?



marikakis Wed, 02/25/2009 - 08:43


First of all, I agree with the previous posts. Also, when bandwidth demands grow to this point, the general tendency is for them to keep growing quickly to the next level, so be prepared. I cannot answer all your questions in full, but I can describe what we did in a similar situation, where we used 8 E1s until we were able to get an E3, in case that is of any help, however limited.

We were using 7200 routers at the endpoints of 8 E1s. We had 2 multilinks with 4 E1s each, and we were load balancing traffic over the 2 multilinks. I don't know the reason that led to the grouping in fours, because those were already in place when I started working, and bandwidth demands grew fast, so the multilinks quickly died. Note that each multilink is one entity in the eyes of the routing protocol. In our case, OSPF was load balancing between 2 equal-cost links. Even if you do run into some type of limitation, the worst-case scenario (besides not being able to bundle at all) is being able to bundle only 2 interfaces in one multilink. Even in that case you can still make it work with the routing, provided you have enough available slots on your routers, of course.
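Roughly, the routing side of that setup looked like the sketch below; the addresses and OSPF process number are illustrative, and each Multilink interface had four member E1s joined to it in the usual way:

! Two bundles of 4 E1s each; OSPF sees two equal-cost routed links
interface Multilink1
 ip address 10.0.0.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Multilink2
 ip address 10.0.0.5 255.255.255.252
 ppp multilink
 ppp multilink group 2
!
router ospf 1
 network 10.0.0.0 0.0.0.255 area 0
 ! the default maximum-paths (4) already allows both links to be used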

Kind Regards,


p.s. Note also that those routers were dedicated to handling just those 2 multilinks. They were not the latest NPE, but you can get an idea of the overhead. When we removed those multilinks and put the E3 in place, the CPU usage graphs took a dive.

Joseph W. Doherty Wed, 02/25/2009 - 10:05

I don't recall seeing a documented logical bundle limit for MLPPP. It's possible you'll first run into a limit for the number of supported T1 interfaces on the platform, plus you'll likely exceed the Cisco recommendation for T1 interfaces on the various ISRs (which isn't too high for the 2800 series). This might be a question best answered by TAC or Cisco sales support.

marikakis Wed, 02/25/2009 - 10:13

Joseph is probably right. We might have done the grouping in fours just because we had 4-port adapters available, or to keep at least one multilink somewhat stable in case we had a failure in one of the E1s (a flapping E1 going up and down all the time) or something like that (e.g. an adapter failure). Those failure scenarios are probably factors to take into account.

Correct Answer
marikakis Wed, 02/25/2009 - 11:16

Here is a relatively recent document (Updated: Jan 29, 2008) that suggests the use of 8 T1/E1s is possible, at least in high-end products, and warns about CPU usage even in that case:

The above document refers to a white paper:

Table 1 in this white paper suggests that you can expect to use MLPPP with any serial PA (probably because, as the same table says, MLPPP is a software-only solution) and that you can have 2-8 T1/E1s per bundle in general.

Still, to be on the safe side, you might consider asking Cisco about this, as Joseph suggested.

blackladyJR Wed, 02/25/2009 - 11:42

Hi Maria,

Thanks for finding the URLs. From the white paper, as you said, Table 1 says 2-8 and doesn't say it has to be a PA card for the 7x00 series routers. So probably a lower-end 3800 is fine for 8xT1.

One thing in Table 2 puzzles me: it says "Supports IOS Quality of Service: NO". What does that mean? I certainly have a policy-map applied to the multilink interface today in many routers, so that feature in Table 2 saying "no" is very odd, unless it means some different kind of QoS :)

Yes, to be safe, I will open a TAC case and give them the exact 2800 and 3800 models I plan to use, to see whether they see any issue with it.

I won't go higher than 8 anyway, as 8 T1s will be close enough to the cost of a DS3. Right now I have 4 T1s and plan to increase up to 8, and it still saves money to stay with 8xT1 vs. DS3. The customer is fully aware of the benefit of going with a DS3 at a fractional port speed, with the ability to grow easily.

Thanks for all your help.


marikakis Wed, 02/25/2009 - 11:53


I did not comment on how recent the white paper is, because it doesn't say. I thought that even if it is old, those would probably be the worst news you could hear. I also noticed earlier that at the last line of the table it mentions even 2500/3600 CPE support, so this is probably encouraging. The fun with this white paper is at the beginning, where it describes all the difficulties that those solutions try to overcome. The description is very realistic.

I am glad I could be of some help. It is always a pleasure to try to answer your tough questions :-)

Kind Regards,


Joseph W. Doherty Wed, 02/25/2009 - 12:34

"I did not comment on how recent the white paper is, because it doesn't say."

The PDF version shows a copyright of 1998.

marikakis Wed, 02/25/2009 - 14:52

Joseph, thanks for pointing that out. I have found myself puzzled on various occasions by the date missing at the bottom of some Cisco web documents. Now I have learned the trick :-)

Joseph W. Doherty Wed, 02/25/2009 - 12:41

Perhaps another option, if you don't intend to use MLPPP fragmentation, is to use multiple routed links. (I suspect MLPPP fragmentation would be the major CPU consumer.) I realize routers didn't use to support more than 6 maximum-paths, but this has been increased in some later IOS releases. At the moment I'm looking at a 2811 running 12.4 whose OSPF notes that it supports 16. CEF does a fairly decent job of balancing flows, quickly.
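A rough sketch of that alternative, assuming OSPF and plain routed point-to-point T1s; the numbers are illustrative, and the maximum-paths ceiling depends on the IOS release:

ip cef
!
router ospf 1
 ! raise the ECMP limit so all parallel T1 routes are installed
 maximum-paths 8
!
! each T1 is an ordinary routed link (repeat per circuit)
interface Serial0/0/0:0
 ip address 10.1.1.1 255.255.255.252
 encapsulation ppp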

marikakis Wed, 02/25/2009 - 14:48

The first document above says about the CPU usage: "The trade-off for the increased functionality is that Multilink PPP requires greater CPU processing than load balancing solutions. Packet reordering, fragment reassembly, and the Multilink PPP protocol itself increases the CPU load." So, many factors contribute to the end result of the actual CPU usage of the MLPPP solution, but I suppose Joseph is right in the sense that MLPPP fragmentation is something you might consider avoiding, in order to cut some of the overhead (unless you are not satisfied with the traffic distribution, your router can handle more load and you think you could push things further to balance the links). Still, the document suggests at the very beginning to "Disable Multilink PPP fragmentation whenever possible". We were not using MLPPP fragmentation and we still had high CPU load on the 7200's.

I have seen per-destination load balancing with CEF work well only on links of higher speed, such as STM-1. For such low-speed links, the CEF load balancing would have to be per-packet to avoid some links being underutilized while others are congested (which can happen very easily in networks with unpredictable traffic patterns). With per-packet load balancing you may have packet reordering; MLPPP preserves packet order. Still, we were using 2 multilinks with per-packet load balancing between them and had no issues. However, this will depend on the services one runs. We were not passing voice traffic over those multilinks, only Internet traffic.
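For anyone wanting to try it, per-packet CEF is a per-interface setting, something like the following (per-destination is the default behavior):

ip cef
!
interface Multilink1
 ! spread successive packets across the equal-cost paths
 ip load-sharing per-packet
!
interface Multilink2
 ip load-sharing per-packet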

p.s. Now that I think about it more, it seems the fellow engineer who must have done our setup actually balanced the 2 solutions. MLPPP with CEF flavor. Is she brilliant or what? :-)

blackladyJR Wed, 02/25/2009 - 15:02

I have a few 4xT1 MLPPP sites with a 2851, and the CPU load seems very low. I have other 2801s with 2xT1 doing per-destination load sharing via "maximum-paths 2" under BGP (I have BGP between CE and PE on the 2xT1 WAN links).
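That 2xT1 CE-side BGP setup is along these lines; the AS numbers and neighbor addresses below are made up:

router bgp 65001
 ! one eBGP session to the PE over each T1
 neighbor 10.10.10.1 remote-as 65000
 neighbor 10.10.20.1 remote-as 65000
 ! install both equal eBGP paths for per-destination load sharing
 maximum-paths 2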

For the site where I need up to 8xT1, I will use at minimum a 3825 instead of the 2800 series.

But as Joe said, Layer 3 load sharing is normally limited to around 6xT1/E1; that's why I posted the original question wondering what the maximum for MLPPP might be if I want 8xT1 in one logical port. I know MLPPP is a different animal from Layer 3 load sharing, but since I can't find a document on MLPPP stating the maximum, I want to find out, to be safe, before telling the customer to order 8xT1; if it then doesn't work well, it will be bad.



marikakis Wed, 02/25/2009 - 15:21


Yes, if you say something to the customer and it doesn't work, it will be bad indeed. The way you put it is very realistic, and for that reason not many people will dare to answer this with complete confidence. In such cases, talking to Cisco is the only safe way to go and get a definite answer. Even if the answer turns out not to be perfectly true, you can say you did your homework and the best you could to make this work, but there were "unforeseen technical difficulties" that even Cisco could not see. This is political rather than technical advice, but I have seen it work as well.

Kind Regards,


marikakis Wed, 02/25/2009 - 16:00

Also, if things for some reason turn out looking bad, consider trying 2 multilinks (4 T1's each) with per-packet load sharing between the multilinks as your last resort. As I said previously (edited post), this might actually be a balance between the 2 solutions (MLPPP and CEF per-packet load balancing). It might be the case that 8 links in a single bundle have more MLPPP protocol overhead than 2 multilinks with 4 links each.

marikakis Wed, 02/25/2009 - 16:55

I will try to explain this last point further. Typically, 2 x 4 = 1 x 8. However, as the first document says, "Multilink PPP keeps track of packet sequencing and buffers packets that arrive early. Multilink PPP preserves packet order across the entire NxT1/E1 bundle with this ability." It might be the case that it's harder to preserve packet order and balance traffic across 8 links in one bundle than across 8 links in two bundles. If you use 8 links in 2 bundles, MLPPP does lighter work for each of the bundles, and CEF, being lighter than MLPPP, does the rest of the work to make those multilinks cooperate. This is not a perfect argument, since I don't know the implementation details, but it is not unreasonable either.

p.s. You could also try per-destination CEF instead of per-packet for the 2 bundles, in case it works for you. As other people said here, you need to think about the requirements of your services.

Joseph W. Doherty Wed, 02/25/2009 - 16:46

"Is she brilliant or what?"

Maybe. As you point out, if you use CEF per-packet you risk out-of-sequence delivery, whereas, as you also note, MLPPP guarantees ordering. However, if you only run per-packet CEF on two links, whether real or bundles, most TCP stacks wait for 3 duplicate ACKs before considering a packet lost, so most TCP flows should be fine. (The impact on non-TCP traffic depends on the app.)



On another aspect: with T-3s, don't forget you can sometimes purchase a fractional T-3. Also, it's becoming more common to find WAN Ethernet handoffs at attractive pricing. That is, there might be other options available besides 8 or more T-1s or a full T-3 to provide the 12 Mbps or so of bandwidth you're seeking.

blackladyJR Wed, 02/25/2009 - 19:09

Thanks for Maria's and Joe's ideas.

Everything makes sense.

I have already tried WAN Ethernet, but it's not available in the area for this location. The PTT only sells full DS3, and on the MPLS side I can have a frac-DS3 port rate instead of a full DS3 port. But that is still way more expensive than roughly 6 T1s.

As a rule of thumb, I always push for a DS3 loop with a frac-DS3 port for any site with a requirement higher than 4 T1s. It just happens that for this one the pricing and availability work out better with more T1s, which prompted me to look into the maximum number of interfaces MLPPP can support.

Thanks again, everyone.


Joseph W. Doherty Thu, 02/26/2009 - 05:45

Looks like you've covered all the bases for private PTT. Fully understand the issue of limited options at any one location.

Have you also investigated running a logical PTT VPN across the Internet? Properly configured, I've often found they can perform very well and often they're very cost effective for bandwidth.

You also mention MPLS; no other options with FR or ATM? With the latter, the IMA T-1 modules work fairly well in the ISRs and don't place the same load on the router as MLPPP. I recall, though, that there is a cap of 8 links (a hardware limitation?), whereas with MLPPP the limitation should be an IOS cap.

blackladyJR Thu, 02/26/2009 - 06:21

Hi Joe,

FR and ATM are already retired products from the public carriers, and MPLS is pretty much the norm for any big customer connection. MPLS is so much better: no dealing with n(n-1)/2 PVCs to get a full mesh, and there are all those BGP attributes that can be used on the PE to manipulate routes.

VPN certainly is an option, but the main drawback is the lack of a QoS guarantee in the cloud, while MPLS has the guarantee to ensure business quality.

Speaking of QoS, since you are the expert :), I saw some really strange behavior yesterday and opened a TAC case; I am waiting for a reply.

I have just an LLQ class, an AF31 class, and a default class. (Forget everything we said in the previous QoS topic, as this one does not have the FR PVC CIR limitation when trying to do bandwidth allocation.)

On a T1, I have half of it, which is 768k, as LLQ. Then I want everything else to be AF31, so my ACL is "permit ip any any".

Max reserved is 90%.

I configured AF31 with bandwidth remaining 90% and the default class with bandwidth remaining 10%.

The strange part is that when there is very light traffic (no voice going on, just about 200 kbps of AF31 traffic), I see an 88k "5 min drop rate" and packets dropping. Very strange, because the T1 is not even heavily loaded (200k of traffic out of a T1), yet I have packets dropping in AF31. So I'm really not sure whether it is an IOS bug or what. The 200k rate and the drop rate came from the "show policy-map int s0/0" output. From the "show int s0/0" output, the Tx and Rx load is only about 17/255. So I really don't understand why I am dropping packets.
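For context, the policy is essentially the following sketch; the class and policy names match what is on the router, while the ACL contents and the exact interface number are simplified:

class-map match-any EF-Voice
 match dscp ef
 match dscp cs5
 match ip precedence 5
 match access-group name EF-VoiceGold
class-map match-any AF31
 match dscp af31
 match access-group name AF31
!
policy-map QoS
 class EF-Voice
  priority 768
  set dscp ef
 class AF31
  bandwidth remaining percent 90
  set dscp af31
 class class-default
  bandwidth remaining percent 10
!
interface Serial0/0/0
 max-reserved-bandwidth 90
 service-policy output QoS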

Any ideas?

Joseph W. Doherty Thu, 02/26/2009 - 06:58

Oops, behind the times again; I didn't realize you can't obtain FR or ATM any longer. (The client I'm working with at the moment uses MPLS, but the PE-to-CE handoffs are ATM IMA and some, I thought, FR. Of course that's not the same as end-to-end FR or ATM.)

On the issue of VPN and QoS, yes, a common lament, but I've found the biggest congestion issues appear to be at Internet ingress/egress. Ingress to the Internet is easy; egress from the Internet can be controlled much as I've done with FR or ATM (at Internet ingress, for traffic directed to the remote site). It usually works/performs very well.

As noted above, the client I'm working with uses MPLS in a big way. However, when they were on FR and ATM, I recommended against MPLS, since its QoS is much less granular than what you can do yourself (i.e., the vendor-supported QoS, not what could technically be done). I predicted worse and more erratic performance and a need to buy more bandwidth. Both have come true!

The QoS issue you saw, was it perhaps on a very high 12.4T release? I believe I saw a similar problem that disappeared when I dropped down to 12.4(15)T. I haven't written it up yet.

blackladyJR Thu, 02/26/2009 - 09:19


In the old days, FR or ATM were indeed used to get to an MPLS PE. Now most companies have "direct" access to the MPLS PE. The encapsulation itself is usually either PPP or FR. Using FR as the encapsulation method allows more traffic-shaping commands to be applied. We can also do MLFR (FRF.16) besides MLPPP. I don't know whether other carriers provide ATM encapsulation for "direct" MPLS PE access, though. Indirectly, yes: I still have a client site going to ATM first and then a PVC from ATM to the MPLS PE.

As for QoS, it's c2800nm-adventerprisek9-mz.123-14.T5, so not the high 12.4T.

Router1>sh policy-map int s0/0/0

  Service-policy output: QoS

    Class-map: EF-Voice (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: dscp ef (46)
        0 packets, 0 bytes
        5 minute rate 0 bps
      Match: dscp cs5 (40)
        0 packets, 0 bytes
        5 minute rate 0 bps
      Match: ip precedence 5
        0 packets, 0 bytes
        5 minute rate 0 bps
      Match: access-group name EF-VoiceGold
        0 packets, 0 bytes
        5 minute rate 0 bps
      QoS Set
        dscp ef
          Packets marked 0
      Strict Priority
        Output Queue: Conversation 264
        Bandwidth 768 (kbps) Burst 19200 (Bytes)
        (pkts matched/bytes matched) 0/0
        (total drops/bytes drops) 0/0

    Class-map: AF31 (match-any)
      48286 packets, 4885745 bytes
      5 minute offered rate 30000 bps, drop rate 10000 bps
      Match: dscp af31 (26)
        10 packets, 640 bytes
        5 minute rate 0 bps
      Match: access-group name AF31
        48276 packets, 4885105 bytes
        5 minute rate 30000 bps
      QoS Set
        dscp af31
          Packets marked 48293
      Output Queue: Conversation 265
        Bandwidth remaining 90 (%) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 4100/1575375
        (depth/total drops/no-buffer drops) 0/580/0

    Class-map: class-default (match-any)
      181 packets, 11349 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Output Queue: Conversation 266
        Bandwidth remaining 10 (%) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

