Cisco Support Community

New Member


I am running MLPPP throughout my WAN. I recently installed a 3845 and had to turn MLPPP fragmentation off to get the circuits to stop dropping packets. I have read multiple articles on MLPPP fragmentation and still do not understand why it would cause a problem. I am seeing lost fragments, and my theory is that the circuits in the bundle have different latencies (perhaps because they are diversely routed), and this causes a problem when fragmentation is enabled. I have 50 other sites where fragmentation is enabled and they are running fine. Can anyone explain why fragmentation would cause this kind of problem? In the output of the show ppp multilink command, I am seeing lost fragments and discarded fragments. When I first hit the problem I opened a TAC case, but the engineer could not explain why fragmentation would cause it, so I'm reaching out to the Cisco IT community for more in-depth information.
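To illustrate my theory, here is a rough sketch (plain Python, nothing Cisco-specific; the timer value mirrors what I understand of the lost-fragment timeout) of how differential latency across member links could make the bundle declare fragments lost:

```python
# Hypothetical model: a packet is split into one fragment per member link,
# all fragments leave at t=0, and reassembly gives up if the slowest
# fragment arrives more than `timeout_ms` after the fastest one.
def fragments_lost(link_latencies_ms, timeout_ms):
    skew = max(link_latencies_ms) - min(link_latencies_ms)
    return skew > timeout_ms

# Five T1s: four on a short path, one diversely routed with much higher delay.
print(fragments_lost([8, 8, 9, 8, 130], 100))  # True  -> fragments declared lost
print(fragments_lost([8, 8, 9, 8, 40], 100))   # False -> skew within tolerance
```

If that model is right, it would explain why the 50 sites with uniform circuits are fine and only the diversely routed site breaks.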

New Member


Hello there,

Firstly, it would be good to see the router configs on both ends of the MLPPP bundle.

Have you tried the following?

The following example sets a 100-millisecond wait period for receiving expected fragments before declaring the fragments lost:

Router(config)# interface multilink 9

Router(config-if)# ip address

Router(config-if)# ppp multilink

Router(config-if)# ppp multilink interleave

Router(config-if)# ppp multilink fragment delay 20

Router(config-if)# ppp timeout multilink lost-fragment 0 100

What is the actual line rate?

Are you losing specific classes of traffic? (i.e., are you using any Layer 3 queueing mechanisms?)

Please post the configs, everything you have tried so far, and what Cisco TAC suggested to you.
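For comparison, here is a minimal sketch of the kind of bundle config I would expect to see (interface numbers, the group number, and the addressing are placeholders, not taken from your network):

```
interface Multilink1
 ip address 10.0.0.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
 ppp multilink fragment delay 20
 ppp multilink interleave
 ppp timeout multilink lost-fragment 0 100
!
interface Serial0/0/0:0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```

The same `ppp multilink group` statement would go on each of the member serial interfaces.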




Also run "sh ppp multilink" and make sure all of the physical interfaces in the multilink group show as "Active." If one link in the bundle goes down, traffic will still be *sent* over it, but it won't arrive.

A PPP bundle with inactive links will do that.


New Member


There are 5 T1s in the bundle. All circuits are up and forwarding packets. This installation has been up for two weeks; just recently the site started complaining about degraded performance. I am trying to understand WHY MLPPP with fragmentation ON causes slow responses, and why on one occasion I could not get a site up until I disabled PPP fragmentation.

During further research, I have read that it is sometimes necessary to disable fragmentation and other times it is not. If fragmentation is ON, packets are fragmented before MTU checking; with it OFF, MTU checking is done before fragmentation. So if MTU checking happens after fragmentation, could it be possible that MLPPP is modifying the DF bit in the IP header? I know that's a stretch, since PPP is at Layer 2 and the DF bit is at Layer 3, but PPP does have an NCP negotiation layer. If my stretch is correct and the DF bit is being modified, some applications may not work correctly.

Lastly, disabling MLPPP fragmentation fixed my problem. I am trying to understand why.

New Member



I tend to disagree. When the multilink interface is configured with the 'bandwidth' and 'ppp multilink fragment delay' commands, the router looks at the size of each frame, determines what the serialization delay for a frame of that size would be, and fragments it if that delay is above the configured limit.
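To make the arithmetic concrete, here is a rough sketch (plain Python, not anything Cisco ships) of that calculation; the fragment size is chosen so that no fragment takes longer than the configured fragment delay to serialize on a member link:

```python
def max_fragment_bytes(link_bw_bps, fragment_delay_ms):
    """Largest fragment that serializes on one member link within the delay."""
    return int(link_bw_bps * fragment_delay_ms / 1000 / 8)

def serialization_delay_ms(packet_bytes, link_bw_bps):
    """Time to clock a packet of this size onto the wire."""
    return packet_bytes * 8 * 1000 / link_bw_bps

# One T1 member link (1536 kbps) with 'ppp multilink fragment delay 20':
print(max_fragment_bytes(1_536_000, 20))        # 3840 bytes
print(serialization_delay_ms(1500, 1_536_000))  # 7.8125 ms for a full 1500-byte frame
```

Note that on a T1, a full 1500-byte frame serializes in under 8 ms, so with a 20 ms fragment delay it would not be fragmented at all.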

Have you set the right bandwidth on the multilink interface, and configured the fragment delay (in ms) and an acceptable timeout?

Also, have you tried show interface stats and show interfaces switching (a hidden command) to see which switching path it is using?

You might want to try using multiclass if you are using interleaving too.

Also, it would be worth doing a debug ppp multilink fragment during non-peak hours.

Why don't you post the configs from both ends?


