I am running MLPPP throughout my WAN. Most recently I installed a 3845, and I had to turn MLPPP fragmentation off to get the circuits to stop dropping packets. I have read multiple articles on MLPPP fragmentation and still do not understand why it would cause a problem. I am seeing lost fragments, and I suspect the cause may be that the circuits in the bundle have different latencies (perhaps because they take diverse paths), which creates a problem when fragmentation is enabled. I have 50 other sites where fragmentation is enabled, and they are running fine. Can anyone explain why fragmentation would cause this kind of problem? In the output of 'show ppp multilink' I am seeing lost fragments and discarded fragments. When I experienced this problem, I opened a TAC case, but the engineer could not explain why fragmentation would cause it, so I'm reaching out to the Cisco IT community for more in-depth information.
There are 5 T1s in the bundle. All circuits are up and forwarding packets. This installation has been up for two weeks, and the site just recently started complaining about degraded performance. I am trying to understand why MLPPP with fragmentation ON causes slow responses; on one occasion I could not even get a site up until I disabled PPP fragmentation.
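For context on what fragmentation is supposed to buy you: its purpose is to cap per-packet serialization delay on each member link. A quick back-of-the-envelope calculation, assuming a full 1536 kbps T1 payload rate (numbers are illustrative):

```
1500-byte packet on one T1:
  (1500 bytes x 8 bits) / 1,536,000 bps ≈ 7.8 ms serialization delay

With a 20 ms fragment-delay target, a 1500-byte packet is not fragmented.
With a 5 ms target:
  5 ms x 1,536,000 bps / 8 = 960 bytes max fragment size
  so a 1500-byte packet is split into 2 fragments.
```

So the smaller the configured fragment delay, the more fragments per packet, and the more sensitive the bundle becomes to per-link latency differences during reassembly.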
During further research, I have read that it is sometimes necessary to disable fragmentation and other times it is not. If fragmentation is ON, packets are fragmented before the MTU check; with it OFF, the MTU check is done before fragmentation. So if the MTU check happens after fragmentation, could MLPPP be modifying the DF bit in the IP header? I know that's a stretch, since PPP operates at layer 2 and the DF bit lives at layer 3, but PPP does have an NCP negotiation phase. If that stretch is correct and the DF bit is being modified, some applications may not work correctly.
Lastly, disabling MLPPP fragmentation fixed my problem. I am trying to understand why.
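For anyone else hitting this, the change that worked for me was disabling fragmentation on the bundle interface. Roughly (the interface name is a placeholder, and exact syntax varies by IOS release):

```
interface Multilink1
 ppp multilink fragment disable
```

On some older releases this may appear as 'no ppp multilink fragmentation' instead; check your release's command reference.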
I tend to disagree. When the multilink interface is configured with the 'bandwidth' and 'ppp multilink fragment delay' commands, the router looks at the size of each frame, works out what the serialization delay for a packet of that size would be, and fragments it if the delay would exceed the configured limit.
Have you set the right bandwidth on the multilink interface and configured the fragment delay (in ms) and an acceptable timeout?
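Something like this on the bundle interface, assuming a 5xT1 bundle (the interface name, group number, and delay value are illustrative, not a recommendation for your site):

```
interface Multilink1
 bandwidth 7680
 ppp multilink
 ppp multilink group 1
 ppp multilink fragment delay 20
 ppp multilink interleave
```

The 'bandwidth 7680' here is 5 x 1536 kbps; if the bandwidth statement is wrong, the fragment-size calculation derived from the delay target will be wrong too.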
Also, have you tried 'show interfaces stats' and 'show interfaces switching' (a hidden command) to see which switching path is being used?
You might also want to try multiclass MLPPP if you are using interleaving.
It would also be worth running 'debug ppp multilink fragment' during non-peak hours.
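To summarize the commands I'd start with (the switching one is hidden, so it won't appear in context-sensitive help):

```
show ppp multilink
show interfaces stats
show interfaces switching
debug ppp multilink fragment
```

Compare the lost/discarded fragment counters in 'show ppp multilink' before and after a period of degraded performance, and be careful with the debug on a production router.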