High CPU with multilink ppp

Unanswered Question
Feb 27th, 2008

Hi to all,

just wondering if anyone can help me with a high-CPU issue. I know that multilink PPP has a high impact on the CPU due to the load balancing (I've got 4 serial lines), but it easily reaches 100% with very little traffic. Any suggestions to help out the CPU? Do I have a pps problem?

s.ganguly Wed, 02/27/2008 - 08:02

Hi,


Sounds like it could be almost anything :-) ... from queue drops to fragmentation. As you say, MLP puts quite a burden on the CPU as it is.


Can you check whether your router is fragmenting, please? The command is "show ip traffic".


Also, please check whether the packets that congest the input queue are destined *to* the router or forwarded *through* the router. Run "show interfaces [type number] switching" and see if you can see anything besides "cache misses".
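For reference, the two checks would look something like this (the interface name is just an example, not taken from your config):

```
! Look at the fragmentation counters in the IP statistics output
router# show ip traffic

! See which switching path packets take on the interface;
! a large "Process" count alongside many cache misses can point
! to traffic being punted to the CPU
router# show interfaces fastEthernet 0/0 switching
```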


Cheers


santanu



freccetta Wed, 02/27/2008 - 08:35

Great, this is what I checked already. I saw that there are no frags. Let me say that most of the packets go right through the router. Here are the show outputs...



Attachment: 
s.ganguly Thu, 02/28/2008 - 02:15

Hi,


Thanks for the output. It shows the following:


myrouter# show interfaces fastEthernet 0/0 switching

Protocol IP
  Switching path   Pkts In   Chars In   Pkts Out   Chars Out
         Process     46593    7332641      13464      996536
    Cache misses     18447          -          -           -


The difference between the IP packets processed out and the cache misses may (or may not) indicate that some packets are destined *for* the router rather than forwarded *through* it.


Other possibilities (besides input drops) that I can think of:


1) A network loop can also be a reason for the traffic overload. Verify your network topology.


2) If there is a possibility that a single device is generating packets at an extremely high rate and thus overloading the router, you can determine the MAC address of that device by adding the "ip accounting mac-address {input | output}" interface configuration command to the configuration of the overloaded interface.


3) It could be a bug in the Cisco IOS Software version running on the router; you can check the Bug Toolkit for a bug that reports similar symptoms in a similar environment.


4) I see you have quite a few access lists on the 2800 platform. Repeatedly evaluating long access lists can be very CPU-intensive, depending on the volume of traffic. With NetFlow switching, if the flow is already in the cache, the access list no longer needs to be checked, so NetFlow switching could be useful in this case. You can enable it by issuing the "ip route-cache flow" command.


5) Please make sure your QoS classes and policies are correctly marking and not doing something else (e.g. dropping packets).
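A rough sketch of the commands behind points 2, 4 and 5 (interface names are examples, not taken from your config):

```
! 2) Per-MAC accounting on the overloaded interface, to spot a single
!    host generating packets at a very high rate
interface FastEthernet0/0
 ip accounting mac-address input
! Inspect the results with:
!   show interfaces fastEthernet 0/0 mac-accounting

! 4) NetFlow switching, so packets of cached flows skip the
!    access-list checks
interface FastEthernet0/0
 ip route-cache flow
! Inspect the flow cache with:
!   show ip cache flow

! 5) Verify the QoS policy is marking rather than dropping:
!   show policy-map interface
```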


Best


santanu

Joseph W. Doherty Wed, 02/27/2008 - 17:59
User Badges:
  • Super Bronze, 10000 points or more

You might try "no ppp multilink fragmentation".

freccetta Thu, 02/28/2008 - 00:15

For this IOS it's "ppp multilink fragment disable". I tried it already without any benefit. And as you can see from the show outputs I attached, my multilink hasn't got a fragmentation issue. Correct, or did I miss something? I do have to look into "sh int multi1 switching", right?


Any other ideas?

dongdongliu Thu, 02/28/2008 - 00:56

hi,


I'm not sure, but maybe "debug ppp multilink events" would turn up some info.

freccetta Thu, 02/28/2008 - 01:50

Mmmm, I'm running low on CPU and not able to get a console login. Too much risk for that type of debug. Even so, I suppose the events are OK, because the link is running fine.

Joseph W. Doherty Thu, 02/28/2008 - 05:05
User Badges:
  • Super Bronze, 10000 points or more

Unsure whether MLP fragmentation would show as "normal" (MTU too large) fragmentation.


Also, you'll likely need to ensure that MLP fragmentation is not active on the other side of the link too.
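For completeness, a minimal sketch of turning MLP fragmentation off, applied on both ends of the bundle (the interface name is an example; the exact command depends on the IOS version, as noted in this thread):

```
interface Multilink1
 ppp multilink fragment disable
! (on some IOS versions the syntax is: no ppp multilink fragmentation)
```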


PS:

My understanding of MLP fragmentation: With it on, individual packets are fragmented so the fragments can be sent concurrently on all links. They're reassembled on the other side.


With it off, individual packets are round-robined between the links. Depending on their sizes, they can now arrive out-of-order but MLP will resequence them back into original order before forwarding them.


I often work within an environment that uses lots of 2811s with MLP, although I'm not sure any have more than 3 T-1/E-1s (I recall one with 4, but don't recall whether it's IMA or not). With MLP fragmentation off, we don't see the CPU load you've encountered. One other difference: we're using 12.4, not 12.4T.

Craig Norborg Thu, 02/28/2008 - 05:37
User Badges:
  • Bronze, 100 points or more

I've heard you should have fragmentation turned off if you have multiple T1 circuits.


Do you have CEF turned on? I'd recommend it if it's not.
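For reference, a minimal sketch of checking and enabling CEF globally (verify first, since it may already be on):

```
! Check whether CEF is enabled
router# show ip cef summary

! Enable it globally if it is not
router(config)# ip cef
```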

freccetta Thu, 02/28/2008 - 06:17

Yes, I've tried turning fragmentation off, and of course CEF is running.
