Cisco Support Community

New Member

3750G CPU Spike - Version 12.2(44)SE2

Good day,

I'm having a problem with a switch stack that our users are running. The CPU history shows recurring spikes, and we've tried to trace the cause but found nothing.

We monitored the switch for 20-30 minutes but failed to capture the CPU process that is causing the spikes.

I've already checked the troubleshooting document here:

http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/troubleshooting/cpu_util.html#wp1023843

I would like to request an opinion on how to move forward to troubleshoot this issue.
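Since the spikes are transient and hard to catch by polling manually, one option (along the lines of the troubleshooting document linked above) is an EEM applet that snapshots the sorted process list whenever the 5-second CPU utilization crosses a threshold. This is only a sketch; the threshold value, applet name, and output file name below are assumptions to adjust for your environment (the OID polled is cpmCPUTotal5secRev from CISCO-PROCESS-MIB):

```
! Hypothetical EEM applet: log "show process cpu sorted" to flash
! whenever 5-second CPU utilization reaches 50% or more.
event manager applet HIGH-CPU
 event snmp oid 1.3.6.1.4.1.9.9.109.1.1.1.1.6.1 get-type exact entry-op ge entry-val 50 poll-interval 5
 action 1.0 syslog msg "High CPU detected, capturing process list"
 action 2.0 cli command "enable"
 action 3.0 cli command "show process cpu sorted | append flash:high_cpu.txt"
```

After a spike, flash:high_cpu.txt should show which process was on top at that moment, which is more reliable than watching the console for 20-30 minutes.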

The stack is running on default SDM template.

Switch : 3 : (Master)

---------------------

CAM Utilization for ASIC# 0                      Max            Used

                                             Masks/Values    Masks/values

Unicast mac addresses:                        784/6272         24/111  

IPv4 IGMP groups + multicast routes:          144/1152          6/26   

IPv4 unicast directly-connected routes:       784/6272         24/111  

IPv4 unicast indirectly-connected routes:     272/2176          8/44   

IPv4 policy based routing aces:                 0/0             0/0    

IPv4 qos aces:                                768/768         260/260  

IPv4 security aces:                          1024/1024         27/27   

Note: Allocation of TCAM entries per feature uses

a complex algorithm. The above information is meant

to provide an abstract view of the current TCAM utilization

SW_FLOOR#show processes cpu history

    1          11111                                  11111111

    0777778888811111888889999999999999999999999999999900000000

100                                                          

90                                                          

80                                                          

70                                                          

60                                                          

50                                                          

40                                                          

30                                                          

20                                                          

10 **********************************************************

   0....5....1....1....2....2....3....3....4....4....5....5....

             0    5    0    5    0    5    0    5    0    5   

               CPU% per second (last 60 seconds)

    1111111111111111111111111111111111111111111111111111111131

    0711211122112121101311311111110111100111101122111222211261

100                                                          

90                                                          

80                                                          

70                                                          

60                                                          

50                                                          

40                                                         *

30                                                         *

20  *                                                      *

10 ##########################################################

   0....5....1....1....2....2....3....3....4....4....5....5....

             0    5    0    5    0    5    0    5    0    5   

               CPU% per minute (last 60 minutes)

              * = maximum CPU%   # = average CPU%

                                    1                                    

    3399729593936294369243942392249202932293359543922393339795933493329442

    6082769792834691439554855297509504669266067604755397333698733183118341

100   *   * * *   *   *   *   *   * * *   *   *   *   *     * *   *   *  

90   **  * * *   *   *   *   *   * * *   *   *   *   *   * * *   *   *  

80   *** * * *   *   *   *   *   * * *   *   *   *   *   *** *   *   *  

70   *** * * *   *   *   *   *   * * *   *   *   *   *   *** *   *   *  

60   *** *** * * *  **   *   *   * * *   *  ***  *   *   *****   *   *  

50   *** *** * * *  ** * **  *   * * *   *  ***  *   *   *****   *   *  

40 * *** *** * * ** ** * **  *  ** * **  ** **** *   **  *****  **   ***

30 ********************************* *** *************************** ***

20 **********************************************************************

10 ######################################################################

   0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.

             0    5    0    5    0    5    0    5    0    5    0    5    0

                   CPU% per hour (last 72 hours)

SW_FLOOR#show controllers cpu-interface

ASIC    Rxbiterr   Rxunder    Fwdctfix   Txbuflos   Rxbufloc   Rxbufdrain

-------------------------------------------------------------------------

ASIC0     0          0          0          0          0          0        

ASIC1     0          0          0          0          0          0        

ASIC2     0          0          0          0          0          0        

ASIC3     0          0          0          0          0          0        

ASIC4     0          0          0          0          0          0        

ASIC5     0          0          0          0          0          0        

ASIC6     0          0          0          0          0          0        

ASIC7     0          0          0          0          0          0        

ASIC8     0          0          0          0          0          0        

ASIC9     0          0          0          0          0          0        

ASICA     0          0          0          0          0          0        

ASICB     0          0          0          0          0          0        

ASICC     0          0          0          0          0          0        

cpu-queue-frames  retrieved  dropped    invalid    hol-block  stray

----------------- ---------- ---------- ---------- ---------- ----------

rpc               3323412172 0          0          0          0        

stp               177089741  0          0          0          0        

ipc               197648553  0          0          0          0        

routing protocol  706262091  0          0          0          0        

L2 protocol       36135326   0          0          0          0        

remote console    0          0          0          0          0        

sw forwarding     149357     0          0          0          0        

host              17019584   0          0          0          0        

broadcast         580150179  0          0          0          0        

cbt-to-spt        0          0          0          0          0        

igmp snooping     246174119  0          0          0          0        

icmp              0          0          0          0          0        

logging           0          0          0          0          0        

rpf-fail          0          0          0          0          0        

dstats            0          0          0          0          0        

cpu heartbeat     1181561604 0          0          0          0        

Supervisor ASIC receive-queue parameters

----------------------------------------

queue 0 maxrecevsize 7E0 pakhead 2D6C9A8 paktail 2D991B4

queue 1 maxrecevsize 7E0 pakhead 2F67DE0 paktail 2F66140

queue 2 maxrecevsize 7E0 pakhead 2DD9618 paktail 2DD8EF0

queue 3 maxrecevsize 7E0 pakhead 34F708C paktail 34F0FF0

queue 4 maxrecevsize 7E0 pakhead 2F7CF7C paktail 2F7A0F8

queue 5 maxrecevsize 7E0 pakhead 3163A04 paktail 30668BC

queue 6 maxrecevsize 7E0 pakhead 34EB538 paktail 34EB1A4

queue 7 maxrecevsize 7E0 pakhead 3421794 paktail 343205C

queue 8 maxrecevsize 7E0 pakhead 34433E0 paktail 344698C

queue 9 maxrecevsize 7E0 pakhead 32BCDB4 paktail 32BCDB4

queue A maxrecevsize 7E0 pakhead 2A9462C paktail 32A17A4

queue B maxrecevsize 7E0 pakhead 350CCE4 paktail 3510290

queue C maxrecevsize 7E0 pakhead 32C78F0 paktail 32DCCDC

queue D maxrecevsize 7E0 pakhead 32B9330 paktail 32BC8DC

queue E maxrecevsize 0 pakhead 0 paktail 0

queue F maxrecevsize 7E0 pakhead 3298178 paktail 3298C34

Supervisor ASIC exception status

--------------------------------

Receive overrun    00000000   Transmit overrun 00000000

FrameSignatureErr  00000000   MicInitialize    00000000

BadFrameErr        00000000   LenExceededErr   00000000

BadJumboSegments   00000000

Supervisor ASIC Mic Registers

------------------------------

MicDirectPollInfo               80000200

===========================================================

Complete Board Id:0x4010

===========================================================

SW_FLOOR#show proc cpu sort | ex 0.00%

CPU utilization for five seconds: 10%/0%; one minute: 9%; five minutes: 9%

PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process

  94   547397716 122376069       4473  0.79%  0.83%  0.84%   0 hpm counter proc

  58   4056455623109704117          0  0.63%  0.64%  0.63%   0 Fifo Error Detec

175    60770568 120362357        504  0.47%  0.08%  0.06%   0 CDP Protocol    

134   226769550  11847533      19140  0.31%  0.37%  0.36%   0 HQM Stack Proces

135   196354673  94645466       2074  0.31%  0.28%  0.31%   0 HRPC qos request

  72    295887261745482093         16  0.31%  0.04%  0.03%   0 HLFM address lea

  95    92203362 368839310        249  0.31%  0.17%  0.16%   0 HRPC pm-counters

   9     4267699  13146607        324  0.15%  0.11%  0.14%   0 ARP Input       

126   3523574891708765682        206  0.15%  0.35%  0.40%   0 Hulc LED Process

163    30626649  38777878        789  0.15%  0.04%  0.02%   0 Port-Security   

  40    61864734  11902008       5197  0.15%  0.11%  0.11%   0 Compute load avg

SW_FLOOR#show proc cpu sort | ex 0.00%

CPU utilization for five seconds: 15%/0%; one minute: 10%; five minutes: 9%

PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process

107         678       200       3390  5.27%  0.49%  0.16%   1 SSH Process     

126   3523575381708765871        206  0.95%  0.40%  0.41%   0 Hulc LED Process

  94   547397768 122376086       4473  0.79%  0.83%  0.84%   0 hpm counter proc

  58   4056455873109704460          0  0.63%  0.64%  0.63%   0 Fifo Error Detec

  34   201464844  59067572       3410  0.63%  0.33%  0.32%   0 Per-Second Jobs 

134   226769567  11847534      19140  0.31%  0.36%  0.36%   0 HQM Stack Proces

   9     4267717  13146634        324  0.31%  0.13%  0.14%   0 ARP Input       

  72    295887441745482280         16  0.31%  0.06%  0.04%   0 HLFM address lea

135   196354690  94645476       2074  0.31%  0.28%  0.31%   0 HRPC qos request

  90    44492382 561003225         79  0.15%  0.05%  0.05%   0 hpm main process

  40    61864743  11902009       5197  0.15%  0.11%  0.11%   0 Compute load avg

SW_FLOOR#show proc cpu sort | ex 0.00%

CPU utilization for five seconds: 10%/0%; one minute: 10%; five minutes: 9%

PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process

107         737       210       3509  1.11%  0.54%  0.18%   1 SSH Process     

  94   547397792 122376089       4473  0.79%  0.82%  0.84%   0 hpm counter proc

126   3523575631708765960        206  0.63%  0.42%  0.41%   0 Hulc LED Process

  58   4056456213109704626          0  0.63%  0.63%  0.63%   0 Fifo Error Detec

134   226769584  11847535      19140  0.31%  0.36%  0.36%   0 HQM Stack Proces

187    93529973 225156345        415  0.31%  0.04%  0.04%   0 Spanning Tree   

   9     4267742  13146650        324  0.31%  0.15%  0.15%   0 ARP Input       

  95    92203378 368839353        249  0.15%  0.16%  0.16%   0 HRPC pm-counters

  55   2160710691464043216        147  0.15%  0.29%  0.34%   0 RedEarth Tx Mana

  34   201464844  59067576       3410  0.15%  0.31%  0.32%   0 Per-Second Jobs 

163    30626665  38777882        789  0.15%  0.05%  0.02%   0 Port-Security   

135   196354698  94645480       2074  0.15%  0.27%  0.30%   0 HRPC qos request

SW_FLOOR#show proc cpu sort | ex 0.00%

CPU utilization for five seconds: 10%/0%; one minute: 10%; five minutes: 9%

PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process

107         787       220       3577  1.11%  0.54%  0.18%   1 SSH Process     

  94   547397810 122376091       4473  0.79%  0.82%  0.84%   0 hpm counter proc

126   3523575881708766009        206  0.63%  0.42%  0.41%   0 Hulc LED Process

  58   4056456293109704718          0  0.63%  0.63%  0.63%   0 Fifo Error Detec

134   226769584  11847535      19140  0.31%  0.36%  0.36%   0 HQM Stack Proces

187    93529973 225156354        415  0.31%  0.04%  0.04%   0 Spanning Tree   

   9     4267750  13146659        324  0.31%  0.15%  0.15%   0 ARP Input       

  95    92203387 368839373        249  0.15%  0.16%  0.16%   0 HRPC pm-counters

  55   2160710781464043262        147  0.15%  0.29%  0.34%   0 RedEarth Tx Mana

  34   201464878  59067577       3410  0.15%  0.31%  0.32%   0 Per-Second Jobs 

163    30626665  38777883        789  0.15%  0.05%  0.02%   0 Port-Security   

135   196354706  94645484       2074  0.15%  0.27%  0.30%   0 HRPC qos request

Thank you.

1 REPLY
Hall of Fame Super Gold

3750G CPU Spike - Version 12.2(44)SE2

Upgrade to 12.2(55)SE8 and observe.

You're using a pretty old IOS.
