Cisco 4506 High CPU Utilization

sahmad
Level 1

Hello,

Yesterday afternoon one of our 4506 switches climbed to 96% CPU utilization. I haven't made any configuration changes. Here are the processes with high CPU utilization:

PID  Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
 40   3663092184  1089949084       3360  8.63% 10.56% 11.29%   0 Cat4k Mgmt HiPri
 41   3092958780  2851505705       1084 36.61% 36.53% 36.18%   0 Cat4k Mgmt LoPri
 76     72485492   270422107        268  7.91%  7.72%  7.68%   0 IP Input
113     35661224    40030007        890 21.91% 28.13% 29.84%   0 DHCPD Receive

After running sh platform health, these are the high ones:

                     %CPU   %CPU    RunTimeMax   Priority  Average %CPU  Total
                     Target Actual Target Actual   Fg   Bg 5Sec Min Hour  CPU
S2w-JobEventSchedule 10.00   7.54     10      8  100  500    9   9    7  36703:06
Stub-JobEventSchedul 10.00  12.23     10     48  100  500   12  12    9  51004:51
K2CpuMan Review      30.00  29.44     30     99  100  500   33  32   25  37067:58
K2AccelPacketMan: Tx 10.00  12.15     20      1  100  500   12  12   10  11871:22

And finally, sh platform cpu packet statistics gives me this:

Packets Dropped In Hardware By CPU Subport (txQueueNotAvail)

CPU Subport  TxQueue 0       TxQueue 1       TxQueue 2       TxQueue 3

------------ --------------- --------------- --------------- ---------------

           0           11045           14031          149981       188579662

           1               0               0               0         5919279

           2               0          115638               0               0

RkiosSysPacketMan:

Packet allocation failures: 0

Packet Buffer(Software Common) allocation failures: 0

Packet Buffer(Software ESMP) allocation failures: 0

Packet Buffer(Software EOBC) allocation failures: 0

IOS Packet Buffer Wrapper allocation failures: 0

Packets Dropped In Processing Overall

Total                5 sec avg 1 min avg 5 min avg 1 hour avg

-------------------- --------- --------- --------- ----------

           146521131         0         0         0          0

Packets Dropped In Processing by CPU event

Event             Total                5 sec avg 1 min avg 5 min avg 1 hour avg

----------------- -------------------- --------- --------- --------- ----------

Input Acl                    146002289         0         0         0          0

SA Miss                             27         0         0         0          0

Packets Dropped In Processing by Priority

Priority          Total                5 sec avg 1 min avg 5 min avg 1 hour avg

----------------- -------------------- --------- --------- --------- ----------

Normal                        46723179         0         0         0          0

Medium                          518884         0         0         0          0

High                          99797883         0         0         0          0

Packets Dropped In Processing by Reason

Reason             Total                5 sec avg 1 min avg 5 min avg 1 hour avg

------------------ -------------------- --------- --------- --------- ----------

SrcAddrTableFilt                     24         0         0         0          0

L2DstDrop                             7         0         0         0          0

L2DstDropInAcl                       46         0         0         0          0

NoDstPorts                           32         0         0         0          0

NoFloodPorts                  146521022         0         0         0          0

Total packet queues 16

Packets Received by Packet Queue

Queue                  Total           5 sec avg 1 min avg 5 min avg 1 hour avg

---------------------- --------------- --------- --------- --------- ----------

Esmp                        6279742115       238       247       203        193

L2/L3Control                1320811357        57        48        43         41

Host Learning                 24933459         1         0         0          0

L3 Fwd Medium                     5813         0         0         0          0

L3 Fwd Low                    72923122         0         0         0          0

L2 Fwd High                      11130         0         0         0          0

L2 Fwd Medium                   164016         0         0         0          0

L2 Fwd Low                   242645408       227       237       193        185

L3 Rx High                           9         0         0         0          0

L3 Rx Low                     89296999       439       461       378        364

RPF Failure                     129420         0         0         0          0

Packets Dropped by Packet Queue

Queue                  Total           5 sec avg 1 min avg 5 min avg 1 hour avg

---------------------- --------------- --------- --------- --------- ----------

L2/L3Control                  18470371         0         0         0          0

Host Learning                  5825831         0         0         0          0

L2 Fwd Low                      405210         0         0         0          0

L3 Rx Low                         9863         0         0         0          0

I will be rebooting the switch at night to see if it helps.


Thank you.

Salman

6 Replies

Liam Kenneally
Level 1

Hi Amad,

If the reboot doesn't work, try an "undebug all". Someone may have run debugs in the past, and a traffic flow is now setting off those debugs. You should see them if you are connected via the console. Alternatively, if you are connected via Telnet/SSH, you should see them after you type "terminal monitor".
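A minimal sketch of those commands, assuming a privileged EXEC session on the switch:

! check whether any debugs are currently enabled
show debugging
! disable all debugging in one shot
undebug all
! mirror log/debug output to this Telnet/SSH session
terminal monitor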

Worth a shot.

Kind Regards,

Liam

Thanks Liam.

The reboot brought it back down to 36%. It jumped back to 95% about 5 hours after the reboot, and now it is back at 35%.

Today the CPU jumped to 90% again. The DHCPD Receive process is at 26%.

Salman

I think you have some abnormal process on some client equipment in your network (e.g. a virus) that is generating a high number of broadcast DHCP packets, so you should investigate along those lines. If you didn't change anything on the core switch, then something else must have changed, most likely a client PC.

HTH,

Dragan


Hi Salman,

If DHCP requests are storming in from the clients, you probably want to look into rate-limiting DHCP requests via DHCP snooping:

ip dhcp snooping limit rate <rate>   (interface configuration command; the rate is in packets per second)
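A minimal sketch of enabling DHCP snooping with a rate limit; VLAN 10, the 100 pps value and the interface names are just placeholders, so adjust them for your network:

ip dhcp snooping
ip dhcp snooping vlan 10
!
! access ports: limit how fast DHCP packets can be punted to the CPU
interface range GigabitEthernet2/1 - 48
 ip dhcp snooping limit rate 100
!
! the port toward the real DHCP server must be trusted
interface GigabitEthernet1/1
 ip dhcp snooping trust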

http://www.cisco.com/c/en/us/support/docs/switches/catalyst-4000-series-switches/65591-cat4500-high-cpu.html#high_cpu

http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/12-2/25ew/configuration/guide/conf/dhcp.html#wp1073418

-Vishesh

Thanks Vishesh,

That's what I was thinking. I was wondering if there was a way to track it. Implementing the DHCP snooping rate limit brought the CPU down to 33%.

Salman
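On tracking down the offending host (assuming the DHCP snooping rate limit above is in place): a port that exceeds its configured limit is put into the err-disabled state, so commands like these should point at the source of the storm:

! DHCP snooping status, trusted ports and configured rate limits
show ip dhcp snooping
! ports shut down for exceeding their limit
show interfaces status err-disabled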
