
New Member

High CPU utilization due to Cat4k Mgmt LoPri in WS-C4507R-E

Hi,

We are seeing high CPU utilization caused by the Cat4k Mgmt LoPri process on both core switches installed at the customer site.

HTAINHYD03XXXCS0001#sh processes cpu sorted

CPU utilization for five seconds: 89%/2%; one minute: 68%; five minutes: 59%

PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process

52 41015708283160588800 0 68.02% 43.64% 33.26% 0 Cat4k Mgmt LoPri

51 1020058096 667843543 1527 7.83% 11.70% 12.78% 0 Cat4k Mgmt HiPri

111 18148967922560591317 0 7.43% 7.41% 7.42% 0 Spanning Tree

HTAINHYD03XXXCS0002#sh processes cpu sorted

CPU utilization for five seconds: 66%/3%; one minute: 51%; five minutes: 48%

PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process

52 3078834762717954896 0 37.51% 20.72% 18.28% 0 Cat4k Mgmt LoPri

51 18107779121821980917 993 8.63% 13.57% 14.40% 0 Cat4k Mgmt HiPri

105 3375271842297524860 0 8.07% 4.48% 3.80% 0 IP Input

111 1901379976 80949735 23488 6.63% 6.75% 6.75% 0 Spanning Tree

223 91174320 296086319 307 1.43% 1.46% 1.45% 0 HSRP Common

I am also attaching the outputs of show version, show processes cpu sorted, show logging, and show interfaces for both switches.

Please advise on this issue and confirm whether it is caused by a bug or by some other specific reason.

Regards,

Ashutosh

9 REPLIES
Hall of Fame Super Silver

Re: High CPU utilization due to Cat4k Mgmt LoPri in WS-C4507R-E

Hello Ash,

The processes

Cat4k Mgmt LoPri and Cat4k Mgmt HiPri

are specific to the Catalyst 4500 architecture. To troubleshoot your issue further, you should read the following document:

http://www.cisco.com/en/US/products/hw/switches/ps663/products_tech_note09186a00804cef15.shtml

These processes are actually containers for multiple platform subprocesses, and you need to use

show platform health

to continue the investigation.
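In case it helps while you go through that document, a minimal drill-down sequence would look something like the sketch below (the hostname is just a placeholder):

! Find which platform subprocess inside the Cat4k Mgmt LoPri container is consuming CPU
Switch# show platform health
! Then see which CPU queue is receiving the packets that feed that subprocess
Switch# show platform cpu packet statistics all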

Edit:

From the log buffer of the first switch, there are some messages that are quite important:

068722: May  8 11:47:15.797 IST: %C4K_HWACLMAN-4-ACLHWPROGERR: Input Security: Vlan522-ADWEA - hardware TCAM limit, some packet processing will be software switched.

068723: May  8 11:47:15.797 IST: %C4K_HWACLMAN-4-ACLHWPROGERRREASON: Input(null, 46/Normal) Security: Vlan522-ADWEA - insufficient hardware TCAM masks.

The same kind of messages are reported for other VLANs:

%C4K_HWACLMAN-4-ACLHWPROGERR: Input Security: Vlan424-Armstrong - hardware TCAM limit, some packet processing will be software switched.

068743: May 14 12:59:41.618 IST: %C4K_HWACLMAN-4-ACLHWPROGERRREASON: Input(null, 16/Normal) Security: Vlan424-Armstrong - insufficient hardware TCAM masks.

%C4K_HWACLMAN-4-ACLHWPROGERR: Input Security: Vlan427-Laureate - hardware TCAM limit, some packet processing will be software switched.

068747: May 14 13:01:37.008 IST: %C4K_HWACLMAN-4-ACLHWPROGERRREASON: Input(null, 30/Normal) Security: Vlan427-Laureate - insufficient hardware TCAM masks.

This means that the device is falling back to process switching because the portion of the TCAM table dedicated to ACL implementation is full. You should review the ACLs applied to the SVIs for vlan522, vlan 440, vlan 424 and so on to understand why this happens. This may explain the high CPU usage on the device.
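To review what is applied on those SVIs, you could start with something along these lines (VLAN 522 is only an example, and the ACL name is whatever is configured on your side):

! See which ACLs are attached to the SVI
Switch# show running-config interface Vlan522
Switch# show ip interface Vlan522 | include access list
! Then look at the size and structure of the ACL itself; long ACLs with many different masks consume extra TCAM masks
Switch# show ip access-lists <acl-name>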

On the first device there is also an IP address conflict with an external host:

069004: Jun 14 12:37:59.602 IST: %IP-4-DUPADDR: Duplicate address 10.119.81.161 on Vlan525, sourced by 4487.fc8a.2929

OUI search for 4487fc provides the following result:

44-87-FC   (hex)          ELITEGROUP COMPUTER SYSTEM CO., LTD.
4487FC     (base 16)          ELITEGROUP COMPUTER SYSTEM CO., LTD.
                    NO. 239, Sec. 2, Ti Ding Blvd.
                    Taipei  11493
                    TAIWAN, REPUBLIC OF CHINA
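To locate the conflicting host, you could trace that MAC address back to a switch port, roughly like this (the IP address and MAC are taken from the log message above):

! Confirm the ARP entry for the conflicting address
Switch# show ip arp 10.119.81.161
! Find the port where the offending MAC is learned, then follow it downstream
Switch# show mac address-table address 4487.fc8a.2929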

Edit2:

The same kind of error appears in the log buffer of the second switch:

069846: Jul  4 16:30:55.682 IST: %C4K_HWACLMAN-4-ACLHWPROGERR: Input Security: Vlan480-Oracle - hardware TCAM limit, some packet processing will be software switched.

069847: Jul  4 16:30:55.682 IST: %C4K_HWACLMAN-4-ACLHWPROGERRREASON: Input(null, 47/Normal) Security: Vlan480-Oracle - insufficient hardware TCAM masks.

Hope to help

Giuseppe

New Member

Re: High CPU utilization due to Cat4k Mgmt LoPri in WS-C4507R-E

Hello Giuseppe,

I am also attaching the outputs of show platform cpu packet statistics all, show platform health, and show logging to help investigate this case.

Please find the attachments for both switches.

Regards,

Ashutosh

Hall of Fame Super Silver

Re: High CPU utilization due to Cat4k Mgmt LoPri in WS-C4507R-E

Hello Ashutosh,

The issue is with the switch configuration: ACLs and QoS policy maps have used up the portion of TCAM dedicated to this purpose, and this causes a fallback to process switching in several client VLANs, with an impact on CPU usage.

http://www.cisco.com/en/US/docs/switches/lan/catalyst4500/12.2/53SG/system/messages/emsg.html#wp1253343
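As a rough sketch of how to confirm this from the CLI (treat the first command as an assumption, since its availability depends on the supervisor and IOS release):

! Show how full the ACL/QoS TCAM is, where supported
Switch# show platform hardware acl statistics utilization brief
! Compare against where ACLs and service policies are applied
Switch# show running-config | include ip access-group|service-policy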

Edit:

You may want to open a TAC service request to get further help on this.

Hope to help

Giuseppe

New Member

Re: High CPU utilization due to Cat4k Mgmt LoPri in WS-C4507R-E

Hi,

I agree with the reason explained here for this issue.

Could you also confirm whether any other cause can be identified from the show platform cpu packet statistics all and show platform health outputs? I hope that may help us.

Regards,

Ashutosh

Hall of Fame Super Silver

High CPU utilization due to Cat4k Mgmt LoPri in WS-C4507R-E

Hello Ashutosh,

As far as I can see in the latest log files, there are no other reasons for the high CPU usage.

Packets Received by Packet Queue

Queue                  Total           5 sec avg 1 min avg 5 min avg 1 hour avg

---------------------- --------------- --------- --------- --------- ----------

[output omitted ]

ACL sw processing            650584504      1338      1096       906        785

Then you see many items related to ACL or QoS that are used at 100%, like:

Rkios QoS PolicyMaps      356.32       0%            0.03      100%

AclClassifierIdToCla       48.00       0%            0.07      100%

Rkios QoS ClassMaps       896.00       0%            0.16      100%

AclToIosFilterMapLis      384.00       0%            0.07      100%

and

AclOp                    2048.00       0%            0.17      100%

AclOpAceSet              2559.96       0%            3.10      100%

AclClassifier            1280.00       0%            3.71      100%

AclFeature               2570.04       0%            9.35      100%

Acl                      1408.00       0%            4.08      100%

Ace24                    9215.85       3%          287.78      100%

Note also that some entries related to PIM are at 100%:

PimPhyports              1054.68      24%          261.56      100%

PimPorts                  906.25      31%          282.75      100%

PimModules                162.00       1%            3.16      100%

PimSlots                    6.00       2%            0.16      100%

PimChassis                 38.25       6%            2.39      100%
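If multicast is not actually needed on all those SVIs, a quick check could be (just a generic sketch, not something derived from your attachments):

! List the interfaces where PIM is enabled
Switch# show ip pim interface
! And a summary of the multicast routing state the switch maintains
Switch# show ip mroute summary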

Hope to help

Giuseppe

New Member

High CPU utilization due to Cat4k Mgmt LoPri in WS-C4507R-E

Hi HCL,

Have you resolved this issue? If so, how did you resolve it? I have the same problem too. :-(

Best regards,

Alcides Miguel

New Member

Hello All!! I have the same issue

Hello All!!

I have the same issue

I attached my files; I hope you can give me some hints about what the problem could be in my case.

 

Many thanks in advance.

 

I use "storm-control

I use "storm-control broadcast level 2M" for all interface to find which is a problem interface.

New Member

I have enabled a VMWARE

I have enabled some VMware servers, and these produced the high CPU on the 4500. Reviewing their operating modes, I found this document, which I delivered to the server administrator:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2006129
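If anyone else wants to check for the same pattern, a generic starting point (independent of that KB article) is to see which CPU queues and which server-facing ports carry the unexpected traffic, for example:

! Which CPU queues are receiving the traffic
Switch# show platform cpu packet statistics all
! Per-port counters to spot unusual broadcast/multicast volumes on the server uplinks
Switch# show interfaces counters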

Regards.
