
Multicast causing high CPU

derawat28
Level 1

Hi,

In my network we are facing high CPU usage caused by multicast traffic.

The multicast source is 155.x.x.x and the group is 239.x.x.x.

Currently we have placed an access list, shown below, to stop the high CPU, but the concern now is that we are not able to get the multicast video from this source.

access-list 1 deny 239.0.0.0 0.255.255.255
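For reference, an ACL like this only takes effect where it is applied; one common way (an assumption on my side, since the post does not show where ACL 1 is used) is as a multicast boundary on the L3 interface facing the source:

! hypothetical interface name, adjust to the interface that receives the stream
interface GigabitEthernet1/1
 ip multicast boundary 1

With a boundary in place, traffic to the denied groups is dropped at that interface.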

When we take out the above ACL, the CPU goes high again. I have checked and found RPF failures, as shown below:

ds1#sh ip mroute 239.x.x.x count

IP Multicast Statistics

102 routes using 77310 bytes of memory

29 groups, 2.51 average sources per group

Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second

Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 239.x.x.x, Source count: 1, Packets forwarded: 0, Packets received: 467885775

RP-tree: Forwarding: 0/0/0/0, Other: 0/0/0

Source: 155.x.x.x/32, Forwarding: 0/-92/0/0, Other: 467885775/20990877/446894898

Can anyone please help to find the issue and suggest how we may resolve it?

I appreciate your time and effort.

Thanks and Regards

16 Replies

Giuseppe Larosa
Hall of Fame

Hello Devender,

There is something strange in the output:

Forwarding: 0/-92/0/0

-92 packets/s forwarded is not a real value.

To get better help, more details are needed:

What platform and what IOS release run on ds1?

If it is a Catalyst 6500, issue sh ver and sh module and post the output here.

If you like, post a filtered version without public unicast IP addresses (don't worry about the multicast addresses in the 239 range, they are private addresses).

Most of the drops are not caused by RPF failures but by other reasons, including a null OIF list.

So verify not only whether packets are arriving on the same L3 interface that would be used to route unicast packets toward the source 155.x.x.x (this is the RPF check);

also check whether your PIM domain is partitioned so that receivers cannot join the group.

PIM must be enabled on all interfaces where you are running the unicast routing protocol, so that a link failure cannot partition the multicast routing domain.
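As a concrete illustration (interface names below are placeholders, not taken from this network), a consistent PIM configuration on every routed interface that carries the IGP looks roughly like this:

ip multicast-routing
!
interface Vlan10
 ip pim sparse-dense-mode
!
interface Port-channel1
 ip pim sparse-dense-mode

The point is simply that every L3 hop between the source and the receivers runs PIM, so a single link failure cannot split the multicast tree.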

For the high CPU usage, post

sh processes cpu sorted 1min

This can tell exactly which processes are the most CPU intensive.
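For example, filtering out the idle entries makes the busy processes stand out (a small sketch; the pipe and exclude filter are standard IOS output modifiers):

sh processes cpu sorted 1min | exclude 0.00

Whatever sits at the top of that list is what is driving the CPU.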

Hope to help

Giuseppe

DS1#sh version

Cisco IOS Software, s72033_rp Software (s72033_rp-IPSERVICESK9_WAN-VM), Version 12.2(33)SXH1, RELEASE SOFTWARE (fc3)

Technical Support: http://www.cisco.com/techsupport

Copyright (c) 1986-2008 by Cisco Systems, Inc.

Compiled Thu 17-Jan-08 07:19 by prod_rel_team

ROM: System Bootstrap, Version 12.2(17r)SX5, RELEASE SOFTWARE (fc1)

DS1 uptime is 23 weeks, 1 day, 21 hours, 36 minutes

Uptime for this control processor is 23 weeks, 1 day, 21 hours, 35 minutes

Time since DS1 switched to active is 23 weeks, 1 day, 21 hours, 35 minutes

System returned to ROM by reload at 18:00:12 Singapo Fri May 9 2008 (SP by reload)

System restarted at 20:54:36 Singapo Fri May 16 2008

System image file is "bootdisk:s72033-ipservicesk9_wan-vz.122-33.SXH1.bin"

This product contains cryptographic features and is subject to United

States and local country laws governing import, export, transfer and

use. Delivery of Cisco cryptographic products does not imply

third-party authority to import, export, distribute or use encryption.

Importers, exporters, distributors and users are responsible for

compliance with U.S. and local country laws. By using this product you

agree to comply with applicable laws and regulations. If you are unable

to comply with U.S. and local laws, return this product immediately.

A summary of U.S. laws governing Cisco cryptographic products may be found at:

http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

If you require further assistance please contact us by sending email to

export@cisco.com.

cisco WS-C6506-E (R7000) processor (revision 1.1) with 1040384K/8192K bytes of memory.

Processor board ID SAL1212K0EE

SR71000 CPU at 600Mhz, Implementation 1284, Rev 1.2, 512KB L2 Cache

Last reset from s/w reset

10 Virtual Ethernet interfaces

75 Gigabit Ethernet interfaces

2 Ten Gigabit Ethernet interfaces

1917K bytes of non-volatile configuration memory.

65536K bytes of Flash internal SIMM (Sector size 512K).

Configuration register is 0x2102

Patching is not available since the system is not running from an installed image. To install please use the "install file" command

=====================================

DS1#sh mod

Mod Ports Card Type Model Serial No.

--- ----- -------------------------------------- ------------------ -----------

1 24 CEF720 24 port 1000mb SFP WS-X6724-SFP

5 5 Supervisor Engine 720 10GE (Active) VS-S720-10G

The process that is high at the time of high CPU is "IP Input".

When we take out the ACL, all our receivers can get the multicast program from the source.

May I know how I should carry out further investigation (can you suggest some commands)?

It might seem silly to ask after what you mentioned above, but as I am not so strong in multicast I hope you will assist me.

Thanks a lot.

Regards

Devender

Hello Devender,

For the RPF check, use

sh ip rpf <source address>

To verify whether you have enabled PIM on all the required interfaces,

compare the output of

sh ip pim interface

with the equivalent show command for your current unicast routing protocol,

for example

sh ip ospf interface

or sh ip eigrp interfaces

And also:

sh ip pim neighbor

sh ip ospf neighbor OR sh ip eigrp neigh

This also has to be done on the routers/switches where the receivers are connected.

ip pim sparse-dense-mode (or another PIM mode) must be enabled on all involved L3 interfaces: the one where the source is, those where the receivers are, and all the backbone links, in order to build the distribution tree.
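Put together, the comparison might look like the following on each hop (a command checklist only; substitute the EIGRP equivalents if that is your IGP, and use the real source address in place of 155.x.x.x):

sh ip pim interface        <- L3 interfaces running PIM
sh ip ospf interface       <- L3 interfaces running OSPF
sh ip pim neighbor         <- PIM adjacencies
sh ip ospf neighbor        <- OSPF adjacencies
sh ip rpf 155.x.x.x        <- RPF interface and neighbor toward the source

Any interface or neighbor that shows up in the OSPF output but not in the PIM output is a candidate cause of the partition described above.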

Hope to help

Giuseppe

Hi Giuseppe,

Thanks again for taking an interest and for lending a helping hand.

I have checked all the interfaces for PIM and OSPF on the source DS as well as the receivers' DS, and I can see all the interfaces are enabled with PIM sparse-dense mode.

#### Below is the output from the source side###

DS1#sh ip rpf

RPF information for ? (source IP add)

RPF interface: Port-channel16

RPF neighbor: ? (x.x.x.x)

RPF route/mask: x.x.x.0/25

RPF type: unicast (ospf 1)

RPF recursion count: 0

Doing distance-preferred lookups across tables

=============================================

DS1#sh ip mroute 239.192.10.102 count

IP Multicast Statistics

17 routes using 19672 bytes of memory

5 groups, 2.40 average sources per group

Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second

Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

================================================

#sh ip pim interface

172.x.x.x Vlan252 v2/SD 0 30 1 0.0.0.0

=============================================

DS1#sh ip ospf interface

Vlan252 is administratively down, line protocol is down

Internet Address 172.21.252.252/23, Area 0.0.0.236

Note: I don't know why my client has shut this VLAN; in my view it should be up, as it belongs to the source subnet. Any thoughts?
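If the SVI is indeed supposed to carry the source subnet, bringing it back up is a one-line change (to be confirmed with whoever shut it, of course):

DS1(config)#interface Vlan252
DS1(config-if)#no shutdown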

##### On Receiver Side######

ds1#sh ip rpf

RPF information for ? (source IP add)

RPF interface: Vlan252

RPF neighbor: ? (source IP add) - directly connected

RPF route/mask: x.x.x.0/25   <-- source subnet

RPF type: unicast (connected)

RPF recursion count: 0

Doing distance-preferred lookups across tables

===========================================

ds1#sh ip pim interface

x.x.x.124 Vlan252 v2/SD 1 30 1 155.69.252.125

==============================

ds1#sh ip ospf interface

Vlan252 is up, line protocol is up

I hope you now have a clearer picture and will be able to give more detailed suggestions.

Thanks a ton.

Regards

I don't think redundant non-RPF traffic is the cause. He is using a VS-S720-10G supervisor; non-RPF traffic should be dropped in hardware.
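A quick, read-only way to double-check what the Sup720 is doing with these punts is to list the hardware rate limiters (a general suggestion, not taken from the outputs posted above):

DS1#show mls rate-limit

The output shows the multicast-related limiters (non-RPF, FIB miss, TTL failure and so on), whether each is On or Off, and the configured packets-per-second values.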

cisco_lad2004
Level 5

What is the TTL value of the multicast traffic that is causing the high CPU?

If it's 1, it will be punted to the CPU.

You could debug on the upstream device (so you don't make a struggling router even worse) or ask the server guys to check what TTL value the stream is coming out with.
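As a stop-gap while the TTL is being checked, the Sup720 also has a hardware rate limiter for TTL-failure punts; a minimal sketch (the 100 pps value is only an illustration, and support should be verified on the SXH image in use):

DS1(config)#mls rate-limit all ttl-failure 100

This only caps how many TTL-expired packets per second reach the route processor; it does not fix the root cause.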

HTH

Sam

I agree with Sam on this. One of my customers faced the same problem a couple of months back, and we found that the application generating the multicast streams had a TTL of 1, due to which the packets were processed by the CPU. Changing the TTL to 10 in our case helped bring down the CPU.

-amit singh

Hi,

Thanks to you all for your time and interest in my issue.

I would like to share again that as long as the ACL denying 239.x.x.x is in our DS, the CPU does not go high, but the moment we take it off the CPU rises very fast.

My concern is for my two multicast channels: after keying in this ACL we are no longer able to reach those two channels, whereas when we use a group ID in 224.x.x.x there is no issue, we can reach the multicast channel and it does not cause high CPU.

Currently those two channels are in the 239.x.x.x multicast group range.

Thanks and Regards

When you use the ACL you are blocking the multicast stream, so nothing hits the CPU. When you remove it, any multicast groups with a TTL of one will hit your CPU.

So please check your TTL.

Hi,

Many thanks. Can you please tell me how I may check that?

2 options:

1 - ask the server guys who set the TTL values before they stream the multicast.

2 - run a debug on the upstream device, i.e. the one before the box with the high CPU (so you don't kill your box):

debug ip mroute 239.x.x.x
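A related option, if the TTL value itself needs to be seen (my suggestion rather than part of the advice above, so verify it on your release): debug ip mpacket prints per-packet lines that include the TTL field and can be narrowed to the group:

debug ip mpacket 239.x.x.x

Again, run it on the upstream device and keep it brief, since it logs every process-switched multicast packet.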

Alternatively, you could just ask the server guys to increase the TTL value. When they are done, remove the ACL and validate that the CPU is OK.

Let us know how it went.

Sam

See if the router is sending a lot of TTL Expired messages:

sh ip traffic | i time exceeded

If this counter is increasing rapidly, this is a strong indication.

Alex.DEPREZ
Level 1

I know this topic is old, but this is the first result on Google concerning high CPU and multicast.

I resolved a similar problem: an HD IPTV streamer flooding 32 multicast groups drove a 4500-X CPU to 100% each time I activated PIM on the interface facing the streamer. The multicast groups were 239.x.x.x; we changed them to 225.x.x.x and everything worked fine.

I am still looking for an explanation for this.
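For anyone hitting this on a 4500-X, one hedged pointer (command availability and queue names vary by IOS-XE release): the supervisor can report which CPU punt queues the traffic is landing in, which at least confirms the streams are being software switched:

show platform cpu packet statistics

If one of the forwarding queues is incrementing rapidly only while PIM is enabled on the streamer-facing interface, the multicast is being punted rather than switched in hardware.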
