multicast routing question

branfarm1
Level 4

Hi there. I have a question regarding multicast routing. I have a server that is dual-homed to a pair of 4507Rs. The 4507Rs are running HSRP for the local gateway, OSPF for routing, and PIM sparse-dense mode for multicast. The multicast source is a router on a VLAN that is shared between the two 4507Rs but is physically connected only to the secondary 4507R. Both switches have IPs in that VLAN and are running PIM, and the router is the DR for that VLAN. I have static mroutes for the sources on each 4507R, so for all intents and purposes the 4507Rs are configured identically with regard to multicast.
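
For illustration only (the interface numbers and addresses below are placeholders, not the actual config), the relevant pieces on each 4507R look roughly like this:

ip multicast-routing
!
interface Vlan10
 ip address 192.0.2.2 255.255.255.0
 ip pim sparse-dense-mode
 standby 1 ip 192.0.2.1
!
ip mroute <source-subnet> <mask> <next-hop-toward-source>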

I'm having a problem where the server randomly drops one of its multicast feeds for a minute, then picks it back up successfully. The server subscribes to seven feeds and only one drops at a time, but it's completely random which feed drops off.

Can anyone suggest some multicast troubleshooting tools I can use to help diagnose this? Or does anyone have any ideas what might be wrong?

13 Replies

Giuseppe Larosa
Hall of Fame

Hello Nyle,

On the multilayer switches, run sh ip igmp groups just after a session drops; it shows, for each group, how long the router has been in the forwarding state.

This is just to see if there is an IGMP issue here.

If the traffic volume is not high, you could work around this by configuring seven ip igmp join-group commands, one for each stream, on the SVI (VLAN interface).
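
For example, something like this on the SVI, with one join-group line per stream (the interface and group addresses below are placeholders):

interface Vlan10
 ip igmp join-group 239.1.1.1
 ip igmp join-group 239.1.1.2

Keep in mind that join-group makes the switch itself receive the streams in software, so it is only reasonable while the traffic volume is low.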

Also verify which switch is the forwarder for each group. Another possible cause is a brief duplication of the stream due to the dense-mode flood-and-prune behavior (this happens if there is no RP for the groups; otherwise they are handled in sparse mode).

The two routers should use PIM Assert messages to decide which of them forwards the traffic onto the VLAN, and they have to do this for each group.

To see whether this is the issue, you can also try shutting down the VLAN interface on one 4507R and check if the problem stops with only one router in place.
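
To check which switch is actually forwarding a given group, you could compare on both switches (the group address here is a placeholder):

show ip pim neighbor
show ip mroute <group-address> count

The count output shows whether that switch is forwarding packets for the group, and show ip pim neighbor shows who is the DR on the vlan.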

Hope to help

Giuseppe

Thanks for the response Giuseppe.

I didn't notice the problems until I configured both of my switches to participate in PIM. In fact, you helped me with my original configuration: http://forums.cisco.com/eforum/servlet/NetProf?page=netprof&type=Subscriptions&loc=.2cc156be/6&forum=Network%20Infrastructure&topic=LAN%2C%20Switching%20and%20Routing

Any ideas what I should do to resolve this? I can provide configs if necessary.

Thanks again

Hello Nyle,

If the multicast groups are treated as dense mode, I'm afraid the second router may briefly flood and then prune when it loses the PIM Assert election.

The server application could be sensitive to duplicated frames.

Please post a filtered version of the current configuration (just the parts relevant to multicast) and the sh ip route output for the sources of the 7 streams, taken on both switches (the unicast route to the source is what decides who wins the PIM Assert).

Also post the output of sh ip igmp groups.
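
For example, from both switches (replace the source address with one of the actual stream sources):

show ip igmp groups
show ip route <source-address>
show running-config | include pim
show running-config | include ip mroute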

If I remember correctly your previous post was about redundancy and multicast with two multilayer switches.

Hope to help

Giuseppe

Here are the requested outputs and configs:

Switch1#sh ip igmp groups

IGMP Connected Group Membership

Group Address Interface Uptime Expires Last Reporter

224.0.17.36 Vlan1 1d14h 00:02:06 10.17.0.20

224.0.17.37 Vlan1 1w4d 00:02:00 10.17.0.19

224.0.17.38 Vlan1 1d14h 00:02:06 10.17.0.20

224.0.17.39 Vlan1 1w4d 00:01:58 10.17.0.19

224.0.17.40 Vlan1 1d14h 00:02:04 10.17.0.20

224.0.17.41 Vlan1 1w4d 00:01:59 169.254.148.228

224.0.17.42 Vlan1 1d14h 00:02:58 10.17.0.20

224.0.17.43 Vlan1 1w4d 00:01:59 10.17.0.28

224.0.17.44 Vlan1 1d14h 00:02:02 10.17.0.20

224.0.17.45 Vlan1 2d14h 00:02:00 10.17.0.19

224.0.17.46 Vlan1 1d14h 00:02:58 10.17.0.20

224.0.17.47 Vlan1 2d14h 00:02:04 10.17.0.19

224.3.0.8 Vlan1 1d14h 00:02:05 10.17.0.20

224.3.0.9 Vlan1 2d14h 00:02:07 10.17.0.19

Switch2#sh ip igmp groups

IGMP Connected Group Membership

Group Address Interface Uptime Expires Last Reporter

224.0.17.36 Vlan1 1d14h 00:02:04 10.17.0.20

224.0.17.37 Vlan1 7w0d 00:02:08 10.17.0.19

224.0.17.38 Vlan1 1d14h 00:02:08 10.17.0.20

224.0.17.39 Vlan1 7w0d 00:02:09 169.254.148.228

224.0.17.40 Vlan1 1d14h 00:02:10 10.17.0.20

224.0.17.41 Vlan1 7w0d 00:02:04 10.17.0.19

224.0.17.42 Vlan1 1d14h 00:02:08 10.17.0.20

224.0.17.43 Vlan1 7w0d 00:02:07 10.17.0.28

224.0.17.44 Vlan1 1d14h 00:02:06 10.17.0.20

224.0.17.45 Vlan1 2d14h 00:02:06 10.17.0.19

224.0.17.46 Vlan1 1d14h 00:02:06 10.17.0.20

224.0.17.47 Vlan1 2d14h 00:02:06 10.17.0.19

224.3.0.8 Vlan1 1d14h 00:02:52 10.17.0.20

224.3.0.9 Vlan1 2d14h 00:02:50 10.17.0.19

There aren't any static routes in the unicast routing table for the sources of these feeds; I thought that was what mroutes were for? The sources for the groups in question are 206.200.5.x and 206.200.6.x.
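
For reference, the static mroutes on each switch are along these lines (the exact masks may differ):

ip mroute 206.200.5.0 255.255.255.0 10.5.0.100
ip mroute 206.200.6.0 255.255.255.0 10.5.0.100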

Thanks again for the help,

--Brandon

Hello Brandon,

Sorry, I was not sure about your name, so I checked your profile; I remember we have had other interesting discussions as well.

I apologize for this.

To understand whether these groups are served by an RP, run the following on both switches:

sh ip pim rp mapping

You have configured the switches to accept Auto-RP, but from the current show output we cannot tell whether an RP exists for these groups.

and also run, for example:

sh ip mroute 224.0.17.46

I see good stability of the groups, but some have been up for only 1 day and 14 hours.

By the way, how frequently do the stream drops occur on the server?

Hope to help

Giuseppe

Giuseppe,

When I did the sh ip pim rp mapping, I did not see an RP for the 224.0.17.x groups. I do have other multicast feeds, though, and I see RPs for them. I can send those outputs if you want.

Here are the mroutes:

Switch1#sh ip mroute 224.0.17.46

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.17.46), 1w5d/stopped, RP 0.0.0.0, flags: DC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Vlan320, Forward/Sparse-Dense, 1w4d/00:00:00, H

Vlan310, Forward/Sparse-Dense, 1w4d/00:00:00, H

Vlan300, Forward/Sparse-Dense, 1w4d/00:00:00, H

Vlan1, Forward/Sparse-Dense, 1w5d/00:00:00, H

(206.200.5.146, 224.0.17.46), 08:27:14/00:02:55, flags: T

Incoming interface: Vlan310, RPF nbr 10.5.0.100, Mroute

Outgoing interface list:

Vlan1, Forward/Sparse-Dense, 02:00:50/00:00:00, H

Vlan300, Prune/Sparse-Dense, 00:02:28/00:00:32, H

Vlan320, Prune/Sparse-Dense, 00:02:28/00:00:32, H

Switch2#show ip mroute 224.0.17.46

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.17.46), 7w0d/stopped, RP 0.0.0.0, flags: DC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Vlan1, Forward/Sparse-Dense, 7w0d/00:00:00, H

Vlan300, Forward/Sparse-Dense, 7w0d/00:00:00, H

Vlan320, Forward/Sparse-Dense, 7w0d/00:00:00, H

Vlan310, Forward/Sparse-Dense, 7w0d/00:00:00, H

(155.195.63.176, 224.0.17.46), 00:02:40/00:00:19, flags:

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Vlan310, Forward/Sparse-Dense, 00:02:40/00:00:00, H

Vlan320, Forward/Sparse-Dense, 00:02:40/00:00:00, H

Vlan300, Forward/Sparse-Dense, 00:02:40/00:00:00, H

Vlan1, Forward/Sparse-Dense, 00:02:40/00:00:00, H

(206.200.5.146, 224.0.17.46), 08:59:22/00:02:57, flags: T

Incoming interface: Vlan310, RPF nbr 10.5.0.100, Mroute

Outgoing interface list:

Vlan320, Prune/Sparse-Dense, 00:02:41/00:00:19, H

Vlan300, Prune/Sparse-Dense, 00:02:41/00:00:19, H

Vlan1, Forward/Sparse-Dense, 08:59:22/00:00:00, H

The frequency of the events is very random; I haven't been able to pick out a pattern yet. I will say, though, that I've realized this is affecting more than just these 7 feeds. I have another set of feeds that is experiencing the same issue.

I know that one solution to the problem would be to roll back the changes to have only one switch handle the multicast, but I would like to solve the technical issue so I can have better redundancy. I really appreciate your help with this.

--Brandon

Hello Brandon,

I would suggest setting up an RP for these groups so that they are treated as sparse mode.

I'm afraid that in dense mode the second switch can sometimes send duplicate frames for a short interval (the time needed to exchange PIM Asserts with the other switch), and this can cause problems for a server that receives the duplicated packets.

The simplest way to do it would be to use one of the two switches as the RP.

Use a standard ACL to define the groups to be matched:

access-list 25 permit 224.0.17.0 0.0.0.255

ip pim rp-address <rp-address> 25

Do this on both switches, with the same RP address on both.

Then wait some time and see whether the groups, now in sparse mode, stop suffering from this issue.
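
Once the RP is in place, something like this should confirm the change (using one of your groups as an example):

show ip pim rp mapping
show ip mroute 224.0.17.46

You should then see an RP address listed for the 224.0.17.0/24 range and the S (sparse) flag instead of D (dense) on the (*,G) entries.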

Hope to help

Giuseppe

Thanks for the quick response.

So what you're saying is to add this config to each switch:

Switch1:

int loopback0

ip address 192.168.10.1 255.255.255.255

no shut

access-list 25 permit 224.0.17.0 0.0.0.255

ip pim rp-address 192.168.10.1 25

Switch2:

int loopback0

ip address 192.168.10.1 255.255.255.255

access-list 25 permit 224.0.17.0 0.0.0.255

ip pim rp-address 192.168.10.1 25

Is that right?

Giuseppe,

One more question: Do you think dense mode is the right way to go here? It seems like I should be running sparse mode, since my hosts are making the requests for the data. What do you think?

--Brandon

Hello Brandon,

Hosts always use IGMP reports to signal that they want to receive traffic, both in dense mode and in sparse mode.

The proposal to move these groups to sparse mode is intended to solve the issue that could be caused by temporary packet duplication.

In dense mode a router's strategy is: first I send the traffic out; if another router complains about receiving unwanted traffic, or I don't detect any receivers out of an interface, I stop sending traffic for that group (flood and prune).

In sparse mode the concept is the opposite: I don't send traffic out until a downstream router (with a PIM Join) or end hosts (with IGMP reports) tell me they want it (explicit join).

Hope to help

Giuseppe

Hello Brandon,

Just one difference: I was proposing to use only one of the two switches as the RP, and I was indeed unclear about that.

So the IP address used as the RP should actually be that of a loopback on only one router:

Just remove the loopback0 config from switch1.

Let's make the PIM DR also the RP for these groups.

Hope to help

Giuseppe

Giuseppe,

Should I make the loopback address part of the server's VLAN, or of the VLAN that is connected to the router we receive the multicast stream from?

Hello Brandon,

Neither: the loopback IP address sits in its own subnet. You can use a loopback you are already using as the OSPF router-id, for example.

What is important is that this IP address is advertised in the unicast routing protocol in use (OSPF, I suppose).

For this reason only one loopback is used, so that the RP-to-group mapping is made without any ambiguity.
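
Putting it together, and assuming the loopback stays on switch2 only and that 192.168.10.1 is advertised in OSPF, the configuration would be roughly:

Switch2 only:

interface Loopback0
 ip address 192.168.10.1 255.255.255.255

Both switches:

access-list 25 permit 224.0.17.0 0.0.0.255
ip pim rp-address 192.168.10.1 25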

Hope to help

Giuseppe
