Cisco Support Community

New Member

Strange multicast problem when using Altiris Deployment Server

Hi,

We use the PXE boot function on our desktop PCs and laptops. Multicast TFTP is enabled within the BIOS to grab a boot file, and only recently we have started seeing an "Open TFTP Timeout" during the bootup process.

I have checked over the IGMP / IP PIM configuration across our network and everything looks fine. However, when I check the IGMP group that the client is trying to join I see address 224.1.1.2, yet when I look at the IGMP group on the port to which the Altiris Deployment server connects I only see group 225.1.2.3.

A colleague has shown me the configuration for the MTFTP server on the deployment server, and the group address is set to 224.1.1.0.

Shouldn't the group address on the Altiris deployment server also be 224.1.1.2? Or is the 225.1.2.3 address within the 224.1.1.0 IGMP group, and the problem in fact something to do with the network?

Any help would be appreciated.

Chris

48 REPLIES
Hall of Fame Super Silver

Re: Strange multicast problem when using Altiris Deployment Serv

Hello Chris,

>> However, when I check the IGMP group that the client is trying to join I see address 224.1.1.2. Yet when I look at the IGMP group on the port to which the Altiris Deployment server connects I only see group 225.1.2.3.

it looks like the client is trying to get traffic from a different group. Note that when you check IGMP membership you are checking receiver interest.

So the client wants to receive on 224.1.1.2, while the membership seen on the server port means the server itself is interested in receiving traffic on 225.1.2.3.

Can 225.1.2.3 fit with 224.1.1.0?

No: as you can see, the first octet, 225, is different from 224, so it is a different group entirely.
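As a side note, you can also ask whether two groups could collide at layer 2: the RFC 1112 IP-to-MAC mapping keeps only the low 23 bits of the group address, so 32 IPv4 groups alias onto each Ethernet MAC. A minimal Python sketch of that mapping (illustrative only, not a Cisco tool):

```python
import ipaddress

def multicast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet MAC per RFC 1112:
    01:00:5e plus the low 23 bits of the group address."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("225.1.2.3"))  # 01:00:5e:01:02:03
print(multicast_mac("224.1.1.0"))  # 01:00:5e:01:01:00 -- no overlap with the above
print(multicast_mac("224.1.1.3") == multicast_mac("225.1.1.3"))  # True: these two DO alias
```

So 225.1.2.3 and 224.1.1.0 are distinct both at L3 and at L2, whereas a pair like 224.1.1.3 and 225.1.1.3 would share a MAC on the wire.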

check with

sh ip mroute 225.1.2.3

if there is any source sending packets to this group

(I mean (S,225.1.2.3) entries; the (*,225.1.2.3) entry should be present in any case)

sh ip pim rp mapping 225.1.2.3

is there an RP for this address?

if there is not, you have a problem

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Altiris Deployment Serv

Hi Giuseppe and thanks for your reply.

When I look on the first hop distribution switch (the switch block that contains the server) I see the following:

(*, 225.1.2.3), 4d17h/00:02:55, RP 172.16.255.4, flags: SJC

Incoming interface: Vlan227, RPF nbr 172.16.227.250, Partial-SC

Outgoing interface list:

Vlan192, Forward/Sparse-Dense, 4d17h/00:02:55, H

The RP mapping shows the following:

ci_t2_c200_sfdist_sw1#sh ip pim rp mapping 225.1.2.3

PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4

RP 172.16.255.4 (?), v2v1

Info source: 172.16.255.7 (?), elected via Auto-RP

Uptime: 4d17h, expires: 00:02:50

When I check the RP itself at 172.16.255.4, I see the following:

(*, 225.1.2.3), 4w0d/00:02:57, RP 172.16.255.4, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Vlan227, Forward/Sparse-Dense, 4d17h/00:02:57

Everything looks fine for the group.

I have just looked at the IGMP group the client is trying to join, and this has now changed to 224.1.1.3 all by itself. I have checked the core switch and can see the following:

ci_t2_65_c200_core2#sh ip mroute 224.1.1.3

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report,

Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group

V - RD & Vector, v - Vector

Outgoing interface flags: H - Hardware switched, A - Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.3), 00:23:23/00:03:24, RP 172.16.255.4, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Vlan232, Forward/Sparse-Dense, 00:00:05/00:03:24

Vlan224, Forward/Sparse-Dense, 00:09:32/00:02:49

(172.16.192.49, 224.1.1.3), 00:23:23/00:02:16, flags:

Incoming interface: Vlan228, RPF nbr 172.16.228.251

Outgoing interface list:

Vlan232, Forward/Sparse-Dense, 00:00:05/00:03:24

Vlan224, Forward/Sparse-Dense, 00:09:32/00:02:49

Device 172.16.192.49 is the server address... So the server is a member of the 224.1.1.3 group, yet when I check on the switch port it is 225.1.2.3.

I just do not understand this.

Hall of Fame Super Silver

Re: Strange multicast problem when using Altiris Deployment Serv

Hello Chris,

>>

Device 172.16.192.49 is the server address... So the server is a member of the 224.1.1.3 group, yet when I check on the switch port it is 225.1.2.3.

I just do not understand this.

note:

a source S for group G does not need to be a member of group G; it is simply a source of packets sent with destination G

so the server can be a multicast receiver in its own right, and it may have joined a totally different multicast group G2 not related to the first.

This is what is happening here.

the following output shows the source is active:

(172.16.192.49, 224.1.1.3), 00:23:23/00:02:16, flags:

Incoming interface: Vlan228, RPF nbr 172.16.228.251

Outgoing interface list:

Vlan232, Forward/Sparse-Dense, 00:00:05/00:03:24

Vlan224, Forward/Sparse-Dense, 00:09:32/00:02:49

Can the host receive on 224.1.1.3?

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Altiris Deployment Serv

Can the host receive on 224.1.1.3?

Do you mean the server 172.16.192.49? How would I check this on the network? If it is a source of the multicast group (S,G), would it not automatically be able to receive on the same group?

New Member

Re: Strange multicast problem when using Altiris Deployment Serv

I think I may have found a problem.

When I trace the S,G multicast tree across the backbone I see the following:

1) On the Server distribution switch (switch block containing the server 172.16.192.49)

ci_t1_c050_sfdist_sw1#sh ip mroute 224.1.1.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.3), 16:53:00/stopped, RP 172.16.255.4, flags: SP
  Incoming interface: Vlan228, RPF nbr 172.16.228.250, RPF-MFD
  Outgoing interface list: Null

(172.16.192.49, 224.1.1.3), 16:53:00/00:03:21, flags: T
  Incoming interface: Vlan192, RPF nbr 0.0.0.0, RPF-MFD
  Outgoing interface list:
    Vlan228, Forward/Sparse-Dense, 16:08:12/00:02:49, H

2) On the Core switch (also the RP for the group)

ci_t2_65_c200_core2#sh ip mroute 224.1.1.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.3), 16:54:57/00:03:18, RP 172.16.255.4, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan232, Forward/Sparse-Dense, 00:00:37/00:02:52
    Vlan222, Forward/Sparse-Dense, 00:05:46/00:00:38
    Vlan224, Forward/Sparse-Dense, 16:08:53/00:03:18

(172.16.192.49, 224.1.1.3), 16:54:57/00:01:53, flags:
  Incoming interface: Vlan228, RPF nbr 172.16.228.251
  Outgoing interface list:
    Vlan232, Forward/Sparse-Dense, 00:00:37/00:02:52
    Vlan222, Forward/Sparse-Dense, 00:05:46/00:00:38
   Vlan224, Forward/Sparse-Dense, 16:08:53/00:03:18

3) On the distribution switch containing the clients using PXE Boot and the Multicast TFTP:

ci_t1_3a1_dist_sw1#sh ip mroute 224.1.1.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.3), 16:42:50/00:02:38, RP 172.16.255.4, flags: SJC
  Incoming interface: Vlan224, RPF nbr 172.16.224.250, Partial-SC
  Outgoing interface list:
    Vlan5, Forward/Sparse-Dense, 00:00:34/00:02:38, H
    Vlan67, Forward/Sparse-Dense, 00:01:10/00:01:49, H
    Vlan8, Forward/Sparse-Dense, 00:02:33/00:00:41, H
    Vlan6, Forward/Sparse-Dense, 00:07:53/00:02:22, H
    Vlan68, Forward/Sparse-Dense, 16:10:37/00:02:28, H

On the above switch you can see the end hosts joining the group 224.1.1.3, but there is no (S,G) entry for the server.

We had an issue a few weeks ago where a rogue DHCP device dished out an IP address used by one of our HSRP groups and caused a few issues; this seems to coincide with when the multicasting stopped for this particular service. Could this have caused an issue with the multicast stream?

Do you think clearing the mroute cache on the backbone switches might resolve the issue?

Hall of Fame Super Silver

Re: Strange multicast problem when using Altiris Deployment Serv

Hello Chris,

a cumulative answer to your two posts

>> How would I check this on the network? If it is a source of the multicast group (S,G), would it not automatically be able to receive on the same group?

no; as noted in my previous post, a source is not a member of the group. It can be, but being a source alone is not enough to make it a member of group G.

I meant check on the intended receiver; that is what counts.
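The asymmetry can be seen with plain UDP sockets: a sender just addresses datagrams to the group, while a receiver must explicitly join, which is what generates the IGMP membership report the switch sees. A minimal sketch (the port number is hypothetical, not the real MTFTP one):

```python
import socket
import struct

GROUP = "224.1.1.3"
PORT = 5000  # hypothetical port, for illustration only

# Sender: no group membership needed -- just send to the group address.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
# tx.sendto(b"payload", (GROUP, PORT))  # this alone makes the host a source S for (S,G)

# Receiver: must join the group, which triggers an IGMP membership report.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # may fail on hosts with no multicast-capable interface
```

So a host can send to 224.1.1.3 all day without ever appearing in IGMP membership for that group; only the join path above creates receiver state.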

>> We had an issue a few weeks ago where a rogue DHCP device dished out an IP address used by one of our HSRP groups and caused a few issues; this seems to coincide with when the multicasting stopped for this particular service. Could this have caused an issue with the multicast stream?

Yes, there can be issues in the interaction of HSRP and multicast.

I agree that there may be a relationship between the two events.

Edit:

Incoming interface: Vlan224, RPF nbr 172.16.224.250, Partial-SC

this partial-SC flag is the sign of a problem

you could use debug ip pim to see error messages

can you check on the last switch

with

sh ip rpf 172.16.255.4

sh ip rpf 172.16.192.49

What should the RPF neighbor be toward the RP and toward the source?

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Altiris Deployment Serv

Hi ,

Output of show commands on the last distribution switch before hitting the receiver:

ci_t1_3a1_dist_sw1#sh ip rpf 172.16.255.4
  RPF information for ? (172.16.255.4)
  RPF interface: Vlan224
  RPF neighbor: ? (172.16.224.250)
  RPF route/mask: 172.16.255.4/32
  RPF type: unicast (ospf 1)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables

ci_t1_3a1_dist_sw1#sh ip rpf 172.16.192.49
  RPF information for ? (172.16.192.49)
  RPF interface: Vlan224
  RPF neighbor: ? (172.16.224.250)
  RPF route/mask: 172.16.192.0/24
  RPF type: unicast (ospf 1)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables

I haven't used this command before. The RP for the group is 172.16.255.4, yet the RPF neighbour is an interface on the RP itself. We have dual distribution switches / cores throughout the network, so RPF should be blocking the stream from replicating across multiple paths; should I therefore expect the RPF neighbour to be the redundant / diverse path? Does the above indicate that the Reverse Path Forwarding mechanism is actually blocking the path to the RP? If so, this would explain why the receivers aren't seeing the stream.
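The RPF mechanism in question can be sketched as a longest-prefix lookup against the unicast routing table: traffic from a source is only accepted on the interface the unicast route to that source points out of. The routing entries below are hypothetical, modeled on the sh ip rpf outputs above:

```python
import ipaddress

# Hypothetical routing table modeled on the thread's outputs: prefix -> (interface, next hop)
ROUTES = {
    "172.16.255.4/32": ("Vlan224", "172.16.224.250"),   # route to the RP
    "172.16.192.0/24": ("Vlan224", "172.16.224.250"),   # route to the server subnet
}

def rpf_lookup(source: str):
    """Longest-prefix match, as 'sh ip rpf' does against the unicast table."""
    addr = ipaddress.IPv4Address(source)
    best = max(
        (ipaddress.IPv4Network(p) for p in ROUTES if addr in ipaddress.IPv4Network(p)),
        key=lambda n: n.prefixlen,
        default=None,
    )
    return ROUTES[str(best)] if best else None

def rpf_check(source: str, arrival_iface: str) -> bool:
    """Accept multicast from `source` only if it arrived on the RPF interface."""
    entry = rpf_lookup(source)
    return entry is not None and entry[0] == arrival_iface

print(rpf_check("172.16.192.49", "Vlan224"))  # True: matches the sh ip rpf output above
print(rpf_check("172.16.192.49", "Vlan223"))  # False: such packets would be RPF-dropped
```

In this toy model, as in the real outputs, the RPF interface toward both the RP and the source is Vlan224, so RPF itself is not blocking anything on this switch.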


Hall of Fame Super Silver

Re: Strange multicast problem when using Altiris Deployment Serv

Hello Chris,

RPF checks look fine

let's go on with

sh ip pim neighbor

and

sh ip pim interface

on the last switch

edit:

another thought

there is a companion switch in the client VLAN that could be the PIM DR on the user segments

the non-DR switch should have only the (*,G) entry

you could try changing the HSRP active router on the user LAN segment to see if there is any effect
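For reference, the PIM DR election works per segment: the neighbour with the highest DR priority wins, and the highest IP address breaks ties. A toy sketch with hypothetical neighbour data (not taken from this network):

```python
def elect_dr(neighbors):
    """neighbors: list of (ip_tuple, dr_priority); returns the winner.
    Highest DR priority wins; the highest IP address breaks ties."""
    return max(neighbors, key=lambda n: (n[1], n[0]))

# Two switches on one user segment, both at the default priority of 1.
segment = [((172, 16, 68, 251), 1), ((172, 16, 68, 252), 1)]
print(elect_dr(segment))  # equal priority, so the higher IP (.252) wins
```

With equal priorities, as in the sh ip pim neighbor output above (all "1"), whichever switch has the numerically higher interface address is the DR on that VLAN.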

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Altiris Deployment Serv

As requested (many thanks for your help!)

ci_t1_3a1_dist_sw1#sh ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.2.252      Vlan2                    6w6d/00:01:19     v2    1 / S P
172.16.3.252      Vlan3                    6w6d/00:01:34     v2    1 / S P
172.16.4.252      Vlan4                    6w6d/00:01:28     v2    1 / S P
172.16.5.252      Vlan5                    6w6d/00:01:28     v2    1 / S P
172.16.6.252      Vlan6                    6w6d/00:01:41     v2    1 / S P
172.16.7.252      Vlan7                    6w6d/00:01:15     v2    1 / S P
172.16.8.252      Vlan8                    6w6d/00:01:39     v2    1 / S P
172.16.10.252     Vlan10                   6w6d/00:01:17     v2    1 / S P
172.16.18.252     Vlan18                   6w6d/00:01:31     v2    1 / S P
172.16.20.252     Vlan20                   6w6d/00:01:27     v2    1 / S P
172.16.25.252     Vlan25                   6w6d/00:01:33     v2    1 / S P
172.16.27.252     Vlan27                   6w6d/00:01:30     v2    1 / S P
172.16.64.252     Vlan64                   6w6d/00:01:38     v2    1 / S P
172.16.65.252     Vlan65                   6w6d/00:01:31     v2    1 / S P
172.16.66.252     Vlan66                   6w6d/00:01:40     v2    1 / S P
172.16.67.252     Vlan67                   6w6d/00:01:20     v2    1 / S P
172.16.68.252     Vlan68                   6w6d/00:01:18     v2    1 / S P
172.16.69.252     Vlan69                   6w6d/00:01:16     v2    1 / S P
172.16.70.252     Vlan70                   6w6d/00:01:39     v2    1 / S P
172.16.71.252     Vlan71                   6w6d/00:01:43     v2    1 / S P
172.16.72.252     Vlan72                   6w6d/00:01:34     v2    1 / S P
172.16.73.252     Vlan73                   6w6d/00:01:32     v2    1 / S P
172.16.74.252     Vlan74                   6w6d/00:01:20     v2    1 / S P
172.16.76.252     Vlan76                   6w6d/00:01:39     v2    1 / S P
172.16.82.252     Vlan82                   6w6d/00:01:33     v2    1 / S P
172.16.83.252     Vlan83                   6w6d/00:01:30     v2    1 / S P
172.16.84.252     Vlan84                   6w6d/00:01:18     v2    1 / S P
172.16.85.252     Vlan85                   6w6d/00:01:22     v2    1 / S P
172.16.88.252     Vlan88                   6w6d/00:01:37     v2    1 / S P
172.16.89.252     Vlan89                   6w6d/00:01:23     v2    1 / S P
172.16.90.252     Vlan90                   6w6d/00:01:17     v2    1 / S P
172.16.91.252     Vlan91                   6w6d/00:01:43     v2    1 / S P
172.16.92.252     Vlan92                   6w6d/00:01:17     v2    1 / S P
172.16.93.252     Vlan93                   6w6d/00:01:21     v2    1 / S P
172.16.94.252     Vlan94                   6w6d/00:01:18     v2    1 / S P
172.16.95.252     Vlan95                   6w6d/00:01:24     v2    1 / S P
172.16.96.252     Vlan96                   6w6d/00:01:21     v2    1 / S P
172.16.97.252     Vlan97                   6w6d/00:01:19     v2    1 / S P
172.16.98.252     Vlan98                   6w6d/00:01:16     v2    1 / S P
172.16.99.252     Vlan99                   6w6d/00:01:43     v2    1 / S P
172.16.102.252    Vlan102                  6w6d/00:01:37     v2    1 / S P
172.16.104.252    Vlan104                  6w6d/00:01:16     v2    1 / S P
172.16.105.252    Vlan105                  6w6d/00:01:26     v2    1 / S P
172.16.106.252    Vlan106                  3w2d/00:01:42     v2    1 / S P
172.16.223.250    Vlan223                  4w0d/00:01:37     v2    1 / S P
172.16.224.250    Vlan224                  4w1d/00:01:41     v2    1 / S P

Vlans 223 and 224 (3rd octet) are the SVIs that form OSPF adjacencies with the two Cores.

Hall of Fame Super Silver

Re: Strange multicast problem when using Altiris Deployment Serv

Hello Chris,

I think you should use debug ip pim

to see what happens on the distribution switch

then have a PC join the group and watch the output

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Altiris Deployment Serv

I let an existing entry in the mroute cache time out before turning on a test machine; output below.

ci_t1_3a1_dist_sw1#debug ip pim 224.1.1.3
PIM debugging is on

004216: May 12 13:36:52: PIM(0): Received RP-Reachable on Vlan224 from 172.16.255.4
004217: May 12 13:36:52: PIM(0): Received RP-Reachable on Vlan224 from 172.16.255.4
004218: May 12 13:36:52:      for group 224.1.1.3
004219: May 12 13:36:52: PIM(0): Update RP expiration timer (270 sec) for 224.1.1.3
004220: May 12 13:36:52: PIM(0): Forward RP-reachability for 224.1.1.3 on Vlan68

004221: May 12 13:37:15: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3
004222: May 12 13:37:15: PIM(0): Insert (*,224.1.1.3) join in nbr 172.16.224.250's queue
004223: May 12 13:37:15: PIM(0): Building Join/Prune packet for nbr 172.16.224.250
004224: May 12 13:37:15: PIM(0):  Adding v2 (172.16.255.4/32, 224.1.1.3), WC-bit, RPT-bit, S-bit Join
004225: May 12 13:37:15: PIM(0): Send v2 join/prune to 172.16.224.250 (Vlan224)

004226: May 12 13:38:14: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3
004227: May 12 13:38:14: PIM(0): Insert (*,224.1.1.3) join in nbr 172.16.224.250's queue
004228: May 12 13:38:14: PIM(0): Building Join/Prune packet for nbr 172.16.224.250
004229: May 12 13:38:14: PIM(0):  Adding v2 (172.16.255.4/32, 224.1.1.3), WC-bit, RPT-bit, S-bit Join
004230: May 12 13:38:14: PIM(0): Send v2 join/prune to 172.16.224.250 (Vlan224)

004231: May 12 13:38:22: PIM(0): Received RP-Reachable on Vlan224 from 172.16.255.4
004232: May 12 13:38:22: PIM(0): Received RP-Reachable on Vlan224 from 172.16.255.4
004233: May 12 13:38:22:      for group 224.1.1.3
004234: May 12 13:38:22: PIM(0): Update RP expiration timer (270 sec) for 224.1.1.3
004235: May 12 13:38:22: PIM(0): Forward RP-reachability for 224.1.1.3 on Vlan68

004236: May 12 13:38:23: PIM(0): Insert (172.16.255.4,224.1.1.3) prune in nbr 172.16.224.250's queue
004237: May 12 13:38:23: PIM(0): Building Join/Prune packet for nbr 172.16.224.250
004238: May 12 13:38:23: PIM(0):  Adding v2 (172.16.255.4/32, 224.1.1.3), WC-bit, RPT-bit, S-bit Prune
004239: May 12 13:38:23: PIM(0): Send v2 join/prune to 172.16.224.250 (Vlan224)

ci_t1_3a1_dist_sw1#sh ip mroute 224.1.1.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.3), 00:09:06/00:02:55, RP 172.16.255.4, flags: SP
  Incoming interface: Vlan224, RPF nbr 172.16.224.250, RPF-MFD
  Outgoing interface list: Null

<>

004240: May 12 13:38:55: PIM(0): Building Triggered (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3
004241: May 12 13:38:55: PIM(0): Insert (*,224.1.1.3) join in nbr 172.16.224.250's queue
004242: May 12 13:38:55: PIM(0): Building Join/Prune packet for nbr 172.16.224.250
004243: May 12 13:38:55: PIM(0):  Adding v2 (172.16.255.4/32, 224.1.1.3), WC-bit, RPT-bit, S-bit Join
004244: May 12 13:38:55: PIM(0): Send v2 join/prune to 172.16.224.250 (Vlan224)

004245: May 12 13:39:14: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3
004246: May 12 13:39:14: PIM(0): Insert (*,224.1.1.3) join in nbr 172.16.224.250's queue
004247: May 12 13:39:14: PIM(0): Building Join/Prune packet for nbr 172.16.224.250
004248: May 12 13:39:14: PIM(0):  Adding v2 (172.16.255.4/32, 224.1.1.3), WC-bit, RPT-bit, S-bit Join
004249: May 12 13:39:14: PIM(0): Send v2 join/prune to 172.16.224.250 (Vlan224)

ci_t1_3a1_dist_sw1#sh ip mroute 224.1.1.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.3), 00:17:49/00:00:25, RP 172.16.255.4, flags: SJC
  Incoming interface: Vlan224, RPF nbr 172.16.224.250, Partial-SC
  Outgoing interface list:
    Vlan68, Forward/Sparse-Dense, 00:08:16/00:00:25, H

Hall of Fame Super Silver

Re: Strange multicast problem when using Altiris Deployment Serv

Hello Chris,

if you can, repeat the same test on the core switch so we can see the same debug output on it

we can see that the switch attempts to prune from the shared tree and to join the source-based tree, but something goes wrong and it does not reach forward state for the source-based tree

>> 004240: May 12 13:38:55: PIM(0): Building Triggered (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3
004241: May 12 13:38:55: PIM(0): Insert (*,224.1.1.3) join in nbr 172.16.224.250's queue
004242: May 12 13:38:55: PIM(0): Building Join/Prune packet for nbr 172.16.224.250
>> 004243: May 12 13:38:55: PIM(0): Adding v2 (172.16.255.4/32, 224.1.1.3), WC-bit, RPT-bit, S-bit Join
004244: May 12 13:38:55: PIM(0): Send v2 join/prune to 172.16.224.250 (Vlan224)

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Altiris Deployment Serv

Hi Giuseppe,

The next debug, showing PIM on the distribution switch where the receiver / client is, and on the Core / RP switch:

Distribution Switch where the receiver is

---------------------------------------------------------

004250: May 13 09:48:49: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3
004251: May 13 09:48:49: PIM(0): Insert (*,224.1.1.3) join in nbr 172.16.224.250's queue
004252: May 13 09:48:49: PIM(0): Building Join/Prune packet for nbr 172.16.224.250
004253: May 13 09:48:49: PIM(0):  Adding v2 (172.16.255.4/32, 224.1.1.3), WC-bit, RPT-bit, S-bit Join
004254: May 13 09:48:49: PIM(0): Send v2 join/prune to 172.16.224.250 (Vlan224)

RP and Core switch

-----------------------------

000654: .May 13 09:48:49: PIM(0): Received v2 Join/Prune on Vlan222 from 172.16.222.251, to us
000655: .May 13 09:48:49: PIM(0): Join-list: (*, 224.1.1.3), RPT-bit set, WC-bit set, S-bit set
000656: .May 13 09:48:49: PIM(0): Update Vlan222/172.16.222.251 to (*, 224.1.1.3), Forward state, by PIM *G Join
000657: .May 13 09:48:49: PIM(0): Update Vlan222/172.16.222.251 to (172.16.192.49, 224.1.1.3), Forward state, by PIM *G Join
000658: .May 13 09:48:49: PIM(0): Received v2 Join/Prune on Vlan224 from 172.16.224.251, to us
000659: .May 13 09:48:49: PIM(0): Join-list: (*, 224.1.1.3), RPT-bit set, WC-bit set, S-bit set

000660: .May 13 09:48:49: PIM(0): Update Vlan224/172.16.224.251 to (*, 224.1.1.3), Forward state, by PIM *G Join
000661: .May 13 09:48:49: PIM(0): Update Vlan224/172.16.224.251 to (172.16.192.49, 224.1.1.3), Forward state, by PIM *G Join
000662: .May 13 09:48:50: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3

000663: .May 13 09:48:50: PIM(0): Insert (172.16.192.49,224.1.1.3) join in nbr 172.16.228.251's queue
000664: .May 13 09:48:50: PIM(0): Building Join/Prune packet for nbr 172.16.228.251
000665: .May 13 09:48:50: PIM(0):  Adding v2 (172.16.192.49/32, 224.1.1.3), S-bit Join
000666: .May 13 09:48:50: PIM(0): Send v2 join/prune to 172.16.228.251 (Vlan228)

000667: .May 13 09:48:53: PIM(0): Received v2 Register on Vlan227 from 172.16.227.251
000668: .May 13 09:48:53:      (Data-header) for 172.16.192.49, group 224.1.1.3
000669: .May 13 09:48:53: PIM(0): Send v2 Register-Stop to 172.16.227.251 for 172.16.192.49, group 224.1.1.3
000670: .May 13 09:48:53: PIM(0): Insert (172.16.192.49,224.1.1.3) join in nbr 172.16.228.251's queue
000671: .May 13 09:48:53: PIM(0): Building Join/Prune packet for nbr 172.16.228.251
000672: .May 13 09:48:53: PIM(0):  Adding v2 (172.16.192.49/32, 224.1.1.3), S-bit Join
000673: .May 13 09:48:53: PIM(0): Send v2 join/prune to 172.16.228.251 (Vlan228)

Hall of Fame Super Silver

Re: Strange multicast problem when using Altiris Deployment Serv

Hello Chris,

sorry for the late answer

the RP device is behaving correctly:

000658: .May 13 09:48:49: PIM(0): Received v2 Join/Prune on Vlan224 from 172.16.224.251, to us
000659: .May 13 09:48:49: PIM(0): Join-list: (*, 224.1.1.3), RPT-bit set, WC-bit set, S-bit set

000660: .May 13 09:48:49: PIM(0): Update Vlan224/172.16.224.251 to (*, 224.1.1.3), Forward state, by PIM *G Join
000661: .May 13 09:48:49: PIM(0): Update Vlan224/172.16.224.251 to (172.16.192.49, 224.1.1.3), Forward state, by PIM *G Join

just to know: in vlan 68, where the users are, is there only the last switch, or are there two switches providing L3 services to users in this vlan?

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Altiris Deployment Serv

Hi Giuseppe,

There are two distribution switches that provide layer 3 to the receivers / clients of the multicast stream. The switch I have been debugging is the one containing the active HSRP gateway for Vlan 68.

When I look at the mroute information for 224.1.1.3 on the standby switch I see the following:

ci_t1_c075_dist_sw1#sh ip mroute 224.1.1.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.3), 00:25:08/00:01:38, RP 172.16.255.4, flags: SP
  Incoming interface: Vlan211, RPF nbr 172.16.211.250, RPF-MFD
  Outgoing interface list: Null

I just debugged PIM on the standby switch and saw the following:

003387: .May 14 07:01:27: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3

003388: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan68 from 172.16.255.4
003389: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan68 from 172.16.255.4
003390: .May 14 07:02:19:      for group 224.1.1.3
003391: .May 14 07:02:19: PIM(0): Update RP expiration timer (270 sec) for 224.1.1.3
003392: .May 14 07:02:19: PIM(0): Not RPF interface, group 224.1.1.3
003393: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan82 from 172.16.255.4
003394: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan82 from 172.16.255.4
003395: .May 14 07:02:19:      for group 224.1.1.3
003396: .May 14 07:02:19: PIM(0):   Duplicate RP-reachable from 172.16.255.4 for 224.1.1.3
003397: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan8 from 172.16.255.4
003398: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan8 from 172.16.255.4
003399: .May 14 07:02:19:      for group 224.1.1.3
003400: .May 14 07:02:19: PIM(0):   Duplicate RP-reachable from 172.16.255.4 for 224.1.1.3
003401: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan5 from 172.16.255.4
003402: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan5 from 172.16.255.4
003403: .May 14 07:02:19:      for group 224.1.1.3
003404: .May 14 07:02:19: PIM(0):   Duplicate RP-reachable from 172.16.255.4 for 224.1.1.3
003405: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan7 from 172.16.255.4
ci_t1_c075_dist_sw1#
003406: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan7 from 172.16.255.4
003407: .May 14 07:02:19:      for group 224.1.1.3
003408: .May 14 07:02:19: PIM(0):   Duplicate RP-reachable from 172.16.255.4 for 224.1.1.3
ci_t1_c075_dist_sw1#
003409: .May 14 07:02:27: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3

What does "Duplicate RP-reachable" mean? Could this be a problem?

Hall of Fame Super Silver

Re: Strange multicast problem when using Altiris Deployment Serv

Hello Chris,

I cannot see your last post

any news?

also, what platform and what IOS image is running on the last switch (the one affected)?

is IGMP snooping enabled on vlan 68?

I'm thinking of possible bugs (I know it was working before, but sometimes they are triggered by network changes)

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Altiris Deployment Serv

Hi Giuseppe,

I have copied my last post in again below so you can read it. We run the same IOS / platform across the backbone: Sup720-3B on version 12.2(33)SXH6.

There are two distribution switches that provide layer 3 to the receivers / clients of the multicast stream. The switch I have been debugging is the one containing the active HSRP gateway for Vlan 68.

When I look at the mroute information for 224.1.1.3 on the standby switch I see the following:

ci_t1_c075_dist_sw1#sh ip mroute 224.1.1.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.3), 00:25:08/00:01:38, RP 172.16.255.4, flags: SP
  Incoming interface: Vlan211, RPF nbr 172.16.211.250, RPF-MFD
  Outgoing interface list: Null

I just debugged PIM on the standby switch and saw the following:

003387: .May 14 07:01:27: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3

003388: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan68 from 172.16.255.4
003389: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan68 from 172.16.255.4
003390: .May 14 07:02:19:      for group 224.1.1.3
003391: .May 14 07:02:19: PIM(0): Update RP expiration timer (270 sec) for 224.1.1.3
003392: .May 14 07:02:19: PIM(0): Not RPF interface, group 224.1.1.3
003393: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan82 from 172.16.255.4
003394: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan82 from 172.16.255.4
003395: .May 14 07:02:19:      for group 224.1.1.3
003396: .May 14 07:02:19: PIM(0):   Duplicate RP-reachable from 172.16.255.4 for 224.1.1.3
003397: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan8 from 172.16.255.4
003398: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan8 from 172.16.255.4
003399: .May 14 07:02:19:      for group 224.1.1.3
003400: .May 14 07:02:19: PIM(0):   Duplicate RP-reachable from 172.16.255.4 for 224.1.1.3
003401: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan5 from 172.16.255.4
003402: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan5 from 172.16.255.4
003403: .May 14 07:02:19:      for group 224.1.1.3
003404: .May 14 07:02:19: PIM(0):   Duplicate RP-reachable from 172.16.255.4 for 224.1.1.3
003405: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan7 from 172.16.255.4
ci_t1_c075_dist_sw1#
003406: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan7 from 172.16.255.4
003407: .May 14 07:02:19:      for group 224.1.1.3
003408: .May 14 07:02:19: PIM(0):   Duplicate RP-reachable from 172.16.255.4 for 224.1.1.3
ci_t1_c075_dist_sw1#
003409: .May 14 07:02:27: PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.1.1.3

What does the "duplicate RP-reachable" mean? Could this be a problem?

Hall of Fame Super Silver

Re: Strange multicast problem when using Alteris Deployment Serv

Hello Chris,

I've discovered that the problem is related to the specific web browser I use.

It is correct that the standby switch, if it is not the PIM DR on the segment (you can check this with sh ip pim neighbors) and does not provide a better path to the source of the multicast traffic, shouldn't have interface Vlan68 in its outgoing interface list.

About the other messages:

003402: .May 14 07:02:19: PIM(0): Received RP-Reachable on Vlan5 from 172.16.255.4
003403: .May 14 07:02:19:      for group 224.1.1.3

003404: .May 14 07:02:19: PIM(0):   Duplicate RP-reachable from 172.16.255.4 for 224.1.1.3

My guess is that 172.16.255.4 is the loopback address of the companion switch; the two devices share multiple LAN segments (different VLANs / IP subnets).

For a reason that we haven't understood up to now, the companion switch (the one that I've called the last switch in some of my previous posts) is stuck on the shared tree in PIM sparse mode and is not able to join the source-specific tree (hence the Partial-SC in the sh ip mroute output).

Messages like the one above show that something strange is indeed happening there: the standby switch receives PIM messages from the companion about the involved group (224.1.1.3) on other client VLANs.

I think this is not the root cause but another symptom that the affected switch is not behaving correctly.

I would agree with clearing ip mroute on it as a start, and if that is not enough you could consider a reload.

Another possible action plan: make the standby switch the PIM DR on the segment using the ip pim dr-priority command in interface configuration mode and see if this solves it. That can buy you time to handle the misbehaving switch as described in the previous sentence.
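To illustrate, the DR change would look something like the snippet below (the interface name matches the thread, but the priority value is just an example; the default dr-priority is 1 and the highest value wins the election):

```
! On the switch that should win the DR election for the client VLAN
interface Vlan68
 ip pim dr-priority 200
!
! Then verify which neighbor is now DR on the segment:
!   show ip pim neighbor
```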

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Alteris Deployment Serv

Hi Giuseppe,

We have a change raised this evening to clear the multicast routes. I will let you know how we get on.

New Member

Re: Strange multicast problem when using Alteris Deployment Serv

Hi Giuseppe,

We cleared the ip mroute cache and the problem still remains. I will schedule a switch reload and see if this helps.

Hall of Fame Super Silver

Re: Strange multicast problem when using Alteris Deployment Serv

Hello Chris,

thanks for your update. The issue is still there.

I would also consider making the companion switch the PIM DR on the segment for vlan 68 (the client vlan, if I remember correctly)

using the interface configuration command ip pim dr-priority (check with the CLI help, I'm not sure of the exact spelling).

Hope to help

Giuseppe

New Member

Re: Strange multicast problem when using Alteris Deployment Serv

Hi Giuseppe,

I forgot to mention that I have already tried swapping the DR to the other distribution switch. The problem remains with the same output.

New Member

Re: Strange multicast problem when using Alteris Deployment Serv

Well, we have finally managed to reboot the switch to no avail. The problem still remains. I am wondering if this is a bug in the code as the problem did start to manifest shortly after we completed a complete backbone upgrade to 12.2(33)SXH6

Hall of Fame Super Silver

Re: Strange multicast problem when using Alteris Deployment Serv

Hello Chris,

nice to hear from you even if this is not good news

>> Well, we have finally managed to reboot the switch to no avail. The problem still remains. I am wondering if this is a bug in the code as the problem did start to manifest shortly after we completed a complete backbone upgrade to 12.2(33)SXH6

Unfortunately this is something that cannot be excluded!

What IOS image was running before?

Consider also moving to 12.2(33)SXI2a or later, which might fix this.

Hope to help

Giuseppe

Re: Strange multicast problem when using Alteris Deployment Serv

Hi Chris,

could you please send a small network diagram pointing out the following:

1- The RP for the Group 224.1.1.2

2- Hosts joining the group 224.1.1.2.

3- The multicast source.

Thanks,

Mohamed

New Member

Re: Strange multicast problem when using Alteris Deployment Serv

Hi Mohamed,

Thanks for your response. I have attached a modified diagram of our backbone architecture. We have what I think is a textbook configuration for multicast routing using Auto-RP and PIM sparse-dense mode. The two core switches are Auto-RP candidates, with one of the cores acting as the RP mapping agent.

All backbone routed links are configured in sparse-dense mode. The multicast group was changed to 227.1.1.3 by our server guys just in case the group it uses by default (224.1.1.2) was causing issues. The Altiris PXE boot server that advertises this group sits on the server farm; the clients (receivers) all sit on the switch-block access layers.
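As an aside on the group addresses discussed earlier in the thread (an illustration, not something from Chris's network): at layer 2, only the low 23 bits of an IPv4 multicast group are copied into the fixed 01:00:5e MAC prefix, so 32 different groups alias to the same Ethernet MAC. A small Python sketch shows why, for example, 224.1.1.2 and 225.1.1.2 collide on the wire, which is worth keeping in mind when comparing IGMP snooping tables against group addresses:

```python
def mcast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet MAC address.

    Only the low 23 bits of the group are copied into the fixed
    01:00:5e prefix (RFC 1112), so 32 groups share each MAC.
    """
    o = [int(x) for x in group.split(".")]
    return "01:00:5e:{:02x}:{:02x}:{:02x}".format(o[1] & 0x7F, o[2], o[3])

# 224.1.1.2 and 225.1.1.2 land on the same MAC; 227.1.1.3 does not.
print(mcast_mac("224.1.1.2"))  # 01:00:5e:01:01:02
print(mcast_mac("225.1.1.2"))  # 01:00:5e:01:01:02
print(mcast_mac("227.1.1.3"))  # 01:00:5e:01:01:03
```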

I am beginning to think that this is something to do with the way in which the PXE boot service operates, because we have CCTV streaming across the network fine using a proper shared multicast tree, i.e. I can see clients on the access distribution layers joining the shared tree for a source which is on another switch block.

Giuseppe - we were on software release 12.2(18)SXF8 prior to our upgrade.

Re: Strange multicast problem when using Alteris Deployment Serv

Hello Chris,

Two issues could have caused this problem:

1- You mentioned the RP is Core 1, yet when you issue sh ip mroute 224.1.1.2 the RPF interface points to Core 2. Why?

Answer: Although you have configured two RPs for the same groups for redundancy purposes, be aware that only ONE RP is going to be elected for a given group. The PIM join message will be sent from the DR to the RP with the highest IP address for that group.

So here I would first explicitly make the primary RP the one with the highest IP address; secondly, I would check the RPF.

Please post sh ip mroute 227.x.x.x from the active HSRP gateway for the receivers in that group and let us see if it points to the correct RP.

Also, from the RP itself (Core 1), post sh ip mroute 227.x.x.x and let us see if there is a multicast source known to the RP.

2- I would make sure that the DR for the multicast groups (224.x.x.x and 227.x.x.x) is the active HSRP router. From your previous sh ip mroute output on the active HSRP router, the result is not what we want: the outgoing interface list is Null, and it shouldn't be, which suggests it is not the PIM DR for that group. Make the active HSRP router the DR either by manually setting the highest IP address on the interface OR by changing the DR priority to a higher value than the current DR's.

Please come back with the output of those commands and let us know the result,

HTH

Mohamed

New Member

Re: Strange multicast problem when using Alteris Deployment Serv

Hi Mohamed,

The RP is actually Core 2 (the core on the right in the diagram) - my mistake.

Output from the Active HSRP gateway for Vlan 68 - this is the Vlan I am focusing on at the moment.

ci_t1_3a1_dist_sw1#sh ip mroute 227.1.1.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 227.1.1.3), 1d01h/00:02:55, RP 172.16.255.4, flags: SJC
  Incoming interface: Vlan224, RPF nbr 172.16.224.250, Partial-SC
  Outgoing interface list:
    Vlan8, Forward/Sparse-Dense, 00:00:27/00:02:32, H
    Vlan74, Forward/Sparse-Dense, 00:01:27/00:01:32, H
    Vlan73, Forward/Sparse-Dense, 00:01:57/00:01:02, H
    Vlan5, Forward/Sparse-Dense, 00:02:14/00:00:45, H
    Vlan92, Forward/Sparse-Dense, 12:29:12/00:02:27, H
    Vlan6, Forward/Sparse-Dense, 1d01h/00:02:31, H
    Vlan68, Forward/Sparse-Dense, 00:00:04/00:02:55, H

Below is the output from the RP, which is Core 2 on ip 172.16.255.4:

ci_t2_65_c200_core2#sh ip mroute 227.1.1.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 227.1.1.3), 1d01h/00:02:31, RP 172.16.255.4, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan222, Forward/Sparse-Dense, 00:00:58/00:02:31
    Vlan224, Forward/Sparse-Dense, 1d01h/00:02:30

(172.16.192.49, 227.1.1.3), 1d01h/00:01:48, flags:
  Incoming interface: Vlan228, RPF nbr 172.16.228.251
  Outgoing interface list:
    Vlan222, Forward/Sparse-Dense, 00:00:58/00:02:31
    Vlan224, Forward/Sparse-Dense, 1d01h/00:02:30

As you can see, the multicast source 172.16.192.49 is known to the RP, yet the distribution switch just won't join the source tree: it lists the (*,G) entry, but not the (S,G).

Re: Strange multicast problem when using Alteris Deployment Serv


Hi Chris,

The output from the RP looks fine. The problem, however, is not there but on the distribution switch.

1- You should see the Auto-RP groups 224.0.1.39 and 224.0.1.40 in the output of sh ip mroute on the distribution switch. You have only shown a shared tree for 227.x.x.x; is there any shared/source-based tree for the groups 224.0.1.39 and 224.0.1.40 respectively?


--- You will need to make sure the mapping agent is configured correctly on Core 2, the RP ---

2- Do you have reachability to the source of the multicast stream from the Distribution switch?
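For reference, these checks map onto standard IOS show commands on the distribution switch (the source address is taken from the RP's mroute output earlier in the thread):

```
! Is the Auto-RP discovery group state present?
show ip mroute 224.0.1.40
! Which group-to-RP mappings has this switch actually learned?
show ip pim rp mapping
! Does unicast routing agree on the path back to the source?
show ip rpf 172.16.192.49
```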

Please confirm,

Mohamed
