%PIM-6-INVALID_RP_JOIN: Received

Tshi M
Level 5

We are receiving the following event message in the log:

%PIM-6-INVALID_RP_JOIN: Received (*, 224.0.1.40) Join from 0.0.0.0 for invalid RP 10.33.63.1

However, that is the correct RP address and I already checked the link below:

http://www.cisco.com/en/US/tech/tk828/technologies_tech_note09186a0080094b55.shtml#invalidrp

13 Replies

mchin345
Level 6

A downstream PIM router has sent a join message for the shared tree, which this router does not want to accept. This behavior indicates that this router will only let downstream routers join a specific rendezvous point.

The recommended solution is to configure all downstream leaf routers (first-hop routers to multicast sources) to join the RP that is allowed by the upstream routers toward the validated rendezvous point.
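For illustration, a minimal sketch of that configuration on a downstream leaf router, using the RP address from the original log message as a placeholder:

    ! Point the leaf router at the RP that the upstream routers accept
    ip pim rp-address 10.33.63.1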

Hi mchin,

Thanks for the reply, but as I mentioned in my posting, I had already read the link. The information you gave is from the link that I posted.

Regards,

This is an old posting, but I finally got to solve the problem. Strangely enough, adding "ip pim rp-address x.x.x.x" got rid of the messages in the log. It is strange because the RP was supposedly advertised by Auto-RP.
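In case it helps anyone reading later, a minimal sketch of that workaround (x.x.x.x is a placeholder for the RP address that Auto-RP already advertises), followed by the command to check which group-to-RP mappings the router has learned:

    ! Statically configure the same RP that Auto-RP advertises
    ip pim rp-address x.x.x.x

    ! Verify the static and Auto-RP-learned group-to-RP mappings
    show ip pim rp mapping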

Hello Etienne,

You have found a workaround that can be helpful to somebody else with a similar issue, so I have rated it accordingly.

Best Regards

Giuseppe

thanks much :-)

Hi, it seems I am in a similar boat, but I am getting the %PIM-6-INVALID_RP_JOIN: Received (*, 224.0.1.40) Join from 0.0.0.0 for invalid RP x.x.x.x

on a router which is not the RP, and I can only find support from Cisco for cases where people are getting this error when it is occurring on the RP.

The x.x.x.x is in reality a valid IP and is the address of the only RP in my network.

Any help would be greatly appreciated

Many thanks in advance

David

Hey David,

You did mention it, so I am going to ask: did you add ip pim rp-address x.x.x.x, where x.x.x.x is the address in the log? That's what I had to do.

Thanks,

Hi Etienne,

I did try it and unfortunately it did not work. I think that is because this command is intended to be used on an RP, and the 6509 I am getting the messages on is not an RP.

I only have a single RP in the network, and it's sitting in my Server Dist Layer directly off the Core 6509, which is getting the messages.

Thanks

David

This is not exactly the same issue, as it is the same error but received on a non-RP device, but it may be of use to someone...

I was getting notifications of Nov 2 14:19:11.060 GMT: %PIM-6-INVALID_RP_JOIN: Received (*, 224.0.1.40) Join from 0.0.0.0 for invalid RP 10.X.X.X on my core switch.

My core is not the RP on our network; another switch in our server dist layer is the RP, with a Loopback0 interface configured with the 10.x.x.x RP IP address.

So for some reason my core switch was receiving PIM join requests when it wasn't the participating RP?

As we have a fairly large network, it wasn't easy to look at all downstream routers to see if I could find one with an incorrect RP statically configured, but I did, and there wasn't one!

It took a while to sort this out, but here is the solution:

I ran debug ip pim 224.0.1.40

This showed the following: Nov 2 13:57:29.253 GMT: PIM(0): Join-list: (*, 224.0.1.40), ignored, invalid RP 10.x.x.x from 10.y.y.y

This identified the downstream router that was requesting the join: 10.y.y.y.

Next, I SSH'd to the downstream L3 switch 10.y.y.y and could see that it did have the correct RP unicast IP listed in the global config (ip pim rp-address 10.x.x.x).

I added ip pim autorp listener to the downstream L3 switch and removed ip pim rp-address 10.x.x.x.
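For anyone hitting the same thing, a minimal sketch of that change on the downstream L3 switch (10.x.x.x is the placeholder RP address from above):

    configure terminal
     ! Stop hard-coding the RP; learn it via Auto-RP instead
     no ip pim rp-address 10.x.x.x
     ip pim autorp listener
    end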

I checked that multicast was still working using show ip mroute, and the messages had stopped.

I have no idea why specifying an RP IP address in the global config would have caused this issue, as we only have one RP and that was the correct IP address for it. It was as if the core 6509 was denying the join request from the downstream router because it was specifically requesting to join 224.0.1.40 via a specific unicast IP.

I know that if you look to see which multicast groups your RP is set up with, the addresses 224.0.1.39 and 224.0.1.40 are not listed. I presume this is because they are not really groups that a node would join, but more an advertisement of what is out there and what is available when you get there.

I think part of this issue may have been down to the fact that I run sparse mode only along with Auto-RP. The RP sends out its advertisements to say that it's an RP via multicast, but sparse-mode routers need to know what their RP is before they can participate in the multicasting, so it causes a catch-22 situation. Someone, possibly when this was implemented, hard-coded the RP IP address to get around this issue, I think.

By changing this to ip pim autorp listener, you are allowing the Auto-RP groups (224.0.1.39 and 224.0.1.40) to be flooded in dense mode just so the sparse-mode routers can discover the RP and what groups are available; from then on they use sparse mode for the groups they want to subscribe to.
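To make that concrete, a minimal sketch of the sparse-mode-plus-Auto-RP combination being described (the interface name is a placeholder):

    ip multicast-routing
    ! Flood the Auto-RP groups 224.0.1.39 and 224.0.1.40 in dense mode
    ! so sparse-mode routers can learn the RP without a static mapping
    ip pim autorp listener
    !
    interface GigabitEthernet0/1
     ip pim sparse-mode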

I do have the same configs at a couple of other remote sites, i.e. ip pim rp-address 10.x.x.x instead of ip pim autorp listener, but these are 2911 routers and not L3 switches, and they appear to handle the multicast traffic just fine without causing the original error.

Happy to discuss with any multicast experts out there, as I am new to this and it was already implemented before I started with this company.

Cheers all

David

David,

Thanks for posting your solution...

Hello,

I was able to alleviate this situation by setting the downstream router's PIM DR priority to 0 on that interface. This made the upstream router that was reporting the log messages the PIM DR on that upstream interface, which stopped the log messages entirely. I suspect these log messages came about due to redundant links from the remote site already having PIM DR interfaces configured (by default?). Just a hypothesis.
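A minimal sketch of that interface-level change on the downstream router (the interface name is a placeholder):

    interface GigabitEthernet0/0
     ! Priority 0 makes this router lose the PIM DR election,
     ! leaving the upstream router as the DR on this segment
     ip pim dr-priority 0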

HTH

f.mbomda
Level 1
Please just make sure that your ACL permits the RP announce and discovery addresses (224.0.1.39 and 224.0.1.40).
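For example, if a standard ACL is applied as a multicast boundary, it would need entries along these lines (the ACL number and interface are placeholders):

    ! Permit the Auto-RP announce and discovery groups,
    ! plus any other groups that should cross the boundary
    access-list 10 permit 224.0.1.39
    access-list 10 permit 224.0.1.40
    !
    interface GigabitEthernet0/1
     ip multicast boundary 10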

Atmelko
Level 1

Hi!

I got the same message on my test bench with 3725 routers. The problem was solved when I specified the RP address on the RP router as its own loopback: ip pim rp-address 2.2.2.2.
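A minimal sketch of that fix on the RP router itself (the loopback address 2.2.2.2 is from the post above):

    interface Loopback0
     ip address 2.2.2.2 255.255.255.255
    !
    ! Point the RP router at its own loopback as the RP address
    ip pim rp-address 2.2.2.2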

All the best!
