09-22-2006 02:41 PM
I'm trying to get a multicast VPN deployed across an MPLS backbone.
The CE router has a multicast group, multicast routing enabled, and its interfaces are in sparse-dense mode. The CE is using Auto-RP (the discovery and announcement commands) for the RP mapping.
The PE router has a multicast-aware VPN for the CE router. The interface to the CE router is in sparse-dense mode and in the multicast VPN. The PE router also has the MDT tree configured.
The loopbacks used for BGP are all enabled on the MPLS backbone in sparse-dense mode. All interfaces between the PE and P routers are in sparse mode.
I can get multicast traffic from the CE to the first PE. I have the RP mapped, and I can ping my multicast groups, so it looks like multicast traffic is going from CE to PE. However, the PE router doesn't seem to pass it on to the other PE or P routers. Again, the BGP loopbacks and all physical interfaces are in sparse mode on the backbone, and I do have PIM adjacencies across the backbone as well.
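For reference, here is the general shape of my PE-side MVPN configuration (the VRF name, addresses, and interface names here are generic placeholders, not my exact config):

ip multicast-routing
ip multicast-routing vrf BLUE
!
ip vrf BLUE
 rd 1:1
 mdt default 239.232.0.1
!
interface Loopback0
 ip pim sparse-dense-mode
!
interface GigabitEthernet0/0
 ip vrf forwarding BLUE
 ip pim sparse-dense-mode
!
interface GigabitEthernet0/1
 ip pim sparse-mode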
But I can't get multicast traffic across.
I've read about everything and I'm getting stumped.
Any advice on what I'm missing? I think it is real close, but I must just be missing something.
Any Help is greatly appreciated!!
Thank you for your time,
Karl Solie
09-27-2006 06:47 PM
Backward compatibility between the VPNv4 MDT (old mode) and the IPv4 tunnel SAFI (new mode) was implemented in 12.2(33)SRA1 and 12.0(30)S3.
You will see the following translation being performed on a box running backward-compatible code for a peer that doesn't support the IPv4 tunnel SAFI:
r8#sh bgp vpnv4 unicast all nei 192.168.100.5
BGP neighbor is 192.168.100.5, remote AS 1, internal link
Member of peer-group internal for session parameters
BGP version 4, remote router ID 192.168.100.5
...
For address family: IPv4 MDT
Translates address family VPNv4 Unicast <+++ actual translation between old and new
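For peers that do support the new SAFI, the IPv4 MDT address family is activated explicitly under BGP, along these lines (the AS number and peer address below are taken from the output above as an example):

router bgp 1
 address-family ipv4 mdt
  neighbor 192.168.100.5 activate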
Hope this helps,
09-22-2006 03:56 PM
Hi Karl,
When multicast doesn't work, there can be any number of reasons for it.
I need some more information:
1) Are you running SSM in the backbone.
2) Is this a live network? If yes, do you have other customers who are running successfully? (This answer will help us rule out the backbone.)
3) outputs of
a) "show ip mroute"
b) "show ip mroute vrf"
c) "show ip pim vrf"
HTH-Cheers!
Swaroop
09-24-2006 05:12 PM
Hi, thanks for the message.
I do have some SSM commands on the backbone.
I have the SSM default and range commands on it; that's about it. Not knowing much about SSM, I don't know everything I need.
This is running in a lab, not production; we are prepping for production next week.
Do you want the show mroute from the PEs or the CEs?
Thanks,
Karl Solie
09-24-2006 05:31 PM
I'm getting this message on the egress PE; it may help you...
%PIM-6-INVALID_RP_JOIN: VRF V24:Multicast-VPN: Received (*, 224.0.1.40) Join from 0.0.0.0 for invalid RP 172.20.1.4
172.20.1.4 is my RP from the CE router. It is reachable from the PE within the VRF, and from the CE.
Hope that helps,
Karl Solie
09-25-2006 12:41 AM
Do you have the RP address configured for the C-domain on the PE routers? I think this is probably why you are getting the error message.
Just configure "ip pim vrf V24 rp-address 172.20.1.4" on the PE and see if it solves the issue.
Hope this helps,
09-25-2006 04:10 PM
I was using Auto-RP on the CE routers. Do I need to configure an RP on the PE routers when using this? That doesn't sound right, does it? I can try it if you think it may help...
Thanks,
Karl
09-25-2006 05:09 PM
As Martin indicated, MVPN is made of two different parts: the service provider core, referred to as the P-domain, and the customer network, or C-domain. The PE participates in both.
If you run Auto-RP in the C-domain, you definitely need to run Auto-RP on the PE router for that specific VRF instance, just as you do on any multicast-enabled router in the C-domain.
I would recommend using a static RP in the C-domain, though. If you need RP redundancy, you can implement anycast RP.
For more information on anycast RP, please refer to the following document:
http://www.cisco.com/en/US/tech/tk828/technologies_white_paper09186a00800d6b60.shtml
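As a rough sketch of anycast RP (all addresses below are examples only), each RP carries the same shared loopback address, and the RPs peer over MSDP using their unique addresses:

interface Loopback0
 ip address 10.0.0.1 255.255.255.255
interface Loopback1
 ip address 10.99.99.99 255.255.255.255
!
ip pim rp-address 10.99.99.99
!
ip msdp peer 10.0.0.2 connect-source Loopback0
ip msdp originator-id Loopback0

Loopback0 is unique per router, Loopback1 carries the shared anycast RP address, and the MSDP peering exchanges source-active information between the RPs.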
As far as the P-domain is concerned, I usually prefer to use SSM when it is possible.
Hope this helps,
09-25-2006 06:02 PM
This sounds promising.
So I should configure Auto-RP, or a static RP, in my P-domain that matches the RP I have in the customer VRF?
I'll give this a shot...
Thanks...
Karl
09-25-2006 06:28 PM
Not in the P-domain. You would actually need to configure Auto-RP or a static RP on the PE, but in the C-domain section (using the VRF-specific commands).
for instance:
ip pim vrf V24 rp-address 172.20.1.4
By default the RP is the local router, which is probably why you are seeing the error message since it clashes with the RP configured on the CE.
Hope this helps,
09-25-2006 06:07 PM
Would I need to configure a static RP or Auto-RP on the P routers? These are the routers in the MPLS domain that do not have the VRF configured on them, but that multicast traffic runs through.
Thanks again,
Karl Solie
09-25-2006 06:39 PM
The P routers only participate in the P-domain. So the PIM configuration would be in relation to the P-domain only.
The PEs participate in both the P-domain and C-domain and therefore need to be configured for both.
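To illustrate the split (the interface names below are examples, and the RP address is the one from your C-domain), a P router needs only P-domain PIM, while the PE carries both:

! P router: global (P-domain) PIM only
interface GigabitEthernet0/0
 ip pim sparse-mode
!
! PE router: global PIM toward the core, plus C-domain PIM in the VRF
interface GigabitEthernet0/1
 ip pim sparse-mode
interface GigabitEthernet0/2
 ip vrf forwarding V24:Multicast-VPN
 ip pim sparse-dense-mode
!
ip pim vrf V24:Multicast-VPN rp-address 172.20.1.4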
Hope this helps,
09-26-2006 07:07 PM
09-26-2006 09:00 PM
The "sh ip pim vrf xxx nei" command is not showing the tunnel between the PEs.
The default range for SSM is 232/8. You are currently using 239.232.0.1 for the default MDT. Make sure you either change the default SSM range or use an address in the 232/8 range for the default MDT.
Hope this helps,
09-26-2006 09:58 PM
I made the change. Thanks for catching that; I thought for sure that might fix things.
Here's what the VRF looks like now:
ip vrf V24:Multicast-VPN
description Multicast
rd 1998:30022
route-target export 1998:30020
route-target import 1998:30020
maximum routes 100 80
mdt default 232.0.0.1
mdt data 232.1.1.0 0.0.0.255 threshold 1
I got this message when I configured it; however, I still don't see the tunnel from the PEs.
Sep 27 00:25:47: %PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.20.80.1 on interface Tunnel0
x-cob#sh ip pim vrf V24:Multicast-VPN neighbor
PIM Neighbor Table
Neighbor         Interface                Uptime/Expires    Ver   DR
Address                                                           Prio/Mode
192.168.202.2    GigabitEthernet1/10.600  2d09h/00:01:16    v2    1 / DR S
x-cob#
Still no tunnels.
Is there a chance this statement I have in BGP:
no bgp default ipv4-unicast
could be messing up multicast?
Or could the TE tunnels be messing it up? I do have the TE tunnels in sparse mode.
Thanks again for your help..
Karl Solie
09-26-2006 10:16 PM
I noticed something else. The configuration on the P is as follows:
ip pim ssm range 1
What is ACL 1? Could you change it to "ip pim ssm default" instead?
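For comparison, "ip pim ssm range 1" defines the SSM range from ACL 1, so the effect depends entirely on what that ACL permits; the following would be equivalent to the default range, but the simpler form is safer:

access-list 1 permit 232.0.0.0 0.255.255.255
ip pim ssm range 1
!
! ...versus simply:
ip pim ssm default

If ACL 1 does not permit 232.0.0.0/8, a default MDT group in that range would be excluded from SSM treatment.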
Hope this helps,