I'm trying to get a Multicast VPN deployed across an MPLS backbone.
The CE router has a multicast group, has multicast routing enabled, and its interfaces are in sparse-dense mode. The CE is using Auto-RP (the announcement and discovery commands) configured for the RP mapping.
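For context, the relevant CE config looks roughly like this (the interface name and scope value are placeholders, not my exact config):

ip multicast-routing
!
interface FastEthernet0/0
 ip pim sparse-dense-mode
!
! Auto-RP: announce this router as a candidate RP and as the mapping agent
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16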
The PE router has a multicast-aware VPN (VRF) for the CE router. The interface toward the CE is in sparse-dense mode and is part of that VRF. The PE router also has the default MDT configured.
The loopbacks used for BGP are all enabled for multicast on the MPLS backbone in sparse-dense mode. All interfaces between the PE and P routers are in sparse mode.
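On the PE side, the multicast-related pieces look roughly like this (the MDT group, VLAN, and interface names below are placeholders, not my exact values):

ip multicast-routing
ip multicast-routing vrf V24:Multicast-VPN
!
ip vrf V24:Multicast-VPN
 mdt default 239.1.1.1                  ! placeholder group for the default MDT
!
interface GigabitEthernet1/10.600
 encapsulation dot1Q 600                ! placeholder VLAN
 ip vrf forwarding V24:Multicast-VPN
 ip pim sparse-dense-mode
!
interface Loopback0
 ip pim sparse-dense-mode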
I can get multicast traffic from the CE to the first PE. I have the RP mapped, and I can ping my multicast groups, so it looks like multicast traffic is going from CE to PE. However, the PE router doesn't seem to pass it on to the other PE or P routers. Again, the BGP loopbacks and all physical interfaces are in sparse mode on the backbone, and I do have PIM adjacencies across the backbone as well.
But I can't get multicast traffic across.
I've read just about everything and I'm stumped.
Any advice on what I'm missing? I think it is real close, but I must be missing something.
Any help is greatly appreciated!
Thank you for your time,
The backward compatibility between VPNv4 MDT (old mode) and the IPv4 tunnel SAFI (new mode) has been implemented in 12.2(33)SRA1 and 12.0(30)S3.
You will see the following translation being performed on a box running backward-compatible code for a peer that doesn't support the IPv4 tunnel SAFI:
r8#sh bgp vpnv4 unicast all nei 192.168.100.5
BGP neighbor is 192.168.100.5, remote AS 1, internal link
Member of peer-group internal for session parameters
BGP version 4, remote router ID 192.168.100.5
For address family: IPv4 MDT
Translates address family VPNv4 Unicast <+++ actual translation between old and new
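For reference, on a PE running the newer code, enabling the MDT SAFI looks roughly like this (the AS number and neighbor address are taken from the output above; treat it as a sketch rather than a complete template):

router bgp 1
 neighbor 192.168.100.5 remote-as 1
 neighbor 192.168.100.5 update-source Loopback0
 !
 address-family vpnv4
  neighbor 192.168.100.5 activate
  neighbor 192.168.100.5 send-community extended
 exit-address-family
 !
 ! new mode: MDT information carried in a dedicated SAFI instead of VPNv4
 address-family ipv4 mdt
  neighbor 192.168.100.5 activate
 exit-address-family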
Hope this helps,
When multicast doesn't work, there can be any number of reasons for it.
I need a few more answers:
1) Are you running SSM in the backbone?
2) Is this a live network? If yes, do you have other customers who are running successfully? (This answer will move us away from troubleshooting the backbone.)
3) The outputs of:
a) "show ip mroute"
b) "show ip mroute vrf
c) "show ip pim vrf
Hi, thanks for the message.
I do have some SSM commands on the backbone.
I have the SSM default and range commands on it. That's about it; not knowing much about SSM, I don't know what all I need.
This is running in a lab, not production; we are prepping for production next week.
Do you want the show ip mroute from the PEs or the CEs?
I"m getting this message on the egress PE, it may help you...
%PIM-6-INVALID_RP_JOIN: VRF V24:Multicast-VPN: Received (*, 220.127.116.11) Join from 0.0.0.0 for invalid RP 172.20.1.4
172.20.1.4 is my RP from the CE router. It is reachable from the PE within the VRF and from the CE.
Hope that helps,
Do you have the RP address configured for the C-domain on the PE routers? I think this is probably why you are getting the error message.
Just configure "ip pim vrf V24 rp-address 172.20.1.4" on the PE and see if it solves the issue.
Hope this helps,
I was using Auto-RP on the CE routers. Do I need to configure an RP on the PE routers when using this? That doesn't sound right, does it? I can try it if you think it may help...
As Martin indicated, MVPN is made of two different parts: the service provider core, referred to as the P-domain, and the customer network, or C-domain. The PE participates in both.
If you run Auto-RP in the C-domain, you would definitely need to run Auto-RP on the PE router, for that specific VRF instance, just as you do on any multicast-enabled router in the C-domain.
I would recommend using a static RP in the C-domain, though. If you need RP redundancy, you can implement anycast RP.
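A very rough sketch of anycast RP in the C-domain, in case it is useful (the addresses are examples only; the MSDP peering runs between the two RP routers over their unique loopbacks):

! on both RP routers
interface Loopback1
 ip address 172.20.1.4 255.255.255.255          ! example shared anycast RP address
!
ip pim rp-address 172.20.1.4
!
ip msdp peer 10.0.0.2 connect-source Loopback0  ! peer = the other RP's unique address (example)
ip msdp originator-id Loopback0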
For more information on anycast RP, please refer to the following document:
As far as the P-domain is concerned, I usually prefer to use SSM when it is possible.
Hope this helps,
This sounds promising.
So I should run Auto-RP, or put a static RP in my P-domain, that matches what I have for the RP in the customer VRF?
I'll give this a shot...
Not in the P-domain. You would actually need to configure Auto-RP or a static RP on the PE, but in the C-domain section (using the VRF-specific commands):
ip pim vrf V24 rp-address 172.20.1.4
By default the RP is the local router, which is probably why you are seeing the error message since it clashes with the RP configured on the CE.
Hope this helps,
Would I need to configure a static RP or Auto-RP on the P routers, the routers in the MPLS domain that do not have the VRF configured on them but that multicast traffic runs through?
The P routers only participate in the P-domain, so their PIM configuration relates to the P-domain only.
The PEs participate in both the P-domain and the C-domain and therefore need to be configured for both.
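To illustrate the difference, something along these lines (the VRF name and RP address are taken from your earlier posts, and SSM is assumed in the core):

! P router - P-domain only, no VRF awareness
ip multicast-routing
ip pim ssm default
!
! PE router - P-domain plus the C-domain instance for the VRF
ip multicast-routing
ip multicast-routing vrf V24:Multicast-VPN
ip pim ssm default
ip pim vrf V24:Multicast-VPN rp-address 172.20.1.4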
Hope this helps,
The "sh ip pim vrf xxx nei" command is not showing the tunnel between the PEs.
The default range for SSM is 232/8. You are currently using 18.104.22.168 for the default MDT. Make sure you either change the default SSM range or use an address in the 232/8 range for the default MDT.
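In other words, something like this (the groups below are placeholders, just to show the 232/8 idea):

ip vrf V24:Multicast-VPN
 mdt default 232.0.1.1
 mdt data 232.0.2.0 0.0.0.255 threshold 1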
Hope this helps,
I made the change, thanks for catching that. I thought for sure that might fix things.
Here's what the VRF looks like now:
ip vrf V24:Multicast-VPN
route-target export 1998:30020
route-target import 1998:30020
maximum routes 100 80
mdt default 22.214.171.124
mdt data 126.96.36.199 0.0.0.255 threshold 1
I got this message when I configured it... however, I still don't see the tunnel from the PEs:
Sep 27 00:25:47: %PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.20.80.1 on interface Tunnel0
x-cob#sh ip pim vrf V24:Multicast-VPN neighbor
PIM Neighbor Table
Neighbor Address   Interface                  Uptime/Expires    Ver   DR Prio/Mode
192.168.202.2      GigabitEthernet1/10.600    2d09h/00:01:16    v2    1 / DR S
Still no tunnels.
Is there a chance that this statement in my BGP config:
no bgp default ipv4-unicast
could be messing up multicast?
Or could the TE tunnels be messing it up? I do have the TE tunnels in sparse mode.
Thanks again for your help..
I noticed something else. The configuration on the P router is as follows:
ip pim ssm range 1
What is ACL 1? Could you change it to "ip pim ssm default" instead?
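Either way, it would look something like this (the access-list entry below is just the default SSM range written out):

! keep a non-default range, but make sure ACL 1 actually permits the MDT groups
access-list 1 permit 232.0.0.0 0.255.255.255
ip pim ssm range 1
!
! or simply revert to the default SSM range
no ip pim ssm range 1
ip pim ssm default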
Hope this helps,
Did you have time to fix this issue?
First off, I really appreciate everyone's help here. It seems better than the TAC... so thanks for the help!
I'm pretty confident in my config, and you guys seem to agree. I've applied almost everything we've talked about: static RPs on the CEs, the static RP in the VRF on the PEs, the MDT default and data trees now in the right range (232, I think), and all interfaces (save PE-CE) in sparse mode. I'm running SSM on the backbone, and PIM neighbors are formed on all physical interfaces.
Here is where things stand:
1. The default MDT tree doesn't seem to be built from PE to PE.
2. The 7609 has my multicast test stream in the VRF and can reach my test address. I moved a multicast client to a port on the 6509, added it to the VRF, put it in the right PIM mode, and it receives the stream fine.
3. The PE (a 7609) doesn't seem to pass/build the multicast tree to the other PE (a 7206). Note this does run through a P router, which is a 7304.
I think this is a bug or something. I checked the bug reports, but it's really hard to find an exact match, if you know what I mean.
This is a debug of RPF on the customer router... it looks like it's failing?
*Apr 5 21:38:34.997: PIM(0): Building Periodic Join/Prune message for 188.8.131.52
*Apr 5 21:38:44.957: PIM(0): Send v2 Null Register to 172.20.1.14
*Apr 5 21:38:44.957: PIM(0): Received v2 Register-Stop on FastEthernet0/0.601 from 172.20.1.14
*Apr 5 21:38:44.957: PIM(0): for source 0.0.0.0, group 0.0.0.0
Until the P-domain configuration is right, the C-domain will not function properly.
Can you include a "show ip mroute" from the P router.
I believe you have found some resolution to the problem.
Can you shed some light on what the problem was?
I don't have a solution. I mean, I think the config should work, but it doesn't seem to.
What SSM commands would I need on the backbone? Can you think of anything else I could be missing on the backbone or on the P-to-PE links?
Can you check whether you can see the other endpoints when you execute
"show ip pim vrf <vrf-name> neighbor"
If you see the other endpoint PEs when you execute this command, then there is no problem with the multicast config on the backbone.
If you have left SSM at the default range, then there isn't really much config needed, as you said; just make sure your default and data MDTs fall in the configured SSM range, since they use SSM.
I had faced the same problem. I finally figured out that the problem was with IGMP snooping on the switch between the PE and the P router: the switch was snooping for IGMP, but the routers were only sending PIM joins.
Just check by disabling IGMP snooping on any switch across the core.
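On a Catalyst running IOS it is just the following (the VLAN number is an example; use whichever VLANs carry the PE and P links):

! disable globally on the switch
no ip igmp snooping
!
! or only on the VLAN carrying the PE-P link
no ip igmp snooping vlan 600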
Let me know if it works.
Well, I have a working configuration that is sort of similar to your requirement. Maybe if you take a look at it, it could help.
Just to make things easier for you, I'll briefly describe the setup.
I have two routers, a 7204 acting as the main and a 7206 acting as the backup; these two routers perform both P and PE functionality. The CE is connected to both routers using an Ethernet interface running OSPF (PE-CE) at its main site, and it has some branches running over serial links. I am running sparse mode at the main site and dense mode at the branches. This is working perfectly fine with the configuration attached. I hope this helps.
Thanks for the info...
We did get all the multicast working with TE tunnels and SSM on the backbone...
Thanks again, and thanks to everyone out here...
Hi, OK, let's split this down into two sections.
1. Your MPLS core. OK, so you want to run multicast over it. Things to do:
a) Ensure PIM is enabled on ALL interfaces within your core that will be used to transport traffic.
b) Decide on PIM-SM, PIM-SSM, or PIM BIDIR.
c) If you use SM or BIDIR, then you need to sort out your RP design. If you use SSM, then you don't require any RPs.
d) If you use SSM, you need to either use the default range or manually enter a range, e.g. 239.x.x.x/24.
e) Ensure that your data and default MDTs are allocated from the above range.
2. Next up, you think about the customer:
a) PIM-SM between PE and CE - I would NEVER allow DM into an MPLS cloud.
b) RP - is it static or dynamic?
c) Static is easy - on every PE you need "ip pim vrf XXXXX rp-address 184.108.40.206".
d) Auto-RP - now, again, Auto-RP can fall back to dense mode, so apply the command "no ip pim dense-mode fallback" (not sure of the syntax, it's been a while). You will then need to make sure your announce and discovery commands have enough scope, i.e. 255. (See the sketch after this list.)
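Roughly, for c) and d) above (the VRF name is a placeholder, and the dense-mode fallback command is from memory, so double-check the exact syntax on your code):

! c) static RP on every PE, inside the customer VRF
ip pim vrf XXXXX rp-address 172.20.1.4
!
! d) if you stay with Auto-RP, stop dense-mode fallback and give the
!    announce/discovery messages enough scope (on the PE these take the
!    "ip pim vrf XXXXX ..." form)
no ip pim dm-fallback
ip pim send-rp-announce Loopback0 scope 255
ip pim send-rp-discovery Loopback0 scope 255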
Also - remember multicast is only as good as your unicast :P
Thanks, I will try these and let you know.
Right now I'm taking a shotgun approach, trying a little of everything to see if I can get it to work. I wonder if I might be stepping on myself here.
One command that is useful for you in your core is "show ip pim vrf XXXX neighbor" - you should see the CE neighbour, and you should also see EVERY PE device where the same default MDT has been configured. It's funny, and I don't want to jump the gun here, but if my memory serves me right your IOS code has a default MDT bug in it - from other threads I am assuming you are running 12.2(28)?
Not a bug!!!
Yes, I'm running 12.2(28)SB2 on a 7200...
Is there a default MDT bug in this code?
Thanks again for all your help,
Thanks again for the ideas.
I've backed it out to try simple static RPs. I'm trying to use SSM on the backbone. You will see SSM and static RPs in the configs; I don't think they get in each other's way, and I can remove either one if need be.
Here is the way the lab goes:
Ingress-CE -> Ingress PE -> P -> Egress PE -> Egress CE
I've included what I think are the relevant configs from all the routers; this might help a little. Please see the attachment.
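For reference, these are the checks I'm running on the PEs while I work through this (the VRF name is the one from my lab):

show ip pim neighbor                             (P-domain PIM adjacencies)
show ip pim vrf V24:Multicast-VPN neighbor       (should list the remote PEs over the MDT tunnel)
show ip pim mdt bgp                              (MDT groups learned from the other PEs via BGP)
show ip pim vrf V24:Multicast-VPN mdt send       (data MDTs this PE is sourcing)
show ip mroute                                   (P-domain state, including the MDT group)
show ip mroute vrf V24:Multicast-VPN             (C-domain state)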