Cisco Support Community
Community Member

Multicast LSP MLDP for MVPN - multicast tree not working

Hello everyone. I need help with a sample lab running mLDP for MVPN. The topology is a simple CE (R1) <--> PE (R2) <--> P (R3) <--> PE (R4) <--> CE (R5). MPLS forwarding is working across the core. R1 is sending ICMP to 224.2.2.2, sourced from its loopback. I'm running IOS 15.2 in GNS3. Below are the configs and show outputs. Please tell me what I'm missing in this lab.

Your comments are greatly appreciated. Thanks in advance.

Mike G.

R1#

!
ip multicast-routing
!
!
interface Loopback0
 ip address 100.0.0.1 255.255.255.255
!
interface FastEthernet0/0
 ip address 10.1.0.1 255.255.255.0
 ip pim sparse-mode
!
router rip
 version 2
 network 10.0.0.0
 network 100.0.0.0
 no auto-summary
!
ip pim bidir-enable
!


R2#

!
ip vrf yellow
 rd 2:200
 vpn id 50:10
 mdt preference mldp
 mdt default mpls mldp 100.0.0.1
 mdt data mpls mldp 255
 mdt data threshold 40
 route-target export 2:200
 route-target import 2:200
!
ip multicast-routing
ip multicast-routing vrf yellow
!
mpls mldp logging notifications
!
interface Loopback0
 ip address 50.0.0.2 255.255.255.255
 ip pim sparse-mode
!
interface Loopback100
 ip vrf forwarding yellow
 ip address 100.0.0.2 255.255.255.255
 ip pim sparse-mode
!
interface FastEthernet0/0
 ip vrf forwarding yellow
 ip address 10.1.0.2 255.255.255.0
 ip pim sparse-mode
!
interface FastEthernet1/0
 ip address 10.2.0.2 255.255.255.0
 mpls ip
!
router ospf 1
 router-id 50.0.0.2
 network 10.0.0.0 0.255.255.255 area 0
 network 50.0.0.0 0.0.0.255 area 0
!
router rip
 version 2
 no auto-summary
 !
 address-family ipv4 vrf yellow
  redistribute bgp 1
  network 10.0.0.0
  network 100.0.0.0
  default-metric 5
  no auto-summary
  version 2
 exit-address-family
!
router bgp 1
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor 50.0.0.4 remote-as 1
 neighbor 50.0.0.4 update-source Loopback0
 neighbor 50.0.0.6 remote-as 1
 neighbor 50.0.0.6 update-source Loopback0
 !
 address-family ipv4
  redistribute rip
 exit-address-family
 !
 address-family vpnv4
  neighbor 50.0.0.4 activate
  neighbor 50.0.0.4 send-community extended
  neighbor 50.0.0.6 activate
  neighbor 50.0.0.6 send-community extended
 exit-address-family
 !
 address-family ipv4 vrf yellow
  redistribute connected
  redistribute rip
 exit-address-family
!
mpls ldp router-id Loopback0
!


R3#

!
ip multicast-routing
!
interface Loopback0
 ip address 50.0.0.3 255.255.255.255
 ip pim sparse-mode
!
interface FastEthernet0/0
 ip address 10.2.0.3 255.255.255.0
 mpls ip
!
interface FastEthernet1/0
 ip address 10.3.0.3 255.255.255.0
 mpls ip
!
router ospf 1
 router-id 50.0.0.3
 network 10.0.0.0 0.255.255.255 area 0
 network 50.0.0.0 0.0.0.255 area 0
!

R4#

!
ip vrf yellow
 rd 2:200
 vpn id 50:10
 mdt preference mldp
 mdt default mpls mldp 100.0.0.1
 mdt data mpls mldp 255
 mdt default 239.1.1.1
 mdt data 238.2.2.0 0.0.0.255 threshold 40
 mdt data threshold 40
 route-target export 2:200
 route-target import 2:200
!
ip multicast-routing
ip multicast-routing vrf yellow
!
mpls mldp logging notifications
!
interface Loopback0
 ip address 50.0.0.4 255.255.255.255
 ip pim sparse-mode
!
interface Loopback100
 ip vrf forwarding yellow
 ip address 100.0.0.4 255.255.255.255
 ip pim sparse-mode
!
interface FastEthernet0/0
 ip vrf forwarding yellow
 ip address 10.4.0.4 255.255.255.0
 ip pim sparse-mode
!
interface FastEthernet1/0
 ip address 10.3.0.4 255.255.255.0
 mpls ip
!
router ospf 1
 router-id 50.0.0.4
 network 10.0.0.0 0.255.255.255 area 0
 network 50.0.0.0 0.0.0.255 area 0
!
router rip
 version 2
 no auto-summary
 !
 address-family ipv4 vrf yellow
  redistribute bgp 1
  network 10.0.0.0
  network 100.0.0.0
  default-metric 5
  no auto-summary
  version 2
 exit-address-family
!
router bgp 1
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor 50.0.0.2 remote-as 1
 neighbor 50.0.0.2 update-source Loopback0
 !
 address-family ipv4
  redistribute rip
 exit-address-family
 !
 address-family vpnv4
  neighbor 50.0.0.2 activate
  neighbor 50.0.0.2 send-community extended
 exit-address-family
 !
 address-family ipv4 mdt
  neighbor 50.0.0.2 activate
  neighbor 50.0.0.2 send-community extended
 exit-address-family
 !
 address-family ipv4 vrf yellow
  redistribute connected
  redistribute rip
 exit-address-family
!
mpls ldp router-id Loopback0
!

R5#
!
ip multicast-routing
!
interface Loopback0
 ip address 100.0.0.5 255.255.255.255
!
interface FastEthernet0/0
 ip address 10.4.0.5 255.255.255.0
 ip pim sparse-mode
 ip igmp join-group 224.2.2.2
!
router rip
 version 2
 network 10.0.0.0
 network 100.0.0.0
 no auto-summary
!
ip pim bidir-enable
!

>>> Trace to R5's loopback works.

R1#trace 100.0.0.5

Type escape sequence to abort.
Tracing the route to 100.0.0.5

  1 10.1.0.2 16 msec 60 msec 28 msec
  2 10.2.0.3 [MPLS: Labels 16/21 Exp 0] 128 msec 84 msec 96 msec
  3 10.4.0.4 [MPLS: Label 21 Exp 0] 108 msec 96 msec 44 msec
  4 10.4.0.5 152 msec 124 msec 104 msec
R1#

>>> PING to multicast address fails.


R1#ping 224.2.2.2 so lo0 repeat 50

Type escape sequence to abort.
Sending 50, 100-byte ICMP Echos to 224.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 100.0.0.1
.......
R1#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.1.40), 00:30:58/00:02:49, RP 0.0.0.0, flags: DPL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null

R1#
R2#sh ip pim vrf yellow neigh
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.0.1          FastEthernet0/0          00:22:38/00:01:16 v2    1 / S G
R2#

>>> No PIM peering over the LSPVIF0 interface.

R2#show mpls mldp database
  * Indicates MLDP recursive forwarding is enabled

LSM ID : 1 (RNR LSM ID: 2)   Type: MP2MP   Uptime : 00:06:55
  FEC Root           : 100.0.0.1
  Opaque decoded     : [mdt 50:10 0]
  Opaque length      : 11 bytes
  Opaque value       : 02 000B 0000500000001000000000
  RNR active LSP     : (this entry)
  Upstream client(s) :
    None
      Expires        : N/A           Path Set ID  : 1
  Replication client(s):
    MDT  (VRF yellow)
      Uptime         : 00:06:55      Path Set ID  : 2
      Interface      : Lspvif0

R2#


R2#sh ip pim vrf yellow mdt
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source                   VRF
* 0               Lspvif0     Loopback0                yellow
R2#

R2#sh ip mroute vrf yellow
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.1.40), 00:17:04/00:02:58, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:17:02/00:02:46
    Loopback100, Forward/Sparse, 00:17:03/00:02:58

R2#

R5#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.2.2.2), 00:23:23/00:02:31, RP 0.0.0.0, flags: SJCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:23:23/00:02:31

(*, 224.0.1.40), 00:23:23/00:02:39, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:23:23/00:02:39

R5#


4 REPLIES
Cisco Employee

Hi,

On R2 and R4 (the PE devices), you seem to be using R1 as the mLDP root:

 mdt default mpls mldp 100.0.0.1

The mLDP root must be reachable in the global routing table through LDP, and 100.0.0.1 is a CE address inside the VRF, so R2 and R4 will not be able to build the MP2MP tree toward it. Try changing the mLDP root to R3 with the following configuration change:

 mdt default mpls mldp 50.0.0.3

-Nagendra
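
For reference, a minimal sketch of that change (assuming R3's Loopback0, 50.0.0.3, as the new root), applied identically on both R2 and R4:

ip vrf yellow
 no mdt default mpls mldp 100.0.0.1
 mdt default mpls mldp 50.0.0.3

The root should be a router in the MPLS core whose loopback is reachable in the global table via LDP; a P router such as R3 is a common choice.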
 

 

Community Member

Hi Nagendra,

Thank you for noticing the error.

Yes, changing the root to 50.0.0.3 enabled PIM peering between PEs R2 and R4. My mistake indeed. However, pings to the multicast group still fail, and the ingress PE R2 is still not installing an (S,G) or (*,G) entry for 224.2.2.2. The outputs are below.

 

R2#sh ip pim vrf yellow neigh
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.0.1          FastEthernet0/0          00:34:26/00:01:18 v2    1 / S G
50.0.0.4          Lspvif0                  00:12:54/00:01:37 v2    1 / DR S P G
R2#

 

R1#ping 224.2.2.2 so lo0 repeat 10

Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 100.0.0.1
..........
R1#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.1.40), 00:27:55/00:02:04, RP 0.0.0.0, flags: DPL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null

R1#

R2#sh ip mroute vrf yellow
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.1.40), 00:27:21/00:02:42, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:27:20/00:02:19
    Loopback100, Forward/Sparse, 00:27:20/00:02:42

R2#

 

-MikeG

Cisco Employee

On your CE routers, you don't have an RP configured. For BiDir, you need an RP toward which the tree will be built.

Can you make R1 or R5 the RP and configure it on R1, R5, R2, and R4? On R2 and R4 you need to use the VRF-specific commands. You should then see the (*,G) and (S,G) entries created.

-Nagendra
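
A minimal sketch of a static RP configuration along those lines, assuming R1's Loopback0 (100.0.0.1) is chosen as the RP:

! On the CEs (R1 and R5):
ip pim rp-address 100.0.0.1

! On the PEs (R2 and R4), the same RP must be configured inside the VRF:
ip pim vrf yellow rp-address 100.0.0.1

With the RP defined consistently on the CEs and inside the VRF on the PEs, the (*,G) and (S,G) entries for 224.2.2.2 should appear in the mroute tables.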

Community Member

Hello Nagendra,

You are the man! That fixed the issue I've been battling for days. I can't thank you enough.

Regards,

MikeG
