
Isolating multicast traffic with common group address using VRF-Lite on 6500 not working as expected

Hi, thanks for looking.

I have a QA and a PRODUCTION environment configured for a new project. We try to keep these environments as close to a mirror image of each other in terms of config as possible, just with differing VLANs and IP addressing. For the purpose of this discussion, let's say the QA VLANs are VLAN1, VLAN2, VLAN3 and the PRODUCTION ones are VLAN11, VLAN12, VLAN13. The servers in each VLAN use PGM multicast to communicate. The configured multicast group address is 239.0.0.239, and traffic sent to this address should be received by all servers in their respective environment's VLANs. The software on the servers uses PGM for a sort of keepalive and synchronisation of status. All servers use IGMPv2 to join.

For this I'm using a 6500 with a FWSM. The FWSM version is 4.0(7) and the Sup720 is running s72033-advipservicesk9_wan-mz.122-18.SXF17a.

Because the FWSM doesn't support PGM, we had to extend the VLANs to the 6500. The servers continue to use the FWSM as their default gateway, though. The SVIs for each VLAN sit in a VRF according to their assigned environment, so QA VLAN1, VLAN2 and VLAN3 sit in VRF_QA, and PROD VLAN11, VLAN12, VLAN13 sit in VRF_PROD. Multicast isn't enabled globally on the 6500, but each VRF has multicast routing enabled, and each SVI has PIM and PGM enabled.
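
For reference, the per-VRF multicast enablement in global config is along these lines (just a sketch; the VRF names here follow the simplified names above, while the real device uses names like MC_PGM_QA, as in the SVI below):

! Per-VRF multicast routing; nothing enabled in the global table
ip multicast-routing vrf VRF_QA
ip multicast-routing vrf VRF_PROD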

So one of the VLAN SVIs' config looks like this:

interface Vlan13
 ip vrf forwarding MC_PGM_QA
 ip address 20.64.70.3 255.255.255.240
 ip pim sparse-dense-mode
 ip pgm router
!

With this config in place, the IGMP and mroute output looks good and the servers can use multicast as required. They join the group, and the mroute table shows the correct egress interfaces for traffic in each VRF. The problem is that we would like to use the same multicast group address of 239.0.0.239 in both QA and PRODUCTION (mirroring the config and all that), but with the above config in place the multicast traffic from QA bleeds into PRODUCTION and vice versa, which makes the developers cry. In other words, when the QA servers send traffic to the 239.0.0.239 group, the production servers receive a copy rather than the traffic being confined to the QA VLANs, which is what the mroute output says should be happening.
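
For what it's worth, this is roughly what I've been checking (the group is per the description above; the VRF name is the simplified one, and the actual interface numbers will differ):

show ip igmp vrf VRF_QA groups            <- memberships for 239.0.0.239 on the QA SVIs
show ip mroute vrf VRF_QA 239.0.0.239     <- outgoing interface list should only show QA VLANs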

My multicast knowledge isn't great, but from looking at the output of the mroute tables, IGMP and some debugs, it looks like it should work as desired. I'm not sure if this is an L3 issue or an L2 one, given the multicast group address is common to both environments. I'm guessing the quick fix is to just give production a different group address, but I'd rather know whether this setup is possible and what could be wrong with the config; perhaps it's a bug, since we're running quite an old IOS release.
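
If it's an L2 thing, my thinking is that both environments' traffic shares the same multicast MAC (239.0.0.239 maps to 01:00:5e:00:00:ef), so I was planning to look at the snooping state per VLAN with something like the following (the VLAN number is just an example):

show ip igmp snooping vlan 13             <- snooping state and mrouter ports for the VLAN
show mac-address-table multicast vlan 13  <- which ports the group MAC is forwarded to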

Any pointers appreciated.

Many thanks,

Scott
