Isolating multicast traffic with common group address using VRF-Lite on 6500 not working as expected
Hi, thanks for looking.
I have a QA and a PRODUCTION environment configured for a new project. We try to keep these environments as close to mirror images of each other in terms of config as possible, just with differing VLANs, IP addressing, etc. For the purpose of this discussion let's say the QA VLANs are VLAN1, VLAN2 and VLAN3, and for PRODUCTION they are VLAN11, VLAN12 and VLAN13. The servers in each VLAN use multicast PGM to communicate, and both environments are configured with the same MC group address; traffic sent to this address should be received by all servers in the respective environment's VLANs. The software installed on the servers uses this PGM traffic for a sort of keepalive and synchronization of status. All servers join the group via IGMPv2.
For this I'm using a 6500 with a FWSM. The FWSM Version is 4.0(7) and the sup720 is running s72033-advipservicesk9_wan-mz.122-18.SXF17a.
Because the FWSM doesn't support PGM we had to extend the VLANs to the 6500. The servers continue to use the FWSM as their default gateway, though. The SVIs for each VLAN sit in a VRF according to their assigned environment, so QA VLAN1, VLAN2 and VLAN3 sit in VRF_QA and PROD VLAN11, VLAN12 and VLAN13 sit in VRF_PROD. Multicast isn't enabled globally on the 6500, but each VRF has multicast routing enabled and each SVI has PIM and PGM enabled.
So one of the VLAN SVI configs looks like:
ip vrf forwarding MC_PGM_QA
ip address 184.108.40.206 255.255.255.240
ip pim sparse-dense-mode
ip pgm router
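For completeness, the surrounding per-VRF setup described above would look roughly like this. This is a sketch, not our exact config: the VRF names are taken from the SVI snippet, and the VLAN numbers and IP addressing are illustrative only, with the production side mirrored into its own VRF.

```
! Multicast routing is enabled per VRF, not globally
ip multicast-routing vrf MC_PGM_QA
ip multicast-routing vrf MC_PGM_PROD
!
! One SVI per QA VLAN, all in the QA VRF (addressing illustrative)
interface Vlan1
 ip vrf forwarding MC_PGM_QA
 ip address 10.1.1.1 255.255.255.240
 ip pim sparse-dense-mode
 ip pgm router
!
! Production mirrors the same config in its own VRF
interface Vlan11
 ip vrf forwarding MC_PGM_PROD
 ip address 10.11.1.1 255.255.255.240
 ip pim sparse-dense-mode
 ip pgm router
```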
With this config in place, the IGMP and mroute output looks good and the servers can use multicast as required: they join the group, and the mroute table shows the correct egress interfaces for the traffic in each VRF. The problem is that we would like to use the same MC group address on both QA and PRODUCTION (mirroring the config and all that), but with the above config in place the MC traffic from QA bleeds into PRODUCTION and vice versa, which makes the developers cry. In other words, when QA servers send traffic to the shared MC group, the production servers receive a copy rather than the traffic being confined to the QA VLANs, which is what the mroute output says should be happening.
My multicast knowledge isn't great, but from the output of the mroute tables, IGMP and some debugs it looks like this should work as desired. I'm not sure if this is an L3 issue or an L2 one, given that the MC group address (and therefore the multicast MAC address) is common to both environments. I'm guessing the quick fix is to just give production a different MC group address, but I'd rather know whether this setup is possible and what could be wrong with the config; perhaps it's a bug, since we're running quite an old IOS release.
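In case it helps, these are the sort of per-VRF show commands (VRF and VLAN names assumed from the config snippet above, so adjust to taste) that should separate what the L3 control plane thinks from what L2 is actually flooding:

```
! Control plane: per-VRF IGMP membership and mroute state
show ip igmp vrf MC_PGM_QA groups
show ip mroute vrf MC_PGM_QA
show ip mroute vrf MC_PGM_PROD
!
! Data plane at L2: which ports each VLAN is flooding the group to.
! Since both environments use the same group, they also share the
! same multicast MAC, so check the snooping entries per VLAN.
show ip igmp snooping
show mac-address-table multicast vlan 1
show mac-address-table multicast vlan 11
```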