Cisco Support Community

New Member

Multicast on UCS

hi all,

Can anybody explain to me in detail how multicast traffic is forwarded between UCS blades?

MAC addresses are learned on server ports - those managed by UCS won't age out.

Unknown unicast traffic is dropped on uplink ports.

Broadcast traffic is forwarded to all servers.

Regarding multicast traffic, I only found the following statement:

Server-to-server multicast and broadcast traffic is sent through all uplink ports in the same VLAN.

But two blades on UCS don't receive multicast traffic from each other by default.

Any ideas regarding this issue?

Thx lukas

13 REPLIES
New Member

Re: Multicast on UCS

Hi Lukas,

We have the same problem too.

We need to cluster two RedHat 5.4 hosts installed on two different B-Series UCS blades.

They heartbeat each other on VLAN 2300. The interfaces on the hosts are configured to be active on the same Fabric Interconnect, so the heartbeat traffic does not need to cross any other switch for the hosts to communicate.

Doing a tcpdump on eth0.2300 on each host, I can clearly see the multicast packets leaving hostA being blackholed: they never reach hostB.

Issuing a "show ip igmp snooping groups" on the FabricInterconnect shows that soon after RedHat boot hostA and hostB register themselves to receive the multicast flow. While this entry is active, there are no flow problems for multicast between the hosts. When the entry rapidly cleares out, multicast flow gets blackholed.

The hosts never register again with the Fabric Interconnect: they don't send out another join message, so no snooping entry is generated.
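In case it helps anyone reproducing this, filtering for just the IGMP traffic on the VLAN interface makes the missing joins easy to spot (the interface name is simply the one on our hosts):

tcpdump -ni eth0.2300 'ip proto igmp'

Right after boot you see the membership reports from the hosts, and after that nothing more, which matches the snooping entry timing out.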

RedHat support suggests dealing with multicast problems on the switch, but the solution they give is for Catalyst switches, and we are in a Nexus environment. It also seems impossible to insert static CAM entries or to disable IGMP snooping on the Fabric Interconnect, so there is no room for further testing.

One solution would be to change the heartbeat to broadcast instead of multicast, but RedHat clearly states this is not possible in version 5.4.

I have one question: isn't the Fabric Interconnect supposed to flood the multicast packet out all available ports in VLAN 2300 when there is no snooping entry? That would be one solution to my problem.

If anyone has some answers, please help (:

Thank you, bye

New Member

Re: Multicast on UCS

I'm resurrecting this thread since I think I'm seeing the same issue setting up an IGMP multicast MS NLB cluster. Did anyone find a resolution for this?

New Member

Re: Multicast on UCS

Hi Harold,

Cisco TAC gave us a solution a while ago, but I forgot to write it up here, sorry.

In our environment the UCS system is connected upstream to a Nexus 7000. There was no way to change the multicast behaviour of RedHat (RedHat support didn't help much), so we had to work only on the Nexus (thank you Kartic!).

You have to continuously poll the hosts with IGMP queries; these trigger membership reports, which in turn populate the IGMP snooping tables on the UCS and on the upstream switches.

You can turn on the IGMP querier feature on the Nexus 7000 by issuing these commands on the VLAN where you need multicast:

vlan 100

  ip igmp snooping querier 192.168.1.100  [the subnet doesn't matter, use any IP you want]
  no ip igmp snooping link-local-groups-suppression    [this ensures that IGMP reports that also contain link-local groups can pass through the switches successfully]
  name vlan-name

Now you should see multicast traffic flowing through the entire broadcast domain.
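To double-check that it is working, the usual NX-OS show commands (nothing specific to this fix, just the standard ones) should show the querier active in the VLAN and the hosts registered again:

show ip igmp snooping querier vlan 100
show ip igmp snooping groups vlan 100

With the querier polling, the snooping entries for the hosts should stay populated instead of aging out after a few minutes.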

Hope this helps, bye

New Member

Re: Multicast on UCS

One of our customers had the same issue: they didn't have a multicast router sending the membership queries, so after three minutes the membership was removed from the Fabric Interconnect.

Their upstream switch couldn't send snooping queries either, so the customer wrote a simple Perl script that sends the IGMP queries, which solved the issue.
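I can't post the customer's script, but the general idea of a periodic query sender looks roughly like the following (shown here as a Python sketch rather than the actual Perl; it needs root for the raw socket, the 60-second interval is just an example, and a real querier would also set the IP Router Alert option, which this sketch skips):

import socket
import struct
import time

def checksum(data):
    # Standard Internet checksum over the 8-byte IGMP message
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def general_query(max_resp=100):
    # IGMPv2 general membership query: type 0x11, max response time 10s,
    # group address 0.0.0.0 means "report every group you are a member of"
    body = struct.pack("!BBH4s", 0x11, max_resp, 0, socket.inet_aton("0.0.0.0"))
    return struct.pack("!BBH4s", 0x11, max_resp, checksum(body), socket.inet_aton("0.0.0.0"))

# Raw IGMP socket (requires root); the kernel builds the IP header for us.
# You may also need IP_MULTICAST_IF to pick the right VLAN interface.
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_IGMP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

while True:
    # Query the all-hosts group 224.0.0.1 so every member answers with a
    # report, which refreshes the snooping entries on the interconnect
    sock.sendto(general_query(), ("224.0.0.1", 0))
    time.sleep(60)  # well inside the roughly 3-minute snooping timeout

That said, if your upstream gear supports it, turning on an IGMP querier on the switch is the cleaner fix.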

John


Hi Daniel,

Many thanks for your answer, you just saved my life :P

It seems the command syntax has changed in the newer Nexus versions. The commands are now:

vlan 100

 name vlan-name

vlan configuration 100

 ip igmp snooping querier 192.168.1.100

 no ip igmp snooping link-local-groups-suppression   

 

Hope this will be useful for future reference.

Cisco Employee


Keep in mind this line will cause the switch to stop flooding a link-local multicast group as soon as anything on the VLAN joins that group.

 no ip igmp snooping link-local-groups-suppression

This essentially puts you a single packet away from an outage of each link-local multicast group; at the very least, it will drop all traffic to devices that behave normally and do not join the link-local range.

Cisco Employee

Multicast on UCS

By default, most switches suppress joins for link-local (224.0.0.0/24) groups and flood these groups like broadcasts, but this behavior can be disabled with the following command:

no ip igmp snooping link-local-groups-suppression 

With this command applied, the behavior is identical to normal until something joins a link-local group. Once a group has any member joined, the switch stops flooding that group and starts forwarding it only to the ports that joined. This was a good workaround for an old (and long fixed) bug where the switch ignored an entire IGMPv3 join packet if it contained any link-local groups, but other than that specific scenario the feature is almost never needed.

It is not necessary for a device to join a link-local group because those groups are flooded by default, so this feature is essentially never needed. Make sure to thoroughly check whether this command is in your upstream switch config, because you may be one join packet away from the network no longer flooding link-local multicast groups.
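A quick way to look for it on the upstream switches is a plain config grep (standard NX-OS output filtering):

show running-config | include link-local

If it shows up and you are not working around the old IGMPv3 bug above, consider removing it.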

VIP Red


I am posting a few slides which I did for a customer who had issues with multicast, highlighting the changes in UCS firmware after v2.1.

New Member


Hi,

We have different multicast groups like 239.255.255.255, 225.0.0.36, etc. The different multicast groups are used to receive multicast on different vNICs. All was working fine before version 2.1. We recently upgraded the UCS to version 2.1, and after that we do not receive multicast traffic on the blades for group 225.0.0.36. We opened a case, and TAC says the issue is that the group IP maps to the same multicast MAC address as a link-local address (224.0.0.x): when the middle octets contain zeroes (x.0.0.y), the address is classified as link-local. They say to change the multicast group, but we cannot change the multicast group. Can anyone suggest a solution here?
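For what it's worth, the overlap TAC is describing comes from the standard mapping of IPv4 multicast groups to MAC addresses: only the low 23 bits of the group address are copied into the 01:00:5e MAC prefix, so 225.0.0.36 and the link-local group 224.0.0.36 end up behind the same MAC. A small Python sketch just to show the arithmetic:

import ipaddress

def multicast_mac(group):
    # Only the low 23 bits of the IPv4 group address go into the MAC,
    # so 32 different groups share each multicast MAC address
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("225.0.0.36"))  # 01:00:5e:00:00:24
print(multicast_mac("224.0.0.36"))  # 01:00:5e:00:00:24 - same MAC as the link-local group

So any group of the form x.0.0.y collides with the 224.0.0.0/24 link-local range at the MAC layer, which, as far as I understand TAC's explanation, is why 225.0.0.36 gets classified as link-local after the upgrade.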
