Cisco Support Community

Nexus 2k: Server in NLB Cluster not able to connect with IP PIM enabled

 

Introduction


Cisco Nexus 2000 Series Fabric Extenders are designed to provide connectivity for rack and blade servers, as well as converged fabric deployments. Cisco Nexus 5000, Nexus 6000, Nexus 7000, and Nexus 9000 Series Switches, as well as Cisco UCS Fabric Interconnect, act as parent switches for the fabric extenders.
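For context, a fabric extender is associated with its parent switch roughly as follows (a minimal NX-OS sketch; the fabric uplink interface is an assumption, and FEX ID 101 matches the Ethernet101/1/x interface numbering used later in this document):

====snip====
feature fex
! Assumed fabric uplink; FEX ID 101 yields host interfaces Ethernet101/1/x
interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 101
====snip====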

 

IP PIM


Enabling PIM on an interface also enables Internet Group Management Protocol (IGMP) operation on that interface. An interface can be configured to be in dense mode, sparse mode, or sparse-dense mode. The mode describes how the Cisco IOS software populates its multicast routing table and how the software forwards multicast packets it receives from its directly connected LANs.
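As a minimal sketch, enabling PIM on a Layer 3 interface in Cisco IOS looks roughly like this (the SVI and addressing are hypothetical); enabling PIM also brings up IGMP on the interface:

====snip====
ip multicast-routing
interface Vlan36
 ip address 10.36.0.1 255.255.255.0   ! hypothetical addressing
 ip pim sparse-mode                   ! or: ip pim dense-mode / ip pim sparse-dense-mode
====snip====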


Sparse Mode


A sparse mode interface is used for multicast forwarding only if a join message is received from a downstream router or if group members are directly connected to the interface. Sparse mode assumes that no other multicast group members are present. When sparse mode routers want to join the shared path, they periodically send join messages toward the RP. When sparse mode routers want to join the source path, they periodically send join messages toward the source; they also send periodic prune messages toward the RP to prune the shared path.
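Sparse mode requires a rendezvous point toward which these join messages are sent; a static RP can be defined, for example (the RP address here is hypothetical):

====snip====
ip pim rp-address 10.0.0.1   ! hypothetical RP address for all groups
====snip====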


Dense Mode


Initially, a dense mode interface forwards multicast packets until the router determines that there are no group members or downstream routers, or until a prune message is received from a downstream router. Then, the dense mode interface periodically forwards multicast packets out the interface until the same conditions occur. Dense mode assumes that multicast group members are present. Dense mode routers never send a join message. They do send prune messages as soon as they determine they have no members or downstream PIM routers. A dense mode interface is subject to multicast flooding by default.
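For comparison, dense mode is simply the per-interface mode choice in Cisco IOS (the interface name here is hypothetical):

====snip====
interface GigabitEthernet0/1
 ip pim dense-mode   ! floods multicast out this interface until pruned
====snip====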

 

Problem


Two servers, CAS01 and CAS02, are members of an NLB cluster. Each server has a pair of interfaces connected to the same Nexus 2000. A vPC is configured for each server's interface pair, and an NLB cluster address is configured on the servers. When CAS02 is brought up, it cannot ping CAS01 or reach the Internet, although other devices in the network can see it. If one of the CAS02 interfaces is shut down, CAS02 can ping CAS01 again and Internet access is restored, but the server is then unable to join the NLB cluster.

 

Network Diagram


Configuration


The following is the relevant configuration of the Nexus 5500 that manages the Nexus 2000 to which the NLB servers connect.

====snip====
interface port-channel13
 description vPC CAS01
 switchport access vlan 36
interface Ethernet101/1/9
 description CAS01-1
 switchport access vlan 36
 channel-group 13 mode active
interface Ethernet101/1/10
 description CAS01-2
 switchport access vlan 36
 channel-group 13 mode active

interface port-channel41
 description vPC CAS02
 switchport access vlan 36
interface Ethernet101/1/14
 description CAS02-1
 switchport access vlan 36
 channel-group 41 mode active
interface Ethernet101/1/15
 description CAS02-2
 switchport access vlan 36
 channel-group 41 mode active

mac address-table static 03bf.ac19.2700 vlan 36 interface port-channel13 port-channel41
====snip====


Resolution


Check for IP PIM/IGMP snooping on the switches. This feature can cause the NLB cluster to go offline. To fix this issue:

a) Change the NLB cluster mode on the CAS01/CAS02 servers to IGMP multicast.

b) Change the ARP statement on the 6509 to reflect the multicast MAC address (for routing purposes).

c) Change the corresponding static MAC statement on the 7K to also reflect the multicast MAC address.

d) Then add an IGMP querier to the VLAN 36 network, because IP PIM is not enabled on the Layer 3 interfaces.
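Sketched in configuration terms, steps b) through d) might look roughly like the following (all IP addresses are hypothetical; substitute the cluster's actual multicast MAC, noting that in IGMP multicast mode Windows NLB derives an address of the form 0100.5e7f.xxxx):

====snip====
! b) On the 6509 (IOS): static ARP mapping the NLB cluster IP to the multicast MAC
arp 10.36.0.100 03bf.ac19.2700 ARPA

! c) On the Nexus 7000 (NX-OS): static MAC entry toward the NLB-facing ports
mac address-table static 03bf.ac19.2700 vlan 36 interface port-channel13 port-channel41

! d) On the Nexus 5500 (NX-OS): IGMP snooping querier for VLAN 36
vlan configuration 36
  ip igmp snooping querier 10.36.0.2
====snip====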

This document is based on the following discussion:
https://supportforums.cisco.com/discussion/12085131/nexus2k-nlb-teaming-two-server-single-switch

 

Related Information


vPC Status Down between Nexus 5000 and Directly connected Server
Cisco Nexus: Using Ethanalyzer for Troubleshooting Issues
