My server admins want to create a server cluster using NLB on the NICs. Their documentation recommends against plugging into a Layer 3 switch because MAC address conflicts could occur. Does anyone know if this is true? Are there known issues with using Microsoft's NLB drivers with Cisco 4006s?
It stands for Network Load Balancing.
NLB uses the same MAC address across all NICs participating in NLB (some implementations vary, however), which confuses switches because they see the same MAC on different ports. Microsoft KB articles and other sources recommend plugging all NLB servers into a hub and then running one connection between the hub and the switch. This solves the same-MAC-on-multiple-ports issue on the switch. Another way to get around this is to configure NLB to use multicast.
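If you go the multicast route with Cisco gear, one common extra step is a static ARP entry on the upstream router, because Cisco routers will not accept an ARP reply that maps a unicast IP address to a multicast MAC address. A minimal IOS sketch (the IP and MAC below are made-up examples, substitute your cluster's values):

```
! Hypothetical values: 10.1.1.10 = cluster VIP, 0300.5e01.010a = NLB
! multicast cluster MAC. Without this static entry the router drops
! the cluster's ARP reply and never learns the multicast MAC.
arp 10.1.1.10 0300.5e01.010a ARPA
```

This gets traffic to the cluster, but by itself it does not stop the L2 flooding described below; that part needs static CAM entries or IGMP snooping.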
Our experience has been that when NLB uses a multicast MAC address, traffic bound for that MAC is flooded across the switch, or the whole L2 network. What suggested methods are there for overcoming this?
Other than setting up IGMP/CGMP on my L3 switch, I'm not sure. I know the server guys are telling me their options are either unicast or multicast. I want to try both configs in the lab. I think we might end up setting up a multicast group and enabling CGMP/IGMP so that the multicast traffic does not flood the whole subnet and is only delivered to the two cluster servers that need it. Does anyone have any recommendations?
There are documented problems with this process...here is the link. Let me know if this helps..
It's not that MAC address conflicts occur; the situation is that the NLB servers respond to ARP requests with a dummy MAC address. The servers use their globally unique NIC MAC address as the source address in outgoing frames. Incoming frames from clients are destined to the dummy address, which is never learned by the switch because it never appears in a source address field. The switch therefore floods those frames to all interfaces. This is the Microsoft method for ensuring that all NICs in the NLB 'cluster' see all of the incoming frames! This is the default; there is another option.
One solution is to wire all of the load-balanced NICs to a hub (not a switch), and then wire the hub to the switch. In addition, there is a registry setting on the Windows boxes that disables the 'feature' of using a dummy source MAC address. This is nicely written up at the Microsoft site; search for KB193602.
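From memory, the setting the KB article describes is a MaskSourceMAC value under the WLBS service key; treat the exact key and value name below as an assumption and verify against KB193602 before touching production boxes. Something like:

```
REM Sketch only -- confirm the key and value name in KB193602 first.
REM Setting MaskSourceMAC to 0 makes NLB stop masking the cluster MAC
REM in outgoing frames, so the switch can learn it like any other MAC.
reg add HKLM\SYSTEM\CurrentControlSet\Services\WLBS\Parameters /v MaskSourceMAC /t REG_DWORD /d 0
```

Remember that once the switch learns the cluster MAC on one port, only that one server sees client traffic, which is why this is normally combined with the hub arrangement above.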
We had fun with this one, let me tell you. During normal operations this 'feature' caused a few tens of Kbps of ambient flooded traffic, hardly noticeable. But when backups ran, there was about 1 Mbps worth of acknowledgements coming back from the backup server! More than enough to saturate a few 1 Mbps links we have. We had some very annoyed users for a while. By the way, it is possible to control the flooding somewhat by using strategically placed static MAC address entries (pointing to the links toward the server switch).
For example, you have two Catalysts to which the NLB interfaces are connected (for redundancy), and you are using static MAC statements. The statements have to point to the ports where the NLB NICs are connected and to the ports where the switches are interconnected. Traffic coming into one switch will be forwarded to the local NLB NIC and to the second switch. The second switch will forward the traffic to the other NLB interface. Will it also send the traffic back to the first switch?
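A sketch of what those static entries might look like, with the syntax from memory and all MACs, VLANs, and ports purely hypothetical; on CatOS (which many 4006s run) the command is set cam, on Catalyst IOS it is mac-address-table static:

```
! CatOS sketch: pin the cluster MAC to the local NLB port (2/1)
! and the inter-switch link (2/48) on VLAN 10
set cam permanent 02-01-0c-fe-00-01 2/1,2/48 10

! Equivalent Catalyst IOS sketch on each switch:
mac-address-table static 0201.0cfe.0001 vlan 10 interface FastEthernet2/1 FastEthernet2/48
```

As for the loop question: a switch never forwards a frame back out the port it arrived on, so the second switch will not send the traffic back to the first across the same inter-switch link.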
Both cluster servers and the web server will all be plugged into the same switch (a 4006 L3).
Is that what you needed to know?
For NLB and Cisco L2 or L3 switches, I found it best to use IGMP Snooping Immediate Leave. This solved all the problems I had while load balancing our firewalls. Before that, it would actually freeze the entire network until we rebooted the switch.
After I added the IGMP configuration it has been running smoothly, and response time from the firewalls has greatly improved.
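A minimal Catalyst IOS sketch of the kind of setup I mean; the VLAN number is an example and the exact keyword (immediate-leave vs. fast-leave) varies by platform and release, so check your switch's documentation:

```
! Enable IGMP snooping globally (usually on by default) and turn on
! immediate leave for the VLAN carrying the NLB/firewall multicast group,
! so the switch prunes the port as soon as an IGMP leave arrives.
ip igmp snooping
ip igmp snooping vlan 10 immediate-leave
```

For this to work, NLB has to be in its IGMP multicast mode so the cluster hosts actually send IGMP reports; plain multicast mode gives the snooping switch nothing to learn from.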
Do you have a sample setup for me to take a look @? I've never configured IGMP Snooping and I would like to see what a real config. looks like.