09-04-2013 12:10 AM - edited 03-07-2019 03:17 PM
Hello.
I have to configure an NLB cluster that connects to two 3560 switches. There is also an L2 link between the two switches.
If I statically map the virtual cluster MAC address to the inter-switch link on both switches, will that create a loop?
My concern is this: when one switch receives a frame on the link from the second switch, with the cluster's virtual MAC address as the destination, will it send the frame back to the second switch because of the static mapping of the virtual MAC to that interface?
Thanks in advance.
Kunafin Andrey
09-04-2013 01:43 AM
Hi,
Depending on the NLB mode, additional configuration is needed on the Cisco devices.
Which mode do you have configured?
Btw: I wouldn't recommend using MS NLB at all.
Best regards
Rolf
09-04-2013 02:56 AM
Btw: I wouldn't recommend using MS NLB at all.
I second this. Heck, even MS themselves do NOT recommend using MS-based NLB. MS has even published official documentation recommending third-party load balancers instead of MS NLB.
09-04-2013 03:38 AM
Sorry, I forgot to note the NLB mode. Multicast mode is in use.
I agree that NLB is best avoided; however, NLB is what our customer wants.
I just want to confirm that statically mapping the virtual MAC to the link between the switches won't create a loop. And is this the right solution at all, or would it be better to connect the cluster to a single switch?
Thanks in advance.
09-04-2013 03:48 AM
Here, read this: Microsoft Unified Communications Load Balancer Deployment
09-04-2013 04:11 AM
Unfortunately, I didn't find any information on how to configure two Cisco switches to work with one NLB cluster (without flooding packets destined to the cluster out of all ports within the same VLAN).
P.S.: NLB is going to run on Microsoft Windows Server 2003 R2.
09-04-2013 04:23 AM
Multicast mode (without the IGMP option) requires two configuration tasks:
1) A static ARP entry to bind the multicast MAC address to the IP address of the NLB cluster.
The reason is that Cisco routers do not accept this combination in ARP replies (see RFC 1812, section 3.3.2).
By default the cluster MAC address is 03:bf followed by the cluster IP address in hex:
arp [vrf <name>] <cluster-IP> 03bf.xxxx.xxxx arpa
2) With the ARP entry in place it would actually work, because the group bit of the cluster MAC address is set and the frames are flooded within the VLAN. To avoid this undesired flooding, you should bind the cluster MAC address only to those ports (including uplinks) which are needed for NLB communication. You don't need to do this for functionality reasons; it is just to prevent the flooding.
And no, this will not result in Layer 2 loops.
mac address-table static 03bf.xxxx.xxxx vlan <id> interface <port> [<port> ...]
(Depending on the platform, the commands could vary.)
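As a concrete illustration, here is a minimal sketch assuming a hypothetical cluster VIP of 10.1.1.100 in VLAN 10, with the NLB servers on Gi0/1 and the inter-switch uplink on Gi0/2 (all addresses, VLAN and interface names are examples only):

```
! Multicast-mode NLB derives the default cluster MAC from the VIP:
! 10.1.1.100 -> 03bf.0a01.0164 (0a.01.01.64 is 10.1.1.100 in hex)

! Task 1: static ARP entry on the Layer 3 device routing for the VLAN
arp 10.1.1.100 03bf.0a01.0164 arpa

! Task 2: constrain flooding by binding the cluster MAC to the NLB
! server port and the inter-switch uplink (repeat on both switches)
mac address-table static 03bf.0a01.0164 vlan 10 interface GigabitEthernet0/1 GigabitEthernet0/2
```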
Hope that helps
Rolf
Additional Links:
http://www.cisco.com/en/US/tech/tk870/tk877/tk880/technologies_tech_note09186a008011b481.shtml
09-04-2013 04:46 AM
The ARP entries have already been configured. The main task was to prevent the undesired flooding within the VLAN without creating an L2 loop.
To avoid this undesired flooding, you should bind the cluster MAC address only to those ports (including uplinks) which are needed for NLB communication. You don't need to do this for functionality reasons; it is just to prevent the flooding.
And no, this will not result in Layer 2 loops.
That is what I needed.
Thank you for your help.
09-04-2013 04:52 AM
You're welcome.
Thanks for rating and marking as the correct answer.
11-05-2013 11:57 AM
Hello all,
I am sort of in the same boat, but I have a single 4500-X switch stack with two SharePoint servers attached to build the cluster, and both SharePoint servers are VMs. The VMs have multiple connections to the stack, with at least four servers on each link. I gather that I need to bind the "virtual" MAC address to the link used specifically by the SharePoint servers, but will there be any issues with this being a single stack, or with the fact that there are three other servers on that same link?
Network Load Balancing is new to me, and all I had to start with was the programmers coming to me and saying I needed to apply a static ARP command and disable snooping on my switches to make their SharePoint cluster work. The docs I am reading on CCO are helping it make sense, but they all seem to reference single standalone servers rather than VMs. I just want to make sure that I understand it all correctly, so that I know the changes I need to make are right and don't break something else.
Thanks in advance ...
Brent
BTW... I am thinking that I should be pushing for a hardware-based load balancer instead?
11-06-2013 02:16 AM
Hi Brent,
BTW... I am thinking that I should be pushing for a hardware-based load balancer instead?
Well, if buying a hardware-based LB is within your budget, I wouldn't even think about implementing NLB.
Even Microsoft doesn't recommend NLB, as Leo stated in the original discussion.
However, if you have to implement NLB, you should try to isolate the NLB traffic as far as possible. The flooding scope of NLB traffic is the broadcast domain, so a separate VLAN would help prevent the non-NLB servers from receiving NLB traffic. Where it is not needed, you may want to prune that VLAN from trunks. On trunked connections to VM hosts which have to carry this VLAN, you'll also have to deal with the NLB bandwidth consumption. Perhaps you may want to mark down NLB traffic that exceeds a certain volume to the scavenger class, to prevent the other servers on that host from losing available bandwidth.
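A minimal sketch of the pruning idea, assuming the NLB VLAN is 20 and Gi1/0/24 is a trunk that does not need to carry it (the VLAN ID and interface name are examples only):

```
! Keep NLB flooding away from trunks that don't need the NLB VLAN
interface GigabitEthernet1/0/24
 switchport trunk allowed vlan remove 20
```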
HTH
Rolf
03-16-2016 12:24 AM
Dear Rolf,
I have a similar scenario: my Exchange server is running with NLB. I know it is not recommended, but it was running fine with my existing L2 switch (a 2950). Now my company wants to upgrade the switch to a 2960, which I think does not support multicast mode.
With NLB configured in unicast mode I am able to ping the virtual IP through my new 2960 switch, but in multicast mode it fails.
I configured the static entries:
arp VIP VMAC arpa
mac address-table static VMAC vlan interface (connected towards the server)
It still fails.
Please help me resolve this issue.
Thanks & regards
Nitin