
Nexus vPC question

andrew.lerner
Level 1

I am curious how failover is accomplished in the following scenario:

Two N5Ks and two N2Ks (each N2K connected to a single N5K, NOT dual-homed), with a server connected to each N2K and vPC configured. See the diagram below.

In the event of a peer-link failure, I understand how the server knows NOT to send traffic to the 2K behind the secondary switch (vPC shuts down the port-channel member).

However, in the event of a SWITCH failure (i.e., n5k01), how is the SERVER notified to STOP sending traffic to n2k01? n2k01 has no upstream connectivity at that point. The server has not lost link to either n2k01 or n2k02, so how does IT know to stop sending to n2k01?

Do the port-channels on n2k01 automatically shut down when upstream 5K connectivity is lost, or is the information passed to the server via n2k02 over the EtherChannel?

n5k01 ----peer link---- n5k02
  ||                      ||
  ||                      ||
   X                      ||
  ||                      ||
  ||                      ||
n2k01                   n2k02
     \                    /
      \                  /
       \                /
        \              /
         server (with vPC)
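
Roughly, the relevant config on n5k01 looks like the sketch below (n5k02 mirrors it; the domain ID, keepalive IPs, and interface/port-channel numbers are just placeholders, and the peer-link member ports are omitted):

feature vpc
feature fex
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel 1
  switchport mode trunk
  vpc peer-link

! fabric link to the single-homed n2k01
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 101

! FEX host port facing one server NIC
interface ethernet 101/1/1
  switchport mode access
  channel-group 20 mode active

! host vPC that spans both FEXes/5Ks
interface port-channel 20
  switchport mode access
  vpc 20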

Thanks in Advance!

8 Replies

scott1322
Level 1

Hi Andrew,

Is the server a blade chassis? If it is something like the HP c-Class chassis, then with the Cisco 312x or Flex-10 interconnects I think there is software available for exactly this type of scenario; I believe it was called something like SmartLink. If it's a single server with two NICs that are "teamed", then I am sure similar software exists from the server vendor.

I think the newer NX-OS releases support dual-homing a 2K to both 5Ks, but I haven't touched the Nexus for a few months now and cannot confirm this.
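
If it is supported on your release, I believe dual-homing a 2K comes down to tying the fex-fabric port-channel on each 5K to the same vPC number, roughly like this (placeholder numbers, and again I cannot confirm this on a box right now):

! same config on both 5Ks
interface ethernet 1/10
  switchport mode fex-fabric
  fex associate 101
  channel-group 101

interface port-channel 101
  switchport mode fex-fabric
  fex associate 101
  vpc 101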

Collin Clark
VIP Alumni

If I understand your question correctly, the answer would be spanning tree. You're running a U-shaped L2 domain; spanning tree will see the links go down and open blocked ports as necessary.

andrew.lerner
Level 1

I opened a TAC case on this and they confirmed that when an upstream 5K dies, all local interfaces on the 2K die as well. From the server/host's perspective, link is lost to the 2K that has no upstream connectivity. So it is a link-loss event for all ports on the 2K when its upstream 5K dies... according to TAC.
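
For anyone who wants to poke at this themselves, the FEX and vPC state can be watched from the 5Ks with the usual show commands (nothing exotic, just what the platform already gives you):

show fex
show fex detail
show interface fex-fabric
show vpc

show fex and show fex detail list each FEX and its state, show interface fex-fabric shows the fabric uplinks toward the 2Ks, and show vpc shows the peer status and the state of each vPC.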

Huh, never would have guessed that. Thanks for posting the info.

ronbuchalski
Level 1

Andrew,

Two things....

1) NIC teaming is not the same as EtherChannel. So, in the case of a server with two NICs teamed as a single interface, it will react strictly based on external link status. Of course, it will also be able to react to a NIC failure, which is internal to the server.

2) Regarding port-channel and vPC, I have a scenario where a Catalyst 2950 is connected via Gi0/1 and Gi0/2 to each of the N2Ks in a rack, configured as an LACP port-channel mapped to a vPC on the Nexus 5Ks. This provides path diversity in the event of an interface, cable, N2K, or N5K failure. I tested the scenario of losing the link between the N2K and N5K (which also simulates the loss of the N5K). Upon loss of the N5K-N2K link, the Catalyst 2950 immediately saw loss of link on that member and dropped it from the port-channel, but maintained connectivity via the remaining port-channel member.

Upon re-establishment of the N5K-N2K link, it takes 20-30 seconds for the N2K to re-establish communication with the N5K, after which time it activates its ports. At that point, the port-channel on the Catalyst 2950 regains the second member and the full port-channel is re-established.
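
For reference, the relevant pieces of that test setup look roughly like this (interface and channel-group numbers are placeholders, not a copy-paste of the running config):

! Catalyst 2950: one Gi link to each N2K, bundled with LACP
interface range GigabitEthernet0/1 - 2
 switchport mode access
 channel-group 1 mode active

! On n5k01, for the host port on its FEX (n5k02 mirrors this on its own FEX, e.g. e102/1/1)
interface ethernet 101/1/1
  switchport mode access
  channel-group 30 mode active

interface port-channel 30
  switchport mode access
  vpc 30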

Hope this answers your question.

-rb

Gents,

I have been reading this post with interest, as I have a situation where servers with dual NICs will be connected to two different N5Ks. In this scenario, must I configure the N5Ks as a vPC pair?

thanks

Ian.

Confirmed that when a 5K dies, all ports on a downstream 2K lose link immediately.  Tested/Verified in our environment by powering off the 5K and observing link loss on a server plugged into the 2K.

I am curious as to how you connected a Catalyst to the Nexus 2K. I didn't believe this could work, since you cannot disable BPDU Guard on a Fabric Extender port. Were you able to disable it?
