Info on Nexus 5000's w/ 2000 extenders.

Unanswered Question
Apr 14th, 2009

Hi all,

Just a couple of q's

From what I understand, the Nexus 2000 extenders are managed by the Nexus 5000's almost like they are an internal module. Can someone clarify how management is determined?

Like, say you have two Nexus 5000s, and the fabric extenders are connected to both for redundancy. Are both 5000s capable of managing the 2000s, or is one elected for the job?

Also, are there any detailed documentation guides on design and deployment specifically with Nexus platforms?

Thanks in advance.


spreed Tue, 04/14/2009 - 06:41

Hello Paul,

We are just testing the Nexus 5020s and FEXs (2000s). A FEX is currently managed by only one parent 5020 at a time. It goes through a process (I call it association) which makes it logically a part of the parent. If you have a second 5020 connected to the FEX and the link is active, that 5020 knows the FEX is there, but that's it. If the first 5020 is taken offline (link down or rebooted), the FEX will associate with the second box; however, that process takes about 45 seconds. Don't forget that the ports connected to the FEX have to be configured specially to support the FEX, as follows:

interface Ethernet1/15
  switchport mode fex-fabric
  fex associate 101
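The parent switch can also define the FEX globally. A minimal sketch (the description here is made up, and the pinning value depends on how many fabric links you use):

```
fex 101
  pinning max-links 1
  description FEX0101
```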

There are a couple of guides available:


johgill Thu, 04/23/2009 - 06:03

There are 3 stages of FEX redundancy. The first is one 5k to one 2k. The second is vPC between 5k's, but each 2k still connects to only one 5k at a time. In the release after this, the goal is to have two 5k's connected and a FEX connect via vPC (virtual port-channel) to both 5k's.

paul_alexander Thu, 04/23/2009 - 06:14

Thanks for the info. This is kind of what I meant. Do you know where I'd find design information similar to what you have just mentioned?


paul_alexander Thu, 04/23/2009 - 11:18

Actually, I'm a little confused as to what you mean now.

I thought vPC was only an N7K feature...

johgill Thu, 04/23/2009 - 11:20


Sorry, vPC is coming out in the next release for the 5k. I am looking for some documentation supporting my previous statement. I have seen a few road maps but not sure how much of that is well documented externally.


a12288 Tue, 07/07/2009 - 06:27

Hi, John.

Any plan for the FEX to support LACP on host interfaces? Thanks.


johgill Tue, 07/07/2009 - 06:34

I know host port-channels on FEX should be supported in 4.1(3), out in a few weeks. I believe this will include LACP and mode on.

paul_alexander Tue, 07/07/2009 - 07:04

Yeah, been playing with 4.1 over the last couple of days - it works really well.

It will be even better once the configuration can be synchronized across the 5K's.

a12288 Wed, 07/08/2009 - 05:12

Hi, Paul.

What is your vPC scenario? If I have 2 Nexus 5010s and 2 C6500/Sup720s, I would like to create 2 vPC channels: one from the left-hand C6500/Sup720 to the pair of N5010s, and another from the right-hand C6500/Sup720 to the same pair of N5010s, with the uplinks being multiple Gig interfaces. Any comments on this?
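Per vPC, what I have in mind on each N5010 is roughly this (interface and channel numbers are just placeholders), with the C6500 side running a normal etherchannel toward the pair:

```
interface Ethernet1/1
  switchport mode trunk
  channel-group 10 mode active

interface port-channel10
  switchport mode trunk
  vpc 10
```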


Patrick Murphy Fri, 07/31/2009 - 20:18

Hello All:

I've just upgraded my Nexus 5010s to 4.1(3) and created the peer-link and peer-keepalive between the two 5010s. I've dual-homed the FEX using vPC to both 5010s. Everything appears to be operational and the FEX is associated to both 5010s. However, when configuring a host port for the FEX from the first 5010, the configuration of that port does not appear to be synchronized to the second 5010.
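Roughly, the vPC peer setup on each 5010 looks like this (the addresses and port numbers here are placeholders, not my exact config):

```
vpc domain 100
  peer-keepalive destination 172.16.1.2 source 172.16.1.1 vrf management

interface port-channel10
  switchport mode trunk
  vpc peer-link
```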

Is this to be expected, since the 5000s act as separate switches? It would be a nightmare to keep the configs in sync across both switches for the same host port. Maybe I need to blow away the configs, since I previously single-homed this FEX to only one of the 5010s.

Let me know if anybody has seen this with 4.1(3).



johgill Mon, 08/03/2009 - 04:31


You can consider the switches separate, but aware of the virtual port channel they share. Configuration is completely separate as well.

The active-active (dual-homed) scenario is a new concept, and I'm sure there will be changes in the future - but right now the mechanism doesn't exist to synchronize configurations.


John Gill

Patrick Murphy Wed, 08/05/2009 - 06:08

Thanks for the confirmation. I think we will wait on implementing dual-homed VPC for a few more revisions.

At least 4.1(3) fixes issues with TACACS we were seeing.

Robert Rowland III Tue, 09/08/2009 - 04:02


Yes, keeping the 5Ks in sync can be a drag - although once you know about it keeping them alike should be easy.

We have our FEXes cross-connected (dual-homed, as you wrote) to 5020s as well. Even though we cannot yet make LACP links for server teaming, we will not have to re-cable the FEX/5020 links.

Bad things will occur if you do not keep the 5020s in sync manually :)

johgill Tue, 09/08/2009 - 04:16

Config-sync is in the works. You should see it released before the end of the year.

Daniel Barr Tue, 08/11/2009 - 10:34

It would appear this feature (host port-channels on FEX) did not in fact make it into 4.1(3); is that correct? Is it still on the roadmap?

johgill Tue, 08/11/2009 - 13:32

If you are using 2 5k's to control 1 FEX in vPC, referred to as FEX Active-Active mode, you still cannot make a host port-channel on the FEX.

You should see the error message:

"Fabric in Fex A-A mode, can't configure sat port as member"

Is this what you are seeing?

Daniel Barr Wed, 08/12/2009 - 08:41

No, our situation is the opposite: one 5k and two 2k's.

My question was actually referring to your post of Jul 7:

"I know host port-channels on FEX should be supported in 4.1(3), out in a few weeks. I believe this will include LACP and mode on."

I took that as you were referring to creating a port-channel between two host ports on a single FEX. But I guess you were actually referring to vPC?

I know the answer to my overall question though - there is no possibility of vPC configurations in an environment with a single 5k.

johgill Wed, 08/12/2009 - 12:22

Sorry for all the confusion, but "host port-channel" refers to a host forming a channel to two FEXs. These two FEXs must be single-homed.
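As a rough sketch (interface, channel, and FEX numbers are made up): with FEX 100 single-homed to one 5k and FEX 101 single-homed to the other, the host-facing config on each 5k would carry the same vPC number, something like:

```
interface Ethernet100/1/1
  switchport access vlan 10
  channel-group 30 mode active

interface port-channel30
  switchport access vlan 10
  vpc 30
```

On the second 5k, the matching host port (e.g. Ethernet101/1/1) would join a channel with the same vPC number.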

In the future, we will shoot for a vPC from 5k -> FEX and another vPC from FEX to host.

We do not have a plan to support a channel from a host to just 1 FEX though.



a12288 Wed, 10/21/2009 - 08:01

Hi, John.

In the scenario of 1 FEX dual-homed to 2 NX5010s, the vPC number must be the same. What about the FEX association number? Does it matter here? I tried to use a different FEX number on each NX5010 to associate the SAME FEX, but "show fex" shows the FEX stuck in "Discovered". Do I have to use the same FEX association number?

My config is below:

1st NX5010:

NX5001# sh run int eth 1/19
version 4.1(3)N1(1a)

interface Ethernet1/19
  description To NX2001.3
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

interface port-channel100
  switchport mode fex-fabric
  vpc 100
  fex associate 100
  speed 10000

NX5001# sh fex
Number  Description  State       Model           Serial
100     FEX0100      Discovered  N2K-C2148T-1GE  JAF1334AAHM

2nd NX5010:

NX5002# sh run int eth 1/19
version 4.1(3)N1(1a)

interface Ethernet1/19
  description To NX2001.4
  switchport mode fex-fabric
  fex associate 101
  channel-group 101

interface port-channel101
  switchport mode fex-fabric
  vpc 100
  fex associate 101
  speed 10000

NX5002# sh fex
Number  Description  State       Model           Serial
101     FEX0101      Discovered  N2K-C2148T-1GE  JAF1334AAHM

The reason I am asking is that I read in another post that a Cisco SE recommends using different FEX association numbers.


johgill Wed, 10/21/2009 - 10:48

I would use the same number since you are referring to the same FEX. I don't know the reasons for using different numbers, but in the future the configurations will likely sync and that will be a problem.
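Applied to the config you posted, the second NX5010 would then use 100 throughout as well, along these lines:

```
interface Ethernet1/19
  description To NX2001.4
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

interface port-channel100
  switchport mode fex-fabric
  vpc 100
  fex associate 100
  speed 10000
```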



Joseph.Cram Mon, 11/02/2009 - 16:33

John - In the Active/Active configuration, what is the maximum number of FEX with 4.1?

johgill Tue, 11/03/2009 - 16:03


Any one Nexus 5000 can have 16 port channels (only 12 before 4.1(3)).

If you have each FEX going to two N5k's via vPC (which is called Active/Active), each N5k will count the FEX as part of one channel. Active/Active mode implies the use of a channel to connect to a FEX, more specifically a virtual port-channel (vPC).

This means an Active/Active configuration will still only support 16 FEX in total.

Also, please post new questions in a new thread so it is easy for everyone to find the questions and answers and stay on topic.



raylinzone Wed, 12/16/2009 - 07:31

Depending on your redundancy scenario, there is also another great feature of the 5000-to-FEX top-of-rack switch setup that our team deploys in our current datacenters.

If you want in-rack redundancy of top-of-rack switches, you can NIC-team your servers, and instead of using a protocol to check the health between the NICs over the network, you can just monitor whether the server NICs still have link.

The setup is like this:

1. I have 2 N5Ks that each have a single fiber link that runs to its own respective TOR in the supported rack.

2. Every NIC on every server in the rack is NIC-teamed; one link from the team goes to one TOR and the other link to the other TOR.

3. The NIC team is set to fail over if there is a link-down event on the active port. (This means that you will not have to broadcast NIC health checks over your network.)

The feature:

The single 10-gig fiber link between the FEX and the 5000 acts like a backplane, not like a link. If it goes down due to N5K failure, FEX failure, or 10-gig link failure, the FEX will shut down all of its ports, which means that in every failure scenario of the N5K or FEX, all your servers will fail over to their secondary NICs with no downtime.

That being said, if two servers on the same VLAN and the same FEX want to communicate with each other, their traffic still has to go up to the N5K and come back down to the FEX.

The best part about all this as a network infrastructure builder is that I only have to configure the N5K and then just power on and fiber up the FEX, and the FEX automatically configures itself.

One point of management, syslogging, alerting, auth, monitoring. I love this platform.



ronbuchalski Thu, 03/18/2010 - 10:02


I have done the same thing as you, and it works well.  Single-connected 2Ks to 5Ks, and two 2Ks in the top of each rack.  Without redundant 2Ks connected to redundant 5Ks, you have no rack redundancy!

I have two followup questions:

1) Is there any problem in using mgmt0 on the N5Ks as the VPC Keepalive port?  I know that the docs recommend using TWO ports configured in an etherchannel configuration to support the VPC keepalive connection, but I have a hard time justifying the cost of four SFPs and four ports for a VPC keepalive connection.  I currently use mgmt0 on my N7Ks as a VPC keepalive connection, because I had a REAL problem justifying the cost of four 10G ports and four 10G SFP+ for the VPC keepalive.
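For reference, what I do on the N7Ks amounts to pointing the keepalive at the management VRF, roughly like this (addresses are placeholders):

```
vpc domain 10
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
```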

2) We discovered (after purchase) that the N2K only supports 1G connections, which causes us a problem for those devices in the Data Center which need 10/100M connectivity in the rack (APC PDUs, some ILO ports, some older servers that cannot yet be decommissioned).  As a solution I have connected a Catalyst 2950 to the N2K (via Gi0/1 on the 2950), so now I have 24 10/100 ports in the rack.  I learned that the N2K ports are host ports and do not support spanning tree from a switch.  Thus I can only connect the 2950 to a single N2K in the rack, which compromises redundancy for devices attached to the 2950.

What I would like to do is create VPC on the N5Ks, etherchannel the two Gigabit uplinks on the 2950 (Gi0/1 and Gi0/2) and connect the 2950 to both N2Ks in the rack, so it would 'look' like an etherchannel-connected host via VPC to the N5K.  Is this a workable thing to do?
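On the 2950 side, I'm picturing something like this (a sketch; the channel number is arbitrary, and this assumes the image supports LACP):

```
interface range GigabitEthernet0/1 - 2
  switchport mode trunk
  channel-group 1 mode active

interface Port-channel1
  switchport mode trunk
```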

Thanks in advance for any responses.


raylinzone Thu, 03/18/2010 - 10:51


In my configuration I have Nexus 7000s vPC'd down to 5000s. I do not vPC the 5000s together, but each rack has 2 FEX TORs. One TOR is supported by one 5000 and the other FEX is supported by a separate 5000, so I deploy the 5000s in sets that support 12 cabinets apiece. In the rack, we NIC-team our servers: one NIC is active and sits in one FEX, and the other is passive and sits in the other FEX. If we lose a FEX due to bad hardware, loss of uplink to one of the 5000s, or loss of a 5000, the servers in the affected racks will fail over to the other NIC and have a whole new uplink.

As for the 10/100 support, we do not uplink switches to the FEXs themselves, although it is possible to do. The config to allow normal switch operation is below, but it is not a practice recommended by Cisco. Unfortunately, in your case it sounds like you're supporting servers with that need, whereas in my network we only have 10/100 for ILO/DRAC. We created a completely separate network for this, which consists of a redundant set of 3560s uplinked to our Nexus 7000s and then some cheaper SLM248G dumb switches hanging off that with only one VLAN on it.

interface EthernetXXX/1/XX
  spanning-tree port type normal
  spanning-tree bpduguard disable
  spanning-tree guard none

Your config with trunking may look more like:

interface EthernetXXX/1/XX
  switchport mode trunk
  switchport trunk native vlan X
  spanning-tree port type normal
  spanning-tree bpduguard disable
  spanning-tree guard none
  channel-group X mode on   (you may be stuck using "on" because I'm not sure what the 2950 will support for port aggregation; the other option on the FEX is LACP)

interface port-channel X
  switchport mode trunk
  switchport trunk native vlan X
  spanning-tree port type normal
  spanning-tree bpduguard disable
  spanning-tree guard none

This should get you going. Be sure to hard set the speed on the 2950 gig ports.

I haven't done vPC on the 5000s yet because we don't see the value in it yet. By vPC'ing from our distribution-layer Nexus 7000s to the 5000s, we eliminate networking loops. Even once it is deployed, though, spanning-tree configuration is in place in case badness occurs on the vPC.

Let me know how it goes. I've attached a design.

ronbuchalski Thu, 03/18/2010 - 14:27


Thanks for the tips.  It sounds like we have similar designs, although your option for a separate network for the 10/100 is a different approach than we're considering.  We wanted to eliminate copper between racks and have done so with only two small exceptions.

The Catalyst 2950 supports PAgP, LACP, and Static etherchannel.  I prefer using LACP.

What did you use for VPC interconnect on your N7Ks?  I'm curious.  We only have 10G ports on ours, and I didn't want to spend the money on SFPs for VPC, as well as sacrifice expensive 10G ports for it.  We entertained deploying some 1G ports on the N7K, but the cost was $$$$$!!!  So, for only a very small number of 1G ports at the N7K, we modified the design to get rid of them.


raylinzone Thu, 03/18/2010 - 14:46

We have our vPC peer link as two 10-gig links. Overkill by far, but because of the shared ASIC structure of the 32-port 10-gig blades and the way we have everything connected, it's not a huge waste.

Depending on the 10-gig blade you have, the oversubscription rate can be 4:1, which is what we have. So it all works out in the end.

