Info on Nexus 5000's w/ 2000 extenders.

paul_alexander
Level 1

Hi all,

Just a couple of q's

From what I understand, the Nexus 2000 extenders are managed by the Nexus 5000's almost like they are an internal module. Can someone clarify how management is determined?

Like, say you have two Nexus 5000's, and the fabric extenders are connected to both for redundancy. Are both 5000's capable of managing the 2000's, or is one elected for the job?

Also, are there any detailed documentation guides on design and deployment specifically with Nexus platforms?

Thanks in advance.

Paul

30 Replies

Patrick,

Yes, keeping the 5Ks in sync can be a drag - although once you know about it keeping them alike should be easy.

We have our FEXes cross-connected (dual-homed, as you wrote) to 5020s as well - even though we cannot yet make LACP links for server teaming, we will not have to re-cable the FEX/5020 links.

Bad things will occur if you do not keep the 5020s in sync manually :)

Config-sync is in the works. You should see it released before the end of the year.

It would appear this feature (host port-channels on FEX) did not in fact make it into 4.1(3), is that correct? Is it still on the roadmap?

If you are using 2 5k's to control 1 FEX in vPC, referred to as FEX Active-Active mode, you still cannot make a host port-channel on the FEX.

You should see the error message:

"Fabric in Fex A-A mode, can't configure sat port as member"

Is this what you are seeing?

No, our situation is the opposite: one 5k and two 2k's.

My question was actually referring to your post of Jul 7:

"I know host port-channels on FEX should be supported in 4.1(3), out in a few weeks. I believe this will include LACP and mode on."

I took that as you were referring to creating a port-channel between two host ports on a single FEX. But I guess you were actually referring to vPC?

I know the answer to my overall question though - there is no possibility of vPC configurations in an environment with a single 5k.

Sorry for all the confusion, but the "host port-channel" refers to a host forming a channel to two FEXs. These two FEXs must be single-homed.

In the future, we will shoot for a vPC from 5k -> FEX and another vPC from FEX to host.

We do not have a plan to support a channel from a host to just 1 FEX though.
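To make that concrete, here is a rough sketch of what the 5k side of such a host port-channel can look like, assuming the two single-homed FEXs sit behind a pair of 5ks that are already vPC peers; the interface, VLAN, channel, and vPC numbers are just examples. On each 5k, for the FEX homed to it:

interface Ethernet100/1/1
  switchport access vlan 10
  channel-group 20 mode active

interface port-channel20
  switchport access vlan 10
  vpc 20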

Regards,

John

Understood. Thanks.

Hi, John.

In the scenario of 1 FEX dual-homed to 2 NX5010s, the vPC number must be the same, but what about the FEX association number - does it matter here? I tried using a different FEX number on each NX5010 to associate the SAME FEX, but "show fex" stays at "Discovered". Do I have to use the same FEX association number?

My config is below:

1st NX5010:

NX5001# sh run int eth 1/19
version 4.1(3)N1(1a)

interface Ethernet1/19
  description To NX2001.3
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

interface port-channel100
  switchport mode fex-fabric
  vpc 100
  fex associate 100
  speed 10000

NX5001# sh fex
  FEX        FEX             FEX               FEX
Number    Description     State             Model             Serial
------------------------------------------------------------------------
100        FEX0100       Discovered     N2K-C2148T-1GE     JAF1334AAHM

2nd NX5010:

NX5002# sh run int eth 1/19
version 4.1(3)N1(1a)

interface Ethernet1/19
  description To NX2001.4
  switchport mode fex-fabric
  fex associate 101
  channel-group 101

interface port-channel101
  switchport mode fex-fabric
  vpc 100
  fex associate 101
  speed 10000

NX5002# sh fex
  FEX        FEX             FEX               FEX
Number    Description     State             Model             Serial
------------------------------------------------------------------------
101        FEX0101       Discovered     N2K-C2148T-1GE     JAF1334AAHM

The reason I am asking is that I read in another post that a Cisco SE recommends using different FEX association numbers.

thanks

I would use the same number, since you are referring to the same FEX. I don't know the reasons for using different numbers, but in the future the configurations will likely sync, and mismatched numbers will be a problem then.
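For example, matching the first switch, the fabric config on your second NX5010 would only change the FEX association (the channel-group and vPC numbers you already have can stay as they are):

interface Ethernet1/19
  description To NX2001.4
  switchport mode fex-fabric
  fex associate 100
  channel-group 101

interface port-channel101
  switchport mode fex-fabric
  vpc 100
  fex associate 100
  speed 10000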

Regards,

John

John - In the Active/Active configuration, what is the maximum number of FEX with 4.1?

Joseph,

Any one Nexus 5000 can have 16 port channels (only 12 before 4.1(3)).

If you have each FEX going to two N5Ks via vPC (which is called Active/Active), each N5K will count the FEX as part of one channel. Active/Active mode implies the use of a channel to connect to a FEX, more specifically a virtual port-channel (vPC).

This means an Active/Active configuration will still only support 16 FEX in total.

Also, please post new questions in a new thread so it is easy for everyone to find the questions and answers and stay on topic.

Thanks!

John

Depending on your redundancy scenario, there is also another great feature of the 5000-to-FEX top-of-rack switch setup that our team deploys in our current datacenters.

If you want in-rack redundancy of top-of-rack switches, you can NIC team your servers and, instead of using a protocol to check NIC health over the network, just monitor whether the server NICs still have link.

The setup is like this:

1. I have 2 N5Ks, each with a single fiber link that runs to its own respective TOR in the supported rack (a minimal fabric-side sketch is shown after this list).

2. Every NIC on every server in the rack is NIC teamed; one link from the team goes to one TOR and the other link goes to the other TOR.

3. The NIC team is set to fail over if there is a link-down event on the active port (this means that you will not have to broadcast NIC health checks over your network).
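For reference, a minimal sketch of the fabric side of step 1 on one of the N5Ks; the interface and FEX numbers here are examples only, not our production values:

interface Ethernet1/1
  description Fabric link to this rack's TOR FEX
  switchport mode fex-fabric
  fex associate 110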

The feature:

The single 10 Gig fiber link between the FEX and the 5000 acts like a backplane rather than an ordinary link. If it goes down due to an N5K failure, a FEX failure, or failure of the 10 Gig link itself, the FEX shuts down all of its ports, which means that in every failure scenario of the N5K or FEX, all your servers fail over to their secondary NICs with no downtime.

That being said, if two servers on the same VLAN and the same FEX want to communicate with each other, that traffic still has to go up to the N5K and come back down to the FEX.

The best part about all this as a network infrastructure builder is that I only have to configure the N5K, then just power on and fiber up the FEX, and the FEX automatically configures itself.

One point of management, syslogging, alerting, auth, monitoring. I love this platform.

Thanks,

-Ray

Ray,

I have done the same thing as you, and it works well.  Single-connected 2Ks to 5Ks, and two 2Ks in the top of each rack.  Without redundant 2Ks connected to redundant 5Ks, you have no rack redundancy!

I have two followup questions:

1) Is there any problem in using mgmt0 on the N5Ks as the VPC Keepalive port?  I know that the docs recommend using TWO ports configured in an etherchannel configuration to support the VPC keepalive connection, but I have a hard time justifying the cost of four SFPs and four ports for a VPC keepalive connection.  I currently use mgmt0 on my N7Ks as a VPC keepalive connection, because I had a REAL problem justifying the cost of four 10G ports and four 10G SFP+ for the VPC keepalive.
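For clarity, this is the sort of keepalive configuration I mean; the domain ID and addresses are placeholders, not our real values:

vpc domain 10
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management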

2) We discovered (after purchase) that the N2K only supports 1G connections, which causes a problem for those devices in the Data Center that need 10/100M connectivity in the rack (APC PDUs, some ILO ports, some older servers that cannot yet be decommissioned).  As a solution I have connected a Catalyst 2950 to the N2K (via Gi0/1 on the 2950), so now I have 24 10/100 ports in the rack.  I learned that the N2K ports are host ports and do not support spanning tree from a switch, so I can only connect the 2950 to a single N2K in the rack, which compromises redundancy for devices attached to the 2950.

What I would like to do is create VPC on the N5Ks, etherchannel the two Gigabit uplinks on the 2950 (Gi0/1 and Gi0/2) and connect the 2950 to both N2Ks in the rack, so it would 'look' like an etherchannel-connected host via VPC to the N5K.  Is this a workable thing to do?

Thanks in advance for any responses.

-rb

Ron,

In my configuration I have Nexus 7000s vPC'd down to 5000s. I do not vPC the 5000s together, but each rack has 2 FEX TORs. One TOR is supported by one 5000 and the other FEX is supported by a separate 5000, so I deploy the 5000s in sets that support 12 cabinets apiece. In the rack, we NIC team our servers so that one NIC is active and sits in one FEX while the other is passive and sits in the other FEX. If we lose a FEX due to bad hardware at the FEX, loss of the uplink to one of the 5000s, or loss of a 5000, the servers in the affected racks fail over to the other NIC and have a whole new uplink.

As for the 10/100 support, we do not uplink to the FEXs themselves, although it is possible to do. The config to allow normal switch operation is below, but it is not a recommended practice by Cisco. Unfortunately, in your case it sounds like you're supporting servers with that need, whereas in my network we only need 10/100 for ILO/DRAC. We created a completely separate network for this, which consists of a redundant set of 3560s uplinked to our Nexus 7000s and then some cheaper SLM248G dumb switches hanging off that with only one VLAN on it.

interface EthernetXXX/1/XX

  spanning-tree port type normal
  spanning-tree bpduguard disable
  spanning-tree guard none

Your config with trunking may look more like:

interface EthernetXXX/1/XX

  switchport mode trunk
  switchport trunk native vlan X
  spanning-tree port type normal
  spanning-tree bpduguard disable
  spanning-tree guard none
  channel-group X mode on   (you may be stuck using "on" because I'm not sure what the 2950 supports for port aggregation; the other option on the FEX side is LACP)

interface port-channel X

  switchport mode trunk
  switchport trunk native vlan X
  spanning-tree port type normal
  spanning-tree bpduguard disable
  spanning-tree guard none

This should get you going. Be sure to hard set the speed on the 2950 gig ports.
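One caveat for your case, since your two FEXs hang off two different 5Ks: as far as I know, the port-channel on each 5K would also need a vPC ID (on top of a vPC domain and peer link already existing between the 5Ks) so that the pair presents a single logical channel to the 2950. Roughly, with a made-up ID:

interface port-channel X
  vpc X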

I haven't done vPC on the 5000s yet because we don't see value in it yet. By vPCing from our distribution-layer Nexus 7000s to the 5000s we eliminate networking loops. Even once vPC is deployed, though, spanning-tree configuration is in place in case badness occurs on the vPC.

Let me know how it goes. I've attached a design.

Ray,

Thanks for the tips.  It sounds like we have similar designs, although your option for a separate network for the 10/100 is a different approach than we're considering.  We wanted to eliminate copper between racks and have done so with only two small exceptions.

The Catalyst 2950 supports PAgP, LACP, and Static etherchannel.  I prefer using LACP.
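The 2950 end I have in mind would look roughly like this; the port, channel, and VLAN numbers are placeholders:

interface range GigabitEthernet0/1 - 2
  switchport mode trunk
  switchport trunk native vlan X
  channel-group 1 mode active

interface Port-channel1
  switchport mode trunk
  switchport trunk native vlan X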

What did you use for VPC interconnect on your N7Ks?  I'm curious.  We only have 10G ports on ours, and I didn't want to spend the money on SFPs for VPC, as well as sacrifice expensive 10G ports for it.  We entertained deploying some 1G ports on the N7K, but the cost was $$$$$!!!  So, for only a very small number of 1G ports at the N7K, we modified the design to get rid of them.

-rb