
Ask The Expert: Nexus 5000 and 2000

Feb 24th, 2012

With Lucien Avramov

Welcome to the Cisco Support Community Ask the Expert conversation. Get an update on Nexus 5000 and 2000 from Lucien Avramov. Lucien is a technical marketing engineer in the Server Access Virtualization Business Unit at Cisco, where he supports the Cisco Nexus 5000, 3000 and 2000 Series. He was previously a customer support engineer and Technical Leader in the Cisco Technical Assistance Center. He holds a bachelor's degree in general engineering and a master's degree in computer science from Ecole des Mines d'Ales as well as the following certifications: CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183. 

Remember to use the rating system to let Lucien know if you have received an adequate response. 

Lucien might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through March 9, 2012. Visit this forum often to view responses to your questions and the questions of other community members.

whitlelisa Tue, 02/28/2012 - 12:29

Hello Lucien,

Nice to see you again in Ask the Experts. Is there a compatibility table/matrix posted somewhere for Nexus products? In particular, I want to know whether the Nexus 2248TP GE is compatible with the SFP-10G-SR.

Thank you.

Lisa

Lucien Avramov Wed, 02/29/2012 - 14:02

Thank you for your question.

2248TP-GE is compatible with SFP-10G-SR for the 10GE uplinks.

When you use transceivers such as the SFP-10G-SR or the FET (FET-10G), it's important to have the same transceiver type on the N5K port as well as the N2K port.

You can find the compatibility information on the cisco.com website:

http://www.cisco.com/en/US/products/hw/modules/ps5455/products_device_support_tables_list.html

You can either bookmark this link or use a search engine with the keywords 'cisco sfp compatibility matrix'.

Please let me know if you have any other questions.

rammuthaiya Wed, 02/29/2012 - 23:05

Hi,

Please let me know the best practice for connecting servers to the Nexus 2000 and the Nexus 2000 to the Nexus 5000.

This will help me with designs for our customers.

Ram

Lucien Avramov Thu, 03/01/2012 - 04:52

If you have a Nexus 5500 running a 5.1(3)x release of code or above, the best practice is to use eVPC technology: this allows you to dual-home your Nexus 2000 to a pair of Nexus 5500s regardless of whether your servers are single-homed or dual-homed (with a port-channel).

You can read about eVPC at:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/513_n1_1/b_Cisco_n5k_layer2_config_gd_rel_513_N1_1_chapter_01010.html

Otherwise, the design guidance is:

- For dual-attached servers with a port-channel, or FCoE with SAN A / SAN B: Nexus 2000 attached directly to a single Nexus 5000 (no dual-homing).

- Nexus 2000 dual-homed to each Nexus 5000 when you have single-homed servers or no need for port-channeling.
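For reference, a minimal NX-OS sketch of the dual-homed FEX attachment described above; the FEX and interface numbers are hypothetical, and the same configuration goes on both Nexus 5500s:

```
! Applied identically on both Nexus 5500s
feature vpc
feature fex

fex 100
  pinning max-links 1
  description FEX-100

! Fabric port-channel to the FEX, bundled under a vPC
interface port-channel100
  switchport mode fex-fabric
  fex associate 100
  vpc 100

interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 100
  channel-group 100
```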

Here is a design guide for that:

Data Center Aggregation Layer Design and Configuration with Cisco Nexus Switches and Virtual PortChannels

More information in general can be found at www.cisco.com/go/nexus5000, in the Design and White Paper sections.

alanjbrown Thu, 03/01/2012 - 07:36

Lucien,

We are looking to perform a two step Nexus 5000 upgrade as follows:

Stage 1: Non-ISSU

4.1(3)N2(1a) to 5.0(3)N2(2b)

Stage 2: ISSU

5.0(3)N2(2b) to 5.1(3)N1(1a)

We have a pair of N5K-C5020P-BF and N2K-C2148T-1GE in a vPC configuration (vPC-Peered Dual-Supervisor Virtual Modular System Dual-Homed FEXs )

I'm not quite sure about the upgrade process:

Stage 1: Non-ISSU

1) Install all on primary switch

2) Reload fex from secondary

3) Reload secondary switch after saving boot statements

Power cycle both 5020s for the power sequencer upgrade.

Stage 2: ISSU

1) Install all Primary

2) Install all Secondary

I have obviously simplified the process, but have I got the basic steps correct?

Al

Robert.Rizzo Fri, 03/02/2012 - 18:38

I'm trying to figure out how to forward UDP broadcasts from my VPN phones, coming in on one VLAN, destined for the UC VLAN. The problem is that our VPN phones work great connected to the ASA, but when we push firmware to them they fail. Looking at the logs on the firewall, everything seems to be passing, but it appears the phone sends a UDP broadcast while the UC servers are on a different VLAN. We are using a Nexus 5548P with the L3 card and 2248s for the servers.

Thanks

Lucien Avramov Mon, 03/05/2012 - 03:40

Great steps. I'd upgrade with a small change to Stage 1:

Stage 1)

a) Install all on the primary

b) Power Cycle Manually Primary

c) Install all on the Secondary

d) Power Cycle Manually Secondary

(no need to reload the Nexus 2000, they will reload automatically initiated by the Nexus 5000 upgrade completion)

Stage 2) as you suggested
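The "install all" step in each stage can be sketched as follows; the image filenames are examples for the Stage 1 target release and must match the images you actually download:

```
! Copy both images to bootflash, then run a single install all
switch# copy scp://user@server/n5000-uk9-kickstart.5.0.3.N2.2b.bin bootflash:
switch# copy scp://user@server/n5000-uk9.5.0.3.N2.2b.bin bootflash:
switch# install all kickstart bootflash:n5000-uk9-kickstart.5.0.3.N2.2b.bin system bootflash:n5000-uk9.5.0.3.N2.2b.bin
! Verify the power sequencer version afterwards
switch# show version
```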

5.0(3)N2(2b) release notes:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/release/notes/Rel_5_0_3_N2_1/Nexus5000_Release_Notes_5_0_3_N2.html

5.1(3)N1(1a) release notes with ISSU (please read the limitations section):

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/release/notes/Rel_5_1_3_N1_1/Nexus5000_Release_Notes_5_1_3_N1.html#wp348065

Just as an FYI, we upgraded the power sequencer in the 4.2(1)N1 version, so any upgrade from an earlier version will include the power sequencer upgrade. Once upgraded, show version should report power sequencer version 1.2.

Also please note that if you were to set the boot strings in Stage 1 instead of using the install all command, it would *not* upgrade your power sequencer.

Finally, you can take a look at a typical upgrade video for Nexus 5000 / 2000 here:

https://supportforums.cisco.com/videos/1206

diana.mascarenhas Wed, 03/07/2012 - 16:32

Hi Lucien,

As I mentioned earlier, I do not see any latency in the ping response from the first pair of switches, which is not cross-connected (connected as regular port-channels), but I do see latency on the second pair, which is cross-connected via vPC with two roots in the spanning tree. Should I give up on vPC and use regular port-channels as on the first pair?

Lucien Avramov Wed, 03/07/2012 - 23:47

Let's please move to a recommended design first, and then we can investigate this further. You don't have a proper vPC design in your scenario on the pair of Nexus switches I pointed out.

diana.mascarenhas Mon, 03/05/2012 - 02:49

Hi Lucien,

I have two pairs of Nexus switches in my setup, as follows.

The first pair connection as below

==========================

Nexus 1 (configured with vpc 1)----- 2 connections------ 6500 catalyst sw1(po 9)

Nexus 2 (configured with vpc 1) ----- 2 connections------ 6500 catalyst sw2 (po9)

po2 on nexus pair for vpc peer link

Spanning tree on Nexus 1

po1     root fwd1      p2p peer stp

po2    desg fwd 1   vpc peer link



Spanning tree on Nexus 2

po1      altn blk1     p2p peer stp

po2      root fwd1   vpc peer link

The second pair connection

=====================

Nexus3 (configured with vpc 20 ) ------ 1 connection ------- 6500 catalyst sw1 (po20)

              (configured with vpc 30) ------- 1 connection ------- 6500 catalyst sw2 (po30)

Nexus4 (configured with vpc 20) ----- 1connection ---- 6500 catalyst sw1 (po20)(stp guard root)

              (configured with vpc 30) ----- 1 connection ----6500 catalyst sw2 (po30)(stp guard root)

po1 on nexus pair for vpc peer link

Spanning tree on Nexus 3

po1      desg fwd1        vpc peer link

po20    root fwd1           p2p peer stp

po30    altn blk1            p2p peer stp

Spanning tree on Nexus 4

po1      root fwd1           vpc peer link

po20    root fwd 1          p2p peer stp

po30    altn blk1               p2p peer stp

Problem observed: high ping response times

Source server on the 1st pair of switches; destination server on the 2nd pair of switches.

Ping response from the 1st pair of switches to the destination server: normal (between 1 and 3 ms).

Ping response from the 2nd pair of switches to the source server: jumping from 3 ms to 100+ ms.

There are no errors or packet drops on any of the above ports; I cannot understand why the ping response is high for connections from the second pair.

Lucien Avramov Mon, 03/05/2012 - 03:57

I would like to see your first Nexus pair cross-connected to your Catalyst 6500 chassis in order to use vPC.

Otherwise, you should use regular port-channels.

Could you make this design change and look again at your ping times?
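As a hedged sketch, the cross-connected configuration Lucien is suggesting would look roughly like this on each Nexus of the pair; the domain, port-channel, and interface numbers are illustrative:

```
feature vpc

vpc domain 10
  peer-keepalive destination 10.0.0.2   ! illustrative mgmt IP of the vPC peer

! Peer link between the two Nexus switches
interface port-channel2
  switchport mode trunk
  vpc peer-link

! One member link from each Nexus to the same 6500, bundled as one vPC
interface port-channel9
  switchport mode trunk
  vpc 9

interface Ethernet1/9
  channel-group 9 mode active
```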

diana.mascarenhas Mon, 03/05/2012 - 18:01

Hi,

I cannot change the design on the first pair of Nexus switches, as it is in production. I am aware that using vPC serves no purpose there, as the behaviour is similar to port-channels, and in any case I am not facing the high response time issue on those switches.

The high response time is on the second pair, cross-connected via vPC to the Catalyst 6500s. I can see two root paths in the spanning tree output; is that behaviour normal, or is it the cause of the latency?

Lucien Avramov Tue, 03/06/2012 - 05:21

You are actually creating peer traffic across the peer link with this design; that is why I suggested the change, to make forwarding more predictable and reduce the peer-link traffic between the switches.

whitlelisa Mon, 03/05/2012 - 12:39

Hi Lucien, another question regarding FabricPath and vPC+.

Do I need separate port-channels/links between the two core/distribution switches for the vPC+ peer link and FabricPath, or can I use the same link I have for FabricPath as the vPC+ peer link?

This vPC+ setup will be used as an L2 link for an existing DC. Is it a supported topology to have vPC+ going to two different 3750s at the access layer of the existing DC?

Nexus 7009 at CORE/Distribution have F2 line cards.

Thank you

Lisa

Lucien Avramov Tue, 03/06/2012 - 05:20

Great question. You don't need a separate link between the core/distribution switches for the vPC+ peer link and FabricPath; they are the same link. The vPC+ peer link is a FabricPath core port, so you can use the same link for both FabricPath and the vPC+ peer link.
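A minimal sketch of that shared link, with hypothetical numbers: the peer link is simply configured as a FabricPath core port, and the vPC domain carries a FabricPath switch-id:

```
feature-set fabricpath
feature vpc

vpc domain 20
  fabricpath switch-id 100              ! emulated switch-id shared by the vPC+ pair
  peer-keepalive destination 10.0.0.2   ! illustrative peer mgmt IP

! One port-channel serving as both FabricPath core port and vPC+ peer link
interface port-channel1
  switchport mode fabricpath
  vpc peer-link
```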

If your 3750 switches are in a stack with a cross-stack port-channel, it will be supported. Otherwise, you will not be able to do this with two independent switches, which seems to be the case from your description.

whitlelisa Tue, 03/06/2012 - 22:08

I have another question, Lucien:

How does load balancing work with FCoE?

For classical LAN traffic it is SRC/DST MAC (default), as we have known for years. What is it for FCoE?

Thank you

Lisa

Lucien Avramov Wed, 03/07/2012 - 00:35

On the Nexus 5000, the default load balancing mechanism on the LACP port-channel is source-destination. If we leave it in this state, all the FCoE traffic will take the same link in the port channel when the Nexus 5000 is forwarding frames to the upstream device.

To enable the Nexus 5000 to load balance using exchange IDs, we configure it for 'source-dest-port' load balancing.

Nexus5000(config)# port-channel load-balance ethernet ?

  destination-ip    Destination IP address

  destination-mac   Destination MAC address

  destination-port  Destination TCP/UDP port

  source-dest-ip    Source & Destination IP address   --------> SID/DID

  source-dest-mac   Source & Destination MAC address

  source-dest-port  Source & Destination TCP/UDP port --------> SID/DID/OXID

  source-ip         Source IP address

  source-mac        Source MAC address

  source-port       Source TCP/UDP port
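So, per the output above, the one-line change for exchange-based (SID/DID/OXID) load balancing is:

```
Nexus5000(config)# port-channel load-balance ethernet source-dest-port
```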

jefflingle Tue, 03/06/2012 - 23:45
Lucien,

I'm deploying 5596UPs in a vPC design, with my servers each connecting via 4 x 10G links (2 iSCSI SAN / 2 LAN). Some of these servers are ESX hosts that are part of a cluster. I have been reading up on VN-Link, VN-Tag, and Adapter-FEX, and I understand some of the benefits of using a 1000v and VN-Tag-capable NICs over the ESX vSwitch, such as easier management and monitoring, but are there any performance improvements with this, or anything else worth noting?

Thanks.

Lucien Avramov Wed, 03/07/2012 - 06:09

The performance between the VMware dVS and the 1000v should be similar. You get many extra features with the Nexus 1000v that you don't get with the VMware dVS.

If you need higher performance, you can look at the Adapter-FEX solution, as it can give you up to 30% improved performance compared to software switches (so depending on the workload type your VMs run, you may benefit from Adapter-FEX technology).

jefflingle Wed, 03/07/2012 - 07:34

Thanks,

Are there any docs you can link me to that explain Adapter-FEX performance in more detail?

jefflingle Fri, 03/09/2012 - 08:36

Thanks for the help. What is the best way for me to find out when that doc is released? Is it something you can send to me?

Lucien Avramov Fri, 03/09/2012 - 08:38

Absolutely! Please send me a private message, and also look at the white paper section of the Nexus 5000 CCO page.

nikisal Thu, 03/08/2012 - 16:48

Hi,

We are connecting 5548s and 5596s to ESXi hosts and cannot get multiple VLANs to trunk. Is this a Nexus issue or on the server/VMware side?

Lucien Avramov Fri, 03/09/2012 - 08:24

Using trunks is the most common practice when connecting ESXi hosts to switches. Take a look at the port configuration and make sure you have 'switchport mode trunk'. Then look at show interface e1/x (where x is the port going to your ESXi host) and make sure the state is 'connected'; if not, the reason will be indicated. Make sure you configure the list of VLANs on the ESXi side as well, and you should be good to go.
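A minimal sketch of the host-facing port configuration, with an illustrative interface and VLAN list:

```
interface Ethernet1/1
  description ESXi host uplink
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  spanning-tree port type edge trunk
  no shutdown
```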
