With Lucien Avramov
Welcome to the Cisco Support Community Ask the Expert conversation. Get an update on Nexus 5000 and 2000 from Lucien Avramov. Lucien is a technical marketing engineer in the Server Access Virtualization Business Unit at Cisco, where he supports the Cisco Nexus 5000, 3000 and 2000 Series. He was previously a customer support engineer and Technical Leader in the Cisco Technical Assistance Center. He holds a bachelor's degree in general engineering and a master's degree in computer science from Ecole des Mines d'Ales as well as the following certifications: CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183.
Remember to use the rating system to let Lucien know if you have received an adequate response.
Lucien might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through March 9, 2012. Visit this forum often to view responses to your questions and the questions of other community members.
Nice to see you again in Ask the Experts. Is there a compatibility table/matrix posted somewhere for Nexus products? In particular, I want to know if the Nexus 2248TP GE is compatible with the SFP-10G-SR.
Thank you for your question.
The 2248TP-GE is compatible with the SFP-10G-SR for the 10GE uplinks.
When you use transceivers such as the SFP-10G-SR or the FET (FET-10G), it's important to have the same transceiver type on the N5K port as well as the N2K port.
You can find the compatibility information on cisco.com website:
You can either bookmark this link, or use a search engine with the keywords 'cisco sfp compatibility matrix'.
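As a quick sanity check once the transceivers are inserted, you can verify what each side detects (port numbers below are examples; adjust to your actual uplinks):

```
! On the Nexus 5000 uplink port
show interface ethernet 1/1 transceiver

! On a FEX host port (example: FEX 100, port 1)
show interface ethernet 100/1/1 transceiver
```

Both ends of the uplink should report the same transceiver type.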
Please let me know if you have any other questions.
Please let me know the best practice for connecting servers to the Nexus 2000, and the Nexus 2000 to the Nexus 5000.
This will enable me to design solutions for our customers.
If you have a Nexus 5500 running a 5.1(3) version of code or above, the best practice is to use eVPC technology: this allows you to dual-home your Nexus 2000 to a pair of Nexus 5500s, regardless of whether you have single-homed or dual-homed (with port-channel) servers.
You can take a look regarding eVPC at:
Otherwise, the design is:
- For dual-attached servers with a port-channel, or FCoE with SAN A / SAN B: Nexus 2000 directly attached to a single Nexus 5000 (no dual homing).
- Nexus 2000 dual-homed to each Nexus 5000 when you have single-homed servers or no need for port-channeling.
Here is a design guide for that:
More information in general can be found at www.cisco.com/go/nexus5000 under the Design and White Papers sections.
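To illustrate the dual-homed FEX attachment described above, here is a minimal sketch on one Nexus 5500 (the FEX number 100 and port numbers are hypothetical; the vPC peer switch needs the matching configuration):

```
feature vpc
feature fex

fex 100
  description FEX-100

! Fabric uplink to the FEX, bound into a vPC so the FEX is dual-homed
interface port-channel100
  switchport mode fex-fabric
  fex associate 100
  vpc 100

interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 100
  channel-group 100
```

With this in place, the FEX host ports appear on both 5500s, and single-homed servers behind the FEX survive the loss of either parent switch.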
We are looking to perform a two step Nexus 5000 upgrade as follows:
Stage 1: Non-ISSU
4.1(3)N2(1a) to 5.0(3)N2(2b)
Stage 2: ISSU
5.0(3)N2(2b) to 5.1(3)N1(1a)
We have a pair of N5K-C5020P-BF and N2K-C2148T-1GE in a vPC configuration (vPC-peered dual-supervisor virtual modular system, dual-homed FEXs).
I'm not quite sure about the upgrade process:
Stage 1: Non-ISSU
1) Install all on primary switch
2) Reload fex from secondary
3) Reload secondary switch after saving boot statements
Power cycle both 5020 for Power sequencer upgrade.
Stage 2: ISSU
1) Install all Primary
2) Install all Secondary
I have obviously simplified the process, but have I got the basic steps correct?
I'm trying to figure out how to forward the UDP broadcasts from my VPN phones, coming in on one VLAN and destined for the UC VLAN. The problem is our VPN phones work great connected to the ASA; however, when we push firmware to them they fail. Looking at the logs on the firewall, everything seems to be passing, but it appears the phone is sending a UDP broadcast and the UC servers are on a different VLAN. We are using a Nexus 5548P with the L3 card and 2248s for the servers.
You can use the ip directed-broadcast command under L3 interfaces or SVIs.
You can take a look at:
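For example, on the SVI facing the UC servers (the VLAN number and addressing here are hypothetical, adjust to your environment):

```
! Requires the L3 module and routing enabled on the 5548P
feature interface-vlan

interface vlan 100
  description UC-servers
  ip address 10.1.100.1/24
  ip directed-broadcast
  no shutdown
```

This allows directed broadcasts arriving from the phone VLAN to be delivered as broadcasts onto the UC server VLAN.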
Great steps. I'd upgrade with a small change for Stage 1:
a) Install all on the primary
b) Power Cycle Manually Primary
c) Install all on the Secondary
d) Power Cycle Manually Secondary
(no need to reload the Nexus 2000, they will reload automatically initiated by the Nexus 5000 upgrade completion)
Stage 2) as you suggested
5.0(3)N2(2b) release notes:
5.1(3)N1(1a) release notes with ISSU (please read the limitations section):
Just as an FYI, we upgraded the power sequencer in the 4.2(1)N1 version, so any upgrade from an earlier version will undergo the power-sequencer upgrade. In your show version output, it should show version 1.2 for the power sequencer once upgraded.
Also please note that if you were to use the boot strings in Stage 1 instead of the install all command, it would *not* have upgraded your power sequencer.
Finally, you can take a look at a typical upgrade video for Nexus 5000 / 2000 here:
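For reference, a single upgrade step with install all looks like this (the image file names are examples; use the kickstart and system images for your target release):

```
! Copy both images to bootflash first, then:
switch# install all kickstart bootflash:n5000-uk9-kickstart.5.0.3.N2.2b.bin system bootflash:n5000-uk9.5.0.3.N2.2b.bin

! After the reload completes, verify the power sequencer version:
switch# show version
```

The install all command checks compatibility, reports whether the upgrade is disruptive, and handles the power-sequencer upgrade, which the boot-string method does not.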
As I mentioned earlier, I do not see any latency in the ping response from the first pair of switches, which is not cross-connected (connected as regular port-channels), but I see the latency in the second pair, which are cross-connected via vPC with two roots in the spanning tree. Should I give up on the idea of using vPC and use regular port-channels as in the first pair?
Let's please move to a recommended design first and then we can investigate this further. You don't have a proper vPC design in your scenario, on the pair of Nexus switches I pointed out.
I have 2 pair of Nexus switches in my setup as follows
The first pair connection as below
Nexus 1 (configured with vpc 1)----- 2 connections------ 6500 catalyst sw1(po 9)
Nexus 2 (configured with vpc 1) ----- 2 connections------ 6500 catalyst sw2 (po9)
po2 on nexus pair for vpc peer link
Spanning tree on Nexus 1
po1 root fwd1 p2p peer stp
po2 desg fwd 1 vpc peer link
Spanning tree on Nexus 2
po1 altn blk1 p2p peer stp
po2 root fwd1 vpc peer link
The second pair connection
Nexus3 (configured with vpc 20 ) ------ 1 connection ------- 6500 catalyst sw1 (po20)
(configured with vpc 30) ------- 1 connection ------- 6500 catalyst sw2 (po30)
Nexus4 (configured with vpc 20) ----- 1 connection ---- 6500 catalyst sw1 (po20)(stp guard root)
(configured with vpc 30) ----- 1 connection ----6500 catalyst sw2 (po30)(stp guard root)
po1 on nexus pair for vpc peer link
Spanning tree on Nexus 3
po1 desg fwd1 vpc peer link
po20 root fwd1 p2p peer stp
po30 altn blk1 p2p peer stp
Spanning tree on Nexus 4
po1 root fwd1 vpc peer link
po20 root fwd 1 p2p peer stp
po30 altn blk1 p2p peer stp
Problem Observed : High Ping response
Source server on 1st pair of switches ; Destination server on 2nd pair of switches
Ping response from 1st pair of switches to destination server : normal (between 1 to 3 ms)
Ping response from 2nd pair of switches to source server: jumping from 3 ms to 100+ ms.
There are no errors or packet drops on any of the above ports; I cannot understand why the ping response is high for connections from the second pair.
I would like to see your first Nexus Pair cross-connected to your 6500 catalyst chassis, in order to use vPC.
Otherwise you may use regular port-channels.
Could you make this design change and look again at your ping times?
I cannot change the design on the first pair of Nexus switches as it is in production. I am aware that using vPC serves no purpose there, as the behaviour is similar to port-channels, and in any case I am not facing the high response time issue on those switches.
The high response time is on the second pair, cross-connected via vPC to the 6500 Catalysts. I can see two root paths in the spanning tree output; is that behaviour normal, or is it the cause of the latency?
You are actually creating peer traffic across the vPC peer-link with this design, which is why I suggested the change: to make forwarding more predictable and reduce the peer-link traffic between the switches.
Hi Lucien, another question regarding FabricPath and vPC+
Do I need separate Portchannels/links between the two CORE/Distribution switches for vPC+ peer link and FabricPath ? Can I use the same link that I have for FabricPath for vPC+ peer link ?
This vPC+ setup will be used as a L2 link for an existing DC. Is this supported topology where I have vPC+ going to two different 3750s at access layer of the existing DC ?
Nexus 7009 at CORE/Distribution have F2 line cards.
Great question. You don't need a separate link between the core/distribution switches for the vPC+ peer-link and FabricPath; they are the same link. The vPC+ peer-link is called a FabricPath core port, so you can use the same link for both FabricPath and the vPC+ peer-link.
If your 3750 switches are in a stack with a cross-stack port-channel, it will be supported. Otherwise, you will not be able to do this with two independent switches, which seems to be the case as you describe it.
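A minimal sketch of the shared peer-link on one Nexus 7000 peer (the domain ID, switch ID, keepalive address, and port-channel number are hypothetical):

```
feature-set fabricpath
feature vpc

vpc domain 10
  fabricpath switch-id 100             ! this turns vPC into vPC+
  peer-keepalive destination 10.0.0.2

! The vPC+ peer-link is also a FabricPath core port
interface port-channel1
  switchport mode fabricpath
  vpc peer-link
```

Note that the peer-link interface is in FabricPath mode rather than a classic Ethernet trunk, which is what lets the same port-channel carry both roles.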
I have another question, Lucien:
How does load balancing with FCoE work?
For classic LAN traffic it is src/dst MAC (the default), as we have known it for years. What is it for FCoE?
On the Nexus 5000, the default load balancing mechanism on the LACP port-channel is source-destination. If we leave it in this state, all the FCoE traffic will take the same link in the port channel when the Nexus 5000 is forwarding frames to the upstream device.
To enable the Nexus 5000 to load balance using exchange IDs, we configure it for 'source-dest-port' load balancing.
Nexus5000(config)# port-channel load-balance ethernet ?
destination-ip Destination IP address
destination-mac Destination MAC address
destination-port Destination TCP/UDP port
source-dest-ip Source & Destination IP address --------> SID/DID
source-dest-mac Source & Destination MAC address
source-dest-port Source & Destination TCP/UDP port --------> SID/DID/OXID
source-ip Source IP address
source-mac Source MAC address
source-port Source TCP/UDP port
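To apply the exchange-based hashing described above, and then verify it:

```
Nexus5000(config)# port-channel load-balance ethernet source-dest-port
Nexus5000(config)# exit
Nexus5000# show port-channel load-balance
```

With source-dest-port, FCoE frames hash on SID/DID/OXID, so different exchanges can be spread across the member links of the port-channel instead of all taking the same link.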
I'm deploying 5596UPs in a vPC design, with each of my servers connecting via 4 x 10Gb links (2 iSCSI SAN / 2 LAN). Some of these servers are ESX hosts which are part of a cluster. I have been reading up on VN-Link, VN-Tag, and Adapter-FEX, and I understand some of the benefits of using a 1000v and VN-Tag-capable NICs over the ESX vSwitch would be easier management and monitoring, but are there any performance improvements with this, or anything else worth noting?
The performance between the VMware dVS and the 1000v should be similar. You get many extra features with the Nexus 1000v that you don't get with the VMware dVS.
If you need higher performance, you can look at the Adapter-FEX solution, as this will give you up to 30% improved performance compared to software switches (so depending on the workload type your VMs run, you may benefit from Adapter-FEX technology).
We are connecting 5548s and 5596s to ESXi hosts and cannot get multiple VLANs to trunk. Is this a Nexus issue or on the server/VMware side?
Using trunks is the most common practice when connecting ESXi hosts to switches. Take a look at the port configuration and make sure you have 'switchport mode trunk'. Then take a look at show interface e1/x (where x is the port going to your ESXi host) and make sure the state says 'connected'; if not, the reason will be indicated. Make sure you configure the list of VLANs on the ESXi side as well, and you should be good to go.
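A minimal host-facing port configuration would look like this (the port number and VLAN list are hypothetical; match the VLANs to the port groups defined on the ESXi side):

```
interface ethernet 1/10
  description ESXi-host-vmnic0
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  spanning-tree port type edge trunk
  no shutdown
```

If the allowed VLAN list on the switch and the VLAN IDs on the ESXi port groups don't match, traffic for the missing VLANs will silently drop, which is the most common cause of this symptom.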