With Prashanth Krishnappa
Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn about how to troubleshoot the Nexus 5000/2000 series switches.
Prashanth Krishnappa is an escalation engineer for data center switching at the Cisco Technical Assistance Center in Research Triangle Park, North Carolina. His current responsibilities include escalations in which he troubleshoots complex issues related to the Cisco Catalyst, Nexus, and MDS product lines, as well as providing training and authoring documentation. He joined Cisco in 2000 as an engineer in the Technical Assistance Center. He holds a bachelor's degree in electronics and communication engineering from Bangalore University, India, and a master's degree in electrical engineering from Wichita State University, Kansas. He also holds CCIE certification (#18057).
Remember to use the rating system to let Prashanth know if you have received an adequate response.
Prashanth might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through June 29, 2012. Visit this forum often to view responses to your questions and the questions of other community members.
I have migrated my FCoE setup from a pair of Nexus 5020s to a pair of Nexus 5500s, and my vFCs are not coming up. The configurations have been triple-checked and they are identical. Can you help?
Unlike the 50x0, on the 5500s prior to 5.1(3)N1(1), the FCoE queues are not created by default when you enable "feature fcoe".
Make sure you have the following policies under "system qos":
service-policy type queuing input fcoe-default-in-policy
service-policy type queuing output fcoe-default-out-policy
service-policy type qos input fcoe-default-in-policy
service-policy type network-qos fcoe-default-nq-policy
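For reference, here is a sketch of how those policies are applied; this assumes the default fcoe-default-* policy names are present on your switch after enabling "feature fcoe" (verify the names against your release's QoS configuration guide):

system qos
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type qos input fcoe-default-in-policy
  service-policy type network-qos fcoe-default-nq-policy

You can then confirm the FCoE no-drop class is active with "show queuing interface" on the affected ports.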
I have a couple of Nexus 5Ks that are vPC peers. I linked a 3750 switch to this pair with a trunk port, and I've configured the same VLANs on the vPC peers as on the 3750.
I have EIGRP on the 3750 and I want to migrate all routing from the 3750 to the new Nexus switches. The 3750 can see the Nexus switches' HSRP IP address on one of the VLANs, and vice versa.
Can I use this VLAN interface on both sides for the EIGRP neighborship, or do I have to create an L3 interface on the Nexus switches instead of the existing trunk port?
I believe that you have the same restrictions on an N5K as on an N7K, but check the design guides on cisco.com. This is from the N7K guide.
Layer 3 and vPC: Guidelines and Restrictions
Attaching an L3 device (a router, or a firewall configured in routed mode, for instance) to a vPC domain using a vPC is not a supported design because of the vPC loop-avoidance rule.
To connect an L3 device to a vPC domain, simply use L3 links from the L3 device to each vPC peer device.
The L3 device will be able to initiate L3 routing protocol adjacencies with both vPC peer devices.
One or multiple L3 links can be used to connect the L3 device to each vPC peer device. Nexus 7000 Series switches support L3 Equal-Cost Multipathing (ECMP) with up to 16 hardware load-sharing paths per prefix. Traffic from a vPC peer device to the L3 device can be load-balanced across all the L3 links interconnecting the two devices.
Using Layer 3 ECMP on the L3 device can effectively use all Layer 3 links from this device to the vPC domain. Traffic from the L3 device to the vPC domain (i.e., vPC peer device 1 and vPC peer device 2) can be load-balanced across all the L3 links interconnecting the two entities.
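For the EIGRP question above, a minimal sketch of the recommended approach would therefore be a dedicated routed link from each Nexus 5500 to the 3750, rather than peering over the vPC VLAN. The interface number, addressing, and AS number below are made up for illustration:

feature eigrp
router eigrp 100
interface Ethernet1/10
  no switchport
  ip address 10.1.1.1/30
  ip router eigrp 100

The 3750 side would use a matching routed port (or SVI) in the same /30, with one such link per Nexus peer.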
Thanks for having this session. The first question I have is whether jumbo MTU is supported across the vPC peer link on N5Ks. Below is the output when I tried to configure this on an N7K, but I presume the N5K may have the same symptom. Thanks.
RCS-WG1(config-if)# int po10
RCS-WG1(config-if)# mtu 9216
ERROR: port-channel10: Cannot configure port MTU on Peer-Link.
RCS-WG1(config-if)# sh int po10
port-channel10 is up
Hardware: Port-Channel, address: 70ca.9bf8.eef5 (bia 70ca.9bf8.eef5)
Description: *** vPC Peer Link ***
MTU 1500 bytes, BW 20000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Port mode is trunk
full-duplex, 10 Gb/s
Input flow-control is off, output flow-control is off
Switchport monitor is off
EtherType is 0x8100
Members in this channel: Eth3/1, Eth3/2
Last clearing of "show interface" counters never
30 seconds input rate 23120 bits/sec, 34 packets/sec
30 seconds output rate 23096 bits/sec, 34 packets/sec
Load-Interval #2: 5 minute (300 seconds)
input rate 23.05 Kbps, 32 pps; output rate 23.06 Kbps, 31 pps
12 unicast packets 60639 multicast packets 6 broadcast packets
60657 input packets 5024553 bytes
0 jumbo packets 0 storm suppression packets
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input discard
0 Rx pause
12 unicast packets 60688 multicast packets 340 broadcast packets
61040 output packets 5568549 bytes
0 jumbo packets
0 output error 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 0 output discard
0 Tx pause
2 interface resets
Jumbo MTU on the Nexus 5000 is configured per QoS class group using QoS (network-qos) policies, not per interface. When applied, the QoS setting applies to all Ethernet interfaces, including the peer link.
Note that when you are configuring jumbo frames on switches configured for FCoE, use the following:
policy-map type network-qos fcoe+jumbo-policy
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9216
    multicast-optimize
system qos
  service-policy type network-qos fcoe+jumbo-policy
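Once applied, the per-class MTU can be checked on any port; for example (the port number here is arbitrary):

show queuing interface ethernet 1/1

The output should reflect an MTU of 9216 for class-default and 2158 for class-fcoe.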
The second question that I have is regarding the effectiveness of storm control on the Nexus platforms, particularly the N5K and N7K.
The concern that I have is the storm-control falling-threshold capability found on the mid-range Catalyst platforms (e.g., C3750), on which the port blocks traffic when the rising threshold is reached. The port remains blocked until the traffic rate drops below the falling threshold (if one is specified) and then resumes normal forwarding. The graph below is excerpted from O'Reilly's Network Warrior.
The N7K and N5K documentation never mentions anything about a falling-threshold mechanism, so the graph looks like this (as taken from the N7K config guide).
In the event of broadcast storms, theoretically the graph looks like this.
This means only 50% of the broadcast packets will be suppressed or dropped. Assuming 300,000 broadcast packets hit the SVI within a second, 150,000 will still reach the SVI, which is often sufficient to cause high CPU; the switch stops responding to UDLD from its peers, peer devices block ports due to UDLD, and then comes a disastrous network meltdown.
Appreciate your comment upon this. Thanks.
In addition to any user-configured storm control, the SVI/CPU is also protected by default control-plane policing (CoPP). In the lab, I tested sending 10-Gig line-rate broadcast traffic at a 5500 and noticed that the switch CPU and other control-plane protocols were not affected.
F340.24.10-5548-1# sh policy-map interface control-plane class copp-system-class-default
service-policy input: copp-system-policy-default
class-map copp-system-class-default (match-any)
match protocol default
police cir 2048 kbps , bc 6400000 bytes
conformed 45941275 bytes; action: transmit
violated 149875654008 bytes; action: drop<<<---------
I have a few questions about the Nexus 5000s and 5500s.
Hardware port channel resources-
According to the document below, 16 hardware port channels is the limit on the 5020 and 5010 switches. Do the 5548s with the Layer 3 daughter card have any kind of limitation on hardware port-channel resources? Does a FEX (2248 or 2232) dual-homed to a 5548 with L3 consume a hardware port channel?
Can you please confirm the designs below? The 5548s are running 5.1(3)N1(1); the 5010s are on 5.0(3)N2(1).
The 55xx supports 48 local port channels, but only port channels with more than one member interface count against this limit of 48. Since your FEX only has one interface per Nexus 5K, it does not use up a resource.
Regarding your topologies, you are referring to Enhanced vPC (E-vPC). E-vPC is only supported on the 55xx platforms, so your second topology is not supported, since it uses a Nexus 5010 as the parent switch for the FEX.
Since the number of FEXs supported by a 5548 with the L3 card is eight, even if I use two 10-Gig links between a FEX and each 5548, this will only consume 8 hardware port channels out of the available 48.
Is there any limit on the 2248/2232 FEXs? Will a port channel on a FEX count against the limit of the 5500?
"Since the number of FEXs supported by 5548 with L3 card is eight"
"Up to 24 fabric extenders per Cisco Nexus 5548P, 5548UP, and 5596UP switch (8 fabric extenders for Layer 3 configurations)" - This line was taken from the data sheet of the B22 (Table 2).
"Up to 24 fabric extenders per Cisco Nexus 5548P, 5548UP, 5596UP switch (16 fabric extenders for L3 configurations): up to 1152 Gigabit Ethernet servers and 768 10 Gigabit Ethernet servers per switch" - This line was taken from the data sheet of the Nexus 2000 (Table 2).
FEX support with Layer 3 has now been increased to 16.
I just completed Nexus training so I'm eager to learn from the experts here.
Good responses Prashanth.
Since we are on this topic, are we able to turn the L3 capabilities of the N5548UP "on" or "off"?
Most features, including the L3 features, are enabled using the feature command. Here is output from a lab switch:
5548-1(config)# feature ?
bgp Enable/Disable Border Gateway Protocol (BGP)
cts Enable/Disable CTS
dhcp Enable/Disable DHCP Snooping
dot1x Enable/Disable dot1x
eigrp Enable/Disable Enhanced Interior Gateway Routing Protocol (EIGRP)
fcoe Enable/Disable FCoE/FC feature
fcoe-npv Enable/Disable FCoE NPV feature
fex Enable/Disable FEX
flexlink Enable/Disable Flexlink
hsrp Enable/Disable Hot Standby Router Protocol (HSRP)
http-server Enable/Disable http-server
interface-vlan Enable/Disable interface vlan
lacp Enable/Disable LACP
msdp Enable/Disable Multicast Source Discovery Protocol (MSDP)
npiv Nx port Id Virtualization (NPIV) feature enable
npv Enable/Disable FC N_port Virtualizer
ospf Enable/Disable Open Shortest Path First Protocol (OSPF)
pim Enable/Disable Protocol Independent Multicast (PIM)
port-security Enable/Disable port-security
private-vlan Enable/Disable private-vlan
privilege Enable/Disable IOS type privilege level support
rip Enable/Disable Routing Information Protocol (RIP)
ssh Enable/Disable ssh
tacacs+ Enable/Disable tacacs+
telnet Enable/Disable telnet
udld Enable/Disable UDLD
vpc Enable/Disable VPC (Virtual Port Channel)
vrrp Enable/Disable Virtual Router Redundancy Protocol (VRRP)
vtp Enable/Disable Vlan Trunking Protocol (VTP)
So if a feature is not needed, you can turn it off using "no feature".
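For example, here is a hypothetical session (EIGRP chosen arbitrarily; it requires the L3 license):

5548-1(config)# feature eigrp
5548-1(config)# no feature eigrp
5548-1# show feature

"show feature" lists every feature with its enabled/disabled state, so you can confirm the result.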
Thank you for your reply. Does that mean that if I do not turn on any L3 features on the Nexus 5500, I will be able to scale up to 24 dual-homed FEXs?
P.S. Yup, I have the L3 daughterboard installed.
The switch is considered Layer 3 only when you install the L3 licenses. So if you do not need the L3 features at this time and want to use it as an L2 switch with up to 24 FEXs, you can uninstall all the L3 licenses.
Correct me if I'm wrong: what I do to be able to scale up to 24 FEXs is back up my licenses, then uninstall the L3 license. And if one day in the future I need the L3 features, I just restore my licenses from the backup that I've made?
By the way, how do I restore backed-up licenses on the N5500? I can't seem to find any guides on it.
Thanks in advance.
Just uninstalling the license using the "clear license" command should be enough. Here is an example from my lab switch.
5548-1# clear license ?
WORD License file to be uninstalled
But if you want to back up all the license files, you can do that too. Here is how you do it.
1) copy license bootflash:file-name.tar
2) Then issue "clear license" as above.
If you need to reinstall the license at some point in the future:
1) tar extract bootflash:file-name.tar
2) Install the license back using the "install license bootflash:" command.
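Putting the steps together as one hypothetical session (the file names here are made up for illustration):

5548-1# copy license bootflash:license-backup.tar
5548-1# clear license ENTERPRISE_PKG.lic
5548-1# tar extract bootflash:license-backup.tar
5548-1# install license bootflash:ENTERPRISE_PKG.lic

Running "show license usage" before and after confirms which license packages are installed.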
Question about Enhanced vPC-
"The dual-homed FEX topology can also be deployed for servers that have multiple NICs but do not support 802.3ad."
Does the above statement mean we can't use LACP to dual-home a server to dual-homed FEXs?
Here is one simple question: if I have a Nexus 5K connected to another Cisco device that also supports vPC, will CDP work if the port on one side belongs to a vPC while on the other side it is configured as a simple access port?
Is there any Nexus Switch Emulator/Manager available which can be used for getting switch statistics?
Not sure, but try DCNM (Cisco Data Center Network Manager).
I'm currently facing a similar problem. Did you get an answer or solve this issue?
The Cisco answer is disabling the Layer 3 features, but do we need to remove the license and/or the Layer 3 card?
Newer versions of NX-OS support up to 16 E-vPCs (my customer uses 10 FEXs so far), hence I did not attempt to remove the licenses. I have yet to try removing the license, as my project timeline grew shorter and I was unable to test any further. If you have the time, do give Prashanth's steps a try and let us know. If I get a chance to deal with an N55XX with an L3 module, I will attempt to try this out.
Is this post still active?
I would like to ask further about the L3 modules. Let's say I'm performing an NX-OS upgrade on a Nexus 5500 Series switch that is equipped with an L3 module. I have the L3 license installed; however, I do not use any L3 capabilities at all.
Whenever I perform an ISSU check for the upgrade, the output is always "disruptive", which I suspect is due to the L3 module being installed. I know for a fact that the same configuration on a Nexus 5500 unit without an L3 module will be non-disruptive. Can I safely tell the customer that their production network will not encounter any downtime if I perform the NX-OS upgrade?
Thanks in advance!