ASK THE EXPERT - CISCO NEXUS 5000 AND 2000 SERIES

ciscomoderator
Community Manager

Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to get information on data center switching with the Cisco Nexus 5000 Series Switches and Nexus 2000 Series Fabric Extenders from Lucien Avramov. Lucien is a Customer Support Engineer at the Cisco Technical Assistance Center. He currently works in the data center switching team supporting customers on the Cisco Nexus 5000 and 2000. He was previously a technical leader within the network management team. Lucien holds a bachelor's degree in general engineering and a master's degree in computer science from Ecole des Mines d'Ales. He also holds the following certifications: CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183.

Remember to use the rating system to let Lucien know if you have received an adequate response.

Lucien might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through October 22, 2010. Visit this forum often to view responses to your questions and the questions of other community members.

41 Replies

areanetsenato
Level 1

Hi,

I would like some information about CFS (Cisco Fabric Services).

We're installing 4 Nexus 5000s and 12 N2K-C2248TP fabric extenders, and configuration synchronization could be a problem.

We're using version 4.2(1)N1(1), but the documentation is unclear to me. Do you have any examples or supplemental docs?

Thanks,

Marco

Very good point. CFS is used on the N5K for the vPC consistency check at this point. You can see it if you run show vpc consistency-parameters global.

As of today, with 4.2 code, your configuration needs to be kept the same manually on the N5Ks or N2Ks that are vPC'ed. In our next release, coming out very soon, there will be a feature called config-sync that will take advantage of CFS to push configuration between the two N5Ks that form a vPC pair.
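As a rough sketch (the profile name and peer address below are placeholders, and the exact syntax may vary slightly by release), the config-sync workflow on each vPC peer will look something like this:

config terminal
  cfs ipv4 distribute                   {config-sync exchanges the profile over CFS-over-IP on mgmt0}

config sync
  switch-profile vpc-demo
    sync-peers destination 10.0.0.2     {mgmt0 address of the other vPC peer}
    interface port-channel 100
      switchport mode trunk
    commit                              {verifies and pushes the configuration to both peers}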

Hello,

We are utilizing a Nexus 5010 running 4.1(3)N2(1a). It is hosting both 10 Gb Ethernet and FC.

Mod Ports  Module-Type                      Model                  Status
--- -----  -------------------------------- ---------------------- ------------
1    20     20x10GE/Supervisor               N5K-C5010P-BF-SUP      active *
2    8      8x1/2/4G FC Module               N5K-M1008              ok

On the few ports that specifically service only FCoE, can we allocate 100% bandwidth to FCoE?

The default appears to be 50%, but that seems like a waste on these specific ports.

Is it possible to do per-port FCoE queuing? Any issues with putting FCoE at 100%?

Regards.

You can certainly change that, but to affect traffic ingress/egress it needs to be a queuing QoS policy, and it will be a global configuration for all the interfaces.

Look at this document:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/qos/421_n2_1/Cisco_Nexus_5000_Series_NX-OS_Quality_of_Service_Configuration_Guide_Rel_421_N2_1_chapter3.html#task_1135199

You could configure the following, but it would apply to all your traffic, so I wouldn't recommend it:

class type queuing class-fcoe
  bandwidth percent 100
class type queuing class-default
  bandwidth percent 0

You can check your queuing configuration with show queuing interface e1/1 for example.
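For illustration only, the full global configuration would look roughly like this (the policy-map name is a placeholder; check the QoS configuration guide for your release for the exact syntax):

policy-map type queuing all-fcoe
  class type queuing class-fcoe
    bandwidth percent 100
  class type queuing class-default
    bandwidth percent 0

system qos
  service-policy type queuing input all-fcoe
  service-policy type queuing output all-fcoe

Keep in mind this applies to every interface on the switch, which is why I would not recommend it if any of your ports still carry regular Ethernet traffic.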

Hi,

I've got a simple question:

Is there a command to configure the logging server source-interface?

I need to configure a VLAN Interface as the source-interface for logging, no VRF!

Just like "snmp-server source-interface trap VlanXXX" but for "logging server X.X.X.X ..."

Nexus 5010, version 4.2(1)N2(1)

Thanks,

Thomas

Hi Thomas,

On NX-OS 4.1(3)N2(1) and later you can use an SVI on the N5K. In fact, you are already using one as the snmp-server source interface.

feature interface-vlan

interface vlan A
  ip address <address>/<mask>

{vlan A needs to be trunked upstream to your L3 device}

vrf context default
  ip route <destination network or host of your syslog server> <vlan A gateway>

logging server x.y.z.w <log level> use-vrf default

Thanks

Hi okarakok,

Thanks for your reply, I'll try your solution.

So at this time there's no solution for defining a logging server source-interface completely _without_ a VRF?

Like with "snmp-server source-interface trap VlanXXX" or "ip tacacs source-interface VlanXXX", where no VRF is used.

Maybe it's possible in future NX-OS releases?

Regards,

Thomas

Hi Thomas,

I am afraid there is no other solution in the existing NX-OS releases.

I believe that might be possible in NX-OS 5.0(3) but it is not committed yet.

Thanks,

Ozden

Under 4.2 code, you can specify logging server x.x.x.x without specifying the VRF at the end; that will use the management VRF. Otherwise, as per the configuration guide, you need to specify the VRF (it will be the "default" VRF, since there are only two VRFs on the 5000).

Selecting the VRF or the interface should give you the same result: provided you have proper routing under that VRF, traffic will leave via the interface of your choice. You can verify with show ip route x.x.x.x vrf default and show ip route x.x.x.x vrf management.
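For example (the server address and severity level below are placeholders):

logging server 192.0.2.50 6                  {uses the management vrf on 4.2 code}

or, to send in-band via the default VRF (for example through your SVI):

logging server 192.0.2.50 6 use-vrf default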

I have a question about the 5000 series and a Windows NLB cluster (load-balancing cluster). The issue I am running into is that the Windows NLB cluster uses a multicast MAC address with a non-multicast IP address. With any other Cisco device in my current environment we can work around this by adding a static ARP and MAC entry on our core switches (6513s). However, when I try to add a static MAC entry on the Nexus boxes I receive the following error:

"ERROR: mac address is an ip multicast mac"

I am wondering if there is a workaround, or if not, whether this will be addressed in any upcoming NX-OS releases?

Thanks in advance for the help!

I'm glad you asked.

There is an enhancement request that will address this:

CSCtd22110 - Need support for static multicast MAC entries on Nexus 5000.
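For reference, this is roughly the workaround that works today on the Catalyst 6500 side, and the equivalent command the Nexus 5000 currently rejects (the addresses below are placeholders):

{Catalyst 6500, IOS}
arp 10.10.10.5 0300.5e11.1111 arpa
mac address-table static 0300.5e11.1111 vlan 10 interface GigabitEthernet1/1

{Nexus 5000, NX-OS - this is the command that returns the error above today}
mac address-table static 0300.5e11.1111 vlan 10 interface ethernet 1/1

Once the enhancement is implemented, the NX-OS command above should be accepted.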

tiwang
Level 3

Hi out there

We have implemented a VMware environment around a Dell blade center and two sets of redundant Nexus switches: a pair of Nexus 5010s for access to iSCSI storage (not FCoE), and a pair of Nexus 5020s with a dual set of Nexus 2148s in our data center.

We have run into a few challenges here. First, a simple one: I discovered that I cannot enable jumbo frames on the interfaces of the 5010s handling iSCSI; this has to be done through a policy map (the published way is to define a QoS policy). But first, is this necessary at all? As far as I can see, jumbo frames are transmitted on the port channel between the 5010 boxes without any need to modify the default policy map.

Here is output from one of the 5020s where I haven't defined the MTU in the default QoS policy map. As far as I can see, all ports running in trunk mode will process jumbo frames? But if I ping with DF set and a packet of 2K, the packets are discarded.

SW5020-01# sh int | inclu jumb
27215058 jumbo packets  0 storm suppression packets
27985468 jumbo packets
2558685 jumbo packets  0 storm suppression packets
15912 jumbo packets
0 jumbo packets  0 storm suppression packets
0 jumbo packets
    ..

0 jumbo packets
0 jumbo packets  0 storm suppression packets
15337 jumbo packets
2027659 jumbo packets  0 storm suppression packets
7744804 jumbo packets
11939986 jumbo packets  0 storm suppression packets
17561369 jumbo packets

So, how can I verify whether the policy map I have defined is up and running? And is it necessary at all, since the box is in fact processing jumbo frames?

Second, we are running Microsoft WLBS on Windows 2003 servers in unicast mode. The Dell servers are equipped with 10 Gb pass-through modules, and we have implemented the Nexus 1000V distributed switch on the VMware 4.1 servers. The Nexus 1000V VSM is reporting a lot of dropped packets, but I cannot find any explanation of what is causing dropped packets on the 1000V; I expect it is related to WLBS running in unicast mode.

Here is some output from the 1000V for a port profile that is assigned to the NLB interface on the VM:

Nexus1000V# sh int veth23
Vethernet23 is up
    Port description is ts-12a1, Network Adapter 1
Hardware is Virtual, address is 02bf.ac15.9513
    Owner is VM "ts-12a1", adapter is Network Adapter 1
Active on module 12
VMware DVS port 1089
Port-Profile is ASP_NLB
Port mode is access
5 minute input rate 1466 bytes/second, 0 packets/second
5 minute output rate 1383193 bytes/second, 10049 packets/second
Rx
77329 Input Packets 224 Unicast Packets
0 Multicast Packets 77105 Broadcast Packets
104532694 Bytes
Tx
246967099 Output Packets 239170089 Unicast Packets
151048 Multicast Packets 7645962 Broadcast Packets 87406048 Flood Packets
41291846309 Bytes
0 Input Packet Drops 7251473 Output Packet Drops

Nexus1000V#

How can I identify these dropped packets? And do you have any suggestions for running WLBS in unicast mode in the Nexus 1000V environment?

best regards /ti

Regarding jumbo MTU: it is not enabled by default on the 5000.

I explain here, step by step, how to configure and check your jumbo MTU configuration:

https://supportforums.cisco.com/videos/1215

Your question is interesting. If you have not configured jumbo and you see the counters increment, that is due to the cut-through hardware architecture of the Nexus 5000 switching. Frames are fragmented on egress and, since these are hardware switches, very often a frame ends up a little over the 1500-byte mark (1530 or 1540 bytes, for example), so the jumbo counter increments. This is not enough to conclude that your switch is actually processing jumbo frames; you need to look at show queuing interface, as shown in the video.
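For reference, on 4.2 code the jumbo configuration looks roughly like this (the policy-map name is a placeholder; the video above walks through the exact steps and the verification):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216

system qos
  service-policy type network-qos jumbo

You can then confirm the MTU actually programmed in hardware with show queuing interface ethernet 1/1.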

For the Nexus 1000V, I would suggest that you SPAN the traffic and look at a packet capture to find out whether there are retransmissions, etc.
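As a rough example, a local SPAN session on the 1000V would look something like this (the interface numbers are placeholders; the destination would be the vNIC of a capture/sniffer VM):

monitor session 1
  source interface vethernet 23 both
  destination interface vethernet 50
  no shut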

David Williams
Level 1

When our sales engineers were here pitching UCS and the Nexus, we were told that by enabling vPC on the Nexus we lose 50% of the available VLANs. When we are only talking about 500 VLANs, that is pretty significant. What is confusing us is that we cannot find anything in any Cisco documents to indicate that this is valid. Perhaps we misunderstood, perhaps the sales engineer misspoke. I was hoping you could shed some light on this.

Thanks,

Dave

I'm not aware of such a limitation.

For example, on the Nexus 5000, 2248, and 2232 you can have up to 507 VLANs. That number should not be cut in half by vPC; it should be unrelated.
