

Aug 13th, 2010

Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to learn how to troubleshoot the Nexus 5000 & 2000 series with Lucien Avramov. Lucien Avramov is a Customer Support Engineer at the Cisco Technical Assistance Center. He currently works in the data center switching team supporting customers on the Cisco Nexus 5000 and 2000. He was previously a technical leader within the network management team. Lucien holds a bachelor's degree in general engineering and a master's degree in computer science from Ecole des Mines d'Ales. He also holds the following certifications: CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183.

Remember to use the rating system to let Lucien know if you have received an adequate response.

Lucien might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through August 30, 2010. Visit this forum often to view responses to your questions and the questions of other community members.

balsheikh Sat, 08/14/2010 - 15:01

Hi Lucien,

I have multiple concerns and appreciate your comment in this regard.

How can the Nexus 5k, along with the fabric extender 2k, approve STP in a data center?

Is there any plan on the roadmap to have the Nexus 2K support FCoE and the Nexus 5k become an L3 switch?



Lucien Avramov Mon, 08/16/2010 - 09:32

1. I'm not sure I understand what you mean by approve Spanning Tree Protocol.

There are two spanning tree features you can configure on the fabric extender: spanning-tree port type edge and spanning-tree port type edge trunk (if you have a trunk to an ESX server for example).
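As a quick sketch of those two options (the interface numbers are hypothetical FEX host ports):

interface ethernet 100/1/1
  switchport mode access
  spanning-tree port type edge

interface ethernet 100/1/2
  switchport mode trunk
  spanning-tree port type edge trunk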

As far as how the 5k talks to the 2k for packet forwarding, it's via the VNTag. VNTag is a Network Interface Virtualization (NIV) technology. It allows the fabric extender to act as a data path of the Nexus 5000 for all policy and forwarding. It's added to the packet between the fabric extender and the Nexus 5k. Finally, it's stripped before the packet is sent to hosts.

2. You can have vpc and FCoE on the 2232. Also, there is a plan to have the Nexus 5k to have an L3 feature. Please consult with your Cisco system engineer / account team for the details.

More information on FCoE on the 2232, section Fibre Channel over Ethernet Support:

The datasheets are a good reference for the features supported:

tenaro.gusatu.novici Mon, 08/16/2010 - 11:56

Hi Lucien,

could you please make a quick/short comparison between the Nexus 5k and the 2000?



Lucien Avramov Mon, 08/16/2010 - 12:53

I'm glad you asked.

The Nexus 2000, also called the fabric extender, is an extension of the Nexus 5000. It's configured directly from the Nexus 5000.

Oftentimes it's perceived as a lower-end data center switch because of its name. It's actually not a switch, and it needs a Nexus 5000 to function.

The goal is to provide access to the server farm. As its name, fabric extender, suggests, you can see it as a NIC extender between your server NICs and the Nexus 5000. It has up to 48 ports and there are 3 different kinds of Nexus 2000: the 2148 supporting 1 Gb, the 2248 supporting 100 Mb/1 Gb, and the 2232 supporting 10 Gb to the NIC.

The Nexus 2000 gives you more physical ports to the servers, avoiding the use of the dedicated 10 Gb ports on the Nexus 5000 for 1 Gb servers, for instance. It also simplifies the cabling: you can have server racks with a Nexus 2000 on top going to Nexus 5000s.

Product page (look at the video):

Here are the datasheets for each:

White paper:

tenaro.gusatu.novici Tue, 08/17/2010 - 01:36

Thanks for the great (short and precise) answer!

Just to complete this story, could you please confirm (or deny) my assumption: the customer will be able to get all kinds of ports (100/1000/10000 Mbps) thanks to different SFPs, and later, when more ports are needed, the customer can add a Nexus 2000, right?



Lucien Avramov Tue, 08/17/2010 - 09:28

You are getting it, but let me be more precise, as it's not exactly what you said:

-The Nexus 5000 is a cut-through switch with dedicated ports. Each port supports 10 Gb speed. On the 5020, the first 16 ports can be set to a speed of 1 Gb. On the 5010, the first 8 ports can be set to 1 Gb. You can connect 1 Gb servers to those ports.
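For instance, setting one of those ports to 1 Gb is done per interface (the interface number is just an example; on the 5010 it must be one of the first 8 ports, on the 5020 one of the first 16):

switch(config)# interface ethernet 1/1
switch(config-if)# speed 1000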

-If you have 100 Mb servers, you need to use another device than the Nexus 5000. The best would be the Nexus 2248, but another switch such as a Catalyst 3750-GE will work too (it's 1 Gb capable, so it can connect to the Nexus 5000).

-The Nexus 2148 and 2248 have 48 RJ-45 connections for the servers and 4 10 GE ports to the Nexus 5000.

Also, note that you have a good oversubscription rate on the Nexus 2000: for example, the 2148/2248 have 48 1 Gb ports, a total of 48 Gbps, and 4 10 Gb uplink ports to the Nexus 5000, totaling 40 Gbps. This means that economically it's better to use a Nexus 2000 connected to a Nexus 5000: this uses only 4 10 Gb ports and you will be able to connect up to 48 servers instead of 8 on the 5010 / 16 on the 5020.

You may now wonder what SFPs are supported for the Nexus switches, so here is the compatibility matrix; look at the data-center section:

tenaro.gusatu.novici Tue, 08/17/2010 - 01:50

Me again,

if it is not out of scope, could you please compare the Nexus 5000 vs. the Fabric Interconnect 6100? Can we say that one is a subset of the other, or does each device have its own unique features?

For example, I know the 5k can accept 1 Gbps or 10 Gbps connections from servers and also FC connections from storage; it seems to me the same can be achieved on the 6120XP, with appropriate SFPs, of course.



Lucien Avramov Tue, 08/17/2010 - 09:48

The architecture of the 6120 and the Nexus 5000 is similar.

However the use of them is different:

The 6120 / 6140 are dedicated devices for UCS-B chassis. They are configured through the UCS Manager, software that resides on the 6100.

You cannot connect other devices to it, whereas the Nexus 5000 is a switch, and you can connect servers, 1 Gb and 10 Gb switches, or 6100s to it.

tenaro.gusatu.novici Tue, 08/17/2010 - 13:45

This is very important for me and it will be great if you can confirm: storage (using an FC interface) can't be connected directly to the 6120XP, i.e. a Nexus must be used to establish communication between the UCS and the storage?

huangedmc Wed, 08/18/2010 - 08:14


When will the Nexus 2000's support spanning-tree?

If we have a blade enclosure w/ Cisco CGESM or 3100's where 10Gbps isn't required, we'd like the flexibility to uplink to Nexus 2K's.

However, we were told we'd need to filter BPDU's on the blade switches as a workaround; otherwise Nexus 2K's would err-disable the trunk ports.

That's a bit dangerous to run switches w/o STP.


If two servers in the same VLAN need to communicate to each other on the same Nexus 2K FEX, would packets be locally switched between the two ports on the FEX, or would they need to go to the Nexus 5K, and then back down to the FEX?


How come Cisco dropped the support for "write mem" & PAgP ether-channel on Nexus?

It's quite an inconvenience to have to do "copy run start", and since "write" is the command to erase NVRAM, we couldn't create a CLI alias that starts anything w/ "write".

One of the recommendations to avoid split-brain scenario w/ VSS is to use enhanced PAgP on Cat 6K's.

As a result, we'd have inconsistent configurations in the datacenter where PAgP is used on Cat 6K's, and LACP is used on Nexus...

Not a big deal, but we'd like to keep things consistent.


As of today, you can only mesh between servers & FEX's, or between FEX's & 5K's, but not both.

Will there be support to do both in the future?


When FCoE is implemented, how does Nexus handle the priority queue for data traffic when there's congestion?

Obviously if the network is sized & configured properly, this wouldn't be an issue, but this is more a hypothetical question, and also to satisfy our storage team, to make sure we have a true lossless network to support their SAN.

Between the SAN traffic & priority queue, who gets forwarded first?

Hi, can you please compare the use of Flexlinks as opposed to using STP?
What is the roadmap for using a dual administration point for the FEX, i.e. one 2k connected to an active and a redundant N5k? Is it available today, or is it not even on the roadmap?

We have aggregated all our 10/100 links on a 3560 and connected it to the N5k using a TwinGig converter. Is it the right decision? We don't want to use the core (6500 in our case) for a separate leg...

Do we have all CoS features available on the 2k/5k?

What importance does 5k have in the presence of N7k?

Lastly, kindly brief us about the 1k (VM architecture) and transparent migration of services over VM stacks (I don't know if I used the right terms).



Lucien Avramov Wed, 08/18/2010 - 12:16

1. Flexlink is a fast convergence feature that allows one switchport interface to back up another switchport interface. It provides faster convergence than STP. Here are more details about it:

2. You could configure an active / passive FEX by shutting down one of the uplinks to the FEX on the Nexus 5000. If you have 2200s then you can do vPC to the FEX and vPC to the host, which is a far better option than active / passive, since with active / passive you lose uplink bandwidth.
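For reference, a rough sketch of the vPC-to-the-FEX (active / active) option with a 2200 series, assuming FEX number 101 and hypothetical interface numbers; the same configuration goes on each Nexus 5000 of the vPC pair (peer-link and peer-keepalive details omitted):

feature vpc
feature fex
vpc domain 10
  peer-keepalive destination <peer-mgmt-ip>

interface port-channel 101
  switchport mode fex-fabric
  fex associate 101
  vpc 101

interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 101
  channel-group 101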

3. Yes, this is a good choice, except that if you go with the Nexus 2248, you can connect 100 Mb/1 Gb servers to it.

4. Regarding CoS, it is supported and here is the configuration guide:

5. The N7k can provide you with the distribution layer. You can have a fully vPC-redundant HA design. You could, if you like, use a pair of Catalyst 6ks in VSS upstream; however, the Nexus 7000 will provide you much better fabric performance and oversubscription rate.

6. The Cisco 1000v supports VN-Link which provides:

-Policy-based virtual machine connectivity

-Mobile virtual machine security and network policy

-Non-disruptive operational model for your server virtualization and networking teams

Here are a few good documents:

Data Sheets:

White papers:


Lucien Avramov Wed, 08/18/2010 - 11:16


Nexus 2000 is not a switch, it's a fabric extender. Picture it as a hardware extension of your server NICs. For this reason it's not running spanning-tree.

You can however configure the spanning tree type as edge port trunk and connect a server with a trunk to the FEX.


The packets will always go from the Nexus 2000 to the Nexus 5000 and back.


Nexus runs a different operating system, called NX-OS. It's a data center focused feature set for mission critical environments, 24x7 continuous operations, high density / performance ethernet and data center specific link-layer types.

It is different, so it does have advantages, such as being able to run any command from anywhere in configuration mode, and it does miss certain commands such as write mem.

However, as a long-time IOS user, I can tell you that even write mem was too long to type, and using 'wr' was faster.

Good news is that you can actually create an alias for wr on NX-OS:

SJ-SV-N5K-3(config)# cli alias name wr copy r s

SJ-SV-N5K-3(config)# wr

[########################################] 100%

Regarding PAgP, as you said, it's not supported at this point and LACP is the recommended protocol. I don't have more information as to why it was decided to proceed this way. My guess is that LACP was what was first demanded by the industry, or more widely deployed in data centers.


Yes it's in the works. I would suggest you to contact your system engineer / account team for the details and the timelines.


FCoE traffic is marked on the Nexus 5000 with a CoS of 3. It's allocated a separate priority queue and buffers. It's considered high priority traffic by the switch and takes precedence over the rest of the traffic when congestion occurs.

pfitzatsterilite Fri, 08/20/2010 - 09:00


I see you listed the compatibility link for 1Gb transceivers in this blog.

I bought the GLC-T-AX SFP (Axiom flavor) for the Nexus 5020 - this is for a trunk link back to my 3750 (CAT5) switch port.

I cannot get the link to come up; there are no link lights, no indication of any kind that the two switches see each other.

Do you know if anyone else is having trouble using these Axiom SFPs?   ...or am I missing something?



Lucien Avramov Fri, 08/20/2010 - 09:35

Nexus is very sensitive to which SFPs you are using.

You need to use Cisco-branded SFPs.

The compatibility guide for 1 Gb (I listed only the 10 Gb earlier) is here:

You can see that it mentions GLC-T, not GLC-T-AX.

If you do a show interface or a show interface brief on your Nexus CLI, it will tell you the reason the link is down, "SFP validation failed" for example.

Also, I will add that on the 5010, only the first 8 ports can be set down to a speed of 1000, and on the 5020, the first 16 ports.

pfitzatsterilite Fri, 08/20/2010 - 10:03

Thanks for replying so quickly. I do see that the -AX flavor is not listed, but the switch does recognize it... see below.

Do you see anything that would indicate that it is not supported?  I've also opened a TAC case to verify either way!

shonexus01# sho int ether1/1 transceiver details
    sfp is present
    name is OEM
    part number is GLC-T-AX
    revision is 1.0
    serial number is MGT801992
    nominal bitrate is 1300 MBits/sec
    cisco id is --
    cisco extended id number is 4

    Invalid calibration


Thanks - hopefully, others looking to purchase 1Gb SFPs read this note if it turns out that these SFPs from Axiom are not supported!!


Lucien Avramov Fri, 08/20/2010 - 10:14

The problem is that your SFP is not CISCO, it's OEM.

A proper example of output would be:

show int e1/1 transceiver details

    sfp is present
    type is 10Gbase-(unknown)
    part number is 74752-9025
    revision is A
    serial number is MOC12432433
    nominal bitrate is 12000 MBits/sec
    Link length supported for copper is 1 m(s)
    cisco id is --
    cisco extended id number is 4

Lucien Avramov Fri, 08/20/2010 - 12:10

I forgot to include an example for 1 Gb with the GLC-T (again, the GLC-T-AX is not a Cisco part):

show int e1/1 transceiver details


    sfp is present

    name is CISCO-AVAGO    

    type is 10Gbase-(unknown)

    part number is ABCU-5710RZ-CS2

    revision is    

    serial number is AGM120527BX    

    nominal bitrate is 1300 MBits/sec

    Link length supported for copper is 100 m(s)

    cisco id is --

    cisco extended id number is 4

Pavel Dimow Sun, 08/22/2010 - 13:22

Hello Lucien,

We are trying to migrate some C6500 QoS configuration to the Nexus 5000. It's a typical scenario when migrating from a standalone C6500 to VSS as distribution with N5K and FEX 2148 as the access layer. However, we have some problems, mostly regarding QoS.

The first problem is that we can't perform packet DSCP marking (yes, we can classify), but instead we need to use CoS marking.

Q1: Are there any plans in the near future to support DSCP marking on the N5K?

The other problem is that the Nexus is lacking policing capabilities and the documentation is not clear on this matter.

Q2: For example, is there any way we can achieve something like

police cir 10000000 bc 4000 be 4000 conform-action set-dscp-transmit af42 exceed-action policed-dscp-transmit violate-action policed-dscp-transmit

I see that we can guarantee BW, but I can't find any info on what happens when congestion occurs.

Q3: If we don't use, and don't plan to use, FCoE at all, should we just "reserve" 0% of BW for FCoE traffic? If not, what's the minimum we can reserve for the fcoe class?

class  type  queuing  class-fcoe

bandwidth  percent  0

Q4: Regarding Q2, is it possible to remark traffic that exceeds the CIR? We can use the bandwidth command for the CIR, but I can't see anything else we can use for remarking.

Thank you

Lucien Avramov Mon, 08/23/2010 - 11:53
Q1: The 2148T hardware can only support CoS-based traffic classification.
You can still configure ACL-based classification on the 2148T interfaces (including DSCP); the actual classification will occur on the Nexus 5000.
I don't know the specifics about the roadmap for future DSCP support; I advise you to contact your Cisco System Engineer / Account team to find out more. Also, note that the 2148T supports two classes of service and 2 queues. The 2200 series (2224, 2232, 2248) support 6 hardware queues.

Q2: You can only do CoS marking, not DSCP, so there is no way to police with dscp either.

There are 3 policy types:
-qos: defines traffic classification rules. Attach point is system qos or an ingress interface.
-queuing: sets the priority queue and deficit weighted round robin. Attach point is system qos, or an egress or ingress interface.
-network-qos: system class type (drop / no-drop), MTU per class of service, buffer size, marking. Attach point is system qos.

The service policy attached under the interface is preferred when the same type of service policy is attached at both system qos and the interface.
Qos and network-qos policy-maps are required to create new system classes.

As indicated above, network-qos policy is used for packet marking.

When congestion occurs, depending on the policy configuration:
-PAUSE the upstream transmitter for lossless traffic
-Tail drop for regular traffic when the buffer is exhausted
Priority flow control (PFC) or 802.3x PAUSE can be deployed to ensure lossless delivery for applications that cannot tolerate packet loss.
A buffer management module monitors buffer usage for the no-drop classes of service. It signals the MAC to generate PFC or PAUSE when buffer usage crosses a threshold.

The FCoE traffic is assigned to class-fcoe which is a no-drop system class.
The other classes of service by default have normal drop behavior (tail drop) and can be configured as no-drop.

Q3: Exactly, you can use bandwidth percent 0 for the fcoe class if you don't plan on using it.

Q4: No remarking due to platform limitations

I want to use your question here to explain more about the QoS on Nexus 5000/2000 with configuration examples:

The service policy configured under “system qos” will be populated from N5k to FEX only when all the matching criteria are “match cos”.
If there are some other match clauses, such as match dscp, or match ip access-group in the qos policy-map the FEX won’t accept the service policy and all the traffic will be placed into the default queue.

For the ingress traffic (from server to network direction), if the traffic is not marked with a CoS value, it will be placed in the default queue on the FEX.
Once the traffic is received on the N5k, it will be classified based on the configured rules and placed in the proper queue.

For the egress traffic (from N5k to FEX to server) it is recommended to mark the traffic with CoS value on N5k so that FEX can classify and queue the traffic properly.

Here is a complete configuration for the N5k and Nexus 2248 to classify the traffic and configure proper bandwidth for each type of traffic.
The configuration example only applies to the N5k and 2248. The configuration for the 2148 is slightly different because the 2148 has only two queues for user data. The Nexus 2248 has 6 hardware queues for user data, the same as the Nexus 5000. You can apply a similar configuration by reducing the number of queues used.

Class-map for global qos policy-map, which will be used to create CoS-queue mapping.
class-map type qos voice-global
  match cos 5
class-map type qos critical-global
  match cos 6
class-map type qos scavenger-global
  match cos 1
class-map type qos video-signal-global
  match cos 4

This qos policy-map will be attached under “system qos”. It will be downloaded to 2248 to create CoS to queue mapping.
policy-map type qos classify-5020-global
  class voice-global
    set qos-group 5
  class video-signal-global
    set qos-group 4
  class critical-global
    set qos-group 3
  class scavenger-global
    set qos-group 2

class-map type qos Video
  match dscp 34
class-map type qos Voice
  match dscp 40,46
class-map type qos Control
  match dscp 48,56
class-map type qos BulkData
  match dscp 10
class-map type qos Scavenger
  match dscp 8
class-map type qos Signalling
  match dscp 24,26
class-map type qos CriticalData
  match dscp 18

This qos policy-map will be applied under all N5k and 2248 interfaces to classify all incoming traffic based on DSCP marking.
When the policy-map is applied under Nexus 2248 interfaces, the traffic will be classified on the N5k.

policy-map type qos Classify-5020
  class Voice
    set qos-group 5
  class CriticalData
    set qos-group 3
  class Control
    set qos-group 3
  class Video
    set qos-group 4
  class Signalling
    set qos-group 4
  class Scavenger
    set qos-group 2

class-map type network-qos Voice
  match qos-group 5
class-map type network-qos Critical
  match qos-group 3
class-map type network-qos Scavenger
  match qos-group 2
class-map type network-qos Video-Signalling
  match qos-group 4

This policy-map type network-qos will be applied under “system qos” to define the MTU, marking and queue-limit(not configured here).

policy-map type network-qos NetworkQoS-5020
  class type network-qos Voice
    set cos 5
  class type network-qos Video-Signalling
    set cos 4
    mtu 9216
  class type network-qos Scavenger
    set cos 1
    mtu 9216
  class type network-qos Critical
    set cos 6
    mtu 9216
  class type network-qos class-default
    mtu 9216

class-map type queuing Voice
  match qos-group 5
class-map type queuing Critical
  match qos-group 3
class-map type queuing Scavenger
  match qos-group 2
class-map type queuing Video-Signalling
  match qos-group 4

The queuing policy will be applied under “system qos” to define the priority queue and how bandwidth is shared among non-priority queues.
policy-map type queuing Queue-5020
  class type queuing Scavenger
    bandwidth percent 1
  class type queuing Voice
  class type queuing Critical
    bandwidth percent 6
  class type queuing Video-Signalling
    bandwidth percent 20
  class type queuing class-fcoe
    bandwidth percent 0
  class type queuing class-default
    bandwidth percent 73

The input queuing policy determines how bandwidth is shared on the FEX uplinks in the direction from FEX to N5k.
The output queuing policy determines the bandwidth allocation for both N5k interfaces and FEX host interfaces.

system qos
service-policy type qos input classify-5020-global
service-policy type network-qos NetworkQoS-5020
service-policy type queuing input Queue-5020
service-policy type queuing output Queue-5020

Apply the service-policy type qos under the physical interface in order to classify traffic based on DSCP.
For EtherChannel members, the service-policy needs to be configured under the port-channel interface.

interface eth1/1-40
service-policy type qos input Classify-5020

interface eth100/1/1-48
service-policy type qos input Classify-5020

You can check whether the CoS-to-queue mappings are properly configured under the FEX interfaces with the command show queuing interface.
You can also use it to check the bandwidth and MTU configuration. The same commands apply to the N5K.

N5k# sh queuing interface ethernet 100/1/1
Ethernet100/1/1 queuing information:
  Input buffer allocation:
  Qos-group: 0  2  3  4  5  (shared)
  frh: 2
  drop-type: drop
  cos: 0 1 2 3 4 5 6
  xon       xoff      buffer-size
  21760     26880     48640   
  queue    qos-group    cos            priority  bandwidth mtu
  2        0            0 2 3           WRR        73      9280
  4        2            1               WRR         1      9280
  5        3            6               WRR         6      9280
  6        4            4               WRR        20      9280
  7        5            5               PRI         0      1600
  Queue limit: 64000 bytes
  Queue Statistics:
  queue  rx              tx                  
  2      113822539041    1             
  4      0               0             
  5      0               0             
  6      417659797       0             
  7      0               0             
  Port Statistics:
  rx drop         rx mcast drop   rx error        tx drop      
  0               0               0               0             
  Priority-flow-control enabled: no
  Flow-control status:
  cos     qos-group   rx pause  tx pause  masked rx pause
  0              0    xon       xon       xon
  1              2    xon       xon       xon
  2              0    xon       xon       xon
  3              0    xon       xon       xon
  4              4    xon       xon       xon
  5              5    xon       xon       xon
  6              3    xon       xon       xon
  7            n/a    xon       xon       xon

Latest QoS configuration guide:

Pavel Dimow Mon, 08/23/2010 - 12:59


thank you for your reply. Regarding DSCP classification, are there any drawbacks when classification occurs on the N5K instead of on the FEX?

Lucien Avramov Mon, 08/23/2010 - 14:24

Regarding drawbacks, I'm not sure; at the end of the day the traffic from/to the FEX will go through the N5K, as it's in its data path. Probably this involves more CPU usage on the N5K, but I don't see a major drawback.

About your previous post, you can remark traffic with CoS on the 5k.

For remarking of DSCP: it will be supported in the future. As a workaround, you can mark CoS instead of DSCP and then remap the CoS to the desired DSCP on an upstream switch.
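On an upstream Catalyst, that remap is done with the CoS-to-DSCP map; a sketch assuming a 6500/3750-style IOS platform (the eight values map CoS 0-7 in order and are only an example):

Switch(config)# mls qos
Switch(config)# mls qos map cos-dscp 0 8 16 24 32 46 48 56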

As far as policing, more policing features are coming soon; again, I can't provide the timeframes here, but your system engineer can.

Pavel Dimow Tue, 08/24/2010 - 03:41

Can you please provide us with an example config? As far as I can see, there is no option to remark traffic with CoS that has exceeded the CIR.

Lucien Avramov Tue, 08/24/2010 - 07:18

To remark CoS that exceeded CIR you need policing on top of marking.

What I meant is that you can remark CoS from one value to another by matching CoS ingress, and changing it to another qos-group that sets a different CoS value.

If you look at my example above, that can be achieved with the match, then set qos-group, then set cos for that qos-group in the network-qos policy-map.
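Pulling those pieces together, a minimal sketch of remarking CoS 2 to CoS 6 (class and policy names are hypothetical):

class-map type qos match-cos2
  match cos 2
policy-map type qos remark-ingress
  class match-cos2
    set qos-group 3

class-map type network-qos qgroup3
  match qos-group 3
policy-map type network-qos remark-network
  class type network-qos qgroup3
    set cos 6

system qos
  service-policy type qos input remark-ingress
  service-policy type network-qos remark-network

Note that attaching a new network-qos policy under system qos replaces the existing one, so in practice these classes would be merged into the policies already applied there.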

ronbuchalski Wed, 08/25/2010 - 08:58


Two Questions:

1) Can you provide an update on allowing the Nexus 5K to support more than twelve N2Ks?  We were told that there was a roadmap plan to allow the 5K to support up to sixteen N2Ks.

2) I have an environment where Data Center VLANs exist in two adjacent locations.  The Data Center has a N7K core, and the old Data Center has a Cat4500 core.  They are interconnected via 10GB.  Most of the L3 SVIs for the Data Center VLANs still reside on the Cat4500 core, as are the STP roots.

Within the new Data Center, the N5Ks are connected to the N7Ks via vPC.  I would like to support port-channel on N2K-attached hosts, so have enabled vPC on the N5Ks for this purpose.  My concern is with STP root priority.  With vPC enabled, does the STP root need to live on the N7K?  N5K?  Or can it continue to live on the Cat4500 until migration is complete?

Thank you,


Lucien Avramov Wed, 08/25/2010 - 12:19

1) Yes it is on the roadmap and will be supported. I'm not able to disclose a date for this feature, the best I can do on this forum is to advise you to contact your Cisco System Engineer / Sales / Account team, they can provide you the specifics offline.

2) The question here is just spanning tree related; the vPC is just seen as a regular EtherChannel from the 7k and 4500 side.

When you take your 4500 out of production, spanning tree will reconverge and elect a new root.

If you move the root from the 4500 to the 7k beforehand, the same process will occur. So really it shouldn't matter at this point.
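If you do move it beforehand, the root is just a per-VLAN priority setting on the 7k; for example (VLAN number hypothetical):

spanning-tree vlan 10 priority 4096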

Design guide covering on page 6 the best practices for spanning-tree on NX-OS with VPC:

NguyenT11 Thu, 08/26/2010 - 12:19

Hi Lucien,

I have servers with CNA adapters for both IP and SAN connectivity and want to connect to two Nexus 5K.  Do you recommend using VPC from the 5K and aggregating the links on the server, or would using VPC-HM (assuming I use Nexus 1000V) be a better route to go.  Do you have a sample configuration of how I would connect the server using VPC from the 5K assuming that I have 2 SAN fabrics (say VLAN 100 for fabric_A and VLAN 200 for fabric_B)?


Lucien Avramov Fri, 08/27/2010 - 08:31

You can certainly have: ESX--(fcoe)--N5K---SAN. Also VPC to the N5K along with FCoE is supported.

Regarding the question of whether VPC-HM is better than vPC: it depends on whether you also have a Nexus 1000v in the picture. In the above, it can work directly with the Nexus 5000, which looks simpler to me at this point.

In typical FC topologies, the vfcs are connected to different switches. The attached diagram respects the SAN A / SAN B boundaries.

FCoE configuration guide:

Quick FCoE example:

Here are the steps needed to configure the Nexus 5000 for FCoE capability.  In this example, we will use the default VSAN 1 for the virtual fibre channel (vfc) interface.  The server has a CNA connected to the Nexus 5010 on interface ethernet 1/10.

1. Create a VLAN for FCoE traffic (Configured through CLI only)

switch# configure terminal
switch(config)# vlan 50
switch(config-vlan)# fcoe vsan 1

2. Create the virtual fibre channel interface (Configured through CLI or GUI)

switch(config)# interface vfc 10 
switch(config-if)# bind interface ethernet 1/10
switch(config-if)# no shut

Note: the vfc interface number can be any number ranging from 1 - 8019

3. Configure Ethernet interface to allow FCoE VLAN to traverse the interface (Configured through CLI only)

switch(config)# interface ethernet 1/10
switch(config-if)# switchport mode trunk 
switch(config-if)# switchport trunk allowed vlan 1, 50

Note: since FCoE traffic is on VLAN 50 and regular Ethernet traffic is in VLAN 1, 
the interface will need to be in trunk mode

4. Validate vfc interface is up (Done through CLI or GUI)

switch# show interface vfc 10
vfc10 is up
Bound interface is Ethernet1/10
    Hardware is GigabitEthernet
    Port WWN is 20:00:00:0d:ec:a4:26:7f
    Admin port mode is F
    snmp link state traps are enabled
    Port mode is F, FCID is 0x660001
    Port vsan is 1
    Beacon is turned unknown
    5 minutes input rate 1164208 bits/sec, 145526 bytes/sec, 480 frames/sec
    5 minutes output rate 5024 bits/sec, 628 bytes/sec, 0 frames/sec
      81498131 frames input, 164638973896 bytes
        0 discards, 0 errors
      8986261 frames output, 4902182712 bytes
        0 discards, 0 errors

In case you have an UCS, the following may be useful:

NguyenT11 Fri, 08/27/2010 - 11:04

Thanks for the reply.  Sorry, I should've been more specific with my question.  Let me try again.

We have ESX hosts running N1000v with uplinks to 2xN5K using CNA adapters, 1 FC fabric off of each N5K, exactly like your diagram.  I'm running VPC-HM on the N1000V, everything is working as expected with FCOE and such.

Rather than VPC-HM, I'd like to run a vPC from the N5Ks to the ESX hosts. The part that I'm a little confused about is that, according to the documents, since each VSAN requires a unique VLAN on each N5K, the vPC consistency check will fail, since the vPC interface (trunk) on each N5K must have the same VLANs allowed on the trunk.

As in the diagram, I have VLAN 900 -> VSAN 900 for Fabric A and VLAN 901 -> VSAN 901 for Fabric B; vPC 10 will have different allowed VLANs on each N5K, and will fail the vPC consistency check. I can probably get this running by trunking both VLAN 900 and 901 in the vpc 10 configs on both N5Ks, and only doing the VLAN-VSAN mapping on the necessary VLAN...

Specifically I was asking for a sample config of this scenario and looking for confirmation that the recommended configuration would have me trunking both VLANs on both N5K in order to pass the consistency check for VPC.




