Configure FCoE Nexus 5548UP - UCS 6120

Jun 5th, 2012

Hi, I am a networking guy and I need to understand which configuration I need to connect two UCS 6120s to my Nexus 5000s (two of them, with vPC).

I know that I must activate the NPIV and FCoE features, and I have read about creating VSANs and vfc interfaces. My first question: does anybody have a procedure? (For example: 1. create a VSAN, 2. associate the VSAN with a VLAN, 3. etc.)

Secondly, I have some doubts:

1.- If I have FC interfaces, do I need to create a vfc interface? (Or is that only needed if I am going to use the 10G Ethernet interfaces?)

2.- Why do I need to bind the VSAN to the VLANs?

Please help me with this; the SAN world is new to me!

I hope you can help me.

Regards,

Juan Pablo Hidalgo

vjamart Tue, 06/05/2012 - 12:14

Hi Juan Pablo,

Please follow this guide for details:

http://www.cisco.com/en/US/products/ps10281/products_configuration_example09186a0080afd130.shtml

A UCS Fabric Interconnect can work in two modes: end-host (NPV) or FC switch.

By default it is NPV, which means you need to bring up an NP-to-F port link with the Nexus 5k (with the NPIV feature active on the N5k) for the FC links. There is no direct FCoE support between the N5k and the FI at this point (https://supportforums.cisco.com/message/3497244#3497244).

- On each N5k (fabric A, VSAN 10; fabric B, VSAN 20):

feature fcoe
feature npiv

interface fcx/y   (attached to UCS FI)
  switchport mode F
  switchport trunk allowed vsan 10   (20 on fabric B)
  no shutdown

- For FC interfaces there is no need to create a vfc interface; it is mandatory, however, for 10GigE ports that attach to a CNA: you bind each 10GigE port to a vfc.

- The VLAN-VSAN binding is mandatory: you need at least one VLAN in FCoE mode bound to a VSAN so that the CNA can register its pWWN with the name server and then receive an FCID. You must also trunk the default VLAN on the 10GigE port attached to the CNA in order to allow the DCBX exchange with the switch.

vsan database
  vsan 10
  vsan 10 interface fcx/y

vlan 10   (20 on fabric B)
  fcoe vsan 10   (20 on fabric B)

# sh vlan fcoe
Original VLAN ID        Translated VSAN ID      Association State
----------------        ------------------      -----------------
      10                        10               Operational

interface Ethernetx/y
  switchport mode trunk
  spanning-tree port type edge trunk
  no shutdown

interface vfcy
  bind interface Ethernetx/y
  switchport trunk allowed vsan 1   (default VSAN)
  switchport trunk allowed vsan add 10   (20 on fabric B)
  no shutdown

vsan database
  vsan 10 interface vfcy

You also need to configure zoning on N5k.
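As a minimal zoning sketch on the N5k (the pWWNs and zone/zoneset names below are hypothetical, just to show the shape of the configuration):

```
zone name ESX1_to_ARRAY vsan 10
  member pwwn 20:00:00:25:b5:aa:00:01   ! host vHBA pWWN (hypothetical)
  member pwwn 50:06:01:60:3b:60:12:34   ! storage port pWWN (hypothetical)

zoneset name FABRIC_A vsan 10
  member ESX1_to_ARRAY

zoneset activate name FABRIC_A vsan 10
```

Verify with `show zoneset active vsan 10` after both the initiator and target have logged in.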

I strongly recommend you acquire this book if you're new to storage networking, as it covers FCoE and FC integration on Cisco products in a procedure-like way:

http://www.amazon.com/Cisco-Storage-Networking-Cookbook-Families/dp/146646318X

Best Regards,

Vincent

Juan Pablo Hida... Tue, 06/05/2012 - 13:43

Hi Vincent,

Thanks for your reply and the information!

I only have one more question, about the VLAN and the VSAN: I don't understand the purpose of binding the VLAN to the VSAN. I understand that the UCS and the N5K need the VSAN, but why the binding? Is it because I use FCoE?

Is this VLAN used for traffic between switches, or is it only used for the VSAN and that's it?

Again, thank you very much for your help.

Best Regards,

Juan Pablo

vjamart Wed, 06/06/2012 - 02:02

Hi Juan Pablo,

The VLAN-VSAN binding is needed only for FCoE: the switch needs to know on which VLAN it is expected to receive FC data, since the FC frame is encapsulated in an Ethernet frame whose EtherType is FCoE.

Also, it will first receive FIP (FCoE Initialization Protocol, the control plane) frames on that link, in order to discover and negotiate with FCoE-capable devices (CNAs). FIP is used for FCoE VLAN discovery, FCF discovery, and fabric login.

FCoE traffic (the data plane) will then be received by the switch.

The bound vfc interface will then collect the FCoE frames, decapsulate them from Ethernet, and send them to the FC fabric.

So basically this VLAN-VSAN pair is used within the switch (or between switches for multi-hop FCoE), but it won't carry classic Ethernet traffic. For that you need other VLAN(s) trunked on the 10GigE port of the CNA.

Also, be aware that you need to disallow these FCoE VLANs on the vPC peer-link of your N5k pair, since you must enforce strict FC fabric A/B segregation for redundancy (load-balancing across fabrics is done by the MPIO driver at the host level and on the storage array side).

http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns945/ns1060/at_a_glance_c45-578384.pdf
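As a sketch of the peer-link pruning described above, assuming the peer-link is port-channel 1 (a hypothetical number) and the FCoE VLANs are 10 and 20:

```
interface port-channel1
  ! vPC peer-link: exclude the FCoE VLANs to keep fabric A/B separated
  switchport trunk allowed vlan except 10,20
```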

Best Regards,

Vincent

Juan Pablo Hida... Thu, 06/07/2012 - 09:18

Hi Vincent,

Thanks for your reply and explanation.

Best Regards,

Juan Pablo

charles_morrall Fri, 01/18/2013 - 00:34

Jumping into this thread as a latecomer, but I've not been able to find a straight answer.

"There is no direct FCoE support between N5k and FI at this point", is this still the case?

I'm a newbie in the Cisco (and pretty much Ethernet) world, I come from a traditional FC background. I'm trying to get my head around this, but if I can't connect FCoE natively between a Nexus 5548 and 6120XP FI, I have to rethink my design.

vjamart Fri, 01/18/2013 - 02:12

Hi Charles,

This is no longer a limitation; we introduced support for northbound FCoE (multihop) in the latest UCSM release, 2.1(1a).

Please find more info in that article: http://blogs.cisco.com/datacenter/ucs-multihop-fcoe-in-under-an-hour/

I hope this answers your question.

Best Regards,

Vincent

charles_morrall Fri, 01/18/2013 - 04:27

Hi,

Thanks for the quick reply. I've read the post you linked, but I still feel I'm missing something quite fundamental.

I have a Cisco UCS system with the latest images, and it looks like it's running 2.1(1a), so I should be covered.

There are two Fabric Interconnects (6120XP) connected to a single Nexus 5548UP. This is just a lab setup, nothing in production.

I've created VLAN 1001 and mapped it to VSAN 10. I created two vfcs: one on the port where Fabric Interconnect A connects, and one on a Nexus 5548 port where I have a NetApp unified storage port connected.

Both ports are trunk ports.

Created VSAN 10 in UCS and linked it to VLAN 1001. I have an ESXi host with a vHBA connected to VSAN 10.

Allowed VLAN 1001 on the ports where I have the link to the 6120 and the port where the NetApp connects.

show flogi database shows nothing

fcoe is enabled.

The 6120XP is in switch mode, but that should be OK now that we have FCoE multihop, shouldn't it?

I've googled and googled, but since the multihop support is so recent, I'm still searching for useful documentation on how to configure all this.

vjamart Fri, 01/18/2013 - 05:16

Hi Charles,

From our deployment experience, it should work fine if the FIs are in end-host mode (NPV), but in FC switching mode there may be a problem (a hardware limitation on the 1st-generation FI 61xx). Ethernet switching mode is not in scope for this limitation.

Please check whether you have a similar fault reported on your 6120XP in switch mode:

FI-A# scope fc-up
FI-A /fc-uplink # scope fab a
FI-A /fc-uplink/fabric # scope fcoeinterface x y
FI-A /fc-uplink/fabric/fcoeinterface # show fault
Severity  Code     Last Transition Time     ID       Description
--------- -------- ------------------------ -------- -----------
Major     F1084    2013-01-10T17:49:38.352    154788 FCoE uplink port 1/9 cannot
be supported on springfields when FI is in eth endhost mode and FC switching mode
Major     F1083    2013-01-10T15:27:45.126    105418 FCoE uplink is down on Vsan
1

If so, please modify the design for FC NPV, or test with a 2nd-generation FI 62xx.

Thanks.

Best Regards,

Vincent

charles_morrall Fri, 01/18/2013 - 05:42

I get this on the 6120:

fi-ucs-got-demo-A /fc-uplink/fabric/fcoeinterface # show fault
Severity  Code     Last Transition Time     ID       Description
--------- -------- ------------------------ -------- -----------
Cleared   F1082    2013-01-18T14:32:26.882    886546 FCoE uplink port 1/3 is down
Major     F1083    2013-01-18T14:32:11.505    886550 FCoE uplink is down on Vsan 1
Major     F1083    2013-01-18T14:32:11.505    886549 FCoE uplink is down on Vsan 10
fi-ucs-got-demo-A /fc-uplink/fabric/fcoeinterface #

And this on the Nexus 5548 where I have the link to the 6120 (Eth 1/3)

ucs# show int vfc 13
vfc13 is trunking (Not all VSANs UP on the trunk)
    Bound interface is Ethernet1/3
    Hardware is Ethernet
    Port WWN is 20:0c:54:7f:ee:e2:af:3f
    Admin port mode is F, trunk mode is on
    snmp link state traps are enabled
    Port mode is TF
    Port vsan is 10
    Trunk vsans (admin allowed and active) (1,10,20)
    Trunk vsans (up)                       ()
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             (1,10,20)
    1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
      0 frames input, 0 bytes
        0 discards, 0 errors
      0 frames output, 0 bytes
        0 discards, 0 errors
    last clearing of "show interface" counters never
    Interface last changed at Fri Jan 18 12:00:06 2013

padramas Sat, 01/19/2013 - 04:26

Hello Charles,

Here is a sample configuration to enable FCoE between UCS and an N5K.

#####  On UCSM 2.1  ######

1) FC end-host mode

2) Configure a port as an FCoE uplink and assign the interface to VSAN 10

## Verifying the output

6248-01-A(nxos)# show run int vfc 1255
interface vfc1255
  bind interface Ethernet1/24
  switchport mode NP
  no shutdown

##### Configuration on N5K  #####

1) Enable the NPIV, FCoE and LLDP features

2) Create the VLAN, then configure the Ethernet and vfc interfaces

----------- sample configuration ----------------

conf t
vlan 1001
  fcoe vsan 10

interface Ethernet1/19
  switchport mode trunk
  spanning-tree port type edge
  spanning-tree bpdufilter enable
  no shutdown

int vfc 14
  bind interface Ethernet1/19
  switchport trunk allowed vsan 1
  switchport trunk allowed vsan add 10
  no shutdown

vsan database
  vsan 10 interface vfc 119

------------------------------------

####  Commands to verify the configuration  #####

show feature
show vlan
show fcoe vlan
show int vfc 119
show fcoe database
show flogi table

rafael.guedes Wed, 02/13/2013 - 20:08

Hi padramas,

I did not understand why you used vfc 14, and then vfc 119. Could you explain, please?

int vfc 14
  bind interface Ethernet1/19
  switchport trunk allowed vsan 1
  switchport trunk allowed vsan add 10
  no shutdown

vsan database
  vsan 10 interface vfc 119

Another question: why did you allow VSAN 1?

Lastly, if you were to allow VLANs on the Ethernet port, would you allow VLAN 1001 or also VLAN 1? I'm asking because I believe FIP relies on the native VLAN. Is that right?

Thank you!

charles_morrall Mon, 01/21/2013 - 00:38

Thanks for the detailed reply. I'm probably way off here, but what if I wanted to run the FI in "switch mode" for FC? This is based on the assumption that, now with FCoE multihop support, I can connect the FI to the Nexus 5548 as a switch and still have end-to-end FCoE connectivity.

vjamart Mon, 01/21/2013 - 02:38

Hi Charles,

The Nexus 5000 has a hardware limitation for this, but the issue doesn't exist for the Nexus 5500 running in FC host mode with FCoE multihop.

Best Regards,

Vincent

padramas Mon, 01/21/2013 - 22:58

Charles,

We do not need the FI to be in FC switch mode. FC end-host mode (NPV) allows the FI ports (NP) to connect to an NPIV-enabled switch (the N5K).

If the N5K software version allows multihop, you can then extend FCoE from the N5K to another switch via a VE port.
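A few N5k-side commands to verify the NPV/NPIV setup described above (a sketch; the vfc number is hypothetical and the output will vary with your topology):

```
show npiv status        ! NPIV should be enabled on the N5k
show fcoe               ! FCoE/FCF state of the switch
show vlan fcoe          ! VLAN-VSAN mappings and their operational state
show interface vfc 14   ! vfc facing the FI uplink (hypothetical number)
show flogi database     ! fabric logins arriving via the FI's NP port
show fcns database      ! name-server registrations per VSAN
```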

HTH

Padma

charles_morrall Thu, 01/24/2013 - 10:35

Well, I got it all working after much tinkering. One lingering question, probably answered by reading the specifications, but still: I had to configure the port from the FI (6120) as an "FCoE uplink port" in order to see the fabric logins in the Nexus switch. My initial assumption was that a "Network" port would be sufficient, Unified Fabric and all. The VLANs still carry, so I'm all good in this configuration, but it seems a bit limiting to allow exactly one VSAN on an FCoE uplink. What if I want a multi-tenant solution with many more VSANs (although the FI supports 32 VSANs, unless I'm mistaken)?

rafael.guedes Thu, 01/24/2013 - 12:18

Hi,

We are planning to deploy an FCoE multihop scenario where we have two 6248UP Fabric Interconnects connecting to two Nexus 5548UPs configured in vPC. The question is whether we can carry Ethernet and FCoE traffic over the same cable, as the hosts do using CNAs. When I heard about FCoE multihop northbound support on the Fabric Interconnect, I always thought that FCoE traffic would pass over the same cable as the Ethernet traffic. Is that not possible?

The only way I can see to do FCoE multihop is to link FI-A to N5K-A and FI-B to N5K-B for the FCoE traffic, and keep the Ethernet traffic on vPC in the classic fashion. Is that it?

Jeremy Waldrop Thu, 01/24/2013 - 12:32

You can use the current vPC uplink for both Ethernet and FCoE traffic if you configure the port as Unified within UCSM. If you configure the existing uplink port as an FCoE uplink, it will change the role to Unified Uplink.

After you change the port to Unified, you will need to map the appropriate VSAN to it. To switch over from traditional FC to FCoE, disable the FC uplinks.

rafael.guedes Wed, 02/13/2013 - 19:42

Hi Jeremy,

I just read this article, http://blogs.cisco.com/datacenter/ucs-multihop-fcoe-in-under-an-hour/, and it seems that using vPC to carry FCoE + Ethernet traffic is not a valid scenario. See below:

3. With the Nexus 5k systems, you can PortChannel FCoE and Ethernet links together, or you can use vPCs. You cannot do both. If you need to use vPCs on your Ethernet links, you will need to use separate links for FCoE (but you can port-channel those together, if you wish). Using UCS with a Nexus 7k, you will need to use dedicated FCoE links between the Fabric Interconnects and the 7k.

Do you have any documentation about this? I googled a lot, but it seems Cisco has not released much related content yet.

Thank you!

Jeremy Waldrop Fri, 02/22/2013 - 04:09

Hey Rafael, I was doing this in a lab just to see if it would work. I was able to get it working, but due to the horrible performance of accessing a LUN from an ESXi host over multihop FCoE, I had to revert to traditional FC connections.

This performance issue is a major bug in my opinion. I tested it again last week on gen-2 hardware and a brand-new VNX and had the same results. It was so bad I couldn't even get a Windows VM install to finish. In this setup I had 2 dedicated Ethernet uplinks configured in an FCoE port-channel. Once I moved to traditional FC there were no performance issues.

NaelShahid_2 Thu, 02/28/2013 - 04:06

Hi Jeremy, so you reverted to native FC between the FI and the Nexus, or was it a dedicated "SAN" FCoE uplink with separate LAN uplinks (vPC)?

Jeremy Waldrop Thu, 02/28/2013 - 04:57

Reverted to native FC.

When I had dedicated FCoE links there was a serious performance issue. I only initially set up FCoE on the vPC links just to see if it could be done, but I never used it.

I have tested this on both gen-1 (6100/2104/M81KR) and gen-2 (6200/2208/1240/1280) hardware, and the result is the same. I can boot the ESXi hosts from the SAN boot LUN, but when I power on a VM it takes 20+ minutes and I can't ever log in to it.

I am using ESXi 5.1 with the latest VIC enic/fnic drivers.

NaelShahid_2 Thu, 02/28/2013 - 05:04

Sounds bad! I better order some FC SFPs....

One last question: from the Nexus to the storage, was this FCoE or native FC?

Thanks for the heads up.

Jeremy Waldrop Thu, 02/28/2013 - 05:26

Native FC from an EMC VNX. In this same lab I have 2 rack mount ESXi servers with QLogic CNAs with FCoE going through the same Nexus 5548s and same VNX SAN with no performance issues.

NaelShahid_2 Thu, 02/28/2013 - 08:33

So this is a multihop issue, then. I have done end-to-end FCoE deployments before, direct from host > Nexus > storage, without issues.

loizosko Wed, 04/10/2013 - 17:16

It appears that you can't bind a vfc interface to a port-channel with more than one member link.

I have 2 Nexus 5Ks uplinked via 2 vPC EtherChannels to the UCS 6296; each vPC has 4 links (2 from each Nexus).

I will see if I can split the uplinks into 4 different vPCs (with 2 connections each).

Does anybody have any feedback on this?

vsathiam Fri, 04/12/2013 - 14:06

loizosko,

To configure the Ethernet network infrastructure, you can do the following (just like yours: 4 links total, 2 links from each Nexus).

Nexus 5k-1

interface ethernet 1/1
  description Connected to Fabric-Interconnect-A Eth 1/13
  channel-group 10 mode active
  no shut
interface ethernet 1/2
  description Connected to Fabric-Interconnect-B Eth 1/13
  channel-group 11 mode active
  no shut
interface port-channel10
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 10
interface port-channel11
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 11

Nexus 5k-2

interface ethernet 1/1
  description Connected to Fabric-Interconnect-A Eth 1/14
  channel-group 10 mode active
  no shut
interface ethernet 1/2
  description Connected to Fabric-Interconnect-B Eth 1/14
  channel-group 11 mode active
  no shut
interface port-channel10
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 10
interface port-channel11
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 11

Go to UCSM and configure ports 1/13 and 1/14 (in Fabric A) as uplink ports. Under LAN > LAN Cloud > Fabric A, you can create port-channels. Repeat the above for Fabric B.

Ethernet Uplink Port-channels should come up.

To extend the SAN from the data center core to the Fabric Interconnects, do the following (assuming you are configuring FCoE between the 5k and the FI, and the FI is in end-host mode):

To maintain SAN separation, as an example, connect 2 links from 5k-1 to FI-A and 2 links from 5k-2 to FI-B. Create a SAN port-channel.

On Nexus 5k-1

feature npiv
feature fcoe
feature lacp

vlan 100
  name FCOE_VLAN_100
vsan database
  vsan 100
  vsan 100 name SAN100
vlan 100
  fcoe vsan 100

interface Ethernet 1/21
  description Connected to Fabric_Interconnect_A Eth 1/21
  channel-group 30 mode active
  no shut
interface Ethernet 1/22
  description Connected to Fabric_Interconnect_A Eth 1/22
  channel-group 30 mode active
  no shut
interface port-channel 30
  switchport mode trunk
  switchport trunk allowed vlan 100
  spanning-tree port type edge trunk
interface vfc 30
  bind interface port-channel30
  switchport trunk allowed vsan 100
  switchport mode F
  no shut

On Nexus 5k-2

feature npiv
feature fcoe
feature lacp

vlan 101
  name FCOE_VLAN_101
vsan database
  vsan 101
  vsan 101 name SAN101
vlan 101
  fcoe vsan 101

interface Ethernet 1/21
  description Connected to Fabric_Interconnect_B Eth 1/21
  channel-group 30 mode active
  no shut
interface Ethernet 1/22
  description Connected to Fabric_Interconnect_B Eth 1/22
  channel-group 30 mode active
  no shut
interface port-channel 30
  switchport mode trunk
  switchport trunk allowed vlan 101
  spanning-tree port type edge trunk
interface vfc 30
  bind interface port-channel30
  switchport trunk allowed vsan 101
  switchport mode F
  no shut

Go to UCSM and define the VSANs (under SAN Cloud); then, under Fabric A and Fabric B, create FCoE port-channels (select the appropriate interfaces) and associate the port-channels with the correct VSANs.

Refer

http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.1/b_UCSM_GUI_Configuration_Guide_2_1_chapter_0110.html

loizosko Fri, 04/12/2013 - 14:39

I wanted to have 4 10G links out of each FI; your solution only has 2 out of each FI.

Right now I have 2 interfaces in each port-channel (4 total per vPC), while your solution calls for only 2 uplinks from each FI.

Review the 2 diagrams below.

vsathiam Fri, 04/12/2013 - 14:57

loizosko,

Nexus 5k-1

feature lacp

interface ethernet 1/1
  description Connected to Fabric-Interconnect-A Eth 1/13
  channel-group 10 mode active
  no shut
interface ethernet 1/2
  description Connected to Fabric-Interconnect-A Eth 1/14
  channel-group 10 mode active
  no shut
interface ethernet 1/3
  description Connected to Fabric-Interconnect-B Eth 1/13
  channel-group 11 mode active
  no shut
interface ethernet 1/4
  description Connected to Fabric-Interconnect-B Eth 1/14
  channel-group 11 mode active
  no shut
interface port-channel10
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 10
interface port-channel11
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 11

Nexus 5k-2

feature lacp

interface ethernet 1/1
  description Connected to Fabric-Interconnect-A Eth 1/15
  channel-group 10 mode active
  no shut
interface ethernet 1/2
  description Connected to Fabric-Interconnect-A Eth 1/16
  channel-group 10 mode active
  no shut
interface ethernet 1/3
  description Connected to Fabric-Interconnect-B Eth 1/15
  channel-group 11 mode active
  no shut
interface ethernet 1/4
  description Connected to Fabric-Interconnect-B Eth 1/16
  channel-group 11 mode active
  no shut
interface port-channel10
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 10
interface port-channel11
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 11

In UCSM, configure appropriate Ethernet uplink ports (and create Port-channel configurations) for both Fabrics.

loizosko Fri, 04/12/2013 - 15:34

This is what I currently have, but with this setup I cannot pass FCoE. When I try to associate the vfc interface with the port-channel, I get this error:

N5K-SW1(config-if)# bind interface port-channel 2
ERROR: fcoe_mgr: VFC cannot be bound to Port Channel as it has more than one member (err_id 0x4207002C)
N5K-SW1(config-if)#

All the articles indicate that it can't be done in this case.

The scenario I indicated (possible solution?) might work, but I did not test it.

As an alternative I could have a separate FC port-channel, but I would like to see if I can get it working via FCoE.

vsathiam Fri, 04/12/2013 - 18:06

loizosko,

Since you are using Nexus 5k, you have 2 options:

Option 1:

For Storage traffic, 'dedicate' 1 or more FCoE Uplinks from Fabric Interconnects to N5k

Nexus 5k-1 (1 or more links ) directly connected to Fabric Interconnect A

Nexus 5k-2 (1 or more links ) directly connected to Fabric Interconnect B

For Ethernet traffic, use vPC as shown in the example in previous posts.

Nexus 5k-1 and Nexus 5k-2 (1 or more links ) directly connected to Fabric Interconnect A

Nexus 5k-1 and Nexus 5k-2 (1 or more links ) directly connected to Fabric Interconnect B

Option 2:

If you want Ethernet and storage traffic to use the same links, dedicate 1 or more FCoE uplinks (converged uplinks, carrying both FCoE and Ethernet LAN traffic) from the Fabric Interconnects to the N5k. Do not use vPC for these.

Storage and Ethernet -

Nexus 5k-1 (1 or more converged links ) directly connected to Fabric Interconnect A

Nexus 5k-2 (1 or more converged links ) directly connected to Fabric Interconnect B

As of today, you cannot use vPC to carry both Ethernet and Storage traffic.
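As a minimal N5k-1 sketch of option 2, assuming hypothetical IDs (VLAN 100 mapped to VSAN 100 for FCoE, VLAN 200 for data) and one converged link on Ethernet 1/21:

```
vlan 100
  fcoe vsan 100
vlan 200
  name DATA_VLAN_200

vsan database
  vsan 100

interface Ethernet1/21
  description Converged link to Fabric-Interconnect-A (hypothetical)
  switchport mode trunk
  switchport trunk allowed vlan 100,200   ! FCoE and data VLANs share this link
  spanning-tree port type edge trunk
  no shutdown

interface vfc 21
  bind interface Ethernet1/21
  switchport trunk allowed vsan 100
  switchport mode F
  no shutdown

vsan database
  vsan 100 interface vfc 21
```

N5k-2 would mirror this toward FI-B with its own FCoE VLAN/VSAN pair to preserve fabric separation.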

I suggest you take a look at the following presentation -

https://www.ciscolive365.com/connect/sessionDetail.ww?SESSION_ID=5927&backBtn=true

asif.irfan Mon, 05/13/2013 - 07:04

Hello vsathiam,  Thanks for the detailed info. Could you please share the config for the 2 options proposed? That would help.  One question, do we really need the vPC for ethernet traffic or we can use single link as well?

vsathiam Wed, 05/15/2013 - 12:18

Hi Asif,

Please check the following link ( http://www.cisco.com/en/US/docs/solutions/SBA/February2013/Cisco_SBA_DC_UnifiedComputingSystemDeploymentGuide-Feb2013.pdf ) to understand how separate FCoE and Ethernet uplinks are set up.

With a vPC setup for the Ethernet traffic, you get the benefits of load-balancing (traffic spread across multiple links, in addition to hardware node redundancy), maximized bandwidth utilization, etc.

-Vignesh
