3510 Views · 10 Helpful · 6 Replies

N5K-C5548UP-FA and N2K-C2248TP Configuration Assistance

jahanzaib amin
Level 1

 

Dear Expert,

 

I would like to know about the basic/core configuration for the Nexus 5548 and Nexus 2248. Kindly share your experience of configuring them together:

Do we need to enable features on the Nexus 5548, such as:

1: FEX association group

2: Pinning max-links

3: switchport mode fex-fabric

4: FEX association

Further, kindly share a customized configuration guide specifically for the 5548 and 2248.

 

Thanks,

 

Jehan

 

 

 

 

2 Accepted Solutions

6 Replies

Steve Fuller
Level 9

Hi Jehan,

Once you've enabled FEX support with the feature fex command, at a minimum you must configure the fex associate <fex-number> and the switchport mode fex-fabric commands.

The fex associate <fex-number> command is required to assign each FEX with a unique identification number between 100 and 199. The fex associate command is configured within the interface context of the Nexus 5K, and must be configured with the same FEX ID on all interfaces that connect to the same physical FEX. For example if you have interface eth1/1 to eth1/4 connected to a single Nexus 2248 FEX, then you need to configure the command fex associate 100 on each of the four interfaces.

The switchport mode fex-fabric command is required on each of the Nexus 5K interfaces that connect to the FEX. This command is what actually starts the process of registering the FEX, checking compatible software versions between the Nexus 5K and the FEX etc.

The following is the most basic FEX configuration:

!
feature fex
!
interface ethernet <slot/port>
  switchport mode fex-fabric
  fex associate <fex-number>
!

At this point the show fex command should indicate the FEX is online and recognise the model and serial number.
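Once the FEX is online, the show fex output should look something along the following lines. This is an illustrative sketch only; the FEX number here matches the example above, and the model and serial will be those of your own hardware:

N5K# show fex
  FEX         FEX          FEX               FEX
Number   Description     State            Model            Serial
-------------------------------------------------------------------
100        FEX0100       Online    N2K-C2248TP-1GE    XXXXXXXXXXX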

The pinning max-links command controls how traffic from the FEX host interfaces is distributed across the FEX fabric interfaces. There are two methods to pin host interfaces: static or port-channel.

When using static pinning the traffic from a group of host interfaces (HI) is sent on a specific fabric interface. For example, when using four fabric interfaces, HI 1-12 would use fabric link 1, HI 13-24 would use fabric link 2, HI 25-36 would use fabric link 3 and HI 37-48 would use fabric link 4. If one of the network interfaces fails, all the host interfaces pinned to that fabric link are placed in an operationally down state. Assuming the server is configured with some form of NIC teaming, it will detect the link failing, and use an alternate NIC. The advantage of using static pinning is that the utilisation and over-subscription of a specific fabric link is deterministic.
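The max-links value for static pinning is configured in the FEX context rather than on the interfaces. A minimal sketch for the four fabric link example above, assuming FEX ID 100:

!
fex 100
  pinning max-links 4
!

Note that changing the max-links value re-pins the host interfaces and is disruptive to traffic, so it's best decided before servers are in production.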

When using port-channel, all the fabric links are configured as member ports of a single port-channel interface, with traffic from the host interfaces distributed across all operational links. The advantage of using port-channels is that if a single member link fails, the host interfaces remain operational and so no server NIC teaming failover is required. The disadvantage is that the over-subscription on the fabric links can change, and would be higher in the event of the failure of one of the fabric links.
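With port-channel connectivity the FEX effectively has a single logical fabric link, so max-links remains at its default of 1. For completeness, the equivalent sketch (again assuming FEX ID 100; as this is the default it doesn't normally need to be entered):

!
fex 100
  pinning max-links 1
!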

In my opinion there's no best practice or right or wrong way to pin the host interfaces. Both static and port-channel pinning are correct in certain scenarios and it will depend upon your requirements.

There's a discussion of the different pinning options from page 59 of the Data Center Access Design with Cisco Nexus 5000 Series Switches and 2000 Series Fabric Extenders and Virtual PortChannels document that goes into more detail on the operation, pros, cons, etc.

If you want to use port-channels, the change to the above configuration would be as follows:

!
interface port-channel <port-channel>
  switchport mode fex-fabric
  fex associate <fex-number>
!
interface eth <slot/port>
  channel-group <port-channel>
!
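Once the member links are bundled, the following commands are a quick way to verify things: the first should show the fabric ports as bundled members of the port-channel, and the second should confirm the FEX state and its pinning mode.

show port-channel summary
show fex detail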


Regards

 

Dear Steve,

Thanks for your assistance. I successfully configured the N2K and N5K. The FEX has been installed and the software image on the Nexus 2248 has also been updated.

 

Further, my vPC peer is still down. Here is the vPC configuration between the two Nexus 5548s:

N5K-1:

vrf context management
vlan 1
vpc domain 55
  role priority 8192
  system-priority 8192
  peer-keepalive destination 10.10.10.2
  auto-recovery
port-profile default max-ports 512

N5K-2:

vrf context management
vlan 1
vpc domain 55
  role priority 9192
  system-priority 8192
  peer-keepalive destination 10.10.10.1
  auto-recovery
port-profile default max-ports 512

----------------

NK5548-A# sh vpc

Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 55
Peer status                       : peer link is down
                                    (peer-keepalive not operational, peer never alive)
vPC keep-alive status             : Suspended (Destination IP not reachable)
Configuration consistency status  : failed
Per-vlan consistency status       : success
Configuration inconsistency reason: Consistency Check Not Performed
Type-2 inconsistency reason       : Consistency Check Not Performed
vPC role                          : none established
Number of vPCs configured         : 0
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Disabled (due to peer configuration)
Auto-recovery status              : Enabled (timeout = 240 seconds)

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans    
--   ----   ------ --------------------------------------------------
      Po50   up     -                                                         

 

Question:

Do we need to link the management interfaces of both Nexus 5548s together for the keepalive sync?

 

Thanks,

 

 

 

Hi,

The two Nexus 5K switches that form a vPC domain need IP connectivity between each other. This is usually accomplished with either the management interface (mgmt 0) or by creating an SVI on each of the switches.

I typically use the management interface as it's available and ensures the vPC peer-keepalive traffic is totally out-of-band of any user traffic.

If you intend to use the mgmt0 interface for out-of-band management of the switches then simply connect the two management ports into a third switch. If you won't be managing the switches via the mgmt0 interface then these can be connected directly together.
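In case it helps, here's a minimal sketch of the keepalive over mgmt0. The addresses match those already in your configuration and are otherwise illustrative; mgmt0 sits in the management VRF, which is the default for the peer-keepalive. On N5K-1:

!
interface mgmt 0
  ip address 10.10.10.1/30
!
vpc domain 55
  peer-keepalive destination 10.10.10.2 source 10.10.10.1
!

and the mirror image (10.10.10.2, with destination 10.10.10.1) on N5K-2. If the mgmt0 interfaces are up and can ping each other (ping 10.10.10.2 vrf management), the keepalive should come up.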

There are some further details of the vPC peer-keepalive on page 44 of the design document that I referenced in my previous response.

Regards

Dear Steve,

Thanks for the correct and perfect answer about the vPC and FEX. I used the mgmt interface to connect them directly to each other and it is working now. The peers are active.

Further, I connected both Nexus 5548s to a Catalyst 4500 as individual trunk ports because HSRP runs on the Catalyst 4500. So I took one port from each Nexus 5548 and made it a trunk to the core switch (with the corresponding port on each core switch also configured as a trunk). I changed the speed on the Nexus side to 1000 because the other side is 1G RJ45.

Here is the Config:

N5548-1 / N5548-2:

interface Ethernet1/3
  switchport mode trunk
  speed 1000

I added a static route on both Nexus switches towards the core HSRP IP: ip route 0.0.0.0/0 10.10.150.39 (the IP of the HSRP standby).

My trunk status is up on both sides, and if I do sh cdp nei on the core switch (4500) I can see the Nexus 5548 as a neighbour, but I am not able to ping the core switch IP address from the N5548 console.

Is there any further configuration needed to enable routing or ping on the Nexus 5548?

 

Please suggest.

Dear Steve,

The routing issue has been resolved. 

I deployed vPC with each fabric extender dual-connected to two Cisco Nexus 5000 Series switches, as described in Figure 2 of the documentation linked below.

But the issue is that when I build the vPC on the Nexus 5548, the FEX disappears. Kindly see the attached sh run for both Nexus switches and point out where the mistake is.

 

http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/configuration_guide_c07-543563.html

 

Kindly see my attached network diagram for better understanding.

 

Thanks,

 

Jehan

 

 
