
Question about Nexus 3548 vPC setup

Johan Sjöberg
Level 1

Hi.

 

We have just installed our first two Nexus 3548 switches in our Catalyst environment. We want to set up a vPC domain between the Nexuses, to use for connections to storage and other equipment.

I have read the guide at http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/white_paper_c11-685753.html and tried setting it up. I created a vPC domain on both switches like this:

nexus1:

vpc domain 1
  role priority 2000
  system-priority 4000
  peer-keepalive destination 192.168.105.40 source 192.168.105.39 vrf default

nexus2:

vpc domain 1
  system-priority 4000
  peer-keepalive destination 192.168.105.39 source 192.168.105.40 vrf default

The switches are connected with a port-channel consisting of 2x 10GE. The IP addresses above are the ones we use for managing the switches. When I configure the port-channel as "vpc peer-link", the vpc status looks OK:

vPC domain id                     : 1
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary
Number of vPCs configured         : 0
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled
Auto-recovery status              : Disabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po2    up     1,6,100,102,106
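
For reference, the peer-link itself (Po2 in the output above) is configured roughly like this; the member interfaces and channel mode are just examples:

!
! member ports and LACP mode are assumptions; Po2 matches the output above
interface Ethernet1/47
  channel-group 2 mode active
interface Ethernet1/48
  channel-group 2 mode active
!
interface port-channel2
  switchport mode trunk
  vpc peer-link
!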

 

The problem I have is that I lose the connection to nexus2 when I bring up the vPC. I can no longer access it on its IP (192.168.105.40), and I cannot ping it from nexus1 either. The Nexus switches are connected to our core switches, which are Catalyst 6509s: nexus1 is connected to coreswitch1 using a port-channel of 2x GE, and nexus2 is connected to coreswitch2 the same way. A spanning-tree cost has been set on the uplink from nexus2 to make spanning tree block that uplink, so that traffic between the Nexuses goes over the 2x 10GE port-channel instead of over the core switches. I have attached a drawing of this.
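
The cost change on nexus2's uplink looks roughly like this (the port-channel number is an example):

!
! nexus2: raise the STP cost on the uplink to coreswitch2 so the
! 2x 10GE peer-link path is preferred (Po10 is an example)
interface port-channel10
  spanning-tree cost 10000
!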

Maybe I shouldn't use the management IPs for the peer keepalive? Does the peer keepalive need to be on a different physical link than the peer-link?

 

Regards,

Johan


6 Replies

Dallas Brown
Level 1

I can't say with absolute certainty without seeing your whole config, but you're not using the management link for the keepalive; you're using the default VRF. The management interface should be in the management VRF, not default, like so:

!
interface mgmt0
  vrf member management
  ip address 192.168.0.1/30
!
vpc domain 1
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
!
Johan Sjöberg
Level 1

We are not using the management ports for management, but an ordinary VLAN interface in the default VRF, as seen below. We can of course change that and instead use the mgmt0 port if that is the best approach.

 

vrf context management
vlan configuration 1,100
vlan 1
vlan 100
  name DMMgmtPriv
vpc domain 1
  role priority 2000
  system-priority 4000
  peer-keepalive destination 192.168.105.40 source 192.168.105.39 vrf default

interface Vlan1

interface Vlan100
  no shutdown
  no ip redirects
  ip address 192.168.105.39/23

 

Dallas Brown
Level 1

I edited my reply; I didn't read the question very well. Configuring the management port is something you should be able to do without any additional licensing. What exactly can't you configure?

Johan Sjöberg
Level 1

I can use mgmt0 without problems, and it is currently used for both management and the vPC keepalive. But there is also a physical mgmt1 port on the switch. However, the only management interface available in NX-OS is mgmt0. If I try to configure mgmt1, I get an error that the interface does not exist.

Dallas Brown
Level 1
Accepted Solution

 

You should definitely not have the peer keepalive going across the vPC peer-link. If the peer-link goes down, you would get a split-brain scenario. I know the Nexus handles this well, but I would assume you want to avoid it. Everything you need to know, and more, is in this document; I used it extensively when we set up our Nexus core.

 

http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
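
If you want the keepalive on a dedicated link between the switches rather than mgmt0, the pattern from that guide looks roughly like this (the interface, VRF name, and addresses here are just examples):

!
! hypothetical dedicated keepalive link in its own VRF
vrf context vpc-keepalive
!
interface Ethernet1/46
  no switchport
  vrf member vpc-keepalive
  ip address 10.255.255.1/30
  no shutdown
!
vpc domain 1
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive
!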

Johan Sjöberg
Level 1

Thanks.

I moved the management IP to mgmt0 and changed the VRF to management for the keepalive link, and now it seems to work correctly.
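
For reference, the relevant part of the configuration now looks roughly like this on nexus1 (assuming we kept the same addressing; nexus2 mirrors it):

!
! management IP moved from the VLAN interface to mgmt0 (management VRF);
! the keepalive now references vrf management
interface mgmt0
  vrf member management
  ip address 192.168.105.39/23
!
vpc domain 1
  role priority 2000
  system-priority 4000
  peer-keepalive destination 192.168.105.40 source 192.168.105.39 vrf management
!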

However, I was thinking that it might be good to use a cable connected directly between the mgmt1 interfaces for the keepalive, to remove any dependencies on other switches. But it seems I cannot configure the mgmt1 port? Is it a licensing issue?

I took a brief look at the keepalive chapter in that document earlier, but I will read through it more thoroughly.
