Cisco Support Community

New Member

Question about Nexus 3548 vPC setup

Hi.

 

We have just installed our first two Nexus 3548 switches in our Catalyst environment. We want to set up a vPC domain between the Nexuses, to use for connections to storage and other equipment.

I have read the guide at http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/white_paper_c11-685753.html and tried setting it up. I created a vPC domain on both switches like this:

nexus1:

vpc domain 1
  role priority 2000
  system-priority 4000
  peer-keepalive destination 192.168.105.40 source 192.168.105.39 vrf default

nexus2:

vpc domain 1
  system-priority 4000
  peer-keepalive destination 192.168.105.39 source 192.168.105.40 vrf default

The switches are connected with a port-channel consisting of 2x 10GE. The IP addresses above are the ones we use for managing the switches. When I configure the port-channel as "vpc peer-link", the vPC status looks OK:

vPC domain id                     : 1
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary
Number of vPCs configured         : 0
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled
Auto-recovery status              : Disabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po2    up     1,6,100,102,106
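
For reference, the peer-link side is essentially just the port-channel flagged as the peer link; a sketch (trunk mode assumed here, matching Po2 in the status above):

interface port-channel2
  switchport mode trunk
  vpc peer-link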

 

The problem I have is that I lose connection to nexus2 when I bring up the vPC. I can no longer access it on its IP (192.168.105.40), and I cannot ping it from nexus1 either. The Nexus switches are connected to our core switches, which are Catalyst 6509s: nexus1 is connected to coreswitch1 using a port-channel of 2x GE, and nexus2 is connected to coreswitch2 the same way. A spanning-tree cost has been set on the uplink from nexus2 to make spanning tree block that uplink, so that traffic between the Nexuses goes over the 2x 10GE port-channel instead of over the core switches. I have attached a drawing of this.
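
For clarity, the cost is set with the standard per-interface spanning-tree knob; the port-channel number and value below are only examples:

interface port-channel10
  spanning-tree cost 10000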

Maybe I shouldn't use the management IPs for the peer keepalive? Does the peer keepalive need to be on a different physical link than the peer-link?

 

Regards,

Johan

ACCEPTED SOLUTION
New Member

You should definitely not have the peer keepalive going across the vPC peer-link. If the peer-link goes down, you would get a split-brain scenario. I know the Nexus handles this well, but I assume you would want to avoid it. Everything you need to know, and more, is in this doc; I used it extensively when we set up our Nexus core.

 

http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
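
If you want the keepalive completely independent of other switches, the guide also covers running it over a dedicated point-to-point link in its own VRF. A rough sketch (the port, VRF name, and addresses here are only examples):

vrf context vpc-keepalive

interface Ethernet1/48
  no switchport
  vrf member vpc-keepalive
  ip address 10.255.255.1/30
  no shutdown

vpc domain 1
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive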

6 REPLIES
New Member

I can't say with absolute certainty without seeing your whole config, but you're not using the management link for the keepalive; you're using the default VRF. Management should be in the management VRF, not default, like so:

!
interface mgmt0
  vrf member management
  ip address 192.168.0.1/30
!
vpc domain 1
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
!
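
After that change, you can verify which addresses and VRF the keepalive is actually using with:

show vpc peer-keepalive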

New Member

We are not using the management ports for management, but an ordinary VLAN interface in the default VRF, as seen below. We can of course change that and use the mgmt0 port instead, if that is the better approach.

 

vrf context management
vlan configuration 1,100
vlan 1
vlan 100
  name DMMgmtPriv
vpc domain 1
  role priority 2000
  system-priority 4000
  peer-keepalive destination 192.168.105.40 source 192.168.105.39 vrf default

interface Vlan1

interface Vlan100
  no shutdown
  no ip redirects
  ip address 192.168.105.39/23

 

New Member

No, it's not a licensing issue

I edited my reply; I didn't read the question very well. Configuring the management port is something you should be able to do without any additional licensing. What exactly can't you configure?

New Member

I can use mgmt0 without problems, and it is currently used for both management and the vPC keepalive. But there is also a physical mgmt1 port on the switch. However, the only interface available in NX-OS is mgmt0; if I try to configure mgmt1, I get an error that the interface does not exist.


New Member

Thanks.

I moved the management IP to mgmt0 and changed the keepalive to the management VRF, and now it seems to work correctly.
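
Roughly, the working setup now looks like this (the /23 mask is carried over from the old VLAN interface; adjust as needed):

interface mgmt0
  vrf member management
  ip address 192.168.105.39/23

vpc domain 1
  peer-keepalive destination 192.168.105.40 source 192.168.105.39 vrf management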

However, I was thinking that it might be good to use a cable connected directly between the mgmt1 interfaces for the keepalive, to remove any dependency on other switches. But it seems I cannot configure the mgmt1 port? Is it some licensing issue?

I took a brief look at the keepalive chapter in that document earlier, but I will read through it more thoroughly.
