Nexus 7010 vPC

mikegrous
Level 3

We are replacing two 6513s with two Nexus 7010s. The design is the two 7010s linked together with all IDFs connecting back, exactly the way we have it now. My question is where to configure the vPC. Do you configure it on the trunk link between the 7010s? Or do you run another fiber/copper link and run it over that?


Yogesh Ramdoss
Cisco Employee

Mike,


Once you have the link/channel up between the 7010s, you should identify it as the "vPC peer-link":


7010(config)# interface port-channel 10
7010(config-if)# vpc peer-link

Under the port-channel to the access/distribution switch, configure it as a "vPC":

7010(config)# interface port-channel 25
7010(config-if)# vpc 25
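
As a sketch of the remaining piece, the physical member ports facing that access/distribution switch are bundled into the vPC port-channel in the usual way; the interface and channel-group numbers here are examples only, and the same vpc number must be configured on both 7010s:

7010(config)# interface Ethernet1/25
7010(config-if)# switchport mode trunk
7010(config-if)# channel-group 25 mode active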

Hope this helps.

vPC config guide:
http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/interfaces/configuration/guide/if_vPC.html

- Yogesh




So in your example, interface port-channel 10 is your trunk between the 7010s? If so, I understand the concept.

Yes, Po10 is the port-channel between two Nexus7010s.  - Yogesh

Thank You

darren.g
Level 5
Level 5

You actually need two separate links between the two Nexus switches.

One is known as the "vPC peer-link", and carries the actual traffic between the two switches, which allows them to present as the same "virtual" switch for terminating port-channels.

The second link is known as the "peer keepalive" link, and it CANNOT be the same physical connection as the vPC peer-link, as it needs to be a layer 3 link.

The recommendations also advise running the peer keepalive in a separate VRF to keep its route isolated.

The relevant parts of my configuration look like this:

vrf context keepalive

vpc domain 1
role priority 1
peer-keepalive destination 10.255.254.2 source 10.255.254.1 vrf keepalive interval 400 timeout 3
peer-gateway
reload restore delay 300

interface Ethernet8/48
description Port for VPC Peer Keep-Alive
no switchport
vrf member keepalive
ip address 10.255.254.1/24

interface port-channel1
description Intra-switch link between Nexus core
switchport mode trunk
vpc peer-link
spanning-tree port type network
mtu 9216

interface Ethernet7/25
description Port one of two port channel group - link to secondary Nexus
switchport mode trunk
mtu 9216
channel-group 1 mode active

interface Ethernet7/26
description Port one of two port channel group - link to secondary Nexus
switchport mode trunk
mtu 9216
channel-group 1 mode active

The other side is similar, with changes only to ports (the keepalive port is in a different slot), IP addresses (the other end of the keepalive link needs a different IP address, obviously), and the secondary core switch doesn't need the "role priority" option in the vpc domain.
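
For illustration, the mirrored keepalive side on the secondary switch might look like the sketch below; Ethernet9/48 is a placeholder for whatever slot the keepalive port actually lives in, and note the role priority line is deliberately absent:

vrf context keepalive

vpc domain 1
peer-keepalive destination 10.255.254.1 source 10.255.254.2 vrf keepalive interval 400 timeout 3
peer-gateway
reload restore delay 300

interface Ethernet9/48
description Port for VPC Peer Keep-Alive
no switchport
vrf member keepalive
ip address 10.255.254.2/24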

Once this is up and running, a "show vpc brief" gives the following:

nexus1# sh vpc brief
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                   : 1
Peer status                     : peer adjacency formed ok
vPC keep-alive status           : peer is alive
Configuration consistency status: success
Type-2 consistency status       : success
vPC role                        : primary, operational secondary
Number of vPCs configured       : 14
Peer Gateway                    : Enabled
Dual-active excluded VLANs      : -

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po1    up 1-2,20,30-31,40,50,60,70,80,90,110,120,140,150,160
                   ,170,180,190,192,200-201,210-211,221,250,300,556,8
                   50,999,1601-1602,1610,1620,2000,4063,4065,4068,407
                   1

*ALL* your VLANs should be in the peer-link unless you specifically don't want them available to be trunked via vPC.

nexus1# sh vpc peer-keepalive

vPC keep-alive status           : peer is alive
--Peer is alive for             : (5260233) seconds, (99) msec
--Send status                   : Success
--Last send at                  : 2010.10.13 15:40:06 735 ms
--Sent on interface             : Eth8/48
--Receive status                : Success
--Last receive at               : 2010.10.13 15:40:06 736 ms
--Received on interface         : Eth8/48
--Last update from peer         : (0) seconds, (198) msec

vPC Keep-alive parameters
--Destination                   : 10.255.254.2
--Keepalive interval            : 400 msec
--Keepalive timeout             : 3 seconds
--Keepalive hold timeout        : 3 seconds
--Keepalive vrf                 : keepalive
--Keepalive udp port            : 3200
--Keepalive tos                 : 192
nexus1#

Without a functioning peer keepalive your vPC peer-link will not come up, and vPCs won't work. You need to get that right (including the VRF) before you can establish the peer link.

Cheers

The "peer keepalive" link that is L3 is only for keepalive and doesn't pass traffic?

And all of the network traffic will go over port-channel 1, and vPC will work assuming the peer keepalive link is working correctly?

>> "peer keepalive" link that is L3 is only for keepalive and it doesnt pass traffic?

That's right. It is recommended to have the vPC peer-keepalive link in a dedicated VRF.

>> All of the network traffic will go over your port-channel 1, and VPC will work assuming your peer keepalive link is working correctly?

Yes. All network traffic goes through port-channel 1.

Different failure scenarios:

(1) If the peer-keepalive goes down while the peer-link stays up, both N7Ks know that the peer-keepalive went down, and operation remains normal.

(2) If the peer-link goes down but the peer-keepalive is up, the N7Ks know both peers are active but the peer-link went down. At this point, the secondary switch suspends its local vPC ports to avoid a dual-active condition.

(3) If the peer-link as well as the peer-keepalive goes down at the same time, a dual-active scenario occurs.

- Yogesh

Thanks a lot Yogesh!

-Mike

Absolutely correct.

The peer keepalive link will NOT pass traffic - that all goes across the peer link connection - the keepalive is used for state updates and synchronisation status - but your initial vPC establishment won't come up without it.

So you need a minimum of two ports linking your Nexus 7Ks - one 10 gig for your peer link (from memory, the peer link is not supported on 1 gig ports no matter how many you bundle, but I could be wrong), and a single one gig port in layer 3 mode for your peer keepalive link.

As shown, I use two x 10 gig ports between my Nexus 7K switches for more intra-switch bandwidth, but that's not strictly necessary, although Cisco's best practice recommends not only that you use two x 10 gig ports, but that they be on separate line cards - which isn't an option in my case as I only HAVE one 10 gig line card. :-)

Cheers

That is exactly how I was going to do it too: a new VRF for the L3 keepalive link and two 10-gig links in between the cores. Unfortunately we only have one 10 gig blade in each core too. But that will change over time.

Thanks for the input.
