Cisco Support Community
Purple

Nexus 5000 management address

    I am having trouble trying to set up a management address on these boxes. We have two 7Ks above the 5Ks, and we are running vPC down to the 5Ks. The mgmt 0 interface on the 5Ks is being used for the vPC peer keepalive. How do you add a management interface to the 5Ks? I tried adding a new VLAN on the 7Ks, allowing it down the vPCs to the 5Ks, creating an SVI in that VLAN, and adding a static route pointing to the 7K VLAN interface. I cannot ping the 7K interface. vPC shows OK, but on the 5K there is a message saying the VLAN is suspended, reason "vlan not configured on remote vpc interface". I have checked both ends of the vPC connection and all interfaces are configured the same, so I don't know why it is suspending the VLAN; it's the only VLAN in the vPC that is suspended. Do I have to run a separate interface down to the 5K that isn't in the vPC? That would seem stupid, wasting a 10 Gig connection just to manage the device. Any pointers appreciated; I've been staring at this too long and can't see the issue.
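For anyone hitting the same suspend message: a minimal sketch of what both vPC peers (7K and 5K side) need for the VLAN to come up. The interface and VLAN numbers here are made up for illustration. The VLAN has to exist and be allowed on the vPC member port-channel *and* on the vPC peer-link on both switches, or NX-OS suspends it on the trunk:

```
! On each switch (both 7Ks and both 5Ks)
vlan 100
  name MGMT

! The vPC peer-link must carry the VLAN
interface port-channel1
  switchport mode trunk
  switchport trunk allowed vlan add 100
  vpc peer-link

! The vPC member port-channel must also carry it
interface port-channel10
  switchport mode trunk
  switchport trunk allowed vlan add 100
  vpc 10
```

To verify, check `show vlan id 100` and `show vpc consistency-parameters interface port-channel 10` on both peers; a VLAN missing from either side's peer-link allowed list is a common cause of exactly this suspend reason.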

5 REPLIES
Hall of Fame Super Silver

Nexus 5000 management address

I run the 5K (and even 7K) vPC peer keepalive link from the mgmt interface via a small switch, and put a gateway on that switch to get into and out of the management network that the management VRF uses.

There are other alternatives, such as running the vPC peer KA over front-side ports (with an associated dedicated VRF), thus freeing up the physical Ethernet mgmt ports.

IMHO Cisco didn't quite think through the design guides on this: some say use back-to-back mgmt ports, as you are doing for the vPC peer KA, while others say use an intervening switch. Still others advocate using front-side ports. It gets even worse with a 7K pair, each with dual supervisors.
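The front-side-port alternative mentioned above looks roughly like this; the VRF name, port, and addresses are examples, not from this thread:

```
! Dedicated VRF so the keepalive never mixes with data traffic
vrf context vpc-keepalive

! A front-panel port repurposed as a routed keepalive link
interface Ethernet1/32
  no switchport
  vrf member vpc-keepalive
  ip address 10.255.255.1/30
  no shutdown

! Point the keepalive at the peer's address in that VRF
vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive
```

The other peer mirrors this with the .1/.2 addresses swapped. This frees mgmt 0 for management only, at the cost of a front-panel port per switch.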

Purple

Nexus 5000 management address

  I agree they didn't think quite a few things through with the Nexus line. Right, you can run the peer keepalive on other ports, but it seems like an expensive waste of money to use a 10 Gig fiber port to run the keepalive between the 5Ks. This 5K is all SFP ports. I guess we could have ordered 1 Gig SFPs and used those for the KA. So you use a small switch, and then you just SSH to the address the keepalive uses?

Hall of Fame Super Silver

Re: Nexus 5000 management address

glen.grant wrote:

...So you use a small switch you just then ssh to the address the keepalive uses ??? 

Yes, that's how I prefer to do it, especially when there isn't something like a lightly populated 100BaseTX card (in a 7K scenario) with free ports. Even using 1000BaseT transceivers in a 5K seems like a waste of valuable front-panel ports to me. I've not had any problems using the management port for both peer KA and management tasks.

If it were up to me, I would have designed in a dedicated port for peer keepalive (like the old PIX failover port) and kept the Ethernet management port for management only. I mean, an Ethernet ASIC can't cost Cisco more than $5. It could even be serial, with a 50-cent UART!

Purple

Re: Nexus 5000 management address

  Thanks for the info, Marvin. Seems like a bad design for managing the 5Ks. On the 7Ks I did use a separate VRF on a 10/100/1000 card that we have, for the KA. I've been unsuccessful at trying to add the VLAN into the vPC from the 7K to the 5K and using an SVI on the 5K. Ever heard of anyone doing this successfully?

Hall of Fame Super Silver

Nexus 5000 management address

You can manage it via an SVI; it just depends on having one up and reachable.
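For completeness, in-band management via an SVI on a 5K typically needs something like the following; the VLAN, address, and gateway here are examples:

```
! SVIs require this feature on the 5K
feature interface-vlan

vlan 100
  name MGMT

interface Vlan100
  ip address 192.0.2.10/24
  no shutdown

! Default route toward the 7K's SVI in that VLAN
ip route 0.0.0.0/0 192.0.2.1
```

This only works once the VLAN itself is forwarding (i.e., not suspended) across the vPC and peer-link, so the trunk configuration has to be consistent on both peers first.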

I'm not quite sure what's up in your original post without seeing the entire configuration. I've had to back off and retry a couple of times myself getting things up that way. It seems I have to re-learn it every time.

By the way, my preferred approach is based on the Data Center SBA, pages 14-15 here. And, in case you haven't seen it, I also  recommend the VPC Best Practices Design Guide.
