Nexus Design Question

Unanswered Question
Nov 13th, 2009

Here's the scenario:

We're working on a new DC design. Dual 6509VSS in core, dual Nexus 5k and 14 FEXs. The 2ks will be dual attached to the 5ks.
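For reference, dual-attaching a FEX to a pair of 5ks means configuring a vPC per FEX on each parent switch, roughly like the sketch below (the commands are standard NX-OS, but the FEX number, vPC number, and port numbers here are assumptions, not from the thread):

```
! On each Nexus 5000 (sketch only; adjust numbering to your design)
feature vpc
feature fex

! Fabric port-channel to one dual-homed FEX; the vPC number
! is consumed once per FEX, which is why the vPC limit matters
interface port-channel101
  switchport mode fex-fabric
  fex associate 101
  vpc 101

interface ethernet 1/1-2
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
```

With 14 dual-attached FEXs, that is 14 vPC numbers before counting any uplink vPC, which is the arithmetic behind the question above.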

Now, I read somewhere else that the 5k is limited to a maximum of 16 vPCs.

So the question comes up: does that mean the current design, with a vPC to the core VSS, the vPC peer link, and all 14 vPCs to the FEXs, is topping us out?

Also, all of the designs I've seen have only 2 5ks peering together. Can you have 3 or 4 5ks peer together?


a12288 Fri, 11/13/2009 - 12:56

I wouldn't recommend running vPC from the N5k to the VSS, simply because VSS doesn't understand vPC; the gain from vPC is small compared to the potential loop this could cause.

Also, I strongly recommend you read the N5k documentation word by word; the current release has a lot of limitations.

mmarvel Fri, 11/13/2009 - 13:20

I'm definitely going to read the doc. Which specific doc would you recommend, as there are quite a few?

What would you recommend for connectivity between the VSS and the NX5K?

a12288 Fri, 11/13/2009 - 12:57

BTW, I am planning to implement VSS on our distribution switches, and I'm looking for some recommendations and suggestions, thanks.

iyde Sat, 11/14/2009 - 08:04

Hi Leo.

The best suggestion is to read the documentation, as Cisco has made a fine step-by-step document on how to convert two stand-alone Cat6500 to one VSS system.

In regard to your first reply, I'd like to know why you do not recommend vPC on N5ks connected to a VSS setup. As I understand it, two N5ks running vPC look like one virtual switch, and connecting one switch to one switch means no Spanning Tree loop. Also, all links will be active, as we are talking about a point-to-point connection between the two switch sets. So please elaborate on your recommendation. Thanks a lot.
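The "one virtual switch" behavior described here comes from the vPC domain configuration on the two N5ks, along the lines of this sketch (real NX-OS commands; the IP addresses and port-channel numbers are assumptions for illustration):

```
! On each Nexus 5000 (peer-keepalive addresses mirrored on the peer)
feature vpc

vpc domain 1
  peer-keepalive destination 10.1.1.2 source 10.1.1.1

! Inter-N5k link carrying vPC control and failover traffic
interface port-channel10
  switchport mode trunk
  vpc peer-link

! Downstream vPC: the attached device port-channels to both
! N5ks but sees them as a single logical switch
interface port-channel20
  switchport mode trunk
  vpc 20
```

Because the downstream device bundles links to both N5ks into one port-channel, Spanning Tree sees a single logical link and blocks nothing, which is the no-loop, all-links-active argument made above.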

HTH, Ingolf

mmarvel Mon, 11/16/2009 - 09:01

Yeah, I've implemented VSS before. The document I was asking about was the specific N5k doc they were referring to. I'd also like them to elaborate on how they suggest connecting the 5ks to the VSS.

gnijs Sat, 11/28/2009 - 14:58

I have to make the same design: VSS + 2x N5000. Currently I don't plan to connect the N5000s with vPC upstream to the C6500 VSS (I do plan vPC downstream). I plan to connect each N5000 individually to the VSS chassis with a 2x 10G MEC; on the N5000 side this is just a local port-channel. Since the connection between the N5000s is only used for vPC and is not considered a real L2 link, there is no STP loop in this design.
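A minimal sketch of that uplink design, assuming two 10G ports per N5000 and one link to each VSS chassis (interface numbers invented for illustration):

```
! N5000 side: plain local port-channel, no vPC on the uplink
interface ethernet 1/17-18
  channel-group 30 mode active
interface port-channel30
  switchport mode trunk

! VSS side: Multichassis EtherChannel spanning both chassis
! (module 1/1/x lives in chassis 1, 2/1/x in chassis 2)
interface range TenGigabitEthernet1/1/1, TenGigabitEthernet2/1/1
 channel-group 30 mode active
interface Port-channel30
 switchport
 switchport mode trunk
```

The MEC terminates on both VSS chassis, so either chassis failing leaves the port-channel up, while the N5000 treats it as an ordinary LACP bundle.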

If the downstream devices alternate their active/passive links between N5-1 and N5-2, both N5s will carry traffic.

PS. I have more confidence in a simple port-channel. You don't want vPC problems (crashes, early-deployment bugs) on the uplinks to the VSS, basically isolating your data center or risking L2 loops.

PPS. One remark I still have: on the N5K, you don't want to connect the uplinks towards the VSS on adjacent ports in the same port group. If the port-group ASIC fails, you will be blackholing traffic in the data center, because the downstream interfaces will remain up. On C3750 switches this is solved by Link State Tracking, which is not supported on the N5K, and I really regret that. Cisco says: we don't need Link State Tracking on the N5K, as the N5K supports vPC to work around this. However, I feel this is not an excuse for not implementing Link State Tracking on the N5K. vPC is indeed useful if the upstream switches are L2-connected; however, if the upstreams are VSS switches, a simple port-channel with link state tracking would suffice and would converge faster.
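For readers unfamiliar with the C3750 feature mentioned here, link state tracking ties downstream ports to upstream ports so the downstream side goes down when the uplinks fail, along these lines (real IOS syntax; the interface numbers are assumptions):

```
! Catalyst 3750 example (not available on the N5K, as noted above)
link state track 1

! Uplink(s) being monitored
interface TenGigabitEthernet1/0/1
 link state group 1 upstream

! Server-facing port: forced down if all upstream
! interfaces in group 1 go down
interface GigabitEthernet1/0/2
 link state group 1 downstream
```

That forced link-down is what lets a dual-homed server fail over to its other NIC instead of sending traffic into a black hole.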



