Nexus data center design... collapsed core/agg/access... and VDCs...

Unanswered Question
Oct 4th, 2011

So I've been tasked with retiring a pair of 6509s that are in our DC core. I'm looking to get some advice and input on my design considerations, I'm a Nexus newbie... First off, these 6509s are the core routers for 4 primary functional areas: DMZ (external servers), SAN, Internal Servers and the corporate user network. Today it is functioning as collapsed core, distribution and access for the internal server network. It's a distribution/core for the SAN. And the DMZs are on their own switches off to the side of the firewall.

I'm looking to consolidate all of these functions into a single pair of Nexus chassis.

Question 1. Can I continue to have a collapsed core/distribution/access layer for my internal servers, residing in a VDC, let's call it 'corp-servers'? I'd have copper ports, F-series module fiber ports, and M-series module ports (for L3 routing) allocated to this VDC to facilitate that. Pros/cons?

The next part of this would be carving the Nexus up into the other functional VDCs. I was planning on 4 VDCs; let's call them:

corp-servers (the internal server network above)

dmz (external servers)

storage (SAN)

default (which would be the corporate user-net)

Question 2. In the above scenario, I'd be using the default VDC to pass all corporate user traffic into and out of the other VDCs. The only folks with access to this VDC would be the qualified network team members. We are a relatively small company (~1,500 employees); security is important, but I wouldn't classify us as high-security per se. Are there any other drawbacks to this?

Question 2b. If so... I wouldn't be against combining corp-servers and user-net into a single VDC, let's call it corp-internal. Are there any drawbacks here?

On to the storage VDC... I would have directly connected SAN switches, but I may also scatter some N2K FEXes around, and I'd like the flexibility for 1G and 10G uplinks.

Question 3. If my L3 boundary (default gateway for these SAN networks) exists on the N7Ks, what would be the optimal module to use to connect my SAN and FEX switches? Can I get away with the N7K-F132XP-15 (F-series), or would I need an M-series? I'm kind of confused on just how to implement these. I was thinking I would purchase a single 32-port 10G M-series module, allocate its ports to each VDC, and use that to route between VDCs, while all of my uplinks are configured on the F-series ports in their respective VDCs.

So in short, I would be carving up a single M-series module, allocating its ports to each VDC to facilitate routing to/from my different 'zones'. Attached is a rudimentary drawing of my plan... Note that all of the 7Ks in the picture represent a single pair.
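To make the port-carving idea concrete, here's a rough sketch of what I'm picturing from the default VDC (slot numbers and port ranges are placeholders, not a final layout):

```
! Hypothetical: slot 3 = the shared 32-port 10G M-series, slot 4 = F-series
vdc corp-servers
  allocate interface Ethernet3/1-8     ! M-series ports for L3 routing
  allocate interface Ethernet4/1-16    ! F-series ports for server uplinks
vdc storage
  allocate interface Ethernet3/9-12
  allocate interface Ethernet4/17-24
vdc dmz
  allocate interface Ethernet3/13-16
  allocate interface Ethernet4/25-32
```

(I understand ports on some modules have to be allocated in port groups, so the exact ranges would need to line up with the module's allocation granularity.)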

Thanks in advance!!!

Marwan ALshawi Tue, 10/04/2011 - 17:25

Well, it is a valid solution as long as these two devices can handle the L2/L3 traffic load, and all of the devices are multihomed to both N7Ks (vPC or non-vPC) for HA.

However, if you can have separate access switches and combine the dist/core in the N7K, I would go this path for more added control, vPC flexibility, and less complexity in the virtualized L2/L3 environment.

Mixing F and M modules is possible, but keep in mind that any traffic arriving on an F module that needs routing has to cross up to an M module for the L3 lookup and back to an F module, for example with inter-VLAN routing.

In a Nexus 7000 chassis with a combination of M1/M1-XL I/O modules and F1 I/O modules in the same VDC, the M1 modules provide a "proxy" routing function for the F1 ports.

The alternative to a mixed chassis is to isolate the M1/M1-XL and F1 modules in separate VDCs. This avoids the "mixed chassis" interactions, but requires port density in both VDCs for the interconnect.
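As a rough illustration of the separate-VDC option (interface numbers and VLANs here are made up, not from your design), the F1-only VDC stays pure L2 and trunks over physical links to the M1 VDC, where the SVIs/default gateways live:

```
! F1-only (L2) VDC side -- ports cabled externally to the M1 VDC
interface Ethernet4/1-2
  switchport mode trunk
  channel-group 10 mode active

! M1 (L3) VDC side
feature interface-vlan
interface Ethernet3/1-2
  switchport mode trunk
  channel-group 10 mode active
interface Vlan100
  ip address 10.1.100.1/24   ! example server-VLAN gateway
```

The trade-off is exactly that interconnect: those trunk ports are burned in both VDCs just to stitch L2 to L3.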

Hope this helps

royalle01 Tue, 10/04/2011 - 18:08

Are there drawbacks to having a mixed chassis whereby each VDC shares ports within the M mods to facilitate routing between the VDCs?

If I separate the access layer from dist/core, my access switches will be 3750s, so no vPCs...

Marwan ALshawi Tue, 10/04/2011 - 20:18

Mixed chassis is useful when the key requirement is high-density L2 with performance, but Layer 3 forwarding is also required.

If most of the traffic needs bridging through the aggregation layer but some inter-VLAN and/or outbound Layer 3 routing is required, a mixed chassis makes sense. If the vast majority of traffic needs inter-VLAN and/or outbound Layer 3 routing, it might make more sense to use pure M-series only.

See the attached examples/diagrams of traffic-flow situations that can happen in mixed mode within the same VDC:

- 2 passes via the fabric

- best case: one pass via the fabric

- 2 passes via the fabric

You might think about having each module type in its own separate VDC (more port density).

M and F modules have different capabilities. The three most important considerations:

- Forwarding: Layer 3 versus Layer 2 (unicast and multicast)

- Table sizes: MAC table, classification ACL entries

- FabricPath capability and interoperation
With Cisco Catalyst 3750s you can still get the benefit of using vPC up to the dist/aggregation layer on the N7K (recommended).
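For example, a vPC toward a 3750 looks like this on the N7K side (addresses and port numbers are just examples), while on the 3750 it is only a normal LACP EtherChannel:

```
! N7K (both vPC peers; mirror the config on the second N7K)
feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1
interface port-channel1
  switchport mode trunk
  vpc peer-link
interface port-channel20
  switchport mode trunk
  vpc 20
interface Ethernet4/1
  channel-group 20 mode active

! Catalyst 3750 side -- standard LACP EtherChannel split across both N7Ks
interface range GigabitEthernet1/0/49 - 50
  switchport trunk encapsulation dot1q
  switchport mode trunk
  channel-group 20 mode active
```

So the 3750 does not need to support vPC itself; it just sees one port-channel.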
Hope this helps; please rate the helpful posts.

Marwan ALshawi Wed, 10/05/2011 - 00:30

Let me know if I have answered your questions or if you need any more clarification.

royalle01 Wed, 10/05/2011 - 07:45

I really appreciate the detailed response, but I do need a bit more clarification. If I were to put the different module types in different VDCs, doesn't traffic from an L2 VDC (with strictly F-series) need to go up to the dist/core via an L2 trunk? Then all traffic in different VLANs traverses physical uplinks instead of using the backplane fabric for inter-VLAN traffic, so I'm not sure I understand the benefit of separating the L2/L3 modules. It seems there is more benefit in using a mixed chassis and having each VDC be responsible for both L2 and L3. Also, I would like to use VRF-lite in my DMZ to further segment that VDC and force all traffic up through the firewall as needed.

I'm attaching two pictures... The 1st is the mixed chassis idea which is what I think I want, the 2nd is dedicated modules per VDC. Let me know if I'm misunderstanding a concept here.

Note... In a mixed chassis, my server VDC would have about 15 SVIs, the storage VDC about 5 SVIs, and the core VDC just a couple of SVIs. And I would likely just do static routing between the VDC networks.
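For the static routing between VDCs, I'm picturing something like this (addresses and interface numbers made up), keeping in mind that VDCs only talk to each other over external cables between ports that each VDC owns:

```
! corp-servers VDC side of a routed point-to-point link to the storage VDC
interface Ethernet3/1
  description to storage VDC (external cable)
  no switchport
  ip address 10.0.0.1/30
ip route 10.20.0.0/16 10.0.0.2   ! storage networks via the storage VDC

! storage VDC side
interface Ethernet3/9
  no switchport
  ip address 10.0.0.2/30
ip route 0.0.0.0/0 10.0.0.1      ! everything else back via corp-servers
```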



Marwan ALshawi Wed, 10/05/2011 - 16:56

Yes, you're right: in some cases you will be going over the wire with one pass via the fabric, for example when going up to the core and out of the network, while for inter-VLAN routing you have 2 passes through the fabric plus the trunk link over the wire.

As I stated above:


Mixed chassis is useful when the key requirement is high-density L2 with performance, but Layer 3 forwarding is also required.

If most of the traffic needs bridging through the aggregation layer but some inter-VLAN and/or outbound Layer 3 routing is required, a mixed chassis makes sense. If the vast majority of traffic needs inter-VLAN and/or outbound Layer 3 routing, it might make more sense to use pure M-series.

If you go with the one-module-type-per-VDC approach, you can strictly specify on the N7K which module type is allowed in the VDC.

For example (this will limit the VDC to only F-series I/O modules):




limit-resource module-type ?

  f1     Enable F1 type modules in this vdc

  m1     Enable M1 type modules in this vdc

  m1-xl  Enable M1-XL type modules in this vdc

limit-resource module-type f1

This will cause all ports of unallowed types to be removed from this vdc. Continue? [yes]




Based on this you can make the call on which option is more suitable for your DC design (and always keep it simple).

Hope this helps, and thanks for the rating.



This Discussion

Posted October 4, 2011 at 12:57 PM
Replies:6 Avg. Rating:4
Views:2287 Votes:0
Categories: Switches
