
Nexus data center design... collapsed core/agg/access... and VDCs...

royalle01
Level 1

So I've been tasked with retiring a pair of 6509s that are in our DC core. I'm looking to get some advice and input on my design considerations, as I'm a Nexus newbie... First off, these 6509s are the core routers for 4 primary functional areas: DMZ (external servers), SAN, internal servers and the corporate user network. Today they function as a collapsed core, distribution and access layer for the internal server network, and as a distribution/core for the SAN. The DMZs are on their own switches off to the side of the firewall.

I'm looking to consolidate all of these functions into a single pair of Nexus chassis.

Question 1. Can I continue to have a collapsed core/distribution/access for my internal servers that would reside in a VDC, let's call it 'corp-servers'? So I'd have copper ports, F-series module fiber ports and M-series module ports (for L3 routing) allocated to this VDC to facilitate this. Pros/Cons?

The next part to this would be carving up the Nexus into the other functional VDCs... I was planning on 4 VDCs, let's call them:

corp-servers

dmz-servers

storage

default (which would be the corporate user-net)
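
For reference, this is roughly how I picture carving those out from the default VDC. The names come from the list above, but the interface ranges are just placeholders (and I believe ports on the 32-port M1 have to be handed out in groups of 4, so the slices would need to line up with its port groups):

vdc corp-servers
  ! F-series ports for server access/uplinks
  allocate interface Ethernet1/1-16
  ! slice of the M-series module for L3
  allocate interface Ethernet3/1-4

vdc dmz-servers
  allocate interface Ethernet1/17-24
  allocate interface Ethernet3/5-8

vdc storage
  allocate interface Ethernet2/1-16
  allocate interface Ethernet3/9-12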

Question 2. In the above scenario, I'd be using the default VDC to pass all corporate user traffic into and out of the other VDCs. The only folks with access to this VDC would be the qualified network team members. We are a relatively small company (~1500 employees); security is important, but I wouldn't classify us as high security per se. Are there any other drawbacks to this?

Question 2b. If so... I wouldn't be against combining the corp-servers and user-net into a single VDC, let's call it corp-internal. Are there any drawbacks here?

On to the storage VDC... I would have directly connected SANs but also may scatter some FEX N2Ks around and I'd like the flexibility for 1G and 10G uplinks.
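
The FEX attachment itself I'm assuming would be the usual fabric-port setup, assuming whatever parent module I land on actually supports FEX; the FEX number, VLAN and interfaces below are just placeholders:

feature fex

! fabric uplinks from the parent switch (storage VDC) down to the N2K
interface Ethernet2/1-2
  channel-group 101

interface port-channel101
  switchport
  switchport mode fex-fabric
  fex associate 101

! the N2K host ports then show up as Ethernet101/1/x
interface Ethernet101/1/1
  switchport access vlan 50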

Question 3. If my L3 boundary (default gateway for these SAN networks) exists on the N7Ks, what would be the optimal module to use to connect my SAN and FEX switches? Can I get away with the N7K-F132XP-15 (F-series) or would I need an M-series? I'm kind of confused on just how to implement these. I was thinking I would purchase a single 32-port 10G M-series module, allocate its ports to each VDC and use that to route between VDCs, while all of my uplinks are configured on the F-series ports in their respective VDCs.

So in short, I would be carving up a single M-series module, allocating its ports to each VDC to facilitate routing to/from my different 'zones'. Attached is a rudimentary drawing of my plan... Note that all of the 7Ks in the picture represent a single pair.
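
My understanding is that each VDC behaves like a completely separate switch, so the inter-VDC routing would ride over physically cabled links between the M-series ports allocated to each VDC, plus a handful of static routes. Something like this, with made-up interfaces and addresses:

! corp-servers VDC: routed point-to-point link cabled over to a port owned by the storage VDC
interface Ethernet3/1
  no switchport
  ip address 192.168.255.1/30
  no shutdown

! storage networks sit behind the storage VDC
ip route 10.50.0.0/16 192.168.255.2

! storage VDC: the other end of the same cable
interface Ethernet3/9
  no switchport
  ip address 192.168.255.2/30
  no shutdown

! server networks sit behind the corp-servers VDC
ip route 10.20.0.0/16 192.168.255.1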

Thanks in advance!!!

6 Replies

Marwan ALshawi
VIP Alumni

Well, it is a valid solution as long as these two devices can handle the amount of L2/L3 traffic,

and as long as all of the devices are multi-homed to both N7Ks (vPC or non-vPC) for HA.

However, if you can keep separate access switches and combine only the dist/core in the N7K, I would go this path for added control, vPC flexibility and less complexity in terms of the virtualized L2/L3 environment.

Mixing F and M modules is possible, but keep in mind that any traffic arriving on an F module that needs routing has to cross up to the M module for the L3 lookup and back to the F module, for inter-VLAN routing for example.

A Nexus 7000 chassis can run a combination of M1/M1-XL I/O modules and F1 I/O modules in the same VDC; in that case the M1 modules provide a "proxy" routing function for the F1 ports.

The alternative to a mixed chassis is to isolate the M1/M1-XL and F1 modules in separate VDCs. This avoids the "mixed chassis" interactions, but requires port density in both VDCs for the interconnect.

Hope this helps.

Are there drawbacks to having a mixed chassis whereby each VDC shares ports within the M modules to facilitate routing between the VDCs?

If I separate the access layer from dist/core, my access switches will be 3750s, so no vPC...

A mixed chassis is useful when the key requirement is high-density L2 with performance, but Layer 3 forwarding is also required.

If most of the traffic needs bridging through the aggregation layer, but some inter-VLAN and/or outbound Layer 3 routing is required, a mixed chassis makes sense. If the vast majority of traffic needs inter-VLAN and/or outbound Layer 3 routing, it might make more sense to use pure M-series only.

See the below examples/diagrams of situations that can happen for traffic flow in mixed mode within the same VDC:

- 2 passes via the fabric

- best case: one pass via the fabric

- 2 passes via the fabric

Or you might think about having each module type in its own separate VDC (more port density per VDC).

M and F modules have different capabilities. The three most important considerations:

- Forwarding: Layer 3 versus Layer 2 (unicast and multicast)

- Table sizes: MAC table, classification ACL entries

- FabricPath capability and interoperation
For the Cisco Catalyst 3750s you can still get the benefit of vPC up to the dist/aggregation layer on the N7Ks (recommended).
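
On the N7K side that would look roughly like this (domain ID, port numbers and keepalive addresses are only examples), and the 3750 stack just runs a normal LACP EtherChannel across its uplinks:

feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! vPC peer-link between the two N7Ks
interface port-channel1
  switchport
  switchport mode trunk
  vpc peer-link

! downlink port-channel to the 3750 stack, with a member on each N7K
interface port-channel20
  switchport
  switchport mode trunk
  vpc 20

interface Ethernet1/20
  switchport
  switchport mode trunk
  channel-group 20 mode active
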
Hope this helps. Please rate the helpful posts.

Let me know if I have answered your questions or if you need any more clarification.

I really appreciate the detailed response... But I do need a bit more clarification... If I were to put the different modules in different VDCs, then doesn't traffic from an L2 VDC (with strictly F-series modules) need to go up to the dist/core via an L2 trunk? Now all traffic between VLANs is traversing physical uplinks instead of using the backplane fabric for inter-VLAN traffic. So I'm not sure I understand the benefit of separating the L2/L3 modules. It seems there is more benefit in using a mixed chassis and having each VDC be responsible for both L2 and L3. Also, I would like to use VRF-lite in my DMZ to further segment that VDC and force all traffic up through the firewall as needed.
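
For the VRF-lite piece I'm picturing one VRF per DMZ segment, with the default route in each VRF pointing at the firewall so everything is forced through it; the VRF name, VLAN and addresses here are made up:

feature interface-vlan

vrf context dmz-web
  ! firewall interface facing this segment
  ip route 0.0.0.0/0 172.16.100.1

interface Vlan100
  vrf member dmz-web
  ip address 172.16.100.2/24
  no shutdown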

I'm attaching two pictures... The 1st is the mixed chassis idea which is what I think I want, the 2nd is dedicated modules per VDC. Let me know if I'm misunderstanding a concept here.

Note... In a mixed chassis, my server VDC would have about 15 SVIs, the storage VDC about 5 SVIs, and the core VDC just a couple of SVIs. And I would likely just do static routing between the VDC networks.
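
For what it's worth, each of those SVIs would just be the usual HSRP pair across the two 7Ks, along the lines of (VLAN and addresses are placeholders):

feature interface-vlan
feature hsrp

interface Vlan20
  description corp-servers VLAN 20 gateway
  no shutdown
  ip address 10.20.0.2/24
  hsrp 20
    preempt
    ! the other 7K keeps the default priority
    priority 110
    ! virtual gateway address the servers point at
    ip 10.20.0.1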

[Attachment 1: MIXED CHASSIS]

[Attachment 2: NO MIXED CHASSIS]

Yes, you're right, in some cases you will be going over the wire with one pass via the fabric, for example when traffic goes up to the core and out of the network,

while for inter-VLAN routing you have 2 passes via the fabric plus the wire (trunk link).

As I stated above:

 

A mixed chassis is useful when the key requirement is high-density L2 with performance, but Layer 3 forwarding is also required.

If most of the traffic needs bridging through the aggregation layer, but some inter-VLAN and/or outbound Layer 3 routing is required, a mixed chassis makes sense. If the vast majority of traffic needs inter-VLAN and/or outbound Layer 3 routing, it might make more sense to use pure M-series only.

If you go with the single-module-type VDC model, then you can strictly specify on the N7K which module type is allowed in that VDC.

For example (this will limit the VDC to F-series I/O modules only):

n7018(config)# vdc n7018
n7018(config-vdc)# limit-resource module-type ?
  f1     Enable F1 type modules in this vdc
  m1     Enable M1 type modules in this vdc
  m1-xl  Enable M1-XL type modules in this vdc
n7018(config-vdc)# limit-resource module-type f1
This will cause all ports of unallowed types to be removed from this vdc. Continue? [yes] yes
n7018(config-vdc)# exit
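
After the VDCs are carved up you can check what landed where with:

show vdc
show vdc membership
show vdc resource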

Based on this you can make the call on which option is more suitable for your DC design (and always keep it simple).

Hope this helps, and thanks for the rating.
