
Campus design suggestions

exonetinf1nity
Level 1

Greetings, I've recently been tasked with putting together a new switched infrastructure for approximately 1500 users and would be grateful for some clarification and/or advice on what I've come up with so far.

I'm looking to base the solution around a two-layer design with two Cat 6500s at the core and multiple 3750 switch stacks for access connections, each stack with dual uplinks, one to each of the 6500 core switches, as per the diagram. The primary link would be a 10 Gb/s uplink and the second a 1 Gb/s uplink for redundancy.

Reading several of the SRND guides with a solution such as this in mind has led me to a number of questions which I would be grateful to have answered, bearing in mind that we would like to use the standard image on the access switches rather than the enhanced image for the sake of cost.

My initial thought was to use routed point-to-point interfaces for the uplinks to the core switches, in combination with RIPv2 on both the core and access switches.

Each VLAN on the access switches would be assigned an IP address on its own subnet, which in turn would be advertised into RIP. I don't believe sub-second convergence would be a concern in this scenario. Each VLAN would be configured with a helper address pointing to a centralised DHCP server for the network.
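
For illustration only, that routed-access idea might look roughly like this on an access stack (the interface numbers, subnets, and DHCP server address are placeholders, not the real design):

eg:

interface TenGigabitEthernet1/0/1
 description routed point-to-point uplink to core 6500 A
 no switchport
 ip address 10.0.0.2 255.255.255.252
!
interface Vlan100
 description user data VLAN
 ip address 172.16.1.1 255.255.255.0
 ip helper-address 10.10.10.10
!
router rip
 version 2
 network 10.0.0.0
 network 172.16.0.0
 no auto-summary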

The core switches would be configured in VTP server mode and the access switches in VTP transparent mode.

The access switches would run RSTP with the two 6500s acting as the primary and secondary root bridges.
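
As a rough sketch, the VTP and spanning-tree roles described above would be along these lines (the VTP domain name and VLAN numbers are placeholders):

eg:

! on both core 6500s
vtp domain CAMPUS
vtp mode server
spanning-tree mode rapid-pvst
! then on core 6500 A
spanning-tree vlan 100,200 root primary
! and on core 6500 B
spanning-tree vlan 100,200 root secondary
!
! on the access stacks
vtp domain CAMPUS
vtp mode transparent
spanning-tree mode rapid-pvst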

Any thoughts or suggestions would be appreciated. I'm yet to get into an IP addressing scheme, but a physical layout should help.

Regards

13 Replies

Joseph W. Doherty
Hall of Fame

From what you describe, and from your diagram, it's unclear exactly why you plan to mix running routing on the edge stacks with p2p uplinks yet also run RSTP between the edge stacks and the core switches. Normally, I would expect one or the other.

Your diagram only shows two unique VLANs per edge stack, and seeing that one's voice and the other data, and that you have a primary 10 gig path and a backup 1 gig path, it's also unclear what benefit you hope to obtain from routing at the edge.

With what you show in your diagram, you could just run the edge switches at L2 and perform all routing on the core 6500s. (Also, I don't recall what's included in the "base" IOS for the 6500s, but with the 3750s excluded from routing, you might have something better than RIP to use.)

I assume the 3750 providing the 10 gig link is the -E model? If so, and assuming the rest of the stack is 3750(G?) models, are you aware of the issues with mixing the models?

PS:

BTW: There's a new stackable L2 switch, the 2975. It won't be able to do 10 gig, but gig EtherChannel is likely supported.

Thank you for your comments. I haven't a lot of experience deploying the above on this scale, apart from the theory.

If I were to run the edge switches at L2, what would be the best method of routing traffic between different networks at the core, considering trunks would be used instead of routed interfaces?

I'm just trying to get my head around how the 6500s would know which networks sit on the access layer if we trunk to them from the core switches, or would they just rely on ARP requests?

Regards

Well, if we run the edge at L2, the stack VLANs will be known not only to the edge stack but to each 6500. Since you've designed a primary core (the one with all the 10 gig connections), you would make it the gateway router for each VLAN. So any off-local-subnet traffic would go to that 6500, and since it has connections to all the other VLANs, it would route to the correct VLAN without the need of any defined static routes or a dynamic routing protocol. In fact, as long as both 6500s span all VLANs, technically you wouldn't need to route between them or use a dynamic routing protocol at all.
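
As a minimal sketch of that L2-edge approach (VLAN numbers, interface numbers, and addresses are just examples):

eg:

! edge 3750 stack - pure L2 trunk uplink
interface TenGigabitEthernet1/0/1
 description trunk to primary 6500
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! primary 6500 - SVIs route between the VLANs
interface Vlan100
 description data VLAN, edge stack 1
 ip address 172.16.1.1 255.255.255.0
!
interface Vlan200
 description voice VLAN, edge stack 1
 ip address 172.16.2.1 255.255.255.0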

Much appreciated. Would the scenario still work in the event that the 10 Gb link fails? One would assume that traffic would still be able to get to its default gateway via the second 6500, or would this be where FHRPs come into play?

Regards

If a 10 gig link fails, spanning tree should unblock the gig path and the edge stack would still be able to communicate with the primary 6500 gateway. The secondary 6500 would only function as an L2 switch.

If the primary 6500 fails, the secondary would take over as gateway (assuming it's been running HSRP with the primary - a point I didn't explicitly mention).

PS:

Depending on just how large you intend to scale, and as long as the secondary core is really just there for backup and doesn't need to provide the same level of performance (which seems to be the case given the gig links to the edge stacks), you could even consider a lower-performing secondary core: a stack of 3750G-12S might work (with or without a 3750-E to provide 10 gig to the 6500), or a 4500 with a Sup V (also with or without 10 gig to the 6500), or a 6500 with a Sup32 (once again with or without 10 gig to the 6500).

If both 6500s are of the same configuration, then instead of terminating all the 10G links on one 6500 you could terminate 101, 102, and 104 on one 6500 and 100, 103, 105, and 106 on the other.

Otherwise the one with all the 10G links will be working too hard and the other will be sitting almost idle.

Yes, you would define an FHRP, but for that you need a connection from each edge switch stack to both 6500s.

Instead of one 10G uplink from an edge stack to a single 6500, you could plan two uplinks from each edge stack, one to each 6500. I think this will give you a more redundant design, as far as I know.

"Yes, you would define an FHRP, but for that you need a connection from each edge switch stack to both 6500s."

You already have that, so please ignore that part.

Thank you for all the replies, I've got a much clearer picture now of how it is all likely to fit together.

Would it also make sense to configure each VLAN on the two 6500s with an IP address as a point to route to from the edge switches?

eg:

vlan 100 on edge switch - 172.16.1.3 /24

vlan 100 on 6500 A - 172.16.1.1 /24

vlan 100 on 6500 B - 172.16.1.2 /24

Regards

For moving traffic, you define the VLAN 100 FHRP on both core L3 switches.

eg:

vlan 100 on edge switch - no addr

vlan 100 on 6500 A - 172.16.1.1 /24 (FHRP)

vlan 100 on 6500 B - 172.16.1.1 /24 (FHRP)

You will also want device addresses. Assuming you use a production network's addresses (which isn't best practice if the address is being used for management), you might have something like:

eg:

vlan 100 on edge switch - 172.16.1.4 /24

vlan 100 on 6500 A - 172.16.1.2 /24

vlan 100 on 6500 B - 172.16.1.3 /24
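
A minimal configuration sketch of that addressing, assuming HSRP as the FHRP (the group number and priorities are arbitrary examples):

eg:

! 6500 A - active gateway for VLAN 100
interface Vlan100
 ip address 172.16.1.2 255.255.255.0
 standby 1 ip 172.16.1.1
 standby 1 priority 110
 standby 1 preempt
!
! 6500 B - standby gateway for VLAN 100
interface Vlan100
 ip address 172.16.1.3 255.255.255.0
 standby 1 ip 172.16.1.1
 standby 1 priority 90
 standby 1 preempt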

Thank you once again. Yes, there will be a dedicated management VLAN across all switches for that purpose, on a separate network addressing scheme.

Regards

"If both 6500 are of same configuration instead of all terminating all 10G on one 6500 you can terminate 101,102,104 on one 6500 and 100, 103,105,106 on other 6500.

Otherwise the one with all the 10G links will be working too hard and the other will be sitting almost idle."

Correct, however if we do what you suggest, consider what happens to traffic between alternate VLANs, e.g. 100 and 101. Instead of such traffic flowing across the fabric of a single 6500, it now jumps between the 6500s, where the dual 10 gig link between them will offer much less bandwidth than the Sup720 fabric. So one must decide which is more important: distributing the packet-forwarding load or providing maximum bandwidth between core-connected networks.

"one must decide which is more important distributing the packet forwarding load or providing maximum bandwidth between core connected networks"

I completely agree with you, it depends on what is more important to you. I gave an idea; it doesn't mean he has to implement it. Thanks for showing me the other side of it.

Now for some network discussion:

1. Will 1 Gb of bandwidth be sufficient in place of the 10 Gb if the 10G link fails or there is a problem with the main switch? For that main switch, you will know better what kind of hardware redundancy is required.

2. A 6503 takes 2 x 10G, a 6503-E takes 8 x 10G, and a 6506 takes 20. So for 7 x 10G plus 2 x 10G between the 6500s he needs at least one 6506 and one 6503. What if he takes two 6503-Es and provides 40G between the two 6500s? I think 40G will be good for inter-VLAN connectivity, which is nearly 60% of total traffic. I'll leave the price comparison for both designs to you.

These are just two points; there could be more where either the existing design or the other comes out better.

Network planning is not so easy; I suppose that's why Cisco has introduced the CCDE.

"I completely agree with you it depends on what is more important to you, i gave an idea it doesnot mean he has to implement it. Thanks for showing me the other way of the side."

Agreed!

"1. Will 1 Gb bandwidth will be sufficient for 10Gb bandwidth if the 10G link fails or problem with main switch? For that main switch you will know better what kind of hardware redundacny is required. "

Yup, and if you go dual 10 gig from the edge, one to each core, other questions arise. Do you want to run the two 10 gig from a single 3750-E, or do you place two 3750-Es in the stack, or do you still run the two 10 gig from one 3750-E but with gig or gig EtherChannel for backup from a 3750 in case that edge switch fails?
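
For the gig EtherChannel backup option, a rough sketch on a 3750G member of the edge stack might be (interface numbers are just examples; LACP shown, though PAgP or a static channel would also work):

eg:

interface range GigabitEthernet2/0/49 - 52
 description backup uplink members to the other core
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 description backup trunk EtherChannel
 switchport trunk encapsulation dot1q
 switchport mode trunk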

"2. 6503 takes 2 10G, 6503E takes 8 10G , 6506 takes 20. So for 7 10G + 2 10G between 6500 he needs atleast one 6506 and one 6503. What if he takes 2 6503E and provide 40G between 2 6500. I think 40G will be good for inter vlan connectivity nearly 60% of total traffic. Leave the price comparison for both design for you."

Of course, there are also the 8- and 16-port 6500 10 gig boards, so you might be able to cram much more into a smaller chassis at the risk of more slot oversubscription. Also, assuming you don't need some of the service module support that can be had for the 6500s, for just a high-performance 10 gig core a 4900M can provide 16 10 gig ports with better performance than a 6500. It can also have as many as 24 10 gig ports, and can support gig copper modules or twin-gig fiber. If we do stay with twin 6500s, there's also VSS to consider.

There are often many options when designing. You need to decide what features are important relative to what they're going to cost.

PS:

About two years ago, I was involved in a campus design upgrade. The requirement was gig to the desktop with supporting bandwidth beyond, but without spending too much. 10 gig was deemed too expensive vs. gig EtherChannel; likewise, chassis in the closet were deemed too expensive. So we chose stacks of 3750Gs (7 or more high, not uncommon) with up to dual 8 gig EtherChannel to dual 6500 distribution. Part of this design was implemented.

Later, the 3750-E came out. Besides the technical merits of 10 gig vs. 8 gig EtherChannel (not just 10 vs. 8), we were surprised to discover 10 gig was less expensive. It's not just the "port" cost, but the cost of the fiber plugs. Newer implementations have used top and bottom 3750-Es, each with 10 gig to the 6500 distribution, and 3750Gs for the rest of the stack (except for server stacks, which are pure 3750-Es).

Just wanted you to know there's a working implementation similar to what you're designing.
