Routed Core Vs. Switched Core

todd.martin-02
Level 1

My data center is upgrading to 10G on the server farm access switches. We currently have a 1G switched backbone topology.

We have had lots of discussions and have been advised that with today's modern features of the Sup720 etc., switched can be as fast as routed, and that the benefits of routing are minimal.

If we do go to a routed core it would involve a complete re-design of the architecture, so that is a major drawback. However, we need to carefully weigh the benefits of a routed core with respect to convergence and high availability.

We have a tough decision to make and would like to get some feedback from the masses. Please share your knowledge and experience with each option.

thanks,

T

5 Replies

Richard Burts
Hall of Fame

Todd

There are good designs with both routed core and switched core. I am sure that you can come up with a good network implementation whichever way you decide.

As you indicate, there is sometimes a perception that performance might be better with a layer 2 switched core. That perception is grounded in history, when there were real performance differences; I believe that with most of today's systems it is no longer true. Building the routing table is a control plane function that is done in the background and does not impact the forwarding decision. In many of today's systems, forwarding can be based on layer 2 information or on layer 3 information with equal efficiency.
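
As a rough illustration of that point (the prefix below is just a placeholder), on a Sup720 you can see that the routes built by the control plane end up in CEF and in the hardware FIB, separate from the forwarding path itself:

show ip cef summary
show ip cef 10.1.1.0 255.255.255.0
show mls cef

The first two commands show the software FIB that CEF derives from the routing table; show mls cef shows the entries programmed into the PFC hardware that actually makes the per-packet forwarding decision.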

From my perspective, the advantages of better redundancy, of smaller broadcast domains, and of minimizing the effects of spanning tree on convergence lead me to prefer the routed core.

I sympathize that it is not an easy decision and that choosing a routed core may drive a redesign of the network, which will not be easy. But depending on how long the network has had its present design, and how much the network has grown during that time, having to do a new design may, in fact, produce a better network.

HTH

Rick

There really isn't much of an advantage anymore. On devices like the Sup720, most packets are forwarded in hardware using Cisco's CEF switching, so technically they are switched anyway.

Jon Marshall
Hall of Fame

Todd, we went through the same process. We have 2 core switches with 4 server access layer switches (all 6509s). All access switches are connected to each core switch. There are many implications, but these were the main ones for us:

1) Layer 3 links between access and core mean 2 equal-cost paths (we use EIGRP). If you lose a link you see no packet loss, as packets are immediately sent across the other link (see the sketch after this list).

2) Following on from 1, there is no spanning tree to worry about.
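
A minimal sketch of what one access 6509's routed uplinks and the EIGRP piece might look like (interface names, addresses and the AS number are placeholders for illustration only):

interface TenGigabitEthernet1/1
 description Routed uplink to core-1
 no switchport
 ip address 10.0.0.1 255.255.255.252
!
interface TenGigabitEthernet1/2
 description Routed uplink to core-2
 no switchport
 ip address 10.0.0.5 255.255.255.252
!
router eigrp 100
 network 10.0.0.0 0.0.255.255
 no auto-summary

With both uplinks at the same metric, EIGRP installs two equal-cost paths for each destination, so losing one uplink simply removes one next hop and traffic keeps flowing on the other.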

However, there were downsides:

1) With layer 3 uplinks you can't extend VLANs across access switches. You can pair up switches, but if you have more than 2 access switches and the same VLAN extends across all of them, you will need to re-address servers etc.

2) Service modules. We deploy the Firewall Services Module (FWSM) in our core switches, and if you want to firewall your server VLANs you really want layer 2 links between the FWSM and the server VLANs.

3) Spanning tree - you will not get spanning tree to fail over as quickly as routed links, but with Rapid PVST+ you can get down to approximately 5 seconds (see the example below). We found this acceptable.
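
For the layer 2 option, moving to Rapid PVST+ is essentially one command per switch, plus deliberate root placement on the core pair (the VLAN numbers below are only an example):

spanning-tree mode rapid-pvst
!
! on core-1
spanning-tree vlan 10,20,30 root primary
!
! on core-2
spanning-tree vlan 10,20,30 root secondary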

These were the main considerations for us, and we eventually went with layer 2, but then we do not need sub-5-second failover times.

If convergence is the main consideration and you are not deploying the FWSM, then maybe layer 3 is the way to go. If you use 6500s in the access layer you can always deploy the FWSM there if needed.

Hope this helps

We are about to cut over to a core of four 6509s and are currently going through the same question.

My thought on the design was to run two physical links between each switch: one 802.1Q trunk on a 10 Gig link and a 1 Gig routed link, using a common subnet to do all of our inter-VLAN routing through EIGRP. That way actual user traffic is switched over the 10 Gig links between all of the VLANs, and all of our routing ARPs and messaging (convergence, routing table exchanges, etc.) go over the 1 Gig routed link. This would allow us to separate our layer 2 and layer 3 connectivity onto two different interfaces, allowing for easier troubleshooting. Any thoughts on this?
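
A rough sketch of what that pair of links could look like on one of the 6509s (the VLANs, addressing and EIGRP AS number are made up for illustration):

interface TenGigabitEthernet1/1
 description 802.1q trunk carrying the user/server VLANs
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
!
interface GigabitEthernet2/1
 description Routed link for EIGRP peering and routing updates
 no switchport
 ip address 10.255.0.1 255.255.255.252
!
router eigrp 100
 network 10.255.0.0 0.0.0.3

One thing to keep in mind with a design like this is that EIGRP will use the 1 Gig routed link for forwarding as well as for peering, so some metric tuning may be needed to make sure data traffic stays on the 10 Gig path.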

Given the data center scenarios mentioned here, have any of you thought about load balancing via Content Switching Modules (CSMs)? If any of your servers are web servers, these provide some nice functionality.

The approach we took at my former site (which we were happy with) was to keep routing functionality in the core switches (8 out of about 250 switches) and switch only at edge and data center switches. In a large enterprise (>15,000 users on campus), simplicity was our mantra. Only add complexity where it is needed - not just for small (arguable) incremental value.
