Replacing: 3560 switch stack with 2 6509-E

Unanswered Question

Existing environment:

The primary site has a data center (DC) 100 feet away, with 12 fiber strands running between the two. The DC presently has a 3560 switch stack; the primary site has two 6509 cores, each with six fiber connections dedicated to the DC. The primary site and the DC are currently connected with a fiber port-channel configuration. The majority of the equipment at the DC is Windows servers. All data, one phone, no video.
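For reference, the port-channel uplink described above could look roughly like this (the interface numbers, channel-group ID, and channel mode are assumptions, not taken from the actual configs):

```
! On each 6509 core: the fiber ports toward the DC bundled into one EtherChannel
interface range GigabitEthernet1/1 - 6
 switchport
 switchport mode trunk
 channel-group 1 mode on        ! or "mode active" if LACP is negotiated
!
interface Port-channel1
 switchport
 switchport mode trunk
```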


Remove the 3560 switch stack and deploy two 6509s (one with dual Sup32s, one with dual Sup720s and dual ACE10-6500-K9 modules), keeping the existing subnets so we don't have to re-IP all the servers. For redundancy, we purchased dual power supplies and dual supervisors. We'd like a configuration on the DC switches (the new 6509s) where we can take an entire switch failure and stay up on the other, load-balancing where possible.

I'm pursuing any insight on configuration/design recommendations and/or feedback for this task. Let me know if you need any clarification.


rahurao Fri, 04/30/2010 - 01:43


I believe the kind of scenario you describe can be achieved with HSRP.

HSRP allows one router to automatically assume the function of a second router if that router fails. It is particularly useful when users on one subnet require continuous access to resources in the network.
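As a rough illustration, an HSRP pair on the two new 6509s for one server VLAN might look like this (the VLAN number and addresses are placeholders):

```
! 6509-A (intended active gateway)
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
! 6509-B (standby gateway)
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 100
 standby 10 preempt
```

The servers keep 10.1.10.1 as their default gateway; if the active chassis fails, the standby takes over the virtual IP.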

You can use the following case studies for a better understanding:

Using HSRP for Fault-Tolerant IP Routing:

Hot Standby Router Protocol (HSRP): Frequently Asked Questions

I hope this helps!

Amit Singh Fri, 04/30/2010 - 02:12

Well, in addition to Rahul's post, we need a few more details to suggest a better design option. Please let us know the following:

1. Is the current topology between the two DCs an L2 topology or an L3 topology?

2. If it is currently L2 only, do you have any plans to consider an L3 topology? Keep in mind that with an L3 topology you get better convergence, load balancing, and an L2-loop-free network.

3. As per the post, I believe you have all the servers connected to the 3560s. Is it a 3560 stack, or are you using switch clustering to achieve stack-like functionality? Or is it a 3750 stack that you are referring to?

4. How do you plan to connect the two chassis that replace the existing fixed-configuration switches? Will you connect all the servers to the 6509 w/Sup720 and ACE module, so that SLB is done close to the server access layer, and then switch the traffic to the 6509 w/Sup32 acting as the core/aggregation layer for the DC?

5. Will you connect all the servers to both 6509 chassis?

6. The ideal design would be to use the Cat6509 w/Sup32 as the aggregation layer connecting the two DCs, with the Sup720/ACE chassis load-balancing at the access layer. You can also do it the other way around: put the Sup720 w/ACE module at the core/aggregation layer and connect the servers directly to it, use the Cat6509 w/Sup32 at the server access layer, and run trunk uplinks to the core 6509 with SLB configured at the core services layer.

7. The designs in points 5 and 6 depend on the hardware you presently have in both chassis. Have a look at the proposed/existing hardware and then design your topology.

8. If I were you, I would opt for an L3 topology between the two DCs for better load balancing, faster convergence, and a network that is easier to deploy and troubleshoot. Unless it is really necessary to extend L2 between the two DCs, try an L3 design.
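A minimal sketch of the L3 option in point 8, assuming routed /30 uplinks and OSPF (interface names and addresses are hypothetical):

```
! DC 6509: routed point-to-point uplinks to the two core 6509s
interface GigabitEthernet1/1
 description Uplink to core-6509-A
 no switchport
 ip address 10.0.0.1 255.255.255.252
!
interface GigabitEthernet1/2
 description Uplink to core-6509-B
 no switchport
 ip address 10.0.0.5 255.255.255.252
!
router ospf 1
 network 10.0.0.0 0.0.0.255 area 0
```

With equal-cost routes over both uplinks, CEF load-shares the traffic across them, and convergence on a link failure is driven by routing rather than spanning tree.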


-amit singh

Amit Singh Fri, 04/30/2010 - 02:20

Apologies, I thought you had two 6509s w/Sup720 and two 6509s w/Sup32. You can also use the topology Rahul pointed out, with HSRP over either an L2 or L3 topology.

rahurao Fri, 04/30/2010 - 02:25

Hi Amit,

Thanks for letting me know that I was thinking in the right direction!

As far as the 3560 switches are concerned, I just wanted to let you know that they are stackable as well, like the 3750 switches.

The customer can connect the 6500 to the servers anytime, as the Sup32 will provide much faster processing than the 3560.

That really will not be a concern once we take the design into consideration.

And lastly, he would have to use L3, since the servers communicate through IPs, and L2 alone cannot provide gateway redundancy if one switch is down.


Rahul K Rao

Amit Singh Fri, 04/30/2010 - 05:57

Hi Rahul,

There are different ways to design the solution and your idea fits one of them.

Could you please let us know how you would stack 3560s together to behave like a stack of 3750s?

A Cat6500 with Sup32 provides roughly the same performance level as a 3560, with a 32 Gbps switch fabric; depending on the switch model, a 3560 might even outperform the Sup32 on the PPS numbers. The main advantage of the Cat6500 solution with Sup32 is modularity: a large number of slots and the ability to mix and match line cards for better ROI.

As I said, there are a number of ways to design the solution, and if I were him, I would probably opt for L3.

Depending on the servers, one possible design would be to cross-connect the servers to both Cat6500 chassis and use the Cat6500 w/Sup720 as the primary gateway and the other as the secondary gateway. In case of a failure, either chassis can switch the traffic.

If the server port count is high, another design could be to use the Cat6500 w/Sup32 as an access-layer switch for the lower-performance servers and the Cat6500 w/Sup720 for the high-performance servers. Trunk the two chassis together and use the Sup720 chassis as the core/aggregation layer. Run two L3 links to the two separate chassis in the primary DC and let routing take care of the load balancing and failover. Use the Cat6500 w/Sup720 chassis as the gateway for the secondary DC and do inter-VLAN routing and ACE SLB on that chassis.
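Since SLB on the ACE module comes up in this design, here is a very rough sketch of a basic ACE server farm (all names and addresses are placeholders, and the VIP/policy plumbing is trimmed to the essentials):

```
! Real servers behind the VIP
rserver host SRV1
  ip address 10.1.10.11
  inservice
rserver host SRV2
  ip address 10.1.10.12
  inservice
!
serverfarm host WEB-FARM
  rserver SRV1
  rserver SRV2
  inservice
!
! Match client traffic to the VIP and balance it across the farm
class-map match-all VIP-HTTP
  2 match virtual-address 10.1.10.100 tcp eq www
!
policy-map type loadbalance first-match LB-POLICY
  class class-default
    serverfarm WEB-FARM
```

The load-balance policy would still need to be tied to the VIP class in a multi-match policy and applied to the client-side VLAN interface.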


rahurao Fri, 04/30/2010 - 07:10

Hi Amit,

My bad for calling the 3560 switches' clustering technology a stack.

A switch cluster is a set of up to 16 connected, cluster-capable Catalyst switches that are managed as a single entity. The switches in the cluster use switch clustering technology so that you can configure and troubleshoot a group of different Catalyst desktop switch platforms through a single IP address.

Clustering Switches:
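To illustrate, a cluster is formed on the command switch roughly like this (the cluster name and member MAC address are placeholders):

```
! On the command switch
cluster enable DC-CLUSTER
cluster member 1 mac-address 0002.4b29.2e00
```

All member switches are then managed through the command switch's single IP address.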

