The primary site has a data center 100 feet away, with 12 fiber strands running between the two. The DC currently has a 3560 switch stack; the primary site has two 6509 cores, each with six of those fibers dedicated to the DC. The primary site and the DC are currently connected with a port-channeled fiber configuration. The majority of the equipment at the DC is Windows servers. All data, one phone, no video.
The plan is to remove the 3560 switch stack and deploy two 6509s (one with dual Sup32s, one with dual Sup720s and dual ACE10-6500-K9 modules), keeping the existing subnets so we don't have to re-IP all the servers. For redundancy, we purchased dual power supplies and dual supervisors. We'd like a configuration on the new DC 6509s where we can take an entire switch failure and stay up using the other, load-balancing where possible.
I'm looking for any configuration/design recommendations and/or feedback on this task. Let me know if you need any clarification.
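As a starting point, a minimal sketch of the kind of port-channeled fiber trunk already in place between a core 6509 and the DC. Interface numbers, the channel-group number, and the VLAN list are assumptions for illustration only; LACP (`mode active`) is used here, though the existing bundle may be statically configured (`mode on`):

```
! Assumed example: 6-port fiber EtherChannel trunk toward the DC
interface Port-channel1
 description Trunk to DC switch
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
interface range GigabitEthernet1/1 - 6
 description Fiber uplinks to DC (bundled)
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
```

With the bundle in place, a single fiber failure only reduces bandwidth rather than dropping the link, which is the same principle the dual-chassis design extends to a whole-switch failure.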
I believe the scenario you describe can be achieved with HSRP.
HSRP allows one router to automatically assume the function of another router if that router fails. HSRP is particularly useful when the users on one subnet require continuous access to resources in the network.
You can use the following case study for a better understanding:
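A minimal HSRP sketch for one server VLAN, since the poster wants to keep existing subnets and survive a full chassis failure. The VLAN number, addresses, and group number are assumptions; the servers would point their default gateway at the shared virtual IP:

```
! Assumed example: VLAN 10, virtual gateway 10.10.10.1
! --- DC 6509 #1 (active: priority 110 > default 100) ---
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt
!
! --- DC 6509 #2 (standby: default priority 100) ---
interface Vlan10
 ip address 10.10.10.3 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 preempt
```

If the active chassis fails, the standby takes over the virtual IP and MAC, so the servers keep their configured gateway with no re-IP. For load-balancing across both chassis, the usual approach is to alternate the active chassis per VLAN (or consider GLBP, which balances within a single group).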
Well, in addition to Rahul's post, we need a few more details to suggest a better design/option. Please let us know the following:
1. Is the current topology between the two DCs an L2 topology or an L3 topology?
2. Do you have any plans to consider an L3 topology if it is currently L2 only? Keep in mind that with an L3 topology you will have better convergence, load-balancing, and an L2-loop-free network.
3. As per the post, I believe you have all the servers connected to the 3560s. Is it a 3560 stack, or are you using switch clustering to achieve stack-like functionality? Or is it a 3750 stack that you are referring to?
4. How do you plan to connect the two chassis that replace the existing fixed-config switches? Will you connect all the servers to the 6509 w/Sup720 and ACE module, so that SLB is done close to the server access layer, and then switch the traffic to the upper 6509 w/Sup32, which can act as the core/aggregation layer for the DC?
5. Will you connect all the servers to both 6509 chassis?
6. The ideal design would be to use the Cat6509 w/Sup32 as the aggregation layer to connect the two DCs, with the Sup720/ACE module for load-balancing at the access layer. You can also do it the other way around: put the Sup720 w/ACE module at the core/aggregation layer and connect the servers directly to it, then use the Cat6509 w/Sup32 at the server access layer with trunk uplinks to the core 6509 and SLB configured at the core services layer.
7. The designs in points 5 and 6 depend on the hardware you presently have in both chassis. Have a look at the proposed/existing hardware and then design your topology.
8. If I were you, I would opt for an L3 topology between the two DCs for better load-balancing, faster convergence, and an easier network to deploy and troubleshoot. Unless it is really necessary to extend L2 between the two DCs, try an L3 design.
There are different ways to design the solution, and your idea fits one of them.
Could you please let us know how you would stack 3560s together to behave like a stack of 3750s?
A Cat6500 with Sup32 will provide about the same performance level as a 3560, with a 32 Gbps switch fabric; depending on the switch model, a 3560 might even outperform the Sup32 on the PPS number. The main advantage of the Cat6500/Sup32 solution is modularity: a large number of slots and the ability to mix and match line cards for better ROI.
As I said, there are a number of ways to design the solution, and if I were him, I would probably have opted for L3.
Depending on the servers, one possible design is to cross-connect the servers to both Cat6500 chassis and use the Cat6500 with Sup720 as the primary gateway and the other as the secondary gateway. In case of a failure, either chassis can switch the traffic.
If the server port count is high, another design is to use the Cat6500 w/Sup32 as an access-layer switch for lower-performance servers and the Cat6500 w/Sup720 for the high-performance servers. Trunk the two chassis together and use the Sup720 chassis as the core/aggregation layer. Run two L3 links to the two separate chassis at the primary DC and let routing take care of load-balancing and failover. Use the Cat6500 Sup720 chassis as the gateway for the secondary DC and do inter-VLAN routing and ACE SLB on that chassis.
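The routed-uplink idea above can be sketched as follows. This is a minimal illustration on the DC Sup720 chassis, with assumed /30 point-to-point addressing and OSPF; with two equal-cost routes toward the primary site, the router load-balances across both links and converges around a failed link or chassis on its own:

```
! Assumed example: two routed uplinks from the DC Sup720 chassis,
! one to each 6509 core at the primary site
interface TenGigabitEthernet1/1
 description L3 uplink to primary-site core 1
 no switchport
 ip address 10.0.0.1 255.255.255.252
!
interface TenGigabitEthernet1/2
 description L3 uplink to primary-site core 2
 no switchport
 ip address 10.0.0.5 255.255.255.252
!
router ospf 1
 network 10.0.0.0 0.0.0.7 area 0
 network 10.10.10.0 0.0.0.255 area 0
```

Because both uplinks terminate on different chassis, routing installs two equal-cost paths; losing either core at the primary site leaves the other path in the routing table, which is the faster-convergence, loop-free behavior argued for in point 8.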
My bad for referring to the 3560 switch clustering technology as a stack.
A switch cluster is a set of up to 16 connected, cluster-capable Catalyst switches that are managed as a single entity. The switches in the cluster use switch clustering technology so that you can configure and troubleshoot a group of different Catalyst desktop switch platforms through a single IP address.