Cisco 3560 Configuration - Server Backbone

May 12th, 2010

Hi Everyone,


Current setup for our Server Room:

1) Cisco Catalyst 3560G 48-port switch

2) 1 trunk port to the Core

3) The rest of the ports are simple access ports to the servers


All of our servers are Dell, with dual NICs as part of the order.  This year, we were able to purchase a second Cisco 3560G, with the idea of making our backbone to the servers redundant.


I have some ideas on how to do this, but wanted to bounce them off the experts, as I am by no means a Cisco professional.


Thoughts were as follows.


1) 2 trunk ports on each switch to the Core (can we bind them together for better throughput?)

2) 1 trunk port on each switch to its neighbour (backbone).


That should give me high availability, correct?


Then for each server: NIC 1 to Switch 1, NIC 2 to Switch 2 (backbone).


I can make them standard access ports, but I was wondering if there is a better way to do this. I.e., rather than have one NIC in standby, can I link the ports that go to each server together, both for better performance and for redundancy?  I am thinking that if the switch ports are standard access ports and I plug both NICs in, I may have problems.


Dell NICs are either Intel or Broadcom.


Any help would be great.

Jon Marshall Wed, 05/12/2010 - 13:32

hutcha4113 wrote:

(original post quoted above)


Is the 3560G acting as an L2 switch? I.e., is the default gateway for the servers on the 3560G or on the core switches?


Assuming it is on the core switches -


1) Yes, you can tie them together - Cisco calls it EtherChannel, so you want an L2 EtherChannel trunk.

2) If you are dual-homing your servers, i.e. each server connects to both 3560Gs, and the L3 default gateway for the server vlan is on the core switches, then the two 3560Gs don't actually need a connection between themselves.
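For reference, binding the two uplinks into one logical trunk might look something like the sketch below on the 3560G. The interface numbers, channel-group number, and the choice of LACP are assumptions for illustration - adjust them to your environment, and the core side needs a matching channel configuration.

```
! Sketch: L2 EtherChannel trunk from the 3560G to the core (assumed ports Gi0/47-48)
interface range GigabitEthernet0/47 - 48
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active       ! LACP; negotiates the bundle with the core
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

With "mode active" both sides negotiate via LACP; if the core cannot run LACP you could use "mode on" (static) on both ends instead, but mismatched modes will leave the bundle down.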


The more standard practice is to connect each 3560G to both core switches, assuming you have more than one core switch.


Generally speaking, it is better to run active/standby on your server NICs if you are running them to two separate switches. You could use EtherChannel on the servers, but you would need to connect both NICs to the same 3560G.
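For the active/standby case, the server-facing ports stay plain access ports on both switches. A minimal sketch, assuming VLAN 10 as the server vlan and Gi0/1 as the server port (both assumptions):

```
! Sketch: server-facing access port, one per NIC, identical on each 3560G
interface GigabitEthernet0/1
 description Server01 NIC (active/standby team)
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast            ! skip listening/learning so the NIC comes up fast
```

If you instead ran both NICs to the same 3560G for server-side EtherChannel, you would add a channel-group to both of that server's ports, matching whatever bonding mode (static or LACP) the Intel/Broadcom teaming software is set to.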


Jon
