I am redesigning my current network and am pretty much replacing all of our LAN gear. I will have dual 6509s at the core, with a mix of 4507s and 6509s at my access layer. I also want to move my servers off of the core and put them on their own L3 switch. The goal is to provide redundancy and to eliminate STP wherever possible.
The attached diagram is a work in progress and I would like to verify that my thinking is correct. For simplicity, I am using VLAN 1 for management and a /24 for all networks. This will most likely change in the future.
The dual core 6509s will each have an SVI for VLAN 1 configured and will be connected by an EtherChannel. Each access switch and server switch will have its VLAN 1 SVI configured with an address in the same network.
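A rough sketch of what that could look like on one core 6509 (the addresses, port numbers, and channel mode are placeholders; the second core would get another address in the same /24):

```
! Core-1: SVI for VLAN 1 (management/transit)
interface Vlan1
 ip address 172.16.1.1 255.255.255.0
!
! EtherChannel to Core-2 carrying VLAN 1
interface Port-channel1
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 1
!
interface range GigabitEthernet1/1 - 2
 switchport
 channel-group 1 mode desirable
```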
Each core will connect to each access switch, and the uplink ports on both the core and the access switch will be in VLAN 1. The access switches will then have an SVI for VLAN 2 configured, and all PCs will connect to access ports in this VLAN. The default gateway for the PCs will be the VLAN 2 SVI IP address on the switch they connect to.
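On an access switch that might look something like this (interface numbers and the .254 gateway address are illustrative, borrowing the 172.16.5.0/24 example used later in the thread):

```
! Access-1: uplink to a core, in VLAN 1
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 1
!
! SVI for the PC VLAN; this address is the PCs' default gateway
interface Vlan2
 ip address 172.16.5.254 255.255.255.0
!
! PC-facing access ports
interface range FastEthernet0/1 - 48
 switchport mode access
 switchport access vlan 2
```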
Each core will connect to the server switches, and the uplink ports on both the core and the server switch will be in VLAN 1. The server switches will then have an SVI for VLAN 100 configured. I would like to allow the servers to connect to each switch for redundancy. To do this, I will set up an EtherChannel between the server switches, trunking VLAN 100 only. I will then set up GLBP or VRRP between the server switches. This will allow the servers to be teamed and use one address with a connection to each switch. If I use GLBP, the servers' default gateway will be the GLBP virtual IP.
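A sketch of the GLBP side on one server switch, assuming the design described above (the 172.16.100.0/24 subnet, GLBP group number, and priority are placeholders I picked for illustration):

```
! Server-SW1: SVI for the server VLAN, with GLBP
interface Vlan100
 ip address 172.16.100.2 255.255.255.0
 glbp 100 ip 172.16.100.1
 glbp 100 priority 110
 glbp 100 preempt
!
! EtherChannel trunk to Server-SW2 carrying VLAN 100 only
interface Port-channel2
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 100
```

The servers would then point their default gateway at the GLBP virtual IP (172.16.100.1 in this sketch), so either switch can forward for them.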
I will run EIGRP on all L3 switches.
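A minimal EIGRP sketch for one of the L3 switches (the AS number 100 and the 172.16.0.0/16 summary range are assumptions, not from the design above; adjust to your addressing):

```
router eigrp 100
 network 172.16.0.0 0.0.255.255
 no auto-summary
 ! keep EIGRP hellos off the PC-facing SVI
 passive-interface Vlan2
```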
Will this design work? If I use GLBP and the server NICs are teamed, am I correct that I should use the fault-tolerant mode where only one NIC transmits and receives? I could then alternate the switch that each active NIC plugs into (server 1's active NIC to SW1, server 2's active NIC to SW2, etc.).
I think you did explain it correctly; I think it's me that didn't explain it properly.
In the example you give above, you are not routing from the core switch to the access-layer switch; you are in fact switching.
Think of it like this: a client on access-layer switch 1 wants to talk to a server in VLAN 100 connected to your server switches.
The client needs to send the traffic to its default gateway. Again using your example, let's say the client is 172.16.5.10. So it sends the traffic to its default gateway, which is 172.16.5.254.
Now that switch needs to send the traffic towards the core 6500. But it doesn't route that traffic to the core 6500; it switches it, and it has to switch it, because the IP address on the core switch is in the same VLAN. The actual routing between the VLANs takes place on the 6500s.
So you have L2 switching from the access layer to the 6500 switches.
Now if it is a point-to-point link, the traffic again goes to the default gateway, but now the traffic is not switched, because that VLAN does not extend across the uplinks.
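For contrast, a routed point-to-point uplink would look something like this on the access switch (the /30 subnet and interface are placeholders); there is no VLAN spanning the link, so the traffic is routed rather than switched:

```
! Routed uplink to the core - no switchport, no VLAN
interface GigabitEthernet0/1
 no switchport
 ip address 172.16.250.1 255.255.255.252
```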
Does this make sense?