Hello everyone, thanks for your help!
I've just completed migrating my network to a fully routed L3 environment. Since the migration, I've found that most traffic is reaching the Core over 100Mb links instead of the switches' 1Gb uplinks. In wiring closets with more than one 48-port switch (due to a large number of hosts), traffic traverses the 100Mb trunk ports (used for HSRP negotiation) and ignores the 1Gb uplinks to Distribution.
CURRENT NETWORK DESCRIPTION
My network consists of 12 buildings of various sizes. The buildings are 1 to 6 floors in height. I've provisioned eight unique VLANs per floor (used for: staff, students, voice, laptop, management, test, server and 'spare'). The number of hosts per floor ranges from 10 to 500. Each VLAN has a 21-bit mask to allow for easy expansion. I've tried to make the addressing/VLAN numbering as co-ordinated as possible for easy 'human' readability and troubleshooting.
I've completed the transition from a fully trunked layer 2 (L2) network to a fully routed layer 3 (L3) network. This L3 network uses the VLANs/IP#s as described above. All of my legacy switches have been replaced with the Catalyst 3550 series (3550-12Gs at Distribution and 3550-48s at the Edge).
All of my campus buildings are using L3 links between the Core and Distribution switches. I have L2 trunks between Distribution switches for HSRP negotiation. I have L3 links between the Distribution and Edge switches. I have L2 trunks between Edge switches for HSRP negotiation as well.
All L3 links are 1Gb fiber connections. All L2 trunks are 100Mb copper connections.
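For reference, here's roughly what each edge switch is configured like. The interface numbers, VLAN ID, addresses and HSRP priorities below are made up for illustration; the actual details are in the attached command output.

! Routed 1Gb uplink to Distribution
interface GigabitEthernet0/1
 description L3 uplink to Distribution
 no switchport
 ip address 10.255.1.2 255.255.255.252
!
! 100Mb L2 trunk to the adjacent edge switch (carries the host VLANs so HSRP can negotiate)
interface FastEthernet0/48
 description L2 trunk to adjacent edge switch
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! SVI for one of the eight per-floor VLANs, with its HSRP virtual gateway
interface Vlan101
 ip address 10.1.0.2 255.255.248.0
 standby 101 ip 10.1.0.1
 standby 101 priority 110
 standby 101 preempt
!
ip routing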
From the research I've done, it seems the physical hardware I have is the problem. Since many floors have more than 48 hosts (21 floors fall into this category), I'm forced to use multiple switches per floor. These switches are connected with L2 trunks so that HSRP negotiation of the virtual router for each VLAN can occur. I believe this is the root of it: since the group of switches (eight switches in one case) has only one active virtual router per VLAN, the switch that wins the HSRP election becomes the chokepoint. Hosts hanging off every other switch in the closet have to cross the 100Mb trunks to reach their default gateway before their traffic can be routed up a 1Gb uplink.
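To make the chokepoint concrete, this is roughly what I expect 'show standby brief' to show on the switch that wins the elections (the groups, priorities and addresses here are illustrative, and the exact column layout varies a bit by IOS version):

Interface   Grp  Pri P State   Active          Standby         Virtual IP
Vl101       101  110 P Active  local           10.1.0.3        10.1.0.1
Vl102       102  110 P Active  local           10.1.8.3        10.1.8.1
Vl103       103  110 P Active  local           10.1.16.3       10.1.16.1

Every group shows 'local' as Active on the same box, so every host that isn't directly attached to that switch has to cross the 100Mb trunks to reach its gateway before anything gets routed up a 1Gb uplink.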
I will attach Visio PDFs that show my test environment and the traffic patterns for two VLAN examples.
I will also attach command output for each relevant switch (te101*) for the following commands:
show standby brief
show interface status | inc connected
show ip interface brief | inc up
show run | inc spanning-tree
show spanning-tree summary
My question is the obvious one... how do I get each switch to use its own 1Gb uplinks for traffic from its locally attached hosts?
There are constraints to the solution...
#1. No hardware replacement. I have to use the 3550-48s at the edge. (I know that putting in a correctly sized chassis switch would solve this problem.)
#2. I cannot (read: really, really don't want to) provision a separate set of VLANs per switch at the edge. That would quickly become a major management headache.
#3. One option is to replace some of the 3550-48s with 3750-48s, use the StackWise backplane to manage the whole stack as a single device with one IP address per VLAN, and set up LACP channels on the 1Gb fiber uplinks of each switch in the stack. (Hey, that violates rule #1!)
#4. Avoid 'solving' the problem with 4 or 8-port LACP channels between switches. (This is what I've done in my test environment; there's a sketch of it below.) It's terrible because it doesn't actually solve anything and chews up 8 or 16 ports per switch. Ouch!
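For completeness, the #4 workaround looks roughly like this on a 3550. The port range and channel-group number are made up, the same config goes on the neighbouring switch, and this assumes an IOS image that supports LACP on the 3550 (otherwise PAgP or mode 'on' would be the equivalent):

! Bundle four 100Mb copper ports into one L2 trunk channel toward the adjacent edge switch
interface range FastEthernet0/45 - 48
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk

Even with four ports bundled, that's still only 400Mb between switches, so it just widens the chokepoint a little rather than removing it.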
I'm open to suggestions! Feel free to let me know if I'm hooped or not! Also, if you have suggestions re: my network topology and/or design, I'm open to comments on that too.