
L3 Network Traffic Using Non-Optimal Links

dbroder
Level 1

Hello everyone, thanks for your help!

PROBLEM SUMMARY

I've just completed migrating my network to a fully L3 routed environment. Since then, I've found that most traffic reaches the Core over 100 Mbps links instead of the switches' 1 Gbps uplinks. In wiring closets with more than one 48-port switch (due to a large volume of hosts), traffic traverses the 100 Mbps trunk ports (used for HSRP negotiation) and ignores the 1 Gbps uplinks to Distribution.

CURRENT NETWORK DESCRIPTION

My network consists of 12 buildings of various sizes, 1 to 6 floors in height. I've provisioned eight unique VLANs per floor (used for: staff, students, voice, laptop, management, test, server and 'spare'). The number of hosts per floor ranges from 10 to 500. Each VLAN has a 21-bit mask to allow for easy expansion. I've tried to make the addressing/VLAN numbering as coordinated as possible for easy 'human' readability and troubleshooting.

I've completed the transition from a fully trunked layer 2 (L2) network to a fully routed layer 3 (L3) network. This L3 network uses the VLANs/IP#s as described above. All of my legacy switches have been replaced with the Catalyst 3550 series (3550-12Gs at Distribution and 3550-48s at the Edge).

All of my campus buildings are using L3 links between the Core and Distribution switches. I have L2 trunks between Distribution switches for HSRP negotiation. I have L3 links between the Distribution and Edge switches. I have L2 trunks between Edge switches for HSRP negotiation as well.

All L3 links are 1 Gbps fiber connections. All L2 trunks are 100 Mbps copper connections.
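To make the layout concrete, here's a minimal sketch of one edge switch's relevant config. The interface numbers, VLAN 101, and the 10.x addressing are placeholders rather than my real values, but the shape is accurate: a routed uplink to Distribution plus a 100 Mbps trunk to the neighbouring edge switch for HSRP.

! edge1-1: routed 1 Gbps fiber uplink to Distribution (L3, point-to-point)
interface GigabitEthernet0/1
 no switchport
 ip address 10.255.1.2 255.255.255.252
!
! 100 Mbps copper trunk to edge1-2, carrying the user VLANs for HSRP
interface FastEthernet0/48
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! SVI for one user VLAN; edge1-1 and edge1-2 negotiate the virtual router here
interface Vlan101
 ip address 10.1.0.2 255.255.248.0
 standby 1 ip 10.1.0.1
 standby 1 priority 110
 standby 1 preempt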

TROUBLESHOOTING ATTEMPTED

From the research I've done, it seems the physical hardware I have is the problem. Since many floors have more than 48 hosts (21 floors fall into this category), I'm forced to use multiple switches per floor. These switches are connected with L2 trunks so that HSRP negotiation of the virtual router for each VLAN can occur. I believe this is the problem: since the stack of switches (eight switches in one case) has only one virtual router per VLAN, the switch that wins the negotiation becomes the chokepoint for all traffic.

I will attach Visio PDFs that show my test environment and the traffic patterns for two VLAN examples.

I will also attach command output for each relevant switch (te101*) for the following commands:

show vlan

show standby brief

show interface status | inc connected

show ip interface brief | inc up

show run | inc spanning-tree

show spanning-tree summary

THE QUESTION!

My question is the obvious one... how do I get each switch to use its own 1 Gbps uplinks to pass traffic that is local to it?

There are constraints to the solution...

#1. No hardware replacement. I have to use the 3550-48s at the Edge. (I know that putting in a correctly sized chassis switch would solve this problem.)

#2. I cannot (read: really, really don't want to) provision a set of VLANs per switch at the edge. That would become a major management headache.

#3. Replace some of the 3550-48s with 3750-48s, use the StackWise backplane connector on the 3750s, and manage the entire stack as one device with one IP per VLAN. Set up LACP channels using the 1 Gbps fiber uplinks on each switch in the stack. (Hey, this violates rule #1!)

#4. Avoid solving the problem by using 4- or 8-port LACP channels between switches. (This is what I've done in my test environment; see the sketch below.) This is terrible because it doesn't solve anything and chews up 8 or 16 ports per switch. Ouch!
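For reference, the #4 workaround looks like this on each pair of adjacent switches in my test environment (the port numbers are placeholders, and this assumes an IOS release on the 3550 that supports LACP):

! bundle four 100 Mbps copper ports into one LACP trunk between edge switches
interface range FastEthernet0/45 - 48
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active

It widens the inter-switch pipe to roughly 400 Mbps, but as I said, it burns ports and doesn't fix the underlying traffic pattern.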

I'm open to suggestions! Feel free to let me know if I'm hooped or not! Also, if you have suggestions re: my network topology and/or design, I'm open to comment on that too.

Thanks all!

Darren.

6 Replies

dbroder
Level 1

More Information For This Problem

The problem I'm having (above) is a follow-up to a question (http://forum.cisco.com/eforum/servlet/NetProf?page=netprof&forum=Network Infrastructure&topic=LAN, Switching and Routing&topicID=.ee71a04&fromOutline=&CommCmd=MB?cmd=display_location&location=.2cbff660) I posted a few months ago. (If the included link doesn't work, search for 'Moving From L2 Trunks to Routed L3 Links'.)

I could only attach three files per post, so attached here are the traffic pattern Visio PDFs that show how I see network traffic traversing my network.

Thanks again!

Darren.

dbroder
Level 1

I knew this was a tough problem, but figured that somebody here would have some pointers! ;)

Thanks!

Darren.

Hello Darren,

when you use HSRP, end-user devices are configured (manually or by DHCP) to use the HSRP VIP as their default gateway.

The HSRP VIP is emulated at both OSI Layer 3 and Layer 2: the virtual IP answers ARP with a virtual MAC address (0000.0c07.acXX for HSRP version 1, where XX is the group number).

So users connected to L2 ports on the edge switch that is not the HSRP Active router send frames with a destination MAC equal to the HSRP virtual MAC, and those frames travel across the trunk to the co-located edge switch before being routed to the core.

So you fill the 100 Mbps link between the edge switches not only with HSRP packets but with user traffic.

This is the result of the current network design, not a misbehavior.

What you can do here:

a) Dismiss HSRP and split the subnets. End-user devices connected to edge1-1 can't take advantage of the second edge switch (edge1-2) if the first one is off anyway; with HSRP you are only covering the failure of the SVI on edge1-1. A sketch of this follows.
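A minimal sketch of option a), with placeholder VLAN numbers and addresses (each switch owns its own VLAN, subnet, and SVI, and no HSRP is configured):

! edge1-1: its own access vlan and SVI; hosts here use 10.1.0.1 as gateway
vlan 101
interface Vlan101
 ip address 10.1.0.1 255.255.248.0
!
! edge1-2: a different vlan and subnet; hosts here use 10.1.8.1 as gateway
vlan 102
interface Vlan102
 ip address 10.1.8.1 255.255.248.0

Each switch then routes its local hosts straight out of its own GE uplinks.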

b) Alternatively, keep using HSRP and, without readdressing, move the L2 trunks to the uplink ports: make the uplinks L2 trunks carrying all VLANs (the access VLANs plus the backbone VLAN), move the L3 backbone configuration to SVIs, and make sure that for all access VLANs of building I the root bridge is edge I-1. Traffic will then cross the GE links once or twice: once if it arrives on the device that is HSRP Active; if not, it goes via an uplink, through a distribution switch, to the other edge switch. A sketch follows.
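A sketch of the option b) changes on edge1-1, assuming access VLANs 101-108 and a backbone VLAN 900 (all placeholder numbers):

! the GE uplink to distribution becomes an L2 trunk carrying all vlans
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! the former routed-link addressing moves to an SVI on the backbone vlan
interface Vlan900
 ip address 10.255.1.2 255.255.255.252
!
! make edge1-1 the root bridge for this building's access vlans
spanning-tree vlan 101-108 root primary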

c) A possible improvement: use two HSRP groups, one for each edge switch. All hosts connected to edge I-1 need their gateway set to VIP address I-K-1 (where K is the VLAN). Even so, return traffic can in all cases arrive on the switch from which the device is only reachable via the inter-switch trunk. A sketch follows.
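A sketch of option c) on edge1-1, with placeholder addresses; edge1-2 mirrors it with the priorities swapped, so each switch is Active for one group:

! two HSRP groups on the same SVI: edge1-1 is Active for group 1 only
interface Vlan101
 ip address 10.1.0.3 255.255.248.0
 standby 1 ip 10.1.0.1
 standby 1 priority 110
 standby 1 preempt
 standby 2 ip 10.1.0.2
 standby 2 priority 90
 standby 2 preempt

Hosts on edge1-1 would then be given 10.1.0.1 as their gateway, and hosts on edge1-2 would be given 10.1.0.2.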

Routing does not provide a way to advertise single hosts out of a broader subnet, so return traffic cannot be steered to the right switch.

Solution a) is the better one: when implementing an L3 routed access layer, it reflects the nature of the separate switches on each floor, and the 100 Mbps links become useless and can simply be removed.

Solutions b) and c) can be implemented together to minimize inter-edge-switch traffic toward the core, but nothing can be done for traffic coming from the core to the edge: you have a 50% chance of hitting the wrong edge switch.

Another point against solution b) is that the distribution switches would need to carry all the L2 VLANs used at the edge.

Solution b) is only viable if you have an L3 distribution layer between edge and core.

Hope to help

Giuseppe

Hi Giuseppe. Thanks for your help.

I have a couple questions, just so we're on the same page.

Regarding option a), this means that each switch will have its own set of unique VLANs, correct?

Regarding option c), this means that my DHCP server will have to provide a different gateway to each host depending on which switch it's connected to, correct?

Thanks again!

Darren.

Hello Darren,

thanks for your kind remarks

a) Yes: different access VLANs on different switches, each with a single SVI defined on it.

c) This is difficult to achieve without using the DHCP feature locally on the edge switch.

The DHCP server software would have to use the gateway (giaddr) field in the relayed DHCP request to look up the right scope, which means two scopes for each IP subnet.

But the DHCP request is heard by both edge switches, and potentially both forward it to the DHCP servers, so there is a chance the wrong scope is served first.
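For completeness, 'locally' would mean something like this sketch, assuming an IOS image on the 3550 with the DHCP server feature (the pool name, subnet, and addresses are placeholders):

! edge1-1 hands out addresses itself and points hosts at its own gateway VIP
ip dhcp excluded-address 10.1.0.1 10.1.0.10
ip dhcp pool EDGE1-1-HOSTS
 network 10.1.0.0 255.255.248.0
 default-router 10.1.0.1

Even then, because the two switches share the VLAN, I would expect a host's broadcast DISCOVER to be answerable by either switch's server, so this is not a complete fix either.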

I would recommend solution a) in any case: the more we look at this scenario, the more drawbacks of approaches b) and c) we find.

Hope to help

Giuseppe

Hi Giuseppe. Thanks again for your comments.

I agree with you regarding b) and c)... they each have drawbacks that make their use in the 'real' world difficult.

I can't imagine trying to manage the hosts and IP addressing in those situations -- very complex, with a lot of details to worry about. The real problem is that it won't just be me and my team dealing with it; my Help Desk and junior techs will be involved as well.

Regarding option a), this is Cisco's suggested solution per their white papers: unique VLANs per switch (not per wiring closet or per cluster). But the sheer number of VLANs needed makes it ugly -- some wiring closets have 8 switches at 8 VLANs each, which is 64 VLANs!

It seems that there's no good solution here aside from replacing my hardware! :(

Thanks

Darren.
