
Problem switch stack 3750

andreas.plaul
Level 1

Hello,

we have a problem with our switch stack.

Currently we have 9 switches in the stack (2 GigabitEthernet switches and 7 FastEthernet switches).

Basically, the transfer rate between servers and users located on different switches is causing speed problems.

Please see the following details:

1) Correct speed:

Server/User -> GigabitEthernet Switch -> Server/User

Server/User -> GigabitEthernet Switch -> GigabitEthernet Switch -> Server/User

Server/User -> FastEthernet Switch -> Server/User

Server/User -> FastEthernet Switch -> FastEthernet Switch -> Server/User

2) Very slow speed (~100 Kbit/s):

Server/User -> FastEthernet Switch -> GigabitEthernet Switch -> Server/User

As most of our users are connected as in case 2), they are the ones with the most problems.

Mainly, it looks like the FastEthernet switches are not able to communicate at full speed across the stack with the Gigabit switches.

Any ideas?

Best regards,

Andreas Plaul

7 Replies

glen.grant
VIP Alumni

Look for the obvious stuff first, like the connections between your switches: do you have speed/duplex mismatches on the links, are you seeing errors, etc.? Look at the user ports for the same problems.

If you are using Cisco switches, do a show int status and look for any interfaces that are showing half duplex; this is usually a mismatch scenario. If you are unsure, hardcode the speed/duplex to 100/full on "both" ends of any connecting links.

You should be seeing very little speed degradation just by adding a FastEthernet switch to a Gig switch. You can also check link utilization between the switches; this is usually not a problem unless you are getting over 80% link utilization.
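For reference, the check described above might look like this on a 3750 (the interface name is just a placeholder; pick the actual uplink/user port you suspect):

show interfaces status
! -> scan the Duplex column for "a-half" or "half"; those are the mismatch candidates

configure terminal
 interface FastEthernet1/0/10
  speed 100
  duplex full
 end
! -> repeat the same speed/duplex hardcoding on the device at the far end of the link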

A further point on Glen's comment. What version of software are you using on the 3750s?

Some versions have problems that cause previously configured switch ports to drop back to autonegotiation. This is a problem if both ends were statically configured to full duplex!

Check by doing a sh int at each end.

Paul.

Another obvious one to check:

Server adapter teaming is known to cause similar behaviour, especially since you state that it happens on the switches where the server is not connected. Dissolve the team and test whether the speed increases.

regards,

Leo

Hello all,

thanks for the replies.

First the IOS on the switches is: 12.2(20)SE3.

Checking the counters/errors on the interfaces doesn't show any big numbers.

So basically:

sh interfaces counters errors

sh interfaces

are looking good.

Igijssel, you are right: the worst performance we see is on the communication between a NAS (NetApp) with an EtherChannel (4x Gigabit) and the users on the FastEthernet switches.

Configuration of the EtherChannel and the ports:

interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan xxx
 switchport mode trunk
 switchport nonegotiate
 bandwidth 4000000
 speed 1000
 flowcontrol receive desired
!
interface GigabitEthernet1/0/1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan xxx
 switchport mode trunk
 switchport nonegotiate
 speed 1000
 flowcontrol receive desired
 channel-group 1 mode on
 spanning-tree portfast trunk
!
interface GigabitEthernet1/0/2
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan xxx
 switchport mode trunk
 switchport nonegotiate
 speed 1000
 flowcontrol receive desired
 channel-group 1 mode on
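As a side note, the bundle state and flow control negotiation for a channel like this can be checked with the following commands (a sketch; exact output varies by IOS version):

show etherchannel 1 summary
! -> member ports should be flagged (P) bundled, the port-channel (SU)
show flowcontrol interface GigabitEthernet1/0/1
! -> confirms whether receive flow control actually negotiated
show interfaces port-channel 1 counters errors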

Here are the speed findings:

From              Direction   To                          Speed
FastEthernet      upload      GigabitEthernet              80 Mbit/s
FastEthernet      download    GigabitEthernet              57 Mbit/s
GigabitEthernet   upload      FastEthernet                 32 Mbit/s
GigabitEthernet   download    FastEthernet                 53 Mbit/s
GigabitEthernet   upload      GigabitEthernet             573 Mbit/s
GigabitEthernet   download    GigabitEthernet             245 Mbit/s
FastEthernet      upload      Etherchannel (4x Gigabit)    57 Mbit/s
FastEthernet      download    Etherchannel (4x Gigabit)   400 Kbit/s
GigabitEthernet   upload      Etherchannel (4x Gigabit)   163 Mbit/s
GigabitEthernet   download    Etherchannel (4x Gigabit)   245 Mbit/s

Please have a look at the config; maybe I am doing something wrong.

Best regards,

Andreas Plaul

Hello,

I am now testing with a simple single-port connection from the NAS (1 Gigabit) to a client (100 Mbit/s). The connection is still very slow (400 Kbit/s).

When I force the interface on the NetApp side to 100 Mbit/s, it works at full speed. Changing it back (auto -> 1 Gbit/s), it slows down to 400 Kbit/s.

I also checked the cabling, which is correct (tried multiple Cat5e cables).

Maybe this helps; I am out of ideas.

Best regards,

Andreas Plaul

When at auto, what speed/duplex does the switch report? What does the server report? Are there any errors in the counters on the switch? Can you get stats from the server?

Hello,

yesterday, for testing, we disabled "mls qos" on the whole stack, and that solved the speed issue. It looks like we hit this bug:

CSCsc96037: "CAT3560 / 3750:QoS causes slow TCP performance"

"Configuring Quality of Service on a CAT3560 or CAT3750 running any IOS can cause certain TCP applications such as NFS to run slower."

We will test the 12.2(37)SE1 IOS and see if it fixes the issue.
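For anyone hitting the same symptoms, the workaround we used is just the global QoS toggle. Note that this removes all QoS classification and queueing on the stack, so only use it if you do not depend on QoS:

configure terminal
 no mls qos
! -> workaround for CSCsc96037; re-test throughput afterwards
 end
write memory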

Best regards,

Andreas Plaul
