3750 - anyone running 7-9 switches in one stack?

Unanswered Question
Aug 7th, 2012

Looking for feedback from other organizations that have large 3750 stacks.  I've got one stack of (8) 3750's composed of (6) 3750G's and (2) plain 3750's.  This particular stack is usually unresponsive to SNMP queries and often fails to write config when we make changes.  After a couple of tries it will finally go.  Part of my problem here is likely the plain 3750's, which always boot faster and come up as the master.  I should manually set the master to one of the G's.  What I'm wondering is: who else has 7-9 3750's stacked, and are they performing well for SNMP, telnet, etc.?  I've also got another, newer stack of 7 3750E's that I need to add one more switch to.  I need to decide whether to take extended downtime to break the stack up, or to just add the 3750X as member 8 and hope it performs well.  I have 50+ 3750 stacks working great on our campus, but only a couple that are this big.

Leo Laohoo Tue, 08/07/2012 - 15:24
User Badges:
  • Super Gold, 25000 points or more
  • Hall of Fame,

    The Hall of Fame designation is a lifetime achievement award based on significant overall achievements in the community. 

  • Cisco Designated VIP,

    2017 LAN, Wireless

Looking for feedback from other organizations that have large 3750 stacks.  I've got one stack of (8) 3750's composed of (6) 3750G's and (2) plain 3750's.  This particular stack is usually unresponsive to SNMP queries and often fails to write config when we make changes.

This is typical behaviour when you stack seven or more 3750s to form a single stack.  My rule of thumb is six in a stack.  No more.


Part of my problem here is likely the plain 3750's that always boot faster and come up as the master. 

A lot of factors go into this, and it's got nothing to do with the model.  Heck, the 3750/3750G boots faster, the 3750E is slower, and the 3750X is the slowest.  You want to speed up the bootup?  Make sure you manually configure switch priority so the stack doesn't have to go into a stack master election.
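As a minimal sketch of that advice, pinning the master looks like this in global config (the member numbers and priority values here are examples only; check "show switch" for your actual stack membership):

```
! Give the desired master the highest priority (1-15, default 1).
! Member numbers below are assumptions; verify yours with "show switch".
switch 1 priority 15   ! preferred master, e.g. one of the 3750G's
switch 2 priority 14   ! backup master
! The new priorities take effect at the next master election (e.g. a stack reload).
```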



Kindly compare your 3750E/X cost with a 4510R+E with Sup7E.  There are cases I've seen where, if one were to stack three or more 3750Xs, it is better to get a 4500R+E with a Sup7E.


The 3750E/3750X has a backplane of 32 Gbps (full duplex) while the 4500R+E with Sup7E has a backplane of 40 Gbps (full duplex).

Mr.Callahan Mon, 09/23/2013 - 09:29

Hey Leo,


Digging up an old question here, but do you have any links to documentation stating that you should limit stacking to 6? I have a client that is having similar issues to the original post: they have 9 switches in the stack, they are having big problems with SNMP, and SSH sessions seem to hang during troubleshooting.


Any help would be appreciated!


Thanks,

Mike

glen.grant Mon, 09/23/2013 - 12:04
User Badges:
  • Purple, 4500 points or more

  I don't think there is a doc that says to only use 6; people are talking from experience. Cisco has never been able to get a good handle on the memory leak issues that appear when you start getting large stacks. It shows up as SNMP, telnet, and SSH issues where you won't be able to log in, or as memory-low messages, and the only fix then is to reload the entire stack.

Leo Laohoo Mon, 09/23/2013 - 15:35

but do you have any links to documentation stating that you should limit the stacking to 6?

There is no official documentation stating that the "best practice" is up to 6.  As Glen said, I speak from experience.


I have a client that is having similar issues to the original post, where they have 9 switches in the stack, and they are having big problems with SNMP, and SSH sessions seem to hang during troubleshooting processes.


One of the things I've seen with SNMP leaks is that people enable ALL the SNMP traps.  My recommendation is to enable only the SNMP traps that you want and disable the rest.
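As a hedged sketch of that recommendation (the trap keywords, receiver address, and community string below are placeholders, not from the thread — run "snmp-server enable traps ?" to see what your IOS version supports):

```
! Disable blanket trap generation, then enable only what you actually monitor.
no snmp-server enable traps
snmp-server enable traps snmp linkdown linkup coldstart
snmp-server enable traps envmon fan shutdown supply temperature
! Example receiver; 192.0.2.10 and MYCOMMUNITY are placeholders.
snmp-server host 192.0.2.10 version 2c MYCOMMUNITY
```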


Another thing: the IOS version.  For all 3750-series switches I would recommend 12.2(55)SE8, for either Layer 2 or Layer 3.  If you need to go to a higher train, I'd go for 15.0(2)SE4.

ieee1284c Mon, 10/21/2013 - 07:31

Following up on my original post from some time ago:


For the stack of (7) 3750E's that we needed to add one more to, we went ahead with adding the 8th and have had no issues.  For the other, mixed stack of older 3750's, I broke that stack in half and also manually set switch priority to ensure that the plain 3750 models were not being elected stack master.  All is running well.

jack.riley1 Wed, 02/22/2017 - 05:48

You may be running into this; the bigger your stack gets, the more you will see it. We have it on site when a backup runs and the backup appliance executes a 'wr' remotely. It looks like a CPU spike causes a momentary lapse in available memory in the management plane.


High CPU During a Configuration Change

If Catalyst 3750 switches are connected in a stack and a configuration change is made on any switch, the hulc running config process wakes up, generates a new copy of the running configuration, and then sends it to all the switches in the stack. Building the new running configuration is CPU-intensive, so CPU usage is high both while the new running configuration is being built and while it is being forwarded to the other switches. However, this high CPU usage should last only about as long as the "building configuration" step of the show running-configuration command.

No workaround is needed for this; high CPU usage is normal in these situations.
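A few show commands can help confirm that this is what you're seeing (a sketch, not an official procedure; exact process names and output vary by IOS version):

```
show processes cpu sorted     ! look for the "hulc running config" process near the top
show processes cpu history    ! correlate CPU spikes with backup / "wr" times
show memory statistics        ! check free processor memory during the spike
```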

HTH - Jack
