We run a gigabit backbone at two sites. The core switch at one site is a Cisco 3550-12T (1 server and 70 workstations), and at the other a 3550-12G (15 servers and 400 workstations). Performance since migrating to them has been extremely slow (we previously used Cisco 1900 series switches, and performance was better with them!). All other switches are Cisco 3548XLs, 3550-24s, or 3550-48s. We currently run both IP and IPX/SPX; performance over IPX/SPX connections is significantly better than over IP. Any tools to monitor the performance of these core switches, or any troubleshooting methods that might lead to a clue, would be appreciated.
There is only one VLAN, and traffic is slow between users and servers located at the same site. The two sites are connected via a fractional T1, and inter-site traffic is limited to AS/400 IP traffic and email.
I mentioned earlier that there were no collisions or errors; that applies to the ports where the other 3500-series switches are connected. There are collisions, though, on the ports where a router and a hub are connected to the backbone switch, and on ports connecting workstations.
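To see where those collisions and errors are actually accumulating, the per-port counters on the 3550 are a good starting point. A sketch of the sort of checks involved (the interface name here is a placeholder):

```
Switch# show interfaces FastEthernet0/1
! In the counters near the bottom of the output, watch for:
!   "collisions" / "late collisions" - duplex mismatch or half-duplex contention
!   "input errors" / "CRC"           - cabling or duplex problems
!   "total output drops"             - congestion on the egress port
Switch# show interfaces counters errors
Switch# clear counters
! Clear the counters, wait a few minutes under normal load, then re-check,
! so you see which counters are actively incrementing rather than historical.
```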
Collisions are to be expected (and are quite normal) on half-duplex interfaces, such as ones that connect to a hub and, depending on the series, a router too. I would recommend checking for duplex mismatches and hardcoding the speed and duplex on each port/interface instead of letting it auto-negotiate. I would also check to make sure there are no spanning-tree loops going on in the network (such as having the PortFast feature configured on ports that are directly connected to another switch, router, or anything else that can generate BPDU frames). A good way to prevent loops like that from inadvertently happening is to implement errdisable traps (BPDU guard, etc.). How many users do you have sharing this 1 VLAN? Just my 2 cents.. hope this helps.
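As a sketch of the kind of configuration being suggested, assuming a Catalyst 3550 running IOS (the interface numbers are placeholders, and PortFast belongs only on ports facing end stations):

```
Switch(config)# interface FastEthernet0/5
Switch(config-if)# speed 100
Switch(config-if)# duplex full
! PortFast only on edge ports connected to workstations/servers:
Switch(config-if)# spanning-tree portfast
! Err-disable the port if a BPDU ever arrives on it:
Switch(config-if)# spanning-tree bpduguard enable
Switch(config-if)# exit
! Optionally let err-disabled ports recover automatically after 5 minutes:
Switch(config)# errdisable recovery cause bpduguard
Switch(config)# errdisable recovery interval 300
```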
One site has approx. 550 nodes and the second site approx. 75 nodes. We have checked to make sure that no ports connecting to other switches have PortFast enabled. Speed/duplex has been forced on the ports to match the NICs of the connected nodes. We have looked at bandwidth stats on the backbone switches and on the others in each closet. No link is using more than 1.5% (unless I am reading this wrong); the average is closer to 0.08% on these ports.
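For what it's worth, a quick way to sanity-check those utilization readings on the 3550, since the default 5-minute averaging window can smooth away short bursts (a sketch; the interface name is a placeholder):

```
! Shorten the averaging window from the default 5 minutes to 30 seconds:
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# load-interval 30
Switch(config-if)# end
! txload/rxload are reported as a fraction of 255 (e.g. 4/255 is about 1.6%):
Switch# show interfaces GigabitEthernet0/1 | include rate|load
```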