Hello all, I have a 4006 with a SupII and four 48-port blades, and almost all of the ports that show connected have Xmit-Err counters incrementing! Could this be an autonegotiation issue or CDP? Most of the ports have autonegotiated to 100Mbps full duplex, but I was wondering if anyone out there has seen this before? Any ideas on troubleshooting this would be greatly appreciated. Thanks, ~zo
Here is a snapshot of a "show port" on my 4006....
Within 5 minutes of double-checking the PC's hardcoded full-duplex and speed settings, rebooting the PC, and then clearing the counters on the 4006, I have already seen 797 Rcv-Err, 771 Undersize and 21 Runts. I think things were better when I had the workstation at autonegotiate. Also, I've used a Pentascanner to test the network cable all the way back to the patch panel. The tester checked things like length, impedance, loop resistance, capacitance, and impulse noise... they all passed! I'm running CatOS version 7.1(2) with bootstrap version 5.4(1). I'll keep troubleshooting and maybe I can knock this out before the holidays.. Thanks all :-) ~zo
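For anyone following along, here is the rough CatOS sequence I'm using on the switch side to lock the port and re-check the counters (the 2/1 module/port is just a placeholder, substitute your own):

```
set port speed 2/1 100       ! hard-code speed to match the PC NIC
set port duplex 2/1 full     ! hard-code duplex to match the PC NIC
clear counters               ! zero the error counters
show port 2/1                ! confirm status, speed, duplex after the change
show port counters 2/1      ! watch Rcv-Err / Xmit-Err / Runts accumulate (or not)
```

Both ends have to agree: if the switch port is hard-coded but the NIC is still autonegotiating, the NIC will fall back to half duplex and you get exactly this kind of error pattern.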
Have you been able to figure this out yet? I am curious because we are having similar issues with one of our 4506's running CatOS 7.4(1) with bootstrap 5.4(1). We also have tried hard-coding the speed/duplex and have gone as far as to swap all modules and the chassis. Within 2 minutes of clearing counters we are seeing hundreds of errors. We do have AV on all machines that is kept up to date daily. Anyone have any ideas?
I'm sorry Patrick, I've been so busy with other projects I really haven't closed this case yet, but it is very important for us to find the root cause and stop these errors. Please keep me in the loop and I'll do the same for you if I find out anything on the resolution of this problem. I know Cisco will probably say to upgrade to a newer CatOS, but that doesn't always go smoothly. I'll get back to you when I return to this case. Thanks ~zo
Guys, I am having similar problems with a 4006/SupII. Mine differs in that I have 10/100/1000 ports with 14 or so Gb connections. I have a theory but am not able to test it because the servers on the switch are critical. I think any slow link on the switch (10/half, 10/full, 100/half) may be causing the problem. My problem first started when I added two servers with 4 trunked Gb ports each. What I notice is that the slower links gain Xmit-Err's quicker than the 100/full links, but all the links receive Xmit-Err's. My theory is that the switch is waiting too long to forward packets from the slower links, thus forcing the other ports to wait and eventually error out. I don't know if all your links are 100/full or not, but maybe try bumping all your ports up to 100/full. If you can't do that, try plugging in only the fast ports to see if you still get Xmit-Err's. Cisco does mention slower links as being a cause of these errors. In my case I wonder if the 2 servers with 4 trunked ports each are asking for too much bandwidth. I hope you let us know your findings; I will be happy to share mine.
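If it helps, this is roughly how I've been trying to spot the slow links and see which ports rack up Xmit-Err's first, without touching the critical servers (a sketch only, module/port 2/1 is a placeholder):

```
show port status            ! list speed/duplex per port; look for 10/half, 10/full, 100/half
clear counters              ! start from zero so the growth rate is visible
show port counters          ! re-run after a few minutes; note which ports show Xmit-Err first
show mac 2/1                ! Out-Discards climbing on a port suggests output-buffer drops there
```

If the Xmit-Err's track with output-buffer drops on the ports feeding the slow links, that would fit the theory that fast senders (like the 4-port Gb-trunked servers) are overrunning the slower egress ports.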