Could high "Total Output Drops" on one interface of a 3560G be caused by faulty hardware on another interface?

Chris Swinney
Level 5

Hi All,

I have been trying to diagnose an issue we have been having with packet loss on video calls (which I think we may now have resolved, as the problem lay elsewhere), but in the process we have trialled some equipment from PathView, and this seems to have created a new problem.

We have a standalone 3560G switch which connects into a provider's 3750G as part of an MPLS network. There is a single uplink from the 3560 to the 3750 (at 1 Gbps), and whilst I can manage the 3560, I have no access to the provider's switch. Our 3560 has a fairly vanilla config on it with no QoS enabled.

There are only a few ports used on the 3560, mainly for Cisco VCS (Video Conferencing Server) devices and a PathView traffic analysis device. The VCS devices are used to funnel videoconferencing traffic across the MPLS network into another institution's network. The PathView device can be used to send traffic bursts (albeit relatively small compared with the bandwidth that is available) across the same route as the VC traffic to an opposing device; however, I have disabled all of these paths for the moment.

I can run multiple VC calls which utilise the VCS devices, so traffic is routed into the relevant organisations and everything is good. In fact, I have 5 x 2 Mbps calls in progress now and there are 0 (or very, very few) errors.

However, I have actually shut down the port (Gi0/3) connected to the PathView device for the moment. If I re-enable it, I start to see a lot of errors on the VC calls, and the Total Output Drops counter on the UPLINK interface (Gi0/23) starts rising rapidly. As soon as I shut down the PathView port (Gi0/3) again, the errors stop and everything returns to normal.
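For reference, the test boils down to something like this (counter cleared first so any change stands out; long-form interface names shown for clarity):

switch# clear counters GigabitEthernet0/23
switch# show interfaces GigabitEthernet0/23 | include output drops
switch# configure terminal
switch(config)# interface GigabitEthernet0/3
switch(config-if)# no shutdown
switch(config-if)# end
switch# show interfaces GigabitEthernet0/23 | include output drops

With Gi0/3 up, the second read of the counter has climbed; shutting Gi0/3 again stops it.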

I have read that output queue drops are often attributed to a congested network/interface, but I don't believe that is the case in this instance. My 5 VC calls only amount to about 10 Mbps, which is way short of the 1000 Mbps available. Even the PathView device only issues bursts of up to 2 Mbps, and with the paths actually disabled even that shouldn't be happening, so only a small amount of management traffic should be flowing. Still, as soon as I enable the port, the problems start.

So, is it possible that the switch port, the cable, or the PathView device itself is faulty and causing such errors? Has anyone seen anything like this?

Cheers

Chris

 

7 Replies

Reza Sharifi
Hall of Fame

Hi Chris,

Since the issue appears only when you enable the PathView device on port Gi0/3, and you don't have any issues on the other ports, I would look at the PathView device itself. One thing you can do to test is to move the PathView device from Gi0/3 to a different port. You can also try connecting a different device (PC, laptop, etc.) to port Gi0/3 and see if the issue appears again.
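Once the device is on a spare port (Gi0/4 below is just an example), clearing the counters and then re-checking both the uplink drops and the new port's own error counters should show whether the problem follows the device:

switch# clear counters
switch# show interfaces GigabitEthernet0/23 | include output drops
switch# show interfaces GigabitEthernet0/4 counters errors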

HTH

Yep, this is my little job for Tuesday when I'm back in.

Have you seen anything like this though?

 

Cheers

 

No, I have not.

One other thing: if the issue persists after you switch ports, I would also replace the copper cable, just in case.

HTH

I just thought I would report back on this. It looks like the interface drops were being caused by the PathView device becoming compromised and infected with some malware. The initial password was weak when the device was sent from the manufacturer, and although I did change it, it was not changed straight away. The device was compromised literally within a couple of hours of going live.

Re-flashing the device resolved the issue. Looks like it was sending out a lot of traffic and overloading the egress switch port.
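If anyone hits something similar, the flood should be visible on the edge port itself before it shows up as drops on the uplink, e.g.:

switch# show interfaces GigabitEthernet0/3 | include rate
switch# show interfaces GigabitEthernet0/3 counters

The first shows the 5-minute input/output rates on the PathView port; an unexpectedly high input rate from a box that should only be sending small test bursts is the giveaway.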

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

As far as I know, such drops shouldn't be caused by faulty hardware, but if the hardware is really faulty, you would need to involve TAC.

 

BTW, are all the other interfaces, which have the low bandwidth rates you describe, physically running at low bandwidth settings, e.g. 10 Mbps? If not, you can have short transient microbursts which can cause drops. This can happen even when average bandwidth utilization is low. (NB: if these other ports' average utilization is so low, and if you're not already doing so, you could run those ports at 10 Mbps too.)
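For example, on one of the edge ports (interface number purely illustrative), something along the lines of:

switch(config)# interface GigabitEthernet0/1
switch(config-if)# speed 10
switch(config-if)# duplex full

Note that hard-setting speed/duplex disables auto-negotiation on that port, so the attached device needs to be configured to match.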

 

Also, even if you have physically low bandwidth ingress, with a high bandwidth egress, and even if the egress's bandwidth is more than the aggregate of all the ingress, you can still have drops caused by concurrent arrivals.

 

Some other "gotchas": you mention you don't have QoS configured, but are you sure QoS is actually disabled too?

 

Lastly, Cisco has documented, at least for the 3750-X, that uplink ports have the same buffer RAM resources as 24 copper edge ports. Assuming the earlier series are similar, there might be benefit to moving your uplink, now on Gi0/23, to an uplink (SFP) port, if your 3560G has them.

"As far as I know, such drops shouldn't be caused by faulty hardware, but if the hardware is really faulty, you would need to involve TAC."

Ok, thanks.

 

"BTW, all the other interfaces, which have the low bandwidth rates you describe, are physically running at low bandwidth settings on the interface, e.g. 10 Mbps?  If not, you can have short transient micros bursts which can cause drops.  This can happen even when average bandwidth utilization is low.  (NB: if these other ports average utilization is so low, if not already doing so, you could run the ports at 10 Mbps too.)"

No. All ports on the switch connect to devices with 1 Gbps-capable interfaces. They have been left to auto-negotiate and have negotiated at 1000/full. The bandwidth described is more with regard to the actual data throughput of a call. Technically, the VCS devices are licensed to handle 50 simultaneous calls of up to 4 Mbps each, so they could potentially require a bandwidth of 200 Mbps, although it is unlikely that we will see this amount of traffic.
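(For anyone checking the same thing, the negotiated speed/duplex of every port is visible in the usual status view:)

switch# show interfaces status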

 

"Also, even if you have physically low bandwidth ingress, with a high bandwidth egress, and even if the egress's bandwidth is more than the aggregate of all the ingress, you can still have drops caused by concurrent arrivals."

In general, the ingress and the egress should be similar. Think of this as a stub network - one path in and out (via Gi0/23). The VCS devices act as a kind of proxy/router for video traffic, simply terminating inbound legs and generating a next-hop outbound leg. The traffic coming in to the VCS should be the same as the traffic going out.

There will of course be some management traffic, but this will be relatively low volume, and the PathView traffic analyser can generate a burst of UDP packets to simulate voice traffic.

 

"Some other "gotchas" include, you mention you don't have QoS configured, but you're sure QoS is disabled too?"

Yes.

switch#show mls qos
QoS is disabled
QoS ip packet dscp rewrite is enabled

I can't see a lot of point enabling QoS on this particular switch. Pretty much all of the traffic passing through it will be QoS-tagged at the same level, so it would ALL be prioritised equally.

Indeed, running a test overnight with these multiple calls live and the PathView port shut down resulted in 0 Total Output Drops. Each leg did suffer a handful of dropped packets end-to-end, but I think I can live with 100 packets dropped in 10 million (roughly 0.001% loss) over a 12-hour period (and this, I suspect, will be happening somewhere else on the network).

 

"Lastly, Cisco has documented, at least for the 3750X, that uplink ports have the same buffer RAM resources as 24 copper edge ports.  Assuming the earlier series are similar, there might be benefit to moving your uplink, now on g0/23, to an uplink port (if your 3650G has them)."

Unfortunately, no can do. We are limited to the built-in copper ports on the switch as we have no SFP modules installed.

 

Apologies about the formatting - this is yet another thing that has been broken in these new forums. It looks a lot better in the Reply window than it does in this normal view.

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Ok, from what you describe, your average bandwidth utilization is low, but again, microbursts from other devices can cause drops. If the other devices' average bandwidth utilizations are that low, then, also again, you might try setting their ports to a lower physical bandwidth. What this ensures is that a microburst is "metered out" or "shaped".

 

Ok, QoS is disabled. I'm not suggesting you enable it.

 

If you have no SFP modules, you can buy one. Assuming you're using copper, copper SFPs are usually the least expensive. (Try slowing the other ingress ports first, though, as it doesn't require you to purchase anything.)
