Overrun Errors on ASA 5550

Peter_T
Level 1

I have been getting overrun errors on 3 different ASA 5550 HA pairs with traffic rates of less than 100 Mbps total.  One TAC engineer told me to split the traffic between the two slots so that traffic comes in one and exits the other, because the 5550 was designed to work that way for maximum throughput.  Another TAC engineer told me to enable Ethernet flow control to alleviate the overrun errors because the traffic was bursty, but that doesn't seem to address the root cause of the problem either.  TCP traffic is bursty by nature and has its own flow control mechanism.  I can't find any detailed explanation of why traffic needs to be split at 100 Mbps when the marketing throughput number is 1.2 Gbps.  Is this a design flaw or limitation?  Is there a way to alleviate overrun errors?

29 Replies

Marcin Latosiewicz
Cisco Employee

Good place to start:

http://www.cisco.com/en/US/docs/security/asa/quick_start/5500/5500_quick_start.html#wp35995

Shaping, flow control, and other mechanisms can normally alleviate some of the overflow issues.

For the rest, you need to check what is causing the overflows; NetFlow, Wireshark, or syslog analysis is a good place to start.
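
For example, a packet capture on the ASA can show whether the input is arriving in bursts; the capture name and interface below are just placeholders:

capture BURST interface inside buffer 2000000 circular-buffer
show capture BURST
no capture BURST

Syslog also has to be turned on first (logging enable, plus a buffered or external destination such as logging buffered informational) before it can tell you anything about drops.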

Additional note: if you don't have it configured already, enable unicast RPF on all interfaces :-)
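
For reference, uRPF on the ASA is enabled per interface, along these lines (the interface names here are placeholders for your own):

ip verify reverse-path interface inside
ip verify reverse-path interface outside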

HTH,

M.

Thanks so much for the info, Marcin.  The quick start doc basically said the same thing as the TAC person: "For maximum throughput, configure the ASA so that traffic is distributed equally between the two buses. Lay out the network so that traffic enters through one bus and exits through the other. "  But do you know the reason why that is necessary?  Can you please elaborate on how syslog can reveal what is causing the overflow?  How does enabling RPF help with alleviating overflow?

-Peter

Patrick0711
Level 3

Are you seeing 'no buffer' counters on the interfaces as well?  What is the low count for the 1550 byte memory block in the output of 'show blocks'?

The TAC engineer suggested that you split traffic between the different interface modules because they both have their own internal backplane interfaces (Internal-Data0/0 and Internal-Data1/0).  However, this will only help if you're actually overrunning the internal interfaces.
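
A quick way to check the backplane interfaces for overruns is to filter the detailed interface output, something like this (the filter expression is just an example):

show interface detail | include Interface|overrun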

TCP flow control and overrunning an interface are unrelated.  TCP flow control is determined by the client's and server's receive buffers/windows, not by intermediary devices.  Additionally, it only dictates how much data can be sent without an ACK, but doesn't specify the rate at which the data is sent.

My concern is that there are numerous cases where overruns occur even though there are plenty of 1550-byte memory blocks available and the interface doesn't show any 'no buffer' counters.  I can't imagine why an interface would be overrun in this scenario, but I have never been able to find a conclusive answer from Cisco.

Thanks for your reply, Patrick.  The 'no buffer' counter is 0, and the 1550-byte block counts look normal.

asa00k/pri/act# show interface gig 1/1

Interface GigabitEthernet1/1 "inside", is up, line protocol is up

  Hardware is VCS7380 rev01, BW 1000 Mbps, DLY 10 usec

        Auto-Duplex(Full-duplex), Auto-Speed(1000 Mbps)

        Media-type configured as RJ45 connector

        MAC address 0025.4538.83cd, MTU 1500

        IP address 10.174.1.253, subnet mask 255.255.255.0

        131235122531 packets input, 66053054633431 bytes, 0 no buffer

        Received 147575117 broadcasts, 0 runts, 0 giants

        34201765 input errors, 0 CRC, 0 frame, 34201765 overrun, 0 ignored, 0 abort

-----------------------------------------------------------------------^^^^^^^^^^^---------------------------------

        0 L2 decode drops

        214442064068 packets output, 47534402931048 bytes, 0 underruns

        0 output errors, 0 collisions, 0 interface resets

        0 late collisions, 0 deferred

        0 input reset drops, 0 output reset drops

        0 rate limit drops

        input queue (blocks free curr/low): hardware (0/0)

        output queue (blocks free curr/low): hardware (0/0)

  Traffic Statistics for "inside":

        131189361555 packets input, 63641462929613 bytes

        214411432424 packets output, 43673527445793 bytes

        331334100 packets dropped

      1 minute input rate 7085 pkts/sec,  3171622 bytes/sec

      1 minute output rate 12327 pkts/sec,  1481679 bytes/sec

      1 minute drop rate, 1 pkts/sec

      5 minute input rate 7797 pkts/sec,  3358887 bytes/sec

      5 minute output rate 13731 pkts/sec,  1683791 bytes/sec

      5 minute drop rate, 1 pkts/sec

asa00k/pri/act# show blocks

  SIZE    MAX    LOW    CNT

     0   1450   1386   1447

     4    100     99     99

    80    400    336    400

   256   1612   1497   1612

  1550   7296   6013   7037

  2048   2100   1577   2100

  2560    164    164    164

  4096    100    100    100

  8192    100    100    100

16384    110    110    110

65536     16     16     16

Hello,

Is there a way you can hardcode the interface speed and duplex?
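
In case it helps, hardcoding would look something like this on the ASA (the interface name is just an example; the switch port facing it should be set to match):

interface GigabitEthernet1/1
 speed 1000
 duplex full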

Julio Carvajal
Senior Network Security and Core Specialist
CCIE #42930, 2xCCNP, JNCIP-SEC

Thanks for your reply.  Can you please elaborate on how hardcoding the speed and duplex will help in this case?  I see no CRC errors and no collisions. 

I am going to do something better: I am going to give you the link that I used the first time I learned about it!

http://www.cisco.com/en/US/products/hw/vpndevc/ps2030/products_tech_note09186a008009491c.shtml#speed

Regards,

Do rate all the helpful posts

Julio Carvajal
Senior Network Security and Core Specialist
CCIE #42930, 2xCCNP, JNCIP-SEC

Thanks so much for the link - lots of useful info.  Unfortunately, I did check speed and duplex and they matched on both sides, and I also saw no CRC errors and no collisions, so I ruled that out.  CPU utilization is also very low, so enabling RPF probably won't help either.  The 'no buffer' count is 0, so I am not sure why we are getting overrun errors at very low traffic rates.  We saw overrun errors at as low as 5 Mbps with a 1-minute sampling interval.  That doesn't tell whether the traffic is instantaneously bursty at some point, but how bursty can it be with 5 Mbps worth of traffic?  And how much burstiness can the ASA tolerate?  I have been trying to get a logical explanation for the overrun errors from Cisco TAC for about 2 weeks, but they just danced around the question.  I believe there is an inherent design limitation or software bug that causes this.  I am afraid that without getting to the root cause, turning on flow control just masks the problem.
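
For what it's worth, a simple way to correlate the overrun counter with the traffic rates over time is to sample both periodically (the interface name is just an example):

show interface GigabitEthernet1/1 | include overrun
show traffic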

Borman Bravo
Level 1

Hi, were you able to resolve this issue?  I'm having the same problem.  Thanks

Hi, I am getting a lot closer, but not quite there yet.  It appears that the ASA 5550 cannot tolerate highly bursty traffic even for a very short period of time (I saw errors with less than 10 Mb worth of bursty traffic).  Turning on flow control will clear up the input errors, but I am still trying to gather the data to understand the full impact of flow control on performance.  I am very surprised that the input buffers cannot handle such a low level of bursty traffic.

Just got off the phone with the TAC engineer; he was very helpful in providing me with the performance test data.  It looks like turning on flow control is the only option.

Thanks for sharing this information.  I tried looking up the commands to turn on flow control on the ASA but can't find instructions for 8.3.  Would you mind sharing those as well?  I appreciate it.

You need to use 8.2.5 or later, or 8.4.3 or later.  The "feature" is not supported in 8.3.  The commands are:

On the ASA:

interface GigabitEthernet0/2

flowcontrol send on

On the Ethernet Switch (if you are using Cisco):

int gigabitEthernet 0/2

flowcontrol receive on

Make sure you do it during maintenance hours or when nobody is looking, as this will reset all of your connections.
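
To confirm it took effect (the interface numbers are just examples), you can check the switch side and watch the overrun counter on the ASA:

show flowcontrol interface gigabitEthernet 0/2
show interface GigabitEthernet0/2 | include overrun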

Thanks again, I appreciate your help.

Patrick0711
Level 3

Maybe someone from Cisco can actually chime in to better explain how 'bursty' traffic can overrun an interface even though there are sufficient 1550-byte memory blocks, zero 'no buffer' counters, low throughput, and low CPU utilization.

I've seen this same issue many times in the past and have always received the same response about bursty traffic and flow control.  Obviously, there's a limiting factor somewhere in the ASA architecture.  Why should I need to use flow control and have the switch buffer the data if, supposedly, there is plenty of buffer space and memory blocks on the ASA?
