6509 1P3Q8T Output drops on gig ports set to 100Mb

JOHN WAITE
Level 1

Hi,

I'm trying to determine why we are seeing output drops on some of our interfaces that connect to servers with 100Mb NICs.

6509 with WS-X6748-GE-TX

Example:

GigabitEthernet1/3/25 is up, line protocol is up (connected)
  Hardware is C6k 1000Mb 802.3, address is 0000.0000.0018 (bia 0021.d8e7.1948)
  Description: SRV-VAN-KDCM1 PRIMARY A6-21
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100/1000BaseT
  input flow-control is off, output flow-control is off
  Clock mode is auto
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output never, output hang never
  Last clearing of "show interface" counters 00:24:40
  Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 5895
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 12000 bits/sec, 2 packets/sec
     0 packets input, 0 bytes, 0 no buffer
     Received 0 broadcasts (0 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     11497 packets output, 10231823 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

Interface GigabitEthernet1/3/25 queueing strategy:  Weighted Round-Robin
  Port QoS is enabled
Trust boundary disabled

  Port is untrusted
  Extend trust state: not trusted [COS = 0]
  Default COS is 0
    Queueing Mode In Tx direction: mode-cos
    Transmit queues [type = 1p3q8t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       01         WRR                 08
       02         WRR                 08
       03         WRR                 08
       04         Priority            01

    WRR bandwidth ratios:   47[queue 1]  42[queue 2]  11[queue 3]
    queue-limit ratios:     55[queue 1]  25[queue 2]   5[queue 3]  15[Pri Queue]

    queue tail-drop-thresholds
    --------------------------
    1     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    2     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    3     100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]

    queue random-detect-min-thresholds
    ----------------------------------
      1    80[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      2    80[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      3    60[1] 70[2] 80[3] 90[4] 100[5] 100[6] 100[7] 100[8]

    queue random-detect-max-thresholds
    ----------------------------------
      1    100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      2    100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      3    70[1] 80[2] 90[3] 100[4] 100[5] 100[6] 100[7] 100[8]

    WRED disabled queues:  

    queue thresh cos-map
    ---------------------------------------
    1     1      1
    1     2     
    1     3     
    1     4     
    1     5     
    1     6     
    1     7     
    1     8     
    2     1      0
    2     2     
    2     3     
    2     4     
    2     5     
    2     6     
    2     7     
    2     8     
    3     1      2
    3     2      3
    3     3      6
    3     4      7
    3     5     
    3     6     
    3     7     
    3     8     
    4     1      4 5

    Queueing Mode In Rx direction: mode-cos
    Receive queues [type = 1q8t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       01         WRR                 08

    WRR bandwidth ratios:  100[queue 1]
    queue-limit ratios:    100[queue 1]

    queue tail-drop-thresholds
    --------------------------
    1     100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]

    queue thresh cos-map
    ---------------------------------------
    1     1      0 1 2 3 4 5 6 7
    1     2     
    1     3     
    1     4     
    1     5     
    1     6     
    1     7     
    1     8    


  Packets dropped on Transmit:
    BPDU packets:  0

    queue              dropped  [cos-map]
    ---------------------------------------------

    1                        0  [1 ]
    2                     5895  [0 ]
    3                        0  [2 3 6 7 ]
    4                        0  [4 5 ]

  Packets dropped on Receive:
    BPDU packets:  0

    queue              dropped  [cos-map]
    ---------------------------------------------
    1                        0  [0 1 2 3 4 5 6 7 ]

interface GigabitEthernet1/3/25
description SRV-VAN-KDCM1 PRIMARY A6-21
switchport
switchport access vlan 47
switchport mode access
no logging event link-status
wrr-queue bandwidth 47 42 11
wrr-queue queue-limit 55 25 5
wrr-queue random-detect min-threshold 1 80 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 2 80 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 3 60 70 80 90 100 100 100 100
wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 2 100 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 3 70 80 90 100 100 100 100 100
wrr-queue cos-map 1 1 1
wrr-queue cos-map 2 1 0
wrr-queue cos-map 3 1 2
wrr-queue cos-map 3 2 3
wrr-queue cos-map 3 3 6
wrr-queue cos-map 3 4 7
priority-queue cos-map 1 4 5
mls qos vlan-based
no cdp enable
spanning-tree portfast edge
spanning-tree bpduguard enable
end

Server traffic is very low (often under 10% of the port speed), but we are seeing drops in Q2 every 10-20 minutes. Any thoughts?

1 Reply

eorfanos
Level 1

Hi John,

Is the server traffic unmarked? COS 0 maps to queue 2, and the number of drops in that queue corresponds directly to the total output drops on the interface.
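
If it helps to watch just the per-queue counters while you troubleshoot, something along these lines should work (standard IOS output filtering; the exact regex may need adjusting on your release):

show queueing interface GigabitEthernet1/3/25 | begin Packets dropped on Transmit

Clearing the counters first with clear counters GigabitEthernet1/3/25 should also make it easier to see how quickly the queue 2 drops are accumulating (I believe that resets the per-queue drop counters as well, but double-check on your image).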

When the output was gathered, the 5 minute output rate was quite low. Are you able to change the load-interval to 30 seconds for a more accurate idea of the traffic rate? At the 10-20 minute mark we may be seeing a burst of traffic.
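
For reference, the change I have in mind is just the standard interface command:

interface GigabitEthernet1/3/25
 load-interval 30

That gives you 30 second input/output rate averages in show interface instead of the 5 minute default, which makes short bursts much easier to spot.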

Also, looking at your configured queue-limit and bandwidth ratios, traffic that maps to queue 1 and to queue 2 should have a similar profile (i.e. rate and bursts). Is that correct? If not, we may need to start tweaking those values.
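
Purely as an illustration of the sort of tweak I mean (the numbers below are made up and would need to be chosen from the real traffic profile, and please check whether changing queue ratios is disruptive on the 6748 before doing it in production):

interface GigabitEthernet1/3/25
 ! illustrative values only - shift some buffer/bandwidth from queue 1 to queue 2
 wrr-queue queue-limit 45 35 5
 wrr-queue bandwidth 40 49 11

Since your unmarked server traffic lands in queue 2 but queue 1 currently gets the larger share of buffer and bandwidth, that is the direction I would look at first.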

Is the scenario the same for the other interfaces that are also seeing the drops?
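
A quick way to check them all in one go (output filter again, the regex may need tweaking on your image):

show interface | include line protocol|Total output drops

That lists each interface followed by its total output drops, so you can see whether it is only the 100Mb server ports that are affected.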

Elli.
