Adjusting queueing on a 6509 GigE port

Mar 18th, 2009

We have been having some trouble with a disk-to-disk backup from one device to another.

One of the ports is showing output drops, and I am not sure how to interpret the queueing output.

GigabitEthernet9/10 is up, line protocol is up
  Hardware is C6k 1000Mb 802.3, address is 0002.7e39.8ecd (bia 0002.7e39.8ecd)
  Description: 10.1.11.52 test spot
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s
  input flow-control is off, output flow-control is on
  Clock mode is auto
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:18, output hang never
  Last clearing of "show interface" counters 1w6d
  Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 565750
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 1000 bits/sec, 1 packets/sec
     3302909568 packets input, 236242332513 bytes, 0 no buffer
     Received 21373 broadcasts (0 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     9575391701 packets output, 14301832898411 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

#sh queueing interface g9/10
Interface GigabitEthernet9/10 queueing strategy: Weighted Round-Robin
  Port QoS is enabled
  Port is untrusted
  Extend trust state: not trusted [COS = 0]
  Default COS is 0
    Queueing Mode In Tx direction: mode-cos
    Transmit queues [type = 1p2q2t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       1         WRR low            2
       2         WRR high           2
       3         Priority           1

    WRR bandwidth ratios:  100[queue 1] 255[queue 2]
    queue-limit ratios:     70[queue 1]  15[queue 2]  15[Pri Queue]*same as Q2

    queue random-detect-min-thresholds
    ----------------------------------
      1    40[1] 70[2]
      2    40[1] 70[2]

    queue random-detect-max-thresholds
    ----------------------------------
      1    70[1] 100[2]
      2    70[1] 100[2]

    queue thresh cos-map
    ---------------------------------------
    1     1      0 1
    1     2      2 3
    2     1      4 6
    2     2      7
    3     1      5

    Queueing Mode In Rx direction: mode-cos
    Receive queues [type = 1p1q4t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       1         Standard           4
       2         Priority           1

    queue tail-drop-thresholds
    --------------------------
    1     100[1] 100[2] 100[3] 100[4]

    queue thresh cos-map
    ---------------------------------------
    1     1      0 1 2 3 4 5 6 7
    1     2
    1     3
    1     4
    2     1

  Packets dropped on Transmit:
    BPDU packets:  0

    queue thresh    dropped  [cos-map]
    ---------------------------------------------------
    1     1          565750  [0 1 ]
    1     2               0  [2 3 ]
    2     1               0  [4 6 ]
    2     2               0* [7 ]
    3     1               0* [5 ]

    * - shared transmit counter

  Packets dropped on Receive:
    BPDU packets:  0

    queue thresh    dropped  [cos-map]
    ---------------------------------------------------
    1     1               0  [0 1 2 3 4 5 6 7 ]

    * - shared receive counter

Could I adjust the queueing to make this more efficient?

Joseph W. Doherty Wed, 03/18/2009 - 13:02

"Could I adjust the queueing to make this more efficient?"

Perhaps. Much depends on whether the problem is insufficient bandwidth, in which case drops are going to happen regardless, or insufficient buffer allocation on the device. For the latter, increasing the buffer allocation for this traffic might decrease the drops.

You also need to consider whether the drop percentage, not the absolute number, is high enough to be an issue. Assuming your counters haven't wrapped, your drop rate is only about 0.006%, which often shouldn't be an issue.
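For reference, that percentage comes straight from two counters in the show interface output above:

565,750 total output drops / 9,575,391,701 packets output ≈ 0.0000591, i.e. roughly 0.006%.

If the drops do turn out to matter, one option on a 1p2q2t port is to give queue 1 a larger share of the transmit buffer and relax its WRED thresholds, since the port is untrusted and every frame is therefore queued as CoS 0 into queue 1/threshold 1 (the only transmit drop counter that is incrementing). A minimal sketch, assuming the line card accepts these interface-level commands; the ratios and thresholds below are illustrative only and would need tuning:

interface GigabitEthernet9/10
 ! illustrative values only - raise queue 1's share of the transmit buffer
 ! (current ratios are 70/15, with the priority queue sharing queue 2's value)
 wrr-queue queue-limit 80 10
 ! relax WRED on queue 1 so it starts and finishes dropping later
 ! (thresholds are percentages of the queue's depth)
 wrr-queue random-detect min-threshold 1 60 80
 wrr-queue random-detect max-threshold 1 90 100

Check the ranges your supervisor/line card actually supports with the ? help before applying anything.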

wilson_1234_2 Fri, 03/20/2009 - 06:55

We moved the devices to a Dell 48-port switch with no QoS applied.

Is it possible the QoS policy could be affecting the throughput of this data?

The jobs run at night, when the switch has very little else it needs to do.

Joseph W. Doherty Fri, 03/20/2009 - 07:23

"Is it possible the qos policy could be affecting the throughput of this data? "

Yes.

(NB: QoS policies can have no effect, a negative effect, or a positive effect. Of course, we aim for the positive effect.)
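One way to test this on the 6500 side (treat it as a sketch; exact output varies by supervisor and line card) is to snapshot the per-queue transmit drop counters before and after a backup window and see whether queue 1/threshold 1, where the untrusted port's CoS 0 traffic lands, is the only counter climbing:

! optional: reset the interface-level counters so "Total output drops" starts from zero
clear counters GigabitEthernet9/10
! snapshot the per-queue transmit drop counters before the backup window
show queueing interface GigabitEthernet9/10 | begin dropped
! ... let the disk-to-disk job run, then repeat and compare ...
show queueing interface GigabitEthernet9/10 | begin dropped
show interfaces GigabitEthernet9/10 | include drops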

pciaccio Fri, 03/20/2009 - 08:11

Your issue may not be queue-size related; it may be flow-control related. I see that you have input flow control off. I would set flow control to on for both send and receive on that port. To verify whether the port is being overrun, use the show interface gig xx/yy flowcontrol command; if there are issues, you will see them there. Enable flow control on both the switch side and on the NIC of the server, NAS, or storage device. Good luck...
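A minimal sketch of what that could look like on the 6509 side, assuming the line card supports the send keyword on this port type (not all do), with verification afterwards:

interface GigabitEthernet9/10
 ! honor pause frames received from the attached server/storage device
 flowcontrol receive on
 ! send pause frames toward the attached device when this port becomes congested
 flowcontrol send on
!
! check the negotiated/operational flow-control state and pause counters
show interfaces GigabitEthernet9/10 flowcontrol

The same setting has to be enabled on the server or storage NIC, as noted above, or the pause frames will simply be ignored.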
