
QoS on 6500

Unanswered Question
Aug 11th, 2008

Hi All,


This question pertains to implementing a QoS policy on a 10Gig interface between two 6509-E chassis.


I have two 6509-E chassis connected together via 10Gig interfaces.

The first chassis is a 6509-E with the following line cards:

slot 1 - WS-X6708-10GE (with a DFC3CXL daughtercard)

slot 5 - VS-S720-10G (Supervisor with PFC3C and MSFC3 daughtercards)

slot 6 - VS-S720-10G (Supervisor with PFC3C and MSFC3 daughtercards)


The second chassis is a 6509-E with the following line cards:

slot 1 - WS-X6704-10GE (with a DFC3B daughtercard)

slot 5 - WS-SUP720-3B (Supervisor with PFC3B daughtercard and MSFC3 daughtercard)


Chassis 1 and chassis 2 are connected via 10Gig interfaces in a point-to-point Layer 3 configuration (i.e., a /30 subnet). I have OSPF running between both devices.
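For reference, the point-to-point link is set up along these lines (addresses are placeholders, not the actual ones):

interface TenGigabitEthernet1/3
 ip address 192.0.2.1 255.255.255.252    ! placeholder /30 address
!
router ospf 1
 network 192.0.2.0 0.0.0.3 area 0        ! placeholder network statement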


I have a policy-map that I am trying to implement on the 10Gig interface. The policy-map looks like:


policy-map CORE-QOS
 class ROUTING
  bandwidth percent 1
 class VOICE
  priority percent 1
 class VOICE-CONTROL
  bandwidth percent 1
 class MISSION-CRITICAL
  bandwidth percent 35
 class MULTIMEDIA
  priority percent 5
 class TRANSACTIONAL
  bandwidth percent 30
 class NETWORK-MANAGEMENT
  bandwidth percent 1
 class SCAVENGER
  bandwidth percent 1
 class BULK-DATA
  bandwidth percent 25
 class class-default
  fair-queue
  random-detect dscp-based



When I apply the above policy to the 10Gig interface I get the following error:


corertr1(config-if)#service-policy output TNET-CORE-QOS
bandwidth percent command is not supported in output direction for this interface
Configuration failed on:
TenGigabitEthernet1/3


I did a bit of fishing around Cisco's website and found the following page (http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SXF/native/configuration/guide/qos.html).


I now understand that each interface has a specific queue structure (WS-X6704-10GE: 8q8t/1p7q8t; WS-X6708-10G-3CXL: 8q4t/1p7q4t) and that DSCP markings can be mapped to CoS markings, and therefore to a certain queue.
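If I've understood that page correctly, the CoS-based approach would look roughly like this (a sketch with illustrative values, not my actual config):

mls qos                                  ! enable QoS globally
mls qos map dscp-cos 46 to 5             ! map EF (DSCP 46) to CoS 5
!
interface TenGigabitEthernet1/3
 mls qos trust dscp                      ! trust incoming DSCP markings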


My questions are:

- Is there a way I can guarantee bandwidth to a particular class?

- Do the WRR weights equate to a percentage of bandwidth, a percentage of CPU, or something else?


I was under the false impression that a 6500 could serve as a purely Layer 3 device, but after a little investigating I have found that the only apparent way of performing QoS on the 10Gig blades is through Layer 2. Is this correct, or am I missing something?


Any help, knowledge or suggestions anyone gives me are greatly appreciated!


Regards,


Brad

Marwan ALshawi Mon, 08/11/2008 - 19:45

First, the WRR weights relate to the port transmit and receive queues. According to the Cisco SRND:


Ingress congestion implies that the combined ingress rates of traffic exceed the switch fabric channel speed, and thus would need to be queued simply to gain access to the switching fabric. On newer platforms, such as the Catalyst 6500 Sup720, this means that a combined ingress rate of up to 40 Gbps per slot would be required to create such an event.

However, to obviate such an extreme event, the Catalyst 6500 schedules ingress traffic through the receive queues based on CoS values. In the default configuration, the scheduler assigns all traffic with CoS 5 to the strict-priority queue (if present); in the absence of a strict-priority queue, the scheduler assigns all traffic to the standard queues. All other traffic is assigned to the standard queue(s) (with higher CoS values being assigned preference over lower CoS values, wherever supported). Additionally, if a port is configured to trust CoS, then the ingress scheduler implements CoS-value-based receive-queue drop thresholds to avoid congestion in received traffic. Thus, even if the extremely unlikely event of ingress congestion should occur, the default settings for the Catalyst 6500 linecard receive queues are more than adequate to protect VoIP and network control traffic.

Therefore, the focus of this section is on Catalyst 6500 egress/transmit queuing design recommendations.

There are currently six main transmit queuing/dropping options for Catalyst 6500 linecards:

• 2Q2T: Indicates two standard queues, each with two configurable tail-drop thresholds.

• 1P2Q1T: Indicates one strict-priority queue and two standard queues, each with one configurable WRED-drop threshold (however, each standard queue also has one nonconfigurable tail-drop threshold).

• 1P2Q2T: Indicates one strict-priority queue and two standard queues, each with two configurable WRED-drop thresholds.

• 1P3Q1T: Indicates one strict-priority queue and three standard queues, each with one configurable WRED-drop threshold (however, each standard queue also has one nonconfigurable tail-drop threshold).

• 1P3Q8T: Indicates one strict-priority queue and three standard queues, each with eight configurable WRED-drop thresholds (however, each standard queue also has one nonconfigurable tail-drop threshold).

• 1P7Q8T: Indicates one strict-priority queue and seven standard queues, each with eight configurable WRED-drop thresholds (on 1p7q8t ports, each standard queue also has one nonconfigurable tail-drop threshold).

Almost all Catalyst 6500 linecards support a strict-priority queue, and when supported, the switch services traffic in the strict-priority transmit queue before servicing the standard queues. When the switch is servicing a standard queue, after transmitting a packet, it checks for traffic in the strict-priority queue. If the switch detects traffic in the strict-priority queue, it suspends its service of the standard queue and completes service of all traffic in the strict-priority queue before returning to the standard queue.

Additionally, Catalyst 6500 linecards implement CoS-value-based transmit-queue drop thresholds to avoid congestion in transmitted traffic. WRED thresholds can also be defined on certain linecards, where the CoS value of the packet (not the IP Precedence value, although they likely match) determines the WRED weight.
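So to answer the WRR question directly: the weights are relative bandwidth shares among the standard transmit queues, not CPU shares. They are tuned per interface with commands along these lines (the weights and CoS mappings here are illustrative, not recommended values):

interface TenGigabitEthernet1/3
 wrr-queue bandwidth 5 25 70             ! relative shares for the standard queues (illustrative)
 priority-queue cos-map 1 5              ! CoS 5 -> strict-priority queue
 wrr-queue cos-map 1 1 0 1               ! CoS 0 and 1 -> queue 1, threshold 1 (illustrative)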

In your case, why don't you apply the policy in the inbound direction? You can apply your policy like this:

corertr1(config-if)#service-policy input TNET-CORE-QOS

And the link you found is a very good reference.

Good luck.

Please rate if helpful.


