Cisco Support Community
Catalyst 3750 Ingress SPQ/SRR behavior

Do Cisco engineers review this community at all?

I am working on the latest version of the QoS standard for our enterprise and noticed the following conflicting information officially provided by Cisco.

My question relates to ingress/pre-ring Strict Priority Queue (SPQ) logic.

 

The Cisco Catalyst 3750 QoS Configuration Examples document states that the SPQ on ingress is configured and serviced as follows:

mls qos srr-queue input priority-queue 2 bandwidth 10
mls qos srr-queue input bandwidth 90 10
  1. SPQ services Q2 up to the configured 10% of ingress bandwidth
  2. Any excess traffic in Q2 is not dropped, but is serviced by SRR in accordance with the configured weights

For example, a momentary 5Gbps of aggregate ingress EF traffic will be serviced in the following way:

  • SPQ services 10% of the total ring bandwidth, or 3.2Gbps, leaving 1.8Gbps of the EF traffic for SRR processing
  • SRR services the excess 1.8Gbps in accordance with weights Q1 - 90 and Q2 - 10 applied to the remaining 28.8Gbps, such that Q1 gets 25.92Gbps and Q2 gets 2.88Gbps more; the 1.8Gbps of excess EF fits within Q2's share.
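To double-check my arithmetic, here is a rough sketch of that reading in Python (my own illustration, not Cisco's algorithm; the 32Gbps ring capacity and the 90/10 weights come from the example above):

```python
# Sketch of the ingress SPQ/SRR split per the QoS Configuration Examples
# reading. All figures in Gbps; 32 Gbps stack-ring capacity assumed.

RING_BW = 32.0                    # total internal ring bandwidth
PQ_PERCENT = 10                   # "priority-queue 2 bandwidth 10"
WEIGHTS = {"q1": 90, "q2": 10}    # "input bandwidth 90 10"

ef_offered = 5.0                  # momentary aggregate EF (Q2) ingress load

# Step 1: SPQ services Q2 up to its configured share of the ring.
spq_share = RING_BW * PQ_PERCENT / 100         # 3.2 Gbps
spq_serviced = min(ef_offered, spq_share)

# Step 2: excess EF traffic is not dropped; SRR shares the remaining
# ring bandwidth between Q1 and Q2 per the configured weights.
remaining = RING_BW - spq_share                # 28.8 Gbps
total_w = sum(WEIGHTS.values())
q1_srr = remaining * WEIGHTS["q1"] / total_w   # 25.92 Gbps
q2_srr = remaining * WEIGHTS["q2"] / total_w   # 2.88 Gbps

excess_ef = ef_offered - spq_serviced          # 1.8 Gbps, fits in q2_srr

print(f"SPQ serviced: {spq_serviced} Gbps")
print(f"SRR shares:   Q1 {q1_srr} Gbps, Q2 {q2_srr} Gbps")
print(f"Excess EF:    {excess_ef:.1f} Gbps (within Q2's SRR share)")
```

So under this reading the momentary 5Gbps of EF survives: 3.2Gbps via the SPQ and the remaining 1.8Gbps inside Q2's 2.88Gbps SRR share.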

That document also provides diagrams giving an in-depth look into the ingress queuing logic.

Alternatively, the Cisco Medianet Campus Design v4.0 guide provides the following example with comments:

C3750-E(config)#mls qos srr-queue input priority-queue 2 bandwidth 30
! Q2 is enabled as a strict-priority ingress queue with 30% BW
C3750-E(config)#mls qos srr-queue input bandwidth 70 30
! Q1 is assigned 70% BW via SRR shared weights
! Q2 SRR shared weight is ignored (as it has been configured as a PQ)

Basically, they now say the Q2 bandwidth weight is ignored because Q2 is configured as a strict-priority queue. Doesn't that look contradictory?
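To make the contradiction concrete, here is a sketch (again my own illustration, not Cisco's algorithm) of how the Medianet example's numbers play out under each reading:

```python
# Two conflicting readings of the Medianet example: PQ on Q2 at 30%,
# SRR input weights 70/30, on an assumed 32 Gbps ring. Figures in Gbps.

RING_BW = 32.0
PQ_PERCENT = 30
W_Q1, W_Q2 = 70, 30

spq_share = RING_BW * PQ_PERCENT / 100     # 9.6 Gbps reserved for Q2 (PQ)
remaining = RING_BW - spq_share            # 22.4 Gbps left for SRR

# Reading 1 (QoS Configuration Examples): excess Q2 traffic is still
# serviced by SRR, using both configured weights.
r1_q1 = remaining * W_Q1 / (W_Q1 + W_Q2)   # 15.68 Gbps
r1_q2 = remaining * W_Q2 / (W_Q1 + W_Q2)   # 6.72 Gbps

# Reading 2 (Medianet SRND v4.0): Q2's SRR weight is ignored once it is
# a PQ, so Q1 effectively gets all remaining SRR bandwidth.
r2_q1 = remaining                          # 22.4 Gbps
r2_q2 = 0.0                                # nothing beyond the PQ share

print(f"Reading 1: Q1 {r1_q1} / Q2 {r1_q2} Gbps beyond the PQ share")
print(f"Reading 2: Q1 {r2_q1} / Q2 {r2_q2} Gbps beyond the PQ share")
```

The two readings give Q2 very different service beyond its PQ share, which is exactly why I'd like an authoritative answer.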

In my humble opinion, Medianet (or SRND v4.0!!!) provides incorrect information regarding ingress queuing on the Catalyst 3750 platform.

 

I am not sure I can easily test this, given that the internal ring must experience congestion. I don't think I can push more than 32Gbps of traffic into any of my lab 3750 switches.

Also, I don't think this mistake would be critical in my environment, as I don't expect a momentary full-capacity load on those switches... but it could be critical for others.

 

Much appreciated,

Tim

 
