What is it with QoS and flowcontrol that you cannot do both at the same time on a 3550 switch? Can anyone tell me why? I cannot really understand what they have to do with each other.
This flowcontrol, is it the GigE flowcontrol, as in flowcontrol receive on? Or is it something else they are talking about?
Does this restriction apply only to the 3550 platform, or is it universal? I have 4500s in my Gig core. I have phone traffic sharing links with bulk data transfer, so I need QoS and flow control. Is the 4500 affected by this issue?
"This flowcontrol, is it the GigE flowcontrol, as in flowcontrol receive on? Or is it something else they are talking about?"
Yes, it's the 802.3z flow control, according to the configuration guide. It's mentioned in both the QoS chapter and the interface chapter. The latter, for example, says: "Note You must not configure both IEEE 802.3z flowcontrol and quality of service (QoS) on a switch. Before configuring flowcontrol on an interface, use the no mls qos global configuration command to disable QoS on the switch."
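The order the guide implies looks like this (a sketch only; the interface name is illustrative, and you'd obviously want to weigh losing all the mls qos features before doing this on a production switch):

```
! 3550: per the quoted note, globally disable QoS first,
! then enable 802.3z flow control on the interface.
Switch(config)# no mls qos
Switch(config)# interface gigabitethernet0/1
Switch(config-if)# flowcontrol receive on
```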
"What is it with QoS and flowcontrol that you cannot do both at the same time on a 3550 switch? Can anyone tell me why? I cannot really understand what they have to do with each other."
I suspect the thinking is that with the additional QoS features, you no longer need the simpler back pressure model provided by Ethernet flow control. Also consider that Ethernet flow control is an all-or-nothing model. E.g. if you have a host sending different-priority traffic, how do you give the high priority traffic precedence over the low priority traffic just by stopping the host from sending?
The solution for your link shared by phone and bulk traffic is to place the former in the 3550's expedite queue and the latter in another queue.
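A sketch of what that could look like on a 3550, assuming the voice traffic already arrives marked CoS 5 / EF from the phones (interface name and markings are assumptions, not your config):

```
! Enable QoS globally, trust the incoming DSCP marking,
! map CoS 5 (voice) to egress queue 4, and make queue 4
! the expedite (strict-priority) queue on this port.
Switch(config)# mls qos
Switch(config)# interface gigabitethernet0/1
Switch(config-if)# mls qos trust dscp
Switch(config-if)# wrr-queue cos-map 4 5
Switch(config-if)# priority-queue out
```

With this in place, voice frames bypass the WRR scheduling entirely while bulk traffic contends for the remaining queues.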
I am not an expert, but I came across a document which discusses why flow control is not compatible with QoS:
While flow control offers the clear benefit of no packet loss, it also introduces a problem for quality of service. When a source port receives an Ethernet flow control signal, all microflows originating at that port, well behaved or not, are halted. A single packet destined for a congested output can block other packets destined for uncongested outputs. The resulting head-of-line blocking phenomenon means that quality of service cannot be assured with high confidence when flow control is enabled.
Thanks for participating in this discussion guys, both comments are useful, although I am still not convinced.
I understand now that we are talking about the PAUSE feature defined in 802.3x (100 Mbps) and 802.3z (Gig). (I temporarily got confused with 802.1x, which of course is something else entirely.)
I understand that flow control can mean that QoS can no longer be guaranteed, but I still don't see why the two features shouldn't work in harmony. What I am after is flow control so that I don't drop any frames, but when flow control is released, the switch should still look to the expedited queue before servicing any others. I don't really see why that should cause any head-of-line blocking.
Does anyone know how much jitter can be introduced by flow control, assuming the expedited queue is the first to be serviced once the PAUSE is released?
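For scale, an upper bound per PAUSE frame can be computed from the frame format itself: the 802.3x pause_time field is 16 bits, in units of 512 bit times. A quick back-of-the-envelope (this bounds a single maximal PAUSE only; actual jitter also depends on queue depth and how often PAUSE frames are refreshed):

```python
# Worst-case halt implied by one maximal 802.3x PAUSE frame.
# pause_time is a 16-bit field counted in "quanta" of 512 bit times.

def max_pause_seconds(link_bps: int) -> float:
    quantum_bits = 512        # one pause quantum = 512 bit times
    max_quanta = 0xFFFF       # largest value of the 16-bit pause_time field
    return max_quanta * quantum_bits / link_bps

print(max_pause_seconds(1_000_000_000))  # ~33.6 ms at 1 Gbps
print(max_pause_seconds(100_000_000))    # ~336 ms at 100 Mbps
```

So even one worst-case PAUSE is far larger than the jitter budget of a voice flow, which suggests why the expedite queue alone can't save you once the whole port is halted.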
Looking at the Zarlink chip documentation, it seems to say that with flow control enabled it uses only one queue, and therefore has the head-of-line blocking problem. I can understand that. But I am not sure whether that applies to the queueing schemes on the 4500 switches.
Or is it saying that the flow control applies only to the lowest priority queue? If that is the case, then I actually want QoS and flow control at the same time. My first priority is the expedited queue. The bulk data can suffer packet loss, but if I can avoid most of the packet loss by flow control, that is even better. And I can still accept a small amount of packet loss on the bulk queue if that is the price of having QoS and flow control at the same time.
This is a very useful discussion, so if you have any more information, please come back to me with it. I might also open a TAC case in parallel, and I'll post any more information here.
Currently I have QoS on my core links, but no flow control. I therefore have packet drops that could be avoided. I need to make a decision before my scheduled downtime on Saturday whether to introduce flow-control into my core links.
My understanding of Ethernet flow control was that it was really intended to pause the sending host, not to manage buffers between switches. Only the actual sending host can ideally be blocked to avoid drops, since it can signal to the individual sourcing application that the path is blocked. (Actually not ideal when you consider real-time apps.)
Consider a congested link between distribution and core, i.e. typical aggregation. Which distribution downstream access link do you issue the pause to? Unless you're tracking the actual impact of each flow, the distribution could send a pause to all its access downlinks. Unlikely to be what we desire. In an aggregation situation with a larger bandwidth uplink, we would likely need to pause multiple downstream access links. But whether it's one downstream access link, multiple links, or all links, all traffic from those access link(s) is blocked. This creates a head-of-line blocking issue for all the other access flows that are not contributing to the congestion.
You mention you want to avoid drops. Without source quench or ECN, drops shouldn't be avoided, they should be managed. Without some type of source flow control, networks that have oversubscription can easily reach congestion collapse, where the network is full of traffic but very little is effectively being delivered. A simple example of this is what happens on shared Ethernet under high load. Shared Ethernet is especially interesting since some switches, as Padmanabhan's attachment also mentions, can provide back pressure to half-duplex connected devices by making the link appear busy.
Today, most often the best way to indicate congestion to flows is by dropping packets such that all the available bandwidth is utilized without overdriving it. This does incur some loss due to the drops, but it provides the best "goodput". In the near future (within 5 years?), ECN should accomplish the same but without packet loss. Some interim approaches (if interim) are the newer TCP stacks that analyze congestion based on high-resolution ACK timings; Microsoft's Compound TCP (CTCP) in Vista does this.
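For what it's worth, ECN can already be turned on at the host end today on Linux; whether it helps depends on the path also marking CE instead of dropping, so treat this as an experiment, not a fix:

```shell
# net.ipv4.tcp_ecn: 0 = disabled, 1 = request ECN on outgoing
# connections, 2 = accept ECN only if the peer requests it
# (a common default). Requires root to change.
sysctl -w net.ipv4.tcp_ecn=1
```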