Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to get an update on QoS on the Catalyst 6500 with Cisco expert Patrick Warichet. Patrick is a technical marketing engineer for the campus switching systems technology group. He specializes in Cisco IOS modularity and has expertise in Quality of Service (QoS) and hardware architecture for the Catalyst 6500. He has worked for the Data Center, Switching & Services (DCSS) group within customer operations, where he focused on customer-critical issues on the Catalyst 6500. Prior to joining DCSS, Patrick was a customer support engineer for the LAN Switching group in Brussels, Belgium. Prior to joining Cisco, he was a consultant for Digital Equipment Corporation, focusing on European Commission network operations. Patrick is an industrial engineer from the Industrial Engineering School of Brussels, Belgium.
Remember to use the rating system to let Patrick know if you have received an adequate response.
Patrick might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through July 17, 2009. Visit this forum often to view responses to your questions and the questions of other community members.
I'm interested in doing shaping on the Cat 6500, but I don't know what kind of hardware I need, or whether shaping can be done in software on the Supervisor.
At this point in time only the WAN family of modules supports shaping; those are the SIP-200, SIP-400, FlexWAN, etc.
The reason is that the supervisor supports only policing at the PFC level; the WAN modules have a dedicated processor that can achieve shaping.
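As a rough illustration of the "shape first, then LLQ" model those modules support, a hierarchical MQC sketch follows (the class names, rates, and interface are hypothetical; exact syntax can vary by module and IOS version):

```
! Child policy: LLQ for voice (hypothetical class name)
class-map match-all VOIP
 match dscp ef
policy-map CHILD-LLQ
 class VOIP
  priority
! Parent policy: shape all traffic to the contracted rate (50 Mbps here)
policy-map PARENT-SHAPE
 class class-default
  shape average 50000000
  service-policy CHILD-LLQ
! Applied outbound on a (hypothetical) SIP-400 interface
interface GigabitEthernet4/0/0
 service-policy output PARENT-SHAPE
```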
On the newer line cards (6708, 6716) we do support a form of shaping called SRR. It is more a buffer management method than class-based traffic shaping. If you are interested, please have a look at:
What's the best way to ensure that your policies on a 6500 are working correctly?
I'm using per-microflow service policies on my interfaces with src-only masks. The output of "show mls qos ip interface" looks similar to:
QoS Summary [IPv4]: (* - shared aggregates, Mod - switch module)
Int Mod Dir Class-map DSCP Agg Trust Fl AgForward-By AgPoliced-By
Gi8/46 5 In TELEPRESEN 0 0* dscp 5 85456780540 0
Gi8/46 5 In VOIP-RTP 46 0* No 1 85456780540 0
Gi8/46 5 In VOIP-SIGNA 24 0* No 2 85456780540 0
Gi8/46 5 In VOIP-OTHER 0 0* No 3 85456780540 0
Gi8/46 5 In class-defa 0 0* No 4 85456780540 0
All ports on the switch have the same numbers associated, meaning I don't have a ready way to see which ports were sending traffic that was being passed or policed down. I wouldn't expect 100% of my traffic to be matched by all of my various classes.
On the 3750 platform, the "sh mls qos int giXXX statistics" command shows me exactly which DSCP values packets are marked with. I realize this is probably due to hardware limitations, but it'd be nice to have a way to see that the policers are doing their jobs.
Unfortunately, the "show mls qos ip type mod/number" command does not show the microflow policing statistics; it only shows the aggregate policing statistics. You may consider using NetFlow and a NetFlow collector to get QoS statistics for microflows.
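A minimal sketch of exporting NetFlow from the PFC to an external collector (the collector address and VLAN are hypothetical, and the exact flow-mask commands vary by supervisor and IOS version):

```
! Enable NetFlow on the PFC with a full flow mask
mls netflow
mls flow ip interface-full
! Export version 5 records to a (hypothetical) collector
ip flow-export version 5
ip flow-export destination 192.0.2.10 9996
! Enable flow accounting on the routed interface of interest
interface Vlan100
 ip flow ingress
```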
I have one Cat6509 with a SUP720 in each site, connected via Metro Ethernet. The committed speed from the service provider is 50 Mbps. Our Cat6509 is connected directly to a FastEthernet (100 Mbps) interface of the SP switch. We have VoIP running across these two sites. What is the best QoS policy for my scenario, assuming there is no QoS on the Metro Ethernet network?
In fact, I am looking for a switch QoS feature that is equivalent to the hierarchical QoS of IOS routers (shape first, then LLQ).
HQoS and LLQ are only available on our WAN modules.
On the LAN modules you can always create an aggregate policer using at least two classes: one for your VoIP traffic (I also suggest you send that traffic to the PQ) and another for the remainder of the traffic (more classes can be used, and we support up to 1023 policers).
One of the main differences from other QoS implementations is that the Cat6500 will always have the policer active, even if there is plenty of bandwidth available (something you may want to take into account).
Another difference is the strict priority queue (PQ vs. LLQ): the strict priority queue is served as soon as a packet is presented to it, and the scheduler stops serving the other WRR queues.
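A minimal sketch of that approach, assuming the 50 Mbps contract is split as 3 Mbps voice / 47 Mbps everything else (class names, rates, burst sizes, and the interface are hypothetical and would need tuning for a real deployment):

```
mls qos
class-map match-all VOIP-RTP
 match dscp ef
policy-map METRO-50M
 class VOIP-RTP
  police 3000000 93750 conform-action transmit exceed-action drop
 class class-default
  police 47000000 1468750 conform-action transmit exceed-action drop
! Applied on the (hypothetical) port facing the SP switch
interface GigabitEthernet1/1
 mls qos trust dscp
 service-policy output METRO-50M
```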
Thanks for your reply.
"On the LAN modules you can always create an aggregate policer using at least two classes: one for your VoIP traffic (I also suggest you send that traffic to the PQ) and another for the remainder of the traffic (more classes can be used, and we support up to 1023 policers)."
Do you mean that I need to configure an egress aggregate policer on the SVI, and a WRR scheduler with PQ on the Layer 2 LAN interface connected to the SP switch?
If I interpret your advice correctly, can I assume that 50 Mbps egress policing on the SVI can lead to a congested state on the 100 Mbps Layer 2 LAN interface connected to the SP switch?
What Patrick might be suggesting is that you can use policers to ensure you don't oversubscribe the SP bandwidth. For instance, if you have 50 Mbps, you could police VoIP at 3 Mbps and everything else at 47 Mbps. Of course, this approach doesn't allow one class of traffic to utilize unused bandwidth from another class, and there's the impact of policing traffic vs. queuing it.
On the 6500, as Patrick has noted in other posts, the "WAN" cards provide much more extensive QoS support, including, I believe, what you would prefer to implement (and what's commonly found on many of Cisco's software routers).
As an inexpensive alternative, you might drop an 8-port 2960 in-line between the 6500 and the SP switch. You can use "srr-queue bandwidth limit" to "shape" egress bandwidth (NB: the hardware uses increments of six) and then use "priority-queue out" for real-time traffic such as VoIP, with the other three port queues for the remaining traffic.
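A sketch of that 2960 configuration (the interface is hypothetical; "srr-queue bandwidth limit 50" caps egress at roughly 50% of the 100 Mbps port speed, within the hardware's granularity):

```
mls qos
interface FastEthernet0/1
 description Uplink to SP switch (hypothetical)
 ! Keep DSCP markings from the 6500
 mls qos trust dscp
 ! Limit egress to ~50% of port speed (~50 Mbps on FastEthernet)
 srr-queue bandwidth limit 50
 ! Enable the egress strict-priority queue for real-time traffic
 priority-queue out
```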
I just need your advice. In the following months I will start my studies for the CCIE. I am torn between R&S and SP. I am not going for the other CCIEs, as I have already followed the routing and security path and because it's easier for me to build a lab.
Since you know the industry better than me, can you please advise which is better to choose, R&S or Service Provider? From my understanding many people are already CCIEs in R&S, and I wouldn't like to be one of the many. Secondly, I don't want to spend two or three years of my life studying something that will not be relevant in the coming years.
So can you please advise on that?
Also I am in the service provider industry.
I am interested in building my own lab, so I wonder if someone knows how many Cisco 3550 switches I need, and which model.
At the moment I am trying to decide which path to follow, so your advice counts a lot for me.
I am sorry, but I can't really help you out; I only prepared for R&S (#14218).
I advise you to read the different exam blueprints before committing to anything.
Please can you explain how I can do a good write-up for any project? Could you send me a technical write-up you did so that I can use it as a reference?
I just posted the reference to my QoS white paper; maybe this is the technical write-up that you are looking for?
To help you with all your Catalyst 6500 QoS questions, a new white paper has just been published. It contains a massive amount of information to help you understand the specifics of QoS on the Catalyst 6500 and how to configure your QoS strategy.
Extract from the Table of Contents:
What is Layer 2 and Layer 3 QoS
Why The Need for QoS in a Switch
Hardware Support for QoS in the Catalyst 6500
Catalyst 6500 Software support for QoS
QoS Flow in the Catalyst 6500
Queues, Buffers, Thresholds and Mappings
Configuring (Port ASIC based) QoS on the Catalyst 6500
Hi Patrick, I'm looking to get hold of a Catalyst 6509 for my datacentre. One of the main requirements is load-balancing inbound traffic to my application servers. What's the relevant 6509 module that I need to manage this?
We have different modules that can achieve Server Load Balancing (SLB). The latest and best-performing is the ACE :)
Hi Patrick, hoping you can help me with a problem I am dealing with at work. Packet loss was being experienced on a circuit; in the past we increased the queue length from 32 to 64, which resolved the issue, and that change was still in place. Packet loss was experienced again, and only when the queueing strategy was removed was the packet loss resolved. I need to understand why the solution applied resolved the issue and why removing that same solution also corrected it. I am a bit confused.
I would suggest that you identify the source of your packet loss first. If you suspect it is happening at a specific interface, you can always use the command "show queueing interface".
If you removed your queuing strategy and did not apply a trust statement, all traffic will be considered "untrusted" and will end up in the default queue.
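For example, a minimal sketch of re-applying trust on the ingress port so marked traffic keeps its DSCP and lands in the intended queue (the interface is hypothetical):

```
mls qos
interface GigabitEthernet1/1
 ! Trust incoming DSCP markings instead of rewriting them to 0
 mls qos trust dscp
! Then check per-queue drop counters:
! show queueing interface GigabitEthernet1/1
```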
Patrick, I have been advised that the packet loss was resolved by changing the queue depth from 32 to 64 on the transmit ring of the PE interface. After that, no changes were made to the PE or the managed CE router. Packet loss occurred again on the circuit and only stopped when the above was removed and the router was left to make its own queuing decisions. I am not very technical and do not understand how applying a queuing strategy fixed the problem and removing it also fixed the same problem. I need to explain this to a third party who is also not technical, but I do not know how, because I do not understand it myself. We have checked the circuit (link) and there are no errors.
It is difficult for me to pinpoint the exact nature of the issue with the level of information you are providing. Feel free to send me the original configuration with the queue depth set to either 64 or 32, and the module type, so I can give you more information.
I am looking for a product [switch?] that will allow me to connect my home PC and work laptop so that I can use either to access the internet [concurrently] from my desk, with no additional interaction on my part once they have been setup/configured.
My home has structured wiring; the incoming internet source is connected directly to my cable modem, which is then connected to my router, giving each room connection [RJ-45 outlet] direct access to the internet. My PC is connected to the RJ-45 outlet via CAT5 wiring.
Is there a Cisco product that will connect to the RJ-45 wall connection [via CAT5] of my home office, then let me connect my home PC and work laptop to it so that both will have direct and concurrent connections to the internet?
I am hoping for something simple and relatively inexpensive -- $100 or less, if possible.
I am afraid I am not very familiar with our SOHO line of products; maybe there is another forum open for them where your question will be better answered?
Given the distributed architecture, it is understandable why there is such a huge range of QoS capabilities. The platform has already reached its peak, so I cannot expect too much, but I would still like you to elaborate on the subject; history and the nuances of why the situation is as it is are very welcome. Why is there a lack of coherency? Is the platform just too advanced for its own good in this sense?
To be less vague, here are examples:
1. 6748 vs. 6148. While the 6748 is considered the higher-value module, it has much smaller output buffering capacity (1.2 MB vs. 5.4 MB) per port. This difference can be critical for gigabit, and even more so because the 6748 will appear much more often in aggregation/core, where bursts are much more violent.
2. The SUP720-3B is very close to the SUP720-3C, yet the 3B's ports have the queue configuration of the SUP2.
3. The 2960/3560/3750 (actually the same hardware from a QoS perspective) already have the ability to make use of input queuing based on DSCP, yet most 6500 modules lack that ability and still depend on CoS for ingress queuing.
4. The queue configuration of the 6704 was largely overlooked, so we ended up with customers ditching the 6704 for the 6708 and shutting off half of the ports, all because of the lack of buffers, which for 10 GbE are so critical.
5. Lack of a shared-buffer technique: by segmenting the buffer space we forcibly specify the allocated buffer size, without the ability to let this space be used by other queues when the buffers are not actually in use.
6. The ASICs' lack of QoS control on a per-port basis, where we have to control a port group. So there is a forced QoS consistency per port ASIC, which is not always welcome. On the other hand, it's nearly impossible to keep QoS consistency across different types of line cards because of their differing capabilities.
I understand that the distributed nature of the architecture and the internal workings of the modules hinder consistency, but can you please elaborate more on this?
Pavlo, I'm not now, nor have I ever been, affiliated with Cisco or another hardware vendor (i.e., I can't speak for Cisco), but they all face a couple of common issues when designing hardware that might explain some of the issues you note.
When hardware is being designed, there are many conflicting goals. For instance, there's often a major conflict between what's possible and what it will cost. Also, both of these are often very much impacted by the current state of the art, and with computer technology, much can change in a relatively short timespan. (Do any other technologies hold to Moore's Law?)
In your first point, you note the port buffer size differences between the 6148 (actually the 6148A, not the 6148, I believe) and the 6748. However, although each provides 48 10/100/1000 Ethernet ports, the 6148A is a classic bus card while the 6748 has dual 20 Gbps fabric connections; the 6148A does not support a DFC, while the 6748 can; the 6148A provides Rx-1p2t, the 6748 Rx-1q8t (with a DFC); the X6148A-GE-TX can support PoE, the 6748 does not. If you take into account all the functional differences between these cards (beyond providing 48 Ethernet ports) and the technology "age" difference, trade-offs likely needed to be made somewhere, and port memory could have been one of them. If you think RAM is cheap and "small", and that surely similar (or more) RAM per port could be provided on the newer line card, then you don't fully appreciate how complex (and expensive) providing memory can be, especially when improving performance.
Hopefully you can see how the above might also apply to your other points.
An old industry joke, used to be (something like), if cars were like computers, by now we all would be driving ones that could do Mach 2, get a million miles to the gallon, and cost 98 cents. (A counter point from the auto side was [something like]; if you turn on the radio and wipers at the same time, your right front wheel would fall off, although that's planned to be corrected with the .3 model year update. [work around - only turn on radio and wipers when in reverse gear, while backing to the right])
In the above paragraph, I knew the joke text wasn't exact, so I tried to find it. I'm not sure I found an exact match, but here are a couple of other variations:
1) You are correct that the 6148A-RJ45 and 6148-FE-SFP have 5.2 MB of buffers; the reason is that those cards can only be connected to the legacy bus (16 Gb full-duplex, shared) and needed large buffers to accommodate speed differences within the same slot.
2) I am not sure I understand your statement correctly.
3) The new generation of ASICs is capable of DSCP-based queuing and is present in the 6708, the 6716, and future line cards.
4) I agree; on the first generation of modular 10 GE cards, the XENPAK form factor and its related constraints limited the amount of buffering we could place on the board.
5) Dynamically allocated buffer space is something we are looking into. It may be relatively easy to do on a software-based router, but with all the buffering happening in hardware on the Cat6500, adding this type of logic to the port ASIC is challenging.
6) I agree too! Unfortunately, to maintain the port density on the Cat6k we have to share an ASIC among multiple ports. Since the queuing happens at the ASIC level, the configuration will spread to the other ports belonging to the same ASIC. Do not forget that you can always disable the QoS consistency check with the command "no mls qos channel-consistency" configured at the port-channel level.
I hope this answers your questions; please let me know.
Thanks. Actually, what I wanted was for you to elaborate on the QoS philosophy for the 6500. You have mentioned that the form factor and related constraints led to the buffer space decision for the 6704, and also that you are looking into dynamically allocated buffer space.
So what does the future hold for QoS on the 6500? What features are you looking into? What challenges do you encounter? What do you have to keep constantly in mind when thinking about implementing QoS functionality on the 6500?