I'm testing QoS in our environment and want to show management some hard results from the QoS configuration. My simple test: I upload a large file from site A to site B, and during this period of congestion I ping a host that I've set up to be caught by an access list. That access list is applied to a class map, the class map is added to a policy map and given bandwidth priority, and the policy map is applied to the serial interface in the outbound direction, which leads to the other end of a point-to-point T1. The return traffic (host to any IP address) is caught in another access list and given the same priority. However, my ping responses, both to the host whose traffic is being caught by my access list and to another host that isn't being classified, are returning essentially the same average response times. Is there a better way of testing, or does this test actually prove my configuration is wrong?
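For reference, the classification chain described above would look roughly like this in IOS (the ACL number, host address, class/policy names, and percentage below are hypothetical placeholders, not taken from the poster's config):

```
! Hypothetical sketch: ACL -> class-map -> policy-map -> interface
access-list 101 permit ip any host 10.1.1.50
!
class-map match-all TEST-HOST
 match access-group 101
!
policy-map WAN-QOS
 class TEST-HOST
  bandwidth percent 30
!
interface Serial0/0/0
 service-policy output WAN-QOS
```

A mirror-image policy, with an ACL matching the return traffic, would be applied outbound on the far-end router, as the poster describes.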
My ultimate goal is to provide preferential treatment to traffic heading to certain web application servers using QoS. I have some match protocol http host and url statements, but it's hard to show that the QoS policies are actually improving performance. I do get matches on my class maps and policy map, as shown by my show policy-map interface s0/0/0 command.
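For the web-application case, NBAR-based matching can key on the HTTP host header or URL. A minimal sketch, with a hypothetical class name and match patterns:

```
! Hypothetical NBAR classification for specific web app servers
class-map match-any WEB-APPS
 match protocol http host "*appserver*"
 match protocol http url "/app/*"
```

The hit counters for these matches can then be checked with show policy-map interface serial 0/0/0, as the poster is already doing.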
I've double-checked my config on both ends and everything looks fine. I even replaced bandwidth percent 30 in my policy map with priority percent 30 and saw no change in ping response times. They are the same whether or not the traffic is being classified as priority. The link is a point-to-point leased T1. It doesn't require any special service like MPLS to be able to carry QoS markings, does it?
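For what it's worth, those two statements only behave differently under congestion: bandwidth percent reserves a minimum share for the class (CBWFQ), while priority percent places the class in a strict-priority, policed low-latency queue (LLQ). A sketch of the difference, with hypothetical names:

```
policy-map WAN-QOS
 class TEST-HOST
  ! CBWFQ: guaranteed a minimum of 30% of the link when congested,
  ! but packets still wait their turn in the fair-queuing scheduler
  bandwidth percent 30
```

```
policy-map WAN-QOS
 class TEST-HOST
  ! LLQ: serviced ahead of all other queues (lowest latency),
  ! but policed to 30% of the link during congestion
  priority percent 30
```

On an uncongested link both configurations are effectively invisible, which is consistent with the identical ping times reported above.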
The extracts from your configuration look good. No, you do not require MPLS for QoS. You can use DSCP to mark your traffic; the marking is carried in the IP header, although for your scenario this is not required. What is the load on your link? The queuing strategies only really come into play under congestion.
I'm not sure exactly what the load was; I think that was the problem. I was uploading a 35 MB file and the line wasn't really getting congested. I did a show interface and took a look at the txload. I started copying several 35 MB files and saw the txload begin to increase. I'm not sure exactly when, but at least by the time the counter hit around 170/255 I saw a significant difference in ping response times. Does anyone know what percentage is considered "congested", the point at which QoS begins to kick in?
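The txload counter is expressed out of 255, so 170/255 is roughly 67% utilization. Congestion in the queuing sense starts when the interface's output queue actually backs up, not at a fixed percentage. A couple of commands that help watch this while testing (the interface name is taken from the thread; the 30-second interval is just a suggestion):

```
! Shorten the load-averaging window so txload reacts faster
! (the default averaging interval is 5 minutes)
interface Serial0/0/0
 load-interval 30
```

```
show interfaces serial 0/0/0 | include load|queue
show policy-map interface serial 0/0/0
```

Rising output queue depth and drops in the non-priority classes, alongside stable response times for the priority class, are the kind of hard numbers that demonstrate the policy is working.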
thx Phillip, it wasn't until your reply that I took a closer look at the congestion.
QoS is applied through a queuing strategy, in your case LLQ. Let's say you go to the supermarket to buy a can of Coke: if there are no queues at all, you can go to any till and will be on your way immediately. If there are long queues of people buying their monthly groceries, you go to the express till and get out of the shop quicker than everyone else.
We try to apply the same methodology in the router: the express till is the class with the priority statement under your policy map.
Now back to your question: when does QoS kick in? As soon as the hardware buffer of the interface is full. You might not notice the difference with just two or three people at each queue, but it is working, and as soon as the shop is full you will really see the difference.