I have been testing QoS with the 7206VXR and I have concerns about the way it is working. I am using ping to generate most of the traffic, but it seems that once a class reaches its peak, all traffic sessions in that class lose connectivity until traffic returns to a normal volume. Can you please review the multilink, policy-map, and class-map config below and make any suggestions? Thanks.
CM-DM - This is like a replication process: a high volume of data that will consume as much bandwidth as it can get until it is caught up.
Current ATM pvc setup: abr 5120 2557
CM-CTI - This is a low volume of traffic, separated from general traffic to guarantee connectivity of these systems between the sites.
Current ATM pvc setup: abr 512 128
CM-ETL - This is for TestLab traffic and can vary, but bandwidth limit is set low.
Current ATM pvc setup: abr 128 64
default-class - This is everything else and can vary from high to low bandwidth.
Current ATM pvc setup: abr 6300 4452
All traffic is high priority, so I didn't try to classify based on protocol or set up CoS.
We have a bundle of 8 T1's (12 Mb).
We would like CM-DM and default-class to use/share all unused bandwidth as needed.
I'm having a little trouble understanding exactly what your goal is here. Are CM-DM, CM-CTI, and CM-ETL services you're guaranteeing, or destinations you're shaping to?
If they're services, where you want to guarantee certain amounts of bandwidth for each, you're going to want to use 'bandwidth percent' under each class and get rid of the 'shape average', as the latter will limit your traffic to the throughput specified.
If these classes are there in order to shape for destinations (say, off of an MPLS cloud), don't bother with the 'bandwidth percent' statement. In the case of MPLS, your best bet is to shape based on the port speed of the destination, so that you don't overrun it.
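For the service-guarantee case, a minimal sketch of what that might look like (class names are taken from this thread; the percentages and interface number are placeholders you would tune to your own design):

```
policy-map WAN-OUT
 class CM-DM
  bandwidth percent 40     ! guaranteed share during congestion, not a cap
 class CM-CTI
  bandwidth percent 5
 class CM-ETL
  bandwidth percent 5
 class class-default
  fair-queue               ! everything else shares what remains
!
interface Multilink1
 service-policy output WAN-OUT
```

Unlike 'shape average', 'bandwidth percent' only comes into play under congestion; when the link is idle, any class can burst above its share.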
As for the actual problem description: would an example be that if you throw a lot of traffic down CM-CTI, CM-DM loses connectivity? Please clarify exactly what happens and what you're using to measure it.
Thanks for your response, I hope the following answers your questions.
The class-maps are set up for destinations, not services. We have 8 point-to-point T1's, so there is no MPLS cloud involved. I am using "sh policy-map int mult1" to monitor the shaping, delaying, and dropping of packets. The actual problem is that a lot of traffic on CM-CTI will cause all communication sessions on CM-CTI to fail. I haven't noticed any failures across the different classes due to high traffic on one class.
I guess I'm having trouble picturing where the multilink bundle is going physically. Typically you only need to shape per destination on a multilink bundle if it is going to an MPLS cloud, and hence being routed to multiple destinations.
If you have multiple point-to-point T1's going to different destinations, there is no need for this type of shaping.
We have 8 T1's bundled into a single multilink, which goes from one site to another. The class-map for CM-DM is a replication process that will take as much bandwidth as it can. We want this class to use as much bandwidth as is available, but we still want class-default (general traffic such as file transfers, email, internal HTTP traffic) to have a set bandwidth and be able to burst as needed. The two small class-maps (CM-CTI and CM-ETL) need a minimum of bandwidth with a small burst available on occasion. If I don't use QoS to control the CM-DM traffic, it will consume all of the bandwidth.
We are currently set up using a 7206 and an IGX running ATM/IMA, and the traffic is separated by PVCs which control the bandwidth via ABR and PCR. Thanks again for your quick reply. I hope this clarifies things for you.
This is an interesting way to use both of these statements. I assume your goal is to guarantee an amount of bandwidth to each class but limit the maximum each can use at the same time.
First you need to make sure you set the bandwidth on your multilink interface, since I don't know if it gets set correctly on its own, and you are using percentages calculated from it. Even if that is wrong it will not cause traffic to be limited; it will just not be prioritized enough.
The main limitation I can see is that your total shaped bandwidth is much lower than the total for all your lines combined. I think it adds up to about 6.5 Mb, which means you will not use all the bandwidth.
Although shaping does not drop packets the way policing does, it still cannot buffer an unlimited number of packets, and it will drop them. So if, for example, you shape to 128k and transfer data at 130k for an extended period of time, you will only get 128k; the shaper effectively becomes a policer once it hits its queuing limit. This is what you describe your problem to be, if I read it correctly.
You can increase the queue size to a point, but this is limited by the memory on your router.
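If you stay with shaping, the depth of the shaping queue can be raised per class. The exact command depends on the IOS version (older class-based shaping uses 'shape max-buffers'; newer code uses a 'queue-limit' under the class), and the value here is only an example:

```
policy-map WAN-OUT
 class CM-CTI
  shape average 128000
  shape max-buffers 1024   ! allow more packets to queue before the shaper drops
```

A deeper queue turns drops into delay, so this only helps traffic that tolerates added latency; a sustained overload will still eventually fill any queue.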
A better solution would be to use policing to mark your packets inbound into the router. You would mark them differently depending on whether they conform or exceed, but you would always transmit them.
You would set the policer to the same rate as your shaper, mark the conforming packets with one value, and then mark the rest as default, or less than default if you want your normal traffic preferred over this excess traffic.
On the output side you would match these markings and use 'bandwidth' as you currently do. If you wanted to prefer normal traffic over this excess traffic, you would need to add another class for normal traffic and set a bandwidth for it, or if you have a new enough IOS there is a "remaining" option. You would then put the excess traffic into class-default, and it would only get serviced if nobody else wanted the bandwidth.
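A rough sketch of that mark-then-queue idea (the rate, DSCP values, percentages, and interface names are all assumptions to illustrate the shape of the config; the inbound policy goes on the LAN-facing interface):

```
! Inbound: mark conforming CM-DM traffic, demote the excess -- never drop here
policy-map MARK-IN
 class CM-DM
  police 2557000 conform-action set-dscp-transmit af31 exceed-action set-dscp-transmit default
!
! Outbound: match the marking and guarantee bandwidth only to conforming traffic
class-map match-all CM-DM-CONFORM
 match ip dscp af31
!
policy-map WAN-OUT
 class CM-DM-CONFORM
  bandwidth percent 40
 class class-default
  fair-queue               ! remarked excess rides here and is served last
!
interface FastEthernet0/0
 service-policy input MARK-IN
!
interface Multilink1
 service-policy output WAN-OUT
```

The key difference from shaping alone is that nothing is dropped or delayed at the policer; excess traffic is only demoted, so it can still use idle bandwidth on the bundle.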
Hopefully I am being somewhat clear, but I can try again if you have questions.