I have an ongoing fault. Symptoms are low TCP throughput, but UDP and ping tests are okay. On the PE, there is a traffic-shaping policy applied outbound and a policing policy applied inbound. I have two questions:
1. The policy-map applied inbound, which is configured to police, gives only 8000 bps to the default traffic but 776000 bps to the management traffic (the customer is meant to be receiving 80 Mbps). Surely these values are the wrong way around, particularly as the mgmt class only matches traffic with a DSCP value of 63, so all other traffic will hit the default class.
2. When I look at the policy-map on the PE, I can see traffic shaping is consistently active, but, and this is the odd thing, I am not seeing this reflected on the physical interface.
Am I correct in my assumption that the values in the inbound policing policy-map pe_in_child_XXX_hub are incorrect, i.e. too small for the default class and too big for the mgmt class?
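For reference, if the intent really is 80 Mbps for general traffic plus a small management allowance, a corrected child policy might look roughly like the sketch below. The class names, burst values, and exact actions are assumptions on my part; only the intent (big rate on class-default, small rate on mgmt) comes from the description above:

```
policy-map pe_in_child_XXX_hub
 class mgmt
  police 776000 conform-action transmit exceed-action drop
 class class-default
  police 80000000 conform-action transmit exceed-action drop
```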
I would post the configs but can't see an upload button, so here are the bits I am worried about...
For #1, what's odd about both policer statements is that, if I understand them correctly, neither drops any packets; all packets are marked the same regardless of the actual data rate.
I haven't worked with actual MPLS; maybe you can't apply set-mpls-exp-transmit directly, without going through the policer.
Personally, I wouldn't be too happy with "set ip dscp default" if that is resetting my DSCP markings.
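If the policer is only there as a vehicle for the EXP marking, a variant that actually enforces the rate, marking conforming traffic and dropping the excess rather than marking everything identically, might look like this sketch. The rate and EXP value here are assumptions, not taken from the actual config:

```
 class class-default
  police 80000000 conform-action set-mpls-exp-transmit 0 exceed-action drop
```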
For #2, the shaper appears to be configured for about 80 Mbps, but the 5 minute (average) rate is only about 9 Mbps. Since shapers queue bursts, it's possible for the shaper to show as active from time to time even though the average data rate is so low.
A very common cause of poor TCP throughput is dropped packets which ping tests might not see (unless the drops are really bad). Try to investigate whether you're encountering TCP drops.
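To look for drops on the PE itself, the policer and shaper counters are the obvious first stop. A sketch of the usual commands (the interface name is a placeholder, not from your config):

```
! Per-class conform/exceed/drop counters for the inbound policer
show policy-map interface GigabitEthernet0/1 input

! Shaper queue depth and drops for the outbound policy
show policy-map interface GigabitEthernet0/1 output

! Interface-level drop counters
show interfaces GigabitEthernet0/1 | include drops
```

A capture at the customer end (e.g. Wireshark filtered on retransmissions) would confirm the drops from the host's point of view.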
Another reason for poor TCP throughput: if your WAN has high bandwidth, e.g. 80 Mbps, but typical WAN latencies, many TCP stacks, with their default receive buffer, won't allow a single flow to ramp up to the full rate. For instance, a Windows XP client connected at 100 Mbps might not utilize more than 2 Mbps on a coast-to-coast hop.
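The arithmetic behind that is just the bandwidth-delay product: a single flow can't exceed receive window divided by RTT. A quick sketch, assuming XP's classic ~17 KB default window and a 70 ms coast-to-coast RTT (both numbers are illustrative assumptions):

```python
def max_tcp_throughput_bps(rwnd_bytes, rtt_seconds):
    """Upper bound on one TCP flow's throughput (no window scaling)."""
    return rwnd_bytes * 8 / rtt_seconds

rwnd = 17520   # bytes; Windows XP's default window on Fast Ethernet (12 * 1460)
rtt = 0.070    # seconds; assumed coast-to-coast round trip

print(round(max_tcp_throughput_bps(rwnd, rtt) / 1e6, 1))  # ~2.0 Mbps
```

Which lines up with the roughly 2 Mbps figure above, regardless of how fat the pipe is.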
If you run multiple TCP flows from the same host and the aggregate bandwidth jumps with each added flow, you're likely seeing the issue just described.
Or, if you use a different host, e.g. Windows Vista (as receiver, or better, host to host), and again see a bandwidth jump, it's the same issue.
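Both tests are easy to run with iperf; the server address here is a placeholder:

```
# single flow, 30 seconds
iperf -c 192.0.2.10 -t 30

# four parallel flows from the same host
iperf -c 192.0.2.10 -t 30 -P 4
```

If the second run's aggregate is several times the first, the per-flow window, not the link, is the bottleneck.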
Also note, even when TCP can utilize the full link bandwidth, high WAN latency will slow how fast TCP ramps up and how fast it recovers bandwidth after it sees a drop. Both of these can be very noticeable on international links, even with plenty of bandwidth.