A customer HQ site is connected to a PE using 2 x E1 circuits aggregated with Multilink PPP. The CE is a Cisco 1760, and the PE is a 7206.
Connected to the same PE are three other sites from the same customer. Each is connected with a single E1, and again each CE is a 1760.
During some testing the customer ran large file transfers concurrently from each of the three remote sites into the HQ. Data therefore came into the PE via the three E1s (6 Mbps aggregate) and was then routed onto the 4 Mbps Multilink PPP bundle towards the HQ, which is an obvious potential bottleneck. This test failed miserably, with packets being dropped on the PE's output interface facing the HQ.
But if the Multilink PPP encapsulation is removed, and we connect the HQ using a single E1 with HDLC encapsulation instead, the test works, even though we now have half the bandwidth to the HQ and so an even greater potential bottleneck.
Well, considering you are using high-speed lines between the routers, and MLPPP originated for low-speed links, you should at least turn off MLPPP fragmentation (this greatly reduces router CPU utilization). Also, if you are using MLPPP interleaving, you probably don't need that either.
(no ppp multilink fragmentation, no ppp multilink interleave)
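A minimal sketch of what that might look like on the PE's bundle interface. The interface numbers and IP addressing here are illustrative, and the exact fragmentation command varies by IOS release (`no ppp multilink fragmentation` on older code; newer releases use `ppp multilink fragment disable`):

```
interface Multilink1
 ip address 10.0.0.1 255.255.255.252     ! example addressing
 ppp multilink
 ppp multilink group 1
 no ppp multilink fragmentation          ! disable MLPPP fragmentation (older syntax)
 no ppp multilink interleave             ! ensure interleaving is off
!
interface Serial1/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial1/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```

The same change would be mirrored on the 1760 CE side of the bundle.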
Thanks for your post, and initially I thought this could be down to CPU issues.
So we set up the following test. We had four Cisco 1760 routers connected to a Cisco 7206. Three were single-attached via a 2 Mbps E1 circuit each. The fourth was dual-attached via two 2 Mbps E1 circuits, with Multilink PPP configured across the two links.
We connected four ports of a performance analyser to the Ethernet ports of all four Cisco 1760 routers. We then sent a stream of data into each of the single-attached routers and collected the data received on the Ethernet interface of the dual-attached router.
If the rates of all three streams added up to less than or equal to 4 Mbps, we saw no packet loss. As soon as we increased the rates so that their sum exceeded 4 Mbps (even slightly), we saw massive packet loss and the throughput dropped to virtually nothing.
I concluded that if the CPU lacked the processing power to handle Multilink across two E1 circuits, we would have seen packet loss even when the three streams summed to exactly 4 Mbps. But we didn't.
It seems that as soon as the router has to drop packets, because we are sending more than 4 Mbps of data down a 4 Mbps pipe, the throughput on the Multilink PPP bundle collapses to virtually nothing, rather than staying pinned at 4 Mbps.
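For anyone trying to reproduce this, the behaviour should be observable on the PE with standard show commands (interface names here match the sketch above and are assumptions, not taken from the actual test):

```
PE# show interfaces Multilink1          ! watch output drops and the output queue
PE# show ppp multilink                  ! per-member-link state, lost fragments, reordering
PE# show interfaces Serial1/0           ! confirm the member links themselves are clean
```

If fragmentation is enabled, lost fragments under congestion are a plausible explanation for the collapse: dropping one fragment wastes the bandwidth already spent on the other fragments of the same packet, so goodput can fall far below line rate.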
Very interesting thread. I am also having performance problems with a 6 x T1 MLPPP line. The site complains, yet I never see 100% utilisation on the line. The PE is managed by our SP, so I don't have visibility into it. Performance complaints are also in the PE -> CE direction, and it is quite possible that congestion occurs on the PE. I would expect to see the line at least at 90% utilisation at that moment, but this discussion suggests otherwise. Any ideas on this?