Performance problem with Multilink PPP on 7206VXR w/PA-MC-8T1

I have a 7206VXR/NPE400 with a PA-MC-8T1 installed. I also have a T3 card in it for other connectivity.

I have 4 of the T1 ports used. 2 of those are in a Multilink PPP configuration to form one 3Mbps connection.
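
For reference, the bundle side of the config looks roughly like this (controller/interface numbers and addresses below are placeholders, not the exact production values):

  controller T1 1/0
   channel-group 0 timeslots 1-24
  !
  controller T1 1/1
   channel-group 0 timeslots 1-24
  !
  interface Multilink1
   ip address 192.0.2.1 255.255.255.252
   ppp multilink
   ppp multilink group 1
  !
  interface Serial1/0:0
   no ip address
   encapsulation ppp
   ppp multilink
   ppp multilink group 1
  !
  interface Serial1/1:0
   no ip address
   encapsulation ppp
   ppp multilink
   ppp multilink group 1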

On the other side of the connection, I have a 2801 with 2 WIC-1DSU-T1-V2 modules in it.

The customer is complaining of slowness, and the bandwidth graphs from my NMS show that the Multilink interface isn't hitting 100% at any time.

So, I'm doing some bandwidth testing with an FTP site. I've moved all other traffic off of the Multilink connection, and I'm downloading/uploading a large file from an FTP site to one client plugged directly into the router.

The results are interesting. Traffic from the 2801 side, destined for the FTP server behind the 7206, reaches the full 3Mbps just fine. However, traffic in the other direction, from the FTP server behind the 7206 destined to the FTP client behind the 2801, only reaches about 1.8Mbps total.

I've taken each T1 down and run the test on each of them alone, and they both perform at the full 1.5Mbps on their own. I've also torn down the Multilink PPP configuration and had them run plain HDLC with IP addresses, and they each do 1.5Mbps that way as well. I've also tried using 'ip load-sharing per-packet' instead of MLPPP, and that configuration only gets me to 1.6Mbps.
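
In case it matters, the per-packet test was just plain CEF per-packet load sharing over the two HDLC links with a pair of equal-cost static routes, roughly like this (addresses and the remote subnet are placeholders):

  interface Serial1/0:0
   ip address 192.0.2.5 255.255.255.252
   ip load-sharing per-packet
  !
  interface Serial1/1:0
   ip address 192.0.2.9 255.255.255.252
   ip load-sharing per-packet
  !
  ip route 192.0.2.128 255.255.255.128 192.0.2.6
  ip route 192.0.2.128 255.255.255.128 192.0.2.10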

There are no errors on the T1s or any of the Ethernet ports along the way. I do FTP through this router's T3 interface to the same FTP site and it reaches 45Mbps just fine, so the FTP site/server isn't the issue.

CPU doesn't seem to be high (5-6%), and the memory on the router seems fine.
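
For the record, the error/CPU/memory checks above were the usual ones, along the lines of:

  show controllers t1          (slips, line/path code violations on the PA-MC-8T1 ports)
  show interfaces Multilink1   (input/output drops and errors on the bundle)
  show processes cpu sorted    (total and per-process CPU)
  show memory statistics       (free processor and I/O memory)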

Any ideas from anybody? Has anyone actually used a PA-MC-8T1 to do Multilink PPP?

19 Replies

Forgot to mention...

I've got a TAC case open as well...they seem to be at a loss for why this is happening.

If I don't get this resolved soon, I'll have to explain to my boss how the big, expensive 7206vxr we bought can't do what a little, cheap 2801 can and we'll have to purchase one to do this...

I just keep forgetting to put information in.

We're running 12.4(15)T1 IOS on the 7206 right now. It has a bunch 'o bugs, so we've scheduled an upgrade to 12.4(15)T3, but that's in two whole weeks.

Hi,

chances are the problem is on the 2801 side, not the 7200. Very recently I enabled multilink on a 2801 with three E1s, and the CPU experienced terrible spikes and could not keep up with OSPF. That was on 12.4(11)XJ4; I downgraded to 12.4(3j) without noticeable improvement. CEF is enabled and everything unnecessary has been removed, but the fact remains that the router is brought to its knees by a few hundred PPS!

I'm not even sure right now what to do with this 2801, because something tells me the problem is the MLPPP processing on input.

So my advice is: before getting mad at the 7200, check CPU usage on the 2801 while sending a lot of data from your side.
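
Something along these lines on the 2801 while the FTP transfer is running should show it (the interesting number is the interrupt-level CPU, the second figure in "five seconds: X%/Y%"):

  show processes cpu sorted
  show processes cpu history
  show interfaces Multilink1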

Let us know!

The highest CPU I saw on the 2801 during the FTP testing was 16%. The average was more like 2%.

Istvan_Rabai
Level 7

Hi,

I suppose the problem might not be on the 7206vxr side, but on the client side.

It is possible that TCP flow control is keeping the traffic down at your client PC when it is receiving traffic from the FTP server.

Can you do a packet capture either at the client or the FTP side, so we can investigate this situation with Wireshark?

Or, even better, do a packet capture of the customer's actual application traffic.

Thanks:

Istvan

I can easily sniff packets on the FTP server side, on the actual server. I'll do this during the next testing window (today hopefully).

What specifically should I look for in the capture, though? I'll probably see TCP slowing itself down, but that won't tell me which hop or device is causing the slowdown, will it?

Hi,

If there is a problem on the client side, such as not enough buffer space, the client will stop the traffic while it writes data to the hard disk and frees up the buffer.

From the packet capture you will be able to see whether TCP flow control is slowing the traffic down by advertising a zero window size (if this is the problem).

In some cases this can cause the slowdown. I think it is worth taking a look at this side of the problem.
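
In Wireshark, a display filter along these lines should make any receiver-side window stalls easy to spot:

  tcp.analysis.zero_window || tcp.analysis.window_update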

Cheers:

Istvan

Joseph W. Doherty
Hall of Fame

Just curious, have you tried MLPPP both with and without fragmentation? If you have, was there any difference in results?

I have not tried this. However, it's my understanding that fragmentation would only slow down 'data'-type applications like FTP and HTTP/HTTPS. Is that correct? Even if it doesn't necessarily affect the speed of the traffic, it would surely increase CPU, which could result in slowdowns, I think...

From what I've read, fragmentation is only advantageous if you are using small-packet 'real-time' apps like voice/video.

I'm willing to try it, even though I'm doubtful it will work, so I'll put that in the list for the next testing window (hopefully this afternoon).

Do not worry about fragmentation. It is only needed when the sum of the circuits is less than 768 kbps or when you need to bound latency to a low value. As I said, I suspect the issue is on the 2801, not your router.

Actually, I didn't mean to imply you should try fragmentation if you were not already doing so. (Although I might have suggested deactivating it if it had been enabled.)

As to advantages: in theory, if it splits each packet across both links, you'll obtain better 50/50 load balancing. Without it, I believe packets are alternated (similar to packet-by-packet forwarding), which could skew the load balance at any moment. Over time it should balance out.

I too would be concerned about the imposed CPU load (which is why, above, I might recommend its deactivation if active).
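
For completeness, if fragmentation had been active, turning it off on the bundle is a one-liner (exact syntax varies a little between IOS releases):

  interface Multilink1
   ppp multilink fragment disable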

Hi, the "overtime" to re-establish load balancing using mlppp without fragmentation is actually instantaneous, because the code will always queue to the interface with less bytes waiting to be sent. Then on receiving side there's a buffer to ensure no out of sequencing will occur.

Although your (proprietary?) knowledge of how Cisco implements its multilink PPP scheduler (akin to a SHOULD within RFC2686 and RFC2688 vs. being left up to the implementation within RFC1990) describes perhaps the best method of scheduling the links without fragmentation, I don't see how it "is actually instantaneous", because different-sized packets can still skew the link load. If your point was that there's usually little practical difference, I would tend to agree. There's also the issue that fragmented packets, again in theory, would have less transmission latency, since they are transmitted in parallel. (Similar to the recommendation of using fragmentation on slower links to reduce latency, so that small real-time packets, e.g. VoIP, can interleave with large packets.)
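
One way to watch the scheduler behavior in practice is to check the per-member-link counters and the reordering statistics on both ends while the transfer runs, e.g.:

  show ppp multilink    (member link state, fragment/lost/reordered/discarded counters)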

I guess I haven't seen the reply I was expecting yet...has anyone done MLPPP with two T1s on a 7206VXR/NPE400 with a PA-MC-8T1? If anyone has, what IOS version? What other modules?
