
MLPPPoF MLPPPoA packet loss

dwilliams
Level 1

We have been having a heck of a time troubleshooting packet loss on an MLPPP link that terminates on a 7206VXR and a 2801. There are several other multilink bundles on the same 7206 that are not experiencing any problems. We have tried:

Adding tx-ring-limit 4 to the ATM PVCs on the ISP side.

Modifying shaping on both the ATM-side and Frame Relay-side routers.

Rebuilding the Frame Relay and ATM PVCs in the FR/ATM switch.

Replacing the 2801.

Adding the ppp multilink interleave and ppp multilink fragment delay 10 commands to the virtual-template on both routers (sketched below).

Testing all three T1 circuits (all are clean).
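
For reference, the interleaving/fragmentation piece looks roughly like this. The virtual-template number, ATM interface, and VPI/VCI are placeholders rather than our exact values, and the AAL5MUX PPP encapsulation is only shown as an example:

interface Virtual-Template1
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment delay 10
!
! ISP/ATM side, per PVC:
interface ATM1/0.100 point-to-point
 pvc 1/100
  tx-ring-limit 4
  encapsulation aal5mux ppp Virtual-Template1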

I have attached router configs, show commands, and an ASCII diagram. I would appreciate any advice anyone may be able to offer.

3 Replies

Giuseppe Larosa
Hall of Fame

Hello Dave,

Some PPP fragments are being lost in the translation from FR to ATM. From the show ppp multilink output on the CPE side:

14974 lost fragments, 41400972 reordered

29650/36752227 discarded fragments/bytes, 5763 lost received

0x4D8B02 received sequence, 0xA188C2 sent sequence

and on the core router side:

0/0 fragments/bytes in reassembly list

43021 lost fragments, 3799912 reordered

83637/50018769 discarded fragments/bytes, 61686 lost received

0xA3069C received sequence, 0x4F219E sent sequence

The following document can help:

http://www.cisco.com/en/US/tech/tk1077/technologies_tech_note09186a00800b6098.shtml

Hope to help

Giuseppe

Thanks Giuseppe,

That was a very good link, thank you. It does help explain some of the details regarding what happens with this type of setup. Normally I would agree that there must be some drops in the Frame/ATM realm; however, I have been unable to find any. The Frame/ATM switch isn't dropping, and neither is the ATM interface on the core router. We are still seeing the fragment discards with the show ppp multilink command. What would cause the multilink to drop fragments when the Frame and ATM interfaces are not discarding?

Additionally, I have shaped all outbound Frame Relay traffic on the CPE down to 1305k. I have also moved this client to a different 7206VXR running a PA-A6-OC3c card and newer code. I have also split each T1 off from the multilink bundle, put IP addresses on them, and run pings across the individual T1s to test latency on each one. All of them ran within 1 to 2 ms of each other. We have been running IP SLA across this as well, and it seems as though the loss is limited to source-to-destination traffic from the CPE.
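
In case it is useful, the IP SLA probe is nothing fancy; it is roughly along these lines, where the operation number, target address, and source interface are placeholders rather than our actual values (older IOS uses ip sla monitor instead of ip sla):

ip sla 10
 icmp-echo 192.0.2.1 source-interface Multilink1
 frequency 10
!
ip sla schedule 10 life forever start-time now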

We are looking at options to bypass the Frame/ATM switch and make it a native MLPPP link, but are currently having trouble putting together the hardware to do so.

Any additional comments/suggestions would be appreciated.

Thanks

Dave

Hello Dave,

Even without any drops in the Frame/ATM switch, some PPP fragments can be lost due to a shortage of buffers.

An IP packet is fragmented by PPP multilink into PPP fragments.

Each PPP fragment is sent within a Frame Relay L2 PDU.

At the FR/ATM service interworking boundary, the PPP fragment is placed within an AAL5 PDU.

The AAL5 PDU is segmented by the ATM SAR into ATM cells.

So there are two levels of fragmentation in your scenario.

On the receiving side:

ATM SAR buffers have to hold the ATM cells needed to rebuild a PPP fragment, and then the PPP fragment has to be stored while waiting for the other PPP fragments needed to rebuild the original IP PDU.

Because of ATM statistical multiplexing, cells belonging to different PPP fragments arrive interleaved and need to be stored in different SAR buffers.

The number of these buffers is limited.

So if an ATM cell that is part of an AAL5 PDU carrying a PPP fragment arrives and there is no free SAR buffer to hold it, the cell is dropped internally, and all the other cells of the same AAL5 PDU can be dropped with it.

This shows up as lost PPP fragments in show ppp multilink.
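
To give a rough sense of scale (ignoring encapsulation headers, and assuming a full-size 1500-byte PPP fragment carried in an AAL5 PDU with its 8-byte trailer):

cells per fragment ≈ ceil((1500 + 8) / 48) = 32

so losing SAR buffer space for even one of those 32 cells costs the entire PPP fragment.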

You have changed the 7206VXR on the ATM side and verified each member link.

I would point to the C2801, as it is the less powerful device.

The document suggests attaching a service-policy to the virtual-template.

Have you applied shaping in this way?
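
Something along these lines is what I mean. The policy name is made up, the rate is roughly your 1305k figure rounded down, and depending on platform and IOS release a hierarchical parent/child policy may be required instead:

policy-map SHAPE-MLPPP
 class class-default
  shape average 1300000
!
interface Virtual-Template1
 service-policy output SHAPE-MLPPP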

Note that the document says the following:

MLPPP load balancing over ATM or frame relay might show noticeably less effectiveness than the same load balancing over physical interface.

To verify whether the problem is related to fragmentation, you could try configuring PPP multilink so that it never fragments an IP packet.

(Of course, this has an impact on VoIP, so it has to be tested during a maintenance window.)
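
A minimal sketch of what I mean, applied to the virtual-template on both ends; the interface number is a placeholder, and on older IOS releases the equivalent command is no ppp multilink fragmentation:

interface Virtual-Template1
 no ppp multilink interleave
 ppp multilink fragment disable

Interleaving is removed first because fragment disable has no effect while interleaving is enabled.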

If you were not supporting VoIP traffic over the bundle, another option would be to move to parallel L3 links, each with its own PVC service-interworked between FR and ATM.

Hope to help

Giuseppe
