
PPP Multilink using 2 different serials with different bandwidth

william.tituana
Level 1

Hi everybody,

Sorry about this simple question, but I couldn't find information about it.

I configured a PPP multilink interface using two serial interfaces: one of them runs at 960 kbps and the other one at 2048 kbps.

To verify the total capacity of the multilink I'm using a tool to saturate the link, but I see the total bandwidth is only about 1.8 Mbps.

Is this because of the different capacities of the serial links? How can I configure a proper PPP multilink in this case to get the total bandwidth of 2.9 Mbps?

Configuration is as follows:

ROUTER A:

interface Multilink1

description MULTILINK

ip address 192.168.101.13 255.255.255.252

ppp multilink

no ppp multilink fragmentation

multilink-group 1

!

interface Serial5/0/3

description S1

no ip address

encapsulation ppp

no ip mroute-cache

no fair-queue

ppp multilink

multilink-group 1

!

interface Serial5/0/2

description S2

no ip address

encapsulation ppp

no ip mroute-cache

no fair-queue

ppp multilink

multilink-group 1

ROUTER B:

interface Multilink1

description MULTILINK

ip address 192.168.101.14 255.255.255.252

ppp multilink

no ppp multilink fragmentation

multilink-group 1

!

interface Serial0

description S1

no ip address

encapsulation ppp

no ip mroute-cache

no fair-queue

ppp multilink

multilink-group 1

!

interface Serial1

description S2

no ip address

encapsulation ppp

no ip mroute-cache

no fair-queue

ppp multilink

multilink-group 1


7 Replies

Giuseppe Larosa
Hall of Fame

Hello William,

PPP multilink assumes that all member links are equal and has its own load-balancing algorithm that uses the member links in round robin. With unequal members the slower link paces the bundle, which is why you top out around 1.8 Mbps (roughly twice the 960 kbps link) instead of 2.9 Mbps.

In a case like yours:

you should be able to use Frame Relay on the 2048 kbps link and create two subinterfaces on it.

Then IP routing can be adjusted so that three roughly 1 Mbps links are seen as equal-cost paths to the destinations.

In other terms, you need a way to divide the faster link into logical links that run at a speed comparable with the slower link.

This can be done if you have a leased line that can be configured as a back-to-back Frame Relay setup.

One side has to act as FR DCE to make it work.

Then on the FR link (the 2.048 Mbps one) you define two point-to-point subinterfaces and apply the command bandwidth 960 to each of them.

Each subinterface requires its own DLCI.
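
For illustration only, a minimal sketch of one simple way to make the three roughly equal logical links appear as equal-cost paths with static routes. The /30 subnets match the example configuration given later in this thread, the .2 next-hop addresses are hypothetical, and the 200.25.199.24/29 destination is taken from a later post:

! ROUTER A - three equal-cost static routes, one per ~1 Mbps logical path
! (next-hop host addresses are hypothetical peers on the two FR subinterfaces and the PPP link)
ip route 200.25.199.24 255.255.255.248 192.168.1.2
ip route 200.25.199.24 255.255.255.248 192.168.5.2
ip route 200.25.199.24 255.255.255.248 192.168.9.2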

Hope to help

Giuseppe

Thank you very much for your suggestion Giuseppe!

I'm going to try it!

Regards

Hi Giuseppe, unfortunately multilink with Frame Relay didn't work.

However, I solved the problem with the following approach:

The 2048 kbps link is roughly twice the 960 kbps link. I established two HDLC links and created five static routes using imaginary IP addresses as destinations, then I routed three of the imaginary IP addresses through the 2048 kbps link and two of them through the 960 kbps link.

Using this, I managed to balance packets between the two unequal links, sending more packets over the bigger link than over the other one.

Configuration is as follows:

ROUTER A:

interface Serial1

description S2

bandwidth 2048

ip address 192.168.101.17 255.255.255.252

!

interface Serial2

description EDWIN SALAZAR GUALAQUIZA S1

bandwidth 2048

ip address 192.168.101.13 255.255.255.252

!

ip route 200.25.199.24 255.255.255.248 172.16.31.1

ip route 200.25.199.24 255.255.255.248 172.16.31.3

ip route 200.25.199.24 255.255.255.248 172.16.31.5

ip route 200.25.199.24 255.255.255.248 172.16.31.7

ip route 200.25.199.24 255.255.255.248 172.16.31.9

!

ip route 172.16.31.1 255.255.255.255 Serial2 192.168.101.14

ip route 172.16.31.3 255.255.255.255 Serial2 192.168.101.14

ip route 172.16.31.5 255.255.255.255 Serial1 192.168.101.18

ip route 172.16.31.7 255.255.255.255 Serial1 192.168.101.18

ip route 172.16.31.9 255.255.255.255 Serial1 192.168.101.18

ROUTER B:

interface Serial0

bandwidth 2048

ip address 192.168.101.14 255.255.255.252

no fair-queue

!

interface Serial1

bandwidth 2048

ip address 192.168.101.18 255.255.255.252

no fair-queue

!

!

ip route 0.0.0.0 0.0.0.0 172.16.31.2

ip route 0.0.0.0 0.0.0.0 172.16.31.4

ip route 0.0.0.0 0.0.0.0 172.16.31.6

ip route 0.0.0.0 0.0.0.0 172.16.31.8

ip route 0.0.0.0 0.0.0.0 172.16.31.10

ip route 172.16.31.2 255.255.255.255 Serial0 192.168.101.13

ip route 172.16.31.4 255.255.255.255 Serial0 192.168.101.13

ip route 172.16.31.6 255.255.255.255 Serial1 192.168.101.17

ip route 172.16.31.8 255.255.255.255 Serial1 192.168.101.17

ip route 172.16.31.10 255.255.255.255 Serial1 192.168.101.17

I hope this is useful to somebody.

Thank you

Can you send some interface statistics showing that one interface is handling approximately twice the traffic of the other?

Some CEF show output would also be interesting, because I think CEF is smart enough to build only two equal-weight adjacencies anyway.

Besides, EIGRP has the ability to balance traffic proportionally to bandwidth without using static or recursive routing.
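
For what it's worth, a minimal sketch of that EIGRP option on ROUTER A, assuming the two HDLC links shown earlier, an arbitrary AS number of 10, and that the bandwidth command on the slower circuit is corrected to 960; the AS number, the variance value, and which of Serial1/Serial2 is the 960 kbps circuit are assumptions, not facts from this thread:

! ROUTER A - unequal-cost load sharing with EIGRP variance (sketch)
interface Serial1
 bandwidth 2048
!
interface Serial2
 ! set to the real circuit speed so the EIGRP metric reflects it
 bandwidth 960
!
router eigrp 10
 network 192.168.101.0 0.0.0.255
 ! the destinations must be learned via EIGRP over both links for variance to take effect
 ! variance must exceed the metric ratio between the two paths (about 1.8 here)
 variance 2
 ! share traffic in proportion to the path metrics (default behaviour)
 traffic-share balanced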

Hello William,

My suggestion was to use Frame Relay encapsulation on the faster link, without multilink PPP involved.

Sorry if I haven't been clear.

R1

int ser0/0

clock rate 2000000

enc frame-relay

frame-rel intf-type dce

int ser0/0.30 point-to-point

frame-rel interface-dlci 30

bandwidth 960

ip address 192.168.1 255.255.255.252

int ser0/0.40 point-to-point

frame-rel interface-dlci 40

bandwidth 960

ip address 192.168.5 255.255.255.252

int ser0/1

enc ppp

ip address 192.168.9 255.255.255.252

bandwidth 960

clock rate 960000

On the other side use equivalent commands, without the clock rate commands and without frame-rel intf-type dce.
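
Something like the following rough sketch for the far end (call it R2); the interface numbers and the .2 host parts of the addresses are assumptions chosen only to pair with the R1 example above:

! R2 - DTE side of the back-to-back Frame Relay link (no clock rate, no intf-type dce)
interface Serial0/0
 encapsulation frame-relay
!
interface Serial0/0.30 point-to-point
 frame-relay interface-dlci 30
 bandwidth 960
 ip address 192.168.1.2 255.255.255.252
!
interface Serial0/0.40 point-to-point
 frame-relay interface-dlci 40
 bandwidth 960
 ip address 192.168.5.2 255.255.255.252
!
! plain PPP leased line for the third ~1 Mbps path
interface Serial0/1
 encapsulation ppp
 bandwidth 960
 ip address 192.168.9.2 255.255.255.252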

If this is possible you have your three equal-cost paths, for example to use with OSPF.

Edit:

As probably noted by Paolo, with normal CEF-based switching your current setup should use only one of the possible paths.

You have the potential for multiple paths, but if all the static routes are host-based, in reality only one path will be used: the one chosen by CEF based on IP source and IP destination.
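
If that per-destination CEF behaviour is the limiting factor, a minimal sketch of switching the two parallel links to per-packet load sharing, assuming the platform supports it and that the packet reordering it can introduce is acceptable (the interface names refer to the HDLC setup posted above):

! enable CEF per-packet load sharing on both parallel links (reordering caveat applies)
interface Serial1
 ip load-sharing per-packet
!
interface Serial2
 ip load-sharing per-packet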

Hope to help

Giuseppe

Sorry Giuseppe, I didn't understand your suggestion.

Here are the interface statistics for ROUTER A:

Serial1 is up, line protocol is up

Hardware is cyBus Serial

Description: EDWIN SALAZAR GUALAQUIZA S2

Internet address is 192.168.101.17/30

MTU 1500 bytes, BW 2048 Kbit, DLY 20000 usec,

reliability 255/255, txload 113/255, rxload 18/255

Encapsulation HDLC, crc 16, loopback not set

Keepalive set (8 sec)

Restart-Delay is 0 secs

Last input 00:00:00, output 00:00:00, output hang never

Last clearing of "show interface" counters 3d22h

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 94522

Queueing strategy: dual fifo

Output queue: high size/max/dropped 0/800/0

Output queue: 0/400 (size/max)

30 second input rate 145000 bits/sec, 111 packets/sec

30 second output rate 910000 bits/sec, 117 packets/sec

20446847 packets input, 3706549029 bytes, 0 no buffer

Received 0 broadcasts, 957 runts, 0 giants, 0 throttles

201362 input errors, 196545 CRC, 0 frame, 958 overrun, 2259 ignored, 3859 abort

26010944 packets output, 2850952244 bytes, 0 underruns

0 output errors, 0 collisions, 1 interface resets

0 output buffer failures, 11615358 output buffers swapped out

150 carrier transitions

RTS up, CTS up, DTR up, DCD up, DSR up

=================================================================

Serial2 is up, line protocol is up

Hardware is cyBus Serial

Description: S1

Internet address is 192.168.101.13/30

MTU 1500 bytes, BW 2048 Kbit, DLY 20000 usec,

reliability 255/255, txload 66/255, rxload 7/255

Encapsulation HDLC, crc 16, loopback not set

Keepalive set (10 sec)

Restart-Delay is 0 secs

Last input 00:00:00, output 00:00:00, output hang never

Last clearing of "show interface" counters 3d22h

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 257845

Queueing strategy: fifo

Output queue: 0/400 (size/max)

30 second input rate 64000 bits/sec, 54 packets/sec

30 second output rate 533000 bits/sec, 83 packets/sec

17326915 packets input, 3214788014 bytes, 0 no buffer

Received 0 broadcasts, 5 runts, 0 giants, 0 throttles

248897 input errors, 248849 CRC, 0 frame, 35 overrun, 0 ignored, 13 abort

21745431 packets output, 3038411926 bytes, 0 underruns

0 output errors, 0 collisions, 11 interface resets

0 output buffer failures, 0 output buffers swapped out

8 carrier transitions

RTS up, CTS up, DTR up, DCD up, DSR up

7500_CENTRUM#

It's not exact, but it's much better than what was happening before, when the multilink was saturating at just 1.8 Mbps.

Thank you very much for your interest and suggestions

Serial1 is up, line protocol is up

...

20446847 packets input, 3706549029 bytes, 0 no buffer

Received 0 broadcasts, 957 runts, 0 giants, 0 throttles

201362 input errors, 196545 CRC, 0 frame, 958 overrun, 2259 ignored, 3859 abort

26010944 packets output, 2850952244 bytes, 0 underruns

...

Serial2 is up, line protocol is up

...

17326915 packets input, 3214788014 bytes, 0 no buffer

Received 0 broadcasts, 5 runts, 0 giants, 0 throttles

248897 input errors, 248849 CRC, 0 frame, 35 overrun, 0 ignored, 13 abort

21745431 packets output, 3038411926 bytes, 0 underruns

These interfaces have been handling statistically the same amount of traffic (less than 5% difference, and the interface that sent more packets has sent fewer bytes anyway). That is not surprising, since you cannot fool CEF with a few recursive static routes.

As mentioned above, if you want proportional traffic distribution, either run EIGRP with an appropriate configuration, or statically decide which traffic to send on which link (more specific destinations or PBR).
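
As a rough illustration of the PBR option, a sketch that forces part of the LAN traffic out one link and lets the rest follow the routing table; the access-list number, route-map name, LAN interface, and source range are all hypothetical:

! ROUTER A - policy routing sketch (ACL number, route-map name, interface and addresses are hypothetical)
access-list 101 permit ip 10.0.0.0 0.0.0.127 any
!
route-map SPLIT-LOAD permit 10
 match ip address 101
 ! matched sources are forced out the 2048 kbps link
 set interface Serial1
! unmatched traffic falls through to normal CEF forwarding over the other link
!
interface FastEthernet0/0
 description LAN side (hypothetical)
 ip policy route-map SPLIT-LOAD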

However, you have an excessively high percentage of input errors, an indicator of either a faulty circuit or clocking problems. You should focus on that before traffic sharing.
