1069 Views · 5 Helpful · 15 Replies

MultiLink P2P Optimization (Two T1s)

eweber1234
Level 1

Hello,

We have connected two of our branches using a bundle of two T1s, via two Cisco 2610 routers, each with a dual T1 WIC-T1 card. However, I suspect something is wrong with my configurations, since we are not getting fast or reliable throughput: I notice a lot of packet loss when I ping a device from one side to the other. I have included both routers' configurations. Please comment on anything that could improve the communication. Thank you.

Ellsworth-R#sh config

Using 2020 out of 29688 bytes

!

version 12.3

service timestamps debug datetime msec

service timestamps log datetime msec

no service password-encryption

!

hostname Ellsworth-R

!

boot-start-marker

boot-end-marker

!

enable password ********

!

clock timezone ES -5

no aaa new-model

ip subnet-zero

ip cef

!

!

ip name-server 10.31.1.180

!

no ip bootp server

!

!

!

controller T1 0/0

framing esf

linecode b8zs

channel-group 1 timeslots 1-24 speed 64

!

controller T1 0/1

framing esf

linecode b8zs

channel-group 1 timeslots 1-24 speed 64

!

!

!

interface Multilink1

ip address 10.31.100.21 255.255.255.252

no cdp enable

ppp multilink

ppp multilink fragment disable

ppp multilink group 1

!

interface Ethernet0/0

description connected to local subnet 10.31.1.X

ip address 10.31.6.2 255.255.255.0

ip route-cache flow

full-duplex

!

interface Serial0/0:1

no ip address

encapsulation ppp

ip route-cache flow

tx-ring-limit 26

tx-queue-limit 26

no fair-queue

ppp multilink

ppp multilink group 1

!

interface Serial0/1:1

no ip address

encapsulation ppp

ip route-cache flow

tx-ring-limit 26

tx-queue-limit 26

no fair-queue

ppp multilink

ppp multilink group 1

!

no ip http server

ip classless

ip route 0.0.0.0 0.0.0.0 10.31.100.22

!

!

access-list 100 permit ip any any dscp cs7

access-list 101 permit ip any any dscp cs6

access-list 102 permit ip any any dscp cs5

access-list 103 permit ip any any dscp cs4

access-list 104 permit ip any any dscp cs3

access-list 105 permit ip any any dscp cs2

access-list 106 permit ip any any dscp cs1

access-list 107 permit ip any any dscp default

priority-list 1 protocol ip high list 100

priority-list 1 protocol ip high list 101

priority-list 1 protocol ip medium list 102

priority-list 1 protocol ip medium list 103

priority-list 1 protocol ip normal list 104

priority-list 1 protocol ip normal list 105

priority-list 1 protocol ip low list 106

priority-list 1 protocol ip low list 107

!

line con 0

login

line aux 0

password ********

line vty 0 4

exec-timeout 30 0

password ********

login

!

!

end

Ellsworth-R#

/////////////////

See the next post for the second router's configuration.


15 Replies

eweber1234
Level 1

Second Router's configuration:

Main_to_Ells#sh config

Using 3399 out of 29688 bytes

!

version 12.3

service timestamps debug datetime msec

service timestamps log datetime msec

no service password-encryption

!

hostname Main_to_Ells

!

boot-start-marker

boot-end-marker

!

enable secret 5 $1$67dJ$rfd2sWrP4OuyIBn9rjH4U/3ln1

enable password *****

!

clock timezone ES -5

no aaa new-model

ip subnet-zero

ip cef

!

!

ip name-server 10.31.1.180

!

no ip bootp server

!

!

!

controller T1 0/0

framing esf

linecode b8zs

channel-group 1 timeslots 1-24 speed 64

!

controller T1 0/1

framing esf

linecode b8zs

channel-group 1 timeslots 1-24 speed 64

!

!

!

interface Multilink1

ip address 10.31.100.22 255.255.255.252

no cdp enable

ppp multilink

ppp multilink fragment disable

ppp multilink group 1

!

interface Ethernet0/0

description connected to local subnet 10.31.1.X

ip address 10.31.1.2 255.255.255.0

ip route-cache flow

full-duplex

!

interface Serial0/0:1

no ip address

encapsulation ppp

ip route-cache flow

tx-ring-limit 26

tx-queue-limit 26

no keepalive

no fair-queue

ppp multilink

ppp multilink group 1

!

interface Serial0/1:1

no ip address

encapsulation ppp

ip route-cache flow

tx-ring-limit 26

tx-queue-limit 26

no keepalive

no fair-queue

ppp multilink

ppp multilink group 1

!

no ip http server

ip classless

ip route 0.0.0.0 0.0.0.0 10.31.1.1

ip route 10.1.1.0 255.255.255.0 10.31.1.253

ip route 10.1.2.0 255.255.255.0 10.31.1.253

ip route 10.31.2.0 255.255.255.0 10.31.1.1

ip route 10.31.3.0 255.255.255.0 10.31.1.1

ip route 10.31.4.0 255.255.255.0 10.31.1.1

ip route 10.31.5.0 255.255.255.0 10.31.1.1

ip route 10.31.6.0 255.255.255.0 10.31.100.21

ip route 170.209.0.2 255.255.255.254 10.31.1.30

ip route 192.168.237.0 255.255.255.0 10.31.1.253

ip route 192.168.244.0 255.255.255.0 10.31.1.253

ip route 206.104.53.132 255.255.255.255 10.31.1.253

ip route 206.104.53.142 255.255.255.255 10.31.1.253

ip route 206.104.53.143 255.255.255.255 10.31.1.253

ip route 206.104.53.144 255.255.255.255 10.31.1.253

ip route 206.104.53.145 255.255.255.255 10.31.1.253

ip route 206.104.53.146 255.255.255.255 10.31.1.253

ip route 208.75.53.5 255.255.255.255 10.31.1.253

ip route 208.75.53.6 255.255.255.255 10.31.1.253

ip route 208.75.53.7 255.255.255.255 10.31.1.253

ip route 208.75.53.8 255.255.255.255 10.31.1.253

ip route 208.75.53.9 255.255.255.255 10.31.1.253

ip route 208.75.53.10 255.255.255.255 10.31.1.253

ip route 208.75.54.66 255.255.255.255 10.31.1.253

ip route 208.75.54.67 255.255.255.255 10.31.1.253

ip route 208.75.54.68 255.255.255.255 10.31.1.253

ip route 208.75.54.69 255.255.255.255 10.31.1.253

!

!

access-list 1 permit 10.1.1.0 0.0.0.255

access-list 1 permit 192.168.244.0 0.0.0.255

access-list 100 permit ip any any dscp cs7

access-list 101 permit ip any any dscp cs6

access-list 102 permit ip any any dscp cs5

access-list 103 permit ip any any dscp cs4

access-list 104 permit ip any any dscp cs3

access-list 105 permit ip any any dscp cs2

access-list 106 permit ip any any dscp cs1

access-list 107 permit ip any any dscp default

priority-list 1 protocol ip high list 100

priority-list 1 protocol ip high list 101

priority-list 1 protocol ip medium list 102

priority-list 1 protocol ip medium list 103

priority-list 1 protocol ip normal list 104

priority-list 1 protocol ip normal list 105

priority-list 1 protocol ip low list 106

priority-list 1 protocol ip low list 107

!

line con 0

login

line aux 0

line vty 0 4

password ******

login

!

!

end

Main_to_Ells#

Any reason for having fragmentation disabled on the ppp multilink?

According to the usage guidelines, this command can cause performance degradation:

http://www.cisco.com/en/US/docs/ios/dial/command/reference/dia_p2.html#wp1013306

Also, why are you limiting the transmission ring to 26 packets? Allowing more packets in the ring (the default is 60) will improve the router's throughput.

I also recommend removing the tx-queue limit, as an incorrect value can produce adverse results; it's best to leave it at the default and let the physical interface dictate its limit.

HTH,

__

Edison.
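For reference, backing those settings out might look roughly like this on each router (a sketch only; the exact no-forms can vary by IOS release):

interface Serial0/0:1
no tx-ring-limit
no tx-queue-limit
!
interface Serial0/1:1
no tx-ring-limit
no tx-queue-limit
!
interface Multilink1
! re-enables multilink fragmentation, which is the default
no ppp multilink fragment disable

Afterwards, show ppp multilink should still list both serial members in the bundle.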

Thank you, Edison.

Per your recommendation, I have removed the lines you mentioned above, and now the configuration looks like this on both ends:

interface Multilink1

ip address 10.31.100.22 255.255.255.252

no cdp enable

ppp multilink

ppp multilink group 1

!

interface Ethernet0/0

description connected to local subnet 10.31.1.X

ip address 10.31.1.2 255.255.255.0

ip route-cache flow

full-duplex

!

interface Serial0/0:1

no ip address

encapsulation ppp

ip route-cache flow

no keepalive

no fair-queue

ppp multilink

ppp multilink group 1

!

interface Serial0/1:1

no ip address

encapsulation ppp

ip route-cache flow

no keepalive

no fair-queue

ppp multilink

ppp multilink group 1

However, I'm still losing ping packets when I ping a device across the link, as shown below:

Reply from 10.31.6.180: bytes=32 time=18ms TTL=126

Request timed out.

Reply from 10.31.6.180: bytes=32 time=112ms TTL=126

Request timed out.

Request timed out.

Request timed out.

Reply from 10.31.6.180: bytes=32 time=32ms TTL=126

Reply from 10.31.6.180: bytes=32 time=48ms TTL=126

Reply from 10.31.6.180: bytes=32 time=35ms TTL=126

Reply from 10.31.6.180: bytes=32 time=40ms TTL=126

Reply from 10.31.6.180: bytes=32 time=72ms TTL=126

Reply from 10.31.6.180: bytes=32 time=55ms TTL=126

Reply from 10.31.6.180: bytes=32 time=41ms TTL=126

Request timed out.

Request timed out.

Reply from 10.31.6.180: bytes=32 time=118ms TTL=126

Request timed out.

Reply from 10.31.6.180: bytes=32 time=45ms TTL=126

Request timed out.

Request timed out.

Reply from 10.31.6.180: bytes=32 time=7ms TTL=126

Request timed out.

Request timed out.

Reply from 10.31.6.180: bytes=32 time=121ms TTL=126

Reply from 10.31.6.180: bytes=32 time=67ms TTL=126

Request timed out.

Request timed out.

Reply from 10.31.6.180: bytes=32 time=81ms TTL=126

Reply from 10.31.6.180: bytes=32 time=36ms TTL=126

Request timed out.

Reply from 10.31.6.180: bytes=32 time=9ms TTL=126

Reply from 10.31.6.180: bytes=32 time=63ms TTL=126

Request timed out.

Reply from 10.31.6.180: bytes=32 time=49ms TTL=126

Reply from 10.31.6.180: bytes=32 time=25ms TTL=126

Reply from 10.31.6.180: bytes=32 time=29ms TTL=126

Request timed out.

Reply from 10.31.6.180: bytes=32 time=26ms TTL=126

Reply from 10.31.6.180: bytes=32 time=17ms TTL=126

Reply from 10.31.6.180: bytes=32 time=31ms TTL=126

Reply from 10.31.6.180: bytes=32 time=62ms TTL=126

Request timed out.

Request timed out.

Request timed out.

Reply from 10.31.6.180: bytes=32 time=93ms TTL=126

Reply from 10.31.6.180: bytes=32 time=54ms TTL=126

Reply from 10.31.6.180: bytes=32 time=130ms TTL=126

Request timed out.

Can you post the output from typing

show proc cpu sorted

show proc cpu his

show interface Serial0/0:1

show interface Serial0/1:1

from both routers.

Also, try pinging from Multilink IP to Multilink IP; do you get request timeouts then?

TTL=126 seems a bit odd.

__

Edison.

Edison, thanks for looking into this. I have to post the output in three sections due to the size limitation. Here you go:

Main_to_Ells#show interface serial 0/0:1

Serial0/0:1 is up, line protocol is up

Hardware is PowerQUICC Serial

MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,

reliability 255/255, txload 177/255, rxload 7/255

Encapsulation PPP, LCP Open, multilink Open, loopback not set

Keepalive not set

Last input 00:00:00, output 00:00:00, output hang never

Last clearing of "show interface" counters 05:15:50

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

Queueing strategy: priority-list 1 [suspended, using FIFO]

Output queue (queue priority: size/max/drops):

high: 0/20/0, medium: 0/40/0, normal: 0/60/0, low: 0/80/0

5 minute input rate 45000 bits/sec, 64 packets/sec

5 minute output rate 1069000 bits/sec, 182 packets/sec

1005461 packets input, 70427156 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

4 input errors, 0 CRC, 4 frame, 0 overrun, 0 ignored, 0 abort

2457898 packets output, 2451140888 bytes, 0 underruns

0 output errors, 0 collisions, 10 interface resets

0 output buffer failures, 0 output buffers swapped out

1 carrier transitions

Timeslot(s) Used:1-24, SCC: 1, Transmitter delay is 0 flags

Main_to_Ells#show interface serial 0/1:1

Serial0/1:1 is up, line protocol is up

Hardware is PowerQUICC Serial

MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,

reliability 255/255, txload 176/255, rxload 6/255

Encapsulation PPP, LCP Open, multilink Open, loopback not set

Keepalive not set

Last input 00:00:00, output 00:00:00, output hang never

Last clearing of "show interface" counters 05:16:15

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

Queueing strategy: priority-list 1 [suspended, using FIFO]

Output queue (queue priority: size/max/drops):

high: 0/20/0, medium: 0/40/0, normal: 0/60/0, low: 0/80/0

5 minute input rate 40000 bits/sec, 62 packets/sec

5 minute output rate 1061000 bits/sec, 179 packets/sec

1006206 packets input, 70497417 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 1 giants, 0 throttles

126 input errors, 8 CRC, 111 frame, 0 overrun, 0 ignored, 7 abort

2449709 packets output, 2445243413 bytes, 0 underruns

0 output errors, 0 collisions, 10 interface resets

0 output buffer failures, 0 output buffers swapped out

1 carrier transitions

Timeslot(s) Used:1-24, SCC: 2, Transmitter delay is 0 flags

Main_to_Ells#

Main_to_Ells# show processes cpu history

Main_to_Ells 12:15:14 AM Monday Mar 1 1993 ES

[show processes cpu history graphs; alignment was lost in the paste. Roughly: CPU% per second (last 60 seconds) mostly 10-30%; CPU% per minute (last 60 minutes) averaging about 20% with maximums up to roughly 40%; CPU% per hour (last 72 hours) averaging 10-20% with hourly maximums reaching about 85%.]

CPU utilization for five seconds: 19%/17%; one minute: 21%; five minutes: 21%

PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process

10 38203 38851 983 0.39% 0.20% 0.16% 0 ARP Input

35 50505 65535 770 0.31% 0.22% 0.19% 0 IP Input

64 396 122 3245 0.23% 0.38% 0.10% 66 Virtual Exec

46 3085 522 5909 0.15% 0.01% 0.00% 0 IP Background

6 0 2 0 0.00% 0.00% 0.00% 0 Timers

7 4 7 571 0.00% 0.00% 0.00% 0 Serial Backgroun

8 0 2 0 0.00% 0.00% 0.00% 0 AAA high-capacit

9 0 630 0 0.00% 0.00% 0.00% 0 Environmental mo

4 9997 2008 4978 0.00% 0.01% 0.01% 0 Check heaps

5 0 2 0 0.00% 0.00% 0.00% 0 Pool Manager

12 8 8 1000 0.00% 0.00% 0.00% 0 DDR Timers

2 0 3770 0 0.00% 0.00% 0.00% 0 Load Meter

14 0 1 0 0.00% 0.00% 0.00% 0 SERIAL A'detect

15 16 18844 0 0.00% 0.00% 0.00% 0 GraphIt

16 0 2 0 0.00% 0.00% 0.00% 0 Dialer event

1 0 1 0 0.00% 0.00% 0.00% 0 Chunk Manager

11 24 3770 6 0.00% 0.00% 0.00% 0 HC Counter Timer

19 132 4971 26 0.00% 0.00% 0.00% 0 Net Background

13 8 2 4000 0.00% 0.00% 0.00% 0 Entity MIB API

21 76 18807 4 0.00% 0.00% 0.00% 0 TTY Background

22 76 18851 4 0.00% 0.00% 0.00% 0 Per-Second Jobs

23 4 2 2000 0.00% 0.00% 0.00% 0 SM Monitor

17 0 2 0 0.00% 0.00% 0.00% 0 SMART

25 0 1 0 0.00% 0.00% 0.00% 0 dev_device_remov

26 1198 6294 190 0.00% 0.00% 0.00% 0 Net Input

27 0 3771 0 0.00% 0.00% 0.00% 0 Compute load avg

28 9014 315 28615 0.00% 0.05% 0.00% 0 Per-minute Jobs

29 36 75375 0 0.00% 0.00% 0.00% 0 e1t1 Framer back

30 0 2 0 0.00% 0.00% 0.00% 0 AAA Server

31 0 1 0 0.00% 0.00% 0.00% 0 AAA ACCT Proc

32 0 1 0 0.00% 0.00% 0.00% 0 ACCT Periodic Pr

18 0 1 0 0.00% 0.00% 0.00% 0 Critical Bkgnd

34 0 2 0 0.00% 0.00% 0.00% 0 AAA Dictionary R

20 4 69 57 0.00% 0.00% 0.00% 0 Logger

36 0 1 0 0.00% 0.00% 0.00% 0 ICMP event handl

37 656 2469 265

This is crazy; it won't let me copy and paste everything. I guess there is a size limitation. Can I email you all of that? If so, send your email address to me and I will reply with all the info. Here is my email: essi@boaa.com

Ellsworth-R#ping 10.31.100.22

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.31.100.22, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 12/19/32 ms

Ellsworth-R#

Main_to_Ells#ping 10.31.100.21

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.31.100.21, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 8/12/16 ms

Main_to_Ells#ping 10.31.100.22

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.31.100.22, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 20/25/32 ms

Main_to_Ells#

Can you send more than 4 ICMP packets across the router-to-router connection and try increasing the packet size to a larger value, say 900?

As for copy and paste on these forums, there is a limit, which you can circumvent by pasting the entire content into Notepad and uploading the file.

If you come back with no packet loss from router to router, we need to investigate your LAN environment at both locations. That TTL count you posted previously is quite abnormal for a point-to-point site interconnection. Can you post a traceroute from that same device to the same destination?
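For the router-to-router test, something like this should do it (a sketch; on releases that don't accept the one-line options, type ping with no arguments and answer the interactive extended-ping prompts instead):

Main_to_Ells#ping 10.31.100.21 repeat 100 size 900
Ellsworth-R#ping 10.31.100.22 repeat 100 size 900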

Please see the attached files. I have included all the info regarding the current in-production routers and the new routers.

Thanks for looking into this,

Best,

E.

Looking at your interfaces, you have an alarming count of input errors on all interfaces (including the LAN interface) of the Ellsworth-R router.

E0/0

21877392 packets input, 4152764533 bytes, 0 no buffer

Received 1019873 broadcasts, 0 runts, 0 giants, 0 throttles

110051 input errors, 109951 CRC, 0 frame, 0 overrun, 100 ignored

Serial interfaces

S0/0:0

18362420 packets input, 1006175883 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 13 giants, 0 throttles

34491294 input errors, 1126098 CRC, 33345521 frame, 0 overrun, 240 ignored, 19675 abort

S0/1:1

18223028 packets input, 804705597 bytes, 0 no buffer

Received 0 broadcasts, 1 runts, 824 giants, 0 throttles

34685830 input errors, 1120534 CRC, 33404118 frame, 0 overrun, 391 ignored, 161178 abort

Same on Main_to_Ells Router:

E0/0

5 minute output rate 54000 bits/sec, 67 packets/sec

10828503 packets input, 1890964668 bytes, 0 no buffer

Received 529504 broadcasts, 0 runts, 0 giants, 0 throttles

539895 input errors, 539880 CRC, 85720 frame, 0 overrun, 15 ignored

________

The LAN cabling needs to be verified. What type of device is connected to each of these routers?

On the Serial errors, you need to contact your provider.

__

Edison.
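If it helps, one way to see whether those errors are still accruing (just a sketch) is to clear the counters and check again after a few minutes of traffic:

Ellsworth-R#clear counters Ethernet0/0
Ellsworth-R#clear counters Serial0/0:1
Ellsworth-R#clear counters Serial0/1:1

Ellsworth-R#show interfaces Serial0/0:1 | include errors
Ellsworth-R#show interfaces Ethernet0/0 | include errors

CRC/frame errors that keep climbing on the serial interfaces point at the circuits (provider/smart jack); climbing CRC on Ethernet0/0 points at the LAN cabling or a duplex mismatch.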

At both ends, the Ethernet ports are connected to a 3Com 4400 LAN switch. Since you mentioned the cabling: they are connected using normal straight-through RJ45 cables. Do you want me to swap the cables for a different model/version? I have just opened a new ticket with the service provider to come and check the smart jack and run an end-to-end test again to see what they may discover.

I suggest swapping the cabling and verifying that both devices are running compatible speed/duplex settings.

I see you are running full duplex on the Cisco router; make sure the switch ports are also set to full duplex.

__

Edison.
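A quick check from the router side (a sketch; the 3Com 4400 port settings have to be verified in the switch's own management interface):

Ellsworth-R#show interfaces Ethernet0/0 | include duplex
Main_to_Ells#show interfaces Ethernet0/0 | include duplex

A duplex mismatch usually shows up as CRC/runt errors on the full-duplex side and late collisions on the half-duplex side.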
