Cisco Support Community

New Member

Uploading Slows Download Speed (and vice versa) behind Cisco 2901 NAT

Hi there,

We have recently had a new leased line installed.  It comes in over fiber to an NTE (network terminating equipment), which hands off a 100Mb Ethernet connection straight into the router.

We are guaranteed 100Mb up and 100Mb down from our provider.

We are running NAT on the router to provide Internet connectivity to clients on the private side.

We noticed that when downloading, we can achieve full throughput as this example shows:

The same is vice-versa; when we upload (without running a parallel download) we can obtain full throughput for the upload.

However, when we run both at the same time (upload and download) we notice that throughput is limited (for both).

See an example of an upload with a parallel download taking place:

... notice the decreased throughput.

We checked the CPU on the router and it's running at about 58% with both upload/download threads running at the same time.  I don't think CPU is the issue here.

Perhaps it's simply the overhead of TCP/IP?
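As a sanity check on that idea, header overhead alone only caps TCP goodput a few percent below line rate. A back-of-the-envelope sketch (the figures assume standard Ethernet framing and no TCP options):

```python
# Estimate best-case TCP goodput on a 100Mb Ethernet link.
# Assumes a 1500-byte MTU, no TCP options, standard Ethernet framing.
LINE_RATE_MBPS = 100

eth_overhead = 14 + 4 + 8 + 12     # MAC header + FCS + preamble + inter-frame gap
ip_tcp_overhead = 20 + 20          # IPv4 header + TCP header
payload = 1500 - ip_tcp_overhead   # 1460 bytes of TCP payload per frame
wire_bytes = 1500 + eth_overhead   # 1538 bytes on the wire per frame

goodput = LINE_RATE_MBPS * payload / wire_bytes
print(f"best-case TCP goodput: {goodput:.1f} Mbps")  # ~94.9 Mbps
```

That is roughly a 5% hit, so protocol overhead by itself cannot explain losing a third of the throughput when both directions run at once.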

The inside interface (connected to our switch) is running at 1Gbps / full duplex.

The outside interface (connected to the NTE) is running at 100Mbps / full duplex.

Here is some of our configuration:

NAT/Routing

ip nat inside source list 100 interface GigabitEthernet0/1 overload
ip nat inside source static tcp 192.168.0.160 51413 interface GigabitEthernet0/1 51413
ip nat inside source static 192.168.0.210 x.x.x.x
ip nat inside source static 192.168.0.250 x.x.x.x
ip route 0.0.0.0 0.0.0.0 x.x.x.x

Interface

interface GigabitEthernet0/0
ip address 192.168.0.1 255.255.255.0
ip nat inside
ip virtual-reassembly in
duplex auto
speed auto
no mop enabled

interface GigabitEthernet0/1
ip address x.x.x.x 255.255.255.248
ip nat outside
ip virtual-reassembly in
duplex auto
speed auto

We are also running CEF

ip cef

Thoughts around queueing

Perhaps running some kind of fair-queue on the interface would improve throughput, but I was under the impression that fair queueing is for slower links that have trouble like this.  I am also under the impression that it uses a lot of CPU.

Currently, both interfaces are running default FIFO.

Any thoughts, suggestions, explanations and help are greatly appreciated and I thank you in advance.

53 REPLIES
VIP Purple

Hello,

this could simply be an MTU issue. Try configuring:

ip mtu 1460

ip tcp adjust-mss 1420

on your interfaces.
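For context on those two numbers: the MSS is simply the IP MTU minus the 40 bytes of IPv4 and TCP headers, so an adjust-mss of 1420 pairs with an ip mtu of 1460. A quick sketch:

```python
# MSS = MTU minus the IPv4 and TCP header sizes (assuming no IP/TCP options).
def mss_for_mtu(mtu: int) -> int:
    return mtu - 20 - 20  # 20-byte IPv4 header + 20-byte TCP header

print(mss_for_mtu(1500))  # 1460: the usual full-size Ethernet value
print(mss_for_mtu(1460))  # 1420: matches the suggested adjust-mss above
```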

Also, post the output of 'show interfaces GigabitEthernet0/1'. If there are drops, we can look at implementing some sort of QoS.

New Member

Hi George,

Thank you for your reply.

If it was an MTU issue, surely it would be present when simply downloading (without uploading)?  We get full throughput when only going one way.

Also, the router sits Ethernet-to-Ethernet.  We are able to use a 1500-byte packet size without fragmentation or drops.

Here is the 'show interfaces' output for GigabitEthernet0/1:

Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1297400
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 142000 bits/sec, 11 packets/sec
  5 minute output rate 15000 bits/sec, 12 packets/sec
     2516901849 packets input, 2142836399 bytes, 1297115 no buffer
     Received 45832 broadcasts (0 IP multicasts)
     0 runts, 0 giants, 0 throttles
     2790 input errors, 0 CRC, 0 frame, 2790 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     1283937838 packets output, 3980297965 bytes, 0 underruns
     0 output errors, 0 collisions, 4 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     1 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out
VIP Purple

Hello,

you have a lot of output drops. Try and implement the below:

policy-map SHAPE_100
 class class-default
  shape average 100000000

interface GigabitEthernet0/1
 service-policy output SHAPE_100

MTU can be tricky, but in your case you are probably right, and MTU might not be an issue. You can test the maximum MTU size by sending pings of different packet sizes to the IP address you download from, and checking at which size you get a reply:

C:\windows\system32>ping -f -l 1500 8.8.8.8

Pinging 8.8.8.8 with 1500 bytes of data:
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.

Ping statistics for 8.8.8.8:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

C:\windows\system32>ping -f -l 1472 8.8.8.8

Pinging 8.8.8.8 with 1472 bytes of data:
Reply from 8.8.8.8: bytes=64 (sent 1472) time=17ms TTL=45
Reply from 8.8.8.8: bytes=64 (sent 1472) time=16ms TTL=45
Reply from 8.8.8.8: bytes=64 (sent 1472) time=22ms TTL=45
Reply from 8.8.8.8: bytes=64 (sent 1472) time=15ms TTL=45

Ping statistics for 8.8.8.8:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 15ms, Maximum = 22ms, Average = 17ms

New Member

Hi again Georg,

We are getting responses to pings with a packet size of 1472 and lower - with the 28 bytes of ICMP/IP overhead, that equates to an MTU of 1500 - so that seems fine.

ryan:~ ryan$ ping -D -s 1472 8.8.4.4
PING 8.8.4.4 (8.8.4.4): 1472 data bytes
1480 bytes from 8.8.4.4: icmp_seq=0 ttl=60 time=15.523 ms
1480 bytes from 8.8.4.4: icmp_seq=1 ttl=60 time=11.091 ms
1480 bytes from 8.8.4.4: icmp_seq=2 ttl=60 time=17.449 ms

We have applied the shaping configuration you supplied but this, however, has made throughput even worse.  Here is a screenshot of up/down threads:

Notice that we are only achieving 7.1MB/s for upload and 10.1MB/s for download when running up/down together - we can usually achieve ~11.2MB/s for each when running separately.

VIP Purple

Hello,

what is the URL you are downloading this from? Do you get slow downloads just from that one site?

That said, can you post the full config of your router ?

New Member

We are downloading from an external server in the Netherlands that has a 10GbE connection to the world - this thing can keep up.  We are also uploading to the same server.  No, download performance is impacted from all sources when we are also running upload.

Of course, config below:

sh config
Using 2304 out of 262136 bytes
!
! Last configuration change at 12:56:56 UTC Wed Jul 19 2017 by ryan
!
version 15.5
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname xxx.net
!
boot-start-marker
boot system flash:c2900-universalk9-mz.SPA.155-3.M5.bin
boot-end-marker
!
!
enable secret 5 xxx
enable password xxx
!
no aaa new-model
ethernet lmi ce
!
!
!
!
!
!
!
!
!
ip dhcp excluded-address 192.168.0.1
ip dhcp excluded-address 192.168.0.80 192.168.0.255
!
ip dhcp pool NET-POOL
network 192.168.0.0 255.255.255.0
default-router 192.168.0.1
dns-server 8.8.8.8 8.8.4.4
!
!
!
no ip domain lookup
ip domain name xxx
ip name-server 8.8.8.8
ip name-server 8.8.4.4
ip cef
no ipv6 cef
multilink bundle-name authenticated
!
!
!
!
license udi pid CISCO2901/K9 sn xxx
!
!
username xxx privilege 15 secret 5 xxx
!
redundancy
!
!
!
!
!
!
interface Embedded-Service-Engine0/0
no ip address
shutdown
!
interface GigabitEthernet0/0
ip address 192.168.0.1 255.255.255.0
ip nat inside
ip virtual-reassembly in
duplex auto
speed auto
no mop enabled
!
interface GigabitEthernet0/1
ip address xxx.xxx.xxx.xxx 255.255.255.248
ip nat outside
ip virtual-reassembly in
duplex auto
speed auto
!
ip default-gateway xxx.xxx.xxx.xxx
ip forward-protocol nd
!
no ip http server
no ip http secure-server
!
ip nat inside source list 100 interface GigabitEthernet0/1 overload
ip nat inside source static tcp 192.168.0.160 51413 interface GigabitEthernet0/1 51413
ip nat inside source static 192.168.0.210 xxx.xxx.xxx.xxx
ip nat inside source static 192.168.0.250 xxx.xxx.xxx.xxx
ip route 0.0.0.0 0.0.0.0 xxx.xxx.xxx.xxx
!
!
!
snmp-server community public RO 12
snmp-server location xxx, United Kingdom
snmp-server contact Ryan Barclay, xxx, xxx
access-list 12 permit 192.168.0.0 0.0.0.255
access-list 100 permit ip 192.168.0.0 0.0.0.255 any
!
control-plane
!
!
!
line con 0
line aux 0
line 2
no activation-character
no exec
transport preferred none
transport output pad telnet rlogin lapb-ta mop udptn v120 ssh
stopbits 1
line vty 0 4
access-class 12 in
password xxx
login local
transport input ssh
!
scheduler allocate 20000 1000
!
end

I have used xxx to remove sensitive data.

Thanks, Georg.

VIP Purple

Hello,

on a side note, 100Mb down is guaranteed by the provider...it is probably worth checking whether they have actually implemented this correctly. Since downloads from all sites are affected, I wonder if you really get 100Mb.

New Member

Yes, we get 100Mb down when downloading one-way (tested).  We get 100Mb up when uploading one-way (tested).  But we get throughput issues when we do both at the same time.

VIP Purple

Hello,

sounds almost like a speed/duplex problem. Try setting GigabitEthernet0/1 to 'speed 100' and 'duplex full' manually.

New Member

Interesting, we set "speed 100" and "duplex full" and the NTE won't provide a link to the router even though that's the agreed settings when it's set to auto:

However, we can set the duplex to full and it still provides a link.

I've tested that with "duplex full" and performance does indeed seem better but not perfect.  We are getting perhaps around 9.5 MB/s for up and down.

VIP Purple

Ryan,

9.5MB/s is still miserable. Is the Ethernet cable you are using Cat5e or better?

Bronze

Are you running both of these downloads on the same computer?  What happens if you try it on two separate computers simultaneously?

New Member

Just tested running the upload on one machine and the download on the other.  Same results.

When running an upload/download on separate machines, I can get ~11MB/s upload while the download stays at 5-7MB/s.

If I start the upload first, the download starts very slow and works its way up to around 5-7MB/s very slowly (over 1 minute) but it fluctuates between 5-7MB/s over the course of the test - constantly up and down.  The upload stays consistent at 11MB/s.  I don't know if this indicates anything (FIFO).

As soon as I cancel the upload, the download works its way up to ~11MB/s pretty quickly.
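That pattern (download starved while the upload stays pegged, then recovering as soon as the upload stops) is consistent with the download's ACKs queueing behind full-size upload frames in the outbound FIFO. A rough sketch, with all figures being illustrative assumptions:

```python
# Sketch of how a saturated upload can throttle a download behind a FIFO
# queue: the download's ACKs must share the upstream queue with full-size
# upload frames. All figures here are illustrative assumptions.
download_bps = 95e6                  # ~95Mb of downstream TCP goodput
mss = 1460                           # bytes of payload per segment
segments_per_sec = download_bps / 8 / mss
acks_per_sec = segments_per_sec / 2  # delayed ACK: one ACK per two segments

# Each full-size upload frame (1538 bytes on the wire) takes ~123 us to
# serialize at 100Mb; every frame queued ahead of an ACK adds that delay.
per_frame_ms = 1538 * 8 / 100e6 * 1000
full_queue_ms = 40 * per_frame_ms    # default 40-packet FIFO output queue

print(f"{acks_per_sec:.0f} ACKs/s must cross the congested upstream queue")
print(f"a full FIFO adds ~{full_queue_ms:.1f} ms of delay to each one")
```

Delayed or dropped ACKs stall the sender's window, which would match the slow ramp-up and constant fluctuation described above - and it is why fair queueing on the egress interface is often suggested for exactly this pattern.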

VIP Purple

Hello,

sorry for the confusion: so when your inside interface (GigabitEthernet0/0) is set to auto speed/auto duplex, this is the result? Problem solved? What do you have connected to GigabitEthernet0/0?

Speed test.

New Member

No, those speed tests run upload and download test separately.  They are not run in parallel.  The problem still exists.

VIP Purple

Hello,

what is the NTE device - which type/model? I would still try replacing the cable; it could be just that. Also, have the provider run a test and make sure they get the full 100Mb both ways simultaneously. Who is the provider anyway, BT?

New Member

Cat6 UTP (2m) straight through.  It's an off-the-shelf cable (I didn't make it myself).

Super Bronze

NB: If auto is being used, and your 'show interface' output shows the desired speed and duplex, then you should be fine.

Setting duplex to full, and taking advantage of it, requires the other side to also be hard-coded to full duplex.

If you do have a duplex mismatch, traffic will transfer, but very, very slowly.

Super Bronze

you have a lot of output drops. Try and implement the below:

policy-map SHAPE_100
 class class-default
  shape average 100000000

interface GigabitEthernet0/1
 service-policy output SHAPE_100

NB: If the interface is running, or can be run physically, at 100 Mbps, you shouldn't shape.

VIP Purple

Joseph,

good information, thanks. What if the interface runs at Gigabit speed ?

Super Bronze

Then you may want to do it if you want to manage congestion (rather than a downstream bottleneck to do so).

Super Bronze

Perhaps running some kind of fair-queue on the interface would improve throughput, but I was under the impression that fair queueing is for slower links that have trouble like this.  I am also under the impression that it uses a lot of CPU.

Yea, the original interface WFQ would consume lots of CPU on a high-speed interface, but the FQ provided in CBWFQ doesn't appear to have the same problem.  That said, it doesn't address what you're describing (even on a slower link).

Any thoughts, suggestions, explanations and help are greatly appreciated and I thank you in advance.

Cisco recommends the 2901 for up to 25 Mbps of WAN bandwidth.  However, with 1500-byte packets and a stripped config, it documents a 2901 as providing up to 3 Gbps of throughput.  Your CPU is likely only at 58% because of the large packet sizes in these bulk-transfer tests.  Although your average CPU load perhaps indicates your CPU isn't a problem, a 2901 is still a bit undersized for a 100Mb full-duplex WAN link.  Also remember your CPU stat is an average over multiple seconds; it's possible that during combined up/down TCP bursts the CPU cannot meet the imposed load and slows the transfer rate.

Do your interfaces show any drops during the dual upload/download test?  Ingress queues all show zero stats?  What do your buffer stats look like?  Do you know if TCP is showing drops for the flows?

I'm also wondering whether when you run the up/down test, whether some other equipment, other than this router, cannot handle the up/down load.

New Member

Hi Joseph,

Thank you for your reply.

I was simply running sh proc cpu for the CPU data.  I ran it again while running the up/down test:

CPU utilization for five seconds: 59%/58%; one minute: 56%; five minutes: 57%

Is there a better way to check live CPU activity?

I'm self-taught on Cisco, so some of this is new to me - please be patient with my questions; hence why I'm posting on here!

Yep, I read those throughput stats too when we purchased the router.  But I figured as we are running full-size packets then it should be able to keep up without any problems.

The only piece of equipment between the router and the world is the carrier supplied NTE.  Apparently, these things can handle Gigabit links so I can't see it being a problem.

Here is the latest output of sh int Gi0/1

The 5 min in/out was collected while running the up/down test.

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1312761
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 82540000 bits/sec, 9661 packets/sec
  5 minute output rate 71480000 bits/sec, 6982 packets/sec
     2534358289 packets input, 4065249378 bytes, 1306045 no buffer
     Received 45956 broadcasts (0 IP multicasts)
     0 runts, 0 giants, 0 throttles
     2790 input errors, 0 CRC, 0 frame, 2790 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     1295912807 packets output, 1903063013 bytes, 0 underruns
     0 output errors, 0 collisions, 4 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     1 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

Is this the data you were looking for?  If there is anything else that I've missed, let me know.  How am I checking the ingress queues and buffer stats?  I can use sh buffers - is that what you're looking for?  How am I checking TCP drops for flows (router side, wireshark)?

Thank you again for your help.

Super Bronze

Is there a better way to check live CPU activity?

None that I'm aware of.

Yep, I read those throughput stats too when we purchased the router.  But I figured as we are running full-size packets then it should be able to keep up without any problems.

Yea, you have lots of full-size packets, but not all.  For example, how large are TCP ACK packets?  Again, although your CPU stats look good, it's an average.

Is this the data you were looking for?

It helps.

You have lots of output drops although as a percentage, not horribly bad.  However, increasing your egress queue may help reduce those.  Since you're running at 100 Mbps, a queue depth of 40 is perhaps too shallow.  Try 128, 256 or 512.
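As a rule of thumb (an assumption on my part, not official Cisco sizing guidance), you can sanity-check those depths against the bandwidth-delay product:

```python
# Rule-of-thumb queue sizing from the bandwidth-delay product (BDP).
# Assumed figures: 100Mb link, ~17 ms RTT (from the pings earlier in
# the thread), and a guessed 1000-byte average packet size.
link_bps = 100e6
rtt_s = 0.017
avg_packet_bytes = 1000  # assumed mix of full-size data and small ACKs

bdp_bytes = link_bps / 8 * rtt_s
queue_packets = bdp_bytes / avg_packet_bytes
print(f"BDP ~{bdp_bytes / 1000:.1f} KB, roughly {queue_packets:.0f} packets")
```

At that assumed average packet size, the BDP lands in the same few-hundred-packet range as the 128/256/512 suggestions above.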

Although you don't have many, you also have overrun errors.  Those, I believe, are indicative of your hardware being unable to keep up with the ingress rate.

I can use sh buffers - is that what you're looking for?

Yes, another stat I would like to see.

How am I checking TCP drops for flows (router side)?

That would be done by seeing if the end hosts note something like TCP retransmissions in their TCP stats.  Can also be done if we copy packets off the wire and analyze them (e.g. using WireShark).

New Member

Thank you for your reply, Joseph.

I can't seem to find the correct command to change the size of the egress queue - do you have the config to hand?

When you say "hardware" do you mean the router or NTE?

I already have Wireshark installed and ready to go on my testing machine so I'll have a play with that.

I noticed something very interesting that I wanted to mention...

I thought I would try setting the inside interface to "speed 100" and "duplex full" so it was running at 100Mb not 1000Mb (full duplex).  I noticed some very strange results.  This is a crude speed test that I used but it confused me.

When the inside port on the router was set to 100Mb/full-duplex these were the results:


When the switch is set to auto speed and auto duplex, it negotiates at 1000Mb full-duplex.  These were the results:

... I thought that was very bizarre and can't figure out why the upload was only ~6Mb when the inside router port was set to 100/full-duplex.

When changed back to auto/auto it goes back to normal.

Do you know what's going on here?

Super Bronze

I can't seem to find the correct command to change the size of the egress queue - do you have the config to hand?

Off the top of my head, the interface command is 'hold-queue <depth> out'.
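For what it's worth, a sketch of that applied to the WAN interface, using the 256 starting point suggested earlier (exact syntax may vary by IOS version):

```
interface GigabitEthernet0/1
 hold-queue 256 out
```

'show interfaces GigabitEthernet0/1' should then report 'Output queue: 0/256 (size/max)' instead of 0/40.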

When you say "hardware" do you mean the router or NTE?

Your router.

... I thought that was very bizarre and can't figure out why the upload was only ~6Mb when the inside router port was set to 100/full-duplex.

When changed back to auto/auto it goes back to normal.

Do you know what's going on here?

Also again, off the top of my head, no, but then we would need to understand your internal infrastructure.  However, I presume you're more concerned about your WAN issue.

BTW, to confirm, you're running the external interface auto/auto?  If so, what does it connect up as?

New Member

Thanks for the reply again Joseph.

Those speed tests I provided were going over the WAN - it's simply the speed test from speedtest.net.

When I set the internal interface Gi0/0 (the one connected to our switch) to speed 100 and duplex full, those were the results (~6Mb up).  Setting it back to auto seemed to fix it.  I just have no idea why.

We have a very simple internal set-up - the router plugs into a switch and my test machine is also connected to the same switch.

The external interface (connected to the NTE) only works on auto/auto and negotiates at 100Mb/full-duplex.

Super Bronze

What kind of switch, and if one you can configure, how is it configured for queuing?

New Member

It's a very simple unmanaged, fixed-configuration 16-port Cisco SG100-16:

We don't have any problems with it - it can push gigabit to our storage array and get full duplex throughput.  We changed the port to eliminate port failure and that was just the same.

We do have one of these sitting on the shelf though:
