07-19-2009 02:17 AM - edited 03-04-2019 05:28 AM
Hi,
I have a performance issue once I have a router between the Internet and Microsoft ISA.
If I connect the Internet directly with an additional NIC on the Microsoft ISA, it works great...
I didn't find any communication issue between the router and ISA.
Can someone advise? Here is the router configuration:
interface FastEthernet0/0
description Connected to service-provider
ip address 10.10.10.2 255.255.255.252
ip nat outside
!
interface FastEthernet0/1
description Connection to LAN
ip address 192.168.1.100 255.255.255.0
ip nat inside
ip route 0.0.0.0 0.0.0.0 10.10.10.1
ip nat inside source list 99 interface FastEthernet0/0 overload
access-list 99 permit host 192.168.1.200
07-20-2009 06:35 AM
One theory is output drops.
Do this:
policy-map SHAPE
class class-default
shape average 5000000
interface FastEthernet0/0
service-policy output SHAPE
If your connection to ISP is really only 5M, this will help a lot on the upstream side.
Persuade provider to do shaping instead of policing on the output to your site on their PE.
Remember how TCP works. If you create a policer for 10Mbps on a 100Mbps interface, one TCP session will only be able to use about 1.5Mbps on average. That's a huge difference. With shaping, one TCP session will be able to use almost all of the link.
Of course, with many users there will be more TCP sessions and therefore higher overall throughput (up to the speed of the policer).
But it all depends on what apps are used and what is tested.
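The 1.5Mbps figure above can be sanity-checked with the Mathis approximation for steady-state TCP throughput, rate ≈ (MSS/RTT) × (C/√p). The numbers below are illustrative assumptions on my part, not measurements from this thread: a 1460-byte MSS, a 100 ms RTT, and roughly 1% packet loss induced by the policer's tail drops.

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis et al. approximation of steady-state TCP throughput:
    rate <= (MSS / RTT) * (C / sqrt(p)), returned in bits per second."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

# Hypothetical numbers: 1460-byte MSS, 100 ms RTT, ~1% loss from policing.
policed = mathis_throughput_bps(1460, 0.100, 0.01)
print(f"single TCP session under policer: {policed / 1e6:.2f} Mbps")  # ~1.4 Mbps
```

With a shaper instead, excess packets are queued rather than dropped, so the loss rate a single flow sees is far lower and the same formula yields throughput near the shaped rate, which is the difference Pavlo describes.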
07-20-2009 09:39 AM
Pavlo, excellent information, although if we keep in mind that, if I understand the issue correctly, inserting the router reduces performance, I'm unsure that adding a shaper, assuming one isn't already configured, would improve performance versus the decrease seen when the router was added. Does this make sense?
As I asked in one of my prior posts, we need more information because I think we're trying to ascertain why addition of the router is reducing performance.
Something like a shaper could indeed enhance performance over no router or router with basic configuration, but again, the mystery is why a basic router configuration is reducing performance vs. no router.
07-20-2009 10:45 PM
07-21-2009 03:27 AM
More so than the "ignored" errors on fastE 0/1; of concern are the many "output errors" and "late collisions". Duplex mismatch? Bad or wrong-spec cable?
PS:
NB: "Number of late collisions. Late collision happens when a collision occurs after transmitting the preamble. The most common cause of late collisions is that your Ethernet cable segments are too long for the speed at which you are transmitting. "
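The cable-length explanation follows from Ethernet's 512-bit slot time: a collision must be detected before the sender finishes the first 64 bytes, so the round-trip propagation across the segment has to fit inside that window. A rough back-of-the-envelope check, assuming a typical ~2×10^8 m/s signal speed in copper and ignoring repeater/PHY delays (both assumptions, not spec values):

```python
SLOT_BITS = 512      # minimum frame size on the wire (64 bytes)
PROP_SPEED = 2.0e8   # approx. signal speed in copper, m/s (assumption)

for rate_bps in (10e6, 100e6):
    slot_time = SLOT_BITS / rate_bps            # seconds to send 512 bits
    one_way_m = (slot_time * PROP_SPEED) / 2    # ideal one-way segment limit
    print(f"{rate_bps/1e6:.0f} Mbps: slot time {slot_time*1e6:.2f} us, "
          f"ideal one-way limit ~{one_way_m:.0f} m")
```

At 100 Mbps the slot time is only 5.12 µs, ten times tighter than at 10 Mbps, which is why a segment that was fine at 10 Mbps can produce late collisions at 100 Mbps. Real-world limits (100 m for 100BASE-TX) are much shorter still, because transceiver and repeater delays eat most of the budget.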
07-21-2009 06:19 AM
Hi slider,
"Remember how TCP works. If you create a policer for 10Mbps on a 100Mbps interface, one TCP session will only be able to use about 1.5Mbps on average. That's a huge difference. With shaping, one TCP session will be able to use almost all of the link."
Do you mind explaining this in a little more detail?
Chao
Vishwa
07-21-2009 06:13 PM
"I would like to challenge josephdoherty's assertion on this. How did you come up with this assumption?
Challenge away ;)"
I don't think the data that joseph provided is accurate. See below:
Output queue: 0/40 (size/max)
30 second input rate 97738000 bits/sec, 9023 packets/sec
30 second output rate 64403000 bits/sec, 5321 packets/sec
The rest of the output:
c2811#sh int f0/1
FastEthernet0/1 is up, line protocol is up
Hardware is MV96340 Ethernet, address is 001e.7a6d.8149 (bia 001e.7a6d.8149)
Description: LAB_INTERFACE
Internet address is 192.168.15.246/24
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 164/255, rxload 249/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, 100BaseTX/FX
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 00:02:19
Input queue: 5/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 97738000 bits/sec, 9023 packets/sec
30 second output rate 64403000 bits/sec, 5321 packets/sec
1264186 packets input, 1715894352 bytes
Received 579 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog
0 input packets with dribble condition detected
728879 packets output, 1102886611 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out
c2811#
Clearly, the 2811 platform can deliver 100Mbps throughput without any issues. Not sure how you did your testing, but as you can see with mine, performance is pretty good even with some NAT.
07-21-2009 07:10 PM
What data do you believe I've posted that's inaccurate?
I'm guessing you still disagree with my original "really capable" adjectives. If so, again, another way to look at that is "really capable" means there's no question, and from your final remark you believe the 2811 is fully capable of 100 Mbps (i.e. "Clearly, the 2811 platform can deliver 100Mbps throughput without any issues.").

Yet, even with your latest interface stats, you still have not shown 100 Mbps egress! Forwarding performance is traffic that transits the device, not just received on an interface. You're also using large to max size packets; try minimum size packets and see how that behaves. Also, although your new stats don't show the same interface errors the first post did, how do you explain the first post's interface error stats? Lastly, on your latest stats, did you notice:
Input queue: 5/75/0/0 (size/max/drops/flushes)?
If you have FastE in and FastE out, why are any packets being queued? (To me, that's an issue.)
PS:
BTW, I can provide a copy of the performance sheet I was looking at, if requested, but I believe the numbers haven't changed for the 2811 on the latest revision.
As to how I had tested a 2811, I used a utility, pcattcp, and pushed 100 Mbps of UDP through a 2811. I didn't pay much attention to the packet sizes. I did note, receiving interface showed 100 Mbps (like yours), but egress was far short (about half?) and CPU hit 100%. (Purpose was to estimate how much of a full T3 2811 might handle.)
07-21-2009 07:34 PM
"If you have FastE in and FastE out, why are any packets being queued? (To me, that's an issue.)"
I did not clear the counters before running the test in my first post so there were errors and that output is not an accurate one. The 2nd output is an accurate reflection of the test.
Of course, if you use 64-byte packets, then you will not be able to push 100Mbps throughput. I am mainly talking about ftp, scp or Oracle SQL*Net traffic. In that case, from what I see, the router can forward close to 100Mbps throughput without issues. My NMIS system confirms that I am getting close to 100Mbps on the servers in front of and behind the routers. The reason I am seeing packets being queued is that I am pushing and pulling iperf traffic on the same server, and the server behind the router is a Dell OptiPlex P-III 800MHz. The result would have been closer to 100Mbps both ways if the server were a quad-CPU/quad-core with lots of RAM.
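The packet-size point is easy to quantify. On the wire, each Ethernet frame also carries an 8-byte preamble and a 12-byte inter-frame gap, so filling the link with minimum-size frames demands an enormous packets-per-second rate from a software router's CPU. A quick illustrative calculation (standard Ethernet overheads only; no measurements from this thread implied):

```python
OVERHEAD = 8 + 12  # preamble + inter-frame gap, bytes on the wire

def pps_at_line_rate(rate_bps, frame_bytes):
    """Frames per second needed to fill a link at the given frame size."""
    return rate_bps / ((frame_bytes + OVERHEAD) * 8)

for size in (64, 512, 1518):
    print(f"{size:>5}-byte frames on 100 Mbps: "
          f"{pps_at_line_rate(100e6, size):,.0f} pps")
```

At 1518-byte frames, 100 Mbps is only about 8,100 pps, while at 64-byte frames it is nearly 148,800 pps, roughly 18 times the forwarding work for the same bit rate. That is why a router can look "close to 100Mbps" with large ftp/scp transfers yet fall far short with minimum-size packets.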
07-21-2009 08:46 PM
"I did not clear the counters before running the test in my first post so there were errors and that output is not an accurate one. The 2nd output is an accurate reflection of the test. "
Ah, but the queued packets, in your 2nd post, which you note "is an accurate reflection of the test", still indicates a performance issue. Consider, data can't arrive faster than 100 Mbps (or shouldn't, although it might, but we'll assume it doesn't), and if data forwarded at 100 Mbps, there should be no queuing. I.e., packet arrives, packet leaves.
I'm also glad to read your 2nd output is accurate, although I would have liked to have seen the paired ingress/egress interfaces (especially 100 ingress to 100 egress).
"Of course, if you use 64-byte packets, then you will not be able to push 100Mbps throughput. I am mainly talking about ftp, scp or Oracle SQL*Net traffic."
Oh, you're talking about certain types of traffic. I wasn't, which is why I asserted the 2811 "isn't really capable" because, to me, really capable means there are no conditions.
To put this even another way, if a device can not deliver wire-speed/line-rate for all traffic, then I believe the device "isn't really capable" for that bandwidth. This is not to be confused with it might sometimes deliver wire-speed/line-rate but under certain conditions.
My belief is a 2811 is not a wire-speed/line-rate device for 100 Mbps Ethernet especially duplex. Do you disagree?
Now you also write "In that case from what I see, the router can forward close to 100Mbps throughput without issues." "close to"? Did you mean the 64 Mbps shown for egress in your 2nd post for near maximum packet sizes? Or, did you mean what your NMIS is reporting?
"The reason I am seeing packets being queued is that I am pushing and pulling iperf traffic on the same server, and the server behind the router is a Dell OptiPlex P-III 800MHz. The result would have been closer to 100Mbps both ways if the server were a quad-CPU/quad-core with lots of RAM."
So you're thinking a more powerful host would create less queuing on the router by pushing a full 100 Mbps each way? Is this a fact or assumption? (I would assume the opposite.)