I have a 2620 router, with default flash and DRAM. Very simple config with only a WIC-1T serial card. This router serves as an Internet access edge router, and is directly connected to the Internet edge router through a point-to-point 512Kbps leased circuit.
There is a CheckPoint 4.1 firewall between this router and the LAN. My LAN user population is about 200 users. Through this connection, I maintain about 5 site-to-site VPNs (besides the typical Internet surfing) through the CheckPoint.
Now the problem. Lately, my user utilization has consistently peaked at 98% of my subscribed bandwidth. Naturally, round-trip delay (RTD) suffers drastically, at times giving me RTDs of about 1500 ms to 2400 ms. Even when I ping/traceroute from the router itself, the numbers don't get any better.
So I asked my engineers to isolate the problem. They disconnected the LAN from the edge router and ran several throughput tests. It seems the RTD, even at 98% utilization, didn't go past 500 ms (for international routes). Of course, when the engineers ran the test, only one notebook was connected to my router.
Here are my questions:
1. Is there a drastic difference in router performance between one user maxing out the utilization and 200 users maxing out the utilization?
2. If yes, which area of the router drags down the throughput? The CPU? RAM?
The performance of the router depends on the number of frames it is passing, regardless of whether these originate from one user or multiple users, although one user typically cannot generate enough traffic to saturate a link in both directions. The exception to this is when the router itself is terminating the IP connection, e.g. when actually terminating the VPN connection.
Both CPU and memory are possible bottlenecks. A 'show proc cpu' will give CPU utilisation; typically, performance will start to degrade as CPU utilisation exceeds 50%. A 'sh mem'/'sh buffers' can be used to monitor memory utilisation.
There is a big performance difference (from a user/session standpoint) between 1 connection saturating a link and many connections saturating a link, though it's more of a protocol/queuing issue than a router issue.
The easiest way I can think of to explain this is as follows: All things being equal, if there are two users competing for a link's bandwidth, each user will get 50% of the bandwidth. If there are 200 users competing for this bandwidth, however, each user will get only 0.5% of the bandwidth. As the number of users (or more accurately, the number of sessions) competing for a given link's bandwidth increases, the amount of bandwidth each user/session will get decreases.
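To put numbers on that with the 512 kbps link from the original post, here's a quick back-of-the-envelope sketch (it assumes perfectly fair sharing, which real TCP flows only approximate):

```python
# Back-of-the-envelope fair-share calculation for a saturated link.
# Assumes ideal, perfectly equal sharing -- real flows only approximate this.

LINK_KBPS = 512  # subscribed bandwidth from the original post

def per_user_share(users: int) -> float:
    """Bandwidth (kbps) each user gets if the link is shared equally."""
    return LINK_KBPS / users

print(per_user_share(2))    # 2 users   -> 256.0 kbps each (50% of the link)
print(per_user_share(200))  # 200 users -> 2.56 kbps each  (0.5% of the link)
```

At 2.56 kbps per user, even a modest web page takes a long time to load, which is why the experience degrades so sharply even though the router itself may be fine.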
This doesn't directly address latency, but there tends to be a correlation depending on various things such as the type of queuing going on at each end of the link, packet sizes, and so forth. As rsissons alluded to, RTD will tend to be greater when a link is saturated in both directions relative to when a link is saturated in one direction. In the former case, both packets (the outbound packet and the return packet) are likely to get queued for a longer period of time relative to an unsaturated link. In the latter case, only one packet is likely to experience a queuing delay because the link is only saturated in one direction.
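A rough illustration of that queuing effect, using the 512 kbps link from this thread (the queue depth and base RTT here are assumptions picked purely for illustration, not measurements):

```python
# Rough model of why bidirectional saturation roughly doubles queuing delay.
# Link speed is from the thread; queue depth and base RTT are assumed numbers.

LINK_BPS = 512_000        # 512 kbps leased circuit
PACKET_BITS = 1500 * 8    # a full-size 1500-byte packet

def queuing_delay_ms(queued_packets: int) -> float:
    """Time (ms) a new packet waits behind `queued_packets` full-size packets."""
    return queued_packets * PACKET_BITS / LINK_BPS * 1000

base_rtt_ms = 300         # assumed unloaded international round trip

# Saturated in ONE direction: only the outbound packet sits in a deep queue.
one_way = base_rtt_ms + queuing_delay_ms(40)       # roughly 1.2 s
# Saturated in BOTH directions: outbound AND return packets each wait.
two_way = base_rtt_ms + 2 * queuing_delay_ms(40)   # roughly 2.2 s

print(round(one_way), round(two_way))
```

With a 40-packet queue each full-size packet adds about 23 ms on a 512 kbps link, so bidirectional saturation lands right in the 1500-2400 ms range the original poster is seeing.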
Hi guys, thanks for the suggestions. That makes a lot of sense.
I suspect that the large number of sessions created does eat up memory heap and CPU cycles, bandwidth utilization notwithstanding.
One question. Let's say there is a broadcast storm from a rogue application within the LAN, with countless broadcast packets perpetually hitting the router's Ethernet interface. How would this affect the router's CPU utilization?
I think it's safe to say that the number of sessions may or may not have an effect on router CPU/memory usage, depending on what the router is actually doing. Access lists are the first thing that comes to mind, especially ones that keep state information such as reflexive lists or CBAC.
But for pure layer-3 routing, I'm having difficulty coming up with reasons why CPU/memory usage on the router would correlate with the number of TCP/UDP sessions. The router is only looking at the destination IP address of each packet, so technically it has no knowledge of layer-4 sessions. If the router is using fast switching, one could make the argument that multiple sessions will result in more cache misses than a single session, but I suspect you'd have to have a lot of packets going to a lot of different destinations for this to make a significant difference.
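That cache-miss argument can be sketched with a toy model (this is purely illustrative Python, not IOS internals -- the idea is just that the first packet to each new destination is process-switched to populate the route cache, and later packets to the same destination hit the cache):

```python
# Toy model of a fast-switching route cache. Purely illustrative.
# A "miss" stands in for a process-switched packet that populates the cache.

def cache_misses(destinations: list) -> int:
    """Count cache misses for a stream of destination IPs."""
    cache = set()
    misses = 0
    for dst in destinations:
        if dst not in cache:
            misses += 1      # first packet to this destination: process-switched
            cache.add(dst)   # cache entry installed; later packets are fast-switched
    return misses

# One session: 1000 packets to a single destination -> 1 miss.
print(cache_misses(["10.0.0.1"] * 1000))
# 200 sessions to 200 different destinations, 5 packets each -> 200 misses.
print(cache_misses([f"10.0.{i}.1" for i in range(200)] * 5))
```

Even so, 200 misses is tiny relative to the packet rate on a saturated link, which supports the point that you'd need traffic spread across a very large number of destinations before cache misses matter.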
As for broadcasts, they'll affect the CPU of every device they reach to some extent, but you'd probably have to try it on the router in question to quantify it -- different devices have different CPUs.