We have a small network of routers and switches spanning 3 distinct sites. The Cisco devices consist of 1841 and 2621 routers and 2950 switches.
The sites are interconnected by 2 telco LAN Extension services rated at 100 Mbit/s, so the path is 100 Mbit/s interfaces and technologies end to end.
It has been noticed that there is a consistent drop in throughput depending on which network device is traversed. There is no other traffic present yet.
e.g. An 1841 router with 2 FastEthernet ports gives a throughput of 85 Mbit/s from 1 subnet to another. The 2 network devices were baselined on the same network segment at 92 Mbit/s, which shows a 7 Mbit/s drop through a router with a basic configuration, no routing protocols running, and no other traffic flowing. The links between the Cisco router and the 2950 switches are configured as 100 Mbit/s full duplex (no errors showing).
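For context on that 92 Mbit/s baseline, here is a quick back-of-the-envelope calculation (mine, not part of the original tests) of the theoretical TCP goodput on 100 Mbit/s Ethernet, assuming standard 1500-byte frames and no TCP options:

```python
# Theoretical TCP goodput over 100BASE-TX with standard 1500-byte MTU frames.
LINE_RATE = 100e6               # bits/s on the wire
MTU = 1500                      # IP packet size in bytes
ETH_OVERHEAD = 8 + 14 + 4 + 12  # preamble/SFD + Ethernet header + FCS + inter-frame gap
IP_TCP_HEADERS = 20 + 20        # IPv4 + TCP headers, no options

wire_bytes = MTU + ETH_OVERHEAD       # 1538 bytes occupy the wire per frame
payload_bytes = MTU - IP_TCP_HEADERS  # 1460 bytes of TCP payload per frame

goodput = LINE_RATE * payload_bytes / wire_bytes
print(f"Max TCP goodput: {goodput / 1e6:.1f} Mbit/s")  # ~94.9 Mbit/s
```

So a back-to-back baseline of 92 Mbit/s is already close to the practical ceiling of roughly 94.9 Mbit/s, which makes the per-hop drops the interesting part.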
I see similar drops in throughput when 2950 switches are connected at each end of the telco LAN Extension: an 8 Mbit/s drop.
The most dramatic drop in throughput occurs when a 2621 router is connected to the switch at each end of the LAN Extension: throughput falls by 34 Mbit/s. Again the routers have a basic config; EIGRP was running for 1 test and was replaced by static routes for a retest, with no real difference in the throughput drop.
Am I over-optimistic to think the throughputs should be close to the maximum these devices are designed for, especially when they are not subjected to normal user traffic?
The performance tool used is netperf, which seems to give reasonably accurate results.
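For anyone wanting to reproduce the measurements, the tests were of this general shape (the address and duration below are placeholders of mine, not the actual test parameters; `netserver` must be running on the remote host):

```shell
# 30-second TCP bulk-transfer test to the remote netserver at 10.0.0.2
# (hypothetical address). Reports throughput in 10^6 bits/s.
netperf -H 10.0.0.2 -t TCP_STREAM -l 30
```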