A customer asked me a simple question: what is the point of upgrading desktop connectivity to Gigabit Ethernet when, in practice, an FTP or other file transfer never reaches GE throughput? In his experience, the maximum is around 400 Mbps (40% utilization).
He would like to know what causes the bottleneck: the application transfer rate, OS limitations, or the switchport architecture itself.
Can anyone provide more information or an explanation?
There are many possible causes for the bottleneck. It is improbable that the switchport or the switch fabric itself is the limiting factor. In a desktop environment, the bottleneck is usually the hard drive, the bus, or the OS.
400 Mbps translates roughly to 50 MBps, which is a respectable throughput for a common hard drive today. When I run a simple block-read test on the hard drive in my desktop, I get the following results:
handel:~# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   13152 MB in  2.00 seconds = 6591.16 MB/sec
 Timing buffered disk reads:  328 MB in  3.00 seconds = 109.29 MB/sec
The interesting number is on the second line - the buffered disk read. The drive is a WDC WD6400AAKS-22A7B0 (SATA-II), so the speed is quite good. With a SATA I or EIDE drive, however, the speeds would be significantly slower, approaching the 50 MBps you observed yourself, or even lower.
Another issue is the bus. An ordinary 32-bit, 33 MHz PCI bus is theoretically capable of up to 133 MBps, which is only slightly over 1 Gbps. Because many devices compete for this shared bus, the throughput attainable by any single device is lower still. Putting a Gigabit Ethernet adapter on an ordinary PCI bus therefore strains the bus to its limits and may itself prove to be the limiting factor.
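The bus arithmetic above is easy to verify with a quick back-of-the-envelope calculation (a sketch only; a real PCI bus loses further capacity to arbitration and protocol overhead):

```python
# Theoretical peak of an ordinary 32-bit, 33 MHz PCI bus.
bus_width_bits = 32
clock_hz = 33e6  # 33 MHz

peak_bits_per_sec = bus_width_bits * clock_hz        # one transfer per clock
peak_mbytes_per_sec = peak_bits_per_sec / 8 / 1e6    # ~132 MBps
peak_gbits_per_sec = peak_bits_per_sec / 1e9         # ~1.06 Gbps

print(f"PCI peak: {peak_mbytes_per_sec:.0f} MBps = {peak_gbits_per_sec:.2f} Gbps")
```

So even in theory the bus barely exceeds what a single gigabit NIC can demand, before the disk, the other devices, and bus overhead are accounted for.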
Yet another limiting factor may be the OS itself. I was surprised to learn that Windows Vista and newer deliberately throttle the throughput of network adapters in favor of realtime applications. Read more about it here:
Ideally, if performance testing is to be done, I suggest using a protocol with minimal overhead (a plain FTP or HTTP download, not SMB/CIFS/NFS file sharing), or better yet a traffic generator running over UDP (since UDP performs no flow control). First test the file transfer over a direct PC-to-PC connection to see whether the speed approaches the link maximum, and only then proceed to testing the network infrastructure itself.
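To illustrate the UDP approach, here is a minimal sketch in Python that blasts UDP packets over loopback and counts what the receiver actually gets (the packet size, duration, and loopback address are all illustrative; for real testing a dedicated generator such as iperf is preferable):

```python
import socket
import threading
import time

PKT = 1400        # payload size, kept below a typical Ethernet MTU
DURATION = 0.5    # seconds to transmit (illustrative)

received = {"bytes": 0}

def receiver(sock, stop):
    """Count bytes arriving on the socket until asked to stop."""
    sock.settimeout(0.2)
    while not stop.is_set():
        try:
            data, _ = sock.recvfrom(PKT)
            received["bytes"] += len(data)
        except socket.timeout:
            pass

# Receiver socket on loopback; the OS picks a free port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()

stop = threading.Event()
t = threading.Thread(target=receiver, args=(rx, stop))
t.start()

# Sender: transmit as fast as possible, with no flow control.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"\x00" * PKT
sent = 0
end = time.time() + DURATION
while time.time() < end:
    tx.sendto(payload, addr)
    sent += PKT

time.sleep(0.3)   # let the receiver drain its buffer
stop.set()
t.join()

mbps = received["bytes"] * 8 / DURATION / 1e6
print(f"sent {sent} bytes, received {received['bytes']} bytes, ~{mbps:.0f} Mbps")
```

Note that because UDP does not retransmit, the gap between bytes sent and bytes received directly shows where packets were dropped - exactly the kind of signal you want when hunting for a bottleneck.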