Cisco Support Community

New Member

Re: GE Throughput for Desktop NOT Achievable

Hi all,

A customer posed me a simple question: what is the point of upgrading desktop connectivity to GE speed when, in actual fact, no FTP or file transfer ever reaches GE throughput? In my experience the maximum throughput is around 400 Mbps (40% utilization).

He would like to know what causes the bottleneck. Is it the application transfer rate, OS limitations, or the switchport architecture itself?

Can anyone provide more information or explanation?

Thanks in advance


Cisco Employee

Re: GE Throughput for Desktop NOT Achievable


There are many possible causes for the bottleneck. It is improbable that the switchport or the switch fabric itself is the limiting factor. In a desktop environment, the bottleneck is usually the hard drive, the internal bus, or the OS itself.

400 Mbps translates roughly to 50 MBps, which is a fine throughput for a common hard drive today. When I perform a simple block-read test on the hard drive in my desktop, I get the following results:

handel:~# hdparm -tT /dev/sda

Timing cached reads:   13152 MB in  2.00 seconds = 6591.16 MB/sec
Timing buffered disk reads: 328 MB in  3.00 seconds = 109.29 MB/sec

The interesting number is on the second line - the buffered disk read. The drive is a WDC WD6400AAKS-22A7B0 (SATA-II), so the speed is quite nice. However, with SATA I or EIDE drives, the speeds would be significantly lower, nearing the 50 MBps you have observed yourself, or even less.
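As a quick sanity check (my own arithmetic, not a measurement): hdparm reports in megabytes per second while link speeds are quoted in megabits per second, so even this SATA-II drive only just approaches what a gigabit link can carry.

```python
# Convert the buffered disk read above (MBps) into Mbps and compare
# it to gigabit Ethernet line rate. Pure arithmetic, no measurement.
disk_mbytes_per_s = 109.29          # hdparm buffered disk read result
link_mbits_per_s = 1000             # gigabit Ethernet line rate

disk_mbits_per_s = disk_mbytes_per_s * 8
print("disk sustains %.0f Mbps of a %d Mbps link"
      % (disk_mbits_per_s, link_mbits_per_s))
# -> disk sustains 874 Mbps of a 1000 Mbps link
```

So even a fairly fast single drive cannot quite saturate gigabit Ethernet with sequential reads, let alone with random I/O.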

Another issue is the bus. Ordinary 32-bit PCI at 33 MHz theoretically tops out at 133 MBps, which is only slightly over 1 Gbps. However, since many devices compete for this bus, the maximum attainable throughput for a single device is limited further still. Putting a gigabit Ethernet adapter on an ordinary PCI bus therefore strains the bus to its limits and may itself prove to be the limiting factor.
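The PCI figures work out as follows (a back-of-the-envelope check; the theoretical peak assumes an otherwise idle bus and the nominal 33.33 MHz clock):

```python
# Theoretical peak of a conventional 32-bit/33 MHz PCI bus:
# width in bytes times clock rate.
bus_width_bits = 32
clock_hz = 33.33e6                  # nominal PCI clock

bytes_per_sec = (bus_width_bits / 8) * clock_hz
print("PCI 32-bit/33 MHz: %.0f MBps = %.2f Gbps"
      % (bytes_per_sec / 1e6, bytes_per_sec * 8 / 1e9))
# -> PCI 32-bit/33 MHz: 133 MBps = 1.07 Gbps
```

A gigabit NIC at full duplex could in principle need close to 2 Gbps of bus bandwidth, so the shared 1.07 Gbps ceiling is easily reached in practice.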

Yet another limiting factor may be the OS itself. I was actually surprised to learn that Windows Vista and newer deliberately throttle the throughput of network adapters in favor of realtime multimedia applications.

Ideally, if performance testing is to be done, I suggest using a protocol with minimal overhead (such as a plain FTP or HTTP download, not SMB/CIFS/NFS file sharing), or better yet a traffic generator on top of UDP (as UDP performs no flow control). First, test the file transfer on a direct PC-to-PC connection to see whether the speed approaches the link maximum; only then proceed to testing the network infrastructure itself.
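To illustrate the UDP idea, here is a minimal Python sketch of a sender/receiver pair. It runs over loopback in a single process purely for illustration (the port number, payload size, and duration are arbitrary assumptions); a real test would run the two halves on two hosts, or simply use a dedicated tool such as iperf.

```python
# Minimal UDP throughput sketch: blast datagrams at a receiver for a
# fixed interval and compare bytes sent vs. bytes received.
import socket
import threading
import time

PORT = 45001            # arbitrary free port (assumption)
PAYLOAD = b"x" * 1400   # stay under a typical 1500-byte MTU
DURATION = 0.5          # seconds of transmission

received = 0

def receiver(sock, stop):
    """Count every byte arriving on the socket until told to stop."""
    global received
    sock.settimeout(0.2)
    while not stop.is_set():
        try:
            data, _ = sock.recvfrom(2048)
            received += len(data)
        except socket.timeout:
            pass

rsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rsock.bind(("127.0.0.1", PORT))
stop = threading.Event()
worker = threading.Thread(target=receiver, args=(rsock, stop))
worker.start()

ssock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = 0
deadline = time.time() + DURATION
while time.time() < deadline:
    ssock.sendto(PAYLOAD, ("127.0.0.1", PORT))
    sent += len(PAYLOAD)

time.sleep(0.5)         # let the receiver drain its buffer
stop.set()
worker.join()
rsock.close()
ssock.close()

print("sent %.1f MBps, received %.1f MBps"
      % (sent / DURATION / 1e6, received / DURATION / 1e6))
```

The gap between "sent" and "received" also shows why UDP is useful here: any loss is visible immediately instead of being hidden by TCP retransmissions.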

Just my two cents on this...

Best regards,

