Hi, what is the theoretical limit on an interface vs. the actual practical limit? For example, if the actual link is a 100 Mb Internet circuit but the physical interface is a gigabit interface on a switch configured with speed 100, is it possible to go above 100 Mb? I have always been under the assumption that the speed setting on the interface controls the actual theoretical traffic limit as well as the practical maximum. If there is more traffic than 100 Mb (the speed of the interface), then it will be buffered, and then dropped if there are no buffers, but the interface will never send out more than the configured speed setting. Is this correct? Thanks
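The "buffer, then drop" behavior described above can be sketched as a simple simulation (a hypothetical model, not any vendor's actual queueing implementation): the interface drains a FIFO at a fixed line rate, excess arrivals queue up, and once the buffer is full, further packets are tail-dropped.

```python
from collections import deque

LINE_RATE_PPS = 10   # packets the interface can transmit per tick (stand-in for the 100 Mb rate)
BUFFER_SIZE = 20     # maximum queued packets before tail drop (assumed value)

def simulate(arrivals_per_tick, ticks):
    """Return (sent, dropped) after running the queue for `ticks` ticks."""
    queue = deque()
    sent = dropped = 0
    for _ in range(ticks):
        for pkt in range(arrivals_per_tick):
            if len(queue) < BUFFER_SIZE:
                queue.append(pkt)       # room in the buffer: enqueue
            else:
                dropped += 1            # buffer full: tail drop
        for _ in range(min(LINE_RATE_PPS, len(queue))):
            queue.popleft()             # drain at the configured line rate
            sent += 1
    return sent, dropped

# Offered load of 15 pkts/tick against a 10 pkt/tick line rate: the first
# couple of ticks absorb the excess into the buffer, then drops begin.
sent, dropped = simulate(arrivals_per_tick=15, ticks=10)
print(sent, dropped)  # → 100 40
```

Note that `sent` can never exceed `LINE_RATE_PPS * ticks`, which is exactly the asker's intuition: the configured speed is a hard ceiling on egress.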
If we're discussing Ethernet, a multiple-speed interface only runs at one speed at a time. So, for a 10/100/1000 interface connected to a 10/100 interface on the other side of the link, your actual speed would be either 10 or 100.
For Ethernet, the speed can be fixed for the port (i.e. there are 10-, 100-, and gig-only ports), auto-negotiated (on multi-speed-capable interfaces), or manually configured (on multi-speed-capable interfaces), e.g. "speed 100".
When working with WAN Ethernet hand-offs, such as perhaps your 100 Mbps Internet circuit, it's quite common for the WAN provider to physically send/receive at the full physical bandwidth but measure actual bandwidth utilization. Utilization beyond the agreed rate might be dropped or shaped. I.e. a 100 Mbps hand-off might only offer, say, 25 Mbps.
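That kind of carrier-side enforcement is typically done with a token bucket. Here's a minimal sketch (assumed parameter values; real provider policers, e.g. the RFC 2697 single-rate marker, add a second bucket and coloring): tokens refill at the contracted rate, and frames arriving when the bucket is empty are policed (dropped) even though the port itself runs at 100 Mbps.

```python
CIR_BPS = 25_000_000            # contracted rate, 25 Mbps (assumed contract)
LINE_RATE_BPS = 100_000_000     # physical hand-off rate, 100 Mbps
BUCKET_DEPTH = 1_500 * 8 * 10   # burst allowance: ~10 full-size frames, in bits (assumed)

class TokenBucketPolicer:
    """Single-rate policer: refill tokens at the CIR, drop out-of-contract frames."""
    def __init__(self, rate_bps, depth_bits):
        self.rate = rate_bps
        self.depth = depth_bits
        self.tokens = depth_bits    # start with a full bucket
        self.last_us = 0

    def allow(self, now_us, packet_bits):
        # Refill for the elapsed microseconds, capped at the bucket depth.
        elapsed = now_us - self.last_us
        self.tokens = min(self.depth, self.tokens + elapsed * self.rate // 1_000_000)
        self.last_us = now_us
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True             # conforms to the contract: forward
        return False                # exceeds the contract: police (drop)

# Offer one second of back-to-back 1500-byte frames at the 100 Mbps line rate.
frame_bits = 1_500 * 8
interval_us = frame_bits * 1_000_000 // LINE_RATE_BPS   # 120 µs per frame
policer = TokenBucketPolicer(CIR_BPS, BUCKET_DEPTH)
offered = 1_000_000 // interval_us                      # 8333 frames in one second
forwarded = sum(policer.allow(i * interval_us, frame_bits) for i in range(offered))
print(offered, forwarded)   # roughly a quarter of the offered frames conform
```

The port happily clocks frames out at 100 Mbps; the contract is enforced by discarding everything above the refill rate, which is why the hand-off "offers" far less than the interface speed suggests.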
This is different from some serial interfaces, which either use channels or blocks of bandwidth (e.g. DS0 channels of a DS1) or physically "clock" a certain bandwidth rate.
Neither of these bandwidths should be confused with the bandwidth statement often seen on interfaces, which provides a logical bandwidth value to other device functions (e.g. routing-protocol metrics or QoS percentages) without changing the physical rate.
Oh, and yes, you can't send/receive faster than the physical bandwidth provides.
If you actually configured the two ends of an Ethernet link at different physical speeds, e.g. one side at 100 and the other at gig, they would be unable to communicate at all (I believe).