Bandwidth command in interface mode

arpitdesai12
Level 1

Hello

I am reading Todd Lammle's CCNA study guide. In it he says that the bandwidth command used under interface config mode has nothing to do with how fast data is transferred - that it is only used by routing protocols like OSPF to compute their metrics. But while explaining OSPF he says bandwidth is used as the cost to find the fastest route. Now what is this about? Bandwidth is obviously going to affect data speed. Could you please make things clear for me?

12 Replies

Peter Paluch
Cisco Employee

Hello Arpit,

The point of what Todd Lammle says is that regardless of what value the bandwidth command is set to, the interface will not change its transmission speed. For example, imagine you configured bandwidth 1234 on a FastEthernet interface. Is it going to change its speed somehow? Not at all. As you know, FastEthernet interfaces support only two data transmission speeds: 10Mbps (Ethernet) and 100Mbps (FastEthernet). The data link speed will be negotiated with the connected device and the bandwidth command will not have any influence on this speed whatsoever. The same goes for any other types of interfaces - Serial, DSL, Wireless, Tunnel, etc. Many of these interfaces do support changing their true operational speed but that strongly depends on the actual physical layer and the commands to change the link speed are diverse; I am not going to list them here. Once again, the bandwidth command will not influence in any way how fast a particular interface will send or receive data.

You are correct in your observation that both OSPF and EIGRP take the value of the bandwidth into account when computing their metrics. Both EIGRP and OSPF then try to prefer routes that are, according to the bandwidth command setting, faster. Hence, the bandwidth setting influences the choices made by OSPF and EIGRP - and surely, if there are multiple different paths to a destination, the bandwidth setting has a profound effect on which path will be used and therefore loaded with data. But the true data link speed is never changed.
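Just to make this concrete - a minimal sketch (the interface name and value are only examples, not taken from anyone's real configuration):

interface FastEthernet0/0
 ! the negotiated speed remains 10 or 100 Mbps no matter what follows
 bandwidth 1234
 ! OSPF and EIGRP now compute their metrics from 1234 kbps

! show interfaces FastEthernet0/0        - still reports the negotiated line speed
! show ip ospf interface FastEthernet0/0 - shows the cost derived from the configured bandwidth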

Please feel welcome to ask further!

Best regards,

Peter

Just to add to Peter's posting: in some of the later IOS versions, the bandwidth setting can define the allowance for a percentage-based policer or shaper. In those cases, the policer or shaper may restrict the physical interface from sending at its maximum performance.
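To illustrate (a hedged sketch - the policy and interface names are invented for the example, and behaviour varies by IOS release), a percent-based shaper derives its target rate from the configured bandwidth, so here the bandwidth value really does cap how fast the interface sends:

policy-map SHAPE-TO-HALF
 class class-default
  ! shape to 50% of the interface's configured bandwidth
  shape average percent 50
!
interface Serial0/0
 ! 512 kbps configured bandwidth -> the shaper above targets roughly 256 kbps
 bandwidth 512
 service-policy output SHAPE-TO-HALF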

Hello Joseph,

Yes, you are quite right, and I was aware of that - in fact, to the best of my knowledge, the bandwidth setting has always declared the maximum allowance for a shaper/policer (we also have to consider the max-reserved-bandwidth here). In addition, the value of bandwidth may influence the size of the Tx-ring of the interface (IOS likes to adapt the Tx-ring size according to the bandwidth setting).

Nevertheless, all these issues are far beyond CCNA scope and I did not want to confuse Arpit with too much detail here. The bottom line remains - as intuitive as the bandwidth keyword may sound, it has indeed nothing to do with the speed of data sent and received on a particular interface.

Best regards,

Peter

P.S.: Joseph, I've sent you a private message again - three weeks ago or so

Peter, I agree we don't want to confuse Arpit with too much detail, but on the other hand, I didn't want him to believe the bandwidth statement never (ever) impacts actual transmission bandwidth.

Hi Joseph,

I just had an interesting discussion with our provider, who said:

"When there is a shaper configured on the interface (means shaping to some bandwidth value, not by percent probably), the "bandwidth remaining ..." commands used in Class-map configurations are ignoring the bandwidth value configured on the interface but are using the shaped bandwidth value for their calculations."

Pretty confusing, isn't it?
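Purely as an illustration of the scenario the provider described (names and numbers are invented, and the exact behaviour may differ by platform and IOS release), the discussion is about a hierarchy like this, where the child "bandwidth remaining percent" shares would be computed from the parent's shaped rate rather than from the interface bandwidth value:

policy-map CHILD
 class VOICE
  priority 100
 class DATA
  ! share of what remains after priority traffic, taken from the shaped rate
  bandwidth remaining percent 60
 class class-default
  bandwidth remaining percent 40
!
policy-map PARENT
 class class-default
  ! the 10 Mbps shaped rate, not the interface bandwidth, becomes the reference
  shape average 10000000
  service-policy CHILD
!
interface GigabitEthernet0/1
 bandwidth 1000000
 service-policy output PARENT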

@Peter:

I know this is out of CCNA scope, but quite interesting, isn't it?

BR,

Milan

Hi Milan,

I know this is out of CCNA scope, but quite interesting, isn't it?

Surely it is - for us! But the point here is to elucidate the key points for Arpit, and so far, I am afraid we're blurring them with unnecessary details.

Best regards,

Peter

Okay, I've got it. The bandwidth command will not change the speed of the link; it is used by routing protocols such as OSPF and EIGRP. But one more thing that confuses me is the difference between MTU and bandwidth.

Hi Arpit,

As you are aware, the data that an application such as email or a database query produces is divided into segments in order to be encapsulated at the lower layers. At layer three, for example, we add the IP header; at layer two, the Ethernet header; and so forth. The MTU (Maximum Transmission Unit) is the maximum size that a datagram can have while crossing the infrastructure. There are mechanisms to discover the optimum MTU size, such as:

http://www.cisco.com/en/US/docs/ios/12_2/ip/configuration/guide/1cfip.html#wp1001001

and there are settings you can adjust in global and interface configuration mode. Although a bit older, I do like the following document:

http://www.cisco.com/en/US/tech/tk870/tk877/tk880/technologies_tech_note09186a008011a218.shtml

because it gives a better idea of how the size of a datagram can impact data transmission.
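As a quick, hedged illustration (the address and sizes are only examples), you can probe the usable MTU yourself with non-fragmentable pings, and adjust the interface MTU where the hardware allows it:

! probe the path with the DF bit set - the largest size that succeeds approximates the path MTU
ping 192.0.2.1 size 1500 df-bit
ping 192.0.2.1 size 1400 df-bit

interface GigabitEthernet0/1
 ! layer-2 MTU (hardware permitting) and the IP MTU used for fragmentation decisions
 mtu 1500
 ip mtu 1400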

Hope this helps

Alessio

Hi Arpit,

Your understanding is correct, except where you say "fastest" route. OSPF, like other routing protocols, finds the "best" route, NOT necessarily the fastest. Each protocol has a different metric and a different way to find the best route:

RIP uses hop count

EIGRP uses a complex metric

OSPF uses link cost

etc..

Now, the best route can sometimes coincide with the fastest one, but very often many other factors determine the route that will be installed in the routing table. Consider, for example, loop-free routing protocols: maybe the fastest route is not loop-free.
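To make the OSPF line above concrete (the figures below are the usual IOS defaults, given here only as an illustration): OSPF derives a link's cost from the configured bandwidth, not from any measured speed:

! cost = reference bandwidth / interface bandwidth (both in bps), default reference 100 Mbps
!   FastEthernet, bandwidth 100000 kbps -> 100,000,000 / 100,000,000 = 1
!   T1 serial,    bandwidth 1544 kbps   -> 100,000,000 / 1,544,000  ~= 64
!
router ospf 1
 ! raise the reference to 10 Gbps so faster links get distinct costs
 auto-cost reference-bandwidth 10000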

Hope this helps

Alessio

Hello Alessio,

maybe the fastest route is not loop-free..

Can you provide an example for this? I have a feeling that this is not possible but so far I do not have a counter-proof. I'm working on it.

Best regards,

Peter

Hi Peter,

if you define "fastest" a route , this means that the time to reach your destination is, numerically speaking, inferior to other paths(less milli or microsecons). This can be for example a path with all XEN interfaces(quicker transceivers etc..) on the way or a more performing media(optical rather than copper). Stating that the speed of a link/path is a sufficient condition because no loop occurs is quite reductive when we think about all the conditions that can occur at MAC and IP layer for a loop to exist. Indeed, no metric is exclusively referred only to the time  required by a packet in oredr  to cross the infrastructure  from a router A to a router B.

You possibly meant something else that i did not get.

Keep repeating that the best route can be/is loop free, not the fastest.

Hope this is clearer

Alessio

Alessio,

Thank you for your response but it is not entirely what I meant.

I am thinking in terms of graph theory, which is the principal foundation for all routing protocols, especially link-state protocols. Let's have a network represented by a connected graph, with nodes representing routers and edges representing the interconnections between them. Each edge is assigned a cost relative to its bandwidth (it may be the bandwidth itself, or it may be the inverse of it - the higher the bandwidth, the smaller the cost - depending on what you choose).

Now, you propose that the fastest path may not always be loop-free. From a theoretical point of view, this is debatable. First, a "path" in terms of graph theory is defined as an alternating sequence of nodes and edges in which no node appears twice or more - what we are talking about here should properly be called a walk. Second, what is your definition of "fastest" or "fast" in this graph? Is it the minimum bandwidth along the walk towards the destination, similar to EIGRP? Or is it the sum of link costs, similar to OSPF?

Best regards,

Peter
