BW and Delay for Ethernet Subinterfaces

ronbuchalski
Level 1

We connect to our Metro Ethernet provider via multiple Gigabit Ethernet connections. On each connection we have configured 802.1Q as an encapsulation, and have created multiple subinterfaces, defined by 802.1Q tags, and having /30 IP subnets assigned to them. (Essentially, it's the Ethernet equivalent of a channel bank!)

My question: Each subinterface has been configured with a bandwidth statement that corresponds to the bandwidth provided by the ME provider (mostly 10M, but there are some 5M, 20M, one 30M and one 100M).

However, the delay parameter remains the default for the interface (10usec), and is the same for all subinterfaces.
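
For reference, each subinterface looks roughly like this (the VLAN tag, addressing, and rate shown here are placeholders rather than our real values):

interface GigabitEthernet0/1.100
 description Metro Ethernet circuit to remote site (example)
 encapsulation dot1Q 100
 ip address 192.0.2.1 255.255.255.252
 bandwidth 10000
 ! delay is left at the inherited default of 10 usec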

Should the delay be set, per interface, to a lower value, corresponding with the lower speed of the actual bandwidth available per subinterface?

By the way, I have searched extensively and have yet to find how IOS determines the value of delay for an interface. Does anyone know?

16 Replies

Giuseppe Larosa
Hall of Fame

Hello Ron,

You should be fine with the delay value inherited from the main interface, even if you use EIGRP as your routing protocol.

>> Should the delay be set, per interface, to a lower value, corresponding with the lower speed of the actual bandwidth available per subinterface?

Actually, if you tune it at all, you would more likely need to increase the delay, not lower it.

The delay should be related to the physical speed of the interface: the faster the interface, the lower the delay associated with it. At least, this is what can be seen on Ethernet, FastEthernet, and Gigabit Ethernet interfaces.

But as I wrote above, I wouldn't change the delay value even if you are using EIGRP, unless you have multiple links to the same remote site and want to enforce a primary/secondary path.

Hope to help

Giuseppe

Giuseppe,

Thank you for your response. However, it does not really answer my question. If I connect to a Metro Ethernet provider with a Fast Ethernet port running 100M/Full, but I have purchased 10M of bandwidth, I have been setting the interface bandwidth statement to 10M for EIGRP route calculation purposes. Our router will still send data to the Metro Ethernet provider at 100Mb/s, knowing that the provider limits the path throughput to 10M. So, is the actual bandwidth the physical speed of 100M, or the policed ME bandwidth of 10M?

I still have the outstanding question of how IOS determines the value for delay that it applies to an interface. I have found no documentation that states how the value is calculated.

In fact, the Command Reference says to issue the 'show interface' command to see what value of delay has been assigned, but has nothing to describe how the value is determined.

Regards,

Ron Buchalski

Ron,

In addition to Giuseppe's post, I've posted the link for the delay calculation:

http://www.cisco.com/en/US/tech/tk365/technologies_white_paper09186a0080094cb7.shtml#eigrpmetrics

You can set EIGRP's usage of bandwidth by percent also. (I'm not sure if this is what you're asking.)

On the primary interface, you would set:

ip bandwidth-percent eigrp <as-number> 10 (if you want EIGRP to use at most 10 percent, i.e. 10 Mb of your 100 Mb connection). By default, EIGRP can use up to 50 percent of the configured interface bandwidth when no percentage restriction is set.
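
For example (the 100 below is just a placeholder for your own EIGRP autonomous system number):

interface GigabitEthernet0/1
 ! allow EIGRP to use at most 10 percent of the configured bandwidth
 ! for its own control traffic (hellos, updates, queries)
 ip bandwidth-percent eigrp 100 10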

HTH,

John

HTH, John *** Please rate all useful posts ***

John,

You are talking about something totally different. You are referring to how much bandwidth EIGRP can use in order to send queries and updates to neighbors.

I have read the EIGRP delay metric calculations. I am not asking about EIGRP delay metric calculations.

I repeat...I am NOT asking about EIGRP delay metric calculations.

I am asking how Cisco IOS calculates the delay value that it applies to an interface. There is a delay value applied to the interfaces of any device running Cisco IOS, including switches. This is the value I am trying to understand.

I am very familiar with how EIGRP uses this value along a path to determine the path delay metric.

Regards,

Ron Buchalski

Ron,

I'm just trying to help. I can sense your frustration in your last post, but there really isn't a need for it.

Here's another link, and then I'll bow out:

http://books.google.com/books?id=of76QmGUnAoC&pg=PA375&lpg=PA375&dq=how+is+delay+calculated+on+an+interface+on+cisco+router+interface&source=bl&ots=nbQ5So7YVQ&sig=AvgEvTOpIQOKjWv0Hq-raUWSQCs&hl=en&ei=EVBCSuo1qL-3B4fDiacJ&sa=X&oi=book_result&ct=result...

"I am very familiar with how EIGRP uses this value along a path to determine the path delay metric."

I wasn't questioning your abilities.

John

HTH, John *** Please rate all useful posts ***

John,

Thank you for the link. Unfortunately, it provides the same minimal information that the Command Reference does regarding the delay parameter for an interface. It tells you how to see what the default delay value is, but never explains how IOS determines the default value based on the interface type.

http://www.cisco.com/en/US/docs/ios/interface/command/reference/ir_d1.html#wp1012209

Sorry if I offended you with my response. No offense was intended. I am frustrated by the lack of this information, as I have searched cisco.com, Netpro, other cisco forums on the Internet, as well as several textbooks, and have yet to find the answer.

Ron Buchalski

Hello Ron,

What you wrote moves into the QoS realm:

having bought a subrate link, you need to use traffic shaping to ensure your outbound traffic stays within contract.

Depending on your device, this can be done with shaping alone, or with two-level QoS: a parent policy map (shaper) and a child policy that does LLQ / CBWFQ.

By the way, shaping implies delaying packets that exceed the contract, and queueing also involves holding packets for some time (especially under congestion).

Setting the BW to 10 Mbps is good to have a correct reference value for QoS commands that use the percent options instead of absolute rates.

As a result of using shaping, or a combination of shaping and queueing, the delay experienced by user packets varies and is different from the delay parameter of the interface.

Let me say I don't know how the delay parameter is calculated by IOS, but I hope I've shown that its value should have minimal to no impact on user traffic.

I would focus on the QoS part.
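
As a rough sketch of the two-level approach (the class names, DSCP match, and 10 Mbps rate below are only examples; adapt them to your contract and traffic classes):

class-map match-any VOICE
 match dscp ef
!
policy-map CHILD-QOS
 class VOICE
  priority percent 20
 class class-default
  fair-queue
!
policy-map PARENT-SHAPER
 class class-default
  shape average 10000000
  service-policy CHILD-QOS
!
interface GigabitEthernet0/1.100
 service-policy output PARENT-SHAPER

Depending on the platform and IOS release, the policy may need to be applied on the subinterface or on the main interface.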

Hope to help

Giuseppe

Giuseppe,

Thank you for the info on QoS policy. It was quite an ordeal to set the two-level QoS policy for Metro Ethernet, compared to the standard MQC policy configuration that is done on standard routers using point-to-point T1 or ATM PVCs. And yes, in this case, having the correct BW statement is important for proper QoS configuration and operation.

Getting back to delay calculations, the reason I am trying to understand how IOS calculates an interface delay is because I see discrepancies when using default values. For example, if I have a T1 configured on a DS3 interface, the delay value applied to the T1 channel is the same as the delay value applied to the T1 interface at the remote router, so EIGRP is happy.

However, in a similar scenario where I have a DS3 ATM circuit with a 1500kbps PVC going to a remote router, the default delay values do not match. The PVC on the DS3 inherits the delay value for a DS3 port, while the PVC on the T1 inherits the delay value for a T1 port. This results in a K-value mismatch for EIGRP. In order to resolve it, every ATM PVC on the DS3 needs to have its delay value set based on the speed of the PVC. So, I would like to know how IOS calculates this delay value.
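
For example, something like this on the DS3 side would bring the two ends in line (the delay command is entered in tens of microseconds, so 2000 means 20000 usec, the T1 default):

interface ATM2/0.2
 ! match the 20000 usec default delay seen on the remote T1 side
 delay 2000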

See below for an example of the delay mismatch. This is for the DS3 side and T1 side of a point-to-point ATM PVC running at T1 speeds (well, 1500kb/s):

MainRouter#sh int a2/0.2

ATM2/0.2 is up, line protocol is up

Hardware is ENHANCED ATM PA

Description: To Location 1500

Internet address is 10.204.215.1/30

MTU 4470 bytes, BW 1500 Kbit, DLY 190 usec,

reliability 255/255, txload 14/255, rxload 18/255

Encapsulation ATM

230036632 packets input, 43475672291 bytes

265626099 packets output, 56341079304 bytes

557643 OAM cells input, 557639 OAM cells output

AAL5 CRC errors : 23

AAL5 SAR Timeouts : 0

AAL5 Oversized SDUs : 0

MainRouter#

1500Router#sh int a0/0/0.1

ATM0/0/0.1 is up, line protocol is up

Hardware is ATM AIM T1

Description: To Location MainRouter

Internet address is 10.204.215.2/30

MTU 4470 bytes, BW 1500 Kbit, DLY 20000 usec,

reliability 255/255, txload 36/255, rxload 42/255

Encapsulation ATM

7370007403 packets input, 1477481369681 bytes

5937998372 packets output, 1006760491189 bytes

12631810 OAM cells input, 12631260 OAM cells output

AAL5 CRC errors : 81

AAL5 SAR Timeouts : 0

AAL5 Oversized SDUs : 0

Last clearing of "show interface" counters never

1500Router#
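
Just to show the effect in numbers, assuming the standard composite metric with default K values, metric = 256 * (10^7 / BW in Kbit + delay in tens of usec), each side of this one hop contributes a different amount:

MainRouter side:  256 * (10^7/1500 + 190/10)   = 256 * (6666 + 19)   = 1,711,360
1500Router side:  256 * (10^7/1500 + 20000/10) = 256 * (6666 + 2000) = 2,218,496

(using integer division, as IOS does)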

Thanks again,

Ron Buchalski

Hello Ron,

I have not had a chance to see this happen; however, it just means that the two devices see different EIGRP metric parameters for the same link.

I mean the EIGRP K vector can match even if the delay is different on the two sides; in fact, the K values need to match to build an EIGRP neighborship, while the delay does not.

The EIGRP K values just say which components of the composite metric to take into account: by default, K1 = K3 = 1 and all the others are 0, i.e. BW and delay only.

This doesn't require the values to be the same on the two routers.
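
For reference, the composite metric formula as it is usually documented is:

metric = 256 * [ K1*BW + (K2*BW)/(256 - load) + K3*delay ] * [ K5/(reliability + K4) ]

where BW = 10^7 divided by the lowest bandwidth along the path in Kbit, and delay is the cumulative delay in tens of microseconds. When K5 = 0 (the default), the last bracket is simply not applied, so only BW and delay remain.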

There are some tricky scenarios where this tuning is used to allow weighted load balancing over links that otherwise could not satisfy the feasibility condition.

The mismatch in parameters could cause asymmetric routing if another path to the core is available with a more attractive metric.

For example, you can build an EIGRP adjacency between two routers, one using a GE interface and one using an FE interface, if there is an L2 LAN switch in the middle; again, the delays will not match.

Also OSPF allows for this metric asymmetry.

Hope to help

Giuseppe

Joseph W. Doherty
Hall of Fame

"Should the delay be set, per interface, to a lower value, corresponding with the lower speed of the actual bandwidth available per subinterface? "

That depends on what you're trying to accomplish. EIGRP already uses bandwidth as part of its (default) route metric, but Cisco recommends "The delay should always be used to influence EIGRP routing decisions." This, I believe, is because delay is cumulative along the path, while bandwidth isn't (only the minimum bandwidth counts). I.e., if your concern goes beyond just this one interface, you can/should manipulate the delay value.

As to what value to set delay to, you could use it to reflect bandwidth and/or link distance latency. (I would recommend using delay to reflect bandwidth, much as Cisco costs OSPF links to reflect bandwidth. I would only use delay to reflect distance latency when there's a major latency difference, e.g. satellite vs. surface.)
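
For instance, one simple scheme (purely an illustration, not a Cisco guideline) is to set each subinterface's delay to 10^7 divided by the provisioned rate in Kbit, mirroring the bandwidth term, so that cumulative delay tracks path bandwidth. The interface numbers and rates below are placeholders; remember the delay command is entered in tens of microseconds:

interface GigabitEthernet0/1.100
 bandwidth 10000
 ! 10 Mbps subrate circuit -> 10^7/10000 = 1000 (i.e. 10000 usec)
 delay 1000
!
interface GigabitEthernet0/1.200
 bandwidth 100000
 ! 100 Mbps subrate circuit -> 10^7/100000 = 100 (i.e. 1000 usec)
 delay 100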

"By the way, I have searched extensively and have yet to find how IOS determines the value of delay for an interface. Does anyone know?"

I believe it's just hard-coded defaults that correspond to certain media types.

Joseph,

Thank you for your response. I agree that delay is the metric that should be adjusted to influence EIGRP routing decisions. I'd like for you to clarify your statement regarding the setting of delay to reflect bandwidth. Could you be more specific? For example, should the delay parameter value be independent of media type (10M MetroE vs 10M ATM PVC)?

We have the scenario where multiple MetroE-attached remote locations (generally 10Mb/s each) connect to the main location and terminate on a 1Gb/s port. Measured delay to these remote locations varies, depending on the distance from the main location. Local locations are generally 1ms round trip, while remote locations can be 20ms or higher, all for 10Mb/s connections.

I imagine it would not really matter for most of them, which are single-connected to the main location, so this is the only path to reach them. However, for dual-connected remote locations, this delay may mean that we want the 'longer' path to be considered secondary, even if it has higher bandwidth than the other available path.

Thank you,

Ron Buchalski

" I'd like for you clarify your statement regarding the setting of delay to reflect bandwidth."

My thinking is similar to using cumulative delay to reflect end-to-end path bandwidth, much as OSPF does with its link costs, which, at least on (all?) Cisco routers, are auto-costed based on bandwidth. So I think 10 Mbps should be costed as 10 Mbps regardless of media, most of the time. Besides single-path issues, if alternative paths are available, the actual delay between paths (for similar bandwidths) is usually not that different to reach the same end-point. The latency differences you note, based on distance, are there, but again, unless one path takes a totally different topology path (best example: satellite), there's often not a huge latency difference between p-2-p, frame relay, ATM, MPLS, etc., for the same bandwidth (discounting cloud congestion issues).

From reading your later post to Florin, I see you have an example/issue with two paths having both dissimilar bandwidths and latencies, i.e. 100 Mbps at 20 ms vs. 20 Mbps at 10 ms. As to finding throughput worse on the higher bandwidth link, increased latency can cause problems for applications or TCP settings that aren't optimized for WAN latencies. For such cases, the issue isn't really 20 vs. 100 Mbps, but 10 vs. 20 ms. Making the lower bandwidth path appear better for some traffic is certainly a solution. As you note, PBR could be part of such a solution. Other approaches include WAAS/WAFS devices, OER/PfR, and having developers re-architect applications to behave better across a WAN vs. a LAN.

Joseph,

Thank you for your insight. I will need to do some research on our network and develop a set of rules that our group can follow for optimizing the performance of our routing protocols based on the factors we've been discussing.

Regarding your suggestion about optimizing the Long Fat Pipe (100Mb/s with 20ms delay), we evaluated the Cisco WAAS solution on the link, and it did improve performance. We also evaluated a solution from Riverbed, and a decision was made to deploy the Riverbed solution rather than Cisco. The Riverbed solution does not seem to optimize connections traversing the Long Fat Pipe in the same way that the WAAS does, so the performance improvement is not what was expected. I'm still looking at ways to tweak the Riverbed equipment for improvements.

But it is important to note for anyone reading this thread that the Long Fat Pipe issue is a real problem that others will face as they increase the size of their WAN bandwidth links. There are many good resources on the Internet which discuss the issue.

A good starting point is:

http://www.psc.edu/networking/projects/tcptune/

Thanks again,

Ron Buchalski

Florin Barhala
Level 6

Hello,

As far as I know, the delay value for an interface is not calculated; it is set by default from a short list of built-in values defined by Cisco.

About your other question:

When you have 100 Mb/s negotiated on the interface and you are limited by your ISP to 10 Mb/s, the traffic will not leave the interface at 10 Mb/s.

For a fraction of a second it will burst to ~100 Mb/s, then pause, and so on; on AVERAGE you'll have a 10 Mb/s connection.

If you set the BW value to 20 Mb/s on that interface (the interface facing your ISP), it will not influence how the traffic flows.

As far as I know, the BW value can be set manually, and it is there only for other purposes such as EIGRP and QoS, not to limit your real bandwidth.
