QOS and Link Efficiency Mechanisms (LFI & Compression)

Unanswered Question
Jul 31st, 2009

I'm currently studying for the Cisco ONT exam, much of which relates to Quality of Service (QOS). I'm trying to get some of the basic principles clear in my mind.

Two things which I'm not sure of are the 'Link Efficiency Mechanisms' of compression and LFI. Would I be correct in the following two statements?

1). Compression (RTP header compression, TCP header compression, payload compression, Stacker & Predictor) should not be used unless the data is being sent over a link below 768 kbps for software compression, or below 2 Mbps if done in hardware. These days links with this little bandwidth are very uncommon, so I take it that compression in this instance is really a thing of the past in most organisations (assuming there are no such slow links).

2). LFI (Link Fragmentation and Interleaving): if you're running VoIP, you can turn on LFI so that large data packets are split into fragments and voice packets are slotted between the fragments to reduce delay and jitter. This too should not be used unless the link is below 768 kbps.
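As a rough sanity check on both statements, the arithmetic behind these rules of thumb can be sketched. This is an illustration only; the G.729 figures and the 10 ms fragmentation target are common textbook assumptions, not numbers from this thread:

```python
# Back-of-envelope numbers behind the two rules of thumb above.

# 1) cRTP: per-call bandwidth for G.729 (20-byte payload every 20 ms,
#    i.e. 50 packets/s), ignoring layer-2 overhead.
PAYLOAD = 20        # G.729 payload bytes per packet
IP_UDP_RTP = 40     # uncompressed IP/UDP/RTP header bytes
CRTP = 4            # typical compressed header size (2-4 bytes)
PPS = 50            # packets per second at 20 ms sampling

def call_kbps(header_bytes):
    return (PAYLOAD + header_bytes) * 8 * PPS / 1000

# 2) LFI: fragment size that serializes within a target delay
#    (10 ms is the usual voice-friendly target).
def fragment_bytes(link_kbps, target_ms=10):
    # link_kbps / 8 is bytes per millisecond
    return int(link_kbps / 8 * target_ms)

print(call_kbps(IP_UDP_RTP))  # 24.0 kbps per call uncompressed
print(call_kbps(CRTP))        # 9.6 kbps per call with cRTP
print(fragment_bytes(256))    # 320-byte fragments on a 256 kbps link
```

The per-call saving from cRTP is large in relative terms, which is why it matters on a 256 kbps link but is barely noticeable on FastEthernet.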

Basically, if I'm looking to implement QOS and make the network more efficient, should I not even bother considering compression or LFI unless I've got very slow links to implement them on? All the documentation I've read so far says not to. However, suppose you've got a 10 or 100 Mbps link running at high utilisation, maybe 60 - 70% much of the time, and perhaps you're running voice over it. Wouldn't it then be better to at least implement header compression so that bandwidth usage goes down, and wouldn't fewer packets then be dropped from the output queues, or the drops be eliminated altogether?

Thanks for any replies,


Paolo Bevilacqua Fri, 07/31/2009 - 03:53

Do you need the answers for certification purpose or a real network ?

If the first, do not try to second-guess anything in the training material; just memorize the book answers for a better chance of passing.

If the second, mention the exact circuit speeds and types you have, and any other details.

KonradStepniewski Fri, 07/31/2009 - 05:36

Both QoS mechanisms - compression and LFI - are about serialization time (compression saves bandwidth as well), so you should enable them only on really slow links, below 768 kbps, where the time for the router to put data on the wire is significant.
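The serialization-time point is easy to see with simple arithmetic (a sketch, not from the thread itself):

```python
# Time to put one frame on the wire at various link speeds.
# With speed in kbps, bits / kbps gives milliseconds directly.
def serialization_ms(frame_bytes, link_kbps):
    return frame_bytes * 8 / link_kbps

print(serialization_ms(1500, 128))      # 93.75 ms on a 128 kbps link
print(serialization_ms(1500, 768))      # ~15.6 ms at 768 kbps
print(serialization_ms(1500, 100_000))  # 0.12 ms on 100 Mbps
```

At 100 Mbps a full-size frame serializes in a fraction of a millisecond, so fragmenting it buys essentially nothing.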

If you have faster links, like the 10/100 Mb you mentioned, then count how many calls you could add to a 60-70% utilized link. How much processor time will the compression take? Is it really worth it? Probably not...
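The call-count arithmetic hinted at here might look like the following. This is a sketch; the 80 kbps per-call figure for G.711 at the IP layer is an assumed value, not one stated in the thread:

```python
# Rough headroom on a 100 Mbps link running at 70% utilisation.
LINK_KBPS = 100_000
UTILISATION = 0.70
CALL_KBPS = 80  # assumed per-call rate for G.711 at the IP layer

free_kbps = LINK_KBPS * (1 - UTILISATION)
print(int(free_kbps // CALL_KBPS))  # calls of headroom remaining
```

Hundreds of calls fit in the spare 30 Mbps even without compression, which is why spending CPU on cRTP here buys so little.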

Nicholas Matthews Fri, 07/31/2009 - 06:13

In addition to the diminishing returns of header compression on links faster than 768 kbps, you will also see a greatly increased use of the processor.

Basically, it is worth spending the processor cycles on a small number of packets when that gives a fairly substantial bandwidth saving. However, the processor requirements will increase much faster than your bandwidth savings - especially on FastE links.

Regarding interleaving: on links faster than 768 kbps, LLQ is generally used to prevent other packets from getting in front of the voice. The only delays you should see with LLQ are software/hardware processing, any other LLQ packets, and the serialization delay of any packet currently being sent. Since serialization is much faster on modern links, the gains from LFI would be minimal if you already have LLQ configured.

Hope that clarifies.


Joseph W. Doherty Fri, 07/31/2009 - 17:00

Somewhat redundant with the other posts, but . . .

Compression (if it actually compresses - not always a given) effectively increases the bandwidth, which we generally like. The problem is that compression can demand much, much more CPU per packet processed. That's not a big deal when the link's bandwidth is small and there are lots of excess CPU cycles to spare, but at 10 or 100 Mbps the entire CPU of many smaller software routers can be consumed by normal packet forwarding; i.e., there are no CPU cycles to spare. Even on L3 switches, compression probably requires the main CPU rather than being implemented in ASICs. (NB: some software routers have, often as an option, compression hardware which can shift the break-even point.)

(BTW, newer WAN acceleration hardware implements compression and/or other data reduction techniques that are even more CPU intensive, but it also has dedicated hardware to support this load, which permits use on much higher bandwidth links.)

On the issue of LFI, as you note, it's needed on low-bandwidth links where the serialization delay of something like a 1500-byte packet, already being sent right before a VoIP packet arrives, will "bust" the VoIP timing constraints. It too, though, imposes extra processing on the CPU to break up packets.
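The worst-case wait a voice packet sees behind one in-flight data packet, with and without fragmentation, can be sketched (a 256 kbps link and 320-byte fragments are assumed figures for illustration):

```python
# Extra delay a voice packet can see behind one in-flight data frame.
def wait_ms(frame_bytes, link_kbps):
    return frame_bytes * 8 / link_kbps

print(wait_ms(1500, 256))  # ~46.9 ms behind a full frame, no LFI
print(wait_ms(320, 256))   # 10.0 ms behind a 320-byte fragment
```

Cutting the worst-case wait from roughly 47 ms to 10 ms is the whole point of interleaving on a slow link.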

BTW, you can also get much of the benefit of LFI, with less impact on the router, if you set the MTU down and PMTUD is working correctly. Even better, for TCP, is when mss-adjust works, since it avoids creating many of the large packets that would require fragmentation in the first place. It can also help other very time-sensitive traffic on low-bandwidth links, e.g. Citrix.
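The rule of thumb behind mss-adjust (IOS `ip tcp adjust-mss`) can be sketched as follows; the 40-byte figure assumes plain IP and TCP headers with no options:

```python
# Advertise an MSS such that MSS + 40 bytes of IP/TCP headers
# fits within the desired MTU, so full-size segments never form.
def mss_for_mtu(mtu_bytes, header_bytes=40):
    return mtu_bytes - header_bytes

print(mss_for_mtu(1500))  # 1460, the usual Ethernet value
print(mss_for_mtu(576))   # 536, for a lowered MTU on a slow link
```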

Peter.D.Brown Tue, 08/04/2009 - 09:54
Thanks everyone for the replies. It's pretty much as I thought then - they're mainly helpful on very slow links and could be counter-productive on fast links due to processor usage.

I wanted to know for my own benefit, so that I've got more knowledge for implementing QOS when the time arises. Also because the ONT exam material wants you to just select 'RTP header compression' as the answer without taking note of the link speed, which conflicts with what I've read and with what you guys have told me.

I've got my ONT exam tomorrow, which is the final in my CCNP, so hopefully I'll be a CCNP by lunch time. Cheers, Pete.

