I have just recently become aware of a phenomenon known as "Buffer Bloat". As I understand it, the problem arises when a downstream carrier buffers data for an excessive length of time, and the end result is terrible latency spikes and poor performance. From what I can find, large file uploads/downloads effectively choke out the link, causing other traffic to be affected.
The following is an excerpt from the Q&A section of a network analysis tool developed by a group at Berkeley:
Q. Netalyzr reports large buffers in my up/downlink. How can I fix that?
A. The first option is just to be aware of the issue. If you don't try to perform large file transfers or P2P applications while also websurfing, gaming, or using VoIP, you shouldn't notice a problem. Buffer sizing is only a problem if you try to perform both large transfers and interactive applications simultaneously. The second is to pay for a higher bandwidth service, if it is important for you to be able to perform both large file transfers and interactive applications at the same time. The problem is due to the ratio between the buffer's capacity and the bandwidth of the connection, so if you pay for more bandwidth, the buffering problem is reduced. Unfortunately the real solution, namely access devices which allow programmable buffer sizing or dynamically resize their buffer based on available bandwidth, is not generally available to the customer at this time.
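That "ratio" is simple arithmetic: a full buffer adds delay equal to its size divided by the link rate, so the same buffer hurts less on a faster link. A quick sketch (the 64 KB buffer size is a made-up illustrative number, not anything measured here):

```python
def buffer_delay_ms(buffer_bytes: int, link_bps: int) -> float:
    """Worst-case queueing delay (ms) a packet sees behind a completely full buffer."""
    return buffer_bytes * 8 * 1000 / link_bps

# Hypothetical 64 KB device buffer on a T1, then on a link with 4x the bandwidth:
print(round(buffer_delay_ms(64 * 1024, 1_544_000)))      # ~340 ms on a T1
print(round(buffer_delay_ms(64 * 1024, 4 * 1_544_000)))  # ~85 ms on 4x the bandwidth
```

Same buffer, a quarter of the delay, which is exactly why the FAQ suggests buying more bandwidth as a workaround.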
To put this into perspective, we have a Cisco 2821 with a T1 serial link into a remote office which runs an application in our data center. That application transfers files to and from the data center as a normal part of its operation. We are seeing latency spikes of 400-600 ms at an alarming rate (even though average bandwidth utilization is around 400 Kbps). We are not seeing this behavior in our other remote offices at this time, so I do not think we are 'maxing out' the link at intervals too short to show up in the average utilization calculated on the link.
I have been told that what we are seeing are the classic signs of Buffer Bloat, and so I ran the Berkeley-developed tool and it reported:
We estimate your uplink as having 380 msec of buffering. This level may serve well for maximizing speed while minimizing the impact of large transfers on other traffic.
We estimate your downlink as having 5200 msec of buffering. This is quite high, and you may experience substantial disruption to your network performance when performing interactive tasks such as web-surfing while simultaneously conducting large downloads. With such a buffer, real-time applications such as games or audio chat can work quite poorly when conducting large downloads at the same time.
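A back-of-the-envelope check of what 5200 ms of buffering implies, assuming the download drains the buffer at the full T1 rate of 1.544 Mbps (an assumption; Netalyzr reports only the delay, not the buffer size):

```python
# Buffer size implied by the measured queueing delay: bytes = rate * delay.
link_bps = 1_544_000                         # T1 line rate (assumed drain rate)
delay_ms = 5200                              # Netalyzr's downlink estimate
buffer_bytes = link_bps * delay_ms // 1000 // 8
print(buffer_bytes)                          # 1,003,600 -- roughly 1 MB of queue
```

Roughly a megabyte of standing queue in front of every interactive packet, which lines up with the "substantial disruption" warning.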
In contrast, I have tested several of my other remote offices, and the 'norm' seems to be in the range of 200 - 300 msec of buffering.
This is the first I've heard it called "Buffer Bloat", but I've noticed for years that many service providers have deep buffers so they won't drop your packets. Sufficient buffering is also needed to support a large TCP BDP (bandwidth-delay product). Of course, as you've noticed, deep buffering can adversely affect other traffic.
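The BDP point is worth quantifying, since it shows how oversized a multi-second buffer is. BDP = bandwidth x round-trip time is roughly how much data a single TCP flow needs in flight to keep the link full, so that is the order of buffering actually needed. The 60 ms RTT below is an assumed figure for illustration:

```python
# Bandwidth-delay product for the T1 in the question, at an assumed WAN RTT.
link_bps = 1_544_000   # T1
rtt_ms = 60            # assumed round-trip time
bdp_bytes = link_bps * rtt_ms // 1000 // 8
print(bdp_bytes)       # 11,580 bytes -- a reasonable buffer is near this order,
                       # not the ~1 MB a 5200 ms queue implies
```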
The "easy fix" is minimal QoS that doesn't place all traffic into a single, very deep FIFO queue. Unfortunately, many service providers only provide a single FIFO queue, so if possible, apply your own QoS so that you, not the service provider, manage your bandwidth.
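On a router like the 2821 mentioned above, that advice can be sketched as an MQC policy that shapes outbound traffic to just under the line rate and fair-queues inside the shaper, so queueing happens on equipment you control instead of in the provider's deep FIFO. This is only a sketch: the interface name and shape rate are assumptions to be adapted to the actual circuit.

```
! Sketch only -- interface name and rate are illustrative assumptions.
! Shaping slightly below the T1 rate keeps the queue on this router,
! where fair-queue prevents one large transfer from starving the rest.
policy-map SHAPE-T1
 class class-default
  shape average 1450000
  fair-queue
!
interface Serial0/0/0
 service-policy output SHAPE-T1
```

Note this only tames the uplink direction; the 5200 ms downlink buffer sits on the provider side, where the same trick would have to be applied at the far end (or worked around by keeping the downlink from saturating).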