
PPS on a T1

mmaskart
Level 1

Does anyone have a rule of thumb on how many packets per second can be transmitted on a T1 before packet loss and other problems crop up? I think I've hit my high-water mark...

7 Replies

Giuseppe Larosa
Hall of Fame

Hello Matt,

You can consider 6 or 7 bytes of L2 overhead, depending on the L2 encapsulation used on the link.

For example:

IP packet size: 500 bytes

L2 overhead: 6 bytes

max pps = 1,408,000 / (8 × 506) ≈ 347 pps
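
A quick Python sketch of the same calculation; the 1,408,000 bps link rate and 6-byte overhead are simply the example figures above, not a measurement of your line:

# Maximum packets per second for a given link rate, IP packet size, and L2 overhead
def max_pps(link_bps, ip_packet_bytes, l2_overhead_bytes=6):
    frame_bits = 8 * (ip_packet_bytes + l2_overhead_bytes)
    return link_bps / frame_bits

# Example figures from above: 1,408,000 bps, 500-byte IP packets, 6 bytes of overhead
print(max_pps(1_408_000, 500))  # ~347.8 pps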

Hope to help

Giuseppe

My ISP is telling me 50-70 pps is acceptable. My traffic bursts to 150 pps, and my monitoring software starts to report errors and losses.

Here are some additional details. Note the discards in the attached files.

Hello Matt,

I've used a size of 500 bytes; you receive packets with an average size of 1.1 Kbytes:

1024 × 1.1 = 1126.4 bytes

Adding 6 bytes of L2 overhead per packet, at 67 pps: (1126.4 + 6) × 8 × 67 = 606,966.4 bps

Be aware that on TCP sessions there is the question of the bandwidth-delay product.

A TCP session is negotiated, and acknowledgements are expected before more packets (more bytes) are sent.

This is the TCP window.

A single FTP transfer can have this result on a T1, I think.

You should try to perform parallel downloads to try to fill the link, unless they have given you a fractional T1 (N × 64 kbps).
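
To put rough numbers on the window point, here is a small Python sketch; the 16 KB window and 200 ms round-trip time are purely illustrative assumptions, not measured values:

# A single TCP session can send at most one window of data per round trip,
# so its throughput is capped at window_bytes / RTT regardless of link speed.
def window_limited_bps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds

# Illustrative assumptions only: 16 KB window, 200 ms RTT
print(window_limited_bps(16 * 1024, 0.200))  # ~655,360 bps, below the T1 rate

# Bandwidth-delay product: bytes in flight needed to keep a 1.536 Mbps T1 payload full at that RTT
print(1_536_000 / 8 * 0.200)                 # ~38,400 bytes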

Hope to help

Giuseppe

Joseph W. Doherty
Hall of Fame

Necessary PPS depends on both the bandwidth and the packet's size.

For easy math, assume your T1 provides 1,500,000 bps. If packets were 1,500 bytes, 12,000 bits, then full speed pps would be about 125 pps. However, if packets were only 50 bytes, 400 bits, then full speed pps would be about 3,750 pps. In other words, divide bandwidth bps by packet size (in bits) for pps. (Also keep in mind, most serial links are duplex, i.e. a router would need to provide up to 2x pps.)
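
The same arithmetic, as a short Python sketch using the round numbers above:

# pps needed to run a link at full speed: link rate divided by packet size in bits
def full_speed_pps(link_bps, packet_bytes):
    return link_bps / (packet_bytes * 8)

print(full_speed_pps(1_500_000, 1500))  # 125.0 pps
print(full_speed_pps(1_500_000, 50))    # 3750.0 pps
# On a duplex serial link, the router may need to handle up to 2x that pps.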

For an actual T1, or other media, you also need to account for framing and other L2 overhead. (Which reduces the pps required, since some bandwidth is "lost" to this overhead; more is lost at the small-packet end of the scale.)

Most (modern) equipment that can deal with a T1 usually has sufficient performance capacity that the pps requirements shouldn't be an issue.

[edit]

To see that the above calculation holds up, in your first attachment we see:

Interface bandwidth - 1.54 Mbps

Current Traffic - 571.49 Kbps

Percent Utilization - 37%

Packets per Second - 67.0

Average Packet Size - 1.1 Kbytes

571.49 Kbps / 1.54 Mbps = (about) 37%, which agrees with the Percent Utilization stat

67.0 pps × 1.1 KB = (about) 589.6 Kbps, which isn't too different from the Current Traffic stat (could be a rounding issue with the average packet size, e.g. 1.05 KB gives 562.8 Kbps)
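
The same cross-check, scripted; the figures are those quoted from the attachment, and 1 KB is taken as 1,000 bytes here to match the numbers above:

bandwidth_bps = 1_540_000   # interface bandwidth, 1.54 Mbps
traffic_bps = 571_490       # current traffic, 571.49 Kbps
pps = 67.0                  # packets per second
avg_pkt_bytes = 1_100       # average packet size, 1.1 KB (decimal)

print(traffic_bps / bandwidth_bps)  # ~0.37, matching the reported utilization
print(pps * avg_pkt_bytes * 8)      # 589,600 bps, close to the Current Traffic stat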

Regarding your 2nd attachment, drops on a T1 tend to happen more due to link congestion than to insufficient pps.

The network topology is as follows: LAN <-(100 Mb Ethernet)-> firewall <-(100 Mb Ethernet)-> managed router <-(Serial T1)-> Internet

The network monitor (SolarWinds) is watching the external interface of the firewall.

SolarWinds reports that my average packet size is 1.1 Kbytes, or 9,011.2 bits. My packets per second = 67. In an average second, that's 603,750.4 bits leaving the firewall.

When the traffic is placed on the T1, L2 overhead is added. Using Giuseppe's estimate of 6 bytes of overhead for each packet, that's roughly 1.106 Kbytes, or 9,060.4 bits, per packet. 9,060.4 bits × 67 packets = 607,046.8 bps.

607,046.8 bps is approximately 40% of a T1. So, my question is: at what average utilization percentage does a circuit start to cause network degradation and problems? Obviously, the drops indicate that I'm above that point.
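
For reference, the same utilization figure worked out in Python; 1 Kbyte is taken as 1,024 bytes, the 6-byte L2 overhead is Giuseppe's estimate, and 1.544 Mbps is the nominal T1 line rate:

avg_ip_bytes = 1.1 * 1024   # SolarWinds average packet size, 1.1 Kbytes
l2_overhead = 6             # estimated per-packet L2 overhead
pps = 67
t1_bps = 1_544_000          # nominal T1 line rate

offered_bps = (avg_ip_bytes + l2_overhead) * 8 * pps
print(offered_bps)          # ~606,966 bps (the 607,046.8 above differs only by rounding)
print(offered_bps / t1_bps) # ~0.39, i.e. roughly 40% of the T1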

Any information or insight would be greatly appreciated. Giuseppe and Joseph, thank you for your assistance.

Well, the trouble with any average is that it's an average. Transient congestion, or other transient issues, doesn't always show up easily in averages.

However, beyond that, we might be mixing "apples and oranges". Your concern is about T1 utilization, but you're measuring drops on a FW interface? For FWs, PPS can be very important (as it often is with a router). 500 Kbps makes little difference to the link regardless of packet size, but packet size and PPS can make a huge difference to devices that examine packets.

Also, however, I wouldn't expect T1 bandwidth to have much impact on a (recent) FW, so I would suspect transient congestion. It might be something as simple as the 100 Mbps LAN side feeding packets to the FW, in bursts, faster than the FW can process and forward them. (The WAN side, although also 100 Mbps, is limited by the T1 link.)

You might try, if possible, running your FW LAN side at 10 Mbps. (Since the ultimate bottleneck is the T1, it shouldn't have any real adverse impact on current functioning, but it protects the FW from 100 Mbps bursts.)

Beyond that, I can't suggest more, but if the drops you're seeing aren't on the router's serial (T1) interface, you might look carefully at the FW.
