Build a network for mostly tiny packets?

MATTHEW BECK
Level 1

Hello all,

I'm wondering how to build or optimize a network where 35-50% of the packets are under 128 bytes in size. We have a few applications that send lots of small packets very fast and we seem to be suffering from buffer overruns in numerous devices. Does anyone have any suggestions as to how to better design a network with so many small packets? I'm having trouble finding information about how to do such a thing.

Thanks for any help you can provide!

Matt

5 Replies

mheusing
Cisco Employee

Hi Matthew,

Is this your only design requirement? No limits on budget or capacity, no other delay-sensitive traffic ... only small packets at a fast rate ... cool! Get a 16-slot CRS-1 - they do non-blocking 40 Gbps line rate per slot with 64-byte packets ... ;-)

OK, seriously now ... buffer tuning is likely what you want to look at. Also get a good understanding of hardware specifics when it comes to small packets, the impact of features on pps, and the like. Even application specifics might play a role - UDP vs. TCP, flow-control mechanisms, drop sensitivity, etc.

Unfortunately there is no simple answer without knowing additional requirements, restrictions, the existing network topology, the traffic matrix, and application behaviour.

Crossing my fingers for you - don't lose your sense of humor!

Regards,

Martin

Thanks for the input, Martin! And no, humor is never in short supply around here - I won't lose it.

I knew I was asking a loaded question, but responses like yours are helping me shorten my learning curve. I'm pretty sure we're just running out of buffers and my only options are to increase them - either through a config change or new hardware.

To answer some of your questions, all of the traffic we care about is TCP. I've argued with the developers that UDP would be smarter but management would rather throw hardware at the problem than rewrite code. (I'm very cool with that!) I'd rather not get into the dirty details of modifying queues and actually writing a policy for this traffic but that may be what I have to do. I'd prefer to just install gear that has deep queues. The CRS is probably a little out of the budget, but 10gig 6500s, 4900Ms and 5000s are not.

There are also non-Cisco load balancers in the mix which seem to be causing us some trouble. I'm curious as to how the Cisco load balancer in a 6500 would fare with 128 byte packets compared to what we're using.

The application is a streaming market data application. Tons of little packets to users on the Internet so multicast is out. The network topology is typical data center design with access layer switches connected to distribution 6500s that link to a routing core.

Other than the streaming data we have http/s and SQL to worry about - that's about it.

Thanks again for your input. I'm going to keep searching and waiting... :-)

Thanks,

Matt

Joseph W. Doherty
Hall of Fame

Where possible, you'll likely want to increase default queue depths by a factor of roughly 5 to 10. The reason is that default queue depths appear to assume high-bandwidth flows use typical Ethernet 1500-byte packets; if your packets are x times smaller, you'll need about x times the packet queue allowance to buffer the same amount of data.

You may also have to account for the additional PPS load. For instance, gigabit Ethernet line rate requires only about 81 Kpps with 1500-byte packets, but about 845 Kpps with 128-byte packets and 1.488 Mpps with 64-byte packets. In other words, a device can't always sustain the same bandwidth once the PPS demand grows.
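
If you want to sanity-check the arithmetic yourself, here's a rough Python sketch of both calculations. The 20 bytes of per-frame overhead (preamble plus inter-frame gap) and the 1518-byte maximum frame are assumptions of the sketch, not exact figures for any particular platform:

# Line-rate PPS for a given frame size, plus the queue-depth scaling
# factor relative to full-size frames.

LINK_BPS = 1_000_000_000      # gigabit Ethernet
OVERHEAD_BYTES = 20           # preamble (8) + inter-frame gap (12), assumed

def line_rate_pps(frame_bytes, link_bps=LINK_BPS):
    # Packets per second needed to fill the link with frames of this size.
    return link_bps / ((frame_bytes + OVERHEAD_BYTES) * 8)

for size in (64, 128, 1518):
    print(f"{size:>5}-byte frames: {line_rate_pps(size) / 1e3:,.0f} Kpps")
# ->    64-byte frames: 1,488 Kpps
# ->   128-byte frames: 845 Kpps
# ->  1518-byte frames: 81 Kpps

# Queue-depth scaling: buffering the same number of bytes takes far more
# packet slots when the packets are small (here, a 128-byte average).
print(f"scale factor vs. 1500-byte packets: {1500 / 128:.1f}x")   # ~11.7x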

Thanks for the suggestions and the numbers. My coworker started computing these same numbers on pps and we're in pretty close agreement.

So I'm starting to feel that my view of switching needs to shift away from a heavy emphasis on bandwidth requirements and toward pps and buffer-size requirements, particularly when small packets are involved.

Can anyone provide any clarity on the difference between "non-blocking" and "line-rate"? I see some switches advertised one way and others the other way, and neither type of datasheet does a very good job of telling me the buffer sizes for input packets - it all seems to focus on QoS and output queues.

Thanks again,

Matt

"Line-rate" means the switch is able to accept and forward Ethernet frames as fast they can possible be sent on Ethernet. Usually this is PPS rating, which varies as does the packet size. (It also needs to account for Ethernet overhead.) 1.488095 Mpps for 1 Gbps for 64 byte packet would be wire rate.

"Non-blocking" usually either refers to a architecture that doesn't have head of line blocking, and/or one that has sufficient fabric bandwidth that all ports can run at line-rate. Assuming ports are duplex, you need 2x your port bandwidth for your fabric bandwidth.

E.g., 48 gig ports would require about 72 Mpps of forwarding capacity and a 96 Gbps fabric (criteria met by the 4948).
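
As a rough sizing sketch (Python) of that example - the 64-byte worst case and 20-byte per-frame overhead are the same assumptions as above:

PORTS = 48
PORT_BPS = 1_000_000_000
OVERHEAD_BYTES = 20           # preamble + inter-frame gap, assumed

# Worst-case aggregate forwarding rate: every port at line rate with
# minimum-size (64-byte) frames.
aggregate_pps = PORTS * PORT_BPS / ((64 + OVERHEAD_BYTES) * 8)

# Non-blocking fabric: 2x aggregate port bandwidth for full duplex.
fabric_bps = PORTS * PORT_BPS * 2

print(f"forwarding rate needed: {aggregate_pps / 1e6:.1f} Mpps")   # ~71.4 Mpps
print(f"fabric bandwidth needed: {fabric_bps / 1e9:.0f} Gbps")     # 96 Gbps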

PS:

The attachment has these basic specs for various Cisco switches.
