Bidirectional TCP throughput testing

Unanswered Question
Jun 18th, 2009

Does anybody know of a tool to test bidirectional TCP flows that will actually show you the CIR that has been provisioned on a particular uplink interface?

See, the problem is, when you provision a customer with a CIR of say 60 Mbps down and 2 Mbps up and they run a bidirectional TCP test, say an FTP ingress and egress, once the ingress pipe gets filled up, the egress rate drops dramatically. Now, if I run a bidirectional UDP test to flood the pipes, everything is good and the client can see their true CIR, 60x2 in this case.

Of course, I know this has to do with the congestion control mechanisms involved with TCP, especially in Microsoft's TCP/IP stack. But this is the real world, 90% of internet traffic is TCP, and my clients want to see their ingress/egress limits reached with TCP traffic.

Does anybody know of anything, software or hardware, that can generate bidirectional TCP flows without all the default congestion control crap, so I can simulate network traffic and show my customers valid throughput at any given time?
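(For what it's worth, a bare-bones bidirectional TCP probe is easy to sketch in Python; the loopback demo below, with a placeholder test duration, measures both directions at once, though it still rides on the host's normal TCP stack, congestion control included:)

```python
# A minimal bidirectional TCP throughput probe, run here over loopback
# as a self-contained demo. In practice the two sockets would be the
# endpoints of the link under test. DURATION is a placeholder.
import socket
import threading
import time

CHUNK = b"x" * 65536
DURATION = 0.5          # seconds per test; arbitrary for the demo


def pump(conn, duration):
    """Send bulk data for `duration` seconds; return bytes sent."""
    deadline = time.monotonic() + duration
    sent = 0
    while time.monotonic() < deadline:
        sent += conn.send(CHUNK)
    conn.shutdown(socket.SHUT_WR)   # FIN signals EOF to the peer's drain
    return sent


def drain(conn):
    """Receive until the peer closes its side; return bytes received."""
    total = 0
    while (data := conn.recv(65536)):
        total += len(data)
    return total


# Build a real TCP connection over loopback.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
a = socket.create_connection(srv.getsockname())
b, _ = srv.accept()
srv.close()

results = {}


def run(name, fn, *args):
    results[name] = fn(*args)


# Both directions send and receive simultaneously.
threads = [
    threading.Thread(target=run, args=("a->b sent", pump, a, DURATION)),
    threading.Thread(target=run, args=("b->a sent", pump, b, DURATION)),
    threading.Thread(target=run, args=("a->b rcvd", drain, b)),
    threading.Thread(target=run, args=("b->a rcvd", drain, a)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

for name, nbytes in sorted(results.items()):
    print(f"{name}: {nbytes * 8 / DURATION / 1e6:.0f} Mbps")
```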

Thanks for your help!

Joseph W. Doherty Thu, 06/18/2009 - 13:46

Don't expect you'll have much luck finding a TCP variant without its "congestion control crap" since that's an integral part of TCP.

You should be able to get a single TCP flow to run at about 100% of a desired bandwidth, but this requires ensuring there's sufficient buffering to handle TCP bursting up to the flow's BDP and that the receiving host's receive window size allows for just the BDP (which should eventually allow TCP to self-clock at the desired bandwidth). Outside of a lab or one-time test setup, you're unlikely to get similar results for routine Internet TCP traffic.
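To put numbers on the BDP and receive-window point, here's a quick sketch, assuming a hypothetical 60 Mbps rate and a 20 ms round-trip time (both illustrative values, not measurements from this network):

```python
# Back-of-the-envelope sizing for a single TCP flow. The 60 Mbps CIR
# and 20 ms RTT are assumed example values.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes 'in flight' needed to fill the pipe."""
    return bandwidth_bps * rtt_s / 8


def max_throughput_bps(rwin_bytes: float, rtt_s: float) -> float:
    """Ceiling imposed by the receiver's window: one RWIN per RTT."""
    return rwin_bytes * 8 / rtt_s


CIR = 60e6      # 60 Mbps downstream (assumed)
RTT = 0.020     # 20 ms round trip (assumed)

bdp = bdp_bytes(CIR, RTT)
print(f"BDP: {bdp:.0f} bytes")   # bytes that must be in flight at 100% CIR

# A Windows XP-era default RWIN around 64 KB caps the flow well below CIR:
print(f"64 KB RWIN limit: {max_throughput_bps(65535, RTT) / 1e6:.1f} Mbps")
```

The second figure is why an untuned receiving host often can't fill the pipe even with zero loss: throughput can never exceed one receive window per round trip.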

Much of "real world" lower-than-expected TCP performance stems from a lack of knowledge of how TCP functions and/or device configurations that work against getting the most from it. Even how "CIR" is enforced can have a huge impact on TCP.

Even if you found some TCP test tool that easily showed your CIRs were in fact allocated as contracted, similar to the UDP tests that demonstrate this already, it's comparing apples to oranges. Your customer probably wouldn't see similar performance with their "normal" Internet TCP traffic.

Instead of trying to find a TCP test tool, you might explain why TCP traffic has difficulty obtaining 100% CIR (e.g. "default congestion control crap"), explain options that improve TCP bandwidth utilization (e.g. RED, shaping, FQ, BDP, host RWIN, etc.), and/or suggest an appliance that "massages" TCP traffic to maximize bandwidth utilization (e.g. products like Packeteer or Exinda).
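As one illustration of the options above, RED's drop probability ramps linearly between two average-queue-depth thresholds; a minimal sketch, with arbitrary example thresholds (not recommended values):

```python
# A minimal sketch of RED (Random Early Detection) drop probability.
# Threshold and max_p values below are arbitrary examples.

def red_drop_prob(avg_q: float, min_th: float, max_th: float,
                  max_p: float = 0.1) -> float:
    """Probability of dropping/marking a packet at a given average queue depth."""
    if avg_q < min_th:
        return 0.0                      # below min threshold: never drop
    if avg_q >= max_th:
        return 1.0                      # at/above max threshold: always drop
    # linear ramp between the thresholds, up to max_p
    return max_p * (avg_q - min_th) / (max_th - min_th)


# Early, probabilistic drops nudge individual TCP flows to back off
# gradually, instead of the synchronized collapse caused by tail drops:
for q in (10, 30, 50, 70):
    print(f"avg queue {q}: drop prob {red_drop_prob(q, min_th=20, max_th=60):.3f}")
```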

eknell Thu, 06/18/2009 - 14:44


Thanks for the reply. My problem is not with our Cisco gear. I can use ingress/egress policers, and/or ingress policers with egress traffic shaping/limiting using SRR queuing, while adjusting for bursting, and get my CIR all day long, bidirectionally.

The problem is with our Passive Optical Network gear, which hangs off our Cisco core and does the rate-limiting. If I set the CIR on this equipment, I can only achieve my bidirectional throughput tests with UDP. But this equipment doesn't support ingress bursting, so I don't know what the deal is. I can reach my TCP rates unidirectionally, but not bidirectionally. Egress traffic starts to drop by 50% in some cases while ingress is maxed at 100%.

Some of TCP's congestion control mechanisms, and trying to "tune" TCP, just get a little confusing when trying to optimize the network so the different "bandwidth throttling" functions work correctly.

I am just trying to prove it from my Core to the client. I wasn't worried about anything past that ("their "normal" Internet TCP traffic"). Thanks for your help!

Joseph W. Doherty Fri, 06/19/2009 - 03:35

If I understand you correctly, what you're seeing is: when the 60 Mbps ingress completely fills, the 2 Mbps egress utilization drops, and this while running FTP in both directions.

If my understanding is correct, this is somewhat unusual, since bandwidth oversubscription normally has its most impact upon same-direction traffic, not the return path. Yet if ACKs are severely impacted (dropped or delayed) on one path, they would impact the other path's TCP flow forwarding performance. I.e., this could explain why you see the expected FTP performance unidirectionally, in either direction, but not the expected bidirectional FTP performance.
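Back-of-the-envelope arithmetic supports the ACK explanation. The packet sizes and delayed-ACK ratio below are typical assumptions, not measurements from this network:

```python
# Rough estimate of how much of the 2 Mbps upstream the 60 Mbps
# downstream transfer's ACKs need. All per-packet figures are
# typical assumed values.

DOWN_BPS = 60e6        # downstream CIR
SEGMENT = 1500         # bytes per downstream data segment (assumed)
ACK_SIZE = 64          # bytes on the wire per ACK, incl. overhead (assumed)
SEGS_PER_ACK = 2       # delayed ACK: one ACK per two segments (assumed)

segments_per_s = DOWN_BPS / 8 / SEGMENT          # data packets per second
acks_per_s = segments_per_s / SEGS_PER_ACK       # ACKs per second
ack_bps = acks_per_s * ACK_SIZE * 8

print(f"ACK traffic: {ack_bps / 1e6:.2f} Mbps of the 2 Mbps upstream")
# Over half the upstream pipe is needed just to ACK the downstream flow.
# If an upstream FTP is also filling the 2 Mbps pipe, those ACKs queue
# or drop behind the upstream data, throttling both directions.
```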

You mention "rate-limiting", and as I noted in my first post, how CIR is enforced can have a huge impact on TCP.

Also, from what you describe, rate-limiters are involved, and since they often behave differently from true link bandwidths (e.g. a 10 Mbps CIR on 100 Mbps Ethernet vs. true 10 Mbps Ethernet), what UDP tests show doesn't mean you will obtain like TCP bandwidth performance.

In other words, if you're selling the 2/60 such that it should perform the same as true links of those bandwidths for all traffic, especially TCP, and you're unable to tune the CIR values (Bc/Be/Tc) to deliver like performance (which can be very difficult), you might have to increase your CIR bandwidth settings until you do deliver the expected TCP performance. (E.g., if 2/60 loses 50% on the "2", try 4/60, but to the customer it's still "sold" as 2/60.)
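For reference, Bc (committed burst) is just the CIR spread over the policing interval Tc; a quick sketch, with interval values chosen only as examples:

```python
# Illustrative Bc/Tc arithmetic for a policer enforcing a CIR.
# The Tc values below are arbitrary examples, not recommendations.

def bc_bytes(cir_bps: float, tc_s: float) -> float:
    """Committed burst (bytes) refilled per interval Tc at a given CIR."""
    return cir_bps * tc_s / 8


CIR = 2e6                           # the 2 Mbps direction
for tc_ms in (10, 125, 500):
    bc = bc_bytes(CIR, tc_ms / 1000)
    print(f"Tc={tc_ms} ms -> Bc={bc:.0f} bytes")

# With Tc=10 ms, Bc is only 2500 bytes, under two full-size segments,
# so a normal TCP burst overruns the bucket and gets dropped. This is
# one way a too-small burst allowance can devastate TCP throughput
# while a smooth UDP flood still measures the full CIR.
```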

Otherwise, you can explain to your customers that rate-limited bandwidths are not the same as true bandwidths and TCP performance might not be as expected. I.e., there's nothing wrong, per se, it's just the way it works.

