I need an expert opinion.
Where I work, we have a private MPLS network that connects over 25 remote sites to our IT center (server farm). We have a 100 Mb link between the IT center and the cloud. All the other sites (in the Montreal and Quebec City areas) vary in speed.
The ISP that handles our MPLS network just installed a new remote site in Toronto at 100 Mb. To confirm that we're getting the promised throughput, I used a tool called Iperf. Simply put, it's a client-server command-line application that sends packets and measures the throughput of the network.
Iperf is telling me the throughput between Toronto and the IT center is around 30 Mb/s.
By my estimate, I should be hitting the 70 Mb mark. Speaking with my ISP, they're telling me it's the distance (latency) that's giving me this result, and that they can't do anything about it. I'm having a hard time believing this explanation. All my other sites, including Quebec City (running at 5 Mb), deliver the speed we're paying for… the problem is only with the Toronto site.
Starting from the IT center, a trace to Toronto goes through 5 hops; to Quebec City, 4 hops. Can one more hop make that much of a difference?
Pinging Toronto, I get an average of 23 ms. Pinging Quebec City, I get an average of 17 ms.
I agree that the packets have to travel a longer distance, but I wasn't expecting throughput this low.
Any opinions are greatly appreciated.
Distance is a factor, but not as big a one as you might expect. It matters more in interactive applications than in simple file transfers. In a simple file transfer you can reach pretty high rates almost regardless of distance: once the network pipe is full, you keep receiving packets at high rates. (Imagine the network path as a pipe being filled with water. It takes some time to fill, but as long as it stays full, it doesn't matter how long the pipe is.)
Before blaming the ISP, make sure you are measuring the throughput correctly. Are you running Iperf with TCP traffic or UDP? With TCP you will normally get somewhat less throughput than with UDP, and TCP needs you to keep the traffic running for a few minutes to see your best throughput (the idea is to amortize the initial cost of filling the pipe and give TCP a chance to grow its window). Also make sure there are no congestion issues in the LANs where the sending and receiving PCs sit, and that those PCs are not doing much other work, especially anything that involves networking.
One more thing: the difference quite possibly has to do with how long you let Iperf run. It doesn't take much time to reach 5 Mbps (as you did with the other site), but it takes longer for TCP to "trust" the path enough to send data at 100 Mbps.
Thanks for your quick reply.
When I use TCP I get about 30 Mb/s. UDP runs around 90 Mb/s.
Obviously, most applications use TCP.
I understand that UDP has less overhead, but I still consider that my link is not running at its full capacity.
I will let Iperf run a little longer.
I'll post the results.
The difference between TCP and UDP is not just headers; it is much more. Your UDP performance is perfect and you cannot expect more from the network (remember that you will never see a full 100 Mbps because of lower-layer header overhead). It does show you the network's capacity, and it is great! If you want to squeeze even more out of the network with TCP, try sending traffic from multiple PCs to multiple PCs.
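To put a rough number on that lower-layer overhead, here is a back-of-the-envelope sketch. It assumes plain Ethernet framing and a 1500-byte MTU; any MPLS labels or other encapsulation on the WAN side would shave off a little more.

```python
# Rough ceiling for UDP goodput on a 100 Mb/s Ethernet link with a 1500-byte MTU.
# Assumed per-packet wire overhead: Ethernet header 14 B + FCS 4 B +
# preamble 8 B + inter-frame gap 12 B, plus 20 B IPv4 and 8 B UDP headers.

LINK_MBPS = 100
MTU = 1500                       # IP packet size
WIRE_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header, FCS, preamble, IFG
UDP_PAYLOAD = MTU - 20 - 8       # minus IPv4 and UDP headers

frame_on_wire = MTU + WIRE_OVERHEAD  # 1538 bytes per packet on the wire
goodput = LINK_MBPS * UDP_PAYLOAD / frame_on_wire

print(f"Max UDP goodput: {goodput:.1f} Mb/s")  # about 95.7 Mb/s
```

So seeing around 90 Mb/s of UDP goodput really is close to the practical ceiling of a 100 Mb/s link.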
Here are my results:
sending 100 MB = 32 Mb/s throughput
sending 200 MB = 33 Mb/s throughput
sending 500 MB = 26 Mb/s throughput
Right now I can't test with multiple workstations.
I never expected to lose that much bandwidth just because the packets travel over 500 km.
Montreal to Quebec City is around 200 km and I don't lose any bandwidth.
I don't know if it's an urban legend, but I've heard that ISPs sometimes lower the bandwidth threshold for TCP.
Can they do that??
Of course they can. And often their infrastructure is heavily overbooked and they have big problems maintaining the service levels sold to customers. You should really spend a day inside an SP's NOC to see everything that goes on behind the curtains; as you can imagine, not all of it is nice.
I don't know about the urban legend that you mention.
What command did you use in the CLI for those experiments? How long did you keep those sessions running? That is not a lot of data (not enough total MBytes to reach high rates in 10 seconds, even if TCP didn't slow-start). Remember, each experiment is different: you have to make each session run for several minutes, not run a lot of short successive sessions. It is not the same thing; each new session suffers the same bad luck at the beginning. There is a CLI option to make an individual TCP session run longer, but I don't remember it and I don't have Iperf downloaded right now. You could explore the CLI help to quickly find it.
Also, consider setting the TCP window size with the -w option (e.g. iperf -s -w 64K).
Now that Paolo answered, I see what you were getting at. I was in an SP NOC for a couple of years, but we were not that bad. Overbooking is done based on statistics, but we always tried to be able to satisfy peak-hour traffic (I know they still do). I mean, even though the theoretical sold capacity could be higher, the real traffic had to get through for us to survive the customer complaints and rumours such as those you mention. Sometimes it wasn't possible due to delays in circuit delivery or some other technical issue. Then again, it all depends on the competition between SPs in your area. Anyway, let's make sure we are doing our best to measure the performance before starting to spread any rumours.
I let the test run for 15 minutes with the window size set to 64 KB.
Transferred 1.36 GB: 19.6 Mb/s.
As you can see, the throughput got worse.
So basically, right now I have to conclude that my ISP can't do anything about this issue, even though I disagree with their explanation...
I'll keep reading; if I find anything I'll post it.
If anybody has any suggestions, I would greatly appreciate them.
We had to do a great deal of Iperf experiments in a networking class about 3 years ago. I took a look at our reports (creepy :-) and the graphs we plotted contain the following information that might interest you:
2 parallel connections from a single Iperf client machine to a single Iperf server machine could hardly get above 80 Mbps in 10 seconds with a window size under 18 KByte. You need more than that to see good results, and bear in mind that the experiment was on a 100 Mbps LAN with only us playing around. So, no question about how bad it can get in the WAN in only 10 seconds.
Start the server and then start 2 clients on the client machine you are already using. Try to press Enter in the two client command windows as simultaneously as possible :-) (Still haven't found out how to extend the duration of an individual session.) Remember to add the window option (20K or more) to the end of the command: -w 20K
p.s. Just saw your response. I will think about it, but at first glance the results are pretty disappointing :-(
I can't believe it either. I have to lease a 100 Mb/s link to get 30 Mb/s of performance back. Wow...
FYI: I added the switch -t <seconds> to extend my session.
Thanks for your help. If I find anything else I will post it.
May I ask something else? Just to make sure we are not troubleshooting Iperf instead of network performance: have you tried a "heavy" file transfer from one site to the other over this connection? Do you see any performance issues in any of your activities besides the Iperf results?
So far the users are not complaining. They started with this link on day one, so I don't have a baseline to compare against...
I used Iperf on other sites and all my results are satisfactory, so I'm pretty sure it's not Iperf.
Maria, your personal opinion:
Is it common practice that an ISP would reduce throughput without advising their customers??
Can I jump in again?
By no means was I implying that your SP is limiting your traffic. What is the minimum committed bandwidth you have purchased? Let's call that X.
If you can repeatedly measure UDP throughput at or above X, your SP is delivering what you've paid for. TCP performance, as Maria suggested, is really a test of the PCs and their TCP stack tuning rather than of the circuit. Also note that some stacks behave differently with TCP when they see the target on a different subnet; that may not be your case, as yours is an L2 service from what I understand.
As for insinuating that an SP handles TCP traffic differently than UDP, that's a bit much; again, I only said it can be done, not that it is being done.
If I look at the UDP bandwidth, the SP is delivering what I paid for. But you have to agree that most of the applications we use today run over TCP.
Anyway, thanks for all your help, Maria included. I appreciate all of your input.
I'll be doing some more reading on this matter. If I find anything new I'll post it.
On the last-mile local loop (both ends), I believe the pipe size is guaranteed at whatever you subscribed to. But inside the MPLS network, with multiple classes of service over a common shared infrastructure, each with its own performance and bandwidth characteristics, you may not have subscribed to the highest class of service that guarantees bandwidth end-to-end.
Ok, Paolo said one thing I thought at some point but perhaps did not stress enough (so gets my 5 points, always a pleasure to see you around :-). You do get your bandwidth. UDP demonstrates that enough.
And since I'm bothering to post, one more thing to add: SPs are pretty careful with TCP traffic. It is usually legitimate traffic that they try not to touch even in a DDoS situation. UDP is more often the victim of strict policies: it is the more "hostile" traffic, doesn't know when and where to stop (and often does that on purpose). TCP, on the other hand, is usually a fair player: drop a few of its packets and it cooperates.
p.s. And now that I said "drops", make sure your LAN functions perfectly. A few errors every now and then can make your TCP sessions crawl. UDP, on the other hand, simply does not care.
p.s.2 To answer the question about throughput reduction:
If you mean a bandwidth reduction (downgrade), I have never seen such a thing. I was lucky enough to only see upgrades. In Greece, DSL came somewhat late and the upgrades were massive: one year you had some lousy international satellite link, the next a 155 Mbps circuit, then another, and now crazy gigabit rates. Every ISP or area is different, and each business faces different pressures. Sure, an ISP doesn't have to tell you about everything they do inside. But then, your interests are not all that conflicting: satisfied customers are a good thing for the SP and for the customer :-)
One last thing (until I say something else :-): you might get your bandwidth in the near future, when you actually need it for legitimate traffic rather than a sudden urge to test the connection. As I said previously, an ISP carefully monitors traffic trends. Upgrades happen when statistics indicate increasing demand that would otherwise go unsatisfied.
I agree completely.
It's still frustrating that I lose so much bandwidth compared to the links to Quebec City.
Anyway, thanks again for your insight.
I somewhat implied that your SP is not giving you what you have paid for; I was talking in general. It is a bit unrealistic for an SP to intentionally let your UDP reach such high rates and deny the same "favor" to your TCP traffic if bandwidth were an issue for them. So try to be patient and monitor the situation for some time. A small issue with errors on a circuit could make your TCP crawl while your UDP passes. The bandwidth is there for you.
Talking in general again: customer support might play it a bit cool with you, but all issues get considered. When you hang up, complaints are propagated and reach the poor guys in the back. I remember so many times breaking my head because some dial-up customer could not see site www.
You might be bumping up against TCP's BDP (bandwidth-delay product) limitation.
For TCP to push 100 Mbps across 23 ms requires a receive window of 287,500 bytes. If your receive window is smaller, your throughput is capped in the same proportion. E.g. a 64 KB window (the maximum if window scaling is not active or supported) allows only about 22 Mbps at 23 ms.
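The arithmetic behind those two figures works out like this (a quick sketch, using the 23 ms RTT measured to Toronto earlier in the thread):

```python
# TCP throughput is capped at roughly window / RTT.

RTT = 0.023        # 23 ms round trip to Toronto
LINK_BPS = 100e6   # 100 Mb/s link

# Window needed to keep a 100 Mb/s pipe full at this RTT:
needed_window = LINK_BPS / 8 * RTT
print(f"Window to fill 100 Mb/s at 23 ms: {needed_window:,.0f} bytes")  # 287,500

# Ceiling with a 64 KB window (the max without RFC 1323 window scaling):
window = 64 * 1024
max_mbps = window * 8 / RTT / 1e6
print(f"Max throughput with a 64 KB window: {max_mbps:.1f} Mb/s")  # ~22.8
```

That ~22 Mb/s ceiling is strikingly close to the 19-30 Mb/s TCP results reported above, which is why the BDP limit is the prime suspect here.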
Great information; I didn't know you could increase the window size beyond 64 KB by enabling window scaling.
I'm definitely going to look into it.
Thanks for passing that on; I'm sure other people will benefit from it.
You've been a great help!! Thumbs up!!
Some of the older TCP stacks don't support TCP window scaling (RFC 1323). With those that do, you're often still stuck with the pain of manual configuration.
For a quick test, try using a Windows Vista host as the recipient; out of the box it will try to use a huge receive window.
Correctly sized TCP receive windows are one technique often used by WAN acceleration products. Interestingly, even Cisco documents how to configure this for some of their products:
Calculating the TCP Buffers for High BDP Links
Cisco WAAS can be deployed in different network environments, involving multiple link characteristics such as bandwidth, latency, and packet loss. All WAAS devices are configured to accommodate networks with maximum Bandwidth-Delay-Product (BDP) of up to the values listed below:
• WAE-511/512 - Default BDP is 32 KB
• WAE-611/612 - Default BDP is 512 KB
• WAE-7326 - Default BDP is 2048 KB
If your network provides higher bandwidth or higher latencies are involved, use the following formula to calculate the actual link BDP:
BDP [Kbytes] = (link BW [Kbytes/sec] * Round-trip latency [Sec])
When multiple links 1..N are the links for which the WAE is optimizing traffic, the maximum BDP should be calculated as follows:
MaxBDP = Max (BDP(link 1),..,BDP(link N))
If the calculated MaxBDP is greater than the DefaultBDP for your WAE model, the Acceleration TCP settings should be modified to accommodate that calculated BDP.
Once you calculate the size of the Max BDP, enter that value in the Send Buffer Size and Receive Buffer Size for the optimized and original side on the Acceleration TCP Settings window.
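As a worked example of the formula above (the link list here is hypothetical, just reusing the bandwidth and RTT figures from this thread for illustration):

```python
# BDP [KB] = link bandwidth [KB/s] * round-trip latency [s], per the Cisco
# formula quoted above. MaxBDP over links 1..N is the largest per-link BDP.

def bdp_kbytes(link_mbps, rtt_ms):
    """BDP in KBytes for a link of link_mbps Mb/s and rtt_ms ms round trip."""
    kbytes_per_sec = link_mbps * 1000 / 8   # Mb/s -> KB/s
    return kbytes_per_sec * rtt_ms / 1000

# Hypothetical link set: (bandwidth Mb/s, RTT ms), e.g. Toronto and Quebec City
links = [(100, 23), (5, 17)]
max_bdp = max(bdp_kbytes(bw, rtt) for bw, rtt in links)
print(f"MaxBDP = {max_bdp:.1f} KB")  # 287.5 KB
```

A MaxBDP of 287.5 KB would exceed the 32 KB default of a WAE-511/512, so on that model the Acceleration TCP buffer settings would need adjusting; a WAE-611/612 (512 KB default) would cover it.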
Thanks again for taking the time.
After reading up on window scaling, I have one question:
To accommodate huge window sizes, do I need to modify any router configuration, or is modifying the workstation settings on both ends enough?
As for modifications to routers between the hosts, i.e. where the traffic transits the router, you need to ensure the queue on the outbound interface will also support the BDP. Otherwise the router might drop packets before the flow fully ramps up.
E.g. a 256 KB BDP would require a queue of about 171 packets (at 1500 bytes each). (NB: a single FIFO queue of that many packets will likely make for highly variable latency.)
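The packet arithmetic above, as a quick sketch (taking 256 KB as 256,000 bytes, which is how the ~171 figure falls out):

```python
# An outbound interface queue should hold roughly one BDP worth of packets,
# or drops during ramp-up will keep a single TCP flow from filling the pipe.

BDP_BYTES = 256_000   # 256 KB bandwidth-delay product
MTU = 1500            # bytes per full-size packet

queue_packets = BDP_BYTES / MTU
print(f"Queue depth needed: ~{queue_packets:.0f} packets")  # ~171
```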
As to end hosts, only the receiving host normally needs to be adjusted.
Since routers can themselves be receiving hosts (e.g. doing over-the-WAN flash upgrades via FTP or RCP), adjusting the router's receive window to its max (64 KB?) and enabling SACK can help a bit, although the physical flash write time seems to be more of a limiter on copy performance.
You're welcome. If you do set up a host on the far side with a larger receive window, let us know the results.
In case my prior post wasn't clear: if you tune a router for BDP, the resource allocation applies to all TCP flows across the link, i.e. you don't allocate per host.
If the router doesn't provide sufficient BDP buffer space, that doesn't mean multiple flows can't drive the link to 100%; it just means one TCP flow, alone, won't be able to.