Unexpected behaviour

Djule2804
Level 1

Hi,

I'm running tests on a test platform, trying out some QoS policies. I'm testing my network with WFQ as the scheduling algorithm. I run an FTP transfer across the network and overload it with traffic created by an IXIA traffic generator.

I use shaping on my FastEthernet interface to limit the output bandwidth, so that the output traffic is scheduled by the WFQ algorithm.

I started by limiting the rate to 2 Mbps and launched my FTP transfer, which used all the available bandwidth. Then I overloaded the network with a 12 Mbps flow created by the IXIA traffic generator. The result was as predicted, i.e., the two flows shared the available bandwidth between them

(1 Mbps for the FTP transfer and 1 Mbps for the IXIA flow).

I repeated the test with shaping values of 4 Mbps, 6 Mbps, and 8 Mbps; the results were as predicted.

The problem begins when I run the test with a 10 Mbps shaping value. The result is completely unexpected: the IXIA flow uses 9 Mbps of the available bandwidth and the FTP flow falls to 1 Mbps.

I get exactly the same result when I use a 10 Mbps FastEthernet interface with fair queuing enabled.

I can't explain this behaviour of the FTP traffic. Does the problem come from the Cisco router and the WFQ algorithm? Or from the FTP software or the TCP stack?

Has anyone seen this strange behaviour before?

Thank you very much for your help.

PS: I've attached a screenshot of my test results (Wireshark graph).

11 Replies

jwdoherty
Level 1

Could you post your config?

I use the most minimal configuration. I removed all policy-maps; I just use shaping on the FastEthernet interface.

!
interface FastEthernet0/0
 ip address 192.168.1.1 255.255.255.0
 ip pim dense-mode
 duplex auto
 speed auto
 no keepalive
!
interface FastEthernet0/1
 ip address 192.168.2.1 255.255.255.0
 ip pim dense-mode
 duplex auto
 speed auto
 traffic-shape rate 2000000 50000 50000 1000
!

Is your non-FTP traffic TCP?

My other flow is created by the IXIA traffic generator; I can create any sort of traffic. I created a 12 Mbps UDP flow, but I also tried a 12 Mbps TCP flow, with the same result.

When mixing TCP and UDP, I'm not surprised that the UDP grabs most of the bandwidth. This is because TCP slows its send rate when there are drops, where most UDP senders don't. If true, one wonders why you only see this effect at the higher shaping settings. My guess is it might be caused by slow start at the higher settings vs. fast retransmit at the lower settings (i.e., multiple packet drops as the congestion window opens vs. single packet drops).

Seeing the same effect when using your traffic generator to make TCP traffic doesn't make sense unless the generator isn't following the rules for TCP backoff. If those rules aren't being followed, the generated TCP traffic behaves like UDP.

Some things you could try:

Instead of your traffic generator, try another TCP source. I would suggest a tool like TTCP.

While the traffic is flowing, see if you can see the flow queues ("show traffic-shape queue"?). If possible, look at how deep the queues get, how large they are permitted to be, and what the drop rates appear to be. Especially note whether the drop percentage seems different at the different shape rates for the FTP flow. (A command sketch follows.)
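Something like this, from memory; the exact syntax can vary by IOS version, so double-check on your box:

show traffic-shape queue FastEthernet0/1   (per-flow queues held by the shaper)
show traffic-shape statistics              (shaper queue depth and drop counters)
show queueing fair                         (WFQ parameters when fair-queue is on the interface)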

I would be interested in what you find.

Hi, thank you for all your responses!

I spent all day trying to find the solution to my problem. I looked at the active flow queues during my tests, and I could see that the active queue for the FTP traffic disappeared when I shaped to 10 Mbps, but as I said, the FTP transfer continued at a lower rate.

I tested many things over the following days and kept running into trouble with the WFQ algorithm on the 10 Mbps interface.

Here is another problem I ran into:

I created one 12 Mbps UDP flow with the IXIA traffic generator, with UDP src/dst port 11. There was no problem. I created a second 12 Mbps UDP flow with UDP src/dst port 22. Everything was fine; the two flows each got half of the available bandwidth. Then I created a third flow with UDP src/dst port 33, and suddenly my Telnet session was broken by this last flow. I suppose the packets from the Telnet session and the third UDP flow must share the same queue.

So, still incomprehensible behaviour from this algorithm.

So, here is what I conclude from my test series:

I think the WFQ algorithm doesn't work on fast interfaces such as a 10 Mbps FastEthernet interface. My Cisco 2600 routers aren't powerful enough to handle the WFQ algorithm on a fast interface.

Do you think that could be correct?

Yes, the problem you saw during your three-generated-flows test might be that you're pushing more traffic than the box can handle.

I haven't asked which 2600 model or which IOS version you're using. The entry models are rated at about 15 Kpps, good for 10 Mbps with 64-byte packets, while the high-end model hits 70 Kpps, good for about 45 Mbps.

If your traffic generator can size packets as you desire, you might see a different CPU impact for the same Mbps rate with different packet sizes, i.e., less CPU impact with larger packets.

The Telnet traffic shouldn't have shared the same flow queue as your third test stream, or any other test stream. More likely you pushed the CPU to 100%. Did you note the CPU during the two-generated-flows test?
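To check, run something like this while the flows are going (verify against your IOS):

show processes cpu            (current 5-sec/1-min/5-min CPU utilization)
show processes cpu history    (CPU graph over recent history, if your IOS supports it)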

If you're doing "real" WFQ, i.e. using the fair-queue command on an Ethernet interface, it doesn't work well at higher speeds. Try CBWFQ instead, if your IOS supports it. Also ensure, if supported, that CEF is active.

E.g.:

policy-map CBWFQ
 class class-default
  fair-queue
!
interface FastEthernet0/1
 service-policy output CBWFQ

(Note the command is "service-policy", and CEF is enabled globally with "ip cef".)
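Once it's applied, you can confirm it's taking effect with:

show policy-map interface FastEthernet0/1   (per-class matches, queue depth, drops)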

If I'm reading your remarks correctly, the two 12 Mbps flows each got half the bandwidth? I.e., one didn't fall back the way FTP did against one generated flow? If true, then what I posted earlier about FTP competing against simulated traffic might be correct.

At some point, though, as you may have already found, you can overrun the box. Consider what these small routers do best: handling T-1/E-1 WAN links. They really don't have the power to handle high-speed Ethernet, which is why Cisco has a nice portfolio of LAN routers.

Thank you for all your responses, Joseph.

I checked the CPU as you suggested, and you were right: the Telnet traffic appearing to share the same flow queue as my third stream was due to a lack of CPU resources.

So, in this case, can we protect the router against this sort of dangerous flow?

Should we do traffic policing on the input interface in order to drop flows that could paralyze the router?

For my second problem (FTP traffic), I have some news:

I ran exactly the same test as before (shaping to 2 Mbps, then 8 Mbps, then 10 Mbps), but this time I set the interface speed to 100 Mbps (instead of 10 Mbps), so that the bandwidth limit set by traffic shaping doesn't reach the interface's physical limit.

And this time, my FTP flow doesn't collapse under 10 Mbps traffic shaping. The two flows (UDP and TCP) get an equal share of the available bandwidth. So I still can't explain this TCP behaviour.

So, would that mean that WFQ applied directly on the interface (with the fair-queue command) is unsuitable for TCP traffic?

Your question about protecting your router from being overrun falls into the area of denial-of-service (DoS) attacks, which can sometimes be very hard to prevent. In this case it seems it was just too much "normal" traffic, more than the box could handle. Your idea of using a policer to prevent this might work; a sketch follows.
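A minimal sketch of an input policer. The class-map/ACL names and the 8 Mbps rate are just illustrative, and I'm assuming the generator traffic enters on Fa0/0:

class-map match-all GENERATOR
 match access-group 101
!
policy-map PROTECT-IN
 class GENERATOR
  police 8000000 conform-action transmit exceed-action drop
!
access-list 101 permit udp any any
!
interface FastEthernet0/0
 service-policy input PROTECT-IN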

There are other DoS attacks that can hit some routers using surprisingly low-bandwidth data streams. The attack vector is often the "control plane". Cisco has some great info and recommendations on avoiding these situations.

As to your last tests: interesting results, but I've become confused. Your original post didn't mention running the 100 Mbps port at 10 Mbps. I also need you to clarify whether your fair queuing was via fair-queue on the interface, GTS (as in your posted config), or fair queue within CBWFQ.

My guess, however, is that if you set shaping near or at the physical interface speed, you actually had queues forming in two places: one at the software shaper and one on the physical interface. The latter, by default on Ethernet, would have been FIFO, mixing all your traffic. In the same FIFO queue, I would expect a true TCP flow to back off relative to non-responsive traffic flows. You can inspect both places, as sketched below.
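To see both queueing points (interface name taken from your posted config):

show traffic-shape queue FastEthernet0/1   (the software shaper's queue)
show interfaces FastEthernet0/1            (the "Output queue" line shows the interface FIFO, 40 deep by default)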

In fact, here is my test network:

....... LAN1 -------------- WAN ------------- LAN2 .......

PCs-----Cisco2600------Cisco2600------Cisco2600-----PCs

LAN1 (FastEthernet, 100 Mbps) --- WAN (FastEthernet, 10 Mbps) --- LAN2 (FastEthernet, 100 Mbps)

So I simulated the WAN with a core router, linked to the two other routers over 10 Mbps FastEthernet interfaces. My congestion point was at the edge between the LAN and the WAN.

I had to test some scheduling algorithms, so I tested the WFQ algorithm by enabling fair-queue on the 10 Mbps interfaces of the two edge routers.

As I said at the beginning, I obtained an unexpected result when I injected the UDP flow alongside the FTP flow (FTP fell to 1 Mbps instead of 5 Mbps).

To find an answer to this problem, I decided to test WFQ at lower speeds. I left the default FIFO queuing algorithm on the interfaces and applied traffic-shape rate 2000000 on these same interfaces.

I knew that in this case my output traffic would be rate-limited and the shaper's queues would be managed by the WFQ algorithm.

I got the results I described: everything was fine except when I set the traffic shaping to 10 Mbps, the physical limit of my interface. In that case I obtained the same result as in my first test (fair-queue on the interface).

So my last test was to change the FastEthernet interface speed: I set it to 100 Mbps (instead of 10 Mbps) so that the bandwidth limit set by traffic shaping didn't reach the interface's physical limit.

I obtained the result I described, so it proves that WFQ can work on a fast interface.

But I have to deliver a report about WFQ behaviour, and I can't explain this TCP behaviour.

So, can we conclude that WFQ applied directly on the interface (with the fair-queue command) is unsuitable for TCP traffic?

If I understand this correctly, you set the LAN1-WAN-LAN2 connections to 10 Mbps and the PC-LAN1 and LAN2-PC connections to 100 Mbps.

Then you set GTS on LAN1's 10 Mbps connection at 2, 4, 6, 8, and 10 Mbps. All worked as expected except the 10 Mbps test, correct? If true, and if the 10 Mbps interface was running default FIFO, I would expect FTP to have a problem, as explained in my prior post. (For the 10 Mbps test, you can do a show int for the FastE and see if there are drops or packets in the FIFO queue. Try increasing the output queue to the maximum setting and see if the results change; a sketch follows.)
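A sketch of what I mean (4096 is the usual maximum hold-queue depth, but check what your IOS accepts):

show interfaces FastEthernet0/1   (look at "output drops" and the "Output queue" line)
!
interface FastEthernet0/1
 hold-queue 4096 out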

Then you removed(?) the shaper and tried "fair-queue" on the 10 Mbps interface, but again saw the same unexpected results you saw with the 10 Mbps shaper, correct? If true, it could be, as I mentioned, that "fair-queue" doesn't work well at higher port speeds. Or it might be that the default WFQ settings aren't sufficient. If the latter, try increasing the congestive discard threshold (the first parameter) to its maximum setting, as sketched below, and see if there's a difference in behavior.
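For example (4096 is the maximum congestive discard threshold on most IOS versions I've seen; the default is 64):

interface FastEthernet0/1
 fair-queue 4096

show queueing fair   (to confirm the new threshold)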

Your last test set the port to 100 Mbps, and used a shaper set to 10 Mbps, but you did not see a problem.

So far, these results show FQ within GTS works as expected up to 10 Mbps, as long as the shaper's limit isn't the same as the physical interface rate. They also show the default WFQ may not work well at 10 Mbps.

One more test you could try, besides the ones I suggested above, is FQ within CBWFQ, if your IOS supports it. See my earlier post for a sample config.
