
debug on CSS 11506

Marko Leopold
Level 1

Hello!

I need to know if there is a way to see content-related debug information in debug mode (llama) — specifically, output explaining why a flow is assigned to a specific content rule. Is there a command or some other way to see this?

Regards,

Marko


6 Replies

jason.espino
Level 1

Hello Marko,

Unfortunately, there is no debug command in llama that will show the information you're requesting (which content rule a connection flow matched). However, be advised that enabling any kind of debug output in llama, to your screen or to a buffer, can cause the CSS to lock up and cease to pass/process traffic. This depends on which debug you perform and how much traffic is flowing through the CSS. In previous firmware versions of the CSS you could use the flow options in llama mode to trace a connection from a specific source IP address as it establishes; newer versions did away with that feature.

There are a couple of things that could cause a connection (flow) to be forwarded to a specific content rule.

#1. Is there any advanced-balance method configured within the content rule the flow is getting "stuck" on?

#2. Do you have multiple Layer 7 rules configured on the CSS? If so, do you have "no persistent" on the content rules?

The "no persistent" command controls content persistence. By default (persistence enabled), the CSS balances on the first GET sent by the client. Once the client's connection has been placed on a content rule, the CSS will not re-match subsequent requests against a more specific URL/Layer 7 rule on the VIP.

#3. Is the connection (flow) coming from a single IP address using a proxy server?

#4. Is there any optimization enabled on the client side that is pushing multiple GET requests through the same/single TCP stream?

#5. Can you provide a screenshot of your configuration? Or can you provide the content rule configuration?

- Jason

Hello Jason!

First, thank you for your answer. And yes, I can provide a snapshot of our configuration, which is quite simple. The configuration to look at is this:

service webserver1_e (to webserver3_e)
  ip address 10.10.20.10
  protocol tcp
  port 8090
  keepalive type http
  keepalive uri "/probe.gif"
  keepalive frequency 10
  active

service webserver1_upload (to webserver3_upload)
  ip address 10.10.20.10
  protocol tcp
  port 8190
  keepalive type http
  keepalive uri "/probe.gif"
  keepalive frequency 10
  active

content Accelerator_e
  vip address 10.10.10.10
  protocol tcp
  port 8081
  url "/*"
  no persistent
  flow-timeout-multiplier 8
  primarySorryServer sorry
  add service webserver1_e
  add service webserver2_e
  add service webserver3_e
  active

content Accelerator_e
  vip address 10.10.10.10
  protocol tcp
  port 8081
  url "/upload/*"
  no persistent
  flow-timeout-multiplier 8
  primarySorryServer sorry
  add service webserver1_upload
  add service webserver2_upload
  add service webserver3_upload
  active

That is the main part of the configuration. It means that when the user requests something matching URL "/*" he is forwarded to webserverx_e (TCP 8090), and when he requests something matching URL "/upload/*" he is forwarded to webserverx_upload (TCP 8190). On the path to the webservers we have a firewall. This firewall sees packets containing the URI "/upload/..." being forwarded to TCP port 8090. The question is whether the load balancer is choosing the wrong content rule here, and how can we see why it decides to forward the packets the "wrong way"?

Kind regards,

Marko

This will happen if a flow goes idle.

Your idle timeout is 16 x 8 = 128 seconds.

More or less 2 minutes.

This is quite low.

Increase the flow-timeout-multiplier to 80.

This should at least reduce the chance of seeing the problem.
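As a sketch, using the rule names from the configuration posted earlier in the thread, the change amounts to raising the multiplier on both content rules (16 s base x 80 = 1280 s, roughly 21 minutes of idle time before the CSS stops parsing the flow):

```
content Accelerator_e
  flow-timeout-multiplier 80
  active
```

The same change would be applied to the "/upload/*" rule.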

Gilles.

Hello Gilles!

Can you explain your hint in a bit more detail? In fact, why should the CSS forward the packet to the wrong content rule when the flow has timed out?

HTTP/1.1 uses persistent connections.

So, one TCP connection carries multiple HTTP requests.

The CSS parses the TCP connection to locate each HTTP request and select the corresponding content rule.

This works fine, except if your connection becomes idle.

In this case, the CSS considers the connection as *dying* and stops parsing it for new HTTP requests.

So, if after being idle you go to a different rule through the current TCP connection, the CSS does not see it and keeps forwarding the traffic to the previously selected server.

In conclusion, increasing the idle timeout will fix the problem.
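To illustrate with a hypothetical request sequence (the paths are made up; the ports are from the thread's configuration), both requests travel on one persistent HTTP/1.1 connection, and the second only reaches the "/upload/*" rule if the CSS is still parsing the flow:

```
GET /index.html HTTP/1.1          <- matched by url "/*", flow pinned to port 8090
Host: 10.10.10.10

[connection idle longer than 16 x flow-timeout-multiplier seconds]

GET /upload/file.bin HTTP/1.1     <- no longer parsed; still sent to port 8090
Host: 10.10.10.10
```

This is why the firewall sees "/upload/..." requests arriving on TCP 8090: the CSS stopped re-inspecting the idle connection.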

I think I provided this explanation around a million times in this forum.... :-)

G.

Hello Gilles!

Thank you for your answer. We have applied a higher flow-timeout-multiplier now and it works; the number of wrongly sent packets has decreased a lot. The goal now is to get this number as low as possible.

Kind regards,

Marko