ACE behind Reverse Proxy - performance issue

stevens_jj
Level 1

Hi,

  I've got a working config to accommodate the required use of reverse proxy servers in front of my application servers.  Traffic comes into the front ACE, where I insert a header "SRCIP" containing the original client IP address.  That header is preserved through the reverse proxy servers and is then inspected on the back ACE to create a sticky entry binding that SRCIP to a given application server.  Using the reverse proxies appears to require the persistence-rebalance option; otherwise, traffic gets stuck to the wrong app server.  The application functions perfectly with this config; however, there is a severe performance impact.  Using LoadRunner, we see response times go from 1.5 seconds to 16 seconds for the same transactions when comparing this config to a previous one that used static sticky entries to bind the reverse proxies to the app servers.

Question:  Is there a better way to do this while remaining dynamic, or some way to optimize this approach to reduce the performance impact?

Relevant config for both ACEs below:

!!!!!!!!!!!!!!!!!
!!Front ACE

parameter-map type http HTTP_REBAL
  persistence-rebalance
  length-exceed continue

sticky ip-netmask 255.255.255.255 address source ALPHA-SRCIP-sticky
  timeout 60
  replicate sticky
  serverfarm ALPHA

policy-map type loadbalance first-match vip-R1A-ALPHA
  class class-default
    sticky-serverfarm ALPHA-SRCIP-sticky
    insert-http SRCIP header-value "%is"

policy-map multi-match PREP-VIP
  class VIP-ALPHA-R1A
    loadbalance vip inservice
    loadbalance policy vip-R1A-ALPHA
    appl-parameter http advanced-options HTTP_REBAL
    ssl-proxy server SSL_ALPHA_R1A

!!!!!!!!!!!!!!!!!
!!Back ACE

parameter-map type http HTTP_REBAL
  persistence-rebalance
  length-exceed continue

sticky http-header SRCIP ALPHA-SRCIP-sticky
  timeout 60
  replicate sticky
  serverfarm coresoms-ALPHAfarm

class-map type http loadbalance match-all SRCIP-MAP
  2 match http header SRCIP header-value ".*"

policy-map type loadbalance first-match vip-lb-ALPHA
  class SRCIP-MAP
    sticky-serverfarm ALPHA-SRCIP-sticky

policy-map multi-match lb-vip
  class VIP-ALPHA
    loadbalance vip inservice
    loadbalance policy vip-lb-ALPHA
    appl-parameter http advanced-options HTTP_REBAL

1 Accepted Solution

Accepted Solutions

Hi Joseph,

To achieve this, you need to do stickiness based on some L7 parameter (either the header you are currently using or a cookie), so whatever you do, you will have to use persistence rebalance.

I have one possible theory for your issue.

The ACE has two different ways of treating L7 connections internally, which we call "proxied" and "unproxied". In essence, in proxied mode the traffic is processed by one of the CPUs (normally to inspect or modify the L7 data), while in unproxied mode the ACE sets up a hardware shortcut that forwards traffic without the need for any further processing.

For an L7 connection, the ACE will proxy it at the beginning and, once all the L7 processing has been done, unproxy it to save resources. Before it goes ahead with the unproxying, it needs to see the ACK for the last piece of L7 data sent. In an Internet environment, this wait can introduce around 100-200 ms of delay for each HTTP request, which can add up to a very large overall delay. By default, if the ACE sees that the RTT to the client is more than 200 ms, the connection will never be unproxied, precisely to avoid these delays, so I think we could fix your issue by tweaking this threshold.
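If I understand the behaviour correctly, the unproxy decision can be sketched roughly like this (illustrative Python, not actual ACE code; the 200 ms default is the threshold mentioned above):

```python
# Rough sketch of the ACE unproxy decision (illustrative, not real ACE code).

DEFAULT_RTT_THRESHOLD_MS = 200  # default "wan-optimization rtt" threshold

def should_unproxy(client_rtt_ms, l7_processing_done,
                   threshold_ms=DEFAULT_RTT_THRESHOLD_MS):
    """Decide whether a connection can leave the CPU path for the HW shortcut."""
    if not l7_processing_done:
        return False              # still inspecting/rewriting L7 data
    if client_rtt_ms > threshold_ms:
        return False              # unproxying would cost an ACK round-trip
    return True                   # safe to set up the hardware shortcut

# With the default threshold, a high-RTT (250 ms) client stays proxied:
assert should_unproxy(250, True) is False
# Setting the threshold to 0 keeps every connection proxied,
# avoiding the per-request ACK wait:
assert should_unproxy(50, True, threshold_ms=0) is False
```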

From what you described, I assume you don't have many connections (because they all come through a proxy) and that each connection carries a lot of HTTP requests. With that in mind, I would suggest setting the threshold to 0 to keep connections always proxied. To do this, you would need to configure a parameter map like the one below and add it to your VIP:

    parameter-map type connection KEEP_PROXIED
      set tcp wan-optimization rtt 0

Even though this setting may fix your issue, it also has some drawbacks. The main one is that the ACE20 only supports up to 512K simultaneous L7 connections in the proxied state (this count also includes the connections towards the servers, so in practice it is about 250K client connections); if the number of simultaneous connections reaches that limit, new connections will be dropped. The second issue, although less impactful, is that the maximum number of connections per second supported also goes down slightly due to the increased processing needed.

I hope this helps

Daniel


4 Replies

stevens_jj
Level 1

Anyone?


Thanks - going to try this now.  I'll post results shortly.

Looks like this helped greatly. The response time went from 16 seconds down to about 4. It's still higher than with the original config, but a lot closer. I'm going to try putting the same config into the front ACE to see if it improves things further.

Thanks !
