We have a pair of Active/Standby ACE modules doing SSL offload with 10K conn/s licenses. These have been working fine for over a year, but recently we are getting occasional failures. When establishing a connection to the VIP you get a full TCP handshake and are then disconnected immediately. No certificate is passed, and the "show resource usage" counters do not indicate that it is denied due to license issues. Nothing is logged. "show stats crypto server" does show a failed negotiation, and some of the "show np 1 me-stat" commands indicate failures, though I'm having trouble interpreting the results. The only suspicious thing I can see is that in "show np 1 me-stat -scrypto", nitrox_contexts_in_use seems to flutter between 99,999 and 100,000 during the times we are having the problems.
The conn/s isn't going much above 800 (occasionally bursting up to 1200). None of the "show resource usage" stats seem to be anywhere near capacity (the boxes do about 500Mb/s peak, fairly continuously). System memory looks fine too.
We are running A2(2.3); I couldn't see anything in the 2.4 release notes that indicated any known related issues.
Any help would be much appreciated. I can put output up here, but I'll have to sanitize it first (our "security" folks insist).
We actually tracked this down to issues with concurrent sessions. We were hitting the 200,000 concurrent connections limit on the module. TAC have confirmed that this limit is hard and there is no workaround. We have moved this particular traffic back onto the servers. The traffic causing the issue was actually Outlook Anywhere. It holds open large numbers of HTTPS connections per client, so although our TPS is relatively low, our concurrency is unusually high.
Unfortunately "show resource usage" doesn't include a stat for concurrent connection (probably becuase it's not a controllable resource). show np 1 me-stat -scrpyto shows you the number of active nitrox contexts, the limit is 100,000 per NP, affter that, connection will get disconnected and "failed negotiations" will be registered.
VMware Trunk Port Group is supported from ACI version 2.1
VMM integration must be configured properly
ASA device package must be uploaded to APIC
ASAv version must be compatible with ACI and device package version