Cisco Support Community

New Member

UCCX Calls failing with CTI Timeout Errors

I am running UCCX Premium in an HA environment, with a 3-node CUCM cluster running version

We are using a Cisco 2951 as a CUBE with 100 SIP ports and automatic routing to an additional 3 PRIs, for a total of 169 channels.

Our customer is a power company using UCCX for their call center and IVR applications (outage reporting). We are doing a back-end integration into Oracle for gathering and reporting the power outage data.

The issue I am having is that when there is a power outage, within seconds we receive 169 calls, filling all of the PSTN trunks and keeping them full for a long time (greater than one hour).

The UCCX is answering all of the calls and processing the caller input.


We place the caller on hold (MoH) to do the database lookup (a complex lookup that can take 5-10 seconds). After the lookup, when we try to unhold the call in the script, we get the following error message:

HELD; nested exception is: Cti request timed out

This also happens when the script tries to transfer the call to an agent, causing the agents to go into Reserved mode; they then have to close CAD and log back in, because the script has aborted the caller.

I have a case open with Cisco TAC on the CUCM side about the CTI timeout, but I also want to be able to deal with this in the script. The exception is not listed as one that can be caught.

I want to be able to catch the exception, delay, and then retry some time later. Will the script be able to catch it? Or, since the exception is happening outside of the script, can anything be done to keep from dropping the caller?

Can anyone suggest a way to test high call volume?

Need help as the issue only shows up at the worst possible time.






Hi, can you please enable SS_TEL debugging and post the logs here?

I am afraid one is not a parent of the other; there is a slight chance that something is rethrowing the latter as an Exception recognised by UCCX, but I would not count on it.
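If the timeout does surface as something the script can catch (for example via an On Exception Goto step), the delay-and-retry idea from the question could look roughly like this in plain Java. This is only a sketch: the check against the "Cti request timed out" message text is an assumption based on the error quoted above, and in a real UCCX script the `Op` callback would be replaced by the actual unhold or transfer step.

```java
public class CtiRetry {
    public interface Op { void run() throws Exception; }

    // Retry an operation (e.g. an unhold or transfer attempt) up to
    // maxAttempts times, backing off between attempts, but only when the
    // failure looks like the CTI timeout quoted in this thread.
    // Returns true on success, false when giving up.
    public static boolean withRetry(Op op, int maxAttempts, long baseDelayMs)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                op.run();
                return true;
            } catch (Exception e) {
                String msg = String.valueOf(e.getMessage());
                boolean ctiTimeout = msg.contains("Cti request timed out");
                if (!ctiTimeout || attempt == maxAttempts) return false; // give up
                Thread.sleep(baseDelayMs * attempt); // simple linear backoff
            }
        }
        return false;
    }
}
```

The point of the message check is to retry only the timeout case and fail fast on anything else, so a genuinely dead call is not kept alive by blind retries.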

What you are writing about is actually very interesting and I will definitely test this in my lab. I believe what happens is that when you place the call on hold you actually leave the CTI Port, and when you unhold it there's a chance that no CTI Port is available, so the JTAPI request fails or times out.

Did Cisco TAC come up with any explanation yet?


New Member


Thanks Gergely, for responding.

We have overwritten our logs, so I can't provide the logs from the last time we had this occur (09/29/2014). TAC transferred the logs via WebEx, and they are not attached to the case.

I did forget to mention that every time this happens, we get a Code Yellow on the CUCM node that is the main call processor for the UCCX.

TAC has indicated that the CUCM is unable to write all of the logs, CDRs, etc. under the high call volume. I am following up behind another Cisco partner, and I do not believe the UCS servers were sized correctly for my customer's call volume. They are running Cisco UCS C220 M3 LFF servers with 7.2K RPM SATA drives. We have disabled virtually all logging on the CUCM to lower the impact of all of the holds, transfers, etc.

The previous partner had written a lot of custom Java for the database and back-end API integration. This took a very long time to execute on each call, 20-30 seconds or longer per call. I have rewritten the scripts to do all of the integrations via subflows and reduced this time to 5-10 seconds.


In the meantime, I have removed all instances of placing calls on hold by playing prompts instead, where I can. I have also combined scripts to reduce transferring from one script to another. So I have lessened holds and transfers by about 60%. We have been unable to tell whether this is working or not, as we have to wait for the next large power outage.

The one place where I am unable to remove placing the caller on hold is when we do the database dips or call the back-end API (HTTP). Any suggestions?

Any help, pointers, information would be greatly appreciated as the customer is getting very frustrated.

I was under the impression that when the CTI port placed the caller on hold, it was still reserved for that contact to return from hold or transfer?








Hi, thanks for this detailed explanation. I can understand your struggle, and I believe you have already heard this a dozen times.

There's no good workaround for this, but here's what I would suggest: forget the Database Subsystem altogether. It's very powerful, as we all know, but in this situation it places unnecessary load on the UCCX. I would build a light HTTP-to-DB proxy somewhere very close to the UCCX server and use it to persist data asynchronously (if it's not absolutely necessary to tell the caller whether the DB operation was successful or not). I would not even bother sending a nice XML document back to UCCX, just a plaintext document to see whether the HTTP "proxy" is alive or not.
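A minimal sketch of that HTTP "proxy" idea in plain Java, using the JDK's built-in `com.sun.net.httpserver`. The `/report` path, the port, and the query-string payload are illustrative assumptions, and the JDBC insert is stubbed out with a `println`:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OutageProxy {
    // The queue decouples the HTTP reply from the slow DB write.
    static final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Accept a record, queue it, and answer immediately with plaintext.
    static String handle(String record) {
        pending.offer(record == null ? "" : record);
        return "OK";
    }

    public static void main(String[] args) throws Exception {
        // Background writer: this is where the real JDBC insert would live.
        Thread writer = new Thread(() -> {
            while (true) {
                try {
                    String record = pending.take();
                    System.out.println("persisted: " + record); // stub for the insert
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        writer.setDaemon(true);
        writer.start();

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/report", (HttpExchange ex) -> {
            byte[] body = handle(ex.getRequestURI().getQuery()).getBytes("UTF-8");
            ex.getResponseHeaders().set("Content-Type", "text/plain");
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.start();
    }
}
```

The worker thread drains the queue, so the HTTP reply never waits on the database; the UCCX script only ever sees a fast plaintext "OK".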

The key is to kick the call through the system as quickly as possible.

I think the CTI port is still reserved but the JTAPI handle is freed up - again, this is just a wild guess. I will take a look at this but I will need to have some time to explore the internals.


New Member


Thanks Gergely


We will look into the proxy. The caller needs to be informed whether the information they entered was correct or not, so we must have a return from the DB or the API.

So, from a load perspective on the UCCX, would the HTTP proxy be less of a load than calling the DB directly? I have never done a comparison. I know that the DB calls are much faster than the HTTP API calls I am doing.
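One way to keep the confirmation the caller needs while still returning plain text: agree on a trivial one-line reply convention, say "OK &lt;detail&gt;" or "ERR &lt;reason&gt;" (both hypothetical, not anything UCCX-specific), and parse it on the script side with a few lines of Java:

```java
public class ProxyReply {
    public final boolean ok;
    public final String detail;

    private ProxyReply(boolean ok, String detail) {
        this.ok = ok;
        this.detail = detail;
    }

    // Parse a one-line plaintext body such as "OK 12345" or
    // "ERR no-such-account". Anything unrecognised counts as a failure,
    // so the script can route the caller to a safe prompt.
    public static ProxyReply parse(String body) {
        if (body == null) return new ProxyReply(false, "empty");
        String trimmed = body.trim();
        if (trimmed.startsWith("OK")) {
            return new ProxyReply(true, trimmed.substring(2).trim());
        }
        return new ProxyReply(false, trimmed.replaceFirst("^ERR\\s*", ""));
    }
}
```

The script then branches on `ok` to tell the caller whether their entry was accepted, without ever having to unpack an XML document.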


By the way, I do appreciate all of your posts on this forum.






Danny, there's nothing wrong with the DB subsystem. There's nothing wrong with JDBC, and naturally, there's nothing wrong with pooled database connections. However, under these circumstances, watching a pool of connections, creating new ones, and destroying abandoned or idle ones is something I would happily give up, even at the higher price of creating HTTP connections.

Some programmers argue that creating an HTTP (and thus TCP) connection is slower than sending a simple message over an already-established connection (that is, one of the connections in the JDBC pool). That's absolutely correct. However, it is still simpler, uses fewer CPU cycles, and requires less RAM (and it's most certainly easier on the Garbage Collector) than the DB connection. Yes, I know, now I am supposed to give you the numbers, and I will try to do the profiling as soon as I have the opportunity.
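On the UCCX side, each interaction then becomes a single short-lived HTTP fetch with hard timeouts, roughly like this in plain `java.net` (the URL and timeout values are placeholders, and a real script would use its own HTTP step rather than raw Java):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class ProxyClient {
    // Fetch a short plaintext reply with hard timeouts, so a sick
    // backend can never park the caller indefinitely.
    public static String get(String url, int timeoutMs) throws Exception {
        URLConnection conn = new URL(url).openConnection();
        conn.setConnectTimeout(timeoutMs); // give up fast if the proxy is down
        conn.setReadTimeout(timeoutMs);    // and if it hangs mid-reply
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) sb.append(line);
            return sb.toString();
        }
    }
}
```

The two timeouts are the important part: the script waits a bounded amount of time for the lookup and can fall back to a prompt instead of holding the CTI port open while a request hangs.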


P.S.: I can help you with creating that HTTP "proxy"; the Grails framework that I use is ideal for this task.
