Cisco Support Community

New Member

CTI Ports going OOS unexpectedly

IPCC Enterprise 6.0

CCM 4.1(3)sr2

CRS 3.5(3)sr2, 30 Trans RPs, 90 CTI ports

Very shortly after our contact center opens, the JTAPI subsystem on CRS goes into PARTIAL_SERVICE, and we begin seeing CTI ports go OOS with a CALL_CONTROL_INVAL_STATE exception. Calls seem to be reassigned to a new port and do not fail (provided there are good ports available), but as the day goes on, more and more ports go OOS. The ports always stay registered in CCM and remain associated with the proper IVR JTAPI user. They eventually come back into service, only to fail again on the next call they take. During peak times, however, they don't recover quickly enough, and the number of available ports starts to dive. We have worked around this by issuing restarts to the ports from CCM. That seems to recover them at least momentarily, but they typically go right back OOS after receiving a call.

Has anyone seen a symptom like this, or have any suggestions for dealing with it? I am working it with TAC, but we're spinning our wheels somewhat.

Thanks for any help!!!

4 REPLIES
New Member

Re: CTI Ports going OOS unexpectedly

I have the same issue at a customer site, but it only appears to be happening on 2 of their 80 ports. Was TAC able to find a fix for you?

Thanks

New Member

Re: CTI Ports going OOS unexpectedly

Our issue turned out to be a bad interaction with a Q.SIG PRI to an Avaya PBX, which was sending additional information that CRS couldn't deal with.

We were able to turn that information off on the Avaya side, and the CRS/JTAPI engine was happy again.

New Member

Re: CTI Ports going OOS unexpectedly

We have the same issue with our auto-attendant ports. For 2 days in a row now, the JTAPI subsystem has shut itself down overnight, so the auto attendant is not working come morning. The fix, for now, is stopping and starting the App Admin Engine, which brings the JTAPI subsystem back online. It had been working well for 8 months; this just started happening yesterday.

We also have an issue (an older one) where, at random times, a caller gets a busy signal, but if they call back it has usually freed itself. We have 10 licensed ports, and from the session monitor it does not appear that we ever reach 10 incoming sessions. What else would cause the auto attendant (or the trigger) to reject calls? The trigger is set to accept 25 sessions, but it never gets close to that before calls are rejected. Does it just need more IVR ports?

Sorry that these are 2 separate issues, but they both came to a head today, and both are linked to JTAPI.

Any ideas?
