Cisco Support Community

New Member


Hello everyone,

1) I am looking for an expected release date of UCCX 8.5(1.11003.1) on CCO; any ideas? I also have a few questions around it.

2) Does TAC already have access to it if the public release is going to be delayed?

3) If the public release is going to take time, can leaving the system running with this issue cause any adverse impact?

The following is the reason why I need this version.

I have an HA setup of UCCX servers running 8.5.1 SU2.

I've got the following alert in RTMT today:

At Thu Mar 22 10:11:17 PDT 2012 on node, the following SyslogSeverityMatchFound events generated:

SeverityMatch : Critical

MatchedEvent : Mar 22 10:11:09 MIGUCCX01 local7 2 : 9515: MIGUCCX01: Mar 22 2012 17:11:09.947 UTC : %UC_HR_MGR-2-UCCX_HISTORICAL_DATA_WRITTEN_TO_FILES: %[Module Name=HISTORICAL_REPORTING_MANAGER][Module Failure Name=HISTORICAL_DATABASE][UNKNOWN_PARAMTYPE:Module Run-time Failure Cause=3][Module Failure Message=Historical database queue is full. Historical data saved in files. Queue size= 1 Total number of lost records= 1 Please check dat][AppID=Cisco Unified CCX Engine][ClusterID=][NodeID=UCCX01]: Historical Data is being written to files please check the DB connectivity and availability AppID : Cisco Syslog Agent ClusterID :

NodeID : UCCX01

TimeStamp : Thu Mar 22 10:11:10 PDT 2012
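For tracking how much historical data is being dropped, the lost-record count can be pulled out of a captured syslog line with standard text tools. A minimal sketch (the sample line below is abbreviated from the alert above; in practice you would feed in lines captured from RTMT or syslog):

```shell
# Parse the lost-record count out of a captured UCCX syslog line.
# The sample line is abbreviated from the RTMT alert quoted above.
line='%UC_HR_MGR-2-UCCX_HISTORICAL_DATA_WRITTEN_TO_FILES: Historical database queue is full. Queue size= 1 Total number of lost records= 1'

# Extract the number that follows "Total number of lost records= ".
lost=$(printf '%s\n' "$line" | sed -n 's/.*Total number of lost records= \([0-9][0-9]*\).*/\1/p')

echo "lost records: $lost"
```

A rising count across successive alerts would suggest the historical database is falling further behind rather than recovering.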

I did some research and it seems I am hitting bug CSCtj88620.

I can see the bug is fixed in 8.5(1.11003.1).

I am looking for an expected release date for that version; does anyone have any information on it?



  • Contact Center
Cisco Employee



8.5(1.11003.1) looks like an ES (Engineering Special), so the BU/TAC should be able to provide it on demand; hence it is not released publicly on CCO. Please open a TAC case to obtain that ES.



Please rate helpful posts!

New Member


GP - my customer is getting this same alert in UCCX. This bug obviously would be fixed in 9.0(2)?




New Member


Hi Dan,

Did you get a response for this one?

My customer is also getting the same error on 9.0.2.

Would that be the same bug in 9.0.2?

Thank you 



New Member


Hi Kapil,

We received our first RTMT alert and think we may be seeing the same bug (we're on version 8.5.11001-5). Did this issue cause any major problems? The reason I ask is that we're moving to version 10 in about two months anyway, so if I can put off patching this soon-to-be-retired version, I'd rather wait.



New Member

Hi,

I'm getting the same alert.

Does anyone know if the same bug (CSCtj88620) exists on 10.5?

UNKNOWN_PARAMTYPE:Module Run-time Failure Cause : 3
Module Failure Message : Historical database queue is full. Historical data saved in files. Queue size= 0 Total number of lost records= 1 Please check dat
AppID : Cisco Unified CCX Engine

Cisco Employee


Hi Saima,


To start with, the defect CSCtj88620 is not applicable for UCCX 10.5.

Please go to Cisco Unified CCX Serviceability > Tools > Datastore Control Center > Replication Servers and Datastores and make sure replication is fine, as well as the Historical DataStore.

After this, go to CCX Historical Datastores and check whether they are in the running state. There is a lens (magnifying-glass) icon at the end of the row for each server; click on it and check whether the number of rows in the Historical tables is the same on both nodes and whether the Job Status shows completed.

Go to Tools > Control Center - Network Services and check the status of the Cisco Unified CCX Database service on both nodes.

If all of the above looks fine and the alerts still keep coming, proceed with a complete cluster reboot once (reboot the primary first; once it comes back up, reboot the secondary server) and see if that helps. If the alerts do not stop even after that, your best bet is to open a TAC case and get it troubleshot properly; there are a lot of possible causes here, and a detailed log analysis by TAC will be the next step in resolving the issue.
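The row-count check from the lens view can also be jotted down as a quick comparison once the two counts are in hand. A minimal sketch with placeholder numbers (the counts below are hypothetical; on a real system you would read them from the Datastore Control Center pages described above):

```shell
# Compare Historical table row counts from the two nodes, as read from
# the Datastore Control Center lens view. The counts below are placeholders.
node1_rows=152340
node2_rows=152340

if [ "$node1_rows" -eq "$node2_rows" ]; then
  echo "Historical tables in sync"
else
  echo "Row count mismatch - check replication before rebooting"
fi
```

If the counts diverge and keep diverging, that points at the replication/datastore side rather than the engine, which is worth noting in the TAC case.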