Cisco Support Community

TAC Recommendation to proactively move off of Unity Connection and Business Edition 7.0.1

Unity Connection 7.0.1 will eventually fail due to CSCsx32588.  The workaround is fairly quick once the problem has been identified, but it appears that everyone running 7.0.1 will eventually hit this issue.  This is fixed in an Engineering Special for 7.0.1 (ES 35) as well as in 7.0.2, but this would be a good time to move to 7.1.

Symptoms:
Some users hear failsafe.

Messages cannot be deleted.  They appear to be deleted but still exist.

MWI is not working.  If you manually try to reset it, you get the
following error:  ISAM error: no free disk space
Could not reset Message Waiting Indicators

Trying to set traces, you get the following error:
An error occurred while saving:  java.sql.SQLException

If you have these symptoms, you can SSH to the server and run this command:

admin:show cuc dbserver disk

You can see that this system still has free space in the dyn dbspace, so it is not broken yet.

Dbspaces
========
Dbspace  Dbspace                 Size    Used    Free    Percent
Number   Name                    MB      MB      MB      Free
-------  ----------------------  ------  ------  ------  -------
1        rootdbs                 500.0   118.0   382.0   76
2        cuc_er_sbspace          1830.0  1476.8  353.2   19
3        cuc_er_dbspace          170.0   0.1     169.9   99
4        ciscounity_sbspace      20.0    19.0    1.0     5
5        ciscounity_sbspacetemp  900.0   854.9   45.1    5
6        dir16                   2048.0  1800.1  247.9   12
7        rpt                     2048.0  455.7   1592.3  77
8        temp                    1024.0  0.1     1023.9  99
9        mbx16                   8192.0  198.1   7993.9  97
10       temp2                   1024.0  0.1     1023.9  99
11       dyn                     256.0   19.6    236.4   92
12       mbx                     256.0   12.6    243.4   95
13       dir                     2048.0  1250.0  798.0   38
14       log                     300.0   250.1   49.9    16

Chunks
======
               Size    Free
Chunk  Offset  MB      MB      Path
-----  ------  ------  ------  -------------------------------------------------
1      0       500.0   382.0   /var/opt/cisco/connection/db/root_dbspace
2      0       1830.0  353.2   /var/opt/cisco/connection/db/cuc_er_sbspace
3      0       170.0   169.9   /var/opt/cisco/connection/db/cuc_er_dbspace
4      0       20.0    1.0     /var/opt/cisco/connection/db/ciscounity_sbspace
5      0       900.0   45.1    /var/opt/cisco/connection/db/ciscounity_sbspacetemp
6      0       2048.0  247.9   /var/opt/cisco/connection/db/dir16_dbs
7      0       2048.0  1592.3  /var/opt/cisco/connection/db/rpt_dbs
8      0       1024.0  1023.9  /var/opt/cisco/connection/db/temp_dbs
9      0       8192.0  7993.9  /usr/local/cm/db/informix/data/mbx16_dbs
10     0       1024.0  1023.9  /var/opt/cisco/connection/db/temp2_dbs
11     0       256.0   236.4   /var/opt/cisco/connection/db/dyn_dbs
12     0       256.0   243.4   /usr/local/cm/db/informix/data/mbx_dbs
13     0       2048.0  798.0   /var/opt/cisco/connection/db/dir_dbs
14     0       300.0   49.9    /var/opt/cisco/connection/db/log_dbs
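
The check above boils down to reading the Dbspaces table and looking for any dbspace (dyn in particular) with essentially no free space.  As a rough illustration, here is a small Python sketch that parses rows in the fixed-width layout shown above; the parsing logic and the 1 MB threshold are my own assumptions, not anything built into Connection:

```python
def full_dbspaces(output, min_free_mb=1.0):
    """Return names of dbspaces whose Free MB is below min_free_mb.

    Scans `show cuc dbserver disk` output for rows that look like the
    Dbspaces table: six whitespace-separated fields starting with the
    dbspace number (Number, Name, Size, Used, Free, Percent Free).
    Chunk-table rows have a different field count and are skipped.
    """
    full = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) == 6 and parts[0].isdigit():
            try:
                free_mb = float(parts[4])
            except ValueError:
                continue  # header or other non-data row
            if free_mb < min_free_mb:
                full.append(parts[1])
    return full


sample = """\
1        rootdbs                 500.0   117.7   382.3   76
4        ciscounity_sbspace      20.0    19.0    1.0     5
11       dyn                     256.0   256.0   0.0     0
"""
print(full_dbspaces(sample))  # ['dyn']
```

On a healthy system (like the output above, where dyn still shows 236.4 MB free) this returns an empty list; once dyn reaches 0.0 MB free, the symptoms in this article start to appear.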

Andy

RTP-TAC Unity

Version history
Revision #: 1 of 1
Last update: 05-11-2010 07:13 AM
Comments
New Member

OK, what is the solution?  I have the same problem:

admin:show cuc dbserver disk

Dbspaces
========
Dbspace  Dbspace                 Size    Used    Free    Percent
Number   Name                    MB      MB      MB      Free
-------  ----------------------  ------  ------  ------  -------
1        rootdbs                 500.0   117.7   382.3   76
2        cuc_er_sbspace          1830.0  1476.8  353.2   19
3        cuc_er_dbspace          170.0   0.1     169.9   99
4        ciscounity_sbspace      20.0    19.0    1.0     5
5        ciscounity_sbspacetemp  900.0   854.9   45.1    5
6        dir16                   2048.0  1802.6  245.4   11
7        rpt                     2048.0  569.8   1478.2  72
8        temp                    1024.0  0.1     1023.9  99
9        mbx16                   8192.0  1812.6  6379.4  77
10       temp2                   1024.0  0.1     1023.9  99
11       dyn                     256.0   256.0   0.0     0
12       mbx                     256.0   35.9    220.1   85
13       dir                     2048.0  1251.3  796.7   38
14       log                     300.0   250.1   49.9    16

Cisco Employee

Open a case, give them the bug ID CSCsx32588 and the output you just posted here.  I've heard you might be able to upgrade while the system is in this state, but I've never tried it.  The workaround only takes a few minutes.

New Member

Hi.

I have the same problem.  Does anybody know the workaround?

I have shared support, but I am waiting on the correct contract number before I can contact TAC.

Cisco Employee

Without involving TAC, you can attempt an upgrade to 7.0(2) or 7.1(2).  If you can get the upgrade to finish, the problem will be corrected.  The workaround TAC uses requires root access, as it involves changing the database sizes.

New Member

Thanks anmcbrid,

In fact, TAC connected over WebEx, accessed Unity Connection by SSH, created a temporary remote account called ciscotac (utils remote_account create ..), and then reconnected to Unity Connection, logging in as ciscotac.

They then created a secondary dbspace with a capacity of 2 GB.

The problem is resolved now.
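
The thread doesn't show the exact commands TAC ran as root, but since the Connection database is Informix, adding space to a dbspace would generically be done with the Informix `onspaces` utility.  Purely as a hedged sketch (the dbspace name, chunk path, and size here are assumptions based on this thread, not the documented TAC procedure):

```shell
# Generic Informix sketch: add a 2 GB chunk to the full "dyn" dbspace.
# -a = add chunk, -p = chunk path, -o = offset, -s = size in KB.
# Requires root access and a raw file/path prepared for the chunk;
# do NOT run this on a production Connection server without TAC.
onspaces -a dyn -p /var/opt/cisco/connection/db/dyn_dbs2 -o 0 -s 2097152
```

Again, open a TAC case rather than doing this yourself; root access to these servers is not customer-serviceable.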

New Member

I have a customer who has hit this bug.  TAC have not had root access yet, but I have just tried to upgrade to 7.1(2)b and it failed.  Can anyone confirm whether the upgrade can be completed once the bug has been hit, or do TAC still need root access?

New Member

I upgraded to 7.1.2.31900-1 today and it worked fine: about 2 hours to upgrade (no downtime) and another 30 minutes to switch versions.

After completing the upgrade and switching versions, it is possible to import users again.