Hi guys, we have three CCM servers: one publisher and two subscribers.
Virtual memory is running at 80% on the publisher and about 60% on the subscribers.
I keep getting the below alerts.
Should I be overly worried about these alerts? And if so, what action should I take?
On Fri Sep 14 12:20:28 BST 2007 on node 10.52.224.10.
Available virtual memory below 30 Percent.
dbmon (1127 MB) uses most of the memory.
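For reference, the alert fires when available virtual memory drops below a percentage threshold (30% here). A minimal sketch of that check, with hypothetical memory values for illustration:

```python
# Illustrative sketch of a low-virtual-memory check like RTMT's:
# alert when available virtual memory falls below a percentage threshold.
# The 4096 MB total below is a made-up example value, not from the alert.

def low_vm_alert(total_mb, used_mb, threshold_pct=30):
    """Return True if available virtual memory is below threshold_pct."""
    available_pct = 100.0 * (total_mb - used_mb) / total_mb
    return available_pct < threshold_pct

# Publisher at ~80% used -> ~20% available, which is below 30%
print(low_vm_alert(total_mb=4096, used_mb=3277))   # True
# Subscriber at ~60% used -> ~40% available, no alert
print(low_vm_alert(total_mb=4096, used_mb=2458))   # False
```

So the publisher at 80% usage is already inside alert territory, while the subscribers at 60% still have headroom.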
We have resolved this issue now. We rebooted the publishing server.
If the virtual memory keeps increasing on the publisher server, you will lose services such as extension mobility and will be unable to update phones.
Since the reboot publisher virtual memory is holding steady.
I have the same problem in a cluster of 4 CCMs. The symptoms are web access failing and Corporate Directory failing. I checked RTMT and saw virtual memory at 98%. Some time ago I had the same problem and solved it by rebooting the publisher. The version is 18.104.22.1680-12. Does anyone know the final fix?
We are also experiencing the high virtual memory issue on a 6.1.2 cluster (3 nodes), exclusively on one of the nodes.
Guys, can you check the Calls in Progress counter in RTMT? On the node with high VM alerts, the Calls in Progress statistic is constantly growing and never released.
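To make the symptom concrete: a quick sketch (not a Cisco API, just an illustration with made-up sample values) of what "constantly growing, never released" looks like if you note down the Calls in Progress counter periodically from RTMT:

```python
# Illustrative only: given periodic readings of the "Calls in Progress"
# counter, flag a node whose counter only ever grows, i.e. calls are
# opened but apparently never released.

def looks_leaky(samples, min_growth=1):
    """True if the counter never decreases between readings and
    grows overall across the observation window."""
    never_released = all(b >= a for a, b in zip(samples, samples[1:]))
    return never_released and samples[-1] - samples[0] >= min_growth

healthy = [12, 30, 25, 8, 3, 14]    # rises and falls with real traffic
leaky   = [12, 30, 41, 57, 60, 75]  # keeps climbing, even overnight

print(looks_leaky(healthy))  # False
print(looks_leaky(leaky))    # True
```

A healthy node's counter should drop back toward zero at night and on weekends; a node with this bug keeps climbing regardless of real call activity.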
We have the same problem!
Two servers in the cluster. On the Publisher, Virtual Memory Usage increases by 2-3% every day.
System version: 22.214.171.1240-3
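At 2-3% growth per day, you can roughly project how long until the publisher trips the 30%-available alert. A back-of-the-envelope sketch (the 60% starting point is an assumed example, not from the post above):

```python
# Rough projection: days until used VM crosses the point where
# available VM drops below the alert threshold (default 30% available,
# i.e. 70% used). Purely arithmetic, values are illustrative.

def days_until_alert(current_used_pct, growth_pct_per_day,
                     alert_available_pct=30):
    """Days until available virtual memory falls below the threshold."""
    headroom = (100 - alert_available_pct) - current_used_pct
    return max(0.0, headroom / growth_pct_per_day)

# e.g. 60% used today, growing 2.5%/day -> ~4 days to the alert
print(days_until_alert(60, 2.5))  # 4.0
```

In other words, a 2-3% daily climb gives you at most a few days of headroom before alerts and service impact, which is why people end up on a reboot schedule.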
Is it an MGCP-controlled gateway? What is on the other side of the ISDN line, the telco?
Go to Serviceability/Tools/Serviceability Reports Archive, select the last PDF report for PerformanceRep, and try to upload the Call Activity and System Resources utilization graphs.
I would especially like to see the call activity graph, to check whether the Calls in Progress counter is constantly growing. That's our problem, and we are currently troubleshooting it with TAC.
I have the same issue with Calls in Progress that keeps growing without any legitimate reason... How did you resolve your issue?
We are using CCM 6.1
Unfortunately, the issue is still not fixed. A case has been open for many months, but Cisco has not found the root cause.
Please keep us updated if anyone finds a cause.
Exactly, we have 2 installations with this issue. One under 6.1.2, the other under 5.1.1.
Both have E1 QSIG/MGCP gateways. But other installations without the problem have E1 QSIG MGCP gateways too...
This bug is indeed interesting. The issue would occur with the CallBack feature from a remote site through QSIG.
But in our situation, the Calls in Progress graph is growing linearly, and I am pretty sure users don't use CallBack every minute, including nights and weekends ;).
Actually, the engineer handling our case saw the Cdcc process crashes too, but not the cause.
What I am thinking is that there is another QSIG trigger for the Cdcc subprocess crashes... I will push your bug ID to TAC.
Thank you for this help! Very interesting.
Additionally, I found bug CSCsk60495, related to similar behavior with Path Replacement. And indeed, this feature is implemented between our Cisco cluster and the Siemens PBX. It's a good avenue to troubleshoot; we will try disabling Path Replacement.
This was the same issue I was facing, and I found the same bug ID. I spent sleepless nights with Cisco TAC, who dug deep into the OS level, found the bug ID, and finally suggested upgrading the cluster from 5.1.2 to 6.1.2.
Just checked again and found that we haven't faced the same problem (high VM %) for over 11 months since then.
The fact is we are already on 6.1.2, meaning there is surely another bug related to MGCP QSIG gateways...
I will try to disable Path replacement.
Rebooting does solve the problem (temporarily), and it worked in my case too. But it is never a recommended solution, as the issue can reoccur.
Solution implemented in my cluster: upgrade the CCM cluster.
I know this shows as closed, but I am still having this issue, running CCM