
VCSC Critical error.....

Hi,

I am getting a critical error for the TANDBERG VCSC saying that memory utilization is 93%; the error condition is Critical in the OPM console.


Does anyone know how to fix this problem? Let me know if you need any logs. The VCS version is X7.2.1. Please note I cannot find any error message on the VCSC web page.

Thank you...


8 Replies

Alok Jaiswal
Cisco Employee

Hi Ashiq,

Can you log in as root and paste the output of the following commands?

# df -h

# top

Rgds

Alok

Yes, starting with what Alok said is a good thing. By the way, the # up front does not need to be entered, and you have to log in as root via SSH or the console.

That at least gives more info for troubleshooting.

top is an interactive program; you have to press q or Ctrl-C to leave it.

I think for this case it's nicer to enter:

top -bn1

which will just output the current info once.
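(For reference: -b runs top in batch mode, so the output can be redirected to a file, and -n1 limits it to a single iteration. If you only need the summary header, something like

top -bn1 | head -5

should do, since the first five lines are the uptime/tasks/CPU/memory/swap summary.)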

Also, the output of a

ps xaufww

could be interesting.
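(In that command, a, u and x show all processes with user-oriented detail, f draws the process tree, and ww keeps the command lines from being truncated. If the goal is to spot memory hogs, an alternative that should work on most Linux systems with procps is

ps aux --sort=-%mem | head -15

which lists the biggest memory consumers first.)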

In addition, you could also log in as admin and capture the output of xconfig and xstatus.

Is this a system in production? Do you experience any issues?

Did this happen before or is this the first time?

It could be handy to have a system snapshot as well, though I am not sure how well that will perform while you have a memory warning.

If the 93% is the RAM, then to "fix" it I would try restarting the VCS; also look into upgrading it to X7.2.2 and see if it happens again.

If it's the hard drive, I would look into what is filling it up (log files, a tcpdump session you forgot to close, ...).
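As a rough sketch of how to hunt that down with standard Linux tools (generic commands, nothing VCS specific):

du -xh /var 2>/dev/null | sort -rh | head -20

shows the largest directories under /var, and

ls -lhS /var/log | head

lists the biggest log files first. Note that sort -h needs a fairly recent GNU coreutils; if it is missing, du -xk ... | sort -rn does the same in kilobytes.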

If you have problems fixing it, escalate this case as a service request to TAC!

Please give feedback on how you fixed it.

Please remember to rate helpful responses.

Thanks for the response, Mr. Alok & Mr. Martin!!

Root output for the above commands:

df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda6       955M  442M  465M  49% /

devtmpfs        2.0G  220K  2.0G   1% /dev

/dev/ram0       190M   56M  124M  32% /var

/dev/ram1       1.5G  5.9M  1.4G   1% /tmp

/dev/sda8       955M  622M  285M  69% /tandberg

/dev/sdb2       221G   11G  199G   6% /mnt/harddisk

/dev/tmp        1.5G  5.9M  1.4G   1% /tmp

/dev/tmpstore   180K   14K  157K   9% /var/tmpstore

I rebooted the server; it is still the same issue. This system is in production without any problems.

Thank you..

Regards,

Ashiq

The df (df = disk free) output looks OK. Do you also have the output of

top -bn1

Please post it here as well!

Do you have other VCSs as well? Anything special with this one?

(a lot of usage, registrations, ...)

Is it an appliance (hardware box) or a virtual machine? If virtual, does it comply with the full system requirements, and does it avoid sharing resources?

In addition: the following you do not need to post here, but prepare it, as I would recommend that you escalate this case to Cisco TAC as a service request (you can escalate it directly from this message thread!).

Take the xconfig and xstatus (log in with admin instead of root, or, if logged in as root, type in:

tsh

first, to get to the "TANDBERG shell")

and then run:

xconfig

and

xstatus

That will give you some generic config and status info.
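Roughly, the session would look like this (a sketch, assuming a root login; exit should drop you back out of tsh):

tsh

xconfig

xstatus

exit

Most SSH clients can log the whole session to a file, which is the easiest way to capture that output.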

A backup of your current config (some parts are not shown in the xconfig) is handy anyhow, so take it under:

Maintenance > Backup & Restore

Also, on the web interface, go to

Maintenance > Diagnostics > System snapshot > Create system backup file

(and, if used, the TMS Agent backup file as well)

In addition, take and download a full system snapshot. Do that at an off-peak moment, as it increases the CPU load on the system.

If Alok or somebody else does not have a suggestion as to what the cause could be, I would strongly recommend you escalate it to TAC. If you do so, please update us on what the problem was and how you fixed it!

Please remember to rate helpful responses.

Hi Ashiq,

To me the physical memory looks OK. I was interested in seeing whether the hard disk is full; it seems this alarm might be related to logical memory.

We need to see if the VCS is overutilizing the logical memory or using swap memory continuously. As Martin mentioned, and as we asked in an earlier post, please get the output of the "top" command. Along with it, I would also want to see the output of the "free" command.

There are some Linux commands that can be run to free the memory if the VCS is overutilizing it, and then you can keep monitoring the memory afterwards. But again, I need to see the output of the above commands first to understand this better.
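(For reference, on a generic Linux box the memory picture comes from

free -m

which shows the totals in MB including buffers/cache, and the page cache can be released via the kernel's drop_caches mechanism, e.g.

sync; echo 3 > /proc/sys/vm/drop_caches

That is presumably the kind of command meant here; whether it is appropriate on a production VCS is something to confirm with TAC first.)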

Also, if the memory utilization alarm keeps popping up again and again even after acknowledging it, we recommend you open a TAC case. It could be that some process is taking a large chunk of memory from the VCS, which can be analyzed through a system snapshot.

Rgds,

Alok

Thank you all!!

Attached is the output of top -bn1.

We have one VCS Control and one VCS Expressway in production, not overloaded. It is running as an appliance (hardware box), not a virtual machine.


......................................................................................................................

# top -bn1

top - 09:52:10 up 1 day,  1:38,  1 user,  load average: 0.01, 0.02, 0.05

Tasks: 145 total,   1 running, 144 sleeping,   0 stopped,   0 zombie

Cpu(s):  0.6%us,  0.3%sy,  0.1%ni, 99.0%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st

Mem:   4044580k total,  1857068k used,  2187512k free,   122552k buffers

Swap:  9775516k total,        0k used,  9775516k free,   408772k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           

4899 root      20   0 30392 5756 1880 S    2  0.1   0:38.91 snmpd             

7017 root      20   0  542m 158m  20m S    2  4.0   6:45.33 app               

10340 root      20   0  8884 1852 1220 S    2  0.0   0:03.72 proxy-registrat   

    1 root      20   0  4076  572  488 S    0  0.0   0:01.26 init              

    2 root      20   0     0    0    0 S    0  0.0   0:00.01 kthreadd          

    3 root      20   0     0    0    0 S    0  0.0   0:00.48 ksoftirqd/0       

    6 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0       

    7 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1       

    8 root      20   0     0    0    0 S    0  0.0   0:00.77 kworker/1:0       

    9 root      20   0     0    0    0 S    0  0.0   0:00.26 ksoftirqd/1       

   10 root      20   0     0    0    0 S    0  0.0   0:01.57 kworker/0:1       

   11 root       0 -20     0    0    0 S    0  0.0   0:00.00 cpuset            

   12 root       0 -20     0    0    0 S    0  0.0   0:00.00 khelper           

   13 root      20   0     0    0    0 S    0  0.0   0:00.05 kworker/u:1       

  168 root      20   0     0    0    0 S    0  0.0   0:00.30 sync_supers       

  170 root      20   0     0    0    0 S    0  0.0   0:00.00 bdi-default       

  172 root       0 -20     0    0    0 S    0  0.0   0:00.00 kblockd           

  283 root       0 -20     0    0    0 S    0  0.0   0:00.00 ata_sff           

  291 root      20   0     0    0    0 S    0  0.0   0:00.00 khubd             

  398 root       0 -20     0    0    0 S    0  0.0   0:00.00 rpciod            

  422 root      20   0     0    0    0 S    0  0.0   0:00.01 khungtaskd        

  427 root      20   0     0    0    0 S    0  0.0   0:00.00 kswapd0           

  491 root      20   0     0    0    0 S    0  0.0   0:00.07 fsnotify_mark     

  504 root       0 -20     0    0    0 S    0  0.0   0:00.00 nfsiod            

  507 root       0 -20     0    0    0 S    0  0.0   0:00.00 crypto            

1170 root      20   0     0    0    0 S    0  0.0   0:00.02 scsi_eh_0         

1173 root      20   0     0    0    0 S    0  0.0   0:00.01 scsi_eh_1         

1176 root      20   0     0    0    0 S    0  0.0   0:00.03 kworker/u:2       

1184 root      20   0     0    0    0 S    0  0.0   0:00.02 scsi_eh_2         

1187 root      20   0     0    0    0 S    0  0.0   0:00.02 scsi_eh_3         

1262 root      20   0     0    0    0 S    0  0.0   0:02.35 kworker/0:2       

1263 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/1:1       

1289 root      20   0     0    0    0 S    0  0.0   0:00.01 kjournald         

1295 root      20   0 14576  440  244 S    0  0.0   0:02.38 bootlogd          

1312 root      20   0  9004 1540  788 S    0  0.0   0:01.07 udevd             

1448 root      20   0  9000 1248  488 S    0  0.0   0:02.71 udevd             

1449 root      20   0  9000 1192  440 S    0  0.0   0:00.00 udevd             

1494 root       0 -20     0    0    0 S    0  0.0   0:00.00 loop16            

1498 root       0 -20     0    0    0 S    0  0.0   0:00.06 loop17            

1579 root      20   0     0    0    0 S    0  0.0   0:02.79 kjournald         

1715 nobody    20   0 13136 1024  752 S    0  0.0   0:00.47 dbus-daemon       

1788 root      20   0     0    0    0 S    0  0.0   0:05.10 kjournald         

1791 root      20   0     0    0    0 S    0  0.0   0:00.36 flush-1:0         

1793 root      20   0     0    0    0 S    0  0.0   0:00.87 flush-8:16        

1794 root      20   0     0    0    0 S    0  0.0   0:00.28 flush-8:0         

1819 root       0 -20     0    0    0 S    0  0.0   0:01.26 loop20            

1893 root      20   0  8464 1384 1180 S    0  0.0   0:00.00 requestd          

1896 root      20   0  6232  392  312 S    0  0.0   0:00.00 inotifywait       

1897 root      20   0  8468  920  708 S    0  0.0   0:00.01 requestd          

2573 root      20   0     0    0    0 S    0  0.0   0:00.42 flush-7:20        

4290 root      20   0 61972 6528 1156 S    0  0.2   0:00.25 python            

4343 root      20   0 47104 8716 2852 S    0  0.2   0:00.03 python            

4355 root      20   0  8600 1540 1196 S    0  0.0   0:00.00 packagesd         

4361 root      20   0  6232  396  312 S    0  0.0   0:00.00 inotifywait       

4362 root      20   0  8600 1024  680 S    0  0.0   0:00.00 packagesd         

4433 root      20   0 24408  616  416 S    0  0.0   0:00.00 syslog-ng         

4434 root      20   0 50092 3376 2368 S    0  0.1   0:19.96 syslog-ng         

4488 nobody    -2   0 11224 1304  648 S    0  0.0   0:28.43 LCDd              

4489 root      20   0  140m  21m 5504 S    0  0.5   0:33.47 python            

4522 root      20   0 31752 2472 1860 S    0  0.1   0:02.57 ntpd              

4551 root      20   0  132m  14m 4232 S    0  0.4   0:00.14 python            

4735 root      20   0 31784 1268  844 S    0  0.0   0:00.00 sshd              

4760 root      20   0  8456 1228 1028 S    0  0.0   0:00.00 sh                

4761 root      20   0 19608 1968 1544 S    0  0.0   0:00.02 racoon            

5187 root      20   0  132m  14m 4232 S    0  0.4   0:00.15 python            

5202 root      20   0  132m  14m 4232 S    0  0.4   0:00.15 python            

5207 root      20   0  9224 2216 1248 S    0  0.1   0:04.88 updatesw          

5266 root      20   0  8948 1956 1248 S    0  0.0   0:03.20 sysmonitor        

5276 root      20   0  9468 2396 1184 S    0  0.1   0:08.87 logrotated        

5295 root      20   0  181m  75m 3924 S    0  1.9   2:12.67 beam.smp          

5302 root      20   0 11288  852  232 S    0  0.0   0:02.24 epmd              

5466 root      20   0  3916  380  312 S    0  0.0   0:00.24 heart             

6029 root      20   0  8456 1224 1024 S    0  0.0   0:00.00 sh                

6129 root      20   0  127m  15m 4248 S    0  0.4   0:00.15 python            

6156 root      20   0  9028 2024 1244 S    0  0.1   0:02.83 tmpstored         

6157 root      20   0 20204 1528 1204 S    0  0.0   0:00.73 filestored        

6192 root      20   0  140m  14m 4232 S    0  0.4   0:00.14 python            

6217 root      20   0  348m  29m 5524 S    0  0.7   1:35.92 python            

6721 root      20   0  132m  15m 4944 S    0  0.4   0:00.13 python            

6977 root      20   0  8612 1600 1248 S    0  0.0   0:00.00 tandberg          

7264 root      20   0  302m  19m 2368 S    0  0.5   2:45.19 python            

7380 nobody    20   0 13608 1520  636 S    0  0.0   0:00.02 dnsmasq           

7706 root      20   0  137m  17m 5044 S    0  0.4   0:00.31 python            

9049 root      20   0  132m  14m 4228 S    0  0.4   0:00.14 python            

9152 root      20   0  8456 1224 1028 S    0  0.0   0:00.00 sh                

9158 nobody    20   0  155m  14m 4196 S    0  0.4   0:00.35 python            

9169 root      20   0     0    0    0 S    0  0.0   0:00.10 flush-7:18        

9711 root      20   0  132m  14m 4228 S    0  0.4   0:00.13 python            

9733 nobody    20   0  160m  19m 4620 S    0  0.5   0:18.23 python            

9825 root      20   0  132m  14m 4228 S    0  0.4   0:00.13 python            

9846 root      20   0  8456 1220 1024 S    0  0.0   0:00.00 sh                

9849 nobody    20   0  168m  19m 6080 S    0  0.5   0:14.18 python            

9938 root      20   0  132m  14m 4228 S    0  0.4   0:00.13 python            

9961 root      20   0  8456 1224 1024 S    0  0.0   0:00.00 sh                

9964 nobody    20   0  153m  21m 5368 S    0  0.5   0:14.13 python            

10075 root      20   0  132m  14m 4228 S    0  0.4   0:00.14 python            

10097 root      20   0  8456 1220 1028 S    0  0.0   0:00.00 sh                

10099 nobody    20   0  156m  22m 5888 S    0  0.6   0:16.13 python            

10187 root      20   0  132m  14m 4228 S    0  0.4   0:00.12 python            

10200 root      20   0  215m  46m 5476 S    0  1.2   0:50.71 python            

10291 root      20   0  132m  14m 4228 S    0  0.4   0:00.13 python            

10303 root      20   0  221m  51m 6768 S    0  1.3   1:02.05 python            

10451 root      20   0  124m  11m 1628 S    0  0.3   0:00.75 python            

10484 root      20   0  4068   84    0 S    0  0.0   0:00.00 telnetd           

10506 root      20   0  132m  14m 4228 S    0  0.4   0:00.14 python            

10532 root      20   0  8456 1224 1028 S    0  0.0   0:00.00 sh                

10534 root      20   0  130m  14m 4220 S    0  0.4   0:00.16 python            

10555 root      20   0  239m  29m  10m S    0  0.7   5:32.90 ivy               

10749 root      20   0  8760 1740 1232 S    0  0.0   0:02.67 clusterWatchdog   

10793 root      20   0  132m  14m 4228 S    0  0.4   0:00.13 python            

10869 root      20   0  128m  16m 4384 S    0  0.4   0:01.28 python            

10910 root      20   0  124m  14m 4228 S    0  0.4   0:00.14 python            

10928 root      20   0  134m  16m 5368 S    0  0.4   0:08.57 python            

10974 root      20   0  132m  14m 4228 S    0  0.4   0:00.14 python            

11006 root      20   0  106m  11m 6400 S    0  0.3   0:01.25 httpd             

11032 root      20   0  241m  19m 4920 S    0  0.5   1:01.71 python            

11072 root      20   0  4048  564  472 S    0  0.0   0:00.00 taalogger         

11073 root      20   0  4048  568  472 S    0  0.0   0:00.19 taalogger         

11074 root      20   0  4048  564  472 S    0  0.0   0:00.15 taalogger         

11075 root      20   0  4048  564  472 S    0  0.0   0:00.00 taalogger         

11130 nobody    20   0  112m  19m 8664 S    0  0.5   0:01.55 httpd             

11131 nobody    20   0  114m  21m 8696 S    0  0.5   0:01.36 httpd             

11132 nobody    20   0  112m  19m 8672 S    0  0.5   0:01.11 httpd             

11133 nobody    20   0  118m  27m  10m S    0  0.7   0:24.78 httpd             

11134 nobody    20   0  112m  19m 8540 S    0  0.5   0:01.50 httpd             

11704 root      20   0  4096  532  444 S    0  0.0   0:00.00 acpid             

11732 root      20   0  4048  572  480 S    0  0.0   0:00.08 inactived         

11955 nobody    20   0  112m  19m 8728 S    0  0.5   0:01.86 httpd             

12027 root      RT   0  4068  564  476 S    0  0.0   0:00.00 getty             

12028 root      RT   0  4068  564  476 S    0  0.0   0:00.00 getty             

12029 root      RT   0  4068  564  476 S    0  0.0   0:00.00 getty             

13553 nobody    20   0  114m  21m 8664 S    0  0.5   0:01.03 httpd             

13557 nobody    20   0  114m  21m 8508 S    0  0.5   0:01.24 httpd             

13568 nobody    20   0  112m  19m 8664 S    0  0.5   0:01.47 httpd             

13571 nobody    20   0  114m  21m 8508 S    0  0.5   0:01.34 httpd             

28207 root      20   0 76116 3728 3004 S    0  0.1   0:00.01 sshd              

28222 root      20   0 10700 1776 1388 S    0  0.0   0:00.00 sh                

29568 root      20   0  5504  644  552 S    0  0.0   0:00.00 sleep             

30172 root       0 -20     0    0    0 S    0  0.0   0:00.00 loop18            

30180 root      20   0  5504  644  552 S    0  0.0   0:00.00 sleep             

30272 root      20   0  5504  644  552 S    0  0.0   0:00.00 sleep             

30273 root      20   0 10896 1128  820 R    0  0.0   0:00.00 top               

30280 root      20   0  5504  648  552 S    0  0.0   0:00.00 sleep             

30296 root      20   0  5504  644  552 S    0  0.0   0:00.00 sleep             

30298 root      20   0  5504  644  552 S    0  0.0   0:00.00 sleep             

Thank you.....

Regards,

Ashiq


Hi Ashiq,

Looking at the output, I think the memory utilization is fine and doesn't look overutilized at the moment.

Cpu(s):  0.6%us,  0.3%sy,  0.1%ni, 99.0%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st

Mem:   4044580k total,  1857068k used,  2187512k free,   122552k buffers

Swap:  9775516k total,        0k used,  9775516k free,   408772k cached

Currently the swap is not used, and approximately 2 GB is free.
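(As a rough calculation: top's "used" figure includes buffers and cache, so the memory actually held by applications is about 1857068k - 122552k buffers - 408772k cached = 1325744k, i.e. roughly 1.3 GB of the 4 GB total, or about 33%.)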

When did the alarm appear, and do you still see it popping up?

Rgds

Alok

aostense
Level 1

Hi Ashiq,

Do you see this warning/error in the VCS alarm page (https://vcsip/alarms)?

Could you send me a screenshot of the error (so I can see the error code, and understand where you see this as well)?

When you ack the alarm (if it's an alarm), does it come back?

The "top" output shows the VCS as stable and healthy, so it could be that this is an old warning. If so, did you make any special configuration changes in the timeframe when you got the error the first time?

If you have clustered VCSs (?), you could have been affected by this bug: CSCub42318

...and an upgrade to X7.2.2 will fix this issue.

I'll be waiting for your reply.

Cheers,

Arne