After upgrading our 50-user deployment from 220.127.116.113.A to 18.104.22.168.B-AE, the creation of backups for Disaster Recovery fails:
We are notified on the dashboard that the system backup has not completed, and we find the storage server volume empty.
Meeting recordings can be created and played back using the same NFS volume without any problems.
The Disaster Recovery feature is able to read the backup from our 22.214.171.1243.A deployment (and complains about a version mismatch, which we believe is expected behavior).
Moving the previous backup folder to another storage device, removing the storage server from the configuration, restarting the services, and re-adding the storage server to the configuration did not help.
Any idea how this may be fixed?
If recording is working but backup is failing, and they are definitely using the same NFS storage, this issue requires some troubleshooting and log analysis. I would advise you to open a ticket with Cisco TAC to take a look at it.
Same problem here. I have deployed CWMS 126.96.36.199.B-AE and defined NFS storage.
Recordings are working properly, but the system says: "System backup failed to complete at Apr 5, 2014 4:21 am". System backups have been failing continually since then. When we had CWMS 1.5, snapshot directories were created on NFS correctly.
Any findings with TAC?
It seems that this behavior is caused by a mismatch between the web UI configuration entry for NFS and the corresponding database entry.
After the BU manually edited a script and/or a database field, we now see new backups being created, but old backups are not cleaned up automatically.
The SR is still open, but we haven't seen a bug ID yet.
With 2.0 MR3 installed, we see that new backups are created successfully, but old snapshots are not cleaned up automatically.
Is this the expected behavior in 2.0?
Ensure that you allowed Everyone Full Control on the NFS storage mount point, so that the CWMS service can purge the previous backup files. Keep in mind that once purging starts working properly, the system will begin purging previous backups. Anything older, going back many days, will have to be purged manually. From that point on, the system will always purge the previous day's backup when a new backup is created, so you should have only one set of backup files (the latest one) on the NFS.
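Since the purge step needs delete permission as well as write permission on the mount, a quick probe from the host can confirm both before digging into logs. This is a generic sketch, not the real CWMS layout: the mount path is an assumption (it defaults to the current directory here so the snippet runs anywhere).

```shell
# Hypothetical permission probe: verify we can both create AND delete a
# file on the backup mount, since backup purging requires delete rights.
MOUNT="${MOUNT:-.}"                 # assumed stand-in for the NFS backup mount
probe="$MOUNT/.backup-perm-probe"
if touch "$probe" && rm -f "$probe"; then
  result="create/delete OK on $MOUNT"
else
  result="insufficient permissions on $MOUNT"
fi
echo "$result"
```

Run it with `MOUNT=/path/to/your/nfs/mount`; if the delete fails while the create succeeds, that would explain recordings working while old backups pile up.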
NFS access rights have been double-checked (by TAC and DEV).
Since the developers implemented the described workaround via remote access, backups are created on a daily basis, but old backups are never purged on our deployment.
Updating to 2.0 MR3 has not changed this behavior.
We've now manually moved all snapshots to another volume and will monitor the NFS volume to see whether a clean start makes a difference.
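The manual move described above can be sketched roughly as follows. This is a hypothetical example, assuming snapshot sets are timestamped directories sitting directly under the NFS volume and that only the newest set should remain; the directory names, `SRC`, and `DEST` are all illustrative, not the actual CWMS layout (the demo creates its own temporary directories so it is safe to run).

```shell
# Sketch: archive every snapshot directory except the newest one.
SRC="${SRC:-$(mktemp -d)}"          # stand-in for the NFS backup volume
DEST="${DEST:-$(mktemp -d)}"        # archive volume for old snapshots

# Demo data: two hypothetical snapshot dirs, the first backdated to be older.
mkdir -p "$SRC/snapshot-2014-04-04" "$SRC/snapshot-2014-04-05"
touch -t 201404040400 "$SRC/snapshot-2014-04-04"

# List snapshots newest-first, skip the newest, move the rest to the archive.
ls -1t "$SRC" | tail -n +2 | while read -r snap; do
  mv "$SRC/$snap" "$DEST/"
done

ls "$SRC"                           # only the newest snapshot set remains
```

Point `SRC` at the real NFS mount and `DEST` at spare storage (and drop the demo-data lines) to reproduce the manual cleanup; keeping exactly one set matches the intended purge behavior described earlier in the thread.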
I'm not sure what was done on the CWMS side to initiate the backup, or whether any customization was done to fix this. Continue working with TAC and the developers to resolve it. Backups should be purged after a new successful backup completes.
We were able to resolve this issue by manually removing all previous snapshots.