3335 Views · 14 Helpful · 39 Replies

Inventory Collection Failed on Cisco 3550 Switches LMS 3.0

dionjiles
Level 1

Problem Details: I have 75 devices that fail when performing inventory collection. The message ID I am receiving is RICS0001 (Cannot successfully collect Inventory Information for device), Internal Error. After trying to collect inventory again it happened again; here is the information I was able to collect for one of the devices in the IC_Server.log.

I have a TAC Service Request open, but it's been a couple of days since I heard anything. Just wondering whether anyone else has encountered issues with these devices. This wasn't a problem for me when I was on LMS 2.5.1.

Attached is the log I captured after setting the log level to fatal.

3 Accepted Solutions

Joe Clarke
Cisco Employee

Are all 75 failing devices 3550s? There is a new bug in RME 4.1 that would account for IOS switches seeing this problem. The bug is CSCsk65355. A patch is available by calling the TAC.

View solution in original post

You can always run a sniffer trace. This is really the best way to diagnose inventory problems when the stack trace contains a MIB module name.
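
(An illustrative way to do this, not from the original post: run a filtered capture on the LMS server while inventory collection runs. tshark is assumed to be installed; the interface number and switch address below are placeholders.)

    rem Capture only SNMP traffic between the LMS server and the suspect switch
    rem 172.18.73.242 is a placeholder address; adjust -i to the server's interface
    tshark -i 1 -f "host 172.18.73.242 and udp port 161" -w 3550_inventory.pcap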

View solution in original post

We are constantly evolving our device packages. We may be polling for newer objects in newer device packages, and tickling some device-side bugs. As I said, the sniffer trace for a working and non-working device would give a more complete picture as to why one is failing. Something must be different in the ENTITY-MIB data one device is returning.
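
(A rough sketch of that comparison, assuming the net-snmp tools; the addresses and community string are placeholders, and 1.3.6.1.2.1.47.1.1.1 is the ENTITY-MIB entPhysicalTable.)

    rem Walk the ENTITY-MIB entPhysicalTable on a failing and a working 3550
    snmpwalk -v2c -c public 172.18.73.242 1.3.6.1.2.1.47.1.1.1 > failing.txt
    snmpwalk -v2c -c public 172.18.73.10 1.3.6.1.2.1.47.1.1.1 > working.txt
    rem Compare the two walks to spot missing or malformed entPhysical rows
    fc failing.txt working.txt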

View solution in original post

39 Replies

Joe Clarke
Cisco Employee

Are all 75 failing devices 3550s? There is a new bug in RME 4.1 that would account for IOS switches seeing this problem. The bug is CSCsk65355. A patch is available by calling the TAC.

Thanks for the update. I have a case currently open, Case #607070583. You are always such a great help when it comes to these types of questions. I guess I will keep waiting on them to send me the patch...

I just recently installed LMS 3.0 in parallel with 2.6 and am having the exact same problem. It seems to only happen with the Cat3550 48-port switches in the network. I tried to do an SNMP walk on one and I get a "No Such Object available on this agent at this OID" error.
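
(For illustration only; the thread does not say which object was walked, so the OID, community string, and address below are placeholders. With the net-snmp tools the symptom looks like this:)

    snmpwalk -v2c -c public 172.18.73.242 <OID>
    <OID> = No Such Object available on this agent at this OID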

I don't know on what object you are performing the SNMP walk, but as I said, there is a bug that affects inventory collection of 3550 switches in RME 4.1. You will need to call the TAC to get the patched SharedInventoryCatIOS.zip to fix the problem.

That is correct. I have a current TAC request open as well. I have received the patch, but the install has so far been unsuccessful; the developers are now looking into the logs I sent to determine the problem.

Here are the steps they sent to install the patch:

* Stop the daemons

* Take a backup of SharedInventoryCatIOS.zip, present in the location NMSROOT\MDC\tomcat\webapps\rme\WEB-INF\lib\pkgs.

* Copy the attached zip file (SharedInventoryCatIOS.zip) to the location NMSROOT\MDC\tomcat\webapps\rme\WEB-INF\lib\pkgs.

* Start the daemons

* The affected devices need to be deleted and re-added.

These instructions are slightly wrong. When you back up SharedInventoryCatIOS.zip, you must either put the backup in another location, or name the backup something that does not end in .zip or .jar (e.g. SharedInventoryCatIOS.zip.orig). If the .zip extension is preserved, the patch will not work.
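
(Putting the steps and the correction together, a minimal sketch on Windows; it assumes NMSROOT points at the LMS install root, that CRMDmgtd is the Daemon Manager service, and that C:\patch is a placeholder for wherever the TAC-supplied file was saved.)

    rem Stop the CiscoWorks Daemon Manager
    net stop crmdmgtd
    cd /d "%NMSROOT%\MDC\tomcat\webapps\rme\WEB-INF\lib\pkgs"
    rem Rename the backup so it no longer ends in .zip, per the correction above
    ren SharedInventoryCatIOS.zip SharedInventoryCatIOS.zip.orig
    rem Copy in the TAC-supplied patch (source path is a placeholder)
    copy "C:\patch\SharedInventoryCatIOS.zip" .
    rem Restart the daemons, then delete and re-add the affected devices in RME
    net start crmdmgtd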

Actually, I moved the current SharedInventoryCatIOS.zip to the desktop on the server first, then copied the patch they sent me to the directory in the instructions and started the daemons.

I deleted all the devices and re-added them, and I still get the exact same number of failed devices. I'm actually going to request a web meeting today to see if I can get this matter resolved.

Ah, if you did that, check the perms on the patched SharedInventoryCatIOS.zip. I'll bet casuser has no access to the new file.

I just opened my TAC case... If you get a resolution, please post it. Thanks.

I just checked and noticed the local admin user I have on the server is casuser, but when I checked the properties of the zips in that directory it's casusers, so I manually gave casuser full rights to the folder. I'm starting the daemons now, and then I will delete and re-add the affected devices. I will follow up with you in a moment with the results.

Here is what I just noticed: in the local admin group I have a casuser account, but when I check the properties of CSCOpx it looks like the casusers account is being used for some reason. Will this have an effect on the perms?

The local admin group is casusers and the local acct is casuser.

I just ran an inventory collection once again, with casuser and casusers having full control of the file, and still the same devices are failing.

casuser and casusers should not be administrators on the system for security reasons. They must, however, have full control to all files and directories under NMSROOT.

For the devices that are failing, do you see the same errors in IC_Server.log?
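
(On the permissions point, a hedged illustration of granting that access on a default install; the path assumes NMSROOT is C:\Program Files\CSCOpx, and icacls may need to be cacls on older Windows builds.)

    rem Give casuser and casusers full control of the patched file
    icacls "C:\Program Files\CSCOpx\MDC\tomcat\webapps\rme\WEB-INF\lib\pkgs\SharedInventoryCatIOS.zip" /grant casuser:F
    icacls "C:\Program Files\CSCOpx\MDC\tomcat\webapps\rme\WEB-INF\lib\pkgs\SharedInventoryCatIOS.zip" /grant casusers:F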

Thanks... Attached is the IC_Server.log. I ran the inventory collection with the log level set to debug mode, as instructed by the person who is working my request.

Assuming one of the failing 3550s is 172.18.73.242, this looks like a different problem now. Please post the running config from this switch.
