In this document we discuss the SNMP architecture as implemented in IOS-XR. The IOS to XR migration guide (a good starting point) already highlights some of the high-level differences between IOS and XR.
IOS-XR is a highly distributed operating system that uses hardware forwarding, so the way SNMP retrieves counters and responds to requests is a bit different from what you might be used to. In this article we dive deep into the architecture of stats collection, how it operates, and which show commands you can use to verify the performance of SNMP in IOS-XR, specifically on the ASR9000 (though this article also applies to CRS and GSR running IOS-XR).
XR routers are highly distributed. Increasing capacity through distribution and replication does come at a cost: in any scaled design where processing devices are replicated or multiplied, a critical additional component of the design is the inter-process communication path between the processing components.
The motivation for this article is that some of our customers have seen SNMP timeouts in XR 4.2.3, which raised a lot of questions about caching, stats collection and the way SNMP operates. Hopefully this technote clears up some of the confusion.
SNMP architecture in IOS-XR
SNMP Packet flow inside the system
Depending on your configuration, SNMP packets can be received in band or out of band (as per MPP definitions; see the article on LPTS and MPP for more info). After initial reception and punting to the control plane (RSP), they are handed over to NETIO. NETIO is roughly the XR equivalent of the IP INPUT process in IOS that deals with process-level switching.
If the SNMP requests are "for me", they are handed over to the SNMPD process for evaluation of the request and dispatch to the next layer of processing.
XR SNMP Specifics
Informs supported as of 4.1 (Inform proxy not supported)
Full AES Encryption support in 4.1 (V3 related)
Full IPv6 support In 4.2 (snmp engine transport)
VRF-aware support in 3.3 (snmp engine, some MIBs already available)
Capability files are not well supported across Cisco; the ASR9K MIB guide was developed to improve the situation
Event/expression MIB support for extensibility, as in IOS
Warm standby on snmp agent
Management plane protection (mpp) / snmp overload control to limit impact of snmp on device
Bulk processing (dedicated processing path for bulking) (4.2)
Data Collection Manager – bulk MIB data collection and file push (4.2.0 & 4.2.1)
Additional IPv6 / VRF aware MIB support (4.2 and after)
Additional improvements with Async IPC and SysDB Backend infra (4.1)
Overload Control Integration (4.0)
SNMP request processing blocked during critical event periods (e.g. OSPF convergence)
Additional PDU performance monitoring support (4.2)
MIB guide update (4.2)
Caching is an integral part of IOS XR SNMP processing, allowing it to achieve the best possible performance while maintaining the most accurate stats possible.
There are various levels of caching; some are configurable and some are not. Caching also relieves the hardware from the burden of continuous requests, especially in WALK scenarios that retrieve many values, e.g. interface stats counters.
There is a process called STATS-D running on the linecard that periodically scrapes statistics from the linecard hardware and updates the interface counters and MIB stats.
This means that if you poll twice within the STATS-D update interval, you will realistically see the same counter value returned twice.
Show interface commands (depending on release) force a direct read from hardware to get the most accurate value, but the IF-MIB stats are cached.
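The effect of this periodic scrape can be sketched in a few lines of Python. This is a toy model only (the class, interface name, counter values and the 30-second interval are invented for illustration), not the actual statsd implementation:

```python
class StatsdCache:
    """Toy model of the STATS-D periodic scrape: hardware counters are
    copied into a cache every `interval` seconds; reads in between
    return the same (possibly stale) snapshot."""
    def __init__(self, interval=30.0):
        self.interval = interval
        self.last_scrape = None
        self.snapshot = {}

    def scrape(self, hw_counters, now):
        # Periodic update from hardware into the shared-memory tables.
        if self.last_scrape is None or now - self.last_scrape >= self.interval:
            self.snapshot = dict(hw_counters)
            self.last_scrape = now

    def read(self, ifname):
        return self.snapshot.get(ifname, 0)

cache = StatsdCache(interval=30.0)
cache.scrape({"Te0/0/0/0": 1000}, now=0.0)
# Hardware keeps counting, but a poll within the interval sees the old value.
cache.scrape({"Te0/0/0/0": 5000}, now=10.0)   # too early, no refresh
first = cache.read("Te0/0/0/0")               # still 1000
cache.scrape({"Te0/0/0/0": 5000}, now=31.0)   # interval elapsed, refresh
second = cache.read("Te0/0/0/0")              # now 5000
```

A poll at t=10s returns the same value as the poll at t=0s; only after the next scrape does the counter move.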
1. The SNMP UDP transport receives an SNMP GetRequest-PDU, GetNextRequest-PDU or GetBulk-PDU and hands it to the SNMPD.
2. The SNMP Engine parses the PDU and dispatches the individual variable bindings. IF-MIB objects are dispatched to the mibd_interface process and the IF-MIB DLL callbacks are invoked.
3. If the request is a getnext, the IF-MIB's cache of variable bindings (the look-ahead cache) is checked for a hit. If there is one, the value is returned to the engine and the response PDU is sent.
4. If there is no cache hit, the IF-MIB passes a message to the statsd_manager process to get the information for the interface (and, in the getnext case, the next 99 interfaces for the cache). The IPC mechanism used is LWM; the sysdb direct EDM connection invokes the EDM for statsd.
5. The statsd_manager gets the interface data from its cache and returns the statsd bags for the interfaces to the IF-MIB.
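The look-ahead behavior in steps 3 and 4 can be sketched as follows. This is a simplified Python model (class and variable names are invented; XR bulk-fetches the next rows in batches of 100, while we use 5 here for brevity):

```python
class LookaheadCache:
    """Toy model of the IF-MIB look-ahead cache: on a miss, fetch the
    next `bulk` rows from statsd in one IPC round trip and cache them,
    so subsequent getnexts during a walk are served locally."""
    def __init__(self, statsd_rows, bulk=5):
        self.rows = statsd_rows        # stand-in for statsd_manager data
        self.bulk = bulk
        self.cache = {}
        self.ipc_calls = 0             # count round trips to statsd

    def getnext(self, index):
        nxt = index + 1
        if nxt not in self.cache:
            # Cache miss: one IPC to statsd for the next `bulk` rows.
            self.ipc_calls += 1
            for i in range(nxt, nxt + self.bulk):
                if i in self.rows:
                    self.cache[i] = self.rows[i]
        return self.cache.get(nxt)

statsd = {i: f"ifIndex.{i}" for i in range(1, 11)}
mib = LookaheadCache(statsd, bulk=5)
walked = [mib.getnext(i) for i in range(0, 10)]   # full walk of 10 rows
# Only 2 IPC round trips instead of 10 individual fetches.
```

With the real batch size of 100, a walk over hundreds of interfaces needs only a handful of IPC exchanges instead of one per row.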
Visualizing caching differently:
Two caching mechanisms:
1: Statsd caching:
– Used for interface-related statistics (IF-MIB, IF-EXTENSION-MIB, etc.).
– Statsd caching is configurable: use the command "snmp-server ifmib stats cache" to enable it.
– This is a periodic cache which gets refreshed every 30 seconds for all interfaces.
– Statsd cache maintenance is done irrespective of this command; the command only dictates from where the stats are fetched.
– Without the above command, stats are fetched from the linecard's real-time counters (default behavior). This involves more processes and hence more CPU utilization and latency: an additional tax for real-time counters.
2: Lookahead caching:
– Conceptually a varbind cache.
– Not all MIBs leverage/use this cache.
– Stats are fetched for the next 100 rows (interfaces) in bulk and cached; data for up to 500 interfaces is kept in the cache.
– The cache is maintained for a maximum of 20 seconds.
– The oldest used block is reused to build a new set of cache entries.
– There is no CLI command to enable/disable this cache.
– Provides a good performance improvement when used along with the statsd cache.
Parallel vs Serialized processing
Serialized processing works as follows:
SNMP requests are handled sequentially. If the request currently in progress is "slow", subsequent requests have to wait and may time out.
The NMS station may then resend its SNMP request, building up the request queue and potentially causing more trouble.
The good news is that as of 4.3.1 we have the ability to detect duplicate requests and drop them from the queue, making sure we are dealing only with "NEW" requests.
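Conceptually, the duplicate detection works like this sketch. This is a toy Python model of the serialized queue, not the actual snmpd code; identifying a retransmission by (source, request-id) is an assumption for illustration:

```python
from collections import deque

class SnmpRequestQueue:
    """Toy model of serialized PDU processing with duplicate detection:
    requests are processed one at a time; a retransmission of a PDU
    that is already queued is discarded rather than enqueued twice."""
    def __init__(self):
        self.queue = deque()
        self.pending = set()   # (source_ip, request_id) already queued
        self.dropped = 0

    def enqueue(self, source_ip, request_id):
        key = (source_ip, request_id)
        if key in self.pending:
            self.dropped += 1          # duplicate: an NMS retry, drop it
            return False
        self.pending.add(key)
        self.queue.append(key)
        return True

    def process_one(self):
        key = self.queue.popleft()     # serialized: one PDU at a time
        self.pending.discard(key)
        return key

q = SnmpRequestQueue()
q.enqueue("10.0.0.1", 100)   # original request
q.enqueue("10.0.0.1", 100)   # NMS times out and retries -> dropped
q.enqueue("10.0.0.2", 100)   # different poller, same id -> accepted
```

The retry never enters the queue, so a slow MIB no longer snowballs into a queue full of duplicates.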
Enhancements in XR 4.1
Enhancements in XR 4.2
Example (performance) trace point logging
SNMP process architecture
All management interfaces (SNMP, XML, CLI) utilize the same core processing architecture [sysdb].
The SNMP processing architecture serializes PDU processing (pre-4.2).
Request PDUs from all pollers affect the response rate seen by a single poller.
The SNMP per-OID polling rate is very MIB specific (each MIB’s underlying data model dictates the performance of MIB’s OID access)
MIB request processing commonly involves the GSP IPC mechanism, sysDB (data store) and statsd in some cases.
In band and out of band SNMP requests are treated the same within SNMP.
(In band means that the SNMP request can be received on an interface that is also transporting customer/user traffic. Out of band interfaces, such as the MGMT interfaces on the RSP are dedicated for management and carry management traffic only).
The current SNMP architecture has an SNMP daemon enqueue requests and separate MIB daemons process requests (requests are enqueued from transport layer receive fairly quickly)
There are multiple MIB-specific caching mechanisms in place to improve performance which also complicate the polling rate calculations.
There is no queue size limit for SNMP requests (grows with memory).
XR processes referenced
StatsD is a process that collects statistics from various places (eg hardware) and updates tables on the LC shared memory.
IPC is an inter process call or communication that is used by processes to talk to each other to request data or send commands.
GSP is the Group Services Protocol, a process in IOS-XR that allows one process to communicate with multiple "nodes" at the same time (a sort of multicast mechanism that the RSP can use to talk to multiple linecards, for instance to update a FIB route).
“show snmp trace requests” is a sliding window of logs indicating the above information about PDU processing.
XR MIB implementation specifics
Implementations of specific MIBs packaged as individual DLLs. Each MIBd process “houses” a group of MIB DLLs
Grouped according to the “type” of MIB (interface, entity, route, infra); at runtime, grouping is determined via a config file in XR source control
MIB DLLs handle the specifics of mapping MIB defined data model to XR data model. MIB DLLs map MIB namespace to XR data owner access
APIs (sysdb EDM is most common)
Look-ahead caching—Any support for look-ahead caching is done within the MIB DLL. (No generic support for all MIBs)
Non-look-ahead caching—Some features may support access to cached managed data. This data is accessed via a separate data access point (i.e. a separate sysdb EDM path)
Troubleshooting commands and what they do
The following show and debug commands are very powerful to verify and track SNMP.
show snmp
Global agent counters—incoming, outgoing (request and trap), & error PDUs
- Periodically collect output to determine the overall PDU response rate and identify the error rate.
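A minimal sketch of that rate calculation from two periodically collected snapshots (the field names are illustrative, not the literal show-command output):

```python
def pdu_rates(before, after, interval):
    """Compute per-second PDU and error rates from two counter
    snapshots taken `interval` seconds apart."""
    return {k: (after[k] - before[k]) / interval for k in before}

# Two snapshots of the global agent counters, 60 seconds apart
# (field names and values are invented for illustration):
before = {"in_pdus": 10000, "out_pdus": 9990, "error_pdus": 2}
after  = {"in_pdus": 13000, "out_pdus": 12940, "error_pdus": 8}
rates = pdu_rates(before, after, interval=60.0)
# rates["in_pdus"] -> 50.0 PDUs/sec; rates["error_pdus"] -> 0.1/sec
```

A rising error rate, or an out-PDU rate falling well below the in-PDU rate, points at the slow-MIB queueing problems discussed later in this document.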
show snmp trace requests
Log of high level PDU processing tracepoints—Rx, Proc Start, Tx time
Periodically collect this log. Decode and use the data to determine the following per-PDU data:
1.Source IPs of pollers
2.Queue lengths of per-source IP PDU queues
3.Types of request PDUs being used
4.Timestamp when PDUs are enqueued into the queues for the source IPs
5.Duration of the PDU enqueued & waiting to be processed
6.Processing time of PDUs from pollers
show snmp mib access
Per-OID counters indicating the number of times an operation (GET, GETN, SET) was performed on that OID.
Periodically collecting & diff will indicate what was polled during the time periods.
show snmp mib access time
Per-OID timestamp of the last operation on the OID.
Periodically collecting & diff will indicate if any polling on the OID was done in the time period.
debug snmp request
Enable to log every OID being processed by every PDU to syslog. Need to enable “debug snmp packet” as well to identify source of PDUs.
NOTE: Disable “logging trap debug” if “snmp trap syslog” is configured!!!
debug snmp packet
Enable to log same data as “sh snmp trace requests” to syslog.
NOTE: Disable “logging trap debug” if “snmp trap syslog” is configured!!!
Show commands that are new to XR 4.2 onwards
show snmp mib statistics
Per-OID statistics summarizing transaction times within the mibd level—count plus min/max/avg.
Collect to determine if specific MIB objects are averaging high processing times and/or large variance (low min, high avg & max).
show snmp queue rx
Indicates the min/max/avg queue sizes for the PDU receive and pending queues. Real-time and 5min views.
show snmp queue trap
Indicates the min/max/avg queue sizes for the internal trap PDU queue
(config)# snmp logging thresh oid
show snmp trace slow oid
Allows configuring a duration threshold for logging per-OID transactions exceeding the time threshold.
This is measured within the mibd process beginning with the call to the MIB specific handler for the OID and ending with the response from the same.
(config)# snmp logging thresh pdu
show snmp trace slow pdu
Allows configuring a duration threshold for logging per-PDU transactions exceeding the time threshold. When logging all OIDs within the PDU are also logged to this buffer.
This is measured within the snmpd process beginning with the dequeue of the PDU from the receive queue and ending when all the OIDs in the PDU have been processed and the response is ready to be sent.
Troubleshooting PDU performance issues
Some MIBs don't have accelerated processing or caching, and because in certain releases SNMP is processed serially, you may see timeouts on OID requests that normally operate perfectly fine. An example of a slow MIB is the SONET MIB: because this MIB needs to talk from the SNMP process all the way down to the SPA of the SIP-700 linecard (on the ASR9000), the response may not be provided in a timely manner. At the same time, new requests for other OIDs may sit in the holding or pending queue, causing timeouts and retries.
Retries to an already underperforming MIB may exacerbate the overall issue.
The vast majority of PDU performance issues are related to a poller polling a specific MIB which is slow to process its OIDs.
This causes all other pollers to see some of their PDUs slowed due to queueing delays (waiting on slow MIB)
Identify the slow MIB/MIBs being polled
Use SNMP View Access Control to block access to the slow MIB tables / objects
Use ACLs to permit only “known” NMS devices/applications. In this case “known” refers to the content of requests issued from the app.
Determining Internal Timeout of a MIBd
snmpd will time out a mibd process if it has not received a response to a request for an OID (or OIDs) within 10s by default.
Once in timeout state, snmpd will continue processing requests BUT it will mark the mibd as unavailable until it responds to the timed-out request.
Getnext operations to any OIDs for MIBs in the timed out mibd will skip to the lexi-next OID owned by a different mibd process.
Get/Set operations to any OIDs for MIBs in the timed out mibd will be responded to with a PDU error-code of “resourceUnavailable”.
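The "lexi-next" skip relies on OIDs ordering lexicographically as integer tuples. A minimal Python sketch of finding the next registered OID outside a timed-out subtree (the registered OIDs are real IF-MIB identifiers, but the scenario and function names are invented for illustration):

```python
def oid_tuple(oid):
    """Parse a dotted OID string into a tuple of ints for comparison."""
    return tuple(int(x) for x in oid.strip(".").split("."))

def next_outside_subtree(registered, timed_out_prefix, query):
    """Return the lexicographically next registered OID after `query`
    that does NOT fall under the timed-out mibd's subtree."""
    q = oid_tuple(query)
    bad = oid_tuple(timed_out_prefix)
    for c in sorted(oid_tuple(o) for o in registered):
        if c > q and c[:len(bad)] != bad:
            return ".".join(str(x) for x in c)
    return None   # end of MIB view

# ifDescr lives under 1.3.6.1.2.1.2.2.1.2; pretend its mibd timed out.
registered = ["1.3.6.1.2.1.2.2.1.2.1",      # ifDescr.1
              "1.3.6.1.2.1.2.2.1.2.2",      # ifDescr.2
              "1.3.6.1.2.1.31.1.1.1.1.1"]   # ifName.1 (ifXTable)
nxt = next_outside_subtree(registered, "1.3.6.1.2.1.2.2.1.2",
                           "1.3.6.1.2.1.2.2.1.2.1")
# Skips ifDescr.2 (same timed-out subtree) and lands on the ifName OID.
```

This mirrors the agent behavior: a getnext walk silently jumps over the unavailable mibd's objects instead of hanging on them.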
To identify the MIB responsible (in addition to the normal “slow OID” techniques):
If able to catch mibd in the state:
run attach_process -p <PID of mibd process> -i 5 -S
It may be possible to identify the MIB being polled by examining “show snmp lib group agent ipc” for “request timeout” to get the timestamp of when the mibd timeout was detected.
Using that timeout timestamp, “sh snmp mib access time” may still have an OID timestamp correlating to 10 seconds prior.
Examples and Recommendations
For the purpose of clarification, the following is an example of an SNMP table. The columns (vertical) represent the objects, and the rows represent the instances or entities. In this case we have 3 instances, 1, 2 and 3, and each instance has 3 objects: ifName, ifInOctets and ifMtu.
The customer's current SNMP design uses snmpwalk. Snmpwalk works by performing a sequence of get-nexts, but on a column-by-column basis if the column object is specified as the starting point.
An example of a column walk specifying the ifDescr from IF-MIB
[no-sense-1 68] ~ > snmpwalk -c public 10.66.70.87 IF-MIB::ifDescr
IF-MIB::ifDescr.1 = STRING: Loopback0
IF-MIB::ifDescr.2 = STRING: Bundle-POS1
IF-MIB::ifDescr.3 = STRING: Bundle-Ether1
IF-MIB::ifDescr.4 = STRING: TenGigE1/2/0/0
IF-MIB::ifDescr.5 = STRING: TenGigE1/2/0/1
IF-MIB::ifDescr.6 = STRING: SONET0/2/0/0
IF-MIB::ifDescr.7 = STRING: SONET0/2/0/1
IF-MIB::ifDescr.8 = STRING: SONET0/2/0/2
IF-MIB::ifDescr.9 = STRING: SONET0/2/0/3
IF-MIB::ifDescr.10 = STRING: SONET0/2/0/4
Snmpwalk can also be used to get a single object only, for instance the object IF-MIB::ifDescr.9. It does not support specifying more than one object in its request. The example below shows two objects being requested, but only the first returned.
[no-sense-1 69] ~ > snmpwalk -c public 10.66.70.87 IF-MIB::ifDescr.9
IF-MIB::ifDescr.9 = STRING: SONET0/2/0/3
[12:18 - 0.31]
[no-sense-1 70] ~ > snmpwalk -c public 10.66.70.87 IF-MIB::ifDescr.9 IF-MIB::ifDescr.10
IF-MIB::ifDescr.9 = STRING: SONET0/2/0/3
[12:18 - 0.36]
For efficiency, row traversal is preferred, with multiple objects requested in a single SNMP transaction. This reduces unnecessary overhead on the XR system. For this reason snmpwalk is not recommended.
Examples of row traversal
The customer is currently requesting via snmpwalk the following IF-MIB objects
The preferred method is to specify all the objects required from an instance/entity in a single command such as get-next or bulk-get. An example using snmpbulkget follows (the counter and MTU values shown are illustrative):
[no-sense-1 71] ~ > snmpbulkget -c public -Cr3 10.66.70.87 IF-MIB::ifName IF-MIB::ifInOctets IF-MIB::ifMtu
IF-MIB::ifName.1 = STRING: Loopback0
IF-MIB::ifInOctets.1 = Counter32: 0
IF-MIB::ifMtu.1 = INTEGER: 1514
IF-MIB::ifName.2 = STRING: Bundle-POS1
IF-MIB::ifInOctets.2 = Counter32: 1275438
IF-MIB::ifMtu.2 = INTEGER: 4474
IF-MIB::ifName.3 = STRING: Bundle-Ether1
IF-MIB::ifInOctets.3 = Counter32: 9845321
IF-MIB::ifMtu.3 = INTEGER: 1514
Note above that all the objects in a row for all instances are obtained with one command. The same can be done with a get-next, however the added overhead of specifying the instance is incurred for each instance present.
Although the examples are specific to IF-MIB, the same concept is relevant to all MIBs.
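The overhead difference between a column-by-column walk and bulk row retrieval can be quantified with simple arithmetic. This sketch assumes one transaction per getnext and `max-repetitions` rows per GetBulk PDU (all numbers illustrative):

```python
import math

def column_walk_pdus(rows, cols):
    """snmpwalk per column: one getnext transaction per object instance,
    plus one final getnext per column to detect the end of the column."""
    return cols * (rows + 1)

def bulkget_pdus(rows, cols, max_repetitions):
    """One GetBulk PDU carries all the column objects and returns up to
    max_repetitions rows for each, so the PDU count depends only on rows."""
    return math.ceil(rows / max_repetitions)

# 3 objects (ifName, ifInOctets, ifMtu) across 1000 interfaces:
walk = column_walk_pdus(1000, 3)                    # 3003 transactions
bulk = bulkget_pdus(1000, 3, max_repetitions=50)    # 20 transactions
```

With a serialized agent, two orders of magnitude fewer PDUs also means far less exposure to queueing delays behind a slow MIB.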
Timeout and Retry Setting on NMS
use dynamic timeout when available
if dynamic timeout is not available, increase the timeout when multiple management applications are simultaneously polling the SNMP agent on the ASR9K: multiply the default timeout by the number of applications polling simultaneously
use dynamic retry when available
if dynamic retry is not available, establish number of retries based on testing
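As a quick sketch of that timeout sizing rule (the 5-second default and the poller count are illustrative):

```python
def adjusted_timeout(default_timeout_s, concurrent_pollers):
    """Scale the NMS timeout by the number of applications polling the
    agent simultaneously, per the recommendation above."""
    return default_timeout_s * max(1, concurrent_pollers)

# Default 5s timeout with 4 NMS applications polling the same ASR9K:
timeout = adjusted_timeout(5, 4)   # 20 seconds
```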