This document describes the ASR9000 netflow architecture.
It provides a basic configuration showing how to set up netflow, what the scale parameters are, and how netflow is implemented in the ASR9000/XR.
Basic configuration for netflow
The basic configuration for netflow consists of:
A flow monitor map
An exporter map
A sampler map
The flow monitor map pulls in the exporter map.
On the interface you want to enable netflow on, you pull in the monitor map and the sampler map.
flow monitor-map FM
 record ipv4
 exporter FE
 cache permanent
 cache entries 10000
 ! cache timeouts define how frequently we export what, max of 1M per LC
 cache timeout active 2
 cache timeout inactive 2
!
flow exporter-map FE
 version v9
  options interface-table timeout 120
  ! these 2 define the exports of the sampler table and interface table to the flow collector for sync'ing indexes to names etc.
  options sampler-table timeout 120
 transport udp 1963
!
sampler-map FS
 random 1 out-of 1
!
interface GigabitEthernet0/0/0/20
 description Test PW to Adtech G4
 ipv4 address 22.214.171.124 255.255.255.0
 flow ipv4 monitor FM sampler FS ingress
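Once this is configured, the maps can be verified with the following show commands as a quick sanity check (exact output varies by release, so treat this as a sketch):
show flow monitor-map FM
show flow exporter-map FE
show sampler-map FS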
Scale parameters for netflow
• Trident: 100 kpps/LC (total, that is in+out combined); Typhoon: 200 kpps/LC
• 1M records per LC (default cache size is 64k)
• 50K flows per second export per LC
• Sample intervals from 1:1 to 1:64k
• Up to 8 exporters per map, VRF aware
• MPLS (with or without IPv4/IPv6 fields)
Netflow is not hardware accelerated in the ASR9000 (or in XR for that matter), but it is distributed.
What that means is that each linecard runs netflow by itself.
Resources are shared between the interfaces and NPUs on the linecard.
When you have one interface on one NPU on one linecard enabled for netflow, the full rate is available to that interface, which is 100k pps for Trident and 200k pps for Typhoon.
When you enable two interfaces on the same NPU on the same LC, both interfaces share the 100k pps (Trident) or 200k pps (Typhoon).
When you enable two interfaces on two different NPUs, both NPUs share the total rate of 100k/200k between them, giving each NPU 50k or 100k pps depending on the LC type.
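As a rough worked example, assuming the budget keeps dividing evenly: on a Typhoon LC, netflow on interfaces across two NPUs gives each NPU about 100k pps, and across four NPUs about 50k pps each; on a Trident LC the same splits of the 100k pps budget give about 50k and 25k pps per NPU.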
Packet flow for netflow
•Once sampled packets pass through the sampling policer, the ucode extracts data from the header fields and sends it to the LC CPU to construct a flow record.
•The LC CPU inserts the flow record into the netflow cache on the LC.
•The flow records remain in the LC cache until they are aged out due to either timer expiry or cache exhaustion.
•There are two timers running for flow aging, the active timer and the inactive timer.
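To see how many sampled packets the NPUs are actually punting to the LC CPU, a show command along these lines can help (availability and exact output vary by XR release, so treat this as a pointer rather than a definitive reference):
show flow platform producer statistics location 0/0/CPU0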
Inside the LC CPU
Netflow Cache size, maintenance and memory
In IOS-XR platforms, it is the LC processor memory that holds the netflow cache.
The netflow cache is a section of memory that stores flow entries before they are exported to the external collector.
The 'nfsvr' process running on the linecard manages the netflow cache.
The memory used can be monitored via this command:
show flow monitor FM cache internal location 0/0/CPU0
Memory used: 8127060
Total memory used can be verified by checking the process memory utilization of 'nfsvr':
show processes memory location 0/0/CPU0 | inc nfsvr
257 139264 65536 73728 12812288 nfsvr
With the default cache size of 64k entries, the memory used is about 8 MB per flow monitor for ipv4 and MPLS, and about 11 MB for ipv6.
With the maximum cache size of 1M entries (the default is 65535), the memory used is about 116 MB per ipv4/MPLS flow monitor and about 150 MB per ipv6 flow monitor.
If 'n' ipv4 flow monitors are used, all with the maximum 1M entries, the memory used would be n x 116 MB.
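As a quick worked example with these numbers: four ipv4 flow monitors, each configured for the maximum 1M entries, would consume roughly 4 x 116 MB ≈ 464 MB of LC CPU memory, so cache sizes should be chosen with the LC memory budget in mind.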
The default size of the netflow cache is 64k entries. The maximum configurable size of the netflow cache is 1M entries.
Configuration to set the cache size to 10,000 entries looks as follows:
flow monitor-map FM
cache entries 10000
95% of the configured cache size is the high watermark threshold. Once this threshold is reached, certain flows (the longest idle ones, etc.) are aggressively timed out; XR 4.1.1 attempts to expire 15% of the flows. With the default 65535-entry cache this works out to a high watermark of 0.95 x 65535 ≈ 62258 entries, which is what the output below shows.
The show flow monitor FM cache internal location 0/0/cpu0 command will give you the data on that:
Cache summary for Flow Monitor :
Cache size: 65535
Current entries: 17
High Watermark: 62258
A cache-overflow syslog message means that we wanted to add more entries to the cache than it could hold. There are a few different reasons and remediations for it:
- the cache size is too small, and by enlarging it we can hold more entries
- the inactive timeouts are too long, that is, we hold entries in the cache too long and they do not get aged out fast enough
- we have the right cache size and we do export adequately, but we are not getting the records out fast enough due to the volume; in that case we can tune the rate limit of cache expiration entries, as shown in the sketch below
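A minimal sketch of that tuning, assuming the cache timeout rate-limit option is available in your release (the value shown is only illustrative):
flow monitor-map FM
 cache timeout rate-limit 5000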
The permanent cache is very different from a normal cache and is useful for accounting or security monitoring. The permanent cache has a fixed size chosen by the user. Once the permanent cache is full, all new flows are dropped, but all flows already in the cache are continuously updated over time (similar to interface counters).
Note that the permanent cache uses a different template when it comes to the bytes and packets.
When using this permanent cache, we do not report fields 1 and 2 (IN_BYTES/IN_PKTS), but instead use fields 85 and 86 (IN_PERMANENT_BYTES/IN_PERMANENT_PKTS).
Fields 1 and 2 are "deltas"; 85 and 86 are "running counters".
In your collector you need to "teach" it that 1 and 85, and 2 and 86, are equivalent.
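A permanent cache is enabled under the monitor map; a minimal sketch, with the map name and entry count chosen purely for illustration:
flow monitor-map FM-PERM
 record ipv4
 exporter FE
 cache permanent
 cache entries 10000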
The cache size you need depends on a few factors (see the sampler sketch after this list for the sampling-rate aspect):
Number of flows : the total number of unique flows going through the interface in a given time period.
Cache timeout values : in general, the longer the timers, the larger the needed cache size. Short timers dictate that most records will be removed due to aging.
Average flow duration : the longer the average flow duration, the longer the timers that are needed, and thus the larger the cache.
Sampling rate : the lower the X in 1:X sampling, the more flows will be populated in the cache and hence the larger the cache size needed.
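As an illustration of the sampling-rate trade-off, a more relaxed sampler punts fewer packets and therefore populates fewer flows than the 1 out-of 1 sampler shown earlier (the map name and rate are just examples):
sampler-map FS-1K
 random 1 out-of 1000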
Which packets are netflowed?
All packets subject to sampling, regardless of whether they are forwarded or not, are subject to netflow.
This includes packets dropped by ACL or QoS policing, for instance!
A drop reason is reported to netflow, for example:
* ACL deny
* policer drop
* WRED drop
* Bad IP header checksum
* TTL exceeded
* Bad total length
* uRPF drop
Export occurs when data in the cache is removed, which can occur in one of three ways.
Inactive timer expiry : the cache entry expires because it has not matched an incoming packet for a specified amount of time. The default value is 15 seconds.
Active timer expiry : the cache entry, though still matching incoming packets, has been in the cache so long that it exceeds the active timer. The default value is 30 minutes.
Cache exhaustion : the cache becomes full, so some of the oldest entries are purged to make room for new entries.
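Both the inactive and active timers are tunable under the monitor map; a minimal sketch with illustrative values (in seconds):
flow monitor-map FM
 cache timeout inactive 20
 cache timeout active 60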
The netflow exporter can be in a VRF, but it cannot export out of the Mgmt interface.
Here is why: netflow runs off of the linecard (LC interfaces and NP) and there is, by default, no forwarding between the LCs and the management Ethernet. This is because the mgmt Ethernet is designated out of band by LPTS (Local Packet Transport Services). More detail is in the ASR9000 Local Packet Transport Services document here on the support forums.
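As a sketch, a VRF-aware exporter destination could look like this (the address and VRF name are placeholders, and the 'destination ... vrf' form is an assumption of this example, not taken from the configuration above):
flow exporter-map FE
 version v9
 transport udp 1963
 destination 192.0.2.10 vrf COLLECTORS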
Netflow records can be exported to any destination, which may or may not be local to the LC where netflow is running. For example, if the LCs in slots 1 and 2 are running netflow, the exporter may be connected to an interface reachable via the LC in slot 3.
A total of 8 exporters per map is allowed.
RP/0/RSP0/CPU0:A9K-TOP#show flow exporter FE location 0/0/CPU0
Tue Nov 16 11:23:41.437 EST
Flow Exporter: FE
Flow Exporter memory usage: 3280812
Used by flow monitors: FM