This document describes how to work with and implement QoS Policy Propagation via BGP, or QPPB for short.
QPPB is the concept of marking prefixes received from BGP with a certain QoS group assignment; this QoS group can then be used
in egress policies (or even ingress policies, after marking) to perform certain actions.
While this sounds simple, and conceptually it is, achieving this functionality in hardware is quite complex.
In this article I hope to unravel some of the gotchas and the operation when it comes to QPPB.
QPPB with an egress QoS policy is supported on Trident, Thor and Typhoon. Both IPv4 and IPv6 are supported.
ASR9K is the first XR platform to support IPv6 QPPB.
Based on the interface QPPB settings, the IP Precedence and/or QoS group is set for the source/destination prefix on the ingress linecard, and a policy can be attached on the egress linecard to act on the modified IP Precedence and/or QoS group.
In short the purpose of QPPB is to allow packet classification based on BGP attributes.
IPv4/IPv6 unicast-only feature. IPv6 support was added in release 4.2, on the ASR9000 platform only.
Classification based on source or destination IP address
Classify on BGP IP prefix / AS path / Community
Utilizes Route Policy Language (RPL) for table policies
QoS Group ID values 0 – 31 supported
IOS XR started supporting QPPB in release 3.6 on the CRS and C12K
CRS-1 only supports setting the QoS Group ID
C12K supports setting IP Precedence and/or QoS Group ID
ASR9k supports setting IP Precedence and/or QoS Group ID, and a QoS policy can be configured to match on the modified IP Precedence/QoS group on egress only.
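To illustrate the classification options above, here is a sketch of a table policy written in RPL. The policy name, community value, AS number and qos-group values are made up for illustration; check your release's RPL documentation for the exact match options supported:

```
route-policy QPPB-TABLE-POLICY
  if community matches-any (100:200) then
    set qos-group 5
  elseif as-path in (ios-regex '_200$') then
    set qos-group 4
  else
    pass
  endif
end-policy
```

A table policy like this is attached under the BGP address family with the table-policy command, not to an interface; BGP applies it when installing routes into the RIB.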
Configuration and verification
In the following example, Router A learns routes from AS 200 and AS 100. QoS policy is applied to all packets that match the defined route policies. Any packets from Router A to AS 200 or AS 100 have the appropriate QoS policy applied.
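A minimal end-to-end sketch of such a setup is shown below. The policy names, prefix, qos-group value and bandwidth figure are mine, chosen for illustration rather than taken from the original example; verify the exact syntax against your release's command reference:

```
route-policy QPPB-DEST
  if destination in (203.0.113.0/24 le 32) then
    set qos-group 5
  else
    pass
  endif
end-policy
!
router bgp 500
 address-family ipv4 unicast
  table-policy QPPB-DEST
!
class-map match-any QG5
 match qos-group 5
!
policy-map EGRESS-POL
 class QG5
  bandwidth percent 20
 class class-default
!
interface TenGigE0/0/0/0
 ipv4 bgp policy propagation input qos-group destination
!
interface TenGigE0/0/0/1
 service-policy output EGRESS-POL
```

The "ipv4 bgp policy propagation input" command on the ingress interface is what triggers the QPPB lookup (here on the destination address); the egress policy then matches on the qos-group the ingress linecard has set.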
uIDB is a micro interface descriptor block: a data structure that represents the features applied to an interface.
In this case the command below shows which QPPB-specific features are applied to the interface.
RP/0/RP0/CPU0:pacifica#show uidb data TenG 0/0/0/0 location 0/0/cpu0
IPV4 QPPB En 0x1
IPV6 QPPB En 0x1
IPV6 QPPB Dst QOS Grp 0x0
IPV6 QPPB Dst Prec 0x1
IPV6 QPPB Src QOS Grp 0x1
IPV6 QPPB Src Prec 0x0
IPV4 QPPB Dst QOS Grp 0x0
IPV4 QPPB Dst Prec 0x1
IPV4 QPPB Src QOS Grp 0x0
IPV4 QPPB Src Prec 0x0
Restrictions and limitations
This is to the best of my knowledge.
In Trident/Typhoon, due to limited space in the prefix leaf, all traffic coming in on that interface will undergo QPPB processing (IP Precedence and/or QoS group) based on that interface's setting; i.e. there is no per-prefix validity checking for IP Precedence or QoS group.
The loopback decision is made in TOP Parse, so every packet coming into an interface that has a "QOS group" policy configuration with QPPB enabled will be (pipeline) looped back.
Due to the pipeline loopback and the extra cycles, performance may be impacted.
Also, the trigger for the loopback is a "QOS group" being present in the policy; if the user wants a policy that acts only on the precedence value modified by QPPB, the user still needs a dummy "QOS group" in the policy for the loopback to happen and the new precedence value to take effect.
Only QoS format 1 and format 2 are supported in the ingress QoS policy.
These debugging examples are not necessarily related to the configuration example above.
Check for QPPB values in route
RP/0/RSP0/CPU0:public-router#show route 126.96.36.199 de
Wed Feb 23 15:20:01.263 UTC

Routing entry for 188.8.131.52/24
  Known via "bgp 500", distance 200, metric 0, type internal
  Installed Feb 20 23:58:13.096 for 2d15h
  Routing Descriptor Blocks
    10.1.7.2, from 10.1.7.2
      Route metric is 0
  Label: None
  Tunnel ID: None
  Extended communities count: 0
  Route version is 0x1 (1)
  No local label
  IP Precedence: 6
  QoS Group ID: 5
  Route Priority: RIB_PRIORITY_RECURSIVE (7) SVD Type RIB_SVD_TYPE_LOCAL
  No advertising protos.
(Maybe a bit too deep for general troubleshooting, but since I had this info I thought it might be nice to share for those interested.)
Another gotcha in the example below is that it uses a Gig interface on a SIP-700; while this works, it is officially not supported. It is shown purely for illustration purposes, to demonstrate how to pull the inside info out to the surface and how to interpret it.
RP/0/RSP0/CPU0:ASR-PE3#show uidb data location 0/0/cpu0 gigabitEthernet 0/0/0/4.1 ingress | inc QOS
QOS Enable                 0x1
QOS Inherit Parent         0x0
QOS ID                     0x2004
AFMON QOS ID               0x0
IPV4 QPPB Src QOS Grp      0x1
IPV4 QPPB Dst QOS Grp      0x0
IPV6 QPPB Src QOS Grp      0x0
IPV6 QPPB Dst QOS Grp      0x0
QOS grp present in policy  0x1
QOS Format                 0x1
If the user wants a policy that acts only on the precedence value modified by QPPB, the user still needs a dummy "QOS group" in the policy for the loopback to happen and the new precedence value to take effect. This requires a configuration like the following:
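A sketch of what that dummy entry can look like is below. The class names, qos-group value and police rate are made up for illustration; the DUMMY-QG class does not need to match any real traffic, its match qos-group statement is only there to trigger the pipeline loopback so that the QPPB-set precedence becomes visible to the PREC5 class:

```
class-map match-any DUMMY-QG
 match qos-group 7
!
class-map match-any PREC5
 match precedence 5
!
policy-map INGRESS-POL
 class DUMMY-QG
 class PREC5
  police rate 100 mbps
 class class-default
!
interface TenGigE0/0/0/0
 ipv4 bgp policy propagation input ip-precedence destination
 service-policy input INGRESS-POL
```

Without the DUMMY-QG class, the PREC5 class would classify on the precedence value the packet arrived with, not the one QPPB assigned.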