Has anyone ever run across what causes "Internal Rx Errors:"? I've noticed that this counter is increasing quite rapidly, even after clearing the interface stats first. This is occurring on my uplink interfaces (e13 and e14), which are the gig interfaces on the CSS. I've found an explanation of the errors in the CSS documentation:
"Internal RX Errors
The number of frames for which reception on the interface failed due to an internal MAC sublayer receive error."
But I'm still a little unclear about how to rid the CSS of these errors, as well as what is and isn't an acceptable threshold for them.
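Since the CSS only exposes a raw cumulative counter, one way to establish a threshold is to poll it periodically and watch the rate of increase rather than the absolute value. A minimal sketch in Python (the counter values themselves are assumed to come from the CLI or SNMP; the function name and polling interval here are illustrative, not part of any Cisco tool):

```python
# Hypothetical sketch: estimate the "Internal RX Errors" rate by sampling
# the counter twice and computing the delta per second. The samples would
# come from the CSS CLI or via SNMP (dot3StatsInternalMacReceiveErrors in
# the EtherLike-MIB); here they are passed in directly.

def error_rate_per_sec(sample1, sample2, interval_sec, counter_bits=32):
    """Return errors/second between two counter samples, handling wraparound."""
    if interval_sec <= 0:
        raise ValueError("polling interval must be positive")
    delta = sample2 - sample1
    if delta < 0:  # counter wrapped past its maximum and started over
        delta += 2 ** counter_bits
    return delta / interval_sec

# Example: 1,500 new errors over a 5-minute polling interval
rate = error_rate_per_sec(10_000, 11_500, 300)
print(f"{rate:.1f} errors/sec")
```

Tracking the rate this way makes it easier to tell a steady background trickle from a sudden spike that correlates with real performance problems.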
The Rx errors mean the CSS is not getting packets back correctly from the service.
I found the following bug corresponding to the issue of Internal Rx errors on gig ports: CSCdv48405. You can view the details using the Bug Toolkit. This occurs because the CSS's gig ports only work in full-duplex mode.
Hmm...very interesting. The bug details do not explain whether the errors are something to be concerned about, or what can be done to fix the problem. I'll keep looking, though. Thank you for the information!
I think that I'm just going to keep monitoring the interfaces for now, since I don't have any hard evidence (e.g., dropped packets) that would lead me to believe this is causing a performance degradation. Thank you for the information; it was very helpful in getting to the bottom of this mystery!
In configuration #1, the "Internal RX Errors" on the CSS GigE ports indicated drops due to contention for slower egress ports (ingress is GigEthernet, egress is FastEthernet). We have since moved to configuration #2 (we no longer use the CSS GigEthernet ports), letting the Cat4006 switch handle the GigE-to-FastE buffering, but this does not appear to have improved the situation.
With configuration #2, the CSS is no longer logging the "Internal RX Errors" for the GigE ports. Instead, the Cat4006 is now logging "txQueueNotAvailable" errors for the FastEthernet port connected to the CSS.
Basically, GigE flow control doesn't seem to work between our servers and the Cat4006 and CSS switch ports. Luckily, all of the sensitive/critical applications hosted on our servers run over TCP, which takes care of retransmission!