We have two Microsoft ISA servers attached across a backbone in separate locations. They are configured to cache HTTP traffic and load-balance using multicast. All clients on separately routed VLANs point to the virtual IP address of the ISA server cluster, using this address in their proxy settings.
Ever since the cluster went live, the CPU utilization on each backbone switch has hit 90%. The utilization goes up and down as you would expect during office hours; out of hours, the CPU is at about 6%. Around 12 noon, when everyone wants to use the internet, the CPU on both core routers for the VLAN that the ISA servers sit on hits 90%.
I have checked the VLAN interface and the input packet counts are huge, with a high percentage of dropped packets and the input queue filling up regularly.
Is it possible that the SVI is having to process-switch the multicast packets from the client PCs in order for them to reach the ISA servers somehow?
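For what it's worth, the split between processor-switched and hardware-switched packets on the SVI can be checked directly (Vlan200 here, matching the output below):

```
show interfaces Vlan200 stats
```

A large and growing "Processor" in-packet count would confirm that traffic is being punted to the route processor rather than switched in hardware.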
PIM sparse-dense mode is configured on all routed SVIs, with two RPs and one mapping agent.
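For context, the multicast configuration is along these lines (the loopback numbers and scope value here are placeholders, not the exact config):

```
ip multicast-routing
!
interface Vlan200
 ip pim sparse-dense-mode
!
! on each of the two candidate RPs
ip pim send-rp-announce Loopback0 scope 16
!
! on the mapping agent
ip pim send-rp-discovery Loopback0 scope 16
```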
The SVI counters below were cleared about two hours before this output was taken. All other SVIs are nowhere near this input level:
Vlan200 is up, line protocol is up
Hardware is Cat6k RP Virtual Ethernet, address is 0030.8515.4d02 (bia 0030.851
Internet address is x.x.x.x/24
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output never, output hang never
Last clearing of "show interface" counters 04:49:29
Not sure about this one, it's quite a puzzle! I am just going to throw out a few ideas; hopefully one gives you a clue. Could you possibly be experiencing dense-mode-only flooding? Have you checked whether your PIM routers are switching to sparse mode after they have received traffic from the RP? If you have dense-mode flooding problems there is obviously an issue somewhere; however, Cisco has introduced the Auto-RP listener feature to avoid such scenarios, which runs in sparse mode.
Another occasion on which I had a 90% CPU load on 6500 switches was when there was an STP loop. Have you checked the sh proc cpu output; does it give any clues? What does the output of the sh int command say for the physical ports the multicast servers are connected to?
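A few standard IOS commands that usually narrow this down (the interface is a placeholder; substitute the port your servers are on):

```
show processes cpu sorted 5sec
show spanning-tree detail | include ieee|occurr|from
show interfaces GigabitEthernet3/1 | include rate|drops
```

The spanning-tree line shows when the last topology change occurred and which port it arrived from, which is a quick way to spot a loop.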
Are the RPs and MA on the same routers which have the problem? If yes, have you tried moving them?
Have you tried storm control on the switch ports where the multicast servers connect, to see if the problem goes away?
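If you do try it, per-port storm control looks roughly like this (the port and the 10% threshold are placeholders, and multicast storm control is only supported on some 6500 line cards):

```
interface GigabitEthernet3/1
 storm-control broadcast level 10.00
 storm-control multicast level 10.00
```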
First, you need to determine what is contributing to the high CPU:
sh proc cpu
CPU utilization for five seconds: 0%/0%; one minute: 0%; five minutes: 0%
In the five-second interval, the first number is total CPU utilization and the second is the portion spent at interrupt level. If most of the utilization is at interrupt level, then it's traffic related; otherwise, look at which process is hogging the CPU. Second, if you determine that the high CPU is on the RP and it's traffic related, then work out what kind of traffic is hitting the RP and why. Packets should be switched in hardware and not processed by the RP in software.
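A few commands that help with both steps (all standard IOS; Vlan200 taken from the interface output above):

```
show processes cpu sorted
show interfaces stats
show ip traffic
show buffers input-interface Vlan200 packet
```

`show interfaces stats` breaks the counters down into processor-switched vs. hardware-switched per interface, and the `show buffers input-interface` variant lets you look at the actual packets sitting in the input queue.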
Hope this puts you on the right path to resolving the issue.