We have a LAN of 20 switches: 19 Layer 2 2950s and one Layer 3 3560. There are 20 VLANs in total. The L3 switch acts as the default gateway for every VLAN/subnet, and all the servers are attached to it. Default STP is running on all switches, and one of the L2 switches has been (automatically) elected as root bridge.

When we added VLAN ports on the switch directly connected to the L3 switch (the server switch), server access suddenly became very slow from all over the network, so we rolled the switch configuration back to the original: all ports in VLAN 1 (which is also the server VLAN) except the trunk ports. CPU utilization was only 8%. I then added the ports back to the VLAN one by one, rather than with the interface range command, and checked server access after each port; it was perfect every time. After 40 iterations all the necessary ports were in the new VLAN and server access was still perfect.

I don't understand exactly why server access suddenly slowed down and then stayed normal when the ports were added step by step. As a precaution we have rolled the switch configuration back to the original. My feeling is that STP recalculation during that time slowed down server access. Coincidentally, the server switch has only one trunk (bad design: a single point of failure!), and in the whole network there is only one redundant link.

What could be the reason for the sudden slow response of the network? In the default STP version, do switches run STP per VLAN? That would mean we are running 20 STP instances, which might have caused the sudden delay in network access.
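For what it's worth, the default spanning-tree mode on both the 2950 and the 3560 is PVST+, which runs a separate STP instance per active VLAN, so with 20 VLANs there are indeed 20 instances. You can confirm this on any of the switches:

```
show spanning-tree summary
! "Switch is in pvst mode" at the top, followed by a per-VLAN
! table, confirms one STP instance per active VLAN
```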
If the ports are not configured for STP portfast, you are generating a storm of STP events: each port included in the interface range that is added to new VLAN X causes the STP instance for VLAN X to be recalculated.
It is also an event for the previous VLAN 1, so both VLAN X and VLAN 1 recalculate.
During STP topology changes the CAM aging time is reduced from 300 seconds to 15 seconds. This increases the probability that traffic is treated as unknown unicast and flooded out all ports in the VLAN, causing a performance reduction. The same happens on the VLAN the ports were associated with before the change.
By adding the ports one by one, the process is smoother.
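A minimal sketch of how the server-facing access ports could be prepared before any bulk VLAN move (the interface range and VLAN number here are placeholders, not Subodh's actual ports):

```
! hypothetical access-port range on the server switch
interface range FastEthernet0/1 - 24
 switchport mode access
 switchport access vlan 2
 spanning-tree portfast
```

With portfast set, a port coming up or changing state does not generate a TCN, so the rest of the network keeps its normal 300-second CAM aging instead of dropping to 15 seconds.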
"each port invoked in port range that is added to new vlan X causes STP instance for vlan X to be recalculated."
Not sure I agree with this, although I'm very happy to be proved wrong. It's something that has always confused me, to be honest. I used to think that TCNs always caused an STP recalculation, but I don't think they do. As far as I know, an STP recalculation will happen under two conditions:
1) when a bridge receives a BPDU advertising a better path through the L2 topology
2) when a bridge stops receiving configuration BPDUs from the root bridge
Neither of these would occur from what Subodh did with the port range command, as far as I can see, so I'm not sure there was a recalculation.
I do agree about the CAM aging time being reduced, though. Perhaps, because the range command was used, each port generated a TCN (assuming no portfast, as you say), and so the switches were continually aging out entries.
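One way to check this after the fact: the topology-change counters in the spanning-tree detail output show how many TCNs each VLAN instance has seen and which port the last one came from (VLAN 1 here is just an example):

```
show spanning-tree vlan 1 detail
! look for lines like:
!   Number of topology changes 37 last change occurred 00:02:13 ago
!           from FastEthernet0/12
```

A counter that jumped by roughly the number of ports moved would support the TCN-per-port explanation.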
I agree that the CAM aging time was surely shortened to 15 seconds as a result of all these events.
I had assumed that a TCN BPDU triggers an STP calculation, but as you suggest, that may not be the case.
Subodh made an important observation: CPU usage was not high during the issue, only 8%.
That points to the short CAM timers alone, rather than heavy STP computation.
Let me give an example of what excessive STP calculations can mean.
Some months ago, during a night-time upgrade of the uplinks of a server farm from 1GE links to 10GE uplinks, the server people complained about a performance slowdown.
In our case, CPU usage on the distribution switches (C6500 with Sup720-3BXL) went high, both peak and average, during the migration (the average peaked at 70-80% according to sh proc cpu history).
Unfortunately, none of the people performing the upgrade issued a sh proc cpu to see which processes were consuming the resources.
The day after, we noticed two things:
- some of the new links were allowing all possible VLANs on the trunk;
- by comparing the sh proc cpu history of the access-layer switches with AAA accounting, we could see CPU peaks on the access-layer switches too, in the minutes around each event of shutting an old GE link and enabling a new 10GE link.
In our case, involving uplinks, STP recalculation was likely the origin of the high CPU usage on the devices.
We tried to open a service request, but there were not enough details to work on, and we couldn't reproduce the environment because the migration had already been completed.
Probably something else had also gone wrong in our case, but STP played a role in the issue.
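On the "all possible VLANs on the trunk" point: pruning the allowed list keeps each uplink out of the STP instances it does not actually need to participate in, which limits how far topology changes propagate. A sketch (the interface and VLAN list are assumptions, not our real config):

```
! hypothetical 10GE uplink; carry only the VLANs actually needed
interface TenGigabitEthernet1/1
 switchport mode trunk
 switchport trunk allowed vlan 2,5,7
```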
After taking all the feedback into account: if I add the ports to VLAN X one by one, with spanning-tree portfast configured, and give the network sufficient time to converge, it should be good to go.
Could it be due to too many VLANs on the single L3 switch, causing the CPU to be overused?
Our next (and last) activity is to create two VLANs on this L3 switch, one for the servers and another for a different set of machines. I want to be sure this will not cause slow network access.
As of now, technically all users are in separate VLANs and the servers are in a separate VLAN; the only drawback is that the servers are in VLAN 1 while the users are in VLANs 3 to 18. We plan to move the servers from VLAN 1 to VLAN 2 and hope it will not slow down access.
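If it helps, the VLAN 2 side of that move on the 3560 is just a new VLAN plus an SVI acting as the gateway; the VLAN name and IP addressing below are purely assumptions for illustration:

```
! on the 3560 (L3 switch)
vlan 2
 name Servers
!
interface Vlan2
 ip address 192.168.2.1 255.255.255.0
 no shutdown
```

With ip routing already enabled for the existing SVIs, hosts in VLAN 2 reach the other VLANs exactly as VLAN 1 does today; moving the server ports one by one with portfast set avoids the TCN flooding discussed earlier in the thread.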