10-20-2010 07:48 AM - edited 03-06-2019 01:37 PM
Hi,
Can somebody confirm for me that each time the root is lost within a spanning-tree domain, the negotiation starts all over and every switch claims to be the root until the real root is identified, while a Rapid Spanning-Tree switch keeps track of the root information learned from its neighbors and does not claim to be the root if it has no chance of becoming one?
I know that one main difference between Spanning Tree and Rapid Spanning Tree is that every switch transmits its own BPDUs in Rapid Spanning Tree, while in Spanning Tree a switch has to wait for a BPDU from the root before relaying it, but I am not sure whether the way the root is elected differs between these two protocols.
Thanks
Stephane
10-20-2010 09:10 AM
Hello Stephane,
Can somebody confirm for me that each time the root is lost within a spanning-tree domain, the negotiation starts all over and every switch claims to be the root until the real root is identified, while a Rapid Spanning-Tree switch keeps track of the root information learned from its neighbors and does not claim to be the root if it has no chance of becoming one?
Actually, I cannot confirm this.
I think that many people like to see the STP operation divided into three nice stages taking place in the entire network - first, the root switch is elected; second, the root ports are identified; third, the designated ports are identified and all other ports are put into the Blocking/Discarding state. This may be illustrative but it does not correspond to how STP really operates. The main problem is that this description assumes the entire process is synchronized across the switched domain, while in real life these three steps take place individually and independently on each switch with every single BPDU received.
In other words, every time a BPDU is received on a port, the switch running legacy 802.1D performs roughly the following evaluation:
1. If the received BPDU is superior to the BPDU currently stored for that port, the stored BPDU is replaced with the received one.
2. If the received BPDU is inferior to the one stored on the port, it is discarded, and the stored BPDU remains in place until it ages out after Max Age.
3. The switch then reevaluates its stored BPDUs: it determines the root switch, selects its root port, determines its designated ports, and puts all remaining ports into the Blocking state.
Of course, optimizations to this basic algorithm can be done but this is how I perceive the elementary working of 802.1D.
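To make the per-BPDU evaluation concrete, here is a minimal Python sketch of it. This is a toy model, not real switch code: a BPDU is simplified to a tuple (root_bid, root_cost, sender_bid, sender_port), where a lower tuple is superior, loosely mirroring the 802.1D priority-vector comparison; timers and port roles other than the root port are omitted.

```python
# Toy model of the legacy 802.1D per-BPDU evaluation.
# A BPDU is a tuple (root_bid, root_cost, sender_bid, sender_port);
# lower tuples are superior. All names here are illustrative.

def receive_bpdu(stored, port, bpdu):
    """Steps 1 and 2: keep only the better BPDU per port. An inferior BPDU is
    discarded (in real 802.1D the stored BPDU would only age out after Max Age)."""
    old = stored.get(port)
    if old is None or bpdu < old:
        stored[port] = bpdu   # superior: replace, then rerun the election
        return True
    return False              # inferior: ignored

def elect(my_bid, stored):
    """Step 3: decide the root and the root port from the stored BPDUs.
    Returns (root_bid, root_port); root_port is None if we believe we are root."""
    better = {p: b for p, b in stored.items() if b[0] < my_bid}
    if not better:
        return my_bid, None   # no superior BPDU stored: we claim the root role
    root_port = min(better, key=lambda p: better[p])
    return better[root_port][0], root_port
```

Note how the election is just a local computation over the stored BPDUs, rerun on each change - there is no network-wide synchronized phase.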
In RSTP, there is an important change to step 2: even if the received BPDU is inferior to the one stored on the port, it will be accepted if it comes from the designated or root bridge connected to that port, and the switch continues the reevaluation. This modification allows rapid acceptance of information sent from the current designated/root switch whose RSTP parameters have worsened for any reason. See the following document for slightly more information about this:
http://www.cisco.com/en/US/tech/tk389/tk621/technologies_white_paper09186a0080094cfa.shtml#inferior
Note that this document actually talks about a switch running RSTP erroneously believing itself to be a new root, and a non-root switch actually correcting its idea.
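As a rough illustration of that modified step 2, consider this toy sketch. It assumes the same simplified model of a BPDU as a tuple (root_bid, root_cost, sender_bid, sender_port) with lower tuples superior, and it simplifies "comes from the designated bridge on that port" to a sender-BID comparison against the stored BPDU:

```python
# RSTP change to step 2 (toy model): an inferior BPDU is still accepted when
# it comes from the bridge whose BPDU is already stored on the port, so
# worsened news from the current designated/root switch takes effect at once.
# BPDUs are tuples (root_bid, root_cost, sender_bid, sender_port); lower wins.

def rstp_receive_bpdu(stored, port, bpdu):
    old = stored.get(port)
    from_same_designated = old is not None and bpdu[2] == old[2]
    if old is None or bpdu < old or from_same_designated:
        stored[port] = bpdu   # accepted: even inferior news from the current
        return True           # designated bridge replaces the stored BPDU
    return False              # inferior BPDU from some other bridge: ignored
```

Compare this with legacy 802.1D, where the inferior BPDU would simply be ignored until the stored information aged out after Max Age.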
It is true that in RSTP, each switch sends BPDUs on its own and does not wait for the root switch to generate a BPDU so that it can be relayed. The BPDUs thus become more similar to the Hello packets used in many Layer 3 protocols (though they are not bidirectional until the so-called Bridge Assurance functionality is configured, which treats the BPDUs as true Hello packets sent from each and every port, including Alternate and Blocking ports). Thanks to this, an outage can be detected more quickly and precisely: if a BPDU is not received, it is clear that the problem lies between these two particular neighbors, not somewhere in the network between the root switch and this switch. But apart from that, there are no further important differences as far as I am concerned (not counting Proposal/Agreement, of course, but that is not about the election of port roles).
You wrote that the "RSTP keeps track of different root value of each switches" - I am not sure what you meant by it. Can you perhaps clarify?
Best regards,
Peter
10-22-2010 08:23 AM
Hi Peter,
First of all, thanks for that very helpful response.
What I meant by "RSTP keeps track of different root value of each switches" is based on my understanding of STP that any bridge which does not receive a BPDU in the last Max Age seconds will claim that it is the root until it receives a new superior BPDU showing that it is not the root.
I thought that RSTP, in order to speed up the convergence of the network, kept the root information learned from the other switches during the initial convergence and would not claim to be the root if it had seen a superior BPDU in the past.
Thanks again for your help
Stephane
10-22-2010 01:41 PM
Hello Stephane,
You are heartily welcome. A few more comments:
What I meant by "RSTP keeps track of different root value of each switches" is based on my understanding of STP that any bridge which does not receive a BPDU in the last Max Age seconds will claim that it is the root until it receives a new superior BPDU showing that it is not the root.
Well, the same goes for RSTP. If a switch running RSTP stops receiving all BPDUs, it declares itself the new root switch after 3x the Hello time. The main difference between STP and RSTP in this respect is that in STP, if the BPDUs got lost for any reason, the part of the spanning tree "under" the breakage remained silent for Max Age seconds - no BPDUs were sent, no BPDUs were received.
In RSTP, if a switch stops receiving BPDUs on its root port, it may still have other ports in the Alternate or Backup role that are still receiving BPDUs from their neighbors, because those neighbors send BPDUs on their own. So even with a breakage in the network, the switches still exchange BPDUs. And because the original root switch may still be known to those neighbors, they will simply continue to send their BPDUs declaring the original root switch. This effectively prevents our switch from ever declaring itself the new root, because it still learns from the continuously received BPDUs that there is a switch with a lower BID.
Note that this logic is not particularly about keeping track of what other switches say - you have to do that anyway if you want to keep your port roles stable, so it is not a dramatically new functionality (a Root port must receive the best BPDUs, a Designated port must send better BPDUs than it would receive, and the Alternate and Backup ports must receive better BPDUs than they would send). What is new is the persistence of sending BPDUs despite not receiving them on the Root port. That allows the original root information to be retained in the BPDUs, thereby limiting the erroneous declaration of self as the root when the original root switch is in fact still available.
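This failure handling can be sketched as follows. Again a toy model under the same simplifying assumptions: each port stores the best BPDU it has heard as a tuple (root_bid, root_cost, sender_bid) with lower tuples superior, and the 3x Hello expiry is represented simply by the caller naming the dead root port:

```python
# Toy sketch of RSTP behavior when the root port stops receiving BPDUs.
# Each port stores its best heard BPDU as (root_bid, root_cost, sender_bid);
# lower tuples are superior. All names here are illustrative.

def on_root_port_timeout(my_bid, ports, dead_port):
    """The root port's stored BPDU expired (3x Hello without a BPDU). Because
    neighbors on Alternate ports keep sending their own BPDUs, the original
    root is often still known there, so the best Alternate port is promoted
    instead of the switch declaring itself root."""
    ports = {p: b for p, b in ports.items() if p != dead_port}  # age out dead port
    candidates = {p: b for p, b in ports.items() if b[0] < my_bid}
    if not candidates:
        return my_bid, None                 # nobody better known: declare self root
    new_root_port = min(candidates, key=lambda p: candidates[p])
    return candidates[new_root_port][0], new_root_port
```

The point to notice is that the switch only falls back to declaring itself root when no port at all still carries news of a better bridge - exactly the persistence described above.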
Perhaps you meant this logic?
Best regards,
Peter
11-01-2010 02:00 PM
Hi Peter,
This is exactly the logic I was trying to understand, thanks for making this much clearer. This explains why convergence can be faster in the case where a local bridge loses its main link to the root but still has a neighbor that receives BPDUs from the root through another path.
Thanks for your help
Stephane