For those of you running storm-control, I'm interested to hear where you set your thresholds. We're experimenting with a 10% threshold on gigabit host ports and a 15% threshold on gigabit trunk ports, for both broadcast and multicast storms, on the 3750 platform.
We see the 15% threshold get bumped just for a second, perhaps once or twice in a 24-hour period, by either broadcast or multicast traffic. There's no discernible pattern to the events. They seem to be false alarms, probably bursts of legitimate traffic, although a sniffer hasn't given us any clues as to what that might be.
So just by way of comparison, do any of you have thresholds that you're comfortable with in your environments?
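For concreteness, here's roughly what we're testing (the interface numbers are just examples, and we're using the trap action rather than shutdown):

```
! Host port: 10% threshold for broadcast and multicast
interface GigabitEthernet1/0/1
 storm-control broadcast level 10.00
 storm-control multicast level 10.00
 storm-control action trap
!
! Trunk port: 15% threshold
interface GigabitEthernet1/0/24
 storm-control broadcast level 15.00
 storm-control multicast level 15.00
 storm-control action trap
```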
There are actually no "best-practice" values for storm control. It varies from customer to customer, depending on the servers and applications. A good practice is to monitor this for a while and then tweak the values, either up or down. You can use commands such as "show storm-control" and "show ip interface g1/1" to baseline the value. You basically need to do some math during peak and non-peak hours and look at the total traffic, in bytes, allowed on that port. Any broadcast over that will be dropped.
Hope this helps. All the best. Rate replies if found useful.
The TAC gave me much the same answer, and that's logical enough.
Here's the answer from TAC, just for the record, since other people have asked the same question about storm-control thresholds:
"I noticed you want a recommended threshold for broadcast and multicast storm control. Unfortunately, there is no recommended threshold level, because it depends on the normal broadcast traffic on your network.
One way to determine it is to perform the following tasks during a normal day (with common/usual traffic flow and volume patterns):
1. Clear the counters
- switch#clear counters
2. Leave the switch working during 24 hours
3. Port by port (physical interfaces), check the number of broadcast input packets, multicast input packets, and total input packets.
- switch#show interfaces // look for:
- packets input value (TPI)
- Received broadcasts value (BPI)
- (multicast) value (MPI)
4. Let's do some mathematics:
- To get unicast packets: UPI = TPI - BPI
- Normal percentage of broadcast = (BPI/TPI) * 100
- Normal percentage of multicast = (MPI/TPI) * 100
- Normal percentage of unicast = (UPI/TPI) * 100
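As a sanity check, the arithmetic in step 4 can be run against sample counter values (the counter numbers below are made up for illustration; only the formulas come from the TAC answer):

```python
# Sample counters read from "show interfaces" after 24 hours
# (hypothetical values, for illustration only).
tpi = 1_000_000   # total packets input
bpi = 120_000     # Received broadcasts
mpi = 45_000      # (multicast)

upi = tpi - bpi   # unicast packets, per the TAC formula

broadcast_pct = bpi / tpi * 100
multicast_pct = mpi / tpi * 100
unicast_pct = upi / tpi * 100

print(f"broadcast: {broadcast_pct:.1f}%")  # 12.0%
print(f"multicast: {multicast_pct:.1f}%")  # 4.5%
print(f"unicast:   {unicast_pct:.1f}%")    # 88.0%
```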
That should give you an idea of the daily unicast, multicast, and broadcast percentages on your network and help you set a proper threshold.
The formulas above give only a general idea, a rough projection; the error range is large.
To know the real values, you would need to monitor the traffic for a month or two, gathering the same statistics, and perform a statistical analysis based on average and variance to get closer to a real-life value. Your network probably also has seasons: sometimes a low-traffic season, sometimes a high-traffic season. Traffic analysis requires taking constant traffic samples to adapt the thresholds to real life, and based on the statistical analysis you will be able to determine the margin to add to the threshold. For example, say you noticed the broadcast traffic is 12% and the error range is between +2.76 and -2.76; then the value to use for the threshold would be in the range 15% to 18%.
Please do not treat these formulas as the definitive way to determine the threshold; they are just to give a general idea, and deeper research should be done to determine it properly."
Thanks for giving me a calculation to work out the threshold as a percentage. Please let me know how to find the PPS (packets per second) value to assign on the switch as a threshold to stop broadcast, multicast, and unknown-unicast storms.
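For what it's worth, a percentage threshold can be translated into an approximate pps value if you assume a frame size. The worst case uses minimum-size 64-byte Ethernet frames plus 20 bytes of per-frame wire overhead (preamble + inter-frame gap); the frame-size assumptions here are mine, not from the TAC answer:

```python
def threshold_pps(link_bps: int, percent: float, frame_bytes: int = 64) -> int:
    """Approximate pps for a storm-control threshold, given a link speed,
    a percentage, and an assumed frame size. Wire overhead per frame:
    8-byte preamble + 12-byte inter-frame gap = 20 bytes."""
    wire_bits = (frame_bytes + 20) * 8
    max_pps = link_bps / wire_bits       # line rate in frames/sec
    return int(max_pps * percent / 100)

# 10% of a gigabit link, worst case (64-byte frames):
print(threshold_pps(1_000_000_000, 10))        # ~148,809 pps
# Same threshold assuming an average 512-byte frame:
print(threshold_pps(1_000_000_000, 10, 512))   # ~23,496 pps
```

The spread between those two numbers is why a pps threshold derived from a percentage is only as good as your frame-size assumption; measuring your actual average frame size first gives a more useful value.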
I wanted to add that we've found on the 3750 platform that the SNMP traps for what I'm calling storm-control "false alarms" come in pairs: a "trafficTypeFiltered" trap (storm-control kicked in, threshold exceeded) is followed by a "forwarding" trap (storm-control is off, traffic is back below threshold) later on. In our experience, we're seeing these traps 1 second apart, indicating to us that the switch saw a quick burst during the one-second storm-control monitoring interval, but during the next second life was okay. IOW, nothing to worry about.
We also found that there's NOT a syslog event for the "forwarding" trap. There IS a syslog event when the port starts to suppress, but based on syslog alone, you'd never know that the port went back to normal. Sort of annoying, but we set up our SNMP trap manager to handle the two traps so that we don't have to spend too much time on false alarms.