I am looking to limit the transfer rate on a single interface of a 2960 switch. The problem I am having is determining where and how to configure the shaper. From a little reading on these forums there are hints that rate limiting "inbound" traffic is not very effective; however, I could not tell whether this definition of inbound referred to traffic entering from outside the local network or, more generically, to traffic arriving on an interface.
I have attached a crude diagram to help clarify my confusion. Essentially, I would like to limit the transfer rate of the Server in the diagram to 4 Mbps for traffic that is leaving our network and going out to the internet. As nearly 100% of the traffic generated by this server leaves our network, I have no problem limiting the rate of the physical interface if that's possible and easier. But basically the traffic is inbound on FastE0/9, outbound on Gig0/0 or Gig0/1 (these might be mislabeled numbers-wise), inbound on Gig1/0/24, and outbound again on Gig1/0/23. In terms of interfaces, the limiting would be that traffic that came inbound on FastE0/9 should not go outbound on Gig1/0/23 any faster than 4 Mbps. The server is in VLAN 200, configured on the main routers, and for the purpose of this question is assigned an IP in the 192.168.0.248 /29 network.
What would be the best way of applying this shaper?
First, about the possible confusion concerning limiting "inbound" traffic; what's normally in mind, regarding effectiveness, is reducing traffic utilization on a link that contains inbound traffic. In your example, this would be trying to limit your server's traffic to only 4 Mbps as it hits your routers, by controlling the traffic on those routers. However, there's usually little problem controlling traffic leaving your routers, again in your case, limiting your server's traffic being sent to the Internet.
Second, you didn't mention what platform the Internet routers are. Most Cisco routers support various methods to police or shape outbound traffic. The tricky part would be limiting traffic to 4 Mbps aggregate across both routers; it's easy for either router on its own to limit traffic to 4 or 2 Mbps.
If aggregation limitation is really necessary across both routers, and assuming you're not policing on the switch, it could be accomplished by policy routing such traffic through just one router so its volume can correctly be counted.
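As a rough illustration of that policy-routing idea, something like the following might work on a full IOS router. The ACL name, route-map name, and next-hop address (203.0.113.1) are all hypothetical placeholders, not taken from your setup:

```
! Hedged sketch: steer the server's traffic through one router so its
! volume can be counted (and limited) in a single place.
ip access-list extended SERVER-SRC
 permit ip 192.168.0.248 0.0.0.7 any
!
route-map SERVER-VIA-R1 permit 10
 match ip address SERVER-SRC
 set ip next-hop 203.0.113.1
!
interface Vlan200
 ip policy route-map SERVER-VIA-R1
```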
I would suggest avoiding policing, and using independent shapers on both routers of either 2 or 4 Mbps each.
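For reference, an MQC shaper on a full IOS router would look roughly like the sketch below. The class, policy, and interface names are made up; the subnet is the 192.168.0.248/29 you mentioned, and you would use 2000000 instead of 4000000 if shaping each router to 2 Mbps:

```
! Hedged sketch of independent outbound shaping on an IOS router.
ip access-list extended SERVER-OUT
 permit ip 192.168.0.248 0.0.0.7 any
!
class-map match-all SERVER-TRAFFIC
 match access-group name SERVER-OUT
!
policy-map SHAPE-SERVER
 class SERVER-TRAFFIC
  shape average 4000000      ! 4 Mbps CIR
!
interface GigabitEthernet0/0
 service-policy output SHAPE-SERVER
```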
The issue of aggregate bandwidth across both routers isn't really an issue as the routers are technically 3750 layer 3 switches.
As such there isn't load balancing occurring over the routers, except that the primary VLAN trunks are divided among the uplinks. So only one path exists out of the network from the server in question at any one time.
However, with the "routers" being 3750 switches, I am unsure how to easily shape the traffic, as they do not support generic traffic shaping (anywhere, I don't think) and I am not too familiar with SRR queuing. I think that in order to use SRR queuing I'd need to set up separate queues for the interface I'm looking to limit versus the rest of the interfaces, and that it would best be done on the 2960.
In order to avoid that (assuming I'm correct about needing to configure multiple queues), I'm curious whether you know how the srr-queue bandwidth limit speed is calculated. It takes an argument in the range of 10-90, which indicates the percentage of port speed to limit the bandwidth to; however, is that port speed based on the theoretical maximum of the physical port or on the operating speed of the port?
In other words I'm thinking of:
srr-queue bandwidth limit 40
Would this restrict bandwidth to 4 Mbps or 40 Mbps, or have I completely misunderstood the usage? As mentioned earlier, since 99.99% of the traffic coming from this server leaves the local network, I'm not concerned about local switching speeds being affected as well.
Regarding your question of what's the bandwidth percentage calculated against:
"Bandwidth Limit Configuration:
In order to limit maximum output on a port, configure the srr-queue bandwidth limit interface configuration command. If you configure this command to 80 percent, the port is idle 20 percent of the time. The line rate drops to 80 percent of the connected speed. These values are not exact because the hardware adjusts the line rate in increments of six. This command is not available on a 10-Gigabit Ethernet interface.
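To answer the arithmetic directly: the percentage is taken against the connected (operating) speed, not the theoretical maximum of the port, so `srr-queue bandwidth limit 40` on a port running at 100 Mbps gives you roughly 40 Mbps, not 4. One hedged workaround to get near 4 Mbps on a FastEthernet port would be to force the link down to 10 Mbps first:

```
! Hedged sketch: force the port to 10 Mbps, then cap egress at
! 40 percent of the connected speed, i.e. roughly 4 Mbps.
! The result is approximate (the hardware adjusts the line rate
! in increments of six).
interface FastEthernet0/9
 speed 10
 srr-queue bandwidth limit 40
```

One caveat: since the command limits output on a port, applying it on FastE0/9 caps traffic sent toward the server; traffic from the server would only be capped by the forced 10 Mbps link speed, unless the limit is also applied on the egress uplink, where it would affect other traffic on that port as well.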