A FastEthernet interface on the 6500 is cabled to a gig interface on the 3750, and the 3750 is trunked to AT&T transport. For testing, instead of a trunk, the interface facing the 6500 is in its own VLAN. A second interface on the 3750 is placed in that same VLAN and runs to my test laptop. Point-to-point IPs on the 6500 and my laptop. I can ping back and forth fine, run iPerf to a server, etc.
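For reference, here's a sketch of the 3750 test config (the VLAN number and interface IDs are made up):

! test VLAN carrying the 6500 and the laptop
vlan 100
 name SRR-TEST
!
interface GigabitEthernet1/0/1
 description to 6500
 switchport mode access
 switchport access vlan 100
!
interface GigabitEthernet1/0/2
 description to test laptop
 switchport mode access
 switchport access vlan 100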
Using that exact setup, in every test I could clearly see the bandwidth limit in both the iPerf output and the PRTG graph: with srr-queue bandwidth limit 50 set, the graph would flatten and plateau at roughly 50 Mbps. But when I put the same srr-queue command on a customer's interface, set to 10, it doesn't seem to do anything. I keep seeing the customer push over 10 Mbps: spikes, no plateau. I have a second customer set to 45, and that doesn't work either; they peaked at 60 Mbps last night.
The only difference is that the customers' traffic is transported across the AT&T cloud, while my tests ran to my laptop.
It works great with my laptop and not at all with customers. Both the test interface and the customer interfaces are configured as switchport access vlan ###.
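To be explicit about placement, the customer interface looks roughly like this (the interface number is a placeholder):

interface GigabitEthernet1/0/5
 description customer handoff
 switchport mode access
 switchport access vlan ###
 srr-queue bandwidth limit 10

(Per the 3750 docs, srr-queue bandwidth limit caps egress on the port to a percentage, 10 to 90, of the port speed.)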
Basically: why does srr-queue bandwidth limit work properly in my iPerf test but not at all when applied to a customer?
Related, but a separate question:
police rate 54600000 bps burst 400000 bytes
Placing this on the core interface itself kills all traffic. The interface stays up, but nothing gets across. The class map's access-group match just points to an access list that permits ip any any.
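For reference, here's how I'd expect the equivalent to look in MQC form on a 3750; the names and interface are placeholders, and my understanding is that the 3750 wants mls qos enabled globally, takes police <rate-bps> <burst-bytes> with no keywords, and only applies service policies inbound:

mls qos
!
ip access-list extended PERMIT-ANY
 permit ip any any
!
class-map match-all CUST-ALL
 match access-group name PERMIT-ANY
!
policy-map LIMIT-55M
 class CUST-ALL
  police 54600000 400000 exceed-action drop
!
interface GigabitEthernet1/0/24
 service-policy input LIMIT-55M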
I need a good QoS/rate-limiting/policing primer. I'm open to anything at this point, because some days I can't get shit to work, and some things don't behave the way I understand them to.
I took a GK switching course, and not even the instructor could adequately tell me how in bloody hell I'm supposed to set up bandwidth limiting.