I am just starting to use ACNS equipment to stream video to our LAN and our WAN. I have a strange issue at remote sites.
On our LAN, video streaming over UDP works. However, ACNS uses a window size of > 1500 bytes, so the stream gets chopped into segments of +/- 4000 bytes, and each segment is fragmented on the network into +/- 4 IP packets.
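For intuition, a back-of-the-envelope sketch in Python (the numbers are assumed from the description above, not measured): with a 1500-byte MTU, each IP fragment carries at most 1480 bytes of payload, so a segment of roughly 4000-4500 bytes fragments into 3-4 packets.

import math

def ip_fragments(udp_payload_bytes, mtu=1500):
    # Fragment payloads must be multiples of 8 bytes, so a 1500-byte MTU
    # leaves (1500 - 20-byte IP header) = 1480 bytes per fragment.
    per_fragment = (mtu - 20) // 8 * 8
    total = udp_payload_bytes + 8  # the 8-byte UDP header rides in fragment 1
    return math.ceil(total / per_fragment)

print(ip_fragments(4000))  # -> 3
print(ip_fragments(4500))  # -> 4, so "+/- 4000 bytes" lands at 3-4 packets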
At the remote sites, I am unable to connect to the video stream. When I run a sniffer, I can see the packets arriving. However, of every 4 packets in a segment, I only receive 3; after that, nothing. Windows Media Player stays in the "buffering" state and disconnects after a while, because it misses the last packet of each segment. So now my questions are:
1) Can I increase the UDP receive window on Windows XP so that it can receive all fragments (see the socket sketch after this list)? I think it could be a UDP receive window issue on Windows.
2) Can I configure the ACNS device NOT to use a segment size of +/- 4000 bytes when streaming over UDP, but a "segment" and "packet" size of 1500 bytes instead, thereby preventing ANY fragmentation of the UDP stream packets?
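Regarding question 1, the per-socket knob is SO_RCVBUF. A minimal Python sketch (the port number is hypothetical, and Windows Media Player manages its own sockets, so this only illustrates the mechanism):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask for a 1 MB receive buffer; the OS may grant less than requested.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("receive buffer granted:", granted, "bytes")
sock.bind(("", 5004))  # hypothetical stream port

Note that if the sniffer shows a fragment never arriving at the host, IP reassembly fails before any socket buffer comes into play, which points at the network rather than the receiver (as the follow-up below confirms).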
Well, I dug a bit further into the "problem". It seemed to be the buffer allocation for Q2 in queue-set 2. The "6" in "buffers : 16 6 17 61" is just too small. I guess the buffer simply cannot hold the packets arriving on two Gigabit fiber interfaces and going out a single 100 Mbps interface. A single additional video UDP stream of +/- 600 kbps, marked AF31, was enough to saturate Q2T3.
(I tried to simulate this on a lab switch, but failed: no problem in the lab. Of course, I don't have the real-world OSPF, PIM, and best-effort traffic there, and the port assignments (ASICs) were also different, so it is very difficult to simulate.)
When I changed the maximums to "400 400 400 400" in production, the problem went away.
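For reference, a hedged sketch of the IOS commands for that change (assuming queue-set 2 from above; the threshold command takes threshold1, threshold2, reserved, and maximum together, so the first three values below are just the existing defaults re-entered):

mls qos queue-set output 2 threshold 1 100 100 50 400
mls qos queue-set output 2 threshold 2 100 100 50 400
mls qos queue-set output 2 threshold 3 100 100 50 400
mls qos queue-set output 2 threshold 4 100 100 50 400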
I also noticed that the switch sometimes dropped Q4 traffic (best effort) even though utilisation of the 100 Mbps link never went above 30 Mbps. Playing around with the "maximum" or "reserved" thresholds didn't help here. The cause seemed to be the default "25 25 25 25" buffer-space allocation (when not using AutoQoS). When I changed this to "10 30 10 50", the drops in Q4 went away. I also noticed that I could set the maximum anywhere from 1 to 3200, whereas Cisco uses 400 as the maximum everywhere (older IOS versions had this limit). I have put them on 1000 for now (just a number). So my settings are now:
Queue     :    1    2    3    4
buffers   :   10   30   10   50
threshold1:  100  200  100  100
threshold2:  100  200  100  100
reserved  :   50   50   50   50
maximum   : 1000 1000 1000 1000
and I haven't had a single drop in 5 days now (woohoo). Your mileage may vary, however...
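For completeness, a hedged sketch of the CLI that would produce the settings above (again assuming queue-set 2, and that the set is applied to the egress interface with "queue-set 2"):

mls qos queue-set output 2 buffers 10 30 10 50
mls qos queue-set output 2 threshold 1 100 100 50 1000
mls qos queue-set output 2 threshold 2 200 200 50 1000
mls qos queue-set output 2 threshold 3 100 100 50 1000
mls qos queue-set output 2 threshold 4 100 100 50 1000
show mls qos queue-set 2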
PS: Some very good test information regarding QoS on the C3560 can be found here: