I have recently taken over the role of network admin in my company.
We are doing a review of the network, as the previous administrator implemented it using generic configurations (Smartports macros).
One thing we have noticed is that flow control is configured on nearly all switchports connecting to desktop PCs. I have also noticed it configured on the trunk connections between the switches.
We are using 802.1Q to trunk between the switches, and we have a core stack made up of four 3750Es with eight port channels that connect to different switches in the business. There are other things on the switches that we want to get rid of, such as MLS QoS, which we don't use. I have read a bit about flow control, and from what I understand it is not really necessary at this layer. Our network is not overly busy, devices connect at auto 1000/full duplex, and we don't have a mass of data passing over the network.
Do I really need this feature?
If you are sure you will not be overloading ports to the point where the buffers fill and packet drops occur, then yes, you could turn it off.
To be honest, though, unless you really need to turn it off, is it doing any harm?
The answer is I'm really not sure. My concern is that the way it is configured in our environment is not consistent. For example, on some trunk links we have flowcontrol receive enabled, on others flowcontrol send enabled, with no obvious reason for the difference; I don't think the previous admin actually appreciated what he was configuring, as there are inconsistencies in the trunk configs. I guess what I am trying to achieve is a network that operates at optimal performance while also removing any unnecessary configuration from the switches. For example, our stack of four 3750Es, which is not fully populated at present and only does layer 2, consistently operates at 50% and above, which is unusual, wouldn't you say? Imagine if the stack was fully loaded and had to do routing. I doubt flow control is causing these problems, but I just want to justify for myself the need for its presence on our switches.
Flow control enables connected Ethernet ports to control traffic rates during congestion by allowing congested nodes to pause link operation at the other end. If one port experiences congestion and cannot receive any more traffic, it notifies the other port by sending a pause frame to stop sending until the condition clears. Upon receipt of a pause frame, the sending device stops sending any data packets, which prevents any loss of data packets during the congestion period.
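On Catalyst platforms this behaviour is set per interface. As a rough sketch of what a host-facing port with flow control might look like (the interface number, VLAN, and description are illustrative, not taken from your config):

```
! Hypothetical host-facing port on a Catalyst 3750
interface GigabitEthernet1/0/10
 description Desktop PC
 switchport mode access
 switchport access vlan 10
 ! 3750s honour incoming pause frames only; "desired" negotiates with the NIC,
 ! "on" forces it regardless of what the attached device advertises
 flowcontrol receive desired
```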
The 3750 switch can only receive pause frames; it cannot send them. So unless there is a flow-control mechanism built into the desktop PCs' NICs, you don't need flow control on the switchports. You may not need it on the trunk ports connecting the switches either, as neither side would send a pause frame during congestion; each would only expect to receive one and act upon it.
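If you do decide to strip it out, something along these lines would do it; as I understand it, "off" is the default on these platforms, so removing the command from the config has the same effect (interface numbers are illustrative):

```
! Return a single port to the default (incoming pause frames ignored)
interface GigabitEthernet1/0/10
 flowcontrol receive off
!
! Or apply to a whole range of access ports at once
interface range GigabitEthernet1/0/1 - 24
 flowcontrol receive off
```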
Thanks for the information.
I guess my next question is what is the impact of it being there if it is not used?
Most of the work in our business is actually done on one of 10 core servers. These are patched into the 3750. I have monitored the links to the desktop and they are really not utilised.
Does removing the flow-control feature from a switchport cause a spanning-tree recalculation or anything like that? Our network should be very simple: we only have four VLANs (workstation, server, printers, management); we have 3560s which at present do the routing between the VLANs (the 3750s were only introduced last week), and everything else goes out via our PIX firewall.
I just want to keep it as simple as possible, but I am concerned by the high utilisation of the new stack given that it is only doing L2 at present.
It should not impact anything; you can safely leave it on the ports. Removing it from the switchports will not cause any STP recalculation.
In my opinion, this command won't have much effect even if it stays on the switchports.
"Do I really need this feature?"
Perhaps not, but its purpose is really for sending hosts, not switch-to-switch links.
You note your devices are using gig, but you don't note what your uplinks are. If they were also just gig, it's simple for a single host transferring a large file to congest an uplink. If it pushes hard enough, it will overflow the uplink's outbound buffers, which will not only drop packets for the offending flow but might also drop packets for other concurrent flows using the uplink. Well-implemented flow control would signal the offending sending host to pause its transmission, hopefully before switch buffers overflow.
Between switches, flow control would pause all traffic across the interswitch link, hence blocking all flows and likely just forcing additional buffer overflow in the upstream switch, so it's often not as useful.
If the traffic in question is TCP, TCP will back off its transmission rate when it sees drops, so source flow control isn't quite as important; still, other flows will also be impacted, the offending TCP flow will perform below optimal, and there's the issue of non-TCP flows.
The above is how flow control should work, but much depends on device implementations and if your network isn't busy, you wouldn't encounter the situations where flow control would help much, such as across-the-network backups.
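Before deciding either way, it may be worth checking whether pause frames have ever actually been seen on a port. On the 3750 something like the following shows the negotiated state and the pause-frame counters; the exact output layout varies by platform and IOS version, but it is roughly:

```
Switch# show flowcontrol interface GigabitEthernet1/0/10

Port       Send FlowControl  Receive FlowControl  RxPause TxPause
           admin    oper     admin    oper
---------  -------- -------- -------- --------    ------- -------
Gi1/0/10   Unsupp.  Unsupp.  desired  off         0       0
```

An RxPause counter that stays at zero suggests the feature is configured but never actually exercised on that port.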
BTW, some switches can also provide source flow control for half duplex: they make the wire look busy to the sending host.
Makes sense and confirms the theory I have been reading from several sources.
Just for your info, I use port channels to the core switch of between 2 and 4 Gbps depending on the switch. We use two ports to provide a 2 Gbps link on the 2960Gs and four ports on the 3560Gs to give a 4 Gbps link. Probably not necessary in our small environment, but this is what was configured some time ago.
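For reference, a two-port bundle on a 2960G uplink might look something like the sketch below; the channel-group number and interfaces are illustrative, and the mode depends on whether the original config used LACP, PAgP, or static bundling:

```
! Hypothetical 2 x 1 Gbps bundle on a 2960G uplink to the core stack
interface range GigabitEthernet0/1 - 2
 switchport mode trunk
 channel-group 1 mode active   ! LACP; "on" = static, "desirable" = PAgP
!
interface Port-channel1
 switchport mode trunk
```

`show etherchannel summary` will confirm which member ports are bundled and whether the port channel is up.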
So, in summary, to close this post: it is a worthwhile feature to have on PC switchports but not worth having on trunk/uplinks?
"So, in summary, to close this post: it is a worthwhile feature to have on PC switchports but not worth having on trunk/uplinks?"
I would say it's better suited to host ports, such as PCs, than to interswitch links.
Of course, it's not the only way to deal with this issue, and it suffers from being an "all-or-nothing" response. Other methods include the ICMP source quench message (although in practice likely nothing uses it) or QoS to keep heavy flows from adversely impacting other flows. Some hosts even support QoS at their interface, such as the Windows XP QoS packet scheduler, but host QoS also seems seldom used.
BTW, using port channels to increase uplink bandwidth is good, but realize you can still have congestion on one of the channel links while other channel links sit unused. Careful selection of the channel hashing method, if more than one method is supported, may reduce the chances of this happening, but it can't be totally eliminated.
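On the 3750 the hash method is a single global setting that applies to all bundles; something along these lines lets you change and inspect it (src-dst-ip is just one of the available options, alongside src-mac, dst-mac, src-dst-mac, src-ip, and dst-ip):

```
! Globally select the EtherChannel load-balance hash
Switch(config)# port-channel load-balance src-dst-ip
!
! Inspect the method currently in use
Switch# show etherchannel load-balance
```

Hashing on the IP pair rather than a single MAC address tends to spread traffic better when many hosts talk to a few servers, as in your environment, though whether it helps depends on your actual traffic mix.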