My company is a Fortune 500 company, yet our networking director has always prohibited the use of 100 Mb/s to the desktop. From day one. His stated reasons are a fear of overloading servers with requests and a fear of spreading malware "even faster." We mostly just roll our eyes, and have decided to keep our jobs by keeping our mouths shut. We finally have a meeting with other senior managers present where we can make the case for 100 Mb/s to the desktop.
Would anyone care to mail me their favorite short list of reasons to speed up access? I know the basics, of course, but am looking for reasons I may have overlooked. And what about that "overloading servers" reason he has? Any suggestions appreciated. We'd really like to win this one for the users.
In my, admittedly limited, experience, I have never seen a user with sustained utilization above 2-3 Mbps. Is it possible to measure your utilization, maybe for a sample set of users, to show that "normal" network utilization will not be impacted, but that peak performance will be improved?
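If you can poll interface octet counters (e.g. ifInOctets over SNMP, or from switch show commands) at two points in time, the utilization math is simple. Here is a rough sketch; the counter readings, polling interval, and link speed below are made-up sample numbers, not measurements:

```python
# Estimate link utilization from two readings of an interface's
# octet counter (e.g. SNMP ifInOctets polled 60 seconds apart).
# All input values here are illustrative assumptions.

def utilization_pct(octets_start, octets_end, interval_s, link_bps):
    """Percent utilization of a link over a polling interval."""
    bits_transferred = (octets_end - octets_start) * 8
    return 100.0 * bits_transferred / (interval_s * link_bps)

# Example: 18 MB moved in 60 s on a 10 Mb/s port
start, end = 0, 18_000_000          # octet counter readings
print(utilization_pct(start, end, 60, 10_000_000))  # 24.0
```

Run this against a sample of your busiest users' ports at peak hours and you have hard numbers instead of opinions.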
The "spreading malware" reason is bogus; malware tends to be very small in size, so bandwidth is not a limiting factor.
What bandwidth are your servers connected with? If they are at 100 Mbps, you could propose an upgrade to both infrastructures and keep the 10:1 ratio.
As an aside, how are you limiting users' bandwidth? It is no longer possible to buy a 10 Mbps switch, and if you are setting the speed/duplex capabilities on each port you are likely to get mismatches needing support calls, etc. A poor use of resources, IMHO.
Mark, Thanks for your thoughtful reply. I particularly liked the way you phrased "normal utilization will not be impacted, but peak performance will be improved". This is exactly the type of condensed phrase I need for the meeting.
We limit bandwidth to the desktop by setting switch ports to Auto/10 and we have not had any issues; the switches are all HP 2524s, btw, but all our routers are Cisco. All servers get switch ports set to Auto, so they all connect at 100. We've developed a feeling for the various Unix and Wintel servers' requirements, and have solved early mismatch issues.
We'll first have to upgrade our switch-to-switch speeds to 1000 before we can start using Gb at the servers.
Thanks again, - Bill Higgins
Your boss must be on crack :-) But then again, the boss is not always right, yet he is still the boss. We have some devices connected at gigabit to the switch, and if you try to transfer big files, the network utilization doesn't even go up to 10%.
to bauti1428 -
Always the boss...
I'm going to do some research on how the servers limit their connections. My theory on this issue has always been to open up the network and let the servers handle the load. They should know how to back off when overloaded. There are limits to TCP connections, after all. I reckon that somewhere in the history of networking, our boss had a server that stalled out and died because of heavy incoming traffic and the boss never wants to see it happen again.
10 meg half duplex will give you about 4 meg of usable bandwidth before you start feeling the performance hit of collisions and retransmissions within the user domain. I would ask what the network users' time is worth to him. All newer equipment can handle today's load without a problem. Also tell him he is the only one I have ever heard use that argument; most everyone else in the IT world is looking to go faster with more bandwidth. You do have to look at your equipment and see if it is up to the requirements of putting everyone at a higher speed; if the equipment is early generation, then maybe you can coax upgrades out of him.
The best argument for increasing bandwidth is always network performance as experienced by the users. Are users twiddling their fingers while waiting for files to download? Is application response slow due to congestion?
Do you have any management tools to measure utilization? We run Statseeker and can show real-time and historical utilization for every port in the network. It is very easy to make a case for more bandwidth when you have the facts. In lieu of tools, even looking at utilization with show commands during your busiest time of day can tell you a lot. Ping response times are another indicator of congestion.
Most networks that deliver 100mb to the desktop are using gigabit uplinks for access switches as well as gigabit to busy servers. If your users really need 100mb you almost certainly need higher capacity upstream as well. If the bottleneck in your network is server bandwidth, or server horsepower, your users will not notice any improvement when you run 100mb to the PCs.
That's kind of funny to me. I hope that is not the only defense being used against such malware. Before I upgraded our network with Cisco gear, we had old 10mb connections to the desktop. When the Blaster virus hit, it had no problem spreading right away. I doubt things would have been any worse with faster network connections. Besides, it was probably our low-bandwidth T1 that infected us in the first place. I just don't see that as much of a defense against anything.
The proper design would be the fastest connection you can get to your desktops, with a minimum of 100mb. If you want to limit the impact of malware, design a QoS structure with a scavenger class. Also, if there is an issue with overloading your servers, then they need to be upgraded too.
I would approach this meeting with a common-sense desire to future-proof your network infrastructure. No one should be running 10Mb to the desktop today. Why?
Well, for one, imaging machines using Ghost/RIS is quite common in enterprise environments. This would take a lot longer with MS Vista (a very large OS) over a 10Mb connection.
A user's time is very valuable. A Fortune 500 should know that! If a user's files or Outlook take just a few seconds longer, multiply that by the number of users and by the number of instances of the delay per year: with just 1000 or so users, those few seconds of wait time add up to millions of dollars per year.
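That multiplication is easy to put in front of management. A back-of-the-envelope sketch, where every input (delay length, delay frequency, loaded hourly cost) is an assumption to replace with your own figures:

```python
# Back-of-the-envelope annual cost of network wait time.
# All inputs are illustrative assumptions, not measured figures.
users = 1000
seconds_per_delay = 5      # extra wait per slow operation
delays_per_day = 60        # slow operations per user per day
workdays = 250
loaded_rate = 50.0         # fully loaded cost per user-hour, USD

wasted_hours = users * seconds_per_delay * delays_per_day * workdays / 3600
annual_cost = wasted_hours * loaded_rate
print(f"${annual_cost:,.0f} per year")   # $1,041,667 per year
```

Even with conservative inputs, the number tends to dwarf the cost of the switch upgrade.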
Time is more valuable than money. If your management spends a few thousand dollars to upgrade the network, they will make that money back through increased productivity. Also, do you foresee your organization running IP telephony?
Well, you can share a single data connection to the desktop between a phone and a PC. The phone functions as a small switch and provides a PC port. This is even greater savings: a single cabling infrastructure.
I won't even go into the reasons why a 10Mb port is an ineffective protection from viruses, etc., but I will say this.
The server should be at 1000Mbps if it's at risk of being saturated. I have seen servers with sustained 400+ Mbps traffic flows all day long on a gigabit port. This was at a company with only 2000 users, and it was a Windows file server.
Regarding the earlier comment about users not seeing any benefit from 100Mbps: that is not true. What if the person is running an Outlook archive job, or copying their documents between their desktop and the server?
Users are famous for sitting around waiting for one PC task to complete. Your boss is throwing the company's money away: users are being paid to sit idle because of his slow network. I would look for another job, get it, and go over his head to get him fired. Serves him right.
Thanks to all for the very helpful suggestions. It is always best to be prepared well for a corporate battle in the conference room. Every suggestion was well received, and now I will prepare my presentation.
Especially appreciate the last comments from jbrunner007 on preparing a network for the future, and from dgahm on network perf tools.
- Bill H
Nowadays, access switches support 10/100/1000, and most desktop PCs and laptops do too. Switch ports connected to users are best left at auto/auto, since the connection is not permanent: different users may share one port (this is not impossible) and connect a PC with 10BaseT, 100BaseT, or 1000BaseT. It's also common practice for users to share files between their PCs, so it's best to have 100BaseT or above between them; connections between users in the same department should not reach the trunk or gateway if they are in the same VLAN.
However, user connections to the server farm should be controlled using a QoS device. If you allow 100BaseT or even 1000BaseT from user PCs all the way to the server farm (and a large corporation, a Fortune 500 company, may have several thousand users on the network), then even if you channel multiple GE links or use 10G on a trunk, the servers will not be able to serve that much bandwidth to all users. Even then, the server admin may throttle the application (e.g. FTP, web) to a minimum bandwidth per connection, possibly even lower than 10Mbps depending on the number of hits, because in the end the server has at most a 1000BaseT connection to the network, and its processing power is also a limit. Note that not all applications can be configured to throttle bandwidth per connection. Bottom line: if you allow high bandwidth from user PCs to the server farm and for some reason it is fully or nearly utilized per user, either the trunk or the server becomes a bottleneck affecting all other services, especially important ones like email, network printing, NetMeeting, VoIP, and internet access.
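The fan-out problem above is just division: one gigabit server uplink split across many simultaneous clients leaves each client far below their access-port speed. A quick sketch with assumed, illustrative numbers:

```python
# Why a gigabit-attached server can't give every client full speed.
# Both numbers below are illustrative assumptions.
server_uplink_bps = 1_000_000_000   # one GigE NIC on the server
concurrent_clients = 500            # simultaneous active transfers

per_client_bps = server_uplink_bps / concurrent_clients
print(per_client_bps / 1_000_000)   # 2.0 -> ~2 Mb/s each, well under even 10BaseT
```

In practice most clients are idle at any instant, which is why 100Mb access ports still work fine behind a gigabit server: the contention only bites when many transfers truly overlap.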
Unless your application footprints change, you're not going to increase the traffic loads on your systems.
Effectively, unless the applications and usage change, the amount of data transferred isn't affected. What does happen is that the workstation tends to spend even less time talking to the network, as most PC traffic tends to be bursty. IE: checking mail, loading a web page, saving a file to a server, reading a file from a server.
If you transfer 100 megabytes of data, you transfer 100 megabytes. The difference is whether you spend 16 seconds (at ~50 Mbit/s effective on 100/full) or 200 seconds (at ~4 Mbit/s effective on 10/half) doing so.
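The transfer times above fall straight out of the arithmetic; here is the calculation, using the same effective throughput figures assumed in that post (~50 Mbit/s usable on 100/full, ~4 Mbit/s usable on 10/half):

```python
# Time to move a fixed payload at different effective throughputs.
MEGABYTE = 1_000_000
payload_bits = 100 * MEGABYTE * 8          # 100 MB file, in bits

def transfer_seconds(effective_bps):
    """Seconds to transfer the payload at a given effective rate."""
    return payload_bits / effective_bps

print(transfer_seconds(50_000_000))   # 16.0  -> ~50 Mb/s effective on 100/full
print(transfer_seconds(4_000_000))    # 200.0 -> ~4 Mb/s effective on 10/half
```

Same bytes on the wire either way; the only thing that changes is how long the user sits waiting.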
Viruses and malware can be very effectively managed via security policies, properly patching and securing your hosts, regular anti-virus/anti-spyware scanning, SMTP virus scanning, user education, and even network-based intrusion detection.
The biggest thing you can do is simply hold a person accountable when they do something that jeopardizes the network. IE: Unleashing elf bowling on the office.