We have a LAN with 35 Dell GX280 workstations and 15 Dell GX520 workstations connected to a fairly new IBM file server (sorry, no spec, but it is quite high-end, using RAID 5) running a SQL database. The server has a failover server controlled by Legato software, but at this point I don't think that is relevant.
My main point here is that most of the workstations are connected at 100 Mbps full duplex, but about 15 of them are only on 10 Mbps. To bring them all up to 100 Mbps we need to alter a setting on the switch ports and force the onboard NICs on each workstation to 100 Mbps full duplex. If we do this, will it put more strain on the server/SQL database, or will performance improve because we have opened up more bandwidth, "so to speak"?
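For what it's worth, here is a rough sketch of the kind of changes involved, assuming a Cisco-style managed switch and a box with Linux-style tooling for checking the link; the interface names (`FastEthernet0/5`, `eth0`) are placeholders for whatever your actual ports are called, and on the Windows workstations the equivalent setting lives in the NIC driver's advanced properties rather than a command line:

```shell
# On a Cisco-style switch, force a given access port to 100 Mbps full duplex
# (done per port, from interface configuration mode):
#   interface FastEthernet0/5
#     speed 100
#     duplex full

# On a Linux host, check what the NIC actually negotiated:
ethtool eth0          # look at the "Speed:" and "Duplex:" lines

# And force 100 Mbps full duplex, disabling auto-negotiation:
ethtool -s eth0 speed 100 duplex full autoneg off
```

One caution from general experience: if you hard-code one end of a link and leave the other end auto-negotiating, the auto side typically falls back to half duplex, which causes collisions/errors that hurt far more than the 10-vs-100 difference. Either force both ends to match, or leave both on auto.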
To simplify my question: is it better for the server to have all workstations on 10 Mbps rather than 100 Mbps? What are the differences in load on the server between the two bandwidths, in terms of hardware resources and the SQL database, i.e. processor usage on the server?
Thanks to all who respond.