There is a discussion going on about how many servers are best for a certain scenario, and I wanted to tap the collective wisdom of this group. The scenario: XYZ Engineering has a regional HQ office that is home to approximately 125 engineers. The engineers use software that generates large data files, including CADD, modeling, and structural analysis packages, among others, and there is also a standard support staff using the MS Office suite. Four regional branch offices also have connectivity to these servers for data sharing among the offices. The office currently has 3 data servers, each serving a particular logical area; for example, one server houses CADD files, one houses modeling data, one houses office data, etc.
One side of this debate believes it would be best to consolidate all the data on one server. This reduces hardware costs, provides a single backup point, and causes less confusion about where data lives. The other side believes that distributing the data among several servers provides natural load balancing of data I/O at both the disk array and the network interface, and that failures are contained rather than total. In other words, if all the data were on one server and that server failed, everyone in the office is out of work until it is fixed; if there are three servers and one fails, only one third of the staff is out of work. All advice, comments, and snide remarks are welcome:
I would go for the centralized data center/data server solution and put my efforts into providing a redundant server system. My guess is that server technology nowadays is so advanced that with the right hardware, a RAID array, and something like Microsoft Cluster Server, you can pretty much avoid a failure of the central server. As you said yourself, it makes backups much easier as well. You could also make sure there is a redundant and/or backup WAN connection from your remote offices to the central office.
Hi Georg, thanks for the input. It has been suggested that trying to pump 125 engineers running some seriously data-intensive apps through a single-interface, or even a dual-interface, server would create a bottleneck at the server. In other words, 125 times 100 megabits is much more than 2 times 1000 megabits. It has also been argued that regardless of how reliable server technology is nowadays, all it would take is one outage with an average 4-hour vendor response time to pay for the cost of additional points of failure. In other words, 125 engineers out of work for 4 hours would pay for several clusters :) I personally like the idea of the centralized data center, but I am not sure I am ready to bet my job on either Microsoft Cluster Server technology or advanced Dell equipment. What do you think?
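For what it's worth, here is a quick back-of-the-envelope check of the two numbers in that post. The bandwidth figures (125 clients at 100 Mbit/s, two gigabit NICs) come from the thread; the $75/hour loaded labor rate is purely a made-up placeholder for illustration:

```python
# Sketch of the two arguments above. The labor rate is hypothetical.
engineers = 125
client_mbit = 100          # per-desktop link speed, from the thread
server_mbit = 2 * 1000     # two gigabit interfaces on the server

aggregate_demand = engineers * client_mbit          # 12,500 Mbit/s worst case
oversubscription = aggregate_demand / server_mbit   # demand vs. server capacity

hourly_rate = 75           # assumed loaded $/hr per engineer, NOT from the thread
outage_cost = engineers * 4 * hourly_rate           # one 4-hour outage

print(f"Worst-case oversubscription: {oversubscription:.2f}:1")
print(f"Cost of one 4-hour outage: ${outage_cost:,}")
```

Of course the 6.25:1 figure assumes every engineer saturates their link simultaneously, which in practice is rare; real oversubscription ratios on office LANs are usually tolerable. The outage figure, on the other hand, is fairly robust to the exact labor rate: even at half that rate, one outage buys a lot of redundancy.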
Best thing would be to go for a SAN (Storage Area Network), which can give you high availability and great scalability. As far as redundancy is concerned, RAID capabilities and hardware redundancy options are available from the respective vendors.
Thanks for the suggestion. A SAN has been discussed and would be better than simply one server. However, the argument has been raised that there is still only one network interface; in other words, all of these engineers and their data-intensive programs are trying to get data into and out of the same pipe at the same time. The pipe is a gigabit-over-fiber connection, but that is still a lot of data to push through one pipe. Yes, no, maybe? What do you think?
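To put a number on the single-pipe concern: with one gigabit fiber uplink in front of the storage and the same worst-case assumption of 125 clients at 100 Mbit/s each, the oversubscription doubles relative to the dual-NIC server case. A minimal sketch, using only figures from the thread:

```python
# Single gigabit pipe vs. worst-case aggregate client demand.
clients = 125
demand_mbit = clients * 100   # 12,500 Mbit/s if everyone pulls at once
pipe_mbit = 1000              # one gigabit-over-fiber link

print(f"Worst-case oversubscription: {demand_mbit / pipe_mbit:.1f}:1")
```

Whether 12.5:1 is a real problem depends on how bursty the CADD and modeling traffic actually is; measuring current peak utilization on the three existing servers would settle it better than any ratio.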