We have a large implementation of ESX/VM servers in our environment: five clusters with as many as 7 ESX hosts per cluster. Each server has four virtual switch groups, three of which reside in the same VLAN. Each server has 2 quad-port NIC cards with all eight connections physically cabled, which means 56 1Gb connections for the cluster. Viewing network statistics in VMware's Virtual Console, our production cluster averages 0.7 Gbps, with a combined peak across all 7 servers of 3.3 Gbps.

I am trying to use the available switch resources in a responsible manner. What are the limitations on the number of virtual servers in a virtual switch group? I realize network usage can go up and down at any given time, so I have to be able to provide the necessary bandwidth. Based on the info I can get from the Virtual Console, I am led to believe that if I give each ESX server 4 1Gb connections I should be fine. Doing so would still allow 28Gb across the cluster, which should be more than sufficient.
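To make the capacity argument concrete, here is a quick sketch of the arithmetic using the figures quoted above (7 hosts, 8 vs. 4 1Gb links each, 3.3 Gbps observed combined peak); the numbers are from this post, not from any VMware sizing guide:

```python
# Headroom check for reducing ESX uplinks, using the figures in the question.
servers_per_cluster = 7
nics_per_server_now = 8        # 2 quad-port NIC cards, all cabled
nics_per_server_proposed = 4
link_speed_gbps = 1

current_capacity = servers_per_cluster * nics_per_server_now * link_speed_gbps
proposed_capacity = servers_per_cluster * nics_per_server_proposed * link_speed_gbps

observed_peak_gbps = 3.3       # combined spike across all 7 hosts

print(current_capacity)                                   # 56 Gbps cabled today
print(proposed_capacity)                                  # 28 Gbps after the cut
print(round(proposed_capacity / observed_peak_gbps, 1))   # ~8.5x headroom at peak
```

Even against the observed peak rather than the average, halving the uplinks still leaves roughly 8.5x headroom, which is the core of the case for reducing physical connections.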
I know many will say "check the VMware website," but I am trying to sort it out from the network side to build my case for reducing physical connections, as I am getting pushback from the server admins. Thanks for any info you can provide.
The answer is: "It depends on the hardware you are connecting to and on your network infrastructure."
From a locally switched perspective, you need to look at the backplane speed of the switches directly attached to your VM hosts, and whether any of the ports are oversubscribed (as they are on several switches / line cards). Then you need to know the path the traffic will take, and therefore the speed of the uplinks and of your distribution and backbone layers.
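The oversubscription point can be illustrated with a toy calculation; the port counts below are assumptions for the sake of the example, not figures from the question:

```python
# Toy oversubscription ratio for a hypothetical access switch / line card.
# A switch can have 48 1Gb edge ports but far less uplink capacity, so
# the edge links can never all run at line rate simultaneously.
downlink_ports = 48
downlink_speed_gbps = 1
uplink_capacity_gbps = 8   # assumed uplink bundle toward distribution

ratio = (downlink_ports * downlink_speed_gbps) / uplink_capacity_gbps
print(f"{ratio:.0f}:1")    # 6:1 oversubscription toward the core
```

So even if each ESX host keeps all eight 1Gb links, traffic leaving the access layer is throttled by whatever the uplinks and backbone can carry, which is why the local switch and uplink specs matter more than the raw NIC count.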
I'm afraid the question is far too ambiguous to provide a meaningful answer.