Gigabit switches - expected server-to-server speeds?
I have some of our server infrastructure people building a test environment for a new application we want to deploy. As part of this they need to copy very large files (90 GB) from server to server.
I set them up on their own VLAN. They have 3 servers, each connected to a 3750G-24TS or 3750G-48TS switch on our network. The core of our network is a Catalyst 6500/Sup720 running Native IOS, and the switches they're on are trunked to our 6500s.
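For reference, the uplinks are configured as standard dot1q trunks, along these lines (the port number and description are just illustrative, not our actual config):

    interface GigabitEthernet1/0/25
     description Uplink to 6500 core
     switchport trunk encapsulation dot1q
     switchport mode trunk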
They are trying to copy these files from a server on one switch to a server on another. The traffic passes through the 6500, but since both servers are on the same VLAN it is switched at Layer 2 rather than routed. The best they get is 100-120 Mb/s.
However, if we hook both servers directly into a single 3750G-24TS and run the same copy, they get over 500 Mb/s of throughput.
Each switch has a 1 Gb uplink to the core. These uplinks typically sit below 2% utilization, with bursts up to 5%.
I might expect a slight slowdown going across the core, but that's a far bigger drop than expected...
Any ideas why the performance is so low, or where to look? I've gone through every "show int <mod>/<port> counters errors" and "show controllers ethernet-controller" output I can, and there is no indication of any errors at all.
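For reference, these are the sorts of checks I've been running on the access switches and the uplinks (interface numbers are just examples):

    show interfaces gigabitEthernet 1/0/1
    show interfaces gigabitEthernet 1/0/1 counters errors
    show controllers ethernet-controller gigabitEthernet 1/0/1
    show interfaces trunk

In the first command I'm looking in particular at the speed/duplex line and the "Total output drops" counter on both the server ports and the 1 Gb uplinks, since the 3750s have fairly shallow buffers; everything comes back clean.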
Re: Gigabit switches - expected server to server speeds?
Be careful with this: once you tell your application people that your switch can deliver such-and-such throughput, they will expect that throughput. Keep in mind that to push that much data, server resources are one potential bottleneck. The application itself is another, depending on how it was built and how it's configured (throttling).
Just a word of advice so you don't end up as the fall guy during this test. Because, you know, application people don't think :)
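If you want to take the disks and the copy tool out of the equation, do a memory-to-memory test with something like iperf first (the flags below are iperf2 and purely illustrative, and 10.1.1.10 stands in for one of your servers):

    server A:  iperf -s
    server B:  iperf -c 10.1.1.10 -t 30 -w 256k

Also remember that a single TCP stream tops out at roughly window/RTT. With a default 64 KB window, 64 KB x 8 / 1 ms is only about 524 Mb/s, and if the RTT over the trunked path creeps up to 4-5 ms for any reason, you land right in the 100-120 Mb/s range you're seeing. So try a bigger window (-w) or parallel streams (-P 4) before you blame the switches.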