I have Windows 2008 R2 installed on a bare-metal B200 M2 blade and I'm seeing slow file transfers (70-150 KB/sec) trying to get to plain old Windows file shares on an IBM N5200 (NetApp gateway). No other workstations or servers see this slowdown, and I have Windows 2008 R2 VMs in the same UCS environment on different blades that aren't seeing this issue. The packet capture shows the server taking over 800 ms to resend the missing packet after receiving the ACK that informs it a packet was dropped. I have installed the Cisco Ethernet driver. I don't have a network flow policy or a QoS policy set. Any thoughts?
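For anyone trying to reproduce this measurement: in Wireshark/tshark you can isolate the retransmissions with the display filter `tcp.analysis.retransmission` and then compare timestamps by hand. A minimal sketch of that arithmetic, with illustrative timestamps (not from the actual capture), would be:

```python
# Sketch: measuring the gap between the ACK that signals a loss and the
# server's retransmission, using packet timestamps read out of a capture.
# The timestamp values below are hypothetical examples only.

def retransmit_delay_ms(ack_time_s: float, retransmit_time_s: float) -> float:
    """Delay in milliseconds between the loss-signalling ACK and the
    retransmitted segment, given capture timestamps in seconds."""
    return (retransmit_time_s - ack_time_s) * 1000.0

dup_ack_at = 12.4031   # duplicate ACK seen from the client (example value)
resent_at = 13.2544    # server finally resends the segment (example value)

print(f"retransmit delay: {retransmit_delay_ms(dup_ack_at, resent_at):.1f} ms")
```

Anything much over the connection's RTT here points at the sender sitting on the retransmission rather than the network dropping it again.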
I still have an open ticket on this issue. It was only slow copying a file down from the N5200 to the B200 M2; uploading a file from the B200 M2 to the N5200 was fast. I changed the load balancing mode on the NICs on the N5200 from Round Robin to IP Load Balancing, and once I did that the issue went away. Neither TAC, IBM, nor NetApp gave me this solution; I decided to try things out on my own. I think what I did is a workaround: the Windows driver on the M81KR apparently couldn't deal with the etherchannel mode on the N5200 being in Round Robin mode. Things are better now, but I shouldn't have to change the etherchannel mode for only one device out of a thousand-plus that have been working fine.
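For reference, on a 7-mode NetApp gateway the change described above would look roughly like the following. This is a hedged sketch, not the exact commands used: the vif name, member interfaces, and IP address are hypothetical examples, and the filer's `/etc/rc` would also need updating for the change to persist across reboots.

```shell
# Hypothetical ONTAP 7-mode commands; vif0, e0a/e0b and the IP are examples.
# Original multi-mode vif using round-robin load balancing would have been
# created with something like:
#   vif create multi -b rr vif0 e0a e0b
# Recreate it with IP-based load balancing so each flow sticks to one link:
ifconfig vif0 down
vif destroy vif0
vif create multi -b ip vif0 e0a e0b
ifconfig vif0 192.168.10.5 netmask 255.255.255.0 up
```

The `-b ip` hash keeps all packets of a given source/destination IP pair on a single member link, which avoids the per-packet reordering that round robin can introduce.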
Thanks for the update & glad to hear you're working properly now.
Round robin would not be an etherchannel; it simply sends traffic flows out on an a, b, a, b, etc. basis. There was no teaming driver for Windows until very recently. I'm curious whether you've tried the latest Palo teaming driver for Windows or not.
This sounds like a traffic path issue rather than a driver problem. Copying files to the N5200 would be fast because the blade likely takes a single consistent path from blade -> array. In the reverse direction, however, it sounds like multiple paths may be involved, some possibly less optimal or congested. The IP hash keeps a single path for traffic sent back toward the blade, which is probably why you're seeing better performance now. Without knowing the overall topology and information about other connected devices, it's hard to confirm.
See my other post for teaming driver information if interested.