A WAE installed between two sites for NAS filer replication has failed during numerous attempts to kick off NetApp SnapMirror replication. When we turn off TCP optimization, compression, and application services, the replication continues. A Cisco CCIE looked at it and threw his hands up. Has anyone troubleshooting WAE [WAAS] discovered a possible problem/resolution similar to what is described?
The boss is wrapped a little tight and warned me not to put it on the forum. To complicate things, this gear sits between the main building and the bureau, both of which are distant locations from me.
The config is similar to the one right out of the WAE quick configuration guide, which we entered via the console CLI. We enabled the TCP optimization groupings, et cetera, with the Java-based generic Cisco GUI.
Would it be out of the realm of possibility that the upstream routers with old Sup 720s could have their input and congestion buffers overrun by throughput hitting line rate for more than 5 seconds (hello for a large pipe) or 30 seconds (hello for a small pipe), dropping EIGRP control data, removing the route from the topology, and then dropping the path completely?
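To get a feel for how fast line-rate traffic exhausts a queue, here is a back-of-the-envelope sketch. The buffer size and excess rate are assumed purely for illustration, not taken from the Sup 720's actual specs:

```shell
# Assumed numbers for illustration: a 2 MB interface queue being filled
# 50 Mb/s faster than it can drain.
buffer_bytes=$((2 * 1024 * 1024))
excess_mbps=50
excess_Bps=$((excess_mbps * 1000000 / 8))   # bytes per second of excess

# Time (ms) until the queue is full and tail drop begins.
ms=$((buffer_bytes * 1000 / excess_Bps))
echo "queue full after ~${ms} ms"
```

The point: the queue fills in a few hundred milliseconds, so congestion sustained for mere seconds can easily outlast the buffer and start dropping hellos, and if it persists past the EIGRP hold time the adjacency goes down.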
I am asking my guys to look for EIGRP adjacency-loss errors in the logs around the time they test putting the WAAS back online.
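On IOS routers a lost adjacency shows up as a %DUAL-5-NBRCHANGE log message, and "holding time expired" is the variant that points at lost hellos (congestion/drops) rather than a link going down. A minimal sketch of what to grep for in a saved `show logging` capture — the sample log lines below are illustrative, not from our routers:

```shell
# Build an illustrative "show logging" capture (sample text only).
cat > showlog.txt <<'EOF'
Jan 12 10:01:13: %LINK-3-UPDOWN: Interface GigabitEthernet1/1, changed state to up
Jan 12 10:04:02: %DUAL-5-NBRCHANGE: EIGRP-IPv4 100: Neighbor 10.1.1.2 (GigabitEthernet1/1) is down: holding time expired
Jan 12 10:04:40: %DUAL-5-NBRCHANGE: EIGRP-IPv4 100: Neighbor 10.1.1.2 (GigabitEthernet1/1) is up: new adjacency
EOF

# Pull out only the adjacency losses caused by missed hellos.
grep 'DUAL-5-NBRCHANGE' showlog.txt | grep 'holding time expired'
```

If these show up timestamped to the WAAS tests, that would be strong evidence for the buffer-overrun theory.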
This is a dedicated pipe carrying only the NetApp IP data. We don't see this issue with other types of traffic, but that other traffic isn't optimized either.
In a former life,
I was experimenting with FCIP on a production IP path between buildings. Though I had the tunnel set for 500 Mb/s at an apportioned 550 Mb/s of an OC-12, and it was sustaining 500 Mb/s in one direction, a burst of traffic in the other direction close to 550 Mb/s overwhelmed the 6509 with circa-2003 Sup 720s; it dropped EIGRP control data and then the route/connection, causing flapping at Layer 8 of the OSI model (political). With that said, why are the upstream routers' buffers (input/output queue buffers as well as congestion management) never questioned?
I am glad to see someone else feeling my pain. Quick questions:
1. How big is the pipe?
2. When you say you have a 50% reduction, does that mean you could push ~400 Mb/s over an OC-12, and then when turning on optimization you see port stats showing throughput at 200 Mb/s?
3. Do you suspect the throughput reduction is caused by limitations of upstream routers?
4. What do you see when you check sh proc status?
5. What do you see when you check the ingress ports of the upstream routers? [There is a cool free Perl app named MRTG that you can configure to do an SNMP walk of all your routers and show daily, weekly, and monthly graphs of Mb/s.]
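For question 5, MRTG's graphs are just interface octet counters sampled twice and differenced. A minimal sketch of that same rate calculation — the counter values and the 300-second polling interval are made up for illustration:

```shell
# Two samples of an interface octet counter (e.g. IF-MIB ifHCInOctets),
# taken 300 seconds apart. Values are assumed for illustration.
octets_t0=1250000000
octets_t1=1625000000
interval=300

delta=$((octets_t1 - octets_t0))
# Octets -> bits, then per second, then Mb/s.
mbps=$((delta * 8 / interval / 1000000))
echo "ingress rate: ~${mbps} Mb/s"
# prints: ingress rate: ~10 Mb/s
```

If the graphed ingress rate on the upstream routers sits near line rate whenever the WAE is in the path, that points back at the buffer theory above.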