Before I pull out all my hair and run screaming out into the snow, can I get a reality check on what's going on?
I've got a single ACE 4710 (an FT peer will be added later, once its services are migrated to the new config, i.e. out of the admin context).
My design plan was to have critical services use DNS round robin across two virtual IPs: one managed by context OVIP and the other by context EVIP (odd and even respectively). Once the FT peer was added, this would split the traffic roughly in half between the two ACE 4710s, so only half the traffic would see any effect from a peer failover. This is a one-armed connection, as the real servers are scattered all over the place pending a network cleanup.
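For concreteness, the round-robin piece is nothing more than two A records for the same name, one per context (hypothetical zone fragment; the hostname is a stand-in, the addresses are the two VIPs from my config):

```
; ldap.example.com is a made-up name for illustration
; each lookup rotates between the two VIPs, one per ACE context
ldap.example.com.   300  IN  A  220.127.116.11   ; EVIP context VIP
ldap.example.com.   300  IN  A  18.104.22.168    ; OVIP context VIP
```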
From my attached config you'll see that I've configured an LDAP service running on EVIP at 220.127.116.11. This tests and works fine, including ping responses. I then add the service's matching pair to the OVIP context, which uses the same VLAN 24 and subnet, at 18.104.22.168. As soon as the OVIP service is added, I can no longer use the service on 202, although it still responds to VIP pings. On reboot, the working session moves back to EVIP, and OVIP responds to VIP pings but doesn't respond to the TCP session. This looks like some kind of ACL clobbering going on between the contexts, which to my understanding should each have their own virtual interface onto VLAN 24 and so should be isolated.
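To save digging through the whole attachment, each context's config has roughly this shape (a sketch from memory, not a paste; the names LDAP-FARM, LDAP-VIP, CLIENT-VIPS, the rserver, NAT pool, and interface addresses are placeholders):

```
! EVIP context -- OVIP is the same shape with its own VIP on the shared subnet
access-list ALL line 10 extended permit ip any any

rserver host LDAP1
  ip address 10.0.0.10                  ! placeholder real-server address
  inservice

serverfarm host LDAP-FARM
  rserver LDAP1
    inservice

class-map match-all LDAP-VIP
  2 match virtual-address 220.127.116.11 tcp eq 389

policy-map type loadbalance first-match LDAP-LB
  class class-default
    serverfarm LDAP-FARM

policy-map multi-match CLIENT-VIPS
  class LDAP-VIP
    loadbalance vip inservice
    loadbalance policy LDAP-LB
    loadbalance vip icmp-reply active   ! why the VIP still answers pings
    nat dynamic 1 vlan 24               ! source NAT for the one-armed design

interface vlan 24
  ip address 10.0.0.2 255.255.255.0    ! placeholder; each context has its own
  access-group input ALL
  service-policy input CLIENT-VIPS
  nat-pool 1 10.0.0.50 10.0.0.50 netmask 255.255.255.0
  no shutdown
```

The interface ACL is a blanket permit in both contexts, which is why the cross-context clobbering surprises me.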
Any help would be appreciated.