I am not sure there is any way to do exactly what you want, even with custom keepalive scripts. But if the end goal is to ensure the front-end client connection is not re-established to the sorry server once the primary service becomes ALIVE, you can try adding 'no persistent' to the content rule and 'persistence reset remap' to the global config. This will cause the backend connections to be remapped from the sorry server to the restored primary server once it comes back online.
From the manual:
"If you configure the persistence reset remap command in the global configuration and no persistent command on the content rule, when a local service becomes available again, the CSS remaps any new or in-progress persistent connections to the local server from the sorry server. Otherwise, new connections go to the available local services, but in-progress persistent connections stay on the sorry server."
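Putting those two commands together, a minimal sketch of the config might look like this (the owner and content-rule names are placeholders; only 'persistence reset remap' and 'no persistent' come from the passage above):

```
! Global configuration
persistence reset remap

! Content rule (owner/rule names are hypothetical)
owner customer-owner
  content primary-rule
    no persistent
```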
Yeah, I understand what it is you're trying to do, but unfortunately I don't think there's any way to do it fully on the CSS. The idea is that the sorry server would store some sort of static content, like a 'server down' or 'ongoing maintenance' page, that clients would be directed to in case the primary service failed. Once the primary service comes back ALIVE, all *new* connections would be directed to the primary service, while existing connections to the sorry server could either (a) continue to be mapped to the sorry server until the entry in the persistence table expired, or (b) be remapped to the now-up primary service.
The only way I can think to implement what you want would require something like this:
- Turn on logging
- Configure an email address on the CSS that logs can be sent to
- Ensure the keepalive method is 'get' for the primary service
When a state transition takes place for the primary service (ALIVE to DEAD/DOWN), it is recorded in the traplog. Once this traplog entry is emailed to the customer, logic on their end can change the contents of the healthcheck file on the primary server when it sees that the primary service on the CSS has failed. The primary service will then continue to fail its keepalives due to a hash mismatch, and the customer would have to manually change the healthcheck file back to bring the server into rotation.
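The customer-side logic described above could be sketched roughly like this. Everything here is an assumption for illustration: the service name, the healthcheck filename, the traplog wording, and the function names are all hypothetical, and actually receiving and parsing the emailed traplog is left out.

```python
# Hypothetical customer-side watcher: when an emailed traplog line shows
# the primary service going DOWN, overwrite the healthcheck file so the
# CSS 'get' keepalive keeps failing (hash mismatch) until an operator
# deliberately restores it. Names, paths, and log format are assumptions.
from pathlib import Path

HEALTHCHECK = Path("healthcheck.html")   # file fetched by the CSS keepalive
HOLD_DOWN_BODY = "service held out of rotation\n"

def on_traplog_line(line: str) -> bool:
    """Return True (and spoil the healthcheck content) if the line reports
    the primary service transitioning to DOWN; otherwise do nothing."""
    if "primary-service" in line and "DOWN" in line:
        HEALTHCHECK.write_text(HOLD_DOWN_BODY)
        return True
    return False

def restore_service() -> None:
    """Manual step: restore the original content so keepalives pass again."""
    HEALTHCHECK.write_text("OK\n")
```

The point of the hash mismatch is that the CSS never brings the service back automatically; the operator decides when to call the restore step.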
I know it sounds a little far-fetched, but perhaps it can be done. I've seen variants of this method implemented, so it might be worth looking at.