https connections, with certificate errors.

Jishar k
Level 1
Hi
We face issues with some https connections, getting certificate errors.
While logging in to any 'novell' site over https, a certificate error is shown.
This happens even if we add these sites in passthrough mode, where the https check is disabled.
Please provide a solution.
Note: I am attaching the access logs, a screenshot of the error message, and a snapshot.
Rgds,
J.k
8 Replies

khoanguy
Level 1

If you add the site to a custom category for pass through, verify in the access logs that it is matching the correct decryption policy and that the destination site is shown as pass through; use the grep command.

If it is not matching the correct decryption policy, it might not be set for pass through. Another option is to create an identity based on a single client IP address only (for testing), map it to a new top-most decryption policy, and set all sites to pass through; then re-attempt the HTTPS site to see whether the certificate errors persist.
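
If you have pulled a copy of the access log off the box, a quick script like the rough sketch below can help confirm what is actually matching. The file name and the assumption that the decision / policy name is visible on the same line are just placeholders; adjust to your own log format:

    import sys

    def show_matches(logfile, host):
        # Print every access-log line that mentions the destination host;
        # the decryption decision / policy name should then be easy to eyeball.
        with open(logfile, "r", errors="replace") as f:
            for line in f:
                if host in line:
                    print(line.rstrip())

    if __name__ == "__main__":
        # Usage (hypothetical file name): python check_accesslog.py aclog.current novell.com
        show_matches(sys.argv[1], sys.argv[2])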

Hi,

We have the same problem.

What we need is to block HTTPS requests based on our existing access policies and without decrypting the SSL payload. In other words, we do not want to maintain two sets of access policies, plus we see no reason to breach users' privacy by examining outbound communication.

How can this be accomplished?

hallvard.solem
Level 1

Try to put the decryption/access policy with the SIBCPoint passthrough further up in the policy list.

Thanks for your suggestion.

However, in our tests only the Global Decryption Policy was present and we still experienced this behavior. I'm really looking for a way to apply the existing policies to HTTPS traffic, without any decryption occurring at all and without having to deal with cert errors. How?

Alexandre,

Unfortunately, it's not technically possible to do what you are asking for. It's a chicken vs. egg scenario.

When HTTPS traffic comes into the WSA, it starts as SSL - the encryption negotiation. At this stage, there is no information about where the client is going other than:

  • The destination IP address (which doesn't tell us the server / host info. If it's a multi hosted server, it could be any site)
  • The server certificate's CN, which the WSA obtains by making its own SSL connection to the same destination IP address

At this stage, you could drop the HTTPS connection abruptly, using the "drop" action in the decryption policies. This compares the category hostnames to the CN provided by the server and then FINs the connection. The only issue with this is that since the SSL connection is terminated, the client will not receive any kind of friendly block page. The browser will see it as a network issue and provide an error message as such, leaving the client to believe they should just keep trying.

NOTE: The CN of a certificate can only state the FQDN, it cannot list other information, like directories and such. This means that any categories that specify directories or objects won't apply at this stage of the Decryption policy evaluation.
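
To illustrate how little is visible at this point, here is a rough standalone Python sketch (nothing to do with the WSA's own code) that connects to a bare IP with no SNI and reads the CN from whatever certificate the server returns - essentially all the host information a proxy has before any decryption:

    import socket
    import ssl

    def peek_cn(ip, port=443):
        # Connect to the bare IP without SNI and return the commonName from the
        # certificate the server presents. Verification against the default CA
        # store stays on so getpeercert() returns the parsed subject.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False            # we only have an IP, not a hostname
        with socket.create_connection((ip, port), timeout=5) as raw:
            with ctx.wrap_socket(raw) as tls:   # no server_hostname => no SNI sent
                subject = dict(item[0] for item in tls.getpeercert()["subject"])
                return subject.get("commonName")

    # Example with a hypothetical address:
    # print(peek_cn("198.51.100.10"))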

The only way to truly see exactly where the client is going, scan for malware, and be able to inject our own block pages is when the stream is decrypted. Preventing anyone from doing this is the whole point of encryption. So the WSA has to decrypt, which means the client receives a certificate signed by the WSA. If the client doesn't trust the WSA (no client will by default - that is also one of the main points of security in SSL encryption), you'll get trust warnings from the clients, like the ones you're running into.

Unfortunately, there isn't anything that can be changed about this. SSL was intended to be secure between the client and the server. By introducing a third-party device (WSA) to filter, you've essentially added an intentional "man in the middle", which the SSL protocol was designed to protect users from.

The only way for the users to not get the warnings is to push out the WSA's root decryption certificate to all of the client browsers. This will make them trust the WSA, just as they would any other trusted CA. Then you can decrypt and block without having to worry about client issues. The only remaining question is determining the best way to deploy the WSA root certificate to all of the clients.
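
As a client-side illustration only (the proxy address and certificate file name below are made up, and this uses the common Python requests library rather than anything WSA-specific), this is roughly what "trusting the WSA root" means in practice:

    import requests

    # Hypothetical explicit proxy address; adjust to your deployment.
    PROXIES = {"https": "http://wsa.example.local:3128"}

    # Without the WSA root in the trust bundle, this raises an SSL error,
    # because the site's certificate has been re-signed by the appliance:
    #   requests.get("https://www.novell.com/", proxies=PROXIES)

    # Once the exported WSA root certificate is handed to the verifier, it succeeds:
    resp = requests.get("https://www.novell.com/",
                        proxies=PROXIES,
                        verify="wsa-root-ca.pem")   # hypothetical exported root cert
    print(resp.status_code)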

I hope this clears up the issue. Please let me know if you have any further questions on this.

Cheers,

Josh

Josh, thanks a bunch for your detailed answer -- it does help.

If I understand correctly, even if decryption policies specify the "pass-through" action, traffic *is* decrypted on the box and re-encrypted using the target web server certificate. Doesn't this situation introduce security issues on its own?

Is there any way the existing ruleset (Access Policies) can be reused for HTTPS to avoid having to maintain two different rulesets on two distinct boxes (that's 4 config screens to update whenever a change is needed, which happens regularly here)?

Thanks.

If I understand correctly, even if decryption policies specify the "pass-through" action, traffic *is* decrypted on the box and re-encrypted using the target web server certificate. Doesn't this situation introduce security issues on its own?

This is incorrect.

The process that you have described is the "decrypt" action.

When the "pass through" action is choses, the WSA will not do any decryption at all. The WSA will only act as a tunnel, sending each byte from the server to the client and from the client to the server.  The encryption will be done between the client and the web server and the WSA will be unable to view any of the streams contents.

You will either need to decrypt everything and then use the access policies to granularly block / allow, or you will need to set certain decryption categories to drop, which terminates the connection at the SSL layer and will not send a block page back to the client.

The only thing I can really recommend regarding having to update 4 WSAs would be to look into getting a Security Management Appliance (SMA) so you can manage the configurations from one centralized location.

Cheers,

-----------------------------------------------------------------

Josh Wolfer - Senior Escalation Engineer - Web Security Appliances

Cisco IronPort Systems, LLC

We have been dealing with this problem since putting the IronPorts online. We have had to import the cert used by the IronPort itself into all of our workstations firm-wide to fix some of the issues.

When we had the HTTPS proxy enabled, we ran into defect #71012, where some websites don't decrypt properly and won't step down to TLS when the IronPort tries to get them to. What we have to do in that case is put the site in a No Decrypt custom URL category, so that it will allow access to the site but won't try the man-in-the-middle handling that can sometimes cause a problem.

Ron                        
