I'm looking at 2294 alerts from 1012 sources... I think there is a problem here.
Default configuration doesn't log anything (the sig is a meta). Turning on verbose alerts or logging just gives me the tail end of the conversation, which appears to be a normal SMTP session (I've seen RCPT TOs, QUITs, and even some content packets).
Haven't been able to get a full session since it's only triggering on our production MX gateway and I don't have a nice expen$ive correlation package to let me pick the alerting flows out of the rest of the noise.
Anyone else getting lots of 5748s?
I've received a lot more... have pcaps. Looks like the initial SYN, SYN/ACK, ACK gets the banner prompt from the SMTP server, then the next packet is a zero-length ACK of the banner before the following HELO.
This seems to be the case in the few I've checked. The flows are otherwise normal.
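To make the zero-length-ACK observation concrete, here is a minimal sketch in Python (my own assumed logic, not Cisco's actual engine): a reassembler that ignores zero-payload segments, so a bare ACK of the banner never becomes the "first four bytes" the signature is described as checking. The accepted prefixes (HELO, EHLO, XXXX) are taken from later posts in this thread.

```python
# Hypothetical sketch, NOT the shipping signature: reassemble the
# client-to-server side of a TCP stream, skipping zero-payload
# segments (e.g. the bare ACK of the SMTP banner), then check the
# first four data bytes the way sig 5748 is described.

ACCEPTED_PREFIXES = (b"HELO", b"EHLO", b"XXXX")  # XXXX: attributed in-thread to PIX Mailguard

def first_four_data_bytes(segments):
    """segments: iterable of payload bytes in stream order."""
    buf = b""
    for payload in segments:
        if not payload:        # zero-length ACK carries no stream data
            continue
        buf += payload
        if len(buf) >= 4:
            return buf[:4]
    return buf

def sig_5748_fires(segments):
    start = first_four_data_bytes(segments)
    return len(start) == 4 and start not in ACCEPTED_PREFIXES

# The sequence described above: handshake and banner ACK carry no
# data, so the first real client bytes are the HELO.
client_segments = [b"", b"", b"HELO mx.example.com\r\n"]
print(sig_5748_fires(client_segments))  # False: zero-length ACKs are skipped
```

Under this model the zero-length ACK is invisible to the check, which matches the developer's statement below that it won't trigger the sig.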
Drat!... I had this running clean on live traffic for well over two weeks before this was released. Yes, if you have pcaps, I'd love to see them and I'll modify the signature accordingly.
You can PGP encrypt them to me at firstname.lastname@example.org
To tail onto this thread... jkell sent me some pcaps, but unfortunately nothing triggered 5748-0 on my sensor. Looking at the pcaps, some were missing the three-way handshake, which gave me an idea: I set my sensor to not require the three-way handshake and set stream reassembly to loose... now I get alerts firing all over the place on parts of streams that start with SMTP verbs like "data", "quit", etc. So if there's asymmetric traffic, and/or the sensor is set to ignore the three-way handshake with loose stream reassembly, then yes, this alert is going to go off.
The signature itself is really tight: it looks only at the very beginning of the stream, just 4 bytes in. If those bytes aren't HELO, EHLO, or XXXX, the meta sig fires.
jkell mentioned the zero-length data ACK after the banner... that won't trigger this sig.
I have no problem adjusting/fixing this if there's legitimate traffic that's causing it to fire. I just need to see what traffic is causing it.
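The loose-reassembly finding above can be sketched in a few lines of Python (assumed semantics, not the real sensor code): when the sensor doesn't require the three-way handshake, whatever segment it happens to see first becomes the "start" of the stream, so a mid-session verb like DATA or QUIT trips the same four-byte check that a full session would pass. Case-insensitive matching is my assumption; the thread doesn't say.

```python
# Sketch of the described check (assumed logic, not the shipping sig):
# with loose reassembly and no three-way-handshake requirement, the
# first segment the sensor sees is treated as the stream start.

ACCEPTED = (b"HELO", b"EHLO", b"XXXX")

def fires_on_stream_start(first_segment: bytes) -> bool:
    # Case-insensitive comparison is an assumption for illustration.
    return first_segment[:4].upper() not in ACCEPTED

# A fully tracked session starts with EHLO and does not alert...
print(fires_on_stream_start(b"EHLO relay.example.net\r\n"))  # False

# ...but if asymmetric routing or a missed handshake means the sensor
# first sees a mid-session verb, the same check fires:
for verb in (b"DATA\r\n", b"QUIT\r\n", b"RCPT TO:<user@example.org>\r\n"):
    print(verb[:4], fires_on_stream_start(verb))  # all True
```

This reproduces the "alerts firing all over the place" behavior without any malicious traffic at all.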
We have seen 208 alerts from 10 different sensors at different client locations in about a 12 hour period.
The majority of these are coming from MessageLabs (a trusted vendor for email filtering).
Logging packets now so should have more information soon.
May consider filtering MessageLabs IP space to INTERNAL but would like to know if this is a false positive.
Anyone else seeing patterns?
Thank you for alerting us to this problem.
We will investigate further and update this thread once we know more.
Cisco IDS/IPS Signature Development Team
We have seen only 2 alerts on 1 sensor, from the mail server, to differing locations in a 24 hour period.
So the spread of this signature is only minimal.
However, there seems to be little info available on the known cause and solution to this signature.
We have just a brief explanation, sourced from the NSDB, with a 2 line summary.
Reading the earlier posts, it seems more involved than simply "caused by an SMTP session initiating with something other than HELO or EHLO".
Any info would be gratefully received.
Actually that's pretty much it; I'm only looking at the first 4 bytes sent in a stream to port 25. If they're not HELO, EHLO, or XXXX (the PIX's Mailguard feature), the alert triggers.
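One implication of "bytes sent in a stream to port 25" is that only the client-to-server direction is inspected, so the server's 220 banner never enters the check. A toy Python sketch of that direction filtering (my assumed semantics; port numbers and capture format are invented for illustration):

```python
# Toy sketch (assumed semantics): from a bidirectional packet list,
# inspect only data sent TO port 25 (client -> server), so the
# server's "220" banner is never what gets checked.

ACCEPTED = (b"HELO", b"EHLO", b"XXXX")  # XXXX: attributed in-thread to PIX Mailguard

def first_client_bytes(packets):
    """packets: list of (src_port, dst_port, payload) in capture order."""
    buf = b""
    for _src, dst, payload in packets:
        if dst == 25 and payload:   # client-to-server data only
            buf += payload
            if len(buf) >= 4:
                return buf[:4]
    return buf

capture = [
    (25, 51000, b"220 mx.example.com ESMTP\r\n"),  # server banner: ignored
    (51000, 25, b""),                              # zero-length ACK: ignored
    (51000, 25, b"EHLO client.example.org\r\n"),   # first client data
]
print(first_client_bytes(capture))                 # b'EHLO' -> no alert
```

With this model, a compliant session can never alert on its own banner, which is consistent with the developer's description.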
We just upgraded to S238 and are also seeing many hundreds of these and from different sources. I would investigate, but why bother? This signature does not appear to be particularly useful. While not technically RFC compliant, many mail servers will happily accept mail without a HELO/EHLO. It is not an indication of anything particularly interesting IMHO. Please correct me if there is something I'm missing here.
I'm going to respond to a number of points all in this post.
-For the most part, it's an RFC compliance signature. And you're correct: while technically not RFC compliant, many mail servers will accept mail w/o a HELO or EHLO, but your mail client should start with the HELO/EHLO. This signature firing *could* be an indication of an attempt to exploit some mail servers. It is rated low severity; I leave the decision of whether it's useful to the end user.
-That brings us to another point: the FPs jkell has seen. He's been working with me and a couple of the other developers to identify the issue. At this time, the traffic samples provided don't trigger the alerts on a standalone sensor, but do fire on the ASA/AIP-SSM. We are continuing to investigate the possibility that this is a platform-dependent problem. The fact that someone else is seeing a number of alerts fire on the same platform backs that up.
In the interim, since this signature doesn't necessarily indicate anything extremely horrible, it can be disabled. As we find out more, I'll keep this thread updated.
Just an FYI, it does fire on the 4250XL. I started noticing the 5748 firing after the S238 update as well; however, I came to the conclusion that it must be an asymmetric routing issue on our ISP side of the house. After a bit of research it seems our ISP side is trying to send outage alerts to our enterprise Exchange server, but there is a legacy asymmetric route between the devices within the network.
Same here (I started this thread). The FPs appear to be specific to the ASA/AIP-SSM combination, as the pcaps I've sent to the engineers on the list do not generate alerts on a standalone IPS appliance.
I'm more concerned now about the anomalous AIP-SSM behavior than about the original sig (which, as you noted, isn't a real showstopper to begin with).
More will be revealed, I suppose; we are collecting data to present to TAC formally.