False Positives
What is a False Positive (FP) in security? Working with different analysts and consultants, I realised each has their own definition, and that misalignment led to inconsistencies when closing out security incidents. Common definitions of FP look at the flagged activity itself: whether or not it represents a threat. I have a differing view, and thought I would share it to see whether it makes sense or most would disagree.
Instead of looking at the outcome, I think we should look at the detection alert/rule and its results as a pair. Consider a couple of made-up alert examples.
1A. Network horizontal scan from a corporate machine
1B. Corporate machine hitting multiple hosts
The two alerts above could share the same SIEM logic over a span of one hour. If they pick up the same activity, a single machine in the corporate network reaching 50 different hosts within the hour, and investigation reveals that it was a user multitasking (video calls, multiple applications, and concurrent file uploads to different file shares), then I would say it is an FP for the first alert but a True Positive (TP) for the second, even though the logic is identical, simply because the second alert never claimed to have caught an attack or abusive behaviour.
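To make the point concrete, here is a minimal sketch in Python. Everything in it is made up for illustration (the threshold of 50, the event shapes, the `claims_threat` flag): the same detection logic backs both alerts, and the FP/TP verdict turns on what the alert title claims, not on what the logic caught.

```python
# Hypothetical sketch: one shared detection, two differently worded alerts.
# All names, fields, and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Alert:
    title: str
    claims_threat: bool  # does the title assert malicious intent?

def detect(events, threshold=50):
    """Shared SIEM logic: one source host reaching many distinct hosts in an hour."""
    distinct_hosts = {e["dst"] for e in events}
    return len(distinct_hosts) >= threshold

def verdict(alert: Alert, fired: bool, activity_was_malicious: bool) -> str:
    if not fired:
        return "no alert"
    if alert.claims_threat:
        return "TP" if activity_was_malicious else "FP"
    # A purely descriptive alert is a TP as long as the activity it
    # describes actually happened, regardless of intent.
    return "TP"

# One corporate machine touching 60 distinct hosts within the hour.
events = [{"src": "corp-01", "dst": f"10.0.0.{i}"} for i in range(60)]
fired = detect(events)

scan_alert = Alert("Network horizontal scan from a corporate machine", claims_threat=True)
fanout_alert = Alert("Corporate machine hitting multiple hosts", claims_threat=False)

# Investigation found benign multitasking, not an attack:
print(verdict(scan_alert, fired, activity_was_malicious=False))   # FP
print(verdict(fanout_alert, fired, activity_was_malicious=False)) # TP
```

Same `detect()`, same events, opposite verdicts: the difference lives entirely in the alert's claim.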
Another example: suppose we have an alert titled 2A. Spike in Sign Ups, and investigation reveals an ongoing marketing campaign. The alert did its job, and I wouldn't call it an FP even though there is no threat. Saying it is an FP would effectively claim there was no spike in sign-up traffic at all (assuming the alert logic is sound). If the alert were titled 2B. Surge in Malicious or Botnet Sign Ups, then yes, it would rightly be an FP.
So context is important.
What then could we improve on the alerts? I think we can work on two levers: reducing the noise level and sharpening the specificity of the alerts.
Alerts could start off vague but small in scope. As time passes, they get tuned to be more specific towards a threat or attack, with uninteresting events suppressed (but still logged), keeping the volume in balance with what SOC analysts can handle. For instance, start a new alert with logic or scope that triggers five times a week; once it is tuned down to about once a week, widen the scope to catch lower-confidence events and increase coverage (reducing False Negatives). Then tune and suppress again, in repeated iterations.
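The iterate-tune-widen loop above can be sketched with toy numbers. This is not a real SIEM API; the events, patterns, and thresholds are invented, and a real rule would track far more context. The point is only the rhythm: suppress a recurring benign pattern to bring the weekly rate down, then loosen the threshold to regain coverage.

```python
# Illustrative tuning loop with made-up data. A "pattern" here stands in
# for whatever recurring benign cause an investigation identified.

def fires_per_week(events, threshold, suppressed):
    """Weekly alert volume for a simple count-based rule (toy model)."""
    hits = [e for e in events
            if e["count"] >= threshold and e["pattern"] not in suppressed]
    weeks = max(1, len({e["week"] for e in events}))
    return len(hits) / weeks

events = [
    {"week": 1, "count": 55, "pattern": "backup-job"},
    {"week": 1, "count": 60, "pattern": "backup-job"},
    {"week": 1, "count": 52, "pattern": "user-multitask"},
    {"week": 2, "count": 58, "pattern": "backup-job"},
    {"week": 2, "count": 70, "pattern": "unknown"},
    {"week": 2, "count": 45, "pattern": "new-service"},
]

# Iteration 1: rule fires 2.5x/week, too noisy for the SOC intake.
print(fires_per_week(events, threshold=50, suppressed=set()))

# Iteration 2: suppress the recurring benign backup pattern (still logged),
# bringing the rate down to about once a week.
print(fires_per_week(events, threshold=50, suppressed={"backup-job"}))

# Iteration 3: with noise under control, lower the threshold to widen
# coverage (reduce False Negatives), then tune again next cycle.
print(fires_per_week(events, threshold=40, suppressed={"backup-job"}))
```

Each iteration trades a little noise for coverage, and the suppression list (not deletion, so the events remain searchable) is what keeps the trade affordable.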
And if we have too many noisy alerts/rules, bundle them up into a higher-fidelity correlation alert. They might be noisy, but they are probably not FPs.
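One simple way to do that bundling, sketched below with invented rule names and signals: treat each noisy rule as a weak signal, and only raise the correlation alert when several distinct rules fire on the same host within a time window.

```python
# Hypothetical correlation sketch: page the SOC only when >= min_rules
# distinct noisy rules fire on one host inside the window.

from collections import defaultdict
from datetime import datetime, timedelta

def correlate(signals, min_rules=3, window=timedelta(hours=1)):
    """Return (host, rules) pairs where enough distinct rules fired in the window."""
    by_host = defaultdict(list)
    for s in signals:
        by_host[s["host"]].append(s)
    alerts = []
    for host, hits in by_host.items():
        hits.sort(key=lambda s: s["time"])
        for i, first in enumerate(hits):
            in_window = [s for s in hits[i:] if s["time"] - first["time"] <= window]
            rules = {s["rule"] for s in in_window}
            if len(rules) >= min_rules:
                alerts.append((host, sorted(rules)))
                break
    return alerts

t0 = datetime(2024, 1, 1, 9, 0)
signals = [
    {"host": "corp-07", "rule": "many-hosts",      "time": t0},
    {"host": "corp-07", "rule": "new-admin-tool",  "time": t0 + timedelta(minutes=10)},
    {"host": "corp-07", "rule": "odd-hours-login", "time": t0 + timedelta(minutes=40)},
    {"host": "corp-12", "rule": "many-hosts",      "time": t0},  # one rule alone: no page
]
print(correlate(signals))
# [('corp-07', ['many-hosts', 'new-admin-tool', 'odd-hours-login'])]
```

Each underlying rule stays noisy on its own, but the combination is high fidelity, and the individual hits are still logged for investigation.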
Thoughts?