This is the first in a series of articles on how to investigate Microsoft Cloud App Security (MCAS) alerts. In this first article I will cover the basics of activity and anomaly alerts.
As we already know, Microsoft Cloud App Security can raise many different alerts every day. How do we know where to start the investigation?
Let’s go ahead and start with an overview!
Firstly, we need to understand how alerts are being generated:
The detection engine uses user and entity behavioral analytics (UEBA) together with machine learning in the background, so that the signals can generate accurate alerts. The basic concept of anomaly detection is the comparison against what is “natural” or normal behavior. Anything that crosses the “normal” threshold is flagged as a potential attack by the system, which raises an alert.
Because the anomaly detection policies are enabled automatically, Cloud App Security requires a seven-day learning period, during which not all anomaly detection alerts are raised.
Following that, as data is gathered from the installed API connectors, each session is compared to the behavior observed during the previous month (when users were active, which IP addresses and devices they used, and so on), as well as the risk score of these events. This is the event enrichment phase.
Next, the activities pass through the alerts engine, where they are evaluated against every anomaly detection and activity policy. If any of them match, an alert is generated and displayed accordingly.
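Conceptually, the baseline comparison described above can be sketched in a few lines. This is only an illustrative model, not the actual MCAS detection engine: the attributes, weights, and threshold below are all made up for the example.

```python
# Minimal sketch of baseline comparison: each new event is checked against
# attributes seen during the learning period, and unfamiliar attributes
# add to a risk score. (Hypothetical weights/threshold, not MCAS internals.)

from dataclasses import dataclass, field

@dataclass
class UserBaseline:
    known_ips: set = field(default_factory=set)
    known_countries: set = field(default_factory=set)
    known_devices: set = field(default_factory=set)

def risk_score(event: dict, baseline: UserBaseline) -> int:
    """Score an event: each attribute not seen in the baseline adds risk."""
    score = 0
    if event["ip"] not in baseline.known_ips:
        score += 30
    if event["country"] not in baseline.known_countries:
        score += 40
    if event["device"] not in baseline.known_devices:
        score += 30
    return score

ALERT_THRESHOLD = 60  # hypothetical threshold

baseline = UserBaseline(
    known_ips={"203.0.113.10"},
    known_countries={"RO"},
    known_devices={"DESKTOP-01"},
)

# New IP and new country, but a familiar device:
event = {"ip": "198.51.100.7", "country": "NL", "device": "DESKTOP-01"}
print(risk_score(event, baseline) >= ALERT_THRESHOLD)  # True: raise an alert
```

The real engine learns continuously and weighs far more signals, but the shape of the decision (compare against a learned baseline, alert past a threshold) is the same.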
Now that we have taken a deeper look at the alert workflow, let’s discuss how to investigate the alerts.
In the Microsoft Cloud App Security portal, we can observe hundreds of alerts a day. Depending on the environment, the products, or the licenses available, they can be quite different. Security Analysts from all over the world have stumbled across the same question: how can we investigate the alerts efficiently and reduce their noise, without letting any attack go by unnoticed?
Today we are going to cover the anomaly alerts.
Impossible travel alert or Activity from infrequent country
This is a noisy one. An impossible travel alert is raised when a user signs in from two or more distant locations within a timeframe too short for the travel to be possible. This should point to a potential compromise, but the situation most organizations encounter is an IP address being masked by a VPN connection.
How do we investigate?
First, we need to understand what happened.
We need to investigate the three core elements that make up this alert: username, IP address, and location. Taking a deeper look into these three elements, we will be able to tell whether this is a false positive or an actual threat. We can also check whether the sign-in was marked as failed or successful.
After identifying the key elements, we need to further investigate and take governance actions.
If we recognize the IP addresses as our own and we are familiar with the locations (e.g. they can be exit locations of our VPN), then we might want to configure some changes in MCAS:
- Add IPs to your trusted IP list / mark them as corporate.
- Configure the alert policy only to match when the sign-in is successful.
- Mark the alert as a false positive.
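Marking addresses as corporate, the first item above, boils down to a range-membership check. The sketch below shows the idea with the Python standard library; the CIDR ranges are made up for the example, and in MCAS itself you tag ranges as corporate under the IP address ranges settings rather than writing code.

```python
# Checking whether a sign-in IP falls inside a trusted corporate range.
# (Example CIDR ranges are hypothetical.)

import ipaddress

CORPORATE_RANGES = [
    ipaddress.ip_network(cidr)
    for cidr in ("203.0.113.0/24", "198.51.100.0/25")
]

def is_corporate(ip: str) -> bool:
    """True if the address belongs to any trusted corporate range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CORPORATE_RANGES)

print(is_corporate("203.0.113.42"))  # True: inside 203.0.113.0/24
print(is_corporate("192.0.2.14"))    # False: not a trusted range
```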
If we recognize this alert as being an actual possible threat, we might need to take another approach:
- Check the IP address and the locations.
- Check the activity logs for other activities performed by the supposedly compromised user (e.g. downloads, mass deletions, risky activities).
- Check also other related alerts that might be generated for that user.
- Consider suspending the user or requiring the user to sign in again.
- If the account is compromised: force password reset and enable MFA.
- Block suspicious user agents.
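The core of the detection itself is a speed check: two sign-ins whose implied travel speed is physically implausible. A simplified heuristic (not the MCAS algorithm, and with an assumed speed limit) looks like this:

```python
# Illustrative impossible-travel check: flag two sign-ins whose implied
# travel speed exceeds what a commercial flight could cover.
# (Simplified heuristic with an assumed speed cap, not MCAS's model.)

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

MAX_SPEED_KMH = 900  # roughly a commercial airliner (assumed cap)

def impossible_travel(login_a, login_b):
    """Each login is (lat, lon, unix_seconds). True if speed is implausible."""
    dist = haversine_km(login_a[0], login_a[1], login_b[0], login_b[1])
    hours = abs(login_b[2] - login_a[2]) / 3600
    if hours == 0:
        return dist > 0  # same instant from two different places
    return dist / hours > MAX_SPEED_KMH

# Sign-in from Bucharest, then from New York 30 minutes later:
bucharest = (44.43, 26.10, 0)
new_york = (40.71, -74.00, 1800)
print(impossible_travel(bucharest, new_york))  # True
```

A VPN exit switching countries mid-session produces exactly this pattern, which is why the trusted-IP configuration above is the usual fix for the noise.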
Multiple Failed Login Attempts or Multiple Failed User Logins to a Service
Since we have discussed anomaly detection, it is also important to remember that Microsoft Cloud App Security builds an authentication pattern for the environment and warns only when suspicious activity is present, generating alerts only when a clear indication of a threat has been identified.
Logins happen every single day, and even though a failed login attempt may signify an attempt to compromise an account, it can also represent a common occurrence, for instance a user inadvertently entering the wrong password.
What steps can we follow for this specific alert?
We begin by investigating the three elements mentioned above: username, IP address, and location. The investigation starts with understanding whether there is a possible breach, meaning we need to check all related activities and open alerts. Checking the source IP address and source location of the attempt is also crucial to the investigation.
Checking the device may also point to specific indicators of compromise. We can check the device’s health, version, compliance state, and the user agents used, since all this information can help us gauge the actual impact of the threat.
If the investigation points to normal behavior (wrong password inputs by the end user), then we can mark this as a false positive.
If we classify this incident as a threat, we need to remediate the situation using account compromise remediation actions:
- Take governance action on the user (suspend the user, mark the account as compromised).
- Reset the password for the user and enable MFA.
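The distinction between a typo and a brute-force attempt is largely one of volume over time. A simplified sliding-window sketch (the window and threshold are assumed values, not MCAS's learned authentication pattern):

```python
# Simplified failed-login burst detection: alert when one source IP
# accumulates too many failures for a user within a short window.
# (Hypothetical window/threshold, not MCAS's learned pattern.)

from collections import defaultdict

WINDOW_SECONDS = 600   # 10-minute window (assumed)
MAX_FAILURES = 5       # assumed threshold

def failed_login_alerts(events):
    """events: list of (timestamp, user, ip, success).
    Returns the set of (user, ip) pairs that exceeded the threshold."""
    failures = defaultdict(list)
    alerts = set()
    for ts, user, ip, success in sorted(events):
        if success:
            continue
        key = (user, ip)
        # Keep only failures still inside the sliding window.
        failures[key] = [t for t in failures[key] if ts - t <= WINDOW_SECONDS]
        failures[key].append(ts)
        if len(failures[key]) > MAX_FAILURES:
            alerts.add(key)
    return alerts

# Seven failures in 3 minutes from one IP, plus one unrelated success:
events = [(i * 30, "alice", "198.51.100.7", False) for i in range(7)]
events.append((400, "alice", "203.0.113.10", True))
print(failed_login_alerts(events))  # {('alice', '198.51.100.7')}
```

A single wrong password never crosses the threshold, which mirrors why the real detection stays quiet for everyday typos.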
Activity from suspicious IP address or Activity from anonymous IP address
These alerts are generated when the detection engine observes activity coming from either an anonymous proxy IP address or an IP address identified as risky for the organization. Such IPs may be used to hide the true source address and have often been involved in malicious activities.
What steps can we follow for this specific alert?
We begin by investigating the known three elements: username, IP address, and location. The investigation starts with understanding whether there is a possible breach, meaning we need to check all related activities and open alerts. Events like file downloads or inbox forwarding may indicate a compromised account and data leakage.
This alert can easily be simulated by attempting to log in to your account from the Tor Browser, but if that is not the case, we need to take specific remediation actions:
- Suspend the user and mark the account as compromised.
- Remediate the account compromise.
- Also remove any forwarding rules and block downloads (if the alert involves them) to stop the bleeding of information until the account has been remediated.
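Put together, the triage above is a lookup against a reputation feed followed by conditional response actions. The sketch below is purely illustrative: the IP set and the activity-type names are hypothetical (MCAS resolves IP categories and activity types for you in the portal).

```python
# Sketch of triaging an activity against a local anonymizer-IP feed.
# (Feed contents and activity-type names are hypothetical examples.)

# Hypothetical snapshot of known anonymous-proxy / Tor exit addresses:
ANONYMIZER_IPS = {"198.51.100.23", "203.0.113.99"}

def triage(activity):
    """Return suggested response actions for a single activity record."""
    actions = []
    if activity["ip"] in ANONYMIZER_IPS:
        actions.append("suspend user and mark as compromised")
        # Downloads or new forwarding rules over an anonymizer suggest exfiltration:
        if activity.get("type") in {"Download", "InboxForwardingRuleCreated"}:
            actions.append("remove forwarding rules / block downloads")
    return actions

activity = {"user": "alice", "ip": "198.51.100.23", "type": "Download"}
print(triage(activity))
```

In production you would rely on MCAS governance actions rather than a script, but thinking of the alert as feed lookup plus conditional response helps keep the investigation systematic.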
I think we can all agree that security is one of the most important aspects of our everyday online life. Knowing how to invest in your security posture and how to detect, protect against, and respond to cyber threats should be of utmost importance.
This is the first part of working with anomaly detection in Microsoft Cloud App Security. Stay tuned for upcoming blog posts!