This post is part of an ongoing series to educate about the simplicity and power of the Kusto Query Language (KQL). If you’d like the 90-second post-commercial recap that seems to be a standard part of every TV show these days…
The full series index (including code and queries) is located here:
The book version (pdf) of this series is located here:
The book will be updated when each new part in this series is released.
Now that we’ve talked about using the Search operator in Part 4 to answer those three basic SOC analyst questions — 1) Does it exist? 2) Where does it exist? and 3) Why does it exist? — we can take that learning and the results of that type of query and meld it with the standard search query structure I talked about in Part 3.
In Part 4, I ended with a query to locate activity by a user called “rodtrent“. I found that this rodtrent person had performed potentially strange activity in the OfficeActivity table (the table for Office 365 activity) that needs to be checked out. As shown, the search operator is a powerful tool to find things of interest. But the results of the search operator query were thousands of rows of data. That’s inefficient.
So, now that we’ve found something interesting, we want to use the structure of the Search Query to pare down the results and minimize the effort and workload required to determine whether that something interesting is actually something notable and worth investigating.
If you need to, open up Part 3 in a new Window or browser Tab to review the Search Query Workflow as I walk through the next section.
In the following example, note that this is a non-issue situation, but I want to start with a basic Search query before we build toward more complex queries in future posts, so you get a fully rounded understanding of the “why” behind what we’re doing. The one below is even simpler than the one discussed in Part 3, where I also talk about aggregating and ordering data. I’ll come back to those concepts at a later date, particularly when I get into creating your own in-query visualizations like pie and bar charts. No, for our efforts in this post, I want to focus on how easy it is to filter the data. Again, KQL isn’t hard, and some of your most powerful queries may only be a few lines of code.
Turning your hunting operations into more formal Search structure queries is the building block for creating your own Analytics Rules in Microsoft Sentinel. Analytics Rules should contain precise logic that enables your operations to focus exactly where they need to focus, because capturing data outside of what was intended is both inefficient and problematic for isolating actual security events.
The example (available from the series’ GitHub repo at: https://cda.ms/3jd):
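If the embedded example doesn’t render for you, here’s a sketch of what that query looks like, reconstructed from the walkthrough below — treat it as an approximation rather than the exact query from the repo, since the specific values and result column names (such as ClientIP and OriginatingServer) are assumptions about the OfficeActivity schema:

```kusto
// Hedged sketch of the example query (see the GitHub link above for the real one)
OfficeActivity                                  // 1. Query the Office 365 activity table
| where TimeGenerated > ago(24h)                // 2. Only look at the last 24 hours
| where UserId contains "rodtrent"              //    Only this user's activity
| where RecordType == "ExchangeItem"            //    Only Exchange activity (value assumed)
| where Operation == "Send"                     //    Only Send operations
| project UserId, ClientIP, OriginatingServer   // 3. Show only user, IP, and originating server
```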
Let’s break this new Search query down together, as we did in Part 3. This one, again, is even a tad simpler than the one used to describe the Search workflow, but as you’ll see, it’s the where operator that is often our biggest, most powerful workhorse and best pal for tuning efficient results.
- The first step in our workflow is to query the OfficeActivity table. If you remember, from our time together in Part 4, we’re looking for user activity (in our case the user “rodtrent“) in Microsoft Office.
- As per the discussion in Part 3 on workflow, I want to highlight the importance of the pipe command once again. I won’t rehash its importance here; if you missed it, jump to Part 3 to catch up.
- In step 3 of the new Search query, I’m filtering how the query engine searches. I’m first telling it to only look at data in the last 24 hours (TimeGenerated), then only looking through a column called UserId for the string “rodtrent”, then telling the query engine to only capture Exchange activity from the RecordType data column, and finally pinpointing the search to only Send operations. So, essentially, I’m looking for any emails that rodtrent sent in the last 24 hours.
- Filtering the data is the key to everything. <= Read that again. Filtering the data that is returned produces exact, actionable results. It also improves query performance. Where the search operator may return thousands of rows of data in 15 seconds (or less), properly filtering the data returns just the rows we asked for, which greatly improves processing time. Where the search operator may have taken 15 seconds, our new Search structure query will take 5 seconds or less. The where operator is the key to this operation. Learn it. Know it. Keep the where operator reference page handy: https://cda.ms/3jh.
- Finally, I’m using the project operator to control exactly what is shown in the results window. In this case, I only want to show the user, the user’s IP address, and the server the email originated from.
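To make the performance contrast concrete, here’s a hedged side-by-side of the Part 4 approach versus the filtered structure described above (the column names are assumptions about the OfficeActivity schema):

```kusto
// Broad: search scans every column for the string (and, unscoped, every table)
search in (OfficeActivity) "rodtrent"

// Targeted: where filters against named columns, so only matching rows come back
OfficeActivity
| where TimeGenerated > ago(24h)
| where UserId contains "rodtrent"
```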
As you can plainly see in the query results, this matches exactly what my query proposed.
EXTRA: We saw in Part 4, with our Search operator, how results from our queries come back in named rows and columns of data. And you see here in this post how I’m constantly filtering against known column names in the tables. Some might wonder how I come up with those schema names. Of course, it helps that I work with these tables constantly, but I do have a couple of secrets to share. First off, as noted in Part 1, I use the Azure Monitor Logs table reference quite a bit. However, there’s also the Rosetta stone of KQL operators: getschema
Running a simple…
OfficeActivity | getschema
…will produce a list of all the named columns of a specific table. The example above displays all the named columns of the OfficeActivity table. Each of these columns can be used in your where operator filtering efforts.
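Since getschema itself returns ordinary rows and columns, you can even filter its output with the where operator — handy on wide tables like OfficeActivity. A quick sketch (ColumnName is one of the columns getschema emits):

```kusto
// List only the columns of OfficeActivity whose names mention "User"
OfficeActivity
| getschema
| where ColumnName contains "User"
```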
In this post, I’ve given you a simple query to practice with. In Part 6, I’ll come back and dig into the actual interface for developing your own queries (instead of just running the ones I’ve given).
[Subscribe to the RSS feed for this blog]
[Subscribe to the Weekly Microsoft Sentinel Newsletter]
[Subscribe to the Bi-Weekly Defender for Cloud Newsletter]