In-Depth
Microsoft Security Copilot & Agents
When ChatGPT first entered the collective consciousness and became the fastest-growing consumer technology ever, there was a fair bit of handwringing in the cybersecurity space. AI was going to churn out malware automatically, produce infinite variants of flawless phishing emails, analyze firewall configurations in real time and find unknown vulnerabilities to exploit. By and large, most of these risks haven't materialized, and indeed, at least for now, AI has mostly popped up all over cyber defenders' tools instead.
Microsoft released Security Copilot about a year ago, and recently at their Secure event announced their new Security Copilot agents. In this article I'll look at Security Copilot, what it can and can't do, as well as dive into these agents, followed by my own takes on its overall usefulness, the licensing model and where I see AI in Microsoft's security tools going in the future.
Security Copilot
In a true sign that there are definitely too many marketing people at Microsoft compared to engineers, this product has already had one name change, and it's not even a walking toddler yet -- originally it was Copilot for Security.
It's built on one of the latest OpenAI models (Microsoft updates the base LLM regularly) and runs in Azure. Microsoft then layers on top a security-specific orchestrator, their Threat Intelligence (TI) database that's updated in real time, and specific skills for each of the areas where it can be used.
It's available in two modes: the standalone UI at securitycopilot.microsoft.com, and embedded in the various portals such as Entra, Defender XDR, Defender for Cloud, and Purview. Initially the tool targeted SOC analysts, but its audience has since expanded to data governance folks in Purview, device administrators in Intune and identity managers in Entra.
Here you can see the flow of a prompt a user has entered. There are two takeaways: first, that your prompt is both pre- and post-processed by the applicable plugins (Intune, Sentinel and Defender for Endpoint, for example); and second, that Responsible AI checks are built in to make sure the user isn't entering a malicious prompt and that the response back to the user isn't dodgy.
[Click on image for larger view.] Security Copilot Prompt Flow

Let's start with the standalone experience. Here you deploy it to your tenant by creating capacity for it to run, measured in Security Compute Units (SCUs). These are priced at $4 USD per hour, per SCU, and Microsoft recommends starting with at least three (the maximum is 100) for testing. I delivered a three-day course on Security Copilot recently and let me tell you, three SCUs don't get you very far. Microsoft also recommends that you leave them provisioned 24x7. When I delivered that course, any usage beyond my three SCUs was provided for free; since then, Microsoft has started charging for this overage, at $6 per SCU. (I'll come back to the cost question at the end of the article.)
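To put those numbers in perspective, here's a rough back-of-the-envelope sketch, assuming the list prices quoted above ($4 per SCU per hour provisioned, $6 per SCU per hour of overage) and an always-on, 24x7 deployment; your actual bill will depend on your agreement and region.

```python
# Rough Security Copilot cost sketch, using the list prices quoted above.
# Assumptions (not official guidance): 24x7 provisioning, a 30-day month.

PROVISIONED_RATE = 4.0   # USD per SCU per hour (provisioned capacity)
OVERAGE_RATE = 6.0       # USD per SCU per hour (usage beyond provisioned)

def monthly_cost(provisioned_scus: int, avg_overage_scus: float = 0.0,
                 hours: int = 24 * 30) -> float:
    """Estimate one month's Security Copilot spend in USD."""
    return (provisioned_scus * PROVISIONED_RATE +
            avg_overage_scus * OVERAGE_RATE) * hours

# Microsoft's recommended starting point: 3 SCUs, always on.
print(monthly_cost(3))     # 3 * $4 * 720 hours = $8,640
# The same 3 SCUs plus an average of 1 SCU of overage per hour:
print(monthly_cost(3, 1))  # adds 1 * $6 * 720 = $4,320, for $12,960 total
```

Even the minimum recommended footprint runs to several thousand dollars a month, which is why the cost question I return to at the end of the article matters.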
[Click on image for larger view.] Create Security Copilot Capacity

Here you can see the standalone experience in action, summarizing critical incidents in Sentinel for me.
[Click on image for larger view.] Security Copilot Standalone Experience

Interactive prompting is all well and good, but security investigations are often about building on what you learned in one step in the next one, and writing a report at the end. This is encapsulated in Promptbooks, where two or more prompts are run in sequence. There are some built in, you can create your own, and (given the right permissions) you can share them with the rest of your organization.
[Click on image for larger view.] Security Copilot Prompt Book

The last thing analysts need is another portal to work in, though, and while there are some tasks that can only be done in the standalone UI (activating plugins, creating and running promptbooks), most admins will interface with the embedded experiences.
Security Copilot Embedded
Initially Security Copilot was only embedded in the Defender XDR portal, appearing as a button that, when clicked, opens a fixed-size panel on the right, letting you prompt about and investigate whatever incident you were working on.
[Click on image for larger view.] Incident Summary in Defender XDR

Since then, it has also appeared inside the Entra portal, where at first it only gave you some background information about identity risks but is now being enhanced with many more skills to help with identity tasks. It can now (in private preview, but officially announced) help you streamline your identity lifecycle workflows, automating joiner, mover and leaver steps for your user accounts. It can also help you with application risks: identifying app/service principal owners so you can find the right person to talk to about unused apps, plus checking whether apps have verified publishers so you understand your risks.
It also handles complex questions and asks clarifying questions back to you when required ("there are three user accounts named Paul, which one did you mean?").
Security Copilot is also embedded in Purview, where it helps you triage Data Loss Prevention (DLP) and Insider Risk Management (IRM) alerts. It's also in Intune where it can help you optimize your configuration and compliance policies, and help you troubleshoot settings.
Oh, and Security Copilot has skills to translate natural language into Kusto Query Language (KQL) to help you in Advanced Hunting in XDR and Sentinel. It also understands Keyword Query Language (KeyQL), as used in Purview to create content searches or eDiscovery cases. Furthermore, it can analyze malicious/obfuscated scripts and break down their logic, a rare skill even in larger SOC teams.
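As an illustration of the natural-language-to-KQL skill, a prompt like "show me failed sign-ins to devices in the last 24 hours, grouped by account" might produce Advanced Hunting KQL along these lines (a hand-written sketch of a typical result, not actual Copilot output; it assumes the standard Defender XDR `DeviceLogonEvents` table):

```kusto
// Failed device logons in the last 24 hours, counted per account
DeviceLogonEvents
| where Timestamp > ago(24h)
| where ActionType == "LogonFailed"
| summarize FailedCount = count() by AccountName
| order by FailedCount desc
```

The value for junior analysts is that Copilot shows you the generated query, so you can check it, tweak it and learn the query language as you go rather than starting from a blank editor.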
Security Copilot Goes 007
That's Security Copilot up until recently: a custom prompt-based interface in various Microsoft security and IT products. But it never took any action on its own, simply providing a handy shortcut to existing information, speeding up triage and helping junior analysts learn on the job with an AI assistant.
Now announced (with one in private preview at the moment) are Security Copilot agents: specialized, independent AI agents that run on a schedule or on demand to complete a particular task, and that can learn and adapt to your environment over time. The Microsoft ones are:
- Conditional Access Optimization Agent
- Phishing Triage Agent
- Alert Triage Agents for Data Loss Prevention and Insider Risk Management
- Vulnerability Remediation Agent
- Threat Intelligence Briefing Agent
There are also five partner-developed agents.
Let's start with the Microsoft ones. The Conditional Access Optimization Agent runs once a day to investigate whether there are any applications, or any newly added users, not covered by a CA policy.