5 AI Agent Security Gaps in Microsoft 365

A thread started circulating in MSP communities recently that should have stopped every managed service provider mid-scroll. A frustrated IT admin described watching multiple clients self-deploy AI agents — tools like Hermes and Cowork — with delegated access to their entire Entra tenant and every email inbox in the organization. No security review. No scoped permissions. No guardrails. Just a marketing-driven "connect your Microsoft 365" button and suddenly an AI agent had the keys to everything.

This isn't a hypothetical edge case. It's happening right now, across small business tenants everywhere. And the threat landscape surrounding it is evolving faster than most MSP security stacks.

Why This Is an Active Crisis, Not a Future Problem

According to The Hacker News, 2026 is already being defined by AI-assisted attacks in which threat actors use automation to accelerate reconnaissance, credential theft, and lateral movement at a scale that was previously impossible. When your client's AI agent holds delegated access to every inbox and document in their Microsoft 365 tenant, it doesn't just represent a misconfiguration — it represents a pre-staged attack surface waiting to be discovered.

Make no mistake: attackers are already looking for it.

The same reporting describes cybercrime groups like Cordial Spider and Snarky Spider actively exploiting SSO abuse and SaaS environment access to conduct rapid, high-impact data theft with minimal forensic traces. An AI agent granted broad, Entra SSO-level delegated permissions is precisely the kind of over-privileged SaaS access these groups are weaponizing. The concern raised in that Reddit thread isn't theoretical; it's a live target.

And the tools themselves aren't ready for the responsibility being handed to them. A recent Security News report confirmed that AI agent integrations are being rushed into production environments without proper security testing, with real-world consequences including deleted production databases and unrecoverable data loss. The guardrails simply aren't there yet — which means MSPs have to be.

The 5 Security Gaps Agentic AI Creates in Microsoft 365

1. Overprivileged Entra App Registrations

Most agentic AI tools request broad OAuth scopes during setup because it's easier to develop against. When a client clicks "Allow," they're often granting Mail.ReadWrite, Files.ReadWrite.All, and Directory.Read.All — permissions that give the AI agent read and write access to every email, file, and user record in the tenant. There is no business justification for a productivity AI to need directory-level access, but the permission dialog rarely explains that.

What to do: Audit all Entra app registrations immediately. In the Microsoft Entra admin center, go to Identity > Applications > Enterprise applications, open each app, and review the OAuth permissions it has been granted. Flag any app with tenant-wide mail or file write permissions that wasn't explicitly approved through your change management process.
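That review can be scripted as a first pass. The sketch below assumes you have already exported each app's display name and granted scopes (for example, via a Microsoft Graph query against service principals); the input shape and the function name are illustrative, not a fixed API.

```python
# Scopes that grant tenant-wide read/write over mail, files, or the directory.
RISKY_SCOPES = {
    "Mail.ReadWrite",
    "Files.ReadWrite.All",
    "Directory.Read.All",
    "Directory.ReadWrite.All",
}

def flag_overprivileged(apps: list[dict]) -> list[dict]:
    """Return apps holding any high-risk Graph scope, with the scopes listed."""
    flagged = []
    for app in apps:
        risky = sorted(set(app.get("scopes", [])) & RISKY_SCOPES)
        if risky:
            flagged.append({"app": app["displayName"], "risky_scopes": risky})
    return flagged

# Example inventory (hypothetical apps):
inventory = [
    {"displayName": "AI Assistant", "scopes": ["Mail.ReadWrite", "User.Read"]},
    {"displayName": "Expense Bot", "scopes": ["User.Read"]},
]
for hit in flag_overprivileged(inventory):
    print(hit)
```

Anything this flags should be cross-checked against your change management records before the consent is revoked.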

2. No Conditional Access Policies Scoping AI Agent Behavior

Conditional Access is one of Microsoft 365's most powerful security controls — and it's almost never applied to AI agent service principals. That means an AI agent authenticating from an unusual IP, at an unusual time, with unusual data access patterns won't trigger any of the same alerts a human user would.

What to do: Create Conditional Access policies that cover workload identities (service principals), not just human users; note that this requires Microsoft Entra Workload ID licensing. Restrict AI agent tokens to known IP ranges and flag anomalous token usage through Microsoft Defender for Cloud Apps. As we covered in our post on device code phishing and MFA bypass in Microsoft 365, token-based authentication abuse is already a primary attack vector, and AI agents expand that surface significantly.
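The "known IP ranges" check is also easy to replicate offline against exported sign-in logs while you wait for policy coverage. This is a minimal sketch, assuming each sign-in event carries an `ipAddress` field and that the CIDR ranges shown (documentation-reserved addresses) stand in for your real egress ranges.

```python
import ipaddress

# Stand-in approved ranges; replace with your clients' real egress CIDRs.
ALLOWED_RANGES = [
    ipaddress.ip_network(cidr)
    for cidr in ("203.0.113.0/24", "198.51.100.0/24")
]

def is_anomalous_signin(event: dict) -> bool:
    """True if a service-principal sign-in came from outside every approved range."""
    ip = ipaddress.ip_address(event["ipAddress"])
    return not any(ip in net for net in ALLOWED_RANGES)

# An AI agent authenticating from an unexpected address gets flagged:
print(is_anomalous_signin({"ipAddress": "192.0.2.7"}))      # outside all ranges
print(is_anomalous_signin({"ipAddress": "203.0.113.10"}))   # inside an approved range
```

A script like this is a detective control, not a preventive one; the Conditional Access policy is still what actually blocks the token.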

3. No Audit Logging on AI Agent Actions

When an AI agent reads 10,000 emails, moves files, or drafts responses on behalf of a user, those actions are often logged under the delegated user's identity — not the application's. This makes forensic investigation after an incident nearly impossible. You can't tell what the AI did versus what the human did.

What to do: Enable Unified Audit Logging in Microsoft Purview and configure alerts specifically for application-level access events. Look for MailItemsAccessed and FileAccessed operations attributed to service principals, not just user accounts.
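Once Unified Audit Logging is on, separating agent actions from human actions comes down to filtering exported records. The sketch below is a hedged approximation: `Operation` matches the Purview audit schema, but using the presence of an `AppId` field as the marker for application-attributed activity is an assumption about your export shape, not a guaranteed property of every record.

```python
# Operations worth isolating when investigating AI agent activity.
WATCHED_OPS = {"MailItemsAccessed", "FileAccessed"}

def agent_attributed(records: list[dict]) -> list[dict]:
    """Keep watched operations that carry an application (service principal) context.

    Assumes exported audit records expose the acting application via an
    'AppId' field; adjust the predicate to match your actual export.
    """
    return [
        r for r in records
        if r.get("Operation") in WATCHED_OPS and r.get("AppId")
    ]

# Example export (hypothetical records):
export = [
    {"Operation": "MailItemsAccessed", "AppId": "00000000-aaaa", "UserId": "jane@contoso.com"},
    {"Operation": "MailItemsAccessed", "UserId": "jane@contoso.com"},  # human, no app context
    {"Operation": "FileAccessed", "AppId": "00000000-aaaa", "UserId": "jane@contoso.com"},
]
print(len(agent_attributed(export)))
```

The point is that "what the AI did" becomes a query you can run, rather than a judgment call made during an incident.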

4. Shadow AI Deployment Without MSP Visibility

This is the core of the Reddit problem: clients are self-deploying these tools without telling their MSP. The AI agent is live, connected, and accessing data before anyone on the security side knows it exists. This is shadow IT at its most dangerous — and it's accelerating. We've written about the broader shadow IT crisis and how department heads bypass security controls, and agentic AI is the newest and fastest-moving front in that battle.

What to do: Implement a formal app approval process communicated clearly to all clients. Use Microsoft Defender for Cloud Apps to detect new OAuth app consents in near real-time. Configure alerts for any new enterprise application granted permissions above a defined threshold.
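The "permissions above a defined threshold" rule can be expressed as a small predicate you apply to each new consent event before deciding whether it warrants human review. Field names here (`grantedScopes`) are an assumption about how you export consent events, and the threshold of a single high-privilege scope is a deliberately conservative default.

```python
# Scopes that should always trigger a review when newly consented.
HIGH_PRIVILEGE = {
    "Mail.ReadWrite",
    "Files.ReadWrite.All",
    "Directory.Read.All",
    "Directory.ReadWrite.All",
}

def consent_needs_review(event: dict, threshold: int = 1) -> bool:
    """True if a new OAuth consent grants at least `threshold` high-privilege scopes."""
    granted = set(event.get("grantedScopes", []))
    return len(granted & HIGH_PRIVILEGE) >= threshold

# A shadow-deployed AI agent consent trips the rule immediately:
print(consent_needs_review({"grantedScopes": ["Mail.ReadWrite", "User.Read"]}))
print(consent_needs_review({"grantedScopes": ["User.Read"]}))
```

In practice this predicate would sit behind the Defender for Cloud Apps alert, as a second filter that decides which alerts page a human.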

5. CMMC and SB 220 Compliance Exposure

For government contractors pursuing CMMC certification, allowing an unreviewed third-party AI agent to access Controlled Unclassified Information (CUI) stored in Microsoft 365 is a direct compliance violation: safeguarding CUI is assessed at CMMC Level 2, and even Level 1 requires limiting access to Federal Contract Information to authorized users and processes. For Ohio businesses relying on SB 220 safe harbor protections, an undocumented AI agent with broad data access undermines the very security program documentation required to claim that protection. The liability exposure for MSPs who manage these tenants and fail to catch this is significant.

What to do: Add AI agent inventory and permission review to your quarterly compliance checklist. Document every approved application, its granted permissions, and its business justification. For clients with CMMC requirements, review our CMMC Level 1 compliance guide for small businesses to understand what access controls must be in place before any third-party tool touches CUI.
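The quarterly review itself can be kept honest with a completeness check over your app inventory. The required fields below (`appName`, `grantedPermissions`, `businessJustification`, `approvedBy`) are one reasonable documentation schema, not a compliance-mandated one; adapt them to whatever your assessor expects.

```python
# Fields every approved-app inventory entry must carry to be defensible.
REQUIRED_FIELDS = ("appName", "grantedPermissions", "businessJustification", "approvedBy")

def inventory_gaps(inventory: list[dict]) -> list[str]:
    """Return app names whose inventory entry is missing or has an empty required field."""
    return [
        entry.get("appName", "<unnamed>")
        for entry in inventory
        if any(not entry.get(field) for field in REQUIRED_FIELDS)
    ]

# Example inventory (hypothetical entries):
apps = [
    {
        "appName": "AI Assistant",
        "grantedPermissions": ["Mail.Read"],
        "businessJustification": "Drafts replies for sales team",
        "approvedBy": "IT Director",
    },
    {"appName": "Mystery Agent", "grantedPermissions": ["Mail.ReadWrite"]},
]
print(inventory_gaps(apps))
```

An empty result is your audit artifact: every app touching the tenant is named, scoped, justified, and approved.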

The MSP Liability Question

Anthropic's increasingly capable agentic AI models — and the ecosystem of tools built on top of them — are entering enterprise environments faster than security frameworks can adapt. Security News recently reported on what comes next for cyber as these models become more autonomous, and the consensus is clear: the security industry cannot wait for vendor guardrails to catch up. MSPs who manage Microsoft 365 tenants on behalf of clients bear a professional and increasingly legal responsibility to identify and remediate these risks proactively.

If a client self-deploys an AI agent that becomes the pivot point in a data breach, the question won't be "why didn't the AI vendor prevent this?" It will be "why didn't your MSP catch it?"


Take Action: Catch AI Permission Sprawl Before Attackers Do

The gaps described above are detectable — but only if you're actively scanning for them. Waiting for an incident to surface a misconfigured AI agent app registration is not a security strategy.

Oscar Six Security's Radar ($99/scan) gives MSPs and small businesses an affordable, repeatable way to identify over-privileged app registrations, misconfigured access controls, and compliance gaps before they become breach headlines. Whether you're managing a single Microsoft 365 tenant or dozens of client environments, Radar surfaces the issues that manual reviews miss.

Focus Forward. We've Got Your Six.

Frequently Asked Questions

How do I find what permissions AI apps have in Microsoft 365?

In the Microsoft Entra admin center (Azure Active Directory has been renamed Microsoft Entra ID), navigate to Identity > Applications > Enterprise applications, select an app, and review its Permissions tab. Look specifically for delegated permissions like Mail.ReadWrite or Files.ReadWrite.All, which give AI agents broad access to user data. Regular audits of Entra app registrations should be part of any MSP's quarterly security review.

Can an AI agent in Microsoft 365 cause a data breach?

Yes — an AI agent granted overprivileged OAuth permissions to a Microsoft 365 tenant can read, exfiltrate, or modify sensitive data if its credentials are compromised or if it's misconfigured. Cybercrime groups are actively targeting SaaS environments with broad delegated access, making over-permissioned AI agents a high-value attack target. Scoping permissions and monitoring agent activity are essential controls.

Does using an AI tool in Microsoft 365 affect CMMC compliance?

Yes — if an AI agent has access to Controlled Unclassified Information (CUI) stored in Microsoft 365 and that agent hasn't been reviewed and documented as part of your security program, it can represent a CMMC compliance violation (CUI safeguarding is assessed at CMMC Level 2). Government contractors must ensure all third-party applications accessing their environment are inventoried, scoped, and approved. Oscar Six Security's Radar can help identify undocumented app access before a CMMC audit.

What tool should I use to audit Microsoft 365 app permissions for my clients?

Microsoft Defender for Cloud Apps provides native visibility into OAuth app consents and can alert on new high-permission app registrations. For MSPs who want an affordable external scan to validate their posture and catch gaps across client tenants, Oscar Six Security's Radar ($99/scan) is designed specifically for this use case. Combining both gives you internal telemetry and an independent verification layer.

Are MSPs liable if a client's AI tool causes a data breach?

MSP liability in AI-related breaches is an emerging legal area, but if an MSP manages a client's Microsoft 365 environment and fails to detect or remediate a known risk like over-privileged AI agent access, they may face professional liability claims. Proactive documentation, app permission audits, and client communication about approved tools are the best defenses. Establishing a formal AI app approval process and scanning regularly with tools like Radar creates a defensible paper trail.