The mechanics: why CA does not fire
Conditional Access is the default answer that gets pulled out for almost every identity attack. It is the right answer for credential compromise. It is the wrong primary answer for consent attacks. Here is why.
When a user clicks a consent lure, they are routed to login.microsoftonline.com and authenticate normally. That authentication passes Conditional Access: it is a legitimate user, on their real device, satisfying MFA exactly as they have a hundred times before. CA evaluates the sign-in and approves the session. The session is now authenticated and CA's job is done.
After CA passes the session through, Microsoft renders the consent prompt. Whether the user then clicks Accept on a malicious app is a question of the consent framework, not Conditional Access. CA never sees the consent click. CA does not gate consent. CA cannot prevent a consent grant to a malicious app because by the time the consent decision happens, CA has already evaluated and let the session through.
The one CA policy that touches consent is CA-03 in this bundle (auth context for sensitive admin operations). It can require step-up authentication for admin consent operations. It does nothing about standard user consent, which is the path most consent campaigns take.
This is where most defenders get it wrong. They write a CA policy that "blocks risky apps" and assume they have hardened against consent phishing. They have not. CA is hardening for credential-theft paths the attacker is not using.
Where prevention actually lives
The Entra ID consent framework is in a different part of the admin experience than CA. CA lives at Identity → Protection → Conditional Access. The consent framework lives at Identity → Applications → Enterprise applications → Consent and permissions. Different navigation, different mental model, different team often owns it.
Three settings in the consent framework do the actual prevention work:
- The user consent setting: determines whether standard users can self-consent at all, and to what category of app
- Permission classifications: define which delegated permissions count as "low-impact" for self-consent
- Admin consent workflow: provides a structured approval path so users who hit a non-self-consentable scope get a "request approval" button instead of a dead end
These three together are the prevention layer. We have deployed them in our own production tenant. We have rolled them out for client engagements. The deploy order matters and the failure modes are real, so the rest of this post walks through each in the order you should ship them.
Control 1. Admin consent workflow (deploy first)
The admin consent workflow lets users request admin consent for an application instead of being silently blocked. When the user lands on a permission they cannot self-consent to, they see a "request approval" button. The request fires an email to designated reviewers. The reviewer approves, denies, or blocks from the Entra portal.
Why deploy this first: without the workflow, restricting user consent (Control 3) traps users in a dead end whenever they hit a legitimate app, and the help-desk volume that follows will pressure you to roll the restriction back. The workflow is the channel that makes user-consent restrictions humane.
Configuration
Prerequisites: Entra ID P1 or higher. A reviewer security group of 3–5 members, ideally spread across two time zones. Reviewer role: at minimum Cloud Application Administrator (sufficient for non-Microsoft apps, and it avoids tenant-wide exposure if a reviewer account is compromised).
Via the Entra portal:
- Sign in as Global Administrator.
- Identity → Applications → Enterprise applications → Consent and permissions → Admin consent settings.
- Users can request admin consent to apps they are unable to consent to: Yes.
- Who can review admin consent requests: select the reviewer group.
- Selected users will receive email notifications for requests: Yes.
- Selected users will receive request expiration reminders: Yes.
- Consent request expires after (days): 30 (the platform max).
- Save.
Via Microsoft Graph PowerShell (script-friendly):
```powershell
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConsentRequest', 'Application.Read.All'

$reviewerGroupId = '00000000-0000-0000-0000-000000000000' # replace

$body = @{
    isEnabled             = $true
    notifyReviewers       = $true
    remindersEnabled      = $true
    requestDurationInDays = 30
    reviewers             = @(
        @{
            query     = "/groups/$reviewerGroupId/members"
            queryType = 'MicrosoftGraph'
            queryRoot = $null
        }
    )
}

Update-MgPolicyAdminConsentRequestPolicy -BodyParameter $body
```

SLAs that make it sustainable
If reviewers cannot meet these SLAs, the workflow is functionally broken: users will go around it or send the request to a colleague who has rights. Both failure modes reopen the surface this control closes.
- First-touch review: 4 business hours. Anything longer and users will retry under social pressure.
- Decision (approve/deny/block): 1 business day for normal requests, 4 hours for any request flagged as critical (a senior leader, finance, IT admin tooling).
- Communication on deny: always include a one-line reason. The reason field is exposed to the requester. Use it.
Operational signals to track
- Daily volume of admin consent requests. The baseline establishes itself in 1–2 weeks. Sudden spikes (>2× baseline) warrant investigation: they could be a phishing campaign driving users into the workflow.
- Time-to-first-touch and time-to-decision as standing SLA metrics.
- Deny rate. Below 5% suggests reviewers are rubber-stamping. Above 40% suggests procurement is being shoved into this workflow rather than handled upstream.
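These thresholds are easy to automate against the request log. A minimal sketch (plain Python with hypothetical daily counts; in practice the data would come from your admin-consent request exports):

```python
from statistics import median

def volume_spike(daily_counts, today, factor=2.0):
    """Flag a day whose request volume exceeds factor x the baseline median."""
    return today > factor * median(daily_counts)

def deny_rate_signal(approved, denied):
    """Map the deny rate onto the healthy band described above."""
    rate = denied / (approved + denied)
    if rate < 0.05:
        return "possible rubber-stamping"
    if rate > 0.40:
        return "procurement leaking into the workflow"
    return "within expected band"

# Two weeks of baseline, then a suspicious day
history = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
print(volume_spike(history, today=13))           # True (13 > 2 x 5)
print(deny_rate_signal(approved=30, denied=10))  # within expected band (25%)
```

The median baseline is deliberately crude; swap in whatever your SIEM already computes, the point is only that both signals are one query each.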
End-user communication template
Send one message when you turn it on. Resist the temptation to explain consent phishing in this message; that belongs in security-awareness training, not in a control-rollout email.
> Starting [date], some Microsoft 365 apps will require approval before you can use them. When you hit the approval prompt, click "Request approval" and a member of [team] will review within one business day. If you have an urgent need, contact [help-desk channel]. This change is part of our standard security baseline and applies to all employees.
Control 2. Permission classifications (deploy second)
Permission classifications define which delegated permissions count as "low-impact" for self-consent. Microsoft's default low-impact set is intentionally tiny: User.Read, openid, email, profile. That is the right starting point. Expanding it carelessly reintroduces the surface this control closes.
What to add to the low-impact set. Permissions that are read-only on data the user already controls and that cannot be used to enumerate other users or read other users' data:
- `User.ReadBasic.All`: read basic profile of other users (name, email). Useful for app directory features. Low blast radius.
- `offline_access`: when paired only with permissions already in the low-impact set, allows refresh tokens. Do not add this if you also expand the set to include any read scopes on other users' data.
What to leave out. Anything the threat model in this bundle treats as high-risk. Do not add:
- Mail scopes (`Mail.Read`, `Mail.ReadWrite`, `Mail.Send`): mailbox access is the primary objective of consent campaigns.
- Files and Sites scopes (`Files.Read.All`, `Files.ReadWrite.All`, `Sites.Read.All`): bulk file collection is the second-most-common objective.
- Directory and Group enumeration scopes (`Directory.Read.All`, `Group.Read.All`): used for lateral-movement reconnaissance.
- Application-management scopes (`Application.Read.All`, `Application.ReadWrite.All`): adversaries pursue these for persistence.
- Chat / channel / notes scopes (`Chat.Read`, `ChannelMessage.Read.All`, `Notes.Read.All`): collaboration content.
The full classification table with reasoning lives in the bundle at hardening/consent-policies/permission-classifications.md. Treat it as a starting point and adjust for your tenant's data-classification posture.
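If you script changes to the classification, a cheap guard against accidental expansion is to reject any proposed scope that falls into the high-risk families above. A sketch (Python; the prefix list mirrors this post's exclusions, adjust it to your own classification table):

```python
# Scope families this post says must never enter the low-impact set
HIGH_RISK_PREFIXES = (
    "Mail.", "Files.", "Sites.",           # mailbox and bulk-file objectives
    "Directory.", "Group.",                # lateral-movement reconnaissance
    "Application.",                        # persistence
    "Chat.", "ChannelMessage.", "Notes.",  # collaboration content
)

def vet_low_impact_additions(proposed):
    """Split a proposed list of delegated scopes into (accepted, rejected)."""
    rejected = [s for s in proposed if s.startswith(HIGH_RISK_PREFIXES)]
    accepted = [s for s in proposed if s not in rejected]
    return accepted, rejected

ok, bad = vet_low_impact_additions(
    ["User.ReadBasic.All", "Mail.Read", "offline_access"]
)
print(ok)   # ['User.ReadBasic.All', 'offline_access']
print(bad)  # ['Mail.Read']
```

A prefix check is deliberately coarse; it is a tripwire for change reviews, not a substitute for reading the classification table.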
How the classification interacts with the user consent setting
Control 3 (the user consent setting) restricts self-consent to "verified publishers with selected permissions". The "selected permissions" reference is this list, the permission classifications you just defined. So the two controls compose: a user can self-consent only to a verified-publisher app and only to permissions in your low-impact classification.
This is the combination that beats the Verified Publisher 2022 incident. The fraudulent-publisher campaign passed the publisher check. It would have failed the permission check because the campaign requested mailbox-read scopes, which sit outside any sensible low-impact classification.
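The composed decision is simple enough to model in a few lines. An illustrative sketch of the routing logic (Python; this is not how Entra evaluates it internally, only the observable outcome):

```python
# Microsoft's default low-impact classification
LOW_IMPACT = {"User.Read", "openid", "email", "profile"}

def consent_route(verified_publisher, requested_scopes):
    """Self-consent only when the publisher is verified AND every requested
    scope sits inside the low-impact classification; everything else is
    routed to the admin consent workflow."""
    if verified_publisher and set(requested_scopes) <= LOW_IMPACT:
        return "self-consent"
    return "request-approval"

# Verified publisher, low-impact scopes: no friction
print(consent_route(True, {"User.Read", "openid"}))           # self-consent
# The 2022-style fraudulent-publisher campaign: passes the publisher
# check, fails the permission check on mailbox scopes
print(consent_route(True, {"User.Read", "Mail.Read"}))        # request-approval
# Typical consent phishing: unverified app in an external tenant
print(consent_route(False, {"Mail.ReadWrite", "Mail.Send"}))  # request-approval
```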
Control 3. User consent setting (deploy last)
This is the lockdown. It restricts who can self-consent and to what.
The recommended setting: "Allow user consent for apps from verified publishers, for selected permissions". With Control 1 (admin consent workflow) and Control 2 (permission classifications) already in place, this works:
- Verified-publisher apps requesting low-impact permissions: user self-consents, no friction.
- Verified-publisher apps requesting beyond the low-impact set: user sees the "request approval" button (Control 1), reviewer decides.
- Unverified-publisher apps: user sees the "request approval" button, reviewer almost always denies.
The mechanic that stops the typical consent phishing campaign: the attacker's app is registered in an external tenant with no publisher verification. Under this setting, the user cannot self-consent. They get the "request approval" path. The reviewer sees an unverified-publisher app from an external tenant requesting mailbox scopes and denies it. The campaign dies on first contact with the reviewer.
Configuration
Via the Entra portal:
- Identity → Applications → Enterprise applications → Consent and permissions → User consent settings.
- User consent for applications: select "Allow user consent for apps from verified publishers, for selected permissions".
- Save.
Via Microsoft Graph PowerShell:
```powershell
Connect-MgGraph -Scopes 'Policy.ReadWrite.Authorization'

$policy = Get-MgPolicyAuthorizationPolicy
$body = @{
    defaultUserRolePermissions = @{
        permissionGrantPoliciesAssigned = @('ManagePermissionGrantsForSelf.microsoft-user-default-low')
    }
}
Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId $policy.Id -BodyParameter $body
```

The policy ID `microsoft-user-default-low` is the built-in policy for the verified-publishers + low-impact-permissions configuration.
What you should expect after enabling
In the first 1–2 weeks, you will see a spike in admin consent requests. The baseline is what users were silently self-consenting to before. Most requests will be for legitimate productivity SaaS the team is already using. Approve the legitimate ones, add the AppIds to the ApprovedAppIds watchlist (so Detection 01 does not fire on subsequent users consenting), and the volume settles.
The deny rate stabilizes around 15–25% in tenants we have rolled this out for. Higher suggests an upstream procurement problem (users finding apps Slack-style instead of through IT). Lower suggests reviewers are rubber-stamping.
CA as defense-in-depth
CA is not the primary control here, but three CA policies belong in the bundle as supporting hardening. Each closes a specific adjacent surface that the consent-framework controls do not touch.
CA-01. Block legacy authentication
Legacy auth bypasses MFA. An attacker who can compromise a user account via password spray against a legacy-auth endpoint then has a user account that can consent. CA-01 closes that path. Near-universal prerequisite for any identity-hardened tenant. Ship it if you have not.
CA-02. Workload identity baseline (requires Workload Identities Premium)
Limits where a service principal can sign in from. If a malicious app's SP tries to call Graph from a DigitalOcean droplet and your policy allows only your corporate egress IPs and known SaaS IP ranges, the call fails. This is genuinely useful: it puts an IP-based gate on the attacker after consent has already been granted. The cost is the Workload Identities Premium add-on, which is licensed per SP and gets expensive in tenants with many workload identities.
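The gate CA-02 applies amounts to a named-location check on the service principal's source IP. A toy illustration (Python `ipaddress`; the ranges are RFC 5737 documentation prefixes standing in for your real corporate and SaaS egress ranges):

```python
from ipaddress import ip_address, ip_network

# Stand-ins for your corporate egress and known SaaS provider ranges
ALLOWED_RANGES = [
    ip_network("203.0.113.0/24"),
    ip_network("198.51.100.0/24"),
]

def sp_signin_allowed(source_ip):
    """Allow the workload-identity sign-in only from known egress ranges."""
    ip = ip_address(source_ip)
    return any(ip in net for net in ALLOWED_RANGES)

print(sp_signin_allowed("203.0.113.7"))  # True: corporate egress
print(sp_signin_allowed("192.0.2.44"))   # False: e.g. an attacker's VPS
```

The real policy is declarative (named locations plus a workload-identity CA policy), but the effect on the attacker is exactly this membership test.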
Deploy in report-only first for at least seven days. CA-02 in particular will find edge cases your design did not anticipate (SaaS providers signing in from regions you did not know they used). The seven-day soak is not optional.
CA-03. Block unverified-publisher admin consent
Requires step-up authentication for admin consent operations on apps that are not from verified publishers. This narrows the path Midnight Blizzard used in the January 2024 intrusion: that attack chained account compromise into admin-consent abuse. CA-03 forces an additional authentication step for the admin operation, raising the bar for the chained path.
The hardening/conditional-access/README.md in the bundle is explicit about what CA-03 cannot do: it does not prevent admin consent for verified-publisher apps, and it does not prevent standard user consent at all. Read that section before deploying so you do not over-rely on it.
What this bundle does not cover
A few adjacent threats that we deliberately left out, with where to look for them:
On-premises Active Directory pivots into Entra ID. If an attacker compromises on-prem AD and pivots via Entra Connect or federation, the consent framework here is irrelevant at the initial access stage. Hybrid-identity defense is its own bundle.
Cross-tenant access settings and B2B guest abuse. External collaboration settings interact with consent settings in ways that create their own attack surface. Cross-tenant synchronization and direct connect policies bypass some of the consent-framework protections documented here. Worth a dedicated treatment.
Device code phishing. Related technique that abuses the authorization flow in a different way, no consent prompt, uses the device-code exchange to steal tokens. Detection logic is different. Future bundle.
Application-level DLP. Detection 05 catches the read activity inside the platform; stopping the bytes from leaving the tenant requires DLP egress policies (Microsoft Purview or equivalent). Different control stack.
Microsoft 365 Copilot scope abuse. Copilot exposes new Graph scopes and new consent scenarios. The consent-framework hardening here applies, but the detection coverage for Copilot-specific abuse patterns is immature across the industry. Tracking closely for a future update.
Final word
If you only do three things: turn on the admin consent workflow with a real reviewer group, lock the permission classifications down to genuinely low-impact, and switch user consent to verified-publishers-with-low-impact-only. In that order. The controls compose, and together they close the path the typical consent campaign uses. Conditional Access on top of that catches the rest.
Read the IR runbook next: what to do when something gets through anyway, and the order in which the four containment moves have to happen.