
Five Sentinel detections for OAuth consent attacks (with the KQL inline)

Why most SOCs miss this

Two reasons. First, the consent event lives in AuditLogs, not in SigninLogs. Most identity-attack detection logic was built around user sign-ins, and OperationName == "Consent to application" was not on the original list. Second, what the attacker does after consent — call Graph, read mail, add credentials — happens in service-principal context. SP sign-ins live in AADServicePrincipalSignInLogs, which is a separate table from the user SigninLogs and is not visible in many SIEM dashboards by default. The attacker is operating in a part of the audit surface that most SOCs do not look at.

The five detections below close that gap. Each fires at a different point in the kill chain (mapped in the threat model). Detection 01 fires at the consent moment, before any data has been read. Detection 02 fires when the consent moment is happening to multiple users at once — that is a campaign in progress. Detections 03 and 05 fire on data access. Detection 04 fires on persistence establishment.

We deploy these in our own tenant, we have tuned them against real noise, and the false-positive analysis below is what we actually see in production rather than what theory says we should see.

Detection 01 — High-risk consent grant

This is the headline detection. It fires on the consent event itself, scored against four risk markers: unverified publisher, multiple high-impact scopes, offline_access paired with read scopes, and tenant-wide (admin) consent. A consent event hitting two or more markers fires a Medium-severity alert. The score escalates to High when the consenting user holds a directory role.

Run frequency: 15 minutes. Lookback: 30 minutes with 15-minute overlap.

let highRiskScopes = dynamic([
    "Mail.Read", "Mail.ReadWrite", "Mail.ReadBasic", "Mail.Send",
    "Files.Read.All", "Files.ReadWrite.All", "Sites.Read.All", "Sites.ReadWrite.All",
    "Directory.Read.All", "Directory.ReadWrite.All",
    "User.Read.All", "User.ReadWrite.All",
    "Group.Read.All", "Group.ReadWrite.All",
    "Application.Read.All", "Application.ReadWrite.All",
    "AppRoleAssignment.ReadWrite.All",
    "Chat.Read", "Chat.ReadWrite", "ChannelMessage.Read.All",
    "Notes.Read.All", "Calendars.ReadWrite",
    "offline_access"
]);
let microsoftPublisherTenants = dynamic([
    "f8cdef31-a31e-4b4a-93e4-5f571e91255a",
    "72f988bf-86f1-41af-91ab-2d7cd011db47"
]);
let approvedAppIds = externaldata(AppId: string)
    [@"https://example.invalid/approved-apps.csv"]
    with (format="csv", ignoreFirstRecord=true);
AuditLogs
| where TimeGenerated > ago(30m)
| where OperationName == "Consent to application"
| extend InitiatedByUser = tostring(InitiatedBy.user.userPrincipalName)
| extend InitiatedByUserId = tostring(InitiatedBy.user.id)
| mv-expand TargetResource = TargetResources
| extend modifiedProps = TargetResource.modifiedProperties
| mv-expand modifiedProp = modifiedProps
| extend propName = tostring(modifiedProp.displayName)
| extend propNew = tostring(modifiedProp.newValue)
| summarize props = make_bag(pack(propName, propNew)) by
    CorrelationId, TimeGenerated, OperationName, Result, InitiatedByUser, InitiatedByUserId,
    AppDisplayName = tostring(TargetResource.displayName),
    AppId = tostring(TargetResource.id)
| extend ConsentAction_Permissions = tostring(props["ConsentAction.Permissions"])
| extend ConsentType = tostring(props["ConsentType"])
| extend PublisherTenantId = tostring(props["AppPublisherTenantId"])
| extend isPublisherVerified = tostring(props["VerifiedPublisher"]) has "MPNId"
| extend isMicrosoftPublisher = PublisherTenantId in (microsoftPublisherTenants)
| extend grantedScopes = extract_all(@"Scope:\s*([A-Za-z\.]+)", ConsentAction_Permissions)
| extend grantedScopes = iff(array_length(grantedScopes) == 0,
    split(replace_string(ConsentAction_Permissions, "\"", ""), " "),
    grantedScopes)
| extend highRiskScopeMatches = set_intersect(grantedScopes, highRiskScopes)
| extend highRiskScopeCount = array_length(highRiskScopeMatches)
| extend hasOfflineAccess = grantedScopes has "offline_access"
| join kind=leftanti (approvedAppIds) on AppId
| extend riskMarkers = pack_array(
    iff(not(isPublisherVerified) and not(isMicrosoftPublisher), "publisher_unverified", ""),
    iff(highRiskScopeCount >= 2, "multiple_high_risk_scopes", ""),
    iff(hasOfflineAccess and highRiskScopeCount >= 1, "offline_access_with_read", ""),
    iff(ConsentType =~ "AllPrincipals", "tenant_wide_consent", ""))
| extend riskMarkers = set_difference(riskMarkers, dynamic([""]))
| extend riskScore = array_length(riskMarkers)
| where riskScore >= 2
| project TimeGenerated, CorrelationId, InitiatedByUser, InitiatedByUserId,
    AppDisplayName, AppId, ConsentType, PublisherTenantId,
    grantedScopes, highRiskScopeMatches, riskMarkers, riskScore
| extend AccountCustomEntity = InitiatedByUser
| extend AppCustomEntity = AppDisplayName

False positives we keep hitting

Honest list because alert fatigue is what kills these in production.

Legitimate IT-procured SaaS rollout. A new vendor app is being onboarded — the AppId is not in any allowlist yet, the publisher is unverified or recently verified, and it requests several high-impact scopes by design. We solve this with an ApprovedAppIds watchlist (the externaldata reference in the query above — replace with your actual watchlist URL). The onboarding workflow should add the AppId before broad rollout, not after the detection fires on it.

Microsoft first-party apps re-prompting for incremental consent. Common when users opt into a new Microsoft service like Power Automate, Loop, or Copilot. These carry Microsoft-verified publisher status. The two Microsoft tenant IDs in the microsoftPublisherTenants list (f8cdef31-a31e-4b4a-93e4-5f571e91255a and 72f988bf-86f1-41af-91ab-2d7cd011db47) suppress these. Do not remove them.

Developer self-consent against tenant-internal apps. Common in tenants with active engineering. We exclude AppIds whose PublisherTenantId matches the home tenant — those are not external apps, and the threat model assumes external origin.
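Against the Detection 01 pipeline, that exclusion is a single filter. A minimal sketch (the GUID is a placeholder for your own tenant ID, not a value from the shipped rule):

```kql
let homeTenantId = "00000000-0000-0000-0000-000000000000"; // placeholder: your tenant ID
// ...Detection 01 pipeline, after PublisherTenantId has been extracted...
| where PublisherTenantId != homeTenantId
```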

Microsoft Graph PowerShell first-run consent (AppId 14d82eec-204b-4c2f-b7e8-296a70dab67e). Frequently consented to by admins. Decide policy: either allowlist explicitly or accept as a low-priority hit.
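If you take the allowlist route, the suppression is one filter appended to the Detection 01 query (a sketch; note the post's queries carry the app identifier in the `AppId` column):

```kql
// Policy decision: suppress Microsoft Graph PowerShell first-run consent.
| where AppId !in ("14d82eec-204b-4c2f-b7e8-296a70dab67e")
```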

Tuning levers

The risk-score threshold of 2 is calibrated for a typical mid-size tenant. Raise it to 3 in tenants with active SaaS procurement: fewer alerts and fewer false positives, at the cost of missing two-marker grants. The high-risk scope list is opinionated; add scopes such as Reports.Read.All if your tenant holds regulated data (Sites.Read.All is already included). Do not narrow the list below what is shown.

Detection 02 — Mass consent campaign

Fires when five or more distinct users in your tenant consent to the same AppId within one hour. The verified-publisher and approved-app exclusions mean this almost never fires on legitimate rollouts. When it fires, it is a campaign. Severity: High. Act immediately.

This is the detection that would have caught the Google Docs worm in 2017 — that worm hit roughly a million Gmail accounts inside an hour, propagating by sending the consent lure to every contact in each victim's address book. The 5-user / 1-hour threshold is calibrated for exactly that signature.

Run frequency: 15 minutes. Lookback: 1 hour.

The query joins against the same risk-marker logic from Detection 01 and aggregates by AppId. When the user count crosses the threshold and the AppId still matches the high-risk markers (so it is not a legitimate vendor with a sudden onboarding push), it fires.

Two tuning notes:

  • The threshold is per-AppId, not per-publisher. Microsoft first-party apps with a hundred users opting in within the same hour do not fire because they pass the publisher gate.
  • Watch the PublisherTenantId field on the firing alert. If multiple firings share the same external tenant ID, the attacker is running a multi-app campaign from one home tenant — pivot on that tenant ID and find any other apps they have registered against your users.
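The full Detection 02 rule ships in the bundle; its aggregation step can be sketched as follows. This is illustrative: `RiskyConsents` stands in for the Detection 01 pipeline output (risk markers computed, before the `riskScore >= 2` filter), and the column names follow the Detection 01 query above.

```kql
// Campaign signature: 5+ distinct users consenting to the same AppId within 1h.
RiskyConsents
| where TimeGenerated > ago(1h)
| summarize
    distinctUsers = dcount(InitiatedByUserId),
    sampleUsers = make_set(InitiatedByUser, 50),
    firstConsent = min(TimeGenerated),
    lastConsent = max(TimeGenerated),
    maxRiskScore = max(riskScore)
    by AppId, AppDisplayName, PublisherTenantId
| where distinctUsers >= 5 and maxRiskScore >= 1 // still matching high-risk markers
| extend AppCustomEntity = AppDisplayName
```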

Detection 03 — Service principal anomalous sign-in

Fires on SP sign-ins that break the SP's established baseline. Three sub-triggers, any one of which fires the detection:

  • Dormancy break. SP has been silent for 30+ days, then suddenly active. Strongest signal — legitimate workload identities have stable usage patterns; dormant SPs that wake up are usually attackers using a previously-consented app.
  • New country. SP signs in from a country it has never signed in from before. Useful for SPs that have an established geographic baseline.
  • Hosting ASN. SP signs in from a known cloud-hosting ASN (DigitalOcean, OVH, Hetzner, AWS, Azure, Cloudflare, etc.). Real workloads sometimes legitimately run on these — but a freshly-consented user-facing app suddenly making Graph calls from DigitalOcean is the canonical attacker pattern.

Source table: AADServicePrincipalSignInLogs. Run frequency: 30 minutes. Lookback: 1 hour with 30-minute overlap.

let lookbackForBaseline = 30d;
let recentWindow = 1h;
let hostingASNs = dynamic([
    "AS14061", "AS16509", "AS8075", "AS15169",
    "AS24940", "AS16276", "AS9009", "AS20473",
    "AS13335", "AS14618", "AS62874"
]);
let baseline = AADServicePrincipalSignInLogs
| where TimeGenerated between (ago(lookbackForBaseline) .. ago(recentWindow))
| summarize
    knownCountries = make_set(LocationDetails.countryOrRegion),
    knownASNs = make_set(AutonomousSystemNumber),
    lastSeen = max(TimeGenerated),
    signInCount = count()
    by ServicePrincipalId, AppId, ServicePrincipalName;
let recent = AADServicePrincipalSignInLogs
| where TimeGenerated > ago(recentWindow)
| extend country = tostring(LocationDetails.countryOrRegion)
// AutonomousSystemNumber is numeric; prefix with "AS" so it matches the list above.
| extend asn = strcat("AS", tostring(AutonomousSystemNumber));
recent
| join kind=leftouter baseline on ServicePrincipalId, AppId
| extend isDormancyBreak = datetime_diff('day', TimeGenerated, lastSeen) >= 30
| extend isNewCountry = isnotempty(country) and not(set_has_element(knownCountries, country))
| extend isHostingASN = asn in (hostingASNs)
| where isDormancyBreak or isNewCountry or isHostingASN
| extend triggers = pack_array(
    iff(isDormancyBreak, "dormancy_break", ""),
    iff(isNewCountry, "new_country", ""),
    iff(isHostingASN, "hosting_asn", ""))
| extend triggers = set_difference(triggers, dynamic([""]))
| project TimeGenerated, ServicePrincipalId, AppId, ServicePrincipalName,
    country, asn, IPAddress, triggers, lastSeen, signInCount
| extend AppCustomEntity = ServicePrincipalName

False positives

Scheduled batch SPs. Many production SPs are dormant for weeks, then fire on a monthly batch. Maintain a ScheduledBatchSPs watchlist and exclude. The list is small (10–30 entries in most tenants) and stable.
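In Sentinel, that exclusion is a leftanti join against the watchlist, placed before the trigger evaluation in the Detection 03 query. A sketch, assuming the watchlist's search key is the SP object ID:

```kql
// Drop known scheduled-batch SPs before evaluating dormancy/geo/ASN triggers.
| join kind=leftanti (
    _GetWatchlist('ScheduledBatchSPs')
    | project ServicePrincipalId = tostring(SearchKey)
) on ServicePrincipalId
```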

Multi-region SaaS providers. Your CASB or other vendor SP may legitimately sign in from multiple regions as their backend autoscales. The watchlist approach above handles these too.

Hosting-ASN false positives are rare. A user-facing OAuth app legitimately running on AWS or Azure usually has its hosting tagged and the ASN baseline reflects it. The signal is the unfamiliar hosting ASN — an SP that has historically signed in from Microsoft IPs suddenly appearing from DigitalOcean.

Detection 04 — App credential addition

Fires when a credential or owner is added to an app registration or service principal. Looks for OperationName matching "Add service principal credentials", "Update application – Certificates and secrets management", or "Add owner to application".

The high-fidelity variant joins against the prior 72 hours of consent events for the same AppId. Credential addition on an app that received consent within the last 72 hours is the canonical T1098.001 persistence signature — the attacker just got delegated access via consent and is now bolting on app-only access for persistence. Severity: High when post-consent. Medium otherwise.
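A sketch of that 72-hour correlation, using the same table and field conventions as the post's other queries (a starting point, not the shipped rule):

```kql
// Apps that received a consent grant in the prior 72 hours.
let recentConsents = AuditLogs
| where TimeGenerated > ago(72h)
| where OperationName == "Consent to application"
| mv-expand TargetResource = TargetResources
| extend AppId = tostring(TargetResource.id)
| summarize lastConsent = max(TimeGenerated) by AppId;
// Credential or owner additions in the current run window.
AuditLogs
| where TimeGenerated > ago(1h)
| where OperationName in (
    "Add service principal credentials",
    "Update application – Certificates and secrets management",
    "Add owner to application")
| mv-expand TargetResource = TargetResources
| extend AppId = tostring(TargetResource.id)
| join kind=inner recentConsents on AppId // match = post-consent persistence, High severity
| project TimeGenerated, OperationName, AppId, lastConsent,
    InitiatedByUser = tostring(InitiatedBy.user.userPrincipalName)
```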

Routine secret rotation produces this same operation legitimately. We handle it with a PlannedAppChanges watchlist that contains AppId + change-window pairs — when an app is in scheduled rotation, suppress the medium alert; high-severity post-consent alerts always fire regardless.

Detection 05 — Post-consent bulk data access

Fires when a service principal whose AppId received a consent grant in the prior 72 hours performs high-volume mail or file reads. This is the last-line detection — by the time it fires, data has been accessed and the response is a containment-now decision, not investigation-first.

Primary path: CloudAppEvents from Microsoft Defender for Cloud Apps. MDA gives you per-object Graph activity tagged to the SP, with the read counts and resource identifiers. If you have MDA, use that.

Fallback for tenants without MDA licensing: OfficeActivity. Coarser-grained — you see the Exchange and SharePoint operations but not the underlying Graph object reads. Still catches the bulk-read pattern, just with lower precision.

The threshold is 200 reads in 30 minutes by default. Tune up for tenants with heavy automation. Tune down if you have a clean baseline.
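A sketch of the OfficeActivity fallback under those defaults. Column availability (notably AppId) varies by workload in OfficeActivity, so verify against your schema before scheduling this as an alert:

```kql
let threshold = 200;
// Apps consented to in the prior 72 hours (same correlation as Detection 04).
let recentConsentAppIds = AuditLogs
| where TimeGenerated > ago(72h)
| where OperationName == "Consent to application"
| mv-expand TargetResource = TargetResources
| extend AppId = tostring(TargetResource.id)
| distinct AppId;
OfficeActivity
| where TimeGenerated > ago(30m)
| where Operation in ("MailItemsAccessed", "FileAccessed", "FileDownloaded")
| where isnotempty(AppId)
| summarize readCount = count(), ops = make_set(Operation)
    by AppId, OfficeWorkload
| where readCount >= threshold
| join kind=inner recentConsentAppIds on AppId
```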

The audit script — Get-RiskyConsentGrants.ps1

Before you deploy any of the above, run a baseline. The script in the bundle (audit-tooling/Get-RiskyConsentGrants.ps1) iterates every service principal in your tenant, scores each on:

  • Number of high-impact scopes granted
  • Tenant-wide vs per-user consents
  • Publisher verification status
  • Age of any client secrets or certificates attached to the app
  • Last sign-in time of the SP

Output is a CSV ranked by risk score. The legacy test app that Midnight Blizzard exploited would have been near the top — long-lived credentials, elevated permissions, dormant. The script exists to surface exactly that profile.

Run it monthly. The point is the audit cadence, not the tool — the tool just makes the audit cheap.

# Quick start
Connect-MgGraph -Scopes 'Application.Read.All','AuditLog.Read.All','Directory.Read.All'
.\Get-RiskyConsentGrants.ps1 -OutputCsv .\consent-baseline-$(Get-Date -Format yyyy-MM-dd).csv

The script's risk-scoring logic and the IR companion Get-ServicePrincipalActivity.ps1 (for per-SP timeline pulls during an incident) are documented in the audit-tooling README in the bundle.

Deploy order

If you are deploying the full set:

  • Run Get-RiskyConsentGrants.ps1 first. You want the pre-change snapshot before any of the detections fire so you can answer "was that there before?" when something hits in week one.
  • Deploy Detection 01 and 02 first. These fire at the consent moment itself and have the highest fidelity. Tune the ApprovedAppIds watchlist as alerts come in.
  • Deploy Detection 03 next. This is the SP-baseline detection — it needs 30 days of AADServicePrincipalSignInLogs to build the baseline before it is useful. Deploy in detect-only mode for the first 30 days.
  • Deploy Detection 04 last. Wire it to the same ApprovedAppIds watchlist Detection 01 uses, plus the PlannedAppChanges watchlist for routine rotations.
  • Detection 05 is optional unless you have MDA. The OfficeActivity fallback is noisy and is best treated as a hunting query rather than a scheduled alert.

The full Sentinel analytics-rule JSON files (importable directly into your workspace) are in the bundle at detections/analytics-rules/. The KQL files live alongside them at detections/kql/ for reading and modification.

Read the mitigations post next — detections are how you find what got past the controls. The controls are what stop it from happening in the first place.

Need help wiring these into your Sentinel workspace?

We deploy and tune the full set against your tenant baseline, build the watchlists, and stand up the IR-routing automation so the alerts go to the right people. Two-week engagement, fixed price.

