
Device code phishing incident response, what to do when you find a sign-in you cannot explain

What you are dealing with

A device code phishing incident means an attacker holds a valid refresh token for the affected user. Each access token derived from it expires after 60-90 minutes, but the attacker can silently mint new ones for up to 90 days without any further interaction from the user. A password reset alone does not revoke the refresh token. Disabling the user account does revoke it, but that is a disruptive action for active employees.
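
That refresh cadence is visible in the sign-in logs. A minimal sketch, assuming Microsoft Sentinel ingestion of the Entra ID non-interactive sign-in logs (<affected-user> is a placeholder): bucketing the user's token refreshes by hour makes an attacker's steady polling loop stand out against normal client behavior.

```kql
// Hourly refresh cadence for one user. An attacker-held refresh token
// shows up as a steady drumbeat of refreshes from an unfamiliar IP.
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(30d)
| where UserPrincipalName == "<affected-user>"
| summarize Refreshes = count(), SourceIPs = make_set(IPAddress) by bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```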

You need to verify the scope of the compromise (what the attacker has accessed since the token was issued), invalidate the token, and close whatever CA gap let the flow complete.

Containment: four moves in order

Move 1 — Revoke all sessions for the affected user.

# Requires Microsoft Graph PowerShell SDK
# Scope: User.ReadWrite.All or Directory.ReadWrite.All

Connect-MgGraph -Scopes "User.ReadWrite.All"
Revoke-MgUserSignInSession -UserId "<UserPrincipalName>"

This invalidates all refresh tokens and active sessions for the user. The attacker's next polling attempt to get a new access token will return invalid_grant. This is your primary containment action. Do it first, before any other investigation steps, to stop the bleeding.

Note: Revoke-MgUserSignInSession is the current Graph SDK command. The older Revoke-AzureADUserAllRefreshToken from the AzureAD module had the same effect, but that module is deprecated.
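
You can confirm containment from the logs themselves. A sketch of the check, with one loud assumption: error code 50173 (grant expired due to revocation) is what we have seen after revocation in lab testing, but review any non-zero ResultType from the attacker's IPs after the revocation timestamp.

```kql
// Failed refresh attempts after containment: evidence the attacker's token is dead
AADNonInteractiveUserSignInLogs
| where UserPrincipalName == "<affected-user>"
| where TimeGenerated > datetime(<revocation-timestamp>) // placeholder: your containment time
| where ResultType != "0" // expect 50173 after revocation, but verify
| project TimeGenerated, IPAddress, AppDisplayName, ResultType, ResultDescription
| sort by TimeGenerated asc
```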

Move 2 — Verify the CA policy gap and close it.

Check your CA policies:

// What CA policies evaluated during the suspect sign-in
SigninLogs
| where CorrelationId == "<correlation-id-from-the-alert>"
| mv-expand ConditionalAccessPolicies
| extend
    PolicyName   = tostring(ConditionalAccessPolicies.displayName),
    PolicyResult = tostring(ConditionalAccessPolicies.result)
| project TimeGenerated, UserPrincipalName, PolicyName, PolicyResult

If no policy with an AuthenticationFlows condition appears in the output, you do not have the device code block policy deployed. If the policy appears with result notApplied or reportOnly, the policy exists but was not enforcing: either the user was excluded, the app was excluded, or the policy was in report-only mode.

Fix the gap before the investigation continues. The CA policy configuration is in the mitigations post.
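
Before moving on, check whether anyone else completed the same flow while the gap was open. A sketch, assuming your SigninLogs schema populates the AuthenticationProtocol column (it carries the value deviceCode for this flow):

```kql
// Successful device code sign-ins tenant-wide while the CA gap was open
SigninLogs
| where TimeGenerated > ago(30d)
| where AuthenticationProtocol == "deviceCode"
| where ResultType == "0"
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName, ConditionalAccessStatus
| sort by TimeGenerated asc
```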

Move 3 — Audit what the attacker accessed.

The attacker calls Microsoft Graph from their own infrastructure. Their activity appears in separate log tables from the victim's interactive sign-ins.

// Non-interactive sign-ins (token refresh activity) for the affected user
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(90d)
| where UserPrincipalName == "<affected-user>"
| where AppId == "d3590ed6-52b3-4102-aeff-aad2292ab01c" // Microsoft Office, most common attacker AppId
| project TimeGenerated, IPAddress, AppDisplayName, ResourceDisplayName, ResultType
| sort by TimeGenerated asc

// Exchange Online activity — mail reads and moves
OfficeActivity
| where TimeGenerated > ago(90d)
| where UserId == "<affected-user>"
| where Operation in ("MailItemsAccessed", "MessageBind", "HardDelete", "MoveToDeletedItems", "SendAs")
| project TimeGenerated, Operation, ClientInfoString, ResultStatus
| sort by TimeGenerated asc

// SharePoint and OneDrive file access
OfficeActivity
| where TimeGenerated > ago(90d)
| where UserId == "<affected-user>"
| where RecordType == "SharePointFileOperation"
| where Operation in ("FileAccessed", "FileDownloaded", "FileCopied")
| project TimeGenerated, Operation, ObjectId, ClientIP
| sort by TimeGenerated asc

The TimeGenerated of the first non-interactive sign-in after the device code authorization is your T0 for the attacker's access window. Everything from T0 to the revocation timestamp must be treated as potentially accessed.
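
The window computation can be done directly in the query. A sketch; <revocation-timestamp> is a placeholder for your containment time from Move 1:

```kql
// Attacker access window: first non-interactive sign-in (T0) to revocation
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(90d)
| where UserPrincipalName == "<affected-user>"
| summarize T0 = min(TimeGenerated)
| extend Revoked = datetime(<revocation-timestamp>) // placeholder: containment time
| extend WindowHours = datetime_diff('hour', Revoked, T0)
```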

Move 4 — Force re-authentication and monitor.

After revoking sessions, require the user to re-authenticate. If your tenant has Conditional Access with sign-in frequency controls, force a session policy re-evaluation. If not, have the user sign out of all devices and sign back in.

After re-authentication, re-run Rule 01 from the detection post with the user's UPN as a filter. Any device code sign-in after containment indicates a second compromise attempt — likely the attacker trying again once they detect their token was revoked.
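
Rule 01 itself is in the detection post; as a minimal stand-in scoped to the affected user (not the full rule), the post-containment check looks roughly like this, assuming the SigninLogs AuthenticationProtocol column is populated in your schema:

```kql
// Any device code sign-in for this user after containment = second attempt
SigninLogs
| where TimeGenerated > datetime(<revocation-timestamp>) // placeholder: containment time
| where UserPrincipalName == "<affected-user>"
| where AuthenticationProtocol == "deviceCode"
| project TimeGenerated, IPAddress, AppDisplayName, ResultType, ConditionalAccessStatus
```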

What to look for in AuditLogs

The device code authorization itself generates a sign-in row in SigninLogs. It does not generate a separate row in AuditLogs the way an OAuth consent grant does. What the audit trail shows you is downstream activity: mail reads and file access in OfficeActivity, and, in AuditLogs, any admin actions the attacker took if the user held a privileged role.

If the attacker added credentials to an application or service principal (persistence escalation beyond the delegated token), that will appear in AuditLogs with OperationName == "Add service principal credentials". Check this for any high-privilege user accounts involved in the incident.
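
A sketch of that check. InitiatedBy and TargetResources are dynamic columns, so the actor extraction below assumes the action was initiated by a user principal; adjust if it was app-initiated:

```kql
// Credentials added to a service principal: persistence beyond the stolen token
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Add service principal credentials"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| project TimeGenerated, OperationName, Actor, TargetResources
| sort by TimeGenerated asc
```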

What attackers do with captured tokens in practice

From our lab work and the documented Storm-2372 activity pattern:

  • Mail collection. GET /me/messages?$top=999&$select=subject,from,body,receivedDateTime in batches. Quick and quiet.
  • File enumeration and download. GET /me/drive/root/children to list, then targeted file downloads.
  • Directory read. GET /users, GET /groups for the tenant org chart. Useful for targeting follow-on attacks.
  • Token refresh only. Some operators capture the token and hold it for later use. If you see a device code sign-in followed by only silent token refresh activity and no Graph reads, that is either a probe or the operator deferring exploitation.
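
If the MicrosoftGraphActivityLogs diagnostic setting is enabled in your tenant (it is opt-in, not on by default), the four patterns above map directly onto request URIs, which settles the probe-versus-exploitation question. A sketch; note that this table keys on the user's object ID, not the UPN:

```kql
// Graph calls made with the stolen token: mail, files, directory reads
MicrosoftGraphActivityLogs
| where TimeGenerated > ago(30d)
| where UserId == "<affected-user-object-id>" // object ID (GUID), not UPN, in this table
| where RequestUri has_any ("/messages", "/drive", "/users", "/groups")
| project TimeGenerated, RequestMethod, RequestUri, IPAddress, ResponseStatusCode
| sort by TimeGenerated asc
```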

A note on attacker-side tooling gaps

During our lab work we identified a persistent bug in one category of device code phishing tooling: captures stored only in memory and not persisted to disk. If the attacker's process restarted between capture and exploitation, the token was lost. This type of bug is more common in open-source or improvised tooling than in commercial or nation-state frameworks.

The practical implication for IR: the absence of evidence that an attacker used the token does not mean the token was not captured. It may mean their tooling did not survive a restart. Assume exploitation unless your audit logs confirm otherwise.

Documentation checklist for the incident record

  • Affected user(s) and UPN(s)
  • CorrelationId of the device code sign-in event
  • Timestamp of the device code authorization (T0)
  • Timestamp of session revocation (containment time)
  • Window between T0 and containment: anything in this window is in scope for data impact assessment
  • CA policy status at time of incident (was the block policy deployed?)
  • Apps the attacker used (AppId from the non-interactive sign-in logs)
  • Source IP(s) from the attacker's non-interactive sign-ins
  • Data accessed (OfficeActivity query results)
  • Whether the CA gap has been closed post-incident

Active incident? We can help.

If you have a device code sign-in you cannot attribute and need help with containment, audit reconstruction, and tenant hardening, reach out. We work active M365 incidents.

