
BitM Shield and the broader posture — what actually stops Browser-in-the-Middle

The headline — BitM Shield

We built BitM Shield in the same research cycle that produced our lab analysis of the attack. It is a Chrome MV3 browser extension of roughly 30 KB that runs entirely locally, with zero network calls and zero telemetry, and it detects the BitM capture chain before the victim has typed a credential. We verified it on 2026-05-02 against the BitM technique we replicated in our research lab: the risk score reached 110, well past the 80-point DANGER threshold, within 1.5 seconds of page load; the red blocking banner was injected via Shadow DOM; an OS-level desktop notification fired; and the dismiss button enforced an 8-second cooldown.

The extension is free and open-source under the MIT license. Source is on GitHub at github.com/lexlabtools/bitm-shield; install instructions are below. If you do only one thing from this bundle, install the extension.

How it works

BitM Shield watches for fingerprints that real login pages never produce, scores them on a weighted model, and surfaces the result as a toolbar badge plus an in-page warning when the score crosses a threshold.

Detection layers:

  • injected.js (page world, document_start). Patches the window.WebSocket constructor before any page JavaScript runs, then reads each socket's first server message — the noVNC handshake is the literal string RFB 003.xxx\n — and fires immediately on detection.
  • injected.js global scan. Polls for window.RFB, window.WebUtil, #noVNC_canvas, .noVNC_container, and scripts containing rfb.js or websockify. noVNC's library exports stable globals across versions 1.x through 4.x.
  • content.js DOM analysis. Detects pages with login-context cues (URL, title, visible "Password" labels) but zero <input type="password"> elements in the real DOM. Canvas-rendered noVNC streams cannot contain real form inputs.
  • content.js origin check. Detects pages with Google or Microsoft visual cues on a non-IdP origin. Lure domains never match accounts.google.com or login.microsoftonline.com.
  • Shadow DOM warning overlay. The warning is injected via a closed Shadow root attached to <html>. Page CSS and JavaScript cannot reach into it or remove it, making the overlay robust against any page-side attempt by the lure to hide or dismiss it.
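The first layer is the highest-signal one. Here is a minimal sketch of how constructor patching and RFB fingerprinting fit together — the function names and the reporting path are illustrative, not the extension's actual internals:

```javascript
// Sketch of the WebSocket-patching layer. The RFB ProtocolVersion greeting
// is always the 12-byte string "RFB xxx.yyy\n", e.g. "RFB 003.008\n".
function looksLikeRfbHandshake(text) {
  return /^RFB \d{3}\.\d{3}\n/.test(text);
}

if (typeof window !== "undefined") {
  const NativeWebSocket = window.WebSocket;

  window.WebSocket = function (url, protocols) {
    const ws = protocols === undefined
      ? new NativeWebSocket(url)
      : new NativeWebSocket(url, protocols);

    // Peek at the first server message without disturbing page handlers.
    ws.addEventListener("message", function onFirst(event) {
      ws.removeEventListener("message", onFirst);
      const data = typeof event.data === "string"
        ? Promise.resolve(event.data)
        : event.data.text(); // binary frames arrive as a Blob
      data.then((text) => {
        if (looksLikeRfbHandshake(text)) {
          // The real extension would message content.js to raise the score;
          // here we just log the hit.
          console.warn("BitM signal: RFB handshake on WebSocket to", url);
        }
      });
    });
    return ws;
  };
  window.WebSocket.prototype = NativeWebSocket.prototype;
}
```

Because the patch runs at document_start, every socket the lure opens — including the one carrying the noVNC stream — goes through the wrapper.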

Risk scoring

Each signal contributes to a weighted score. Thresholds: 40–79 for amber CAUTION, 80+ for red DANGER (blocking overlay).

  • RFB protocol handshake on a WebSocket: +100 (instant DANGER trigger)
  • noVNC globals or noVNC DOM elements present: +90
  • Origin mismatch with IdP visual cues: +55
  • Canvas-only login (password requested, no <input type=password>): +50
  • WebSocket to non-IdP host during a login context: +40
  • BitM-style lure path pattern (/f/<slug>): +20

The scoring is conservative — RFB or noVNC globals alone fire DANGER, because nothing legitimate triggers either. The lower-weight signals are designed to catch the wrapper page before the noVNC stream loads, so the warning fires early in the page lifecycle.
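Expressed as code, the model above reduces to a small pure function. The signal keys here are illustrative names, not the extension's internal identifiers:

```javascript
// Sketch of the weighted scoring model described above.
const WEIGHTS = {
  rfbHandshake: 100,   // RFB protocol handshake on a WebSocket
  noVncArtifacts: 90,  // noVNC globals or noVNC DOM elements present
  originMismatch: 55,  // IdP visual cues on a non-IdP origin
  canvasOnlyLogin: 50, // password requested, no <input type=password>
  wsNonIdpHost: 40,    // WebSocket to non-IdP host during a login context
  lurePathPattern: 20, // BitM-style /f/<slug> lure path
};

function assess(signals) {
  const score = signals.reduce((sum, s) => sum + (WEIGHTS[s] || 0), 0);
  if (score >= 80) return { score, level: "DANGER" };  // red blocking overlay
  if (score >= 40) return { score, level: "CAUTION" }; // amber badge
  return { score, level: "OK" };
}

assess(["noVncArtifacts", "lurePathPattern"]);
// → { score: 110, level: "DANGER" }
```

The two signals in that call are the same pair that fired in the live test, and they sum to the same 110 the test reported.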

Live test result — what happened when we ran it against our own framework

On 2026-05-02 we loaded the extension into a fresh Chromium profile and navigated to http://localhost:8443/f/defense-test-... — a real lure URL on the running BitM server.

Within ~1.5 seconds of page load:

  • Risk score reached 110, well past the 80-point DANGER threshold
  • Two signals fired immediately: noVNC library detected (+90) and BitM lure path pattern (+20)
  • Red blocking banner injected at the top of the page via Shadow DOM
  • OS-level desktop notification fired
  • Toolbar badge flipped to red !!
  • The "I understand, continue anyway" dismiss button was enforced with an 8-second cooldown before it became clickable

The lure social-engineering page (a Google Drive "James Carter has shared a file with you" template, an authentic-looking phishing pretext) was rendered behind the shield warning. The victim reads the warning before they ever see the Open with Google Docs button that triggers the noVNC stream.

The significance: the entire BitM capture chain — credentials, session cookies, OAuth refresh token, live VNC takeover — is rendered moot by a ~30 KB browser extension. The detection latency is short enough that no realistic victim will type credentials before the warning appears.

Install

Manual / developer install (until the Chrome Web Store listing is live):

  • Download or clone the repo: git clone https://github.com/lexlabtools/bitm-shield.git (or download the ZIP from the GitHub releases page)
  • Open chrome://extensions/ in Chrome, Edge, Brave, or any other Chromium-based browser
  • Enable Developer mode (toggle in the top-right corner)
  • Click Load unpacked and select the folder containing the extension
  • The shield icon appears in your toolbar

Test it without a real BitM attacker

The extension ships with a self-test page at test/index.html. Load it in your browser after installing the extension and click Run All Tests — the extension fires all detection signals and shows the red DANGER banner within 2 seconds. This confirms the install is working end-to-end without needing to run a BitM framework yourself.

What it does and does not do

Does:

  • Detect noVNC streams in real time via RFB handshake fingerprinting
  • Detect canvas-rendered login pages with no real password input
  • Detect origin mismatches between page visual cues and location.hostname
  • Inject a warning overlay that page CSS cannot remove
  • Run entirely locally — zero network calls, zero telemetry

Does not:

  • Block the page automatically. It warns prominently instead; the page remains functional underneath the warning so legitimate noVNC use cases (cloud VM dashboards, remote IT tools) are not locked out — users read the warning, decide it is safe, and dismiss after the cooldown.
  • Work on Firefox yet. Current build targets Chrome MV3. Firefox port is on the roadmap; the manifest needs minor changes for the browser.* API.
  • Detect non-noVNC remote-desktop technologies (RDP-over-WebRTC, VMware Horizon, etc.). Those are out of scope — they are not what BitM frameworks use.

The full extension source, plus the privacy statement (no data collection, ever), lives on GitHub at github.com/lexlabtools/bitm-shield — MIT-licensed, every file is plain JavaScript or HTML, no minification, no obfuscation, no third-party bundles. Audit it yourself.

Network-level mitigations

These complement the extension by raising the cost of operating BitM in environments you control.

Egress filtering

Block raw WebSocket frames going to non-allowlisted hosts during sign-in flows. Hard to do context-sensitively without a managed-browser policy — the gateway does not know the user is in a sign-in flow. Works best as part of a managed Chrome / Edge configuration where the browser itself enforces the policy.

DNS-layer blocking of newly-registered domains

Most BitM lure domains are less than 7 days old. Block resolution for domains registered more recently than your organization's threshold (we recommend 30 days for sensitive flows). Cisco Umbrella, Cloudflare Gateway, NextDNS, and most enterprise DNS filters expose this as a category. The trade-off is that legitimate fresh domains (a vendor's new product launch) are also blocked — handle via an exception process.

TLS interception with category-aware allow-list

For environments with SSL inspection: treat any "login" page served from a non-IdP-allowed domain as blocked. Requires the gateway to classify pages by content (Microsoft Defender for Cloud Apps, Netskope, Zscaler all do this for Workspace and M365). Tightens the egress to the IdPs you have onboarded.

Browser / endpoint mitigations

For Workspace tenants and managed-browser fleets:

Managed Chrome / Edge policies

  • URLBlocklist — block known noVNC client paths: /vnc.html, /websockify, /novnc/, common operator-panel paths. Catches lazy operators who deploy with default paths.
  • Connection-type policies to block WebSocket to non-approved origins during sensitive contexts. Limited in vanilla Chrome; better-supported in Edge for Business and BeyondCorp Enterprise environments.
  • Disable Clipboard.read() API in lure-likely contexts to prevent the operator from pulling clipboard contents from the victim's browser via injected scripts.
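The URLBlocklist item, for example, could be expressed in a managed-policy JSON fragment like the following. The wildcard-host path patterns are a sketch — check them against the URLBlocklist filter syntax for your Chrome version before deploying:

```json
{
  "URLBlocklist": [
    "*/vnc.html",
    "*/websockify",
    "*/novnc/*"
  ]
}
```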

Browser extension

Push BitM Shield as a managed extension via Chrome Enterprise policy. Force-installed to the entire fleet. Users cannot uninstall. The extension's local-only operation means it does not introduce telemetry concerns for compliance reviews.
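A force-install entry in the same policy file might look like this — the extension ID and update URL are placeholders, since a non-Web-Store extension needs a self-hosted update manifest until the listing is live:

```json
{
  "ExtensionInstallForcelist": [
    "<extension-id>;https://example.com/bitm-shield/updates.xml"
  ]
}
```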

Identity-provider-side mitigations

These do not stop the capture but raise the cost of replay and shorten the attacker's window.

Conditional Access and CAE

Microsoft Entra Conditional Access and Google's Continuous Access Evaluation: require device compliance and evaluate impossible-travel for every token-bound action. BitM containers run in datacenter ASNs — high-value flag in any policy that scores network risk.

The catch: Conditional Access evaluates sign-in, not the OAuth grant that follows. The grant happens after CA has already let the session through. CAE narrows the window for revocation but does not prevent the grant from being issued. Pair with the OAuth-grant hygiene below.

Device-bound session cookies (DBSC)

Chrome's standard for binding session cookies to a TPM-derived key on the user's device. Once universally deployed, raw cookie replay dies — the captured cookie jar is useless without the TPM key it was bound to. Currently rolling out on Google's own properties; broader adoption is in progress.

OAuth refresh tokens remain exposed under DBSC because they are not cookies and are never bound to the device key. The cookies become harder to replay; the refresh token does not. DBSC is real progress, but it does not close the BitM-against-Gmail surface.

Phishing-resistant MFA — with caveats

FIDO2 and passkeys defeat AiTM cleanly. They do not defeat BitM because the credential ceremony fires inside the attacker's browser against the real Google origin. The signature is valid, Google accepts it, the session is established. The operator with VNC access is the user as far as Google can see.

This is counterintuitive and worth restating: rolling out FIDO2 against BitM does not solve BitM. It solves the easier AiTM problem. For BitM, you need the noVNC-detection layer (BitM Shield) plus the OAuth-grant hygiene below.

Session-binding to client TLS fingerprint

Where supported by the IdP: bind the session to the client TLS JA3 or JA4 fingerprint. Flags the moment captured cookies are replayed from a different fingerprint. Limited deployment — Google does not expose this as a Workspace policy today.

OAuth-grant hygiene — the most under-emphasized layer

This is the layer most IR runbooks and most SOC playbooks skip. It is the layer that stops the long-term damage.

The pattern: BitM captures cookies and a refresh token. Cookies expire or get invalidated. The refresh token does not. The attacker keeps reading mail for weeks. Until you specifically audit and revoke OAuth grants, your remediation is incomplete.

Quarterly grant audits

For individuals: review myaccount.google.com/permissions quarterly. Remove any third-party app you do not actively use. Pay attention to apps with broad scopes (mail.google.com, gmail.modify, gmail.send) — these are the ones an attacker would have granted to themselves.

For Workspace admins: review the OAuth API access dashboard at admin.google.com/ac/owl/list quarterly. Trust only known apps. Block all third-party apps by default and allowlist by exception. The dashboard shows app name, scopes, user count, and last-used timestamp — the last-used field is the audit signal for stale grants.

Configure Workspace to require admin consent for any scope that includes mail read/send, calendar full access, or drive full access. Prevents standard users from self-consenting to apps with high-impact scopes. The friction is real — users will hit the request-approval flow when they try to authorize legitimate apps — but the surface this closes is the canonical BitM persistence path.

Detect refresh-token issuance from new IP geographies

Workspace audit log signal: new refresh token issued for a user from an IP geography (or ASN) the user has not authenticated from before. Pair with the detection post's hosting-ASN list — token issuance from DigitalOcean, OVH, Hetzner, AWS, or Azure for a user whose normal sign-in is residential is the BitM signature.

Fire as a Workspace audit alert. Investigate within hours, not days — once the token is in the attacker's hands, it is replayable indefinitely.
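The comparison itself is simple enough to sketch as a pure function. The event shape, the helper name, and the ASN list are illustrative — in practice you would feed it rows from the Workspace token audit log, enriched with the source ASN:

```javascript
// Sketch of the token-issuance anomaly check. ASNs below are the
// well-known hosting networks named in the text.
const HOSTING_ASNS = new Set([
  14061, // DigitalOcean
  16276, // OVH
  24940, // Hetzner
  16509, // Amazon AWS
  8075,  // Microsoft Azure
]);

function flagTokenIssuance(event, knownAsnsForUser) {
  const fromHosting = HOSTING_ASNS.has(event.asn);
  const newForUser = !knownAsnsForUser.has(event.asn);
  if (fromHosting && newForUser) {
    return {
      alert: true,
      reason: `refresh token issued from hosting ASN ${event.asn}, never seen for ${event.user}`,
    };
  }
  return { alert: false };
}
```

A token issued from AS14061 for a user whose history is purely residential fires the alert; the same ASN already in the user's baseline does not.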

IR runbook update

On any suspected phishing event, revoke OAuth grants in addition to rotating the password and killing sessions. The IR runbook walks through the exact sequence. The short version: password rotation alone leaves the attacker reading mail.

What does not work

Some controls people assume help that actually do not against BitM:

  • SMS-based 2FA. The code is delivered to the user's phone, but it gets typed inside the attacker's browser, against real Google. Google completes the sign-in, the attacker is in. SMS does not gate the attacker.
  • TOTP authenticator apps. Same problem. The TOTP code is typed inside the attacker's browser. Google validates it, the session is established.
  • Phishing-resistant MFA / FIDO2 / passkeys. Counterintuitively, no — see the IdP section above. The credential ceremony fires against the real Google origin, completes normally, and the attacker holds the resulting session.
  • Email gateway URL rewriting. Helpful for known-bad domains. Does not help for fresh domains the attacker registered yesterday.
  • Domain blocklists. Same limitation. Fresh lure domain, no blocklist coverage.
  • Phishkit-signature scanners. No HTML clone exists in BitM. Nothing for the scanner to fingerprint.

The reason BitM Shield works where these others fail is that it does not try to identify the lure page from its content (which can be anything) — it identifies the noVNC stream from its protocol fingerprint, which is invariant. The attacker can change the lure copy, the visual styling, the wrapper URL, the hosting provider — none of that changes the RFB handshake on the WebSocket. That is what we fire on.

The rollout sequence we recommend

For an individual:

  • Install BitM Shield in your primary browser today
  • Audit myaccount.google.com/permissions and remove any third-party app you do not actively use
  • Bookmark accounts.google.com directly. Sign into Google by clicking the bookmark, never by clicking a link from email or chat
  • Enable a passkey for Google as defense against AiTM (it does not stop BitM but it stops the much-more-common AiTM attacks)

For a Workspace tenant:

  • Deploy BitM Shield as a managed extension via Chrome Enterprise policy — entire fleet, force-installed, week one
  • Configure Workspace to require admin consent for sensitive OAuth scopes — week two, after the help-desk briefing
  • Stand up the Workspace audit-log query for OAuth-grant anomalies (covered in the detection post) — week three
  • Schedule quarterly OAuth-grant audits as a standing ops process — ongoing
  • Layer Conditional Access / CAE / device compliance on top — months one through three, depending on your existing identity posture

The single highest-leverage action is deploying BitM Shield. Everything else is depth behind it.

Read the IR runbook next — what to do when something gets through, and the OAuth-revoke step that most IR playbooks miss.

Want help rolling out BitM defenses across your tenant?

We deploy BitM Shield to managed Chrome fleets, write the Workspace OAuth-grant audit policy, and stand up the IR-routing automation so the alerts go to the right people. Two-week engagement, fixed price.

