Zero-Click Vulnerability in Claude Chrome Extension Allows XSS Prompt Injection from Any Site

What Happened with the Claude Extension
On March 26, 2026, security researchers publicly detailed a vulnerability in Anthropic’s official Claude Chrome extension. The flaw, responsibly disclosed in December 2025, allowed attackers to run malicious prompts inside the AI assistant without any user interaction—no clicks, no warnings, no permission prompts.
Anthropic released a fix in extension version 1.0.41. The third-party CAPTCHA provider involved also patched its component. The issue is now resolved for users who have updated. But the incident reveals a broader problem: AI tools that sit inside your browser can become high-value targets once they gain enough access to your data and actions.
How the Attack Actually Worked
The researchers called it ShadowPrompt. It combined two separate weaknesses.
First, the extension used an overly broad allowlist that trusted any host matching *.claude.ai. Second, a DOM-based cross-site scripting (XSS) flaw existed in an Arkose Labs CAPTCHA component hosted on a-cdn.claude.ai.
An attacker could host a page that quietly loaded the vulnerable CAPTCHA inside a hidden iframe, send a crafted message, and execute JavaScript in the trusted Claude domain. That script then told the extension to run whatever prompt the attacker wanted—stealing conversation history, pulling access tokens, or making the AI draft and send emails as the user.
The victim would see nothing. They could be reading the news or checking email and never know the AI assistant had been hijacked.
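The allowlist half of the flaw is easy to illustrate. This is a hedged sketch, not the extension's actual code (which is not public): a wildcard rule extends trust to every subdomain, including a vulnerable CAPTCHA host, while pinning exact origins would not.

```python
from urllib.parse import urlparse

# Hypothetical sketch of the two allowlist strategies; function names
# and the strict pin list are illustrative assumptions, not Anthropic's code.

def wildcard_allow(url: str) -> bool:
    """Overly broad: trusts claude.ai and every subdomain under it."""
    host = urlparse(url).hostname or ""
    return host == "claude.ai" or host.endswith(".claude.ai")

STRICT_ORIGINS = {"claude.ai", "www.claude.ai"}  # assumed exact pin list

def strict_allow(url: str) -> bool:
    """Safer: trusts only explicitly pinned hosts."""
    return (urlparse(url).hostname or "") in STRICT_ORIGINS

# The vulnerable CAPTCHA host passes the wildcard check but not the strict one.
print(wildcard_allow("https://a-cdn.claude.ai/captcha"))  # True
print(strict_allow("https://a-cdn.claude.ai/captcha"))    # False
```

The design lesson generalizes: trust decisions based on suffix matching inherit the security posture of every current and future host under that suffix.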
Why This Matters to Business Operations and Leadership
Many companies now let teams use Claude (or similar AI assistants) to draft proposals, summarize contracts, analyze spreadsheets, or generate client emails. That convenience comes with deep browser access: the extension can read open tabs, interact with web apps, and act on the user’s behalf.
When that access is compromised without any visible sign, the risk moves from “theoretical AI prompt injection” to a practical operational threat. Executives and operations leaders who approve these tools need to understand that the security boundary is only as strong as the weakest link in the chain—in this case, a third-party CAPTCHA service.
Real-World Business Impact
Picture a finance manager in Los Angeles using Claude to review a vendor contract while several tabs are open. An attacker on a seemingly legitimate site triggers the flaw. The AI suddenly “decides” to export the conversation history and email it to an external address, or it begins drafting wire-transfer instructions that look perfectly normal.
The damage is quiet, fast, and hard to trace. You lose sensitive data, face compliance questions under CCPA or upcoming AI regulations, and deal with the inevitable insurance and board inquiries about why AI tools weren’t properly vetted. Downtime isn’t the only cost—lost trust and incident response hours add up quickly.
Immediate Actions You Can Take Today
Confirm every user with the Claude extension has version 1.0.41 or newer.
Review which team members have AI browser extensions installed and what data they routinely feed them.
Disable or restrict extensions that request broad permissions unless there is a clear, documented business need.
Scan recent Claude conversation exports for any unusual activity (most organizations don’t realize how much corporate IP now lives inside these chats).
These steps take minutes but close the immediate gap.
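For the first step, the version comparison itself is simple to script. The sketch below assumes Chrome's standard layout, where each installed extension keeps one folder per version containing a manifest.json with a "version" field; the extensions directory and extension ID are placeholders you would fill in for your environment.

```python
import json
from pathlib import Path

MIN_SAFE = (1, 0, 41)  # patched Claude extension version per the advisory

def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str) -> bool:
    """True if the installed version is 1.0.41 or newer."""
    return parse_version(installed) >= MIN_SAFE

def unpatched_versions(extensions_dir: Path, extension_id: str) -> list[str]:
    """Scan a Chrome profile's Extensions folder for out-of-date installs.

    extensions_dir and extension_id are environment-specific placeholders.
    """
    flagged = []
    for manifest in (extensions_dir / extension_id).glob("*/manifest.json"):
        version = json.loads(manifest.read_text())["version"]
        if not is_patched(version):
            flagged.append(version)
    return flagged

print(is_patched("1.0.41"))  # True
print(is_patched("1.0.40"))  # False
```

Tuple comparison handles multi-digit components correctly (1.0.41 sorts after 1.0.9), which naive string comparison would get wrong.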
Building Lasting Protection Against AI Tool Risks
Patching one extension is not enough. The real work is aligning AI adoption with your existing security, compliance, and incident readiness programs.
That means:
Creating clear governance for which AI tools your teams can use and how.
Regularly testing browser-based tools the same way you test other third-party software.
Giving leadership visibility into AI-related risks so decisions are made with eyes open rather than after an incident.
At Purple Shield Security, we work alongside executive teams, internal IT, and managed service providers to do exactly that. We’re not here to sell another tool. We help you cut through the noise, identify the real exposures, and close the gaps that matter to your operations.
How an Independent Cybersecurity Partner Helps
Purple Shield Security provides independent cybersecurity leadership and vCISO services to companies in Los Angeles that want practical protection without the marketing hype. Whether you need a focused review of your AI tool usage, help strengthening incident readiness, or ongoing guidance that fits your compliance and insurance requirements, we act as your trusted advisor.
If this latest Claude extension incident has you wondering what else might be slipping through the cracks, you’re not alone.
Reach out to Purple Shield Security today for a no-pressure conversation about where your organization stands on AI security and what targeted steps will give you and your board real confidence.
