The Anthropic logo displayed on a smartphone screen, with multiple Claude AI logos in the background. After releasing Claude Opus 4.6 on February 5, Anthropic continues to challenge its leading competitors in the generative AI market, February 6, 2026, in Créteil, France.
Samuel Boivin | NurPhoto | Getty Images
Anthropic has leaked some internal source code for Claude Code, its popular artificial intelligence coding assistant, the company acknowledged on Tuesday.
An Anthropic spokesperson said in a statement that “no sensitive customer data or credentials were involved or compromised.” “This is a release package issue caused by human error and is not a security breach,” the statement continued. “We are taking steps to ensure this never happens again.”
The source code leak is a blow to the startup, as it could give software developers and Anthropic’s competitors insight into how it built its viral coding tool. A post on X with a link to Anthropic’s code has garnered more than 21 million views since it was shared at 4:23 a.m. ET on Tuesday.
The leak also marks the second major data error Anthropic has made within the past week. Fortune magazine reported on Thursday that descriptions of Anthropic’s upcoming AI models and other documents were recently discovered in a publicly accessible data cache.
Anthropic was founded in 2021 by a group of former OpenAI executives and researchers and is best known for developing a family of AI models called Claude.
The company made Claude Code generally available in May to help software developers build features, fix bugs, and automate tasks.
Claude Code saw massive adoption last year, with run-rate revenue growing to over $2.5 billion as of February.
The success of this tool has spurred competitors to take action, with OpenAI, Google, and xAI all devoting resources to developing rival products.