When it comes to coding, peer feedback is critical to catching bugs early, maintaining consistency across the codebase, and improving the overall quality of the software.
The rise of “vibe coding,” the use of AI tools that take instructions given in plain language and quickly generate large amounts of code, has changed the way developers work. While these tools speed up development, they also introduce new bugs, security risks, and poorly understood code.
Anthropic’s answer is an AI reviewer designed to catch bugs before they are merged into a codebase. The new feature, called Code Review, launched in Claude Code on Monday.
“We’ve seen the growth of Claude Code, especially within enterprises. One of the questions we’re getting from enterprise leaders is: Right now, Claude Code is submitting a lot of pull requests, how do we make sure they get reviewed in an efficient way?” Cat Wu, head of product at Anthropic, told TechCrunch.
A pull request is a mechanism developers use to submit code changes for review before merging them into the codebase. Wu said Claude Code has dramatically increased code output, resulting in a surge of pull requests awaiting review and bottlenecks in shipping code.
“Code reviews are our answer to that,” Wu said.
The launch of Anthropic’s Code Review, first made available to Claude for Teams and Claude for Enterprise customers in research preview, comes at a pivotal moment for the company.
Anthropic filed two lawsuits against the Department of Defense on Monday after the agency designated Anthropic as a supply chain risk. The dispute is likely to force Anthropic to focus more on its thriving enterprise business, where subscriptions have quadrupled since the start of the year. The company says Claude Code’s run-rate revenue has exceeded $2.5 billion since its launch.
“This product is very targeted at large enterprise users, so companies like Uber, Salesforce, and Accenture are already using Claude Code and are now asking for help with the huge amount of [pull requests] generated by Claude Code,” Wu said.
She added that development leads can turn on Code Review so it runs by default for every engineer on the team. When enabled, the tool integrates with GitHub to automatically analyze pull requests and leave comments directly in the code describing potential issues and suggested fixes.
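The article doesn’t describe Anthropic’s integration internals, but the plumbing this kind of tool rests on is GitHub’s documented review-comment endpoint. The sketch below illustrates that mechanism only; the repository, commit, and token values are placeholders, and none of it is Anthropic’s implementation.

```python
# Illustrative only: posting a comment to a specific line of a pull
# request diff via GitHub's REST API, the mechanism an automated
# reviewer uses to leave findings "directly in the code."
import json
import urllib.request

API = "https://api.github.com"

def build_comment(body: str, commit_id: str, path: str, line: int) -> dict:
    """Payload for GitHub's 'create a review comment' endpoint."""
    return {
        "body": body,            # the finding and a suggested fix
        "commit_id": commit_id,  # head commit the comment attaches to
        "path": path,            # file within the diff
        "line": line,            # line number in the new version
        "side": "RIGHT",         # comment on the changed (new) side
    }

def post_comment(owner: str, repo: str, pr: int, token: str,
                 payload: dict) -> dict:
    """POST /repos/{owner}/{repo}/pulls/{pr}/comments (needs network)."""
    req = urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/pulls/{pr}/comments",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

comment = build_comment(
    "Possible off-by-one in the loop bound; consider `range(n)`.",
    commit_id="abc123", path="app.py", line=42)
```

A real reviewer bot would be triggered by a pull-request webhook, run its analysis, then call `post_comment` once per finding.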
Wu said the tool focuses more on correcting logical errors than on style.
“This is really important because a lot of developers have seen automatic feedback from AI in the past, but it’s frustrating when it’s not immediately actionable,” Wu said. “We decided to focus purely on logical errors, so we can surface the highest-priority issues to fix.”
The AI explains its reasoning step-by-step, outlining what it thinks the problem is, why it’s a problem, and how it can potentially be fixed. The system labels the severity of each problem with a color: red for the highest severity, yellow for potential issues worth considering, and purple for issues related to existing code or past bugs.
Wu said the system does this quickly and efficiently by relying on multiple agents working in parallel, each inspecting the codebase from a different perspective. A final agent aggregates and ranks the results, removing duplicates and prioritizing the most important findings.
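That fan-out/fan-in pattern can be sketched in a few lines. This is a toy illustration under stated assumptions, not Anthropic’s code: the reviewer functions here return canned findings where a real agent would prompt a model with the diff, and the severity ordering simply mirrors the article’s color labels.

```python
# Toy sketch of the multi-agent review pattern described above:
# several reviewers inspect the same change in parallel, then a final
# step merges findings, drops duplicates, and ranks by severity.
from concurrent.futures import ThreadPoolExecutor

# Severity order mirrors the article's color labels: red (highest),
# yellow (worth considering), purple (existing code or past bugs).
SEVERITY_RANK = {"red": 0, "yellow": 1, "purple": 2}

def logic_reviewer(diff):
    # Placeholder: a real agent would analyze the diff with a model.
    return [{"path": "app.py", "line": 10, "severity": "red",
             "note": "loop bound off by one"}]

def security_reviewer(diff):
    return [{"path": "app.py", "line": 10, "severity": "red",
             "note": "loop bound off by one"},   # duplicate finding
            {"path": "db.py", "line": 3, "severity": "yellow",
             "note": "query built by string concatenation"}]

def review(diff, agents):
    # Fan out: every agent sees the same diff concurrently.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda agent: agent(diff), agents))
    # Fan in: deduplicate identical findings, then rank by severity.
    unique = {(f["path"], f["line"], f["note"]): f
              for findings in results for f in findings}
    return sorted(unique.values(),
                  key=lambda f: (SEVERITY_RANK[f["severity"]], f["path"]))

findings = review("...diff text...", [logic_reviewer, security_reviewer])
```

Here the duplicate red finding from the two reviewers collapses into one, and the surviving findings come back highest-severity first.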
The tool provides basic security analysis and allows engineering leaders to add custom checks based on internal best practices. For more in-depth security analysis, Wu pointed to Anthropic’s recently launched Claude Code Security.
The multi-agent architecture makes this a resource-intensive product, Wu said. As with other AI services, pricing is token-based, and costs vary with the complexity of the code under review; Wu estimated that each review costs $15 to $25 on average. She added that this is a premium experience, and one that is needed as AI tools generate more and more code.
“[Code Review] is driven by tremendous market traction,” Wu said. “As engineers develop with Claude Code, we see [reduced] friction in creating new features and notice a much higher demand for code reviews. So we expect this to help companies build faster and with far fewer bugs than before.”
