Anthropic has added a Code Review feature to Claude Code, extending the company’s coding product into pull request review as enterprises absorb more AI-generated code into everyday software work.

TechCrunch reported that the feature is entering research preview for Claude for Teams and Claude for Enterprise customers and can analyze pull requests automatically through GitHub once enabled.

The trust gap in AI-generated code

The launch lands as software teams grapple with persistent verification problems around AI-generated code. Google Cloud’s 2024 DORA report found that while AI tools improved some measures of individual productivity, a 25% increase in AI adoption was associated with a 1.5% drop in delivery throughput and a 7.2% decline in delivery stability. In the same report, 39% of respondents said they had little or no trust in AI-generated code. (The 2025 edition found that AI adoption no longer negatively correlates with delivery throughput, though stability impacts remain.)

Stack Overflow’s 2025 developer survey found that more developers distrusted AI tool accuracy than trusted it, while 66% said their biggest frustration was dealing with AI solutions that were almost right but not quite. Veracode’s 2025 GenAI Code Security Report, meanwhile, found that 45% of code samples in its benchmark failed security tests and introduced OWASP Top 10 vulnerabilities.

Anthropic Head of Product Cat Wu told TechCrunch that Claude Code had increased code output and, with it, the volume of pull requests awaiting human review, creating a drag on shipping. She said the new feature was built in response to that pressure from enterprise customers.

How the review feature works

TechCrunch reported that Code Review uses multiple agents in parallel to examine a pull request from different angles, then passes findings to a final agent that removes duplicates and ranks the most important issues.
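That description matches a familiar fan-out/fan-in orchestration pattern. The Python sketch below illustrates the general pattern only, not Anthropic’s implementation; the review angles, the Finding type, and the scoring are all invented for the example.

```python
import asyncio
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    issue: str
    score: float  # importance assigned by the reviewing agent

async def run_reviewer(angle: str, diff: str) -> list[Finding]:
    """Stand-in for one agent reviewing the diff from a single angle,
    e.g. by prompting a model with an angle-specific instruction."""
    return []  # a real implementation would return model-generated findings

async def review_pull_request(diff: str) -> list[Finding]:
    angles = ["logic", "security", "performance", "error handling"]
    # Fan out: every agent examines the same diff in parallel.
    batches = await asyncio.gather(*(run_reviewer(a, diff) for a in angles))
    # Fan in: a final pass removes duplicates, then ranks what remains
    # so the most important issues surface first.
    unique = {(f.file, f.line, f.issue): f for batch in batches for f in batch}
    return sorted(unique.values(), key=lambda f: f.score, reverse=True)
```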

Wu told TechCrunch that the product is focused on logical errors over style, reflecting a decision to prioritize feedback developers can act on immediately. The system explains why it thinks an issue matters and uses a severity scheme that includes red, yellow and purple markers.
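As a toy illustration of that triage step, the snippet below buckets findings under the three color markers the article mentions; the meaning attached to each color is an assumption made for the example, not Anthropic’s definition.

```python
from enum import Enum

class Marker(Enum):
    # Color labels per the article; the descriptions are placeholders.
    RED = "likely blocks merging"
    YELLOW = "worth fixing"
    PURPLE = "minor or informational"

def group_by_marker(findings):
    """Bucket findings so reviewers can act on the highest-severity items first."""
    buckets = {m: [] for m in Marker}
    for f in findings:
        buckets[f.marker].append(f)
    return buckets
```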

Integration with existing security workflows

Anthropic’s own documentation provides broader context on how Claude Code already fits into team and enterprise development workflows.

Anthropic’s Help Center says Claude Code is included with every Team plan seat, while Enterprise availability depends on plan and seat type. Separate Anthropic documentation describes automated security reviews in Claude Code through a /security-review command and a GitHub Action for pull requests, and says those checks are meant to complement, not replace, existing security practices and manual code reviews.

Wu told TechCrunch that the new Code Review feature includes light security analysis, while deeper security work sits elsewhere in Anthropic’s tooling.

What the automated security checks cover

Anthropic’s Help Center documentation on automated security reviews says the product can check for issues including SQL injection risks, cross-site scripting, authentication and authorization flaws, insecure data handling and dependency vulnerabilities.
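To make one of those categories concrete, here is the classic SQL injection pattern such a check is designed to flag, shown in Python with sqlite3 alongside the parameterized-query fix. This is a generic illustration, not output from Anthropic’s tooling.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: user input is interpolated into the SQL string,
    # so name = "x' OR '1'='1" would return every row in the table.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Fixed: a parameterized query keeps the input as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```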

TechCrunch reported that the feature is coming first to Teams and Enterprise customers in research preview and that pricing is token-based, with Wu estimating a typical review at $15–$25 depending on code complexity.

Anthropic’s enterprise materials also show that Claude Code sits inside a more managed environment for larger organizations: Team and Enterprise access varies by plan structure, and usage analytics are available to owners and, on Enterprise plans, to admins.

The competitive context

Anthropic is not entering an empty category. GitHub put Copilot code review into public preview in February 2025 and general availability in April 2025, making AI-assisted pull request review available inside a workflow many engineering teams already use.

The category also includes dedicated tools built specifically around the review bottleneck. CodeRabbit, one of the more widely used standalone options, offers a flat-rate model for unlimited AI pull request reviews that could undercut Anthropic’s per-use pricing.
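As a rough, hypothetical illustration of that pricing tension: using Wu’s $15–$25 per-review estimate, the arithmetic below projects monthly spend for an assumed volume of 100 reviewed pull requests. The volume is invented for the example; only the per-review figures come from the article.

```python
# Hypothetical cost projection using the per-review estimates from the article.
reviews_per_month = 100                    # assumed team volume, illustrative only
per_review_low, per_review_high = 15, 25   # Wu's estimate in dollars, per TechCrunch

monthly_low = reviews_per_month * per_review_low     # $1,500
monthly_high = reviews_per_month * per_review_high   # $2,500
print(f"Projected monthly spend: ${monthly_low:,}-${monthly_high:,}")
```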

That pricing tradeoff may sharpen enterprise procurement conversations, even among organizations already invested in Claude Code.
