There is an old management adage that you get what you measure: whatever you track, you tend to produce more of.
Software engineers have been debating productivity metrics for decades, starting with lines of code. But as a new generation of AI coding agents churns out more code than ever before, it is less clear what managers should be measuring.
Among Silicon Valley developers, huge token budgets (essentially the amount of AI processing power a developer is allowed to consume) have become a badge of honor, but that is a strange way to think about productivity. It makes little sense to measure the input to a process when what you care about is the output. Measuring inputs might be rational if the goal is to encourage AI adoption (or to sell tokens), but not if the goal is efficiency.
Consider the evidence from a new class of companies operating in the developer productivity insights space. These firms found that developers using tools like Claude Code, Cursor, and Codex were getting far more code accepted than before. But they also found that engineers had to go back more often to fix that accepted code, undercutting claims of increased productivity.
Alex Circei, CEO and founder of Waydev, is building an intelligence layer to track these dynamics. His company works with 50 customers that together employ more than 10,000 software engineers. (Circei has written for TechCrunch in the past, but this reporter had never met him before.)
He said engineering managers see code acceptance rates (the percentage of AI-generated code that developers approve and keep) of 80% to 90%, but they miss the churn that follows. Once engineers have to go back and fix that code within a few weeks, the effective acceptance rate drops to between 10% and 30% of generated code.
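The gap Circei describes can be sketched as simple arithmetic. This is an illustrative calculation only; the function name and all the line counts below are invented for the example, not Waydev's actual methodology or data.

```python
# Illustrative sketch: headline acceptance rate vs. the "effective" rate
# once code that engineers later had to rework is discounted.
# All numbers are hypothetical.

def effective_acceptance(generated_lines, accepted_lines, reworked_lines):
    """Return (headline rate, rate after discounting later rework)."""
    headline = accepted_lines / generated_lines
    surviving = (accepted_lines - reworked_lines) / generated_lines
    return headline, surviving

headline, surviving = effective_acceptance(
    generated_lines=10_000,  # AI-generated lines offered to developers
    accepted_lines=8_500,    # lines approved at review time (85%)
    reworked_lines=6_500,    # accepted lines fixed within a few weeks
)
print(f"headline acceptance:  {headline:.0%}")   # 85%
print(f"effective acceptance: {surviving:.0%}")  # 20%
```

With these made-up inputs, an 85% headline rate collapses to 20% once rework is subtracted, which is the shape of the drop (80-90% down to 10-30%) that Circei describes.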
Founded in 2017 to provide developer analytics, Waydev has completely reworked its platform over the past six months to address the rapid proliferation of AI coding tools. Now the company is releasing new tools that track metadata generated by AI agents and provide analytics on code quality and cost, giving engineering managers more insight into both AI adoption and effectiveness.
While analytics companies have an incentive to highlight problems they discover, there is growing evidence that large organizations are still looking for ways to use AI tools effectively. Major companies are also paying attention. Last year, Atlassian acquired another engineering intelligence startup, DX, for $1 billion to help customers understand the return on investment in coding agents.
Data from across the industry tells a consistent story. More code is being written, but a disproportionate amount of it doesn’t stick around.
GitClear, another company in this space, published a report in January that found AI tools do improve productivity, but its data also showed that "regular AI users had an average of 9.4x higher code churn than non-AI users," far outstripping the roughly twofold productivity gains the tools delivered.
Faros AI, an engineering analytics platform, drew on two years of customer data for its March 2026 report. It found that code churn (lines of code removed relative to lines added) increased by 861% as AI adoption rose.
Jellyfish, which touts an intelligence platform for AI-integrated engineering, collected data on 7,548 engineers in the first quarter of 2026. The company found that engineers with the largest token budgets generated the most pull requests (proposed changes to a shared codebase), but the productivity gains did not scale: those engineers achieved 2x the throughput at 10x the token cost. In other words, the tools create quantity, not value.
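The diminishing returns in Jellyfish's finding are easy to see in per-dollar terms. This is a back-of-the-envelope sketch with invented figures that merely preserve the reported 2x-throughput-at-10x-cost ratio, not Jellyfish's raw data.

```python
# Hypothetical monthly figures preserving the reported ratios:
# heavy token users ship 2x the pull requests at 10x the token spend.
baseline = {"prs": 10, "token_cost": 100.0}    # typical engineer
heavy    = {"prs": 20, "token_cost": 1000.0}   # top-budget engineer

def prs_per_dollar(stats):
    """Pull requests produced per dollar of token spend."""
    return stats["prs"] / stats["token_cost"]

print(f"baseline: {prs_per_dollar(baseline):.3f} PRs per token-dollar")  # 0.100
print(f"heavy:    {prs_per_dollar(heavy):.3f} PRs per token-dollar")     # 0.020
```

Under these assumptions, the heaviest users are five times less cost-efficient per pull request, even though their raw output doubles.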
When you talk to developers, statistics like these ring true. Even as they enjoy the freedom of the new tools, they find code reviews and technical debt piling up. One common finding is a gap between senior and junior engineers, with the latter accepting far more AI-generated code and, as a result, shouldering a larger share of the rewrites.
Still, even as developers work to understand exactly what the agents are doing, few expect the industry to backtrack any time soon.
“This is a new era in software development, and we have to adapt; we’re forced to adapt as a company,” Circei told TechCrunch. “This cycle isn’t going to pass.”
