
41% of All Code Is Now AI-Generated — But Developers Using AI Are Actually 19% Slower

MegaOne AI · Apr 1, 2026 · Updated Apr 2, 2026 · 3 min read
Engine Score 7/10 — Important
  • GitHub reports that 41% of all new code is now AI-generated, with Copilot specifically generating 46% of code for its users — reaching 61% in Java projects.
  • A randomized controlled trial by METR found that experienced open-source developers were 19% slower when using AI tools, despite believing they were 24% faster.
  • The METR study tested 16 developers across 246 real-world tasks on repositories averaging 22,000+ stars and over 1 million lines of code, paying participants $150/hour.
  • GitHub Copilot reached 4.7 million paid subscribers as of January 2026, with deployment at approximately 90% of Fortune 100 companies.

What Happened

Two data points are colliding in software development. GitHub’s own statistics show that 41% of all new code written on its platform is now AI-generated, with GitHub Copilot specifically responsible for 46% of code produced by its users. At the same time, a rigorous randomized controlled trial published by METR (Model Evaluation & Threat Research) found that experienced developers using AI coding tools actually took 19% longer to complete tasks than those working without them.

The disconnect between adoption and measured productivity has become one of the most debated findings in software engineering research. AI code generation is accelerating while evidence mounts that speed gains remain elusive for experienced developers working on complex codebases.

Why It Matters

The perception gap is striking. Before the METR study, participating developers predicted that AI tools would make them 24% faster. After completing their tasks — and objectively taking 19% longer with AI assistance — they still believed AI had sped them up by 20%. The tools feel faster even when they are not.
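The arithmetic behind the perception gap can be sketched directly. The task times below are invented for illustration; only the percentages (19% slower measured, 24% predicted, 20% believed afterward) come from the study:

```python
# Hypothetical task times (hours). Only the percentages are from the
# METR study; the baseline time is invented for illustration.
baseline_time = 2.0              # average task time without AI
ai_time = baseline_time * 1.19   # 19% slower with AI, per the study

measured_slowdown = (ai_time - baseline_time) / baseline_time
print(f"Measured slowdown with AI: {measured_slowdown:.0%}")  # 19%

# Developers' pre-study forecast and post-study belief, expressed as
# the time reduction they expected AI to deliver:
predicted_speedup = 0.24   # "24% faster" before the study
perceived_speedup = 0.20   # "20% faster" after, despite being slower

# Distance between the believed speedup and the measured slowdown,
# as a fraction of baseline task time:
perception_gap = perceived_speedup + measured_slowdown
print(f"Gap between belief and measurement: {perception_gap:.0%}")  # 39%
```

In other words, developers' post-hoc self-assessment missed the measured outcome by roughly 39 percentage points of baseline task time.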

This matters because enterprises are making hiring and productivity projections based on the assumption that AI coding tools deliver measurable speed gains. GitHub Copilot is deployed at roughly 90% of Fortune 100 companies. If the productivity gains are perceptual rather than real for experienced developers, corporate planning models built on those assumptions may be flawed.

Technical Details

The METR study, conducted by Joel Becker, Nate Rush, Elizabeth Barnes, and David Rein between February and June 2025, used a randomized controlled trial design. Sixteen experienced open-source developers were given 246 real-world issues — bugs, features, and refactors — from their own repositories, which averaged 22,000+ GitHub stars and over 1 million lines of code. Each task averaged approximately 2 hours. Developers were randomly assigned to either use or not use AI tools for each task.
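The assignment scheme described above can be sketched as follows. This is a simplification under stated assumptions: the task names, seed, and condition labels are invented; the point is only that each issue, not each developer, is independently randomized, so every developer works under both conditions:

```python
import random

# Per-task randomization sketch (task list and seed are invented;
# stand-ins for the study's 246 real issues).
random.seed(42)
tasks = [f"issue-{n}" for n in range(1, 11)]
assignment = {t: random.choice(["ai_allowed", "ai_disallowed"]) for t in tasks}

for task, condition in assignment.items():
    print(task, condition)
```

Because randomization happens at the task level, within-developer comparisons control for individual skill differences, which is what lets a sample of only 16 developers support a causal estimate.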

The primary AI tools used were Cursor Pro with Claude 3.5 and 3.7 Sonnet models. Participants had substantial prior experience with LLM-based coding tools, logging dozens to hundreds of hours of prior use. Time was self-reported and verified against screen recordings. Participants were compensated at $150 per hour.

The researchers framed their result carefully: “We view this result as a snapshot of early-2025 AI capabilities in one relevant setting.” They noted that AI tools are evolving rapidly and plan to repeat the study with newer models.

Who’s Affected

The findings carry different implications depending on developer experience. The METR study specifically tested experienced developers working on large, familiar codebases — the exact scenario where deep contextual knowledge might outweigh AI assistance. Junior developers or those working on unfamiliar code may see different results. GitHub’s own research, using a broader sample of 4,800 developers, found a 55% task completion speed increase with Copilot, though that study did not use the same controlled methodology.

Engineering managers face a practical reality: AI coding tools have reached 84% adoption among developers, with 51% using them daily. The tools are already embedded in workflows whether or not they deliver net speed gains.

What’s Next

Only about 30% of AI-suggested code gets accepted by developers, indicating that human review still dominates the workflow. The METR team plans to repeat the study as AI models improve, which will help determine whether the 19% slowdown is a temporary limitation of early-2025 models or a more fundamental challenge with AI-assisted development on complex codebases.

GitHub Copilot reached 4.7 million paid subscribers as of January 2026, with 75% year-over-year growth, suggesting that adoption will continue regardless of the productivity debate. Competitor Cursor captured 18% market share within 18 months of launch. The key unanswered question is whether newer models — released since the METR study’s February-June 2025 window — have closed the gap between perceived and actual productivity gains.


MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked by our editorial team, linked to primary sources, and rated using our six-factor Engine Score methodology.
