“Vibe coding” — the practice of describing intent in natural language and letting AI handle implementation — is the dominant trend in software development in 2026. The term, coined to describe developers who code by vibes rather than syntax, has spawned an entire category of tools from Cursor to Lovable to Antigravity. But the most rigorous study on AI-assisted coding tells a story the industry would rather not hear.
The METR Study in Detail
METR conducted a randomized controlled trial with 16 experienced open-source developers across 246 real issues on codebases they maintained. This wasn’t a synthetic benchmark — these were developers working on their own production code. With AI tools enabled, developers completed their tasks 19% slower than without them.
The methodology matters: developers were randomly assigned to AI-assisted or unassisted conditions for each issue, controlling for task difficulty, developer skill, and codebase complexity. The 19% slowdown is not a perception — it is a measured outcome across hundreds of real tasks.
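To make the design concrete, here is a toy sketch of per-issue random assignment (this is an illustration of the general RCT idea, not METR's actual pipeline, and the group sizes are whatever the random draw produces):

```python
import random

# Each issue is independently assigned to an "AI-allowed" or
# "AI-disallowed" condition, so task difficulty, developer skill,
# and codebase quirks average out across the two groups.
random.seed(0)  # fixed seed for reproducibility of this sketch

issues = [f"issue-{i}" for i in range(246)]  # the study covered 246 issues
assignment = {
    issue: random.choice(["ai_allowed", "ai_disallowed"])
    for issue in issues
}

# The treatment effect is then estimated as the difference in mean
# completion time between the two conditions across all issues.
n_ai = sum(1 for cond in assignment.values() if cond == "ai_allowed")
print(f"AI-allowed: {n_ai}, AI-disallowed: {len(issues) - n_ai}")
```

Because assignment happens per issue rather than per developer, every developer contributes data to both conditions, which is what lets the study isolate the effect of the tool itself.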
Why the Perception Gap Exists
Developers predicted they would be 24% faster with AI. After completing tasks 19% slower, they still believed they had been 20% faster. The 43-percentage-point gap between predicted and actual performance is the study’s most alarming finding.
The mechanism: AI generates code quickly, creating a subjective feeling of speed. But the total workflow — prompting, reviewing suggestions, debugging AI errors, re-prompting after failures, and integrating AI output into existing code — takes longer than the developer writing the code directly. The AI produces the illusion of velocity while adding friction to the overall process.
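The accounting can be made explicit with hypothetical numbers (these minutes are illustrative only, not measurements from the METR study):

```python
# Hypothetical time budget for one task, in minutes.
direct_minutes = 60  # developer writes the code directly

# The AI-assisted workflow, broken into its stages. Only the
# generation step is fast; it is also the only step the developer
# subjectively experiences as "the AI doing the work".
ai_workflow = {
    "prompting": 8,
    "waiting_for_generation": 2,   # the part that feels like speed
    "reviewing_suggestions": 20,
    "debugging_ai_errors": 25,
    "re_prompting_after_failures": 6,
    "integrating_into_codebase": 10,
}
ai_minutes = sum(ai_workflow.values())

slowdown = (ai_minutes - direct_minutes) / direct_minutes
print(f"AI-assisted total: {ai_minutes} min ({slowdown:.0%} slower)")
```

The point of the sketch is that a two-minute generation step can anchor the developer's perception of the whole task, even when the surrounding review and debugging stages push the total past the direct-coding baseline.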
Who Actually Benefits
The study’s limitation is its sample: experienced developers on familiar codebases. Newer developers, developers learning unfamiliar technologies, and developers working on greenfield projects likely see genuine gains. The distinction is critical: vibe coding accelerates exploration but slows execution for developers who already know what to write.
Senior developers with 10+ years of experience report the highest satisfaction with AI tools, with 81% reporting productivity gains. But this may reflect AI handling tedious boilerplate that seniors can evaluate quickly, not a contradiction of METR’s findings. The question is whether the time saved on boilerplate exceeds the time lost on AI-generated bugs and review cycles. For experienced developers on mature codebases, METR says it doesn’t.
