ANALYSIS

Software Engineer Argues AI Coding Agents Should Be Banned from Production Code

megaone_admin · Mar 28, 2026 · 2 min read
Engine Score 7/10 — Important

This article provides valuable, actionable insights into the practical limitations and challenges of AI coding agents, impacting a broad segment of the tech industry. The critical analysis offers important guidance for developers and companies navigating the adoption of these tools.


Software engineer Joel Andrews has published a detailed critique arguing that LLM-based AI coding agents should never be used to generate production code, citing four critical concerns in a blog post published March 26, 2026.

Andrews, who has been following generative AI development for several years and experimented with large language models in proof-of-concept applications, states definitively that “LLM-based AI coding agents have no place now, or ever, in generating production code for any software I build professionally.”

The critique comes as AI coding agents gain adoption across major companies including Notion, Spotify, and Stripe. These tools combine large language models with feedback loops to generate code, with proponents arguing they can work “faster and cheaper” than human developers.

Andrews identifies four primary issues: skill atrophy, artificially low cost, prompt injections, and copyright/licensing concerns. On skill atrophy, he argues that software engineers relegated to reviewing AI-generated code will “become rusty over time” and “gradually lose the ability to tell a good change from a bad one” because they no longer actively write code themselves. He describes the emerging role as becoming “a sort of software engineering manager” who supervises AI agents rather than writing code directly.

The post acknowledges that AI coding agents are “absolutely” powerful and that “anyone who has been paying attention and who is being honest with themselves can see that plainly.” However, Andrews maintains his position against their use in production environments while noting that LLMs have utility in other software engineering contexts.

Andrews plans to elaborate on the remaining three concerns, artificially low cost, prompt injections, and copyright issues, in future posts, suggesting this piece is the first part of a broader critique of AI coding tools in professional software development.

MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
