In a blog post published March 26, 2026, software engineer Joel Andrews argues that LLM-based AI coding agents should never be used to generate production code, citing four critical concerns.
Andrews, who has been following generative AI development for several years and experimented with large language models in proof-of-concept applications, states definitively that “LLM-based AI coding agents have no place now, or ever, in generating production code for any software I build professionally.”
The critique comes as AI coding agents gain adoption across major companies including Notion, Spotify, and Stripe. These tools combine large language models with feedback loops to generate code, with proponents arguing they can work “faster and cheaper” than human developers.
Andrews identifies four primary issues: skill atrophy, artificially low cost, prompt injections, and copyright/licensing concerns. On skill atrophy, he argues that software engineers relegated to reviewing AI-generated code will “become rusty over time” and “gradually lose the ability to tell a good change from a bad one” because they no longer actively write code themselves. He describes the emerging role as that of “a sort of software engineering manager” who supervises AI agents rather than writing code directly.
The post acknowledges that AI coding agents are “absolutely” powerful and that “anyone who has been paying attention and who is being honest with themselves can see that plainly.” However, Andrews maintains his position against their use in production environments while noting that LLMs have utility in other software engineering contexts.
Andrews plans to elaborate on the remaining three concerns—artificially low cost, prompt injections, and copyright issues—in subsequent posts, indicating that this piece is the first installment of a broader critique of AI coding tools in professional software development.
