ANALYSIS

Engineer Joel Andrews Makes Case Against AI Coding Agents in Production

E Elena Volkov Mar 28, 2026 Updated Apr 7, 2026 4 min read
Engine Score 7/10 — Important

This article provides valuable, actionable insights into the practical limitations and challenges of AI coding agents, impacting a broad segment of the tech industry. The critical analysis offers important guidance for developers and companies navigating the adoption of these tools.

Editorial illustration for: Software Engineer Argues AI Coding Agents Should Be Banned from Production Code

Software engineer Joel Andrews published a pointed critique on March 26, 2026, arguing that LLM-based AI coding agents have no legitimate role in generating production code. Writing on the standupforme.app blog, Andrews frames the post—estimated at 15 to 22 minutes of reading time—as a deliberate verdict reached after years of observing and personally experimenting with generative AI systems.

  • Andrews states definitively: “LLM-based AI coding agents have no place now, or ever, in generating production code for any software I build professionally.”
  • He identifies four core objections: skill atrophy, artificially low cost, prompt injections, and copyright and licensing concerns.
  • Major companies including Notion, Spotify, and Stripe are cited as organizations that have publicly embraced AI coding tools.
  • Andrews acknowledges AI coding agents are “absolutely” powerful but argues capability alone does not justify their use in professional production environments.

What Happened

Joel Andrews, a software engineer writing for the standupforme.app blog, published “Some Uncomfortable Truths About AI Coding Agents” on March 26, 2026. The post calls for a personal blanket ban on LLM-based AI coding agents in professional production code—a position Andrews describes as one he arrived at after extensive hands-on experimentation and deliberate reflection, not a reflexive reaction.

Andrews traces his engagement with the technology to the emergence of OpenAI’s early models, which he says were built on “a relatively niche deep learning research paper from Google and a bit of reinforcement learning from human feedback.” He developed proof-of-concept applications using large language models before arriving at his current position, a background he uses to establish that his critique comes from engagement with the tools, not avoidance of them.

Why It Matters

The post enters an active industry debate at a moment when AI coding agents are being adopted at scale. Andrews names Notion, Spotify, and Stripe as established, well-regarded companies that appear “fully onboard” with the tools, citing the argument circulating in the industry that AI agents can complete coding work “faster and cheaper” than human developers.

Entire companies are being founded with AI coding agents as their core product offering. Andrews positions his critique as a counterweight to that momentum, arguing that the speed and cost advantages being cited do not address the structural risks these tools introduce into professional software development.

Technical Details

Andrews identifies four specific objections to using LLM-based AI coding agents in production: skill atrophy among human engineers, artificially suppressed cost estimates, exposure to prompt injection attacks, and unresolved copyright and licensing liability. The post elaborates most extensively on the first of these in the section available at time of publication.

On skill atrophy, Andrews challenges the framing that senior engineers can sustainably serve as reviewers of AI-generated code. “The software engineers that have been relegated to code review duty will become rusty over time,” he writes. “Their coding and software design skills will atrophy and they will become worse software engineers as a result.”

He argues that the emerging role—described by some in the industry as a “software engineering manager” who supervises AI agents rather than writing code—is structurally self-defeating. Even engineers who begin with rigorous review habits will, he contends, “gradually lose the ability to tell a good change from a bad one” once active coding is removed from their daily work.

The remaining three concerns—artificially low cost, prompt injections, and copyright and licensing—are named explicitly in the post but were not yet elaborated in the portion of the article published as of March 26. Andrews indicates the post is structured as an ongoing series addressing each issue in sequence.

Who’s Affected

The argument is directed at professional software engineers working at organizations that are evaluating or already using AI coding agents for production systems. Andrews specifically names Notion, Spotify, and Stripe as companies whose public embrace of AI-assisted development makes them directly relevant to his critique.

Developers and engineering teams building on LLM-based coding tools are also the implied subjects of the concerns he flags around prompt injection vulnerabilities and copyright exposure—risks that carry legal and security implications for any organization shipping AI-generated code at scale.

What’s Next

Andrews signals that the March 26 post is the opening installment of a longer critique. The three remaining objections—artificially low cost, prompt injections, and copyright and licensing—are each expected to receive dedicated coverage in subsequent posts on the standupforme.app blog.

He draws a distinction between AI coding agents and LLMs used in other software engineering contexts, noting he plans to address where LLMs do offer legitimate utility. His current published position, however, is categorical on production code: the capability of these tools does not make them appropriate for professional use.

Share

Enjoyed this story?

Get articles like this delivered daily. The Engine Room — free AI intelligence newsletter.

Join 500+ AI professionals · No spam · Unsubscribe anytime