The New York Times Just Fired a Writer Because His AI Tool Secretly Plagiarized The Guardian — 3 Journalists Down in 2026

Zara Mitchell · Apr 6, 2026 · 7 min read
Engine Score 7/10 — Important

The New York Times (NYSE: NYT) dropped freelance contributor Alex Preston on April 5, 2026, after his AI writing tool silently copied passages from a Guardian book review — plagiarizing a direct competitor without Preston’s knowledge or consent, as reported by The Decoder. It is the third confirmed journalist termination or suspension linked to AI tool failures in 2026 alone.

Preston told reporters he was “hugely embarrassed” and said he had no idea the tool was conducting web searches and lifting text from existing published work. That statement is not a defense. It is a diagnosis of the actual problem: writers are deploying AI tools they do not understand, and the consequences have now reached editorial severance.

The New York Times Case: An AI Tool That Searched Without Asking

Preston, a freelance book critic, submitted a review to the New York Times containing passages silently pulled from a Guardian review of the same novel. His AI writing tool — which he had not configured to perform web searches — did so anyway, retrieving and incorporating existing critical text without flagging it as sourced material.

The Guardian being the plagiarized source adds a specific dimension. The NYT and Guardian compete directly for international English-language readership, cover the same books, and operate in overlapping critical ecosystems. The AI did not copy from an obscure blog. It lifted from a named competitor’s original work and passed it off as original criticism submitted under Preston’s byline.

Many AI writing products now ship with web-search capabilities enabled by default, embedded in settings menus that most users never open. Product updates frequently activate new retrieval features silently. The user experience is: prompt in, text out. The underlying process — including what external sources the tool accessed — is opaque by design. Preston’s tool did exactly what it was built to do. Preston did not know what it was built to do.

The Ars Technica Case: Claude Code and Fabricated Quotes

Before the Preston story broke, Ars Technica terminated a reporter whose AI-assisted workflow — built on a Claude Code-based tool — generated fabricated quotes that appeared in published articles. Unlike the NYT case, which involved reproducing real text from an external source, this failure mode ran in the opposite direction: the model invented statements and attributed them to real people.

Fabricated quotes represent a categorically different editorial risk than plagiarism. Plagiarism reproduces existing work and harms the original author. Hallucinated quotes create new false records and harm the named source — their reputation, legal standing, and professional relationships. Both failures are disqualifying. Their mechanisms could not be more different.

Claude Code is a developer-facing tool, not a journalism product. Its integration into a reporter’s live workflow suggests a broader pattern: journalists building custom AI pipelines using tools designed for software engineers, without the editorial safety constraints that purpose-built journalism assistants are beginning to include.

The Mediahuis Case: 15 Articles, Fake Quotes, One Suspension

In a third incident, a journalist employed by Mediahuis — one of Europe’s largest media groups, with titles including De Standaard, Het Nieuwsblad, and the Irish Independent — was suspended after editors discovered AI-generated fake quotes had been inserted across 15 published articles. The scale distinguishes this case from the others. Fifteen contaminated pieces represent an extended period of systematic failure, not a one-time oversight.

Mediahuis has not disclosed which AI tool was involved or whether the journalist knowingly passed off generated quotes as real. That distinction matters legally. Editorially, it matters less: 15 articles require review, potential corrections, and retractions, alongside damaged relationships with every source who was falsely quoted in print.

De Standaard, Het Nieuwsblad, and the Irish Independent each carry institutional reputations built on sourcing credibility. AI-generated quotes threaten that credibility not just in the contaminated articles, but retrospectively — every reader who encounters a correction will now apply skepticism to adjacent coverage.

Three Failures, Three Distinct AI Failure Modes

These incidents are not variations on a single problem. They represent three separate technical failure modes that AI writing tools exhibit:

  • Silent web retrieval (Preston/NYT): The tool accessed external content without the user’s knowledge and reproduced it verbatim. The journalist did not know search was enabled.
  • Quote hallucination (Ars Technica): The model generated plausible attributed statements with no factual basis. The journalist appears to have assumed the tool was summarizing real source material.
  • Systemic insertion (Mediahuis): Fake quotes were embedded across 15 articles over an extended period, indicating either a persistent workflow error or deliberate misuse — a distinction that investigators will need to establish.

Each failure mode demands a different organizational response. Silent retrieval demands tool capability disclosure and mandatory settings audits before deployment. Hallucinated quotes require verification checkpoints before any attributed statement publishes. Systemic insertion requires editorial oversight at the workflow level — not just the article level — with audit trails that can detect patterns before 15 articles have already been contaminated.
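What a pre-deployment capability audit might look like in practice is easiest to see as a sketch. The structure below is illustrative only: the field names (web_search, retrieval_augmented, external_api_calls) and the helper are assumptions for the sake of the example, not any vendor's actual settings schema or any newsroom's actual process.

```python
from dataclasses import dataclass

@dataclass
class ToolCapabilityRecord:
    """Illustrative pre-deployment disclosure record for an AI writing tool.

    Field names are hypothetical; real tools expose (or hide) these
    capabilities under vendor-specific settings.
    """
    tool_name: str
    version: str
    web_search: bool           # can the tool retrieve live web content?
    retrieval_augmented: bool  # does it inject external documents into drafts?
    external_api_calls: bool   # does it call third-party services at draft time?
    audited_by: str
    audit_date: str

def deployment_blockers(record: ToolCapabilityRecord) -> list[str]:
    """Return the capabilities that need explicit editorial sign-off
    before the tool enters a live workflow."""
    blockers = []
    if record.web_search:
        blockers.append("web search enabled: drafts may reproduce published text")
    if record.retrieval_augmented:
        blockers.append("retrieval enabled: external sources enter drafts unflagged")
    if record.external_api_calls:
        blockers.append("external API calls: draft content leaves the newsroom")
    return blockers

# Hypothetical example: a configuration of the kind Preston says he never saw.
record = ToolCapabilityRecord(
    tool_name="generic-writing-assistant", version="2026.3",
    web_search=True, retrieval_augmented=True, external_api_calls=True,
    audited_by="standards desk", audit_date="2026-04-01",
)
for issue in deployment_blockers(record):
    print(issue)
```

The point of the record is not the code itself but the forcing function: if no one can fill in those three booleans for a tool, that tool has no business in a live editorial workflow.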

Why Journalists Don’t Know What Their AI Tools Actually Do

Preston’s statement — that he did not know his tool searched the web — is the most consequential disclosure in the entire episode. It is not an anomaly. A significant share of working journalists using AI assistants cannot accurately describe the technical capabilities of the tools they use daily.

This is partly a product design failure. AI writing tools are marketed on output quality, not technical transparency. Capability disclosures are buried in documentation that users rarely read, and web search, retrieval-augmented generation, and external API calls are frequently switched on in product updates with no user notification. The result is the opacity Preston encountered: prompt in, text out, with the retrieval process invisible.

It is also a newsroom training failure. Outlets that have adopted AI tools have largely done so without structured curricula covering model behavior, hallucination risks, or sourcing mechanics. The Humans First movement has pushed back against uncritical AI adoption in creative and journalistic fields, but journalism’s specific failure modes — false attribution, plagiarism, sourcing fabrication — require domain-specific education that general AI literacy programs have not developed. Knowing that AI can hallucinate is not the same as knowing that your specific tool, in its current configuration, is actively searching the web while you write.

The 2026 Crisis in AI-Assisted Journalism

Three journalist terminations or suspensions linked to AI tool failures in the first quarter of 2026 is not a statistical anomaly. It is the leading edge of a systemic problem that adoption curves made inevitable. AI writing assistance in newsrooms has accelerated sharply since 2024, while editorial governance frameworks have lagged by at least 18 months.

The industry is now paying the cost of that gap. Every confirmed AI-related editorial failure increases the verification burden on editors reviewing AI-assisted work — and compounds the reputational cost to outlets that publish it without adequate oversight. The New York Times, Ars Technica, and Mediahuis are not small or careless organizations. Their failures are not arguments that AI tools are uniquely dangerous. They are arguments that deployment without institutional understanding is.

MegaOne AI tracks 139+ AI tools across 17 categories. The writing and research category is among the fastest-growing in 2026, with web-search-enabled tools now representing the majority of new product launches in the segment. The gap between what these tools can do and what their users believe they can do is widening, not closing, and every gain in capability widens it further. As the pattern of AI integration across every content sector demonstrates, capability outpacing comprehension is not unique to journalism. The stakes in journalism are simply higher.

A tool with live web access can plagiarize. A tool trained on interview transcripts can hallucinate quotes. A tool that drafts full articles can produce content that reads as reported but contains nothing independently verifiable. The journalistic problem is not that AI is bad at writing. It is that AI is very good at writing things that are not true.

What Newsrooms Must Do Before the Next Incident

All three cases share one organizational failure: no one audited the tools being used against a defined editorial standard before they were deployed in live workflows. Reactive enforcement — terminating the journalist after publication — protects the institution. It does not protect the reader, the source, or the outlet’s long-term credibility.

Three requirements should apply before any AI writing tool enters a journalism workflow:

  1. Capability disclosure: Every tool must be formally documented before deployment — does it search the web, use retrieval-augmented generation, call external APIs? Writers must know this before using the tool, not after an incident surfaces it.
  2. Quote verification protocol: No attributed statement processed or generated by an AI tool should publish without direct confirmation from the named source or a verifiable transcript. No exceptions for deadline pressure.
  3. Plagiarism scanning: AI-assisted drafts should pass standard plagiarism detection before submission — the same threshold applied to student work at any accredited academic institution. If it is rigorous enough for undergraduate essays, it is the minimum for published journalism. (A minimal sketch of this kind of check follows this list.)
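At its crudest, the third check is just a search for long runs of words shared verbatim between a draft and a candidate source. The sketch below shows that minimal version, assuming the comparison text has already been retrieved; the 8-word threshold and the file names are arbitrary choices for illustration, and a production detector would normalize text, handle paraphrase, and compare against large corpora rather than a single document.

```python
def shared_ngrams(draft: str, source: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return word n-grams that appear verbatim in both texts.

    A crude stand-in for plagiarism screening: long shared word runs
    (n=8 here, an arbitrary threshold) are a strong signal of copied text.
    """
    def ngrams(text: str) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(draft) & ngrams(source)

# Usage sketch with hypothetical file names: flag a draft if it shares
# any 8-word run with a previously published review.
draft = open("draft_review.txt").read()
published = open("published_review.txt").read()
overlap = shared_ngrams(draft, published)
if overlap:
    print(f"{len(overlap)} verbatim 8-word runs shared with published source")
```

A check this simple would have caught verbatim lifting of the kind described in the Preston case; it would do nothing against hallucinated quotes, which is why the verification protocol in the second requirement has to exist separately.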

These are not extraordinary standards. They are the floor. The NYT, Ars Technica, and Mediahuis all had existing editorial standards. None appear to have had standards specifically governing AI tool behavior before these failures occurred. The fourth case in 2026 will almost certainly happen at a newsroom that still does not have them.

