ANALYSIS

New York Times Drops Freelancer After AI Tool Copied Existing Book Review

Anika Patel · Apr 6, 2026 · 3 min read
Engine Score 5/10 — Notable
  • The New York Times cut ties with freelance writer Alex Preston after a reader found his AI-assisted book review contained passages copied nearly verbatim from a Guardian review by Christobel Kent.
  • Preston had been reviewing Jean-Baptiste Andrea’s novel “Watching Over Her” and used an AI tool that retrieved and reproduced text from Kent’s earlier review, apparently without his knowledge.
  • Preston told the Guardian he was “hugely embarrassed” and had “made a serious mistake.”
  • A parallel incident at Ars Technica involved ChatGPT fabricating quotes attributed to a developer’s blog the model could not access, illustrating two distinct AI failure modes in journalism.

What Happened

The New York Times severed its relationship with freelance writer Alex Preston after a reader identified substantial overlap between a book review Preston submitted and an existing review published in The Guardian. Preston had been assigned French author Jean-Baptiste Andrea’s novel “Watching Over Her” and used an AI tool in the writing process. That tool retrieved and incorporated text from a prior review of the same novel by Guardian critic Christobel Kent, apparently without Preston’s awareness.

Why It Matters

The incident exposes a specific failure mode in AI writing tools: retrieval-based systems that fetch live web content and embed source text in their output without clearly communicating that behavior to the user. This is distinct from hallucination, the failure mode more commonly discussed in journalism coverage of generative AI. A separate case at Ars Technica illustrates the contrast: an editor published quotes attributed to a developer’s blog that ChatGPT had fabricated entirely after the model was blocked from accessing the site. Rather than flagging that it could not read the page, the model generated plausible-sounding attributions from the prompt and URL alone.
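
For illustration, here is a minimal sketch of how a naive fetch-and-prompt pipeline can produce the retrieval failure mode. The reporting does not name the tool Preston used, so every function and class here is hypothetical; the point is only that fetched page text enters the model’s context verbatim, where nothing stops it from resurfacing word-for-word.

```python
# Hypothetical sketch of a retrieval-augmented writing assistant.
# Nothing here is the actual tool from either incident; it only
# illustrates how fetched web text ends up inside the model prompt.
import requests
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Accumulates the visible text of a fetched HTML page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def build_prompt(task: str, source_urls: list[str]) -> str:
    """Fetch live pages and splice their text into the prompt.

    The retrieved text enters the context verbatim, so the model can
    reproduce it nearly word-for-word in its output. Nothing in this
    pipeline tells the user that a passage originated elsewhere.
    """
    parts = []
    for url in source_urls:
        parser = VisibleText()
        parser.feed(requests.get(url, timeout=10).text)
        parts.append(" ".join(parser.chunks))
    return "Context:\n" + "\n\n".join(parts) + f"\n\nTask: {task}"
```

A tool built this way matches Preston’s account exactly: it presents itself to the user as a compositional assistant while quietly operating as a retrieval system.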

Technical Details

The Preston case involved near-identical sentence-level copying, consistent with direct text retrieval rather than generative synthesis from training data. The probable mechanism is retrieval-augmented generation (RAG) or live web scraping, in which the tool fetches current web pages and incorporates their text into its output. Preston stated he did not understand that his tool operated this way, treating it as a compositional writing assistant rather than a retrieval system. The Ars Technica incident involved a categorically different mechanism: hallucination triggered when ChatGPT received a URL it could not crawl, producing fabricated quotes rather than returning an error or acknowledging inaccessibility.
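
Near-verbatim copying of this kind is also easy to surface mechanically. The reporting does not say how the reader spotted the overlap; as a sketch, a longest-common-run check over word sequences, here using Python’s standard-library difflib, flags long shared passages that rarely occur by chance:

```python
# Sketch: flag near-verbatim overlap between a submitted review and a
# published one. The threshold is illustrative, not a known editorial
# standard at either publication.
from difflib import SequenceMatcher

def longest_shared_run(submitted: str, published: str) -> list[str]:
    """Return the longest contiguous run of words common to both texts."""
    a, b = submitted.split(), published.split()
    m = SequenceMatcher(a=a, b=b, autojunk=False).find_longest_match(
        0, len(a), 0, len(b))
    return a[m.a : m.a + m.size]

if __name__ == "__main__":
    # Invented example sentences, not quotes from either review.
    submitted = "a luminous meditation on devotion, grief and stone"
    published = "a luminous meditation on devotion, grief and memory"
    run = longest_shared_run(submitted, published)
    if len(run) >= 6:  # six or more consecutive shared words is suspicious
        print(f"possible copying ({len(run)} words): {' '.join(run)}")
```

A check like this distinguishes the two failure modes cleanly: retrieval-driven copying produces long shared runs against a real source, while hallucinated quotes of the Ars Technica kind match nothing at all.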

Who’s Affected

Freelance journalists and staff editors who use AI writing tools without understanding whether those tools perform live web retrieval face direct exposure to similar copying incidents. Christobel Kent, whose original critical work was reproduced without attribution or compensation, is among those materially affected. Publications that accept AI-assisted contributions without disclosure or tool-vetting requirements face reputational and potential legal risk if copied text reaches print.

What’s Next

The Times has not publicly announced changes to its freelance submission policies or AI tool disclosure requirements in the wake of the Preston incident. The case is likely to reinforce calls at other major publications for formal policies requiring contributors to identify AI tools used and disclose whether those tools perform web retrieval. The Ars Technica incident was handled separately, and neither publication had announced systemic editorial process changes as of April 2026.

