European publishing group Mediahuis has suspended Peter Vandermeersch, one of its most senior editorial figures, after an internal investigation revealed he published dozens of fabricated quotes generated by AI tools including ChatGPT, Perplexity, and Google’s NotebookLM. The suspension, effective March 20, 2026, follows reporting by NRC, a Mediahuis-owned Dutch newspaper where Vandermeersch once served as editor-in-chief.
Vandermeersch, who held the title of “fellow of journalism and society” at Mediahuis and previously led the company’s Irish operations from 2022 to 2025, admitted in a public Substack post that he had used AI to summarize reports and failed to verify whether the resulting quotes were accurate before publishing them. At least seven individuals quoted in his newsletter confirmed they never made the statements attributed to them. Mediahuis has removed eight of his articles from the Irish Independent website.
The case illustrates a specific failure mode in AI-assisted journalism: large language models generate plausible-sounding quotes during summarization that do not correspond to anything the cited person actually said. Vandermeersch described the problem in precise terms, noting he “fell into the trap of hallucinations” and “wrongly put words into people’s mouths.” He acknowledged the irony, stating this was “precisely the mistake I have repeatedly warned colleagues about.”
Mediahuis CEO Gert Ysebaert responded by reaffirming the company’s AI usage policies, which require “diligence, human oversight and transparency.” The publisher operates titles including De Telegraaf, the Irish Independent, and NRC across multiple European markets. The incident arrives as newsrooms globally are integrating generative AI into editorial workflows, with most major outlets now permitting some form of AI-assisted research or drafting.
The suspension carries broader implications for media organizations establishing AI governance frameworks. Unlike deliberate fabrication, Vandermeersch’s errors stemmed from over-reliance on AI output without verification — a pattern researchers have termed “cognitive surrender.” His newsletter, Press and Democracy, regularly covered the intersection of press freedom and democratic accountability, making the failure particularly notable.
Mediahuis has not disclosed whether additional disciplinary action will follow. The company’s response — investigating through its own publication and acting publicly — suggests an attempt to demonstrate accountability. For other newsrooms, the case offers a concrete data point: AI summarization tools will fabricate quotes, and editorial processes must include mandatory verification of any attributed statement generated through these systems.
