Wikipedia enacted a formal ban on the use of large language models for generating or rewriting encyclopedia content on March 27, 2026, following a vote among the site's community of volunteer editors, according to reporting by 404 Media. The policy change, first reported by Oliver Milman for The Guardian, applies to the English-language edition of the encyclopedia, the largest version of the platform, with more than 7.1 million articles.
The policy is direct in its framing, stating that the use of LLMs in content creation "often violates" Wikipedia's core editorial principles. The decision resolves a period of internal debate within the editor community over whether AI-assisted writing can meet the platform's standards for accuracy and source attribution.
- Wikipedia’s English-language encyclopedia — more than 7.1 million articles — now prohibits AI-generated and AI-rewritten content
- A vote among the platform’s volunteer editor community supported the ban, per 404 Media
- Two narrow exceptions apply: AI may assist with translations and suggest minor copy edits, both subject to mandatory human review
- Founder Jimmy Wales previously described current AI outputs as a “mess” and told the BBC current models are “nowhere near good enough” by Wikipedia’s standards
What Happened
Wikipedia published a formal policy on March 27, 2026, barring editors from using large language models either to generate new content or to rewrite existing articles across its English-language encyclopedia. The Guardian reported the change the same day, citing the policy text directly.
The policy change followed a vote among Wikipedia’s volunteer editor community — the decentralised base of contributors who collectively maintain the platform. Wikipedia’s governance model requires community consensus for policy decisions of this kind, and the editor vote provided that consensus. The specific vote margin was not disclosed in available reporting.
Why It Matters
Wikipedia has operated as one of the internet’s primary reference sources since 2001, built on a model in which human editors are accountable for sourced, verifiable claims. The proliferation of AI writing tools has put pressure on that model, particularly given the documented tendency of LLMs to produce fluent but factually inaccurate text.
The competitive pressure on Wikipedia is measurable: ChatGPT reportedly surpassed Wikipedia in monthly website visits in 2025, according to data referenced in The Guardian’s reporting. Tech companies have also embedded AI into web searches and email composition tools, accelerating the displacement of traditional reference sources in everyday information use. Wikipedia founder Jimmy Wales has characterised current AI outputs as a “mess,” citing hallucinated results as a central concern.
Technical Details
The policy establishes a precise operational boundary between prohibited and permitted uses. Editors may use LLMs to “suggest basic copyedits to their own writing” and may incorporate some of those suggestions “after human review, provided the LLM does not introduce content of its own,” the policy states. AI assistance with translations falls within the same framework of permitted use under human oversight.
The prohibition on content generation addresses a specific and identified risk in LLM behaviour. “Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited,” the policy warns. This concern — where a model subtly alters factual claims while performing an editing task — is distinct from outright fabrication but presents a comparable threat to source integrity.
Wikipedia’s sourcing standards require all factual claims to be verifiable through cited, reliable external sources. Content generated or modified by an LLM fails that standard structurally, since the models produce text derived from training data rather than retrievable, attributable sources.
Who’s Affected
The policy’s immediate effect falls on Wikipedia’s community of active volunteer editors, who maintain the English-language encyclopedia’s more than 7.1 million articles. Editors who had been using AI tools to draft new entries or expand existing ones are now required to produce that content through conventional research and writing.
Developers building editing tools for Wikipedia’s workflow — including browser extensions, third-party editing interfaces, and automated bots — will need to verify that any AI-assisted features do not generate or rewrite article text. Any tool that uses an LLM to produce encyclopedic content falls within the ban’s stated scope.
What’s Next
Jimmy Wales indicated in a prior BBC interview that the prohibition is not necessarily a permanent position. “I wouldn’t say absolutely never, but at least not in the short run,” he said. “The latest models are still, from a Wikipedian standpoint, nowhere near good enough.” He acknowledged that AI could assist with some aspects of Wikipedia’s operation without specifying which.
The policy as published does not include a formal review timeline or a mechanism for reassessment. Any future revision to the ban would, consistent with Wikipedia’s governance model, likely require another vote among the volunteer editor community. Whether improved AI models would be sufficient to prompt such a reconsideration remains an open question within the platform’s editorial framework.