- Wikipedia editors approved new guidelines restricting LLM use in article writing, with 44 votes in favor and 2 opposed, closing on March 20, 2026 under the “SNOW” rule for overwhelming consensus.
- The guidelines ban AI-generated article content but permit two narrow exceptions: basic copyediting of an editor’s own writing and first-pass translation of content from other language editions of Wikipedia.
- The proposal by editor Chaotic Enby replaced earlier policy text that had failed to gain traction because addressing LLM use comprehensively proved too complex.
- Several supporters view the guidelines as a stepping stone toward a complete LLM ban on Wikipedia, though that outcome would require further community deliberation.
What Happened
Wikipedia editors overwhelmingly approved new guidelines restricting the use of large language models in article writing, following a request for comment (RfC) that concluded on March 20, 2026. The proposal by editor Chaotic Enby passed with 44 votes in favor and only 2 opposed, prompting early closure under Wikipedia’s “SNOW” rule, which applies when consensus is overwhelming.
The approved guidelines replace existing policy text at Wikipedia’s “Writing articles with large language models” page. According to the closing statement by editor Knightoftheswords, the amendments “target blatantly problematic issues with LLM use, while still giving leeway for what are seen as decent uses for it, mainly copyediting and translation.”
Why It Matters
Wikipedia is the most widely used reference source on the internet, and its editorial policies set norms that influence how other platforms handle AI-generated content. The decision reflects a growing recognition that LLM output, despite its fluency, systematically violates Wikipedia’s core verification and sourcing requirements. Language models generate plausible-sounding text that may contain fabricated citations, hallucinated facts, or subtle distortions of meaning that are difficult for other editors to detect.
The concern extends beyond Wikipedia itself. LLM-generated content that enters the encyclopedia gets scraped by AI companies and incorporated into future model training data, creating a feedback loop where AI-generated errors propagate through the information ecosystem. By restricting LLM use, Wikipedia is attempting to protect the integrity of its content as a training data source.
Technical Details
The guidelines permit two narrow exceptions. Editors can use an LLM to suggest basic copyedits to their own writing, such as grammar and clarity improvements. Editors can also use LLMs to produce a first-pass translation of content from other language editions of Wikipedia. In both cases, the editor must verify the output and confirm that it adds no content beyond what the cited sources support. The policy explicitly warns that LLMs can change the meaning of text beyond what the editor intended.
The ban covers only the English-language Wikipedia. Other language editions operate under their own governance structures and may adopt different policies on LLM use. The guidelines also include provisions to “ward off baseless or malicious accusations of LLM use against editors who may have a writing style akin to many LLMs,” addressing concerns that the policy could be weaponized against legitimate human contributors.
Who’s Affected
Wikipedia editors who have been using LLMs to draft or substantially rewrite articles must stop doing so immediately. The policy change has direct implications for new editors in particular, who may have relied on LLMs to help navigate Wikipedia’s complex formatting and citation requirements. Experienced editors who use LLMs only for grammar corrections or translation can continue doing so under the two permitted exceptions, provided they verify every change against cited sources.
AI companies that train models on Wikipedia content face an indirect consequence: if the policy successfully reduces AI-generated text in the encyclopedia, the training data quality for future models may improve. Wikipedia remains one of the largest and most commonly used datasets in LLM training. OpenAI, Google, Anthropic, and Meta have all used Wikipedia data in their training pipelines, making the encyclopedia’s content integrity directly relevant to model quality across the industry.
The enforcement challenge is significant. Supporters of the guidelines acknowledged during the RfC that “people already lied about LLM usage,” and current detection tools for AI-generated text are unreliable. The policy relies primarily on community vigilance and the existing editorial review process, which may prove insufficient as language models become more difficult to distinguish from human writing.
What’s Next
Several editors who voted in favor described the guidelines as a “good placeholder” and expressed hope to use them “as a stepping stone towards an imagined, total LLM ban on Wikipedia.” Whether the community will pursue a complete ban remains uncertain. As the closing statement noted, that question “will require lots of discourse between fellow Wikipedians.” Prior attempts at comprehensive LLM policy have failed due to the complexity of implementation details, suggesting any expansion of the current restrictions will be incremental rather than immediate. Meanwhile, other language editions will decide independently whether to adopt similar rules.