Bloomberg Law research, cited in The Neuron's April 14, 2026 digest, found that anonymous users on 4chan discovered chain-of-thought reasoning — the technique of prompting AI models to work through problems step by step before producing an answer — while playing AI Dungeon in 2020, more than a year before Google Research published the paper the field credits as the discovery. The users were not running controlled experiments. They were trying to get better dungeon loot.
The finding directly challenges who owns credit for one of the most consequential prompting techniques in modern AI. Google’s 2022 paper “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” by Jason Wei, Xuezhi Wang, Dale Schuurmans, and colleagues is among the most-cited AI papers of the decade. According to Bloomberg Law’s analysis, it may not have been first — it just got published.
What 4chan Users Actually Found While Playing AI Dungeon
AI Dungeon, the text-based RPG built by Latitude and released in 2019 (it moved to GPT-3 in mid-2020), gave mass public access to a large language model before most researchers had comparable access through OpenAI's API waitlist. Players could input any text and receive GPT-3 continuations — which made every session an uncontrolled prompting experiment at scale.
According to the Bloomberg Law research, 4chan users discovered that instructing GPT-3’s in-game AI characters to walk through math and logic problems step by step — narrating the reasoning process before reaching a conclusion — produced dramatically more accurate outputs. Responses to multi-step arithmetic problems that had previously been incoherent became solvable. They documented the technique in forum threads, refined it collaboratively, and shared it across boards.
The mechanism they stumbled onto is precisely what Wei et al. would later formalize: chain-of-thought prompting improved model performance on arithmetic reasoning, commonsense reasoning, and symbolic reasoning tasks by eliciting intermediate steps. The 4chan users didn’t name it. They didn’t submit it to arXiv. They moved on to the next dungeon.
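The pattern described above can be shown as plain string construction. This is a hedged sketch: the exact wording the 4chan users settled on is not documented in the source, so "Let's think step by step" stands in here as a representative chain-of-thought cue, not a quote from the forum threads.

```python
# Two ways to pose the same question to a model: a direct prompt, and a
# chain-of-thought prompt that asks for intermediate reasoning first.
# The cue phrase below is illustrative, not the users' actual wording.

QUESTION = (
    "A party of 4 adventurers splits 250 gold, but the rogue takes "
    "twice a normal share. How much gold does the rogue get?"
)

def direct_prompt(question: str) -> str:
    """Baseline: ask for the answer with no intermediate reasoning."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Chain-of-thought: instruct the model to narrate its reasoning
    step by step before committing to a final answer."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step. Work through each intermediate "
        "step out loud, then state the final answer."
    )

print(direct_prompt(QUESTION))
print(chain_of_thought_prompt(QUESTION))
```

The only difference between the two prompts is the instruction to narrate intermediate steps — which is the entire intervention Wei et al. later measured across arithmetic, commonsense, and symbolic reasoning benchmarks.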
The Google Paper’s Timeline Creates a Direct Conflict
Wei et al. submitted “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” to arXiv on January 28, 2022. The paper was accepted at NeurIPS 2022 and has since accumulated thousands of citations. It is the canonical academic origin of chain-of-thought prompting as a documented, reproducible technique.
The paper does not cite community experimentation, AI Dungeon user behavior, or 4chan forum threads as prior art. In the formal record, the Google Research team is the originating source — not because they suppressed other evidence, but because forum posts don’t exist in the citation graph.
If Bloomberg Law’s dating of the 4chan discovery is accurate, the gap between community discovery and academic publication is at minimum several months, possibly longer. The technique was known, working, and shared before the paper that codified it was written.
The Structural Reason Underground Experimentation Consistently Outpaces Research
This is not the first time informal communities have preceded formal AI research. Jailbreaking techniques, persona injection, few-shot formatting patterns, and temperature manipulation all appeared in Discord threads and Reddit posts before landing in papers. The pattern has a structural explanation.
Academic researchers have compute budgets, IRB requirements, and publication cycles measured in months. Forum users have none of those constraints and collectively run millions of informal experiments per day. The decentralized exploration model generates search paths no directed research program would fund. A researcher designs experiments around hypotheses they already have. A 4chan user trying to optimize a game character’s responses tests everything, including things no hypothesis would predict.
OpenAI’s API access controls in 2020 and 2021 inadvertently amplified this effect. While researchers waited on waitlists, AI Dungeon players had live GPT-3 access through Latitude’s wrapper. The people with the most experimental time on the most capable models were not at universities — they were anonymous users on a message board arguing about video games.
The Attribution Gap Is Structural, Not Accidental
Academic attribution requires citations. Forum posts don’t accumulate them. The result is a one-way valve: techniques that emerge from communities enter the literature only when a researcher formalizes them, at which point the researcher receives credit and the community does not. This is not misconduct — it is an emergent property of how knowledge becomes canonical.
The problem has grown faster than any proposed solution. Between 2020 and 2025, AI-focused communities on Discord, Reddit, and anonymous boards grew by orders of magnitude in both size and technical sophistication. Prompt engineering as a discipline was largely assembled by hobbyists who will never appear in a reference list. The research community’s awareness of this work remains inconsistent.
The push for broader, less controlled AI access often cites exactly this dynamic — that throttled access to powerful models doesn’t prevent discovery, it just prevents credit from flowing to the people who make it.
Bloomberg Law’s Framing: When Forum Posts Become Prior Art
Bloomberg Law’s framing of this as a legal question is significant. As AI-derived techniques become commercially valuable — and chain-of-thought reasoning underpins billions of dollars in capability improvements across coding assistants, math tutors, and structured output systems — the question of who originated a technique stops being purely academic.
Under patent law, prior art includes any public disclosure that predates a patent application, regardless of whether it appeared in a journal. A 4chan thread from 2020 or 2021 with a documented date and a reproducible technique qualifies. The competitive dynamics between AI labs already make priority claims contentious; adding a category of community-derived prior art makes the evidentiary record significantly more complex.
4chan’s archiving infrastructure — both native and third-party — timestamps posts and preserves threads that would otherwise disappear. That infrastructure is now legally relevant in a way it was not five years ago.
What the AI Dungeon Case Reveals About Latent Model Capabilities
There is a deeper implication beyond attribution. GPT-3 was publicly characterized as limited in reasoning capability. The 4chan users, by discovering chain-of-thought prompting, unlocked a reasoning capability that OpenAI’s own evaluations had not documented. The capability existed in the model. The technique to elicit it was found by people trying to win a text-based RPG.
This suggests formal benchmark evaluation is a floor, not a ceiling. Researchers test what their hypotheses lead them to test. Millions of users in an open-ended game test everything else. The gap between what models can do on benchmarks and what they can do under adversarial, creative, or playful prompting conditions may be wide — and the people mapping that gap are mostly not publishing.
MegaOne AI tracks 139+ AI tools across 17 categories; a large share of the highest-rated tools in reasoning, coding assistance, and math tutoring are built directly on chain-of-thought techniques. The originating researchers are cited in every technical paper these products reference. The originating users are not.
The Research Community’s Response — and Its Limits
Some researchers have started treating community forums as informal literature. Labs including Anthropic have hired from communities like LessWrong and Alignment Forum, where informal technical work is treated as substantive. But citing a 4chan thread in a NeurIPS submission is not yet standard practice, and the peer review system has no mechanism for routing credit backward after publication.
The Bloomberg Law research doesn’t resolve this. It creates a documented record of the gap. What happens with that record depends on whether the legal questions it raises — around prior art, novelty claims, and the definition of publication — get tested in litigation or regulatory proceedings.
The more actionable implication is forward-looking: any lab making novelty claims about prompting or fine-tuning techniques should now treat community archive searches as mandatory due diligence alongside academic database searches. The technique that will define the next capability jump may already be in a Reddit comment from 2023. The question is whether the research community finds it there, or finds it again in two years and calls it a discovery.
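A minimal sketch of what that due-diligence pass might look like, under stated assumptions: the post records, their schema, and the phrase list are all hypothetical stand-ins for illustration — real archives (4chan archivers, Reddit dumps) each have their own formats and APIs.

```python
# Hedged sketch: filter exported community-archive posts for target
# phrases appearing before a cutoff date (e.g., a paper's arXiv
# submission date or a patent's priority date). The posts below are
# invented examples, not real archived threads.
from datetime import date

posts = [
    {"date": date(2020, 8, 3),
     "text": "tell the dungeon master to explain its reasoning step "
             "by step, it stops botching the math"},
    {"date": date(2023, 5, 1),
     "text": "anyone else notice low temperature makes loot tables "
             "repetitive?"},
]

def prior_art_hits(posts, phrases, cutoff):
    """Return posts that mention any target phrase and predate the cutoff."""
    return [
        p for p in posts
        if p["date"] < cutoff
        and any(phrase in p["text"].lower() for phrase in phrases)
    ]

# Cutoff matches the Wei et al. arXiv submission date cited above.
hits = prior_art_hits(posts, ["step by step", "chain of thought"],
                      date(2022, 1, 28))
print(len(hits))  # → 1
```

The point is not the code but the workflow: the same keyword-and-date query a lab would run against Google Scholar can be run against timestamped forum archives, and a single dated hit changes a novelty claim.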
Related Reading
- Anthropic’s CPO Just Quit Figma’s Board — The Same Week Anthropic Launched a Tool That Competes With Figma
- Anthropic’s Ballard Partners Hire Signals a Pentagon Peace Treaty
- OpenAI Memo Claims Anthropic Inflated Revenue by $8 Billion
- Anthropic Signs Multi-Gigawatt TPU Deal with Google and Broadcom, Targets 2027 Deployment