ANALYSIS

The Network of Researchers Warning AI Could End Humanity Has Grown Into a Major Force

Elena Volkov · Apr 19, 2026 · 4 min read
Engine Score 7/10 — Important
  • The Washington Post investigated the expanding community of researchers, nonprofits, and funders who argue advanced AI could pose extinction-level risks to humanity.
  • In May 2023, the Center for AI Safety gathered over 350 signatures—including from Turing Award winners Geoffrey Hinton and Yoshua Bengio—on a statement equating AI extinction risk with pandemics and nuclear war.
  • Organizations including the Machine Intelligence Research Institute, Anthropic, and the Center for AI Safety have collectively attracted hundreds of millions of dollars in philanthropic funding to work on AI alignment.
  • Critics within AI research argue the existential risk framing has drawn resources and policymaker attention away from documented near-term harms including bias, misinformation, and labor displacement.

What Happened

The Washington Post published a feature investigation into the AI existential risk movement, tracing the growth of a community that contends sufficiently advanced artificial intelligence could kill or permanently subjugate humanity. The investigation profiles the researchers, institutions, and philanthropic networks that have transformed what was once a fringe academic concern into a movement with staffed organizations, congressional testimony, and influence over frontier AI lab safety programs.

Why It Matters

The concern that AI systems could develop goals misaligned with human interests was articulated as far back as 2000, when Eliezer Yudkowsky founded the organization that became the Machine Intelligence Research Institute (MIRI), now based in Berkeley, California. For roughly two decades, the idea remained marginal within mainstream AI research. That changed after 2022, when large language models reached public deployment at scale and several prominent researchers began speaking openly about catastrophic risk scenarios.

The pivot became visible in May 2023, when the Center for AI Safety (CAIS), led by researcher Dan Hendrycks, published a one-sentence public statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” More than 350 people signed, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, Anthropic CEO Dario Amodei, and OpenAI CEO Sam Altman. The statement received widespread news coverage and was cited in subsequent U.S. Senate hearings on AI oversight.

Technical Details

The core technical argument advanced by existential risk researchers centers on the alignment problem: the difficulty of ensuring that AI systems optimizing for specified objectives will remain beneficial as their capabilities scale. Researchers at MIRI and Anthropic’s safety team have argued that gradient-descent-trained neural networks may develop internal objectives that diverge from their training targets in ways that become difficult to detect or correct once systems become highly capable.
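The underlying intuition is a version of Goodhart’s law: the harder a system optimizes a proxy for what its designers intend, the more any gap between proxy and intent gets selected for. The toy sketch below (our illustration, not drawn from the Post’s reporting or any lab’s code) makes the pattern concrete: a “more capable” optimizer searches more candidates for the highest proxy score, and on average its divergence from the intended objective grows with search power.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: every candidate action has an "intended" value (what the
# designers actually want) and a "proxy" score (the specified objective),
# equal to the intended value plus specification error.
n = 100_000
intended = rng.normal(size=n)
proxy = intended + rng.normal(size=n)  # imperfectly specified objective

# A "more capable" optimizer evaluates more candidates before committing
# to the action with the highest proxy score.
for budget in (10, 1_000, 100_000):
    picks = rng.choice(n, size=budget, replace=False)
    best = picks[np.argmax(proxy[picks])]
    print(f"search budget {budget:>7,}: proxy={proxy[best]:+.2f}  "
          f"intended={intended[best]:+.2f}  "
          f"gap={proxy[best] - intended[best]:+.2f}")
```

On average, the gap between the proxy score and the intended value widens as the search budget grows: optimization pressure increasingly selects for the specification error rather than the goal. It is only a statistical caricature, but it captures why these researchers argue that flaws in a specified objective become more consequential, not less, as systems get better at optimizing it.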

Geoffrey Hinton, who spent a decade as a Distinguished Researcher at Google Brain before resigning in May 2023, has publicly estimated a 10 to 50 percent probability that AI systems will cause outcomes harmful to humanity over the coming decades. Hinton’s concern is specific: he has argued that, unlike biological brains, AI models can rapidly copy learned knowledge across millions of instances, making containment practically difficult once a capability threshold is crossed.

In a March 2023 op-ed published in TIME, Yudkowsky stated his position in stark terms: “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” Yudkowsky called not just for a pause but for an indefinite halt to training runs above a specified compute threshold—a position more extreme than that held by most mainstream AI safety researchers, who have argued instead for staged deployment and evaluation frameworks.
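Thresholds of this kind are usually framed in total training compute. As a rough sense of scale, the widely used C ≈ 6·N·D rule of thumb estimates training FLOPs from parameter count N and training-token count D; the sketch below is our back-of-the-envelope illustration, using hypothetical model sizes rather than figures from the article.

```python
# Back-of-the-envelope training-compute estimate using the common
# C ≈ 6 * N * D rule of thumb (N = parameters, D = training tokens).
# All numbers here are illustrative assumptions, not reported figures.
N = 70e9    # hypothetical 70-billion-parameter model
D = 1.4e12  # hypothetical 1.4 trillion training tokens

compute_flops = 6 * N * D
print(f"estimated training compute: {compute_flops:.2e} FLOPs")  # ~5.88e23

# A compute-threshold rule then reduces to a comparison, e.g. against the
# 1e26-operations reporting line that has figured in U.S. policy debate.
THRESHOLD_FLOPS = 1e26
print("above threshold:", compute_flops >= THRESHOLD_FLOPS)
```

Under that heuristic, the hypothetical 70B-parameter run sits well below a 1e26 line; the live policy questions are where such lines should sit and how compliance would be verified.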

Who’s Affected

AI researchers across academia and industry have faced increasing pressure to position themselves relative to the x-risk debate. Labs including Anthropic and Google DeepMind have built dedicated alignment research teams, while OpenAI maintains a safety team that has publicly clashed with company leadership over the pace of deployment. Philanthropic funders associated with the Effective Altruism movement—most prominently Dustin Moskovitz’s Open Philanthropy foundation—have directed substantial grants toward AI safety organizations. The collapse of the FTX Future Fund in late 2022, which had briefly become a major AI safety funder before FTX’s bankruptcy, disrupted grant pipelines for several MIRI-adjacent organizations.

AI ethics researchers who focus on near-term harms—discriminatory hiring tools, generative AI misinformation, surveillance systems—have argued publicly that x-risk framing has concentrated policy attention and philanthropic capital on speculative scenarios at the expense of harms already occurring. That internal debate has surfaced in academic venues including NeurIPS and FAccT, and in open letters signed by researchers on both sides.

What’s Next

Yoshua Bengio, one of the most prominent mainstream AI researchers to align himself with x-risk concerns, has called for international governance mechanisms analogous to nuclear non-proliferation treaties, including mandatory compute reporting and bilateral agreements between the United States and China. Whether governments will move toward binding frameworks—rather than voluntary commitments—remained unresolved as of April 2026. The question has also become a point of contention in ongoing EU AI Act implementation and in U.S. executive branch discussions over export controls on high-performance AI chips.
