REGULATION

OpenAI Backs Illinois Bill to Limit AI Lab Liability for Mass Casualty Events

Priya Sharma · Apr 10, 2026 · 3 min read
Engine Score 8/10 — Important

OpenAI backs an Illinois bill limiting AI liability for mass-casualty incidents — a controversial move

  • OpenAI testified in support of Illinois SB 3444, which would exempt frontier AI developers from civil liability for events causing death or serious injury to 100 or more people, or property damage exceeding $1 billion.
  • The liability shield applies only if a developer did not intentionally or recklessly cause the harm and had published safety, security, and transparency reports.
  • The bill defines frontier models as those trained at a computational cost exceeding $100 million, a threshold that would cover OpenAI, Google, Anthropic, xAI, and Meta.
  • A poll cited by critics found 90 percent of Illinois residents oppose exempting AI companies from liability, and the state legislature has simultaneously introduced bills that would increase developer liability.

What Happened

OpenAI has formally backed Illinois Senate Bill 3444, a measure that would shield frontier AI developers from civil liability for “critical harms”—defined as events causing death or serious injury to at least 100 people, or property damage exceeding $1 billion. Wired reported on OpenAI’s support in April 2026. Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, delivered testimony before Illinois lawmakers, while OpenAI spokesperson Jamie Radice issued a public statement endorsing the bill.

“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” Radice said in an emailed statement. Niedermeyer’s testimony additionally called for a federal AI regulatory framework, arguing the bill could “reinforce a path toward harmonization with federal systems” rather than producing divergent state-level rules.

Why It Matters

Several AI policy experts told Wired that SB 3444 represents a departure from OpenAI’s prior legislative posture, which had been primarily defensive—opposing bills that would expand AI liability, rather than actively promoting liability shields. No U.S. federal law currently establishes whether AI model developers can be held liable for catastrophic events caused by their technology. As more powerful AI systems have been released in recent months, questions about catastrophic liability have received greater legislative attention.

California and New York have moved in a different direction, passing SB 53 and the RAISE Act respectively, which require AI model developers to submit safety and transparency reports but do not exempt them from liability. The Trump administration has issued executive orders and published AI frameworks, but federal legislation has not advanced in Congress.

Technical Details

SB 3444 defines a “frontier model” as any AI system trained at a computational cost exceeding $100 million—a threshold that would apply to America’s largest AI labs. The bill’s “critical harm” category covers mass casualty events, property damage exceeding $1 billion, and AI-enabled creation of chemical, biological, radiological, or nuclear weapons. The exemption also extends to cases where an AI model autonomously engages in conduct that, if committed by a human, would constitute a criminal offense leading to those outcomes. In all cases, liability protection is conditional: the developer must not have acted intentionally or recklessly, and must have published safety, security, and transparency reports on its website.
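For readers who want the conditional structure spelled out, here is a minimal Python sketch of how the bill's exemption logic reads, based solely on the description above. The field and function names are illustrative, not taken from the bill's text, and the thresholds are the figures reported in this article.

```python
from dataclasses import dataclass

# Illustrative sketch of SB 3444's exemption conditions as described in this article.
# All names below are hypothetical; the bill defines these terms in legal language.

FRONTIER_COMPUTE_COST_USD = 100_000_000   # "frontier model" training-cost threshold
CRITICAL_HARM_CASUALTIES = 100            # deaths or serious injuries
CRITICAL_HARM_DAMAGE_USD = 1_000_000_000  # property damage threshold


@dataclass
class Incident:
    casualties: int                 # people killed or seriously injured
    property_damage_usd: float
    cbrn_weapon_enabled: bool       # AI-enabled chemical/bio/radiological/nuclear weapon creation


@dataclass
class Developer:
    training_cost_usd: float
    acted_intentionally_or_recklessly: bool
    published_safety_security_transparency_reports: bool


def is_critical_harm(incident: Incident) -> bool:
    """Does the event fall into the bill's 'critical harm' category?"""
    return (
        incident.casualties >= CRITICAL_HARM_CASUALTIES
        or incident.property_damage_usd > CRITICAL_HARM_DAMAGE_USD
        or incident.cbrn_weapon_enabled
    )


def shield_applies(dev: Developer, incident: Incident) -> bool:
    """The shield covers only frontier-scale developers, only for critical harms,
    and only if the conduct and reporting conditions are both met."""
    return (
        dev.training_cost_usd > FRONTIER_COMPUTE_COST_USD
        and is_critical_harm(incident)
        and not dev.acted_intentionally_or_recklessly
        and dev.published_safety_security_transparency_reports
    )
```

Note that an individual-harm suit, such as the ChatGPT cases mentioned below, would fail the `is_critical_harm` check in this sketch, which mirrors why such cases fall outside the bill's scope.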

Who’s Affected

The bill would most directly benefit large frontier AI developers that meet the $100 million compute threshold, a group that includes OpenAI, Google, xAI, Anthropic, and Meta. OpenAI is separately facing civil litigation from families of children who died by suicide after allegedly developing unhealthy relationships with ChatGPT—individual-harm cases that fall outside SB 3444’s mass-casualty scope and would not be covered by the bill’s liability protections.

What’s Next

Scott Wisor, policy director for the Secure AI Project, told Wired he believes the bill has a slim chance of passage in Illinois. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor said. Illinois passed the Biometric Information Privacy Act in 2008 and in August 2025 became the first state to enact legislation limiting AI use in mental health services. Its legislature has also introduced bills that would increase, rather than reduce, liability for AI model developers, creating competing legislative pressure around SB 3444.

