ANALYSIS

Ex-OpenAI Researcher Jerry Tworek Founds Core Automation Lab

Marcus Rivera · Apr 23, 2026 · 3 min read
Engine Score 7/10 — Important
  • Jerry Tworek, who spent seven years at OpenAI before leaving in January 2026, has launched Core Automation with the stated goal of becoming “the most automated AI lab in the world.”
  • Core Automation says it is developing learning algorithms intended to move beyond pre-training and reinforcement learning, along with model architectures that it claims will scale better than transformers.
  • Tworek stated that deep learning research “is done,” framing current AI paradigms as having reached a practical ceiling.
  • The lab joins Thinking Machines Lab and Safe Superintelligence among a cluster of ventures founded by OpenAI alumni who argue that real AI progress now requires architectural departures from current methods.

What Happened

Jerry Tworek, a former researcher at OpenAI who left the company in January 2026 after seven years, has publicly unveiled Core Automation, a new AI lab with the stated ambition of becoming “the most automated AI lab in the world.” The lab’s initial focus is automating its own internal research processes as a proof-of-concept for its operational model. Tworek cited the impossibility of pursuing this kind of foundational research at OpenAI as the reason for his departure.

Why It Matters

Tworek’s move is part of a sustained exodus of senior OpenAI researchers who have founded independent labs on the premise that large-scale transformer training has hit diminishing returns. He stated plainly that deep learning research “is done” — meaning the current pre-training paradigm has exhausted its frontier potential, a position that remains contested across the field.

Core Automation joins what observers have labeled a cohort of “Neo Labs” founded by OpenAI alumni. Thinking Machines Lab is led by OpenAI’s former chief technology officer; Safe Superintelligence was co-founded by former chief scientist Ilya Sutskever. All three share the explicit premise that meaningful progress now requires moving past existing training methods and architectures, though none has yet published peer-reviewed results to support that claim.

Technical Details

Core Automation says it is building learning algorithms designed to go beyond the two dominant paradigms in current AI systems: large-scale pre-training on text data and reinforcement learning used to align model behavior. The lab is also targeting model architectures it claims will scale more effectively than the transformer — the attention-based design introduced in Google’s 2017 “Attention Is All You Need” paper that underlies essentially every major language model deployed today.
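For readers unfamiliar with the design Core Automation says it aims to surpass, the transformer's core operation is scaled dot-product attention: each token's output is a similarity-weighted average of all tokens' value vectors. The sketch below is purely illustrative (the function name and toy dimensions are assumptions, not anything published by Core Automation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Transformer core: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                            # attention-weighted values

# Toy example: 3 tokens, each a 4-dimensional embedding
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one output vector per input token
```

Because every token attends to every other, the computation grows quadratically with sequence length, which is one commonly cited motivation for seeking architectures that scale differently.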

The team is described as drawing on expertise in frontier models, optimization, and systems engineering. The lab’s stated operational structure relies on small human teams augmented by capable AI agents, intended to replicate the output of much larger organizations. As of April 2026, no technical papers, benchmark results, or model evaluations supporting these claims have been made public; all technical assertions from Core Automation are currently unverified.

Who’s Affected

A demonstrated post-transformer architecture with competitive scaling properties would directly challenge the infrastructure bets made by OpenAI, Google DeepMind, Anthropic, and Meta — all of which have oriented hardware procurement, research pipelines, and deployment infrastructure around transformer-based systems. AI accelerator manufacturers including Nvidia would also face pressure if new architectures require different memory access patterns or parallelism strategies than those optimized for current workloads.

Academic researchers and engineers who have built evaluation frameworks, fine-tuning toolchains, and benchmarks around transformer models would need to re-tool if a viable alternative architecture gains traction. The practical impact, however, remains speculative until Core Automation releases reproducible results.

What’s Next

Core Automation has not disclosed external funding, a publication timeline, or a schedule for releasing model evaluations. Tworek has not announced when the lab’s automated research outputs will be made available for external review. The lab’s claims about post-transformer architectures and novel learning algorithms will remain unverifiable until technical documentation is released.

