- Recursive Superintelligence emerged from stealth on May 14, 2026 with $650 million in funding, founded by Richard Socher.
- The company aims to build a recursively self-improving model that identifies its own weaknesses and redesigns itself without human involvement.
- Co-founders include Peter Norvig, Cresta co-founder Tim Shi, and Tim Rocktäschel — who previously led open-endedness and self-improvement teams at Google DeepMind and worked on the Genie 3 world model.
- The launch was reported by Russell Brandom at TechCrunch on May 14, 2026.
What Happened
On May 14, 2026, Richard Socher announced Recursive Superintelligence, a San Francisco-based AI startup that emerged from stealth with $650 million in funding to build a recursively self-improving model. The launch was reported by Russell Brandom at TechCrunch. Socher is joined by Peter Norvig, Cresta co-founder Tim Shi, and Tim Rocktäschel, who previously led the open-endedness and self-improvement teams at Google DeepMind. Socher previously founded the AI search startup You.com and was a co-author on the original ImageNet work that helped catalyze the modern deep-learning era.
Why It Matters
Recursive self-improvement — the idea of an AI system that autonomously identifies its own weaknesses and modifies its own architecture or training to fix them — has been a stated long-term goal across AI labs for over a decade. Recursive Superintelligence joins a wave of well-funded research-focused AI startups that includes Safe Superintelligence Inc, Reflection AI, and Thinking Machines Lab. Socher specifically pushed back against the “neolab” framing in the launch interview, arguing his company will ship products rather than only conduct research — a positioning intended to differentiate from competitors that have raised similar capital without committed product timelines.
The technical claim that Recursive can reach recursive self-improvement first is bold. Socher told TechCrunch: “Our unique approach is to use open-endedness to get to recursive self-improvement, which no one has yet achieved. It’s an elusive goal for a lot of people.”
Technical Details
Socher described open-endedness as having a specific technical meaning rather than a marketing label. He cited Tim Rocktäschel’s prior work on Genie 3 — a world model at Google DeepMind that can generate any concept, world, or agent on demand and remain interactive — as a working example. Socher framed the parallel to biological evolution: “In biological evolution, animals adapt to the environment, and then others counter-adapt to those adaptations. It’s just a process that can evolve for billions of years, and interesting stuff keeps happening.”
A second technique he cited from Rocktäschel’s research is rainbow teaming, an extension of red teaming where two AI models co-evolve adversarially. One AI attempts to elicit harmful outputs from a target AI; the target AI is then inoculated against the discovered attack patterns; the cycle repeats for millions of iterations. “You can actually allow two AIs to co-evolve. One keeps attacking the other, and then comes up with not just one angle but many different angles, and hence the rainbow analogy,” Socher said. The technique is now used at all major labs, per Socher.
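The attack-inoculate-repeat cycle Socher describes can be illustrated with a toy loop. This is a minimal sketch under strong simplifying assumptions: the "attacker" and "target" below are trivial stand-ins rather than models, the space of attack angles is a fixed list, and "inoculation" is just recording a pattern in a set. None of these names come from Rocktäschel's actual rainbow-teaming work; the sketch only shows the shape of the co-evolution loop, which in practice runs over prompts and model updates for millions of iterations.

```python
import random

def run_rainbow_teaming(iterations: int, seed: int = 0) -> set:
    """Toy attack/inoculate cycle: return the attack patterns the
    target has been hardened against after `iterations` rounds."""
    rng = random.Random(seed)
    # The "rainbow": many distinct attack angles, not just one.
    attack_angles = [f"angle-{i}" for i in range(10)]
    defenses = set()

    for _ in range(iterations):
        # Attacker proposes an angle the target is not yet hardened against.
        candidates = [a for a in attack_angles if a not in defenses]
        if not candidates:
            break  # in this toy space, every angle is already covered
        attack = rng.choice(candidates)
        # Target is "inoculated": the discovered pattern joins its defenses.
        defenses.add(attack)

    return defenses
```

In a real system both sides would be learned models and the attacker would keep inventing genuinely new angles, so the loop would not terminate the way this toy version does once its small fixed space is exhausted.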
Socher distinguished true recursive self-improvement from what he called “auto-research” — the more common pattern of using AI to incrementally improve another system. “That’s not recursive self-improvement. That’s just improvement,” he said. The Recursive approach targets full automation of the entire research loop: ideation, implementation, and validation, eventually extending to physical-domain research as well as AI itself.
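The distinction Socher draws can be made concrete as a loop structure. In "auto-research," a human supplies the ideas or checks the results; in the fully closed loop, ideation, implementation, and validation are all performed by the system on itself. The sketch below is hypothetical and purely illustrative: `Model`, `ideate`, `implement`, and `validate` are names invented here, not anything Recursive has described.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    """Stand-in for a system with a measurable capability score."""
    score: float

def closed_research_loop(model: Model,
                         ideate: Callable[[Model], str],
                         implement: Callable[[Model, str], Model],
                         validate: Callable[[Model], float],
                         steps: int) -> Model:
    """Run the full ideation -> implementation -> validation cycle
    with no human in the loop, keeping only validated improvements."""
    for _ in range(steps):
        idea = ideate(model)                # system identifies its own weakness
        candidate = implement(model, idea)  # system redesigns itself
        if validate(candidate) > validate(model):
            model = candidate               # system accepts only verified gains
    return model
```

The "just improvement" pattern Socher dismisses would replace one of these three callables with a human; the recursive version closes all three, and the improved model becomes the thing generating the next round of ideas.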
Who’s Affected
The named co-founders bring direct credibility on multiple fronts. Norvig is the longtime Google research director and AI textbook author. Shi built Cresta into a contact-center AI platform. Rocktäschel's open-endedness and Genie 3 background gives the company a specific technical lineage that competitors do not have. OpenAI, Anthropic, Google DeepMind, and Safe Superintelligence are the explicit competitive frame for any model that targets frontier capability. Existing investors in the AI research lab category, including Andreessen Horowitz, Sequoia, NEA, and the family offices that have funded recent stealth rounds, gain a new data point for benchmarking valuations at the $650 million round size.
What’s Next
Socher told TechCrunch he intends to ship products on a schedule that distinguishes Recursive from research-only neolab peers, but did not commit to specific product categories or timelines. The technical bet on open-endedness as the path to recursive self-improvement will be evaluated against direct alternatives — automated machine learning loops, Constitutional AI–style self-critique, and direct neural-architecture search — whose results are publishable and reproducible. Asked when recursive self-improvement is “done,” Socher answered: “I suppose it’s never done. Some of these things will never be done. You can always get more intelligent.” The company’s first technical paper has not been announced.