- Yann LeCun, a Turing Award winner and former chief AI scientist at Meta, left the company in late 2025 to found AMI Labs, a Paris-based startup pursuing AI architectures that do not rely on large language models.
- AMI Labs secured approximately $5 billion in funding to develop what LeCun calls “world models,” systems that learn by observing and predicting physical reality rather than by processing text.
- LeCun argues that LLMs are fundamentally limited because they lack persistent memory, cannot plan reliably, and do not understand the physical world.
- The venture represents the largest known financial bet against the LLM paradigm that dominates commercial AI development.
What Happened
Yann LeCun, who spent over a decade as Meta’s chief AI scientist and led the company’s Fundamental AI Research (FAIR) lab, departed to launch AMI Labs in Paris. MIT Technology Review reported on January 22, 2026, in an article by journalist Caiwei Chen, that LeCun’s new company had raised approximately $5 billion to pursue an alternative to the large language model approach that powers ChatGPT, Claude, Gemini, and virtually every other commercial AI product.
LeCun is one of three researchers who received the 2018 Turing Award for foundational work on deep learning, alongside Geoffrey Hinton and Yoshua Bengio. His departure from Meta represents one of the highest-profile exits in AI research history and the clearest institutional expression of his long-standing disagreement with the LLM-dominated direction the field has taken.
Why It Matters
Nearly every major AI company has concentrated its resources on scaling large language models. OpenAI, Google, Anthropic, and Meta have collectively spent tens of billions of dollars training systems that work by predicting the next token in a text sequence. LeCun is betting that this paradigm is a dead end for achieving general intelligence.
His core argument is that LLMs operate on text alone and cannot develop genuine understanding of the physical world. “These systems can generate fluent text, but they don’t understand anything,” LeCun has stated in public remarks. He contends that real intelligence requires the ability to build internal models of how the world works, predict outcomes of actions, and plan over extended time horizons. Current LLMs cannot do any of these things reliably.
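To make the objective LeCun is criticizing concrete, the sketch below shows the standard next-token training loss in heavily simplified form. It is illustrative only: the tiny GRU model, the vocabulary size, and every name in it are stand-ins for a real transformer-based LLM, not any lab’s actual code.

```python
# Minimal sketch of the next-token objective LLMs are trained on (illustrative only).
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1000, 64, 16

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)  # stand-in for a transformer stack
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # logits over the vocabulary at every position

model = TinyLM()
tokens = torch.randint(0, vocab_size, (2, seq_len))  # a batch of token sequences
logits = model(tokens[:, :-1])                       # predict position t+1 from positions <= t
loss = nn.functional.cross_entropy(                  # maximize likelihood of the actual next token
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()
```

Everything the model learns is driven by that single loss over text tokens, which is the crux of LeCun’s objection.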
The $5 billion funding round gives AMI Labs the resources to pursue this thesis at a scale comparable to the frontier model labs it is challenging.
Technical Details
AMI Labs is pursuing what LeCun calls a Joint Embedding Predictive Architecture (JEPA). Unlike LLMs, which are trained to predict the next token in a text sequence, JEPA-based systems learn by predicting abstract representations of sensory input. The model observes video, audio, or other physical data and learns to predict what will happen next in a compressed representation space rather than in raw pixel or text token space.
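The sketch below is a minimal, hypothetical illustration of that training setup, assuming a generic joint-embedding design with a context encoder, a predictor, and an EMA-updated target encoder. All module names, shapes, and the MSE loss are simplifying assumptions for exposition, not AMI Labs’ or Meta’s published implementation.

```python
# Conceptual sketch of a joint-embedding predictive setup: the loss is computed
# in representation space, not on raw pixels or tokens. Names and shapes are
# illustrative assumptions, not AMI Labs' or Meta's actual code.
import copy
import torch
import torch.nn as nn

d_in, d_repr = 512, 128          # flattened "observation" size and embedding size

encoder = nn.Sequential(nn.Linear(d_in, d_repr), nn.ReLU(), nn.Linear(d_repr, d_repr))
target_encoder = copy.deepcopy(encoder)          # updated as an EMA of the encoder, not by gradients
for p in target_encoder.parameters():
    p.requires_grad_(False)
predictor = nn.Sequential(nn.Linear(d_repr, d_repr), nn.ReLU(), nn.Linear(d_repr, d_repr))

context = torch.randn(8, d_in)   # e.g. the visible frames or patches
future  = torch.randn(8, d_in)   # e.g. the masked or upcoming frames or patches

z_context = encoder(context)
z_pred = predictor(z_context)                    # predict the representation of the target
with torch.no_grad():
    z_target = target_encoder(future)            # abstract target, never decoded back to pixels

loss = nn.functional.mse_loss(z_pred, z_target)  # prediction error in embedding space
loss.backward()

# EMA update of the target encoder (a common choice in joint-embedding methods)
with torch.no_grad():
    for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
        tp.mul_(0.99).add_(0.01 * p)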
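```

The key design difference from the next-token loss above is that nothing is ever decoded back into raw observations: the rationale LeCun has given is that predicting in an abstract space lets the model ignore unpredictable low-level detail and focus on the structure of what happens next.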
This approach is designed to enable “world models”: internal simulations of physical reality that an AI system can use to reason about consequences, plan actions, and understand spatial and temporal relationships. LeCun published foundational research on JEPA while leading Meta’s FAIR lab, including V-JEPA for video understanding and I-JEPA for image understanding. Both demonstrated the ability to learn useful representations from unlabeled data without generative pretraining.
AMI Labs intends to scale these architectures using its funding to train world models on massive datasets of video, robotics interaction data, and other physical-world observations. The company is recruiting researchers from European universities, INRIA, and Meta’s own FAIR lab.
Who’s Affected
The AI research community is most directly affected. If AMI Labs produces systems that outperform LLMs on planning, physical reasoning, or real-world interaction, it would validate LeCun’s critique and potentially redirect billions in research investment. Companies built on LLM infrastructure, notably OpenAI and Anthropic, would face questions about whether their core architecture has a ceiling.
Investors also have a stake. The $5 billion raise signals that major funders believe the LLM approach has fundamental limitations that additional scaling will not resolve. Meta, which lost its chief AI scientist and potentially several FAIR researchers, must continue its AI efforts without one of the field’s most recognized figures.
What’s Next
AMI Labs has not announced a product timeline or specific benchmarks it intends to target. LeCun has indicated that the company’s initial focus will be on training large-scale world models and publishing peer-reviewed research. The central unanswered question is whether JEPA-based architectures can match or exceed LLMs on practical, commercially relevant tasks. Until AMI Labs demonstrates concrete results on recognized benchmarks, the $5 billion investment remains a high-conviction bet on a framework that most of the industry has not adopted.