- OpenAI President Greg Brockman declared on the Big Technology Podcast that GPT reasoning models represent a direct path to artificial general intelligence (AGI).
- Brockman said OpenAI has “definitively answered” whether text-based models can achieve AGI, calling the Sora video model “a different branch of the tech tree.”
- The claim is contested by prominent researchers including Yann LeCun and Google DeepMind’s Demis Hassabis, who argue LLMs alone cannot achieve human-level intelligence.
- OpenAI recently shut down the consumer Sora app, concentrating resources on GPT reasoning model development.
What Happened
OpenAI President Greg Brockman stated that the company’s GPT reasoning models have a “line of sight” to achieving artificial general intelligence, declaring one of AI research’s central open questions effectively settled. “I think that we have definitively answered that question—it is going to go to AGI. Like we see line of sight,” Brockman said on the Big Technology Podcast, as reported by The Decoder on April 2, 2026.
Brockman described OpenAI’s now-shuttered Sora video model as sitting on “a different branch of the tech tree” from the GPT reasoning series, explaining that limited computing resources forced the company to prioritize one approach. OpenAI shut down the consumer Sora app in March 2026, though the company said world model research would continue for robotics on a smaller scale.
Why It Matters
Whether large language models trained primarily on text can achieve general intelligence is among the most contested questions in AI research. Brockman’s claim puts OpenAI firmly on one side of this debate, but several of the field’s most respected voices disagree. Yann LeCun has argued for years that LLMs lack understanding of logic, the physical world, permanent memory, and hierarchical planning. DeepMind co-founder and CEO Demis Hassabis holds a similar position, arguing that LLM scaling alone is insufficient.
AI researcher Francois Chollet, who defines intelligence as the ability to efficiently learn new skills, has placed current language models very low on his intelligence scale. Jerry Tworek, a former OpenAI researcher who helped develop the company’s reasoning models, described deep learning as “done” and founded Core Automation to pursue simulation-based learning instead.
Technical Details
Brockman’s comments center on OpenAI’s reasoning model series, which includes the o1 and o3 model families. These models use extended chain-of-thought processing to work through complex problems, spending more compute at inference time rather than relying solely on pattern matching from training. The approach has produced strong results on mathematical reasoning benchmarks, coding challenges, and scientific problem-solving tasks.
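The core idea behind spending more compute at inference time can be illustrated with a minimal sketch. The snippet below uses self-consistency voting, one published inference-time technique: sample several independent reasoning chains and take the majority answer, so that more samples (more compute) yield a more reliable result. This is an illustration of the general principle, not OpenAI's actual method; the `sample_answer` stub is hypothetical and stands in for a real model call.

```python
import random
from collections import Counter

def sample_answer(rng: random.Random) -> str:
    # Hypothetical stand-in for one model "reasoning chain": a noisy
    # solver that returns the correct answer ("42") about 60% of the
    # time. A real system would invoke a language model here.
    return "42" if rng.random() < 0.6 else str(rng.randint(0, 99))

def self_consistency(n_samples: int, seed: int = 0) -> str:
    # Spend more inference-time compute by drawing n_samples chains
    # and returning the majority answer (self-consistency voting).
    rng = random.Random(seed)
    votes = Counter(sample_answer(rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# A single chain may be wrong; with many chains, correct answers
# cluster on one value while errors scatter, so the vote converges.
print(self_consistency(1))
print(self_consistency(100))
```

The trade-off shown here is the one Brockman's comments rest on: answer quality improves not by retraining the model, but by paying more compute per query at inference time.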
Google DeepMind researcher Adam Brown recently offered a counterpoint defense of the LLM architecture, comparing token prediction to biological evolution: a simple rule that, applied at massive scale, produces emergent complexity. Brown argued this complexity could potentially lead to consciousness, though this remains a minority view among AI researchers.
Who’s Affected
OpenAI’s strategic bet on text-based reasoning models affects its roughly 1,700 employees, its investors including Microsoft, and the developer ecosystem built around its APIs. Research labs pursuing alternative paths to general intelligence—including world models, embodied AI, and simulation-based learning—may face funding pressure if Brockman’s framing gains mainstream acceptance. Companies building products on OpenAI’s platform need to assess whether the text-first strategy aligns with their use cases.
What’s Next
OpenAI is expected to release GPT-5.4, which reportedly brings a million-token context window and an extreme reasoning mode, according to The Decoder. The model’s capabilities on general reasoning benchmarks will provide concrete evidence for or against Brockman’s claims. David Silver, formerly of DeepMind, has founded a startup focused on simulation learning as an alternative path, and Jerry Tworek’s Core Automation is pursuing a similar direction, indicating that the AGI methodology debate is far from settled regardless of OpenAI’s confidence.
