Meta’s First AI Model Under Scale AI Founder Won’t Launch Open-Source

Zara Mitchell · Apr 8, 2026 · 5 min read

Meta (NASDAQ: META) is preparing to release its first large language model built under Alexandr Wang — the Scale AI co-founder who joined Meta following the company’s acquisition of Scale AI — with open-source versions planned to follow launch rather than accompany it, according to Axios reporting from April 2026. The delay has two stated causes: proprietary components that must be stripped and safety risks that require pre-release mitigation.

This is Meta’s first major model release since Wang’s appointment restructured its AI division. It is not a Llama update. It is something categorically new.

What Wang’s Appointment Actually Changed at Meta

Wang co-founded Scale AI in 2016, at age 19, building what became the dominant infrastructure layer for AI training data — high-quality labeled datasets, structured evaluation frameworks, and the RLHF pipelines that determine whether a model actually behaves as intended in production. Meta’s acquisition brought him into senior leadership with oversight of AI product and research strategy across the entire division.

Scale AI’s competitive advantage was never building models. It was the unglamorous work that precedes models: collecting high-quality data at scale, running structured evaluations, and managing the human-feedback loops that improve output alignment without brute-forcing it with compute spend.
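At its simplest, a human-feedback loop reduces to collecting pairwise preference judgments from labelers and aggregating them into a training signal. A minimal illustrative sketch in Python (all names hypothetical; real RLHF pipelines train a reward model on this kind of data rather than tallying it directly):

```python
from collections import defaultdict

def aggregate_preferences(comparisons):
    """Tally pairwise human judgments into per-response win rates.

    comparisons: list of (winner_id, loser_id) tuples from labelers.
    Returns {response_id: win_rate} -- a toy stand-in for the reward
    signal a real RLHF pipeline would learn to predict from this data.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return {rid: wins[rid] / appearances[rid] for rid in appearances}

# Toy data: labelers preferred response "a" over "b" twice, "b" over "c" once.
rates = aggregate_preferences([("a", "b"), ("a", "b"), ("b", "c")])
```

The expensive part in practice is not the aggregation but sourcing consistent, high-quality judgments at volume — which is precisely the infrastructure Scale AI built its business on.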

The acquisition consolidated Meta’s previously fragmented AI organization — spanning FAIR (Fundamental AI Research) and multiple product teams — under clearer accountability. The result is a division that now applies Scale AI’s operational precision to what had been a cadence-driven, open-source-first release strategy.

The New Model: What Axios Reported

Axios reported that Meta is preparing a new LLM — the first built directly under Wang’s leadership — with a phased release strategy. The closed version launches first. Open-source weights follow only after Meta completes two specific tasks: removing proprietary components and addressing safety risks.

This marks a deliberate departure from the Llama release pattern, where open weights typically shipped at or near launch. The existence of proprietary elements in this model suggests the architecture or training pipeline incorporates Scale AI infrastructure that cannot be freely distributed in its current form.

Meta has not disclosed parameter count, benchmark performance, or target use cases. The Axios report characterizes the release timeline as near-term, without a confirmed date.

Why Open-Source Is Being Delayed — And What That Signals

Meta is citing two blockers: proprietary components and safety risks. Both are direct consequences of integrating Scale AI’s infrastructure into a new model architecture.

The proprietary component issue traces to Scale AI’s training data and evaluation infrastructure. Scale AI built proprietary RLHF pipelines and labeled datasets central to its commercial model — assets Meta now controls but cannot distribute without restructuring licensing arrangements and scrubbing third-party dependencies from the weight files.

The safety-risk framing echoes the dynamics exposed when Anthropic accidentally released Claude agent source code: frontier model deployment introduces alignment risks that require structured red-teaming before weights become publicly available and fine-tunable without guardrails. Open weights can be modified by anyone; the risk surface is fundamentally different from API access.

The closed-first sequence also provides commercial intelligence Meta couldn’t otherwise obtain. Gathering real-world deployment feedback before releasing weights that third parties can modify in unpredictable ways is a meaningful operational advantage. This is not a retreat from open-source values — it is a sequencing decision driven by the realities of frontier model distribution.

Llama’s Position in This New Architecture

The Llama model family remains Meta’s primary open-source offering. Llama 4, released in early 2026, maintained the franchise’s dominance in the open-weight tier. MegaOne AI tracks 139+ AI tools across 17 categories, and Llama derivatives appear in more deployment configurations than any other open-weight model family — across fine-tuned vertical applications, local inference setups, and enterprise wrappers that never touch the original weights directly.

The Wang-led model occupies a different tier: frontier closed, competing directly with GPT-5, Claude 4, and Gemini Ultra. Meta’s strategy now explicitly bifurcates — frontier capability under controlled access, open ecosystem through Llama. These are not competing priorities; they are distinct products serving distinct markets.

This mirrors the approach that OpenAI has applied to its most capable models, where enterprise agreements and API access precede any open release. Meta’s open-source identity is intact but now sequenced at the frontier tier rather than unconditional.

What “Scale AI Thinking” Means for Model Quality

Scale AI’s founding thesis: model quality is a data problem before it is a compute problem. Organizations that invest in labeling quality, evaluation rigor, and structured human-feedback pipelines produce better models per compute dollar than those optimizing for scale alone.

Wang applying this at Meta means systematic investment in evaluation frameworks, red-teaming infrastructure, and high-quality synthetic data generation — the unglamorous pipeline work that determines whether a model generalizes correctly rather than pattern-matching to training distribution artifacts that look right on benchmarks but fail in deployment.

Llama releases have historically benchmarked competitively on public evaluations — MMLU, HumanEval, MATH — but have sometimes underperformed closed models on open-ended tasks where RLHF sophistication matters more than parameter count. Wang’s background addresses exactly that gap.
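That gap between benchmark scores and deployment behavior is, in miniature, the difference between exact-match scoring and behavioral checks. A toy illustration (hypothetical code, not Meta’s or Scale AI’s actual tooling) of why a model can score well on one and poorly on the other:

```python
def exact_match_score(predictions, references):
    """Benchmark-style scoring: credit only for verbatim matches."""
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

def rubric_score(predictions, checks):
    """Deployment-style scoring: each output must pass every
    behavioral check (contains the right fact, follows format, etc.)."""
    passed = sum(all(check(p) for check in checks) for p in predictions)
    return passed / len(predictions)

preds = ["Paris", "paris is the capital"]
refs = ["Paris", "Paris"]
em = exact_match_score(preds, refs)   # penalizes the correct-but-verbose answer
rb = rubric_score(preds, [lambda p: "paris" in p.lower()])  # credits both
```

Real evaluation frameworks are far richer than this, but the asymmetry is the point: exact-match benchmarks reward surface form, while rubric-style checks probe behavior — and RLHF sophistication mostly shows up in the latter.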

The Humans First movement, which advocates for meaningful human oversight in AI development, would find Scale AI’s labor-intensive, human-in-the-loop labeling model philosophically aligned with its goals — even as Scale AI’s technology accelerates AI capability overall. Wang’s model of human feedback is expensive. It is also what separates well-aligned frontier models from those that merely scale.

The Competitive Stakes for Meta in 2026

Meta enters mid-2026 with a more focused AI strategy than it carried 18 months ago. The Scale AI acquisition, Wang’s appointment, and this first proprietary model represent a deliberate pivot toward frontier competition — rather than conceding that ground exclusively to OpenAI, Anthropic, and Google while staying in the open-source lane alone.

The infrastructure investment underpinning that pivot is substantial. Meta guided for $60–65 billion in capital expenditure for 2025 — the largest single-year AI infrastructure commitment the company has made. Nebius Group’s planned $10 billion AI data center in Finland illustrates how compute infrastructure has become a baseline requirement for frontier model competition across the industry, not a differentiator.

The closed model that launches is a test case. The open-source version that follows — stripped of proprietary components, safety-hardened, and built on Scale AI’s data infrastructure advantages — is the release that will define whether Wang’s appointment actually changes Meta’s competitive position in the open ecosystem.

Watch the open-source release date, not the launch announcement. That is when Meta’s bet on Alexandr Wang either validates itself or doesn’t.
