Meta has pushed back the release of its next flagship AI model, internally code-named “Avocado,” from a March 2026 target to at least May or June 2026. The delay, first reported by The New York Times and confirmed by Reuters, stems from the model’s failure to meet internal performance benchmarks, particularly in reasoning, coding, and writing tasks, where it falls short of Google’s Gemini 3.0.
Internal testing placed Avocado’s performance between Google’s Gemini 2.5 and Gemini 3.0 — a range that would make it competitive but not leading in the current generation of frontier models. For Meta, which has positioned its Llama model family as the premier open-weight alternative to closed models from OpenAI and Google, releasing a flagship that trails Gemini 3.0 on key benchmarks would undermine the competitive narrative the company has spent two years building.
The financial stakes are substantial. Meta has projected capital expenditures of $115 billion to $135 billion for 2026, the majority directed toward AI infrastructure, including GPU clusters, data centers, and model-training compute. That follows $72 billion of AI spending in 2025. A delayed flagship model means that spending continues without the product release that would justify it to investors — though Meta’s AI investments also serve internal applications across Instagram, Facebook, WhatsApp, and its advertising platform.
The delay follows a pattern that has become common among frontier AI labs. OpenAI postponed GPT-5’s full release multiple times in 2025. Google delayed Gemini 3.0’s general availability. Anthropic took longer than expected to ship Claude Opus 4.6. The recurring theme is that each generation of models requires disproportionately more compute and tuning to achieve incremental capability gains — a dynamic that challenges the assumption of continuous rapid improvement that has driven AI investment.
For the open-source AI ecosystem, Avocado’s delay creates a gap. Llama 4, Meta’s current open-weight release, remains competitive but is aging relative to newer closed models. Developers and researchers who built on Llama’s open-weight advantage are waiting for the next generation to maintain parity. The delay gives competitors — particularly Alibaba’s Qwen and Mistral — additional months to capture open-weight market share with their own releases.
