ANALYSIS

CNBC: Anthropic Is the Only AI Company Pricing for Reality

Anika Patel · Apr 21, 2026 · 5 min read
Engine Score 9/10 — Critical

This story is critical due to its potential to fundamentally reshape AI pricing models, with a major structural analysis from CNBC and public agreement from OpenAI's product head. It offers high actionability for companies and investors, backed by a highly reliable source.


CNBC published a major structural analysis on April 17, 2026 arguing that Anthropic, the AI safety company valued at $61.5 billion and backed by Google and Amazon, is the only frontier AI developer pricing its products to reflect actual compute economics. OpenAI’s head of ChatGPT product, Nick Turley, publicly agreed the current model is broken: “Having an unlimited plan is like having an unlimited electricity plan — it just doesn’t make sense.” Both companies are expected to IPO in 2026, and public-market investors will scrutinize unit economics in ways venture capital never did.

Why Per-Token Pricing Is the Only Model That Survives at Scale

Every token an AI model processes consumes real compute — GPU inference cycles, memory bandwidth, electricity. That cost is variable, scales linearly with usage, and does not disappear because a pricing team decided to charge $200 per month for unlimited access.

Anthropic has moved its Claude Max plans to per-token billing. OpenAI has not applied the same structure to ChatGPT subscriptions. The practical consequence: Anthropic’s revenue scales with actual usage, while OpenAI’s consumer revenue is capped at the subscription price regardless of how much compute a subscriber consumes in a given month.

Cloud providers — AWS, Azure, Google Cloud — have never offered unlimited compute plans. The analogy holds across every infrastructure market: variable-cost services require variable-cost pricing, or the vendor absorbs the spread indefinitely. Anthropic's shift to token-based pricing is not a product experiment. It is a structural correction.

The $200 vs. $12,000 Arbitrage Hiding in Plain Sight

The CNBC analysis named a specific number that illustrates the problem at its extreme: a heavy Claude Code Max user pays $200 per month for usage that, billed at standard Anthropic API rates, would cost over $12,000. That is a 60x subsidy ratio embedded in a single pricing tier.

Claude Code is an agentic coding assistant. Unlike a conversational chatbot — where a typical session consumes a few thousand tokens — an agentic coding session can process millions of tokens through multi-step reasoning loops in a single working day. The flat-rate economics designed for chat fail completely when applied to autonomous agents running continuously.
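The gap between those two usage profiles is easy to put in numbers. As a back-of-envelope sketch — using an illustrative blended per-token rate and hypothetical daily token volumes, not Anthropic's published price list — the same per-token formula produces wildly different monthly bills for a chat user and an agent user:

```python
# Back-of-envelope: chat vs. agentic token consumption billed per token.
# The rate and daily volumes below are illustrative assumptions, not
# Anthropic's actual pricing.

RATE_PER_MILLION_TOKENS = 6.00  # USD, blended input/output rate (assumed)

def monthly_cost(tokens_per_day: int, working_days: int = 22) -> float:
    """Monthly cost of usage billed strictly per token."""
    return tokens_per_day * working_days / 1_000_000 * RATE_PER_MILLION_TOKENS

chat_user = monthly_cost(tokens_per_day=50_000)       # a few chat sessions/day
agent_user = monthly_cost(tokens_per_day=90_000_000)  # continuous coding agent

print(f"chat user:  ${chat_user:,.2f}/mo")    # single-digit dollars
print(f"agent user: ${agent_user:,.2f}/mo")   # roughly the $12,000 scale CNBC cites
```

Under these assumed numbers the agent user lands near the $12,000-per-month figure in the CNBC analysis, while the chat user costs a few dollars — which is why one flat price cannot cover both.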

As MegaOne AI has reported, Anthropic’s agent infrastructure is built for exactly these high-volume autonomous workloads. Pricing that doesn’t reflect actual token consumption cannot survive that architecture at scale. Moving Claude Code Max to consumption billing removes the $11,800 monthly subsidy Anthropic was absorbing on its most active developer users — and reveals which of those users represent genuine willingness to pay.

OpenAI’s Own Head of ChatGPT Says the Math Doesn’t Work

The most significant admission in the CNBC analysis came not from Anthropic but from OpenAI. Nick Turley, OpenAI’s head of ChatGPT product, stated: “Having an unlimited plan is like having an unlimited electricity plan — it just doesn’t make sense.”

That comparison — made by the person responsible for ChatGPT’s product strategy — describes OpenAI’s own current primary consumer offering as structurally irrational. ChatGPT Pro at $200 per month offers unlimited access to GPT-4o and o1. OpenAI has not published average cost-per-user under that plan, and there is no public indication it intends to.

OpenAI does operate consumption-based pricing on its API tier, which charges per token. OpenAI’s major enterprise contracts, including its reported $1 billion deal with Disney, are almost certainly structured on usage terms rather than flat rates. This creates a bifurcated business: per-token economics for the enterprise customers who generate the most revenue, and subsidized flat-rate pricing for the consumer base used to report subscriber growth. That split is defensible under venture capital. It is harder to present coherently in an S-1.

Ramp CEO Eric Glyman: AI Spending Grew 13x With No Budget Framework

Eric Glyman, CEO of corporate spend management platform Ramp — which processes billions in annual business payments — provided the enterprise data anchor for the CNBC analysis. AI spending across Ramp’s customer base grew 13x in one year, and Glyman stated plainly that “no one knows how to budget for it.”

That observation cuts directly against flat-rate pricing logic. When enterprise AI spend is growing at 13x annually without reliable budget frameworks, consumption-based billing is the only model that generates revenue proportional to actual usage. Under flat-rate plans, a 13x increase in a customer’s AI consumption generates zero additional revenue for the vendor. Under per-token billing, that same 13x increase generates 13x the revenue.
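The revenue asymmetry in that paragraph can be sketched directly. In this toy model — the flat price and per-million-token rate are illustrative assumptions — per-token revenue tracks a 13x usage increase one-for-one, while flat-rate revenue does not move at all:

```python
# Toy model: vendor revenue under flat-rate vs. per-token billing when a
# customer's usage grows 13x (the Ramp figure). Prices are illustrative.

FLAT_MONTHLY = 200.0     # USD flat subscription (assumed)
RATE_PER_MILLION = 6.00  # USD per million tokens (assumed)

def revenue(tokens_millions: float, model: str) -> float:
    """Monthly vendor revenue for a given usage level and billing model."""
    if model == "flat":
        return FLAT_MONTHLY                    # invariant to usage
    return tokens_millions * RATE_PER_MILLION  # scales linearly with tokens

before, after = 100.0, 1_300.0  # 13x usage growth, in millions of tokens
for model in ("flat", "per_token"):
    print(f"{model}: ${revenue(before, model):,.0f} -> ${revenue(after, model):,.0f}")
```

The flat-rate line prints the same number twice; the per-token line grows 13x with the usage.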

MegaOne AI tracks 139+ AI tools across 17 categories, and the enterprise spending pattern visible across that coverage is consistent with Glyman’s figure: AI budgets are scaling faster than procurement systems can track. The companies capturing revenue proportional to that growth — rather than capping it at a monthly subscription price — are building fundamentally different businesses from those that aren’t.

IPO Filings Will Force the Question Both Companies Have Avoided

Subscription revenue from flat-rate plans reads well in S-1 filings: recurring, predictable, growing. Gross margin tells the actual story. A company running at negative gross margin on its consumer product — subsidized by API revenue and enterprise contracts — is not presenting a SaaS business to public investors. It is presenting a growth-stage company with a structural cost problem that scale will worsen, not solve.

Anthropic’s shift to per-token billing is a deliberate pre-IPO stress test. The users who remain after the pricing change represent genuine willingness to pay at real market rates. That cohort — likely smaller in headcount, almost certainly higher in revenue per user — is the base that can survive quarterly earnings calls. Deal activity and acquisition strategies across the AI sector have been built on demand assumptions untested at market-rate pricing. IPO filings will apply that test regardless of preparation.

OpenAI’s timeline for addressing the flat-rate problem at the consumer level is not publicly disclosed. Turley’s comment signals internal recognition that the model needs to change. The gap between recognition and execution is exactly where public-market scrutiny lands hardest.

What Happens When the Correction Arrives

CNBC’s closing thesis is direct: “If even a meaningful fraction of today’s AI demand is inflated, the company that priced for reality will be the one still standing when the correction arrives.”

Flat-rate unlimited pricing creates one specific type of artificial demand: users who consume AI at volumes they would not pay for at actual unit economics. The marginal cost to those users is zero, so usage is unconstrained. That usage shows up in engagement metrics, which show up in investor decks, which show up in valuations. When pricing shifts to cost-reflective levels, a portion of that demand disappears — because it was never real demand. It was a subsidy utilization pattern.

Historical infrastructure markets have demonstrated this predictably: broadband unlimited plans, streaming bundle libraries, cloud free-tier compute — all showed demand contraction when prices moved toward actual cost levels. The question for AI isn’t whether the pattern repeats. It’s which company discovers the gap between subsidized and genuine demand in a private board meeting versus a public earnings call.

Anthropic is finding out before the IPO. That is the structural advantage CNBC is identifying — not model benchmarks, not safety positioning, but the discipline of building a revenue model that works at the usage volumes the product actually generates. OpenAI’s own product leadership has said as much. The execution is what remains.
