ANALYSIS

OpenAI Memo Names ‘Spud’ Model and Accuses Anthropic of $8B Revenue Inflation

Marcus Rivera · Apr 14, 2026 · 4 min read
Engine Score 8/10 — Important
  • An internal OpenAI memo from CRO Denise Dresser, leaked by The Verge on April 13, 2026, outlines five Q2 enterprise priorities including a new model codenamed “Spud.”
  • “Spud” is described as delivering stronger reasoning, better dependency understanding, and more reliable production output, with claims it will improve all core OpenAI products significantly.
  • OpenAI is developing an enterprise agent platform called “Frontier,” positioned as the default infrastructure for autonomous business workflows.
  • Dresser accuses Anthropic of overstating its $30 billion annual run rate by approximately $8 billion due to gross rather than net accounting for Amazon and Google revenue share payments.

What Happened

OpenAI Chief Revenue Officer Denise Dresser circulated an internal memo in early Q2 2026 outlining the company’s enterprise strategic direction for the quarter. The document, first reported by The Verge and subsequently covered by The Decoder on April 13, 2026, was not intended for public release. It covers five priorities: a forthcoming model codenamed “Spud,” an agent platform called “Frontier,” an expanded Amazon Bedrock integration, a deployment service called “DeployCo,” and a direct challenge to Anthropic’s revenue accounting.

Dresser frames the enterprise AI market as entering a “more mature phase” in which raw model performance is no longer a standalone differentiator. According to the memo, capacity — not demand — is now the primary constraint on growth, with multi-year contracts in the nine-figure range described as increasingly common.

Why It Matters

The memo offers a direct look at how OpenAI internally frames its competitive position as enterprise AI procurement accelerates. The revenue dispute with Anthropic has prior context: The Information previously reported on accounting differences between the two companies’ revenue recognition from their respective cloud partnerships with Amazon and Google. Dresser’s characterization of the market as having shifted “from prompts to agents” mirrors public positioning from Google, Microsoft, and Salesforce over the past year.

The document also signals that OpenAI views its compute infrastructure — not model capability alone — as a durable competitive advantage, citing customer-facing evidence including higher token limits, lower latency, and more consistent execution of complex multi-step workflows.

Technical Details

“Spud” is described in the memo as an “important step in the intelligence foundation for the next generation of work.” Dresser writes that early customer feedback shows the model delivers stronger reasoning, improved understanding of “intentions and dependencies,” and more consistent production results. She states the model will make all of OpenAI’s core products “significantly better” as part of an iterative deployment strategy aimed at building toward a “super app.”

The Amazon partnership, announced in late February 2026, introduces what Dresser calls an “Amazon Stateful Runtime Environment” — a layer on top of Amazon Bedrock providing memory, context, and continuity across interactions for multi-level agent workflows. This goes beyond basic model API access. Dresser cites three specific advantages: lower adoption barriers for AWS-native customers, improved access to regulated industries, and deeper integration down to production runtime.

The “Frontier” agent platform is designed to be “the default platform for enterprise agents.” Dresser articulates the strategic rationale directly: “Better models make the platform more valuable, deeper integration raises switching costs, and every workflow running through the system makes OpenAI harder to rip out. That is how we move from product vendor to operating infrastructure.”

Who’s Affected

Enterprise customers evaluating or currently running workloads on OpenAI, Anthropic, or Amazon Bedrock are the primary audience for the memo’s competitive framing. AWS-native companies and businesses in regulated industries are specifically named as targets for the Amazon integration. Software development teams are addressed through Codex; general knowledge workers through ChatGPT for Work.

Anthropic faces reputational exposure from Dresser’s accounting claim. According to the memo, Anthropic’s reported $30 billion annual run rate is overstated by approximately $8 billion because the company books revenue share payments from Amazon and Google on a gross rather than net basis. Dresser writes that OpenAI reports its Microsoft revenue share on a net basis, “which is more inline [sic] with standards we would be held to as a public company.” Neither OpenAI nor Anthropic is publicly traded, so neither claim can be independently verified through public filings.
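The gross-versus-net distinction at the heart of the accusation is straightforward arithmetic. A minimal sketch using only the two figures cited in the memo (the variable names are illustrative, not from the document):

```python
# Illustrative only: how booking cloud revenue share gross vs. net
# changes a reported run rate. Figures are the memo's claimed numbers;
# the actual accounting treatment on either side is unverified.

gross_run_rate = 30_000_000_000       # run rate with revenue share booked gross
alleged_passthrough = 8_000_000_000   # revenue share allegedly owed to Amazon/Google

# Net accounting excludes the pass-through amount from reported revenue.
net_run_rate = gross_run_rate - alleged_passthrough
print(f"Net run rate: ${net_run_rate / 1e9:.0f}B")  # → Net run rate: $22B
```

If the memo's characterization were accurate, Anthropic's run rate on a net basis would be roughly $22 billion rather than $30 billion; without public filings, neither figure can be checked.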

What’s Next

OpenAI is developing “DeployCo,” a deployment engine intended to address what Dresser identifies as the primary enterprise bottleneck: rollout at scale, not demand. It will operate alongside a network of “Frontier Alliance” partners. The memo does not include a release date for Spud, a launch timeline for Frontier, or a go-live date for DeployCo.

Dresser frames OpenAI’s overall strategy as a self-reinforcing cycle: “Better models drive more usage, more usage drives deeper integration, deeper integration drives multi-product adoption, and multi-product adoption makes us harder to replace.” The memo does not address whether Anthropic will respond to the revenue accounting claims publicly.
