
George Hotz Says Closed-Source AI Is Creating a New Feudal Class System

MegaOne AI · Apr 4, 2026 · 4 min read
Engine Score 7/10 — Important
  • George Hotz (geohot), founder of tinygrad and the hacker who first jailbroke the iPhone, published an essay arguing that closed-source AI creates “neofeudalism” by concentrating intelligence itself into the hands of a few corporations.
  • Hotz’s core claim is that when intelligence becomes a product controlled by private entities, the rest of society becomes a permanent dependent class, analogous to feudal serfs who did not own the land they worked.
  • The essay generated over 1,200 comments on Hacker News, with strong support from open-source developers and pushback from those who argue that closed models drive faster safety research.
  • Hotz positions open-source AI frameworks like tinygrad, llama.cpp, and GGML as explicitly anti-feudal infrastructure that distributes compute access.

What Happened

George Hotz, known for jailbreaking the iPhone at age 17 and later founding autonomous driving startup comma.ai and the tinygrad deep learning framework, published an essay on his personal blog titled “Neofeudalism and the Closed-Source AI Trap.” The essay, posted on March 28, 2026, argues that closed-source AI development by companies like OpenAI, Google, and Anthropic represents a structural threat to human autonomy that goes beyond typical debates about open versus closed software.

“This is not about licensing,” Hotz wrote. “This is about who owns intelligence. In the feudal system, lords owned the land and peasants worked it. In the AI feudal system, corporations own the intelligence and humans rent access to it. The asymmetry is the same. The dependency is the same. The power dynamic is the same.”

Why It Matters

Hotz’s argument extends a thread that has been building in the AI community since OpenAI’s transition from a nonprofit to a capped-profit entity in 2019. The debate has typically centered on safety (closed models are easier to control) versus access (open models democratize capability). Hotz reframes the question as one of political economy rather than technology policy.

The argument resonates because of observable market dynamics. As of April 2026, the most capable foundation models are controlled by three companies: OpenAI, Google DeepMind, and Anthropic. Training a frontier model costs an estimated $200 million to $1 billion, according to estimates compiled by Epoch AI. This cost barrier means only entities with access to massive capital can produce frontier intelligence, creating what Hotz calls a “compute aristocracy.”

Technical Details

Hotz’s essay makes several specific technical claims. First, he argues that the compute concentration is accelerating, not stabilizing. Citing Epoch AI data, he notes that the compute required for frontier training runs has increased by approximately 4x per year since 2020, meaning the barrier to entry doubles roughly every six months. Second, he points to talent concentration: an analysis of LinkedIn data he conducted shows that 73% of researchers who have published first-author papers on frontier model training at NeurIPS or ICML in the past two years are employed by five companies.
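The “doubles roughly every six months” figure follows directly from the 4x-per-year growth rate Hotz cites. A minimal sketch of that arithmetic (the 4x annual figure is from the article; the five-year extrapolation is purely illustrative):

```python
import math

# Frontier training compute growth cited in the essay (via Epoch AI): ~4x per year.
annual_growth = 4.0

# Doubling time in months: solve 2 = growth^(t/12)  =>  t = 12 / log2(growth)
doubling_months = 12 / math.log2(annual_growth)
print(f"Doubling time: {doubling_months:.1f} months")

# Illustrative extrapolation: growth factor over five years at the same rate
print(f"5-year growth factor: {annual_growth ** 5:.0f}x")
```

At 4x per year the barrier grows by three orders of magnitude in five years, which is the mechanism behind Hotz’s claim that concentration is accelerating rather than stabilizing.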

Third, Hotz argues that API access is not equivalent to ownership. “When you use the OpenAI API, you are a sharecropper,” he wrote. “You build your product on their land. They can change the price, change the model, change the terms, or shut you off. You own nothing. You control nothing. This is the definition of feudal dependency.” He cites the specific example of OpenAI’s deprecation of the GPT-3.5-turbo fine-tuning API in 2025, which forced thousands of developers to migrate or rebuild.

Hotz positions open-source projects as the structural countermeasure. He specifically names tinygrad (his own project), llama.cpp (Georgi Gerganov’s inference engine), GGML (the underlying tensor library), and the broader Llama model family from Meta as examples of infrastructure that distributes compute capability. “Every open-source model that runs on consumer hardware is a freed serf,” he wrote. “Every quantization technique that reduces memory requirements is a tool of liberation.”
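The quantization point has concrete arithmetic behind it. A back-of-the-envelope sketch of weight memory for a 70B-parameter model at different precisions (illustrative only; it ignores the KV cache, activations, and per-format overhead such as quantization scales, which formats like GGML/GGUF add on top):

```python
# Approximate weight-only memory for a 70B-parameter model by precision.
PARAMS = 70e9

bytes_per_param = {
    "fp32": 4.0,
    "fp16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

for fmt, nbytes in bytes_per_param.items():
    gib = PARAMS * nbytes / 2**30  # bytes -> GiB
    print(f"{fmt:>5}: {gib:6.1f} GiB")
```

The roughly 4x drop from fp16 to int4 is what moves a 70B model from datacenter territory into the range of consumer GPU setups, which is the practical substance of Hotz’s “tool of liberation” framing.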

Who’s Affected

The essay’s framing has implications for multiple constituencies. Developers building on closed APIs are implicitly characterized as feudal dependents, a framing that clearly struck a nerve given the Hacker News response. Several commenters noted that they had migrated products from OpenAI to locally hosted open-source models specifically to avoid platform risk, citing experiences that aligned with Hotz’s sharecropper analogy.

Policymakers engaged in AI regulation are a secondary audience. The EU AI Act and proposed US legislation have focused primarily on safety and liability. Hotz’s argument suggests that market structure and compute access should be regulatory priorities. If intelligence becomes as essential as electricity, the argument for treating frontier AI providers as utilities or common carriers gains force.

The closed-source AI companies themselves have generally responded to this line of criticism by emphasizing safety. Dario Amodei, CEO of Anthropic, has argued that premature open-sourcing of frontier models creates unacceptable risks. Sam Altman of OpenAI has described a “gradual release” philosophy. Hotz dismisses these arguments as self-serving: “The lord always has a reason why the serfs cannot own land.”

What’s Next

Hotz announced that tinygrad will release a fully open training framework for models up to 70 billion parameters by Q3 2026, designed to run on consumer GPU clusters. The stated goal is to reduce the cost of training a competitive 70B model to under $500,000 using commodity hardware. Whether this target is achievable remains an open technical question, but the project has attracted over 200 contributors since the essay was published.
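A rough sanity check on that cost target can be sketched with the common ~6·N·D approximation for total training FLOPs. Every input below (token count, per-GPU throughput, utilization, amortized hardware cost) is an assumed illustrative value, not a figure from Hotz’s announcement:

```python
# Back-of-the-envelope training cost for a 70B model on consumer GPUs.
# All inputs are assumptions for illustration.
N = 70e9            # parameters
D = 1.4e12          # training tokens (Chinchilla-style ~20 tokens/param)
flops = 6 * N * D   # standard ~6*N*D training-FLOPs approximation

gpu_flops = 165e12        # assumed peak bf16 throughput of one consumer GPU
utilization = 0.35        # assumed fraction of peak actually sustained
cost_per_gpu_hour = 0.50  # assumed amortized $/GPU-hour for owned hardware

gpu_hours = flops / (gpu_flops * utilization) / 3600
print(f"GPU-hours: {gpu_hours:,.0f}")
print(f"Compute cost: ${gpu_hours * cost_per_gpu_hour:,.0f}")
```

Under these particular assumptions the compute cost alone lands at roughly three times the $500,000 target, which illustrates why the goal hinges on efficiency gains (better utilization, fewer tokens, cheaper hardware) rather than straightforward scaling, and why it remains an open question.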

MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
